<jupyter_start><jupyter_text>If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it.<jupyter_code>#! pip install datasets transformers<jupyter_output><empty_output><jupyter_text>If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed.To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then execute the following cell and input your username and password:<jupyter_code>from huggingface_hub import notebook_login notebook_login()<jupyter_output><empty_output><jupyter_text>Then you need to install Git-LFS. Uncomment the following instructions:<jupyter_code># !apt install git-lfs<jupyter_output><empty_output><jupyter_text>Make sure your version of Transformers is at least 4.11.0 since the functionality was introduced in that version:<jupyter_code>import transformers print(transformers.__version__)<jupyter_output><empty_output><jupyter_text>You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling). We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.<jupyter_code>from transformers.utils import send_example_telemetry send_example_telemetry("language_modeling_notebook", framework="pytorch")<jupyter_output><empty_output><jupyter_text>Fine-tuning a language model In this notebook, we'll see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models on a language modeling task. We will cover two types of language modeling tasks:- Causal language modeling: the model has to predict the next token in the sentence (so the labels are the same as the inputs shifted to the right). To make sure the model does not cheat, it gets an attention mask that will prevent it from accessing the tokens after token i when trying to predict the token i+1 in the sentence.- Masked language modeling: the model has to predict some tokens that are masked in the input. It still has access to the whole sentence, so it can use the tokens before and after the masked tokens to predict their value.We will see how to easily load and preprocess the dataset for each one of those tasks, and how to use the `Trainer` API to fine-tune a model on it.A script version of this notebook that you can run directly on a distributed environment or on TPU is available in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples). Preparing the dataset For each of those tasks, we will use the [Wikitext 2]() dataset as an example. 
You can load it very easily with the 🤗 Datasets library.<jupyter_code>from datasets import load_dataset datasets = load_dataset('wikitext', 'wikitext-2-raw-v1')<jupyter_output>Reusing dataset wikitext (/home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91)<jupyter_text>You can replace the dataset above with any dataset hosted on [the hub](https://huggingface.co/datasets) or use your own files. Just uncomment the following cell and replace the paths with values that will lead to your files:<jupyter_code># datasets = load_dataset("text", data_files={"train": "path_to_train.txt", "validation": "path_to_validation.txt"})<jupyter_output><empty_output><jupyter_text>You can also load datasets from a csv or a JSON file, see the [full documentation](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) for more information. To access an actual element, you need to select a split first, then give an index:<jupyter_code>datasets["train"][10]<jupyter_output><empty_output><jupyter_text>To get a sense of what the data looks like, the following function will show some examples picked randomly from the dataset.<jupyter_code>from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) for column, typ in dataset.features.items(): if isinstance(typ, ClassLabel): df[column] = df[column].transform(lambda i: typ.names[i]) display(HTML(df.to_html())) show_random_elements(datasets["train"])<jupyter_output><empty_output><jupyter_text>As we can see, some of the texts are a full paragraph of a Wikipedia article while others are just titles or empty lines. Causal Language modeling For causal language modeling (CLM) we are going to take all the texts in our dataset and concatenate them after they are tokenized. Then we will split them into examples of a certain sequence length. This way the model will receive chunks of contiguous text that may look like:```part of text 1```or ```end of text 1 [BOS_TOKEN] beginning of text 2```depending on whether they span over several of the original texts in the dataset or not. The labels will be the same as the inputs, shifted to the left.We will use the [`distilgpt2`](https://huggingface.co/distilgpt2) model for this example. You can pick any of the checkpoints listed [here](https://huggingface.co/models?filter=causal-lm) instead:<jupyter_code>model_checkpoint = "distilgpt2"<jupyter_output><empty_output><jupyter_text>To tokenize all our texts with the same vocabulary that was used when training the model, we have to download a pretrained tokenizer. This is all done by the `AutoTokenizer` class:<jupyter_code>from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)<jupyter_output><empty_output><jupyter_text>We can now call the tokenizer on all our texts. This is very simple, using the [`map`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) method from the Datasets library. 
First we define a function that calls the tokenizer on our texts:<jupyter_code>def tokenize_function(examples): return tokenizer(examples["text"])<jupyter_output><empty_output><jupyter_text>Then we apply it to all the splits in our `datasets` object, using `batched=True` and 4 processes to speed up the preprocessing. We won't need the `text` column afterward, so we discard it.<jupyter_code>tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])<jupyter_output>Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-0a686d6f64cb210f.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-659bcb80cad0097c.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-7f22912475d34c88.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-b3566e2fe9c5c036.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cach[...]<jupyter_text>If we now look at an element of our datasets, we will see that the text has been replaced by the `input_ids` the model will need:<jupyter_code>tokenized_datasets["train"][1]<jupyter_output><empty_output><jupyter_text>Now for the harder part: we need to concatenate all our texts together then split the result into small chunks of a certain `block_size`. To do this, we will use the `map` method again, with the option `batched=True`. This option actually lets us change the number of examples in the datasets by returning a different number of examples than we got. This way, we can create our new samples from a batch of examples.First, we grab the maximum length our model was pretrained with. This might be a bit too big to fit in your GPU RAM, so here we take a bit less at just 128.<jupyter_code># block_size = tokenizer.model_max_length block_size = 128<jupyter_output><empty_output><jupyter_text>Then we write the preprocessing function that will group our texts:<jupyter_code>def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. total_length = (total_length // block_size) * block_size # Split by chunks of max_len. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result<jupyter_output><empty_output><jupyter_text>First note that we duplicate the inputs for our labels. This is because the models of the 🤗 Transformers library apply the shifting to the right, so we don't need to do it manually.Also note that by default, the `map` method will send a batch of 1,000 examples to be treated by the preprocessing function (a toy illustration of what `group_texts` does follows below). 
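As a quick aside (this cell is not part of the original notebook), here is a toy, hedged illustration of the concatenate-then-chunk logic inside `group_texts`; the batch and the tiny block size below are made up so the remainder-dropping is easy to see, while the notebook itself uses `block_size = 128` on tokenized Wikitext-2:

```python
# Toy illustration of what group_texts does to one batch of tokenized examples.
examples = {"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9]]}
demo_block_size = 4  # stand-in for the real block_size of 128

concatenated = sum(examples["input_ids"], [])  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
total_length = (len(concatenated) // demo_block_size) * demo_block_size  # 8: the trailing token is dropped
chunks = [concatenated[i : i + demo_block_size] for i in range(0, total_length, demo_block_size)]
print(chunks)  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```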
So here, we will drop the remainder to make the concatenated tokenized texts a multiple of `block_size` every 1,000 examples. You can adjust this behavior by passing a higher batch size (which will also be slower to process). You can also speed up the preprocessing by using multiprocessing:<jupyter_code>lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=1000, num_proc=4, )<jupyter_output>Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-da77bf362d4c6fa4.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-7d08a6d62516c9ff.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-a985b575c96ddae3.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-47fffef35acafddb.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cach[...]<jupyter_text>And we can check our datasets have changed: now the samples contain chunks of `block_size` contiguous tokens, potentially spanning over several of our original texts.<jupyter_code>tokenizer.decode(lm_datasets["train"][1]["input_ids"])<jupyter_output><empty_output><jupyter_text>Now that the data has been cleaned, we're ready to instantiate our `Trainer`. We will need a model:<jupyter_code>from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(model_checkpoint)<jupyter_output><empty_output><jupyter_text>And some `TrainingArguments`:<jupyter_code>from transformers import Trainer, TrainingArguments model_name = model_checkpoint.split("/")[-1] training_args = TrainingArguments( f"{model_name}-finetuned-wikitext2", evaluation_strategy = "epoch", learning_rate=2e-5, weight_decay=0.01, push_to_hub=True, )<jupyter_output><empty_output><jupyter_text>The last argument sets up everything so we can push the model to the [Hub](https://huggingface.co/models) regularly during training. Remove it if you didn't follow the installation steps at the top of the notebook. If you want to save your model locally under a name that is different from the name of the repository it will be pushed to, or if you want to push your model under an organization and not your namespace, use the `hub_model_id` argument to set the repo name (it needs to be the full name, including your namespace: for instance `"sgugger/gpt-finetuned-wikitext2"` or `"huggingface/gpt-finetuned-wikitext2"`). 
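As an illustration (this is not a cell from the original notebook), here is a hedged sketch of how the arguments above could be adapted to push under an organization; `my-org` is a placeholder namespace:

```python
# Hypothetical variant of the TrainingArguments above: push the checkpoint under an
# organization namespace instead of your personal one. "my-org" is a placeholder.
training_args_org = TrainingArguments(
    f"{model_name}-finetuned-wikitext2",  # local output directory
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    weight_decay=0.01,
    push_to_hub=True,
    hub_model_id=f"my-org/{model_name}-finetuned-wikitext2",  # full repo name on the Hub
)
```

The notebook itself continues with the default `training_args` defined above.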
We pass along all of those to the `Trainer` class:<jupyter_code>trainer = Trainer( model=model, args=training_args, train_dataset=lm_datasets["train"], eval_dataset=lm_datasets["validation"], )<jupyter_output><empty_output><jupyter_text>And we can train our model:<jupyter_code>trainer.train()<jupyter_output><empty_output><jupyter_text>Once the training is completed, we can evaluate our model and get its perplexity on the validation set like this:<jupyter_code>import math eval_results = trainer.evaluate() print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")<jupyter_output>Perplexity: 38.17<jupyter_text>You can now upload the result of the training to the Hub; just execute this instruction:<jupyter_code>trainer.push_to_hub()<jupyter_output><empty_output><jupyter_text>You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `"your-username/the-name-you-picked"` so for instance:```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("sgugger/my-awesome-model") ``` Masked language modeling For masked language modeling (MLM) we are going to use the same preprocessing as before for our dataset with one additional step: we will randomly mask some tokens (by replacing them with `[MASK]`) and the labels will be adjusted to only include the masked tokens (we don't have to predict the non-masked tokens).We will use the [`distilroberta-base`](https://huggingface.co/distilroberta-base) model for this example. You can pick any of the checkpoints listed [here](https://huggingface.co/models?filter=masked-lm) instead:<jupyter_code>model_checkpoint = "distilroberta-base"<jupyter_output><empty_output><jupyter_text>We can apply the same tokenization function as before, we just need to update our tokenizer to use the checkpoint we just picked:<jupyter_code>tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])<jupyter_output>Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-333e4baa6f280a66.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-23acd0930cc16da7.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-56ae8ad41a9fdf19.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-599a47a0e666ad65.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cach[...]<jupyter_text>And like before, we group texts together and chunk them into samples of length `block_size`. 
You can skip that step if your dataset is composed of individual sentences.<jupyter_code>lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=1000, num_proc=4, )<jupyter_output>Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-661796332aa2b576.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-e019d91824c225fd.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-b5875c725d0e5cb7.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cache-a8e3eeaa703ca023.arrow Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91/cach[...]<jupyter_text>The rest is very similar to what we had, with two exceptions. First we use a model suitable for masked LM:<jupyter_code>from transformers import AutoModelForMaskedLM model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)<jupyter_output>Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at distilroberta-base and are newly initialized: ['lm_head.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.<jupyter_text>We redefine our `TrainingArguments`:<jupyter_code>model_name = model_checkpoint.split("/")[-1] training_args = TrainingArguments( f"{model_name}-finetuned-wikitext2", evaluation_strategy = "epoch", learning_rate=2e-5, weight_decay=0.01, push_to_hub=True, )<jupyter_output><empty_output><jupyter_text>Like before, the last argument sets up everything so we can push the model to the [Hub](https://huggingface.co/models) regularly during training. Remove it if you didn't follow the installation steps at the top of the notebook. If you want to save your model locally under a name that is different from the name of the repository it will be pushed to, or if you want to push your model under an organization and not your namespace, use the `hub_model_id` argument to set the repo name (it needs to be the full name, including your namespace: for instance `"sgugger/bert-finetuned-wikitext2"` or `"huggingface/bert-finetuned-wikitext2"`). Finally, we use a special `data_collator`. The `data_collator` is a function that is responsible for taking the samples and batching them into tensors. In the previous example, we had nothing special to do, so we just used the default for this argument. Here we want to do the random masking. We could do it as a pre-processing step (like the tokenization) but then the tokens would always be masked the same way at each epoch. By doing this step inside the `data_collator`, we ensure this random masking is done in a new way each time we go over the data.To do this masking for us, the library provides a `DataCollatorForLanguageModeling`. 
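Before configuring it in the next cell, here is a small, hedged sanity check (not part of the original notebook) that the collator re-draws the mask on every call; it assumes the `tokenizer` and `lm_datasets` objects defined above and uses the same 15% masking probability as the cell below:

```python
# Minimal sketch: call the collator twice on the same two samples and observe that
# the masked positions (where labels != -100) differ between the two calls.
from transformers import DataCollatorForLanguageModeling

demo_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
features = [{"input_ids": lm_datasets["train"][i]["input_ids"]} for i in range(2)]

first_labels = demo_collator(features)["labels"]
second_labels = demo_collator(features)["labels"]
print((first_labels != -100).sum(), (second_labels != -100).sum())  # roughly 15% of tokens each time
print(bool(((first_labels != -100) == (second_labels != -100)).all()))  # almost certainly False: different positions
```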
We can adjust the probability of the masking:<jupyter_code>from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)<jupyter_output><empty_output><jupyter_text>Then we just have to pass everything to `Trainer` and begin training:<jupyter_code>trainer = Trainer( model=model, args=training_args, train_dataset=lm_datasets["train"], eval_dataset=lm_datasets["validation"], data_collator=data_collator, ) trainer.train()<jupyter_output><empty_output><jupyter_text>Like before, we can evaluate our model on the validation set. The perplexity is much lower than for the CLM objective because for the MLM objective, we only have to make predictions for the masked tokens (which represent 15% of the total here) while having access to the rest of the tokens. It's thus an easier task for the model.<jupyter_code>eval_results = trainer.evaluate() print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")<jupyter_output><empty_output><jupyter_text>You can now upload the result of the training to the Hub, just execute this instruction:<jupyter_code>trainer.push_to_hub()<jupyter_output><empty_output>
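As with the causal model, anyone can then load the fine-tuned checkpoint back. Here is a minimal, hedged sketch (not a cell from the original notebook) using the fill-mask pipeline; the repository name is a placeholder to replace with whatever `trainer.push_to_hub()` created for you:

```python
# Hypothetical usage of the pushed checkpoint; the repo id below is a placeholder, not a real model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="your-username/distilroberta-base-finetuned-wikitext2")
print(fill_mask("The capital of France is <mask>."))  # distilroberta-base uses <mask> as its mask token
```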
<jupyter_start><jupyter_text>If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it.<jupyter_code>#! pip install transformers datasets huggingface_hub<jupyter_output><empty_output><jupyter_text>If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed.To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then uncomment the following cell and input your token.<jupyter_code>from huggingface_hub import notebook_login notebook_login()<jupyter_output><empty_output><jupyter_text>Then you need to install Git-LFS and set up Git if you haven't already. Uncomment the following instructions and adapt with your name and email:<jupyter_code># !apt install git-lfs # !git config --global user.email "you@example.com" # !git config --global user.name "Your Name"<jupyter_output><empty_output><jupyter_text>Make sure your version of Transformers is at least 4.16.0 since the functionality was introduced in that version:<jupyter_code>import transformers print(transformers.__version__)<jupyter_output>4.21.0.dev0<jupyter_text>You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering). We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.<jupyter_code>from transformers.utils import send_example_telemetry send_example_telemetry("question_answering_notebook", framework="tensorflow")<jupyter_output><empty_output><jupyter_text>Fine-tuning a model on a question-answering task In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models on a question answering task, which is the task of extracting the answer to a question from a given context. We will see how to easily load a dataset for these kinds of tasks and use Keras to fine-tune a model on it. Note that this model **does not generate new text!** Instead, it selects a span of the input passage as the answer. This notebook is built to run on any question answering task with the same format as SQUAD (version 1 or 2), with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a version with a question answering head and a fast tokenizer (check on [this table](https://huggingface.co/transformers/index.html#bigtable) if this is the case). It might, however, need some small adjustments if you decide to use a different dataset than the one used here. Depending on your model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those three parameters, then the rest of the notebook should run smoothly:<jupyter_code># This flag is the difference between SQUAD v1 or 2 (if you're using another dataset, it indicates if impossible # answers are allowed or not). 
squad_v2 = False model_checkpoint = "distilbert-base-uncased" batch_size = 16<jupyter_output><empty_output><jupyter_text>Loading the dataset We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`.<jupyter_code>from datasets import load_dataset, load_metric<jupyter_output><empty_output><jupyter_text>For our example here, we'll use the [SQUAD dataset](https://rajpurkar.github.io/SQuAD-explorer/). The notebook should work with any question answering dataset in the 🤗 Datasets library. If you're using your own dataset in a JSON or CSV file (see the [Datasets documentation](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) on how to load them), it might need some adjustments to the column names.<jupyter_code>datasets = load_dataset("squad_v2" if squad_v2 else "squad")<jupyter_output>Reusing dataset squad (/home/matt/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)<jupyter_text>The `datasets` object itself is a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test set.<jupyter_code>datasets<jupyter_output><empty_output><jupyter_text>We can see the training, validation and test sets all have a column for the context, the question and the answers to those questions. To access an actual element, you need to select a split first, then give an index:<jupyter_code>datasets["train"][0]<jupyter_output><empty_output><jupyter_text>We can see the answers are indicated by their start position in the text (here at character 515) and their full text, which is a substring of the context as we mentioned above. To get a sense of what the data looks like, the following function will show some examples picked randomly from the dataset and decoded back to strings.<jupyter_code>from datasets import ClassLabel, Sequence import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len( dataset ), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset) - 1) while pick in picks: pick = random.randint(0, len(dataset) - 1) picks.append(pick) df = pd.DataFrame(dataset[picks]) for column, typ in dataset.features.items(): if isinstance(typ, ClassLabel): df[column] = df[column].transform(lambda i: typ.names[i]) elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel): df[column] = df[column].transform( lambda x: [typ.feature.names[i] for i in x] ) display(HTML(df.to_html())) show_random_elements(datasets["train"])<jupyter_output><empty_output><jupyter_text>Preprocessing the training data Before we can feed those texts to our model, we need to preprocess them. 
This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs that the model requires.To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:- we get a tokenizer that corresponds to the model architecture we want to use,- we download the vocabulary used when pretraining this specific checkpoint.That vocabulary will be cached, so it's not downloaded again the next time we run the cell.<jupyter_code>from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)<jupyter_output><empty_output><jupyter_text>The following assertion ensures that our tokenizer is a fast tokenizer (backed by Rust) from the 🤗 Tokenizers library. Those fast tokenizers are available for almost all models, and we will need some of the special features they have for our preprocessing.<jupyter_code>import transformers assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)<jupyter_output><empty_output><jupyter_text>You can check which type of models have a fast tokenizer available and which don't in the [big table of models](https://huggingface.co/transformers/index.html#bigtable). You can directly call this tokenizer on two sentences (one for the question, one for the context):<jupyter_code>tokenizer("What is your name?", "My name is Sylvain.")<jupyter_output><empty_output><jupyter_text>Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later); you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.Now one specific thing for the preprocessing in question answering is how to deal with very long documents. We usually truncate them in other tasks, when they are longer than the model maximum sentence length, but here, removing part of the context might result in losing the answer we are looking for. To deal with this, we will allow one (long) example in our dataset to give several input features, each of length shorter than the maximum length of the model (or the one we set as a hyper-parameter). 
Also, just in case the answer lies at the point we split a long context, we allow some overlap between the features we generate, controlled by the hyper-parameter `doc_stride`:<jupyter_code>max_length = 384 # The maximum length of a feature (question and context) doc_stride = 128 # The allowed overlap between two parts of the context when splitting is performed.<jupyter_output><empty_output><jupyter_text>Let's find one long example in our dataset:<jupyter_code>for i, example in enumerate(datasets["train"]): if len(tokenizer(example["question"], example["context"])["input_ids"]) > 384: break example = datasets["train"][i]<jupyter_output><empty_output><jupyter_text>Without any truncation, we get the following length for the input IDs:<jupyter_code>len(tokenizer(example["question"], example["context"])["input_ids"])<jupyter_output><empty_output><jupyter_text>Now, if we just truncate, we will lose information (and possibly the answer to our question):<jupyter_code>len( tokenizer( example["question"], example["context"], max_length=max_length, truncation="only_second", )["input_ids"] )<jupyter_output><empty_output><jupyter_text>Note that we never want to truncate the question, only the context, and so we use the `only_second` truncation method. Our tokenizer can automatically return a list of features capped by a certain maximum length, with the overlap we talked about above; we just have to tell it to do so with `return_overflowing_tokens=True` and by passing the stride:<jupyter_code>tokenized_example = tokenizer( example["question"], example["context"], max_length=max_length, truncation="only_second", return_overflowing_tokens=True, stride=doc_stride, )<jupyter_output><empty_output><jupyter_text>Now we don't have one list of `input_ids`, but several:<jupyter_code>[len(x) for x in tokenized_example["input_ids"]]<jupyter_output><empty_output><jupyter_text>And if we decode them, we can see the overlap:<jupyter_code>for x in tokenized_example["input_ids"][:2]: print(tokenizer.decode(x))<jupyter_output>[CLS] how many wins does the notre dame men's basketball team have? [SEP] the men's basketball team has over 1, 600 wins, one of only 12 schools who have reached that mark, and have appeared in 28 ncaa tournaments. former player austin carr holds the record for most points scored in a single game of the tournament with 61. although the team has never won the ncaa tournament, they were named by the helms athletic foundation as national champions twice. the team has orchestrated a number of upsets of number one ranked teams, the most notable of which was ending ucla's record 88 - game winning streak in 1974. the team has beaten an additional eight number - one teams, and those nine wins rank second, to ucla's 10, all - time in wins against the top team. the team plays in newly renovated purcell pavilion ( within the edmund p. joyce center ), which reopened for the beginning of the 2009 – 2010 season. the team is coached by mike brey, who, as of the 2014 – 15 season, his fifteenth at notr[...]<jupyter_text>It's going to take some work to properly label the answers here: we need to find in which of those features the answer actually is, and where exactly in that feature. The models we will use require the start and end positions of these answers in the tokens, so we will also need to map parts of the original context to some tokens. 
Thankfully, the tokenizer we're using can help us with that by returning an `offset_mapping`:<jupyter_code>tokenized_example = tokenizer( example["question"], example["context"], max_length=max_length, truncation="only_second", return_overflowing_tokens=True, return_offsets_mapping=True, stride=doc_stride, ) print(tokenized_example["offset_mapping"][0][:100])<jupyter_output>[(0, 0), (0, 3), (4, 8), (9, 13), (14, 18), (19, 22), (23, 28), (29, 33), (34, 37), (37, 38), (38, 39), (40, 50), (51, 55), (56, 60), (60, 61), (0, 0), (0, 3), (4, 7), (7, 8), (8, 9), (10, 20), (21, 25), (26, 29), (30, 34), (35, 36), (36, 37), (37, 40), (41, 45), (45, 46), (47, 50), (51, 53), (54, 58), (59, 61), (62, 69), (70, 73), (74, 78), (79, 86), (87, 91), (92, 96), (96, 97), (98, 101), (102, 106), (107, 115), (116, 118), (119, 121), (122, 126), (127, 138), (138, 139), (140, 146), (147, 153), (154, 160), (161, 165), (166, 171), (172, 175), (176, 182), (183, 186), (187, 191), (192, 198), (199, 205), (206, 208), (209, 210), (211, 217), (218, 222), (223, 225), (226, 229), (230, 240), (241, 245), (246, 248), (248, 249), (250, 258), (259, 262), (263, 267), (268, 271), (272, 277), (278, 281), (282, 285), (286, 290), (291, 301), (301, 302), (303, 307), (308, 312), (313, 318), (319, 321), (322, 325), (326, 330), (330, 331), (332, 340), (341, 351), (352, 354), (355, 363), (364, 373), (374,[...]<jupyter_text>This gives the corresponding start and end character in the original text for each token in our input IDs. The very first token (`[CLS]`) has (0, 0) because it doesn't correspond to any part of the question/answer, then the second token is the same as the characters 0 to 3 of the question:<jupyter_code>first_token_id = tokenized_example["input_ids"][0][1] offsets = tokenized_example["offset_mapping"][0][1] print( tokenizer.convert_ids_to_tokens([first_token_id])[0], example["question"][offsets[0] : offsets[1]], )<jupyter_output>how How<jupyter_text>So we can use this mapping to find the position of the start and end tokens of our answer in a given feature. 
We just have to distinguish which parts of the offsets correspond to the question and which part correspond to the context, this is where the `sequence_ids` method of our `tokenized_example` can be useful:<jupyter_code>sequence_ids = tokenized_example.sequence_ids() print(sequence_ids)<jupyter_output>[None, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, None, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, [...]<jupyter_text>It returns `None` for the special tokens, then 0 or 1 depending on whether the corresponding token comes from the first sentence past (the question) or the second (the context). Now with all of this, we can find the first and last token of the answer in one of our input feature (or if the answer is not in this feature):<jupyter_code>answers = example["answers"] start_char = answers["answer_start"][0] end_char = start_char + len(answers["text"][0]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != 1: token_start_index += 1 # End token index of the current span in the text. token_end_index = len(tokenized_example["input_ids"][0]) - 1 while sequence_ids[token_end_index] != 1: token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). offsets = tokenized_example["offset_mapping"][0] if ( offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char ): # Move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). while ( token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char ): token_start_index += 1 start_position = token_start_index - 1 while offsets[token_end_index][1] >= end_char: token_end_index -= 1 end_position = token_end_index + 1 print(start_position, end_position) else: print("The answer is not in this feature.")<jupyter_output>23 26<jupyter_text>And we can double check that it is indeed the correct answer:<jupyter_code>print( tokenizer.decode( tokenized_example["input_ids"][0][start_position : end_position + 1] ) ) print(answers["text"][0])<jupyter_output>over 1, 600 over 1,600<jupyter_text>For this notebook to work with any kind of model, we need to account for the special case where the model expects padding on the left (in which case we switch the order of the question and the context):<jupyter_code>pad_on_right = tokenizer.padding_side == "right"<jupyter_output><empty_output><jupyter_text>Now let's put everything together in one function we will apply to our training set. 
In the case of impossible answers (the answer is in another feature given by an example with a long context), we set the cls index for both the start and end position. We could also simply discard those examples from the training set if the flag `allow_impossible_answers` is `False`. Since the preprocessing is already complex enough as it is, we've kept it simple for this part.<jupyter_code>def prepare_train_features(examples): # Tokenize our examples with truncation and padding, but keep the overflows using a stride. This results # in one example possibly giving several features when a context is long, each of those features having a # context that overlaps a bit with the context of the previous feature. tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # The offset mappings will give us a map from token to character position in the original context. This will # help us compute the start_positions and end_positions. offset_mapping = tokenized_examples.pop("offset_mapping") # Let's label those examples! tokenized_examples["start_positions"] = [] tokenized_examples["end_positions"] = [] for i, offsets in enumerate(offset_mapping): # We will label impossible answers with the index of the CLS token. input_ids = tokenized_examples["input_ids"][i] cls_index = input_ids.index(tokenizer.cls_token_id) # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] answers = examples["answers"][sample_index] # If no answers are given, set the cls_index as answer. if len(answers["answer_start"]) == 0: tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Start/end character index of the answer in the text. start_char = answers["answer_start"][0] end_char = start_char + len(answers["text"][0]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != (1 if pad_on_right else 0): token_start_index += 1 # End token index of the current span in the text. token_end_index = len(input_ids) - 1 while sequence_ids[token_end_index] != (1 if pad_on_right else 0): token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). if not ( offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char ): tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Otherwise move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). 
while ( token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char ): token_start_index += 1 tokenized_examples["start_positions"].append(token_start_index - 1) while offsets[token_end_index][1] >= end_char: token_end_index -= 1 tokenized_examples["end_positions"].append(token_end_index + 1) return tokenized_examples<jupyter_output><empty_output><jupyter_text>This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:<jupyter_code>features = prepare_train_features(datasets["train"][:5])<jupyter_output><empty_output><jupyter_text>To apply this function on all the sentences (or pairs of sentences) in our dataset, we just use the `map` method of the `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command. Since our preprocessing changes the number of samples, we need to remove the old columns when applying it.<jupyter_code>tokenized_datasets = datasets.map( prepare_train_features, batched=True, remove_columns=datasets["train"].column_names )<jupyter_output>Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-ad89cfc588b4b5ad.arrow Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-123d7bb970edffa2.arrow<jupyter_text>Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus requires to not use the cache data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files, you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently. Fine-tuning the model Now that our data is ready for training, we can download the pretrained model and fine-tune it. Since our task is question answering, we use the `TFAutoModelForQuestionAnswering` class. 
Like with the tokenizer, the `from_pretrained` method will download and cache the model for us:<jupyter_code>from transformers import TFAutoModelForQuestionAnswering model = TFAutoModelForQuestionAnswering.from_pretrained(model_checkpoint)<jupyter_output>2022-07-21 15:10:11.409257: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-21 15:10:11.415291: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-21 15:10:11.415996: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-07-21 15:10:11.417100: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags[...]<jupyter_text>The warning is telling us we are throwing away some weights (the `vocab_transform` and `vocab_layer_norm` layers) and randomly initializing some others (the `pre_classifier` and `classifier` layers). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do. To train our model, we will need to define a few more things. The first two arguments are there to set up everything so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove both of them if you didn't follow the installation steps at the top of the notebook, otherwise you can change the value of `push_to_hub_model_id` to something you would prefer.We also tweak the learning rate, use the `batch_size` defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay.<jupyter_code>model_name = model_checkpoint.split("/")[-1] push_to_hub_model_id = f"{model_name}-finetuned-squad" learning_rate = 2e-5 num_train_epochs = 2 weight_decay = 0.01<jupyter_output><empty_output><jupyter_text>Next, we convert our datasets to `tf.data.Dataset`, which Keras understands natively. There are two ways to do this - we can use the slightly more low-level [`Dataset.to_tf_dataset()`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_tf_dataset) method, or we can use [`Model.prepare_tf_dataset()`](https://huggingface.co/docs/transformers/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset). The main difference between these two is that the `Model` method can inspect the model to determine which column names it can use as input, which means you don't need to specify them yourself. 
It also supplies a default data collator that will work fine for us, as our samples are already padded to the same length and ready to go.<jupyter_code>train_set = model.prepare_tf_dataset( tokenized_datasets["train"], shuffle=True, batch_size=batch_size, ) validation_set = model.prepare_tf_dataset( tokenized_datasets["validation"], shuffle=False, batch_size=batch_size, )<jupyter_output><empty_output><jupyter_text>Next, we can create an optimizer and specify a loss function. The `create_optimizer` function gives us a very solid `AdamW` optimizer with weight decay and a learning rate schedule, but it needs us to compute the number of training steps to build that schedule.<jupyter_code>from transformers import create_optimizer total_train_steps = len(train_set) * num_train_epochs optimizer, schedule = create_optimizer( init_lr=learning_rate, num_warmup_steps=0, num_train_steps=total_train_steps )<jupyter_output><empty_output><jupyter_text>Note that most Transformers models compute loss internally, so we actually don't have to specify anything there! You can of course set your own loss function if you want, but by default our models will choose the 'obvious' loss that matches their task, such as cross-entropy in the case of language modelling. The built-in loss will also correctly handle things like masking the loss on padding tokens, or unlabelled tokens in the case of masked language modelling, so we recommend using it unless you're an advanced user!In addition, because the outputs and loss for this model class are quite straightforward, we can use built-in Keras metrics - these are liable to misbehave in other contexts (for example, they don't know about the masking in masked language modelling) but work well here.We can also use `jit_compile` to compile the model with [XLA](https://www.tensorflow.org/xla). In other cases, we should be careful about that - if our inputs might have variable sequence lengths, we may end up having to do a new XLA compilation for each possible length, because XLA compilation expects a static input shape! In this notebook, however, we have padded all examples to exactly the same length. This makes it perfect for XLA, which will give us a nice performance boost.<jupyter_code>import tensorflow as tf model.compile(optimizer=optimizer, jit_compile=True, metrics=["accuracy"])<jupyter_output>No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.<jupyter_text>We will evaluate our model and compute metrics in the next section (this is a very long operation, so we will only compute the evaluation loss during training). For now, let's just train our model. We can also add a callback to sync up our model with the Hub - this allows us to resume training from other machines and even test the model's inference quality midway through training! 
If you don't want to do this, simply remove the callbacks argument in the call to `fit()`.<jupyter_code>from transformers.keras_callbacks import PushToHubCallback from tensorflow.keras.callbacks import TensorBoard push_to_hub_callback = PushToHubCallback( output_dir="./qa_model_save", tokenizer=tokenizer, hub_model_id=push_to_hub_model_id, ) tensorboard_callback = TensorBoard(log_dir="./qa_model_save/logs") callbacks = [tensorboard_callback, push_to_hub_callback] model.fit( train_set, validation_data=validation_set, epochs=num_train_epochs, callbacks=callbacks, )<jupyter_output>/home/matt/PycharmProjects/notebooks/examples/qa_model_save is already a clone of https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad. Make sure you pull the latest changes with `repo.git_pull()`.<jupyter_text>Evaluation Evaluating our model will require a bit more work, as we will need to map the predictions of our model back to parts of the context. The model itself predicts logits for the start and end position of our answers: if we take a batch from our validation dataset, here is the output our model gives us:<jupyter_code>batch = next(iter(validation_set)) output = model.predict_on_batch(batch) output.keys()<jupyter_output><empty_output><jupyter_text>The output of the model is a dict-like object that contains the loss (since we provided labels), and the start and end logits. We won't need the loss for our predictions, so let's have a look at the logits:<jupyter_code>output.start_logits.shape, output.end_logits.shape<jupyter_output><empty_output><jupyter_text>We have one logit for each feature and each token. The most obvious way to predict an answer for each feature is to take the index for the maximum of the start logits as a start position and the index of the maximum of the end logits as an end position.<jupyter_code>import numpy as np np.argmax(output.start_logits, -1), np.argmax(output.end_logits, -1)<jupyter_output><empty_output><jupyter_text>This will work great in a lot of cases, but what if this prediction gives us something impossible: the start position could be greater than the end position, or point to a span of text in the question instead of the answer. In that case, we might want to look at the second best prediction to see if it gives a possible answer and select that instead.However, picking the second best answer is not as easy as picking the best one: is it the second best index in the start logits with the best index in the end logits? Or the best index in the start logits with the second best index in the end logits? And if that second best answer is not possible either, it gets even trickier for the third best answer.To classify our answers, we will use the score obtained by adding the start and end logits. We won't try to order all the possible answers; instead we limit ourselves to a number of candidates given by a hyper-parameter we call `n_best_size`. We'll pick the best indices in the start and end logits and gather all the answers this predicts. After checking if each one is valid, we will sort them by their score and keep the best one. 
Here is how we would do this on the first feature in the batch:<jupyter_code>n_best_size = 20 import numpy as np start_logits = output.start_logits[0] end_logits = output.end_logits[0] # Gather the indices the best start/end logits: start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist() end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist() valid_answers = [] for start_index in start_indexes: for end_index in end_indexes: if ( start_index <= end_index ): # We need to refine that test to check the answer is inside the context valid_answers.append( { "score": start_logits[start_index] + end_logits[end_index], "text": "", # We need to find a way to get back the original substring corresponding to the answer in the context } )<jupyter_output><empty_output><jupyter_text>And then we can sort the `valid_answers` according to their `score` and only keep the best one. The only point left is how to check a given span is inside the context (and not the question) and how to get back the text inside. To do this, we need to add two things to our validation features:- the ID of the example that generated the feature (since each example can generate several features, as seen before);- the offset mapping that will give us a map from token indices to character positions in the context.That's why we will re-process the validation set with the following function, slightly different from `prepare_train_features`:<jupyter_code>def prepare_validation_features(examples): # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results # in one example possible giving several features when a context is long, each of those features having a # context that overlaps a bit the context of the previous feature. tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # We keep the example_id that gave us this feature and we will store the offset mappings. tokenized_examples["example_id"] = [] for i in range(len(tokenized_examples["input_ids"])): # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token # position is part of the context or not. 
tokenized_examples["offset_mapping"][i] = [ (o if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples<jupyter_output><empty_output><jupyter_text>And like before, we can apply that function to our validation set easily:<jupyter_code>validation_features = datasets["validation"].map( prepare_validation_features, batched=True, remove_columns=datasets["validation"].column_names, )<jupyter_output>Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-fb6eddd5466a5d8b.arrow<jupyter_text>And turn the dataset into a `tf.data.Dataset` as before.<jupyter_code>validation_dataset = model.prepare_tf_dataset( validation_features, shuffle=False, batch_size=batch_size, )<jupyter_output><empty_output><jupyter_text>Now we can grab the predictions for all features by using the `model.predict` method:<jupyter_code>raw_predictions = model.predict(validation_dataset) raw_predictions<jupyter_output><empty_output><jupyter_text>We can now refine the test we had before: since we set `None` in the offset mappings when it corresponds to a part of the question, it's easy to check if an answer is fully inside the context. We also eliminate very long answers from our considerations (with a hyper-parameter we can tune):<jupyter_code>max_answer_length = 30 start_logits = output.start_logits[0] end_logits = output.end_logits[0] offset_mapping = validation_features[0]["offset_mapping"] # The first feature comes from the first example. For the more general case, we will need to match the example_id to # an example index context = datasets["validation"][0]["context"] # Gather the indices the best start/end logits: start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist() end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist() valid_answers = [] for start_index in start_indexes: for end_index in end_indexes: # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond # to part of the input_ids that are not in the context. if ( start_index >= len(offset_mapping) or end_index >= len(offset_mapping) or offset_mapping[start_index] is None or offset_mapping[end_index] is None ): continue # Don't consider answers with a length that is either < 0 or > max_answer_length. if end_index < start_index or end_index - start_index + 1 > max_answer_length: continue start_char = offset_mapping[start_index][0] end_char = offset_mapping[end_index][1] valid_answers.append( { "score": start_logits[start_index] + end_logits[end_index], "text": context[start_char:end_char], } ) valid_answers = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[ :n_best_size ] valid_answers<jupyter_output><empty_output><jupyter_text>We can compare to the actual ground-truth answer:<jupyter_code>datasets["validation"][0]["answers"]<jupyter_output><empty_output><jupyter_text>Our model's most likely answer is correct!As we mentioned in the code above, this was easy on the first feature because we knew it came from the first example. For the other features, we will need a map between examples and their corresponding features.
Also, since one example can give several features, we will need to gather together all the answers in all the features generated by a given example, then pick the best one. The following code builds a map from example index to its corresponding feature indices:<jupyter_code>import collections examples = datasets["validation"] features = validation_features example_id_to_index = {k: i for i, k in enumerate(examples["id"])} features_per_example = collections.defaultdict(list) for i, feature in enumerate(features): features_per_example[example_id_to_index[feature["example_id"]]].append(i)<jupyter_output><empty_output><jupyter_text>We're almost ready for our post-processing function. The last bit to deal with is the impossible answer (when `squad_v2 = True`). The code above only keeps answers that are inside the context; we also need to grab the score for the impossible answer (which has start and end indices corresponding to the index of the CLS token). When one example gives several features, we have to predict the impossible answer when all the features give a high score to the impossible answer (since one feature could predict the impossible answer just because the answer isn't in the part of the context it has access to), which is why the score of the impossible answer for one example is the *minimum* of the scores for the impossible answer in each feature generated by the example.We then predict the impossible answer when that score is greater than the score of the best non-impossible answer. All combined together, this gives us this post-processing function:<jupyter_code>from tqdm.auto import tqdm def postprocess_qa_predictions( examples, features, all_start_logits, all_end_logits, n_best_size=20, max_answer_length=30, ): # Build a map from example to its corresponding features. example_id_to_index = {k: i for i, k in enumerate(examples["id"])} features_per_example = collections.defaultdict(list) for i, feature in enumerate(features): features_per_example[example_id_to_index[feature["example_id"]]].append(i) # The dictionaries we have to fill. predictions = collections.OrderedDict() # Logging. print( f"Post-processing {len(examples)} example predictions split into {len(features)} features." ) # Let's loop over all the examples! for example_index, example in enumerate(tqdm(examples)): # Those are the indices of the features associated with the current example. feature_indices = features_per_example[example_index] min_null_score = None # Only used if squad_v2 is True. valid_answers = [] context = example["context"] # Looping through all the features associated with the current example. for feature_index in feature_indices: # We grab the predictions of the model for this feature. start_logits = all_start_logits[feature_index] end_logits = all_end_logits[feature_index] # This is what will allow us to map some of the positions in our logits to spans of text in the original # context. offset_mapping = features[feature_index]["offset_mapping"] # Update minimum null prediction: we keep the lowest null score across the features of this example. cls_index = features[feature_index]["input_ids"].index( tokenizer.cls_token_id ) feature_null_score = start_logits[cls_index] + end_logits[cls_index] if min_null_score is None or min_null_score > feature_null_score: min_null_score = feature_null_score # Go through all possibilities for the `n_best_size` greatest start and end logits.
start_indexes = np.argsort(start_logits)[ -1 : -n_best_size - 1 : -1 ].tolist() end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist() for start_index in start_indexes: for end_index in end_indexes: # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond # to part of the input_ids that are not in the context. if ( start_index >= len(offset_mapping) or end_index >= len(offset_mapping) or not offset_mapping[start_index] or not offset_mapping[end_index] ): continue # Don't consider answers with a length that is either < 0 or > max_answer_length. if ( end_index < start_index or end_index - start_index + 1 > max_answer_length ): continue start_char = offset_mapping[start_index][0] end_char = offset_mapping[end_index][1] valid_answers.append( { "score": start_logits[start_index] + end_logits[end_index], "text": context[start_char:end_char], } ) if len(valid_answers) > 0: best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[ 0 ] else: # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid # failure. best_answer = {"text": "", "score": 0.0} # Let's pick our final answer: the best one or the null answer (only for squad_v2) if not squad_v2: predictions[example["id"]] = best_answer["text"] else: answer = ( best_answer["text"] if best_answer["score"] > min_null_score else "" ) predictions[example["id"]] = answer return predictions<jupyter_output><empty_output><jupyter_text>And we can apply our post-processing function to our raw predictions:<jupyter_code>final_predictions = postprocess_qa_predictions( datasets["validation"], validation_features, raw_predictions["start_logits"], raw_predictions["end_logits"], )<jupyter_output>Post-processing 10570 example predictions split into 10784 features.<jupyter_text>Then we can load the metric from the datasets library.<jupyter_code>metric = load_metric("squad_v2" if squad_v2 else "squad")<jupyter_output><empty_output><jupyter_text>Then we can call compute on it. We just need to format predictions and labels a bit as it expects a list of dictionaries and not one big dictionary. In the case of squad_v2, we also have to set a `no_answer_probability` argument (which we set to 0.0 here as we have already set the answer to empty if we picked it).<jupyter_code>if squad_v2: formatted_predictions = [ {"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items() ] else: formatted_predictions = [ {"id": k, "prediction_text": v} for k, v in final_predictions.items() ] references = [ {"id": ex["id"], "answers": ex["answers"]} for ex in datasets["validation"] ] metric.compute(predictions=formatted_predictions, references=references)<jupyter_output><empty_output><jupyter_text>If you ran the callback above, you can now share this model with all your friends, family or favorite pets: they can all load it with the identifier `"your-username/the-name-you-picked"` so for instance:```pythonfrom transformers import TFAutoModelForQuestionAnsweringmodel = TFAutoModelForQuestionAnswering.from_pretrained("your-username/my-awesome-model")``` Inference Now we've trained our model, let's see how we could load it and use it to answer questions in future! First, let's load it from the hub. 
This means we can resume the code from here without needing to rerun everything above every time.<jupyter_code>from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering # You can, of course, use your own username and model name here # once you've pushed your model using the code above! checkpoint = "Rocketknight1/distilbert-base-uncased-finetuned-squad" model = TFAutoModelForQuestionAnswering.from_pretrained(checkpoint) tokenizer = AutoTokenizer.from_pretrained(checkpoint)<jupyter_output><empty_output><jupyter_text>Now, let's get some sample text and ask a question. Feel free to substitute your own text and question!<jupyter_code>context = """The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.""" question = "What kind of mechanisms is Transformer based on?" inputs = tokenizer([context], [question], return_tensors="np") outputs = model(inputs)<jupyter_output><empty_output><jupyter_text>The outputs are logits, so let's use argmax to find the largest logit, which represents the model's best guess for the right answer.<jupyter_code>start_position = np.argmax(outputs.start_logits[0]) end_position = np.argmax(outputs.end_logits[0]) print(start_position) print(end_position) # Extract this substring from the inputs answer = inputs["input_ids"][0, start_position: end_position + 1] print(answer)<jupyter_output>64 65 [ 3086 10595]<jupyter_text>Well, these are definitely tokens. Let's decode them back to text:<jupyter_code>tokenizer.decode(answer)<jupyter_output><empty_output><jupyter_text>Pipeline API An alternative way to quickly perform inference with any model on the hub is to use the [Pipeline API](https://huggingface.co/docs/transformers/main_classes/pipelines), which abstracts away all the steps we did manually above. It will perform the preprocessing, forward pass and postprocessing all in a single object.Let's showcase this for our trained model:<jupyter_code>from transformers import pipeline question_answerer = pipeline("question-answering", "Rocketknight1/distilbert-base-uncased-finetuned-squad", framework="tf") question_answerer(context=context, question=question)<jupyter_output><empty_output>
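<jupyter_text>The pipeline can also handle several questions in one call, which is handy when you want to ask multiple things about the same document. Here is a small illustrative sketch (the second question below is made up for the example, and the exact output format can vary slightly between `transformers` versions):<jupyter_code>questions = [
    "What kind of mechanisms is Transformer based on?",
    "What do the best performing models connect the encoder and decoder with?",
]

# One context per question; here we simply reuse the same abstract for both.
results = question_answerer(question=questions, context=[context] * len(questions))

for q, r in zip(questions, results):
    print(f"{q} -> {r['answer']} (score: {r['score']:.3f})")<jupyter_output><empty_output>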
<jupyter_start><jupyter_text>Probabilistic Time Series Forecasting with 🤗 Transformers IntroductionTime series forecasting is an essential scientific and business problem and as such has also seen a lot of innovation recently with the use of [deep learning based](https://dl.acm.org/doi/abs/10.1145/3533382) models in addition to the [classical methods](https://otexts.com/fpp3/). An important difference between classical methods like ARIMA and novel deep learning methods is the following. Probabilistic ForecastingTypically, classical methods are fitted on each time series in a dataset individually. These are often referred to as "single" or "local" methods. However, when dealing with a large amount of time series for some applications, it is beneficial to train a "global" model on all available time series, which enables the model to learn latent representations from many different sources.Some classical methods are point-valued (meaning, they just output a single value per time step) and models are trained by minimizing an L2 or L1 type of loss with respect to the ground truth data. However, since forecasts are often used in some real-world decision making pipeline, even with humans in the loop, it is much more beneficial to provide the uncertainties of predictions. This is also called "probabilistic forecasting", as opposed to "point forecasting". This entails modeling a probabilistic distribution, from which one can sample.So in short, rather than training local point forecasting models, we hope to train **global probabilistic** models. Deep learning is a great fit for this, as neural networks can learn representations from several related time series as well as model the uncertainty of the data.It is common in the probabilistic setting to learn the future parameters of some chosen parametric distribution, like Gaussian or Student-T; or learn the conditional quantile function; or use the framework of Conformal Prediction adapted to the time series setting. The choice of method does not affect the modeling aspect and thus can be typically thought of as yet another hyperparameter. One can always turn a probabilistic model into a point-forecasting model, by taking empirical means or medians. The Time Series TransformerIn terms of modeling time series data which are sequential in nature, as one can imagine, researchers have come up with models which use Recurrent Neural Networks (RNN) like LSTM or GRU, or Convolutional Networks (CNN), and more recently Transformer based methods which fit naturally to the time series forecasting setting.In this blog post, we're going to leverage the vanilla Transformer [(Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762) for the **univariate** probabilistic forecasting task (i.e. predicting each time series' 1-d distribution individually). The Encoder-Decoder Transformer is a natural choice for forecasting as it encapsulates several inductive biases nicely.To begin with, the use of an Encoder-Decoder architecture is helpful at inference time where typically for some logged data we wish to forecast some prediction steps into the future. This can be thought of as analogous to the text generation task where given some context, we sample the next token and pass it back into the decoder (also called "autoregressive generation"). Similarly here we can also, given some distribution type, sample from it to provide forecasts up until our desired prediction horizon. 
This is known as Greedy Sampling/Search and there is a great blog post about it [here](https://huggingface.co/blog/how-to-generate) for the NLP setting.Secondly, a Transformer helps us to train on time series data which might contain thousands of time points. It might not be feasible to input *all* the history of a time series at once to the model, due to the time- and memory constraints of the attention mechanism. Thus, one can consider some appropriate context window and sample this window and the subsequent prediction length sized window from the training data when constructing batches for stochastic gradient descent (SGD). The context sized window can be passed to the encoder and the prediction window to a *causal-masked* decoder. This means that the decoder can only look at previous time steps when learning the next value. This is equivalent to how one would train a vanilla Transformer for machine translation, referred to as "teacher forcing".Another benefit of Transformers over the other architectures is that we can incorporate missing values (which are common in the time series setting) as an additional mask to the encoder or decoder and still train without resorting to in-filling or imputation. This is equivalent to the `attention_mask` of models like BERT and GPT-2 in the Transformers library, to not include padding tokens in the computation of the attention matrix.A drawback of the Transformer architecture is the limit to the sizes of the context and prediction windows because of the quadratic compute and memory requirements of the vanilla Transformer, see [Tay et al., 2020](https://arxiv.org/abs/2009.06732). Additionally, since the Transformer is a powerful architecture, it might overfit or learn spurious correlations much more easily compared to other [methods](https://openreview.net/pdf?id=D7YBmfX_VQy).The 🤗 Transformers library comes with a vanilla probabilistic time series Transformer model, simply called the [Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer). In the sections below, we'll show how to train such a model on a custom dataset. Set-up EnvironmentFirst, let's install the necessary libraries: 🤗 Transformers, 🤗 Datasets, 🤗 Evaluate, 🤗 Accelerate and [GluonTS](https://github.com/awslabs/gluonts).As we will show, GluonTS will be used for transforming the data to create features as well as for creating appropriate training, validation and test batches.<jupyter_code>!pip install -q transformers !pip install -q datasets !pip install -q evaluate !pip install -q accelerate !pip install -q gluonts ujson<jupyter_output> |████████████████████████████████| 1.0 MB 29.8 MB/s  |████████████████████████████████| 52 kB 1.5 MB/s [?25h<jupyter_text>We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.<jupyter_code>from transformers.utils import send_example_telemetry send_example_telemetry("time_series_transformers_notebook", framework="pytorch")<jupyter_output><empty_output><jupyter_text>Load DatasetIn this blog post, we'll use the `tourism_monthly` dataset, which is available on the [Hugging Face Hub](https://huggingface.co/datasets/monash_tsf). 
This dataset contains monthly tourism volumes for 366 regions in Australia.This dataset is part of the [Monash Time Series Forecasting](https://forecastingdata.org/) repository, a collection of time series datasets from a number of domains. It can be viewed as the GLUE benchmark of time series forecasting.<jupyter_code>from datasets import load_dataset dataset = load_dataset("monash_tsf", "tourism_monthly")<jupyter_output><empty_output><jupyter_text>As can be seen, the dataset contains 3 splits: train, validation and test.<jupyter_code>dataset<jupyter_output><empty_output><jupyter_text>Each example contains a few keys, of which `start` and `target` are the most important ones. Let us have a look at the first time series in the dataset:<jupyter_code>train_example = dataset["train"][0] train_example.keys()<jupyter_output><empty_output><jupyter_text>The `start` simply indicates the start of the time series (as a datetime), and the `target` contains the actual values of the time series.The `start` will be useful to add time related features to the time series values, as extra input to the model (such as "month of year"). Since we know the frequency of the data is `monthly`, we know for instance that the second value has the timestamp `1979-02-01`, etc.<jupyter_code>print(train_example["start"]) print(train_example["target"])<jupyter_output>1979-01-01 00:00:00 [1149.8699951171875, 1053.8001708984375, 1388.8797607421875, 1783.3702392578125, 1921.025146484375, 2704.94482421875, 4184.41357421875, 4148.35400390625, 2620.72509765625, 1650.300048828125, 1115.9200439453125, 1370.6251220703125, 1096.31494140625, 978.4600219726562, 1294.68505859375, 1480.465087890625, 1748.865234375, 2216.920166015625, 4690.5185546875, 4682.8642578125, 2459.579833984375, 1484.4901123046875, 1028.985107421875, 1109.3648681640625, 960.8751220703125, 896.35009765625, 1118.6551513671875, 1619.9949951171875, 1847.994873046875, 2367.044921875, 4991.16015625, 4772.9443359375, 2894.678466796875, 1860.4801025390625, 1185.150146484375, 1313.659912109375, 1160.9150390625, 1061.5048828125, 1301.77001953125, 1794.3797607421875, 2106.455078125, 2789.034912109375, 4917.8466796875, 4994.4833984375, 3016.754150390625, 1941.505126953125, 1234.135009765625, 1378.72021484375, 1182.9749755859375, 1081.6600341796875, 1424.110107421875, 1774.5350341796875, 2115.42016601[...]<jupyter_text>The validation set contains the same data as the training set, just for a `prediction_length` longer amount of time. 
This allows us to validate the model's predictions against the ground truth.The test set is again one `prediction_length` longer data compared to the validation set (or some multiple of `prediction_length` longer data compared to the training set for testing on multiple rolling windows).<jupyter_code>validation_example = dataset["validation"][0] validation_example.keys()<jupyter_output><empty_output><jupyter_text>The initial values are exactly the same as the corresponding training example:<jupyter_code>print(validation_example["start"]) print(validation_example["target"])<jupyter_output>1979-01-01 00:00:00 [1149.8699951171875, 1053.8001708984375, 1388.8797607421875, 1783.3702392578125, 1921.025146484375, 2704.94482421875, 4184.41357421875, 4148.35400390625, 2620.72509765625, 1650.300048828125, 1115.9200439453125, 1370.6251220703125, 1096.31494140625, 978.4600219726562, 1294.68505859375, 1480.465087890625, 1748.865234375, 2216.920166015625, 4690.5185546875, 4682.8642578125, 2459.579833984375, 1484.4901123046875, 1028.985107421875, 1109.3648681640625, 960.8751220703125, 896.35009765625, 1118.6551513671875, 1619.9949951171875, 1847.994873046875, 2367.044921875, 4991.16015625, 4772.9443359375, 2894.678466796875, 1860.4801025390625, 1185.150146484375, 1313.659912109375, 1160.9150390625, 1061.5048828125, 1301.77001953125, 1794.3797607421875, 2106.455078125, 2789.034912109375, 4917.8466796875, 4994.4833984375, 3016.754150390625, 1941.505126953125, 1234.135009765625, 1378.72021484375, 1182.9749755859375, 1081.6600341796875, 1424.110107421875, 1774.5350341796875, 2115.42016601[...]<jupyter_text>However, this example has `prediction_length=24` additional values compared to the training example. Let us verify it.<jupyter_code>freq = "1M" prediction_length = 24 assert len(train_example["target"]) + prediction_length == len( validation_example["target"] )<jupyter_output><empty_output><jupyter_text>Let's visualize this:<jupyter_code>import matplotlib.pyplot as plt figure, axes = plt.subplots() axes.plot(train_example["target"], color="blue") axes.plot(validation_example["target"], color="red", alpha=0.5) plt.show()<jupyter_output><empty_output><jupyter_text>Let's split up the data:<jupyter_code>train_dataset = dataset["train"] test_dataset = dataset["test"]<jupyter_output><empty_output><jupyter_text>Update `start` to `pd.Period`The first thing we'll do is convert the `start` feature of each time series to a pandas `Period` index using the data's `freq`:<jupyter_code>from functools import lru_cache import pandas as pd import numpy as np @lru_cache(10_000) def convert_to_pandas_period(date, freq): return pd.Period(date, freq) def transform_start_field(batch, freq): batch["start"] = [convert_to_pandas_period(date, freq) for date in batch["start"]] return batch<jupyter_output><empty_output><jupyter_text>We now use `datasets`' [`set_transform`](https://huggingface.co/docs/datasets/v2.7.0/en/package_reference/main_classesdatasets.Dataset.set_transform) functionality to do this on-the-fly in place:<jupyter_code>from functools import partial train_dataset.set_transform(partial(transform_start_field, freq=freq)) test_dataset.set_transform(partial(transform_start_field, freq=freq))<jupyter_output><empty_output><jupyter_text>Define the modelNext, let's instantiate a model. 
The model will be trained from scratch, hence we won't use the `from_pretrained` method here, but rather randomly initialize the model from a [`config`](https://huggingface.co/docs/transformers/model_doc/time_series_transformertransformers.TimeSeriesTransformerConfig).We specify a couple of additional parameters to the model:- `prediction_length` (in our case, `24` months): this is the horizon that the decoder of the Transformer will learn to predict for;- `context_length`: the model will set the `context_length` (input of the encoder) equal to the `prediction_length`, if no `context_length` is specified;- `lags` for a given frequency: these specify how much we "look back", to be added as additional features. e.g. for a `Daily` frequency we might consider a look back of `[1, 2, 7, 30, ...]` or in other words look back 1, 2, ... days while for `Minute` data we might consider `[1, 30, 60, 60*24, ...]` etc.;- the number of time features: in our case, this will be `2` as we'll add `MonthOfYear` and `Age` features;- the number of static categorical features: in our case, this will be just `1` as we'll add a single "time series ID" feature;- the cardinality: the number of values of each static categorical feature, as a list which for our case will be `[366]` as we have 366 different time series- the embedding dimension: the embedding dimension for each static categorical feature, as a list, for example `[3]` meaning the model will learn an embedding vector of size `3` for each of the `366` time series (regions). Let's use the default lags provided by GluonTS for the given frequency ("monthly"):<jupyter_code>from gluonts.time_feature import get_lags_for_frequency lags_sequence = get_lags_for_frequency(freq) print(lags_sequence)<jupyter_output>[1, 2, 3, 4, 5, 6, 7, 11, 12, 13, 23, 24, 25, 35, 36, 37]<jupyter_text>This means that we'll look back up to 37 months for each time step, as additional features.Let's also check the default time features which GluonTS provides us:<jupyter_code>from gluonts.time_feature import time_features_from_frequency_str time_features = time_features_from_frequency_str(freq) print(time_features)<jupyter_output>[<function month_of_year at 0x7f84840216c0>]<jupyter_text>In this case, there's only a single feature, namely "month of year". This means that for each time step, we'll add the month as a scalar value (e.g. 
`1` in case the timestamp is "january", `2` in case the timestamp is "february", etc.).We now have everything to define the model:<jupyter_code>from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction config = TimeSeriesTransformerConfig( prediction_length=prediction_length, # context length: context_length=prediction_length * 2, # lags coming from helper given the freq: lags_sequence=lags_sequence, # we'll add 2 time features ("month of year" and "age", see further): num_time_features=len(time_features) + 1, # we have a single static categorical feature, namely time series ID: num_static_categorical_features=1, # it has 366 possible values: cardinality=[len(train_dataset)], # the model will learn an embedding of size 2 for each of the 366 possible values: embedding_dimension=[2], # transformer params: encoder_layers=4, decoder_layers=4, d_model=32, ) model = TimeSeriesTransformerForPrediction(config)<jupyter_output><empty_output><jupyter_text>Note that, similar to other models in the 🤗 Transformers library, [`TimeSeriesTransformerModel`](https://huggingface.co/docs/transformers/model_doc/time_series_transformertransformers.TimeSeriesTransformerModel) corresponds to the encoder-decoder Transformer without any head on top, and [`TimeSeriesTransformerForPrediction`](https://huggingface.co/docs/transformers/model_doc/time_series_transformertransformers.TimeSeriesTransformerForPrediction) corresponds to `TimeSeriesTransformerModel` with a **distribution head** on top. By default, the model uses a Student-t distribution (but this is configurable):<jupyter_code>model.config.distribution_output<jupyter_output><empty_output><jupyter_text>This is an important difference with Transformers for NLP, where the head typically consists of a fixed categorical distribution implemented as an `nn.Linear` layer. Define TransformationsNext, we define the transformations for the data, in particular for the creation of the time features (based on the dataset or universal ones).Again, we'll use the GluonTS library for this. We define a `Chain` of transformations (which is a bit comparable to `torchvision.transforms.Compose` for images). It allows us to combine several transformations into a single pipeline.<jupyter_code>from gluonts.time_feature import ( time_features_from_frequency_str, TimeFeature, get_lags_for_frequency, ) from gluonts.dataset.field_names import FieldName from gluonts.transform import ( AddAgeFeature, AddObservedValuesIndicator, AddTimeFeatures, AsNumpyArray, Chain, ExpectedNumInstanceSampler, InstanceSplitter, RemoveFields, SelectFields, SetField, TestSplitSampler, Transformation, ValidationSplitSampler, VstackFeatures, RenameFields, )<jupyter_output><empty_output><jupyter_text>The transformations below are annotated with comments, to explain what they do. 
At a high level, we will iterate over the individual time series of our dataset and add/remove fields or features:<jupyter_code>from transformers import PretrainedConfig def create_transformation(freq: str, config: PretrainedConfig) -> Transformation: remove_field_names = [] if config.num_static_real_features == 0: remove_field_names.append(FieldName.FEAT_STATIC_REAL) if config.num_dynamic_real_features == 0: remove_field_names.append(FieldName.FEAT_DYNAMIC_REAL) if config.num_static_categorical_features == 0: remove_field_names.append(FieldName.FEAT_STATIC_CAT) # a bit like torchvision.transforms.Compose return Chain( # step 1: remove static/dynamic fields if not specified [RemoveFields(field_names=remove_field_names)] # step 2: convert the data to NumPy (potentially not needed) + ( [ AsNumpyArray( field=FieldName.FEAT_STATIC_CAT, expected_ndim=1, dtype=int, ) ] if config.num_static_categorical_features > 0 else [] ) + ( [ AsNumpyArray( field=FieldName.FEAT_STATIC_REAL, expected_ndim=1, ) ] if config.num_static_real_features > 0 else [] ) + [ AsNumpyArray( field=FieldName.TARGET, # we expect an extra dim for the multivariate case: expected_ndim=1 if config.input_size == 1 else 2, ), # step 3: handle the NaN's by filling in the target with zero # and return the mask (which is in the observed values) # true for observed values, false for nan's # the decoder uses this mask (no loss is incurred for unobserved values) # see loss_weights inside the xxxForPrediction model AddObservedValuesIndicator( target_field=FieldName.TARGET, output_field=FieldName.OBSERVED_VALUES, ), # step 4: add temporal features based on freq of the dataset # month of year in the case when freq="M" # these serve as positional encodings AddTimeFeatures( start_field=FieldName.START, target_field=FieldName.TARGET, output_field=FieldName.FEAT_TIME, time_features=time_features_from_frequency_str(freq), pred_length=config.prediction_length, ), # step 5: add another temporal feature (just a single number) # tells the model where in the life the value of the time series is # sort of running counter AddAgeFeature( target_field=FieldName.TARGET, output_field=FieldName.FEAT_AGE, pred_length=config.prediction_length, log_scale=True, ), # step 6: vertically stack all the temporal features into the key FEAT_TIME VstackFeatures( output_field=FieldName.FEAT_TIME, input_fields=[FieldName.FEAT_TIME, FieldName.FEAT_AGE] + ( [FieldName.FEAT_DYNAMIC_REAL] if config.num_dynamic_real_features > 0 else [] ), ), # step 7: rename to match HuggingFace names RenameFields( mapping={ FieldName.FEAT_STATIC_CAT: "static_categorical_features", FieldName.FEAT_STATIC_REAL: "static_real_features", FieldName.FEAT_TIME: "time_features", FieldName.TARGET: "values", FieldName.OBSERVED_VALUES: "observed_mask", } ), ] )<jupyter_output><empty_output><jupyter_text>Define `InstanceSplitter`For training/validation/testing we next create an `InstanceSplitter` which is used to sample windows from the dataset (as, remember, we can't pass the entire history of values to the Transformer due to time- and memory constraints).The instance splitter samples random `context_length` sized and subsequent `prediction_length` sized windows from the data, and appends a `past_` or `future_` key to any temporal keys in `time_series_fields` for the respective windows. The instance splitter can be configured into three different modes:1. `mode="train"`: Here we sample the context and prediction length windows randomly from the dataset given to it (the training dataset)2. 
`mode="validation"`: Here we sample the very last context length window and prediction window from the dataset given to it (for the back-testing or validation likelihood calculations)3. `mode="test"`: Here we sample the very last context length window only (for the prediction use case)<jupyter_code>from gluonts.transform.sampler import InstanceSampler from typing import Optional def create_instance_splitter( config: PretrainedConfig, mode: str, train_sampler: Optional[InstanceSampler] = None, validation_sampler: Optional[InstanceSampler] = None, ) -> Transformation: assert mode in ["train", "validation", "test"] instance_sampler = { "train": train_sampler or ExpectedNumInstanceSampler( num_instances=1.0, min_future=config.prediction_length ), "validation": validation_sampler or ValidationSplitSampler(min_future=config.prediction_length), "test": TestSplitSampler(), }[mode] return InstanceSplitter( target_field="values", is_pad_field=FieldName.IS_PAD, start_field=FieldName.START, forecast_start_field=FieldName.FORECAST_START, instance_sampler=instance_sampler, past_length=config.context_length + max(config.lags_sequence), future_length=config.prediction_length, time_series_fields=["time_features", "observed_mask"], )<jupyter_output><empty_output><jupyter_text>Create DataLoadersNext, it's time to create the DataLoaders, which allow us to have batches of (input, output pairs) - or in other words (`past_values`, `future_values`).<jupyter_code>from typing import Iterable import torch from gluonts.itertools import Cyclic, Cached from gluonts.dataset.loader import as_stacked_batches def create_train_dataloader( config: PretrainedConfig, freq, data, batch_size: int, num_batches_per_epoch: int, shuffle_buffer_length: Optional[int] = None, cache_data: bool = True, **kwargs, ) -> Iterable: PREDICTION_INPUT_NAMES = [ "past_time_features", "past_values", "past_observed_mask", "future_time_features", ] if config.num_static_categorical_features > 0: PREDICTION_INPUT_NAMES.append("static_categorical_features") if config.num_static_real_features > 0: PREDICTION_INPUT_NAMES.append("static_real_features") TRAINING_INPUT_NAMES = PREDICTION_INPUT_NAMES + [ "future_values", "future_observed_mask", ] transformation = create_transformation(freq, config) transformed_data = transformation.apply(data, is_train=True) if cache_data: transformed_data = Cached(transformed_data) # we initialize a Training instance instance_splitter = create_instance_splitter(config, "train") # the instance splitter will sample a window of # context length + lags + prediction length (from the 366 possible transformed time series) # randomly from within the target time series and return an iterator. 
stream = Cyclic(transformed_data).stream() training_instances = instance_splitter.apply(stream) return as_stacked_batches( training_instances, batch_size=batch_size, shuffle_buffer_length=shuffle_buffer_length, field_names=TRAINING_INPUT_NAMES, output_type=torch.tensor, num_batches_per_epoch=num_batches_per_epoch, ) def create_backtest_dataloader( config: PretrainedConfig, freq, data, batch_size: int, **kwargs, ): PREDICTION_INPUT_NAMES = [ "past_time_features", "past_values", "past_observed_mask", "future_time_features", ] if config.num_static_categorical_features > 0: PREDICTION_INPUT_NAMES.append("static_categorical_features") if config.num_static_real_features > 0: PREDICTION_INPUT_NAMES.append("static_real_features") transformation = create_transformation(freq, config) transformed_data = transformation.apply(data) # We create a Validation Instance splitter which will sample the very last # context window seen during training only for the encoder. instance_sampler = create_instance_splitter(config, "validation") # we apply the transformations in train mode testing_instances = instance_sampler.apply(transformed_data, is_train=True) return as_stacked_batches( testing_instances, batch_size=batch_size, output_type=torch.tensor, field_names=PREDICTION_INPUT_NAMES, )<jupyter_output><empty_output><jupyter_text>We have a test dataloader helper for completion, even though we will not use it here. This is useful in a production setting where we want to start forecasting from the end of a given time series. Thus, the test dataloader will sample the very last context window from the dataset provided and pass it to the model.<jupyter_code>def create_test_dataloader( config: PretrainedConfig, freq, data, batch_size: int, **kwargs, ): PREDICTION_INPUT_NAMES = [ "past_time_features", "past_values", "past_observed_mask", "future_time_features", ] if config.num_static_categorical_features > 0: PREDICTION_INPUT_NAMES.append("static_categorical_features") if config.num_static_real_features > 0: PREDICTION_INPUT_NAMES.append("static_real_features") transformation = create_transformation(freq, config) transformed_data = transformation.apply(data, is_train=False) # We create a test Instance splitter to sample the very last # context window from the dataset provided. 
instance_sampler = create_instance_splitter(config, "test") # We apply the transformations in test mode testing_instances = instance_sampler.apply(transformed_data, is_train=False) return as_stacked_batches( testing_instances, batch_size=batch_size, output_type=torch.tensor, field_names=PREDICTION_INPUT_NAMES, ) train_dataloader = create_train_dataloader( config=config, freq=freq, data=train_dataset, batch_size=256, num_batches_per_epoch=100, ) test_dataloader = create_backtest_dataloader( config=config, freq=freq, data=test_dataset, batch_size=64, )<jupyter_output><empty_output><jupyter_text>Let's check the first batch:<jupyter_code>batch = next(iter(train_dataloader)) for k, v in batch.items(): print(k, v.shape, v.type())<jupyter_output>past_time_features torch.Size([256, 85, 2]) torch.FloatTensor past_values torch.Size([256, 85]) torch.FloatTensor past_observed_mask torch.Size([256, 85]) torch.FloatTensor future_time_features torch.Size([256, 24, 2]) torch.FloatTensor static_categorical_features torch.Size([256, 1]) torch.LongTensor future_values torch.Size([256, 24]) torch.FloatTensor future_observed_mask torch.Size([256, 24]) torch.FloatTensor<jupyter_text>As can be seen, we don't feed `input_ids` and `attention_mask` to the encoder (as would be the case for NLP models), but rather `past_values`, along with `past_observed_mask`, `past_time_features`, and `static_categorical_features`.The decoder inputs consist of `future_values`, `future_observed_mask` and `future_time_features`. The `future_values` can be seen as the equivalent of `decoder_input_ids` in NLP.We refer to the [docs](https://huggingface.co/docs/transformers/model_doc/time_series_transformertransformers.TimeSeriesTransformerForPrediction.forward.past_values) for a detailed explanation for each of them. Forward passLet's perform a single forward pass with the batch we just created:<jupyter_code># perform forward pass outputs = model( past_values=batch["past_values"], past_time_features=batch["past_time_features"], past_observed_mask=batch["past_observed_mask"], static_categorical_features=batch["static_categorical_features"] if config.num_static_categorical_features > 0 else None, static_real_features=batch["static_real_features"] if config.num_static_real_features > 0 else None, future_values=batch["future_values"], future_time_features=batch["future_time_features"], future_observed_mask=batch["future_observed_mask"], output_hidden_states=True, ) print("Loss:", outputs.loss.item())<jupyter_output>Loss: 9.069628715515137<jupyter_text>Note that the model is returning a loss. This is possible as the decoder automatically shifts the `future_values` one position to the right in order to have the labels. This allows computing a loss between the predicted values and the labels.Also note that the decoder uses a causal mask to not look into the future as the values it needs to predict are in the `future_values` tensor. Train the ModelIt's time to train the model! 
We'll use a standard PyTorch training loop.We will use the 🤗 [Accelerate](https://huggingface.co/docs/accelerate/index) library here, which automatically places the model, optimizer and dataloader on the appropriate `device`.<jupyter_code>from accelerate import Accelerator from torch.optim import AdamW accelerator = Accelerator() device = accelerator.device model.to(device) optimizer = AdamW(model.parameters(), lr=6e-4, betas=(0.9, 0.95), weight_decay=1e-1) model, optimizer, train_dataloader = accelerator.prepare( model, optimizer, train_dataloader, ) model.train() for epoch in range(40): for idx, batch in enumerate(train_dataloader): optimizer.zero_grad() outputs = model( static_categorical_features=batch["static_categorical_features"].to(device) if config.num_static_categorical_features > 0 else None, static_real_features=batch["static_real_features"].to(device) if config.num_static_real_features > 0 else None, past_time_features=batch["past_time_features"].to(device), past_values=batch["past_values"].to(device), future_time_features=batch["future_time_features"].to(device), future_values=batch["future_values"].to(device), past_observed_mask=batch["past_observed_mask"].to(device), future_observed_mask=batch["future_observed_mask"].to(device), ) loss = outputs.loss # Backpropagation accelerator.backward(loss) optimizer.step() if idx % 100 == 0: print(loss.item())<jupyter_output>9.312426567077637 7.79284143447876 7.852108001708984 7.6523308753967285 7.4140448570251465 7.391452789306641 7.355312824249268 7.018772125244141 6.6947102546691895 6.884510040283203 6.586727142333984 6.800746917724609 6.795780181884766 7.579933166503906 7.15477180480957 6.703517436981201 7.250757694244385 7.39132833480835 7.598387241363525 7.2024149894714355 7.323209285736084 6.823130130767822 6.757688045501709 7.494504451751709 7.513833522796631 7.290976047515869 6.932094097137451 7.130832672119141 7.020802974700928 6.652693271636963 6.758007049560547 7.680879592895508 7.614417552947998 6.844751834869385 6.809683322906494 6.6291022300720215 7.306612491607666 6.697507381439209 7.026710510253906 6.921131134033203<jupyter_text>InferenceAt inference time, it's recommended to use the `generate()` method for autoregressive generation, similar to NLP models.Forecasting involves getting data from the test instance sampler, which will sample the very last `context_length` sized window of values from each time series in the dataset, and pass it to the model. Note that we pass `future_time_features`, which are known ahead of time, to the decoder.The model will autoregressively sample a certain number of values from the predicted distribution and pass them back to the decoder to return the prediction outputs:<jupyter_code>model.eval() forecasts = [] for batch in test_dataloader: outputs = model.generate( static_categorical_features=batch["static_categorical_features"].to(device) if config.num_static_categorical_features > 0 else None, static_real_features=batch["static_real_features"].to(device) if config.num_static_real_features > 0 else None, past_time_features=batch["past_time_features"].to(device), past_values=batch["past_values"].to(device), future_time_features=batch["future_time_features"].to(device), past_observed_mask=batch["past_observed_mask"].to(device), ) forecasts.append(outputs.sequences.cpu().numpy())<jupyter_output><empty_output><jupyter_text>The model outputs a tensor of shape (`batch_size`, `number of samples`, `prediction length`). 
In this case, we get `100` possible values for the next `24` months (for each example in the batch which is of size `64`):<jupyter_code>forecasts[0].shape<jupyter_output><empty_output><jupyter_text>We'll stack them vertically, to get forecasts for all time-series in the test dataset:<jupyter_code>forecasts = np.vstack(forecasts) print(forecasts.shape)<jupyter_output>(366, 100, 24)<jupyter_text>We can evaluate the resulting forecast with respect to the ground truth out of sample values present in the test set. For that, we'll use the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library, which includes the [MASE](https://huggingface.co/spaces/evaluate-metric/mase) and [sMAPE](https://huggingface.co/spaces/evaluate-metric/smape) metrics.We calculate both metrics for each time series in the dataset:<jupyter_code>from evaluate import load from gluonts.time_feature import get_seasonality mase_metric = load("evaluate-metric/mase") smape_metric = load("evaluate-metric/smape") forecast_median = np.median(forecasts, 1) mase_metrics = [] smape_metrics = [] for item_id, ts in enumerate(test_dataset): training_data = ts["target"][:-prediction_length] ground_truth = ts["target"][-prediction_length:] mase = mase_metric.compute( predictions=forecast_median[item_id], references=np.array(ground_truth), training=np.array(training_data), periodicity=get_seasonality(freq), ) mase_metrics.append(mase["mase"]) smape = smape_metric.compute( predictions=forecast_median[item_id], references=np.array(ground_truth), ) smape_metrics.append(smape["smape"]) print(f"MASE: {np.mean(mase_metrics)}") print(f"sMAPE: {np.mean(smape_metrics)}")<jupyter_output>sMAPE: 0.1609541520852549<jupyter_text>We can also plot the individual metrics of each time series in the dataset and observe that a handful of time series contribute a lot to the final test metric:<jupyter_code>plt.scatter(mase_metrics, smape_metrics, alpha=0.3) plt.xlabel("MASE") plt.ylabel("sMAPE") plt.show()<jupyter_output><empty_output><jupyter_text>To plot the prediction for any time series with respect the ground truth test data we define the following helper:<jupyter_code>import matplotlib.dates as mdates def plot(ts_index): fig, ax = plt.subplots() index = pd.period_range( start=test_dataset[ts_index][FieldName.START], periods=len(test_dataset[ts_index][FieldName.TARGET]), freq=freq, ).to_timestamp() # Major ticks every half year, minor ticks every month, ax.xaxis.set_major_locator(mdates.MonthLocator(bymonth=(1, 7))) ax.xaxis.set_minor_locator(mdates.MonthLocator()) ax.plot( index[-2 * prediction_length :], test_dataset[ts_index]["target"][-2 * prediction_length :], label="actual", ) plt.plot( index[-prediction_length:], np.median(forecasts[ts_index], axis=0), label="median", ) plt.fill_between( index[-prediction_length:], forecasts[ts_index].mean(0) - forecasts[ts_index].std(axis=0), forecasts[ts_index].mean(0) + forecasts[ts_index].std(axis=0), alpha=0.3, interpolate=True, label="+/- 1-std", ) plt.legend() plt.show()<jupyter_output><empty_output><jupyter_text>For example:<jupyter_code>plot(334)<jupyter_output><empty_output>
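<jupyter_text>Since the model returns samples from the predictive distribution rather than a single point forecast, we are not limited to the mean ± one standard deviation band used in the plotting helper above. As a small sketch (reusing the same arbitrary series index as the plot above), we can read off any empirical quantiles of the forecast directly from the `forecasts` array with NumPy:<jupyter_code># forecasts has shape (num_series, num_samples, prediction_length)
ts_index = 334
lower, median, upper = np.quantile(forecasts[ts_index], [0.1, 0.5, 0.9], axis=0)
print("10% quantile:", lower[:5])
print("median:      ", median[:5])
print("90% quantile:", upper[:5])<jupyter_output><empty_output>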
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments, AutoTokenizer from sklearn.metrics import accuracy_score, precision_recall_fscore_support from datasets import load_from_disk import random import logging import sys import argparse import os import torch if __name__ == "__main__": parser = argparse.ArgumentParser() # hyperparameters sent by the client are passed as command-line arguments to the script. parser.add_argument("--epochs", type=int, default=3) parser.add_argument("--train_batch_size", type=int, default=32) parser.add_argument("--eval_batch_size", type=int, default=64) parser.add_argument("--warmup_steps", type=int, default=500) parser.add_argument("--model_name", type=str) parser.add_argument("--learning_rate", type=str, default=5e-5) # Data, model, and output directories parser.add_argument("--output_data_dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"]) parser.add_argument("--model_dir", type=str, default=os.environ["SM_MODEL_DIR"]) parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"]) parser.add_argument("--training_dir", type=str, default=os.environ["SM_CHANNEL_TRAIN"]) parser.add_argument("--test_dir", type=str, default=os.environ["SM_CHANNEL_TEST"]) args, _ = parser.parse_known_args() # Set up logging logger = logging.getLogger(__name__) logging.basicConfig( level=logging.getLevelName("INFO"), handlers=[logging.StreamHandler(sys.stdout)], format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", ) # load datasets train_dataset = load_from_disk(args.training_dir) test_dataset = load_from_disk(args.test_dir) logger.info(f" loaded train_dataset length is: {len(train_dataset)}") logger.info(f" loaded test_dataset length is: {len(test_dataset)}") # compute metrics function for binary classification def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary") acc = accuracy_score(labels, preds) return {"accuracy": acc, "f1": f1, "precision": precision, "recall": recall} # download model from model hub model = AutoModelForSequenceClassification.from_pretrained(args.model_name) tokenizer = AutoTokenizer.from_pretrained(args.model_name) # define training args training_args = TrainingArguments( output_dir=args.model_dir, num_train_epochs=args.epochs, per_device_train_batch_size=args.train_batch_size, per_device_eval_batch_size=args.eval_batch_size, warmup_steps=args.warmup_steps, evaluation_strategy="epoch", logging_dir=f"{args.output_data_dir}/logs", learning_rate=float(args.learning_rate), ) # create Trainer instance trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=test_dataset, tokenizer=tokenizer, ) # train model trainer.train() # evaluate model eval_result = trainer.evaluate(eval_dataset=test_dataset) # writes eval result to file which can be accessed later in the S3 output with open(os.path.join(args.output_data_dir, "eval_results.txt"), "w") as writer: print("***** Eval results *****") for key, value in sorted(eval_result.items()): writer.write(f"{key} = {value}\n") # Saves the model to s3 trainer.save_model(args.model_dir)
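# --- Illustrative usage sketch (kept as comments, not executed as part of this script) ---
# This script is intended to be launched on SageMaker by a HuggingFace estimator from a
# notebook or driver script. The snippet below is only an assumption about what that driver
# could look like; the instance type, DLC versions, model name and S3 input paths are
# placeholders, not values prescribed by this script:
#
#   from sagemaker.huggingface import HuggingFace
#
#   huggingface_estimator = HuggingFace(
#       entry_point="train.py",                 # this file
#       source_dir="./scripts",
#       instance_type="ml.p3.2xlarge",          # placeholder instance type
#       instance_count=1,
#       role=role,
#       transformers_version="4.26",            # placeholder DLC versions
#       pytorch_version="1.13",
#       py_version="py39",
#       hyperparameters={"epochs": 3, "train_batch_size": 32, "model_name": "distilbert-base-uncased"},
#   )
#
#   # The channel names passed to fit() become the SM_CHANNEL_TRAIN and SM_CHANNEL_TEST
#   # directories that this script reads its datasets from.
#   huggingface_estimator.fit({"train": training_input_path, "test": test_input_path})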
<jupyter_start><jupyter_text>Huggingface Sagemaker-sdk - Deploy 🤗 Transformers for inference Welcome to this getting started guide. We will use the new Hugging Face Inference DLCs and Amazon SageMaker Python SDK to deploy a transformer model for inference. In this example we directly deploy one of the 10,000+ Hugging Face Transformers from the [Hub](https://huggingface.co/models) to Amazon SageMaker for Inference. API - [SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) Using the `transformers` `pipelines`, we designed an API that makes it easy for you to benefit from all `pipelines` features. The API is modeled after the [🤗 Accelerated Inference API](https://api-inference.huggingface.co/docs/python/html/detailed_parameters.html), meaning your inputs need to be defined in the `inputs` key, and if you want additional supported `pipelines` parameters you can add them in the `parameters` key. Below you can find example request bodies. **text-classification request body**```python{ "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."}```**question-answering request body**```python{ "inputs": { "question": "What is used for inference?", "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference." }}```**zero-shot classification request body**```python{ "inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!", "parameters": { "candidate_labels": [ "refund", "legal", "faq" ] }}```<jupyter_code>!pip install "sagemaker>=2.48.0" --upgrade<jupyter_output><empty_output><jupyter_text>Deploy one of the 10,000+ Hugging Face Transformers to Amazon SageMaker for Inference_This is an experimental feature, where the model will be loaded after the endpoint is created. This could lead to errors, e.g. models > 10GB_To deploy a model directly from the Hub to SageMaker we need to define 2 environment variables when creating the `HuggingFaceModel`. We need to define:- `HF_MODEL_ID`: defines the model id, which will be automatically loaded from [huggingface.co/models](http://huggingface.co/models) when creating your SageMaker Endpoint. The 🤗 Hub provides 10,000+ models, all available through this environment variable.- `HF_TASK`: defines the task for the used 🤗 Transformers pipeline. A full list of tasks can be found [here](https://huggingface.co/transformers/main_classes/pipelines.html).<jupyter_code>import sagemaker import boto3 try: role = sagemaker.get_execution_role() except ValueError: iam = boto3.client('iam') role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn'] print(f"sagemaker role arn: {role}") from sagemaker.huggingface import HuggingFaceModel # Hub Model configuration.
https://huggingface.co/models hub = { 'HF_MODEL_ID':'distilbert-base-uncased-distilled-squad', # model_id from hf.co/models 'HF_TASK':'question-answering' # NLP task you want to use for predictions } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( env=hub, role=role, # iam role with permissions to create an Endpoint transformers_version="4.26", # transformers version used pytorch_version="1.13", # pytorch version used py_version="py39", # python version of the DLC ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, instance_type="ml.m5.xlarge" ) # example request, you always need to define "inputs" data = { "inputs": { "question": "What is used for inference?", "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference." } } # request predictor.predict(data) # delete endpoint predictor.delete_model() predictor.delete_endpoint()<jupyter_output><empty_output>
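<jupyter_text>For completeness, here is a sketch of how the same endpoint could be invoked without the SageMaker Python SDK, for example from a backend service, using the low-level `sagemaker-runtime` client from boto3. This is only an illustration: it assumes the endpoint has not been deleted yet (so it would be run before the cleanup cell above), and `predictor.endpoint_name` could be replaced with the endpoint name shown in the SageMaker console:<jupyter_code>import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# same request body format as with the predictor: everything goes under the "inputs" key
payload = {
    "inputs": {
        "question": "What is used for inference?",
        "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
    }
}

response = runtime.invoke_endpoint(
    EndpointName=predictor.endpoint_name,  # or a hard-coded endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read().decode("utf-8")))<jupyter_output><empty_output>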
<jupyter_start><jupyter_text>Semantic Segmentation with Hugging Face's Transformers & Amazon SageMaker Transformer models are changing the world of machine learning, starting with natural language processing, and now with audio and computer vision. Hugging Face's mission is to democratize good machine learning and to give anyone the opportunity to use these new state-of-the-art machine learning models. Together with Amazon SageMaker and AWS we have been working on extending the functionalities of the Hugging Face Inference DLC and the Python SageMaker SDK to make it easier to use speech and vision models together with `transformers`. You can now use the Hugging Face Inference DLC to do [automatic speech recognition](https://huggingface.co/tasks/automatic-speech-recognition) using Meta AI's [wav2vec2](https://arxiv.org/abs/2006.11477) model or Microsoft's [WavLM](https://arxiv.org/abs/2110.13900), or use NVIDIA's [SegFormer](https://arxiv.org/abs/2105.15203) for [image segmentation](https://huggingface.co/tasks/image-segmentation).This guide will walk you through how to do [Image Segmentation](https://huggingface.co/tasks/image-segmentation) using [SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) and the new `DataSerializer`. In this example you will learn how to: 1. Set up a development environment and permissions for deploying Amazon SageMaker Inference Endpoints.2. Deploy a SegFormer model to Amazon SageMaker for image segmentation.3. Send requests to the endpoint to do image segmentation. Let's get started! 🚀---*If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).* 1. Set up a development environment and permissions for deploying Amazon SageMaker Inference Endpoints.Setting up the development environment and permissions needs to be done for the automatic-speech-recognition example and the semantic-segmentation example. First we update the `sagemaker` SDK to make sure we have the new `DataSerializer`.<jupyter_code>%pip install sagemaker segmentation-mask-overlay pillow matplotlib --upgrade<jupyter_output><empty_output><jupyter_text>After we have updated the SDK we can set the permissions._If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker.
You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._<jupyter_code>import sagemaker import boto3 sess = sagemaker.Session() # sagemaker session bucket -> used for uploading data, models and logs # sagemaker will automatically create this bucket if it does not exist sagemaker_session_bucket=None if sagemaker_session_bucket is None and sess is not None: # set to default bucket if a bucket name is not given sagemaker_session_bucket = sess.default_bucket() try: role = sagemaker.get_execution_role() except ValueError: iam = boto3.client('iam') role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn'] sess = sagemaker.Session(default_bucket=sagemaker_session_bucket) print(f"sagemaker role arn: {role}") print(f"sagemaker bucket: {sess.default_bucket()}") print(f"sagemaker session region: {sess.boto_region_name}")<jupyter_output>sagemaker role arn: arn:aws:iam::558105141721:role/sagemaker_execution_role sagemaker bucket: sagemaker-us-east-1-558105141721 sagemaker session region: us-east-1<jupyter_text>2. Deploy a segformer model to Amazon SageMaker for image segmentationImage Segmentation divides an image into segments where each pixel in the image is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation.We use the [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) model to run our segmentation endpoint. This model is fine-tuned on ADE20k (scene-centric image) at resolution 512x512.<jupyter_code>from sagemaker.huggingface.model import HuggingFaceModel from sagemaker.serializers import DataSerializer # Hub Model configuration. <https://huggingface.co/models> hub = { 'HF_MODEL_ID':'nvidia/segformer-b0-finetuned-ade-512-512', 'HF_TASK':'image-segmentation', } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( env=hub, # configuration for loading model from Hub role=role, # iam role with permissions to create an Endpoint transformers_version="4.26", # transformers version used pytorch_version="1.13", # pytorch version used py_version='py39', # python version used )<jupyter_output><empty_output><jupyter_text>Before we are able to deploy our `HuggingFaceModel` class we need to create a new serializer, which supports our image data. The serializer is used by the Predictor in the `predict` method to serialize our data to a specific `mime-type`, which is sent to the endpoint. The default serializer for the HuggingFacePredictor is a JSON serializer, but since we are not going to send text data to the endpoint we will use the DataSerializer.<jupyter_code># create a serializer for the data image_serializer = DataSerializer(content_type='image/x-image') # using x-image to support multiple image formats # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.g4dn.xlarge', # ec2 instance type serializer=image_serializer, # serializer for our image data )<jupyter_output>-----------!<jupyter_text>3. Send requests to the endpoint to do image segmentation.The `.deploy()` returns a `HuggingFacePredictor` object with our `DataSerializer` which can be used to request inference. This `HuggingFacePredictor` makes it easy to send requests to your endpoint and get the results back.We will use two different methods to send requests to the endpoint:a. Provide an image file via path to the predictor b. 
Provide a binary image data object to the predictor a. Provide an image file via path to the predictorUsing an image file as input is as easy as providing the path to its location. The `DataSerializer` will then read it and send the bytes to the endpoint. We can use an ADE20k sample image hosted on huggingface.co<jupyter_code>!wget https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/raw/main/ADE_val_00000001.jpg<jupyter_output>--2023-03-21 08:29:41-- https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/raw/main/ADE_val_00000001.jpg Resolving huggingface.co (huggingface.co)... 52.203.75.138, 3.216.111.67, 3.83.196.160, ... Connecting to huggingface.co (huggingface.co)|52.203.75.138|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 52650 (51K) [image/jpeg] Saving to: ‘ADE_val_00000001.jpg’ 100%[======================================>] 52,650 --.-K/s in 0.001s 2023-03-21 08:29:41 (43.8 MB/s) - ‘ADE_val_00000001.jpg’ saved [52650/52650]<jupyter_text>Before we send our request, let's create a helper function to display our segmentation results.<jupyter_code>from PIL import Image import io from segmentation_mask_overlay import overlay_masks import numpy as np import base64 import matplotlib.pyplot as plt def stringToRGB(base64_string): # convert base64 string to numpy array imgdata = base64.b64decode(str(base64_string)) image = Image.open(io.BytesIO(imgdata)) return np.array(image) def get_overlay(original_image_path, result): # decode each predicted mask and overlay all of them on the original image masks = [stringToRGB(r["mask"]).astype('bool') for r in result] masks_labels = [r["label"] for r in result] cmap = plt.cm.tab20(np.arange(len(masks_labels))) image = Image.open(original_image_path) overlay_masks(image, masks, labels=masks_labels, colors=cmap, mask_alpha=0.5)<jupyter_output><empty_output><jupyter_text>To send a request with the path to our image file we can use the following code:<jupyter_code>image_path = "ADE_val_00000001.jpg" res = predictor.predict(data=image_path) print(res[0].keys()) get_overlay(image_path, res)<jupyter_output>dict_keys(['score', 'label', 'mask'])<jupyter_text>b. Provide a binary image data object to the predictorInstead of providing a path to the image file we can also directly provide its bytes by reading the file in Python._make sure `ADE_val_00000001.jpg` is in the directory_<jupyter_code>image_path = "ADE_val_00000001.jpg" with open(image_path, "rb") as data_file: image_data = data_file.read() res = predictor.predict(data=image_data) print(res[0].keys()) get_overlay(image_path, res)<jupyter_output>dict_keys(['score', 'label', 'mask'])<jupyter_text>Clean up<jupyter_code>predictor.delete_model() predictor.delete_endpoint()<jupyter_output><empty_output>
notebooks/sagemaker/21_image_segmantation/sagemaker-notebook.ipynb/0
{ "file_path": "notebooks/sagemaker/21_image_segmantation/sagemaker-notebook.ipynb", "repo_id": "notebooks", "token_count": 2831 }
163
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Adapters Adapter-based methods add extra trainable parameters after the attention and fully-connected layers of a frozen pretrained model to reduce memory-usage and speed up training. The method varies depending on the adapter, it could simply be an extra added layer or it could be expressing the weight updates ∆W as a low-rank decomposition of the weight matrix. Either way, the adapters are typically small but demonstrate comparable performance to a fully finetuned model and enable training larger models with fewer resources. This guide will give you a brief overview of the adapter methods supported by PEFT (if you're interested in learning more details about a specific method, take a look at the linked paper). ## Low-Rank Adaptation (LoRA) <Tip> LoRA is one of the most popular PEFT methods and a good starting point if you're just getting started with PEFT. It was originally developed for large language models but it is a tremendously popular training method for diffusion models because of its efficiency and effectiveness. </Tip> As mentioned briefly earlier, [LoRA](https://hf.co/papers/2106.09685) is a technique that accelerates finetuning large models while consuming less memory. LoRA represents the weight updates ∆W with two smaller matrices (called *update matrices*) through low-rank decomposition. These new matrices can be trained to adapt to the new data while keeping the overall number of parameters low. The original weight matrix remains frozen and doesn't receive any further updates. To produce the final results, the original and extra adapted weights are combined. You could also merge the adapter weights with the base model to eliminate inference latency. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_animated.gif"/> </div> This approach has a number of advantages: * LoRA makes finetuning more efficient by drastically reducing the number of trainable parameters. * The original pretrained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them. * LoRA is orthogonal to other parameter-efficient methods and can be combined with many of them. * Performance of models finetuned using LoRA is comparable to the performance of fully finetuned models. In principle, LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. However, for simplicity and further parameter efficiency, LoRA is typically only applied to the attention blocks in Transformer models. 
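To make the low-rank idea concrete, here is a minimal, illustrative PyTorch sketch (not PEFT's actual implementation): the pretrained weight stays frozen while two small matrices `A` and `B` supply the trainable update, scaled by `alpha / r`.

```python
import torch
import torch.nn as nn

class LoRALinearSketch(nn.Module):
    """Toy LoRA wrapper around a frozen nn.Linear, for illustration only."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the original weights W stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init, so ∆W starts at 0
        self.scaling = alpha / r

    def forward(self, x):
        # output = x W^T + scaling * x (B A)^T, i.e. the frozen layer plus the low-rank update
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinearSketch(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 768 = 12288 trainable parameters instead of 768 * 768
```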
The resulting number of trainable parameters in a LoRA model depends on the size of the update matrices, which is determined mainly by the rank `r` and the shape of the original weight matrix. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora.png"/> </div> <small><a href="https://hf.co/papers/2103.10385">Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation</a></small> ## Low-Rank Hadamard Product (LoHa) Low-rank decomposition can impact performance because the weight updates are limited to the low-rank space, which can constrain a model's expressiveness. However, you don't necessarily want to use a larger rank because it increases the number of trainable parameters. To address this, [LoHa](https://huggingface.co/papers/2108.06098) (a method originally developed for computer vision) was applied to diffusion models where the ability to generate diverse images is an important consideration. LoHa should also work with general model types, but the embedding layers aren't currently implemented in PEFT. LoHa uses the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (element-wise product) instead of the matrix product. ∆W is represented by four smaller matrices instead of two - like in LoRA - and each pair of these low-rank matrices are combined with the Hadamard product. As a result, ∆W can have the same number of trainable parameters but a higher rank and expressivity. ## Low-Rank Kronecker Product (LoKr) [LoKr](https://hf.co/papers/2309.14859) is very similar to LoRA and LoHa, and it is also mainly applied to diffusion models, though you could also use it with other model types. LoKr replaces the matrix product with the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product) instead. The Kronecker product decomposition creates a block matrix which preserves the rank of the original weight matrix. Another benefit of the Kronecker product is that it can be vectorized by stacking the matrix columns. This can speed up the process because you're avoiding fully reconstructing ∆W. ## Orthogonal Finetuning (OFT) <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/oft.png"/> </div> <small><a href="https://hf.co/papers/2306.07280">Controlling Text-to-Image Diffusion by Orthogonal Finetuning</a></small> [OFT](https://hf.co/papers/2306.07280) is a method that primarily focuses on preserving a pretrained model's generative performance in the finetuned model. It tries to maintain the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer because this better captures the semantic information among neurons. This means OFT is more capable at preserving the subject and it is better for controllable generation (similar to [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)). OFT preserves the hyperspherical energy by learning an orthogonal transformation for neurons to keep the cosine similarity between them unchanged. In practice, this means taking the matrix product of an orthogonal matrix with the pretrained weight matrix. However, to be parameter-efficient, the orthogonal matrix is represented as a block-diagonal matrix with rank `r` blocks. Whereas LoRA reduces the number of trainable parameters with low-rank structures, OFT reduces the number of trainable parameters with a sparse block-diagonal matrix structure. 
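As a rough, side-by-side illustration of how these reparametrizations construct ∆W (shapes, ranks, and factor sizes below are arbitrary, and real implementations add scaling, initialization, and other details):

```python
import torch

out_features, in_features, r = 64, 64, 4

# LoRA: a single low-rank product
B = torch.randn(out_features, r)
A = torch.randn(r, in_features)
delta_lora = B @ A  # rank at most r

# LoHa: two low-rank products combined element-wise (Hadamard product)
B1, A1 = torch.randn(out_features, r), torch.randn(r, in_features)
B2, A2 = torch.randn(out_features, r), torch.randn(r, in_features)
delta_loha = (B1 @ A1) * (B2 @ A2)  # same parameter count as rank-2r LoRA, but rank can reach r**2

# LoKr: a Kronecker product of two much smaller factors
C = torch.randn(8, 8)
D = torch.randn(out_features // 8, in_features // 8)
delta_lokr = torch.kron(C, D)

print(delta_lora.shape, delta_loha.shape, delta_lokr.shape)  # all torch.Size([64, 64])
```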
## Adaptive Low-Rank Adaptation (AdaLoRA) [AdaLoRA](https://hf.co/papers/2303.10512) manages the parameter budget introduced by LoRA by allocating more parameters - in other words, a higher rank `r` - for important weight matrices that are better adapted for a task and pruning less important ones. The rank is controlled by a method similar to singular value decomposition (SVD). ∆W is parameterized with two orthogonal matrices and a diagonal matrix which contains singular values. This parametrization method avoids iteratively applying SVD which is computationally expensive. Based on this method, the rank of ∆W is adjusted according to an importance score. ∆W is divided into triplets and each triplet is scored according to its contribution to model performance. Triplets with low importance scores are pruned and triplets with high importance scores are kept for finetuning. ## Llama-Adapter [Llama-Adapter](https://hf.co/papers/2303.16199) is a method for adapting Llama into an instruction-following model. To help adapt the model for instruction-following, the adapter is trained with a 52K instruction-output dataset. A set of learnable adaption prompts is prefixed to the input instruction tokens. These are inserted into the upper layers of the model because it is better to learn with the higher-level semantics of the pretrained model. The instruction-output tokens prefixed to the input guide the adaption prompt to generate a contextual response. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/llama-adapter.png"/> </div> <small><a href="https://hf.co/papers/2303.16199">LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention</a></small> To avoid adding noise to the tokens, the adapter uses zero-initialized attention. On top of this, the adapter adds a learnable gating factor (initialized with zeros) to progressively add information to the model during training. This prevents overwhelming the model's pretrained knowledge with the newly learned instructions.
peft/docs/source/conceptual_guides/adapter.md/0
{ "file_path": "peft/docs/source/conceptual_guides/adapter.md", "repo_id": "peft", "token_count": 2203 }
164
<!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Configuration [`PeftConfigMixin`] is the base configuration class for storing the adapter configuration of a [`PeftModel`], and [`PromptLearningConfig`] is the base configuration class for soft prompt methods (p-tuning, prefix tuning, and prompt tuning). These base classes contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number of layers and number of attention heads. ## PeftConfigMixin [[autodoc]] config.PeftConfigMixin - all ## PeftConfig [[autodoc]] PeftConfig - all ## PromptLearningConfig [[autodoc]] PromptLearningConfig - all
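As a brief, illustrative example of the save/load round trip these classes provide (the local directory name below is arbitrary; a Hub repo id can be passed instead):

```python
from peft import LoraConfig, PeftConfig

config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.1)
config.save_pretrained("my-adapter-config")  # writes adapter_config.json to the directory

loaded = PeftConfig.from_pretrained("my-adapter-config")  # also accepts a Hub repo id
print(loaded.peft_type)
```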
peft/docs/source/package_reference/config.md/0
{ "file_path": "peft/docs/source/package_reference/config.md", "repo_id": "peft", "token_count": 224 }
165
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Quicktour PEFT offers parameter-efficient methods for finetuning large pretrained models. The traditional paradigm is to finetune all of a model's parameters for each downstream task, but this is becoming exceedingly costly and impractical because of the enormous number of parameters in models today. Instead, it is more efficient to train a smaller number of prompt parameters or use a reparametrization method like low-rank adaptation (LoRA) to reduce the number of trainable parameters. This quicktour will show you PEFT's main features and how you can train or run inference on large models that would typically be inaccessible on consumer devices. ## Train Each PEFT method is defined by a [`PeftConfig`] class that stores all the important parameters for building a [`PeftModel`]. For example, to train with LoRA, load and create a [`LoraConfig`] class and specify the following parameters: - `task_type`: the task to train for (sequence-to-sequence language modeling in this case) - `inference_mode`: whether you're using the model for inference or not - `r`: the dimension of the low-rank matrices - `lora_alpha`: the scaling factor for the low-rank matrices - `lora_dropout`: the dropout probability of the LoRA layers ```python from peft import LoraConfig, TaskType peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1) ``` <Tip> See the [`LoraConfig`] reference for more details about other parameters you can adjust, such as the modules to target or the bias type. </Tip> Once the [`LoraConfig`] is setup, create a [`PeftModel`] with the [`get_peft_model`] function. It takes a base model - which you can load from the Transformers library - and the [`LoraConfig`] containing the parameters for how to configure a model for training with LoRA. Load the base model you want to finetune. ```python from transformers import AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large") ``` Wrap the base model and `peft_config` with the [`get_peft_model`] function to create a [`PeftModel`]. To get a sense of the number of trainable parameters in your model, use the [`print_trainable_parameters`] method. ```python from peft import get_peft_model model = get_peft_model(model, peft_config) model.print_trainable_parameters() "output: trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282" ``` Out of [bigscience/mt0-large's](https://huggingface.co/bigscience/mt0-large) 1.2B parameters, you're only training 0.19% of them! That is it 🎉! Now you can train the model with the Transformers [`~transformers.Trainer`], Accelerate, or any custom PyTorch training loop. 
For example, to train with the [`~transformers.Trainer`] class, setup a [`~transformers.TrainingArguments`] class with some training hyperparameters. ```py training_args = TrainingArguments( output_dir="your-name/bigscience/mt0-large-lora", learning_rate=1e-3, per_device_train_batch_size=32, per_device_eval_batch_size=32, num_train_epochs=2, weight_decay=0.01, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, ) ``` Pass the model, training arguments, dataset, tokenizer, and any other necessary component to the [`~transformers.Trainer`], and call [`~transformers.Trainer.train`] to start training. ```py trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["test"], tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) trainer.train() ``` ### Save model After your model is finished training, you can save your model to a directory using the [`~transformers.PreTrainedModel.save_pretrained`] function. ```py model.save_pretrained("output_dir") ``` You can also save your model to the Hub (make sure you're logged in to your Hugging Face account first) with the [`~transformers.PreTrainedModel.push_to_hub`] function. ```python from huggingface_hub import notebook_login notebook_login() model.push_to_hub("your-name/bigscience/mt0-large-lora") ``` Both methods only save the extra PEFT weights that were trained, meaning it is super efficient to store, transfer, and load. For example, this [facebook/opt-350m](https://huggingface.co/ybelkada/opt-350m-lora) model trained with LoRA only contains two files: `adapter_config.json` and `adapter_model.safetensors`. The `adapter_model.safetensors` file is just 6.3MB! <div class="flex flex-col justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/> <figcaption class="text-center">The adapter weights for a opt-350m model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.</figcaption> </div> ## Inference <Tip> Take a look at the [AutoPeftModel](package_reference/auto_class) API reference for a complete list of available `AutoPeftModel` classes. </Tip> Easily load any PEFT-trained model for inference with the [`AutoPeftModel`] class and the [`~transformers.PreTrainedModel.from_pretrained`] method: ```py from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer import torch model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora") tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") model = model.to("cuda") model.eval() inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt") outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=50) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]) "Preheat the oven to 350 degrees and place the cookie dough in the center of the oven. In a large bowl, combine the flour, baking powder, baking soda, salt, and cinnamon. In a separate bowl, combine the egg yolks, sugar, and vanilla." ``` For other tasks that aren't explicitly supported with an `AutoPeftModelFor` class - such as automatic speech recognition - you can still use the base [`AutoPeftModel`] class to load a model for the task. 
```py from peft import AutoPeftModel model = AutoPeftModel.from_pretrained("smangrul/openai-whisper-large-v2-LORA-colab") ``` ## Next steps Now that you've seen how to train a model with one of the PEFT methods, we encourage you to try out some of the other methods like prompt tuning. The steps are very similar to the ones shown in the quicktour: 1. prepare a [`PeftConfig`] for a PEFT method 2. use the [`get_peft_model`] method to create a [`PeftModel`] from the configuration and base model Then you can train it however you like! To load a PEFT model for inference, you can use the [`AutoPeftModel`] class. Feel free to also take a look at the task guides if you're interested in training a model with another PEFT method for a specific task such as semantic segmentation, multilingual automatic speech recognition, DreamBooth, token classification, and more.
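As a small illustration of those two steps with prompt tuning (the base model and hyperparameters here are purely illustrative, not a recommendation):

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

peft_config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=8)
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the virtual prompt tokens are trainable
```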
peft/docs/source/quicktour.md/0
{ "file_path": "peft/docs/source/quicktour.md", "repo_id": "peft", "token_count": 2384 }
166
<jupyter_start><jupyter_code>from transformers import AutoModelForSeq2SeqLM import peft from peft import get_peft_config, get_peft_model, get_peft_model_state_dict, IA3Config, TaskType import torch from datasets import load_dataset import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer from torch.utils.data import DataLoader from transformers import default_data_collator, get_linear_schedule_with_warmup from tqdm import tqdm from datasets import load_dataset device = "cuda" model_name_or_path = "bigscience/mt0-large" tokenizer_name_or_path = "bigscience/mt0-large" checkpoint_name = "financial_sentiment_analysis_ia3_v1.pt" text_column = "sentence" label_column = "text_label" max_length = 128 lr = 8e-3 num_epochs = 3 batch_size = 8 import importlib importlib.reload(peft) # creating model peft_config = IA3Config(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, feedforward_modules=[]) model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path) model model = get_peft_model(model, peft_config) model.print_trainable_parameters() model # loading dataset dataset = load_dataset("financial_phrasebank", "sentences_allagree") dataset = dataset["train"].train_test_split(test_size=0.1) dataset["validation"] = dataset["test"] del dataset["test"] classes = dataset["train"].features["label"].names dataset = dataset.map( lambda x: {"text_label": [classes[label] for label in x["label"]]}, batched=True, num_proc=1, ) dataset["train"][0] # data preprocessing tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) def preprocess_function(examples): inputs = examples[text_column] targets = examples[label_column] model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt") labels = tokenizer(targets, max_length=3, padding="max_length", truncation=True, return_tensors="pt") labels = labels["input_ids"] labels[labels == tokenizer.pad_token_id] = -100 model_inputs["labels"] = labels return model_inputs processed_datasets = dataset.map( preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=False, desc="Running tokenizer on dataset", ) train_dataset = processed_datasets["train"] eval_dataset = processed_datasets["validation"] train_dataloader = DataLoader( train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True ) eval_dataloader = DataLoader(eval_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True) # optimizer and lr scheduler optimizer = torch.optim.AdamW(model.parameters(), lr=lr) lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=0, num_training_steps=(len(train_dataloader) * num_epochs), ) # training and evaluation model = model.to(device) for epoch in range(num_epochs): model.train() total_loss = 0 for step, batch in enumerate(tqdm(train_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss total_loss += loss.detach().float() loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() eval_loss = 0 eval_preds = [] for step, batch in enumerate(tqdm(eval_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = outputs.loss eval_loss += loss.detach().float() eval_preds.extend( tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True) ) 
eval_epoch_loss = eval_loss / len(eval_dataloader) eval_ppl = torch.exp(eval_epoch_loss) train_epoch_loss = total_loss / len(train_dataloader) train_ppl = torch.exp(train_epoch_loss) print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}") # print accuracy correct = 0 total = 0 for pred, true in zip(eval_preds, dataset["validation"]["text_label"]): if pred.strip() == true.strip(): correct += 1 total += 1 accuracy = correct / total * 100 print(f"{accuracy=} % on the evaluation dataset") print(f"{eval_preds[:10]=}") print(f"{dataset['validation']['text_label'][:10]=}") # saving model peft_model_id = f"{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}" model.save_pretrained(peft_model_id) ckpt = f"{peft_model_id}/adapter_model.bin" !du -h $ckpt from peft import PeftModel, PeftConfig peft_model_id = f"{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path) model = PeftModel.from_pretrained(model, peft_model_id) model.eval() i = 13 inputs = tokenizer(dataset["validation"][text_column][i], return_tensors="pt") print(dataset["validation"][text_column][i]) print(inputs) with torch.no_grad(): outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10) print(outputs) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))<jupyter_output>25 November 2010 - Finnish paints and coatings company Tikkurila Oyj ( HEL : TIK1V ) said today that Finnish state-owned investment company Solidium Oy sold its 14.7 % stake in the company for a total of EUR98m . {'input_ids': tensor([[ 877, 3277, 1068, 259, 264, 515, 143136, 42068, 263, 305, 259, 101264, 263, 5835, 22538, 4496, 2697, 20860, 385, 274, 76347, 259, 267, 259, 93686, 353, 561, 259, 271, 2426, 7883, 533, 515, 143136, 6509, 264, 45815, 37624, 5835, 35133, 16558, 20860, 22026, 2476, 5006, 487, 1448, 259, 96189, 281, 287, 5835, 332, 259, 262, 2725, 304, 2687, 5577, 282, 259, 260, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,[...]
peft/examples/conditional_generation/peft_ia3_seq2seq.ipynb/0
{ "file_path": "peft/examples/conditional_generation/peft_ia3_seq2seq.ipynb", "repo_id": "peft", "token_count": 2685 }
167
<jupyter_start><jupyter_text>Fine-tune FLAN-T5 using `bitsandbytes`, `peft` & `transformers` 🤗 In this notebook we will see how to properly use `peft` , `transformers` & `bitsandbytes` to fine-tune `flan-t5-large` in a google colab!We will finetune the model on [`financial_phrasebank`](https://huggingface.co/datasets/financial_phrasebank) dataset, that consists of pairs of text-labels to classify financial-related sentences, if they are either `positive`, `neutral` or `negative`.Note that you could use the same notebook to fine-tune `flan-t5-xl` as well, but you would need to shard the models first to avoid CPU RAM issues on Google Colab, check [these weights](https://huggingface.co/ybelkada/flan-t5-xl-sharded-bf16). Install requirements<jupyter_code>!pip install -q bitsandbytes datasets accelerate !pip install -q git+https://github.com/huggingface/transformers.git@main git+https://github.com/huggingface/peft.git@main<jupyter_output> ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 76.3/76.3 MB 10.6 MB/s eta 0:00:00  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 462.8/462.8 KB 45.6 MB/s eta 0:00:00  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 199.7/199.7 KB 26.9 MB/s eta 0:00:00  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 132.0/132.0 KB 20.1 MB/s eta 0:00:00  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 190.3/190.3 KB 26.8 MB/s eta 0:00:00  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 213.0/213.0 KB 26.5 MB/s eta 0:00:00  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 140.6/140.6 KB 20.2 MB/s eta 0:00:00 [?25h Installing build dependencies ... [?25l[?25hdone Getting requirements to build wheel ... [?25l[?25hdone Preparing metadata (pyproject.tom[...]<jupyter_text>Import model and tokenizer<jupyter_code># Select CUDA device index import os import torch os.environ["CUDA_VISIBLE_DEVICES"] = "0" from datasets import load_dataset from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig model_name = "google/flan-t5-large" model = AutoModelForSeq2SeqLM.from_pretrained(model_name, quantization_config=BitsAndBytesConfig(load_in_8bit=True)) tokenizer = AutoTokenizer.from_pretrained(model_name)<jupyter_output><empty_output><jupyter_text>Prepare model for training Some pre-processing needs to be done before training such an int8 model using `peft`, therefore let's import an utiliy function `prepare_model_for_int8_training` that will: - Casts all the non `int8` modules to full precision (`fp32`) for stability- Add a `forward_hook` to the input embedding layer to enable gradient computation of the input hidden states- Enable gradient checkpointing for more memory-efficient training<jupyter_code>from peft import prepare_model_for_int8_training model = prepare_model_for_int8_training(model)<jupyter_output><empty_output><jupyter_text>Load your `PeftModel` Here we will use LoRA (Low-Rank Adaptators) to train our model<jupyter_code>from peft import LoraConfig, get_peft_model, TaskType def print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. 
""" trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) lora_config = LoraConfig( r=16, lora_alpha=32, target_modules=["q", "v"], lora_dropout=0.05, bias="none", task_type="SEQ_2_SEQ_LM" ) model = get_peft_model(model, lora_config) print_trainable_parameters(model)<jupyter_output>trainable params: 4718592 || all params: 787868672 || trainable%: 0.5989059049678777<jupyter_text>As you can see, here we are only training 0.6% of the parameters of the model! This is a huge memory gain that will enable us to fine-tune the model without any memory issue. Load and process dataHere we will use [`financial_phrasebank`](https://huggingface.co/datasets/financial_phrasebank) dataset to fine-tune our model on sentiment classification on financial sentences. We will load the split `sentences_allagree`, which corresponds according to the model card to the split where there is a 100% annotator agreement.<jupyter_code># loading dataset dataset = load_dataset("financial_phrasebank", "sentences_allagree") dataset = dataset["train"].train_test_split(test_size=0.1) dataset["validation"] = dataset["test"] del dataset["test"] classes = dataset["train"].features["label"].names dataset = dataset.map( lambda x: {"text_label": [classes[label] for label in x["label"]]}, batched=True, num_proc=1, )<jupyter_output><empty_output><jupyter_text>Let's also apply some pre-processing of the input data, the labels needs to be pre-processed, the tokens corresponding to `pad_token_id` needs to be set to `-100` so that the `CrossEntropy` loss associated with the model will correctly ignore these tokens.<jupyter_code># data preprocessing text_column = "sentence" label_column = "text_label" max_length = 128 def preprocess_function(examples): inputs = examples[text_column] targets = examples[label_column] model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt") labels = tokenizer(targets, max_length=3, padding="max_length", truncation=True, return_tensors="pt") labels = labels["input_ids"] labels[labels == tokenizer.pad_token_id] = -100 model_inputs["labels"] = labels return model_inputs processed_datasets = dataset.map( preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=False, desc="Running tokenizer on dataset", ) train_dataset = processed_datasets["train"] eval_dataset = processed_datasets["validation"]<jupyter_output><empty_output><jupyter_text>Train our model! Let's now train our model, run the cells below.Note that for T5 since some layers are kept in `float32` for stability purposes there is no need to call autocast on the trainer.<jupyter_code>from transformers import TrainingArguments, Trainer training_args = TrainingArguments( "temp", evaluation_strategy="epoch", learning_rate=1e-3, gradient_accumulation_steps=1, auto_find_batch_size=True, num_train_epochs=1, save_steps=100, save_total_limit=8, ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! 
trainer.train()<jupyter_output>/usr/local/lib/python3.8/dist-packages/transformers/optimization.py:346: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( ***** Running training ***** Num examples = 2037 Num Epochs = 1 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 255 Number of trainable parameters = 4718592 /usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py:298: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")<jupyter_text>Qualitatively test our model Let's have a quick qualitative evaluation of the model, by taking a sample from the dataset that corresponds to a positive label. Run your generation similarly as you were running your model from `transformers`:<jupyter_code>model.eval() input_text = "In January-September 2009 , the Group 's net interest income increased to EUR 112.4 mn from EUR 74.3 mn in January-September 2008 ." inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10) print("input sentence: ", input_text) print(" output prediction: ", tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))<jupyter_output>Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.27.0.dev0", "use_cache": false } /usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py:298: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization") /usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py:1374: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`. warnings.warn(<jupyter_text>Share your adapters on 🤗 Hub Once you have trained your adapter, you can easily share it on the Hub using the method `push_to_hub` . Note that only the adapter weights and config will be pushed<jupyter_code>from huggingface_hub import notebook_login notebook_login() model.push_to_hub("ybelkada/flan-t5-large-financial-phrasebank-lora", use_auth_token=True)<jupyter_output>Uploading the following files to ybelkada/flan-t5-large-lora: adapter_model.bin,adapter_config.json<jupyter_text>Load your adapter from the Hub You can load the model together with the adapter with few lines of code! 
Check the snippet below to load the adapter from the Hub and run the example evaluation!<jupyter_code>import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForSeq2SeqLM, AutoTokenizer peft_model_id = "ybelkada/flan-t5-large-financial-phrasebank-lora" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, torch_dtype="auto", device_map="auto") tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the Lora model model = PeftModel.from_pretrained(model, peft_model_id) model.eval() input_text = "In January-September 2009 , the Group 's net interest income increased to EUR 112.4 mn from EUR 74.3 mn in January-September 2008 ." inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10) print("input sentence: ", input_text) print(" output prediction: ", tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))<jupyter_output>Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.27.0.dev0" } /usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py:1374: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`. warnings.warn(
peft/examples/int8_training/Finetune_flan_t5_large_bnb_peft.ipynb/0
{ "file_path": "peft/examples/int8_training/Finetune_flan_t5_large_bnb_peft.ipynb", "repo_id": "peft", "token_count": 4290 }
168
<jupyter_start><jupyter_code>import os os.environ["CUDA_VISIBLE_DEVICES"] = "1" from peft import PeftConfig, PeftModel from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer from datasets import load_dataset import torch import random peft_model_id = "smangrul/tinyllama_lora_norobots" device = "cuda" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, load_in_4bit=True, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(peft_model_id) model.resize_token_embeddings(len(tokenizer)) model = PeftModel.from_pretrained(model, peft_model_id, adapter_name="norobots") _ = model.load_adapter("smangrul/tinyllama_lora_sql", adapter_name="sql") _ = model.load_adapter("smangrul/tinyllama_lora_adcopy", adapter_name="adcopy") %%time # [0.8, 0.1, 0.1] linear #[1.0, 0.2] 0.7 density dare_linear #[1.5, 0.3] 0.5 density ties #[0.8, 0.5] cat adapters = ["norobots", "adcopy", "sql"] weights = [2.0, 0.3, 0.7] adapter_name = "merge" density = 0.2 combination_type = "ties" if adapter_name in model.peft_config: model.delete_adapter(adapter_name) model.add_weighted_adapter(adapters, weights, adapter_name, combination_type=combination_type, density=density) model.eval() model.set_adapter("merge") messages = [ {"role": "user", "content": "Write an essay about Generative AI."}, ] text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) inputs = tokenizer(text, return_tensors="pt") # , add_special_tokens=False) inputs = {k: v.to("cuda") for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0])) messages = [ {"role": "system", "content": "Create a text ad given the following product and description."}, { "role": "user", "content": "Product: Sony PS5 PlayStation Console\nDescription: The PS5™ console unleashes new gaming possibilities that you never anticipated.", }, ] text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) inputs = tokenizer(text, return_tensors="pt") # , add_special_tokens=False) inputs = {k: v.to("cuda") for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0])) text = """Table: 2-11365528-2 Columns: ['Team', 'Head Coach', 'President', 'Home Ground', 'Location'] Natural Query: Who is the Head Coach of the team whose President is Mario Volarevic? SQL Query:""" inputs = tokenizer(text, return_tensors="pt") # , add_special_tokens=False) inputs = {k: v.to("cuda") for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=64, repetition_penalty=1.1, eos_token_id=tokenizer("</s>").input_ids[-1] ) print(tokenizer.decode(outputs[0]))<jupyter_output><s> Table: 2-11365528-2 Columns: ['Team', 'Head Coach', 'President', 'Home Ground', 'Location'] Natural Query: Who is the Head Coach of the team whose President is Mario Volarevic? SQL Query: SELECT Head Coach FROM 2-11365528-2 WHERE President = Mario Volarevic</s>
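<jupyter_text>The commented weights and densities near the top of the notebook hint at other merge recipes. As an illustrative sketch (the adapter subset, weights, and density here are arbitrary, not tuned values), a `dare_linear` merge can be built the same way:<jupyter_code># illustrative alternative merge recipe: DARE-linear over two adapters
adapters = ["norobots", "sql"]
weights = [1.0, 0.2]
if "merge" in model.peft_config:
    model.delete_adapter("merge")
model.add_weighted_adapter(adapters, weights, "merge", combination_type="dare_linear", density=0.7)
model.set_adapter("merge")<jupyter_output><empty_output>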
peft/examples/multi_adapter_examples/Lora_Merging.ipynb/0
{ "file_path": "peft/examples/multi_adapter_examples/Lora_Merging.ipynb", "repo_id": "peft", "token_count": 1305 }
169
import os from enum import Enum import torch from datasets import DatasetDict, load_dataset, load_from_disk from datasets.builder import DatasetGenerationError from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, ) from peft import LoraConfig DEFAULT_CHATML_CHAT_TEMPLATE = "{% for message in messages %}\n{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% if loop.last and add_generation_prompt %}{{'<|im_start|>assistant\n' }}{% endif %}{% endfor %}" DEFAULT_ZEPHYR_CHAT_TEMPLATE = "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}" class ZephyrSpecialTokens(str, Enum): user = "<|user|>" assistant = "<|assistant|>" system = "<|system|>" eos_token = "</s>" bos_token = "<s>" pad_token = "<pad>" @classmethod def list(cls): return [c.value for c in cls] class ChatmlSpecialTokens(str, Enum): user = "<|im_start|>user" assistant = "<|im_start|>assistant" system = "<|im_start|>system" eos_token = "<|im_end|>" bos_token = "<s>" pad_token = "<pad>" @classmethod def list(cls): return [c.value for c in cls] def create_datasets(tokenizer, data_args, training_args, apply_chat_template=False): def preprocess(samples): batch = [] for conversation in samples["messages"]: batch.append(tokenizer.apply_chat_template(conversation, tokenize=False)) return {"content": batch} raw_datasets = DatasetDict() for split in data_args.splits.split(","): try: # Try first if dataset on a Hub repo dataset = load_dataset(data_args.dataset_name, split=split) except DatasetGenerationError: # If not, check local dataset dataset = load_from_disk(os.path.join(data_args.dataset_name, split)) if "train" in split: raw_datasets["train"] = dataset elif "test" in split: raw_datasets["test"] = dataset else: raise ValueError(f"Split type {split} not recognized as one of test or train.") if apply_chat_template: raw_datasets = raw_datasets.map( preprocess, batched=True, remove_columns=raw_datasets["train"].column_names, ) train_data = raw_datasets["train"] valid_data = raw_datasets["test"] print(f"Size of the train set: {len(train_data)}. 
Size of the validation set: {len(valid_data)}") print(f"A sample of train dataset: {train_data[0]}") return train_data, valid_data def create_and_prepare_model(args, data_args, training_args): if args.use_unsloth: from unsloth import FastLanguageModel bnb_config = None quant_storage_dtype = None if ( torch.distributed.is_available() and torch.distributed.is_initialized() and torch.distributed.get_world_size() > 1 and args.use_unsloth ): raise NotImplementedError("Unsloth is not supported in distributed training") if args.use_4bit_quantization: compute_dtype = getattr(torch, args.bnb_4bit_compute_dtype) quant_storage_dtype = getattr(torch, args.bnb_4bit_quant_storage_dtype) bnb_config = BitsAndBytesConfig( load_in_4bit=args.use_4bit_quantization, bnb_4bit_quant_type=args.bnb_4bit_quant_type, bnb_4bit_compute_dtype=compute_dtype, bnb_4bit_use_double_quant=args.use_nested_quant, bnb_4bit_quant_storage=quant_storage_dtype, ) if compute_dtype == torch.float16 and args.use_4bit_quantization: major, _ = torch.cuda.get_device_capability() if major >= 8: print("=" * 80) print("Your GPU supports bfloat16, you can accelerate training with the argument --bf16") print("=" * 80) elif args.use_8bit_quantization: bnb_config = BitsAndBytesConfig(load_in_8bit=args.use_8bit_quantization) if args.use_unsloth: # Load model model, _ = FastLanguageModel.from_pretrained( model_name=args.model_name_or_path, max_seq_length=data_args.max_seq_length, dtype=None, load_in_4bit=args.use_4bit_quantization, ) else: model = AutoModelForCausalLM.from_pretrained( args.model_name_or_path, quantization_config=bnb_config, trust_remote_code=True, attn_implementation="flash_attention_2" if args.use_flash_attn else "eager", torch_dtype=quant_storage_dtype or torch.float32, ) peft_config = None chat_template = None if args.use_peft_lora and not args.use_unsloth: peft_config = LoraConfig( lora_alpha=args.lora_alpha, lora_dropout=args.lora_dropout, r=args.lora_r, bias="none", task_type="CAUSAL_LM", target_modules=args.lora_target_modules.split(",") if args.lora_target_modules != "all-linear" else args.lora_target_modules, ) special_tokens = None chat_template = None if args.chat_template_format == "chatml": special_tokens = ChatmlSpecialTokens chat_template = DEFAULT_CHATML_CHAT_TEMPLATE elif args.chat_template_format == "zephyr": special_tokens = ZephyrSpecialTokens chat_template = DEFAULT_ZEPHYR_CHAT_TEMPLATE if special_tokens is not None: tokenizer = AutoTokenizer.from_pretrained( args.model_name_or_path, pad_token=special_tokens.pad_token.value, bos_token=special_tokens.bos_token.value, eos_token=special_tokens.eos_token.value, additional_special_tokens=special_tokens.list(), trust_remote_code=True, ) tokenizer.chat_template = chat_template # make embedding resizing configurable? model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8) else: tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, trust_remote_code=True) tokenizer.pad_token = tokenizer.eos_token if args.use_unsloth: # Do model patching and add fast LoRA weights model = FastLanguageModel.get_peft_model( model, lora_alpha=args.lora_alpha, lora_dropout=args.lora_dropout, r=args.lora_r, target_modules=args.lora_target_modules.split(",") if args.lora_target_modules != "all-linear" else args.lora_target_modules, use_gradient_checkpointing=training_args.gradient_checkpointing, random_state=training_args.seed, max_seq_length=data_args.max_seq_length, ) return model, peft_config, tokenizer
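# --- Illustrative usage (not part of this module) ---------------------------------
# Assuming `model_args`, `data_args`, and `training_args` are parsed dataclasses that
# carry the fields referenced above (model_name_or_path, dataset_name, chat_template_format, ...),
# a training script would typically wire these helpers together like this:
#
#   model, peft_config, tokenizer = create_and_prepare_model(model_args, data_args, training_args)
#   train_dataset, eval_dataset = create_datasets(
#       tokenizer,
#       data_args,
#       training_args,
#       apply_chat_template=model_args.chat_template_format != "none",
#   )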
peft/examples/sft/utils.py/0
{ "file_path": "peft/examples/sft/utils.py", "repo_id": "peft", "token_count": 3277 }
170
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations from typing import TYPE_CHECKING, Any import torch from .config import PeftConfig from .mixed_model import PeftMixedModel from .peft_model import ( PeftModel, PeftModelForCausalLM, PeftModelForFeatureExtraction, PeftModelForQuestionAnswering, PeftModelForSeq2SeqLM, PeftModelForSequenceClassification, PeftModelForTokenClassification, ) from .tuners import ( AdaLoraConfig, AdaLoraModel, AdaptionPromptConfig, IA3Config, IA3Model, LoHaConfig, LoHaModel, LoKrConfig, LoKrModel, LoraConfig, LoraModel, MultitaskPromptTuningConfig, OFTConfig, OFTModel, PolyConfig, PolyModel, PrefixTuningConfig, PromptEncoderConfig, PromptTuningConfig, ) from .utils import _prepare_prompt_learning_config if TYPE_CHECKING: from transformers import PreTrainedModel MODEL_TYPE_TO_PEFT_MODEL_MAPPING: dict[str, PeftModel] = { "SEQ_CLS": PeftModelForSequenceClassification, "SEQ_2_SEQ_LM": PeftModelForSeq2SeqLM, "CAUSAL_LM": PeftModelForCausalLM, "TOKEN_CLS": PeftModelForTokenClassification, "QUESTION_ANS": PeftModelForQuestionAnswering, "FEATURE_EXTRACTION": PeftModelForFeatureExtraction, } PEFT_TYPE_TO_CONFIG_MAPPING: dict[str, PeftConfig] = { "ADAPTION_PROMPT": AdaptionPromptConfig, "PROMPT_TUNING": PromptTuningConfig, "PREFIX_TUNING": PrefixTuningConfig, "P_TUNING": PromptEncoderConfig, "LORA": LoraConfig, "LOHA": LoHaConfig, "LOKR": LoKrConfig, "ADALORA": AdaLoraConfig, "IA3": IA3Config, "MULTITASK_PROMPT_TUNING": MultitaskPromptTuningConfig, "OFT": OFTConfig, "POLY": PolyConfig, } PEFT_TYPE_TO_TUNER_MAPPING = { "LORA": LoraModel, "LOHA": LoHaModel, "LOKR": LoKrModel, "ADALORA": AdaLoraModel, "IA3": IA3Model, "OFT": OFTModel, "POLY": PolyModel, } def get_peft_config(config_dict: dict[str, Any]) -> PeftConfig: """ Returns a Peft config object from a dictionary. Args: config_dict (`Dict[str, Any]`): Dictionary containing the configuration parameters. """ return PEFT_TYPE_TO_CONFIG_MAPPING[config_dict["peft_type"]](**config_dict) def get_peft_model( model: PreTrainedModel, peft_config: PeftConfig, adapter_name: str = "default", mixed: bool = False ) -> PeftModel | PeftMixedModel: """ Returns a Peft model object from a model and a config. Args: model ([`transformers.PreTrainedModel`]): Model to be wrapped. peft_config ([`PeftConfig`]): Configuration object containing the parameters of the Peft model. adapter_name (`str`, `optional`, defaults to `"default"`): The name of the adapter to be injected, if not provided, the default adapter name is used ("default"). mixed (`bool`, `optional`, defaults to `False`): Whether to allow mixing different (compatible) adapter types. 
""" model_config = getattr(model, "config", {"model_type": "custom"}) if hasattr(model_config, "to_dict"): model_config = model_config.to_dict() peft_config.base_model_name_or_path = model.__dict__.get("name_or_path", None) if mixed: return PeftMixedModel(model, peft_config, adapter_name=adapter_name) if peft_config.task_type not in MODEL_TYPE_TO_PEFT_MODEL_MAPPING.keys() and not peft_config.is_prompt_learning: return PeftModel(model, peft_config, adapter_name=adapter_name) if peft_config.is_prompt_learning: peft_config = _prepare_prompt_learning_config(peft_config, model_config) return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name) def inject_adapter_in_model( peft_config: PeftConfig, model: torch.nn.Module, adapter_name: str = "default" ) -> torch.nn.Module: r""" A simple API to create and inject adapter in-place into a model. Currently the API does not support prompt learning methods and adaption prompt. Make sure to have the correct `target_names` set in the `peft_config` object. The API calls `get_peft_model` under the hood but would be restricted only to non-prompt learning methods. Args: peft_config (`PeftConfig`): Configuration object containing the parameters of the Peft model. model (`torch.nn.Module`): The input model where the adapter will be injected. adapter_name (`str`, `optional`, defaults to `"default"`): The name of the adapter to be injected, if not provided, the default adapter name is used ("default"). """ if peft_config.is_prompt_learning or peft_config.is_adaption_prompt: raise ValueError("`create_and_replace` does not support prompt learning and adaption prompt yet.") if peft_config.peft_type not in PEFT_TYPE_TO_TUNER_MAPPING.keys(): raise ValueError( f"`inject_adapter_in_model` does not support {peft_config.peft_type} yet. Please use `get_peft_model`." ) tuner_cls = PEFT_TYPE_TO_TUNER_MAPPING[peft_config.peft_type] # By instantiating a peft model we are injecting randomly initialized LoRA layers into the model's modules. peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name) return peft_model.model
peft/src/peft/mapping.py/0
{ "file_path": "peft/src/peft/mapping.py", "repo_id": "peft", "token_count": 2265 }
171
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import warnings from typing import Any, Optional import bitsandbytes as bnb import torch from peft.import_utils import is_bnb_4bit_available, is_bnb_available from peft.tuners.tuners_utils import BaseTunerLayer, check_adapters_to_merge from peft.utils.integrations import dequantize_bnb_weight from peft.utils.other import transpose from .layer import LoraLayer if is_bnb_available(): class Linear8bitLt(torch.nn.Module, LoraLayer): # Lora implemented in a dense layer def __init__( self, base_layer: torch.nn.Module, adapter_name: str, r: int = 0, lora_alpha: int = 1, lora_dropout: float = 0.0, init_lora_weights: bool = True, use_rslora: bool = False, use_dora: bool = False, **kwargs, ) -> None: super().__init__() LoraLayer.__init__(self, base_layer) self.fan_in_fan_out = False self._active_adapter = adapter_name self.update_layer( adapter_name, r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, init_lora_weights=init_lora_weights, use_rslora=use_rslora, use_dora=use_dora, ) def merge(self, safe_merge: bool = False, adapter_names: Optional[list[str]] = None) -> None: """ Merge the active adapter weights into the base weights Args: safe_merge (`bool`, *optional*): If True, the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to `False`. adapter_names (`list[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. """ adapter_names = check_adapters_to_merge(self, adapter_names) if not adapter_names: # no adapter to merge return for active_adapter in adapter_names: if active_adapter not in self.lora_A.keys(): continue warnings.warn( "Merge lora module to 8-bit linear may get different generations due to rounding errors." ) lora_data = self.get_delta_weight(active_adapter) weight = self.get_base_layer().weight state = self.get_base_layer().state if state.SCB is None: state.SCB = weight.SCB # Dequantize the result of identity matrix and int8 weight because bitsandbytes does not support int8 # dequantization directly output = dequantize_bnb_weight(weight, state=state) if not self.use_dora[active_adapter]: w_data = output.to(lora_data.dtype).to(lora_data.device) + lora_data else: # handle dora # since output already includes scaling, set it to 1 here weight_norm = self._get_weight_norm(output, lora_data, scaling=1).detach() # We need to cache weight_norm because it has to be based on the original weights. 
We # cannot calculate it on the fly based on the merged weights when unmerging because its a # different value self._cache_store(f"{active_adapter}-weight_norm", weight_norm) dora_factor = self.lora_magnitude_vector[active_adapter] / weight_norm w_data = dora_factor.view(-1, 1) * (output + lora_data) if safe_merge and not torch.isfinite(w_data).all(): raise ValueError( f"NaNs detected in the merged weights. The adapter {active_adapter} seems to be broken" ) self.get_base_layer().weight = bnb.nn.Int8Params( w_data.to("cpu"), requires_grad=False, has_fp16_weights=weight.has_fp16_weights ).to(weight.device) state.reset_grads() self.merged_adapters.append(active_adapter) def unmerge(self) -> None: """ This method unmerges all merged adapter layers from the base weights. """ if not self.merged: warnings.warn("Already unmerged. Nothing to do.") return while len(self.merged_adapters) > 0: active_adapter = self.merged_adapters.pop() if active_adapter not in self.lora_A.keys(): continue warnings.warn( "Unmerge lora module to 8-bit linear may get different generations due to rounding errors." ) lora_data = self.get_delta_weight(active_adapter) weight = self.get_base_layer().weight state = self.get_base_layer().state if state.SCB is None: state.SCB = weight.SCB output = dequantize_bnb_weight(weight, state=state) if not self.use_dora[active_adapter]: w_data = output.to(lora_data.dtype).to(lora_data.device) - lora_data else: weight_norm = self._cache_pop(f"{active_adapter}-weight_norm") dora_factor = self.lora_magnitude_vector[active_adapter] / weight_norm w_data = output.data / dora_factor.view(-1, 1) - lora_data self.get_base_layer().weight = bnb.nn.Int8Params( w_data.to("cpu"), requires_grad=False, has_fp16_weights=weight.has_fp16_weights ).to(weight.device) state.reset_grads() def get_delta_weight(self, adapter): return ( transpose( self.lora_B[adapter].weight @ self.lora_A[adapter].weight, False, ) * self.scaling[adapter] ) def _mixed_batch_forward( self, x: torch.Tensor, *args: Any, adapter_names: list[str], **kwargs: Any ) -> torch.Tensor: # This is a special method that handles the case when users pass the argument `adapter_names`. This is an # extra argument that allows mixing different adapters in the same batch at inference time. 
result = self.base_layer(x, *args, **kwargs) unique_adapters = set(adapter_names) sub_batch_indices_list = [] for adapter in unique_adapters: sub_batch_indices_list.append([index for index, item in enumerate(adapter_names) if item == adapter]) for i, active_adapter in enumerate(unique_adapters): if active_adapter == "__base__": continue if active_adapter not in self.lora_A.keys(): continue lora_A = self.lora_A[active_adapter] lora_B = self.lora_B[active_adapter] dropout = self.lora_dropout[active_adapter] scaling = self.scaling[active_adapter] requires_conversion = not torch.is_autocast_enabled() if requires_conversion: expected_dtype = result.dtype compute_dtype = lora_A.weight.dtype if x.dtype != compute_dtype: x = x.to(compute_dtype) # getting the sub-batch, passing it to LoRA layers and updating the corresponding indices of the linear # layer output sub_batch = x[sub_batch_indices_list[i]] output = lora_B(lora_A(dropout(sub_batch))) * scaling if requires_conversion: output = output.to(expected_dtype) result[sub_batch_indices_list[i]] += output return result def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor: self._check_forward_args(x, *args, **kwargs) adapter_names = kwargs.pop("adapter_names", None) if self.disable_adapters: if self.merged: self.unmerge() result = self.base_layer(x, *args, **kwargs) elif adapter_names is not None: result = self._mixed_batch_forward(x, *args, adapter_names=adapter_names, **kwargs) elif self.merged: result = self.base_layer(x, *args, **kwargs) else: result = self.base_layer(x, *args, **kwargs) for active_adapter in self.active_adapters: if active_adapter not in self.lora_A.keys(): continue lora_A = self.lora_A[active_adapter] lora_B = self.lora_B[active_adapter] dropout = self.lora_dropout[active_adapter] scaling = self.scaling[active_adapter] requires_conversion = not torch.is_autocast_enabled() if requires_conversion: expected_dtype = result.dtype compute_dtype = lora_A.weight.dtype if x.dtype != compute_dtype: x = x.to(compute_dtype) if not self.use_dora[active_adapter]: output = lora_B(lora_A(dropout(x))) * scaling else: output = self._apply_dora(x, lora_A, lora_B, scaling, active_adapter) if requires_conversion: output = output.to(expected_dtype) result = result + output return result def __repr__(self) -> str: rep = super().__repr__() return "lora." 
+ rep def dispatch_bnb_8bit(target: torch.nn.Module, adapter_name: str, **kwargs): new_module = None if isinstance(target, BaseTunerLayer): target_base_layer = target.get_base_layer() else: target_base_layer = target loaded_in_8bit = kwargs.get("loaded_in_8bit", False) if loaded_in_8bit and isinstance(target_base_layer, bnb.nn.Linear8bitLt): eightbit_kwargs = kwargs.copy() eightbit_kwargs.update( { "has_fp16_weights": target.state.has_fp16_weights, "memory_efficient_backward": target.state.memory_efficient_backward, "threshold": target.state.threshold, "index": target.index, } ) new_module = Linear8bitLt(target, adapter_name, **eightbit_kwargs) return new_module if is_bnb_4bit_available(): class Linear4bit(torch.nn.Module, LoraLayer): # Lora implemented in a dense layer def __init__( self, base_layer: torch.nn.Module, adapter_name: str, r: int = 0, lora_alpha: int = 1, lora_dropout: float = 0.0, init_lora_weights: bool = True, use_rslora: bool = False, use_dora: bool = False, **kwargs, ) -> None: super().__init__() LoraLayer.__init__(self, base_layer) self.fan_in_fan_out = False self._active_adapter = adapter_name self.update_layer( adapter_name, r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, init_lora_weights=init_lora_weights, use_rslora=use_rslora, use_dora=use_dora, ) def merge(self, safe_merge: bool = False, adapter_names: Optional[list[str]] = None) -> None: """ Merge the active adapter weights into the base weights Args: safe_merge (`bool`, *optional*): If True, the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to `False`. adapter_names (`list[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. """ adapter_names = check_adapters_to_merge(self, adapter_names) if not adapter_names: # no adapter to merge return for active_adapter in adapter_names: if active_adapter not in self.lora_A.keys(): continue warnings.warn( "Merge lora module to 4-bit linear may get different generations due to rounding errors." ) # Refer to https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930 weight = self.get_base_layer().weight kwargs = weight.__dict__ lora_data = self.get_delta_weight(active_adapter) output = dequantize_bnb_weight(weight, state=weight.quant_state) if not self.use_dora[active_adapter]: w_data = output + lora_data else: # handle dora # since output already includes scaling, set it to 1 here weight_norm = self._get_weight_norm(output, lora_data, scaling=1).detach() # We need to cache weight_norm because it has to be based on the original weights. We # cannot calculate it on the fly based on the merged weights when unmerging because its a # different value self._cache_store(f"{active_adapter}-weight_norm", weight_norm) dora_factor = self.lora_magnitude_vector[active_adapter] / weight_norm w_data = dora_factor.view(-1, 1) * (output + lora_data) if safe_merge and not torch.isfinite(w_data).all(): raise ValueError( f"NaNs detected in the merged weights. The adapter {active_adapter} seems to be broken" ) if "bnb_quantized" in kwargs: kwargs["bnb_quantized"] = False self.get_base_layer().weight = bnb.nn.Params4bit(w_data.to("cpu"), requires_grad=False, **kwargs).to( weight.device ) self.merged_adapters.append(active_adapter) def unmerge(self) -> None: """ This method unmerges all merged adapter layers from the base weights. 
""" if not self.merged: warnings.warn("Already unmerged. Nothing to do.") return while len(self.merged_adapters) > 0: active_adapter = self.merged_adapters.pop() if active_adapter not in self.lora_A.keys(): continue warnings.warn( "Unmerge lora module to 4-bit linear may get different generations due to rounding errors." ) lora_data = self.get_delta_weight(active_adapter) weight = self.get_base_layer().weight kwargs = weight.__dict__ output = dequantize_bnb_weight(weight, state=weight.quant_state) if not self.use_dora[active_adapter]: w_data = output - lora_data else: weight_norm = self._cache_pop(f"{active_adapter}-weight_norm") dora_factor = self.lora_magnitude_vector[active_adapter] / weight_norm w_data = output.data / dora_factor.view(-1, 1) - lora_data if "bnb_quantized" in kwargs: kwargs["bnb_quantized"] = False self.get_base_layer().weight = bnb.nn.Params4bit(w_data.to("cpu"), requires_grad=False, **kwargs).to( weight.device ) def get_delta_weight(self, adapter): return ( transpose( self.lora_B[adapter].weight @ self.lora_A[adapter].weight, False, ) * self.scaling[adapter] ) def _mixed_batch_forward( self, x: torch.Tensor, *args: Any, adapter_names: list[str], **kwargs: Any ) -> torch.Tensor: # This is a special method that handles the case when users pass the argument `adapter_names`. This is an # extra argument that allows mixing different adapters in the same batch at inference time. result = self.base_layer(x, *args, **kwargs) unique_adapters = set(adapter_names) sub_batch_indices_list = [] for adapter in unique_adapters: sub_batch_indices_list.append([index for index, item in enumerate(adapter_names) if item == adapter]) for i, active_adapter in enumerate(unique_adapters): if active_adapter == "__base__": continue if active_adapter not in self.lora_A.keys(): continue lora_A = self.lora_A[active_adapter] lora_B = self.lora_B[active_adapter] dropout = self.lora_dropout[active_adapter] scaling = self.scaling[active_adapter] requires_conversion = not torch.is_autocast_enabled() if requires_conversion: expected_dtype = result.dtype x = x.to(lora_A.weight.dtype) # getting the sub-batch, passing it to LoRA layers and updating the corresponding indices of the linear # layer output sub_batch = x[sub_batch_indices_list[i]] output = lora_B(lora_A(dropout(sub_batch))) * scaling if requires_conversion: output = output.to(expected_dtype) result[sub_batch_indices_list[i]] += output return result def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor: self._check_forward_args(x, *args, **kwargs) adapter_names = kwargs.pop("adapter_names", None) if self.disable_adapters: if self.merged: self.unmerge() result = self.base_layer(x, *args, **kwargs) elif adapter_names is not None: result = self._mixed_batch_forward(x, *args, adapter_names=adapter_names, **kwargs) elif self.merged: result = self.base_layer(x, *args, **kwargs) else: result = self.base_layer(x, *args, **kwargs) # As per Tim Dettmers, for 4bit, we need to defensively clone here. # The reason is that in some cases, an error can occur that backprop # does not work on a manipulated view. This issue may be solved with # newer PyTorch versions but this would need extensive testing to be # sure. 
result = result.clone() for active_adapter in self.active_adapters: if active_adapter not in self.lora_A.keys(): continue lora_A = self.lora_A[active_adapter] lora_B = self.lora_B[active_adapter] dropout = self.lora_dropout[active_adapter] scaling = self.scaling[active_adapter] requires_conversion = not torch.is_autocast_enabled() if requires_conversion: expected_dtype = result.dtype x = x.to(lora_A.weight.dtype) if not self.use_dora[active_adapter]: output = lora_B(lora_A(dropout(x))) * scaling else: output = self._apply_dora(x, lora_A, lora_B, scaling, active_adapter) if requires_conversion: output = output.to(expected_dtype) result = result + output return result def __repr__(self) -> str: rep = super().__repr__() return "lora." + rep def dispatch_bnb_4bit(target: torch.nn.Module, adapter_name: str, **kwargs): new_module = None if isinstance(target, BaseTunerLayer): target_base_layer = target.get_base_layer() else: target_base_layer = target loaded_in_4bit = kwargs.get("loaded_in_4bit", False) if loaded_in_4bit and is_bnb_4bit_available() and isinstance(target_base_layer, bnb.nn.Linear4bit): fourbit_kwargs = kwargs.copy() fourbit_kwargs.update( { "compute_dtype": target_base_layer.compute_dtype, "compress_statistics": target_base_layer.weight.compress_statistics, "quant_type": target_base_layer.weight.quant_type, } ) new_module = Linear4bit(target, adapter_name, **fourbit_kwargs) return new_module
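A minimal sketch of how these quantized LoRA layers typically come into play (assuming a CUDA machine with bitsandbytes installed and an illustrative checkpoint name): loading the base model in 8-bit is what makes `dispatch_bnb_8bit` route the targeted modules to the `Linear8bitLt` wrapper above.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with bitsandbytes 8-bit quantization (checkpoint is illustrative).
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # prepare norms/embeddings for k-bit training

# LoRA adapters are injected on top of the quantized linear layers.
config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)
```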
peft/src/peft/tuners/lora/bnb.py/0
{ "file_path": "peft/src/peft/tuners/lora/bnb.py", "repo_id": "peft", "token_count": 11452 }
172
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch # needed for prefix-tuning of bloom model def bloom_model_postprocess_past_key_value(past_key_values): past_key_values = torch.cat(past_key_values) total_layers, batch_size, num_attention_heads, num_virtual_tokens, head_dim = past_key_values.shape keys = past_key_values[: total_layers // 2] keys = keys.transpose(2, 3).reshape( total_layers // 2, batch_size * num_attention_heads, head_dim, num_virtual_tokens ) values = past_key_values[total_layers // 2 :] values = values.reshape(total_layers // 2, batch_size * num_attention_heads, num_virtual_tokens, head_dim) return tuple(zip(keys, values)) # needed for prefix-tuning of StarCoder models def starcoder_model_postprocess_past_key_value(past_key_values): result = [] for k in past_key_values: k = k[:, :, 0] k = k.permute([1, 2, 0, 3]) k = k.reshape(*k.shape[:-2], -1) result.append(k) return tuple(result) TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING = { "bloom": bloom_model_postprocess_past_key_value, "gpt_bigcode": starcoder_model_postprocess_past_key_value, } TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING = { "t5": ["q", "v"], "mt5": ["q", "v"], "bart": ["q_proj", "v_proj"], "gpt2": ["c_attn"], "bloom": ["query_key_value"], "blip-2": ["q", "v", "q_proj", "v_proj"], "opt": ["q_proj", "v_proj"], "gptj": ["q_proj", "v_proj"], "gpt_neox": ["query_key_value"], "gpt_neo": ["q_proj", "v_proj"], "bert": ["query", "value"], "roberta": ["query", "value"], "xlm-roberta": ["query", "value"], "electra": ["query", "value"], "deberta-v2": ["query_proj", "value_proj"], "deberta": ["in_proj"], "layoutlm": ["query", "value"], "llama": ["q_proj", "v_proj"], "chatglm": ["query_key_value"], "gpt_bigcode": ["c_attn"], "mpt": ["Wqkv"], "RefinedWebModel": ["query_key_value"], "RefinedWeb": ["query_key_value"], "falcon": ["query_key_value"], "btlm": ["c_proj", "c_attn"], "codegen": ["qkv_proj"], "mistral": ["q_proj", "v_proj"], "mixtral": ["q_proj", "v_proj"], "stablelm": ["q_proj", "v_proj"], "phi": ["q_proj", "v_proj", "fc1", "fc2"], "gemma": ["q_proj", "v_proj"], } TRANSFORMERS_MODELS_TO_IA3_TARGET_MODULES_MAPPING = { "t5": ["k", "v", "wo"], "mt5": ["k", "v", "wi_1"], "gpt2": ["c_attn", "mlp.c_proj"], "bloom": ["query_key_value", "mlp.dense_4h_to_h"], "roberta": ["key", "value", "output.dense"], "opt": ["q_proj", "k_proj", "fc2"], "gptj": ["q_proj", "v_proj", "fc_out"], "gpt_neox": ["query_key_value", "dense_4h_to_h"], "gpt_neo": ["q_proj", "v_proj", "c_proj"], "bart": ["q_proj", "v_proj", "fc2"], "gpt_bigcode": ["c_attn", "mlp.c_proj"], "llama": ["k_proj", "v_proj", "down_proj"], "mistral": ["k_proj", "v_proj", "down_proj"], "mixtral": ["k_proj", "v_proj", "w2"], "bert": ["key", "value", "output.dense"], "deberta-v2": ["key_proj", "value_proj", "output.dense"], "deberta": ["in_proj", "output.dense"], "RefinedWebModel": ["query_key_value", "dense_4h_to_h"], "RefinedWeb": ["query_key_value", "dense_4h_to_h"], "falcon": ["query_key_value", 
"dense_4h_to_h"], "phi": ["q_proj", "v_proj", "fc2"], "gemma": ["q_proj", "v_proj", "down_proj"], } TRANSFORMERS_MODELS_TO_IA3_FEEDFORWARD_MODULES_MAPPING = { "t5": ["wo"], "mt5": [], "gpt2": ["mlp.c_proj"], "bloom": ["mlp.dense_4h_to_h"], "roberta": ["output.dense"], "opt": ["fc2"], "gptj": ["fc_out"], "gpt_neox": ["dense_4h_to_h"], "gpt_neo": ["c_proj"], "bart": ["fc2"], "gpt_bigcode": ["mlp.c_proj"], "llama": ["down_proj"], "mistral": ["down_proj"], "mixtral": ["w2"], "bert": ["output.dense"], "deberta-v2": ["output.dense"], "deberta": ["output.dense"], "RefinedWeb": ["dense_4h_to_h"], "RefinedWebModel": ["dense_4h_to_h"], "falcon": ["dense_4h_to_h"], "phi": ["fc2"], "gemma": ["down_proj"], } TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING = { "t5": ["q", "k", "v", "o", "wi", "wo"], "mt5": ["q", "k", "v", "o", "wi_0", "wi_1", "wo"], "bart": ["q_proj", "k_proj", "v_proj", "out_proj", "fc1", "fc2"], "gpt2": ["c_attn"], "bloom": ["query_key_value"], "opt": ["q_proj", "k_proj", "v_proj", "out_proj", "fc1", "fc2"], "gptj": ["q_proj", "v_proj"], "gpt_neox": ["query_key_value"], "gpt_neo": ["q_proj", "v_proj"], "llama": ["q_proj", "v_proj"], "bert": ["query", "value"], "roberta": ["query", "key", "value", "dense"], # "xlm-roberta": ["query", "value"], # "electra": ["query", "value"], "deberta-v2": ["query_proj", "key_proj", "value_proj", "dense"], "gpt_bigcode": ["c_attn"], "deberta": ["in_proj"], # "layoutlm": ["query", "value"], } WEIGHTS_NAME = "adapter_model.bin" SAFETENSORS_WEIGHTS_NAME = "adapter_model.safetensors" CONFIG_NAME = "adapter_config.json" EMBEDDING_LAYER_NAMES = ["embed_tokens", "lm_head"] INCLUDE_LINEAR_LAYERS_SHORTHAND = "all-linear" TOKENIZER_CONFIG_NAME = "tokenizer_config.json"
peft/src/peft/utils/constants.py/0
{ "file_path": "peft/src/peft/utils/constants.py", "repo_id": "peft", "token_count": 2721 }
173
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import tempfile import unittest import torch from parameterized import parameterized from transformers import AutoModelForSeq2SeqLM, AutoModelForTokenClassification from peft import LoraConfig, TaskType, get_peft_model from .testing_common import PeftCommonTester, PeftTestConfigManager PEFT_ENCODER_DECODER_MODELS_TO_TEST = [ "ybelkada/tiny-random-T5ForConditionalGeneration-calibrated", "hf-internal-testing/tiny-random-BartForConditionalGeneration", ] FULL_GRID = {"model_ids": PEFT_ENCODER_DECODER_MODELS_TO_TEST, "task_type": "SEQ_2_SEQ_LM"} class PeftEncoderDecoderModelTester(unittest.TestCase, PeftCommonTester): r""" Test if the PeftModel behaves as expected. This includes: - test if the model has the expected methods We use parametrized.expand for debugging purposes to test each model individually. """ transformers_class = AutoModelForSeq2SeqLM def prepare_inputs_for_testing(self): input_ids = torch.tensor([[1, 1, 1], [1, 2, 1]]).to(self.torch_device) decoder_input_ids = torch.tensor([[1, 1, 1], [1, 2, 1]]).to(self.torch_device) attention_mask = torch.tensor([[1, 1, 1], [1, 0, 1]]).to(self.torch_device) input_dict = { "input_ids": input_ids, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, } return input_dict @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_attributes_parametrized(self, test_name, model_id, config_cls, config_kwargs): self._test_model_attr(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_adapter_name(self, test_name, model_id, config_cls, config_kwargs): self._test_adapter_name(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_prepare_for_training_parametrized(self, test_name, model_id, config_cls, config_kwargs): self._test_prepare_for_training(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_save_pretrained(self, test_name, model_id, config_cls, config_kwargs): self._test_save_pretrained(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_save_pretrained_pickle(self, test_name, model_id, config_cls, config_kwargs): self._test_save_pretrained(model_id, config_cls, config_kwargs, safe_serialization=False) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_save_pretrained_selected_adapters(self, test_name, model_id, config_cls, config_kwargs): self._test_save_pretrained_selected_adapters(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_save_pretrained_selected_adapters_pickle(self, test_name, model_id, config_cls, config_kwargs): self._test_save_pretrained_selected_adapters(model_id, config_cls, config_kwargs, safe_serialization=False) 
@parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_from_pretrained_config_construction(self, test_name, model_id, config_cls, config_kwargs): self._test_from_pretrained_config_construction(model_id, config_cls, config_kwargs) @parameterized.expand( PeftTestConfigManager.get_grid_parameters( { "model_ids": PEFT_ENCODER_DECODER_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, "ia3_kwargs": {"init_ia3_weights": [False]}, "task_type": "SEQ_2_SEQ_LM", }, ) ) def test_merge_layers(self, test_name, model_id, config_cls, config_kwargs): self._test_merge_layers(model_id, config_cls, config_kwargs) @parameterized.expand( PeftTestConfigManager.get_grid_parameters( { "model_ids": PEFT_ENCODER_DECODER_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, "task_type": "SEQ_2_SEQ_LM", }, ) ) def test_mixed_adapter_batches(self, test_name, model_id, config_cls, config_kwargs): self._test_mixed_adapter_batches(model_id, config_cls, config_kwargs) # skip non lora models - generate does not work for prefix tuning, prompt tuning @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_generate(self, test_name, model_id, config_cls, config_kwargs): self._test_generate(model_id, config_cls, config_kwargs) # skip non lora models - generate does not work for prefix tuning, prompt tuning @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_generate_pos_args(self, test_name, model_id, config_cls, config_kwargs): # positional arguments are not supported for PeftModelForSeq2SeqLM self._test_generate_pos_args(model_id, config_cls, config_kwargs, raises_err=True) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_generate_half_prec(self, test_name, model_id, config_cls, config_kwargs): self._test_generate_half_prec(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_prefix_tuning_half_prec_conversion(self, test_name, model_id, config_cls, config_kwargs): self._test_prefix_tuning_half_prec_conversion(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_training_encoder_decoders(self, test_name, model_id, config_cls, config_kwargs): self._test_training(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_training_encoder_decoders_layer_indexing(self, test_name, model_id, config_cls, config_kwargs): self._test_training_layer_indexing(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_training_encoder_decoders_gradient_checkpointing(self, test_name, model_id, config_cls, config_kwargs): self._test_training_gradient_checkpointing(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_inference_safetensors(self, test_name, model_id, config_cls, config_kwargs): self._test_inference_safetensors(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_peft_model_device_map(self, test_name, model_id, config_cls, config_kwargs): self._test_peft_model_device_map(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_delete_adapter(self, test_name, model_id, config_cls, config_kwargs): 
self._test_delete_adapter(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_delete_inactive_adapter(self, test_name, model_id, config_cls, config_kwargs): self._test_delete_inactive_adapter(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_adding_multiple_adapters_with_bias_raises(self, test_name, model_id, config_cls, config_kwargs): self._test_adding_multiple_adapters_with_bias_raises(model_id, config_cls, config_kwargs) @parameterized.expand( PeftTestConfigManager.get_grid_parameters( { "model_ids": PEFT_ENCODER_DECODER_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, "adalora_kwargs": {"init_lora_weights": [False]}, "ia3_kwargs": {"init_ia3_weights": [False]}, "task_type": "SEQ_2_SEQ_LM", }, ) ) def test_unload_adapter(self, test_name, model_id, config_cls, config_kwargs): self._test_unload_adapter(model_id, config_cls, config_kwargs) @parameterized.expand( PeftTestConfigManager.get_grid_parameters( { "model_ids": PEFT_ENCODER_DECODER_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, "task_type": "SEQ_2_SEQ_LM", }, ) ) def test_weighted_combination_of_adapters(self, test_name, model_id, config_cls, config_kwargs): self._test_weighted_combination_of_adapters(model_id, config_cls, config_kwargs) @parameterized.expand(PeftTestConfigManager.get_grid_parameters(FULL_GRID)) def test_training_prompt_learning_tasks(self, test_name, model_id, config_cls, config_kwargs): self._test_training_prompt_learning_tasks(model_id, config_cls, config_kwargs) @parameterized.expand( PeftTestConfigManager.get_grid_parameters( { "model_ids": PEFT_ENCODER_DECODER_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, "adalora_kwargs": {"init_lora_weights": [False]}, "ia3_kwargs": {"init_ia3_weights": [False]}, "task_type": "SEQ_2_SEQ_LM", }, ) ) def test_disable_adapter(self, test_name, model_id, config_cls, config_kwargs): self._test_disable_adapter(model_id, config_cls, config_kwargs) class PeftEncoderDecoderCustomModelTester(unittest.TestCase): """ A custom class to write any custom test related with Enc-Dec models """ def test_save_shared_tensors(self): model_id = "hf-internal-testing/tiny-random-RobertaModel" peft_config = LoraConfig( task_type=TaskType.TOKEN_CLS, inference_mode=False, r=16, lora_alpha=16, lora_dropout=0.1, bias="all" ) model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=11) model = get_peft_model(model, peft_config) with tempfile.TemporaryDirectory() as tmp_dir: # This should work fine model.save_pretrained(tmp_dir, safe_serialization=True)
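As a rough illustration of what the SEQ_2_SEQ_LM grid above exercises, here is a standalone sketch (hyperparameters are arbitrary) that wraps one of the tiny test checkpoints with LoRA and round-trips it through `save_pretrained`:

```python
import tempfile

from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

model_id = "hf-internal-testing/tiny-random-BartForConditionalGeneration"
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# target_modules is left unset on purpose: the default mapping for "bart" is used.
peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(model, peft_config)

with tempfile.TemporaryDirectory() as tmp_dir:
    model.save_pretrained(tmp_dir, safe_serialization=True)
```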
peft/tests/test_encoder_decoder_models.py/0
{ "file_path": "peft/tests/test_encoder_decoder_models.py", "repo_id": "peft", "token_count": 4631 }
174
# ECA-ResNet An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) that reduces model complexity without dimensionality reduction. {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{wang2020ecanet, title={ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks}, author={Qilong Wang and Banggu Wu and Pengfei Zhu and Peihua Li and Wangmeng Zuo and Qinghua Hu}, year={2020}, eprint={1910.03151}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: ECAResNet Paper: Title: 'ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks' URL: https://paperswithcode.com/paper/eca-net-efficient-channel-attention-for-deep Models: - Name: ecaresnet101d In Collection: ECAResNet Metadata: FLOPs: 10377193728 Parameters: 44570000 File Size: 178815067 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x RTX 2080Ti GPUs ID: ecaresnet101d LR: 0.1 Epochs: 100 Layers: 101 Crop Pct: '0.875' Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1087 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet101D_281c5844.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 82.18% Top 5 Accuracy: 96.06% - Name: ecaresnet101d_pruned In Collection: ECAResNet Metadata: FLOPs: 4463972081 Parameters: 24880000 File Size: 99852736 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: ecaresnet101d_pruned Layers: 101 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1097 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45610/outputs/ECAResNet101D_P_75a3370e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.82% Top 5 Accuracy: 95.64% - Name: ecaresnet50d In Collection: ECAResNet Metadata: FLOPs: 5591090432 Parameters: 25580000 File Size: 102579290 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - 
Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x RTX 2080Ti GPUs ID: ecaresnet50d LR: 0.1 Epochs: 100 Layers: 50 Crop Pct: '0.875' Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1045 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet50D_833caf58.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.61% Top 5 Accuracy: 95.31% - Name: ecaresnet50d_pruned In Collection: ECAResNet Metadata: FLOPs: 3250730657 Parameters: 19940000 File Size: 79990436 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: ecaresnet50d_pruned Layers: 50 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1055 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45899/outputs/ECAResNet50D_P_9c67f710.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.71% Top 5 Accuracy: 94.88% - Name: ecaresnetlight In Collection: ECAResNet Metadata: FLOPs: 5276118784 Parameters: 30160000 File Size: 120956612 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: ecaresnetlight Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1077 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNetLight_4f34b35b.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.46% Top 5 Accuracy: 95.25% -->
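Since the `code_snippets.md` include is not expanded in this template, here is a minimal, self-contained sketch; the `ecaresnet50d` name and 224x224 input follow the model summary above.

```python
import timm
import torch

model = timm.create_model('ecaresnet50d', pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # dummy batch at the listed image size
print(logits.shape)  # torch.Size([1, 1000])
```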
pytorch-image-models/docs/models/.templates/models/ecaresnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/ecaresnet.md", "repo_id": "pytorch-image-models", "token_count": 2832 }
175
# Inception v4 **Inception-v4** is a convolutional neural network architecture that builds on previous iterations of the Inception family by simplifying the architecture and using more inception modules than [Inception-v3](https://paperswithcode.com/method/inception-v3). {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{szegedy2016inceptionv4, title={Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning}, author={Christian Szegedy and Sergey Ioffe and Vincent Vanhoucke and Alex Alemi}, year={2016}, eprint={1602.07261}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: Inception v4 Paper: Title: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning URL: https://paperswithcode.com/paper/inception-v4-inception-resnet-and-the-impact Models: - Name: inception_v4 In Collection: Inception v4 Metadata: FLOPs: 15806527936 Parameters: 42680000 File Size: 171082495 Architecture: - Average Pooling - Dropout - Inception-A - Inception-B - Inception-C - Reduction-A - Reduction-B - Softmax Tasks: - Image Classification Training Techniques: - Label Smoothing - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 20x NVIDIA Kepler GPUs ID: inception_v4 LR: 0.045 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Image Size: '299' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v4.py#L313 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/inceptionv4-8e4777a0.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 1.01% Top 5 Accuracy: 16.85% -->
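As with the other template pages, the full usage snippets come from the `code_snippets.md` include; a minimal sketch for this model (using timm's data-config helper, with the 299x299 input size expected from the card's metadata) could look like:

```python
import timm
from timm.data import resolve_data_config

model = timm.create_model('inception_v4', pretrained=True)
config = resolve_data_config({}, model=model)
print(config['input_size'], config['interpolation'])  # expected: (3, 299, 299) bicubic
```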
pytorch-image-models/docs/models/.templates/models/inception-v4.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/inception-v4.md", "repo_id": "pytorch-image-models", "token_count": 816 }
176
# ResNet-D **ResNet-D** is a modification on the [ResNet](https://paperswithcode.com/method/resnet) architecture that utilises an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the [1×1 convolution](https://paperswithcode.com/method/1x1-convolution) for the downsampling block ignores 3/4 of input feature maps, so this is modified so no information will be ignored {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{he2018bag, title={Bag of Tricks for Image Classification with Convolutional Neural Networks}, author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li}, year={2018}, eprint={1812.01187}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: ResNet-D Paper: Title: Bag of Tricks for Image Classification with Convolutional Neural Networks URL: https://paperswithcode.com/paper/bag-of-tricks-for-image-classification-with Models: - Name: resnet101d In Collection: ResNet-D Metadata: FLOPs: 13805639680 Parameters: 44570000 File Size: 178791263 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet101d Crop Pct: '0.94' Image Size: '256' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L716 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet101d_ra2-2803ffab.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 82.31% Top 5 Accuracy: 96.06% - Name: resnet152d In Collection: ResNet-D Metadata: FLOPs: 20155275264 Parameters: 60210000 File Size: 241596837 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet152d Crop Pct: '0.94' Image Size: '256' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L724 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet152d_ra2-5cac0439.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 83.13% Top 5 Accuracy: 96.35% - Name: resnet18d In Collection: ResNet-D Metadata: FLOPs: 2645205760 Parameters: 11710000 File Size: 46893231 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet18d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L649 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet18d_ra2-48a79e06.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 72.27% Top 5 Accuracy: 
90.69% - Name: resnet200d In Collection: ResNet-D Metadata: FLOPs: 26034378752 Parameters: 64690000 File Size: 259662933 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet200d Crop Pct: '0.94' Image Size: '256' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L749 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet200d_ra2-bdba9bf9.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 83.24% Top 5 Accuracy: 96.49% - Name: resnet26d In Collection: ResNet-D Metadata: FLOPs: 3335276032 Parameters: 16010000 File Size: 64209122 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet26d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L683 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26d-69e92c46.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.69% Top 5 Accuracy: 93.15% - Name: resnet34d In Collection: ResNet-D Metadata: FLOPs: 5026601728 Parameters: 21820000 File Size: 87369807 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet34d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L666 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34d_ra2-f8dcfcaf.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.11% Top 5 Accuracy: 93.38% - Name: resnet50d In Collection: ResNet-D Metadata: FLOPs: 5591002624 Parameters: 25580000 File Size: 102567109 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet50d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L699 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50d_ra2-464e36ba.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.55% Top 5 Accuracy: 95.16% -->
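A minimal fine-tuning sketch (the class count, learning rate, and choice of `resnet50d` are placeholders): passing `num_classes` to `create_model` swaps in a fresh classifier head.

```python
import timm
import torch

NUM_FINETUNE_CLASSES = 10  # hypothetical number of classes in your dataset
model = timm.create_model('resnet50d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# ...plug the model and optimizer into your own training loop or timm's train.py...
```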
pytorch-image-models/docs/models/.templates/models/resnet-d.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/resnet-d.md", "repo_id": "pytorch-image-models", "token_count": 3126 }
177
# (Tensorflow) EfficientNet **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use $2^N$ times more computational resources, then we can simply increase the network depth by $\alpha ^ N$, width by $\beta ^ N$, and image size by $\gamma ^ N$, where $\alpha, \beta, \gamma$ are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient $\phi$ to uniformly scale network width, depth, and resolution in a principled way. The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image. The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block). The weights from this model were ported from [Tensorflow/TPU](https://github.com/tensorflow/tpu). {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{tan2020efficientnet, title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks}, author={Mingxing Tan and Quoc V. 
Le}, year={2020}, eprint={1905.11946}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- Type: model-index Collections: - Name: TF EfficientNet Paper: Title: 'EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks' URL: https://paperswithcode.com/paper/efficientnet-rethinking-model-scaling-for Models: - Name: tf_efficientnet_b0 In Collection: TF EfficientNet Metadata: FLOPs: 488688572 Parameters: 5290000 File Size: 21383997 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet Training Resources: TPUv3 Cloud TPU ID: tf_efficientnet_b0 LR: 0.256 Epochs: 350 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 2048 Image Size: '224' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1241 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b0_aa-827b6e33.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.85% Top 5 Accuracy: 93.23% - Name: tf_efficientnet_b1 In Collection: TF EfficientNet Metadata: FLOPs: 883633200 Parameters: 7790000 File Size: 31512534 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_b1 LR: 0.256 Epochs: 350 Crop Pct: '0.882' Momentum: 0.9 Batch Size: 2048 Image Size: '240' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1251 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b1_aa-ea7a6ee0.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.84% Top 5 Accuracy: 94.2% - Name: tf_efficientnet_b2 In Collection: TF EfficientNet Metadata: FLOPs: 1234321170 Parameters: 9110000 File Size: 36797929 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_b2 LR: 0.256 Epochs: 350 Crop Pct: '0.89' Momentum: 0.9 Batch Size: 2048 Image Size: '260' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1261 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b2_aa-60c94f97.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.07% Top 5 Accuracy: 94.9% - Name: tf_efficientnet_b3 In 
Collection: TF EfficientNet Metadata: FLOPs: 2275247568 Parameters: 12230000 File Size: 49381362 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_b3 LR: 0.256 Epochs: 350 Crop Pct: '0.904' Momentum: 0.9 Batch Size: 2048 Image Size: '300' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1271 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b3_aa-84b4657e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.65% Top 5 Accuracy: 95.72% - Name: tf_efficientnet_b4 In Collection: TF EfficientNet Metadata: FLOPs: 5749638672 Parameters: 19340000 File Size: 77989689 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet Training Resources: TPUv3 Cloud TPU ID: tf_efficientnet_b4 LR: 0.256 Epochs: 350 Crop Pct: '0.922' Momentum: 0.9 Batch Size: 2048 Image Size: '380' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1281 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b4_aa-818f208c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 83.03% Top 5 Accuracy: 96.3% - Name: tf_efficientnet_b5 In Collection: TF EfficientNet Metadata: FLOPs: 13176501888 Parameters: 30390000 File Size: 122403150 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_b5 LR: 0.256 Epochs: 350 Crop Pct: '0.934' Momentum: 0.9 Batch Size: 2048 Image Size: '456' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1291 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b5_ra-9a3e5369.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 83.81% Top 5 Accuracy: 96.75% - Name: tf_efficientnet_b6 In Collection: TF EfficientNet Metadata: FLOPs: 24180518488 Parameters: 43040000 File Size: 173232007 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label 
Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_b6 LR: 0.256 Epochs: 350 Crop Pct: '0.942' Momentum: 0.9 Batch Size: 2048 Image Size: '528' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1301 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b6_aa-80ba17e4.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 84.11% Top 5 Accuracy: 96.89% - Name: tf_efficientnet_b7 In Collection: TF EfficientNet Metadata: FLOPs: 48205304880 Parameters: 66349999 File Size: 266850607 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_b7 LR: 0.256 Epochs: 350 Crop Pct: '0.949' Momentum: 0.9 Batch Size: 2048 Image Size: '600' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1312 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_ra-6c08e654.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 84.93% Top 5 Accuracy: 97.2% - Name: tf_efficientnet_b8 In Collection: TF EfficientNet Metadata: FLOPs: 80962956270 Parameters: 87410000 File Size: 351379853 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_b8 LR: 0.256 Epochs: 350 Crop Pct: '0.954' Momentum: 0.9 Batch Size: 2048 Image Size: '672' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1323 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b8_ra-572d5dd9.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 85.35% Top 5 Accuracy: 97.39% - Name: tf_efficientnet_el In Collection: TF EfficientNet Metadata: FLOPs: 9356616096 Parameters: 10590000 File Size: 42800271 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Data: - ImageNet ID: tf_efficientnet_el Crop Pct: '0.904' Image Size: '300' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1551 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_el-5143854e.pth Results: - Task: Image Classification Dataset: ImageNet 
Metrics: Top 1 Accuracy: 80.45% Top 5 Accuracy: 95.17% - Name: tf_efficientnet_em In Collection: TF EfficientNet Metadata: FLOPs: 3636607040 Parameters: 6900000 File Size: 27933644 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Data: - ImageNet ID: tf_efficientnet_em Crop Pct: '0.882' Image Size: '240' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1541 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_em-e78cfe58.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.71% Top 5 Accuracy: 94.33% - Name: tf_efficientnet_es In Collection: TF EfficientNet Metadata: FLOPs: 2057577472 Parameters: 5440000 File Size: 22008479 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Data: - ImageNet ID: tf_efficientnet_es Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1531 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_es-ca1afbfe.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.28% Top 5 Accuracy: 93.6% - Name: tf_efficientnet_l2_ns_475 In Collection: TF EfficientNet Metadata: FLOPs: 217795669644 Parameters: 480310000 File Size: 1925950424 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - FixRes - Label Smoothing - Noisy Student - RMSProp - RandAugment - Weight Decay Training Data: - ImageNet - JFT-300M Training Resources: TPUv3 Cloud TPU ID: tf_efficientnet_l2_ns_475 LR: 0.128 Epochs: 350 Dropout: 0.5 Crop Pct: '0.936' Momentum: 0.9 Batch Size: 2048 Image Size: '475' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Stochastic Depth Survival: 0.8 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1509 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_l2_ns_475-bebbd00a.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 88.24% Top 5 Accuracy: 98.55% -->
pytorch-image-models/docs/models/.templates/models/tf-efficientnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/tf-efficientnet.md", "repo_id": "pytorch-image-models", "token_count": 7172 }
178
# Dual Path Network (DPN) A **Dual Path Network (DPN)** is a convolutional neural network which presents a new topology of connection paths internally. The intuition is that [ResNets](https://paperswithcode.com/method/resnet) enable feature re-usage while DenseNet enables new feature exploration, and both are important for learning good representations. To enjoy the benefits from both path topologies, Dual Path Networks share common features while maintaining the flexibility to explore new features through dual path architectures. The principal building block is a [DPN Block](https://paperswithcode.com/method/dpn-block). ## How do I use this model on an image? To load a pretrained model: ```python import timm model = timm.create_model('dpn107', pretrained=True) model.eval() ``` To load and preprocess the image: ```python import urllib from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform config = resolve_data_config({}, model=model) transform = create_transform(**config) url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") urllib.request.urlretrieve(url, filename) img = Image.open(filename).convert('RGB') tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```python import torch with torch.no_grad(): out = model(tensor) probabilities = torch.nn.functional.softmax(out[0], dim=0) print(probabilities.shape) # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```python # Get imagenet class mappings url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") urllib.request.urlretrieve(url, filename) with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] # Print top categories per image top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) # prints class names and probabilities like: # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `dpn107`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```python model = timm.create_model('dpn107', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. 
## Citation ```BibTeX @misc{chen2017dual, title={Dual Path Networks}, author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng}, year={2017}, eprint={1707.01629}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: DPN Paper: Title: Dual Path Networks URL: https://paperswithcode.com/paper/dual-path-networks Models: - Name: dpn107 In Collection: DPN Metadata: FLOPs: 23524280296 Parameters: 86920000 File Size: 348612331 Architecture: - Batch Normalization - Convolution - DPN Block - Dense Connections - Global Average Pooling - Max Pooling - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 40x K80 GPUs ID: dpn107 LR: 0.316 Layers: 107 Crop Pct: '0.875' Batch Size: 1280 Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dpn.py#L310 Weights: https://github.com/rwightman/pytorch-dpn-pretrained/releases/download/v0.1/dpn107_extra-1ac7121e2.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.16% Top 5 Accuracy: 94.91% - Name: dpn131 In Collection: DPN Metadata: FLOPs: 20586274792 Parameters: 79250000 File Size: 318016207 Architecture: - Batch Normalization - Convolution - DPN Block - Dense Connections - Global Average Pooling - Max Pooling - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 40x K80 GPUs ID: dpn131 LR: 0.316 Layers: 131 Crop Pct: '0.875' Batch Size: 960 Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dpn.py#L302 Weights: https://github.com/rwightman/pytorch-dpn-pretrained/releases/download/v0.1/dpn131-71dfe43e0.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.83% Top 5 Accuracy: 94.71% - Name: dpn68 In Collection: DPN Metadata: FLOPs: 2990567880 Parameters: 12610000 File Size: 50761994 Architecture: - Batch Normalization - Convolution - DPN Block - Dense Connections - Global Average Pooling - Max Pooling - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 40x K80 GPUs ID: dpn68 LR: 0.316 Layers: 68 Crop Pct: '0.875' Batch Size: 1280 Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dpn.py#L270 Weights: https://github.com/rwightman/pytorch-dpn-pretrained/releases/download/v0.1/dpn68-66bebafa7.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.31% Top 5 Accuracy: 92.97% - Name: dpn68b In Collection: DPN Metadata: FLOPs: 2990567880 Parameters: 12610000 File Size: 50781025 Architecture: - Batch Normalization - Convolution - DPN Block - Dense Connections - Global Average Pooling - Max Pooling - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 40x K80 GPUs ID: dpn68b LR: 0.316 Layers: 68 Crop Pct: '0.875' Batch Size: 1280 Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dpn.py#L278 Weights: 
https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/dpn68b_ra-a31ca160.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.21% Top 5 Accuracy: 94.42% - Name: dpn92 In Collection: DPN Metadata: FLOPs: 8357659624 Parameters: 37670000 File Size: 151248422 Architecture: - Batch Normalization - Convolution - DPN Block - Dense Connections - Global Average Pooling - Max Pooling - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 40x K80 GPUs ID: dpn92 LR: 0.316 Layers: 92 Crop Pct: '0.875' Batch Size: 1280 Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dpn.py#L286 Weights: https://github.com/rwightman/pytorch-dpn-pretrained/releases/download/v0.1/dpn92_extra-b040e4a9b.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.99% Top 5 Accuracy: 94.84% - Name: dpn98 In Collection: DPN Metadata: FLOPs: 15003675112 Parameters: 61570000 File Size: 247021307 Architecture: - Batch Normalization - Convolution - DPN Block - Dense Connections - Global Average Pooling - Max Pooling - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 40x K80 GPUs ID: dpn98 LR: 0.4 Layers: 98 Crop Pct: '0.875' Batch Size: 1280 Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dpn.py#L294 Weights: https://github.com/rwightman/pytorch-dpn-pretrained/releases/download/v0.1/dpn98-5b90dec4d.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.65% Top 5 Accuracy: 94.61% -->
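The dual-path topology described above can be illustrated with a minimal PyTorch sketch. This is an illustration of the idea only, not timm's `dpn.py` implementation, and the channel sizes (`res_chs`, `dense_chs`) are arbitrary choices for the example:

```python
import torch
import torch.nn as nn

class ToyDualPathBlock(nn.Module):
    """Illustrative only: residual path (feature re-use) + dense path (new features)."""
    def __init__(self, in_chs, res_chs, dense_chs):
        super().__init__()
        # one shared conv produces features for both paths
        self.conv = nn.Sequential(
            nn.Conv2d(in_chs, res_chs + dense_chs, 3, padding=1, bias=False),
            nn.BatchNorm2d(res_chs + dense_chs),
            nn.ReLU(inplace=True),
        )
        self.res_chs = res_chs

    def forward(self, x_res, x_dense):
        out = self.conv(torch.cat([x_res, x_dense], dim=1))
        res_out, dense_out = out[:, :self.res_chs], out[:, self.res_chs:]
        x_res = x_res + res_out                       # ResNet-style addition (re-use)
        x_dense = torch.cat([x_dense, dense_out], 1)  # DenseNet-style concat (exploration)
        return x_res, x_dense

# tiny smoke test
x_res, x_dense = torch.randn(1, 64, 56, 56), torch.randn(1, 32, 56, 56)
block = ToyDualPathBlock(in_chs=96, res_chs=64, dense_chs=16)
x_res, x_dense = block(x_res, x_dense)
print(x_res.shape, x_dense.shape)  # torch.Size([1, 64, 56, 56]) torch.Size([1, 48, 56, 56])
```

Stacking such blocks grows the dense path while the residual path width stays fixed, which is the combination of feature re-use and new-feature exploration the description above refers to.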
pytorch-image-models/docs/models/dpn.md/0
{ "file_path": "pytorch-image-models/docs/models/dpn.md", "repo_id": "pytorch-image-models", "token_count": 3689 }
179
# Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the side head). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module). ## How do I use this model on an image? To load a pretrained model: ```python import timm model = timm.create_model('inception_v3', pretrained=True) model.eval() ``` To load and preprocess the image: ```python import urllib from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform config = resolve_data_config({}, model=model) transform = create_transform(**config) url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") urllib.request.urlretrieve(url, filename) img = Image.open(filename).convert('RGB') tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```python import torch with torch.no_grad(): out = model(tensor) probabilities = torch.nn.functional.softmax(out[0], dim=0) print(probabilities.shape) # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```python # Get imagenet class mappings url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") urllib.request.urlretrieve(url, filename) with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] # Print top categories per image top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) # prints class names and probabilities like: # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `inception_v3`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```python model = timm.create_model('inception_v3', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. 
## Citation ```BibTeX @article{DBLP:journals/corr/SzegedyVISW15, author = {Christian Szegedy and Vincent Vanhoucke and Sergey Ioffe and Jonathon Shlens and Zbigniew Wojna}, title = {Rethinking the Inception Architecture for Computer Vision}, journal = {CoRR}, volume = {abs/1512.00567}, year = {2015}, url = {http://arxiv.org/abs/1512.00567}, archivePrefix = {arXiv}, eprint = {1512.00567}, timestamp = {Mon, 13 Aug 2018 16:49:07 +0200}, biburl = {https://dblp.org/rec/journals/corr/SzegedyVISW15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: Inception v3 Paper: Title: Rethinking the Inception Architecture for Computer Vision URL: https://paperswithcode.com/paper/rethinking-the-inception-architecture-for Models: - Name: inception_v3 In Collection: Inception v3 Metadata: FLOPs: 7352418880 Parameters: 23830000 File Size: 108857766 Architecture: - 1x1 Convolution - Auxiliary Classifier - Average Pooling - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inception-v3 Module - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Gradient Clipping - Label Smoothing - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 50x NVIDIA Kepler GPUs ID: inception_v3 LR: 0.045 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Image Size: '299' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v3.py#L442 Weights: https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.46% Top 5 Accuracy: 93.48% -->
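The factorized 7 x 7 convolutions mentioned above replace one large square kernel with a pair of asymmetric 1 x 7 and 7 x 1 convolutions covering the same receptive field. A small sketch of the parameter saving follows; this is a generic illustration rather than code from timm's `inception_v3.py`, and the channel count is arbitrary:

```python
import torch.nn as nn

chs = 192
# a single 7x7 convolution: 7 * 7 * chs * chs weights
full = nn.Conv2d(chs, chs, kernel_size=7, padding=3, bias=False)

# factorized into 1x7 followed by 7x1: (7 + 7) * chs * chs weights
factorized = nn.Sequential(
    nn.Conv2d(chs, chs, kernel_size=(1, 7), padding=(0, 3), bias=False),
    nn.Conv2d(chs, chs, kernel_size=(7, 1), padding=(3, 0), bias=False),
)

n_full = sum(p.numel() for p in full.parameters())
n_fact = sum(p.numel() for p in factorized.parameters())
print(n_full, n_fact, n_fact / n_full)  # 1806336 516096 ~0.29
```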
pytorch-image-models/docs/models/inception-v3.md/0
{ "file_path": "pytorch-image-models/docs/models/inception-v3.md", "repo_id": "pytorch-image-models", "token_count": 1888 }
180
# ResNeSt A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet), which instead stacks [Split-Attention blocks](https://paperswithcode.com/method/split-attention). The cardinal group representations are then concatenated along the channel dimension: $V = \text{Concat}\{V^{1},V^{2},\cdots,V^{K}\}$. As in standard residual blocks, the final output $Y$ of our Split-Attention block is produced using a shortcut connection: $Y=V+X$, if the input and output feature maps share the same shape. For blocks with a stride, an appropriate transformation $\mathcal{T}$ is applied to the shortcut connection to align the output shapes: $Y=V+\mathcal{T}(X)$. For example, $\mathcal{T}$ can be strided convolution or combined convolution-with-pooling. ## How do I use this model on an image? To load a pretrained model: ```python import timm model = timm.create_model('resnest101e', pretrained=True) model.eval() ``` To load and preprocess the image: ```python import urllib from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform config = resolve_data_config({}, model=model) transform = create_transform(**config) url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") urllib.request.urlretrieve(url, filename) img = Image.open(filename).convert('RGB') tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```python import torch with torch.no_grad(): out = model(tensor) probabilities = torch.nn.functional.softmax(out[0], dim=0) print(probabilities.shape) # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```python # Get imagenet class mappings url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") urllib.request.urlretrieve(url, filename) with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] # Print top categories per image top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) # prints class names and probabilities like: # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `resnest101e`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```python model = timm.create_model('resnest101e', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. 
## Citation ```BibTeX @misc{zhang2020resnest, title={ResNeSt: Split-Attention Networks}, author={Hang Zhang and Chongruo Wu and Zhongyue Zhang and Yi Zhu and Haibin Lin and Zhi Zhang and Yue Sun and Tong He and Jonas Mueller and R. Manmatha and Mu Li and Alexander Smola}, year={2020}, eprint={2004.08955}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: ResNeSt Paper: Title: 'ResNeSt: Split-Attention Networks' URL: https://paperswithcode.com/paper/resnest-split-attention-networks Models: - Name: resnest101e In Collection: ResNeSt Metadata: FLOPs: 17423183648 Parameters: 48280000 File Size: 193782911 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Split Attention Tasks: - Image Classification Training Techniques: - AutoAugment - DropBlock - Label Smoothing - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 64x NVIDIA V100 GPUs ID: resnest101e LR: 0.1 Epochs: 270 Layers: 101 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 4096 Image Size: '256' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L182 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest101-22405ba7.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 82.88% Top 5 Accuracy: 96.31% - Name: resnest14d In Collection: ResNeSt Metadata: FLOPs: 3548594464 Parameters: 10610000 File Size: 42562639 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Split Attention Tasks: - Image Classification Training Techniques: - AutoAugment - DropBlock - Label Smoothing - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 64x NVIDIA V100 GPUs ID: resnest14d LR: 0.1 Epochs: 270 Layers: 14 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8192 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L148 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_resnest14-9c8fe254.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.51% Top 5 Accuracy: 92.52% - Name: resnest200e In Collection: ResNeSt Metadata: FLOPs: 45954387872 Parameters: 70200000 File Size: 193782911 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Split Attention Tasks: - Image Classification Training Techniques: - AutoAugment - DropBlock - Label Smoothing - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 64x NVIDIA V100 GPUs ID: resnest200e LR: 0.1 Epochs: 270 Layers: 200 Dropout: 0.2 Crop Pct: '0.909' Momentum: 0.9 Batch Size: 2048 Image Size: '320' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L194 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest101-22405ba7.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 83.85% Top 5 
Accuracy: 96.89% - Name: resnest269e In Collection: ResNeSt Metadata: FLOPs: 100830307104 Parameters: 110930000 File Size: 445402691 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Split Attention Tasks: - Image Classification Training Techniques: - AutoAugment - DropBlock - Label Smoothing - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 64x NVIDIA V100 GPUs ID: resnest269e LR: 0.1 Epochs: 270 Layers: 269 Dropout: 0.2 Crop Pct: '0.928' Momentum: 0.9 Batch Size: 2048 Image Size: '416' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L206 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest269-0cc87c48.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 84.53% Top 5 Accuracy: 96.99% - Name: resnest26d In Collection: ResNeSt Metadata: FLOPs: 4678918720 Parameters: 17070000 File Size: 68470242 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Split Attention Tasks: - Image Classification Training Techniques: - AutoAugment - DropBlock - Label Smoothing - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 64x NVIDIA V100 GPUs ID: resnest26d LR: 0.1 Epochs: 270 Layers: 26 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8192 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L159 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_resnest26-50eb607c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.48% Top 5 Accuracy: 94.3% - Name: resnest50d In Collection: ResNeSt Metadata: FLOPs: 6937106336 Parameters: 27480000 File Size: 110273258 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Split Attention Tasks: - Image Classification Training Techniques: - AutoAugment - DropBlock - Label Smoothing - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 64x NVIDIA V100 GPUs ID: resnest50d LR: 0.1 Epochs: 270 Layers: 50 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8192 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L170 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50-528c19ca.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.96% Top 5 Accuracy: 95.38% - Name: resnest50d_1s4x24d In Collection: ResNeSt Metadata: FLOPs: 5686764544 Parameters: 25680000 File Size: 103045531 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Split Attention Tasks: - Image Classification Training Techniques: - AutoAugment - DropBlock - Label Smoothing - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 64x NVIDIA V100 GPUs ID: resnest50d_1s4x24d LR: 
0.1 Epochs: 270 Layers: 50 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8192 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L229 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50_fast_1s4x24d-d4a4f76f.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.0% Top 5 Accuracy: 95.33% - Name: resnest50d_4s2x40d In Collection: ResNeSt Metadata: FLOPs: 5657064720 Parameters: 30420000 File Size: 122133282 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Split Attention Tasks: - Image Classification Training Techniques: - AutoAugment - DropBlock - Label Smoothing - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 64x NVIDIA V100 GPUs ID: resnest50d_4s2x40d LR: 0.1 Epochs: 270 Layers: 50 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8192 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L218 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50_fast_4s2x40d-41d14ed0.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.11% Top 5 Accuracy: 95.55% -->
pytorch-image-models/docs/models/resnest.md/0
{ "file_path": "pytorch-image-models/docs/models/resnest.md", "repo_id": "pytorch-image-models", "token_count": 5449 }
181
# (Tensorflow) EfficientNet Lite **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use $2^N$ times more computational resources, then we can simply increase the network depth by $\alpha ^ N$, width by $\beta ^ N$, and image size by $\gamma ^ N$, where $\alpha, \beta, \gamma$ are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient $\phi$ to uniformly scale network width, depth, and resolution in a principled way. The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image. The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2). EfficientNet-Lite makes EfficientNet more suitable for mobile devices by introducing [ReLU6](https://paperswithcode.com/method/relu6) activation functions and removing [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation). The weights from this model were ported from [Tensorflow/TPU](https://github.com/tensorflow/tpu). ## How do I use this model on an image? To load a pretrained model: ```python import timm model = timm.create_model('tf_efficientnet_lite0', pretrained=True) model.eval() ``` To load and preprocess the image: ```python import urllib from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform config = resolve_data_config({}, model=model) transform = create_transform(**config) url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") urllib.request.urlretrieve(url, filename) img = Image.open(filename).convert('RGB') tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```python import torch with torch.no_grad(): out = model(tensor) probabilities = torch.nn.functional.softmax(out[0], dim=0) print(probabilities.shape) # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```python # Get imagenet class mappings url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") urllib.request.urlretrieve(url, filename) with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] # Print top categories per image top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) # prints class names and probabilities like: # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `tf_efficientnet_lite0`. You can find the IDs in the model summaries at the top of this page. 
To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```python model = timm.create_model('tf_efficientnet_lite0', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{tan2020efficientnet, title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks}, author={Mingxing Tan and Quoc V. Le}, year={2020}, eprint={1905.11946}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- Type: model-index Collections: - Name: TF EfficientNet Lite Paper: Title: 'EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks' URL: https://paperswithcode.com/paper/efficientnet-rethinking-model-scaling-for Models: - Name: tf_efficientnet_lite0 In Collection: TF EfficientNet Lite Metadata: FLOPs: 488052032 Parameters: 4650000 File Size: 18820223 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - RELU6 Tasks: - Image Classification Training Data: - ImageNet ID: tf_efficientnet_lite0 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1596 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite0-0aa007d2.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 74.83% Top 5 Accuracy: 92.17% - Name: tf_efficientnet_lite1 In Collection: TF EfficientNet Lite Metadata: FLOPs: 773639520 Parameters: 5420000 File Size: 21939331 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - RELU6 Tasks: - Image Classification Training Data: - ImageNet ID: tf_efficientnet_lite1 Crop Pct: '0.882' Image Size: '240' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1607 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite1-bde8b488.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.67% Top 5 Accuracy: 93.24% - Name: tf_efficientnet_lite2 In Collection: TF EfficientNet Lite Metadata: FLOPs: 1068494432 Parameters: 6090000 File Size: 24658687 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - RELU6 Tasks: - Image Classification Training Data: - ImageNet ID: tf_efficientnet_lite2 Crop Pct: '0.89' Image Size: '260' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1618 Weights: 
https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite2-dcccb7df.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.48% Top 5 Accuracy: 93.75% - Name: tf_efficientnet_lite3 In Collection: TF EfficientNet Lite Metadata: FLOPs: 2011534304 Parameters: 8199999 File Size: 33161413 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - RELU6 Tasks: - Image Classification Training Data: - ImageNet ID: tf_efficientnet_lite3 Crop Pct: '0.904' Image Size: '300' Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1629 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite3-b733e338.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.83% Top 5 Accuracy: 94.91% - Name: tf_efficientnet_lite4 In Collection: TF EfficientNet Lite Metadata: FLOPs: 5164802912 Parameters: 13010000 File Size: 52558819 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - RELU6 Tasks: - Image Classification Training Data: - ImageNet ID: tf_efficientnet_lite4 Crop Pct: '0.92' Image Size: '380' Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1640 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_lite4-741542c3.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.54% Top 5 Accuracy: 95.66% -->
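The compound-scaling rule summarised at the top of this page can be made concrete with a short numeric sketch. The coefficients below (α=1.2, β=1.1, γ=1.15) are the values reported in the EfficientNet paper for the B0 grid search, and the actual B1–B7/Lite1–Lite4 input sizes were additionally hand-rounded, so treat the output as illustrative rather than an exact reproduction of the table above:

```python
# Illustrative only: alpha/beta/gamma are the EfficientNet paper's reported values,
# not something read out of timm.
alpha, beta, gamma = 1.2, 1.1, 1.15

def scaled(phi, base_res=224):
    return {
        "depth_multiplier": round(alpha ** phi, 3),
        "width_multiplier": round(beta ** phi, 3),
        "resolution": round(base_res * gamma ** phi),
        "approx_flops_factor": round((alpha * beta ** 2 * gamma ** 2) ** phi, 2),  # ~2**phi
    }

for phi in range(5):
    print(phi, scaled(phi))
```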
pytorch-image-models/docs/models/tf-efficientnet-lite.md/0
{ "file_path": "pytorch-image-models/docs/models/tf-efficientnet-lite.md", "repo_id": "pytorch-image-models", "token_count": 3355 }
182
# timm <img class="float-left !m-0 !border-0 !dark:border-0 !shadow-none !max-w-lg w-[150px]" src="https://huggingface.co/front/thumbnails/docs/timm.png"/> `timm` is a library containing SOTA computer vision models, layers, utilities, optimizers, schedulers, data-loaders, augmentations, and training/evaluation scripts. It comes packaged with >700 pretrained models, and is designed to be flexible and easy to use. Read the [quick start guide](quickstart) to get up and running with the `timm` library. You will learn how to load, discover, and use pretrained models included in the library. <div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5"> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./feature_extraction" ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div> <p class="text-gray-700">Learn the basics and become familiar with timm. Start here if you are using timm for the first time!</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./reference/models" ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div> <p class="text-gray-700">Technical descriptions of how timm classes and methods work.</p> </a> </div> </div>
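As a quick preview of the discover-and-load workflow mentioned above, the sketch below uses the top-level `timm` API (the wildcard pattern and model name are just examples, and the `default_cfg` field names can differ slightly between timm versions):

```py
import timm

# discover pretrained models matching a wildcard pattern
print(timm.list_models("*efficientnet_lite*", pretrained=True)[:5])

# create one of them with pretrained weights and inspect its default config
model = timm.create_model("tf_efficientnet_lite0", pretrained=True)
print(model.default_cfg["input_size"], model.num_classes)
```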
pytorch-image-models/hfdocs/source/index.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/index.mdx", "repo_id": "pytorch-image-models", "token_count": 560 }
183
# ESE-VoVNet **VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once in the last feature map, which makes input size constant and enables enlarging new output channel. Read about [one-shot aggregation here](https://paperswithcode.com/method/one-shot-aggregation). ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('ese_vovnet19b_dw', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `ese_vovnet19b_dw`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('ese_vovnet19b_dw', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. 
## Citation ```BibTeX @misc{lee2019energy, title={An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection}, author={Youngwan Lee and Joong-won Hwang and Sangrok Lee and Yuseok Bae and Jongyoul Park}, year={2019}, eprint={1904.09730}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: ESE VovNet Paper: Title: 'CenterMask : Real-Time Anchor-Free Instance Segmentation' URL: https://paperswithcode.com/paper/centermask-real-time-anchor-free-instance-1 Models: - Name: ese_vovnet19b_dw In Collection: ESE VovNet Metadata: FLOPs: 1711959904 Parameters: 6540000 File Size: 26243175 Architecture: - Batch Normalization - Convolution - Max Pooling - One-Shot Aggregation - ReLU Tasks: - Image Classification Training Data: - ImageNet ID: ese_vovnet19b_dw Layers: 19 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/vovnet.py#L361 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ese_vovnet19b_dw-a8741004.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.82% Top 5 Accuracy: 93.28% - Name: ese_vovnet39b In Collection: ESE VovNet Metadata: FLOPs: 9089259008 Parameters: 24570000 File Size: 98397138 Architecture: - Batch Normalization - Convolution - Max Pooling - One-Shot Aggregation - ReLU Tasks: - Image Classification Training Data: - ImageNet ID: ese_vovnet39b Layers: 39 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/vovnet.py#L371 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ese_vovnet39b-f912fe73.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.31% Top 5 Accuracy: 94.72% -->
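If you only need backbone feature maps rather than classification logits, the generic `features_only` option of `timm.create_model` also works with this architecture. A brief sketch (the number and shape of the returned feature maps depend on the model and its default `out_indices`):

```py
>>> import timm
>>> import torch
>>> feat_model = timm.create_model('ese_vovnet19b_dw', pretrained=True, features_only=True)
>>> feats = feat_model(torch.randn(1, 3, 224, 224))
>>> for f in feats:
...     print(f.shape)
>>> # prints one shape per extracted stage (typically at strides 2/4/8/16/32)
```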
pytorch-image-models/hfdocs/source/models/ese-vovnet.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/ese-vovnet.mdx", "repo_id": "pytorch-image-models", "token_count": 1951 }
184
# MixNet **MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution). ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('mixnet_l', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `mixnet_l`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('mixnet_l', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. ## Citation ```BibTeX @misc{tan2019mixconv, title={MixConv: Mixed Depthwise Convolutional Kernels}, author={Mingxing Tan and Quoc V. 
Le}, year={2019}, eprint={1907.09595}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: MixNet Paper: Title: 'MixConv: Mixed Depthwise Convolutional Kernels' URL: https://paperswithcode.com/paper/mixnet-mixed-depthwise-convolutional-kernels Models: - Name: mixnet_l In Collection: MixNet Metadata: FLOPs: 738671316 Parameters: 7330000 File Size: 29608232 Architecture: - Batch Normalization - Dense Connections - Dropout - Global Average Pooling - Grouped Convolution - MixConv - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - MNAS Training Data: - ImageNet ID: mixnet_l Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1669 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_l-5a9a2ed8.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.98% Top 5 Accuracy: 94.18% - Name: mixnet_m In Collection: MixNet Metadata: FLOPs: 454543374 Parameters: 5010000 File Size: 20298347 Architecture: - Batch Normalization - Dense Connections - Dropout - Global Average Pooling - Grouped Convolution - MixConv - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - MNAS Training Data: - ImageNet ID: mixnet_m Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1660 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_m-4647fc68.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.27% Top 5 Accuracy: 93.42% - Name: mixnet_s In Collection: MixNet Metadata: FLOPs: 321264910 Parameters: 4130000 File Size: 16727982 Architecture: - Batch Normalization - Dense Connections - Dropout - Global Average Pooling - Grouped Convolution - MixConv - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - MNAS Training Data: - ImageNet ID: mixnet_s Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1651 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_s-a907afbc.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.99% Top 5 Accuracy: 92.79% - Name: mixnet_xl In Collection: MixNet Metadata: FLOPs: 1195880424 Parameters: 11900000 File Size: 48001170 Architecture: - Batch Normalization - Dense Connections - Dropout - Global Average Pooling - Grouped Convolution - MixConv - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - MNAS Training Data: - ImageNet ID: mixnet_xl Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1678 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_xl_ra-aac3c00c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.47% Top 5 Accuracy: 94.93% -->
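The MixConv operation referenced above splits the channels into groups and applies a different depthwise kernel size to each group. Below is a simplified sketch of that idea; it is not timm's actual mixed-convolution layer, and the group sizes and kernel sizes are arbitrary choices for the example:

```py
import torch
import torch.nn as nn

class ToyMixConv(nn.Module):
    """Simplified MixConv: split channels, apply a different depthwise kernel per split."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)  # absorb any remainder in the first split
        self.splits = splits
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c, bias=False)
            for c, k in zip(splits, kernel_sizes)
        )

    def forward(self, x):
        xs = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(t) for conv, t in zip(self.convs, xs)], dim=1)

out = ToyMixConv(48)(torch.randn(1, 48, 32, 32))
print(out.shape)  # torch.Size([1, 48, 32, 32])
```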
pytorch-image-models/hfdocs/source/models/mixnet.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/mixnet.mdx", "repo_id": "pytorch-image-models", "token_count": 2684 }
185
# Wide ResNet **Wide Residual Networks** are a variant on [ResNets](https://paperswithcode.com/method/resnet) where we decrease depth and increase the width of residual networks. This is achieved through the use of [wide residual blocks](https://paperswithcode.com/method/wide-residual-block). ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('wide_resnet101_2', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `wide_resnet101_2`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('wide_resnet101_2', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. 
## Citation ```BibTeX @article{DBLP:journals/corr/ZagoruykoK16, author = {Sergey Zagoruyko and Nikos Komodakis}, title = {Wide Residual Networks}, journal = {CoRR}, volume = {abs/1605.07146}, year = {2016}, url = {http://arxiv.org/abs/1605.07146}, archivePrefix = {arXiv}, eprint = {1605.07146}, timestamp = {Mon, 13 Aug 2018 16:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/ZagoruykoK16.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: Wide ResNet Paper: Title: Wide Residual Networks URL: https://paperswithcode.com/paper/wide-residual-networks Models: - Name: wide_resnet101_2 In Collection: Wide ResNet Metadata: FLOPs: 29304929280 Parameters: 126890000 File Size: 254695146 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Wide Residual Block Tasks: - Image Classification Training Data: - ImageNet ID: wide_resnet101_2 Crop Pct: '0.875' Image Size: '224' Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/5f9aff395c224492e9e44248b15f44b5cc095d9c/timm/models/resnet.py#L802 Weights: https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.85% Top 5 Accuracy: 94.28% - Name: wide_resnet50_2 In Collection: Wide ResNet Metadata: FLOPs: 14688058368 Parameters: 68880000 File Size: 275853271 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax - Wide Residual Block Tasks: - Image Classification Training Data: - ImageNet ID: wide_resnet50_2 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/5f9aff395c224492e9e44248b15f44b5cc095d9c/timm/models/resnet.py#L790 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/wide_resnet50_racm-8234f177.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.45% Top 5 Accuracy: 95.52% -->
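The effect of widening is easy to see by comparing parameter counts: `wide_resnet50_2` keeps the 50-layer ResNet layout but roughly doubles the width of the bottleneck convolutions. A quick check (the numbers are approximate and shown only to illustrate the depth-vs-width trade-off):

```py
>>> import timm
>>> wide = timm.create_model('wide_resnet50_2', pretrained=False)
>>> base = timm.create_model('resnet50', pretrained=False)
>>> def n_params(m):
...     return sum(p.numel() for p in m.parameters())
>>> print(f'{n_params(wide) / 1e6:.1f}M vs {n_params(base) / 1e6:.1f}M')
>>> # roughly 68.9M vs 25.6M parameters for the same 50-layer layout
```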
pytorch-image-models/hfdocs/source/models/wide-resnet.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/wide-resnet.mdx", "repo_id": "pytorch-image-models", "token_count": 2035 }
186
import numpy as np import pandas as pd results = { 'results-imagenet.csv': [ 'results-imagenet-real.csv', 'results-imagenetv2-matched-frequency.csv', 'results-sketch.csv' ], 'results-imagenet-a-clean.csv': [ 'results-imagenet-a.csv', ], 'results-imagenet-r-clean.csv': [ 'results-imagenet-r.csv', ], } def diff(base_df, test_csv): base_models = base_df['model'].values test_df = pd.read_csv(test_csv) test_models = test_df['model'].values rank_diff = np.zeros_like(test_models, dtype='object') top1_diff = np.zeros_like(test_models, dtype='object') top5_diff = np.zeros_like(test_models, dtype='object') for rank, model in enumerate(test_models): if model in base_models: base_rank = int(np.where(base_models == model)[0]) top1_d = test_df['top1'][rank] - base_df['top1'][base_rank] top5_d = test_df['top5'][rank] - base_df['top5'][base_rank] # rank_diff if rank == base_rank: rank_diff[rank] = f'0' elif rank > base_rank: rank_diff[rank] = f'-{rank - base_rank}' else: rank_diff[rank] = f'+{base_rank - rank}' # top1_diff if top1_d >= .0: top1_diff[rank] = f'+{top1_d:.3f}' else: top1_diff[rank] = f'-{abs(top1_d):.3f}' # top5_diff if top5_d >= .0: top5_diff[rank] = f'+{top5_d:.3f}' else: top5_diff[rank] = f'-{abs(top5_d):.3f}' else: rank_diff[rank] = '' top1_diff[rank] = '' top5_diff[rank] = '' test_df['top1_diff'] = top1_diff test_df['top5_diff'] = top5_diff test_df['rank_diff'] = rank_diff test_df['param_count'] = test_df['param_count'].map('{:,.2f}'.format) test_df.sort_values(['top1', 'top5', 'model'], ascending=[False, False, True], inplace=True) test_df.to_csv(test_csv, index=False, float_format='%.3f') for base_results, test_results in results.items(): base_df = pd.read_csv(base_results) base_df.sort_values(['top1', 'top5', 'model'], ascending=[False, False, True], inplace=True) for test_csv in test_results: diff(base_df, test_csv) base_df['param_count'] = base_df['param_count'].map('{:,.2f}'.format) base_df.to_csv(base_results, index=False, float_format='%.3f')
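When run from the `results/` directory with the baseline and test CSVs present, the script above rewrites each test CSV in place, adding `top1_diff`, `top5_diff`, and `rank_diff` columns relative to the clean baseline. A minimal sketch of inspecting one of the updated files (the file name is one of those listed in the `results` dict above):

```python
import pandas as pd

df = pd.read_csv('results-imagenetv2-matched-frequency.csv')
# models with the largest top-1 change relative to clean ImageNet, by current rank
print(df[['model', 'top1', 'top1_diff', 'rank_diff']].head(10))
```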
pytorch-image-models/results/generate_csv_results.py/0
{ "file_path": "pytorch-image-models/results/generate_csv_results.py", "repo_id": "pytorch-image-models", "token_count": 1346 }
187
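The `generate_csv_results.py` script above annotates each secondary results CSV (ImageNet-ReaL, V2, Sketch, -A, -R) with rank and accuracy deltas against its clean base file. The snippet below is not the script itself; it replays the core delta logic on two made-up in-memory frames so the meaning of the `rank_diff` and `top1_diff` columns is visible without the real CSVs:

```python
import numpy as np
import pandas as pd

# Hypothetical 'base' (clean) and 'test' result tables, already sorted by top-1.
base_df = pd.DataFrame({'model': ['model_a', 'model_b', 'model_c'],
                        'top1': [82.0, 81.5, 80.0]})
test_df = pd.DataFrame({'model': ['model_b', 'model_a', 'model_c'],  # model_b overtakes model_a
                        'top1': [70.1, 69.5, 66.0]})

base_models = base_df['model'].values
for rank, model in enumerate(test_df['model'].values):
    base_rank = int(np.where(base_models == model)[0])   # same lookup the script uses
    top1_d = test_df['top1'][rank] - base_df['top1'][base_rank]
    rank_diff = base_rank - rank                          # positive = moved up vs the base ranking
    print(f'{model}: rank_diff={rank_diff:+d}, top1_diff={top1_d:+.3f}')
```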
from .version import __version__
from .layers import is_scriptable, is_exportable, set_scriptable, set_exportable
from .models import create_model, list_models, list_pretrained, is_model, list_modules, model_entrypoint, \
    is_model_pretrained, get_pretrained_cfg, get_pretrained_cfg_value
pytorch-image-models/timm/__init__.py/0
{ "file_path": "pytorch-image-models/timm/__init__.py", "repo_id": "pytorch-image-models", "token_count": 91 }
188
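The top-level `timm/__init__.py` above re-exports the registry and factory helpers, so day-to-day usage goes through these names directly (the model name and wildcard pattern below are only examples):

```python
import timm

# Registry queries exported by timm/__init__.py
print(timm.list_models('deit3_*')[:5])          # registered DeiT-3 entrypoints
print(timm.is_model('gcvit_tiny'))              # True if the entrypoint exists
print(timm.is_model_pretrained('gcvit_tiny'))   # True if a pretrained config with weights exists

# create_model resolves the entrypoint and builds the model (random init here)
model = timm.create_model('gcvit_tiny', pretrained=False, num_classes=10)
```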
from .reader_factory import create_reader
from .img_extensions import *
pytorch-image-models/timm/data/readers/__init__.py/0
{ "file_path": "pytorch-image-models/timm/data/readers/__init__.py", "repo_id": "pytorch-image-models", "token_count": 20 }
189
""" Transforms Factory Factory methods for building image transforms for use with TIMM (PyTorch Image Models) Hacked together by / Copyright 2019, Ross Wightman """ import math from typing import Optional, Tuple, Union import torch from torchvision import transforms from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, DEFAULT_CROP_PCT from timm.data.auto_augment import rand_augment_transform, augment_and_mix_transform, auto_augment_transform from timm.data.transforms import str_to_interp_mode, str_to_pil_interp, RandomResizedCropAndInterpolation,\ ResizeKeepRatio, CenterCropOrPad, RandomCropOrPad, TrimBorder, ToNumpy from timm.data.random_erasing import RandomErasing def transforms_noaug_train( img_size: Union[int, Tuple[int, int]] = 224, interpolation: str = 'bilinear', use_prefetcher: bool = False, mean: Tuple[float, ...] = IMAGENET_DEFAULT_MEAN, std: Tuple[float, ...] = IMAGENET_DEFAULT_STD, ): """ No-augmentation image transforms for training. Args: img_size: Target image size. interpolation: Image interpolation mode. mean: Image normalization mean. std: Image normalization standard deviation. use_prefetcher: Prefetcher enabled. Do not convert image to tensor or normalize. Returns: """ if interpolation == 'random': # random interpolation not supported with no-aug interpolation = 'bilinear' tfl = [ transforms.Resize(img_size, interpolation=str_to_interp_mode(interpolation)), transforms.CenterCrop(img_size) ] if use_prefetcher: # prefetcher and collate will handle tensor conversion and norm tfl += [ToNumpy()] else: tfl += [ transforms.ToTensor(), transforms.Normalize( mean=torch.tensor(mean), std=torch.tensor(std) ) ] return transforms.Compose(tfl) def transforms_imagenet_train( img_size: Union[int, Tuple[int, int]] = 224, scale: Optional[Tuple[float, float]] = None, ratio: Optional[Tuple[float, float]] = None, train_crop_mode: Optional[str] = None, hflip: float = 0.5, vflip: float = 0., color_jitter: Union[float, Tuple[float, ...]] = 0.4, color_jitter_prob: Optional[float] = None, force_color_jitter: bool = False, grayscale_prob: float = 0., gaussian_blur_prob: float = 0., auto_augment: Optional[str] = None, interpolation: str = 'random', mean: Tuple[float, ...] = IMAGENET_DEFAULT_MEAN, std: Tuple[float, ...] = IMAGENET_DEFAULT_STD, re_prob: float = 0., re_mode: str = 'const', re_count: int = 1, re_num_splits: int = 0, use_prefetcher: bool = False, separate: bool = False, ): """ ImageNet-oriented image transforms for training. Args: img_size: Target image size. train_crop_mode: Training random crop mode ('rrc', 'rkrc', 'rkrr'). scale: Random resize scale range (crop area, < 1.0 => zoom in). ratio: Random aspect ratio range (crop ratio for RRC, ratio adjustment factor for RKR). hflip: Horizontal flip probability. vflip: Vertical flip probability. color_jitter: Random color jitter component factors (brightness, contrast, saturation, hue). Scalar is applied as (scalar,) * 3 (no hue). color_jitter_prob: Apply color jitter with this probability if not None (for SimlCLR-like aug). force_color_jitter: Force color jitter where it is normally disabled (ie with RandAugment on). grayscale_prob: Probability of converting image to grayscale (for SimCLR-like aug). gaussian_blur_prob: Probability of applying gaussian blur (for SimCLR-like aug). auto_augment: Auto augment configuration string (see auto_augment.py). interpolation: Image interpolation mode. mean: Image normalization mean. std: Image normalization standard deviation. re_prob: Random erasing probability. 
re_mode: Random erasing fill mode. re_count: Number of random erasing regions. re_num_splits: Control split of random erasing across batch size. use_prefetcher: Prefetcher enabled. Do not convert image to tensor or normalize. separate: Output transforms in 3-stage tuple. Returns: If separate==True, the transforms are returned as a tuple of 3 separate transforms for use in a mixing dataset that passes * all data through the first (primary) transform, called the 'clean' data * a portion of the data through the secondary transform * normalizes and converts the branches above with the third, final transform """ train_crop_mode = train_crop_mode or 'rrc' assert train_crop_mode in {'rrc', 'rkrc', 'rkrr'} if train_crop_mode in ('rkrc', 'rkrr'): # FIXME integration of RKR is a WIP scale = tuple(scale or (0.8, 1.00)) ratio = tuple(ratio or (0.9, 1/.9)) primary_tfl = [ ResizeKeepRatio( img_size, interpolation=interpolation, random_scale_prob=0.5, random_scale_range=scale, random_scale_area=True, # scale compatible with RRC random_aspect_prob=0.5, random_aspect_range=ratio, ), CenterCropOrPad(img_size, padding_mode='reflect') if train_crop_mode == 'rkrc' else RandomCropOrPad(img_size, padding_mode='reflect') ] else: scale = tuple(scale or (0.08, 1.0)) # default imagenet scale range ratio = tuple(ratio or (3. / 4., 4. / 3.)) # default imagenet ratio range primary_tfl = [ RandomResizedCropAndInterpolation( img_size, scale=scale, ratio=ratio, interpolation=interpolation, ) ] if hflip > 0.: primary_tfl += [transforms.RandomHorizontalFlip(p=hflip)] if vflip > 0.: primary_tfl += [transforms.RandomVerticalFlip(p=vflip)] secondary_tfl = [] disable_color_jitter = False if auto_augment: assert isinstance(auto_augment, str) # color jitter is typically disabled if AA/RA on, # this allows override without breaking old hparm cfgs disable_color_jitter = not (force_color_jitter or '3a' in auto_augment) if isinstance(img_size, (tuple, list)): img_size_min = min(img_size) else: img_size_min = img_size aa_params = dict( translate_const=int(img_size_min * 0.45), img_mean=tuple([min(255, round(255 * x)) for x in mean]), ) if interpolation and interpolation != 'random': aa_params['interpolation'] = str_to_pil_interp(interpolation) if auto_augment.startswith('rand'): secondary_tfl += [rand_augment_transform(auto_augment, aa_params)] elif auto_augment.startswith('augmix'): aa_params['translate_pct'] = 0.3 secondary_tfl += [augment_and_mix_transform(auto_augment, aa_params)] else: secondary_tfl += [auto_augment_transform(auto_augment, aa_params)] if color_jitter is not None and not disable_color_jitter: # color jitter is enabled when not using AA or when forced if isinstance(color_jitter, (list, tuple)): # color jitter should be a 3-tuple/list if spec brightness/contrast/saturation # or 4 if also augmenting hue assert len(color_jitter) in (3, 4) else: # if it's a scalar, duplicate for brightness, contrast, and saturation, no hue color_jitter = (float(color_jitter),) * 3 if color_jitter_prob is not None: secondary_tfl += [ transforms.RandomApply([ transforms.ColorJitter(*color_jitter), ], p=color_jitter_prob ) ] else: secondary_tfl += [transforms.ColorJitter(*color_jitter)] if grayscale_prob: secondary_tfl += [transforms.RandomGrayscale(p=grayscale_prob)] if gaussian_blur_prob: secondary_tfl += [ transforms.RandomApply([ transforms.GaussianBlur(kernel_size=23), # hardcoded for now ], p=gaussian_blur_prob, ) ] final_tfl = [] if use_prefetcher: # prefetcher and collate will handle tensor conversion and norm final_tfl += 
[ToNumpy()] else: final_tfl += [ transforms.ToTensor(), transforms.Normalize( mean=torch.tensor(mean), std=torch.tensor(std) ), ] if re_prob > 0.: final_tfl += [ RandomErasing( re_prob, mode=re_mode, max_count=re_count, num_splits=re_num_splits, device='cpu', ) ] if separate: return transforms.Compose(primary_tfl), transforms.Compose(secondary_tfl), transforms.Compose(final_tfl) else: return transforms.Compose(primary_tfl + secondary_tfl + final_tfl) def transforms_imagenet_eval( img_size: Union[int, Tuple[int, int]] = 224, crop_pct: Optional[float] = None, crop_mode: Optional[str] = None, crop_border_pixels: Optional[int] = None, interpolation: str = 'bilinear', mean: Tuple[float, ...] = IMAGENET_DEFAULT_MEAN, std: Tuple[float, ...] = IMAGENET_DEFAULT_STD, use_prefetcher: bool = False, ): """ ImageNet-oriented image transform for evaluation and inference. Args: img_size: Target image size. crop_pct: Crop percentage. Defaults to 0.875 when None. crop_mode: Crop mode. One of ['squash', 'border', 'center']. Defaults to 'center' when None. crop_border_pixels: Trim a border of specified # pixels around edge of original image. interpolation: Image interpolation mode. mean: Image normalization mean. std: Image normalization standard deviation. use_prefetcher: Prefetcher enabled. Do not convert image to tensor or normalize. Returns: Composed transform pipeline """ crop_pct = crop_pct or DEFAULT_CROP_PCT if isinstance(img_size, (tuple, list)): assert len(img_size) == 2 scale_size = tuple([math.floor(x / crop_pct) for x in img_size]) else: scale_size = math.floor(img_size / crop_pct) scale_size = (scale_size, scale_size) tfl = [] if crop_border_pixels: tfl += [TrimBorder(crop_border_pixels)] if crop_mode == 'squash': # squash mode scales each edge to 1/pct of target, then crops # aspect ratio is not preserved, no img lost if crop_pct == 1.0 tfl += [ transforms.Resize(scale_size, interpolation=str_to_interp_mode(interpolation)), transforms.CenterCrop(img_size), ] elif crop_mode == 'border': # scale the longest edge of image to 1/pct of target edge, add borders to pad, then crop # no image lost if crop_pct == 1.0 fill = [round(255 * v) for v in mean] tfl += [ ResizeKeepRatio(scale_size, interpolation=interpolation, longest=1.0), CenterCropOrPad(img_size, fill=fill), ] else: # default crop model is center # aspect ratio is preserved, crops center within image, no borders are added, image is lost if scale_size[0] == scale_size[1]: # simple case, use torchvision built-in Resize w/ shortest edge mode (scalar size arg) tfl += [ transforms.Resize(scale_size[0], interpolation=str_to_interp_mode(interpolation)) ] else: # resize the shortest edge to matching target dim for non-square target tfl += [ResizeKeepRatio(scale_size)] tfl += [transforms.CenterCrop(img_size)] if use_prefetcher: # prefetcher and collate will handle tensor conversion and norm tfl += [ToNumpy()] else: tfl += [ transforms.ToTensor(), transforms.Normalize( mean=torch.tensor(mean), std=torch.tensor(std), ) ] return transforms.Compose(tfl) def create_transform( input_size: Union[int, Tuple[int, int], Tuple[int, int, int]] = 224, is_training: bool = False, no_aug: bool = False, train_crop_mode: Optional[str] = None, scale: Optional[Tuple[float, float]] = None, ratio: Optional[Tuple[float, float]] = None, hflip: float = 0.5, vflip: float = 0., color_jitter: Union[float, Tuple[float, ...]] = 0.4, color_jitter_prob: Optional[float] = None, grayscale_prob: float = 0., gaussian_blur_prob: float = 0., auto_augment: Optional[str] = None, 
interpolation: str = 'bilinear', mean: Tuple[float, ...] = IMAGENET_DEFAULT_MEAN, std: Tuple[float, ...] = IMAGENET_DEFAULT_STD, re_prob: float = 0., re_mode: str = 'const', re_count: int = 1, re_num_splits: int = 0, crop_pct: Optional[float] = None, crop_mode: Optional[str] = None, crop_border_pixels: Optional[int] = None, tf_preprocessing: bool = False, use_prefetcher: bool = False, separate: bool = False, ): """ Args: input_size: Target input size (channels, height, width) tuple or size scalar. is_training: Return training (random) transforms. no_aug: Disable augmentation for training (useful for debug). train_crop_mode: Training random crop mode ('rrc', 'rkrc', 'rkrr'). scale: Random resize scale range (crop area, < 1.0 => zoom in). ratio: Random aspect ratio range (crop ratio for RRC, ratio adjustment factor for RKR). hflip: Horizontal flip probability. vflip: Vertical flip probability. color_jitter: Random color jitter component factors (brightness, contrast, saturation, hue). Scalar is applied as (scalar,) * 3 (no hue). color_jitter_prob: Apply color jitter with this probability if not None (for SimlCLR-like aug). grayscale_prob: Probability of converting image to grayscale (for SimCLR-like aug). gaussian_blur_prob: Probability of applying gaussian blur (for SimCLR-like aug). auto_augment: Auto augment configuration string (see auto_augment.py). interpolation: Image interpolation mode. mean: Image normalization mean. std: Image normalization standard deviation. re_prob: Random erasing probability. re_mode: Random erasing fill mode. re_count: Number of random erasing regions. re_num_splits: Control split of random erasing across batch size. crop_pct: Inference crop percentage (output size / resize size). crop_mode: Inference crop mode. One of ['squash', 'border', 'center']. Defaults to 'center' when None. crop_border_pixels: Inference crop border of specified # pixels around edge of original image. tf_preprocessing: Use TF 1.0 inference preprocessing for testing model ports use_prefetcher: Pre-fetcher enabled. Do not convert image to tensor or normalize. separate: Output transforms in 3-stage tuple. 
Returns: Composed transforms or tuple thereof """ if isinstance(input_size, (tuple, list)): img_size = input_size[-2:] else: img_size = input_size if tf_preprocessing and use_prefetcher: assert not separate, "Separate transforms not supported for TF preprocessing" from timm.data.tf_preprocessing import TfPreprocessTransform transform = TfPreprocessTransform( is_training=is_training, size=img_size, interpolation=interpolation, ) else: if is_training and no_aug: assert not separate, "Cannot perform split augmentation with no_aug" transform = transforms_noaug_train( img_size, interpolation=interpolation, use_prefetcher=use_prefetcher, mean=mean, std=std, ) elif is_training: transform = transforms_imagenet_train( img_size, train_crop_mode=train_crop_mode, scale=scale, ratio=ratio, hflip=hflip, vflip=vflip, color_jitter=color_jitter, color_jitter_prob=color_jitter_prob, grayscale_prob=grayscale_prob, gaussian_blur_prob=gaussian_blur_prob, auto_augment=auto_augment, interpolation=interpolation, use_prefetcher=use_prefetcher, mean=mean, std=std, re_prob=re_prob, re_mode=re_mode, re_count=re_count, re_num_splits=re_num_splits, separate=separate, ) else: assert not separate, "Separate transforms not supported for validation preprocessing" transform = transforms_imagenet_eval( img_size, interpolation=interpolation, use_prefetcher=use_prefetcher, mean=mean, std=std, crop_pct=crop_pct, crop_mode=crop_mode, crop_border_pixels=crop_border_pixels, ) return transform
pytorch-image-models/timm/data/transforms_factory.py/0
{ "file_path": "pytorch-image-models/timm/data/transforms_factory.py", "repo_id": "pytorch-image-models", "token_count": 8112 }
190
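`create_transform` in the factory above is the single entry point for both training and eval pipelines. Below is a minimal sketch of the two paths, using only parameters from the signature shown (the RandAugment string, erasing probability, and crop percentage are arbitrary choices, not recommended defaults):

```python
import numpy as np
from PIL import Image
from timm.data import create_transform

# Training: RandomResizedCrop + flips + RandAugment + random erasing
train_tfm = create_transform(
    input_size=224,
    is_training=True,
    interpolation='random',
    auto_augment='rand-m9-mstd0.5',
    re_prob=0.25,
)

# Eval: resize to img_size / crop_pct, then center crop
eval_tfm = create_transform(input_size=224, is_training=False, crop_pct=0.95)

img = Image.fromarray(np.random.randint(0, 255, (320, 480, 3), dtype=np.uint8))
print(train_tfm(img).shape)  # torch.Size([3, 224, 224])
print(eval_tfm(img).shape)   # torch.Size([3, 224, 224])
```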
""" Activation Factory Hacked together by / Copyright 2020 Ross Wightman """ from typing import Union, Callable, Type from .activations import * from .activations_jit import * from .activations_me import * from .config import is_exportable, is_scriptable, is_no_jit # PyTorch has an optimized, native 'silu' (aka 'swish') operator as of PyTorch 1.7. # Also hardsigmoid, hardswish, and soon mish. This code will use native version if present. # Eventually, the custom SiLU, Mish, Hard*, layers will be removed and only native variants will be used. _has_silu = 'silu' in dir(torch.nn.functional) _has_hardswish = 'hardswish' in dir(torch.nn.functional) _has_hardsigmoid = 'hardsigmoid' in dir(torch.nn.functional) _has_mish = 'mish' in dir(torch.nn.functional) _ACT_FN_DEFAULT = dict( silu=F.silu if _has_silu else swish, swish=F.silu if _has_silu else swish, mish=F.mish if _has_mish else mish, relu=F.relu, relu6=F.relu6, leaky_relu=F.leaky_relu, elu=F.elu, celu=F.celu, selu=F.selu, gelu=gelu, gelu_tanh=gelu_tanh, quick_gelu=quick_gelu, sigmoid=sigmoid, tanh=tanh, hard_sigmoid=F.hardsigmoid if _has_hardsigmoid else hard_sigmoid, hard_swish=F.hardswish if _has_hardswish else hard_swish, hard_mish=hard_mish, ) _ACT_FN_JIT = dict( silu=F.silu if _has_silu else swish_jit, swish=F.silu if _has_silu else swish_jit, mish=F.mish if _has_mish else mish_jit, hard_sigmoid=F.hardsigmoid if _has_hardsigmoid else hard_sigmoid_jit, hard_swish=F.hardswish if _has_hardswish else hard_swish_jit, hard_mish=hard_mish_jit, ) _ACT_FN_ME = dict( silu=F.silu if _has_silu else swish_me, swish=F.silu if _has_silu else swish_me, mish=F.mish if _has_mish else mish_me, hard_sigmoid=F.hardsigmoid if _has_hardsigmoid else hard_sigmoid_me, hard_swish=F.hardswish if _has_hardswish else hard_swish_me, hard_mish=hard_mish_me, ) _ACT_FNS = (_ACT_FN_ME, _ACT_FN_JIT, _ACT_FN_DEFAULT) for a in _ACT_FNS: a.setdefault('hardsigmoid', a.get('hard_sigmoid')) a.setdefault('hardswish', a.get('hard_swish')) _ACT_LAYER_DEFAULT = dict( silu=nn.SiLU if _has_silu else Swish, swish=nn.SiLU if _has_silu else Swish, mish=nn.Mish if _has_mish else Mish, relu=nn.ReLU, relu6=nn.ReLU6, leaky_relu=nn.LeakyReLU, elu=nn.ELU, prelu=PReLU, celu=nn.CELU, selu=nn.SELU, gelu=GELU, gelu_tanh=GELUTanh, quick_gelu=QuickGELU, sigmoid=Sigmoid, tanh=Tanh, hard_sigmoid=nn.Hardsigmoid if _has_hardsigmoid else HardSigmoid, hard_swish=nn.Hardswish if _has_hardswish else HardSwish, hard_mish=HardMish, identity=nn.Identity, ) _ACT_LAYER_JIT = dict( silu=nn.SiLU if _has_silu else SwishJit, swish=nn.SiLU if _has_silu else SwishJit, mish=nn.Mish if _has_mish else MishJit, hard_sigmoid=nn.Hardsigmoid if _has_hardsigmoid else HardSigmoidJit, hard_swish=nn.Hardswish if _has_hardswish else HardSwishJit, hard_mish=HardMishJit, ) _ACT_LAYER_ME = dict( silu=nn.SiLU if _has_silu else SwishMe, swish=nn.SiLU if _has_silu else SwishMe, mish=nn.Mish if _has_mish else MishMe, hard_sigmoid=nn.Hardsigmoid if _has_hardsigmoid else HardSigmoidMe, hard_swish=nn.Hardswish if _has_hardswish else HardSwishMe, hard_mish=HardMishMe, ) _ACT_LAYERS = (_ACT_LAYER_ME, _ACT_LAYER_JIT, _ACT_LAYER_DEFAULT) for a in _ACT_LAYERS: a.setdefault('hardsigmoid', a.get('hard_sigmoid')) a.setdefault('hardswish', a.get('hard_swish')) def get_act_fn(name: Union[Callable, str] = 'relu'): """ Activation Function Factory Fetching activation fns by name with this function allows export or torch script friendly functions to be returned dynamically based on current config. 
""" if not name: return None if isinstance(name, Callable): return name if not (is_no_jit() or is_exportable() or is_scriptable()): # If not exporting or scripting the model, first look for a memory-efficient version with # custom autograd, then fallback if name in _ACT_FN_ME: return _ACT_FN_ME[name] if not (is_no_jit() or is_exportable()): if name in _ACT_FN_JIT: return _ACT_FN_JIT[name] return _ACT_FN_DEFAULT[name] def get_act_layer(name: Union[Type[nn.Module], str] = 'relu'): """ Activation Layer Factory Fetching activation layers by name with this function allows export or torch script friendly functions to be returned dynamically based on current config. """ if name is None: return None if not isinstance(name, str): # callable, module, etc return name if not name: return None if not (is_no_jit() or is_exportable() or is_scriptable()): if name in _ACT_LAYER_ME: return _ACT_LAYER_ME[name] if not (is_no_jit() or is_exportable()): if name in _ACT_LAYER_JIT: return _ACT_LAYER_JIT[name] return _ACT_LAYER_DEFAULT[name] def create_act_layer(name: Union[Type[nn.Module], str], inplace=None, **kwargs): act_layer = get_act_layer(name) if act_layer is None: return None if inplace is None: return act_layer(**kwargs) try: return act_layer(inplace=inplace, **kwargs) except TypeError: # recover if act layer doesn't have inplace arg return act_layer(**kwargs)
pytorch-image-models/timm/layers/create_act.py/0
{ "file_path": "pytorch-image-models/timm/layers/create_act.py", "repo_id": "pytorch-image-models", "token_count": 2445 }
191
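The activation factory above maps names to either native PyTorch ops or the custom fallbacks, depending on export/scripting config. A small usage sketch follows (it assumes these helpers are also re-exported from `timm.layers`, which is how other files in this repo import them):

```python
import torch
from timm.layers import get_act_fn, get_act_layer, create_act_layer

act_fn = get_act_fn('swish')              # resolves to F.silu on PyTorch >= 1.7, custom swish otherwise
y = act_fn(torch.randn(4, 8))

act_layer = get_act_layer('hard_swish')       # nn.Hardswish (or the HardSwish fallback class)
act = create_act_layer('gelu', inplace=True)  # instantiates; inplace dropped if unsupported
print(act_fn, act_layer, act)
```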
""" Layer/Module Helpers Hacked together by / Copyright 2020 Ross Wightman """ from itertools import repeat import collections.abc # From PyTorch internals def _ntuple(n): def parse(x): if isinstance(x, collections.abc.Iterable) and not isinstance(x, str): return tuple(x) return tuple(repeat(x, n)) return parse to_1tuple = _ntuple(1) to_2tuple = _ntuple(2) to_3tuple = _ntuple(3) to_4tuple = _ntuple(4) to_ntuple = _ntuple def make_divisible(v, divisor=8, min_value=None, round_limit=.9): min_value = min_value or divisor new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) # Make sure that round down does not go down by more than 10%. if new_v < round_limit * v: new_v += divisor return new_v def extend_tuple(x, n): # pads a tuple to specified n by padding with last value if not isinstance(x, (tuple, list)): x = (x,) else: x = tuple(x) pad_n = n - len(x) if pad_n <= 0: return x[:n] return x + (x[-1],) * pad_n
pytorch-image-models/timm/layers/helpers.py/0
{ "file_path": "pytorch-image-models/timm/layers/helpers.py", "repo_id": "pytorch-image-models", "token_count": 462 }
192
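The helpers above are tiny but used throughout the layer code; their behaviour follows directly from the definitions (expected outputs noted in comments):

```python
from timm.layers.helpers import to_2tuple, make_divisible, extend_tuple

print(to_2tuple(7))             # (7, 7)        scalar is repeated
print(to_2tuple((3, 5)))        # (3, 5)        iterables pass through unchanged
print(make_divisible(30))       # 32            nearest multiple of 8, never < 90% of the input
print(extend_tuple((1, 2), 4))  # (1, 2, 2, 2)  padded with the last value
```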
""" Position Embedding Utilities Hacked together by / Copyright 2022 Ross Wightman """ import logging import math from typing import List, Tuple, Optional, Union import torch import torch.nn.functional as F from .helpers import to_2tuple _logger = logging.getLogger(__name__) def resample_abs_pos_embed( posemb, new_size: List[int], old_size: Optional[List[int]] = None, num_prefix_tokens: int = 1, interpolation: str = 'bicubic', antialias: bool = True, verbose: bool = False, ): # sort out sizes, assume square if old size not provided num_pos_tokens = posemb.shape[1] num_new_tokens = new_size[0] * new_size[1] + num_prefix_tokens if num_new_tokens == num_pos_tokens and new_size[0] == new_size[1]: return posemb if old_size is None: hw = int(math.sqrt(num_pos_tokens - num_prefix_tokens)) old_size = hw, hw if num_prefix_tokens: posemb_prefix, posemb = posemb[:, :num_prefix_tokens], posemb[:, num_prefix_tokens:] else: posemb_prefix, posemb = None, posemb # do the interpolation embed_dim = posemb.shape[-1] orig_dtype = posemb.dtype posemb = posemb.float() # interpolate needs float32 posemb = posemb.reshape(1, old_size[0], old_size[1], -1).permute(0, 3, 1, 2) posemb = F.interpolate(posemb, size=new_size, mode=interpolation, antialias=antialias) posemb = posemb.permute(0, 2, 3, 1).reshape(1, -1, embed_dim) posemb = posemb.to(orig_dtype) # add back extra (class, etc) prefix tokens if posemb_prefix is not None: posemb = torch.cat([posemb_prefix, posemb], dim=1) if not torch.jit.is_scripting() and verbose: _logger.info(f'Resized position embedding: {old_size} to {new_size}.') return posemb def resample_abs_pos_embed_nhwc( posemb, new_size: List[int], interpolation: str = 'bicubic', antialias: bool = True, verbose: bool = False, ): if new_size[0] == posemb.shape[-3] and new_size[1] == posemb.shape[-2]: return posemb orig_dtype = posemb.dtype posemb = posemb.float() # do the interpolation posemb = posemb.reshape(1, posemb.shape[-3], posemb.shape[-2], posemb.shape[-1]).permute(0, 3, 1, 2) posemb = F.interpolate(posemb, size=new_size, mode=interpolation, antialias=antialias) posemb = posemb.permute(0, 2, 3, 1).to(orig_dtype) if not torch.jit.is_scripting() and verbose: _logger.info(f'Resized position embedding: {posemb.shape[-3:-1]} to {new_size}.') return posemb
pytorch-image-models/timm/layers/pos_embed.py/0
{ "file_path": "pytorch-image-models/timm/layers/pos_embed.py", "repo_id": "pytorch-image-models", "token_count": 1127 }
193
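`resample_abs_pos_embed` above is what makes it possible to fine-tune a ViT at a resolution other than the one it was pretrained at. Here is a sketch on a ViT-style table with one class-token prefix (the 14x14 to 24x24 grid change corresponds to, e.g., moving from 224 to 384 pixel inputs at patch size 16; the embed dim is arbitrary):

```python
import torch
from timm.layers.pos_embed import resample_abs_pos_embed

posemb = torch.randn(1, 1 + 14 * 14, 384)   # 1 class token + 14x14 patch grid

posemb_new = resample_abs_pos_embed(posemb, new_size=[24, 24], num_prefix_tokens=1)
print(posemb_new.shape)  # torch.Size([1, 577, 384]) = 1 prefix token + 24*24 positions
```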
""" Binary Cross Entropy w/ a few extras Hacked together by / Copyright 2021 Ross Wightman """ from typing import Optional, Union import torch import torch.nn as nn import torch.nn.functional as F class BinaryCrossEntropy(nn.Module): """ BCE with optional one-hot from dense targets, label smoothing, thresholding NOTE for experiments comparing CE to BCE /w label smoothing, may remove """ def __init__( self, smoothing=0.1, target_threshold: Optional[float] = None, weight: Optional[torch.Tensor] = None, reduction: str = 'mean', sum_classes: bool = False, pos_weight: Optional[Union[torch.Tensor, float]] = None, ): super(BinaryCrossEntropy, self).__init__() assert 0. <= smoothing < 1.0 if pos_weight is not None: if not isinstance(pos_weight, torch.Tensor): pos_weight = torch.tensor(pos_weight) self.smoothing = smoothing self.target_threshold = target_threshold self.reduction = 'none' if sum_classes else reduction self.sum_classes = sum_classes self.register_buffer('weight', weight) self.register_buffer('pos_weight', pos_weight) def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor: batch_size = x.shape[0] assert batch_size == target.shape[0] if target.shape != x.shape: # NOTE currently assume smoothing or other label softening is applied upstream if targets are already sparse num_classes = x.shape[-1] # FIXME should off/on be different for smoothing w/ BCE? Other impl out there differ off_value = self.smoothing / num_classes on_value = 1. - self.smoothing + off_value target = target.long().view(-1, 1) target = torch.full( (batch_size, num_classes), off_value, device=x.device, dtype=x.dtype).scatter_(1, target, on_value) if self.target_threshold is not None: # Make target 0, or 1 if threshold set target = target.gt(self.target_threshold).to(dtype=target.dtype) loss = F.binary_cross_entropy_with_logits( x, target, self.weight, pos_weight=self.pos_weight, reduction=self.reduction, ) if self.sum_classes: loss = loss.sum(-1).mean() return loss
pytorch-image-models/timm/loss/binary_cross_entropy.py/0
{ "file_path": "pytorch-image-models/timm/loss/binary_cross_entropy.py", "repo_id": "pytorch-image-models", "token_count": 1082 }
194
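`BinaryCrossEntropy` above accepts either dense class indices or already-soft target tensors. A minimal sketch with dense targets, label smoothing 0.1, and the default mean reduction (batch and class counts are arbitrary):

```python
import torch
from timm.loss.binary_cross_entropy import BinaryCrossEntropy

criterion = BinaryCrossEntropy(smoothing=0.1)

logits = torch.randn(8, 1000)            # raw model outputs
targets = torch.randint(0, 1000, (8,))   # dense class indices, not one-hot

# Dense targets are expanded to a smoothed one-hot inside forward(), then BCE-with-logits is applied.
loss = criterion(logits, targets)
print(loss)   # scalar tensor
```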
""" DeiT - Data-efficient Image Transformers DeiT model defs and weights from https://github.com/facebookresearch/deit, original copyright below paper: `DeiT: Data-efficient Image Transformers` - https://arxiv.org/abs/2012.12877 paper: `DeiT III: Revenge of the ViT` - https://arxiv.org/abs/2204.07118 Modifications copyright 2021, Ross Wightman """ # Copyright (c) 2015-present, Facebook, Inc. # All rights reserved. from functools import partial from typing import Sequence, Union import torch from torch import nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import resample_abs_pos_embed from timm.models.vision_transformer import VisionTransformer, trunc_normal_, checkpoint_filter_fn from ._builder import build_model_with_cfg from ._manipulate import checkpoint_seq from ._registry import generate_default_cfgs, register_model, register_model_deprecations __all__ = ['VisionTransformerDistilled'] # model_registry will add each entrypoint fn to this class VisionTransformerDistilled(VisionTransformer): """ Vision Transformer w/ Distillation Token and Head Distillation token & head support for `DeiT: Data-efficient Image Transformers` - https://arxiv.org/abs/2012.12877 """ def __init__(self, *args, **kwargs): weight_init = kwargs.pop('weight_init', '') super().__init__(*args, **kwargs, weight_init='skip') assert self.global_pool in ('token',) self.num_prefix_tokens = 2 self.dist_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim)) self.pos_embed = nn.Parameter( torch.zeros(1, self.patch_embed.num_patches + self.num_prefix_tokens, self.embed_dim)) self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if self.num_classes > 0 else nn.Identity() self.distilled_training = False # must set this True to train w/ distillation token self.init_weights(weight_init) def init_weights(self, mode=''): trunc_normal_(self.dist_token, std=.02) super().init_weights(mode=mode) @torch.jit.ignore def group_matcher(self, coarse=False): return dict( stem=r'^cls_token|pos_embed|patch_embed|dist_token', blocks=[ (r'^blocks\.(\d+)', None), (r'^norm', (99999,))] # final norm w/ last block ) @torch.jit.ignore def get_classifier(self): return self.head, self.head_dist def reset_classifier(self, num_classes, global_pool=None): self.num_classes = num_classes self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity() @torch.jit.ignore def set_distilled_training(self, enable=True): self.distilled_training = enable def _pos_embed(self, x): if self.dynamic_img_size: B, H, W, C = x.shape pos_embed = resample_abs_pos_embed( self.pos_embed, (H, W), num_prefix_tokens=0 if self.no_embed_class else self.num_prefix_tokens, ) x = x.view(B, -1, C) else: pos_embed = self.pos_embed if self.no_embed_class: # deit-3, updated JAX (big vision) # position embedding does not overlap with class token, add then concat x = x + pos_embed x = torch.cat(( self.cls_token.expand(x.shape[0], -1, -1), self.dist_token.expand(x.shape[0], -1, -1), x), dim=1) else: # original timm, JAX, and deit vit impl # pos_embed has entry for class token, concat then add x = torch.cat(( self.cls_token.expand(x.shape[0], -1, -1), self.dist_token.expand(x.shape[0], -1, -1), x), dim=1) x = x + pos_embed return self.pos_drop(x) def forward_head(self, x, pre_logits: bool = False) -> torch.Tensor: x, x_dist = x[:, 0], x[:, 1] if pre_logits: return (x + x_dist) / 2 x = self.head(x) x_dist = 
self.head_dist(x_dist) if self.distilled_training and self.training and not torch.jit.is_scripting(): # only return separate classification predictions when training in distilled mode return x, x_dist else: # during standard train / finetune, inference average the classifier predictions return (x + x_dist) / 2 def _create_deit(variant, pretrained=False, distilled=False, **kwargs): if kwargs.get('features_only', None): raise RuntimeError('features_only not implemented for Vision Transformer models.') model_cls = VisionTransformerDistilled if distilled else VisionTransformer model = build_model_with_cfg( model_cls, variant, pretrained, pretrained_filter_fn=partial(checkpoint_filter_fn, adapt_layer_scale=True), **kwargs, ) return model def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True, 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'patch_embed.proj', 'classifier': 'head', **kwargs } default_cfgs = generate_default_cfgs({ # deit models (FB weights) 'deit_tiny_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_tiny_patch16_224-a1311bcf.pth'), 'deit_small_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_small_patch16_224-cd65a155.pth'), 'deit_base_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth'), 'deit_base_patch16_384.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_base_patch16_384-8de9b5d1.pth', input_size=(3, 384, 384), crop_pct=1.0), 'deit_tiny_distilled_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_tiny_distilled_patch16_224-b40b3cf7.pth', classifier=('head', 'head_dist')), 'deit_small_distilled_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_small_distilled_patch16_224-649709d9.pth', classifier=('head', 'head_dist')), 'deit_base_distilled_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_base_distilled_patch16_224-df68dfff.pth', classifier=('head', 'head_dist')), 'deit_base_distilled_patch16_384.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_base_distilled_patch16_384-d0272ac0.pth', input_size=(3, 384, 384), crop_pct=1.0, classifier=('head', 'head_dist')), 'deit3_small_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_small_224_1k.pth'), 'deit3_small_patch16_384.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_small_384_1k.pth', input_size=(3, 384, 384), crop_pct=1.0), 'deit3_medium_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_medium_224_1k.pth'), 'deit3_base_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_base_224_1k.pth'), 'deit3_base_patch16_384.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_base_384_1k.pth', input_size=(3, 384, 384), crop_pct=1.0), 'deit3_large_patch16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_large_224_1k.pth'), 'deit3_large_patch16_384.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_large_384_1k.pth', input_size=(3, 384, 384), crop_pct=1.0), 
'deit3_huge_patch14_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_huge_224_1k.pth'), 'deit3_small_patch16_224.fb_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_small_224_21k.pth', crop_pct=1.0), 'deit3_small_patch16_384.fb_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_small_384_21k.pth', input_size=(3, 384, 384), crop_pct=1.0), 'deit3_medium_patch16_224.fb_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_medium_224_21k.pth', crop_pct=1.0), 'deit3_base_patch16_224.fb_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_base_224_21k.pth', crop_pct=1.0), 'deit3_base_patch16_384.fb_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_base_384_21k.pth', input_size=(3, 384, 384), crop_pct=1.0), 'deit3_large_patch16_224.fb_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_large_224_21k.pth', crop_pct=1.0), 'deit3_large_patch16_384.fb_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_large_384_21k.pth', input_size=(3, 384, 384), crop_pct=1.0), 'deit3_huge_patch14_224.fb_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/deit_3_huge_224_21k_v1.pth', crop_pct=1.0), }) @register_model def deit_tiny_patch16_224(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-tiny model @ 224x224 from paper (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=192, depth=12, num_heads=3) model = _create_deit('deit_tiny_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit_small_patch16_224(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-small model @ 224x224 from paper (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6) model = _create_deit('deit_small_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit_base_patch16_224(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT base model @ 224x224 from paper (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12) model = _create_deit('deit_base_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit_base_patch16_384(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT base model @ 384x384 from paper (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12) model = _create_deit('deit_base_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit_tiny_distilled_patch16_224(pretrained=False, **kwargs) -> VisionTransformerDistilled: """ DeiT-tiny distilled model @ 224x224 from paper (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. 
""" model_args = dict(patch_size=16, embed_dim=192, depth=12, num_heads=3) model = _create_deit( 'deit_tiny_distilled_patch16_224', pretrained=pretrained, distilled=True, **dict(model_args, **kwargs)) return model @register_model def deit_small_distilled_patch16_224(pretrained=False, **kwargs) -> VisionTransformerDistilled: """ DeiT-small distilled model @ 224x224 from paper (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6) model = _create_deit( 'deit_small_distilled_patch16_224', pretrained=pretrained, distilled=True, **dict(model_args, **kwargs)) return model @register_model def deit_base_distilled_patch16_224(pretrained=False, **kwargs) -> VisionTransformerDistilled: """ DeiT-base distilled model @ 224x224 from paper (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12) model = _create_deit( 'deit_base_distilled_patch16_224', pretrained=pretrained, distilled=True, **dict(model_args, **kwargs)) return model @register_model def deit_base_distilled_patch16_384(pretrained=False, **kwargs) -> VisionTransformerDistilled: """ DeiT-base distilled model @ 384x384 from paper (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12) model = _create_deit( 'deit_base_distilled_patch16_384', pretrained=pretrained, distilled=True, **dict(model_args, **kwargs)) return model @register_model def deit3_small_patch16_224(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-3 small model @ 224x224 from paper (https://arxiv.org/abs/2204.07118). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6, no_embed_class=True, init_values=1e-6) model = _create_deit('deit3_small_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit3_small_patch16_384(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-3 small model @ 384x384 from paper (https://arxiv.org/abs/2204.07118). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6, no_embed_class=True, init_values=1e-6) model = _create_deit('deit3_small_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit3_medium_patch16_224(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-3 medium model @ 224x224 (https://arxiv.org/abs/2012.12877). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=512, depth=12, num_heads=8, no_embed_class=True, init_values=1e-6) model = _create_deit('deit3_medium_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit3_base_patch16_224(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-3 base model @ 224x224 from paper (https://arxiv.org/abs/2204.07118). ImageNet-1k weights from https://github.com/facebookresearch/deit. 
""" model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, no_embed_class=True, init_values=1e-6) model = _create_deit('deit3_base_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit3_base_patch16_384(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-3 base model @ 384x384 from paper (https://arxiv.org/abs/2204.07118). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, no_embed_class=True, init_values=1e-6) model = _create_deit('deit3_base_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit3_large_patch16_224(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-3 large model @ 224x224 from paper (https://arxiv.org/abs/2204.07118). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=1024, depth=24, num_heads=16, no_embed_class=True, init_values=1e-6) model = _create_deit('deit3_large_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit3_large_patch16_384(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-3 large model @ 384x384 from paper (https://arxiv.org/abs/2204.07118). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=16, embed_dim=1024, depth=24, num_heads=16, no_embed_class=True, init_values=1e-6) model = _create_deit('deit3_large_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def deit3_huge_patch14_224(pretrained=False, **kwargs) -> VisionTransformer: """ DeiT-3 base model @ 384x384 from paper (https://arxiv.org/abs/2204.07118). ImageNet-1k weights from https://github.com/facebookresearch/deit. """ model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16, no_embed_class=True, init_values=1e-6) model = _create_deit('deit3_huge_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model register_model_deprecations(__name__, { 'deit3_small_patch16_224_in21ft1k': 'deit3_small_patch16_224.fb_in22k_ft_in1k', 'deit3_small_patch16_384_in21ft1k': 'deit3_small_patch16_384.fb_in22k_ft_in1k', 'deit3_medium_patch16_224_in21ft1k': 'deit3_medium_patch16_224.fb_in22k_ft_in1k', 'deit3_base_patch16_224_in21ft1k': 'deit3_base_patch16_224.fb_in22k_ft_in1k', 'deit3_base_patch16_384_in21ft1k': 'deit3_base_patch16_384.fb_in22k_ft_in1k', 'deit3_large_patch16_224_in21ft1k': 'deit3_large_patch16_224.fb_in22k_ft_in1k', 'deit3_large_patch16_384_in21ft1k': 'deit3_large_patch16_384.fb_in22k_ft_in1k', 'deit3_huge_patch14_224_in21ft1k': 'deit3_huge_patch14_224.fb_in22k_ft_in1k' })
pytorch-image-models/timm/models/deit.py/0
{ "file_path": "pytorch-image-models/timm/models/deit.py", "repo_id": "pytorch-image-models", "token_count": 8300 }
195
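`deit.py` above registers both plain and distilled DeiT variants. The distilled model averages its classification and distillation heads at inference but can return them separately for distillation training, as sketched below (`pretrained=False`, so nothing is downloaded; the entrypoint name comes from the registrations above):

```python
import torch
import timm

model = timm.create_model('deit_base_distilled_patch16_224', pretrained=False)
x = torch.randn(2, 3, 224, 224)

# Inference / fine-tune path: the two heads are averaged into one prediction.
model.eval()
print(model(x).shape)                  # torch.Size([2, 1000])

# Distillation training path: separate class-token and distillation-token logits.
model.set_distilled_training(True)
model.train()
out_cls, out_dist = model(x)
print(out_cls.shape, out_dist.shape)   # torch.Size([2, 1000]) each
```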
""" Global Context ViT From scratch implementation of GCViT in the style of timm swin_transformer_v2_cr.py Global Context Vision Transformers -https://arxiv.org/abs/2206.09959 @article{hatamizadeh2022global, title={Global Context Vision Transformers}, author={Hatamizadeh, Ali and Yin, Hongxu and Kautz, Jan and Molchanov, Pavlo}, journal={arXiv preprint arXiv:2206.09959}, year={2022} } Free of any code related to NVIDIA GCVit impl at https://github.com/NVlabs/GCVit. The license for this code release is Apache 2.0 with no commercial restrictions. However, weight files adapted from NVIDIA GCVit impl ARE under a non-commercial share-alike license (https://creativecommons.org/licenses/by-nc-sa/4.0/) until I have a chance to train new ones... Hacked together by / Copyright 2022, Ross Wightman """ import math from functools import partial from typing import Callable, List, Optional, Tuple, Union import torch import torch.nn as nn import torch.utils.checkpoint as checkpoint from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import DropPath, to_2tuple, to_ntuple, Mlp, ClassifierHead, LayerNorm2d, \ get_attn, get_act_layer, get_norm_layer, RelPosBias, _assert from ._builder import build_model_with_cfg from ._features_fx import register_notrace_function from ._manipulate import named_apply from ._registry import register_model, generate_default_cfgs __all__ = ['GlobalContextVit'] class MbConvBlock(nn.Module): """ A depthwise separable / fused mbconv style residual block with SE, `no norm. """ def __init__( self, in_chs, out_chs=None, expand_ratio=1.0, attn_layer='se', bias=False, act_layer=nn.GELU, ): super().__init__() attn_kwargs = dict(act_layer=act_layer) if isinstance(attn_layer, str) and attn_layer == 'se' or attn_layer == 'eca': attn_kwargs['rd_ratio'] = 0.25 attn_kwargs['bias'] = False attn_layer = get_attn(attn_layer) out_chs = out_chs or in_chs mid_chs = int(expand_ratio * in_chs) self.conv_dw = nn.Conv2d(in_chs, mid_chs, 3, 1, 1, groups=in_chs, bias=bias) self.act = act_layer() self.se = attn_layer(mid_chs, **attn_kwargs) self.conv_pw = nn.Conv2d(mid_chs, out_chs, 1, 1, 0, bias=bias) def forward(self, x): shortcut = x x = self.conv_dw(x) x = self.act(x) x = self.se(x) x = self.conv_pw(x) x = x + shortcut return x class Downsample2d(nn.Module): def __init__( self, dim, dim_out=None, reduction='conv', act_layer=nn.GELU, norm_layer=LayerNorm2d, # NOTE in NCHW ): super().__init__() dim_out = dim_out or dim self.norm1 = norm_layer(dim) if norm_layer is not None else nn.Identity() self.conv_block = MbConvBlock(dim, act_layer=act_layer) assert reduction in ('conv', 'max', 'avg') if reduction == 'conv': self.reduction = nn.Conv2d(dim, dim_out, 3, 2, 1, bias=False) elif reduction == 'max': assert dim == dim_out self.reduction = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) else: assert dim == dim_out self.reduction = nn.AvgPool2d(kernel_size=2) self.norm2 = norm_layer(dim_out) if norm_layer is not None else nn.Identity() def forward(self, x): x = self.norm1(x) x = self.conv_block(x) x = self.reduction(x) x = self.norm2(x) return x class FeatureBlock(nn.Module): def __init__( self, dim, levels=0, reduction='max', act_layer=nn.GELU, ): super().__init__() reductions = levels levels = max(1, levels) if reduction == 'avg': pool_fn = partial(nn.AvgPool2d, kernel_size=2) else: pool_fn = partial(nn.MaxPool2d, kernel_size=3, stride=2, padding=1) self.blocks = nn.Sequential() for i in range(levels): self.blocks.add_module(f'conv{i+1}', MbConvBlock(dim, act_layer=act_layer)) 
if reductions: self.blocks.add_module(f'pool{i+1}', pool_fn()) reductions -= 1 def forward(self, x): return self.blocks(x) class Stem(nn.Module): def __init__( self, in_chs: int = 3, out_chs: int = 96, act_layer: Callable = nn.GELU, norm_layer: Callable = LayerNorm2d, # NOTE stem in NCHW ): super().__init__() self.conv1 = nn.Conv2d(in_chs, out_chs, kernel_size=3, stride=2, padding=1) self.down = Downsample2d(out_chs, act_layer=act_layer, norm_layer=norm_layer) def forward(self, x): x = self.conv1(x) x = self.down(x) return x class WindowAttentionGlobal(nn.Module): def __init__( self, dim: int, num_heads: int, window_size: Tuple[int, int], use_global: bool = True, qkv_bias: bool = True, attn_drop: float = 0., proj_drop: float = 0., ): super().__init__() window_size = to_2tuple(window_size) self.window_size = window_size self.num_heads = num_heads self.head_dim = dim // num_heads self.scale = self.head_dim ** -0.5 self.use_global = use_global self.rel_pos = RelPosBias(window_size=window_size, num_heads=num_heads) if self.use_global: self.qkv = nn.Linear(dim, dim * 2, bias=qkv_bias) else: self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop) self.proj = nn.Linear(dim, dim) self.proj_drop = nn.Dropout(proj_drop) def forward(self, x, q_global: Optional[torch.Tensor] = None): B, N, C = x.shape if self.use_global and q_global is not None: _assert(x.shape[-1] == q_global.shape[-1], 'x and q_global seq lengths should be equal') kv = self.qkv(x) kv = kv.reshape(B, N, 2, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4) k, v = kv.unbind(0) q = q_global.repeat(B // q_global.shape[0], 1, 1, 1) q = q.reshape(B, N, self.num_heads, self.head_dim).permute(0, 2, 1, 3) else: qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4) q, k, v = qkv.unbind(0) q = q * self.scale attn = q @ k.transpose(-2, -1).contiguous() # NOTE contiguous() fixes an odd jit bug in PyTorch 2.0 attn = self.rel_pos(attn) attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = (attn @ v).transpose(1, 2).reshape(B, N, C) x = self.proj(x) x = self.proj_drop(x) return x def window_partition(x, window_size: Tuple[int, int]): B, H, W, C = x.shape x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C) windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C) return windows @register_notrace_function # reason: int argument is a Proxy def window_reverse(windows, window_size: Tuple[int, int], img_size: Tuple[int, int]): H, W = img_size C = windows.shape[-1] x = windows.view(-1, H // window_size[0], W // window_size[1], window_size[0], window_size[1], C) x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, H, W, C) return x class LayerScale(nn.Module): def __init__(self, dim, init_values=1e-5, inplace=False): super().__init__() self.inplace = inplace self.gamma = nn.Parameter(init_values * torch.ones(dim)) def forward(self, x): return x.mul_(self.gamma) if self.inplace else x * self.gamma class GlobalContextVitBlock(nn.Module): def __init__( self, dim: int, feat_size: Tuple[int, int], num_heads: int, window_size: int = 7, mlp_ratio: float = 4., use_global: bool = True, qkv_bias: bool = True, layer_scale: Optional[float] = None, proj_drop: float = 0., attn_drop: float = 0., drop_path: float = 0., attn_layer: Callable = WindowAttentionGlobal, act_layer: Callable = nn.GELU, norm_layer: Callable = nn.LayerNorm, ): super().__init__() feat_size = to_2tuple(feat_size) window_size = to_2tuple(window_size) 
self.window_size = window_size self.num_windows = int((feat_size[0] // window_size[0]) * (feat_size[1] // window_size[1])) self.norm1 = norm_layer(dim) self.attn = attn_layer( dim, num_heads=num_heads, window_size=window_size, use_global=use_global, qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=proj_drop, ) self.ls1 = LayerScale(dim, layer_scale) if layer_scale is not None else nn.Identity() self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity() self.norm2 = norm_layer(dim) self.mlp = Mlp(in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=proj_drop) self.ls2 = LayerScale(dim, layer_scale) if layer_scale is not None else nn.Identity() self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity() def _window_attn(self, x, q_global: Optional[torch.Tensor] = None): B, H, W, C = x.shape x_win = window_partition(x, self.window_size) x_win = x_win.view(-1, self.window_size[0] * self.window_size[1], C) attn_win = self.attn(x_win, q_global) x = window_reverse(attn_win, self.window_size, (H, W)) return x def forward(self, x, q_global: Optional[torch.Tensor] = None): x = x + self.drop_path1(self.ls1(self._window_attn(self.norm1(x), q_global))) x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x)))) return x class GlobalContextVitStage(nn.Module): def __init__( self, dim, depth: int, num_heads: int, feat_size: Tuple[int, int], window_size: Tuple[int, int], downsample: bool = True, global_norm: bool = False, stage_norm: bool = False, mlp_ratio: float = 4., qkv_bias: bool = True, layer_scale: Optional[float] = None, proj_drop: float = 0., attn_drop: float = 0., drop_path: Union[List[float], float] = 0.0, act_layer: Callable = nn.GELU, norm_layer: Callable = nn.LayerNorm, norm_layer_cl: Callable = LayerNorm2d, ): super().__init__() if downsample: self.downsample = Downsample2d( dim=dim, dim_out=dim * 2, norm_layer=norm_layer, ) dim = dim * 2 feat_size = (feat_size[0] // 2, feat_size[1] // 2) else: self.downsample = nn.Identity() self.feat_size = feat_size window_size = to_2tuple(window_size) feat_levels = int(math.log2(min(feat_size) / min(window_size))) self.global_block = FeatureBlock(dim, feat_levels) self.global_norm = norm_layer_cl(dim) if global_norm else nn.Identity() self.blocks = nn.ModuleList([ GlobalContextVitBlock( dim=dim, num_heads=num_heads, feat_size=feat_size, window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, use_global=(i % 2 != 0), layer_scale=layer_scale, proj_drop=proj_drop, attn_drop=attn_drop, drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, act_layer=act_layer, norm_layer=norm_layer_cl, ) for i in range(depth) ]) self.norm = norm_layer_cl(dim) if stage_norm else nn.Identity() self.dim = dim self.feat_size = feat_size self.grad_checkpointing = False def forward(self, x): # input NCHW, downsample & global block are 2d conv + pooling x = self.downsample(x) global_query = self.global_block(x) # reshape NCHW --> NHWC for transformer blocks x = x.permute(0, 2, 3, 1) global_query = self.global_norm(global_query.permute(0, 2, 3, 1)) for blk in self.blocks: if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint.checkpoint(blk, x) else: x = blk(x, global_query) x = self.norm(x) x = x.permute(0, 3, 1, 2).contiguous() # back to NCHW return x class GlobalContextVit(nn.Module): def __init__( self, in_chans: int = 3, num_classes: int = 1000, global_pool: str = 'avg', img_size: Tuple[int, int] = 224, window_ratio: Tuple[int, ...] 
= (32, 32, 16, 32), window_size: Tuple[int, ...] = None, embed_dim: int = 64, depths: Tuple[int, ...] = (3, 4, 19, 5), num_heads: Tuple[int, ...] = (2, 4, 8, 16), mlp_ratio: float = 3.0, qkv_bias: bool = True, layer_scale: Optional[float] = None, drop_rate: float = 0., proj_drop_rate: float = 0., attn_drop_rate: float = 0., drop_path_rate: float = 0., weight_init='', act_layer: str = 'gelu', norm_layer: str = 'layernorm2d', norm_layer_cl: str = 'layernorm', norm_eps: float = 1e-5, ): super().__init__() act_layer = get_act_layer(act_layer) norm_layer = partial(get_norm_layer(norm_layer), eps=norm_eps) norm_layer_cl = partial(get_norm_layer(norm_layer_cl), eps=norm_eps) img_size = to_2tuple(img_size) feat_size = tuple(d // 4 for d in img_size) # stem reduction by 4 self.global_pool = global_pool self.num_classes = num_classes self.drop_rate = drop_rate num_stages = len(depths) self.num_features = int(embed_dim * 2 ** (num_stages - 1)) if window_size is not None: window_size = to_ntuple(num_stages)(window_size) else: assert window_ratio is not None window_size = tuple([(img_size[0] // r, img_size[1] // r) for r in to_ntuple(num_stages)(window_ratio)]) self.stem = Stem( in_chs=in_chans, out_chs=embed_dim, act_layer=act_layer, norm_layer=norm_layer ) dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)] stages = [] for i in range(num_stages): last_stage = i == num_stages - 1 stage_scale = 2 ** max(i - 1, 0) stages.append(GlobalContextVitStage( dim=embed_dim * stage_scale, depth=depths[i], num_heads=num_heads[i], feat_size=(feat_size[0] // stage_scale, feat_size[1] // stage_scale), window_size=window_size[i], downsample=i != 0, stage_norm=last_stage, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, layer_scale=layer_scale, proj_drop=proj_drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], act_layer=act_layer, norm_layer=norm_layer, norm_layer_cl=norm_layer_cl, )) self.stages = nn.Sequential(*stages) # Classifier head self.head = ClassifierHead(self.num_features, num_classes, pool_type=global_pool, drop_rate=drop_rate) if weight_init: named_apply(partial(self._init_weights, scheme=weight_init), self) def _init_weights(self, module, name, scheme='vit'): # note Conv2d left as default init if scheme == 'vit': if isinstance(module, nn.Linear): nn.init.xavier_uniform_(module.weight) if module.bias is not None: if 'mlp' in name: nn.init.normal_(module.bias, std=1e-6) else: nn.init.zeros_(module.bias) else: if isinstance(module, nn.Linear): nn.init.normal_(module.weight, std=.02) if module.bias is not None: nn.init.zeros_(module.bias) @torch.jit.ignore def no_weight_decay(self): return { k for k, _ in self.named_parameters() if any(n in k for n in ["relative_position_bias_table", "rel_pos.mlp"])} @torch.jit.ignore def group_matcher(self, coarse=False): matcher = dict( stem=r'^stem', # stem and embed blocks=r'^stages\.(\d+)' ) return matcher @torch.jit.ignore def set_grad_checkpointing(self, enable=True): for s in self.stages: s.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self): return self.head.fc def reset_classifier(self, num_classes, global_pool=None): self.num_classes = num_classes if global_pool is None: global_pool = self.head.global_pool.pool_type self.head = ClassifierHead(self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate) def forward_features(self, x: torch.Tensor) -> torch.Tensor: x = self.stem(x) x = self.stages(x) return x def forward_head(self, x, pre_logits: bool = False): return self.head(x, 
pre_logits=pre_logits) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.forward_features(x) x = self.forward_head(x) return x def _create_gcvit(variant, pretrained=False, **kwargs): if kwargs.get('features_only', None): raise RuntimeError('features_only not implemented for Vision Transformer models.') model = build_model_with_cfg(GlobalContextVit, variant, pretrained, **kwargs) return model def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.875, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'stem.conv1', 'classifier': 'head.fc', 'fixed_input_size': True, **kwargs } default_cfgs = generate_default_cfgs({ 'gcvit_xxtiny.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_xxtiny_224_nvidia-d1d86009.pth'), 'gcvit_xtiny.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_xtiny_224_nvidia-274b92b7.pth'), 'gcvit_tiny.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_tiny_224_nvidia-ac783954.pth'), 'gcvit_small.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_small_224_nvidia-4e98afa2.pth'), 'gcvit_base.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_base_224_nvidia-f009139b.pth'), }) @register_model def gcvit_xxtiny(pretrained=False, **kwargs) -> GlobalContextVit: model_kwargs = dict( depths=(2, 2, 6, 2), num_heads=(2, 4, 8, 16), **kwargs) return _create_gcvit('gcvit_xxtiny', pretrained=pretrained, **model_kwargs) @register_model def gcvit_xtiny(pretrained=False, **kwargs) -> GlobalContextVit: model_kwargs = dict( depths=(3, 4, 6, 5), num_heads=(2, 4, 8, 16), **kwargs) return _create_gcvit('gcvit_xtiny', pretrained=pretrained, **model_kwargs) @register_model def gcvit_tiny(pretrained=False, **kwargs) -> GlobalContextVit: model_kwargs = dict( depths=(3, 4, 19, 5), num_heads=(2, 4, 8, 16), **kwargs) return _create_gcvit('gcvit_tiny', pretrained=pretrained, **model_kwargs) @register_model def gcvit_small(pretrained=False, **kwargs) -> GlobalContextVit: model_kwargs = dict( depths=(3, 4, 19, 5), num_heads=(3, 6, 12, 24), embed_dim=96, mlp_ratio=2, layer_scale=1e-5, **kwargs) return _create_gcvit('gcvit_small', pretrained=pretrained, **model_kwargs) @register_model def gcvit_base(pretrained=False, **kwargs) -> GlobalContextVit: model_kwargs = dict( depths=(3, 4, 19, 5), num_heads=(4, 8, 16, 32), embed_dim=128, mlp_ratio=2, layer_scale=1e-5, **kwargs) return _create_gcvit('gcvit_base', pretrained=pretrained, **model_kwargs)
pytorch-image-models/timm/models/gcvit.py/0
{ "file_path": "pytorch-image-models/timm/models/gcvit.py", "repo_id": "pytorch-image-models", "token_count": 10789 }
196
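The gcvit.py record above ends with the registered GC-ViT entrypoints (`gcvit_xxtiny` through `gcvit_base`). Below is a minimal usage sketch, assuming the `timm` package is installed and that these registered names resolve through `timm.create_model`; the shapes in the comments follow from the default `embed_dim=64`, four-stage configuration and are illustrative only.

```python
import torch
import timm

# Build a GlobalContextVit without pretrained weights. The input size is fixed at
# construction time because window sizes are derived from img_size via window_ratio.
model = timm.create_model('gcvit_xxtiny', pretrained=False, num_classes=10)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = model.forward_features(x)   # NCHW feature map from the final stage
    logits = model(x)                    # pooled and classified by ClassifierHead

print(feats.shape)    # roughly torch.Size([1, 512, 7, 7]) for this config
print(logits.shape)   # torch.Size([1, 10])
```

The split between `forward_features` and `forward_head` mirrors the class definition: the stem reduces resolution by 4, each later stage halves it again while doubling channels, and the head pools the NCHW output before the classifier.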
""" MobileNet V3 A PyTorch impl of MobileNet-V3, compatible with TF weights from official impl. Paper: Searching for MobileNetV3 - https://arxiv.org/abs/1905.02244 Hacked together by / Copyright 2019, Ross Wightman """ from functools import partial from typing import Callable, List, Optional, Tuple import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.checkpoint import checkpoint from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, IMAGENET_INCEPTION_MEAN, IMAGENET_INCEPTION_STD from timm.layers import SelectAdaptivePool2d, Linear, LayerType, PadType, create_conv2d, get_norm_act_layer from ._builder import build_model_with_cfg, pretrained_cfg_for_features from ._efficientnet_blocks import SqueezeExcite from ._efficientnet_builder import BlockArgs, EfficientNetBuilder, decode_arch_def, efficientnet_init_weights, \ round_channels, resolve_bn_args, resolve_act_layer, BN_EPS_TF_DEFAULT from ._features import FeatureInfo, FeatureHooks from ._manipulate import checkpoint_seq from ._registry import generate_default_cfgs, register_model, register_model_deprecations __all__ = ['MobileNetV3', 'MobileNetV3Features'] class MobileNetV3(nn.Module): """ MobiletNet-V3 Based on my EfficientNet implementation and building blocks, this model utilizes the MobileNet-v3 specific 'efficient head', where global pooling is done before the head convolution without a final batch-norm layer before the classifier. Paper: `Searching for MobileNetV3` - https://arxiv.org/abs/1905.02244 Other architectures utilizing MobileNet-V3 efficient head that are supported by this impl include: * HardCoRe-NAS - https://arxiv.org/abs/2102.11646 (defn in hardcorenas.py uses this class) * FBNet-V3 - https://arxiv.org/abs/2006.02049 * LCNet - https://arxiv.org/abs/2109.15099 """ def __init__( self, block_args: BlockArgs, num_classes: int = 1000, in_chans: int = 3, stem_size: int = 16, fix_stem: bool = False, num_features: int = 1280, head_bias: bool = True, pad_type: PadType = '', act_layer: Optional[LayerType] = None, norm_layer: Optional[LayerType] = None, se_layer: Optional[LayerType] = None, se_from_exp: bool = True, round_chs_fn: Callable = round_channels, drop_rate: float = 0., drop_path_rate: float = 0., global_pool: str = 'avg', ): """ Args: block_args: Arguments for blocks of the network. num_classes: Number of classes for classification head. in_chans: Number of input image channels. stem_size: Number of output channels of the initial stem convolution. fix_stem: If True, don't scale stem by round_chs_fn. num_features: Number of output channels of the conv head layer. head_bias: If True, add a learnable bias to the conv head layer. pad_type: Type of padding to use for convolution layers. act_layer: Type of activation layer. norm_layer: Type of normalization layer. se_layer: Type of Squeeze-and-Excite layer. se_from_exp: If True, calculate SE channel reduction from expanded mid channels. round_chs_fn: Callable to round number of filters based on depth multiplier. drop_rate: Dropout rate. drop_path_rate: Stochastic depth rate. global_pool: Type of pooling to use for global pooling features of the FC head. 
""" super(MobileNetV3, self).__init__() act_layer = act_layer or nn.ReLU norm_layer = norm_layer or nn.BatchNorm2d norm_act_layer = get_norm_act_layer(norm_layer, act_layer) se_layer = se_layer or SqueezeExcite self.num_classes = num_classes self.num_features = num_features self.drop_rate = drop_rate self.grad_checkpointing = False # Stem if not fix_stem: stem_size = round_chs_fn(stem_size) self.conv_stem = create_conv2d(in_chans, stem_size, 3, stride=2, padding=pad_type) self.bn1 = norm_act_layer(stem_size, inplace=True) # Middle stages (IR/ER/DS Blocks) builder = EfficientNetBuilder( output_stride=32, pad_type=pad_type, round_chs_fn=round_chs_fn, se_from_exp=se_from_exp, act_layer=act_layer, norm_layer=norm_layer, se_layer=se_layer, drop_path_rate=drop_path_rate, ) self.blocks = nn.Sequential(*builder(stem_size, block_args)) self.feature_info = builder.features head_chs = builder.in_chs # Head + Pooling self.global_pool = SelectAdaptivePool2d(pool_type=global_pool) num_pooled_chs = head_chs * self.global_pool.feat_mult() self.conv_head = create_conv2d(num_pooled_chs, self.num_features, 1, padding=pad_type, bias=head_bias) self.act2 = act_layer(inplace=True) self.flatten = nn.Flatten(1) if global_pool else nn.Identity() # don't flatten if pooling disabled self.classifier = Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() efficientnet_init_weights(self) def as_sequential(self): layers = [self.conv_stem, self.bn1] layers.extend(self.blocks) layers.extend([self.global_pool, self.conv_head, self.act2]) layers.extend([nn.Flatten(), nn.Dropout(self.drop_rate), self.classifier]) return nn.Sequential(*layers) @torch.jit.ignore def group_matcher(self, coarse: bool = False): return dict( stem=r'^conv_stem|bn1', blocks=r'^blocks\.(\d+)' if coarse else r'^blocks\.(\d+)\.(\d+)' ) @torch.jit.ignore def set_grad_checkpointing(self, enable: bool = True): self.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self): return self.classifier def reset_classifier(self, num_classes: int, global_pool: str = 'avg'): self.num_classes = num_classes # cannot meaningfully change pooling of efficient head after creation self.global_pool = SelectAdaptivePool2d(pool_type=global_pool) self.flatten = nn.Flatten(1) if global_pool else nn.Identity() # don't flatten if pooling disabled self.classifier = Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() def forward_features(self, x: torch.Tensor) -> torch.Tensor: x = self.conv_stem(x) x = self.bn1(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.blocks, x, flatten=True) else: x = self.blocks(x) return x def forward_head(self, x: torch.Tensor, pre_logits: bool = False) -> torch.Tensor: x = self.global_pool(x) x = self.conv_head(x) x = self.act2(x) x = self.flatten(x) if pre_logits: return x if self.drop_rate > 0.: x = F.dropout(x, p=self.drop_rate, training=self.training) return self.classifier(x) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.forward_features(x) x = self.forward_head(x) return x class MobileNetV3Features(nn.Module): """ MobileNetV3 Feature Extractor A work-in-progress feature extraction module for MobileNet-V3 to use as a backbone for segmentation and object detection models. """ def __init__( self, block_args: BlockArgs, out_indices: Tuple[int, ...] 
= (0, 1, 2, 3, 4), feature_location: str = 'bottleneck', in_chans: int = 3, stem_size: int = 16, fix_stem: bool = False, output_stride: int = 32, pad_type: PadType = '', round_chs_fn: Callable = round_channels, se_from_exp: bool = True, act_layer: Optional[LayerType] = None, norm_layer: Optional[LayerType] = None, se_layer: Optional[LayerType] = None, drop_rate: float = 0., drop_path_rate: float = 0., ): """ Args: block_args: Arguments for blocks of the network. out_indices: Output from stages at indices. feature_location: Location of feature before/after each block, must be in ['bottleneck', 'expansion'] in_chans: Number of input image channels. stem_size: Number of output channels of the initial stem convolution. fix_stem: If True, don't scale stem by round_chs_fn. output_stride: Output stride of the network. pad_type: Type of padding to use for convolution layers. round_chs_fn: Callable to round number of filters based on depth multiplier. se_from_exp: If True, calculate SE channel reduction from expanded mid channels. act_layer: Type of activation layer. norm_layer: Type of normalization layer. se_layer: Type of Squeeze-and-Excite layer. drop_rate: Dropout rate. drop_path_rate: Stochastic depth rate. """ super(MobileNetV3Features, self).__init__() act_layer = act_layer or nn.ReLU norm_layer = norm_layer or nn.BatchNorm2d se_layer = se_layer or SqueezeExcite self.drop_rate = drop_rate self.grad_checkpointing = False # Stem if not fix_stem: stem_size = round_chs_fn(stem_size) self.conv_stem = create_conv2d(in_chans, stem_size, 3, stride=2, padding=pad_type) self.bn1 = norm_layer(stem_size) self.act1 = act_layer(inplace=True) # Middle stages (IR/ER/DS Blocks) builder = EfficientNetBuilder( output_stride=output_stride, pad_type=pad_type, round_chs_fn=round_chs_fn, se_from_exp=se_from_exp, act_layer=act_layer, norm_layer=norm_layer, se_layer=se_layer, drop_path_rate=drop_path_rate, feature_location=feature_location, ) self.blocks = nn.Sequential(*builder(stem_size, block_args)) self.feature_info = FeatureInfo(builder.features, out_indices) self._stage_out_idx = {f['stage']: f['index'] for f in self.feature_info.get_dicts()} efficientnet_init_weights(self) # Register feature extraction hooks with FeatureHooks helper self.feature_hooks = None if feature_location != 'bottleneck': hooks = self.feature_info.get_dicts(keys=('module', 'hook_type')) self.feature_hooks = FeatureHooks(hooks, self.named_modules()) @torch.jit.ignore def set_grad_checkpointing(self, enable: bool = True): self.grad_checkpointing = enable def forward(self, x: torch.Tensor) -> List[torch.Tensor]: x = self.conv_stem(x) x = self.bn1(x) x = self.act1(x) if self.feature_hooks is None: features = [] if 0 in self._stage_out_idx: features.append(x) # add stem out for i, b in enumerate(self.blocks): if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint(b, x) else: x = b(x) if i + 1 in self._stage_out_idx: features.append(x) return features else: self.blocks(x) out = self.feature_hooks.get_output(x.device) return list(out.values()) def _create_mnv3(variant: str, pretrained: bool = False, **kwargs) -> MobileNetV3: features_mode = '' model_cls = MobileNetV3 kwargs_filter = None if kwargs.pop('features_only', False): if 'feature_cfg' in kwargs: features_mode = 'cfg' else: kwargs_filter = ('num_classes', 'num_features', 'head_conv', 'head_bias', 'global_pool') model_cls = MobileNetV3Features features_mode = 'cls' model = build_model_with_cfg( model_cls, variant, pretrained, features_only=features_mode == 'cfg', 
pretrained_strict=features_mode != 'cls', kwargs_filter=kwargs_filter, **kwargs, ) if features_mode == 'cls': model.default_cfg = pretrained_cfg_for_features(model.default_cfg) return model def _gen_mobilenet_v3_rw(variant: str, channel_multiplier: float = 1.0, pretrained: bool = False, **kwargs) -> MobileNetV3: """Creates a MobileNet-V3 model. Ref impl: ? Paper: https://arxiv.org/abs/1905.02244 Args: channel_multiplier: multiplier to number of channels per layer. """ arch_def = [ # stage 0, 112x112 in ['ds_r1_k3_s1_e1_c16_nre_noskip'], # relu # stage 1, 112x112 in ['ir_r1_k3_s2_e4_c24_nre', 'ir_r1_k3_s1_e3_c24_nre'], # relu # stage 2, 56x56 in ['ir_r3_k5_s2_e3_c40_se0.25_nre'], # relu # stage 3, 28x28 in ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], # hard-swish # stage 4, 14x14in ['ir_r2_k3_s1_e6_c112_se0.25'], # hard-swish # stage 5, 14x14in ['ir_r3_k5_s2_e6_c160_se0.25'], # hard-swish # stage 6, 7x7 in ['cn_r1_k1_s1_c960'], # hard-swish ] model_kwargs = dict( block_args=decode_arch_def(arch_def), head_bias=False, round_chs_fn=partial(round_channels, multiplier=channel_multiplier), norm_layer=partial(nn.BatchNorm2d, **resolve_bn_args(kwargs)), act_layer=resolve_act_layer(kwargs, 'hard_swish'), se_layer=partial(SqueezeExcite, gate_layer='hard_sigmoid'), **kwargs, ) model = _create_mnv3(variant, pretrained, **model_kwargs) return model def _gen_mobilenet_v3(variant: str, channel_multiplier: float = 1.0, pretrained: bool = False, **kwargs) -> MobileNetV3: """Creates a MobileNet-V3 model. Ref impl: ? Paper: https://arxiv.org/abs/1905.02244 Args: channel_multiplier: multiplier to number of channels per layer. """ if 'small' in variant: num_features = 1024 if 'minimal' in variant: act_layer = resolve_act_layer(kwargs, 'relu') arch_def = [ # stage 0, 112x112 in ['ds_r1_k3_s2_e1_c16'], # stage 1, 56x56 in ['ir_r1_k3_s2_e4.5_c24', 'ir_r1_k3_s1_e3.67_c24'], # stage 2, 28x28 in ['ir_r1_k3_s2_e4_c40', 'ir_r2_k3_s1_e6_c40'], # stage 3, 14x14 in ['ir_r2_k3_s1_e3_c48'], # stage 4, 14x14in ['ir_r3_k3_s2_e6_c96'], # stage 6, 7x7 in ['cn_r1_k1_s1_c576'], ] else: act_layer = resolve_act_layer(kwargs, 'hard_swish') arch_def = [ # stage 0, 112x112 in ['ds_r1_k3_s2_e1_c16_se0.25_nre'], # relu # stage 1, 56x56 in ['ir_r1_k3_s2_e4.5_c24_nre', 'ir_r1_k3_s1_e3.67_c24_nre'], # relu # stage 2, 28x28 in ['ir_r1_k5_s2_e4_c40_se0.25', 'ir_r2_k5_s1_e6_c40_se0.25'], # hard-swish # stage 3, 14x14 in ['ir_r2_k5_s1_e3_c48_se0.25'], # hard-swish # stage 4, 14x14in ['ir_r3_k5_s2_e6_c96_se0.25'], # hard-swish # stage 6, 7x7 in ['cn_r1_k1_s1_c576'], # hard-swish ] else: num_features = 1280 if 'minimal' in variant: act_layer = resolve_act_layer(kwargs, 'relu') arch_def = [ # stage 0, 112x112 in ['ds_r1_k3_s1_e1_c16'], # stage 1, 112x112 in ['ir_r1_k3_s2_e4_c24', 'ir_r1_k3_s1_e3_c24'], # stage 2, 56x56 in ['ir_r3_k3_s2_e3_c40'], # stage 3, 28x28 in ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], # stage 4, 14x14in ['ir_r2_k3_s1_e6_c112'], # stage 5, 14x14in ['ir_r3_k3_s2_e6_c160'], # stage 6, 7x7 in ['cn_r1_k1_s1_c960'], ] else: act_layer = resolve_act_layer(kwargs, 'hard_swish') arch_def = [ # stage 0, 112x112 in ['ds_r1_k3_s1_e1_c16_nre'], # relu # stage 1, 112x112 in ['ir_r1_k3_s2_e4_c24_nre', 'ir_r1_k3_s1_e3_c24_nre'], # relu # stage 2, 56x56 in ['ir_r3_k5_s2_e3_c40_se0.25_nre'], # relu # stage 3, 28x28 in ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], # hard-swish # stage 4, 14x14in ['ir_r2_k3_s1_e6_c112_se0.25'], # hard-swish # stage 5, 14x14in 
['ir_r3_k5_s2_e6_c160_se0.25'], # hard-swish # stage 6, 7x7 in ['cn_r1_k1_s1_c960'], # hard-swish ] se_layer = partial(SqueezeExcite, gate_layer='hard_sigmoid', force_act_layer=nn.ReLU, rd_round_fn=round_channels) model_kwargs = dict( block_args=decode_arch_def(arch_def), num_features=num_features, stem_size=16, fix_stem=channel_multiplier < 0.75, round_chs_fn=partial(round_channels, multiplier=channel_multiplier), norm_layer=partial(nn.BatchNorm2d, **resolve_bn_args(kwargs)), act_layer=act_layer, se_layer=se_layer, **kwargs, ) model = _create_mnv3(variant, pretrained, **model_kwargs) return model def _gen_fbnetv3(variant: str, channel_multiplier: float = 1.0, pretrained: bool = False, **kwargs): """ FBNetV3 Paper: `FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining` - https://arxiv.org/abs/2006.02049 FIXME untested, this is a preliminary impl of some FBNet-V3 variants. """ vl = variant.split('_')[-1] if vl in ('a', 'b'): stem_size = 16 arch_def = [ ['ds_r2_k3_s1_e1_c16'], ['ir_r1_k5_s2_e4_c24', 'ir_r3_k5_s1_e2_c24'], ['ir_r1_k5_s2_e5_c40_se0.25', 'ir_r4_k5_s1_e3_c40_se0.25'], ['ir_r1_k5_s2_e5_c72', 'ir_r4_k3_s1_e3_c72'], ['ir_r1_k3_s1_e5_c120_se0.25', 'ir_r5_k5_s1_e3_c120_se0.25'], ['ir_r1_k3_s2_e6_c184_se0.25', 'ir_r5_k5_s1_e4_c184_se0.25', 'ir_r1_k5_s1_e6_c224_se0.25'], ['cn_r1_k1_s1_c1344'], ] elif vl == 'd': stem_size = 24 arch_def = [ ['ds_r2_k3_s1_e1_c16'], ['ir_r1_k3_s2_e5_c24', 'ir_r5_k3_s1_e2_c24'], ['ir_r1_k5_s2_e4_c40_se0.25', 'ir_r4_k3_s1_e3_c40_se0.25'], ['ir_r1_k3_s2_e5_c72', 'ir_r4_k3_s1_e3_c72'], ['ir_r1_k3_s1_e5_c128_se0.25', 'ir_r6_k5_s1_e3_c128_se0.25'], ['ir_r1_k3_s2_e6_c208_se0.25', 'ir_r5_k5_s1_e5_c208_se0.25', 'ir_r1_k5_s1_e6_c240_se0.25'], ['cn_r1_k1_s1_c1440'], ] elif vl == 'g': stem_size = 32 arch_def = [ ['ds_r3_k3_s1_e1_c24'], ['ir_r1_k5_s2_e4_c40', 'ir_r4_k5_s1_e2_c40'], ['ir_r1_k5_s2_e4_c56_se0.25', 'ir_r4_k5_s1_e3_c56_se0.25'], ['ir_r1_k5_s2_e5_c104', 'ir_r4_k3_s1_e3_c104'], ['ir_r1_k3_s1_e5_c160_se0.25', 'ir_r8_k5_s1_e3_c160_se0.25'], ['ir_r1_k3_s2_e6_c264_se0.25', 'ir_r6_k5_s1_e5_c264_se0.25', 'ir_r2_k5_s1_e6_c288_se0.25'], ['cn_r1_k1_s1_c1728'], ] else: raise NotImplemented round_chs_fn = partial(round_channels, multiplier=channel_multiplier, round_limit=0.95) se_layer = partial(SqueezeExcite, gate_layer='hard_sigmoid', rd_round_fn=round_chs_fn) act_layer = resolve_act_layer(kwargs, 'hard_swish') model_kwargs = dict( block_args=decode_arch_def(arch_def), num_features=1984, head_bias=False, stem_size=stem_size, round_chs_fn=round_chs_fn, se_from_exp=False, norm_layer=partial(nn.BatchNorm2d, **resolve_bn_args(kwargs)), act_layer=act_layer, se_layer=se_layer, **kwargs, ) model = _create_mnv3(variant, pretrained, **model_kwargs) return model def _gen_lcnet(variant: str, channel_multiplier: float = 1.0, pretrained: bool = False, **kwargs): """ LCNet Essentially a MobileNet-V3 crossed with a MobileNet-V1 Paper: `PP-LCNet: A Lightweight CPU Convolutional Neural Network` - https://arxiv.org/abs/2109.15099 Args: channel_multiplier: multiplier to number of channels per layer. 
""" arch_def = [ # stage 0, 112x112 in ['dsa_r1_k3_s1_c32'], # stage 1, 112x112 in ['dsa_r2_k3_s2_c64'], # stage 2, 56x56 in ['dsa_r2_k3_s2_c128'], # stage 3, 28x28 in ['dsa_r1_k3_s2_c256', 'dsa_r1_k5_s1_c256'], # stage 4, 14x14in ['dsa_r4_k5_s1_c256'], # stage 5, 14x14in ['dsa_r2_k5_s2_c512_se0.25'], # 7x7 ] model_kwargs = dict( block_args=decode_arch_def(arch_def), stem_size=16, round_chs_fn=partial(round_channels, multiplier=channel_multiplier), norm_layer=partial(nn.BatchNorm2d, **resolve_bn_args(kwargs)), act_layer=resolve_act_layer(kwargs, 'hard_swish'), se_layer=partial(SqueezeExcite, gate_layer='hard_sigmoid', force_act_layer=nn.ReLU), num_features=1280, **kwargs, ) model = _create_mnv3(variant, pretrained, **model_kwargs) return model def _gen_lcnet(variant: str, channel_multiplier: float = 1.0, pretrained: bool = False, **kwargs): """ LCNet Essentially a MobileNet-V3 crossed with a MobileNet-V1 Paper: `PP-LCNet: A Lightweight CPU Convolutional Neural Network` - https://arxiv.org/abs/2109.15099 Args: channel_multiplier: multiplier to number of channels per layer. """ arch_def = [ # stage 0, 112x112 in ['dsa_r1_k3_s1_c32'], # stage 1, 112x112 in ['dsa_r2_k3_s2_c64'], # stage 2, 56x56 in ['dsa_r2_k3_s2_c128'], # stage 3, 28x28 in ['dsa_r1_k3_s2_c256', 'dsa_r1_k5_s1_c256'], # stage 4, 14x14in ['dsa_r4_k5_s1_c256'], # stage 5, 14x14in ['dsa_r2_k5_s2_c512_se0.25'], # 7x7 ] model_kwargs = dict( block_args=decode_arch_def(arch_def), stem_size=16, round_chs_fn=partial(round_channels, multiplier=channel_multiplier), norm_layer=partial(nn.BatchNorm2d, **resolve_bn_args(kwargs)), act_layer=resolve_act_layer(kwargs, 'hard_swish'), se_layer=partial(SqueezeExcite, gate_layer='hard_sigmoid', force_act_layer=nn.ReLU), num_features=1280, **kwargs, ) model = _create_mnv3(variant, pretrained, **model_kwargs) return model def _cfg(url: str = '', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.875, 'interpolation': 'bilinear', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'conv_stem', 'classifier': 'classifier', **kwargs } default_cfgs = generate_default_cfgs({ 'mobilenetv3_large_075.untrained': _cfg(url=''), 'mobilenetv3_large_100.ra_in1k': _cfg( interpolation='bicubic', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_large_100_ra-f55367f5.pth', hf_hub_id='timm/'), 'mobilenetv3_large_100.miil_in21k_ft_in1k': _cfg( interpolation='bilinear', mean=(0., 0., 0.), std=(1., 1., 1.), origin_url='https://github.com/Alibaba-MIIL/ImageNet21K', paper_ids='arXiv:2104.10972v4', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/mobilenetv3_large_100_1k_miil_78_0-66471c13.pth', hf_hub_id='timm/'), 'mobilenetv3_large_100.miil_in21k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/mobilenetv3_large_100_in21k_miil-d71cc17b.pth', hf_hub_id='timm/', origin_url='https://github.com/Alibaba-MIIL/ImageNet21K', paper_ids='arXiv:2104.10972v4', interpolation='bilinear', mean=(0., 0., 0.), std=(1., 1., 1.), num_classes=11221), 'mobilenetv3_small_050.lamb_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_small_050_lambc-4b7bbe87.pth', hf_hub_id='timm/', interpolation='bicubic'), 'mobilenetv3_small_075.lamb_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_small_075_lambc-384766db.pth', 
hf_hub_id='timm/', interpolation='bicubic'), 'mobilenetv3_small_100.lamb_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_small_100_lamb-266a294c.pth', hf_hub_id='timm/', interpolation='bicubic'), 'mobilenetv3_rw.rmsp_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_100-35495452.pth', hf_hub_id='timm/', interpolation='bicubic'), 'tf_mobilenetv3_large_075.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_075-150ee8b0.pth', hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), 'tf_mobilenetv3_large_100.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_100-427764d5.pth', hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), 'tf_mobilenetv3_large_minimal_100.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_minimal_100-8596ae28.pth', hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), 'tf_mobilenetv3_small_075.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_075-da427f52.pth', hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), 'tf_mobilenetv3_small_100.in1k': _cfg( url= 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_100-37f49e2b.pth', hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), 'tf_mobilenetv3_small_minimal_100.in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_minimal_100-922a7843.pth', hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), 'fbnetv3_b.ra2_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/fbnetv3_b_224-ead5d2a1.pth', hf_hub_id='timm/', test_input_size=(3, 256, 256), crop_pct=0.95), 'fbnetv3_d.ra2_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/fbnetv3_d_224-c98bce42.pth', hf_hub_id='timm/', test_input_size=(3, 256, 256), crop_pct=0.95), 'fbnetv3_g.ra2_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/fbnetv3_g_240-0b1df83b.pth', hf_hub_id='timm/', input_size=(3, 240, 240), test_input_size=(3, 288, 288), crop_pct=0.95, pool_size=(8, 8)), "lcnet_035.untrained": _cfg(), "lcnet_050.ra2_in1k": _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/lcnet_050-f447553b.pth', hf_hub_id='timm/', interpolation='bicubic', ), "lcnet_075.ra2_in1k": _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/lcnet_075-318cad2c.pth', hf_hub_id='timm/', interpolation='bicubic', ), "lcnet_100.ra2_in1k": _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/lcnet_100-a929038c.pth', hf_hub_id='timm/', interpolation='bicubic', ), "lcnet_150.untrained": _cfg(), }) @register_model def mobilenetv3_large_075(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ model = _gen_mobilenet_v3('mobilenetv3_large_075', 0.75, pretrained=pretrained, **kwargs) return model @register_model def mobilenetv3_large_100(pretrained: bool = False, **kwargs) -> MobileNetV3: 
""" MobileNet V3 """ model = _gen_mobilenet_v3('mobilenetv3_large_100', 1.0, pretrained=pretrained, **kwargs) return model @register_model def mobilenetv3_small_050(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ model = _gen_mobilenet_v3('mobilenetv3_small_050', 0.50, pretrained=pretrained, **kwargs) return model @register_model def mobilenetv3_small_075(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ model = _gen_mobilenet_v3('mobilenetv3_small_075', 0.75, pretrained=pretrained, **kwargs) return model @register_model def mobilenetv3_small_100(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ model = _gen_mobilenet_v3('mobilenetv3_small_100', 1.0, pretrained=pretrained, **kwargs) return model @register_model def mobilenetv3_rw(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ kwargs.setdefault('bn_eps', BN_EPS_TF_DEFAULT) model = _gen_mobilenet_v3_rw('mobilenetv3_rw', 1.0, pretrained=pretrained, **kwargs) return model @register_model def tf_mobilenetv3_large_075(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ kwargs.setdefault('bn_eps', BN_EPS_TF_DEFAULT) kwargs.setdefault('pad_type', 'same') model = _gen_mobilenet_v3('tf_mobilenetv3_large_075', 0.75, pretrained=pretrained, **kwargs) return model @register_model def tf_mobilenetv3_large_100(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ kwargs.setdefault('bn_eps', BN_EPS_TF_DEFAULT) kwargs.setdefault('pad_type', 'same') model = _gen_mobilenet_v3('tf_mobilenetv3_large_100', 1.0, pretrained=pretrained, **kwargs) return model @register_model def tf_mobilenetv3_large_minimal_100(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ kwargs.setdefault('bn_eps', BN_EPS_TF_DEFAULT) kwargs.setdefault('pad_type', 'same') model = _gen_mobilenet_v3('tf_mobilenetv3_large_minimal_100', 1.0, pretrained=pretrained, **kwargs) return model @register_model def tf_mobilenetv3_small_075(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ kwargs.setdefault('bn_eps', BN_EPS_TF_DEFAULT) kwargs.setdefault('pad_type', 'same') model = _gen_mobilenet_v3('tf_mobilenetv3_small_075', 0.75, pretrained=pretrained, **kwargs) return model @register_model def tf_mobilenetv3_small_100(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ kwargs.setdefault('bn_eps', BN_EPS_TF_DEFAULT) kwargs.setdefault('pad_type', 'same') model = _gen_mobilenet_v3('tf_mobilenetv3_small_100', 1.0, pretrained=pretrained, **kwargs) return model @register_model def tf_mobilenetv3_small_minimal_100(pretrained: bool = False, **kwargs) -> MobileNetV3: """ MobileNet V3 """ kwargs.setdefault('bn_eps', BN_EPS_TF_DEFAULT) kwargs.setdefault('pad_type', 'same') model = _gen_mobilenet_v3('tf_mobilenetv3_small_minimal_100', 1.0, pretrained=pretrained, **kwargs) return model @register_model def fbnetv3_b(pretrained: bool = False, **kwargs) -> MobileNetV3: """ FBNetV3-B """ model = _gen_fbnetv3('fbnetv3_b', pretrained=pretrained, **kwargs) return model @register_model def fbnetv3_d(pretrained: bool = False, **kwargs) -> MobileNetV3: """ FBNetV3-D """ model = _gen_fbnetv3('fbnetv3_d', pretrained=pretrained, **kwargs) return model @register_model def fbnetv3_g(pretrained: bool = False, **kwargs) -> MobileNetV3: """ FBNetV3-G """ model = _gen_fbnetv3('fbnetv3_g', pretrained=pretrained, **kwargs) return model @register_model def lcnet_035(pretrained: bool = False, **kwargs) -> MobileNetV3: """ PP-LCNet 0.35""" 
model = _gen_lcnet('lcnet_035', 0.35, pretrained=pretrained, **kwargs) return model @register_model def lcnet_050(pretrained: bool = False, **kwargs) -> MobileNetV3: """ PP-LCNet 0.5""" model = _gen_lcnet('lcnet_050', 0.5, pretrained=pretrained, **kwargs) return model @register_model def lcnet_075(pretrained: bool = False, **kwargs) -> MobileNetV3: """ PP-LCNet 0.75""" model = _gen_lcnet('lcnet_075', 0.75, pretrained=pretrained, **kwargs) return model @register_model def lcnet_100(pretrained: bool = False, **kwargs) -> MobileNetV3: """ PP-LCNet 1.0""" model = _gen_lcnet('lcnet_100', 1.0, pretrained=pretrained, **kwargs) return model @register_model def lcnet_150(pretrained: bool = False, **kwargs) -> MobileNetV3: """ PP-LCNet 1.5""" model = _gen_lcnet('lcnet_150', 1.5, pretrained=pretrained, **kwargs) return model register_model_deprecations(__name__, { 'mobilenetv3_large_100_miil': 'mobilenetv3_large_100.miil_in21k_ft_in1k', 'mobilenetv3_large_100_miil_in21k': 'mobilenetv3_large_100.miil_in21k', })
pytorch-image-models/timm/models/mobilenetv3.py/0
{ "file_path": "pytorch-image-models/timm/models/mobilenetv3.py", "repo_id": "pytorch-image-models", "token_count": 17103 }
197
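The mobilenetv3.py record ends with the model registrations and deprecation aliases. Below is a hedged usage sketch, assuming `timm` is installed; it exercises the two construction paths handled by `_create_mnv3` above: the full `MobileNetV3` classifier and the `MobileNetV3Features` backbone selected with `features_only=True`. The printed shapes and channel counts are typical defaults, not guaranteed values.

```python
import torch
import timm

x = torch.randn(1, 3, 224, 224)

# Classification model: stem -> blocks -> 'efficient head' (global pool before conv_head).
clf = timm.create_model('mobilenetv3_large_100', pretrained=False)
logits = clf(x)                                                      # [1, 1000]
embed = clf.forward_head(clf.forward_features(x), pre_logits=True)   # pre-classifier features

# Feature backbone: returns a list of intermediate feature maps at increasing strides,
# useful as a backbone for detection or segmentation heads.
backbone = timm.create_model('mobilenetv3_large_100', pretrained=False, features_only=True)
feats = backbone(x)
for f in feats:
    print(f.shape)
print(backbone.feature_info.channels())  # per-output channel counts from FeatureInfo
```

Note that `reset_classifier` cannot meaningfully change the pooling of the efficient head after construction (the comment in the class body calls this out), so pick `global_pool` when the model is created.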
"""PyTorch ResNet This started as a copy of https://github.com/pytorch/vision 'resnet.py' (BSD-3-Clause) with additional dropout and dynamic global avg/max pool. ResNeXt, SE-ResNeXt, SENet, and MXNet Gluon stem/downsample variants, tiered stems added by Ross Wightman Copyright 2019, Ross Wightman """ import math from functools import partial from typing import Any, Dict, List, Optional, Tuple, Type, Union import torch import torch.nn as nn import torch.nn.functional as F from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import DropBlock2d, DropPath, AvgPool2dSame, BlurPool2d, GroupNorm, LayerType, create_attn, \ get_attn, get_act_layer, get_norm_layer, create_classifier from ._builder import build_model_with_cfg from ._manipulate import checkpoint_seq from ._registry import register_model, generate_default_cfgs, register_model_deprecations __all__ = ['ResNet', 'BasicBlock', 'Bottleneck'] # model_registry will add each entrypoint fn to this def get_padding(kernel_size: int, stride: int, dilation: int = 1) -> int: padding = ((stride - 1) + dilation * (kernel_size - 1)) // 2 return padding def create_aa(aa_layer: Type[nn.Module], channels: int, stride: int = 2, enable: bool = True) -> nn.Module: if not aa_layer or not enable: return nn.Identity() if issubclass(aa_layer, nn.AvgPool2d): return aa_layer(stride) else: return aa_layer(channels=channels, stride=stride) class BasicBlock(nn.Module): expansion = 1 def __init__( self, inplanes: int, planes: int, stride: int = 1, downsample: Optional[nn.Module] = None, cardinality: int = 1, base_width: int = 64, reduce_first: int = 1, dilation: int = 1, first_dilation: Optional[int] = None, act_layer: Type[nn.Module] = nn.ReLU, norm_layer: Type[nn.Module] = nn.BatchNorm2d, attn_layer: Optional[Type[nn.Module]] = None, aa_layer: Optional[Type[nn.Module]] = None, drop_block: Optional[Type[nn.Module]] = None, drop_path: Optional[nn.Module] = None, ): """ Args: inplanes: Input channel dimensionality. planes: Used to determine output channel dimensionalities. stride: Stride used in convolution layers. downsample: Optional downsample layer for residual path. cardinality: Number of convolution groups. base_width: Base width used to determine output channel dimensionality. reduce_first: Reduction factor for first convolution output width of residual blocks. dilation: Dilation rate for convolution layers. first_dilation: Dilation rate for first convolution layer. act_layer: Activation layer. norm_layer: Normalization layer. attn_layer: Attention layer. aa_layer: Anti-aliasing layer. drop_block: Class for DropBlock layer. drop_path: Optional DropPath layer. 
""" super(BasicBlock, self).__init__() assert cardinality == 1, 'BasicBlock only supports cardinality of 1' assert base_width == 64, 'BasicBlock does not support changing base width' first_planes = planes // reduce_first outplanes = planes * self.expansion first_dilation = first_dilation or dilation use_aa = aa_layer is not None and (stride == 2 or first_dilation != dilation) self.conv1 = nn.Conv2d( inplanes, first_planes, kernel_size=3, stride=1 if use_aa else stride, padding=first_dilation, dilation=first_dilation, bias=False) self.bn1 = norm_layer(first_planes) self.drop_block = drop_block() if drop_block is not None else nn.Identity() self.act1 = act_layer(inplace=True) self.aa = create_aa(aa_layer, channels=first_planes, stride=stride, enable=use_aa) self.conv2 = nn.Conv2d( first_planes, outplanes, kernel_size=3, padding=dilation, dilation=dilation, bias=False) self.bn2 = norm_layer(outplanes) self.se = create_attn(attn_layer, outplanes) self.act2 = act_layer(inplace=True) self.downsample = downsample self.stride = stride self.dilation = dilation self.drop_path = drop_path def zero_init_last(self): if getattr(self.bn2, 'weight', None) is not None: nn.init.zeros_(self.bn2.weight) def forward(self, x: torch.Tensor) -> torch.Tensor: shortcut = x x = self.conv1(x) x = self.bn1(x) x = self.drop_block(x) x = self.act1(x) x = self.aa(x) x = self.conv2(x) x = self.bn2(x) if self.se is not None: x = self.se(x) if self.drop_path is not None: x = self.drop_path(x) if self.downsample is not None: shortcut = self.downsample(shortcut) x += shortcut x = self.act2(x) return x class Bottleneck(nn.Module): expansion = 4 def __init__( self, inplanes: int, planes: int, stride: int = 1, downsample: Optional[nn.Module] = None, cardinality: int = 1, base_width: int = 64, reduce_first: int = 1, dilation: int = 1, first_dilation: Optional[int] = None, act_layer: Type[nn.Module] = nn.ReLU, norm_layer: Type[nn.Module] = nn.BatchNorm2d, attn_layer: Optional[Type[nn.Module]] = None, aa_layer: Optional[Type[nn.Module]] = None, drop_block: Optional[Type[nn.Module]] = None, drop_path: Optional[nn.Module] = None, ): """ Args: inplanes: Input channel dimensionality. planes: Used to determine output channel dimensionalities. stride: Stride used in convolution layers. downsample: Optional downsample layer for residual path. cardinality: Number of convolution groups. base_width: Base width used to determine output channel dimensionality. reduce_first: Reduction factor for first convolution output width of residual blocks. dilation: Dilation rate for convolution layers. first_dilation: Dilation rate for first convolution layer. act_layer: Activation layer. norm_layer: Normalization layer. attn_layer: Attention layer. aa_layer: Anti-aliasing layer. drop_block: Class for DropBlock layer. drop_path: Optional DropPath layer. 
""" super(Bottleneck, self).__init__() width = int(math.floor(planes * (base_width / 64)) * cardinality) first_planes = width // reduce_first outplanes = planes * self.expansion first_dilation = first_dilation or dilation use_aa = aa_layer is not None and (stride == 2 or first_dilation != dilation) self.conv1 = nn.Conv2d(inplanes, first_planes, kernel_size=1, bias=False) self.bn1 = norm_layer(first_planes) self.act1 = act_layer(inplace=True) self.conv2 = nn.Conv2d( first_planes, width, kernel_size=3, stride=1 if use_aa else stride, padding=first_dilation, dilation=first_dilation, groups=cardinality, bias=False) self.bn2 = norm_layer(width) self.drop_block = drop_block() if drop_block is not None else nn.Identity() self.act2 = act_layer(inplace=True) self.aa = create_aa(aa_layer, channels=width, stride=stride, enable=use_aa) self.conv3 = nn.Conv2d(width, outplanes, kernel_size=1, bias=False) self.bn3 = norm_layer(outplanes) self.se = create_attn(attn_layer, outplanes) self.act3 = act_layer(inplace=True) self.downsample = downsample self.stride = stride self.dilation = dilation self.drop_path = drop_path def zero_init_last(self): if getattr(self.bn3, 'weight', None) is not None: nn.init.zeros_(self.bn3.weight) def forward(self, x: torch.Tensor) -> torch.Tensor: shortcut = x x = self.conv1(x) x = self.bn1(x) x = self.act1(x) x = self.conv2(x) x = self.bn2(x) x = self.drop_block(x) x = self.act2(x) x = self.aa(x) x = self.conv3(x) x = self.bn3(x) if self.se is not None: x = self.se(x) if self.drop_path is not None: x = self.drop_path(x) if self.downsample is not None: shortcut = self.downsample(shortcut) x += shortcut x = self.act3(x) return x def downsample_conv( in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, dilation: int = 1, first_dilation: Optional[int] = None, norm_layer: Optional[Type[nn.Module]] = None, ) -> nn.Module: norm_layer = norm_layer or nn.BatchNorm2d kernel_size = 1 if stride == 1 and dilation == 1 else kernel_size first_dilation = (first_dilation or dilation) if kernel_size > 1 else 1 p = get_padding(kernel_size, stride, first_dilation) return nn.Sequential(*[ nn.Conv2d( in_channels, out_channels, kernel_size, stride=stride, padding=p, dilation=first_dilation, bias=False), norm_layer(out_channels) ]) def downsample_avg( in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, dilation: int = 1, first_dilation: Optional[int] = None, norm_layer: Optional[Type[nn.Module]] = None, ) -> nn.Module: norm_layer = norm_layer or nn.BatchNorm2d avg_stride = stride if dilation == 1 else 1 if stride == 1 and dilation == 1: pool = nn.Identity() else: avg_pool_fn = AvgPool2dSame if avg_stride == 1 and dilation > 1 else nn.AvgPool2d pool = avg_pool_fn(2, avg_stride, ceil_mode=True, count_include_pad=False) return nn.Sequential(*[ pool, nn.Conv2d(in_channels, out_channels, 1, stride=1, padding=0, bias=False), norm_layer(out_channels) ]) def drop_blocks(drop_prob: float = 0.): return [ None, None, partial(DropBlock2d, drop_prob=drop_prob, block_size=5, gamma_scale=0.25) if drop_prob else None, partial(DropBlock2d, drop_prob=drop_prob, block_size=3, gamma_scale=1.00) if drop_prob else None] def make_blocks( block_fn: Union[BasicBlock, Bottleneck], channels: List[int], block_repeats: List[int], inplanes: int, reduce_first: int = 1, output_stride: int = 32, down_kernel_size: int = 1, avg_down: bool = False, drop_block_rate: float = 0., drop_path_rate: float = 0., **kwargs, ) -> Tuple[List[Tuple[str, nn.Module]], List[Dict[str, Any]]]: stages = [] 
feature_info = [] net_num_blocks = sum(block_repeats) net_block_idx = 0 net_stride = 4 dilation = prev_dilation = 1 for stage_idx, (planes, num_blocks, db) in enumerate(zip(channels, block_repeats, drop_blocks(drop_block_rate))): stage_name = f'layer{stage_idx + 1}' # never liked this name, but weight compat requires it stride = 1 if stage_idx == 0 else 2 if net_stride >= output_stride: dilation *= stride stride = 1 else: net_stride *= stride downsample = None if stride != 1 or inplanes != planes * block_fn.expansion: down_kwargs = dict( in_channels=inplanes, out_channels=planes * block_fn.expansion, kernel_size=down_kernel_size, stride=stride, dilation=dilation, first_dilation=prev_dilation, norm_layer=kwargs.get('norm_layer'), ) downsample = downsample_avg(**down_kwargs) if avg_down else downsample_conv(**down_kwargs) block_kwargs = dict(reduce_first=reduce_first, dilation=dilation, drop_block=db, **kwargs) blocks = [] for block_idx in range(num_blocks): downsample = downsample if block_idx == 0 else None stride = stride if block_idx == 0 else 1 block_dpr = drop_path_rate * net_block_idx / (net_num_blocks - 1) # stochastic depth linear decay rule blocks.append(block_fn( inplanes, planes, stride, downsample, first_dilation=prev_dilation, drop_path=DropPath(block_dpr) if block_dpr > 0. else None, **block_kwargs, )) prev_dilation = dilation inplanes = planes * block_fn.expansion net_block_idx += 1 stages.append((stage_name, nn.Sequential(*blocks))) feature_info.append(dict(num_chs=inplanes, reduction=net_stride, module=stage_name)) return stages, feature_info class ResNet(nn.Module): """ResNet / ResNeXt / SE-ResNeXt / SE-Net This class implements all variants of ResNet, ResNeXt, SE-ResNeXt, and SENet that * have > 1 stride in the 3x3 conv layer of bottleneck * have conv-bn-act ordering This ResNet impl supports a number of stem and downsample options based on the v1c, v1d, v1e, and v1s variants included in the MXNet Gluon ResNetV1b model. The C and D variants are also discussed in the 'Bag of Tricks' paper: https://arxiv.org/pdf/1812.01187. The B variant is equivalent to torchvision default. 
ResNet variants (the same modifications can be used in SE/ResNeXt models as well): * normal, b - 7x7 stem, stem_width = 64, same as torchvision ResNet, NVIDIA ResNet 'v1.5', Gluon v1b * c - 3 layer deep 3x3 stem, stem_width = 32 (32, 32, 64) * d - 3 layer deep 3x3 stem, stem_width = 32 (32, 32, 64), average pool in downsample * e - 3 layer deep 3x3 stem, stem_width = 64 (64, 64, 128), average pool in downsample * s - 3 layer deep 3x3 stem, stem_width = 64 (64, 64, 128) * t - 3 layer deep 3x3 stem, stem width = 32 (24, 48, 64), average pool in downsample * tn - 3 layer deep 3x3 stem, stem width = 32 (24, 32, 64), average pool in downsample ResNeXt * normal - 7x7 stem, stem_width = 64, standard cardinality and base widths * same c,d, e, s variants as ResNet can be enabled SE-ResNeXt * normal - 7x7 stem, stem_width = 64 * same c, d, e, s variants as ResNet can be enabled SENet-154 - 3 layer deep 3x3 stem (same as v1c-v1s), stem_width = 64, cardinality=64, reduction by 2 on width of first bottleneck convolution, 3x3 downsample convs after first block """ def __init__( self, block: Union[BasicBlock, Bottleneck], layers: List[int], num_classes: int = 1000, in_chans: int = 3, output_stride: int = 32, global_pool: str = 'avg', cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = '', replace_stem_pool: bool = False, block_reduce_first: int = 1, down_kernel_size: int = 1, avg_down: bool = False, act_layer: LayerType = nn.ReLU, norm_layer: LayerType = nn.BatchNorm2d, aa_layer: Optional[Type[nn.Module]] = None, drop_rate: float = 0.0, drop_path_rate: float = 0., drop_block_rate: float = 0., zero_init_last: bool = True, block_args: Optional[Dict[str, Any]] = None, ): """ Args: block (nn.Module): class for the residual block. Options are BasicBlock, Bottleneck. layers (List[int]) : number of layers in each block num_classes (int): number of classification classes (default 1000) in_chans (int): number of input (color) channels. (default 3) output_stride (int): output stride of the network, 32, 16, or 8. (default 32) global_pool (str): Global pooling type. One of 'avg', 'max', 'avgmax', 'catavgmax' (default 'avg') cardinality (int): number of convolution groups for 3x3 conv in Bottleneck. (default 1) base_width (int): bottleneck channels factor. `planes * base_width / 64 * cardinality` (default 64) stem_width (int): number of channels in stem convolutions (default 64) stem_type (str): The type of stem (default ''): * '', default - a single 7x7 conv with a width of stem_width * 'deep' - three 3x3 convolution layers of widths stem_width, stem_width, stem_width * 2 * 'deep_tiered' - three 3x3 conv layers of widths stem_width//4 * 3, stem_width, stem_width * 2 replace_stem_pool (bool): replace stem max-pooling layer with a 3x3 stride-2 convolution block_reduce_first (int): Reduction factor for first convolution output width of residual blocks, 1 for all archs except senets, where 2 (default 1) down_kernel_size (int): kernel size of residual block downsample path, 1x1 for most, 3x3 for senets (default: 1) avg_down (bool): use avg pooling for projection skip connection between stages/downsample (default False) act_layer (str, nn.Module): activation layer norm_layer (str, nn.Module): normalization layer aa_layer (nn.Module): anti-aliasing layer drop_rate (float): Dropout probability before classifier, for training (default 0.) drop_path_rate (float): Stochastic depth drop-path rate (default 0.) drop_block_rate (float): Drop block rate (default 0.) 
zero_init_last (bool): zero-init the last weight in residual path (usually last BN affine weight) block_args (dict): Extra kwargs to pass through to block module """ super(ResNet, self).__init__() block_args = block_args or dict() assert output_stride in (8, 16, 32) self.num_classes = num_classes self.drop_rate = drop_rate self.grad_checkpointing = False act_layer = get_act_layer(act_layer) norm_layer = get_norm_layer(norm_layer) # Stem deep_stem = 'deep' in stem_type inplanes = stem_width * 2 if deep_stem else 64 if deep_stem: stem_chs = (stem_width, stem_width) if 'tiered' in stem_type: stem_chs = (3 * (stem_width // 4), stem_width) self.conv1 = nn.Sequential(*[ nn.Conv2d(in_chans, stem_chs[0], 3, stride=2, padding=1, bias=False), norm_layer(stem_chs[0]), act_layer(inplace=True), nn.Conv2d(stem_chs[0], stem_chs[1], 3, stride=1, padding=1, bias=False), norm_layer(stem_chs[1]), act_layer(inplace=True), nn.Conv2d(stem_chs[1], inplanes, 3, stride=1, padding=1, bias=False)]) else: self.conv1 = nn.Conv2d(in_chans, inplanes, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = norm_layer(inplanes) self.act1 = act_layer(inplace=True) self.feature_info = [dict(num_chs=inplanes, reduction=2, module='act1')] # Stem pooling. The name 'maxpool' remains for weight compatibility. if replace_stem_pool: self.maxpool = nn.Sequential(*filter(None, [ nn.Conv2d(inplanes, inplanes, 3, stride=1 if aa_layer else 2, padding=1, bias=False), create_aa(aa_layer, channels=inplanes, stride=2) if aa_layer is not None else None, norm_layer(inplanes), act_layer(inplace=True), ])) else: if aa_layer is not None: if issubclass(aa_layer, nn.AvgPool2d): self.maxpool = aa_layer(2) else: self.maxpool = nn.Sequential(*[ nn.MaxPool2d(kernel_size=3, stride=1, padding=1), aa_layer(channels=inplanes, stride=2)]) else: self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) # Feature Blocks channels = [64, 128, 256, 512] stage_modules, stage_feature_info = make_blocks( block, channels, layers, inplanes, cardinality=cardinality, base_width=base_width, output_stride=output_stride, reduce_first=block_reduce_first, avg_down=avg_down, down_kernel_size=down_kernel_size, act_layer=act_layer, norm_layer=norm_layer, aa_layer=aa_layer, drop_block_rate=drop_block_rate, drop_path_rate=drop_path_rate, **block_args, ) for stage in stage_modules: self.add_module(*stage) # layer1, layer2, etc self.feature_info.extend(stage_feature_info) # Head (Pooling and Classifier) self.num_features = 512 * block.expansion self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) self.init_weights(zero_init_last=zero_init_last) @torch.jit.ignore def init_weights(self, zero_init_last: bool = True): for n, m in self.named_modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') if zero_init_last: for m in self.modules(): if hasattr(m, 'zero_init_last'): m.zero_init_last() @torch.jit.ignore def group_matcher(self, coarse: bool = False): matcher = dict(stem=r'^conv1|bn1|maxpool', blocks=r'^layer(\d+)' if coarse else r'^layer(\d+)\.(\d+)') return matcher @torch.jit.ignore def set_grad_checkpointing(self, enable: bool = True): self.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self, name_only: bool = False): return 'fc' if name_only else self.fc def reset_classifier(self, num_classes, global_pool='avg'): self.num_classes = num_classes self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, 
pool_type=global_pool) def forward_features(self, x: torch.Tensor) -> torch.Tensor: x = self.conv1(x) x = self.bn1(x) x = self.act1(x) x = self.maxpool(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq([self.layer1, self.layer2, self.layer3, self.layer4], x, flatten=True) else: x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) return x def forward_head(self, x: torch.Tensor, pre_logits: bool = False) -> torch.Tensor: x = self.global_pool(x) if self.drop_rate: x = F.dropout(x, p=float(self.drop_rate), training=self.training) return x if pre_logits else self.fc(x) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.forward_features(x) x = self.forward_head(x) return x def _create_resnet(variant, pretrained: bool = False, **kwargs) -> ResNet: return build_model_with_cfg(ResNet, variant, pretrained, **kwargs) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.875, 'interpolation': 'bilinear', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'conv1', 'classifier': 'fc', **kwargs } def _tcfg(url='', **kwargs): return _cfg(url=url, **dict({'interpolation': 'bicubic'}, **kwargs)) def _ttcfg(url='', **kwargs): return _cfg(url=url, **dict({ 'interpolation': 'bicubic', 'test_input_size': (3, 288, 288), 'test_crop_pct': 0.95, 'origin_url': 'https://github.com/huggingface/pytorch-image-models', }, **kwargs)) def _rcfg(url='', **kwargs): return _cfg(url=url, **dict({ 'interpolation': 'bicubic', 'crop_pct': 0.95, 'test_input_size': (3, 288, 288), 'test_crop_pct': 1.0, 'origin_url': 'https://github.com/huggingface/pytorch-image-models', 'paper_ids': 'arXiv:2110.00476' }, **kwargs)) def _r3cfg(url='', **kwargs): return _cfg(url=url, **dict({ 'interpolation': 'bicubic', 'input_size': (3, 160, 160), 'pool_size': (5, 5), 'crop_pct': 0.95, 'test_input_size': (3, 224, 224), 'test_crop_pct': 0.95, 'origin_url': 'https://github.com/huggingface/pytorch-image-models', 'paper_ids': 'arXiv:2110.00476', }, **kwargs)) def _gcfg(url='', **kwargs): return _cfg(url=url, **dict({ 'interpolation': 'bicubic', 'origin_url': 'https://cv.gluon.ai/model_zoo/classification.html', }, **kwargs)) default_cfgs = generate_default_cfgs({ # ResNet and Wide ResNet trained w/ timm (RSB paper and others) 'resnet10t.c3_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet10t_176_c3-f3215ab1.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_crop_pct=0.95, test_input_size=(3, 224, 224), first_conv='conv1.0'), 'resnet14t.c3_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet14t_176_c3-c4ed2c37.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_crop_pct=0.95, test_input_size=(3, 224, 224), first_conv='conv1.0'), 'resnet18.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet18_a1_0-d63eafa0.pth'), 'resnet18.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet18_a2_0-b61bd467.pth'), 'resnet18.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet18_a3_0-40c531c8.pth'), 'resnet18d.ra2_in1k': _ttcfg( hf_hub_id='timm/', 
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet18d_ra2-48a79e06.pth', first_conv='conv1.0'), 'resnet34.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet34_a1_0-46f8f793.pth'), 'resnet34.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet34_a2_0-82d47d71.pth'), 'resnet34.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet34_a3_0-a20cabb6.pth', crop_pct=0.95), 'resnet34.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34-43635321.pth'), 'resnet34d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34d_ra2-f8dcfcaf.pth', first_conv='conv1.0'), 'resnet26.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26-9aa10e23.pth'), 'resnet26d.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26d-69e92c46.pth', first_conv='conv1.0'), 'resnet26t.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/resnet26t_256_ra2-6f6fa748.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.94, test_input_size=(3, 320, 320), test_crop_pct=1.0), 'resnet50.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_a1_0-14fe96d1.pth'), 'resnet50.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_a1h2_176-001a1197.pth', input_size=(3, 176, 176), pool_size=(6, 6), crop_pct=0.9, test_input_size=(3, 224, 224), test_crop_pct=1.0), 'resnet50.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_a2_0-a2746f79.pth'), 'resnet50.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_a3_0-59cae1ef.pth'), 'resnet50.b1k_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_b1k-532a802a.pth'), 'resnet50.b2k_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_b2k-1ba180c1.pth'), 'resnet50.c1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_c1-5ba5e060.pth'), 'resnet50.c2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_c2-d01e05b2.pth'), 'resnet50.d_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_d-f39db8af.pth'), 'resnet50.ram_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/resnet50_ram-a26f946b.pth'), 'resnet50.am_in1k': _tcfg( hf_hub_id='timm/', 
url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/resnet50_am-6c502b37.pth'), 'resnet50.ra_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/resnet50_ra-85ebb6e5.pth'), 'resnet50.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/rw_resnet50-86acaeed.pth'), 'resnet50d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50d_ra2-464e36ba.pth', first_conv='conv1.0'), 'resnet50d.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50d_a1_0-e20cff14.pth', first_conv='conv1.0'), 'resnet50d.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50d_a2_0-a3adc64d.pth', first_conv='conv1.0'), 'resnet50d.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50d_a3_0-403fdfad.pth', first_conv='conv1.0'), 'resnet50t.untrained': _ttcfg(first_conv='conv1.0'), 'resnet101.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet101_a1h-36d3f2aa.pth'), 'resnet101.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet101_a1_0-cdcb52a9.pth'), 'resnet101.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet101_a2_0-6edb36c7.pth'), 'resnet101.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet101_a3_0-1db14157.pth'), 'resnet101d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet101d_ra2-2803ffab.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 320, 320)), 'resnet152.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet152_a1h-dc400468.pth'), 'resnet152.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet152_a1_0-2eee8a7a.pth'), 'resnet152.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet152_a2_0-b4c6978f.pth'), 'resnet152.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet152_a3_0-134d4688.pth'), 'resnet152d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet152d_ra2-5cac0439.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 320, 320)), 'resnet200.untrained': _ttcfg(), 'resnet200d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet200d_ra2-bdba9bf9.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 320, 320)), 
'wide_resnet50_2.racm_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/wide_resnet50_racm-8234f177.pth'), # torchvision resnet weights 'resnet18.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet18-5c106cde.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet34.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet34-333f7ec4.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet50.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet50-19c8e357.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet50.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet50-11ad3fa6.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet101.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet101-5d3b4d8f.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet101.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet101-cd907fc2.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet152.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet152-b121ed2d.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet152.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet152-f82ba261.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'wide_resnet50_2.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'wide_resnet50_2.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/wide_resnet50_2-9ba9bcbe.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'wide_resnet101_2.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'wide_resnet101_2.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/wide_resnet101_2-d733dc28.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), # ResNets w/ alternative norm layers 'resnet50_gn.a1h_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_gn_a1h2-8fe6c4d0.pth', crop_pct=0.94), # ResNeXt trained in timm (RSB paper and others) 'resnext50_32x4d.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnext50_32x4d_a1h-0146ab0a.pth'), 'resnext50_32x4d.a1_in1k': _rcfg( hf_hub_id='timm/', 
url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnext50_32x4d_a1_0-b5a91a1d.pth'), 'resnext50_32x4d.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnext50_32x4d_a2_0-efc76add.pth'), 'resnext50_32x4d.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnext50_32x4d_a3_0-3e450271.pth'), 'resnext50_32x4d.ra_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/resnext50_32x4d_ra-d733960d.pth'), 'resnext50d_32x4d.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnext50d_32x4d-103e99f8.pth', first_conv='conv1.0'), 'resnext101_32x4d.untrained': _ttcfg(), 'resnext101_64x4d.c1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/resnext101_64x4d_c-0d0e0cc0.pth'), # torchvision ResNeXt weights 'resnext50_32x4d.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnext101_32x8d.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnext101_64x4d.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext101_64x4d-173b62eb.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnext50_32x4d.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext50_32x4d-1a0047aa.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnext101_32x8d.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext101_32x8d-110c445d.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), # ResNeXt models - Weakly Supervised Pretraining on Instagram Hashtags # from https://github.com/facebookresearch/WSL-Images # Please note the CC-BY-NC 4.0 license on these weights, non-commercial use only. 
'resnext101_32x8d.fb_wsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/ig_resnext101_32x8-c38310e5.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/WSL-Images'), 'resnext101_32x16d.fb_wsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/ig_resnext101_32x16-c6f796b0.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/WSL-Images'), 'resnext101_32x32d.fb_wsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/ig_resnext101_32x32-e4b90b00.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/WSL-Images'), 'resnext101_32x48d.fb_wsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/ig_resnext101_32x48-3e41cc8a.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/WSL-Images'), # Semi-Supervised ResNe*t models from https://github.com/facebookresearch/semi-supervised-ImageNet1K-models # Please note the CC-BY-NC 4.0 license on theses weights, non-commercial use only. 'resnet18.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnet18-d92f0530.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnet50.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnet50-08389792.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext50_32x4-ddb3e555.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x4-dc43570a.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x8-2cfe2f8b.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x16-15fffa57.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), # Semi-Weakly Supervised ResNe*t models from https://github.com/facebookresearch/semi-supervised-ImageNet1K-models # Please note the CC-BY-NC 4.0 license on theses weights, non-commercial use only. 
'resnet18.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnet18-118f1556.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnet50.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnet50-16a12f1b.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext50_32x4d.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnext50_32x4-72679e44.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x4d.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnext101_32x4-3f87e46b.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x8d.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnext101_32x8-b4712904.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x16d.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnext101_32x16-f3559a9c.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), # Efficient Channel Attention ResNets 'ecaresnet26t.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecaresnet26t_ra2-46609757.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), test_crop_pct=0.95, test_input_size=(3, 320, 320)), 'ecaresnetlight.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnetlight-75a9c627.pth', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet50d.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnet50d-93c81e3b.pth', first_conv='conv1.0', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet50d_pruned.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnet50d_p-e4fa23c2.pth', first_conv='conv1.0', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet50t.ra2_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecaresnet50t_ra2-f7ac63c4.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), test_crop_pct=0.95, test_input_size=(3, 320, 320)), 'ecaresnet50t.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/ecaresnet50t_a1_0-99bd76a8.pth', first_conv='conv1.0'), 'ecaresnet50t.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/ecaresnet50t_a2_0-b1c7b745.pth', first_conv='conv1.0'), 'ecaresnet50t.a3_in1k': _r3cfg( hf_hub_id='timm/', 
url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/ecaresnet50t_a3_0-8cc311f1.pth', first_conv='conv1.0'), 'ecaresnet101d.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnet101d-153dad65.pth', first_conv='conv1.0', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet101d_pruned.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnet101d_p-9e74cb91.pth', first_conv='conv1.0', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet200d.untrained': _ttcfg( first_conv='conv1.0', input_size=(3, 256, 256), crop_pct=0.95, pool_size=(8, 8)), 'ecaresnet269d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecaresnet269d_320_ra2-7baa55cb.pth', first_conv='conv1.0', input_size=(3, 320, 320), pool_size=(10, 10), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 352, 352)), # Efficient Channel Attention ResNeXts 'ecaresnext26t_32x4d.untrained': _tcfg(first_conv='conv1.0'), 'ecaresnext50t_32x4d.untrained': _tcfg(first_conv='conv1.0'), # Squeeze-Excitation ResNets, to eventually replace the models in senet.py 'seresnet18.untrained': _ttcfg(), 'seresnet34.untrained': _ttcfg(), 'seresnet50.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/seresnet50_a1_0-ffa00869.pth', crop_pct=0.95), 'seresnet50.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/seresnet50_a2_0-850de0d9.pth', crop_pct=0.95), 'seresnet50.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/seresnet50_a3_0-317ecd56.pth', crop_pct=0.95), 'seresnet50.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet50_ra_224-8efdb4bb.pth'), 'seresnet50t.untrained': _ttcfg( first_conv='conv1.0'), 'seresnet101.untrained': _ttcfg(), 'seresnet152.untrained': _ttcfg(), 'seresnet152d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet152d_ra2-04464dd2.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 320, 320) ), 'seresnet200d.untrained': _ttcfg( first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8)), 'seresnet269d.untrained': _ttcfg( first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8)), # Squeeze-Excitation ResNeXts, to eventually replace the models in senet.py 'seresnext26d_32x4d.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext26d_32x4d-80fa48a3.pth', first_conv='conv1.0'), 'seresnext26t_32x4d.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext26tn_32x4d-569cb627.pth', first_conv='conv1.0'), 'seresnext50_32x4d.racm_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext50_32x4d_racm-a304a460.pth'), 'seresnext101_32x4d.untrained': _ttcfg(), 'seresnext101_32x8d.ah_in1k': _rcfg( hf_hub_id='timm/', 
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/seresnext101_32x8d_ah-e6bc4c0a.pth'), 'seresnext101d_32x8d.ah_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/seresnext101d_32x8d_ah-191d7b94.pth', first_conv='conv1.0'), # ResNets with anti-aliasing / blur pool 'resnetaa50d.sw_in12k_ft_in1k': _ttcfg( hf_hub_id='timm/', first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'resnetaa101d.sw_in12k_ft_in1k': _ttcfg( hf_hub_id='timm/', first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'seresnextaa101d_32x8d.sw_in12k_ft_in1k_288': _ttcfg( hf_hub_id='timm/', crop_pct=0.95, input_size=(3, 288, 288), pool_size=(9, 9), test_input_size=(3, 320, 320), test_crop_pct=1.0, first_conv='conv1.0'), 'seresnextaa101d_32x8d.sw_in12k_ft_in1k': _ttcfg( hf_hub_id='timm/', first_conv='conv1.0', test_crop_pct=1.0), 'seresnextaa201d_32x8d.sw_in12k_ft_in1k_384': _cfg( hf_hub_id='timm/', interpolation='bicubic', first_conv='conv1.0', pool_size=(12, 12), input_size=(3, 384, 384), crop_pct=1.0), 'seresnextaa201d_32x8d.sw_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, interpolation='bicubic', first_conv='conv1.0', crop_pct=0.95, input_size=(3, 320, 320), pool_size=(10, 10), test_input_size=(3, 384, 384), test_crop_pct=1.0), 'resnetaa50d.sw_in12k': _ttcfg( hf_hub_id='timm/', num_classes=11821, first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'resnetaa50d.d_in12k': _ttcfg( hf_hub_id='timm/', num_classes=11821, first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'resnetaa101d.sw_in12k': _ttcfg( hf_hub_id='timm/', num_classes=11821, first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'seresnextaa101d_32x8d.sw_in12k': _ttcfg( hf_hub_id='timm/', num_classes=11821, first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'resnetblur18.untrained': _ttcfg(), 'resnetblur50.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnetblur50-84f4748f.pth'), 'resnetblur50d.untrained': _ttcfg(first_conv='conv1.0'), 'resnetblur101d.untrained': _ttcfg(first_conv='conv1.0'), 'resnetaa34d.untrained': _ttcfg(first_conv='conv1.0'), 'resnetaa50.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnetaa50_a1h-4cf422b3.pth'), 'seresnetaa50d.untrained': _ttcfg(first_conv='conv1.0'), 'seresnextaa101d_32x8d.ah_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/seresnextaa101d_32x8d_ah-83c8ae12.pth', first_conv='conv1.0'), # ResNet-RS models 'resnetrs50.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs50_ema-6b53758b.pth', input_size=(3, 160, 160), pool_size=(5, 5), crop_pct=0.91, test_input_size=(3, 224, 224), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs101.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs101_i192_ema-1509bbf6.pth', input_size=(3, 192, 192), pool_size=(6, 6), crop_pct=0.94, test_input_size=(3, 288, 288), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs152.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs152_i256_ema-a9aff7f9.pth', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=1.0, 
test_input_size=(3, 320, 320), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs200.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/resnetrs200_c-6b698b88.pth', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=1.0, test_input_size=(3, 320, 320), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs270.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs270_ema-b40e674c.pth', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=1.0, test_input_size=(3, 352, 352), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs350.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs350_i256_ema-5a1aa8f1.pth', input_size=(3, 288, 288), pool_size=(9, 9), crop_pct=1.0, test_input_size=(3, 384, 384), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs420.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs420_ema-972dee69.pth', input_size=(3, 320, 320), pool_size=(10, 10), crop_pct=1.0, test_input_size=(3, 416, 416), interpolation='bicubic', first_conv='conv1.0'), # gluon resnet weights 'resnet18.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet18_v1b-0757602b.pth'), 'resnet34.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet34_v1b-c6d82d59.pth'), 'resnet50.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1b-0ebe02e2.pth'), 'resnet101.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1b-3b017079.pth'), 'resnet152.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1b-c1edb0dd.pth'), 'resnet50c.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1c-48092f55.pth', first_conv='conv1.0'), 'resnet101c.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1c-1f26822a.pth', first_conv='conv1.0'), 'resnet152c.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1c-a3bb0b98.pth', first_conv='conv1.0'), 'resnet50d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1d-818a1b1b.pth', first_conv='conv1.0'), 'resnet101d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1d-0f9c8644.pth', first_conv='conv1.0'), 'resnet152d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1d-bd354e12.pth', first_conv='conv1.0'), 'resnet50s.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1s-1762acc0.pth', 
first_conv='conv1.0'), 'resnet101s.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1s-60fe0cc1.pth', first_conv='conv1.0'), 'resnet152s.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1s-dcc41b81.pth', first_conv='conv1.0'), 'resnext50_32x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext50_32x4d-e6a097c1.pth'), 'resnext101_32x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext101_32x4d-b253c8c4.pth'), 'resnext101_64x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext101_64x4d-f9a8e184.pth'), 'seresnext50_32x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext50_32x4d-90cf2d6e.pth'), 'seresnext101_32x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext101_32x4d-cf52900d.pth'), 'seresnext101_64x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext101_64x4d-f9926f93.pth'), 'senet154.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_senet154-70a1a3c0.pth', first_conv='conv1.0'), }) @register_model def resnet10t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-10-T model. """ model_args = dict(block=BasicBlock, layers=[1, 1, 1, 1], stem_width=32, stem_type='deep_tiered', avg_down=True) return _create_resnet('resnet10t', pretrained, **dict(model_args, **kwargs)) @register_model def resnet14t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-14-T model. """ model_args = dict(block=Bottleneck, layers=[1, 1, 1, 1], stem_width=32, stem_type='deep_tiered', avg_down=True) return _create_resnet('resnet14t', pretrained, **dict(model_args, **kwargs)) @register_model def resnet18(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-18 model. """ model_args = dict(block=BasicBlock, layers=[2, 2, 2, 2]) return _create_resnet('resnet18', pretrained, **dict(model_args, **kwargs)) @register_model def resnet18d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-18-D model. """ model_args = dict(block=BasicBlock, layers=[2, 2, 2, 2], stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet18d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet34(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-34 model. """ model_args = dict(block=BasicBlock, layers=[3, 4, 6, 3]) return _create_resnet('resnet34', pretrained, **dict(model_args, **kwargs)) @register_model def resnet34d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-34-D model. """ model_args = dict(block=BasicBlock, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet34d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet26(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-26 model. 
""" model_args = dict(block=Bottleneck, layers=[2, 2, 2, 2]) return _create_resnet('resnet26', pretrained, **dict(model_args, **kwargs)) @register_model def resnet26t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-26-T model. """ model_args = dict(block=Bottleneck, layers=[2, 2, 2, 2], stem_width=32, stem_type='deep_tiered', avg_down=True) return _create_resnet('resnet26t', pretrained, **dict(model_args, **kwargs)) @register_model def resnet26d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-26-D model. """ model_args = dict(block=Bottleneck, layers=[2, 2, 2, 2], stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet26d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50 model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3]) return _create_resnet('resnet50', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50c(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-C model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep') return _create_resnet('resnet50c', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet50d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50s(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-S model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], stem_width=64, stem_type='deep') return _create_resnet('resnet50s', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-T model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep_tiered', avg_down=True) return _create_resnet('resnet50t', pretrained, **dict(model_args, **kwargs)) @register_model def resnet101(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101 model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3]) return _create_resnet('resnet101', pretrained, **dict(model_args, **kwargs)) @register_model def resnet101c(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-C model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], stem_width=32, stem_type='deep') return _create_resnet('resnet101c', pretrained, **dict(model_args, **kwargs)) @register_model def resnet101d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet101d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet101s(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-S model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], stem_width=64, stem_type='deep') return _create_resnet('resnet101s', pretrained, **dict(model_args, **kwargs)) @register_model def resnet152(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-152 model. 
""" model_args = dict(block=Bottleneck, layers=[3, 8, 36, 3]) return _create_resnet('resnet152', pretrained, **dict(model_args, **kwargs)) @register_model def resnet152c(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-152-C model. """ model_args = dict(block=Bottleneck, layers=[3, 8, 36, 3], stem_width=32, stem_type='deep') return _create_resnet('resnet152c', pretrained, **dict(model_args, **kwargs)) @register_model def resnet152d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-152-D model. """ model_args = dict(block=Bottleneck, layers=[3, 8, 36, 3], stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet152d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet152s(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-152-S model. """ model_args = dict(block=Bottleneck, layers=[3, 8, 36, 3], stem_width=64, stem_type='deep') return _create_resnet('resnet152s', pretrained, **dict(model_args, **kwargs)) @register_model def resnet200(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-200 model. """ model_args = dict(block=Bottleneck, layers=[3, 24, 36, 3]) return _create_resnet('resnet200', pretrained, **dict(model_args, **kwargs)) @register_model def resnet200d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-200-D model. """ model_args = dict(block=Bottleneck, layers=[3, 24, 36, 3], stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet200d', pretrained, **dict(model_args, **kwargs)) @register_model def wide_resnet50_2(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a Wide ResNet-50-2 model. The model is the same as ResNet except for the bottleneck number of channels which is twice larger in every block. The number of channels in outer 1x1 convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 channels, and in Wide ResNet-50-2 has 2048-1024-2048. """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], base_width=128) return _create_resnet('wide_resnet50_2', pretrained, **dict(model_args, **kwargs)) @register_model def wide_resnet101_2(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a Wide ResNet-101-2 model. The model is the same as ResNet except for the bottleneck number of channels which is twice larger in every block. The number of channels in outer 1x1 convolutions is the same. """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], base_width=128) return _create_resnet('wide_resnet101_2', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50_gn(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50 model w/ GroupNorm """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], norm_layer='groupnorm') return _create_resnet('resnet50_gn', pretrained, **dict(model_args, **kwargs)) @register_model def resnext50_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt50-32x4d model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], cardinality=32, base_width=4) return _create_resnet('resnext50_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext50d_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt50d-32x4d model. 
ResNext50 w/ deep stem & avg pool downsample """ model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], cardinality=32, base_width=4, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnext50d_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt-101 32x4d model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], cardinality=32, base_width=4) return _create_resnet('resnext101_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_32x8d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt-101 32x8d model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], cardinality=32, base_width=8) return _create_resnet('resnext101_32x8d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_32x16d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt-101 32x16d model """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], cardinality=32, base_width=16) return _create_resnet('resnext101_32x16d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_32x32d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt-101 32x32d model """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], cardinality=32, base_width=32) return _create_resnet('resnext101_32x32d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_64x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt101-64x4d model. """ model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], cardinality=64, base_width=4) return _create_resnet('resnext101_64x4d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet26t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs an ECA-ResNeXt-26-T model. This is technically a 28 layer ResNet, like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem and ECA attn. """ model_args = dict( block=Bottleneck, layers=[2, 2, 2, 2], stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet26t', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet50d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model with eca. """ model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet50d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet50d_pruned(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model pruned with eca. The pruning has been obtained using https://arxiv.org/pdf/2002.08258.pdf """ model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet50d_pruned', pretrained, pruned=True, **dict(model_args, **kwargs)) @register_model def ecaresnet50t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs an ECA-ResNet-50-T model. Like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem and ECA attn. 
""" model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet50t', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnetlight(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D light model with eca. """ model_args = dict( block=Bottleneck, layers=[1, 1, 11, 3], stem_width=32, avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnetlight', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet101d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model with eca. """ model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet101d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet101d_pruned(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model pruned with eca. The pruning has been obtained using https://arxiv.org/pdf/2002.08258.pdf """ model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet101d_pruned', pretrained, pruned=True, **dict(model_args, **kwargs)) @register_model def ecaresnet200d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-200-D model with ECA. """ model_args = dict( block=Bottleneck, layers=[3, 24, 36, 3], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet200d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet269d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-269-D model with ECA. """ model_args = dict( block=Bottleneck, layers=[3, 30, 48, 8], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet269d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnext26t_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs an ECA-ResNeXt-26-T model. This is technically a 28 layer ResNet, like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem. This model replaces SE module with the ECA module """ model_args = dict( block=Bottleneck, layers=[2, 2, 2, 2], cardinality=32, base_width=4, stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnext26t_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnext50t_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs an ECA-ResNeXt-50-T model. This is technically a 28 layer ResNet, like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem. 
This model replaces SE module with the ECA module """ model_args = dict( block=Bottleneck, layers=[2, 2, 2, 2], cardinality=32, base_width=4, stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnext50t_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet18(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=BasicBlock, layers=[2, 2, 2, 2], block_args=dict(attn_layer='se')) return _create_resnet('seresnet18', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet34(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=BasicBlock, layers=[3, 4, 6, 3], block_args=dict(attn_layer='se')) return _create_resnet('seresnet34', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet50(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], block_args=dict(attn_layer='se')) return _create_resnet('seresnet50', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet50t(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnet50t', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet101(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=Bottleneck, layers=[3, 4, 23, 3], block_args=dict(attn_layer='se')) return _create_resnet('seresnet101', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet152(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=Bottleneck, layers=[3, 8, 36, 3], block_args=dict(attn_layer='se')) return _create_resnet('seresnet152', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet152d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=[3, 8, 36, 3], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnet152d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet200d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-200-D model with SE attn. """ model_args = dict( block=Bottleneck, layers=[3, 24, 36, 3], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnet200d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet269d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-269-D model with SE attn. """ model_args = dict( block=Bottleneck, layers=[3, 30, 48, 8], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnet269d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext26d_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a SE-ResNeXt-26-D model.` This is technically a 28 layer ResNet, using the 'D' modifier from Gluon / bag-of-tricks for combination of deep stem and avg_pool in downsample. """ model_args = dict( block=Bottleneck, layers=[2, 2, 2, 2], cardinality=32, base_width=4, stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnext26d_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext26t_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a SE-ResNet-26-T model. 
This is technically a 28 layer ResNet, like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem. """ model_args = dict( block=Bottleneck, layers=[2, 2, 2, 2], cardinality=32, base_width=4, stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnext26t_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext50_32x4d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], cardinality=32, base_width=4, block_args=dict(attn_layer='se')) return _create_resnet('seresnext50_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext101_32x4d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], cardinality=32, base_width=4, block_args=dict(attn_layer='se')) return _create_resnet('seresnext101_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext101_32x8d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], cardinality=32, base_width=8, block_args=dict(attn_layer='se')) return _create_resnet('seresnext101_32x8d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext101d_32x8d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], cardinality=32, base_width=8, stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnext101d_32x8d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext101_64x4d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], cardinality=64, base_width=4, block_args=dict(attn_layer='se')) return _create_resnet('seresnext101_64x4d', pretrained, **dict(model_args, **kwargs)) @register_model def senet154(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=[3, 8, 36, 3], cardinality=64, base_width=4, stem_type='deep', down_kernel_size=3, block_reduce_first=2, block_args=dict(attn_layer='se')) return _create_resnet('senet154', pretrained, **dict(model_args, **kwargs)) @register_model def resnetblur18(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-18 model with blur anti-aliasing """ model_args = dict(block=BasicBlock, layers=[2, 2, 2, 2], aa_layer=BlurPool2d) return _create_resnet('resnetblur18', pretrained, **dict(model_args, **kwargs)) @register_model def resnetblur50(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50 model with blur anti-aliasing """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], aa_layer=BlurPool2d) return _create_resnet('resnetblur50', pretrained, **dict(model_args, **kwargs)) @register_model def resnetblur50d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model with blur anti-aliasing """ model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], aa_layer=BlurPool2d, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnetblur50d', pretrained, **dict(model_args, **kwargs)) @register_model def resnetblur101d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model with blur anti-aliasing """ model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], aa_layer=BlurPool2d, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnetblur101d', pretrained, **dict(model_args, **kwargs)) 
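# NOTE: every factory above follows the same pattern: build a per-architecture `model_args` dict, then merge
# caller overrides on top via `dict(model_args, **kwargs)`, so keyword arguments passed by the user (directly
# or through timm.create_model) take precedence over the defaults. A minimal sketch of just the merge
# semantics, independent of any particular model:
#
#     defaults = dict(layers=[3, 4, 6, 3], num_classes=1000)
#     overrides = dict(num_classes=10)
#     merged = dict(defaults, **overrides)
#     assert merged == dict(layers=[3, 4, 6, 3], num_classes=10)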
@register_model def resnetaa34d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-34-D model w/ avgpool anti-aliasing """ model_args = dict( block=BasicBlock, layers=[3, 4, 6, 3], aa_layer=nn.AvgPool2d, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnetaa34d', pretrained, **dict(model_args, **kwargs)) @register_model def resnetaa50(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50 model with avgpool anti-aliasing """ model_args = dict(block=Bottleneck, layers=[3, 4, 6, 3], aa_layer=nn.AvgPool2d) return _create_resnet('resnetaa50', pretrained, **dict(model_args, **kwargs)) @register_model def resnetaa50d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model with avgpool anti-aliasing """ model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], aa_layer=nn.AvgPool2d, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnetaa50d', pretrained, **dict(model_args, **kwargs)) @register_model def resnetaa101d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model with avgpool anti-aliasing """ model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], aa_layer=nn.AvgPool2d, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnetaa101d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnetaa50d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a SE=ResNet-50-D model with avgpool anti-aliasing """ model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], aa_layer=nn.AvgPool2d, stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnetaa50d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnextaa101d_32x8d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a SE=ResNeXt-101-D 32x8d model with avgpool anti-aliasing """ model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], cardinality=32, base_width=8, stem_width=32, stem_type='deep', avg_down=True, aa_layer=nn.AvgPool2d, block_args=dict(attn_layer='se')) return _create_resnet('seresnextaa101d_32x8d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnextaa201d_32x8d(pretrained: bool = False, **kwargs): """Constructs a SE=ResNeXt-101-D 32x8d model with avgpool anti-aliasing """ model_args = dict( block=Bottleneck, layers=[3, 24, 36, 4], cardinality=32, base_width=8, stem_width=64, stem_type='deep', avg_down=True, aa_layer=nn.AvgPool2d, block_args=dict(attn_layer='se')) return _create_resnet('seresnextaa201d_32x8d', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs50(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-50 model. Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs50', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs101(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-101 model. 
Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=[3, 4, 23, 3], stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs101', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs152(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-152 model. Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=[3, 8, 36, 3], stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs152', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs200(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-200 model. Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=[3, 24, 36, 3], stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs200', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs270(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-270 model. Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=[4, 29, 53, 4], stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs270', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs350(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-350 model. 
Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=[4, 36, 72, 4], stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs350', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs420(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-420 model Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=[4, 44, 87, 4], stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs420', pretrained, **dict(model_args, **kwargs)) register_model_deprecations(__name__, { 'tv_resnet34': 'resnet34.tv_in1k', 'tv_resnet50': 'resnet50.tv_in1k', 'tv_resnet101': 'resnet101.tv_in1k', 'tv_resnet152': 'resnet152.tv_in1k', 'tv_resnext50_32x4d' : 'resnext50_32x4d.tv_in1k', 'ig_resnext101_32x8d': 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k', 'ig_resnext101_32x16d': 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k', 'ig_resnext101_32x32d': 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k', 'ig_resnext101_32x48d': 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k', 'ssl_resnet18': 'resnet18.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnet50': 'resnet50.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnext50_32x4d': 'resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnext101_32x4d': 'resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnext101_32x8d': 'resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnext101_32x16d': 'resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k', 'swsl_resnet18': 'resnet18.fb_swsl_ig1b_ft_in1k', 'swsl_resnet50': 'resnet50.fb_swsl_ig1b_ft_in1k', 'swsl_resnext50_32x4d': 'resnext50_32x4d.fb_swsl_ig1b_ft_in1k', 'swsl_resnext101_32x4d': 'resnext101_32x4d.fb_swsl_ig1b_ft_in1k', 'swsl_resnext101_32x8d': 'resnext101_32x8d.fb_swsl_ig1b_ft_in1k', 'swsl_resnext101_32x16d': 'resnext101_32x16d.fb_swsl_ig1b_ft_in1k', 'gluon_resnet18_v1b': 'resnet18.gluon_in1k', 'gluon_resnet34_v1b': 'resnet34.gluon_in1k', 'gluon_resnet50_v1b': 'resnet50.gluon_in1k', 'gluon_resnet101_v1b': 'resnet101.gluon_in1k', 'gluon_resnet152_v1b': 'resnet152.gluon_in1k', 'gluon_resnet50_v1c': 'resnet50c.gluon_in1k', 'gluon_resnet101_v1c': 'resnet101c.gluon_in1k', 'gluon_resnet152_v1c': 'resnet152c.gluon_in1k', 'gluon_resnet50_v1d': 'resnet50d.gluon_in1k', 'gluon_resnet101_v1d': 'resnet101d.gluon_in1k', 'gluon_resnet152_v1d': 'resnet152d.gluon_in1k', 'gluon_resnet50_v1s': 'resnet50s.gluon_in1k', 'gluon_resnet101_v1s': 'resnet101s.gluon_in1k', 'gluon_resnet152_v1s': 'resnet152s.gluon_in1k', 'gluon_resnext50_32x4d': 'resnext50_32x4d.gluon_in1k', 'gluon_resnext101_32x4d': 'resnext101_32x4d.gluon_in1k', 'gluon_resnext101_64x4d': 'resnext101_64x4d.gluon_in1k', 'gluon_seresnext50_32x4d': 'seresnext50_32x4d.gluon_in1k', 'gluon_seresnext101_32x4d': 'seresnext101_32x4d.gluon_in1k', 'gluon_seresnext101_64x4d': 'seresnext101_64x4d.gluon_in1k', 'gluon_senet154': 'senet154.gluon_in1k', 'seresnext26tn_32x4d': 'seresnext26t_32x4d', })
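# Example usage (a minimal sketch; assumes the timm package is installed and that the pretrained weights
# referenced in default_cfgs above are reachable via the HF hub or the release URLs):
#
#     import timm
#     import torch
#
#     model = timm.create_model('resnet50.a1_in1k', pretrained=True)  # resolved through the registry above
#     model.eval()
#     with torch.inference_mode():
#         logits = model(torch.randn(1, 3, 224, 224))  # (1, num_classes) logits, 1000 for the in1k head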
pytorch-image-models/timm/models/resnet.py/0
{ "file_path": "pytorch-image-models/timm/models/resnet.py", "repo_id": "pytorch-image-models", "token_count": 44237 }
198
""" Vision Transformer (ViT) in PyTorch A PyTorch implement of Vision Transformers as described in: 'An Image Is Worth 16 x 16 Words: Transformers for Image Recognition at Scale' - https://arxiv.org/abs/2010.11929 `How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers` - https://arxiv.org/abs/2106.10270 `FlexiViT: One Model for All Patch Sizes` - https://arxiv.org/abs/2212.08013 The official jax code is released and available at * https://github.com/google-research/vision_transformer * https://github.com/google-research/big_vision Acknowledgments: * The paper authors for releasing code and weights, thanks! * I fixed my class token impl based on Phil Wang's https://github.com/lucidrains/vit-pytorch * Simple transformer style inspired by Andrej Karpathy's https://github.com/karpathy/minGPT * Bert reference code checks against Huggingface Transformers and Tensorflow Bert Hacked together by / Copyright 2020, Ross Wightman """ import logging import math from collections import OrderedDict from functools import partial from typing import Any, Callable, Dict, Optional, Sequence, Set, Tuple, Type, Union, List try: from typing import Literal except ImportError: from typing_extensions import Literal import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint from torch.jit import Final from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, IMAGENET_INCEPTION_MEAN, IMAGENET_INCEPTION_STD, \ OPENAI_CLIP_MEAN, OPENAI_CLIP_STD from timm.layers import PatchEmbed, Mlp, DropPath, AttentionPoolLatent, RmsNorm, PatchDropout, SwiGLUPacked, \ trunc_normal_, lecun_normal_, resample_patch_embed, resample_abs_pos_embed, use_fused_attn, \ get_act_layer, get_norm_layer, LayerType from ._builder import build_model_with_cfg from ._manipulate import named_apply, checkpoint_seq, adapt_input_conv from ._registry import generate_default_cfgs, register_model, register_model_deprecations __all__ = ['VisionTransformer'] # model_registry will add each entrypoint fn to this _logger = logging.getLogger(__name__) class Attention(nn.Module): fused_attn: Final[bool] def __init__( self, dim: int, num_heads: int = 8, qkv_bias: bool = False, qk_norm: bool = False, attn_drop: float = 0., proj_drop: float = 0., norm_layer: nn.Module = nn.LayerNorm, ) -> None: super().__init__() assert dim % num_heads == 0, 'dim should be divisible by num_heads' self.num_heads = num_heads self.head_dim = dim // num_heads self.scale = self.head_dim ** -0.5 self.fused_attn = use_fused_attn() self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) self.q_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.k_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.attn_drop = nn.Dropout(attn_drop) self.proj = nn.Linear(dim, dim) self.proj_drop = nn.Dropout(proj_drop) def forward(self, x: torch.Tensor) -> torch.Tensor: B, N, C = x.shape qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4) q, k, v = qkv.unbind(0) q, k = self.q_norm(q), self.k_norm(k) if self.fused_attn: x = F.scaled_dot_product_attention( q, k, v, dropout_p=self.attn_drop.p if self.training else 0., ) else: q = q * self.scale attn = q @ k.transpose(-2, -1) attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = attn @ v x = x.transpose(1, 2).reshape(B, N, C) x = self.proj(x) x = self.proj_drop(x) return x class LayerScale(nn.Module): def __init__( self, dim: int, init_values: float = 1e-5, inplace: bool = False, ) -> None: super().__init__() 
self.inplace = inplace self.gamma = nn.Parameter(init_values * torch.ones(dim)) def forward(self, x: torch.Tensor) -> torch.Tensor: return x.mul_(self.gamma) if self.inplace else x * self.gamma class Block(nn.Module): def __init__( self, dim: int, num_heads: int, mlp_ratio: float = 4., qkv_bias: bool = False, qk_norm: bool = False, proj_drop: float = 0., attn_drop: float = 0., init_values: Optional[float] = None, drop_path: float = 0., act_layer: nn.Module = nn.GELU, norm_layer: nn.Module = nn.LayerNorm, mlp_layer: nn.Module = Mlp, ) -> None: super().__init__() self.norm1 = norm_layer(dim) self.attn = Attention( dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_norm=qk_norm, attn_drop=attn_drop, proj_drop=proj_drop, norm_layer=norm_layer, ) self.ls1 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity() self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity() self.norm2 = norm_layer(dim) self.mlp = mlp_layer( in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=proj_drop, ) self.ls2 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity() self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity() def forward(self, x: torch.Tensor) -> torch.Tensor: x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x)))) x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x)))) return x class ResPostBlock(nn.Module): def __init__( self, dim: int, num_heads: int, mlp_ratio: float = 4., qkv_bias: bool = False, qk_norm: bool = False, proj_drop: float = 0., attn_drop: float = 0., init_values: Optional[float] = None, drop_path: float = 0., act_layer: nn.Module = nn.GELU, norm_layer: nn.Module = nn.LayerNorm, mlp_layer: nn.Module = Mlp, ) -> None: super().__init__() self.init_values = init_values self.attn = Attention( dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_norm=qk_norm, attn_drop=attn_drop, proj_drop=proj_drop, norm_layer=norm_layer, ) self.norm1 = norm_layer(dim) self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity() self.mlp = mlp_layer( in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=proj_drop, ) self.norm2 = norm_layer(dim) self.drop_path2 = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() self.init_weights() def init_weights(self) -> None: # NOTE this init overrides that base model init with specific changes for the block type if self.init_values is not None: nn.init.constant_(self.norm1.weight, self.init_values) nn.init.constant_(self.norm2.weight, self.init_values) def forward(self, x: torch.Tensor) -> torch.Tensor: x = x + self.drop_path1(self.norm1(self.attn(x))) x = x + self.drop_path2(self.norm2(self.mlp(x))) return x class ParallelScalingBlock(nn.Module): """ Parallel ViT block (MLP & Attention in parallel) Based on: 'Scaling Vision Transformers to 22 Billion Parameters` - https://arxiv.org/abs/2302.05442 """ fused_attn: Final[bool] def __init__( self, dim: int, num_heads: int, mlp_ratio: float = 4., qkv_bias: bool = False, qk_norm: bool = False, proj_drop: float = 0., attn_drop: float = 0., init_values: Optional[float] = None, drop_path: float = 0., act_layer: nn.Module = nn.GELU, norm_layer: nn.Module = nn.LayerNorm, mlp_layer: Optional[nn.Module] = None, ) -> None: super().__init__() assert dim % num_heads == 0, 'dim should be divisible by num_heads' self.num_heads = num_heads self.head_dim = dim // num_heads self.scale = self.head_dim ** -0.5 self.fused_attn = use_fused_attn() mlp_hidden_dim = int(mlp_ratio * dim) in_proj_out_dim = mlp_hidden_dim + 3 * dim self.in_norm = norm_layer(dim) self.in_proj = nn.Linear(dim, in_proj_out_dim, bias=qkv_bias) self.in_split = [mlp_hidden_dim] + [dim] * 3 if qkv_bias: self.register_buffer('qkv_bias', None) self.register_parameter('mlp_bias', None) else: self.register_buffer('qkv_bias', torch.zeros(3 * dim), persistent=False) self.mlp_bias = nn.Parameter(torch.zeros(mlp_hidden_dim)) self.q_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.k_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.attn_drop = nn.Dropout(attn_drop) self.attn_out_proj = nn.Linear(dim, dim) self.mlp_drop = nn.Dropout(proj_drop) self.mlp_act = act_layer() self.mlp_out_proj = nn.Linear(mlp_hidden_dim, dim) self.ls = LayerScale(dim, init_values=init_values) if init_values is not None else nn.Identity() self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() def forward(self, x: torch.Tensor) -> torch.Tensor: B, N, C = x.shape # Combined MLP fc1 & qkv projections y = self.in_norm(x) if self.mlp_bias is not None: # Concat constant zero-bias for qkv w/ trainable mlp_bias. 
# Appears faster than adding to x_mlp separately y = F.linear(y, self.in_proj.weight, torch.cat((self.qkv_bias, self.mlp_bias))) else: y = self.in_proj(y) x_mlp, q, k, v = torch.split(y, self.in_split, dim=-1) # Dot product attention w/ qk norm q = self.q_norm(q.view(B, N, self.num_heads, self.head_dim)).transpose(1, 2) k = self.k_norm(k.view(B, N, self.num_heads, self.head_dim)).transpose(1, 2) v = v.view(B, N, self.num_heads, self.head_dim).transpose(1, 2) if self.fused_attn: x_attn = F.scaled_dot_product_attention( q, k, v, dropout_p=self.attn_drop.p if self.training else 0., ) else: q = q * self.scale attn = q @ k.transpose(-2, -1) attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x_attn = attn @ v x_attn = x_attn.transpose(1, 2).reshape(B, N, C) x_attn = self.attn_out_proj(x_attn) # MLP activation, dropout, fc2 x_mlp = self.mlp_act(x_mlp) x_mlp = self.mlp_drop(x_mlp) x_mlp = self.mlp_out_proj(x_mlp) # Add residual w/ drop path & layer scale applied y = self.drop_path(self.ls(x_attn + x_mlp)) x = x + y return x class ParallelThingsBlock(nn.Module): """ Parallel ViT block (N parallel attention followed by N parallel MLP) Based on: `Three things everyone should know about Vision Transformers` - https://arxiv.org/abs/2203.09795 """ def __init__( self, dim: int, num_heads: int, num_parallel: int = 2, mlp_ratio: float = 4., qkv_bias: bool = False, qk_norm: bool = False, init_values: Optional[float] = None, proj_drop: float = 0., attn_drop: float = 0., drop_path: float = 0., act_layer: nn.Module = nn.GELU, norm_layer: nn.Module = nn.LayerNorm, mlp_layer: nn.Module = Mlp, ) -> None: super().__init__() self.num_parallel = num_parallel self.attns = nn.ModuleList() self.ffns = nn.ModuleList() for _ in range(num_parallel): self.attns.append(nn.Sequential(OrderedDict([ ('norm', norm_layer(dim)), ('attn', Attention( dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_norm=qk_norm, attn_drop=attn_drop, proj_drop=proj_drop, norm_layer=norm_layer, )), ('ls', LayerScale(dim, init_values=init_values) if init_values else nn.Identity()), ('drop_path', DropPath(drop_path) if drop_path > 0. else nn.Identity()) ]))) self.ffns.append(nn.Sequential(OrderedDict([ ('norm', norm_layer(dim)), ('mlp', mlp_layer( dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=proj_drop, )), ('ls', LayerScale(dim, init_values=init_values) if init_values else nn.Identity()), ('drop_path', DropPath(drop_path) if drop_path > 0. 
else nn.Identity()) ]))) def _forward_jit(self, x: torch.Tensor) -> torch.Tensor: x = x + torch.stack([attn(x) for attn in self.attns]).sum(dim=0) x = x + torch.stack([ffn(x) for ffn in self.ffns]).sum(dim=0) return x @torch.jit.ignore def _forward(self, x: torch.Tensor) -> torch.Tensor: x = x + sum(attn(x) for attn in self.attns) x = x + sum(ffn(x) for ffn in self.ffns) return x def forward(self, x: torch.Tensor) -> torch.Tensor: if torch.jit.is_scripting() or torch.jit.is_tracing(): return self._forward_jit(x) else: return self._forward(x) class VisionTransformer(nn.Module): """ Vision Transformer A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - https://arxiv.org/abs/2010.11929 """ dynamic_img_size: Final[bool] def __init__( self, img_size: Union[int, Tuple[int, int]] = 224, patch_size: Union[int, Tuple[int, int]] = 16, in_chans: int = 3, num_classes: int = 1000, global_pool: Literal['', 'avg', 'token', 'map'] = 'token', embed_dim: int = 768, depth: int = 12, num_heads: int = 12, mlp_ratio: float = 4., qkv_bias: bool = True, qk_norm: bool = False, init_values: Optional[float] = None, class_token: bool = True, no_embed_class: bool = False, reg_tokens: int = 0, pre_norm: bool = False, fc_norm: Optional[bool] = None, dynamic_img_size: bool = False, dynamic_img_pad: bool = False, drop_rate: float = 0., pos_drop_rate: float = 0., patch_drop_rate: float = 0., proj_drop_rate: float = 0., attn_drop_rate: float = 0., drop_path_rate: float = 0., weight_init: Literal['skip', 'jax', 'jax_nlhb', 'moco', ''] = '', fix_init: bool = False, embed_layer: Callable = PatchEmbed, norm_layer: Optional[LayerType] = None, act_layer: Optional[LayerType] = None, block_fn: Type[nn.Module] = Block, mlp_layer: Type[nn.Module] = Mlp, ) -> None: """ Args: img_size: Input image size. patch_size: Patch size. in_chans: Number of image input channels. num_classes: Number of classes for classification head. global_pool: Type of global pooling for final sequence (default: 'token'). embed_dim: Transformer embedding dimension. depth: Depth of transformer. num_heads: Number of attention heads. mlp_ratio: Ratio of mlp hidden dim to embedding dim. qkv_bias: Enable bias for qkv projections if True. init_values: Layer-scale init values (layer-scale enabled if not None). class_token: Use class token. no_embed_class: Don't include position embeddings for class (or reg) tokens. reg_tokens: Number of register tokens. fc_norm: Pre head norm after pool (instead of before), if None, enabled when global_pool == 'avg'. drop_rate: Head dropout rate. pos_drop_rate: Position embedding dropout rate. attn_drop_rate: Attention dropout rate. drop_path_rate: Stochastic depth rate. weight_init: Weight initialization scheme. fix_init: Apply weight initialization fix (scaling w/ layer index). embed_layer: Patch embedding layer. norm_layer: Normalization layer. act_layer: MLP activation layer. block_fn: Transformer block layer.
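mlp_layer: MLP layer.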
""" super().__init__() assert global_pool in ('', 'avg', 'token', 'map') assert class_token or global_pool != 'token' use_fc_norm = global_pool == 'avg' if fc_norm is None else fc_norm norm_layer = get_norm_layer(norm_layer) or partial(nn.LayerNorm, eps=1e-6) act_layer = get_act_layer(act_layer) or nn.GELU self.num_classes = num_classes self.global_pool = global_pool self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models self.num_prefix_tokens = 1 if class_token else 0 self.num_prefix_tokens += reg_tokens self.num_reg_tokens = reg_tokens self.has_class_token = class_token self.no_embed_class = no_embed_class # don't embed prefix positions (includes reg) self.dynamic_img_size = dynamic_img_size self.grad_checkpointing = False embed_args = {} if dynamic_img_size: # flatten deferred until after pos embed embed_args.update(dict(strict_img_size=False, output_fmt='NHWC')) self.patch_embed = embed_layer( img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, bias=not pre_norm, # disable bias if pre-norm is used (e.g. CLIP) dynamic_img_pad=dynamic_img_pad, **embed_args, ) num_patches = self.patch_embed.num_patches self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if class_token else None self.reg_token = nn.Parameter(torch.zeros(1, reg_tokens, embed_dim)) if reg_tokens else None embed_len = num_patches if no_embed_class else num_patches + self.num_prefix_tokens self.pos_embed = nn.Parameter(torch.randn(1, embed_len, embed_dim) * .02) self.pos_drop = nn.Dropout(p=pos_drop_rate) if patch_drop_rate > 0: self.patch_drop = PatchDropout( patch_drop_rate, num_prefix_tokens=self.num_prefix_tokens, ) else: self.patch_drop = nn.Identity() self.norm_pre = norm_layer(embed_dim) if pre_norm else nn.Identity() dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule self.blocks = nn.Sequential(*[ block_fn( dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_norm=qk_norm, init_values=init_values, proj_drop=proj_drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer, mlp_layer=mlp_layer, ) for i in range(depth)]) self.norm = norm_layer(embed_dim) if not use_fc_norm else nn.Identity() # Classifier Head if global_pool == 'map': self.attn_pool = AttentionPoolLatent( self.embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, norm_layer=norm_layer, ) else: self.attn_pool = None self.fc_norm = norm_layer(embed_dim) if use_fc_norm else nn.Identity() self.head_drop = nn.Dropout(drop_rate) self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() if weight_init != 'skip': self.init_weights(weight_init) if fix_init: self.fix_init_weight() def fix_init_weight(self): def rescale(param, _layer_id): param.div_(math.sqrt(2.0 * _layer_id)) for layer_id, layer in enumerate(self.blocks): rescale(layer.attn.proj.weight.data, layer_id + 1) rescale(layer.mlp.fc2.weight.data, layer_id + 1) def init_weights(self, mode: str = '') -> None: assert mode in ('jax', 'jax_nlhb', 'moco', '') head_bias = -math.log(self.num_classes) if 'nlhb' in mode else 0. 
trunc_normal_(self.pos_embed, std=.02) if self.cls_token is not None: nn.init.normal_(self.cls_token, std=1e-6) named_apply(get_init_weights_vit(mode, head_bias), self) def _init_weights(self, m: nn.Module) -> None: # this fn left here for compat with downstream users init_weights_vit_timm(m) @torch.jit.ignore() def load_pretrained(self, checkpoint_path: str, prefix: str = '') -> None: _load_weights(self, checkpoint_path, prefix) @torch.jit.ignore def no_weight_decay(self) -> Set: return {'pos_embed', 'cls_token', 'dist_token'} @torch.jit.ignore def group_matcher(self, coarse: bool = False) -> Dict: return dict( stem=r'^cls_token|pos_embed|patch_embed', # stem and embed blocks=[(r'^blocks\.(\d+)', None), (r'^norm', (99999,))] ) @torch.jit.ignore def set_grad_checkpointing(self, enable: bool = True) -> None: self.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self) -> nn.Module: return self.head def reset_classifier(self, num_classes: int, global_pool = None) -> None: self.num_classes = num_classes if global_pool is not None: assert global_pool in ('', 'avg', 'token', 'map') if global_pool == 'map' and self.attn_pool is None: assert False, "Cannot currently add attention pooling in reset_classifier()." elif global_pool != 'map' and self.attn_pool is not None: self.attn_pool = None # remove attention pooling self.global_pool = global_pool self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() def _pos_embed(self, x: torch.Tensor) -> torch.Tensor: if self.dynamic_img_size: B, H, W, C = x.shape pos_embed = resample_abs_pos_embed( self.pos_embed, (H, W), num_prefix_tokens=0 if self.no_embed_class else self.num_prefix_tokens, ) x = x.view(B, -1, C) else: pos_embed = self.pos_embed to_cat = [] if self.cls_token is not None: to_cat.append(self.cls_token.expand(x.shape[0], -1, -1)) if self.reg_token is not None: to_cat.append(self.reg_token.expand(x.shape[0], -1, -1)) if self.no_embed_class: # deit-3, updated JAX (big vision) # position embedding does not overlap with class token, add then concat x = x + pos_embed if to_cat: x = torch.cat(to_cat + [x], dim=1) else: # original timm, JAX, and deit vit impl # pos_embed has entry for class token, concat then add if to_cat: x = torch.cat(to_cat + [x], dim=1) x = x + pos_embed return self.pos_drop(x) def _intermediate_layers( self, x: torch.Tensor, n: Union[int, Sequence] = 1, ) -> List[torch.Tensor]: outputs, num_blocks = [], len(self.blocks) take_indices = set(range(num_blocks - n, num_blocks) if isinstance(n, int) else n) last_index_to_take = max(take_indices) # forward pass x = self.patch_embed(x) x = self._pos_embed(x) x = self.patch_drop(x) x = self.norm_pre(x) for i, blk in enumerate(self.blocks[: last_index_to_take + 1]): x = blk(x) if i in take_indices: outputs.append(x) return outputs def get_intermediate_layers( self, x: torch.Tensor, n: Union[int, Sequence] = 1, reshape: bool = False, return_prefix_tokens: bool = False, norm: bool = False, ) -> Tuple[Union[torch.Tensor, Tuple[torch.Tensor]]]: """ Intermediate layer accessor (NOTE: This is a WIP experiment).
Inspired by DINO / DINOv2 interface """ # take last n blocks if n is an int, if n is a sequence, select by matching indices outputs = self._intermediate_layers(x, n) if norm: outputs = [self.norm(out) for out in outputs] prefix_tokens = [out[:, 0:self.num_prefix_tokens] for out in outputs] outputs = [out[:, self.num_prefix_tokens:] for out in outputs] if reshape: patch_size = self.patch_embed.patch_size batch, _, height, width = x.size() outputs = [ out.reshape(batch, int(math.ceil(height / patch_size[0])), int(math.ceil(width / patch_size[1])), -1) .permute(0, 3, 1, 2) .contiguous() for out in outputs ] if return_prefix_tokens: return tuple(zip(outputs, prefix_tokens)) return tuple(outputs) def forward_features(self, x: torch.Tensor) -> torch.Tensor: x = self.patch_embed(x) x = self._pos_embed(x) x = self.patch_drop(x) x = self.norm_pre(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.blocks, x) else: x = self.blocks(x) x = self.norm(x) return x def forward_head(self, x: torch.Tensor, pre_logits: bool = False) -> torch.Tensor: if self.attn_pool is not None: x = self.attn_pool(x) elif self.global_pool == 'avg': x = x[:, self.num_prefix_tokens:].mean(dim=1) elif self.global_pool: x = x[:, 0] # class token x = self.fc_norm(x) x = self.head_drop(x) return x if pre_logits else self.head(x) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.forward_features(x) x = self.forward_head(x) return x def init_weights_vit_timm(module: nn.Module, name: str = '') -> None: """ ViT weight initialization, original timm impl (for reproducibility) """ if isinstance(module, nn.Linear): trunc_normal_(module.weight, std=.02) if module.bias is not None: nn.init.zeros_(module.bias) elif hasattr(module, 'init_weights'): module.init_weights() def init_weights_vit_jax(module: nn.Module, name: str = '', head_bias: float = 0.0) -> None: """ ViT weight initialization, matching JAX (Flax) impl """ if isinstance(module, nn.Linear): if name.startswith('head'): nn.init.zeros_(module.weight) nn.init.constant_(module.bias, head_bias) else: nn.init.xavier_uniform_(module.weight) if module.bias is not None: nn.init.normal_(module.bias, std=1e-6) if 'mlp' in name else nn.init.zeros_(module.bias) elif isinstance(module, nn.Conv2d): lecun_normal_(module.weight) if module.bias is not None: nn.init.zeros_(module.bias) elif hasattr(module, 'init_weights'): module.init_weights() def init_weights_vit_moco(module: nn.Module, name: str = '') -> None: """ ViT weight initialization, matching moco-v3 impl minus fixed PatchEmbed """ if isinstance(module, nn.Linear): if 'qkv' in name: # treat the weights of Q, K, V separately val = math.sqrt(6. / float(module.weight.shape[0] // 3 + module.weight.shape[1])) nn.init.uniform_(module.weight, -val, val) else: nn.init.xavier_uniform_(module.weight) if module.bias is not None: nn.init.zeros_(module.bias) elif hasattr(module, 'init_weights'): module.init_weights() def get_init_weights_vit(mode: str = 'jax', head_bias: float = 0.0) -> Callable: if 'jax' in mode: return partial(init_weights_vit_jax, head_bias=head_bias) elif 'moco' in mode: return init_weights_vit_moco else: return init_weights_vit_timm def resize_pos_embed( posemb: torch.Tensor, posemb_new: torch.Tensor, num_prefix_tokens: int = 1, gs_new: Tuple[int, int] = (), interpolation: str = 'bicubic', antialias: bool = False, ) -> torch.Tensor: """ Rescale the grid of position embeddings when loading from state_dict.
*DEPRECATED* This function is being deprecated in favour of resample_abs_pos_embed Adapted from: https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224 """ ntok_new = posemb_new.shape[1] if num_prefix_tokens: posemb_prefix, posemb_grid = posemb[:, :num_prefix_tokens], posemb[0, num_prefix_tokens:] ntok_new -= num_prefix_tokens else: posemb_prefix, posemb_grid = posemb[:, :0], posemb[0] gs_old = int(math.sqrt(len(posemb_grid))) if not len(gs_new): # backwards compatibility gs_new = [int(math.sqrt(ntok_new))] * 2 assert len(gs_new) >= 2 _logger.info(f'Resized position embedding: {posemb.shape} ({[gs_old, gs_old]}) to {posemb_new.shape} ({gs_new}).') posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) posemb_grid = F.interpolate(posemb_grid, size=gs_new, mode=interpolation, antialias=antialias, align_corners=False) posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_new[0] * gs_new[1], -1) posemb = torch.cat([posemb_prefix, posemb_grid], dim=1) return posemb @torch.no_grad() def _load_weights(model: VisionTransformer, checkpoint_path: str, prefix: str = '') -> None: """ Load weights from .npz checkpoints for official Google Brain Flax implementation """ import numpy as np def _n2p(w, t=True): if w.ndim == 4 and w.shape[0] == w.shape[1] == w.shape[2] == 1: w = w.flatten() if t: if w.ndim == 4: w = w.transpose([3, 2, 0, 1]) elif w.ndim == 3: w = w.transpose([2, 0, 1]) elif w.ndim == 2: w = w.transpose([1, 0]) return torch.from_numpy(w) w = np.load(checkpoint_path) interpolation = 'bilinear' antialias = False big_vision = False if not prefix: if 'opt/target/embedding/kernel' in w: prefix = 'opt/target/' elif 'params/embedding/kernel' in w: prefix = 'params/' big_vision = True elif 'params/img/embedding/kernel' in w: prefix = 'params/img/' big_vision = True if hasattr(model.patch_embed, 'backbone'): # hybrid backbone = model.patch_embed.backbone stem_only = not hasattr(backbone, 'stem') stem = backbone if stem_only else backbone.stem stem.conv.weight.copy_(adapt_input_conv(stem.conv.weight.shape[1], _n2p(w[f'{prefix}conv_root/kernel']))) stem.norm.weight.copy_(_n2p(w[f'{prefix}gn_root/scale'])) stem.norm.bias.copy_(_n2p(w[f'{prefix}gn_root/bias'])) if not stem_only: for i, stage in enumerate(backbone.stages): for j, block in enumerate(stage.blocks): bp = f'{prefix}block{i + 1}/unit{j + 1}/' for r in range(3): getattr(block, f'conv{r + 1}').weight.copy_(_n2p(w[f'{bp}conv{r + 1}/kernel'])) getattr(block, f'norm{r + 1}').weight.copy_(_n2p(w[f'{bp}gn{r + 1}/scale'])) getattr(block, f'norm{r + 1}').bias.copy_(_n2p(w[f'{bp}gn{r + 1}/bias'])) if block.downsample is not None: block.downsample.conv.weight.copy_(_n2p(w[f'{bp}conv_proj/kernel'])) block.downsample.norm.weight.copy_(_n2p(w[f'{bp}gn_proj/scale'])) block.downsample.norm.bias.copy_(_n2p(w[f'{bp}gn_proj/bias'])) embed_conv_w = _n2p(w[f'{prefix}embedding/kernel']) else: embed_conv_w = adapt_input_conv( model.patch_embed.proj.weight.shape[1], _n2p(w[f'{prefix}embedding/kernel'])) if embed_conv_w.shape[-2:] != model.patch_embed.proj.weight.shape[-2:]: embed_conv_w = resample_patch_embed( embed_conv_w, model.patch_embed.proj.weight.shape[-2:], interpolation=interpolation, antialias=antialias, verbose=True, ) model.patch_embed.proj.weight.copy_(embed_conv_w) model.patch_embed.proj.bias.copy_(_n2p(w[f'{prefix}embedding/bias'])) if model.cls_token is not None: model.cls_token.copy_(_n2p(w[f'{prefix}cls'], t=False)) if big_vision: pos_embed_w = 
_n2p(w[f'{prefix}pos_embedding'], t=False) else: pos_embed_w = _n2p(w[f'{prefix}Transformer/posembed_input/pos_embedding'], t=False) if pos_embed_w.shape != model.pos_embed.shape: old_shape = pos_embed_w.shape num_prefix_tokens = 0 if getattr(model, 'no_embed_class', False) else getattr(model, 'num_prefix_tokens', 1) pos_embed_w = resample_abs_pos_embed( # resize pos embedding when different size from pretrained weights pos_embed_w, new_size=model.patch_embed.grid_size, num_prefix_tokens=num_prefix_tokens, interpolation=interpolation, antialias=antialias, verbose=True, ) model.pos_embed.copy_(pos_embed_w) model.norm.weight.copy_(_n2p(w[f'{prefix}Transformer/encoder_norm/scale'])) model.norm.bias.copy_(_n2p(w[f'{prefix}Transformer/encoder_norm/bias'])) if (isinstance(model.head, nn.Linear) and f'{prefix}head/bias' in w and model.head.bias.shape[0] == w[f'{prefix}head/bias'].shape[-1]): model.head.weight.copy_(_n2p(w[f'{prefix}head/kernel'])) model.head.bias.copy_(_n2p(w[f'{prefix}head/bias'])) # NOTE representation layer has been removed, not used in latest 21k/1k pretrained weights # if isinstance(getattr(model.pre_logits, 'fc', None), nn.Linear) and f'{prefix}pre_logits/bias' in w: # model.pre_logits.fc.weight.copy_(_n2p(w[f'{prefix}pre_logits/kernel'])) # model.pre_logits.fc.bias.copy_(_n2p(w[f'{prefix}pre_logits/bias'])) if model.attn_pool is not None: block_prefix = f'{prefix}MAPHead_0/' mha_prefix = block_prefix + f'MultiHeadDotProductAttention_0/' model.attn_pool.latent.copy_(_n2p(w[f'{block_prefix}probe'], t=False)) model.attn_pool.kv.weight.copy_(torch.cat([ _n2p(w[f'{mha_prefix}{n}/kernel'], t=False).flatten(1).T for n in ('key', 'value')])) model.attn_pool.kv.bias.copy_(torch.cat([ _n2p(w[f'{mha_prefix}{n}/bias'], t=False).reshape(-1) for n in ('key', 'value')])) model.attn_pool.q.weight.copy_(_n2p(w[f'{mha_prefix}query/kernel'], t=False).flatten(1).T) model.attn_pool.q.bias.copy_(_n2p(w[f'{mha_prefix}query/bias'], t=False).reshape(-1)) model.attn_pool.proj.weight.copy_(_n2p(w[f'{mha_prefix}out/kernel']).flatten(1)) model.attn_pool.proj.bias.copy_(_n2p(w[f'{mha_prefix}out/bias'])) model.attn_pool.norm.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/scale'])) model.attn_pool.norm.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/bias'])) for r in range(2): getattr(model.attn_pool.mlp, f'fc{r + 1}').weight.copy_(_n2p(w[f'{block_prefix}MlpBlock_0/Dense_{r}/kernel'])) getattr(model.attn_pool.mlp, f'fc{r + 1}').bias.copy_(_n2p(w[f'{block_prefix}MlpBlock_0/Dense_{r}/bias'])) mha_sub, b_sub, ln1_sub = (0, 0, 1) if big_vision else (1, 3, 2) for i, block in enumerate(model.blocks.children()): block_prefix = f'{prefix}Transformer/encoderblock_{i}/' mha_prefix = block_prefix + f'MultiHeadDotProductAttention_{mha_sub}/' block.norm1.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/scale'])) block.norm1.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/bias'])) block.attn.qkv.weight.copy_(torch.cat([ _n2p(w[f'{mha_prefix}{n}/kernel'], t=False).flatten(1).T for n in ('query', 'key', 'value')])) block.attn.qkv.bias.copy_(torch.cat([ _n2p(w[f'{mha_prefix}{n}/bias'], t=False).reshape(-1) for n in ('query', 'key', 'value')])) block.attn.proj.weight.copy_(_n2p(w[f'{mha_prefix}out/kernel']).flatten(1)) block.attn.proj.bias.copy_(_n2p(w[f'{mha_prefix}out/bias'])) block.norm2.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_{ln1_sub}/scale'])) block.norm2.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_{ln1_sub}/bias'])) for r in range(2): getattr(block.mlp, f'fc{r + 
1}').weight.copy_(_n2p(w[f'{block_prefix}MlpBlock_{b_sub}/Dense_{r}/kernel'])) getattr(block.mlp, f'fc{r + 1}').bias.copy_(_n2p(w[f'{block_prefix}MlpBlock_{b_sub}/Dense_{r}/bias'])) def _convert_openai_clip( state_dict: Dict[str, torch.Tensor], model: VisionTransformer, prefix: str = 'visual.', ) -> Dict[str, torch.Tensor]: out_dict = {} swaps = [ ('conv1', 'patch_embed.proj'), ('positional_embedding', 'pos_embed'), ('transformer.resblocks.', 'blocks.'), ('ln_pre', 'norm_pre'), ('ln_post', 'norm'), ('ln_', 'norm'), ('in_proj_', 'qkv.'), ('out_proj', 'proj'), ('mlp.c_fc', 'mlp.fc1'), ('mlp.c_proj', 'mlp.fc2'), ] for k, v in state_dict.items(): if not k.startswith(prefix): continue k = k.replace(prefix, '') for sp in swaps: k = k.replace(sp[0], sp[1]) if k == 'proj': k = 'head.weight' v = v.transpose(0, 1) out_dict['head.bias'] = torch.zeros(v.shape[0]) elif k == 'class_embedding': k = 'cls_token' v = v.unsqueeze(0).unsqueeze(1) elif k == 'pos_embed': v = v.unsqueeze(0) if v.shape[1] != model.pos_embed.shape[1]: # To resize pos embedding when using model at different size from pretrained weights num_prefix_tokens = 0 if getattr(model, 'no_embed_class', False) \ else getattr(model, 'num_prefix_tokens', 1) v = resample_abs_pos_embed( v, new_size=model.patch_embed.grid_size, num_prefix_tokens=num_prefix_tokens, verbose=True, ) out_dict[k] = v return out_dict def _convert_dinov2( state_dict: Dict[str, torch.Tensor], model: VisionTransformer, ) -> Dict[str, torch.Tensor]: import re out_dict = {} state_dict.pop("mask_token", None) if 'register_tokens' in state_dict: # convert dinov2 w/ registers to no_embed_class timm model (neither cls or reg tokens overlap pos embed) out_dict['reg_token'] = state_dict.pop('register_tokens') out_dict['cls_token'] = state_dict.pop('cls_token') + state_dict['pos_embed'][:, 0] out_dict['pos_embed'] = state_dict.pop('pos_embed')[:, 1:] for k, v in state_dict.items(): if re.match(r"blocks\.(\d+)\.mlp\.w12\.(?:weight|bias)", k): out_dict[k.replace("w12", "fc1")] = v continue elif re.match(r"blocks\.(\d+)\.mlp\.w3\.(?:weight|bias)", k): out_dict[k.replace("w3", "fc2")] = v continue out_dict[k] = v return out_dict def checkpoint_filter_fn( state_dict: Dict[str, torch.Tensor], model: VisionTransformer, adapt_layer_scale: bool = False, interpolation: str = 'bicubic', antialias: bool = True, ) -> Dict[str, torch.Tensor]: """ convert patch embedding weight from manual patchify + linear proj to conv""" import re out_dict = {} state_dict = state_dict.get('model', state_dict) state_dict = state_dict.get('state_dict', state_dict) prefix = '' if 'visual.class_embedding' in state_dict: return _convert_openai_clip(state_dict, model) elif 'module.visual.class_embedding' in state_dict: return _convert_openai_clip(state_dict, model, prefix='module.visual.') if "mask_token" in state_dict: state_dict = _convert_dinov2(state_dict, model) if "encoder" in state_dict: state_dict = state_dict['encoder'] prefix = 'module.' if 'visual.trunk.pos_embed' in state_dict: # convert an OpenCLIP model with timm vision encoder # FIXME remap final nn.Linear if it exists outside of the timm .trunk (ie in visual.head.proj) prefix = 'visual.trunk.' 
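# NOTE: a non-empty 'prefix' at this point indicates an external checkpoint layout (e.g. 'module.'-wrapped encoders, OpenCLIP 'visual.trunk.'); stripping it below maps those keys onto timm VisionTransformer parameter names.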
if prefix: # filter on & remove prefix string from keys state_dict = {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)} for k, v in state_dict.items(): if 'patch_embed.proj.weight' in k: O, I, H, W = model.patch_embed.proj.weight.shape if len(v.shape) < 4: # For old models that I trained prior to conv based patchification O, I, H, W = model.patch_embed.proj.weight.shape v = v.reshape(O, -1, H, W) if v.shape[-1] != W or v.shape[-2] != H: v = resample_patch_embed( v, (H, W), interpolation=interpolation, antialias=antialias, verbose=True, ) elif k == 'pos_embed' and v.shape[1] != model.pos_embed.shape[1]: # To resize pos embedding when using model at different size from pretrained weights num_prefix_tokens = 0 if getattr(model, 'no_embed_class', False) else getattr(model, 'num_prefix_tokens', 1) v = resample_abs_pos_embed( v, new_size=model.patch_embed.grid_size, num_prefix_tokens=num_prefix_tokens, interpolation=interpolation, antialias=antialias, verbose=True, ) elif adapt_layer_scale and 'gamma_' in k: # remap layer-scale gamma into sub-module (deit3 models) k = re.sub(r'gamma_([0-9])', r'ls\1.gamma', k) elif 'pre_logits' in k: # NOTE representation layer removed as not used in latest 21k/1k pretrained weights continue out_dict[k] = v return out_dict def _cfg(url: str = '', **kwargs) -> Dict[str, Any]: return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'crop_pct': 0.9, 'interpolation': 'bicubic', 'fixed_input_size': True, 'mean': IMAGENET_INCEPTION_MEAN, 'std': IMAGENET_INCEPTION_STD, 'first_conv': 'patch_embed.proj', 'classifier': 'head', **kwargs, } default_cfgs = { # re-finetuned augreg 21k FT on in1k weights 'vit_base_patch16_224.augreg2_in21k_ft_in1k': _cfg( hf_hub_id='timm/'), 'vit_base_patch16_384.augreg2_in21k_ft_in1k': _cfg(), 'vit_base_patch8_224.augreg2_in21k_ft_in1k': _cfg( hf_hub_id='timm/'), # How to train your ViT (augreg) weights, pretrained on 21k FT on in1k 'vit_tiny_patch16_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_tiny_patch16_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_small_patch32_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_small_patch32_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_small_patch16_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_small_patch16_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 
384), crop_pct=1.0), 'vit_base_patch32_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_base_patch32_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch16_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_base_patch16_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch8_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_8-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_large_patch16_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_large_patch16_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), # patch models (weights from official Google JAX impl) pretrained on in21k FT on in1k 'vit_base_patch16_224.orig_in21k_ft_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth', hf_hub_id='timm/'), 'vit_base_patch16_384.orig_in21k_ft_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_384-83fb41ba.pth', hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=1.0), 'vit_large_patch32_384.orig_in21k_ft_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p32_384-9b920ba8.pth', hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=1.0), # How to train your ViT (augreg) weights trained on in1k only 'vit_small_patch16_224.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_16-i1k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_small_patch16_384.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_16-i1k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch32_224.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i1k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_base_patch32_384.augreg_in1k': _cfg( 
url='https://storage.googleapis.com/vit_models/augreg/B_32-i1k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch16_224.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i1k-300ep-lr_0.001-aug_strong2-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_base_patch16_384.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i1k-300ep-lr_0.001-aug_strong2-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_large_patch14_224.untrained': _cfg(url=''), 'vit_huge_patch14_224.untrained': _cfg(url=''), 'vit_giant_patch14_224.untrained': _cfg(url=''), 'vit_gigantic_patch14_224.untrained': _cfg(url=''), # patch models, imagenet21k (weights from official Google JAX impl), classifier not valid 'vit_base_patch32_224.orig_in21k': _cfg( #url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth', hf_hub_id='timm/', num_classes=0), 'vit_base_patch16_224.orig_in21k': _cfg( #url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth', hf_hub_id='timm/', num_classes=0), 'vit_large_patch32_224.orig_in21k': _cfg( #url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth', hf_hub_id='timm/', num_classes=0), 'vit_large_patch16_224.orig_in21k': _cfg( #url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth', hf_hub_id='timm/', num_classes=0), 'vit_huge_patch14_224.orig_in21k': _cfg( hf_hub_id='timm/', num_classes=0), # How to train your ViT (augreg) weights, pretrained on in21k 'vit_tiny_patch16_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_small_patch32_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_small_patch16_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_base_patch32_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.03-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_base_patch16_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_base_patch8_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_8-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_large_patch16_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), # SAM trained 
models (https://arxiv.org/abs/2106.01548) 'vit_base_patch32_224.sam_in1k': _cfg( url='https://storage.googleapis.com/vit_models/sam/ViT-B_32.npz', custom_load=True, hf_hub_id='timm/'), 'vit_base_patch16_224.sam_in1k': _cfg( url='https://storage.googleapis.com/vit_models/sam/ViT-B_16.npz', custom_load=True, hf_hub_id='timm/'), # DINO pretrained - https://arxiv.org/abs/2104.14294 (no classifier head, for fine-tune only) 'vit_small_patch16_224.dino': _cfg( url='https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth', hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_small_patch8_224.dino': _cfg( url='https://dl.fbaipublicfiles.com/dino/dino_deitsmall8_pretrain/dino_deitsmall8_pretrain.pth', hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_base_patch16_224.dino': _cfg( url='https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain.pth', hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_base_patch8_224.dino': _cfg( url='https://dl.fbaipublicfiles.com/dino/dino_vitbase8_pretrain/dino_vitbase8_pretrain.pth', hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), # DINOv2 pretrained - https://arxiv.org/abs/2304.07193 (no classifier head, for fine-tune/features only) 'vit_small_patch14_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_base_patch14_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_large_patch14_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_giant_patch14_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), # DINOv2 pretrained w/ registers - https://arxiv.org/abs/2309.16588 (no classifier head, for fine-tune/features only) 'vit_small_patch14_reg4_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_reg4_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_base_patch14_reg4_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_reg4_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_large_patch14_reg4_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_reg4_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_giant_patch14_reg4_dinov2.lvd142m': _cfg( 
url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_reg4_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), # ViT ImageNet-21K-P pretraining by MILL 'vit_base_patch16_224_miil.in21k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/vit_base_patch16_224_in21k_miil-887286df.pth', hf_hub_id='timm/', mean=(0., 0., 0.), std=(1., 1., 1.), crop_pct=0.875, interpolation='bilinear', num_classes=11221), 'vit_base_patch16_224_miil.in21k_ft_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/vit_base_patch16_224_1k_miil_84_4-2deb18e3.pth', hf_hub_id='timm/', mean=(0., 0., 0.), std=(1., 1., 1.), crop_pct=0.875, interpolation='bilinear'), # Custom timm variants 'vit_base_patch16_rpn_224.sw_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_base_patch16_rpn_224-sw-3b07e89d.pth', hf_hub_id='timm/'), 'vit_medium_patch16_gap_240.sw_in12k': _cfg( hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=11821), 'vit_medium_patch16_gap_256.sw_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_medium_patch16_gap_384.sw_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=0.95, crop_mode='squash'), 'vit_base_patch16_gap_224': _cfg(), # CLIP pretrained image tower and related fine-tuned weights 'vit_base_patch32_clip_224.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch32_clip_384.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 384, 384)), 'vit_base_patch32_clip_448.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 448, 448)), 'vit_base_patch16_clip_224.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=0.95), 'vit_base_patch16_clip_384.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 384, 384), crop_mode='squash'), 'vit_large_patch14_clip_224.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0), 'vit_large_patch14_clip_336.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_huge_patch14_clip_336.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_base_patch32_clip_224.openai_ft_in12k_in1k': _cfg( # hf_hub_id='timm/vit_base_patch32_clip_224.openai_ft_in12k_in1k', # FIXME weight exists, need to push mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch32_clip_384.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=0.95, input_size=(3, 384, 384), crop_mode='squash'), 'vit_base_patch16_clip_224.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=0.95), 'vit_base_patch16_clip_384.openai_ft_in12k_in1k': _cfg( 
hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=0.95, input_size=(3, 384, 384), crop_mode='squash'), 'vit_large_patch14_clip_224.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_large_patch14_clip_336.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_base_patch32_clip_224.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch16_clip_224.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_base_patch16_clip_384.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 384, 384), crop_mode='squash'), 'vit_large_patch14_clip_224.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0), 'vit_large_patch14_clip_336.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_huge_patch14_clip_224.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_huge_patch14_clip_336.laion2b_ft_in1k': _cfg( hf_hub_id='', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_base_patch32_clip_224.openai_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch16_clip_224.openai_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch16_clip_384.openai_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 384, 384), crop_mode='squash'), 'vit_large_patch14_clip_224.openai_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_base_patch32_clip_224.laion2b_ft_in12k': _cfg( #hf_hub_id='timm/vit_base_patch32_clip_224.laion2b_ft_in12k', # FIXME weight exists, need to push mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=11821), 'vit_base_patch16_clip_224.laion2b_ft_in12k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=11821), 'vit_large_patch14_clip_224.laion2b_ft_in12k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0, num_classes=11821), 'vit_huge_patch14_clip_224.laion2b_ft_in12k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=11821), 'vit_base_patch32_clip_224.openai_ft_in12k': _cfg( # hf_hub_id='timm/vit_base_patch32_clip_224.openai_ft_in12k', # FIXME weight exists, need to push mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=11821), 'vit_base_patch16_clip_224.openai_ft_in12k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=11821), 'vit_large_patch14_clip_224.openai_ft_in12k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=11821), 'vit_base_patch32_clip_224.laion2b': _cfg( hf_hub_id='laion/CLIP-ViT-B-32-laion2B-s34B-b79K', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_base_patch16_clip_224.laion2b': _cfg( hf_hub_id='laion/CLIP-ViT-B-16-laion2B-s34B-b88K', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, 
std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.laion2b': _cfg( hf_hub_id='laion/CLIP-ViT-L-14-laion2B-s32B-b82K', hf_hub_filename='open_clip_pytorch_model.bin', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0, num_classes=768), 'vit_huge_patch14_clip_224.laion2b': _cfg( hf_hub_id='laion/CLIP-ViT-H-14-laion2B-s32B-b79K', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_giant_patch14_clip_224.laion2b': _cfg( hf_hub_id='laion/CLIP-ViT-g-14-laion2B-s12B-b42K', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_gigantic_patch14_clip_224.laion2b': _cfg( hf_hub_id='laion/CLIP-ViT-bigG-14-laion2B-39B-b160k', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1280), 'vit_base_patch32_clip_224.datacompxl': _cfg( hf_hub_id='laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_base_patch32_clip_256.datacompxl': _cfg( hf_hub_id='laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 256, 256), num_classes=512), 'vit_base_patch16_clip_224.datacompxl': _cfg( hf_hub_id='laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.datacompxl': _cfg( hf_hub_id='laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_base_patch16_clip_224.dfn2b': _cfg( hf_hub_id='apple/DFN2B-CLIP-ViT-B-16', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.dfn2b': _cfg( hf_hub_id='apple/DFN2B-CLIP-ViT-L-14', hf_hub_filename='open_clip_pytorch_model.bin', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_huge_patch14_clip_224.dfn5b': _cfg( hf_hub_id='apple/DFN5B-CLIP-ViT-H-14', hf_hub_filename='open_clip_pytorch_model.bin', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_huge_patch14_clip_378.dfn5b': _cfg( hf_hub_id='apple/DFN5B-CLIP-ViT-H-14-378', hf_hub_filename='open_clip_pytorch_model.bin', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, notes=('natively QuickGELU, use quickgelu model variant for original results',), crop_pct=1.0, input_size=(3, 378, 378), num_classes=1024), 'vit_base_patch32_clip_224.metaclip_2pt5b': _cfg( hf_hub_id='facebook/metaclip-b32-fullcc2.5b', hf_hub_filename='metaclip_b32_fullcc2.5b.bin', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_base_patch16_clip_224.metaclip_2pt5b': _cfg( hf_hub_id='facebook/metaclip-b16-fullcc2.5b', hf_hub_filename='metaclip_b16_fullcc2.5b.bin', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), 
mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.metaclip_2pt5b': _cfg( hf_hub_id='facebook/metaclip-l14-fullcc2.5b', hf_hub_filename='metaclip_l14_fullcc2.5b.bin', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_huge_patch14_clip_224.metaclip_2pt5b': _cfg( hf_hub_id='facebook/metaclip-h14-fullcc2.5b', hf_hub_filename='metaclip_h14_fullcc2.5b.bin', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_base_patch32_clip_224.openai': _cfg( hf_hub_id='timm/vit_base_patch32_clip_224.openai', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_base_patch16_clip_224.openai': _cfg( hf_hub_id='timm/vit_base_patch16_clip_224.openai', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_large_patch14_clip_224.openai': _cfg( hf_hub_id='timm/vit_large_patch14_clip_224.openai', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_large_patch14_clip_336.openai': _cfg( hf_hub_id='timm/vit_large_patch14_clip_336.openai', hf_hub_filename='open_clip_pytorch_model.bin', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 336, 336), num_classes=768), # experimental (may be removed) 'vit_base_patch32_plus_256.untrained': _cfg(url='', input_size=(3, 256, 256), crop_pct=0.95), 'vit_base_patch16_plus_240.untrained': _cfg(url='', input_size=(3, 240, 240), crop_pct=0.95), 'vit_small_patch16_36x1_224.untrained': _cfg(url=''), 'vit_small_patch16_18x2_224.untrained': _cfg(url=''), 'vit_base_patch16_18x2_224.untrained': _cfg(url=''), # EVA fine-tuned weights from MAE style MIM - EVA-CLIP target pretrain # https://github.com/baaivision/EVA/blob/7ecf2c0a370d97967e86d047d7af9188f78d2df3/eva/README.md#eva-l-learning-better-mim-representations-from-eva-clip 'eva_large_patch14_196.in22k_ft_in22k_in1k': _cfg( # hf_hub_id='BAAI/EVA', hf_hub_filename='eva_l_psz14_196px_21k_to_1k_ft_88p6.pt', hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, input_size=(3, 196, 196), crop_pct=1.0), 'eva_large_patch14_336.in22k_ft_in22k_in1k': _cfg( # hf_hub_id='BAAI/EVA', hf_hub_filename='eva_l_psz14_336px_21k_to_1k_ft_89p2.pt', hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, input_size=(3, 336, 336), crop_pct=1.0, crop_mode='squash'), 'eva_large_patch14_196.in22k_ft_in1k': _cfg( # hf_hub_id='BAAI/EVA', hf_hub_filename='eva_l_psz14_196px_1k_ft_88p0.pt', hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, input_size=(3, 196, 196), crop_pct=1.0), 'eva_large_patch14_336.in22k_ft_in1k': _cfg( # hf_hub_id='BAAI/EVA', hf_hub_filename='eva_l_psz14_336px_1k_ft_88p65.pt', hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, input_size=(3, 336, 336), crop_pct=1.0, crop_mode='squash'), 'flexivit_small.1200ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_s_i1k.npz', custom_load=True, hf_hub_id='timm/', 
input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_small.600ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_s_i1k_600ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_small.300ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_s_i1k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.1200ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i1k.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.600ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i1k_600ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.300ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i1k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.1000ep_in21k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i21k_1000ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=21843), 'flexivit_base.300ep_in21k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i21k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=21843), 'flexivit_large.1200ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_l_i1k.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_large.600ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_l_i1k_600ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_large.300ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_l_i1k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.patch16_in21k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/vit_b16_i21k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=21843), 'flexivit_base.patch30_in21k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/vit_b30_i21k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=21843), 'vit_base_patch16_xp_224.untrained': _cfg(url=''), 'vit_large_patch14_xp_224.untrained': _cfg(url=''), 'vit_huge_patch14_xp_224.untrained': _cfg(url=''), 'vit_base_patch16_224.mae': _cfg( url='https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth', hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_large_patch16_224.mae': _cfg( url='https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_large.pth', hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_huge_patch14_224.mae': _cfg( url='https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_huge.pth', hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_huge_patch14_gap_224.in1k_ijepa': _cfg( url='https://dl.fbaipublicfiles.com/ijepa/IN1K-vit.h.14-300e.pth.tar', # hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 
'vit_huge_patch14_gap_224.in22k_ijepa': _cfg( url='https://dl.fbaipublicfiles.com/ijepa/IN22K-vit.h.14-900e.pth.tar', # hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_huge_patch16_gap_448.in1k_ijepa': _cfg( url='https://dl.fbaipublicfiles.com/ijepa/IN1K-vit.h.16-448px-300e.pth.tar', # hf_hub_id='timm/', license='cc-by-nc-4.0', input_size=(3, 448, 448), crop_pct=1.0, mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_giant_patch16_gap_224.in22k_ijepa': _cfg( url='https://dl.fbaipublicfiles.com/ijepa/IN22K-vit.g.16-600e.pth.tar', # hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_base_patch16_siglip_224.webli': _cfg( hf_hub_id='timm/ViT-B-16-SigLIP', hf_hub_filename='open_clip_pytorch_model.bin', num_classes=0), 'vit_base_patch16_siglip_256.webli': _cfg( hf_hub_id='timm/ViT-B-16-SigLIP-256', hf_hub_filename='open_clip_pytorch_model.bin', input_size=(3, 256, 256), num_classes=0), 'vit_base_patch16_siglip_384.webli': _cfg( hf_hub_id='timm/ViT-B-16-SigLIP-384', hf_hub_filename='open_clip_pytorch_model.bin', input_size=(3, 384, 384), num_classes=0), 'vit_base_patch16_siglip_512.webli': _cfg( hf_hub_id='timm/ViT-B-16-SigLIP-512', hf_hub_filename='open_clip_pytorch_model.bin', input_size=(3, 512, 512), num_classes=0), 'vit_large_patch16_siglip_256.webli': _cfg( hf_hub_id='timm/ViT-L-16-SigLIP-256', hf_hub_filename='open_clip_pytorch_model.bin', input_size=(3, 256, 256), num_classes=0), 'vit_large_patch16_siglip_384.webli': _cfg( hf_hub_id='timm/ViT-L-16-SigLIP-384', hf_hub_filename='open_clip_pytorch_model.bin', input_size=(3, 384, 384), num_classes=0), 'vit_so400m_patch14_siglip_224.webli': _cfg( hf_hub_id='timm/ViT-SO400M-14-SigLIP', hf_hub_filename='open_clip_pytorch_model.bin', num_classes=0), 'vit_so400m_patch14_siglip_384.webli': _cfg( hf_hub_id='timm/ViT-SO400M-14-SigLIP-384', hf_hub_filename='open_clip_pytorch_model.bin', input_size=(3, 384, 384), num_classes=0), 'vit_xsmall_patch16_clip_224.tinyclip_yfcc15m': _cfg( hf_hub_id='timm/', hf_hub_filename='open_clip_pytorch_model.bin', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_medium_patch32_clip_224.tinyclip_laion400m': _cfg( hf_hub_id='timm/', hf_hub_filename='open_clip_pytorch_model.bin', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_medium_patch16_clip_224.tinyclip_yfcc15m': _cfg( hf_hub_id='timm/', hf_hub_filename='open_clip_pytorch_model.bin', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_betwixt_patch32_clip_224.tinyclip_laion400m': _cfg( hf_hub_id='timm/', hf_hub_filename='open_clip_pytorch_model.bin', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_medium_patch16_reg4_256': _cfg( input_size=(3, 256, 256)), 'vit_medium_patch16_reg4_gap_256': _cfg( input_size=(3, 256, 256)), 'vit_base_patch16_reg4_gap_256': _cfg( input_size=(3, 256, 256)), 'vit_so150m_patch16_reg4_gap_256': _cfg( input_size=(3, 256, 256)), 'vit_so150m_patch16_reg4_map_256': _cfg( input_size=(3, 256, 256)), } _quick_gelu_cfgs = [ 'vit_large_patch14_clip_224.dfn2b', 'vit_huge_patch14_clip_224.dfn5b', 'vit_huge_patch14_clip_378.dfn5b', 'vit_base_patch32_clip_224.metaclip_2pt5b', 'vit_base_patch16_clip_224.metaclip_2pt5b', 'vit_large_patch14_clip_224.metaclip_2pt5b', 'vit_huge_patch14_clip_224.metaclip_2pt5b', 'vit_base_patch32_clip_224.openai', 
'vit_base_patch16_clip_224.openai', 'vit_large_patch14_clip_224.openai', 'vit_large_patch14_clip_336.openai', ] default_cfgs.update({ n.replace('_clip_', '_clip_quickgelu_'): default_cfgs[n] for n in _quick_gelu_cfgs }) default_cfgs = generate_default_cfgs(default_cfgs) def _create_vision_transformer(variant: str, pretrained: bool = False, **kwargs) -> VisionTransformer: if kwargs.get('features_only', None): raise RuntimeError('features_only not implemented for Vision Transformer models.') if 'flexi' in variant: # FIXME Google FlexiViT pretrained models have a strong preference for bilinear patch / embed # interpolation, other pretrained models resize better w/ anti-aliased bicubic interpolation. _filter_fn = partial(checkpoint_filter_fn, interpolation='bilinear', antialias=False) else: _filter_fn = checkpoint_filter_fn # FIXME attn pool (currently only in siglip) params removed if pool disabled, is there a better soln? strict = True if 'siglip' in variant and kwargs.get('global_pool', None) != 'map': strict = False return build_model_with_cfg( VisionTransformer, variant, pretrained, pretrained_filter_fn=_filter_fn, pretrained_strict=strict, **kwargs, ) @register_model def vit_tiny_patch16_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Tiny (Vit-Ti/16) """ model_args = dict(patch_size=16, embed_dim=192, depth=12, num_heads=3) model = _create_vision_transformer('vit_tiny_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_tiny_patch16_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Tiny (Vit-Ti/16) @ 384x384. """ model_args = dict(patch_size=16, embed_dim=192, depth=12, num_heads=3) model = _create_vision_transformer('vit_tiny_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch32_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/32) """ model_args = dict(patch_size=32, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch32_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch32_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/32) at 384x384. 
""" model_args = dict(patch_size=32, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch32_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch16_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/16) """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch16_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/16) """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch8_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/8) """ model_args = dict(patch_size=8, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=32, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch32_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=32, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch32_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 224x224, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch8_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/8) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 224x224, source https://github.com/google-research/vision_transformer. 
""" model_args = dict(patch_size=8, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch32_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929). No pretrained weights. """ model_args = dict(patch_size=32, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch32_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch32_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=32, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch32_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 224x224, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=16, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=16, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) """ model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) from original paper (https://arxiv.org/abs/2010.11929). 
""" model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16) model = _create_vision_transformer('vit_huge_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Giant (little-g) model (ViT-g/14) from `Scaling Vision Transformers` - https://arxiv.org/abs/2106.04560 """ model_args = dict(patch_size=14, embed_dim=1408, mlp_ratio=48/11, depth=40, num_heads=16) model = _create_vision_transformer('vit_giant_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_gigantic_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Gigantic (big-G) model (ViT-G/14) from `Scaling Vision Transformers` - https://arxiv.org/abs/2106.04560 """ model_args = dict(patch_size=14, embed_dim=1664, mlp_ratio=64/13, depth=48, num_heads=16) model = _create_vision_transformer( 'vit_gigantic_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_224_miil(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929). Weights taken from: https://github.com/Alibaba-MIIL/ImageNet21K """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, qkv_bias=False) model = _create_vision_transformer( 'vit_base_patch16_224_miil', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_gap_240(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Medium (ViT-M/16) w/o class token, w/ avg-pool @ 240x240 """ model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, class_token=False, global_pool='avg', qkv_bias=False, init_values=1e-6, fc_norm=False) model = _create_vision_transformer( 'vit_medium_patch16_gap_240', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Medium (ViT-M/16) w/o class token, w/ avg-pool @ 256x256 """ model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, class_token=False, global_pool='avg', qkv_bias=False, init_values=1e-6, fc_norm=False) model = _create_vision_transformer( 'vit_medium_patch16_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_gap_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Medium (ViT-M/16) w/o class token, w/ avg-pool @ 384x384 """ model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, class_token=False, global_pool='avg', qkv_bias=False, init_values=1e-6, fc_norm=False) model = _create_vision_transformer( 'vit_medium_patch16_gap_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_gap_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16) w/o class token, w/ avg-pool @ 224x224 """ model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=16, class_token=False, global_pool='avg', fc_norm=False) model = _create_vision_transformer( 'vit_base_patch16_gap_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_gap_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) w/ no class token, avg pool """ model_args = dict( 
patch_size=14, embed_dim=1280, depth=32, num_heads=16, class_token=False, global_pool='avg', fc_norm=False) model = _create_vision_transformer( 'vit_huge_patch14_gap_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch16_gap_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/16) w/ no class token, avg pool @ 448x448 """ model_args = dict( patch_size=16, embed_dim=1280, depth=32, num_heads=16, class_token=False, global_pool='avg', fc_norm=False) model = _create_vision_transformer( 'vit_huge_patch16_gap_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch16_gap_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Giant (little-gg) model (ViT-g/16) w/ no class token, avg pool """ model_args = dict( patch_size=16, embed_dim=1408, depth=40, num_heads=16, mlp_ratio=48/11, class_token=False, global_pool='avg', fc_norm=False) model = _create_vision_transformer( 'vit_giant_patch16_gap_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_xsmall_patch16_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: # TinyCLIP 8M model_args = dict(embed_dim=256, depth=10, num_heads=4, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_xsmall_patch16_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch32_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: # TinyCLIP 40M model_args = dict( patch_size=32, embed_dim=512, depth=12, num_heads=8, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_medium_patch32_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: # TinyCLIP 39M model_args = dict(embed_dim=512, depth=12, num_heads=8, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_medium_patch16_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_betwixt_patch32_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: # TinyCLIP 61M model_args = dict( patch_size=32, embed_dim=640, depth=12, num_heads=10, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_betwixt_patch32_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 224x224 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch32_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 256x256 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch32_clip_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 384x384 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, 
num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch32_clip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 448x448 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch32_clip_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/16 CLIP image tower """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch16_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_clip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/16 CLIP image tower @ 384x384 """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch16_clip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) CLIP image tower """ model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_large_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_clip_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) CLIP image tower @ 336x336 """ model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_large_patch14_clip_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower. 
""" model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_huge_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower @ 336x336 """ model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_huge_patch14_clip_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_378(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower @ 378x378 """ model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_huge_patch14_clip_378', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch14_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Giant (little-g) model (ViT-g/14) from `Scaling Vision Transformers` - https://arxiv.org/abs/2106.04560 Pretrained weights from CLIP image tower. """ model_args = dict( patch_size=14, embed_dim=1408, mlp_ratio=48/11, depth=40, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_giant_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_gigantic_patch14_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-bigG model (ViT-G/14) from `Scaling Vision Transformers` - https://arxiv.org/abs/2106.04560 Pretrained weights from CLIP image tower. 
""" model_args = dict( patch_size=14, embed_dim=1664, mlp_ratio=64/13, depth=48, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_gigantic_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 224x224 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_base_patch32_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/16 CLIP image tower w/ QuickGELU act """ model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_base_patch16_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) CLIP image tower w/ QuickGELU act """ from timm.layers import get_act_layer model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_large_patch14_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_clip_quickgelu_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) CLIP image tower @ 336x336 w/ QuickGELU act """ model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_large_patch14_clip_quickgelu_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower w/ QuickGELU act. 
""" model_args = dict( patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_huge_patch14_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_quickgelu_378(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower @ 378x378 w/ QuickGELU act """ model_args = dict( patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_huge_patch14_clip_quickgelu_378', pretrained=pretrained, **dict(model_args, **kwargs)) return model # Experimental models below @register_model def vit_base_patch32_plus_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/32+) """ model_args = dict(patch_size=32, embed_dim=896, depth=12, num_heads=14, init_values=1e-5) model = _create_vision_transformer( 'vit_base_patch32_plus_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_plus_240(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16+) """ model_args = dict(patch_size=16, embed_dim=896, depth=12, num_heads=14, init_values=1e-5) model = _create_vision_transformer( 'vit_base_patch16_plus_240', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_rpn_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16) w/ residual post-norm """ model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, qkv_bias=False, init_values=1e-5, class_token=False, block_fn=ResPostBlock, global_pool='avg') model = _create_vision_transformer( 'vit_base_patch16_rpn_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch16_36x1_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base w/ LayerScale + 36 x 1 (36 block serial) config. Experimental, may remove. Based on `Three things everyone should know about Vision Transformers` - https://arxiv.org/abs/2203.09795 Paper focuses on 24x2 + 48x1 for 'Small' width but those are extremely slow. """ model_args = dict(patch_size=16, embed_dim=384, depth=36, num_heads=6, init_values=1e-5) model = _create_vision_transformer( 'vit_small_patch16_36x1_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch16_18x2_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small w/ LayerScale + 18 x 2 (36 block parallel) config. Experimental, may remove. Based on `Three things everyone should know about Vision Transformers` - https://arxiv.org/abs/2203.09795 Paper focuses on 24x2 + 48x1 for 'Small' width but those are extremely slow. """ model_args = dict( patch_size=16, embed_dim=384, depth=18, num_heads=6, init_values=1e-5, block_fn=ParallelThingsBlock) model = _create_vision_transformer( 'vit_small_patch16_18x2_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_18x2_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base w/ LayerScale + 18 x 2 (36 block parallel) config. Experimental, may remove. 
Based on `Three things everyone should know about Vision Transformers` - https://arxiv.org/abs/2203.09795 """ model_args = dict( patch_size=16, embed_dim=768, depth=18, num_heads=12, init_values=1e-5, block_fn=ParallelThingsBlock) model = _create_vision_transformer( 'vit_base_patch16_18x2_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def eva_large_patch14_196(pretrained: bool = False, **kwargs) -> VisionTransformer: """ EVA-large model https://arxiv.org/abs/2211.07636 /via MAE MIM pretrain""" model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, global_pool='avg') model = _create_vision_transformer( 'eva_large_patch14_196', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def eva_large_patch14_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ EVA-large model https://arxiv.org/abs/2211.07636 via MAE MIM pretrain""" model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, global_pool='avg') model = _create_vision_transformer('eva_large_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def flexivit_small(pretrained: bool = False, **kwargs) -> VisionTransformer: """ FlexiViT-Small """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6, no_embed_class=True) model = _create_vision_transformer('flexivit_small', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def flexivit_base(pretrained: bool = False, **kwargs) -> VisionTransformer: """ FlexiViT-Base """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, no_embed_class=True) model = _create_vision_transformer('flexivit_base', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def flexivit_large(pretrained: bool = False, **kwargs) -> VisionTransformer: """ FlexiViT-Large """ model_args = dict(patch_size=16, embed_dim=1024, depth=24, num_heads=16, no_embed_class=True) model = _create_vision_transformer('flexivit_large', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_xp_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) w/ parallel blocks and qk norm enabled. """ model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, pre_norm=True, no_embed_class=True, norm_layer=RmsNorm, block_fn=ParallelScalingBlock, qkv_bias=False, qk_norm=True, ) model = _create_vision_transformer( 'vit_base_patch16_xp_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_xp_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) w/ parallel blocks and qk norm enabled. """ model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, no_embed_class=True, norm_layer=RmsNorm, block_fn=ParallelScalingBlock, qkv_bias=False, qk_norm=True, ) model = _create_vision_transformer( 'vit_large_patch14_xp_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_xp_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) w/ parallel blocks and qk norm enabled. 
""" model_args = dict( patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, no_embed_class=True, norm_layer=RmsNorm, block_fn=ParallelScalingBlock, qkv_bias=False, qk_norm=True, ) model = _create_vision_transformer( 'vit_huge_patch14_xp_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch14_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-S/14 for DINOv2 """ model_args = dict(patch_size=14, embed_dim=384, depth=12, num_heads=6, init_values=1e-5, img_size=518) model = _create_vision_transformer( 'vit_small_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch14_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/14 for DINOv2 """ model_args = dict(patch_size=14, embed_dim=768, depth=12, num_heads=12, init_values=1e-5, img_size=518) model = _create_vision_transformer( 'vit_base_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-L/14 for DINOv2 """ model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, init_values=1e-5, img_size=518) model = _create_vision_transformer( 'vit_large_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch14_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-G/14 for DINOv2 """ # The hidden_features of SwiGLU is calculated by: # hidden_features = (int(hidden_features * 2 / 3) + 7) // 8 * 8 # When embed_dim=1536, hidden_features=4096 # With SwiGLUPacked, we need to set hidden_features = 2 * 4096 = 8192 model_args = dict( patch_size=14, embed_dim=1536, depth=40, num_heads=24, init_values=1e-5, mlp_ratio=2.66667 * 2, mlp_layer=SwiGLUPacked, img_size=518, act_layer=nn.SiLU ) model = _create_vision_transformer( 'vit_giant_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch14_reg4_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-S/14 for DINOv2 w/ 4 registers """ model_args = dict( patch_size=14, embed_dim=384, depth=12, num_heads=6, init_values=1e-5, reg_tokens=4, no_embed_class=True, ) model = _create_vision_transformer( 'vit_small_patch14_reg4_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch14_reg4_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/14 for DINOv2 w/ 4 registers """ model_args = dict( patch_size=14, embed_dim=768, depth=12, num_heads=12, init_values=1e-5, reg_tokens=4, no_embed_class=True, ) model = _create_vision_transformer( 'vit_base_patch14_reg4_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_reg4_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-L/14 for DINOv2 w/ 4 registers """ model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=16, init_values=1e-5, reg_tokens=4, no_embed_class=True, ) model = _create_vision_transformer( 'vit_large_patch14_reg4_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch14_reg4_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-G/14 for DINOv2 """ # The hidden_features of SwiGLU is calculated by: # hidden_features = (int(hidden_features * 2 / 
3) + 7) // 8 * 8 # When embed_dim=1536, hidden_features=4096 # With SwiGLUPacked, we need to set hidden_features = 2 * 4096 = 8192 model_args = dict( patch_size=14, embed_dim=1536, depth=40, num_heads=24, init_values=1e-5, mlp_ratio=2.66667 * 2, mlp_layer=SwiGLUPacked, act_layer=nn.SiLU, reg_tokens=4, no_embed_class=True, ) model = _create_vision_transformer( 'vit_giant_patch14_reg4_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_base_patch16_siglip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_base_patch16_siglip_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_base_patch16_siglip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_512(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_base_patch16_siglip_512', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_siglip_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=1024, depth=24, num_heads=16, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_large_patch16_siglip_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_siglip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=1024, depth=24, num_heads=16, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_large_patch16_siglip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_reg4_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, class_token=True, no_embed_class=True, 
reg_tokens=4, ) model = _create_vision_transformer( 'vit_medium_patch16_reg4_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, class_token=False, no_embed_class=True, reg_tokens=4, global_pool='avg', ) model = _create_vision_transformer( 'vit_medium_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, no_embed_class=True, global_pool='avg', reg_tokens=4, ) model = _create_vision_transformer( 'vit_base_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so150m_patch16_reg4_map_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=896, depth=18, num_heads=14, mlp_ratio=2.572, class_token=False, reg_tokens=4, global_pool='map', ) model = _create_vision_transformer( 'vit_so150m_patch16_reg4_map_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so150m_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=896, depth=18, num_heads=14, mlp_ratio=2.572, class_token=False, reg_tokens=4, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so150m_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model register_model_deprecations(__name__, { 'vit_tiny_patch16_224_in21k': 'vit_tiny_patch16_224.augreg_in21k', 'vit_small_patch32_224_in21k': 'vit_small_patch32_224.augreg_in21k', 'vit_small_patch16_224_in21k': 'vit_small_patch16_224.augreg_in21k', 'vit_base_patch32_224_in21k': 'vit_base_patch32_224.augreg_in21k', 'vit_base_patch16_224_in21k': 'vit_base_patch16_224.augreg_in21k', 'vit_base_patch8_224_in21k': 'vit_base_patch8_224.augreg_in21k', 'vit_large_patch32_224_in21k': 'vit_large_patch32_224.orig_in21k', 'vit_large_patch16_224_in21k': 'vit_large_patch16_224.augreg_in21k', 'vit_huge_patch14_224_in21k': 'vit_huge_patch14_224.orig_in21k', 'vit_base_patch32_224_sam': 'vit_base_patch32_224.sam', 'vit_base_patch16_224_sam': 'vit_base_patch16_224.sam', 'vit_small_patch16_224_dino': 'vit_small_patch16_224.dino', 'vit_small_patch8_224_dino': 'vit_small_patch8_224.dino', 'vit_base_patch16_224_dino': 'vit_base_patch16_224.dino', 'vit_base_patch8_224_dino': 'vit_base_patch8_224.dino', 'vit_base_patch16_224_miil_in21k': 'vit_base_patch16_224_miil.in21k', 'vit_base_patch32_224_clip_laion2b': 'vit_base_patch32_clip_224.laion2b', 'vit_large_patch14_224_clip_laion2b': 'vit_large_patch14_clip_224.laion2b', 'vit_huge_patch14_224_clip_laion2b': 'vit_huge_patch14_clip_224.laion2b', 'vit_giant_patch14_224_clip_laion2b': 'vit_giant_patch14_clip_224.laion2b', })
pytorch-image-models/timm/models/vision_transformer.py/0
{ "file_path": "pytorch-image-models/timm/models/vision_transformer.py", "repo_id": "pytorch-image-models", "token_count": 59568 }
199
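The vision_transformer.py module above consists mostly of pretrained-weight configs plus `@register_model` factory functions. A minimal sketch of how one of these registered variants is typically instantiated through timm's factory API; the model name, `num_classes`, and input size below are illustrative choices taken from the configs above, and assume a timm release that includes this file:

# Minimal usage sketch for the registered ViT variants above (assumes timm is installed).
import timm
import torch

# Resolve a registered variant by name; pretrained=False skips any weight download.
model = timm.create_model('vit_base_patch16_224', pretrained=False, num_classes=10)
model.eval()

# The default cfg for this variant expects 3x224x224 inputs.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 10])

Variant-plus-tag names such as 'vit_base_patch16_224.augreg_in21k' select a specific pretrained cfg from the tables above, and `timm.list_models('vit_*')` enumerates everything registered by this module.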
""" PyTorch Lamb optimizer w/ behaviour similar to NVIDIA FusedLamb This optimizer code was adapted from the following (starting with latest) * https://github.com/HabanaAI/Model-References/blob/2b435114fe8e31f159b1d3063b8280ae37af7423/PyTorch/nlp/bert/pretraining/lamb.py * https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/Transformer-XL/pytorch/lamb.py * https://github.com/cybertronai/pytorch-lamb Use FusedLamb if you can (GPU). The reason for including this variant of Lamb is to have a version that is similar in behaviour to APEX FusedLamb if you aren't using NVIDIA GPUs or cannot install/use APEX. In addition to some cleanup, this Lamb impl has been modified to support PyTorch XLA and has been tested on TPU. Original copyrights for above sources are below. Modifications Copyright 2021 Ross Wightman """ # Copyright (c) 2021, Habana Labs Ltd. All rights reserved. # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # MIT License # # Copyright (c) 2019 cybertronai # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import math import torch from torch.optim import Optimizer class Lamb(Optimizer): """Implements a pure pytorch variant of FuseLAMB (NvLamb variant) optimizer from apex.optimizers.FusedLAMB reference: https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/Transformer-XL/pytorch/lamb.py LAMB was proposed in `Large Batch Optimization for Deep Learning: Training BERT in 76 minutes`_. Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups. lr (float, optional): learning rate. (default: 1e-3) betas (Tuple[float, float], optional): coefficients used for computing running averages of gradient and its norm. (default: (0.9, 0.999)) eps (float, optional): term added to the denominator to improve numerical stability. 
(default: 1e-8) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) grad_averaging (bool, optional): whether apply (1-beta2) to grad when calculating running averages of gradient. (default: True) max_grad_norm (float, optional): value used to clip global grad norm (default: 1.0) trust_clip (bool): enable LAMBC trust ratio clipping (default: False) always_adapt (boolean, optional): Apply adaptive learning rate to 0.0 weight decay parameter (default: False) .. _Large Batch Optimization for Deep Learning - Training BERT in 76 minutes: https://arxiv.org/abs/1904.00962 .. _On the Convergence of Adam and Beyond: https://openreview.net/forum?id=ryQu7f-RZ """ def __init__( self, params, lr=1e-3, bias_correction=True, betas=(0.9, 0.999), eps=1e-6, weight_decay=0.01, grad_averaging=True, max_grad_norm=1.0, trust_clip=False, always_adapt=False): defaults = dict( lr=lr, bias_correction=bias_correction, betas=betas, eps=eps, weight_decay=weight_decay, grad_averaging=grad_averaging, max_grad_norm=max_grad_norm, trust_clip=trust_clip, always_adapt=always_adapt) super().__init__(params, defaults) @torch.no_grad() def step(self, closure=None): """Performs a single optimization step. Arguments: closure (callable, optional): A closure that reevaluates the model and returns the loss. """ loss = None if closure is not None: with torch.enable_grad(): loss = closure() device = self.param_groups[0]['params'][0].device one_tensor = torch.tensor(1.0, device=device) # because torch.where doesn't handle scalars correctly global_grad_norm = torch.zeros(1, device=device) for group in self.param_groups: for p in group['params']: if p.grad is None: continue grad = p.grad if grad.is_sparse: raise RuntimeError('Lamb does not support sparse gradients, consider SparseAdam instad.') global_grad_norm.add_(grad.pow(2).sum()) global_grad_norm = torch.sqrt(global_grad_norm) # FIXME it'd be nice to remove explicit tensor conversion of scalars when torch.where promotes # scalar types properly https://github.com/pytorch/pytorch/issues/9190 max_grad_norm = torch.tensor(self.defaults['max_grad_norm'], device=device) clip_global_grad_norm = torch.where( global_grad_norm > max_grad_norm, global_grad_norm / max_grad_norm, one_tensor) for group in self.param_groups: bias_correction = 1 if group['bias_correction'] else 0 beta1, beta2 = group['betas'] grad_averaging = 1 if group['grad_averaging'] else 0 beta3 = 1 - beta1 if grad_averaging else 1.0 # assume same step across group now to simplify things # per parameter step can be easily support by making it tensor, or pass list into kernel if 'step' in group: group['step'] += 1 else: group['step'] = 1 if bias_correction: bias_correction1 = 1 - beta1 ** group['step'] bias_correction2 = 1 - beta2 ** group['step'] else: bias_correction1, bias_correction2 = 1.0, 1.0 for p in group['params']: if p.grad is None: continue grad = p.grad.div_(clip_global_grad_norm) state = self.state[p] # State initialization if len(state) == 0: # Exponential moving average of gradient valuesa state['exp_avg'] = torch.zeros_like(p) # Exponential moving average of squared gradient values state['exp_avg_sq'] = torch.zeros_like(p) exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] # Decay the first and second moment running average coefficient exp_avg.mul_(beta1).add_(grad, alpha=beta3) # m_t exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2) # v_t denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps']) update = (exp_avg / bias_correction1).div_(denom) 
weight_decay = group['weight_decay'] if weight_decay != 0: update.add_(p, alpha=weight_decay) if weight_decay != 0 or group['always_adapt']: # Layer-wise LR adaptation. By default, skip adaptation on parameters that are # excluded from weight decay, unless always_adapt == True, then always enabled. w_norm = p.norm(2.0) g_norm = update.norm(2.0) # FIXME nested where required since logical and/or not working in PT XLA trust_ratio = torch.where( w_norm > 0, torch.where(g_norm > 0, w_norm / g_norm, one_tensor), one_tensor, ) if group['trust_clip']: # LAMBC trust clipping, upper bound fixed at one trust_ratio = torch.minimum(trust_ratio, one_tensor) update.mul_(trust_ratio) p.add_(update, alpha=-group['lr']) return loss
pytorch-image-models/timm/optim/lamb.py/0
{ "file_path": "pytorch-image-models/timm/optim/lamb.py", "repo_id": "pytorch-image-models", "token_count": 3768 }
200
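Because `Lamb` above subclasses `torch.optim.Optimizer`, it drops into an ordinary training step. A minimal sketch, assuming the import path mirrors the file location shown (timm/optim/lamb.py); the toy model and hyperparameters are illustrative only:

import torch
import torch.nn as nn
from timm.optim.lamb import Lamb  # path as in the file above

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = Lamb(model.parameters(), lr=1e-3, weight_decay=0.01, trust_clip=True)

x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)

optimizer.zero_grad()
loss.backward()
optimizer.step()  # global grad-norm clipping and the layer-wise trust ratio are applied internally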
""" Plateau Scheduler Adapts PyTorch plateau scheduler and allows application of noise, warmup. Hacked together by / Copyright 2020 Ross Wightman """ import torch from .scheduler import Scheduler class PlateauLRScheduler(Scheduler): """Decay the LR by a factor every time the validation loss plateaus.""" def __init__( self, optimizer, decay_rate=0.1, patience_t=10, verbose=True, threshold=1e-4, cooldown_t=0, warmup_t=0, warmup_lr_init=0, lr_min=0, mode='max', noise_range_t=None, noise_type='normal', noise_pct=0.67, noise_std=1.0, noise_seed=None, initialize=True, ): super().__init__( optimizer, 'lr', noise_range_t=noise_range_t, noise_type=noise_type, noise_pct=noise_pct, noise_std=noise_std, noise_seed=noise_seed, initialize=initialize, ) self.lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( self.optimizer, patience=patience_t, factor=decay_rate, verbose=verbose, threshold=threshold, cooldown=cooldown_t, mode=mode, min_lr=lr_min ) self.warmup_t = warmup_t self.warmup_lr_init = warmup_lr_init if self.warmup_t: self.warmup_steps = [(v - warmup_lr_init) / self.warmup_t for v in self.base_values] super().update_groups(self.warmup_lr_init) else: self.warmup_steps = [1 for _ in self.base_values] self.restore_lr = None def state_dict(self): return { 'best': self.lr_scheduler.best, 'last_epoch': self.lr_scheduler.last_epoch, } def load_state_dict(self, state_dict): self.lr_scheduler.best = state_dict['best'] if 'last_epoch' in state_dict: self.lr_scheduler.last_epoch = state_dict['last_epoch'] # override the base class step fn completely def step(self, epoch, metric=None): if epoch <= self.warmup_t: lrs = [self.warmup_lr_init + epoch * s for s in self.warmup_steps] super().update_groups(lrs) else: if self.restore_lr is not None: # restore actual LR from before our last noise perturbation before stepping base for i, param_group in enumerate(self.optimizer.param_groups): param_group['lr'] = self.restore_lr[i] self.restore_lr = None self.lr_scheduler.step(metric, epoch) # step the base scheduler if self._is_apply_noise(epoch): self._apply_noise(epoch) def step_update(self, num_updates: int, metric: float = None): return None def _apply_noise(self, epoch): noise = self._calculate_noise(epoch) # apply the noise on top of previous LR, cache the old value so we can restore for normal # stepping of base scheduler restore_lr = [] for i, param_group in enumerate(self.optimizer.param_groups): old_lr = float(param_group['lr']) restore_lr.append(old_lr) new_lr = old_lr + old_lr * noise param_group['lr'] = new_lr self.restore_lr = restore_lr def _get_lr(self, t: int) -> float: assert False, 'should not be called as step is overridden'
pytorch-image-models/timm/scheduler/plateau_lr.py/0
{ "file_path": "pytorch-image-models/timm/scheduler/plateau_lr.py", "repo_id": "pytorch-image-models", "token_count": 1800 }
201
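Note that `PlateauLRScheduler.step` is meant to be called once per epoch with the monitored metric, while `step_update` is intentionally a no-op. A minimal usage sketch based on the constructor shown above; since `mode='max'` is the default, `mode='min'` is passed here for a loss-style metric, and the training-loop pieces are illustrative:

import torch
from timm.scheduler.plateau_lr import PlateauLRScheduler  # path as in the file above

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

scheduler = PlateauLRScheduler(
    optimizer,
    decay_rate=0.1,      # LR multiplied by this factor on plateau
    patience_t=5,        # epochs with no improvement before decaying
    warmup_t=3,          # linear warmup epochs handled by this wrapper
    warmup_lr_init=1e-5,
    mode='min',          # monitoring a loss rather than an accuracy
)

for epoch in range(20):
    val_loss = 1.0 / (epoch + 1)  # stand-in for a real validation metric
    scheduler.step(epoch, metric=val_loss)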
""" Misc utils Hacked together by / Copyright 2020 Ross Wightman """ import argparse import ast import re def natural_key(string_): """See http://www.codinghorror.com/blog/archives/001018.html""" return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())] def add_bool_arg(parser, name, default=False, help=''): dest_name = name.replace('-', '_') group = parser.add_mutually_exclusive_group(required=False) group.add_argument('--' + name, dest=dest_name, action='store_true', help=help) group.add_argument('--no-' + name, dest=dest_name, action='store_false', help=help) parser.set_defaults(**{dest_name: default}) class ParseKwargs(argparse.Action): def __call__(self, parser, namespace, values, option_string=None): kw = {} for value in values: key, value = value.split('=') try: kw[key] = ast.literal_eval(value) except ValueError: kw[key] = str(value) # fallback to string (avoid need to escape on command line) setattr(namespace, self.dest, kw)
pytorch-image-models/timm/utils/misc.py/0
{ "file_path": "pytorch-image-models/timm/utils/misc.py", "repo_id": "pytorch-image-models", "token_count": 451 }
202
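The three helpers above are small command-line utilities; a self-contained sketch of how they are typically wired into an argparse parser (the option names below are invented for illustration):

import argparse
from timm.utils.misc import natural_key, add_bool_arg, ParseKwargs

# Natural sort: "img2" sorts before "img10"
print(sorted(["img10.png", "img2.png", "img1.png"], key=natural_key))
# -> ['img1.png', 'img2.png', 'img10.png']

parser = argparse.ArgumentParser()
# Registers both --pin-mem and --no-pin-mem, defaulting to True
add_bool_arg(parser, "pin-mem", default=True, help="pin CPU memory in the data loader")
# Collects key=value pairs into a dict, literal-evaluating the values when possible
parser.add_argument("--opt-kwargs", nargs="*", default={}, action=ParseKwargs)

args = parser.parse_args(["--no-pin-mem", "--opt-kwargs", "eps=1e-8", "nesterov=True"])
print(args.pin_mem)     # False
print(args.opt_kwargs)  # {'eps': 1e-08, 'nesterov': True}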
/// Inspired by https://github.com/orhun/rust-tui-template/blob/472aa515119d4c94903eac12d9784417281dc7f5/src/event.rs use crossterm::event; use std::time::{Duration, Instant}; use tokio::sync::{broadcast, mpsc}; /// Events #[derive(Debug)] pub(crate) enum Event { /// Terminal tick. Tick, /// Key press. Key(event::KeyEvent), /// Terminal resize. Resize(u16, u16), } pub(crate) async fn terminal_event_task( fps: u32, event_sender: mpsc::Sender<Event>, mut shutdown_receiver: broadcast::Receiver<()>, _shutdown_guard_sender: mpsc::Sender<()>, ) { // End task if a message is received on shutdown_receiver // _shutdown_guard_sender will be dropped once the task is finished tokio::select! { _ = event_loop(fps, event_sender) => { }, _ = shutdown_receiver.recv() => {} } } /// Main event loop async fn event_loop(fps: u32, event_sender: mpsc::Sender<Event>) { // Frame budget let per_frame = Duration::from_secs(1) / fps; // When was last frame executed let mut last_frame = Instant::now(); loop { // Sleep to avoid blocking the thread for too long if let Some(sleep) = per_frame.checked_sub(last_frame.elapsed()) { tokio::time::sleep(sleep).await; } // Get crossterm event and send a new one over the channel if event::poll(Duration::from_secs(0)).expect("no events available") { match event::read().expect("unable to read event") { event::Event::Key(e) => event_sender.send(Event::Key(e)).await.unwrap_or(()), event::Event::Resize(w, h) => { event_sender.send(Event::Resize(w, h)).await.unwrap_or(()) } _ => (), } } // Frame budget exceeded if last_frame.elapsed() >= per_frame { // Send tick event_sender.send(Event::Tick).await.unwrap_or(()); // Reset last_frame time last_frame = Instant::now(); } } }
text-generation-inference/benchmark/src/event.rs/0
{ "file_path": "text-generation-inference/benchmark/src/event.rs", "repo_id": "text-generation-inference", "token_count": 922 }
203
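The Rust task above is a frame-budget loop: sleep for whatever is left of the frame, poll input without blocking, and emit a Tick once the per-frame budget has elapsed. A rough plain-Python analogue of that control flow, for illustration only (the poll/handler callables are stand-ins, not part of the benchmark code):

import time

def event_loop(fps, poll_input, handle_input, handle_tick):
    per_frame = 1.0 / fps
    last_frame = time.monotonic()
    while True:
        # Sleep only for the remainder of the frame budget
        remaining = per_frame - (time.monotonic() - last_frame)
        if remaining > 0:
            time.sleep(remaining)
        # Non-blocking input poll (stand-in for crossterm::event::poll with a zero timeout)
        event = poll_input()
        if event is not None:
            handle_input(event)
        # Frame budget exceeded: send a tick and reset the timer
        if time.monotonic() - last_frame >= per_frame:
            handle_tick()
            last_frame = time.monotonic()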
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.5390625, "text": " dég" }, { "id": 21543, "logprob": -0.14758301, "text": "uster" }, { "id": 447, "logprob": -1.9296875, "text": " un" }, { "id": 46341, "logprob": -15.4453125, "text": " ort" }, { "id": 35567, "logprob": -7.59375, "text": "olan" }, { "id": 15, "logprob": -1.3994141, "text": "," }, { "id": 1669, "logprob": -1.578125, "text": " il" }, { "id": 11580, "logprob": -0.9453125, "text": " faut" }, { "id": 3913, "logprob": -3.7011719, "text": " tout" }, { "id": 39261, "logprob": -1.5732422, "text": " d'abord" } ], "seed": null, "tokens": [ { "id": 578, "logprob": -1.7529297, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.6054688, "special": false, "text": " faire" }, { "id": 1767, "logprob": -1.5283203, "special": false, "text": " cu" }, { "id": 1273, "logprob": -0.00010049343, "special": false, "text": "ire" }, { "id": 1486, "logprob": -1.4716797, "special": false, "text": " dans" }, { "id": 283, "logprob": -1.1982422, "special": false, "text": " de" }, { "id": 40410, "logprob": -0.11853027, "special": false, "text": " l'eau" }, { "id": 20226, "logprob": -0.41210938, "special": false, "text": " bou" }, { "id": 172483, "logprob": -0.0037765503, "special": false, "text": "illante" }, { "id": 2805, "logprob": -1.0166016, "special": false, "text": " sal" } ] }, "generated_text": " le faire cuire dans de l'eau bouillante sal" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.515625, "text": " dég" }, { "id": 21543, "logprob": -0.1484375, "text": "uster" }, { "id": 447, "logprob": -1.9287109, "text": " un" }, { "id": 46341, "logprob": -15.34375, "text": " ort" }, { "id": 35567, "logprob": -7.515625, "text": "olan" }, { "id": 15, "logprob": -1.4199219, "text": "," }, { "id": 1669, "logprob": -1.5664062, "text": " il" }, { "id": 11580, "logprob": -0.94091797, "text": " faut" }, { "id": 3913, "logprob": -3.6660156, "text": " tout" }, { "id": 39261, "logprob": -1.7753906, "text": " d'abord" } ], "seed": null, "tokens": [ { "id": 578, "logprob": -1.7626953, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.5820312, "special": false, "text": " faire" }, { "id": 1767, "logprob": -1.5097656, "special": false, "text": " cu" }, { "id": 1273, "logprob": -9.393692e-05, "special": false, "text": "ire" }, { "id": 1486, "logprob": -1.5175781, "special": false, "text": " dans" }, { "id": 283, "logprob": -1.1982422, "special": false, "text": " de" }, { "id": 40410, "logprob": -0.11883545, "special": false, "text": " l'eau" }, { "id": 20226, "logprob": -0.4909668, "special": false, "text": " bou" }, { "id": 172483, "logprob": -0.003047943, "special": false, "text": "illante" }, { "id": 2805, "logprob": -1.0185547, "special": false, "text": " sal" } ] }, "generated_text": " le faire cuire dans de l'eau bouillante sal" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.515625, "text": " dég" }, { "id": 21543, "logprob": -0.1484375, "text": "uster" }, { "id": 447, "logprob": -1.9287109, "text": " un" }, { "id": 46341, "logprob": -15.34375, "text": " ort" }, { "id": 35567, "logprob": -7.515625, "text": "olan" }, { "id": 15, 
"logprob": -1.4199219, "text": "," }, { "id": 1669, "logprob": -1.5664062, "text": " il" }, { "id": 11580, "logprob": -0.94091797, "text": " faut" }, { "id": 3913, "logprob": -3.6660156, "text": " tout" }, { "id": 39261, "logprob": -1.7753906, "text": " d'abord" } ], "seed": null, "tokens": [ { "id": 578, "logprob": -1.7626953, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.5820312, "special": false, "text": " faire" }, { "id": 1767, "logprob": -1.5097656, "special": false, "text": " cu" }, { "id": 1273, "logprob": -9.393692e-05, "special": false, "text": "ire" }, { "id": 1486, "logprob": -1.5175781, "special": false, "text": " dans" }, { "id": 283, "logprob": -1.1982422, "special": false, "text": " de" }, { "id": 40410, "logprob": -0.11883545, "special": false, "text": " l'eau" }, { "id": 20226, "logprob": -0.4909668, "special": false, "text": " bou" }, { "id": 172483, "logprob": -0.003047943, "special": false, "text": "illante" }, { "id": 2805, "logprob": -1.0185547, "special": false, "text": " sal" } ] }, "generated_text": " le faire cuire dans de l'eau bouillante sal" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.515625, "text": " dég" }, { "id": 21543, "logprob": -0.1484375, "text": "uster" }, { "id": 447, "logprob": -1.9287109, "text": " un" }, { "id": 46341, "logprob": -15.34375, "text": " ort" }, { "id": 35567, "logprob": -7.515625, "text": "olan" }, { "id": 15, "logprob": -1.4199219, "text": "," }, { "id": 1669, "logprob": -1.5664062, "text": " il" }, { "id": 11580, "logprob": -0.94091797, "text": " faut" }, { "id": 3913, "logprob": -3.6660156, "text": " tout" }, { "id": 39261, "logprob": -1.7753906, "text": " d'abord" } ], "seed": null, "tokens": [ { "id": 578, "logprob": -1.7626953, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.5820312, "special": false, "text": " faire" }, { "id": 1767, "logprob": -1.5097656, "special": false, "text": " cu" }, { "id": 1273, "logprob": -9.393692e-05, "special": false, "text": "ire" }, { "id": 1486, "logprob": -1.5175781, "special": false, "text": " dans" }, { "id": 283, "logprob": -1.1982422, "special": false, "text": " de" }, { "id": 40410, "logprob": -0.11883545, "special": false, "text": " l'eau" }, { "id": 20226, "logprob": -0.4909668, "special": false, "text": " bou" }, { "id": 172483, "logprob": -0.003047943, "special": false, "text": "illante" }, { "id": 2805, "logprob": -1.0185547, "special": false, "text": " sal" } ] }, "generated_text": " le faire cuire dans de l'eau bouillante sal" } ]
text-generation-inference/integration-tests/models/__snapshots__/test_bloom_560m_sharded/test_bloom_560m_sharded_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_bloom_560m_sharded/test_bloom_560m_sharded_load.json", "repo_id": "text-generation-inference", "token_count": 7258 }
204
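These snapshot files are plain lists of generation responses. A short sketch of checking one outside the pytest harness, using the file path from the record above (the assertions mirror what the sharded-load test verifies):

import json

path = ("text-generation-inference/integration-tests/models/__snapshots__/"
        "test_bloom_560m_sharded/test_bloom_560m_sharded_load.json")
with open(path) as f:
    responses = json.load(f)

# Every response in the load test generated exactly 10 tokens...
assert all(r["details"]["generated_tokens"] == 10 for r in responses)
# ...and all of them produced the same continuation
texts = {r["generated_text"] for r in responses}
assert texts == {" le faire cuire dans de l'eau bouillante sal"}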
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": null, "tokens": [ { "id": 29896, "logprob": -0.7685547, "special": false, "text": "1" }, { "id": 29906, "logprob": -0.33666992, "special": false, "text": "2" }, { "id": 29941, "logprob": -0.009979248, "special": false, "text": "3" }, { "id": 29946, "logprob": -0.64208984, "special": false, "text": "4" }, { "id": 29945, "logprob": -0.4970703, "special": false, "text": "5" }, { "id": 29953, "logprob": -0.46533203, "special": false, "text": "6" }, { "id": 29992, "logprob": -0.5336914, "special": false, "text": "@" }, { "id": 21980, "logprob": -0.53759766, "special": false, "text": "gmail" }, { "id": 29889, "logprob": -0.0008878708, "special": false, "text": "." }, { "id": 510, "logprob": -0.002275467, "special": false, "text": "com" } ], "top_tokens": null }, "generated_text": "[email protected]" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_grammar_llama/test_flash_llama_grammar_single_load_instance.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_grammar_llama/test_flash_llama_grammar_single_load_instance.json", "repo_id": "text-generation-inference", "token_count": 866 }
205
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|prompter|>" }, { "id": 1276, "logprob": -8.03125, "text": "What" }, { "id": 310, "logprob": -5.421875, "text": " is" }, { "id": 247, "logprob": -2.1601562, "text": " a" }, { "id": 1167, "logprob": -5.4609375, "text": " mem" }, { "id": 70, "logprob": -0.005657196, "text": "e" }, { "id": 13, "logprob": -7.28125, "text": "," }, { "id": 285, "logprob": -0.2980957, "text": " and" }, { "id": 752, "logprob": -2.1679688, "text": " what" }, { "id": 434, "logprob": -5.6210938, "text": "'s" }, { "id": 253, "logprob": -0.81103516, "text": " the" }, { "id": 2892, "logprob": -6.6640625, "text": " history" }, { "id": 3212, "logprob": -2.265625, "text": " behind" }, { "id": 436, "logprob": -11.5078125, "text": " this" }, { "id": 3159, "logprob": -2.1582031, "text": " word" }, { "id": 32, "logprob": -0.008720398, "text": "?" }, { "id": 0, "logprob": -2.4726562, "text": "<|endoftext|>" }, { "id": 50281, "logprob": -18.265625, "text": "<|assistant|>" } ], "seed": null, "tokens": [ { "id": 510, "logprob": -0.63183594, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.5488281, "special": false, "text": " word" }, { "id": 346, "logprob": -0.045684814, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.00207901, "special": false, "text": "mem" }, { "id": 70, "logprob": -1.335144e-05, "special": false, "text": "e" }, { "id": 3, "logprob": -0.00097227097, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.0892334, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12463379, "special": false, "text": " first" }, { "id": 908, "logprob": -0.01737976, "special": false, "text": " used" }, { "id": 275, "logprob": -0.50341797, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|prompter|>" }, { "id": 1276, "logprob": -8.03125, "text": "What" }, { "id": 310, "logprob": -5.421875, "text": " is" }, { "id": 247, "logprob": -2.1601562, "text": " a" }, { "id": 1167, "logprob": -5.4609375, "text": " mem" }, { "id": 70, "logprob": -0.005657196, "text": "e" }, { "id": 13, "logprob": -7.28125, "text": "," }, { "id": 285, "logprob": -0.2980957, "text": " and" }, { "id": 752, "logprob": -2.1679688, "text": " what" }, { "id": 434, "logprob": -5.6210938, "text": "'s" }, { "id": 253, "logprob": -0.81103516, "text": " the" }, { "id": 2892, "logprob": -6.6640625, "text": " history" }, { "id": 3212, "logprob": -2.265625, "text": " behind" }, { "id": 436, "logprob": -11.5078125, "text": " this" }, { "id": 3159, "logprob": -2.1582031, "text": " word" }, { "id": 32, "logprob": -0.008720398, "text": "?" 
}, { "id": 0, "logprob": -2.4726562, "text": "<|endoftext|>" }, { "id": 50281, "logprob": -18.265625, "text": "<|assistant|>" } ], "seed": null, "tokens": [ { "id": 510, "logprob": -0.63183594, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.5488281, "special": false, "text": " word" }, { "id": 346, "logprob": -0.045684814, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.00207901, "special": false, "text": "mem" }, { "id": 70, "logprob": -1.335144e-05, "special": false, "text": "e" }, { "id": 3, "logprob": -0.00097227097, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.0892334, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12463379, "special": false, "text": " first" }, { "id": 908, "logprob": -0.01737976, "special": false, "text": " used" }, { "id": 275, "logprob": -0.50341797, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|prompter|>" }, { "id": 1276, "logprob": -8.03125, "text": "What" }, { "id": 310, "logprob": -5.421875, "text": " is" }, { "id": 247, "logprob": -2.1601562, "text": " a" }, { "id": 1167, "logprob": -5.4609375, "text": " mem" }, { "id": 70, "logprob": -0.005657196, "text": "e" }, { "id": 13, "logprob": -7.28125, "text": "," }, { "id": 285, "logprob": -0.2980957, "text": " and" }, { "id": 752, "logprob": -2.1679688, "text": " what" }, { "id": 434, "logprob": -5.6210938, "text": "'s" }, { "id": 253, "logprob": -0.81103516, "text": " the" }, { "id": 2892, "logprob": -6.6640625, "text": " history" }, { "id": 3212, "logprob": -2.265625, "text": " behind" }, { "id": 436, "logprob": -11.5078125, "text": " this" }, { "id": 3159, "logprob": -2.1582031, "text": " word" }, { "id": 32, "logprob": -0.008720398, "text": "?" 
}, { "id": 0, "logprob": -2.4726562, "text": "<|endoftext|>" }, { "id": 50281, "logprob": -18.265625, "text": "<|assistant|>" } ], "seed": null, "tokens": [ { "id": 510, "logprob": -0.63183594, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.5488281, "special": false, "text": " word" }, { "id": 346, "logprob": -0.045684814, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.00207901, "special": false, "text": "mem" }, { "id": 70, "logprob": -1.335144e-05, "special": false, "text": "e" }, { "id": 3, "logprob": -0.00097227097, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.0892334, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12463379, "special": false, "text": " first" }, { "id": 908, "logprob": -0.01737976, "special": false, "text": " used" }, { "id": 275, "logprob": -0.50341797, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|prompter|>" }, { "id": 1276, "logprob": -8.03125, "text": "What" }, { "id": 310, "logprob": -5.421875, "text": " is" }, { "id": 247, "logprob": -2.1601562, "text": " a" }, { "id": 1167, "logprob": -5.4609375, "text": " mem" }, { "id": 70, "logprob": -0.005657196, "text": "e" }, { "id": 13, "logprob": -7.28125, "text": "," }, { "id": 285, "logprob": -0.2980957, "text": " and" }, { "id": 752, "logprob": -2.1679688, "text": " what" }, { "id": 434, "logprob": -5.6210938, "text": "'s" }, { "id": 253, "logprob": -0.81103516, "text": " the" }, { "id": 2892, "logprob": -6.6640625, "text": " history" }, { "id": 3212, "logprob": -2.265625, "text": " behind" }, { "id": 436, "logprob": -11.5078125, "text": " this" }, { "id": 3159, "logprob": -2.1582031, "text": " word" }, { "id": 32, "logprob": -0.008720398, "text": "?" }, { "id": 0, "logprob": -2.4726562, "text": "<|endoftext|>" }, { "id": 50281, "logprob": -18.265625, "text": "<|assistant|>" } ], "seed": null, "tokens": [ { "id": 510, "logprob": -0.63183594, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.5488281, "special": false, "text": " word" }, { "id": 346, "logprob": -0.045684814, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.00207901, "special": false, "text": "mem" }, { "id": 70, "logprob": -1.335144e-05, "special": false, "text": "e" }, { "id": 3, "logprob": -0.00097227097, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.0892334, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12463379, "special": false, "text": " first" }, { "id": 908, "logprob": -0.01737976, "special": false, "text": " used" }, { "id": 275, "logprob": -0.50341797, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" } ]
text-generation-inference/integration-tests/models/__snapshots__/test_flash_neox_sharded/test_flash_neox_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_neox_sharded/test_flash_neox_load.json", "repo_id": "text-generation-inference", "token_count": 9176 }
206
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 20, "prefill": [ { "id": 589, "logprob": null, "text": "def" }, { "id": 3226, "logprob": -8.5859375, "text": " ge" }, { "id": 21017, "logprob": -7.5898438, "text": "ometric" }, { "id": 81, "logprob": -0.26586914, "text": "_" }, { "id": 6009, "logprob": -1.6347656, "text": "mean" }, { "id": 26, "logprob": -0.22705078, "text": "(" }, { "id": 62, "logprob": -5.2382812, "text": "L" }, { "id": 44, "logprob": -3.0996094, "text": ":" }, { "id": 1682, "logprob": -1.1025391, "text": " List" }, { "id": 77, "logprob": -0.14294434, "text": "[" }, { "id": 1808, "logprob": -0.32226562, "text": "float" }, { "id": 10794, "logprob": -2.8164062, "text": "]):" } ], "seed": 0, "tokens": [ { "id": 284, "logprob": 0.0, "special": false, "text": "\n " }, { "id": 442, "logprob": -1.3134766, "special": false, "text": " return" }, { "id": 11665, "logprob": -0.10021973, "special": false, "text": " reduce" }, { "id": 26, "logprob": 0.0, "special": false, "text": "(" }, { "id": 5962, "logprob": 0.0, "special": false, "text": "lambda" }, { "id": 816, "logprob": 0.0, "special": false, "text": " x" }, { "id": 30, "logprob": 0.0, "special": false, "text": "," }, { "id": 533, "logprob": 0.0, "special": false, "text": " y" }, { "id": 44, "logprob": 0.0, "special": false, "text": ":" }, { "id": 816, "logprob": 0.0, "special": false, "text": " x" }, { "id": 319, "logprob": -0.42871094, "special": false, "text": " *" }, { "id": 533, "logprob": 0.0, "special": false, "text": " y" }, { "id": 30, "logprob": 0.0, "special": false, "text": "," }, { "id": 498, "logprob": 0.0, "special": false, "text": " L" }, { "id": 27, "logprob": 0.0, "special": false, "text": ")" }, { "id": 1115, "logprob": 0.0, "special": false, "text": " **" }, { "id": 308, "logprob": 0.0, "special": false, "text": " (" }, { "id": 35, "logprob": 0.0, "special": false, "text": "1" }, { "id": 32, "logprob": -0.31323242, "special": false, "text": "." }, { "id": 34, "logprob": 0.0, "special": false, "text": "0" } ], "top_tokens": null }, "generated_text": "\n return reduce(lambda x, y: x * y, L) ** (1.0" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder_gptq/test_flash_starcoder_gptq_default_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder_gptq/test_flash_starcoder_gptq_default_params.json", "repo_id": "text-generation-inference", "token_count": 2310 }
207
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|prompter|>" }, { "id": 1276, "logprob": -8.0234375, "text": "What" }, { "id": 310, "logprob": -5.4179688, "text": " is" }, { "id": 247, "logprob": -2.1542969, "text": " a" }, { "id": 1167, "logprob": -5.359375, "text": " mem" }, { "id": 70, "logprob": -0.006038666, "text": "e" }, { "id": 13, "logprob": -7.328125, "text": "," }, { "id": 285, "logprob": -0.3173828, "text": " and" }, { "id": 752, "logprob": -2.0625, "text": " what" }, { "id": 434, "logprob": -5.7734375, "text": "'s" }, { "id": 253, "logprob": -0.74072266, "text": " the" }, { "id": 2892, "logprob": -6.5898438, "text": " history" }, { "id": 3212, "logprob": -2.2949219, "text": " behind" }, { "id": 436, "logprob": -11.40625, "text": " this" }, { "id": 3159, "logprob": -2.1113281, "text": " word" }, { "id": 32, "logprob": -0.008056641, "text": "?" }, { "id": 0, "logprob": -2.3300781, "text": "<|endoftext|>" }, { "id": 50281, "logprob": -18.28125, "text": "<|assistant|>" } ], "seed": null, "tokens": [ { "id": 510, "logprob": -0.5878906, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.5498047, "special": false, "text": " word" }, { "id": 346, "logprob": -0.04815674, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.002313614, "special": false, "text": "mem" }, { "id": 70, "logprob": -1.2636185e-05, "special": false, "text": "e" }, { "id": 3, "logprob": -0.0010147095, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.0859375, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12609863, "special": false, "text": " first" }, { "id": 908, "logprob": -0.016601562, "special": false, "text": " used" }, { "id": 275, "logprob": -0.38256836, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|prompter|>" }, { "id": 1276, "logprob": -8.0234375, "text": "What" }, { "id": 310, "logprob": -5.421875, "text": " is" }, { "id": 247, "logprob": -2.1640625, "text": " a" }, { "id": 1167, "logprob": -5.40625, "text": " mem" }, { "id": 70, "logprob": -0.005420685, "text": "e" }, { "id": 13, "logprob": -7.2226562, "text": "," }, { "id": 285, "logprob": -0.26879883, "text": " and" }, { "id": 752, "logprob": -2.1992188, "text": " what" }, { "id": 434, "logprob": -5.46875, "text": "'s" }, { "id": 253, "logprob": -0.8017578, "text": " the" }, { "id": 2892, "logprob": -6.6796875, "text": " history" }, { "id": 3212, "logprob": -2.1972656, "text": " behind" }, { "id": 436, "logprob": -11.4453125, "text": " this" }, { "id": 3159, "logprob": -2.1933594, "text": " word" }, { "id": 32, "logprob": -0.007858276, "text": "?" 
}, { "id": 0, "logprob": -2.328125, "text": "<|endoftext|>" }, { "id": 50281, "logprob": -18.21875, "text": "<|assistant|>" } ], "seed": null, "tokens": [ { "id": 510, "logprob": -0.6201172, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.546875, "special": false, "text": " word" }, { "id": 346, "logprob": -0.051879883, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.0020179749, "special": false, "text": "mem" }, { "id": 70, "logprob": -9.059906e-06, "special": false, "text": "e" }, { "id": 3, "logprob": -0.00096797943, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.07940674, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12182617, "special": false, "text": " first" }, { "id": 908, "logprob": -0.017227173, "special": false, "text": " used" }, { "id": 275, "logprob": -0.44482422, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|prompter|>" }, { "id": 1276, "logprob": -8.0234375, "text": "What" }, { "id": 310, "logprob": -5.421875, "text": " is" }, { "id": 247, "logprob": -2.1640625, "text": " a" }, { "id": 1167, "logprob": -5.40625, "text": " mem" }, { "id": 70, "logprob": -0.005420685, "text": "e" }, { "id": 13, "logprob": -7.2226562, "text": "," }, { "id": 285, "logprob": -0.26879883, "text": " and" }, { "id": 752, "logprob": -2.1992188, "text": " what" }, { "id": 434, "logprob": -5.46875, "text": "'s" }, { "id": 253, "logprob": -0.8017578, "text": " the" }, { "id": 2892, "logprob": -6.6796875, "text": " history" }, { "id": 3212, "logprob": -2.1972656, "text": " behind" }, { "id": 436, "logprob": -11.4453125, "text": " this" }, { "id": 3159, "logprob": -2.1933594, "text": " word" }, { "id": 32, "logprob": -0.007858276, "text": "?" 
}, { "id": 0, "logprob": -2.328125, "text": "<|endoftext|>" }, { "id": 50281, "logprob": -18.21875, "text": "<|assistant|>" } ], "seed": null, "tokens": [ { "id": 510, "logprob": -0.6201172, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.546875, "special": false, "text": " word" }, { "id": 346, "logprob": -0.051879883, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.0020179749, "special": false, "text": "mem" }, { "id": 70, "logprob": -9.059906e-06, "special": false, "text": "e" }, { "id": 3, "logprob": -0.00096797943, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.07940674, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12182617, "special": false, "text": " first" }, { "id": 908, "logprob": -0.017227173, "special": false, "text": " used" }, { "id": 275, "logprob": -0.44482422, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|prompter|>" }, { "id": 1276, "logprob": -8.0234375, "text": "What" }, { "id": 310, "logprob": -5.421875, "text": " is" }, { "id": 247, "logprob": -2.1640625, "text": " a" }, { "id": 1167, "logprob": -5.40625, "text": " mem" }, { "id": 70, "logprob": -0.005420685, "text": "e" }, { "id": 13, "logprob": -7.2226562, "text": "," }, { "id": 285, "logprob": -0.26879883, "text": " and" }, { "id": 752, "logprob": -2.1992188, "text": " what" }, { "id": 434, "logprob": -5.46875, "text": "'s" }, { "id": 253, "logprob": -0.8017578, "text": " the" }, { "id": 2892, "logprob": -6.6796875, "text": " history" }, { "id": 3212, "logprob": -2.1972656, "text": " behind" }, { "id": 436, "logprob": -11.4453125, "text": " this" }, { "id": 3159, "logprob": -2.1933594, "text": " word" }, { "id": 32, "logprob": -0.007858276, "text": "?" }, { "id": 0, "logprob": -2.328125, "text": "<|endoftext|>" }, { "id": 50281, "logprob": -18.21875, "text": "<|assistant|>" } ], "seed": null, "tokens": [ { "id": 510, "logprob": -0.6201172, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.546875, "special": false, "text": " word" }, { "id": 346, "logprob": -0.051879883, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.0020179749, "special": false, "text": "mem" }, { "id": 70, "logprob": -1.04904175e-05, "special": false, "text": "e" }, { "id": 3, "logprob": -0.0009560585, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.08557129, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12084961, "special": false, "text": " first" }, { "id": 908, "logprob": -0.01737976, "special": false, "text": " used" }, { "id": 275, "logprob": -0.4025879, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" } ]
text-generation-inference/integration-tests/models/__snapshots__/test_neox_sharded/test_neox_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_neox_sharded/test_neox_load.json", "repo_id": "text-generation-inference", "token_count": 9164 }
208
import pytest @pytest.fixture(scope="module") def flash_llama_gptq_handle(launcher): with launcher("huggingface/llama-7b-gptq", num_shard=2, quantize="gptq") as handle: yield handle @pytest.fixture(scope="module") async def flash_llama_gptq(flash_llama_gptq_handle): await flash_llama_gptq_handle.health(300) return flash_llama_gptq_handle.client @pytest.mark.asyncio @pytest.mark.private async def test_flash_llama_gptq(flash_llama_gptq, response_snapshot): response = await flash_llama_gptq.generate( "Test request", max_new_tokens=10, decoder_input_details=True ) assert response.details.generated_tokens == 10 assert response == response_snapshot @pytest.mark.asyncio @pytest.mark.private async def test_flash_llama_gptq_all_params(flash_llama_gptq, response_snapshot): response = await flash_llama_gptq.generate( "Test request", max_new_tokens=10, repetition_penalty=1.2, return_full_text=True, temperature=0.5, top_p=0.9, top_k=10, truncate=5, typical_p=0.9, watermark=True, decoder_input_details=True, seed=0, ) assert response.details.generated_tokens == 10 assert response == response_snapshot @pytest.mark.asyncio @pytest.mark.private async def test_flash_llama_gptq_load( flash_llama_gptq, generate_load, response_snapshot ): responses = await generate_load( flash_llama_gptq, "Test request", max_new_tokens=10, n=4 ) assert len(responses) == 4 assert all([r.generated_text == responses[0].generated_text for r in responses]) assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_llama_gptq.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_llama_gptq.py", "repo_id": "text-generation-inference", "token_count": 723 }
209
import pytest @pytest.fixture(scope="module") def neox_handle(launcher): with launcher( "stabilityai/stablelm-tuned-alpha-3b", num_shard=1, use_flash_attention=False ) as handle: yield handle @pytest.fixture(scope="module") async def neox(neox_handle): await neox_handle.health(300) return neox_handle.client @pytest.mark.skip @pytest.mark.asyncio async def test_neox(neox, response_snapshot): response = await neox.generate( "<|USER|>What's your mood today?<|ASSISTANT|>", max_new_tokens=10, decoder_input_details=True, ) assert response.details.generated_tokens == 10 assert response == response_snapshot @pytest.mark.skip @pytest.mark.asyncio async def test_neox_load(neox, generate_load, response_snapshot): responses = await generate_load( neox, "<|USER|>What's your mood today?<|ASSISTANT|>", max_new_tokens=10, n=4, ) generated_texts = [r.generated_text for r in responses] assert len(generated_texts) == 4 assert all( [text == generated_texts[0] for text in generated_texts] ) assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_neox.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_neox.py", "repo_id": "text-generation-inference", "token_count": 499 }
210
syntax = "proto3"; package generate.v2; service TextGenerationService { /// Model Info rpc Info (InfoRequest) returns (InfoResponse) {} /// Service discovery rpc ServiceDiscovery (ServiceDiscoveryRequest) returns (ServiceDiscoveryResponse) {} /// Empties batch cache rpc ClearCache (ClearCacheRequest) returns (ClearCacheResponse); /// Remove requests from a cached batch rpc FilterBatch (FilterBatchRequest) returns (FilterBatchResponse); /// Warmup the model and compute max cache size rpc Warmup (WarmupRequest) returns (WarmupResponse); /// Prefill batch and decode first token rpc Prefill (PrefillRequest) returns (PrefillResponse); /// Decode token for a list of prefilled batches rpc Decode (DecodeRequest) returns (DecodeResponse); /// Health check rpc Health (HealthRequest) returns (HealthResponse); } message HealthRequest {} message HealthResponse {} /// Empty request message InfoRequest {} message InfoResponse { bool requires_padding = 1; string dtype = 2; string device_type = 3; optional uint32 window_size = 4; uint32 speculate = 5; } /// Empty request message ServiceDiscoveryRequest {} message ServiceDiscoveryResponse { /// Other shards urls repeated string urls = 1; } message ClearCacheRequest { /// Optional batch id optional uint64 id = 1; } /// Empty response message ClearCacheResponse {} enum GrammarType { GRAMMAR_TYPE_NONE = 0; GRAMMAR_TYPE_JSON = 1; GRAMMAR_TYPE_REGEX = 2; } message NextTokenChooserParameters { /// exponential scaling output probability distribution float temperature = 1; /// restricting to the k highest probability elements uint32 top_k = 2; /// restricting to top tokens summing to prob_cut_off <= prob_cut_off float top_p = 3; /// restricting to top tokens summing to prob_cut_off <= prob_cut_off float typical_p = 4; /// apply sampling on the logits bool do_sample = 5; /// random seed for sampling uint64 seed = 6; /// repetition penalty float repetition_penalty = 7; /// frequency penalty float frequency_penalty = 9; /// token watermarking using "A Watermark for Large Language Models" bool watermark = 8; /// grammar (applied if not empty) string grammar = 10; /// grammar type GrammarType grammar_type = 11; } message StoppingCriteriaParameters { /// Maximum number of generated tokens uint32 max_new_tokens = 1; /// Optional stopping sequences repeated string stop_sequences = 2; /// Ignore end of sequence token /// used for benchmarking bool ignore_eos_token = 3; } message Request { /// Request ID uint64 id = 1; /// The generation context string inputs = 2; /// Context truncation uint32 truncate = 3; /// Next Token Chooser Parameters NextTokenChooserParameters parameters = 4; /// Stopping Criteria Parameters StoppingCriteriaParameters stopping_parameters = 5; /// Return prefill logprobs bool prefill_logprobs = 6; /// Return most likely n tokens uint32 top_n_tokens = 7; } message Batch { /// Batch ID uint64 id = 1; /// Individual requests repeated Request requests = 2; /// Batch size (==len(requests)) uint32 size = 3; /// Maximum number of tokens this batch will grow to uint32 max_tokens = 4; } message CachedBatch { /// Batch ID uint64 id = 1; /// Individual requests ids repeated uint64 request_ids = 2; /// Batch size (==len(requests)) uint32 size = 3; /// Maximum number of tokens this batch will grow to uint32 max_tokens = 4; } enum FinishReason { FINISH_REASON_LENGTH = 0; FINISH_REASON_EOS_TOKEN = 1; FINISH_REASON_STOP_SEQUENCE = 2; } message GeneratedText { /// Output string text = 1; /// Number of generated tokens uint32 generated_tokens = 2; /// Finish reason 
FinishReason finish_reason = 3; /// Seed optional uint64 seed = 4; } message Tokens { /// Token IDs repeated uint32 ids = 1; /// Logprobs repeated float logprobs = 2; /// tokens repeated string texts = 3; /// special repeated bool is_special = 4; } message Generation { /// Request ID uint64 request_id = 1; /// Prefill tokens (optional) Tokens prefill_tokens = 2; Tokens tokens = 3; /// Complete generated text optional GeneratedText generated_text = 4; /// Top tokens repeated Tokens top_tokens = 5; } message FilterBatchRequest { /// Batch ID uint64 batch_id = 1; /// Requests to keep repeated uint64 request_ids = 2; } message FilterBatchResponse { /// Filtered Batch (cached) CachedBatch batch = 1; } message PrefillRequest { /// Batch Batch batch = 1; } message PrefillResponse { /// Generation repeated Generation generations = 1; /// Next batch (cached) optional CachedBatch batch = 2; /// Forward elapsed time in nanoseconds uint64 forward_ns = 3; /// Decode elapsed time in nanoseconds uint64 decode_ns = 4; /// Total elapsed time in nanoseconds uint64 total_ns = 5; } message DecodeRequest { /// Cached batches repeated CachedBatch batches = 1; } message DecodeResponse { /// Decodes repeated Generation generations = 1; /// Next batch (cached) optional CachedBatch batch = 2; /// Forward elapsed time in nanoseconds uint64 forward_ns = 3; /// Decode elapsed time in nanoseconds uint64 decode_ns = 4; /// Total elapsed time in nanoseconds uint64 total_ns = 5; /// Concatenate elapsed time in nanoseconds optional uint64 concat_ns = 6; } message WarmupRequest { /// Batch to warmup on Batch batch = 1; uint32 max_input_length = 2; uint32 max_prefill_tokens = 3; uint32 max_total_tokens = 4; } message WarmupResponse { /// Maximum number of tokens supported by the model optional uint32 max_supported_total_tokens = 1; }
text-generation-inference/proto/generate.proto/0
{ "file_path": "text-generation-inference/proto/generate.proto", "repo_id": "text-generation-inference", "token_count": 2074 }
211
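A hedged sketch of driving the Prefill RPC defined above from Python. It assumes the stubs were generated with grpcio-tools, so the module names generate_pb2 / generate_pb2_grpc follow protoc's default naming, and the unix socket path is only an example; neither is part of the proto file itself:

import grpc
import generate_pb2 as pb2
import generate_pb2_grpc as pb2_grpc

channel = grpc.insecure_channel("unix:///tmp/text-generation-server-0")  # assumed socket path
stub = pb2_grpc.TextGenerationServiceStub(channel)

request = pb2.Request(
    id=0,
    inputs="What is Deep Learning?",
    truncate=1024,
    prefill_logprobs=False,
    top_n_tokens=0,
    parameters=pb2.NextTokenChooserParameters(
        temperature=1.0, top_k=0, top_p=1.0, typical_p=1.0, do_sample=False,
        seed=0, repetition_penalty=1.0, frequency_penalty=0.0, watermark=False,
        grammar="", grammar_type=pb2.GRAMMAR_TYPE_NONE,
    ),
    stopping_parameters=pb2.StoppingCriteriaParameters(max_new_tokens=10),
)
batch = pb2.Batch(id=0, requests=[request], size=1, max_tokens=1034)

response = stub.Prefill(pb2.PrefillRequest(batch=batch))
for generation in response.generations:
    print(generation.tokens.texts)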
use crate::infer::InferError; use crate::infer::InferStreamResponse; use crate::validation::ValidGenerateRequest; use nohash_hasher::{BuildNoHashHasher, IntMap}; use std::cmp::min; use std::collections::VecDeque; use text_generation_client::{Batch, Request}; use tokio::sync::{mpsc, oneshot}; use tokio::time::Instant; use tracing::{info_span, instrument, Span}; /// Queue entry #[derive(Debug)] pub(crate) struct Entry { /// Request pub request: ValidGenerateRequest, /// Response sender to communicate between the Infer struct and the batching_task pub response_tx: mpsc::UnboundedSender<Result<InferStreamResponse, InferError>>, /// Span that will live as long as entry pub span: Span, /// Temporary span used as a guard when logging inference, wait times... pub temp_span: Option<Span>, /// Instant when this entry was queued pub queue_time: Instant, /// Instant when this entry was added to a batch pub batch_time: Option<Instant>, } /// Request Queue #[derive(Debug, Clone)] pub(crate) struct Queue { /// Channel to communicate with the background queue task queue_sender: mpsc::UnboundedSender<QueueCommand>, } impl Queue { pub(crate) fn new( requires_padding: bool, block_size: u32, window_size: Option<u32>, speculate: u32, ) -> Self { // Create channel let (queue_sender, queue_receiver) = mpsc::unbounded_channel(); // Launch background queue task tokio::spawn(queue_task( requires_padding, block_size, window_size, speculate, queue_receiver, )); Self { queue_sender } } /// Append an entry to the queue #[instrument(skip_all)] pub(crate) fn append(&self, entry: Entry) { // Send append command to the background task managing the state // Unwrap is safe here self.queue_sender .send(QueueCommand::Append(Box::new(entry), Span::current())) .unwrap(); } // Get the next batch #[instrument(skip(self))] pub(crate) async fn next_batch( &self, min_size: Option<usize>, max_size: Option<usize>, prefill_token_budget: u32, token_budget: u32, ) -> Option<NextBatch> { // Create response channel let (response_sender, response_receiver) = oneshot::channel(); // Send next batch command to the background task managing the state // Unwrap is safe here self.queue_sender .send(QueueCommand::NextBatch { min_size, max_size, prefill_token_budget, token_budget, response_sender, span: Span::current(), }) .unwrap(); // Await on response channel // Unwrap is safe here response_receiver.await.unwrap() } } // Background task responsible of the queue state async fn queue_task( requires_padding: bool, block_size: u32, window_size: Option<u32>, speculate: u32, mut receiver: mpsc::UnboundedReceiver<QueueCommand>, ) { let mut state = State::new(requires_padding, block_size, window_size, speculate); while let Some(cmd) = receiver.recv().await { match cmd { QueueCommand::Append(entry, span) => { span.in_scope(|| state.append(*entry)); metrics::increment_gauge!("tgi_queue_size", 1.0); } QueueCommand::NextBatch { min_size, max_size, prefill_token_budget, token_budget, response_sender, span, } => span.in_scope(|| { let next_batch = state.next_batch(min_size, max_size, prefill_token_budget, token_budget); response_sender.send(next_batch).unwrap(); metrics::gauge!("tgi_queue_size", state.entries.len() as f64); }), } } } /// Queue State #[derive(Debug)] struct State { /// Queue entries organized in a Vec entries: VecDeque<(u64, Entry)>, /// Id of the next entry next_id: u64, /// Id of the next batch next_batch_id: u64, /// Whether the model is using padding requires_padding: bool, /// Paged Attention block size block_size: u32, /// Sliding window 
window_size: Option<u32>, /// Speculation amount speculate: u32, } impl State { fn new( requires_padding: bool, block_size: u32, window_size: Option<u32>, speculate: u32, ) -> Self { Self { entries: VecDeque::with_capacity(128), next_id: 0, next_batch_id: 0, requires_padding, block_size, window_size, speculate, } } /// Append an entry to the queue fn append(&mut self, mut entry: Entry) { // Create a span that will live as long as the entry is in the queue waiting to be batched let queue_span = info_span!(parent: &entry.span, "queued"); entry.temp_span = Some(queue_span); // Push entry in the queue self.entries.push_back((self.next_id, entry)); self.next_id += 1; } // Get the next batch fn next_batch( &mut self, min_size: Option<usize>, max_size: Option<usize>, prefill_token_budget: u32, token_budget: u32, ) -> Option<NextBatch> { if self.entries.is_empty() { return None; } // Check if we have enough entries if let Some(min_size) = min_size { if self.entries.len() < min_size { return None; } } // Create span for this batch to add context to inference calls let next_batch_span = info_span!(parent: None, "batch", batch_size = tracing::field::Empty); next_batch_span.follows_from(&Span::current()); let mut batch_requests = Vec::with_capacity(self.entries.len()); let mut batch_entries = IntMap::with_capacity_and_hasher(self.entries.len(), BuildNoHashHasher::default()); let mut max_input_length = 0; let mut prefill_tokens: u32 = 0; let mut decode_tokens: u32 = 0; // Pop entries starting from the front of the queue while let Some((id, mut entry)) = self.entries.pop_front() { // Filter entries where the response receiver was dropped (== entries where the request // was dropped by the client) if entry.response_tx.is_closed() { metrics::increment_counter!("tgi_request_failure", "err" => "dropped"); continue; } if self.requires_padding { // We pad to max input length in the Python shards // We need to take these padding tokens into the equation max_input_length = max_input_length.max(entry.request.input_length); prefill_tokens = (batch_requests.len() + 1) as u32 * max_input_length } else { // pad to block size prefill_tokens += ((entry.request.input_length + self.block_size - 1) / self.block_size) * self.block_size; } if self.requires_padding { decode_tokens += entry.request.stopping_parameters.max_new_tokens; } else { let max_new_tokens = match self.window_size { None => entry.request.stopping_parameters.max_new_tokens, Some(window_size) => min( window_size.saturating_sub(entry.request.input_length), entry.request.stopping_parameters.max_new_tokens, ), }; // pad to block size decode_tokens += ((max_new_tokens + self.block_size - 1) / self.block_size) * self.block_size; } if prefill_tokens > prefill_token_budget || (prefill_tokens + decode_tokens + self.speculate) > token_budget { // Entry is over budget // Add it back to the front self.entries.push_front((id, entry)); break; } // Create a new span to link the batch back to this entry let entry_batch_span = info_span!(parent: &entry.span, "infer"); // Add relationships next_batch_span.follows_from(&entry_batch_span); entry_batch_span.follows_from(&next_batch_span); // Update entry entry.temp_span = Some(entry_batch_span); batch_requests.push(Request { id, prefill_logprobs: entry.request.decoder_input_details, inputs: entry.request.inputs.clone(), truncate: entry.request.truncate, parameters: Some(entry.request.parameters.clone()), stopping_parameters: Some(entry.request.stopping_parameters.clone()), top_n_tokens: entry.request.top_n_tokens, }); // Set 
batch_time entry.batch_time = Some(Instant::now()); // Insert in batch_entries IntMap batch_entries.insert(id, entry); // Check if max_size if Some(batch_requests.len()) == max_size { break; } } // Empty batch if batch_requests.is_empty() { return None; } // Check if our batch is big enough if let Some(min_size) = min_size { // Batch is too small if batch_requests.len() < min_size { // Add back entries to the queue in the correct order for r in batch_requests.into_iter().rev() { let id = r.id; let entry = batch_entries.remove(&id).unwrap(); self.entries.push_front((id, entry)); } return None; } } // Final batch size let size = batch_requests.len() as u32; next_batch_span.record("batch_size", size); let batch = Batch { id: self.next_batch_id, requests: batch_requests, size, max_tokens: (prefill_tokens + decode_tokens), }; // Increment batch id self.next_batch_id += 1; metrics::histogram!("tgi_batch_next_size", batch.size as f64); Some((batch_entries, batch, next_batch_span)) } } type NextBatch = (IntMap<u64, Entry>, Batch, Span); #[derive(Debug)] enum QueueCommand { Append(Box<Entry>, Span), NextBatch { min_size: Option<usize>, max_size: Option<usize>, prefill_token_budget: u32, token_budget: u32, response_sender: oneshot::Sender<Option<NextBatch>>, span: Span, }, } #[cfg(test)] mod tests { use super::*; use text_generation_client::{ GrammarType as ProtoGrammarType, NextTokenChooserParameters, StoppingCriteriaParameters, }; use tracing::info_span; fn default_entry() -> ( Entry, mpsc::UnboundedReceiver<Result<InferStreamResponse, InferError>>, ) { let (response_tx, receiver_tx) = mpsc::unbounded_channel(); let entry = Entry { request: ValidGenerateRequest { inputs: String::new(), input_length: 0, truncate: 0, decoder_input_details: false, parameters: NextTokenChooserParameters { temperature: 0.0, top_k: 0, top_p: 0.0, typical_p: 0.0, do_sample: false, seed: 0, repetition_penalty: 0.0, frequency_penalty: 0.0, watermark: false, grammar: String::new(), grammar_type: ProtoGrammarType::None as i32, }, stopping_parameters: StoppingCriteriaParameters { ignore_eos_token: false, max_new_tokens: 1, stop_sequences: vec![], }, top_n_tokens: 0, }, response_tx, span: info_span!("entry"), temp_span: None, queue_time: Instant::now(), batch_time: None, }; (entry, receiver_tx) } #[test] fn test_append() { let mut state = State::new(false, 1, None, 0); let (entry, _guard) = default_entry(); assert_eq!(state.next_id, 0); assert_eq!(state.entries.len(), 0); state.append(entry); assert_eq!(state.next_id, 1); assert_eq!(state.entries.len(), 1); let (id, _) = state.entries.remove(0).unwrap(); assert_eq!(id, 0); } #[test] fn test_next_batch_empty() { let mut state = State::new(false, 1, None, 0); assert!(state.next_batch(None, None, 1, 1).is_none()); assert!(state.next_batch(Some(1), None, 1, 1).is_none()); } #[test] fn test_next_batch_min_size() { let mut state = State::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); state.append(entry1); state.append(entry2); let (entries, batch, _) = state.next_batch(None, None, 2, 2).unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&0)); assert!(entries.contains_key(&1)); assert!(entries.get(&0).unwrap().batch_time.is_some()); assert!(entries.get(&1).unwrap().batch_time.is_some()); assert_eq!(batch.id, 0); assert_eq!(batch.size, 2); assert_eq!(state.next_id, 2); assert_eq!(state.entries.len(), 0); assert_eq!(state.next_batch_id, 1); let (entry3, _guard3) = default_entry(); state.append(entry3); 
assert!(state.next_batch(Some(2), None, 2, 2).is_none()); assert_eq!(state.next_id, 3); assert_eq!(state.entries.len(), 1); let (id, _) = state.entries.remove(0).unwrap(); assert_eq!(id, 2); } #[test] fn test_next_batch_max_size() { let mut state = State::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); state.append(entry1); state.append(entry2); let (entries, batch, _) = state.next_batch(None, Some(1), 2, 2).unwrap(); assert_eq!(entries.len(), 1); assert!(entries.contains_key(&0)); assert!(entries.get(&0).unwrap().batch_time.is_some()); assert_eq!(batch.id, 0); assert_eq!(batch.size, 1); assert_eq!(state.next_id, 2); assert_eq!(state.entries.len(), 1); assert_eq!(state.next_batch_id, 1); } #[test] fn test_next_batch_token_budget() { let mut state = State::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); state.append(entry1); state.append(entry2); let (entries, batch, _) = state.next_batch(None, None, 1, 1).unwrap(); assert_eq!(entries.len(), 1); assert!(entries.contains_key(&0)); assert_eq!(batch.id, 0); assert_eq!(batch.size, 1); assert_eq!(state.next_id, 2); assert_eq!(state.entries.len(), 1); assert_eq!(state.next_batch_id, 1); let (entry3, _guard3) = default_entry(); state.append(entry3); let (entries, batch, _) = state.next_batch(None, None, 3, 3).unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&1)); assert!(entries.contains_key(&2)); assert_eq!(batch.id, 1); assert_eq!(batch.size, 2); assert_eq!(state.next_id, 3); assert_eq!(state.entries.len(), 0); assert_eq!(state.next_batch_id, 2); } #[tokio::test] async fn test_queue_append() { let queue = Queue::new(false, 1, None, 0); let (entry, _guard) = default_entry(); queue.append(entry); } #[tokio::test] async fn test_queue_next_batch_empty() { let queue = Queue::new(false, 1, None, 0); assert!(queue.next_batch(None, None, 1, 1).await.is_none()); assert!(queue.next_batch(Some(1), None, 1, 1).await.is_none()); } #[tokio::test] async fn test_queue_next_batch_min_size() { let queue = Queue::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); queue.append(entry1); queue.append(entry2); let (entries, batch, _) = queue.next_batch(None, None, 2, 2).await.unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&0)); assert!(entries.contains_key(&1)); assert!(entries.get(&0).unwrap().batch_time.is_some()); assert!(entries.get(&1).unwrap().batch_time.is_some()); assert_eq!(batch.id, 0); assert_eq!(batch.size, 2); let (entry3, _guard3) = default_entry(); queue.append(entry3); // Not enough requests pending assert!(queue.next_batch(Some(2), None, 2, 2).await.is_none()); // Not enough token budget assert!(queue.next_batch(Some(1), None, 0, 0).await.is_none()); // Ok let (entries2, batch2, _) = queue.next_batch(Some(1), None, 2, 2).await.unwrap(); assert_eq!(entries2.len(), 1); assert!(entries2.contains_key(&2)); assert!(entries2.get(&2).unwrap().batch_time.is_some()); assert_eq!(batch2.id, 1); assert_eq!(batch2.size, 1); } #[tokio::test] async fn test_queue_next_batch_max_size() { let queue = Queue::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); queue.append(entry1); queue.append(entry2); let (entries, batch, _) = queue.next_batch(None, Some(1), 2, 2).await.unwrap(); assert_eq!(entries.len(), 1); assert!(entries.contains_key(&0)); assert!(entries.get(&0).unwrap().batch_time.is_some()); 
assert_eq!(batch.id, 0); assert_eq!(batch.size, 1); } #[tokio::test] async fn test_queue_next_batch_token_budget() { let queue = Queue::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); queue.append(entry1); queue.append(entry2); let (entries, batch, _) = queue.next_batch(None, None, 1, 1).await.unwrap(); assert_eq!(entries.len(), 1); assert!(entries.contains_key(&0)); assert_eq!(batch.id, 0); assert_eq!(batch.size, 1); let (entry3, _guard3) = default_entry(); queue.append(entry3); let (entries, batch, _) = queue.next_batch(None, None, 3, 3).await.unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&1)); assert!(entries.contains_key(&2)); assert_eq!(batch.id, 1); assert_eq!(batch.size, 2); } #[tokio::test] async fn test_queue_next_batch_token_speculate() { let queue = Queue::new(false, 1, None, 2); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); queue.append(entry1); queue.append(entry2); // Budget of 1 is not enough assert!(queue.next_batch(None, None, 1, 1).await.is_none()); let (entries, batch, _) = queue.next_batch(None, None, 6, 6).await.unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&0)); assert!(entries.contains_key(&1)); assert_eq!(batch.id, 0); assert_eq!(batch.size, 2); } #[tokio::test] async fn test_queue_next_batch_dropped_receiver() { let queue = Queue::new(false, 1, None, 0); let (entry, _) = default_entry(); queue.append(entry); assert!(queue.next_batch(None, None, 1, 1).await.is_none()); } }
text-generation-inference/router/src/queue.rs/0
{ "file_path": "text-generation-inference/router/src/queue.rs", "repo_id": "text-generation-inference", "token_count": 9950 }
212
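The admission rule inside next_batch above is a token-budget check: prefill tokens are padded either to the running max input length (padded models) or to the block size (paged attention), decode tokens are padded to the block size, and an entry is only kept while both budgets hold. A small Python restatement of that arithmetic for the paged-attention case, illustration only:

def pad_to_block(n: int, block_size: int) -> int:
    return ((n + block_size - 1) // block_size) * block_size

def admitted(entries, block_size, speculate, prefill_token_budget, token_budget):
    """How many (input_length, max_new_tokens) entries fit within the budgets."""
    prefill_tokens = decode_tokens = count = 0
    for input_length, max_new_tokens in entries:
        prefill_tokens += pad_to_block(input_length, block_size)
        decode_tokens += pad_to_block(max_new_tokens, block_size)
        if (prefill_tokens > prefill_token_budget
                or prefill_tokens + decode_tokens + speculate > token_budget):
            break  # the real queue pushes the entry back to the front here
        count += 1
    return count

# With block_size=16, two requests of 10 input / 20 new tokens cost 32 prefill
# and 64 decode tokens, so both fit a (64, 128) budget:
print(admitted([(10, 20), (10, 20)], 16, 0, 64, 128))  # 2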
from setuptools import setup from torch.utils.cpp_extension import BuildExtension, CUDAExtension import torch extra_compile_args = ["-std=c++17"] if not torch.version.hip: extra_compile_args.append("-arch=compute_80") setup( name="custom_kernels", ext_modules=[ CUDAExtension( name="custom_kernels.fused_bloom_attention_cuda", sources=["custom_kernels/fused_bloom_attention_cuda.cu"], extra_compile_args=extra_compile_args, ), CUDAExtension( name="custom_kernels.fused_attention_cuda", sources=["custom_kernels/fused_attention_cuda.cu"], extra_compile_args=extra_compile_args, ), ], cmdclass={"build_ext": BuildExtension}, )
text-generation-inference/server/custom_kernels/setup.py/0
{ "file_path": "text-generation-inference/server/custom_kernels/setup.py", "repo_id": "text-generation-inference", "token_count": 342 }
213
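Building and loading the two extensions declared above follows the standard torch cpp_extension workflow; a hedged sketch (requires nvcc and a CUDA GPU; the exact kernel entry points live in the .cu sources, which are not part of this listing):

# From text-generation-inference/server/custom_kernels:
#   pip install .
#   (or: python setup.py build_ext --inplace)
# On CUDA (non-ROCm) builds the kernels are compiled with -arch=compute_80, i.e. A100-class GPUs.

import custom_kernels.fused_bloom_attention_cuda
import custom_kernels.fused_attention_cuda

print(custom_kernels.fused_attention_cuda.__file__)  # path of the compiled extension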
#ifndef _config_h #define _config_h #define MAX_Q_GEMM_ROWS 50 #define MAX_Q_GEMM_WEIGHTS 4 // must be <= MAX_Q_GEMM_ROWS #define QMODE_2BIT 1 #define QMODE_3BIT 1 #define QMODE_4BIT 1 #define QMODE_5BIT 1 #define QMODE_6BIT 0 #define QMODE_8BIT 0 #endif
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/config.h/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/config.h", "repo_id": "text-generation-inference", "token_count": 119 }
214
#ifndef _qdq_util_cuh #define _qdq_util_cuh union half2_uint32 { uint32_t as_uint32; half2 as_half2; __device__ half2_uint32(uint32_t val) : as_uint32(val) {} __device__ half2_uint32(half2 val) : as_half2(val) {} __device__ half2_uint32() : as_uint32(0) {} }; union half_uint16 { uint16_t as_uint16; half as_half; __device__ half_uint16(uint16_t val) : as_uint16(val) {} __device__ half_uint16(half val) : as_half(val) {} __device__ half_uint16() : as_uint16(0) {} }; // Max_scale premultiplied by 1/256 __forceinline__ __device__ half dq_scale(const int qs, const half max_scale) { int qs_i = qs + 1; half qs_h = __int2half_rn(qs_i * qs_i); qs_h = __hmul(qs_h, max_scale); return qs_h; } __forceinline__ __device__ half dq(const int q, const int qzero, const half scale) { return __hmul(__int2half_rn(q - qzero), scale); } __forceinline__ __device__ half dq_ns(const int q, const int qzero) { //return __hsub(__int2half_rn(q), __int2half_rn(qzero)); return __int2half_rn(q - qzero); } __forceinline__ __device__ int exb(const uint32_t q, const int shift, const int mask) { return (int)((q >> shift) & mask); } __forceinline__ __device__ int exb(const uint32_t q1, const uint32_t q0, const int shift, const int mask) { return (int)(__funnelshift_rc(q0, q1, shift) & mask); } #endif
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_util.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_util.cuh", "repo_id": "text-generation-inference", "token_count": 602 }
215
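The device helpers above implement plain affine dequantization: dq multiplies (q - qzero) by a scale, dq_ns drops the scale, dq_scale squares a (qs + 1) code and multiplies by a maximum scale that the comment says is premultiplied by 1/256, and exb extracts a bit field from a packed 32-bit word. A plain-Python restatement of the same arithmetic, for reference only:

def dq(q: int, qzero: int, scale: float) -> float:
    # __hmul(__int2half_rn(q - qzero), scale)
    return (q - qzero) * scale

def dq_ns(q: int, qzero: int) -> float:
    # dequantize without a scale
    return float(q - qzero)

def dq_scale(qs: int, max_scale: float) -> float:
    # max_scale premultiplied by 1/256, matching the CUDA comment
    qs_i = qs + 1
    return (qs_i * qs_i) * max_scale

def exb(q: int, shift: int, mask: int) -> int:
    # extract a bit field from a packed word
    return (q >> shift) & mask

print(exb(0b10110100, 2, 0b111))     # 0b101 -> 5
print(dq(q=9, qzero=8, scale=0.05))  # 0.05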
import os import requests import tempfile import pytest import huggingface_hub.constants from huggingface_hub import hf_api import text_generation_server.utils.hub from text_generation_server.utils.hub import ( weight_hub_files, download_weights, weight_files, EntryNotFoundError, LocalEntryNotFoundError, RevisionNotFoundError, ) @pytest.fixture() def offline(): current_value = text_generation_server.utils.hub.HF_HUB_OFFLINE text_generation_server.utils.hub.HF_HUB_OFFLINE = True yield "offline" text_generation_server.utils.hub.HF_HUB_OFFLINE = current_value @pytest.fixture() def fresh_cache(): with tempfile.TemporaryDirectory() as d: current_value = huggingface_hub.constants.HUGGINGFACE_HUB_CACHE huggingface_hub.constants.HUGGINGFACE_HUB_CACHE = d text_generation_server.utils.hub.HUGGINGFACE_HUB_CACHE = d os.environ["HUGGINGFACE_HUB_CACHE"] = d yield huggingface_hub.constants.HUGGINGFACE_HUB_CACHE = current_value os.environ["HUGGINGFACE_HUB_CACHE"] = current_value text_generation_server.utils.hub.HUGGINGFACE_HUB_CACHE = current_value @pytest.fixture() def prefetched(): model_id = "bert-base-uncased" huggingface_hub.snapshot_download( repo_id=model_id, revision="main", local_files_only=False, repo_type="model", allow_patterns=["*.safetensors"], ) yield model_id def test_weight_hub_files_offline_error(offline, fresh_cache): # If the model is not prefetched then it will raise an error with pytest.raises(EntryNotFoundError): weight_hub_files("gpt2") def test_weight_hub_files_offline_ok(prefetched, offline): # If the model is prefetched then we should be able to get the weight files from local cache filenames = weight_hub_files(prefetched) root = None assert len(filenames) == 1 for f in filenames: curroot, filename = os.path.split(f) if root is None: root = curroot else: assert root == curroot assert filename == "model.safetensors" def test_weight_hub_files(): filenames = weight_hub_files("bigscience/bloom-560m") assert filenames == ["model.safetensors"] def test_weight_hub_files_llm(): filenames = weight_hub_files("bigscience/bloom") assert filenames == [f"model_{i:05d}-of-00072.safetensors" for i in range(1, 73)] def test_weight_hub_files_empty(): with pytest.raises(EntryNotFoundError): weight_hub_files("bigscience/bloom", extension=".errors") def test_download_weights(): model_id = "bigscience/bloom-560m" filenames = weight_hub_files(model_id) files = download_weights(filenames, model_id) local_files = weight_files("bigscience/bloom-560m") assert files == local_files def test_weight_files_revision_error(): with pytest.raises(RevisionNotFoundError): weight_files("bigscience/bloom-560m", revision="error") def test_weight_files_not_cached_error(fresh_cache): with pytest.raises(LocalEntryNotFoundError): weight_files("bert-base-uncased")
text-generation-inference/server/tests/utils/test_hub.py/0
{ "file_path": "text-generation-inference/server/tests/utils/test_hub.py", "repo_id": "text-generation-inference", "token_count": 1264 }
216
# coding=utf-8 # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved. # # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX # and OPT implementations in this library. It has been modified from its # original forms to accommodate minor architectural differences compared # to GPT-NeoX and OPT used by the Meta AI team that trained the model. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch import torch.distributed from torch import nn from transformers.activations import ACT2FN from transformers.configuration_utils import PretrainedConfig from typing import Optional, List, Tuple from text_generation_server.utils import paged_attention, flash_attn from text_generation_server.utils.layers import ( TensorParallelRowLinear, TensorParallelColumnLinear, TensorParallelEmbedding, PositionRotaryEmbedding, SpeculativeHead, get_linear, FastRMSNorm, ) class MistralConfig(PretrainedConfig): model_type = "mistral" def __init__( self, vocab_size=32000, hidden_size=4096, intermediate_size=14336, num_hidden_layers=32, num_attention_heads=32, num_key_value_heads=8, hidden_act="silu", max_position_embeddings=4096 * 32, initializer_range=0.02, rms_norm_eps=1e-6, use_cache=True, pad_token_id=None, bos_token_id=1, eos_token_id=2, pretraining_tp=1, tie_word_embeddings=False, rope_theta=10000.0, sliding_window=None, **kwargs, ): self.vocab_size = vocab_size self.max_position_embeddings = max_position_embeddings self.hidden_size = hidden_size self.intermediate_size = intermediate_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.sliding_window = sliding_window # for backward compatibility if num_key_value_heads is None: num_key_value_heads = num_attention_heads self.num_key_value_heads = num_key_value_heads self.hidden_act = hidden_act self.initializer_range = initializer_range self.rms_norm_eps = rms_norm_eps self.pretraining_tp = pretraining_tp self.use_cache = use_cache self.rope_theta = rope_theta super().__init__( pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs, ) def load_attention(config, prefix, weights): if config.num_attention_heads != config.num_key_value_heads: return _load_gqa(config, prefix, weights) else: return TensorParallelColumnLinear.load_multi( config, prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"], dim=0, weights=weights, bias=False, ) def _load_gqa(config, prefix: str, weights): assert config.hidden_size % config.num_attention_heads == 0 assert config.num_attention_heads % weights.process_group.size() == 0 weight = weights.get_multi_weights_col( prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"], quantize=config.quantize, dim=0, ) if config.quantize not in ["gptq", "awq"]: weight = weight.to(dtype=weights.dtype).to(device=weights.device) head_size = config.hidden_size // config.num_attention_heads num_heads = config.num_attention_heads // weights.process_group.size() 
num_key_value_heads = config.num_key_value_heads // weights.process_group.size() assert list(weight.shape) == [ (num_heads + 2 * num_key_value_heads) * head_size, config.hidden_size, ], f"{list(weight.shape)} != {[(num_heads + 2 * config.num_key_value_heads) * head_size, config.hidden_size]}" return TensorParallelColumnLinear( get_linear(weight, bias=None, quantize=config.quantize) ) class MistralAttention(torch.nn.Module): def __init__( self, prefix: str, config, weights, ): super().__init__() self.max_past = ( config.sliding_window if config.sliding_window is not None else -1 ) self.num_heads = config.num_attention_heads self.hidden_size = config.hidden_size self.head_size = self.hidden_size // self.num_heads self.rotary_emb = PositionRotaryEmbedding.static( config=config, dim=self.head_size, base=config.rope_theta, device=weights.device, ) self.softmax_scale = self.head_size**-0.5 if self.num_heads % weights.process_group.size() != 0: raise ValueError( f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} " f"and `num_shards`: {weights.process_group.size()}" ) self.num_heads = self.num_heads // weights.process_group.size() self.num_key_value_heads = ( config.num_key_value_heads // weights.process_group.size() ) self.query_key_value = load_attention(config, prefix, weights) self.o_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.o_proj", weights=weights, bias=False, ) self.num_groups = self.num_heads // self.num_key_value_heads self.kv_head_mapping = torch.arange( 0, self.num_key_value_heads, dtype=torch.int32, device=weights.device ).repeat_interleave(self.num_groups) def forward( self, hidden_states, cos, sin, cu_seqlen_prefill, kv_cache, block_tables, slots, input_lengths, max_s, prefill_cache_indices, ): qkv = self.query_key_value(hidden_states) query, kv = qkv.split( [ self.head_size * self.num_heads, 2 * self.head_size * self.num_key_value_heads, ], dim=1, ) query = query.view(-1, self.num_heads, self.head_size) kv = kv.view(-1, 2, self.num_key_value_heads, self.head_size) self.rotary_emb(query, torch.select(kv, dim=1, index=0), cos, sin) if prefill_cache_indices is not None: kv_to_cache = kv[prefill_cache_indices] else: kv_to_cache = kv paged_attention.reshape_and_cache( kv_to_cache[:, 0], kv_to_cache[:, 1], kv_cache[0], kv_cache[1], slots ) # output tensor attn_output = torch.empty_like(query) # Prefill if cu_seqlen_prefill is not None: # flash attention flash_attn.attention( query, torch.select(kv, dim=1, index=0), torch.select(kv, dim=1, index=1), attn_output, cu_seqlen_prefill, max_s, self.softmax_scale, window_size_left=self.max_past, ) # Decode else: paged_attention.attention( attn_output, query, kv_cache[0], kv_cache[1], self.kv_head_mapping, self.softmax_scale, block_tables, input_lengths, max_s, ) return self.o_proj(attn_output.view(-1, self.num_heads * self.head_size)) class MistralMLP(nn.Module): def __init__(self, prefix, config, weights): super().__init__() act = config.hidden_act self.act = ( ACT2FN[act] if "gelu" not in act else lambda x: torch.nn.functional.gelu( x, approximate=( "tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none" ), ) ) # Fuse gate and up proj self.gate_up_proj = TensorParallelColumnLinear.load_multi( config, prefixes=[f"{prefix}.gate_proj", f"{prefix}.up_proj"], weights=weights, dim=0, bias=False, ) self.down_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.down_proj", weights=weights, bias=False, ) self.intermediate_size = ( config.intermediate_size // 
weights.process_group.size() ) def forward(self, hidden_states): gate_up_states = self.gate_up_proj(hidden_states) gate_up_states = gate_up_states.view(-1, 2, self.intermediate_size) return self.down_proj(self.act(gate_up_states[:, 0]) * gate_up_states[:, 1]) class MistralLayer(nn.Module): def __init__(self, layer_id, config, weights): super().__init__() prefix = f"model.layers.{layer_id}" self.self_attn = MistralAttention( prefix=f"{prefix}.self_attn", config=config, weights=weights ) self.mlp = MistralMLP(prefix=f"{prefix}.mlp", config=config, weights=weights) self.input_layernorm = FastRMSNorm.load( prefix=f"{prefix}.input_layernorm", weights=weights, eps=config.rms_norm_eps ) self.post_attention_layernorm = FastRMSNorm.load( prefix=f"{prefix}.post_attention_layernorm", weights=weights, eps=config.rms_norm_eps, ) def forward( self, hidden_states, residual, cos, sin, cu_seqlen_prefill, kv_cache, block_tables, slots, input_lengths, max_s, prefill_cache_indices, ): normed_hidden_states, res = self.input_layernorm(hidden_states, residual) # Self Attention attn_output = self.self_attn( normed_hidden_states, cos, sin, cu_seqlen_prefill, kv_cache, block_tables, slots, input_lengths, max_s, prefill_cache_indices, ) # faster post attention rms norm normed_attn_res_output, attn_res = self.post_attention_layernorm( attn_output, res ) mlp_output = self.mlp(normed_attn_res_output) return mlp_output, attn_res class MistralModel(torch.nn.Module): def __init__(self, config, weights): super().__init__() process_group = weights.process_group self.tp_rank = process_group.rank() self.tp_world_size = process_group.size() self.embed_tokens = TensorParallelEmbedding( prefix="model.embed_tokens", weights=weights ) self.layers = nn.ModuleList( [ MistralLayer( layer_id, config, weights, ) for layer_id in range(config.num_hidden_layers) ] ) self.norm = FastRMSNorm.load( prefix="model.norm", weights=weights, eps=config.rms_norm_eps ) self.gradient_checkpointing = False self.head_size = self.layers[0].self_attn.head_size self.num_heads = self.layers[0].self_attn.num_heads self.num_key_value_heads = self.layers[0].self_attn.num_key_value_heads def forward( self, input_ids: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: List[Tuple[torch.Tensor, torch.Tensor]], block_tables: torch.Tensor, slots: torch.Tensor, input_lengths: torch.Tensor, max_s: int, true_max_s: int, prefill_cache_indices: Optional[torch.Tensor], ) -> torch.Tensor: hidden_states = self.embed_tokens(input_ids) # Get rotary cos and sin for this forward # Avoid to index in each layer cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin( position_ids, true_max_s, hidden_states.dtype ) residual = None for i, layer in enumerate(self.layers): hidden_states, residual = layer( hidden_states, residual, cos, sin, cu_seqlen_prefill, kv_cache[i], block_tables, slots, input_lengths, max_s, prefill_cache_indices, ) hidden_states, _ = self.norm(hidden_states, residual) return hidden_states class FlashMistralForCausalLM(torch.nn.Module): def __init__(self, config, weights): super().__init__() self.model = MistralModel(config, weights) self.lm_head = SpeculativeHead.load( config, prefix="lm_head", weights=weights, ) self.max_past = config.sliding_window self.max_past_tensor = ( torch.tensor(config.sliding_window, device=weights.device) if self.max_past is not None else None ) def forward( self, input_ids: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: 
List[Tuple[torch.Tensor, torch.Tensor]], block_tables: torch.Tensor, slots: torch.Tensor, input_lengths: torch.Tensor, max_s: int, prefill_cache_indices: Optional[torch.Tensor], lm_head_indices: Optional[torch.Tensor] = None, ) -> torch.Tensor: true_max_s = max_s if prefill_cache_indices is not None: # Slots also need to be sliced as it has the same size as the whole kv tensor slots = slots[prefill_cache_indices] elif self.max_past is not None: # Clamp in decode mode as paged attention requires clamped values whereas the flash attention # kernel requires the true values input_lengths = torch.clamp(input_lengths, max=self.max_past_tensor) hidden_states = self.model( input_ids, position_ids, cu_seqlen_prefill, kv_cache, block_tables, slots, input_lengths, max_s, true_max_s, prefill_cache_indices, ) if lm_head_indices is not None: hidden_states = hidden_states[lm_head_indices] logits = self.lm_head(hidden_states) return logits
text-generation-inference/server/text_generation_server/models/custom_modeling/flash_mistral_modeling.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/flash_mistral_modeling.py", "repo_id": "text-generation-inference", "token_count": 7562 }
217
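In MistralAttention above, grouped-query attention works by loading a fused q_proj/k_proj/v_proj weight, splitting the projection output into num_heads query heads and num_key_value_heads key/value heads, and mapping each group of query heads onto its shared KV head through kv_head_mapping built with repeat_interleave. A small PyTorch sketch of that split and mapping; the head counts and sequence length are toy values chosen for illustration, not Mistral's real configuration.

import torch

# Toy sizes, not the real Mistral configuration.
num_heads, num_key_value_heads, head_size, seq_len = 8, 2, 4, 3

# Fused QKV projection output for seq_len tokens.
qkv = torch.randn(seq_len, (num_heads + 2 * num_key_value_heads) * head_size)

# Same split as MistralAttention.forward.
query, kv = qkv.split(
    [head_size * num_heads, 2 * head_size * num_key_value_heads], dim=1
)
query = query.view(-1, num_heads, head_size)          # [seq_len, num_heads, head_size]
kv = kv.view(-1, 2, num_key_value_heads, head_size)   # [seq_len, 2, num_kv_heads, head_size]

# Each group of query heads shares one KV head.
num_groups = num_heads // num_key_value_heads
kv_head_mapping = torch.arange(0, num_key_value_heads, dtype=torch.int32).repeat_interleave(num_groups)
print(kv_head_mapping)  # tensor([0, 0, 0, 0, 1, 1, 1, 1], dtype=torch.int32)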
# coding=utf-8 # Copyright 2022 EleutherAI The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ PyTorch GPTNeoX model.""" from typing import Optional, Tuple, Union import os import torch import torch.distributed import torch.utils.checkpoint from torch import nn from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss from transformers.activations import ACT2FN from transformers.file_utils import ( add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, replace_return_docstrings, ) from transformers.modeling_outputs import ( BaseModelOutputWithPast, CausalLMOutputWithPast, QuestionAnsweringModelOutput, SequenceClassifierOutputWithPast, TokenClassifierOutput, ) from transformers.modeling_utils import PreTrainedModel from transformers import GPTNeoXConfig from loguru import logger from text_generation_server.utils.layers import ( TensorParallelColumnLinear, TensorParallelEmbedding, TensorParallelRowLinear, SpeculativeHead, ) CUSTOM_KERNELS_ENABLED = False if ( torch.cuda.is_available() and not os.environ.get("DISABLE_CUSTOM_KERNELS", "False") == "True" ): try: from custom_kernels import fused_attention_cuda CUSTOM_KERNELS_ENABLED = True except ImportError: pass if not CUSTOM_KERNELS_ENABLED: logger.warning("We're not using custom kernels.") def make_causal_mask( input_ids_shape: torch.Size, device: torch.device, past_key_values_length: int ) -> torch.BoolTensor: """ Make causal mask used for self-attention. """ batch_size, target_length = input_ids_shape mask = torch.ones( (target_length, target_length + past_key_values_length), dtype=torch.bool, device=device, ) mask = mask.triu(1 + past_key_values_length) expanded_mask = mask.unsqueeze(0).expand( batch_size, target_length, target_length + past_key_values_length ) return expanded_mask def expand_mask(mask: torch.Tensor, tgt_length: int) -> torch.BoolTensor: """ Expands attention_mask from `[batch_size, src_length]` to `[batch_size, 1, tgt_length, src_length]`. 
""" batch_size, src_length = mask.shape tgt_length = tgt_length if tgt_length is not None else src_length expanded_mask = ~(mask[:, None, :].to(torch.bool)) return expanded_mask.expand(batch_size, tgt_length, src_length) def prepare_attn_mask( attention_mask: torch.Tensor, input_shape: Tuple[int, int], past_key_values_length: int, ) -> torch.BoolTensor: # create causal mask # [batch_size, seq_length] -> [batch_size, tgt_length, src_length] combined_attention_mask = None device = attention_mask.device _, src_length = input_shape if src_length > 1: combined_attention_mask = make_causal_mask( input_shape, device=device, past_key_values_length=past_key_values_length ) # [batch_size, seq_length] -> [batch_size, tgt_length, src_length] expanded_attn_mask = expand_mask(attention_mask, tgt_length=src_length) combined_attention_mask = ( expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask | combined_attention_mask ) return combined_attention_mask class GPTNeoXPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ class GPTNeoXAttention(nn.Module): def __init__(self, config, prefix, weights): super().__init__() self.num_attention_heads = config.num_attention_heads self.hidden_size = config.hidden_size self.head_size = self.hidden_size // self.num_attention_heads self.rotary_ndims = int(self.head_size * config.rotary_pct) max_positions = config.max_position_embeddings # ??? TODO # self.register_buffer( # "bias", # torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view( # 1, 1, max_positions, max_positions # ), # ) # self.register_buffer("masked_bias", torch.tensor(-1e9)) self.rotary_emb = RotaryEmbedding( self.rotary_ndims, config.max_position_embeddings, base=config.rotary_emb_base, ) self.rotary_emb.inv_freq = nn.Parameter( weights.get_tensor(f"{prefix}.rotary_emb.inv_freq") ) self.inv_norm_factor = 1.0 / torch.sqrt( torch.tensor(self.head_size, dtype=torch.float32) ).to(torch.get_default_dtype()) if self.num_attention_heads % weights.process_group.size() != 0: raise ValueError( f"`num_attention_heads` must be divisible by `num_shards` " f"(got `num_attention_heads`: {self.num_attention_heads} " f"and `num_shards`: {weights.process_group.size()}" ) self.num_attention_heads = ( self.num_attention_heads // weights.process_group.size() ) self.query_key_value = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.query_key_value", weights=weights, bias=True ) self.dense = TensorParallelRowLinear.load( config, prefix=f"{prefix}.dense", weights=weights, bias=True ) def forward( self, hidden_states, position_ids, attention_mask, head_mask=None, layer_past=None, use_cache=False, output_attentions=False, ): has_layer_past = layer_past is not None # Compute QKV # Attention heads [batch, seq_len, hidden_size] # --> [batch, seq_len, (np * 3 * head_size)] qkv = self.query_key_value(hidden_states) # [batch, seq_len, (num_heads * 3 * head_size)] # --> [batch, seq_len, num_heads, 3 * head_size] new_qkv_shape = qkv.size()[:-1] + (self.num_attention_heads, 3 * self.head_size) qkv = qkv.view(*new_qkv_shape).permute(0, 2, 1, 3) # [batch, seq_len, num_attention_heads, 3 * head_size] --> 3 [batch, num_attention_heads, seq_len, head_size] query, key, value = qkv.split(self.head_size, -1) # Compute token offset for rotary embeddings (when decoding) seq_len = key.shape[-2] if has_layer_past: seq_len += layer_past[0].shape[-2] # Compute rotary embeddings on 
rotary_ndims query_rot = query[..., : self.rotary_ndims] key_rot = key[..., : self.rotary_ndims] query_rot, key_rot = self.rotary_emb(query_rot, key_rot, position_ids, seq_len) query[..., : self.rotary_ndims] = query_rot key[..., : self.rotary_ndims] = key_rot if CUSTOM_KERNELS_ENABLED: attn_output, present, attn_weights = fused_attention_cuda.forward( query, key, value, layer_past, attention_mask, head_mask, self.inv_norm_factor, self.num_attention_heads, use_cache, ) else: # Cache QKV values if has_layer_past: past_key = layer_past[0] past_value = layer_past[1] key = torch.cat((past_key, key), dim=-2) value = torch.cat((past_value, value), dim=-2) present = (key, value) if use_cache else None # Compute attention attn_output, attn_weights = self._attn( query, key, value, attention_mask, head_mask ) # Reshape outputs attn_output = self._merge_heads( attn_output, self.num_attention_heads, self.head_size ) attn_output = self.dense(attn_output) outputs = (attn_output, present) if output_attentions: outputs += (attn_weights,) return outputs @classmethod def _split_heads(cls, tensor, num_attention_heads, attn_head_size): """ Splits hidden dim into attn_head_size and num_attention_heads """ # tensor: [bs, seq_len, hidden_size] new_shape = tensor.size()[:-1] + (num_attention_heads, attn_head_size) # -> [bs, seq_len, num_attention_heads, attn_head_size] tensor = tensor.view(new_shape) # -> [bs, num_attention_heads, seq_len, attn_head_size] tensor = tensor.permute(0, 2, 1, 3) return tensor @classmethod def _merge_heads(cls, tensor, num_attention_heads, attn_head_size): """ Merges attn_head_size dim and num_attn_heads dim into hidden dim """ # tensor [bs, num_attention_heads, seq_len, attn_head_size] tensor = tensor.permute(0, 2, 1, 3).contiguous() # -> [bs, seq_len, num_attention_heads, attn_head_size] tensor = tensor.view( tensor.size(0), tensor.size(1), num_attention_heads * attn_head_size ) # -> [bs, seq_len, hidden_size] return tensor def _attn(self, query, key, value, attention_mask=None, head_mask=None): # q, k, v: [bs, num_attention_heads, seq_len, attn_head_size] # compute causal mask from causal mask buffer batch_size, num_attention_heads, query_length, attn_head_size = query.size() key_length = key.size(-2) query = query.reshape( batch_size * num_attention_heads, query_length, attn_head_size ) key = key.reshape(batch_size * num_attention_heads, key_length, attn_head_size) attn_scores = torch.zeros( 1, dtype=query.dtype, device=key.device, ).expand(batch_size * num_attention_heads, query_length, key_length) attn_scores = torch.baddbmm( attn_scores, query, key.transpose(1, 2), beta=1.0, alpha=self.inv_norm_factor, ) # cast attention scores to fp32, compute scaled softmax and cast back to initial dtype - [batch_size, num_heads, q_length, kv_length] input_dtype = attn_scores.dtype if input_dtype in [torch.float16, torch.bfloat16]: attn_scores = attn_scores.to(torch.float) attn_scores = torch.where( attention_mask, torch.finfo(attn_scores.dtype).min, attn_scores ) attn_scores = attn_scores.view( batch_size, num_attention_heads, query_length, key_length ) attn_weights = nn.functional.softmax(attn_scores, dim=-1) attn_weights = attn_weights.to(value.dtype) # Mask heads if we want to if head_mask is not None: attn_weights = attn_weights * head_mask attn_output = torch.matmul(attn_weights, value) return attn_output, attn_weights class RotaryEmbedding(torch.nn.Module): def __init__(self, dim, max_position_embeddings, base=10000, device=None): super().__init__() self.true_inv_freq = 1.0 / ( base ** 
(torch.arange(0, dim, 2).float().to(device) / dim) ) self.register_buffer("inv_freq", self.true_inv_freq) # Build here to make `torch.jit.trace` work. self.max_seq_len_cached = max_position_embeddings self.cos_cached = None self.sin_cached = None @staticmethod def rotate_half(x): """Rotates half the hidden dims of the input.""" x1 = x[..., : x.shape[-1] // 2] x2 = x[..., x.shape[-1] // 2 :] return torch.cat((-x2, x1), dim=-1) @staticmethod def _create_cos_sin(inv_freq, max_position_embeddings, dtype, device): t = torch.arange( max_position_embeddings, device=inv_freq.device, dtype=inv_freq.dtype ) freqs = torch.einsum("i,j->ij", t, inv_freq) # Different from paper, but it uses a different permutation in order to obtain the same calculation emb = torch.cat((freqs, freqs), dim=-1) return emb.cos().to(device).to(dtype), emb.sin().to(device).to(dtype) def forward(self, q, k, position_ids, seq_len=None): # x: [bs, num_attention_heads, seq_len, head_size] if ( seq_len > self.max_seq_len_cached or self.cos_cached is None or self.sin_cached is None ): if seq_len > self.max_seq_len_cached: self.max_seq_len_cached = seq_len self.cos_cached, self.sin_cached = self._create_cos_sin( self.true_inv_freq, self.max_seq_len_cached, q.dtype, q.device ) return rotary_forward(q, k, self.cos_cached, self.sin_cached, position_ids) @torch.jit.script def rotary_forward(q, k, cos, sin, position_ids): cos = cos[position_ids].unsqueeze(1) sin = sin[position_ids].unsqueeze(1) chunk_size = q.shape[-1] // 2 q1, q2 = q.split(chunk_size, -1) q_rotated = torch.cat((-q2, q1), dim=-1) k1, k2 = k.split(chunk_size, -1) k_rotated = torch.cat((-k2, k1), dim=-1) q_embed = (q * cos) + (q_rotated * sin) k_embed = (k * cos) + (k_rotated * sin) return q_embed, k_embed class GPTNeoXMLP(nn.Module): def __init__(self, config, prefix, weights): super().__init__() self.act = ( ACT2FN[config.hidden_act] if "gelu_fast" not in config.hidden_act else lambda x: torch.nn.functional.gelu(x, approximate="tanh") ) self.dense_h_to_4h = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.dense_h_to_4h", weights=weights, bias=True ) self.dense_4h_to_h = TensorParallelRowLinear.load( config, prefix=f"{prefix}.dense_4h_to_h", weights=weights, bias=True ) def forward(self, hidden_states): hidden_states = self.dense_h_to_4h(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.dense_4h_to_h(hidden_states) return hidden_states class GPTNeoXLayer(nn.Module): def __init__(self, layer_id, config, weights): super().__init__() self.use_parallel_residual = config.use_parallel_residual self.input_layernorm = nn.LayerNorm.load( prefix=f"gpt_neox.layers.{layer_id}.input_layernorm", weights=weights, eps=config.layer_norm_eps, ) self.post_attention_layernorm = nn.LayerNorm.load( prefix=f"gpt_neox.layers.{layer_id}.post_attention_layernorm", weights=weights, eps=config.layer_norm_eps, ) self.attention = GPTNeoXAttention( config, prefix=f"gpt_neox.layers.{layer_id}.attention", weights=weights ) self.mlp = GPTNeoXMLP( config, prefix=f"gpt_neox.layers.{layer_id}.mlp", weights=weights ) def forward( self, hidden_states, position_ids, attention_mask=None, head_mask=None, use_cache=False, layer_past=None, output_attentions=False, ): attention_layer_outputs = self.attention( self.input_layernorm(hidden_states), attention_mask=attention_mask, position_ids=position_ids, layer_past=layer_past, head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, ) attn_output = attention_layer_outputs[ 0 ] # output_attn: attn_output, 
present, (attn_weights) outputs = attention_layer_outputs[1:] if self.use_parallel_residual: # pseudocode: # x = x + attn(ln1(x)) + mlp(ln2(x)) mlp_output = self.mlp(self.post_attention_layernorm(hidden_states)) hidden_states = mlp_output + attn_output + hidden_states else: # pseudocode: # x = x + attn(ln1(x)) # x = x + mlp(ln2(x)) attn_output = attn_output + hidden_states mlp_output = self.mlp(self.post_attention_layernorm(attn_output)) hidden_states = mlp_output + attn_output if use_cache: outputs = ( hidden_states, ) + outputs # hidden_states, present, (attn_weights) else: outputs = (hidden_states,) + outputs[1:] # hidden_states, (attn_weights) return outputs class GPTNeoXModel(GPTNeoXPreTrainedModel): def __init__(self, config, weights): super().__init__(config) self.config = config self.num_attention_heads = config.num_attention_heads self.embed_in = TensorParallelEmbedding( prefix="gpt_neox.embed_in", weights=weights ) self.layers = nn.ModuleList( [ GPTNeoXLayer(layer_id, config, weights) for layer_id in range(config.num_hidden_layers) ] ) self.final_layer_norm = nn.LayerNorm.load( prefix="gpt_neox.final_layer_norm", weights=weights, eps=config.layer_norm_eps, ) self.tp_world_size = weights.process_group.size() def forward( self, input_ids: Optional[torch.LongTensor] = None, position_ids=None, attention_mask: Optional[torch.FloatTensor] = None, head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPast]: r""" past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). 
""" output_attentions = ( output_attentions if output_attentions is not None else self.config.output_attentions ) output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = ( return_dict if return_dict is not None else self.config.use_return_dict ) use_cache = use_cache if use_cache is not None else self.config.use_cache if input_ids is not None and inputs_embeds is not None: raise ValueError( "You cannot specify both input_ids and inputs_embeds at the same time" ) elif input_ids is not None: input_shape = input_ids.size() elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") batch_size, seq_length = input_shape if past_key_values is None: past_length = 0 past_key_values = tuple([None] * self.config.num_hidden_layers) else: past_length = past_key_values[0][0].size(-2) if position_ids is None: device = input_ids.device if input_ids is not None else inputs_embeds.device position_ids = torch.arange( past_length, seq_length + past_length, dtype=torch.long, device=device ) position_ids = position_ids.unsqueeze(0).view(-1, seq_length) else: position_ids = position_ids.view(-1, seq_length).long() if inputs_embeds is None: inputs_embeds = self.embed_in(input_ids) hidden_states = inputs_embeds # Attention mask. seq_length_with_past = seq_length past_key_values_length = 0 if past_key_values[0] is not None: past_key_values_length = past_key_values[0][0].shape[-1] seq_length_with_past = seq_length_with_past + past_key_values_length if attention_mask is None: attention_mask = torch.ones( (batch_size, seq_length_with_past), device=hidden_states.device ) else: attention_mask = attention_mask.to(hidden_states.device) causal_mask = prepare_attn_mask( attention_mask, input_shape=(batch_size, seq_length), past_key_values_length=past_key_values_length, ) assert self.num_attention_heads % self.tp_world_size == 0 block_size = self.num_attention_heads // self.tp_world_size causal_mask = torch.repeat_interleave(causal_mask, block_size, dim=0) # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) presents = () if use_cache else None all_attentions = () if output_attentions else None all_hidden_states = () if output_hidden_states else None for i, (layer, layer_past) in enumerate(zip(self.layers, past_key_values)): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) outputs = layer( hidden_states, position_ids=position_ids, attention_mask=causal_mask, head_mask=head_mask[i], layer_past=layer_past, use_cache=use_cache, output_attentions=output_attentions, ) hidden_states = outputs[0] if use_cache is True: presents = presents + (outputs[1],) if output_attentions: all_attentions = all_attentions + (outputs[2 if use_cache else 1],) hidden_states = self.final_layer_norm(hidden_states) # Add last hidden state if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [hidden_states, presents, all_hidden_states, all_attentions] if v is not None ) return BaseModelOutputWithPast( last_hidden_state=hidden_states, past_key_values=presents, 
hidden_states=all_hidden_states, attentions=all_attentions, ) class GPTNeoxForCausalLM(GPTNeoXPreTrainedModel): _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] def __init__(self, config, weights): super().__init__(config) self.gpt_neox = GPTNeoXModel(config, weights) self.embed_out = SpeculativeHead.load( config, prefix="embed_out", weights=weights ) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, CausalLMOutputWithPast]: r""" past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels n `[0, ..., config.vocab_size]`. use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). 
Returns: Example: ```python >>> from transformers import AutoTokenizer, GPTNeoXForCausalLM, GPTNeoXConfig >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") >>> config = GPTNeoXConfig.from_pretrained("EleutherAI/gpt-neox-20b") >>> config.is_decoder = True >>> model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", config=config) >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.logits ```""" return_dict = ( return_dict if return_dict is not None else self.config.use_return_dict ) outputs = self.gpt_neox( input_ids, attention_mask=attention_mask, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] lm_logits, speculative_logits = self.embed_out(hidden_states) lm_loss = None if labels is not None: # move labels to correct device to enable model parallelism labels = labels.to(lm_logits.device) # we are doing next-token prediction; shift prediction scores and input ids by one shift_logits = lm_logits[:, :-1, :].contiguous() labels = labels[:, 1:].contiguous() loss_fct = CrossEntropyLoss() lm_loss = loss_fct( shift_logits.view(-1, shift_logits.size(-1)), labels.view(-1) ) if not return_dict: output = (lm_logits,) + outputs[1:] return ((lm_loss,) + output) if lm_loss is not None else output return ( CausalLMOutputWithPast( loss=lm_loss, logits=lm_logits, past_key_values=outputs.past_key_values, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ), speculative_logits, ) def prepare_inputs_for_generation( self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs, ): input_shape = input_ids.shape # cut decoder_input_ids if past is used if past_key_values and past_key_values[0] is not None: input_ids = input_ids[:, -1:] position_ids = kwargs.get("position_ids", None) if attention_mask is not None and position_ids is None: # create position_ids on the fly for batch generation position_ids = attention_mask.long().cumsum(-1) - 1 position_ids.masked_fill_(attention_mask == 0, 1) if past_key_values: position_ids = position_ids[:, -1].unsqueeze(-1) # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly if attention_mask is None: attention_mask = input_ids.new_ones(input_shape) # if `inputs_embeds` are passed, we only want to use them in the 1st generation step if inputs_embeds is not None and past_key_values is None: model_inputs = {"inputs_embeds": inputs_embeds} else: model_inputs = {"input_ids": input_ids} model_inputs.update( { "attention_mask": attention_mask, "past_key_values": past_key_values, "position_ids": position_ids, } ) return model_inputs def _reorder_cache(self, past_key_values, beam_idx): reordered_past = () for layer_past in past_key_values: reordered_past += ( tuple( past_state.index_select(0, beam_idx) for past_state in layer_past[:2] ) + layer_past[2:], ) return reordered_past
text-generation-inference/server/text_generation_server/models/custom_modeling/neox_modeling.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/neox_modeling.py", "repo_id": "text-generation-inference", "token_count": 14336 }
218
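The RotaryEmbedding / rotary_forward pair above caches cos/sin tables built from the inverse frequencies and rotates half of each head dimension before attention. A compact PyTorch sketch of the same rotation, written independently of the module above; the base of 10000 matches the default, while the tensor shapes are simplified toy values.

import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Swap and negate the two halves of the last dimension.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary(q, k, position_ids, dim, base=10000.0):
    # Build cos/sin tables for the requested positions, then rotate q and k.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    freqs = torch.einsum("i,j->ij", position_ids.float(), inv_freq)  # [seq, dim/2]
    emb = torch.cat((freqs, freqs), dim=-1)                          # [seq, dim]
    cos, sin = emb.cos(), emb.sin()
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin

# Toy usage: four positions, head size 8.
q = torch.randn(4, 8)
k = torch.randn(4, 8)
q_rot, k_rot = apply_rotary(q, k, torch.arange(4), dim=8)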
import torch
import os

MEM_POOL = torch.cuda.graph_pool_handle()

# This is overridden by the cli
ENABLE_CUDA_GRAPHS = os.getenv("ENABLE_CUDA_GRAPHS", "false").lower() in {"1", "true"}
text-generation-inference/server/text_generation_server/models/globals.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/globals.py", "repo_id": "text-generation-inference", "token_count": 73 }
219
import grpc
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.grpc._aio_server import (
    OpenTelemetryAioServerInterceptor,
)
from opentelemetry.semconv.trace import SpanAttributes
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
)


class UDSOpenTelemetryAioServerInterceptor(OpenTelemetryAioServerInterceptor):
    def __init__(self):
        super().__init__(trace.get_tracer(__name__))

    def _start_span(self, handler_call_details, context, set_status_on_exception=False):
        """
        Rewrite _start_span method to support Unix Domain Socket gRPC contexts
        """
        # standard attributes
        attributes = {
            SpanAttributes.RPC_SYSTEM: "grpc",
            SpanAttributes.RPC_GRPC_STATUS_CODE: grpc.StatusCode.OK.value[0],
        }

        # if we have details about the call, split into service and method
        if handler_call_details.method:
            service, method = handler_call_details.method.lstrip("/").split("/", 1)
            attributes.update(
                {
                    SpanAttributes.RPC_METHOD: method,
                    SpanAttributes.RPC_SERVICE: service,
                }
            )

        # add some attributes from the metadata
        metadata = dict(context.invocation_metadata())
        if "user-agent" in metadata:
            attributes["rpc.user_agent"] = metadata["user-agent"]

        # We use gRPC over a UNIX socket
        attributes.update({SpanAttributes.NET_TRANSPORT: "unix"})

        return self._tracer.start_as_current_span(
            name=handler_call_details.method,
            kind=trace.SpanKind.SERVER,
            attributes=attributes,
            set_status_on_exception=set_status_on_exception,
        )


def setup_tracing(shard: int, otlp_endpoint: str):
    resource = Resource.create(
        attributes={"service.name": f"text-generation-inference.server-{shard}"}
    )
    span_exporter = OTLPSpanExporter(endpoint=otlp_endpoint, insecure=True)
    span_processor = BatchSpanProcessor(span_exporter)

    trace.set_tracer_provider(TracerProvider(resource=resource))
    trace.get_tracer_provider().add_span_processor(span_processor)
text-generation-inference/server/text_generation_server/tracing.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/tracing.py", "repo_id": "text-generation-inference", "token_count": 985 }
220
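The tracing module above only defines the interceptor and setup_tracing; attaching them to a server happens elsewhere in the project. A hypothetical sketch of how they could be wired into a grpc.aio server over a Unix domain socket; the socket path, OTLP endpoint, and the commented-out service registration are placeholder assumptions, not the project's actual startup code.

import asyncio

import grpc

from text_generation_server.tracing import (
    UDSOpenTelemetryAioServerInterceptor,
    setup_tracing,
)

async def serve(uds_path: str = "/tmp/tgi-server-0.sock") -> None:
    # Export spans to a collector listening on the given OTLP endpoint (assumed address).
    setup_tracing(shard=0, otlp_endpoint="localhost:4317")

    server = grpc.aio.server(interceptors=[UDSOpenTelemetryAioServerInterceptor()])
    # add_TextGenerationServiceServicer_to_server(servicer, server)  # placeholder registration
    server.add_insecure_port(f"unix://{uds_path}")

    await server.start()
    await server.wait_for_termination()

# asyncio.run(serve())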
import math import torch from loguru import logger from typing import Dict, Union from text_generation_server.pb.generate_pb2 import GrammarType from outlines.fsm.fsm import RegexFSM from outlines.fsm.json_schema import build_regex_from_object from functools import lru_cache from typing import List, Optional, DefaultDict import time from transformers import ( LogitsWarper, LogitsProcessor, TemperatureLogitsWarper, TopKLogitsWarper, TopPLogitsWarper, TypicalLogitsWarper, ) mempool = torch.cuda.graph_pool_handle() if torch.cuda.is_available() else None class StaticWarper: def __init__( self, temperature=1.0, top_k=None, top_p=None, typical_p=None, ): self.warpers = [] if temperature is not None and temperature != 1.0: temperature = float(temperature) self.warpers.append(TemperatureLogitsWarper(temperature)) if top_k is not None and top_k != 0: self.warpers.append(TopKLogitsWarper(top_k=top_k)) if top_p is not None and top_p < 1.0: self.warpers.append(TopPLogitsWarper(top_p=top_p)) if typical_p is not None and typical_p < 1.0: self.warpers.append(TypicalLogitsWarper(mass=typical_p)) self.cuda_graph = None self.static_scores = None self.static_warped_scores = None self.static_next_logprob = None def __call__(self, scores): if torch.cuda.is_available(): if self.cuda_graph is None: self.static_scores = scores self.cuda_graph = torch.cuda.CUDAGraph() with torch.cuda.graph(self.cuda_graph, pool=mempool): local_scores = self.static_scores for warper in self.warpers: local_scores = warper(None, local_scores) self.static_warped_scores = local_scores # Compute logprobs self.static_next_logprob = torch.log_softmax( self.static_warped_scores, -1 ) self.static_scores.copy_(scores) self.cuda_graph.replay() return self.static_warped_scores, self.static_next_logprob # CPU branch for warper in self.warpers: scores = warper(None, scores) return scores, torch.log_softmax(scores, -1) @lru_cache(10) def static_warper( temperature: Optional[float], top_k: Optional[int], top_p: Optional[float], typical_p: Optional[float], ) -> StaticWarper: return StaticWarper( temperature=temperature, top_k=top_k, top_p=top_p, typical_p=typical_p ) class HeterogeneousRepetitionPenaltyLogitsProcessor(LogitsProcessor): r""" [`LogitsProcessor`] enforcing an exponential penalty on repeated sequences. This version allows for a separate value for each sample and runs inplace when possible. It doesn't validate inputs. Args: repetition_penalty (`List[float]`): The parameter for repetition penalty. 1.0 means no penalty. See [this paper](https://arxiv.org/pdf/1909.05858.pdf) for more details. """ def __init__(self, penalty: List[float], dtype: torch.dtype, device: torch.device): self.penalty = penalty self.penalty_tensor = torch.tensor( penalty, dtype=dtype, device=device ).unsqueeze(1) def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor: score = torch.gather(scores, 1, input_ids) # if score < 0 then repetition penalty has to be multiplied to reduce the previous token probability score = torch.where( score < 0, score * self.penalty_tensor, score / self.penalty_tensor ) scores.scatter_(1, input_ids, score) return scores def filter(self, indices): self.penalty = [self.penalty[i] for i in indices] if any([x != 1.0 for x in self.penalty]): self.penalty_tensor = self.penalty_tensor[indices] return self return None class FrequencyPenaltyLogitsProcessor(LogitsProcessor): r""" Frequency penalty as defined by OpenAI Args: penalty (`float`): The parameter for frequency penalty. 0.0 means no penalty. 
""" def __init__(self, penalty: float): self.penalty = penalty def __call__( self, input_ids: torch.LongTensor, scores: torch.FloatTensor ) -> torch.FloatTensor: score = torch.gather(scores, 1, input_ids) # if score < 0 then penalty has to be multiplied to reduce the previous token probability score = -torch.where(score < 0, score * self.penalty, score / self.penalty) return scores.scatter_add_(1, input_ids, score) class HeterogeneousFrequencyPenaltyLogitsProcessor(LogitsProcessor): r""" Frequency penalty as defined by OpenAI Args: frequency_penalty (`List[float]`): The parameter for frequency penalty. 0.0 means no penalty. """ def __init__(self, penalty: List[float], dtype: torch.dtype, device: torch.device): self.penalty = penalty self.penalty_tensor = torch.tensor( penalty, dtype=dtype, device=device ).unsqueeze(1) def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor: score = torch.gather(scores, 1, input_ids) # if score < 0 then penalty has to be multiplied to reduce the previous token probability score = -torch.where( score < 0, score * self.penalty_tensor, score / self.penalty_tensor ) return scores.scatter_add_(1, input_ids, score) def filter(self, indices): self.penalty = [self.penalty[i] for i in indices] if any([x != 0.0 for x in self.penalty]): self.penalty_tensor = self.penalty_tensor[indices] return self return None class HeterogeneousTemperatureLogitsWarper: r""" [`LogitsWarper`] for temperature (exponential scaling output probability distribution). This version allows for a separate value for each sample and runs inplace when possible. It doesn't validate inputs. Args: temperature (`float`): The value used to module the logits distribution. """ def __init__( self, temperature: List[float], dtype: torch.dtype, device: torch.device ): self.temperature = temperature self.temperature_tensor = torch.tensor( temperature, dtype=dtype, device=device ).unsqueeze(1) def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor: scores.div_(self.temperature_tensor) return scores def filter(self, indices): self.temperature = [self.temperature[i] for i in indices] if any([x != 1.0 for x in self.temperature]): self.temperature_tensor = self.temperature_tensor[indices] return self return None class HeterogeneousTopPLogitsWarper(LogitsWarper): """ [`LogitsWarper`] that performs top-p, i.e. restricting to top tokens summing to prob_cut_off <= prob_cut_off. This version allows for a separate value for each sample and runs inplace when possible. It doesn't validate inputs. Args: top_p (`float`): If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or higher are kept for generation. filter_value (`float`, *optional*, defaults to `-float("Inf")`): All filtered values will be set to this float value. min_tokens_to_keep (`int`, *optional*, defaults to 1): Minimum number of tokens that cannot be filtered. 
""" def __init__( self, top_p: List[float], dtype: torch.dtype, device: torch.device, filter_value: float = -math.inf, min_tokens_to_keep: int = 1, ): self.top_p = top_p self.top_p_opposite = 1 - torch.tensor( top_p, dtype=dtype, device=device ).unsqueeze(1) self.filter_value = filter_value self.min_tokens_to_keep = min_tokens_to_keep def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor: sorted_logits, sorted_indices = torch.sort(scores, descending=False) probs = sorted_logits.softmax(dim=-1) # This is way faster for some reason for i in range(probs.shape[0]): probs[i] = probs[i].cumsum(dim=-1) # Remove tokens with cumulative top_p above the threshold (token with 0 are kept) sorted_indices_to_remove = probs <= self.top_p_opposite # Keep at least min_tokens_to_keep sorted_indices_to_remove[..., -self.min_tokens_to_keep :] = 0 # scatter sorted tensors to original indexing indices_to_remove = sorted_indices_to_remove.scatter( 1, sorted_indices, sorted_indices_to_remove ) warped_scores = scores.masked_fill_(indices_to_remove, self.filter_value) return warped_scores def filter(self, indices): self.top_p = [self.top_p[i] for i in indices] if any([x < 1.0 for x in self.top_p]): self.top_p_opposite = self.top_p_opposite[indices] return self return None class HeterogeneousTopKLogitsWarper(LogitsWarper): r""" [`LogitsWarper`] that performs top-k, i.e. restricting to the k highest probability elements. This version allows for a separate value for each sample and runs inplace when possible. It doesn't validate inputs. Args: top_k (`int`): The number of highest probability vocabulary tokens to keep for top-k-filtering. filter_value (`float`, *optional*, defaults to `-float("Inf")`): All filtered values will be set to this float value. min_tokens_to_keep (`int`, *optional*, defaults to 1): Minimum number of tokens that cannot be filtered. 
""" def __init__( self, top_k: List[int], device: torch.device, filter_value: float = -math.inf, min_tokens_to_keep: int = 1, ): self.top_k = top_k self.max_top_k = max(top_k) # value - 1 as we will use top_k to index and python uses 0 based numbering self.top_k_tensor = torch.tensor( [max(x - 1, min_tokens_to_keep - 1) for x in top_k], dtype=torch.int64, device=device, ).unsqueeze(1) # 0 is a special value that disables top_k warping for this member of the batch disabled = [x == 0 for x in top_k] if any(disabled): self.top_k_disabled_mask = torch.tensor( disabled, dtype=torch.bool, device=device ).view(-1, 1) else: self.top_k_disabled_mask = None self.filter_value = filter_value def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor: # If max_top_k is superior to the vocab, we need to clamp or the warper will fail if scores.size(-1) < self.max_top_k: max_top_k = scores.size(-1) top_k = torch.clamp_max(self.top_k_tensor, max_top_k) else: max_top_k = self.max_top_k top_k = self.top_k_tensor # Get the kth score for each member of the batch kth_scores = torch.gather(torch.topk(scores, max_top_k)[0], 1, top_k) # Mask member of kth_scores that do not want to use top_k warping if self.top_k_disabled_mask is not None: kth_scores.masked_fill_(self.top_k_disabled_mask, self.filter_value) # Remove all tokens with a probability less than the last token of the top-k indices_to_remove = scores < kth_scores scores.masked_fill_(indices_to_remove, self.filter_value) return scores def filter(self, indices): self.top_k = [self.top_k[i] for i in indices] disabled = [x == 0 for x in self.top_k] if not all(disabled): self.top_k_tensor = self.top_k_tensor[indices] self.max_top_k = max(self.top_k) if self.top_k_disabled_mask is not None: self.top_k_disabled_mask = ( self.top_k_disabled_mask[indices] if any(disabled) else None ) return self return None class HeterogeneousTypicalLogitsWarper(LogitsWarper): r""" [`LogitsWarper`] that performs typical decoding. See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information. This version allows for a separate value for each sample and runs inplace when possible. It doesn't validate inputs. Args: mass (`float`): Value of typical_p between 0 and 1 inclusive, defaults to 0.9. filter_value (`float`, *optional*, defaults to `-float("Inf")`): All filtered values will be set to this float value. min_tokens_to_keep (`int`, *optional*, defaults to 1): Minimum number of tokens that cannot be filtered. 
""" def __init__( self, mass: List[float], dtype: torch.dtype, device: torch.device, filter_value: float = -math.inf, min_tokens_to_keep: int = 1, ): self.mass = mass self.mass_tensor = torch.tensor(mass, dtype=dtype, device=device).unsqueeze(1) # 1 is a special value that disables typical_p warping for this member of the batch disabled = [x == 1.0 for x in mass] if any(disabled): self.disabled_mask = torch.tensor(disabled, dtype=torch.bool, device=device) else: self.disabled_mask = None self.filter_value = filter_value self.min_tokens_to_keep = min_tokens_to_keep def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor: # calculate entropy normalized = torch.nn.functional.log_softmax(scores, dim=-1) p = torch.exp(normalized) ent = -(normalized * p).nansum(-1, keepdim=True) # shift and sort shifted_scores = torch.abs((-normalized) - ent) sorted_scores, sorted_indices = torch.sort(shifted_scores, descending=False) sorted_logits = scores.gather(-1, sorted_indices) probs = sorted_logits.softmax(dim=-1) # This is way faster for some reason for i in range(probs.shape[0]): probs[i] = probs[i].cumsum(dim=-1) # Remove tokens with cumulative mass above the threshold last_ind = (probs < self.mass_tensor).sum(dim=1) last_ind[last_ind < 0] = 0 if self.disabled_mask is not None: last_ind.masked_fill_(self.disabled_mask, scores.shape[-1] - 1) sorted_indices_to_remove = sorted_scores > sorted_scores.gather( 1, last_ind.view(-1, 1) ) if self.min_tokens_to_keep > 1: # Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below) sorted_indices_to_remove[..., : self.min_tokens_to_keep] = 0 indices_to_remove = sorted_indices_to_remove.scatter( 1, sorted_indices, sorted_indices_to_remove ) warped_scores = scores.masked_fill_(indices_to_remove, self.filter_value) return warped_scores def filter(self, indices): self.mass = [self.mass[i] for i in indices] disabled = [x == 1.0 for x in self.mass] if not all(disabled): self.mass_tensor = self.mass_tensor[indices] if self.disabled_mask is not None: self.disabled_mask = ( self.disabled_mask[indices] if any(disabled) else None ) return self return None class HeterogeneousProcessorWrapper(LogitsProcessor): r""" A wrapper for logit warpers or processors without heterogeneous parameter support. Args: processors (`Dict[int, Union[LogitsProcessor, LogitsWarper]]`): A mapping of sample indices to logit warpers or processors, to be run sequentially. 
""" def __init__( self, processors: Dict[int, Union[LogitsProcessor, LogitsWarper]], ): self.processors = processors def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor: for i, processor in self.processors.items(): scores[i : i + 1] = processor(input_ids[i : i + 1], scores[i : i + 1]) return scores def filter(self, indices): new_processors = {} for i, idx in enumerate(indices): if idx in self.processors: new_processors[i] = self.processors[idx] if new_processors: self.processors = new_processors return self return None class GrammarLogitProcessor(LogitsProcessor): fsm_state: DefaultDict[int, int] fsm: RegexFSM def __init__(self, tokenizer, device, grammar, grammar_type): self.device = device self.tokenizer = GrammarLogitProcessor._cached_adapt_tokenizer(tokenizer) self.fsm = GrammarLogitProcessor._cached_compile_fsm( grammar_type, grammar, self.tokenizer ) def __call__( self, logits: torch.Tensor, fsm_grammar_state: int, ): if fsm_grammar_state == -1 or self.fsm is None: return logits allowed_tokens = self.fsm.allowed_token_ids(fsm_grammar_state) mask = torch.full_like(logits, -math.inf) mask[:, allowed_tokens] = 0 biased_scores = logits + mask return biased_scores def advance(self, next_token_id, fsm_grammar_state): return GrammarLogitProcessor._advance( next_token_id, fsm_grammar_state, self.fsm ) @staticmethod def _advance(next_token_id, fsm_grammar_state, fsm): if fsm_grammar_state == -1: return fsm_grammar_state return fsm.next_state(fsm_grammar_state, next_token_id) # TODO: move grammar compilation into the router @staticmethod @lru_cache(maxsize=32, typed=True) def _cached_compile_fsm(grammar_type, schema, tokenizer): start_time = time.time() if grammar_type == GrammarType.GRAMMAR_TYPE_JSON: schema = build_regex_from_object(schema) elif grammar_type == GrammarType.GRAMMAR_TYPE_REGEX: pass # schema is already a regex just here for clarity fsm = RegexFSM(schema, tokenizer) logger.debug(f"Compiled FSM in {time.time() - start_time:.2f}s") return fsm @staticmethod @lru_cache(maxsize=32, typed=True) def _cached_adapt_tokenizer(tokenizer): """Adapt tokenizer to work with the FSM. The API of Outlines tokenizers is slightly different to that of `transformers`. In addition we need to handle the missing spaces to Llama's tokenizer to be able to compile FSMs for this model. 
""" start_time = time.time() tokenizer.vocabulary = tokenizer.get_vocab() tokenizer.special_tokens = set(tokenizer.all_special_tokens) def convert_token_to_string(token: str) -> str: from transformers.file_utils import SPIECE_UNDERLINE string = tokenizer.convert_tokens_to_string([token]) # A hack to handle missing spaces to HF's Llama tokenizers if token.startswith(SPIECE_UNDERLINE) or token == "<0x20>": return " " + string return string tokenizer.convert_token_to_string = convert_token_to_string logger.debug(f"Adapted tokenizer in {time.time() - start_time:.2f}s") return tokenizer class HeterogeneousGrammarLogitProcessor(LogitsProcessor): def __init__(self, tokenizer, device, grammars, grammar_types): self.device = device self.tokenizer = GrammarLogitProcessor._cached_adapt_tokenizer(tokenizer) self.fsms = [] for grammar, grammar_type in zip(grammars, grammar_types): fsm = GrammarLogitProcessor._cached_compile_fsm( grammar_type, grammar, self.tokenizer ) self.fsms.append(fsm) def __call__( self, logits: torch.Tensor, fsm_grammar_states: List[int], ): mask = torch.full_like(logits, -math.inf) for i in range(logits.shape[0]): fsm = self.fsms[i] if fsm_grammar_states[i] == -1 or fsm is None: continue allowed_tokens = fsm.allowed_token_ids(fsm_grammar_states[i]) mask[i, allowed_tokens] = 0 logits += mask return logits def advance_batch(self, next_token_ids, fsm_grammar_states): return [ GrammarLogitProcessor._advance( next_token_ids[i], fsm_grammar_states[i], self.fsms[i] ) for i in range(len(next_token_ids)) ] def advance_at_index(self, next_token_id, fsm_grammar_state, index): return GrammarLogitProcessor._advance( next_token_id, fsm_grammar_state, self.fsms[index] ) def filter(self, indices): new_fsms = [] for i in indices: new_fsms.append(self.fsms[i]) self.fsms = new_fsms return self
text-generation-inference/server/text_generation_server/utils/logits_process.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/logits_process.py", "repo_id": "text-generation-inference", "token_count": 9502 }
221
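The logits_process.py row above constrains generation by masking scores so that only tokens the grammar FSM allows in the current state survive. As a rough, self-contained illustration of just that masking step (not the TGI/Outlines API itself — `mask_to_allowed` and the toy vocabulary size are invented for the example, and PyTorch is assumed to be installed):

```python
import math

import torch


def mask_to_allowed(logits: torch.Tensor, allowed_token_ids) -> torch.Tensor:
    # Same idea as GrammarLogitProcessor.__call__ above: keep the scores of
    # tokens the grammar allows, push every other token to -inf.
    mask = torch.full_like(logits, -math.inf)
    mask[:, allowed_token_ids] = 0
    return logits + mask


logits = torch.randn(1, 10)                  # batch of 1, toy vocabulary of 10 tokens
biased = mask_to_allowed(logits, [2, 5, 7])  # ids a hypothetical FSM state would allow
print(biased.argmax(dim=-1))                 # always one of 2, 5 or 7
```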
import { PaddingDirection, WordPiece, punctuationPreTokenizer, sequencePreTokenizer, whitespacePreTokenizer, Encoding, EncodeOptions, Tokenizer, } from '../../' import { InputSequence } from '../../types' const MOCKS_DIR = __dirname + '/__mocks__' describe('Can modify pretokenizers on the fly', () => { let encoding: Encoding let encode: ( sequence: InputSequence, pair?: InputSequence | null, options?: EncodeOptions | null, ) => Promise<Encoding> let tokenizer: Tokenizer beforeAll(async () => { const model = await WordPiece.fromFile(`${MOCKS_DIR}/vocab.txt`, { continuingSubwordPrefix: '##', }) tokenizer = new Tokenizer(model) encode = tokenizer.encode.bind(tokenizer) }) it('Can change pre tokenizer', async () => { const input = 'my name is john.!?' tokenizer.setPreTokenizer(sequencePreTokenizer([whitespacePreTokenizer()])) encoding = await encode(input, null) expect(encoding.getIds()).toEqual([0, 1, 2, 3, 4, 8]) // Change pre tokenizer tokenizer.setPreTokenizer(sequencePreTokenizer([whitespacePreTokenizer(), punctuationPreTokenizer()])) encoding = await encode(input, null) expect(encoding.getIds()).toEqual([0, 1, 2, 3, 4, 8, 8, 8]) }) }) describe('Encoding', () => { const originalString = 'my name is john' const originalPairString = 'what is yours?' let encoding: Encoding let encodingDual: Encoding let encode: ( sequence: InputSequence, pair?: InputSequence | null, options?: EncodeOptions | null, ) => Promise<Encoding> beforeAll(async () => { const model = await WordPiece.fromFile(`${MOCKS_DIR}/vocab.txt`, { continuingSubwordPrefix: '##', }) const tokenizer = new Tokenizer(model) tokenizer.setPreTokenizer(whitespacePreTokenizer()) encode = tokenizer.encode.bind(tokenizer) }) beforeEach(async () => { encoding = await encode(originalString, null) encodingDual = await encode(originalString, originalPairString) }) it('has a list of defined methods', () => { expect(typeof encoding.wordToTokens).toBe('function') expect(typeof encoding.wordToChars).toBe('function') expect(typeof encoding.tokenToChars).toBe('function') expect(typeof encoding.tokenToWord).toBe('function') expect(typeof encoding.charToToken).toBe('function') expect(typeof encoding.charToWord).toBe('function') expect(typeof encoding.getAttentionMask).toBe('function') expect(typeof encoding.getIds).toBe('function') expect(typeof encoding.getLength).toBe('function') expect(typeof encoding.getOffsets).toBe('function') expect(typeof encoding.getOverflowing).toBe('function') expect(typeof encoding.getSpecialTokensMask).toBe('function') expect(typeof encoding.getTokens).toBe('function') expect(typeof encoding.getTypeIds).toBe('function') expect(typeof encoding.getWordIds).toBe('function') expect(typeof encoding.getSequenceIds).toBe('function') expect(typeof encoding.pad).toBe('function') expect(typeof encoding.truncate).toBe('function') }) describe('truncate', () => { it('accepts `undefined` as second parameter', () => { expect(encoding.truncate(10, undefined)).toBeUndefined() }) it('should throw an Error on invalid direction', () => { const t = () => encoding.truncate(10, 3, 'not_valid') expect(t).toThrow(`not_valid is not a valid truncation direction`) }) }) describe('getWordIds', () => { it('returns the correct list of indexes', () => { const indexes = encoding.getWordIds() expect(indexes).toEqual([0, 1, 2, 3, 3]) }) }) describe('getSequenceIds', () => { it('returns the correct list of indexes', () => { expect(encoding.getSequenceIds()).toEqual([0, 0, 0, 0, 0]) expect(encodingDual.getSequenceIds()).toEqual([0, 0, 0, 0, 0, 1, 1, 1, 1]) }) 
}) describe('wordToTokens', () => { it('returns the correct indexes', () => { const indexes = encoding.wordToTokens(3) expect(indexes).toEqual([3, 5]) }) it('returns the corrent indexes with pair sequences', () => { expect(encodingDual.wordToTokens(3, 0)).toEqual([3, 5]) expect(encodingDual.wordToTokens(3, 1)).toEqual([8, 9]) }) it('returns undefined when out of range word', () => { const index = encoding.wordToTokens(100) expect(index).toBeNull() }) }) describe('wordToChars', () => { it('returns the correct offsets', () => { const offsets = encoding.wordToChars(3) expect(offsets).toEqual([11, 15]) }) it('returns the correct offsets with pair sequences', () => { expect(encodingDual.wordToChars(3, 0)).toEqual([11, 15]) expect(encodingDual.wordToChars(3, 1)).toEqual([13, 14]) }) it('returns undefined when out of range word', () => { const offsets = encoding.wordToChars(100) expect(offsets).toBeNull() }) }) describe('tokenToSequence', () => { it('returns the correct value', () => { expect(encodingDual.tokenToSequence(4)).toEqual(0) expect(encodingDual.tokenToSequence(6)).toEqual(1) }) }) describe('tokenToChars', () => { it('returns the correct offsets', () => { const offsets = encoding.tokenToChars(3) expect(offsets).toEqual([11, 13]) }) it('returns the correct offsets with pair sequences', () => { expect(encodingDual.tokenToChars(3)).toEqual([11, 13]) expect(encodingDual.tokenToChars(7)).toEqual([8, 13]) }) it('returns undefined when out of range token', () => { const offsets = encoding.tokenToChars(100) expect(offsets).toBeNull() }) }) describe('tokenToWord', () => { it('returns the correct index', () => { const index = encoding.tokenToWord(3) expect(index).toEqual(3) }) it('returns the correct index with pair sequences', () => { expect(encodingDual.tokenToWord(3)).toEqual(3) expect(encodingDual.tokenToWord(7)).toEqual(2) }) it('returns undefined when out of range token', () => { const index = encoding.tokenToWord(100) expect(index).toBeNull() }) }) describe('charToToken', () => { it('returns the correct index', () => { const index = encoding.charToToken(3) expect(index).toEqual(1) }) it('returns the correct index with pair sequences', () => { expect(encodingDual.charToToken(3, 0)).toEqual(1) expect(encodingDual.charToToken(3, 1)).toEqual(5) }) it('returns undefined when out of range char', () => { const index = encoding.charToToken(100) expect(index).toBeNull() }) }) describe('charToWord', () => { it('returns the correct index', () => { const index = encoding.charToWord(3) expect(index).toEqual(1) }) it('returns the correct index with pair sequences', () => { expect(encodingDual.charToWord(3, 0)).toEqual(1) expect(encodingDual.charToWord(3, 1)).toEqual(0) }) it('returns undefined when out of range char', () => { const index = encoding.charToWord(100) expect(index).toBeNull() }) }) describe('pad', () => { it('works correctly with only one parameter', () => { encoding.pad(10) expect(encoding.getTokens()).toHaveLength(10) }) it('accepts `undefined` as second parameter', () => { encoding.pad(10, undefined) expect(encoding.getTokens()).toHaveLength(10) }) it('accepts options as second parameter', () => { encoding.pad(10, { direction: PaddingDirection.Left, padToken: '[PA]', padTypeId: 10, padId: 400, }) const tokens = encoding.getTokens() expect(tokens).toHaveLength(10) expect(tokens[0]).toBe('[PA]') expect(encoding.getTypeIds()[0]).toBe(10) expect(encoding.getIds()[0]).toBe(400) }) }) })
tokenizers/bindings/node/lib/bindings/encoding.test.ts/0
{ "file_path": "tokenizers/bindings/node/lib/bindings/encoding.test.ts", "repo_id": "tokenizers", "token_count": 3021 }
222
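The tests above cover the Node bindings of `Encoding` (word/token/character alignment, padding, truncation). The Python bindings expose the same alignment helpers; below is a small sketch using an invented WordLevel vocabulary instead of the mock vocab.txt from the test suite:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

vocab = {"[UNK]": 0, "my": 1, "name": 2, "is": 3, "john": 4}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

encoding = tokenizer.encode("my name is john")
print(encoding.tokens)             # ['my', 'name', 'is', 'john']
print(encoding.word_to_tokens(3))  # token span covering the 4th word
print(encoding.token_to_chars(3))  # character offsets of the 4th token
print(encoding.char_to_token(3))   # index of the token containing character 3
```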
{ "name": "tokenizers-freebsd-x64", "version": "0.13.4-rc1", "os": [ "freebsd" ], "cpu": [ "x64" ], "main": "tokenizers.freebsd-x64.node", "files": [ "tokenizers.freebsd-x64.node" ], "description": "Tokenizers platform specific bindings", "keywords": [ "napi-rs", "NAPI", "N-API", "Rust", "node-addon", "node-addon-api" ], "license": "MIT", "engines": { "node": ">= 10" }, "publishConfig": { "registry": "https://registry.npmjs.org/", "access": "public" }, "repository": "tokenizers" }
tokenizers/bindings/node/npm/freebsd-x64/package.json/0
{ "file_path": "tokenizers/bindings/node/npm/freebsd-x64/package.json", "repo_id": "tokenizers", "token_count": 272 }
223
{ "name": "tokenizers-win32-x64-msvc", "version": "0.13.4-rc1", "os": [ "win32" ], "cpu": [ "x64" ], "main": "tokenizers.win32-x64-msvc.node", "files": [ "tokenizers.win32-x64-msvc.node" ], "description": "Tokenizers platform specific bindings", "keywords": [ "napi-rs", "NAPI", "N-API", "Rust", "node-addon", "node-addon-api" ], "license": "MIT", "engines": { "node": ">= 10" }, "publishConfig": { "registry": "https://registry.npmjs.org/", "access": "public" }, "repository": "tokenizers" }
tokenizers/bindings/node/npm/win32-x64-msvc/package.json/0
{ "file_path": "tokenizers/bindings/node/npm/win32-x64-msvc/package.json", "repo_id": "tokenizers", "token_count": 277 }
224
use napi::bindgen_prelude::*;
use napi_derive::napi;
use tokenizers as tk;
use tokenizers::Encoding;

use crate::encoding::JsEncoding;

#[napi]
pub fn slice(s: String, begin_index: Option<i32>, end_index: Option<i32>) -> Result<String> {
    let len = s.chars().count();

    let get_index = |x: i32| -> usize {
        if x >= 0 {
            x as usize
        } else {
            (len as i32 + x) as usize
        }
    };

    let begin_index = get_index(begin_index.unwrap_or(0));
    let end_index = get_index(end_index.unwrap_or(len as i32));

    if let Some(slice) = tk::tokenizer::normalizer::get_range_of(&s, begin_index..end_index) {
        Ok(slice.to_string())
    } else {
        Err(Error::new(
            Status::GenericFailure,
            "Error in offsets".to_string(),
        ))
    }
}

#[napi]
pub fn merge_encodings(
    encodings: Vec<&JsEncoding>,
    growing_offsets: Option<bool>,
) -> Result<JsEncoding> {
    let growing_offsets = growing_offsets.unwrap_or(false);

    let encodings: Vec<_> = encodings
        .into_iter()
        .map(|enc| enc.encoding.to_owned().unwrap())
        .collect();

    let new_encoding = Encoding::merge(encodings, growing_offsets);
    let js_encoding = JsEncoding {
        encoding: Some(new_encoding),
    };

    Ok(js_encoding)
}
tokenizers/bindings/node/src/utils.rs/0
{ "file_path": "tokenizers/bindings/node/src/utils.rs", "repo_id": "tokenizers", "token_count": 503 }
225
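The `slice` binding above resolves possibly negative indices against the character count (not the byte length) before taking the range. A rough Python rendition of that index handling — the helper name and sample string are made up for illustration:

```python
from typing import Optional


def slice_chars(s: str, begin: Optional[int] = None, end: Optional[int] = None) -> str:
    # Indices count characters, and negative values wrap from the end,
    # mirroring the bounds handling in the Rust `slice` binding above.
    chars = list(s)
    n = len(chars)

    def resolve(i: Optional[int], default: int) -> int:
        if i is None:
            return default
        return i if i >= 0 else n + i

    return "".join(chars[resolve(begin, 0):resolve(end, n)])


print(slice_chars("tokenizers 🤗", 0, -2))  # "tokenizers" — counts chars, not bytes
```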
import datasets

from tokenizers import Tokenizer, models, normalizers, pre_tokenizers

# Build a tokenizer
bpe_tokenizer = Tokenizer(models.BPE())
bpe_tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
bpe_tokenizer.normalizer = normalizers.Lowercase()

# Initialize a dataset
dataset = datasets.load_dataset("wikitext", "wikitext-103-raw-v1", split="train")


# Build an iterator over this dataset
def batch_iterator():
    batch_size = 1000
    for batch in dataset.iter(batch_size=batch_size):
        yield batch["text"]


# And finally train
bpe_tokenizer.train_from_iterator(batch_iterator(), length=len(dataset))
tokenizers/bindings/python/examples/train_with_datasets.py/0
{ "file_path": "tokenizers/bindings/python/examples/train_with_datasets.py", "repo_id": "tokenizers", "token_count": 207 }
226
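The training script above streams wikitext-103 through `train_from_iterator`. The same flow works on any iterator of strings; here is a self-contained toy version (the in-memory corpus and output filename are arbitrary) that also shows using and saving the trained tokenizer:

```python
from tokenizers import Tokenizer, models, normalizers, pre_tokenizers

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
tokenizer.normalizer = normalizers.Lowercase()

# A tiny in-memory corpus stands in for the wikitext-103 iterator above
corpus = ["Hello tokenizers", "hello world", "training tokenizers is fast"]
tokenizer.train_from_iterator(corpus)

encoding = tokenizer.encode("Hello tokenizers")
print(encoding.tokens)
print(encoding.ids)

tokenizer.save("toy-bpe.json")  # persists the whole pipeline as a single JSON file
```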
# Generated content DO NOT EDIT class Normalizer: """ Base class for all normalizers This class is not supposed to be instantiated directly. Instead, any implementation of a Normalizer will return an instance of this class when instantiated. """ def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class BertNormalizer(Normalizer): """ BertNormalizer Takes care of normalizing raw text before giving it to a Bert model. This includes cleaning the text, handling accents, chinese chars and lowercasing Args: clean_text (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether to clean the text, by removing any control characters and replacing all whitespaces by the classic one. handle_chinese_chars (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether to handle chinese chars by putting spaces around them. strip_accents (:obj:`bool`, `optional`): Whether to strip all accents. If this option is not specified (ie == None), then it will be determined by the value for `lowercase` (as in the original Bert). lowercase (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether to lowercase. """ def __init__(self, clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class Lowercase(Normalizer): """ Lowercase Normalizer """ def __init__(self): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. 
If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class NFC(Normalizer): """ NFC Unicode Normalizer """ def __init__(self): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class NFD(Normalizer): """ NFD Unicode Normalizer """ def __init__(self): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class NFKC(Normalizer): """ NFKC Unicode Normalizer """ def __init__(self): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. 
If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class NFKD(Normalizer): """ NFKD Unicode Normalizer """ def __init__(self): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class Nmt(Normalizer): """ Nmt normalizer """ def __init__(self): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class Precompiled(Normalizer): """ Precompiled normalizer Don't use manually it is used for compatiblity for SentencePiece. """ def __init__(self, precompiled_charsmap): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. 
If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class Prepend(Normalizer): """ Prepend normalizer """ def __init__(self, prepend): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class Replace(Normalizer): """ Replace normalizer """ def __init__(self, pattern, content): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class Sequence(Normalizer): """ Allows concatenating multiple other Normalizer as a Sequence. All the normalizers run in sequence in the given order Args: normalizers (:obj:`List[Normalizer]`): A list of Normalizer to be run as a sequence """ def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. 
If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class Strip(Normalizer): """ Strip normalizer """ def __init__(self, left=True, right=True): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass class StripAccents(Normalizer): """ StripAccents normalizer """ def __init__(self): pass def normalize(self, normalized): """ Normalize a :class:`~tokenizers.NormalizedString` in-place This method allows to modify a :class:`~tokenizers.NormalizedString` to keep track of the alignment information. If you just want to see the result of the normalization on a raw string, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize_str` Args: normalized (:class:`~tokenizers.NormalizedString`): The normalized string on which to apply this :class:`~tokenizers.normalizers.Normalizer` """ pass def normalize_str(self, sequence): """ Normalize the given string This method provides a way to visualize the effect of a :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment information. If you need to get/convert offsets, you can use :meth:`~tokenizers.normalizers.Normalizer.normalize` Args: sequence (:obj:`str`): A string to normalize Returns: :obj:`str`: A string after normalization """ pass
tokenizers/bindings/python/py_src/tokenizers/normalizers/__init__.pyi/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/normalizers/__init__.pyi", "repo_id": "tokenizers", "token_count": 8053 }
227
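The stub above only declares the normalizer interface; in practice these classes are composed with `Sequence` and inspected through `normalize_str`. A short sketch (the input strings are arbitrary):

```python
from tokenizers.normalizers import NFD, Lowercase, Sequence, StripAccents

normalizer = Sequence([NFD(), StripAccents(), Lowercase()])
print(normalizer.normalize_str("Héllò Wörld"))  # "hello world"

# Each normalizer also works on its own, with the two methods described above
print(Lowercase().normalize_str("ABC"))         # "abc"
```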
use std::sync::{Arc, RwLock}; use crate::utils::PyChar; use crate::utils::PyPattern; use pyo3::exceptions; use pyo3::prelude::*; use pyo3::types::*; use serde::de::Error; use serde::{Deserialize, Deserializer, Serialize, Serializer}; use tk::decoders::bpe::BPEDecoder; use tk::decoders::byte_fallback::ByteFallback; use tk::decoders::byte_level::ByteLevel; use tk::decoders::ctc::CTC; use tk::decoders::fuse::Fuse; use tk::decoders::metaspace::Metaspace; use tk::decoders::sequence::Sequence; use tk::decoders::strip::Strip; use tk::decoders::wordpiece::WordPiece; use tk::decoders::DecoderWrapper; use tk::normalizers::replace::Replace; use tk::Decoder; use tokenizers as tk; use super::error::ToPyResult; /// Base class for all decoders /// /// This class is not supposed to be instantiated directly. Instead, any implementation of /// a Decoder will return an instance of this class when instantiated. #[pyclass(dict, module = "tokenizers.decoders", name = "Decoder", subclass)] #[derive(Clone, Deserialize, Serialize)] pub struct PyDecoder { #[serde(flatten)] pub(crate) decoder: PyDecoderWrapper, } impl PyDecoder { pub(crate) fn new(decoder: PyDecoderWrapper) -> Self { PyDecoder { decoder } } pub(crate) fn get_as_subtype(&self, py: Python<'_>) -> PyResult<PyObject> { let base = self.clone(); Ok(match &self.decoder { PyDecoderWrapper::Custom(_) => Py::new(py, base)?.into_py(py), PyDecoderWrapper::Wrapped(inner) => match &*inner.as_ref().read().unwrap() { DecoderWrapper::Metaspace(_) => Py::new(py, (PyMetaspaceDec {}, base))?.into_py(py), DecoderWrapper::WordPiece(_) => Py::new(py, (PyWordPieceDec {}, base))?.into_py(py), DecoderWrapper::ByteFallback(_) => { Py::new(py, (PyByteFallbackDec {}, base))?.into_py(py) } DecoderWrapper::Strip(_) => Py::new(py, (PyStrip {}, base))?.into_py(py), DecoderWrapper::Fuse(_) => Py::new(py, (PyFuseDec {}, base))?.into_py(py), DecoderWrapper::ByteLevel(_) => Py::new(py, (PyByteLevelDec {}, base))?.into_py(py), DecoderWrapper::Replace(_) => Py::new(py, (PyReplaceDec {}, base))?.into_py(py), DecoderWrapper::BPE(_) => Py::new(py, (PyBPEDecoder {}, base))?.into_py(py), DecoderWrapper::CTC(_) => Py::new(py, (PyCTCDecoder {}, base))?.into_py(py), DecoderWrapper::Sequence(_) => { Py::new(py, (PySequenceDecoder {}, base))?.into_py(py) } }, }) } } impl Decoder for PyDecoder { fn decode_chain(&self, tokens: Vec<String>) -> tk::Result<Vec<String>> { self.decoder.decode_chain(tokens) } } #[pymethods] impl PyDecoder { #[staticmethod] fn custom(decoder: PyObject) -> Self { let decoder = PyDecoderWrapper::Custom(Arc::new(RwLock::new(CustomDecoder::new(decoder)))); PyDecoder::new(decoder) } fn __getstate__(&self, py: Python) -> PyResult<PyObject> { let data = serde_json::to_string(&self.decoder).map_err(|e| { exceptions::PyException::new_err(format!( "Error while attempting to pickle Decoder: {}", e )) })?; Ok(PyBytes::new(py, data.as_bytes()).to_object(py)) } fn __setstate__(&mut self, py: Python, state: PyObject) -> PyResult<()> { match state.extract::<&PyBytes>(py) { Ok(s) => { self.decoder = serde_json::from_slice(s.as_bytes()).map_err(|e| { exceptions::PyException::new_err(format!( "Error while attempting to unpickle Decoder: {}", e )) })?; Ok(()) } Err(e) => Err(e), } } /// Decode the given list of tokens to a final string /// /// Args: /// tokens (:obj:`List[str]`): /// The list of tokens to decode /// /// Returns: /// :obj:`str`: The decoded string #[pyo3(text_signature = "(self, tokens)")] fn decode(&self, tokens: Vec<String>) -> PyResult<String> { 
ToPyResult(self.decoder.decode(tokens)).into() } } macro_rules! getter { ($self: ident, $variant: ident, $($name: tt)+) => {{ let super_ = $self.as_ref(); if let PyDecoderWrapper::Wrapped(ref wrap) = super_.decoder { if let DecoderWrapper::$variant(ref dec) = *wrap.read().unwrap() { dec.$($name)+ } else { unreachable!() } } else { unreachable!() } }}; } macro_rules! setter { ($self: ident, $variant: ident, $name: ident, $value: expr) => {{ let super_ = $self.as_ref(); if let PyDecoderWrapper::Wrapped(ref wrap) = super_.decoder { if let DecoderWrapper::$variant(ref mut dec) = *wrap.write().unwrap() { dec.$name = $value; } } }}; ($self: ident, $variant: ident, @$name: ident, $value: expr) => {{ let super_ = $self.as_ref(); if let PyDecoderWrapper::Wrapped(ref wrap) = super_.decoder { if let DecoderWrapper::$variant(ref mut dec) = *wrap.write().unwrap() { dec.$name($value); } } }}; } /// ByteLevel Decoder /// /// This decoder is to be used in tandem with the :class:`~tokenizers.pre_tokenizers.ByteLevel` /// :class:`~tokenizers.pre_tokenizers.PreTokenizer`. #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "ByteLevel")] pub struct PyByteLevelDec {} #[pymethods] impl PyByteLevelDec { #[new] #[pyo3(signature = (**_kwargs), text_signature = "(self)")] fn new(_kwargs: Option<&PyDict>) -> (Self, PyDecoder) { (PyByteLevelDec {}, ByteLevel::default().into()) } } /// Replace Decoder /// /// This decoder is to be used in tandem with the :class:`~tokenizers.pre_tokenizers.Replace` /// :class:`~tokenizers.pre_tokenizers.PreTokenizer`. #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "Replace")] pub struct PyReplaceDec {} #[pymethods] impl PyReplaceDec { #[new] #[pyo3(text_signature = "(self, pattern, content)")] fn new(pattern: PyPattern, content: String) -> PyResult<(Self, PyDecoder)> { Ok(( PyReplaceDec {}, ToPyResult(Replace::new(pattern, content)).into_py()?.into(), )) } } /// WordPiece Decoder /// /// Args: /// prefix (:obj:`str`, `optional`, defaults to :obj:`##`): /// The prefix to use for subwords that are not a beginning-of-word /// /// cleanup (:obj:`bool`, `optional`, defaults to :obj:`True`): /// Whether to cleanup some tokenization artifacts. Mainly spaces before punctuation, /// and some abbreviated english forms. #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "WordPiece")] pub struct PyWordPieceDec {} #[pymethods] impl PyWordPieceDec { #[getter] fn get_prefix(self_: PyRef<Self>) -> String { getter!(self_, WordPiece, prefix.clone()) } #[setter] fn set_prefix(self_: PyRef<Self>, prefix: String) { setter!(self_, WordPiece, prefix, prefix); } #[getter] fn get_cleanup(self_: PyRef<Self>) -> bool { getter!(self_, WordPiece, cleanup) } #[setter] fn set_cleanup(self_: PyRef<Self>, cleanup: bool) { setter!(self_, WordPiece, cleanup, cleanup); } #[new] #[pyo3(signature = (prefix = String::from("##"), cleanup = true), text_signature = "(self, prefix=\"##\", cleanup=True)")] fn new(prefix: String, cleanup: bool) -> (Self, PyDecoder) { (PyWordPieceDec {}, WordPiece::new(prefix, cleanup).into()) } } /// ByteFallback Decoder /// ByteFallback is a simple trick which converts tokens looking like `<0x61>` /// to pure bytes, and attempts to make them into a string. 
If the tokens /// cannot be decoded you will get � instead for each inconvertable byte token /// #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "ByteFallback")] pub struct PyByteFallbackDec {} #[pymethods] impl PyByteFallbackDec { #[new] #[pyo3(signature = (), text_signature = "(self)")] fn new() -> (Self, PyDecoder) { (PyByteFallbackDec {}, ByteFallback::new().into()) } } /// Fuse Decoder /// Fuse simply fuses every token into a single string. /// This is the last step of decoding, this decoder exists only if /// there is need to add other decoders *after* the fusion #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "Fuse")] pub struct PyFuseDec {} #[pymethods] impl PyFuseDec { #[new] #[pyo3(signature = (), text_signature = "(self)")] fn new() -> (Self, PyDecoder) { (PyFuseDec {}, Fuse::new().into()) } } /// Strip normalizer /// Strips n left characters of each token, or n right characters of each token #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "Strip")] pub struct PyStrip {} #[pymethods] impl PyStrip { #[getter] fn get_start(self_: PyRef<Self>) -> usize { getter!(self_, Strip, start) } #[setter] fn set_start(self_: PyRef<Self>, start: usize) { setter!(self_, Strip, start, start) } #[getter] fn get_stop(self_: PyRef<Self>) -> usize { getter!(self_, Strip, stop) } #[setter] fn set_stop(self_: PyRef<Self>, stop: usize) { setter!(self_, Strip, stop, stop) } #[getter] fn get_content(self_: PyRef<Self>) -> char { getter!(self_, Strip, content) } #[setter] fn set_content(self_: PyRef<Self>, content: char) { setter!(self_, Strip, content, content) } #[new] #[pyo3(signature = (content=' ', left=0, right=0), text_signature = "(self, content, left=0, right=0)")] fn new(content: char, left: usize, right: usize) -> (Self, PyDecoder) { (PyStrip {}, Strip::new(content, left, right).into()) } } /// Metaspace Decoder /// /// Args: /// replacement (:obj:`str`, `optional`, defaults to :obj:`▁`): /// The replacement character. Must be exactly one character. By default we /// use the `▁` (U+2581) meta symbol (Same as in SentencePiece). /// /// add_prefix_space (:obj:`bool`, `optional`, defaults to :obj:`True`): /// Whether to add a space to the first word if there isn't already one. This /// lets us treat `hello` exactly like `say hello`. #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "Metaspace")] pub struct PyMetaspaceDec {} #[pymethods] impl PyMetaspaceDec { #[getter] fn get_replacement(self_: PyRef<Self>) -> String { getter!(self_, Metaspace, get_replacement().to_string()) } #[setter] fn set_replacement(self_: PyRef<Self>, replacement: PyChar) { setter!(self_, Metaspace, @set_replacement, replacement.0); } #[getter] fn get_add_prefix_space(self_: PyRef<Self>) -> bool { getter!(self_, Metaspace, add_prefix_space) } #[setter] fn set_add_prefix_space(self_: PyRef<Self>, add_prefix_space: bool) { setter!(self_, Metaspace, add_prefix_space, add_prefix_space); } #[new] #[pyo3(signature = (replacement = PyChar('▁'), add_prefix_space = true), text_signature = "(self, replacement = \"▁\", add_prefix_space = True)")] fn new(replacement: PyChar, add_prefix_space: bool) -> (Self, PyDecoder) { ( PyMetaspaceDec {}, Metaspace::new(replacement.0, add_prefix_space).into(), ) } } /// BPEDecoder Decoder /// /// Args: /// suffix (:obj:`str`, `optional`, defaults to :obj:`</w>`): /// The suffix that was used to caracterize an end-of-word. 
This suffix will /// be replaced by whitespaces during the decoding #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "BPEDecoder")] pub struct PyBPEDecoder {} #[pymethods] impl PyBPEDecoder { #[getter] fn get_suffix(self_: PyRef<Self>) -> String { getter!(self_, BPE, suffix.clone()) } #[setter] fn set_suffix(self_: PyRef<Self>, suffix: String) { setter!(self_, BPE, suffix, suffix); } #[new] #[pyo3(signature = (suffix = String::from("</w>")), text_signature = "(self, suffix=\"</w>\")")] fn new(suffix: String) -> (Self, PyDecoder) { (PyBPEDecoder {}, BPEDecoder::new(suffix).into()) } } /// CTC Decoder /// /// Args: /// pad_token (:obj:`str`, `optional`, defaults to :obj:`<pad>`): /// The pad token used by CTC to delimit a new token. /// word_delimiter_token (:obj:`str`, `optional`, defaults to :obj:`|`): /// The word delimiter token. It will be replaced by a <space> /// cleanup (:obj:`bool`, `optional`, defaults to :obj:`True`): /// Whether to cleanup some tokenization artifacts. /// Mainly spaces before punctuation, and some abbreviated english forms. #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name = "CTC")] pub struct PyCTCDecoder {} #[pymethods] impl PyCTCDecoder { #[getter] fn get_pad_token(self_: PyRef<Self>) -> String { getter!(self_, CTC, pad_token.clone()) } #[setter] fn set_pad_token(self_: PyRef<Self>, pad_token: String) { setter!(self_, CTC, pad_token, pad_token); } #[getter] fn get_word_delimiter_token(self_: PyRef<Self>) -> String { getter!(self_, CTC, word_delimiter_token.clone()) } #[setter] fn set_word_delimiter_token(self_: PyRef<Self>, word_delimiter_token: String) { setter!(self_, CTC, word_delimiter_token, word_delimiter_token); } #[getter] fn get_cleanup(self_: PyRef<Self>) -> bool { getter!(self_, CTC, cleanup) } #[setter] fn set_cleanup(self_: PyRef<Self>, cleanup: bool) { setter!(self_, CTC, cleanup, cleanup); } #[new] #[pyo3(signature = ( pad_token = String::from("<pad>"), word_delimiter_token = String::from("|"), cleanup = true ), text_signature = "(self, pad_token=\"<pad>\", word_delimiter_token=\"|\", cleanup=True)")] fn new(pad_token: String, word_delimiter_token: String, cleanup: bool) -> (Self, PyDecoder) { ( PyCTCDecoder {}, CTC::new(pad_token, word_delimiter_token, cleanup).into(), ) } } /// Sequence Decoder /// /// Args: /// decoders (:obj:`List[Decoder]`) /// The decoders that need to be chained #[pyclass(extends=PyDecoder, module = "tokenizers.decoders", name="Sequence")] pub struct PySequenceDecoder {} #[pymethods] impl PySequenceDecoder { #[new] #[pyo3(signature = (decoders_py), text_signature = "(self, decoders)")] fn new(decoders_py: &PyList) -> PyResult<(Self, PyDecoder)> { let mut decoders: Vec<DecoderWrapper> = Vec::with_capacity(decoders_py.len()); for decoder_py in decoders_py.iter() { let decoder: PyRef<PyDecoder> = decoder_py.extract()?; let decoder = match &decoder.decoder { PyDecoderWrapper::Wrapped(inner) => inner, PyDecoderWrapper::Custom(_) => unimplemented!(), }; decoders.push(decoder.read().unwrap().clone()); } Ok((PySequenceDecoder {}, Sequence::new(decoders).into())) } fn __getnewargs__<'p>(&self, py: Python<'p>) -> &'p PyTuple { PyTuple::new(py, [PyList::empty(py)]) } } #[derive(Clone)] pub(crate) struct CustomDecoder { inner: PyObject, } impl CustomDecoder { pub(crate) fn new(inner: PyObject) -> Self { CustomDecoder { inner } } } impl Decoder for CustomDecoder { fn decode(&self, tokens: Vec<String>) -> tk::Result<String> { Python::with_gil(|py| { let decoded = self .inner .call_method(py, "decode", 
(tokens,), None)? .extract(py)?; Ok(decoded) }) } fn decode_chain(&self, tokens: Vec<String>) -> tk::Result<Vec<String>> { Python::with_gil(|py| { let decoded = self .inner .call_method(py, "decode_chain", (tokens,), None)? .extract(py)?; Ok(decoded) }) } } impl Serialize for CustomDecoder { fn serialize<S>(&self, _serializer: S) -> std::result::Result<S::Ok, S::Error> where S: Serializer, { Err(serde::ser::Error::custom( "Custom PyDecoder cannot be serialized", )) } } impl<'de> Deserialize<'de> for CustomDecoder { fn deserialize<D>(_deserializer: D) -> std::result::Result<Self, D::Error> where D: Deserializer<'de>, { Err(D::Error::custom("PyDecoder cannot be deserialized")) } } #[derive(Clone, Deserialize, Serialize)] #[serde(untagged)] pub(crate) enum PyDecoderWrapper { Custom(Arc<RwLock<CustomDecoder>>), Wrapped(Arc<RwLock<DecoderWrapper>>), } impl<I> From<I> for PyDecoderWrapper where I: Into<DecoderWrapper>, { fn from(norm: I) -> Self { PyDecoderWrapper::Wrapped(Arc::new(RwLock::new(norm.into()))) } } impl<I> From<I> for PyDecoder where I: Into<DecoderWrapper>, { fn from(dec: I) -> Self { PyDecoder { decoder: dec.into().into(), } } } impl Decoder for PyDecoderWrapper { fn decode_chain(&self, tokens: Vec<String>) -> tk::Result<Vec<String>> { match self { PyDecoderWrapper::Wrapped(inner) => inner.read().unwrap().decode_chain(tokens), PyDecoderWrapper::Custom(inner) => inner.read().unwrap().decode_chain(tokens), } } } /// Decoders Module #[pymodule] pub fn decoders(_py: Python, m: &PyModule) -> PyResult<()> { m.add_class::<PyDecoder>()?; m.add_class::<PyByteLevelDec>()?; m.add_class::<PyReplaceDec>()?; m.add_class::<PyWordPieceDec>()?; m.add_class::<PyByteFallbackDec>()?; m.add_class::<PyFuseDec>()?; m.add_class::<PyStrip>()?; m.add_class::<PyMetaspaceDec>()?; m.add_class::<PyBPEDecoder>()?; m.add_class::<PyCTCDecoder>()?; m.add_class::<PySequenceDecoder>()?; Ok(()) } #[cfg(test)] mod test { use std::sync::{Arc, RwLock}; use pyo3::prelude::*; use tk::decoders::metaspace::Metaspace; use tk::decoders::DecoderWrapper; use crate::decoders::{CustomDecoder, PyDecoder, PyDecoderWrapper}; #[test] fn get_subtype() { Python::with_gil(|py| { let py_dec = PyDecoder::new(Metaspace::default().into()); let py_meta = py_dec.get_as_subtype(py).unwrap(); assert_eq!("Metaspace", py_meta.as_ref(py).get_type().name().unwrap()); }) } #[test] fn serialize() { let py_wrapped: PyDecoderWrapper = Metaspace::default().into(); let py_ser = serde_json::to_string(&py_wrapped).unwrap(); let rs_wrapped = DecoderWrapper::Metaspace(Metaspace::default()); let rs_ser = serde_json::to_string(&rs_wrapped).unwrap(); assert_eq!(py_ser, rs_ser); let py_dec: PyDecoder = serde_json::from_str(&rs_ser).unwrap(); match py_dec.decoder { PyDecoderWrapper::Wrapped(msp) => match *msp.as_ref().read().unwrap() { DecoderWrapper::Metaspace(_) => {} _ => panic!("Expected Metaspace"), }, _ => panic!("Expected wrapped, not custom."), } let obj = Python::with_gil(|py| { let py_msp = PyDecoder::new(Metaspace::default().into()); let obj: PyObject = Py::new(py, py_msp).unwrap().into_py(py); obj }); let py_seq = PyDecoderWrapper::Custom(Arc::new(RwLock::new(CustomDecoder::new(obj)))); assert!(serde_json::to_string(&py_seq).is_err()); } }
tokenizers/bindings/python/src/decoders.rs/0
{ "file_path": "tokenizers/bindings/python/src/decoders.rs", "repo_id": "tokenizers", "token_count": 9016 }
228
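The Rust module above backs `tokenizers.decoders` on the Python side. A brief sketch of the decode step for two of them — the token lists are hand-written examples, not real model output:

```python
from tokenizers import decoders

# WordPiece: glue "##" continuation pieces back onto the previous token
wordpiece = decoders.WordPiece(prefix="##", cleanup=True)
print(wordpiece.decode(["my", "name", "is", "john", "##ny"]))  # "my name is johnny"

# Metaspace: turn the ▁ marker back into spaces
metaspace = decoders.Metaspace()
print(metaspace.decode(["▁Hello", "▁world"]))                  # "Hello world"
```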
import argparse import inspect import os from pathlib import Path INDENT = " " * 4 GENERATED_COMMENT = "# Generated content DO NOT EDIT\n" def do_indent(text: str, indent: str): return text.replace("\n", f"\n{indent}") def function(obj, indent, text_signature=None): if text_signature is None: text_signature = obj.__text_signature__ string = "" string += f"{indent}def {obj.__name__}{text_signature}:\n" indent += INDENT string += f'{indent}"""\n' string += f"{indent}{do_indent(obj.__doc__, indent)}\n" string += f'{indent}"""\n' string += f"{indent}pass\n" string += "\n" string += "\n" return string def member_sort(member): if inspect.isclass(member): value = 10 + len(inspect.getmro(member)) else: value = 1 return value def fn_predicate(obj): value = inspect.ismethoddescriptor(obj) or inspect.isbuiltin(obj) if value: return obj.__doc__ and obj.__text_signature__ and not obj.__name__.startswith("_") if inspect.isgetsetdescriptor(obj): return obj.__doc__ and not obj.__name__.startswith("_") return False def get_module_members(module): members = [ member for name, member in inspect.getmembers(module) if not name.startswith("_") and not inspect.ismodule(member) ] members.sort(key=member_sort) return members def pyi_file(obj, indent=""): string = "" if inspect.ismodule(obj): string += GENERATED_COMMENT members = get_module_members(obj) for member in members: string += pyi_file(member, indent) elif inspect.isclass(obj): indent += INDENT mro = inspect.getmro(obj) if len(mro) > 2: inherit = f"({mro[1].__name__})" else: inherit = "" string += f"class {obj.__name__}{inherit}:\n" body = "" if obj.__doc__: body += f'{indent}"""\n{indent}{do_indent(obj.__doc__, indent)}\n{indent}"""\n' fns = inspect.getmembers(obj, fn_predicate) # Init if obj.__text_signature__: body += f"{indent}def __init__{obj.__text_signature__}:\n" body += f"{indent+INDENT}pass\n" body += "\n" for name, fn in fns: body += pyi_file(fn, indent=indent) if not body: body += f"{indent}pass\n" string += body string += "\n\n" elif inspect.isbuiltin(obj): string += f"{indent}@staticmethod\n" string += function(obj, indent) elif inspect.ismethoddescriptor(obj): string += function(obj, indent) elif inspect.isgetsetdescriptor(obj): # TODO it would be interesing to add the setter maybe ? string += f"{indent}@property\n" string += function(obj, indent, text_signature="(self)") else: raise Exception(f"Object {obj} is not supported") return string def py_file(module, origin): members = get_module_members(module) string = GENERATED_COMMENT string += f"from .. 
import {origin}\n" string += "\n" for member in members: name = member.__name__ string += f"{name} = {origin}.{name}\n" return string import subprocess from typing import List, Optional, Tuple def do_ruff(code, is_pyi: bool): command = ["ruff", "format", "--config", "pyproject.toml", "--silent", "-"] if is_pyi: command.extend(["--stdin-filename", "test.pyi"]) process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) stdout, _ = process.communicate(input=code.encode("utf-8")) return stdout.decode("utf-8") def write(module, directory, origin, check=False): submodules = [(name, member) for name, member in inspect.getmembers(module) if inspect.ismodule(member)] filename = os.path.join(directory, "__init__.pyi") pyi_content = pyi_file(module) pyi_content = do_ruff(pyi_content, is_pyi=True) os.makedirs(directory, exist_ok=True) if check: with open(filename, "r") as f: data = f.read() assert data == pyi_content, f"The content of {filename} seems outdated, please run `python stub.py`" else: with open(filename, "w") as f: f.write(pyi_content) filename = os.path.join(directory, "__init__.py") py_content = py_file(module, origin) py_content = do_ruff(py_content, is_pyi=False) os.makedirs(directory, exist_ok=True) is_auto = False if not os.path.exists(filename): is_auto = True else: with open(filename, "r") as f: line = f.readline() if line == GENERATED_COMMENT: is_auto = True if is_auto: if check: with open(filename, "r") as f: data = f.read() assert data == py_content, f"The content of {filename} seems outdated, please run `python stub.py`" else: with open(filename, "w") as f: f.write(py_content) for name, submodule in submodules: write(submodule, os.path.join(directory, name), f"{name}", check=check) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--check", action="store_true") args = parser.parse_args() import tokenizers write(tokenizers.tokenizers, "py_src/tokenizers/", "tokenizers", check=args.check)
tokenizers/bindings/python/stub.py/0
{ "file_path": "tokenizers/bindings/python/stub.py", "repo_id": "tokenizers", "token_count": 2395 }
229
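stub.py walks the compiled module with `inspect`, keeping public, documented members and sorting classes after plain functions. That member-collection step can be reproduced on its own; a minimal sketch (the choice of `tokenizers.decoders` as the module to inspect is just an example):

```python
import inspect

import tokenizers

# Same spirit as get_module_members() above: public, non-module members only,
# with classes ordered by the depth of their inheritance chain.
members = [
    member
    for name, member in inspect.getmembers(tokenizers.decoders)
    if not name.startswith("_") and not inspect.ismodule(member)
]
members.sort(key=lambda m: 10 + len(inspect.getmro(m)) if inspect.isclass(m) else 1)
print([getattr(m, "__name__", repr(m)) for m in members])
```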
# Models

<tokenizerslangcontent>
<python>
## BPE

[[autodoc]] tokenizers.models.BPE

## Model

[[autodoc]] tokenizers.models.Model

## Unigram

[[autodoc]] tokenizers.models.Unigram

## WordLevel

[[autodoc]] tokenizers.models.WordLevel

## WordPiece

[[autodoc]] tokenizers.models.WordPiece
</python>
<rust>
The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website.
</rust>
<node>
The node API has not been documented yet.
</node>
</tokenizerslangcontent>
tokenizers/docs/source-doc-builder/api/models.mdx/0
{ "file_path": "tokenizers/docs/source-doc-builder/api/models.mdx", "repo_id": "tokenizers", "token_count": 179 }
230
Installation with npm
----------------------------------------------------------------------------------------------------

You can simply install 🤗 Tokenizers with npm using::

    npm install tokenizers
tokenizers/docs/source/installation/node.inc/0
{ "file_path": "tokenizers/docs/source/installation/node.inc", "repo_id": "tokenizers", "token_count": 31 }
231
#[macro_use] extern crate criterion; use criterion::Criterion; use std::collections::HashMap; use std::fs::read_to_string; use std::time::{Duration, Instant}; use tokenizers::models::unigram::Unigram; use tokenizers::models::unigram::UnigramTrainer; pub fn bench_train(c: &mut Criterion) { let trainer = UnigramTrainer::builder() .show_progress(false) .unk_token(Some("<UNK>".into())) .build() .unwrap(); let mut model = Unigram::default(); let content = read_to_string("data/small.txt").unwrap(); let mut word_counts = HashMap::new(); content.split_whitespace().for_each(|word| { // This is important for the test of char vs u8 let word = format!("▁{}", word); *word_counts.entry(word).or_insert(0) += 1; }); let sentences: Vec<_> = word_counts .iter() .map(|(s, i)| (s.to_owned(), *i)) .collect(); c.bench_function("Unigram Train vocabulary (small)", |b| { b.iter_custom(|iters| { let mut duration = Duration::new(0, 0); for _i in 0..iters { let sentences = sentences.clone(); let start = Instant::now(); trainer.do_train(sentences, &mut model).unwrap(); duration = duration.checked_add(start.elapsed()).unwrap(); } duration }) }); let content = read_to_string("data/big.txt").unwrap(); // creating `medium` data, which is the first 25% of `data/big.txt` let content = String::from(&content[..(content.len() as f64 * 0.25) as usize]); let mut word_counts = HashMap::new(); content.split_whitespace().for_each(|word| { // This is important for the test of char vs u8 let word = format!("▁{}", word); *word_counts.entry(word).or_insert(0) += 1; }); let sentences: Vec<_> = word_counts .iter() .map(|(s, i)| (s.to_owned(), *i)) .collect(); c.bench_function("Unigram Train vocabulary (medium)", |b| { b.iter_custom(|iters| { let mut duration = Duration::new(0, 0); for _i in 0..iters { let sentences = sentences.clone(); let start = Instant::now(); trainer.do_train(sentences, &mut model).unwrap(); duration = duration.checked_add(start.elapsed()).unwrap(); } duration }) }); } criterion_group! { name = benches_train; config = Criterion::default().sample_size(10); targets = bench_train } criterion_main!(benches_train);
tokenizers/tokenizers/benches/unigram_benchmark.rs/0
{ "file_path": "tokenizers/tokenizers/benches/unigram_benchmark.rs", "repo_id": "tokenizers", "token_count": 1174 }
232
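The benchmark above times `UnigramTrainer::do_train` on pre-counted words from small and medium corpora. From Python the same trainer is normally driven through `train_from_iterator`; a hedged sketch on a toy corpus (the vocabulary size, special token and sentences are arbitrary choices):

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.Unigram())
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.UnigramTrainer(
    vocab_size=100,
    special_tokens=["<UNK>"],
    unk_token="<UNK>",
)

corpus = ["this is a small corpus", "training a unigram model", "unigram is probabilistic"]
tokenizer.train_from_iterator(corpus, trainer=trainer)

print(tokenizer.encode("training a unigram model").tokens)
```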
import * as wasm from "unstable_wasm";

console.log(wasm.tokenize("ab"));
console.log(wasm.tokenize("abc"));
tokenizers/tokenizers/examples/unstable_wasm/www/index.js/0
{ "file_path": "tokenizers/tokenizers/examples/unstable_wasm/www/index.js", "repo_id": "tokenizers", "token_count": 43 }
233
use super::{super::OrderedVocabIter, convert_merges_to_hashmap, BpeBuilder, Pair, BPE}; use serde::{ de::{Error, MapAccess, Visitor}, ser::SerializeStruct, Deserialize, Deserializer, Serialize, Serializer, }; use std::collections::HashMap; impl Serialize for BPE { fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: Serializer, { let mut model = serializer.serialize_struct("BPE", 8)?; // Start by small fields model.serialize_field("type", "BPE")?; model.serialize_field("dropout", &self.dropout)?; model.serialize_field("unk_token", &self.unk_token)?; model.serialize_field("continuing_subword_prefix", &self.continuing_subword_prefix)?; model.serialize_field("end_of_word_suffix", &self.end_of_word_suffix)?; model.serialize_field("fuse_unk", &self.fuse_unk)?; model.serialize_field("byte_fallback", &self.byte_fallback)?; // Then the large ones let mut merges: Vec<(&Pair, &u32)> = self .merges .iter() .map(|(pair, (rank, _))| (pair, rank)) .collect(); merges.sort_unstable_by_key(|k| *k.1); let merges_str = merges .into_iter() .map(|(pair, _)| format!("{} {}", self.vocab_r[&pair.0], self.vocab_r[&pair.1])) .collect::<Vec<_>>(); let ordered_vocab = OrderedVocabIter::new(&self.vocab_r); model.serialize_field("vocab", &ordered_vocab)?; model.serialize_field("merges", &merges_str)?; model.end() } } impl<'de> Deserialize<'de> for BPE { fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where D: Deserializer<'de>, { deserializer.deserialize_struct( "BPE", &[ "type", "dropout", "unk_token", "continuing_subword_prefix", "end_of_word_suffix", "fuse_unk", "byte_fallback", "vocab", "merges", ], BPEVisitor, ) } } struct BPEVisitor; impl<'de> Visitor<'de> for BPEVisitor { type Value = BPE; fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result { write!(fmt, "struct BPE") } fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error> where V: MapAccess<'de>, { let mut builder = BpeBuilder::new(); let mut vocab: Option<HashMap<String, u32>> = None; let mut merges: Option<Vec<String>> = None; while let Some(key) = map.next_key::<String>()? { match key.as_ref() { "dropout" => { if let Some(dropout) = map.next_value()? { builder = builder.dropout(dropout); } } "unk_token" => { if let Some(unk) = map.next_value()? { builder = builder.unk_token(unk); } } "continuing_subword_prefix" => { if let Some(prefix) = map.next_value()? { builder = builder.continuing_subword_prefix(prefix); } } "end_of_word_suffix" => { if let Some(suffix) = map.next_value()? { builder = builder.end_of_word_suffix(suffix); } } "fuse_unk" => { if let Some(suffix) = map.next_value()? { builder = builder.fuse_unk(suffix); } } "byte_fallback" => { if let Some(suffix) = map.next_value()? { builder = builder.byte_fallback(suffix); } } "vocab" => vocab = Some(map.next_value()?), "merges" => merges = Some(map.next_value()?), "type" => match map.next_value()? { "BPE" => {} u => { return Err(serde::de::Error::invalid_value( serde::de::Unexpected::Str(u), &"BPE", )) } }, _ => {} } } if let (Some(vocab), Some(merges)) = (vocab, merges) { let merges = convert_merges_to_hashmap(merges.into_iter(), &vocab).map_err(Error::custom)?; builder = builder.vocab_and_merges(vocab, merges); Ok(builder.build().map_err(Error::custom)?) } else { Err(Error::custom("Missing vocab/merges")) } } }
tokenizers/tokenizers/src/models/bpe/serialization.rs/0
{ "file_path": "tokenizers/tokenizers/src/models/bpe/serialization.rs", "repo_id": "tokenizers", "token_count": 2739 }
234
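The serializer above writes the scalar BPE options first and then the vocabulary and rank-ordered merges. The resulting layout can be inspected from Python through `Tokenizer.to_str()`; a sketch with a deliberately tiny, invented vocab and merge list:

```python
import json

from tokenizers import Tokenizer, models

vocab = {"a": 0, "b": 1, "ab": 2}
merges = [("a", "b")]
tokenizer = Tokenizer(models.BPE(vocab=vocab, merges=merges))

# to_str() returns the same JSON layout the Rust Serialize impl above produces
blob = json.loads(tokenizer.to_str())
print(blob["model"]["type"])    # "BPE"
print(blob["model"]["vocab"])   # {'a': 0, 'b': 1, 'ab': 2}
print(blob["model"]["merges"])  # the merges, ordered by rank
```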
use crate::tokenizer::{NormalizedString, Normalizer, Result}; use serde::{Deserialize, Serialize}; use unicode_categories::UnicodeCategories; /// Checks whether a character is whitespace fn is_whitespace(c: char) -> bool { // These are technically control characters but we count them as whitespace match c { '\t' | '\n' | '\r' => true, _ => c.is_whitespace(), } } /// Checks whether a character is a control character fn is_control(c: char) -> bool { // These are technically control characters but we count them as whitespace match c { '\t' | '\n' | '\r' => false, // The definition of `is_control` here is quite large and contains also // Cc, Cf, Cn or Co // cf. https://unicode.org/reports/tr44/ (Table 12) _ => c.is_other(), } } /// Checks whether a character is chinese /// This defines a "chinese character" as anything in the CJK Unicode block: /// https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) /// /// Note that the CJK Unicode block is NOT all Japanese and Korean characters, /// despite its name. The modern Korean Hangul alphabet is a different block, /// as is Japanese Hiragana and Katakana. Those alphabets are used to write /// space-separated words, so they are not treated specially and handled /// like for all of the other languages. fn is_chinese_char(c: char) -> bool { matches!( c as usize, 0x4E00..=0x9FFF | 0x3400..=0x4DBF | 0x20000..=0x2A6DF | 0x2A700..=0x2B73F | 0x2B740..=0x2B81F | 0x2B920..=0x2CEAF | 0xF900..=0xFAFF | 0x2F800..=0x2FA1F ) } #[derive(Copy, Clone, Debug, Deserialize, Serialize)] #[serde(tag = "type")] #[non_exhaustive] pub struct BertNormalizer { /// Whether to do the bert basic cleaning: /// 1. Remove any control characters /// 2. Replace all sorts of whitespace by the classic one ` ` pub clean_text: bool, /// Whether to put spaces around chinese characters so they get split pub handle_chinese_chars: bool, /// Whether to strip accents pub strip_accents: Option<bool>, /// Whether to lowercase the input pub lowercase: bool, } impl Default for BertNormalizer { fn default() -> Self { Self { clean_text: true, handle_chinese_chars: true, strip_accents: None, lowercase: true, } } } impl BertNormalizer { pub fn new( clean_text: bool, handle_chinese_chars: bool, strip_accents: Option<bool>, lowercase: bool, ) -> Self { Self { clean_text, handle_chinese_chars, strip_accents, lowercase, } } fn do_clean_text(&self, normalized: &mut NormalizedString) { normalized .filter(|c| !(c as usize == 0 || c as usize == 0xfffd || is_control(c))) .map(|c| if is_whitespace(c) { ' ' } else { c }); } fn do_handle_chinese_chars(&self, normalized: &mut NormalizedString) { let mut new_chars: Vec<(char, isize)> = vec![]; normalized.for_each(|c| { if is_chinese_char(c) { new_chars.extend([(' ', 0), (c, 1), (' ', 1)]); } else { new_chars.push((c, 0)); } }); normalized.transform(new_chars, 0); } fn do_strip_accents(&self, normalized: &mut NormalizedString) { normalized.nfd().filter(|c| !c.is_mark_nonspacing()); } fn do_lowercase(&self, normalized: &mut NormalizedString) { normalized.lowercase(); } } impl Normalizer for BertNormalizer { fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> { if self.clean_text { self.do_clean_text(normalized); } if self.handle_chinese_chars { self.do_handle_chinese_chars(normalized); } let strip_accents = self.strip_accents.unwrap_or(self.lowercase); if strip_accents { self.do_strip_accents(normalized); } if self.lowercase { self.do_lowercase(normalized); } Ok(()) } }
tokenizers/tokenizers/src/normalizers/bert.rs/0
{ "file_path": "tokenizers/tokenizers/src/normalizers/bert.rs", "repo_id": "tokenizers", "token_count": 1856 }
235
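The `BertNormalizer` defined in the Rust file above is also exposed through the Python bindings of the `tokenizers` library. Below is a minimal sketch of how its options map to behaviour, assuming the `tokenizers` package is installed; the sample string and the expected output are illustrative only:

```python
from tokenizers.normalizers import BertNormalizer

# Mirrors the Rust defaults: clean control characters, space out CJK characters,
# strip accents only when lowercasing, and lowercase the input.
normalizer = BertNormalizer(
    clean_text=True,
    handle_chinese_chars=True,
    strip_accents=None,  # None means "follow the `lowercase` flag"
    lowercase=True,
)

print(normalizer.normalize_str("Héllò hôw are ü?"))
# expected: "hello how are u?" (accents stripped because lowercase=True)
```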
use crate::utils::SysRegex; use serde::{Deserialize, Deserializer, Serialize}; use crate::tokenizer::{ pattern::Invert, PreTokenizedString, PreTokenizer, Result, SplitDelimiterBehavior, }; /// Represents the different patterns that `Split` can use #[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)] pub enum SplitPattern { String(String), Regex(String), } impl From<String> for SplitPattern { fn from(v: String) -> Self { Self::String(v) } } impl From<&str> for SplitPattern { fn from(v: &str) -> Self { Self::String(v.to_owned()) } } #[derive(Debug, Serialize)] #[serde(tag = "type")] pub struct Split { pattern: SplitPattern, #[serde(skip)] regex: SysRegex, behavior: SplitDelimiterBehavior, invert: bool, } impl<'de> Deserialize<'de> for Split { fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error> where D: Deserializer<'de>, { #[derive(Deserialize)] enum Type { Split, } #[derive(Deserialize)] pub struct SplitHelper { #[serde(rename = "type")] _type: Type, pattern: SplitPattern, behavior: SplitDelimiterBehavior, invert: bool, } let helper = SplitHelper::deserialize(deserializer)?; Self::new(helper.pattern, helper.behavior, helper.invert).map_err(serde::de::Error::custom) } } impl Clone for Split { fn clone(&self) -> Self { Self::new(self.pattern.clone(), self.behavior, self.invert).unwrap() } } impl PartialEq for Split { fn eq(&self, other: &Self) -> bool { self.pattern == other.pattern && self.behavior == other.behavior && self.invert == other.invert } } impl Split { pub fn new<I: Into<SplitPattern>>( pattern: I, behavior: SplitDelimiterBehavior, invert: bool, ) -> Result<Self> { let pattern: SplitPattern = pattern.into(); let regex = match &pattern { SplitPattern::String(s) => SysRegex::new(&regex::escape(s))?, SplitPattern::Regex(r) => SysRegex::new(r)?, }; Ok(Self { pattern, regex, behavior, invert, }) } } impl PreTokenizer for Split { fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> { if self.invert { pretokenized.split(|_, normalized| normalized.split(Invert(&self.regex), self.behavior)) } else { pretokenized.split(|_, normalized| normalized.split(&self.regex, self.behavior)) } } } #[cfg(test)] mod tests { use super::*; use crate::{OffsetReferential, OffsetType, PreTokenizer}; use SplitDelimiterBehavior::*; #[test] fn basic() { let tests = vec![ ( Removed, "How are you doing?", vec![ ("How", (0, 3)), ("are", (4, 7)), ("you", (8, 11)), ("doing", (12, 17)), ("?", (17, 18)), ], ), ( Isolated, "How are you doing?", vec![ ("How", (0, 3)), (" ", (3, 4)), ("are", (4, 7)), (" ", (7, 8)), ("you", (8, 11)), (" ", (11, 12)), ("doing", (12, 17)), ("?", (17, 18)), ], ), ( MergedWithPrevious, "How are you doing?", vec![ ("How ", (0, 4)), ("are ", (4, 8)), ("you ", (8, 12)), ("doing", (12, 17)), ("?", (17, 18)), ], ), ( MergedWithNext, "How are you doing?", vec![ ("How", (0, 3)), (" are", (3, 7)), (" you", (7, 11)), (" doing", (11, 17)), ("?", (17, 18)), ], ), ( Contiguous, "How are you doing?", vec![ ("How", (0, 3)), (" ", (3, 4)), ("are", (4, 7)), (" ", (7, 8)), ("you", (8, 11)), (" ", (11, 12)), ("doing?", (12, 18)), ], ), ]; // use whitespace regex let regex = SplitPattern::Regex(r"\w+|[^\w\s]+".into()); for (behavior, s, res) in tests { let mut pretokenized = PreTokenizedString::from(s); let pretok = Split::new(regex.clone(), behavior, true).unwrap(); pretok.pre_tokenize(&mut pretokenized).unwrap(); assert_eq!( pretokenized .get_splits(OffsetReferential::Original, OffsetType::Byte) .into_iter() .map(|(s, o, _)| (s, o)) .collect::<Vec<_>>(), 
res ); } } #[test] fn regex_string() { let mut pretok_str_for_regex = PreTokenizedString::from("Hey, man!"); let mut pretok_str_for_string = pretok_str_for_regex.clone(); // pre-tokenizer splits on " " - one from Regex, one from string let pretokenizer_regex = Split::new( SplitPattern::Regex(r"\s+".into()), SplitDelimiterBehavior::Removed, false, ) .unwrap(); let pretokenizer_string = Split::new(" ", SplitDelimiterBehavior::Removed, false).unwrap(); pretokenizer_regex .pre_tokenize(&mut pretok_str_for_regex) .unwrap(); pretokenizer_string .pre_tokenize(&mut pretok_str_for_string) .unwrap(); assert_eq!(pretok_str_for_regex, pretok_str_for_string); } #[test] fn invert() { let mut pretok_str = PreTokenizedString::from("Hello Hello Hello"); let mut pretok_str_for_invert = pretok_str.clone(); // one pre-tokenizer splits on " " - one splits inverted on "Hello" let pretokenizer = Split::new(" ", SplitDelimiterBehavior::Removed, false).unwrap(); let pretokenizer_invert = Split::new("Hello", SplitDelimiterBehavior::Removed, true).unwrap(); pretokenizer.pre_tokenize(&mut pretok_str).unwrap(); pretokenizer_invert .pre_tokenize(&mut pretok_str_for_invert) .unwrap(); assert_eq!(pretok_str, pretok_str_for_invert); } #[test] fn serialization() { use SplitDelimiterBehavior::*; let split = Split::new("Hello", Removed, true).unwrap(); let split_s = r#"{"type":"Split","pattern":{"String":"Hello"},"behavior":"Removed","invert":true}"#; assert_eq!(serde_json::to_string(&split).unwrap(), split_s); assert_eq!(serde_json::from_str::<Split>(split_s).unwrap(), split); let split = Split::new(SplitPattern::Regex(r"\s+".into()), Isolated, false).unwrap(); let split_s = r#"{"type":"Split","pattern":{"Regex":"\\s+"},"behavior":"Isolated","invert":false}"#; assert_eq!(serde_json::to_string(&split).unwrap(), split_s); assert_eq!(serde_json::from_str::<Split>(split_s).unwrap(), split); } }
tokenizers/tokenizers/src/pre_tokenizers/split.rs/0
{ "file_path": "tokenizers/tokenizers/src/pre_tokenizers/split.rs", "repo_id": "tokenizers", "token_count": 4038 }
236
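The `regex_string` test in the file above has a direct counterpart in the Python bindings. A small sketch, assuming the `tokenizers` package is installed and that the behavior names are passed as lowercase strings (the sample sentence is illustrative):

```python
from tokenizers import Regex
from tokenizers.pre_tokenizers import Split

# Split on runs of whitespace and drop the delimiters, as in the
# `regex_string` test: a Regex pattern and a plain " " string should agree.
pre_tok = Split(pattern=Regex(r"\s+"), behavior="removed", invert=False)

print(pre_tok.pre_tokenize_str("Hey, man!"))
# expected: [('Hey,', (0, 4)), ('man!', (5, 9))]
```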
use std::marker::PhantomData; use serde::{ self, de::{Error, MapAccess, Visitor}, ser::SerializeStruct, Deserialize, Deserializer, Serialize, Serializer, }; use super::{added_vocabulary::AddedTokenWithId, TokenizerImpl}; use crate::{Decoder, Model, Normalizer, PostProcessor, PreTokenizer, TokenizerBuilder}; static SERIALIZATION_VERSION: &str = "1.0"; impl<M, N, PT, PP, D> Serialize for TokenizerImpl<M, N, PT, PP, D> where M: Serialize, N: Serialize, PT: Serialize, PP: Serialize, D: Serialize, { fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: Serializer, { let mut tokenizer = serializer.serialize_struct("Tokenizer", 9)?; // Start by adding the current version tokenizer.serialize_field("version", SERIALIZATION_VERSION)?; // Params tokenizer.serialize_field("truncation", &self.truncation)?; tokenizer.serialize_field("padding", &self.padding)?; // Added tokens tokenizer.serialize_field("added_tokens", &self.added_vocabulary)?; // Then add our parts tokenizer.serialize_field("normalizer", &self.normalizer)?; tokenizer.serialize_field("pre_tokenizer", &self.pre_tokenizer)?; tokenizer.serialize_field("post_processor", &self.post_processor)?; tokenizer.serialize_field("decoder", &self.decoder)?; tokenizer.serialize_field("model", &self.model)?; tokenizer.end() } } impl<'de, M, N, PT, PP, D> Deserialize<'de> for TokenizerImpl<M, N, PT, PP, D> where M: Deserialize<'de> + Model, N: Deserialize<'de> + Normalizer, PT: Deserialize<'de> + PreTokenizer, PP: Deserialize<'de> + PostProcessor, D: Deserialize<'de> + Decoder, { fn deserialize<De>(deserializer: De) -> Result<Self, De::Error> where De: Deserializer<'de>, { deserializer.deserialize_struct( "Tokenizer", &[ "version", "truncation", "padding", "added_tokens", "normalizer", "pre_tokenizer", "post_processor", "decoder", "model", ], TokenizerVisitor( PhantomData, PhantomData, PhantomData, PhantomData, PhantomData, ), ) } } struct TokenizerVisitor<M, N, PT, PP, D>( PhantomData<M>, PhantomData<N>, PhantomData<PT>, PhantomData<PP>, PhantomData<D>, ); impl<'de, M, N, PT, PP, D> Visitor<'de> for TokenizerVisitor<M, N, PT, PP, D> where M: Deserialize<'de> + Model, N: Deserialize<'de> + Normalizer, PT: Deserialize<'de> + PreTokenizer, PP: Deserialize<'de> + PostProcessor, D: Deserialize<'de> + Decoder, { type Value = TokenizerImpl<M, N, PT, PP, D>; fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result { write!(fmt, "struct Tokenizer") } fn visit_map<V>(self, mut map: V) -> Result<Self::Value, V::Error> where V: MapAccess<'de>, { let mut builder = TokenizerBuilder::new(); let mut tokens: Vec<AddedTokenWithId> = vec![]; while let Some(key) = map.next_key::<String>()? 
{ match key.as_ref() { "version" => { let v: String = map.next_value()?; if &v != "1.0" { return Err(Error::custom(format!("Unknown tokenizer version '{}'", v))); } } "truncation" => { builder = builder.with_truncation(map.next_value()?); } "padding" => { builder = builder.with_padding(map.next_value()?); } "added_tokens" => { tokens = map.next_value()?; } "normalizer" => { builder = builder.with_normalizer(map.next_value()?); } "pre_tokenizer" => { builder = builder.with_pre_tokenizer(map.next_value()?); } "model" => { builder = builder.with_model(map.next_value()?); } "decoder" => { builder = builder.with_decoder(map.next_value()?); } "post_processor" => { builder = builder.with_post_processor(map.next_value()?); } _ => {} }; } let mut tokenizer = builder .build() .map_err(|e| V::Error::custom(e.to_string()))?; // We take care of deserializing the added_tokens (instead of `AddedVocabulary` directly // because it let us check that associated IDs are still good, and warn the user otherwise for token in &tokens { // Warn the user if the id is different than expected let received_id = tokenizer.token_to_id(&token.token.content); if received_id != Some(token.id) { warn!( "Warning: Token '{}' was expected to have ID '{}' but was given ID '{}'", token.token.content, token.id, if let Some(rid) = received_id { rid.to_string() } else { "None".to_string() } ); } } let added_tokens: Vec<_> = tokens.into_iter().map(|token| token.token).collect(); tokenizer.add_tokens(&added_tokens[..]); Ok(tokenizer) } } #[cfg(test)] mod tests { use crate::tokenizer::Tokenizer; use std::str::FromStr; #[test] fn test_deserialization_serialization_invariant() { let tok_json = r#"{ "version": "1.0", "truncation": null, "padding": null, "added_tokens": [ { "id": 0, "content": "[SPECIAL_0]", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true }, { "id": 1, "content": "[SPECIAL_1]", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "special": false }, { "id": 2, "content": "[SPECIAL_2]", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true } ], "normalizer": null, "pre_tokenizer": null, "post_processor": null, "decoder": null, "model": { "type": "WordPiece", "unk_token": "[UNK]", "continuing_subword_prefix": "", "max_input_chars_per_word": 100, "vocab": {} } }"#; let tokenizer = Tokenizer::from_str(tok_json).unwrap(); let tok_str = serde_json::to_string_pretty(&tokenizer).unwrap(); // It should be exactly the same as above assert_eq!(tok_str, tok_json); } }
tokenizers/tokenizers/src/tokenizer/serialization.rs/0
{ "file_path": "tokenizers/tokenizers/src/tokenizer/serialization.rs", "repo_id": "tokenizers", "token_count": 3618 }
237
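The serialization round trip checked by the Rust test above can be reproduced from Python, where `Tokenizer.to_str` and `Tokenizer.from_str` go through the same serde code path. A hedged sketch; the tiny vocabulary and the added special token are placeholders:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordPiece

tokenizer = Tokenizer(WordPiece(vocab={"[UNK]": 0}, unk_token="[UNK]"))
tokenizer.add_special_tokens(["[SPECIAL_0]"])

# Serialize to the same JSON layout the Rust test asserts on...
json_str = tokenizer.to_str(pretty=True)

# ...and reload it; the round trip should be lossless.
reloaded = Tokenizer.from_str(json_str)
assert reloaded.to_str(pretty=True) == json_str
```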
mod common; use common::*; use tokenizers::decoders::byte_level::ByteLevel; use tokenizers::decoders::DecoderWrapper; use tokenizers::models::bpe::BPE; use tokenizers::models::wordlevel::WordLevel; use tokenizers::models::wordpiece::WordPiece; use tokenizers::models::ModelWrapper; use tokenizers::normalizers::bert::BertNormalizer; use tokenizers::normalizers::unicode::{NFC, NFKC}; use tokenizers::normalizers::NormalizerWrapper; use tokenizers::pre_tokenizers::bert::BertPreTokenizer; use tokenizers::pre_tokenizers::delimiter::CharDelimiterSplit; use tokenizers::pre_tokenizers::split::{Split, SplitPattern}; use tokenizers::pre_tokenizers::whitespace::Whitespace; use tokenizers::pre_tokenizers::PreTokenizerWrapper; use tokenizers::processors::bert::BertProcessing; use tokenizers::processors::PostProcessorWrapper; use tokenizers::{SplitDelimiterBehavior, Tokenizer, TokenizerImpl}; #[test] fn bpe_serde() { let bpe = get_byte_level_bpe(); let ser = serde_json::to_string(&bpe).unwrap(); let de = serde_json::from_str(&ser).unwrap(); assert_eq!(bpe, de); } #[test] fn wordpiece_serde() { let wordpiece = get_bert_wordpiece(); let ser = serde_json::to_string(&wordpiece).unwrap(); let de = serde_json::from_str(&ser).unwrap(); assert_eq!(wordpiece, de); } #[test] fn wordlevel_serde() { let wordlevel = WordLevel::from_file("data/gpt2-vocab.json", "<unk>".into()).unwrap(); let ser = serde_json::to_string(&wordlevel).unwrap(); let de = serde_json::from_str(&ser).unwrap(); assert_eq!(wordlevel, de); } #[test] fn normalizers() { // Test unit struct let nfc = NFC; let nfc_ser = serde_json::to_string(&nfc).unwrap(); assert_eq!(nfc_ser, r#"{"type":"NFC"}"#); // empty struct can deserialize from self serde_json::from_str::<NFC>(&nfc_ser).unwrap(); let err: Result<NFKC, _> = serde_json::from_str(&nfc_ser); assert!(err.is_err(), "NFKC shouldn't be deserializable from NFC"); // wrapper can can deserialize from inner let nfc_wrapped: NormalizerWrapper = serde_json::from_str(&nfc_ser).unwrap(); match &nfc_wrapped { NormalizerWrapper::NFC(_) => (), _ => panic!("NFC wrapped with incorrect variant"), } let ser_wrapped = serde_json::to_string(&nfc_wrapped).unwrap(); assert_eq!(ser_wrapped, nfc_ser); // Test non-empty roundtrip let bert = BertNormalizer::default(); let bert_ser = serde_json::to_string(&bert).unwrap(); assert_eq!( bert_ser, r#"{"type":"BertNormalizer","clean_text":true,"handle_chinese_chars":true,"strip_accents":null,"lowercase":true}"# ); // make sure we can deserialize to self serde_json::from_str::<BertNormalizer>(&bert_ser).unwrap(); // wrapper can deserialize from inner serialization let bert_wrapped: NormalizerWrapper = serde_json::from_str(&bert_ser).unwrap(); match &bert_wrapped { NormalizerWrapper::BertNormalizer(_) => (), _ => panic!("BertNormalizer wrapped with incorrect variant"), } // wrapped serializes same way as inner let ser_wrapped = serde_json::to_string(&bert_wrapped).unwrap(); assert_eq!(ser_wrapped, bert_ser); } #[test] fn processors() { let bert = BertProcessing::new(("SEP".into(), 0), ("CLS".into(), 0)); let bert_ser = serde_json::to_string(&bert).unwrap(); assert_eq!( bert_ser, r#"{"type":"BertProcessing","sep":["SEP",0],"cls":["CLS",0]}"# ); serde_json::from_str::<BertProcessing>(&bert_ser).unwrap(); let bert_wrapped: PostProcessorWrapper = serde_json::from_str(&bert_ser).unwrap(); match &bert_wrapped { PostProcessorWrapper::Bert(_) => (), _ => panic!("Bert wrapped with incorrect variant"), } let ser_wrapped = serde_json::to_string(&bert_wrapped).unwrap(); assert_eq!(ser_wrapped, 
bert_ser); } #[test] fn pretoks() { // Test unit struct let bert = BertPreTokenizer; let bert_ser = serde_json::to_string(&bert).unwrap(); assert_eq!(bert_ser, r#"{"type":"BertPreTokenizer"}"#); // empty struct can deserialize from self serde_json::from_str::<BertPreTokenizer>(&bert_ser).unwrap(); let err: Result<Whitespace, _> = serde_json::from_str(&bert_ser); assert!( err.is_err(), "Whitespace shouldn't be deserializable from BertPreTokenizer" ); // wrapper can can deserialize from inner let bert_wrapped: PreTokenizerWrapper = serde_json::from_str(&bert_ser).unwrap(); match &bert_wrapped { PreTokenizerWrapper::BertPreTokenizer(_) => (), _ => panic!("Bert wrapped with incorrect variant"), } let ser_wrapped = serde_json::to_string(&bert_wrapped).unwrap(); assert_eq!(ser_wrapped, bert_ser); // Test non-empty roundtrip let ch = CharDelimiterSplit::new(' '); let ch_ser = serde_json::to_string(&ch).unwrap(); assert_eq!(ch_ser, r#"{"type":"CharDelimiterSplit","delimiter":" "}"#); // make sure we can deserialize to self serde_json::from_str::<CharDelimiterSplit>(&ch_ser).unwrap(); // wrapper can deserialize from inner serialization let ch_wrapped: PreTokenizerWrapper = serde_json::from_str(&ch_ser).unwrap(); match &ch_wrapped { PreTokenizerWrapper::Delimiter(_) => (), _ => panic!("CharDelimiterSplit wrapped with incorrect variant"), } // wrapped serializes same way as inner let ser_wrapped = serde_json::to_string(&ch_wrapped).unwrap(); assert_eq!(ser_wrapped, ch_ser); let wsp = Whitespace {}; let wsp_ser = serde_json::to_string(&wsp).unwrap(); assert_eq!(wsp_ser, r#"{"type":"Whitespace"}"#); serde_json::from_str::<Whitespace>(&wsp_ser).unwrap(); let err: Result<BertPreTokenizer, _> = serde_json::from_str(&wsp_ser); assert!( err.is_err(), "BertPreTokenizer shouldn't be deserializable from Whitespace" ); let pattern: SplitPattern = "[SEP]".into(); let pretok = Split::new(pattern, SplitDelimiterBehavior::Isolated, false).unwrap(); let pretok_str = serde_json::to_string(&pretok).unwrap(); assert_eq!( pretok_str, r#"{"type":"Split","pattern":{"String":"[SEP]"},"behavior":"Isolated","invert":false}"# ); assert_eq!(serde_json::from_str::<Split>(&pretok_str).unwrap(), pretok); let pattern = SplitPattern::Regex("[SEP]".to_string()); let pretok = Split::new(pattern, SplitDelimiterBehavior::Isolated, false).unwrap(); let pretok_str = serde_json::to_string(&pretok).unwrap(); assert_eq!( pretok_str, r#"{"type":"Split","pattern":{"Regex":"[SEP]"},"behavior":"Isolated","invert":false}"# ); assert_eq!(serde_json::from_str::<Split>(&pretok_str).unwrap(), pretok); } #[test] fn decoders() { let byte_level = ByteLevel::default(); let byte_level_ser = serde_json::to_string(&byte_level).unwrap(); assert_eq!( byte_level_ser, r#"{"type":"ByteLevel","add_prefix_space":true,"trim_offsets":true,"use_regex":true}"# ); serde_json::from_str::<ByteLevel>(&byte_level_ser).unwrap(); let byte_level_wrapper: DecoderWrapper = serde_json::from_str(&byte_level_ser).unwrap(); match &byte_level_wrapper { DecoderWrapper::ByteLevel(_) => (), _ => panic!("ByteLevel wrapped with incorrect variant"), } let ser_wrapped = serde_json::to_string(&byte_level_wrapper).unwrap(); assert_eq!(ser_wrapped, byte_level_ser); } #[test] fn models() { let bpe = BPE::default(); let bpe_ser = serde_json::to_string(&bpe).unwrap(); serde_json::from_str::<BPE>(&bpe_ser).unwrap(); let bpe_wrapper: ModelWrapper = serde_json::from_str(&bpe_ser).unwrap(); match &bpe_wrapper { ModelWrapper::BPE(_) => (), _ => panic!("BPE wrapped with incorrect variant"), } let 
ser_wrapped = serde_json::to_string(&bpe_wrapper).unwrap(); assert_eq!(ser_wrapped, bpe_ser); } #[test] fn tokenizer() { let wordpiece = WordPiece::default(); let mut tokenizer = Tokenizer::new(wordpiece); tokenizer.with_normalizer(NFC); let ser = serde_json::to_string(&tokenizer).unwrap(); let _: Tokenizer = serde_json::from_str(&ser).unwrap(); let unwrapped_nfc_tok: TokenizerImpl< WordPiece, NFC, PreTokenizerWrapper, PostProcessorWrapper, DecoderWrapper, > = serde_json::from_str(&ser).unwrap(); assert_eq!(serde_json::to_string(&unwrapped_nfc_tok).unwrap(), ser); let err: Result< TokenizerImpl<WordPiece, NFKC, PreTokenizerWrapper, PostProcessorWrapper, DecoderWrapper>, _, > = serde_json::from_str(&ser); assert!(err.is_err(), "NFKC shouldn't be deserializable from NFC"); let de: TokenizerImpl< WordPiece, NormalizerWrapper, PreTokenizerWrapper, PostProcessorWrapper, DecoderWrapper, > = serde_json::from_str(&ser).unwrap(); assert_eq!(serde_json::to_string(&de).unwrap(), ser); } #[test] fn test_deserialize_long_file() { let _tokenizer = Tokenizer::from_file("data/albert-base-v1-tokenizer.json").unwrap(); }
tokenizers/tokenizers/tests/serialization.rs/0
{ "file_path": "tokenizers/tokenizers/tests/serialization.rs", "repo_id": "tokenizers", "token_count": 3683 }
238
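The wrapper-serialization checks above also explain what you see when inspecting a `tokenizer.json` from Python: each component serializes as a tagged object such as `{"type": "NFC"}`. A small sketch, assuming the `tokenizers` package is installed:

```python
import json

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.normalizers import NFC
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(BPE())
tokenizer.normalizer = NFC()
tokenizer.pre_tokenizer = Whitespace()

# The JSON mirrors the tagged forms asserted in the Rust tests.
config = json.loads(tokenizer.to_str())
print(config["normalizer"])     # {'type': 'NFC'}
print(config["pre_tokenizer"])  # {'type': 'Whitespace'}
```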
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
LABEL maintainer="Hugging Face"
LABEL repository="transformers"

RUN apt update && \
    apt install -y bash \
                   build-essential \
                   git \
                   curl \
                   ca-certificates \
                   python3 \
                   python3-pip && \
    rm -rf /var/lib/apt/lists

RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    python3 -m pip install --no-cache-dir \
        jupyter \
        tensorflow \
        torch

RUN git clone https://github.com/NVIDIA/apex
RUN cd apex && \
    python3 setup.py install && \
    pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

WORKDIR /workspace
COPY . transformers/
RUN cd transformers/ && \
    python3 -m pip install --no-cache-dir .

CMD ["/bin/bash"]
transformers/docker/transformers-gpu/Dockerfile/0
{ "file_path": "transformers/docker/transformers-gpu/Dockerfile", "repo_id": "transformers", "token_count": 397 }
239
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Schnellstart [[open-in-colab]] Mit 🤗 Transformers können Sie sofort loslegen! Verwenden Sie die [`pipeline`] für schnelle Inferenz und laden Sie schnell ein vortrainiertes Modell und einen Tokenizer mit einer [AutoClass](./model_doc/auto), um Ihre Text-, Bild- oder Audioaufgabe zu lösen. <Tip> Alle in der Dokumentation vorgestellten Codebeispiele haben oben links einen Umschalter für PyTorch und TensorFlow. Wenn nicht, wird erwartet, dass der Code für beide Backends ohne Änderungen funktioniert. </Tip> ## Pipeline [`pipeline`] ist der einfachste Weg, ein vortrainiertes Modell für eine bestimmte Aufgabe zu verwenden. <Youtube id="tiZFewofSLM"/> Die [`pipeline`] unterstützt viele gängige Aufgaben: **Text**: * Stimmungsanalyse: Klassifizierung der Polarität eines gegebenen Textes. * Textgenerierung (auf Englisch): Generierung von Text aus einer gegebenen Eingabe. * Name-Entity-Recognition (NER): Kennzeichnung jedes Worts mit der Entität, die es repräsentiert (Person, Datum, Ort usw.). * Beantwortung von Fragen: Extrahieren der Antwort aus dem Kontext, wenn ein gewisser Kontext und eine Frage gegeben sind. * Fill-mask: Ausfüllen von Lücken in einem Text mit maskierten Wörtern. * Zusammenfassung: Erstellung einer Zusammenfassung einer langen Text- oder Dokumentensequenz. * Übersetzung: Übersetzen eines Textes in eine andere Sprache. * Merkmalsextraktion: Erstellen einer Tensordarstellung des Textes. **Bild**: * Bildklassifizierung: Klassifizierung eines Bildes. * Bildsegmentierung: Klassifizierung jedes Pixels in einem Bild. * Objekterkennung: Erkennen von Objekten innerhalb eines Bildes. **Audio**: * Audioklassifizierung: Zuweisung eines Labels zu einem bestimmten Audiosegment. * Automatische Spracherkennung (ASR): Transkription von Audiodaten in Text. <Tip> Für mehr Details über die [`pipeline`] und assoziierte Aufgaben, schauen Sie in die Dokumentation [hier](./main_classes/pipelines). </Tip> ### Verwendung der Pipeline Im folgenden Beispiel werden Sie die [`pipeline`] für die Stimmungsanalyse verwenden. Installieren Sie die folgenden Abhängigkeiten, falls Sie dies nicht bereits getan haben: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> Importieren sie die [`pipeline`] und spezifizieren sie die Aufgabe, welche sie lösen möchten: ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` Die Pipeline lädt ein standardmäßiges [vortrainiertes Modell](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) und einen Tokenizer für die Stimmungs-Analyse herunter und speichert sie. 
Jetzt können Sie den "Klassifikator" auf Ihren Zieltext anwenden: ```py >>> classifier("We are very happy to show you the 🤗 Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` For more than one sentence, pass a list of sentences to the [`pipeline`] which returns a list of dictionaries: ```py >>> results = classifier(["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 ``` Die [`pipeline`] kann auch über einen ganzen Datensatz iterieren. Starten wir mit der Installation der [🤗 Datasets](https://huggingface.co/docs/datasets/) Bibliothek: ```bash pip install datasets ``` Erstellen wir eine [`pipeline`] mit der Aufgabe die wir lösen und dem Modell welches wir nutzen möchten. ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` Als nächstes laden wir den Datensatz (siehe 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart) für mehr Details) welches wir nutzen möchten. Zum Beispiel laden wir den [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) Datensatz: ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` Wir müssen sicherstellen, dass die Abtastrate des Datensatzes der Abtastrate entspricht, mit der `facebook/wav2vec2-base-960h` trainiert wurde. ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` Audiodateien werden automatisch geladen und neu abgetastet, wenn die Spalte "audio" aufgerufen wird. Extrahieren wir die rohen Wellenform-Arrays der ersten 4 Beispiele und übergeben wir sie als Liste an die Pipeline: ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT'] ``` Bei einem größeren Datensatz mit vielen Eingaben (wie bei Sprache oder Bildverarbeitung) sollten Sie einen Generator anstelle einer Liste übergeben, der alle Eingaben in den Speicher lädt. Weitere Informationen finden Sie in der [Pipeline-Dokumentation](./main_classes/pipelines). ### Ein anderes Modell und einen anderen Tokenizer in der Pipeline verwenden Die [`pipeline`] kann jedes Modell aus dem [Model Hub](https://huggingface.co/models) verwenden, wodurch es einfach ist, die [`pipeline`] für andere Anwendungsfälle anzupassen. Wenn Sie beispielsweise ein Modell wünschen, das französischen Text verarbeiten kann, verwenden Sie die Tags im Model Hub, um nach einem geeigneten Modell zu filtern. Das oberste gefilterte Ergebnis liefert ein mehrsprachiges [BERT-Modell](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment), das auf die Stimmungsanalyse abgestimmt ist. Großartig, verwenden wir dieses Modell! 
```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Use the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `AutoClass` below): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Use the [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `TFAutoClass` below): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Dann können Sie das Modell und den Tokenizer in der [`pipeline`] angeben und den `Klassifikator` auf Ihren Zieltext anwenden: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Wenn Sie kein Modell für Ihren Anwendungsfall finden können, müssen Sie ein vortrainiertes Modell auf Ihren Daten feinabstimmen. Schauen Sie sich unser [Feinabstimmungs-Tutorial](./training) an, um zu erfahren, wie das geht. Und schließlich, nachdem Sie Ihr trainiertes Modell verfeinert haben, sollten Sie es mit der Community im Model Hub teilen (siehe Tutorial [hier](./model_sharing)), um NLP für alle zu demokratisieren! 🤗 ## AutoClass <Youtube id="AhChOFRegn4"/> Unter der Haube arbeiten die Klassen [`AutoModelForSequenceClassification`] und [`AutoTokenizer`] zusammen, um die [`pipeline`] zu betreiben. Eine [`AutoClass`](./model_doc/auto) ist eine Abkürzung, die automatisch die Architektur eines trainierten Modells aus dessen Namen oder Pfad abruft. Sie müssen nur die passende `AutoClass` für Ihre Aufgabe und den zugehörigen Tokenizer mit [`AutoTokenizer`] auswählen. Kehren wir zu unserem Beispiel zurück und sehen wir uns an, wie Sie die `AutoClass` verwenden können, um die Ergebnisse der [`pipeline`] zu replizieren. ### AutoTokenizer Ein Tokenizer ist für die Vorverarbeitung von Text in ein für das Modell verständliches Format zuständig. Zunächst zerlegt der Tokenisierer den Text in Wörter, die *Token* genannt werden. Es gibt mehrere Regeln für den Tokenisierungsprozess, z. B. wie und auf welcher Ebene ein Wort aufgespalten wird (weitere Informationen über Tokenisierung [hier](./tokenizer_summary)). Das Wichtigste ist jedoch, dass Sie den Tokenizer mit demselben Modellnamen instanziieren müssen, um sicherzustellen, dass Sie dieselben Tokenisierungsregeln verwenden, mit denen ein Modell zuvor trainiert wurde. Laden sie einen Tokenizer mit [`AutoTokenizer`]: ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` Anschließend wandelt der Tokenizer die Token in Zahlen um, um einen Tensor als Eingabe für das Modell zu konstruieren. Dieser wird als *Vokabular* des Modells bezeichnet. 
Übergeben Sie Ihren Text an den Tokenizer: ```py >>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Der Tokenizer gibt ein Wörterbuch zurück, das Folgendes enthält: * [input_ids](./glossary#input-ids): numerische Repräsentationen Ihrer Token. * [atttention_mask](.glossary#attention-mask): gibt an, welche Token beachtet werden sollen. Genau wie die [`pipeline`] akzeptiert der Tokenizer eine Liste von Eingaben. Darüber hinaus kann der Tokenizer den Text auch auffüllen und kürzen, um einen Stapel mit einheitlicher Länge zurückzugeben: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> Lesen Sie das Tutorial [preprocessing](./preprocessing) für weitere Details zur Tokenisierung. ### AutoModel <frameworkcontent> <pt> 🤗 Transformers bietet eine einfache und einheitliche Möglichkeit, vortrainierte Instanzen zu laden. Das bedeutet, dass Sie ein [`AutoModel`] laden können, wie Sie einen [`AutoTokenizer`] laden würden. Der einzige Unterschied ist die Auswahl des richtigen [`AutoModel`] für die Aufgabe. Da Sie eine Text- oder Sequenzklassifizierung vornehmen, laden Sie [`AutoModelForSequenceClassification`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> In der [Aufgabenzusammenfassung](./task_summary) steht, welche [AutoModel]-Klasse für welche Aufgabe zu verwenden ist. </Tip> Jetzt können Sie Ihren vorverarbeiteten Stapel von Eingaben direkt an das Modell übergeben. Sie müssen nur das Wörterbuch entpacken, indem Sie `**` hinzufügen: ```py >>> pt_outputs = pt_model(**pt_batch) ``` Das Modell gibt die endgültigen Aktivierungen in dem Attribut "logits" aus. Wenden Sie die Softmax-Funktion auf die "logits" an, um die Wahrscheinlichkeiten zu erhalten: ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> 🤗 Transformers bietet eine einfache und einheitliche Methode zum Laden von vortrainierten Instanzen. Das bedeutet, dass Sie ein [`TFAutoModel`] genauso laden können, wie Sie einen [`AutoTokenizer`] laden würden. Der einzige Unterschied ist die Auswahl des richtigen [`TFAutoModel`] für die Aufgabe. 
Da Sie Text - oder Sequenz - Klassifizierung machen, laden Sie [`TFAutoModelForSequenceClassification`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> In der [Aufgabenzusammenfassung](./task_summary) steht, welche [AutoModel]-Klasse für welche Aufgabe zu verwenden ist. </Tip> Jetzt können Sie Ihren vorverarbeiteten Stapel von Eingaben direkt an das Modell übergeben, indem Sie die Wörterbuchschlüssel direkt an die Tensoren übergeben: ```py >>> tf_outputs = tf_model(tf_batch) ``` Das Modell gibt die endgültigen Aktivierungen in dem Attribut "logits" aus. Wenden Sie die Softmax-Funktion auf die "logits" an, um die Wahrscheinlichkeiten zu erhalten: ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> Alle 🤗 Transformers-Modelle (PyTorch oder TensorFlow) geben die Tensoren *vor* der endgültigen Aktivierungsfunktion Funktion (wie Softmax) aus, da die endgültige Aktivierungsfunktion oft mit dem Verlusten verschmolzen ist. </Tip> Modelle sind ein standardmäßiges [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) oder ein [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model), sodass Sie sie in Ihrer üblichen Trainingsschleife verwenden können. Um jedoch die Dinge einfacher zu machen, bietet 🤗 Transformers eine [`Trainer`]-Klasse für PyTorch, die Funktionalität für verteiltes Training, gemischte Präzision und mehr bietet. Für TensorFlow können Sie die Methode `fit` aus [Keras](https://keras.io/) verwenden. Siehe das [training tutorial](./training) für weitere Details. <Tip> Transformers-Modellausgaben sind spezielle Datenklassen, so dass ihre Attribute in einer IDE automatisch vervollständigt werden. Die Modellausgänge verhalten sich auch wie ein Tupel oder ein Wörterbuch (z.B. können Sie mit einem Integer, einem Slice oder einem String indexieren), wobei die Attribute, die "None" sind, ignoriert werden. </Tip> ### Modell speichern <frameworkcontent> <pt> Sobald Ihr Modell feinabgestimmt ist, können Sie es mit seinem Tokenizer speichern, indem Sie [`PreTrainedModel.save_pretrained`] verwenden: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` Wenn Sie bereit sind, das Modell erneut zu verwenden, laden Sie es mit [`PreTrainedModel.from_pretrained`]: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Sobald Ihr Modell feinabgestimmt ist, können Sie es mit seinem Tokenizer unter Verwendung von [`TFPreTrainedModel.save_pretrained`] speichern: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` Wenn Sie bereit sind, das Modell wieder zu verwenden, laden Sie es mit [`TFPreTrainedModel.from_pretrained`]: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> Ein besonders cooles 🤗 Transformers-Feature ist die Möglichkeit, ein Modell zu speichern und es entweder als PyTorch- oder TensorFlow-Modell wieder zu laden. 
Der Parameter "from_pt" oder "from_tf" kann das Modell von einem Framework in das andere konvertieren: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Custom model builds Sie können die Konfigurationsklasse des Modells ändern, um zu bestimmen, wie ein Modell aufgebaut ist. Die Konfiguration legt die Attribute eines Modells fest, z. B. die Anzahl der verborgenen Schichten oder der Aufmerksamkeitsköpfe. Wenn Sie ein Modell aus einer benutzerdefinierten Konfigurationsklasse initialisieren, beginnen Sie bei Null. Die Modellattribute werden zufällig initialisiert, und Sie müssen das Modell trainieren, bevor Sie es verwenden können, um aussagekräftige Ergebnisse zu erhalten. Beginnen Sie mit dem Import von [`AutoConfig`] und laden Sie dann das trainierte Modell, das Sie ändern möchten. Innerhalb von [`AutoConfig.from_pretrained`] können Sie das Attribut angeben, das Sie ändern möchten, z. B. die Anzahl der Aufmerksamkeitsköpfe: ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Create a model from your custom configuration with [`AutoModel.from_config`]: ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Create a model from your custom configuration with [`TFAutoModel.from_config`]: ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Weitere Informationen zur Erstellung von benutzerdefinierten Konfigurationen finden Sie in der Anleitung [Erstellen einer benutzerdefinierten Architektur](./create_a_model). ## Wie geht es weiter? Nachdem Sie nun die 🤗 Transformers-Kurztour abgeschlossen haben, schauen Sie sich unsere Anleitungen an und erfahren Sie, wie Sie spezifischere Dinge tun können, wie das Schreiben eines benutzerdefinierten Modells, die Feinabstimmung eines Modells für eine Aufgabe und wie man ein Modell mit einem Skript trainiert. Wenn Sie mehr über die Kernkonzepte von 🤗 Transformers erfahren möchten, nehmen Sie sich eine Tasse Kaffee und werfen Sie einen Blick auf unsere konzeptionellen Leitfäden!
transformers/docs/source/de/quicktour.md/0
{ "file_path": "transformers/docs/source/de/quicktour.md", "repo_id": "transformers", "token_count": 7330 }
240
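The quicktour above recommends passing a generator instead of a list when running a [`pipeline`] over a large dataset, but does not show it. A minimal sketch, assuming the `dataset` and `speech_recognizer` objects defined in that quicktour; the exact memory behaviour depends on the pipeline's batching settings:

```python
def audio_stream():
    # Yield one example at a time instead of materializing a full list in memory.
    for example in dataset:
        yield example["audio"]

# The pipeline consumes the generator lazily and yields one prediction per input.
for prediction in speech_recognizer(audio_stream()):
    print(prediction["text"])
```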
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Instantiating a big model When you want to use a very big pretrained model, one challenge is to minimize the use of the RAM. The usual workflow from PyTorch is: 1. Create your model with random weights. 2. Load your pretrained weights. 3. Put those pretrained weights in your random model. Step 1 and 2 both require a full version of the model in memory, which is not a problem in most cases, but if your model starts weighing several GigaBytes, those two copies can make you get out of RAM. Even worse, if you are using `torch.distributed` to launch a distributed training, each process will load the pretrained model and store these two copies in RAM. <Tip> Note that the randomly created model is initialized with "empty" tensors, which take the space in memory without filling it (thus the random values are whatever was in this chunk of memory at a given time). The random initialization following the appropriate distribution for the kind of model/parameters instantiated (like a normal distribution for instance) is only performed after step 3 on the non-initialized weights, to be as fast as possible! </Tip> In this guide, we explore the solutions Transformers offer to deal with this issue. Note that this is an area of active development, so the APIs explained here may change slightly in the future. ## Sharded checkpoints Since version 4.18.0, model checkpoints that end up taking more than 10GB of space are automatically sharded in smaller pieces. In terms of having one single checkpoint when you do `model.save_pretrained(save_dir)`, you will end up with several partial checkpoints (each of which being of size < 10GB) and an index that maps parameter names to the files they are stored in. You can control the maximum size before sharding with the `max_shard_size` parameter, so for the sake of an example, we'll use a normal-size models with a small shard size: let's take a traditional BERT model. ```py from transformers import AutoModel model = AutoModel.from_pretrained("google-bert/bert-base-cased") ``` If you save it using [`~PreTrainedModel.save_pretrained`], you will get a new folder with two files: the config of the model and its weights: ```py >>> import os >>> import tempfile >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir) ... print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model.bin'] ``` Now let's use a maximum shard size of 200MB: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... 
print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json'] ``` On top of the configuration of the model, we see three different weights files, and an `index.json` file which is our index. A checkpoint like this can be fully reloaded using the [`~PreTrainedModel.from_pretrained`] method: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... new_model = AutoModel.from_pretrained(tmp_dir) ``` The main advantage of doing this for big models is that during step 2 of the workflow shown above, each shard of the checkpoint is loaded after the previous one, capping the memory usage in RAM to the model size plus the size of the biggest shard. Behind the scenes, the index file is used to determine which keys are in the checkpoint, and where the corresponding weights are stored. We can load that index like any json and get a dictionary: ```py >>> import json >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f: ... index = json.load(f) >>> print(index.keys()) dict_keys(['metadata', 'weight_map']) ``` The metadata just consists of the total size of the model for now. We plan to add other information in the future: ```py >>> index["metadata"] {'total_size': 433245184} ``` The weights map is the main part of this index, which maps each parameter name (as usually found in a PyTorch model `state_dict`) to the file it's stored in: ```py >>> index["weight_map"] {'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin', 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin', ... ``` If you want to directly load such a sharded checkpoint inside a model without using [`~PreTrainedModel.from_pretrained`] (like you would do `model.load_state_dict()` for a full checkpoint) you should use [`~modeling_utils.load_sharded_checkpoint`]: ```py >>> from transformers.modeling_utils import load_sharded_checkpoint >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... load_sharded_checkpoint(model, tmp_dir) ``` ## Low memory loading Sharded checkpoints reduce the memory usage during step 2 of the workflow mentioned above, but in order to use that model in a low memory setting, we recommend leveraging our tools based on the Accelerate library. Please read the following guide for more information: [Large model loading using Accelerate](./main_classes/model#large-model-loading)
transformers/docs/source/en/big_models.md/0
{ "file_path": "transformers/docs/source/en/big_models.md", "repo_id": "transformers", "token_count": 1722 }
241
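The "Low memory loading" section above points to Accelerate-based loading without showing code. A hedged sketch of the usual entry point, assuming `accelerate` is installed; the checkpoint name matches the BERT example used earlier in that guide:

```python
from transformers import AutoModel

# Weights are loaded shard by shard into a meta-initialized model and dispatched
# across the available devices, instead of first building a second full copy
# of the randomly initialized model in RAM.
model = AutoModel.from_pretrained(
    "google-bert/bert-base-cased",
    low_cpu_mem_usage=True,
    device_map="auto",
)
```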
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Installation Install 🤗 Transformers for whichever deep learning library you're working with, setup your cache, and optionally configure 🤗 Transformers to run offline. 🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: * [PyTorch](https://pytorch.org/get-started/locally/) installation instructions. * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions. * [Flax](https://flax.readthedocs.io/en/latest/) installation instructions. ## Install with pip You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: ```bash python -m venv .env ``` Activate the virtual environment. On Linux and MacOs: ```bash source .env/bin/activate ``` Activate Virtual environment on Windows ```bash .env/Scripts/activate ``` Now you're ready to install 🤗 Transformers with the following command: ```bash pip install transformers ``` For CPU-support only, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with: ```bash pip install 'transformers[torch]' ``` 🤗 Transformers and TensorFlow 2.0: ```bash pip install 'transformers[tf-cpu]' ``` <Tip warning={true}> M1 / ARM Users You will need to install the following before installing TensorFLow 2.0 ```bash brew install cmake brew install pkg-config ``` </Tip> 🤗 Transformers and Flax: ```bash pip install 'transformers[flax]' ``` Finally, check if 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` Then print out the label and score: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## Install from source Install 🤗 Transformers from source with the following command: ```bash pip install git+https://github.com/huggingface/transformers ``` This command installs the bleeding edge `main` version rather than the latest `stable` version. The `main` version is useful for staying up-to-date with the latest developments. For instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this means the `main` version may not always be stable. 
We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it even sooner! Check if 🤗 Transformers has been properly installed by running the following command: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## Editable install You will need an editable install if you'd like to: * Use the `main` version of the source code. * Contribute to 🤗 Transformers and need to test changes in the code. Clone the repository and install 🤗 Transformers with the following commands: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` These commands will link the folder you cloned the repository to and your Python library paths. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned to: `~/transformers/`. <Tip warning={true}> You must keep the `transformers` folder if you want to keep using the library. </Tip> Now you can easily update your clone to the latest version of 🤗 Transformers with the following command: ```bash cd ~/transformers/ git pull ``` Your Python environment will find the `main` version of 🤗 Transformers on the next run. ## Install with conda Install from the conda channel `conda-forge`: ```bash conda install conda-forge::transformers ``` ## Cache setup Pretrained models are downloaded and locally cached at: `~/.cache/huggingface/hub`. This is the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is given by `C:\Users\username\.cache\huggingface\hub`. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory: 1. Shell environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`. 2. Shell environment variable: `HF_HOME`. 3. Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`. <Tip> 🤗 Transformers will use the shell environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable `TRANSFORMERS_CACHE`. </Tip> ## Offline mode Run 🤗 Transformers in a firewalled or offline environment with locally cached files by setting the environment variable `TRANSFORMERS_OFFLINE=1`. <Tip> Add [🤗 Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow with the environment variable `HF_DATASETS_OFFLINE=1`. </Tip> ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` This script should run without hanging or waiting to timeout because it won't attempt to download the model from the Hub. You can also bypass loading a model from the Hub from each [`~PreTrainedModel.from_pretrained`] call with the [`local_files_only`] parameter. 
When set to `True`, only local files are loaded: ```py from transformers import T5Model model = T5Model.from_pretrained("./path/to/local/directory", local_files_only=True) ``` ### Fetch models and tokenizers to use offline Another option for using 🤗 Transformers offline is to download the files ahead of time, and then point to their local path when you need to use them offline. There are three ways to do this: * Download a file through the user interface on the [Model Hub](https://huggingface.co/models) by clicking on the ↓ icon. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow: 1. Download your files ahead of time with [`PreTrainedModel.from_pretrained`]: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B") ``` 2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]: ```py >>> tokenizer.save_pretrained("./your/path/bigscience_t0") >>> model.save_pretrained("./your/path/bigscience_t0") ``` 3. Now when you're offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory: ```py >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0") >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0") ``` * Programmatically download files with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library: 1. Install the `huggingface_hub` library in your virtual environment: ```bash python -m pip install huggingface_hub ``` 2. Use the [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) function to download a file to a specific path. For example, the following command downloads the `config.json` file from the [T0](https://huggingface.co/bigscience/T0_3B) model to your desired path: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0") ``` Once your file is downloaded and locally cached, specify it's local path to load and use it: ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> See the [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) section for more details on downloading files stored on the Hub. </Tip>
transformers/docs/source/en/installation.md/0
{ "file_path": "transformers/docs/source/en/installation.md", "repo_id": "transformers", "token_count": 2901 }
242
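As a complement to the shell-variable examples in the installation guide above, the same offline switches can be set from Python before `transformers` is imported. A small sketch; the local directory is the one created with `save_pretrained` in that guide:

```python
import os

# Must be set before transformers reads them, i.e. before the import below.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Everything resolves from the local cache / local paths, no Hub requests.
tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0")
```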
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Data Collator

Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of
the same type as the elements of `train_dataset` or `eval_dataset`.

To be able to build batches, data collators may apply some processing (like padding). Some of them (like
[`DataCollatorForLanguageModeling`]) also apply some random data augmentation (like random masking) on the formed batch.

Examples of use can be found in the [example scripts](../examples) or [example notebooks](../notebooks).

## Default data collator

[[autodoc]] data.data_collator.default_data_collator

## DefaultDataCollator

[[autodoc]] data.data_collator.DefaultDataCollator

## DataCollatorWithPadding

[[autodoc]] data.data_collator.DataCollatorWithPadding

## DataCollatorForTokenClassification

[[autodoc]] data.data_collator.DataCollatorForTokenClassification

## DataCollatorForSeq2Seq

[[autodoc]] data.data_collator.DataCollatorForSeq2Seq

## DataCollatorForLanguageModeling

[[autodoc]] data.data_collator.DataCollatorForLanguageModeling
    - numpy_mask_tokens
    - tf_mask_tokens
    - torch_mask_tokens

## DataCollatorForWholeWordMask

[[autodoc]] data.data_collator.DataCollatorForWholeWordMask
    - numpy_mask_tokens
    - tf_mask_tokens
    - torch_mask_tokens

## DataCollatorForPermutationLanguageModeling

[[autodoc]] data.data_collator.DataCollatorForPermutationLanguageModeling
    - numpy_mask_tokens
    - tf_mask_tokens
    - torch_mask_tokens
transformers/docs/source/en/main_classes/data_collator.md/0
{ "file_path": "transformers/docs/source/en/main_classes/data_collator.md", "repo_id": "transformers", "token_count": 681 }
243
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ALBERT <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=albert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-albert-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/albert-base-v2"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The ALBERT model was proposed in [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942) by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT: - Splitting the embedding matrix into two smaller matrices. - Using repeating layers split among groups. The abstract from the paper is the following: *Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.* This model was contributed by [lysandre](https://huggingface.co/lysandre). This model jax version was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/ALBERT). ## Usage tips - ALBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - ALBERT uses repeating layers which results in a small memory footprint, however the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. - Embedding size E is different from hidden size H justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens) so it's more logical to have H >> E. Also, the embedding matrix is large since it's V x E (V being the vocab size). 
If E < H, it has less parameters. - Layers are split in groups that share parameters (to save memory). Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B (that are consecutive) and we either feed A followed by B or B followed by A. The model must predict if they have been swapped or not. This model was contributed by [lysandre](https://huggingface.co/lysandre). This model jax version was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/ALBERT). ## Resources The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with AlBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/> - [`AlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification). - [`TFAlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification). - [`FlaxAlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - Check the [Text classification task guide](../tasks/sequence_classification) on how to use the model. <PipelineTag pipeline="token-classification"/> - [`AlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification). - [`TFAlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxAlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Token classification task guide](../tasks/token_classification) on how to use the model. <PipelineTag pipeline="fill-mask"/> - [`AlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFAlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). 
- [`FlaxAlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Masked language modeling task guide](../tasks/masked_language_modeling) on how to use the model. <PipelineTag pipeline="question-answering"/> - [`AlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFAlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxAlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Question answering task guide](../tasks/question_answering) on how to use the model. **Multiple choice** - [`AlbertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFAlbertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - Check the [Multiple choice task guide](../tasks/multiple_choice) on how to use the model. 
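To complement the resources above, the following is a short masked language modeling sketch with ALBERT; the `albert-base-v2` checkpoint and the example sentence are only illustrative:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AlbertForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # index of the masked token, then the highest scoring vocabulary entry at that position
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(-1)
>>> tokenizer.decode(predicted_id)
```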
## AlbertConfig [[autodoc]] AlbertConfig ## AlbertTokenizer [[autodoc]] AlbertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## AlbertTokenizerFast [[autodoc]] AlbertTokenizerFast ## Albert specific outputs [[autodoc]] models.albert.modeling_albert.AlbertForPreTrainingOutput [[autodoc]] models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput <frameworkcontent> <pt> ## AlbertModel [[autodoc]] AlbertModel - forward ## AlbertForPreTraining [[autodoc]] AlbertForPreTraining - forward ## AlbertForMaskedLM [[autodoc]] AlbertForMaskedLM - forward ## AlbertForSequenceClassification [[autodoc]] AlbertForSequenceClassification - forward ## AlbertForMultipleChoice [[autodoc]] AlbertForMultipleChoice ## AlbertForTokenClassification [[autodoc]] AlbertForTokenClassification - forward ## AlbertForQuestionAnswering [[autodoc]] AlbertForQuestionAnswering - forward </pt> <tf> ## TFAlbertModel [[autodoc]] TFAlbertModel - call ## TFAlbertForPreTraining [[autodoc]] TFAlbertForPreTraining - call ## TFAlbertForMaskedLM [[autodoc]] TFAlbertForMaskedLM - call ## TFAlbertForSequenceClassification [[autodoc]] TFAlbertForSequenceClassification - call ## TFAlbertForMultipleChoice [[autodoc]] TFAlbertForMultipleChoice - call ## TFAlbertForTokenClassification [[autodoc]] TFAlbertForTokenClassification - call ## TFAlbertForQuestionAnswering [[autodoc]] TFAlbertForQuestionAnswering - call </tf> <jax> ## FlaxAlbertModel [[autodoc]] FlaxAlbertModel - __call__ ## FlaxAlbertForPreTraining [[autodoc]] FlaxAlbertForPreTraining - __call__ ## FlaxAlbertForMaskedLM [[autodoc]] FlaxAlbertForMaskedLM - __call__ ## FlaxAlbertForSequenceClassification [[autodoc]] FlaxAlbertForSequenceClassification - __call__ ## FlaxAlbertForMultipleChoice [[autodoc]] FlaxAlbertForMultipleChoice - __call__ ## FlaxAlbertForTokenClassification [[autodoc]] FlaxAlbertForTokenClassification - __call__ ## FlaxAlbertForQuestionAnswering [[autodoc]] FlaxAlbertForQuestionAnswering - __call__ </jax> </frameworkcontent>
transformers/docs/source/en/model_doc/albert.md/0
{ "file_path": "transformers/docs/source/en/model_doc/albert.md", "repo_id": "transformers", "token_count": 3405 }
244
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CLIP ## Overview The CLIP model was proposed in [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. The abstract from the paper is the following: *State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.* This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/openai/CLIP). ## Usage tips and example CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. CLIP uses a ViT-like transformer to get visual features and a causal language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score. 
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model. The [`CLIPTokenizer`] is used to encode the text. The [`CLIPProcessor`] wraps [`CLIPImageProcessor`] and [`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using [`CLIPProcessor`] and [`CLIPModel`]. ```python >>> from PIL import Image >>> import requests >>> from transformers import CLIPProcessor, CLIPModel >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP. - [Fine tuning CLIP with Remote Sensing (Satellite) images and captions](https://huggingface.co/blog/fine-tune-clip-rsicd), a blog post about how to fine-tune CLIP with [RSICD dataset](https://github.com/201528014227051/RSICD_optimal) and comparison of performance changes due to data augmentation. - This [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) shows how to train a CLIP-like vision-text dual encoder model using a pre-trained vision and text encoder using [COCO dataset](https://cocodataset.org/#home). <PipelineTag pipeline="image-to-text"/> - A [notebook](https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing) on how to use a pretrained CLIP for inference with beam search for image captioning. 🌎 **Image retrieval** - A [notebook](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing) on image retrieval using pretrained CLIP and computing MRR(Mean Reciprocal Rank) score. 🌎 - A [notebook](https://colab.research.google.com/github/deep-diver/image_search_with_natural_language/blob/main/notebooks/Image_Search_CLIP.ipynb) on image retrieval and showing the similarity score. 🌎 - A [notebook](https://colab.research.google.com/drive/1xO-wC_m_GNzgjIBQ4a4znvQkvDoZJvH4?usp=sharing) on how to map images and texts to the same vector space using Multilingual CLIP. 🌎 - A [notebook](https://colab.research.google.com/github/vivien000/clip-demo/blob/master/clip.ipynb#scrollTo=uzdFhRGqiWkR) on how to run CLIP on semantic image search using [Unsplash](https://unsplash.com) and [TMDB](https://www.themoviedb.org/) datasets. 🌎 **Explainability** - A [notebook](https://colab.research.google.com/github/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb) on how to visualize similarity between input token and image segment. 
🌎 If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. ## CLIPConfig [[autodoc]] CLIPConfig - from_text_vision_configs ## CLIPTextConfig [[autodoc]] CLIPTextConfig ## CLIPVisionConfig [[autodoc]] CLIPVisionConfig ## CLIPTokenizer [[autodoc]] CLIPTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## CLIPTokenizerFast [[autodoc]] CLIPTokenizerFast ## CLIPImageProcessor [[autodoc]] CLIPImageProcessor - preprocess ## CLIPFeatureExtractor [[autodoc]] CLIPFeatureExtractor ## CLIPProcessor [[autodoc]] CLIPProcessor <frameworkcontent> <pt> ## CLIPModel [[autodoc]] CLIPModel - forward - get_text_features - get_image_features ## CLIPTextModel [[autodoc]] CLIPTextModel - forward ## CLIPTextModelWithProjection [[autodoc]] CLIPTextModelWithProjection - forward ## CLIPVisionModelWithProjection [[autodoc]] CLIPVisionModelWithProjection - forward ## CLIPVisionModel [[autodoc]] CLIPVisionModel - forward ## CLIPForImageClassification [[autodoc]] CLIPForImageClassification - forward </pt> <tf> ## TFCLIPModel [[autodoc]] TFCLIPModel - call - get_text_features - get_image_features ## TFCLIPTextModel [[autodoc]] TFCLIPTextModel - call ## TFCLIPVisionModel [[autodoc]] TFCLIPVisionModel - call </tf> <jax> ## FlaxCLIPModel [[autodoc]] FlaxCLIPModel - __call__ - get_text_features - get_image_features ## FlaxCLIPTextModel [[autodoc]] FlaxCLIPTextModel - __call__ ## FlaxCLIPTextModelWithProjection [[autodoc]] FlaxCLIPTextModelWithProjection - __call__ ## FlaxCLIPVisionModel [[autodoc]] FlaxCLIPVisionModel - __call__ </jax> </frameworkcontent>
transformers/docs/source/en/model_doc/clip.md/0
{ "file_path": "transformers/docs/source/en/model_doc/clip.md", "repo_id": "transformers", "token_count": 2696 }
245
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DeBERTa ## Overview The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019. It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in RoBERTa. The abstract from the paper is the following: *Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.* This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). The TF 2.0 implementation of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/DeBERTa). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/> - A blog post on how to [Accelerate Large Model Training using DeepSpeed](https://huggingface.co/blog/accelerate-deepspeed) with DeBERTa. - A blog post on [Supercharged Customer Service with Machine Learning](https://huggingface.co/blog/supercharge-customer-service-with-machine-learning) with DeBERTa. 
- [`DebertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFDebertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification" /> - [`DebertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). - [`TFDebertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - [Byte-Pair Encoding tokenization](https://huggingface.co/course/chapter6/5?fw=pt) chapter of the 🤗 Hugging Face Course. - [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/> - [`DebertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFDebertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - [`DebertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFDebertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. 
- [Question answering task guide](../tasks/question_answering) ## DebertaConfig [[autodoc]] DebertaConfig ## DebertaTokenizer [[autodoc]] DebertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## DebertaTokenizerFast [[autodoc]] DebertaTokenizerFast - build_inputs_with_special_tokens - create_token_type_ids_from_sequences <frameworkcontent> <pt> ## DebertaModel [[autodoc]] DebertaModel - forward ## DebertaPreTrainedModel [[autodoc]] DebertaPreTrainedModel ## DebertaForMaskedLM [[autodoc]] DebertaForMaskedLM - forward ## DebertaForSequenceClassification [[autodoc]] DebertaForSequenceClassification - forward ## DebertaForTokenClassification [[autodoc]] DebertaForTokenClassification - forward ## DebertaForQuestionAnswering [[autodoc]] DebertaForQuestionAnswering - forward </pt> <tf> ## TFDebertaModel [[autodoc]] TFDebertaModel - call ## TFDebertaPreTrainedModel [[autodoc]] TFDebertaPreTrainedModel - call ## TFDebertaForMaskedLM [[autodoc]] TFDebertaForMaskedLM - call ## TFDebertaForSequenceClassification [[autodoc]] TFDebertaForSequenceClassification - call ## TFDebertaForTokenClassification [[autodoc]] TFDebertaForTokenClassification - call ## TFDebertaForQuestionAnswering [[autodoc]] TFDebertaForQuestionAnswering - call </tf> </frameworkcontent>
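As a minimal sketch of how the classes documented above fit together, the snippet below extracts hidden states from a pretrained DeBERTa encoder; the `microsoft/deberta-base` checkpoint and the input sentence are only examples:

```python
>>> import torch
>>> from transformers import AutoTokenizer, DebertaModel

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
>>> model = DebertaModel.from_pretrained("microsoft/deberta-base")

>>> inputs = tokenizer("DeBERTa uses disentangled attention.", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
```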
transformers/docs/source/en/model_doc/deberta.md/0
{ "file_path": "transformers/docs/source/en/model_doc/deberta.md", "repo_id": "transformers", "token_count": 2499 }
246
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # EfficientFormer ## Overview The EfficientFormer model was proposed in [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object detection and semantic segmentation. The abstract from the paper is the following: *Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1), and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.* This model was contributed by [novice03](https://huggingface.co/novice03) and [Bearnardd](https://huggingface.co/Bearnardd). The original code can be found [here](https://github.com/snap-research/EfficientFormer). The TensorFlow version of this model was added by [D-Roberts](https://huggingface.co/D-Roberts). 
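As a rough usage sketch, EfficientFormer can be run for image classification as follows; the `snap-research/efficientformer-l1-300` checkpoint name and the test image URL are assumptions and may need to be adapted:

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import EfficientFormerImageProcessor, EfficientFormerForImageClassification

>>> checkpoint = "snap-research/efficientformer-l1-300"  # assumed checkpoint name
>>> processor = EfficientFormerImageProcessor.from_pretrained(checkpoint)
>>> model = EfficientFormerForImageClassification.from_pretrained(checkpoint)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # the class with the highest score
>>> print(model.config.id2label[logits.argmax(-1).item()])
```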
## Documentation resources - [Image classification task guide](../tasks/image_classification) ## EfficientFormerConfig [[autodoc]] EfficientFormerConfig ## EfficientFormerImageProcessor [[autodoc]] EfficientFormerImageProcessor - preprocess <frameworkcontent> <pt> ## EfficientFormerModel [[autodoc]] EfficientFormerModel - forward ## EfficientFormerForImageClassification [[autodoc]] EfficientFormerForImageClassification - forward ## EfficientFormerForImageClassificationWithTeacher [[autodoc]] EfficientFormerForImageClassificationWithTeacher - forward </pt> <tf> ## TFEfficientFormerModel [[autodoc]] TFEfficientFormerModel - call ## TFEfficientFormerForImageClassification [[autodoc]] TFEfficientFormerForImageClassification - call ## TFEfficientFormerForImageClassificationWithTeacher [[autodoc]] TFEfficientFormerForImageClassificationWithTeacher - call </tf> </frameworkcontent>
transformers/docs/source/en/model_doc/efficientformer.md/0
{ "file_path": "transformers/docs/source/en/model_doc/efficientformer.md", "repo_id": "transformers", "token_count": 1075 }
247
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # FSMT ## Overview FSMT (FairSeq MachineTranslation) models were introduced in [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616) by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov. The abstract of the paper is the following: *This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. This system improves upon our WMT'18 submission by 4.5 BLEU points.* This model was contributed by [stas](https://huggingface.co/stas). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/wmt19). ## Implementation Notes - FSMT uses source and target vocabulary pairs that aren't combined into one. It doesn't share embeddings tokens either. Its tokenizer is very similar to [`XLMTokenizer`] and the main model is derived from [`BartModel`]. ## FSMTConfig [[autodoc]] FSMTConfig ## FSMTTokenizer [[autodoc]] FSMTTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## FSMTModel [[autodoc]] FSMTModel - forward ## FSMTForConditionalGeneration [[autodoc]] FSMTForConditionalGeneration - forward
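To see the tokenizer and model from the API reference above in action, here is a short translation sketch; the `facebook/wmt19-en-de` checkpoint and the input sentence are used purely as examples:

```python
>>> from transformers import FSMTForConditionalGeneration, FSMTTokenizer

>>> mname = "facebook/wmt19-en-de"
>>> tokenizer = FSMTTokenizer.from_pretrained(mname)
>>> model = FSMTForConditionalGeneration.from_pretrained(mname)

>>> input_ids = tokenizer("Machine learning is great, isn't it?", return_tensors="pt").input_ids
>>> output_ids = model.generate(input_ids)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```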
transformers/docs/source/en/model_doc/fsmt.md/0
{ "file_path": "transformers/docs/source/en/model_doc/fsmt.md", "repo_id": "transformers", "token_count": 739 }
248
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # HerBERT ## Overview The HerBERT model was proposed in [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. It is a BERT-based Language Model trained on Polish Corpora using only MLM objective with dynamic masking of whole words. The abstract from the paper is the following: *In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models.* This model was contributed by [rmroczkowski](https://huggingface.co/rmroczkowski). The original code can be found [here](https://github.com/allegro/HerBERT). ## Usage example ```python >>> from transformers import HerbertTokenizer, RobertaModel >>> tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") >>> model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1") >>> encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt") >>> outputs = model(encoded_input) >>> # HerBERT can also be loaded using AutoTokenizer and AutoModel: >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") >>> model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1") ``` <Tip> Herbert implementation is the same as `BERT` except for the tokenization method. Refer to [BERT documentation](bert) for API reference and examples. 
</Tip> ## HerbertTokenizer [[autodoc]] HerbertTokenizer ## HerbertTokenizerFast [[autodoc]] HerbertTokenizerFast
transformers/docs/source/en/model_doc/herbert.md/0
{ "file_path": "transformers/docs/source/en/model_doc/herbert.md", "repo_id": "transformers", "token_count": 956 }
249
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # LLaMA ## Overview The LLaMA model was proposed in [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. It is a collection of foundation language models ranging from 7B to 65B parameters. The abstract from the paper is the following: *We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.* This model was contributed by [zphang](https://huggingface.co/zphang) with contributions from [BlackSamorez](https://huggingface.co/BlackSamorez). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox). The original code of the authors can be found [here](https://github.com/facebookresearch/llama). ## Usage tips - Weights for the LLaMA models can be obtained by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) - After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). The script can be called with the following (example) command: ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` - After conversion, the model and tokenizer can be loaded via: ```python from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("/output/path") model = LlamaForCausalLM.from_pretrained("/output/path") ``` Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). The 65B model thus needs 130GB of RAM. - The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). 
One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string. This model was contributed by [zphang](https://huggingface.co/zphang) with contributions from [BlackSamorez](https://huggingface.co/BlackSamorez). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox). The original code of the authors can be found [here](https://github.com/facebookresearch/llama). The Flax version of the implementation was contributed by [afmck](https://huggingface.co/afmck) with the code in the implementation based on Hugging Face's Flax GPT-Neo. Based on the original LLaMA model, Meta AI has released some follow-up works: - **Llama2**: Llama2 is an improved version of Llama with some architectural tweaks (Grouped Query Attention), and is pre-trained on 2Trillion tokens. Refer to the documentation of Llama2 which can be found [here](llama2). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/> - A [notebook](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb#scrollTo=f04ba4d2) on how to use prompt tuning to adapt the LLaMA model for text classification task. 🌎 <PipelineTag pipeline="question-answering"/> - [StackLLaMA: A hands-on guide to train LLaMA with RLHF](https://huggingface.co/blog/stackllama#stackllama-a-hands-on-guide-to-train-llama-with-rlhf), a blog post about how to train LLaMA to answer questions on [Stack Exchange](https://stackexchange.com/) with RLHF. ⚗️ Optimization - A [notebook](https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing) on how to fine-tune LLaMA model using xturing library on GPU which has limited memory. 🌎 ⚡️ Inference - A [notebook](https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/02%20-%20Evaluate.ipynb) on how to run the LLaMA Model using PeftModel from the 🤗 PEFT library. 🌎 - A [notebook](https://colab.research.google.com/drive/1l2GiSSPbajVyp2Nk3CFT4t3uH6-5TiBe?usp=sharing) on how to load a PEFT adapter LLaMA model with LangChain. 🌎 🚀 Deploy - A [notebook](https://colab.research.google.com/github/lxe/simple-llama-finetuner/blob/master/Simple_LLaMA_FineTuner.ipynb#scrollTo=3PM_DilAZD8T) on how to fine-tune LLaMA model using LoRA method via the 🤗 PEFT library with intuitive UI. 🌎 - A [notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart-foundation-models/text-generation-open-llama.ipynb) on how to deploy Open-LLaMA model for text generation on Amazon SageMaker. 
🌎 ## LlamaConfig [[autodoc]] LlamaConfig ## LlamaTokenizer [[autodoc]] LlamaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LlamaTokenizerFast [[autodoc]] LlamaTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary ## LlamaModel [[autodoc]] LlamaModel - forward ## LlamaForCausalLM [[autodoc]] LlamaForCausalLM - forward ## LlamaForSequenceClassification [[autodoc]] LlamaForSequenceClassification - forward ## LlamaForQuestionAnswering [[autodoc]] LlamaForQuestionAnswering - forward ## FlaxLlamaModel [[autodoc]] FlaxLlamaModel - __call__ ## FlaxLlamaForCausalLM [[autodoc]] FlaxLlamaForCausalLM - __call__
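Tying the conversion and loading steps from the usage tips together, a minimal generation sketch could look like the following; `/output/path` refers to the directory produced by the conversion script, and the prompt and generation settings are arbitrary:

```python
>>> from transformers import LlamaForCausalLM, LlamaTokenizer

>>> tokenizer = LlamaTokenizer.from_pretrained("/output/path")
>>> model = LlamaForCausalLM.from_pretrained("/output/path")

>>> inputs = tokenizer("Simply put, the theory of relativity states that", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```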
transformers/docs/source/en/model_doc/llama.md/0
{ "file_path": "transformers/docs/source/en/model_doc/llama.md", "repo_id": "transformers", "token_count": 2356 }
250
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MBart and MBart-50 <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=mbart"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-mbart-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/mbart-large-50-one-to-many-mmt"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview of MBart The MBart model was presented in [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text. This model was contributed by [valhalla](https://huggingface.co/valhalla). The Authors' code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/mbart). ### Training of MBart MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for the translation task. As the model is multilingual it expects the sequences in a different format. A special language id token is added in both the source and target text. The source text format is `X [eos, src_lang_code]` where `X` is the source text. The target text format is `[tgt_lang_code] X [eos]`. `bos` is never used. The regular [`~MBartTokenizer.__call__`] will encode the source text format passed as first argument or with the `text` keyword, and the target text format passed with the `text_target` keyword argument. - Supervised training ```python >>> from transformers import MBartForConditionalGeneration, MBartTokenizer >>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO") >>> example_english_phrase = "UN Chief Says There Is No Military Solution in Syria" >>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria" >>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt") >>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro") >>> # forward pass >>> model(**inputs) ``` - Generation While generating the target text, set the `decoder_start_token_id` to the target language id. The following example shows how to translate English to Romanian using the *facebook/mbart-large-en-ro* model. 
```python >>> from transformers import MBartForConditionalGeneration, MBartTokenizer >>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX") >>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro") >>> article = "UN Chief Says There Is No Military Solution in Syria" >>> inputs = tokenizer(article, return_tensors="pt") >>> translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"]) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] "Şeful ONU declară că nu există o soluţie militară în Siria" ``` ## Overview of MBart-50 MBart-50 was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. MBart-50 is created using the original *mbart-large-cc25* checkpoint by extending its embedding layers with randomly initialized vectors for an extra set of 25 language tokens and then pretrained on 50 languages. According to the abstract: *Multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one direction, a pretrained model is finetuned on many directions at the same time. It demonstrates that pretrained models can be extended to incorporate additional languages without loss of performance. Multilingual finetuning improves on average 1 BLEU over the strongest baselines (being either multilingual from scratch or bilingual finetuning) while improving 9.3 BLEU on average over bilingual baselines from scratch.* ### Training of MBart-50 The text format for MBart-50 is slightly different from mBART. For MBart-50 the language id token is used as a prefix for both source and target text i.e. the text format is `[lang_code] X [eos]`, where `lang_code` is source language id for source text and target language id for target text, with `X` being the source or target text respectively. MBart-50 has its own tokenizer [`MBart50Tokenizer`]. - Supervised training ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO") src_text = " UN Chief Says There Is No Military Solution in Syria" tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria" model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt") model(**model_inputs) # forward pass ``` - Generation To generate using the mBART-50 multilingual translation models, `eos_token_id` is used as the `decoder_start_token_id` and the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the *forced_bos_token_id* parameter to the *generate* method. The following example shows how to translate from Hindi to French and from Arabic to English using the *facebook/mbart-large-50-many-to-many-mmt* checkpoint. ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है" article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا." 
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") # translate Hindi to French tokenizer.src_lang = "hi_IN" encoded_hi = tokenizer(article_hi, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria." # translate Arabic to English tokenizer.src_lang = "ar_AR" encoded_ar = tokenizer(article_ar, return_tensors="pt") generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "The Secretary-General of the United Nations says there is no military solution in Syria." ``` ## Documentation resources - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## MBartConfig [[autodoc]] MBartConfig ## MBartTokenizer [[autodoc]] MBartTokenizer - build_inputs_with_special_tokens ## MBartTokenizerFast [[autodoc]] MBartTokenizerFast ## MBart50Tokenizer [[autodoc]] MBart50Tokenizer ## MBart50TokenizerFast [[autodoc]] MBart50TokenizerFast <frameworkcontent> <pt> ## MBartModel [[autodoc]] MBartModel ## MBartForConditionalGeneration [[autodoc]] MBartForConditionalGeneration ## MBartForQuestionAnswering [[autodoc]] MBartForQuestionAnswering ## MBartForSequenceClassification [[autodoc]] MBartForSequenceClassification ## MBartForCausalLM [[autodoc]] MBartForCausalLM - forward </pt> <tf> ## TFMBartModel [[autodoc]] TFMBartModel - call ## TFMBartForConditionalGeneration [[autodoc]] TFMBartForConditionalGeneration - call </tf> <jax> ## FlaxMBartModel [[autodoc]] FlaxMBartModel - __call__ - encode - decode ## FlaxMBartForConditionalGeneration [[autodoc]] FlaxMBartForConditionalGeneration - __call__ - encode - decode ## FlaxMBartForSequenceClassification [[autodoc]] FlaxMBartForSequenceClassification - __call__ - encode - decode ## FlaxMBartForQuestionAnswering [[autodoc]] FlaxMBartForQuestionAnswering - __call__ - encode - decode </jax> </frameworkcontent>
transformers/docs/source/en/model_doc/mbart.md/0
{ "file_path": "transformers/docs/source/en/model_doc/mbart.md", "repo_id": "transformers", "token_count": 3130 }
251
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MPT ## Overview The MPT model was proposed by the [MosaicML](https://www.mosaicml.com/) team and released with multiple sizes and finetuned variants. The MPT models are a series of open source and commercially usable LLMs pre-trained on 1T tokens. MPT models are GPT-style decoder-only transformers with several improvements: performance-optimized layer implementations, architecture changes that provide greater training stability, and the elimination of context length limits by replacing positional embeddings with ALiBi. - MPT base: MPT base models pre-trained on next token prediction - MPT instruct: MPT base models fine-tuned on instruction-based tasks - MPT storywriter: MPT base models fine-tuned for 2500 steps on 65k-token excerpts of fiction books contained in the books3 corpus; this enables the model to handle very long sequences The original code is available at the [`llm-foundry`](https://github.com/mosaicml/llm-foundry/tree/main) repository. Read more about it [in the release blogpost](https://www.mosaicml.com/blog/mpt-7b). ## Usage tips - Learn more about some techniques behind the training of the model [in this section of the llm-foundry repository](https://github.com/mosaicml/llm-foundry/blob/main/TUTORIAL.md#faqs) - If you want to use the advanced version of the model (triton kernels, direct flash attention integration), you can still use the original model implementation by adding `trust_remote_code=True` when calling `from_pretrained`. ## Resources - [Fine-tuning Notebook](https://colab.research.google.com/drive/1HCpQkLL7UXW8xJUJJ29X7QAeNJKO0frZ?usp=sharing) on how to fine-tune MPT-7B on a free Google Colab instance to turn the model into a Chatbot. ## MptConfig [[autodoc]] MptConfig - all ## MptModel [[autodoc]] MptModel - forward ## MptForCausalLM [[autodoc]] MptForCausalLM - forward ## MptForSequenceClassification [[autodoc]] MptForSequenceClassification - forward ## MptForTokenClassification [[autodoc]] MptForTokenClassification - forward ## MptForQuestionAnswering [[autodoc]] MptForQuestionAnswering - forward
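As a rough sketch of plain text generation with the native MPT implementation (without `trust_remote_code=True`); the `mosaicml/mpt-7b` checkpoint, dtype and generation settings are only examples:

```python
>>> import torch
>>> from transformers import AutoTokenizer, MptForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
>>> model = MptForCausalLM.from_pretrained("mosaicml/mpt-7b", torch_dtype=torch.bfloat16)

>>> inputs = tokenizer("MPT is a decoder-only transformer that", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```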
transformers/docs/source/en/model_doc/mpt.md/0
{ "file_path": "transformers/docs/source/en/model_doc/mpt.md", "repo_id": "transformers", "token_count": 824 }
252
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

-->

# Pyramid Vision Transformer V2 (PVTv2)

## Overview

The PVTv2 model was proposed in [PVT v2: Improved Baselines with Pyramid Vision Transformer](https://arxiv.org/abs/2106.13797) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. As an improved variant of PVT, it eschews position embeddings, relying instead on positional information encoded through zero-padding and overlapping patch embeddings. This lack of reliance on position embeddings simplifies the architecture and enables running inference at any resolution without needing to interpolate them.

The PVTv2 encoder structure has been successfully deployed to achieve state-of-the-art scores in [Segformer](https://arxiv.org/abs/2105.15203) for semantic segmentation, [GLPN](https://arxiv.org/abs/2201.07436) for monocular depth estimation, and [Panoptic Segformer](https://arxiv.org/abs/2109.03814) for panoptic segmentation.

PVTv2 belongs to a family of models called [hierarchical transformers](https://natecibik.medium.com/the-rise-of-vision-transformers-f623c980419f), which make adaptations to transformer layers in order to generate multi-scale feature maps. Unlike the columnar structure of the Vision Transformer ([ViT](https://arxiv.org/abs/2010.11929)), which loses fine-grained detail, multi-scale feature maps are known to preserve this detail and aid performance in dense prediction tasks. In the case of PVTv2, this is achieved by generating image patch tokens using 2D convolution with overlapping kernels in each encoder layer.

The multi-scale features of hierarchical transformers allow them to be easily swapped in for traditional workhorse computer vision backbone models like ResNet in larger architectures. Both Segformer and Panoptic Segformer demonstrated that configurations using PVTv2 for a backbone consistently outperformed those with similarly sized ResNet backbones.

Another powerful feature of the PVTv2 is the complexity reduction in the self-attention layers called Spatial Reduction Attention (SRA), which uses 2D convolution layers to project hidden states to a smaller resolution before attending to them with the queries, improving the $O(n^2)$ complexity of self-attention to $O(n^2/R)$, with $R$ being the spatial reduction ratio (`sr_ratio`, aka kernel size and stride in the 2D convolution).

SRA was introduced in PVT and is the default attention complexity reduction method used in PVTv2. However, PVTv2 also introduced the option of using a self-attention mechanism with linear complexity related to image size, which they called "Linear SRA". This method uses average pooling to reduce the hidden states to a fixed size that is invariant to their original resolution (although this is inherently more lossy than regular SRA). This option can be enabled by setting `linear_attention` to `True` in the `PvtV2Config`.

### Abstract from the paper:

*Transformer recently has presented encouraging progress in computer vision.
In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at https://github.com/whai362/PVT.* This model was contributed by [FoamoftheSea](https://huggingface.co/FoamoftheSea). The original code can be found [here](https://github.com/whai362/PVT). ## Usage tips - [PVTv2](https://arxiv.org/abs/2106.13797) is a hierarchical transformer model which has demonstrated powerful performance in image classification and multiple other tasks, used as a backbone for semantic segmentation in [Segformer](https://arxiv.org/abs/2105.15203), monocular depth estimation in [GLPN](https://arxiv.org/abs/2201.07436), and panoptic segmentation in [Panoptic Segformer](https://arxiv.org/abs/2109.03814), consistently showing higher performance than similar ResNet configurations. - Hierarchical transformers like PVTv2 achieve superior data and parameter efficiency on image data compared with pure transformer architectures by incorporating design elements of convolutional neural networks (CNNs) into their encoders. This creates a best-of-both-worlds architecture that infuses the useful inductive biases of CNNs like translation equivariance and locality into the network while still enjoying the benefits of dynamic data response and global relationship modeling provided by the self-attention mechanism of [transformers](https://arxiv.org/abs/1706.03762). - PVTv2 uses overlapping patch embeddings to create multi-scale feature maps, which are infused with location information using zero-padding and depth-wise convolutions. - To reduce the complexity in the attention layers, PVTv2 performs a spatial reduction on the hidden states using either strided 2D convolution (SRA) or fixed-size average pooling (Linear SRA). Although inherently more lossy, Linear SRA provides impressive performance with a linear complexity with respect to image size. To use Linear SRA in the self-attention layers, set `linear_attention=True` in the `PvtV2Config`. - [`PvtV2Model`] is the hierarchical transformer encoder (which is also often referred to as Mix Transformer or MiT in the literature). [`PvtV2ForImageClassification`] adds a simple classifier head on top to perform Image Classification. [`PvtV2Backbone`] can be used with the [`AutoBackbone`] system in larger architectures like Deformable DETR. - ImageNet pretrained weights for all model sizes can be found on the [hub](https://huggingface.co/models?other=pvt_v2). 
The best way to get started with the PVTv2 is to load the pretrained checkpoint with the size of your choosing using `AutoModelForImageClassification`:

```python
import requests
from transformers import AutoModelForImageClassification, AutoImageProcessor
from PIL import Image

model = AutoModelForImageClassification.from_pretrained("OpenGVLab/pvt_v2_b0")
image_processor = AutoImageProcessor.from_pretrained("OpenGVLab/pvt_v2_b0")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(image, return_tensors="pt")
outputs = model(**inputs)
```

To use the PVTv2 as a backbone for more complex architectures like Deformable DETR, you can use `AutoBackbone` (this model would need fine-tuning as you're replacing the backbone in the pretrained model):

```python
import requests
from transformers import AutoConfig, AutoModelForObjectDetection, AutoImageProcessor
from PIL import Image

model = AutoModelForObjectDetection.from_config(
    config=AutoConfig.from_pretrained(
        "SenseTime/deformable-detr",
        backbone_config=AutoConfig.from_pretrained("OpenGVLab/pvt_v2_b5"),
        use_timm_backbone=False
    ),
)

image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(image, return_tensors="pt")
outputs = model(**inputs)
```

[PVTv2](https://github.com/whai362/PVT/tree/v2) performance on ImageNet-1K by model size (B0-B5):

| Method           | Size | Acc@1 | #Params (M) |
|------------------|:----:|:-----:|:-----------:|
| PVT-V2-B0        | 224  | 70.5  | 3.7         |
| PVT-V2-B1        | 224  | 78.7  | 14.0        |
| PVT-V2-B2-Linear | 224  | 82.1  | 22.6        |
| PVT-V2-B2        | 224  | 82.0  | 25.4        |
| PVT-V2-B3        | 224  | 83.1  | 45.2        |
| PVT-V2-B4        | 224  | 83.6  | 62.6        |
| PVT-V2-B5        | 224  | 83.8  | 82.0        |

## PvtV2Config

[[autodoc]] PvtV2Config

## PvtV2ForImageClassification

[[autodoc]] PvtV2ForImageClassification
    - forward

## PvtV2Model

[[autodoc]] PvtV2Model
    - forward
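To experiment with the Linear SRA variant described in the overview, a minimal configuration sketch is shown below. Note that a model built from a fresh config like this is randomly initialized and would need training before it is useful:

```python
from transformers import PvtV2Config, PvtV2ForImageClassification

# enable the average-pooling based "Linear SRA" attention in the encoder stages
config = PvtV2Config(linear_attention=True)
model = PvtV2ForImageClassification(config)
```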
transformers/docs/source/en/model_doc/pvt_v2.md/0
{ "file_path": "transformers/docs/source/en/model_doc/pvt_v2.md", "repo_id": "transformers", "token_count": 2543 }
253
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Hybrid Vision Transformer (ViT Hybrid) ## Overview The hybrid Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the [plain Vision Transformer](vit), by leveraging a convolutional backbone (specifically, [BiT](bit)) whose features are used as initial "tokens" for the Transformer. The abstract from the paper is the following: *While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code (written in JAX) can be found [here](https://github.com/google-research/vision_transformer). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT Hybrid. <PipelineTag pipeline="image-classification"/> - [`ViTHybridForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
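For a quick start, here is a short classification sketch. It assumes the `google/vit-hybrid-base-bit-384` checkpoint; any ViT Hybrid checkpoint from the Hub should work the same way:

```python
import requests
from PIL import Image
from transformers import ViTHybridImageProcessor, ViTHybridForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
model = ViTHybridForImageClassification.from_pretrained("google/vit-hybrid-base-bit-384")

inputs = image_processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# the checkpoint is fine-tuned on ImageNet-1k, so the head predicts one of 1000 classes
predicted_class_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_idx])
```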
## ViTHybridConfig [[autodoc]] ViTHybridConfig ## ViTHybridImageProcessor [[autodoc]] ViTHybridImageProcessor - preprocess ## ViTHybridModel [[autodoc]] ViTHybridModel - forward ## ViTHybridForImageClassification [[autodoc]] ViTHybridForImageClassification - forward
transformers/docs/source/en/model_doc/vit_hybrid.md/0
{ "file_path": "transformers/docs/source/en/model_doc/vit_hybrid.md", "repo_id": "transformers", "token_count": 966 }
254
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # XLM-RoBERTa-XL ## Overview The XLM-RoBERTa-XL model was proposed in [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. The abstract from the paper is the following: *Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.* This model was contributed by [Soonhwan-Kwon](https://github.com/Soonhwan-Kwon) and [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr). ## Usage tips XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## XLMRobertaXLConfig [[autodoc]] XLMRobertaXLConfig ## XLMRobertaXLModel [[autodoc]] XLMRobertaXLModel - forward ## XLMRobertaXLForCausalLM [[autodoc]] XLMRobertaXLForCausalLM - forward ## XLMRobertaXLForMaskedLM [[autodoc]] XLMRobertaXLForMaskedLM - forward ## XLMRobertaXLForSequenceClassification [[autodoc]] XLMRobertaXLForSequenceClassification - forward ## XLMRobertaXLForMultipleChoice [[autodoc]] XLMRobertaXLForMultipleChoice - forward ## XLMRobertaXLForTokenClassification [[autodoc]] XLMRobertaXLForTokenClassification - forward ## XLMRobertaXLForQuestionAnswering [[autodoc]] XLMRobertaXLForQuestionAnswering - forward
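As a quick sanity check that a checkpoint is wired up correctly, here is a minimal masked-language-modeling sketch. It assumes the `facebook/xlm-roberta-xl` checkpoint; at roughly 3.5B parameters the model needs a fair amount of memory, so treat this as illustrative:

```python
import torch
from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")

# the tokenizer uses <mask> as its mask token
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# pick the most likely token at the masked position
mask_positions = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_positions].argmax(-1)
print(tokenizer.decode(predicted_id))
```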
transformers/docs/source/en/model_doc/xlm-roberta-xl.md/0
{ "file_path": "transformers/docs/source/en/model_doc/xlm-roberta-xl.md", "repo_id": "transformers", "token_count": 969 }
255
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Load adapters with 🤗 PEFT [[open-in-colab]] [Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient with lower compute usage while producing results comparable to a fully fine-tuned model. Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them. <div class="flex flex-col justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/> <figcaption class="text-center">The adapter weights for a OPTForCausalLM model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.</figcaption> </div> If you're interested in learning more about the 🤗 PEFT library, check out the [documentation](https://huggingface.co/docs/peft/index). ## Setup Get started by installing 🤗 PEFT: ```bash pip install peft ``` If you want to try out the brand new features, you might be interested in installing the library from source: ```bash pip install git+https://github.com/huggingface/peft.git ``` ## Supported PEFT models 🤗 Transformers natively supports some PEFT methods, meaning you can load adapter weights stored locally or on the Hub and easily run or train them with a few lines of code. The following methods are supported: - [Low Rank Adapters](https://huggingface.co/docs/peft/conceptual_guides/lora) - [IA3](https://huggingface.co/docs/peft/conceptual_guides/ia3) - [AdaLoRA](https://arxiv.org/abs/2303.10512) If you want to use other PEFT methods, such as prompt learning or prompt tuning, or about the 🤗 PEFT library in general, please refer to the [documentation](https://huggingface.co/docs/peft/index). ## Load a PEFT adapter To load and use a PEFT adapter model from 🤗 Transformers, make sure the Hub repository or local directory contains an `adapter_config.json` file and the adapter weights, as shown in the example image above. Then you can load the PEFT adapter model using the `AutoModelFor` class. For example, to load a PEFT adapter model for causal language modeling: 1. specify the PEFT model id 2. pass it to the [`AutoModelForCausalLM`] class ```py from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(peft_model_id) ``` <Tip> You can load a PEFT adapter with either an `AutoModelFor` class or the base model class like `OPTForCausalLM` or `LlamaForCausalLM`. 
</Tip>

You can also load a PEFT adapter by calling the `load_adapter` method:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"

model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```

## Load in 8bit or 4bit

The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because they save memory (see the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True)
```

## Add a new adapter

You can use [`~peft.PeftModel.add_adapter`] to add a new adapter to a model with an existing adapter as long as the new adapter is the same type as the current one. For example, if you have an existing LoRA adapter attached to a model:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import LoraConfig

model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    target_modules=["q_proj", "k_proj"],
    init_lora_weights=False
)

model.add_adapter(lora_config, adapter_name="adapter_1")
```

To add a new adapter:

```py
# attach new adapter with same config
model.add_adapter(lora_config, adapter_name="adapter_2")
```

Now you can use [`~peft.PeftModel.set_adapter`] to set which adapter to use:

```py
# use adapter_1
model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# use adapter_2
model.set_adapter("adapter_2")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## Enable and disable adapters

Once you've added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig

model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")

model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)

# to initiate with random weights
peft_config.init_lora_weights = False

model.add_adapter(peft_config)
model.enable_adapters()
output = model.generate(**inputs)
```

To disable the adapter module:

```py
model.disable_adapters()
output = model.generate(**inputs)
```

## Train a PEFT adapter

PEFT adapters are supported by the [`Trainer`] class so that you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter:

<Tip>

If you aren't familiar with fine-tuning a model with [`Trainer`], take a look at the [Fine-tune a pretrained model](training) tutorial.

</Tip>

1. Define your adapter configuration with the task type and hyperparameters (see [`~peft.LoraConfig`] for more details about what the hyperparameters do).
```py from peft import LoraConfig peft_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=64, bias="none", task_type="CAUSAL_LM", ) ``` 2. Add adapter to the model. ```py model.add_adapter(peft_config) ``` 3. Now you can pass the model to [`Trainer`]! ```py trainer = Trainer(model=model, ...) trainer.train() ``` To save your trained adapter and load it back: ```py model.save_pretrained(save_dir) model = AutoModelForCausalLM.from_pretrained(save_dir) ``` ## Add additional trainable layers to a PEFT adapter You can also fine-tune additional trainable adapters on top of a model that has adapters attached by passing `modules_to_save` in your PEFT config. For example, if you want to also fine-tune the lm_head on top of a model with a LoRA adapter: ```py from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import LoraConfig model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id) lora_config = LoraConfig( target_modules=["q_proj", "k_proj"], modules_to_save=["lm_head"], ) model.add_adapter(lora_config) ``` <!-- TODO: (@younesbelkada @stevhliu) - Link to PEFT docs for further details - Trainer - 8-bit / 4-bit examples ? -->
transformers/docs/source/en/peft.md/0
{ "file_path": "transformers/docs/source/en/peft.md", "repo_id": "transformers", "token_count": 2640 }
256
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Checks on a Pull Request When you open a pull request on 🤗 Transformers, a fair number of checks will be run to make sure the patch you are adding is not breaking anything existing. Those checks are of four types: - regular tests - documentation build - code and documentation style - general repository consistency In this document, we will take a stab at explaining what those various checks are and the reason behind them, as well as how to debug them locally if one of them fails on your PR. Note that, ideally, they require you to have a dev install: ```bash pip install transformers[dev] ``` or for an editable install: ```bash pip install -e .[dev] ``` inside the Transformers repo. Since the number of optional dependencies of Transformers has grown a lot, it's possible you don't manage to get all of them. If the dev install fails, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do ```bash pip install transformers[quality] ``` or for an editable install: ```bash pip install -e .[quality] ``` ## Tests All the jobs that begin with `ci/circleci: run_tests_` run parts of the Transformers testing suite. Each of those jobs focuses on a part of the library in a certain environment: for instance `ci/circleci: run_tests_pipelines_tf` runs the pipelines test in an environment where TensorFlow only is installed. Note that to avoid running tests when there is no real change in the modules they are testing, only part of the test suite is run each time: a utility is run to determine the differences in the library between before and after the PR (what GitHub shows you in the "Files changes" tab) and picks the tests impacted by that diff. That utility can be run locally with: ```bash python utils/tests_fetcher.py ``` from the root of the Transformers repo. It will: 1. Check for each file in the diff if the changes are in the code or only in comments or docstrings. Only the files with real code changes are kept. 2. Build an internal map that gives for each file of the source code of the library all the files it recursively impacts. Module A is said to impact module B if module B imports module A. For the recursive impact, we need a chain of modules going from module A to module B in which each module imports the previous one. 3. Apply this map on the files gathered in step 1, which gives us the list of model files impacted by the PR. 4. Map each of those files to their corresponding test file(s) and get the list of tests to run. When executing the script locally, you should get the results of step 1, 3 and 4 printed and thus know which tests are run. 
The script will also create a file named `test_list.txt` which contains the list of tests to run, and you can run them locally with the following command: ```bash python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) ``` Just in case anything slipped through the cracks, the full test suite is also run daily. ## Documentation build The `build_pr_documentation` job builds and generates a preview of the documentation to make sure everything looks okay once your PR is merged. A bot will add a link to preview the documentation in your PR. Any changes you make to the PR are automatically updated in the preview. If the documentation fails to build, click on **Details** next to the failed job to see where things went wrong. Often, the error is as simple as a missing file in the `toctree`. If you're interested in building or previewing the documentation locally, take a look at the [`README.md`](https://github.com/huggingface/transformers/tree/main/docs) in the docs folder. ## Code and documentation style Code formatting is applied to all the source files, the examples and the tests using `black` and `ruff`. We also have a custom tool taking care of the formatting of docstrings and `rst` files (`utils/style_doc.py`), as well as the order of the lazy imports performed in the Transformers `__init__.py` files (`utils/custom_init_isort.py`). All of this can be launched by executing ```bash make style ``` The CI checks those have been applied inside the `ci/circleci: check_code_quality` check. It also runs `ruff`, that will have a basic look at your code and will complain if it finds an undefined variable, or one that is not used. To run that check locally, use ```bash make quality ``` This can take a lot of time, so to run the same thing on only the files you modified in the current branch, run ```bash make fixup ``` This last command will also run all the additional checks for the repository consistency. Let's have a look at them. ## Repository consistency This regroups all the tests to make sure your PR leaves the repository in a good state, and is performed by the `ci/circleci: check_repository_consistency` check. 
You can locally run that check by executing the following:

```bash
make repo-consistency
```

This checks that:

- All objects added to the init are documented (performed by `utils/check_repo.py`)
- All `__init__.py` files have the same content in their two sections (performed by `utils/check_inits.py`)
- All code identified as a copy from another module is consistent with the original (performed by `utils/check_copies.py`)
- All configuration classes have at least one valid checkpoint mentioned in their docstrings (performed by `utils/check_config_docstrings.py`)
- All configuration classes only contain attributes that are used in corresponding modeling files (performed by `utils/check_config_attributes.py`)
- The translations of the READMEs and the index of the doc have the same model list as the main README (performed by `utils/check_copies.py`)
- The auto-generated tables in the documentation are up to date (performed by `utils/check_table.py`)
- The library has all objects available even if not all optional dependencies are installed (performed by `utils/check_dummies.py`)
- All docstrings properly document the arguments in the signature of the object (performed by `utils/check_docstrings.py`)

Should this check fail, the first two items require manual fixing, while the last four can be fixed automatically for you by running the command

```bash
make fix-copies
```

Additional checks concern PRs that add new models, mainly that:

- All models added are in an Auto-mapping (performed by `utils/check_repo.py`)
<!-- TODO Sylvain, add a check that makes sure the common tests are implemented.-->
- All models are properly tested (performed by `utils/check_repo.py`)

<!-- TODO Sylvain, add the following
- All models are added to the main README, inside the main doc
- All checkpoints used actually exist on the Hub -->

### Check copies

Since the Transformers library is very opinionated with respect to model code, and each model should be fully implemented in a single file without relying on other models, we have added a mechanism that checks whether a copy of the code of a layer of a given model stays consistent with the original. This way, when there is a bug fix, we can see all other impacted models and choose to trickle down the modification or break the copy.

<Tip>

If a file is a full copy of another file, you should register it in the constant `FULL_COPIES` of `utils/check_copies.py`.

</Tip>

This mechanism relies on comments of the form `# Copied from xxx`. The `xxx` should contain the whole path to the class or function which is being copied below. For instance, `RobertaSelfOutput` is a direct copy of the `BertSelfOutput` class, so you can see [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L289) it has a comment:

```py
# Copied from transformers.models.bert.modeling_bert.BertSelfOutput
```

Note that instead of applying this to a whole class, you can apply it to the relevant methods that are being copied.
For instance [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L598) you can see how `RobertaPreTrainedModel._init_weights` is copied from the same method in `BertPreTrainedModel` with the comment:

```py
# Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights
```

Sometimes the copy is exactly the same except for names: for instance in `RobertaAttention`, we use `RobertaSelfAttention` instead of `BertSelfAttention`, but other than that, the code is exactly the same. This is why `# Copied from` supports simple string replacements with the following syntax: `Copied from xxx with foo->bar`. This means the code is copied with all instances of `foo` being replaced by `bar`. You can see how it is used [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L304C1-L304C86) in `RobertaAttention` with the comment:

```py
# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta
```

Note that there shouldn't be any spaces around the arrow (unless that space is part of the pattern to replace, of course). You can add several patterns separated by a comma. For instance, here `CamembertForMaskedLM` is a direct copy of `RobertaForMaskedLM` with two replacements: `Roberta` to `Camembert` and `ROBERTA` to `CAMEMBERT`. You can see [here](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/camembert/modeling_camembert.py#L929) this is done with the comment:

```py
# Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT
```

If the order matters (because one of the replacements might conflict with a previous one), the replacements are executed from left to right.

<Tip>

If the replacements change the formatting (if you replace a short name by a very long name for instance), the copy is checked after applying the auto-formatter.

</Tip>

Another option, when the patterns are just different casings of the same replacement (with an uppercased and a lowercased variant), is to add the option `all-casing`. [Here](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/mobilebert/modeling_mobilebert.py#L1237) is an example in `MobileBertForSequenceClassification` with the comment:

```py
# Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing
```

In this case, the code is copied from `BertForSequenceClassification` by replacing:

- `Bert` by `MobileBert` (for instance when using `MobileBertModel` in the init)
- `bert` by `mobilebert` (for instance when defining `self.mobilebert`)
- `BERT` by `MOBILEBERT` (in the constant `MOBILEBERT_INPUTS_DOCSTRING`)
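To make the mechanism more concrete, here is a hypothetical, heavily abridged sketch of what a copied class with a replacement pattern looks like in a modeling file. The real classes are much longer; this is purely illustrative:

```py
from torch import nn

# simplified stand-in for the original class in modeling_bert.py
class BertSelfOutput(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)


# Copied from transformers.models.bert.modeling_bert.BertSelfOutput with Bert->Roberta
class RobertaSelfOutput(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
```

In a setup like this, the consistency check takes the original `BertSelfOutput` code, applies the `Bert->Roberta` replacement, and compares the result against the code below the comment; if they have drifted apart, the check fails and `make fix-copies` can rewrite the copy to match.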
transformers/docs/source/en/pr_checks.md/0
{ "file_path": "transformers/docs/source/en/pr_checks.md", "repo_id": "transformers", "token_count": 3180 }
257