Dataset columns:
- text: string (length 7 to 328k)
- id: string (length 14 to 166)
- metadata: dict
- __index_level_0__: int64 (0 to 459)
<jupyter_start><jupyter_text>Integrations with the Hugging Face *Hub* Install the 🤗 Transformers and 🤗 Gradio libraries to run this *notebook*.<jupyter_code>!pip install datasets transformers[sentencepiece] !pip install gradio import gradio as gr title = "GPT-J-6B (Boris)" description = "Démo Gradio pour GPT-J 6B (Boris), un transformer entraîné à l'aide du Mesh Transformer JAX de Ben Wang. 'GPT-J' fait référence à la classe du modèle, tandis que '6B' représente le nombre de paramètres entraînables. Pour l'utiliser, il suffit d'ajouter votre texte, ou de cliquer sur l'un des exemples pour le charger. Pour en savoir plus, consultez les liens ci-dessous." article = "<p style='text-align: center'><a href='https://github.com/kingoflolz/mesh-transformer-jax' target='_blank'>GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model</a></p>" examples = [ ["La tour mesure 324 mètres (1 063 pieds) de haut,"], ["L'orbite de la Lune autour de la Terre a"], ["Le bassin lisse de Borealis dans l'hémisphère nord couvre 40 %"], ] gr.Interface.load( # "huggingface/Cedille/fr-boris", # This is a very large model, so it can take a very long time to run (or even crash)! "huggingface/asi/gpt-fr-cased-small", # Much lighter alternative inputs=gr.Textbox(lines=5, label="Texte d'entrée"), title=title, description=description, article=article, examples=examples, enable_queue=True, ).launch() gr.Interface.load("spaces/abidlabs/remove-bg").launch() gr.Interface.load( "spaces/abidlabs/remove-bg", inputs="webcam", title="Supprimez l'arrière-plan de votre webcam !" ).launch()<jupyter_output><empty_output>
notebooks/course/fr/chapter9/section5.ipynb/0
{ "file_path": "notebooks/course/fr/chapter9/section5.ipynb", "repo_id": "notebooks", "token_count": 653 }
138
<jupyter_start><jupyter_text>**The Stable Diffusion Guide** 🎨 *...using `🧨 diffusers`* **Intro**Stable Diffusion is a [Latent Diffusion model](https://github.com/CompVis/latent-diffusion) developed by researchers from the Machine Vision and Learning group at LMU Munich, *a.k.a.* CompVis.Model checkpoints were publicly released at the end of August 2022 by a collaboration of Stability AI, CompVis, and Runway with support from EleutherAI and LAION. For more information, you can check out [the official blog post](https://stability.ai/blog/stable-diffusion-public-release).Since its public release the community has done an incredible job at working together to make the stable diffusion checkpoints **faster**, **more memory efficient**, and **more performant**.[🧨 Diffusers](https://github.com/huggingface/diffusers) offers a simple API to run stable diffusion with all memory, compute and quality improvements.This notebook walks you through the improvements one-by-one and shows how you can best leverage *Stable Diffusion* for **inference**. SetupFirst, please make sure you are using a GPU runtime to run this notebook, so inference is much faster. If the following command fails, use the `Runtime` menu above and select `Change runtime type`.<jupyter_code>!nvidia-smi<jupyter_output>Thu Jan 5 10:04:48 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 72C P0 30W / 70W | 9870MiB / 15109MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-------[...]<jupyter_text>Next, you should install `diffusers` as well as `scipy`, `ftfy` and `transformers`.<jupyter_code>!pip install diffusers[torch]==0.11.1 transformers scipy ftfy accelerate<jupyter_output>Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Collecting diffusers[torch] Downloading diffusers-0.11.1-py3-none-any.whl (524 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 524.9/524.9 KB 9.6 MB/s eta 0:00:00 Collecting transformers Downloading transformers-4.25.1-py3-none-any.whl (5.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 71.7 MB/s eta 0:00:00 Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (1.7.3) Collecting ftfy Downloading ftfy-6.1.1-py3-none-any.whl (53 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.1/53.1 KB 7.7 MB/s eta 0:00:00 Collecting accelerate Downloading accelerate-0.15.0-py3-none-any.whl (191 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━ 191.5/191.5 KB 774.2 kB/s eta 0:00:00 [...]<jupyter_text>Prompt Engineering 🎨When running *Stable Diffusion* in inference, we usually want to generate a certain type or style of image and then improve upon it. Improving upon a previously generated image means running inference over and over again with a different prompt and potentially a different seed until we are happy with our generation. So to begin with, it is most important to speed up stable diffusion as much as possible to generate as many pictures as possible in a given amount of time. 
This can be done by both improving the **computational efficiency** (speed) and the **memory efficiency** (GPU RAM).Let's start by looking into the computational efficiency first. Throughout the notebook, we will focus on [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5):<jupyter_code>model_id = "runwayml/stable-diffusion-v1-5"<jupyter_output><empty_output><jupyter_text>Let's load the pipeline as defined on https://huggingface.co/runwayml/stable-diffusion-v1-5. Speed Optimization<jupyter_code>from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained(model_id)<jupyter_output><empty_output><jupyter_text>We aim at generating a beautiful photograph of an *old warrior chief* and will later try to find the best prompt to generate such a photograph. For now, let's keep the prompt simple:<jupyter_code>prompt = "portrait photo of a old warrior chief"<jupyter_output><empty_output><jupyter_text>To begin with, we should make sure we run inference on a GPU, so let's move the pipeline to GPU, just like you would with any PyTorch module.<jupyter_code>pipe = pipe.to("cuda")<jupyter_output><empty_output><jupyter_text>To generate an image, you should use the `__call__` method. You can have a look at its docstring to better understand what arguments can be passed [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.__call__). To make sure we can reproduce more or less the same image in every call, let's make use of the generator. See the documentation on reproducibility for more information.<jupyter_code>import torch generator = torch.Generator("cuda").manual_seed(0)<jupyter_output><empty_output><jupyter_text>Now, let's take it for a spin.<jupyter_code>image = pipe(prompt, generator=generator).images[0] image<jupyter_output><empty_output><jupyter_text>Cool, this now took roughly 30 seconds on a T4 GPU (you might see faster inference if your allocated GPU is better than a T4).The default run we did above used full float32 precision and ran the default 50 inference steps. The easiest speed-ups come from switching to float16 (or half) precision and simply running fewer inference steps. Let's now load the model in float16 instead.<jupyter_code>pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda")<jupyter_output><empty_output><jupyter_text>And we can again call the pipeline to generate an image.<jupyter_code>generator = torch.Generator("cuda").manual_seed(0) image = pipe(prompt, generator=generator).images[0] image<jupyter_output><empty_output><jupyter_text>Cool, this is almost three times as fast for arguably the exact same image quality. We strongly suggest always running your pipelines in float16, as so far we have very rarely seen degradations in quality because of it.Next, let's see if we really need to use 50 inference steps or whether we could use significantly fewer. Let's have a look at all the schedulers the stable diffusion pipeline is compatible with.<jupyter_code>pipe.scheduler.compatibles<jupyter_output><empty_output><jupyter_text>Cool, that's a lot of schedulers.🧨 Diffusers is constantly adding a bunch of novel schedulers/samplers that can be used with Stable Diffusion. 
For more information, we recommend taking a look at the official documentation [here](https://huggingface.co/docs/diffusers/main/en/api/schedulers/overview).Alright, right now Stable Diffusion is using the `PNDMScheduler` which usually requires around 50 inference steps. However, other schedulers such as `DPMSolverMultistepScheduler` or `DPMSolverSinglestepScheduler` seem to get away with just 20 to 25 inference steps. Let's try them out. You can set a new scheduler by making use of the [from_config](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) function.<jupyter_code>from diffusers import DPMSolverMultistepScheduler pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)<jupyter_output><empty_output><jupyter_text>Now, let's try to reduce the number of inference steps to just 20.<jupyter_code>generator = torch.Generator("cuda").manual_seed(0) image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] image<jupyter_output><empty_output><jupyter_text>The image now does look a little different, but it's arguably still of equally high quality. We now cut inference time to just 4 seconds though 😍. Memory Optimization Using less memory indirectly also means more speed, since we're often trying to maximize how many images we can generate per second: the more images we fit into a single inference batch, the more images per second, usually. The easiest way to see how far we can push the batch size is to simply try it out and see when we get an *"Out-of-memory (OOM)"* error.One can run batched inference by simply passing a list of prompts and generators. Let's define a quick function that generates a batch for us.<jupyter_code>def get_inputs(batch_size=1): generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] prompts = batch_size * [prompt] num_inference_steps = 20 return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps}<jupyter_output><empty_output><jupyter_text>We also need a method that allows us to easily display a batch of images.<jupyter_code>from PIL import Image def image_grid(imgs, rows=2, cols=2): w, h = imgs[0].size grid = Image.new('RGB', size=(cols*w, rows*h)) for i, img in enumerate(imgs): grid.paste(img, box=(i%cols*w, i//cols*h)) return grid<jupyter_output><empty_output><jupyter_text>Cool, let's see how much memory we can use starting with batch_size=4.<jupyter_code>images = pipe(**get_inputs(batch_size=4)).images image_grid(images)<jupyter_output><empty_output><jupyter_text>Going over a batch_size of 4 will error out in this Colab. Also, we can see we only generate images slightly faster (3.75s/image) compared to 4s/image previously.However, the community has found some nice tricks to reduce the memory requirements further${}^1$. By far most of the memory is taken up by the cross attention layers. Instead of running this operation in batch, one can run it sequentially to save a significant amount of memory.It can easily be enabled by calling `enable_attention_slicing` as is documented [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.enable_attention_slicing).---${}^1$ After stable diffusion was released, the community found improvements within days and shared them freely over GitHub - open-source at its finest! I believe the original idea came from [this](https://github.com/basujindal/stable-diffusion/pull/117) 
GitHub thread.<jupyter_code>pipe.enable_attention_slicing()<jupyter_output><empty_output><jupyter_text>Great, now that attention slicing is enabled, let's try to double the batch size again, going for `batch_size=8`.<jupyter_code>images = pipe(**get_inputs(batch_size=8)).images image_grid(images, rows=2, cols=4)<jupyter_output><empty_output><jupyter_text>Nice, it works. However, the speed gain is again not very big (it might however be much more significant on other GPUs).We're at roughly 3.5 seconds per image 🔥 which is probably the fastest we can go on a simple T4 without sacrificing quality.Next, let's look into how to improve the quality! Quality ImprovementsNow that our image generation pipeline is blazing fast, let's try to get maximum image quality.First of all, image quality is extremely subjective, so it's difficult to make general claims here.The most obvious step to take to improve quality is to use *better checkpoints*. Since the release of Stable Diffusion many improved versions have been released, which are summarized here:- *Official Release - 22 Aug 2022*: [Stable-Diffusion 1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)- *20 October 2022*: [Stable-Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)- *24 Nov 2022*: [Stable-Diffusion 2.0](https://huggingface.co/stabilityai/stable-diffusion-2)- *7 Dec 2022*: [Stable-Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)Newer versions don't necessarily mean better image quality with the same parameters. People mentioned that *2.0* is slightly worse than *1.5* for certain prompts, but given the right prompt engineering *2.0* and *2.1* seem to be better. Overall, we strongly recommend just trying the models out and reading up on advice online $^{2}$.Additionally, the community has started fine-tuning many of the above versions on certain styles, with some of them achieving extremely high quality and gaining a lot of traction. We recommend simply having a look at all [diffusers checkpoints sorted by downloads](https://huggingface.co/models?library=diffusers) and trying out the different checkpoints.For the following, we will stick to v1.5 for simplicity.---$^{2}$ E.g. it has been shown that using negative prompts is very important for 2.0 and 2.1 to get the highest possible quality. See for example [this nice blog post](https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/) (a short illustrative negative-prompt sketch is also appended at the end of this notebook). Next, we can also try to optimize single components of the pipeline, e.g. switching out the latent decoder. For more details on how the whole Stable Diffusion pipeline works, please have a look at [this blog post](https://huggingface.co/blog/stable_diffusion).Let's load [stabilityai's improved autoencoder](https://huggingface.co/stabilityai/sd-vae-ft-mse).<jupyter_code>from diffusers import AutoencoderKL vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")<jupyter_output><empty_output><jupyter_text>Now we can set it as the vae of the pipeline to use it.<jupyter_code>pipe.vae = vae<jupyter_output><empty_output><jupyter_text>Let's run the same prompt as before to compare quality.<jupyter_code>images = pipe(**get_inputs(batch_size=8)).images image_grid(images, rows=2, cols=4)<jupyter_output><empty_output><jupyter_text>Seems like the difference is only very minor, but the new generations are arguably a bit *sharper*. Cool, finally, let's look a bit into prompt engineering.Our goal was to generate a photo of an old warrior chief. 
Let's now try to bring a bit more color into the photos and make them look more impressive.Originally our prompt was "*portrait photo of a old warrior chief*".To improve the prompt, it often helps to add cues that might have accompanied high quality photos posted online, as well as adding more details.Essentially, when doing prompt engineering, one has to think:- How was the photo, or a similar photo to the one I want, probably stored on the internet?- What additional detail can I give that steers the model towards the style that I want?Cool, let's add more details.<jupyter_code>prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes"<jupyter_output><empty_output><jupyter_text>and let's also add some cues that usually help to generate higher quality images.<jupyter_code>prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" prompt<jupyter_output><empty_output><jupyter_text>Cool, let's now try this prompt.<jupyter_code>images = pipe(**get_inputs(batch_size=8)).images image_grid(images, rows=2, cols=4)<jupyter_output><empty_output><jupyter_text>Pretty impressive! We definitely got some very high quality image generations there. The 2nd image is my personal favorite, so I'll re-use this seed and see whether I can tweak the prompts slightly by using "oldest warrior", "old", "", and "young" instead of "old".<jupyter_code>prompts = [ "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", ] generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] # 1 because we want the 2nd image images = pipe(prompt=prompts, generator=generator, num_inference_steps=25).images image_grid(images)<jupyter_output><empty_output>
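The footnote above mentions that negative prompts matter a lot for the 2.x checkpoints, but the guide never shows one in code. Below is a minimal, hypothetical sketch of how a negative prompt could be passed to the same `pipe`; the wording of the negative prompt is an arbitrary illustration, not a recommendation from the original guide.

```python
# Hypothetical sketch (not part of the original guide): steer generations away from
# common artifacts by passing `negative_prompt` alongside the usual arguments.
# Reuses `pipe`, `prompt`, and `image_grid` defined earlier in this notebook.
generator = [torch.Generator("cuda").manual_seed(i) for i in range(8)]
images = pipe(
    prompt=[prompt] * 8,
    negative_prompt=["blurry, low quality, disfigured"] * 8,  # illustrative wording only
    generator=generator,
    num_inference_steps=25,
).images
image_grid(images, rows=2, cols=4)
```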
notebooks/diffusers/sd_101_guide.ipynb/0
{ "file_path": "notebooks/diffusers/sd_101_guide.ipynb", "repo_id": "notebooks", "token_count": 5420 }
139
""" On one node, launch with `deepspeed --num_gpus N idefics_zero3_finetuning.py` by replacing N with the number of your GPUs For several nodes, using Slurm, a template script is provided at `examples/idefics/idefics_zero3_finetuning/slurm_script_idefics_zero3_finetuning_multinode.slurm` For more information, follow the tutorial on using DeepSpeed with Transformers at https://huggingface.co/docs/transformers/main_classes/deepspeed """ import torch import torchvision.transforms as transforms from datasets import load_dataset from PIL import Image from transformers import AutoProcessor, IdeficsForVisionText2Text, Trainer, TrainingArguments device = "cuda" if torch.cuda.is_available() else "cpu" checkpoint = "HuggingFaceM4/idefics-9b" processor = AutoProcessor.from_pretrained(checkpoint, use_auth_token=True) def convert_to_rgb(image): # `image.convert("RGB")` would only work for .jpg images, as it creates a wrong background # for transparent images. The call to `alpha_composite` handles this case if image.mode == "RGB": return image image_rgba = image.convert("RGBA") background = Image.new("RGBA", image_rgba.size, (255, 255, 255)) alpha_composite = Image.alpha_composite(background, image_rgba) alpha_composite = alpha_composite.convert("RGB") return alpha_composite def ds_transforms(example_batch): image_size = processor.image_processor.image_size image_mean = processor.image_processor.image_mean image_std = processor.image_processor.image_std image_transform = transforms.Compose( [ convert_to_rgb, transforms.RandomResizedCrop( (image_size, image_size), scale=(0.9, 1.0), interpolation=transforms.InterpolationMode.BICUBIC ), transforms.ToTensor(), transforms.Normalize(mean=image_mean, std=image_std), ] ) prompts = [] for i in range(len(example_batch["caption"])): # We split the captions to avoid having very long examples, which would require more GPU ram during training caption = example_batch["caption"][i].split(".")[0] try: # There are a handful of images that are not hosted anymore. This is a small (dummy) hack to skip these processor.image_processor.fetch_images(example_batch["image_url"][i]) except Exception: print( "Warning: at least one image couldn't be retrieved from the internet in an example. Skipping the" " batch." ) prompts.append( [ example_batch["image_url"][i], f"Question: What's on the picture? Answer: This is {example_batch['name'][i]}. 
{caption}</s>", ], ) inputs = processor(prompts, transform=image_transform, return_tensors="pt").to(device) inputs["labels"] = inputs["input_ids"] return inputs # load and prepare dataset ds = load_dataset("TheFusion21/PokemonCards") ds = ds["train"].train_test_split(test_size=0.002) train_ds = ds["train"] eval_ds = ds["test"] train_ds.set_transform(ds_transforms) eval_ds.set_transform(ds_transforms) # Important, define the training_args before the model ds_config = { "communication_data_type": "fp32", "bf16": {"enabled": True}, "zero_optimization": { "stage": 3, "overlap_comm": False, "reduce_bucket_size": "auto", "contiguous_gradients": True, "stage3_gather_16bit_weights_on_model_save": False, "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 2e9, "stage3_max_reuse_distance": 2e9, "offload_optimizer": {"device": "none"}, "offload_param": {"device": "none"}, }, "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "steps_per_print": 2000000, } training_args = TrainingArguments( output_dir="idefics-pokemon", learning_rate=2e-4, bf16=True, per_device_train_batch_size=1, per_device_eval_batch_size=1, gradient_accumulation_steps=1, # gradient_checkpointing=True, # Uncomment if OOM dataloader_pin_memory=False, save_total_limit=3, evaluation_strategy="steps", save_strategy="steps", save_steps=40, eval_steps=20, logging_steps=20, max_steps=40, remove_unused_columns=False, push_to_hub=False, label_names=["labels"], load_best_model_at_end=True, report_to="none", optim="adamw_torch", deepspeed=ds_config, ) model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16) trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=eval_ds, ) result = trainer.train() print(result) # Prints one per process - mostly here for sanity check
notebooks/examples/idefics/idefics_zero3_finetuning/idefics_zero3_finetuning.py/0
{ "file_path": "notebooks/examples/idefics/idefics_zero3_finetuning/idefics_zero3_finetuning.py", "repo_id": "notebooks", "token_count": 1980 }
140
<jupyter_start><jupyter_text>PatchTSMixer in HuggingFace - Getting Started `PatchTSMixer` is a lightweight time-series modeling approach based on the MLP-Mixer architecture. It was proposed in [TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting](https://huggingface.co/papers/2306.09364) by IBM Research authors Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong and Jayant Kalagnanam.To promote open-sourcing and broad adoption, IBM Research has joined hands with the Hugging Face team to release this model in the Transformers library.In the [Hugging Face implementation](https://huggingface.co/docs/transformers/main/en/model_doc/patchtsmixer), we provide PatchTSMixer’s capabilities to effortlessly facilitate lightweight mixing across patches, channels, and hidden features for effective multivariate time-series modeling. It also supports various attention mechanisms, from simple gated attention to more complex self-attention blocks that can be customized accordingly. The model can be pretrained and subsequently used for various downstream tasks such as forecasting, classification, and regression.`PatchTSMixer` outperforms state-of-the-art MLP and Transformer models in forecasting by a considerable margin of 8-60%. It also outperforms the latest strong benchmarks of Patch-Transformer models (by 1-2%) with a significant reduction in memory and runtime (2-3X). For more details, refer to the [paper](https://arxiv.org/pdf/2306.09364.pdf).In this blog, we will demonstrate examples of getting started with PatchTSMixer. We will first demonstrate the forecasting capability of `PatchTSMixer` on the Electricity dataset. We will then demonstrate the transfer learning capability of PatchTSMixer by using the model trained on Electricity to do zero-shot forecasting on the `ETTh2` dataset. InstallationThis demo requires Hugging Face [`Transformers`](https://github.com/huggingface/transformers) for the model and the IBM `tsfm` package for auxiliary data pre-processing.Both can be installed by following the steps below.1. Install the IBM Time Series Foundation Model Repository [`tsfm`](https://github.com/ibm/tsfm).```pip install git+https://github.com/IBM/tsfm.git```2. Install Hugging Face [`Transformers`](https://github.com/huggingface/transformers#installation)```pip install transformers```3. Test it with the following commands in a `python` terminal.```from transformers import PatchTSMixerConfigfrom tsfm_public.toolkit.dataset import ForecastDFDataset```<jupyter_code>!pip install git+https://github.com/IBM/tsfm.git transformers from transformers import PatchTSMixerConfig from tsfm_public.toolkit.dataset import ForecastDFDataset<jupyter_output><empty_output><jupyter_text>Part 1: Forecasting on Electricity dataset<jupyter_code>import os import random from transformers import ( EarlyStoppingCallback, PatchTSMixerConfig, PatchTSMixerForPrediction, Trainer, TrainingArguments, ) import numpy as np import pandas as pd import torch from tsfm_public.toolkit.dataset import ForecastDFDataset from tsfm_public.toolkit.time_series_preprocessor import TimeSeriesPreprocessor from tsfm_public.toolkit.util import select_by_index<jupyter_output>/dccstor/dnn_forecasting/conda_envs/envs/hf/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. 
See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm 2023-12-11 01:25:50.313015: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-12-11 01:25:50.313102: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-12-11 01:25:50.313132: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2023-12-11 01:25:51.234452: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] [...]<jupyter_text>Set seed<jupyter_code>from transformers import set_seed set_seed(42)<jupyter_output><empty_output><jupyter_text>Load and prepare datasetsIn the next cell, please adjust the following parameters to suit your application:- `dataset_path`: path to local .csv file, or web address to a csv file for the data of interest. Data is loaded with pandas, so anything supported by`pd.read_csv` is supported: (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html).- `timestamp_column`: column name containing timestamp information, use `None` if there is no such column.- `id_columns`: List of column names specifying the IDs of different time series. If no ID column exists, use `[]`.- `forecast_columns`: List of columns to be modeled.- `context_length`: The amount of historical data used as input to the model. Windows of the input time series data with length equal to`context_length` will be extracted from the input dataframe. In the case of a multi-time series dataset, the context windows will be createdso that they are contained within a single time series (i.e., a single ID).- `forecast_horizon`: Number of timestamps to forecast in the future.- `train_start_index`, `train_end_index`: the start and end indices in the loaded data which delineate the training data.- `valid_start_index`, `valid_end_index`: the start and end indices in the loaded data which delineate the validation data.- `test_start_index`, `test_end_index`: the start and end indices in the loaded data which delineate the test data.- `num_workers`: Number of CPU workers in the PyTorch dataloader.- `batch_size`: Batch size.The data is first loaded into a Pandas dataframe and split into training, validation, and test parts. 
Then the Pandas dataframes are converted to the appropriate PyTorch dataset required for training.<jupyter_code>PRETRAIN_AGAIN = True # Download ECL data from https://github.com/zhouhaoyi/Informer2020 dataset_path = "~/Downloads/ECL.csv" timestamp_column = "date" id_columns = [] context_length = 512 forecast_horizon = 96 patch_length = 8 num_workers = 16 # Reduce this if you have low number of CPU cores batch_size = 64 # Adjust according to GPU memory if PRETRAIN_AGAIN: data = pd.read_csv( dataset_path, parse_dates=[timestamp_column], ) forecast_columns = list(data.columns[1:]) # get split num_train = int(len(data) * 0.7) num_test = int(len(data) * 0.2) num_valid = len(data) - num_train - num_test border1s = [ 0, num_train - context_length, len(data) - num_test - context_length, ] border2s = [num_train, num_train + num_valid, len(data)] train_start_index = border1s[0] # None indicates beginning of dataset train_end_index = border2s[0] # we shift the start of the evaluation period back by context length so that # the first evaluation timestamp is immediately following the training data valid_start_index = border1s[1] valid_end_index = border2s[1] test_start_index = border1s[2] test_end_index = border2s[2] train_data = select_by_index( data, id_columns=id_columns, start_index=train_start_index, end_index=train_end_index, ) valid_data = select_by_index( data, id_columns=id_columns, start_index=valid_start_index, end_index=valid_end_index, ) test_data = select_by_index( data, id_columns=id_columns, start_index=test_start_index, end_index=test_end_index, ) tsp = TimeSeriesPreprocessor( context_length=context_length, timestamp_column=timestamp_column, id_columns=id_columns, input_columns=forecast_columns, output_columns=forecast_columns, scaling=True, ) tsp.train(train_data) if PRETRAIN_AGAIN: train_dataset = ForecastDFDataset( tsp.preprocess(train_data), id_columns=id_columns, timestamp_column="date", input_columns=forecast_columns, output_columns=forecast_columns, context_length=context_length, prediction_length=forecast_horizon, ) valid_dataset = ForecastDFDataset( tsp.preprocess(valid_data), id_columns=id_columns, timestamp_column="date", input_columns=forecast_columns, output_columns=forecast_columns, context_length=context_length, prediction_length=forecast_horizon, ) test_dataset = ForecastDFDataset( tsp.preprocess(test_data), id_columns=id_columns, timestamp_column="date", input_columns=forecast_columns, output_columns=forecast_columns, context_length=context_length, prediction_length=forecast_horizon, )<jupyter_output><empty_output><jupyter_text>Configure the PatchTSMixer modelNext, we instantiate a randomly initialized PatchTSMixer model with a configuration. The settings below control the different hyperparameters related to the architecture. - `num_input_channels`: the number of input channels (or dimensions) in the time series data. This is automatically set to the number for forecast columns. - `context_length`: As described above, the amount of historical data used as input to the model. - `prediction_length`: This is same as the forecast horizon as described above. - `patch_length`: The patch length for the `PatchTSMixer` model. It is recommended to choose a value that evenly divides `context_length`. - `patch_stride`: The stride used when extracting patches from the context window. - `d_model`: Hidden feature dimension of the model. - `num_layers`: The number of model layers. - `dropout`: Dropout probability for all fully connected layers in the encoder. 
- `head_dropout`: Dropout probability used in the head of the model. - `mode`: PatchTSMixer operating mode. "common_channel"/"mix_channel". Common-channel works in channel-independent mode. For pretraining, use "common_channel". - `scaling`: Per-window standard scaling. Recommended value: "std".For full details on the parameters, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/patchtsmixer#transformers.PatchTSMixerConfig).We recommend that you only adjust the values in the next cell.<jupyter_code>if PRETRAIN_AGAIN: config = PatchTSMixerConfig( context_length=context_length, prediction_length=forecast_horizon, patch_length=patch_length, num_input_channels=len(forecast_columns), patch_stride=patch_length, d_model=16, num_layers=8, expansion_factor=2, dropout=0.2, head_dropout=0.2, mode="common_channel", scaling="std", ) model = PatchTSMixerForPrediction(config)<jupyter_output><empty_output><jupyter_text>Train model Next, we can leverage the Hugging Face [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) class to train the model based on the direct forecasting strategy. We first define the [TrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) which lists various hyperparameters regarding training such as the number of epochs, learning rate, and so on.<jupyter_code>if PRETRAIN_AGAIN: training_args = TrainingArguments( output_dir="./checkpoint/patchtsmixer/electricity/pretrain/output/", overwrite_output_dir=True, learning_rate=0.001, num_train_epochs=100, # For a quick test of this notebook, set it to 1 do_eval=True, evaluation_strategy="epoch", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, dataloader_num_workers=num_workers, report_to="tensorboard", save_strategy="epoch", logging_strategy="epoch", save_total_limit=3, logging_dir="./checkpoint/patchtsmixer/electricity/pretrain/logs/", # Make sure to specify a logging directory load_best_model_at_end=True, # Load the best model when training ends metric_for_best_model="eval_loss", # Metric to monitor for early stopping greater_is_better=False, # For loss label_names=["future_values"], # max_steps=20, ) # Create the early stopping callback early_stopping_callback = EarlyStoppingCallback( early_stopping_patience=10, # Number of epochs with no improvement after which to stop early_stopping_threshold=0.0001, # Minimum improvement required to consider as improvement ) # define trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=valid_dataset, callbacks=[early_stopping_callback], ) # pretrain trainer.train()<jupyter_output>/dccstor/dnn_forecasting/conda_envs/envs/hf/lib/python3.10/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all '<jupyter_text>Evaluate the model on the test set.**Note that the training and evaluation loss for PatchTSMixer is the Mean Squared Error (MSE) loss. 
Hence, we do not separately compute the MSE metric in any of the following evaluation experiments.**<jupyter_code>if PRETRAIN_AGAIN: results = trainer.evaluate(test_dataset) print("Test result:") print(results)<jupyter_output>/dccstor/dnn_forecasting/conda_envs/envs/hf/lib/python3.10/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all '<jupyter_text>We get an MSE score of 0.128, which is the SOTA result on the Electricity data. Save model<jupyter_code>if PRETRAIN_AGAIN: save_dir = "patchtsmixer/electricity/model/pretrain/" os.makedirs(save_dir, exist_ok=True) trainer.save_model(save_dir)<jupyter_output><empty_output><jupyter_text>Part 2: Transfer Learning from Electricity to `ETTh2`In this section, we will demonstrate the transfer learning capability of the `PatchTSMixer` model.We use the model pre-trained on the Electricity dataset to do zero-shot forecasting on the `ETTh2` dataset.By Transfer Learning, we mean that we first pretrain the model for a forecasting task on a `source` dataset (which we did above on the `Electricity` dataset). Then, we will use the pretrained model for zero-shot forecasting on a `target` dataset. By zero-shot, we mean that we test the performance in the `target` domain without any additional training. We hope that the model gained enough knowledge from pretraining that can be transferred to a different dataset. Subsequently, we will do linear probing and (then) finetuning of the pretrained model on the `train` split of the target data, and will validate the forecasting performance on the `test` split of the target data. In this example, the source dataset is the Electricity dataset and the target dataset is `ETTh2`. Transfer Learning on `ETTh2` dataAll evaluations are on the `test` part of the `ETTh2` data:Step 1: Directly evaluate the electricity-pretrained model. This is the zero-shot performance. Step 2: Evaluate after doing linear probing. Step 3: Evaluate after doing full finetuning. Load `ETTh2` datasetBelow, we load the `ETTh2` dataset as a Pandas dataframe. Next, we create 3 splits for training, validation and testing. 
We then leverage the `TimeSeriesPreprocessor` class to prepare each split for the model.<jupyter_code>dataset = "ETTh2" print(f"Loading target dataset: {dataset}") dataset_path = f"https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/{dataset}.csv" timestamp_column = "date" id_columns = [] forecast_columns = ["HUFL", "HULL", "MUFL", "MULL", "LUFL", "LULL", "OT"] train_start_index = None # None indicates beginning of dataset train_end_index = 12 * 30 * 24 # we shift the start of the evaluation period back by context length so that # the first evaluation timestamp is immediately following the training data valid_start_index = 12 * 30 * 24 - context_length valid_end_index = 12 * 30 * 24 + 4 * 30 * 24 test_start_index = 12 * 30 * 24 + 4 * 30 * 24 - context_length test_end_index = 12 * 30 * 24 + 8 * 30 * 24 data = pd.read_csv( dataset_path, parse_dates=[timestamp_column], ) train_data = select_by_index( data, id_columns=id_columns, start_index=train_start_index, end_index=train_end_index, ) valid_data = select_by_index( data, id_columns=id_columns, start_index=valid_start_index, end_index=valid_end_index, ) test_data = select_by_index( data, id_columns=id_columns, start_index=test_start_index, end_index=test_end_index, ) tsp = TimeSeriesPreprocessor( timestamp_column=timestamp_column, id_columns=id_columns, input_columns=forecast_columns, output_columns=forecast_columns, scaling=True, ) tsp.train(train_data) train_dataset = ForecastDFDataset( tsp.preprocess(train_data), id_columns=id_columns, input_columns=forecast_columns, output_columns=forecast_columns, context_length=context_length, prediction_length=forecast_horizon, ) valid_dataset = ForecastDFDataset( tsp.preprocess(valid_data), id_columns=id_columns, input_columns=forecast_columns, output_columns=forecast_columns, context_length=context_length, prediction_length=forecast_horizon, ) test_dataset = ForecastDFDataset( tsp.preprocess(test_data), id_columns=id_columns, input_columns=forecast_columns, output_columns=forecast_columns, context_length=context_length, prediction_length=forecast_horizon, )<jupyter_output><empty_output><jupyter_text>Zero-shot forecasting on `ETTh2`As we are going to test forecasting performance out-of-the-box, we load the model which we pretrained above.<jupyter_code>print("Loading pretrained model") finetune_forecast_model = PatchTSMixerForPrediction.from_pretrained( "patchtsmixer/electricity/model/pretrain/" ) print("Done") finetune_forecast_args = TrainingArguments( output_dir="./checkpoint/patchtsmixer/transfer/finetune/output/", overwrite_output_dir=True, learning_rate=0.0001, num_train_epochs=100, do_eval=True, evaluation_strategy="epoch", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, dataloader_num_workers=num_workers, report_to="tensorboard", save_strategy="epoch", logging_strategy="epoch", save_total_limit=3, logging_dir="./checkpoint/patchtsmixer/transfer/finetune/logs/", # Make sure to specify a logging directory load_best_model_at_end=True, # Load the best model when training ends metric_for_best_model="eval_loss", # Metric to monitor for early stopping greater_is_better=False, # For loss ) # Create a new early stopping callback with faster convergence properties early_stopping_callback = EarlyStoppingCallback( early_stopping_patience=5, # Number of epochs with no improvement after which to stop early_stopping_threshold=0.001, # Minimum improvement required to consider as improvement ) finetune_forecast_trainer = Trainer( model=finetune_forecast_model, 
args=finetune_forecast_args, train_dataset=train_dataset, eval_dataset=valid_dataset, callbacks=[early_stopping_callback], ) print("\n\nDoing zero-shot forecasting on target data") result = finetune_forecast_trainer.evaluate(test_dataset) print("Target data zero-shot forecasting result:") print(result)<jupyter_output>Doing zero-shot forecasting on target data<jupyter_text>As can be seen, we get a mean-squared error (MSE) of 0.3 zero-shot, which is close to the state-of-the-art result.Next, let's see how well we can do by performing linear probing, which involves training a linear head on top of a frozen pre-trained model. Linear probing is often done to test the performance of features of a pretrained model. Linear probing on `ETTh2`We can do a quick linear probing on the `train` part of the target data to see any possible `test` performance improvement.<jupyter_code># Freeze the backbone of the model for param in finetune_forecast_trainer.model.model.parameters(): param.requires_grad = False print("\n\nLinear probing on the target data") finetune_forecast_trainer.train() print("Evaluating") result = finetune_forecast_trainer.evaluate(test_dataset) print("Target data head/linear probing result:") print(result)<jupyter_output>Linear probing on the target data<jupyter_text>As can be seen, by training a simple linear layer on top of the frozen backbone, the MSE decreased from 0.3 to 0.271, achieving state-of-the-art results.<jupyter_code>save_dir = f"patchtsmixer/electricity/model/transfer/{dataset}/model/linear_probe/" os.makedirs(save_dir, exist_ok=True) finetune_forecast_trainer.save_model(save_dir) save_dir = f"patchtsmixer/electricity/model/transfer/{dataset}/preprocessor/" os.makedirs(save_dir, exist_ok=True) tsp.save_pretrained(save_dir)<jupyter_output><empty_output><jupyter_text>Finally, let's see if we get any more improvements by doing a full finetune of the model on the target dataset. Full finetuning on `ETTh2`We can do a full model finetune (instead of probing the last linear layer as shown above) on the `train` part of the target data to see a possible `test` performance improvement. The code looks similar to the linear probing task above, except that we are not freezing any parameters.<jupyter_code># Reload the model finetune_forecast_model = PatchTSMixerForPrediction.from_pretrained( "patchtsmixer/electricity/model/pretrain/" ) finetune_forecast_trainer = Trainer( model=finetune_forecast_model, args=finetune_forecast_args, train_dataset=train_dataset, eval_dataset=valid_dataset, callbacks=[early_stopping_callback], ) print("\n\nFinetuning on the target data") finetune_forecast_trainer.train() print("Evaluating") result = finetune_forecast_trainer.evaluate(test_dataset) print("Target data full finetune result:") print(result)<jupyter_output>Finetuning on the target data<jupyter_text>In this case, there is not much improvement by doing full finetuning. Let's save the model anyway.<jupyter_code>save_dir = f"patchtsmixer/electricity/model/transfer/{dataset}/model/fine_tuning/" os.makedirs(save_dir, exist_ok=True) finetune_forecast_trainer.save_model(save_dir)<jupyter_output><empty_output>
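As a quick follow-up, the artifacts saved above can be reloaded later for evaluation or inference. The short sketch below simply mirrors the notebook's own `Trainer` usage and reuses the save paths from the cells above; treat it as an illustrative assumption rather than part of the original walkthrough.

```python
# Minimal sketch (not in the original notebook): reload the finetuned checkpoint saved
# above and re-run evaluation on the ETTh2 test split with the same TrainingArguments.
from transformers import PatchTSMixerForPrediction, Trainer

reloaded_model = PatchTSMixerForPrediction.from_pretrained(
    "patchtsmixer/electricity/model/transfer/ETTh2/model/fine_tuning/"
)
reloaded_trainer = Trainer(
    model=reloaded_model,
    args=finetune_forecast_args,  # defined earlier in the notebook
    eval_dataset=test_dataset,
)
print(reloaded_trainer.evaluate(test_dataset))
```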
notebooks/examples/patch_tsmixer.ipynb/0
{ "file_path": "notebooks/examples/patch_tsmixer.ipynb", "repo_id": "notebooks", "token_count": 7682 }
141
<jupyter_start><jupyter_text>Explain *Anything* Like I'm Five: A Model for Open Domain Long Form Question Answering--- Table of Contents 1. [**Introduction**](intro) a. [Preliminaries](prelims) b. [Note on Data and Biases](reddit_biases)2. [**Task and Data Description**](task_description) 3. [**Sparse Retrieval:** Making Support Documents with ElasticSearch](elasticsearch) 4. [**Dense Retrieval:** Making Support Documents with an ELI5-Trained Model](dense_retrieval) a. [Contrastive Training with ELI5 In-Batch Negatives](dense_train) b. [Using the Trained Dense Retriever and Wikipedia Index](dense_use) c. [Retrieval Model Evaluation](dense_eval) 5. [**Generating Answers with a Sequence-to-Sequence Model**](generation) --- --- Screenshot of the demo for the question: "How do people make chocolate?" --- 1. IntroductionImagine that you are taken with a sudden desire to understand **how the fruit of a tropical tree gets transformed into chocolate bars**, or want to understand **the role of fever in the human body's immune response**: how would you go about finding that information?If your specific question has already been asked and answered clearly and succinctly on one of the many question answering platforms available on the Internet (such as [**Quora**](https://www.quora.com/How-is-chocolate-made), [**Reddit**](https://www.reddit.com/user/ex_5_libris/comments/9c8gb1/chocolate_how_chocolate_is_made/), or [**Yahoo Answers**](https://answers.yahoo.com/question/index?qid=20070615082202AArsYN1)), you're in luck: modern search engines will probably take you to that pre-existing answer pretty reliably in a matter of a few clicks. If no one else has asked the exact question you are interested in, however, the process will be a little more involved. You will likely have to collect relevant information from a variety of sources, figure out how these pieces of knowledge fit together in relation to your query, and synthesize a narrative that answers your initial question.Now, wouldn't it be great if your computer could do all of that for you: **gather** the right sources (*e.g.* paragraphs from relevant Wikipedia pages), **synthesize** the information, and **write up** an easy-to-read, original summary of the relevant points? Such a system isn't quite available yet, at least not one that can provide *reliable* information in its summary. Even though current systems excel at finding an extractive span that answers a factoid question in a given document, they still find open-domain settings, where a model needs to find its own sources of information, and long answer generation challenging. Thankfully, a number of recent advances in natural language understanding and generation have made working toward solving this problem much easier! These advances include progress in the pre-training (e.g. [BART](https://arxiv.org/abs/1910.13461), [T5](https://arxiv.org/abs/1910.10683)) and evaluation (e.g. for [factuality](https://arxiv.org/abs/2004.04228)) of sequence-to-sequence models for conditional text generation, new ways to use language understanding models to find information in Wikipedia (e.g. 
[REALM](https://kentonl.com/pub/gltpc.2020.pdf), [DPR](https://arxiv.org/abs/2004.04906)), and a new training dataset introduced in the paper [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190).The [ELI5](https://arxiv.org/abs/1907.09190) dataset was built by gathering questions that were asked by community members of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit, along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.**In this notebook,** we show how we can take advantage of these recent advances to train a **long form question answering** system which takes in a question, fetches 10 relevant passages from a [Wikipedia snapshot](https://www.aclweb.org/anthology/2020.lrec-1.297/), and writes a multi-sentence answer based on the question and retrieved passages. In particular, training embedding-based retrieval models to gather supporting evidence for open-domain questions is relatively new research area: the last few months have seen some significant progress in cases where direct supervision is available, or with extensive task-specific pretraining. Here, we show how the ELI5 dataset allows us to **train a dense retrieval system without access to either**, making dense retrieval models more accessible. See [this presentation](https://docs.google.com/presentation/d/1A5wJEzFYGdNem7egJ-BTm6EMI3jGNe1lalyChYL54gw) from the [Hugging Face reading group](https://github.com/huggingface/awesome-papers) for a non-exhaustive overview of recent work in the field. Follow along to learn about the steps involved and read some background on the state of the art for some related tasks, or go straight to the: [**Live Demo!**](https://huggingface.co/qa/) (And don't forget to scroll down on the left sidebar to show all of the generation options!) 1.a - Preliminaries The implementation presented here relies on the [Hugging Face](https://huggingface.co/) [🤗transformers](https://github.com/huggingface/transformers) and [🤗nlp](https://github.com/huggingface/nlp) libraries. Wikipedia indexing relies on [ElasticSearch](https://www.elastic.co/elasticsearch) with its [python bindings](https://github.com/elastic/elasticsearch-py) for the sparse version, and [faiss](https://github.com/facebookresearch/faiss/) for the dense version. You can get all of these by running:> pip install elasticsearch > pip install faiss_gpu > pip install nlp > pip install transformers > > wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.1-linux-x86_64.tar.gz > tar -xzvf elasticsearch-7.7.1-linux-x86_64.tar.gz The training relies on two datasets: [ELI5](https://arxiv.org/abs/1907.09190), a processed version of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit, and the [Wiki40b](https://www.aclweb.org/anthology/2020.lrec-1.297/) Wikipedia image. 
You can download both using the [🤗nlp](https://github.com/huggingface/nlp) library with:<jupyter_code>import nlp eli5 = nlp.load_dataset('eli5') wiki40b_snippets = nlp.load_dataset('wiki_snippets', name='wiki40b_en_100_0')['train']<jupyter_output><empty_output><jupyter_text>Additionally, all of the useful methods used in this notebook are compiled in the [lfqa_utils.py](https://github.com/huggingface/longform-qa/lfqa_utils.py) script:<jupyter_code>from lfqa_utils import *<jupyter_output><empty_output><jupyter_text>1.b - Note on Data and BiasesBefore we go any further, let us take a moment to talk about the provenance of our training data. While Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the [recent post from Reddit founder u/spez](https://www.reddit.com/r/announcements/comments/gxas21/upcoming_changes_to_our_content_policy_our_board/) outlining some of the ways he thinks the website's historical policies have been responsible for this problem, [Adrienne Massanari's 2015 article on GamerGate](https://www.researchgate.net/publication/283848479_Gamergate_and_The_Fappening_How_Reddit's_algorithm_governance_and_culture_support_toxic_technocultures) and follow-up works, or a [2019 Wired article on misogyny on Reddit](https://www.wired.com/story/misogyny-reddit-research/).While there has been some recent work in the NLP community on *de-biasing* models (e.g. [Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047) for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.As mentioned above, the magnitude of the problem depends on the specific communities/subreddits. This work uses data from [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), and the `nlp` library also gives access to examples from [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/). There are some encouraging signs for all of these communities: [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) and [r/askscience](https://www.reddit.com/r/askscience/) have similar structures and purposes, and [r/askscience](https://www.reddit.com/r/askscience/) was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a [hackerfall post](https://hackerfall.com/story/study-and-interactive-visualization-of-toxicity-in), [thecut.com write-up](https://www.thecut.com/2015/03/interactive-chart-of-reddits-toxicity.html) and supporting [data](https://chart-studio.plotly.com/~bsbell21/210/toxicity-vs-supportiveness-by-subreddit/data)). Meanwhile, the [r/AskHistorians rules](https://www.reddit.com/r/AskHistorians/wiki/rules) mention that the admins will not tolerate "*racism, sexism, or any other forms of bigotry*".This is obviously not enough to exonerate the model (the pre-training step, for example, raises its own questions on that topic), and there is still a lot of interesting work to do to be able to quantify the biases in a conditional text generation model. 
One thing you can do to help: if you find any particularly egregious answers provided by the model when using the demo, or want to work on this research question, please send a DM to [@YJernite on Twitter](https://twitter.com/YJernite)! 2. Task and Data DescriptionLet's recap: we are interested in the task of Long Form Question Answering. As in other Question Answering tasks, the model is presented with a question, and is required to generate a natural language answer. Whereas a majority of QA datasets contain mostly **factoid** questions, where the answer, such as a date or the name of a single entity, can be expressed in a few words or a single sentence, Long Form QA focuses on questions which call for an **explanation** consisting of a few sentences or a few paragraphs.In order to teach a model to answer such questions, we use questions and answers written by Reddit users. Note that the `nlp.load_dataset` command above actually downloaded questions and their associated answers from the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits. We focus here on the **ELI5/explainlikeimfive** part to train the system, as these examples tend to be a little simpler. Let's look at one item from the test set:<jupyter_code>eli5['test_eli5'][12345]<jupyter_output><empty_output><jupyter_text>So here we have the question:> Why does water heated to room temperature feel colder than the air around it? This definitely requires a multi-step explanation: no single phrase can sum up all of the information we are looking for. Here are the answers that were given on ELI5, and were given scores of +5 and +2 respectively by Reddit users:> 1. Water transfers heat more efficiently than air. When something feels cold it's because heat is being transferred from your skin to whatever you're touching. Since water absorbs the heat more readily than air, it feels colder. > 2. Air isn't as good at transferring heat compared to something like water or steel (sit on a room temperature steel bench vs. a room temperature wooden bench, and the steel one will feel more cold). When you feel cold, what you're feeling is heat being transferred out of you. If there is no breeze, you feel a certain way. If there's a breeze, you will get colder faster (because the moving air is pulling the heat away from you), and if you get into water, its quite good at pulling heat from you. Get out of the water and have a breeze blow on you while you're wet, all of the water starts evaporating, pulling even more heat from you. First, note that in this case **we have two answers** which broadly describe the same phenomenon: the first one is scored higher because it is more succinct and to the point. This example already illustrates one important feature of the LFQA task: **there are usually several valid ways to answer a given question.** Of the 272K examples in the ELI5 training set, nearly two thirds (167K) have at least two answers. We'll need to keep this in mind during training and evaluation of the model. Secondly, we need to give our model access to the information that is expressed in both these answers. Recently released models have been shown to include a significant amount of world knowledge in their parameters without the need for any external knowledge at all (see e.g. the [Closed-book QA performance of the T5 model](https://arxiv.org/abs/2002.08910)). 
There are several advantages to giving the model explicit access to information in text form however. First, a larger number of parameters in a model implies a larger computational cost. Secondly, getting information from a text database allows us to easily update the model's knowledge without having to re-train its parameters.--- --- Overview of the full question answering process. First, the Document Retriever selects a set of passages from Wikipedia that have information relevant to the question. Then, the Answer Generation Model reads the concatenation of the question and retrieverd passages, and writes out the answer.--- Here, we choose to give the model access to Wikipedia text. Full Wikipedia articles are typically too long for most current models to handle, and notable exceptions like the [Reformer](https://arxiv.org/abs/2001.04451) or [Longformer](https://arxiv.org/abs/2004.05150) architectures unfortunately do not yet have pre-trained sequence-to-sequence variants. Thus, we follow previous work in splitting Wikipedia articles into disjoint snippets of 100 words, and keep track of the title of the article and sections a snippet came from. Here's how you can get a pre-processed Wiki40b version split into 100-word passages with the `nlp` library, and an example snippet which has some of the information we're looking for ("*little conduction would occur since air is a poor conductor of heat*"):<jupyter_code>wiki40b_snippets[8991855]<jupyter_output><empty_output><jupyter_text>In the next two sections, we show how we can use either a [sparse retriever](elasticsearch) or a [trained dense retriever](dense_retrieval) to automatically find relevant snippets for a question. 3. Sparse Retrieval: Making Support Documents with ElasticSearchIn this section, we show how to use either such a "classical" Information Retrieval (IR) system based on **sparse** word matching with [ElasticSearch](https://www.elastic.co/elasticsearch/), an extremely popular and efficient search engine that can be used for finding documents that match a given query based on word overlap. Specifically, ElasticSearch provides a convenient way to index documents so they can easily be queried for nearest neighbor search using the [BM25](https://en.wikipedia.org/wiki/Okapi_BM25) similarity function (which relies on TF-IDF weighting of words). While this word-matching based approach has obvious limitations, such as failing to take synonyms and sometimes grammatical variation into account, it does pretty well overall and has only recently been overtaken by embedding-based systems for Wikipedia-based Open-Domain QA tasks.In order to use ElasticSearch, you will first need to launch a server. In a different window, run:> ./elasticsearch-7.7.0/bin/elasticsearchBy default, your ElasticSearch server will be listening on `localhost` port `9200`. To connect to it run:<jupyter_code>es_client = Elasticsearch([{'host': 'localhost', 'port': '9200'}])<jupyter_output><empty_output><jupyter_text>The `eli5_utils.py` script provides utilities to create (`make_es_index_snippets`) and query (`query_es_index`) an ElasticSearch index from within Python.The main implementation details are: 1. We index the article title, section title, and text of each of the passages for BM25 passages. 
These choices are implemented in the `index_config` variable: ```python index_config = { "settings": { "number_of_shards": 1, }, "mappings": { "properties": { "article_title": {"type": "text", "analyzer": "standard", "similarity": "BM25"}, "section_title": {"type": "text", "analyzer": "standard", "similarity": "BM25"}, "passage_text": {"type": "text", "analyzer": "standard", "similarity": "BM25"} } } }```2. To query the index, we found it useful to add a few task-dependent stop-words. The query text is then compared to all of the indexed fields, giving more weight to the passage text:```python banned = ['how', 'why', 'what', 'where', 'which', 'do', 'does', 'is', '?', 'eli5', 'eli5:'] q = ' '.join([w for w in q.split() if w not in banned]) response = es_client.search( index = index_name, body = { "query": { "multi_match": { "query": q, "fields": ["article_title", "section_title", "passage_text^2"], "type": "cross_fields", } }, "size": n_results, } )``` Here's the command to create the index, it should take one to three hours depending on your system.<jupyter_code>if not es_client.indices.exists('wiki40b_snippets_100w'): make_es_index_snippets(es_client, wiki40b_snippets, index_name='wiki40b_snippets_100w')<jupyter_output><empty_output><jupyter_text>Now let's test the ElasticSearch retriever with our running example ELI5 question about skin-to-water heat transfer by returning the 10 best candidate passages:<jupyter_code>question = eli5['test_eli5'][12345]['title'] doc, res_list = query_es_index(question, es_client, index_name='wiki40b_snippets_100w') df = pd.DataFrame({ 'Article': ['---'] + [res['article_title'] for res in res_list], 'Sections': ['---'] + [res['section_title'] if res['section_title'].strip() != '' else res['article_title'] for res in res_list], 'Text': ['--- ' + question] + [res['passage_text'] for res in res_list], }) df.style.set_properties(**{'text-align': 'left'})<jupyter_output><empty_output><jupyter_text>We can immediately see both the strengths and limitations of this approach. The system manages to retrieve documents that are all broadly on topic, mentioning some combination of *water*, *air*, *relative temperature*, and *temperature transfer*. In spite of this, only example 8 ends up containing information that is actually relevant to the question:> Cold air with high relative humidity "feels" colder than dry air of the same temperature because high humidity in cold weather increases the conduction of heat from the body. We got lucky this time, but this passage could as easily have been ranked 11th and not been included in the support document we provide to the answer generation system. As it is, the model will have to sort through mostly off-topic information to find this sentence when reading the resulting supporting document. 4. Retrieving Support Documents with an ELI5-Trained Dense ModelThe sparse retriever works by finding passages which feature the words from the query. However, it has no way to know *a priori* which of these words are more important in context, and seems to struggle with understanding the central theme of the query (human-perceived temperature).Thankfully, some recent works have taken advantage of advances in pre-trained contextual word representations to solve this problem. 
Models such as [DPR](https://arxiv.org/abs/2004.04906) or [REALM](https://arxiv.org/abs/2002.08909) for example learn to compute a vector representation of the query, as well as vector representations of Wikipedia passages in such a way that the passages that best answers a question maximize the dot product between the two representations. Retrieval is then reduced to a Maximum Inner Product Search, which can be executed efficiently using systems like [FAISS](https://github.com/facebookresearch/faiss).These successes are very encouraging for our Open-Domain Long Form QA application. However, our task and setup do not quite meet the requirements of either of either of these approaches. On the one hand, the [DPR](https://arxiv.org/abs/2004.04906) system is trained using gold passage annotations: most major QA dataset tell the system which Wikipedia passage contains the answer. Unfortunately, we do not have such annotations for the ELI5 data. On the other hand, while [REALM](https://arxiv.org/abs/2002.08909) is trained without passage supervision, it requires a pretty expensive pre-training step with an [Inverse Cloze Task](https://arxiv.org/abs/1906.00300) (100,000 steps with batch size 4096), and the ability to re-compute the embeddings of all Wikipedia passages regularly during training.In order to train a similar dense retrieval system at reduced cost without having access to gold passage annotation, we will have to **take advantage of another unique feature of our dataset**, namely the fact that the long form answers are quite similar in style to the Wikipedia passages we want to index. Our hypothesis then is that if we train a system to embed the questions and answers in our dataset in a way that allows us to easily match questions to answers, then using the answer embedder on Wikipedia passages should allow us to similarly match questions to supporting evidence from Wikipedia. 4.a - Contrastive Training with ELI5 In-Batch NegativesAs mentioned above, we want to train a system to produce question and answer embeddings, such that the dot product between the representation of a question and any of its answers is greater than between it and answers of all of the other questions in the dataset. Unfortunately, actually comparing all questions to all answers before taking every single gradient step is computationally prohibitive: instead, we follow previous work in simply processing medium to large batches of question-answer pairs, and making sure that the dot product of a question with its answer is larger than with all other answers in the batch, and *vice versa*. We use a cross-entropy loss for the multinomial distribution over all of the answers (or questions) in a batch, and make use of [PyTorch gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html) to be able to use large batches with limited GPU memory: you can find all implementation details in the `RetrievalQAEmbedder` class in `eli5_utils.py`.--- --- To train the retriever, we show the model batches of 512 question-answer pairs. The model needs to ensure that the embedding of each question in the batch is closer to the embedding of its corresponding answer than to the embedding of any other answer in the batch.--- We use a single BERT-style pre-trained model to embed the questions and answers, and learn different projection matrices to bring both representations down to dimension 128: the projection matrices are trained from scratch as the sentence embedding model is fine-tuned. 
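To make the in-batch training objective more concrete, here is a minimal sketch of the loss computation (a simplified stand-in for illustration only, not the actual `RetrievalQAEmbedder` code from `eli5_utils.py`; the tensor names are assumptions):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_reps, a_reps):
    # q_reps, a_reps: [batch_size, 128] projected question / answer embeddings
    scores = torch.mm(q_reps, a_reps.t())  # pairwise dot products within the batch
    targets = torch.arange(q_reps.size(0), device=q_reps.device)
    # each question must score highest with its own answer, and vice versa
    loss_q_to_a = F.cross_entropy(scores, targets)
    loss_a_to_q = F.cross_entropy(scores.t(), targets)
    return (loss_q_to_a + loss_a_to_q) / 2
```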
We found that the 8-layer distilled version of BERT from the [Well-Read Students Learn Better paper](https://arxiv.org/abs/1908.08962) performed as well as or better than full BERT for a notable gain in computation speed: if you want an even faster model, that work provides pre-trained models spanning the full range of computation/accuracy trade-offs. The model can then be trained with the following code: with batch size 32/512 on a single 16GB GPU, you can run 10 training epochs in under 6 hours.<jupyter_code># training arguments class ArgumentsQAR(): def __init__(self): self.batch_size = 512 self.max_length = 128 self.checkpoint_batch_size = 32 self.print_freq = 100 self.pretrained_model_name = "google/bert_uncased_L-8_H-768_A-12" self.model_save_name = "retriever_models/eli5_retriever_model_l-8_h-768_b-512-512" self.learning_rate = 2e-4 self.num_epochs = 10 qar_args = ArgumentsQAR() # prepare torch Dataset objects qar_train_dset = ELI5DatasetQARetriver(eli5['train_eli5'], training=True) qar_valid_dset = ELI5DatasetQARetriver(eli5['validation_eli5'], training=False) # load pre-trained BERT and make model qar_tokenizer, qar_model = make_qa_retriever_model( model_name=qar_args.pretrained_model_name, from_file=None, device="cuda:0" ) # train the model train_qa_retriever(qar_model, qar_tokenizer, qar_train_dset, qar_valid_dset, qar_args)<jupyter_output><empty_output><jupyter_text>If you don't want to train the model yourself, you can also download trained weights from the [Hugging Face model repository](https://huggingface.co/models) with:<jupyter_code>qar_tokenizer = AutoTokenizer.from_pretrained('yjernite/retribert-base-uncased') qar_model = AutoModel.from_pretrained('yjernite/retribert-base-uncased').to('cuda:1') _ = qar_model.eval()<jupyter_output><empty_output><jupyter_text>Once the model is trained, it can be used to compute passage embeddings for all Wikipedia snippets. The `make_qa_dense_index` method takes advantage of `numpy` memory-mapping, so embeddings are written directly to disk. Again with a single GPU, computing the full set of passage embeddings should take about 18 hours.<jupyter_code>if not os.path.isfile('wiki40b_passages_reps_32_l-8_h-768_b-512-512.dat'): make_qa_dense_index( qar_model, qar_tokenizer, wiki40b_snippets, device='cuda:0', index_name='wiki40b_passages_reps_32_l-8_h-768_b-512-512.dat' )<jupyter_output><empty_output><jupyter_text>4.b - Using the Trained Dense Retriever and Wikipedia Index Now that we have trained our model to compute query and answer embeddings and used it to compute passage embeddings for all our Wikipedia snippets, let's see whether it can actually find supporting evidence for a new question. Recall the two steps to using the dense retriever: we first compute an embedding for a new question, then do Max Inner Product Search with the pre-computed passage representations.--- --- At test time, the Retriever Model encodes the question, and compares its embedding to the pre-computed representation of all the Wikipedia passages. The ten passages with the closest embedding are returned to create the support document.--- The MIPS part can be executed efficiently with the `faiss` library. Additionally, since we computed 128-dimensional passage embeddings, the whole of the representations fits on a GPU, making retrieval even faster. 
We can create the `faiss_gpu` index with the following code:<jupyter_code>faiss_res = faiss.StandardGpuResources() wiki40b_passage_reps = np.memmap( 'wiki40b_passages_reps_32_l-8_h-768_b-512-512.dat', dtype='float32', mode='r', shape=(wiki40b_snippets.num_rows, 128) ) wiki40b_index_flat = faiss.IndexFlatIP(128) wiki40b_gpu_index = faiss.index_cpu_to_gpu(faiss_res, 1, wiki40b_index_flat) wiki40b_gpu_index.add(wiki40b_passage_reps)<jupyter_output><empty_output><jupyter_text>Now we can use the `query_qa_dense_index` function to query the dense index for our running example question about perceived temperature:<jupyter_code>question = eli5['test_eli5'][12345]['title'] doc, res_list = query_qa_dense_index(question, qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, device='cuda:1') df = pd.DataFrame({ 'Article': ['---'] + [res['article_title'] for res in res_list], 'Sections': ['---'] + [res['section_title'] if res['section_title'].strip() != '' else res['article_title'] for res in res_list], 'Text': ['--- ' + question] + [res['passage_text'] for res in res_list], }) df.style.set_properties(**{'text-align': 'left'})<jupyter_output><empty_output><jupyter_text>The retrieved documents are quite different from the ones returned by the sparse retrieval, with a greater focus on how water helps draw heat from a body, either through evaporation or through better conduction, which is information the model needs to answer this question.The retriever still misses out on one aspect of the query: the way the question is formulated implies that in the considered scenario the person is immersed in water rather than just wet, which makes the "latent heat" and evaporation arguments a little less relevant, but that's a really subtle distinction! 4.c - Retriever Model EvaluationWe have trained a retrieval model that *seems* to be working a little better than the traditional word-matching based approach, at least on our running example. Before we use it to actually answer questions, however, we would like to be able to get some **quantitative evaluation** of the performances of both approaches.For the retriever, we want to favor **recall** over precision: our first priority is to make sure that all of the information needed to write the answers is present in the support document. If there is unrelated information, the generation model can learn to sort it out. We measure this by computing the proportion of words in the high-scoring answers which are present in the retrieved support document. To focus on important words, we also weigh answer words by their *Inverse Document Frequency*. This gives us the following **IDF-recall** scoring function:<jupyter_code># We first select high-scoring answers (answers beyond the first must have a score of at least 3) test_qa_list = [(exple['title'], ' '.join([a for i, (a, sc) in enumerate(zip(exple['answers']['text'], exple['answers']['score'])) \ if i == 0 or sc >= 3 ])) for exple in eli5['test_eli5']] # We then compute word frequencies in answer text answer_doc_freq = {} for q, a in test_qa_list: for w in a.lower().split(): answer_doc_freq[w] = answer_doc_freq.get(w, 0) + 1 # The IDF-recall function is then: def da_idf_recall(doc, answer): d_words = dict([(w, True) for w in doc.lower().split()]) a_words = answer.lower().split() recall = sum([1. / math.log(1 + answer_doc_freq.get(w, 1)) for w in a_words if w in d_words]) / \ sum([1. 
/ math.log(1 + answer_doc_freq.get(w, 1)) for w in a_words]) return recall<jupyter_output><empty_output><jupyter_text>The `evaluate_retriever` function in `eli5_utils.py` takes a retrieval and scoring function and computes both the average retrieval time and score of the document relative the the provided answer. Let's write some short-hand functions for the dense and sparse retrievers with our currently loaded indexes, and evaluate them on the ELI5 test set (be advised that evaluating the retriever on the full test set takes up to two hours):<jupyter_code>def dense_ret_for_eval(question, n_ret): _, dense_res_list = query_qa_dense_index( question, qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, n_results=n_ret, device='cuda:1' ) dense_doc = ' '.join([res['passage_text'] for res in dense_res_list]) return dense_doc def sparse_ret_for_eval(question, n_ret): _, sparse_res_list = query_es_index( question, es_client, index_name='wiki40b_snippets_100w', n_results=n_ret ) sparse_doc = ' '.join([res['passage_text'] for res in sparse_res_list]) return sparse_doc dense_score = evaluate_retriever(test_qa_list, dense_ret_for_eval, da_idf_recall) sparse_score = evaluate_retriever(test_qa_list, sparse_ret_for_eval, da_idf_recall) df = pd.DataFrame({ 'IDF-Recall': [sparse_score['idf_recall'], dense_score['idf_recall']], 'Time/Query': [sparse_score['retrieval_time'], dense_score['retrieval_time']], }, index=[ 'Sparse', 'Dense']) df.style.format({'IDF-Recall': "{:.4f}", 'Time/Query': "{:.4f}"})<jupyter_output><empty_output><jupyter_text>This metric obviously has limitations. Since it only looks at individual word matches, it is oblivious to *word order* or *paraphrases* among others. However, we can be encouraged by the fact that the dense retriever not only yields **higher IDF-recall**, it also takes **less than a third of the time** of the ElasticSearch-based system! Considering these results, we can confidently use it for the next part: training the sequence-to-sequence answer generation system. 5. Generating Answers with a Sequence-to-Sequence ModelNow that we know how to create an evidence document with supporting information for a given question, let's look into training the second component of our system: the **answer generation module**. We will instantiate it as a sequence-to-sequence model which uses the [BART](https://arxiv.org/abs/1910.13461) architecture, and initialize it with the [bart-large pretrained weights](https://huggingface.co/facebook/bart-large). In short, the [BART paper](https://arxiv.org/abs/1910.13461) uses a denoising auto-encoder style objective to pre-train an encoder-decoder model (similarly to how masked language modeling is used to pre-trained BERT-style encoders). Among other applications, they show that large-scale pre-training with their objective followed by fine-tuning on ELI5 data yields the state-of-the-art ROUGE performance for the original version of the dataset (which uses pre-computed support documents made from CommonCrawl pages).We provide the concatenation of the question and support document as input to the model, and train the decoder to minimize the perplexity of the gold answer. One notable choice is that **we train the model using all high-scoring answers in the training set**, so the model will see several instances of the same question-document input with different outputs. 
The supporting passages are separated by a special token ``, so the input for our running example will look like:> question: Why does water heated to room temperature feel colder than the air around it? context: \\ when the skin is completely wet. The body continuously loses ... this heat comes from the liquid itself and the surrounding gas and surfaces. \\ protected by a glass panel. Consequently, these types of collectors... Since heat loss due to convection cannot cross a vacuum, it forms an efficient isolation mechanism to keep heat inside the collector pipes. Since two flat \\ ... \\ changes. Conduction On... Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact. The process of heat transfer The first thing we do is pre-compute the support documents for the training and validation sets so we can use all available GPUs to train the sequence-to-sequence model. The model is then trained with the `train_qa_s2s` function in `eli5_utils.py`. A 16GB GPU accommodates up to two examples at a time, so here is the code to train the model using 4 GPUs with `torch.nn.DataParallel`. One epoch should take about 18 hours:<jupyter_code># pre-computing support documents eli5_train_docs = [] for example in eli5['train_eli5']: support_doc, dense_res_list = query_qa_dense_index( example['title'], qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, n_results=n_ret ) eli5_train_docs += [(example['q_id'], support_doc, dense_res_list)] eli5_valid_docs = [] for example in eli5['validation_eli5']: support_doc, dense_res_list = query_qa_dense_index( example['title'], qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, n_results=n_ret ) eli5_valid_docs += [(example['q_id'], support_doc, dense_res_list)] # training loop proper class ArgumentsS2S(): def __init__(self): self.batch_size = 8 self.backward_freq = 16 self.max_length = 1024 self.print_freq = 100 self.model_save_name = "seq2seq_models/eli5_bart_model" self.learning_rate = 2e-4 self.num_epochs = 3 s2s_args = ArgumentsS2S() eli5_train_docs = json.load(open('precomputed/eli5_train_precomputed_dense_docs.json')) eli5_valid_docs = json.load(open('precomputed/eli5_valid_precomputed_dense_docs.json')) s2s_train_dset = ELI5DatasetS2S(eli5['train_eli5'], document_cache=dict([(k, d) for k, d, src_ls in eli5_train_docs])) s2s_valid_dset = ELI5DatasetS2S(eli5['validation_eli5'], document_cache=dict([(k, d) for k, d, src_ls in eli5_valid_docs]), training=False) qa_s2s_tokenizer, pre_model = make_qa_s2s_model( model_name="facebook/bart-large", from_file=None, device="cuda:0" ) qa_s2s_model = torch.nn.DataParallel(pre_model) train_qa_s2s(qa_s2s_model, qa_s2s_tokenizer, s2s_train_dset, s2s_valid_dset, s2s_args)<jupyter_output><empty_output><jupyter_text>Again, if you don't want to train the model yourself, we made trained weights available on the [Hugging Face model repository](https://huggingface.co/models), which you can download with:<jupyter_code>qa_s2s_tokenizer = AutoTokenizer.from_pretrained('yjernite/bart_eli5') qa_s2s_model = AutoModelForSeq2SeqLM.from_pretrained('yjernite/bart_eli5').to('cuda:0') _ = qa_s2s_model.eval()<jupyter_output><empty_output><jupyter_text>We now have everything we need to answer any question! 
Now let's try the full system on our running example along with the first four questions of the test set:<jupyter_code>questions = [] answers = [] for i in [12345] + [j for j in range(4)]: # create support document with the dense index question = eli5['test_eli5'][i]['title'] doc, res_list = query_qa_dense_index( question, qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, device='cuda:1' ) # concatenate question and support document into BART input question_doc = "question: {} context: {}".format(question, doc) # generate an answer with beam search answer = qa_s2s_generate( question_doc, qa_s2s_model, qa_s2s_tokenizer, num_answers=1, num_beams=8, min_len=64, max_len=256, max_input_length=1024, device="cuda:0" )[0] questions += [question] answers += [answer] df = pd.DataFrame({ 'Question': questions, 'Answer': answers, }) df.style.set_properties(**{'text-align': 'left'})<jupyter_output><empty_output><jupyter_text>We made it, and a lot of these answers actually make sense! The model seems to sometimes struggle with coherence and with starting some of the answers, but we're getting some pretty good information overall.The last thing we'll do is see how we can get a quantitative evaluation of the model performance. Here, we'll use the ROUGE implementation provided in the `nlp` library. Note that it is a different implementation than the one used in the [BART](https://arxiv.org/abs/1910.13461) and [ELI5](https://arxiv.org/abs/1907.09190) papers: the [rouge](https://pypi.org/project/rouge/) Python package they use normalises all numerical values, among other pre-processing choices, leading to higher numbers. We reproduce their evaluation in the Appendix section, but recommend using the more sensitive metric provided by the `nlp` package, which can be computed with:<jupyter_code>predicted = [] reference = [] # Generate answers for the full test set for i in range(eli5['test_eli5'].num_rows): # create support document with the dense index question = eli5['test_eli5'][i]['title'] doc, res_list = query_qa_dense_index( question, qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, device='cuda:1' ) # concatenate question and support document into BART input question_doc = "question: {} context: {}".format(question, doc) # generate an answer with beam search answer = qa_s2s_generate( question_doc, qa_s2s_model, qa_s2s_tokenizer, num_answers=1, num_beams=8, min_len=96, max_len=256, max_input_length=1024, device="cuda:0" )[0] predicted += [answer] reference += [eli5['test_eli5'][i]['answers']['text'][0]] # Compare each generation to the fist answer from the dataset nlp_rouge = nlp.load_metric('rouge') scores = nlp_rouge.compute( predicted, reference, rouge_types=['rouge1', 'rouge2', 'rougeL', 'rougeLsum'], use_agregator=True, use_stemmer=False ) df = pd.DataFrame({ 'rouge1': [scores['rouge1'].mid.precision, scores['rouge1'].mid.recall, scores['rouge1'].mid.fmeasure], 'rouge2': [scores['rouge2'].mid.precision, scores['rouge2'].mid.recall, scores['rouge2'].mid.fmeasure], 'rougeL': [scores['rougeL'].mid.precision, scores['rougeL'].mid.recall, scores['rougeL'].mid.fmeasure], }, index=[ 'P', 'R', 'F']) df.style.format({'rouge1': "{:.4f}", 'rouge2': "{:.4f}", 'rougeL': "{:.4f}"})<jupyter_output><empty_output><jupyter_text>That's it for today! And once again, if you want to play with the model a bit more and ask it whatever question comes to mind, please feel free to head over to: [**Our Live Demo!**](https://huggingface.co/qa/) Thank you for reading! 
Appendix: Here we reproduce the ROUGE evaluation from the original [ELI5 paper](https://arxiv.org/abs/1907.09190) to be able to compare our performance to theirs. Our generation setting leads to lower ROUGE-1 and ROUGE-2 than the state-of-the-art reported in [BART](https://arxiv.org/abs/1910.13461) (30.6 and 6.2 respectively), and higher ROUGE-L (24.3).<jupyter_code>from nltk import PorterStemmer from rouge import Rouge from spacy.lang.en import English from time import time stemmer = PorterStemmer() rouge = Rouge() tokenizer = English().Defaults.create_tokenizer() def compute_rouge_eli5(compare_list): preds = [" ".join([stemmer.stem(str(w)) for w in tokenizer(pred)]) for gold, pred in compare_list] golds = [" ".join([stemmer.stem(str(w)) for w in tokenizer(gold)]) for gold, pred in compare_list] scores = rouge.get_scores(preds, golds, avg=True) return scores compare_list = [(g, p) for p, g in zip(predicted, reference)] scores = compute_rouge_eli5(compare_list) df = pd.DataFrame({ 'rouge1': [scores['rouge-1']['p'], scores['rouge-1']['r'], scores['rouge-1']['f']], 'rouge2': [scores['rouge-2']['p'], scores['rouge-2']['r'], scores['rouge-2']['f']], 'rougeL': [scores['rouge-l']['p'], scores['rouge-l']['r'], scores['rouge-l']['f']], }, index=[ 'P', 'R', 'F']) df.style.format({'rouge1': "{:.4f}", 'rouge2': "{:.4f}", 'rougeL': "{:.4f}"})<jupyter_output><empty_output>
notebooks/longform-qa/Long_Form_Question_Answering_with_ELI5_and_Wikipedia.ipynb/0
{ "file_path": "notebooks/longform-qa/Long_Form_Question_Answering_with_ELI5_and_Wikipedia.ipynb", "repo_id": "notebooks", "token_count": 13060 }
142
import argparse import logging import os import sys import tensorflow as tf from datasets import load_dataset from tqdm import tqdm from transformers import AutoTokenizer, TFAutoModelForSequenceClassification from transformers.file_utils import is_sagemaker_dp_enabled if os.environ.get("SDP_ENABLED") or is_sagemaker_dp_enabled(): SDP_ENABLED = True os.environ["SAGEMAKER_INSTANCE_TYPE"] = "p3dn.24xlarge" import smdistributed.dataparallel.tensorflow as sdp else: SDP_ENABLED = False def fit(model, loss, opt, train_dataset, epochs, train_batch_size, max_steps=None): pbar = tqdm(train_dataset) for i, batch in enumerate(pbar): with tf.GradientTape() as tape: inputs, targets = batch outputs = model(batch) loss_value = loss(targets, outputs.logits) if SDP_ENABLED: tape = sdp.DistributedGradientTape(tape, sparse_as_dense=True) grads = tape.gradient(loss_value, model.trainable_variables) opt.apply_gradients(zip(grads, model.trainable_variables)) pbar.set_description(f"Loss: {loss_value:.4f}") if SDP_ENABLED: if i == 0: sdp.broadcast_variables(model.variables, root_rank=0) sdp.broadcast_variables(opt.variables(), root_rank=0) first_batch = False if max_steps and i >= max_steps: break train_results = {"loss": loss_value.numpy()} return train_results def get_datasets(): # Load dataset train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"]) # Preprocess train dataset train_dataset = train_dataset.map( lambda e: tokenizer(e["text"], truncation=True, padding="max_length"), batched=True ) train_dataset.set_format(type="tensorflow", columns=["input_ids", "attention_mask", "label"]) train_features = { x: train_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ["input_ids", "attention_mask"] } tf_train_dataset = tf.data.Dataset.from_tensor_slices((train_features, train_dataset["label"])) # Preprocess test dataset test_dataset = test_dataset.map( lambda e: tokenizer(e["text"], truncation=True, padding="max_length"), batched=True ) test_dataset.set_format(type="tensorflow", columns=["input_ids", "attention_mask", "label"]) test_features = { x: test_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ["input_ids", "attention_mask"] } tf_test_dataset = tf.data.Dataset.from_tensor_slices((test_features, test_dataset["label"])) if SDP_ENABLED: tf_train_dataset = tf_train_dataset.shard(sdp.size(), sdp.rank()) tf_test_dataset = tf_test_dataset.shard(sdp.size(), sdp.rank()) tf_train_dataset = tf_train_dataset.batch(args.train_batch_size, drop_remainder=True) tf_test_dataset = tf_test_dataset.batch(args.eval_batch_size, drop_remainder=True) return tf_train_dataset, tf_test_dataset if __name__ == "__main__": parser = argparse.ArgumentParser() # Hyperparameters sent by the client are passed as command-line arguments to the script. 
parser.add_argument("--epochs", type=int, default=3) parser.add_argument("--train_batch_size", type=int, default=16) parser.add_argument("--eval_batch_size", type=int, default=8) parser.add_argument("--model_name", type=str) parser.add_argument("--learning_rate", type=str, default=5e-5) parser.add_argument("--do_train", type=bool, default=True) parser.add_argument("--do_eval", type=bool, default=True) # Data, model, and output directories parser.add_argument("--output_data_dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"]) parser.add_argument("--model_dir", type=str, default=os.environ["SM_MODEL_DIR"]) parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"]) args, _ = parser.parse_known_args() # Set up logging logger = logging.getLogger(__name__) logging.basicConfig( level=logging.getLevelName("INFO"), handlers=[logging.StreamHandler(sys.stdout)], format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", ) if SDP_ENABLED: sdp.init() gpus = tf.config.experimental.list_physical_devices("GPU") for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) if gpus: tf.config.experimental.set_visible_devices(gpus[sdp.local_rank()], "GPU") # Load model and tokenizer model = TFAutoModelForSequenceClassification.from_pretrained(args.model_name) tokenizer = AutoTokenizer.from_pretrained(args.model_name) # get datasets tf_train_dataset, tf_test_dataset = get_datasets() # fine optimizer and loss optimizer = tf.keras.optimizers.Adam(learning_rate=args.learning_rate) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metrics = [tf.keras.metrics.SparseCategoricalAccuracy()] model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # Training if args.do_train: # train_results = model.fit(tf_train_dataset, epochs=args.epochs, batch_size=args.train_batch_size) train_results = fit( model, loss, optimizer, tf_train_dataset, args.epochs, args.train_batch_size, max_steps=None ) logger.info("*** Train ***") output_eval_file = os.path.join(args.output_data_dir, "train_results.txt") if not SDP_ENABLED or sdp.rank() == 0: with open(output_eval_file, "w") as writer: logger.info("***** Train results *****") logger.info(train_results) for key, value in train_results.items(): logger.info(" %s = %s", key, value) writer.write("%s = %s\n" % (key, value)) # Evaluation if args.do_eval and (not SDP_ENABLED or sdp.rank() == 0): result = model.evaluate(tf_test_dataset, batch_size=args.eval_batch_size, return_dict=True) logger.info("*** Evaluate ***") output_eval_file = os.path.join(args.output_data_dir, "eval_results.txt") with open(output_eval_file, "w") as writer: logger.info("***** Eval results *****") logger.info(result) for key, value in result.items(): logger.info(" %s = %s", key, value) writer.write("%s = %s\n" % (key, value)) # Save result if SDP_ENABLED: if sdp.rank() == 0: model.save_pretrained(args.model_dir) tokenizer.save_pretrained(args.model_dir) else: model.save_pretrained(args.model_dir) tokenizer.save_pretrained(args.model_dir)
notebooks/sagemaker/07_tensorflow_distributed_training_data_parallelism/scripts/train.py/0
{ "file_path": "notebooks/sagemaker/07_tensorflow_distributed_training_data_parallelism/scripts/train.py", "repo_id": "notebooks", "token_count": 2936 }
143
<jupyter_start><jupyter_text>HuggingFace Hub meets Amazon SageMaker Fine-tune a Multi-Class Classification with `Trainer` and `emotion` dataset and push it to the [Hugging Face Hub](https://huggingface.co/models) IntroductionWelcome to our end-to-end multi-class Text-Classification example. In this demo, we will use the Hugging Faces `transformers` and `datasets` library together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer for multi-class text classification. In particular, the pre-trained model will be fine-tuned using the `emotion` dataset. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. _**NOTE: You can run this demo in Sagemaker Studio, your local machine or Sagemaker Notebook Instances**_ Development Environment and Permissions Installation_*Note:* we only install the required libraries from Hugging Face and AWS. You also need PyTorch or Tensorflow, if you haven´t it installed_<jupyter_code>!pip install "sagemaker>=2.140.0" "transformers==4.26.1" "datasets[s3]==2.10.1" --upgrade import sagemaker.huggingface<jupyter_output><empty_output><jupyter_text>Permissions _If you are going to use Sagemaker in a local environment. You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._<jupyter_code>import sagemaker import boto3 sess = sagemaker.Session() # sagemaker session bucket -> used for uploading data, models and logs # sagemaker will automatically create this bucket if it not exists sagemaker_session_bucket=None if sagemaker_session_bucket is None and sess is not None: # set to default bucket if a bucket name is not given sagemaker_session_bucket = sess.default_bucket() try: role = sagemaker.get_execution_role() except ValueError: iam = boto3.client('iam') role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn'] sess = sagemaker.Session(default_bucket=sagemaker_session_bucket) print(f"sagemaker role arn: {role}") print(f"sagemaker bucket: {sess.default_bucket()}") print(f"sagemaker session region: {sess.boto_region_name}")<jupyter_output><empty_output><jupyter_text>PreprocessingWe are using the `datasets` library to download and preprocess the `emotion` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. The [emotion](https://github.com/dair-ai/emotion_dataset) dataset consists of 16000 training examples, 2000 validation examples, and 2000 testing examples. 
Tokenization<jupyter_code>from datasets import load_dataset from transformers import AutoTokenizer # tokenizer used in preprocessing tokenizer_name = 'distilbert-base-uncased' # dataset used dataset_name = 'emotion' # s3 key prefix for the data s3_prefix = 'samples/datasets/emotion' # download tokenizer tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) # tokenizer helper function def tokenize(batch): return tokenizer(batch['text'], padding='max_length', truncation=True) # load dataset train_dataset, test_dataset = load_dataset(dataset_name, split=['train', 'test']) # tokenize dataset train_dataset = train_dataset.map(tokenize, batched=True) test_dataset = test_dataset.map(tokenize, batched=True) # set format for pytorch train_dataset = train_dataset.rename_column("label", "labels") train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels']) test_dataset = test_dataset.rename_column("label", "labels") test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])<jupyter_output>Downloading: 3.62kB [00:00, 629kB/s] Downloading: 3.28kB [00:00, 998kB/s] Using custom data configuration default Reusing dataset emotion (/Users/philipp/.cache/huggingface/datasets/emotion/default/0.0.0/348f63ca8e27b3713b6c04d723efe6d824a56fb3d1449794716c0f0296072705) 100%|██████████| 2/2 [00:00<00:00, 99.32it/s] 100%|██████████| 16/16 [00:02<00:00, 7.47ba/s] 100%|██████████| 2/2 [00:00<00:00, 6.47ba/s]<jupyter_text>Uploading data to `sagemaker_session_bucket`After we processed the `datasets` we are going to use the new `FileSystem` [integration](https://huggingface.co/docs/datasets/filesystems.html) to upload our dataset to S3.<jupyter_code># save train_dataset to s3 training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train' train_dataset.save_to_disk(training_input_path) # save test_dataset to s3 test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test' test_dataset.save_to_disk(test_input_path)<jupyter_output><empty_output><jupyter_text>Creating an Estimator and start a training job List of supported models: https://huggingface.co/models?library=pytorch,transformers&sort=downloads setting up `push_to_hub` for our model. The `train.py` scripts implements the `push_to_hub` using the `Trainer` and `TrainingArguments`. To push our model to the [Hub](https://huggingface.co/models) we need to define the `push_to_hub`. hyperparameter and set it to `True` and provide out [Hugging Face Token](https://hf.co/settings/token). Additionally, we can configure the repository name and saving strategy using the `hub_model_id`, `hub_strategy`.You can find documentation to those parameters [here](https://huggingface.co/transformers/main_classes/trainer.html). We are going to provide our HF Token securely with out exposing it to the public using `notebook_login` from the `huggingface_hub` SDK. But be careful your token will still be visible insight the logs of the training job. 
If you run `huggingface_estimator.fit(...,wait=True)` you will see the token in the logs. A better way of providing your `HF_TOKEN` to your training jobs would be using [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html). You can also directly find your token at [https://hf.co/settings/token](https://hf.co/settings/token).<jupyter_code>from huggingface_hub import notebook_login notebook_login() from sagemaker.huggingface import HuggingFace from huggingface_hub import HfFolder import time # hyperparameters, which are passed into the training job hyperparameters={'epochs': 1, # number of training epochs 'train_batch_size': 32, # batch size for training 'eval_batch_size': 64, # batch size for evaluation 'learning_rate': 3e-5, # learning rate used during training 'model_id':'distilbert-base-uncased', # pre-trained model 'fp16': True, # Whether to use 16-bit (mixed) precision training 'push_to_hub': True, # Defines if we want to push the model to the hub 'hub_model_id': 'sagemaker-distilbert-emotion', # The model id of the model to push to the hub 'hub_strategy': 'every_save', # The strategy to use when pushing the model to the hub 'hub_token': HfFolder.get_token() # HuggingFace token to have permission to push } # define Training Job Name job_name = f'push-to-hub-sample-{time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())}' # create the Estimator huggingface_estimator = HuggingFace( entry_point = 'train.py', # fine-tuning script used in training job source_dir = './scripts', # directory where fine-tuning script is stored instance_type = 'ml.p3.2xlarge', # instance type used for the training job instance_count = 1, # the number of instances used for training base_job_name = job_name, # the name of the training job role = role, # IAM role used in training job to access AWS resources, e.g. S3 transformers_version = '4.26', # the transformers version used in the training job pytorch_version = '1.13', # the pytorch version used in the training job py_version = 'py39', # the python version used in the training job hyperparameters = hyperparameters, # the hyperparameters used for running the training job ) # define a data input dictionary with our uploaded s3 uris data = { 'train': training_input_path, 'test': test_input_path } # starting the train job with our uploaded datasets as input # setting wait to False to not expose the HF Token huggingface_estimator.fit(data, wait=False) # adding waiter to see when training is done waiter = huggingface_estimator.sagemaker_session.sagemaker_client.get_waiter('training_job_completed_or_stopped') waiter.wait(TrainingJobName=huggingface_estimator.latest_training_job.name)<jupyter_output><empty_output><jupyter_text>Accessing the model on hf.co/models We can access the model on [hf.co/models](https://hf.co/models) using the `hub_model_id` and our username.<jupyter_code>from huggingface_hub import HfApi whoami = HfApi().whoami() username = whoami['name'] print(f"https://huggingface.co/{username}/{hyperparameters['hub_model_id']}")<jupyter_output>https://huggingface.co/philschmid/sagemaker-distilbert-emotion<jupyter_text>Deploying the model from Hugging Face to a SageMaker Endpoint To deploy our model to Amazon SageMaker we can create a `HuggingFaceModel` and provide the Hub configuration (`HF_MODEL_ID` & `HF_TASK`) to deploy it. 
Alternatively, we can use the `huggingface_estimator` to deploy our model from S3 with `huggingface_estimator.deploy()`.<jupyter_code>from sagemaker.huggingface import HuggingFaceModel import sagemaker role = sagemaker.get_execution_role() # Hub Model configuration. https://huggingface.co/models hub = { 'HF_MODEL_ID':f"{username}/{hyperparameters['hub_model_id']}", 'HF_TASK':'text-classification' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( transformers_version='4.26', pytorch_version='1.13', py_version='py39', env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.m5.xlarge' # ec2 instance type )<jupyter_output><empty_output><jupyter_text>Then, we use the returned predictor object to call the endpoint.<jupyter_code>sentiment_input= {"inputs": "Winter is coming and it will be dark soon."} predictor.predict(sentiment_input)<jupyter_output><empty_output><jupyter_text>Finally, we delete the inference endpoint.<jupyter_code>predictor.delete_model() predictor.delete_endpoint()<jupyter_output><empty_output>
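As a side note (not part of the original notebook): before deleting the endpoint, it can also be invoked without the `predictor` object, for example from a separate session, through the low-level runtime client. A rough sketch, where the endpoint name is a placeholder you would look up in the SageMaker console:

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="<your-endpoint-name>",  # placeholder, not a real endpoint name
    ContentType="application/json",
    Body=json.dumps({"inputs": "Winter is coming and it will be dark soon."}),
)
print(json.loads(response["Body"].read()))
```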
notebooks/sagemaker/14_train_and_push_to_hub/sagemaker-notebook.ipynb/0
{ "file_path": "notebooks/sagemaker/14_train_and_push_to_hub/sagemaker-notebook.ipynb", "repo_id": "notebooks", "token_count": 3921 }
144
from transformers import DonutProcessor, VisionEncoderDecoderModel import torch device = "cuda" if torch.cuda.is_available() else "cpu" def model_fn(model_dir): # Load our model from Hugging Face processor = DonutProcessor.from_pretrained(model_dir) model = VisionEncoderDecoderModel.from_pretrained(model_dir) # Move model to GPU model.to(device) return model, processor def predict_fn(data, model_and_processor): # unpack model and tokenizer model, processor = model_and_processor image = data.get("inputs") pixel_values = processor.feature_extractor(image, return_tensors="pt").pixel_values task_prompt = "<s>" # start of sequence token for decoder since we are not having a user prompt decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids # run inference outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, early_stopping=True, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, num_beams=1, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) # process output prediction = processor.batch_decode(outputs.sequences)[0] prediction = processor.token2json(prediction) return prediction
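if __name__ == "__main__":
    # Local smoke-test sketch (not part of the SageMaker hosting flow); the model directory
    # and image path below are illustrative assumptions only.
    from PIL import Image

    artifacts = model_fn("./model")  # a directory containing the saved processor and model
    sample_image = Image.open("sample_document.png").convert("RGB")
    print(predict_fn({"inputs": sample_image}, artifacts))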
notebooks/sagemaker/26_document_ai_donut/scripts/inference.py/0
{ "file_path": "notebooks/sagemaker/26_document_ai_donut/scripts/inference.py", "repo_id": "notebooks", "token_count": 571 }
145
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # PEFT 🤗 PEFT (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting large pretrained models to various downstream applications without fine-tuning all of a model's parameters because it is prohibitively costly. PEFT methods only fine-tune a small number of (extra) model parameters - significantly decreasing computational and storage costs - while yielding performance comparable to a fully fine-tuned model. This makes it more accessible to train and store large language models (LLMs) on consumer hardware. PEFT is integrated with the Transformers, Diffusers, and Accelerate libraries to provide a faster and easier way to load, train, and use large models for inference. <div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5"> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="quicktour" ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Get started</div> <p class="text-gray-700">Start here if you're new to 🤗 PEFT to get an overview of the library's main features, and how to train a model with a PEFT method.</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./task_guides/image_classification_lora" ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div> <p class="text-gray-700">Practical guides demonstrating how to apply various PEFT methods across different types of tasks like image classification, causal language modeling, automatic speech recognition, and more. 
Learn how to use 🤗 PEFT with the DeepSpeed and Fully Sharded Data Parallel scripts.</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/lora" ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div> <p class="text-gray-700">Get a better theoretical understanding of how LoRA and various soft prompting methods help reduce the number of trainable parameters to make training more efficient.</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/config" ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div> <p class="text-gray-700">Technical descriptions of how 🤗 PEFT classes and methods work.</p> </a> </div> </div> <iframe src="https://stevhliu-peft-methods.hf.space" frameborder="0" width="850" height="620" ></iframe>
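As a minimal, illustrative sketch of the core workflow (the base model below is only an example; see the Quicktour for the full walkthrough):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")  # example base model
peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # only a small fraction of the parameters is trainable
```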
peft/docs/source/index.md/0
{ "file_path": "peft/docs/source/index.md", "repo_id": "peft", "token_count": 1161 }
146
<jupyter_start><jupyter_code>from transformers import AutoModelForCausalLM from peft import get_peft_config, get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType, PeftType import torch from datasets import load_dataset import os from transformers import AutoTokenizer from torch.utils.data import DataLoader from transformers import default_data_collator, get_linear_schedule_with_warmup from tqdm import tqdm from datasets import load_dataset device = "cuda" model_name_or_path = "bigscience/bloomz-560m" tokenizer_name_or_path = "bigscience/bloomz-560m" peft_config = PromptTuningConfig( task_type=TaskType.CAUSAL_LM, prompt_tuning_init=PromptTuningInit.TEXT, num_virtual_tokens=8, prompt_tuning_init_text="Classify if the tweet is a complaint or not:", tokenizer_name_or_path=model_name_or_path, ) dataset_name = "twitter_complaints" checkpoint_name = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}_v1.pt".replace( "/", "_" ) text_column = "Tweet text" label_column = "text_label" max_length = 64 lr = 3e-2 num_epochs = 50 batch_size = 8 from datasets import load_dataset dataset = load_dataset("ought/raft", dataset_name) classes = [k.replace("_", " ") for k in dataset["train"].features["Label"].names] print(classes) dataset = dataset.map( lambda x: {"text_label": [classes[label] for label in x["Label"]]}, batched=True, num_proc=1, ) print(dataset) dataset["train"][0] # data preprocessing tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) if tokenizer.pad_token_id is None: tokenizer.pad_token_id = tokenizer.eos_token_id target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes]) print(target_max_length) def preprocess_function(examples): batch_size = len(examples[text_column]) inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]] targets = [str(x) for x in examples[label_column]] model_inputs = tokenizer(inputs) labels = tokenizer(targets, add_special_tokens=False) # don't add bos token because we concatenate with inputs for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] label_input_ids = labels["input_ids"][i] + [tokenizer.eos_token_id] # print(i, sample_input_ids, label_input_ids) model_inputs["input_ids"][i] = sample_input_ids + label_input_ids labels["input_ids"][i] = [-100] * len(sample_input_ids) + label_input_ids model_inputs["attention_mask"][i] = [1] * len(model_inputs["input_ids"][i]) # print(model_inputs) for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] label_input_ids = labels["input_ids"][i] model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * ( max_length - len(sample_input_ids) ) + sample_input_ids model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[ "attention_mask" ][i] labels["input_ids"][i] = [-100] * (max_length - len(sample_input_ids)) + label_input_ids model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length]) model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length]) labels["input_ids"][i] = torch.tensor(labels["input_ids"][i][:max_length]) model_inputs["labels"] = labels["input_ids"] return model_inputs processed_datasets = dataset.map( preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=False, desc="Running tokenizer on dataset", ) train_dataset = processed_datasets["train"] eval_dataset = processed_datasets["train"] train_dataloader = 
DataLoader( train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True ) eval_dataloader = DataLoader(eval_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True) def test_preprocess_function(examples): batch_size = len(examples[text_column]) inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]] model_inputs = tokenizer(inputs) # print(model_inputs) for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * ( max_length - len(sample_input_ids) ) + sample_input_ids model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[ "attention_mask" ][i] model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length]) model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length]) return model_inputs test_dataset = dataset["test"].map( test_preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=False, desc="Running tokenizer on dataset", ) test_dataloader = DataLoader(test_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True) next(iter(test_dataloader)) next(iter(train_dataloader)) len(test_dataloader) next(iter(test_dataloader)) # creating model model = AutoModelForCausalLM.from_pretrained(model_name_or_path) model = get_peft_model(model, peft_config) model.print_trainable_parameters() # model # optimizer and lr scheduler optimizer = torch.optim.AdamW(model.parameters(), lr=lr) lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=0, num_training_steps=(len(train_dataloader) * num_epochs), ) # training and evaluation model = model.to(device) for epoch in range(num_epochs): model.train() total_loss = 0 for step, batch in enumerate(tqdm(train_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} # print(batch) # print(batch["input_ids"].shape) outputs = model(**batch) loss = outputs.loss total_loss += loss.detach().float() loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() eval_loss = 0 eval_preds = [] for step, batch in enumerate(tqdm(eval_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = outputs.loss eval_loss += loss.detach().float() eval_preds.extend( tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True) ) eval_epoch_loss = eval_loss / len(eval_dataloader) eval_ppl = torch.exp(eval_epoch_loss) train_epoch_loss = total_loss / len(train_dataloader) train_ppl = torch.exp(train_epoch_loss) print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}") model.eval() i = 33 inputs = tokenizer(f'{text_column} : {dataset["test"][i]["Tweet text"]} Label : ', return_tensors="pt") print(dataset["test"][i]["Tweet text"]) print(inputs) with torch.no_grad(): inputs = {k: v.to(device) for k, v in inputs.items()} outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=10, eos_token_id=3 ) print(outputs) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))<jupyter_output>@TommyHilfiger Dramatic shopping exp. 
ordered 6 jeans same size (30/32) 2 fits / 2 too large / 2 too slim : same brand &gt; different sizing {'input_ids': tensor([[227985, 5484, 915, 2566, 226154, 126015, 5385, 259, 239364, 3396, 70823, 5853, 17, 57247, 1231, 191040, 5025, 7869, 375, 2324, 149349, 12, 415, 122321, 897, 415, 10136, 10021, 897, 415, 10136, 6497, 381, 915, 5025, 51950, 66869, 5955, 272, 20311, 77658, 915, 210]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} tensor([[227985, 5484, 915, 2566, 226154, 126015, 5385, 259, 239364, 3396, 70823, 5853, 17, 57247, 1231, 191040, 5025, 7869, 375, 2324, 149349, 12, 415, 122321, 897, 415, 10136, 10021, 897, 415, 10136, [...]<jupyter_text>You can push model to hub or save model locally. - Option1: Pushing the model to Hugging Face Hub```pythonmodel.push_to_hub( f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}".replace("/", "_"), token = "hf_...")```token (`bool` or `str`, *optional*): `token` is to be used for HTTP Bearer authorization when accessing remote files. If `True`, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). Will default to `True` if `repo_url` is not specified. Or you can get your token from https://huggingface.co/settings/token```- Or save model locally```pythonpeft_model_id = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}".replace("/", "_")model.save_pretrained(peft_model_id)```<jupyter_code># saving model peft_model_id = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}".replace( "/", "_" ) model.save_pretrained(peft_model_id) ckpt = f"{peft_model_id}/adapter_model.bin" !du -h $ckpt from peft import PeftModel, PeftConfig peft_model_id = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}".replace( "/", "_" ) config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path) model = PeftModel.from_pretrained(model, peft_model_id) model.to(device) model.eval() i = 4 inputs = tokenizer(f'{text_column} : {dataset["test"][i]["Tweet text"]} Label : ', return_tensors="pt") print(dataset["test"][i]["Tweet text"]) print(inputs) with torch.no_grad(): inputs = {k: v.to(device) for k, v in inputs.items()} outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=10, eos_token_id=3 ) print(outputs) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))<jupyter_output>@greateranglia Ok thanks... {'input_ids': tensor([[227985, 5484, 915, 2566, 14173, 2960, 29906, 387, 20706, 49337, 1369, 77658, 915, 210]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} tensor([[227985, 5484, 915, 2566, 14173, 2960, 29906, 387, 20706, 49337, 1369, 77658, 915, 210, 1936, 106863, 3]], device='cuda:0') ['Tweet text : @greateranglia Ok thanks... Label : no complaint']
peft/examples/causal_language_modeling/peft_prompt_tuning_clm.ipynb/0
{ "file_path": "peft/examples/causal_language_modeling/peft_prompt_tuning_clm.ipynb", "repo_id": "peft", "token_count": 4787 }
147
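The notebook above mentions pushing the trained prompt-tuning adapter to the Hub as an alternative to saving it locally. A minimal sketch of reloading such an adapter for inference might look like the following; the repo id is a placeholder for wherever you pushed or saved the adapter, and the Auto class resolves the base model from the adapter config.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Placeholder: the Hub repo (or local directory) where the adapter was saved above
peft_model_id = "your-username/twitter_complaints_bigscience_bloomz-560m_PROMPT_TUNING_CAUSAL_LM"

model = AutoPeftModelForCausalLM.from_pretrained(peft_model_id)  # base model is read from the adapter config
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
model.eval()

inputs = tokenizer("Tweet text : @SomeProvider my order never arrived Label : ", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=10, eos_token_id=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```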
import argparse
import os
from collections import Counter
from dataclasses import dataclass
from typing import Dict, Optional

import safetensors
import torch
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

from peft import LoraConfig, get_peft_model, get_peft_model_state_dict, set_peft_model_state_dict


# Default kohya_ss LoRA replacement modules
# https://github.com/kohya-ss/sd-scripts/blob/c924c47f374ac1b6e33e71f82948eb1853e2243f/networks/lora.py#L661
UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel", "Attention"]
UNET_TARGET_REPLACE_MODULE_CONV2D_3X3 = ["ResnetBlock2D", "Downsample2D", "Upsample2D"]
TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPMLP"]
LORA_PREFIX_UNET = "lora_unet"
LORA_PREFIX_TEXT_ENCODER = "lora_te"


@dataclass
class LoRAInfo:
    kohya_key: str
    peft_key: str
    alpha: Optional[float] = None
    rank: Optional[int] = None
    lora_A: Optional[torch.Tensor] = None
    lora_B: Optional[torch.Tensor] = None

    def peft_state_dict(self) -> Dict[str, torch.Tensor]:
        if self.lora_A is None or self.lora_B is None:
            raise ValueError("At least one of lora_A or lora_B is None, they must both be provided")
        return {
            f"{self.peft_key}.lora_A.weight": self.lora_A,
            f"{self.peft_key}.lora_B.weight": self.lora_B,
        }


def construct_peft_loraconfig(info: Dict[str, LoRAInfo]) -> LoraConfig:
    """Constructs LoraConfig from data extracted from kohya checkpoint

    Args:
        info (Dict[str, LoRAInfo]): Information extracted from kohya checkpoint

    Returns:
        LoraConfig: config for constructing LoRA
    """

    # Unpack all ranks and alphas
    ranks = {x[0]: x[1].rank for x in info.items()}
    alphas = {x[0]: x[1].alpha or x[1].rank for x in info.items()}

    # Determine which modules need to be transformed
    target_modules = list(info.keys())

    # Determine most common rank and alpha
    r = Counter(ranks.values()).most_common(1)[0][0]
    lora_alpha = Counter(alphas.values()).most_common(1)[0][0]

    # Determine which modules have different rank and alpha
    rank_pattern = dict(filter(lambda x: x[1] != r, ranks.items()))
    alpha_pattern = dict(filter(lambda x: x[1] != lora_alpha, alphas.items()))

    config = LoraConfig(
        r=r,
        lora_alpha=lora_alpha,
        target_modules=target_modules,
        lora_dropout=0.0,
        bias="none",
        init_lora_weights=False,
        rank_pattern=rank_pattern,
        alpha_pattern=alpha_pattern,
    )

    return config


def combine_peft_state_dict(info: Dict[str, LoRAInfo]) -> Dict[str, torch.Tensor]:
    result = {}
    for key_name, key_info in info.items():
        result[f"base_model.model.{key_name}.lora_A.weight"] = key_info.lora_A
        result[f"base_model.model.{key_name}.lora_B.weight"] = key_info.lora_B
    return result


if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    parser.add_argument("--sd_checkpoint", default=None, type=str, required=True, help="SD checkpoint to use")

    parser.add_argument(
        "--kohya_lora_path", default=None, type=str, required=True, help="Path to kohya_ss trained LoRA"
    )

    parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.")

    parser.add_argument("--half", action="store_true", help="Save weights in half precision.")
    args = parser.parse_args()

    # Load all models that we need to add adapter to
    text_encoder = CLIPTextModel.from_pretrained(args.sd_checkpoint, subfolder="text_encoder")
    unet = UNet2DConditionModel.from_pretrained(args.sd_checkpoint, subfolder="unet")

    # Construct possible mapping from kohya keys to peft keys
    models_keys = {}
    for model, model_key, model_name in [
        (text_encoder, LORA_PREFIX_TEXT_ENCODER, "text_encoder"),
        (unet, LORA_PREFIX_UNET, "unet"),
    ]:
        models_keys.update(
            {
f"{model_key}.{peft_key}".replace(".", "_"): peft_key for peft_key in (x[0] for x in model.named_modules()) } ) # Store conversion info (model_type -> peft_key -> LoRAInfo) lora_info: Dict[str, Dict[str, LoRAInfo]] = { "text_encoder": {}, "unet": {}, } # Open kohya_ss checkpoint with safetensors.safe_open(args.kohya_lora_path, framework="pt", device="cpu") as f: # Extract information about LoRA structure metadata = f.metadata() # Iterate through available info and unpack all the values for key in f.keys(): kohya_key, kohya_type = key.split(".")[:2] # Find which model this key belongs to if kohya_key.startswith(LORA_PREFIX_TEXT_ENCODER): model_type = "text_encoder" elif kohya_key.startswith(LORA_PREFIX_UNET): model_type = "unet" else: raise ValueError(f"Cannot determine model for key: {key}") # Find corresponding peft key if kohya_key not in models_keys: raise ValueError(f"Cannot find corresponding key for diffusers/transformers model: {kohya_key}") peft_key = models_keys[kohya_key] if peft_key not in lora_info[model_type]: lora_info[model_type][peft_key] = LoRAInfo(kohya_key=kohya_key, peft_key=peft_key) if kohya_type == "alpha": lora_info[model_type][peft_key].alpha = f.get_tensor(key).item() elif kohya_type == "lora_down": tensor = f.get_tensor(key) lora_info[model_type][peft_key].lora_A = tensor lora_info[model_type][peft_key].rank = tensor.shape[0] elif kohya_type == "lora_up": tensor = f.get_tensor(key) lora_info[model_type][peft_key].lora_B = f.get_tensor(key) lora_info[model_type][peft_key].rank = tensor.shape[1] else: raise ValueError(f"Unknown weight name in key: {key} - {kohya_type}") # Process each model for model, model_name in [(text_encoder, "text_encoder"), (unet, "unet")]: config = construct_peft_loraconfig(lora_info[model_name]) model = get_peft_model(model, config) keys_peft = list(get_peft_model_state_dict(model).keys()) keys_new = list(combine_peft_state_dict(lora_info[model_name]).keys()) set_peft_model_state_dict(model, combine_peft_state_dict(lora_info[model_name])) if args.half: model.to(torch.float16) # Save model to disk model.save_pretrained(os.path.join(args.dump_path, model_name))
peft/examples/lora_dreambooth/convert_kohya_ss_sd_lora_to_peft.py/0
{ "file_path": "peft/examples/lora_dreambooth/convert_kohya_ss_sd_lora_to_peft.py", "repo_id": "peft", "token_count": 2947 }
148
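A hedged usage sketch for the conversion script above: it might be invoked as `python convert_kohya_ss_sd_lora_to_peft.py --sd_checkpoint <base SD repo or path> --kohya_lora_path lora.safetensors --dump_path ./converted --half` (all paths are placeholders). Since the script writes one PEFT adapter folder per sub-model under `--dump_path`, loading the result back into a pipeline could look roughly like this; the base checkpoint name is an assumption.

```python
from diffusers import StableDiffusionPipeline
from peft import PeftModel

base_checkpoint = "runwayml/stable-diffusion-v1-5"  # assumption: whatever was passed as --sd_checkpoint
pipe = StableDiffusionPipeline.from_pretrained(base_checkpoint)

# The script saves one PEFT adapter folder per sub-model under --dump_path
pipe.text_encoder = PeftModel.from_pretrained(pipe.text_encoder, "./converted/text_encoder")
pipe.unet = PeftModel.from_pretrained(pipe.unet, "./converted/unet")

image = pipe("a photo in the style of the converted LoRA", num_inference_steps=30).images[0]
image.save("sample.png")
```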
# flake8: noqa # There's no way to ignore "F401 '...' imported but unused" warnings in this # module, but to preserve other warnings. So, don't check this module at all. # coding=utf-8 # Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. __version__ = "0.9.1.dev0" from .auto import ( AutoPeftModel, AutoPeftModelForCausalLM, AutoPeftModelForSequenceClassification, AutoPeftModelForSeq2SeqLM, AutoPeftModelForTokenClassification, AutoPeftModelForQuestionAnswering, AutoPeftModelForFeatureExtraction, ) from .mapping import ( MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model, inject_adapter_in_model, ) from .mixed_model import PeftMixedModel from .peft_model import ( PeftModel, PeftModelForCausalLM, PeftModelForSeq2SeqLM, PeftModelForSequenceClassification, PeftModelForTokenClassification, PeftModelForQuestionAnswering, PeftModelForFeatureExtraction, ) from .tuners import ( AdaptionPromptConfig, AdaptionPromptModel, LoraConfig, LoftQConfig, LoraModel, LoHaConfig, LoHaModel, LoKrConfig, LoKrModel, IA3Config, IA3Model, AdaLoraConfig, AdaLoraModel, PrefixEncoder, PrefixTuningConfig, PromptEmbedding, PromptEncoder, PromptEncoderConfig, PromptEncoderReparameterizationType, PromptTuningConfig, PromptTuningInit, MultitaskPromptTuningConfig, MultitaskPromptTuningInit, OFTConfig, OFTModel, PolyConfig, PolyModel, ) from .utils import ( TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING, PeftType, TaskType, bloom_model_postprocess_past_key_value, get_peft_model_state_dict, prepare_model_for_int8_training, prepare_model_for_kbit_training, replace_lora_weights_loftq, set_peft_model_state_dict, shift_tokens_right, load_peft_weights, cast_mixed_precision_params, ) from .config import PeftConfig, PromptLearningConfig
peft/src/peft/__init__.py/0
{ "file_path": "peft/src/peft/__init__.py", "repo_id": "peft", "token_count": 970 }
149
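The module above only re-exports the public API. As a quick orientation, a minimal LoRA setup using those exports might look like this sketch; the base checkpoint is just a small example model.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-small")  # small example checkpoint
config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=32, lora_dropout=0.1)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # only the injected LoRA matrices are trainable
```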
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from typing import Any, Optional, Set, Tuple, Union import torch import torch.nn as nn import torch.nn.functional as F from peft.tuners.lycoris_utils import LycorisLayer class LoKrLayer(nn.Module, LycorisLayer): # All names of layers that may contain adapter weights adapter_layer_names = ( "lokr_w1", "lokr_w1_a", "lokr_w1_b", "lokr_w2", "lokr_w2_a", "lokr_w2_b", "lokr_t2", ) # other_param_names is defined on parent class def __init__(self, base_layer: nn.Module) -> None: super().__init__() LycorisLayer.__init__(self, base_layer) # LoKr info self.lokr_w1 = nn.ParameterDict({}) self.lokr_w1_a = nn.ParameterDict({}) self.lokr_w1_b = nn.ParameterDict({}) self.lokr_w2 = nn.ParameterDict({}) self.lokr_w2_a = nn.ParameterDict({}) self.lokr_w2_b = nn.ParameterDict({}) self.lokr_t2 = nn.ParameterDict({}) @property def _available_adapters(self) -> Set[str]: return { *self.lokr_w1, *self.lokr_w1_a, *self.lokr_w1_b, *self.lokr_w2, *self.lokr_w2_a, *self.lokr_w2_b, *self.lokr_t2, } def create_adapter_parameters( self, adapter_name: str, r: int, shape, use_w1: bool, use_w2: bool, use_effective_conv2d: bool, ): if use_w1: self.lokr_w1[adapter_name] = nn.Parameter(torch.empty(shape[0][0], shape[1][0])) else: self.lokr_w1_a[adapter_name] = nn.Parameter(torch.empty(shape[0][0], r)) self.lokr_w1_b[adapter_name] = nn.Parameter(torch.empty(r, shape[1][0])) if len(shape) == 4: # Conv2d if use_w2: self.lokr_w2[adapter_name] = nn.Parameter(torch.empty(shape[0][1], shape[1][1], *shape[2:])) elif use_effective_conv2d: self.lokr_t2[adapter_name] = nn.Parameter(torch.empty(r, r, shape[2], shape[3])) self.lokr_w2_a[adapter_name] = nn.Parameter(torch.empty(r, shape[0][1])) # b, 1-mode self.lokr_w2_b[adapter_name] = nn.Parameter(torch.empty(r, shape[1][1])) # d, 2-mode else: self.lokr_w2_a[adapter_name] = nn.Parameter(torch.empty(shape[0][1], r)) self.lokr_w2_b[adapter_name] = nn.Parameter(torch.empty(r, shape[1][1] * shape[2] * shape[3])) else: # Linear if use_w2: self.lokr_w2[adapter_name] = nn.Parameter(torch.empty(shape[0][1], shape[1][1])) else: self.lokr_w2_a[adapter_name] = nn.Parameter(torch.empty(shape[0][1], r)) self.lokr_w2_b[adapter_name] = nn.Parameter(torch.empty(r, shape[1][1])) def reset_adapter_parameters(self, adapter_name: str): if adapter_name in self.lokr_w1: nn.init.zeros_(self.lokr_w1[adapter_name]) else: nn.init.zeros_(self.lokr_w1_a[adapter_name]) nn.init.kaiming_uniform_(self.lokr_w1_b[adapter_name], a=math.sqrt(5)) if adapter_name in self.lokr_w2: nn.init.kaiming_uniform_(self.lokr_w2[adapter_name], a=math.sqrt(5)) else: nn.init.kaiming_uniform_(self.lokr_w2_a[adapter_name], a=math.sqrt(5)) nn.init.kaiming_uniform_(self.lokr_w2_b[adapter_name], a=math.sqrt(5)) if adapter_name in self.lokr_t2: nn.init.kaiming_uniform_(self.lokr_t2[adapter_name], a=math.sqrt(5)) def reset_adapter_parameters_random(self, adapter_name: str): if adapter_name in self.lokr_w1: 
nn.init.kaiming_uniform_(self.lokr_w1[adapter_name], a=math.sqrt(5)) else: nn.init.kaiming_uniform_(self.lokr_w1_a[adapter_name], a=math.sqrt(5)) nn.init.kaiming_uniform_(self.lokr_w1_b[adapter_name], a=math.sqrt(5)) if adapter_name in self.lokr_w2: nn.init.kaiming_uniform_(self.lokr_w2[adapter_name], a=math.sqrt(5)) else: nn.init.kaiming_uniform_(self.lokr_w2_a[adapter_name], a=math.sqrt(5)) nn.init.kaiming_uniform_(self.lokr_w2_b[adapter_name], a=math.sqrt(5)) if adapter_name in self.lokr_t2: nn.init.kaiming_uniform_(self.lokr_t2[adapter_name], a=math.sqrt(5)) def update_layer( self, adapter_name: str, r: int, alpha: float, rank_dropout: float, module_dropout: float, init_weights: bool, use_effective_conv2d: bool, decompose_both: bool, decompose_factor: int, **kwargs, ) -> None: """Internal function to create lokr adapter Args: adapter_name (`str`): Name for the adapter to add. r (`int`): Rank for the added adapter. alpha (`float`): Alpha for the added adapter. rank_dropout (`float`): The dropout probability for rank dimension during training module_dropout (`float`): The dropout probability for disabling adapter during training. init_weights (`bool`): Whether to initialize adapter weights. use_effective_conv2d (`bool`): Use parameter effective decomposition for Conv2d with ksize > 1. decompose_both (`bool`): Perform rank decomposition of left kronecker product matrix. decompose_factor (`int`): Kronecker product decomposition factor. """ if r <= 0: raise ValueError(f"`r` should be a positive integer value but the value passed is {r}") self.r[adapter_name] = r self.alpha[adapter_name] = alpha self.scaling[adapter_name] = alpha / r self.rank_dropout[adapter_name] = rank_dropout self.module_dropout[adapter_name] = module_dropout base_layer = self.get_base_layer() # Determine shape of LoKr weights if isinstance(base_layer, nn.Linear): in_dim, out_dim = base_layer.in_features, base_layer.out_features in_m, in_n = factorization(in_dim, decompose_factor) out_l, out_k = factorization(out_dim, decompose_factor) shape = ((out_l, out_k), (in_m, in_n)) # ((a, b), (c, d)), out_dim = a*c, in_dim = b*d use_w1 = not (decompose_both and r < max(shape[0][0], shape[1][0]) / 2) use_w2 = not (r < max(shape[0][1], shape[1][1]) / 2) use_effective_conv2d = False elif isinstance(base_layer, nn.Conv2d): in_dim, out_dim = base_layer.in_channels, base_layer.out_channels k_size = base_layer.kernel_size in_m, in_n = factorization(in_dim, decompose_factor) out_l, out_k = factorization(out_dim, decompose_factor) shape = ((out_l, out_k), (in_m, in_n), *k_size) # ((a, b), (c, d), *k_size) use_w1 = not (decompose_both and r < max(shape[0][0], shape[1][0]) / 2) use_w2 = r >= max(shape[0][1], shape[1][1]) / 2 use_effective_conv2d = use_effective_conv2d and base_layer.kernel_size != (1, 1) else: raise TypeError(f"LoKr is not implemented for base layers of type {type(base_layer).__name__}") # Create weights with provided shape self.create_adapter_parameters(adapter_name, r, shape, use_w1, use_w2, use_effective_conv2d) # Initialize weights if init_weights: self.reset_adapter_parameters(adapter_name) else: self.reset_adapter_parameters_random(adapter_name) # Move new weights to device weight = getattr(self.get_base_layer(), "weight", None) if weight is not None: # the layer is already completely initialized, this is an update if weight.dtype.is_floating_point or weight.dtype.is_complex: self.to(weight.device, dtype=weight.dtype) else: self.to(weight.device) self.set_adapter(self.active_adapters) def get_delta_weight(self, 
adapter_name: str) -> torch.Tensor: # https://github.com/KohakuBlueleaf/LyCORIS/blob/e4259b870d3354a9615a96be61cb5d07455c58ea/lycoris/modules/lokr.py#L224 if adapter_name in self.lokr_w1: w1 = self.lokr_w1[adapter_name] else: w1 = self.lokr_w1_a[adapter_name] @ self.lokr_w1_b[adapter_name] if adapter_name in self.lokr_w2: w2 = self.lokr_w2[adapter_name] elif adapter_name in self.lokr_t2: w2 = make_weight_cp(self.lokr_t2[adapter_name], self.lokr_w2_a[adapter_name], self.lokr_w2_b[adapter_name]) else: w2 = self.lokr_w2_a[adapter_name] @ self.lokr_w2_b[adapter_name] # Make weights with Kronecker product weight = make_kron(w1, w2) weight = weight.reshape(self.get_base_layer().weight.shape) # Perform rank dropout during training - drop rows of addition weights rank_dropout = self.rank_dropout[adapter_name] if self.training and rank_dropout: drop = (torch.rand(weight.size(0)) > rank_dropout).float() drop = drop.view(-1, *[1] * len(weight.shape[1:])).to(weight.device) drop /= drop.mean() weight *= drop return weight def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor: previous_dtype = x.dtype if self.disable_adapters: if self.merged: self.unmerge() result = self.base_layer(x, *args, **kwargs) elif self.merged: result = self.base_layer(x, *args, **kwargs) else: result = self.base_layer(x, *args, **kwargs) # Execute all the adapters for active_adapter in self.active_adapters: if active_adapter not in self._available_adapters: continue module_dropout = self.module_dropout[active_adapter] # Modify current execution weights if (not self.training) or (self.training and torch.rand(1) > module_dropout): result = result + self._get_delta_activations(active_adapter, x, *args, **kwargs) result = result.to(previous_dtype) return result class Linear(LoKrLayer): """LoKr implemented in Linear layer""" def __init__( self, base_layer: nn.Module, device: Optional[Union[str, torch.device]] = None, dtype: Optional[torch.dtype] = None, adapter_name: str = "default", r: int = 0, alpha: float = 0.0, rank_dropout: float = 0.0, module_dropout: float = 0.0, init_weights: bool = True, **kwargs, ): super().__init__(base_layer) # Create adapter and set it active self._active_adapter = adapter_name self.update_layer(adapter_name, r, alpha, rank_dropout, module_dropout, init_weights, **kwargs) def _get_delta_activations( self, adapter_name: str, input: torch.Tensor, *args: Any, **kwargs: Any ) -> torch.Tensor: delta_weight = self.get_delta_weight(adapter_name) # don't add bias here, because the bias is already included in the output of the base_layer return F.linear(input, delta_weight) def __repr__(self) -> str: rep = super().__repr__() return "lokr." 
+ rep class Conv2d(LoKrLayer): """LoKr implemented in Conv2d layer""" def __init__( self, base_layer: nn.Module, device: Optional[Union[str, torch.device]] = None, dtype: Optional[torch.dtype] = None, adapter_name: str = "default", r: int = 0, alpha: float = 0.0, rank_dropout: float = 0.0, module_dropout: float = 0.0, use_effective_conv2d: bool = False, init_weights: bool = True, **kwargs, ): super().__init__(base_layer) # Create adapter and set it active self._active_adapter = adapter_name self.update_layer( adapter_name, r, alpha, rank_dropout, module_dropout, init_weights, use_effective_conv2d, **kwargs ) def _get_delta_activations( self, adapter_name: str, input: torch.Tensor, *args: Any, **kwargs: Any ) -> torch.Tensor: delta_weight = self.get_delta_weight(adapter_name) # don't add bias here, because the bias is already included in the output of the base_layer base_layer = self.get_base_layer() return F.conv2d( input, delta_weight, stride=base_layer.stride, padding=base_layer.padding, dilation=base_layer.dilation, groups=base_layer.groups, ) def __repr__(self) -> str: rep = super().__repr__() return "lokr." + rep # Below code is a direct copy from https://github.com/KohakuBlueleaf/LyCORIS/blob/eb460098187f752a5d66406d3affade6f0a07ece/lycoris/modules/lokr.py#L11 def factorization(dimension: int, factor: int = -1) -> Tuple[int, int]: """Factorizes the provided number into the product of two numbers Args: dimension (`int`): The number that needs to be factorized. factor (`int`, optional): Factorization divider. The algorithm will try to output two numbers, one of each will be as close to the factor as possible. If -1 is provided, the decomposition algorithm would try to search dividers near the square root of the dimension. Defaults to -1. Returns: Tuple[`int`, `int`]: A tuple of two numbers, whose product is equal to the provided number. The first number is always less than or equal to the second. Example: ```py >>> factorization(256, factor=-1) (16, 16) >>> factorization(128, factor=-1) (8, 16) >>> factorization(127, factor=-1) (1, 127) >>> factorization(128, factor=4) (4, 32) ``` """ if factor > 0 and (dimension % factor) == 0: m = factor n = dimension // factor return m, n if factor == -1: factor = dimension m, n = 1, dimension length = m + n while m < n: new_m = m + 1 while dimension % new_m != 0: new_m += 1 new_n = dimension // new_m if new_m + new_n > length or new_m > factor: break else: m, n = new_m, new_n if m > n: n, m = m, n return m, n def make_weight_cp(t, wa, wb): rebuild2 = torch.einsum("i j k l, i p, j r -> p r k l", t, wa, wb) # [c, d, k1, k2] return rebuild2 def make_kron(w1, w2, scale=1.0): if len(w2.shape) == 4: w1 = w1.unsqueeze(2).unsqueeze(2) w2 = w2.contiguous() rebuild = torch.kron(w1, w2) return rebuild * scale
peft/src/peft/tuners/lokr/layer.py/0
{ "file_path": "peft/src/peft/tuners/lokr/layer.py", "repo_id": "peft", "token_count": 7536 }
150
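To make the Kronecker-product shape arithmetic in `get_delta_weight` concrete, here is a small standalone sketch for a Linear layer. The dimensions are illustrative and mirror what `factorization` and `make_kron` in the file above compute.

```python
import torch

out_dim, in_dim, r = 128, 64, 8
out_l, out_k = 8, 16  # factorization(128) -> (8, 16)
in_m, in_n = 8, 8     # factorization(64)  -> (8, 8)

w1 = torch.randn(out_l, in_m)                      # lokr_w1 (or lokr_w1_a @ lokr_w1_b when decompose_both is set)
w2 = torch.randn(out_k, r) @ torch.randn(r, in_n)  # lokr_w2_a @ lokr_w2_b, the low-rank right factor

delta_w = torch.kron(w1, w2)  # same result as make_kron(w1, w2)
print(delta_w.shape)          # torch.Size([128, 64]) == (out_dim, in_dim)
```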
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch from peft.tuners.prompt_tuning import PromptEmbedding from peft.utils import TaskType from .config import MultitaskPromptTuningConfig, MultitaskPromptTuningInit # This code is adapted for the paper: https://arxiv.org/abs/2303.02861 and # constitutes the work done at MIT-IBM Watson Research Lab. class MultitaskPromptEmbedding(PromptEmbedding): def __init__(self, config: MultitaskPromptTuningConfig, word_embeddings): super().__init__(config, word_embeddings) self.num_tasks = config.num_tasks self.num_ranks = config.num_ranks self.num_virtual_tokens = config.num_virtual_tokens self.num_transformer_submodules = config.num_transformer_submodules if self.num_transformer_submodules is None: self.num_transformer_submodules = 2 if config.task_type == TaskType.SEQ_2_SEQ_LM else 1 self.token_dim = config.token_dim total_virtual_tokens = self.num_virtual_tokens * self.num_transformer_submodules self.prefix_task_cols = torch.nn.Parameter( torch.normal( mean=0, std=0.02, size=(self.num_tasks, total_virtual_tokens, self.num_ranks), ) ) self.prefix_task_rows = torch.nn.Parameter( torch.normal( mean=0, std=0.02, size=(self.num_tasks, self.num_ranks, self.token_dim), ) ) if config.prompt_tuning_init in [ MultitaskPromptTuningInit.AVERAGE_SOURCE_TASKS, MultitaskPromptTuningInit.EXACT_SOURCE_TASK, MultitaskPromptTuningInit.ONLY_SOURCE_SHARED, ]: if config.prompt_tuning_init_state_dict_path is None: raise ValueError( f"prompt_tuning_init_state_dict_path needs to be specified with {config.prompt_tuning_init} " "init method" ) # TODO: There should be an option for safetensors state_dict: dict = torch.load( config.prompt_tuning_init_state_dict_path, map_location=word_embeddings.weight.device, ) if config.prompt_tuning_init in [ MultitaskPromptTuningInit.AVERAGE_SOURCE_TASKS, MultitaskPromptTuningInit.EXACT_SOURCE_TASK, ]: prefix_task_cols_: torch.Tensor = state_dict["prefix_task_cols"] prefix_task_rows_: torch.Tensor = state_dict["prefix_task_rows"] if config.prompt_tuning_init == MultitaskPromptTuningInit.AVERAGE_SOURCE_TASKS: prefix_task_cols_ = prefix_task_cols_.mean(0, keepdim=True) prefix_task_rows_ = prefix_task_rows_.mean(0, keepdim=True) elif config.prompt_tuning_init == MultitaskPromptTuningInit.EXACT_SOURCE_TASK: prefix_task_cols_ = prefix_task_cols_[config.prompt_tuning_init_task, ...].unsqueeze(0) prefix_task_rows_ = prefix_task_rows_[config.prompt_tuning_init_task, ...].unsqueeze(0) state_dict = { "embedding.weight": state_dict["prompt_embeddings"], "prefix_task_cols": prefix_task_cols_, "prefix_task_rows": prefix_task_rows_, } self.load_state_dict(state_dict, strict=True) elif config.prompt_tuning_init == MultitaskPromptTuningInit.ONLY_SOURCE_SHARED: state_dict = { "embedding.weight": state_dict["prompt_embeddings"], } self.load_state_dict(state_dict, strict=False) def forward(self, indices, task_ids): if task_ids is None: raise ValueError("task_ids cannot be None") prompt_embeddings = 
self.embedding(indices) task_cols = torch.index_select(self.prefix_task_cols, 0, task_ids) task_rows = torch.index_select(self.prefix_task_rows, 0, task_ids) task_prompts = torch.matmul(task_cols, task_rows) prompt_embeddings *= task_prompts return prompt_embeddings
peft/src/peft/tuners/multitask_prompt_tuning/model.py/0
{ "file_path": "peft/src/peft/tuners/multitask_prompt_tuning/model.py", "repo_id": "peft", "token_count": 2121 }
151
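The `forward` above composes a shared prompt with a low-rank, per-task modulation. Below is a small standalone sketch of that composition, using plain tensors and illustrative sizes instead of the module's parameters.

```python
import torch

num_tasks, num_ranks, total_virtual_tokens, token_dim = 4, 2, 8, 16

shared_prompt = torch.randn(total_virtual_tokens, token_dim)                # stands in for self.embedding(indices)
prefix_task_cols = torch.randn(num_tasks, total_virtual_tokens, num_ranks)
prefix_task_rows = torch.randn(num_tasks, num_ranks, token_dim)

task_ids = torch.tensor([0, 3])                                # a batch of two samples from two tasks
task_cols = torch.index_select(prefix_task_cols, 0, task_ids)  # (2, tokens, ranks)
task_rows = torch.index_select(prefix_task_rows, 0, task_ids)  # (2, ranks, token_dim)
task_prompts = torch.matmul(task_cols, task_rows)              # (2, tokens, token_dim)

prompts = shared_prompt * task_prompts                         # element-wise task modulation of the shared prompt
print(prompts.shape)                                           # torch.Size([2, 8, 16])
```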
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import tempfile import unittest import torch from peft import ( AutoPeftModel, AutoPeftModelForCausalLM, AutoPeftModelForFeatureExtraction, AutoPeftModelForQuestionAnswering, AutoPeftModelForSeq2SeqLM, AutoPeftModelForSequenceClassification, AutoPeftModelForTokenClassification, PeftModel, PeftModelForCausalLM, PeftModelForFeatureExtraction, PeftModelForQuestionAnswering, PeftModelForSeq2SeqLM, PeftModelForSequenceClassification, PeftModelForTokenClassification, ) from peft.utils import infer_device class PeftAutoModelTester(unittest.TestCase): dtype = torch.float16 if infer_device() == "mps" else torch.bfloat16 def test_peft_causal_lm(self): model_id = "peft-internal-testing/tiny-OPTForCausalLM-lora" model = AutoPeftModelForCausalLM.from_pretrained(model_id) assert isinstance(model, PeftModelForCausalLM) with tempfile.TemporaryDirectory() as tmp_dirname: model.save_pretrained(tmp_dirname) model = AutoPeftModelForCausalLM.from_pretrained(tmp_dirname) assert isinstance(model, PeftModelForCausalLM) # check if kwargs are passed correctly model = AutoPeftModelForCausalLM.from_pretrained(model_id, torch_dtype=self.dtype) assert isinstance(model, PeftModelForCausalLM) assert model.base_model.lm_head.weight.dtype == self.dtype adapter_name = "default" is_trainable = False # This should work _ = AutoPeftModelForCausalLM.from_pretrained(model_id, adapter_name, is_trainable, torch_dtype=self.dtype) def test_peft_causal_lm_extended_vocab(self): model_id = "peft-internal-testing/tiny-random-OPTForCausalLM-extended-vocab" model = AutoPeftModelForCausalLM.from_pretrained(model_id) assert isinstance(model, PeftModelForCausalLM) # check if kwargs are passed correctly model = AutoPeftModelForCausalLM.from_pretrained(model_id, torch_dtype=self.dtype) assert isinstance(model, PeftModelForCausalLM) assert model.base_model.lm_head.weight.dtype == self.dtype adapter_name = "default" is_trainable = False # This should work _ = AutoPeftModelForCausalLM.from_pretrained(model_id, adapter_name, is_trainable, torch_dtype=self.dtype) def test_peft_seq2seq_lm(self): model_id = "peft-internal-testing/tiny_T5ForSeq2SeqLM-lora" model = AutoPeftModelForSeq2SeqLM.from_pretrained(model_id) assert isinstance(model, PeftModelForSeq2SeqLM) with tempfile.TemporaryDirectory() as tmp_dirname: model.save_pretrained(tmp_dirname) model = AutoPeftModelForSeq2SeqLM.from_pretrained(tmp_dirname) assert isinstance(model, PeftModelForSeq2SeqLM) # check if kwargs are passed correctly model = AutoPeftModelForSeq2SeqLM.from_pretrained(model_id, torch_dtype=self.dtype) assert isinstance(model, PeftModelForSeq2SeqLM) assert model.base_model.lm_head.weight.dtype == self.dtype adapter_name = "default" is_trainable = False # This should work _ = AutoPeftModelForSeq2SeqLM.from_pretrained(model_id, adapter_name, is_trainable, torch_dtype=self.dtype) def test_peft_sequence_cls(self): model_id = "peft-internal-testing/tiny_OPTForSequenceClassification-lora" model = 
AutoPeftModelForSequenceClassification.from_pretrained(model_id) assert isinstance(model, PeftModelForSequenceClassification) with tempfile.TemporaryDirectory() as tmp_dirname: model.save_pretrained(tmp_dirname) model = AutoPeftModelForSequenceClassification.from_pretrained(tmp_dirname) assert isinstance(model, PeftModelForSequenceClassification) # check if kwargs are passed correctly model = AutoPeftModelForSequenceClassification.from_pretrained(model_id, torch_dtype=self.dtype) assert isinstance(model, PeftModelForSequenceClassification) assert model.score.original_module.weight.dtype == self.dtype adapter_name = "default" is_trainable = False # This should work _ = AutoPeftModelForSequenceClassification.from_pretrained( model_id, adapter_name, is_trainable, torch_dtype=self.dtype ) def test_peft_token_classification(self): model_id = "peft-internal-testing/tiny_GPT2ForTokenClassification-lora" model = AutoPeftModelForTokenClassification.from_pretrained(model_id) assert isinstance(model, PeftModelForTokenClassification) with tempfile.TemporaryDirectory() as tmp_dirname: model.save_pretrained(tmp_dirname) model = AutoPeftModelForTokenClassification.from_pretrained(tmp_dirname) assert isinstance(model, PeftModelForTokenClassification) # check if kwargs are passed correctly model = AutoPeftModelForTokenClassification.from_pretrained(model_id, torch_dtype=self.dtype) assert isinstance(model, PeftModelForTokenClassification) assert model.base_model.classifier.original_module.weight.dtype == self.dtype adapter_name = "default" is_trainable = False # This should work _ = AutoPeftModelForTokenClassification.from_pretrained( model_id, adapter_name, is_trainable, torch_dtype=self.dtype ) def test_peft_question_answering(self): model_id = "peft-internal-testing/tiny_OPTForQuestionAnswering-lora" model = AutoPeftModelForQuestionAnswering.from_pretrained(model_id) assert isinstance(model, PeftModelForQuestionAnswering) with tempfile.TemporaryDirectory() as tmp_dirname: model.save_pretrained(tmp_dirname) model = AutoPeftModelForQuestionAnswering.from_pretrained(tmp_dirname) assert isinstance(model, PeftModelForQuestionAnswering) # check if kwargs are passed correctly model = AutoPeftModelForQuestionAnswering.from_pretrained(model_id, torch_dtype=self.dtype) assert isinstance(model, PeftModelForQuestionAnswering) assert model.base_model.qa_outputs.original_module.weight.dtype == self.dtype adapter_name = "default" is_trainable = False # This should work _ = AutoPeftModelForQuestionAnswering.from_pretrained( model_id, adapter_name, is_trainable, torch_dtype=self.dtype ) def test_peft_feature_extraction(self): model_id = "peft-internal-testing/tiny_OPTForFeatureExtraction-lora" model = AutoPeftModelForFeatureExtraction.from_pretrained(model_id) assert isinstance(model, PeftModelForFeatureExtraction) with tempfile.TemporaryDirectory() as tmp_dirname: model.save_pretrained(tmp_dirname) model = AutoPeftModelForFeatureExtraction.from_pretrained(tmp_dirname) assert isinstance(model, PeftModelForFeatureExtraction) # check if kwargs are passed correctly model = AutoPeftModelForFeatureExtraction.from_pretrained(model_id, torch_dtype=self.dtype) assert isinstance(model, PeftModelForFeatureExtraction) assert model.base_model.model.decoder.embed_tokens.weight.dtype == self.dtype adapter_name = "default" is_trainable = False # This should work _ = AutoPeftModelForFeatureExtraction.from_pretrained( model_id, adapter_name, is_trainable, torch_dtype=self.dtype ) def test_peft_whisper(self): model_id = 
"peft-internal-testing/tiny_WhisperForConditionalGeneration-lora" model = AutoPeftModel.from_pretrained(model_id) assert isinstance(model, PeftModel) with tempfile.TemporaryDirectory() as tmp_dirname: model.save_pretrained(tmp_dirname) model = AutoPeftModel.from_pretrained(tmp_dirname) assert isinstance(model, PeftModel) # check if kwargs are passed correctly model = AutoPeftModel.from_pretrained(model_id, torch_dtype=self.dtype) assert isinstance(model, PeftModel) assert model.base_model.model.model.encoder.embed_positions.weight.dtype == self.dtype adapter_name = "default" is_trainable = False # This should work _ = AutoPeftModel.from_pretrained(model_id, adapter_name, is_trainable, torch_dtype=self.dtype)
peft/tests/test_auto.py/0
{ "file_path": "peft/tests/test_auto.py", "repo_id": "peft", "token_count": 3615 }
152
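Outside the test harness, the pattern these tests exercise might look like the following; the adapter repo id is hypothetical.

```python
import torch
from peft import AutoPeftModelForCausalLM

adapter_repo = "your-username/opt-350m-lora"  # assumption: an adapter you trained and pushed
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_repo,
    torch_dtype=torch.bfloat16,  # extra kwargs are forwarded to the base model, as the tests assert
)
model.eval()
```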
# Copyright 2023-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import asdict, replace
from unittest import TestCase

import numpy as np
from diffusers import StableDiffusionPipeline
from parameterized import parameterized

from peft import LoHaConfig, LoKrConfig, LoraConfig, OFTConfig, get_peft_model

from .testing_common import ClassInstantier, PeftCommonTester
from .testing_utils import temp_seed


PEFT_DIFFUSERS_SD_MODELS_TO_TEST = ["hf-internal-testing/tiny-stable-diffusion-torch"]

CONFIG_TESTING_KWARGS = (
    {
        "text_encoder": {
            "r": 8,
            "lora_alpha": 32,
            "target_modules": ["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
            "lora_dropout": 0.0,
            "bias": "none",
        },
        "unet": {
            "r": 8,
            "lora_alpha": 32,
            "target_modules": ["proj_in", "proj_out", "to_k", "to_q", "to_v", "to_out.0", "ff.net.0.proj", "ff.net.2"],
            "lora_dropout": 0.0,
            "bias": "none",
        },
    },
    {
        "text_encoder": {
            "r": 8,
            "alpha": 32,
            "target_modules": ["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
            "rank_dropout": 0.0,
            "module_dropout": 0.0,
        },
        "unet": {
            "r": 8,
            "alpha": 32,
            "target_modules": ["proj_in", "proj_out", "to_k", "to_q", "to_v", "to_out.0", "ff.net.0.proj", "ff.net.2"],
            "rank_dropout": 0.0,
            "module_dropout": 0.0,
        },
    },
    {
        "text_encoder": {
            "r": 8,
            "target_modules": ["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
            "module_dropout": 0.0,
        },
        "unet": {
            "r": 8,
            "target_modules": ["proj_in", "proj_out", "to_k", "to_q", "to_v", "to_out.0", "ff.net.0.proj", "ff.net.2"],
            "module_dropout": 0.0,
        },
    },
)

CLASSES_MAPPING = {
    "lora": (LoraConfig, CONFIG_TESTING_KWARGS[0]),
    "loha": (LoHaConfig, CONFIG_TESTING_KWARGS[1]),
    "lokr": (LoKrConfig, CONFIG_TESTING_KWARGS[1]),
    "oft": (OFTConfig, CONFIG_TESTING_KWARGS[2]),
}


PeftStableDiffusionTestConfigManager = ClassInstantier(CLASSES_MAPPING)


class StableDiffusionModelTester(TestCase, PeftCommonTester):
    r"""
    Tests that diffusers StableDiffusion model works with PEFT as expected.
""" transformers_class = StableDiffusionPipeline def instantiate_sd_peft(self, model_id, config_cls, config_kwargs): # Instantiate StableDiffusionPipeline model = self.transformers_class.from_pretrained(model_id) config_kwargs = config_kwargs.copy() text_encoder_kwargs = config_kwargs.pop("text_encoder") unet_kwargs = config_kwargs.pop("unet") # the remaining config kwargs should be applied to both configs for key, val in config_kwargs.items(): text_encoder_kwargs[key] = val unet_kwargs[key] = val # Instantiate text_encoder adapter config_text_encoder = config_cls(**text_encoder_kwargs) model.text_encoder = get_peft_model(model.text_encoder, config_text_encoder) # Instantiate unet adapter config_unet = config_cls(**unet_kwargs) model.unet = get_peft_model(model.unet, config_unet) # Move model to device model = model.to(self.torch_device) return model def prepare_inputs_for_testing(self): return { "prompt": "a high quality digital photo of a cute corgi", "num_inference_steps": 20, } @parameterized.expand( PeftStableDiffusionTestConfigManager.get_grid_parameters( { "model_ids": PEFT_DIFFUSERS_SD_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, "loha_kwargs": {"init_weights": [False]}, "oft_kwargs": {"init_weights": [False]}, }, ) ) def test_merge_layers(self, test_name, model_id, config_cls, config_kwargs): # Instantiate model & adapters model = self.instantiate_sd_peft(model_id, config_cls, config_kwargs) # Generate output for peft modified StableDiffusion dummy_input = self.prepare_inputs_for_testing() with temp_seed(seed=42): peft_output = np.array(model(**dummy_input).images[0]).astype(np.float32) # Merge adapter and model if config_cls not in [LoHaConfig, OFTConfig]: # TODO: Merging the text_encoder is leading to issues on CPU with PyTorch 2.1 model.text_encoder = model.text_encoder.merge_and_unload() model.unet = model.unet.merge_and_unload() # Generate output for peft merged StableDiffusion with temp_seed(seed=42): merged_output = np.array(model(**dummy_input).images[0]).astype(np.float32) # Images are in uint8 drange, so use large atol assert np.allclose(peft_output, merged_output, atol=1.0) @parameterized.expand( PeftStableDiffusionTestConfigManager.get_grid_parameters( { "model_ids": PEFT_DIFFUSERS_SD_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, "loha_kwargs": {"init_weights": [False]}, "oft_kwargs": {"init_weights": [False]}, }, ) ) def test_merge_layers_safe_merge(self, test_name, model_id, config_cls, config_kwargs): # Instantiate model & adapters model = self.instantiate_sd_peft(model_id, config_cls, config_kwargs) # Generate output for peft modified StableDiffusion dummy_input = self.prepare_inputs_for_testing() with temp_seed(seed=42): peft_output = np.array(model(**dummy_input).images[0]).astype(np.float32) # Merge adapter and model if config_cls not in [LoHaConfig, OFTConfig]: # TODO: Merging the text_encoder is leading to issues on CPU with PyTorch 2.1 model.text_encoder = model.text_encoder.merge_and_unload(safe_merge=True) model.unet = model.unet.merge_and_unload(safe_merge=True) # Generate output for peft merged StableDiffusion with temp_seed(seed=42): merged_output = np.array(model(**dummy_input).images[0]).astype(np.float32) # Images are in uint8 drange, so use large atol assert np.allclose(peft_output, merged_output, atol=1.0) @parameterized.expand( PeftStableDiffusionTestConfigManager.get_grid_parameters( { "model_ids": PEFT_DIFFUSERS_SD_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, }, filter_params_func=lambda tests: 
[x for x in tests if all(s not in x[0] for s in ["loha", "lokr", "oft"])], ) ) def test_add_weighted_adapter_base_unchanged(self, test_name, model_id, config_cls, config_kwargs): # Instantiate model & adapters model = self.instantiate_sd_peft(model_id, config_cls, config_kwargs) # Get current available adapter config text_encoder_adapter_name = next(iter(model.text_encoder.peft_config.keys())) unet_adapter_name = next(iter(model.unet.peft_config.keys())) text_encoder_adapter_config = replace(model.text_encoder.peft_config[text_encoder_adapter_name]) unet_adapter_config = replace(model.unet.peft_config[unet_adapter_name]) # Create weighted adapters model.text_encoder.add_weighted_adapter([unet_adapter_name], [0.5], "weighted_adapter_test") model.unet.add_weighted_adapter([unet_adapter_name], [0.5], "weighted_adapter_test") # Assert that base adapters config did not change assert asdict(text_encoder_adapter_config) == asdict(model.text_encoder.peft_config[text_encoder_adapter_name]) assert asdict(unet_adapter_config) == asdict(model.unet.peft_config[unet_adapter_name]) @parameterized.expand( PeftStableDiffusionTestConfigManager.get_grid_parameters( { "model_ids": PEFT_DIFFUSERS_SD_MODELS_TO_TEST, "lora_kwargs": {"init_lora_weights": [False]}, "loha_kwargs": {"init_weights": [False]}, "lokr_kwargs": {"init_weights": [False]}, "oft_kwargs": {"init_weights": [False]}, }, ) ) def test_disable_adapter(self, test_name, model_id, config_cls, config_kwargs): self._test_disable_adapter(model_id, config_cls, config_kwargs)
peft/tests/test_stablediffusion.py/0
{ "file_path": "peft/tests/test_stablediffusion.py", "repo_id": "peft", "token_count": 4243 }
153
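For reference, the adapter wiring performed by `instantiate_sd_peft` above can be reproduced outside the test harness roughly as follows, reusing the tiny test checkpoint and (a subset of) the target-module lists from `CONFIG_TESTING_KWARGS`.

```python
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

pipe = StableDiffusionPipeline.from_pretrained("hf-internal-testing/tiny-stable-diffusion-torch")

text_encoder_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.0, bias="none",
    target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
)
unet_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.0, bias="none",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)

pipe.text_encoder = get_peft_model(pipe.text_encoder, text_encoder_config)
pipe.unet = get_peft_model(pipe.unet, unet_config)

image = pipe("a high quality digital photo of a cute corgi", num_inference_steps=2).images[0]
```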
import argparse import hashlib import os import mxnet as mx import gluoncv import torch from timm import create_model parser = argparse.ArgumentParser(description='Convert from MXNet') parser.add_argument('--model', default='all', type=str, metavar='MODEL', help='Name of model to train (default: "all"') def convert(mxnet_name, torch_name): # download and load the pre-trained model net = gluoncv.model_zoo.get_model(mxnet_name, pretrained=True) # create corresponding torch model torch_net = create_model(torch_name) mxp = [(k, v) for k, v in net.collect_params().items() if 'running' not in k] torchp = list(torch_net.named_parameters()) torch_params = {} # convert parameters # NOTE: we are relying on the fact that the order of parameters # are usually exactly the same between these models, thus no key name mapping # is necessary. Asserts will trip if this is not the case. for (tn, tv), (mn, mv) in zip(torchp, mxp): m_split = mn.split('_') t_split = tn.split('.') print(t_split, m_split) print(tv.shape, mv.shape) # ensure ordering of BN params match since their sizes are not specific if m_split[-1] == 'gamma': assert t_split[-1] == 'weight' if m_split[-1] == 'beta': assert t_split[-1] == 'bias' # ensure shapes match assert all(t == m for t, m in zip(tv.shape, mv.shape)) torch_tensor = torch.from_numpy(mv.data().asnumpy()) torch_params[tn] = torch_tensor # convert buffers (batch norm running stats) mxb = [(k, v) for k, v in net.collect_params().items() if any(x in k for x in ['running_mean', 'running_var'])] torchb = [(k, v) for k, v in torch_net.named_buffers() if 'num_batches' not in k] for (tn, tv), (mn, mv) in zip(torchb, mxb): print(tn, mn) print(tv.shape, mv.shape) # ensure ordering of BN params match since their sizes are not specific if 'running_var' in tn: assert 'running_var' in mn if 'running_mean' in tn: assert 'running_mean' in mn torch_tensor = torch.from_numpy(mv.data().asnumpy()) torch_params[tn] = torch_tensor torch_net.load_state_dict(torch_params) torch_filename = './%s.pth' % torch_name torch.save(torch_net.state_dict(), torch_filename) with open(torch_filename, 'rb') as f: sha_hash = hashlib.sha256(f.read()).hexdigest() final_filename = os.path.splitext(torch_filename)[0] + '-' + sha_hash[:8] + '.pth' os.rename(torch_filename, final_filename) print("=> Saved converted model to '{}, SHA256: {}'".format(final_filename, sha_hash)) def map_mx_to_torch_model(mx_name): torch_name = mx_name.lower() if torch_name.startswith('se_'): torch_name = torch_name.replace('se_', 'se') elif torch_name.startswith('senet_'): torch_name = torch_name.replace('senet_', 'senet') elif torch_name.startswith('inceptionv3'): torch_name = torch_name.replace('inceptionv3', 'inception_v3') torch_name = 'gluon_' + torch_name return torch_name ALL = ['resnet18_v1b', 'resnet34_v1b', 'resnet50_v1b', 'resnet101_v1b', 'resnet152_v1b', 'resnet50_v1c', 'resnet101_v1c', 'resnet152_v1c', 'resnet50_v1d', 'resnet101_v1d', 'resnet152_v1d', #'resnet50_v1e', 'resnet101_v1e', 'resnet152_v1e', 'resnet50_v1s', 'resnet101_v1s', 'resnet152_v1s', 'resnext50_32x4d', 'resnext101_32x4d', 'resnext101_64x4d', 'se_resnext50_32x4d', 'se_resnext101_32x4d', 'se_resnext101_64x4d', 'senet_154', 'inceptionv3'] def main(): args = parser.parse_args() if not args.model or args.model == 'all': for mx_model in ALL: torch_model = map_mx_to_torch_model(mx_model) convert(mx_model, torch_model) else: mx_model = args.model torch_model = map_mx_to_torch_model(mx_model) convert(mx_model, torch_model) if __name__ == '__main__': main()
pytorch-image-models/convert/convert_from_mxnet.py/0
{ "file_path": "pytorch-image-models/convert/convert_from_mxnet.py", "repo_id": "pytorch-image-models", "token_count": 1786 }
154
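A hedged usage sketch: the converter above might be run as `python convert_from_mxnet.py --model resnet50_v1d`, which maps to the `gluon_resnet50_v1d` timm model and writes a hash-suffixed `.pth` file. Loading the result could look like this; the output filename hash is illustrative.

```python
import torch
from timm import create_model

model = create_model("gluon_resnet50_v1d")  # torch name produced by map_mx_to_torch_model("resnet50_v1d")
state_dict = torch.load("./gluon_resnet50_v1d-abcd1234.pth", map_location="cpu")  # assumed output filename
model.load_state_dict(state_dict)
model.eval()
```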
# CSP-ResNet **CSPResNet** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNet](https://paperswithcode.com/method/resnet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{wang2019cspnet, title={CSPNet: A New Backbone that can Enhance Learning Capability of CNN}, author={Chien-Yao Wang and Hong-Yuan Mark Liao and I-Hau Yeh and Yueh-Hua Wu and Ping-Yang Chen and Jun-Wei Hsieh}, year={2019}, eprint={1911.11929}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: CSP ResNet Paper: Title: 'CSPNet: A New Backbone that can Enhance Learning Capability of CNN' URL: https://paperswithcode.com/paper/cspnet-a-new-backbone-that-can-enhance Models: - Name: cspresnet50 In Collection: CSP ResNet Metadata: FLOPs: 5924992000 Parameters: 21620000 File Size: 86679303 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - Label Smoothing - Polynomial Learning Rate Decay - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: cspresnet50 LR: 0.1 Layers: 50 Crop Pct: '0.887' Momentum: 0.9 Batch Size: 128 Image Size: '256' Weight Decay: 0.005 Interpolation: bilinear Training Steps: 8000000 Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/cspnet.py#L415 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/cspresnet50_ra-d3e8d487.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.57% Top 5 Accuracy: 94.71% -->
pytorch-image-models/docs/models/.templates/models/csp-resnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/csp-resnet.md", "repo_id": "pytorch-image-models", "token_count": 897 }
155
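The `code_snippets.md` include above expands to usage snippets in the rendered docs; a short stand-in using the standard timm API might look like this.

```python
import timm
import torch

model = timm.create_model("cspresnet50", pretrained=True)
model.eval()

# Build the preprocessing the checkpoint expects (256px input, crop_pct 0.887, bilinear)
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)

x = torch.randn(1, 3, 256, 256)  # or transform(pil_image).unsqueeze(0) for a real image
with torch.no_grad():
    probs = model(x).softmax(dim=-1)
print(probs.shape)  # torch.Size([1, 1000])
```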
# (Gluon) Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers. The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html). {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{chollet2017xception, title={Xception: Deep Learning with Depthwise Separable Convolutions}, author={François Chollet}, year={2017}, eprint={1610.02357}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: Gloun Xception Paper: Title: 'Xception: Deep Learning with Depthwise Separable Convolutions' URL: https://paperswithcode.com/paper/xception-deep-learning-with-depthwise Models: - Name: gluon_xception65 In Collection: Gloun Xception Metadata: FLOPs: 17594889728 Parameters: 39920000 File Size: 160551306 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Depthwise Separable Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_xception65 Crop Pct: '0.903' Image Size: '299' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_xception.py#L241 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_xception-7015a15c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.7% Top 5 Accuracy: 94.87% -->
pytorch-image-models/docs/models/.templates/models/gloun-xception.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/gloun-xception.md", "repo_id": "pytorch-image-models", "token_count": 747 }
156
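A brief usage sketch for the ported weights, using the 299px input size listed in the metadata above.

```python
import timm
import torch

model = timm.create_model("gluon_xception65", pretrained=True).eval()

x = torch.randn(1, 3, 299, 299)  # 299px input, per the model metadata
with torch.no_grad():
    top5 = model(x).softmax(dim=-1).topk(5)
print(top5.indices)
```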
# RegNetX **RegNetX** is a convolutional network design space with simple, regular models with parameters: depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$, and generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet types of model is that there is a linear parameterisation of block widths (the design space only contains models with this linear structure): $$ u\_{j} = w\_{0} + w\_{a}\cdot{j} $$ For **RegNetX** we have additional restrictions: we set $b = 1$ (the bottleneck ratio), $12 \leq d \leq 28$, and $w\_{m} \geq 2$ (the width multiplier). {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{radosavovic2020designing, title={Designing Network Design Spaces}, author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollár}, year={2020}, eprint={2003.13678}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: RegNetX Paper: Title: Designing Network Design Spaces URL: https://paperswithcode.com/paper/designing-network-design-spaces Models: - Name: regnetx_002 In Collection: RegNetX Metadata: FLOPs: 255276032 Parameters: 2680000 File Size: 10862199 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_002 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L337 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_002-e7e85e5c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 68.75% Top 5 Accuracy: 88.56% - Name: regnetx_004 In Collection: RegNetX Metadata: FLOPs: 510619136 Parameters: 5160000 File Size: 20841309 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_004 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L343 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_004-7d0e9424.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 72.39% Top 5 Accuracy: 90.82% - Name: regnetx_006 In Collection: RegNetX Metadata: FLOPs: 771659136 Parameters: 6200000 File Size: 24965172 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_006 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: 
'224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L349 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_006-85ec1baa.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 73.84% Top 5 Accuracy: 91.68% - Name: regnetx_008 In Collection: RegNetX Metadata: FLOPs: 1027038208 Parameters: 7260000 File Size: 29235944 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_008 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L355 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_008-d8b470eb.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.05% Top 5 Accuracy: 92.34% - Name: regnetx_016 In Collection: RegNetX Metadata: FLOPs: 2059337856 Parameters: 9190000 File Size: 36988158 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_016 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L361 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_016-65ca972a.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.95% Top 5 Accuracy: 93.43% - Name: regnetx_032 In Collection: RegNetX Metadata: FLOPs: 4082555904 Parameters: 15300000 File Size: 61509573 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_032 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L367 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_032-ed0c7f7e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.15% Top 5 Accuracy: 94.09% - Name: regnetx_040 In Collection: RegNetX Metadata: FLOPs: 5095167744 Parameters: 22120000 File Size: 88844824 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_040 Epochs: 100 Crop Pct: '0.875' 
Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L373 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_040-73c2a654.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.48% Top 5 Accuracy: 94.25% - Name: regnetx_064 In Collection: RegNetX Metadata: FLOPs: 8303405824 Parameters: 26210000 File Size: 105184854 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_064 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L379 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_064-29278baa.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.06% Top 5 Accuracy: 94.47% - Name: regnetx_080 In Collection: RegNetX Metadata: FLOPs: 10276726784 Parameters: 39570000 File Size: 158720042 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_080 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L385 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_080-7c7fcab1.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.21% Top 5 Accuracy: 94.55% - Name: regnetx_120 In Collection: RegNetX Metadata: FLOPs: 15536378368 Parameters: 46110000 File Size: 184866342 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_120 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L391 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_120-65d5521e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.61% Top 5 Accuracy: 94.73% - Name: regnetx_160 In Collection: RegNetX Metadata: FLOPs: 20491740672 Parameters: 54280000 File Size: 217623862 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 
GPUs ID: regnetx_160 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L397 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_160-c98c4112.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.84% Top 5 Accuracy: 94.82% - Name: regnetx_320 In Collection: RegNetX Metadata: FLOPs: 40798958592 Parameters: 107810000 File Size: 431962133 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_320 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L403 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_320-8ea38b93.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.25% Top 5 Accuracy: 95.03% -->
pytorch-image-models/docs/models/.templates/models/regnetx.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/regnetx.md", "repo_id": "pytorch-image-models", "token_count": 5745 }
157
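The RegNetX model card above lists pretrained checkpoints together with their eval settings (224x224 input, crop pct 0.875, bicubic interpolation). Below is a hedged usage sketch showing how one of those checkpoints might be loaded with timm and how the resolved data config can be checked against the card; `regnetx_032` is just one ID from the card, and the snippet assumes `timm` and `torch` are installed with network access to the release weight URLs.

```python
# Minimal sketch: create one of the RegNetX models listed above and inspect
# its pretrained data config. Assumes timm/torch are installed and weights
# can be downloaded from the release URLs referenced in the card.
import torch
import timm
from timm.data import resolve_data_config

model = timm.create_model('regnetx_032', pretrained=True)  # any ID from the card works
model.eval()

# The resolved config should mirror the card: 224x224 input, bicubic
# interpolation, crop_pct 0.875.
print(resolve_data_config({}, model=model))

with torch.no_grad():
    probs = model(torch.randn(1, 3, 224, 224)).softmax(dim=-1)
print(probs.shape)  # torch.Size([1, 1000]) -- ImageNet-1k class probabilities
```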
# SSL ResNeXT A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width. The model in this collection utilises semi-supervised learning to improve the performance of the model. The approach brings important gains to standard architectures for image, video and fine-grained classification. Please note the CC-BY-NC 4.0 license on theses weights, non-commercial use only. {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @article{DBLP:journals/corr/abs-1905-00546, author = {I. Zeki Yalniz and Herv{\'{e}} J{\'{e}}gou and Kan Chen and Manohar Paluri and Dhruv Mahajan}, title = {Billion-scale semi-supervised learning for image classification}, journal = {CoRR}, volume = {abs/1905.00546}, year = {2019}, url = {http://arxiv.org/abs/1905.00546}, archivePrefix = {arXiv}, eprint = {1905.00546}, timestamp = {Mon, 28 Sep 2020 08:19:37 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1905-00546.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: SSL ResNext Paper: Title: Billion-scale semi-supervised learning for image classification URL: https://paperswithcode.com/paper/billion-scale-semi-supervised-learning-for Models: - Name: ssl_resnext101_32x16d In Collection: SSL ResNext Metadata: FLOPs: 46623691776 Parameters: 194030000 File Size: 777518664 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Grouped Convolution - Max Pooling - ReLU - ResNeXt Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet - YFCC-100M Training Resources: 64x GPUs ID: ssl_resnext101_32x16d LR: 0.0015 Epochs: 30 Layers: 101 Crop Pct: '0.875' Batch Size: 1536 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L944 Weights: https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x16-15fffa57.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.84% Top 5 Accuracy: 96.09% - Name: ssl_resnext101_32x4d In Collection: SSL ResNext Metadata: FLOPs: 10298145792 Parameters: 44180000 File Size: 177341913 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Grouped Convolution - Max Pooling - ReLU - ResNeXt Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet - YFCC-100M Training Resources: 64x GPUs ID: ssl_resnext101_32x4d LR: 0.0015 Epochs: 30 Layers: 101 Crop Pct: '0.875' Batch Size: 1536 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L924 Weights: https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x4-dc43570a.pth Results: - 
Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.91% Top 5 Accuracy: 95.73% - Name: ssl_resnext101_32x8d In Collection: SSL ResNext Metadata: FLOPs: 21180417024 Parameters: 88790000 File Size: 356056638 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Grouped Convolution - Max Pooling - ReLU - ResNeXt Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet - YFCC-100M Training Resources: 64x GPUs ID: ssl_resnext101_32x8d LR: 0.0015 Epochs: 30 Layers: 101 Crop Pct: '0.875' Batch Size: 1536 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L934 Weights: https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x8-2cfe2f8b.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.61% Top 5 Accuracy: 96.04% - Name: ssl_resnext50_32x4d In Collection: SSL ResNext Metadata: FLOPs: 5472648192 Parameters: 25030000 File Size: 100428550 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Grouped Convolution - Max Pooling - ReLU - ResNeXt Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet - YFCC-100M Training Resources: 64x GPUs ID: ssl_resnext50_32x4d LR: 0.0015 Epochs: 30 Layers: 50 Crop Pct: '0.875' Batch Size: 1536 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L914 Weights: https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext50_32x4-ddb3e555.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.3% Top 5 Accuracy: 95.41% -->
pytorch-image-models/docs/models/.templates/models/ssl-resnext.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/ssl-resnext.md", "repo_id": "pytorch-image-models", "token_count": 2623 }
158
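Since the SSL ResNeXt card above only references a templated snippet include, here is a hedged sketch of one common use: loading a checkpoint from the card as a feature extractor with timm. The model name `ssl_resnext50_32x4d` comes from the card; the random input and the `num_classes=0` choice are illustrative, and the CC-BY-NC 4.0 (non-commercial) restriction noted above still applies to the weights.

```python
# Sketch: use an SSL ResNeXt checkpoint from the card above as a frozen
# feature extractor. num_classes=0 strips the classifier so the forward
# pass returns globally pooled features.
import torch
import timm

backbone = timm.create_model('ssl_resnext50_32x4d', pretrained=True, num_classes=0)
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))
print(feats.shape)  # torch.Size([1, 2048]) for the 50-layer 32x4d variant
```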
site_name: 'Pytorch Image Models' site_description: 'Pretrained Image Recognition Models' repo_name: 'rwightman/pytorch-image-models' repo_url: 'https://github.com/rwightman/pytorch-image-models' nav: - index.md - models.md - ... | models/*.md - results.md - scripts.md - training_hparam_examples.md - feature_extraction.md - changes.md - archived_changes.md theme: name: 'material' feature: tabs: false extra_javascript: - 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-MML-AM_CHTML' - https://cdnjs.cloudflare.com/ajax/libs/tablesort/5.2.1/tablesort.min.js - javascripts/tables.js markdown_extensions: - codehilite: linenums: true - admonition - pymdownx.arithmatex - pymdownx.betterem: smart_enable: all - pymdownx.caret - pymdownx.critic - pymdownx.details - pymdownx.emoji: emoji_generator: !!python/name:pymdownx.emoji.to_svg - pymdownx.inlinehilite - pymdownx.magiclink - pymdownx.mark - pymdownx.smartsymbols - pymdownx.superfences - pymdownx.tasklist: custom_checkbox: true - pymdownx.tilde - mdx_truly_sane_lists plugins: - search - awesome-pages - redirects: redirect_maps: 'index.md': 'https://huggingface.co/docs/timm/index' 'models.md': 'https://huggingface.co/docs/timm/models' 'results.md': 'https://huggingface.co/docs/timm/results' 'scripts.md': 'https://huggingface.co/docs/timm/training_script' 'training_hparam_examples.md': 'https://huggingface.co/docs/timm/training_script#training-examples' 'feature_extraction.md': 'https://huggingface.co/docs/timm/feature_extraction'
pytorch-image-models/mkdocs.yml/0
{ "file_path": "pytorch-image-models/mkdocs.yml", "repo_id": "pytorch-image-models", "token_count": 727 }
159
import math import torch from torch.utils.data import Sampler import torch.distributed as dist class OrderedDistributedSampler(Sampler): """Sampler that restricts data loading to a subset of the dataset. It is especially useful in conjunction with :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it. .. note:: Dataset is assumed to be of constant size. Arguments: dataset: Dataset used for sampling. num_replicas (optional): Number of processes participating in distributed training. rank (optional): Rank of the current process within num_replicas. """ def __init__(self, dataset, num_replicas=None, rank=None): if num_replicas is None: if not dist.is_available(): raise RuntimeError("Requires distributed package to be available") num_replicas = dist.get_world_size() if rank is None: if not dist.is_available(): raise RuntimeError("Requires distributed package to be available") rank = dist.get_rank() self.dataset = dataset self.num_replicas = num_replicas self.rank = rank self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas)) self.total_size = self.num_samples * self.num_replicas def __iter__(self): indices = list(range(len(self.dataset))) # add extra samples to make it evenly divisible indices += indices[:(self.total_size - len(indices))] assert len(indices) == self.total_size # subsample indices = indices[self.rank:self.total_size:self.num_replicas] assert len(indices) == self.num_samples return iter(indices) def __len__(self): return self.num_samples class RepeatAugSampler(Sampler): """Sampler that restricts data loading to a subset of the dataset for distributed, with repeated augmentation. It ensures that different each augmented version of a sample will be visible to a different process (GPU). Heavily based on torch.utils.data.DistributedSampler This sampler was taken from https://github.com/facebookresearch/deit/blob/0c4b8f60/samplers.py Used in Copyright (c) 2015-present, Facebook, Inc. """ def __init__( self, dataset, num_replicas=None, rank=None, shuffle=True, num_repeats=3, selected_round=256, selected_ratio=0, ): if num_replicas is None: if not dist.is_available(): raise RuntimeError("Requires distributed package to be available") num_replicas = dist.get_world_size() if rank is None: if not dist.is_available(): raise RuntimeError("Requires distributed package to be available") rank = dist.get_rank() self.dataset = dataset self.num_replicas = num_replicas self.rank = rank self.shuffle = shuffle self.num_repeats = num_repeats self.epoch = 0 self.num_samples = int(math.ceil(len(self.dataset) * num_repeats / self.num_replicas)) self.total_size = self.num_samples * self.num_replicas # Determine the number of samples to select per epoch for each rank. # num_selected logic defaults to be the same as original RASampler impl, but this one can be tweaked # via selected_ratio and selected_round args. 
selected_ratio = selected_ratio or num_replicas # ratio to reduce selected samples by, num_replicas if 0 if selected_round: self.num_selected_samples = int(math.floor( len(self.dataset) // selected_round * selected_round / selected_ratio)) else: self.num_selected_samples = int(math.ceil(len(self.dataset) / selected_ratio)) def __iter__(self): # deterministically shuffle based on epoch g = torch.Generator() g.manual_seed(self.epoch) if self.shuffle: indices = torch.randperm(len(self.dataset), generator=g) else: indices = torch.arange(start=0, end=len(self.dataset)) # produce repeats e.g. [0, 0, 0, 1, 1, 1, 2, 2, 2....] if isinstance(self.num_repeats, float) and not self.num_repeats.is_integer(): # resample for repeats w/ non-integer ratio repeat_size = math.ceil(self.num_repeats * len(self.dataset)) indices = indices[torch.tensor([int(i // self.num_repeats) for i in range(repeat_size)])] else: indices = torch.repeat_interleave(indices, repeats=int(self.num_repeats), dim=0) indices = indices.tolist() # leaving as tensor thrashes dataloader memory # add extra samples to make it evenly divisible padding_size = self.total_size - len(indices) if padding_size > 0: indices += indices[:padding_size] assert len(indices) == self.total_size # subsample per rank indices = indices[self.rank:self.total_size:self.num_replicas] assert len(indices) == self.num_samples # return up to num selected samples return iter(indices[:self.num_selected_samples]) def __len__(self): return self.num_selected_samples def set_epoch(self, epoch): self.epoch = epoch
pytorch-image-models/timm/data/distributed_sampler.py/0
{ "file_path": "pytorch-image-models/timm/data/distributed_sampler.py", "repo_id": "pytorch-image-models", "token_count": 2276 }
160
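To make the index arithmetic in `OrderedDistributedSampler` above concrete, this sketch runs it outside of an actual process group by passing `num_replicas` and `rank` explicitly (in real DDP use they default to the distributed world size and rank). The tiny `TensorDataset` is a placeholder.

```python
# Sketch: how OrderedDistributedSampler pads and strides indices across ranks.
import torch
from torch.utils.data import TensorDataset
from timm.data.distributed_sampler import OrderedDistributedSampler

dataset = TensorDataset(torch.arange(10).float())  # 10 samples, 3 "ranks"

for rank in range(3):
    sampler = OrderedDistributedSampler(dataset, num_replicas=3, rank=rank)
    print(rank, list(sampler))

# Indices are first padded to a multiple of num_replicas (10 -> 12) by
# repeating the head of the list, then each rank takes every 3rd index:
#   0 [0, 3, 6, 9]
#   1 [1, 4, 7, 0]
#   2 [2, 5, 8, 1]
```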
""" Dataset reader for webdataset Hacked together by / Copyright 2022 Ross Wightman """ import io import json import logging import math import os import random import sys from dataclasses import dataclass from functools import partial from itertools import islice from typing import Any, Callable, Dict, List, Optional, Tuple import torch import torch.distributed as dist import yaml from PIL import Image from torch.utils.data import Dataset, IterableDataset, get_worker_info try: import webdataset as wds from webdataset.filters import _shuffle, getfirst from webdataset.shardlists import expand_urls from webdataset.tariterators import base_plus_ext, url_opener, tar_file_expander, valid_sample except ImportError: wds = None expand_urls = None from .class_map import load_class_map from .reader import Reader from .shared_count import SharedCount _logger = logging.getLogger(__name__) SAMPLE_SHUFFLE_SIZE = int(os.environ.get('WDS_SHUFFLE_SIZE', 8192)) SAMPLE_INITIAL_SIZE = int(os.environ.get('WDS_INITIAL_SIZE', 2048)) def _load_info(root, names=('_info.json', 'info.json')): if isinstance(names, str): names = (names,) tried = [] err_str = '' for n in names: full_path = os.path.join(root, n) try: tried.append(full_path) with wds.gopen(full_path) as f: if n.endswith('.json'): info_dict = json.load(f) else: info_dict = yaml.safe_load(f) return info_dict except Exception as e: err_str = str(e) _logger.warning( f'Dataset info file not found at {tried}. Error: {err_str}. ' 'Falling back to provided split and size arg.') return {} @dataclass class SplitInfo: num_samples: int filenames: Tuple[str] shard_lengths: Tuple[int] = () alt_label: str = '' name: str = '' def _parse_split_info(split: str, info: Dict): def _info_convert(dict_info): return SplitInfo( num_samples=dict_info['num_samples'], filenames=tuple(dict_info['filenames']), shard_lengths=tuple(dict_info['shard_lengths']), alt_label=dict_info.get('alt_label', ''), name=dict_info['name'], ) if 'tar' in split or '..' in split: # split in WDS string braceexpand format, sample count can be included with a | separator # ex: `dataset-split-{0000..9999}.tar|100000` for 9999 shards, covering 100,000 samples split = split.split('|') num_samples = 0 split_name = '' if len(split) > 1: num_samples = int(split[1]) split = split[0] if '::' not in split: split_parts = split.split('-', 3) split_idx = len(split_parts) - 1 if split_idx and 'splits' in info and split_parts[split_idx] in info['splits']: split_name = split_parts[split_idx] split_filenames = expand_urls(split) if split_name: split_info = info['splits'][split_name] if not num_samples: _fc = {f: c for f, c in zip(split_info['filenames'], split_info['shard_lengths'])} num_samples = sum(_fc[f] for f in split_filenames) split_info['filenames'] = tuple(_fc.keys()) split_info['shard_lengths'] = tuple(_fc.values()) split_info['num_samples'] = num_samples split_info = _info_convert(split_info) else: split_info = SplitInfo( name=split_name, num_samples=num_samples, filenames=split_filenames, ) else: if 'splits' not in info or split not in info['splits']: raise RuntimeError(f"split {split} not found in info ({info.get('splits', {}).keys()})") split = split split_info = info['splits'][split] split_info = _info_convert(split_info) return split_info def log_and_continue(exn): """Call in an exception handler to ignore exceptions, isssue a warning, and continue.""" _logger.warning(f'Handling webdataset error ({repr(exn)}). 
Ignoring.') # NOTE: try force an exit on errors that are clearly code / config and not transient if isinstance(exn, TypeError): raise exn return True def _decode( sample, image_key='jpg', image_mode='RGB', target_key='cls', alt_label='' ): """ Custom sample decode * decode and convert PIL Image * cls byte string label to int * pass through JSON byte string (if it exists) without parse """ # decode class label, skip if alternate label not valid if alt_label: # alternative labels are encoded in json metadata meta = json.loads(sample['json']) class_label = int(meta[alt_label]) if class_label < 0: # skipped labels currently encoded as -1, may change to a null/None value return None else: class_label = int(sample[target_key]) # decode image img = getfirst(sample, image_key) with io.BytesIO(img) as b: img = Image.open(b) img.load() if image_mode: img = img.convert(image_mode) # json passed through in undecoded state decoded = dict(jpg=img, cls=class_label, json=sample.get('json', None)) return decoded def pytorch_worker_seed(): """get dataloader worker seed from pytorch""" worker_info = get_worker_info() if worker_info is not None: # favour the seed already created for pytorch dataloader workers if it exists return worker_info.seed # fallback to wds rank based seed return wds.utils.pytorch_worker_seed() if wds is not None: # conditional to avoid mandatory wds import (via inheritance of wds.PipelineStage) class detshuffle2(wds.PipelineStage): def __init__( self, bufsize=1000, initial=100, seed=0, epoch=-1, ): self.bufsize = bufsize self.initial = initial self.seed = seed self.epoch = epoch def run(self, src): if isinstance(self.epoch, SharedCount): epoch = self.epoch.value else: # NOTE: this is epoch tracking is problematic in a multiprocess (dataloader workers or train) # situation as different workers may wrap at different times (or not at all). self.epoch += 1 epoch = self.epoch if self.seed < 0: seed = pytorch_worker_seed() + epoch else: seed = self.seed + epoch # _logger.info(f'shuffle seed: {self.seed}, {seed}, epoch: {epoch}') # FIXME temporary rng = random.Random(seed) return _shuffle(src, self.bufsize, self.initial, rng) else: detshuffle2 = None class ResampledShards2(IterableDataset): """An iterable dataset yielding a list of urls.""" def __init__( self, urls, nshards=sys.maxsize, worker_seed=None, deterministic=True, epoch=-1, ): """Sample shards from the shard list with replacement. :param urls: a list of URLs as a Python list or brace notation string """ super().__init__() urls = wds.shardlists.expand_urls(urls) self.urls = urls assert isinstance(self.urls[0], str) self.nshards = nshards self.rng = random.Random() self.worker_seed = pytorch_worker_seed if worker_seed is None else worker_seed self.deterministic = deterministic self.epoch = epoch def __iter__(self): """Return an iterator over the shards.""" if isinstance(self.epoch, SharedCount): epoch = self.epoch.value else: # NOTE: this is epoch tracking is problematic in a multiprocess (dataloader workers or train) # situation as different workers may wrap at different times (or not at all). 
self.epoch += 1 epoch = self.epoch if self.deterministic: # reset seed w/ epoch if deterministic, worker seed should be deterministic due to arg.seed self.rng = random.Random(self.worker_seed() + epoch) for _ in range(self.nshards): index = self.rng.randint(0, len(self.urls) - 1) yield dict(url=self.urls[index]) class ReaderWds(Reader): def __init__( self, root: str, name: Optional[str] = None, split: str = 'train', is_training: bool = False, num_samples: Optional[int] = None, batch_size: int = 1, repeats: int = 0, seed: int = 42, class_map: Optional[dict] = None, input_key: str = 'jpg;png;webp', input_img_mode: str = 'RGB', target_key: str = 'cls', target_img_mode: str = '', filename_key: str = 'filename', sample_shuffle_size: Optional[int] = None, smaple_initial_size: Optional[int] = None, ): super().__init__() if wds is None: raise RuntimeError( 'Please install webdataset 0.2.x package `pip install git+https://github.com/webdataset/webdataset`.') self.root = root self.is_training = is_training self.batch_size = batch_size self.repeats = repeats self.common_seed = seed # a seed that's fixed across all worker / distributed instances self.shard_shuffle_size = 500 self.sample_shuffle_size = sample_shuffle_size or SAMPLE_SHUFFLE_SIZE self.sample_initial_size = smaple_initial_size or SAMPLE_INITIAL_SIZE self.input_key = input_key self.input_img_mode = input_img_mode self.target_key = target_key self.filename_key = filename_key self.key_ext = '.JPEG' # extension to add to key for original filenames (DS specific, default ImageNet) self.info = _load_info(self.root) self.split_info = _parse_split_info(split, self.info) if num_samples is not None: self.num_samples = num_samples else: self.num_samples = self.split_info.num_samples if not self.num_samples: raise RuntimeError(f'Invalid split definition, num_samples not specified.') self.remap_class = False if class_map: self.class_to_idx = load_class_map(class_map) self.remap_class = True else: self.class_to_idx = {} # Distributed world state self.dist_rank = 0 self.dist_num_replicas = 1 if dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1: self.dist_rank = dist.get_rank() self.dist_num_replicas = dist.get_world_size() # Attributes that are updated in _lazy_init self.worker_info = None self.worker_id = 0 self.worker_seed = seed # seed unique to each worker instance self.num_workers = 1 self.global_worker_id = 0 self.global_num_workers = 1 self.init_count = 0 self.epoch_count = SharedCount() # DataPipeline is lazy init, the majority of WDS DataPipeline could be init here, BUT, shuffle seed # is not handled in manner where it can be deterministic for each worker AND initialized up front self.ds = None def set_epoch(self, count): self.epoch_count.value = count def set_loader_cfg( self, num_workers: Optional[int] = None, ): if self.ds is not None: return if num_workers is not None: self.num_workers = num_workers self.global_num_workers = self.dist_num_replicas * self.num_workers def _lazy_init(self): """ Lazily initialize worker (in worker processes) """ if self.worker_info is None: worker_info = torch.utils.data.get_worker_info() if worker_info is not None: self.worker_info = worker_info self.worker_id = worker_info.id self.worker_seed = worker_info.seed self.num_workers = worker_info.num_workers self.global_num_workers = self.dist_num_replicas * self.num_workers self.global_worker_id = self.dist_rank * self.num_workers + self.worker_id # init data pipeline abs_shard_filenames = [os.path.join(self.root, f) for f in 
self.split_info.filenames] pipeline = [wds.SimpleShardList(abs_shard_filenames)] # at this point we have an iterator over all the shards if self.is_training: pipeline.extend([ detshuffle2( self.shard_shuffle_size, seed=self.common_seed, epoch=self.epoch_count, ), self._split_by_node_and_worker, # at this point, we have an iterator over the shards assigned to each worker wds.tarfile_to_samples(handler=log_and_continue), wds.shuffle( bufsize=self.sample_shuffle_size, initial=self.sample_initial_size, rng=random.Random(self.worker_seed) # this is why we lazy-init whole DataPipeline ), ]) else: pipeline.extend([ self._split_by_node_and_worker, # at this point, we have an iterator over the shards assigned to each worker wds.tarfile_to_samples(handler=log_and_continue), ]) pipeline.extend([ wds.map( partial( _decode, image_key=self.input_key, image_mode=self.input_img_mode, alt_label=self.split_info.alt_label, ), handler=log_and_continue, ), wds.rename(image=self.input_key, target=self.target_key) ]) self.ds = wds.DataPipeline(*pipeline) def _split_by_node_and_worker(self, src): if self.global_num_workers > 1: for s in islice(src, self.global_worker_id, None, self.global_num_workers): yield s else: for s in src: yield s def _num_samples_per_worker(self): num_worker_samples = self.num_samples / max(self.global_num_workers, self.dist_num_replicas) if self.is_training or self.dist_num_replicas > 1: num_worker_samples = math.ceil(num_worker_samples) if self.is_training: num_worker_samples = math.ceil(num_worker_samples / self.batch_size) * self.batch_size return int(num_worker_samples) def __iter__(self): if self.ds is None: self._lazy_init() num_worker_samples = self._num_samples_per_worker() if self.is_training or self.dist_num_replicas > 1: # NOTE: doing distributed validation w/ WDS is messy, hard to meet constraints that # same # of batches needed across all replicas w/ seeing each sample once. # with_epoch() is simple but could miss a shard's worth of samples in some workers, # and duplicate in others. Best to keep num DL workers low and a divisor of #val shards. ds = self.ds.with_epoch(num_worker_samples) else: ds = self.ds i = 0 # _logger.info(f'start {i}, {self.worker_id}') # FIXME temporary debug for sample in ds: target = sample['target'] if self.remap_class: target = self.class_to_idx[target] yield sample['image'], target i += 1 # _logger.info(f'end {i}, {self.worker_id}') # FIXME temporary debug def __len__(self): num_samples = self._num_samples_per_worker() * self.num_workers return num_samples def _filename(self, index, basename=False, absolute=False): assert False, "Not supported" # no random access to examples def filenames(self, basename=False, absolute=False): """ Return all filenames in dataset, overrides base""" if self.ds is None: self._lazy_init() names = [] for sample in self.ds: if self.filename_key in sample: name = sample[self.filename_key] elif '__key__' in sample: name = sample['__key__'] + self.key_ext else: assert False, "No supported name field present" names.append(name) if len(names) >= self.num_samples: break # safety for ds.repeat() case return names
pytorch-image-models/timm/data/readers/reader_wds.py/0
{ "file_path": "pytorch-image-models/timm/data/readers/reader_wds.py", "repo_id": "pytorch-image-models", "token_count": 7878 }
161
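The `_parse_split_info` helper in the reader above documents a split-string convention (`dataset-split-{0000..9999}.tar|100000`) in a comment. The sketch below exercises that helper directly with an empty info dict to show how the convention is interpreted; the shard pattern and sample count are placeholders, the function is internal to timm, and the `webdataset` package must be installed for the module import to resolve `expand_urls`.

```python
# Sketch: how a brace-expanded WDS split string with an explicit sample count
# is parsed when no _info.json is available. Shard names here are placeholders.
from timm.data.readers.reader_wds import _parse_split_info

split_info = _parse_split_info('imagenet1k-validation-{0000..0004}.tar|1000', {})
print(split_info.num_samples)     # 1000, taken from the "|1000" suffix
print(len(split_info.filenames))  # 5 shard filenames from the {0000..0004} range
```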
""" Classifier head and layer factory Hacked together by / Copyright 2020 Ross Wightman """ from collections import OrderedDict from functools import partial from typing import Optional, Union, Callable import torch import torch.nn as nn from torch.nn import functional as F from .adaptive_avgmax_pool import SelectAdaptivePool2d from .create_act import get_act_layer from .create_norm import get_norm_layer def _create_pool( num_features: int, num_classes: int, pool_type: str = 'avg', use_conv: bool = False, input_fmt: Optional[str] = None, ): flatten_in_pool = not use_conv # flatten when we use a Linear layer after pooling if not pool_type: assert num_classes == 0 or use_conv,\ 'Pooling can only be disabled if classifier is also removed or conv classifier is used' flatten_in_pool = False # disable flattening if pooling is pass-through (no pooling) global_pool = SelectAdaptivePool2d( pool_type=pool_type, flatten=flatten_in_pool, input_fmt=input_fmt, ) num_pooled_features = num_features * global_pool.feat_mult() return global_pool, num_pooled_features def _create_fc(num_features, num_classes, use_conv=False): if num_classes <= 0: fc = nn.Identity() # pass-through (no classifier) elif use_conv: fc = nn.Conv2d(num_features, num_classes, 1, bias=True) else: fc = nn.Linear(num_features, num_classes, bias=True) return fc def create_classifier( num_features: int, num_classes: int, pool_type: str = 'avg', use_conv: bool = False, input_fmt: str = 'NCHW', drop_rate: Optional[float] = None, ): global_pool, num_pooled_features = _create_pool( num_features, num_classes, pool_type, use_conv=use_conv, input_fmt=input_fmt, ) fc = _create_fc( num_pooled_features, num_classes, use_conv=use_conv, ) if drop_rate is not None: dropout = nn.Dropout(drop_rate) return global_pool, dropout, fc return global_pool, fc class ClassifierHead(nn.Module): """Classifier head w/ configurable global pooling and dropout.""" def __init__( self, in_features: int, num_classes: int, pool_type: str = 'avg', drop_rate: float = 0., use_conv: bool = False, input_fmt: str = 'NCHW', ): """ Args: in_features: The number of input features. num_classes: The number of classes for the final classifier layer (output). pool_type: Global pooling type, pooling disabled if empty string (''). drop_rate: Pre-classifier dropout rate. 
""" super(ClassifierHead, self).__init__() self.in_features = in_features self.use_conv = use_conv self.input_fmt = input_fmt global_pool, fc = create_classifier( in_features, num_classes, pool_type, use_conv=use_conv, input_fmt=input_fmt, ) self.global_pool = global_pool self.drop = nn.Dropout(drop_rate) self.fc = fc self.flatten = nn.Flatten(1) if use_conv and pool_type else nn.Identity() def reset(self, num_classes, pool_type=None): if pool_type is not None and pool_type != self.global_pool.pool_type: self.global_pool, self.fc = create_classifier( self.in_features, num_classes, pool_type=pool_type, use_conv=self.use_conv, input_fmt=self.input_fmt, ) self.flatten = nn.Flatten(1) if self.use_conv and pool_type else nn.Identity() else: num_pooled_features = self.in_features * self.global_pool.feat_mult() self.fc = _create_fc( num_pooled_features, num_classes, use_conv=self.use_conv, ) def forward(self, x, pre_logits: bool = False): x = self.global_pool(x) x = self.drop(x) if pre_logits: return self.flatten(x) x = self.fc(x) return self.flatten(x) class NormMlpClassifierHead(nn.Module): def __init__( self, in_features: int, num_classes: int, hidden_size: Optional[int] = None, pool_type: str = 'avg', drop_rate: float = 0., norm_layer: Union[str, Callable] = 'layernorm2d', act_layer: Union[str, Callable] = 'tanh', ): """ Args: in_features: The number of input features. num_classes: The number of classes for the final classifier layer (output). hidden_size: The hidden size of the MLP (pre-logits FC layer) if not None. pool_type: Global pooling type, pooling disabled if empty string (''). drop_rate: Pre-classifier dropout rate. norm_layer: Normalization layer type. act_layer: MLP activation layer type (only used if hidden_size is not None). """ super().__init__() self.in_features = in_features self.hidden_size = hidden_size self.num_features = in_features self.use_conv = not pool_type norm_layer = get_norm_layer(norm_layer) act_layer = get_act_layer(act_layer) linear_layer = partial(nn.Conv2d, kernel_size=1) if self.use_conv else nn.Linear self.global_pool = SelectAdaptivePool2d(pool_type=pool_type) self.norm = norm_layer(in_features) self.flatten = nn.Flatten(1) if pool_type else nn.Identity() if hidden_size: self.pre_logits = nn.Sequential(OrderedDict([ ('fc', linear_layer(in_features, hidden_size)), ('act', act_layer()), ])) self.num_features = hidden_size else: self.pre_logits = nn.Identity() self.drop = nn.Dropout(drop_rate) self.fc = linear_layer(self.num_features, num_classes) if num_classes > 0 else nn.Identity() def reset(self, num_classes, pool_type=None): if pool_type is not None: self.global_pool = SelectAdaptivePool2d(pool_type=pool_type) self.flatten = nn.Flatten(1) if pool_type else nn.Identity() self.use_conv = self.global_pool.is_identity() linear_layer = partial(nn.Conv2d, kernel_size=1) if self.use_conv else nn.Linear if self.hidden_size: if ((isinstance(self.pre_logits.fc, nn.Conv2d) and not self.use_conv) or (isinstance(self.pre_logits.fc, nn.Linear) and self.use_conv)): with torch.no_grad(): new_fc = linear_layer(self.in_features, self.hidden_size) new_fc.weight.copy_(self.pre_logits.fc.weight.reshape(new_fc.weight.shape)) new_fc.bias.copy_(self.pre_logits.fc.bias) self.pre_logits.fc = new_fc self.fc = linear_layer(self.num_features, num_classes) if num_classes > 0 else nn.Identity() def forward(self, x, pre_logits: bool = False): x = self.global_pool(x) x = self.norm(x) x = self.flatten(x) x = self.pre_logits(x) x = self.drop(x) if pre_logits: return x x = self.fc(x) 
return x
pytorch-image-models/timm/layers/classifier.py/0
{ "file_path": "pytorch-image-models/timm/layers/classifier.py", "repo_id": "pytorch-image-models", "token_count": 3585 }
162
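As a quick illustration of the classifier head above, this hedged sketch pools a fake NCHW feature map, extracts pre-logits features, and then swaps the classifier via `reset()`. The feature sizes and class counts are arbitrary placeholders.

```python
# Sketch: ClassifierHead pools an NCHW feature map and applies dropout + FC.
import torch
from timm.layers.classifier import ClassifierHead

head = ClassifierHead(in_features=2048, num_classes=10, pool_type='avg', drop_rate=0.1)
head.eval()

x = torch.randn(2, 2048, 7, 7)                # fake backbone output, NCHW
print(head(x).shape)                          # torch.Size([2, 10])
print(head(x, pre_logits=True).shape)         # torch.Size([2, 2048]) pooled features

# reset() rebuilds the final FC (and optionally the pooling) in place,
# e.g. when fine-tuning on a dataset with a different number of classes.
head.reset(num_classes=5)
print(head(x).shape)                          # torch.Size([2, 5])
```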
""" Gather-Excite Attention Block Paper: `Gather-Excite: Exploiting Feature Context in CNNs` - https://arxiv.org/abs/1810.12348 Official code here, but it's only partial impl in Caffe: https://github.com/hujie-frank/GENet I've tried to support all of the extent both w/ and w/o params. I don't believe I've seen another impl that covers all of the cases. NOTE: extent=0 + extra_params=False is equivalent to Squeeze-and-Excitation Hacked together by / Copyright 2021 Ross Wightman """ import math from torch import nn as nn import torch.nn.functional as F from .create_act import create_act_layer, get_act_layer from .create_conv2d import create_conv2d from .helpers import make_divisible from .mlp import ConvMlp class GatherExcite(nn.Module): """ Gather-Excite Attention Module """ def __init__( self, channels, feat_size=None, extra_params=False, extent=0, use_mlp=True, rd_ratio=1./16, rd_channels=None, rd_divisor=1, add_maxpool=False, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, gate_layer='sigmoid'): super(GatherExcite, self).__init__() self.add_maxpool = add_maxpool act_layer = get_act_layer(act_layer) self.extent = extent if extra_params: self.gather = nn.Sequential() if extent == 0: assert feat_size is not None, 'spatial feature size must be specified for global extent w/ params' self.gather.add_module( 'conv1', create_conv2d(channels, channels, kernel_size=feat_size, stride=1, depthwise=True)) if norm_layer: self.gather.add_module(f'norm1', nn.BatchNorm2d(channels)) else: assert extent % 2 == 0 num_conv = int(math.log2(extent)) for i in range(num_conv): self.gather.add_module( f'conv{i + 1}', create_conv2d(channels, channels, kernel_size=3, stride=2, depthwise=True)) if norm_layer: self.gather.add_module(f'norm{i + 1}', nn.BatchNorm2d(channels)) if i != num_conv - 1: self.gather.add_module(f'act{i + 1}', act_layer(inplace=True)) else: self.gather = None if self.extent == 0: self.gk = 0 self.gs = 0 else: assert extent % 2 == 0 self.gk = self.extent * 2 - 1 self.gs = self.extent if not rd_channels: rd_channels = make_divisible(channels * rd_ratio, rd_divisor, round_limit=0.) self.mlp = ConvMlp(channels, rd_channels, act_layer=act_layer) if use_mlp else nn.Identity() self.gate = create_act_layer(gate_layer) def forward(self, x): size = x.shape[-2:] if self.gather is not None: x_ge = self.gather(x) else: if self.extent == 0: # global extent x_ge = x.mean(dim=(2, 3), keepdims=True) if self.add_maxpool: # experimental codepath, may remove or change x_ge = 0.5 * x_ge + 0.5 * x.amax((2, 3), keepdim=True) else: x_ge = F.avg_pool2d( x, kernel_size=self.gk, stride=self.gs, padding=self.gk // 2, count_include_pad=False) if self.add_maxpool: # experimental codepath, may remove or change x_ge = 0.5 * x_ge + 0.5 * F.max_pool2d(x, kernel_size=self.gk, stride=self.gs, padding=self.gk // 2) x_ge = self.mlp(x_ge) if x_ge.shape[-1] != 1 or x_ge.shape[-2] != 1: x_ge = F.interpolate(x_ge, size=size) return x * self.gate(x_ge)
pytorch-image-models/timm/layers/gather_excite.py/0
{ "file_path": "pytorch-image-models/timm/layers/gather_excite.py", "repo_id": "pytorch-image-models", "token_count": 1956 }
163
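The module above supports both a parameter-free gather (global pooling, equivalent to Squeeze-and-Excitation when `extent=0` and `extra_params=False`, as its docstring notes) and a parameterised gather built from strided depthwise convolutions. A hedged shape-check sketch with arbitrary channel counts and extents:

```python
# Sketch: GatherExcite in its parameter-free (SE-equivalent) and
# parameterised forms; the output shape always matches the input.
import torch
from timm.layers.gather_excite import GatherExcite

x = torch.randn(2, 64, 32, 32)

ge_global = GatherExcite(64, extent=0, extra_params=False)  # global mean gather
ge_local = GatherExcite(64, extent=4, extra_params=True)    # log2(4)=2 strided DW conv stages

print(ge_global(x).shape)  # torch.Size([2, 64, 32, 32])
print(ge_local(x).shape)   # torch.Size([2, 64, 32, 32])
```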
""" Normalization + Activation Layers Provides Norm+Act fns for standard PyTorch norm layers such as * BatchNorm * GroupNorm * LayerNorm This allows swapping with alternative layers that are natively both norm + act such as * EvoNorm (evo_norm.py) * FilterResponseNorm (filter_response_norm.py) * InplaceABN (inplace_abn.py) Hacked together by / Copyright 2022 Ross Wightman """ from typing import Union, List, Optional, Any import torch from torch import nn as nn from torch.nn import functional as F from torchvision.ops.misc import FrozenBatchNorm2d from .create_act import get_act_layer from .fast_norm import is_fast_norm, fast_group_norm, fast_layer_norm from .trace_utils import _assert def _create_act(act_layer, act_kwargs=None, inplace=False, apply_act=True): act_layer = get_act_layer(act_layer) # string -> nn.Module act_kwargs = act_kwargs or {} if act_layer is not None and apply_act: if inplace: act_kwargs['inplace'] = inplace act = act_layer(**act_kwargs) else: act = nn.Identity() return act class BatchNormAct2d(nn.BatchNorm2d): """BatchNorm + Activation This module performs BatchNorm + Activation in a manner that will remain backwards compatible with weights trained with separate bn, act. This is why we inherit from BN instead of composing it as a .bn member. """ def __init__( self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True, apply_act=True, act_layer=nn.ReLU, act_kwargs=None, inplace=True, drop_layer=None, device=None, dtype=None, ): try: factory_kwargs = {'device': device, 'dtype': dtype} super(BatchNormAct2d, self).__init__( num_features, eps=eps, momentum=momentum, affine=affine, track_running_stats=track_running_stats, **factory_kwargs, ) except TypeError: # NOTE for backwards compat with old PyTorch w/o factory device/dtype support super(BatchNormAct2d, self).__init__( num_features, eps=eps, momentum=momentum, affine=affine, track_running_stats=track_running_stats, ) self.drop = drop_layer() if drop_layer is not None else nn.Identity() self.act = _create_act(act_layer, act_kwargs=act_kwargs, inplace=inplace, apply_act=apply_act) def forward(self, x): # cut & paste of torch.nn.BatchNorm2d.forward impl to avoid issues with torchscript and tracing _assert(x.ndim == 4, f'expected 4D input (got {x.ndim}D input)') # exponential_average_factor is set to self.momentum # (when it is available) only so that it gets updated # in ONNX graph when this node is exported to ONNX. if self.momentum is None: exponential_average_factor = 0.0 else: exponential_average_factor = self.momentum if self.training and self.track_running_stats: # TODO: if statement only here to tell the jit to skip emitting this when it is None if self.num_batches_tracked is not None: # type: ignore[has-type] self.num_batches_tracked.add_(1) # type: ignore[has-type] if self.momentum is None: # use cumulative moving average exponential_average_factor = 1.0 / float(self.num_batches_tracked) else: # use exponential moving average exponential_average_factor = self.momentum r""" Decide whether the mini-batch stats should be used for normalization rather than the buffers. Mini-batch stats are used in training mode, and in eval mode when buffers are None. """ if self.training: bn_training = True else: bn_training = (self.running_mean is None) and (self.running_var is None) r""" Buffers are only updated if they are to be tracked and we are in training mode. Thus they only need to be passed when the update should occur (i.e. 
in training mode when they are tracked), or when buffer stats are used for normalization (i.e. in eval mode when buffers are not None). """ x = F.batch_norm( x, # If buffers are not to be tracked, ensure that they won't be updated self.running_mean if not self.training or self.track_running_stats else None, self.running_var if not self.training or self.track_running_stats else None, self.weight, self.bias, bn_training, exponential_average_factor, self.eps, ) x = self.drop(x) x = self.act(x) return x class SyncBatchNormAct(nn.SyncBatchNorm): # Thanks to Selim Seferbekov (https://github.com/rwightman/pytorch-image-models/issues/1254) # This is a quick workaround to support SyncBatchNorm for timm BatchNormAct2d layers # but ONLY when used in conjunction with the timm conversion function below. # Do not create this module directly or use the PyTorch conversion function. def forward(self, x: torch.Tensor) -> torch.Tensor: x = super().forward(x) # SyncBN doesn't work with torchscript anyways, so this is fine if hasattr(self, "drop"): x = self.drop(x) if hasattr(self, "act"): x = self.act(x) return x def convert_sync_batchnorm(module, process_group=None): # convert both BatchNorm and BatchNormAct layers to Synchronized variants module_output = module if isinstance(module, torch.nn.modules.batchnorm._BatchNorm): if isinstance(module, BatchNormAct2d): # convert timm norm + act layer module_output = SyncBatchNormAct( module.num_features, module.eps, module.momentum, module.affine, module.track_running_stats, process_group=process_group, ) # set act and drop attr from the original module module_output.act = module.act module_output.drop = module.drop else: # convert standard BatchNorm layers module_output = torch.nn.SyncBatchNorm( module.num_features, module.eps, module.momentum, module.affine, module.track_running_stats, process_group, ) if module.affine: with torch.no_grad(): module_output.weight = module.weight module_output.bias = module.bias module_output.running_mean = module.running_mean module_output.running_var = module.running_var module_output.num_batches_tracked = module.num_batches_tracked if hasattr(module, "qconfig"): module_output.qconfig = module.qconfig for name, child in module.named_children(): module_output.add_module(name, convert_sync_batchnorm(child, process_group)) del module return module_output class FrozenBatchNormAct2d(torch.nn.Module): """ BatchNormAct2d where the batch statistics and the affine parameters are fixed Args: num_features (int): Number of features ``C`` from an expected input of size ``(N, C, H, W)`` eps (float): a value added to the denominator for numerical stability. 
Default: 1e-5 """ def __init__( self, num_features: int, eps: float = 1e-5, apply_act=True, act_layer=nn.ReLU, act_kwargs=None, inplace=True, drop_layer=None, ): super().__init__() self.eps = eps self.register_buffer("weight", torch.ones(num_features)) self.register_buffer("bias", torch.zeros(num_features)) self.register_buffer("running_mean", torch.zeros(num_features)) self.register_buffer("running_var", torch.ones(num_features)) self.drop = drop_layer() if drop_layer is not None else nn.Identity() self.act = _create_act(act_layer, act_kwargs=act_kwargs, inplace=inplace, apply_act=apply_act) def _load_from_state_dict( self, state_dict: dict, prefix: str, local_metadata: dict, strict: bool, missing_keys: List[str], unexpected_keys: List[str], error_msgs: List[str], ): num_batches_tracked_key = prefix + "num_batches_tracked" if num_batches_tracked_key in state_dict: del state_dict[num_batches_tracked_key] super()._load_from_state_dict( state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs ) def forward(self, x: torch.Tensor) -> torch.Tensor: # move reshapes to the beginning # to make it fuser-friendly w = self.weight.reshape(1, -1, 1, 1) b = self.bias.reshape(1, -1, 1, 1) rv = self.running_var.reshape(1, -1, 1, 1) rm = self.running_mean.reshape(1, -1, 1, 1) scale = w * (rv + self.eps).rsqrt() bias = b - rm * scale x = x * scale + bias x = self.act(self.drop(x)) return x def __repr__(self) -> str: return f"{self.__class__.__name__}({self.weight.shape[0]}, eps={self.eps}, act={self.act})" def freeze_batch_norm_2d(module): """ Converts all `BatchNorm2d` and `SyncBatchNorm` or `BatchNormAct2d` and `SyncBatchNormAct2d` layers of provided module into `FrozenBatchNorm2d` or `FrozenBatchNormAct2d` respectively. Args: module (torch.nn.Module): Any PyTorch module. Returns: torch.nn.Module: Resulting module Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762 """ res = module if isinstance(module, (BatchNormAct2d, SyncBatchNormAct)): res = FrozenBatchNormAct2d(module.num_features) res.num_features = module.num_features res.affine = module.affine if module.affine: res.weight.data = module.weight.data.clone().detach() res.bias.data = module.bias.data.clone().detach() res.running_mean.data = module.running_mean.data res.running_var.data = module.running_var.data res.eps = module.eps res.drop = module.drop res.act = module.act elif isinstance(module, (torch.nn.modules.batchnorm.BatchNorm2d, torch.nn.modules.batchnorm.SyncBatchNorm)): res = FrozenBatchNorm2d(module.num_features) res.num_features = module.num_features res.affine = module.affine if module.affine: res.weight.data = module.weight.data.clone().detach() res.bias.data = module.bias.data.clone().detach() res.running_mean.data = module.running_mean.data res.running_var.data = module.running_var.data res.eps = module.eps else: for name, child in module.named_children(): new_child = freeze_batch_norm_2d(child) if new_child is not child: res.add_module(name, new_child) return res def unfreeze_batch_norm_2d(module): """ Converts all `FrozenBatchNorm2d` layers of provided module into `BatchNorm2d`. If `module` is itself and instance of `FrozenBatchNorm2d`, it is converted into `BatchNorm2d` and returned. Otherwise, the module is walked recursively and submodules are converted in place. Args: module (torch.nn.Module): Any PyTorch module. 
Returns: torch.nn.Module: Resulting module Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762 """ res = module if isinstance(module, FrozenBatchNormAct2d): res = BatchNormAct2d(module.num_features) if module.affine: res.weight.data = module.weight.data.clone().detach() res.bias.data = module.bias.data.clone().detach() res.running_mean.data = module.running_mean.data res.running_var.data = module.running_var.data res.eps = module.eps res.drop = module.drop res.act = module.act elif isinstance(module, FrozenBatchNorm2d): res = torch.nn.BatchNorm2d(module.num_features) if module.affine: res.weight.data = module.weight.data.clone().detach() res.bias.data = module.bias.data.clone().detach() res.running_mean.data = module.running_mean.data res.running_var.data = module.running_var.data res.eps = module.eps else: for name, child in module.named_children(): new_child = unfreeze_batch_norm_2d(child) if new_child is not child: res.add_module(name, new_child) return res def _num_groups(num_channels, num_groups, group_size): if group_size: assert num_channels % group_size == 0 return num_channels // group_size return num_groups class GroupNormAct(nn.GroupNorm): # NOTE num_channel and num_groups order flipped for easier layer swaps / binding of fixed args def __init__( self, num_channels, num_groups=32, eps=1e-5, affine=True, group_size=None, apply_act=True, act_layer=nn.ReLU, act_kwargs=None, inplace=True, drop_layer=None, ): super(GroupNormAct, self).__init__( _num_groups(num_channels, num_groups, group_size), num_channels, eps=eps, affine=affine, ) self.drop = drop_layer() if drop_layer is not None else nn.Identity() self.act = _create_act(act_layer, act_kwargs=act_kwargs, inplace=inplace, apply_act=apply_act) self._fast_norm = is_fast_norm() def forward(self, x): if self._fast_norm: x = fast_group_norm(x, self.num_groups, self.weight, self.bias, self.eps) else: x = F.group_norm(x, self.num_groups, self.weight, self.bias, self.eps) x = self.drop(x) x = self.act(x) return x class GroupNorm1Act(nn.GroupNorm): def __init__( self, num_channels, eps=1e-5, affine=True, apply_act=True, act_layer=nn.ReLU, act_kwargs=None, inplace=True, drop_layer=None, ): super(GroupNorm1Act, self).__init__(1, num_channels, eps=eps, affine=affine) self.drop = drop_layer() if drop_layer is not None else nn.Identity() self.act = _create_act(act_layer, act_kwargs=act_kwargs, inplace=inplace, apply_act=apply_act) self._fast_norm = is_fast_norm() def forward(self, x): if self._fast_norm: x = fast_group_norm(x, self.num_groups, self.weight, self.bias, self.eps) else: x = F.group_norm(x, self.num_groups, self.weight, self.bias, self.eps) x = self.drop(x) x = self.act(x) return x class LayerNormAct(nn.LayerNorm): def __init__( self, normalization_shape: Union[int, List[int], torch.Size], eps=1e-5, affine=True, apply_act=True, act_layer=nn.ReLU, act_kwargs=None, inplace=True, drop_layer=None, ): super(LayerNormAct, self).__init__(normalization_shape, eps=eps, elementwise_affine=affine) self.drop = drop_layer() if drop_layer is not None else nn.Identity() act_layer = get_act_layer(act_layer) # string -> nn.Module self.act = _create_act(act_layer, act_kwargs=act_kwargs, inplace=inplace, apply_act=apply_act) self._fast_norm = is_fast_norm() def forward(self, x): if self._fast_norm: x = fast_layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) else: x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) x = self.drop(x) x 
= self.act(x) return x class LayerNormAct2d(nn.LayerNorm): def __init__( self, num_channels, eps=1e-5, affine=True, apply_act=True, act_layer=nn.ReLU, act_kwargs=None, inplace=True, drop_layer=None, ): super(LayerNormAct2d, self).__init__(num_channels, eps=eps, elementwise_affine=affine) self.drop = drop_layer() if drop_layer is not None else nn.Identity() self.act = _create_act(act_layer, act_kwargs=act_kwargs, inplace=inplace, apply_act=apply_act) self._fast_norm = is_fast_norm() def forward(self, x): x = x.permute(0, 2, 3, 1) if self._fast_norm: x = fast_layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) else: x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) x = x.permute(0, 3, 1, 2) x = self.drop(x) x = self.act(x) return x
pytorch-image-models/timm/layers/norm_act.py/0
{ "file_path": "pytorch-image-models/timm/layers/norm_act.py", "repo_id": "pytorch-image-models", "token_count": 8051 }
164
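Two hedged usage sketches for the layers above: `BatchNormAct2d` as a fused norm + activation that keeps a plain BatchNorm2d state_dict layout, and `freeze_batch_norm_2d` converting norm layers inside a small container in place. The channel sizes and the SiLU choice are arbitrary.

```python
# Sketch: fused BatchNorm + activation, then freezing BN statistics in a
# small container module (as done for detection-style fine-tuning).
import torch
import torch.nn as nn
from timm.layers.norm_act import BatchNormAct2d, freeze_batch_norm_2d

bn_act = BatchNormAct2d(32, act_layer=nn.SiLU)
print(bn_act(torch.randn(4, 32, 16, 16)).shape)  # torch.Size([4, 32, 16, 16])

backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3),
    BatchNormAct2d(32),
    nn.Conv2d(32, 32, 3),
)
frozen = freeze_batch_norm_2d(backbone)          # converts children recursively
print(type(frozen[1]).__name__)                  # FrozenBatchNormAct2d
```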
try: from torch import _assert except ImportError: def _assert(condition: bool, message: str): assert condition, message def _float_to_int(x: float) -> int: """ Symbolic tracing helper to substitute for inbuilt `int`. Hint: Inbuilt `int` can't accept an argument of type `Proxy` """ return int(x)
pytorch-image-models/timm/layers/trace_utils.py/0
{ "file_path": "pytorch-image-models/timm/layers/trace_utils.py", "repo_id": "pytorch-image-models", "token_count": 119 }
165
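The `_assert` shim above is what lets shape checks (like the one in `BatchNormAct2d.forward`) survive symbolic tracing, where a bare `assert` on a traced value can be lost or problematic. A small, hedged usage sketch:

```python
# Sketch: using the tracing-friendly assert helper for an input shape check.
import torch
from timm.layers.trace_utils import _assert

def check_nchw(x: torch.Tensor) -> torch.Tensor:
    _assert(x.ndim == 4, f'expected 4D NCHW input (got {x.ndim}D input)')
    return x

check_nchw(torch.randn(1, 3, 8, 8))  # passes silently; a 3D tensor would raise
```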
import hashlib import json import logging import os from functools import partial from pathlib import Path from tempfile import TemporaryDirectory from typing import Iterable, Optional, Union import torch from torch.hub import HASH_REGEX, download_url_to_file, urlparse try: from torch.hub import get_dir except ImportError: from torch.hub import _get_torch_home as get_dir try: import safetensors.torch _has_safetensors = True except ImportError: _has_safetensors = False try: from typing import Literal except ImportError: from typing_extensions import Literal from timm import __version__ from timm.models._pretrained import filter_pretrained_cfg try: from huggingface_hub import ( create_repo, get_hf_file_metadata, hf_hub_download, hf_hub_url, repo_type_and_id_from_hf_id, upload_folder) from huggingface_hub.utils import EntryNotFoundError hf_hub_download = partial(hf_hub_download, library_name="timm", library_version=__version__) _has_hf_hub = True except ImportError: hf_hub_download = None _has_hf_hub = False _logger = logging.getLogger(__name__) __all__ = ['get_cache_dir', 'download_cached_file', 'has_hf_hub', 'hf_split', 'load_model_config_from_hf', 'load_state_dict_from_hf', 'save_for_hf', 'push_to_hf_hub'] # Default name for a weights file hosted on the Huggingface Hub. HF_WEIGHTS_NAME = "pytorch_model.bin" # default pytorch pkl HF_SAFE_WEIGHTS_NAME = "model.safetensors" # safetensors version HF_OPEN_CLIP_WEIGHTS_NAME = "open_clip_pytorch_model.bin" # default pytorch pkl HF_OPEN_CLIP_SAFE_WEIGHTS_NAME = "open_clip_model.safetensors" # safetensors version def get_cache_dir(child_dir=''): """ Returns the location of the directory where models are cached (and creates it if necessary). """ # Issue warning to move data if old env is set if os.getenv('TORCH_MODEL_ZOO'): _logger.warning('TORCH_MODEL_ZOO is deprecated, please use env TORCH_HOME instead') hub_dir = get_dir() child_dir = () if not child_dir else (child_dir,) model_dir = os.path.join(hub_dir, 'checkpoints', *child_dir) os.makedirs(model_dir, exist_ok=True) return model_dir def download_cached_file(url, check_hash=True, progress=False): if isinstance(url, (list, tuple)): url, filename = url else: parts = urlparse(url) filename = os.path.basename(parts.path) cached_file = os.path.join(get_cache_dir(), filename) if not os.path.exists(cached_file): _logger.info('Downloading: "{}" to {}\n'.format(url, cached_file)) hash_prefix = None if check_hash: r = HASH_REGEX.search(filename) # r is Optional[Match[str]] hash_prefix = r.group(1) if r else None download_url_to_file(url, cached_file, hash_prefix, progress=progress) return cached_file def check_cached_file(url, check_hash=True): if isinstance(url, (list, tuple)): url, filename = url else: parts = urlparse(url) filename = os.path.basename(parts.path) cached_file = os.path.join(get_cache_dir(), filename) if os.path.exists(cached_file): if check_hash: r = HASH_REGEX.search(filename) # r is Optional[Match[str]] hash_prefix = r.group(1) if r else None if hash_prefix: with open(cached_file, 'rb') as f: hd = hashlib.sha256(f.read()).hexdigest() if hd[:len(hash_prefix)] != hash_prefix: return False return True return False def has_hf_hub(necessary=False): if not _has_hf_hub and necessary: # if no HF Hub module installed, and it is necessary to continue, raise error raise RuntimeError( 'Hugging Face hub model specified but package not installed. 
Run `pip install huggingface_hub`.') return _has_hf_hub def hf_split(hf_id: str): # FIXME I may change @ -> # and be parsed as fragment in a URI model name scheme rev_split = hf_id.split('@') assert 0 < len(rev_split) <= 2, 'hf_hub id should only contain one @ character to identify revision.' hf_model_id = rev_split[0] hf_revision = rev_split[-1] if len(rev_split) > 1 else None return hf_model_id, hf_revision def load_cfg_from_json(json_file: Union[str, os.PathLike]): with open(json_file, "r", encoding="utf-8") as reader: text = reader.read() return json.loads(text) def download_from_hf(model_id: str, filename: str): hf_model_id, hf_revision = hf_split(model_id) return hf_hub_download(hf_model_id, filename, revision=hf_revision) def load_model_config_from_hf(model_id: str): assert has_hf_hub(True) cached_file = download_from_hf(model_id, 'config.json') hf_config = load_cfg_from_json(cached_file) if 'pretrained_cfg' not in hf_config: # old form, pull pretrain_cfg out of the base dict pretrained_cfg = hf_config hf_config = {} hf_config['architecture'] = pretrained_cfg.pop('architecture') hf_config['num_features'] = pretrained_cfg.pop('num_features', None) if 'labels' in pretrained_cfg: # deprecated name for 'label_names' pretrained_cfg['label_names'] = pretrained_cfg.pop('labels') hf_config['pretrained_cfg'] = pretrained_cfg # NOTE currently discarding parent config as only arch name and pretrained_cfg used in timm right now pretrained_cfg = hf_config['pretrained_cfg'] pretrained_cfg['hf_hub_id'] = model_id # insert hf_hub id for pretrained weight load during model creation pretrained_cfg['source'] = 'hf-hub' # model should be created with base config num_classes if its exist if 'num_classes' in hf_config: pretrained_cfg['num_classes'] = hf_config['num_classes'] # label meta-data in base config overrides saved pretrained_cfg on load if 'label_names' in hf_config: pretrained_cfg['label_names'] = hf_config.pop('label_names') if 'label_descriptions' in hf_config: pretrained_cfg['label_descriptions'] = hf_config.pop('label_descriptions') model_args = hf_config.get('model_args', {}) model_name = hf_config['architecture'] return pretrained_cfg, model_name, model_args def load_state_dict_from_hf(model_id: str, filename: str = HF_WEIGHTS_NAME): assert has_hf_hub(True) hf_model_id, hf_revision = hf_split(model_id) # Look for .safetensors alternatives and load from it if it exists if _has_safetensors: for safe_filename in _get_safe_alternatives(filename): try: cached_safe_file = hf_hub_download(repo_id=hf_model_id, filename=safe_filename, revision=hf_revision) _logger.info( f"[{model_id}] Safe alternative available for '{filename}' " f"(as '{safe_filename}'). Loading weights using safetensors.") return safetensors.torch.load_file(cached_safe_file, device="cpu") except EntryNotFoundError: pass # Otherwise, load using pytorch.load cached_file = hf_hub_download(hf_model_id, filename=filename, revision=hf_revision) _logger.debug(f"[{model_id}] Safe alternative not found for '{filename}'. 
Loading weights using default pytorch.") return torch.load(cached_file, map_location='cpu') def save_config_for_hf( model, config_path: str, model_config: Optional[dict] = None, model_args: Optional[dict] = None ): model_config = model_config or {} hf_config = {} pretrained_cfg = filter_pretrained_cfg(model.pretrained_cfg, remove_source=True, remove_null=True) # set some values at root config level hf_config['architecture'] = pretrained_cfg.pop('architecture') hf_config['num_classes'] = model_config.pop('num_classes', model.num_classes) # NOTE these attr saved for informational purposes, do not impact model build hf_config['num_features'] = model_config.pop('num_features', model.num_features) global_pool_type = model_config.pop('global_pool', getattr(model, 'global_pool', None)) if isinstance(global_pool_type, str) and global_pool_type: hf_config['global_pool'] = global_pool_type # Save class label info if 'labels' in model_config: _logger.warning( "'labels' as a config field for is deprecated. Please use 'label_names' and 'label_descriptions'." " Renaming provided 'labels' field to 'label_names'.") model_config.setdefault('label_names', model_config.pop('labels')) label_names = model_config.pop('label_names', None) if label_names: assert isinstance(label_names, (dict, list, tuple)) # map label id (classifier index) -> unique label name (ie synset for ImageNet, MID for OpenImages) # can be a dict id: name if there are id gaps, or tuple/list if no gaps. hf_config['label_names'] = label_names label_descriptions = model_config.pop('label_descriptions', None) if label_descriptions: assert isinstance(label_descriptions, dict) # maps label names -> descriptions hf_config['label_descriptions'] = label_descriptions if model_args: hf_config['model_args'] = model_args hf_config['pretrained_cfg'] = pretrained_cfg hf_config.update(model_config) with config_path.open('w') as f: json.dump(hf_config, f, indent=2) def save_for_hf( model, save_directory: str, model_config: Optional[dict] = None, model_args: Optional[dict] = None, safe_serialization: Union[bool, Literal["both"]] = False, ): assert has_hf_hub(True) save_directory = Path(save_directory) save_directory.mkdir(exist_ok=True, parents=True) # Save model weights, either safely (using safetensors), or using legacy pytorch approach or both. tensors = model.state_dict() if safe_serialization is True or safe_serialization == "both": assert _has_safetensors, "`pip install safetensors` to use .safetensors" safetensors.torch.save_file(tensors, save_directory / HF_SAFE_WEIGHTS_NAME) if safe_serialization is False or safe_serialization == "both": torch.save(tensors, save_directory / HF_WEIGHTS_NAME) config_path = save_directory / 'config.json' save_config_for_hf( model, config_path, model_config=model_config, model_args=model_args, ) def push_to_hf_hub( model: torch.nn.Module, repo_id: str, commit_message: str = 'Add model', token: Optional[str] = None, revision: Optional[str] = None, private: bool = False, create_pr: bool = False, model_config: Optional[dict] = None, model_card: Optional[dict] = None, model_args: Optional[dict] = None, safe_serialization: Union[bool, Literal["both"]] = False, ): """ Arguments: (...) safe_serialization (`bool` or `"both"`, *optional*, defaults to `False`): Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`). Can be set to `"both"` in order to push both safe and unsafe weights. 
""" # Create repo if it doesn't exist yet repo_url = create_repo(repo_id, token=token, private=private, exist_ok=True) # Infer complete repo_id from repo_url # Can be different from the input `repo_id` if repo_owner was implicit _, repo_owner, repo_name = repo_type_and_id_from_hf_id(repo_url) repo_id = f"{repo_owner}/{repo_name}" # Check if README file already exist in repo try: get_hf_file_metadata(hf_hub_url(repo_id=repo_id, filename="README.md", revision=revision)) has_readme = True except EntryNotFoundError: has_readme = False # Dump model and push to Hub with TemporaryDirectory() as tmpdir: # Save model weights and config. save_for_hf( model, tmpdir, model_config=model_config, model_args=model_args, safe_serialization=safe_serialization, ) # Add readme if it does not exist if not has_readme: model_card = model_card or {} model_name = repo_id.split('/')[-1] readme_path = Path(tmpdir) / "README.md" readme_text = generate_readme(model_card, model_name) readme_path.write_text(readme_text) # Upload model and return return upload_folder( repo_id=repo_id, folder_path=tmpdir, revision=revision, create_pr=create_pr, commit_message=commit_message, ) def generate_readme(model_card: dict, model_name: str): readme_text = "---\n" readme_text += "tags:\n- image-classification\n- timm\n" readme_text += "library_name: timm\n" readme_text += f"license: {model_card.get('license', 'apache-2.0')}\n" if 'details' in model_card and 'Dataset' in model_card['details']: readme_text += 'datasets:\n' if isinstance(model_card['details']['Dataset'], (tuple, list)): for d in model_card['details']['Dataset']: readme_text += f"- {d.lower()}\n" else: readme_text += f"- {model_card['details']['Dataset'].lower()}\n" if 'Pretrain Dataset' in model_card['details']: if isinstance(model_card['details']['Pretrain Dataset'], (tuple, list)): for d in model_card['details']['Pretrain Dataset']: readme_text += f"- {d.lower()}\n" else: readme_text += f"- {model_card['details']['Pretrain Dataset'].lower()}\n" readme_text += "---\n" readme_text += f"# Model card for {model_name}\n" if 'description' in model_card: readme_text += f"\n{model_card['description']}\n" if 'details' in model_card: readme_text += f"\n## Model Details\n" for k, v in model_card['details'].items(): if isinstance(v, (list, tuple)): readme_text += f"- **{k}:**\n" for vi in v: readme_text += f" - {vi}\n" elif isinstance(v, dict): readme_text += f"- **{k}:**\n" for ki, vi in v.items(): readme_text += f" - {ki}: {vi}\n" else: readme_text += f"- **{k}:** {v}\n" if 'usage' in model_card: readme_text += f"\n## Model Usage\n" readme_text += model_card['usage'] readme_text += '\n' if 'comparison' in model_card: readme_text += f"\n## Model Comparison\n" readme_text += model_card['comparison'] readme_text += '\n' if 'citation' in model_card: readme_text += f"\n## Citation\n" if not isinstance(model_card['citation'], (list, tuple)): citations = [model_card['citation']] else: citations = model_card['citation'] for c in citations: readme_text += f"```bibtex\n{c}\n```\n" return readme_text def _get_safe_alternatives(filename: str) -> Iterable[str]: """Returns potential safetensors alternatives for a given filename. Use case: When downloading a model from the Huggingface Hub, we first look if a .safetensors file exists and if yes, we use it. Main use case is filename "pytorch_model.bin" => check for "model.safetensors" or "pytorch_model.safetensors". 
""" if filename == HF_WEIGHTS_NAME: yield HF_SAFE_WEIGHTS_NAME if filename == HF_OPEN_CLIP_WEIGHTS_NAME: yield HF_OPEN_CLIP_SAFE_WEIGHTS_NAME if filename not in (HF_WEIGHTS_NAME, HF_OPEN_CLIP_WEIGHTS_NAME) and filename.endswith(".bin"): yield filename[:-4] + ".safetensors"
pytorch-image-models/timm/models/_hub.py/0
{ "file_path": "pytorch-image-models/timm/models/_hub.py", "repo_id": "pytorch-image-models", "token_count": 6737 }
166
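# Usage sketch for the hub helpers above (illustrative only): the repo id below is a
# placeholder, and pushing requires `huggingface_hub` (plus `safetensors` for safe weights)
# and a valid Hugging Face login token.
import timm
from timm.models._hub import push_to_hf_hub

model = timm.create_model('resnet18', pretrained=False, num_classes=10)

# save_for_hf() writes config.json plus pytorch_model.bin and/or model.safetensors,
# then push_to_hf_hub() uploads the folder in a single commit.
push_to_hf_hub(
    model,
    repo_id='my-user/resnet18-toy',   # hypothetical repo id
    commit_message='Add toy model',
    safe_serialization='both',        # push both .bin and .safetensors, as supported above
)

# Weights are later resolved through the hf-hub source handled by load_model_config_from_hf /
# load_state_dict_from_hf, e.g.:
# model = timm.create_model('hf-hub:my-user/resnet18-toy', pretrained=True)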
""" ConvMixer """ import torch import torch.nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import SelectAdaptivePool2d from ._registry import register_model, generate_default_cfgs from ._builder import build_model_with_cfg from ._manipulate import checkpoint_seq __all__ = ['ConvMixer'] class Residual(nn.Module): def __init__(self, fn): super().__init__() self.fn = fn def forward(self, x): return self.fn(x) + x class ConvMixer(nn.Module): def __init__( self, dim, depth, kernel_size=9, patch_size=7, in_chans=3, num_classes=1000, global_pool='avg', drop_rate=0., act_layer=nn.GELU, **kwargs, ): super().__init__() self.num_classes = num_classes self.num_features = dim self.grad_checkpointing = False self.stem = nn.Sequential( nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size), act_layer(), nn.BatchNorm2d(dim) ) self.blocks = nn.Sequential( *[nn.Sequential( Residual(nn.Sequential( nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"), act_layer(), nn.BatchNorm2d(dim) )), nn.Conv2d(dim, dim, kernel_size=1), act_layer(), nn.BatchNorm2d(dim) ) for i in range(depth)] ) self.pooling = SelectAdaptivePool2d(pool_type=global_pool, flatten=True) self.head_drop = nn.Dropout(drop_rate) self.head = nn.Linear(dim, num_classes) if num_classes > 0 else nn.Identity() @torch.jit.ignore def group_matcher(self, coarse=False): matcher = dict(stem=r'^stem', blocks=r'^blocks\.(\d+)') return matcher @torch.jit.ignore def set_grad_checkpointing(self, enable=True): self.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self): return self.head def reset_classifier(self, num_classes, global_pool=None): self.num_classes = num_classes if global_pool is not None: self.pooling = SelectAdaptivePool2d(pool_type=global_pool, flatten=True) self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() def forward_features(self, x): x = self.stem(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.blocks, x) else: x = self.blocks(x) return x def forward_head(self, x, pre_logits: bool = False): x = self.pooling(x) x = self.head_drop(x) return x if pre_logits else self.head(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def _create_convmixer(variant, pretrained=False, **kwargs): if kwargs.get('features_only', None): raise RuntimeError('features_only not implemented for ConvMixer models.') return build_model_with_cfg(ConvMixer, variant, pretrained, **kwargs) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'crop_pct': .96, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'classifier': 'head', 'first_conv': 'stem.0', **kwargs } default_cfgs = generate_default_cfgs({ 'convmixer_1536_20.in1k': _cfg(hf_hub_id='timm/'), 'convmixer_768_32.in1k': _cfg(hf_hub_id='timm/'), 'convmixer_1024_20_ks9_p14.in1k': _cfg(hf_hub_id='timm/') }) @register_model def convmixer_1536_20(pretrained=False, **kwargs) -> ConvMixer: model_args = dict(dim=1536, depth=20, kernel_size=9, patch_size=7, **kwargs) return _create_convmixer('convmixer_1536_20', pretrained, **model_args) @register_model def convmixer_768_32(pretrained=False, **kwargs) -> ConvMixer: model_args = dict(dim=768, depth=32, kernel_size=7, patch_size=7, act_layer=nn.ReLU, **kwargs) return _create_convmixer('convmixer_768_32', pretrained, **model_args) @register_model def 
convmixer_1024_20_ks9_p14(pretrained=False, **kwargs) -> ConvMixer:
    model_args = dict(dim=1024, depth=20, kernel_size=9, patch_size=14, **kwargs)
    return _create_convmixer('convmixer_1024_20_ks9_p14', pretrained, **model_args)
pytorch-image-models/timm/models/convmixer.py/0
{ "file_path": "pytorch-image-models/timm/models/convmixer.py", "repo_id": "pytorch-image-models", "token_count": 2228 }
167
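# Usage sketch for the ConvMixer entrypoints above (illustrative only): the registered
# variants are built through timm.create_model, and the classifier head is sized via
# num_classes at construction time. Note features_only is not supported by this model.
import torch
import timm

model = timm.create_model('convmixer_768_32', pretrained=False, num_classes=10)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 10])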
# NOTE timm.models.layers is DEPRECATED, please use timm.layers, this is here to reduce breakages in transition from timm.layers.activations import * from timm.layers.adaptive_avgmax_pool import \ adaptive_avgmax_pool2d, select_adaptive_pool2d, AdaptiveAvgMaxPool2d, SelectAdaptivePool2d from timm.layers.attention_pool2d import AttentionPool2d, RotAttentionPool2d, RotaryEmbedding from timm.layers.blur_pool import BlurPool2d from timm.layers.classifier import ClassifierHead, create_classifier from timm.layers.cond_conv2d import CondConv2d, get_condconv_initializer from timm.layers.config import is_exportable, is_scriptable, is_no_jit, set_exportable, set_scriptable, set_no_jit,\ set_layer_config from timm.layers.conv2d_same import Conv2dSame, conv2d_same from timm.layers.conv_bn_act import ConvNormAct, ConvNormActAa, ConvBnAct from timm.layers.create_act import create_act_layer, get_act_layer, get_act_fn from timm.layers.create_attn import get_attn, create_attn from timm.layers.create_conv2d import create_conv2d from timm.layers.create_norm import get_norm_layer, create_norm_layer from timm.layers.create_norm_act import get_norm_act_layer, create_norm_act_layer, get_norm_act_layer from timm.layers.drop import DropBlock2d, DropPath, drop_block_2d, drop_path from timm.layers.eca import EcaModule, CecaModule, EfficientChannelAttn, CircularEfficientChannelAttn from timm.layers.evo_norm import EvoNorm2dB0, EvoNorm2dB1, EvoNorm2dB2,\ EvoNorm2dS0, EvoNorm2dS0a, EvoNorm2dS1, EvoNorm2dS1a, EvoNorm2dS2, EvoNorm2dS2a from timm.layers.fast_norm import is_fast_norm, set_fast_norm, fast_group_norm, fast_layer_norm from timm.layers.filter_response_norm import FilterResponseNormTlu2d, FilterResponseNormAct2d from timm.layers.gather_excite import GatherExcite from timm.layers.global_context import GlobalContext from timm.layers.helpers import to_ntuple, to_2tuple, to_3tuple, to_4tuple, make_divisible, extend_tuple from timm.layers.inplace_abn import InplaceAbn from timm.layers.linear import Linear from timm.layers.mixed_conv2d import MixedConv2d from timm.layers.mlp import Mlp, GluMlp, GatedMlp, ConvMlp from timm.layers.non_local_attn import NonLocalAttn, BatNonLocalAttn from timm.layers.norm import GroupNorm, GroupNorm1, LayerNorm, LayerNorm2d from timm.layers.norm_act import BatchNormAct2d, GroupNormAct, convert_sync_batchnorm from timm.layers.padding import get_padding, get_same_padding, pad_same from timm.layers.patch_embed import PatchEmbed from timm.layers.pool2d_same import AvgPool2dSame, create_pool2d from timm.layers.squeeze_excite import SEModule, SqueezeExcite, EffectiveSEModule, EffectiveSqueezeExcite from timm.layers.selective_kernel import SelectiveKernel from timm.layers.separable_conv import SeparableConv2d, SeparableConvNormAct from timm.layers.space_to_depth import SpaceToDepthModule from timm.layers.split_attn import SplitAttn from timm.layers.split_batchnorm import SplitBatchNorm2d, convert_splitbn_model from timm.layers.std_conv import StdConv2d, StdConv2dSame, ScaledStdConv2d, ScaledStdConv2dSame from timm.layers.test_time_pool import TestTimePoolHead, apply_test_time_pool from timm.layers.trace_utils import _assert, _float_to_int from timm.layers.weight_init import trunc_normal_, trunc_normal_tf_, variance_scaling_, lecun_normal_ import warnings warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", DeprecationWarning)
pytorch-image-models/timm/models/layers/__init__.py/0
{ "file_path": "pytorch-image-models/timm/models/layers/__init__.py", "repo_id": "pytorch-image-models", "token_count": 1240 }
168
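# Usage sketch (illustrative only): the module above is a thin re-export of timm.layers kept
# for backwards compatibility, so both import paths resolve to the same objects and the old
# path emits a DeprecationWarning on first import. New code should use the canonical path.
from timm.layers import DropPath                          # preferred, canonical location
from timm.models.layers import DropPath as _OldDropPath   # deprecated shim above
assert _OldDropPath is DropPath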
"""RegNet X, Y, Z, and more Paper: `Designing Network Design Spaces` - https://arxiv.org/abs/2003.13678 Original Impl: https://github.com/facebookresearch/pycls/blob/master/pycls/models/regnet.py Paper: `Fast and Accurate Model Scaling` - https://arxiv.org/abs/2103.06877 Original Impl: None Based on original PyTorch impl linked above, but re-wrote to use my own blocks (adapted from ResNet here) and cleaned up with more descriptive variable names. Weights from original pycls impl have been modified: * first layer from BGR -> RGB as most PyTorch models are * removed training specific dict entries from checkpoints and keep model state_dict only * remap names to match the ones here Supports weight loading from torchvision and classy-vision (incl VISSL SEER) A number of custom timm model definitions additions including: * stochastic depth, gradient checkpointing, layer-decay, configurable dilation * a pre-activation 'V' variant * only known RegNet-Z model definitions with pretrained weights Hacked together by / Copyright 2020 Ross Wightman """ import math from dataclasses import dataclass, replace from functools import partial from typing import Optional, Union, Callable import numpy as np import torch import torch.nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import ClassifierHead, AvgPool2dSame, ConvNormAct, SEModule, DropPath, GroupNormAct from timm.layers import get_act_layer, get_norm_act_layer, create_conv2d, make_divisible from ._builder import build_model_with_cfg from ._manipulate import checkpoint_seq, named_apply from ._registry import generate_default_cfgs, register_model, register_model_deprecations __all__ = ['RegNet', 'RegNetCfg'] # model_registry will add each entrypoint fn to this @dataclass class RegNetCfg: depth: int = 21 w0: int = 80 wa: float = 42.63 wm: float = 2.66 group_size: int = 24 bottle_ratio: float = 1. se_ratio: float = 0. group_min_ratio: float = 0. stem_width: int = 32 downsample: Optional[str] = 'conv1x1' linear_out: bool = False preact: bool = False num_features: int = 0 act_layer: Union[str, Callable] = 'relu' norm_layer: Union[str, Callable] = 'batchnorm' def quantize_float(f, q): """Converts a float to the closest non-zero int divisible by q.""" return int(round(f / q) * q) def adjust_widths_groups_comp(widths, bottle_ratios, groups, min_ratio=0.): """Adjusts the compatibility of widths and groups.""" bottleneck_widths = [int(w * b) for w, b in zip(widths, bottle_ratios)] groups = [min(g, w_bot) for g, w_bot in zip(groups, bottleneck_widths)] if min_ratio: # torchvision uses a different rounding scheme for ensuring bottleneck widths divisible by group widths bottleneck_widths = [make_divisible(w_bot, g, min_ratio) for w_bot, g in zip(bottleneck_widths, groups)] else: bottleneck_widths = [quantize_float(w_bot, g) for w_bot, g in zip(bottleneck_widths, groups)] widths = [int(w_bot / b) for w_bot, b in zip(bottleneck_widths, bottle_ratios)] return widths, groups def generate_regnet(width_slope, width_initial, width_mult, depth, group_size, quant=8): """Generates per block widths from RegNet parameters.""" assert width_slope >= 0 and width_initial > 0 and width_mult > 1 and width_initial % quant == 0 # TODO dWr scaling? 
# depth = int(depth * (scale ** 0.1)) # width_scale = scale ** 0.4 # dWr scale, exp 0.8 / 2, applied to both group and layer widths widths_cont = np.arange(depth) * width_slope + width_initial width_exps = np.round(np.log(widths_cont / width_initial) / np.log(width_mult)) widths = np.round(np.divide(width_initial * np.power(width_mult, width_exps), quant)) * quant num_stages, max_stage = len(np.unique(widths)), width_exps.max() + 1 groups = np.array([group_size for _ in range(num_stages)]) return widths.astype(int).tolist(), num_stages, groups.astype(int).tolist() def downsample_conv( in_chs, out_chs, kernel_size=1, stride=1, dilation=1, norm_layer=None, preact=False, ): norm_layer = norm_layer or nn.BatchNorm2d kernel_size = 1 if stride == 1 and dilation == 1 else kernel_size dilation = dilation if kernel_size > 1 else 1 if preact: return create_conv2d( in_chs, out_chs, kernel_size, stride=stride, dilation=dilation, ) else: return ConvNormAct( in_chs, out_chs, kernel_size, stride=stride, dilation=dilation, norm_layer=norm_layer, apply_act=False, ) def downsample_avg( in_chs, out_chs, kernel_size=1, stride=1, dilation=1, norm_layer=None, preact=False, ): """ AvgPool Downsampling as in 'D' ResNet variants. This is not in RegNet space but I might experiment.""" norm_layer = norm_layer or nn.BatchNorm2d avg_stride = stride if dilation == 1 else 1 pool = nn.Identity() if stride > 1 or dilation > 1: avg_pool_fn = AvgPool2dSame if avg_stride == 1 and dilation > 1 else nn.AvgPool2d pool = avg_pool_fn(2, avg_stride, ceil_mode=True, count_include_pad=False) if preact: conv = create_conv2d(in_chs, out_chs, 1, stride=1) else: conv = ConvNormAct(in_chs, out_chs, 1, stride=1, norm_layer=norm_layer, apply_act=False) return nn.Sequential(*[pool, conv]) def create_shortcut( downsample_type, in_chs, out_chs, kernel_size, stride, dilation=(1, 1), norm_layer=None, preact=False, ): assert downsample_type in ('avg', 'conv1x1', '', None) if in_chs != out_chs or stride != 1 or dilation[0] != dilation[1]: dargs = dict(stride=stride, dilation=dilation[0], norm_layer=norm_layer, preact=preact) if not downsample_type: return None # no shortcut, no downsample elif downsample_type == 'avg': return downsample_avg(in_chs, out_chs, **dargs) else: return downsample_conv(in_chs, out_chs, kernel_size=kernel_size, **dargs) else: return nn.Identity() # identity shortcut (no downsample) class Bottleneck(nn.Module): """ RegNet Bottleneck This is almost exactly the same as a ResNet Bottlneck. The main difference is the SE block is moved from after conv3 to after conv2. Otherwise, it's just redefining the arguments for groups/bottleneck channels. 
""" def __init__( self, in_chs, out_chs, stride=1, dilation=(1, 1), bottle_ratio=1, group_size=1, se_ratio=0.25, downsample='conv1x1', linear_out=False, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, drop_block=None, drop_path_rate=0., ): super(Bottleneck, self).__init__() act_layer = get_act_layer(act_layer) bottleneck_chs = int(round(out_chs * bottle_ratio)) groups = bottleneck_chs // group_size cargs = dict(act_layer=act_layer, norm_layer=norm_layer) self.conv1 = ConvNormAct(in_chs, bottleneck_chs, kernel_size=1, **cargs) self.conv2 = ConvNormAct( bottleneck_chs, bottleneck_chs, kernel_size=3, stride=stride, dilation=dilation[0], groups=groups, drop_layer=drop_block, **cargs, ) if se_ratio: se_channels = int(round(in_chs * se_ratio)) self.se = SEModule(bottleneck_chs, rd_channels=se_channels, act_layer=act_layer) else: self.se = nn.Identity() self.conv3 = ConvNormAct(bottleneck_chs, out_chs, kernel_size=1, apply_act=False, **cargs) self.act3 = nn.Identity() if linear_out else act_layer() self.downsample = create_shortcut( downsample, in_chs, out_chs, kernel_size=1, stride=stride, dilation=dilation, norm_layer=norm_layer, ) self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0 else nn.Identity() def zero_init_last(self): nn.init.zeros_(self.conv3.bn.weight) def forward(self, x): shortcut = x x = self.conv1(x) x = self.conv2(x) x = self.se(x) x = self.conv3(x) if self.downsample is not None: # NOTE stuck with downsample as the attr name due to weight compatibility # now represents the shortcut, no shortcut if None, and non-downsample shortcut == nn.Identity() x = self.drop_path(x) + self.downsample(shortcut) x = self.act3(x) return x class PreBottleneck(nn.Module): """ RegNet Bottleneck This is almost exactly the same as a ResNet Bottlneck. The main difference is the SE block is moved from after conv3 to after conv2. Otherwise, it's just redefining the arguments for groups/bottleneck channels. 
""" def __init__( self, in_chs, out_chs, stride=1, dilation=(1, 1), bottle_ratio=1, group_size=1, se_ratio=0.25, downsample='conv1x1', linear_out=False, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, drop_block=None, drop_path_rate=0., ): super(PreBottleneck, self).__init__() norm_act_layer = get_norm_act_layer(norm_layer, act_layer) bottleneck_chs = int(round(out_chs * bottle_ratio)) groups = bottleneck_chs // group_size self.norm1 = norm_act_layer(in_chs) self.conv1 = create_conv2d(in_chs, bottleneck_chs, kernel_size=1) self.norm2 = norm_act_layer(bottleneck_chs) self.conv2 = create_conv2d( bottleneck_chs, bottleneck_chs, kernel_size=3, stride=stride, dilation=dilation[0], groups=groups, ) if se_ratio: se_channels = int(round(in_chs * se_ratio)) self.se = SEModule(bottleneck_chs, rd_channels=se_channels, act_layer=act_layer) else: self.se = nn.Identity() self.norm3 = norm_act_layer(bottleneck_chs) self.conv3 = create_conv2d(bottleneck_chs, out_chs, kernel_size=1) self.downsample = create_shortcut( downsample, in_chs, out_chs, kernel_size=1, stride=stride, dilation=dilation, preact=True, ) self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0 else nn.Identity() def zero_init_last(self): pass def forward(self, x): x = self.norm1(x) shortcut = x x = self.conv1(x) x = self.norm2(x) x = self.conv2(x) x = self.se(x) x = self.norm3(x) x = self.conv3(x) if self.downsample is not None: # NOTE stuck with downsample as the attr name due to weight compatibility # now represents the shortcut, no shortcut if None, and non-downsample shortcut == nn.Identity() x = self.drop_path(x) + self.downsample(shortcut) return x class RegStage(nn.Module): """Stage (sequence of blocks w/ the same output shape).""" def __init__( self, depth, in_chs, out_chs, stride, dilation, drop_path_rates=None, block_fn=Bottleneck, **block_kwargs, ): super(RegStage, self).__init__() self.grad_checkpointing = False first_dilation = 1 if dilation in (1, 2) else 2 for i in range(depth): block_stride = stride if i == 0 else 1 block_in_chs = in_chs if i == 0 else out_chs block_dilation = (first_dilation, dilation) dpr = drop_path_rates[i] if drop_path_rates is not None else 0. name = "b{}".format(i + 1) self.add_module( name, block_fn( block_in_chs, out_chs, stride=block_stride, dilation=block_dilation, drop_path_rate=dpr, **block_kwargs, ) ) first_dilation = dilation def forward(self, x): if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.children(), x) else: for block in self.children(): x = block(x) return x class RegNet(nn.Module): """RegNet-X, Y, and Z Models Paper: https://arxiv.org/abs/2003.13678 Original Impl: https://github.com/facebookresearch/pycls/blob/master/pycls/models/regnet.py """ def __init__( self, cfg: RegNetCfg, in_chans=3, num_classes=1000, output_stride=32, global_pool='avg', drop_rate=0., drop_path_rate=0., zero_init_last=True, **kwargs, ): """ Args: cfg (RegNetCfg): Model architecture configuration in_chans (int): Number of input channels (default: 3) num_classes (int): Number of classifier classes (default: 1000) output_stride (int): Output stride of network, one of (8, 16, 32) (default: 32) global_pool (str): Global pooling type (default: 'avg') drop_rate (float): Dropout rate (default: 0.) drop_path_rate (float): Stochastic depth drop-path rate (default: 0.) 
zero_init_last (bool): Zero-init last weight of residual path kwargs (dict): Extra kwargs overlayed onto cfg """ super().__init__() self.num_classes = num_classes self.drop_rate = drop_rate assert output_stride in (8, 16, 32) cfg = replace(cfg, **kwargs) # update cfg with extra passed kwargs # Construct the stem stem_width = cfg.stem_width na_args = dict(act_layer=cfg.act_layer, norm_layer=cfg.norm_layer) if cfg.preact: self.stem = create_conv2d(in_chans, stem_width, 3, stride=2) else: self.stem = ConvNormAct(in_chans, stem_width, 3, stride=2, **na_args) self.feature_info = [dict(num_chs=stem_width, reduction=2, module='stem')] # Construct the stages prev_width = stem_width curr_stride = 2 per_stage_args, common_args = self._get_stage_args( cfg, output_stride=output_stride, drop_path_rate=drop_path_rate, ) assert len(per_stage_args) == 4 block_fn = PreBottleneck if cfg.preact else Bottleneck for i, stage_args in enumerate(per_stage_args): stage_name = "s{}".format(i + 1) self.add_module( stage_name, RegStage( in_chs=prev_width, block_fn=block_fn, **stage_args, **common_args, ) ) prev_width = stage_args['out_chs'] curr_stride *= stage_args['stride'] self.feature_info += [dict(num_chs=prev_width, reduction=curr_stride, module=stage_name)] # Construct the head if cfg.num_features: self.final_conv = ConvNormAct(prev_width, cfg.num_features, kernel_size=1, **na_args) self.num_features = cfg.num_features else: final_act = cfg.linear_out or cfg.preact self.final_conv = get_act_layer(cfg.act_layer)() if final_act else nn.Identity() self.num_features = prev_width self.head = ClassifierHead( in_features=self.num_features, num_classes=num_classes, pool_type=global_pool, drop_rate=drop_rate, ) named_apply(partial(_init_weights, zero_init_last=zero_init_last), self) def _get_stage_args(self, cfg: RegNetCfg, default_stride=2, output_stride=32, drop_path_rate=0.): # Generate RegNet ws per block widths, num_stages, stage_gs = generate_regnet(cfg.wa, cfg.w0, cfg.wm, cfg.depth, cfg.group_size) # Convert to per stage format stage_widths, stage_depths = np.unique(widths, return_counts=True) stage_br = [cfg.bottle_ratio for _ in range(num_stages)] stage_strides = [] stage_dilations = [] net_stride = 2 dilation = 1 for _ in range(num_stages): if net_stride >= output_stride: dilation *= default_stride stride = 1 else: stride = default_stride net_stride *= stride stage_strides.append(stride) stage_dilations.append(dilation) stage_dpr = np.split(np.linspace(0, drop_path_rate, sum(stage_depths)), np.cumsum(stage_depths[:-1])) # Adjust the compatibility of ws and gws stage_widths, stage_gs = adjust_widths_groups_comp( stage_widths, stage_br, stage_gs, min_ratio=cfg.group_min_ratio) arg_names = ['out_chs', 'stride', 'dilation', 'depth', 'bottle_ratio', 'group_size', 'drop_path_rates'] per_stage_args = [ dict(zip(arg_names, params)) for params in zip(stage_widths, stage_strides, stage_dilations, stage_depths, stage_br, stage_gs, stage_dpr) ] common_args = dict( downsample=cfg.downsample, se_ratio=cfg.se_ratio, linear_out=cfg.linear_out, act_layer=cfg.act_layer, norm_layer=cfg.norm_layer, ) return per_stage_args, common_args @torch.jit.ignore def group_matcher(self, coarse=False): return dict( stem=r'^stem', blocks=r'^s(\d+)' if coarse else r'^s(\d+)\.b(\d+)', ) @torch.jit.ignore def set_grad_checkpointing(self, enable=True): for s in list(self.children())[1:-1]: s.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self): return self.head.fc def reset_classifier(self, num_classes, global_pool='avg'): 
self.head.reset(num_classes, pool_type=global_pool) def forward_features(self, x): x = self.stem(x) x = self.s1(x) x = self.s2(x) x = self.s3(x) x = self.s4(x) x = self.final_conv(x) return x def forward_head(self, x, pre_logits: bool = False): return self.head(x, pre_logits=pre_logits) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def _init_weights(module, name='', zero_init_last=False): if isinstance(module, nn.Conv2d): fan_out = module.kernel_size[0] * module.kernel_size[1] * module.out_channels fan_out //= module.groups module.weight.data.normal_(0, math.sqrt(2.0 / fan_out)) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Linear): nn.init.normal_(module.weight, mean=0.0, std=0.01) if module.bias is not None: nn.init.zeros_(module.bias) elif zero_init_last and hasattr(module, 'zero_init_last'): module.zero_init_last() def _filter_fn(state_dict): state_dict = state_dict.get('model', state_dict) replaces = [ ('f.a.0', 'conv1.conv'), ('f.a.1', 'conv1.bn'), ('f.b.0', 'conv2.conv'), ('f.b.1', 'conv2.bn'), ('f.final_bn', 'conv3.bn'), ('f.se.excitation.0', 'se.fc1'), ('f.se.excitation.2', 'se.fc2'), ('f.se', 'se'), ('f.c.0', 'conv3.conv'), ('f.c.1', 'conv3.bn'), ('f.c', 'conv3.conv'), ('proj.0', 'downsample.conv'), ('proj.1', 'downsample.bn'), ('proj', 'downsample.conv'), ] if 'classy_state_dict' in state_dict: # classy-vision & vissl (SEER) weights import re state_dict = state_dict['classy_state_dict']['base_model']['model'] out = {} for k, v in state_dict['trunk'].items(): k = k.replace('_feature_blocks.conv1.stem.0', 'stem.conv') k = k.replace('_feature_blocks.conv1.stem.1', 'stem.bn') k = re.sub( r'^_feature_blocks.res\d.block(\d)-(\d+)', lambda x: f's{int(x.group(1))}.b{int(x.group(2)) + 1}', k) k = re.sub(r's(\d)\.b(\d+)\.bn', r's\1.b\2.downsample.bn', k) for s, r in replaces: k = k.replace(s, r) out[k] = v for k, v in state_dict['heads'].items(): if 'projection_head' in k or 'prototypes' in k: continue k = k.replace('0.clf.0', 'head.fc') out[k] = v return out if 'stem.0.weight' in state_dict: # torchvision weights import re out = {} for k, v in state_dict.items(): k = k.replace('stem.0', 'stem.conv') k = k.replace('stem.1', 'stem.bn') k = re.sub( r'trunk_output.block(\d)\.block(\d+)\-(\d+)', lambda x: f's{int(x.group(1))}.b{int(x.group(3)) + 1}', k) for s, r in replaces: k = k.replace(s, r) k = k.replace('fc.', 'head.fc.') out[k] = v return out return state_dict # Model FLOPS = three trailing digits * 10^8 model_cfgs = dict( # RegNet-X regnetx_002=RegNetCfg(w0=24, wa=36.44, wm=2.49, group_size=8, depth=13), regnetx_004=RegNetCfg(w0=24, wa=24.48, wm=2.54, group_size=16, depth=22), regnetx_004_tv=RegNetCfg(w0=24, wa=24.48, wm=2.54, group_size=16, depth=22, group_min_ratio=0.9), regnetx_006=RegNetCfg(w0=48, wa=36.97, wm=2.24, group_size=24, depth=16), regnetx_008=RegNetCfg(w0=56, wa=35.73, wm=2.28, group_size=16, depth=16), regnetx_016=RegNetCfg(w0=80, wa=34.01, wm=2.25, group_size=24, depth=18), regnetx_032=RegNetCfg(w0=88, wa=26.31, wm=2.25, group_size=48, depth=25), regnetx_040=RegNetCfg(w0=96, wa=38.65, wm=2.43, group_size=40, depth=23), regnetx_064=RegNetCfg(w0=184, wa=60.83, wm=2.07, group_size=56, depth=17), regnetx_080=RegNetCfg(w0=80, wa=49.56, wm=2.88, group_size=120, depth=23), regnetx_120=RegNetCfg(w0=168, wa=73.36, wm=2.37, group_size=112, depth=19), regnetx_160=RegNetCfg(w0=216, wa=55.59, wm=2.1, group_size=128, depth=22), regnetx_320=RegNetCfg(w0=320, wa=69.86, wm=2.0, group_size=168, depth=23), # 
RegNet-Y regnety_002=RegNetCfg(w0=24, wa=36.44, wm=2.49, group_size=8, depth=13, se_ratio=0.25), regnety_004=RegNetCfg(w0=48, wa=27.89, wm=2.09, group_size=8, depth=16, se_ratio=0.25), regnety_006=RegNetCfg(w0=48, wa=32.54, wm=2.32, group_size=16, depth=15, se_ratio=0.25), regnety_008=RegNetCfg(w0=56, wa=38.84, wm=2.4, group_size=16, depth=14, se_ratio=0.25), regnety_008_tv=RegNetCfg(w0=56, wa=38.84, wm=2.4, group_size=16, depth=14, se_ratio=0.25, group_min_ratio=0.9), regnety_016=RegNetCfg(w0=48, wa=20.71, wm=2.65, group_size=24, depth=27, se_ratio=0.25), regnety_032=RegNetCfg(w0=80, wa=42.63, wm=2.66, group_size=24, depth=21, se_ratio=0.25), regnety_040=RegNetCfg(w0=96, wa=31.41, wm=2.24, group_size=64, depth=22, se_ratio=0.25), regnety_064=RegNetCfg(w0=112, wa=33.22, wm=2.27, group_size=72, depth=25, se_ratio=0.25), regnety_080=RegNetCfg(w0=192, wa=76.82, wm=2.19, group_size=56, depth=17, se_ratio=0.25), regnety_080_tv=RegNetCfg(w0=192, wa=76.82, wm=2.19, group_size=56, depth=17, se_ratio=0.25, group_min_ratio=0.9), regnety_120=RegNetCfg(w0=168, wa=73.36, wm=2.37, group_size=112, depth=19, se_ratio=0.25), regnety_160=RegNetCfg(w0=200, wa=106.23, wm=2.48, group_size=112, depth=18, se_ratio=0.25), regnety_320=RegNetCfg(w0=232, wa=115.89, wm=2.53, group_size=232, depth=20, se_ratio=0.25), regnety_640=RegNetCfg(w0=352, wa=147.48, wm=2.4, group_size=328, depth=20, se_ratio=0.25), regnety_1280=RegNetCfg(w0=456, wa=160.83, wm=2.52, group_size=264, depth=27, se_ratio=0.25), regnety_2560=RegNetCfg(w0=640, wa=230.83, wm=2.53, group_size=373, depth=27, se_ratio=0.25), #regnety_2560=RegNetCfg(w0=640, wa=124.47, wm=2.04, group_size=848, depth=27, se_ratio=0.25), # Experimental regnety_040_sgn=RegNetCfg( w0=96, wa=31.41, wm=2.24, group_size=64, depth=22, se_ratio=0.25, act_layer='silu', norm_layer=partial(GroupNormAct, group_size=16)), # regnetv = 'preact regnet y' regnetv_040=RegNetCfg( depth=22, w0=96, wa=31.41, wm=2.24, group_size=64, se_ratio=0.25, preact=True, act_layer='silu'), regnetv_064=RegNetCfg( depth=25, w0=112, wa=33.22, wm=2.27, group_size=72, se_ratio=0.25, preact=True, act_layer='silu', downsample='avg'), # RegNet-Z (unverified) regnetz_005=RegNetCfg( depth=21, w0=16, wa=10.7, wm=2.51, group_size=4, bottle_ratio=4.0, se_ratio=0.25, downsample=None, linear_out=True, num_features=1024, act_layer='silu', ), regnetz_040=RegNetCfg( depth=28, w0=48, wa=14.5, wm=2.226, group_size=8, bottle_ratio=4.0, se_ratio=0.25, downsample=None, linear_out=True, num_features=0, act_layer='silu', ), regnetz_040_h=RegNetCfg( depth=28, w0=48, wa=14.5, wm=2.226, group_size=8, bottle_ratio=4.0, se_ratio=0.25, downsample=None, linear_out=True, num_features=1536, act_layer='silu', ), ) def _create_regnet(variant, pretrained, **kwargs): return build_model_with_cfg( RegNet, variant, pretrained, model_cfg=model_cfgs[variant], pretrained_filter_fn=_filter_fn, **kwargs) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'test_input_size': (3, 288, 288), 'crop_pct': 0.95, 'test_crop_pct': 1.0, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'stem.conv', 'classifier': 'head.fc', **kwargs } def _cfgpyc(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.875, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'stem.conv', 'classifier': 'head.fc', 'license': 'mit', 'origin_url': 
'https://github.com/facebookresearch/pycls', **kwargs } def _cfgtv2(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.965, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'stem.conv', 'classifier': 'head.fc', 'license': 'bsd-3-clause', 'origin_url': 'https://github.com/pytorch/vision', **kwargs } default_cfgs = generate_default_cfgs({ # timm trained models 'regnety_032.ra_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/regnety_032_ra-7f2439f9.pth'), 'regnety_040.ra3_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-tpu-weights/regnety_040_ra3-670e1166.pth'), 'regnety_064.ra3_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-tpu-weights/regnety_064_ra3-aa26dc7d.pth'), 'regnety_080.ra3_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-tpu-weights/regnety_080_ra3-1fdc4344.pth'), 'regnety_120.sw_in12k_ft_in1k': _cfg(hf_hub_id='timm/'), 'regnety_160.sw_in12k_ft_in1k': _cfg(hf_hub_id='timm/'), 'regnety_160.lion_in12k_ft_in1k': _cfg(hf_hub_id='timm/'), # timm in12k pretrain 'regnety_120.sw_in12k': _cfg( hf_hub_id='timm/', num_classes=11821), 'regnety_160.sw_in12k': _cfg( hf_hub_id='timm/', num_classes=11821), # timm custom arch (v and z guess) + trained models 'regnety_040_sgn.untrained': _cfg(url=''), 'regnetv_040.ra3_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-tpu-weights/regnetv_040_ra3-c248f51f.pth', first_conv='stem'), 'regnetv_064.ra3_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-tpu-weights/regnetv_064_ra3-530616c2.pth', first_conv='stem'), 'regnetz_005.untrained': _cfg(url=''), 'regnetz_040.ra3_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-tpu-weights/regnetz_040_ra3-9007edf5.pth', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=1.0, test_input_size=(3, 320, 320)), 'regnetz_040_h.ra3_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-tpu-weights/regnetz_040h_ra3-f594343b.pth', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=1.0, test_input_size=(3, 320, 320)), # used in DeiT for distillation (from Facebook DeiT GitHub repository) 'regnety_160.deit_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/deit/regnety_160-a5fe301d.pth'), 'regnetx_004_tv.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_x_400mf-62229a5f.pth'), 'regnetx_008.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_x_800mf-94a99ebd.pth'), 'regnetx_016.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_x_1_6gf-a12f2b72.pth'), 'regnetx_032.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_x_3_2gf-7071aa85.pth'), 'regnetx_080.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_x_8gf-2b70d774.pth'), 'regnetx_160.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_x_16gf-ba3796d7.pth'), 'regnetx_320.tv2_in1k': _cfgtv2( hf_hub_id='timm/', 
url='https://download.pytorch.org/models/regnet_x_32gf-6eb8fdc6.pth'), 'regnety_004.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_400mf-e6988f5f.pth'), 'regnety_008_tv.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_800mf-58fc7688.pth'), 'regnety_016.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_1_6gf-0d7bc02a.pth'), 'regnety_032.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_3_2gf-9180c971.pth'), 'regnety_080_tv.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_8gf-dc2b1b54.pth'), 'regnety_160.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_16gf-3e4a00f9.pth'), 'regnety_320.tv2_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_32gf-8db6d4b5.pth'), 'regnety_160.swag_ft_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_16gf_swag-43afe44d.pth', license='cc-by-nc-4.0', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'regnety_320.swag_ft_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_32gf_swag-04fdfa75.pth', license='cc-by-nc-4.0', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'regnety_1280.swag_ft_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_128gf_swag-c8ce3e52.pth', license='cc-by-nc-4.0', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'regnety_160.swag_lc_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_16gf_lc_swag-f3ec0043.pth', license='cc-by-nc-4.0'), 'regnety_320.swag_lc_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_32gf_lc_swag-e1583746.pth', license='cc-by-nc-4.0'), 'regnety_1280.swag_lc_in1k': _cfgtv2( hf_hub_id='timm/', url='https://download.pytorch.org/models/regnet_y_128gf_lc_swag-cbe8ce12.pth', license='cc-by-nc-4.0'), 'regnety_320.seer_ft_in1k': _cfgtv2( hf_hub_id='timm/', license='other', origin_url='https://github.com/facebookresearch/vissl', url='https://dl.fbaipublicfiles.com/vissl/model_zoo/seer_finetuned/seer_regnet32_finetuned_in1k_model_final_checkpoint_phase78.torch', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'regnety_640.seer_ft_in1k': _cfgtv2( hf_hub_id='timm/', license='other', origin_url='https://github.com/facebookresearch/vissl', url='https://dl.fbaipublicfiles.com/vissl/model_zoo/seer_finetuned/seer_regnet64_finetuned_in1k_model_final_checkpoint_phase78.torch', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'regnety_1280.seer_ft_in1k': _cfgtv2( hf_hub_id='timm/', license='other', origin_url='https://github.com/facebookresearch/vissl', url='https://dl.fbaipublicfiles.com/vissl/model_zoo/seer_finetuned/seer_regnet128_finetuned_in1k_model_final_checkpoint_phase78.torch', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'regnety_2560.seer_ft_in1k': _cfgtv2( hf_hub_id='timm/', license='other', origin_url='https://github.com/facebookresearch/vissl', url='https://dl.fbaipublicfiles.com/vissl/model_zoo/seer_finetuned/seer_regnet256_finetuned_in1k_model_final_checkpoint_phase38.torch', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'regnety_320.seer': _cfgtv2( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/vissl/model_zoo/seer_regnet32d/seer_regnet32gf_model_iteration244000.torch', num_classes=0, 
license='other', origin_url='https://github.com/facebookresearch/vissl'), 'regnety_640.seer': _cfgtv2( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/vissl/model_zoo/seer_regnet64/seer_regnet64gf_model_final_checkpoint_phase0.torch', num_classes=0, license='other', origin_url='https://github.com/facebookresearch/vissl'), 'regnety_1280.seer': _cfgtv2( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/vissl/model_zoo/swav_ig1b_regnet128Gf_cnstant_bs32_node16_sinkhorn10_proto16k_syncBN64_warmup8k/model_final_checkpoint_phase0.torch', num_classes=0, license='other', origin_url='https://github.com/facebookresearch/vissl'), # FIXME invalid weight <-> model match, mistake on their end #'regnety_2560.seer': _cfgtv2( # url='https://dl.fbaipublicfiles.com/vissl/model_zoo/swav_ig1b_cosine_rg256gf_noBNhead_wd1e5_fairstore_bs16_node64_sinkhorn10_proto16k_apex_syncBN64_warmup8k/model_final_checkpoint_phase0.torch', # num_classes=0, license='other', origin_url='https://github.com/facebookresearch/vissl'), 'regnetx_002.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_004.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_006.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_008.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_016.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_032.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_040.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_064.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_080.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_120.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_160.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnetx_320.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_002.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_004.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_006.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_008.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_016.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_032.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_040.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_064.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_080.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_120.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_160.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), 'regnety_320.pycls_in1k': _cfgpyc(hf_hub_id='timm/'), }) @register_model def regnetx_002(pretrained=False, **kwargs) -> RegNet: """RegNetX-200MF""" return _create_regnet('regnetx_002', pretrained, **kwargs) @register_model def regnetx_004(pretrained=False, **kwargs) -> RegNet: """RegNetX-400MF""" return _create_regnet('regnetx_004', pretrained, **kwargs) @register_model def regnetx_004_tv(pretrained=False, **kwargs) -> RegNet: """RegNetX-400MF w/ torchvision group rounding""" return _create_regnet('regnetx_004_tv', pretrained, **kwargs) @register_model def regnetx_006(pretrained=False, **kwargs) -> RegNet: """RegNetX-600MF""" return _create_regnet('regnetx_006', pretrained, **kwargs) @register_model def regnetx_008(pretrained=False, **kwargs) -> RegNet: """RegNetX-800MF""" return _create_regnet('regnetx_008', pretrained, **kwargs) @register_model def regnetx_016(pretrained=False, **kwargs) -> RegNet: """RegNetX-1.6GF""" return _create_regnet('regnetx_016', pretrained, **kwargs) @register_model def regnetx_032(pretrained=False, **kwargs) -> RegNet: """RegNetX-3.2GF""" return _create_regnet('regnetx_032', pretrained, **kwargs) @register_model def regnetx_040(pretrained=False, **kwargs) -> RegNet: """RegNetX-4.0GF""" return _create_regnet('regnetx_040', pretrained, **kwargs) 
@register_model def regnetx_064(pretrained=False, **kwargs) -> RegNet: """RegNetX-6.4GF""" return _create_regnet('regnetx_064', pretrained, **kwargs) @register_model def regnetx_080(pretrained=False, **kwargs) -> RegNet: """RegNetX-8.0GF""" return _create_regnet('regnetx_080', pretrained, **kwargs) @register_model def regnetx_120(pretrained=False, **kwargs) -> RegNet: """RegNetX-12GF""" return _create_regnet('regnetx_120', pretrained, **kwargs) @register_model def regnetx_160(pretrained=False, **kwargs) -> RegNet: """RegNetX-16GF""" return _create_regnet('regnetx_160', pretrained, **kwargs) @register_model def regnetx_320(pretrained=False, **kwargs) -> RegNet: """RegNetX-32GF""" return _create_regnet('regnetx_320', pretrained, **kwargs) @register_model def regnety_002(pretrained=False, **kwargs) -> RegNet: """RegNetY-200MF""" return _create_regnet('regnety_002', pretrained, **kwargs) @register_model def regnety_004(pretrained=False, **kwargs) -> RegNet: """RegNetY-400MF""" return _create_regnet('regnety_004', pretrained, **kwargs) @register_model def regnety_006(pretrained=False, **kwargs) -> RegNet: """RegNetY-600MF""" return _create_regnet('regnety_006', pretrained, **kwargs) @register_model def regnety_008(pretrained=False, **kwargs) -> RegNet: """RegNetY-800MF""" return _create_regnet('regnety_008', pretrained, **kwargs) @register_model def regnety_008_tv(pretrained=False, **kwargs) -> RegNet: """RegNetY-800MF w/ torchvision group rounding""" return _create_regnet('regnety_008_tv', pretrained, **kwargs) @register_model def regnety_016(pretrained=False, **kwargs) -> RegNet: """RegNetY-1.6GF""" return _create_regnet('regnety_016', pretrained, **kwargs) @register_model def regnety_032(pretrained=False, **kwargs) -> RegNet: """RegNetY-3.2GF""" return _create_regnet('regnety_032', pretrained, **kwargs) @register_model def regnety_040(pretrained=False, **kwargs) -> RegNet: """RegNetY-4.0GF""" return _create_regnet('regnety_040', pretrained, **kwargs) @register_model def regnety_064(pretrained=False, **kwargs) -> RegNet: """RegNetY-6.4GF""" return _create_regnet('regnety_064', pretrained, **kwargs) @register_model def regnety_080(pretrained=False, **kwargs) -> RegNet: """RegNetY-8.0GF""" return _create_regnet('regnety_080', pretrained, **kwargs) @register_model def regnety_080_tv(pretrained=False, **kwargs) -> RegNet: """RegNetY-8.0GF w/ torchvision group rounding""" return _create_regnet('regnety_080_tv', pretrained, **kwargs) @register_model def regnety_120(pretrained=False, **kwargs) -> RegNet: """RegNetY-12GF""" return _create_regnet('regnety_120', pretrained, **kwargs) @register_model def regnety_160(pretrained=False, **kwargs) -> RegNet: """RegNetY-16GF""" return _create_regnet('regnety_160', pretrained, **kwargs) @register_model def regnety_320(pretrained=False, **kwargs) -> RegNet: """RegNetY-32GF""" return _create_regnet('regnety_320', pretrained, **kwargs) @register_model def regnety_640(pretrained=False, **kwargs) -> RegNet: """RegNetY-64GF""" return _create_regnet('regnety_640', pretrained, **kwargs) @register_model def regnety_1280(pretrained=False, **kwargs) -> RegNet: """RegNetY-128GF""" return _create_regnet('regnety_1280', pretrained, **kwargs) @register_model def regnety_2560(pretrained=False, **kwargs) -> RegNet: """RegNetY-256GF""" return _create_regnet('regnety_2560', pretrained, **kwargs) @register_model def regnety_040_sgn(pretrained=False, **kwargs) -> RegNet: """RegNetY-4.0GF w/ GroupNorm """ return _create_regnet('regnety_040_sgn', pretrained, **kwargs) 
@register_model def regnetv_040(pretrained=False, **kwargs) -> RegNet: """RegNetV-4.0GF (pre-activation)""" return _create_regnet('regnetv_040', pretrained, **kwargs) @register_model def regnetv_064(pretrained=False, **kwargs) -> RegNet: """RegNetV-6.4GF (pre-activation)""" return _create_regnet('regnetv_064', pretrained, **kwargs) @register_model def regnetz_005(pretrained=False, **kwargs) -> RegNet: """RegNetZ-500MF NOTE: config found in https://github.com/facebookresearch/ClassyVision/blob/main/classy_vision/models/regnet.py but it's not clear it is equivalent to paper model as not detailed in the paper. """ return _create_regnet('regnetz_005', pretrained, zero_init_last=False, **kwargs) @register_model def regnetz_040(pretrained=False, **kwargs) -> RegNet: """RegNetZ-4.0GF NOTE: config found in https://github.com/facebookresearch/ClassyVision/blob/main/classy_vision/models/regnet.py but it's not clear it is equivalent to paper model as not detailed in the paper. """ return _create_regnet('regnetz_040', pretrained, zero_init_last=False, **kwargs) @register_model def regnetz_040_h(pretrained=False, **kwargs) -> RegNet: """RegNetZ-4.0GF NOTE: config found in https://github.com/facebookresearch/ClassyVision/blob/main/classy_vision/models/regnet.py but it's not clear it is equivalent to paper model as not detailed in the paper. """ return _create_regnet('regnetz_040_h', pretrained, zero_init_last=False, **kwargs) register_model_deprecations(__name__, { 'regnetz_040h': 'regnetz_040_h', })
pytorch-image-models/timm/models/regnet.py/0
{ "file_path": "pytorch-image-models/timm/models/regnet.py", "repo_id": "pytorch-image-models", "token_count": 21400 }
169
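# Usage sketch for the RegNet parameterisation above (illustrative only): per-block widths
# come from (w0, wa, wm, depth, group_size) via generate_regnet, and
# adjust_widths_groups_comp rounds them for compatibility with the group widths.
# The parameters below are the regnety_032 values from model_cfgs.
import numpy as np
from timm.models.regnet import generate_regnet, adjust_widths_groups_comp

widths, num_stages, groups = generate_regnet(
    width_slope=42.63, width_initial=80, width_mult=2.66, depth=21, group_size=24)
stage_widths, stage_depths = np.unique(widths, return_counts=True)
stage_widths, stage_groups = adjust_widths_groups_comp(
    list(stage_widths), [1.0] * num_stages, groups)
print(num_stages, stage_widths, stage_groups, stage_depths.tolist())

# The same configuration is exposed to users as a registered model:
# model = timm.create_model('regnety_032', pretrained=False)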
""" Transformer in Transformer (TNT) in PyTorch A PyTorch implement of TNT as described in 'Transformer in Transformer' - https://arxiv.org/abs/2103.00112 The official mindspore code is released and available at https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/TNT """ import math import torch import torch.nn as nn from torch.utils.checkpoint import checkpoint from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import Mlp, DropPath, trunc_normal_, _assert, to_2tuple from ._builder import build_model_with_cfg from ._registry import register_model from .vision_transformer import resize_pos_embed __all__ = ['TNT'] # model_registry will add each entrypoint fn to this def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True, 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'pixel_embed.proj', 'classifier': 'head', **kwargs } default_cfgs = { 'tnt_s_patch16_224': _cfg( url='https://github.com/contrastive/pytorch-image-models/releases/download/TNT/tnt_s_patch16_224.pth.tar', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), ), 'tnt_b_patch16_224': _cfg( mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), ), } class Attention(nn.Module): """ Multi-Head Attention """ def __init__(self, dim, hidden_dim, num_heads=8, qkv_bias=False, attn_drop=0., proj_drop=0.): super().__init__() self.hidden_dim = hidden_dim self.num_heads = num_heads head_dim = hidden_dim // num_heads self.head_dim = head_dim self.scale = head_dim ** -0.5 self.qk = nn.Linear(dim, hidden_dim * 2, bias=qkv_bias) self.v = nn.Linear(dim, dim, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop, inplace=True) self.proj = nn.Linear(dim, dim) self.proj_drop = nn.Dropout(proj_drop, inplace=True) def forward(self, x): B, N, C = x.shape qk = self.qk(x).reshape(B, N, 2, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4) q, k = qk.unbind(0) # make torchscript happy (cannot use tensor as tuple) v = self.v(x).reshape(B, N, self.num_heads, -1).permute(0, 2, 1, 3) attn = (q @ k.transpose(-2, -1)) * self.scale attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = (attn @ v).transpose(1, 2).reshape(B, N, -1) x = self.proj(x) x = self.proj_drop(x) return x class Block(nn.Module): """ TNT Block """ def __init__( self, dim, dim_out, num_pixel, num_heads_in=4, num_heads_out=12, mlp_ratio=4., qkv_bias=False, proj_drop=0., attn_drop=0., drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, ): super().__init__() # Inner transformer self.norm_in = norm_layer(dim) self.attn_in = Attention( dim, dim, num_heads=num_heads_in, qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=proj_drop, ) self.norm_mlp_in = norm_layer(dim) self.mlp_in = Mlp( in_features=dim, hidden_features=int(dim * 4), out_features=dim, act_layer=act_layer, drop=proj_drop, ) self.norm1_proj = norm_layer(dim) self.proj = nn.Linear(dim * num_pixel, dim_out, bias=True) # Outer transformer self.norm_out = norm_layer(dim_out) self.attn_out = Attention( dim_out, dim_out, num_heads=num_heads_out, qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=proj_drop, ) self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() self.norm_mlp = norm_layer(dim_out) self.mlp = Mlp( in_features=dim_out, hidden_features=int(dim_out * mlp_ratio), out_features=dim_out, act_layer=act_layer, drop=proj_drop, ) def forward(self, pixel_embed, patch_embed): # inner pixel_embed = pixel_embed + self.drop_path(self.attn_in(self.norm_in(pixel_embed))) pixel_embed = pixel_embed + self.drop_path(self.mlp_in(self.norm_mlp_in(pixel_embed))) # outer B, N, C = patch_embed.size() patch_embed = torch.cat( [patch_embed[:, 0:1], patch_embed[:, 1:] + self.proj(self.norm1_proj(pixel_embed).reshape(B, N - 1, -1))], dim=1) patch_embed = patch_embed + self.drop_path(self.attn_out(self.norm_out(patch_embed))) patch_embed = patch_embed + self.drop_path(self.mlp(self.norm_mlp(patch_embed))) return pixel_embed, patch_embed class PixelEmbed(nn.Module): """ Image to Pixel Embedding """ def __init__(self, img_size=224, patch_size=16, in_chans=3, in_dim=48, stride=4): super().__init__() img_size = to_2tuple(img_size) patch_size = to_2tuple(patch_size) # grid_size property necessary for resizing positional embedding self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1]) num_patches = (self.grid_size[0]) * (self.grid_size[1]) self.img_size = img_size self.num_patches = num_patches self.in_dim = in_dim new_patch_size = [math.ceil(ps / stride) for ps in patch_size] self.new_patch_size = new_patch_size self.proj = nn.Conv2d(in_chans, self.in_dim, kernel_size=7, padding=3, stride=stride) self.unfold = nn.Unfold(kernel_size=new_patch_size, stride=new_patch_size) def forward(self, x, pixel_pos): B, C, H, W = x.shape _assert(H == self.img_size[0], f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]}).") _assert(W == self.img_size[1], f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]}).") x = self.proj(x) x = self.unfold(x) x = x.transpose(1, 2).reshape(B * self.num_patches, self.in_dim, self.new_patch_size[0], self.new_patch_size[1]) x = x + pixel_pos x = x.reshape(B * self.num_patches, self.in_dim, -1).transpose(1, 2) return x class TNT(nn.Module): """ Transformer in Transformer - https://arxiv.org/abs/2103.00112 """ def __init__( self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, global_pool='token', embed_dim=768, inner_dim=48, depth=12, num_heads_inner=4, num_heads_outer=12, mlp_ratio=4., qkv_bias=False, drop_rate=0., pos_drop_rate=0., proj_drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=nn.LayerNorm, first_stride=4, ): super().__init__() assert global_pool in ('', 'token', 'avg') self.num_classes = num_classes self.global_pool = global_pool self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models self.grad_checkpointing = False self.pixel_embed = PixelEmbed( img_size=img_size, patch_size=patch_size, in_chans=in_chans, in_dim=inner_dim, stride=first_stride, ) num_patches = self.pixel_embed.num_patches self.num_patches = num_patches new_patch_size = self.pixel_embed.new_patch_size num_pixel = new_patch_size[0] * new_patch_size[1] self.norm1_proj = norm_layer(num_pixel * inner_dim) self.proj = nn.Linear(num_pixel * inner_dim, embed_dim) self.norm2_proj = norm_layer(embed_dim) self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) self.patch_pos = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim)) self.pixel_pos = nn.Parameter(torch.zeros(1, inner_dim, new_patch_size[0], new_patch_size[1])) self.pos_drop = nn.Dropout(p=pos_drop_rate) dpr = [x.item() for x in 
torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule blocks = [] for i in range(depth): blocks.append(Block( dim=inner_dim, dim_out=embed_dim, num_pixel=num_pixel, num_heads_in=num_heads_inner, num_heads_out=num_heads_outer, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, proj_drop=proj_drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, )) self.blocks = nn.ModuleList(blocks) self.norm = norm_layer(embed_dim) self.head_drop = nn.Dropout(drop_rate) self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity() trunc_normal_(self.cls_token, std=.02) trunc_normal_(self.patch_pos, std=.02) trunc_normal_(self.pixel_pos, std=.02) self.apply(self._init_weights) def _init_weights(self, m): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if isinstance(m, nn.Linear) and m.bias is not None: nn.init.constant_(m.bias, 0) elif isinstance(m, nn.LayerNorm): nn.init.constant_(m.bias, 0) nn.init.constant_(m.weight, 1.0) @torch.jit.ignore def no_weight_decay(self): return {'patch_pos', 'pixel_pos', 'cls_token'} @torch.jit.ignore def group_matcher(self, coarse=False): matcher = dict( stem=r'^cls_token|patch_pos|pixel_pos|pixel_embed|norm[12]_proj|proj', # stem and embed / pos blocks=[ (r'^blocks\.(\d+)', None), (r'^norm', (99999,)), ] ) return matcher @torch.jit.ignore def set_grad_checkpointing(self, enable=True): self.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self): return self.head def reset_classifier(self, num_classes, global_pool=None): self.num_classes = num_classes if global_pool is not None: assert global_pool in ('', 'token', 'avg') self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() def forward_features(self, x): B = x.shape[0] pixel_embed = self.pixel_embed(x, self.pixel_pos) patch_embed = self.norm2_proj(self.proj(self.norm1_proj(pixel_embed.reshape(B, self.num_patches, -1)))) patch_embed = torch.cat((self.cls_token.expand(B, -1, -1), patch_embed), dim=1) patch_embed = patch_embed + self.patch_pos patch_embed = self.pos_drop(patch_embed) if self.grad_checkpointing and not torch.jit.is_scripting(): for blk in self.blocks: pixel_embed, patch_embed = checkpoint(blk, pixel_embed, patch_embed) else: for blk in self.blocks: pixel_embed, patch_embed = blk(pixel_embed, patch_embed) patch_embed = self.norm(patch_embed) return patch_embed def forward_head(self, x, pre_logits: bool = False): if self.global_pool: x = x[:, 1:].mean(dim=1) if self.global_pool == 'avg' else x[:, 0] x = self.head_drop(x) return x if pre_logits else self.head(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def checkpoint_filter_fn(state_dict, model): """ convert patch embedding weight from manual patchify + linear proj to conv""" if state_dict['patch_pos'].shape != model.patch_pos.shape: state_dict['patch_pos'] = resize_pos_embed(state_dict['patch_pos'], model.patch_pos, getattr(model, 'num_tokens', 1), model.pixel_embed.grid_size) return state_dict def _create_tnt(variant, pretrained=False, **kwargs): if kwargs.get('features_only', None): raise RuntimeError('features_only not implemented for Vision Transformer models.') model = build_model_with_cfg( TNT, variant, pretrained, pretrained_filter_fn=checkpoint_filter_fn, **kwargs) return model @register_model def tnt_s_patch16_224(pretrained=False, **kwargs) -> TNT: model_cfg = dict( patch_size=16, embed_dim=384, inner_dim=24, depth=12, num_heads_outer=6, qkv_bias=False) model = _create_tnt('tnt_s_patch16_224', 
pretrained=pretrained, **dict(model_cfg, **kwargs)) return model @register_model def tnt_b_patch16_224(pretrained=False, **kwargs) -> TNT: model_cfg = dict( patch_size=16, embed_dim=640, inner_dim=40, depth=12, num_heads_outer=10, qkv_bias=False) model = _create_tnt('tnt_b_patch16_224', pretrained=pretrained, **dict(model_cfg, **kwargs)) return model
pytorch-image-models/timm/models/tnt.py/0
{ "file_path": "pytorch-image-models/timm/models/tnt.py", "repo_id": "pytorch-image-models", "token_count": 6715 }
170
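For the TNT implementation above, a short smoke-test sketch may help show the expected input/output shapes. It assumes `timm` is installed; weights are left random since only `tnt_s_patch16_224` has a pretrained URL configured, and the input size is fixed at 224x224 (`fixed_input_size=True` in the default config).

```python
# Smoke test for Transformer-in-Transformer: fixed 224x224 inputs,
# standard 1000-class head by default. Random weights, no downloads.
import torch
import timm

model = timm.create_model("tnt_s_patch16_224", pretrained=False)
model.eval()

x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    out = model(x)

print(out.shape)  # torch.Size([2, 1000])
```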
""" Adafactor Optimizer Lifted from https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py Original header/copyright below. """ # Copyright (c) Facebook, Inc. and its affiliates. # # This source code is licensed under the MIT license found in the # LICENSE file in the root directory of this source tree. import torch import math class Adafactor(torch.optim.Optimizer): """Implements Adafactor algorithm. This implementation is based on: `Adafactor: Adaptive Learning Rates with Sublinear Memory Cost` (see https://arxiv.org/abs/1804.04235) Note that this optimizer internally adjusts the learning rate depending on the *scale_parameter*, *relative_step* and *warmup_init* options. To use a manual (external) learning rate schedule you should set `scale_parameter=False` and `relative_step=False`. Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optional): external learning rate (default: None) eps (tuple[float, float]): regularization constants for square gradient and parameter scale respectively (default: (1e-30, 1e-3)) clip_threshold (float): threshold of root mean square of final gradient update (default: 1.0) decay_rate (float): coefficient used to compute running averages of square gradient (default: -0.8) beta1 (float): coefficient used for computing running averages of gradient (default: None) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) scale_parameter (bool): if True, learning rate is scaled by root mean square of parameter (default: True) warmup_init (bool): time-dependent learning rate computation depends on whether warm-up initialization is being used (default: False) """ def __init__(self, params, lr=None, eps=1e-30, eps_scale=1e-3, clip_threshold=1.0, decay_rate=-0.8, betas=None, weight_decay=0.0, scale_parameter=True, warmup_init=False): relative_step = not lr if warmup_init and not relative_step: raise ValueError('warmup_init requires relative_step=True') beta1 = None if betas is None else betas[0] # make it compat with standard betas arg defaults = dict(lr=lr, eps=eps, eps_scale=eps_scale, clip_threshold=clip_threshold, decay_rate=decay_rate, beta1=beta1, weight_decay=weight_decay, scale_parameter=scale_parameter, relative_step=relative_step, warmup_init=warmup_init) super(Adafactor, self).__init__(params, defaults) @staticmethod def _get_lr(param_group, param_state): if param_group['relative_step']: min_step = 1e-6 * param_state['step'] if param_group['warmup_init'] else 1e-2 lr_t = min(min_step, 1.0 / math.sqrt(param_state['step'])) param_scale = 1.0 if param_group['scale_parameter']: param_scale = max(param_group['eps_scale'], param_state['RMS']) param_group['lr'] = lr_t * param_scale return param_group['lr'] @staticmethod def _get_options(param_group, param_shape): factored = len(param_shape) >= 2 use_first_moment = param_group['beta1'] is not None return factored, use_first_moment @staticmethod def _rms(tensor): return tensor.norm(2) / (tensor.numel() ** 0.5) def _approx_sq_grad(self, exp_avg_sq_row, exp_avg_sq_col): r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1) c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt() return torch.mul(r_factor, c_factor) @torch.no_grad() def step(self, closure=None): """Performs a single optimization step. Arguments: closure (callable, optional): A closure that reevaluates the model and returns the loss. 
""" loss = None if closure is not None: with torch.enable_grad(): loss = closure() for group in self.param_groups: for p in group['params']: if p.grad is None: continue grad = p.grad if grad.dtype in {torch.float16, torch.bfloat16}: grad = grad.float() if grad.is_sparse: raise RuntimeError('Adafactor does not support sparse gradients.') state = self.state[p] factored, use_first_moment = self._get_options(group, grad.shape) # State Initialization if len(state) == 0: state['step'] = 0 if use_first_moment: # Exponential moving average of gradient values state['exp_avg'] = torch.zeros_like(grad) if factored: state['exp_avg_sq_row'] = torch.zeros(grad.shape[:-1]).to(grad) state['exp_avg_sq_col'] = torch.zeros(grad.shape[:-2] + grad.shape[-1:]).to(grad) else: state['exp_avg_sq'] = torch.zeros_like(grad) state['RMS'] = 0 else: if use_first_moment: state['exp_avg'] = state['exp_avg'].to(grad) if factored: state['exp_avg_sq_row'] = state['exp_avg_sq_row'].to(grad) state['exp_avg_sq_col'] = state['exp_avg_sq_col'].to(grad) else: state['exp_avg_sq'] = state['exp_avg_sq'].to(grad) p_fp32 = p if p.dtype in {torch.float16, torch.bfloat16}: p_fp32 = p_fp32.float() state['step'] += 1 state['RMS'] = self._rms(p_fp32) lr_t = self._get_lr(group, state) beta2t = 1.0 - math.pow(state['step'], group['decay_rate']) update = grad ** 2 + group['eps'] if factored: exp_avg_sq_row = state['exp_avg_sq_row'] exp_avg_sq_col = state['exp_avg_sq_col'] exp_avg_sq_row.mul_(beta2t).add_(update.mean(dim=-1), alpha=1.0 - beta2t) exp_avg_sq_col.mul_(beta2t).add_(update.mean(dim=-2), alpha=1.0 - beta2t) # Approximation of exponential moving average of square of gradient update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col) update.mul_(grad) else: exp_avg_sq = state['exp_avg_sq'] exp_avg_sq.mul_(beta2t).add_(update, alpha=1.0 - beta2t) update = exp_avg_sq.rsqrt().mul_(grad) update.div_((self._rms(update) / group['clip_threshold']).clamp_(min=1.0)) update.mul_(lr_t) if use_first_moment: exp_avg = state['exp_avg'] exp_avg.mul_(group['beta1']).add_(update, alpha=1 - group['beta1']) update = exp_avg if group['weight_decay'] != 0: p_fp32.add_(p_fp32, alpha=-group['weight_decay'] * lr_t) p_fp32.add_(-update) if p.dtype in {torch.float16, torch.bfloat16}: p.copy_(p_fp32) return loss
pytorch-image-models/timm/optim/adafactor.py/0
{ "file_path": "pytorch-image-models/timm/optim/adafactor.py", "repo_id": "pytorch-image-models", "token_count": 3656 }
171
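The docstring above stresses that Adafactor either derives its learning rate internally (relative step) or uses an external one, depending on whether `lr` is passed. A hedged usage sketch of both modes, assuming the class is importable as `timm.optim.Adafactor` and using a toy model purely for illustration:

```python
# Two ways to drive Adafactor's learning rate, per the docstring above.
import torch
import torch.nn as nn
from timm.optim import Adafactor  # assumed export path

model = nn.Linear(16, 4)

# 1) External schedule: passing `lr` disables relative_step.
opt_external = Adafactor(model.parameters(), lr=1e-3)

# 2) Internal relative-step schedule: omit `lr`; warmup_init is only valid here.
opt_relative = Adafactor(model.parameters(), warmup_init=True)

x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt_relative.step()
opt_relative.zero_grad()
```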
""" SGDP Optimizer Implementation copied from https://github.com/clovaai/AdamP/blob/master/adamp/sgdp.py Paper: `Slowing Down the Weight Norm Increase in Momentum-based Optimizers` - https://arxiv.org/abs/2006.08217 Code: https://github.com/clovaai/AdamP Copyright (c) 2020-present NAVER Corp. MIT license """ import torch import torch.nn.functional as F from torch.optim.optimizer import Optimizer, required import math from .adamp import projection class SGDP(Optimizer): def __init__(self, params, lr=required, momentum=0, dampening=0, weight_decay=0, nesterov=False, eps=1e-8, delta=0.1, wd_ratio=0.1): defaults = dict( lr=lr, momentum=momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov, eps=eps, delta=delta, wd_ratio=wd_ratio) super(SGDP, self).__init__(params, defaults) @torch.no_grad() def step(self, closure=None): loss = None if closure is not None: with torch.enable_grad(): loss = closure() for group in self.param_groups: weight_decay = group['weight_decay'] momentum = group['momentum'] dampening = group['dampening'] nesterov = group['nesterov'] for p in group['params']: if p.grad is None: continue grad = p.grad state = self.state[p] # State initialization if len(state) == 0: state['momentum'] = torch.zeros_like(p) # SGD buf = state['momentum'] buf.mul_(momentum).add_(grad, alpha=1. - dampening) if nesterov: d_p = grad + momentum * buf else: d_p = buf # Projection wd_ratio = 1. if len(p.shape) > 1: d_p, wd_ratio = projection(p, grad, d_p, group['delta'], group['wd_ratio'], group['eps']) # Weight decay if weight_decay != 0: p.mul_(1. - group['lr'] * group['weight_decay'] * wd_ratio / (1-momentum)) # Step p.add_(d_p, alpha=-group['lr']) return loss
pytorch-image-models/timm/optim/sgdp.py/0
{ "file_path": "pytorch-image-models/timm/optim/sgdp.py", "repo_id": "pytorch-image-models", "token_count": 1186 }
172
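SGDP is meant as a drop-in SGD variant that projects the update to slow weight-norm growth. A brief usage sketch, with illustrative (not tuned) hyper-parameters and assuming the `timm.optim.SGDP` export:

```python
# SGDP used like plain SGD; the projection / wd_ratio logic is internal.
import torch
import torch.nn as nn
from timm.optim import SGDP  # assumed export path

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = SGDP(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4, nesterov=True)

x, y = torch.randn(4, 32), torch.randint(0, 10, (4,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```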
""" Batch size decay and retry helpers. Copyright 2022 Ross Wightman """ import math def decay_batch_step(batch_size, num_intra_steps=2, no_odd=False): """ power of two batch-size decay with intra steps Decay by stepping between powers of 2: * determine power-of-2 floor of current batch size (base batch size) * divide above value by num_intra_steps to determine step size * floor batch_size to nearest multiple of step_size (from base batch size) Examples: num_steps == 4 --> 64, 56, 48, 40, 32, 28, 24, 20, 16, 14, 12, 10, 8, 7, 6, 5, 4, 3, 2, 1 num_steps (no_odd=True) == 4 --> 64, 56, 48, 40, 32, 28, 24, 20, 16, 14, 12, 10, 8, 6, 4, 2 num_steps == 2 --> 64, 48, 32, 24, 16, 12, 8, 6, 4, 3, 2, 1 num_steps == 1 --> 64, 32, 16, 8, 4, 2, 1 """ if batch_size <= 1: # return 0 for stopping value so easy to use in loop return 0 base_batch_size = int(2 ** (math.log(batch_size - 1) // math.log(2))) step_size = max(base_batch_size // num_intra_steps, 1) batch_size = base_batch_size + ((batch_size - base_batch_size - 1) // step_size) * step_size if no_odd and batch_size % 2: batch_size -= 1 return batch_size def check_batch_size_retry(error_str): """ check failure error string for conditions where batch decay retry should not be attempted """ error_str = error_str.lower() if 'required rank' in error_str: # Errors involving phrase 'required rank' typically happen when a conv is used that's # not compatible with channels_last memory format. return False if 'illegal' in error_str: # 'Illegal memory access' errors in CUDA typically leave process in unusable state return False return True
pytorch-image-models/timm/utils/decay_batch.py/0
{ "file_path": "pytorch-image-models/timm/utils/decay_batch.py", "repo_id": "pytorch-image-models", "token_count": 656 }
173
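The two helpers above are meant to be composed into a try/decay/retry loop around an OOM-prone forward pass. A sketch of that loop, where `run_eval` is a placeholder callable supplied by the caller; only the decay and retry-check logic comes from the module itself:

```python
# Retry loop sketch around decay_batch_step / check_batch_size_retry.
from timm.utils.decay_batch import decay_batch_step, check_batch_size_retry


def run_with_batch_retry(run_eval, batch_size=256, num_intra_steps=2):
    while batch_size:  # decay_batch_step returns 0 when exhausted
        try:
            return run_eval(batch_size)
        except RuntimeError as e:
            if not check_batch_size_retry(str(e)):
                raise  # e.g. channels_last or illegal-access errors: don't retry
            batch_size = decay_batch_step(batch_size, num_intra_steps=num_intra_steps)
            print(f"retrying with batch_size={batch_size}")
    raise RuntimeError("no workable batch size found")
```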
import pytest from text_generation import __version__ from huggingface_hub.utils import build_hf_headers @pytest.fixture def flan_t5_xxl(): return "google/flan-t5-xxl" @pytest.fixture def fake_model(): return "fake/model" @pytest.fixture def unsupported_model(): return "gpt2" @pytest.fixture def base_url(): return "https://api-inference.huggingface.co/models" @pytest.fixture def bloom_url(base_url, bloom_model): return f"{base_url}/{bloom_model}" @pytest.fixture def flan_t5_xxl_url(base_url, flan_t5_xxl): return f"{base_url}/{flan_t5_xxl}" @pytest.fixture def fake_url(base_url, fake_model): return f"{base_url}/{fake_model}" @pytest.fixture def unsupported_url(base_url, unsupported_model): return f"{base_url}/{unsupported_model}" @pytest.fixture(scope="session") def hf_headers(): return build_hf_headers( library_name="text-generation-tests", library_version=__version__ )
text-generation-inference/clients/python/tests/conftest.py/0
{ "file_path": "text-generation-inference/clients/python/tests/conftest.py", "repo_id": "text-generation-inference", "token_count": 390 }
174
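These fixtures only provide model ids, endpoint URLs, and auth headers; a hedged sketch of a test that might consume them (the `Client(base_url, headers=...)` signature is assumed from the `text_generation` Python client and may differ across versions):

```python
# Hypothetical test using the fixtures above; network access and a valid
# Hugging Face token are required for it to actually pass.
from text_generation import Client


def test_generate_flan_t5_xxl(flan_t5_xxl_url, hf_headers):
    client = Client(flan_t5_xxl_url, headers=hf_headers, timeout=60)
    response = client.generate("Why is the sky blue?", max_new_tokens=10)
    assert isinstance(response.generated_text, str)
```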
# Non-core Model Serving

TGI supports various LLM architectures (see the full list [here](../supported_models)). If you wish to serve a model that is not one of the supported models, TGI will fall back to the `transformers` implementation of that model. This means you will be unable to use some of the features introduced by TGI, such as tensor-parallel sharding or flash attention. However, you still get many benefits of TGI, such as continuous batching and streaming outputs.

You can serve these models using the same Docker command-line invocation as with fully supported models 👇

```bash
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id gpt2
```

If the model you wish to serve is a custom transformers model, and its weights and implementation are available on the Hub, you can still serve the model by passing the `--trust-remote-code` flag to the `docker run` command like below 👇

```bash
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id <CUSTOM_MODEL_ID> --trust-remote-code
```

Finally, if the model is not on the Hugging Face Hub but stored locally, you can pass the path to the folder that contains your model like below 👇

```bash
# Make sure your model is in the $volume directory
docker run --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id /data/<PATH-TO-FOLDER>
```

You can refer to the [transformers docs on custom models](https://huggingface.co/docs/transformers/main/en/custom_models) for more information.
text-generation-inference/docs/source/basic_tutorials/non_core_models.md/0
{ "file_path": "text-generation-inference/docs/source/basic_tutorials/non_core_models.md", "repo_id": "text-generation-inference", "token_count": 472 }
175
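Once one of the containers above is running, requests go through TGI's standard HTTP API regardless of whether the model is core or non-core. A minimal request sketch against the `/generate` route; host and port follow the `-p 8080:80` mapping used in the commands above, so adjust them to your setup:

```python
# Query a running TGI container; payload shape follows the standard
# /generate schema (inputs + parameters).
import requests

resp = requests.post(
    "http://127.0.0.1:8080/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```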
import sys import subprocess import contextlib import pytest import asyncio import os import docker import json import math import time import random from docker.errors import NotFound from typing import Optional, List, Dict from syrupy.extensions.json import JSONSnapshotExtension from aiohttp import ClientConnectorError, ClientOSError, ServerDisconnectedError from text_generation import AsyncClient from text_generation.types import ( Response, Details, InputToken, Token, BestOfSequence, Grammar, ChatComplete, ChatCompletionChunk, ) DOCKER_IMAGE = os.getenv("DOCKER_IMAGE", None) HUGGING_FACE_HUB_TOKEN = os.getenv("HUGGING_FACE_HUB_TOKEN", None) DOCKER_VOLUME = os.getenv("DOCKER_VOLUME", "/data") class ResponseComparator(JSONSnapshotExtension): rtol = 0.2 def serialize( self, data, *, exclude=None, matcher=None, ): if isinstance(data, Response): data = data.dict() if isinstance(data, List): data = [d.dict() for d in data] data = self._filter( data=data, depth=0, path=(), exclude=exclude, matcher=matcher ) return json.dumps(data, indent=2, ensure_ascii=False, sort_keys=False) + "\n" def matches( self, *, serialized_data, snapshot_data, ) -> bool: def convert_data(data): data = json.loads(data) if isinstance(data, Dict) and "choices" in data: choices = data["choices"] if ( isinstance(choices, List) and len(choices) >= 1 and "delta" in choices[0] ): return ChatCompletionChunk(**data) return ChatComplete(**data) if isinstance(data, Dict): return Response(**data) if isinstance(data, List): return [Response(**d) for d in data] raise NotImplementedError def eq_token(token: Token, other: Token) -> bool: return ( token.id == other.id and token.text == other.text and math.isclose(token.logprob, other.logprob, rel_tol=self.rtol) and token.special == other.special ) def eq_prefill_token(prefill_token: InputToken, other: InputToken) -> bool: try: return ( prefill_token.id == other.id and prefill_token.text == other.text and ( math.isclose( prefill_token.logprob, other.logprob, rel_tol=self.rtol ) if prefill_token.logprob is not None else prefill_token.logprob == other.logprob ) ) except TypeError: return False def eq_best_of(details: BestOfSequence, other: BestOfSequence) -> bool: return ( details.finish_reason == other.finish_reason and details.generated_tokens == other.generated_tokens and details.seed == other.seed and len(details.prefill) == len(other.prefill) and all( [ eq_prefill_token(d, o) for d, o in zip(details.prefill, other.prefill) ] ) and len(details.tokens) == len(other.tokens) and all([eq_token(d, o) for d, o in zip(details.tokens, other.tokens)]) ) def eq_details(details: Details, other: Details) -> bool: return ( details.finish_reason == other.finish_reason and details.generated_tokens == other.generated_tokens and details.seed == other.seed and len(details.prefill) == len(other.prefill) and all( [ eq_prefill_token(d, o) for d, o in zip(details.prefill, other.prefill) ] ) and len(details.tokens) == len(other.tokens) and all([eq_token(d, o) for d, o in zip(details.tokens, other.tokens)]) and ( len(details.best_of_sequences) if details.best_of_sequences is not None else 0 ) == ( len(other.best_of_sequences) if other.best_of_sequences is not None else 0 ) and ( all( [ eq_best_of(d, o) for d, o in zip( details.best_of_sequences, other.best_of_sequences ) ] ) if details.best_of_sequences is not None else details.best_of_sequences == other.best_of_sequences ) ) def eq_chat_complete(response: ChatComplete, other: ChatComplete) -> bool: return ( response.choices[0].message.content == 
other.choices[0].message.content ) def eq_chat_complete_chunk( response: ChatCompletionChunk, other: ChatCompletionChunk ) -> bool: return response.choices[0].delta.content == other.choices[0].delta.content def eq_response(response: Response, other: Response) -> bool: return response.generated_text == other.generated_text and eq_details( response.details, other.details ) serialized_data = convert_data(serialized_data) snapshot_data = convert_data(snapshot_data) if not isinstance(serialized_data, List): serialized_data = [serialized_data] if not isinstance(snapshot_data, List): snapshot_data = [snapshot_data] if isinstance(serialized_data[0], ChatComplete): return len(snapshot_data) == len(serialized_data) and all( [eq_chat_complete(r, o) for r, o in zip(serialized_data, snapshot_data)] ) if isinstance(serialized_data[0], ChatCompletionChunk): return len(snapshot_data) == len(serialized_data) and all( [ eq_chat_complete_chunk(r, o) for r, o in zip(serialized_data, snapshot_data) ] ) return len(snapshot_data) == len(serialized_data) and all( [eq_response(r, o) for r, o in zip(serialized_data, snapshot_data)] ) class GenerousResponseComparator(ResponseComparator): # Needed for GPTQ with exllama which has serious numerical fluctuations. rtol = 0.75 class LauncherHandle: def __init__(self, port: int): self.client = AsyncClient(f"http://localhost:{port}") def _inner_health(self): raise NotImplementedError async def health(self, timeout: int = 60): assert timeout > 0 for _ in range(timeout): if not self._inner_health(): raise RuntimeError("Launcher crashed") try: await self.client.generate("test") return except (ClientConnectorError, ClientOSError, ServerDisconnectedError) as e: time.sleep(1) raise RuntimeError("Health check failed") class ContainerLauncherHandle(LauncherHandle): def __init__(self, docker_client, container_name, port: int): super(ContainerLauncherHandle, self).__init__(port) self.docker_client = docker_client self.container_name = container_name def _inner_health(self) -> bool: container = self.docker_client.containers.get(self.container_name) return container.status in ["running", "created"] class ProcessLauncherHandle(LauncherHandle): def __init__(self, process, port: int): super(ProcessLauncherHandle, self).__init__(port) self.process = process def _inner_health(self) -> bool: return self.process.poll() is None @pytest.fixture def response_snapshot(snapshot): return snapshot.use_extension(ResponseComparator) @pytest.fixture def generous_response_snapshot(snapshot): return snapshot.use_extension(GenerousResponseComparator) @pytest.fixture(scope="module") def event_loop(): loop = asyncio.get_event_loop() yield loop loop.close() @pytest.fixture(scope="module") def launcher(event_loop): @contextlib.contextmanager def local_launcher( model_id: str, num_shard: Optional[int] = None, quantize: Optional[str] = None, trust_remote_code: bool = False, use_flash_attention: bool = True, disable_grammar_support: bool = False, dtype: Optional[str] = None, revision: Optional[str] = None, ): port = random.randint(8000, 10_000) master_port = random.randint(10_000, 20_000) shard_uds_path = ( f"/tmp/tgi-tests-{model_id.split('/')[-1]}-{num_shard}-{quantize}-server" ) args = [ "text-generation-launcher", "--model-id", model_id, "--port", str(port), "--master-port", str(master_port), "--shard-uds-path", shard_uds_path, ] env = os.environ if disable_grammar_support: args.append("--disable-grammar-support") if num_shard is not None: args.extend(["--num-shard", str(num_shard)]) if quantize is not None: 
args.append("--quantize") args.append(quantize) if dtype is not None: args.append("--dtype") args.append(dtype) if revision is not None: args.append("--revision") args.append(revision) if trust_remote_code: args.append("--trust-remote-code") env["LOG_LEVEL"] = "info,text_generation_router=debug" if not use_flash_attention: env["USE_FLASH_ATTENTION"] = "false" with subprocess.Popen( args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env ) as process: yield ProcessLauncherHandle(process, port) process.terminate() process.wait(60) launcher_output = process.stdout.read().decode("utf-8") print(launcher_output, file=sys.stderr) process.stdout.close() process.stderr.close() if not use_flash_attention: del env["USE_FLASH_ATTENTION"] @contextlib.contextmanager def docker_launcher( model_id: str, num_shard: Optional[int] = None, quantize: Optional[str] = None, trust_remote_code: bool = False, use_flash_attention: bool = True, disable_grammar_support: bool = False, dtype: Optional[str] = None, revision: Optional[str] = None, ): port = random.randint(8000, 10_000) args = ["--model-id", model_id, "--env"] if disable_grammar_support: args.append("--disable-grammar-support") if num_shard is not None: args.extend(["--num-shard", str(num_shard)]) if quantize is not None: args.append("--quantize") args.append(quantize) if dtype is not None: args.append("--dtype") args.append(dtype) if revision is not None: args.append("--revision") args.append(revision) if trust_remote_code: args.append("--trust-remote-code") client = docker.from_env() container_name = f"tgi-tests-{model_id.split('/')[-1]}-{num_shard}-{quantize}" try: container = client.containers.get(container_name) container.stop() container.wait() except NotFound: pass gpu_count = num_shard if num_shard is not None else 1 env = { "LOG_LEVEL": "info,text_generation_router=debug", "ENABLE_CUDA_GRAPHS": "true", } if not use_flash_attention: env["USE_FLASH_ATTENTION"] = "false" if HUGGING_FACE_HUB_TOKEN is not None: env["HUGGING_FACE_HUB_TOKEN"] = HUGGING_FACE_HUB_TOKEN volumes = [] if DOCKER_VOLUME: volumes = [f"{DOCKER_VOLUME}:/data"] container = client.containers.run( DOCKER_IMAGE, command=args, name=container_name, environment=env, auto_remove=False, detach=True, device_requests=[ docker.types.DeviceRequest(count=gpu_count, capabilities=[["gpu"]]) ], volumes=volumes, ports={"80/tcp": port}, shm_size="1G", ) yield ContainerLauncherHandle(client, container.name, port) if not use_flash_attention: del env["USE_FLASH_ATTENTION"] try: container.stop() container.wait() except NotFound: pass container_output = container.logs().decode("utf-8") print(container_output, file=sys.stderr) container.remove() if DOCKER_IMAGE is not None: return docker_launcher return local_launcher @pytest.fixture(scope="module") def generate_load(): async def generate_load_inner( client: AsyncClient, prompt: str, max_new_tokens: int, n: int, seed: Optional[int] = None, grammar: Optional[Grammar] = None, stop_sequences: Optional[List[str]] = None, ) -> List[Response]: futures = [ client.generate( prompt, max_new_tokens=max_new_tokens, decoder_input_details=True, seed=seed, grammar=grammar, stop_sequences=stop_sequences, ) for _ in range(n) ] return await asyncio.gather(*futures) return generate_load_inner
text-generation-inference/integration-tests/conftest.py/0
{ "file_path": "text-generation-inference/integration-tests/conftest.py", "repo_id": "text-generation-inference", "token_count": 7278 }
176
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 2, "logprob": null, "text": "<bos>" }, { "id": 2015, "logprob": -10.0, "text": "Test" }, { "id": 3853, "logprob": -10.875, "text": " request" } ], "seed": null, "tokens": [ { "id": 1736, "logprob": -2.09375, "special": false, "text": " form" }, { "id": 109, "logprob": -1.9140625, "special": false, "text": "\n\n" }, { "id": 651, "logprob": -2.453125, "special": false, "text": "The" }, { "id": 2121, "logprob": -1.8984375, "special": false, "text": " test" }, { "id": 3853, "logprob": -0.23535156, "special": false, "text": " request" }, { "id": 1736, "logprob": -0.091308594, "special": false, "text": " form" }, { "id": 603, "logprob": -0.96875, "special": false, "text": " is" }, { "id": 1671, "logprob": -1.6484375, "special": false, "text": " used" }, { "id": 577, "logprob": -0.43164062, "special": false, "text": " to" }, { "id": 3853, "logprob": -1.2421875, "special": false, "text": " request" } ], "top_tokens": null }, "generated_text": " form\n\nThe test request form is used to request" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 2, "logprob": null, "text": "<bos>" }, { "id": 2015, "logprob": -10.0, "text": "Test" }, { "id": 3853, "logprob": -10.875, "text": " request" } ], "seed": null, "tokens": [ { "id": 1736, "logprob": -2.09375, "special": false, "text": " form" }, { "id": 109, "logprob": -1.9140625, "special": false, "text": "\n\n" }, { "id": 651, "logprob": -2.453125, "special": false, "text": "The" }, { "id": 2121, "logprob": -1.8984375, "special": false, "text": " test" }, { "id": 3853, "logprob": -0.23535156, "special": false, "text": " request" }, { "id": 1736, "logprob": -0.091308594, "special": false, "text": " form" }, { "id": 603, "logprob": -0.96875, "special": false, "text": " is" }, { "id": 1671, "logprob": -1.6484375, "special": false, "text": " used" }, { "id": 577, "logprob": -0.43164062, "special": false, "text": " to" }, { "id": 3853, "logprob": -1.2421875, "special": false, "text": " request" } ], "top_tokens": null }, "generated_text": " form\n\nThe test request form is used to request" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 2, "logprob": null, "text": "<bos>" }, { "id": 2015, "logprob": -10.0, "text": "Test" }, { "id": 3853, "logprob": -10.875, "text": " request" } ], "seed": null, "tokens": [ { "id": 1736, "logprob": -2.09375, "special": false, "text": " form" }, { "id": 109, "logprob": -1.9140625, "special": false, "text": "\n\n" }, { "id": 651, "logprob": -2.453125, "special": false, "text": "The" }, { "id": 2121, "logprob": -1.8984375, "special": false, "text": " test" }, { "id": 3853, "logprob": -0.23535156, "special": false, "text": " request" }, { "id": 1736, "logprob": -0.091308594, "special": false, "text": " form" }, { "id": 603, "logprob": -0.96875, "special": false, "text": " is" }, { "id": 1671, "logprob": -1.6484375, "special": false, "text": " used" }, { "id": 577, "logprob": -0.43164062, "special": false, "text": " to" }, { "id": 3853, "logprob": -1.2421875, "special": false, "text": " request" } ], "top_tokens": null }, "generated_text": " form\n\nThe test request form is used to request" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 2, "logprob": null, "text": "<bos>" }, { "id": 2015, "logprob": -10.0, "text": 
"Test" }, { "id": 3853, "logprob": -10.875, "text": " request" } ], "seed": null, "tokens": [ { "id": 1736, "logprob": -2.09375, "special": false, "text": " form" }, { "id": 109, "logprob": -1.9140625, "special": false, "text": "\n\n" }, { "id": 651, "logprob": -2.453125, "special": false, "text": "The" }, { "id": 2121, "logprob": -1.8984375, "special": false, "text": " test" }, { "id": 3853, "logprob": -0.23535156, "special": false, "text": " request" }, { "id": 1736, "logprob": -0.091308594, "special": false, "text": " form" }, { "id": 603, "logprob": -0.96875, "special": false, "text": " is" }, { "id": 1671, "logprob": -1.6484375, "special": false, "text": " used" }, { "id": 577, "logprob": -0.43164062, "special": false, "text": " to" }, { "id": 3853, "logprob": -1.2421875, "special": false, "text": " request" } ], "top_tokens": null }, "generated_text": " form\n\nThe test request form is used to request" } ]
text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma/test_flash_gemma_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma/test_flash_gemma_load.json", "repo_id": "text-generation-inference", "token_count": 4916 }
177
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 1, "logprob": null, "text": "<s>" }, { "id": 3735, "logprob": -12.9140625, "text": "Test" }, { "id": 2159, "logprob": -10.7578125, "text": "request" } ], "seed": 0, "tokens": [ { "id": 28747, "logprob": 0.0, "special": false, "text": ":" }, { "id": 3169, "logprob": -0.1307373, "special": false, "text": " Let" }, { "id": 332, "logprob": -2.3359375, "special": false, "text": " u" }, { "id": 347, "logprob": 0.0, "special": false, "text": " be" }, { "id": 325, "logprob": -1.0234375, "special": false, "text": " (" }, { "id": 28734, "logprob": -2.0292969, "special": false, "text": "0" }, { "id": 648, "logprob": -1.0439453, "special": false, "text": " +" }, { "id": 28705, "logprob": -0.24499512, "special": false, "text": " " }, { "id": 28770, "logprob": -0.5073242, "special": false, "text": "3" }, { "id": 387, "logprob": -1.5507812, "special": false, "text": " -" } ], "top_tokens": null }, "generated_text": "Test request: Let u be (0 + 3 -" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_mistral/test_flash_mistral_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_mistral/test_flash_mistral_all_params.json", "repo_id": "text-generation-inference", "token_count": 1041 }
178
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 589, "logprob": null, "text": "def" }, { "id": 1459, "logprob": -5.6289062, "text": " print" }, { "id": 81, "logprob": -1.6005859, "text": "_" }, { "id": 7656, "logprob": -5.9921875, "text": "hello" } ], "seed": null, "tokens": [ { "id": 2262, "logprob": -0.7705078, "special": false, "text": "():" }, { "id": 284, "logprob": -0.2602539, "special": false, "text": "\n " }, { "id": 1459, "logprob": -0.39282227, "special": false, "text": " print" }, { "id": 440, "logprob": -0.6113281, "special": false, "text": "(\"" }, { "id": 8279, "logprob": -0.4765625, "special": false, "text": "Hello" }, { "id": 10896, "logprob": -1.5068359, "special": false, "text": " World" }, { "id": 657, "logprob": -0.8154297, "special": false, "text": "\")" }, { "id": 203, "logprob": -0.7319336, "special": false, "text": "\n" }, { "id": 203, "logprob": -0.35229492, "special": false, "text": "\n" }, { "id": 589, "logprob": -1.0380859, "special": false, "text": "def" } ] }, "generated_text": "():\n print(\"Hello World\")\n\ndef" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 589, "logprob": null, "text": "def" }, { "id": 1459, "logprob": -5.6289062, "text": " print" }, { "id": 81, "logprob": -1.6005859, "text": "_" }, { "id": 7656, "logprob": -5.9921875, "text": "hello" } ], "seed": null, "tokens": [ { "id": 2262, "logprob": -0.7705078, "special": false, "text": "():" }, { "id": 284, "logprob": -0.2602539, "special": false, "text": "\n " }, { "id": 1459, "logprob": -0.39282227, "special": false, "text": " print" }, { "id": 440, "logprob": -0.6113281, "special": false, "text": "(\"" }, { "id": 8279, "logprob": -0.4765625, "special": false, "text": "Hello" }, { "id": 10896, "logprob": -1.5068359, "special": false, "text": " World" }, { "id": 657, "logprob": -0.8154297, "special": false, "text": "\")" }, { "id": 203, "logprob": -0.7319336, "special": false, "text": "\n" }, { "id": 203, "logprob": -0.35229492, "special": false, "text": "\n" }, { "id": 589, "logprob": -1.0380859, "special": false, "text": "def" } ] }, "generated_text": "():\n print(\"Hello World\")\n\ndef" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 589, "logprob": null, "text": "def" }, { "id": 1459, "logprob": -5.6289062, "text": " print" }, { "id": 81, "logprob": -1.6005859, "text": "_" }, { "id": 7656, "logprob": -5.9921875, "text": "hello" } ], "seed": null, "tokens": [ { "id": 2262, "logprob": -0.7705078, "special": false, "text": "():" }, { "id": 284, "logprob": -0.2602539, "special": false, "text": "\n " }, { "id": 1459, "logprob": -0.39282227, "special": false, "text": " print" }, { "id": 440, "logprob": -0.6113281, "special": false, "text": "(\"" }, { "id": 8279, "logprob": -0.4765625, "special": false, "text": "Hello" }, { "id": 10896, "logprob": -1.5068359, "special": false, "text": " World" }, { "id": 657, "logprob": -0.8154297, "special": false, "text": "\")" }, { "id": 203, "logprob": -0.7319336, "special": false, "text": "\n" }, { "id": 203, "logprob": -0.35229492, "special": false, "text": "\n" }, { "id": 589, "logprob": -1.0380859, "special": false, "text": "def" } ] }, "generated_text": "():\n print(\"Hello World\")\n\ndef" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 589, "logprob": null, "text": "def" }, { "id": 
1459, "logprob": -5.6289062, "text": " print" }, { "id": 81, "logprob": -1.6005859, "text": "_" }, { "id": 7656, "logprob": -5.9921875, "text": "hello" } ], "seed": null, "tokens": [ { "id": 2262, "logprob": -0.7705078, "special": false, "text": "():" }, { "id": 284, "logprob": -0.2602539, "special": false, "text": "\n " }, { "id": 1459, "logprob": -0.39282227, "special": false, "text": " print" }, { "id": 440, "logprob": -0.6113281, "special": false, "text": "(\"" }, { "id": 8279, "logprob": -0.4765625, "special": false, "text": "Hello" }, { "id": 10896, "logprob": -1.5068359, "special": false, "text": " World" }, { "id": 657, "logprob": -0.8154297, "special": false, "text": "\")" }, { "id": 203, "logprob": -0.7319336, "special": false, "text": "\n" }, { "id": 203, "logprob": -0.35229492, "special": false, "text": "\n" }, { "id": 589, "logprob": -1.0380859, "special": false, "text": "def" } ] }, "generated_text": "():\n print(\"Hello World\")\n\ndef" } ]
text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder/test_flash_starcoder_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder/test_flash_starcoder_load.json", "repo_id": "text-generation-inference", "token_count": 5176 }
179
{ "details": { "best_of_sequences": null, "finish_reason": "eos_token", "generated_tokens": 9, "prefill": [ { "id": 0, "logprob": null, "text": "<pad>" } ], "seed": 0, "tokens": [ { "id": 16017, "logprob": -0.30908203, "special": false, "text": " blue" }, { "id": 20495, "logprob": 0.0, "special": false, "text": " sky" }, { "id": 259, "logprob": -0.28271484, "special": false, "text": " " }, { "id": 15484, "logprob": -1.7929688, "special": false, "text": "appear" }, { "id": 345, "logprob": -0.8935547, "special": false, "text": "ed" }, { "id": 281, "logprob": 0.0, "special": false, "text": " in" }, { "id": 287, "logprob": 0.0, "special": false, "text": " the" }, { "id": 20495, "logprob": -0.32299805, "special": false, "text": " sky" }, { "id": 1, "logprob": 0.0, "special": true, "text": "</s>" } ] }, "generated_text": "Why is the sky blue?blue sky appeared in the sky" }
text-generation-inference/integration-tests/models/__snapshots__/test_mt0_base/test_mt0_base_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_mt0_base/test_mt0_base_all_params.json", "repo_id": "text-generation-inference", "token_count": 831 }
180
import pytest @pytest.fixture(scope="module") def flash_llama_awq_handle_sharded(launcher): with launcher( "abhinavkulkarni/codellama-CodeLlama-7b-Python-hf-w4-g128-awq", num_shard=2, quantize="awq", ) as handle: yield handle @pytest.fixture(scope="module") async def flash_llama_awq_sharded(flash_llama_awq_handle_sharded): await flash_llama_awq_handle_sharded.health(300) return flash_llama_awq_handle_sharded.client @pytest.mark.asyncio async def test_flash_llama_awq_sharded(flash_llama_awq_sharded, response_snapshot): response = await flash_llama_awq_sharded.generate( "What is Deep Learning?", max_new_tokens=10, decoder_input_details=True ) assert response.details.generated_tokens == 10 assert ( response.generated_text == "\nWhat is the difference between Deep Learning and Machine" ) assert response == response_snapshot @pytest.mark.asyncio async def test_flash_llama_awq_load_sharded( flash_llama_awq_sharded, generate_load, response_snapshot ): responses = await generate_load( flash_llama_awq_sharded, "What is Deep Learning?", max_new_tokens=10, n=4 ) assert len(responses) == 4 assert all( [ r.generated_text == "\nWhat is the difference between Deep Learning and Machine" for r in responses ] ) assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_awq_sharded.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_awq_sharded.py", "repo_id": "text-generation-inference", "token_count": 608 }
181
import pytest import json from text_generation.types import GrammarType @pytest.fixture(scope="module") def non_flash_llama_grammar_handle(launcher): with launcher( "TinyLlama/TinyLlama-1.1B-Chat-v1.0", num_shard=1, disable_grammar_support=False, use_flash_attention=False, ) as handle: yield handle @pytest.fixture(scope="module") async def non_flash_llama_grammar(non_flash_llama_grammar_handle): await non_flash_llama_grammar_handle.health(300) return non_flash_llama_grammar_handle.client @pytest.mark.skip @pytest.mark.asyncio async def test_non_flash_llama_grammar_json(non_flash_llama_grammar, response_snapshot): response = await non_flash_llama_grammar.generate( "info: david holtz like trees and has two cats. ", max_new_tokens=100, decoder_input_details=True, seed=0, grammar={ "type": GrammarType.Json, "value": json.dumps( { "type": "object", "$id": "https://example.com/person.schema.json", "$schema": "https://json-schema.org/draft/2020-12/schema", "title": "Person", "properties": { "firstName": { "type": "string", "description": "The person'''s first name.", }, "lastName": { "type": "string", "description": "The person'''s last name.", }, "hobby": { "description": "The person'''s hobby.", "type": "string", }, "numCats": { "description": "The number of cats the person has.", "type": "integer", "minimum": 0, }, }, "required": ["firstName", "lastName", "hobby", "numCats"], } ), }, ) assert response.details.generated_tokens == 30 assert ( response.generated_text == '{"firstName":"David","hobby":"Trees","lastName":"Holtz","numCats":2}' ) assert response == response_snapshot
text-generation-inference/integration-tests/models/test_grammar_llama.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_grammar_llama.py", "repo_id": "text-generation-inference", "token_count": 1338 }
182
use clap::{Parser, ValueEnum}; use nix::sys::signal::{self, Signal}; use nix::unistd::Pid; use serde::Deserialize; use std::env; use std::ffi::OsString; use std::io::{BufRead, BufReader, Lines}; use std::os::unix::process::{CommandExt, ExitStatusExt}; use std::path::Path; use std::process::{Child, Command, ExitStatus, Stdio}; use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::mpsc::TryRecvError; use std::sync::{mpsc, Arc}; use std::thread; use std::thread::sleep; use std::time::{Duration, Instant}; use std::{fs, io}; use tracing_subscriber::EnvFilter; mod env_runtime; #[derive(Clone, Copy, Debug, ValueEnum)] enum Quantization { /// 4 bit quantization. Requires a specific AWQ quantized model: /// https://hf.co/models?search=awq. /// Should replace GPTQ models wherever possible because of the better latency Awq, /// 8 bit quantization, doesn't require specific model. /// Should be a drop-in replacement to bitsandbytes with much better performance. /// Kernels are from https://github.com/NetEase-FuXi/EETQ.git Eetq, /// 4 bit quantization. Requires a specific GTPQ quantized model: https://hf.co/models?search=gptq. /// text-generation-inference will use exllama (faster) kernels wherever possible, and use /// triton kernel (wider support) when it's not. /// AWQ has faster kernels. Gptq, /// Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, /// but it is known that the model will be much slower to run than the native f16. #[deprecated( since = "1.1.0", note = "Use `eetq` instead, which provides better latencies overall and is drop-in in most cases" )] Bitsandbytes, /// Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, /// but it is known that the model will be much slower to run than the native f16. BitsandbytesNF4, /// Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better /// perplexity performance for you model BitsandbytesFP4, } impl std::fmt::Display for Quantization { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { // To keep in track with `server`. match self { #[allow(deprecated)] // Use `eetq` instead, which provides better latencies overall and is drop-in in most cases Quantization::Bitsandbytes => { write!(f, "bitsandbytes") } Quantization::BitsandbytesNF4 => { write!(f, "bitsandbytes-nf4") } Quantization::BitsandbytesFP4 => { write!(f, "bitsandbytes-fp4") } Quantization::Gptq => { write!(f, "gptq") } Quantization::Awq => { write!(f, "awq") } Quantization::Eetq => { write!(f, "eetq") } } } } #[derive(Clone, Copy, Debug, ValueEnum)] enum Dtype { Float16, #[clap(name = "bfloat16")] BFloat16, } impl std::fmt::Display for Dtype { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { // To keep in track with `server`. match self { Dtype::Float16 => { write!(f, "float16") } Dtype::BFloat16 => { write!(f, "bfloat16") } } } } #[derive(Clone, Copy, Debug, ValueEnum)] enum RopeScaling { Linear, Dynamic, } impl std::fmt::Display for RopeScaling { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { // To keep in track with `server`. match self { RopeScaling::Linear => { write!(f, "linear") } RopeScaling::Dynamic => { write!(f, "dynamic") } } } } /// App Configuration #[derive(Parser, Debug)] #[clap(author, version, about, long_about = None)] struct Args { /// The name of the model to load. /// Can be a MODEL_ID as listed on <https://hf.co/models> like /// `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. 
/// Or it can be a local directory containing the necessary files /// as saved by `save_pretrained(...)` methods of transformers #[clap(default_value = "bigscience/bloom-560m", long, env)] model_id: String, /// The actual revision of the model if you're referring to a model /// on the hub. You can use a specific commit id or a branch like `refs/pr/2`. #[clap(long, env)] revision: Option<String>, /// The number of tokenizer workers used for payload validation and truncation inside the /// router. #[clap(default_value = "2", long, env)] validation_workers: usize, /// Whether to shard the model across multiple GPUs /// By default text-generation-inference will use all available GPUs to run /// the model. Setting it to `false` deactivates `num_shard`. #[clap(long, env)] sharded: Option<bool>, /// The number of shards to use if you don't want to use all GPUs on a given machine. /// You can use `CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher... --num_shard 2` /// and `CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher... --num_shard 2` to /// launch 2 copies with 2 shard each on a given machine with 4 GPUs for instance. #[clap(long, env)] num_shard: Option<usize>, /// Whether you want the model to be quantized. #[clap(long, env, value_enum)] quantize: Option<Quantization>, /// The number of input_ids to speculate on /// If using a medusa model, the heads will be picked up automatically /// Other wise, it will use n-gram speculation which is relatively free /// in terms of compute, but the speedup heavily depends on the task. #[clap(long, env)] speculate: Option<usize>, /// The dtype to be forced upon the model. This option cannot be used with `--quantize`. #[clap(long, env, value_enum)] dtype: Option<Dtype>, /// Whether you want to execute hub modelling code. Explicitly passing a `revision` is /// encouraged when loading a model with custom code to ensure no malicious code has been /// contributed in a newer revision. #[clap(long, env, value_enum)] trust_remote_code: bool, /// The maximum amount of concurrent requests for this particular deployment. /// Having a low limit will refuse clients requests instead of having them /// wait for too long and is usually good to handle backpressure correctly. #[clap(default_value = "128", long, env)] max_concurrent_requests: usize, /// This is the maximum allowed value for clients to set `best_of`. /// Best of makes `n` generations at the same time, and return the best /// in terms of overall log probability over the entire generated sequence #[clap(default_value = "2", long, env)] max_best_of: usize, /// This is the maximum allowed value for clients to set `stop_sequences`. /// Stop sequences are used to allow the model to stop on more than just /// the EOS token, and enable more complex "prompting" where users can preprompt /// the model in a specific way and define their "own" stop token aligned with /// their prompt. #[clap(default_value = "4", long, env)] max_stop_sequences: usize, /// This is the maximum allowed value for clients to set `top_n_tokens`. /// `top_n_tokens is used to return information about the the `n` most likely /// tokens at each generation step, instead of just the sampled token. This /// information can be used for downstream tasks like for classification or /// ranking. #[clap(default_value = "5", long, env)] max_top_n_tokens: u32, /// This is the maximum allowed input length (expressed in number of tokens) /// for users. 
The larger this value, the longer prompt users can send which /// can impact the overall memory required to handle the load. /// Please note that some models have a finite range of sequence they can handle. #[clap(default_value = "1024", long, env)] max_input_length: usize, /// This is the most important value to set as it defines the "memory budget" /// of running clients requests. /// Clients will send input sequences and ask to generate `max_new_tokens` /// on top. with a value of `1512` users can send either a prompt of /// `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for /// `1511` max_new_tokens. /// The larger this value, the larger amount each request will be in your RAM /// and the less effective batching can be. #[clap(default_value = "2048", long, env)] max_total_tokens: usize, /// This represents the ratio of waiting queries vs running queries where /// you want to start considering pausing the running queries to include the waiting /// ones into the same batch. /// `waiting_served_ratio=1.2` Means when 12 queries are waiting and there's /// only 10 queries left in the current batch we check if we can fit those 12 /// waiting queries into the batching strategy, and if yes, then batching happens /// delaying the 10 running queries by a `prefill` run. /// /// This setting is only applied if there is room in the batch /// as defined by `max_batch_total_tokens`. #[clap(default_value = "1.2", long, env)] waiting_served_ratio: f32, /// Limits the number of tokens for the prefill operation. /// Since this operation take the most memory and is compute bound, it is interesting /// to limit the number of requests that can be sent. #[clap(default_value = "4096", long, env)] max_batch_prefill_tokens: u32, /// **IMPORTANT** This is one critical control to allow maximum usage /// of the available hardware. /// /// This represents the total amount of potential tokens within a batch. /// When using padding (not recommended) this would be equivalent of /// `batch_size` * `max_total_tokens`. /// /// However in the non-padded (flash attention) version this can be much finer. /// /// For `max_batch_total_tokens=1000`, you could fit `10` queries of `total_tokens=100` /// or a single query of `1000` tokens. /// /// Overall this number should be the largest possible amount that fits the /// remaining memory (after the model is loaded). Since the actual memory overhead /// depends on other parameters like if you're using quantization, flash attention /// or the model implementation, text-generation-inference cannot infer this number /// automatically. #[clap(long, env)] max_batch_total_tokens: Option<u32>, /// This setting defines how many tokens can be passed before forcing the waiting /// queries to be put on the batch (if the size of the batch allows for it). /// New queries require 1 `prefill` forward, which is different from `decode` /// and therefore you need to pause the running batch in order to run `prefill` /// to create the correct values for the waiting queries to be able to join the batch. /// /// With a value too small, queries will always "steal" the compute to run `prefill` /// and running queries will be delayed by a lot. /// /// With a value too big, waiting queries could wait for a very long time /// before being allowed a slot in the running batch. If your server is busy /// that means that requests that could run in ~2s on an empty server could /// end up running in ~20s because the query had to wait for 18s. 
/// /// This number is expressed in number of tokens to make it a bit more /// "model" agnostic, but what should really matter is the overall latency /// for end users. #[clap(default_value = "20", long, env)] max_waiting_tokens: usize, /// Enforce a maximum number of requests per batch /// Specific flag for hardware targets that do not support unpadded inference #[clap(long, env)] max_batch_size: Option<usize>, /// Enable experimental support for cuda graphs #[clap(long, env)] enable_cuda_graphs: bool, /// The IP address to listen on #[clap(default_value = "0.0.0.0", long, env)] hostname: String, /// The port to listen on. #[clap(default_value = "3000", long, short, env)] port: u16, /// The name of the socket for gRPC communication between the webserver /// and the shards. #[clap(default_value = "/tmp/text-generation-server", long, env)] shard_uds_path: String, /// The address the master shard will listen on. (setting used by torch distributed) #[clap(default_value = "localhost", long, env)] master_addr: String, /// The address the master port will listen on. (setting used by torch distributed) #[clap(default_value = "29500", long, env)] master_port: usize, /// The location of the huggingface hub cache. /// Used to override the location if you want to provide a mounted disk for instance #[clap(long, env)] huggingface_hub_cache: Option<String>, /// The location of the huggingface hub cache. /// Used to override the location if you want to provide a mounted disk for instance #[clap(long, env)] weights_cache_override: Option<String>, /// For some models (like bloom), text-generation-inference implemented custom /// cuda kernels to speed up inference. Those kernels were only tested on A100. /// Use this flag to disable them if you're running on different hardware and /// encounter issues. #[clap(long, env)] disable_custom_kernels: bool, /// Limit the CUDA available memory. /// The allowed value equals the total visible memory multiplied by cuda-memory-fraction. #[clap(default_value = "1.0", long, env)] cuda_memory_fraction: f32, /// Rope scaling will only be used for RoPE models /// and allow rescaling the position rotary to accomodate for /// larger prompts. /// /// Goes together with `rope_factor`. /// /// `--rope-factor 2.0` gives linear scaling with a factor of 2.0 /// `--rope-scaling dynamic` gives dynamic scaling with a factor of 1.0 /// `--rope-scaling linear` gives linear scaling with a factor of 1.0 (Nothing will be changed /// basically) /// /// `--rope-scaling linear --rope-factor` fully describes the scaling you want #[clap(long, env)] rope_scaling: Option<RopeScaling>, /// Rope scaling will only be used for RoPE models /// See `rope_scaling` #[clap(long, env)] rope_factor: Option<f32>, /// Outputs the logs in JSON format (useful for telemetry) #[clap(long, env)] json_output: bool, #[clap(long, env)] otlp_endpoint: Option<String>, #[clap(long, env)] cors_allow_origin: Vec<String>, #[clap(long, env)] watermark_gamma: Option<f32>, #[clap(long, env)] watermark_delta: Option<f32>, /// Enable ngrok tunneling #[clap(long, env)] ngrok: bool, /// ngrok authentication token #[clap(long, env)] ngrok_authtoken: Option<String>, /// ngrok edge #[clap(long, env)] ngrok_edge: Option<String>, /// The path to the tokenizer config file. This path is used to load the tokenizer configuration which may /// include a `chat_template`. If not provided, the default config will be used from the model hub. 
#[clap(long, env)] tokenizer_config_path: Option<String>, /// Disable outlines grammar constrained generation. /// This is a feature that allows you to generate text that follows a specific grammar. #[clap(long, env)] disable_grammar_support: bool, /// Display a lot of information about your runtime environment #[clap(long, short, action)] env: bool, } #[derive(Debug)] enum ShardStatus { Ready, Failed(usize), } #[allow(clippy::too_many_arguments)] fn shard_manager( model_id: String, revision: Option<String>, quantize: Option<Quantization>, speculate: Option<usize>, dtype: Option<Dtype>, trust_remote_code: bool, uds_path: String, rank: usize, world_size: usize, master_addr: String, master_port: usize, huggingface_hub_cache: Option<String>, weights_cache_override: Option<String>, disable_custom_kernels: bool, watermark_gamma: Option<f32>, watermark_delta: Option<f32>, enable_cuda_graphs: bool, cuda_memory_fraction: f32, rope_scaling: Option<RopeScaling>, rope_factor: Option<f32>, otlp_endpoint: Option<String>, status_sender: mpsc::Sender<ShardStatus>, shutdown: Arc<AtomicBool>, _shutdown_sender: mpsc::Sender<()>, ) { // Enter shard-manager tracing span let _span = tracing::span!(tracing::Level::INFO, "shard-manager", rank = rank).entered(); // Get UDS path let uds_string = format!("{uds_path}-{rank}"); let uds = Path::new(&uds_string); // Clean previous runs if uds.exists() { fs::remove_file(uds).unwrap(); } // Process args let mut shard_args = vec![ "serve".to_string(), model_id, "--uds-path".to_string(), uds_path, "--logger-level".to_string(), "INFO".to_string(), "--json-output".to_string(), ]; // Activate trust remote code if trust_remote_code { shard_args.push("--trust-remote-code".to_string()); } // Activate tensor parallelism if world_size > 1 { shard_args.push("--sharded".to_string()); } if let Some(quantize) = quantize { shard_args.push("--quantize".to_string()); shard_args.push(quantize.to_string()) } if let Some(speculate) = speculate { shard_args.push("--speculate".to_string()); shard_args.push(speculate.to_string()) } if let Some(dtype) = dtype { shard_args.push("--dtype".to_string()); shard_args.push(dtype.to_string()) } // Model optional revision if let Some(revision) = revision { shard_args.push("--revision".to_string()); shard_args.push(revision) } let rope = match (rope_scaling, rope_factor) { (None, None) => None, (Some(scaling), None) => Some((scaling, 1.0)), (Some(scaling), Some(factor)) => Some((scaling, factor)), (None, Some(factor)) => Some((RopeScaling::Linear, factor)), }; // OpenTelemetry if let Some(otlp_endpoint) = otlp_endpoint { shard_args.push("--otlp-endpoint".to_string()); shard_args.push(otlp_endpoint); } // Copy current process env let mut envs: Vec<(OsString, OsString)> = env::vars_os().collect(); // Torch Distributed Env vars envs.push(("RANK".into(), rank.to_string().into())); envs.push(("WORLD_SIZE".into(), world_size.to_string().into())); envs.push(("MASTER_ADDR".into(), master_addr.into())); envs.push(("MASTER_PORT".into(), master_port.to_string().into())); envs.push(("TORCH_NCCL_AVOID_RECORD_STREAMS".into(), "1".into())); // CUDA memory fraction envs.push(( "CUDA_MEMORY_FRACTION".into(), cuda_memory_fraction.to_string().into(), )); // Safetensors load fast envs.push(("SAFETENSORS_FAST_GPU".into(), "1".into())); // Disable progress bar envs.push(("HF_HUB_DISABLE_PROGRESS_BARS".into(), "1".into())); // Enable hf transfer for insane download speeds let enable_hf_transfer = env::var("HF_HUB_ENABLE_HF_TRANSFER").unwrap_or("1".to_string()); envs.push(( 
"HF_HUB_ENABLE_HF_TRANSFER".into(), enable_hf_transfer.into(), )); // Parse Inference API token if let Ok(api_token) = env::var("HF_API_TOKEN") { envs.push(("HUGGING_FACE_HUB_TOKEN".into(), api_token.into())) }; // Detect rope scaling // Sending as env instead of CLI args to not bloat everything // those only can be used by RoPE models, so passing information around // for all models will complexify code unnecessarily if let Some((scaling, factor)) = rope { envs.push(("ROPE_SCALING".into(), scaling.to_string().into())); envs.push(("ROPE_FACTOR".into(), factor.to_string().into())); } // If huggingface_hub_cache is some, pass it to the shard // Useful when running inside a docker container if let Some(huggingface_hub_cache) = huggingface_hub_cache { envs.push(("HUGGINGFACE_HUB_CACHE".into(), huggingface_hub_cache.into())); }; // If weights_cache_override is some, pass it to the shard // Useful when running inside a HuggingFace Inference Endpoint if let Some(weights_cache_override) = weights_cache_override { envs.push(( "WEIGHTS_CACHE_OVERRIDE".into(), weights_cache_override.into(), )); }; // Enable experimental support for cuda graphs if enable_cuda_graphs { envs.push(("ENABLE_CUDA_GRAPHS".into(), "True".into())) } // If disable_custom_kernels is true, pass it to the shard as an env var if disable_custom_kernels { envs.push(("DISABLE_CUSTOM_KERNELS".into(), "True".into())) } // Watermark Gamma if let Some(watermark_gamma) = watermark_gamma { envs.push(("WATERMARK_GAMMA".into(), watermark_gamma.to_string().into())) } // Watermark Delta if let Some(watermark_delta) = watermark_delta { envs.push(("WATERMARK_DELTA".into(), watermark_delta.to_string().into())) } // Start process tracing::info!("Starting shard"); let mut p = match Command::new("text-generation-server") .args(shard_args) .envs(envs) .stdout(Stdio::piped()) .stderr(Stdio::piped()) .process_group(0) .spawn() { Ok(p) => p, Err(err) => { if err.kind() == io::ErrorKind::NotFound { tracing::error!("text-generation-server not found in PATH"); tracing::error!("Please install it with `make install-server`") } { tracing::error!("{}", err); } status_sender.send(ShardStatus::Failed(rank)).unwrap(); return; } }; // Redirect STDOUT to the console let shard_stdout_reader = BufReader::new(p.stdout.take().unwrap()); let shard_stderr_reader = BufReader::new(p.stderr.take().unwrap()); //stdout tracing thread thread::spawn(move || { log_lines(shard_stdout_reader.lines()); }); // We read stderr in another thread as it seems that lines() can block in some cases let (err_sender, err_receiver) = mpsc::channel(); thread::spawn(move || { for line in shard_stderr_reader.lines().flatten() { err_sender.send(line).unwrap_or(()); } }); let mut ready = false; let start_time = Instant::now(); let mut wait_time = Instant::now(); loop { // Process exited if let Some(exit_status) = p.try_wait().unwrap() { let mut err = String::new(); while let Ok(line) = err_receiver.recv_timeout(Duration::from_millis(10)) { err = err + "\n" + &line; } tracing::error!("Shard complete standard error output:\n{err}"); if let Some(signal) = exit_status.signal() { tracing::error!("Shard process was signaled to shutdown with signal {signal}"); } status_sender.send(ShardStatus::Failed(rank)).unwrap(); return; } // We received a shutdown signal if shutdown.load(Ordering::SeqCst) { p.kill().unwrap(); let _ = p.wait(); tracing::info!("Shard terminated"); return; } // Shard is ready if uds.exists() && !ready { tracing::info!("Shard ready in {:?}", start_time.elapsed()); 
status_sender.send(ShardStatus::Ready).unwrap(); ready = true; } else if !ready && wait_time.elapsed() > Duration::from_secs(10) { tracing::info!("Waiting for shard to be ready..."); wait_time = Instant::now(); } sleep(Duration::from_millis(100)); } } fn shutdown_shards(shutdown: Arc<AtomicBool>, shutdown_receiver: &mpsc::Receiver<()>) { tracing::info!("Shutting down shards"); // Update shutdown value to true // This will be picked up by the shard manager shutdown.store(true, Ordering::SeqCst); // Wait for shards to shutdown // This will block till all shutdown_sender are dropped let _ = shutdown_receiver.recv(); } fn num_cuda_devices() -> Option<usize> { let devices = match env::var("CUDA_VISIBLE_DEVICES") { Ok(devices) => devices, Err(_) => env::var("NVIDIA_VISIBLE_DEVICES").ok()?, }; let n_devices = devices.split(',').count(); Some(n_devices) } #[derive(Deserialize)] #[serde(rename_all = "UPPERCASE")] enum PythonLogLevelEnum { Trace, Debug, Info, Success, Warning, Error, Critical, } #[derive(Deserialize)] struct PythonLogLevel { name: PythonLogLevelEnum, } #[derive(Deserialize)] struct PythonLogRecord { level: PythonLogLevel, } #[derive(Deserialize)] struct PythonLogMessage { text: String, record: PythonLogRecord, } impl PythonLogMessage { fn trace(&self) { match self.record.level.name { PythonLogLevelEnum::Trace => tracing::trace!("{}", self.text), PythonLogLevelEnum::Debug => tracing::debug!("{}", self.text), PythonLogLevelEnum::Info => tracing::info!("{}", self.text), PythonLogLevelEnum::Success => tracing::info!("{}", self.text), PythonLogLevelEnum::Warning => tracing::warn!("{}", self.text), PythonLogLevelEnum::Error => tracing::error!("{}", self.text), PythonLogLevelEnum::Critical => tracing::error!("{}", self.text), } } } impl TryFrom<&String> for PythonLogMessage { type Error = serde_json::Error; fn try_from(value: &String) -> Result<Self, Self::Error> { serde_json::from_str::<Self>(value) } } fn log_lines<S: Sized + BufRead>(lines: Lines<S>) { for line in lines.flatten() { match PythonLogMessage::try_from(&line) { Ok(log) => log.trace(), Err(_) => tracing::debug!("{line}"), } } } fn find_num_shards( sharded: Option<bool>, num_shard: Option<usize>, ) -> Result<usize, LauncherError> { // get the number of shards given `sharded` and `num_shard` let num_shard = match (sharded, num_shard) { (Some(true), None) => { // try to default to the number of available GPUs tracing::info!("Parsing num_shard from CUDA_VISIBLE_DEVICES/NVIDIA_VISIBLE_DEVICES"); let n_devices = num_cuda_devices() .expect("--num-shard and CUDA_VISIBLE_DEVICES/NVIDIA_VISIBLE_DEVICES are not set"); if n_devices <= 1 { return Err(LauncherError::NotEnoughCUDADevices(format!( "`sharded` is true but only found {n_devices} CUDA devices" ))); } n_devices } (Some(true), Some(num_shard)) => { // we can't have only one shard while sharded if num_shard <= 1 { return Err(LauncherError::ArgumentValidation( "`sharded` is true but `num_shard` <= 1".to_string(), )); } num_shard } (Some(false), Some(num_shard)) => num_shard, (Some(false), None) => 1, (None, None) => num_cuda_devices().unwrap_or(1), (None, Some(num_shard)) => num_shard, }; if num_shard < 1 { return Err(LauncherError::ArgumentValidation( "`num_shard` cannot be < 1".to_string(), )); } Ok(num_shard) } #[derive(Debug)] enum LauncherError { ArgumentValidation(String), NotEnoughCUDADevices(String), DownloadError, ShardCannotStart, ShardDisconnected, ShardFailed, WebserverFailed, WebserverCannotStart, } fn download_convert_model(args: &Args, running: Arc<AtomicBool>) -> 
Result<(), LauncherError> { // Enter download tracing span let _span = tracing::span!(tracing::Level::INFO, "download").entered(); let mut download_args = vec![ "download-weights".to_string(), args.model_id.to_string(), "--extension".to_string(), ".safetensors".to_string(), "--logger-level".to_string(), "INFO".to_string(), "--json-output".to_string(), ]; // Model optional revision if let Some(revision) = &args.revision { download_args.push("--revision".to_string()); download_args.push(revision.to_string()) } // Trust remote code for automatic peft fusion if args.trust_remote_code { download_args.push("--trust-remote-code".to_string()); } // Copy current process env let mut envs: Vec<(OsString, OsString)> = env::vars_os().collect(); // Disable progress bar envs.push(("HF_HUB_DISABLE_PROGRESS_BARS".into(), "1".into())); // If huggingface_hub_cache is set, pass it to the download process // Useful when running inside a docker container if let Some(ref huggingface_hub_cache) = args.huggingface_hub_cache { envs.push(("HUGGINGFACE_HUB_CACHE".into(), huggingface_hub_cache.into())); }; // Enable hf transfer for insane download speeds let enable_hf_transfer = env::var("HF_HUB_ENABLE_HF_TRANSFER").unwrap_or("1".to_string()); envs.push(( "HF_HUB_ENABLE_HF_TRANSFER".into(), enable_hf_transfer.into(), )); // Parse Inference API token if let Ok(api_token) = env::var("HF_API_TOKEN") { envs.push(("HUGGING_FACE_HUB_TOKEN".into(), api_token.into())) }; // If args.weights_cache_override is some, pass it to the download process // Useful when running inside a HuggingFace Inference Endpoint if let Some(weights_cache_override) = &args.weights_cache_override { envs.push(( "WEIGHTS_CACHE_OVERRIDE".into(), weights_cache_override.into(), )); }; // Start process tracing::info!("Starting download process."); let mut download_process = match Command::new("text-generation-server") .args(download_args) .envs(envs) .stdout(Stdio::piped()) .stderr(Stdio::piped()) .process_group(0) .spawn() { Ok(p) => p, Err(err) => { if err.kind() == io::ErrorKind::NotFound { tracing::error!("text-generation-server not found in PATH"); tracing::error!("Please install it with `make install-server`") } else { tracing::error!("{}", err); } return Err(LauncherError::DownloadError); } }; let download_stdout = BufReader::new(download_process.stdout.take().unwrap()); thread::spawn(move || { log_lines(download_stdout.lines()); }); let download_stderr = BufReader::new(download_process.stderr.take().unwrap()); // We read stderr in another thread as it seems that lines() can block in some cases let (err_sender, err_receiver) = mpsc::channel(); thread::spawn(move || { for line in download_stderr.lines().flatten() { err_sender.send(line).unwrap_or(()); } }); loop { if let Some(status) = download_process.try_wait().unwrap() { if status.success() { tracing::info!("Successfully downloaded weights."); break; } let mut err = String::new(); while let Ok(line) = err_receiver.recv_timeout(Duration::from_millis(10)) { err = err + "\n" + &line; } if let Some(signal) = status.signal() { tracing::error!( "Download process was signaled to shutdown with signal {signal}: {err}" ); } else { tracing::error!("Download encountered an error: {err}"); } return Err(LauncherError::DownloadError); } if !running.load(Ordering::SeqCst) { terminate("download", download_process, Duration::from_secs(10)).unwrap(); return Ok(()); } sleep(Duration::from_millis(100)); } Ok(()) } #[allow(clippy::too_many_arguments)] fn spawn_shards( num_shard: usize, args: &Args, shutdown: 
Arc<AtomicBool>, shutdown_receiver: &mpsc::Receiver<()>, shutdown_sender: mpsc::Sender<()>, status_receiver: &mpsc::Receiver<ShardStatus>, status_sender: mpsc::Sender<ShardStatus>, running: Arc<AtomicBool>, ) -> Result<(), LauncherError> { // Start shard processes for rank in 0..num_shard { let model_id = args.model_id.clone(); let revision = args.revision.clone(); let uds_path = args.shard_uds_path.clone(); let master_addr = args.master_addr.clone(); let huggingface_hub_cache = args.huggingface_hub_cache.clone(); let weights_cache_override = args.weights_cache_override.clone(); let status_sender = status_sender.clone(); let shutdown = shutdown.clone(); let shutdown_sender = shutdown_sender.clone(); let otlp_endpoint = args.otlp_endpoint.clone(); let quantize = args.quantize; let speculate = args.speculate; let dtype = args.dtype; let trust_remote_code = args.trust_remote_code; let master_port = args.master_port; let disable_custom_kernels = args.disable_custom_kernels; let watermark_gamma = args.watermark_gamma; let watermark_delta = args.watermark_delta; let enable_cuda_graphs = args.enable_cuda_graphs; let cuda_memory_fraction = args.cuda_memory_fraction; let rope_scaling = args.rope_scaling; let rope_factor = args.rope_factor; thread::spawn(move || { shard_manager( model_id, revision, quantize, speculate, dtype, trust_remote_code, uds_path, rank, num_shard, master_addr, master_port, huggingface_hub_cache, weights_cache_override, disable_custom_kernels, watermark_gamma, watermark_delta, enable_cuda_graphs, cuda_memory_fraction, rope_scaling, rope_factor, otlp_endpoint, status_sender, shutdown, shutdown_sender, ) }); } drop(shutdown_sender); // Wait for shard to start let mut shard_ready = 0; while running.load(Ordering::SeqCst) { match status_receiver.try_recv() { Ok(ShardStatus::Ready) => { shard_ready += 1; if shard_ready == num_shard { break; } } Err(TryRecvError::Empty) => { sleep(Duration::from_millis(100)); } Ok(ShardStatus::Failed(rank)) => { tracing::error!("Shard {rank} failed to start"); shutdown_shards(shutdown, shutdown_receiver); return Err(LauncherError::ShardCannotStart); } Err(TryRecvError::Disconnected) => { tracing::error!("Shard status channel disconnected"); shutdown_shards(shutdown, shutdown_receiver); return Err(LauncherError::ShardDisconnected); } } } Ok(()) } fn compute_type(num_shard: usize) -> Option<String> { let output = Command::new("nvidia-smi") .args(["--query-gpu=gpu_name", "--format=csv"]) .output() .ok()?; let output = String::from_utf8(output.stdout).ok()?; let fullname = output.split('\n').nth(1)?; let cardname = fullname.replace(' ', "-").to_lowercase(); let compute_type = format!("{num_shard}-{cardname}"); Some(compute_type) } fn spawn_webserver( num_shard: usize, args: Args, shutdown: Arc<AtomicBool>, shutdown_receiver: &mpsc::Receiver<()>, ) -> Result<Child, LauncherError> { // All shard started // Start webserver tracing::info!("Starting Webserver"); let mut router_args = vec![ "--max-concurrent-requests".to_string(), args.max_concurrent_requests.to_string(), "--max-best-of".to_string(), args.max_best_of.to_string(), "--max-stop-sequences".to_string(), args.max_stop_sequences.to_string(), "--max-top-n-tokens".to_string(), args.max_top_n_tokens.to_string(), "--max-input-length".to_string(), args.max_input_length.to_string(), "--max-total-tokens".to_string(), args.max_total_tokens.to_string(), "--max-batch-prefill-tokens".to_string(), args.max_batch_prefill_tokens.to_string(), "--waiting-served-ratio".to_string(), 
args.waiting_served_ratio.to_string(), "--max-waiting-tokens".to_string(), args.max_waiting_tokens.to_string(), "--validation-workers".to_string(), args.validation_workers.to_string(), "--hostname".to_string(), args.hostname.to_string(), "--port".to_string(), args.port.to_string(), "--master-shard-uds-path".to_string(), format!("{}-0", args.shard_uds_path), "--tokenizer-name".to_string(), args.model_id, ]; // Grammar support if args.disable_grammar_support { router_args.push("--disable-grammar-support".to_string()); } // Tokenizer config path if let Some(ref tokenizer_config_path) = args.tokenizer_config_path { router_args.push("--tokenizer-config-path".to_string()); router_args.push(tokenizer_config_path.to_string()); } // Model optional max batch total tokens if let Some(max_batch_total_tokens) = args.max_batch_total_tokens { router_args.push("--max-batch-total-tokens".to_string()); router_args.push(max_batch_total_tokens.to_string()); } // Router optional max batch size if let Some(max_batch_size) = args.max_batch_size { router_args.push("--max-batch-size".to_string()); router_args.push(max_batch_size.to_string()); } // Model optional revision if let Some(ref revision) = args.revision { router_args.push("--revision".to_string()); router_args.push(revision.to_string()) } if args.json_output { router_args.push("--json-output".to_string()); } // OpenTelemetry if let Some(otlp_endpoint) = args.otlp_endpoint { router_args.push("--otlp-endpoint".to_string()); router_args.push(otlp_endpoint); } // CORS origins for origin in args.cors_allow_origin.into_iter() { router_args.push("--cors-allow-origin".to_string()); router_args.push(origin); } // Ngrok if args.ngrok { router_args.push("--ngrok".to_string()); router_args.push("--ngrok-authtoken".to_string()); router_args.push(args.ngrok_authtoken.unwrap()); router_args.push("--ngrok-edge".to_string()); router_args.push(args.ngrok_edge.unwrap()); } // Copy current process env let mut envs: Vec<(OsString, OsString)> = env::vars_os().collect(); // Parse Inference API token if let Ok(api_token) = env::var("HF_API_TOKEN") { envs.push(("HUGGING_FACE_HUB_TOKEN".into(), api_token.into())) }; // Parse Compute type if let Ok(compute_type) = env::var("COMPUTE_TYPE") { envs.push(("COMPUTE_TYPE".into(), compute_type.into())) } else if let Some(compute_type) = compute_type(num_shard) { envs.push(("COMPUTE_TYPE".into(), compute_type.into())) } let mut webserver = match Command::new("text-generation-router") .args(router_args) .envs(envs) .stdout(Stdio::piped()) .stderr(Stdio::piped()) .process_group(0) .spawn() { Ok(p) => p, Err(err) => { tracing::error!("Failed to start webserver: {}", err); if err.kind() == io::ErrorKind::NotFound { tracing::error!("text-generation-router not found in PATH"); tracing::error!("Please install it with `make install-router`") } else { tracing::error!("{}", err); } shutdown_shards(shutdown, shutdown_receiver); return Err(LauncherError::WebserverCannotStart); } }; // Redirect STDOUT and STDERR to the console let webserver_stdout = webserver.stdout.take().unwrap(); let webserver_stderr = webserver.stderr.take().unwrap(); thread::spawn(move || { let stdout = BufReader::new(webserver_stdout); let stderr = BufReader::new(webserver_stderr); for line in stdout.lines() { println!("{}", line.unwrap()); } for line in stderr.lines() { println!("{}", line.unwrap()); } }); Ok(webserver) } fn terminate(process_name: &str, mut process: Child, timeout: Duration) -> io::Result<ExitStatus> { tracing::info!("Terminating {process_name}"); let 
terminate_time = Instant::now(); signal::kill(Pid::from_raw(process.id() as i32), Signal::SIGTERM).unwrap(); tracing::info!("Waiting for {process_name} to gracefully shutdown"); while terminate_time.elapsed() < timeout { if let Some(status) = process.try_wait()? { tracing::info!("{process_name} terminated"); return Ok(status); } sleep(Duration::from_millis(100)); } tracing::info!("Killing {process_name}"); process.kill()?; let exit_status = process.wait()?; tracing::info!("{process_name} killed"); Ok(exit_status) } fn main() -> Result<(), LauncherError> { // Pattern match configuration let args: Args = Args::parse(); // Filter events with LOG_LEVEL let env_filter = EnvFilter::try_from_env("LOG_LEVEL").unwrap_or_else(|_| EnvFilter::new("info")); if args.json_output { tracing_subscriber::fmt() .with_env_filter(env_filter) .json() .init(); } else { tracing_subscriber::fmt() .with_env_filter(env_filter) .compact() .init(); } if args.env { let env_runtime = env_runtime::Env::new(); tracing::info!("{}", env_runtime); } tracing::info!("{:?}", args); // Validate args if args.max_input_length >= args.max_total_tokens { return Err(LauncherError::ArgumentValidation( "`max_input_length` must be < `max_total_tokens`".to_string(), )); } if args.max_input_length as u32 > args.max_batch_prefill_tokens { return Err(LauncherError::ArgumentValidation(format!( "`max_batch_prefill_tokens` must be >= `max_input_length`. Given: {} and {}", args.max_batch_prefill_tokens, args.max_input_length ))); } if args.validation_workers == 0 { return Err(LauncherError::ArgumentValidation( "`validation_workers` must be > 0".to_string(), )); } if args.trust_remote_code { tracing::warn!( "`trust_remote_code` is set. Trusting that model `{}` do not contain malicious code.", args.model_id ); } let num_shard = find_num_shards(args.sharded, args.num_shard)?; if num_shard > 1 { tracing::info!("Sharding model on {num_shard} processes"); } if let Some(ref max_batch_total_tokens) = args.max_batch_total_tokens { if args.max_batch_prefill_tokens > *max_batch_total_tokens { return Err(LauncherError::ArgumentValidation(format!( "`max_batch_prefill_tokens` must be <= `max_batch_total_tokens`. Given: {} and {}", args.max_batch_prefill_tokens, max_batch_total_tokens ))); } if args.max_total_tokens as u32 > *max_batch_total_tokens { return Err(LauncherError::ArgumentValidation(format!( "`max_total_tokens` must be <= `max_batch_total_tokens`. 
Given: {} and {}", args.max_total_tokens, max_batch_total_tokens ))); } } if args.ngrok { if args.ngrok_authtoken.is_none() { return Err(LauncherError::ArgumentValidation( "`ngrok-authtoken` must be set when using ngrok tunneling".to_string(), )); } if args.ngrok_edge.is_none() { return Err(LauncherError::ArgumentValidation( "`ngrok-edge` must be set when using ngrok tunneling".to_string(), )); } } // Signal handler let running = Arc::new(AtomicBool::new(true)); let r = running.clone(); ctrlc::set_handler(move || { r.store(false, Ordering::SeqCst); }) .expect("Error setting Ctrl-C handler"); // Download and convert model weights download_convert_model(&args, running.clone())?; if !running.load(Ordering::SeqCst) { // Launcher was asked to stop return Ok(()); } // Shared shutdown bool let shutdown = Arc::new(AtomicBool::new(false)); // Shared shutdown channel // When shutting down, the main thread will wait for all senders to be dropped let (shutdown_sender, shutdown_receiver) = mpsc::channel(); // Shared channel to track shard status let (status_sender, status_receiver) = mpsc::channel(); spawn_shards( num_shard, &args, shutdown.clone(), &shutdown_receiver, shutdown_sender, &status_receiver, status_sender, running.clone(), )?; // We might have received a termination signal if !running.load(Ordering::SeqCst) { shutdown_shards(shutdown, &shutdown_receiver); return Ok(()); } let mut webserver = spawn_webserver(num_shard, args, shutdown.clone(), &shutdown_receiver) .map_err(|err| { shutdown_shards(shutdown.clone(), &shutdown_receiver); err })?; // Default exit code let mut exit_code = Ok(()); while running.load(Ordering::SeqCst) { if let Ok(ShardStatus::Failed(rank)) = status_receiver.try_recv() { tracing::error!("Shard {rank} crashed"); exit_code = Err(LauncherError::ShardFailed); break; }; match webserver.try_wait().unwrap() { Some(_) => { tracing::error!("Webserver Crashed"); shutdown_shards(shutdown, &shutdown_receiver); return Err(LauncherError::WebserverFailed); } None => { sleep(Duration::from_millis(100)); } }; } // Graceful termination terminate("webserver", webserver, Duration::from_secs(90)).unwrap(); shutdown_shards(shutdown, &shutdown_receiver); exit_code }
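// A minimal launch sketch tying the flags above together. The binary name, model id
// and numbers are illustrative assumptions, not recommendations:
//
//     text-generation-launcher \
//         --model-id mistralai/Mistral-7B-Instruct-v0.2 \
//         --num-shard 2 \
//         --max-input-length 1024 \
//         --max-total-tokens 2048 \
//         --max-batch-prefill-tokens 4096 \
//         --max-batch-total-tokens 32768
//
// These values satisfy the checks enforced in `main` above: `max_input_length` <
// `max_total_tokens`, `max_batch_prefill_tokens` >= `max_input_length`, and both
// `max_batch_prefill_tokens` and `max_total_tokens` <= `max_batch_total_tokens`.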
text-generation-inference/launcher/src/main.rs/0
{ "file_path": "text-generation-inference/launcher/src/main.rs", "repo_id": "text-generation-inference", "token_count": 19919 }
183
//! A crate to extract and inject a OpenTelemetry context from and to a gRPC request. //! Inspired by: https://github.com/open-telemetry/opentelemetry-rust gRPC examples use opentelemetry::global; use opentelemetry::propagation::{Extractor, Injector}; use tracing_opentelemetry::OpenTelemetrySpanExt; /// Extract context metadata from a gRPC request's metadata struct MetadataExtractor<'a>(pub &'a tonic::metadata::MetadataMap); impl<'a> Extractor for MetadataExtractor<'a> { /// Get a value for a key from the MetadataMap. If the value can't be converted to &str, returns None fn get(&self, key: &str) -> Option<&str> { self.0.get(key).and_then(|metadata| metadata.to_str().ok()) } /// Collect all the keys from the MetadataMap. fn keys(&self) -> Vec<&str> { self.0 .keys() .map(|key| match key { tonic::metadata::KeyRef::Ascii(v) => v.as_str(), tonic::metadata::KeyRef::Binary(v) => v.as_str(), }) .collect::<Vec<_>>() } } /// Inject context in the metadata of a gRPC request. struct MetadataInjector<'a>(pub &'a mut tonic::metadata::MetadataMap); impl<'a> Injector for MetadataInjector<'a> { /// Set a key and value in the MetadataMap. Does nothing if the key or value are not valid inputs fn set(&mut self, key: &str, value: String) { if let Ok(key) = tonic::metadata::MetadataKey::from_bytes(key.as_bytes()) { if let Ok(val) = value.parse() { self.0.insert(key, val); } } } } /// Get a context from the global context and inject the span into a gRPC request's metadata. fn inject(metadata: &mut tonic::metadata::MetadataMap) { global::get_text_map_propagator(|propagator| { propagator.inject_context( &tracing::Span::current().context(), &mut MetadataInjector(metadata), ) }) } pub trait InjectTelemetryContext { fn inject_context(self) -> Self; } impl<T> InjectTelemetryContext for tonic::Request<T> { fn inject_context(mut self) -> Self { inject(self.metadata_mut()); self } }
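// A hedged usage sketch: any outgoing tonic request can carry the current tracing
// span by calling `inject_context()` before it is sent, so the receiving side can
// extract the same OpenTelemetry context from the gRPC metadata. The crate import
// name, request type and client below are assumptions for illustration only:
//
//     use grpc_metadata::InjectTelemetryContext;
//
//     let request = tonic::Request::new(HealthRequest {}).inject_context();
//     let response = client.health(request).await?;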
text-generation-inference/router/grpc-metadata/src/lib.rs/0
{ "file_path": "text-generation-inference/router/grpc-metadata/src/lib.rs", "repo_id": "text-generation-inference", "token_count": 889 }
184
selective_scan_commit := 2a3704fd47ba817b415627b06fd796b971fdc137 causal-conv1d: rm -rf causal-conv1d git clone https://github.com/Dao-AILab/causal-conv1d.git build-causal-conv1d: causal-conv1d cd causal-conv1d/ && git checkout v1.1.1 # known latest working version tag cd causal-conv1d/ && CAUSAL_CONV1D_FORCE_BUILD=TRUE python setup.py build install-causal-conv1d: build-causal-conv1d pip uninstall causal-conv1d -y || true cd causal-conv1d/ && pip install . # selective-scan depends on causal-conv1d selective-scan: rm -rf mamba git clone https://github.com/state-spaces/mamba.git mamba build-selective-scan: selective-scan cd mamba/ && git fetch && git checkout $(selective_scan_commit) cd mamba && python setup.py build install-selective-scan: install-causal-conv1d build-selective-scan pip uninstall selective-scan-cuda -y || true cd mamba && pip install . build-all: build-causal-conv1d build-selective-scan
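# Usage sketch (target names are taken from this file; the exact invocations below
# are an assumption about how it is meant to be driven):
#
#     make install-selective-scan   # pulls in install-causal-conv1d first via its dependency
#     make build-all                # or: build both extensions without installing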
text-generation-inference/server/Makefile-selective-scan/0
{ "file_path": "text-generation-inference/server/Makefile-selective-scan", "repo_id": "text-generation-inference", "token_count": 351 }
185
// Adapted from turboderp exllama: https://github.com/turboderp/exllama #ifndef _hip_compat_cuh #define _hip_compat_cuh // Workaround for a bug in hipamd, backported from upstream, this is fixed in ROCm 5.6. __device__ __forceinline__ __half __compat_hrcp(__half x) { return __half_raw{ static_cast<_Float16>(__builtin_amdgcn_rcph(static_cast<__half_raw>(x).data))}; } __device__ __forceinline__ __half2 __compat_h2rcp(__half2 x) { return _Float16_2{static_cast<_Float16>(__builtin_amdgcn_rcph(x.x)), static_cast<_Float16>(__builtin_amdgcn_rcph(x.y))}; } #define hrcp __compat_hrcp #define h2rcp __compat_h2rcp // Automatic conversion of hipblasHgemm doesn't convert half to hipblasHalf. __host__ __forceinline__ hipblasStatus_t __compat_hipblasHgemm(hipblasHandle_t handle, hipblasOperation_t transA, hipblasOperation_t transB, int m, int n, int k, const half* alpha, const half* AP, int lda, const half* BP, int ldb, const half* beta, half* CP, int ldc) { return hipblasHgemm(handle, transA, transB, m, n, k, reinterpret_cast<const hipblasHalf *>(alpha), reinterpret_cast<const hipblasHalf *>(AP), lda, reinterpret_cast<const hipblasHalf *>(BP), ldb, reinterpret_cast<const hipblasHalf *>(beta), reinterpret_cast<hipblasHalf *>(CP), ldc); } #define hipblasHgemm __compat_hipblasHgemm // Previous version of PyTorch were converting to rocBLAS instead of hipBLAS. #define rocblas_handle hipblasHandle_t #define rocblas_operation_none HIPBLAS_OP_N #define rocblas_get_stream hipblasGetStream #define rocblas_set_stream hipblasSetStream #define rocblas_hgemm __compat_hipblasHgemm #endif
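// Reading aid, not additional functionality: hrcp/h2rcp compute half-precision
// reciprocals (1/x) through the AMD __builtin_amdgcn_rcph intrinsic, mirroring the
// CUDA intrinsics of the same names, and __compat_hipblasHgemm only reinterprets the
// half* arguments as hipblasHalf* before forwarding to hipblasHgemm. How the exllama
// kernels include this header is not shown in this file and is left as an assumption.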
text-generation-inference/server/exllama_kernels/exllama_kernels/hip_compat.cuh/0
{ "file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/hip_compat.cuh", "repo_id": "text-generation-inference", "token_count": 1708 }
186
#ifndef _qdq_3_cuh #define _qdq_3_cuh #include "qdq_util.cuh" #include "../../config.h" #if QMODE_3BIT == 1 // Permutation: // // v9997775 55333111 u8886664 44222000 (u, v lsb) // vjjjhhhf ffdddbbb uiiiggge eecccaaa // vtttrrrp ppnnnlll usssqqqo oommmkkk __forceinline__ __device__ void shuffle_3bit_32 ( uint32_t* q, int stride ) { uint32_t qa = q[0 * stride]; uint32_t qb = q[1 * stride]; uint32_t qc = q[2 * stride]; // qa: aa999888 77766655 54443332 22111000 // qb: lkkkjjji iihhhggg fffeeedd dcccbbba // qc: vvvuuutt tsssrrrq qqpppooo nnnmmmll uint32_t qd = qc >> 26; qc <<= 4; qc |= qb >> 28; qb <<= 2; qb |= qa >> 30; // qa: ..999888 77766655 54443332 22111000 // qb: ..jjjiii hhhgggff feeedddc ccbbbaaa // qc: ..tttsss rrrqqqpp pooonnnm mmlllkkk // qd: vvvuuu uint32_t za = 0; uint32_t zb = 0; uint32_t zc = 0; for (int i = 0; i < 5; i++) { uint32_t t0 = qa & 0x07; uint32_t t1 = (qa & 0x38) >> 3; qa >>= 6; za |= (t0 << (i * 3)); za |= (t1 << (i * 3 + 16)); } for (int i = 0; i < 5; i++) { uint32_t t0 = qb & 0x07; uint32_t t1 = (qb & 0x38) >> 3; qb >>= 6; zb |= (t0 << (i * 3)); zb |= (t1 << (i * 3 + 16)); } for (int i = 0; i < 5; i++) { uint32_t t0 = qc & 0x07; uint32_t t1 = (qc & 0x38) >> 3; qc >>= 6; zc |= (t0 << (i * 3)); zc |= (t1 << (i * 3 + 16)); } // za: 9997775 55333111 8886664 44222000 // zb: jjjhhhf ffdddbbb iiiggge eecccaaa // zc: tttrrrp ppnnnlll sssqqqo oommmkkk // qd: vvvuuu za |= ((qd & 0x01) >> 0) << 15; zb |= ((qd & 0x02) >> 1) << 15; zc |= ((qd & 0x04) >> 2) << 15; za |= ((qd & 0x08) >> 3) << 31; zb |= ((qd & 0x10) >> 4) << 31; zc |= ((qd & 0x20) >> 5) << 31; // za: v9997775 55333111 u8886664 44222000 (u, v lsb) // zb: vjjjhhhf ffdddbbb uiiiggge eecccaaa // zc: vtttrrrp ppnnnlll usssqqqo oommmkkk q[0 * stride] = za; q[1 * stride] = zb; q[2 * stride] = zc; } __forceinline__ __device__ void dequant_3bit_32 ( const uint32_t q_0, const uint32_t q_1, const uint32_t q_2, half2 (&dq)[16], int stride ) { const uint32_t c0 = 0x64006400; const half y8_ = __float2half_rn(1.0f / 8.0f); const half y64_ = __float2half_rn(1.0f / 64.0f); const half2 y8 = __halves2half2(y8_, y8_); const half2 y64 = __halves2half2(y64_, y64_); const half z1_ = __float2half_rn(-1024.0f - 4.0f); const half z8_ = __float2half_rn(-1024.0f / 8.0f - 4.0f); const half z64_ = __float2half_rn(-1024.0f / 64.0f - 4.0f); const half2 z1 = __halves2half2(z1_, z1_); const half2 z8 = __halves2half2(z8_, z8_); const half2 z64 = __halves2half2(z64_, z64_); uint32_t qa = q_0; uint32_t qb = q_1; uint32_t qc = q_2; half2_uint32 q0((qa & 0x00070007) | c0); // half2(q[ 0], q[ 1]) + 1024 half2_uint32 q1((qa & 0x00380038) | c0); // half2(q[ 2], q[ 3]) * 8 + 1024 qa >>= 6; half2_uint32 q2((qa & 0x00070007) | c0); // half2(q[ 4], q[ 5]) + 1024 half2_uint32 q3((qa & 0x00380038) | c0); // half2(q[ 6], q[ 7]) * 8 + 1024 half2_uint32 q4((qa & 0x01c001c0) | c0); // half2(q[ 8], q[ 9]) * 64 + 1024 qa >>= 9; qa &= 0x00010001; half2_uint32 q5((qb & 0x00070007) | c0); // half2(q[10], q[11]) + 1024 half2_uint32 q6((qb & 0x00380038) | c0); // half2(q[12], q[13]) * 8 + 1024 qb >>= 6; half2_uint32 q7((qb & 0x00070007) | c0); // half2(q[14], q[15]) + 1024 half2_uint32 q8((qb & 0x00380038) | c0); // half2(q[16], q[17]) * 8 + 1024 half2_uint32 q9((qb & 0x01c001c0) | c0); // half2(q[18], q[19]) * 64 + 1024 qb >>= 8; qb &= 0x00020002; half2_uint32 q10((qc & 0x00070007) | c0); // half2(q[20], q[21]) + 1024 half2_uint32 q11((qc & 0x00380038) | c0); // half2(q[22], q[23]) * 8 + 1024 qc >>= 6; half2_uint32 q12((qc & 0x00070007) | c0); // half2(q[24], q[25]) + 
1024 half2_uint32 q13((qc & 0x00380038) | c0); // half2(q[26], q[27]) * 8 + 1024 half2_uint32 q14((qc & 0x01c001c0) | c0); // half2(q[28], q[29]) * 64 + 1024 qc >>= 7; qc &= 0x00040004; half2_uint32 q15((qa | qb | qc) | c0); dq[ 0] = __hadd2( q0.as_half2, z1); dq[ 1] = __hfma2( q1.as_half2, y8, z8); dq[ 2] = __hadd2( q2.as_half2, z1); dq[ 3] = __hfma2( q3.as_half2, y8, z8); dq[ 4] = __hfma2( q4.as_half2, y64, z64); dq[ 5] = __hadd2( q5.as_half2, z1); dq[ 6] = __hfma2( q6.as_half2, y8, z8); dq[ 7] = __hadd2( q7.as_half2, z1); dq[ 8] = __hfma2( q8.as_half2, y8, z8); dq[ 9] = __hfma2( q9.as_half2, y64, z64); dq[10] = __hadd2(q10.as_half2, z1); dq[11] = __hfma2(q11.as_half2, y8, z8); dq[12] = __hadd2(q12.as_half2, z1); dq[13] = __hfma2(q13.as_half2, y8, z8); dq[14] = __hfma2(q14.as_half2, y64, z64); dq[15] = __hadd2(q15.as_half2, z1); } #else __forceinline__ __device__ void shuffle_3bit_32 ( uint32_t* q, int stride ) { } __forceinline__ __device__ void dequant_3bit_32 ( const uint32_t q_0, const uint32_t q_1, const uint32_t q_2, half2 (&dq)[16], int stride ) { half dqh[32]; for (int i = 0; i < 10; i++) dqh[ i] = dq_ns(exb( q_0, i * 3 , 0x07), 4); dqh[10 ] = dq_ns(exb(q_1, q_0, 30, 0x07), 4); for (int i = 0; i < 10; i++) dqh[11 + i] = dq_ns(exb( q_1, i * 3 + 1, 0x07), 4); dqh[21 ] = dq_ns(exb(q_2, q_1, 31, 0x07), 4); for (int i = 0; i < 10; i++) dqh[22 + i] = dq_ns(exb( q_2, i * 3 + 2, 0x07), 4); for (int i = 0; i < 16; i++) dq[i] = __halves2half2(dqh[i * 2], dqh[i * 2 + 1]); } #endif #endif
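// Worked size check (a reading aid, not extra functionality): one group of 32 3-bit
// weights occupies exactly 32 * 3 = 96 bits, i.e. the three uint32 words q_0, q_1,
// q_2 consumed above. shuffle_3bit_32 permutes those bits once at load time so that
// dequant_3bit_32 can pull out two values per step with cheap masks such as
// (q & 0x00070007) and map them to half2 pairs via the 1024-offset trick
// (c0 = 0x64006400), instead of re-deriving the awkward 3-bit boundaries on every call.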
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_3.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_3.cuh", "repo_id": "text-generation-inference", "token_count": 3335 }
187
import pytest import torch from copy import copy from transformers import AutoTokenizer from text_generation_server.pb import generate_pb2 from text_generation_server.models.causal_lm import CausalLM, CausalLMBatch @pytest.fixture(scope="session") def default_causal_lm(): return CausalLM("gpt2") @pytest.fixture(scope="session") def gpt2_tokenizer(): tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left") tokenizer.pad_token_id = 50256 return tokenizer @pytest.fixture def default_pb_request(default_pb_parameters, default_pb_stop_parameters): return generate_pb2.Request( id=0, inputs="Test", prefill_logprobs=True, truncate=100, parameters=default_pb_parameters, stopping_parameters=default_pb_stop_parameters, ) @pytest.fixture def default_pb_batch(default_pb_request): return generate_pb2.Batch(id=0, requests=[default_pb_request], size=1) @pytest.fixture def default_causal_lm_batch(default_pb_batch, gpt2_tokenizer): return CausalLMBatch.from_pb( default_pb_batch, gpt2_tokenizer, torch.float32, torch.device("cpu") ) @pytest.fixture def default_multi_requests_causal_lm_batch(default_pb_request, gpt2_tokenizer): req_0 = copy(default_pb_request) req_0.id = 1 req_1 = default_pb_request req_1.id = 2 req_1.stopping_parameters.max_new_tokens = 5 batch_pb = generate_pb2.Batch(id=1, requests=[req_0, req_1], size=2) return CausalLMBatch.from_pb( batch_pb, gpt2_tokenizer, torch.float32, torch.device("cpu") ) def test_batch_from_pb(default_pb_batch, default_causal_lm_batch): batch = default_causal_lm_batch assert batch.batch_id == default_pb_batch.id assert batch.requests == default_pb_batch.requests assert len(batch.input_ids) == default_pb_batch.size assert batch.input_ids[0][-1] == 14402 assert torch.all(batch.input_ids[0][:-1] == 50256) assert batch.attention_mask[0, 0] == 1 assert torch.all(batch.attention_mask[0, 1:] == 0) assert batch.past_key_values is None assert all( [ torch.equal(input_ids, all_input_ids[:, 0]) for input_ids, all_input_ids in zip(batch.input_ids, batch.all_input_ids) ] ) assert batch.input_lengths == [1] assert len(batch) == default_pb_batch.size assert len(batch.next_token_choosers) == len(batch.stopping_criterias) == len(batch) assert batch.max_input_length == batch.input_lengths[0] def test_batch_concatenate_no_prefill(default_causal_lm_batch): with pytest.raises(ValueError): CausalLMBatch.concatenate([default_causal_lm_batch, default_causal_lm_batch]) def test_causal_lm_batch_type(default_causal_lm): assert default_causal_lm.batch_type == CausalLMBatch def test_causal_lm_generate_token(default_causal_lm, default_causal_lm_batch): sequence_length = len(default_causal_lm_batch.all_input_ids[0]) generations, next_batch, _ = default_causal_lm.generate_token( default_causal_lm_batch ) assert len(generations) == len(next_batch) assert isinstance(next_batch, CausalLMBatch) assert len(next_batch.all_input_ids) == len(next_batch) assert len(next_batch.all_input_ids[0]) == sequence_length + 1 assert len(next_batch.attention_mask[0]) == 11 assert next_batch.all_input_ids[0][-1] == 13 assert next_batch.all_input_ids[0][-2] == 14402 assert torch.all(next_batch.all_input_ids[0][:-2] == 50256) assert torch.all(next_batch.attention_mask[0][0:2] == 1) assert torch.all(next_batch.attention_mask[0][2:] == 0) assert next_batch.input_ids.shape == (len(next_batch), 1) assert next_batch.input_ids[0, 0] == 13 assert next_batch.input_lengths == [2] assert next_batch.max_input_length == next_batch.input_lengths[0] assert next_batch.past_key_values is not None assert all( [p[0].shape == 
(1, 12, sequence_length, 64) for p in next_batch.past_key_values] ) assert all( [p[1].shape == (1, 12, sequence_length, 64) for p in next_batch.past_key_values] ) assert all([generation.generated_text is None for generation in generations]) assert all([len(generation.prefill_tokens) == 1 for generation in generations]) assert all( [ token_id.item() == 13 for generation in generations for token_id in generation.tokens.token_ids ] ) assert all( [ token_text == "." for generation in generations for token_text in generation.tokens.texts ] ) assert generations[0].request_id == 0 def test_causal_lm_generate_token_completion( default_causal_lm, default_causal_lm_batch ): next_batch = default_causal_lm_batch for _ in range(default_causal_lm_batch.stopping_criterias[0].max_new_tokens - 1): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is None assert len(generations) == 1 assert generations[0].generated_text.text == ".java:784) at net.minecraft." assert generations[0].request_id == default_causal_lm_batch.requests[0].id assert ( generations[0].generated_text.generated_tokens == default_causal_lm_batch.stopping_criterias[0].max_new_tokens ) def test_causal_lm_generate_token_completion_multi( default_causal_lm, default_multi_requests_causal_lm_batch ): next_batch = default_multi_requests_causal_lm_batch for i in range( default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens - 1 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is not None assert len(generations) == 2 assert generations[1].generated_text.text == ".java:784)" assert ( generations[1].request_id == default_multi_requests_causal_lm_batch.requests[1].id ) assert ( generations[1].generated_text.generated_tokens == default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens ) # Copy stopping_criterias before filtering stopping_criterias = ( default_multi_requests_causal_lm_batch.stopping_criterias.copy() ) next_batch = next_batch.filter([next_batch.requests[0].id]) for _ in range( stopping_criterias[0].max_new_tokens - stopping_criterias[1].max_new_tokens - 1 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is None assert len(generations) == 1 assert generations[0].generated_text.text == ".java:784) at net.minecraft." 
assert ( generations[0].request_id == default_multi_requests_causal_lm_batch.requests[0].id ) assert ( generations[0].generated_text.generated_tokens == default_multi_requests_causal_lm_batch.stopping_criterias[0].max_new_tokens ) def test_batch_concatenate( default_causal_lm, default_causal_lm_batch, default_multi_requests_causal_lm_batch ): next_batch_0 = default_causal_lm_batch _, next_batch_0, _ = default_causal_lm.generate_token(next_batch_0) _, next_batch_0, _ = default_causal_lm.generate_token(next_batch_0) next_batch_1 = default_multi_requests_causal_lm_batch _, next_batch_1, _ = default_causal_lm.generate_token(next_batch_1) # Clone past_key_values before concatenating to compare after, # because they are removed from the concatenated batches next_batch_0_past_key_values = [ (k.clone(), v.clone()) for (k, v) in next_batch_0.past_key_values ] next_batch_1_past_key_values = [ (k.clone(), v.clone()) for (k, v) in next_batch_1.past_key_values ] next_batch = CausalLMBatch.concatenate([next_batch_0, next_batch_1]) assert torch.equal(next_batch.all_input_ids[0], next_batch_0.all_input_ids[0]) assert torch.equal(next_batch.all_input_ids[1], next_batch_1.all_input_ids[0]) assert torch.equal(next_batch.all_input_ids[2], next_batch_1.all_input_ids[1]) assert torch.all( next_batch.attention_mask[0, : -next_batch.padding_right_offset] == 1 ) assert torch.all( next_batch.attention_mask[1:, 1 : -next_batch.padding_right_offset] == 1 ) assert torch.all(next_batch.attention_mask[1:, 3:] == 0) assert next_batch.batch_id == 0 assert next_batch.input_ids[0, 0] == 12355 assert torch.all(next_batch.input_ids[1:] == 13) assert next_batch.input_lengths == [3, 2, 2] assert next_batch.max_input_length == 3 assert next_batch.requests[0] == next_batch_0.requests[0] assert next_batch.requests[1:] == next_batch_1.requests assert next_batch.next_token_choosers[0] == next_batch_0.next_token_choosers[0] assert next_batch.next_token_choosers[1:] == next_batch_1.next_token_choosers assert next_batch.stopping_criterias[0] == next_batch_0.stopping_criterias[0] assert next_batch.stopping_criterias[1:] == next_batch_1.stopping_criterias assert next_batch.past_key_values is not None assert all([p[0].shape == (3, 12, 2, 64) for p in next_batch.past_key_values]) assert all([p[1].shape == (3, 12, 2, 64) for p in next_batch.past_key_values]) for i, past in enumerate(next_batch.past_key_values): assert torch.equal(next_batch_0_past_key_values[i][0][0, :, -2:], past[0][0]) assert torch.equal( next_batch_1_past_key_values[i][0][:, :, -1:], past[0][1:, :, -1:, :] ) assert torch.equal(next_batch_0_past_key_values[i][1][0, :, -2:], past[1][0]) assert torch.equal( next_batch_1_past_key_values[i][1][:, :, -1:], past[1][1:, :, -1:, :] ) for _ in range( default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens - 2 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is not None assert len(generations) == 3 assert generations[2].generated_text.text == ".java:784)" assert ( generations[2].request_id == default_multi_requests_causal_lm_batch.requests[1].id ) assert ( generations[2].generated_text.generated_tokens == default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens ) next_batch = next_batch.filter( [next_batch.requests[0].id, next_batch.requests[1].id] ) for _ in range( default_causal_lm_batch.stopping_criterias[0].max_new_tokens - 
default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens - 2 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is not None assert len(generations) == 2 assert generations[0].generated_text.text == ".java:784) at net.minecraft." assert generations[0].request_id == default_causal_lm_batch.requests[0].id assert ( generations[0].generated_text.generated_tokens == default_causal_lm_batch.stopping_criterias[0].max_new_tokens ) next_batch = next_batch.filter([next_batch.requests[1].id]) for _ in range( default_multi_requests_causal_lm_batch.stopping_criterias[0].max_new_tokens - default_causal_lm_batch.stopping_criterias[0].max_new_tokens - default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens - 4 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is None assert len(generations) == 1 assert generations[0].generated_text.text == ".java:784) at net.minecraft." assert ( generations[0].request_id == default_multi_requests_causal_lm_batch.requests[0].id ) assert ( generations[0].generated_text.generated_tokens == default_multi_requests_causal_lm_batch.stopping_criterias[0].max_new_tokens )
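# Usage sketch: these tests are ordinarily collected by pytest, e.g. (invocation and
# working directory are assumptions):
#
#     cd server && pytest tests/models/test_causal_lm.py -sv
#
# Note that the `default_pb_parameters` and `default_pb_stop_parameters` fixtures are
# not defined in this file; they are expected to come from a shared conftest.py.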
text-generation-inference/server/tests/models/test_causal_lm.py/0
{ "file_path": "text-generation-inference/server/tests/models/test_causal_lm.py", "repo_id": "text-generation-inference", "token_count": 5345 }
188
# This code was adapted from https://github.com/lucidrains/flamingo-pytorch licensed under the MIT License. # # MIT License # # Copyright (c) 2020 The Google AI Language Team Authors, The HuggingFace Inc. team and github/lonePatient # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. """ Generic interface to various configurations of the Perceiver Resampler, that simply takes in a series of (potentially time-indexed) contextual embeddings, and "resamples" (compresses) them down to a pre-specified number of latents! Note that the Perceiver in general resamples based solely off the *long-range* context; there's a nice opportunity here to prime the Perceiver Resampler with say a single layer's worth of language embeddings (the target domain), and use that to softly "retrieve & compress" what we need --> this would be a novel contribution we should explore. References: - DeepMind's Flamingo: https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model - Code borrowed w/ love from: https://github.com/lucidrains/flamingo-pytorch """ from typing import Optional, Tuple import torch import torch.nn as nn from text_generation_server.utils.layers import ( TensorParallelColumnLinear, TensorParallelRowLinear, ) EPS = 1e-5 class IdeficsPerceiverResampler(nn.Module): def __init__( self, prefix, config, embed_dim: int, depth: int, n_heads: int, head_dim: int, n_latents: int, weights, ) -> None: """ Instantiates a Perceiver Resampler that operates over a sequence of embeddings (say from a ResNet or ViT or MAE) of a given dimension, performs `depth` blocks of cross-attention with a fixed `n_latents` inputs, then returns a Tensor of shape [bsz, n_latents, embed_dim]. :param embed_dim: Dimensionality of embeddings being fed to the Perceiver Resampler (also dimensionality of latent embeddings *returned* by the Perceiver Resampler. Could be e.g., VIT embed_dim, ResNet pool dim, and so on. Args: config (`IdeficsConfig`): config object embed_dim (`int`): The size of each embedding vector depth (`int`): Depth of the Perceiver Resampler (Transformer w/ cross attention). Should be shallow (< 3). n_heads (`int`): Number of heads in each Transformer block (for multi-headed self-attention). head_dim (`int`): Dimensionality of each head projection in the Transformer block. n_latents (`int`): Number of latent embeddings to resample ("compress") the input sequence to (usually < 128). 
""" super().__init__() self.embed_dim, self.n_heads, self.head_dim, self.n_latents = ( embed_dim, n_heads, head_dim, n_latents, ) self.qk_layer_norms = config.perceiver_config.qk_layer_norms_perceiver # Create Latents for Perceiver self.latents = nn.Parameter(weights.get_tensor(f"{prefix}.latents")) self.intermediate_dim = ( self.embed_dim * 4 if not hasattr(config.vision_config, "embed_dim") else config.vision_config.embed_dim * 4 ) # Create Transformer Blocks self.blocks = nn.ModuleList( [ nn.ModuleList( [ IdeficsPerceiverAttention( prefix=f"{prefix}.blocks.{layer_id}.0", config=config, embed_dim=self.embed_dim, n_heads=self.n_heads, head_dim=self.head_dim, qk_layer_norms=self.qk_layer_norms, weights=weights, ), IdeficsMLP( prefix=f"{prefix}.blocks.{layer_id}.1", intermediate_size=self.intermediate_dim, config=config, weights=weights, ), ] ) for layer_id in range(depth) ] ) self.layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.layer_norm", weights=weights, eps=EPS ) def forward(self, context: torch.Tensor) -> torch.Tensor: """Resample arbitrary length context & *compress* down to self.n_latents latent embeddings""" # einsum.repeat(self.latents, "seq embed -> bsz seq embed", bsz=context.shape[0]) latents = self.latents.repeat(context.shape[0], 1, 1) # Feed through Perceiver Attention blocks... for attn, ff in self.blocks: latents = attn(context, latents) + latents latents = ff(latents) + latents return self.layer_norm(latents) class IdeficsPerceiverAttention(nn.Module): def __init__( self, prefix, config, embed_dim: int, n_heads: int, head_dim: int, qk_layer_norms: bool, weights, ) -> None: """Perceiver Cross-Attention Module --> let long-form inputs be `context`, resampled embeddings be `latents`""" super().__init__() self.embed_dim, self.n_heads, self.head_dim = embed_dim, n_heads, head_dim self.qk_layer_norms = qk_layer_norms # Normalization & Scaling self.context_layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.context_layer_norm", weights=weights, eps=EPS ) self.latents_layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.latents_layer_norm", weights=weights, eps=EPS ) if self.qk_layer_norms: self.q_layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.q_layer_norm", weights=weights, eps=EPS ) self.k_layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.k_layer_norm", weights=weights, eps=EPS ) self.qk_scale = self.head_dim**-0.5 process_group = weights.process_group if n_heads % weights.process_group.size() != 0: raise ValueError( f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {n_heads} " f"and `num_shards`: {weights.process_group.size()}" ) self.n_heads //= weights.process_group.size() # Q, K, V Projection (no bias -- detail from Perceiver/Flamingo Papers). self.q_proj = TensorParallelColumnLinear.load( config=config, prefix=f"{prefix}.q_proj", weights=weights, bias=False ) self.k_proj = TensorParallelColumnLinear.load( config=config, prefix=f"{prefix}.k_proj", weights=weights, bias=False ) self.v_proj = TensorParallelColumnLinear.load( config=config, prefix=f"{prefix}.v_proj", weights=weights, bias=False ) self.output_proj = TensorParallelRowLinear.load( config=config, prefix=f"{prefix}.output_proj", weights=weights, bias=False ) def forward(self, context: torch.Tensor, latents: torch.Tensor) -> torch.Tensor: """ Runs Perceiver Self-Attention, with special (context, latents) appended along the `seq` dimension! Args: context (`torch.Tensor`): Tensor of shape `[bsz, seq, embed_dim]` representing long-form context to resample. 
latents (`torch.Tensor`): Tensor of shape `[bsz, n_latents, embed_dim]` representing fixed length latents to compress to. Returns: `torch.Tensor`: Tensor of shape `[bsz, n_latents, embed_dim]` representing attention over latents w/ cross from context. """ context = self.context_layer_norm(context) latents = self.latents_layer_norm(latents) batch_size, seq_length, embed_dim = context.shape[:3] # Query, Key, Value Projections --> Note that in Flamingo, latents are *concatenated* with context prior to attn! # Note: This results in queries w/ `seq = n_latents`, and keys, values with `seq = len(context) + n_latents` q = self.q_proj(latents) k = self.k_proj(torch.cat([context, latents], dim=-2)) v = self.v_proj(torch.cat([context, latents], dim=-2)) # Multiheaded Self-Attention w/ stable softmax (subtract per-row max -- `amax` -- before softmax call) # =>> `attn` should be a 2D matrix of shape [n_latents x (context + n_latents)] # einsum.rearrange(x, "bsz seq (heads embed) -> bsz heads seq embed", heads=self.n_heads) q, k, v = [ x.reshape(batch_size, x.shape[1], self.n_heads, self.head_dim).transpose( 1, 2 ) for x in (q, k, v) ] if self.qk_layer_norms: q = self.q_layer_norm(q) k = self.k_layer_norm(k) scores = torch.einsum("... i d, ... j d -> ... i j", q * self.qk_scale, k) stabilized_scores = scores - (scores.amax(dim=-1, keepdim=True).detach()) attn = stabilized_scores.softmax(dim=-1) # Attend & project back to output... resampled = torch.einsum("... i j, ... j d -> ... i d", attn, v) # einsum.rearrange(resampled, "bsz heads seq embed -> bsz seq (heads embed)", heads=self.n_heads) return self.output_proj(resampled.transpose(1, 2).flatten(-2)) class IdeficsMLP(nn.Module): def __init__( self, prefix, intermediate_size, config, weights, ): """Simple MLP block with intermediate_size and embedding size""" super().__init__() self.embed_dim = config.vision_config.embed_dim self.ln = nn.LayerNorm.load(prefix=f"{prefix}.ln", weights=weights, eps=EPS) self.fc = TensorParallelColumnLinear.load( config=config, prefix=f"{prefix}.fc", weights=weights, bias=False, ) self.act = nn.ReLU() self.c_proj = TensorParallelRowLinear.load( config=config, prefix=f"{prefix}.c_proj", weights=weights, bias=False, ) def forward( self, hidden_states: Optional[Tuple[torch.FloatTensor]] ) -> torch.FloatTensor: hidden_states = self.ln(hidden_states) hidden_states = self.fc(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.c_proj(hidden_states) return hidden_states
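# Shape sketch (illustrative numbers only; the real embed_dim and n_latents come from
# the Idefics config, and the values below are assumptions):
#
#     context: [bsz, seq, embed_dim]         e.g. [2, 576, 1152]  vision encoder output
#     latents: [bsz, n_latents, embed_dim]   e.g. [2, 64, 1152]
#     IdeficsPerceiverResampler(...)(context) -> [2, 64, 1152]
#
# i.e. the resampler compresses an arbitrary-length visual sequence down to a fixed
# number of latent tokens, as described in the class docstring above.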
text-generation-inference/server/text_generation_server/models/custom_modeling/idefics_perceiver.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/idefics_perceiver.py", "repo_id": "text-generation-inference", "token_count": 5171 }
189
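For reference, the cross-attention pattern implemented by `IdeficsPerceiverResampler` and `IdeficsPerceiverAttention` above reduces to a few lines of plain PyTorch. The sketch below is a hypothetical toy with random weights and made-up dimensions; it leaves out the sharded-weight loading, QK layer norms and MLP blocks, and only shows how an arbitrary-length `context` is compressed into a fixed number of latent embeddings.

# Hypothetical toy (random weights, illustrative dimensions); mirrors only the
# attention math of the resampler above, not its weight loading or MLP blocks.
import torch
import torch.nn as nn


class ToyPerceiverResampler(nn.Module):
    def __init__(self, embed_dim=64, n_heads=4, n_latents=8):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = embed_dim // n_heads
        self.scale = self.head_dim**-0.5
        # Learned latent queries, shared across the batch
        self.latents = nn.Parameter(torch.randn(n_latents, embed_dim))
        self.q_proj = nn.Linear(embed_dim, embed_dim, bias=False)
        self.k_proj = nn.Linear(embed_dim, embed_dim, bias=False)
        self.v_proj = nn.Linear(embed_dim, embed_dim, bias=False)
        self.out_proj = nn.Linear(embed_dim, embed_dim, bias=False)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        bsz = context.shape[0]
        latents = self.latents.unsqueeze(0).repeat(bsz, 1, 1)
        # Queries come from the latents; keys/values from context + latents
        q = self.q_proj(latents)
        kv_in = torch.cat([context, latents], dim=-2)
        k, v = self.k_proj(kv_in), self.v_proj(kv_in)

        def split_heads(x):
            return x.view(bsz, x.shape[1], self.n_heads, self.head_dim).transpose(1, 2)

        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        scores = torch.einsum("bhid,bhjd->bhij", q * self.scale, k)
        # Stable softmax: subtract the per-row max before normalizing
        attn = (scores - scores.amax(dim=-1, keepdim=True).detach()).softmax(dim=-1)
        out = torch.einsum("bhij,bhjd->bhid", attn, v)
        return self.out_proj(out.transpose(1, 2).flatten(-2))


context = torch.randn(2, 37, 64)                  # arbitrary-length features
print(ToyPerceiverResampler()(context).shape)     # torch.Size([2, 8, 64])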
import math import torch import torch.distributed from opentelemetry import trace from transformers.models.qwen2 import Qwen2Tokenizer from typing import Optional from text_generation_server.models.cache_manager import BLOCK_SIZE from text_generation_server.models.flash_mistral import ( BaseFlashMistral, set_sliding_window, ) from text_generation_server.models.custom_modeling.flash_qwen2_modeling import ( Qwen2ForCausalLM, ) from transformers.models.qwen2 import Qwen2Config from text_generation_server.utils import ( initialize_torch_distributed, weight_files, Weights, ) tracer = trace.get_tracer(__name__) class FlashQwen2(BaseFlashMistral): def __init__( self, model_id: str, revision: Optional[str] = None, quantize: Optional[str] = None, use_medusa: Optional[str] = None, dtype: Optional[torch.dtype] = None, trust_remote_code: bool = False, ): self.process_group, rank, world_size = initialize_torch_distributed() if torch.cuda.is_available(): device = torch.device(f"cuda:{rank}") dtype = torch.float16 if dtype is None else dtype else: raise NotImplementedError("FlashQwen2 is only available on GPU") tokenizer = Qwen2Tokenizer.from_pretrained( model_id, revision=revision, padding_side="left", truncation_side="left", trust_remote_code=trust_remote_code, ) config = Qwen2Config.from_pretrained( model_id, revision=revision, trust_remote_code=trust_remote_code ) config.quantize = quantize config.use_medusa = use_medusa # Set context windows if config.sliding_window is not None: set_sliding_window( config.sliding_window, math.ceil(config.sliding_window / BLOCK_SIZE) ) torch.distributed.barrier(group=self.process_group) filenames = weight_files(model_id, revision=revision, extension=".safetensors") weights = Weights(filenames, device, dtype, process_group=self.process_group) if config.quantize in ["gptq", "awq"]: weights._set_gptq_params(model_id, revision) model = Qwen2ForCausalLM(config, weights) self.cuda_graphs = {} torch.distributed.barrier(group=self.process_group) super(BaseFlashMistral, self).__init__( model=model, tokenizer=tokenizer, num_layers=len(model.model.layers), num_kv_heads=model.model.num_key_value_heads, head_size=model.model.head_size, dtype=dtype, device=device, rank=rank, world_size=world_size, sliding_window=config.sliding_window, )
text-generation-inference/server/text_generation_server/models/flash_qwen2.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/flash_qwen2.py", "repo_id": "text-generation-inference", "token_count": 1259 }
190
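One detail of `FlashQwen2.__init__` above worth spelling out is the sliding-window setup: the number of KV-cache blocks that cover the window is the window length divided by the block size, rounded up. The sketch below assumes a `BLOCK_SIZE` of 16 (the value used by TGI's cache manager at the time of writing) and an illustrative 4096-token window; both numbers are assumptions, not values read from a real checkpoint.

# Minimal sketch of the sliding-window block arithmetic above. BLOCK_SIZE=16 and the
# 4096-token window are assumptions for illustration, not values read from a config.
import math

BLOCK_SIZE = 16                 # tokens per KV-cache block (assumed)
sliding_window = 4096           # e.g. taken from the model's config.json (assumed)

blocks_needed = math.ceil(sliding_window / BLOCK_SIZE)
print(blocks_needed)            # 256 blocks of 16 tokens cover the attention window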
import torch import time from dataclasses import dataclass from opentelemetry import trace from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, PreTrainedTokenizerBase from typing import Optional, Tuple, List, Type, Dict from text_generation_server.utils.tokens import batch_top_tokens from text_generation_server.models import Model from text_generation_server.models.types import ( GeneratedText, Batch, Generation, Tokens, ) from text_generation_server.pb import generate_pb2 from text_generation_server.utils import NextTokenChooser, StoppingCriteria, Sampling tracer = trace.get_tracer(__name__) @dataclass class Seq2SeqLMBatch(Batch): batch_id: int requests: List[generate_pb2.Request] requests_idx_mapping: Dict[int, int] # Encoder values input_ids: Optional[torch.Tensor] attention_mask: torch.Tensor # Decoder values decoder_input_ids: torch.Tensor decoder_attention_mask: Optional[torch.Tensor] encoder_last_hidden_state: Optional[torch.Tensor] # All tokens all_decoder_input_ids: List[torch.Tensor] # Seq2SeqLM keeps track of both encoder and decoder attention keys and values past_key_values: Optional[List[Tuple]] # Lengths of all generations present in the batch input_lengths: List[int] decoder_input_lengths: List[int] prefix_offsets: List[int] read_offsets: List[int] # Generation helpers next_token_choosers: List[NextTokenChooser] stopping_criterias: List[StoppingCriteria] top_n_tokens: List[int] top_n_tokens_tensor: torch.Tensor # Metadata used for padding max_input_length: int max_decoder_input_length: int padding_right_offset: int # Maximum number of tokens this batch will grow to max_tokens: int def to_pb(self) -> generate_pb2.CachedBatch: """Convert a Seq2SeqLMBatch to a text_generation_server.v1.CachedBatch protobuf""" return generate_pb2.CachedBatch( id=self.batch_id, request_ids=[r.id for r in self.requests], size=len(self), max_tokens=self.max_tokens, ) @classmethod def from_pb( cls, pb: generate_pb2.Batch, tokenizer: PreTrainedTokenizerBase, dtype: torch.dtype, device: torch.device, ) -> "Seq2SeqLMBatch": """Convert a text_generation_server.v1.Batch protobuf to a Seq2SeqLMBatch""" inputs = [] next_token_choosers = [] stopping_criterias = [] top_n_tokens = [] decoder_input_lengths = [] prefix_offsets = [] read_offsets = [] requests_idx_mapping = {} # Parse batch max_truncation = 0 padding_right_offset = 0 max_decode_tokens = 0 for i, r in enumerate(pb.requests): inputs.append(r.inputs) requests_idx_mapping[r.id] = i decoder_input_lengths.append(1) next_token_choosers.append( NextTokenChooser.from_pb(r.parameters, device, tokenizer) ) stopping_criteria = StoppingCriteria.from_pb( r.stopping_parameters, tokenizer ) stopping_criterias.append(stopping_criteria) top_n_tokens.append(r.top_n_tokens) max_truncation = max(max_truncation, r.truncate) max_decode_tokens += stopping_criteria.max_new_tokens padding_right_offset = max( padding_right_offset, stopping_criteria.max_new_tokens ) # Tokenize batch tokenized_inputs = tokenizer( inputs, return_tensors="pt", padding=True, return_token_type_ids=False, truncation=True, max_length=max_truncation, ).to(device) input_lengths = tokenized_inputs["attention_mask"].sum(1) max_input_length = input_lengths.max() # Decoder sequence only contains the bos_token decoder_input_ids = ( torch.tensor(tokenizer.bos_token_id, device=device) .repeat(len(pb.requests)) .view(-1, 1) ) for _ in pb.requests: prefix_offsets.append(0) read_offsets.append(1) all_decoder_input_ids = decoder_input_ids.view(-1).split(1) top_n_tokens_tensor = torch.tensor( 
top_n_tokens, device=device, dtype=torch.int64 ) max_tokens = len(inputs) * (max_input_length + max_decode_tokens) return cls( batch_id=pb.id, requests=pb.requests, requests_idx_mapping=requests_idx_mapping, input_ids=tokenized_inputs["input_ids"], attention_mask=tokenized_inputs["attention_mask"], decoder_input_ids=decoder_input_ids, all_decoder_input_ids=list(all_decoder_input_ids), decoder_attention_mask=None, encoder_last_hidden_state=None, past_key_values=None, input_lengths=input_lengths.tolist(), decoder_input_lengths=decoder_input_lengths, prefix_offsets=prefix_offsets, read_offsets=read_offsets, next_token_choosers=next_token_choosers, stopping_criterias=stopping_criterias, top_n_tokens=top_n_tokens, top_n_tokens_tensor=top_n_tokens_tensor, max_input_length=max_input_length.item(), max_decoder_input_length=1, padding_right_offset=padding_right_offset, max_tokens=max_tokens, ) @tracer.start_as_current_span("filter") def filter(self, request_ids: List[int]) -> Optional["Seq2SeqLMBatch"]: if len(request_ids) == 0: raise ValueError("Batch must have at least one request") if len(request_ids) == len(self): return self keep_indices = [] # New values after filtering requests_idx_mapping = {} requests = [] input_lengths = [] decoder_input_lengths = [] prefix_offsets = [] read_offsets = [] all_decoder_input_ids = [] next_token_choosers = [] stopping_criterias = [] top_n_tokens = [] max_input_length = 0 max_decoder_input_length = 0 padding_right_offset = 0 total_remaining_decode_tokens = 0 for i, request_id in enumerate(request_ids): idx = self.requests_idx_mapping[request_id] requests_idx_mapping[request_id] = i keep_indices.append(idx) requests.append(self.requests[idx]) prefix_offsets.append(self.prefix_offsets[idx]) read_offsets.append(self.read_offsets[idx]) all_decoder_input_ids.append(self.all_decoder_input_ids[idx]) request_input_length = self.input_lengths[idx] input_lengths.append(request_input_length) max_input_length = max(max_input_length, request_input_length) request_decoder_input_length = self.decoder_input_lengths[idx] decoder_input_lengths.append(request_decoder_input_length) max_decoder_input_length = max( max_decoder_input_length, request_decoder_input_length ) next_token_choosers.append(self.next_token_choosers[idx]) stopping_criteria = self.stopping_criterias[idx] stopping_criterias.append(stopping_criteria) top_n_tokens.append(self.top_n_tokens[idx]) remaining_decode_tokens = ( stopping_criteria.max_new_tokens - stopping_criteria.current_tokens ) total_remaining_decode_tokens += remaining_decode_tokens padding_right_offset = max(padding_right_offset, remaining_decode_tokens) # Apply indices to input_ids, attention mask, past key values and other items that need to be cached self.decoder_input_ids = self.decoder_input_ids[keep_indices] self.attention_mask = self.attention_mask[keep_indices, -max_input_length:] if self.decoder_attention_mask is not None: self.decoder_attention_mask = self.decoder_attention_mask[ keep_indices, -(self.padding_right_offset + max_decoder_input_length) : ( self.decoder_attention_mask.shape[1] - self.padding_right_offset ) + padding_right_offset, ] self.encoder_last_hidden_state = self.encoder_last_hidden_state[ keep_indices, -max_input_length: ] # Ensure that past_key_values tensors can be updated in-place if type(self.past_key_values[0]) == tuple: self.past_key_values = [ [t for t in layer] for layer in self.past_key_values ] decoder_past_seq_len = max_decoder_input_length - 1 for layer in self.past_key_values: layer[0] = 
layer[0][keep_indices, :, -decoder_past_seq_len:] layer[1] = layer[1][keep_indices, :, -decoder_past_seq_len:] layer[2] = layer[2][keep_indices, :, -max_input_length:] layer[3] = layer[3][keep_indices, :, -max_input_length:] top_n_tokens_tensor = self.top_n_tokens_tensor[keep_indices] max_tokens = ( len(request_ids) * (max_input_length + max_decoder_input_length) + remaining_decode_tokens ) self.requests = requests self.requests_idx_mapping = requests_idx_mapping self.input_ids = None self.all_decoder_input_ids = all_decoder_input_ids self.input_lengths = input_lengths self.decoder_input_lengths = decoder_input_lengths self.prefix_offsets = prefix_offsets self.read_offsets = read_offsets self.next_token_choosers = next_token_choosers self.stopping_criterias = stopping_criterias self.top_n_tokens = top_n_tokens self.top_n_tokens_tensor = top_n_tokens_tensor self.max_input_length = max_input_length self.max_decoder_input_length = max_decoder_input_length self.padding_right_offset = padding_right_offset self.max_tokens = max_tokens return self @classmethod @tracer.start_as_current_span("concatenate") def concatenate(cls, batches: List["Seq2SeqLMBatch"]) -> "Seq2SeqLMBatch": """Concatenate multiple batches together by padding internal torch tensors""" # Used for padding total_batch_size = 0 max_input_length = 0 max_decoder_input_length = 0 padding_right_offset = 0 for batch in batches: total_batch_size += len(batch) max_input_length = max(max_input_length, batch.max_input_length) max_decoder_input_length = max( max_decoder_input_length, batch.max_decoder_input_length ) padding_right_offset = max(padding_right_offset, batch.padding_right_offset) # Batch attributes requests = [] requests_idx_mapping = {} all_decoder_input_ids = [] input_lengths = [] decoder_input_lengths = [] prefix_offsets = [] read_offsets = [] next_token_choosers = [] stopping_criterias = [] top_n_tokens = [] max_tokens = 0 # Batch tensors attention_mask = None decoder_input_ids = None decoder_attention_mask = None encoder_last_hidden_state = None top_n_tokens_tensor = None past_key_values = [] # Used for slicing correctly inside the tensors # Equivalent to a cumsum on batch sizes start_index = 0 for i, batch in enumerate(batches): # Extend all list attributes requests.extend(batch.requests) all_decoder_input_ids.extend(batch.all_decoder_input_ids) input_lengths.extend(batch.input_lengths) decoder_input_lengths.extend(batch.decoder_input_lengths) prefix_offsets.extend(batch.prefix_offsets) read_offsets.extend(batch.read_offsets) next_token_choosers.extend(batch.next_token_choosers) stopping_criterias.extend(batch.stopping_criterias) top_n_tokens.extend(batch.top_n_tokens) if i == 0: requests_idx_mapping = batch.requests_idx_mapping else: # We need to offset the mapping for each batch by the cumulative batch size for k, v in batch.requests_idx_mapping.items(): requests_idx_mapping[k] = v + start_index # Slicing end index for this batch end_index = start_index + len(batch) # We only concatenate batches that did at least one step if batch.encoder_last_hidden_state is None: raise ValueError("Batch encoder_last_hidden_state cannot be None") # Create padded tensor if attention_mask is None: attention_mask = batch.attention_mask.new_zeros( (total_batch_size, max_input_length), ) # Copy to correct indices attention_mask[start_index:end_index, -batch.max_input_length :] = ( batch.attention_mask[:, -batch.max_input_length :] ) # Create padded tensor if decoder_input_ids is None: decoder_input_ids = batch.decoder_input_ids.new_zeros( 
(total_batch_size, 1), ) # Copy to correct indices decoder_input_ids[start_index:end_index] = batch.decoder_input_ids # Create padded tensor if decoder_attention_mask is None: # As decoder_attention_mask might not exist, we use `batch.attention_mask` for device here decoder_attention_mask = batch.attention_mask.new_zeros( (total_batch_size, max_decoder_input_length + padding_right_offset), ) # If the decoder mask does not exist yet, all generations started at the same time and we never concatenated # this batch. All generations are of length `batch.max_decoder_input_length`. left_offset = max_decoder_input_length - batch.max_decoder_input_length if batch.decoder_attention_mask is None: decoder_attention_mask[ start_index:end_index, left_offset:-padding_right_offset, ] = 1 # If it exists, we need to index else: batch_left_offset = ( batch.decoder_attention_mask.shape[1] - batch.max_decoder_input_length - batch.padding_right_offset ) decoder_attention_mask[ start_index:end_index, left_offset:-padding_right_offset, ] = batch.decoder_attention_mask[ :, batch_left_offset : -batch.padding_right_offset, ] # Create padded tensor if encoder_last_hidden_state is None: encoder_last_hidden_state = batch.encoder_last_hidden_state.new_zeros( ( total_batch_size, max_input_length, batch.encoder_last_hidden_state.shape[-1], ), ) if top_n_tokens_tensor is None: top_n_tokens_tensor = batches[0].top_n_tokens_tensor.new_zeros( total_batch_size, ) top_n_tokens_tensor[start_index:end_index] = batch.top_n_tokens_tensor # Copy to correct indices encoder_last_hidden_state[ start_index:end_index, -batch.max_input_length :, : ] = batch.encoder_last_hidden_state[:, -batch.max_input_length :, :] batch.encoder_last_hidden_state = None # Ensure that we can update tensors in-place if type(batch.past_key_values[0]) == tuple: batch.past_key_values = [ [t for t in layer] for layer in batch.past_key_values ] # Add eventual padding tokens that were added while concatenating max_tokens += batch.max_tokens + ( max_input_length - batch.max_input_length + max_decoder_input_length - batch.max_decoder_input_length ) * len(batch) start_index = end_index # Determine shapes for new past kv tensors first_past_kvs = batches[0].past_key_values _, num_heads, _, head_dim = first_past_kvs[0][0].shape padded_dec_t_shape = ( total_batch_size, num_heads, (max_decoder_input_length - 1), head_dim, ) padded_enc_t_shape = ( total_batch_size, num_heads, max_input_length, head_dim, ) # Iterate over attention layers for j in range(len(first_past_kvs)): past_key_values.append([]) # Decoder past for k in range(0, 2): # Initialize tensors padded_past_values = first_past_kvs[j][k].new_zeros(padded_dec_t_shape) past_key_values[j].append(padded_past_values) start_index = 0 for batch in batches: t = batch.past_key_values[j][k] # Clear reference to the original tensor batch.past_key_values[j][k] = None # Slicing end index for this batch end_index = start_index + len(batch) # We slice the past keys and values to remove the padding from previous batches past_seq_len = batch.max_decoder_input_length - 1 padded_past_values[start_index:end_index, :, -past_seq_len:, :] = t[ :, :, -past_seq_len:, : ] del t start_index = end_index # Encoder past for k in range(2, 4): # Initialize tensors padded_past_values = first_past_kvs[j][k].new_zeros(padded_enc_t_shape) past_key_values[j].append(padded_past_values) start_index = 0 for batch in batches: t = batch.past_key_values[j][k] # Clear reference to the original tensor batch.past_key_values[j][k] = None # Slicing end index 
for this batch end_index = start_index + len(batch) # We slice the past keys and values to remove the padding from previous batches padded_past_values[ start_index:end_index, :, -batch.max_input_length :, : ] = t[:, :, -batch.max_input_length :, :] del t start_index = end_index return cls( batch_id=batches[0].batch_id, requests=requests, requests_idx_mapping=requests_idx_mapping, input_ids=None, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, all_decoder_input_ids=all_decoder_input_ids, decoder_attention_mask=decoder_attention_mask, encoder_last_hidden_state=encoder_last_hidden_state, past_key_values=past_key_values, input_lengths=input_lengths, decoder_input_lengths=decoder_input_lengths, prefix_offsets=prefix_offsets, read_offsets=read_offsets, next_token_choosers=next_token_choosers, stopping_criterias=stopping_criterias, top_n_tokens=top_n_tokens, top_n_tokens_tensor=top_n_tokens_tensor, max_input_length=max_input_length, max_decoder_input_length=max_decoder_input_length, padding_right_offset=padding_right_offset, max_tokens=max_tokens, ) def __len__(self): return len(self.requests) class Seq2SeqLM(Model): def __init__( self, model_id: str, revision: Optional[str] = None, quantize: Optional[str] = None, use_medusa: Optional[str] = None, dtype: Optional[torch.dtype] = None, trust_remote_code: bool = False, ): if use_medusa: raise RuntimeError("Medusa decoding is not enabled for AutoModel") if torch.cuda.is_available(): device = torch.device("cuda") dtype = torch.float16 if dtype is None else dtype else: if quantize: raise ValueError("quantization is not available on CPU") device = torch.device("cpu") dtype = torch.float32 if dtype is None else dtype model = AutoModelForSeq2SeqLM.from_pretrained( model_id, revision=revision, torch_dtype=dtype, device_map=( "auto" if torch.cuda.is_available() and torch.cuda.device_count() > 1 else None ), load_in_8bit=quantize == "bitsandbytes", trust_remote_code=trust_remote_code, ) if torch.cuda.is_available() and torch.cuda.device_count() == 1: model = model.cuda() tokenizer = AutoTokenizer.from_pretrained( model_id, revision=revision, padding_side="left", truncation_side="left", trust_remote_code=trust_remote_code, ) tokenizer.bos_token_id = model.config.decoder_start_token_id super(Seq2SeqLM, self).__init__( model=model, tokenizer=tokenizer, requires_padding=True, dtype=dtype, device=device, ) @property def batch_type(self) -> Type[Seq2SeqLMBatch]: return Seq2SeqLMBatch def decode(self, decoder_ids: List[int]) -> str: return self.tokenizer.decode( decoder_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) def forward( self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask: Optional, encoder_last_hidden_state: Optional, past_key_values: Optional = None, ) -> Tuple[ torch.Tensor, Optional[torch.Tensor], torch.Tensor, List[Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]], ]: # Model Forward outputs = self.model.forward( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, encoder_outputs=encoder_last_hidden_state, past_key_values=past_key_values, use_cache=True, ) if isinstance(outputs, tuple): # Our custom models outputs, speculative_logits = outputs else: # Generic transformers models speculative_logits = None return ( outputs.logits, speculative_logits, outputs.encoder_last_hidden_state, outputs.past_key_values, ) @tracer.start_as_current_span("generate_token") def generate_token( self, batch: 
Seq2SeqLMBatch ) -> Tuple[List[Generation], Optional[Seq2SeqLMBatch], Tuple[int, int]]: start = time.time_ns() if batch.decoder_attention_mask is not None: # slice to the correct shape decoder_attention_mask = batch.decoder_attention_mask[ :, : -batch.padding_right_offset ] else: decoder_attention_mask = None # Wrap `encoder_last_hidden_state` because for some reason, Transformers does a `encoder_last_hidden_state[0]` # internally... if batch.encoder_last_hidden_state is not None: encoder_last_hidden_state = [batch.encoder_last_hidden_state] else: encoder_last_hidden_state = None logits, speculative_logits, encoder_last_hidden_state, past = self.forward( batch.input_ids, batch.attention_mask, batch.decoder_input_ids, decoder_attention_mask, encoder_last_hidden_state, batch.past_key_values, ) # Speculation is not active for seq2seq accepted_ids = torch.ones_like(batch.decoder_input_ids)[:, 0] batch_top_token_ids, batch_top_token_logprobs = batch_top_tokens( batch.top_n_tokens, batch.top_n_tokens_tensor, torch.log_softmax(logits[:, -1], -1), accepted_ids, ) start_decode = time.time_ns() # Finished requests generations: List[Generation] = [] stopped = True # Zipped iterator iterator = zip( batch.requests, batch.input_lengths, batch.prefix_offsets, batch.read_offsets, batch.decoder_input_lengths, logits, batch.next_token_choosers, batch.stopping_criterias, batch.all_decoder_input_ids, batch.top_n_tokens, batch_top_token_ids, batch_top_token_logprobs, ) # For each member of the batch for i, ( request, input_length, prefix_offset, read_offset, decoder_input_length, logits, next_token_chooser, stopping_criteria, all_decoder_input_ids, top_n_tokens, top_token_ids, top_token_logprobs, ) in enumerate(iterator): # Select next token next_token_id, logprobs = next_token_chooser( all_decoder_input_ids.view(1, -1), logits[-1:, :] ) # Append next token to decoder tokens all_decoder_input_ids = torch.cat( [all_decoder_input_ids, next_token_id.squeeze(1)] ) new_decoder_input_length = decoder_input_length + 1 # Generated token next_token_logprob = logprobs[-1, next_token_id] next_token_id_squeezed = next_token_id.squeeze() next_token_text, prefix_offset, read_offset = self.decode_token( all_decoder_input_ids, prefix_offset, read_offset ) # Evaluate stopping criteria stop, reason = stopping_criteria(next_token_id, next_token_text) if not stop: stopped = False # Shard generations # All generations will be appended in the rust sharded client if i % self.world_size == self.rank: if stop: # Slice with decoder_input_length to remove padding # Decode all tokens output_text, _, _ = self.decode_token( all_decoder_input_ids, prefix_offset=len(all_decoder_input_ids) - decoder_input_length - 1, read_offset=len(all_decoder_input_ids) - decoder_input_length, skip_special_tokens=True, ) # Get seed if isinstance(next_token_chooser.choice, Sampling): seed = next_token_chooser.choice.seed else: seed = None generated_text = GeneratedText( output_text, stopping_criteria.current_tokens, reason, seed ) else: generated_text = None # Prefill if stopping_criteria.current_tokens == 1 and request.prefill_logprobs: prefill_tokens = Tokens( [self.tokenizer.bos_token_id], [float("nan")], [self.tokenizer.bos_token], [False], ) else: prefill_tokens = None if top_n_tokens > 0: all_top_tokens = [] for top_token_ids, top_token_logprobs in zip( top_token_ids, top_token_logprobs ): toptoken_texts = self.tokenizer.batch_decode( top_token_ids, clean_up_tokenization_spaces=False, skip_special_tokens=False, ) special_toptokens = [ token_id in 
self.all_special_ids for token_id in top_token_ids ] top_tokens = Tokens( top_token_ids, top_token_logprobs, toptoken_texts, special_toptokens, ) all_top_tokens.append(top_tokens) top_tokens = all_top_tokens else: top_tokens = None generation = Generation( request.id, prefill_tokens, Tokens( [next_token_id_squeezed], [next_token_logprob], [next_token_text], [next_token_id_squeezed.item() in self.all_special_ids], ), generated_text, top_tokens, ) generations.append(generation) # Update values batch.next_token_choosers[i] = batch.next_token_choosers[i].advance_grammar( next_token_id_squeezed.item() ) batch.decoder_input_ids[i] = next_token_id batch.all_decoder_input_ids[i] = all_decoder_input_ids batch.input_lengths[i] = input_length batch.decoder_input_lengths[i] = new_decoder_input_length batch.prefix_offsets[i] = prefix_offset batch.read_offsets[i] = read_offset batch.max_input_length = max(batch.max_input_length, input_length) batch.max_decoder_input_length = max( batch.max_decoder_input_length, new_decoder_input_length ) # We finished all generations in the batch; there is no next batch if stopped: forward_ns = start_decode - start decode_ns = time.time_ns() - start_decode return generations, None, (forward_ns, decode_ns) # We don't need input_ids after the prefill forward batch.input_ids = None batch.encoder_last_hidden_state = encoder_last_hidden_state batch.past_key_values = past # Update decoder_attention_mask as we added a new token to input_ids if batch.decoder_attention_mask is not None: batch.decoder_attention_mask[:, -batch.padding_right_offset] = 1 batch.padding_right_offset -= 1 forward_ns = start_decode - start decode_ns = time.time_ns() - start_decode return generations, batch, (forward_ns, decode_ns)
text-generation-inference/server/text_generation_server/models/seq2seq_lm.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/seq2seq_lm.py", "repo_id": "text-generation-inference", "token_count": 16433 }
191
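The trickiest bookkeeping in `Seq2SeqLMBatch` above is the right-padded decoder attention mask: it is allocated with `padding_right_offset` slots of headroom, and each `generate_token` call flips one more slot to 1 and shrinks the offset. A minimal, hypothetical illustration of that pattern follows (toy sizes, single sequence, no model involved).

# Hypothetical single-sequence illustration (toy sizes, no model): the mask starts
# with room for future tokens, and each decode step enables one more position.
import torch

max_decoder_input_length = 1    # only the BOS token so far
padding_right_offset = 4        # up to 4 new tokens may still be generated

mask = torch.zeros(1, max_decoder_input_length + padding_right_offset, dtype=torch.long)
mask[:, :max_decoder_input_length] = 1          # positions already generated

for step in range(3):                           # three decode steps
    mask[:, -padding_right_offset] = 1          # same update as generate_token above
    padding_right_offset -= 1
    print(step, mask.tolist(), "remaining offset:", padding_right_offset)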
import time import torch.nn as nn import math import json import os import torch import transformers from texttable import Texttable from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer from huggingface_hub import HfApi from accelerate import init_empty_weights from text_generation_server.utils import initialize_torch_distributed, Weights from text_generation_server.utils.hub import weight_files from text_generation_server.utils.gptq.quant_linear import QuantLinear from loguru import logger from typing import Optional DEV = torch.device("cuda:0") class Quantizer(nn.Module): def __init__(self, shape=1): super(Quantizer, self).__init__() self.register_buffer("maxq", torch.tensor(0)) self.register_buffer("scale", torch.zeros(shape)) self.register_buffer("zero", torch.zeros(shape)) def configure( self, bits, perchannel=False, sym=True, mse=False, norm=2.4, grid=100, maxshrink=0.8, trits=False, ): self.maxq = torch.tensor(2**bits - 1) self.perchannel = perchannel self.sym = sym self.mse = mse self.norm = norm self.grid = grid self.maxshrink = maxshrink if trits: self.maxq = torch.tensor(-1) self.scale = torch.zeros_like(self.scale) def _quantize(self, x, scale, zero, maxq): if maxq < 0: return (x > scale / 2).float() * scale + (x < zero / 2).float() * zero q = torch.clamp(torch.round(x / scale) + zero, 0, maxq) return scale * (q - zero) def find_params(self, x, weight=False): dev = x.device self.maxq = self.maxq.to(dev) shape = x.shape if self.perchannel: if weight: x = x.flatten(1) else: if len(shape) == 4: x = x.permute([1, 0, 2, 3]) x = x.flatten(1) if len(shape) == 3: x = x.reshape((-1, shape[-1])).t() if len(shape) == 2: x = x.t() else: x = x.flatten().unsqueeze(0) tmp = torch.zeros(x.shape[0], device=dev) xmin = torch.minimum(x.min(1)[0], tmp) xmax = torch.maximum(x.max(1)[0], tmp) if self.sym: xmax = torch.maximum(torch.abs(xmin), xmax) tmp = xmin < 0 if torch.any(tmp): xmin[tmp] = -xmax[tmp] tmp = (xmin == 0) & (xmax == 0) xmin[tmp] = -1 xmax[tmp] = +1 if self.maxq < 0: self.scale = xmax self.zero = xmin else: self.scale = (xmax - xmin) / self.maxq if self.sym: self.zero = torch.full_like(self.scale, (self.maxq + 1) / 2) else: self.zero = torch.round(-xmin / self.scale) if self.mse: best = torch.full([x.shape[0]], float("inf"), device=dev) for i in range(int(self.maxshrink * self.grid)): p = 1 - i / self.grid xmin1 = p * xmin xmax1 = p * xmax scale1 = (xmax1 - xmin1) / self.maxq zero1 = torch.round(-xmin1 / scale1) if not self.sym else self.zero q = self._quantize( x, scale1.unsqueeze(1), zero1.unsqueeze(1), self.maxq ) q -= x q.abs_() q.pow_(self.norm) err = torch.sum(q, 1) tmp = err < best if torch.any(tmp): best[tmp] = err[tmp] self.scale[tmp] = scale1[tmp] self.zero[tmp] = zero1[tmp] if not self.perchannel: if weight: tmp = shape[0] else: tmp = shape[1] if len(shape) != 3 else shape[2] self.scale = self.scale.repeat(tmp) self.zero = self.zero.repeat(tmp) if weight: shape = [-1] + [1] * (len(shape) - 1) self.scale = self.scale.reshape(shape) self.zero = self.zero.reshape(shape) return if len(shape) == 4: self.scale = self.scale.reshape((1, -1, 1, 1)) self.zero = self.zero.reshape((1, -1, 1, 1)) if len(shape) == 3: self.scale = self.scale.reshape((1, 1, -1)) self.zero = self.zero.reshape((1, 1, -1)) if len(shape) == 2: self.scale = self.scale.unsqueeze(0) self.zero = self.zero.unsqueeze(0) def quantize(self, x): if self.ready(): return self._quantize(x, self.scale, self.zero, self.maxq) return x def enabled(self): return self.maxq > 0 def ready(self): return 
torch.all(self.scale != 0) class GPTQ: def __init__(self, layer, observe=False): self.layer = layer self.dev = self.layer.weight.device W = layer.weight.data.clone() if isinstance(self.layer, nn.Conv2d): W = W.flatten(1) if isinstance(self.layer, transformers.Conv1D): W = W.t() self.rows = W.shape[0] self.columns = W.shape[1] self.H = torch.zeros((self.columns, self.columns), device=self.dev) self.nsamples = 0 self.quantizer = Quantizer() self.observe = observe def add_batch(self, inp, out): # Hessian H = 2 X XT + λ I if self.observe: self.inp1 = inp self.out1 = out else: self.inp1 = None self.out1 = None if len(inp.shape) == 2: inp = inp.unsqueeze(0) tmp = inp.shape[0] if isinstance(self.layer, nn.Linear) or isinstance( self.layer, transformers.Conv1D ): if len(inp.shape) == 3: inp = inp.reshape((-1, inp.shape[-1])) inp = inp.t() if isinstance(self.layer, nn.Conv2d): unfold = nn.Unfold( self.layer.kernel_size, dilation=self.layer.dilation, padding=self.layer.padding, stride=self.layer.stride, ) inp = unfold(inp) inp = inp.permute([1, 0, 2]) inp = inp.flatten(1) self.H *= self.nsamples / (self.nsamples + tmp) self.nsamples += tmp # inp = inp.float() inp = math.sqrt(2 / self.nsamples) * inp.float() # self.H += 2 / self.nsamples * inp.matmul(inp.t()) self.H += inp.matmul(inp.t()) def print_loss(self, name, q_weight, weight_error, timecost): table = Texttable() length = 28 name = ( (name + " " * (length - len(name))) if len(name) <= length else name[:length] ) table.header(["name", "weight_error", "fp_inp_SNR", "q_inp_SNR", "time"]) # assign weight self.layer.weight.data = q_weight.reshape(self.layer.weight.shape).to( self.layer.weight.data.dtype ) if self.inp1 is not None: # quantize input to int8 quantizer = Quantizer() quantizer.configure(8, perchannel=False, sym=True, mse=False) quantizer.find_params(self.inp1) q_in = quantizer.quantize(self.inp1).type(torch.float16) q_out = self.layer(q_in) # get kinds of SNR q_SNR = torch_snr_error(q_out, self.out1).item() fp_SNR = torch_snr_error(self.layer(self.inp1), self.out1).item() else: q_SNR = "-" fp_SNR = "-" table.add_row([name, weight_error, fp_SNR, q_SNR, timecost]) print(table.draw().split("\n")[-2]) def fasterquant( self, blocksize=128, percdamp=0.01, groupsize=-1, act_order=False, name="" ): self.layer.to(self.dev) W = self.layer.weight.data.clone() if isinstance(self.layer, nn.Conv2d): W = W.flatten(1) if isinstance(self.layer, transformers.Conv1D): W = W.t() W = W.float() tick = time.time() if not self.quantizer.ready(): self.quantizer.find_params(W, weight=True) H = self.H if not self.observe: del self.H dead = torch.diag(H) == 0 H[dead, dead] = 1 W[:, dead] = 0 if act_order: perm = torch.argsort(torch.diag(H), descending=True) W = W[:, perm] H = H[perm][:, perm] Losses = torch.zeros_like(W) Q = torch.zeros_like(W) damp = percdamp * torch.mean(torch.diag(H)) diag = torch.arange(self.columns, device=self.dev) H[diag, diag] += damp H = torch.linalg.cholesky(H) H = torch.cholesky_inverse(H) try: H = torch.linalg.cholesky(H, upper=True) except Exception: # Addition because Falcon fails on h_to_4h H = torch.linalg.cholesky( H + 1e-5 * torch.eye(H.shape[0]).to(H.device), upper=True ) Hinv = H g_idx = [] scale = [] zero = [] now_idx = 1 for i1 in range(0, self.columns, blocksize): i2 = min(i1 + blocksize, self.columns) count = i2 - i1 W1 = W[:, i1:i2].clone() Q1 = torch.zeros_like(W1) Err1 = torch.zeros_like(W1) Losses1 = torch.zeros_like(W1) Hinv1 = Hinv[i1:i2, i1:i2] for i in range(count): w = W1[:, i] d = Hinv1[i, i] if groupsize != -1: if 
(i1 + i) % groupsize == 0: self.quantizer.find_params( W[:, (i1 + i) : (i1 + i + groupsize)], weight=True ) if ((i1 + i) // groupsize) - now_idx == -1: scale.append(self.quantizer.scale) zero.append(self.quantizer.zero) now_idx += 1 q = self.quantizer.quantize(w.unsqueeze(1)).flatten() Q1[:, i] = q Losses1[:, i] = (w - q) ** 2 / d**2 err1 = (w - q) / d W1[:, i:] -= err1.unsqueeze(1).matmul(Hinv1[i, i:].unsqueeze(0)) Err1[:, i] = err1 Q[:, i1:i2] = Q1 Losses[:, i1:i2] = Losses1 / 2 W[:, i2:] -= Err1.matmul(Hinv[i1:i2, i2:]) torch.cuda.synchronize() error = torch.sum(Losses).item() groupsize = groupsize if groupsize != -1 else self.columns g_idx = [i // groupsize for i in range(self.columns)] g_idx = torch.tensor(g_idx, dtype=torch.int32, device=Q.device) if act_order: invperm = torch.argsort(perm) Q = Q[:, invperm] g_idx = g_idx[invperm] if isinstance(self.layer, transformers.Conv1D): Q = Q.t() self.print_loss( name=name, q_weight=Q, weight_error=error, timecost=(time.time() - tick) ) if scale == []: scale.append(self.quantizer.scale) zero.append(self.quantizer.zero) scale = torch.cat(scale, dim=1) zero = torch.cat(zero, dim=1) return scale, zero, g_idx, error def free(self): self.inp1 = None self.out1 = None self.H = None self.Losses = None self.Trace = None torch.cuda.empty_cache() def get_wikitext2(nsamples, seed, seqlen, model_id, trust_remote_code): from datasets import load_dataset traindata = load_dataset("wikitext", "wikitext-2-raw-v1", split="train") testdata = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") try: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=False, trust_remote_code=trust_remote_code ) except: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=True, trust_remote_code=trust_remote_code ) trainenc = tokenizer("\n\n".join(traindata["text"]), return_tensors="pt") testenc = tokenizer("\n\n".join(testdata["text"]), return_tensors="pt") import random random.seed(seed) trainloader = [] for _ in range(nsamples): i = random.randint(0, trainenc.input_ids.shape[1] - seqlen - 1) j = i + seqlen inp = trainenc.input_ids[:, i:j] tar = inp.clone() tar[:, :-1] = -100 trainloader.append((inp, tar)) return trainloader, testenc def get_ptb(nsamples, seed, seqlen, model_id, trust_remote_code): from datasets import load_dataset traindata = load_dataset("ptb_text_only", "penn_treebank", split="train") valdata = load_dataset("ptb_text_only", "penn_treebank", split="validation") try: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=False, trust_remote_code=trust_remote_code ) except: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=True, trust_remote_code=trust_remote_code ) trainenc = tokenizer("\n\n".join(traindata["sentence"]), return_tensors="pt") testenc = tokenizer("\n\n".join(valdata["sentence"]), return_tensors="pt") import random random.seed(seed) trainloader = [] for _ in range(nsamples): i = random.randint(0, trainenc.input_ids.shape[1] - seqlen - 1) j = i + seqlen inp = trainenc.input_ids[:, i:j] tar = inp.clone() tar[:, :-1] = -100 trainloader.append((inp, tar)) return trainloader, testenc def get_c4(nsamples, seed, seqlen, model_id, trust_remote_code): from datasets import load_dataset traindata = load_dataset( "allenai/c4", "allenai--c4", data_files={"train": "en/c4-train.00000-of-01024.json.gz"}, split="train", use_auth_token=False, ) valdata = load_dataset( "allenai/c4", "allenai--c4", data_files={"validation": "en/c4-validation.00000-of-00008.json.gz"}, split="validation", use_auth_token=False, ) try: 
tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=False, trust_remote_code=trust_remote_code ) except: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=True, trust_remote_code=trust_remote_code ) import random random.seed(seed) trainloader = [] for _ in range(nsamples): while True: i = random.randint(0, len(traindata) - 1) trainenc = tokenizer(traindata[i]["text"], return_tensors="pt") if trainenc.input_ids.shape[1] >= seqlen: break i = random.randint(0, trainenc.input_ids.shape[1] - seqlen - 1) j = i + seqlen inp = trainenc.input_ids[:, i:j] tar = inp.clone() tar[:, :-1] = -100 trainloader.append((inp, tar)) import random random.seed(0) valenc = [] for _ in range(256): while True: i = random.randint(0, len(valdata) - 1) tmp = tokenizer(valdata[i]["text"], return_tensors="pt") if tmp.input_ids.shape[1] >= seqlen: break i = random.randint(0, tmp.input_ids.shape[1] - seqlen - 1) j = i + seqlen valenc.append(tmp.input_ids[:, i:j]) valenc = torch.hstack(valenc) class TokenizerWrapper: def __init__(self, input_ids): self.input_ids = input_ids valenc = TokenizerWrapper(valenc) return trainloader, valenc def get_ptb_new(nsamples, seed, seqlen, model_id, trust_remote_code): from datasets import load_dataset traindata = load_dataset("ptb_text_only", "penn_treebank", split="train") testdata = load_dataset("ptb_text_only", "penn_treebank", split="test") try: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=False, trust_remote_code=trust_remote_code ) except: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=True, trust_remote_code=trust_remote_code ) trainenc = tokenizer(" ".join(traindata["sentence"]), return_tensors="pt") testenc = tokenizer(" ".join(testdata["sentence"]), return_tensors="pt") import random random.seed(seed) trainloader = [] for _ in range(nsamples): i = random.randint(0, trainenc.input_ids.shape[1] - seqlen - 1) j = i + seqlen inp = trainenc.input_ids[:, i:j] tar = inp.clone() tar[:, :-1] = -100 trainloader.append((inp, tar)) return trainloader, testenc def get_c4_new(nsamples, seed, seqlen, model_id, trust_remote_code): from datasets import load_dataset traindata = load_dataset( "allenai/c4", "allenai--c4", data_files={"train": "en/c4-train.00000-of-01024.json.gz"}, split="train", ) valdata = load_dataset( "allenai/c4", "allenai--c4", data_files={"validation": "en/c4-validation.00000-of-00008.json.gz"}, split="validation", ) try: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=False, trust_remote_code=trust_remote_code ) except: tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=True, trust_remote_code=trust_remote_code ) import random random.seed(seed) trainloader = [] for _ in range(nsamples): while True: i = random.randint(0, len(traindata) - 1) trainenc = tokenizer(traindata[i]["text"], return_tensors="pt") if trainenc.input_ids.shape[1] >= seqlen: break i = random.randint(0, trainenc.input_ids.shape[1] - seqlen - 1) j = i + seqlen inp = trainenc.input_ids[:, i:j] tar = inp.clone() tar[:, :-1] = -100 trainloader.append((inp, tar)) valenc = tokenizer(" ".join(valdata[:1100]["text"]), return_tensors="pt") valenc = valenc.input_ids[:, : (256 * seqlen)] class TokenizerWrapper: def __init__(self, input_ids): self.input_ids = input_ids valenc = TokenizerWrapper(valenc) return trainloader, valenc def get_loaders( name, nsamples=128, seed=0, seqlen=2048, model_id="", trust_remote_code=False ): if "wikitext2" in name: return get_wikitext2(nsamples, seed, seqlen, model_id, trust_remote_code) if "ptb" in name: if 
"new" in name: return get_ptb_new(nsamples, seed, seqlen, model_id, trust_remote_code) return get_ptb(nsamples, seed, seqlen, model_id, trust_remote_code) if "c4" in name: if "new" in name: return get_c4_new(nsamples, seed, seqlen, model_id, trust_remote_code) return get_c4(nsamples, seed, seqlen, model_id, trust_remote_code) def find_layers(module, layers=(nn.Conv2d, nn.Linear), name=""): # Skip last lm_head linear # Need isintance Falcon is inheriting Linear. if isinstance(module, layers) and "lm_head" not in name: return {name: module} res = {} for name1, child in module.named_children(): res.update( find_layers( child, layers=layers, name=name + "." + name1 if name != "" else name1 ) ) return res @torch.no_grad() def sequential( model, dataloader, dev, nsamples, bits, groupsize, *, hooks, percdamp=0.01, sym: bool = False, act_order: bool = False, ): print("Starting ...") use_cache = model.config.use_cache model.config.use_cache = False try: layers = model.model.layers prefix = "model.layers" except Exception: layers = model.transformer.h prefix = "transformer.h" dtype = next(iter(model.parameters())).dtype inps = torch.zeros( (nsamples, model.seqlen, model.config.hidden_size), dtype=dtype, device=dev ) cache = {"i": 0} extra = {} class Catcher(nn.Module): def __init__(self, module): super().__init__() self.module = module def forward(self, inp, **kwargs): inps[cache["i"]] = inp cache["i"] += 1 extra.update(kwargs.copy()) raise ValueError layers[0] = Catcher(layers[0]) for batch in dataloader: try: model(batch[0].cuda()) except ValueError: pass layers[0] = layers[0].module # layers[0] = layers[0].cpu() # model.model.embed_tokens = model.model.embed_tokens.cpu() # model.model.norm = model.model.norm.cpu() torch.cuda.empty_cache() for hook in hooks: hook.remove() outs = torch.zeros_like(inps) extra = { k: v.to(dev) if isinstance(v, torch.Tensor) else v for k, v in extra.items() } print("Ready.") quantizers = {} for i in range(len(layers)): print(f"Quantizing layer {i+1}/{len(layers)}..") print("+------------------+--------------+------------+-----------+-------+") print("| name | weight_error | fp_inp_SNR | q_inp_SNR | time |") print("+==================+==============+============+===========+=======+") layer = layers[i] layer.load() full = find_layers(layer) sequential = [list(full.keys())] for names in sequential: subset = {n: full[n] for n in names} gptq = {} for name in subset: gptq[name] = GPTQ(subset[name]) gptq[name].quantizer.configure( bits, perchannel=True, sym=sym, mse=False ) pass def add_batch(name): def tmp(_, inp, out): gptq[name].add_batch(inp[0].data, out.data) return tmp handles = [] for name in subset: handles.append(subset[name].register_forward_hook(add_batch(name))) for j in range(nsamples): outs[j] = layer(inps[j].unsqueeze(0), **extra)[0] for h in handles: h.remove() for name in subset: scale, zero, g_idx, error = gptq[name].fasterquant( percdamp=percdamp, groupsize=groupsize, act_order=act_order, name=name, ) quantizers[f"{prefix}.{i}.{name}"] = ( gptq[name].quantizer.cpu(), scale.cpu(), zero.cpu(), g_idx.cpu(), bits, groupsize, ) gptq[name].free() for j in range(nsamples): outs[j] = layer(inps[j].unsqueeze(0), **extra)[0] layer.unload() del layer del gptq torch.cuda.empty_cache() inps, outs = outs, inps print("+------------------+--------------+------------+-----------+-------+") print("\n") model.config.use_cache = use_cache return quantizers def make_quant_linear(module, names, bits, groupsize, name=""): if isinstance(module, QuantLinear): return for attr in 
dir(module): tmp = getattr(module, attr) name1 = name + "." + attr if name != "" else attr if name1 in names: delattr(module, attr) setattr( module, attr, QuantLinear.new( bits, groupsize, tmp.in_features, tmp.out_features, tmp.bias is not None, ), ) for name1, child in module.named_children(): make_quant_linear( child, names, bits, groupsize, name + "." + name1 if name != "" else name1 ) # TODO: perform packing on GPU def pack(model, quantizers, bits, groupsize): layers = find_layers(model) layers = {n: layers[n] for n in quantizers} make_quant_linear(model, quantizers, bits, groupsize) qlayers = find_layers(model, (QuantLinear,)) print("Packing ...") for name in qlayers: print(name) quantizers[name], scale, zero, g_idx, _, _ = quantizers[name] qlayers[name].pack(layers[name], scale, zero, g_idx) print("Done.") return model def setdeepattr(module, full_name, tensor): current = module tokens = full_name.split(".") for token in tokens[:-1]: current = getattr(current, token) setattr(current, tokens[-1], tensor) def getdeepattr(module, full_name): current = module tokens = full_name.split(".") for token in tokens: current = getattr(current, token) return current def load_weights_pre_hook(module_name, weights, recursive=False): def inner(module, args): print(f"Pre hook {module_name}") local_params = {} for k, v in module.named_parameters(): if not recursive and k.count(".") != 1: continue local_params[k] = v for k, v in module.named_buffers(): if not recursive and k.count(".") != 1: continue local_params[k] = v for local_param in local_params: current_tensor = getdeepattr(module, local_param) if current_tensor.device == torch.device("meta"): # print(f"Loading {local_param}") if module_name: tensor_name = f"{module_name}.{local_param}" else: tensor_name = local_param tensor = weights.get_tensor(tensor_name) setdeepattr(module, local_param, nn.Parameter(tensor)) else: tensor = current_tensor.to(device=torch.device("cuda:0")) if current_tensor.requires_grad: tensor = nn.Parameter(tensor) setdeepattr(module, local_param, tensor) return inner def load_weights_post_hook(module_name, weights, recursive=False): def inner(module, args, output): print(f"Post hook {module_name}") local_params = {} for k, v in module.named_parameters(): if not recursive and k.count(".") != 1: continue local_params[k] = v for k, v in module.named_buffers(): if not recursive and k.count(".") != 1: continue local_params[k] = v for local_param in local_params: # print(f"Unloading {local_param}") current_tensor = getdeepattr(module, local_param) setdeepattr( module, local_param, nn.Parameter(current_tensor.to(device=torch.device("cpu"))), ) return output return inner def quantize( model_id: str, bits: int, groupsize: int, output_dir: str, revision: str, trust_remote_code: bool, upload_to_model_id: Optional[str], percdamp: float, act_order: bool, ): print("loading model") config = AutoConfig.from_pretrained( model_id, trust_remote_code=trust_remote_code, ) with init_empty_weights(): model = AutoModelForCausalLM.from_config( config, torch_dtype=torch.float16, trust_remote_code=trust_remote_code ) model = model.eval() print("LOADED model") files = weight_files(model_id, revision, extension=".safetensors") process_group, _, _ = initialize_torch_distributed() weights = Weights( files, device=torch.device("cuda:0"), dtype=torch.float16, process_group=process_group, aliases={"embed_tokens.weight": ["lm_head.weight"]}, ) hooks = [] for name, module in model.named_modules(): def load(module, name): def _load(): 
load_weights_pre_hook(name, weights, recursive=True)(module, None) return _load def unload(module, name): def _unload(): load_weights_post_hook(name, weights, recursive=True)( module, None, None ) return _unload module.load = load(module, name) module.unload = unload(module, name) hooks.append( module.register_forward_pre_hook(load_weights_pre_hook(name, weights)) ) hooks.append( module.register_forward_hook(load_weights_post_hook(name, weights)) ) model.seqlen = 2048 dataset = "wikitext2" nsamples = 128 seed = None dataloader, testloader = get_loaders( dataset, nsamples=nsamples, seed=seed, model_id=model_id, seqlen=model.seqlen, trust_remote_code=trust_remote_code, ) tick = time.time() quantizers = sequential( model, dataloader, DEV, nsamples, bits, groupsize, percdamp=percdamp, act_order=act_order, hooks=hooks, ) print(time.time() - tick) pack(model, quantizers, bits, groupsize) from safetensors.torch import save_file from transformers.modeling_utils import shard_checkpoint state_dict = model.state_dict() state_dict = {k: v.cpu().contiguous() for k, v in state_dict.items()} state_dict["gptq_bits"] = torch.LongTensor([bits]) state_dict["gptq_groupsize"] = torch.LongTensor([groupsize]) max_shard_size = "10GB" shards, index = shard_checkpoint( state_dict, max_shard_size=max_shard_size, weights_name="model.safetensors" ) os.makedirs(output_dir, exist_ok=True) for shard_file, shard in shards.items(): save_file( shard, os.path.join(output_dir, shard_file), metadata={ "format": "pt", "quantized": "gptq", "origin": "text-generation-inference", }, ) if index is None: path_to_weights = os.path.join(output_dir, "model.safetensors") logger.info(f"Model weights saved in {path_to_weights}") else: save_index_file = "model.safetensors.index.json" save_index_file = os.path.join(output_dir, save_index_file) with open(save_index_file, "w", encoding="utf-8") as f: content = json.dumps(index, indent=2, sort_keys=True) + "\n" f.write(content) logger.info( f"The model is bigger than the maximum size per checkpoint ({max_shard_size}) and is going to be " f"split in {len(shards)} checkpoint shards. You can find where each parameters has been saved in the " f"index located at {save_index_file}." ) config = AutoConfig.from_pretrained(model_id, trust_remote_code=trust_remote_code) config.save_pretrained(output_dir) logger.info("Saved config") logger.info("Saving tokenizer") tokenizer = AutoTokenizer.from_pretrained( model_id, trust_remote_code=trust_remote_code ) tokenizer.save_pretrained(output_dir) logger.info("Saved tokenizer") if upload_to_model_id: api = HfApi() api.upload_folder( folder_path=output_dir, repo_id=upload_to_model_id, repo_type="model" )
text-generation-inference/server/text_generation_server/utils/gptq/quantize.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/gptq/quantize.py", "repo_id": "text-generation-inference", "token_count": 15970 }
192
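Underneath the GPTQ machinery above, `Quantizer.find_params` and `Quantizer._quantize` implement a plain affine quantizer: `q = clamp(round(x / scale) + zero, 0, maxq)` on the way in and `x_hat = scale * (q - zero)` on the way out. The sketch below reproduces that round trip on a toy tensor; it is illustrative only and skips the Hessian-based error compensation that GPTQ adds on top.

# Self-contained sketch of the affine quantize/dequantize round trip (asymmetric,
# per-row scale/zero, 4 bits); illustrative only, no GPTQ error compensation.
import torch

bits = 4
maxq = 2**bits - 1
w = torch.randn(2, 8)                                      # toy weight rows

xmin = torch.minimum(w.min(dim=1).values, torch.zeros(2))
xmax = torch.maximum(w.max(dim=1).values, torch.zeros(2))
scale = ((xmax - xmin) / maxq).unsqueeze(1)
zero = torch.round(-xmin.unsqueeze(1) / scale)

q = torch.clamp(torch.round(w / scale) + zero, 0, maxq)    # integer codes in [0, maxq]
w_hat = scale * (q - zero)                                 # dequantized weights
print((w - w_hat).abs().max())                             # roughly bounded by scale / 2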
parser: '@typescript-eslint/parser' parserOptions: ecmaFeatures: jsx: true ecmaVersion: latest sourceType: module project: ./tsconfig.json env: browser: true es6: true node: true jest: true ignorePatterns: ['index.js', 'target/'] plugins: - import - '@typescript-eslint' extends: - eslint:recommended - plugin:prettier/recommended rules: # 0 = off, 1 = warn, 2 = error 'space-before-function-paren': 0 'no-useless-constructor': 0 'no-undef': 2 'no-console': [2, { allow: ['error', 'warn', 'info', 'assert'] }] 'comma-dangle': ['error', 'only-multiline'] 'no-unused-vars': 0 'no-var': 2 'one-var-declaration-per-line': 2 'prefer-const': 2 'no-const-assign': 2 'no-duplicate-imports': 2 'no-use-before-define': [2, { 'functions': false, 'classes': false }] 'eqeqeq': [2, 'always', { 'null': 'ignore' }] 'no-case-declarations': 0 'no-restricted-syntax': [ 2, { 'selector': 'BinaryExpression[operator=/(==|===|!=|!==)/][left.raw=true], BinaryExpression[operator=/(==|===|!=|!==)/][right.raw=true]', 'message': Don't compare for equality against boolean literals, }, ] # https://github.com/benmosher/eslint-plugin-import/pull/334 'import/no-duplicates': 2 'import/first': 2 'import/newline-after-import': 2 'import/order': [ 2, { 'newlines-between': 'always', 'alphabetize': { 'order': 'asc' }, 'groups': ['builtin', 'external', 'internal', 'parent', 'sibling', 'index'], }, ] overrides: - files: - ./**/*{.ts,.tsx} rules: 'no-unused-vars': [2, { varsIgnorePattern: '^_', argsIgnorePattern: '^_', ignoreRestSiblings: true }] 'no-undef': 0 # TypeScript declare merge 'no-redeclare': 0 'no-useless-constructor': 0 'no-dupe-class-members': 0 'no-case-declarations': 0 'no-duplicate-imports': 0 # TypeScript Interface and Type 'no-use-before-define': 0 '@typescript-eslint/adjacent-overload-signatures': 2 '@typescript-eslint/await-thenable': 2 '@typescript-eslint/consistent-type-assertions': 2 '@typescript-eslint/ban-types': [ 'error', { 'types': { 'String': { 'message': 'Use string instead', 'fixWith': 'string' }, 'Number': { 'message': 'Use number instead', 'fixWith': 'number' }, 'Boolean': { 'message': 'Use boolean instead', 'fixWith': 'boolean' }, 'Function': { 'message': 'Use explicit type instead' }, }, }, ] '@typescript-eslint/explicit-member-accessibility': [ 'error', { accessibility: 'explicit', overrides: { accessors: 'no-public', constructors: 'no-public', methods: 'no-public', properties: 'no-public', parameterProperties: 'explicit', }, }, ] '@typescript-eslint/method-signature-style': 2 '@typescript-eslint/no-floating-promises': 2 '@typescript-eslint/no-implied-eval': 2 '@typescript-eslint/no-for-in-array': 2 '@typescript-eslint/no-inferrable-types': 2 '@typescript-eslint/no-invalid-void-type': 2 '@typescript-eslint/no-misused-new': 2 '@typescript-eslint/no-misused-promises': 2 '@typescript-eslint/no-namespace': 2 '@typescript-eslint/no-non-null-asserted-optional-chain': 2 '@typescript-eslint/no-throw-literal': 2 '@typescript-eslint/no-unnecessary-boolean-literal-compare': 2 '@typescript-eslint/prefer-for-of': 2 '@typescript-eslint/prefer-nullish-coalescing': 2 '@typescript-eslint/switch-exhaustiveness-check': 2 '@typescript-eslint/prefer-optional-chain': 2 '@typescript-eslint/prefer-readonly': 2 '@typescript-eslint/prefer-string-starts-ends-with': 0 '@typescript-eslint/no-array-constructor': 2 '@typescript-eslint/require-await': 2 '@typescript-eslint/return-await': 2 '@typescript-eslint/ban-ts-comment': [2, { 'ts-expect-error': false, 'ts-ignore': true, 'ts-nocheck': true, 'ts-check': false }] 
'@typescript-eslint/naming-convention': [ 2, { selector: 'memberLike', format: ['camelCase', 'PascalCase'], modifiers: ['private'], leadingUnderscore: 'forbid', }, ] '@typescript-eslint/no-unused-vars': [2, { varsIgnorePattern: '^_', argsIgnorePattern: '^_', ignoreRestSiblings: true }] '@typescript-eslint/member-ordering': [ 2, { default: [ 'public-static-field', 'protected-static-field', 'private-static-field', 'public-static-method', 'protected-static-method', 'private-static-method', 'public-instance-field', 'protected-instance-field', 'private-instance-field', 'public-constructor', 'protected-constructor', 'private-constructor', 'public-instance-method', 'protected-instance-method', 'private-instance-method', ], }, ]
tokenizers/bindings/node/.eslintrc.yml/0
{ "file_path": "tokenizers/bindings/node/.eslintrc.yml", "repo_id": "tokenizers", "token_count": 2733 }
193
/* eslint-disable prettier/prettier */ // For a detailed explanation regarding each configuration property, visit: // https://jestjs.io/docs/en/configuration.html module.exports = { // All imported modules in your tests should be mocked automatically // automock: false, // Stop running tests after `n` failures // bail: 0, // Respect "browser" field in package.json when resolving modules // browser: false, // The directory where Jest should store its cached dependency information // cacheDirectory: "/private/var/folders/y_/n6h0fkqn3m57bg_ktk25j7rm0000gn/T/jest_dx", // Automatically clear mock calls and instances between every test // clearMocks: false, // Indicates whether the coverage information should be collected while executing the test // collectCoverage: false, // An array of glob patterns indicating a set of files for which coverage information should be collected // collectCoverageFrom: null, // The directory where Jest should output its coverage files // coverageDirectory: null, // An array of regexp pattern strings used to skip coverage collection // coveragePathIgnorePatterns: [ // "/node_modules/" // ], // A list of reporter names that Jest uses when writing coverage reports // coverageReporters: [ // "json", // "text", // "lcov", // "clover" // ], // An object that configures minimum threshold enforcement for coverage results // coverageThreshold: null, // A path to a custom dependency extractor // dependencyExtractor: null, // Make calling deprecated APIs throw helpful error messages // errorOnDeprecated: false, // Force coverage collection from ignored files using an array of glob patterns // forceCoverageMatch: [], // A path to a module which exports an async function that is triggered once before all test suites // globalSetup: null, // A path to a module which exports an async function that is triggered once after all test suites // globalTeardown: null, // A set of global variables that need to be available in all test environments // globals: {}, // The maximum amount of workers used to run your tests. Can be specified as % or a number. E.g. maxWorkers: 10% will use 10% of your CPU amount + 1 as the maximum worker number. maxWorkers: 2 will use a maximum of 2 workers. // maxWorkers: "50%", // An array of directory names to be searched recursively up from the requiring module's location // moduleDirectories: [ // "node_modules" // ], // An array of file extensions your modules use // moduleFileExtensions: [ // "js", // "json", // "jsx", // "ts", // "tsx", // "node" // ], // A map from regular expressions to module names that allow to stub out resources with a single module // moduleNameMapper: {}, // An array of regexp pattern strings, matched against all module paths before considered 'visible' to the module loader // modulePathIgnorePatterns: [], // Activates notifications for test results // notify: false, // An enum that specifies notification mode. 
Requires { notify: true } // notifyMode: "failure-change", // A preset that is used as a base for Jest's configuration preset: 'ts-jest', // Run tests from one or more projects // projects: null, // Use this configuration option to add custom reporters to Jest // reporters: undefined, // Automatically reset mock state between every test // resetMocks: false, // Reset the module registry before running each individual test // resetModules: false, // A path to a custom resolver // resolver: null, // Automatically restore mock state between every test // restoreMocks: false, // The root directory that Jest should scan for tests and modules within // rootDir: null, // A list of paths to directories that Jest should use to search for files in // roots: [ // "<rootDir>" // ], // Allows you to use a custom runner instead of Jest's default test runner // runner: "jest-runner", // The paths to modules that run some code to configure or set up the testing environment before each test // setupFiles: [], // A list of paths to modules that run some code to configure or set up the testing framework before each test // setupFilesAfterEnv: [], // A list of paths to snapshot serializer modules Jest should use for snapshot testing // snapshotSerializers: [], // The test environment that will be used for testing testEnvironment: 'node', // Options that will be passed to the testEnvironment // testEnvironmentOptions: {}, // Adds a location field to test results // testLocationInResults: false, // The glob patterns Jest uses to detect test files // testMatch: [ // "**/__tests__/**/*.[jt]s?(x)", // "**/?(*.)+(spec|test).[tj]s?(x)" // ], // An array of regexp pattern strings that are matched against all test paths, matched tests are skipped testPathIgnorePatterns: ['/node_modules/', '/dist/'], // The regexp pattern or array of patterns that Jest uses to detect test files // testRegex: [], // This option allows the use of a custom results processor // testResultsProcessor: null, // This option allows use of a custom test runner // testRunner: "jasmine2", // This option sets the URL for the jsdom environment. It is reflected in properties such as location.href // testURL: "http://localhost", // Setting this value to "fake" allows the use of fake timers for functions such as "setTimeout" // timers: "real", // A map from regular expressions to paths to transformers // transform: null, // An array of regexp pattern strings that are matched against all source file paths, matched files will skip transformation // transformIgnorePatterns: [ // "/node_modules/" // ], // An array of regexp pattern strings that are matched against all modules before the module loader will automatically return a mock for them // unmockedModulePathPatterns: undefined, // Indicates whether each individual test should be reported during the run // verbose: null, // An array of regexp patterns that are matched against all source file paths before re-running tests in watch mode watchPathIgnorePatterns: ['<rootDir>/node_modules/', '<rootDir>/native/', '<rootDir>/dist/', '<rootDir>/build/'], // Whether to use watchman for file crawling // watchman: true, }
tokenizers/bindings/node/jest.config.js/0
{ "file_path": "tokenizers/bindings/node/jest.config.js", "repo_id": "tokenizers", "token_count": 1715 }
194
# `tokenizers-darwin-arm64` This is the **aarch64-apple-darwin** binary for `tokenizers`
tokenizers/bindings/node/npm/darwin-arm64/README.md/0
{ "file_path": "tokenizers/bindings/node/npm/darwin-arm64/README.md", "repo_id": "tokenizers", "token_count": 33 }
195
# `tokenizers-win32-arm64-msvc` This is the **aarch64-pc-windows-msvc** binary for `tokenizers`
tokenizers/bindings/node/npm/win32-arm64-msvc/README.md/0
{ "file_path": "tokenizers/bindings/node/npm/win32-arm64-msvc/README.md", "repo_id": "tokenizers", "token_count": 38 }
196
pub mod models; pub mod tokenizer;
tokenizers/bindings/node/src/tasks/mod.rs/0
{ "file_path": "tokenizers/bindings/node/src/tasks/mod.rs", "repo_id": "tokenizers", "token_count": 11 }
197
import pytest def pytest_addoption(parser): parser.addoption("--runslow", action="store_true", default=False, help="run slow tests") def pytest_configure(config): config.addinivalue_line("markers", "slow: mark test as slow to run") def pytest_collection_modifyitems(config, items): if config.getoption("--runslow"): # --runslow given in cli: do not skip slow tests return skip_slow = pytest.mark.skip(reason="need --runslow option to run") for item in items: if "slow" in item.keywords: item.add_marker(skip_slow)
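# Illustrative usage sketch (hypothetical test module, not part of this conftest): a test
# opts into the marker defined above and is skipped unless pytest is invoked with --runslow.
#
#     import pytest
#
#     @pytest.mark.slow
#     def test_train_on_a_large_corpus():
#         ...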
tokenizers/bindings/python/conftest.py/0
{ "file_path": "tokenizers/bindings/python/conftest.py", "repo_id": "tokenizers", "token_count": 217 }
198
from typing import Dict, Iterator, List, Optional, Tuple, Union from tokenizers import AddedToken, Tokenizer, decoders, pre_tokenizers, trainers from tokenizers.models import BPE from tokenizers.normalizers import NFKC from .base_tokenizer import BaseTokenizer class SentencePieceBPETokenizer(BaseTokenizer): """SentencePiece BPE Tokenizer Represents the BPE algorithm, with the pretokenization used by SentencePiece """ def __init__( self, vocab: Optional[Union[str, Dict[str, int]]] = None, merges: Optional[Union[str, Dict[Tuple[int, int], Tuple[int, int]]]] = None, unk_token: Union[str, AddedToken] = "<unk>", replacement: str = "▁", add_prefix_space: bool = True, dropout: Optional[float] = None, fuse_unk: Optional[bool] = False, ): if vocab is not None and merges is not None: tokenizer = Tokenizer(BPE(vocab, merges, dropout=dropout, unk_token=unk_token, fuse_unk=fuse_unk)) else: tokenizer = Tokenizer(BPE(dropout=dropout, unk_token=unk_token, fuse_unk=fuse_unk)) if tokenizer.token_to_id(str(unk_token)) is not None: tokenizer.add_special_tokens([str(unk_token)]) tokenizer.normalizer = NFKC() tokenizer.pre_tokenizer = pre_tokenizers.Metaspace(replacement=replacement, add_prefix_space=add_prefix_space) tokenizer.decoder = decoders.Metaspace(replacement=replacement, add_prefix_space=add_prefix_space) parameters = { "model": "SentencePieceBPE", "unk_token": unk_token, "replacement": replacement, "add_prefix_space": add_prefix_space, "dropout": dropout, } super().__init__(tokenizer, parameters) @staticmethod def from_file(vocab_filename: str, merges_filename: str, **kwargs): vocab, merges = BPE.read_file(vocab_filename, merges_filename) return SentencePieceBPETokenizer(vocab, merges, **kwargs) def train( self, files: Union[str, List[str]], vocab_size: int = 30000, min_frequency: int = 2, special_tokens: List[Union[str, AddedToken]] = ["<unk>"], limit_alphabet: int = 1000, initial_alphabet: List[str] = [], show_progress: bool = True, ): """Train the model using the given files""" trainer = trainers.BpeTrainer( vocab_size=vocab_size, min_frequency=min_frequency, special_tokens=special_tokens, limit_alphabet=limit_alphabet, initial_alphabet=initial_alphabet, show_progress=show_progress, ) if isinstance(files, str): files = [files] self._tokenizer.train(files, trainer=trainer) def train_from_iterator( self, iterator: Union[Iterator[str], Iterator[Iterator[str]]], vocab_size: int = 30000, min_frequency: int = 2, special_tokens: List[Union[str, AddedToken]] = ["<unk>"], limit_alphabet: int = 1000, initial_alphabet: List[str] = [], show_progress: bool = True, length: Optional[int] = None, ): """Train the model using the given iterator""" trainer = trainers.BpeTrainer( vocab_size=vocab_size, min_frequency=min_frequency, special_tokens=special_tokens, limit_alphabet=limit_alphabet, initial_alphabet=initial_alphabet, show_progress=show_progress, ) self._tokenizer.train_from_iterator( iterator, trainer=trainer, length=length, )
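# Illustrative usage sketch (not part of this module; "corpus.txt" is a hypothetical file):
#
#     from tokenizers import SentencePieceBPETokenizer
#
#     tokenizer = SentencePieceBPETokenizer()
#     tokenizer.train(["corpus.txt"], vocab_size=8000, special_tokens=["<unk>"])
#     print(tokenizer.encode("Hello world").tokens)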
tokenizers/bindings/python/py_src/tokenizers/implementations/sentencepiece_bpe.py/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/implementations/sentencepiece_bpe.py", "repo_id": "tokenizers", "token_count": 1655 }
199
stable
tokenizers/bindings/python/rust-toolchain/0
{ "file_path": "tokenizers/bindings/python/rust-toolchain", "repo_id": "tokenizers", "token_count": 2 }
200
use pyo3::prelude::*; use std::collections::VecDeque; /// An simple iterator that can be instantiated with a specified length. /// We use this with iterators that don't have a size_hint but we might /// know its size. This is useful with progress bars for example. pub struct MaybeSizedIterator<I> { length: Option<usize>, iter: I, } impl<I> MaybeSizedIterator<I> where I: Iterator, { pub fn new(iter: I, length: Option<usize>) -> Self { Self { length, iter } } } impl<I> Iterator for MaybeSizedIterator<I> where I: Iterator, { type Item = I::Item; fn next(&mut self) -> Option<Self::Item> { self.iter.next() } fn size_hint(&self) -> (usize, Option<usize>) { (self.length.unwrap_or(0), None) } } /// A buffered iterator that takes care of locking the GIL only when needed. /// The `PyIterator` provided by PyO3 keeps a Python GIL token all along /// and thus doesn't allow us to release the GIL to allow having other threads. /// /// This iterator serves two purposes: /// - First, as opposed to the `pyo3::PyIterator`, it is Send and can easily be parallelized /// - Second, this let us release the GIL between two refills of the buffer, allowing other /// Python threads to work pub struct PyBufferedIterator<T, F> { iter: Option<Py<PyAny>>, converter: F, buffer: VecDeque<PyResult<T>>, size: usize, } impl<T, F, I> PyBufferedIterator<T, F> where F: Fn(&PyAny) -> I, I: IntoIterator<Item = PyResult<T>>, { /// Create a new PyBufferedIterator using the provided Python object. /// This object must implement the Python Iterator Protocol, and an error will /// be return if the contract is not respected. /// /// The `converter` provides a way to convert each item in the iterator into /// something that doesn't embed a 'py token and thus allows the GIL to be released /// /// The `buffer_size` represents the number of items that we buffer before we /// need to acquire the GIL again. pub fn new(iter: &PyAny, converter: F, buffer_size: usize) -> PyResult<Self> { let py = iter.py(); let iter: Py<PyAny> = unsafe { py.from_borrowed_ptr_or_err::<PyAny>(pyo3::ffi::PyObject_GetIter(iter.as_ptr()))? .to_object(py) }; Ok(Self { iter: Some(iter), converter, buffer: VecDeque::with_capacity(buffer_size), size: buffer_size, }) } /// Refill the buffer, and set `self.iter` as `None` if nothing more to get fn refill(&mut self) -> PyResult<()> { if self.iter.is_none() { return Ok(()); } Python::with_gil(|py| loop { if self.buffer.len() >= self.size { return Ok(()); } match unsafe { py.from_owned_ptr_or_opt::<PyAny>(pyo3::ffi::PyIter_Next( self.iter.as_ref().unwrap().as_ref(py).as_ptr(), )) } { Some(obj) => self.buffer.extend((self.converter)(obj)), None => { if PyErr::occurred(py) { return Err(PyErr::fetch(py)); } else { self.iter = None; } } }; if self.iter.is_none() { return Ok(()); } }) } } impl<T, F, I> Iterator for PyBufferedIterator<T, F> where F: Fn(&PyAny) -> I, I: IntoIterator<Item = PyResult<T>>, { type Item = PyResult<T>; fn next(&mut self) -> Option<Self::Item> { if !self.buffer.is_empty() { self.buffer.pop_front() } else if self.iter.is_some() { if let Err(e) = self.refill() { return Some(Err(e)); } self.next() } else { None } } }
tokenizers/bindings/python/src/utils/iterators.rs/0
{ "file_path": "tokenizers/bindings/python/src/utils/iterators.rs", "repo_id": "tokenizers", "token_count": 1797 }
201
import copy import os import pickle import pytest from tokenizers import ( AddedToken, SentencePieceUnigramTokenizer, Tokenizer, models, normalizers, pre_tokenizers, trainers, ) from ..utils import data_dir, train_files class TestBpeTrainer: def test_can_modify(self): trainer = trainers.BpeTrainer( vocab_size=12345, min_frequency=12, show_progress=False, special_tokens=["1", "2"], limit_alphabet=13, initial_alphabet=["a", "b", "c"], continuing_subword_prefix="pref", end_of_word_suffix="suf", ) assert trainer.vocab_size == 12345 assert trainer.min_frequency == 12 assert trainer.show_progress == False assert trainer.special_tokens == [ AddedToken("1", special=True), AddedToken("2", special=True), ] assert trainer.limit_alphabet == 13 assert sorted(trainer.initial_alphabet) == ["a", "b", "c"] assert trainer.continuing_subword_prefix == "pref" assert trainer.end_of_word_suffix == "suf" # Modify these trainer.vocab_size = 20000 assert trainer.vocab_size == 20000 trainer.min_frequency = 1 assert trainer.min_frequency == 1 trainer.show_progress = True assert trainer.show_progress == True trainer.special_tokens = [] assert trainer.special_tokens == [] trainer.limit_alphabet = None assert trainer.limit_alphabet == None trainer.initial_alphabet = ["d", "z"] assert sorted(trainer.initial_alphabet) == ["d", "z"] trainer.continuing_subword_prefix = None assert trainer.continuing_subword_prefix == None trainer.end_of_word_suffix = None assert trainer.continuing_subword_prefix == None def test_can_pickle(self): assert ( trainers.BpeTrainer(min_frequency=12).__getstate__() == b"""{"BpeTrainer":{"min_frequency":12,"vocab_size":30000,"show_progress":true,"special_tokens":[],"limit_alphabet":null,"initial_alphabet":[],"continuing_subword_prefix":null,"end_of_word_suffix":null,"max_token_length":null,"words":{}}}""" ) assert isinstance(pickle.loads(pickle.dumps(trainers.BpeTrainer(min_frequency=12))), trainers.BpeTrainer) assert isinstance(copy.deepcopy(trainers.BpeTrainer(min_frequency=12)), trainers.BpeTrainer) # Make sure everything is correct assert pickle.dumps(pickle.loads(pickle.dumps(trainers.BpeTrainer(min_frequency=12)))) == pickle.dumps( trainers.BpeTrainer(min_frequency=12) ) class TestWordPieceTrainer: def test_can_modify(self): trainer = trainers.WordPieceTrainer( vocab_size=12345, min_frequency=12, show_progress=False, special_tokens=["1", "2"], limit_alphabet=13, initial_alphabet=["a", "b", "c"], continuing_subword_prefix="pref", end_of_word_suffix="suf", ) assert trainer.vocab_size == 12345 assert trainer.min_frequency == 12 assert trainer.show_progress == False assert trainer.special_tokens == [ AddedToken("1", special=True), AddedToken("2", special=True), ] assert trainer.limit_alphabet == 13 assert sorted(trainer.initial_alphabet) == ["a", "b", "c"] assert trainer.continuing_subword_prefix == "pref" assert trainer.end_of_word_suffix == "suf" # Modify these trainer.vocab_size = 20000 assert trainer.vocab_size == 20000 trainer.min_frequency = 1 assert trainer.min_frequency == 1 trainer.show_progress = True assert trainer.show_progress == True trainer.special_tokens = [] assert trainer.special_tokens == [] trainer.limit_alphabet = None assert trainer.limit_alphabet == None trainer.initial_alphabet = ["d", "z"] assert sorted(trainer.initial_alphabet) == ["d", "z"] trainer.continuing_subword_prefix = None assert trainer.continuing_subword_prefix == None trainer.end_of_word_suffix = None assert trainer.continuing_subword_prefix == None def test_can_pickle(self): assert 
isinstance(pickle.loads(pickle.dumps(trainers.WordPieceTrainer())), trainers.WordPieceTrainer) class TestWordLevelTrainer: def test_can_modify(self): trainer = trainers.WordLevelTrainer( vocab_size=12345, min_frequency=12, show_progress=False, special_tokens=["1", "2"] ) assert trainer.vocab_size == 12345 assert trainer.min_frequency == 12 assert trainer.show_progress == False assert trainer.special_tokens == [ AddedToken("1", special=True), AddedToken("2", special=True), ] # Modify these trainer.vocab_size = 20000 assert trainer.vocab_size == 20000 trainer.min_frequency = 1 assert trainer.min_frequency == 1 trainer.show_progress = True assert trainer.show_progress == True trainer.special_tokens = [] assert trainer.special_tokens == [] def test_can_pickle(self): assert isinstance(pickle.loads(pickle.dumps(trainers.WordLevelTrainer())), trainers.WordLevelTrainer) class TestUnigram: def test_train(self, train_files): tokenizer = SentencePieceUnigramTokenizer() tokenizer.train(train_files["small"], show_progress=False) filename = "tests/data/unigram_trained.json" tokenizer.save(filename) os.remove(filename) def test_train_parallelism_with_custom_pretokenizer(self, train_files): class GoodCustomPretok: def split(self, n, normalized): # Here we just test that we can return a List[NormalizedString], it # does not really make sense to return twice the same otherwise return [normalized, normalized] def pre_tokenize(self, pretok): pretok.split(self.split) custom = pre_tokenizers.PreTokenizer.custom(GoodCustomPretok()) bpe_tokenizer = Tokenizer(models.BPE()) bpe_tokenizer.normalizer = normalizers.Lowercase() bpe_tokenizer.pre_tokenizer = custom if "TOKENIZERS_PARALLELISM" in os.environ: del os.environ["TOKENIZERS_PARALLELISM"] trainer = trainers.BpeTrainer(special_tokens=["<unk>"], show_progress=False) bpe_tokenizer.train([train_files["small"]], trainer=trainer) def test_can_pickle(self): assert isinstance(pickle.loads(pickle.dumps(trainers.UnigramTrainer())), trainers.UnigramTrainer) def test_train_with_special_tokens(self): filename = "tests/data/dummy-unigram-special_tokens-train.txt" with open(filename, "w") as f: f.write( """ [CLS] The Zen of Python, by Tim Peters [SEP] [CLS] Beautiful is better than ugly. [SEP] [CLS] Explicit is better than implicit. [SEP] [CLS] Simple is better than complex. [SEP] [CLS] Complex is better than complicated. [SEP] [CLS] Flat is better than nested. [SEP] [CLS] Sparse is better than dense. [SEP] [CLS] Readability counts. [SEP] [CLS] Special cases aren't special enough to break the rules. [SEP] [CLS] Although practicality beats purity. [SEP] [CLS] Errors should never pass silently. [SEP] [CLS] Unless explicitly silenced. [SEP] [CLS] In the face of ambiguity, refuse the temptation to guess. [SEP] [CLS] There should be one-- and preferably only one --obvious way to do it. [SEP] [CLS] Although that way may not be obvious at first unless you're Dutch. [SEP] [CLS] Now is better than never. [SEP] [CLS] Although never is often better than *right* now. [SEP] [CLS] If the implementation is hard to explain, it's a bad idea. [SEP] [CLS] If the implementation is easy to explain, it may be a good idea. [SEP] [CLS] Namespaces are one honking great idea -- let's do more of those! 
[SEP] """ ) tokenizer = Tokenizer(models.Unigram()) trainer = trainers.UnigramTrainer( show_progress=False, special_tokens=["[PAD]", "[SEP]", "[CLS]"], unk_token="[UNK]" ) tokenizer.train([filename], trainer=trainer) assert tokenizer.encode("[CLS] This is a test [SEP]").tokens == [ "[CLS]", " T", "h", "i", "s", " is ", "a", " ", "te", "s", "t ", "[SEP]", ] tokenizer = Tokenizer(models.Unigram()) trainer = trainers.UnigramTrainer( show_progress=False, special_tokens=["[PAD]", "[SEP]", "[CLS]"], unk_token="[UNK]", vocab_size=100, ) tokenizer.train([filename], trainer=trainer) assert tokenizer.get_vocab_size() == 100 tokenizer = Tokenizer(models.Unigram()) trainer = trainers.UnigramTrainer( show_progress=False, special_tokens=["[PAD]", "[SEP]", "[CLS]", "[UNK]"], unk_token="[UNK]", vocab_size=100, ) tokenizer.train([filename], trainer=trainer) assert tokenizer.get_vocab_size() == 100 def test_cannot_train_different_model(self): tokenizer = Tokenizer(models.BPE()) trainer = trainers.UnigramTrainer(show_progress=False) with pytest.raises(Exception, match="UnigramTrainer can only train a Unigram"): tokenizer.train([], trainer) def test_can_modify(self): trainer = trainers.UnigramTrainer( vocab_size=12345, show_progress=False, special_tokens=["1", AddedToken("2", lstrip=True)], initial_alphabet=["a", "b", "c"], ) assert trainer.vocab_size == 12345 assert trainer.show_progress == False assert trainer.special_tokens == [ AddedToken("1", normalized=False, special=True), AddedToken("2", lstrip=True, normalized=False, special=True), ] assert sorted(trainer.initial_alphabet) == ["a", "b", "c"] # Modify these trainer.vocab_size = 20000 assert trainer.vocab_size == 20000 trainer.show_progress = True assert trainer.show_progress == True trainer.special_tokens = [] assert trainer.special_tokens == [] trainer.initial_alphabet = ["d", "z"] assert sorted(trainer.initial_alphabet) == ["d", "z"] def test_continuing_prefix_trainer_mistmatch(self): UNK = "[UNK]" special_tokens = [UNK] tokenizer = Tokenizer(models.BPE(unk_token=UNK, continuing_subword_prefix="##")) trainer = trainers.BpeTrainer(special_tokens=special_tokens) tokenizer.pre_tokenizer = pre_tokenizers.Sequence( [pre_tokenizers.Whitespace(), pre_tokenizers.Digits(individual_digits=True)] ) tokenizer.train(files=["data/big.txt"], trainer=trainer) tokenizer.save("data/tokenizer.json") tokenizer.from_file("data/tokenizer.json")
tokenizers/bindings/python/tests/bindings/test_trainers.py/0
{ "file_path": "tokenizers/bindings/python/tests/bindings/test_trainers.py", "repo_id": "tokenizers", "token_count": 4957 }
202
# Added Tokens <tokenizerslangcontent> <python> ## AddedToken [[autodoc]] tokenizers.AddedToken - content - lstrip - normalized - rstrip - single_word </python> <rust> The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website. </rust> <node> The node API has not been documented yet. </node> </tokenizerslangcontent>
tokenizers/docs/source-doc-builder/api/added-tokens.mdx/0
{ "file_path": "tokenizers/docs/source-doc-builder/api/added-tokens.mdx", "repo_id": "tokenizers", "token_count": 134 }
203
# Quicktour Let's have a quick look at the 🤗 Tokenizers library features. The library provides an implementation of today's most used tokenizers that is both easy to use and blazing fast. ## Build a tokenizer from scratch To illustrate how fast the 🤗 Tokenizers library is, let's train a new tokenizer on [wikitext-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) (516M of text) in just a few seconds. First things first, you will need to download this dataset and unzip it with: ``` bash wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip unzip wikitext-103-raw-v1.zip ``` ### Training the tokenizer In this tour, we will build and train a Byte-Pair Encoding (BPE) tokenizer. For more information about the different type of tokenizers, check out this [guide](https://huggingface.co/transformers/tokenizer_summary.html) in the 🤗 Transformers documentation. Here, training the tokenizer means it will learn merge rules by: - Start with all the characters present in the training corpus as tokens. - Identify the most common pair of tokens and merge it into one token. - Repeat until the vocabulary (e.g., the number of tokens) has reached the size we want. The main API of the library is the `class` `Tokenizer`, here is how we instantiate one with a BPE model: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START init_tokenizer", "end-before": "END init_tokenizer", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_init_tokenizer", "end-before": "END quicktour_init_tokenizer", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START init_tokenizer", "end-before": "END init_tokenizer", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> To train our tokenizer on the wikitext files, we will need to instantiate a [trainer]{.title-ref}, in this case a `BpeTrainer` <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START init_trainer", "end-before": "END init_trainer", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_init_trainer", "end-before": "END quicktour_init_trainer", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START init_trainer", "end-before": "END init_trainer", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> We can set the training arguments like `vocab_size` or `min_frequency` (here left at their default values of 30,000 and 0) but the most important part is to give the `special_tokens` we plan to use later on (they are not used at all during training) so that they get inserted in the vocabulary. <Tip> The order in which you write the special tokens list matters: here `"[UNK]"` will get the ID 0, `"[CLS]"` will get the ID 1 and so forth. </Tip> We could train our tokenizer right now, but it wouldn't be optimal. 
Without a pre-tokenizer that will split our inputs into words, we might get tokens that overlap several words: for instance we could get an `"it is"` token since those two words often appear next to each other. Using a pre-tokenizer will ensure no token is bigger than a word returned by the pre-tokenizer. Here we want to train a subword BPE tokenizer, and we will use the easiest pre-tokenizer possible by splitting on whitespace. <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START init_pretok", "end-before": "END init_pretok", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_init_pretok", "end-before": "END quicktour_init_pretok", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START init_pretok", "end-before": "END init_pretok", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> Now, we can just call the `Tokenizer.train` method with any list of files we want to use: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START train", "end-before": "END train", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_train", "end-before": "END quicktour_train", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START train", "end-before": "END train", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> This should only take a few seconds to train our tokenizer on the full wikitext dataset! 
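For reference, here is a condensed Python sketch of the training setup described so far (the canonical snippets are the ones pulled in by the `literalinclude` blocks above; the file names below assume the unzipped wikitext-103 archive):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# A BPE model with an unknown token, wrapped in a Tokenizer
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

# The trainer holds the training options; special tokens are inserted into the vocabulary
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

# Split on whitespace so no token spans more than one word
tokenizer.pre_tokenizer = Whitespace()

# Train on the unzipped wikitext-103 files
files = [f"wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]
tokenizer.train(files, trainer)
```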
To save the tokenizer in one file that contains all its configuration and vocabulary, just use the `Tokenizer.save` method: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START save", "end-before": "END save", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_save", "end-before": "END quicktour_save", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START save", "end-before": "END save", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> and you can reload your tokenizer from that file with the `Tokenizer.from_file` `classmethod`: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START reload_tokenizer", "end-before": "END reload_tokenizer", "dedent": 12} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_reload_tokenizer", "end-before": "END quicktour_reload_tokenizer", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START reload_tokenizer", "end-before": "END reload_tokenizer", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> ### Using the tokenizer Now that we have trained a tokenizer, we can use it on any text we want with the `Tokenizer.encode` method: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START encode", "end-before": "END encode", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_encode", "end-before": "END quicktour_encode", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START encode", "end-before": "END encode", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> This applied the full pipeline of the tokenizer on the text, returning an `Encoding` object. To learn more about this pipeline, and how to apply (or customize) parts of it, check out [this page](pipeline). This `Encoding` object then has all the attributes you need for your deep learning model (or other). 
The `tokens` attribute contains the segmentation of your text in tokens: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START print_tokens", "end-before": "END print_tokens", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_print_tokens", "end-before": "END quicktour_print_tokens", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START print_tokens", "end-before": "END print_tokens", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> Similarly, the `ids` attribute will contain the index of each of those tokens in the tokenizer's vocabulary: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START print_ids", "end-before": "END print_ids", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_print_ids", "end-before": "END quicktour_print_ids", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START print_ids", "end-before": "END print_ids", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> An important feature of the 🤗 Tokenizers library is that it comes with full alignment tracking, meaning you can always get the part of your original sentence that corresponds to a given token. Those are stored in the `offsets` attribute of our `Encoding` object. 
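Concretely, these two attributes look like this (a sketch: the exact tokens and IDs depend on the vocabulary learned during training):

```python
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")

print(output.tokens)
# e.g. ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]

print(output.ids)
# e.g. [27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35]
```

The alignment information mentioned above is exposed through `output.offsets`, which we put to use next.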
For instance, let's assume we would want to find back what caused the `"[UNK]"` token to appear, which is the token at index 9 in the list, we can just ask for the offset at the index: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START print_offsets", "end-before": "END print_offsets", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_print_offsets", "end-before": "END quicktour_print_offsets", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START print_offsets", "end-before": "END print_offsets", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> and those are the indices that correspond to the emoji in the original sentence: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START use_offsets", "end-before": "END use_offsets", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_use_offsets", "end-before": "END quicktour_use_offsets", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START use_offsets", "end-before": "END use_offsets", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> ### Post-processing We might want our tokenizer to automatically add special tokens, like `"[CLS]"` or `"[SEP]"`. To do this, we use a post-processor. `TemplateProcessing` is the most commonly used, you just have to specify a template for the processing of single sentences and pairs of sentences, along with the special tokens and their IDs. When we built our tokenizer, we set `"[CLS]"` and `"[SEP]"` in positions 1 and 2 of our list of special tokens, so this should be their IDs. 
To double-check, we can use the `Tokenizer.token_to_id` method: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START check_sep", "end-before": "END check_sep", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_check_sep", "end-before": "END quicktour_check_sep", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START check_sep", "end-before": "END check_sep", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> Here is how we can set the post-processing to give us the traditional BERT inputs: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START init_template_processing", "end-before": "END init_template_processing", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_init_template_processing", "end-before": "END quicktour_init_template_processing", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START init_template_processing", "end-before": "END init_template_processing", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> Let's go over this snippet of code in more details. First we specify the template for single sentences: those should have the form `"[CLS] $A [SEP]"` where `$A` represents our sentence. Then, we specify the template for sentence pairs, which should have the form `"[CLS] $A [SEP] $B [SEP]"` where `$A` represents the first sentence and `$B` the second one. The `:1` added in the template represent the `type IDs` we want for each part of our input: it defaults to 0 for everything (which is why we don't have `$A:0`) and here we set it to 1 for the tokens of the second sentence and the last `"[SEP]"` token. Lastly, we specify the special tokens we used and their IDs in our tokenizer's vocabulary. 
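Put together, the Python version of this post-processor looks roughly like the following sketch (the IDs are looked up with `Tokenizer.token_to_id` rather than hard-coded):

```python
from tokenizers.processors import TemplateProcessing

tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[
        ("[CLS]", tokenizer.token_to_id("[CLS]")),
        ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ],
)
```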
To check that this worked properly, let's try to encode the same sentence as before: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START print_special_tokens", "end-before": "END print_special_tokens", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_print_special_tokens", "end-before": "END quicktour_print_special_tokens", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START print_special_tokens", "end-before": "END print_special_tokens", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> To check the results on a pair of sentences, we just pass the two sentences to `Tokenizer.encode`: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START print_special_tokens_pair", "end-before": "END print_special_tokens_pair", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_print_special_tokens_pair", "end-before": "END quicktour_print_special_tokens_pair", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START print_special_tokens_pair", "end-before": "END print_special_tokens_pair", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> You can then check that the type IDs attributed to each token are correct with <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START print_type_ids", "end-before": "END print_type_ids", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_print_type_ids", "end-before": "END quicktour_print_type_ids", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START print_type_ids", "end-before": "END print_type_ids", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> If you save your tokenizer with `Tokenizer.save`, the post-processor will be saved along with it.
### Encoding multiple sentences in a batch To get the full speed of the 🤗 Tokenizers library, it's best to process your texts in batches, using the `Tokenizer.encode_batch` method: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START encode_batch", "end-before": "END encode_batch", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_encode_batch", "end-before": "END quicktour_encode_batch", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START encode_batch", "end-before": "END encode_batch", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> The output is then a list of `Encoding` objects like the ones we saw before. You can process together as many texts as you like, as long as they fit in memory. To process a batch of sentence pairs, pass a list of pairs to the `Tokenizer.encode_batch` method, each pair containing a sentence A and a sentence B: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START encode_batch_pair", "end-before": "END encode_batch_pair", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_encode_batch_pair", "end-before": "END quicktour_encode_batch_pair", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START encode_batch_pair", "end-before": "END encode_batch_pair", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> When encoding multiple sentences, you can automatically pad the outputs to the longest sentence present by using `Tokenizer.enable_padding`, with the `pad_token` and its ID (we can double-check the ID of the padding token with `Tokenizer.token_to_id` like before): <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START enable_padding", "end-before": "END enable_padding", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_enable_padding", "end-before": "END quicktour_enable_padding", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START enable_padding", "end-before": "END enable_padding", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> We can set the `direction` of the padding (defaults to the right) or a given `length` if we want to pad every sample to that specific number (here we leave it unset to pad to the size of the longest text).
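As a rough sketch of the batching and padding flow just described (the sentences are illustrative, and `"[PAD]"` is assumed to be one of the special tokens registered during training):

```python
# A batch of single sentences
output = tokenizer.encode_batch(["Hello, y'all!", "How are you 😁 ?"])

# A batch of sentence pairs: one (sentence A, sentence B) pair per entry
output = tokenizer.encode_batch(
    [["Hello, y'all!", "How are you 😁 ?"], ["Hello to you too!", "I'm fine, thank you!"]]
)

# Pad every entry of a batch to the length of the longest one
tokenizer.enable_padding(pad_id=tokenizer.token_to_id("[PAD]"), pad_token="[PAD]")
```

With padding enabled, encoding the two single sentences again gives outputs of the same length: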
<tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START print_batch_tokens", "end-before": "END print_batch_tokens", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_print_batch_tokens", "end-before": "END quicktour_print_batch_tokens", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START print_batch_tokens", "end-before": "END print_batch_tokens", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> In this case, the `attention mask` generated by the tokenizer takes the padding into account: <tokenizerslangcontent> <python> <literalinclude> {"path": "../../bindings/python/tests/documentation/test_quicktour.py", "language": "python", "start-after": "START print_attention_mask", "end-before": "END print_attention_mask", "dedent": 8} </literalinclude> </python> <rust> <literalinclude> {"path": "../../tokenizers/tests/documentation.rs", "language": "rust", "start-after": "START quicktour_print_attention_mask", "end-before": "END quicktour_print_attention_mask", "dedent": 4} </literalinclude> </rust> <node> <literalinclude> {"path": "../../bindings/node/examples/documentation/quicktour.test.ts", "language": "js", "start-after": "START print_attention_mask", "end-before": "END print_attention_mask", "dedent": 8} </literalinclude> </node> </tokenizerslangcontent> ## Pretrained <tokenizerslangcontent> <python> ### Using a pretrained tokenizer You can load any tokenizer from the Hugging Face Hub as long as a `tokenizer.json` file is available in the repository. ```python from tokenizers import Tokenizer tokenizer = Tokenizer.from_pretrained("bert-base-uncased") ``` ### Importing a pretrained tokenizer from legacy vocabulary files You can also import a pretrained tokenizer directly in, as long as you have its vocabulary file. For instance, here is how to import the classic pretrained BERT tokenizer: ```python from tokenizers import BertWordPieceTokenizer tokenizer = BertWordPieceTokenizer("bert-base-uncased-vocab.txt", lowercase=True) ``` as long as you have downloaded the file `bert-base-uncased-vocab.txt` with ```bash wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt ``` </python> </tokenizerslangcontent>
tokenizers/docs/source-doc-builder/quicktour.mdx/0
{ "file_path": "tokenizers/docs/source-doc-builder/quicktour.mdx", "repo_id": "tokenizers", "token_count": 7936 }
204
Components ==================================================================================================== When building a Tokenizer, you can attach various types of components to this Tokenizer in order to customize its behavior. This page lists most provided components. .. _normalizers: .. entities:: python BertNormalizer.clean_text clean_text BertNormalizer.handle_chinese_chars handle_chinese_chars BertNormalizer.strip_accents strip_accents BertNormalizer.lowercase lowercase Normalizer.Sequence ``Sequence([NFKC(), Lowercase()])`` PreTokenizer.Sequence ``Sequence([Punctuation(), WhitespaceSplit()])`` SplitDelimiterBehavior.removed :obj:`removed` SplitDelimiterBehavior.isolated :obj:`isolated` SplitDelimiterBehavior.merged_with_previous :obj:`merged_with_previous` SplitDelimiterBehavior.merged_with_next :obj:`merged_with_next` SplitDelimiterBehavior.contiguous :obj:`contiguous` .. entities:: rust BertNormalizer.clean_text clean_text BertNormalizer.handle_chinese_chars handle_chinese_chars BertNormalizer.strip_accents strip_accents BertNormalizer.lowercase lowercase Normalizer.Sequence ``Sequence::new(vec![NFKC, Lowercase])`` PreTokenizer.Sequence ``Sequence::new(vec![Punctuation, WhitespaceSplit])`` SplitDelimiterBehavior.removed :obj:`Removed` SplitDelimiterBehavior.isolated :obj:`Isolated` SplitDelimiterBehavior.merged_with_previous :obj:`MergedWithPrevious` SplitDelimiterBehavior.merged_with_next :obj:`MergedWithNext` SplitDelimiterBehavior.contiguous :obj:`Contiguous` .. entities:: node BertNormalizer.clean_text cleanText BertNormalizer.handle_chinese_chars handleChineseChars BertNormalizer.strip_accents stripAccents BertNormalizer.lowercase lowercase Normalizer.Sequence .. PreTokenizer.Sequence .. SplitDelimiterBehavior.removed :obj:`removed` SplitDelimiterBehavior.isolated :obj:`isolated` SplitDelimiterBehavior.merged_with_previous :obj:`mergedWithPrevious` SplitDelimiterBehavior.merged_with_next :obj:`mergedWithNext` SplitDelimiterBehavior.contiguous :obj:`contiguous` Normalizers ---------------------------------------------------------------------------------------------------- A ``Normalizer`` is in charge of pre-processing the input string in order to normalize it as relevant for a given use case. Some common examples of normalization are the Unicode normalization algorithms (NFD, NFKD, NFC & NFKC), lowercasing etc... The specificity of ``tokenizers`` is that we keep track of the alignment while normalizing. This is essential to allow mapping from the generated tokens back to the input text. The ``Normalizer`` is optional. .. list-table:: :header-rows: 1 * - Name - Description - Example * - NFD - NFD unicode normalization - * - NFKD - NFKD unicode normalization - * - NFC - NFC unicode normalization - * - NFKC - NFKC unicode normalization - * - Lowercase - Replaces all uppercase to lowercase - Input: ``HELLO ὈΔΥΣΣΕΎΣ`` Output: ``hello ὀδυσσεύς`` * - Strip - Removes all whitespace characters on the specified sides (left, right or both) of the input - Input: ``" hi "`` Output: ``"hi"`` * - StripAccents - Removes all accent symbols in unicode (to be used with NFD for consistency) - Input: ``é`` Ouput: ``e`` * - Replace - Replaces a custom string or regexp and changes it with given content - ``Replace("a", "e")`` will behave like this: Input: ``"banana"`` Ouput: ``"benene"`` * - BertNormalizer - Provides an implementation of the Normalizer used in the original BERT. 
Options that can be set are: - :entity:`BertNormalizer.clean_text` - :entity:`BertNormalizer.handle_chinese_chars` - :entity:`BertNormalizer.strip_accents` - :entity:`BertNormalizer.lowercase` - * - Sequence - Composes multiple normalizers that will run in the provided order - :entity:`Normalizer.Sequence` .. _pre-tokenizers: Pre tokenizers ---------------------------------------------------------------------------------------------------- The ``PreTokenizer`` takes care of splitting the input according to a set of rules. This pre-processing lets you ensure that the underlying ``Model`` does not build tokens across multiple "splits". For example if you don't want to have whitespaces inside a token, then you can have a ``PreTokenizer`` that splits on these whitespaces. You can easily combine multiple ``PreTokenizer`` together using a ``Sequence`` (see below). The ``PreTokenizer`` is also allowed to modify the string, just like a ``Normalizer`` does. This is necessary to allow some complicated algorithms that require splitting before normalizing (e.g. the ByteLevel). .. list-table:: :header-rows: 1 * - Name - Description - Example * - ByteLevel - Splits on whitespaces while remapping all the bytes to a set of visible characters. This technique has been introduced by OpenAI with GPT-2 and has some more or less nice properties: - Since it maps on bytes, a tokenizer using this only requires **256** characters as initial alphabet (the number of values a byte can have), as opposed to the 130,000+ Unicode characters. - A consequence of the previous point is that it is absolutely unnecessary to have an unknown token using this since we can represent anything with 256 tokens (Youhou!! 🎉🎉) - For non-ASCII characters, it gets completely unreadable, but it works nonetheless! - Input: ``"Hello my friend, how are you?"`` Output: ``"Hello", "Ġmy", "Ġfriend", ",", "Ġhow", "Ġare", "Ġyou", "?"`` * - Whitespace - Splits on word boundaries (using the following regular expression: ``\w+|[^\w\s]+``) - Input: ``"Hello there!"`` Output: ``"Hello", "there", "!"`` * - WhitespaceSplit - Splits on any whitespace character - Input: ``"Hello there!"`` Output: ``"Hello", "there!"`` * - Punctuation - Will isolate all punctuation characters - Input: ``"Hello?"`` Output: ``"Hello", "?"`` * - Metaspace - Splits on whitespaces and replaces them with a special char "▁" (U+2581) - Input: ``"Hello there"`` Output: ``"Hello", "▁there"`` * - CharDelimiterSplit - Splits on a given character - Example with ``x``: Input: ``"Helloxthere"`` Output: ``"Hello", "there"`` * - Digits - Splits the numbers from any other characters. - Input: ``"Hello123there"`` Output: ```"Hello", "123", "there"``` * - Split - Versatile pre-tokenizer that splits on provided pattern and according to provided behavior. The pattern can be inverted if necessary. - pattern should be either a custom string or regexp. - behavior should be one of: * :entity:`SplitDelimiterBehavior.removed` * :entity:`SplitDelimiterBehavior.isolated` * :entity:`SplitDelimiterBehavior.merged_with_previous` * :entity:`SplitDelimiterBehavior.merged_with_next` * :entity:`SplitDelimiterBehavior.contiguous` - invert should be a boolean flag. - Example with `pattern` = :obj:`" "`, `behavior` = :obj:`"isolated"`, `invert` = :obj:`False`: Input: ``"Hello, how are you?"`` Output: ```"Hello,", " ", "how", " ", "are", " ", "you?"``` * - Sequence - Lets you compose multiple ``PreTokenizer`` that will be run in the given order - :entity:`PreTokenizer.Sequence` ..
_models: Models ---------------------------------------------------------------------------------------------------- Models are the core algorithms used to actually tokenize, and therefore, they are the only mandatory component of a Tokenizer. .. list-table:: :header-rows: 1 * - Name - Description * - WordLevel - This is the "classic" tokenization algorithm. It lets you simply map words to IDs without anything fancy. This has the advantage of being really simple to use and understand, but it requires extremely large vocabularies for good coverage. *Using this* ``Model`` *requires the use of a* ``PreTokenizer``. *No choice will be made by this model directly, it simply maps input tokens to IDs* * - BPE - One of the most popular subword tokenization algorithms. The Byte-Pair-Encoding works by starting with characters, while merging those that are the most frequently seen together, thus creating new tokens. It then works iteratively to build new tokens out of the most frequent pairs it sees in a corpus. BPE is able to build words it has never seen by using multiple subword tokens, and thus requires smaller vocabularies, with fewer chances of having "unk" (unknown) tokens. * - WordPiece - This is a subword tokenization algorithm quite similar to BPE, used mainly by Google in models like BERT. It uses a greedy algorithm that tries to build long words first, splitting into multiple tokens when entire words don't exist in the vocabulary. This is different from BPE, which starts from characters and builds bigger tokens when possible. It uses the famous ``##`` prefix to identify tokens that are part of a word (i.e. not starting a word). * - Unigram - Unigram is also a subword tokenization algorithm, and works by trying to identify the best set of subword tokens to maximize the probability for a given sentence. This is different from BPE in that it is not deterministic based on a set of rules applied sequentially. Instead Unigram will be able to compute multiple ways of tokenizing, while choosing the most probable one. .. _post-processors: PostProcessor ---------------------------------------------------------------------------------------------------- After the whole pipeline, we sometimes want to insert some special tokens before feeding a tokenized string into a model like "[CLS] My horse is amazing [SEP]". The ``PostProcessor`` is the component doing just that. .. list-table:: :header-rows: 1 * - Name - Description - Example * - TemplateProcessing - Lets you easily template the post-processing, adding special tokens, and specifying the ``type_id`` for each sequence/special token. The template is given two strings representing the single sequence and the pair of sequences, as well as a set of special tokens to use. - Example, when specifying a template with these values: - single: ``"[CLS] $A [SEP]"`` - pair: ``"[CLS] $A [SEP] $B [SEP]"`` - special tokens: - ``"[CLS]"`` - ``"[SEP]"`` Input: ``("I like this", "but not this")`` Output: ``"[CLS] I like this [SEP] but not this [SEP]"`` .. _decoders: Decoders ---------------------------------------------------------------------------------------------------- The Decoder knows how to go from the IDs used by the Tokenizer, back to a readable piece of text. Some ``Normalizer`` and ``PreTokenizer`` use special characters or identifiers that need to be reverted, for example. .. list-table:: :header-rows: 1 * - Name - Description * - ByteLevel - Reverts the ByteLevel PreTokenizer.
This PreTokenizer encodes at the byte-level, using a set of visible Unicode characters to represent each byte, so we need a Decoder to revert this process and get something readable again. * - Metaspace - Reverts the Metaspace PreTokenizer. This PreTokenizer uses a special identifier ``▁`` to identify whitespaces, and so this Decoder helps with decoding these. * - WordPiece - Reverts the WordPiece Model. This model uses a special identifier ``##`` for continuing subwords, and so this Decoder helps with decoding these.
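As a closing illustration, here is a small sketch (using the Python bindings; the component choices are arbitrary) of how the components described on this page are attached to a ``Tokenizer``:

.. code-block:: python

    from tokenizers import Tokenizer, decoders, normalizers, pre_tokenizers
    from tokenizers.models import BPE

    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

    # Normalizer: NFKC unicode normalization followed by lowercasing
    tokenizer.normalizer = normalizers.Sequence([normalizers.NFKC(), normalizers.Lowercase()])

    # Pre-tokenizer and the matching decoder: byte-level splitting and its reverse
    tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
    tokenizer.decoder = decoders.ByteLevel()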
tokenizers/docs/source/components.rst/0
{ "file_path": "tokenizers/docs/source/components.rst", "repo_id": "tokenizers", "token_count": 4236 }
205
<p align="center"> <br> <img src="https://huggingface.co/landing/assets/tokenizers/tokenizers-logo.png" width="600"/> <br> <p> <p align="center"> <img alt="Build" src="https://github.com/huggingface/tokenizers/workflows/Rust/badge.svg"> <a href="https://github.com/huggingface/tokenizers/blob/master/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/tokenizers.svg?color=blue"> </a> <a href="https://docs.rs/tokenizers/"> <img alt="Doc" src="https://docs.rs/tokenizers/badge.svg"> </a> </p> <br> {{readme}}
tokenizers/tokenizers/README.tpl/0
{ "file_path": "tokenizers/tokenizers/README.tpl", "repo_id": "tokenizers", "token_count": 259 }
206
use crate::tokenizer::{Decoder, Result}; use serde::{Deserialize, Serialize}; #[derive(Deserialize, Clone, Debug, Serialize, Default)] /// Strip is a simple decoder which removes, from each token, up to `start` leading /// occurrences and up to `stop` trailing occurrences of the `content` character, /// stopping at the first character that does not match. #[serde(tag = "type")] #[non_exhaustive] pub struct Strip { pub content: char, pub start: usize, pub stop: usize, } impl Strip { pub fn new(content: char, start: usize, stop: usize) -> Self { Self { content, start, stop, } } } impl Decoder for Strip { fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> { Ok(tokens .into_iter() .map(|token| { let chars: Vec<char> = token.chars().collect(); let mut start_cut = 0; for (i, &c) in chars.iter().enumerate().take(self.start) { if c == self.content { start_cut = i + 1; continue; } else { break; } } let mut stop_cut = chars.len(); for i in 0..self.stop { let index = chars.len() - i - 1; if chars[index] == self.content { stop_cut = index; continue; } else { break; } } let new_token: String = chars[start_cut..stop_cut].iter().collect(); new_token }) .collect()) } } #[cfg(test)] mod tests { use super::*; #[test] fn decode() { let decoder = Strip::new('H', 1, 0); let res = decoder .decode_chain(vec!["Hey".into(), " friend!".into(), "HHH".into()]) .unwrap(); assert_eq!(res, vec!["ey", " friend!", "HH"]); let decoder = Strip::new('y', 0, 1); let res = decoder .decode_chain(vec!["Hey".into(), " friend!".into()]) .unwrap(); assert_eq!(res, vec!["He", " friend!"]); } }
tokenizers/tokenizers/src/decoders/strip.rs/0
{ "file_path": "tokenizers/tokenizers/src/decoders/strip.rs", "repo_id": "tokenizers", "token_count": 1217 }
207
use super::{super::OrderedVocabIter, WordLevel, WordLevelBuilder}; use serde::{ de::{MapAccess, Visitor}, ser::SerializeStruct, Deserialize, Deserializer, Serialize, Serializer, }; use std::collections::HashSet; impl Serialize for WordLevel { fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: Serializer, { let mut model = serializer.serialize_struct("WordLevel", 3)?; let ordered_vocab = OrderedVocabIter::new(&self.vocab_r); model.serialize_field("type", "WordLevel")?; model.serialize_field("vocab", &ordered_vocab)?; model.serialize_field("unk_token", &self.unk_token)?; model.end() } } impl<'de> Deserialize<'de> for WordLevel { fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where D: Deserializer<'de>, { deserializer.deserialize_struct( "WordLevel", &["type", "vocab", "unk_token"], WordLevelVisitor, ) } } struct WordLevelVisitor; impl<'de> Visitor<'de> for WordLevelVisitor { type Value = WordLevel; fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result { write!(fmt, "struct WordLevel") } fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error> where V: MapAccess<'de>, { let mut builder = WordLevelBuilder::new(); let mut missing_fields = vec![ // for retrocompatibility the "type" field is not mandatory "unk_token", "vocab", ] .into_iter() .collect::<HashSet<_>>(); while let Some(key) = map.next_key::<String>()? { match key.as_ref() { "vocab" => builder = builder.vocab(map.next_value()?), "unk_token" => builder = builder.unk_token(map.next_value()?), "type" => match map.next_value()? { "WordLevel" => {} u => { return Err(serde::de::Error::invalid_value( serde::de::Unexpected::Str(u), &"WordLevel", )) } }, _ => {} } missing_fields.remove::<str>(&key); } if !missing_fields.is_empty() { Err(serde::de::Error::missing_field( missing_fields.iter().next().unwrap(), )) } else { Ok(builder.build().map_err(serde::de::Error::custom)?) } } } #[cfg(test)] mod tests { use crate::models::wordlevel::{Vocab, WordLevel, WordLevelBuilder}; #[test] fn serde() { let wl = WordLevel::default(); let wl_s = r#"{"type":"WordLevel","vocab":{},"unk_token":"<unk>"}"#; assert_eq!(serde_json::to_string(&wl).unwrap(), wl_s); assert_eq!(serde_json::from_str::<WordLevel>(wl_s).unwrap(), wl); } #[test] fn incomplete_vocab() { let vocab: Vocab = [("<unk>".into(), 0), ("b".into(), 2)] .iter() .cloned() .collect(); let wordlevel = WordLevelBuilder::default() .vocab(vocab) .unk_token("<unk>".to_string()) .build() .unwrap(); let wl_s = r#"{"type":"WordLevel","vocab":{"<unk>":0,"b":2},"unk_token":"<unk>"}"#; assert_eq!(serde_json::to_string(&wordlevel).unwrap(), wl_s); assert_eq!(serde_json::from_str::<WordLevel>(wl_s).unwrap(), wordlevel); } #[test] fn deserialization_should_fail() { let missing_unk = r#"{"type":"WordLevel","vocab":{}}"#; assert!(serde_json::from_str::<WordLevel>(missing_unk) .unwrap_err() .to_string() .starts_with("missing field `unk_token`")); let wrong_type = r#"{"type":"WordPiece","vocab":{}}"#; assert!(serde_json::from_str::<WordLevel>(wrong_type) .unwrap_err() .to_string() .starts_with("invalid value: string \"WordPiece\", expected WordLevel")); } }
tokenizers/tokenizers/src/models/wordlevel/serialization.rs/0
{ "file_path": "tokenizers/tokenizers/src/models/wordlevel/serialization.rs", "repo_id": "tokenizers", "token_count": 2084 }
208
use serde::{Deserialize, Serialize}; use crate::tokenizer::{PreTokenizedString, PreTokenizer, Result, SplitDelimiterBehavior}; use crate::utils::macro_rules_attribute; #[derive(Clone, Debug, PartialEq, Eq)] /// Pre tokenizes the numbers into single tokens. If individual_digits is set /// to true, then all digits are splitted into individual tokens. #[non_exhaustive] #[macro_rules_attribute(impl_serde_type!)] pub struct Digits { pub individual_digits: bool, } impl Digits { pub fn new(individual_digits: bool) -> Self { Self { individual_digits } } } impl Default for Digits { fn default() -> Self { Self::new(false) } } impl PreTokenizer for Digits { fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> { if self.individual_digits { pretokenized.split(|_, normalized| { normalized.split(char::is_numeric, SplitDelimiterBehavior::Isolated) }) } else { pretokenized.split(|_, normalized| { normalized.split(char::is_numeric, SplitDelimiterBehavior::Contiguous) }) } } } #[cfg(test)] mod tests { use super::*; use crate::{OffsetReferential, OffsetType}; #[test] fn numbers() { let pretok = Digits::new(false); let mut pretokenized = PreTokenizedString::from("Hey 123 friend!"); pretok.pre_tokenize(&mut pretokenized).unwrap(); assert_eq!( pretokenized .get_splits(OffsetReferential::Normalized, OffsetType::Byte) .into_iter() .map(|(s, o, _)| (s, o)) .collect::<Vec<_>>(), vec![("Hey ", (0, 4)), ("123", (4, 7)), (" friend!", (7, 15))] ); assert_eq!( pretokenized .get_splits(OffsetReferential::Original, OffsetType::Byte) .into_iter() .map(|(s, o, _)| (s, o)) .collect::<Vec<_>>(), vec![("Hey ", (0, 4)), ("123", (4, 7)), (" friend!", (7, 15))] ); } #[test] fn individual_digits() { let pretok = Digits::new(true); let mut pretokenized = PreTokenizedString::from("Hey 123 friend!"); pretok.pre_tokenize(&mut pretokenized).unwrap(); assert_eq!( pretokenized .get_splits(OffsetReferential::Normalized, OffsetType::Byte) .into_iter() .map(|(s, o, _)| (s, o)) .collect::<Vec<_>>(), vec![ ("Hey ", (0, 4)), ("1", (4, 5)), ("2", (5, 6)), ("3", (6, 7)), (" friend!", (7, 15)) ] ); assert_eq!( pretokenized .get_splits(OffsetReferential::Original, OffsetType::Byte) .into_iter() .map(|(s, o, _)| (s, o)) .collect::<Vec<_>>(), vec![ ("Hey ", (0, 4)), ("1", (4, 5)), ("2", (5, 6)), ("3", (6, 7)), (" friend!", (7, 15)) ] ); } }
tokenizers/tokenizers/src/pre_tokenizers/digits.rs/0
{ "file_path": "tokenizers/tokenizers/src/pre_tokenizers/digits.rs", "repo_id": "tokenizers", "token_count": 1667 }
209
use crate::parallelism::*; use crate::tokenizer::{Offsets, Token}; use crate::utils::padding::PaddingDirection; use crate::utils::truncation::TruncationDirection; use serde::{Deserialize, Serialize}; use std::collections::HashMap; use std::ops::Range; /// Represents the output of a `Tokenizer`. #[derive(Default, PartialEq, Debug, Clone, Serialize, Deserialize)] pub struct Encoding { /// IDs produced by the `Tokenizer` ids: Vec<u32>, /// Type of the IDs type_ids: Vec<u32>, /// Tokens associated to each ID tokens: Vec<String>, /// Indice of the word associated to each token/ID words: Vec<Option<u32>>, /// Offsets of the token/ID from the NormalizedString offsets: Vec<Offsets>, /// Mask identifying special tokens special_tokens_mask: Vec<u32>, /// Mask identifying padding tokens for the attention mechanism attention_mask: Vec<u32>, /// A list of overflowing Encoding generated when we got truncated overflowing: Vec<Encoding>, /// Ranges of tokens covered by each sequence. If this is empty we consider /// there is only one sequence in this Encoding, and that it covers the entire range. sequence_ranges: HashMap<usize, Range<usize>>, } impl Encoding { #[allow(clippy::too_many_arguments)] pub fn new( ids: Vec<u32>, type_ids: Vec<u32>, tokens: Vec<String>, words: Vec<Option<u32>>, offsets: Vec<Offsets>, special_tokens_mask: Vec<u32>, attention_mask: Vec<u32>, overflowing: Vec<Self>, sequence_ranges: HashMap<usize, Range<usize>>, ) -> Self { Self { ids, type_ids, tokens, words, offsets, special_tokens_mask, attention_mask, overflowing, sequence_ranges, } } pub fn with_capacity(len: usize) -> Self { Self { ids: Vec::with_capacity(len), type_ids: Vec::with_capacity(len), tokens: Vec::with_capacity(len), words: Vec::with_capacity(len), offsets: Vec::with_capacity(len), special_tokens_mask: Vec::with_capacity(len), attention_mask: Vec::with_capacity(len), overflowing: vec![], sequence_ranges: HashMap::new(), } } pub fn from_tokens(tokens: Vec<Token>, type_id: u32) -> Self { let length = tokens.len(); let (ids, tokens, offsets) = tokens.into_iter().fold( ( Vec::with_capacity(length), Vec::with_capacity(length), Vec::with_capacity(length), ), |(mut ids, mut tokens, mut offsets), t| { ids.push(t.id); tokens.push(t.value); offsets.push(t.offsets); (ids, tokens, offsets) }, ); Self { ids, tokens, offsets, words: vec![None; length], type_ids: vec![type_id; length], attention_mask: vec![1; length], special_tokens_mask: vec![0; length], overflowing: vec![], sequence_ranges: HashMap::new(), } } /// Whether this Encoding is empty pub fn is_empty(&self) -> bool { self.ids.is_empty() } /// Return the total length of this Encoding pub fn len(&self) -> usize { self.ids.len() } /// Return the number of sequences combined in this Encoding pub fn n_sequences(&self) -> usize { if self.sequence_ranges.is_empty() { 1 } else { self.sequence_ranges.len() } } /// Set the given sequence id for the whole range of tokens contained in this Encoding pub fn set_sequence_id(&mut self, sequence_id: usize) { self.sequence_ranges.insert(sequence_id, 0..self.len()); } pub fn get_tokens(&self) -> &[String] { &self.tokens[..] 
} pub fn get_word_ids(&self) -> &[Option<u32>] { &self.words } pub fn get_word_ids_mut(&mut self) -> &mut [Option<u32>] { &mut self.words } pub fn get_sequence_ids(&self) -> Vec<Option<usize>> { let mut sequences = vec![None; self.len()]; for seq_id in 0..self.n_sequences() { let range = self.sequence_range(seq_id); let seq_len = range.len(); sequences.splice(range, std::iter::repeat(Some(seq_id)).take(seq_len)); } sequences } pub fn get_ids(&self) -> &[u32] { &self.ids } pub fn get_type_ids(&self) -> &[u32] { &self.type_ids } pub fn set_type_ids(&mut self, type_ids: Vec<u32>) { self.type_ids = type_ids; } pub fn get_offsets(&self) -> &[Offsets] { &self.offsets } pub fn get_offsets_mut(&mut self) -> &mut [Offsets] { &mut self.offsets } pub fn get_special_tokens_mask(&self) -> &[u32] { &self.special_tokens_mask } pub fn get_attention_mask(&self) -> &[u32] { &self.attention_mask } pub fn get_overflowing(&self) -> &Vec<Encoding> { &self.overflowing } pub fn set_overflowing(&mut self, overflowing: Vec<Encoding>) { self.overflowing = overflowing; } pub fn get_overflowing_mut(&mut self) -> &mut Vec<Encoding> { &mut self.overflowing } pub fn take_overflowing(&mut self) -> Vec<Encoding> { std::mem::take(&mut self.overflowing) } pub(crate) fn process_tokens_with_offsets_mut<F>(&mut self, func: F) where F: FnMut((usize, (&String, &mut Offsets))), { self.tokens .iter() .zip(self.offsets.iter_mut()) .enumerate() .for_each(func) } /// Returns the range to target to retrieve something (word_id, offsets, ..) related to the /// given sequence id fn sequence_range(&self, sequence_id: usize) -> Range<usize> { self.sequence_ranges .get(&sequence_id) .cloned() .unwrap_or(0..self.len()) } /// Returns the index of the sequence containing the given token pub fn token_to_sequence(&self, token: usize) -> Option<usize> { if token > self.len() { None } else if self.sequence_ranges.is_empty() { Some(0) } else { self.sequence_ranges.iter().find_map(|(seq_id, range)| { if range.contains(&token) { Some(*seq_id) } else { None } }) } } /// Get the encoded tokens corresponding to the word at the given index in the input sequence, /// with the form (start_token, end_token + 1) pub fn word_to_tokens(&self, word: u32, sequence_id: usize) -> Option<(usize, usize)> { let (mut start, mut end) = (None, None); let sequence_range = self.sequence_range(sequence_id); self.words .get(sequence_range.clone())? .iter() .enumerate() .take_while(|(_, w)| **w <= Some(word)) .filter(|(_, w)| **w == Some(word)) .for_each(|(i, _)| { if start.is_none() || Some(i) < start { start = Some(i); } if end.is_none() || Some(i) >= end { end = Some(i + 1); } }); if let (Some(start), Some(end)) = (start, end) { Some((sequence_range.start + start, sequence_range.start + end)) } else { None } } /// Get the offsets of the word at the given index in the input sequence. pub fn word_to_chars(&self, word: u32, sequence_id: usize) -> Option<Offsets> { self.word_to_tokens(word, sequence_id) .and_then(|(start, end)| { if end == 0 { None } else { Some((self.offsets[start].0, self.offsets[end - 1].1)) } }) } /// Get the offsets of the token at the given index. pub fn token_to_chars(&self, token: usize) -> Option<(usize, Offsets)> { Some(( self.token_to_sequence(token)?, self.offsets.get(token).copied()?, )) } /// Get the word that contains the token at the given index. 
pub fn token_to_word(&self, token: usize) -> Option<(usize, u32)> { Some(( self.token_to_sequence(token)?, self.words.get(token).copied().flatten()?, )) } /// Get the token that contains the given char. pub fn char_to_token(&self, pos: usize, sequence_id: usize) -> Option<usize> { let sequence_range = self.sequence_range(sequence_id); self.offsets .get(sequence_range.clone())? .iter() .position(|(start, end)| pos >= *start && pos < *end) .map(|pos| sequence_range.start + pos) } /// Get the word that contains the given char. pub fn char_to_word(&self, pos: usize, sequence_id: usize) -> Option<u32> { Some( self.char_to_token(pos, sequence_id) .and_then(|token| self.token_to_word(token))? .1, ) } /// Truncate the current `Encoding`. /// /// Panics if `stride >= max_len` pub fn truncate(&mut self, max_len: usize, stride: usize, direction: TruncationDirection) { let encoding_len = self.ids.len(); if max_len >= encoding_len { return; } if max_len == 0 { let o = std::mem::replace(self, Encoding::with_capacity(0)); self.overflowing.push(o); return; } assert!(stride < max_len, "`stride` must be strictly less than `max_len={}` (note that `max_len` may be shorter than the max length of the original model, as it subtracts the number of special characters", max_len); // When truncating, we lose the `sequence_ranges` information. self.sequence_ranges.clear(); let offset = max_len - stride; let mut end = false; let parts_ranges: Vec<(usize, usize)> = match direction { TruncationDirection::Right => (0..encoding_len) .step_by(offset) .filter_map(|start| { if !end { let stop = std::cmp::min(start + max_len, encoding_len); end = stop == encoding_len; Some((start, stop)) } else { None } }) .collect(), TruncationDirection::Left => (0..encoding_len) .rev() .step_by(offset) .filter_map(|stop| { let stop = stop + 1; let start = if stop < max_len { 0 } else { stop - max_len }; if start < stop && !end { end = start == 0; Some((start, stop)) } else { None } }) .collect(), }; let mut i = 0; let (start, stop) = parts_ranges[i]; let mut new_encoding = Encoding { ids: self.ids[start..stop].to_vec(), type_ids: self.type_ids[start..stop].to_vec(), tokens: self.tokens[start..stop].to_vec(), words: self.words[start..stop].to_vec(), offsets: self.offsets[start..stop].to_vec(), special_tokens_mask: self.special_tokens_mask[start..stop].to_vec(), attention_mask: self.attention_mask[start..stop].to_vec(), overflowing: vec![], sequence_ranges: HashMap::new(), }; loop { if i == parts_ranges.len() - 1 { break; } i += 1; let (start, stop) = parts_ranges[i]; new_encoding.overflowing.push(Encoding { ids: self.ids[start..stop].to_vec(), type_ids: self.type_ids[start..stop].to_vec(), tokens: self.tokens[start..stop].to_vec(), words: self.words[start..stop].to_vec(), offsets: self.offsets[start..stop].to_vec(), special_tokens_mask: self.special_tokens_mask[start..stop].to_vec(), attention_mask: self.attention_mask[start..stop].to_vec(), overflowing: vec![], sequence_ranges: HashMap::new(), }); } *self = new_encoding; } /// Merge all Encodings together pub fn merge<I: IntoIterator<Item = Encoding>>(encodings: I, growing_offsets: bool) -> Self { let mut encoding = Encoding::default(); // TODO this is suboptimal as we're doing this iteratively instead of preallocating // all the encodings sizes all at once and only copying into this preallocated vector // https://github.com/huggingface/tokenizers/pull/1049 // In order to fix, we just need to preallocate all vectors, then copy everything // into it (and deal with overlowings correctly) for 
sub in encodings { encoding.merge_with(sub, growing_offsets); } encoding } /// Merge ourself with the given `Encoding`. Happens in place. pub fn merge_with(&mut self, pair: Encoding, growing_offsets: bool) { // Handle merging the overflowing parts too: Combine them all // In most of the cases, we expect `pair.overflowing.len() == 0` let mut overflowings = vec![]; // 1. All our overflowings with all the others for self_o in &self.overflowing { // 1. The pair itself let mut n_encoding = self_o.clone(); n_encoding.merge_with(pair.clone(), growing_offsets); overflowings.push(n_encoding); // 2. Its overflowings (this should rarely happen...) for other_o in &pair.overflowing { let mut n_encoding = self_o.clone(); n_encoding.merge_with(other_o.clone(), growing_offsets); overflowings.push(n_encoding); } } // 2. Ourself with all the other overflowings (this should rarely happen too...) for other_o in &pair.overflowing { let mut n_encoding = self.clone(); n_encoding.merge_with(other_o.clone(), growing_offsets); overflowings.push(n_encoding); } // Finish by merging ourself with the other encoding let original_self_len = self.len(); // Must be before any modification to self.ids self.sequence_ranges .extend(pair.sequence_ranges.into_iter().map(|(seq_id, range)| { ( seq_id, original_self_len + range.start..original_self_len + range.end, ) })); self.ids.extend(pair.ids); self.type_ids.extend(pair.type_ids); self.tokens.extend(pair.tokens); self.words.extend(pair.words); let starting_offset = if growing_offsets { self.offsets.last().map_or(0, |o| o.1) } else { 0 }; self.offsets.extend( pair.offsets .into_iter() .map(|(start, end)| (start + starting_offset, end + starting_offset)) .collect::<Vec<_>>(), ); self.special_tokens_mask.extend(pair.special_tokens_mask); self.attention_mask.extend(pair.attention_mask); self.overflowing = overflowings; } pub fn pad( &mut self, target_length: usize, pad_id: u32, pad_type_id: u32, pad_token: &str, direction: PaddingDirection, ) { // Dispatch call to all the overflowings first self.overflowing.maybe_par_iter_mut().for_each(|encoding| { encoding.pad(target_length, pad_id, pad_type_id, pad_token, direction) }); // Then check if we should pad ourself if self.ids.len() >= target_length { // We just do nothing if the wanted padding length is smaller than us return; } let pad_length = target_length - self.ids.len(); match direction { PaddingDirection::Left => { self.ids = (0..pad_length) .map(|_| pad_id) .chain(self.ids.drain(..)) .collect(); self.type_ids = (0..pad_length) .map(|_| pad_type_id) .chain(self.type_ids.drain(..)) .collect(); self.tokens = (0..pad_length) .map(|_| pad_token.to_owned()) .chain(self.tokens.drain(..)) .collect(); self.words = (0..pad_length) .map(|_| None) .chain(self.words.drain(..)) .collect(); self.attention_mask = (0..pad_length) .map(|_| 0) .chain(self.attention_mask.drain(..)) .collect(); self.special_tokens_mask = (0..pad_length) .map(|_| 1) .chain(self.special_tokens_mask.drain(..)) .collect(); self.offsets = (0..pad_length) .map(|_| (0, 0)) .chain(self.offsets.drain(..)) .collect(); self.sequence_ranges .iter_mut() .for_each(|(_seq_id, range)| { *range = (range.start + pad_length)..(range.end + pad_length) }); } PaddingDirection::Right => { self.ids.extend((0..pad_length).map(|_| pad_id)); self.type_ids.extend((0..pad_length).map(|_| pad_type_id)); self.tokens .extend((0..pad_length).map(|_| pad_token.to_owned())); self.words.extend((0..pad_length).map(|_| None)); self.attention_mask.extend((0..pad_length).map(|_| 0)); 
self.special_tokens_mask.extend((0..pad_length).map(|_| 1)); self.offsets.extend((0..pad_length).map(|_| (0, 0))); } } } } impl std::iter::FromIterator<Encoding> for Encoding { fn from_iter<I: IntoIterator<Item = Encoding>>(iter: I) -> Self { Self::merge(iter, false) } } impl std::iter::FromIterator<(u32, String, (usize, usize), Option<u32>, u32)> for Encoding { fn from_iter<I: IntoIterator<Item = (u32, String, (usize, usize), Option<u32>, u32)>>( iter: I, ) -> Self { let items = iter.into_iter(); let (lower, upper) = items.size_hint(); let length = upper.unwrap_or(lower); let mut encoding = Self::with_capacity(length); for (id, token, offsets, word, type_id) in items { encoding.ids.push(id); encoding.tokens.push(token); encoding.offsets.push(offsets); encoding.type_ids.push(type_id); encoding.words.push(word); encoding.special_tokens_mask.push(0); encoding.attention_mask.push(1); } encoding } } #[cfg(test)] mod tests { use super::*; use std::iter::FromIterator; #[test] fn merge_encodings() { let mut a = Encoding { ids: vec![1], type_ids: vec![0], tokens: vec![String::from("Hello ")], words: vec![Some(0)], offsets: vec![(0, 6)], special_tokens_mask: vec![0], attention_mask: vec![1], ..Default::default() }; let b = Encoding { ids: vec![2], type_ids: vec![1], tokens: vec![String::from("World!")], words: vec![Some(0)], offsets: vec![(0, 6)], special_tokens_mask: vec![0], attention_mask: vec![1], ..Default::default() }; a.merge_with(b, true); assert_eq!( a, Encoding { ids: vec![1, 2], type_ids: vec![0, 1], tokens: vec![String::from("Hello "), String::from("World!")], words: vec![Some(0), Some(0)], offsets: vec![(0, 6), (6, 12)], special_tokens_mask: vec![0, 0], attention_mask: vec![1, 1], ..Default::default() } ); } #[test] fn truncate() { let mut a = Encoding { ids: vec![1, 2, 3], type_ids: vec![0, 0, 0], tokens: vec![ String::from("Hello"), String::from("World"), String::from("!"), ], words: vec![Some(0), Some(1), Some(2)], offsets: vec![(0, 5), (6, 11), (11, 12)], special_tokens_mask: vec![0, 0, 0], attention_mask: vec![1, 1, 1], ..Default::default() }; a.truncate(2, 0, TruncationDirection::Right); assert_eq!( a, Encoding { ids: vec![1, 2], type_ids: vec![0, 0], tokens: vec![String::from("Hello"), String::from("World")], words: vec![Some(0), Some(1)], offsets: vec![(0, 5), (6, 11)], special_tokens_mask: vec![0, 0], attention_mask: vec![1, 1], overflowing: vec![Encoding { ids: vec![3], type_ids: vec![0], tokens: vec![String::from("!")], words: vec![Some(2)], offsets: vec![(11, 12)], special_tokens_mask: vec![0], attention_mask: vec![1], ..Default::default() }], ..Default::default() } ); } #[test] fn truncate_to_empty() { let mut a = Encoding { ids: vec![1, 2, 3], type_ids: vec![0, 0, 0], tokens: vec![ String::from("Hello"), String::from("World"), String::from("!"), ], words: vec![Some(0), Some(1), Some(2)], offsets: vec![(0, 5), (6, 11), (11, 12)], special_tokens_mask: vec![0, 0, 0], attention_mask: vec![1, 1, 1], ..Default::default() }; a.truncate(0, 0, TruncationDirection::Right); assert_eq!( a, Encoding { overflowing: vec![Encoding { ids: vec![1, 2, 3], type_ids: vec![0, 0, 0], tokens: vec![ String::from("Hello"), String::from("World"), String::from("!"), ], words: vec![Some(0), Some(1), Some(2)], offsets: vec![(0, 5), (6, 11), (11, 12)], special_tokens_mask: vec![0, 0, 0], attention_mask: vec![1, 1, 1], overflowing: vec![], ..Default::default() }], ..Default::default() } ); } #[test] fn truncate_overflow_with_stride() { let mut enc = Encoding { ids: vec![1, 2, 3, 4, 5], type_ids: vec![0, 
0, 0, 0, 0], tokens: vec![ String::from("42"), String::from("is"), String::from("the"), String::from("answer"), String::from("!"), ], words: vec![Some(0), Some(1), Some(2), Some(3), Some(4)], offsets: vec![(0, 2), (2, 4), (4, 7), (7, 13), (13, 14)], special_tokens_mask: vec![0, 0, 0, 0, 0], attention_mask: vec![1, 1, 1, 1, 1], overflowing: vec![], ..Default::default() }; enc.truncate(4, 2, TruncationDirection::Right); assert_eq!( enc, Encoding { ids: vec![1, 2, 3, 4], type_ids: vec![0, 0, 0, 0], tokens: vec![ String::from("42"), String::from("is"), String::from("the"), String::from("answer"), ], words: vec![Some(0), Some(1), Some(2), Some(3)], offsets: vec![(0, 2), (2, 4), (4, 7), (7, 13)], special_tokens_mask: vec![0, 0, 0, 0], attention_mask: vec![1, 1, 1, 1], overflowing: vec![Encoding { ids: vec![3, 4, 5], type_ids: vec![0, 0, 0], tokens: vec![ String::from("the"), String::from("answer"), String::from("!"), ], words: vec![Some(2), Some(3), Some(4)], offsets: vec![(4, 7), (7, 13), (13, 14)], special_tokens_mask: vec![0, 0, 0], attention_mask: vec![1, 1, 1], overflowing: vec![], ..Default::default() }], ..Default::default() } ); } #[test] fn truncate_left() { let mut a = Encoding { ids: vec![1, 2, 3], type_ids: vec![0, 0, 0], tokens: vec![ String::from("Hello"), String::from("World"), String::from("!"), ], words: vec![Some(0), Some(1), Some(2)], offsets: vec![(0, 5), (6, 11), (11, 12)], special_tokens_mask: vec![0, 0, 0], attention_mask: vec![1, 1, 1], ..Default::default() }; a.truncate(2, 0, TruncationDirection::Left); assert_eq!( a, Encoding { ids: vec![2, 3], type_ids: vec![0, 0], tokens: vec![String::from("World"), String::from("!")], words: vec![Some(1), Some(2)], offsets: vec![(6, 11), (11, 12)], special_tokens_mask: vec![0, 0], attention_mask: vec![1, 1], overflowing: vec![Encoding { ids: vec![1], type_ids: vec![0], tokens: vec![String::from("Hello")], words: vec![Some(0)], offsets: vec![(0, 5)], special_tokens_mask: vec![0], attention_mask: vec![1], ..Default::default() }], ..Default::default() } ); } #[test] fn mappings() { let encoding = Encoding { ids: vec![0; 11], // Needed for Encoding::len tokens: vec![ // First sequence: "He".into(), "llo".into(), "won".into(), "der".into(), "ful".into(), "friend".into(), "!".into(), // Second sequence: "How".into(), "are".into(), "you".into(), "?".into(), ], offsets: vec![ // First sequence: (0, 2), (2, 5), (7, 10), (10, 13), (13, 16), (17, 23), (23, 24), // Second sequence: (0, 3), (4, 7), (8, 11), (11, 12), ], words: vec![ // First sequence: Some(0), Some(0), Some(1), Some(1), Some(1), Some(2), Some(3), // Second sequence: Some(0), Some(1), Some(2), Some(3), ], sequence_ranges: HashMap::from_iter(vec![(0, 0..7), (1, 7..11)]), ..Default::default() }; assert_eq!(encoding.word_to_tokens(0, 0), Some((0, 2))); assert_eq!(encoding.word_to_tokens(1, 0), Some((2, 5))); assert_eq!(encoding.word_to_tokens(2, 0), Some((5, 6))); assert_eq!(encoding.word_to_tokens(3, 0), Some((6, 7))); assert_eq!(encoding.word_to_tokens(0, 1), Some((7, 8))); assert_eq!(encoding.word_to_tokens(1, 1), Some((8, 9))); assert_eq!(encoding.word_to_tokens(2, 1), Some((9, 10))); assert_eq!(encoding.word_to_tokens(3, 1), Some((10, 11))); assert_eq!(encoding.word_to_chars(0, 0), Some((0, 5))); assert_eq!(encoding.word_to_chars(1, 0), Some((7, 16))); assert_eq!(encoding.word_to_chars(0, 1), Some((0, 3))); assert_eq!(encoding.word_to_chars(1, 1), Some((4, 7))); assert_eq!(encoding.token_to_chars(0), Some((0, (0, 2)))); assert_eq!(encoding.token_to_chars(1), Some((0, (2, 5)))); 
assert_eq!(encoding.token_to_chars(7), Some((1, (0, 3)))); assert_eq!(encoding.token_to_chars(9), Some((1, (8, 11)))); assert_eq!(encoding.token_to_word(1), Some((0, 0))); assert_eq!(encoding.token_to_word(2), Some((0, 1))); assert_eq!(encoding.token_to_word(7), Some((1, 0))); assert_eq!(encoding.token_to_word(9), Some((1, 2))); assert_eq!(encoding.token_to_word(11), None); assert_eq!(encoding.char_to_token(3, 0), Some(1)); assert_eq!(encoding.char_to_token(8, 0), Some(2)); assert_eq!(encoding.char_to_token(16, 0), None); assert_eq!(encoding.char_to_token(23, 0), Some(6)); assert_eq!(encoding.char_to_token(2, 1), Some(7)); assert_eq!(encoding.char_to_token(9, 1), Some(9)); assert_eq!(encoding.char_to_word(3, 0), Some(0)); assert_eq!(encoding.char_to_word(8, 0), Some(1)); assert_eq!(encoding.char_to_word(16, 0), None); assert_eq!(encoding.char_to_word(23, 0), Some(3)); assert_eq!(encoding.char_to_word(2, 1), Some(0)); assert_eq!(encoding.char_to_word(9, 1), Some(2)); } #[test] fn padding() { let mut a = Encoding { ids: vec![1], type_ids: vec![0], tokens: vec![String::from("Hello ")], words: vec![Some(0)], offsets: vec![(0, 6)], special_tokens_mask: vec![0], attention_mask: vec![1], sequence_ranges: HashMap::from([(0, 0..1)]), ..Default::default() }; let target_length = 2; let pad_id = 99; let pad_type_id = 0; let pad_token = "[PAD]"; a.pad( target_length, pad_id, pad_type_id, pad_token, PaddingDirection::Left, ); assert_eq!(a.sequence_ranges, HashMap::from([(0, 1..2)])); } }
tokenizers/tokenizers/src/tokenizer/encoding.rs/0
{ "file_path": "tokenizers/tokenizers/src/tokenizer/encoding.rs", "repo_id": "tokenizers", "token_count": 17197 }
210
mod common; use common::*; use tokenizers::tokenizer::AddedToken; #[test] fn add_tokens() { let mut tokenizer = get_empty(); assert_eq!( tokenizer.add_special_tokens(&[ AddedToken::from("<cls>", true), AddedToken::from("<sep>", true) ]), 2 ); assert_eq!(tokenizer.token_to_id("<cls>"), Some(0)); assert_eq!(tokenizer.token_to_id("<sep>"), Some(1)); assert_eq!( tokenizer.add_tokens(&[ AddedToken::from("hello", false), AddedToken::from("world", false) ]), 2 ); assert_eq!(tokenizer.token_to_id("hello"), Some(2)); assert_eq!(tokenizer.token_to_id("world"), Some(3)); } #[test] fn lstrip_tokens() { let mut tokenizer = get_byte_level(true, false); tokenizer.add_special_tokens(&[AddedToken::from("<mask>", true).lstrip(true)]); let input = "I saw a <mask> 😺"; let output = tokenizer.encode(input, false).unwrap(); assert_eq!( output.get_tokens(), &["ĠI", "Ġsaw", "Ġa", " <mask>", "ĠðŁĺ", "º"] ); assert_eq!( output.get_offsets(), &[(0, 1), (1, 5), (5, 7), (7, 14), (14, 19), (15, 19)] ); } #[test] fn rstrip_tokens() { let mut tokenizer = get_byte_level(false, false); tokenizer.add_special_tokens(&[AddedToken::from("<mask>", true).rstrip(true)]); let input = "I saw a <mask> 😺"; let output = tokenizer.encode(input, false).unwrap(); assert_eq!( output.get_tokens(), &["I", "Ġsaw", "Ġa", "Ġ", "<mask> ", "ðŁĺ", "º"] ); // When `add_prefix_space = true` rstrip cannot work as a prefix space is added // to the next token let mut tokenizer = get_byte_level(true, false); tokenizer.add_special_tokens(&[AddedToken::from("<mask>", true).rstrip(true)]); let input = "I saw a <mask> 😺"; let output = tokenizer.encode(input, false).unwrap(); assert_eq!( output.get_tokens(), &["ĠI", "Ġsaw", "Ġa", "Ġ", "<mask> ", "ĠðŁĺ", "º"] ); } #[test] fn single_word_tokens() { // If `single_word = true` it shouldn't split `dancing` let mut tokenizer = get_byte_level(false, false); tokenizer.add_special_tokens(&[AddedToken::from("ing", true).single_word(true)]); let input = "I like dancing"; let output = tokenizer.encode(input, false).unwrap(); assert_eq!(output.get_tokens(), &["I", "Ġlike", "Ġdancing"]); // If `single_word = false` it should split `dancing` let mut tokenizer = get_byte_level(false, false); tokenizer.add_special_tokens(&[AddedToken::from("ing", true).single_word(false)]); let input = "I like dancing"; let output = tokenizer.encode(input, false).unwrap(); assert_eq!(output.get_tokens(), &["I", "Ġlike", "Ġd", "anc", "ing"]); } #[test] fn overlapping_tokens() { let mut tokenizer = get_byte_level(false, false); tokenizer.add_special_tokens(&[AddedToken::from("danc", true)]); tokenizer.add_special_tokens(&[AddedToken::from("nci", true)]); tokenizer.add_special_tokens(&[AddedToken::from("ing", true)]); let input = "I like dancing"; let output = tokenizer.encode(input, false).unwrap(); assert_eq!(output.get_tokens(), &["I", "Ġlike", "Ġ", "danc", "ing"]); let mut tokenizer = get_byte_level(false, false); tokenizer.add_special_tokens(&[AddedToken::from("nci", true)]); tokenizer.add_special_tokens(&[AddedToken::from("danc", true)]); tokenizer.add_special_tokens(&[AddedToken::from("ing", true)]); tokenizer.add_special_tokens(&[AddedToken::from("ike", true)]); let output = tokenizer.encode(input, false).unwrap(); // Breaking change but following `transformers` breaking change. // This behavior is deemed not used in practice: // https://github.com/huggingface/transformers/pull/13220 // Order does NOT matter. 
(We could make it work again but the trie // would need to keep insertion order too) // // assert_eq!(output.get_tokens(), &["I", "Ġlike", "Ġda", "nci", "ng"]); assert_eq!(output.get_tokens(), &["I", "Ġl", "ike", "Ġ", "danc", "ing"]); }
tokenizers/tokenizers/tests/added_tokens.rs/0
{ "file_path": "tokenizers/tokenizers/tests/added_tokens.rs", "repo_id": "tokenizers", "token_count": 1770 }
211
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # How To Request Support This is an Open Source Project so please be mindful that like in any other project of this kind there is no obligation to answer all requests for help. However, we want to encourage you to ask for help whenever you think it's needed! We are happy about every question we get because it allows us to better understand your needs, possible misunderstandings, and most importantly a way for you to help us make this library better. That being said, this document's main purpose is to provide guidelines at how you can formulate your requests to increase your chances to be understood and to get support. There are two main venues to receive support: [the forums](https://discuss.huggingface.co/) and [the GitHub issues](https://github.com/huggingface/transformers/issues). ## The Forums [The user forums](https://discuss.huggingface.co/) are supported by the wide community of the library users and backed up by developers when needed. If you have a difficulty with deploying this library or some questions, or you'd like to discuss a new feature, please first consider discussing those things at the forums. Only when you feel your subject matter has been crystalized and you still need support from the library developers do proceed to file an [issue](https://github.com/huggingface/transformers/issues). In particular all "Please explain" questions or objectively very user-specific feature requests belong to the forums. Here are some example of such questions: * "I would like to use a BertModel within a RL-Agent for a customer support service. How can I use a BertForMaskedLM in my ChatBotModel?" * "Could you please explain why T5 has no positional embedding matrix under T5Model?" * "How should I set my generation parameters for translation?" * "How to train T5 on De->En translation?" ## The GitHub Issues Everything which hints at a bug should be opened as an [issue](https://github.com/huggingface/transformers/issues). You are not required to read the following guidelines before opening an issue. However, if you notice that your issue doesn't get any replies, chances are that the developers have one or several difficulties with its quality. In this case, reading the following points and adjusting your issue accordingly could help. 1. Before posting an issue, first search for already posted issues, since chances are someone has already asked a similar question before you. If you use Google your search query should be: ``` "huggingface" "transformers" your query ``` The first two quoted words tell Google to limit the search to the context of the Huggingface Transformers. The remainder is your query - most commonly this would be the error message the software fails with. We will go deeper into details shortly. The results of such a query will typically match GitHub issues, Hugging Face forums, StackExchange, and blogs. 
If you find relevant hints, you may choose to continue the discussion there if you have follow up questions. If what you found is similar but doesn't quite answer your problem, please, post a new issue and do include links to similar issues or forum discussions you may have found. Let's look at some examples: The error message, often referred to as an assertion, tells us what went wrong. Here is an example of an assertion: ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/transformers/src/transformers/__init__.py", line 34, in <module> from . import dependency_versions_check File "/transformers/src/transformers/dependency_versions_check.py", line 34, in <module> from .utils import is_tokenizers_available File "/transformers/src/transformers/utils/import_utils.py", line 40, in <module> from tqdm.auto import tqdm ModuleNotFoundError: No module named 'tqdm.auto' ``` and it typically includes a traceback, so that we can see the full stack of calls the program made before it fails. This gives us the context to know why the program failed. Going back to the above example. If you received this error search, look at the very last line of the error which is: ```python ModuleNotFoundError: No module named 'tqdm.auto' ``` And now we can use it to do the searching on your favorite search engine: 1. first for `"huggingface" "transformers" "ModuleNotFoundError: No module named 'tqdm.auto'"` 2. if you don't find relevant results, then search for just `"ModuleNotFoundError: No module named 'tqdm.auto'"` 3. and finally if nothing still comes up, then remove the outside quotes: `ModuleNotFoundError: No module named 'tqdm.auto'` If the error includes any messages that include bits unique to your filesystem, always remove those in the search query since other users will not have the same filesystem as yours. For example: ```bash python -c 'open("/tmp/wrong_path.txt", "r")' Traceback (most recent call last): File "<string>", line 1, in <module> FileNotFoundError: [Errno 2] No such file or directory: '/tmp/wrong_path.txt' ``` Here you'd search for just: `"FileNotFoundError: [Errno 2] No such file or directory"` If the local information that you removed were inside the error message and you removed them you may need to remove double quotes since your query is no longer exact. So if the error message was something like: ```bash ValueError: '/tmp/wrong_path.txt' cannot be found ``` then you'd search for `"ValueError" "cannot be found"` As you search you will notice that when you don't use quotes often the search engines will return a variety of unrelated hits, which may or may not be what you want. Experiment with different ways and find which approach gives the most satisfactory results. 2. Keep the issue short, providing the information that you think will aid the developers to understand your situation. Put yourself in the shoes of the person who has never seen your code or knows anything about your custom setup. This mental exercise will help to develop an intuition to what/what not to share" 3. If there is a software failure, always provide the full traceback, for example: ```python $ python -c 'import transformers' Traceback (most recent call last): File "<string>", line 1, in <module> File "/transformers/src/transformers/__init__.py", line 34, in <module> from . 
import dependency_versions_check File "/transformers/src/transformers/dependency_versions_check.py", line 34, in <module> from .utils import is_tokenizers_available File "/transformers/src/transformers/utils/import_utils.py", line 40, in <module> from tqdm.auto import tqdm ModuleNotFoundError: No module named 'tqdm.auto' ``` As compared to providing just the last line of the error message, e.g.: ```python ModuleNotFoundError: No module named 'tqdm.auto' ``` which is not sufficient. If your application is running on more than one GPU (e.g. under `DistributedDataParallel`) and typically getting every log and traceback printed multiple times, please make sure that you paste only one copy of it. At times the traceback from parallel processes may get interleaved - so either disentangle these or change the loggers to log only for `local_rank==0` so that only one process logs things. 4. When quoting a traceback, command line instructions and any type of code always enclose it in triple backticks inside the editor window, that is: ```` ``` git clone https://github.com/huggingface/transformers cd transformers pip install . ``` ```` If it's a command line with a long argument list, please consider breaking it down using backslashes and new lines. Here is an example of a good command line quote: ```bash cd examples/seq2seq torchrun --nproc_per_node=2 ./finetune_trainer.py \ --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --data_dir wmt_en_ro \ --output_dir output_dir --overwrite_output_dir \ --do_train --n_train 500 --num_train_epochs 1 \ --per_device_train_batch_size 1 --freeze_embeds \ --src_lang en_XX --tgt_lang ro_RO --task translation \ --fp16 ``` If you don't break it up, one has to scroll horizontally which often makes it quite difficult to quickly see what's happening. The backslashes allow us to copy the command directly into the console to run it, without needing to edit it. 5. Include only the important information that you think will help the developer to quickly identify the problem. For example applications often create huge amounts of logs. Ask yourself whether providing all or parts of the log is useful. Pasting a 100-1000 lines of log into the issue is an immediate turn off, since it will take a lot of time to figure out where the pertinent parts of the log are. Attaching a full log can be helpful if it's done as an attachment, if it's enclosed in the following html code in the comment editor window: ``` <details> <summary>Full log</summary> <pre> many lines go here </pre> </details> ``` which would result in the following entry, which can be opened if desired, but otherwise takes little space. <details> <summary>Full log</summary> <pre> many lines go here </pre> </details> You could also provide a link to a pastebin service, but this is less beneficial since those links tend to expire quickly and future readers of your issue might not be able to access that log file anymore and may lack some context. 6. If this is an issue in your code, do try to reduce that code to a minimal example that still demonstrates the problem. Please ask at the forums if you have a hard time figuring how to do that. Please realize that we don't have the luxury of having time to try and understand all of your custom code. If you really tried to make a short reproducible code but couldn't figure it out, it might be that having a traceback will give the developer enough information to know what's going on. But if it is not enough and we can't reproduce the problem, we can't really solve it. 
Do not despair if you can't figure it out from the beginning, just share what you can and perhaps someone else will be able to help you at the forums. If your setup involves any custom datasets, the best way to help us reproduce the problem is to create a [Google Colab notebook](https://colab.research.google.com/) that demonstrates the issue and once you verify that the issue still exists, include a link to that notebook in the Issue. Just make sure that you don't copy and paste the location bar url of the open notebook - as this is private and we won't be able to open it. Instead, you need to click on `Share` in the right upper corner of the notebook, select `Get Link` and then copy and paste the public link it will give to you. 7. If you forked off some of this project's code or example applications, please, do not ask us to go into your code repository and figure out what you may have done. The code is already very complex and unless there is an easy way to do a diff and it's a small diff, it won't be possible to find someone with time on their hands to make a lengthy investigation. Albeit, you might find someone at the forums who will be generous to do this for you. 8. Before reporting an issue, first, always try to update your environment to the latest official version of this library. We have no resources to go and debug older revisions, which could easily have bugs that have been fixed in the latest released version. We understand that this is not always possible, especially when APIs change, in which case file an issue against the highest library version your environment can support. Of course, if you upgrade the library, always retest that the problem is still there. 9. Please do not ask us to reproduce an issue with your custom data, since we don't have it. So, either you should use some existing dataset supported by HF datasets or you need to supply a code that generates a small sample on the fly, or some another quick and simple way to get it. Please do not send us any non-public domain data that may require a license or a permission to be used. 10. Do not tag multiple developers on the issue unless you know this is expected, either because you asked them and they gave you an explicit permission to tag them or the issue template instructs you to do so. The "who to tag for what domain" part of the issue template is there to help users direct their questions to the right developers who are designated maintainers of project's specific domains. They can then decide at their own discretion to tag other developers if they feel it'd help move the issue forward. We currently don't have a triage service and we trust your capacity to identify the right domain and thus the persons to tag in your issue. If you are not sure, please use the forums to ask for guidance. When in doubt, err on the side of not tagging a given person. If you tag multiple people out of context or permission don't be surprised if you get no response at all. Please remember that every time you tag someone, they get a notification and you're taking their time without their permission. Please be sensitive to that. If you got helped by one of the developers in the past please don't tag them in future issues, unless they are listed in the issue template for the domain you are asking about or that developer gave you an explicit permission to tag them in future issues. 
If you see a certain developer doing multiple and/or recent commits into a specific area of the project that you feel is relevant to your issue, it is not a good reason to tag them. Various developers may be fixing things that prevent them from moving forward, but often their work is focused on a totally different domain. And while they may or may not know how to help you with the problem at hand, it would benefit the whole community much more if they focus on the domain of their unique expertise. 11. Use the Edit button. Take your time, and re-read and improve the wording and formatting to make your posts and comments as easy to understand as possible. Avoid posting multiple comments in a row, as each comment generates a notification for the developers tagged in that issue. If you happened to post multiple comments in a row, and nobody followed up yet - consider merging those into one or a few comments while editing the combined content to be coherent. If you choose to edit your older comments after others posted follow up comments you need to be aware that your modifications might not be noticed, so if it's not a typo fixing, try to write a new comment flagging that something has been changed in the previous comments. For example, the very first comment is the most important one. If while the thread unfolds you realize that things aren't as they seemed to you originally you may want to edit the first post to reflect the up-to-date understanding of the issue at hand so that it helps those who read your issue in the future quickly understand what's going on and not need to sift through dozens of comments. It also helps to indicate that the post was edited. So, those reading the thread later can understand why there might be certain discontinuity in the information flow. Use bullets and items if you have lists of items and the outcome improves overall readability. Use backticks to refer to class and function names, e.g. `BartModel` and `generate` as these stand out and improve the speed of a reader's comprehension. Try not use italics and bold text too much as these often make the text more difficult to read. 12. If you are cross-referencing a specific comment in a given thread or another issue, always link to that specific comment, rather than using the issue link. If you do the latter it could be quite impossible to find which specific comment you're referring to. To get the link to the specific comment do not copy the url from the location bar of your browser, but instead, click the `...` icon in the upper right corner of the comment and then select "Copy Link". For example the first link is a link to an issue, and the second to a specific comment in the same issue: 1. https://github.com/huggingface/transformers/issues/9257 2. https://github.com/huggingface/transformers/issues/9257#issuecomment-749945162 13. If you are replying to a last comment, it's totally fine to make your reply with just your comment in it. The readers can follow the information flow here. But if you're replying to a comment that happened some comments back it's always a good practice to quote just the relevant lines you're replying it. The `>` is used for quoting, or you can always use the menu to do so. For example your editor box will look like: ``` > How big is your gpu cluster? Our cluster is made of 256 gpus. ``` If you are addressing multiple comments, quote the relevant parts of each before your answer. Some people use the same comment to do multiple replies, others separate them into separate comments. 
Either way works. The latter approach helps for linking to a specific comment. In general the best way to figure out what works the best is learn from issues posted by other people - see which issues get great responses and which get little to no response - observe what the posters who received great responses did differently from those who did not. Thank you for reading this somewhat lengthy document. We would like to conclude that these are not absolute rules, but a friendly advice that will help maximize the chances for us to understand what you are trying to communicate, reproduce the problem then resolve it to your satisfaction and the benefit of the whole community. If after reading this document there are remaining questions on how and why or there is a need for further elucidation, please, don't hesitate to ask your question in [this thread](https://discuss.huggingface.co/t/how-to-request-support/3128).
transformers/ISSUES.md/0
{ "file_path": "transformers/ISSUES.md", "repo_id": "transformers", "token_count": 4684 }
212
# Security Policy ## Reporting a Vulnerability 🤗 We have our bug bounty program set up with HackerOne. Please feel free to submit vulnerability reports to our private program at https://hackerone.com/hugging_face. Note that you'll need to be invited to our program, so send us a quick email at security@huggingface.co if you've found a vulnerability.
transformers/SECURITY.md/0
{ "file_path": "transformers/SECURITY.md", "repo_id": "transformers", "token_count": 89 }
213
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
LABEL maintainer="Hugging Face"

ARG DEBIAN_FRONTEND=noninteractive

# Use login shell to read variables from `~/.profile` (to pass dynamically created variables between RUN commands)
SHELL ["sh", "-lc"]

# The following `ARG`s are mainly used to specify the versions explicitly & directly in this docker file, and not meant
# to be used as arguments for docker build (so far).
ARG PYTORCH='2.2.0'
# Example: `cu102`, `cu113`, etc.
ARG CUDA='cu118'

RUN apt update
RUN apt install -y git libsndfile1-dev tesseract-ocr espeak-ng python python3-pip ffmpeg
RUN python3 -m pip install --no-cache-dir --upgrade pip

ARG REF=main
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF

RUN [ ${#PYTORCH} -gt 0 ] && VERSION='torch=='$PYTORCH'.*' || VERSION='torch'; echo "export VERSION='$VERSION'" >> ~/.profile
RUN echo torch=$VERSION
# `torchvision` and `torchaudio` should be installed along with `torch`, especially for nightly build.
# Currently, let's just use their latest releases (when `torch` is installed with a release version)
RUN python3 -m pip install --no-cache-dir -U $VERSION torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/$CUDA

RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-torch]

RUN python3 -m pip install --no-cache-dir git+https://github.com/huggingface/accelerate@main#egg=accelerate

# Add bitsandbytes for mixed int8 testing
RUN python3 -m pip install --no-cache-dir bitsandbytes

# Add auto-gptq for gptq quantization testing
RUN python3 -m pip install --no-cache-dir auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/

# Add optimum for gptq quantization testing
RUN python3 -m pip install --no-cache-dir git+https://github.com/huggingface/optimum@main#egg=optimum

# Add aqlm for quantization testing
RUN python3 -m pip install --no-cache-dir aqlm[gpu]==1.0.2

# Add autoawq for quantization testing
RUN python3 -m pip install --no-cache-dir https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp38-cp38-linux_x86_64.whl

# Add quanto for quantization testing
RUN python3 -m pip install --no-cache-dir quanto

# When installing in editable mode, `transformers` is not recognized as a package.
# This line must be added in order for python to be aware of transformers.
RUN cd transformers && python3 setup.py develop
transformers/docker/transformers-quantization-latest-gpu/Dockerfile/0
{ "file_path": "transformers/docker/transformers-quantization-latest-gpu/Dockerfile", "repo_id": "transformers", "token_count": 822 }
214
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Share a model

The last two tutorials showed how to fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to share your model with the community to help others save time and resources.

In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models):

- Programmatically push your files to the Hub.
- Drag-and-drop your files to the Hub with the web interface.

<iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

<Tip>

To share a model with the public, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one.

</Tip>

## Repository features

Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences.

The Model Hub's built-in versioning is based on Git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat a model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag, or branch.

As a result, you can load a specific model version with the `revision` parameter:

```py
>>> model = AutoModel.from_pretrained(
...     "julien-c/EsperBERTo-small", revision="v2.0.1"  # tag name, or branch name, or commit hash
... )
```

Files are also easily edited in a repository, and you can view the commit history as well as the differences:

![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png)

## Setup

Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your credentials in your Hugging Face cache folder (`~/.cache/` by default):

```bash
huggingface-cli login
```

If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to interact programmatically with the Hub.

```bash
pip install huggingface_hub
```

Then use `notebook_login` to sign in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to log in with:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Convert a model for all frameworks

To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users are still able to load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly.

Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework.

<frameworkcontent>
<pt>
Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch:

```py
>>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True)
>>> pt_model.save_pretrained("path/to/awesome-name-you-picked")
```
</pt>
<tf>
Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow:

```py
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True)
```

Then you can save your new TensorFlow model with its new checkpoint:

```py
>>> tf_model.save_pretrained("path/to/awesome-name-you-picked")
```
</tf>
<jax>
If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax:

```py
>>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained(
...     "path/to/awesome-name-you-picked", from_pt=True
... )
```
</jax>
</frameworkcontent>

## Push a model during training

<frameworkcontent>
<pt>
<Youtube id="Z1-XMy-GNLQ"/>

Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]:

```py
>>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True)
```

Pass your training arguments as usual to [`Trainer`]:

```py
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
... )
```

After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results and framework versions to your model card!

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add:

- An output directory for your model.
- A tokenizer.
- The `hub_model_id`, which is your Hub username and model name.

```py
>>> from transformers import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model"
... )
```

Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and 🤗 Transformers will push the trained model to the Hub:

```py
>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback)
```
</tf>
</frameworkcontent>

## Use the `push_to_hub` function

You can also call `push_to_hub` directly on your model to upload it to the Hub.

Specify your model name in `push_to_hub`:

```py
>>> pt_model.push_to_hub("my-awesome-model")
```

This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function:

```py
>>> from transformers import AutoModel

>>> model = AutoModel.from_pretrained("your_username/my-awesome-model")
```

If you belong to an organization and want to push your model under the organization name instead, just add it to the `repo_id`:

```py
>>> pt_model.push_to_hub("my-awesome-org/my-awesome-model")
```

The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository:

```py
>>> tokenizer.push_to_hub("my-awesome-model")
```

Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model:

```py
>>> tf_model.push_to_hub("my-awesome-model")
```

Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository.

For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream).

## Upload with the web interface

Users who prefer a no-code approach can upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository:

![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png)

From here, add some information about your model:

- Select the **owner** of the repository. This can be yourself or any of the organizations you belong to.
- Pick a name for your model, which will also be the repository name.
- Choose whether your model is public or private.
- Specify the license usage for your model.

Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message.

![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png)

## Add a model card

To make sure users understand your model's capabilities, limitations, potential biases and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by:

* Manually creating and uploading a `README.md` file.
* Clicking on the **Edit model card** button in your model repository.

Take a look at the DistilBert [model card](https://huggingface.co/distilbert/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can control in the `README.md` file, such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards).
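If you prefer to script this step as well, the `huggingface_hub` library exposes utilities for writing model cards programmatically. The snippet below is a minimal sketch - the repository id is a placeholder and you should double-check the current `ModelCard` API in the `huggingface_hub` documentation:

```python
from huggingface_hub import ModelCard

# Minimal sketch: write a README.md-style model card and push it to an existing repository.
# "your-username/my-awesome-model" is a placeholder repository id.
content = """---
license: apache-2.0
---

# My awesome model

Describe the intended use, limitations, potential biases and evaluation results of the model here.
"""

card = ModelCard(content)
card.push_to_hub("your-username/my-awesome-model")
```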
transformers/docs/source/de/model_sharing.md/0
{ "file_path": "transformers/docs/source/de/model_sharing.md", "repo_id": "transformers", "token_count": 4287 }
215
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # How to convert a 🤗 Transformers model to TensorFlow? Having multiple frameworks available to use with 🤗 Transformers gives you flexibility to play their strengths when designing your application, but it implies that compatibility must be added on a per-model basis. The good news is that adding TensorFlow compatibility to an existing model is simpler than [adding a new model from scratch](add_new_model)! Whether you wish to have a deeper understanding of large TensorFlow models, make a major open-source contribution, or enable TensorFlow for your model of choice, this guide is for you. This guide empowers you, a member of our community, to contribute TensorFlow model weights and/or architectures to be used in 🤗 Transformers, with minimal supervision from the Hugging Face team. Writing a new model is no small feat, but hopefully this guide will make it less of a rollercoaster 🎢 and more of a walk in the park 🚶. Harnessing our collective experiences is absolutely critical to make this process increasingly easier, and thus we highly encourage that you suggest improvements to this guide! Before you dive deeper, it is recommended that you check the following resources if you're new to 🤗 Transformers: - [General overview of 🤗 Transformers](add_new_model#general-overview-of-transformers) - [Hugging Face's TensorFlow Philosophy](https://huggingface.co/blog/tensorflow-philosophy) In the remainder of this guide, you will learn what's needed to add a new TensorFlow model architecture, the procedure to convert PyTorch into TensorFlow model weights, and how to efficiently debug mismatches across ML frameworks. Let's get started! <Tip> Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture? &nbsp; Check the `model_type` field of the `config.json` of your model of choice ([example](https://huggingface.co/google-bert/bert-base-uncased/blob/main/config.json#L14)). If the corresponding model folder in 🤗 Transformers has a file whose name starts with "modeling_tf", it means that it has a corresponding TensorFlow architecture ([example](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert)). </Tip> ## Step-by-step guide to add TensorFlow model architecture code There are many ways to design a large model architecture, and multiple ways of implementing said design. However, you might recall from our [general overview of 🤗 Transformers](add_new_model#general-overview-of-transformers) that we are an opinionated bunch - the ease of use of 🤗 Transformers relies on consistent design choices. From experience, we can tell you a few important things about adding TensorFlow models: - Don't reinvent the wheel! 
More often than not, there are at least two reference implementations you should check: the PyTorch equivalent of the model you are implementing and other TensorFlow models for the same class of problems. - Great model implementations survive the test of time. This doesn't happen because the code is pretty, but rather because the code is clear, easy to debug and build upon. If you make the life of the maintainers easy with your TensorFlow implementation, by replicating the same patterns as in other TensorFlow models and minimizing the mismatch to the PyTorch implementation, you ensure your contribution will be long lived. - Ask for help when you're stuck! The 🤗 Transformers team is here to help, and we've probably found solutions to the same problems you're facing. Here's an overview of the steps needed to add a TensorFlow model architecture: 1. Select the model you wish to convert 2. Prepare transformers dev environment 3. (Optional) Understand theoretical aspects and the existing implementation 4. Implement the model architecture 5. Implement model tests 6. Submit the pull request 7. (Optional) Build demos and share with the world ### 1.-3. Prepare your model contribution **1. Select the model you wish to convert** Let's start off with the basics: the first thing you need to know is the architecture you want to convert. If you don't have your eyes set on a specific architecture, asking the 🤗 Transformers team for suggestions is a great way to maximize your impact - we will guide you towards the most prominent architectures that are missing on the TensorFlow side. If the specific model you want to use with TensorFlow already has a TensorFlow architecture implementation in 🤗 Transformers but is lacking weights, feel free to jump straight into the [weight conversion section](#adding-tensorflow-weights-to--hub) of this page. For simplicity, the remainder of this guide assumes you've decided to contribute with the TensorFlow version of *BrandNewBert* (the same example as in the [guide](add_new_model) to add a new model from scratch). <Tip> Before starting the work on a TensorFlow model architecture, double-check that there is no ongoing effort to do so. You can search for `BrandNewBert` on the [pull request GitHub page](https://github.com/huggingface/transformers/pulls?q=is%3Apr) to confirm that there is no TensorFlow-related pull request. </Tip> **2. Prepare transformers dev environment** Having selected the model architecture, open a draft PR to signal your intention to work on it. Follow the instructions below to set up your environment and open a draft PR. 1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account. 2. Clone your `transformers` fork to your local disk, and add the base repository as a remote: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. Set up a development environment, for instance by running the following command: ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case make sure to install TensorFlow then do: ```bash pip install -e ".[quality]" ``` **Note:** You don't need to have CUDA installed. 
Making the new model work on CPU is sufficient. 4. Create a branch with a descriptive name from your main branch ```bash git checkout -b add_tf_brand_new_bert ``` 5. Fetch and rebase to current main ```bash git fetch upstream git rebase upstream/main ``` 6. Add an empty `.py` file in `transformers/src/models/brandnewbert/` named `modeling_tf_brandnewbert.py`. This will be your TensorFlow model file. 7. Push the changes to your account using: ```bash git add . git commit -m "initial commit" git push -u origin add_tf_brand_new_bert ``` 8. Once you are satisfied, go to the webpage of your fork on GitHub. Click on “Pull request”. Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for future changes. 9. Change the PR into a draft by clicking on “Convert to draft” on the right of the GitHub pull request web page. Now you have set up a development environment to port *BrandNewBert* to TensorFlow in 🤗 Transformers. **3. (Optional) Understand theoretical aspects and the existing implementation** You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in 🤗 Transformers using TensorFlow. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely the existing model documentation page (e.g. [model docs for BERT](model_doc/bert)). After you've grasped the basics of the models you are about to implement, it's important to understand the existing implementation. This is a great chance to confirm that a working implementation matches your expectations for the model, as well as to foresee technical challenges on the TensorFlow side. It's perfectly natural that you feel overwhelmed with the amount of information that you've just absorbed. It is definitely not a requirement that you understand all facets of the model at this stage. Nevertheless, we highly encourage you to clear any pressing questions in our [forum](https://discuss.huggingface.co/). ### 4. Model implementation Now it's time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of `modeling_brand_new_bert.py` inside `src/transformers/models/brand_new_bert/` into `modeling_tf_brand_new_bert.py`. The goal of this section is to modify the file and update the import structure of 🤗 Transformers such that you can import `TFBrandNewBert` and `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` successfully loads a working TensorFlow *BrandNewBert* model. Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of tips to make the process as smooth as possible: - Prepend `TF` to the name of all classes (e.g. `BrandNewBert` becomes `TFBrandNewBert`). - Most PyTorch operations have a direct TensorFlow replacement. For example, `torch.nn.Linear` corresponds to `tf.keras.layers.Dense`, `torch.nn.Dropout` corresponds to `tf.keras.layers.Dropout`, etc. If you're not sure about a specific operation, you can use the [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf) or the [PyTorch documentation](https://pytorch.org/docs/stable/). 
- Look for patterns in the 🤗 Transformers codebase. If you come across a certain operation that doesn't have a direct replacement, the odds are that someone else already had the same problem. - By default, keep the same variable names and structure as in PyTorch. This will make it easier to debug, track issues, and add fixes down the line. - Some layers have different default values in each framework. A notable example is the batch normalization layer's epsilon (`1e-5` in [PyTorch](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d) and `1e-3` in [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization)). Double-check the documentation! - PyTorch's `nn.Parameter` variables typically need to be initialized within TF Layer's `build()`. See the following example: [PyTorch](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_vit_mae.py#L212) / [TensorFlow](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_tf_vit_mae.py#L220) - If the PyTorch model has a `#copied from ...` on top of a function, the odds are that your TensorFlow model can also borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture. - Assigning the `name` attribute correctly in TensorFlow functions is critical to do the `from_pt=True` weight cross-loading. `name` is almost always the name of the corresponding variable in the PyTorch code. If `name` is not properly set, you will see it in the error message when loading the model weights. - The logic of the base model class, `BrandNewBertModel`, will actually reside in `TFBrandNewBertMainLayer`, a Keras layer subclass ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L719)). `TFBrandNewBertModel` will simply be a wrapper around this layer. - Keras models need to be built in order to load pretrained weights. For that reason, `TFBrandNewBertPreTrainedModel` will need to hold an example of inputs to the model, the `dummy_inputs` ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)). - If you get stuck, ask for help - we're here to help you! 🤗 In addition to the model file itself, you will also need to add the pointers to the model classes and related documentation pages. You can complete this part entirely following the patterns in other PRs ([example](https://github.com/huggingface/transformers/pull/18020/files)). 
Here's a list of the needed manual changes: - Include all public classes of *BrandNewBert* in `src/transformers/__init__.py` - Add *BrandNewBert* classes to the corresponding Auto classes in `src/transformers/models/auto/modeling_tf_auto.py` - Add the lazy loading classes related to *BrandNewBert* in `src/transformers/utils/dummy_tf_objects.py` - Update the import structures for the public classes in `src/transformers/models/brand_new_bert/__init__.py` - Add the documentation pointers to the public methods of *BrandNewBert* in `docs/source/en/model_doc/brand_new_bert.md` - Add yourself to the list of contributors to *BrandNewBert* in `docs/source/en/model_doc/brand_new_bert.md` - Finally, add a green tick ✅ to the TensorFlow column of *BrandNewBert* in `docs/source/en/index.md` When you're happy with your implementation, run the following checklist to confirm that your model architecture is ready: 1. All layers that behave differently at train time (e.g. Dropout) are called with a `training` argument, which is propagated all the way from the top-level classes 2. You have used `#copied from ...` whenever possible 3. `TFBrandNewBertMainLayer` and all classes that use it have their `call` function decorated with `@unpack_inputs` 4. `TFBrandNewBertMainLayer` is decorated with `@keras_serializable` 5. A TensorFlow model can be loaded from PyTorch weights using `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` 6. You can call the TensorFlow model using the expected input format ### 5. Add model tests Hurray, you've implemented a TensorFlow model! Now it's time to add tests to make sure that your model behaves as expected. As in the previous section, we suggest you start by copying the `test_modeling_brand_new_bert.py` file in `tests/models/brand_new_bert/` into `test_modeling_tf_brand_new_bert.py`, and continue by making the necessary TensorFlow replacements. For now, in all `.from_pretrained()` calls, you should use the `from_pt=True` flag to load the existing PyTorch weights. After you're done, it's time for the moment of truth: run the tests! 😬 ```bash NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py ``` The most likely outcome is that you'll see a bunch of errors. Don't worry, this is expected! Debugging ML models is notoriously hard, and the key ingredient to success is patience (and `breakpoint()`). In our experience, the hardest problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide. In other cases, a general test might not be directly applicable to your model, in which case we suggest an override at the model test class level. Regardless of the issue, don't hesitate to ask for help in your draft pull request if you're stuck. When all tests pass, congratulations, your model is nearly ready to be added to the 🤗 Transformers library! 🎉 ### 6.-7. Ensure everyone can use your model **6. Submit the pull request** Once you're done with the implementation and the tests, it's time to submit a pull request. Before pushing your code, run our code formatting utility, `make fixup` 🪄. This will automatically fix any formatting issues, which would cause our automatic checks to fail. It's now time to convert your draft pull request into a real pull request. To do so, click on the "Ready for review" button and add Joao (`@gante`) and Matt (`@Rocketknight1`) as reviewers. 
A model pull request will need at least 3 reviewers, but they will take care of finding appropriate additional reviewers for your model. After all reviewers are happy with the state of your PR, the final action point is to remove the `from_pt=True` flag in `.from_pretrained()` calls. Since there are no TensorFlow weights, you will have to add them! Check the section below for instructions on how to do it. Finally, when the TensorFlow weights get merged, you have at least 3 reviewer approvals, and all CI checks are green, double-check the tests locally one last time ```bash NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py ``` and we will merge your PR! Congratulations on the milestone 🎉 **7. (Optional) Build demos and share with the world** One of the hardest parts about open-source is discovery. How can the other users learn about the existence of your fabulous TensorFlow contribution? With proper communication, of course! 📣 There are two main ways to share your model with the community: - Build demos. These include Gradio demos, notebooks, and other fun ways to show off your model. We highly encourage you to add a notebook to our [community-driven demos](https://huggingface.co/docs/transformers/community). - Share stories on social media like Twitter and LinkedIn. You should be proud of your work and share your achievement with the community - your model can now be used by thousands of engineers and researchers around the world 🌍! We will be happy to retweet your posts and help you share your work with the community. ## Adding TensorFlow weights to 🤗 Hub Assuming that the TensorFlow model architecture is available in 🤗 Transformers, converting PyTorch weights into TensorFlow weights is a breeze! Here's how to do it: 1. Make sure you are logged into your Hugging Face account in your terminal. You can log in using the command `huggingface-cli login` (you can find your access tokens [here](https://huggingface.co/settings/tokens)) 2. Run `transformers-cli pt-to-tf --model-name foo/bar`, where `foo/bar` is the name of the model repository containing the PyTorch weights you want to convert 3. Tag `@joaogante` and `@Rocketknight1` in the 🤗 Hub PR the command above has just created That's it! 🎉 ## Debugging mismatches across ML frameworks 🐛 At some point, when adding a new architecture or when creating TensorFlow weights for an existing architecture, you might come across errors complaining about mismatches between PyTorch and TensorFlow. You might even decide to open the model architecture code for the two frameworks, and find that they look identical. What's going on? 🤔 First of all, let's talk about why understanding these mismatches matters. Many community members will use 🤗 Transformers models out of the box, and trust that our models behave as expected. When there is a large mismatch between the two frameworks, it implies that the model is not following the reference implementation for at least one of the frameworks. This might lead to silent failures, in which the model runs but has poor performance. This is arguably worse than a model that fails to run at all! To that end, we aim at having a framework mismatch smaller than `1e-5` at all stages of the model. As in other numerical problems, the devil is in the details. And as in any detail-oriented craft, the secret ingredient here is patience. Here is our suggested workflow for when you come across this type of issues: 1. Locate the source of mismatches. 
The model you're converting probably has near identical inner variables up to a certain point. Place `breakpoint()` statements in the two frameworks' architectures, and compare the values of the numerical variables in a top-down fashion until you find the source of the problems. 2. Now that you've pinpointed the source of the issue, get in touch with the 🤗 Transformers team. It is possible that we've seen a similar problem before and can promptly provide a solution. As a fallback, scan popular pages like StackOverflow and GitHub issues. 3. If there is no solution in sight, it means you'll have to go deeper. The good news is that you've located the issue, so you can focus on the problematic instruction, abstracting away the rest of the model! The bad news is that you'll have to venture into the source implementation of said instruction. In some cases, you might find an issue with a reference implementation - don't abstain from opening an issue in the upstream repository. In some cases, in discussion with the 🤗 Transformers team, we might find that fixing the mismatch is infeasible. When the mismatch is very small in the output layers of the model (but potentially large in the hidden states), we might decide to ignore it in favor of distributing the model. The `pt-to-tf` CLI mentioned above has a `--max-error` flag to override the error message at weight conversion time.
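To make the debugging workflow above more concrete, here is a minimal sketch of the kind of comparison script you might write. `BrandNewBert` is the placeholder architecture used throughout this guide, and the checkpoint name is hypothetical, so substitute your real classes and repository:

```python
import numpy as np
import torch

# Placeholder classes and checkpoint - replace with your actual model
from transformers import AutoTokenizer, BrandNewBertModel, TFBrandNewBertModel

checkpoint = "your-org/brand-new-bert"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
pt_model = BrandNewBertModel.from_pretrained(checkpoint)
tf_model = TFBrandNewBertModel.from_pretrained(checkpoint, from_pt=True)  # cross-load the PyTorch weights

encoded = tokenizer("Comparing frameworks, one tensor at a time.", return_tensors="np")

with torch.no_grad():
    pt_hidden = pt_model(**{k: torch.tensor(v) for k, v in encoded.items()}).last_hidden_state.numpy()
tf_hidden = tf_model(**encoded).last_hidden_state.numpy()

# We aim for a maximum absolute difference below 1e-5 across the whole tensor
max_diff = np.max(np.abs(pt_hidden - tf_hidden))
print(f"Max absolute difference: {max_diff:.2e}")
assert max_diff < 1e-5, "Framework mismatch - time to place breakpoint() calls and compare layer by layer!"
```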
transformers/docs/source/en/add_tensorflow_model.md/0
{ "file_path": "transformers/docs/source/en/add_tensorflow_model.md", "repo_id": "transformers", "token_count": 5786 }
216
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Text generation strategies Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and more. It also plays a role in a variety of mixed-modality applications that have text as an output like speech-to-text and vision-to-text. Some of the models that can generate text include GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, Whisper. Check out a few examples that use [`~transformers.generation_utils.GenerationMixin.generate`] method to produce text outputs for different tasks: * [Text summarization](./tasks/summarization#inference) * [Image captioning](./model_doc/git#transformers.GitForCausalLM.forward.example) * [Audio transcription](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example) Note that the inputs to the generate method depend on the model's modality. They are returned by the model's preprocessor class, such as AutoTokenizer or AutoProcessor. If a model's preprocessor creates more than one kind of input, pass all the inputs to generate(). You can learn more about the individual model's preprocessor in the corresponding model's documentation. The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy that the `generate()` method will use. Modifying a decoding strategy does not change the values of any trainable parameters. However, it can have a noticeable impact on the quality of the generated output. It can help reduce repetition in the text and make it more coherent. This guide describes: * default generation configuration * common decoding strategies and their main parameters * saving and sharing custom generation configurations with your fine-tuned model on 🤗 Hub ## Default text generation configuration A decoding strategy for a model is defined in its generation configuration. When using pre-trained models for inference within a [`pipeline`], the models call the `PreTrainedModel.generate()` method that applies a default generation configuration under the hood. The default configuration is also used when no custom configuration has been saved with the model. When you load a model explicitly, you can inspect the generation configuration that comes with it through `model.generation_config`: ```python >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> model.generation_config GenerationConfig { "bos_token_id": 50256, "eos_token_id": 50256, } ``` Printing out the `model.generation_config` reveals only the values that are different from the default generation configuration, and does not list any of the default values. 
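If you are curious what the library-wide defaults actually are, you can instantiate a fresh [`GenerationConfig`] and inspect its attributes. The sketch below assumes the current defaults (greedy decoding with a 20-token limit); the exact attribute values may differ slightly between versions:

```python
>>> from transformers import GenerationConfig

>>> defaults = GenerationConfig()
>>> defaults.max_length, defaults.num_beams, defaults.do_sample
(20, 1, False)
```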
The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20 tokens to avoid running into resource limitations. The default decoding strategy is greedy search, which is the simplest decoding strategy that picks a token with the highest probability as the next token. For many tasks and small output sizes this works well. However, when used to generate longer outputs, greedy search can start producing highly repetitive results. ## Customize text generation You can override any `generation_config` by passing the parameters and their values directly to the [`generate`] method: ```python >>> my_model.generate(**inputs, num_beams=4, do_sample=True) # doctest: +SKIP ``` Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some of the commonly adjusted parameters include: - `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not including the tokens in the prompt. As an alternative to using the output's length as a stopping criteria, you can choose to stop generation whenever the full generation exceeds some amount of time. To learn more, check [`StoppingCriteria`]. - `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to beam search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with a lower probability initial tokens and would've been ignored by the greedy search. - `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search multinomial sampling, Top-K sampling and Top-p sampling. All these strategies select the next token from the probability distribution over the entire vocabulary with various strategy-specific adjustments. - `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for the decoding strategies that support multiple sequence candidates, e.g. variations of beam search and sampling. Decoding strategies like greedy search and contrastive search return a single output sequence. ## Save a custom decoding strategy with your model If you would like to share your fine-tuned model with a specific generation configuration, you can: * Create a [`GenerationConfig`] class instance * Specify the decoding strategy parameters * Save your generation configuration with [`GenerationConfig.save_pretrained`], making sure to leave its `config_file_name` argument empty * Set `push_to_hub` to `True` to upload your config to the model's repo ```python >>> from transformers import AutoModelForCausalLM, GenerationConfig >>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP >>> generation_config = GenerationConfig( ... max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id ... ) >>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True) # doctest: +SKIP ``` You can also store several generation configurations in a single directory, making use of the `config_file_name` argument in [`GenerationConfig.save_pretrained`]. You can later instantiate them with [`GenerationConfig.from_pretrained`]. This is useful if you want to store several generation configurations for a single model (e.g. 
one for creative text generation with sampling, and one for summarization with beam search). You must have the right Hub permissions to add configuration files to a model. ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig >>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small") >>> translation_generation_config = GenerationConfig( ... num_beams=4, ... early_stopping=True, ... decoder_start_token_id=0, ... eos_token_id=model.config.eos_token_id, ... pad_token=model.config.pad_token_id, ... ) >>> # Tip: add `push_to_hub=True` to push to the Hub >>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json") >>> # You could then use the named generation config file to parameterize generation >>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json") >>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt") >>> outputs = model.generate(**inputs, generation_config=generation_config) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Les fichiers de configuration sont faciles à utiliser!'] ``` ## Streaming The `generate()` supports streaming, through its `streamer` input. The `streamer` input is compatible with any instance from a class that has the following methods: `put()` and `end()`. Internally, `put()` is used to push new tokens and `end()` is used to flag the end of text generation. <Tip warning={true}> The API for the streamer classes is still under development and may change in the future. </Tip> In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes ready for you to use. For example, you can use the [`TextStreamer`] class to stream the output of `generate()` into your screen, one word at a time: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> streamer = TextStreamer(tok) >>> # Despite returning the usual output, the streamer will also print the generated text to stdout. >>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ``` ## Decoding strategies Certain combinations of the `generate()` parameters, and ultimately `generation_config`, can be used to enable specific decoding strategies. If you are new to this concept, we recommend reading [this blog post that illustrates how common decoding strategies work](https://huggingface.co/blog/how-to-generate). Here, we'll show some of the parameters that control the decoding strategies and illustrate how you can use them. ### Greedy Search [`generate`] uses greedy search decoding by default so you don't have to pass any parameters to enable it. This means the parameters `num_beams` is set to 1 and `do_sample=False`. 
```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "I look forward to" >>> checkpoint = "distilbert/distilgpt2" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n'] ``` ### Contrastive search The contrastive search decoding strategy was proposed in the 2022 paper [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417). It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search works, check out [this blog post](https://huggingface.co/blog/introducing-csearch). The two main parameters that enable and control the behavior of contrastive search are `penalty_alpha` and `top_k`: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> checkpoint = "openai-community/gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Hugging Face Company is" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. We look forward to hearing from you!'] ``` ### Multinomial sampling As opposed to greedy search that always chooses a token with the highest probability as the next token, multinomial sampling (also called ancestral sampling) randomly selects the next token based on the probability distribution over the entire vocabulary given by the model. Every token with a non-zero probability has a chance of being selected, thus reducing the risk of repetition. To enable multinomial sampling set `do_sample=True` and `num_beams=1`. ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(0) # For reproducibility >>> checkpoint = "openai-community/gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Today was an amazing day because" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited, that\'s a terrible feeling."'] ``` ### Beam-search decoding Unlike greedy search, beam-search decoding keeps several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower probability initial tokens and would've been ignored by the greedy search. To enable this decoding strategy, specify the `num_beams` (aka number of hypotheses to keep track of) that is greater than 1. 
```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "It is astonishing how one can" >>> checkpoint = "openai-community/gpt2-medium" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have'] ``` ### Beam-search multinomial sampling As the name implies, this decoding strategy combines beam search with multinomial sampling. You need to specify the `num_beams` greater than 1, and set `do_sample=True` to use this decoding strategy. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed >>> set_seed(0) # For reproducibility >>> prompt = "translate English to German: The house is wonderful." >>> checkpoint = "google-t5/t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, do_sample=True) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Das Haus ist wunderbar.' ``` ### Diverse beam search decoding The diverse beam search decoding strategy is an extension of the beam search strategy that allows for generating a more diverse set of beam sequences to choose from. To learn how it works, refer to [Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf). This approach has three main parameters: `num_beams`, `num_beam_groups`, and `diversity_penalty`. The diversity penalty ensures the outputs are distinct across groups, and beam search is used within each group. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> checkpoint = "google/pegasus-xsum" >>> prompt = ( ... "The Permaculture Design Principles are a set of universal design principles " ... "that can be applied to any location, climate and culture, and they allow us to design " ... "the most efficient and sustainable human habitation and food production systems. " ... "Permaculture is a design system that encompasses a wide variety of disciplines, such " ... "as ecology, landscape design, environmental science and energy conservation, and the " ... "Permaculture design principles are drawn from these various disciplines. Each individual " ... "design principle itself embodies a complete conceptual framework based on sound " ... "scientific principles. When we bring all these separate principles together, we can " ... "create a design system that both looks at whole systems, the parts that these systems " ... "consist of, and how those parts interact with each other to create a complex, dynamic, " ... "living system. Each design principle serves as a tool that allows us to integrate all " ... "the separate parts of a design, referred to as elements, into a functional, synergistic, " ... "whole system, where the elements harmoniously interact and work together in the most " ... "efficient way possible." ... 
)

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the'
```

This guide illustrates the main parameters that enable various decoding strategies. More advanced parameters exist for the [`generate`] method, which gives you even further control over the [`generate`] method's behavior. For the complete list of the available parameters, refer to the [API documentation](./main_classes/text_generation.md).

### Speculative Decoding

Speculative decoding (also known as assisted decoding) is a modification of the decoding strategies above that uses an assistant model (ideally a much smaller one) with the same tokenizer to generate a few candidate tokens. The main model then validates the candidate tokens in a single forward pass, which speeds up the decoding process. If `do_sample=True`, then the token validation with resampling introduced in the [speculative decoding paper](https://arxiv.org/pdf/2211.17192.pdf) is used.

Currently, only greedy search and sampling are supported with assisted decoding, and assisted decoding doesn't support batched inputs. To learn more about assisted decoding, check [this blog post](https://huggingface.co/blog/assisted-generation).

To enable assisted decoding, set the `assistant_model` argument with a model.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"
>>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
>>> outputs = model.generate(**inputs, assistant_model=assistant_model)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
```

When using assisted decoding with sampling methods, you can use the `temperature` argument to control the randomness, just like in multinomial sampling. However, in assisted decoding, reducing the temperature may help improve the latency.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> set_seed(42)  # For reproducibility

>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"
>>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
>>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are going to the same party. It is a small party, in a small']
```

Alternatively, you can also set the `prompt_lookup_num_tokens` to trigger n-gram based assisted decoding, as opposed to model based assisted decoding.
You can read more about it [here](https://twitter.com/joao_gante/status/1747322413006643259).
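As a rough sketch, reusing the checkpoint from the assisted decoding examples above, enabling prompt lookup decoding only requires replacing the `assistant_model` argument with `prompt_lookup_num_tokens` (available in recent versions of 🤗 Transformers):

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> # Candidate tokens are copied from matching n-grams in the prompt instead of an assistant model
>>> outputs = model.generate(**inputs, prompt_lookup_num_tokens=3)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)  # doctest: +SKIP
```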
transformers/docs/source/en/generation_strategies.md/0
{ "file_path": "transformers/docs/source/en/generation_strategies.md", "repo_id": "transformers", "token_count": 5636 }
217
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Optimizing LLMs for Speed and Memory

[[open-in-colab]]

Large Language Models (LLMs) such as GPT3/4, [Falcon](https://huggingface.co/tiiuae/falcon-40b), and [Llama](https://huggingface.co/meta-llama/Llama-2-70b-hf) are rapidly advancing in their ability to tackle human-centric tasks, establishing themselves as essential tools in modern knowledge-based industries. Deploying these models in real-world tasks remains challenging, however:

- To exhibit near-human text understanding and generation capabilities, LLMs currently need to consist of billions of parameters (see [Kaplan et al](https://arxiv.org/abs/2001.08361), [Wei et al.](https://arxiv.org/abs/2206.07682)). This consequently amplifies the memory demands for inference.
- In many real-world tasks, LLMs need to be given extensive contextual information. This necessitates the model's capability to manage very long input sequences during inference.

The crux of these challenges lies in augmenting the computational and memory capabilities of LLMs, especially when handling expansive input sequences.

In this guide, we will go over the effective techniques for efficient LLM deployment:

1. **Lower Precision:** Research has shown that operating at reduced numerical precision, namely [8-bit and 4-bit](./main_classes/quantization.md), can achieve computational advantages without a considerable decline in model performance.

2. **Flash Attention:** Flash Attention is a variation of the attention algorithm that not only provides a more memory-efficient approach but also realizes increased efficiency due to optimized GPU memory utilization.

3. **Architectural Innovations:** Considering that LLMs are always deployed in the same way during inference, namely autoregressive text generation with a long input context, specialized model architectures have been proposed that allow for more efficient inference. The most important advancements in model architectures here are [Alibi](https://arxiv.org/abs/2108.12409), [Rotary embeddings](https://arxiv.org/abs/2104.09864), [Multi-Query Attention (MQA)](https://arxiv.org/abs/1911.02150) and [Grouped-Query-Attention (GQA)](https://arxiv.org/abs/2305.13245).

Throughout this guide, we will offer an analysis of auto-regressive generation from a tensor's perspective. We delve into the pros and cons of adopting lower precision, provide a comprehensive exploration of the latest attention algorithms, and discuss improved LLM architectures. While doing so, we run practical examples showcasing each of the feature improvements.

## 1. Lower Precision

Memory requirements of LLMs can be best understood by seeing the LLM as a set of weight matrices and vectors and the text inputs as a sequence of vectors.
In the following, the definition *weights* will be used to signify all model weight matrices and vectors. At the time of writing this guide, LLMs consist of at least a couple billion parameters. Each parameter thereby is made of a decimal number, e.g. `4.5689` which is usually stored in either [float32](https://en.wikipedia.org/wiki/Single-precision_floating-point_format), [bfloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format), or [float16](https://en.wikipedia.org/wiki/Half-precision_floating-point_format) format. This allows us to easily compute the memory requirement to load the LLM into memory: > *Loading the weights of a model having X billion parameters requires roughly 4 * X GB of VRAM in float32 precision* Nowadays, models are however rarely trained in full float32 precision, but usually in bfloat16 precision or less frequently in float16 precision. Therefore the rule of thumb becomes: > *Loading the weights of a model having X billion parameters requires roughly 2 * X GB of VRAM in bfloat16/float16 precision* For shorter text inputs (less than 1024 tokens), the memory requirement for inference is very much dominated by the memory requirement to load the weights. Therefore, for now, let's assume that the memory requirement for inference is equal to the memory requirement to load the model into the GPU VRAM. To give some examples of how much VRAM it roughly takes to load a model in bfloat16: - **GPT3** requires 2 \* 175 GB = **350 GB** VRAM - [**Bloom**](https://huggingface.co/bigscience/bloom) requires 2 \* 176 GB = **352 GB** VRAM - [**Llama-2-70b**](https://huggingface.co/meta-llama/Llama-2-70b-hf) requires 2 \* 70 GB = **140 GB** VRAM - [**Falcon-40b**](https://huggingface.co/tiiuae/falcon-40b) requires 2 \* 40 GB = **80 GB** VRAM - [**MPT-30b**](https://huggingface.co/mosaicml/mpt-30b) requires 2 \* 30 GB = **60 GB** VRAM - [**bigcode/starcoder**](https://huggingface.co/bigcode/starcoder) requires 2 \* 15.5 = **31 GB** VRAM As of writing this document, the largest GPU chip on the market is the A100 & H100 offering 80GB of VRAM. Most of the models listed before require more than 80GB just to be loaded and therefore necessarily require [tensor parallelism](https://huggingface.co/docs/transformers/perf_train_gpu_many#tensor-parallelism) and/or [pipeline parallelism](https://huggingface.co/docs/transformers/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism). 🤗 Transformers does not support tensor parallelism out of the box as it requires the model architecture to be written in a specific way. If you're interested in writing models in a tensor-parallelism-friendly way, feel free to have a look at [the text-generation-inference library](https://github.com/huggingface/text-generation-inference/tree/main/server/text_generation_server/models/custom_modeling). Naive pipeline parallelism is supported out of the box. For this, simply load the model with `device="auto"` which will automatically place the different layers on the available GPUs as explained [here](https://huggingface.co/docs/accelerate/v0.22.0/en/concept_guides/big_model_inference). Note, however that while very effective, this naive pipeline parallelism does not tackle the issues of GPU idling. For this more advanced pipeline parallelism is required as explained [here](https://huggingface.co/docs/transformers/en/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism). 
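Before looking at multi-GPU setups in practice, the rule of thumb above is easy to turn into a small helper. This is just back-of-the-envelope arithmetic, not a measurement:

```python
def required_vram_gb(num_parameters: float, bytes_per_parameter: int = 2) -> float:
    """Rough VRAM needed just to hold the weights (2 bytes for bfloat16/float16, 4 for float32)."""
    return num_parameters * bytes_per_parameter / 1e9


# Llama-2-70b in bfloat16: roughly 140 GB, matching the list above
print(required_vram_gb(70e9))  # 140.0

# The same model in float32 needs roughly twice as much
print(required_vram_gb(70e9, bytes_per_parameter=4))  # 280.0
```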
If you have access to an 8 x 80GB A100 node, you could load BLOOM as follows ```bash !pip install transformers accelerate bitsandbytes optimum ``` ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("bigscience/bloom", device_map="auto", pad_token_id=0) ``` By using `device_map="auto"` the attention layers would be equally distributed over all available GPUs. In this guide, we will use [bigcode/octocoder](https://huggingface.co/bigcode/octocoder) as it can be run on a single 40 GB A100 GPU device chip. Note that all memory and speed optimizations that we will apply going forward, are equally applicable to models that require model or tensor parallelism. Since the model is loaded in bfloat16 precision, using our rule of thumb above, we would expect the memory requirement to run inference with `bigcode/octocoder` to be around 31 GB VRAM. Let's give it a try. We first load the model and tokenizer and then pass both to Transformers' [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) object. ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import torch model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", torch_dtype=torch.bfloat16, device_map="auto", pad_token_id=0) tokenizer = AutoTokenizer.from_pretrained("bigcode/octocoder") pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) ``` ```python prompt = "Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer:" result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):] result ``` **Output**: ``` Here is a Python function that transforms bytes to Giga bytes:\n\n```python\ndef bytes_to_giga_bytes(bytes):\n return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single ``` Nice, we can now directly use the result to convert bytes into Gigabytes. ```python def bytes_to_giga_bytes(bytes): return bytes / 1024 / 1024 / 1024 ``` Let's call [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html) to measure the peak GPU memory allocation. ```python bytes_to_giga_bytes(torch.cuda.max_memory_allocated()) ``` **Output**: ```bash 29.0260648727417 ``` Close enough to our back-of-the-envelope computation! We can see the number is not exactly correct as going from bytes to kilobytes requires a multiplication of 1024 instead of 1000. Therefore the back-of-the-envelope formula can also be understood as an "at most X GB" computation. Note that if we had tried to run the model in full float32 precision, a whopping 64 GB of VRAM would have been required. > Almost all models are trained in bfloat16 nowadays, there is no reason to run the model in full float32 precision if [your GPU supports bfloat16](https://discuss.pytorch.org/t/bfloat16-native-support/117155/5). Float32 won't give better inference results than the precision that was used to train the model. If you are unsure in which format the model weights are stored on the Hub, you can always look into the checkpoint's config under `"torch_dtype"`, *e.g.* [here](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/6fdf2e60f86ff2481f2241aaee459f85b5b0bbb9/config.json#L21). It is recommended to set the model to the same precision type as written in the config when loading with `from_pretrained(..., torch_dtype=...)` except when the original type is float32 in which case one can use both `float16` or `bfloat16` for inference. 
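This check can also be done programmatically without downloading any weights, by loading only the checkpoint's configuration. Note that older checkpoints may not set `torch_dtype` at all, in which case the attribute is `None`:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bigcode/octocoder")
print(config.torch_dtype)  # the dtype the weights were saved in (may be None for older checkpoints)
```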
Let's define a `flush(...)` function to free all allocated memory so that we can accurately measure the peak allocated GPU memory. ```python del pipe del model import gc import torch def flush(): gc.collect() torch.cuda.empty_cache() torch.cuda.reset_peak_memory_stats() ``` Let's call it now for the next experiment. ```python flush() ``` In the recent version of the accelerate library, you can also use an utility method called `release_memory()` ```python from accelerate.utils import release_memory # ... release_memory(model) ``` Now what if your GPU does not have 32 GB of VRAM? It has been found that model weights can be quantized to 8-bit or 4-bits without a significant loss in performance (see [Dettmers et al.](https://arxiv.org/abs/2208.07339)). Model can be quantized to even 3 or 2 bits with an acceptable loss in performance as shown in the recent [GPTQ paper](https://arxiv.org/abs/2210.17323) 🤯. Without going into too many details, quantization schemes aim at reducing the precision of weights while trying to keep the model's inference results as accurate as possible (*a.k.a* as close as possible to bfloat16). Note that quantization works especially well for text generation since all we care about is choosing the *set of most likely next tokens* and don't really care about the exact values of the next token *logit* distribution. All that matters is that the next token *logit* distribution stays roughly the same so that an `argmax` or `topk` operation gives the same results. There are various quantization techniques, which we won't discuss in detail here, but in general, all quantization techniques work as follows: - 1. Quantize all weights to the target precision - 2. Load the quantized weights, and pass the input sequence of vectors in bfloat16 precision - 3. Dynamically dequantize weights to bfloat16 to perform the computation with their input vectors in bfloat16 precision In a nutshell, this means that *inputs-weight matrix* multiplications, with \\( X \\) being the *inputs*, \\( W \\) being a weight matrix and \\( Y \\) being the output: $$ Y = X * W $$ are changed to $$ Y = X * \text{dequantize}(W) $$ for every matrix multiplication. Dequantization and re-quantization is performed sequentially for all weight matrices as the inputs run through the network graph. Therefore, inference time is often **not** reduced when using quantized weights, but rather increases. Enough theory, let's give it a try! To quantize the weights with Transformers, you need to make sure that the [`bitsandbytes`](https://github.com/TimDettmers/bitsandbytes) library is installed. ```bash !pip install bitsandbytes ``` We can then load models in 8-bit quantization by simply adding a `load_in_8bit=True` flag to `from_pretrained`. ```python model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", load_in_8bit=True, pad_token_id=0) ``` Now, let's run our example again and measure the memory usage. ```python pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):] result ``` **Output**: ``` Here is a Python function that transforms bytes to Giga bytes:\n\n```python\ndef bytes_to_giga_bytes(bytes):\n return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single ``` Nice, we're getting the same result as before, so no loss in accuracy! Let's look at how much memory was used this time. ```python bytes_to_giga_bytes(torch.cuda.max_memory_allocated()) ``` **Output**: ``` 15.219234466552734 ``` Significantly less! 
We're down to just a bit over 15 GBs and could therefore run this model on consumer GPUs like the 4090. We're seeing a very nice gain in memory efficiency and more or less no degradation to the model's output. However, we can also notice a slight slow-down during inference. We delete the models and flush the memory again. ```python del model del pipe ``` ```python flush() ``` Let's see what peak GPU memory consumption 4-bit quantization gives. Quantizing the model to 4-bit can be done with the same API as before - this time by passing `load_in_4bit=True` instead of `load_in_8bit=True`. ```python model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", load_in_4bit=True, low_cpu_mem_usage=True, pad_token_id=0) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):] result ``` **Output**: ``` Here is a Python function that transforms bytes to Giga bytes:\n\n```\ndef bytes_to_gigabytes(bytes):\n return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single argument ``` We're almost seeing the same output text as before - just the `python` is missing just before the code snippet. Let's see how much memory was required. ```python bytes_to_giga_bytes(torch.cuda.max_memory_allocated()) ``` **Output**: ``` 9.543574333190918 ``` Just 9.5GB! That's really not a lot for a >15 billion parameter model. While we see very little degradation in accuracy for our model here, 4-bit quantization can in practice often lead to different results compared to 8-bit quantization or full `bfloat16` inference. It is up to the user to try it out. Also note that inference here was again a bit slower compared to 8-bit quantization which is due to the more aggressive quantization method used for 4-bit quantization leading to \\( \text{quantize} \\) and \\( \text{dequantize} \\) taking longer during inference. ```python del model del pipe ``` ```python flush() ``` Overall, we saw that running OctoCoder in 8-bit precision reduced the required GPU VRAM from 32G GPU VRAM to only 15GB and running the model in 4-bit precision further reduces the required GPU VRAM to just a bit over 9GB. 4-bit quantization allows the model to be run on GPUs such as RTX3090, V100, and T4 which are quite accessible for most people. For more information on quantization and to see how one can quantize models to require even less GPU VRAM memory than 4-bit, we recommend looking into the [`AutoGPTQ`](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#autogptq-integration%60) implementation. > As a conclusion, it is important to remember that model quantization trades improved memory efficiency against accuracy and in some cases inference time. If GPU memory is not a constraint for your use case, there is often no need to look into quantization. However many GPUs simply can't run LLMs without quantization methods and in this case, 4-bit and 8-bit quantization schemes are extremely useful tools. For more in-detail usage information, we strongly recommend taking a look at the [Transformers Quantization Docs](https://huggingface.co/docs/transformers/main_classes/quantization#general-usage). Next, let's look into how we can improve computational and memory efficiency by using better algorithms and an improved model architecture. ## 2. 
Flash Attention Today's top-performing LLMs share more or less the same fundamental architecture that consists of feed-forward layers, activation layers, layer normalization layers, and most crucially, self-attention layers. Self-attention layers are central to Large Language Models (LLMs) in that they enable the model to understand the contextual relationships between input tokens. However, the peak GPU memory consumption for self-attention layers grows *quadratically* both in compute and memory complexity with number of input tokens (also called *sequence length*) that we denote in the following by \\( N \\) . While this is not really noticeable for shorter input sequences (of up to 1000 input tokens), it becomes a serious problem for longer input sequences (at around 16000 input tokens). Let's take a closer look. The formula to compute the output \\( \mathbf{O} \\) of a self-attention layer for an input \\( \mathbf{X} \\) of length \\( N \\) is: $$ \textbf{O} = \text{Attn}(\mathbf{X}) = \mathbf{V} \times \text{Softmax}(\mathbf{QK}^T) \text{ with } \mathbf{Q} = \mathbf{W}_q \mathbf{X}, \mathbf{V} = \mathbf{W}_v \mathbf{X}, \mathbf{K} = \mathbf{W}_k \mathbf{X} $$ \\( \mathbf{X} = (\mathbf{x}_1, ... \mathbf{x}_{N}) \\) is thereby the input sequence to the attention layer. The projections \\( \mathbf{Q} \\) and \\( \mathbf{K} \\) will each consist of \\( N \\) vectors resulting in the \\( \mathbf{QK}^T \\) being of size \\( N^2 \\) . LLMs usually have multiple attention heads, thus doing multiple self-attention computations in parallel. Assuming, the LLM has 40 attention heads and runs in bfloat16 precision, we can calculate the memory requirement to store the \\( \mathbf{QK^T} \\) matrices to be \\( 40 * 2 * N^2 \\) bytes. For \\( N=1000 \\) only around 50 MB of VRAM are needed, however, for \\( N=16000 \\) we would need 19 GB of VRAM, and for \\( N=100,000 \\) we would need almost 1TB just to store the \\( \mathbf{QK}^T \\) matrices. Long story short, the default self-attention algorithm quickly becomes prohibitively memory-expensive for large input contexts. As LLMs improve in text comprehension and generation, they are applied to increasingly complex tasks. While models once handled the translation or summarization of a few sentences, they now manage entire pages, demanding the capability to process extensive input lengths. How can we get rid of the exorbitant memory requirements for large input lengths? We need a new way to compute the self-attention mechanism that gets rid of the \\( QK^T \\) matrix. [Tri Dao et al.](https://arxiv.org/abs/2205.14135) developed exactly such a new algorithm and called it **Flash Attention**. In a nutshell, Flash Attention breaks the \\(\mathbf{V} \times \text{Softmax}(\mathbf{QK}^T\\)) computation apart and instead computes smaller chunks of the output by iterating over multiple softmax computation steps: $$ \textbf{O}_i \leftarrow s^a_{ij} * \textbf{O}_i + s^b_{ij} * \mathbf{V}_{j} \times \text{Softmax}(\mathbf{QK}^T_{i,j}) \text{ for multiple } i, j \text{ iterations} $$ with \\( s^a_{ij} \\) and \\( s^b_{ij} \\) being some softmax normalization statistics that need to be recomputed for every \\( i \\) and \\( j \\) . Please note that the whole Flash Attention is a bit more complex and is greatly simplified here as going in too much depth is out of scope for this guide. The reader is invited to take a look at the well-written [Flash Attention paper](https://arxiv.org/abs/2205.14135) for more details. 
The main takeaway here is: > By keeping track of softmax normalization statistics and by using some smart mathematics, Flash Attention gives **numerical identical** outputs compared to the default self-attention layer at a memory cost that only increases linearly with \\( N \\) . Looking at the formula, one would intuitively say that Flash Attention must be much slower compared to the default self-attention formula as more computation needs to be done. Indeed Flash Attention requires more FLOPs compared to normal attention as the softmax normalization statistics have to constantly be recomputed (see [paper](https://arxiv.org/abs/2205.14135) for more details if interested) > However, Flash Attention is much faster in inference compared to default attention which comes from its ability to significantly reduce the demands on the slower, high-bandwidth memory of the GPU (VRAM), focusing instead on the faster on-chip memory (SRAM). Essentially, Flash Attention makes sure that all intermediate write and read operations can be done using the fast *on-chip* SRAM memory instead of having to access the slower VRAM memory to compute the output vector \\( \mathbf{O} \\) . In practice, there is currently absolutely no reason to **not** use Flash Attention if available. The algorithm gives mathematically the same outputs, and is both faster and more memory-efficient. Let's look at a practical example. Our OctoCoder model now gets a significantly longer input prompt which includes a so-called *system prompt*. System prompts are used to steer the LLM into a better assistant that is tailored to the users' task. In the following, we use a system prompt that will make OctoCoder a better coding assistant. ```python system_prompt = """Below are a series of dialogues between various people and an AI technical assistant. The assistant tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble but knowledgeable. The assistant is happy to help with code questions and will do their best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical really does its best, and doesn't let caution get too much in the way of being useful. The Starcoder models are a series of 15.5B parameter models trained on 80+ programming languages from The Stack (v1.2) (excluding opt-out requests). The model uses Multi Query Attention, was trained using the Fill-in-the-Middle objective, and with 8,192 tokens context window for a trillion tokens of heavily deduplicated data. ----- Question: Write a function that takes two lists and returns a list that has alternating elements from each input list. Answer: Sure. Here is a function that does that. def alternating(list1, list2): results = [] for i in range(len(list1)): results.append(list1[i]) results.append(list2[i]) return results Question: Can you write some test cases for this function? Answer: Sure, here are some tests. assert alternating([10, 20, 30], [1, 2, 3]) == [10, 1, 20, 2, 30, 3] assert alternating([True, False], [4, 5]) == [True, 4, False, 5] assert alternating([], []) == [] Question: Modify the function so that it returns all input elements when the lists have uneven length. The elements from the longer list should be at the end. Answer: Here is the modified function. 
def alternating(list1, list2):
    results = []
    for i in range(min(len(list1), len(list2))):
        results.append(list1[i])
        results.append(list2[i])
    if len(list1) > len(list2):
        results.extend(list1[i+1:])
    else:
        results.extend(list2[i+1:])
    return results

-----
"""
```

For demonstration purposes, we duplicate the system prompt ten times so that the input length is long enough to observe Flash Attention's memory savings. We append the original text prompt `"Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer: Here"`.

```python
long_prompt = 10 * system_prompt + prompt
```

We instantiate our model again in bfloat16 precision.

```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigcode/octocoder")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```

Let's now run the model just like before *without Flash Attention* and measure the peak GPU memory requirement and inference time.

```python
import time

start_time = time.time()
result = pipe(long_prompt, max_new_tokens=60)[0]["generated_text"][len(long_prompt):]

print(f"Generated in {time.time() - start_time} seconds.")
result
```

**Output**:
```
Generated in 10.96854019165039 seconds.
Sure. Here is a function that does that.\n\ndef bytes_to_giga(bytes):\n return bytes / 1024 / 1024 / 1024\n\nAnswer: Sure. Here is a function that does that.\n\ndef
```

We're getting the same output as before, however this time the model repeats the answer multiple times until it hits its 60-token cut-off. This is not surprising as we've repeated the system prompt ten times for demonstration purposes and thus cued the model to repeat itself.

**Note** that the system prompt should not be repeated ten times in real-world applications - one time is enough!

Let's measure the peak GPU memory requirement.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:
```bash
37.668193340301514
```

As we can see the peak GPU memory requirement is now significantly higher than in the beginning, which is largely due to the longer input sequence. Also, the generation itself now takes almost 11 seconds.

We call `flush()` to free GPU memory for our next experiment.

```python
flush()
```

For comparison, let's run the same function, but enable Flash Attention instead. To do so, we convert the model to [BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview), thereby enabling PyTorch's [SDPA self-attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention), which in turn is able to use Flash Attention.

```python
model.to_bettertransformer()
```

Now we run the exact same code snippet as before and under the hood Transformers will make use of Flash Attention.

```py
start_time = time.time()
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    result = pipe(long_prompt, max_new_tokens=60)[0]["generated_text"][len(long_prompt):]

print(f"Generated in {time.time() - start_time} seconds.")
result
```

**Output**:
```
Generated in 3.0211617946624756 seconds.
Sure. Here is a function that does that.\n\ndef bytes_to_giga(bytes):\n return bytes / 1024 / 1024 / 1024\n\nAnswer: Sure. Here is a function that does that.\n\ndef
```

We're getting the exact same result as before, but can observe a very significant speed-up thanks to Flash Attention.

Let's measure the memory consumption one last time.
```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:
```
32.617331981658936
```

And we're much closer to our original 29GB peak GPU memory from the beginning. With Flash Attention, passing the very long input sequence uses only roughly 3.6 GB more GPU memory than passing the short input sequence as done in the beginning, compared to the almost 9 GB of extra memory needed with the default attention implementation.

```py
flush()
```

For more information on how to use Flash Attention, please have a look at [this doc page](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#flashattention-2).

## 3. Architectural Innovations

So far we have looked into improving computational and memory efficiency by:

- Casting the weights to a lower precision format
- Replacing the self-attention algorithm with a more memory- and compute-efficient version

Let's now look into how we can change the architecture of an LLM so that it is most effective and efficient for tasks that require long text inputs, *e.g.*:
- Retrieval augmented Question Answering,
- Summarization,
- Chat

Note that *chat* not only requires the LLM to handle long text inputs, but it also necessitates that the LLM is able to efficiently handle the back-and-forth dialogue between user and assistant (such as ChatGPT).

Once trained, the fundamental LLM architecture is difficult to change, so it is important to make considerations about the LLM's tasks beforehand and accordingly optimize the model's architecture. There are two important components of the model architecture that quickly become memory and/or performance bottlenecks for large input sequences.

- The positional embeddings
- The key-value cache

Let's go over each component in more detail.

### 3.1 Improving positional embeddings of LLMs

Self-attention puts each token in relation to all other tokens. As an example, the \\( \text{Softmax}(\mathbf{QK}^T) \\) matrix of the text input sequence *"Hello", "I", "love", "you"* could look as follows:

![](/blog/assets/163_optimize_llm/self_attn_tokens.png)

Each word token is given a probability mass at which it attends to all other word tokens and, therefore, is put into relation with all other word tokens. E.g. the word *"love"* attends to the word *"Hello"* with 5%, to *"I"* with 30%, and to itself with 65%.

An LLM based on self-attention, but without position embeddings, would have great difficulties in understanding the positions of the text inputs to each other. This is because the probability score computed by \\( \mathbf{QK}^T \\) relates each word token to each other word token in \\( O(1) \\) computations regardless of their relative positional distance to each other. Therefore, for the LLM without position embeddings each token appears to have the same distance to all other tokens, *e.g.* differentiating between *"Hello I love you"* and *"You love I hello"* would be very challenging.

For the LLM to understand sentence order, an additional *cue* is needed and is usually applied in the form of *positional encodings* (also called *positional embeddings*). Positional encodings encode the position of each token into a numerical representation that the LLM can leverage to better understand sentence order.

The authors of the [*Attention Is All You Need*](https://arxiv.org/abs/1706.03762) paper introduced sinusoidal positional embeddings \\( \mathbf{P} = \mathbf{p}_1, \ldots, \mathbf{p}_N \\), where each vector \\( \mathbf{p}_i \\) is computed as a sinusoidal function of its position \\( i \\).
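Concretely, the original paper defines each pair of dimensions \\( 2k, 2k+1 \\) of \\( \mathbf{p}_i \\) as

$$ p_{i, 2k} = \sin\left(\frac{i}{10000^{2k/d}}\right), \quad p_{i, 2k+1} = \cos\left(\frac{i}{10000^{2k/d}}\right) $$

with \\( d \\) being the hidden dimension of the model, so that every position is mapped to a unique pattern of sine and cosine values of different frequencies.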
The positional encodings are then simply added to the input sequence vectors \\( \mathbf{\hat{X}} = \mathbf{\hat{x}}_1, \ldots, \mathbf{\hat{x}}_N \\) = \\( \mathbf{x}_1 + \mathbf{p}_1, \ldots, \mathbf{x}_N + \mathbf{p}_N \\) thereby cueing the model to better learn sentence order. Instead of using fixed position embeddings, others (such as [Devlin et al.](https://arxiv.org/abs/1810.04805)) used learned positional encodings for which the positional embeddings \\( \mathbf{P} \\) are learned during training. Sinusoidal and learned position embeddings used to be the predominant methods to encode sentence order into LLMs, but a couple of problems related to these positional encodings were found: 1. Sinusoidal and learned position embeddings are both absolute positional embeddings, *i.e.* encoding a unique embedding for each position id: \\( 0, \ldots, N \\) . As shown by [Huang et al.](https://arxiv.org/abs/2009.13658) and [Su et al.](https://arxiv.org/abs/2104.09864), absolute positional embeddings lead to poor LLM performance for long text inputs. For long text inputs, it is advantageous if the model learns the relative positional distance input tokens have to each other instead of their absolute position. 2. When using learned position embeddings, the LLM has to be trained on a fixed input length \\( N \\), which makes it difficult to extrapolate to an input length longer than what it was trained on. Recently, relative positional embeddings that can tackle the above mentioned problems have become more popular, most notably: - [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) - [ALiBi](https://arxiv.org/abs/2108.12409) Both *RoPE* and *ALiBi* argue that it's best to cue the LLM about sentence order directly in the self-attention algorithm as it's there that word tokens are put into relation with each other. More specifically, sentence order should be cued by modifying the \\( \mathbf{QK}^T \\) computation. Without going into too many details, *RoPE* notes that positional information can be encoded into query-key pairs, *e.g.* \\( \mathbf{q}_i \\) and \\( \mathbf{x}_j \\) by rotating each vector by an angle \\( \theta * i \\) and \\( \theta * j \\) respectively with \\( i, j \\) describing each vectors sentence position: $$ \mathbf{\hat{q}}_i^T \mathbf{\hat{x}}_j = \mathbf{{q}}_i^T \mathbf{R}_{\theta, i -j} \mathbf{{x}}_j. $$ \\( \mathbf{R}_{\theta, i - j} \\) thereby represents a rotational matrix. \\( \theta \\) is *not* learned during training, but instead set to a pre-defined value that depends on the maximum input sequence length during training. > By doing so, the propability score between \\( \mathbf{q}_i \\) and \\( \mathbf{q}_j \\) is only affected if \\( i \ne j \\) and solely depends on the relative distance \\( i - j \\) regardless of each vector's specific positions \\( i \\) and \\( j \\) . *RoPE* is used in multiple of today's most important LLMs, such as: - [**Falcon**](https://huggingface.co/tiiuae/falcon-40b) - [**Llama**](https://arxiv.org/abs/2302.13971) - [**PaLM**](https://arxiv.org/abs/2204.02311) As an alternative, *ALiBi* proposes a much simpler relative position encoding scheme. The relative distance that input tokens have to each other is added as a negative integer scaled by a pre-defined value `m` to each query-key entry of the \\( \mathbf{QK}^T \\) matrix right before the softmax computation. 
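To make this more tangible, here is a minimal sketch of what such an ALiBi-style bias could look like before it is shown in the figure below. This is a simplified illustration, not the exact implementation used in MPT or BLOOM - in particular, real ALiBi uses one slope per attention head (a geometric sequence of powers of \\( 2^{-8/n_{\text{head}}} \\)) instead of the single hypothetical `slope` value used here:

```python
import torch


def alibi_bias(seq_len, slope=0.5):
    # relative distance j - i between key position j and query position i
    positions = torch.arange(seq_len)
    distance = positions[None, :] - positions[:, None]
    # penalize attention to tokens further in the past with a linearly growing negative bias;
    # future positions (distance > 0) are left at 0 here since the causal mask removes them anyway
    return slope * torch.where(distance <= 0, distance, torch.zeros_like(distance)).float()


scores = torch.randn(1, 4, 4)                           # unnormalized attention scores QK^T for a toy sequence
probs = torch.softmax(scores + alibi_bias(4), dim=-1)   # the bias is added right before the softmax
```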
![](/blog/assets/163_optimize_llm/alibi.png) As shown in the [ALiBi](https://arxiv.org/abs/2108.12409) paper, this simple relative positional encoding allows the model to retain a high performance even at very long text input sequences. *ALiBi* is used in multiple of today's most important LLMs, such as: - [**MPT**](https://huggingface.co/mosaicml/mpt-30b) - [**BLOOM**](https://huggingface.co/bigscience/bloom) Both *RoPE* and *ALiBi* position encodings can extrapolate to input lengths not seen during training whereas it has been shown that extrapolation works much better out-of-the-box for *ALiBi* as compared to *RoPE*. For ALiBi, one simply increases the values of the lower triangular position matrix to match the length of the input sequence. For *RoPE*, keeping the same \\( \theta \\) that was used during training leads to poor results when passing text inputs much longer than those seen during training, *c.f* [Press et al.](https://arxiv.org/abs/2108.12409). However, the community has found a couple of effective tricks that adapt \\( \theta \\), thereby allowing *RoPE* position embeddings to work well for extrapolated text input sequences (see [here](https://github.com/huggingface/transformers/pull/24653)). > Both RoPE and ALiBi are relative positional embeddings that are *not* learned during training, but instead are based on the following intuitions: - Positional cues about the text inputs should be given directly to the \\( QK^T \\) matrix of the self-attention layer - The LLM should be incentivized to learn a constant *relative* distance positional encodings have to each other - The further text input tokens are from each other, the lower the probability of their query-value probability. Both RoPE and ALiBi lower the query-key probability of tokens far away from each other. RoPE by decreasing their vector product by increasing the angle between the query-key vectors. ALiBi by adding large negative numbers to the vector product In conclusion, LLMs that are intended to be deployed in tasks that require handling large text inputs are better trained with relative positional embeddings, such as RoPE and ALiBi. Also note that even if an LLM with RoPE and ALiBi has been trained only on a fixed length of say \\( N_1 = 2048 \\) it can still be used in practice with text inputs much larger than \\( N_1 \\), like \\( N_2 = 8192 > N_1 \\) by extrapolating the positional embeddings. ### 3.2 The key-value cache Auto-regressive text generation with LLMs works by iteratively putting in an input sequence, sampling the next token, appending the next token to the input sequence, and continuing to do so until the LLM produces a token that signifies that the generation has finished. Please have a look at [Transformer's Generate Text Tutorial](https://huggingface.co/docs/transformers/llm_tutorial#generate-text) to get a more visual explanation of how auto-regressive generation works. Let's run a quick code snippet to show how auto-regressive works in practice. We will simply take the most likely next token via `torch.argmax`. 
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")

for _ in range(5):
    next_logits = model(input_ids)["logits"][:, -1:]
    next_token_id = torch.argmax(next_logits, dim=-1)

    input_ids = torch.cat([input_ids, next_token_id], dim=-1)
    print("shape of input_ids", input_ids.shape)

generated_text = tokenizer.batch_decode(input_ids[:, -5:])
generated_text
```

**Output**:
```
shape of input_ids torch.Size([1, 21])
shape of input_ids torch.Size([1, 22])
shape of input_ids torch.Size([1, 23])
shape of input_ids torch.Size([1, 24])
shape of input_ids torch.Size([1, 25])
[' Here is a Python function']
```

As we can see, with every iteration the text input is extended by the token that was just sampled.

With very few exceptions, LLMs are trained using the [causal language modeling objective](https://huggingface.co/docs/transformers/tasks/language_modeling#causal-language-modeling) and therefore mask the upper triangle of the attention score matrix - this effectively gives the attention scores between a token and all future tokens 0 probability. For a quick recap on causal language modeling you can refer to the [*Illustrated Self Attention blog*](https://jalammar.github.io/illustrated-gpt2/#part-2-illustrated-self-attention).

As a consequence, tokens *never* depend on future tokens, more specifically the \\( \mathbf{q}_i \\) vector is never put in relation with any key, value vectors \\( \mathbf{k}_j, \mathbf{v}_j \\) if \\( j > i \\) . Instead \\( \mathbf{q}_i \\) only attends to previous key-value vectors \\( \mathbf{k}_{m < i}, \mathbf{v}_{m < i} \text{ , for } m \in \{0, \ldots i - 1\} \\). In order to reduce unnecessary computation, one can therefore cache each layer's key-value vectors for all previous timesteps.

In the following, we will tell the LLM to make use of the key-value cache by retrieving and forwarding it for each forward pass. In Transformers, we can retrieve the key-value cache by passing the `use_cache` flag to the `forward` call and can then pass it with the current token.

```python
past_key_values = None  # past_key_values is the key-value cache
generated_tokens = []
next_token_id = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")

for _ in range(5):
    next_logits, past_key_values = model(next_token_id, past_key_values=past_key_values, use_cache=True).to_tuple()
    next_logits = next_logits[:, -1:]
    next_token_id = torch.argmax(next_logits, dim=-1)

    print("shape of input_ids", next_token_id.shape)
    print("length of key-value cache", len(past_key_values[0][0]))  # past_key_values are of shape [num_layers, 0 for k, 1 for v, batch_size, length, hidden_dim]
    generated_tokens.append(next_token_id.item())

generated_text = tokenizer.batch_decode(generated_tokens)
generated_text
```

**Output**:
```
shape of input_ids torch.Size([1, 1])
length of key-value cache 20
shape of input_ids torch.Size([1, 1])
length of key-value cache 21
shape of input_ids torch.Size([1, 1])
length of key-value cache 22
shape of input_ids torch.Size([1, 1])
length of key-value cache 23
shape of input_ids torch.Size([1, 1])
length of key-value cache 24
[' Here', ' is', ' a', ' Python', ' function']
```

As one can see, when using the key-value cache the text input tokens are *not* increased in length, but remain a single input vector. The length of the key-value cache, on the other hand, is increased by one at every decoding step.
> Making use of the key-value cache means that the \\( \mathbf{QK}^T \\) is essentially reduced to \\( \mathbf{q}_c\mathbf{K}^T \\) with \\( \mathbf{q}_c \\) being the query projection of the currently passed input token which is *always* just a single vector. Using the key-value cache has two advantages: - Significant increase in computational efficiency as less computations are performed compared to computing the full \\( \mathbf{QK}^T \\) matrix. This leads to an increase in inference speed - The maximum required memory is not increased quadratically with the number of generated tokens, but only increases linearly. > One should *always* make use of the key-value cache as it leads to identical results and a significant speed-up for longer input sequences. Transformers has the key-value cache enabled by default when making use of the text pipeline or the [`generate` method](https://huggingface.co/docs/transformers/main_classes/text_generation). <Tip warning={true}> Note that, despite our advice to use key-value caches, your LLM output may be slightly different when you use them. This is a property of the matrix multiplication kernels themselves -- you can read more about it [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535). </Tip> #### 3.2.1 Multi-round conversation The key-value cache is especially useful for applications such as chat where multiple passes of auto-regressive decoding are required. Let's look at an example. ``` User: How many people live in France? Assistant: Roughly 75 million people live in France User: And how many are in Germany? Assistant: Germany has ca. 81 million inhabitants ``` In this chat, the LLM runs auto-regressive decoding twice: 1. The first time, the key-value cache is empty and the input prompt is `"User: How many people live in France?"` and the model auto-regressively generates the text `"Roughly 75 million people live in France"` while increasing the key-value cache at every decoding step. 2. The second time the input prompt is `"User: How many people live in France? \n Assistant: Roughly 75 million people live in France \n User: And how many in Germany?"`. Thanks to the cache, all key-value vectors for the first two sentences are already computed. Therefore the input prompt only consists of `"User: And how many in Germany?"`. While processing the shortened input prompt, it's computed key-value vectors are concatenated to the key-value cache of the first decoding. The second Assistant's answer `"Germany has ca. 81 million inhabitants"` is then auto-regressively generated with the key-value cache consisting of encoded key-value vectors of `"User: How many people live in France? \n Assistant: Roughly 75 million people live in France \n User: And how many are in Germany?"`. Two things should be noted here: 1. Keeping all the context is crucial for LLMs deployed in chat so that the LLM understands all the previous context of the conversation. E.g. for the example above the LLM needs to understand that the user refers to the population when asking `"And how many are in Germany"`. 2. The key-value cache is extremely useful for chat as it allows us to continuously grow the encoded chat history instead of having to re-encode the chat history again from scratch (as e.g. would be the case when using an encoder-decoder architecture). In `transformers`, a `generate` call will return `past_key_values` when `return_dict_in_generate=True` is passed, in addition to the default `use_cache=True`. 
Note that it is not yet available through the `pipeline` interface.

```python
# Generation as usual
prompt = system_prompt + "Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer: Here"
model_inputs = tokenizer(prompt, return_tensors='pt')
generation_output = model.generate(**model_inputs, max_new_tokens=60, return_dict_in_generate=True)
decoded_output = tokenizer.batch_decode(generation_output.sequences)[0]

# Piping the returned `past_key_values` to speed up the next conversation round
prompt = decoded_output + "\nQuestion: How can I modify the function above to return Mega bytes instead?\n\nAnswer: Here"
model_inputs = tokenizer(prompt, return_tensors='pt')
generation_output = model.generate(
  **model_inputs,
  past_key_values=generation_output.past_key_values,
  max_new_tokens=60,
  return_dict_in_generate=True
)
tokenizer.batch_decode(generation_output.sequences)[0][len(prompt):]
```

**Output**:
```
 is a modified version of the function that returns Mega bytes instead.

def bytes_to_megabytes(bytes):
   return bytes / 1024 / 1024

Answer: The function takes a number of bytes as input and returns the number of
```

Great, no additional time is spent recomputing the same key and values for the attention layer! There is, however, one catch. While the required peak memory for the \\( \mathbf{QK}^T \\) matrix is significantly reduced, holding the key-value cache in memory can become very memory expensive for long input sequences or multi-turn chat. Remember that the key-value cache needs to store the key-value vectors for all previous input vectors \\( \mathbf{x}_i \text{, for } i \in \{1, \ldots, c - 1\} \\) for all self-attention layers and for all attention heads.

Let's compute the number of float values that need to be stored in the key-value cache for the LLM `bigcode/octocoder` that we used before. The number of float values amounts to two times the sequence length times the number of attention heads times the attention head dimension and times the number of layers. Computing this for our LLM at a hypothetical input sequence length of 16000 gives:

```python
config = model.config
2 * 16_000 * config.n_layer * config.n_head * config.n_embd // config.n_head
```

**Output**:
```
7864320000
```

Roughly 8 billion float values! Storing 8 billion float values in `float16` precision requires around 15 GB of RAM which is circa half as much as the model weights themselves! Researchers have proposed two methods that significantly reduce the memory cost of storing the key-value cache, which are explored in the next subsections.

#### 3.2.2 Multi-Query-Attention (MQA)

[Multi-Query-Attention](https://arxiv.org/abs/1911.02150) was proposed in Noam Shazeer's *Fast Transformer Decoding: One Write-Head is All You Need* paper. As the title says, Noam found out that instead of using `n_head` key-value projection weights, one can use a single key-value projection weight pair that is shared across all attention heads without the model's performance degrading significantly.

> By using a single key-value projection weight pair, the key-value vectors \\( \mathbf{k}_i, \mathbf{v}_i \\) have to be identical across all attention heads which in turn means that we only need to store 1 key-value projection pair in the cache instead of `n_head` ones.

As most LLMs use between 20 and 100 attention heads, MQA significantly reduces the memory consumption of the key-value cache.
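Reusing the back-of-the-envelope computation from above, the effect of sharing a single key-value projection can be estimated directly. This again assumes the hypothetical input sequence length of 16000 and reuses the `config` object from the previous snippet:

```python
head_dim = config.n_embd // config.n_head

# float values in the cache with one key-value pair per attention head (as computed above)
full_cache = 2 * 16_000 * config.n_layer * config.n_head * head_dim
# float values in the cache with a single shared key-value pair (MQA)
mqa_cache = 2 * 16_000 * config.n_layer * head_dim

print(full_cache / mqa_cache)  # the reduction factor equals the number of attention heads
```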
For the LLM used in this notebook we could therefore reduce the required memory consumption from 15 GB to less than 400 MB at an input sequence length of 16000.

In addition to memory savings, MQA also leads to improved computational efficiency as explained in the following. In auto-regressive decoding, large key-value vectors need to be reloaded, concatenated with the current key-value vector pair to be then fed into the \\( \mathbf{q}_c\mathbf{K}^T \\) computation at every step. For auto-regressive decoding, the required memory bandwidth for the constant reloading can become a serious time bottleneck. By reducing the size of the key-value vectors, less memory needs to be accessed, thus reducing the memory bandwidth bottleneck. For more detail, please have a look at [Noam's paper](https://arxiv.org/abs/1911.02150).

The important part to understand here is that reducing the number of key-value attention heads to 1 only makes sense if a key-value cache is used. The peak memory consumption of the model for a single forward pass without key-value cache stays unchanged as every attention head still has a unique query vector so that each attention head still has a different \\( \mathbf{QK}^T \\) matrix.

MQA has seen wide adoption by the community and is now used by many of the most popular LLMs:

- [**Falcon**](https://huggingface.co/tiiuae/falcon-40b)
- [**PaLM**](https://arxiv.org/abs/2204.02311)
- [**MPT**](https://huggingface.co/mosaicml/mpt-30b)
- [**BLOOM**](https://huggingface.co/bigscience/bloom)

Also, the checkpoint used in this notebook - `bigcode/octocoder` - makes use of MQA.

#### 3.2.3 Grouped-Query-Attention (GQA)

The authors of [Grouped-Query-Attention](https://arxiv.org/abs/2305.13245), Ainslie et al. from Google, found that using MQA can often lead to quality degradation compared to using vanilla multi-key-value head projections. The paper argues that more model performance can be kept by less drastically reducing the number of key-value head projection weights. Instead of using just a single key-value projection weight, `n < n_head` key-value projection weights should be used. By choosing `n` to be a significantly smaller value than `n_head`, such as 2, 4 or 8, almost all of the memory and speed gains from MQA can be kept while sacrificing less model capacity and thus arguably less performance.

Moreover, the authors of GQA found out that existing model checkpoints can be *uptrained* to have a GQA architecture with as little as 5% of the original pre-training compute. While 5% of the original pre-training compute can still be a massive amount, GQA *uptraining* allows existing checkpoints to be useful for longer input sequences.

GQA was only recently proposed which is why there is less adoption at the time of writing this notebook. The most notable application of GQA is [Llama-v2](https://huggingface.co/meta-llama/Llama-2-70b-hf).

> As a conclusion, it is strongly recommended to make use of either GQA or MQA if the LLM is deployed with auto-regressive decoding and is required to handle large input sequences as is the case for example for chat.

## Conclusion

The research community is constantly coming up with new, nifty ways to speed up inference time for ever-larger LLMs. As an example, one such promising research direction is [speculative decoding](https://arxiv.org/abs/2211.17192) where "easy tokens" are generated by smaller, faster language models and only "hard tokens" are generated by the LLM itself.
Going into more detail is out of the scope of this notebook, but you can read more about it in this [nice blog post](https://huggingface.co/blog/assisted-generation).

The reason massive LLMs such as GPT3/4, Llama-2-70b, Claude, PaLM can run so quickly in chat-interfaces such as [Hugging Face Chat](https://huggingface.co/chat/) or ChatGPT is in large part thanks to the above-mentioned improvements in precision, algorithms, and architecture. Going forward, accelerators such as GPUs, TPUs, etc. will only get faster and allow for more memory, but one should nevertheless always make sure to use the best available algorithms and architectures to get the most bang for your buck 🤗
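As a small teaser for the speculative decoding direction mentioned above: recent versions of 🤗 Transformers expose this idea as *assisted generation*, where a smaller draft model that shares the tokenizer of the large model proposes candidate tokens. The sketch below is only illustrative - the checkpoint pair is an assumption, and any large/small pair with a shared tokenizer can be used:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", torch_dtype=torch.bfloat16, device_map="auto")
# a much smaller checkpoint from the same family acts as the draft ("assistant") model
assistant = AutoModelForCausalLM.from_pretrained("bigcode/tiny_starcoder_py", torch_dtype=torch.bfloat16).to(model.device)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```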
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Processors Processors can mean two different things in the Transformers library: - the objects that pre-process inputs for multi-modal models such as [Wav2Vec2](../model_doc/wav2vec2) (speech and text) or [CLIP](../model_doc/clip) (text and vision) - deprecated objects that were used in older versions of the library to preprocess data for GLUE or SQUAD. ## Multi-modal processors Any multi-modal model will require an object to encode or decode the data that groups several modalities (among text, vision and audio). This is handled by objects called processors, which group together two or more processing objects such as tokenizers (for the text modality), image processors (for vision) and feature extractors (for audio). Those processors inherit from the following base class that implements the saving and loading functionality: [[autodoc]] ProcessorMixin ## Deprecated processors All processors follow the same architecture which is that of the [`~data.processors.utils.DataProcessor`]. The processor returns a list of [`~data.processors.utils.InputExample`]. These [`~data.processors.utils.InputExample`] can be converted to [`~data.processors.utils.InputFeatures`] in order to be fed to the model. [[autodoc]] data.processors.utils.DataProcessor [[autodoc]] data.processors.utils.InputExample [[autodoc]] data.processors.utils.InputFeatures ## GLUE [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com/) is a benchmark that evaluates the performance of models across a diverse set of existing NLU tasks. It was released together with the paper [GLUE: A multi-task benchmark and analysis platform for natural language understanding](https://openreview.net/pdf?id=rJ4km2R5t7) This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB, QQP, QNLI, RTE and WNLI. Those processors are: - [`~data.processors.utils.MrpcProcessor`] - [`~data.processors.utils.MnliProcessor`] - [`~data.processors.utils.MnliMismatchedProcessor`] - [`~data.processors.utils.Sst2Processor`] - [`~data.processors.utils.StsbProcessor`] - [`~data.processors.utils.QqpProcessor`] - [`~data.processors.utils.QnliProcessor`] - [`~data.processors.utils.RteProcessor`] - [`~data.processors.utils.WnliProcessor`] Additionally, the following method can be used to load values from a data file and convert them to a list of [`~data.processors.utils.InputExample`]. [[autodoc]] data.processors.glue.glue_convert_examples_to_features ## XNLI [The Cross-Lingual NLI Corpus (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) is a benchmark that evaluates the quality of cross-lingual text representations. 
XNLI is crowd-sourced dataset based on [*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/): pairs of text are labeled with textual entailment annotations for 15 different languages (including both high-resource language such as English and low-resource languages such as Swahili). It was released together with the paper [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053) This library hosts the processor to load the XNLI data: - [`~data.processors.utils.XnliProcessor`] Please note that since the gold labels are available on the test set, evaluation is performed on the test set. An example using these processors is given in the [run_xnli.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_xnli.py) script. ## SQuAD [The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer//) is a benchmark that evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version (v1.1) was released together with the paper [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250). The second version (v2.0) was released alongside the paper [Know What You Don't Know: Unanswerable Questions for SQuAD](https://arxiv.org/abs/1806.03822). This library hosts a processor for each of the two versions: ### Processors Those processors are: - [`~data.processors.utils.SquadV1Processor`] - [`~data.processors.utils.SquadV2Processor`] They both inherit from the abstract class [`~data.processors.utils.SquadProcessor`] [[autodoc]] data.processors.squad.SquadProcessor - all Additionally, the following method can be used to convert SQuAD examples into [`~data.processors.utils.SquadFeatures`] that can be used as model inputs. [[autodoc]] data.processors.squad.squad_convert_examples_to_features These processors as well as the aforementioned method can be used with files containing the data as well as with the *tensorflow_datasets* package. Examples are given below. ### Example usage Here is an example using the processors as well as the conversion method using data files: ```python # Loading a V2 processor processor = SquadV2Processor() examples = processor.get_dev_examples(squad_v2_data_dir) # Loading a V1 processor processor = SquadV1Processor() examples = processor.get_dev_examples(squad_v1_data_dir) features = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=max_seq_length, doc_stride=args.doc_stride, max_query_length=max_query_length, is_training=not evaluate, ) ``` Using *tensorflow_datasets* is as easy as using a data file: ```python # tensorflow_datasets only handle Squad V1. tfds_examples = tfds.load("squad") examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate) features = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=max_seq_length, doc_stride=args.doc_stride, max_query_length=max_query_length, is_training=not evaluate, ) ``` Another example using these processors is given in the [run_squad.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering/run_squad.py) script.
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BertGeneration ## Overview The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using [`EncoderDecoderModel`] as proposed in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. The abstract from the paper is the following: *Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation, Text Summarization, Sentence Splitting, and Sentence Fusion.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder). ## Usage examples and tips The model can be used in combination with the [`EncoderDecoderModel`] to leverage two pretrained BERT checkpoints for subsequent fine-tuning: ```python >>> # leverage checkpoints for Bert2Bert model... >>> # use BERT's cls token as BOS token and sep token as EOS token >>> encoder = BertGenerationEncoder.from_pretrained("google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102) >>> # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token >>> decoder = BertGenerationDecoder.from_pretrained( ... "google-bert/bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102 ... ) >>> bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder) >>> # create tokenizer... >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased") >>> input_ids = tokenizer( ... "This is a long article to summarize", add_special_tokens=False, return_tensors="pt" ... ).input_ids >>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids >>> # train... 
>>> loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
>>> loss.backward()
```

Pretrained [`EncoderDecoderModel`] checkpoints are also directly available in the model hub, e.g.:

```python
>>> # instantiate sentence fusion model
>>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

>>> input_ids = tokenizer(
...     "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
... ).input_ids

>>> outputs = sentence_fuser.generate(input_ids)

>>> print(tokenizer.decode(outputs[0]))
```

Tips:

- [`BertGenerationEncoder`] and [`BertGenerationDecoder`] should be used in combination with [`EncoderDecoderModel`].
- For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input. Therefore, no EOS token should be added to the end of the input.

## BertGenerationConfig

[[autodoc]] BertGenerationConfig

## BertGenerationTokenizer

[[autodoc]] BertGenerationTokenizer
    - save_vocabulary

## BertGenerationEncoder

[[autodoc]] BertGenerationEncoder
    - forward

## BertGenerationDecoder

[[autodoc]] BertGenerationDecoder
    - forward
transformers/docs/source/en/model_doc/bert-generation.md/0
{ "file_path": "transformers/docs/source/en/model_doc/bert-generation.md", "repo_id": "transformers", "token_count": 1326 }
220
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ByT5 ## Overview The ByT5 model was presented in [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. The abstract from the paper is the following: *Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/byt5). <Tip> ByT5's architecture is based on the T5v1.1 model, refer to [T5v1.1's documentation page](t5v1.1) for the API reference. They only differ in how inputs should be prepared for the model, see the code examples below. </Tip> Since ByT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. ## Usage example ByT5 works on raw UTF-8 bytes, so it can be used without a tokenizer: ```python >>> from transformers import T5ForConditionalGeneration >>> import torch >>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") >>> num_special_tokens = 3 >>> # Model has 3 special tokens which take up the input ids 0,1,2 of ByT5. >>> # => Need to shift utf-8 character encodings by 3 before passing ids to model. 
>>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens >>> labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens >>> loss = model(input_ids, labels=labels).loss >>> loss.item() 2.66 ``` For batched inference and training it is however recommended to make use of the tokenizer: ```python >>> from transformers import T5ForConditionalGeneration, AutoTokenizer >>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small") >>> model_inputs = tokenizer( ... ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt" ... ) >>> labels_dict = tokenizer( ... ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt" ... ) >>> labels = labels_dict.input_ids >>> loss = model(**model_inputs, labels=labels).loss >>> loss.item() 17.9 ``` Similar to [T5](t5), ByT5 was trained on the span-mask denoising task. However, since the model works directly on characters, the pretraining task is a bit different. Let's corrupt some characters of the input sentence `"The dog chases a ball in the park."` and ask ByT5 to predict them for us. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base") >>> input_ids_prompt = "The dog chases a ball in the park." >>> input_ids = tokenizer(input_ids_prompt).input_ids >>> # Note that we cannot add "{extra_id_...}" to the string directly >>> # as the Byte tokenizer would incorrectly merge the tokens >>> # For ByT5, we need to work directly on the character level >>> # Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead >>> # uses final utf character ids. >>> # UTF-8 is represented by 8 bits and ByT5 has 3 special tokens. >>> # => There are 2**8+2 = 259 input ids and mask tokens count down from index 258. >>> # => mask to "The dog [258]a ball [257]park." >>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]]) >>> input_ids tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]]) >>> # ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`. >>> output_ids = model.generate(input_ids, max_length=100)[0].tolist() >>> output_ids [0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49] >>> # ^- Note how 258 descends to 257, 256, 255 >>> # Now we need to split on the sentinel tokens, let's write a short loop for this >>> output_ids_list = [] >>> start_token = 0 >>> sentinel_token = 258 >>> while sentinel_token in output_ids: ... split_idx = output_ids.index(sentinel_token) ... output_ids_list.append(output_ids[start_token:split_idx]) ... start_token = split_idx ... 
sentinel_token -= 1 >>> output_ids_list.append(output_ids[start_token:]) >>> output_string = tokenizer.batch_decode(output_ids_list) >>> output_string ['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.'] ``` ## ByT5Tokenizer [[autodoc]] ByT5Tokenizer See [`ByT5Tokenizer`] for all details.
transformers/docs/source/en/model_doc/byt5.md/0
{ "file_path": "transformers/docs/source/en/model_doc/byt5.md", "repo_id": "transformers", "token_count": 2292 }
221
<!--Copyright 2022 The HuggingFace Team and The OpenBMB Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CPMAnt ## Overview CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. [See more](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live) This model was contributed by [OpenBMB](https://huggingface.co/openbmb). The original code can be found [here](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live). ## Resources - A tutorial on [CPM-Live](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live). ## CpmAntConfig [[autodoc]] CpmAntConfig - all ## CpmAntTokenizer [[autodoc]] CpmAntTokenizer - all ## CpmAntModel [[autodoc]] CpmAntModel - all ## CpmAntForCausalLM [[autodoc]] CpmAntForCausalLM - all
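Since no usage snippet is provided above, the following is a minimal text-generation sketch. The checkpoint id `openbmb/cpm-ant-10b` is assumed here; the full 10B-parameter model needs a correspondingly large amount of memory, and the compressed variants mentioned above are lighter alternatives:

```python
from transformers import CpmAntForCausalLM, CpmAntTokenizer

# assumed checkpoint id; smaller compressed variants may be preferable on limited hardware
checkpoint = "openbmb/cpm-ant-10b"
tokenizer = CpmAntTokenizer.from_pretrained(checkpoint)
model = CpmAntForCausalLM.from_pretrained(checkpoint)

# Chinese prompt: "The weather is really nice today,"
inputs = tokenizer("今天天气真不错,", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0]))
```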
transformers/docs/source/en/model_doc/cpmant.md/0
{ "file_path": "transformers/docs/source/en/model_doc/cpmant.md", "repo_id": "transformers", "token_count": 534 }
222
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DistilBERT <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=distilbert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-distilbert-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/distilbert-base-uncased"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> <a href="https://huggingface.co/papers/1910.01108"> <img alt="Paper page" src="https://img.shields.io/badge/Paper%20page-1910.01108-green"> </a> </div> ## Overview The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than *google-bert/bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark. The abstract from the paper is the following: *As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pretraining, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.* This model was contributed by [victorsanh](https://huggingface.co/victorsanh). This model jax version was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). ## Usage tips - DistilBERT doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. 
Just separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`). - DistilBERT doesn't have options to select the input positions (`position_ids` input). This could be added if necessary though, just let us know if you need this option. - Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. The actual objective is a combination of: * finding the same probabilities as the teacher model * predicting the masked tokens correctly (but no next-sentence objective) * a cosine similarity between the hidden states of the student and the teacher model ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/> - A blog post on [Getting Started with Sentiment Analysis using Python](https://huggingface.co/blog/sentiment-analysis-python) with DistilBERT. - A blog post on how to [train DistilBERT with Blurr for sequence classification](https://huggingface.co/blog/fastai). - A blog post on how to use [Ray to tune DistilBERT hyperparameters](https://huggingface.co/blog/ray-tune). - A blog post on how to [train DistilBERT with Hugging Face and Amazon SageMaker](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face). - A notebook on how to [finetune DistilBERT for multi-label classification](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb). 🌎 - A notebook on how to [finetune DistilBERT for multiclass classification with PyTorch](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb). 🌎 - A notebook on how to [finetune DistilBERT for text classification in TensorFlow](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb). 🌎 - [`DistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [`FlaxDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). 
- [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification"/> - [`DistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). - [`TFDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/> - [`DistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - [`DistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`DistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). 
- [`TFDistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - [Multiple choice task guide](../tasks/multiple_choice) ⚗️ Optimization - A blog post on how to [quantize DistilBERT with 🤗 Optimum and Intel](https://huggingface.co/blog/intel). - A blog post on how [Optimizing Transformers for GPUs with 🤗 Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum-gpu). - A blog post on [Optimizing Transformers with Hugging Face Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum). ⚡️ Inference - A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker) with DistilBERT. - A blog post on [Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker](https://www.philschmid.de/sagemaker-serverless-huggingface-distilbert). 🚀 Deploy - A blog post on how to [deploy DistilBERT on Google Cloud](https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds). - A blog post on how to [deploy DistilBERT with Amazon SageMaker](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker). - A blog post on how to [Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker). ## Combining DistilBERT and Flash Attention 2 First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature. ```bash pip install -U flash-attn --no-build-isolation ``` Make also sure that you have a hardware that is compatible with Flash-Attention 2. Read more about it in the official documentation of flash-attn repository. Make also sure to load your model in half-precision (e.g. `torch.float16`) To load and run a model using Flash Attention 2, refer to the snippet below: ```python >>> import torch >>> from transformers import AutoTokenizer, AutoModel >>> device = "cuda" # the device to load the model onto >>> tokenizer = AutoTokenizer.from_pretrained('distilbert/distilbert-base-uncased') >>> model = AutoModel.from_pretrained("distilbert/distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="flash_attention_2") >>> text = "Replace me by any text you'd like." 
>>> encoded_input = tokenizer(text, return_tensors='pt').to(device) >>> model.to(device) >>> output = model(**encoded_input) ``` ## DistilBertConfig [[autodoc]] DistilBertConfig ## DistilBertTokenizer [[autodoc]] DistilBertTokenizer ## DistilBertTokenizerFast [[autodoc]] DistilBertTokenizerFast <frameworkcontent> <pt> ## DistilBertModel [[autodoc]] DistilBertModel - forward ## DistilBertForMaskedLM [[autodoc]] DistilBertForMaskedLM - forward ## DistilBertForSequenceClassification [[autodoc]] DistilBertForSequenceClassification - forward ## DistilBertForMultipleChoice [[autodoc]] DistilBertForMultipleChoice - forward ## DistilBertForTokenClassification [[autodoc]] DistilBertForTokenClassification - forward ## DistilBertForQuestionAnswering [[autodoc]] DistilBertForQuestionAnswering - forward </pt> <tf> ## TFDistilBertModel [[autodoc]] TFDistilBertModel - call ## TFDistilBertForMaskedLM [[autodoc]] TFDistilBertForMaskedLM - call ## TFDistilBertForSequenceClassification [[autodoc]] TFDistilBertForSequenceClassification - call ## TFDistilBertForMultipleChoice [[autodoc]] TFDistilBertForMultipleChoice - call ## TFDistilBertForTokenClassification [[autodoc]] TFDistilBertForTokenClassification - call ## TFDistilBertForQuestionAnswering [[autodoc]] TFDistilBertForQuestionAnswering - call </tf> <jax> ## FlaxDistilBertModel [[autodoc]] FlaxDistilBertModel - __call__ ## FlaxDistilBertForMaskedLM [[autodoc]] FlaxDistilBertForMaskedLM - __call__ ## FlaxDistilBertForSequenceClassification [[autodoc]] FlaxDistilBertForSequenceClassification - __call__ ## FlaxDistilBertForMultipleChoice [[autodoc]] FlaxDistilBertForMultipleChoice - __call__ ## FlaxDistilBertForTokenClassification [[autodoc]] FlaxDistilBertForTokenClassification - __call__ ## FlaxDistilBertForQuestionAnswering [[autodoc]] FlaxDistilBertForQuestionAnswering - __call__ </jax> </frameworkcontent>
transformers/docs/source/en/model_doc/distilbert.md/0
{ "file_path": "transformers/docs/source/en/model_doc/distilbert.md", "repo_id": "transformers", "token_count": 4631 }
223
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # FLAN-UL2 ## Overview Flan-UL2 is an encoder-decoder model based on the T5 architecture. It uses the same configuration as the [UL2](ul2) model released in 2022, and was fine-tuned using the "Flan" prompt tuning and dataset collection. Similar to `Flan-T5`, one can directly use FLAN-UL2 weights without finetuning the model. According to the original blog post, these are the notable improvements: - The original UL2 model was only trained with a receptive field of 512, which made it non-ideal for N-shot prompting where N is large. - The Flan-UL2 checkpoint uses a receptive field of 2048, which makes it more usable for few-shot in-context learning. - The original UL2 model also had mode switch tokens that were required to get good performance. However, they were a little cumbersome, as they often required changes during inference or finetuning. In this update/change, we continue training UL2 20B for an additional 100k steps (with small batch) to forget “mode tokens” before applying Flan instruction tuning. This Flan-UL2 checkpoint does not require mode tokens anymore. Google has released the [google/flan-ul2](https://huggingface.co/google/flan-ul2) checkpoint; the original checkpoints can be found [here](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints). ## Running on low resource devices The model is pretty heavy (~40GB in half precision), so if you just want to run the model, make sure you load it in 8bit and use `device_map="auto"` to avoid running out of memory! ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", load_in_8bit=True, device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2") >>> inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['In a large skillet, brown the ground beef and onion over medium heat. Add the garlic'] ``` <Tip> Refer to [T5's documentation page](t5) for API reference, tips, code examples and notebooks. </Tip>
transformers/docs/source/en/model_doc/flan-ul2.md/0
{ "file_path": "transformers/docs/source/en/model_doc/flan-ul2.md", "repo_id": "transformers", "token_count": 785 }
224
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # GPT-NeoX-Japanese ## Overview We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of [https://github.com/EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox). Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts. To address this distinct structure of the Japanese language, we use a [special sub-word tokenizer](https://github.com/tanreinama/Japanese-BPEEncoder_V2). We are very grateful to *tanreinama* for open-sourcing this incredibly helpful tokenizer. Following the recommendations from Google's research on [PaLM](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), we have removed bias parameters from transformer blocks, achieving better model performance. Please refer [this article](https://medium.com/ml-abeja/training-a-better-gpt-2-93b157662ae4) in detail. Development of the model was led by [Shinya Otani](https://github.com/SO0529), [Takayoshi Makabe](https://github.com/spider-man-tm), [Anuj Arora](https://github.com/Anuj040), and [Kyo Hattori](https://github.com/go5paopao) from [ABEJA, Inc.](https://www.abejainc.com/). For more information on this model-building activity, please refer [here (ja)](https://tech-blog.abeja.asia/entry/abeja-gpt-project-202207). ### Usage example The `generate()` method can be used to generate text using GPT NeoX Japanese model. ```python >>> from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer >>> model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> prompt = "人とAIが協調するためには、" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0] >>> print(gen_text) 人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。 ``` ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## GPTNeoXJapaneseConfig [[autodoc]] GPTNeoXJapaneseConfig ## GPTNeoXJapaneseTokenizer [[autodoc]] GPTNeoXJapaneseTokenizer ## GPTNeoXJapaneseModel [[autodoc]] GPTNeoXJapaneseModel - forward ## GPTNeoXJapaneseForCausalLM [[autodoc]] GPTNeoXJapaneseForCausalLM - forward
transformers/docs/source/en/model_doc/gpt_neox_japanese.md/0
{ "file_path": "transformers/docs/source/en/model_doc/gpt_neox_japanese.md", "repo_id": "transformers", "token_count": 1075 }
225
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MarianMT <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=marian"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-marian-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/opus-mt-zh-en"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview A framework for translation models, using the same models as BART. Translations should be similar, but not identical to output in the test set linked to in each model card. This model was contributed by [sshleifer](https://huggingface.co/sshleifer). ## Implementation Notes - Each model is about 298 MB on disk, there are more than 1,000 models. - The list of supported language pairs can be found [here](https://huggingface.co/Helsinki-NLP). - Models were originally trained by [Jörg Tiedemann](https://researchportal.helsinki.fi/en/persons/j%C3%B6rg-tiedemann) using the [Marian](https://marian-nmt.github.io/) C++ library, which supports fast training and translation. - All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented in a model card. - The 80 opus models that require BPE preprocessing are not supported. - The modeling code is the same as [`BartForConditionalGeneration`] with a few minor modifications: - static (sinusoid) positional embeddings (`MarianConfig.static_position_embeddings=True`) - no layernorm_embedding (`MarianConfig.normalize_embedding=False`) - the model starts generating with `pad_token_id` (which has 0 as a token_embedding) as the prefix (Bart uses `<s/>`), - Code to bulk convert models can be found in `convert_marian_to_pytorch.py`. ## Naming - All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}` - The language codes used to name models are inconsistent. Two digit codes can usually be found [here](https://developers.google.com/admin-sdk/directory/v1/languages), three digit codes require googling "language code {code}". - Codes formatted like `es_AR` are usually `code_{region}`. That one is Spanish from Argentina. - The models were converted in two stages. The first 1000 models use ISO-639-2 codes to identify languages, the second group use a combination of ISO-639-5 codes and ISO-639-2 codes. ## Examples - Since Marian models are smaller than many other translation models available in the library, they can be useful for fine-tuning experiments and integration tests. 
- [Fine-tune on GPU](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/train_distil_marian_enro.sh) ## Multilingual Models - All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}`: - If a model can output multiple languages, and you should specify a language code by prepending the desired output language to the `src_text`. - You can see a models's supported language codes in its model card, under target constituents, like in [opus-mt-en-roa](https://huggingface.co/Helsinki-NLP/opus-mt-en-roa). - Note that if a model is only multilingual on the source side, like `Helsinki-NLP/opus-mt-roa-en`, no language codes are required. New multi-lingual models from the [Tatoeba-Challenge repo](https://github.com/Helsinki-NLP/Tatoeba-Challenge) require 3 character language codes: ```python >>> from transformers import MarianMTModel, MarianTokenizer >>> src_text = [ ... ">>fra<< this is a sentence in english that we want to translate to french", ... ">>por<< This should go to portuguese", ... ">>esp<< And this to Spanish", ... ] >>> model_name = "Helsinki-NLP/opus-mt-en-roa" >>> tokenizer = MarianTokenizer.from_pretrained(model_name) >>> print(tokenizer.supported_language_codes) ['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<'] >>> model = MarianMTModel.from_pretrained(model_name) >>> translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) >>> [tokenizer.decode(t, skip_special_tokens=True) for t in translated] ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español'] ``` Here is the code to see all available pretrained models on the hub: ```python from huggingface_hub import list_models model_list = list_models() org = "Helsinki-NLP" model_ids = [x.modelId for x in model_list if x.modelId.startswith(org)] suffix = [x.split("/")[1] for x in model_ids] old_style_multi_models = [f"{org}/{s}" for s in suffix if s != s.lower()] ``` ## Old Style Multi-Lingual Models These are the old style multi-lingual models ported from the OPUS-MT-Train repo: and the members of each language group: ```python no-style ['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU', 'Helsinki-NLP/opus-mt-ROMANCE-en', 'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA', 'Helsinki-NLP/opus-mt-de-ZH', 'Helsinki-NLP/opus-mt-en-CELTIC', 'Helsinki-NLP/opus-mt-en-ROMANCE', 'Helsinki-NLP/opus-mt-es-NORWAY', 'Helsinki-NLP/opus-mt-fi-NORWAY', 'Helsinki-NLP/opus-mt-fi-ZH', 'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI', 'Helsinki-NLP/opus-mt-sv-NORWAY', 'Helsinki-NLP/opus-mt-sv-ZH'] GROUP_MEMBERS = { 'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'], 'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'], 'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'], 'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'], 'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'], 'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'], 'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv'] 
} ``` Example of translating english to many romance languages, using old-style 2 character language codes ```python >>> from transformers import MarianMTModel, MarianTokenizer >>> src_text = [ ... ">>fr<< this is a sentence in english that we want to translate to french", ... ">>pt<< This should go to portuguese", ... ">>es<< And this to Spanish", ... ] >>> model_name = "Helsinki-NLP/opus-mt-en-ROMANCE" >>> tokenizer = MarianTokenizer.from_pretrained(model_name) >>> model = MarianMTModel.from_pretrained(model_name) >>> translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) >>> tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated] ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español'] ``` ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) - [Causal language modeling task guide](../tasks/language_modeling) ## MarianConfig [[autodoc]] MarianConfig ## MarianTokenizer [[autodoc]] MarianTokenizer - build_inputs_with_special_tokens <frameworkcontent> <pt> ## MarianModel [[autodoc]] MarianModel - forward ## MarianMTModel [[autodoc]] MarianMTModel - forward ## MarianForCausalLM [[autodoc]] MarianForCausalLM - forward </pt> <tf> ## TFMarianModel [[autodoc]] TFMarianModel - call ## TFMarianMTModel [[autodoc]] TFMarianMTModel - call </tf> <jax> ## FlaxMarianModel [[autodoc]] FlaxMarianModel - __call__ ## FlaxMarianMTModel [[autodoc]] FlaxMarianMTModel - __call__ </jax> </frameworkcontent>
transformers/docs/source/en/model_doc/marian.md/0
{ "file_path": "transformers/docs/source/en/model_doc/marian.md", "repo_id": "transformers", "token_count": 3064 }
226
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MobileNet V1 ## Overview The MobileNet model was proposed in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. The abstract from the paper is the following: *We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.* This model was contributed by [matthijs](https://huggingface.co/Matthijs). The original code and weights can be found [here](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md). ## Usage tips - The checkpoints are named **mobilenet\_v1\_*depth*\_*size***, for example **mobilenet\_v1\_1.0\_224**, where **1.0** is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and **224** is the resolution of the input images the model was trained on. - Even though the checkpoint is trained on images of specific size, the model will work on images of any size. The smallest supported image size is 32x32. - One can use [`MobileNetV1ImageProcessor`] to prepare images for the model. - The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0). - The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [`MobileNetV1Config`] with `tf_padding = False`. Unsupported features: - The [`MobileNetV1Model`] outputs a globally pooled version of the last hidden state. In the original model it is possible to use a 7x7 average pooling layer with stride 2 instead of global pooling. 
For larger inputs, this gives a pooled output that is larger than 1x1 pixel. The HuggingFace implementation does not support this. - It is currently not possible to specify an `output_stride`. For smaller output strides, the original model invokes dilated convolution to prevent the spatial resolution from being reduced further. The output stride of the HuggingFace model is always 32. - The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights. - It's common to extract the output from the pointwise layers at indices 5, 11, 12, 13 for downstream purposes. Using `output_hidden_states=True` returns the output from all intermediate layers. There is currently no way to limit this to specific layers. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV1. <PipelineTag pipeline="image-classification"/> - [`MobileNetV1ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## MobileNetV1Config [[autodoc]] MobileNetV1Config ## MobileNetV1FeatureExtractor [[autodoc]] MobileNetV1FeatureExtractor - preprocess ## MobileNetV1ImageProcessor [[autodoc]] MobileNetV1ImageProcessor - preprocess ## MobileNetV1Model [[autodoc]] MobileNetV1Model - forward ## MobileNetV1ForImageClassification [[autodoc]] MobileNetV1ForImageClassification - forward
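As a quick illustration of the image-classification checkpoints described above, the following sketch classifies a sample image. The checkpoint id `google/mobilenet_v1_1.0_224` follows the naming scheme from the usage tips and, together with the example image URL, is an assumption used here for demonstration:

```python
import requests
from PIL import Image
from transformers import MobileNetV1ImageProcessor, MobileNetV1ForImageClassification

# assumed checkpoint id, following the mobilenet_v1_<depth>_<size> naming scheme
checkpoint = "google/mobilenet_v1_1.0_224"
image_processor = MobileNetV1ImageProcessor.from_pretrained(checkpoint)
model = MobileNetV1ForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# remember: the model predicts 1001 classes, index 0 being the extra "background" class
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```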
transformers/docs/source/en/model_doc/mobilenet_v1.md/0
{ "file_path": "transformers/docs/source/en/model_doc/mobilenet_v1.md", "repo_id": "transformers", "token_count": 1403 }
227
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Nyströmformer ## Overview The Nyströmformer model was proposed in [*Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention*](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. The abstract from the paper is the following: *Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences -- a topic being actively studied in the community. To address this limitation, we propose Nyströmformer -- a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at this https URL.* This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/Nystromformer). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## NystromformerConfig [[autodoc]] NystromformerConfig ## NystromformerModel [[autodoc]] NystromformerModel - forward ## NystromformerForMaskedLM [[autodoc]] NystromformerForMaskedLM - forward ## NystromformerForSequenceClassification [[autodoc]] NystromformerForSequenceClassification - forward ## NystromformerForMultipleChoice [[autodoc]] NystromformerForMultipleChoice - forward ## NystromformerForTokenClassification [[autodoc]] NystromformerForTokenClassification - forward ## NystromformerForQuestionAnswering [[autodoc]] NystromformerForQuestionAnswering - forward
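The page above lists the supported heads but no inline snippet, so here is a minimal masked language modeling sketch. The checkpoint id `uw-madison/nystromformer-512` is an assumption (512 presumably refers to the sequence length used during pretraining):

```python
import torch
from transformers import AutoTokenizer, NystromformerForMaskedLM

# assumed checkpoint id
checkpoint = "uw-madison/nystromformer-512"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = NystromformerForMaskedLM.from_pretrained(checkpoint)

text = f"Paris is the {tokenizer.mask_token} of France."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# pick the highest-scoring token at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```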
transformers/docs/source/en/model_doc/nystromformer.md/0
{ "file_path": "transformers/docs/source/en/model_doc/nystromformer.md", "repo_id": "transformers", "token_count": 907 }
228
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # PLBart ## Overview The PLBART model was proposed in [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. This is a BART-like model which can be used to perform code-summarization, code-generation, and code-translation tasks. The pre-trained model `plbart-base` has been trained using multilingual denoising task on Java, Python and English. According to the abstract *Code summarization and generation empower conversion between programming language (PL) and natural language (NL), while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks. PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding. Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding. Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow (e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels even with limited annotations.* This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The Authors' code can be found [here](https://github.com/wasiahmad/PLBART). ## Usage examples PLBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, code-to-code tasks. As the model is multilingual it expects the sequences in a different format. A special language id token is added in both the source and target text. The source text format is `X [eos, src_lang_code]` where `X` is the source text. The target text format is `[tgt_lang_code] X [eos]`. `bos` is never used. However, for fine-tuning, in some cases no language token is provided in cases where a single language is used. Please refer to [the paper](https://arxiv.org/abs/2103.06333) to learn more about this. In cases where the language code is needed, the regular [`~PLBartTokenizer.__call__`] will encode source text format when you pass texts as the first argument or with the keyword argument `text`, and will encode target text format if it's passed with the `text_target` keyword argument. 
### Supervised training ```python >>> from transformers import PLBartForConditionalGeneration, PLBartTokenizer >>> tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="en_XX", tgt_lang="python") >>> example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])" >>> expected_translation_english = "Returns the maximum value of a b c." >>> inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt") >>> model(**inputs) ``` ### Generation While generating the target text set the `decoder_start_token_id` to the target language id. The following example shows how to translate Python to English using the `uclanlp/plbart-python-en_XX` model. ```python >>> from transformers import PLBartForConditionalGeneration, PLBartTokenizer >>> tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX") >>> example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])" >>> inputs = tokenizer(example_python_phrase, return_tensors="pt") >>> model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-python-en_XX") >>> translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"]) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] "Returns the maximum value of a b c." ``` ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## PLBartConfig [[autodoc]] PLBartConfig ## PLBartTokenizer [[autodoc]] PLBartTokenizer - build_inputs_with_special_tokens ## PLBartModel [[autodoc]] PLBartModel - forward ## PLBartForConditionalGeneration [[autodoc]] PLBartForConditionalGeneration - forward ## PLBartForSequenceClassification [[autodoc]] PLBartForSequenceClassification - forward ## PLBartForCausalLM [[autodoc]] PLBartForCausalLM - forward
transformers/docs/source/en/model_doc/plbart.md/0
{ "file_path": "transformers/docs/source/en/model_doc/plbart.md", "repo_id": "transformers", "token_count": 1586 }
229
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # RoBERTa <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=roberta"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-roberta-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/roberta-base"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> <a href="https://huggingface.co/papers/1907.11692"> <img alt="Paper page" src="https://img.shields.io/badge/Paper%20page-1907.11692-green"> </a> </div> ## Overview The RoBERTa model was proposed in [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, [Myle Ott](https://huggingface.co/myleott), Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google's BERT model released in 2018. It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates. The abstract from the paper is the following: *Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.* This model was contributed by [julien-c](https://huggingface.co/julien-c). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/roberta). ## Usage tips - This implementation is the same as [`BertModel`] with a tiny embeddings tweak as well as a setup for Roberta pretrained models. - RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a different pretraining scheme. - RoBERTa doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. 
Just separate your segments with the separation token `tokenizer.sep_token` (or `</s>`).
- Same as BERT with better pretraining tricks:

    * dynamic masking: tokens are masked differently at each epoch, whereas BERT does it once and for all
    * sentences are packed together to reach 512 tokens (so the sentences are in an order that may span several documents)
    * train with larger batches
    * use BPE with bytes as a subunit and not characters (because of unicode characters)
- [CamemBERT](camembert) is a wrapper around RoBERTa. Refer to this page for usage examples.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="text-classification"/>

- A blog on [Getting Started with Sentiment Analysis on Twitter](https://huggingface.co/blog/sentiment-analysis-twitter) using RoBERTa and the [Inference API](https://huggingface.co/inference-api).
- A blog on [Opinion Classification with Kili and Hugging Face AutoTrain](https://huggingface.co/blog/opinion-classification-with-kili) using RoBERTa.
- A notebook on how to [finetune RoBERTa for sentiment analysis](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb). 🌎
- [`RobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
- [`TFRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
- [`FlaxRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
- [Text classification task guide](../tasks/sequence_classification)

<PipelineTag pipeline="token-classification"/>

- [`RobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
- [`TFRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
- [`FlaxRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/> - A blog on [How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train) with RoBERTa. - [`RobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - A blog on [Accelerated Inference with Optimum and Transformers Pipelines](https://huggingface.co/blog/optimum-inference) with RoBERTa for question answering. - [`RobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`RobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). 
- [Multiple choice task guide](../tasks/multiple_choice) ## RobertaConfig [[autodoc]] RobertaConfig ## RobertaTokenizer [[autodoc]] RobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## RobertaTokenizerFast [[autodoc]] RobertaTokenizerFast - build_inputs_with_special_tokens <frameworkcontent> <pt> ## RobertaModel [[autodoc]] RobertaModel - forward ## RobertaForCausalLM [[autodoc]] RobertaForCausalLM - forward ## RobertaForMaskedLM [[autodoc]] RobertaForMaskedLM - forward ## RobertaForSequenceClassification [[autodoc]] RobertaForSequenceClassification - forward ## RobertaForMultipleChoice [[autodoc]] RobertaForMultipleChoice - forward ## RobertaForTokenClassification [[autodoc]] RobertaForTokenClassification - forward ## RobertaForQuestionAnswering [[autodoc]] RobertaForQuestionAnswering - forward </pt> <tf> ## TFRobertaModel [[autodoc]] TFRobertaModel - call ## TFRobertaForCausalLM [[autodoc]] TFRobertaForCausalLM - call ## TFRobertaForMaskedLM [[autodoc]] TFRobertaForMaskedLM - call ## TFRobertaForSequenceClassification [[autodoc]] TFRobertaForSequenceClassification - call ## TFRobertaForMultipleChoice [[autodoc]] TFRobertaForMultipleChoice - call ## TFRobertaForTokenClassification [[autodoc]] TFRobertaForTokenClassification - call ## TFRobertaForQuestionAnswering [[autodoc]] TFRobertaForQuestionAnswering - call </tf> <jax> ## FlaxRobertaModel [[autodoc]] FlaxRobertaModel - __call__ ## FlaxRobertaForCausalLM [[autodoc]] FlaxRobertaForCausalLM - __call__ ## FlaxRobertaForMaskedLM [[autodoc]] FlaxRobertaForMaskedLM - __call__ ## FlaxRobertaForSequenceClassification [[autodoc]] FlaxRobertaForSequenceClassification - __call__ ## FlaxRobertaForMultipleChoice [[autodoc]] FlaxRobertaForMultipleChoice - __call__ ## FlaxRobertaForTokenClassification [[autodoc]] FlaxRobertaForTokenClassification - __call__ ## FlaxRobertaForQuestionAnswering [[autodoc]] FlaxRobertaForQuestionAnswering - __call__ </jax> </frameworkcontent>
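As a quick illustration of the usage tips above (a small sketch; `roberta-base` is simply the canonical checkpoint), you can check how sentence pairs are packed and query the masked language modeling head directly. Note that the mask token is `<mask>`, not BERT's `[MASK]`:

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Sentence pairs are simply concatenated as <s> A </s></s> B </s>; there is no
# segment embedding to configure.
encoded = tokenizer("A first segment.", "A second segment.")
print(tokenizer.decode(encoded["input_ids"]))

# RoBERTa is pretrained with masked language modeling only, so fill-mask works out of the box.
unmasker = pipeline("fill-mask", model="roberta-base")
print(unmasker("The goal of life is <mask>.")[0]["sequence"])
```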
transformers/docs/source/en/model_doc/roberta.md/0
{ "file_path": "transformers/docs/source/en/model_doc/roberta.md", "repo_id": "transformers", "token_count": 3783 }
230
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Splinter ## Overview The Splinter model was proposed in [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus comprising Wikipedia and the Toronto Book Corpus. The abstract from the paper is the following: In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question answering: recurring span selection. Given a passage with multiple sets of recurring spans, we mask in each set all recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD with only 128 training examples), while maintaining competitive performance in the high-resource setting. This model was contributed by [yuvalkirstain](https://huggingface.co/yuvalkirstain) and [oriram](https://huggingface.co/oriram). The original code can be found [here](https://github.com/oriram/splinter). ## Usage tips - Splinter was trained to predict answers spans conditioned on a special [QUESTION] token. These tokens contextualize to question representations which are used to predict the answers. This layer is called QASS, and is the default behaviour in the [`SplinterForQuestionAnswering`] class. Therefore: - Use [`SplinterTokenizer`] (rather than [`BertTokenizer`]), as it already contains this special token. Also, its default behavior is to use this token when two sequences are given (for example, in the *run_qa.py* script). - If you plan on using Splinter outside *run_qa.py*, please keep in mind the question token - it might be important for the success of your model, especially in a few-shot setting. - Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that one also has the pretrained weights of the QASS layer (*tau/splinter-base-qass* and *tau/splinter-large-qass*) and one doesn't (*tau/splinter-base* and *tau/splinter-large*). 
This is done to support randomly initializing this layer at fine-tuning, as it is shown to yield better results for some cases in the paper.

## Resources

- [Question answering task guide](../tasks/question_answering)

## SplinterConfig

[[autodoc]] SplinterConfig

## SplinterTokenizer

[[autodoc]] SplinterTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## SplinterTokenizerFast

[[autodoc]] SplinterTokenizerFast

## SplinterModel

[[autodoc]] SplinterModel
    - forward

## SplinterForQuestionAnswering

[[autodoc]] SplinterForQuestionAnswering
    - forward

## SplinterForPreTraining

[[autodoc]] SplinterForPreTraining
    - forward
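A minimal extractive question answering sketch with the QASS head is shown below. The question, context, and greedy span decoding are illustrative only (not an official recipe), but the calls mirror the standard question answering pattern:

```python
import torch
from transformers import SplinterForQuestionAnswering, SplinterTokenizer

tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base-qass")

question = "Who developed the theory of relativity?"
context = "The theory of relativity was developed by Albert Einstein in the early 20th century."

# Passing two sequences lets the tokenizer insert the special [QUESTION] token for us.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: take the most likely start and end positions.
start = torch.argmax(outputs.start_logits, dim=-1).item()
end = torch.argmax(outputs.end_logits, dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```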
transformers/docs/source/en/model_doc/splinter.md/0
{ "file_path": "transformers/docs/source/en/model_doc/splinter.md", "repo_id": "transformers", "token_count": 1101 }
231
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # WavLM ## Overview The WavLM model was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. The abstract from the paper is the following: *Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisedly and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.* Relevant checkpoints can be found under https://huggingface.co/models?other=wavlm. This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be found [here](https://github.com/microsoft/unilm/tree/master/wavlm). ## Usage tips - WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction. - WavLM model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. - WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks. ## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## WavLMConfig [[autodoc]] WavLMConfig ## WavLMModel [[autodoc]] WavLMModel - forward ## WavLMForCTC [[autodoc]] WavLMForCTC - forward ## WavLMForSequenceClassification [[autodoc]] WavLMForSequenceClassification - forward ## WavLMForAudioFrameClassification [[autodoc]] WavLMForAudioFrameClassification - forward ## WavLMForXVector [[autodoc]] WavLMForXVector - forward
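To make the usage tips above concrete, here is a short transcription sketch. The checkpoint name is one example of a WavLM model fine-tuned with CTC (substitute your own), and the dummy LibriSpeech split is only used to grab a 16 kHz sample:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, WavLMForCTC

# Example checkpoint fine-tuned with CTC; any CTC-fine-tuned WavLM model works here.
checkpoint = "patrickvonplaten/wavlm-libri-clean-100h-base-plus"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = WavLMForCTC.from_pretrained(checkpoint)

# Load a single 16 kHz audio sample.
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: greedy argmax over the vocabulary, then collapse with the tokenizer.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```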
transformers/docs/source/en/model_doc/wavlm.md/0
{ "file_path": "transformers/docs/source/en/model_doc/wavlm.md", "repo_id": "transformers", "token_count": 972 }
232
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Share a model The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to consider sharing your model with the community to help others save time and resources. In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models): - Programmatically push your files to the Hub. - Drag-and-drop your files to the Hub with the web interface. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one. </Tip> ## Repository features Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences. The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag or branch. As a result, you can load a specific model version with the `revision` parameter: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # tag name, or branch name, or commit hash ... ) ``` Files are also easily edited in a repository, and you can view the commit history as well as the difference: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## Setup Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default): ```bash huggingface-cli login ``` If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to programmatically interact with the Hub. 
```bash pip install huggingface_hub ``` Then use `notebook_login` to sign-in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to login with: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Convert a model for all frameworks To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users are still able to load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly. Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework. <frameworkcontent> <pt> Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True) >>> pt_model.save_pretrained("path/to/awesome-name-you-picked") ``` </pt> <tf> Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True) ``` Then you can save your new TensorFlow model with its new checkpoint: ```py >>> tf_model.save_pretrained("path/to/awesome-name-you-picked") ``` </tf> <jax> If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked", from_pt=True ... ) ``` </jax> </frameworkcontent> ## Push a model during training <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]: ```py >>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True) ``` Pass your training arguments as usual to [`Trainer`]: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results and framework versions to your model card! ```py >>> trainer.push_to_hub() ``` </pt> <tf> Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add: - An output directory for your model. - A tokenizer. - The `hub_model_id`, which is your Hub username and model name. ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model" ... 
) ``` Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and 🤗 Transformers will push the trained model to the Hub: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## Use the `push_to_hub` function You can also call `push_to_hub` directly on your model to upload it to the Hub. Specify your model name in `push_to_hub`: ```py >>> pt_model.push_to_hub("my-awesome-model") ``` This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("your_username/my-awesome-model") ``` If you belong to an organization and want to push your model under the organization name instead, just add it to the `repo_id`: ```py >>> pt_model.push_to_hub("my-awesome-org/my-awesome-model") ``` The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository: ```py >>> tokenizer.push_to_hub("my-awesome-model") ``` Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model: ```py >>> tf_model.push_to_hub("my-awesome-model") ``` Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository. For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream). ## Upload with the web interface Users who prefer a no-code approach are able to upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) From here, add some information about your model: - Select the **owner** of the repository. This can be yourself or any of the organizations you belong to. - Pick a name for your model, which will also be the repository name. - Choose whether your model is public or private. - Specify the license usage for your model. Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message. ![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## Add a model card To make sure users understand your model's capabilities, limitations, potential biases and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by: * Manually creating and uploading a `README.md` file. * Clicking on the **Edit model card** button in your model repository. Take a look at the DistilBert [model card](https://huggingface.co/distilbert/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can control in the `README.md` file such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards).
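If you prefer to script this step too, a model card is just the `README.md` file of the repository, so you can upload one programmatically with `huggingface_hub`. Here is a minimal sketch (the repository id and the card text are placeholders):

```py
from huggingface_hub import HfApi

card_text = """---
license: apache-2.0
---

# My awesome model

Fine-tuned DistilBERT for sentiment analysis. Describe intended uses, limitations and evaluation results here.
"""

with open("README.md", "w") as f:
    f.write(card_text)

# Upload the card to an existing repository you have write access to.
HfApi().upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="your_username/my-awesome-model",  # placeholder repository id
)
```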
transformers/docs/source/en/model_sharing.md/0
{ "file_path": "transformers/docs/source/en/model_sharing.md", "repo_id": "transformers", "token_count": 2967 }
233
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Performance and Scalability Training large transformer models and deploying them to production present various challenges. During training, the model may require more GPU memory than available or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production environment. This documentation aims to assist you in overcoming these challenges and finding the optimal setting for your use-case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you'll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training or CPU vs. GPU for inference. Use this document as your starting point to navigate further to the methods that match your scenario. ## Training Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU. The methods that you can apply to improve training efficiency on a single GPU extend to other setups such as multiple GPU. However, there are also techniques that are specific to multi-GPU or CPU training. We cover them in separate sections. * [Methods and tools for efficient training on a single GPU](perf_train_gpu_one): start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, or both. * [Multi-GPU training section](perf_train_gpu_many): explore this section to learn about further optimization methods that apply to a multi-GPU settings, such as data, tensor, and pipeline parallelism. * [CPU training section](perf_train_cpu): learn about mixed precision training on CPU. * [Efficient Training on Multiple CPUs](perf_train_cpu_many): learn about distributed CPU training. * [Training on TPU with TensorFlow](perf_train_tpu_tf): if you are new to TPUs, refer to this section for an opinionated introduction to training on TPUs and using XLA. * [Custom hardware for training](perf_hardware): find tips and tricks when building your own deep learning rig. * [Hyperparameter Search using Trainer API](hpo_train) ## Inference Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups. * [Inference on a single CPU](perf_infer_cpu) * [Inference on a single GPU](perf_infer_gpu_one) * [Multi-GPU inference](perf_infer_gpu_one) * [XLA Integration for TensorFlow Models](tf_xla) ## Training and inference Here you'll find techniques, tips and tricks that apply whether you are training a model, or running inference with it. 
* [Instantiating a big model](big_models)
* [Troubleshooting performance issues](debugging)

## Contribute

This document is far from complete, and a lot more needs to be added. If you have additions or corrections to make, please don't hesitate to open a PR; if you aren't sure, start an Issue and we can discuss the details there.

When contributing claims that A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you).
transformers/docs/source/en/performance.md/0
{ "file_path": "transformers/docs/source/en/performance.md", "repo_id": "transformers", "token_count": 966 }
234
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Image tasks with IDEFICS [[open-in-colab]] While individual tasks can be tackled by fine-tuning specialized models, an alternative approach that has recently emerged and gained popularity is to use large models for a diverse set of tasks without fine-tuning. For instance, large language models can handle such NLP tasks as summarization, translation, classification, and more. This approach is no longer limited to a single modality, such as text, and in this guide, we will illustrate how you can solve image-text tasks with a large multimodal model called IDEFICS. [IDEFICS](../model_doc/idefics) is an open-access vision and language model based on [Flamingo](https://huggingface.co/papers/2204.14198), a state-of-the-art visual language model initially developed by DeepMind. The model accepts arbitrary sequences of image and text inputs and generates coherent text as output. It can answer questions about images, describe visual content, create stories grounded in multiple images, and so on. IDEFICS comes in two variants - [80 billion parameters](https://huggingface.co/HuggingFaceM4/idefics-80b) and [9 billion parameters](https://huggingface.co/HuggingFaceM4/idefics-9b), both of which are available on the 🤗 Hub. For each variant, you can also find fine-tuned instructed versions of the model adapted for conversational use cases. This model is exceptionally versatile and can be used for a wide range of image and multimodal tasks. However, being a large model means it requires significant computational resources and infrastructure. It is up to you to decide whether this approach suits your use case better than fine-tuning specialized models for each individual task. In this guide, you'll learn how to: - [Load IDEFICS](#loading-the-model) and [load the quantized version of the model](#quantized-model) - Use IDEFICS for: - [Image captioning](#image-captioning) - [Prompted image captioning](#prompted-image-captioning) - [Few-shot prompting](#few-shot-prompting) - [Visual question answering](#visual-question-answering) - [Image classification](#image-classification) - [Image-guided text generation](#image-guided-text-generation) - [Run inference in batch mode](#running-inference-in-batch-mode) - [Run IDEFICS instruct for conversational use](#idefics-instruct-for-conversational-use) Before you begin, make sure you have all the necessary libraries installed. ```bash pip install -q bitsandbytes sentencepiece accelerate transformers ``` <Tip> To run the following examples with a non-quantized version of the model checkpoint you will need at least 20GB of GPU memory. 
</Tip> ## Loading the model Let's start by loading the model's 9 billion parameters checkpoint: ```py >>> checkpoint = "HuggingFaceM4/idefics-9b" ``` Just like for other Transformers models, you need to load a processor and the model itself from the checkpoint. The IDEFICS processor wraps a [`LlamaTokenizer`] and IDEFICS image processor into a single processor to take care of preparing text and image inputs for the model. ```py >>> import torch >>> from transformers import IdeficsForVisionText2Text, AutoProcessor >>> processor = AutoProcessor.from_pretrained(checkpoint) >>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto") ``` Setting `device_map` to `"auto"` will automatically determine how to load and store the model weights in the most optimized manner given existing devices. ### Quantized model If high-memory GPU availability is an issue, you can load the quantized version of the model. To load the model and the processor in 4bit precision, pass a `BitsAndBytesConfig` to the `from_pretrained` method and the model will be compressed on the fly while loading. ```py >>> import torch >>> from transformers import IdeficsForVisionText2Text, AutoProcessor, BitsAndBytesConfig >>> quantization_config = BitsAndBytesConfig( ... load_in_4bit=True, ... bnb_4bit_compute_dtype=torch.float16, ... ) >>> processor = AutoProcessor.from_pretrained(checkpoint) >>> model = IdeficsForVisionText2Text.from_pretrained( ... checkpoint, ... quantization_config=quantization_config, ... device_map="auto" ... ) ``` Now that you have the model loaded in one of the suggested ways, let's move on to exploring tasks that you can use IDEFICS for. ## Image captioning Image captioning is the task of predicting a caption for a given image. A common application is to aid visually impaired people navigate through different situations, for instance, explore image content online. To illustrate the task, get an image to be captioned, e.g.: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-im-captioning.jpg" alt="Image of a puppy in a flower bed"/> </div> Photo by [Hendo Wang](https://unsplash.com/@hendoo). IDEFICS accepts text and image prompts. However, to caption an image, you do not have to provide a text prompt to the model, only the preprocessed input image. Without a text prompt, the model will start generating text from the BOS (beginning-of-sequence) token thus creating a caption. As image input to the model, you can use either an image object (`PIL.Image`) or a url from which the image can be retrieved. ```py >>> prompt = [ ... "https://images.unsplash.com/photo-1583160247711-2191776b4b91?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3542&q=80", ... 
] >>> inputs = processor(prompt, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) A puppy in a flower bed ``` <Tip> It is a good idea to include the `bad_words_ids` in the call to `generate` to avoid errors arising when increasing the `max_new_tokens`: the model will want to generate a new `<image>` or `<fake_token_around_image>` token when there is no image being generated by the model. You can set it on-the-fly as in this guide, or store in the `GenerationConfig` as described in the [Text generation strategies](../generation_strategies) guide. </Tip> ## Prompted image captioning You can extend image captioning by providing a text prompt, which the model will continue given the image. Let's take another image to illustrate: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-prompted-im-captioning.jpg" alt="Image of the Eiffel Tower at night"/> </div> Photo by [Denys Nevozhai](https://unsplash.com/@dnevozhai). Textual and image prompts can be passed to the model's processor as a single list to create appropriate inputs. ```py >>> prompt = [ ... "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80", ... "This is an image of ", ... ] >>> inputs = processor(prompt, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) This is an image of the Eiffel Tower in Paris, France. ``` ## Few-shot prompting While IDEFICS demonstrates great zero-shot results, your task may require a certain format of the caption, or come with other restrictions or requirements that increase task's complexity. Few-shot prompting can be used to enable in-context learning. By providing examples in the prompt, you can steer the model to generate results that mimic the format of given examples. Let's use the previous image of the Eiffel Tower as an example for the model and build a prompt that demonstrates to the model that in addition to learning what the object in an image is, we would also like to get some interesting information about it. Then, let's see, if we can get the same response format for an image of the Statue of Liberty: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg" alt="Image of the Statue of Liberty"/> </div> Photo by [Juan Mayobre](https://unsplash.com/@jmayobres). ```py >>> prompt = ["User:", ... "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80", ... "Describe this image.\nAssistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.\n", ... "User:", ... 
"https://images.unsplash.com/photo-1524099163253-32b7f0256868?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3387&q=80", ... "Describe this image.\nAssistant:" ... ] >>> inputs = processor(prompt, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=30, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) User: Describe this image. Assistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building. User: Describe this image. Assistant: An image of the Statue of Liberty. Fun fact: the Statue of Liberty is 151 feet tall. ``` Notice that just from a single example (i.e., 1-shot) the model has learned how to perform the task. For more complex tasks, feel free to experiment with a larger number of examples (e.g., 3-shot, 5-shot, etc.). ## Visual question answering Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. Similar to image captioning it can be used in accessibility applications, but also in education (reasoning about visual materials), customer service (questions about products based on images), and image retrieval. Let's get a new image for this task: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-vqa.jpg" alt="Image of a couple having a picnic"/> </div> Photo by [Jarritos Mexican Soda](https://unsplash.com/@jarritos). You can steer the model from image captioning to visual question answering by prompting it with appropriate instructions: ```py >>> prompt = [ ... "Instruction: Provide an answer to the question. Use the image to answer.\n", ... "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80", ... "Question: Where are these people and what's the weather like? Answer:" ... ] >>> inputs = processor(prompt, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) Instruction: Provide an answer to the question. Use the image to answer. Question: Where are these people and what's the weather like? Answer: They're in a park in New York City, and it's a beautiful day. ``` ## Image classification IDEFICS is capable of classifying images into different categories without being explicitly trained on data containing labeled examples from those specific categories. Given a list of categories and using its image and text understanding capabilities, the model can infer which category the image likely belongs to. Say, we have this image of a vegetable stand: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-classification.jpg" alt="Image of a vegetable stand"/> </div> Photo by [Peter Wendt](https://unsplash.com/@peterwendt). 
We can instruct the model to classify the image into one of the categories that we have: ```py >>> categories = ['animals','vegetables', 'city landscape', 'cars', 'office'] >>> prompt = [f"Instruction: Classify the following image into a single category from the following list: {categories}.\n", ... "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80", ... "Category: " ... ] >>> inputs = processor(prompt, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=6, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) Instruction: Classify the following image into a single category from the following list: ['animals', 'vegetables', 'city landscape', 'cars', 'office']. Category: Vegetables ``` In the example above we instruct the model to classify the image into a single category, however, you can also prompt the model to do rank classification. ## Image-guided text generation For more creative applications, you can use image-guided text generation to generate text based on an image. This can be useful to create descriptions of products, ads, descriptions of a scene, etc. Let's prompt IDEFICS to write a story based on a simple image of a red door: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-story-generation.jpg" alt="Image of a red door with a pumpkin on the steps"/> </div> Photo by [Craig Tidball](https://unsplash.com/@devonshiremedia). ```py >>> prompt = ["Instruction: Use the image to write a story. \n", ... "https://images.unsplash.com/photo-1517086822157-2b0358e7684a?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2203&q=80", ... "Story: \n"] >>> inputs = processor(prompt, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, num_beams=2, max_new_tokens=200, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) Instruction: Use the image to write a story. Story: Once upon a time, there was a little girl who lived in a house with a red door. She loved her red door. It was the prettiest door in the whole world. One day, the little girl was playing in her yard when she noticed a man standing on her doorstep. He was wearing a long black coat and a top hat. The little girl ran inside and told her mother about the man. Her mother said, “Don’t worry, honey. He’s just a friendly ghost.” The little girl wasn’t sure if she believed her mother, but she went outside anyway. When she got to the door, the man was gone. The next day, the little girl was playing in her yard again when she noticed the man standing on her doorstep. He was wearing a long black coat and a top hat. The little girl ran ``` Looks like IDEFICS noticed the pumpkin on the doorstep and went with a spooky Halloween story about a ghost. <Tip> For longer outputs like this, you will greatly benefit from tweaking the text generation strategy. 
This can help you significantly improve the quality of the generated output. Check out [Text generation strategies](../generation_strategies) to learn more. </Tip> ## Running inference in batch mode All of the earlier sections illustrated IDEFICS for a single example. In a very similar fashion, you can run inference for a batch of examples by passing a list of prompts: ```py >>> prompts = [ ... [ "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80", ... "This is an image of ", ... ], ... [ "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80", ... "This is an image of ", ... ], ... [ "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80", ... "This is an image of ", ... ], ... ] >>> inputs = processor(prompts, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> for i,t in enumerate(generated_text): ... print(f"{i}:\n{t}\n") 0: This is an image of the Eiffel Tower in Paris, France. 1: This is an image of a couple on a picnic blanket. 2: This is an image of a vegetable stand. ``` ## IDEFICS instruct for conversational use For conversational use cases, you can find fine-tuned instructed versions of the model on the 🤗 Hub: `HuggingFaceM4/idefics-80b-instruct` and `HuggingFaceM4/idefics-9b-instruct`. These checkpoints are the result of fine-tuning the respective base models on a mixture of supervised and instruction fine-tuning datasets, which boosts the downstream performance while making the models more usable in conversational settings. The use and prompting for the conversational use is very similar to using the base models: ```py >>> import torch >>> from transformers import IdeficsForVisionText2Text, AutoProcessor >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> checkpoint = "HuggingFaceM4/idefics-9b-instruct" >>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device) >>> processor = AutoProcessor.from_pretrained(checkpoint) >>> prompts = [ ... [ ... "User: What is in this image?", ... "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG", ... "<end_of_utterance>", ... "\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.<end_of_utterance>", ... "\nUser:", ... "https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052", ... "And who is that?<end_of_utterance>", ... "\nAssistant:", ... ], ... 
] >>> # --batched mode >>> inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device) >>> # --single sample mode >>> # inputs = processor(prompts[0], return_tensors="pt").to(device) >>> # Generation args >>> exit_condition = processor.tokenizer("<end_of_utterance>", add_special_tokens=False).input_ids >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> for i, t in enumerate(generated_text): ... print(f"{i}:\n{t}\n") ```
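Note that the decoded strings contain the full prompt followed by the model's continuation. If you only want the newest assistant reply, a little string post-processing is enough. The helper below is a sketch that simply mirrors the `\nAssistant:` / `\nUser:` turn markers used in the prompts above:

```py
def last_assistant_reply(decoded: str) -> str:
    # The conversational prompts end each turn with "\nAssistant:", so the text
    # after the final occurrence is the newly generated answer.
    reply = decoded.split("\nAssistant:")[-1]
    return reply.split("\nUser:")[0].strip()


example = (
    "User: What is in this image?\n"
    "Assistant: This picture depicts Idefix, the dog of Obelix.\n"
    "User: And who is that?\n"
    "Assistant: This is Asterix, a Gaulish warrior."
)
print(last_assistant_reply(example))  # "This is Asterix, a Gaulish warrior."
```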
transformers/docs/source/en/tasks/idefics.md/0
{ "file_path": "transformers/docs/source/en/tasks/idefics.md", "repo_id": "transformers", "token_count": 6890 }
235
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Transformers Agents

<Tip warning={true}>

Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.

</Tip>

Transformers Agents was released with Transformers version v4.29.0, building on the concept of *tools* and *agents*. You can
play with it in [this colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj).

In short, it provides a natural language API on top of transformers: we define a set of curated tools and design an
agent to interpret natural language and to use these tools. It is extensible by design; we curated some relevant tools,
but we'll show you how the system can be extended easily to use any tool developed by the community.

Let's start with a few examples of what can be achieved with this new API. It is particularly powerful when it comes
to multimodal tasks, so let's take it for a spin to generate images and read text out loud.

```py
agent.run("Caption the following image", image=image)
```

| **Input**                                                                                                                    | **Output**                        |
|------------------------------------------------------------------------------------------------------------------------------|-----------------------------------|
| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png" width=200> | A beaver is swimming in the water |

---

```py
agent.run("Read the following text out loud", text=text)
```

| **Input**                         | **Output**                                                                                                                                                                                                             |
|-----------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| A beaver is swimming in the water | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tts_example.wav" type="audio/wav"> your browser does not support the audio element. </audio> |

---

```py
agent.run(
    "In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?",
    document=document,
)
```

| **Input**                                                                                                                                                                     | **Output**     |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|
| <img src="https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/0/image/image.jpg" width=200> | ballroom foyer |

## Quickstart

Before being able to use `agent.run`, you will need to instantiate an agent, which is a large language model (LLM).
We provide support for openAI models as well as opensource alternatives from BigCode and OpenAssistant.
The openAI models perform better (but require you to have an openAI API key, so cannot be used for free); Hugging Face is providing free access to endpoints for BigCode and OpenAssistant models. To start with, please install the `agents` extras in order to install all default dependencies. ```bash pip install transformers[agents] ``` To use openAI models, you instantiate an [`OpenAiAgent`] after installing the `openai` dependency: ```bash pip install openai ``` ```py from transformers import OpenAiAgent agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>") ``` To use BigCode or OpenAssistant, start by logging in to have access to the Inference API: ```py from huggingface_hub import login login("<YOUR_TOKEN>") ``` Then, instantiate the agent ```py from transformers import HfAgent # Starcoder agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") # StarcoderBase # agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoderbase") # OpenAssistant # agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5") ``` This is using the inference API that Hugging Face provides for free at the moment. If you have your own inference endpoint for this model (or another one) you can replace the URL above with your URL endpoint. <Tip> StarCoder and OpenAssistant are free to use and perform admirably well on simple tasks. However, the checkpoints don't hold up when handling more complex prompts. If you're facing such an issue, we recommend trying out the OpenAI model which, while sadly not open-source, performs better at this given time. </Tip> You're now good to go! Let's dive into the two APIs that you now have at your disposal. ### Single execution (run) The single execution method is when using the [`~Agent.run`] method of the agent: ```py agent.run("Draw me a picture of rivers and lakes.") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> It automatically selects the tool (or tools) appropriate for the task you want to perform and runs them appropriately. It can perform one or several tasks in the same instruction (though the more complex your instruction, the more likely the agent is to fail). ```py agent.run("Draw me a picture of the sea then transform the picture to add an island") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sea_and_island.png" width=200> <br/> Every [`~Agent.run`] operation is independent, so you can run it several times in a row with different tasks. Note that your `agent` is just a large-language model, so small variations in your prompt might yield completely different results. It's important to explain as clearly as possible the task you want to perform. We go more in-depth on how to write good prompts [here](custom_tools#writing-good-user-inputs). If you'd like to keep a state across executions or to pass non-text objects to the agent, you can do so by specifying variables that you would like the agent to use. 
For example, you could generate the first image of rivers and lakes, and ask the model to update that picture to add
an island by doing the following:

```python
picture = agent.run("Generate a picture of rivers and lakes.")
updated_picture = agent.run("Transform the image in `picture` to add an island to it.", picture=picture)
```

<Tip>

This can be helpful when the model is unable to understand your request and mixes tools. An example would be:

```py
agent.run("Draw me the picture of a capybara swimming in the sea")
```

Here, the model could interpret the request in two ways:
- Have the `text-to-image` tool generate a capybara swimming in the sea
- Or, have the `text-to-image` tool generate a capybara, then use the `image-transformation` tool to have it swim in the sea

In case you would like to force the first scenario, you could do so by passing it the prompt as an argument:

```py
agent.run("Draw me a picture of the `prompt`", prompt="a capybara swimming in the sea")
```

</Tip>

### Chat-based execution (chat)

The agent also has a chat-based approach, using the [`~Agent.chat`] method:

```py
agent.chat("Generate a picture of rivers and lakes")
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200>

```py
agent.chat("Transform the picture so that there is a rock in there")
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_and_beaver.png" width=200>

<br/>

This is an interesting approach when you want to keep the state across instructions. It's better for experimentation,
but it tends to work best with single instructions rather than complex ones (which the [`~Agent.run`] method handles
better).

This method can also take arguments if you would like to pass non-text types or specific prompts.

### ⚠️ Remote execution

For demonstration purposes and so that it could be used with all setups, we created remote executors for several of
the default tools the agent has access to, for the release. These are created using
[inference endpoints](https://huggingface.co/inference-endpoints).

We have turned these off for now, but in order to see how to set up remote executor tools yourself, we recommend
reading the [custom tool guide](./custom_tools).

### What's happening here? What are tools, and what are agents?

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/diagram.png">

#### Agents

The "agent" here is a large language model, and we're prompting it so that it has access to a specific set of tools.

LLMs are pretty good at generating small samples of code, so this API takes advantage of that by prompting the LLM to
give a small sample of code performing a task with a set of tools. This prompt is then completed by the task you give
your agent and the description of the tools you give it. This way it gets access to the documentation of the tools you
are using, especially their expected inputs and outputs, and can generate the relevant code.

#### Tools

Tools are very simple: they're a single function, with a name, and a description. We then use these tools'
descriptions to prompt the agent. Through the prompt, we show the agent how it would leverage tools to perform what
was requested in the query.

This is using brand-new tools and not pipelines, because the agent writes better code with very atomic tools.
Pipelines are more refactored and often combine several tasks in one.
Tools are meant to be focused on one very simple task only.

#### Code-execution?!

This code is then executed with our small Python interpreter on the set of inputs passed along with your tools. We
hear you screaming "Arbitrary code execution!" in the back, but let us explain why that is not the case.

The only functions that can be called are the tools you provided and the print function, so you're already limited in
what can be executed. You should be safe if it's limited to Hugging Face tools.

Then, we don't allow any attribute lookup or imports (which shouldn't be needed anyway for passing along
inputs/outputs to a small set of functions), so all the most obvious attacks (and you'd need to prompt the LLM to
output them anyway) shouldn't be an issue. If you want to be on the super safe side, you can execute the `run()`
method with the additional argument `return_code=True`, in which case the agent will just return the code to execute
and you can decide whether to do it or not.

The execution will stop at any line trying to perform an illegal operation, or if there is a regular Python error with
the code generated by the agent.

### A curated set of tools

We have identified a set of tools that can empower such agents. Here is an updated list of the tools we have integrated
in `transformers`:

- **Document question answering**: given a document (such as a PDF) in image format, answer a question on this document ([Donut](./model_doc/donut))
- **Text question answering**: given a long text and a question, answer the question in the text ([Flan-T5](./model_doc/flan-t5))
- **Unconditional image captioning**: Caption the image! ([BLIP](./model_doc/blip))
- **Image question answering**: given an image, answer a question on this image ([VILT](./model_doc/vilt))
- **Image segmentation**: given an image and a prompt, output the segmentation mask of that prompt ([CLIPSeg](./model_doc/clipseg))
- **Speech to text**: given an audio recording of a person talking, transcribe the speech into text ([Whisper](./model_doc/whisper))
- **Text to speech**: convert text to speech ([SpeechT5](./model_doc/speecht5))
- **Zero-shot text classification**: given a text and a list of labels, identify to which label the text corresponds the most ([BART](./model_doc/bart))
- **Text summarization**: summarize a long text in one or a few sentences ([BART](./model_doc/bart))
- **Translation**: translate the text into a given language ([NLLB](./model_doc/nllb))

These tools have an integration in transformers, and can be used manually as well, for example:

```py
from transformers import load_tool

tool = load_tool("text-to-speech")
audio = tool("This is a text to speech tool")
```

### Custom tools

While we identified a curated set of tools, we strongly believe that the main value provided by this implementation is
the ability to quickly create and share custom tools.

By pushing the code of a tool to a Hugging Face Space or a model repository, you're then able to leverage the tool
directly with the agent.
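To give a concrete feel for how small a tool can be, here is a minimal sketch of a custom tool. It assumes the `Tool`
base class and the `name`/`description`/`inputs`/`outputs` attributes described in the custom tool guide; treat the
exact attribute names below as assumptions rather than a definitive API reference.

```py
from transformers import Tool


class TextReverserTool(Tool):
    # The name and description are what the agent's prompt will see.
    name = "text-reverser"
    description = "This is a tool that reverses a text. It takes a text as input and returns the reversed text."
    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, text: str) -> str:
        # The body is ordinary Python; here we simply reverse the string.
        return text[::-1]
```

The [custom tool guide](./custom_tools) covers how to push such a tool to the Hub and how to make an agent aware of it.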
We've added a few **transformers-agnostic** tools to the [`huggingface-tools` organization](https://huggingface.co/huggingface-tools):

- **Text downloader**: to download a text from a web URL
- **Text to image**: generate an image according to a prompt, leveraging stable diffusion
- **Image transformation**: modify an image given an initial image and a prompt, leveraging instruct pix2pix stable diffusion
- **Text to video**: generate a small video according to a prompt, leveraging damo-vilab

The text-to-image tool we have been using since the beginning is a remote tool that lives in
[*huggingface-tools/text-to-image*](https://huggingface.co/spaces/huggingface-tools/text-to-image)! We will continue
releasing such tools on this and other organizations, to further supercharge this implementation.

By default, agents have access to tools that reside on [`huggingface-tools`](https://huggingface.co/huggingface-tools).
We explain how you can write and share your tools, as well as leverage any custom tool that resides on the Hub, in the
[following guide](custom_tools).

### Code generation

So far we have shown how to use the agents to perform actions for you. However, the agent is only generating code that
we then execute using a very restricted Python interpreter. In case you would like to use the code generated in a
different setting, the agent can be prompted to return the code, along with the tool definitions and accurate imports.

For example, the following instruction

```python
agent.run("Draw me a picture of rivers and lakes", return_code=True)
```

returns the following code

```python
from transformers import load_tool

image_generator = load_tool("huggingface-tools/text-to-image")

image = image_generator(prompt="rivers and lakes")
```

that you can then modify and execute yourself.
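For instance, one way to reuse that returned snippet outside of the agent is to tweak the prompt and add your own
post-processing. This is only a sketch: it assumes the tool returns a PIL-style image object with a `save` method.

```python
from transformers import load_tool

# Same tool as in the code returned by the agent, but with our own prompt and post-processing.
image_generator = load_tool("huggingface-tools/text-to-image")

image = image_generator(prompt="rivers and lakes at sunset")
image.save("rivers_and_lakes_sunset.png")  # assumes a PIL.Image-like return value
```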
transformers/docs/source/en/transformers_agents.md/0
{ "file_path": "transformers/docs/source/en/transformers_agents.md", "repo_id": "transformers", "token_count": 4397 }
236
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Glossary

This glossary defines general machine learning terms and terms related to 🤗 Transformers to help you better
understand the documentation.

## A

### attention mask

The attention mask is an optional argument used when batching sequences together.

<Youtube id="M6adb1j2jPI"/>

This argument indicates to the model which tokens should be attended to, and which should not. For example, consider
these two sequences:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")

>>> sequence_a = "This is a short sequence."
>>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

>>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
>>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
```

The encoded versions have different lengths:

```python
>>> len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
```

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and
ask it to pad like this:

```python
>>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```

We can see that zeros have been added to the right of the first sentence to make it the same length as the second one:

```python
>>> padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
```

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating
the position of the padded indices so that the model does not attend to them. For the [`BertTokenizer`], `1` indicates
a value that should be attended to, while `0` indicates a padded value.
This attention mask is in the dictionary returned by the tokenizer under the key "attention_mask":

```python
>>> padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```

### autoencoding models

See [encoder models](#encoder-models) and [masked language modeling](#masked-language-modeling-mlm)

### autoregressive models

See [causal language modeling](#causal-language-modeling) and [decoder models](#decoder-models)

## B

### backbone

The backbone is the network (embeddings and layers) that outputs the raw hidden states or features. It is usually
connected to a [head](#head), which accepts the features as its input to make a prediction. For example,
[`ViTModel`] is a backbone without a specific head on top. Other models can also use [`VitModel`] as a backbone,
such as [DPT](model_doc/dpt).

## C

### causal language modeling

A pretraining task where the model reads the texts in order and has to predict the next word. It is usually done by
reading the whole sentence, but using a mask inside the model to hide the future tokens at a certain timestep.

### channel

Color images are made up of some combination of values in three channels: red, green, and blue (RGB), and grayscale
images only have one channel. In 🤗 Transformers, the channel can be the first or last dimension of an image's tensor:
[`n_channels`, `height`, `width`] or [`height`, `width`, `n_channels`].

### connectionist temporal classification (CTC)

An algorithm that allows a model to learn without knowing exactly how the input and output are aligned; CTC calculates
the distribution of all possible outputs for a given input and chooses the most likely output from it. CTC is commonly
used in speech recognition tasks because speech doesn't always align cleanly with the transcript, for various reasons
such as a speaker's different speech rates.

### convolution

A type of layer in a neural network where the input matrix is multiplied element-wise by a smaller matrix (kernel or
filter) and the values are summed up in a new matrix. This is known as a convolution operation, which is repeated over
the entire input matrix. Each operation is applied to a different segment of the input matrix. Convolutional neural
networks (CNNs) are commonly used in computer vision.

## D

### DataParallel (DP)

Parallelism technique for training on multiple GPUs where the same setup is replicated multiple times, with each
instance receiving a distinct slice of the data. The processing is done in parallel and all setups are synchronized at
the end of each training step. Learn more about how DataParallel works
[here](perf_train_gpu_many#dataparallel-vs-distributeddataparallel).

### decoder input IDs

This input is specific to encoder-decoder models and contains the input IDs that will be fed to the decoder. These
inputs should be used for sequence-to-sequence tasks, such as translation or summarization, and are usually built in a
way specific to each model.

Most encoder-decoder models (BART, T5) create their `decoder_input_ids` on their own from the `labels`.
In such models, passing the `labels` is the preferred way to handle training. Please check each model's docs to see
how they handle these input IDs for sequence-to-sequence training.

### decoder models

Also referred to as autoregressive models, decoder models involve a pretraining task (called causal language modeling)
where the model reads the texts in order and has to predict the next word. It is usually done by reading the whole
sentence with a mask to hide future tokens at a certain timestep.

<Youtube id="d_ixlCubqQw"/>

### deep learning (DL)

Machine learning algorithms which use neural networks with several layers.

## E

### encoder models

Also known as autoencoding models, encoder models take an input (such as text or images) and transform it into a
condensed numerical representation called an embedding. Oftentimes, encoder models are pretrained using techniques
like [masked language modeling](#masked-language-modeling-mlm), which masks parts of the input sequence and forces the
model to create more meaningful representations.

<Youtube id="H39Z_720T5s"/>

## F

### feature extraction

The process of selecting and transforming raw data into a set of features that are more informative and useful for
machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings
and extracting important features such as edges or shapes from image/video data.

### feed forward chunking

In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward
layers. The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model
(e.g., for `google-bert/bert-base-uncased`).

For an input of size `[batch_size, sequence_length]`, the memory required to store the intermediate feed forward
embeddings `[batch_size, sequence_length, config.intermediate_size]` can account for a large fraction of the memory
use. The authors of [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) noticed that, since the
computation is independent of the `sequence_length` dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers `[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n`
individually and concatenate them afterward to `[batch_size, sequence_length, config.hidden_size]` with
`n = sequence_length`, which trades increased computation time against reduced memory use, but yields a mathematically
**equivalent** result.

For models employing the function [`apply_chunking_to_forward`], the `chunk_size` defines the number of output
embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. If
`chunk_size` is set to 0, no feed forward chunking is done.

### finetuned models

Finetuning is a form of transfer learning that involves taking a pretrained model, freezing its weights, and replacing
the output layer with a newly added [model head](#head). The model head is trained on your target dataset.
See the [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) tutorial for more details,
and learn how to fine-tune models with 🤗 Transformers.

## H

### head

The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto
a different dimension. There is a different model head for each task. For example:

  * [`GPT2ForSequenceClassification`] is a sequence classification head (a linear layer) on top of the base [`GPT2Model`].
  * [`ViTForImageClassification`] is an image classification head (a linear layer on top of the final hidden state of the `CLS` token) on top of the base [`ViTModel`].
  * [`Wav2Vec2ForCTC`] is a language modeling head with [CTC](#connectionist-temporal-classification-ctc) on top of the base [`Wav2Vec2Model`].

## I

### image patch

Vision-based Transformers models split an image into smaller patches which are linearly embedded, and then passed as a
sequence to the model. You can find the `patch_size` (or resolution) of the model in its configuration.

### inference

Inference is the process of evaluating a model on new data after training is complete. See the
[Pipeline for inference](https://huggingface.co/docs/transformers/pipeline_tutorial) tutorial to learn how to perform
inference with 🤗 Transformers.

### input IDs

The input IDs are often the only required parameters to be passed to the model as input. They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model.

<Youtube id="VFp38yj8h3A"/>

Each tokenizer works differently, but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) tokenizer:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")

>>> sequence = "A Titan RTX has 24GB of VRAM"
```

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

```python
>>> tokenized_sequence = tokenizer.tokenize(sequence)
```

The tokens are either words or subwords. For instance, "VRAM" wasn't in the model vocabulary, so it's been split into
"V", "RA" and "M". To indicate that those tokens are not separate words but parts of the same word, a double-hash
prefix is added for "RA" and "M":

```python
>>> print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
```

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly
feeding the sentence to the tokenizer, which leverages the Rust implementation of
[🤗 Tokenizers](https://github.com/huggingface/tokenizers) for peak performance.

```python
>>> inputs = tokenizer(sequence)
```

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly.
The token indices are under the key `input_ids`:

```python
>>> encoded_sequence = inputs["input_ids"]
>>> print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
```

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them), which are
special IDs the model sometimes uses.

If we decode the previous sequence of IDs,

```python
>>> decoded_sequence = tokenizer.decode(encoded_sequence)
```

we will see

```python
>>> print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
```

because this is the way a [`BertModel`] expects its inputs.

## L

### labels

The labels are an optional argument which can be passed in order for the model to compute the loss itself. These
labels should be the expected prediction of the model: it will use the standard loss in order to compute the loss
between its predictions and the expected value (the label).

These labels are different according to the model head, for example:

- For sequence classification models ([`BertForSequenceClassification`]), the model expects a tensor of dimension
  `(batch_size)` with each value of the batch corresponding to the expected label of the entire sequence.
- For token classification models ([`BertForTokenClassification`]), the model expects a tensor of dimension
  `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token.
- For masked language modeling ([`BertForMaskedLM`]), the model expects a tensor of dimension
  `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the labels
  are the token ID for the masked tokens, and values to be ignored for the rest (usually -100).
- For sequence-to-sequence tasks ([`BartForConditionalGeneration`], [`MBartForConditionalGeneration`]), the model
  expects a tensor of dimension `(batch_size, tgt_seq_length)` with each value corresponding to the target sequences
  associated with each input sequence. During training, both BART and T5 will make the appropriate `decoder_input_ids`
  and decoder attention masks internally. They usually do not need to be supplied. This does not apply to models
  leveraging the Encoder-Decoder framework.
- For image classification models ([`ViTForImageClassification`]), the model expects a tensor of dimension
  `(batch_size)` with each value of the batch corresponding to the expected label of each individual image.
- For semantic segmentation models ([`SegformerForSemanticSegmentation`]), the model expects a tensor of dimension
  `(batch_size, height, width)` with each value of the batch corresponding to the expected label of each individual pixel.
- For object detection models ([`DetrForObjectDetection`]), the model expects a list of dictionaries with `class_labels`
  and `boxes` keys where each value of the batch corresponds to the expected labels and number of bounding boxes of
  each individual image.
- For automatic speech recognition models ([`Wav2Vec2ForCTC`]), the model expects a tensor of dimension
  `(batch_size, target_length)` with each value corresponding to the expected label of each individual token.

<Tip>

Each model's labels may be different, so be sure to always check the documentation of each model for more information
about their specific labels!
</Tip>

The base models ([`BertModel`]) do not accept labels, as these are the base transformer models, simply outputting
features.

### large language models (LLM)

A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of
data. These models also tend to have a large number of learnable parameters (e.g. 175 billion for GPT-3).

## M

### masked language modeling (MLM)

A pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens at
random, and has to predict the original text.

### multimodal

A task that combines texts with another kind of input (for instance: images).

## N

### Natural language generation (NLG)

All tasks related to generating text (for instance: [Write With Transformers](https://transformer.huggingface.co/),
translation).

### Natural language processing (NLP)

A generic way to say "deal with texts".

### Natural language understanding (NLU)

All tasks related to understanding what is in a text (for instance: classifying the whole text, or individual words).

## P

### Pipeline

A pipeline in 🤗 Transformers is an abstraction referring to a series of steps that are executed in a specific order
to preprocess and transform data and return a prediction from a model. Some example stages found in a pipeline might
be data preprocessing, feature extraction, and normalization.

For more details, see [Pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial).

### PipelineParallel (PP)

Parallelism technique in which the model is split up vertically (layer-level) across multiple GPUs, so that only one
or several layers of the model are placed on a single GPU. Each GPU processes different stages of the pipeline in
parallel and works on a small chunk of the batch. Learn more about how PipelineParallel works
[here](perf_train_gpu_many#from-naive-model-parallelism-to-pipeline-parallelism).

### pixel values

A tensor of the numerical representations of an image that is passed to a model. The pixel values have a shape of
[`batch_size`, `num_channels`, `height`, `width`], and are generated from an image processor.

### pooling

An operation that reduces a matrix to a smaller matrix, either by taking the maximum or the average of the pooled
dimension(s). Pooling layers are commonly found between convolutional layers to downsample the feature representation.

### position IDs

Contrary to RNNs, which have the position of each token embedded within them, transformers are unaware of the position
of each token. Therefore, the position IDs (`position_ids`) are used by the model to identify each token's position in
the list of tokens.

They are an optional parameter. If no `position_ids` are passed to the model, the IDs are automatically created as
absolute positional embeddings.

Absolute positional embeddings are selected in the range `[0, config.max_position_embeddings - 1]`. Some models use
other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
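As a minimal sketch of what passing explicit position IDs can look like (mirroring the absolute positions the model
would create by default; the checkpoint below is just the one used elsewhere in this glossary):

```python
>>> import torch
>>> from transformers import BertModel, BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> model = BertModel.from_pretrained("google-bert/bert-base-cased")

>>> inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")

>>> # Build absolute position IDs explicitly; this matches what the model would create on its own.
>>> position_ids = torch.arange(inputs["input_ids"].shape[1]).unsqueeze(0)

>>> outputs = model(**inputs, position_ids=position_ids)
```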
### preprocessing

The task of preparing raw data into a format that can be easily consumed by machine learning models. For example, text
is typically preprocessed by tokenization. To gain a better idea of what preprocessing looks like for other input
types, check out the [Preprocess](https://huggingface.co/docs/transformers/preprocessing) tutorial.

### pretrained model

A model that has been pretrained on some data (for instance, all of Wikipedia). Pretraining methods involve a
self-supervised objective, which can be reading the text and trying to predict the next word (see
[causal language modeling](#causal-language-modeling)) or masking some words and trying to predict them (see
[masked language modeling](#masked-language-modeling-mlm)).

Speech and vision models have their own pretraining objectives. For example, Wav2Vec2 is a speech model pretrained on
a contrastive task which requires the model to identify the "true" speech representation from a set of "false" speech
representations. On the other hand, BEiT is a vision model pretrained on a masked image modeling task which masks some
of the image patches and requires the model to predict the masked patches (similar to the masked language modeling
objective).

## R

### recurrent neural network (RNN)

A type of model that uses a loop over a layer to process texts.

### representation learning

A subfield of machine learning which focuses on learning meaningful representations of raw data. Some examples of
representation learning techniques include word embeddings, autoencoders, and Generative Adversarial Networks (GANs).

## S

### sampling rate

A measurement in hertz of the number of samples (of the audio signal) taken per second. The sampling rate is the
result of approximating a continuous signal such as speech.

### self-attention

Each element of the input finds out which other elements of the input it should attend to.

### self-supervised learning

A category of machine learning techniques in which a model creates its own learning objective from unlabeled data. It
differs from [unsupervised learning](#unsupervised-learning) and [supervised learning](#supervised-learning) in that
the learning process is supervised, but not explicitly by the user.

One example of self-supervised learning is [masked language modeling](#masked-language-modeling-mlm), where a model is
passed sentences with a proportion of their tokens removed and learns to predict the missing tokens.

### semi-supervised learning

A broad category of machine learning training techniques that leverages a small amount of labeled data together with a
larger quantity of unlabeled data to improve the accuracy of a model, unlike [supervised learning](#supervised-learning)
and [unsupervised learning](#unsupervised-learning).

An example of a semi-supervised learning approach is "self-training", in which a model is trained on labeled data and
then used to make predictions on the unlabeled data. The portion of the unlabeled data that the model predicts with
the highest confidence is added to the labeled dataset and used to retrain the model.
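To make the "self-training" recipe above more concrete, here is a toy, self-contained sketch using scikit-learn. It is
purely illustrative (not a 🤗 Transformers API), and the 0.95 confidence threshold and 5 rounds are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Pretend the first 100 examples are labeled and the rest are unlabeled.
X, y = make_classification(n_samples=1000, random_state=0)
labeled_X, labeled_y = X[:100], y[:100]
unlabeled_X = X[100:]

model = LogisticRegression().fit(labeled_X, labeled_y)

for _ in range(5):
    probs = model.predict_proba(unlabeled_X)
    confident = probs.max(axis=1) > 0.95  # keep only high-confidence predictions
    if not confident.any():
        break
    pseudo_labels = probs[confident].argmax(axis=1)
    # Add the pseudo-labeled examples to the labeled set and retrain.
    labeled_X = np.concatenate([labeled_X, unlabeled_X[confident]])
    labeled_y = np.concatenate([labeled_y, pseudo_labels])
    unlabeled_X = unlabeled_X[~confident]
    model = LogisticRegression().fit(labeled_X, labeled_y)
```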
### sequence-to-sequence (seq2seq)

Models that generate a new sequence from an input, like translation models or summarization models (such as
[Bart](model_doc/bart) or [T5](model_doc/t5)).

### Sharded DDP

Another name for the foundational [ZeRO](#zero-redundancy-optimizer-zero) concept as used by various other
implementations of ZeRO.

### stride

In [convolution](#convolution) or [pooling](#pooling), the stride refers to the distance the kernel is moved over a
matrix. A stride of 1 means the kernel is moved one pixel over at a time, and a stride of 2 means the kernel is moved
two pixels over at a time.

### supervised learning

A form of model training that directly uses labeled data to correct and instruct model performance. Data is fed into
the model being trained, and its predictions are compared to the known labels. The model updates its weights based on
how incorrect its predictions were, and the process is repeated to optimize model performance.

## T

### Tensor Parallelism (TP)

Parallelism technique for training on multiple GPUs in which each tensor is split up into multiple chunks, so instead
of having the whole tensor reside on a single GPU, each shard of the tensor resides on its designated GPU. Shards get
processed separately and in parallel on different GPUs and the results are synced at the end of the processing step.
This is what is sometimes called horizontal parallelism, as the splitting happens at a horizontal level. Learn more
about Tensor Parallelism [here](perf_train_gpu_many#tensor-parallelism).

### token

A part of a sentence, usually a word, but it can also be a subword (uncommon words are often split into subwords) or a
punctuation symbol.

### token Type IDs

Some models' purpose is to do classification on pairs of sentences or question answering.

<Youtube id="0u3ioSwev3s"/>

These require two different sequences to be joined in a single "input_ids" entry, which usually is performed with the
help of special tokens, such as the classifier (`[CLS]`) and separator (`[SEP]`) tokens. For example, the BERT model
builds its two sequence input as such:

```python
>>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
```

We can use our tokenizer to automatically generate such a sentence by passing the two sequences to `tokenizer` as two
arguments (and not a list, like before) like this:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> sequence_a = "HuggingFace is based in NYC"
>>> sequence_b = "Where is HuggingFace based?"

>>> encoded_dict = tokenizer(sequence_a, sequence_b)
>>> decoded = tokenizer.decode(encoded_dict["input_ids"])
```

which will return:

```python
>>> print(decoded)
[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
```

This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also use token type IDs (also called segment IDs). They are represented as a binary mask identifying the
two types of sequence in the model.
The tokenizer returns this mask as the "token_type_ids" entry:

```python
>>> encoded_dict["token_type_ids"]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

The first sequence, the "context" used for the question, has all its tokens represented by a `0`, whereas the second
sequence, corresponding to the "question", has all its tokens represented by a `1`.

Some models, like [`XLNetModel`], use an additional token represented by a `2`.

### transfer learning

A technique that involves taking a pretrained model and adapting it to a dataset specific to your task. Instead of
training a model from scratch, you can leverage the knowledge obtained from an existing model as a starting point.
This speeds up the learning process and reduces the amount of training data needed.

### transformer

Self-attention based deep learning model architecture.

## U

### unsupervised learning

A form of model training in which the data provided to the model is not labeled. Unsupervised learning techniques
leverage statistical information of the data distribution to find patterns useful for the task at hand.

## Z

### Zero Redundancy Optimizer (ZeRO)

Parallelism technique which performs sharding of the tensors somewhat similar to [TensorParallel](#tensor-parallelism-tp),
except the whole tensor gets reconstructed in time for a forward or backward computation, therefore the model doesn't
need to be modified. This method also supports various offloading techniques to compensate for limited GPU memory.
Learn more about ZeRO [here](perf_train_gpu_many#zero-data-parallelism).
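As a rough sketch of how ZeRO is commonly enabled in practice with the [`Trainer`] (assuming the `deepspeed` package is
installed and you provide a DeepSpeed configuration file, here hypothetically named `ds_config.json`, that sets for
example `"zero_optimization": {"stage": 2}`):

```python
from transformers import TrainingArguments

# "ds_config.json" is a hypothetical DeepSpeed config file enabling ZeRO (e.g. stage 2);
# it must exist on disk and the deepspeed package must be installed for this to run.
training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    deepspeed="ds_config.json",
)
```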
transformers/docs/source/es/glossary.md/0
{ "file_path": "transformers/docs/source/es/glossary.md", "repo_id": "transformers", "token_count": 10085 }
237