matplotlib matplotlib.axis.Tick.set_label1 matplotlib.axis.Tick.set\_label1 ================================ Tick.set\_label1(*s*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L310-L319) Set the label1 text. Parameters: **s**str matplotlib mpl_toolkits.axes_grid1.parasite_axes mpl\_toolkits.axes\_grid1.parasite\_axes ======================================== Classes ------- | | | | --- | --- | | [`HostAxes`](mpl_toolkits.axes_grid1.parasite_axes.hostaxes#mpl_toolkits.axes_grid1.parasite_axes.HostAxes "mpl_toolkits.axes_grid1.parasite_axes.HostAxes") | alias of `AxesHostAxes` | | [`HostAxesBase`](mpl_toolkits.axes_grid1.parasite_axes.hostaxesbase#mpl_toolkits.axes_grid1.parasite_axes.HostAxesBase "mpl_toolkits.axes_grid1.parasite_axes.HostAxesBase")(\*args, \*\*kwargs) | | | [`ParasiteAxes`](mpl_toolkits.axes_grid1.parasite_axes.parasiteaxes#mpl_toolkits.axes_grid1.parasite_axes.ParasiteAxes "mpl_toolkits.axes_grid1.parasite_axes.ParasiteAxes") | alias of `AxesParasite` | | [`ParasiteAxesBase`](mpl_toolkits.axes_grid1.parasite_axes.parasiteaxesbase#mpl_toolkits.axes_grid1.parasite_axes.ParasiteAxesBase "mpl_toolkits.axes_grid1.parasite_axes.ParasiteAxesBase")(parent\_axes[, ...]) | | Functions --------- | | | | --- | --- | | [`host_axes`](mpl_toolkits.axes_grid1.parasite_axes.host_axes#mpl_toolkits.axes_grid1.parasite_axes.host_axes "mpl_toolkits.axes_grid1.parasite_axes.host_axes")(\*args[, axes\_class, figure]) | Create axes that can act as a host to parasitic axes. | | [`host_axes_class_factory`](mpl_toolkits.axes_grid1.parasite_axes.host_axes_class_factory#mpl_toolkits.axes_grid1.parasite_axes.host_axes_class_factory "mpl_toolkits.axes_grid1.parasite_axes.host_axes_class_factory")(axes\_class) | | | [`host_subplot`](mpl_toolkits.axes_grid1.parasite_axes.host_subplot#mpl_toolkits.axes_grid1.parasite_axes.host_subplot "mpl_toolkits.axes_grid1.parasite_axes.host_subplot")(\*args[, axes\_class, figure]) | Create a subplot that can act as a host to parasitic axes. | | [`host_subplot_class_factory`](mpl_toolkits.axes_grid1.parasite_axes.host_subplot_class_factory#mpl_toolkits.axes_grid1.parasite_axes.host_subplot_class_factory "mpl_toolkits.axes_grid1.parasite_axes.host_subplot_class_factory")(axes\_class) | | | [`parasite_axes_class_factory`](mpl_toolkits.axes_grid1.parasite_axes.parasite_axes_class_factory#mpl_toolkits.axes_grid1.parasite_axes.parasite_axes_class_factory "mpl_toolkits.axes_grid1.parasite_axes.parasite_axes_class_factory")(axes\_class) | | matplotlib mpl_toolkits.axisartist.angle_helper.LocatorHMS mpl\_toolkits.axisartist.angle\_helper.LocatorHMS ================================================= *class*mpl\_toolkits.axisartist.angle\_helper.LocatorHMS(*nbins*, *include\_last=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/angle_helper.py#L152-L154) Bases: [`LocatorBase`](mpl_toolkits.axisartist.angle_helper.locatorbase#mpl_toolkits.axisartist.angle_helper.LocatorBase "mpl_toolkits.axisartist.angle_helper.LocatorBase") \_\_call\_\_(*v1*, *v2*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/angle_helper.py#L153-L154) Call self as a function.
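A hedged sketch of calling the locator directly; the `(levels, n, factor)` return convention is an assumption based on the other `angle_helper` locators (the actual tick values are `levels / factor`):

```
from mpl_toolkits.axisartist import angle_helper

# Aim for roughly 4 "nice" hour/minute/second intervals.
locator = angle_helper.LocatorHMS(4)

# Ask for tick positions covering the interval 0..90 (assumed return
# convention: tick values, their count, and a scaling factor).
levels, n, factor = locator(0, 90)
print(levels / factor)  # tick positions in the original units
```

In practice the locator is rarely called by hand; it is usually passed as *grid_locator1* to a curvilinear grid helper, as in the floating-axes demo linked below.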
Examples using `mpl_toolkits.axisartist.angle_helper.LocatorHMS` ---------------------------------------------------------------- [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) mpl\_toolkits.axisartist.floating\_axes features matplotlib matplotlib.axes.Axes.contour matplotlib.axes.Axes.contour ============================ Axes.contour(*\*args*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L6354-L6368) Plot contour lines. Call signature:

```
contour([X, Y,] Z, [levels], **kwargs)
```

[`contour`](#matplotlib.axes.Axes.contour "matplotlib.axes.Axes.contour") and [`contourf`](matplotlib.axes.axes.contourf#matplotlib.axes.Axes.contourf "matplotlib.axes.Axes.contourf") draw contour lines and filled contours, respectively. Except as noted, function signatures and return values are the same for both versions. Parameters: **X, Y**array-like, optional The coordinates of the values in *Z*. *X* and *Y* must both be 2D with the same shape as *Z* (e.g. created via [`numpy.meshgrid`](https://numpy.org/doc/stable/reference/generated/numpy.meshgrid.html#numpy.meshgrid "(in NumPy v1.23)")), or they must both be 1-D such that `len(X) == N` is the number of columns in *Z* and `len(Y) == M` is the number of rows in *Z*. *X* and *Y* must both be ordered monotonically. If not given, they are assumed to be integer indices, i.e. `X = range(N)`, `Y = range(M)`. **Z**(M, N) array-like The height values over which the contour is drawn. Color-mapping is controlled by *cmap*, *norm*, *vmin*, and *vmax*. **levels**int or array-like, optional Determines the number and positions of the contour lines / regions. If an int *n*, use [`MaxNLocator`](../ticker_api#matplotlib.ticker.MaxNLocator "matplotlib.ticker.MaxNLocator"), which tries to automatically choose no more than *n+1* "nice" contour levels between *vmin* and *vmax*. If array-like, draw contour lines at the specified levels. The values must be in increasing order. Returns: [`QuadContourSet`](../contour_api#matplotlib.contour.QuadContourSet "matplotlib.contour.QuadContourSet") Other Parameters: **corner\_mask**bool, default: `[rcParams["contour.corner\_mask"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=contour.corner_mask#matplotlibrc-sample)` (default: `True`) Enable/disable corner masking, which only has an effect if *Z* is a masked array. If `False`, any quad touching a masked point is masked out. If `True`, only the triangular corners of quads nearest those points are always masked out; other triangular corners comprising three unmasked points are contoured as usual. **colors**color string or sequence of colors, optional The colors of the levels, i.e. the lines for [`contour`](#matplotlib.axes.Axes.contour "matplotlib.axes.Axes.contour") and the areas for [`contourf`](matplotlib.axes.axes.contourf#matplotlib.axes.Axes.contourf "matplotlib.axes.Axes.contourf"). The sequence is cycled for the levels in ascending order. If the sequence is shorter than the number of levels, it's repeated. As a shortcut, single color strings may be used in place of one-element lists, i.e. `'red'` instead of `['red']` to color all levels with the same color. This shortcut only works for color strings, not for other ways of specifying colors. By default (value *None*), the colormap specified by *cmap* will be used.
**alpha**float, default: 1 The alpha blending value, between 0 (transparent) and 1 (opaque). **cmap**str or [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"), default: `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`) The Colormap instance or registered colormap name used to map scalar data to colors. This parameter is ignored if *colors* is set. **norm**str or [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize"), optional The normalization method used to scale scalar data to the [0, 1] range before mapping to colors using *cmap*. By default, a linear scaling is used, mapping the lowest value to 0 and the highest to 1. If given, this can be one of the following: * An instance of [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or one of its subclasses (see [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html)). * A scale name, i.e. one of "linear", "log", "symlog", "logit", etc. For a list of available scales, call [`matplotlib.scale.get_scale_names()`](../scale_api#matplotlib.scale.get_scale_names "matplotlib.scale.get_scale_names"). In that case, a suitable [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") subclass is dynamically generated and instantiated. This parameter is ignored if *colors* is set. **vmin, vmax**float, optional When using scalar data and no explicit *norm*, *vmin* and *vmax* define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is an error to use *vmin*/*vmax* when a *norm* instance is given (but using a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") *norm* name together with *vmin*/*vmax* is acceptable). If *vmin* or *vmax* are not given, the default color scaling is based on *levels*. This parameter is ignored if *colors* is set. **origin**{*None*, 'upper', 'lower', 'image'}, default: None Determines the orientation and exact position of *Z* by specifying the position of `Z[0, 0]`. This is only relevant if *X*, *Y* are not given. * *None*: `Z[0, 0]` is at X=0, Y=0 in the lower left corner. * 'lower': `Z[0, 0]` is at X=0.5, Y=0.5 in the lower left corner. * 'upper': `Z[0, 0]` is at X=N+0.5, Y=0.5 in the upper left corner. * 'image': Use the value from `[rcParams["image.origin"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.origin#matplotlibrc-sample)` (default: `'upper'`). **extent**(x0, x1, y0, y1), optional If *origin* is not *None*, then *extent* is interpreted as in [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow"): it gives the outer pixel boundaries. In this case, the position of Z[0, 0] is the center of the pixel, not a corner. If *origin* is *None*, then (*x0*, *y0*) is the position of Z[0, 0], and (*x1*, *y1*) is the position of Z[-1, -1]. This argument is ignored if *X* and *Y* are specified in the call to contour. **locator**ticker.Locator subclass, optional The locator is used to determine the contour levels if they are not given explicitly via *levels*. Defaults to [`MaxNLocator`](../ticker_api#matplotlib.ticker.MaxNLocator "matplotlib.ticker.MaxNLocator").
**extend**{'neither', 'both', 'min', 'max'}, default: 'neither' Determines the `contourf`-coloring of values that are outside the *levels* range. If 'neither', values outside the *levels* range are not colored. If 'min', 'max' or 'both', color the values below, above or below and above the *levels* range. Values below `min(levels)` and above `max(levels)` are mapped to the under/over values of the [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"). Note that most colormaps do not have dedicated colors for these by default, so that the over and under values are the edge values of the colormap. You may want to set these values explicitly using [`Colormap.set_under`](matplotlib.colors.colormap#matplotlib.colors.Colormap.set_under "matplotlib.colors.Colormap.set_under") and [`Colormap.set_over`](matplotlib.colors.colormap#matplotlib.colors.Colormap.set_over "matplotlib.colors.Colormap.set_over"). Note An existing [`QuadContourSet`](../contour_api#matplotlib.contour.QuadContourSet "matplotlib.contour.QuadContourSet") does not get notified if properties of its colormap are changed. Therefore, an explicit call to `QuadContourSet.changed()` is needed after modifying the colormap. The explicit call can be left out if a colorbar is assigned to the [`QuadContourSet`](../contour_api#matplotlib.contour.QuadContourSet "matplotlib.contour.QuadContourSet") because it internally calls `QuadContourSet.changed()`. Example:

```
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(1, 10)
y = x.reshape(-1, 1)
h = x * y

cs = plt.contourf(h, levels=[10, 30, 50],
                  colors=['#808080', '#A0A0A0', '#C0C0C0'], extend='both')
cs.cmap.set_over('red')
cs.cmap.set_under('blue')
cs.changed()
```

**xunits, yunits**registered units, optional Override axis units by specifying an instance of a [`matplotlib.units.ConversionInterface`](../units_api#matplotlib.units.ConversionInterface "matplotlib.units.ConversionInterface"). **antialiased**bool, optional Enable antialiasing, overriding the defaults. For filled contours, the default is *True*. For line contours, it is taken from `[rcParams["lines.antialiased"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=lines.antialiased#matplotlibrc-sample)` (default: `True`). **nchunk**int >= 0, optional If 0, no subdivision of the domain. Specify a positive integer to divide the domain into subdomains of *nchunk* by *nchunk* quads. Chunking reduces the maximum length of polygons generated by the contouring algorithm which reduces the rendering workload passed on to the backend and also requires slightly less RAM. It can however introduce rendering artifacts at chunk boundaries depending on the backend, the *antialiased* flag and value of *alpha*. **linewidths**float or array-like, default: `[rcParams["contour.linewidth"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=contour.linewidth#matplotlibrc-sample)` (default: `None`) *Only applies to* [`contour`](#matplotlib.axes.Axes.contour "matplotlib.axes.Axes.contour"). The line width of the contour lines. If a number, all levels will be plotted with this linewidth. If a sequence, the levels in ascending order will be plotted with the linewidths in the order specified. If None, this falls back to `[rcParams["lines.linewidth"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=lines.linewidth#matplotlibrc-sample)` (default: `1.5`).
**linestyles**{*None*, 'solid', 'dashed', 'dashdot', 'dotted'}, optional *Only applies to* [`contour`](#matplotlib.axes.Axes.contour "matplotlib.axes.Axes.contour"). If *linestyles* is *None*, the default is 'solid' unless the lines are monochrome. In that case, negative contours will instead take their linestyle from the *negative\_linestyles* argument. *linestyles* can also be an iterable of the above strings specifying a set of linestyles to be used. If this iterable is shorter than the number of contour levels it will be repeated as necessary. **negative\_linestyles**{*None*, 'solid', 'dashed', 'dashdot', 'dotted'}, optional *Only applies to* [`contour`](#matplotlib.axes.Axes.contour "matplotlib.axes.Axes.contour"). If *linestyles* is *None* and the lines are monochrome, this argument specifies the line style for negative contours. If *negative\_linestyles* is *None*, the default is taken from `[rcParams["contour.negative\_linestyles"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=contour.negative_linestyles#matplotlibrc-sample)`. *negative\_linestyles* can also be an iterable of the above strings specifying a set of linestyles to be used. If this iterable is shorter than the number of contour levels it will be repeated as necessary. **hatches**list[str], optional *Only applies to* [`contourf`](matplotlib.axes.axes.contourf#matplotlib.axes.Axes.contourf "matplotlib.axes.Axes.contourf"). A list of cross hatch patterns to use on the filled areas. If None, no hatching will be added to the contour. Hatching is supported in the PostScript, PDF, SVG and Agg backends only. **algorithm**{'mpl2005', 'mpl2014', 'serial', 'threaded'}, optional Which contouring algorithm to use to calculate the contour lines and polygons. The algorithms are implemented in [ContourPy](https://github.com/contourpy/contourpy), consult the [ContourPy documentation](https://contourpy.readthedocs.io) for further information. The default is taken from `[rcParams["contour.algorithm"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=contour.algorithm#matplotlibrc-sample)` (default: `'mpl2014'`). **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). #### Notes 1. [`contourf`](matplotlib.axes.axes.contourf#matplotlib.axes.Axes.contourf "matplotlib.axes.Axes.contourf") differs from the MATLAB version in that it does not draw the polygon edges. To draw edges, add line contours with calls to [`contour`](#matplotlib.axes.Axes.contour "matplotlib.axes.Axes.contour"). 2. [`contourf`](matplotlib.axes.axes.contourf#matplotlib.axes.Axes.contourf "matplotlib.axes.Axes.contourf") fills intervals that are closed at the top; that is, for boundaries *z1* and *z2*, the filled region is: ``` z1 < Z <= z2 ``` except for the lowest interval, which is closed on both sides (i.e. it includes the lowest value). 3. [`contour`](#matplotlib.axes.Axes.contour "matplotlib.axes.Axes.contour") and [`contourf`](matplotlib.axes.axes.contourf#matplotlib.axes.Axes.contourf "matplotlib.axes.Axes.contourf") use a [marching squares](https://en.wikipedia.org/wiki/Marching_squares) algorithm to compute contour locations. More information can be found in [ContourPy documentation](https://contourpy.readthedocs.io). 
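As a minimal sketch tying the pieces above together (the Gaussian test field is an arbitrary assumption, purely for illustration):

```
import numpy as np
import matplotlib.pyplot as plt

# An illustrative smooth field sampled on a meshgrid.
x = np.linspace(-3, 3, 256)
y = np.linspace(-2, 2, 256)
X, Y = np.meshgrid(x, y)
Z = np.exp(-X**2 - Y**2) - 0.5 * np.exp(-(X - 1)**2 - (Y + 1)**2)

fig, ax = plt.subplots()
cs = ax.contour(X, Y, Z, levels=8)      # int levels: MaxNLocator picks "nice" values
ax.clabel(cs, inline=True, fontsize=8)  # annotate each line with its level
plt.show()
```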
Examples using `matplotlib.axes.Axes.contour` --------------------------------------------- [Contour Corner Mask](https://matplotlib.org/stable/gallery/images_contours_and_fields/contour_corner_mask.html#sphx-glr-gallery-images-contours-and-fields-contour-corner-mask-py) Contour Corner Mask [Contour Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/contour_demo.html#sphx-glr-gallery-images-contours-and-fields-contour-demo-py) Contour Demo [Contour Label Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/contour_label_demo.html#sphx-glr-gallery-images-contours-and-fields-contour-label-demo-py) Contour Label Demo [Contourf Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/contourf_demo.html#sphx-glr-gallery-images-contours-and-fields-contourf-demo-py) Contourf Demo [Contourf Hatching](https://matplotlib.org/stable/gallery/images_contours_and_fields/contourf_hatching.html#sphx-glr-gallery-images-contours-and-fields-contourf-hatching-py) Contourf Hatching [Contouring the solution space of optimizations](https://matplotlib.org/stable/gallery/images_contours_and_fields/contours_in_optimization_demo.html#sphx-glr-gallery-images-contours-and-fields-contours-in-optimization-demo-py) Contouring the solution space of optimizations [Blend transparency with color in 2D images](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_transparency_blend.html#sphx-glr-gallery-images-contours-and-fields-image-transparency-blend-py) Blend transparency with color in 2D images [Contour plot of irregularly spaced data](https://matplotlib.org/stable/gallery/images_contours_and_fields/irregulardatagrid.html#sphx-glr-gallery-images-contours-and-fields-irregulardatagrid-py) Contour plot of irregularly spaced data [Patheffect Demo](https://matplotlib.org/stable/gallery/misc/patheffect_demo.html#sphx-glr-gallery-misc-patheffect-demo-py) Patheffect Demo [TickedStroke patheffect](https://matplotlib.org/stable/gallery/misc/tickedstroke_demo.html#sphx-glr-gallery-misc-tickedstroke-demo-py) TickedStroke patheffect [Demonstrates plotting contour (level) curves in 3D](https://matplotlib.org/stable/gallery/mplot3d/contour3d.html#sphx-glr-gallery-mplot3d-contour3d-py) Demonstrates plotting contour (level) curves in 3D [Demonstrates plotting contour (level) curves in 3D using the extend3d option](https://matplotlib.org/stable/gallery/mplot3d/contour3d_2.html#sphx-glr-gallery-mplot3d-contour3d-2-py) Demonstrates plotting contour (level) curves in 3D using the extend3d option [Projecting contour profiles onto a graph](https://matplotlib.org/stable/gallery/mplot3d/contour3d_3.html#sphx-glr-gallery-mplot3d-contour3d-3-py) Projecting contour profiles onto a graph [contour(X, Y, Z)](https://matplotlib.org/stable/plot_types/arrays/contour.html#sphx-glr-plot-types-arrays-contour-py) contour(X, Y, Z)
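Rounding out the *hatches* parameter described above (which applies to `contourf` only), a small hedged sketch; the sample field and hatch choices are arbitrary:

```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, x)
Z = np.sin(X) * np.cos(Y)

fig, ax = plt.subplots()
# One hatch pattern per filled level; the list is cycled if it is shorter.
# Hatching is honored by the Agg, PDF, PostScript and SVG backends only.
cs = ax.contourf(X, Y, Z, levels=4, cmap='gray', alpha=0.5,
                 hatches=['/', '\\', '.', '*'])
fig.colorbar(cs)
plt.show()
```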
matplotlib matplotlib.pyplot.rc matplotlib.pyplot.rc ==================== matplotlib.pyplot.rc(*group*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L550-L552) Set the current [`rcParams`](../matplotlib_configuration_api#matplotlib.rcParams "matplotlib.rcParams"). *group* is the grouping for the rc, e.g., for `lines.linewidth` the group is `lines`, for `axes.facecolor`, the group is `axes`, and so on. Group may also be a list or tuple of group names, e.g., (*xtick*, *ytick*). *kwargs* is a dictionary of attribute name/value pairs, e.g.:

```
rc('lines', linewidth=2, color='r')
```

sets the current [`rcParams`](../matplotlib_configuration_api#matplotlib.rcParams "matplotlib.rcParams") and is equivalent to:

```
rcParams['lines.linewidth'] = 2
rcParams['lines.color'] = 'r'
```

The following aliases are available to save typing for interactive users: | Alias | Property | | --- | --- | | 'lw' | 'linewidth' | | 'ls' | 'linestyle' | | 'c' | 'color' | | 'fc' | 'facecolor' | | 'ec' | 'edgecolor' | | 'mew' | 'markeredgewidth' | | 'aa' | 'antialiased' | Thus you could abbreviate the above call as:

```
rc('lines', lw=2, c='r')
```

Note that you can use Python's kwargs dictionary facility to store dictionaries of default parameters, e.g., you can customize the font rc as follows:

```
from matplotlib import rc

font = {'family' : 'monospace',
        'weight' : 'bold',
        'size' : 'larger'}
rc('font', **font)  # pass in the font dict as kwargs
```

This enables you to easily switch between several configurations. Use `matplotlib.style.use('default')` or [`rcdefaults()`](../matplotlib_configuration_api#matplotlib.rcdefaults "matplotlib.rcdefaults") to restore the default [`rcParams`](../matplotlib_configuration_api#matplotlib.rcParams "matplotlib.rcParams") after changes. #### Notes Similar functionality is available by using the normal dict interface, i.e. `rcParams.update({"lines.linewidth": 2, ...})` (but `rcParams.update` does not support abbreviations or grouping). Examples using `matplotlib.pyplot.rc` ------------------------------------- [Customizing dashed line styles](https://matplotlib.org/stable/gallery/lines_bars_and_markers/line_demo_dash_control.html#sphx-glr-gallery-lines-bars-and-markers-line-demo-dash-control-py) Customizing dashed line styles [Styling with cycler](https://matplotlib.org/stable/tutorials/intermediate/color_cycle.html#sphx-glr-tutorials-intermediate-color-cycle-py) Styling with cycler matplotlib matplotlib.pyplot.margins matplotlib.pyplot.margins ========================= matplotlib.pyplot.margins(*\*margins*, *x=None*, *y=None*, *tight=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2653-L2655) Set or retrieve autoscaling margins. The padding added to each limit of the Axes is the *margin* times the data interval. All input parameters must be floats within the range [0, 1]. Passing both positional and keyword arguments is invalid and will raise a TypeError. If no arguments (positional or otherwise) are provided, the current margins will remain in place and simply be returned. Specifying any margin changes only the autoscaling; for example, if *xmargin* is not None, then *xmargin* times the X data interval will be added to each end of that interval before it is used in autoscaling. Parameters: **\*margins**float, optional If a single positional argument is provided, it specifies both margins of the x-axis and y-axis limits. If two positional arguments are provided, they will be interpreted as *xmargin*, *ymargin*.
If setting the margin on a single axis is desired, use the keyword arguments described below. **x, y**float, optional Specific margin values for the x-axis and y-axis, respectively. These cannot be used with positional arguments, but can be used individually to alter, e.g., only the y-axis. **tight**bool or None, default: True The *tight* parameter is passed to [`autoscale_view`](matplotlib.axes.axes.autoscale_view#matplotlib.axes.Axes.autoscale_view "matplotlib.axes.Axes.autoscale_view"), which is executed after a margin is changed; the default here is *True*, on the assumption that when margins are specified, no additional padding to match tick marks is usually desired. Setting *tight* to *None* preserves the previous setting. Returns: **xmargin, ymargin**float #### Notes If a previously used Axes method such as [`pcolor()`](matplotlib.pyplot.pcolor#matplotlib.pyplot.pcolor "matplotlib.pyplot.pcolor") has set `use_sticky_edges` to [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)"), only the limits not set by the "sticky artists" will be modified. To force all of the margins to be set, set `use_sticky_edges` to [`False`](https://docs.python.org/3/library/constants.html#False "(in Python v3.10)") before calling [`margins()`](#matplotlib.pyplot.margins "matplotlib.pyplot.margins"). Examples using `matplotlib.pyplot.margins` ------------------------------------------ [Rotating custom tick labels](https://matplotlib.org/stable/gallery/ticks/ticklabels_rotation.html#sphx-glr-gallery-ticks-ticklabels-rotation-py) Rotating custom tick labels matplotlib mpl_toolkits.mplot3d.proj3d.proj_transform_clip mpl\_toolkits.mplot3d.proj3d.proj\_transform\_clip ================================================== mpl\_toolkits.mplot3d.proj3d.proj\_transform\_clip(*xs*, *ys*, *zs*, *M*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/proj3d.py#L165-L172) Transform the points by the projection matrix and return the clipping result. Returns *txs*, *tys*, *tzs*, *tis*. matplotlib mpl_toolkits.axes_grid1.inset_locator.mark_inset mpl\_toolkits.axes\_grid1.inset\_locator.mark\_inset ==================================================== mpl\_toolkits.axes\_grid1.inset\_locator.mark\_inset(*parent\_axes*, *inset\_axes*, *loc1*, *loc2*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/inset_locator.py#L540-L592) Draw a box to mark the location of an area represented by an inset axes. This function draws a box in *parent\_axes* at the bounding box of *inset\_axes*, and shows a connection with the inset axes by drawing lines at the corners, giving a "zoomed in" effect. Parameters: **parent\_axes**[`matplotlib.axes.Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") Axes which contains the area of the inset axes. **inset\_axes**[`matplotlib.axes.Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") The inset axes. **loc1, loc2**{1, 2, 3, 4} Corners to use for connecting the inset axes and the area in the parent axes.
**\*\*kwargs** Patch properties for the lines and box drawn: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | 
[`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Returns: **pp**[`matplotlib.patches.Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") The patch drawn to represent the area of the inset axes. **p1, p2**[`matplotlib.patches.Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") The patches connecting two corners of the inset axes and its area. Examples using `mpl_toolkits.axes_grid1.inset_locator.mark_inset` ----------------------------------------------------------------- [Inset Locator Demo2](https://matplotlib.org/stable/gallery/axes_grid1/inset_locator_demo2.html#sphx-glr-gallery-axes-grid1-inset-locator-demo2-py) Inset Locator Demo2 matplotlib matplotlib.axes.Axes.plot_date matplotlib.axes.Axes.plot\_date =============================== Axes.plot\_date(*x*, *y*, *fmt='o'*, *tz=None*, *xdate=True*, *ydate=False*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L1671-L1750) [*Discouraged*] Plot coercing the axis to treat floats as dates. Discouraged This method exists for historic reasons and will be deprecated in the future. * `datetime`-like data should directly be plotted using [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). * If you need to plot plain numeric data as [Matplotlib date format](../dates_api#date-format) or need to set a timezone, call `ax.xaxis.axis_date` / `ax.yaxis.axis_date` before [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). See [`Axis.axis_date`](matplotlib.axis.axis.axis_date#matplotlib.axis.Axis.axis_date "matplotlib.axis.Axis.axis_date"). Similar to [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"), this plots *y* vs. *x* as lines or markers. 
However, the axis labels are formatted as dates depending on *xdate* and *ydate*. Note that [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot") will work with [`datetime`](https://docs.python.org/3/library/datetime.html#module-datetime "(in Python v3.10)") and [`numpy.datetime64`](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.datetime64 "(in NumPy v1.23)") objects without resorting to this method. Parameters: **x, y**array-like The coordinates of the data points. If *xdate* or *ydate* is *True*, the respective values *x* or *y* are interpreted as [Matplotlib dates](../dates_api#date-format). **fmt**str, optional The plot format string. For details, see the corresponding parameter in [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). **tz**timezone string or [`datetime.tzinfo`](https://docs.python.org/3/library/datetime.html#datetime.tzinfo "(in Python v3.10)"), default: `[rcParams["timezone"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=timezone#matplotlibrc-sample)` (default: `'UTC'`) The time zone to use in labeling dates. **xdate**bool, default: True If *True*, the *x*-axis will be interpreted as Matplotlib dates. **ydate**bool, default: False If *True*, the *y*-axis will be interpreted as Matplotlib dates. Returns: list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") Objects representing the plotted data. Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y* **\*\*kwargs** Keyword arguments control the [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | 
[`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | [`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | [`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor "matplotlib.lines.Line2D.set_gapcolor") | color or None | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects 
"matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`matplotlib.dates`](../dates_api#module-matplotlib.dates "matplotlib.dates") Helper functions on dates. [`matplotlib.dates.date2num`](../dates_api#matplotlib.dates.date2num "matplotlib.dates.date2num") Convert dates to num. [`matplotlib.dates.num2date`](../dates_api#matplotlib.dates.num2date "matplotlib.dates.num2date") Convert num to dates. [`matplotlib.dates.drange`](../dates_api#matplotlib.dates.drange "matplotlib.dates.drange") Create an equally spaced sequence of dates. #### Notes If you are using custom date tickers and formatters, it may be necessary to set the formatters/locators after the call to [`plot_date`](#matplotlib.axes.Axes.plot_date "matplotlib.axes.Axes.plot_date"). 
[`plot_date`](#matplotlib.axes.Axes.plot_date "matplotlib.axes.Axes.plot_date") will set the default tick locator to [`AutoDateLocator`](../dates_api#matplotlib.dates.AutoDateLocator "matplotlib.dates.AutoDateLocator") (if the tick locator is not already set to a [`DateLocator`](../dates_api#matplotlib.dates.DateLocator "matplotlib.dates.DateLocator") instance) and the default tick formatter to [`AutoDateFormatter`](../dates_api#matplotlib.dates.AutoDateFormatter "matplotlib.dates.AutoDateFormatter") (if the tick formatter is not already set to a [`DateFormatter`](../dates_api#matplotlib.dates.DateFormatter "matplotlib.dates.DateFormatter") instance).
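Since `plot_date` is discouraged, a minimal sketch of the recommended route, plotting `datetime` values directly with `plot` (the daily sample data is an arbitrary assumption):

```
import datetime
import matplotlib.pyplot as plt

# Ten daily samples starting 2022-01-01 (illustrative data only).
start = datetime.datetime(2022, 1, 1)
dates = [start + datetime.timedelta(days=i) for i in range(10)]
values = [i**0.5 for i in range(10)]

fig, ax = plt.subplots()
ax.plot(dates, values, 'o')  # plot() understands datetime x-values natively
fig.autofmt_xdate()          # rotate date labels so they don't overlap
plt.show()
```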
matplotlib matplotlib.pyplot.figure matplotlib.pyplot.figure ======================== matplotlib.pyplot.figure(*num=None*, *figsize=None*, *dpi=None*, *\**, *facecolor=None*, *edgecolor=None*, *frameon=True*, *FigureClass=<class 'matplotlib.figure.Figure'>*, *clear=False*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L654-L793) Create a new figure, or activate an existing figure. Parameters: **num**int or str or [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") or [`SubFigure`](../figure_api#matplotlib.figure.SubFigure "matplotlib.figure.SubFigure"), optional A unique identifier for the figure. If a figure with that identifier already exists, this figure is made active and returned. An integer refers to the `Figure.number` attribute, a string refers to the figure label. If there is no figure with the identifier or *num* is not given, a new figure is created, made active and returned. If *num* is an int, it will be used for the `Figure.number` attribute; otherwise, an auto-generated integer value is used (starting at 1 and incremented for each new figure). If *num* is a string, the figure label and the window title are set to this value. If num is a `SubFigure`, its parent `Figure` is activated. **figsize**(float, float), default: `[rcParams["figure.figsize"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=figure.figsize#matplotlibrc-sample)` (default: `[6.4, 4.8]`) Width, height in inches. **dpi**float, default: `[rcParams["figure.dpi"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=figure.dpi#matplotlibrc-sample)` (default: `100.0`) The resolution of the figure in dots-per-inch. **facecolor**color, default: `[rcParams["figure.facecolor"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=figure.facecolor#matplotlibrc-sample)` (default: `'white'`) The background color. **edgecolor**color, default: `[rcParams["figure.edgecolor"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=figure.edgecolor#matplotlibrc-sample)` (default: `'white'`) The border color. **frameon**bool, default: True If False, suppress drawing the figure frame. **FigureClass**subclass of [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") If set, an instance of this subclass will be created, rather than a plain [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure"). **clear**bool, default: False If True and the figure already exists, then it is cleared. **layout**{'constrained', 'tight', [`LayoutEngine`](../layout_engine_api#matplotlib.layout_engine.LayoutEngine "matplotlib.layout_engine.LayoutEngine"), None}, default: None The layout mechanism for positioning of plot elements to avoid overlapping Axes decorations (labels, ticks, etc). Note that layout managers can measurably slow down figure display. Defaults to *None* (but see the documentation of the [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") constructor regarding the interaction with rcParams). **\*\*kwargs** Additional keyword arguments are passed to the [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") constructor.
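A hedged sketch of the *num* and *clear* semantics just described (the string label "diagnostics" is an arbitrary assumption):

```
import matplotlib.pyplot as plt

fig = plt.figure("diagnostics")    # creates a figure labeled "diagnostics"
again = plt.figure("diagnostics")  # same identifier: re-activated, not recreated
assert again is fig

fig.add_subplot(111)
cleared = plt.figure("diagnostics", clear=True)  # same figure, but emptied
assert cleared is fig and not fig.axes

plt.close("all")  # release the figures when done (see Notes below)
```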
Returns: [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") #### Notes Newly created figures are passed to the [`new_manager`](../backend_bases_api#matplotlib.backend_bases.FigureCanvasBase.new_manager "matplotlib.backend_bases.FigureCanvasBase.new_manager") method or the [`new_figure_manager`](matplotlib.pyplot.new_figure_manager#matplotlib.pyplot.new_figure_manager "matplotlib.pyplot.new_figure_manager") function provided by the current backend, which installs a canvas and a manager on the figure. If you are creating many figures, make sure you explicitly call [`pyplot.close`](matplotlib.pyplot.close#matplotlib.pyplot.close "matplotlib.pyplot.close") on the figures you are not using, because this will enable pyplot to properly clean up the memory. [`rcParams`](../matplotlib_configuration_api#matplotlib.rcParams "matplotlib.rcParams") defines the default values, which can be modified in the matplotlibrc file. Examples using `matplotlib.pyplot.figure` ----------------------------------------- [Curve with error band](https://matplotlib.org/stable/gallery/lines_bars_and_markers/curve_error_band.html#sphx-glr-gallery-lines-bars-and-markers-curve-error-band-py) Curve with error band [Errorbar limit selection](https://matplotlib.org/stable/gallery/lines_bars_and_markers/errorbar_limits_simple.html#sphx-glr-gallery-lines-bars-and-markers-errorbar-limits-simple-py) Errorbar limit selection [EventCollection Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/eventcollection_demo.html#sphx-glr-gallery-lines-bars-and-markers-eventcollection-demo-py) EventCollection Demo [Filled polygon](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill.html#sphx-glr-gallery-lines-bars-and-markers-fill-py) Filled polygon [Scatter plot with histograms](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_hist.html#sphx-glr-gallery-lines-bars-and-markers-scatter-hist-py) Scatter plot with histograms [Barcode](https://matplotlib.org/stable/gallery/images_contours_and_fields/barcode_demo.html#sphx-glr-gallery-images-contours-and-fields-barcode-demo-py) Barcode [Figimage Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/figimage_demo.html#sphx-glr-gallery-images-contours-and-fields-figimage-demo-py) Figimage Demo [Layer Images](https://matplotlib.org/stable/gallery/images_contours_and_fields/layer_images.html#sphx-glr-gallery-images-contours-and-fields-layer-images-py) Layer Images [Aligning Labels](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/align_labels_demo.html#sphx-glr-gallery-subplots-axes-and-figures-align-labels-demo-py) Aligning Labels [Axes Zoom Effect](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_zoom_effect.html#sphx-glr-gallery-subplots-axes-and-figures-axes-zoom-effect-py) Axes Zoom Effect [Custom Figure subclasses](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/custom_figure_class.html#sphx-glr-gallery-subplots-axes-and-figures-custom-figure-class-py) Custom Figure subclasses [Resizing axes with constrained layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_constrained_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-constrained-layout-py) Resizing axes with constrained layout [Resizing axes with tight layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_tight_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-tight-layout-py) Resizing axes with tight layout [Geographic
Projections](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/geo_demo.html#sphx-glr-gallery-subplots-axes-and-figures-geo-demo-py) Geographic Projections [Using Gridspec to make multi-column/row subplot layouts](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_multicolumn.html#sphx-glr-gallery-subplots-axes-and-figures-gridspec-multicolumn-py) Using Gridspec to make multi-column/row subplot layouts [Nested Gridspecs](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_nested.html#sphx-glr-gallery-subplots-axes-and-figures-gridspec-nested-py) Nested Gridspecs [Managing multiple figures in pyplot](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/multiple_figs_demo.html#sphx-glr-gallery-subplots-axes-and-figures-multiple-figs-demo-py) Managing multiple figures in pyplot [Figure subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html#sphx-glr-gallery-subplots-axes-and-figures-subfigures-py) Figure subfigures [Creating multiple subplots using plt.subplots](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplots_demo.html#sphx-glr-gallery-subplots-axes-and-figures-subplots-demo-py) Creating multiple subplots using plt.subplots [Polar Legend](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_legend.html#sphx-glr-gallery-pie-and-polar-charts-polar-legend-py) Polar Legend [Scatter plot on polar axis](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_scatter.html#sphx-glr-gallery-pie-and-polar-charts-polar-scatter-py) Scatter plot on polar axis [Arrow Demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/arrow_demo.html#sphx-glr-gallery-text-labels-and-annotations-arrow-demo-py) Arrow Demo [Auto-wrapping text](https://matplotlib.org/stable/gallery/text_labels_and_annotations/autowrap.html#sphx-glr-gallery-text-labels-and-annotations-autowrap-py) Auto-wrapping text [Text Rotation Mode](https://matplotlib.org/stable/gallery/text_labels_and_annotations/demo_text_rotation_mode.html#sphx-glr-gallery-text-labels-and-annotations-demo-text-rotation-mode-py) Text Rotation Mode [The difference between \dfrac and \frac](https://matplotlib.org/stable/gallery/text_labels_and_annotations/dfrac_demo.html#sphx-glr-gallery-text-labels-and-annotations-dfrac-demo-py) The difference between \dfrac and \frac [Annotation arrow style reference](https://matplotlib.org/stable/gallery/text_labels_and_annotations/fancyarrow_demo.html#sphx-glr-gallery-text-labels-and-annotations-fancyarrow-demo-py) Annotation arrow style reference [Fonts demo (object-oriented style)](https://matplotlib.org/stable/gallery/text_labels_and_annotations/fonts_demo.html#sphx-glr-gallery-text-labels-and-annotations-fonts-demo-py) Fonts demo (object-oriented style) [Fonts demo (keyword arguments)](https://matplotlib.org/stable/gallery/text_labels_and_annotations/fonts_demo_kw.html#sphx-glr-gallery-text-labels-and-annotations-fonts-demo-kw-py) Fonts demo (keyword arguments) [Convert texts to images](https://matplotlib.org/stable/gallery/text_labels_and_annotations/mathtext_asarray.html#sphx-glr-gallery-text-labels-and-annotations-mathtext-asarray-py) Convert texts to images [Mathtext Examples](https://matplotlib.org/stable/gallery/text_labels_and_annotations/mathtext_examples.html#sphx-glr-gallery-text-labels-and-annotations-mathtext-examples-py) Mathtext Examples [Rainbow
text](https://matplotlib.org/stable/gallery/text_labels_and_annotations/rainbow_text.html#sphx-glr-gallery-text-labels-and-annotations-rainbow-text-py) Rainbow text [STIX Fonts](https://matplotlib.org/stable/gallery/text_labels_and_annotations/stix_fonts_demo.html#sphx-glr-gallery-text-labels-and-annotations-stix-fonts-demo-py) STIX Fonts [Unicode minus](https://matplotlib.org/stable/gallery/text_labels_and_annotations/unicode_minus.html#sphx-glr-gallery-text-labels-and-annotations-unicode-minus-py) Unicode minus [Usetex Baseline Test](https://matplotlib.org/stable/gallery/text_labels_and_annotations/usetex_baseline_test.html#sphx-glr-gallery-text-labels-and-annotations-usetex-baseline-test-py) Usetex Baseline Test [Usetex Fonteffects](https://matplotlib.org/stable/gallery/text_labels_and_annotations/usetex_fonteffects.html#sphx-glr-gallery-text-labels-and-annotations-usetex-fonteffects-py) Usetex Fonteffects [Annotation Polar](https://matplotlib.org/stable/gallery/pyplots/annotation_polar.html#sphx-glr-gallery-pyplots-annotation-polar-py) Annotation Polar [Fig Axes Customize Simple](https://matplotlib.org/stable/gallery/pyplots/fig_axes_customize_simple.html#sphx-glr-gallery-pyplots-fig-axes-customize-simple-py) Fig Axes Customize Simple [Simple axes labels](https://matplotlib.org/stable/gallery/pyplots/fig_axes_labels_simple.html#sphx-glr-gallery-pyplots-fig-axes-labels-simple-py) Simple axes labels [Adding lines to figures](https://matplotlib.org/stable/gallery/pyplots/fig_x.html#sphx-glr-gallery-pyplots-fig-x-py) Adding lines to figures [Pyplot Two Subplots](https://matplotlib.org/stable/gallery/pyplots/pyplot_two_subplots.html#sphx-glr-gallery-pyplots-pyplot-two-subplots-py) Pyplot Two Subplots [Text Commands](https://matplotlib.org/stable/gallery/pyplots/text_commands.html#sphx-glr-gallery-pyplots-text-commands-py) Text Commands [Text Layout](https://matplotlib.org/stable/gallery/pyplots/text_layout.html#sphx-glr-gallery-pyplots-text-layout-py) Text Layout [Drawing fancy boxes](https://matplotlib.org/stable/gallery/shapes_and_collections/fancybox_demo.html#sphx-glr-gallery-shapes-and-collections-fancybox-demo-py) Drawing fancy boxes [Hatch demo](https://matplotlib.org/stable/gallery/shapes_and_collections/hatch_demo.html#sphx-glr-gallery-shapes-and-collections-hatch-demo-py) Hatch demo [Axes Divider](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_divider.html#sphx-glr-gallery-axes-grid1-demo-axes-divider-py) Axes Divider [Demo Axes Grid](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_grid.html#sphx-glr-gallery-axes-grid1-demo-axes-grid-py) Demo Axes Grid [Axes Grid2](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_grid2.html#sphx-glr-gallery-axes-grid1-demo-axes-grid2-py) Axes Grid2 [Showing RGB channels using RGBAxes](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_rgb.html#sphx-glr-gallery-axes-grid1-demo-axes-rgb-py) Showing RGB channels using RGBAxes [Per-row or per-column colorbars](https://matplotlib.org/stable/gallery/axes_grid1/demo_edge_colorbar.html#sphx-glr-gallery-axes-grid1-demo-edge-colorbar-py) Per-row or per-column colorbars [Axes with a fixed physical size](https://matplotlib.org/stable/gallery/axes_grid1/demo_fixed_size_axes.html#sphx-glr-gallery-axes-grid1-demo-fixed-size-axes-py) Axes with a fixed physical size [Setting a fixed aspect on ImageGrid cells](https://matplotlib.org/stable/gallery/axes_grid1/demo_imagegrid_aspect.html#sphx-glr-gallery-axes-grid1-demo-imagegrid-aspect-py) Setting a fixed aspect on ImageGrid 
cells [Inset Locator Demo](https://matplotlib.org/stable/gallery/axes_grid1/inset_locator_demo.html#sphx-glr-gallery-axes-grid1-inset-locator-demo-py) Inset Locator Demo [Make room for ylabel using axes\_grid](https://matplotlib.org/stable/gallery/axes_grid1/make_room_for_ylabel_using_axesgrid.html#sphx-glr-gallery-axes-grid1-make-room-for-ylabel-using-axesgrid-py) Make room for ylabel using axes\_grid [Parasite Simple2](https://matplotlib.org/stable/gallery/axes_grid1/parasite_simple2.html#sphx-glr-gallery-axes-grid1-parasite-simple2-py) Parasite Simple2 [Simple Axes Divider 1](https://matplotlib.org/stable/gallery/axes_grid1/simple_axes_divider1.html#sphx-glr-gallery-axes-grid1-simple-axes-divider1-py) Simple Axes Divider 1 [Simple Axes Divider 3](https://matplotlib.org/stable/gallery/axes_grid1/simple_axes_divider3.html#sphx-glr-gallery-axes-grid1-simple-axes-divider3-py) Simple Axes Divider 3 [Simple ImageGrid](https://matplotlib.org/stable/gallery/axes_grid1/simple_axesgrid.html#sphx-glr-gallery-axes-grid1-simple-axesgrid-py) Simple ImageGrid [Simple ImageGrid 2](https://matplotlib.org/stable/gallery/axes_grid1/simple_axesgrid2.html#sphx-glr-gallery-axes-grid1-simple-axesgrid2-py) Simple ImageGrid 2 [Axis Direction](https://matplotlib.org/stable/gallery/axisartist/axis_direction.html#sphx-glr-gallery-axisartist-axis-direction-py) Axis Direction [axis\_direction demo](https://matplotlib.org/stable/gallery/axisartist/demo_axis_direction.html#sphx-glr-gallery-axisartist-demo-axis-direction-py) axis\_direction demo [Axis line styles](https://matplotlib.org/stable/gallery/axisartist/demo_axisline_style.html#sphx-glr-gallery-axisartist-demo-axisline-style-py) Axis line styles [Curvilinear grid demo](https://matplotlib.org/stable/gallery/axisartist/demo_curvelinear_grid.html#sphx-glr-gallery-axisartist-demo-curvelinear-grid-py) Curvilinear grid demo [Demo CurveLinear Grid2](https://matplotlib.org/stable/gallery/axisartist/demo_curvelinear_grid2.html#sphx-glr-gallery-axisartist-demo-curvelinear-grid2-py) Demo CurveLinear Grid2 [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) :mod:`mpl\_toolkits.axisartist.floating\_axes` features [floating\_axis demo](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axis.html#sphx-glr-gallery-axisartist-demo-floating-axis-py) floating\_axis demo [Parasite Axes demo](https://matplotlib.org/stable/gallery/axisartist/demo_parasite_axes.html#sphx-glr-gallery-axisartist-demo-parasite-axes-py) Parasite Axes demo [Ticklabel alignment](https://matplotlib.org/stable/gallery/axisartist/demo_ticklabel_alignment.html#sphx-glr-gallery-axisartist-demo-ticklabel-alignment-py) Ticklabel alignment [Ticklabel direction](https://matplotlib.org/stable/gallery/axisartist/demo_ticklabel_direction.html#sphx-glr-gallery-axisartist-demo-ticklabel-direction-py) Ticklabel direction [Simple Axis Direction01](https://matplotlib.org/stable/gallery/axisartist/simple_axis_direction01.html#sphx-glr-gallery-axisartist-simple-axis-direction01-py) Simple Axis Direction01 [Simple Axis Direction03](https://matplotlib.org/stable/gallery/axisartist/simple_axis_direction03.html#sphx-glr-gallery-axisartist-simple-axis-direction03-py) Simple Axis Direction03 [Simple Axis Pad](https://matplotlib.org/stable/gallery/axisartist/simple_axis_pad.html#sphx-glr-gallery-axisartist-simple-axis-pad-py) Simple Axis Pad [Custom spines with 
axisartist](https://matplotlib.org/stable/gallery/axisartist/simple_axisartist1.html#sphx-glr-gallery-axisartist-simple-axisartist1-py) Custom spines with axisartist [Simple Axisline](https://matplotlib.org/stable/gallery/axisartist/simple_axisline.html#sphx-glr-gallery-axisartist-simple-axisline-py) Simple Axisline [Simple Axisline3](https://matplotlib.org/stable/gallery/axisartist/simple_axisline3.html#sphx-glr-gallery-axisartist-simple-axisline3-py) Simple Axisline3 [Anatomy of a figure](https://matplotlib.org/stable/gallery/showcase/anatomy.html#sphx-glr-gallery-showcase-anatomy-py) Anatomy of a figure [Firefox](https://matplotlib.org/stable/gallery/showcase/firefox.html#sphx-glr-gallery-showcase-firefox-py) Firefox [Shaded & power normalized rendering](https://matplotlib.org/stable/gallery/showcase/mandelbrot.html#sphx-glr-gallery-showcase-mandelbrot-py) Shaded & power normalized rendering [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [The double pendulum problem](https://matplotlib.org/stable/gallery/animation/double_pendulum.html#sphx-glr-gallery-animation-double-pendulum-py) The double pendulum problem [Frame grabbing](https://matplotlib.org/stable/gallery/animation/frame_grabbing_sgskip.html#sphx-glr-gallery-animation-frame-grabbing-sgskip-py) Frame grabbing [Rain simulation](https://matplotlib.org/stable/gallery/animation/rain.html#sphx-glr-gallery-animation-rain-py) Rain simulation [Animated 3D random walk](https://matplotlib.org/stable/gallery/animation/random_walk.html#sphx-glr-gallery-animation-random-walk-py) Animated 3D random walk [MATPLOTLIB UNCHAINED](https://matplotlib.org/stable/gallery/animation/unchained.html#sphx-glr-gallery-animation-unchained-py) MATPLOTLIB \*\*UNCHAINED\*\* [Close Event](https://matplotlib.org/stable/gallery/event_handling/close_event.html#sphx-glr-gallery-event-handling-close-event-py) Close Event [Interactive functions](https://matplotlib.org/stable/gallery/event_handling/ginput_manual_clabel_sgskip.html#sphx-glr-gallery-event-handling-ginput-manual-clabel-sgskip-py) Interactive functions [Hyperlinks](https://matplotlib.org/stable/gallery/misc/hyperlinks_sgskip.html#sphx-glr-gallery-misc-hyperlinks-sgskip-py) Hyperlinks [Matplotlib logo](https://matplotlib.org/stable/gallery/misc/logos2.html#sphx-glr-gallery-misc-logos2-py) Matplotlib logo [Multipage PDF](https://matplotlib.org/stable/gallery/misc/multipage_pdf.html#sphx-glr-gallery-misc-multipage-pdf-py) Multipage PDF [SVG Filter Line](https://matplotlib.org/stable/gallery/misc/svg_filter_line.html#sphx-glr-gallery-misc-svg-filter-line-py) SVG Filter Line [SVG Filter Pie](https://matplotlib.org/stable/gallery/misc/svg_filter_pie.html#sphx-glr-gallery-misc-svg-filter-pie-py) SVG Filter Pie [transforms.offset\_copy](https://matplotlib.org/stable/gallery/misc/transoffset.html#sphx-glr-gallery-misc-transoffset-py) transforms.offset\_copy [Zorder Demo](https://matplotlib.org/stable/gallery/misc/zorder_demo.html#sphx-glr-gallery-misc-zorder-demo-py) Zorder Demo [Plot 2D data on 3D plot](https://matplotlib.org/stable/gallery/mplot3d/2dcollections3d.html#sphx-glr-gallery-mplot3d-2dcollections3d-py) Plot 2D data on 3D plot [Demo of 3D bar
charts](https://matplotlib.org/stable/gallery/mplot3d/3d_bars.html#sphx-glr-gallery-mplot3d-3d-bars-py) Demo of 3D bar charts [Create 2D bar graphs in different planes](https://matplotlib.org/stable/gallery/mplot3d/bars3d.html#sphx-glr-gallery-mplot3d-bars3d-py) Create 2D bar graphs in different planes [3D box surface plot](https://matplotlib.org/stable/gallery/mplot3d/box3d.html#sphx-glr-gallery-mplot3d-box3d-py) 3D box surface plot [Demonstrates plotting contour (level) curves in 3D](https://matplotlib.org/stable/gallery/mplot3d/contour3d.html#sphx-glr-gallery-mplot3d-contour3d-py) Demonstrates plotting contour (level) curves in 3D [Demonstrates plotting contour (level) curves in 3D using the extend3d option](https://matplotlib.org/stable/gallery/mplot3d/contour3d_2.html#sphx-glr-gallery-mplot3d-contour3d-2-py) Demonstrates plotting contour (level) curves in 3D using the extend3d option [Projecting contour profiles onto a graph](https://matplotlib.org/stable/gallery/mplot3d/contour3d_3.html#sphx-glr-gallery-mplot3d-contour3d-3-py) Projecting contour profiles onto a graph [Filled contours](https://matplotlib.org/stable/gallery/mplot3d/contourf3d.html#sphx-glr-gallery-mplot3d-contourf3d-py) Filled contours [Projecting filled contour onto a graph](https://matplotlib.org/stable/gallery/mplot3d/contourf3d_2.html#sphx-glr-gallery-mplot3d-contourf3d-2-py) Projecting filled contour onto a graph [3D errorbars](https://matplotlib.org/stable/gallery/mplot3d/errorbar3d.html#sphx-glr-gallery-mplot3d-errorbar3d-py) 3D errorbars [Create 3D histogram of 2D data](https://matplotlib.org/stable/gallery/mplot3d/hist3d.html#sphx-glr-gallery-mplot3d-hist3d-py) Create 3D histogram of 2D data [Parametric Curve](https://matplotlib.org/stable/gallery/mplot3d/lines3d.html#sphx-glr-gallery-mplot3d-lines3d-py) Parametric Curve [Lorenz Attractor](https://matplotlib.org/stable/gallery/mplot3d/lorenz_attractor.html#sphx-glr-gallery-mplot3d-lorenz-attractor-py) Lorenz Attractor [2D and 3D Axes in same Figure](https://matplotlib.org/stable/gallery/mplot3d/mixed_subplots.html#sphx-glr-gallery-mplot3d-mixed-subplots-py) 2D and 3D \*Axes\* in same \*Figure\* [Automatic Text Offsetting](https://matplotlib.org/stable/gallery/mplot3d/offset.html#sphx-glr-gallery-mplot3d-offset-py) Automatic Text Offsetting [Draw flat objects in 3D plot](https://matplotlib.org/stable/gallery/mplot3d/pathpatch3d.html#sphx-glr-gallery-mplot3d-pathpatch3d-py) Draw flat objects in 3D plot [Generate polygons to fill under 3D line graph](https://matplotlib.org/stable/gallery/mplot3d/polys3d.html#sphx-glr-gallery-mplot3d-polys3d-py) Generate polygons to fill under 3D line graph [3D quiver plot](https://matplotlib.org/stable/gallery/mplot3d/quiver3d.html#sphx-glr-gallery-mplot3d-quiver3d-py) 3D quiver plot [Rotating a 3D plot](https://matplotlib.org/stable/gallery/mplot3d/rotate_axes3d_sgskip.html#sphx-glr-gallery-mplot3d-rotate-axes3d-sgskip-py) Rotating a 3D plot [3D scatterplot](https://matplotlib.org/stable/gallery/mplot3d/scatter3d.html#sphx-glr-gallery-mplot3d-scatter3d-py) 3D scatterplot [3D plots as subplots](https://matplotlib.org/stable/gallery/mplot3d/subplot3d.html#sphx-glr-gallery-mplot3d-subplot3d-py) 3D plots as subplots [3D surface (solid color)](https://matplotlib.org/stable/gallery/mplot3d/surface3d_2.html#sphx-glr-gallery-mplot3d-surface3d-2-py) 3D surface (solid color) [3D surface (checkerboard)](https://matplotlib.org/stable/gallery/mplot3d/surface3d_3.html#sphx-glr-gallery-mplot3d-surface3d-3-py) 3D surface (checkerboard) [3D 
surface with polar coordinates](https://matplotlib.org/stable/gallery/mplot3d/surface3d_radial.html#sphx-glr-gallery-mplot3d-surface3d-radial-py) 3D surface with polar coordinates [Text annotations in 3D](https://matplotlib.org/stable/gallery/mplot3d/text3d.html#sphx-glr-gallery-mplot3d-text3d-py) Text annotations in 3D [Triangular 3D contour plot](https://matplotlib.org/stable/gallery/mplot3d/tricontour3d.html#sphx-glr-gallery-mplot3d-tricontour3d-py) Triangular 3D contour plot [Triangular 3D filled contour plot](https://matplotlib.org/stable/gallery/mplot3d/tricontourf3d.html#sphx-glr-gallery-mplot3d-tricontourf3d-py) Triangular 3D filled contour plot [Triangular 3D surfaces](https://matplotlib.org/stable/gallery/mplot3d/trisurf3d.html#sphx-glr-gallery-mplot3d-trisurf3d-py) Triangular 3D surfaces [More triangular 3D surfaces](https://matplotlib.org/stable/gallery/mplot3d/trisurf3d_2.html#sphx-glr-gallery-mplot3d-trisurf3d-2-py) More triangular 3D surfaces [3D voxel / volumetric plot](https://matplotlib.org/stable/gallery/mplot3d/voxels.html#sphx-glr-gallery-mplot3d-voxels-py) 3D voxel / volumetric plot [3D voxel plot of the numpy logo](https://matplotlib.org/stable/gallery/mplot3d/voxels_numpy_logo.html#sphx-glr-gallery-mplot3d-voxels-numpy-logo-py) 3D voxel plot of the numpy logo [3D voxel / volumetric plot with rgb colors](https://matplotlib.org/stable/gallery/mplot3d/voxels_rgb.html#sphx-glr-gallery-mplot3d-voxels-rgb-py) 3D voxel / volumetric plot with rgb colors [3D voxel / volumetric plot with cylindrical coordinates](https://matplotlib.org/stable/gallery/mplot3d/voxels_torus.html#sphx-glr-gallery-mplot3d-voxels-torus-py) 3D voxel / volumetric plot with cylindrical coordinates [3D wireframe plot](https://matplotlib.org/stable/gallery/mplot3d/wire3d.html#sphx-glr-gallery-mplot3d-wire3d-py) 3D wireframe plot [Animating a 3D wireframe plot](https://matplotlib.org/stable/gallery/mplot3d/wire3d_animation_sgskip.html#sphx-glr-gallery-mplot3d-wire3d-animation-sgskip-py) Animating a 3D wireframe plot [Asinh Demo](https://matplotlib.org/stable/gallery/scales/asinh_demo.html#sphx-glr-gallery-scales-asinh-demo-py) Asinh Demo [MRI with EEG](https://matplotlib.org/stable/gallery/specialty_plots/mri_with_eeg.html#sphx-glr-gallery-specialty-plots-mri-with-eeg-py) MRI with EEG [The Sankey class](https://matplotlib.org/stable/gallery/specialty_plots/sankey_basics.html#sphx-glr-gallery-specialty-plots-sankey-basics-py) The Sankey class [Long chain of connections using Sankey](https://matplotlib.org/stable/gallery/specialty_plots/sankey_links.html#sphx-glr-gallery-specialty-plots-sankey-links-py) Long chain of connections using Sankey [Rankine power cycle](https://matplotlib.org/stable/gallery/specialty_plots/sankey_rankine.html#sphx-glr-gallery-specialty-plots-sankey-rankine-py) Rankine power cycle [SkewT-logP diagram: using transforms and custom projections](https://matplotlib.org/stable/gallery/specialty_plots/skewt.html#sphx-glr-gallery-specialty-plots-skewt-py) SkewT-logP diagram: using transforms and custom projections [Spine Placement](https://matplotlib.org/stable/gallery/spines/spine_placement_demo.html#sphx-glr-gallery-spines-spine-placement-demo-py) Spine Placement [Ellipse with units](https://matplotlib.org/stable/gallery/units/ellipse_with_units.html#sphx-glr-gallery-units-ellipse-with-units-py) Ellipse with units [SVG Histogram](https://matplotlib.org/stable/gallery/user_interfaces/svg_histogram_sgskip.html#sphx-glr-gallery-user-interfaces-svg-histogram-sgskip-py) SVG Histogram [Tool 
Manager](https://matplotlib.org/stable/gallery/user_interfaces/toolmanager_sgskip.html#sphx-glr-gallery-user-interfaces-toolmanager-sgskip-py) Tool Manager [subplot2grid demo](https://matplotlib.org/stable/gallery/userdemo/demo_gridspec01.html#sphx-glr-gallery-userdemo-demo-gridspec01-py) subplot2grid demo [GridSpec demo](https://matplotlib.org/stable/gallery/userdemo/demo_gridspec03.html#sphx-glr-gallery-userdemo-demo-gridspec03-py) GridSpec demo [Nested GridSpecs](https://matplotlib.org/stable/gallery/userdemo/demo_gridspec06.html#sphx-glr-gallery-userdemo-demo-gridspec06-py) Nested GridSpecs [Simple Legend01](https://matplotlib.org/stable/gallery/userdemo/simple_legend01.html#sphx-glr-gallery-userdemo-simple-legend01-py) Simple Legend01 [Menu](https://matplotlib.org/stable/gallery/widgets/menu.html#sphx-glr-gallery-widgets-menu-py) Menu [Rectangle and ellipse selectors](https://matplotlib.org/stable/gallery/widgets/rectangle_selector.html#sphx-glr-gallery-widgets-rectangle-selector-py) Rectangle and ellipse selectors [Pyplot tutorial](https://matplotlib.org/stable/tutorials/introductory/pyplot.html#sphx-glr-tutorials-introductory-pyplot-py) Pyplot tutorial [Image tutorial](https://matplotlib.org/stable/tutorials/introductory/images.html#sphx-glr-tutorials-introductory-images-py) Image tutorial [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide [Arranging multiple Axes in a Figure](https://matplotlib.org/stable/tutorials/intermediate/arranging_axes.html#sphx-glr-tutorials-intermediate-arranging-axes-py) Arranging multiple Axes in a Figure [origin and extent in imshow](https://matplotlib.org/stable/tutorials/intermediate/imshow_extent.html#sphx-glr-tutorials-intermediate-imshow-extent-py) \*origin\* and \*extent\* in `~.Axes.imshow` [Path effects guide](https://matplotlib.org/stable/tutorials/advanced/patheffects_guide.html#sphx-glr-tutorials-advanced-patheffects-guide-py) Path effects guide [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial [Specifying Colors](https://matplotlib.org/stable/tutorials/colors/colors.html#sphx-glr-tutorials-colors-colors-py) Specifying Colors [Complex and semantic figure composition](https://matplotlib.org/stable/tutorials/provisional/mosaic.html#sphx-glr-tutorials-provisional-mosaic-py) Complex and semantic figure composition [Text in Matplotlib Plots](https://matplotlib.org/stable/tutorials/text/text_intro.html#sphx-glr-tutorials-text-text-intro-py) Text in Matplotlib Plots [Text properties and layout](https://matplotlib.org/stable/tutorials/text/text_props.html#sphx-glr-tutorials-text-text-props-py) Text properties and layout
matplotlib matplotlib.axes.Axes.set_xlabel matplotlib.axes.Axes.set\_xlabel ================================ Axes.set\_xlabel(*xlabel*, *fontdict=None*, *labelpad=None*, *\**, *loc=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3463-L3512) Set the label for the x-axis. Parameters: **xlabel**str The label text. **labelpad**float, default: `[rcParams["axes.labelpad"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.labelpad#matplotlibrc-sample)` (default: `4.0`) Spacing in points from the Axes bounding box including ticks and tick labels. If None, the previous value is left as is. **loc**{'left', 'center', 'right'}, default: `[rcParams["xaxis.labellocation"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=xaxis.labellocation#matplotlibrc-sample)` (default: `'center'`) The label position. This is a high-level alternative for passing parameters *x* and *horizontalalignment*. Other Parameters: **\*\*kwargs**[`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") properties [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") properties control the appearance of the label. See also [`text`](matplotlib.axes.axes.text#matplotlib.axes.Axes.text "matplotlib.axes.Axes.text") Documents the properties supported by [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"). Examples using `matplotlib.axes.Axes.set_xlabel` ------------------------------------------------ [Bar Label Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_label_demo.html#sphx-glr-gallery-lines-bars-and-markers-bar-label-demo-py) Bar Label Demo [Horizontal bar chart](https://matplotlib.org/stable/gallery/lines_bars_and_markers/barh.html#sphx-glr-gallery-lines-bars-and-markers-barh-py) Horizontal bar chart [Broken Barh](https://matplotlib.org/stable/gallery/lines_bars_and_markers/broken_barh.html#sphx-glr-gallery-lines-bars-and-markers-broken-barh-py) Broken Barh [CSD Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/csd_demo.html#sphx-glr-gallery-lines-bars-and-markers-csd-demo-py) CSD Demo [Fill Between and Alpha](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_alpha.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-alpha-py) Fill Between and Alpha [Filling the area between lines](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_demo.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-demo-py) Filling the area between lines [Fill Betweenx Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_betweenx_demo.html#sphx-glr-gallery-lines-bars-and-markers-fill-betweenx-demo-py) Fill Betweenx Demo [Hatch-filled histograms](https://matplotlib.org/stable/gallery/lines_bars_and_markers/filled_step.html#sphx-glr-gallery-lines-bars-and-markers-filled-step-py) Hatch-filled histograms [Hat graph](https://matplotlib.org/stable/gallery/lines_bars_and_markers/hat_graph.html#sphx-glr-gallery-lines-bars-and-markers-hat-graph-py) Hat graph [Mapping marker properties to multivariate data](https://matplotlib.org/stable/gallery/lines_bars_and_markers/multivariate_marker_plot.html#sphx-glr-gallery-lines-bars-and-markers-multivariate-marker-plot-py) Mapping marker properties to multivariate data [Scatter plots with custom 
symbols](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_custom_symbol.html#sphx-glr-gallery-lines-bars-and-markers-scatter-custom-symbol-py) Scatter plots with custom symbols [Scatter Demo2](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_demo2.html#sphx-glr-gallery-lines-bars-and-markers-scatter-demo2-py) Scatter Demo2 [Stackplots and streamgraphs](https://matplotlib.org/stable/gallery/lines_bars_and_markers/stackplot_demo.html#sphx-glr-gallery-lines-bars-and-markers-stackplot-demo-py) Stackplots and streamgraphs [hlines and vlines](https://matplotlib.org/stable/gallery/lines_bars_and_markers/vline_hline_demo.html#sphx-glr-gallery-lines-bars-and-markers-vline-hline-demo-py) hlines and vlines [Contourf Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/contourf_demo.html#sphx-glr-gallery-images-contours-and-fields-contourf-demo-py) Contourf Demo [Tricontour Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_demo.html#sphx-glr-gallery-images-contours-and-fields-tricontour-demo-py) Tricontour Demo [Tripcolor Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/tripcolor_demo.html#sphx-glr-gallery-images-contours-and-fields-tripcolor-demo-py) Tripcolor Demo [Triplot Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/triplot_demo.html#sphx-glr-gallery-images-contours-and-fields-triplot-demo-py) Triplot Demo [Aligning Labels](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/align_labels_demo.html#sphx-glr-gallery-subplots-axes-and-figures-align-labels-demo-py) Aligning Labels [Axes Demo](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axes-demo-py) Axes Demo [Axis Label Position](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axis_labels_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axis-labels-demo-py) Axis Label Position [Resizing axes with constrained layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_constrained_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-constrained-layout-py) Resizing axes with constrained layout [Resizing axes with tight layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_tight_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-tight-layout-py) Resizing axes with tight layout [Figure labels: suptitle, supxlabel, supylabel](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/figure_title.html#sphx-glr-gallery-subplots-axes-and-figures-figure-title-py) Figure labels: suptitle, supxlabel, supylabel [Invert Axes](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/invert_axes.html#sphx-glr-gallery-subplots-axes-and-figures-invert-axes-py) Invert Axes [Secondary Axis](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/secondary_axis.html#sphx-glr-gallery-subplots-axes-and-figures-secondary-axis-py) Secondary Axis [Figure subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html#sphx-glr-gallery-subplots-axes-and-figures-subfigures-py) Figure subfigures [Multiple subplots](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplot.html#sphx-glr-gallery-subplots-axes-and-figures-subplot-py) Multiple subplots [Plots with different scales](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/two_scales.html#sphx-glr-gallery-subplots-axes-and-figures-two-scales-py) Plots 
with different scales [Box plots with custom fill colors](https://matplotlib.org/stable/gallery/statistics/boxplot_color.html#sphx-glr-gallery-statistics-boxplot-color-py) Box plots with custom fill colors [Boxplots](https://matplotlib.org/stable/gallery/statistics/boxplot_demo.html#sphx-glr-gallery-statistics-boxplot-demo-py) Boxplots [Box plot vs. violin plot comparison](https://matplotlib.org/stable/gallery/statistics/boxplot_vs_violin.html#sphx-glr-gallery-statistics-boxplot-vs-violin-py) Box plot vs. violin plot comparison [Violin plot customization](https://matplotlib.org/stable/gallery/statistics/customized_violin.html#sphx-glr-gallery-statistics-customized-violin-py) Violin plot customization [Using histograms to plot a cumulative distribution](https://matplotlib.org/stable/gallery/statistics/histogram_cumulative.html#sphx-glr-gallery-statistics-histogram-cumulative-py) Using histograms to plot a cumulative distribution [Some features of the histogram (hist) function](https://matplotlib.org/stable/gallery/statistics/histogram_features.html#sphx-glr-gallery-statistics-histogram-features-py) Some features of the histogram (hist) function [Producing multiple histograms side by side](https://matplotlib.org/stable/gallery/statistics/multiple_histograms_side_by_side.html#sphx-glr-gallery-statistics-multiple-histograms-side-by-side-py) Producing multiple histograms side by side [Using accented text in Matplotlib](https://matplotlib.org/stable/gallery/text_labels_and_annotations/accented_text.html#sphx-glr-gallery-text-labels-and-annotations-accented-text-py) Using accented text in Matplotlib [Labeling ticks using engineering notation](https://matplotlib.org/stable/gallery/text_labels_and_annotations/engineering_formatter.html#sphx-glr-gallery-text-labels-and-annotations-engineering-formatter-py) Labeling ticks using engineering notation [Using a ttf font file in Matplotlib](https://matplotlib.org/stable/gallery/text_labels_and_annotations/font_file.html#sphx-glr-gallery-text-labels-and-annotations-font-file-py) Using a ttf font file in Matplotlib [Legend Demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/legend_demo.html#sphx-glr-gallery-text-labels-and-annotations-legend-demo-py) Legend Demo [Mathtext](https://matplotlib.org/stable/gallery/text_labels_and_annotations/mathtext_demo.html#sphx-glr-gallery-text-labels-and-annotations-mathtext-demo-py) Mathtext [Multiline](https://matplotlib.org/stable/gallery/text_labels_and_annotations/multiline.html#sphx-glr-gallery-text-labels-and-annotations-multiline-py) Multiline [Rendering math equations using TeX](https://matplotlib.org/stable/gallery/text_labels_and_annotations/tex_demo.html#sphx-glr-gallery-text-labels-and-annotations-tex-demo-py) Rendering math equations using TeX [Title positioning](https://matplotlib.org/stable/gallery/text_labels_and_annotations/titles_demo.html#sphx-glr-gallery-text-labels-and-annotations-titles-demo-py) Title positioning [Simple axes labels](https://matplotlib.org/stable/gallery/pyplots/fig_axes_labels_simple.html#sphx-glr-gallery-pyplots-fig-axes-labels-simple-py) Simple axes labels [Text Commands](https://matplotlib.org/stable/gallery/pyplots/text_commands.html#sphx-glr-gallery-pyplots-text-commands-py) Text Commands [Color Demo](https://matplotlib.org/stable/gallery/color/color_demo.html#sphx-glr-gallery-color-color-demo-py) Color Demo [Line, Poly and RegularPoly Collection with 
autoscaling](https://matplotlib.org/stable/gallery/shapes_and_collections/collections.html#sphx-glr-gallery-shapes-and-collections-collections-py) Line, Poly and RegularPoly Collection with autoscaling [Ellipse Collection](https://matplotlib.org/stable/gallery/shapes_and_collections/ellipse_collection.html#sphx-glr-gallery-shapes-and-collections-ellipse-collection-py) Ellipse Collection [Dark background style sheet](https://matplotlib.org/stable/gallery/style_sheets/dark_background.html#sphx-glr-gallery-style-sheets-dark-background-py) Dark background style sheet [Make room for ylabel using axes\_grid](https://matplotlib.org/stable/gallery/axes_grid1/make_room_for_ylabel_using_axesgrid.html#sphx-glr-gallery-axes-grid1-make-room-for-ylabel-using-axesgrid-py) Make room for ylabel using axes\_grid [Parasite Simple](https://matplotlib.org/stable/gallery/axes_grid1/parasite_simple.html#sphx-glr-gallery-axes-grid1-parasite-simple-py) Parasite Simple [Parasite Axes demo](https://matplotlib.org/stable/gallery/axisartist/demo_parasite_axes.html#sphx-glr-gallery-axisartist-demo-parasite-axes-py) Parasite Axes demo [Parasite axis demo](https://matplotlib.org/stable/gallery/axisartist/demo_parasite_axes2.html#sphx-glr-gallery-axisartist-demo-parasite-axes2-py) Parasite axis demo [Ticklabel alignment](https://matplotlib.org/stable/gallery/axisartist/demo_ticklabel_alignment.html#sphx-glr-gallery-axisartist-demo-ticklabel-alignment-py) Ticklabel alignment [Simple Axis Direction03](https://matplotlib.org/stable/gallery/axisartist/simple_axis_direction03.html#sphx-glr-gallery-axisartist-simple-axis-direction03-py) Simple Axis Direction03 [Simple Axisline](https://matplotlib.org/stable/gallery/axisartist/simple_axisline.html#sphx-glr-gallery-axisartist-simple-axisline-py) Simple Axisline [Anatomy of a figure](https://matplotlib.org/stable/gallery/showcase/anatomy.html#sphx-glr-gallery-showcase-anatomy-py) Anatomy of a figure [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [Keypress event](https://matplotlib.org/stable/gallery/event_handling/keypress_demo.html#sphx-glr-gallery-event-handling-keypress-demo-py) Keypress event [Pythonic Matplotlib](https://matplotlib.org/stable/gallery/misc/pythonic_matplotlib.html#sphx-glr-gallery-misc-pythonic-matplotlib-py) Pythonic Matplotlib [Plot 2D data on 3D plot](https://matplotlib.org/stable/gallery/mplot3d/2dcollections3d.html#sphx-glr-gallery-mplot3d-2dcollections3d-py) Plot 2D data on 3D plot [Create 2D bar graphs in different planes](https://matplotlib.org/stable/gallery/mplot3d/bars3d.html#sphx-glr-gallery-mplot3d-bars3d-py) Create 2D bar graphs in different planes [3D errorbars](https://matplotlib.org/stable/gallery/mplot3d/errorbar3d.html#sphx-glr-gallery-mplot3d-errorbar3d-py) 3D errorbars [Lorenz Attractor](https://matplotlib.org/stable/gallery/mplot3d/lorenz_attractor.html#sphx-glr-gallery-mplot3d-lorenz-attractor-py) Lorenz Attractor [Automatic Text Offsetting](https://matplotlib.org/stable/gallery/mplot3d/offset.html#sphx-glr-gallery-mplot3d-offset-py) Automatic Text Offsetting [3D scatterplot](https://matplotlib.org/stable/gallery/mplot3d/scatter3d.html#sphx-glr-gallery-mplot3d-scatter3d-py) 3D scatterplot [3D surface with polar coordinates](https://matplotlib.org/stable/gallery/mplot3d/surface3d_radial.html#sphx-glr-gallery-mplot3d-surface3d-radial-py) 3D surface with polar coordinates [Text annotations in 
3D](https://matplotlib.org/stable/gallery/mplot3d/text3d.html#sphx-glr-gallery-mplot3d-text3d-py) Text annotations in 3D [Asinh Demo](https://matplotlib.org/stable/gallery/scales/asinh_demo.html#sphx-glr-gallery-scales-asinh-demo-py) Asinh Demo [Log Bar](https://matplotlib.org/stable/gallery/scales/log_bar.html#sphx-glr-gallery-scales-log-bar-py) Log Bar [MRI with EEG](https://matplotlib.org/stable/gallery/specialty_plots/mri_with_eeg.html#sphx-glr-gallery-specialty-plots-mri-with-eeg-py) MRI with EEG [Multiple Yaxis With Spines](https://matplotlib.org/stable/gallery/spines/multiple_yaxis_with_spines.html#sphx-glr-gallery-spines-multiple-yaxis-with-spines-py) Multiple Yaxis With Spines [Centering labels between ticks](https://matplotlib.org/stable/gallery/ticks/centered_ticklabels.html#sphx-glr-gallery-ticks-centered-ticklabels-py) Centering labels between ticks [PGF fonts](https://matplotlib.org/stable/gallery/userdemo/pgf_fonts.html#sphx-glr-gallery-userdemo-pgf-fonts-py) PGF fonts [PGF texsystem](https://matplotlib.org/stable/gallery/userdemo/pgf_texsystem.html#sphx-glr-gallery-userdemo-pgf-texsystem-py) PGF texsystem [Slider](https://matplotlib.org/stable/gallery/widgets/slider_demo.html#sphx-glr-gallery-widgets-slider-demo-py) Slider [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide [Arranging multiple Axes in a Figure](https://matplotlib.org/stable/tutorials/intermediate/arranging_axes.html#sphx-glr-tutorials-intermediate-arranging-axes-py) Arranging multiple Axes in a Figure [Choosing Colormaps in Matplotlib](https://matplotlib.org/stable/tutorials/colors/colormaps.html#sphx-glr-tutorials-colors-colormaps-py) Choosing Colormaps in Matplotlib [Text in Matplotlib Plots](https://matplotlib.org/stable/tutorials/text/text_intro.html#sphx-glr-tutorials-text-text-intro-py) Text in Matplotlib Plots matplotlib mpl_toolkits.axes_grid1.anchored_artists mpl\_toolkits.axes\_grid1.anchored\_artists =========================================== Classes ------- | | | | --- | --- | | [`AnchoredAuxTransformBox`](mpl_toolkits.axes_grid1.anchored_artists.anchoredauxtransformbox#mpl_toolkits.axes_grid1.anchored_artists.AnchoredAuxTransformBox "mpl_toolkits.axes_grid1.anchored_artists.AnchoredAuxTransformBox")(transform, loc[, ...]) | An anchored container with transformed coordinates. | | [`AnchoredDirectionArrows`](mpl_toolkits.axes_grid1.anchored_artists.anchoreddirectionarrows#mpl_toolkits.axes_grid1.anchored_artists.AnchoredDirectionArrows "mpl_toolkits.axes_grid1.anchored_artists.AnchoredDirectionArrows")(transform, label\_x, ...) | Draw two perpendicular arrows to indicate directions. | | [`AnchoredDrawingArea`](mpl_toolkits.axes_grid1.anchored_artists.anchoreddrawingarea#mpl_toolkits.axes_grid1.anchored_artists.AnchoredDrawingArea "mpl_toolkits.axes_grid1.anchored_artists.AnchoredDrawingArea")(width, height, xdescent, ...) | An anchored container with a fixed size and fillable DrawingArea. 
| | [`AnchoredEllipse`](mpl_toolkits.axes_grid1.anchored_artists.anchoredellipse#mpl_toolkits.axes_grid1.anchored_artists.AnchoredEllipse "mpl_toolkits.axes_grid1.anchored_artists.AnchoredEllipse")(transform, width, height, ...) | Draw an anchored ellipse of a given size. | | [`AnchoredSizeBar`](mpl_toolkits.axes_grid1.anchored_artists.anchoredsizebar#mpl_toolkits.axes_grid1.anchored_artists.AnchoredSizeBar "mpl_toolkits.axes_grid1.anchored_artists.AnchoredSizeBar")(transform, size, label, loc) | Draw a horizontal scale bar with a center-aligned label underneath. | matplotlib mpl_toolkits.mplot3d.art3d.Patch3D mpl\_toolkits.mplot3d.art3d.Patch3D =================================== *class*mpl\_toolkits.mplot3d.art3d.Patch3D(*\*args*, *zs=()*, *zdir='z'*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L323-L346) Bases: [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") 3D patch object. The following kwarg properties are supported | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | 
{'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | do\_3d\_projection()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L340-L346) get\_path()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L337-L338) Return the path of this patch. 
set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *antialiased=<UNSET>*, *capstyle=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *color=<UNSET>*, *edgecolor=<UNSET>*, *facecolor=<UNSET>*, *fill=<UNSET>*, *gid=<UNSET>*, *hatch=<UNSET>*, *in\_layout=<UNSET>*, *joinstyle=<UNSET>*, *label=<UNSET>*, *linestyle=<UNSET>*, *linewidth=<UNSET>*, *mouseover=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. Supported properties are | Property | Description | | --- | --- | | [`3d_properties`](#mpl_toolkits.mplot3d.art3d.Patch3D.set_3d_properties "mpl_toolkits.mplot3d.art3d.Patch3D.set_3d_properties") | unknown | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout 
"matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | set\_3d\_properties(*verts*, *zs=0*, *zdir='z'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L332-L335)
matplotlib matplotlib.artist.Artist.get_sketch_params matplotlib.artist.Artist.get\_sketch\_params ============================================ Artist.get\_sketch\_params()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L641-L659) Return the sketch parameters for the artist. Returns: tuple or None A 3-tuple with the following elements: * *scale*: The amplitude of the wiggle perpendicular to the source line. * *length*: The length of the wiggle along the line. * *randomness*: The scale factor by which the length is shrunken or expanded. Returns *None* if no sketch parameters were set. matplotlib matplotlib.colors.LightSource matplotlib.colors.LightSource ============================= *class*matplotlib.colors.LightSource(*azdeg=315*, *altdeg=45*, *hsv\_min\_val=0*, *hsv\_max\_val=1*, *hsv\_min\_sat=1*, *hsv\_max\_sat=0*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2187-L2598) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") Create a light source coming from the specified azimuth and elevation. Angles are in degrees, with the azimuth measured clockwise from north and elevation up from the zero plane of the surface. [`shade`](#matplotlib.colors.LightSource.shade "matplotlib.colors.LightSource.shade") is used to produce "shaded" rgb values for a data array. [`shade_rgb`](#matplotlib.colors.LightSource.shade_rgb "matplotlib.colors.LightSource.shade_rgb") can be used to combine an rgb image with an elevation map. [`hillshade`](#matplotlib.colors.LightSource.hillshade "matplotlib.colors.LightSource.hillshade") produces an illumination map of a surface. Specify the azimuth (measured clockwise from north) and altitude (measured up from the plane of the surface) of the light source in degrees. Parameters: **azdeg**float, default: 315 degrees (from the northwest) The azimuth (0-360, degrees clockwise from North) of the light source. **altdeg**float, default: 45 degrees The altitude (0-90, degrees up from horizontal) of the light source. #### Notes For backwards compatibility, the parameters *hsv\_min\_val*, *hsv\_max\_val*, *hsv\_min\_sat*, and *hsv\_max\_sat* may be supplied at initialization as well. However, these parameters will only be used if "blend\_mode='hsv'" is passed into [`shade`](#matplotlib.colors.LightSource.shade "matplotlib.colors.LightSource.shade") or [`shade_rgb`](#matplotlib.colors.LightSource.shade_rgb "matplotlib.colors.LightSource.shade_rgb"). See the documentation for [`blend_hsv`](#matplotlib.colors.LightSource.blend_hsv "matplotlib.colors.LightSource.blend_hsv") for more details. blend\_hsv(*rgb*, *intensity*, *hsv\_max\_sat=None*, *hsv\_max\_val=None*, *hsv\_min\_val=None*, *hsv\_min\_sat=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2489-L2559) Take the input data array, convert to HSV values in the given colormap, then adjust those color values to give the impression of a shaded relief map with a specified light source. RGBA values are returned, which can then be used to plot the shaded image with imshow. The color of the resulting image will be darkened by moving the (s, v) values (in hsv colorspace) toward (hsv\_min\_sat, hsv\_min\_val) in the shaded regions, or lightened by sliding (s, v) toward (hsv\_max\_sat, hsv\_max\_val) in regions that are illuminated. 
The default extremes are chosen so that completely shaded points are nearly black (s = 1, v = 0) and completely illuminated points are nearly white (s = 0, v = 1). Parameters: **rgb**ndarray An MxNx3 RGB array of floats ranging from 0 to 1 (color image). **intensity**ndarray An MxNx1 array of floats ranging from 0 to 1 (grayscale image). **hsv\_max\_sat**number, default: 1 The maximum saturation value that the *intensity* map can shift the output image to. **hsv\_min\_sat**number, optional The minimum saturation value that the *intensity* map can shift the output image to. Defaults to 0. **hsv\_max\_val**number, optional The maximum value ("v" in "hsv") that the *intensity* map can shift the output image to. Defaults to 1. **hsv\_min\_val**number, optional The minimum value ("v" in "hsv") that the *intensity* map can shift the output image to. Defaults to 0. Returns: ndarray An MxNx3 RGB array representing the combined images. blend\_overlay(*rgb*, *intensity*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2580-L2598) Combine an rgb image with an intensity map using "overlay" blending. Parameters: **rgb**ndarray An MxNx3 RGB array of floats ranging from 0 to 1 (color image). **intensity**ndarray An MxNx1 array of floats ranging from 0 to 1 (grayscale image). Returns: ndarray An MxNx3 RGB array representing the combined images. blend\_soft\_light(*rgb*, *intensity*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2561-L2578) Combine an rgb image with an intensity map using "soft light" blending, using the "pegtop" formula. Parameters: **rgb**ndarray An MxNx3 RGB array of floats ranging from 0 to 1 (color image). **intensity**ndarray An MxNx1 array of floats ranging from 0 to 1 (grayscale image). Returns: ndarray An MxNx3 RGB array representing the combined images. *property*direction The unit vector direction towards the light source. hillshade(*elevation*, *vert\_exag=1*, *dx=1*, *dy=1*, *fraction=1.0*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2242-L2293) Calculate the illumination intensity for a surface using the defined azimuth and elevation for the light source. This computes the normal vectors for the surface, and then passes them on to [`shade_normals`](#matplotlib.colors.LightSource.shade_normals "matplotlib.colors.LightSource.shade_normals"). Parameters: **elevation**2D array-like The height values used to generate an illumination map. **vert\_exag**number, optional The amount to exaggerate the elevation values by when calculating illumination. This can be used either to correct for differences in units between the x-y coordinate system and the elevation coordinate system (e.g. decimal degrees vs. meters) or to exaggerate or de-emphasize topographic effects. **dx**number, optional The x-spacing (columns) of the input *elevation* grid. **dy**number, optional The y-spacing (rows) of the input *elevation* grid. **fraction**number, optional Increases or decreases the contrast of the hillshade. Values greater than one will cause intermediate values to move closer to full illumination or shadow (and clipping any values that move beyond 0 or 1). Note that this is not visually or mathematically the same as vertical exaggeration. Returns: ndarray A 2D array of illumination values between 0-1, where 0 is completely in shadow and 1 is completely illuminated. 
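As a minimal sketch of `hillshade` on synthetic data (the bump surface and grid spacings below are invented for illustration; the resulting intensity map is the same quantity that `shade` blends with colormapped data internally):

```
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LightSource

# Synthetic elevation data: a single smooth bump.
y, x = np.mgrid[-2:2:200j, -2:2:200j]
elevation = np.exp(-(x**2 + y**2))

ls = LightSource(azdeg=315, altdeg=45)
# dx/dy give the grid spacing; vert_exag exaggerates the relief.
intensity = ls.hillshade(elevation, vert_exag=2,
                         dx=x[0, 1] - x[0, 0], dy=y[1, 0] - y[0, 0])

# intensity is a 2D array in [0, 1]: 0 fully shadowed, 1 fully lit.
plt.imshow(intensity, cmap='gray', origin='lower')
plt.show()
```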
shade(*data*, *cmap*, *norm=None*, *blend\_mode='overlay'*, *vmin=None*, *vmax=None*, *vert\_exag=1*, *dx=1*, *dy=1*, *fraction=1*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2341-L2414) Combine colormapped data values with an illumination intensity map (a.k.a. "hillshade") of the values. Parameters: **data**2D array-like The height values used to generate a shaded map. **cmap**[`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") The colormap used to color the *data* array. Note that this must be a [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") instance. For example, rather than passing in `cmap='gist_earth'`, use `cmap=plt.get_cmap('gist_earth')` instead. **norm**[`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") instance, optional The normalization used to scale values before colormapping. If None, the input will be linearly scaled between its min and max. **blend\_mode**{'hsv', 'overlay', 'soft'} or callable, optional The type of blending used to combine the colormapped data values with the illumination intensity. Default is "overlay". Note that for most topographic surfaces, "overlay" or "soft" appear more visually realistic. If a user-defined function is supplied, it is expected to combine an MxNx3 RGB array of floats (ranging 0 to 1) with an MxNx1 hillshade array (also 0 to 1). (Call signature `func(rgb, illum, **kwargs)`) Additional kwargs supplied to this function will be passed on to the *blend\_mode* function. **vmin**float or None, optional The minimum value used in colormapping *data*. If *None* the minimum value in *data* is used. If *norm* is specified, then this argument will be ignored. **vmax**float or None, optional The maximum value used in colormapping *data*. If *None* the maximum value in *data* is used. If *norm* is specified, then this argument will be ignored. **vert\_exag**number, optional The amount to exaggerate the elevation values by when calculating illumination. This can be used either to correct for differences in units between the x-y coordinate system and the elevation coordinate system (e.g. decimal degrees vs. meters) or to exaggerate or de-emphasize topography. **dx**number, optional The x-spacing (columns) of the input *elevation* grid. **dy**number, optional The y-spacing (rows) of the input *elevation* grid. **fraction**number, optional Increases or decreases the contrast of the hillshade. Values greater than one will cause intermediate values to move closer to full illumination or shadow (and clipping any values that move beyond 0 or 1). Note that this is not visually or mathematically the same as vertical exaggeration. **Additional kwargs are passed on to the \*blend\_mode\* function.** Returns: ndarray An MxNx4 array of floats ranging between 0-1. shade\_normals(*normals*, *fraction=1.0*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2295-L2339) Calculate the illumination intensity for the normal vectors of a surface using the defined azimuth and elevation for the light source. Imagine an artificial sun placed at infinity in some azimuth and elevation position illuminating our surface. The parts of the surface that slope toward the sun should brighten while those sides facing away should become darker. Parameters: **fraction**number, optional Increases or decreases the contrast of the hillshade. 
Values greater than one will cause intermediate values to move closer to full illumination or shadow (and clipping any values that move beyond 0 or 1). Note that this is not visually or mathematically the same as vertical exaggeration. Returns: ndarray A 2D array of illumination values between 0-1, where 0 is completely in shadow and 1 is completely illuminated. shade\_rgb(*rgb*, *elevation*, *fraction=1.0*, *blend\_mode='hsv'*, *vert\_exag=1*, *dx=1*, *dy=1*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2416-L2487) Use this light source to adjust the colors of the *rgb* input array to give the impression of a shaded relief map with the given *elevation*. Parameters: **rgb**array-like An (M, N, 3) RGB array, assumed to be in the range of 0 to 1. **elevation**array-like An (M, N) array of the height values used to generate a shaded map. **fraction**number Increases or decreases the contrast of the hillshade. Values greater than one will cause intermediate values to move closer to full illumination or shadow (and clipping any values that move beyond 0 or 1). Note that this is not visually or mathematically the same as vertical exaggeration. **blend\_mode**{'hsv', 'overlay', 'soft'} or callable, optional The type of blending used to combine the colormapped data values with the illumination intensity. For backwards compatibility, this defaults to "hsv". Note that for most topographic surfaces, "overlay" or "soft" appear more visually realistic. If a user-defined function is supplied, it is expected to combine an MxNx3 RGB array of floats (ranging 0 to 1) with an MxNx1 hillshade array (also 0 to 1). (Call signature `func(rgb, illum, **kwargs)`) Additional kwargs supplied to this function will be passed on to the *blend\_mode* function. **vert\_exag**number, optional The amount to exaggerate the elevation values by when calculating illumination. This can be used either to correct for differences in units between the x-y coordinate system and the elevation coordinate system (e.g. decimal degrees vs. meters) or to exaggerate or de-emphasize topography. **dx**number, optional The x-spacing (columns) of the input *elevation* grid. **dy**number, optional The y-spacing (rows) of the input *elevation* grid. **Additional kwargs are passed on to the \*blend\_mode\* function.** Returns: ndarray An (m, n, 3) array of floats ranging between 0-1. 
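A minimal sketch of the colormapped counterpart, `shade` (assuming the same invented surface as the `hillshade` sketch above; per the docs, *cmap* must be a `Colormap` instance, hence `plt.get_cmap`):

```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource

y, x = np.mgrid[-5:5:100j, -5:5:100j]
elevation = np.sin(x) * np.cos(y)

ls = LightSource(azdeg=315, altdeg=45)
rgba = ls.shade(elevation, cmap=plt.get_cmap('gist_earth'),
                blend_mode='soft', vert_exag=2, dx=0.1, dy=0.1)

plt.imshow(rgba)  # an MxNx4 float array in [0, 1]
plt.show()
```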
Examples using `matplotlib.colors.LightSource` ---------------------------------------------- [Shading example](https://matplotlib.org/stable/gallery/images_contours_and_fields/shading_example.html#sphx-glr-gallery-images-contours-and-fields-shading-example-py) Shading example [Shaded & power normalized rendering](https://matplotlib.org/stable/gallery/showcase/mandelbrot.html#sphx-glr-gallery-showcase-mandelbrot-py) Shaded & power normalized rendering [AGG filter](https://matplotlib.org/stable/gallery/misc/demo_agg_filter.html#sphx-glr-gallery-misc-demo-agg-filter-py) AGG filter [Custom hillshading in a 3D surface plot](https://matplotlib.org/stable/gallery/mplot3d/custom_shaded_3d_surface.html#sphx-glr-gallery-mplot3d-custom-shaded-3d-surface-py) Custom hillshading in a 3D surface plot [Hillshading](https://matplotlib.org/stable/gallery/specialty_plots/advanced_hillshading.html#sphx-glr-gallery-specialty-plots-advanced-hillshading-py) Hillshading [Topographic hillshading](https://matplotlib.org/stable/gallery/specialty_plots/topographic_hillshading.html#sphx-glr-gallery-specialty-plots-topographic-hillshading-py) Topographic hillshading matplotlib matplotlib.pyplot.figimage matplotlib.pyplot.figimage ========================== matplotlib.pyplot.figimage(*X*, *xo=0*, *yo=0*, *alpha=None*, *norm=None*, *cmap=None*, *vmin=None*, *vmax=None*, *origin=None*, *resize=False*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2207-L2213) Add a non-resampled image to the figure. The image is attached to the lower or upper left corner depending on *origin*. Parameters: **X** The image data. This is an array of one of the following shapes: * (M, N): an image with scalar data. Color-mapping is controlled by *cmap*, *norm*, *vmin*, and *vmax*. * (M, N, 3): an image with RGB values (0-1 float or 0-255 int). * (M, N, 4): an image with RGBA values (0-1 float or 0-255 int), i.e. including transparency. **xo, yo**int The *x*/*y* image offset in pixels. **alpha**None or float The alpha blending value. **cmap**str or [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"), default: `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`) The Colormap instance or registered colormap name used to map scalar data to colors. This parameter is ignored if *X* is RGB(A). **norm**str or [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize"), optional The normalization method used to scale scalar data to the [0, 1] range before mapping to colors using *cmap*. By default, a linear scaling is used, mapping the lowest value to 0 and the highest to 1. If given, this can be one of the following: * An instance of [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or one of its subclasses (see [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html)). * A scale name, i.e. one of "linear", "log", "symlog", "logit", etc. For a list of available scales, call [`matplotlib.scale.get_scale_names()`](../scale_api#matplotlib.scale.get_scale_names "matplotlib.scale.get_scale_names"). In that case, a suitable [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") subclass is dynamically generated and instantiated. This parameter is ignored if *X* is RGB(A). 
**vmin, vmax**float, optional When using scalar data and no explicit *norm*, *vmin* and *vmax* define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is an error to use *vmin*/*vmax* when a *norm* instance is given (but using a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") *norm* name together with *vmin*/*vmax* is acceptable). This parameter is ignored if *X* is RGB(A). **origin**{'upper', 'lower'}, default: `[rcParams["image.origin"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.origin#matplotlibrc-sample)` (default: `'upper'`) Indicates where the [0, 0] index of the array is in the upper left or lower left corner of the axes. **resize**bool If *True*, resize the figure to match the given image size. Returns: [`matplotlib.image.FigureImage`](../image_api#matplotlib.image.FigureImage "matplotlib.image.FigureImage") Other Parameters: **\*\*kwargs** Additional kwargs are [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") kwargs passed on to [`FigureImage`](../image_api#matplotlib.image.FigureImage "matplotlib.image.FigureImage"). #### Notes figimage complements the Axes image ([`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow")) which will be resampled to fit the current Axes. If you want a resampled image to fill the entire figure, you can define an [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") with extent [0, 0, 1, 1]. #### Examples

```
import numpy as np
import matplotlib.pyplot as plt

f = plt.figure()
nx = int(f.get_figwidth() * f.dpi)
ny = int(f.get_figheight() * f.dpi)
data = np.random.random((ny, nx))
f.figimage(data)
plt.show()
```

matplotlib matplotlib.axes.Axes.xcorr matplotlib.axes.Axes.xcorr ========================== Axes.xcorr(*x*, *y*, *normed=True*, *detrend=<function detrend\_none>*, *usevlines=True*, *maxlags=10*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L1972-L2079) Plot the cross correlation between *x* and *y*. The correlation with lag k is defined as \(\sum\_n x[n+k] \cdot y^\*[n]\), where \(y^\*\) is the complex conjugate of \(y\). Parameters: **x, y**array-like of length n **detrend**callable, default: [`mlab.detrend_none`](../mlab_api#matplotlib.mlab.detrend_none "matplotlib.mlab.detrend_none") (no detrending) A detrending function applied to *x* and *y*. It must have the signature ``` detrend(x: np.ndarray) -> np.ndarray ``` **normed**bool, default: True If `True`, input vectors are normalised to unit length. **usevlines**bool, default: True Determines the plot style. If `True`, vertical lines are plotted from 0 to the xcorr value using [`Axes.vlines`](matplotlib.axes.axes.vlines#matplotlib.axes.Axes.vlines "matplotlib.axes.Axes.vlines"). Additionally, a horizontal line is plotted at y=0 using [`Axes.axhline`](matplotlib.axes.axes.axhline#matplotlib.axes.Axes.axhline "matplotlib.axes.Axes.axhline"). If `False`, markers are plotted at the xcorr values using [`Axes.plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). **maxlags**int, default: 10 Number of lags to show. If None, will return all `2 * len(x) - 1` lags. Returns: **lags**array (length `2*maxlags+1`) The lag vector. **c**array (length `2*maxlags+1`) The cross correlation vector.
**line**[`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") or [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") added to the Axes of the correlation: * [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") if *usevlines* is True. * [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") if *usevlines* is False. **b**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") or None Horizontal line at 0 if *usevlines* is True; None if *usevlines* is False. Other Parameters: **linestyle**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") property, optional The linestyle for plotting the data points. Only used if *usevlines* is `False`. **marker**str, default: 'o' The marker for plotting the data points. Only used if *usevlines* is `False`. **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y* **\*\*kwargs** Additional parameters are passed to [`Axes.vlines`](matplotlib.axes.axes.vlines#matplotlib.axes.Axes.vlines "matplotlib.axes.Axes.vlines") and [`Axes.axhline`](matplotlib.axes.axes.axhline#matplotlib.axes.Axes.axhline "matplotlib.axes.Axes.axhline") if *usevlines* is `True`; otherwise they are passed to [`Axes.plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). #### Notes The cross correlation is performed with [`numpy.correlate`](https://numpy.org/doc/stable/reference/generated/numpy.correlate.html#numpy.correlate "(in NumPy v1.23)") with `mode = "full"`. Examples using `matplotlib.axes.Axes.xcorr` ------------------------------------------- [Cross- and Auto-Correlation Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/xcorr_acorr_demo.html#sphx-glr-gallery-lines-bars-and-markers-xcorr-acorr-demo-py) Cross- and Auto-Correlation Demo
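A minimal sketch (the two signals are synthetic; the lag of the peak in the returned *c* recovers the shift between them):

```
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = np.roll(x, 5) + 0.1 * rng.standard_normal(200)  # a delayed, noisy copy of x

fig, ax = plt.subplots()
lags, c, line, b = ax.xcorr(x, y, maxlags=20)
print("peak at lag", lags[np.argmax(c)])
plt.show()
```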
matplotlib matplotlib.colors.FuncNorm matplotlib.colors.FuncNorm ========================== *class*matplotlib.colors.FuncNorm(*functions*, *vmin=None*, *vmax=None*, *clip=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1743-L1773) Bases: [`FuncNorm`](#matplotlib.colors.FuncNorm "matplotlib.colors.FuncNorm") Arbitrary normalization using functions for the forward and inverse. Parameters: **functions**(callable, callable) two-tuple of the forward and inverse functions for the normalization. The forward function must be monotonic. Both functions must have the signature ``` def forward(values: array-like) -> array-like ``` **vmin, vmax**float or None If *vmin* and/or *vmax* is not given, they are initialized from the minimum and maximum value, respectively, of the first input processed; i.e., `__call__(A)` calls `autoscale_None(A)`. **clip**bool, default: False If `True` values falling outside the range `[vmin, vmax]`, are mapped to 0 or 1, whichever is closer, and masked values are set to 1. If `False` masked values remain masked. Clipping silently defeats the purpose of setting the over, under, and masked colors in a colormap, so it is likely to lead to surprises; therefore the default is `clip=False`. #### Notes Returns 0 if `vmin == vmax`. \_\_call\_\_(*value*, *clip=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1676-L1695) Normalize *value* data in the `[vmin, vmax]` interval into the `[0.0, 1.0]` interval and return it. Parameters: **value** Data to normalize. **clip**bool If `None`, defaults to `self.clip` (which defaults to `False`). #### Notes If not already initialized, `self.vmin` and `self.vmax` are initialized using `self.autoscale_None(value)`. autoscale(*A*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1714-L1717) Set *vmin*, *vmax* to min, max of *A*. autoscale\_None(*A*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1719-L1721) If vmin or vmax are not set, use the min/max of *A* to set them. inverse(*value*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1697-L1712) Examples using `matplotlib.colors.FuncNorm` ------------------------------------------- [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#sphx-glr-tutorials-colors-colormapnorms-py) Colormap Normalization matplotlib matplotlib.axes.Axes.get_box_aspect matplotlib.axes.Axes.get\_box\_aspect ===================================== Axes.get\_box\_aspect()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1763-L1777) Return the Axes box aspect, i.e. the ratio of height to width. The box aspect is `None` (i.e. chosen depending on the available figure space) unless explicitly specified.
See also [`matplotlib.axes.Axes.set_box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") for a description of box aspect. [`matplotlib.axes.Axes.set_aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") for a description of aspect handling. matplotlib matplotlib.animation.HTMLWriter matplotlib.animation.HTMLWriter =============================== *class*matplotlib.animation.HTMLWriter(*fps=30*, *codec=None*, *bitrate=None*, *extra\_args=None*, *metadata=None*, *embed\_frames=False*, *default\_mode='loop'*, *embed\_limit=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/animation.py#L707-L814) Writer for JavaScript-based HTML movies. Parameters: **fps**int, default: 30 Movie frame rate (per second). **codec**str or None, default: `[rcParams["animation.codec"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=animation.codec#matplotlibrc-sample)` (default: `'h264'`) The codec to use. **bitrate**int, default: `[rcParams["animation.bitrate"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=animation.bitrate#matplotlibrc-sample)` (default: `-1`) The bitrate of the movie, in kilobits per second. Higher values mean higher quality movies, but increase the file size. A value of -1 lets the underlying movie encoder select the bitrate. **extra\_args**list of str or None, optional Extra command-line arguments passed to the underlying movie encoder. The default, None, means to use `[rcParams["animation.[name-of-encoder]\_args"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=animation.%5Bname-of-encoder%5D_args#matplotlibrc-sample)` for the builtin writers. **metadata**dict[str, str], default: {} A dictionary of keys and values for metadata to include in the output file. Some keys that may be of use include: title, artist, genre, subject, copyright, srcform, comment. \_\_init\_\_(*fps=30*, *codec=None*, *bitrate=None*, *extra\_args=None*, *metadata=None*, *embed\_frames=False*, *default\_mode='loop'*, *embed\_limit=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/animation.py#L717-L737) Parameters: **fps**int, default: 30 Movie frame rate (per second). **codec**str or None, default: `[rcParams["animation.codec"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=animation.codec#matplotlibrc-sample)` (default: `'h264'`) The codec to use. **bitrate**int, default: `[rcParams["animation.bitrate"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=animation.bitrate#matplotlibrc-sample)` (default: `-1`) The bitrate of the movie, in kilobits per second. Higher values mean higher quality movies, but increase the file size. A value of -1 lets the underlying movie encoder select the bitrate. **extra\_args**list of str or None, optional Extra command-line arguments passed to the underlying movie encoder. The default, None, means to use `[rcParams["animation.[name-of-encoder]\_args"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=animation.%5Bname-of-encoder%5D_args#matplotlibrc-sample)` for the builtin writers. **metadata**dict[str, str], default: {} A dictionary of keys and values for metadata to include in the output file. Some keys that may be of use include: title, artist, genre, subject, copyright, srcform, comment.
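A minimal sketch of driving this writer from `Animation.save` (the animation itself is invented; `FuncAnimation` and `Animation.save` are the standard entry points):

```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, HTMLWriter

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set(xlim=(0, 2 * np.pi), ylim=(-1.1, 1.1))
x = np.linspace(0, 2 * np.pi, 200)

def update(frame):
    line.set_data(x, np.sin(x + frame / 10))
    return line,

anim = FuncAnimation(fig, update, frames=50)
# embed_frames=True inlines the frames into the generated HTML file.
anim.save("sine.html", writer=HTMLWriter(fps=20, embed_frames=True))
```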
#### Methods | | | | --- | --- | | [`__init__`](#matplotlib.animation.HTMLWriter.__init__ "matplotlib.animation.HTMLWriter.__init__")([fps, codec, bitrate, extra\_args, ...]) | Parameters: | | `bin_path`() | Return the binary path to the commandline tool used by a specific subclass. | | [`finish`](#matplotlib.animation.HTMLWriter.finish "matplotlib.animation.HTMLWriter.finish")() | Finish any processing for writing the movie. | | [`grab_frame`](#matplotlib.animation.HTMLWriter.grab_frame "matplotlib.animation.HTMLWriter.grab_frame")(\*\*savefig\_kwargs) | Grab the image information from the figure and save as a movie frame. | | [`isAvailable`](#matplotlib.animation.HTMLWriter.isAvailable "matplotlib.animation.HTMLWriter.isAvailable")() | Return whether a MovieWriter subclass is actually available. | | `saving`(fig, outfile, dpi, \*args, \*\*kwargs) | Context manager to facilitate writing the movie file. | | [`setup`](#matplotlib.animation.HTMLWriter.setup "matplotlib.animation.HTMLWriter.setup")(fig, outfile[, dpi, frame\_dir]) | Setup for writing the movie file. | #### Attributes | | | | --- | --- | | `frame_format` | Format (png, jpeg, etc.) to use for saving the frames, which can be decided by the individual subclasses. | | `frame_size` | A tuple `(width, height)` in pixels of a movie frame. | | [`supported_formats`](#matplotlib.animation.HTMLWriter.supported_formats "matplotlib.animation.HTMLWriter.supported_formats") | | finish()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/animation.py#L781-L814) Finish any processing for writing the movie. grab\_frame(*\*\*savefig\_kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/animation.py#L758-L779) Grab the image information from the figure and save as a movie frame. All keyword arguments in *savefig\_kwargs* are passed on to the [`savefig`](../figure_api#matplotlib.figure.Figure.savefig "matplotlib.figure.Figure.savefig") call that saves the figure. *classmethod*isAvailable()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/animation.py#L713-L715) Return whether a MovieWriter subclass is actually available. setup(*fig*, *outfile*, *dpi=None*, *frame\_dir=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/animation.py#L739-L756) Setup for writing the movie file. Parameters: **fig**[`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The figure to grab the rendered frames from. **outfile**str The filename of the resulting movie file. **dpi**float, default: `fig.dpi` The dpi of the output file. This, with the figure size, controls the size in pixels of the resulting movie file. **frame\_dir**str, optional The directory to use for temporary frame files. If *None* (the default), files are written to a temporary directory which is deleted by `cleanup`; if not *None*, no temporary files are deleted.
supported\_formats*=['png', 'jpeg', 'tiff', 'svg']* matplotlib mpl_toolkits.axisartist.axislines.AxisArtistHelperRectlinear mpl\_toolkits.axisartist.axislines.AxisArtistHelperRectlinear ============================================================= *class*mpl\_toolkits.axisartist.axislines.AxisArtistHelperRectlinear[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L195-L307) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") *class*Fixed(*axes*, *loc*, *nth\_coord=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L197-L236) Bases: [`Fixed`](mpl_toolkits.axisartist.axislines.axisartisthelper#mpl_toolkits.axisartist.axislines.AxisArtistHelper.Fixed "mpl_toolkits.axisartist.axislines.AxisArtistHelper.Fixed") *nth\_coord* is the coordinate along which the value varies in 2D: nth\_coord = 0 -> the x-axis, nth\_coord = 1 -> the y-axis. get\_tick\_iterators(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L209-L236) tick\_loc, tick\_angle, tick\_label *class*Floating(*axes*, *nth\_coord*, *passingthrough\_point*, *axis\_direction='bottom'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L238-L307) Bases: [`Floating`](mpl_toolkits.axisartist.axislines.axisartisthelper#mpl_toolkits.axisartist.axislines.AxisArtistHelper.Floating "mpl_toolkits.axisartist.axislines.AxisArtistHelper.Floating") get\_axislabel\_pos\_angle(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L262-L277) Return the label reference position in transAxes. get\_label\_transform() returns a transform of (transAxes+offset). get\_axislabel\_transform(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L259-L260) get\_line(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L245-L254) get\_line\_transform(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L256-L257) get\_tick\_iterators(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L282-L307) tick\_loc, tick\_angle, tick\_label get\_tick\_transform(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L279-L280) matplotlib matplotlib.axes.Axes.get_yaxis_text1_transform matplotlib.axes.Axes.get\_yaxis\_text1\_transform ================================================= Axes.get\_yaxis\_text1\_transform(*pad\_points*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L975-L999) Returns: **transform**Transform The transform used for drawing y-axis labels, which will add *pad\_points* of padding (in points) between the axis and the label. The x-direction is in axis coordinates and the y-direction is in data coordinates. **valign**{'center', 'top', 'bottom', 'baseline', 'center\_baseline'} The text vertical alignment. **halign**{'center', 'left', 'right'} The text horizontal alignment. #### Notes This transformation is primarily used by the [`Axis`](../axis_api#matplotlib.axis.Axis "matplotlib.axis.Axis") class, and is meant to be overridden by new kinds of projections that may need to place axis elements in different locations.
matplotlib mpl_toolkits.axisartist.angle_helper.LocatorDMS mpl\_toolkits.axisartist.angle\_helper.LocatorDMS ================================================= *class*mpl\_toolkits.axisartist.angle\_helper.LocatorDMS(*nbins*, *include\_last=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/angle_helper.py#L169-L171) Bases: [`LocatorBase`](mpl_toolkits.axisartist.angle_helper.locatorbase#mpl_toolkits.axisartist.angle_helper.LocatorBase "mpl_toolkits.axisartist.angle_helper.LocatorBase") \_\_call\_\_(*v1*, *v2*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/angle_helper.py#L170-L171) Call self as a function. Examples using `mpl_toolkits.axisartist.angle_helper.LocatorDMS` ---------------------------------------------------------------- [axis\_direction demo](https://matplotlib.org/stable/gallery/axisartist/demo_axis_direction.html#sphx-glr-gallery-axisartist-demo-axis-direction-py) axis\_direction demo [Curvilinear grid demo](https://matplotlib.org/stable/gallery/axisartist/demo_curvelinear_grid.html#sphx-glr-gallery-axisartist-demo-curvelinear-grid-py) Curvilinear grid demo [floating\_axis demo](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axis.html#sphx-glr-gallery-axisartist-demo-floating-axis-py) floating\_axis demo [Simple Axis Pad](https://matplotlib.org/stable/gallery/axisartist/simple_axis_pad.html#sphx-glr-gallery-axisartist-simple-axis-pad-py) Simple Axis Pad matplotlib matplotlib.pyplot.matshow matplotlib.pyplot.matshow ========================= matplotlib.pyplot.matshow(*A*, *fignum=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2119-L2168) Display an array as a matrix in a new figure window. The origin is set at the upper left hand corner and rows (first dimension of the array) are displayed horizontally. The aspect ratio of the figure window is that of the array, unless this would make an excessively short or narrow figure. Tick labels for the xaxis are placed on top. Parameters: **A**2D array-like The matrix to be displayed. **fignum**None or int or False If *None*, create a new figure window with automatic numbering. If a nonzero integer, draw into the figure with the given number (create it if it does not exist). If 0, use the current axes (or create one if it does not exist). Note Because of how [`Axes.matshow`](matplotlib.axes.axes.matshow#matplotlib.axes.Axes.matshow "matplotlib.axes.Axes.matshow") tries to set the figure aspect ratio to be the one of the array, strange things may happen if you reuse an existing figure. 
Returns: [`AxesImage`](../image_api#matplotlib.image.AxesImage "matplotlib.image.AxesImage") Other Parameters: **\*\*kwargs**[`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") arguments Examples using `matplotlib.pyplot.matshow` ------------------------------------------ [Matshow](https://matplotlib.org/stable/gallery/images_contours_and_fields/matshow.html#sphx-glr-gallery-images-contours-and-fields-matshow-py) Matshow matplotlib mpl_toolkits.axisartist.axis_artist.AxisArtist mpl\_toolkits.axisartist.axis\_artist.AxisArtist ================================================ *class*mpl\_toolkits.axisartist.axis\_artist.AxisArtist(*axes*, *helper*, *offset=None*, *axis\_direction='bottom'*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L598-L1047) Bases: [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") An artist which draws an axis line (a line along which the n-th axes coordinate is constant), together with its ticks, ticklabels, and axis label. Parameters: **axes**[`mpl_toolkits.axisartist.axislines.Axes`](mpl_toolkits.axisartist.axislines.axes#mpl_toolkits.axisartist.axislines.Axes "mpl_toolkits.axisartist.axislines.Axes") **helper**[`AxisArtistHelper`](mpl_toolkits.axisartist.axislines.axisartisthelper#mpl_toolkits.axisartist.axislines.AxisArtistHelper "mpl_toolkits.axisartist.axislines.AxisArtistHelper") *property*LABELPAD draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L998-L1008) Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible ([`Artist.get_visible`](matplotlib.artist.artist.get_visible#matplotlib.artist.Artist.get_visible "matplotlib.artist.Artist.get_visible") returns False). Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. #### Notes This method is overridden in the Artist subclasses. get\_axisline\_style()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L770-L772) Return the current axisline style. get\_helper()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L732-L736) Return axis artist helper instance. get\_tightbbox(*renderer=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L979-L996) Like [`Artist.get_window_extent`](matplotlib.artist.artist.get_window_extent#matplotlib.artist.Artist.get_window_extent "matplotlib.artist.Artist.get_window_extent"), but includes any clipping. Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass renderer that will be used to draw the figures (i.e. `fig.canvas.get_renderer()`) Returns: [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") The enclosing bounding box (in figure pixel coordinates). get\_transform()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L729-L730) Return the [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") instance used by this artist.
invert\_ticklabel\_direction()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L710-L713) set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *axis\_direction=<UNSET>*, *axislabel\_direction=<UNSET>*, *axisline\_style=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *gid=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *mouseover=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *ticklabel\_direction=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`axis_direction`](#mpl_toolkits.axisartist.axis_artist.AxisArtist.set_axis_direction "mpl_toolkits.axisartist.axis_artist.AxisArtist.set_axis_direction") | unknown | | [`axislabel_direction`](#mpl_toolkits.axisartist.axis_artist.AxisArtist.set_axislabel_direction "mpl_toolkits.axisartist.axis_artist.AxisArtist.set_axislabel_direction") | {"+", "-"} | | [`axisline_style`](#mpl_toolkits.axisartist.axis_artist.AxisArtist.set_axisline_style "mpl_toolkits.axisartist.axis_artist.AxisArtist.set_axisline_style") | str or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | unknown | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | 
[`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`ticklabel_direction`](#mpl_toolkits.axisartist.axis_artist.AxisArtist.set_ticklabel_direction "mpl_toolkits.axisartist.axis_artist.AxisArtist.set_ticklabel_direction") | {"+", "-"} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | set\_axis\_direction(*axis\_direction*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L660-L694) Adjust the direction, text angle, and text alignment of the ticklabels and axis label, following the matplotlib convention for rectangular axes. The *axis\_direction* must be one of [left, right, bottom, top]. | property | left | bottom | right | top | | --- | --- | --- | --- | --- | | ticklabels location | "-" | "+" | "+" | "-" | | axislabel location | "-" | "+" | "+" | "-" | | ticklabels angle | 90 | 0 | -90 | 180 | | ticklabel va | center | baseline | center | baseline | | ticklabel ha | right | center | right | center | | axislabel angle | 180 | 0 | 0 | 180 | | axislabel va | center | top | center | bottom | | axislabel ha | right | center | right | center | Note that the directions "+" and "-" are relative to the direction of the increasing coordinate. Also, the text angles are actually relative to (90 + angle of the direction to the ticklabel), which gives 0 for the bottom axis. set\_axislabel\_direction(*label\_direction*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L715-L727) Adjust the direction of the axislabel. Note that the *label\_direction*s '+' and '-' are relative to the direction of the increasing coordinate. Parameters: **label\_direction**{"+", "-"} set\_axisline\_style(*axisline\_style=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L738-L768) Set the axisline style. The new style is completely defined by the passed attributes. Existing style attributes are forgotten. Parameters: **axisline\_style**str or None The line style, e.g. '->', optionally followed by a comma-separated list of attributes. Alternatively, the attributes can be provided as keywords. If *None* this returns a string containing the available styles.
#### Examples The following two commands are equivalent: >>> set\_axisline\_style("->,size=1.5") >>> set\_axisline\_style("->", size=1.5) set\_label(*s*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L976-L977) Set a label that will be displayed in the legend. Parameters: **s**object *s* will be converted to a string by calling [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)"). set\_ticklabel\_direction(*tick\_direction*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L696-L708) Adjust the direction of the ticklabel. Note that the *label\_direction*s '+' and '-' are relative to the direction of the increasing coordinate. Parameters: **tick\_direction**{"+", "-"} toggle(*all=None*, *ticks=None*, *ticklabels=None*, *label=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L1010-L1047) Toggle visibility of ticks, ticklabels, and (axis) label. To turn all off: ``` axis.toggle(all=False) ``` To turn all off but ticks on: ``` axis.toggle(all=False, ticks=True) ``` To turn all on but the (axis) label off: ``` axis.toggle(all=True, label=False) ``` zorder*=2.5*
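A minimal sketch of addressing individual `AxisArtist`s on an `axisartist.Axes` (the styling choices are arbitrary):

```
import matplotlib.pyplot as plt
from mpl_toolkits import axisartist

fig = plt.figure()
ax = fig.add_subplot(axes_class=axisartist.Axes)
ax.plot([0, 1, 2], [0, 1, 0])

ax.axis["top"].set_visible(False)                     # hide the whole AxisArtist
ax.axis["right"].set_visible(False)
ax.axis["bottom"].set_axisline_style("->", size=1.5)  # arrow-style axis line
ax.axis["left"].toggle(ticklabels=False)              # keep ticks, drop labels
plt.show()
```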
matplotlib mpl_toolkits.axisartist.axes_rgb.RGBAxes mpl\_toolkits.axisartist.axes\_rgb.RGBAxes ========================================== *class*mpl\_toolkits.axisartist.axes\_rgb.RGBAxes(*\*args*, *pad=0*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axes_rgb.py#L6-L7) Bases: [`RGBAxes`](mpl_toolkits.axes_grid1.axes_rgb.rgbaxes#mpl_toolkits.axes_grid1.axes_rgb.RGBAxes "mpl_toolkits.axes_grid1.axes_rgb.RGBAxes") Parameters: **pad**float, default: 0 fraction of the axes height to put as padding. **axes\_class**matplotlib.axes.Axes **\*args** Unpacked into axes\_class() init for RGB **\*\*kwargs** Unpacked into axes\_class() init for RGB, R, G, B axes matplotlib matplotlib.axes.Axes.semilogx matplotlib.axes.Axes.semilogx ============================= Axes.semilogx(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L1807-L1851) Make a plot with log scaling on the x axis. Call signatures: ``` semilogx([x], y, [fmt], data=None, **kwargs) semilogx([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs) ``` This is just a thin wrapper around [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot") which additionally changes the x-axis to log scaling. All of the concepts and parameters of plot can be used here as well. The additional parameters *base*, *subs*, and *nonpositive* control the x-axis properties. They are just forwarded to [`Axes.set_xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale"). Parameters: **base**float, default: 10 Base of the x logarithm. **subs**array-like, optional The location of the minor xticks. If *None*, reasonable locations are automatically chosen depending on the number of decades in the plot. See [`Axes.set_xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") for details. **nonpositive**{'mask', 'clip'}, default: 'mask' Non-positive values in x can be masked as invalid, or clipped to a very small positive number. **\*\*kwargs** All parameters supported by [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). Returns: list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") Objects representing the plotted data. Examples using `matplotlib.axes.Axes.semilogx` ---------------------------------------------- [Log Demo](https://matplotlib.org/stable/gallery/scales/log_demo.html#sphx-glr-gallery-scales-log-demo-py) Log Demo [Log Axis](https://matplotlib.org/stable/gallery/scales/semilogx_demo.html#sphx-glr-gallery-scales-semilogx-demo-py) Log Axis [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial matplotlib matplotlib.axes.Axes.format_coord matplotlib.axes.Axes.format\_coord ================================== Axes.format\_coord(*x*, *y*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3960-L3965) Return a format string formatting the *x*, *y* coordinates. 
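Because the returned string is only used for the interactive status-bar readout, `format_coord` can simply be reassigned; a minimal sketch (the format itself is arbitrary):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3])
# Customize the cursor readout shown in the toolbar/status bar.
ax.format_coord = lambda x, y: f"x={x:.3f}  y={y:.3f}"
plt.show()
```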
matplotlib mpl_toolkits.axisartist.grid_finder.GridFinder mpl\_toolkits.axisartist.grid\_finder.GridFinder ================================================ *class*mpl\_toolkits.axisartist.grid\_finder.GridFinder(*transform*, *extreme\_finder=None*, *grid\_locator1=None*, *grid\_locator2=None*, *tick\_formatter1=None*, *tick\_formatter2=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L123-L277) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") transform : the transform from image coordinates (which will be the transData of the axes) to world coordinates, or transform = (transform\_xy, inv\_transform\_xy). locator1, locator2 : grid locators for the 1st and 2nd axis. get\_grid\_info(*x1*, *y1*, *x2*, *y2*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L156-L207) lon\_values, lat\_values : lists of grid values. If an integer is given, a rough number of grid lines is used in each direction. get\_transform()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L256-L257) inv\_transform\_xy(*x*, *y*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L264-L266) set\_transform(*aux\_trans*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L247-L254) transform\_xy(*x*, *y*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L261-L262) update(*\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L268-L277) update\_transform(*aux\_trans*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L247-L254) matplotlib matplotlib.axes.Axes.get_title matplotlib.axes.Axes.get\_title =============================== Axes.get\_title(*loc='center'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L68-L91) Get an Axes title. Get one of the three available Axes titles. The available titles are positioned above the Axes in the center, flush with the left edge, and flush with the right edge. Parameters: **loc**{'center', 'left', 'right'}, str, default: 'center' Which title to return. Returns: str The title text string. matplotlib matplotlib.artist.Artist.set_label matplotlib.artist.Artist.set\_label =================================== Artist.set\_label(*s*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L1060-L1074) Set a label that will be displayed in the legend. Parameters: **s**object *s* will be converted to a string by calling [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)"). matplotlib matplotlib.axis.XAxis.set_ticks_position matplotlib.axis.XAxis.set\_ticks\_position ========================================== XAxis.set\_ticks\_position(*position*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L2329-L2367) Set the ticks position. Parameters: **position**{'top', 'bottom', 'both', 'default', 'none'} 'both' sets the ticks to appear on both positions, but does not change the tick labels. 'default' resets the tick positions to the default: ticks on both positions, labels at bottom. 'none' can be used if you don't want any ticks. 'none' and 'both' affect only the ticks, not the labels.
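A minimal sketch (the plotted data are arbitrary):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3])
ax.xaxis.set_ticks_position("top")  # ticks and labels move to the top
plt.show()
```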
Examples using `matplotlib.axis.XAxis.set_ticks_position` --------------------------------------------------------- [Violin plot customization](https://matplotlib.org/stable/gallery/statistics/customized_violin.html#sphx-glr-gallery-statistics-customized-violin-py) Violin plot customization [Colorbar with AxesDivider](https://matplotlib.org/stable/gallery/axes_grid1/demo_colorbar_with_axes_divider.html#sphx-glr-gallery-axes-grid1-demo-colorbar-with-axes-divider-py) Colorbar with AxesDivider [Controlling the position and size of colorbars with Inset Axes](https://matplotlib.org/stable/gallery/axes_grid1/demo_colorbar_with_inset_locator.html#sphx-glr-gallery-axes-grid1-demo-colorbar-with-inset-locator-py) Controlling the position and size of colorbars with Inset Axes [Integral as the area under a curve](https://matplotlib.org/stable/gallery/showcase/integral.html#sphx-glr-gallery-showcase-integral-py) Integral as the area under a curve [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [Spine Placement](https://matplotlib.org/stable/gallery/spines/spine_placement_demo.html#sphx-glr-gallery-spines-spine-placement-demo-py) Spine Placement [Spines](https://matplotlib.org/stable/gallery/spines/spines.html#sphx-glr-gallery-spines-spines-py) Spines [Custom spine bounds](https://matplotlib.org/stable/gallery/spines/spines_bounds.html#sphx-glr-gallery-spines-spines-bounds-py) Custom spine bounds [Dropped spines](https://matplotlib.org/stable/gallery/spines/spines_dropped.html#sphx-glr-gallery-spines-spines-dropped-py) Dropped spines [Choosing Colormaps in Matplotlib](https://matplotlib.org/stable/tutorials/colors/colormaps.html#sphx-glr-tutorials-colors-colormaps-py) Choosing Colormaps in Matplotlib matplotlib matplotlib.axes.Axes.get_yaxis_transform matplotlib.axes.Axes.get\_yaxis\_transform ========================================== Axes.get\_yaxis\_transform(*which='grid'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L951-L973) Get the transformation used for drawing y-axis labels, ticks and gridlines. The x-direction is in axis coordinates and the y-direction is in data coordinates. Note This transformation is primarily used by the [`Axis`](../axis_api#matplotlib.axis.Axis "matplotlib.axis.Axis") class, and is meant to be overridden by new kinds of projections that may need to place axis elements in different locations. Examples using `matplotlib.axes.Axes.get_yaxis_transform` --------------------------------------------------------- [Centered spines with arrows](https://matplotlib.org/stable/gallery/spines/centered_spines_with_arrows.html#sphx-glr-gallery-spines-centered-spines-with-arrows-py) Centered spines with arrows [Connect Simple01](https://matplotlib.org/stable/gallery/userdemo/connect_simple01.html#sphx-glr-gallery-userdemo-connect-simple01-py) Connect Simple01 [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial matplotlib matplotlib.axes.Axes.add_line matplotlib.axes.Axes.add\_line ============================== Axes.add\_line(*line*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2330-L2345) Add a [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") to the Axes; return the line.
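A minimal sketch of building a `Line2D` by hand and attaching it (the coordinates and color are arbitrary; the explicit limits are set because a manually added artist may not trigger autoscaling):

```
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

fig, ax = plt.subplots()
line = Line2D([0, 1], [0, 1], color="tab:orange")
ax.add_line(line)  # returns the same Line2D
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.show()
```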
Examples using `matplotlib.axes.Axes.add_line` ---------------------------------------------- [Artist within an artist](https://matplotlib.org/stable/gallery/text_labels_and_annotations/line_with_text.html#sphx-glr-gallery-text-labels-and-annotations-line-with-text-py) Artist within an artist [Reference for Matplotlib artists](https://matplotlib.org/stable/gallery/shapes_and_collections/artist_reference.html#sphx-glr-gallery-shapes-and-collections-artist-reference-py) Reference for Matplotlib artists [Artist tests](https://matplotlib.org/stable/gallery/units/artist_tests.html#sphx-glr-gallery-units-artist-tests-py) Artist tests matplotlib matplotlib.artist.Artist.get_mouseover matplotlib.artist.Artist.get\_mouseover ======================================= Artist.get\_mouseover()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L1338-L1343) Return whether this artist is queried for custom context information when the mouse cursor moves over it. matplotlib matplotlib.axis.Axis.get_tick_padding matplotlib.axis.Axis.get\_tick\_padding ======================================= Axis.get\_tick\_padding()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1288-L1294) matplotlib matplotlib.pyplot.get matplotlib.pyplot.get ===================== matplotlib.pyplot.get(*obj*, *\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L575-L577) Return the value of an [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist")'s *property*, or print all of them. Parameters: **obj**[`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") The queried artist; e.g., a [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D"), a [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"), or an [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"). **property**str or None, default: None If *property* is 'somename', this function returns `obj.get_somename()`. If it's None (or unset), it *prints* all gettable properties from *obj*. Many properties have aliases for shorter typing, e.g. 'lw' is an alias for 'linewidth'. In the output, aliases and full property names will be listed as: property or alias = value e.g.: linewidth or lw = 2 See also [`setp`](matplotlib.pyplot.setp#matplotlib.pyplot.setp "matplotlib.pyplot.setp") matplotlib matplotlib.axes.Axes.spy matplotlib.axes.Axes.spy ======================== Axes.spy(*Z*, *precision=0*, *marker=None*, *markersize=None*, *aspect='equal'*, *origin='upper'*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L7714-L7853) Plot the sparsity pattern of a 2D array. This visualizes the non-zero values of the array. Two plotting styles are available: image and marker. Both are available for full arrays, but only the marker style works for [`scipy.sparse.spmatrix`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.spmatrix.html#scipy.sparse.spmatrix "(in SciPy v1.9.1)") instances. **Image style** If *marker* and *markersize* are *None*, [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") is used. Any extra remaining keyword arguments are passed to this method. 
**Marker style** If *Z* is a [`scipy.sparse.spmatrix`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.spmatrix.html#scipy.sparse.spmatrix "(in SciPy v1.9.1)") or *marker* or *markersize* are not *None*, a [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") object will be returned with the value of marker determining the marker type, and any remaining keyword arguments passed to [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). Parameters: **Z**(M, N) array-like The array to be plotted. **precision**float or 'present', default: 0 If *precision* is 0, any non-zero value will be plotted. Otherwise, values of \(|Z| > precision\) will be plotted. For [`scipy.sparse.spmatrix`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.spmatrix.html#scipy.sparse.spmatrix "(in SciPy v1.9.1)") instances, you can also pass 'present'. In this case any value present in the array will be plotted, even if it is identically zero. **aspect**{'equal', 'auto', None} or float, default: 'equal' The aspect ratio of the Axes. This parameter is particularly relevant for images since it determines whether data pixels are square. This parameter is a shortcut for explicitly calling [`Axes.set_aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect"). See there for further details. * 'equal': Ensures an aspect ratio of 1. Pixels will be square. * 'auto': The Axes is kept fixed and the aspect is adjusted so that the data fit in the Axes. In general, this will result in non-square pixels. * *None*: Use `[rcParams["image.aspect"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.aspect#matplotlibrc-sample)` (default: `'equal'`). **origin**{'upper', 'lower'}, default: `[rcParams["image.origin"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.origin#matplotlibrc-sample)` (default: `'upper'`) Place the [0, 0] index of the array in the upper left or lower left corner of the Axes. The convention 'upper' is typically used for matrices and images. Returns: [`AxesImage`](../image_api#matplotlib.image.AxesImage "matplotlib.image.AxesImage") or [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") The return type depends on the plotting style (see above). Other Parameters: **\*\*kwargs** The supported additional parameters depend on the plotting style.
For the image style, you can pass the following additional parameters of [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow"): * *cmap* * *alpha* * *url* * any [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") properties (passed on to the [`AxesImage`](../image_api#matplotlib.image.AxesImage "matplotlib.image.AxesImage")) For the marker style, you can pass any [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") property except for *linestyle*: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | [`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | [`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor "matplotlib.lines.Line2D.set_gapcolor") | color or None | | 
[`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | 
[`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Examples using `matplotlib.axes.Axes.spy` ----------------------------------------- [Spy Demos](https://matplotlib.org/stable/gallery/images_contours_and_fields/spy_demos.html#sphx-glr-gallery-images-contours-and-fields-spy-demos-py) Spy Demos
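A minimal sketch of the two plotting styles described above (the array, marker, and size are arbitrary choices for illustration, not taken from this page):

```
import matplotlib.pyplot as plt
import numpy as np

# Build a mostly-zero array so the sparsity pattern is visible.
rng = np.random.default_rng(0)
Z = rng.random((20, 20))
Z[Z < 0.9] = 0

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.spy(Z)                             # image style: returns an AxesImage
ax2.spy(Z, marker='s', markersize=4)   # marker style: returns a Line2D
plt.show()
```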
matplotlib matplotlib.axes.Axes.broken_barh matplotlib.axes.Axes.broken\_barh ================================= Axes.broken\_barh(*xranges*, *yrange*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L2767-L2841) Plot a horizontal sequence of rectangles. A rectangle is drawn for each element of *xranges*. All rectangles have the same vertical position and size defined by *yrange*. This is a convenience function for instantiating a [`BrokenBarHCollection`](../collections_api#matplotlib.collections.BrokenBarHCollection "matplotlib.collections.BrokenBarHCollection"), adding it to the Axes and autoscaling the view; a usage sketch is given after the `pyplot.ion` entry below. Parameters: **xranges**sequence of tuples (*xmin*, *xwidth*) The x-positions and extents of the rectangles. For each tuple (*xmin*, *xwidth*) a rectangle is drawn from *xmin* to *xmin* + *xwidth*. **yrange**(*ymin*, *yheight*) The y-position and extent for all the rectangles. Returns: [`BrokenBarHCollection`](../collections_api#matplotlib.collections.BrokenBarHCollection "matplotlib.collections.BrokenBarHCollection") Other Parameters: **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). **\*\*kwargs**[`BrokenBarHCollection`](../collections_api#matplotlib.collections.BrokenBarHCollection "matplotlib.collections.BrokenBarHCollection") properties Each *kwarg* can be either a single argument applying to all rectangles, e.g.:

```
facecolors='black'
```

or a sequence of arguments that is cycled over the rectangles, e.g.:

```
facecolors=('black', 'blue')
```

would create alternating black and blue rectangles. Supported keywords: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](../collections_api#matplotlib.collections.Collection.set_alpha "matplotlib.collections.Collection.set_alpha") | array-like or scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](../collections_api#matplotlib.collections.Collection.set_antialiased "matplotlib.collections.Collection.set_antialiased") or aa or antialiaseds | bool or list of bools | | [`array`](../cm_api#matplotlib.cm.ScalarMappable.set_array "matplotlib.cm.ScalarMappable.set_array") | array-like or None | | [`capstyle`](../collections_api#matplotlib.collections.Collection.set_capstyle "matplotlib.collections.Collection.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clim`](../cm_api#matplotlib.cm.ScalarMappable.set_clim "matplotlib.cm.ScalarMappable.set_clim") | (vmin: float, vmax: float) | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | 
Patch or (Path, Transform) or None | | [`cmap`](../cm_api#matplotlib.cm.ScalarMappable.set_cmap "matplotlib.cm.ScalarMappable.set_cmap") | [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") or str or None | | [`color`](../collections_api#matplotlib.collections.Collection.set_color "matplotlib.collections.Collection.set_color") | color or list of rgba tuples | | [`edgecolor`](../collections_api#matplotlib.collections.Collection.set_edgecolor "matplotlib.collections.Collection.set_edgecolor") or ec or edgecolors | color or list of colors or 'face' | | [`facecolor`](../collections_api#matplotlib.collections.Collection.set_facecolor "matplotlib.collections.Collection.set_facecolor") or facecolors or fc | color or list of colors | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](../collections_api#matplotlib.collections.Collection.set_hatch "matplotlib.collections.Collection.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](../collections_api#matplotlib.collections.Collection.set_joinstyle "matplotlib.collections.Collection.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](../collections_api#matplotlib.collections.Collection.set_linestyle "matplotlib.collections.Collection.set_linestyle") or dashes or linestyles or ls | str or tuple or list thereof | | [`linewidth`](../collections_api#matplotlib.collections.Collection.set_linewidth "matplotlib.collections.Collection.set_linewidth") or linewidths or lw | float or list of floats | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`norm`](../cm_api#matplotlib.cm.ScalarMappable.set_norm "matplotlib.cm.ScalarMappable.set_norm") | [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or str or None | | [`offset_transform`](../collections_api#matplotlib.collections.Collection.set_offset_transform "matplotlib.collections.Collection.set_offset_transform") or transOffset | unknown | | [`offsets`](../collections_api#matplotlib.collections.Collection.set_offsets "matplotlib.collections.Collection.set_offsets") | (N, 2) or (2,) array-like | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`paths`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | 
[`pickradius`](../collections_api#matplotlib.collections.Collection.set_pickradius "matplotlib.collections.Collection.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | `sizes` | ndarray or None | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`urls`](../collections_api#matplotlib.collections.Collection.set_urls "matplotlib.collections.Collection.set_urls") | list of str or None | | [`verts`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`verts_and_codes`](../collections_api#matplotlib.collections.PolyCollection.set_verts_and_codes "matplotlib.collections.PolyCollection.set_verts_and_codes") | unknown | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Examples using `matplotlib.axes.Axes.broken_barh` ------------------------------------------------- [Broken Barh](https://matplotlib.org/stable/gallery/lines_bars_and_markers/broken_barh.html#sphx-glr-gallery-lines-bars-and-markers-broken-barh-py) Broken Barh matplotlib matplotlib.pyplot.ion matplotlib.pyplot.ion ===================== matplotlib.pyplot.ion()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L482-L519) Enable interactive mode. See [`pyplot.isinteractive`](matplotlib.pyplot.isinteractive#matplotlib.pyplot.isinteractive "matplotlib.pyplot.isinteractive") for more details. See also [`ioff`](matplotlib.pyplot.ioff#matplotlib.pyplot.ioff "matplotlib.pyplot.ioff") Disable interactive mode. [`isinteractive`](matplotlib.pyplot.isinteractive#matplotlib.pyplot.isinteractive "matplotlib.pyplot.isinteractive") Whether interactive mode is enabled. [`show`](matplotlib.pyplot.show#matplotlib.pyplot.show "matplotlib.pyplot.show") Show all figures (and maybe block). [`pause`](matplotlib.pyplot.pause#matplotlib.pyplot.pause "matplotlib.pyplot.pause") Show all figures, and block for a time. #### Notes For a temporary change, this can be used as a context manager:

```
# if interactive mode is off
# then figures will not be shown on creation
plt.ioff()

# This figure will not be shown immediately
fig = plt.figure()

with plt.ion():
    # interactive mode will be on
    # figures will automatically be shown
    fig2 = plt.figure()
    # ...
```

To enable optional usage as a context manager, this function returns an [`ExitStack`](https://docs.python.org/3/library/contextlib.html#contextlib.ExitStack "(in Python v3.10)") object, which is not intended to be stored or accessed by the user. 
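Returning to `broken_barh` above, a minimal sketch (the coordinates and colors are arbitrary choices for illustration):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# One band of three rectangles, cycling through two face colors ...
ax.broken_barh([(10, 50), (100, 20), (130, 10)], (20, 9),
               facecolors=('black', 'blue'))
# ... and a second band with a single color.
ax.broken_barh([(110, 30), (150, 10)], (10, 9), facecolors='tab:orange')
ax.set_xlim(0, 200)
plt.show()
```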
matplotlib matplotlib.patches.Wedge matplotlib.patches.Wedge ======================== *class*matplotlib.patches.Wedge(*center*, *r*, *theta1*, *theta2*, *\**, *width=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1166-L1256) Bases: [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") Wedge-shaped patch. A wedge centered at (*x*, *y*) with radius *r* that sweeps from *theta1* to *theta2* (in degrees). If *width* is given, then a partial wedge is drawn from inner radius *r* - *width* to outer radius *r*. Valid keyword arguments are: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") 
or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | get\_path()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1253-L1256) Return the path of this patch. set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *antialiased=<UNSET>*, *capstyle=<UNSET>*, *center=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *color=<UNSET>*, *edgecolor=<UNSET>*, *facecolor=<UNSET>*, *fill=<UNSET>*, *gid=<UNSET>*, *hatch=<UNSET>*, *in\_layout=<UNSET>*, *joinstyle=<UNSET>*, *label=<UNSET>*, *linestyle=<UNSET>*, *linewidth=<UNSET>*, *mouseover=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *radius=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *theta1=<UNSET>*, *theta2=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *width=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`center`](#matplotlib.patches.Wedge.set_center "matplotlib.patches.Wedge.set_center") | unknown | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | 
[`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`radius`](#matplotlib.patches.Wedge.set_radius "matplotlib.patches.Wedge.set_radius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`theta1`](#matplotlib.patches.Wedge.set_theta1 "matplotlib.patches.Wedge.set_theta1") | unknown | | [`theta2`](#matplotlib.patches.Wedge.set_theta2 "matplotlib.patches.Wedge.set_theta2") | unknown | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`width`](#matplotlib.patches.Wedge.set_width "matplotlib.patches.Wedge.set_width") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | set\_center(*center*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1228-L1231) set\_radius(*radius*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1233-L1236) set\_theta1(*theta1*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1238-L1241) set\_theta2(*theta2*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1243-L1246) set\_width(*width*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1248-L1251) Examples using `matplotlib.patches.Wedge` ----------------------------------------- [Labeling a pie and a donut](https://matplotlib.org/stable/gallery/pie_and_polar_charts/pie_and_donut_labels.html#sphx-glr-gallery-pie-and-polar-charts-pie-and-donut-labels-py) Labeling a pie and a donut [Reference for Matplotlib artists](https://matplotlib.org/stable/gallery/shapes_and_collections/artist_reference.html#sphx-glr-gallery-shapes-and-collections-artist-reference-py) Reference for Matplotlib artists [Circles, Wedges and Polygons](https://matplotlib.org/stable/gallery/shapes_and_collections/patch_collection.html#sphx-glr-gallery-shapes-and-collections-patch-collection-py) Circles, Wedges and Polygons [SVG Filter Pie](https://matplotlib.org/stable/gallery/misc/svg_filter_pie.html#sphx-glr-gallery-misc-svg-filter-pie-py) SVG 
Filter Pie
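A minimal sketch of a full wedge and a partial (annular) wedge (the center, radius, and angles are arbitrary choices):

```
import matplotlib.pyplot as plt
from matplotlib.patches import Wedge

fig, ax = plt.subplots()
# Full wedge: filled from the center out to r.
ax.add_patch(Wedge((0.3, 0.5), 0.2, 30, 300, facecolor='tab:blue'))
# Partial wedge: a ring from r - width to r.
ax.add_patch(Wedge((0.7, 0.5), 0.2, 30, 300, width=0.05,
                   facecolor='tab:orange'))
ax.set_aspect('equal')
plt.show()
```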
matplotlib matplotlib.axes.Axes.autoscale_view matplotlib.axes.Axes.autoscale\_view ==================================== Axes.autoscale\_view(*tight=None*, *scalex=True*, *scaley=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2844-L2977) Autoscale the view limits using the data limits. Parameters: **tight**bool or None If *True*, only expand the axis limits using the margins. Note that unlike for [`autoscale`](matplotlib.axes.axes.autoscale#matplotlib.axes.Axes.autoscale "matplotlib.axes.Axes.autoscale"), `tight=True` does *not* set the margins to zero. If *False* and `[rcParams["axes.autolimit\_mode"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.autolimit_mode#matplotlibrc-sample)` (default: `'data'`) is 'round\_numbers', then after expansion by the margins, further expand the axis limits using the axis major locator. If None (the default), reuse the value set in the previous call to [`autoscale_view`](#matplotlib.axes.Axes.autoscale_view "matplotlib.axes.Axes.autoscale_view") (the initial value is False, but the default style sets `[rcParams["axes.autolimit\_mode"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.autolimit_mode#matplotlibrc-sample)` (default: `'data'`) to 'data', in which case this behaves like True). **scalex**bool, default: True Whether to autoscale the x axis. **scaley**bool, default: True Whether to autoscale the y axis. #### Notes The autoscaling preserves any preexisting axis direction reversal. The data limits are not updated automatically when artist data are changed after the artist has been added to an Axes instance. In that case, use [`matplotlib.axes.Axes.relim()`](matplotlib.axes.axes.relim#matplotlib.axes.Axes.relim "matplotlib.axes.Axes.relim") prior to calling autoscale\_view. If the views of the Axes are fixed, e.g. via [`set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim"), they will not be changed by autoscale\_view(). See [`matplotlib.axes.Axes.autoscale()`](matplotlib.axes.axes.autoscale#matplotlib.axes.Axes.autoscale "matplotlib.axes.Axes.autoscale") for an alternative. 
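A minimal sketch of the `relim` pattern described in the notes above (the data values are arbitrary):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [0, 1, 4])
# Changing artist data does not update the data limits ...
line.set_data([0, 1, 2, 3], [0, 1, 4, 9])
ax.relim()            # ... so recompute them from the current artists
ax.autoscale_view()   # then autoscale the view limits
plt.show()
```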
Examples using `matplotlib.axes.Axes.autoscale_view` ---------------------------------------------------- [Line, Poly and RegularPoly Collection with autoscaling](https://matplotlib.org/stable/gallery/shapes_and_collections/collections.html#sphx-glr-gallery-shapes-and-collections-collections-py) Line, Poly and RegularPoly Collection with autoscaling [Compound path](https://matplotlib.org/stable/gallery/shapes_and_collections/compound_path.html#sphx-glr-gallery-shapes-and-collections-compound-path-py) Compound path [Ellipse Collection](https://matplotlib.org/stable/gallery/shapes_and_collections/ellipse_collection.html#sphx-glr-gallery-shapes-and-collections-ellipse-collection-py) Ellipse Collection [Packed-bubble chart](https://matplotlib.org/stable/gallery/misc/packed_bubbles.html#sphx-glr-gallery-misc-packed-bubbles-py) Packed-bubble chart [Group barchart with units](https://matplotlib.org/stable/gallery/units/bar_unit_demo.html#sphx-glr-gallery-units-bar-unit-demo-py) Group barchart with units [Textbox](https://matplotlib.org/stable/gallery/widgets/textbox.html#sphx-glr-gallery-widgets-textbox-py) Textbox [Autoscaling](https://matplotlib.org/stable/tutorials/intermediate/autoscale.html#sphx-glr-tutorials-intermediate-autoscale-py) Autoscaling matplotlib matplotlib.axes.Axes.get_yminorticklabels matplotlib.axes.Axes.get\_yminorticklabels ========================================== Axes.get\_yminorticklabels()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L72-L73) Return the yaxis' minor tick labels, as a list of [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"). matplotlib matplotlib.gridspec.GridSpecFromSubplotSpec matplotlib.gridspec.GridSpecFromSubplotSpec =========================================== *class*matplotlib.gridspec.GridSpecFromSubplotSpec(*nrows*, *ncols*, *subplot\_spec*, *wspace=None*, *hspace=None*, *height\_ratios=None*, *width\_ratios=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L488-L540) Bases: [`GridSpecBase`](matplotlib.gridspec.gridspecbase#matplotlib.gridspec.GridSpecBase "matplotlib.gridspec.GridSpecBase") GridSpec whose subplot layout parameters are inherited from the location specified by a given SubplotSpec. Parameters: **nrows, ncols**int Number of rows and number of columns of the grid. **subplot\_spec**SubplotSpec Spec from which the layout parameters are inherited. **wspace, hspace**float, optional See [`GridSpec`](matplotlib.gridspec.gridspec#matplotlib.gridspec.GridSpec "matplotlib.gridspec.GridSpec") for more details. If not specified, default values (from the figure or rcParams) are used. **height\_ratios**array-like of length *nrows*, optional See [`GridSpecBase`](matplotlib.gridspec.gridspecbase#matplotlib.gridspec.GridSpecBase "matplotlib.gridspec.GridSpecBase") for details. **width\_ratios**array-like of length *ncols*, optional See [`GridSpecBase`](matplotlib.gridspec.gridspecbase#matplotlib.gridspec.GridSpecBase "matplotlib.gridspec.GridSpecBase") for details. get\_subplot\_params(*figure=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L520-L534) Return a dictionary of subplot layout parameters. 
get\_topmost\_subplotspec()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L536-L540) Return the topmost [`SubplotSpec`](matplotlib.gridspec.subplotspec#matplotlib.gridspec.SubplotSpec "matplotlib.gridspec.SubplotSpec") instance associated with the subplot. Examples using `matplotlib.gridspec.GridSpecFromSubplotSpec` ------------------------------------------------------------ [Resizing axes with constrained layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_constrained_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-constrained-layout-py) Resizing axes with constrained layout [Nested Gridspecs](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_nested.html#sphx-glr-gallery-subplots-axes-and-figures-gridspec-nested-py) Nested Gridspecs [Nested GridSpecs](https://matplotlib.org/stable/gallery/userdemo/demo_gridspec06.html#sphx-glr-gallery-userdemo-demo-gridspec06-py) Nested GridSpecs [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Arranging multiple Axes in a Figure](https://matplotlib.org/stable/tutorials/intermediate/arranging_axes.html#sphx-glr-tutorials-intermediate-arranging-axes-py) Arranging multiple Axes in a Figure matplotlib matplotlib.axes.Axes.get_xbound matplotlib.axes.Axes.get\_xbound ================================ Axes.get\_xbound()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3528-L3542) Return the lower and upper x-axis bounds, in increasing order. See also [`set_xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") [`get_xlim`](matplotlib.axes.axes.get_xlim#matplotlib.axes.Axes.get_xlim "matplotlib.axes.Axes.get_xlim"), [`set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") [`invert_xaxis`](matplotlib.axes.axes.invert_xaxis#matplotlib.axes.Axes.invert_xaxis "matplotlib.axes.Axes.invert_xaxis"), [`xaxis_inverted`](matplotlib.axes.axes.xaxis_inverted#matplotlib.axes.Axes.xaxis_inverted "matplotlib.axes.Axes.xaxis_inverted") matplotlib matplotlib.axes.Axes.axline matplotlib.axes.Axes.axline =========================== Axes.axline(*xy1*, *xy2=None*, *\**, *slope=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L843-L915) Add an infinitely long straight line. The line can be defined either by two points *xy1* and *xy2*, or by one point *xy1* and a *slope*. This draws a straight line "on the screen", regardless of the x and y scales, and is thus also suitable for drawing exponential decays in semilog plots, power laws in loglog plots, etc. However, *slope* should only be used with linear scales; it has no clear meaning for all other scales, and thus the behavior is undefined. Please specify the line using the points *xy1*, *xy2* for non-linear scales. The *transform* keyword argument only applies to the points *xy1*, *xy2*. The *slope* (if given) is always in data coordinates. This can be used e.g. with `ax.transAxes` for drawing grid lines with a fixed slope. Parameters: **xy1, xy2**(float, float) Points for the line to pass through. Either *xy2* or *slope* has to be given. **slope**float, optional The slope of the line. Either *xy2* or *slope* has to be given. 
Returns: [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") Other Parameters: **\*\*kwargs** Valid kwargs are [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | [`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | [`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor "matplotlib.lines.Line2D.set_gapcolor") | color or None | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label 
"matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | 
[`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`axhline`](matplotlib.axes.axes.axhline#matplotlib.axes.Axes.axhline "matplotlib.axes.Axes.axhline") for horizontal lines [`axvline`](matplotlib.axes.axes.axvline#matplotlib.axes.Axes.axvline "matplotlib.axes.Axes.axvline") for vertical lines #### Examples Draw a thick red line passing through (0, 0) and (1, 1): ``` >>> axline((0, 0), (1, 1), linewidth=4, color='r') ``` Examples using `matplotlib.axes.Axes.axline` -------------------------------------------- [axhspan Demo](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axhspan_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axhspan-demo-py) axhspan Demo [Anscombe's quartet](https://matplotlib.org/stable/gallery/specialty_plots/anscombe.html#sphx-glr-gallery-specialty-plots-anscombe-py) Anscombe's quartet matplotlib mpl_toolkits.axisartist.floating_axes.FloatingAxesBase mpl\_toolkits.axisartist.floating\_axes.FloatingAxesBase ======================================================== *class*mpl\_toolkits.axisartist.floating\_axes.FloatingAxesBase(*\*args*, *grid\_helper*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L298-L335) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") adjust\_axes\_lim()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L329-L335) clear()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L316-L327) Examples using `mpl_toolkits.axisartist.floating_axes.FloatingAxesBase` ----------------------------------------------------------------------- [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) :mod:`mpl\_toolkits.axisartist.floating\_axes` features matplotlib matplotlib.artist.Artist.pickable matplotlib.artist.Artist.pickable ================================= Artist.pickable()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L480-L488) Return whether the artist is pickable. 
See also [`set_picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker"), [`get_picker`](matplotlib.artist.artist.get_picker#matplotlib.artist.Artist.get_picker "matplotlib.artist.Artist.get_picker"), [`pick`](matplotlib.artist.artist.pick#matplotlib.artist.Artist.pick "matplotlib.artist.Artist.pick") matplotlib matplotlib.artist.Artist.set_mouseover matplotlib.artist.Artist.set\_mouseover ======================================= Artist.set\_mouseover(*mouseover*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L1345-L1366) Set whether this artist is queried for custom context information when the mouse cursor moves over it. Parameters: **mouseover**bool See also [`get_cursor_data`](matplotlib.artist.artist.get_cursor_data#matplotlib.artist.Artist.get_cursor_data "matplotlib.artist.Artist.get_cursor_data") [`ToolCursorPosition`](../backend_tools_api#matplotlib.backend_tools.ToolCursorPosition "matplotlib.backend_tools.ToolCursorPosition") [`NavigationToolbar2`](../backend_bases_api#matplotlib.backend_bases.NavigationToolbar2 "matplotlib.backend_bases.NavigationToolbar2")
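A minimal sketch of toggling this flag (images are typically queried for cursor data by default, so this turns that lookup off; the array is arbitrary):

```
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(10, 10))
# Stop querying this image for the value under the mouse cursor.
im.set_mouseover(False)
plt.show()
```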
matplotlib matplotlib.pyplot.get_fignums matplotlib.pyplot.get\_fignums ============================== matplotlib.pyplot.get\_fignums()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L838-L840) Return a list of existing figure numbers. matplotlib mpl_toolkits.axes_grid1.axes_size.from_any mpl\_toolkits.axes\_grid1.axes\_size.from\_any ============================================== mpl\_toolkits.axes\_grid1.axes\_size.from\_any(*size*, *fraction\_ref=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_size.py#L225-L239) Create a Fixed unit when the first argument is a float, or a Fraction unit if that is a string that ends with %. The second argument is only meaningful when a Fraction unit is created.

```
>>> a = Size.from_any(1.2)  # => Size.Fixed(1.2)
>>> Size.from_any("50%", a)  # => Size.Fraction(0.5, a)
```

matplotlib matplotlib.axes.Axes.redraw_in_frame matplotlib.axes.Axes.redraw\_in\_frame ====================================== Axes.redraw\_in\_frame()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3118-L3126) Efficiently redraw Axes data, but not axis ticks, labels, etc. matplotlib mpl_toolkits.axisartist.floating_axes.GridHelperCurveLinear mpl\_toolkits.axisartist.floating\_axes.GridHelperCurveLinear ============================================================= *class*mpl\_toolkits.axisartist.floating\_axes.GridHelperCurveLinear(*aux\_trans*, *extremes*, *grid\_locator1=None*, *grid\_locator2=None*, *tick\_formatter1=None*, *tick\_formatter2=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L156-L295) Bases: [`GridHelperCurveLinear`](mpl_toolkits.axisartist.grid_helper_curvelinear.gridhelpercurvelinear#mpl_toolkits.axisartist.grid_helper_curvelinear.GridHelperCurveLinear "mpl_toolkits.axisartist.grid_helper_curvelinear.GridHelperCurveLinear") *aux\_trans* : a transform from the source (curved) coordinate system to the target (rectilinear) coordinate system. Either an instance of Matplotlib's Transform (whose inverse should also be defined) or a tuple of two callables that define the transform and its inverse. Each callable takes two arrays of source coordinates and returns two arrays of target coordinates, e.g., `x2, y2 = trans(x1, y1)` get\_boundary()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L278-L295) [*Deprecated*] Return an (N, 2) array of the (x, y) coordinates of the boundary. #### Notes Deprecated since version 3.5. get\_data\_boundary(*side*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L173-L181) Return v=0, nth=1. get\_gridlines(*which='major'*, *axis='both'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L270-L276) Return list of grid lines as a list of paths (list of points). 
*which* : "major" or "minor" *axis* : "both", "x" or "y" new\_fixed\_axis(*loc*, *nth\_coord=None*, *axis\_direction=None*, *offset=None*, *axes=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L183-L200) Examples using `mpl_toolkits.axisartist.floating_axes.GridHelperCurveLinear` ---------------------------------------------------------------------------- [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) :mod:`mpl\_toolkits.axisartist.floating\_axes` features matplotlib matplotlib.pyplot.quiver matplotlib.pyplot.quiver ======================== matplotlib.pyplot.quiver(*\*args*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2757-L2763) Plot a 2D field of arrows. Call signature: ``` quiver([X, Y], U, V, [C], **kwargs) ``` *X*, *Y* define the arrow locations, *U*, *V* define the arrow directions, and *C* optionally sets the color. **Arrow length** The default settings auto-scales the length of the arrows to a reasonable size. To change this behavior see the *scale* and *scale\_units* parameters. **Arrow shape** The arrow shape is determined by *width*, *headwidth*, *headlength* and *headaxislength*. See the notes below. **Arrow styling** Each arrow is internally represented by a filled polygon with a default edge linewidth of 0. As a result, an arrow is rather a filled area, not a line with a head, and [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") properties like *linewidth*, *edgecolor*, *facecolor*, etc. act accordingly. Parameters: **X, Y**1D or 2D array-like, optional The x and y coordinates of the arrow locations. If not given, they will be generated as a uniform integer meshgrid based on the dimensions of *U* and *V*. If *X* and *Y* are 1D but *U*, *V* are 2D, *X*, *Y* are expanded to 2D using `X, Y = np.meshgrid(X, Y)`. In this case `len(X)` and `len(Y)` must match the column and row dimensions of *U* and *V*. **U, V**1D or 2D array-like The x and y direction components of the arrow vectors. The interpretation of these components (in data or in screen space) depends on *angles*. *U* and *V* must have the same number of elements, matching the number of arrow locations in *X*, *Y*. *U* and *V* may be masked. Locations masked in any of *U*, *V*, and *C* will not be drawn. **C**1D or 2D array-like, optional Numeric data that defines the arrow colors by colormapping via *norm* and *cmap*. This does not support explicit colors. If you want to set colors directly, use *color* instead. The size of *C* must match the number of arrow locations. **angles**{'uv', 'xy'} or array-like, default: 'uv' Method for determining the angle of the arrows. * 'uv': Arrow direction in screen coordinates. Use this if the arrows symbolize a quantity that is not based on *X*, *Y* data coordinates. If *U* == *V* the orientation of the arrow on the plot is 45 degrees counter-clockwise from the horizontal axis (positive to the right). * 'xy': Arrow direction in data coordinates, i.e. the arrows point from (x, y) to (x+u, y+v). Use this e.g. for plotting a gradient field. * Arbitrary angles may be specified explicitly as an array of values in degrees, counter-clockwise from the horizontal axis. In this case *U*, *V* is only used to determine the length of the arrows. 
Note: inverting a data axis will correspondingly invert the arrows only with `angles='xy'`. **pivot**{'tail', 'mid', 'middle', 'tip'}, default: 'tail' The part of the arrow that is anchored to the *X*, *Y* grid. The arrow rotates about this point. 'mid' is a synonym for 'middle'. **scale**float, optional Scales the length of the arrow inversely. Number of data units per arrow length unit, e.g., m/s per plot width; a smaller scale parameter makes the arrow longer. Default is *None*. If *None*, a simple autoscaling algorithm is used, based on the average vector length and the number of vectors. The arrow length unit is given by the *scale\_units* parameter. **scale\_units**{'width', 'height', 'dots', 'inches', 'x', 'y', 'xy'}, optional If the *scale* kwarg is *None*, the arrow length unit. Default is *None*. For example, if *scale\_units* is 'inches', *scale* is 2.0, and `(u, v) = (1, 0)`, then the vector will be 0.5 inches long. If *scale\_units* is 'width' or 'height', then the vector will be half the width/height of the axes. If *scale\_units* is 'x' then the vector will be 0.5 x-axis units. To plot vectors in the x-y plane, with u and v having the same units as x and y, use `angles='xy', scale_units='xy', scale=1`. **units**{'width', 'height', 'dots', 'inches', 'x', 'y', 'xy'}, default: 'width' Affects the arrow size (except for the length). In particular, the shaft *width* is measured in multiples of this unit. Supported values are: * 'width', 'height': The width or height of the Axes. * 'dots', 'inches': Pixels or inches based on the figure dpi. * 'x', 'y', 'xy': *X*, *Y* or \(\sqrt{X^2 + Y^2}\) in data units. The following table summarizes how these values affect the visible arrow size under zooming and figure size changes: | units | zoom | figure size change | | --- | --- | --- | | 'x', 'y', 'xy' | arrow size scales | * | | 'width', 'height' | * | arrow size scales | | 'dots', 'inches' | * | * | **width**float, optional Shaft width in arrow units. All head parameters are relative to *width*. The default depends on choice of *units* above, and number of vectors; a typical starting value is about 0.005 times the width of the plot. **headwidth**float, default: 3 Head width as multiple of shaft *width*. See the notes below. **headlength**float, default: 5 Head length as multiple of shaft *width*. See the notes below. **headaxislength**float, default: 4.5 Head length at shaft intersection as multiple of shaft *width*. See the notes below. **minshaft**float, default: 1 Length below which arrow scales, in units of head length. Do not set this to less than 1, or small arrows will look terrible! **minlength**float, default: 1 Minimum length as a multiple of shaft width; if an arrow length is less than this, plot a dot (hexagon) of this diameter instead. **color**color or color sequence, optional Explicit color(s) for the arrows. If *C* has been set, *color* has no effect. This is a synonym for the [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") *facecolor* parameter. Returns: [`Quiver`](matplotlib.quiver.quiver#matplotlib.quiver.Quiver "matplotlib.quiver.Quiver") Other Parameters: **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). 
**\*\*kwargs**[`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") properties, optional All other keyword arguments are passed on to [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection"): | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](../collections_api#matplotlib.collections.Collection.set_alpha "matplotlib.collections.Collection.set_alpha") | array-like or scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](../collections_api#matplotlib.collections.Collection.set_antialiased "matplotlib.collections.Collection.set_antialiased") or aa or antialiaseds | bool or list of bools | | [`array`](../cm_api#matplotlib.cm.ScalarMappable.set_array "matplotlib.cm.ScalarMappable.set_array") | array-like or None | | [`capstyle`](../collections_api#matplotlib.collections.Collection.set_capstyle "matplotlib.collections.Collection.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clim`](../cm_api#matplotlib.cm.ScalarMappable.set_clim "matplotlib.cm.ScalarMappable.set_clim") | (vmin: float, vmax: float) | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`cmap`](../cm_api#matplotlib.cm.ScalarMappable.set_cmap "matplotlib.cm.ScalarMappable.set_cmap") | [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") or str or None | | [`color`](../collections_api#matplotlib.collections.Collection.set_color "matplotlib.collections.Collection.set_color") | color or list of rgba tuples | | [`edgecolor`](../collections_api#matplotlib.collections.Collection.set_edgecolor "matplotlib.collections.Collection.set_edgecolor") or ec or edgecolors | color or list of colors or 'face' | | [`facecolor`](../collections_api#matplotlib.collections.Collection.set_facecolor "matplotlib.collections.Collection.set_facecolor") or facecolors or fc | color or list of colors | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](../collections_api#matplotlib.collections.Collection.set_hatch "matplotlib.collections.Collection.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout 
"matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](../collections_api#matplotlib.collections.Collection.set_joinstyle "matplotlib.collections.Collection.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](../collections_api#matplotlib.collections.Collection.set_linestyle "matplotlib.collections.Collection.set_linestyle") or dashes or linestyles or ls | str or tuple or list thereof | | [`linewidth`](../collections_api#matplotlib.collections.Collection.set_linewidth "matplotlib.collections.Collection.set_linewidth") or linewidths or lw | float or list of floats | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`norm`](../cm_api#matplotlib.cm.ScalarMappable.set_norm "matplotlib.cm.ScalarMappable.set_norm") | [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or str or None | | [`offset_transform`](../collections_api#matplotlib.collections.Collection.set_offset_transform "matplotlib.collections.Collection.set_offset_transform") or transOffset | unknown | | [`offsets`](../collections_api#matplotlib.collections.Collection.set_offsets "matplotlib.collections.Collection.set_offsets") | (N, 2) or (2,) array-like | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`paths`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`pickradius`](../collections_api#matplotlib.collections.Collection.set_pickradius "matplotlib.collections.Collection.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | `sizes` | ndarray or None | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`urls`](../collections_api#matplotlib.collections.Collection.set_urls "matplotlib.collections.Collection.set_urls") | list of str or None | | [`verts`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`verts_and_codes`](../collections_api#matplotlib.collections.PolyCollection.set_verts_and_codes 
"matplotlib.collections.PolyCollection.set_verts_and_codes") | unknown | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`Axes.quiverkey`](matplotlib.axes.axes.quiverkey#matplotlib.axes.Axes.quiverkey "matplotlib.axes.Axes.quiverkey") Add a key to a quiver plot. #### Notes **Arrow shape** The arrow is drawn as a polygon using the nodes as shown below. The values *headwidth*, *headlength*, and *headaxislength* are in units of *width*. The defaults give a slightly swept-back arrow. Here are some guidelines how to get other head shapes: * To make the head a triangle, make *headaxislength* the same as *headlength*. * To make the arrow more pointed, reduce *headwidth* or increase *headlength* and *headaxislength*. * To make the head smaller relative to the shaft, scale down all the head parameters proportionally. * To remove the head completely, set all *head* parameters to 0. * To get a diamond-shaped head, make *headaxislength* larger than *headlength*. * Warning: For *headaxislength* < (*headlength* / *headwidth*), the "headaxis" nodes (i.e. the ones connecting the head with the shaft) will protrude out of the head in forward direction so that the arrow head looks broken. matplotlib mpl_toolkits.mplot3d.proj3d.world_transformation mpl\_toolkits.mplot3d.proj3d.world\_transformation ================================================== mpl\_toolkits.mplot3d.proj3d.world\_transformation(*xmin*, *xmax*, *ymin*, *ymax*, *zmin*, *zmax*, *pb\_aspect=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/proj3d.py#L36-L55) Produce a matrix that scales homogeneous coords in the specified ranges to [0, 1], or [0, pb\_aspect[i]] if the plotbox aspect ratio is specified. matplotlib matplotlib.pyplot.subplot matplotlib.pyplot.subplot ========================= matplotlib.pyplot.subplot(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L1081-L1281) Add an Axes to the current figure or retrieve an existing Axes. This is a wrapper of [`Figure.add_subplot`](../figure_api#matplotlib.figure.Figure.add_subplot "matplotlib.figure.Figure.add_subplot") which provides additional behavior when working with the implicit API (see the notes section). Call signatures: ``` subplot(nrows, ncols, index, **kwargs) subplot(pos, **kwargs) subplot(**kwargs) subplot(ax) ``` Parameters: **\*args**int, (int, int, *index*), or [`SubplotSpec`](matplotlib.gridspec.subplotspec#matplotlib.gridspec.SubplotSpec "matplotlib.gridspec.SubplotSpec"), default: (1, 1, 1) The position of the subplot described by one of * Three integers (*nrows*, *ncols*, *index*). The subplot will take the *index* position on a grid with *nrows* rows and *ncols* columns. *index* starts at 1 in the upper left corner and increases to the right. *index* can also be a two-tuple specifying the (*first*, *last*) indices (1-based, and including *last*) of the subplot, e.g., `fig.add_subplot(3, 1, (1, 2))` makes a subplot that spans the upper 2/3 of the figure. * A 3-digit integer. The digits are interpreted as if given separately as three single-digit integers, i.e. `fig.add_subplot(235)` is the same as `fig.add_subplot(2, 3, 5)`. Note that this can only be used if there are no more than 9 subplots. 
* A [`SubplotSpec`](matplotlib.gridspec.subplotspec#matplotlib.gridspec.SubplotSpec "matplotlib.gridspec.SubplotSpec"). **projection**{None, 'aitoff', 'hammer', 'lambert', 'mollweide', 'polar', 'rectilinear', str}, optional The projection type of the subplot ([`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes")). *str* is the name of a custom projection, see [`projections`](../projections_api#module-matplotlib.projections "matplotlib.projections"). The default None results in a 'rectilinear' projection. **polar**bool, default: False If True, equivalent to projection='polar'. **sharex, sharey**[`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"), optional Share the x or y [`axis`](../axis_api#module-matplotlib.axis "matplotlib.axis") with sharex and/or sharey. The axis will have the same limits, ticks, and scale as the axis of the shared axes. **label**str A label for the returned axes. Returns: [`axes.SubplotBase`](matplotlib.axes.subplotbase#matplotlib.axes.SubplotBase "matplotlib.axes.SubplotBase"), or another subclass of [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") The axes of the subplot. The returned axes base class depends on the projection used. It is [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") if rectilinear projection is used and [`projections.polar.PolarAxes`](../projections_api#matplotlib.projections.polar.PolarAxes "matplotlib.projections.polar.PolarAxes") if polar projection is used. The returned axes is then a subplot subclass of the base class. Other Parameters: **\*\*kwargs** This method also takes the keyword arguments for the returned axes base class; except for the *figure* argument. The keyword arguments for the rectilinear base class [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") can be found in the following table but there might also be other keyword arguments if another projection is used. 
| Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | 
[`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin "matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | [`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | [`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | 
[`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`Figure.add_subplot`](../figure_api#matplotlib.figure.Figure.add_subplot "matplotlib.figure.Figure.add_subplot") [`pyplot.subplots`](matplotlib.pyplot.subplots#matplotlib.pyplot.subplots "matplotlib.pyplot.subplots") [`pyplot.axes`](matplotlib.pyplot.axes#matplotlib.pyplot.axes "matplotlib.pyplot.axes") [`Figure.subplots`](../figure_api#matplotlib.figure.Figure.subplots "matplotlib.figure.Figure.subplots") #### Notes Creating a new Axes will delete any preexisting Axes that overlaps with it beyond sharing a boundary: ``` import matplotlib.pyplot as plt # plot a line, implicitly creating a subplot(111) plt.plot([1, 2, 3]) # now create a subplot which represents the top plot of a grid # with 2 rows and 1 column. Since this subplot will overlap the # first, the plot (and its axes) previously created, will be removed plt.subplot(211) ``` If you do not want this behavior, use the [`Figure.add_subplot`](../figure_api#matplotlib.figure.Figure.add_subplot "matplotlib.figure.Figure.add_subplot") method or the [`pyplot.axes`](matplotlib.pyplot.axes#matplotlib.pyplot.axes "matplotlib.pyplot.axes") function instead. If no *kwargs* are passed and there exists an Axes in the location specified by *args* then that Axes will be returned rather than a new Axes being created. If *kwargs* are passed and there exists an Axes in the location specified by *args*, the projection type is the same, and the *kwargs* match with the existing Axes, then the existing Axes is returned. Otherwise a new Axes is created with the specified parameters. We save a reference to the *kwargs* which we use for this comparison. If any of the values in *kwargs* are mutable we will not detect the case where they are mutated. In these cases we suggest using [`Figure.add_subplot`](../figure_api#matplotlib.figure.Figure.add_subplot "matplotlib.figure.Figure.add_subplot") and the explicit Axes API rather than the implicit pyplot API. 
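For instance, the reuse behavior described in the notes above can be observed directly. The following is a minimal sketch (the grid position is arbitrary):

```
import matplotlib.pyplot as plt

fig = plt.figure()
ax1 = plt.subplot(1, 2, 1)   # creates the Axes
ax2 = plt.subplot(1, 2, 1)   # no kwargs: the existing Axes is returned
assert ax1 is ax2            # same object; no new Axes was created
```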
#### Examples ``` plt.subplot(221) # equivalent but more general ax1 = plt.subplot(2, 2, 1) # add a subplot with no frame ax2 = plt.subplot(222, frameon=False) # add a polar subplot plt.subplot(223, projection='polar') # add a red subplot that shares the x-axis with ax1 plt.subplot(224, sharex=ax1, facecolor='red') # delete ax2 from the figure plt.delaxes(ax2) # add ax2 to the figure again plt.subplot(ax2) # make the first axes "current" again plt.subplot(221) ``` Examples using `matplotlib.pyplot.subplot` ------------------------------------------ [Controlling view limits using margins and sticky\_edges](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_margins.html#sphx-glr-gallery-subplots-axes-and-figures-axes-margins-py) Controlling view limits using margins and sticky\_edges [Resizing axes with tight layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_tight_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-tight-layout-py) Resizing axes with tight layout [Geographic Projections](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/geo_demo.html#sphx-glr-gallery-subplots-axes-and-figures-geo-demo-py) Geographic Projections [Managing multiple figures in pyplot](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/multiple_figs_demo.html#sphx-glr-gallery-subplots-axes-and-figures-multiple-figs-demo-py) Managing multiple figures in pyplot [Sharing axis limits and views](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/share_axis_lims_views.html#sphx-glr-gallery-subplots-axes-and-figures-share-axis-lims-views-py) Sharing axis limits and views [Shared Axis](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/shared_axis_demo.html#sphx-glr-gallery-subplots-axes-and-figures-shared-axis-demo-py) Shared Axis [Multiple subplots](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplot.html#sphx-glr-gallery-subplots-axes-and-figures-subplot-py) Multiple subplots [Subplots spacings and margins](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplots_adjust.html#sphx-glr-gallery-subplots-axes-and-figures-subplots-adjust-py) Subplots spacings and margins [Bar chart on polar axis](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_bar.html#sphx-glr-gallery-pie-and-polar-charts-polar-bar-py) Bar chart on polar axis [Pyplot Two Subplots](https://matplotlib.org/stable/gallery/pyplots/pyplot_two_subplots.html#sphx-glr-gallery-pyplots-pyplot-two-subplots-py) Pyplot Two Subplots [Simple Colorbar](https://matplotlib.org/stable/gallery/axes_grid1/simple_colorbar.html#sphx-glr-gallery-axes-grid1-simple-colorbar-py) Simple Colorbar ![MATPLOTLIB **UNCHAINED**](https://matplotlib.org/stable/_images/sphx_glr_unchained_thumb.gif) [MATPLOTLIB UNCHAINED](https://matplotlib.org/stable/gallery/animation/unchained.html#sphx-glr-gallery-animation-unchained-py) MATPLOTLIB \*\*UNCHAINED\*\* [Customize Rc](https://matplotlib.org/stable/gallery/misc/customize_rc.html#sphx-glr-gallery-misc-customize-rc-py) Customize Rc [transforms.offset\_copy](https://matplotlib.org/stable/gallery/misc/transoffset.html#sphx-glr-gallery-misc-transoffset-py) transforms.offset\_copy [Pyplot tutorial](https://matplotlib.org/stable/tutorials/introductory/pyplot.html#sphx-glr-tutorials-introductory-pyplot-py) Pyplot tutorial [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) 
Constrained Layout Guide [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide
matplotlib matplotlib.axes.Axes.stackplot matplotlib.axes.Axes.stackplot ============================== Axes.stackplot(*x*, *\*args*, *labels=()*, *colors=None*, *baseline='zero'*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/stackplot.py#L16-L124) Draw a stacked area plot. Parameters: **x**(N,) array-like **y**(M, N) array-like The data is assumed to be unstacked. Each of the following calls is legal: ``` stackplot(x, y) # where y has shape (M, N) stackplot(x, y1, y2, y3) # where y1, y2, y3 have length N ``` **baseline**{'zero', 'sym', 'wiggle', 'weighted\_wiggle'} Method used to calculate the baseline: * `'zero'`: Constant zero baseline, i.e. a simple stacked plot. * `'sym'`: Symmetric around zero and is sometimes called 'ThemeRiver'. * `'wiggle'`: Minimizes the sum of the squared slopes. * `'weighted_wiggle'`: Does the same but weights to account for the size of each layer. It is also called 'Streamgraph'-layout. More details can be found at <http://leebyron.com/streamgraph/>. **labels**list of str, optional A sequence of labels to assign to each data series. If unspecified, then no labels will be applied to artists. **colors**list of color, optional A sequence of colors to be cycled through and used to color the stacked areas. The sequence need not be exactly the same length as the number of provided *y*, in which case the colors will repeat from the beginning. If not specified, the colors from the Axes property cycle will be used. **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). **\*\*kwargs** All other keyword arguments are passed to [`Axes.fill_between`](matplotlib.axes.axes.fill_between#matplotlib.axes.Axes.fill_between "matplotlib.axes.Axes.fill_between"). Returns: list of [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") A list of [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") instances, one for each element in the stacked area plot. Examples using `matplotlib.axes.Axes.stackplot` ----------------------------------------------- [Stackplots and streamgraphs](https://matplotlib.org/stable/gallery/lines_bars_and_markers/stackplot_demo.html#sphx-glr-gallery-lines-bars-and-markers-stackplot-demo-py) Stackplots and streamgraphs [stackplot(x, y)](https://matplotlib.org/stable/plot_types/basic/stackplot.html#sphx-glr-plot-types-basic-stackplot-py) stackplot(x, y) matplotlib mpl_toolkits.axes_grid1.mpl_axes.SimpleChainedObjects mpl\_toolkits.axes\_grid1.mpl\_axes.SimpleChainedObjects ======================================================== *class*mpl\_toolkits.axes\_grid1.mpl\_axes.SimpleChainedObjects(*objects*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/mpl_axes.py#L6-L16) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") \_\_call\_\_(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/mpl_axes.py#L14-L16) Call self as a function.
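Returning to `Axes.stackplot` documented above, here is a minimal usage sketch; the data values and labels are illustrative assumptions:

```
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(5)
y1 = np.array([1, 1, 2, 3, 5])
y2 = np.array([0, 4, 2, 6, 8])
y3 = np.array([1, 3, 5, 7, 9])

fig, ax = plt.subplots()
# Equivalent call form: ax.stackplot(x, np.vstack([y1, y2, y3]))
ax.stackplot(x, y1, y2, y3, labels=['A', 'B', 'C'], baseline='zero')
ax.legend(loc='upper left')
plt.show()
```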
matplotlib matplotlib.axes.Axes.add_patch matplotlib.axes.Axes.add\_patch =============================== Axes.add\_patch(*p*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2406-L2417) Add a [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") to the Axes; return the patch. Examples using `matplotlib.axes.Axes.add_patch` ----------------------------------------------- [Curve with error band](https://matplotlib.org/stable/gallery/lines_bars_and_markers/curve_error_band.html#sphx-glr-gallery-lines-bars-and-markers-curve-error-band-py) Curve with error band [Image Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_demo.html#sphx-glr-gallery-images-contours-and-fields-image-demo-py) Image Demo [Axes box aspect](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_box_aspect.html#sphx-glr-gallery-subplots-axes-and-figures-axes-box-aspect-py) Axes box aspect [Controlling view limits using margins and sticky\_edges](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_margins.html#sphx-glr-gallery-subplots-axes-and-figures-axes-margins-py) Controlling view limits using margins and sticky\_edges [Boxplots](https://matplotlib.org/stable/gallery/statistics/boxplot_demo.html#sphx-glr-gallery-statistics-boxplot-demo-py) Boxplots [Plot a confidence ellipse of a two-dimensional dataset](https://matplotlib.org/stable/gallery/statistics/confidence_ellipse.html#sphx-glr-gallery-statistics-confidence-ellipse-py) Plot a confidence ellipse of a two-dimensional dataset [Annotating Plots](https://matplotlib.org/stable/gallery/text_labels_and_annotations/annotation_demo.html#sphx-glr-gallery-text-labels-and-annotations-annotation-demo-py) Annotating Plots [Text alignment](https://matplotlib.org/stable/gallery/text_labels_and_annotations/text_alignment.html#sphx-glr-gallery-text-labels-and-annotations-text-alignment-py) Text alignment [Compound path](https://matplotlib.org/stable/gallery/shapes_and_collections/compound_path.html#sphx-glr-gallery-shapes-and-collections-compound-path-py) Compound path [Dolphins](https://matplotlib.org/stable/gallery/shapes_and_collections/dolphin.html#sphx-glr-gallery-shapes-and-collections-dolphin-py) Dolphins [Mmh Donuts!!!](https://matplotlib.org/stable/gallery/shapes_and_collections/donut.html#sphx-glr-gallery-shapes-and-collections-donut-py) Mmh Donuts!!! 
[Drawing fancy boxes](https://matplotlib.org/stable/gallery/shapes_and_collections/fancybox_demo.html#sphx-glr-gallery-shapes-and-collections-fancybox-demo-py) Drawing fancy boxes [Hatch style reference](https://matplotlib.org/stable/gallery/shapes_and_collections/hatch_style_reference.html#sphx-glr-gallery-shapes-and-collections-hatch-style-reference-py) Hatch style reference [PathPatch object](https://matplotlib.org/stable/gallery/shapes_and_collections/path_patch.html#sphx-glr-gallery-shapes-and-collections-path-patch-py) PathPatch object [Bezier Curve](https://matplotlib.org/stable/gallery/shapes_and_collections/quad_bezier.html#sphx-glr-gallery-shapes-and-collections-quad-bezier-py) Bezier Curve [ggplot style sheet](https://matplotlib.org/stable/gallery/style_sheets/ggplot.html#sphx-glr-gallery-style-sheets-ggplot-py) ggplot style sheet [Inset Locator Demo](https://matplotlib.org/stable/gallery/axes_grid1/inset_locator_demo.html#sphx-glr-gallery-axes-grid1-inset-locator-demo-py) Inset Locator Demo [Firefox](https://matplotlib.org/stable/gallery/showcase/firefox.html#sphx-glr-gallery-showcase-firefox-py) Firefox [Integral as the area under a curve](https://matplotlib.org/stable/gallery/showcase/integral.html#sphx-glr-gallery-showcase-integral-py) Integral as the area under a curve [Looking Glass](https://matplotlib.org/stable/gallery/event_handling/looking_glass.html#sphx-glr-gallery-event-handling-looking-glass-py) Looking Glass [Path Editor](https://matplotlib.org/stable/gallery/event_handling/path_editor.html#sphx-glr-gallery-event-handling-path-editor-py) Path Editor [Poly Editor](https://matplotlib.org/stable/gallery/event_handling/poly_editor.html#sphx-glr-gallery-event-handling-poly-editor-py) Poly Editor [Trifinder Event Demo](https://matplotlib.org/stable/gallery/event_handling/trifinder_event_demo.html#sphx-glr-gallery-event-handling-trifinder-event-demo-py) Trifinder Event Demo [Viewlims](https://matplotlib.org/stable/gallery/event_handling/viewlims.html#sphx-glr-gallery-event-handling-viewlims-py) Viewlims [Changing colors of lines intersecting a box](https://matplotlib.org/stable/gallery/misc/bbox_intersect.html#sphx-glr-gallery-misc-bbox-intersect-py) Changing colors of lines intersecting a box [Building histograms using Rectangles and PolyCollections](https://matplotlib.org/stable/gallery/misc/histogram_path.html#sphx-glr-gallery-misc-histogram-path-py) Building histograms using Rectangles and PolyCollections [Packed-bubble chart](https://matplotlib.org/stable/gallery/misc/packed_bubbles.html#sphx-glr-gallery-misc-packed-bubbles-py) Packed-bubble chart [SVG Filter Pie](https://matplotlib.org/stable/gallery/misc/svg_filter_pie.html#sphx-glr-gallery-misc-svg-filter-pie-py) SVG Filter Pie [TickedStroke patheffect](https://matplotlib.org/stable/gallery/misc/tickedstroke_demo.html#sphx-glr-gallery-misc-tickedstroke-demo-py) TickedStroke patheffect [Draw flat objects in 3D plot](https://matplotlib.org/stable/gallery/mplot3d/pathpatch3d.html#sphx-glr-gallery-mplot3d-pathpatch3d-py) Draw flat objects in 3D plot [Artist tests](https://matplotlib.org/stable/gallery/units/artist_tests.html#sphx-glr-gallery-units-artist-tests-py) Artist tests [Ellipse with units](https://matplotlib.org/stable/gallery/units/ellipse_with_units.html#sphx-glr-gallery-units-ellipse-with-units-py) Ellipse with units [Legend guide](https://matplotlib.org/stable/tutorials/intermediate/legend_guide.html#sphx-glr-tutorials-intermediate-legend-guide-py) Legend guide [Path 
Tutorial](https://matplotlib.org/stable/tutorials/advanced/path_tutorial.html#sphx-glr-tutorials-advanced-path-tutorial-py) Path Tutorial [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial [Specifying Colors](https://matplotlib.org/stable/tutorials/colors/colors.html#sphx-glr-tutorials-colors-colors-py) Specifying Colors [Text properties and layout](https://matplotlib.org/stable/tutorials/text/text_props.html#sphx-glr-tutorials-text-text-props-py) Text properties and layout matplotlib matplotlib.axes.Axes.twiny matplotlib.axes.Axes.twiny ========================== Axes.twiny()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L4502-L4529) Create a twin Axes sharing the yaxis. Create a new Axes with an invisible y-axis and an independent x-axis positioned opposite to the original one (i.e. at top). The y-axis autoscale setting will be inherited from the original Axes. To ensure that the tick marks of both x-axes align, see [`LinearLocator`](../ticker_api#matplotlib.ticker.LinearLocator "matplotlib.ticker.LinearLocator"). Returns: Axes The newly created Axes instance #### Notes For those who are 'picking' artists while using twiny, pick events are only called for the artists in the top-most Axes. matplotlib matplotlib.pyplot.semilogy matplotlib.pyplot.semilogy ========================== matplotlib.pyplot.semilogy(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2794-L2796) Make a plot with log scaling on the y axis. Call signatures: ``` semilogy([x], y, [fmt], data=None, **kwargs) semilogy([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs) ``` This is just a thin wrapper around [`plot`](matplotlib.pyplot.plot#matplotlib.pyplot.plot "matplotlib.pyplot.plot") which additionally changes the y-axis to log scaling. All of the concepts and parameters of plot can be used here as well. The additional parameters *base*, *subs*, and *nonpositive* control the y-axis properties. They are just forwarded to [`Axes.set_yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale"). Parameters: **base**float, default: 10 Base of the y logarithm. **subs**array-like, optional The location of the minor yticks. If *None*, reasonable locations are automatically chosen depending on the number of decades in the plot. See [`Axes.set_yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") for details. **nonpositive**{'mask', 'clip'}, default: 'mask' Non-positive values in y can be masked as invalid, or clipped to a very small positive number. **\*\*kwargs** All parameters supported by [`plot`](matplotlib.pyplot.plot#matplotlib.pyplot.plot "matplotlib.pyplot.plot"). Returns: list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") Objects representing the plotted data. matplotlib matplotlib.axes.Axes.mouseover matplotlib.axes.Axes.mouseover ============================== *property*Axes.mouseover Return whether this artist is queried for custom context information when the mouse cursor moves over it. 
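As a brief illustration of `pyplot.semilogy` documented above, a minimal sketch (the data are made up):

```
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0.1, 5, 100)
plt.semilogy(x, np.exp(x))         # y-axis switched to log scaling
plt.semilogy(x, 10 ** x, base=10)  # *base* is forwarded to Axes.set_yscale
plt.show()
```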
matplotlib matplotlib.axes.Axes.contains matplotlib.axes.Axes.contains ============================= Axes.contains(*mouseevent*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L4325-L4330) Test whether the artist contains the mouse event. Parameters: **mouseevent**[`matplotlib.backend_bases.MouseEvent`](../backend_bases_api#matplotlib.backend_bases.MouseEvent "matplotlib.backend_bases.MouseEvent") Returns: **contains**bool Whether any values are within the radius. **details**dict An artist-specific dictionary of details of the event context, such as which points are contained in the pick radius. See the individual Artist subclasses for details. matplotlib matplotlib.axes.Axes.pie matplotlib.axes.Axes.pie ======================== Axes.pie(*x*, *explode=None*, *labels=None*, *colors=None*, *autopct=None*, *pctdistance=0.6*, *shadow=False*, *labeldistance=1.1*, *startangle=0*, *radius=1*, *counterclock=True*, *wedgeprops=None*, *textprops=None*, *center=(0, 0)*, *frame=False*, *rotatelabels=False*, *\**, *normalize=True*, *data=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L3036-L3259) Plot a pie chart. Make a pie chart of array *x*. The fractional area of each wedge is given by `x/sum(x)`. The wedges are plotted counterclockwise, by default starting from the x-axis. Parameters: **x**1D array-like The wedge sizes. **explode**array-like, default: None If not *None*, is a `len(x)` array which specifies the fraction of the radius with which to offset each wedge. **labels**list, default: None A sequence of strings providing the labels for each wedge. **colors**array-like, default: None A sequence of colors through which the pie chart will cycle. If *None*, will use the colors in the currently active cycle. **autopct**None or str or callable, default: None If not *None*, is a string or function used to label the wedges with their numeric value. The label will be placed inside the wedge. If it is a format string, the label will be `fmt % pct`. If it is a function, it will be called. **pctdistance**float, default: 0.6 The ratio between the center of each pie slice and the start of the text generated by *autopct*. Ignored if *autopct* is *None*. **shadow**bool, default: False Draw a shadow beneath the pie. **normalize**bool, default: True When *True*, always make a full pie by normalizing x so that `sum(x) == 1`. *False* makes a partial pie if `sum(x) <= 1` and raises a [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.10)") for `sum(x) > 1`. **labeldistance**float or None, default: 1.1 The radial distance at which the pie labels are drawn. If set to `None`, labels are not drawn but are stored for use in `legend()`. **startangle**float, default: 0 degrees The angle by which the start of the pie is rotated, counterclockwise from the x-axis. **radius**float, default: 1 The radius of the pie. **counterclock**bool, default: True Specify the direction of the fractions, clockwise or counterclockwise. **wedgeprops**dict, default: None Dict of arguments passed to the wedge objects making the pie. For example, you can pass in `wedgeprops = {'linewidth': 3}` to set the width of the wedge border lines equal to 3. For more details, look at the doc/arguments of the wedge object. By default `clip_on=False`. **textprops**dict, default: None Dict of arguments to pass to the text objects. **center**(float, float), default: (0, 0) The coordinates of the center of the chart.
**frame**bool, default: False Plot Axes frame with the chart if true. **rotatelabels**bool, default: False Rotate each label to the angle of the corresponding slice if true. **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *explode*, *labels*, *colors* Returns: **patches**list A sequence of [`matplotlib.patches.Wedge`](matplotlib.patches.wedge#matplotlib.patches.Wedge "matplotlib.patches.Wedge") instances **texts**list A list of the label [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") instances. **autotexts**list A list of [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") instances for the numeric labels. This will only be returned if the parameter *autopct* is not *None*. #### Notes The pie chart will probably look best if the figure and Axes are square, or the Axes aspect is equal. This method sets the aspect ratio of the axis to "equal". The Axes aspect ratio can be controlled with [`Axes.set_aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect"). Examples using `matplotlib.axes.Axes.pie` ----------------------------------------- [Basic pie chart](https://matplotlib.org/stable/gallery/pie_and_polar_charts/pie_features.html#sphx-glr-gallery-pie-and-polar-charts-pie-features-py) Basic pie chart [Bar of pie](https://matplotlib.org/stable/gallery/pie_and_polar_charts/bar_of_pie.html#sphx-glr-gallery-pie-and-polar-charts-bar-of-pie-py) Bar of pie [Nested pie charts](https://matplotlib.org/stable/gallery/pie_and_polar_charts/nested_pie.html#sphx-glr-gallery-pie-and-polar-charts-nested-pie-py) Nested pie charts [Labeling a pie and a donut](https://matplotlib.org/stable/gallery/pie_and_polar_charts/pie_and_donut_labels.html#sphx-glr-gallery-pie-and-polar-charts-pie-and-donut-labels-py) Labeling a pie and a donut [SVG Filter Pie](https://matplotlib.org/stable/gallery/misc/svg_filter_pie.html#sphx-glr-gallery-misc-svg-filter-pie-py) SVG Filter Pie [pie(x)](https://matplotlib.org/stable/plot_types/stats/pie.html#sphx-glr-plot-types-stats-pie-py) pie(x) matplotlib mpl_toolkits.axes_grid1.parasite_axes.host_axes_class_factory mpl\_toolkits.axes\_grid1.parasite\_axes.host\_axes\_class\_factory =================================================================== mpl\_toolkits.axes\_grid1.parasite\_axes.host\_axes\_class\_factory(*axes\_class*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L2278-L2300) matplotlib mpl_toolkits.axes_grid1.anchored_artists.AnchoredEllipse mpl\_toolkits.axes\_grid1.anchored\_artists.AnchoredEllipse =========================================================== *class*mpl\_toolkits.axes\_grid1.anchored\_artists.AnchoredEllipse(*transform*, *width*, *height*, *angle*, *loc*, *pad=0.1*, *borderpad=0.1*, *prop=None*, *frameon=True*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/anchored_artists.py#L127-L171) Bases: [`AnchoredOffsetbox`](../offsetbox_api#matplotlib.offsetbox.AnchoredOffsetbox "matplotlib.offsetbox.AnchoredOffsetbox") Draw an anchored ellipse of a given size. Parameters: **transform**[`matplotlib.transforms.Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") The transformation object for the coordinate system in use, i.e., `matplotlib.axes.Axes.transData`. 
**width, height**float Width and height of the ellipse, given in coordinates of *transform*. **angle**float Rotation of the ellipse, in degrees, anti-clockwise. **loc**str Location of the ellipse. Valid locations are 'upper left', 'upper center', 'upper right', 'center left', 'center', 'center right', 'lower left', 'lower center', 'lower right'. For backward compatibility, numeric values are accepted as well. See the parameter *loc* of [`Legend`](../legend_api#matplotlib.legend.Legend "matplotlib.legend.Legend") for details. **pad**float, default: 0.1 Padding around the ellipse, in fraction of the font size. **borderpad**float, default: 0.1 Border padding, in fraction of the font size. **frameon**bool, default: True If True, draw a box around the ellipse. **prop**[`matplotlib.font_manager.FontProperties`](../font_manager_api#matplotlib.font_manager.FontProperties "matplotlib.font_manager.FontProperties"), optional Font property used as a reference for paddings. **\*\*kwargs** Keyword arguments forwarded to [`AnchoredOffsetbox`](../offsetbox_api#matplotlib.offsetbox.AnchoredOffsetbox "matplotlib.offsetbox.AnchoredOffsetbox"). Attributes: **ellipse**[`matplotlib.patches.Ellipse`](matplotlib.patches.ellipse#matplotlib.patches.Ellipse "matplotlib.patches.Ellipse") Ellipse patch drawn. set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *bbox\_to\_anchor=<UNSET>*, *child=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *gid=<UNSET>*, *height=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *mouseover=<UNSET>*, *offset=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *width=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once.
Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`bbox_to_anchor`](../offsetbox_api#matplotlib.offsetbox.AnchoredOffsetbox.set_bbox_to_anchor "matplotlib.offsetbox.AnchoredOffsetbox.set_bbox_to_anchor") | unknown | | [`child`](../offsetbox_api#matplotlib.offsetbox.AnchoredOffsetbox.set_child "matplotlib.offsetbox.AnchoredOffsetbox.set_child") | unknown | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`figure`](../offsetbox_api#matplotlib.offsetbox.OffsetBox.set_figure "matplotlib.offsetbox.OffsetBox.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`height`](../offsetbox_api#matplotlib.offsetbox.OffsetBox.set_height "matplotlib.offsetbox.OffsetBox.set_height") | float | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`offset`](../offsetbox_api#matplotlib.offsetbox.OffsetBox.set_offset "matplotlib.offsetbox.OffsetBox.set_offset") | (float, float) or callable | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform 
"matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`width`](../offsetbox_api#matplotlib.offsetbox.OffsetBox.set_width "matplotlib.offsetbox.OffsetBox.set_width") | float | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Examples using `mpl_toolkits.axes_grid1.anchored_artists.AnchoredEllipse` ------------------------------------------------------------------------- [Simple Anchored Artists](https://matplotlib.org/stable/gallery/axes_grid1/simple_anchored_artists.html#sphx-glr-gallery-axes-grid1-simple-anchored-artists-py) Simple Anchored Artists
matplotlib matplotlib.axes.Axes.get_lines matplotlib.axes.Axes.get\_lines =============================== Axes.get\_lines()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2158-L2160) Return a list of lines contained by the Axes. matplotlib matplotlib.axis.Axis.clear matplotlib.axis.Axis.clear ========================== Axis.clear()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L861-L895) Clear the axis. This resets axis properties to their default values: * the label * the scale * locators, formatters and ticks * major and minor grid * units * registered callbacks matplotlib matplotlib.pyplot.triplot matplotlib.pyplot.triplot ========================= matplotlib.pyplot.triplot(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2950-L2952) Draw an unstructured triangular grid as lines and/or markers. Call signatures: ``` triplot(triangulation, ...) triplot(x, y, [triangles], *, [mask=mask], ...) ``` The triangular grid can be specified either by passing a [`Triangulation`](../tri_api#matplotlib.tri.Triangulation "matplotlib.tri.Triangulation") object as the first parameter, or by passing the points *x*, *y* and optionally the *triangles* and a *mask*. If neither *triangulation* nor *triangles* is given, the triangulation is calculated on the fly. Parameters: **triangulation**[`Triangulation`](../tri_api#matplotlib.tri.Triangulation "matplotlib.tri.Triangulation") An already created triangular grid. **x, y, triangles, mask** Parameters defining the triangular grid. See [`Triangulation`](../tri_api#matplotlib.tri.Triangulation "matplotlib.tri.Triangulation"). This is mutually exclusive with specifying *triangulation*. **other\_parameters** All other args and kwargs are forwarded to [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). Returns: **lines**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") The drawn triangle edges. **markers**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") The drawn marker nodes. matplotlib mpl_toolkits.axisartist.grid_finder.DictFormatter mpl\_toolkits.axisartist.grid\_finder.DictFormatter =================================================== *class*mpl\_toolkits.axisartist.grid\_finder.DictFormatter(*format\_dict*, *formatter=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L318-L338) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") format\_dict : dictionary of format strings to be used.
formatter : fall-back formatter \_\_call\_\_(*direction*, *factor*, *values*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L328-L338) The *factor* is ignored if the value is found in the dictionary. Examples using `mpl_toolkits.axisartist.grid_finder.DictFormatter` ------------------------------------------------------------------ [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) mpl\_toolkits.axisartist.floating\_axes features matplotlib matplotlib.axes.Axes.get_ylim matplotlib.axes.Axes.get\_ylim ============================== Axes.get\_ylim()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3825-L3845) Return the y-axis view limits. Returns: **bottom, top**(float, float) The current y-axis limits in data coordinates. See also [`Axes.set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") [`set_ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound"), [`get_ybound`](matplotlib.axes.axes.get_ybound#matplotlib.axes.Axes.get_ybound "matplotlib.axes.Axes.get_ybound") [`invert_yaxis`](matplotlib.axes.axes.invert_yaxis#matplotlib.axes.Axes.invert_yaxis "matplotlib.axes.Axes.invert_yaxis"), [`yaxis_inverted`](matplotlib.axes.axes.yaxis_inverted#matplotlib.axes.Axes.yaxis_inverted "matplotlib.axes.Axes.yaxis_inverted") #### Notes The y-axis may be inverted, in which case the *bottom* value will be greater than the *top* value. Examples using `matplotlib.axes.Axes.get_ylim` ---------------------------------------------- [Line, Poly and RegularPoly Collection with autoscaling](https://matplotlib.org/stable/gallery/shapes_and_collections/collections.html#sphx-glr-gallery-shapes-and-collections-collections-py) Line, Poly and RegularPoly Collection with autoscaling matplotlib mpl_toolkits.axes_grid1.parasite_axes.HostAxesBase mpl\_toolkits.axes\_grid1.parasite\_axes.HostAxesBase ===================================================== *class*mpl\_toolkits.axes\_grid1.parasite\_axes.HostAxesBase(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L97-L226) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") clear()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L141-L144) draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L122-L139) get\_aux\_axes(*tr=None*, *viewlim\_mode='equal'*, *axes\_class=<class 'mpl\_toolkits.axes\_grid1.mpl\_axes.Axes'>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L102-L120) Add a parasite axes to this host. Despite this method's name, this should actually be thought of as an `add_parasite_axes` method. *tr* may be [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform"), in which case the following relation will hold: `parasite.transData = tr + host.transData`. Alternatively, it may be None (the default), in which case no special relationship will hold between the parasite's and the host's `transData`.
matplotlib matplotlib.axes.Axes.get_ylim matplotlib.axes.Axes.get\_ylim ============================== Axes.get\_ylim()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3825-L3845) Return the y-axis view limits. Returns: **bottom, top**(float, float) The current y-axis limits in data coordinates. See also [`Axes.set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") [`set_ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound"), [`get_ybound`](matplotlib.axes.axes.get_ybound#matplotlib.axes.Axes.get_ybound "matplotlib.axes.Axes.get_ybound") [`invert_yaxis`](matplotlib.axes.axes.invert_yaxis#matplotlib.axes.Axes.invert_yaxis "matplotlib.axes.Axes.invert_yaxis"), [`yaxis_inverted`](matplotlib.axes.axes.yaxis_inverted#matplotlib.axes.Axes.yaxis_inverted "matplotlib.axes.Axes.yaxis_inverted") #### Notes The y-axis may be inverted, in which case the *bottom* value will be greater than the *top* value. Examples using `matplotlib.axes.Axes.get_ylim` ---------------------------------------------- [Line, Poly and RegularPoly Collection with autoscaling](https://matplotlib.org/stable/gallery/shapes_and_collections/collections.html#sphx-glr-gallery-shapes-and-collections-collections-py) Line, Poly and RegularPoly Collection with autoscaling matplotlib mpl_toolkits.axes_grid1.parasite_axes.HostAxesBase mpl\_toolkits.axes\_grid1.parasite\_axes.HostAxesBase ===================================================== *class*mpl\_toolkits.axes\_grid1.parasite\_axes.HostAxesBase(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L97-L226) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") clear()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L141-L144) draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L122-L139) get\_aux\_axes(*tr=None*, *viewlim\_mode='equal'*, *axes\_class=<class 'mpl\_toolkits.axes\_grid1.mpl\_axes.Axes'>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L102-L120) Add a parasite axes to this host. Despite this method's name, this should actually be thought of as an `add_parasite_axes` method. *tr* may be a [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform"), in which case the following relation will hold: `parasite.transData = tr + host.transData`. Alternatively, it may be None (the default), in which case no special relationship will hold between the parasite's and the host's `transData`. get\_tightbbox(*renderer=None*, *call\_axes\_locator=True*, *bbox\_extra\_artists=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L218-L226) pick(*mouseevent*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L146-L151) twin(*aux\_trans=None*, *axes\_class=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L179-L193) Create a twin of Axes with no shared axis. While self will have ticks on the left and bottom axis, the returned axes will have ticks on the top and right axis. twinx(*axes\_class=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L153-L164) Create a twin of Axes with a shared x-axis but independent y-axis. The y-axis of self will have ticks on the left and the returned axes will have ticks on the right. twiny(*axes\_class=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L166-L177) Create a twin of Axes with a shared y-axis but independent x-axis. The x-axis of self will have ticks on the bottom and the returned axes will have ticks on the top. Examples using `mpl_toolkits.axes_grid1.parasite_axes.HostAxesBase` ------------------------------------------------------------------- [Parasite Simple2](https://matplotlib.org/stable/gallery/axes_grid1/parasite_simple2.html#sphx-glr-gallery-axes-grid1-parasite-simple2-py) Parasite Simple2 [Curvilinear grid demo](https://matplotlib.org/stable/gallery/axisartist/demo_curvelinear_grid.html#sphx-glr-gallery-axisartist-demo-curvelinear-grid-py) Curvilinear grid demo [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) mpl\_toolkits.axisartist.floating\_axes features [floating\_axis demo](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axis.html#sphx-glr-gallery-axisartist-demo-floating-axis-py) floating\_axis demo [Parasite Axes demo](https://matplotlib.org/stable/gallery/axisartist/demo_parasite_axes.html#sphx-glr-gallery-axisartist-demo-parasite-axes-py) Parasite Axes demo matplotlib mpl_toolkits.axisartist.axislines.Axes mpl\_toolkits.axisartist.axislines.Axes ======================================= *class*mpl\_toolkits.axisartist.axislines.Axes(*\*args*, *grid\_helper=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L440-L558) Bases: [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes._axes.Axes") Build an Axes in a figure. Parameters: **fig**[`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The Axes is built in the [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") *fig*. **rect**tuple (left, bottom, width, height). The Axes is built in the rectangle *rect*. *rect* is in [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") coordinates. **sharex, sharey**[`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"), optional The x or y [`axis`](../axis_api#module-matplotlib.axis "matplotlib.axis") is shared with the x or y axis in the input [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"). **frameon**bool, default: True Whether the Axes frame is visible.
**box\_aspect**float, optional Set a fixed aspect for the Axes box, i.e. the ratio of height to width. See [`set_box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") for details. **\*\*kwargs** Other optional keyword arguments: | Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout 
"matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin "matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | 
[`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | [`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | [`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Returns: [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") The new [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") object. \_\_call\_\_(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L442-L443) Call self as a function. *property*axis Convenience method to get or set some axis properties. Call signatures: ``` xmin, xmax, ymin, ymax = axis() xmin, xmax, ymin, ymax = axis([xmin, xmax, ymin, ymax]) xmin, xmax, ymin, ymax = axis(option) xmin, xmax, ymin, ymax = axis(**kwargs) ``` Parameters: **xmin, xmax, ymin, ymax**float, optional The axis limits to be set. This can also be achieved using ``` ax.set(xlim=(xmin, xmax), ylim=(ymin, ymax)) ``` **option**bool or str If a bool, turns axis lines and labels on or off. If a string, possible values are: | Value | Description | | --- | --- | | 'on' | Turn on axis lines and labels. Same as `True`. | | 'off' | Turn off axis lines and labels. Same as `False`. | | 'equal' | Set equal scaling (i.e., make circles circular) by changing axis limits. This is the same as `ax.set_aspect('equal', adjustable='datalim')`. Explicit data limits may not be respected in this case. | | 'scaled' | Set equal scaling (i.e., make circles circular) by changing dimensions of the plot box. This is the same as `ax.set_aspect('equal', adjustable='box', anchor='C')`. Additionally, further autoscaling will be disabled. | | 'tight' | Set limits just large enough to show all data, then disable further autoscaling. | | 'auto' | Automatic scaling (fill plot box with data). | | 'image' | 'scaled' with axis limits equal to data limits. | | 'square' | Square plot; similar to 'scaled', but initially forcing `xmax-xmin == ymax-ymin`. | **emit**bool, default: True Whether observers are notified of the axis limit change. This option is passed on to [`set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") and [`set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim"). Returns: **xmin, xmax, ymin, ymax**float The axis limits. 
See also [`matplotlib.axes.Axes.set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") [`matplotlib.axes.Axes.set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") clear()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L485-L511) Clear the Axes. get\_children()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L535-L541) Return a list of the child [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist")s of this [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"). get\_grid\_helper()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L513-L514) grid(*visible=None*, *which='major'*, *axis='both'*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L516-L533) Toggle the gridlines, and optionally set the properties of the lines. new\_fixed\_axis(*loc*, *offset=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L543-L551) new\_floating\_axis(*nth\_coord*, *value*, *axis\_direction='bottom'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L553-L558) new\_gridlines(*grid\_helper=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L470-L483) [*Deprecated*] Create and return a new GridlineCollection instance. Parameters: **which**{"major", "minor"} **axis**{"both", "x", "y"} #### Notes Deprecated since version 3.6. set(*\**, *adjustable=<UNSET>*, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *anchor=<UNSET>*, *animated=<UNSET>*, *aspect=<UNSET>*, *autoscale\_on=<UNSET>*, *autoscalex\_on=<UNSET>*, *autoscaley\_on=<UNSET>*, *axes\_locator=<UNSET>*, *axisbelow=<UNSET>*, *box\_aspect=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *facecolor=<UNSET>*, *frame\_on=<UNSET>*, *gid=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *mouseover=<UNSET>*, *navigate=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *position=<UNSET>*, *prop\_cycle=<UNSET>*, *rasterization\_zorder=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *title=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *xbound=<UNSET>*, *xlabel=<UNSET>*, *xlim=<UNSET>*, *xmargin=<UNSET>*, *xscale=<UNSET>*, *xticklabels=<UNSET>*, *xticks=<UNSET>*, *ybound=<UNSET>*, *ylabel=<UNSET>*, *ylim=<UNSET>*, *ymargin=<UNSET>*, *yscale=<UNSET>*, *yticklabels=<UNSET>*, *yticks=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once.
Supported properties are | Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin "matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | [`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | 
[`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | [`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | toggle\_axisline(*b=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L452-L464)
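A minimal sketch of using this class, following the axisartist gallery examples (the plotted data is illustrative). Requesting the class through *axes\_class* yields an Axes whose `axis` attribute acts as a dictionary of per-side AxisArtists:

```
import matplotlib.pyplot as plt
from mpl_toolkits.axisartist.axislines import Axes

fig = plt.figure()
ax = fig.add_subplot(axes_class=Axes)  # build an axisartist Axes

# Each side is an AxisArtist that can be toggled individually.
ax.axis["right"].set_visible(False)
ax.axis["top"].set_visible(False)

ax.plot([0, 1, 2], [0, 1, 4])
plt.show()
```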
matplotlib matplotlib.axes.Axes.get_xminorticklabels matplotlib.axes.Axes.get\_xminorticklabels ========================================== Axes.get\_xminorticklabels()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L72-L73) Return the xaxis' minor tick labels, as a list of [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"). matplotlib mpl_toolkits.axes_grid1.axes_rgb.make_rgb_axes mpl\_toolkits.axes\_grid1.axes\_rgb.make\_rgb\_axes =================================================== mpl\_toolkits.axes\_grid1.axes\_rgb.make\_rgb\_axes(*ax*, *pad=0.01*, *axes\_class=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_rgb.py#L7-L53) Parameters: **pad**float Fraction of the axes height. Examples using `mpl_toolkits.axes_grid1.axes_rgb.make_rgb_axes` --------------------------------------------------------------- [Showing RGB channels using RGBAxes](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_rgb.html#sphx-glr-gallery-axes-grid1-demo-axes-rgb-py) Showing RGB channels using RGBAxes matplotlib matplotlib.axes.Axes.get_transformed_clip_path_and_affine matplotlib.axes.Axes.get\_transformed\_clip\_path\_and\_affine ============================================================== Axes.get\_transformed\_clip\_path\_and\_affine()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L867-L875) Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation. matplotlib matplotlib.axes.Axes.plot matplotlib.axes.Axes.plot ========================= Axes.plot(*\*args*, *scalex=True*, *scaley=True*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L1417-L1669) Plot y versus x as lines and/or markers. Call signatures: ``` plot([x], y, [fmt], *, data=None, **kwargs) plot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs) ``` The coordinates of the points or line nodes are given by *x*, *y*. The optional parameter *fmt* is a convenient way for defining basic formatting like color, marker and linestyle. It's a shortcut string notation described in the *Notes* section below. ``` >>> plot(x, y) # plot x and y using default line style and color >>> plot(x, y, 'bo') # plot x and y using blue circle markers >>> plot(y) # plot y using x as index array 0..N-1 >>> plot(y, 'r+') # ditto, but with red plusses ``` You can use [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties as keyword arguments for more control over the appearance. Line properties and *fmt* can be mixed. The following two calls yield identical results: ``` >>> plot(x, y, 'go--', linewidth=2, markersize=12) >>> plot(x, y, color='green', marker='o', linestyle='dashed', ... linewidth=2, markersize=12) ``` When conflicting with *fmt*, keyword arguments take precedence. **Plotting labelled data** There's a convenient way for plotting objects with labelled data (i.e. data that can be accessed by index `obj['y']`). Instead of giving the data in *x* and *y*, you can provide the object in the *data* parameter and just give the labels for *x* and *y*: ``` >>> plot('xlabel', 'ylabel', data=obj) ``` All indexable objects are supported. This could e.g. be a [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)"), a [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame "(in pandas v1.4.4)") or a structured numpy array.
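A self-contained sketch of the *data* parameter (the dict contents are invented for illustration; any indexable object with these keys would do):

```
import matplotlib.pyplot as plt
import numpy as np

# Illustrative labelled data; the string arguments below are keys into it.
obj = {"xlabel": np.arange(5), "ylabel": np.arange(5) ** 2}

fig, ax = plt.subplots()
ax.plot("xlabel", "ylabel", data=obj)  # looks up obj["xlabel"], obj["ylabel"]
plt.show()
```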
**Plotting multiple sets of data** There are various ways to plot multiple sets of data. * The most straightforward way is just to call [`plot`](#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot") multiple times. Example: ``` >>> plot(x1, y1, 'bo') >>> plot(x2, y2, 'go') ``` * If *x* and/or *y* are 2D arrays a separate data set will be drawn for every column. If both *x* and *y* are 2D, they must have the same shape. If only one of them is 2D with shape (N, m) the other must have length N and will be used for every data set m. Example: ``` >>> x = [1, 2, 3] >>> y = np.array([[1, 2], [3, 4], [5, 6]]) >>> plot(x, y) ``` is equivalent to: ``` >>> for col in range(y.shape[1]): ... plot(x, y[:, col]) ``` * The third way is to specify multiple sets of *[x]*, *y*, *[fmt]* groups: ``` >>> plot(x1, y1, 'g^', x2, y2, 'g-') ``` In this case, any additional keyword argument applies to all datasets. Also, this syntax cannot be combined with the *data* parameter. By default, each line is assigned a different style specified by a 'style cycle'. The *fmt* and line property parameters are only necessary if you want explicit deviations from these defaults. Alternatively, you can also change the style cycle using `[rcParams["axes.prop\_cycle"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.prop_cycle#matplotlibrc-sample)` (default: `cycler('color', ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf'])`). Parameters: **x, y**array-like or scalar The horizontal / vertical coordinates of the data points. *x* values are optional and default to `range(len(y))`. Commonly, these parameters are 1D arrays. They can also be scalars, or two-dimensional (in that case, the columns represent separate data sets). These arguments cannot be passed as keywords. **fmt**str, optional A format string, e.g. 'ro' for red circles. See the *Notes* section for a full description of the format strings. Format strings are just an abbreviation for quickly setting basic line properties. All of these and more can also be controlled by keyword arguments. This argument cannot be passed as keyword. **data**indexable object, optional An object with labelled data. If given, provide the label names to plot in *x* and *y*. Note Technically there's a slight ambiguity in calls where the second label is a valid *fmt*. `plot('n', 'o', data=obj)` could be `plot(x, y)` or `plot(y, fmt)`. In such cases, the former interpretation is chosen, but a warning is issued. You may suppress the warning by adding an empty format string `plot('n', 'o', '', data=obj)`. Returns: list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") A list of lines representing the plotted data. Other Parameters: **scalex, scaley**bool, default: True These parameters determine if the view limits are adapted to the data limits. The values are passed on to [`autoscale_view`](matplotlib.axes.axes.autoscale_view#matplotlib.axes.Axes.autoscale_view "matplotlib.axes.Axes.autoscale_view").
**\*\*kwargs**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties, optional *kwargs* are used to specify properties like a line label (for auto legends), linewidth, antialiasing, marker face color. Example: ``` >>> plot([1, 2, 3], [1, 2, 3], 'go-', label='line 1', linewidth=2) >>> plot([1, 2, 3], [1, 4, 9], 'rs', label='line 2') ``` If you specify multiple lines with one plot call, the kwargs apply to all those lines. In case the label object is iterable, each element is used as labels for each set of data. Here is a list of available [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | [`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | 
[`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor "matplotlib.lines.Line2D.set_gapcolor") | color or None | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle 
"matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`scatter`](matplotlib.axes.axes.scatter#matplotlib.axes.Axes.scatter "matplotlib.axes.Axes.scatter") XY scatter plot with markers of varying size and/or color ( sometimes also called bubble chart). #### Notes **Format Strings** A format string consists of a part for color, marker and line: ``` fmt = '[marker][line][color]' ``` Each of them is optional. If not provided, the value from the style cycle is used. Exception: If `line` is given, but no `marker`, the data will be a line without markers. Other combinations such as `[color][marker][line]` are also supported, but note that their parsing may be ambiguous. **Markers** | character | description | | --- | --- | | `'.'` | point marker | | `','` | pixel marker | | `'o'` | circle marker | | `'v'` | triangle\_down marker | | `'^'` | triangle\_up marker | | `'<'` | triangle\_left marker | | `'>'` | triangle\_right marker | | `'1'` | tri\_down marker | | `'2'` | tri\_up marker | | `'3'` | tri\_left marker | | `'4'` | tri\_right marker | | `'8'` | octagon marker | | `'s'` | square marker | | `'p'` | pentagon marker | | `'P'` | plus (filled) marker | | `'*'` | star marker | | `'h'` | hexagon1 marker | | `'H'` | hexagon2 marker | | `'+'` | plus marker | | `'x'` | x marker | | `'X'` | x (filled) marker | | `'D'` | diamond marker | | `'d'` | thin\_diamond marker | | `'|'` | vline marker | | `'_'` | hline marker | **Line Styles** | character | description | | --- | --- | | `'-'` | solid line style | | `'--'` | dashed line style | | `'-.'` | dash-dot line style | | `':'` | dotted line style | Example format strings: ``` 'b' # blue markers with default shape 'or' # red circles '-g' # green solid line '--' # dashed line with default color '^k:' # black triangle_up markers connected by a dotted line ``` **Colors** The supported color abbreviations are the single letter codes | character | color | | --- | --- | | `'b'` | blue | | `'g'` | green | | `'r'` | red | | `'c'` | cyan | | `'m'` | magenta | | `'y'` | yellow | | `'k'` | black | | `'w'` | white | and the `'CN'` colors that index into the default property cycle. If the color is the only part of the format string, you can additionally use any [`matplotlib.colors`](../colors_api#module-matplotlib.colors "matplotlib.colors") spec, e.g. full names (`'green'`) or hex strings (`'#008000'`). 
Examples using `matplotlib.axes.Axes.plot` ------------------------------------------ [Plotting categorical variables](https://matplotlib.org/stable/gallery/lines_bars_and_markers/categorical_variables.html#sphx-glr-gallery-lines-bars-and-markers-categorical-variables-py) Plotting categorical variables [CSD Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/csd_demo.html#sphx-glr-gallery-lines-bars-and-markers-csd-demo-py) CSD Demo [Curve with error band](https://matplotlib.org/stable/gallery/lines_bars_and_markers/curve_error_band.html#sphx-glr-gallery-lines-bars-and-markers-curve-error-band-py) Curve with error band [EventCollection Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/eventcollection_demo.html#sphx-glr-gallery-lines-bars-and-markers-eventcollection-demo-py) EventCollection Demo [Fill Between and Alpha](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_alpha.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-alpha-py) Fill Between and Alpha [Filling the area between lines](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_demo.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-demo-py) Filling the area between lines [Fill Betweenx Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_betweenx_demo.html#sphx-glr-gallery-lines-bars-and-markers-fill-betweenx-demo-py) Fill Betweenx Demo [Customizing dashed line styles](https://matplotlib.org/stable/gallery/lines_bars_and_markers/line_demo_dash_control.html#sphx-glr-gallery-lines-bars-and-markers-line-demo-dash-control-py) Customizing dashed line styles [Lines with a ticked patheffect](https://matplotlib.org/stable/gallery/lines_bars_and_markers/lines_with_ticks_demo.html#sphx-glr-gallery-lines-bars-and-markers-lines-with-ticks-demo-py) Lines with a ticked patheffect [Marker reference](https://matplotlib.org/stable/gallery/lines_bars_and_markers/marker_reference.html#sphx-glr-gallery-lines-bars-and-markers-marker-reference-py) Marker reference [Markevery Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/markevery_demo.html#sphx-glr-gallery-lines-bars-and-markers-markevery-demo-py) Markevery Demo [Mapping marker properties to multivariate data](https://matplotlib.org/stable/gallery/lines_bars_and_markers/multivariate_marker_plot.html#sphx-glr-gallery-lines-bars-and-markers-multivariate-marker-plot-py) Mapping marker properties to multivariate data [Psd Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/psd_demo.html#sphx-glr-gallery-lines-bars-and-markers-psd-demo-py) Psd Demo [Simple Plot](https://matplotlib.org/stable/gallery/lines_bars_and_markers/simple_plot.html#sphx-glr-gallery-lines-bars-and-markers-simple-plot-py) Simple Plot [Using span\_where](https://matplotlib.org/stable/gallery/lines_bars_and_markers/span_regions.html#sphx-glr-gallery-lines-bars-and-markers-span-regions-py) Using span\_where [Creating a timeline with lines, dates, and text](https://matplotlib.org/stable/gallery/lines_bars_and_markers/timeline.html#sphx-glr-gallery-lines-bars-and-markers-timeline-py) Creating a timeline with lines, dates, and text [hlines and vlines](https://matplotlib.org/stable/gallery/lines_bars_and_markers/vline_hline_demo.html#sphx-glr-gallery-lines-bars-and-markers-vline-hline-demo-py) hlines and vlines [Contour Corner Mask](https://matplotlib.org/stable/gallery/images_contours_and_fields/contour_corner_mask.html#sphx-glr-gallery-images-contours-and-fields-contour-corner-mask-py) 
Contour Corner Mask [Contour plot of irregularly spaced data](https://matplotlib.org/stable/gallery/images_contours_and_fields/irregulardatagrid.html#sphx-glr-gallery-images-contours-and-fields-irregulardatagrid-py) Contour plot of irregularly spaced data [pcolormesh grids and shading](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_grids.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-grids-py) pcolormesh grids and shading [Spectrogram Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/specgram_demo.html#sphx-glr-gallery-images-contours-and-fields-specgram-demo-py) Spectrogram Demo [Watermark image](https://matplotlib.org/stable/gallery/images_contours_and_fields/watermark_image.html#sphx-glr-gallery-images-contours-and-fields-watermark-image-py) Watermark image [Aligning Labels](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/align_labels_demo.html#sphx-glr-gallery-subplots-axes-and-figures-align-labels-demo-py) Aligning Labels [Axes box aspect](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_box_aspect.html#sphx-glr-gallery-subplots-axes-and-figures-axes-box-aspect-py) Axes box aspect [Axes Demo](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axes-demo-py) Axes Demo [Controlling view limits using margins and sticky\_edges](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_margins.html#sphx-glr-gallery-subplots-axes-and-figures-axes-margins-py) Controlling view limits using margins and sticky\_edges [Axes Props](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_props.html#sphx-glr-gallery-subplots-axes-and-figures-axes-props-py) Axes Props [axhspan Demo](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axhspan_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axhspan-demo-py) axhspan Demo [Broken Axis](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/broken_axis.html#sphx-glr-gallery-subplots-axes-and-figures-broken-axis-py) Broken Axis [Resizing axes with constrained layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_constrained_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-constrained-layout-py) Resizing axes with constrained layout [Resizing axes with tight layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_tight_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-tight-layout-py) Resizing axes with tight layout [Figure labels: suptitle, supxlabel, supylabel](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/figure_title.html#sphx-glr-gallery-subplots-axes-and-figures-figure-title-py) Figure labels: suptitle, supxlabel, supylabel [Invert Axes](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/invert_axes.html#sphx-glr-gallery-subplots-axes-and-figures-invert-axes-py) Invert Axes [Secondary Axis](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/secondary_axis.html#sphx-glr-gallery-subplots-axes-and-figures-secondary-axis-py) Secondary Axis [Sharing axis limits and views](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/share_axis_lims_views.html#sphx-glr-gallery-subplots-axes-and-figures-share-axis-lims-views-py) Sharing axis limits and views [Figure subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html#sphx-glr-gallery-subplots-axes-and-figures-subfigures-py) Figure subfigures 
[Multiple subplots](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplot.html#sphx-glr-gallery-subplots-axes-and-figures-subplot-py) Multiple subplots [Creating multiple subplots using plt.subplots](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplots_demo.html#sphx-glr-gallery-subplots-axes-and-figures-subplots-demo-py) Creating multiple subplots using plt.subplots [Plots with different scales](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/two_scales.html#sphx-glr-gallery-subplots-axes-and-figures-two-scales-py) Plots with different scales [Boxplots](https://matplotlib.org/stable/gallery/statistics/boxplot_demo.html#sphx-glr-gallery-statistics-boxplot-demo-py) Boxplots [Using histograms to plot a cumulative distribution](https://matplotlib.org/stable/gallery/statistics/histogram_cumulative.html#sphx-glr-gallery-statistics-histogram-cumulative-py) Using histograms to plot a cumulative distribution [Some features of the histogram (hist) function](https://matplotlib.org/stable/gallery/statistics/histogram_features.html#sphx-glr-gallery-statistics-histogram-features-py) Some features of the histogram (hist) function [Polar plot](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_demo.html#sphx-glr-gallery-pie-and-polar-charts-polar-demo-py) Polar plot [Polar Legend](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_legend.html#sphx-glr-gallery-pie-and-polar-charts-polar-legend-py) Polar Legend [Using accented text in Matplotlib](https://matplotlib.org/stable/gallery/text_labels_and_annotations/accented_text.html#sphx-glr-gallery-text-labels-and-annotations-accented-text-py) Using accented text in Matplotlib [Scale invariant angle label](https://matplotlib.org/stable/gallery/text_labels_and_annotations/angle_annotation.html#sphx-glr-gallery-text-labels-and-annotations-angle-annotation-py) Scale invariant angle label [Annotating Plots](https://matplotlib.org/stable/gallery/text_labels_and_annotations/annotation_demo.html#sphx-glr-gallery-text-labels-and-annotations-annotation-demo-py) Annotating Plots [Composing Custom Legends](https://matplotlib.org/stable/gallery/text_labels_and_annotations/custom_legends.html#sphx-glr-gallery-text-labels-and-annotations-custom-legends-py) Composing Custom Legends [Date tick labels](https://matplotlib.org/stable/gallery/text_labels_and_annotations/date.html#sphx-glr-gallery-text-labels-and-annotations-date-py) Date tick labels [AnnotationBbox demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/demo_annotation_box.html#sphx-glr-gallery-text-labels-and-annotations-demo-annotation-box-py) AnnotationBbox demo [Labeling ticks using engineering notation](https://matplotlib.org/stable/gallery/text_labels_and_annotations/engineering_formatter.html#sphx-glr-gallery-text-labels-and-annotations-engineering-formatter-py) Labeling ticks using engineering notation [Annotation arrow style reference](https://matplotlib.org/stable/gallery/text_labels_and_annotations/fancyarrow_demo.html#sphx-glr-gallery-text-labels-and-annotations-fancyarrow-demo-py) Annotation arrow style reference [Legend using pre-defined labels](https://matplotlib.org/stable/gallery/text_labels_and_annotations/legend.html#sphx-glr-gallery-text-labels-and-annotations-legend-py) Legend using pre-defined labels [Legend Demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/legend_demo.html#sphx-glr-gallery-text-labels-and-annotations-legend-demo-py) Legend Demo
[Mathtext](https://matplotlib.org/stable/gallery/text_labels_and_annotations/mathtext_demo.html#sphx-glr-gallery-text-labels-and-annotations-mathtext-demo-py) Mathtext [Math fontfamily](https://matplotlib.org/stable/gallery/text_labels_and_annotations/mathtext_fontfamily_example.html#sphx-glr-gallery-text-labels-and-annotations-mathtext-fontfamily-example-py) Math fontfamily [Multiline](https://matplotlib.org/stable/gallery/text_labels_and_annotations/multiline.html#sphx-glr-gallery-text-labels-and-annotations-multiline-py) Multiline [Rendering math equations using TeX](https://matplotlib.org/stable/gallery/text_labels_and_annotations/tex_demo.html#sphx-glr-gallery-text-labels-and-annotations-tex-demo-py) Rendering math equations using TeX [Text Rotation Relative To Line](https://matplotlib.org/stable/gallery/text_labels_and_annotations/text_rotation_relative_to_line.html#sphx-glr-gallery-text-labels-and-annotations-text-rotation-relative-to-line-py) Text Rotation Relative To Line [Title positioning](https://matplotlib.org/stable/gallery/text_labels_and_annotations/titles_demo.html#sphx-glr-gallery-text-labels-and-annotations-titles-demo-py) Title positioning [Text watermark](https://matplotlib.org/stable/gallery/text_labels_and_annotations/watermark_text.html#sphx-glr-gallery-text-labels-and-annotations-watermark-text-py) Text watermark [Annotate Transform](https://matplotlib.org/stable/gallery/pyplots/annotate_transform.html#sphx-glr-gallery-pyplots-annotate-transform-py) Annotate Transform [Annotating a plot](https://matplotlib.org/stable/gallery/pyplots/annotation_basic.html#sphx-glr-gallery-pyplots-annotation-basic-py) Annotating a plot [Annotation Polar](https://matplotlib.org/stable/gallery/pyplots/annotation_polar.html#sphx-glr-gallery-pyplots-annotation-polar-py) Annotation Polar [Programmatically controlling subplot adjustment](https://matplotlib.org/stable/gallery/pyplots/auto_subplots_adjust.html#sphx-glr-gallery-pyplots-auto-subplots-adjust-py) Programmatically controlling subplot adjustment [Dollar Ticks](https://matplotlib.org/stable/gallery/pyplots/dollar_ticks.html#sphx-glr-gallery-pyplots-dollar-ticks-py) Dollar Ticks [Simple axes labels](https://matplotlib.org/stable/gallery/pyplots/fig_axes_labels_simple.html#sphx-glr-gallery-pyplots-fig-axes-labels-simple-py) Simple axes labels [Text Commands](https://matplotlib.org/stable/gallery/pyplots/text_commands.html#sphx-glr-gallery-pyplots-text-commands-py) Text Commands [Color Demo](https://matplotlib.org/stable/gallery/color/color_demo.html#sphx-glr-gallery-color-color-demo-py) Color Demo [Color by y-value](https://matplotlib.org/stable/gallery/color/color_by_yvalue.html#sphx-glr-gallery-color-color-by-yvalue-py) Color by y-value [PathPatch object](https://matplotlib.org/stable/gallery/shapes_and_collections/path_patch.html#sphx-glr-gallery-shapes-and-collections-path-patch-py) PathPatch object [Bezier Curve](https://matplotlib.org/stable/gallery/shapes_and_collections/quad_bezier.html#sphx-glr-gallery-shapes-and-collections-quad-bezier-py) Bezier Curve [Dark background style sheet](https://matplotlib.org/stable/gallery/style_sheets/dark_background.html#sphx-glr-gallery-style-sheets-dark-background-py) Dark background style sheet [FiveThirtyEight style sheet](https://matplotlib.org/stable/gallery/style_sheets/fivethirtyeight.html#sphx-glr-gallery-style-sheets-fivethirtyeight-py) FiveThirtyEight style sheet [ggplot style 
sheet](https://matplotlib.org/stable/gallery/style_sheets/ggplot.html#sphx-glr-gallery-style-sheets-ggplot-py) ggplot style sheet [Axes with a fixed physical size](https://matplotlib.org/stable/gallery/axes_grid1/demo_fixed_size_axes.html#sphx-glr-gallery-axes-grid1-demo-fixed-size-axes-py) Axes with a fixed physical size [Parasite Simple](https://matplotlib.org/stable/gallery/axes_grid1/parasite_simple.html#sphx-glr-gallery-axes-grid1-parasite-simple-py) Parasite Simple [Simple Axisline4](https://matplotlib.org/stable/gallery/axes_grid1/simple_axisline4.html#sphx-glr-gallery-axes-grid1-simple-axisline4-py) Simple Axisline4 [Axis line styles](https://matplotlib.org/stable/gallery/axisartist/demo_axisline_style.html#sphx-glr-gallery-axisartist-demo-axisline-style-py) Axis line styles [Parasite Axes demo](https://matplotlib.org/stable/gallery/axisartist/demo_parasite_axes.html#sphx-glr-gallery-axisartist-demo-parasite-axes-py) Parasite Axes demo [Parasite axis demo](https://matplotlib.org/stable/gallery/axisartist/demo_parasite_axes2.html#sphx-glr-gallery-axisartist-demo-parasite-axes2-py) Parasite axis demo [Custom spines with axisartist](https://matplotlib.org/stable/gallery/axisartist/simple_axisartist1.html#sphx-glr-gallery-axisartist-simple-axisartist1-py) Custom spines with axisartist [Simple Axisline](https://matplotlib.org/stable/gallery/axisartist/simple_axisline.html#sphx-glr-gallery-axisartist-simple-axisline-py) Simple Axisline [Anatomy of a figure](https://matplotlib.org/stable/gallery/showcase/anatomy.html#sphx-glr-gallery-showcase-anatomy-py) Anatomy of a figure [Integral as the area under a curve](https://matplotlib.org/stable/gallery/showcase/integral.html#sphx-glr-gallery-showcase-integral-py) Integral as the area under a curve [Stock prices over 32 years](https://matplotlib.org/stable/gallery/showcase/stock_prices.html#sphx-glr-gallery-showcase-stock-prices-py) Stock prices over 32 years [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [Decay](https://matplotlib.org/stable/gallery/animation/animate_decay.html#sphx-glr-gallery-animation-animate-decay-py) Decay [The Bayes update](https://matplotlib.org/stable/gallery/animation/bayes_update.html#sphx-glr-gallery-animation-bayes-update-py) The Bayes update [The double pendulum problem](https://matplotlib.org/stable/gallery/animation/double_pendulum.html#sphx-glr-gallery-animation-double-pendulum-py) The double pendulum problem [Animated 3D random walk](https://matplotlib.org/stable/gallery/animation/random_walk.html#sphx-glr-gallery-animation-random-walk-py) Animated 3D random walk [Animated line plot](https://matplotlib.org/stable/gallery/animation/simple_anim.html#sphx-glr-gallery-animation-simple-anim-py) Animated line plot [MATPLOTLIB UNCHAINED](https://matplotlib.org/stable/gallery/animation/unchained.html#sphx-glr-gallery-animation-unchained-py) MATPLOTLIB UNCHAINED [Mouse move and click events](https://matplotlib.org/stable/gallery/event_handling/coords_demo.html#sphx-glr-gallery-event-handling-coords-demo-py) Mouse move and click events
cursor](https://matplotlib.org/stable/gallery/event_handling/cursor_demo.html#sphx-glr-gallery-event-handling-cursor-demo-py) Cross hair cursor [Data Browser](https://matplotlib.org/stable/gallery/event_handling/data_browser.html#sphx-glr-gallery-event-handling-data-browser-py) Data Browser [Keypress event](https://matplotlib.org/stable/gallery/event_handling/keypress_demo.html#sphx-glr-gallery-event-handling-keypress-demo-py) Keypress event [Legend Picking](https://matplotlib.org/stable/gallery/event_handling/legend_picking.html#sphx-glr-gallery-event-handling-legend-picking-py) Legend Picking [Looking Glass](https://matplotlib.org/stable/gallery/event_handling/looking_glass.html#sphx-glr-gallery-event-handling-looking-glass-py) Looking Glass [Path Editor](https://matplotlib.org/stable/gallery/event_handling/path_editor.html#sphx-glr-gallery-event-handling-path-editor-py) Path Editor [Pick Event Demo](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo.html#sphx-glr-gallery-event-handling-pick-event-demo-py) Pick Event Demo [Pick Event Demo2](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo2.html#sphx-glr-gallery-event-handling-pick-event-demo2-py) Pick Event Demo2 [Resampling Data](https://matplotlib.org/stable/gallery/event_handling/resample.html#sphx-glr-gallery-event-handling-resample-py) Resampling Data [Timers](https://matplotlib.org/stable/gallery/event_handling/timers.html#sphx-glr-gallery-event-handling-timers-py) Timers [Changing colors of lines intersecting a box](https://matplotlib.org/stable/gallery/misc/bbox_intersect.html#sphx-glr-gallery-misc-bbox-intersect-py) Changing colors of lines intersecting a box [Custom projection](https://matplotlib.org/stable/gallery/misc/custom_projection.html#sphx-glr-gallery-misc-custom-projection-py) Custom projection [Patheffect Demo](https://matplotlib.org/stable/gallery/misc/patheffect_demo.html#sphx-glr-gallery-misc-patheffect-demo-py) Patheffect Demo [Pythonic Matplotlib](https://matplotlib.org/stable/gallery/misc/pythonic_matplotlib.html#sphx-glr-gallery-misc-pythonic-matplotlib-py) Pythonic Matplotlib [SVG Filter Line](https://matplotlib.org/stable/gallery/misc/svg_filter_line.html#sphx-glr-gallery-misc-svg-filter-line-py) SVG Filter Line [TickedStroke patheffect](https://matplotlib.org/stable/gallery/misc/tickedstroke_demo.html#sphx-glr-gallery-misc-tickedstroke-demo-py) TickedStroke patheffect [Zorder Demo](https://matplotlib.org/stable/gallery/misc/zorder_demo.html#sphx-glr-gallery-misc-zorder-demo-py) Zorder Demo [Plot 2D data on 3D plot](https://matplotlib.org/stable/gallery/mplot3d/2dcollections3d.html#sphx-glr-gallery-mplot3d-2dcollections3d-py) Plot 2D data on 3D plot [3D box surface plot](https://matplotlib.org/stable/gallery/mplot3d/box3d.html#sphx-glr-gallery-mplot3d-box3d-py) 3D box surface plot [Parametric Curve](https://matplotlib.org/stable/gallery/mplot3d/lines3d.html#sphx-glr-gallery-mplot3d-lines3d-py) Parametric Curve [Lorenz Attractor](https://matplotlib.org/stable/gallery/mplot3d/lorenz_attractor.html#sphx-glr-gallery-mplot3d-lorenz-attractor-py) Lorenz Attractor [2D and 3D Axes in same Figure](https://matplotlib.org/stable/gallery/mplot3d/mixed_subplots.html#sphx-glr-gallery-mplot3d-mixed-subplots-py) 2D and 3D Axes in same Figure [Asinh Demo](https://matplotlib.org/stable/gallery/scales/asinh_demo.html#sphx-glr-gallery-scales-asinh-demo-py) Asinh Demo [Loglog
Aspect](https://matplotlib.org/stable/gallery/scales/aspect_loglog.html#sphx-glr-gallery-scales-aspect-loglog-py) Loglog Aspect [Scales](https://matplotlib.org/stable/gallery/scales/scales.html#sphx-glr-gallery-scales-scales-py) Scales [Symlog Demo](https://matplotlib.org/stable/gallery/scales/symlog_demo.html#sphx-glr-gallery-scales-symlog-demo-py) Symlog Demo [Anscombe's quartet](https://matplotlib.org/stable/gallery/specialty_plots/anscombe.html#sphx-glr-gallery-specialty-plots-anscombe-py) Anscombe's quartet [Radar chart (aka spider or star chart)](https://matplotlib.org/stable/gallery/specialty_plots/radar_chart.html#sphx-glr-gallery-specialty-plots-radar-chart-py) Radar chart (aka spider or star chart) [Centered spines with arrows](https://matplotlib.org/stable/gallery/spines/centered_spines_with_arrows.html#sphx-glr-gallery-spines-centered-spines-with-arrows-py) Centered spines with arrows [Multiple Yaxis With Spines](https://matplotlib.org/stable/gallery/spines/multiple_yaxis_with_spines.html#sphx-glr-gallery-spines-multiple-yaxis-with-spines-py) Multiple Yaxis With Spines [Spine Placement](https://matplotlib.org/stable/gallery/spines/spine_placement_demo.html#sphx-glr-gallery-spines-spine-placement-demo-py) Spine Placement [Spines](https://matplotlib.org/stable/gallery/spines/spines.html#sphx-glr-gallery-spines-spines-py) Spines [Custom spine bounds](https://matplotlib.org/stable/gallery/spines/spines_bounds.html#sphx-glr-gallery-spines-spines-bounds-py) Custom spine bounds [Centering labels between ticks](https://matplotlib.org/stable/gallery/ticks/centered_ticklabels.html#sphx-glr-gallery-ticks-centered-ticklabels-py) Centering labels between ticks [Formatting date ticks using ConciseDateFormatter](https://matplotlib.org/stable/gallery/ticks/date_concise_formatter.html#sphx-glr-gallery-ticks-date-concise-formatter-py) Formatting date ticks using ConciseDateFormatter [Date Demo Convert](https://matplotlib.org/stable/gallery/ticks/date_demo_convert.html#sphx-glr-gallery-ticks-date-demo-convert-py) Date Demo Convert [Custom tick formatter for time series](https://matplotlib.org/stable/gallery/ticks/date_index_formatter.html#sphx-glr-gallery-ticks-date-index-formatter-py) Custom tick formatter for time series [Date Precision and Epochs](https://matplotlib.org/stable/gallery/ticks/date_precision_and_epochs.html#sphx-glr-gallery-ticks-date-precision-and-epochs-py) Date Precision and Epochs [Major and minor ticks](https://matplotlib.org/stable/gallery/ticks/major_minor_demo.html#sphx-glr-gallery-ticks-major-minor-demo-py) Major and minor ticks [The default tick formatter](https://matplotlib.org/stable/gallery/ticks/scalarformatter.html#sphx-glr-gallery-ticks-scalarformatter-py) The default tick formatter [Set default y-axis tick labels on the right](https://matplotlib.org/stable/gallery/ticks/tick_label_right.html#sphx-glr-gallery-ticks-tick-label-right-py) Set default y-axis tick labels on the right [Setting tick labels from a list of values](https://matplotlib.org/stable/gallery/ticks/tick_labels_from_values.html#sphx-glr-gallery-ticks-tick-labels-from-values-py) Setting tick labels from a list of values [Move x-axis tick labels to the top](https://matplotlib.org/stable/gallery/ticks/tick_xlabel_top.html#sphx-glr-gallery-ticks-tick-xlabel-top-py) Move x-axis tick labels to the top [Evans test](https://matplotlib.org/stable/gallery/units/evans_test.html#sphx-glr-gallery-units-evans-test-py) Evans test [CanvasAgg 
demo](https://matplotlib.org/stable/gallery/user_interfaces/canvasagg.html#sphx-glr-gallery-user-interfaces-canvasagg-py) CanvasAgg demo [Annotate Explain](https://matplotlib.org/stable/gallery/userdemo/annotate_explain.html#sphx-glr-gallery-userdemo-annotate-explain-py) Annotate Explain [Connect Simple01](https://matplotlib.org/stable/gallery/userdemo/connect_simple01.html#sphx-glr-gallery-userdemo-connect-simple01-py) Connect Simple01 [Connection styles for annotations](https://matplotlib.org/stable/gallery/userdemo/connectionstyle_demo.html#sphx-glr-gallery-userdemo-connectionstyle-demo-py) Connection styles for annotations [Nested GridSpecs](https://matplotlib.org/stable/gallery/userdemo/demo_gridspec06.html#sphx-glr-gallery-userdemo-demo-gridspec06-py) Nested GridSpecs [PGF fonts](https://matplotlib.org/stable/gallery/userdemo/pgf_fonts.html#sphx-glr-gallery-userdemo-pgf-fonts-py) PGF fonts [PGF texsystem](https://matplotlib.org/stable/gallery/userdemo/pgf_texsystem.html#sphx-glr-gallery-userdemo-pgf-texsystem-py) PGF texsystem [Simple Annotate01](https://matplotlib.org/stable/gallery/userdemo/simple_annotate01.html#sphx-glr-gallery-userdemo-simple-annotate01-py) Simple Annotate01 [Simple Legend01](https://matplotlib.org/stable/gallery/userdemo/simple_legend01.html#sphx-glr-gallery-userdemo-simple-legend01-py) Simple Legend01 [Simple Legend02](https://matplotlib.org/stable/gallery/userdemo/simple_legend02.html#sphx-glr-gallery-userdemo-simple-legend02-py) Simple Legend02 [Annotated Cursor](https://matplotlib.org/stable/gallery/widgets/annotated_cursor.html#sphx-glr-gallery-widgets-annotated-cursor-py) Annotated Cursor [Buttons](https://matplotlib.org/stable/gallery/widgets/buttons.html#sphx-glr-gallery-widgets-buttons-py) Buttons [Check Buttons](https://matplotlib.org/stable/gallery/widgets/check_buttons.html#sphx-glr-gallery-widgets-check-buttons-py) Check Buttons [Cursor](https://matplotlib.org/stable/gallery/widgets/cursor.html#sphx-glr-gallery-widgets-cursor-py) Cursor [Multicursor](https://matplotlib.org/stable/gallery/widgets/multicursor.html#sphx-glr-gallery-widgets-multicursor-py) Multicursor [Radio Buttons](https://matplotlib.org/stable/gallery/widgets/radio_buttons.html#sphx-glr-gallery-widgets-radio-buttons-py) Radio Buttons [Rectangle and ellipse selectors](https://matplotlib.org/stable/gallery/widgets/rectangle_selector.html#sphx-glr-gallery-widgets-rectangle-selector-py) Rectangle and ellipse selectors [Slider](https://matplotlib.org/stable/gallery/widgets/slider_demo.html#sphx-glr-gallery-widgets-slider-demo-py) Slider [Snapping Sliders to Discrete Values](https://matplotlib.org/stable/gallery/widgets/slider_snap_demo.html#sphx-glr-gallery-widgets-slider-snap-demo-py) Snapping Sliders to Discrete Values [Span Selector](https://matplotlib.org/stable/gallery/widgets/span_selector.html#sphx-glr-gallery-widgets-span-selector-py) Span Selector [Textbox](https://matplotlib.org/stable/gallery/widgets/textbox.html#sphx-glr-gallery-widgets-textbox-py) Textbox [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial [Legend guide](https://matplotlib.org/stable/tutorials/intermediate/legend_guide.html#sphx-glr-tutorials-intermediate-legend-guide-py) Legend guide [Styling with 
cycler](https://matplotlib.org/stable/tutorials/intermediate/color_cycle.html#sphx-glr-tutorials-intermediate-color-cycle-py) Styling with cycler [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide [Arranging multiple Axes in a Figure](https://matplotlib.org/stable/tutorials/intermediate/arranging_axes.html#sphx-glr-tutorials-intermediate-arranging-axes-py) Arranging multiple Axes in a Figure [Autoscaling](https://matplotlib.org/stable/tutorials/intermediate/autoscale.html#sphx-glr-tutorials-intermediate-autoscale-py) Autoscaling [Faster rendering by using blitting](https://matplotlib.org/stable/tutorials/advanced/blitting.html#sphx-glr-tutorials-advanced-blitting-py) Faster rendering by using blitting [Path Tutorial](https://matplotlib.org/stable/tutorials/advanced/path_tutorial.html#sphx-glr-tutorials-advanced-path-tutorial-py) Path Tutorial [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial [Specifying Colors](https://matplotlib.org/stable/tutorials/colors/colors.html#sphx-glr-tutorials-colors-colors-py) Specifying Colors [Text in Matplotlib Plots](https://matplotlib.org/stable/tutorials/text/text_intro.html#sphx-glr-tutorials-text-text-intro-py) Text in Matplotlib Plots [plot(x, y)](https://matplotlib.org/stable/plot_types/basic/plot.html#sphx-glr-plot-types-basic-plot-py) plot(x, y) [fill\_between(x, y1, y2)](https://matplotlib.org/stable/plot_types/basic/fill_between.html#sphx-glr-plot-types-basic-fill-between-py) fill\_between(x, y1, y2) [tricontour(x, y, z)](https://matplotlib.org/stable/plot_types/unstructured/tricontour.html#sphx-glr-plot-types-unstructured-tricontour-py) tricontour(x, y, z) [tricontourf(x, y, z)](https://matplotlib.org/stable/plot_types/unstructured/tricontourf.html#sphx-glr-plot-types-unstructured-tricontourf-py) tricontourf(x, y, z) [tripcolor(x, y, z)](https://matplotlib.org/stable/plot_types/unstructured/tripcolor.html#sphx-glr-plot-types-unstructured-tripcolor-py) tripcolor(x, y, z)
matplotlib matplotlib.artist.Artist.get_path_effects matplotlib.artist.Artist.get\_path\_effects =========================================== Artist.get\_path\_effects()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L700-L701) matplotlib matplotlib.pyplot.vlines matplotlib.pyplot.vlines ======================== matplotlib.pyplot.vlines(*x*, *ymin*, *ymax*, *colors=None*, *linestyles='solid'*, *label=''*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2970-L2977) Plot vertical lines at each *x* from *ymin* to *ymax*. Parameters: **x**float or array-like x-indexes where to plot the lines. **ymin, ymax**float or array-like Respective beginning and end of each line. If scalars are provided, all lines will have the same length. **colors**list of colors, default: `[rcParams["lines.color"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=lines.color#matplotlibrc-sample)` (default: `'C0'`) **linestyles**{'solid', 'dashed', 'dashdot', 'dotted'}, optional **label**str, default: '' Returns: [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *ymin*, *ymax*, *colors* **\*\*kwargs**[`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") properties. See also [`hlines`](matplotlib.pyplot.hlines#matplotlib.pyplot.hlines "matplotlib.pyplot.hlines") horizontal lines [`axvline`](matplotlib.pyplot.axvline#matplotlib.pyplot.axvline "matplotlib.pyplot.axvline") vertical line across the Axes matplotlib matplotlib.pyplot.stem matplotlib.pyplot.stem ====================== matplotlib.pyplot.stem(*\*args*, *linefmt=None*, *markerfmt=None*, *basefmt=None*, *bottom=0*, *label=None*, *use\_line\_collection=<deprecated parameter>*, *orientation='vertical'*, *data=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2839-L2850) Create a stem plot. A stem plot draws lines perpendicular to a baseline at each location *locs* from the baseline to *heads*, and places a marker there. For vertical stem plots (the default), the *locs* are *x* positions, and the *heads* are *y* values. For horizontal stem plots, the *locs* are *y* positions, and the *heads* are *x* values. Call signature: ``` stem([locs,] heads, linefmt=None, markerfmt=None, basefmt=None) ``` The *locs*-positions are optional. The formats may be provided either as positional or as keyword-arguments. Passing *markerfmt* and *basefmt* positionally is deprecated since Matplotlib 3.5. Parameters: **locs**array-like, default: (0, 1, ..., len(heads) - 1) For vertical stem plots, the x-positions of the stems. For horizontal stem plots, the y-positions of the stems. **heads**array-like For vertical stem plots, the y-values of the stem heads. For horizontal stem plots, the x-values of the stem heads. **linefmt**str, optional A string defining the color and/or linestyle of the vertical lines: | Character | Line Style | | --- | --- | | `'-'` | solid line | | `'--'` | dashed line | | `'-.'` | dash-dot line | | `':'` | dotted line | Default: 'C0-', i.e. solid line with the first color of the color cycle. Note: Markers specified through this parameter (e.g. 
'x') will be silently ignored (unless using `use_line_collection=False`). Instead, markers should be specified using *markerfmt*. **markerfmt**str, optional A string defining the color and/or shape of the markers at the stem heads. If the marker is not given, use the marker 'o', i.e. filled circles. If the color is not given, use the color from *linefmt*. **basefmt**str, default: 'C3-' ('C2-' in classic mode) A format string defining the properties of the baseline. **orientation**str, default: 'vertical' If 'vertical', will produce a plot with stems oriented vertically, otherwise the stems will be oriented horizontally. **bottom**float, default: 0 The y/x-position of the baseline (depending on orientation). **label**str, default: None The label to use for the stems in legends. **use\_line\_collection**bool, default: True *Deprecated since 3.6* If `True`, store and plot the stem lines as a [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") instead of individual lines, which significantly increases performance. If `False`, defaults to the old behavior of using a list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") objects. **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). Returns: [`StemContainer`](../container_api#matplotlib.container.StemContainer "matplotlib.container.StemContainer") The container may be treated like a tuple (*markerline*, *stemlines*, *baseline*) #### Notes See also The MATLAB function [stem](https://www.mathworks.com/help/matlab/ref/stem.html) which inspired this method. Examples using `matplotlib.pyplot.stem` --------------------------------------- [Stem Plot](https://matplotlib.org/stable/gallery/lines_bars_and_markers/stem_plot.html#sphx-glr-gallery-lines-bars-and-markers-stem-plot-py) Stem Plot matplotlib matplotlib.axes.Axes.set_axes_locator matplotlib.axes.Axes.set\_axes\_locator ======================================= Axes.set\_axes\_locator(*locator*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1118-L1127) Set the Axes locator. Parameters: **locator**Callable[[Axes, Renderer], Bbox] Examples using `matplotlib.axes.Axes.set_axes_locator` ------------------------------------------------------ [HBoxDivider demo](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_hbox_divider.html#sphx-glr-gallery-axes-grid1-demo-axes-hbox-divider-py) `.HBoxDivider` demo matplotlib matplotlib.axes.Axes.get_xaxis_text1_transform matplotlib.axes.Axes.get\_xaxis\_text1\_transform ================================================= Axes.get\_xaxis\_text1\_transform(*pad\_points*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L899-L923) Returns: **transform**Transform The transform used for drawing x-axis labels, which will add *pad\_points* of padding (in points) between the axis and the label. The x-direction is in data coordinates and the y-direction is in axis coordinates **valign**{'center', 'top', 'bottom', 'baseline', 'center\_baseline'} The text vertical alignment. **halign**{'center', 'left', 'right'} The text horizontal alignment. 
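A minimal usage sketch, not part of the upstream reference, showing how the three return values can be inspected; the pad of 5 points is an arbitrary example:

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Fetch the transform Matplotlib would use to draw bottom x-axis tick
# labels padded 5 points below the axis, plus the text alignments.
trans, valign, halign = ax.get_xaxis_text1_transform(5)
print(valign, halign)  # typically 'top' 'center' for bottom tick labels
```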
#### Notes This transformation is primarily used by the [`Axis`](../axis_api#matplotlib.axis.Axis "matplotlib.axis.Axis") class, and is meant to be overridden by new kinds of projections that may need to place axis elements in different locations. matplotlib matplotlib.axes.Axes.violinplot matplotlib.axes.Axes.violinplot =============================== Axes.violinplot(*dataset*, *positions=None*, *vert=True*, *widths=0.5*, *showmeans=False*, *showextrema=True*, *showmedians=False*, *quantiles=None*, *points=100*, *bw\_method=None*, *\**, *data=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L7906-L8009) Make a violin plot. Make a violin plot for each column of *dataset* or each vector in sequence *dataset*. Each filled area extends to represent the entire data range, with optional lines at the mean, the median, the minimum, the maximum, and user-specified quantiles. Parameters: **dataset**Array or a sequence of vectors. The input data. **positions**array-like, default: [1, 2, ..., n] The positions of the violins. The ticks and limits are automatically set to match the positions. **vert**bool, default: True. If true, creates a vertical violin plot. Otherwise, creates a horizontal violin plot. **widths**array-like, default: 0.5 Either a scalar or a vector that sets the maximal width of each violin. The default is 0.5, which uses about half of the available horizontal space. **showmeans**bool, default: False If [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)"), will toggle rendering of the means. **showextrema**bool, default: True If [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)"), will toggle rendering of the extrema. **showmedians**bool, default: False If [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)"), will toggle rendering of the medians. **quantiles**array-like, default: None If not None, set a list of floats in interval [0, 1] for each violin, which stands for the quantiles that will be rendered for that violin. **points**int, default: 100 Defines the number of points to evaluate each of the gaussian kernel density estimations at. **bw\_method**str, scalar or callable, optional The method used to calculate the estimator bandwidth. This can be 'scott', 'silverman', a scalar constant or a callable. If a scalar, this will be used directly as `kde.factor`. If a callable, it should take a [`matplotlib.mlab.GaussianKDE`](../mlab_api#matplotlib.mlab.GaussianKDE "matplotlib.mlab.GaussianKDE") instance as its only parameter and return a scalar. If None (default), 'scott' is used. **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *dataset* Returns: dict A dictionary mapping each component of the violinplot to a list of the corresponding collection instances created. The dictionary has the following keys: * `bodies`: A list of the [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") instances containing the filled area of each violin. * `cmeans`: A [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") instance that marks the mean values of each of the violin's distribution. 
* `cmins`: A [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") instance that marks the bottom of each violin's distribution. * `cmaxes`: A [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") instance that marks the top of each violin's distribution. * `cbars`: A [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") instance that marks the centers of each violin's distribution. * `cmedians`: A [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") instance that marks the median values of each of the violin's distribution. * `cquantiles`: A [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") instance created to identify the quantile values of each of the violin's distribution. Examples using `matplotlib.axes.Axes.violinplot` ------------------------------------------------ [Violin plot customization](https://matplotlib.org/stable/gallery/statistics/customized_violin.html#sphx-glr-gallery-statistics-customized-violin-py) Violin plot customization [violinplot(D)](https://matplotlib.org/stable/plot_types/stats/violin.html#sphx-glr-plot-types-stats-violin-py) violinplot(D) matplotlib mpl_toolkits.mplot3d.art3d.Text3D mpl\_toolkits.mplot3d.art3d.Text3D ================================== *class*mpl\_toolkits.mplot3d.art3d.Text3D(*x=0*, *y=0*, *z=0*, *text=''*, *zdir='z'*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L73-L150) Bases: [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") Text object with 3D position and direction. Parameters: **x, y, z** The position of the text. **text**str The text string to display. **zdir**{'x', 'y', 'z', None, 3-tuple} The direction of the text. See [`get_dir_vector`](mpl_toolkits.mplot3d.art3d.get_dir_vector#mpl_toolkits.mplot3d.art3d.get_dir_vector "mpl_toolkits.mplot3d.art3d.get_dir_vector") for a description of the values. Other Parameters: **\*\*kwargs** All other parameters are passed on to [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"). Create a [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") instance at *x*, *y* with string *text*. The text is aligned relative to the anchor point (*x*, *y*) according to `horizontalalignment` (default: 'left') and `verticalalignment` (default: 'bottom'). See also [Text alignment](https://matplotlib.org/stable/gallery/text_labels_and_annotations/text_alignment.html). While Text accepts the 'label' keyword argument, by default it is not added to the handles of a legend. 
Valid keyword arguments are: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`backgroundcolor`](../text_api#matplotlib.text.Text.set_backgroundcolor "matplotlib.text.Text.set_backgroundcolor") | color | | [`bbox`](../text_api#matplotlib.text.Text.set_bbox "matplotlib.text.Text.set_bbox") | dict with properties for [`patches.FancyBboxPatch`](matplotlib.patches.fancybboxpatch#matplotlib.patches.FancyBboxPatch "matplotlib.patches.FancyBboxPatch") | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | unknown | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | unknown | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | unknown | | [`color`](../text_api#matplotlib.text.Text.set_color "matplotlib.text.Text.set_color") or c | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fontfamily`](../text_api#matplotlib.text.Text.set_fontfamily "matplotlib.text.Text.set_fontfamily") or family | {FONTNAME, 'serif', 'sans-serif', 'cursive', 'fantasy', 'monospace'} | | [`fontproperties`](../text_api#matplotlib.text.Text.set_fontproperties "matplotlib.text.Text.set_fontproperties") or font or font\_properties | [`font_manager.FontProperties`](../font_manager_api#matplotlib.font_manager.FontProperties "matplotlib.font_manager.FontProperties") or [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") | | [`fontsize`](../text_api#matplotlib.text.Text.set_fontsize "matplotlib.text.Text.set_fontsize") or size | float or {'xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large'} | | [`fontstretch`](../text_api#matplotlib.text.Text.set_fontstretch "matplotlib.text.Text.set_fontstretch") or stretch | {a numeric value in range 0-1000, 'ultra-condensed', 'extra-condensed', 'condensed', 'semi-condensed', 'normal', 'semi-expanded', 'expanded', 'extra-expanded', 'ultra-expanded'} | | [`fontstyle`](../text_api#matplotlib.text.Text.set_fontstyle "matplotlib.text.Text.set_fontstyle") or style | {'normal', 'italic', 'oblique'} | | [`fontvariant`](../text_api#matplotlib.text.Text.set_fontvariant "matplotlib.text.Text.set_fontvariant") or variant | {'normal', 'small-caps'} | | [`fontweight`](../text_api#matplotlib.text.Text.set_fontweight "matplotlib.text.Text.set_fontweight") or weight | {a numeric value in range 0-1000, 'ultralight', 'light', 'normal', 'regular', 'book', 'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy', 'extra bold', 'black'} | | 
[`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`horizontalalignment`](../text_api#matplotlib.text.Text.set_horizontalalignment "matplotlib.text.Text.set_horizontalalignment") or ha | {'left', 'center', 'right'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linespacing`](../text_api#matplotlib.text.Text.set_linespacing "matplotlib.text.Text.set_linespacing") | float (multiple of font size) | | [`math_fontfamily`](../text_api#matplotlib.text.Text.set_math_fontfamily "matplotlib.text.Text.set_math_fontfamily") | str | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`multialignment`](../text_api#matplotlib.text.Text.set_multialignment "matplotlib.text.Text.set_multialignment") or ma | {'left', 'right', 'center'} | | [`parse_math`](../text_api#matplotlib.text.Text.set_parse_math "matplotlib.text.Text.set_parse_math") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](../text_api#matplotlib.text.Text.set_position "matplotlib.text.Text.set_position") | (float, float) | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`rotation`](../text_api#matplotlib.text.Text.set_rotation "matplotlib.text.Text.set_rotation") | float or {'vertical', 'horizontal'} | | [`rotation_mode`](../text_api#matplotlib.text.Text.set_rotation_mode "matplotlib.text.Text.set_rotation_mode") | {None, 'default', 'anchor'} | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`text`](../text_api#matplotlib.text.Text.set_text "matplotlib.text.Text.set_text") | object | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`transform_rotates_text`](../text_api#matplotlib.text.Text.set_transform_rotates_text "matplotlib.text.Text.set_transform_rotates_text") | bool | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`usetex`](../text_api#matplotlib.text.Text.set_usetex "matplotlib.text.Text.set_usetex") | bool or None | | [`verticalalignment`](../text_api#matplotlib.text.Text.set_verticalalignment "matplotlib.text.Text.set_verticalalignment") or va | {'bottom', 'baseline', 'center', 'center\_baseline', 'top'} | | 
[`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`wrap`](../text_api#matplotlib.text.Text.set_wrap "matplotlib.text.Text.set_wrap") | bool | | [`x`](../text_api#matplotlib.text.Text.set_x "matplotlib.text.Text.set_x") | float | | [`y`](../text_api#matplotlib.text.Text.set_y "matplotlib.text.Text.set_y") | float | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L134-L145) Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible ([`Artist.get_visible`](matplotlib.artist.artist.get_visible#matplotlib.artist.Artist.get_visible "matplotlib.artist.Artist.get_visible") returns False). Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. #### Notes This method is overridden in the Artist subclasses. get\_position\_3d()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L97-L99) Return the (x, y, z) position of the text. get\_tightbbox(*renderer=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L147-L150) Like [`Artist.get_window_extent`](matplotlib.artist.artist.get_window_extent#matplotlib.artist.Artist.get_window_extent "matplotlib.artist.Artist.get_window_extent"), but includes any clipping. Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass renderer that will be used to draw the figures (i.e. `fig.canvas.get_renderer()`) Returns: [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") The enclosing bounding box (in figure pixel coordinates). set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *backgroundcolor=<UNSET>*, *bbox=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *color=<UNSET>*, *fontfamily=<UNSET>*, *fontproperties=<UNSET>*, *fontsize=<UNSET>*, *fontstretch=<UNSET>*, *fontstyle=<UNSET>*, *fontvariant=<UNSET>*, *fontweight=<UNSET>*, *gid=<UNSET>*, *horizontalalignment=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *linespacing=<UNSET>*, *math\_fontfamily=<UNSET>*, *mouseover=<UNSET>*, *multialignment=<UNSET>*, *parse\_math=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *position=<UNSET>*, *position\_3d=<UNSET>*, *rasterized=<UNSET>*, *rotation=<UNSET>*, *rotation\_mode=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *text=<UNSET>*, *transform=<UNSET>*, *transform\_rotates\_text=<UNSET>*, *url=<UNSET>*, *usetex=<UNSET>*, *verticalalignment=<UNSET>*, *visible=<UNSET>*, *wrap=<UNSET>*, *x=<UNSET>*, *y=<UNSET>*, *z=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`3d_properties`](#mpl_toolkits.mplot3d.art3d.Text3D.set_3d_properties "mpl_toolkits.mplot3d.art3d.Text3D.set_3d_properties") | unknown | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`backgroundcolor`](../text_api#matplotlib.text.Text.set_backgroundcolor "matplotlib.text.Text.set_backgroundcolor") | color | | [`bbox`](../text_api#matplotlib.text.Text.set_bbox "matplotlib.text.Text.set_bbox") | dict with properties for [`patches.FancyBboxPatch`](matplotlib.patches.fancybboxpatch#matplotlib.patches.FancyBboxPatch "matplotlib.patches.FancyBboxPatch") | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](../text_api#matplotlib.text.Text.set_color "matplotlib.text.Text.set_color") or c | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fontfamily`](../text_api#matplotlib.text.Text.set_fontfamily "matplotlib.text.Text.set_fontfamily") or family | {FONTNAME, 'serif', 'sans-serif', 'cursive', 'fantasy', 'monospace'} | | [`fontproperties`](../text_api#matplotlib.text.Text.set_fontproperties "matplotlib.text.Text.set_fontproperties") or font or font\_properties | [`font_manager.FontProperties`](../font_manager_api#matplotlib.font_manager.FontProperties "matplotlib.font_manager.FontProperties") or [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") | | [`fontsize`](../text_api#matplotlib.text.Text.set_fontsize "matplotlib.text.Text.set_fontsize") or size | float or {'xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large'} | | [`fontstretch`](../text_api#matplotlib.text.Text.set_fontstretch "matplotlib.text.Text.set_fontstretch") or stretch | {a numeric value in range 0-1000, 'ultra-condensed', 'extra-condensed', 'condensed', 'semi-condensed', 'normal', 'semi-expanded', 'expanded', 'extra-expanded', 'ultra-expanded'} | | [`fontstyle`](../text_api#matplotlib.text.Text.set_fontstyle "matplotlib.text.Text.set_fontstyle") or style | {'normal', 'italic', 'oblique'} | | [`fontvariant`](../text_api#matplotlib.text.Text.set_fontvariant "matplotlib.text.Text.set_fontvariant") or variant | {'normal', 'small-caps'} | | [`fontweight`](../text_api#matplotlib.text.Text.set_fontweight "matplotlib.text.Text.set_fontweight") or 
weight | {a numeric value in range 0-1000, 'ultralight', 'light', 'normal', 'regular', 'book', 'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy', 'extra bold', 'black'} | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`horizontalalignment`](../text_api#matplotlib.text.Text.set_horizontalalignment "matplotlib.text.Text.set_horizontalalignment") or ha | {'left', 'center', 'right'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linespacing`](../text_api#matplotlib.text.Text.set_linespacing "matplotlib.text.Text.set_linespacing") | float (multiple of font size) | | [`math_fontfamily`](../text_api#matplotlib.text.Text.set_math_fontfamily "matplotlib.text.Text.set_math_fontfamily") | str | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`multialignment`](../text_api#matplotlib.text.Text.set_multialignment "matplotlib.text.Text.set_multialignment") or ma | {'left', 'right', 'center'} | | [`parse_math`](../text_api#matplotlib.text.Text.set_parse_math "matplotlib.text.Text.set_parse_math") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](../text_api#matplotlib.text.Text.set_position "matplotlib.text.Text.set_position") | (float, float) | | [`position_3d`](#mpl_toolkits.mplot3d.art3d.Text3D.set_position_3d "mpl_toolkits.mplot3d.art3d.Text3D.set_position_3d") | (float, float, float) | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`rotation`](../text_api#matplotlib.text.Text.set_rotation "matplotlib.text.Text.set_rotation") | float or {'vertical', 'horizontal'} | | [`rotation_mode`](../text_api#matplotlib.text.Text.set_rotation_mode "matplotlib.text.Text.set_rotation_mode") | {None, 'default', 'anchor'} | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`text`](../text_api#matplotlib.text.Text.set_text "matplotlib.text.Text.set_text") | object | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`transform_rotates_text`](../text_api#matplotlib.text.Text.set_transform_rotates_text "matplotlib.text.Text.set_transform_rotates_text") | bool | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | 
[`usetex`](../text_api#matplotlib.text.Text.set_usetex "matplotlib.text.Text.set_usetex") | bool or None | | [`verticalalignment`](../text_api#matplotlib.text.Text.set_verticalalignment "matplotlib.text.Text.set_verticalalignment") or va | {'bottom', 'baseline', 'center', 'center\_baseline', 'top'} | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`wrap`](../text_api#matplotlib.text.Text.set_wrap "matplotlib.text.Text.set_wrap") | bool | | [`x`](../text_api#matplotlib.text.Text.set_x "matplotlib.text.Text.set_x") | float | | [`y`](../text_api#matplotlib.text.Text.set_y "matplotlib.text.Text.set_y") | float | | [`z`](#mpl_toolkits.mplot3d.art3d.Text3D.set_z "mpl_toolkits.mplot3d.art3d.Text3D.set_z") | float | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | set\_3d\_properties(*z=0*, *zdir='z'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L129-L132) set\_position\_3d(*xyz*, *zdir=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L101-L116) Set the (*x*, *y*, *z*) position of the text. Parameters: **xyz**(float, float, float) The position in 3D space. **zdir**{'x', 'y', 'z', None, 3-tuple} The direction of the text. If unspecified, the zdir will not be changed. set\_z(*z*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L118-L127) Set the *z* position of the text. Parameters: **z**float
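As a brief, hedged sketch (not from the upstream docs): a `Text3D` is usually obtained from `Axes3D.text`, after which the 3D-specific setters above apply. The coordinates below are arbitrary illustrative values:

```
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
# Axes3D.text returns a Text3D; zdir sets the direction the text runs in.
txt = ax.text(0.2, 0.4, 0.6, "sample label", zdir='x')
txt.set_position_3d((0.5, 0.5, 0.9), zdir='y')  # move it and re-orient it
print(txt.get_position_3d())  # -> the new (x, y, z) position
plt.show()
```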
matplotlib matplotlib.pyplot.barh matplotlib.pyplot.barh ====================== matplotlib.pyplot.barh(*y*, *width*, *height=0.8*, *left=None*, *\**, *align='center'*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2369-L2375) Make a horizontal bar plot. The bars are positioned at *y* with the given *align*ment. Their dimensions are given by *width* and *height*. The horizontal baseline is *left* (default 0). Many parameters can take either a single value applying to all bars or a sequence of values, one for each bar. Parameters: **y**float or array-like The y coordinates of the bars. See also *align* for the alignment of the bars to the coordinates. **width**float or array-like The width(s) of the bars. **height**float or array-like, default: 0.8 The heights of the bars. **left**float or array-like, default: 0 The x coordinates of the left side(s) of the bars. **align**{'center', 'edge'}, default: 'center' Alignment of the base to the *y* coordinates\*: * 'center': Center the bars on the *y* positions. * 'edge': Align the bottom edges of the bars with the *y* positions. To align the bars on the top edge pass a negative *height* and `align='edge'`. Returns: [`BarContainer`](../container_api#matplotlib.container.BarContainer "matplotlib.container.BarContainer") Container with all the bars and optionally errorbars. Other Parameters: **color**color or list of color, optional The colors of the bar faces. **edgecolor**color or list of color, optional The colors of the bar edges. **linewidth**float or array-like, optional Width of the bar edge(s). If 0, don't draw edges. **tick\_label**str or list of str, optional The tick labels of the bars. Default: None (Use default numeric labels.) **label**str or list of str, optional A single label is attached to the resulting [`BarContainer`](../container_api#matplotlib.container.BarContainer "matplotlib.container.BarContainer") as a label for the whole dataset. If a list is provided, it must be the same length as *y* and labels the individual bars. Repeated labels are not de-duplicated and will cause repeated label entries, so this is best used when bars also differ in style (e.g., by passing a list to *color*.) **xerr, yerr**float or array-like of shape(N,) or shape(2, N), optional If not *None*, add horizontal / vertical errorbars to the bar tips. The values are +/- sizes relative to the data: * scalar: symmetric +/- values for all bars * shape(N,): symmetric +/- values for each bar * shape(2, N): Separate - and + values for each bar. First row contains the lower errors, the second row contains the upper errors. * *None*: No errorbar. (default) See [Different ways of specifying error bars](https://matplotlib.org/stable/gallery/statistics/errorbar_features.html) for an example on the usage of *xerr* and *yerr*. **ecolor**color or list of color, default: 'black' The line color of the errorbars. **capsize**float, default: `[rcParams["errorbar.capsize"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=errorbar.capsize#matplotlibrc-sample)` (default: `0.0`) The length of the error bar caps in points. **error\_kw**dict, optional Dictionary of keyword arguments to be passed to the [`errorbar`](matplotlib.axes.axes.errorbar#matplotlib.axes.Axes.errorbar "matplotlib.axes.Axes.errorbar") method. Values of *ecolor* or *capsize* defined here take precedence over the independent keyword arguments. 
**log**bool, default: False If `True`, set the x-axis to be log scale. **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). **\*\*kwargs**[`Rectangle`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle "matplotlib.patches.Rectangle") properties | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`angle`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_angle "matplotlib.patches.Rectangle.set_angle") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`bounds`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_bounds "matplotlib.patches.Rectangle.set_bounds") | (left, bottom, width, height) | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`height`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_height "matplotlib.patches.Rectangle.set_height") | unknown | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | 
[`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`width`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_width "matplotlib.patches.Rectangle.set_width") | unknown | | [`x`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_x "matplotlib.patches.Rectangle.set_x") | unknown | | [`xy`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_xy "matplotlib.patches.Rectangle.set_xy") | (float, float) | | [`y`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_y "matplotlib.patches.Rectangle.set_y") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`bar`](matplotlib.pyplot.bar#matplotlib.pyplot.bar "matplotlib.pyplot.bar") Plot a vertical bar plot. #### Notes Stacked bars can be achieved by passing individual *left* values per bar. See [Discrete distribution as horizontal bar chart](https://matplotlib.org/stable/gallery/lines_bars_and_markers/horizontal_barchart_distribution.html). 
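A short usage sketch of the call described above (the category names and values are invented sample data, not from the upstream docs):

```
import matplotlib.pyplot as plt

people = ['Ada', 'Ben', 'Cal']   # invented sample data
minutes = [25, 40, 30]

# Bars grow to the right from left=0; height controls bar thickness
# and xerr draws horizontal error bars at the bar tips.
plt.barh(people, minutes, height=0.6, xerr=[2, 5, 3],
         color='tab:blue', edgecolor='black')
plt.xlabel('minutes')
plt.show()
```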
matplotlib matplotlib.axes.Axes.pcolor matplotlib.axes.Axes.pcolor =========================== Axes.pcolor(*\*args*, *shading=None*, *alpha=None*, *norm=None*, *cmap=None*, *vmin=None*, *vmax=None*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L5715-L5948) Create a pseudocolor plot with a non-regular rectangular grid. Call signature: ``` pcolor([X, Y,] C, **kwargs) ``` *X* and *Y* can be used to specify the corners of the quadrilaterals. Hint `pcolor()` can be very slow for large arrays. In most cases you should use the similar but much faster [`pcolormesh`](matplotlib.axes.axes.pcolormesh#matplotlib.axes.Axes.pcolormesh "matplotlib.axes.Axes.pcolormesh") instead. See [Differences between pcolor() and pcolormesh()](matplotlib.pyplot.pcolormesh#differences-pcolor-pcolormesh) for a discussion of the differences. Parameters: **C**2D array-like The color-mapped values. Color-mapping is controlled by *cmap*, *norm*, *vmin*, and *vmax*. **X, Y**array-like, optional The coordinates of the corners of quadrilaterals of a pcolormesh: ``` (X[i+1, j], Y[i+1, j]) (X[i+1, j+1], Y[i+1, j+1]) +-----+ | | +-----+ (X[i, j], Y[i, j]) (X[i, j+1], Y[i, j+1]) ``` Note that the column index corresponds to the x-coordinate, and the row index corresponds to y. For details, see the [Notes](matplotlib.pyplot.pcolormesh#axes-pcolormesh-grid-orientation) section below. If `shading='flat'` the dimensions of *X* and *Y* should be one greater than those of *C*, and the quadrilateral is colored due to the value at `C[i, j]`. If *X*, *Y* and *C* have equal dimensions, a warning will be raised and the last row and column of *C* will be ignored. If `shading='nearest'`, the dimensions of *X* and *Y* should be the same as those of *C* (if not, a ValueError will be raised). The color `C[i, j]` will be centered on `(X[i, j], Y[i, j])`. If *X* and/or *Y* are 1-D arrays or column vectors they will be expanded as needed into the appropriate 2D arrays, making a rectangular grid. **shading**{'flat', 'nearest', 'auto'}, default: `[rcParams["pcolor.shading"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=pcolor.shading#matplotlibrc-sample)` (default: `'auto'`) The fill style for the quadrilateral. Possible values: * 'flat': A solid color is used for each quad. The color of the quad (i, j), (i+1, j), (i, j+1), (i+1, j+1) is given by `C[i, j]`. The dimensions of *X* and *Y* should be one greater than those of *C*; if they are the same as *C*, then a deprecation warning is raised, and the last row and column of *C* are dropped. * 'nearest': Each grid point will have a color centered on it, extending halfway between the adjacent grid centers. The dimensions of *X* and *Y* must be the same as *C*. * 'auto': Choose 'flat' if dimensions of *X* and *Y* are one larger than *C*. Choose 'nearest' if dimensions are the same. See [pcolormesh grids and shading](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_grids.html) for more description. **cmap**str or [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"), default: `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`) The Colormap instance or registered colormap name used to map scalar data to colors. 
**norm**str or [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize"), optional The normalization method used to scale scalar data to the [0, 1] range before mapping to colors using *cmap*. By default, a linear scaling is used, mapping the lowest value to 0 and the highest to 1. If given, this can be one of the following: * An instance of [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or one of its subclasses (see [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html)). * A scale name, i.e. one of "linear", "log", "symlog", "logit", etc. For a list of available scales, call [`matplotlib.scale.get_scale_names()`](../scale_api#matplotlib.scale.get_scale_names "matplotlib.scale.get_scale_names"). In that case, a suitable [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") subclass is dynamically generated and instantiated. **vmin, vmax**float, optional When using scalar data and no explicit *norm*, *vmin* and *vmax* define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is an error to use *vmin*/*vmax* when a *norm* instance is given (but using a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") *norm* name together with *vmin*/*vmax* is acceptable). **edgecolors**{'none', None, 'face', color, color sequence}, optional The color of the edges. Defaults to 'none'. Possible values: * 'none' or '': No edge. * *None*: `[rcParams["patch.edgecolor"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=patch.edgecolor#matplotlibrc-sample)` (default: `'black'`) will be used. Note that currently `[rcParams["patch.force\_edgecolor"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=patch.force_edgecolor#matplotlibrc-sample)` (default: `False`) has to be True for this to work. * 'face': Use the adjacent face color. * A color or sequence of colors will set the edge color. The singular form *edgecolor* works as an alias. **alpha**float, default: None The alpha blending value of the face color, between 0 (transparent) and 1 (opaque). Note: The edgecolor is currently not affected by this. **snap**bool, default: False Whether to snap the mesh to pixel boundaries. Returns: [`matplotlib.collections.Collection`](../collections_api#matplotlib.collections.Collection "matplotlib.collections.Collection") Other Parameters: **antialiaseds**bool, default: False The default *antialiaseds* is False if the default *edgecolors*="none" is used. This eliminates artificial lines at patch boundaries, and works regardless of the value of alpha. If *edgecolors* is not "none", then the default *antialiaseds* is taken from `[rcParams["patch.antialiased"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=patch.antialiased#matplotlibrc-sample)` (default: `True`). Stroking the edges may be preferred if *alpha* is 1, but will cause artifacts otherwise. **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). **\*\*kwargs** Additionally, the following arguments are allowed. 
They are passed along to the [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") constructor: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](../collections_api#matplotlib.collections.Collection.set_alpha "matplotlib.collections.Collection.set_alpha") | array-like or scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](../collections_api#matplotlib.collections.Collection.set_antialiased "matplotlib.collections.Collection.set_antialiased") or aa or antialiaseds | bool or list of bools | | [`array`](../cm_api#matplotlib.cm.ScalarMappable.set_array "matplotlib.cm.ScalarMappable.set_array") | array-like or None | | [`capstyle`](../collections_api#matplotlib.collections.Collection.set_capstyle "matplotlib.collections.Collection.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clim`](../cm_api#matplotlib.cm.ScalarMappable.set_clim "matplotlib.cm.ScalarMappable.set_clim") | (vmin: float, vmax: float) | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`cmap`](../cm_api#matplotlib.cm.ScalarMappable.set_cmap "matplotlib.cm.ScalarMappable.set_cmap") | [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") or str or None | | [`color`](../collections_api#matplotlib.collections.Collection.set_color "matplotlib.collections.Collection.set_color") | color or list of rgba tuples | | [`edgecolor`](../collections_api#matplotlib.collections.Collection.set_edgecolor "matplotlib.collections.Collection.set_edgecolor") or ec or edgecolors | color or list of colors or 'face' | | [`facecolor`](../collections_api#matplotlib.collections.Collection.set_facecolor "matplotlib.collections.Collection.set_facecolor") or facecolors or fc | color or list of colors | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](../collections_api#matplotlib.collections.Collection.set_hatch "matplotlib.collections.Collection.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](../collections_api#matplotlib.collections.Collection.set_joinstyle 
"matplotlib.collections.Collection.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](../collections_api#matplotlib.collections.Collection.set_linestyle "matplotlib.collections.Collection.set_linestyle") or dashes or linestyles or ls | str or tuple or list thereof | | [`linewidth`](../collections_api#matplotlib.collections.Collection.set_linewidth "matplotlib.collections.Collection.set_linewidth") or linewidths or lw | float or list of floats | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`norm`](../cm_api#matplotlib.cm.ScalarMappable.set_norm "matplotlib.cm.ScalarMappable.set_norm") | [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or str or None | | [`offset_transform`](../collections_api#matplotlib.collections.Collection.set_offset_transform "matplotlib.collections.Collection.set_offset_transform") or transOffset | unknown | | [`offsets`](../collections_api#matplotlib.collections.Collection.set_offsets "matplotlib.collections.Collection.set_offsets") | (N, 2) or (2,) array-like | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`paths`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`pickradius`](../collections_api#matplotlib.collections.Collection.set_pickradius "matplotlib.collections.Collection.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | `sizes` | ndarray or None | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`urls`](../collections_api#matplotlib.collections.Collection.set_urls "matplotlib.collections.Collection.set_urls") | list of str or None | | [`verts`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`verts_and_codes`](../collections_api#matplotlib.collections.PolyCollection.set_verts_and_codes "matplotlib.collections.PolyCollection.set_verts_and_codes") | unknown | | 
[`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`pcolormesh`](matplotlib.axes.axes.pcolormesh#matplotlib.axes.Axes.pcolormesh "matplotlib.axes.Axes.pcolormesh") for an explanation of the differences between pcolor and pcolormesh. [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") If *X* and *Y* are each equidistant, [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") can be a faster alternative. #### Notes **Masked arrays** *X*, *Y* and *C* may be masked arrays. If either `C[i, j]`, or one of the vertices surrounding `C[i, j]` (*X* or *Y* at `[i, j], [i+1, j], [i, j+1], [i+1, j+1]`) is masked, nothing is plotted. **Grid orientation** The grid orientation follows the standard matrix convention: An array *C* with shape (nrows, ncolumns) is plotted with the column number as *X* and the row number as *Y*. Examples using `matplotlib.axes.Axes.pcolor` -------------------------------------------- [Pcolor Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolor_demo.html#sphx-glr-gallery-images-contours-and-fields-pcolor-demo-py) Pcolor Demo [Controlling view limits using margins and sticky\_edges](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_margins.html#sphx-glr-gallery-subplots-axes-and-figures-axes-margins-py) Controlling view limits using margins and sticky\_edges
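As a minimal usage sketch of the shading rules above (the grid and values are illustrative): with `shading='flat'`, *X* and *Y* hold the quadrilateral corners and therefore carry one more row and column than *C*.

```
import numpy as np
import matplotlib.pyplot as plt

C = np.arange(12).reshape(3, 4)   # 3x4 values to color-map

# Corner coordinates for shading='flat': one more row/column than C (4x5),
# with deliberately non-regular spacing.
x = np.array([0.0, 1.0, 2.0, 4.0, 7.0])
y = np.array([0.0, 1.0, 3.0, 6.0])
X, Y = np.meshgrid(x, y)

fig, ax = plt.subplots()
mesh = ax.pcolor(X, Y, C, shading='flat', edgecolors='k', linewidths=0.5)
fig.colorbar(mesh, ax=ax)
plt.show()
```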
matplotlib matplotlib.pyplot.arrow matplotlib.pyplot.arrow ======================= matplotlib.pyplot.arrow(*x*, *y*, *dx*, *dy*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2303-L2305) Add an arrow to the Axes. This draws an arrow from `(x, y)` to `(x+dx, y+dy)`. Parameters: **x, y**float The x and y coordinates of the arrow base. **dx, dy**float The length of the arrow along x and y direction. **width**float, default: 0.001 Width of full arrow tail. **length\_includes\_head**bool, default: False True if head is to be counted in calculating the length. **head\_width**float or None, default: 3\*width Total width of the full arrow head. **head\_length**float or None, default: 1.5\*head\_width Length of arrow head. **shape**{'full', 'left', 'right'}, default: 'full' Draw the left-half, right-half, or full arrow. **overhang**float, default: 0 Fraction that the arrow is swept back (0 overhang means triangular shape). Can be negative or greater than one. **head\_starts\_at\_zero**bool, default: False If True, the head starts being drawn at coordinate 0 instead of ending at coordinate 0. **\*\*kwargs** [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | 
[`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Returns: [`FancyArrow`](matplotlib.patches.fancyarrow#matplotlib.patches.FancyArrow "matplotlib.patches.FancyArrow") The created [`FancyArrow`](matplotlib.patches.fancyarrow#matplotlib.patches.FancyArrow "matplotlib.patches.FancyArrow") object. #### Notes The resulting arrow is affected by the Axes aspect ratio and limits. This may produce an arrow whose head is not square with its stem. To create an arrow whose head is square with its stem, use [`annotate()`](matplotlib.pyplot.annotate#matplotlib.pyplot.annotate "matplotlib.pyplot.annotate") for example: ``` >>> ax.annotate("", xy=(0.5, 0.5), xytext=(0, 0), ... 
arrowprops=dict(arrowstyle="->")) ``` matplotlib mpl_toolkits.axisartist.grid_finder.MaxNLocator mpl\_toolkits.axisartist.grid\_finder.MaxNLocator ================================================= *class*mpl\_toolkits.axisartist.grid\_finder.MaxNLocator(*nbins=10*, *steps=None*, *trim=True*, *integer=False*, *symmetric=False*, *prune=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L280-L293) Bases: [`MaxNLocator`](../ticker_api#matplotlib.ticker.MaxNLocator "matplotlib.ticker.MaxNLocator") Parameters: **nbins**int or 'auto', default: 10 Maximum number of intervals; one less than max number of ticks. If the string 'auto', the number of bins will be automatically determined based on the length of the axis. **steps**array-like, optional Sequence of nice numbers starting with 1 and ending with 10; e.g., [1, 2, 4, 5, 10], where the values are acceptable tick multiples. i.e. for the example, 20, 40, 60 would be an acceptable set of ticks, as would 0.4, 0.6, 0.8, because they are multiples of 2. However, 30, 60, 90 would not be allowed because 3 does not appear in the list of steps. **integer**bool, default: False If True, ticks will take only integer values, provided at least *min\_n\_ticks* integers are found within the view limits. **symmetric**bool, default: False If True, autoscaling will result in a range symmetric about zero. **prune**{'lower', 'upper', 'both', None}, default: None Remove edge ticks -- useful for stacked or ganged plots where the upper tick of one axes overlaps with the lower tick of the axes above it, primarily when `[rcParams["axes.autolimit\_mode"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.autolimit_mode#matplotlibrc-sample)` (default: `'data'`) is `'round_numbers'`. If `prune=='lower'`, the smallest tick will be removed. If `prune == 'upper'`, the largest tick will be removed. If `prune == 'both'`, the largest and smallest ticks will be removed. If *prune* is *None*, no ticks will be removed. **min\_n\_ticks**int, default: 2 Relax *nbins* and *integer* constraints if necessary to obtain this minimum number of ticks. \_\_call\_\_(*v1*, *v2*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_finder.py#L291-L293) Return the locations of the ticks. 
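Because this locator is a subclass of `matplotlib.ticker.MaxNLocator`, the effect of *nbins*, *steps* and *prune* can be sketched with the base class directly (the limits below are arbitrary):

```
from matplotlib.ticker import MaxNLocator

# At most 4 intervals; tick values restricted to multiples of 1, 2, 5 or 10
# times a power of ten.
loc = MaxNLocator(nbins=4, steps=[1, 2, 5, 10])
print(loc.tick_values(0, 87))   # e.g. ticks at multiples of 20

# prune='lower' drops the smallest tick -- handy for stacked axes.
loc = MaxNLocator(nbins=4, steps=[1, 2, 5, 10], prune='lower')
print(loc.tick_values(0, 87))
```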
Examples using `mpl_toolkits.axisartist.grid_finder.MaxNLocator` ---------------------------------------------------------------- [axis\_direction demo](https://matplotlib.org/stable/gallery/axisartist/demo_axis_direction.html#sphx-glr-gallery-axisartist-demo-axis-direction-py) axis\_direction demo [Demo CurveLinear Grid2](https://matplotlib.org/stable/gallery/axisartist/demo_curvelinear_grid2.html#sphx-glr-gallery-axisartist-demo-curvelinear-grid2-py) Demo CurveLinear Grid2 [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) `mpl_toolkits.axisartist.floating_axes` features [Simple Axis Pad](https://matplotlib.org/stable/gallery/axisartist/simple_axis_pad.html#sphx-glr-gallery-axisartist-simple-axis-pad-py) Simple Axis Pad matplotlib mpl_toolkits.axisartist.grid_helper_curvelinear.FloatingAxisArtistHelper mpl\_toolkits.axisartist.grid\_helper\_curvelinear.FloatingAxisArtistHelper =========================================================================== *class*mpl\_toolkits.axisartist.grid\_helper\_curvelinear.FloatingAxisArtistHelper(*grid\_helper*, *nth\_coord*, *value*, *axis\_direction=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L68-L251) Bases: [`Floating`](mpl_toolkits.axisartist.axislines.axisartisthelper#mpl_toolkits.axisartist.axislines.AxisArtistHelper.Floating "mpl_toolkits.axisartist.axislines.AxisArtistHelper.Floating") *nth\_coord* selects the coordinate along which *value* varies: `nth_coord = 0` -> x axis, `nth_coord = 1` -> y axis. get\_axislabel\_pos\_angle(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L135-L162) get\_axislabel\_transform(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L132-L133) get\_line(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L248-L251) get\_line\_transform(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L245-L246) get\_tick\_iterators(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L167-L243) Return iterators yielding *tick\_loc*, *tick\_angle*, *tick\_label* for the major and (optionally) the minor ticks. get\_tick\_transform(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L164-L165) *property*grid\_info[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/_api/deprecation.py) set\_extremes(*e1*, *e2*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L82-L87) update\_lim(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/grid_helper_curvelinear.py#L89-L130) matplotlib matplotlib.axes.Axes.add_callback matplotlib.axes.Axes.add\_callback ================================== Axes.add\_callback(*func*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L348-L375) Add a callback function that will be called whenever one of the [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist")'s properties changes. 
Parameters: **func**callable The callback function. It must have the signature: ``` def func(artist: Artist) -> Any ``` where *artist* is the calling [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"). Return values may exist but are ignored. Returns: int The observer id associated with the callback. This id can be used for removing the callback with [`remove_callback`](matplotlib.axes.axes.remove_callback#matplotlib.axes.Axes.remove_callback "matplotlib.axes.Axes.remove_callback") later. See also [`remove_callback`](matplotlib.axes.axes.remove_callback#matplotlib.axes.Axes.remove_callback "matplotlib.axes.Axes.remove_callback") matplotlib mpl_toolkits.axisartist.axes_grid.CbarAxes mpl\_toolkits.axisartist.axes\_grid.CbarAxes ============================================ *class*mpl\_toolkits.axisartist.axes\_grid.CbarAxes(*\*args*, *orientation*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axes_grid.py#L6-L8) Bases: [`CbarAxesBase`](mpl_toolkits.axes_grid1.axes_grid.cbaraxesbase#mpl_toolkits.axes_grid1.axes_grid.CbarAxesBase "mpl_toolkits.axes_grid1.axes_grid.CbarAxesBase"), [`Axes`](mpl_toolkits.axisartist.axislines.axes#mpl_toolkits.axisartist.axislines.Axes "mpl_toolkits.axisartist.axislines.Axes") [*Deprecated*] #### Notes Deprecated since version 3.5. Build an Axes in a figure. Parameters: **fig**[`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The Axes is built in the [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") *fig*. **rect**tuple (left, bottom, width, height). The Axes is built in the rectangle *rect*. *rect* is in [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") coordinates. **sharex, sharey**[`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"), optional The x or y [`axis`](../axis_api#module-matplotlib.axis "matplotlib.axis") is shared with the x or y axis in the input [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"). **frameon**bool, default: True Whether the Axes frame is visible. **box\_aspect**float, optional Set a fixed aspect for the Axes box, i.e. the ratio of height to width. See [`set_box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") for details. 
**\*\*kwargs** Other optional keyword arguments: | Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin "matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | [`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | 
[`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | [`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Returns: [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") The new [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") object. set(*\**, *adjustable=<UNSET>*, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *anchor=<UNSET>*, *animated=<UNSET>*, *aspect=<UNSET>*, *autoscale\_on=<UNSET>*, *autoscalex\_on=<UNSET>*, *autoscaley\_on=<UNSET>*, *axes\_locator=<UNSET>*, *axisbelow=<UNSET>*, *box\_aspect=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *facecolor=<UNSET>*, *frame\_on=<UNSET>*, *gid=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *mouseover=<UNSET>*, *navigate=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *position=<UNSET>*, *prop\_cycle=<UNSET>*, *rasterization\_zorder=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *title=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *xbound=<UNSET>*, *xlabel=<UNSET>*, *xlim=<UNSET>*, *xmargin=<UNSET>*, *xscale=<UNSET>*, *xticklabels=<UNSET>*, *xticks=<UNSET>*, *ybound=<UNSET>*, *ylabel=<UNSET>*, *ylim=<UNSET>*, *ymargin=<UNSET>*, *yscale=<UNSET>*, *yticklabels=<UNSET>*, *yticks=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin "matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | [`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | 
[`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | [`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float |
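Although `CbarAxes` itself is deprecated, the `set` method documented above behaves the same on any Axes; a minimal sketch (the property values are illustrative):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# One call instead of ax.set_title(...), ax.set_xlabel(...), etc.
ax.set(title="demo", xlabel="x", ylabel="y", xlim=(0, 10), ylim=(-1, 1))
plt.show()
```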
matplotlib mpl_toolkits.axes_grid1.mpl_axes.Axes mpl\_toolkits.axes\_grid1.mpl\_axes.Axes ======================================== *class*mpl\_toolkits.axes\_grid1.mpl\_axes.Axes(*fig*, *rect*, *\**, *facecolor=None*, *frameon=True*, *sharex=None*, *sharey=None*, *label=''*, *xscale=None*, *yscale=None*, *box\_aspect=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/mpl_axes.py#L19-L56) Bases: [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes._axes.Axes") Build an Axes in a figure. Parameters: **fig**[`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The Axes is built in the [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") *fig*. **rect**tuple (left, bottom, width, height). The Axes is built in the rectangle *rect*. *rect* is in [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") coordinates. **sharex, sharey**[`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"), optional The x or y [`axis`](../axis_api#module-matplotlib.axis "matplotlib.axis") is shared with the x or y axis in the input [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"). **frameon**bool, default: True Whether the Axes frame is visible. **box\_aspect**float, optional Set a fixed aspect for the Axes box, i.e. the ratio of height to width. See [`set_box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") for details. **\*\*kwargs** Other optional keyword arguments: | Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect 
"matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | 
[`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin "matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | [`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | [`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | [`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Returns: [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") The new [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") object. *class*AxisDict(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/mpl_axes.py#L21-L41) Bases: [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)") \_\_call\_\_(*\*v*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/mpl_axes.py#L40-L41) Call self as a function. *property*axis Convenience method to get or set some axis properties. Call signatures: ``` xmin, xmax, ymin, ymax = axis() xmin, xmax, ymin, ymax = axis([xmin, xmax, ymin, ymax]) xmin, xmax, ymin, ymax = axis(option) xmin, xmax, ymin, ymax = axis(**kwargs) ``` Parameters: **xmin, xmax, ymin, ymax**float, optional The axis limits to be set. This can also be achieved using ``` ax.set(xlim=(xmin, xmax), ylim=(ymin, ymax)) ``` **option**bool or str If a bool, turns axis lines and labels on or off. 
If a string, possible values are: | Value | Description | | --- | --- | | 'on' | Turn on axis lines and labels. Same as `True`. | | 'off' | Turn off axis lines and labels. Same as `False`. | | 'equal' | Set equal scaling (i.e., make circles circular) by changing axis limits. This is the same as `ax.set_aspect('equal', adjustable='datalim')`. Explicit data limits may not be respected in this case. | | 'scaled' | Set equal scaling (i.e., make circles circular) by changing dimensions of the plot box. This is the same as `ax.set_aspect('equal', adjustable='box', anchor='C')`. Additionally, further autoscaling will be disabled. | | 'tight' | Set limits just large enough to show all data, then disable further autoscaling. | | 'auto' | Automatic scaling (fill plot box with data). | | 'image' | 'scaled' with axis limits equal to data limits. | | 'square' | Square plot; similar to 'scaled', but initially forcing `xmax-xmin == ymax-ymin`. | **emit**bool, default: True Whether observers are notified of the axis limit change. This option is passed on to [`set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") and [`set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim"). Returns: **xmin, xmax, ymin, ymax**float The axis limits. See also [`matplotlib.axes.Axes.set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") [`matplotlib.axes.Axes.set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") clear()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/mpl_axes.py#L47-L56) Clear the Axes. set(*\**, *adjustable=<UNSET>*, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *anchor=<UNSET>*, *animated=<UNSET>*, *aspect=<UNSET>*, *autoscale\_on=<UNSET>*, *autoscalex\_on=<UNSET>*, *autoscaley\_on=<UNSET>*, *axes\_locator=<UNSET>*, *axisbelow=<UNSET>*, *box\_aspect=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *facecolor=<UNSET>*, *frame\_on=<UNSET>*, *gid=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *mouseover=<UNSET>*, *navigate=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *position=<UNSET>*, *prop\_cycle=<UNSET>*, *rasterization\_zorder=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *title=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *xbound=<UNSET>*, *xlabel=<UNSET>*, *xlim=<UNSET>*, *xmargin=<UNSET>*, *xscale=<UNSET>*, *xticklabels=<UNSET>*, *xticks=<UNSET>*, *ybound=<UNSET>*, *ylabel=<UNSET>*, *ylim=<UNSET>*, *ymargin=<UNSET>*, *yscale=<UNSET>*, *yticklabels=<UNSET>*, *yticks=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin "matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | [`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | 
[`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | [`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Examples using `mpl_toolkits.axes_grid1.mpl_axes.Axes` ------------------------------------------------------ [Axes Divider](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_divider.html#sphx-glr-gallery-axes-grid1-demo-axes-divider-py) Axes Divider [Axes Grid2](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_grid2.html#sphx-glr-gallery-axes-grid1-demo-axes-grid2-py) Axes Grid2 [Parasite Simple2](https://matplotlib.org/stable/gallery/axes_grid1/parasite_simple2.html#sphx-glr-gallery-axes-grid1-parasite-simple2-py) Parasite Simple2 [Simple ImageGrid](https://matplotlib.org/stable/gallery/axes_grid1/simple_axesgrid.html#sphx-glr-gallery-axes-grid1-simple-axesgrid-py) Simple ImageGrid [Simple ImageGrid 2](https://matplotlib.org/stable/gallery/axes_grid1/simple_axesgrid2.html#sphx-glr-gallery-axes-grid1-simple-axesgrid2-py) Simple ImageGrid 2 [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide
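A minimal sketch of the batch setter documented above; a plain `plt.subplots()` Axes is used here for simplicity, and the plotted data is arbitrary:

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
# One call instead of separate set_xlim, set_xlabel, ... calls.
ax.set(xlim=(0, 2), ylim=(0, 5), xlabel="x", ylabel="y", title="quadratic")
plt.show()
```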
matplotlib mpl_toolkits.axes_grid1.axes_grid.Grid mpl\_toolkits.axes\_grid1.axes\_grid.Grid ========================================= *class*mpl\_toolkits.axes\_grid1.axes\_grid.Grid(*fig*, *rect*, *nrows\_ncols*, *ngrids=None*, *direction='row'*, *axes\_pad=0.02*, *\**, *share\_all=False*, *share\_x=True*, *share\_y=True*, *label\_mode='L'*, *axes\_class=None*, *aspect=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L52-L311) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") A grid of Axes. In Matplotlib, the axes location (and size) is specified in normalized figure coordinates. This may not be ideal for images that need to be displayed with a given aspect ratio; for example, it is difficult to display multiple images of the same size with some fixed padding between them. AxesGrid can be used in such a case. Parameters: **fig**[`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The parent figure. **rect**(float, float, float, float) or int The axes position, as a `(left, bottom, width, height)` tuple or as a three-digit subplot position code (e.g., "121"). **nrows\_ncols**(int, int) Number of rows and columns in the grid. **ngrids**int or None, default: None If not None, only the first *ngrids* axes in the grid are created. **direction**{"row", "column"}, default: "row" Whether axes are created in row-major ("row by row") or column-major order ("column by column"). This also affects the order in which axes are accessed using indexing (`grid[index]`). **axes\_pad**float or (float, float), default: 0.02 Padding or (horizontal padding, vertical padding) between axes, in inches. **share\_all**bool, default: False Whether all axes share their x- and y-axis. Overrides *share\_x* and *share\_y*. **share\_x**bool, default: True Whether all axes of a column share their x-axis. **share\_y**bool, default: True Whether all axes of a row share their y-axis. **label\_mode**{"L", "1", "all"}, default: "L" Determines which axes will get tick labels: * "L": All axes on the left column get vertical tick labels; all axes on the bottom row get horizontal tick labels. * "1": Only the bottom left axes is labelled. * "all": all axes are labelled. **axes\_class**subclass of [`matplotlib.axes.Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"), default: None **aspect**bool, default: False Whether the axes aspect ratio follows the aspect ratio of the data limits. get\_aspect()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L254-L256) Return the aspect of the SubplotDivider. get\_axes\_locator()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L306-L307) get\_axes\_pad()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L238-L248) Return the axes padding. Returns: hpad, vpad Padding (horizontal pad, vertical pad) in inches. get\_divider()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L300-L301) get\_geometry()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L220-L224) Return the number of rows and columns of the grid as (nrows, ncols).
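A minimal sketch of the constructor parameters and geometry accessors above; the figure content is arbitrary, and `rect=111` places the grid in a single full-figure subplot slot:

```
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import Grid

fig = plt.figure()
# A 2x2 grid with 0.1 inch of padding between axes.
grid = Grid(fig, 111, nrows_ncols=(2, 2), axes_pad=0.1)
print(grid.get_geometry())  # -> (2, 2)
for ax in grid:             # iterated in row-major order by default
    ax.plot([0, 1], [0, 1])
plt.show()
```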
get\_vsize\_hsize()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L309-L311) [*Deprecated*] #### Notes Deprecated since version 3.5: set\_aspect(*aspect*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L250-L252) Set the aspect of the SubplotDivider. set\_axes\_locator(*locator*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L303-L304) set\_axes\_pad(*axes\_pad*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L226-L236) Set the padding between the axes. Parameters: **axes\_pad**(float, float) The padding (horizontal pad, vertical pad) in inches. set\_label\_mode(*mode*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L258-L298) Define which axes have tick labels. Parameters: **mode**{"L", "1", "all"} The label mode: * "L": All axes on the left column get vertical tick labels; all axes on the bottom row get horizontal tick labels. * "1": Only the bottom left axes is labelled. * "all": all axes are labelled. matplotlib matplotlib.colors.BoundaryNorm matplotlib.colors.BoundaryNorm ============================== *class*matplotlib.colors.BoundaryNorm(*boundaries*, *ncolors*, *clip=False*, *\**, *extend='neither'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1903-L2028) Bases: [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") Generate a colormap index based on discrete intervals. Unlike [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or [`LogNorm`](matplotlib.colors.lognorm#matplotlib.colors.LogNorm "matplotlib.colors.LogNorm"), [`BoundaryNorm`](#matplotlib.colors.BoundaryNorm "matplotlib.colors.BoundaryNorm") maps values to integers instead of to the interval 0-1. Parameters: **boundaries**array-like Monotonically increasing sequence of at least 2 bin edges: data falling in the n-th bin will be mapped to the n-th color. **ncolors**int Number of colors in the colormap to be used. **clip**bool, optional If clip is `True`, out of range values are mapped to 0 if they are below `boundaries[0]` or mapped to `ncolors - 1` if they are above `boundaries[-1]`. If clip is `False`, out of range values are mapped to -1 if they are below `boundaries[0]` or mapped to *ncolors* if they are above `boundaries[-1]`. These are then converted to valid indices by [`Colormap.__call__`](matplotlib.colors.colormap#matplotlib.colors.Colormap.__call__ "matplotlib.colors.Colormap.__call__"). **extend**{'neither', 'both', 'min', 'max'}, default: 'neither' Extend the number of bins to include one or both of the regions beyond the boundaries. For example, if `extend` is 'min', then the color to which the region between the first pair of boundaries is mapped will be distinct from the first color in the colormap, and by default a [`Colorbar`](../colorbar_api#matplotlib.colorbar.Colorbar "matplotlib.colorbar.Colorbar") will be drawn with the triangle extension on the left or lower end. #### Notes If there are fewer bins (including extensions) than colors, then the color index is chosen by linearly interpolating the `[0, nbins - 1]` range onto the `[0, ncolors - 1]` range, effectively skipping some colors in the middle of the colormap. 
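For example, the following sketch (with arbitrary random data) maps values into four discrete bins; the colormap is resampled so that *ncolors* matches the number of bins:

```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm

bounds = [0, 1, 2, 4, 8]                 # 5 edges -> 4 bins
norm = BoundaryNorm(bounds, ncolors=4)
data = np.random.default_rng(0).uniform(0, 8, size=(10, 10))
# Resample viridis to exactly 4 colors, one per bin.
plt.imshow(data, cmap=plt.get_cmap("viridis", 4), norm=norm)
plt.colorbar()
plt.show()
```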
\_\_call\_\_(*value*, *clip=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1978-L2018) This method behaves similarly to [`Normalize.__call__`](matplotlib.colors.normalize#matplotlib.colors.Normalize.__call__ "matplotlib.colors.Normalize.__call__"), except that it returns integers or arrays of int16. inverse(*value*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2020-L2028) Raises: ValueError BoundaryNorm is not invertible, so calling this method will always raise an error Examples using `matplotlib.colors.BoundaryNorm` ----------------------------------------------- [Multicolored lines](https://matplotlib.org/stable/gallery/lines_bars_and_markers/multicolored_line.html#sphx-glr-gallery-lines-bars-and-markers-multicolored-line-py) Multicolored lines [Colormap Normalizations](https://matplotlib.org/stable/gallery/images_contours_and_fields/colormap_normalizations.html#sphx-glr-gallery-images-contours-and-fields-colormap-normalizations-py) Colormap Normalizations [Creating annotated heatmaps](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_annotated_heatmap.html#sphx-glr-gallery-images-contours-and-fields-image-annotated-heatmap-py) Creating annotated heatmaps [Image Masked](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_masked.html#sphx-glr-gallery-images-contours-and-fields-image-masked-py) Image Masked [pcolormesh](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_levels.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-levels-py) pcolormesh [Left ventricle bullseye](https://matplotlib.org/stable/gallery/specialty_plots/leftventricle_bulleye.html#sphx-glr-gallery-specialty-plots-leftventricle-bulleye-py) Left ventricle bullseye [Customized Colorbars Tutorial](https://matplotlib.org/stable/tutorials/colors/colorbar_only.html#sphx-glr-tutorials-colors-colorbar-only-py) Customized Colorbars Tutorial [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#sphx-glr-tutorials-colors-colormapnorms-py) Colormap Normalization matplotlib matplotlib.axes.Axes.set_aspect matplotlib.axes.Axes.set\_aspect ================================ Axes.set\_aspect(*aspect*, *adjustable=None*, *anchor=None*, *share=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1631-L1699) Set the aspect ratio of the axes scaling, i.e. y/x-scale. Parameters: **aspect**{'auto', 'equal'} or float Possible values: * 'auto': fill the position rectangle with data. * 'equal': same as `aspect=1`, i.e. same scaling for x and y. * *float*: The displayed size of 1 unit in y-data coordinates will be *aspect* times the displayed size of 1 unit in x-data coordinates; e.g. for `aspect=2` a square in data coordinates will be rendered with a height of twice its width. **adjustable**None or {'box', 'datalim'}, optional If not `None`, this defines which parameter will be adjusted to meet the required aspect. See [`set_adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") for further details. **anchor**None or str or (float, float), optional If not `None`, this defines where the Axes will be drawn if there is extra space due to aspect constraints. 
The most common way to specify the anchor is with abbreviations of cardinal directions: | value | description | | --- | --- | | 'C' | centered | | 'SW' | lower left corner | | 'S' | middle of bottom edge | | 'SE' | lower right corner | | etc. | | See [`set_anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") for further details. **share**bool, default: False If `True`, apply the settings to all shared Axes. See also [`matplotlib.axes.Axes.set_adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") Set how the Axes adjusts to achieve the required aspect ratio. [`matplotlib.axes.Axes.set_anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") Set the position in case of extra space. Examples using `matplotlib.axes.Axes.set_aspect` ------------------------------------------------ [Bar chart with gradients](https://matplotlib.org/stable/gallery/lines_bars_and_markers/gradient_bar.html#sphx-glr-gallery-lines-bars-and-markers-gradient-bar-py) Bar chart with gradients [Tricontour Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_demo.html#sphx-glr-gallery-images-contours-and-fields-tricontour-demo-py) Tricontour Demo [Tricontour Smooth Delaunay](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_smooth_delaunay.html#sphx-glr-gallery-images-contours-and-fields-tricontour-smooth-delaunay-py) Tricontour Smooth Delaunay [Tricontour Smooth User](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_smooth_user.html#sphx-glr-gallery-images-contours-and-fields-tricontour-smooth-user-py) Tricontour Smooth User [Trigradient Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/trigradient_demo.html#sphx-glr-gallery-images-contours-and-fields-trigradient-demo-py) Trigradient Demo [Tripcolor Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/tripcolor_demo.html#sphx-glr-gallery-images-contours-and-fields-tripcolor-demo-py) Tripcolor Demo [Triplot Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/triplot_demo.html#sphx-glr-gallery-images-contours-and-fields-triplot-demo-py) Triplot Demo [Axes box aspect](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_box_aspect.html#sphx-glr-gallery-subplots-axes-and-figures-axes-box-aspect-py) Axes box aspect [Controlling view limits using margins and sticky\_edges](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_margins.html#sphx-glr-gallery-subplots-axes-and-figures-axes-margins-py) Controlling view limits using margins and sticky\_edges [Placing Colorbars](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/colorbar_placement.html#sphx-glr-gallery-subplots-axes-and-figures-colorbar-placement-py) Placing Colorbars [Multiline](https://matplotlib.org/stable/gallery/text_labels_and_annotations/multiline.html#sphx-glr-gallery-text-labels-and-annotations-multiline-py) Multiline [Mmh Donuts!!!](https://matplotlib.org/stable/gallery/shapes_and_collections/donut.html#sphx-glr-gallery-shapes-and-collections-donut-py) Mmh Donuts!!!
[Inset Locator Demo2](https://matplotlib.org/stable/gallery/axes_grid1/inset_locator_demo2.html#sphx-glr-gallery-axes-grid1-inset-locator-demo2-py) Inset Locator Demo2 [Scatter Histogram (Locatable Axes)](https://matplotlib.org/stable/gallery/axes_grid1/scatter_hist_locatable_axes.html#sphx-glr-gallery-axes-grid1-scatter-hist-locatable-axes-py) Scatter Histogram (Locatable Axes) [Simple Anchored Artists](https://matplotlib.org/stable/gallery/axes_grid1/simple_anchored_artists.html#sphx-glr-gallery-axes-grid1-simple-anchored-artists-py) Simple Anchored Artists [axis\_direction demo](https://matplotlib.org/stable/gallery/axisartist/demo_axis_direction.html#sphx-glr-gallery-axisartist-demo-axis-direction-py) axis\_direction demo [Simple Axis Pad](https://matplotlib.org/stable/gallery/axisartist/simple_axis_pad.html#sphx-glr-gallery-axisartist-simple-axis-pad-py) Simple Axis Pad ![The double pendulum problem](https://matplotlib.org/stable/_images/sphx_glr_double_pendulum_thumb.gif) [The double pendulum problem](https://matplotlib.org/stable/gallery/animation/double_pendulum.html#sphx-glr-gallery-animation-double-pendulum-py) The double pendulum problem [Anchored Artists](https://matplotlib.org/stable/gallery/misc/anchored_artists.html#sphx-glr-gallery-misc-anchored-artists-py) Anchored Artists [Rasterization for vector graphics](https://matplotlib.org/stable/gallery/misc/rasterization_demo.html#sphx-glr-gallery-misc-rasterization-demo-py) Rasterization for vector graphics [3D surface (solid color)](https://matplotlib.org/stable/gallery/mplot3d/surface3d_2.html#sphx-glr-gallery-mplot3d-surface3d-2-py) 3D surface (solid color) [3D voxel plot of the numpy logo](https://matplotlib.org/stable/gallery/mplot3d/voxels_numpy_logo.html#sphx-glr-gallery-mplot3d-voxels-numpy-logo-py) 3D voxel plot of the numpy logo [3D voxel / volumetric plot with rgb colors](https://matplotlib.org/stable/gallery/mplot3d/voxels_rgb.html#sphx-glr-gallery-mplot3d-voxels-rgb-py) 3D voxel / volumetric plot with rgb colors [Loglog Aspect](https://matplotlib.org/stable/gallery/scales/aspect_loglog.html#sphx-glr-gallery-scales-aspect-loglog-py) Loglog Aspect [Annotate Text Arrow](https://matplotlib.org/stable/gallery/userdemo/annotate_text_arrow.html#sphx-glr-gallery-userdemo-annotate-text-arrow-py) Annotate Text Arrow [Arranging multiple Axes in a Figure](https://matplotlib.org/stable/tutorials/intermediate/arranging_axes.html#sphx-glr-tutorials-intermediate-arranging-axes-py) Arranging multiple Axes in a Figure [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#sphx-glr-tutorials-colors-colormapnorms-py) Colormap Normalization matplotlib matplotlib.axis.Axis.get_view_interval matplotlib.axis.Axis.get\_view\_interval ======================================== Axis.get\_view\_interval()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1016-L1018) Return the `(min, max)` view limits of this axis. matplotlib matplotlib.axis.Tick.set_url matplotlib.axis.Tick.set\_url ============================= Tick.set\_url(*url*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L334-L345) Set the url of label1 and label2. 
Parameters: **url**str matplotlib matplotlib.axes.Axes.get_yscale matplotlib.axes.Axes.get\_yscale ================================ Axes.get\_yscale()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L72-L73) Return the yaxis' scale (as a str). matplotlib matplotlib.artist.Artist.get_clip_box matplotlib.artist.Artist.get\_clip\_box ======================================= Artist.get\_clip\_box()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L859-L861) Return the clipbox. matplotlib matplotlib.axis.Axis.get_ticklabel_extents matplotlib.axis.Axis.get\_ticklabel\_extents ============================================ Axis.get\_ticklabel\_extents(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1170-L1183) [*Deprecated*] Get the extents of the tick labels on either side of the axes. #### Notes Deprecated since version 3.6. matplotlib mpl_toolkits.axisartist.floating_axes.floatingaxes_class_factory mpl\_toolkits.axisartist.floating\_axes.floatingaxes\_class\_factory ==================================================================== mpl\_toolkits.axisartist.floating\_axes.floatingaxes\_class\_factory(*axes\_class*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L2278-L2300) matplotlib mpl_toolkits.axes_grid1.axes_divider.make_axes_area_auto_adjustable mpl\_toolkits.axes\_grid1.axes\_divider.make\_axes\_area\_auto\_adjustable ========================================================================== mpl\_toolkits.axes\_grid1.axes\_divider.make\_axes\_area\_auto\_adjustable(*ax*, *use\_axes=None*, *pad=0.1*, *adjust\_dirs=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_divider.py#L702-L718) Add auto-adjustable padding around *ax* to take its decorations (title, labels, ticks, ticklabels) into account during layout, using [`Divider.add_auto_adjustable_area`](mpl_toolkits.axes_grid1.axes_divider.divider#mpl_toolkits.axes_grid1.axes_divider.Divider.add_auto_adjustable_area "mpl_toolkits.axes_grid1.axes_divider.Divider.add_auto_adjustable_area"). By default, padding is determined from the decorations of *ax*. Pass *use\_axes* to consider the decorations of other Axes instead. Examples using `mpl_toolkits.axes_grid1.axes_divider.make_axes_area_auto_adjustable` ------------------------------------------------------------------------------------ [Make room for ylabel using axes\_grid](https://matplotlib.org/stable/gallery/axes_grid1/make_room_for_ylabel_using_axesgrid.html#sphx-glr-gallery-axes-grid1-make-room-for-ylabel-using-axesgrid-py) Make room for ylabel using axes\_grid
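A minimal sketch in the spirit of the linked gallery example; without the `make_axes_area_auto_adjustable` call, the tick label of an axes filling the whole figure would be clipped:

```
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.axes_divider import make_axes_area_auto_adjustable

fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])          # initially fills the entire figure
ax.set_yticks([0.5])
ax.set_yticklabels(["a very long tick label"])
make_axes_area_auto_adjustable(ax)       # shrink the axes so the label fits
plt.show()
```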
matplotlib matplotlib.axes.Axes.margins matplotlib.axes.Axes.margins ============================ Axes.margins(*\*margins*, *x=None*, *y=None*, *tight=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2700-L2773) Set or retrieve autoscaling margins. The padding added to each limit of the Axes is the *margin* times the data interval. All input parameters must be floats within the range [0, 1]. Passing both positional and keyword arguments is invalid and will raise a TypeError. If no arguments (positional or otherwise) are provided, the current margins will remain in place and simply be returned. Specifying any margin changes only the autoscaling; for example, if *xmargin* is not None, then *xmargin* times the X data interval will be added to each end of that interval before it is used in autoscaling. Parameters: **\*margins**float, optional If a single positional argument is provided, it specifies both margins of the x-axis and y-axis limits. If two positional arguments are provided, they will be interpreted as *xmargin*, *ymargin*. If setting the margin on a single axis is desired, use the keyword arguments described below. **x, y**float, optional Specific margin values for the x-axis and y-axis, respectively. These cannot be used with positional arguments, but can be used individually to alter, e.g., only the y-axis. **tight**bool or None, default: True The *tight* parameter is passed to [`autoscale_view`](matplotlib.axes.axes.autoscale_view#matplotlib.axes.Axes.autoscale_view "matplotlib.axes.Axes.autoscale_view"), which is executed after a margin is changed; the default here is *True*, on the assumption that when margins are specified, no additional padding to match tick marks is usually desired. Setting *tight* to *None* preserves the previous setting. Returns: **xmargin, ymargin**float #### Notes If a previously used Axes method such as [`pcolor()`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor") has set [`use_sticky_edges`](matplotlib.axes.axes.use_sticky_edges#matplotlib.axes.Axes.use_sticky_edges "matplotlib.axes.Axes.use_sticky_edges") to [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)"), only the limits not set by the "sticky artists" will be modified. To force all of the margins to be set, set [`use_sticky_edges`](matplotlib.axes.axes.use_sticky_edges#matplotlib.axes.Axes.use_sticky_edges "matplotlib.axes.Axes.use_sticky_edges") to [`False`](https://docs.python.org/3/library/constants.html#False "(in Python v3.10)") before calling [`margins()`](#matplotlib.axes.Axes.margins "matplotlib.axes.Axes.margins").
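A minimal sketch of the keyword form described above; the plotted data is arbitrary, and the final argument-less call returns the current margins:

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
ax.margins(x=0, y=0.1)   # no padding in x, 10% of the data interval in y
print(ax.margins())      # -> (0, 0.1)
plt.show()
```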
Examples using `matplotlib.axes.Axes.margins` --------------------------------------------- [Marker reference](https://matplotlib.org/stable/gallery/lines_bars_and_markers/marker_reference.html#sphx-glr-gallery-lines-bars-and-markers-marker-reference-py) Marker reference [Creating a timeline with lines, dates, and text](https://matplotlib.org/stable/gallery/lines_bars_and_markers/timeline.html#sphx-glr-gallery-lines-bars-and-markers-timeline-py) Creating a timeline with lines, dates, and text [Trigradient Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/trigradient_demo.html#sphx-glr-gallery-images-contours-and-fields-trigradient-demo-py) Trigradient Demo [Controlling view limits using margins and sticky\_edges](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_margins.html#sphx-glr-gallery-subplots-axes-and-figures-axes-margins-py) Controlling view limits using margins and sticky\_edges [Scale invariant angle label](https://matplotlib.org/stable/gallery/text_labels_and_annotations/angle_annotation.html#sphx-glr-gallery-text-labels-and-annotations-angle-annotation-py) Scale invariant angle label [ggplot style sheet](https://matplotlib.org/stable/gallery/style_sheets/ggplot.html#sphx-glr-gallery-style-sheets-ggplot-py) ggplot style sheet [Autoscaling](https://matplotlib.org/stable/tutorials/intermediate/autoscale.html#sphx-glr-tutorials-intermediate-autoscale-py) Autoscaling matplotlib mpl_toolkits.axisartist.angle_helper.LocatorBase mpl\_toolkits.axisartist.angle\_helper.LocatorBase ================================================== *class*mpl\_toolkits.axisartist.angle\_helper.LocatorBase(*nbins*, *include\_last=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/angle_helper.py#L142-L149) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") set\_params(*nbins=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/angle_helper.py#L147-L149) Examples using `mpl_toolkits.axisartist.angle_helper.LocatorBase` ----------------------------------------------------------------- [axis\_direction demo](https://matplotlib.org/stable/gallery/axisartist/demo_axis_direction.html#sphx-glr-gallery-axisartist-demo-axis-direction-py) axis\_direction demo [Curvilinear grid demo](https://matplotlib.org/stable/gallery/axisartist/demo_curvelinear_grid.html#sphx-glr-gallery-axisartist-demo-curvelinear-grid-py) Curvilinear grid demo [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) :mod:`mpl\_toolkits.axisartist.floating\_axes` features [floating\_axis demo](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axis.html#sphx-glr-gallery-axisartist-demo-floating-axis-py) floating\_axis demo [Simple Axis Pad](https://matplotlib.org/stable/gallery/axisartist/simple_axis_pad.html#sphx-glr-gallery-axisartist-simple-axis-pad-py) Simple Axis Pad matplotlib matplotlib.pyplot.contour matplotlib.pyplot.contour ========================= matplotlib.pyplot.contour(*\*args*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2441-L2447) Plot contour lines. 
Call signature: ``` contour([X, Y,] Z, [levels], **kwargs) ``` [`contour`](#matplotlib.pyplot.contour "matplotlib.pyplot.contour") and [`contourf`](matplotlib.pyplot.contourf#matplotlib.pyplot.contourf "matplotlib.pyplot.contourf") draw contour lines and filled contours, respectively. Except as noted, function signatures and return values are the same for both versions. Parameters: **X, Y**array-like, optional The coordinates of the values in *Z*. *X* and *Y* must both be 2D with the same shape as *Z* (e.g. created via [`numpy.meshgrid`](https://numpy.org/doc/stable/reference/generated/numpy.meshgrid.html#numpy.meshgrid "(in NumPy v1.23)")), or they must both be 1-D such that `len(X) == N` is the number of columns in *Z* and `len(Y) == M` is the number of rows in *Z*. *X* and *Y* must both be ordered monotonically. If not given, they are assumed to be integer indices, i.e. `X = range(N)`, `Y = range(M)`. **Z**(M, N) array-like The height values over which the contour is drawn. Color-mapping is controlled by *cmap*, *norm*, *vmin*, and *vmax*. **levels**int or array-like, optional Determines the number and positions of the contour lines / regions. If an int *n*, use [`MaxNLocator`](../ticker_api#matplotlib.ticker.MaxNLocator "matplotlib.ticker.MaxNLocator"), which tries to automatically choose no more than *n+1* "nice" contour levels between *vmin* and *vmax*. If array-like, draw contour lines at the specified levels. The values must be in increasing order. Returns: [`QuadContourSet`](../contour_api#matplotlib.contour.QuadContourSet "matplotlib.contour.QuadContourSet") Other Parameters: **corner\_mask**bool, default: `[rcParams["contour.corner\_mask"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=contour.corner_mask#matplotlibrc-sample)` (default: `True`) Enable/disable corner masking, which only has an effect if *Z* is a masked array. If `False`, any quad touching a masked point is masked out. If `True`, only the triangular corners of quads nearest those points are always masked out, other triangular corners comprising three unmasked points are contoured as usual. **colors**color string or sequence of colors, optional The colors of the levels, i.e. the lines for [`contour`](#matplotlib.pyplot.contour "matplotlib.pyplot.contour") and the areas for [`contourf`](matplotlib.pyplot.contourf#matplotlib.pyplot.contourf "matplotlib.pyplot.contourf"). The sequence is cycled for the levels in ascending order. If the sequence is shorter than the number of levels, it's repeated. As a shortcut, single color strings may be used in place of one-element lists, i.e. `'red'` instead of `['red']` to color all levels with the same color. This shortcut only works for color strings, not for other ways of specifying colors. By default (value *None*), the colormap specified by *cmap* will be used. **alpha**float, default: 1 The alpha blending value, between 0 (transparent) and 1 (opaque). **cmap**str or [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"), default: `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`) The Colormap instance or registered colormap name used to map scalar data to colors. This parameter is ignored if *colors* is set.
**norm**str or [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize"), optional The normalization method used to scale scalar data to the [0, 1] range before mapping to colors using *cmap*. By default, a linear scaling is used, mapping the lowest value to 0 and the highest to 1. If given, this can be one of the following: * An instance of [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or one of its subclasses (see [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html)). * A scale name, i.e. one of "linear", "log", "symlog", "logit", etc. For a list of available scales, call [`matplotlib.scale.get_scale_names()`](../scale_api#matplotlib.scale.get_scale_names "matplotlib.scale.get_scale_names"). In that case, a suitable [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") subclass is dynamically generated and instantiated. This parameter is ignored if *colors* is set. **vmin, vmax**float, optional When using scalar data and no explicit *norm*, *vmin* and *vmax* define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is an error to use *vmin*/*vmax* when a *norm* instance is given (but using a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") *norm* name together with *vmin*/*vmax* is acceptable). If *vmin* or *vmax* are not given, the default color scaling is based on *levels*. This parameter is ignored if *colors* is set. **origin**{*None*, 'upper', 'lower', 'image'}, default: None Determines the orientation and exact position of *Z* by specifying the position of `Z[0, 0]`. This is only relevant, if *X*, *Y* are not given. * *None*: `Z[0, 0]` is at X=0, Y=0 in the lower left corner. * 'lower': `Z[0, 0]` is at X=0.5, Y=0.5 in the lower left corner. * 'upper': `Z[0, 0]` is at X=N+0.5, Y=0.5 in the upper left corner. * 'image': Use the value from `[rcParams["image.origin"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.origin#matplotlibrc-sample)` (default: `'upper'`). **extent**(x0, x1, y0, y1), optional If *origin* is not *None*, then *extent* is interpreted as in [`imshow`](matplotlib.pyplot.imshow#matplotlib.pyplot.imshow "matplotlib.pyplot.imshow"): it gives the outer pixel boundaries. In this case, the position of Z[0, 0] is the center of the pixel, not a corner. If *origin* is *None*, then (*x0*, *y0*) is the position of Z[0, 0], and (*x1*, *y1*) is the position of Z[-1, -1]. This argument is ignored if *X* and *Y* are specified in the call to contour. **locator**ticker.Locator subclass, optional The locator is used to determine the contour levels if they are not given explicitly via *levels*. Defaults to [`MaxNLocator`](../ticker_api#matplotlib.ticker.MaxNLocator "matplotlib.ticker.MaxNLocator"). **extend**{'neither', 'both', 'min', 'max'}, default: 'neither' Determines the `contourf`-coloring of values that are outside the *levels* range. If 'neither', values outside the *levels* range are not colored. If 'min', 'max' or 'both', color the values below, above or below and above the *levels* range. Values below `min(levels)` and above `max(levels)` are mapped to the under/over values of the [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"). 
Note that most colormaps do not have dedicated colors for these by default, so that the over and under values are the edge values of the colormap. You may want to set these values explicitly using [`Colormap.set_under`](matplotlib.colors.colormap#matplotlib.colors.Colormap.set_under "matplotlib.colors.Colormap.set_under") and [`Colormap.set_over`](matplotlib.colors.colormap#matplotlib.colors.Colormap.set_over "matplotlib.colors.Colormap.set_over"). Note An existing [`QuadContourSet`](../contour_api#matplotlib.contour.QuadContourSet "matplotlib.contour.QuadContourSet") does not get notified if properties of its colormap are changed. Therefore, an explicit call `QuadContourSet.changed()` is needed after modifying the colormap. The explicit call can be left out, if a colorbar is assigned to the [`QuadContourSet`](../contour_api#matplotlib.contour.QuadContourSet "matplotlib.contour.QuadContourSet") because it internally calls `QuadContourSet.changed()`. Example: ``` x = np.arange(1, 10) y = x.reshape(-1, 1) h = x * y cs = plt.contourf(h, levels=[10, 30, 50], colors=['#808080', '#A0A0A0', '#C0C0C0'], extend='both') cs.cmap.set_over('red') cs.cmap.set_under('blue') cs.changed() ``` **xunits, yunits**registered units, optional Override axis units by specifying an instance of a [`matplotlib.units.ConversionInterface`](../units_api#matplotlib.units.ConversionInterface "matplotlib.units.ConversionInterface"). **antialiased**bool, optional Enable antialiasing, overriding the defaults. For filled contours, the default is *True*. For line contours, it is taken from `[rcParams["lines.antialiased"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=lines.antialiased#matplotlibrc-sample)` (default: `True`). **nchunk**int >= 0, optional If 0, no subdivision of the domain. Specify a positive integer to divide the domain into subdomains of *nchunk* by *nchunk* quads. Chunking reduces the maximum length of polygons generated by the contouring algorithm which reduces the rendering workload passed on to the backend and also requires slightly less RAM. It can however introduce rendering artifacts at chunk boundaries depending on the backend, the *antialiased* flag and value of *alpha*. **linewidths**float or array-like, default: `[rcParams["contour.linewidth"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=contour.linewidth#matplotlibrc-sample)` (default: `None`) *Only applies to* [`contour`](#matplotlib.pyplot.contour "matplotlib.pyplot.contour"). The line width of the contour lines. If a number, all levels will be plotted with this linewidth. If a sequence, the levels in ascending order will be plotted with the linewidths in the order specified. If None, this falls back to `[rcParams["lines.linewidth"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=lines.linewidth#matplotlibrc-sample)` (default: `1.5`). **linestyles**{*None*, 'solid', 'dashed', 'dashdot', 'dotted'}, optional *Only applies to* [`contour`](#matplotlib.pyplot.contour "matplotlib.pyplot.contour"). If *linestyles* is *None*, the default is 'solid' unless the lines are monochrome. In that case, negative contours will instead take their linestyle from the *negative\_linestyles* argument. *linestyles* can also be an iterable of the above strings specifying a set of linestyles to be used. If this iterable is shorter than the number of contour levels it will be repeated as necessary. 
**negative\_linestyles**{*None*, 'solid', 'dashed', 'dashdot', 'dotted'}, optional *Only applies to* [`contour`](#matplotlib.pyplot.contour "matplotlib.pyplot.contour"). If *linestyles* is *None* and the lines are monochrome, this argument specifies the line style for negative contours. If *negative\_linestyles* is *None*, the default is taken from `[rcParams["contour.negative\_linestyles"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=contour.negative_linestyles#matplotlibrc-sample)`. *negative\_linestyles* can also be an iterable of the above strings specifying a set of linestyles to be used. If this iterable is shorter than the number of contour levels it will be repeated as necessary. **hatches**list[str], optional *Only applies to* [`contourf`](matplotlib.pyplot.contourf#matplotlib.pyplot.contourf "matplotlib.pyplot.contourf"). A list of cross hatch patterns to use on the filled areas. If None, no hatching will be added to the contour. Hatching is supported in the PostScript, PDF, SVG and Agg backends only. **algorithm**{'mpl2005', 'mpl2014', 'serial', 'threaded'}, optional Which contouring algorithm to use to calculate the contour lines and polygons. The algorithms are implemented in [ContourPy](https://github.com/contourpy/contourpy), consult the [ContourPy documentation](https://contourpy.readthedocs.io) for further information. The default is taken from `[rcParams["contour.algorithm"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=contour.algorithm#matplotlibrc-sample)` (default: `'mpl2014'`). **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). #### Notes 1. [`contourf`](matplotlib.pyplot.contourf#matplotlib.pyplot.contourf "matplotlib.pyplot.contourf") differs from the MATLAB version in that it does not draw the polygon edges. To draw edges, add line contours with calls to [`contour`](#matplotlib.pyplot.contour "matplotlib.pyplot.contour"). 2. [`contourf`](matplotlib.pyplot.contourf#matplotlib.pyplot.contourf "matplotlib.pyplot.contourf") fills intervals that are closed at the top; that is, for boundaries *z1* and *z2*, the filled region is: ``` z1 < Z <= z2 ``` except for the lowest interval, which is closed on both sides (i.e. it includes the lowest value). 3. [`contour`](#matplotlib.pyplot.contour "matplotlib.pyplot.contour") and [`contourf`](matplotlib.pyplot.contourf#matplotlib.pyplot.contourf "matplotlib.pyplot.contourf") use a [marching squares](https://en.wikipedia.org/wiki/Marching_squares) algorithm to compute contour locations. More information can be found in [ContourPy documentation](https://contourpy.readthedocs.io). Examples using `matplotlib.pyplot.contour` ------------------------------------------ [Interactive functions](https://matplotlib.org/stable/gallery/event_handling/ginput_manual_clabel_sgskip.html#sphx-glr-gallery-event-handling-ginput-manual-clabel-sgskip-py) Interactive functions matplotlib matplotlib.pyplot.loglog matplotlib.pyplot.loglog ======================== matplotlib.pyplot.loglog(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2636-L2638) Make a plot with log scaling on both the x and y axis. 
Call signatures: ``` loglog([x], y, [fmt], data=None, **kwargs) loglog([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs) ``` This is just a thin wrapper around [`plot`](matplotlib.pyplot.plot#matplotlib.pyplot.plot "matplotlib.pyplot.plot") which additionally changes both the x-axis and the y-axis to log scaling. All of the concepts and parameters of plot can be used here as well. The additional parameters *base*, *subs* and *nonpositive* control the x/y-axis properties. They are just forwarded to [`Axes.set_xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") and [`Axes.set_yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale"). To use different properties on the x-axis and the y-axis, use e.g. `ax.set_xscale("log", base=10); ax.set_yscale("log", base=2)`. Parameters: **base**float, default: 10 Base of the logarithm. **subs**sequence, optional The location of the minor ticks. If *None*, reasonable locations are automatically chosen depending on the number of decades in the plot. See [`Axes.set_xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale")/[`Axes.set_yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") for details. **nonpositive**{'mask', 'clip'}, default: 'mask' Non-positive values can be masked as invalid, or clipped to a very small positive number. **\*\*kwargs** All parameters supported by [`plot`](matplotlib.pyplot.plot#matplotlib.pyplot.plot "matplotlib.pyplot.plot"). Returns: list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") Objects representing the plotted data.
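A minimal sketch; the second call illustrates the *base* keyword being forwarded to the axis scales:

```
import numpy as np
import matplotlib.pyplot as plt

x = np.logspace(0, 3, 50)
plt.loglog(x, x**2)       # base-10 log scaling on both axes
plt.loglog(x, x, base=2)  # same data range, base-2 tick locations
plt.show()
```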
matplotlib matplotlib.axis.Axis.axes matplotlib.axis.Axis.axes ========================= *property*Axis.axes The [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") instance the artist resides in, or *None*. matplotlib mpl_toolkits.axes_grid1.axes_size.MaxHeight mpl\_toolkits.axes\_grid1.axes\_size.MaxHeight ============================================== *class*mpl\_toolkits.axes\_grid1.axes\_size.MaxHeight(*artist\_list*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_size.py#L176-L182) Bases: [`MaxExtent`](mpl_toolkits.axes_grid1.axes_size.maxextent#mpl_toolkits.axes_grid1.axes_size.MaxExtent "mpl_toolkits.axes_grid1.axes_size.MaxExtent") Size whose absolute part is the largest height of the given *artist\_list*. matplotlib matplotlib.axis.Axis.limit_range_for_scale matplotlib.axis.Axis.limit\_range\_for\_scale ============================================= Axis.limit\_range\_for\_scale(*vmin*, *vmax*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L827-L828) matplotlib matplotlib.artist.Artist.set_alpha matplotlib.artist.Artist.set\_alpha =================================== Artist.set\_alpha(*alpha*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L970-L986) Set the alpha value used for blending - not supported on all backends. Parameters: **alpha**scalar or None *alpha* must be within the 0-1 range, inclusive. Examples using `matplotlib.artist.Artist.set_alpha` --------------------------------------------------- [Violin plot customization](https://matplotlib.org/stable/gallery/statistics/customized_violin.html#sphx-glr-gallery-statistics-customized-violin-py) Violin plot customization [Ellipse Demo](https://matplotlib.org/stable/gallery/shapes_and_collections/ellipse_demo.html#sphx-glr-gallery-shapes-and-collections-ellipse-demo-py) Ellipse Demo [Axes Grid2](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_grid2.html#sphx-glr-gallery-axes-grid1-demo-axes-grid2-py) Axes Grid2 [Legend Picking](https://matplotlib.org/stable/gallery/event_handling/legend_picking.html#sphx-glr-gallery-event-handling-legend-picking-py) Legend Picking [violinplot(D)](https://matplotlib.org/stable/plot_types/stats/violin.html#sphx-glr-plot-types-stats-violin-py) violinplot(D) matplotlib matplotlib.patches.bbox_artist matplotlib.patches.bbox\_artist =============================== matplotlib.patches.bbox\_artist(*artist*, *renderer*, *props=None*, *fill=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L2146-L2166) A debug function to draw a rectangle around the bounding box returned by an artist's [`Artist.get_window_extent`](matplotlib.artist.artist.get_window_extent#matplotlib.artist.Artist.get_window_extent "matplotlib.artist.Artist.get_window_extent") to test whether the artist is returning the correct bbox. *props* is a dict of rectangle props with the additional property 'pad' that sets the padding around the bbox in points. matplotlib matplotlib.markers.MarkerStyle matplotlib.markers.MarkerStyle ============================== *class*matplotlib.markers.MarkerStyle(*marker*, *fillstyle=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L156-L943) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") A class representing marker types. Instances are immutable. If you need to change anything, create a new instance. 
Attributes: **markers**list All known markers. **filled\_markers**list All known filled markers. This is a subset of *markers*. **fillstyles**list The supported fillstyles. Parameters: **marker**str, array-like, Path, MarkerStyle, or None * Another instance of *MarkerStyle* copies the details of that `marker`. * *None* means no marker. This is the deprecated default. * For other possible marker values, see the module docstring [`matplotlib.markers`](../markers_api#module-matplotlib.markers "matplotlib.markers"). **fillstyle**str, default: `[rcParams["markers.fillstyle"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=markers.fillstyle#matplotlibrc-sample)` (default: `'full'`) One of 'full', 'left', 'right', 'bottom', 'top', 'none'. **transform**transforms.Transform, default: None Transform that will be combined with the native transform of the marker. **capstyle**CapStyle, default: None Cap style that will override the default cap style of the marker. **joinstyle**JoinStyle, default: None Join style that will override the default join style of the marker. filled\_markers*=('o', 'v', '^', '<', '>', '8', 's', 'p', '\*', 'h', 'H', 'D', 'd', 'P', 'X')* fillstyles*=('full', 'left', 'right', 'bottom', 'top', 'none')* get\_alt\_path()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L388-L395) Return a [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") for the alternate part of the marker. For unfilled markers, this is *None*; for filled markers, this is the area to be drawn with *markerfacecoloralt*. get\_alt\_transform()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L397-L405) Return the transform to be applied to the [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") from [`MarkerStyle.get_alt_path()`](#matplotlib.markers.MarkerStyle.get_alt_path "matplotlib.markers.MarkerStyle.get_alt_path"). get\_capstyle()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L321-L322) get\_fillstyle()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L299-L300) get\_joinstyle()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L318-L319) get\_marker()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L324-L325) get\_path()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L369-L376) Return a [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") for the primary part of the marker. For unfilled markers this is the whole marker, for filled markers, this is the area to be drawn with *markerfacecolor*. get\_snap\_threshold()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L407-L408) get\_transform()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L378-L386) Return the transform to be applied to the [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") from [`MarkerStyle.get_path()`](#matplotlib.markers.MarkerStyle.get_path "matplotlib.markers.MarkerStyle.get_path"). get\_user\_transform()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L410-L413) Return user supplied part of marker transform. 
is\_filled()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L296-L297) markers*={'.': 'point', ',': 'pixel', 'o': 'circle', 'v': 'triangle\_down', '^': 'triangle\_up', '<': 'triangle\_left', '>': 'triangle\_right', '1': 'tri\_down', '2': 'tri\_up', '3': 'tri\_left', '4': 'tri\_right', '8': 'octagon', 's': 'square', 'p': 'pentagon', '\*': 'star', 'h': 'hexagon1', 'H': 'hexagon2', '+': 'plus', 'x': 'x', 'D': 'diamond', 'd': 'thin\_diamond', '|': 'vline', '\_': 'hline', 'P': 'plus\_filled', 'X': 'x\_filled', 0: 'tickleft', 1: 'tickright', 2: 'tickup', 3: 'tickdown', 4: 'caretleft', 5: 'caretright', 6: 'caretup', 7: 'caretdown', 8: 'caretleftbase', 9: 'caretrightbase', 10: 'caretupbase', 11: 'caretdownbase', 'None': 'nothing', 'none': 'nothing', ' ': 'nothing', '': 'nothing'}* rotated(*\**, *deg=None*, *rad=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L431-L458) Return a new version of this marker rotated by the specified angle. Parameters: **deg**float, default: None Rotation angle in degrees. **rad**float, default: None Rotation angle in radians. Note You must specify exactly one of *deg* or *rad*. scaled(*sx*, *sy=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L460-L480) Return a new marker scaled by the specified scale factors. If *sy* is None, the same scale is applied in both the *x*- and *y*-directions. Parameters: **sx**float *X*-direction scaling factor. **sy**float, default: None *Y*-direction scaling factor. transformed(*transform:[Affine2D](../transformations#matplotlib.transforms.Affine2D "matplotlib.transforms.Affine2D")*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/markers.py#L415-L429) Return a new version of this marker with the transform applied. Parameters: **transform**Affine2D, default: None Transform that will be combined with the current user-supplied transform. Examples using `matplotlib.markers.MarkerStyle` ----------------------------------------------- [Marker reference](https://matplotlib.org/stable/gallery/lines_bars_and_markers/marker_reference.html#sphx-glr-gallery-lines-bars-and-markers-marker-reference-py) Marker reference [Mapping marker properties to multivariate data](https://matplotlib.org/stable/gallery/lines_bars_and_markers/multivariate_marker_plot.html#sphx-glr-gallery-lines-bars-and-markers-multivariate-marker-plot-py) Mapping marker properties to multivariate data matplotlib matplotlib.artist.Artist.get_agg_filter matplotlib.artist.Artist.get\_agg\_filter ========================================= Artist.get\_agg\_filter()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L929-L931) Return the filter function to be used for the agg filter. matplotlib matplotlib.pyplot.clim matplotlib.pyplot.clim ====================== matplotlib.pyplot.clim(*vmin=None*, *vmax=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2057-L2075) Set the color limits of the current image. If either *vmin* or *vmax* is None, the respective image min/max will be used for color scaling.
If you want to set the clim of multiple images, use [`set_clim`](../cm_api#matplotlib.cm.ScalarMappable.set_clim "matplotlib.cm.ScalarMappable.set_clim") on every image, for example:
```
for im in gca().get_images():
    im.set_clim(0, 0.5)
```
matplotlib matplotlib.axes.Axes.matshow matplotlib.axes.Axes.matshow ============================ Axes.matshow(*Z*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L7855-L7904) Plot the values of a 2D matrix or array as a color-coded image. The matrix will be shown the way it would be printed, with the first row at the top. Row and column numbering is zero-based. Parameters: **Z**(M, N) array-like The matrix to be displayed. Returns: [`AxesImage`](../image_api#matplotlib.image.AxesImage "matplotlib.image.AxesImage") Other Parameters: **\*\*kwargs**[`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") arguments See also [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") More general function to plot data on a 2D regular raster. #### Notes This is just a convenience function wrapping [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") to set useful defaults for displaying a matrix. In particular: * Set `origin='upper'`. * Set `interpolation='nearest'`. * Set `aspect='equal'`. * Ticks are placed to the left and above. * Ticks are formatted to show integer indices. matplotlib matplotlib.axes.Axes.get_yticklines matplotlib.axes.Axes.get\_yticklines ==================================== Axes.get\_yticklines(*minor=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L72-L73) Return the yaxis' tick lines as a list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D")s. matplotlib mpl_toolkits.axes_grid1.inset_locator.AnchoredLocatorBase mpl\_toolkits.axes\_grid1.inset\_locator.AnchoredLocatorBase ============================================================ *class*mpl\_toolkits.axes\_grid1.inset\_locator.AnchoredLocatorBase(*bbox\_to\_anchor*, *offsetbox*, *loc*, *borderpad=0.5*, *bbox\_transform=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/inset_locator.py#L60-L84) Bases: [`AnchoredOffsetbox`](../offsetbox_api#matplotlib.offsetbox.AnchoredOffsetbox "matplotlib.offsetbox.AnchoredOffsetbox") Parameters: **loc**str The box location. Valid locations are 'upper left', 'upper center', 'upper right', 'center left', 'center', 'center right', 'lower left', 'lower center', 'lower right'. For backward compatibility, numeric values are accepted as well. See the parameter *loc* of [`Legend`](../legend_api#matplotlib.legend.Legend "matplotlib.legend.Legend") for details. **pad**float, default: 0.4 Padding around the child as a fraction of the fontsize. **borderpad**float, default: 0.5 Padding between the offsetbox frame and the *bbox\_to\_anchor*. **child**[`OffsetBox`](../offsetbox_api#matplotlib.offsetbox.OffsetBox "matplotlib.offsetbox.OffsetBox") The box that will be anchored. **prop**[`FontProperties`](../font_manager_api#matplotlib.font_manager.FontProperties "matplotlib.font_manager.FontProperties") This is only used as a reference for paddings. If not given, `[rcParams["legend.fontsize"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=legend.fontsize#matplotlibrc-sample)` (default: `'medium'`) is used.
**frameon**bool Whether to draw a frame around the box. **bbox\_to\_anchor**[`BboxBase`](../transformations#matplotlib.transforms.BboxBase "matplotlib.transforms.BboxBase"), 2-tuple, or 4-tuple of floats Box that is used to position the legend in conjunction with *loc*. **bbox\_transform**None or [`matplotlib.transforms.Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") The transform for the bounding box (*bbox\_to\_anchor*). **\*\*kwargs** All other parameters are passed on to [`OffsetBox`](../offsetbox_api#matplotlib.offsetbox.OffsetBox "matplotlib.offsetbox.OffsetBox"). #### Notes See [`Legend`](../legend_api#matplotlib.legend.Legend "matplotlib.legend.Legend") for a detailed description of the anchoring mechanism. \_\_call\_\_(*ax*, *renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/inset_locator.py#L71-L84) Call self as a function. draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/inset_locator.py#L68-L69) Update the location of children if necessary and draw them to the given *renderer*. set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *bbox\_to\_anchor=<UNSET>*, *child=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *gid=<UNSET>*, *height=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *mouseover=<UNSET>*, *offset=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *width=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`bbox_to_anchor`](../offsetbox_api#matplotlib.offsetbox.AnchoredOffsetbox.set_bbox_to_anchor "matplotlib.offsetbox.AnchoredOffsetbox.set_bbox_to_anchor") | unknown | | [`child`](../offsetbox_api#matplotlib.offsetbox.AnchoredOffsetbox.set_child "matplotlib.offsetbox.AnchoredOffsetbox.set_child") | unknown | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`figure`](../offsetbox_api#matplotlib.offsetbox.OffsetBox.set_figure "matplotlib.offsetbox.OffsetBox.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`height`](../offsetbox_api#matplotlib.offsetbox.OffsetBox.set_height "matplotlib.offsetbox.OffsetBox.set_height") | float | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`offset`](../offsetbox_api#matplotlib.offsetbox.OffsetBox.set_offset "matplotlib.offsetbox.OffsetBox.set_offset") | (float, float) or callable | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform 
"matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`width`](../offsetbox_api#matplotlib.offsetbox.OffsetBox.set_width "matplotlib.offsetbox.OffsetBox.set_width") | float | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float |
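Since `set` is inherited from `Artist`, the batched form shown above works the same way on any artist; a minimal sketch on a plain `Line2D` (the property values are arbitrary):
```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1], [0, 2])

# One call instead of set_alpha(...), set_label(...), set_zorder(...).
line.set(alpha=0.5, label='ramp', zorder=3)
```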
matplotlib matplotlib.axes.Axes.add_table matplotlib.axes.Axes.add\_table =============================== Axes.add\_table(*tab*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2459-L2468) Add a [`Table`](../table_api#matplotlib.table.Table "matplotlib.table.Table") to the Axes; return the table. matplotlib matplotlib.axes.Axes.draw matplotlib.axes.Axes.draw ========================= Axes.draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3041-L3110) Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible ([`Artist.get_visible`](matplotlib.artist.artist.get_visible#matplotlib.artist.Artist.get_visible "matplotlib.artist.Artist.get_visible") returns False). Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. #### Notes This method is overridden in the Artist subclasses. matplotlib matplotlib.axes.Axes.convert_yunits matplotlib.axes.Axes.convert\_yunits ==================================== Axes.convert\_yunits(*y*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L253-L263) Convert *y* using the unit type of the yaxis. If the artist is not contained in an Axes or if the yaxis does not have units, *y* itself is returned. matplotlib matplotlib.axes.Axes.psd matplotlib.axes.Axes.psd ======================== Axes.psd(*x*, *NFFT=None*, *Fs=None*, *Fc=None*, *detrend=None*, *window=None*, *noverlap=None*, *pad\_to=None*, *sides=None*, *scale\_by\_freq=None*, *return\_line=None*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L7056-L7165) Plot the power spectral density. The power spectral density \(P\_{xx}\) is computed by Welch's average periodogram method. The vector *x* is divided into segments of length *NFFT*. Each segment is detrended by function *detrend* and windowed by function *window*. *noverlap* gives the length of the overlap between segments. The \(|\mathrm{fft}(i)|^2\) of each segment \(i\) are averaged to compute \(P\_{xx}\), with a scaling to correct for power loss due to windowing. If len(*x*) < *NFFT*, it will be zero padded to *NFFT*. Parameters: **x**1-D array or sequence Array or sequence containing the data. **Fs**float, default: 2 The sampling frequency (samples per time unit). It is used to calculate the Fourier frequencies, *freqs*, in cycles per time unit. **window**callable or ndarray, default: [`window_hanning`](../mlab_api#matplotlib.mlab.window_hanning "matplotlib.mlab.window_hanning") A function or a vector of length *NFFT*.
To create window vectors see [`window_hanning`](../mlab_api#matplotlib.mlab.window_hanning "matplotlib.mlab.window_hanning"), [`window_none`](../mlab_api#matplotlib.mlab.window_none "matplotlib.mlab.window_none"), [`numpy.blackman`](https://numpy.org/doc/stable/reference/generated/numpy.blackman.html#numpy.blackman "(in NumPy v1.23)"), [`numpy.hamming`](https://numpy.org/doc/stable/reference/generated/numpy.hamming.html#numpy.hamming "(in NumPy v1.23)"), [`numpy.bartlett`](https://numpy.org/doc/stable/reference/generated/numpy.bartlett.html#numpy.bartlett "(in NumPy v1.23)"), [`scipy.signal`](https://docs.scipy.org/doc/scipy/reference/signal.html#module-scipy.signal "(in SciPy v1.9.1)"), [`scipy.signal.get_window`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.get_window.html#scipy.signal.get_window "(in SciPy v1.9.1)"), etc. If a function is passed as the argument, it must take a data segment as an argument and return the windowed version of the segment. **sides**{'default', 'onesided', 'twosided'}, optional Which sides of the spectrum to return. 'default' is one-sided for real data and two-sided for complex data. 'onesided' forces the return of a one-sided spectrum, while 'twosided' forces two-sided. **pad\_to**int, optional The number of points to which the data segment is padded when performing the FFT. This can be different from *NFFT*, which specifies the number of data points used. While not increasing the actual resolution of the spectrum (the minimum distance between resolvable peaks), this can give more points in the plot, allowing for more detail. This corresponds to the *n* parameter in the call to [`fft`](https://numpy.org/doc/stable/reference/generated/numpy.fft.fft.html#numpy.fft.fft "(in NumPy v1.23)"). The default is None, which sets *pad\_to* equal to *NFFT*. **NFFT**int, default: 256 The number of data points used in each block for the FFT. A power of 2 is most efficient. This should *NOT* be used to get zero padding, or the scaling of the result will be incorrect; use *pad\_to* for this instead. **detrend**{'none', 'mean', 'linear'} or callable, default: 'none' The function applied to each segment before fft-ing, designed to remove the mean or linear trend. Unlike in MATLAB, where the *detrend* parameter is a vector, in Matplotlib it is a function. The [`mlab`](../mlab_api#module-matplotlib.mlab "matplotlib.mlab") module defines [`detrend_none`](../mlab_api#matplotlib.mlab.detrend_none "matplotlib.mlab.detrend_none"), [`detrend_mean`](../mlab_api#matplotlib.mlab.detrend_mean "matplotlib.mlab.detrend_mean"), and [`detrend_linear`](../mlab_api#matplotlib.mlab.detrend_linear "matplotlib.mlab.detrend_linear"), but you can use a custom function as well. You can also use a string to choose one of the functions: 'none' calls [`detrend_none`](../mlab_api#matplotlib.mlab.detrend_none "matplotlib.mlab.detrend_none"). 'mean' calls [`detrend_mean`](../mlab_api#matplotlib.mlab.detrend_mean "matplotlib.mlab.detrend_mean"). 'linear' calls [`detrend_linear`](../mlab_api#matplotlib.mlab.detrend_linear "matplotlib.mlab.detrend_linear"). **scale\_by\_freq**bool, default: True Whether the resulting density values should be scaled by the scaling frequency, which gives density in units of 1/Hz. This allows for integration over the returned frequency values. The default is True for MATLAB compatibility. **noverlap**int, default: 0 (no overlap) The number of points of overlap between segments.
**Fc**int, default: 0 The center frequency of *x*, which offsets the x extents of the plot to reflect the frequency range used when a signal is acquired and then filtered and downsampled to baseband. **return\_line**bool, default: False Whether to include the line object plotted in the returned values. Returns: **Pxx**1-D array The values for the power spectrum \(P\_{xx}\) before scaling (real valued). **freqs**1-D array The frequencies corresponding to the elements in *Pxx*. **line**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") The line created by this function. Only returned if *return\_line* is True. Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x* **\*\*kwargs** Keyword arguments control the [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | [`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") 
| [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | [`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor "matplotlib.lines.Line2D.set_gapcolor") | color or None | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | 
[`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`specgram`](matplotlib.axes.axes.specgram#matplotlib.axes.Axes.specgram "matplotlib.axes.Axes.specgram") Differs in the default overlap; in not returning the mean of the segment periodograms; in returning the times of the segments; and in plotting a colormap instead of a line. [`magnitude_spectrum`](matplotlib.axes.axes.magnitude_spectrum#matplotlib.axes.Axes.magnitude_spectrum "matplotlib.axes.Axes.magnitude_spectrum") Plots the magnitude spectrum. [`csd`](matplotlib.axes.axes.csd#matplotlib.axes.Axes.csd "matplotlib.axes.Axes.csd") Plots the spectral density between two signals. #### Notes For plotting, the power is plotted as \(10\log\_{10}(P\_{xx})\) for decibels, though *Pxx* itself is returned. #### References Bendat & Piersol -- Random Data: Analysis and Measurement Procedures, John Wiley & Sons (1986) Examples using `matplotlib.axes.Axes.psd` ----------------------------------------- [Psd Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/psd_demo.html#sphx-glr-gallery-lines-bars-and-markers-psd-demo-py) Psd Demo matplotlib matplotlib.artist.Artist.get_cursor_data matplotlib.artist.Artist.get\_cursor\_data ========================================== Artist.get\_cursor\_data(*event*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L1251-L1280) Return the cursor data for a given event. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. Cursor data can be used by Artists to provide additional context information for a given event. The default implementation just returns *None*. Subclasses can override the method and return arbitrary data. However, when doing so, they must ensure that [`format_cursor_data`](matplotlib.artist.artist.format_cursor_data#matplotlib.artist.Artist.format_cursor_data "matplotlib.artist.Artist.format_cursor_data") can convert the data to a string representation. 
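A hedged sketch of the override pattern just described; the subclass name is hypothetical, and the returned payload only needs to be something `format_cursor_data` can turn into a string:
```
from matplotlib.lines import Line2D

class CursorLine(Line2D):  # hypothetical subclass, for illustration only
    def get_cursor_data(self, event):
        # Report the mouse x-position in data coordinates as the cursor
        # data; returning None means "no data" for this event.
        return event.xdata
```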
The only current use case is displaying the z-value of an [`AxesImage`](../image_api#matplotlib.image.AxesImage "matplotlib.image.AxesImage") in the status bar of a plot window while moving the mouse. Parameters: **event**[`matplotlib.backend_bases.MouseEvent`](../backend_bases_api#matplotlib.backend_bases.MouseEvent "matplotlib.backend_bases.MouseEvent") See also [`format_cursor_data`](matplotlib.artist.artist.format_cursor_data#matplotlib.artist.Artist.format_cursor_data "matplotlib.artist.Artist.format_cursor_data") matplotlib matplotlib.pyplot.connect matplotlib.pyplot.connect ========================= matplotlib.pyplot.connect(*s*, *func*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L867-L869) Bind function *func* to event *s*. Parameters: **s**str One of the following event ids: * 'button\_press\_event' * 'button\_release\_event' * 'draw\_event' * 'key\_press\_event' * 'key\_release\_event' * 'motion\_notify\_event' * 'pick\_event' * 'resize\_event' * 'scroll\_event' * 'figure\_enter\_event' * 'figure\_leave\_event' * 'axes\_enter\_event' * 'axes\_leave\_event' * 'close\_event' **func**callable The callback function to be executed, which must have the signature: ``` def func(event: Event) -> Any ``` For the location events (button and key press/release), if the mouse is over the Axes, the `inaxes` attribute of the event will be set to the [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") the event occurred over, and the `xdata` and `ydata` attributes of the event will be set to the mouse location in data coordinates. See [`KeyEvent`](../backend_bases_api#matplotlib.backend_bases.KeyEvent "matplotlib.backend_bases.KeyEvent") and [`MouseEvent`](../backend_bases_api#matplotlib.backend_bases.MouseEvent "matplotlib.backend_bases.MouseEvent") for more info. Returns: cid A connection id that can be used with [`FigureCanvasBase.mpl_disconnect`](../backend_bases_api#matplotlib.backend_bases.FigureCanvasBase.mpl_disconnect "matplotlib.backend_bases.FigureCanvasBase.mpl_disconnect"). #### Examples
```
def on_press(event):
    print('you pressed', event.button, event.xdata, event.ydata)

cid = canvas.mpl_connect('button_press_event', on_press)
```
Examples using `matplotlib.pyplot.connect` ------------------------------------------ [Mouse move and click events](https://matplotlib.org/stable/gallery/event_handling/coords_demo.html#sphx-glr-gallery-event-handling-coords-demo-py) Mouse move and click events matplotlib matplotlib.pyplot.axis matplotlib.pyplot.axis ====================== matplotlib.pyplot.axis(*\*args*, *emit=True*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2327-L2329) Convenience method to get or set some axis properties. Call signatures:
```
xmin, xmax, ymin, ymax = axis()
xmin, xmax, ymin, ymax = axis([xmin, xmax, ymin, ymax])
xmin, xmax, ymin, ymax = axis(option)
xmin, xmax, ymin, ymax = axis(**kwargs)
```
Parameters: **xmin, xmax, ymin, ymax**float, optional The axis limits to be set. This can also be achieved using ``` ax.set(xlim=(xmin, xmax), ylim=(ymin, ymax)) ``` **option**bool or str If a bool, turns axis lines and labels on or off. If a string, possible values are: | Value | Description | | --- | --- | | 'on' | Turn on axis lines and labels. Same as `True`. | | 'off' | Turn off axis lines and labels. Same as `False`. | | 'equal' | Set equal scaling (i.e., make circles circular) by changing axis limits. 
This is the same as `ax.set_aspect('equal', adjustable='datalim')`. Explicit data limits may not be respected in this case. | | 'scaled' | Set equal scaling (i.e., make circles circular) by changing dimensions of the plot box. This is the same as `ax.set_aspect('equal', adjustable='box', anchor='C')`. Additionally, further autoscaling will be disabled. | | 'tight' | Set limits just large enough to show all data, then disable further autoscaling. | | 'auto' | Automatic scaling (fill plot box with data). | | 'image' | 'scaled' with axis limits equal to data limits. | | 'square' | Square plot; similar to 'scaled', but initially forcing `xmax-xmin == ymax-ymin`. | **emit**bool, default: True Whether observers are notified of the axis limit change. This option is passed on to [`set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") and [`set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim"). Returns: **xmin, xmax, ymin, ymax**float The axis limits. See also [`matplotlib.axes.Axes.set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") [`matplotlib.axes.Axes.set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") Examples using `matplotlib.pyplot.axis` --------------------------------------- [Filled polygon](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill.html#sphx-glr-gallery-lines-bars-and-markers-fill-py) Filled polygon [Auto-wrapping text](https://matplotlib.org/stable/gallery/text_labels_and_annotations/autowrap.html#sphx-glr-gallery-text-labels-and-annotations-autowrap-py) Auto-wrapping text [Reference for Matplotlib artists](https://matplotlib.org/stable/gallery/shapes_and_collections/artist_reference.html#sphx-glr-gallery-shapes-and-collections-artist-reference-py) Reference for Matplotlib artists [Pyplot tutorial](https://matplotlib.org/stable/tutorials/introductory/pyplot.html#sphx-glr-tutorials-introductory-pyplot-py) Pyplot tutorial
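A minimal sketch of the get/set round trip described above (the data and limits are arbitrary):
```
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])
plt.axis([0, 2, 0, 5])                 # set explicit limits
xmin, xmax, ymin, ymax = plt.axis()    # query the current limits
plt.axis('equal')                      # switch to equal scaling
```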
matplotlib matplotlib.axis.Axis.set_data_interval matplotlib.axis.Axis.set\_data\_interval ======================================== Axis.set\_data\_interval(*vmin*, *vmax*, *ignore=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1039-L1051) Set the axis data limits. This method is for internal use. If *ignore* is False (the default), this method will never reduce the preexisting data limits, only expand them if *vmin* or *vmax* are not within them. Moreover, the order of *vmin* and *vmax* does not matter; the orientation of the axis will not change. If *ignore* is True, the data limits will be set exactly to `(vmin, vmax)` in that order. matplotlib matplotlib.patches.Arrow matplotlib.patches.Arrow ======================== *class*matplotlib.patches.Arrow(*x*, *y*, *dx*, *dy*, *\**, *width=1.0*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1260-L1313) Bases: [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") An arrow patch. Draws an arrow from (*x*, *y*) to (*x* + *dx*, *y* + *dy*). The width of the arrow is scaled by *width*. Parameters: **x**float x coordinate of the arrow tail. **y**float y coordinate of the arrow tail. **dx**float Arrow length in the x direction. **dy**float Arrow length in the y direction. **width**float, default: 1 Scale factor for the width of the arrow. With a default value of 1, the tail width is 0.2 and head width is 0.6. **\*\*kwargs** Keyword arguments control the [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor 
"matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`FancyArrow`](matplotlib.patches.fancyarrow#matplotlib.patches.FancyArrow "matplotlib.patches.FancyArrow") Patch that allows independent control of the head and tail properties. 
get\_patch\_transform()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1312-L1313) Return the [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") instance mapping patch coordinates to data coordinates. For example, one may define a patch of a circle which represents a radius of 5 by providing coordinates for a unit circle, and a transform which scales the coordinates (the patch coordinate) by 5. get\_path()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L1309-L1310) Return the path of this patch. set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *antialiased=<UNSET>*, *capstyle=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *color=<UNSET>*, *edgecolor=<UNSET>*, *facecolor=<UNSET>*, *fill=<UNSET>*, *gid=<UNSET>*, *hatch=<UNSET>*, *in\_layout=<UNSET>*, *joinstyle=<UNSET>*, *label=<UNSET>*, *linestyle=<UNSET>*, *linewidth=<UNSET>*, *mouseover=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | 
[`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Examples using `matplotlib.patches.Arrow` ----------------------------------------- [Arrow guide](https://matplotlib.org/stable/gallery/shapes_and_collections/arrow_guide.html#sphx-glr-gallery-shapes-and-collections-arrow-guide-py) Arrow guide [Reference for Matplotlib artists](https://matplotlib.org/stable/gallery/shapes_and_collections/artist_reference.html#sphx-glr-gallery-shapes-and-collections-artist-reference-py) Reference for Matplotlib artists matplotlib matplotlib.axes.Axes.pcolormesh matplotlib.axes.Axes.pcolormesh =============================== Axes.pcolormesh(*\*args*, 
*alpha=None*, *norm=None*, *cmap=None*, *vmin=None*, *vmax=None*, *shading=None*, *antialiased=False*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L5950-L6166) Create a pseudocolor plot with a non-regular rectangular grid. Call signature: ``` pcolormesh([X, Y,] C, **kwargs) ``` *X* and *Y* can be used to specify the corners of the quadrilaterals. Hint [`pcolormesh`](#matplotlib.axes.Axes.pcolormesh "matplotlib.axes.Axes.pcolormesh") is similar to [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor"). It is much faster and preferred in most cases. For a detailed discussion on the differences see [Differences between pcolor() and pcolormesh()](matplotlib.pyplot.pcolormesh#differences-pcolor-pcolormesh). Parameters: **C**2D array-like The color-mapped values. Color-mapping is controlled by *cmap*, *norm*, *vmin*, and *vmax*. **X, Y**array-like, optional The coordinates of the corners of quadrilaterals of a pcolormesh:
```
(X[i+1, j], Y[i+1, j])       (X[i+1, j+1], Y[i+1, j+1])
                      +-----+
                      |     |
                      +-----+
    (X[i, j], Y[i, j])       (X[i, j+1], Y[i, j+1])
```
Note that the column index corresponds to the x-coordinate, and the row index corresponds to y. For details, see the [Notes](matplotlib.pyplot.pcolormesh#axes-pcolormesh-grid-orientation) section below. If `shading='flat'` the dimensions of *X* and *Y* should be one greater than those of *C*, and the quadrilateral is colored according to the value at `C[i, j]`. If *X*, *Y* and *C* have equal dimensions, a warning will be raised and the last row and column of *C* will be ignored. If `shading='nearest'` or `'gouraud'`, the dimensions of *X* and *Y* should be the same as those of *C* (if not, a ValueError will be raised). For `'nearest'` the color `C[i, j]` is centered on `(X[i, j], Y[i, j])`. For `'gouraud'`, a smooth interpolation is carried out between the quadrilateral corners. If *X* and/or *Y* are 1-D arrays or column vectors they will be expanded as needed into the appropriate 2D arrays, making a rectangular grid. **cmap**str or [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"), default: `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`) The Colormap instance or registered colormap name used to map scalar data to colors. **norm**str or [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize"), optional The normalization method used to scale scalar data to the [0, 1] range before mapping to colors using *cmap*. By default, a linear scaling is used, mapping the lowest value to 0 and the highest to 1. If given, this can be one of the following: * An instance of [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or one of its subclasses (see [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html)). * A scale name, i.e. one of "linear", "log", "symlog", "logit", etc. For a list of available scales, call [`matplotlib.scale.get_scale_names()`](../scale_api#matplotlib.scale.get_scale_names "matplotlib.scale.get_scale_names"). In that case, a suitable [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") subclass is dynamically generated and instantiated.
**vmin, vmax**float, optional When using scalar data and no explicit *norm*, *vmin* and *vmax* define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is an error to use *vmin*/*vmax* when a *norm* instance is given (but using a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") *norm* name together with *vmin*/*vmax* is acceptable). **edgecolors**{'none', None, 'face', color, color sequence}, optional The color of the edges. Defaults to 'none'. Possible values: * 'none' or '': No edge. * *None*: `[rcParams["patch.edgecolor"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=patch.edgecolor#matplotlibrc-sample)` (default: `'black'`) will be used. Note that currently `[rcParams["patch.force\_edgecolor"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=patch.force_edgecolor#matplotlibrc-sample)` (default: `False`) has to be True for this to work. * 'face': Use the adjacent face color. * A color or sequence of colors will set the edge color. The singular form *edgecolor* works as an alias. **alpha**float, default: None The alpha blending value, between 0 (transparent) and 1 (opaque). **shading**{'flat', 'nearest', 'gouraud', 'auto'}, optional The fill style for the quadrilateral; defaults to 'flat' or `[rcParams["pcolor.shading"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=pcolor.shading#matplotlibrc-sample)` (default: `'auto'`). Possible values: * 'flat': A solid color is used for each quad. The color of the quad (i, j), (i+1, j), (i, j+1), (i+1, j+1) is given by `C[i, j]`. The dimensions of *X* and *Y* should be one greater than those of *C*; if they are the same as *C*, then a deprecation warning is raised, and the last row and column of *C* are dropped. * 'nearest': Each grid point will have a color centered on it, extending halfway between the adjacent grid centers. The dimensions of *X* and *Y* must be the same as *C*. * 'gouraud': Each quad will be Gouraud shaded: The color of the corners (i', j') are given by `C[i', j']`. The color values of the area in between are interpolated from the corner values. The dimensions of *X* and *Y* must be the same as *C*. When Gouraud shading is used, *edgecolors* is ignored. * 'auto': Choose 'flat' if dimensions of *X* and *Y* are one larger than *C*. Choose 'nearest' if dimensions are the same. See [pcolormesh grids and shading](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_grids.html) for more description. **snap**bool, default: False Whether to snap the mesh to pixel boundaries. **rasterized**bool, optional Rasterize the pcolormesh when drawing vector graphics. This can speed up rendering and produce smaller files for large data sets. See also [Rasterization for vector graphics](https://matplotlib.org/stable/gallery/misc/rasterization_demo.html). Returns: [`matplotlib.collections.QuadMesh`](../collections_api#matplotlib.collections.QuadMesh "matplotlib.collections.QuadMesh") Other Parameters: **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). **\*\*kwargs** Additionally, the following arguments are allowed. 
They are passed along to the [`QuadMesh`](../collections_api#matplotlib.collections.QuadMesh "matplotlib.collections.QuadMesh") constructor: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](../collections_api#matplotlib.collections.Collection.set_alpha "matplotlib.collections.Collection.set_alpha") | array-like or scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](../collections_api#matplotlib.collections.Collection.set_antialiased "matplotlib.collections.Collection.set_antialiased") or aa or antialiaseds | bool or list of bools | | [`array`](../collections_api#matplotlib.collections.QuadMesh.set_array "matplotlib.collections.QuadMesh.set_array") | (M, N) array-like or M\*N array-like | | [`capstyle`](../collections_api#matplotlib.collections.Collection.set_capstyle "matplotlib.collections.Collection.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clim`](../cm_api#matplotlib.cm.ScalarMappable.set_clim "matplotlib.cm.ScalarMappable.set_clim") | (vmin: float, vmax: float) | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`cmap`](../cm_api#matplotlib.cm.ScalarMappable.set_cmap "matplotlib.cm.ScalarMappable.set_cmap") | [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") or str or None | | [`color`](../collections_api#matplotlib.collections.Collection.set_color "matplotlib.collections.Collection.set_color") | color or list of rgba tuples | | [`edgecolor`](../collections_api#matplotlib.collections.Collection.set_edgecolor "matplotlib.collections.Collection.set_edgecolor") or ec or edgecolors | color or list of colors or 'face' | | [`facecolor`](../collections_api#matplotlib.collections.Collection.set_facecolor "matplotlib.collections.Collection.set_facecolor") or facecolors or fc | color or list of colors | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](../collections_api#matplotlib.collections.Collection.set_hatch "matplotlib.collections.Collection.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](../collections_api#matplotlib.collections.Collection.set_joinstyle 
"matplotlib.collections.Collection.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](../collections_api#matplotlib.collections.Collection.set_linestyle "matplotlib.collections.Collection.set_linestyle") or dashes or linestyles or ls | str or tuple or list thereof | | [`linewidth`](../collections_api#matplotlib.collections.Collection.set_linewidth "matplotlib.collections.Collection.set_linewidth") or linewidths or lw | float or list of floats | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`norm`](../cm_api#matplotlib.cm.ScalarMappable.set_norm "matplotlib.cm.ScalarMappable.set_norm") | [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or str or None | | [`offset_transform`](../collections_api#matplotlib.collections.Collection.set_offset_transform "matplotlib.collections.Collection.set_offset_transform") or transOffset | unknown | | [`offsets`](../collections_api#matplotlib.collections.Collection.set_offsets "matplotlib.collections.Collection.set_offsets") | (N, 2) or (2,) array-like | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`pickradius`](../collections_api#matplotlib.collections.Collection.set_pickradius "matplotlib.collections.Collection.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`urls`](../collections_api#matplotlib.collections.Collection.set_urls "matplotlib.collections.Collection.set_urls") | list of str or None | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor") An alternative implementation with slightly different features. 
For a detailed discussion on the differences see [Differences between pcolor() and pcolormesh()](matplotlib.pyplot.pcolormesh#differences-pcolor-pcolormesh). [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") If *X* and *Y* are each equidistant, [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") can be a faster alternative. #### Notes **Masked arrays** *C* may be a masked array. If `C[i, j]` is masked, the corresponding quadrilateral will be transparent. Masking of *X* and *Y* is not supported. Use [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor") if you need this functionality. **Grid orientation** The grid orientation follows the standard matrix convention: An array *C* with shape (nrows, ncolumns) is plotted with the column number as *X* and the row number as *Y*. **Differences between pcolor() and pcolormesh()** Both methods are used to create a pseudocolor plot of a 2D array using quadrilaterals. The main difference lies in the created object and internal data handling: While [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor") returns a [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection"), [`pcolormesh`](#matplotlib.axes.Axes.pcolormesh "matplotlib.axes.Axes.pcolormesh") returns a [`QuadMesh`](../collections_api#matplotlib.collections.QuadMesh "matplotlib.collections.QuadMesh"). The latter is more specialized for the given purpose and thus is faster. It should almost always be preferred. There is also a slight difference in the handling of masked arrays. Both [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor") and [`pcolormesh`](#matplotlib.axes.Axes.pcolormesh "matplotlib.axes.Axes.pcolormesh") support masked arrays for *C*. However, only [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor") supports masked arrays for *X* and *Y*. The reason lies in the internal handling of the masked values. [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor") leaves out the respective polygons from the PolyCollection. [`pcolormesh`](#matplotlib.axes.Axes.pcolormesh "matplotlib.axes.Axes.pcolormesh") sets the facecolor of the masked elements to transparent. You can see the difference when using edgecolors. While all edges are drawn irrespective of masking in a QuadMesh, the edge between two adjacent masked quadrilaterals in [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor") is not drawn as the corresponding polygons do not exist in the PolyCollection. Another difference is the support of Gouraud shading in [`pcolormesh`](#matplotlib.axes.Axes.pcolormesh "matplotlib.axes.Axes.pcolormesh"), which is not available with [`pcolor`](matplotlib.axes.axes.pcolor#matplotlib.axes.Axes.pcolor "matplotlib.axes.Axes.pcolor"). 
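For concreteness, here is a minimal sketch of the shape requirements and masked-array behavior described above; the grid and values are arbitrary illustrations, not taken from the gallery:

```
import numpy as np
import matplotlib.pyplot as plt

# Arbitrary 5x6 array of data values (M=5 rows, N=6 columns).
Z = np.arange(30).reshape(5, 6)

# With shading='flat', X and Y give the quad *edges*, so they must be
# one larger than Z in each dimension: N+1 = 7 and M+1 = 6 values.
x = np.arange(7)
y = np.arange(6)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.pcolormesh(x, y, Z, shading='flat')

# Masked elements get a transparent facecolor; for a QuadMesh the
# edges (if enabled) are still drawn, unlike with pcolor.
Zm = np.ma.masked_where(Z > 25, Z)
ax2.pcolormesh(x, y, Zm, shading='flat', edgecolors='k')
plt.show()
```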
Examples using `matplotlib.axes.Axes.pcolormesh` ------------------------------------------------ [Pcolor Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolor_demo.html#sphx-glr-gallery-images-contours-and-fields-pcolor-demo-py) Pcolor Demo [pcolormesh grids and shading](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_grids.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-grids-py) pcolormesh grids and shading [pcolormesh](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_levels.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-levels-py) pcolormesh [Placing Colorbars](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/colorbar_placement.html#sphx-glr-gallery-subplots-axes-and-figures-colorbar-placement-py) Placing Colorbars [Figure subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html#sphx-glr-gallery-subplots-axes-and-figures-subfigures-py) Figure subfigures [Rasterization for vector graphics](https://matplotlib.org/stable/gallery/misc/rasterization_demo.html#sphx-glr-gallery-misc-rasterization-demo-py) Rasterization for vector graphics [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#sphx-glr-tutorials-colors-colormapnorms-py) Colormap Normalization [pcolormesh(X, Y, Z)](https://matplotlib.org/stable/plot_types/arrays/pcolormesh.html#sphx-glr-plot-types-arrays-pcolormesh-py) pcolormesh(X, Y, Z)
matplotlib matplotlib.axes.Axes.set_anchor matplotlib.axes.Axes.set\_anchor ================================ Axes.set\_anchor(*anchor*, *share=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1829-L1875) Define the anchor location. The actual drawing area (active position) of the Axes may be smaller than the Bbox (original position) when a fixed aspect is required. The anchor defines where the drawing area will be located within the available space. Parameters: **anchor**(float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} Either an (*x*, *y*) pair of relative coordinates (0 is left or bottom, 1 is right or top), 'C' (center), or a cardinal direction ('SW', southwest, is bottom left, etc.). str inputs are shorthands for (*x*, *y*) coordinates, as shown in the following table: | | | | | --- | --- | --- | | 'NW' (0.0, 1.0) | 'N' (0.5, 1.0) | 'NE' (1.0, 1.0) | | 'W' (0.0, 0.5) | 'C' (0.5, 0.5) | 'E' (1.0, 0.5) | | 'SW' (0.0, 0.0) | 'S' (0.5, 0.0) | 'SE' (1.0, 0.0) | **share**bool, default: False If `True`, apply the settings to all shared Axes. See also [`matplotlib.axes.Axes.set_aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") for a description of aspect handling. matplotlib matplotlib.pyplot.figtext matplotlib.pyplot.figtext ========================= matplotlib.pyplot.figtext(*x*, *y*, *s*, *fontdict=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2217-L2219) Add text to figure. Parameters: **x, y**float The position to place the text. By default, this is in figure coordinates, floats in [0, 1]. The coordinate system can be changed using the *transform* keyword. **s**str The text string. **fontdict**dict, optional A dictionary to override the default text properties. If not given, the defaults are determined by `[rcParams["font.\*"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=font.*#matplotlibrc-sample)`. Properties passed as *kwargs* override the corresponding ones given in *fontdict*. Returns: [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") Other Parameters: **\*\*kwargs**[`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") properties Other miscellaneous text parameters.
| Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`backgroundcolor`](../text_api#matplotlib.text.Text.set_backgroundcolor "matplotlib.text.Text.set_backgroundcolor") | color | | [`bbox`](../text_api#matplotlib.text.Text.set_bbox "matplotlib.text.Text.set_bbox") | dict with properties for [`patches.FancyBboxPatch`](matplotlib.patches.fancybboxpatch#matplotlib.patches.FancyBboxPatch "matplotlib.patches.FancyBboxPatch") | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | unknown | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | unknown | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | unknown | | [`color`](../text_api#matplotlib.text.Text.set_color "matplotlib.text.Text.set_color") or c | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fontfamily`](../text_api#matplotlib.text.Text.set_fontfamily "matplotlib.text.Text.set_fontfamily") or family | {FONTNAME, 'serif', 'sans-serif', 'cursive', 'fantasy', 'monospace'} | | [`fontproperties`](../text_api#matplotlib.text.Text.set_fontproperties "matplotlib.text.Text.set_fontproperties") or font or font\_properties | [`font_manager.FontProperties`](../font_manager_api#matplotlib.font_manager.FontProperties "matplotlib.font_manager.FontProperties") or [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") | | [`fontsize`](../text_api#matplotlib.text.Text.set_fontsize "matplotlib.text.Text.set_fontsize") or size | float or {'xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large'} | | [`fontstretch`](../text_api#matplotlib.text.Text.set_fontstretch "matplotlib.text.Text.set_fontstretch") or stretch | {a numeric value in range 0-1000, 'ultra-condensed', 'extra-condensed', 'condensed', 'semi-condensed', 'normal', 'semi-expanded', 'expanded', 'extra-expanded', 'ultra-expanded'} | | [`fontstyle`](../text_api#matplotlib.text.Text.set_fontstyle "matplotlib.text.Text.set_fontstyle") or style | {'normal', 'italic', 'oblique'} | | [`fontvariant`](../text_api#matplotlib.text.Text.set_fontvariant "matplotlib.text.Text.set_fontvariant") or variant | {'normal', 'small-caps'} | | [`fontweight`](../text_api#matplotlib.text.Text.set_fontweight "matplotlib.text.Text.set_fontweight") or weight | {a numeric value in range 0-1000, 'ultralight', 'light', 'normal', 'regular', 'book', 'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy', 'extra bold', 'black'} | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid 
"matplotlib.artist.Artist.set_gid") | str | | [`horizontalalignment`](../text_api#matplotlib.text.Text.set_horizontalalignment "matplotlib.text.Text.set_horizontalalignment") or ha | {'left', 'center', 'right'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linespacing`](../text_api#matplotlib.text.Text.set_linespacing "matplotlib.text.Text.set_linespacing") | float (multiple of font size) | | [`math_fontfamily`](../text_api#matplotlib.text.Text.set_math_fontfamily "matplotlib.text.Text.set_math_fontfamily") | str | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`multialignment`](../text_api#matplotlib.text.Text.set_multialignment "matplotlib.text.Text.set_multialignment") or ma | {'left', 'right', 'center'} | | [`parse_math`](../text_api#matplotlib.text.Text.set_parse_math "matplotlib.text.Text.set_parse_math") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](../text_api#matplotlib.text.Text.set_position "matplotlib.text.Text.set_position") | (float, float) | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`rotation`](../text_api#matplotlib.text.Text.set_rotation "matplotlib.text.Text.set_rotation") | float or {'vertical', 'horizontal'} | | [`rotation_mode`](../text_api#matplotlib.text.Text.set_rotation_mode "matplotlib.text.Text.set_rotation_mode") | {None, 'default', 'anchor'} | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`text`](../text_api#matplotlib.text.Text.set_text "matplotlib.text.Text.set_text") | object | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`transform_rotates_text`](../text_api#matplotlib.text.Text.set_transform_rotates_text "matplotlib.text.Text.set_transform_rotates_text") | bool | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`usetex`](../text_api#matplotlib.text.Text.set_usetex "matplotlib.text.Text.set_usetex") | bool or None | | [`verticalalignment`](../text_api#matplotlib.text.Text.set_verticalalignment "matplotlib.text.Text.set_verticalalignment") or va | {'bottom', 'baseline', 'center', 'center\_baseline', 'top'} | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | 
[`wrap`](../text_api#matplotlib.text.Text.set_wrap "matplotlib.text.Text.set_wrap") | bool | | [`x`](../text_api#matplotlib.text.Text.set_x "matplotlib.text.Text.set_x") | float | | [`y`](../text_api#matplotlib.text.Text.set_y "matplotlib.text.Text.set_y") | float | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`Axes.text`](matplotlib.axes.axes.text#matplotlib.axes.Axes.text "matplotlib.axes.Axes.text") [`pyplot.text`](matplotlib.pyplot.text#matplotlib.pyplot.text "matplotlib.pyplot.text") matplotlib matplotlib.axis.YAxis.tick_right matplotlib.axis.YAxis.tick\_right ================================= YAxis.tick\_right()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L2619-L2630) Move ticks and ticklabels (if present) to the right of the Axes. matplotlib matplotlib.axes.Axes.apply_aspect matplotlib.axes.Axes.apply\_aspect ================================== Axes.apply\_aspect(*position=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1891-L2020) Adjust the Axes for a specified data aspect ratio. Depending on [`get_adjustable`](matplotlib.axes.axes.get_adjustable#matplotlib.axes.Axes.get_adjustable "matplotlib.axes.Axes.get_adjustable") this will modify either the Axes box (position) or the view limits. In the former case, [`get_anchor`](matplotlib.axes.axes.get_anchor#matplotlib.axes.Axes.get_anchor "matplotlib.axes.Axes.get_anchor") will affect the position. Parameters: **position**None or .Bbox If not `None`, this defines the position of the Axes within the figure as a Bbox. See [`get_position`](matplotlib.axes.axes.get_position#matplotlib.axes.Axes.get_position "matplotlib.axes.Axes.get_position") for further details. See also [`matplotlib.axes.Axes.set_aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") For a description of aspect ratio handling. [`matplotlib.axes.Axes.set_adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") Set how the Axes adjusts to achieve the required aspect ratio. [`matplotlib.axes.Axes.set_anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") Set the position in case of extra space. #### Notes This is called automatically when each Axes is drawn. You may need to call it yourself if you need to update the Axes position and/or view limits before the Figure is drawn. matplotlib matplotlib.pyplot.clabel matplotlib.pyplot.clabel ======================== matplotlib.pyplot.clabel(*CS*, *levels=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2422-L2424) Label a contour plot. Adds labels to line contours in the given [`ContourSet`](../contour_api#matplotlib.contour.ContourSet "matplotlib.contour.ContourSet"). Parameters: **CS**[`ContourSet`](../contour_api#matplotlib.contour.ContourSet "matplotlib.contour.ContourSet") instance Line contours to label. **levels**array-like, optional A list of level values that should be labeled. The list must be a subset of `CS.levels`. If not given, all levels are labeled. **\*\*kwargs** All other parameters are documented in [`clabel`](../contour_api#matplotlib.contour.ContourLabeler.clabel "matplotlib.contour.ContourLabeler.clabel").
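As a quick illustration, a minimal sketch of labeling contour lines; the field and styling here are arbitrary choices, not from the official examples:

```
import numpy as np
import matplotlib.pyplot as plt

# An arbitrary smooth 2D field to contour.
x = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, x)
Z = np.exp(-(X**2 + Y**2))

fig, ax = plt.subplots()
CS = ax.contour(X, Y, Z)
# Label every contour line; inline=True breaks the line under each
# label. Passing a subset of CS.levels would label only those levels.
plt.clabel(CS, inline=True, fontsize=8)
plt.show()
```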
Examples using `matplotlib.pyplot.clabel` ----------------------------------------- [Interactive functions](https://matplotlib.org/stable/gallery/event_handling/ginput_manual_clabel_sgskip.html#sphx-glr-gallery-event-handling-ginput-manual-clabel-sgskip-py) Interactive functions matplotlib matplotlib.axis.Axis.set_major_formatter matplotlib.axis.Axis.set\_major\_formatter ========================================== Axis.set\_major\_formatter(*formatter*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1726-L1750) Set the formatter of the major ticker. In addition to a [`Formatter`](../ticker_api#matplotlib.ticker.Formatter "matplotlib.ticker.Formatter") instance, this also accepts a `str` or function. For a `str` a [`StrMethodFormatter`](../ticker_api#matplotlib.ticker.StrMethodFormatter "matplotlib.ticker.StrMethodFormatter") is used. The field used for the value must be labeled `'x'` and the field used for the position must be labeled `'pos'`. See the [`StrMethodFormatter`](../ticker_api#matplotlib.ticker.StrMethodFormatter "matplotlib.ticker.StrMethodFormatter") documentation for more information. For a function, a [`FuncFormatter`](../ticker_api#matplotlib.ticker.FuncFormatter "matplotlib.ticker.FuncFormatter") is used. The function must take two inputs (a tick value `x` and a position `pos`), and return a string containing the corresponding tick label. See the [`FuncFormatter`](../ticker_api#matplotlib.ticker.FuncFormatter "matplotlib.ticker.FuncFormatter") documentation for more information. Parameters: **formatter**[`Formatter`](../ticker_api#matplotlib.ticker.Formatter "matplotlib.ticker.Formatter"), `str`, or function Examples using `matplotlib.axis.Axis.set_major_formatter` --------------------------------------------------------- [Creating a timeline with lines, dates, and text](https://matplotlib.org/stable/gallery/lines_bars_and_markers/timeline.html#sphx-glr-gallery-lines-bars-and-markers-timeline-py) Creating a timeline with lines, dates, and text [Date tick labels](https://matplotlib.org/stable/gallery/text_labels_and_annotations/date.html#sphx-glr-gallery-text-labels-and-annotations-date-py) Date tick labels [Labeling ticks using engineering notation](https://matplotlib.org/stable/gallery/text_labels_and_annotations/engineering_formatter.html#sphx-glr-gallery-text-labels-and-annotations-engineering-formatter-py) Labeling ticks using engineering notation [Dollar Ticks](https://matplotlib.org/stable/gallery/pyplots/dollar_ticks.html#sphx-glr-gallery-pyplots-dollar-ticks-py) Dollar Ticks [3D surface (colormap)](https://matplotlib.org/stable/gallery/mplot3d/surface3d.html#sphx-glr-gallery-mplot3d-surface3d-py) 3D surface (colormap) [SkewT-logP diagram: using transforms and custom projections](https://matplotlib.org/stable/gallery/specialty_plots/skewt.html#sphx-glr-gallery-specialty-plots-skewt-py) SkewT-logP diagram: using transforms and custom projections [Centering labels between ticks](https://matplotlib.org/stable/gallery/ticks/centered_ticklabels.html#sphx-glr-gallery-ticks-centered-ticklabels-py) Centering labels between ticks [Custom Ticker](https://matplotlib.org/stable/gallery/ticks/custom_ticker1.html#sphx-glr-gallery-ticks-custom-ticker1-py) Custom Ticker [Formatting date ticks using ConciseDateFormatter](https://matplotlib.org/stable/gallery/ticks/date_concise_formatter.html#sphx-glr-gallery-ticks-date-concise-formatter-py) Formatting date ticks using ConciseDateFormatter [Date Demo 
Convert](https://matplotlib.org/stable/gallery/ticks/date_demo_convert.html#sphx-glr-gallery-ticks-date-demo-convert-py) Date Demo Convert [Placing date ticks using recurrence rules](https://matplotlib.org/stable/gallery/ticks/date_demo_rrule.html#sphx-glr-gallery-ticks-date-demo-rrule-py) Placing date ticks using recurrence rules [Custom tick formatter for time series](https://matplotlib.org/stable/gallery/ticks/date_index_formatter.html#sphx-glr-gallery-ticks-date-index-formatter-py) Custom tick formatter for time series [Major and minor ticks](https://matplotlib.org/stable/gallery/ticks/major_minor_demo.html#sphx-glr-gallery-ticks-major-minor-demo-py) Major and minor ticks [Setting tick labels from a list of values](https://matplotlib.org/stable/gallery/ticks/tick_labels_from_values.html#sphx-glr-gallery-ticks-tick-labels-from-values-py) Setting tick labels from a list of values [The Lifecycle of a Plot](https://matplotlib.org/stable/tutorials/introductory/lifecycle.html#sphx-glr-tutorials-introductory-lifecycle-py) The Lifecycle of a Plot [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial [Choosing Colormaps in Matplotlib](https://matplotlib.org/stable/tutorials/colors/colormaps.html#sphx-glr-tutorials-colors-colormaps-py) Choosing Colormaps in Matplotlib [Text in Matplotlib Plots](https://matplotlib.org/stable/tutorials/text/text_intro.html#sphx-glr-tutorials-text-text-intro-py) Text in Matplotlib Plots matplotlib mpl_toolkits.axes_grid1.axes_size.Fraction mpl\_toolkits.axes\_grid1.axes\_size.Fraction ============================================= *class*mpl\_toolkits.axes\_grid1.axes\_size.Fraction(*fraction*, *ref\_size*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_size.py#L185-L204) Bases: `_Base` An instance whose size is a *fraction* of the *ref\_size*. ``` >>> s = Fraction(0.3, AxesX(ax)) ``` get\_size(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_size.py#L197-L204) matplotlib matplotlib.axes.Axes.get_legend matplotlib.axes.Axes.get\_legend ================================ Axes.get\_legend()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2150-L2152) Return the [`Legend`](../legend_api#matplotlib.legend.Legend "matplotlib.legend.Legend") instance, or None if no legend is defined. matplotlib matplotlib.artist.Artist.axes matplotlib.artist.Artist.axes ============================= *property*Artist.axes The [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") instance the artist resides in, or *None*. matplotlib matplotlib.axes.Axes.set_adjustable matplotlib.axes.Axes.set\_adjustable ==================================== Axes.set\_adjustable(*adjustable*, *share=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1715-L1761) Set how the Axes adjusts to achieve the required aspect ratio. Parameters: **adjustable**{'box', 'datalim'} If 'box', change the physical dimensions of the Axes. If 'datalim', change the `x` or `y` data limits. **share**bool, default: False If `True`, apply the settings to all shared Axes. 
See also [`matplotlib.axes.Axes.set_aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") For a description of aspect handling. #### Notes Shared Axes (of which twinned Axes are a special case) impose restrictions on how aspect ratios can be imposed. For twinned Axes, use 'datalim'. For Axes that share both x and y, use 'box'. Otherwise, either 'datalim' or 'box' may be used. These limitations are partly a requirement to avoid over-specification, and partly a result of the particular implementation we are currently using, in which the adjustments for aspect ratios are done sequentially and independently on each Axes as it is drawn. Examples using `matplotlib.axes.Axes.set_adjustable` ---------------------------------------------------- [Loglog Aspect](https://matplotlib.org/stable/gallery/scales/aspect_loglog.html#sphx-glr-gallery-scales-aspect-loglog-py) Loglog Aspect
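A minimal sketch of the 'datalim' behavior described above (the plotted data are arbitrary):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 1])   # arbitrary example data

ax.set_aspect(1)           # request a fixed data aspect ratio
# With 'datalim', the Axes box stays put and the x or y data limits
# are widened instead to satisfy the aspect; with 'box' (the default)
# the physical dimensions of the Axes would change instead.
ax.set_adjustable('datalim')
plt.show()
```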
matplotlib mpl_toolkits.mplot3d.axis3d.Axis mpl\_toolkits.mplot3d.axis3d.Axis ================================= *class*mpl\_toolkits.mplot3d.axis3d.Axis(*axes*, *\**, *rotate\_label=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L53-L575) Bases: [`XAxis`](../axis_api#matplotlib.axis.XAxis "matplotlib.axis.XAxis") An Axis class for 3D plots. Parameters: **axes**[`matplotlib.axes.Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") The [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") to which the created Axis belongs. **pickradius**float The acceptance radius for containment tests. See also [`Axis.contains`](matplotlib.axis.axis.contains#matplotlib.axis.Axis.contains "matplotlib.axis.Axis.contains"). *property*adir[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/_api/deprecation.py) *property*d\_interval[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/_api/deprecation.py) draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L339-L518) Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible ([`Artist.get_visible`](matplotlib.artist.artist.get_visible#matplotlib.artist.Artist.get_visible "matplotlib.artist.Artist.get_visible") returns False). Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. #### Notes This method is overridden in the Artist subclasses. draw\_pane(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L322-L337) get\_major\_ticks(*numticks=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L184-L190) Return the list of major [`Tick`](../axis_api#matplotlib.axis.Tick "matplotlib.axis.Tick")s. get\_minor\_ticks(*numticks=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L192-L198) Return the list of minor [`Tick`](../axis_api#matplotlib.axis.Tick "matplotlib.axis.Tick")s. get\_rotate\_label(*text*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L236-L240) get\_tightbbox(*renderer=None*, *\**, *for\_layout\_only=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L522-L566) Return a bounding box that encloses the axis. It only accounts for tick labels, the axis label, and the offsetText. If *for\_layout\_only* is True, then the width of the label (if this is an x-axis) or the height of the label (if this is a y-axis) is collapsed to near zero. This allows tight/constrained\_layout to ignore too-long labels when doing their layout.
init3d()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L180-L182) [*Deprecated*] #### Notes Deprecated since version 3.6: set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *data\_interval=<UNSET>*, *gid=<UNSET>*, *in\_layout=<UNSET>*, *inverted=<UNSET>*, *label=<UNSET>*, *label\_coords=<UNSET>*, *label\_position=<UNSET>*, *label\_text=<UNSET>*, *major\_formatter=<UNSET>*, *major\_locator=<UNSET>*, *minor\_formatter=<UNSET>*, *minor\_locator=<UNSET>*, *mouseover=<UNSET>*, *pane\_color=<UNSET>*, *pane\_pos=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *pickradius=<UNSET>*, *rasterized=<UNSET>*, *remove\_overlapping\_locs=<UNSET>*, *rotate\_label=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *tick\_params=<UNSET>*, *ticklabels=<UNSET>*, *ticks=<UNSET>*, *ticks\_position=<UNSET>*, *transform=<UNSET>*, *units=<UNSET>*, *url=<UNSET>*, *view\_interval=<UNSET>*, *visible=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`data_interval`](matplotlib.axis.axis.set_data_interval#matplotlib.axis.Axis.set_data_interval "matplotlib.axis.Axis.set_data_interval") | unknown | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`inverted`](matplotlib.axis.axis.set_inverted#matplotlib.axis.Axis.set_inverted "matplotlib.axis.Axis.set_inverted") | unknown | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`label_coords`](matplotlib.axis.axis.set_label_coords#matplotlib.axis.Axis.set_label_coords "matplotlib.axis.Axis.set_label_coords") | unknown | | [`label_position`](matplotlib.axis.xaxis.set_label_position#matplotlib.axis.XAxis.set_label_position "matplotlib.axis.XAxis.set_label_position") | 
{'top', 'bottom'} | | [`label_text`](matplotlib.axis.axis.set_label_text#matplotlib.axis.Axis.set_label_text "matplotlib.axis.Axis.set_label_text") | str | | [`major_formatter`](matplotlib.axis.axis.set_major_formatter#matplotlib.axis.Axis.set_major_formatter "matplotlib.axis.Axis.set_major_formatter") | [`Formatter`](../ticker_api#matplotlib.ticker.Formatter "matplotlib.ticker.Formatter"), `str`, or function | | [`major_locator`](matplotlib.axis.axis.set_major_locator#matplotlib.axis.Axis.set_major_locator "matplotlib.axis.Axis.set_major_locator") | [`Locator`](../ticker_api#matplotlib.ticker.Locator "matplotlib.ticker.Locator") | | [`minor_formatter`](matplotlib.axis.axis.set_minor_formatter#matplotlib.axis.Axis.set_minor_formatter "matplotlib.axis.Axis.set_minor_formatter") | [`Formatter`](../ticker_api#matplotlib.ticker.Formatter "matplotlib.ticker.Formatter"), `str`, or function | | [`minor_locator`](matplotlib.axis.axis.set_minor_locator#matplotlib.axis.Axis.set_minor_locator "matplotlib.axis.Axis.set_minor_locator") | [`Locator`](../ticker_api#matplotlib.ticker.Locator "matplotlib.ticker.Locator") | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`pane_color`](#mpl_toolkits.mplot3d.axis3d.Axis.set_pane_color "mpl_toolkits.mplot3d.axis3d.Axis.set_pane_color") | color | | [`pane_pos`](#mpl_toolkits.mplot3d.axis3d.Axis.set_pane_pos "mpl_toolkits.mplot3d.axis3d.Axis.set_pane_pos") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`pickradius`](matplotlib.axis.axis.set_pickradius#matplotlib.axis.Axis.set_pickradius "matplotlib.axis.Axis.set_pickradius") | float | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`remove_overlapping_locs`](matplotlib.axis.axis.set_remove_overlapping_locs#matplotlib.axis.Axis.set_remove_overlapping_locs "matplotlib.axis.Axis.set_remove_overlapping_locs") | unknown | | [`rotate_label`](#mpl_toolkits.mplot3d.axis3d.Axis.set_rotate_label "mpl_toolkits.mplot3d.axis3d.Axis.set_rotate_label") | unknown | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`tick_params`](matplotlib.axis.axis.set_tick_params#matplotlib.axis.Axis.set_tick_params "matplotlib.axis.Axis.set_tick_params") | unknown | | [`ticklabels`](matplotlib.axis.axis.set_ticklabels#matplotlib.axis.Axis.set_ticklabels "matplotlib.axis.Axis.set_ticklabels") | sequence of str or of [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text")s | | [`ticks`](matplotlib.axis.axis.set_ticks#matplotlib.axis.Axis.set_ticks "matplotlib.axis.Axis.set_ticks") | list of floats | | [`ticks_position`](matplotlib.axis.xaxis.set_ticks_position#matplotlib.axis.XAxis.set_ticks_position "matplotlib.axis.XAxis.set_ticks_position") | 
{'top', 'bottom', 'both', 'default', 'none'} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`units`](matplotlib.axis.axis.set_units#matplotlib.axis.Axis.set_units "matplotlib.axis.Axis.set_units") | units tag | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`view_interval`](matplotlib.axis.axis.set_view_interval#matplotlib.axis.Axis.set_view_interval "matplotlib.axis.Axis.set_view_interval") | unknown | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | set\_pane\_color(*color*, *alpha=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L210-L226) Set pane color. Parameters: **color**color Color for axis pane. **alpha**float, optional Alpha value for axis pane. If None, base it on *color*. set\_pane\_pos(*xys*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L200-L202) [*Deprecated*] #### Notes Deprecated since version 3.6: set\_rotate\_label(*val*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/axis3d.py#L228-L234) Whether to rotate the axis label: True, False or None. If set to None, the label will be rotated if it is longer than 4 characters. *property*v\_interval[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/_api/deprecation.py) matplotlib mpl_toolkits.mplot3d.art3d.patch_collection_2d_to_3d mpl\_toolkits.mplot3d.art3d.patch\_collection\_2d\_to\_3d ========================================================= mpl\_toolkits.mplot3d.art3d.patch\_collection\_2d\_to\_3d(*col*, *zs=0*, *zdir='z'*, *depthshade=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L637-L661) Convert a [`PatchCollection`](../collections_api#matplotlib.collections.PatchCollection "matplotlib.collections.PatchCollection") into a [`Patch3DCollection`](mpl_toolkits.mplot3d.art3d.patch3dcollection#mpl_toolkits.mplot3d.art3d.Patch3DCollection "mpl_toolkits.mplot3d.art3d.Patch3DCollection") object (or a [`PathCollection`](../collections_api#matplotlib.collections.PathCollection "matplotlib.collections.PathCollection") into a [`Path3DCollection`](mpl_toolkits.mplot3d.art3d.path3dcollection#mpl_toolkits.mplot3d.art3d.Path3DCollection "mpl_toolkits.mplot3d.art3d.Path3DCollection") object). Parameters: **zs** The location or locations to place the patches in the collection along the *zdir* axis. Default: 0. **zdir** The axis in which to place the patches. Default: "z". **depthshade** Whether to shade the patches to give a sense of depth. Default: *True*. matplotlib matplotlib.artist.Artist.draw matplotlib.artist.Artist.draw ============================= Artist.draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L951-L968) Draw the Artist (and its children) using the given renderer.
This has no effect if the artist is not visible ([`Artist.get_visible`](matplotlib.artist.artist.get_visible#matplotlib.artist.Artist.get_visible "matplotlib.artist.Artist.get_visible") returns False). Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. #### Notes This method is overridden in the Artist subclasses. matplotlib mpl_toolkits.axes_grid1.parasite_axes.parasite_axes_class_factory mpl\_toolkits.axes\_grid1.parasite\_axes.parasite\_axes\_class\_factory ======================================================================= mpl\_toolkits.axes\_grid1.parasite\_axes.parasite\_axes\_class\_factory(*axes\_class*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/parasite_axes.py#L2278-L2300) matplotlib matplotlib.pyplot.hist2d matplotlib.pyplot.hist2d ======================== matplotlib.pyplot.hist2d(*x*, *y*, *bins=10*, *range=None*, *density=False*, *weights=None*, *cmin=None*, *cmax=None*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2581-L2590) Make a 2D histogram plot. Parameters: **x, y**array-like, shape (n, ) Input values **bins**None or int or [int, int] or array-like or [array, array] The bin specification: * If int, the number of bins for the two dimensions (nx=ny=bins). * If `[int, int]`, the number of bins in each dimension (nx, ny = bins). * If array-like, the bin edges for the two dimensions (x\_edges=y\_edges=bins). * If `[array, array]`, the bin edges in each dimension (x\_edges, y\_edges = bins). The default value is 10. **range**array-like shape(2, 2), optional The leftmost and rightmost edges of the bins along each dimension (if not specified explicitly in the bins parameters): `[[xmin, xmax], [ymin, ymax]]`. All values outside of this range will be considered outliers and not tallied in the histogram. **density**bool, default: False Normalize histogram. See the documentation for the *density* parameter of [`hist`](matplotlib.axes.axes.hist#matplotlib.axes.Axes.hist "matplotlib.axes.Axes.hist") for more details. **weights**array-like, shape (n, ), optional An array of values w\_i weighting each sample (x\_i, y\_i). **cmin, cmax**float, default: None All bins with a count less than *cmin* or more than *cmax* will not be displayed (they are set to NaN before being passed to imshow), and the corresponding counts in the returned histogram will also be set to NaN. Returns: **h**2D array The bi-dimensional histogram of samples x and y. Values in x are histogrammed along the first dimension and values in y are histogrammed along the second dimension. **xedges**1D array The bin edges along the x axis. **yedges**1D array The bin edges along the y axis. **image**[`QuadMesh`](../collections_api#matplotlib.collections.QuadMesh "matplotlib.collections.QuadMesh") Other Parameters: **cmap**str or [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"), default: `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`) The Colormap instance or registered colormap name used to map scalar data to colors.
**norm**str or [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize"), optional The normalization method used to scale scalar data to the [0, 1] range before mapping to colors using *cmap*. By default, a linear scaling is used, mapping the lowest value to 0 and the highest to 1. If given, this can be one of the following: * An instance of [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or one of its subclasses (see [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html)). * A scale name, i.e. one of "linear", "log", "symlog", "logit", etc. For a list of available scales, call [`matplotlib.scale.get_scale_names()`](../scale_api#matplotlib.scale.get_scale_names "matplotlib.scale.get_scale_names"). In that case, a suitable [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") subclass is dynamically generated and instantiated. **vmin, vmax**float, optional When using scalar data and no explicit *norm*, *vmin* and *vmax* define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is an error to use *vmin*/*vmax* when a *norm* instance is given (but using a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") *norm* name together with *vmin*/*vmax* is acceptable). **alpha**`0 <= scalar <= 1` or `None`, optional The alpha blending value. **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y*, *weights* **\*\*kwargs** Additional parameters are passed along to the [`pcolormesh`](matplotlib.axes.axes.pcolormesh#matplotlib.axes.Axes.pcolormesh "matplotlib.axes.Axes.pcolormesh") method and [`QuadMesh`](../collections_api#matplotlib.collections.QuadMesh "matplotlib.collections.QuadMesh") constructor. See also [`hist`](matplotlib.pyplot.hist#matplotlib.pyplot.hist "matplotlib.pyplot.hist") 1D histogram plotting [`hexbin`](matplotlib.pyplot.hexbin#matplotlib.pyplot.hexbin "matplotlib.pyplot.hexbin") 2D histogram with hexagonal bins #### Notes * Currently `hist2d` calculates its own axis limits, and any limits previously set are ignored. * Rendering the histogram with a logarithmic color scale is accomplished by passing a [`colors.LogNorm`](matplotlib.colors.lognorm#matplotlib.colors.LogNorm "matplotlib.colors.LogNorm") instance to the *norm* keyword argument. Likewise, power-law normalization (similar in effect to gamma correction) can be accomplished with [`colors.PowerNorm`](matplotlib.colors.powernorm#matplotlib.colors.PowerNorm "matplotlib.colors.PowerNorm"). matplotlib matplotlib.artist.setp matplotlib.artist.setp ====================== matplotlib.artist.setp(*obj*, *\*args*, *file=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L1720-L1801) Set one or more properties on an [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"), or list allowed values. Parameters: **obj**[`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") or list of [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") The artist(s) whose properties are being set or queried. When setting properties, all artists are affected; when querying the allowed values, only the first instance in the sequence is queried. 
For example, two lines can be made thicker and red with a single call: ``` >>> x = arange(0, 1, 0.01) >>> lines = plot(x, sin(2*pi*x), x, sin(4*pi*x)) >>> setp(lines, linewidth=2, color='r') ``` **file**file-like, default: [`sys.stdout`](https://docs.python.org/3/library/sys.html#sys.stdout "(in Python v3.10)") Where [`setp`](#matplotlib.artist.setp "matplotlib.artist.setp") writes its output when asked to list allowed values. ``` >>> with open('output.log', 'w') as file: ... setp(line, file=file) ``` The default, `None`, means [`sys.stdout`](https://docs.python.org/3/library/sys.html#sys.stdout "(in Python v3.10)"). **\*args, \*\*kwargs** The properties to set. The following combinations are supported: * Set the linestyle of a line to be dashed: ``` >>> line, = plot([1, 2, 3]) >>> setp(line, linestyle='--') ``` * Set multiple properties at once: ``` >>> setp(line, linewidth=2, color='r') ``` * List allowed values for a line's linestyle: ``` >>> setp(line, 'linestyle') linestyle: {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} ``` * List all properties that can be set, and their allowed values: ``` >>> setp(line) agg_filter: a filter function, ... [long output listing omitted] ``` [`setp`](#matplotlib.artist.setp "matplotlib.artist.setp") also supports MATLAB style string/value pairs. For example, the following are equivalent: ``` >>> setp(lines, 'linewidth', 2, 'color', 'r') # MATLAB style >>> setp(lines, linewidth=2, color='r') # Python style ``` See also [`getp`](matplotlib.artist.getp#matplotlib.artist.getp "matplotlib.artist.getp")
matplotlib matplotlib.axes.Axes.streamplot matplotlib.axes.Axes.streamplot =============================== Axes.streamplot(*x*, *y*, *u*, *v*, *density=1*, *linewidth=None*, *color=None*, *cmap=None*, *norm=None*, *arrowsize=1*, *arrowstyle='-|>'*, *minlength=0.1*, *transform=None*, *zorder=None*, *start\_points=None*, *maxlength=4.0*, *integration\_direction='both'*, *broken\_streamlines=True*, *\**, *data=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/streamplot.py#L18-L241) Draw streamlines of a vector flow. Parameters: **x, y**1D/2D arrays Evenly spaced strictly increasing arrays to make a grid. If 2D, all rows of *x* must be equal and all columns of *y* must be equal; i.e., they must be as if generated by `np.meshgrid(x_1d, y_1d)`. **u, v**2D arrays *x* and *y*-velocities. The number of rows and columns must match the length of *y* and *x*, respectively. **density**float or (float, float) Controls the closeness of streamlines. When `density = 1`, the domain is divided into a 30x30 grid. *density* linearly scales this grid. Each cell in the grid can have, at most, one traversing streamline. For different densities in each direction, use a tuple (density\_x, density\_y). **linewidth**float or 2D array The width of the stream lines. With a 2D array the line width can be varied across the grid. The array must have the same shape as *u* and *v*. **color**color or 2D array The streamline color. If given an array, its values are converted to colors using *cmap* and *norm*. The array must have the same shape as *u* and *v*. **cmap, norm** Data normalization and colormapping parameters for *color*; only used if *color* is an array of floats. See [`imshow`](matplotlib.axes.axes.imshow#matplotlib.axes.Axes.imshow "matplotlib.axes.Axes.imshow") for a detailed description. **arrowsize**float Scaling factor for the arrow size. **arrowstyle**str Arrow style specification. See [`FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch"). **minlength**float Minimum length of streamline in axes coordinates. **start\_points**Nx2 array Coordinates of starting points for the streamlines in data coordinates (the same coordinates as the *x* and *y* arrays). **zorder**int The zorder of the stream lines and arrows. Artists with lower zorder values are drawn first. **maxlength**float Maximum length of streamline in axes coordinates. **integration\_direction**{'forward', 'backward', 'both'}, default: 'both' Integrate the streamline in forward, backward or both directions. **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y*, *u*, *v*, *start\_points* **broken\_streamlines**boolean, default: True If False, forces streamlines to continue until they leave the plot domain. If True, they may be terminated if they come too close to another streamline. Returns: StreamplotSet Container object with attributes * `lines`: [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") of streamlines * `arrows`: [`PatchCollection`](../collections_api#matplotlib.collections.PatchCollection "matplotlib.collections.PatchCollection") containing [`FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch") objects representing the arrows half-way along stream lines. 
This container will probably change in the future to allow changes to the colormap, alpha, etc. for both lines and arrows, but these changes should be backward compatible. Examples using `matplotlib.axes.Axes.streamplot` ------------------------------------------------ [streamplot(X, Y, U, V)](https://matplotlib.org/stable/plot_types/arrays/streamplot.html#sphx-glr-plot-types-arrays-streamplot-py) streamplot(X, Y, U, V) matplotlib matplotlib.axes.Axes.get_children matplotlib.axes.Axes.get\_children ================================== Axes.get\_children()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L4313-L4323) Return a list of the child [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist")s of this [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"). matplotlib matplotlib.pyplot.csd matplotlib.pyplot.csd ===================== matplotlib.pyplot.csd(*x*, *y*, *NFFT=None*, *Fs=None*, *Fc=None*, *detrend=None*, *window=None*, *noverlap=None*, *pad\_to=None*, *sides=None*, *scale\_by\_freq=None*, *return\_line=None*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2461-L2470) Plot the cross-spectral density. The cross spectral density \(P\_{xy}\) is computed by Welch's average periodogram method. The vectors *x* and *y* are divided into *NFFT* length segments. Each segment is detrended by function *detrend* and windowed by function *window*. *noverlap* gives the length of the overlap between segments. The product of the direct FFTs of *x* and *y* is averaged over each segment to compute \(P\_{xy}\), with a scaling to correct for power loss due to windowing. If len(*x*) < *NFFT* or len(*y*) < *NFFT*, they will be zero padded to *NFFT*. Parameters: **x, y**1-D arrays or sequences Arrays or sequences containing the data. **Fs**float, default: 2 The sampling frequency (samples per time unit). It is used to calculate the Fourier frequencies, *freqs*, in cycles per time unit. **window**callable or ndarray, default: [`window_hanning`](../mlab_api#matplotlib.mlab.window_hanning "matplotlib.mlab.window_hanning") A function or a vector of length *NFFT*. To create window vectors see [`window_hanning`](../mlab_api#matplotlib.mlab.window_hanning "matplotlib.mlab.window_hanning"), [`window_none`](../mlab_api#matplotlib.mlab.window_none "matplotlib.mlab.window_none"), [`numpy.blackman`](https://numpy.org/doc/stable/reference/generated/numpy.blackman.html#numpy.blackman "(in NumPy v1.23)"), [`numpy.hamming`](https://numpy.org/doc/stable/reference/generated/numpy.hamming.html#numpy.hamming "(in NumPy v1.23)"), [`numpy.bartlett`](https://numpy.org/doc/stable/reference/generated/numpy.bartlett.html#numpy.bartlett "(in NumPy v1.23)"), [`scipy.signal`](https://docs.scipy.org/doc/scipy/reference/signal.html#module-scipy.signal "(in SciPy v1.9.1)"), [`scipy.signal.get_window`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.get_window.html#scipy.signal.get_window "(in SciPy v1.9.1)"), etc. If a function is passed as the argument, it must take a data segment as an argument and return the windowed version of the segment. **sides**{'default', 'onesided', 'twosided'}, optional Which sides of the spectrum to return. 'default' is one-sided for real data and two-sided for complex data. 'onesided' forces the return of a one-sided spectrum, while 'twosided' forces two-sided.
**pad\_to**int, optional The number of points to which the data segment is padded when performing the FFT. This can be different from *NFFT*, which specifies the number of data points used. While not increasing the actual resolution of the spectrum (the minimum distance between resolvable peaks), this can give more points in the plot, allowing for more detail. This corresponds to the *n* parameter in the call to [`fft`](https://numpy.org/doc/stable/reference/generated/numpy.fft.fft.html#numpy.fft.fft "(in NumPy v1.23)"). The default is None, which sets *pad\_to* equal to *NFFT*. **NFFT**int, default: 256 The number of data points used in each block for the FFT. A power of 2 is most efficient. This should *NOT* be used to get zero padding, or the scaling of the result will be incorrect; use *pad\_to* for this instead. **detrend**{'none', 'mean', 'linear'} or callable, default: 'none' The function applied to each segment before fft-ing, designed to remove the mean or linear trend. Unlike in MATLAB, where the *detrend* parameter is a vector, in Matplotlib it is a function. The [`mlab`](../mlab_api#module-matplotlib.mlab "matplotlib.mlab") module defines [`detrend_none`](../mlab_api#matplotlib.mlab.detrend_none "matplotlib.mlab.detrend_none"), [`detrend_mean`](../mlab_api#matplotlib.mlab.detrend_mean "matplotlib.mlab.detrend_mean"), and [`detrend_linear`](../mlab_api#matplotlib.mlab.detrend_linear "matplotlib.mlab.detrend_linear"), but you can use a custom function as well. You can also use a string to choose one of the functions: 'none' calls [`detrend_none`](../mlab_api#matplotlib.mlab.detrend_none "matplotlib.mlab.detrend_none"). 'mean' calls [`detrend_mean`](../mlab_api#matplotlib.mlab.detrend_mean "matplotlib.mlab.detrend_mean"). 'linear' calls [`detrend_linear`](../mlab_api#matplotlib.mlab.detrend_linear "matplotlib.mlab.detrend_linear"). **scale\_by\_freq**bool, default: True Whether the resulting density values should be scaled by the scaling frequency, which gives density in units of 1/Hz. This allows for integration over the returned frequency values. The default is True for MATLAB compatibility. **noverlap**int, default: 0 (no overlap) The number of points of overlap between segments. **Fc**int, default: 0 The center frequency of *x*, which offsets the x extents of the plot to reflect the frequency range used when a signal is acquired and then filtered and downsampled to baseband. **return\_line**bool, default: False Whether to include the line object plotted in the returned values. Returns: **Pxy**1-D array The values for the cross spectrum \(P\_{xy}\) before scaling (complex valued). **freqs**1-D array The frequencies corresponding to the elements in *Pxy*. **line**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") The line created by this function. Only returned if *return\_line* is True. 
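Before the remaining keyword arguments, a brief sketch of the basic call; the two test signals and their shared 50 Hz component are assumptions made for illustration:

```
import matplotlib.pyplot as plt
import numpy as np

# Two signals sharing a 50 Hz component plus independent noise (assumed test data).
fs = 500.0
t = np.arange(0, 4, 1 / fs)
common = np.sin(2 * np.pi * 50 * t)
rng = np.random.default_rng(0)
x = common + 0.5 * rng.standard_normal(t.size)
y = common + 0.5 * rng.standard_normal(t.size)

# With return_line left at its default, only Pxy and freqs are returned.
Pxy, freqs = plt.csd(x, y, NFFT=256, Fs=fs, noverlap=128)
plt.show()
```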
Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y* **\*\*kwargs** Keyword arguments control the [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | [`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | [`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor "matplotlib.lines.Line2D.set_gapcolor") | color or None | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") 
| bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | 
[`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`psd`](matplotlib.pyplot.psd#matplotlib.pyplot.psd "matplotlib.pyplot.psd") is equivalent to setting `y = x`. #### Notes For plotting, the power is plotted as \(10 \log\_{10}(P\_{xy})\) for decibels, though \(P\_{xy}\) itself is returned. #### References Bendat & Piersol -- Random Data: Analysis and Measurement Procedures, John Wiley & Sons (1986) matplotlib matplotlib.pyplot.annotate matplotlib.pyplot.annotate ========================== matplotlib.pyplot.annotate(*text*, *xy*, *xytext=None*, *xycoords='data'*, *textcoords=None*, *arrowprops=None*, *annotation\_clip=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2292-L2299) Annotate the point *xy* with text *text*. In the simplest form, the text is placed at *xy*. Optionally, the text can be displayed in another position *xytext*. An arrow pointing from the text to the annotated point *xy* can then be added by defining *arrowprops*. Parameters: **text**str The text of the annotation. **xy**(float, float) The point *(x, y)* to annotate. The coordinate system is determined by *xycoords*. **xytext**(float, float), default: *xy* The position *(x, y)* to place the text at. The coordinate system is determined by *textcoords*. **xycoords**str or [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") or [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") or callable or (float, float), default: 'data' The coordinate system that *xy* is given in. The following types of values are supported: * One of the following strings: | Value | Description | | --- | --- | | 'figure points' | Points from the lower left of the figure | | 'figure pixels' | Pixels from the lower left of the figure | | 'figure fraction' | Fraction of figure from lower left | | 'subfigure points' | Points from the lower left of the subfigure | | 'subfigure pixels' | Pixels from the lower left of the subfigure | | 'subfigure fraction' | Fraction of subfigure from lower left | | 'axes points' | Points from lower left corner of axes | | 'axes pixels' | Pixels from lower left corner of axes | | 'axes fraction' | Fraction of axes from lower left | | 'data' | Use the coordinate system of the object being annotated (default) | | 'polar' | *(theta, r)* if not native 'data' coordinates | Note that 'subfigure pixels' and 'figure pixels' are the same for the parent figure, so users who want code that is usable in a subfigure can use 'subfigure pixels'. * An [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"): *xy* is interpreted as a fraction of the artist's [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox"). E.g. 
*(0, 0)* would be the lower left corner of the bounding box and *(0.5, 1)* would be the center top of the bounding box. * A [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") to transform *xy* to screen coordinates. * A function with one of the following signatures: ``` def transform(renderer) -> Bbox def transform(renderer) -> Transform ``` where *renderer* is a [`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. The result of the function is interpreted like the [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") and [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") cases above. * A tuple *(xcoords, ycoords)* specifying separate coordinate systems for *x* and *y*. *xcoords* and *ycoords* must each be of one of the above described types. See [Advanced Annotations](https://matplotlib.org/stable/tutorials/text/annotations.html#plotting-guide-annotation) for more details. **textcoords**str or [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") or [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") or callable or (float, float), default: value of *xycoords* The coordinate system that *xytext* is given in. All *xycoords* values are valid as well as the following strings: | Value | Description | | --- | --- | | 'offset points' | Offset (in points) from the *xy* value | | 'offset pixels' | Offset (in pixels) from the *xy* value | **arrowprops**dict, optional The properties used to draw a [`FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch") arrow between the positions *xy* and *xytext*. Defaults to None, i.e. no arrow is drawn. For historical reasons there are two different ways to specify arrows, "simple" and "fancy": **Simple arrow:** If *arrowprops* does not contain the key 'arrowstyle' the allowed keys are: | Key | Description | | --- | --- | | width | The width of the arrow in points | | headwidth | The width of the base of the arrow head in points | | headlength | The length of the arrow head in points | | shrink | Fraction of total length to shrink from both ends | | ? | Any key to [`matplotlib.patches.FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch") | The arrow is attached to the edge of the text box, the exact position (corners or centers) depending on where it's pointing to. **Fancy arrow:** This is used if 'arrowstyle' is provided in the *arrowprops*. Valid keys are the following [`FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch") parameters: | Key | Description | | --- | --- | | arrowstyle | the arrow style | | connectionstyle | the connection style | | relpos | see below; default is (0.5, 0.5) | | patchA | default is bounding box of the text | | patchB | default is None | | shrinkA | default is 2 points | | shrinkB | default is 2 points | | mutation\_scale | default is text size (in points) | | mutation\_aspect | default is 1. | | ? | any key for [`matplotlib.patches.PathPatch`](matplotlib.patches.pathpatch#matplotlib.patches.PathPatch "matplotlib.patches.PathPatch") | The exact starting point position of the arrow is defined by *relpos*. 
It's a tuple of relative coordinates of the text box, where (0, 0) is the lower left corner and (1, 1) is the upper right corner. Values <0 and >1 are supported and specify points outside the text box. By default (0.5, 0.5) the starting point is centered in the text box. **annotation\_clip**bool or None, default: None Whether to clip (i.e. not draw) the annotation when the annotation point *xy* is outside the axes area. * If *True*, the annotation will be clipped when *xy* is outside the axes. * If *False*, the annotation will always be drawn. * If *None*, the annotation will be clipped when *xy* is outside the axes and *xycoords* is 'data'. **\*\*kwargs** Additional kwargs are passed to [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"). Returns: [`Annotation`](../text_api#matplotlib.text.Annotation "matplotlib.text.Annotation") See also [Advanced Annotations](https://matplotlib.org/stable/tutorials/text/annotations.html#plotting-guide-annotation) Examples using `matplotlib.pyplot.annotate` ------------------------------------------- [Pyplot tutorial](https://matplotlib.org/stable/tutorials/introductory/pyplot.html#sphx-glr-tutorials-introductory-pyplot-py) Pyplot tutorial [Annotations](https://matplotlib.org/stable/tutorials/text/annotations.html#sphx-glr-tutorials-text-annotations-py) Annotations
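A minimal sketch of the common case, a "simple" arrow from offset text to a data point (the sine curve is assumed test data):

```
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 2 * np.pi, 200)
plt.plot(t, np.sin(t))

# xy is the annotated point (data coordinates by default); xytext places the
# text, and an arrowprops dict without 'arrowstyle' selects a "simple" arrow.
plt.annotate('local max', xy=(np.pi / 2, 1), xytext=(3, 1.4),
             arrowprops=dict(facecolor='black', shrink=0.05))
plt.ylim(-1.5, 1.6)
plt.show()
```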
matplotlib matplotlib.colors.from_levels_and_colors matplotlib.colors.from\_levels\_and\_colors =========================================== matplotlib.colors.from\_levels\_and\_colors(*levels*, *colors*, *extend='neither'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2601-L2655) A helper routine to generate a cmap and a norm instance which behave similarly to contourf's levels and colors arguments. Parameters: **levels**sequence of numbers The quantization levels used to construct the [`BoundaryNorm`](matplotlib.colors.boundarynorm#matplotlib.colors.BoundaryNorm "matplotlib.colors.BoundaryNorm"). Value `v` is quantized to level `i` if `lev[i] <= v < lev[i+1]`. **colors**sequence of colors The fill color to use for each level. If *extend* is "neither" there must be `n_level - 1` colors. For an *extend* of "min" or "max" add one extra color, and for an *extend* of "both" add two colors. **extend**{'neither', 'min', 'max', 'both'}, optional The behaviour when a value falls out of range of the given levels. See [`contourf`](matplotlib.axes.axes.contourf#matplotlib.axes.Axes.contourf "matplotlib.axes.Axes.contourf") for details. Returns: **cmap**[`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") **norm**[`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") matplotlib matplotlib.patches.ConnectionPatch matplotlib.patches.ConnectionPatch ================================== *class*matplotlib.patches.ConnectionPatch(*xyA*, *xyB*, *coordsA*, *coordsB=None*, *\**, *axesA=None*, *axesB=None*, *arrowstyle='-'*, *connectionstyle='arc3'*, *patchA=None*, *patchB=None*, *shrinkA=0.0*, *shrinkB=0.0*, *mutation\_scale=10.0*, *mutation\_aspect=None*, *clip\_on=False*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L4465-L4706) Bases: [`FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch") A patch that connects two points (possibly in different axes). Connect point *xyA* in *coordsA* with point *xyB* in *coordsB*. Valid keys are | Key | Description | | --- | --- | | arrowstyle | the arrow style | | connectionstyle | the connection style | | relpos | default is (0.5, 0.5) | | patchA | default is bounding box of the text | | patchB | default is None | | shrinkA | default is 2 points | | shrinkB | default is 2 points | | mutation\_scale | default is text size (in points) | | mutation\_aspect | default is 1. | | ? | any key for [`matplotlib.patches.PathPatch`](matplotlib.patches.pathpatch#matplotlib.patches.PathPatch "matplotlib.patches.PathPatch") | *coordsA* and *coordsB* are strings that indicate the coordinates of *xyA* and *xyB*. | Property | Description | | --- | --- | | 'figure points' | points from the lower left corner of the figure | | 'figure pixels' | pixels from the lower left corner of the figure | | 'figure fraction' | 0, 0 is lower left of figure and 1, 1 is upper right | | 'subfigure points' | points from the lower left corner of the subfigure | | 'subfigure pixels' | pixels from the lower left corner of the subfigure | | 'subfigure fraction' | fraction of the subfigure, 0, 0 is lower left. 
| | 'axes points' | points from lower left corner of axes | | 'axes pixels' | pixels from lower left corner of axes | | 'axes fraction' | 0, 0 is lower left of axes and 1, 1 is upper right | | 'data' | use the coordinate system of the object being annotated (default) | | 'offset points' | offset (in points) from the *xy* value | | 'polar' | you can specify *theta*, *r* for the annotation, even in cartesian plots. Note that if you are using a polar axes, you do not need to specify polar for the coordinate system since that is the native "data" coordinate system. | Alternatively they can be set to any valid [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform"). Note that 'subfigure pixels' and 'figure pixels' are the same for the parent figure, so users who want code that is usable in a subfigure can use 'subfigure pixels'. Note Using [`ConnectionPatch`](#matplotlib.patches.ConnectionPatch "matplotlib.patches.ConnectionPatch") across two [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") instances is not directly compatible with [constrained layout](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html). Add the artist directly to the [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") instead of adding it to a specific Axes, or exclude it from the layout using `con.set_in_layout(False)`. ``` fig, ax = plt.subplots(1, 2, constrained_layout=True) con = ConnectionPatch(..., axesA=ax[0], axesB=ax[1]) fig.add_artist(con) ``` draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L4701-L4706) Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible ([`Artist.get_visible`](matplotlib.artist.artist.get_visible#matplotlib.artist.Artist.get_visible "matplotlib.artist.Artist.get_visible") returns False). Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. #### Notes This method is overridden in the Artist subclasses. get\_annotation\_clip()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L4650-L4656) Return the clipping behavior. See [`set_annotation_clip`](#matplotlib.patches.ConnectionPatch.set_annotation_clip "matplotlib.patches.ConnectionPatch.set_annotation_clip") for the meaning of the return value. set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *annotation\_clip=<UNSET>*, *antialiased=<UNSET>*, *arrowstyle=<UNSET>*, *capstyle=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *color=<UNSET>*, *connectionstyle=<UNSET>*, *edgecolor=<UNSET>*, *facecolor=<UNSET>*, *fill=<UNSET>*, *gid=<UNSET>*, *hatch=<UNSET>*, *in\_layout=<UNSET>*, *joinstyle=<UNSET>*, *label=<UNSET>*, *linestyle=<UNSET>*, *linewidth=<UNSET>*, *mouseover=<UNSET>*, *mutation\_aspect=<UNSET>*, *mutation\_scale=<UNSET>*, *patchA=<UNSET>*, *patchB=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *positions=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`annotation_clip`](#matplotlib.patches.ConnectionPatch.set_annotation_clip "matplotlib.patches.ConnectionPatch.set_annotation_clip") | bool or None | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`arrowstyle`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch.set_arrowstyle "matplotlib.patches.FancyArrowPatch.set_arrowstyle") | str or [`matplotlib.patches.ArrowStyle`](matplotlib.patches.arrowstyle#matplotlib.patches.ArrowStyle "matplotlib.patches.ArrowStyle") | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`connectionstyle`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch.set_connectionstyle "matplotlib.patches.FancyArrowPatch.set_connectionstyle") | [ 'arc3' | 'angle3' | 'angle' | 'arc' | 'bar' ] | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | 
[`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`mutation_aspect`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch.set_mutation_aspect "matplotlib.patches.FancyArrowPatch.set_mutation_aspect") | float | | [`mutation_scale`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch.set_mutation_scale "matplotlib.patches.FancyArrowPatch.set_mutation_scale") | float | | [`patchA`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch.set_patchA "matplotlib.patches.FancyArrowPatch.set_patchA") | [`patches.Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") | | [`patchB`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch.set_patchB "matplotlib.patches.FancyArrowPatch.set_patchB") | [`patches.Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`positions`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch.set_positions "matplotlib.patches.FancyArrowPatch.set_positions") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | set\_annotation\_clip(*b*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L4634-L4648) Set the annotation's clipping behavior. 
Parameters: **b**bool or None * True: The annotation will be clipped when `self.xy` is outside the axes. * False: The annotation will always be drawn. * None: The annotation will be clipped when `self.xy` is outside the axes and `self.xycoords == "data"`. Examples using `matplotlib.patches.ConnectionPatch` --------------------------------------------------- [Bar of pie](https://matplotlib.org/stable/gallery/pie_and_polar_charts/bar_of_pie.html#sphx-glr-gallery-pie-and-polar-charts-bar-of-pie-py) Bar of pie [Connect Simple01](https://matplotlib.org/stable/gallery/userdemo/connect_simple01.html#sphx-glr-gallery-userdemo-connect-simple01-py) Connect Simple01 [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide matplotlib matplotlib.axes.Axes.add_artist matplotlib.axes.Axes.add\_artist ================================ Axes.add\_artist(*a*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2240-L2259) Add an [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") to the Axes; return the artist. Use [`add_artist`](#matplotlib.axes.Axes.add_artist "matplotlib.axes.Axes.add_artist") only for artists for which there is no dedicated "add" method; and if necessary, use a method such as [`update_datalim`](matplotlib.axes.axes.update_datalim#matplotlib.axes.Axes.update_datalim "matplotlib.axes.Axes.update_datalim") to manually update the dataLim if the artist is to be included in autoscaling. If no `transform` has been specified when creating the artist (e.g. `artist.get_transform() == None`) then the transform is set to `ax.transData`. Examples using `matplotlib.axes.Axes.add_artist` ------------------------------------------------ [Scatter plots with a legend](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_with_legend.html#sphx-glr-gallery-lines-bars-and-markers-scatter-with-legend-py) Scatter plots with a legend [BboxImage Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/demo_bboximage.html#sphx-glr-gallery-images-contours-and-fields-demo-bboximage-py) BboxImage Demo [Bar of pie](https://matplotlib.org/stable/gallery/pie_and_polar_charts/bar_of_pie.html#sphx-glr-gallery-pie-and-polar-charts-bar-of-pie-py) Bar of pie [Annotating Plots](https://matplotlib.org/stable/gallery/text_labels_and_annotations/annotation_demo.html#sphx-glr-gallery-text-labels-and-annotations-annotation-demo-py) Annotating Plots [AnnotationBbox demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/demo_annotation_box.html#sphx-glr-gallery-text-labels-and-annotations-demo-annotation-box-py) AnnotationBbox demo [Using a text as a Path](https://matplotlib.org/stable/gallery/text_labels_and_annotations/demo_text_path.html#sphx-glr-gallery-text-labels-and-annotations-demo-text-path-py) Using a text as a Path [Ellipse Demo](https://matplotlib.org/stable/gallery/shapes_and_collections/ellipse_demo.html#sphx-glr-gallery-shapes-and-collections-ellipse-demo-py) Ellipse Demo [Anchored Direction Arrow](https://matplotlib.org/stable/gallery/axes_grid1/demo_anchored_direction_arrows.html#sphx-glr-gallery-axes-grid1-demo-anchored-direction-arrows-py) Anchored Direction Arrow [Axes Grid2](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_grid2.html#sphx-glr-gallery-axes-grid1-demo-axes-grid2-py) Axes Grid2 [Inset Locator 
Demo2](https://matplotlib.org/stable/gallery/axes_grid1/inset_locator_demo2.html#sphx-glr-gallery-axes-grid1-inset-locator-demo2-py) Inset Locator Demo2 [Simple Anchored Artists](https://matplotlib.org/stable/gallery/axes_grid1/simple_anchored_artists.html#sphx-glr-gallery-axes-grid1-simple-anchored-artists-py) Simple Anchored Artists [Anatomy of a figure](https://matplotlib.org/stable/gallery/showcase/anatomy.html#sphx-glr-gallery-showcase-anatomy-py) Anatomy of a figure [Anchored Artists](https://matplotlib.org/stable/gallery/misc/anchored_artists.html#sphx-glr-gallery-misc-anchored-artists-py) Anchored Artists [Artist tests](https://matplotlib.org/stable/gallery/units/artist_tests.html#sphx-glr-gallery-units-artist-tests-py) Artist tests [Anchored Box04](https://matplotlib.org/stable/gallery/userdemo/anchored_box04.html#sphx-glr-gallery-userdemo-anchored-box04-py) Anchored Box04 [Annotate Explain](https://matplotlib.org/stable/gallery/userdemo/annotate_explain.html#sphx-glr-gallery-userdemo-annotate-explain-py) Annotate Explain [Connect Simple01](https://matplotlib.org/stable/gallery/userdemo/connect_simple01.html#sphx-glr-gallery-userdemo-connect-simple01-py) Connect Simple01 [Simple Annotate01](https://matplotlib.org/stable/gallery/userdemo/simple_annotate01.html#sphx-glr-gallery-userdemo-simple-annotate01-py) Simple Annotate01 [Simple Legend02](https://matplotlib.org/stable/gallery/userdemo/simple_legend02.html#sphx-glr-gallery-userdemo-simple-legend02-py) Simple Legend02 [Legend guide](https://matplotlib.org/stable/tutorials/intermediate/legend_guide.html#sphx-glr-tutorials-intermediate-legend-guide-py) Legend guide [Annotations](https://matplotlib.org/stable/tutorials/text/annotations.html#sphx-glr-tutorials-text-annotations-py) Annotations matplotlib matplotlib.axes.Axes.triplot matplotlib.axes.Axes.triplot ============================ Axes.triplot(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/tri/triplot.py#L7-L86) Draw an unstructured triangular grid as lines and/or markers. Call signatures: ``` triplot(triangulation, ...) triplot(x, y, [triangles], *, [mask=mask], ...) ``` The triangular grid can be specified either by passing a [`Triangulation`](../tri_api#matplotlib.tri.Triangulation "matplotlib.tri.Triangulation") object as the first parameter, or by passing the points *x*, *y* and optionally the *triangles* and a *mask*. If neither *triangulation* nor *triangles* is given, the triangulation is calculated on the fly. Parameters: **triangulation**[`Triangulation`](../tri_api#matplotlib.tri.Triangulation "matplotlib.tri.Triangulation") An already created triangular grid. **x, y, triangles, mask** Parameters defining the triangular grid. See [`Triangulation`](../tri_api#matplotlib.tri.Triangulation "matplotlib.tri.Triangulation"). This is mutually exclusive with specifying *triangulation*. **other\_parameters** All other args and kwargs are forwarded to [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). Returns: **lines**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") The edges of the drawn triangles. **markers**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") The drawn marker nodes. 
Examples using `matplotlib.axes.Axes.triplot` --------------------------------------------- [Tricontour Smooth Delaunay](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_smooth_delaunay.html#sphx-glr-gallery-images-contours-and-fields-tricontour-smooth-delaunay-py) Tricontour Smooth Delaunay [Tricontour Smooth User](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_smooth_user.html#sphx-glr-gallery-images-contours-and-fields-tricontour-smooth-user-py) Tricontour Smooth User [Trigradient Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/trigradient_demo.html#sphx-glr-gallery-images-contours-and-fields-trigradient-demo-py) Trigradient Demo [Triplot Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/triplot_demo.html#sphx-glr-gallery-images-contours-and-fields-triplot-demo-py) Triplot Demo [Trifinder Event Demo](https://matplotlib.org/stable/gallery/event_handling/trifinder_event_demo.html#sphx-glr-gallery-event-handling-trifinder-event-demo-py) Trifinder Event Demo [triplot(x, y)](https://matplotlib.org/stable/plot_types/unstructured/triplot.html#sphx-glr-plot-types-unstructured-triplot-py) triplot(x, y)
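A minimal sketch relying on the on-the-fly Delaunay triangulation (the random points are assumed test data):

```
import matplotlib.pyplot as plt
import numpy as np

# Random planar points; with no triangles given, the triangulation
# is computed on the fly.
rng = np.random.default_rng(0)
x, y = rng.random(30), rng.random(30)

fig, ax = plt.subplots()
# The format string and lw are forwarded to plot().
lines, markers = ax.triplot(x, y, 'o-', lw=1.0)
plt.show()
```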
matplotlib matplotlib.pyplot.install_repl_displayhook matplotlib.pyplot.install\_repl\_displayhook ============================================ matplotlib.pyplot.install\_repl\_displayhook()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L122-L156) Connect to the display hook of the current shell. The display hook gets called when the read-evaluate-print-loop (REPL) of the shell has finished the execution of a command. We use this callback to be able to automatically update a figure in interactive mode. This works both with IPython and with vanilla Python shells. matplotlib mpl_toolkits.axes_grid1.axes_size.Scalable mpl\_toolkits.axes\_grid1.axes\_size.Scalable ============================================= mpl\_toolkits.axes\_grid1.axes\_size.Scalable[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_size.py#L67-L79) alias of [`Scaled`](mpl_toolkits.axes_grid1.axes_size.scaled#mpl_toolkits.axes_grid1.axes_size.Scaled "mpl_toolkits.axes_grid1.axes_size.Scaled") matplotlib matplotlib.axes.Axes.update_datalim matplotlib.axes.Axes.update\_datalim ==================================== Axes.update\_datalim(*xys*, *updatex=True*, *updatey=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2520-L2543) Extend the `dataLim` Bbox to include the given points. If no data is currently set, the Bbox will ignore its limits and set the bound to be the bounds of the xydata (*xys*). Otherwise, it will compute the bounds of the union of its current data and the data in *xys*. Parameters: **xys**2D array-like The points to include in the data limits Bbox. This can be either a list of (x, y) tuples or an Nx2 array. **updatex, updatey**bool, default: True Whether to update the x/y limits. matplotlib mpl_toolkits.mplot3d.proj3d.proj_transform mpl\_toolkits.mplot3d.proj3d.proj\_transform ============================================ mpl\_toolkits.mplot3d.proj3d.proj\_transform(*xs*, *ys*, *zs*, *M*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/proj3d.py#L154-L159) Transform the points by the projection matrix. matplotlib matplotlib.pyplot.xscale matplotlib.pyplot.xscale ======================== matplotlib.pyplot.xscale(*value*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L3021-L3023) Set the xaxis' scale. Parameters: **value**{"linear", "log", "symlog", "logit", ...} or [`ScaleBase`](../scale_api#matplotlib.scale.ScaleBase "matplotlib.scale.ScaleBase") The axis scale type to apply. **\*\*kwargs** Different keyword arguments are accepted, depending on the scale. See the respective class keyword arguments: * [`matplotlib.scale.LinearScale`](../scale_api#matplotlib.scale.LinearScale "matplotlib.scale.LinearScale") * [`matplotlib.scale.LogScale`](../scale_api#matplotlib.scale.LogScale "matplotlib.scale.LogScale") * [`matplotlib.scale.SymmetricalLogScale`](../scale_api#matplotlib.scale.SymmetricalLogScale "matplotlib.scale.SymmetricalLogScale") * [`matplotlib.scale.LogitScale`](../scale_api#matplotlib.scale.LogitScale "matplotlib.scale.LogitScale") * [`matplotlib.scale.FuncScale`](../scale_api#matplotlib.scale.FuncScale "matplotlib.scale.FuncScale") #### Notes By default, Matplotlib supports the above-mentioned scales. Additionally, custom scales may be registered using [`matplotlib.scale.register_scale`](../scale_api#matplotlib.scale.register_scale "matplotlib.scale.register_scale"). 
These scales can then also be used here. matplotlib matplotlib.axes.Axes.scatter matplotlib.axes.Axes.scatter ============================ Axes.scatter(*x*, *y*, *s=None*, *c=None*, *marker=None*, *cmap=None*, *norm=None*, *vmin=None*, *vmax=None*, *alpha=None*, *linewidths=None*, *\**, *edgecolors=None*, *plotnonfinite=False*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L4388-L4642) A scatter plot of *y* vs. *x* with varying marker size and/or color. Parameters: **x, y**float or array-like, shape (n, ) The data positions. **s**float or array-like, shape (n, ), optional The marker size in points\*\*2. Default is `rcParams['lines.markersize'] ** 2`. **c**array-like or list of colors or color, optional The marker colors. Possible values: * A scalar or sequence of n numbers to be mapped to colors using *cmap* and *norm*. * A 2D array in which the rows are RGB or RGBA. * A sequence of colors of length n. * A single color format string. Note that *c* should not be a single numeric RGB or RGBA sequence because that is indistinguishable from an array of values to be colormapped. If you want to specify the same RGB or RGBA value for all points, use a 2D array with a single row. Otherwise, value-matching will have precedence if the size matches that of *x* and *y*. If you wish to specify a single color for all points, prefer the *color* keyword argument. Defaults to [`None`](https://docs.python.org/3/library/constants.html#None "(in Python v3.10)"). In that case the marker color is determined by the value of *color*, *facecolor* or *facecolors*. In case those are not specified or [`None`](https://docs.python.org/3/library/constants.html#None "(in Python v3.10)"), the marker color is determined by the next color of the `Axes`' current "shape and fill" color cycle. This cycle defaults to `[rcParams["axes.prop\_cycle"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.prop_cycle#matplotlibrc-sample)` (default: `cycler('color', ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf'])`). **marker**[`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle"), default: `[rcParams["scatter.marker"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=scatter.marker#matplotlibrc-sample)` (default: `'o'`) The marker style. *marker* can be either an instance of the class or the text shorthand for a particular marker. See [`matplotlib.markers`](../markers_api#module-matplotlib.markers "matplotlib.markers") for more information about marker styles. **cmap**str or [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"), default: `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`) The Colormap instance or registered colormap name used to map scalar data to colors. This parameter is ignored if *c* is RGB(A). **norm**str or [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize"), optional The normalization method used to scale scalar data to the [0, 1] range before mapping to colors using *cmap*. By default, a linear scaling is used, mapping the lowest value to 0 and the highest to 1. 
If given, this can be one of the following: * An instance of [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or one of its subclasses (see [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html)). * A scale name, i.e. one of "linear", "log", "symlog", "logit", etc. For a list of available scales, call [`matplotlib.scale.get_scale_names()`](../scale_api#matplotlib.scale.get_scale_names "matplotlib.scale.get_scale_names"). In that case, a suitable [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") subclass is dynamically generated and instantiated. This parameter is ignored if *c* is RGB(A). **vmin, vmax**float, optional When using scalar data and no explicit *norm*, *vmin* and *vmax* define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is an error to use *vmin*/*vmax* when a *norm* instance is given (but using a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") *norm* name together with *vmin*/*vmax* is acceptable). This parameter is ignored if *c* is RGB(A). **alpha**float, default: None The alpha blending value, between 0 (transparent) and 1 (opaque). **linewidths**float or array-like, default: `[rcParams["lines.linewidth"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=lines.linewidth#matplotlibrc-sample)` (default: `1.5`) The linewidth of the marker edges. Note: The default *edgecolors* is 'face'. You may want to change this as well. **edgecolors**{'face', 'none', *None*} or color or sequence of color, default: `[rcParams["scatter.edgecolors"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=scatter.edgecolors#matplotlibrc-sample)` (default: `'face'`) The edge color of the marker. Possible values: * 'face': The edge color will always be the same as the face color. * 'none': No patch boundary will be drawn. * A color or sequence of colors. For non-filled markers, *edgecolors* is ignored. Instead, the color is determined like with 'face', i.e. from *c*, *colors*, or *facecolors*. **plotnonfinite**bool, default: False Whether to plot points with nonfinite *c* (i.e. `inf`, `-inf` or `nan`). If `True` the points are drawn with the *bad* colormap color (see [`Colormap.set_bad`](matplotlib.colors.colormap#matplotlib.colors.Colormap.set_bad "matplotlib.colors.Colormap.set_bad")). Returns: [`PathCollection`](../collections_api#matplotlib.collections.PathCollection "matplotlib.collections.PathCollection") Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y*, *s*, *linewidths*, *edgecolors*, *c*, *facecolor*, *facecolors*, *color* **\*\*kwargs**[`Collection`](../collections_api#matplotlib.collections.Collection "matplotlib.collections.Collection") properties See also [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot") To plot scatter plots when markers are identical in size and color. #### Notes * The [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot") function will be faster for scatterplots where markers don't vary in size or color. * Any or all of *x*, *y*, *s*, and *c* may be masked arrays, in which case all masks will be combined and only unmasked points will be plotted. 
* Fundamentally, scatter works with 1D arrays; *x*, *y*, *s*, and *c* may be input as N-D arrays, but within scatter they will be flattened. The exception is *c*, which will be flattened only if its size matches the size of *x* and *y*. Examples using `matplotlib.axes.Axes.scatter` --------------------------------------------- [Scatter plots with custom symbols](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_custom_symbol.html#sphx-glr-gallery-lines-bars-and-markers-scatter-custom-symbol-py) Scatter plots with custom symbols [Scatter Demo2](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_demo2.html#sphx-glr-gallery-lines-bars-and-markers-scatter-demo2-py) Scatter Demo2 [Scatter plot with histograms](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_hist.html#sphx-glr-gallery-lines-bars-and-markers-scatter-hist-py) Scatter plot with histograms [Scatter plots with a legend](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_with_legend.html#sphx-glr-gallery-lines-bars-and-markers-scatter-with-legend-py) Scatter plots with a legend [Advanced quiver and quiverkey functions](https://matplotlib.org/stable/gallery/images_contours_and_fields/quiver_demo.html#sphx-glr-gallery-images-contours-and-fields-quiver-demo-py) Advanced quiver and quiverkey functions [Axes box aspect](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_box_aspect.html#sphx-glr-gallery-subplots-axes-and-figures-axes-box-aspect-py) Axes box aspect [Axis Label Position](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axis_labels_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axis-labels-demo-py) Axis Label Position [Plot a confidence ellipse of a two-dimensional dataset](https://matplotlib.org/stable/gallery/statistics/confidence_ellipse.html#sphx-glr-gallery-statistics-confidence-ellipse-py) Plot a confidence ellipse of a two-dimensional dataset [Violin plot customization](https://matplotlib.org/stable/gallery/statistics/customized_violin.html#sphx-glr-gallery-statistics-customized-violin-py) Violin plot customization [Scatter plot on polar axis](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_scatter.html#sphx-glr-gallery-pie-and-polar-charts-polar-scatter-py) Scatter plot on polar axis [Legend Demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/legend_demo.html#sphx-glr-gallery-text-labels-and-annotations-legend-demo-py) Legend Demo [Scatter Histogram (Locatable Axes)](https://matplotlib.org/stable/gallery/axes_grid1/scatter_hist_locatable_axes.html#sphx-glr-gallery-axes-grid1-scatter-hist-locatable-axes-py) Scatter Histogram (Locatable Axes) [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) :mod:`mpl\_toolkits.axisartist.floating\_axes` features ![Rain simulation](https://matplotlib.org/stable/_images/sphx_glr_rain_thumb.gif) [Rain simulation](https://matplotlib.org/stable/gallery/animation/rain.html#sphx-glr-gallery-animation-rain-py) Rain simulation [Pick Event Demo](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo.html#sphx-glr-gallery-event-handling-pick-event-demo-py) Pick Event Demo [Zoom Window](https://matplotlib.org/stable/gallery/event_handling/zoom_window.html#sphx-glr-gallery-event-handling-zoom-window-py) Zoom Window [Plotting with 
keywords](https://matplotlib.org/stable/gallery/misc/keyword_plotting.html#sphx-glr-gallery-misc-keyword-plotting-py) Plotting with keywords [Zorder Demo](https://matplotlib.org/stable/gallery/misc/zorder_demo.html#sphx-glr-gallery-misc-zorder-demo-py) Zorder Demo [Plot 2D data on 3D plot](https://matplotlib.org/stable/gallery/mplot3d/2dcollections3d.html#sphx-glr-gallery-mplot3d-2dcollections3d-py) Plot 2D data on 3D plot [3D scatterplot](https://matplotlib.org/stable/gallery/mplot3d/scatter3d.html#sphx-glr-gallery-mplot3d-scatter3d-py) 3D scatterplot [Asinh Demo](https://matplotlib.org/stable/gallery/scales/asinh_demo.html#sphx-glr-gallery-scales-asinh-demo-py) Asinh Demo [Automatically setting tick positions](https://matplotlib.org/stable/gallery/ticks/auto_ticks.html#sphx-glr-gallery-ticks-auto-ticks-py) Automatically setting tick positions [Unit handling](https://matplotlib.org/stable/gallery/units/units_scatter.html#sphx-glr-gallery-units-units-scatter-py) Unit handling [Annotate Text Arrow](https://matplotlib.org/stable/gallery/userdemo/annotate_text_arrow.html#sphx-glr-gallery-userdemo-annotate-text-arrow-py) Annotate Text Arrow [Select indices from a collection using polygon selector](https://matplotlib.org/stable/gallery/widgets/polygon_selector_demo.html#sphx-glr-gallery-widgets-polygon-selector-demo-py) Select indices from a collection using polygon selector [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Choosing Colormaps in Matplotlib](https://matplotlib.org/stable/tutorials/colors/colormaps.html#sphx-glr-tutorials-colors-colormaps-py) Choosing Colormaps in Matplotlib [scatter(x, y)](https://matplotlib.org/stable/plot_types/basic/scatter_plot.html#sphx-glr-plot-types-basic-scatter-plot-py) scatter(x, y) matplotlib mpl_toolkits.axes_grid1.inset_locator.BboxPatch mpl\_toolkits.axes\_grid1.inset\_locator.BboxPatch ================================================== *class*mpl\_toolkits.axes\_grid1.inset\_locator.BboxPatch(*bbox*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/inset_locator.py#L137-L163) Bases: [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") Patch showing the shape bounded by a Bbox. Parameters: **bbox**[`matplotlib.transforms.Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") Bbox to use for the extents of this patch. **\*\*kwargs** Patch properties. 
Valid arguments include: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | get\_path()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/inset_locator.py#L160-L163) Return the path of this patch. set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *antialiased=<UNSET>*, *capstyle=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *color=<UNSET>*, *edgecolor=<UNSET>*, *facecolor=<UNSET>*, *fill=<UNSET>*, *gid=<UNSET>*, *hatch=<UNSET>*, *in\_layout=<UNSET>*, *joinstyle=<UNSET>*, *label=<UNSET>*, *linestyle=<UNSET>*, *linewidth=<UNSET>*, *mouseover=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Examples using `mpl_toolkits.axes_grid1.inset_locator.BboxPatch` ---------------------------------------------------------------- [Axes Zoom Effect](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_zoom_effect.html#sphx-glr-gallery-subplots-axes-and-figures-axes-zoom-effect-py) Axes Zoom Effect
matplotlib matplotlib.axes.Axes.hist matplotlib.axes.Axes.hist ========================= Axes.hist(*x*, *bins=None*, *range=None*, *density=False*, *weights=None*, *cumulative=False*, *bottom=None*, *histtype='bar'*, *align='mid'*, *orientation='vertical'*, *rwidth=None*, *log=False*, *color=None*, *label=None*, *stacked=False*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L6408-L6870) Compute and plot a histogram. This method uses [`numpy.histogram`](https://numpy.org/doc/stable/reference/generated/numpy.histogram.html#numpy.histogram "(in NumPy v1.23)") to bin the data in *x* and count the number of values in each bin, then draws the distribution either as a [`BarContainer`](../container_api#matplotlib.container.BarContainer "matplotlib.container.BarContainer") or [`Polygon`](matplotlib.patches.polygon#matplotlib.patches.Polygon "matplotlib.patches.Polygon"). The *bins*, *range*, *density*, and *weights* parameters are forwarded to [`numpy.histogram`](https://numpy.org/doc/stable/reference/generated/numpy.histogram.html#numpy.histogram "(in NumPy v1.23)"). If the data has already been binned and counted, use [`bar`](matplotlib.axes.axes.bar#matplotlib.axes.Axes.bar "matplotlib.axes.Axes.bar") or [`stairs`](matplotlib.axes.axes.stairs#matplotlib.axes.Axes.stairs "matplotlib.axes.Axes.stairs") to plot the distribution (note that `stairs` takes the counts first, then the bin edges): ``` counts, bins = np.histogram(x) plt.stairs(counts, bins) ``` Alternatively, plot pre-computed bins and counts using `hist()` by treating each bin as a single point with a weight equal to its count: ``` plt.hist(bins[:-1], bins, weights=counts) ``` The data input *x* can be a single array, a list of datasets of potentially different lengths ([*x0*, *x1*, ...]), or a 2D ndarray in which each column is a dataset. Note that the ndarray form is transposed relative to the list form. If the input is an array, then the return value is a tuple (*n*, *bins*, *patches*); if the input is a sequence of arrays, then the return value is a tuple ([*n0*, *n1*, ...], *bins*, [*patches0*, *patches1*, ...]). Masked arrays are not supported. Parameters: **x**(n,) array or sequence of (n,) arrays Input values; this takes either a single array or a sequence of arrays which are not required to be of the same length. **bins**int or sequence or str, default: `[rcParams["hist.bins"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=hist.bins#matplotlibrc-sample)` (default: `10`) If *bins* is an integer, it defines the number of equal-width bins in the range. If *bins* is a sequence, it defines the bin edges, including the left edge of the first bin and the right edge of the last bin; in this case, bins may be unequally spaced. All but the last (righthand-most) bin is half-open. In other words, if *bins* is: ``` [1, 2, 3, 4] ``` then the first bin is `[1, 2)` (including 1, but excluding 2) and the second `[2, 3)`. The last bin, however, is `[3, 4]`, which *includes* 4. If *bins* is a string, it is one of the binning strategies supported by [`numpy.histogram_bin_edges`](https://numpy.org/doc/stable/reference/generated/numpy.histogram_bin_edges.html#numpy.histogram_bin_edges "(in NumPy v1.23)"): 'auto', 'fd', 'doane', 'scott', 'stone', 'rice', 'sturges', or 'sqrt'. **range**tuple or None, default: None The lower and upper range of the bins. Lower and upper outliers are ignored. If not provided, *range* is `(x.min(), x.max())`. Range has no effect if *bins* is a sequence.
If *bins* is a sequence or *range* is specified, autoscaling is based on the specified bin range instead of the range of x. **density**bool, default: False If `True`, draw and return a probability density: each bin will display the bin's raw count divided by the total number of counts *and the bin width* (`density = counts / (sum(counts) * np.diff(bins))`), so that the area under the histogram integrates to 1 (`np.sum(density * np.diff(bins)) == 1`). If *stacked* is also `True`, the sum of the histograms is normalized to 1. **weights**(n,) array-like or None, default: None An array of weights, of the same shape as *x*. Each value in *x* only contributes its associated weight towards the bin count (instead of 1). If *density* is `True`, the weights are normalized, so that the integral of the density over the range remains 1. **cumulative**bool or -1, default: False If `True`, then a histogram is computed where each bin gives the counts in that bin plus all bins for smaller values. The last bin gives the total number of datapoints. If *density* is also `True` then the histogram is normalized such that the last bin equals 1. If *cumulative* is a number less than 0 (e.g., -1), the direction of accumulation is reversed. In this case, if *density* is also `True`, then the histogram is normalized such that the first bin equals 1. **bottom**array-like, scalar, or None, default: None Location of the bottom of each bin, i.e. bins are drawn from `bottom` to `bottom + hist(x, bins)`. If a scalar, the bottom of each bin is shifted by the same amount. If an array, each bin is shifted independently and the length of bottom must match the number of bins. If None, defaults to 0. **histtype**{'bar', 'barstacked', 'step', 'stepfilled'}, default: 'bar' The type of histogram to draw. * 'bar' is a traditional bar-type histogram. If multiple data are given the bars are arranged side by side. * 'barstacked' is a bar-type histogram where multiple data are stacked on top of each other. * 'step' generates a lineplot that is by default unfilled. * 'stepfilled' generates a lineplot that is by default filled. **align**{'left', 'mid', 'right'}, default: 'mid' The horizontal alignment of the histogram bars. * 'left': bars are centered on the left bin edges. * 'mid': bars are centered between the bin edges. * 'right': bars are centered on the right bin edges. **orientation**{'vertical', 'horizontal'}, default: 'vertical' If 'horizontal', [`barh`](matplotlib.axes.axes.barh#matplotlib.axes.Axes.barh "matplotlib.axes.Axes.barh") will be used for bar-type histograms and the *bottom* kwarg will be the left edges. **rwidth**float or None, default: None The relative width of the bars as a fraction of the bin width. If `None`, automatically compute the width. Ignored if *histtype* is 'step' or 'stepfilled'. **log**bool, default: False If `True`, the histogram axis will be set to a log scale. **color**color or array-like of colors or None, default: None Color or sequence of colors, one per dataset. Default (`None`) uses the standard line color sequence. **label**str or None, default: None String, or sequence of strings to match multiple datasets. Bar charts yield multiple patches per dataset, but only the first gets the label, so that [`legend`](matplotlib.axes.axes.legend#matplotlib.axes.Axes.legend "matplotlib.axes.Axes.legend") will work as expected.
**stacked**bool, default: False If `True`, multiple data are stacked on top of each other. If `False`, multiple data are arranged side by side if histtype is 'bar', or on top of each other if histtype is 'step'. Returns: **n**array or list of arrays The values of the histogram bins. See *density* and *weights* for a description of the possible semantics. If input *x* is an array, then this is an array of length *nbins*. If input is a sequence of arrays `[data1, data2, ...]`, then this is a list of arrays with the values of the histograms for each of the arrays in the same order. The dtype of the array *n* (or of its element arrays) will always be float even if no weighting or normalization is used. **bins**array The edges of the bins. Length nbins + 1 (nbins left edges and right edge of last bin). Always a single array even when multiple data sets are passed in. **patches**[`BarContainer`](../container_api#matplotlib.container.BarContainer "matplotlib.container.BarContainer") or list of a single [`Polygon`](matplotlib.patches.polygon#matplotlib.patches.Polygon "matplotlib.patches.Polygon") or list of such objects Container of individual artists used to create the histogram or list of such containers if there are multiple input datasets. Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *weights* **\*\*kwargs** [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") properties See also [`hist2d`](matplotlib.axes.axes.hist2d#matplotlib.axes.Axes.hist2d "matplotlib.axes.Axes.hist2d") 2D histogram with rectangular bins [`hexbin`](matplotlib.axes.axes.hexbin#matplotlib.axes.Axes.hexbin "matplotlib.axes.Axes.hexbin") 2D histogram with hexagonal bins #### Notes For large numbers of bins (>1000), plotting can be significantly faster if *histtype* is set to 'step' or 'stepfilled' rather than 'bar' or 'barstacked'.
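A minimal sketch tying the parameters above together; the data, seed, and styling are illustrative, not taken from the gallery:

```
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(19680801)
x = rng.standard_normal(1_000)

fig, ax = plt.subplots()
# 30 equal-width bins; density=True normalizes so the area integrates to 1.
n, bins, patches = ax.hist(x, bins=30, density=True,
                           histtype="stepfilled", alpha=0.6)
ax.set_xlabel("value")
ax.set_ylabel("probability density")
plt.show()
```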
Examples using `matplotlib.axes.Axes.hist` ------------------------------------------ [Scatter plot with histograms](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_hist.html#sphx-glr-gallery-lines-bars-and-markers-scatter-hist-py) Scatter plot with histograms [Axes Demo](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axes-demo-py) Axes Demo [Using histograms to plot a cumulative distribution](https://matplotlib.org/stable/gallery/statistics/histogram_cumulative.html#sphx-glr-gallery-statistics-histogram-cumulative-py) Using histograms to plot a cumulative distribution [Some features of the histogram (hist) function](https://matplotlib.org/stable/gallery/statistics/histogram_features.html#sphx-glr-gallery-statistics-histogram-features-py) Some features of the histogram (hist) function [The histogram (hist) function with multiple data sets](https://matplotlib.org/stable/gallery/statistics/histogram_multihist.html#sphx-glr-gallery-statistics-histogram-multihist-py) The histogram (hist) function with multiple data sets [Placing text boxes](https://matplotlib.org/stable/gallery/text_labels_and_annotations/placing_text_boxes.html#sphx-glr-gallery-text-labels-and-annotations-placing-text-boxes-py) Placing text boxes [Simple axes labels](https://matplotlib.org/stable/gallery/pyplots/fig_axes_labels_simple.html#sphx-glr-gallery-pyplots-fig-axes-labels-simple-py) Simple axes labels [Bayesian Methods for Hackers style sheet](https://matplotlib.org/stable/gallery/style_sheets/bmh.html#sphx-glr-gallery-style-sheets-bmh-py) Bayesian Methods for Hackers style sheet [Scatter Histogram (Locatable Axes)](https://matplotlib.org/stable/gallery/axes_grid1/scatter_hist_locatable_axes.html#sphx-glr-gallery-axes-grid1-scatter-hist-locatable-axes-py) Scatter Histogram (Locatable Axes) [Animated histogram](https://matplotlib.org/stable/gallery/animation/animated_histogram.html#sphx-glr-gallery-animation-animated-histogram-py) Animated histogram [MRI with EEG](https://matplotlib.org/stable/gallery/specialty_plots/mri_with_eeg.html#sphx-glr-gallery-specialty-plots-mri-with-eeg-py) MRI with EEG [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial [Path Tutorial](https://matplotlib.org/stable/tutorials/advanced/path_tutorial.html#sphx-glr-tutorials-advanced-path-tutorial-py) Path Tutorial [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial [hist(x)](https://matplotlib.org/stable/plot_types/stats/hist_plot.html#sphx-glr-plot-types-stats-hist-plot-py) hist(x) matplotlib matplotlib.axes.Axes.set_xticks matplotlib.axes.Axes.set\_xticks ================================ Axes.set\_xticks(*ticks*, *labels=None*, *\**, *minor=False*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L72-L73) Set the xaxis' tick locations and optionally labels. If necessary, the view limits of the Axis are expanded so that all given ticks are visible. Parameters: **ticks**list of floats List of tick locations. 
The axis [`Locator`](../ticker_api#matplotlib.ticker.Locator "matplotlib.ticker.Locator") is replaced by a [`FixedLocator`](../ticker_api#matplotlib.ticker.FixedLocator "matplotlib.ticker.FixedLocator"). Some tick formatters will not label arbitrary tick positions; e.g. log formatters only label decade ticks by default. In such a case you can set a formatter explicitly on the axis using [`Axis.set_major_formatter`](matplotlib.axis.axis.set_major_formatter#matplotlib.axis.Axis.set_major_formatter "matplotlib.axis.Axis.set_major_formatter") or provide formatted *labels* yourself. **labels**list of str, optional List of tick labels. If not set, the labels are generated with the axis tick [`Formatter`](../ticker_api#matplotlib.ticker.Formatter "matplotlib.ticker.Formatter"). **minor**bool, default: False If `False`, set the major ticks; if `True`, the minor ticks. **\*\*kwargs** [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") properties for the labels. These take effect only if you pass *labels*. In other cases, please use [`tick_params`](matplotlib.axes.axes.tick_params#matplotlib.axes.Axes.tick_params "matplotlib.axes.Axes.tick_params"). #### Notes The mandatory expansion of the view limits is an intentional design choice to prevent the surprise of a non-visible tick. If you need other limits, you should set the limits explicitly after setting the ticks. Examples using `matplotlib.axes.Axes.set_xticks` ------------------------------------------------ [Bar Label Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_label_demo.html#sphx-glr-gallery-lines-bars-and-markers-bar-label-demo-py) Bar Label Demo [Grouped bar chart with labels](https://matplotlib.org/stable/gallery/lines_bars_and_markers/barchart.html#sphx-glr-gallery-lines-bars-and-markers-barchart-py) Grouped bar chart with labels [Hat graph](https://matplotlib.org/stable/gallery/lines_bars_and_markers/hat_graph.html#sphx-glr-gallery-lines-bars-and-markers-hat-graph-py) Hat graph [Psd Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/psd_demo.html#sphx-glr-gallery-lines-bars-and-markers-psd-demo-py) Psd Demo [Creating annotated heatmaps](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_annotated_heatmap.html#sphx-glr-gallery-images-contours-and-fields-image-annotated-heatmap-py) Creating annotated heatmaps [Box plot vs. violin plot comparison](https://matplotlib.org/stable/gallery/statistics/boxplot_vs_violin.html#sphx-glr-gallery-statistics-boxplot-vs-violin-py) Box plot vs. 
violin plot comparison [Violin plot customization](https://matplotlib.org/stable/gallery/statistics/customized_violin.html#sphx-glr-gallery-statistics-customized-violin-py) Violin plot customization [Producing multiple histograms side by side](https://matplotlib.org/stable/gallery/statistics/multiple_histograms_side_by_side.html#sphx-glr-gallery-statistics-multiple-histograms-side-by-side-py) Producing multiple histograms side by side [Multiline](https://matplotlib.org/stable/gallery/text_labels_and_annotations/multiline.html#sphx-glr-gallery-text-labels-and-annotations-multiline-py) Multiline [Rendering math equations using TeX](https://matplotlib.org/stable/gallery/text_labels_and_annotations/tex_demo.html#sphx-glr-gallery-text-labels-and-annotations-tex-demo-py) Rendering math equations using TeX [ggplot style sheet](https://matplotlib.org/stable/gallery/style_sheets/ggplot.html#sphx-glr-gallery-style-sheets-ggplot-py) ggplot style sheet [Scatter Histogram (Locatable Axes)](https://matplotlib.org/stable/gallery/axes_grid1/scatter_hist_locatable_axes.html#sphx-glr-gallery-axes-grid1-scatter-hist-locatable-axes-py) Scatter Histogram (Locatable Axes) [Simple Axisline4](https://matplotlib.org/stable/gallery/axes_grid1/simple_axisline4.html#sphx-glr-gallery-axes-grid1-simple-axisline4-py) Simple Axisline4 [Ticklabel alignment](https://matplotlib.org/stable/gallery/axisartist/demo_ticklabel_alignment.html#sphx-glr-gallery-axisartist-demo-ticklabel-alignment-py) Ticklabel alignment [Ticklabel direction](https://matplotlib.org/stable/gallery/axisartist/demo_ticklabel_direction.html#sphx-glr-gallery-axisartist-demo-ticklabel-direction-py) Ticklabel direction [Integral as the area under a curve](https://matplotlib.org/stable/gallery/showcase/integral.html#sphx-glr-gallery-showcase-integral-py) Integral as the area under a curve [Shaded & power normalized rendering](https://matplotlib.org/stable/gallery/showcase/mandelbrot.html#sphx-glr-gallery-showcase-mandelbrot-py) Shaded & power normalized rendering [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [Rain simulation](https://matplotlib.org/stable/gallery/animation/rain.html#sphx-glr-gallery-animation-rain-py) Rain simulation [MATPLOTLIB UNCHAINED](https://matplotlib.org/stable/gallery/animation/unchained.html#sphx-glr-gallery-animation-unchained-py) MATPLOTLIB UNCHAINED [Log Bar](https://matplotlib.org/stable/gallery/scales/log_bar.html#sphx-glr-gallery-scales-log-bar-py) Log Bar [MRI with EEG](https://matplotlib.org/stable/gallery/specialty_plots/mri_with_eeg.html#sphx-glr-gallery-specialty-plots-mri-with-eeg-py) MRI with EEG [Custom spine bounds](https://matplotlib.org/stable/gallery/spines/spines_bounds.html#sphx-glr-gallery-spines-spines-bounds-py) Custom spine bounds [Group barchart with units](https://matplotlib.org/stable/gallery/units/bar_unit_demo.html#sphx-glr-gallery-units-bar-unit-demo-py) Group barchart with units [The Lifecycle of a Plot](https://matplotlib.org/stable/tutorials/introductory/lifecycle.html#sphx-glr-tutorials-introductory-lifecycle-py) The Lifecycle of a Plot matplotlib matplotlib.colors.TwoSlopeNorm matplotlib.colors.TwoSlopeNorm ============================== *class*matplotlib.colors.TwoSlopeNorm(*vcenter*, *vmin=None*,
*vmax=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1387-L1475) Bases: [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") Normalize data with a set center. Useful when mapping data with unequal rates of change around a conceptual center, e.g., data that range from -2 to 4, with 0 as the midpoint. Parameters: **vcenter**float The data value that defines `0.5` in the normalization. **vmin**float, optional The data value that defines `0.0` in the normalization. Defaults to the min value of the dataset. **vmax**float, optional The data value that defines `1.0` in the normalization. Defaults to the max value of the dataset. #### Examples This maps data value -4000 to 0., 0 to 0.5, and +10000 to 1.0; values in between are linearly interpolated: ``` >>> import matplotlib.colors as mcolors >>> offset = mcolors.TwoSlopeNorm(vmin=-4000., vcenter=0., vmax=10000) >>> data = [-4000., -2000., 0., 2500., 5000., 7500., 10000.] >>> offset(data) array([0., 0.25, 0.5, 0.625, 0.75, 0.875, 1.0]) ``` \_\_call\_\_(*value*, *clip=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1449-L1465) Map value to the interval [0, 1]. The clip argument is unused. autoscale\_None(*A*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1439-L1447) Get vmin and vmax, and then clip at vcenter. inverse(*value*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1467-L1475) *property*vcenter Examples using `matplotlib.colors.TwoSlopeNorm` ----------------------------------------------- [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#sphx-glr-tutorials-colors-colormapnorms-py) Colormap Normalization
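To see the two slopes on an actual plot, a hedged sketch (the field and limits are illustrative) that passes the norm to `pcolormesh` so the colormap midpoint lands at *vcenter*:

```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors

# Terrain-like field spanning -500..3000, with 0 (sea level) as the midpoint.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
z = 3000 * x - 500 * y  # unequal ranges on either side of 0

norm = mcolors.TwoSlopeNorm(vmin=-500.0, vcenter=0.0, vmax=3000.0)
fig, ax = plt.subplots()
mesh = ax.pcolormesh(x, y, z, norm=norm, cmap="RdBu_r")
fig.colorbar(mesh, ax=ax)
plt.show()
```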
matplotlib mpl_toolkits.axisartist.angle_helper.select_step_sub mpl\_toolkits.axisartist.angle\_helper.select\_step\_sub ======================================================== mpl\_toolkits.axisartist.angle\_helper.select\_step\_sub(*dv*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/angle_helper.py#L59-L76) matplotlib matplotlib.pyplot.grid matplotlib.pyplot.grid ====================== matplotlib.pyplot.grid(*visible=None*, *which='major'*, *axis='both'*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2530-L2532) Configure the grid lines. Parameters: **visible**bool or None, optional Whether to show the grid lines. If any *kwargs* are supplied, it is assumed you want the grid on and *visible* will be set to True. If *visible* is *None* and there are no *kwargs*, this toggles the visibility of the lines. **which**{'major', 'minor', 'both'}, optional The grid lines to apply the changes on. **axis**{'both', 'x', 'y'}, optional The axis to apply the changes on. **\*\*kwargs**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties Define the line properties of the grid, e.g.: ``` grid(color='r', linestyle='-', linewidth=2) ``` Valid keyword arguments are: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | 
[`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | [`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor "matplotlib.lines.Line2D.set_gapcolor") | color or None | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized 
"matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | #### Notes The axis is drawn as a unit, so the effective zorder for drawing the grid is determined by the zorder of each axis, not by the zorder of the [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") objects comprising the grid. Therefore, to set grid zorder, use [`set_axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") or, for more control, call the [`set_zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") method of each axis. 
Examples using `matplotlib.pyplot.grid` --------------------------------------- [Step Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/step_demo.html#sphx-glr-gallery-lines-bars-and-markers-step-demo-py) Step Demo [Geographic Projections](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/geo_demo.html#sphx-glr-gallery-subplots-axes-and-figures-geo-demo-py) Geographic Projections [Pyplot Text](https://matplotlib.org/stable/gallery/pyplots/pyplot_text.html#sphx-glr-gallery-pyplots-pyplot-text-py) Pyplot Text [Customize Rc](https://matplotlib.org/stable/gallery/misc/customize_rc.html#sphx-glr-gallery-misc-customize-rc-py) Customize Rc [Findobj Demo](https://matplotlib.org/stable/gallery/misc/findobj_demo.html#sphx-glr-gallery-misc-findobj-demo-py) Findobj Demo [Custom scale](https://matplotlib.org/stable/gallery/scales/custom_scale.html#sphx-glr-gallery-scales-custom-scale-py) Custom scale [SkewT-logP diagram: using transforms and custom projections](https://matplotlib.org/stable/gallery/specialty_plots/skewt.html#sphx-glr-gallery-specialty-plots-skewt-py) SkewT-logP diagram: using transforms and custom projections [Pyplot tutorial](https://matplotlib.org/stable/tutorials/introductory/pyplot.html#sphx-glr-tutorials-introductory-pyplot-py) Pyplot tutorial matplotlib matplotlib.animation.ImageMagickFileWriter matplotlib.animation.ImageMagickFileWriter ========================================== *class*matplotlib.animation.ImageMagickFileWriter(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/animation.py#L673-L684) File-based animated gif writer. Frames are written to temporary files on disk and then stitched together at the end. \_\_init\_\_(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/animation.py#L385-L387) #### Methods | | | | --- | --- | | [`__init__`](#matplotlib.animation.ImageMagickFileWriter.__init__ "matplotlib.animation.ImageMagickFileWriter.__init__")(\*args, \*\*kwargs) | | | `bin_path`() | Return the binary path to the commandline tool used by a specific subclass. | | `finish`() | Finish any processing for writing the movie. | | `grab_frame`(\*\*savefig\_kwargs) | Grab the image information from the figure and save as a movie frame. | | `isAvailable`() | Return whether a MovieWriter subclass is actually available. | | `saving`(fig, outfile, dpi, \*args, \*\*kwargs) | Context manager to facilitate writing the movie file. | | `setup`(fig, outfile[, dpi, frame\_prefix]) | Setup for writing the movie file. | #### Attributes | | | | --- | --- | | `delay` | | | `frame_format` | Format (png, jpeg, etc.) to use for saving the frames, which can be decided by the individual subclasses. | | `frame_size` | A tuple `(width, height)` in pixels of a movie frame. 
| | [`input_names`](#matplotlib.animation.ImageMagickFileWriter.input_names "matplotlib.animation.ImageMagickFileWriter.input_names") | | | `output_args` | | | [`supported_formats`](#matplotlib.animation.ImageMagickFileWriter.supported_formats "matplotlib.animation.ImageMagickFileWriter.supported_formats") | | *property*input\_names supported\_formats*=['png', 'jpeg', 'tiff', 'raw', 'rgba']* matplotlib matplotlib.axes.Axes.get_yaxis_text2_transform matplotlib.axes.Axes.get\_yaxis\_text2\_transform ================================================= Axes.get\_yaxis\_text2\_transform(*pad\_points*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1001-L1025) Returns: **transform**Transform The transform used for drawing secondary y-axis labels, which will add *pad\_points* of padding (in points) between the axis and the label. The x-direction is in axis coordinates and the y-direction is in data coordinates. **valign**{'center', 'top', 'bottom', 'baseline', 'center\_baseline'} The text vertical alignment. **halign**{'center', 'left', 'right'} The text horizontal alignment. #### Notes This transformation is primarily used by the [`Axis`](../axis_api#matplotlib.axis.Axis "matplotlib.axis.Axis") class, and is meant to be overridden by new kinds of projections that may need to place axis elements in different locations. matplotlib matplotlib.axis.YAxis.set_ticks_position matplotlib.axis.YAxis.set\_ticks\_position ========================================== YAxis.set\_ticks\_position(*position*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L2583-L2617) Set the ticks position. Parameters: **position**{'left', 'right', 'both', 'default', 'none'} 'both' sets the ticks to appear on both positions, but does not change the tick labels. 'default' resets the tick positions to the default: ticks on both positions, labels at left. 'none' can be used if you don't want any ticks. 'none' and 'both' affect only the ticks, not the labels. Examples using `matplotlib.axis.YAxis.set_ticks_position` --------------------------------------------------------- [Spine Placement](https://matplotlib.org/stable/gallery/spines/spine_placement_demo.html#sphx-glr-gallery-spines-spine-placement-demo-py) Spine Placement [Spines](https://matplotlib.org/stable/gallery/spines/spines.html#sphx-glr-gallery-spines-spines-py) Spines [Custom spine bounds](https://matplotlib.org/stable/gallery/spines/spines_bounds.html#sphx-glr-gallery-spines-spines-bounds-py) Custom spine bounds [Dropped spines](https://matplotlib.org/stable/gallery/spines/spines_dropped.html#sphx-glr-gallery-spines-spines-dropped-py) Dropped spines matplotlib matplotlib.axis.XAxis.axis_name matplotlib.axis.XAxis.axis\_name ================================ XAxis.axis\_name*='x'* Read-only name identifying the axis. matplotlib matplotlib.axes.Axes.pchanged matplotlib.axes.Axes.pchanged ============================= Axes.pchanged()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L387-L398) Call all of the registered callbacks. This function is triggered internally when a property is changed.
See also [`add_callback`](matplotlib.axes.axes.add_callback#matplotlib.axes.Axes.add_callback "matplotlib.axes.Axes.add_callback") [`remove_callback`](matplotlib.axes.axes.remove_callback#matplotlib.axes.Axes.remove_callback "matplotlib.axes.Axes.remove_callback") matplotlib matplotlib.artist.Artist.get_children matplotlib.artist.Artist.get\_children ====================================== Artist.get\_children()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L430-L432) Return a list of the child [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist")s of this [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"). matplotlib matplotlib.axes.Axes.set_frame_on matplotlib.axes.Axes.set\_frame\_on =================================== Axes.set\_frame\_on(*b*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3138-L3147) Set whether the Axes rectangle patch is drawn. Parameters: **b**bool matplotlib mpl_toolkits.axisartist.floating_axes.ExtremeFinderFixed mpl\_toolkits.axisartist.floating\_axes.ExtremeFinderFixed ========================================================== *class*mpl\_toolkits.axisartist.floating\_axes.ExtremeFinderFixed(*extremes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L137-L153) Bases: [`ExtremeFinderSimple`](mpl_toolkits.axisartist.grid_finder.extremefindersimple#mpl_toolkits.axisartist.grid_finder.ExtremeFinderSimple "mpl_toolkits.axisartist.grid_finder.ExtremeFinderSimple") This subclass always returns the same bounding box. Parameters: **extremes**(float, float, float, float) The bounding box that this helper always returns. \_\_call\_\_(*transform\_xy*, *x1*, *y1*, *x2*, *y2*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/floating_axes.py#L151-L153) Compute an approximation of the bounding box obtained by applying *transform\_xy* to the box delimited by `(x1, y1, x2, y2)`. The intended use is to have `(x1, y1, x2, y2)` in axes coordinates, and have *transform\_xy* be the transform from axes coordinates to data coordinates; this method then returns the range of data coordinates that span the actual axes. The computation is done by sampling `nx * ny` equispaced points in the `(x1, y1, x2, y2)` box and finding the resulting points with extremal coordinates; then adding some padding to take into account the finite sampling. As each sampling step covers a relative range of *1/nx* or *1/ny*, the padding is computed by expanding the span covered by the extremal coordinates by these fractions. matplotlib matplotlib.pyplot.imsave matplotlib.pyplot.imsave ======================== matplotlib.pyplot.imsave(*fname*, *arr*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2114-L2116) Save an array as an image file. Parameters: **fname**str or path-like or file-like A path or a file-like object to store the image in. If *format* is not set, then the output format is inferred from the extension of *fname*, if any, and from `[rcParams["savefig.format"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=savefig.format#matplotlibrc-sample)` (default: `'png'`) otherwise. If *format* is set, it determines the output format. **arr**array-like The image data. The shape can be one of MxN (luminance), MxNx3 (RGB) or MxNx4 (RGBA). 
**vmin, vmax**float, optional *vmin* and *vmax* set the color scaling for the image by fixing the values that map to the colormap color limits. If either *vmin* or *vmax* is None, that limit is determined from the *arr* min/max value. **cmap**str or [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap"), default: `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`) A Colormap instance or registered colormap name. The colormap maps scalar data to colors. It is ignored for RGB(A) data. **format**str, optional The file format, e.g. 'png', 'pdf', 'svg', ... The behavior when this is unset is documented under *fname*. **origin**{'upper', 'lower'}, default: `[rcParams["image.origin"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.origin#matplotlibrc-sample)` (default: `'upper'`) Indicates whether the `(0, 0)` index of the array is in the upper left or lower left corner of the axes. **dpi**float The DPI to store in the metadata of the file. This does not affect the resolution of the output image. Depending on file format, this may be rounded to the nearest integer. **metadata**dict, optional Metadata in the image file. The supported keys depend on the output format, see the documentation of the respective backends for more information. **pil\_kwargs**dict, optional Keyword arguments passed to [`PIL.Image.Image.save`](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.save "(in Pillow (PIL Fork) v9.2.0)"). If the 'pnginfo' key is present, it completely overrides *metadata*, including the default 'Software' key.
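A minimal sketch (the filename is an illustrative placeholder): saving a small scalar array directly to disk, with no Axes or padding, the format being inferred from the extension:

```
import numpy as np
import matplotlib.pyplot as plt

# A 16x16 gradient written pixel-for-pixel; vmin/vmax pin the color scaling.
arr = np.linspace(0, 1, 256).reshape(16, 16)
plt.imsave("gradient.png", arr, cmap="viridis", vmin=0.0, vmax=1.0,
           origin="lower")
```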
matplotlib matplotlib.axes.Axes.get_window_extent matplotlib.axes.Axes.get\_window\_extent ======================================== Axes.get\_window\_extent(*renderer=None*, *\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L756-L773) Return the Axes bounding box in display space; *args* and *kwargs* are empty. This bounding box does not include the spines, ticks, ticklabels, or other labels. For a bounding box including these elements use [`get_tightbbox`](matplotlib.axes.axes.get_tightbbox#matplotlib.axes.Axes.get_tightbbox "matplotlib.axes.Axes.get_tightbbox"). See also [`matplotlib.axes.Axes.get_tightbbox`](matplotlib.axes.axes.get_tightbbox#matplotlib.axes.Axes.get_tightbbox "matplotlib.axes.Axes.get_tightbbox") [`matplotlib.axis.Axis.get_tightbbox`](matplotlib.axis.axis.get_tightbbox#matplotlib.axis.Axis.get_tightbbox "matplotlib.axis.Axis.get_tightbbox") [`matplotlib.spines.Spine.get_window_extent`](../spines_api#matplotlib.spines.Spine.get_window_extent "matplotlib.spines.Spine.get_window_extent") matplotlib matplotlib.axes.Axes.use_sticky_edges matplotlib.axes.Axes.use\_sticky\_edges ======================================= *property*Axes.use\_sticky\_edges When autoscaling, whether to obey all `Artist.sticky_edges`. Default is `True`. Setting this to `False` ensures that the specified margins will be applied, even if the plot includes an image, for example, which would otherwise force a view limit to coincide with its data limit. Changing this property does not change the plot until [`autoscale`](matplotlib.axes.axes.autoscale#matplotlib.axes.Axes.autoscale "matplotlib.axes.Axes.autoscale") or [`autoscale_view`](matplotlib.axes.axes.autoscale_view#matplotlib.axes.Axes.autoscale_view "matplotlib.axes.Axes.autoscale_view") is called. matplotlib mpl_toolkits.axes_grid1.mpl_axes mpl\_toolkits.axes\_grid1.mpl\_axes =================================== Classes ------- | | | | --- | --- | | [`Axes`](mpl_toolkits.axes_grid1.mpl_axes.axes#mpl_toolkits.axes_grid1.mpl_axes.Axes "mpl_toolkits.axes_grid1.mpl_axes.Axes")(fig, rect, \*[, facecolor, frameon, ...]) | Build an Axes in a figure. | | [`SimpleAxisArtist`](mpl_toolkits.axes_grid1.mpl_axes.simpleaxisartist#mpl_toolkits.axes_grid1.mpl_axes.SimpleAxisArtist "mpl_toolkits.axes_grid1.mpl_axes.SimpleAxisArtist")(axis, axisnum, spine) | | | [`SimpleChainedObjects`](mpl_toolkits.axes_grid1.mpl_axes.simplechainedobjects#mpl_toolkits.axes_grid1.mpl_axes.SimpleChainedObjects "mpl_toolkits.axes_grid1.mpl_axes.SimpleChainedObjects")(objects) | | matplotlib mpl_toolkits.axisartist.axislines.AxesZero mpl\_toolkits.axisartist.axislines.AxesZero =========================================== *class*mpl\_toolkits.axisartist.axislines.AxesZero(*\*args*, *grid\_helper=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L564-L577) Bases: [`Axes`](mpl_toolkits.axisartist.axislines.axes#mpl_toolkits.axisartist.axislines.Axes "mpl_toolkits.axisartist.axislines.Axes") Build an Axes in a figure. Parameters: **fig**[`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The Axes is built in the [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") *fig*. **rect**tuple (left, bottom, width, height). The Axes is built in the rectangle *rect*. *rect* is in [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") coordinates.
**sharex, sharey**[`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"), optional The x or y [`axis`](../axis_api#module-matplotlib.axis "matplotlib.axis") is shared with the x or y axis in the input [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"). **frameon**bool, default: True Whether the Axes frame is visible. **box\_aspect**float, optional Set a fixed aspect for the Axes box, i.e. the ratio of height to width. See [`set_box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") for details. **\*\*kwargs** Other optional keyword arguments: | Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | 
[`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin 
"matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | [`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | [`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | [`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Returns: [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") The new [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") object. clear()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L566-L577) Clear the Axes. set(*\**, *adjustable=<UNSET>*, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *anchor=<UNSET>*, *animated=<UNSET>*, *aspect=<UNSET>*, *autoscale\_on=<UNSET>*, *autoscalex\_on=<UNSET>*, *autoscaley\_on=<UNSET>*, *axes\_locator=<UNSET>*, *axisbelow=<UNSET>*, *box\_aspect=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *facecolor=<UNSET>*, *frame\_on=<UNSET>*, *gid=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *mouseover=<UNSET>*, *navigate=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *position=<UNSET>*, *prop\_cycle=<UNSET>*, *rasterization\_zorder=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *title=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *xbound=<UNSET>*, *xlabel=<UNSET>*, *xlim=<UNSET>*, *xmargin=<UNSET>*, *xscale=<UNSET>*, *xticklabels=<UNSET>*, *xticks=<UNSET>*, *ybound=<UNSET>*, *ylabel=<UNSET>*, *ylim=<UNSET>*, *ymargin=<UNSET>*, *yscale=<UNSET>*, *yticklabels=<UNSET>*, *yticks=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`adjustable`](matplotlib.axes.axes.set_adjustable#matplotlib.axes.Axes.set_adjustable "matplotlib.axes.Axes.set_adjustable") | {'box', 'datalim'} | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`anchor`](matplotlib.axes.axes.set_anchor#matplotlib.axes.Axes.set_anchor "matplotlib.axes.Axes.set_anchor") | (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`aspect`](matplotlib.axes.axes.set_aspect#matplotlib.axes.Axes.set_aspect "matplotlib.axes.Axes.set_aspect") | {'auto', 'equal'} or float | | [`autoscale_on`](matplotlib.axes.axes.set_autoscale_on#matplotlib.axes.Axes.set_autoscale_on "matplotlib.axes.Axes.set_autoscale_on") | bool | | [`autoscalex_on`](matplotlib.axes.axes.set_autoscalex_on#matplotlib.axes.Axes.set_autoscalex_on "matplotlib.axes.Axes.set_autoscalex_on") | unknown | | [`autoscaley_on`](matplotlib.axes.axes.set_autoscaley_on#matplotlib.axes.Axes.set_autoscaley_on "matplotlib.axes.Axes.set_autoscaley_on") | unknown | | [`axes_locator`](matplotlib.axes.axes.set_axes_locator#matplotlib.axes.Axes.set_axes_locator "matplotlib.axes.Axes.set_axes_locator") | Callable[[Axes, Renderer], Bbox] | | [`axisbelow`](matplotlib.axes.axes.set_axisbelow#matplotlib.axes.Axes.set_axisbelow "matplotlib.axes.Axes.set_axisbelow") | bool or 'line' | | [`box_aspect`](matplotlib.axes.axes.set_box_aspect#matplotlib.axes.Axes.set_box_aspect "matplotlib.axes.Axes.set_box_aspect") | float or None | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`facecolor`](matplotlib.axes.axes.set_facecolor#matplotlib.axes.Axes.set_facecolor "matplotlib.axes.Axes.set_facecolor") or fc | color | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`frame_on`](matplotlib.axes.axes.set_frame_on#matplotlib.axes.Axes.set_frame_on "matplotlib.axes.Axes.set_frame_on") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`navigate`](matplotlib.axes.axes.set_navigate#matplotlib.axes.Axes.set_navigate "matplotlib.axes.Axes.set_navigate") | bool | | [`navigate_mode`](matplotlib.axes.axes.set_navigate_mode#matplotlib.axes.Axes.set_navigate_mode "matplotlib.axes.Axes.set_navigate_mode") | unknown | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position") | [left, bottom, width, height] or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`prop_cycle`](matplotlib.axes.axes.set_prop_cycle#matplotlib.axes.Axes.set_prop_cycle "matplotlib.axes.Axes.set_prop_cycle") | unknown | | [`rasterization_zorder`](matplotlib.axes.axes.set_rasterization_zorder#matplotlib.axes.Axes.set_rasterization_zorder "matplotlib.axes.Axes.set_rasterization_zorder") | float or None | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`title`](matplotlib.axes.axes.set_title#matplotlib.axes.Axes.set_title "matplotlib.axes.Axes.set_title") | str | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xbound`](matplotlib.axes.axes.set_xbound#matplotlib.axes.Axes.set_xbound "matplotlib.axes.Axes.set_xbound") | unknown | | [`xlabel`](matplotlib.axes.axes.set_xlabel#matplotlib.axes.Axes.set_xlabel "matplotlib.axes.Axes.set_xlabel") | str | | [`xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") | (bottom: float, top: float) | | [`xmargin`](matplotlib.axes.axes.set_xmargin#matplotlib.axes.Axes.set_xmargin "matplotlib.axes.Axes.set_xmargin") | float greater than -0.5 | | [`xscale`](matplotlib.axes.axes.set_xscale#matplotlib.axes.Axes.set_xscale "matplotlib.axes.Axes.set_xscale") | unknown | | [`xticklabels`](matplotlib.axes.axes.set_xticklabels#matplotlib.axes.Axes.set_xticklabels "matplotlib.axes.Axes.set_xticklabels") | unknown | | [`xticks`](matplotlib.axes.axes.set_xticks#matplotlib.axes.Axes.set_xticks "matplotlib.axes.Axes.set_xticks") | unknown | | [`ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") | unknown | | 
[`ylabel`](matplotlib.axes.axes.set_ylabel#matplotlib.axes.Axes.set_ylabel "matplotlib.axes.Axes.set_ylabel") | str | | [`ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") | (bottom: float, top: float) | | [`ymargin`](matplotlib.axes.axes.set_ymargin#matplotlib.axes.Axes.set_ymargin "matplotlib.axes.Axes.set_ymargin") | float greater than -0.5 | | [`yscale`](matplotlib.axes.axes.set_yscale#matplotlib.axes.Axes.set_yscale "matplotlib.axes.Axes.set_yscale") | unknown | | [`yticklabels`](matplotlib.axes.axes.set_yticklabels#matplotlib.axes.Axes.set_yticklabels "matplotlib.axes.Axes.set_yticklabels") | unknown | | [`yticks`](matplotlib.axes.axes.set_yticks#matplotlib.axes.Axes.set_yticks "matplotlib.axes.Axes.set_yticks") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float |
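To see the batch setter in action, here is a minimal sketch (it uses an ordinary `pyplot` Axes for brevity; every keyword is one of the properties tabulated above, and each maps to the corresponding `set_<name>` method):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])

# One call configures several properties; equivalent to calling
# ax.set_title(...), ax.set_xlabel(...), etc. one by one.
ax.set(
    title="Batch property setting",
    xlabel="x",
    ylabel="y",
    xlim=(0, 3),
    frame_on=True,
)
plt.show()
```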
matplotlib matplotlib.axes.Axes.draw_artist matplotlib.axes.Axes.draw\_artist ================================= Axes.draw\_artist(*a*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3112-L3116) Efficiently redraw a single artist. Examples using `matplotlib.axes.Axes.draw_artist` ------------------------------------------------- [Faster rendering by using blitting](https://matplotlib.org/stable/tutorials/advanced/blitting.html#sphx-glr-tutorials-advanced-blitting-py) Faster rendering by using blitting
matplotlib matplotlib.artist.Artist.set_picker matplotlib.artist.Artist.set\_picker ==================================== Artist.set\_picker(*picker*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L527-L560) Define the picking behavior of the artist. Parameters: **picker**None or bool or float or callable This can be one of the following: * *None*: Picking is disabled for this artist (default). * A boolean: If *True* then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. * A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event. * A function: If picker is callable, it is a user-supplied function which determines whether the artist is hit by the mouse event: ``` hit, props = picker(artist, mouseevent) ``` If the mouse event is over the artist, return *hit=True*; *props* is a dictionary of properties you want added to the PickEvent attributes. Examples using `matplotlib.artist.Artist.set_picker` ---------------------------------------------------- [Legend Picking](https://matplotlib.org/stable/gallery/event_handling/legend_picking.html#sphx-glr-gallery-event-handling-legend-picking-py) Legend Picking [Pick Event Demo](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo.html#sphx-glr-gallery-event-handling-pick-event-demo-py) Pick Event Demo
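To make the callable form concrete, here is a minimal sketch in the `hit, props = picker(artist, mouseevent)` style described above (the `line_picker` function and its 0.05 data-unit tolerance are illustrative choices, not part of the API):

```
import numpy as np
import matplotlib.pyplot as plt

def line_picker(line, mouseevent):
    # Hit test: is the mouse within 0.05 data units of a vertex of the line?
    if mouseevent.xdata is None:  # mouse is outside the Axes
        return False, {}
    d = np.hypot(line.get_xdata() - mouseevent.xdata,
                 line.get_ydata() - mouseevent.ydata)
    ind = np.nonzero(d < 0.05)[0]
    if len(ind):
        return True, {"ind": ind}  # props become attributes of the PickEvent
    return False, {}

fig, ax = plt.subplots()
line, = ax.plot(np.random.rand(10), "o")
line.set_picker(line_picker)
fig.canvas.mpl_connect("pick_event", lambda event: print("picked:", event.ind))
plt.show()
```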
matplotlib matplotlib.axis.Axis.set_view_interval matplotlib.axis.Axis.set\_view\_interval ======================================== Axis.set\_view\_interval(*vmin*, *vmax*, *ignore=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1020-L1033) Set the axis view limits. This method is for internal use; Matplotlib users should typically use e.g. [`set_xlim`](matplotlib.axes.axes.set_xlim#matplotlib.axes.Axes.set_xlim "matplotlib.axes.Axes.set_xlim") or [`set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim"). If *ignore* is False (the default), this method will never reduce the preexisting view limits, only expand them if *vmin* or *vmax* are not within them. Moreover, the order of *vmin* and *vmax* does not matter; the orientation of the axis will not change. If *ignore* is True, the view limits will be set exactly to `(vmin, vmax)` in that order.
matplotlib matplotlib.pyplot.ioff matplotlib.pyplot.ioff ====================== matplotlib.pyplot.ioff()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L442-L479) Disable interactive mode. See [`pyplot.isinteractive`](matplotlib.pyplot.isinteractive#matplotlib.pyplot.isinteractive "matplotlib.pyplot.isinteractive") for more details. See also [`ion`](matplotlib.pyplot.ion#matplotlib.pyplot.ion "matplotlib.pyplot.ion") Enable interactive mode. [`isinteractive`](matplotlib.pyplot.isinteractive#matplotlib.pyplot.isinteractive "matplotlib.pyplot.isinteractive") Whether interactive mode is enabled. [`show`](matplotlib.pyplot.show#matplotlib.pyplot.show "matplotlib.pyplot.show") Show all figures (and maybe block). [`pause`](matplotlib.pyplot.pause#matplotlib.pyplot.pause "matplotlib.pyplot.pause") Show all figures, and block for a time. #### Notes For a temporary change, this can be used as a context manager:
```
# if interactive mode is on
# then figures will be shown on creation
plt.ion()
# This figure will be shown immediately
fig = plt.figure()

with plt.ioff():
    # interactive mode will be off
    # figures will not automatically be shown
    fig2 = plt.figure()
    # ...
```
To enable optional usage as a context manager, this function returns an [`ExitStack`](https://docs.python.org/3/library/contextlib.html#contextlib.ExitStack "(in Python v3.10)") object, which is not intended to be stored or accessed by the user.
matplotlib matplotlib.axes.Axes.set_xmargin matplotlib.axes.Axes.set\_xmargin ================================= Axes.set\_xmargin(*m*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2656-L2676) Set padding of X data limits prior to autoscaling. *m* times the data interval will be added to each end of that interval before it is used in autoscaling. If *m* is negative, this will clip the data range instead of expanding it. For example, if your data is in the range [0, 2], a margin of 0.1 will result in a range [-0.2, 2.2]; a margin of -0.1 will result in a range of [0.2, 1.8]. Parameters: **m**float greater than -0.5 Examples using `matplotlib.axes.Axes.set_xmargin` ------------------------------------------------- [Automatically setting tick positions](https://matplotlib.org/stable/gallery/ticks/auto_ticks.html#sphx-glr-gallery-ticks-auto-ticks-py) Automatically setting tick positions
matplotlib matplotlib.axes.Axes.get_axes_locator matplotlib.axes.Axes.get\_axes\_locator ======================================= Axes.get\_axes\_locator()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1129-L1133) Return the axes\_locator.
matplotlib matplotlib.colors.CenteredNorm matplotlib.colors.CenteredNorm ============================== *class*matplotlib.colors.CenteredNorm(*vcenter=0*, *halfrange=None*, *clip=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1478-L1572) Bases: [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") Normalize symmetrical data around a center (0 by default). Unlike [`TwoSlopeNorm`](matplotlib.colors.twoslopenorm#matplotlib.colors.TwoSlopeNorm "matplotlib.colors.TwoSlopeNorm"), [`CenteredNorm`](#matplotlib.colors.CenteredNorm "matplotlib.colors.CenteredNorm") applies an equal rate of change around the center. Useful when mapping symmetrical data around a conceptual center, e.g. data that range from -2 to 4, with 0 as the midpoint, and with equal rates of change around that midpoint. Parameters: **vcenter**float, default: 0 The data value that defines `0.5` in the normalization.
**halfrange**float, optional The range of data values that defines a range of `0.5` in the normalization, so that *vcenter* - *halfrange* is `0.0` and *vcenter* + *halfrange* is `1.0` in the normalization. Defaults to the largest absolute difference to *vcenter* for the values in the dataset. #### Examples This maps data values -2 to 0.25, 0 to 0.5, and 4 to 1.0 (assuming equal rates of change above and below 0.0): ``` >>> import matplotlib.colors as mcolors >>> norm = mcolors.CenteredNorm(halfrange=4.0) >>> data = [-2., 0., 4.] >>> norm(data) array([0.25, 0.5 , 1. ]) ``` \_\_call\_\_(*value*, *clip=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1568-L1572) Normalize *value* data in the `[vmin, vmax]` interval into the `[0.0, 1.0]` interval and return it. Parameters: **value** Data to normalize. **clip**bool If `None`, defaults to `self.clip` (which defaults to `False`). #### Notes If not already initialized, `self.vmin` and `self.vmax` are initialized using `self.autoscale_None(value)`. autoscale(*A*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1524-L1531) Set *halfrange* to `max(abs(A-vcenter))`, then set *vmin* and *vmax*. autoscale\_None(*A*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L1533-L1537) Set *vmin* and *vmax*. *property*halfrange *property*vcenter Examples using `matplotlib.colors.CenteredNorm` ----------------------------------------------- [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#sphx-glr-tutorials-colors-colormapnorms-py) Colormap Normalization matplotlib matplotlib.patches.Shadow matplotlib.patches.Shadow ========================= *class*matplotlib.patches.Shadow(*patch*, *ox*, *oy*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L611-L660) Bases: [`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") Create a shadow of the given *patch*. By default, the shadow will have the same face color as the *patch*, but darkened. Parameters: **patch**[`Patch`](matplotlib.patches.patch#matplotlib.patches.Patch "matplotlib.patches.Patch") The patch to create the shadow for. **ox, oy**float The shift of the shadow in data coordinates, scaled by a factor of dpi/72. **\*\*kwargs** Properties of the shadow patch. 
Supported keys are: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L658-L660) Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible ([`Artist.get_visible`](matplotlib.artist.artist.get_visible#matplotlib.artist.Artist.get_visible "matplotlib.artist.Artist.get_visible") returns False). Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. #### Notes This method is overridden in the Artist subclasses. get\_patch\_transform()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L655-L656) Return the [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") instance mapping patch coordinates to data coordinates. For example, one may define a patch of a circle which represents a radius of 5 by providing coordinates for a unit circle, and a transform which scales the coordinates (the patch coordinate) by 5. get\_path()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/patches.py#L652-L653) Return the path of this patch. set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *antialiased=<UNSET>*, *capstyle=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *color=<UNSET>*, *edgecolor=<UNSET>*, *facecolor=<UNSET>*, *fill=<UNSET>*, *gid=<UNSET>*, *hatch=<UNSET>*, *in\_layout=<UNSET>*, *joinstyle=<UNSET>*, *label=<UNSET>*, *linestyle=<UNSET>*, *linewidth=<UNSET>*, *mouseover=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *rasterized=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *transform=<UNSET>*, *url=<UNSET>*, *visible=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover 
"matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Examples using `matplotlib.patches.Shadow` ------------------------------------------ [Using a text as a Path](https://matplotlib.org/stable/gallery/text_labels_and_annotations/demo_text_path.html#sphx-glr-gallery-text-labels-and-annotations-demo-text-path-py) Using a text as a Path [SVG Filter Pie](https://matplotlib.org/stable/gallery/misc/svg_filter_pie.html#sphx-glr-gallery-misc-svg-filter-pie-py) SVG Filter Pie
matplotlib matplotlib.pyplot.xkcd matplotlib.pyplot.xkcd ====================== matplotlib.pyplot.xkcd(*scale=1*, *length=100*, *randomness=2*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L585-L649) Turn on [xkcd](https://xkcd.com/) sketch-style drawing mode. This will only have an effect on things drawn after this function is called. For best results, the "Humor Sans" font should be installed: it is not included with Matplotlib. Parameters: **scale**float, optional The amplitude of the wiggle perpendicular to the source line. **length**float, optional The length of the wiggle along the line. **randomness**float, optional The scale factor by which the length is shrunken or expanded. #### Notes This function works by setting a number of rcParams, so it will probably override others you have set before. If you want the effects of this function to be temporary, it can be used as a context manager, for example:
```
with plt.xkcd():
    # This figure will be in XKCD-style
    fig1 = plt.figure()
    # ...

# This figure will be in regular style
fig2 = plt.figure()
```
Examples using `matplotlib.pyplot.xkcd` --------------------------------------- [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD
matplotlib matplotlib.axis.YAxis.tick_left matplotlib.axis.YAxis.tick\_left ================================ YAxis.tick\_left()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L2632-L2643) Move ticks and ticklabels (if present) to the left of the Axes. Examples using `matplotlib.axis.YAxis.tick_left` ------------------------------------------------ [Stock prices over 32 years](https://matplotlib.org/stable/gallery/showcase/stock_prices.html#sphx-glr-gallery-showcase-stock-prices-py) Stock prices over 32 years [Set default y-axis tick labels on the right](https://matplotlib.org/stable/gallery/ticks/tick_label_right.html#sphx-glr-gallery-ticks-tick-label-right-py) Set default y-axis tick labels on the right
matplotlib matplotlib.axis.Axis.get_pickradius matplotlib.axis.Axis.get\_pickradius ==================================== Axis.get\_pickradius()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1339-L1341) Return the depth of the axis used by the picker.
matplotlib matplotlib.axes.Axes.fill_between matplotlib.axes.Axes.fill\_between ================================== Axes.fill\_between(*x*, *y1*, *y2=0*, *where=None*, *interpolate=False*, *step=None*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L5333-L5337) Fill the area between two horizontal curves. The curves are defined by the points (*x*, *y1*) and (*x*, *y2*). This creates one or multiple polygons describing the filled area. You may exclude some horizontal sections from filling using *where*. By default, the edges connect the given points directly. Use *step* if the filling should be a step function, i.e. constant in between *x*. Parameters: **x**array (length N) The x coordinates of the nodes defining the curves. **y1**array (length N) or scalar The y coordinates of the nodes defining the first curve. **y2**array (length N) or scalar, default: 0 The y coordinates of the nodes defining the second curve. **where**array of bool (length N), optional Define *where* to exclude some horizontal regions from being filled. The filled regions are defined by the coordinates `x[where]`.
More precisely, fill between `x[i]` and `x[i+1]` if `where[i] and where[i+1]`. Note that this definition implies that an isolated *True* value between two *False* values in *where* will not result in filling. Both sides of the *True* position remain unfilled due to the adjacent *False* values. **interpolate**bool, default: False This option is only relevant if *where* is used and the two curves are crossing each other. Semantically, *where* is often used for *y1* > *y2* or similar. By default, the nodes of the polygon defining the filled region will only be placed at the positions in the *x* array. Such a polygon cannot describe the above semantics close to the intersection. The x-sections containing the intersection are simply clipped. Setting *interpolate* to *True* will calculate the actual intersection point and extend the filled region up to this point. **step**{'pre', 'post', 'mid'}, optional Define *step* if the filling should be a step function, i.e. constant in between *x*. The value determines where the step will occur: * 'pre': The y value is continued constantly to the left from every *x* position, i.e. the interval `(x[i-1], x[i]]` has the value `y[i]`. * 'post': The y value is continued constantly to the right from every *x* position, i.e. the interval `[x[i], x[i+1])` has the value `y[i]`. * 'mid': Steps occur half-way between the *x* positions. Returns: [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") A [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") containing the plotted polygons. Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y1*, *y2*, *where* **\*\*kwargs** All other keyword arguments are passed on to [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection"). 
They control the [`Polygon`](matplotlib.patches.polygon#matplotlib.patches.Polygon "matplotlib.patches.Polygon") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](../collections_api#matplotlib.collections.Collection.set_alpha "matplotlib.collections.Collection.set_alpha") | array-like or scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](../collections_api#matplotlib.collections.Collection.set_antialiased "matplotlib.collections.Collection.set_antialiased") or aa or antialiaseds | bool or list of bools | | [`array`](../cm_api#matplotlib.cm.ScalarMappable.set_array "matplotlib.cm.ScalarMappable.set_array") | array-like or None | | [`capstyle`](../collections_api#matplotlib.collections.Collection.set_capstyle "matplotlib.collections.Collection.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clim`](../cm_api#matplotlib.cm.ScalarMappable.set_clim "matplotlib.cm.ScalarMappable.set_clim") | (vmin: float, vmax: float) | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`cmap`](../cm_api#matplotlib.cm.ScalarMappable.set_cmap "matplotlib.cm.ScalarMappable.set_cmap") | [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") or str or None | | [`color`](../collections_api#matplotlib.collections.Collection.set_color "matplotlib.collections.Collection.set_color") | color or list of rgba tuples | | [`edgecolor`](../collections_api#matplotlib.collections.Collection.set_edgecolor "matplotlib.collections.Collection.set_edgecolor") or ec or edgecolors | color or list of colors or 'face' | | [`facecolor`](../collections_api#matplotlib.collections.Collection.set_facecolor "matplotlib.collections.Collection.set_facecolor") or facecolors or fc | color or list of colors | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](../collections_api#matplotlib.collections.Collection.set_hatch "matplotlib.collections.Collection.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](../collections_api#matplotlib.collections.Collection.set_joinstyle "matplotlib.collections.Collection.set_joinstyle") | 
[`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](../collections_api#matplotlib.collections.Collection.set_linestyle "matplotlib.collections.Collection.set_linestyle") or dashes or linestyles or ls | str or tuple or list thereof | | [`linewidth`](../collections_api#matplotlib.collections.Collection.set_linewidth "matplotlib.collections.Collection.set_linewidth") or linewidths or lw | float or list of floats | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`norm`](../cm_api#matplotlib.cm.ScalarMappable.set_norm "matplotlib.cm.ScalarMappable.set_norm") | [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or str or None | | [`offset_transform`](../collections_api#matplotlib.collections.Collection.set_offset_transform "matplotlib.collections.Collection.set_offset_transform") or transOffset | unknown | | [`offsets`](../collections_api#matplotlib.collections.Collection.set_offsets "matplotlib.collections.Collection.set_offsets") | (N, 2) or (2,) array-like | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`paths`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`pickradius`](../collections_api#matplotlib.collections.Collection.set_pickradius "matplotlib.collections.Collection.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | `sizes` | ndarray or None | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`urls`](../collections_api#matplotlib.collections.Collection.set_urls "matplotlib.collections.Collection.set_urls") | list of str or None | | [`verts`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`verts_and_codes`](../collections_api#matplotlib.collections.PolyCollection.set_verts_and_codes "matplotlib.collections.PolyCollection.set_verts_and_codes") | unknown | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | 
bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`fill_between`](#matplotlib.axes.Axes.fill_between "matplotlib.axes.Axes.fill_between") Fill between two sets of y-values. [`fill_betweenx`](matplotlib.axes.axes.fill_betweenx#matplotlib.axes.Axes.fill_betweenx "matplotlib.axes.Axes.fill_betweenx") Fill between two sets of x-values. Examples using `matplotlib.axes.Axes.fill_between` -------------------------------------------------- [Fill Between and Alpha](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_alpha.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-alpha-py) Fill Between and Alpha [Filling the area between lines](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_demo.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-demo-py) Filling the area between lines [fill\_between(x, y1, y2)](https://matplotlib.org/stable/plot_types/basic/fill_between.html#sphx-glr-plot-types-basic-fill-between-py) fill\_between(x, y1, y2)
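A minimal sketch of the *where* and *interpolate* parameters described above (the two curves are arbitrary illustrations):

```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
y1 = np.sin(x)
y2 = 0.5 * np.sin(2 * x)

fig, ax = plt.subplots()
ax.plot(x, y1, "-k", x, y2, "--k", linewidth=0.8)
# Fill only where the first curve lies above the second; interpolate=True
# extends each filled region to the exact crossing point instead of
# clipping it at the nearest x sample.
ax.fill_between(x, y1, y2, where=(y1 > y2), interpolate=True, alpha=0.4)
plt.show()
```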
matplotlib mpl_toolkits.axes_grid1.axes_size.Scaled mpl\_toolkits.axes\_grid1.axes\_size.Scaled =========================================== *class*mpl\_toolkits.axes\_grid1.axes\_size.Scaled(*scalable\_size*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_size.py#L67-L79) Bases: `_Base` Simple scaled size with absolute part = 0 and relative part = *scalable\_size*. get\_size(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_size.py#L76-L79) Examples using `mpl_toolkits.axes_grid1.axes_size.Scaled` --------------------------------------------------------- [HBoxDivider demo](https://matplotlib.org/stable/gallery/axes_grid1/demo_axes_hbox_divider.html#sphx-glr-gallery-axes-grid1-demo-axes-hbox-divider-py) `.HBoxDivider` demo [Axes with a fixed physical size](https://matplotlib.org/stable/gallery/axes_grid1/demo_fixed_size_axes.html#sphx-glr-gallery-axes-grid1-demo-fixed-size-axes-py) Axes with a fixed physical size [Simple Axes Divider 1](https://matplotlib.org/stable/gallery/axes_grid1/simple_axes_divider1.html#sphx-glr-gallery-axes-grid1-simple-axes-divider1-py) Simple Axes Divider 1
matplotlib matplotlib.artist.getp matplotlib.artist.getp ====================== matplotlib.artist.getp(*obj*, *property=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L1681-L1714) Return the value of an [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist")'s *property*, or print all of them. Parameters: **obj**[`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") The queried artist; e.g., a [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D"), a [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"), or an [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"). **property**str or None, default: None If *property* is 'somename', this function returns `obj.get_somename()`. If it's None (or unset), it *prints* all gettable properties from *obj*. Many properties have aliases for shorter typing, e.g. 'lw' is an alias for 'linewidth'. In the output, aliases and full property names are listed as `property or alias = value`, e.g. `linewidth or lw = 2`. See also [`setp`](matplotlib.artist.setp#matplotlib.artist.setp "matplotlib.artist.setp") Examples using `matplotlib.artist.getp` --------------------------------------- [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial
matplotlib matplotlib.pyplot.barbs matplotlib.pyplot.barbs ======================= matplotlib.pyplot.barbs(*\*args*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2361-L2365) Plot a 2D field of barbs. Call signature: ``` barbs([X, Y], U, V, [C], **kwargs) ``` Where *X*, *Y* define the barb locations, *U*, *V* define the barb directions, and *C* optionally sets the color. All arguments may be 1D or 2D. *U*, *V*, *C* may be masked arrays, but masked *X*, *Y* are not supported at present. Barbs are traditionally used in meteorology as a way to plot the speed and direction of wind observations, but can technically be used to plot any two-dimensional vector quantity. As opposed to arrows, which give vector magnitude by the length of the arrow, the barbs give more quantitative information about the vector magnitude by putting slanted lines or a triangle for various increments in magnitude, as shown schematically below:
```
:                   /\    \
:                  /  \    \
:                 /    \    \    \
:                /      \    \    \
:   ------------------------------
```
The largest increment is given by a triangle (or "flag"). After those come full lines (barbs). The smallest increment is a half line. There is, of course, at most one half line. If the magnitude is small and only needs a single half-line and no full lines or triangles, the half-line is offset from the end of the barb so that it can be easily distinguished from barbs with a single full line. The magnitude for the barb shown above would nominally be 65, using the standard increments of 50, 10, and 5. See also <https://en.wikipedia.org/wiki/Wind_barb>. Parameters: **X, Y**1D or 2D array-like, optional The x and y coordinates of the barb locations. See *pivot* for how the barbs are drawn to the x, y positions. If not given, they will be generated as a uniform integer meshgrid based on the dimensions of *U* and *V*. If *X* and *Y* are 1D but *U*, *V* are 2D, *X*, *Y* are expanded to 2D using `X, Y = np.meshgrid(X, Y)`. In this case `len(X)` and `len(Y)` must match the column and row dimensions of *U* and *V*. **U, V**1D or 2D array-like The x and y components of the barb shaft. **C**1D or 2D array-like, optional Numeric data that defines the barb colors by colormapping via *norm* and *cmap*. This does not support explicit colors. If you want to set colors directly, use *barbcolor* instead. **length**float, default: 7 Length of the barb in points; the other parts of the barb are scaled against this. **pivot**{'tip', 'middle'} or float, default: 'tip' The part of the arrow that is anchored to the *X*, *Y* grid. The barb rotates about this point. This can also be a number, which shifts the start of the barb that many points away from the grid point. **barbcolor**color or color sequence The color of all parts of the barb except for the flags. This parameter is analogous to the *edgecolor* parameter for polygons, which can be used instead. However, this parameter will override facecolor. **flagcolor**color or color sequence The color of any flags on the barb.
This parameter is analogous to the *facecolor* parameter for polygons, which can be used instead. However, this parameter will override facecolor. If this is not set (and *C* has not been set either), then *flagcolor* will be set to match *barbcolor* so that the barb has a uniform color. If *C* has been set, *flagcolor* has no effect. **sizes**dict, optional A dictionary of coefficients specifying the ratio of a given feature to the length of the barb. Only those values one wishes to override need to be included. These features include: * 'spacing' - space between features (flags, full/half barbs) * 'height' - height (distance from shaft to top) of a flag or full barb * 'width' - width of a flag, twice the width of a full barb * 'emptybarb' - radius of the circle used for low magnitudes **fill\_empty**bool, default: False Whether the empty barbs (circles) that are drawn should be filled with the flag color. If they are not filled, the center is transparent. **rounding**bool, default: True Whether the vector magnitude should be rounded when allocating barb components. If True, the magnitude is rounded to the nearest multiple of the half-barb increment. If False, the magnitude is simply truncated to the next lowest multiple. **barb\_increments**dict, optional A dictionary of increments specifying values to associate with different parts of the barb. Only those values one wishes to override need to be included. * 'half' - half barbs (default is 5) * 'full' - full barbs (default is 10) * 'flag' - flags (default is 50) **flip\_barb**bool or array-like of bool, default: False Whether the lines and flags should point opposite to normal. Normal behavior is for the barbs and lines to point right (comes from wind barbs having these features point towards low pressure in the Northern Hemisphere). A single value is applied to all barbs. Individual barbs can be flipped by passing a bool array of the same size as *U* and *V*. Returns: **barbs**[`Barbs`](matplotlib.quiver.barbs#matplotlib.quiver.Barbs "matplotlib.quiver.Barbs") Other Parameters: **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception).
**\*\*kwargs** The barbs can further be customized using [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") keyword arguments: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](../collections_api#matplotlib.collections.Collection.set_alpha "matplotlib.collections.Collection.set_alpha") | array-like or scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](../collections_api#matplotlib.collections.Collection.set_antialiased "matplotlib.collections.Collection.set_antialiased") or aa or antialiaseds | bool or list of bools | | [`array`](../cm_api#matplotlib.cm.ScalarMappable.set_array "matplotlib.cm.ScalarMappable.set_array") | array-like or None | | [`capstyle`](../collections_api#matplotlib.collections.Collection.set_capstyle "matplotlib.collections.Collection.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clim`](../cm_api#matplotlib.cm.ScalarMappable.set_clim "matplotlib.cm.ScalarMappable.set_clim") | (vmin: float, vmax: float) | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`cmap`](../cm_api#matplotlib.cm.ScalarMappable.set_cmap "matplotlib.cm.ScalarMappable.set_cmap") | [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") or str or None | | [`color`](../collections_api#matplotlib.collections.Collection.set_color "matplotlib.collections.Collection.set_color") | color or list of rgba tuples | | [`edgecolor`](../collections_api#matplotlib.collections.Collection.set_edgecolor "matplotlib.collections.Collection.set_edgecolor") or ec or edgecolors | color or list of colors or 'face' | | [`facecolor`](../collections_api#matplotlib.collections.Collection.set_facecolor "matplotlib.collections.Collection.set_facecolor") or facecolors or fc | color or list of colors | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](../collections_api#matplotlib.collections.Collection.set_hatch "matplotlib.collections.Collection.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](../collections_api#matplotlib.collections.Collection.set_joinstyle 
"matplotlib.collections.Collection.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](../collections_api#matplotlib.collections.Collection.set_linestyle "matplotlib.collections.Collection.set_linestyle") or dashes or linestyles or ls | str or tuple or list thereof | | [`linewidth`](../collections_api#matplotlib.collections.Collection.set_linewidth "matplotlib.collections.Collection.set_linewidth") or linewidths or lw | float or list of floats | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`norm`](../cm_api#matplotlib.cm.ScalarMappable.set_norm "matplotlib.cm.ScalarMappable.set_norm") | [`Normalize`](matplotlib.colors.normalize#matplotlib.colors.Normalize "matplotlib.colors.Normalize") or str or None | | [`offset_transform`](../collections_api#matplotlib.collections.Collection.set_offset_transform "matplotlib.collections.Collection.set_offset_transform") or transOffset | unknown | | [`offsets`](../collections_api#matplotlib.collections.Collection.set_offsets "matplotlib.collections.Collection.set_offsets") | (N, 2) or (2,) array-like | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`paths`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`pickradius`](../collections_api#matplotlib.collections.Collection.set_pickradius "matplotlib.collections.Collection.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | `sizes` | ndarray or None | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`urls`](../collections_api#matplotlib.collections.Collection.set_urls "matplotlib.collections.Collection.set_urls") | list of str or None | | [`verts`](../collections_api#matplotlib.collections.PolyCollection.set_verts "matplotlib.collections.PolyCollection.set_verts") | list of array-like | | [`verts_and_codes`](../collections_api#matplotlib.collections.PolyCollection.set_verts_and_codes "matplotlib.collections.PolyCollection.set_verts_and_codes") | unknown | | 
[`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float |
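By way of illustration (this sketch is not part of the upstream docstring, and the wind field is invented), the *sizes*, *barb\_increments*, and *fill\_empty* parameters described above combine like this:

```
import matplotlib.pyplot as plt
import numpy as np

# Made-up wind field: speed grows away from the origin, so the barbs
# pick up half barbs, full barbs, and flags as the magnitude crosses
# the 'half'/'full'/'flag' increments.
x, y = np.meshgrid(np.linspace(-5, 5, 6), np.linspace(-5, 5, 6))
u, v = 12 * x, 12 * y

fig, ax = plt.subplots()
ax.barbs(x, y, u, v,
         barb_increments=dict(half=10, full=20, flag=100),  # instead of the 5/10/50 defaults
         sizes=dict(emptybarb=0.1),  # smaller circle for low magnitudes
         fill_empty=True)            # fill those circles with the flag color
plt.show()
```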
matplotlib matplotlib.axes.Axes.indicate_inset matplotlib.axes.Axes.indicate\_inset ==================================== Axes.indicate\_inset(*bounds*, *inset\_ax=None*, *\**, *transform=None*, *facecolor='none'*, *edgecolor='0.5'*, *alpha=0.5*, *zorder=4.99*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L389-L500) Add an inset indicator to the Axes. This is a rectangle on the plot at the position indicated by *bounds* that optionally has lines that connect the rectangle to an inset Axes ([`Axes.inset_axes`](matplotlib.axes.axes.inset_axes#matplotlib.axes.Axes.inset_axes "matplotlib.axes.Axes.inset_axes")). Parameters: **bounds**[x0, y0, width, height] Lower-left corner of rectangle to be marked, and its width and height. **inset\_ax**[`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes") An optional inset Axes to draw connecting lines to. Two lines are drawn connecting the indicator box to the inset Axes on corners chosen so as to not overlap with the indicator box. **transform**[`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") Transform for the rectangle coordinates. Defaults to `ax.transAxes`, i.e. the units of *bounds* are in Axes-relative coordinates. **facecolor**color, default: 'none' Facecolor of the rectangle. **edgecolor**color, default: '0.5' Color of the rectangle and color of the connecting lines. **alpha**float, default: 0.5 Transparency of the rectangle and connector lines. **zorder**float, default: 4.99 Drawing order of the rectangle and connector lines. The default, 4.99, is just below the default level of inset Axes. **\*\*kwargs** Other keyword arguments are passed on to the [`Rectangle`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle "matplotlib.patches.Rectangle") patch: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`angle`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_angle "matplotlib.patches.Rectangle.set_angle") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`bounds`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_bounds "matplotlib.patches.Rectangle.set_bounds") | (left, bottom, width, height) | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | 
[`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`height`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_height "matplotlib.patches.Rectangle.set_height") | unknown | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | 
[`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`width`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_width "matplotlib.patches.Rectangle.set_width") | unknown | | [`x`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_x "matplotlib.patches.Rectangle.set_x") | unknown | | [`xy`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_xy "matplotlib.patches.Rectangle.set_xy") | (float, float) | | [`y`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_y "matplotlib.patches.Rectangle.set_y") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | Returns: **rectangle\_patch**[`patches.Rectangle`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle "matplotlib.patches.Rectangle") The indicator frame. **connector\_lines**4-tuple of [`patches.ConnectionPatch`](matplotlib.patches.connectionpatch#matplotlib.patches.ConnectionPatch "matplotlib.patches.ConnectionPatch") The four connector lines connecting to the (lower\_left, upper\_left, lower\_right, upper\_right) corners of *inset\_ax*. Two of the lines have their visibility set to *False*, but the user can set the visibility to *True* if the automatic choice is not deemed correct. Warning This method is experimental as of 3.0, and the API may change. matplotlib matplotlib.artist.Artist.pick matplotlib.artist.Artist.pick ============================= Artist.pick(*mouseevent*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L490-L525) Process a pick event. Each child artist will fire a pick event if *mouseevent* is over the artist and the artist has picker set. See also [`set_picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker"), [`get_picker`](matplotlib.artist.artist.get_picker#matplotlib.artist.Artist.get_picker "matplotlib.artist.Artist.get_picker"), [`pickable`](matplotlib.artist.artist.pickable#matplotlib.artist.Artist.pickable "matplotlib.artist.Artist.pickable") matplotlib matplotlib.pyplot.stackplot matplotlib.pyplot.stackplot =========================== matplotlib.pyplot.stackplot(*x*, *\*args*, *labels=()*, *colors=None*, *baseline='zero'*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2829-L2835) Draw a stacked area plot. Parameters: **x**(N,) array-like **y**(M, N) array-like The data is assumed to be unstacked. Each of the following calls is legal:

```
stackplot(x, y)           # where y has shape (M, N)
stackplot(x, y1, y2, y3)  # where y1, y2, y3 have length N
```

**baseline**{'zero', 'sym', 'wiggle', 'weighted\_wiggle'} Method used to calculate the baseline: * `'zero'`: Constant zero baseline, i.e. a simple stacked plot. * `'sym'`: Symmetric around zero; this is sometimes called 'ThemeRiver'. * `'wiggle'`: Minimizes the sum of the squared slopes. * `'weighted_wiggle'`: Does the same but weights to account for the size of each layer. It is also called 'Streamgraph'-layout. More details can be found at <http://leebyron.com/streamgraph/>. **labels**list of str, optional A sequence of labels to assign to each data series. If unspecified, then no labels will be applied to artists. 
**colors**list of color, optional A sequence of colors to be cycled through and used to color the stacked areas. The sequence need not be exactly the same length as the number of provided *y*, in which case the colors will repeat from the beginning. If not specified, the colors from the Axes property cycle will be used. **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). **\*\*kwargs** All other keyword arguments are passed to [`Axes.fill_between`](matplotlib.axes.axes.fill_between#matplotlib.axes.Axes.fill_between "matplotlib.axes.Axes.fill_between"). Returns: list of [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") A list of [`PolyCollection`](../collections_api#matplotlib.collections.PolyCollection "matplotlib.collections.PolyCollection") instances, one for each element in the stacked area plot. matplotlib matplotlib.pyplot.close matplotlib.pyplot.close ======================= matplotlib.pyplot.close(*fig=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L877-L916) Close a figure window. Parameters: **fig**None or int or str or [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The figure to close. There are a number of ways to specify this: * *None*: the current figure * [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure"): the given [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") instance * `int`: a figure number * `str`: a figure name * 'all': all figures Examples using `matplotlib.pyplot.close` ---------------------------------------- [Pong](https://matplotlib.org/stable/gallery/event_handling/pong_sgskip.html#sphx-glr-gallery-event-handling-pong-sgskip-py) Pong [Multipage PDF](https://matplotlib.org/stable/gallery/misc/multipage_pdf.html#sphx-glr-gallery-misc-multipage-pdf-py) Multipage PDF [Multiprocess](https://matplotlib.org/stable/gallery/misc/multiprocess_sgskip.html#sphx-glr-gallery-misc-multiprocess-sgskip-py) Multiprocess [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide matplotlib matplotlib.axes.Axes.set_ylabel matplotlib.axes.Axes.set\_ylabel ================================ Axes.set\_ylabel(*ylabel*, *fontdict=None*, *labelpad=None*, *\**, *loc=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3712-L3761) Set the label for the y-axis. Parameters: **ylabel**str The label text. **labelpad**float, default: `[rcParams["axes.labelpad"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.labelpad#matplotlibrc-sample)` (default: `4.0`) Spacing in points from the Axes bounding box including ticks and tick labels. If None, the previous value is left as is. **loc**{'bottom', 'center', 'top'}, default: `[rcParams["yaxis.labellocation"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=yaxis.labellocation#matplotlibrc-sample)` (default: `'center'`) The label position. This is a high-level alternative for passing parameters *y* and *horizontalalignment*. 
Other Parameters: **\*\*kwargs**[`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") properties [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") properties control the appearance of the label. See also [`text`](matplotlib.axes.axes.text#matplotlib.axes.Axes.text "matplotlib.axes.Axes.text") Documents the properties supported by [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"). Examples using `matplotlib.axes.Axes.set_ylabel` ------------------------------------------------ [Bar color demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_colors.html#sphx-glr-gallery-lines-bars-and-markers-bar-colors-py) Bar color demo [Bar Label Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_label_demo.html#sphx-glr-gallery-lines-bars-and-markers-bar-label-demo-py) Bar Label Demo [Stacked bar chart](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_stacked.html#sphx-glr-gallery-lines-bars-and-markers-bar-stacked-py) Stacked bar chart [Grouped bar chart with labels](https://matplotlib.org/stable/gallery/lines_bars_and_markers/barchart.html#sphx-glr-gallery-lines-bars-and-markers-barchart-py) Grouped bar chart with labels [CSD Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/csd_demo.html#sphx-glr-gallery-lines-bars-and-markers-csd-demo-py) CSD Demo [Fill Between and Alpha](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_alpha.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-alpha-py) Fill Between and Alpha [Hatch-filled histograms](https://matplotlib.org/stable/gallery/lines_bars_and_markers/filled_step.html#sphx-glr-gallery-lines-bars-and-markers-filled-step-py) Hatch-filled histograms [Hat graph](https://matplotlib.org/stable/gallery/lines_bars_and_markers/hat_graph.html#sphx-glr-gallery-lines-bars-and-markers-hat-graph-py) Hat graph [Mapping marker properties to multivariate data](https://matplotlib.org/stable/gallery/lines_bars_and_markers/multivariate_marker_plot.html#sphx-glr-gallery-lines-bars-and-markers-multivariate-marker-plot-py) Mapping marker properties to multivariate data [Psd Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/psd_demo.html#sphx-glr-gallery-lines-bars-and-markers-psd-demo-py) Psd Demo [Scatter plots with custom symbols](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_custom_symbol.html#sphx-glr-gallery-lines-bars-and-markers-scatter-custom-symbol-py) Scatter plots with custom symbols [Scatter Demo2](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_demo2.html#sphx-glr-gallery-lines-bars-and-markers-scatter-demo2-py) Scatter Demo2 [Stackplots and streamgraphs](https://matplotlib.org/stable/gallery/lines_bars_and_markers/stackplot_demo.html#sphx-glr-gallery-lines-bars-and-markers-stackplot-demo-py) Stackplots and streamgraphs [Contourf Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/contourf_demo.html#sphx-glr-gallery-images-contours-and-fields-contourf-demo-py) Contourf Demo [Creating annotated heatmaps](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_annotated_heatmap.html#sphx-glr-gallery-images-contours-and-fields-image-annotated-heatmap-py) Creating annotated heatmaps [Tricontour Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_demo.html#sphx-glr-gallery-images-contours-and-fields-tricontour-demo-py) Tricontour Demo [Tripcolor 
Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/tripcolor_demo.html#sphx-glr-gallery-images-contours-and-fields-tripcolor-demo-py) Tripcolor Demo [Triplot Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/triplot_demo.html#sphx-glr-gallery-images-contours-and-fields-triplot-demo-py) Triplot Demo [Aligning Labels](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/align_labels_demo.html#sphx-glr-gallery-subplots-axes-and-figures-align-labels-demo-py) Aligning Labels [Axes Demo](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axes-demo-py) Axes Demo [Axis Label Position](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axis_labels_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axis-labels-demo-py) Axis Label Position [Resizing axes with constrained layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_constrained_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-constrained-layout-py) Resizing axes with constrained layout [Resizing axes with tight layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_tight_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-tight-layout-py) Resizing axes with tight layout [Figure labels: suptitle, supxlabel, supylabel](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/figure_title.html#sphx-glr-gallery-subplots-axes-and-figures-figure-title-py) Figure labels: suptitle, supxlabel, supylabel [Invert Axes](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/invert_axes.html#sphx-glr-gallery-subplots-axes-and-figures-invert-axes-py) Invert Axes [Secondary Axis](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/secondary_axis.html#sphx-glr-gallery-subplots-axes-and-figures-secondary-axis-py) Secondary Axis [Figure subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html#sphx-glr-gallery-subplots-axes-and-figures-subfigures-py) Figure subfigures [Multiple subplots](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplot.html#sphx-glr-gallery-subplots-axes-and-figures-subplot-py) Multiple subplots [Plots with different scales](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/two_scales.html#sphx-glr-gallery-subplots-axes-and-figures-two-scales-py) Plots with different scales [Box plots with custom fill colors](https://matplotlib.org/stable/gallery/statistics/boxplot_color.html#sphx-glr-gallery-statistics-boxplot-color-py) Box plots with custom fill colors [Boxplots](https://matplotlib.org/stable/gallery/statistics/boxplot_demo.html#sphx-glr-gallery-statistics-boxplot-demo-py) Boxplots [Box plot vs. violin plot comparison](https://matplotlib.org/stable/gallery/statistics/boxplot_vs_violin.html#sphx-glr-gallery-statistics-boxplot-vs-violin-py) Box plot vs. 
violin plot comparison [Violin plot customization](https://matplotlib.org/stable/gallery/statistics/customized_violin.html#sphx-glr-gallery-statistics-customized-violin-py) Violin plot customization [Using histograms to plot a cumulative distribution](https://matplotlib.org/stable/gallery/statistics/histogram_cumulative.html#sphx-glr-gallery-statistics-histogram-cumulative-py) Using histograms to plot a cumulative distribution [Some features of the histogram (hist) function](https://matplotlib.org/stable/gallery/statistics/histogram_features.html#sphx-glr-gallery-statistics-histogram-features-py) Some features of the histogram (hist) function [Producing multiple histograms side by side](https://matplotlib.org/stable/gallery/statistics/multiple_histograms_side_by_side.html#sphx-glr-gallery-statistics-multiple-histograms-side-by-side-py) Producing multiple histograms side by side [Using accented text in Matplotlib](https://matplotlib.org/stable/gallery/text_labels_and_annotations/accented_text.html#sphx-glr-gallery-text-labels-and-annotations-accented-text-py) Using accented text in Matplotlib [Date tick labels](https://matplotlib.org/stable/gallery/text_labels_and_annotations/date.html#sphx-glr-gallery-text-labels-and-annotations-date-py) Date tick labels [Legend Demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/legend_demo.html#sphx-glr-gallery-text-labels-and-annotations-legend-demo-py) Legend Demo [Mathtext](https://matplotlib.org/stable/gallery/text_labels_and_annotations/mathtext_demo.html#sphx-glr-gallery-text-labels-and-annotations-mathtext-demo-py) Mathtext [Multiline](https://matplotlib.org/stable/gallery/text_labels_and_annotations/multiline.html#sphx-glr-gallery-text-labels-and-annotations-multiline-py) Multiline [Rendering math equations using TeX](https://matplotlib.org/stable/gallery/text_labels_and_annotations/tex_demo.html#sphx-glr-gallery-text-labels-and-annotations-tex-demo-py) Rendering math equations using TeX [Simple axes labels](https://matplotlib.org/stable/gallery/pyplots/fig_axes_labels_simple.html#sphx-glr-gallery-pyplots-fig-axes-labels-simple-py) Simple axes labels [Text Commands](https://matplotlib.org/stable/gallery/pyplots/text_commands.html#sphx-glr-gallery-pyplots-text-commands-py) Text Commands [Color Demo](https://matplotlib.org/stable/gallery/color/color_demo.html#sphx-glr-gallery-color-color-demo-py) Color Demo [Line, Poly and RegularPoly Collection with autoscaling](https://matplotlib.org/stable/gallery/shapes_and_collections/collections.html#sphx-glr-gallery-shapes-and-collections-collections-py) Line, Poly and RegularPoly Collection with autoscaling [Ellipse Collection](https://matplotlib.org/stable/gallery/shapes_and_collections/ellipse_collection.html#sphx-glr-gallery-shapes-and-collections-ellipse-collection-py) Ellipse Collection [Dark background style sheet](https://matplotlib.org/stable/gallery/style_sheets/dark_background.html#sphx-glr-gallery-style-sheets-dark-background-py) Dark background style sheet [Make room for ylabel using axes\_grid](https://matplotlib.org/stable/gallery/axes_grid1/make_room_for_ylabel_using_axesgrid.html#sphx-glr-gallery-axes-grid1-make-room-for-ylabel-using-axesgrid-py) Make room for ylabel using axes\_grid [Parasite Simple](https://matplotlib.org/stable/gallery/axes_grid1/parasite_simple.html#sphx-glr-gallery-axes-grid1-parasite-simple-py) Parasite Simple [Parasite Axes 
demo](https://matplotlib.org/stable/gallery/axisartist/demo_parasite_axes.html#sphx-glr-gallery-axisartist-demo-parasite-axes-py) Parasite Axes demo [Parasite axis demo](https://matplotlib.org/stable/gallery/axisartist/demo_parasite_axes2.html#sphx-glr-gallery-axisartist-demo-parasite-axes2-py) Parasite axis demo [Ticklabel alignment](https://matplotlib.org/stable/gallery/axisartist/demo_ticklabel_alignment.html#sphx-glr-gallery-axisartist-demo-ticklabel-alignment-py) Ticklabel alignment [Simple Axis Direction03](https://matplotlib.org/stable/gallery/axisartist/simple_axis_direction03.html#sphx-glr-gallery-axisartist-simple-axis-direction03-py) Simple Axis Direction03 [Simple Axisline](https://matplotlib.org/stable/gallery/axisartist/simple_axisline.html#sphx-glr-gallery-axisartist-simple-axisline-py) Simple Axisline [Anatomy of a figure](https://matplotlib.org/stable/gallery/showcase/anatomy.html#sphx-glr-gallery-showcase-anatomy-py) Anatomy of a figure [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [Pick Event Demo](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo.html#sphx-glr-gallery-event-handling-pick-event-demo-py) Pick Event Demo [Pythonic Matplotlib](https://matplotlib.org/stable/gallery/misc/pythonic_matplotlib.html#sphx-glr-gallery-misc-pythonic-matplotlib-py) Pythonic Matplotlib [Plot 2D data on 3D plot](https://matplotlib.org/stable/gallery/mplot3d/2dcollections3d.html#sphx-glr-gallery-mplot3d-2dcollections3d-py) Plot 2D data on 3D plot [Create 2D bar graphs in different planes](https://matplotlib.org/stable/gallery/mplot3d/bars3d.html#sphx-glr-gallery-mplot3d-bars3d-py) Create 2D bar graphs in different planes [3D errorbars](https://matplotlib.org/stable/gallery/mplot3d/errorbar3d.html#sphx-glr-gallery-mplot3d-errorbar3d-py) 3D errorbars [Lorenz Attractor](https://matplotlib.org/stable/gallery/mplot3d/lorenz_attractor.html#sphx-glr-gallery-mplot3d-lorenz-attractor-py) Lorenz Attractor [2D and 3D Axes in same Figure](https://matplotlib.org/stable/gallery/mplot3d/mixed_subplots.html#sphx-glr-gallery-mplot3d-mixed-subplots-py) 2D and 3D \*Axes\* in same \*Figure\* [Automatic Text Offsetting](https://matplotlib.org/stable/gallery/mplot3d/offset.html#sphx-glr-gallery-mplot3d-offset-py) Automatic Text Offsetting [3D scatterplot](https://matplotlib.org/stable/gallery/mplot3d/scatter3d.html#sphx-glr-gallery-mplot3d-scatter3d-py) 3D scatterplot [3D surface with polar coordinates](https://matplotlib.org/stable/gallery/mplot3d/surface3d_radial.html#sphx-glr-gallery-mplot3d-surface3d-radial-py) 3D surface with polar coordinates [Text annotations in 3D](https://matplotlib.org/stable/gallery/mplot3d/text3d.html#sphx-glr-gallery-mplot3d-text3d-py) Text annotations in 3D [Asinh Demo](https://matplotlib.org/stable/gallery/scales/asinh_demo.html#sphx-glr-gallery-scales-asinh-demo-py) Asinh Demo [Log Bar](https://matplotlib.org/stable/gallery/scales/log_bar.html#sphx-glr-gallery-scales-log-bar-py) Log Bar [Symlog Demo](https://matplotlib.org/stable/gallery/scales/symlog_demo.html#sphx-glr-gallery-scales-symlog-demo-py) Symlog Demo [MRI with EEG](https://matplotlib.org/stable/gallery/specialty_plots/mri_with_eeg.html#sphx-glr-gallery-specialty-plots-mri-with-eeg-py) MRI with EEG [Topographic hillshading](https://matplotlib.org/stable/gallery/specialty_plots/topographic_hillshading.html#sphx-glr-gallery-specialty-plots-topographic-hillshading-py) Topographic hillshading [Multiple Yaxis With 
Spines](https://matplotlib.org/stable/gallery/spines/multiple_yaxis_with_spines.html#sphx-glr-gallery-spines-multiple-yaxis-with-spines-py) Multiple Yaxis With Spines [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide [Arranging multiple Axes in a Figure](https://matplotlib.org/stable/tutorials/intermediate/arranging_axes.html#sphx-glr-tutorials-intermediate-arranging-axes-py) Arranging multiple Axes in a Figure [Choosing Colormaps in Matplotlib](https://matplotlib.org/stable/tutorials/colors/colormaps.html#sphx-glr-tutorials-colors-colormaps-py) Choosing Colormaps in Matplotlib [Text in Matplotlib Plots](https://matplotlib.org/stable/tutorials/text/text_intro.html#sphx-glr-tutorials-text-text-intro-py) Text in Matplotlib Plots
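To round off this entry, a minimal sketch (not from the upstream docstring; data and label text are placeholders) of how *loc*, *labelpad*, and the Text keyword pass-through fit together:

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])  # placeholder data
# *loc* positions the label along the axis, *labelpad* pushes it away
# from the tick labels, and extra kwargs are forwarded to the Text.
ax.set_ylabel('response [a.u.]', loc='top', labelpad=10,
              fontsize=12, color='tab:blue')
plt.show()
```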
matplotlib matplotlib.axes.Axes.bar matplotlib.axes.Axes.bar ======================== Axes.bar(*x*, *height*, *width=0.8*, *bottom=None*, *\**, *align='center'*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L2204-L2499) Make a bar plot. The bars are positioned at *x* with the given *align*ment. Their dimensions are given by *height* and *width*. The vertical baseline is *bottom* (default 0). Many parameters can take either a single value applying to all bars or a sequence of values, one for each bar. Parameters: **x**float or array-like The x coordinates of the bars. See also *align* for the alignment of the bars to the coordinates. **height**float or array-like The height(s) of the bars. **width**float or array-like, default: 0.8 The width(s) of the bars. **bottom**float or array-like, default: 0 The y coordinate(s) of the bottom side(s) of the bars. **align**{'center', 'edge'}, default: 'center' Alignment of the bars to the *x* coordinates: * 'center': Center the base on the *x* positions. * 'edge': Align the left edges of the bars with the *x* positions. To align the bars on the right edge pass a negative *width* and `align='edge'`. Returns: [`BarContainer`](../container_api#matplotlib.container.BarContainer "matplotlib.container.BarContainer") Container with all the bars and optionally errorbars. Other Parameters: **color**color or list of color, optional The colors of the bar faces. **edgecolor**color or list of color, optional The colors of the bar edges. **linewidth**float or array-like, optional Width of the bar edge(s). If 0, don't draw edges. **tick\_label**str or list of str, optional The tick labels of the bars. Default: None (Use default numeric labels.) **label**str or list of str, optional A single label is attached to the resulting [`BarContainer`](../container_api#matplotlib.container.BarContainer "matplotlib.container.BarContainer") as a label for the whole dataset. If a list is provided, it must be the same length as *x* and labels the individual bars. Repeated labels are not de-duplicated and will cause repeated label entries, so this is best used when bars also differ in style (e.g., by passing a list to *color*.) **xerr, yerr**float or array-like of shape(N,) or shape(2, N), optional If not *None*, add horizontal / vertical errorbars to the bar tips. The values are +/- sizes relative to the data: * scalar: symmetric +/- values for all bars * shape(N,): symmetric +/- values for each bar * shape(2, N): Separate - and + values for each bar. First row contains the lower errors, the second row contains the upper errors. * *None*: No errorbar. (Default) See [Different ways of specifying error bars](https://matplotlib.org/stable/gallery/statistics/errorbar_features.html) for an example on the usage of *xerr* and *yerr*. **ecolor**color or list of color, default: 'black' The line color of the errorbars. **capsize**float, default: `[rcParams["errorbar.capsize"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=errorbar.capsize#matplotlibrc-sample)` (default: `0.0`) The length of the error bar caps in points. **error\_kw**dict, optional Dictionary of keyword arguments to be passed to the [`errorbar`](matplotlib.axes.axes.errorbar#matplotlib.axes.Axes.errorbar "matplotlib.axes.Axes.errorbar") method. Values of *ecolor* or *capsize* defined here take precedence over the independent keyword arguments. **log**bool, default: False If *True*, set the y-axis to be log scale. 
**data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). **\*\*kwargs**[`Rectangle`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle "matplotlib.patches.Rectangle") properties | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`angle`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_angle "matplotlib.patches.Rectangle.set_angle") | unknown | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.patches.patch#matplotlib.patches.Patch.set_antialiased "matplotlib.patches.Patch.set_antialiased") or aa | bool or None | | [`bounds`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_bounds "matplotlib.patches.Rectangle.set_bounds") | (left, bottom, width, height) | | [`capstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_capstyle "matplotlib.patches.Patch.set_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.patches.patch#matplotlib.patches.Patch.set_color "matplotlib.patches.Patch.set_color") | color | | [`edgecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_edgecolor "matplotlib.patches.Patch.set_edgecolor") or ec | color or None | | [`facecolor`](matplotlib.patches.patch#matplotlib.patches.Patch.set_facecolor "matplotlib.patches.Patch.set_facecolor") or fc | color or None | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fill`](matplotlib.patches.patch#matplotlib.patches.Patch.set_fill "matplotlib.patches.Patch.set_fill") | bool | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`hatch`](matplotlib.patches.patch#matplotlib.patches.Patch.set_hatch "matplotlib.patches.Patch.set_hatch") | {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '\*'} | | [`height`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_height "matplotlib.patches.Rectangle.set_height") | unknown | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`joinstyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_joinstyle "matplotlib.patches.Patch.set_joinstyle") | 
[`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linestyle "matplotlib.patches.Patch.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.patches.patch#matplotlib.patches.Patch.set_linewidth "matplotlib.patches.Patch.set_linewidth") or lw | float or None | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`width`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_width "matplotlib.patches.Rectangle.set_width") | unknown | | [`x`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_x "matplotlib.patches.Rectangle.set_x") | unknown | | [`xy`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_xy "matplotlib.patches.Rectangle.set_xy") | (float, float) | | [`y`](matplotlib.patches.rectangle#matplotlib.patches.Rectangle.set_y "matplotlib.patches.Rectangle.set_y") | unknown | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`barh`](matplotlib.axes.axes.barh#matplotlib.axes.Axes.barh "matplotlib.axes.Axes.barh") Plot a horizontal bar plot. #### Notes Stacked bars can be achieved by passing individual *bottom* values per bar. See [Stacked bar chart](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_stacked.html). 
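A minimal sketch of the stacking recipe from the note above (the two data series are invented for illustration):

```
import matplotlib.pyplot as plt
import numpy as np

# Invented two-series data; stacking is just a per-bar *bottom* offset.
groups = ['A', 'B', 'C']
first = np.array([3, 5, 2])
second = np.array([4, 1, 6])

fig, ax = plt.subplots()
ax.bar(groups, first, width=0.6, label='first')
ax.bar(groups, second, width=0.6, bottom=first, label='second')  # stack on top
ax.legend()
plt.show()
```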
Examples using `matplotlib.axes.Axes.bar` ----------------------------------------- [Bar color demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_colors.html#sphx-glr-gallery-lines-bars-and-markers-bar-colors-py) Bar color demo [Bar Label Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_label_demo.html#sphx-glr-gallery-lines-bars-and-markers-bar-label-demo-py) Bar Label Demo [Stacked bar chart](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_stacked.html#sphx-glr-gallery-lines-bars-and-markers-bar-stacked-py) Stacked bar chart [Grouped bar chart with labels](https://matplotlib.org/stable/gallery/lines_bars_and_markers/barchart.html#sphx-glr-gallery-lines-bars-and-markers-barchart-py) Grouped bar chart with labels [Hat graph](https://matplotlib.org/stable/gallery/lines_bars_and_markers/hat_graph.html#sphx-glr-gallery-lines-bars-and-markers-hat-graph-py) Hat graph [Bar of pie](https://matplotlib.org/stable/gallery/pie_and_polar_charts/bar_of_pie.html#sphx-glr-gallery-pie-and-polar-charts-bar-of-pie-py) Bar of pie [Nested pie charts](https://matplotlib.org/stable/gallery/pie_and_polar_charts/nested_pie.html#sphx-glr-gallery-pie-and-polar-charts-nested-pie-py) Nested pie charts [Bar chart on polar axis](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_bar.html#sphx-glr-gallery-pie-and-polar-charts-polar-bar-py) Bar chart on polar axis [Legend Demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/legend_demo.html#sphx-glr-gallery-text-labels-and-annotations-legend-demo-py) Legend Demo [ggplot style sheet](https://matplotlib.org/stable/gallery/style_sheets/ggplot.html#sphx-glr-gallery-style-sheets-ggplot-py) ggplot style sheet [mpl\_toolkits.axisartist.floating\_axes features](https://matplotlib.org/stable/gallery/axisartist/demo_floating_axes.html#sphx-glr-gallery-axisartist-demo-floating-axes-py) :mod:`mpl\_toolkits.axisartist.floating\_axes` features [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [Pick Event Demo](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo.html#sphx-glr-gallery-event-handling-pick-event-demo-py) Pick Event Demo [Create 2D bar graphs in different planes](https://matplotlib.org/stable/gallery/mplot3d/bars3d.html#sphx-glr-gallery-mplot3d-bars3d-py) Create 2D bar graphs in different planes [Log Bar](https://matplotlib.org/stable/gallery/scales/log_bar.html#sphx-glr-gallery-scales-log-bar-py) Log Bar [Custom Ticker](https://matplotlib.org/stable/gallery/ticks/custom_ticker1.html#sphx-glr-gallery-ticks-custom-ticker1-py) Custom Ticker [Group barchart with units](https://matplotlib.org/stable/gallery/units/bar_unit_demo.html#sphx-glr-gallery-units-bar-unit-demo-py) Group barchart with units [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial [Path Tutorial](https://matplotlib.org/stable/tutorials/advanced/path_tutorial.html#sphx-glr-tutorials-advanced-path-tutorial-py) Path Tutorial [bar(x, height)](https://matplotlib.org/stable/plot_types/basic/bar.html#sphx-glr-plot-types-basic-bar-py) bar(x, height) matplotlib matplotlib.axes.Axes.can_zoom matplotlib.axes.Axes.can\_zoom ============================== 
Axes.can\_zoom()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3993-L3997) Return whether this Axes supports the zoom box button functionality. matplotlib matplotlib.gridspec.SubplotSpec matplotlib.gridspec.SubplotSpec =============================== *class*matplotlib.gridspec.SubplotSpec(*gridspec*, *num1*, *num2=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L543-L749) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") The location of a subplot in a [`GridSpec`](matplotlib.gridspec.gridspec#matplotlib.gridspec.GridSpec "matplotlib.gridspec.GridSpec"). Note Likely, you'll never instantiate a [`SubplotSpec`](#matplotlib.gridspec.SubplotSpec "matplotlib.gridspec.SubplotSpec") yourself. Instead you will typically obtain one from a [`GridSpec`](matplotlib.gridspec.gridspec#matplotlib.gridspec.GridSpec "matplotlib.gridspec.GridSpec") using item-access. Parameters: **gridspec**[`GridSpec`](matplotlib.gridspec.gridspec#matplotlib.gridspec.GridSpec "matplotlib.gridspec.GridSpec") The GridSpec, which the subplot is referencing. **num1, num2**int The subplot will occupy the num1-th cell of the given gridspec. If num2 is provided, the subplot will span between num1-th cell and num2-th cell *inclusive*. The index starts from 0. *property*colspan The columns spanned by this subplot, as a [`range`](https://docs.python.org/3/library/stdtypes.html#range "(in Python v3.10)") object. get\_geometry()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L631-L640) Return the subplot geometry as tuple `(n_rows, n_cols, start, stop)`. The indices *start* and *stop* define the range of the subplot within the [`GridSpec`](matplotlib.gridspec.gridspec#matplotlib.gridspec.GridSpec "matplotlib.gridspec.GridSpec"). *stop* is inclusive (i.e. for a single cell `start == stop`). get\_gridspec()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L628-L629) get\_position(*figure*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L669-L683) Update the subplot position from `figure.subplotpars`. get\_topmost\_subplotspec()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L685-L693) Return the topmost [`SubplotSpec`](#matplotlib.gridspec.SubplotSpec "matplotlib.gridspec.SubplotSpec") instance associated with the subplot. is\_first\_col()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L663-L664) is\_first\_row()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L657-L658) is\_last\_col()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L666-L667) is\_last\_row()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L660-L661) *property*num2 *property*rowspan The rows spanned by this subplot, as a [`range`](https://docs.python.org/3/library/stdtypes.html#range "(in Python v3.10)") object. subgridspec(*nrows*, *ncols*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L709-L749) Create a GridSpec within this subplot. 
The created [`GridSpecFromSubplotSpec`](matplotlib.gridspec.gridspecfromsubplotspec#matplotlib.gridspec.GridSpecFromSubplotSpec "matplotlib.gridspec.GridSpecFromSubplotSpec") will have this [`SubplotSpec`](#matplotlib.gridspec.SubplotSpec "matplotlib.gridspec.SubplotSpec") as a parent. Parameters: **nrows**int Number of rows in grid. **ncols**int Number of columns in grid. Returns: [`GridSpecFromSubplotSpec`](matplotlib.gridspec.gridspecfromsubplotspec#matplotlib.gridspec.GridSpecFromSubplotSpec "matplotlib.gridspec.GridSpecFromSubplotSpec") Other Parameters: **\*\*kwargs** All other parameters are passed to [`GridSpecFromSubplotSpec`](matplotlib.gridspec.gridspecfromsubplotspec#matplotlib.gridspec.GridSpecFromSubplotSpec "matplotlib.gridspec.GridSpecFromSubplotSpec"). See also [`matplotlib.pyplot.subplots`](matplotlib.pyplot.subplots#matplotlib.pyplot.subplots "matplotlib.pyplot.subplots") #### Examples Adding three subplots in the space occupied by a single subplot:

```
fig = plt.figure()
gs0 = fig.add_gridspec(3, 1)
ax1 = fig.add_subplot(gs0[0])
ax2 = fig.add_subplot(gs0[1])
gssub = gs0[2].subgridspec(1, 3)
for i in range(3):
    fig.add_subplot(gssub[0, i])
```

Examples using `matplotlib.gridspec.SubplotSpec` ------------------------------------------------ [Nested GridSpecs](https://matplotlib.org/stable/gallery/userdemo/demo_gridspec06.html#sphx-glr-gallery-userdemo-demo-gridspec06-py) Nested GridSpecs [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Arranging multiple Axes in a Figure](https://matplotlib.org/stable/tutorials/intermediate/arranging_axes.html#sphx-glr-tutorials-intermediate-arranging-axes-py) Arranging multiple Axes in a Figure matplotlib matplotlib.artist.Artist.get_clip_on matplotlib.artist.Artist.get\_clip\_on ====================================== Artist.get\_clip\_on()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L855-L857) Return whether the artist uses clipping. matplotlib matplotlib.pyplot.ginput matplotlib.pyplot.ginput ======================== matplotlib.pyplot.ginput(*n=1*, *timeout=30*, *show\_clicks=True*, *mouse\_add=MouseButton.LEFT*, *mouse\_pop=MouseButton.RIGHT*, *mouse\_stop=MouseButton.MIDDLE*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2235-L2243) Blocking call to interact with a figure. Wait until the user clicks *n* times on the figure, and return the coordinates of each click in a list. There are three possible interactions: * Add a point. * Remove the most recently added point. * Stop the interaction and return the points added so far. The actions are assigned to mouse buttons via the arguments *mouse\_add*, *mouse\_pop* and *mouse\_stop*. Parameters: **n**int, default: 1 Number of mouse clicks to accumulate. If negative, accumulate clicks until the input is terminated manually. **timeout**float, default: 30 seconds Number of seconds to wait before timing out. If zero or negative, it will never time out. **show\_clicks**bool, default: True If True, show a red cross at the location of each click. **mouse\_add**[`MouseButton`](../backend_bases_api#matplotlib.backend_bases.MouseButton "matplotlib.backend_bases.MouseButton") or None, default: [`MouseButton.LEFT`](../backend_bases_api#matplotlib.backend_bases.MouseButton.LEFT "matplotlib.backend_bases.MouseButton.LEFT") Mouse button used to add points. 
**mouse\_pop**[`MouseButton`](../backend_bases_api#matplotlib.backend_bases.MouseButton "matplotlib.backend_bases.MouseButton") or None, default: [`MouseButton.RIGHT`](../backend_bases_api#matplotlib.backend_bases.MouseButton.RIGHT "matplotlib.backend_bases.MouseButton.RIGHT") Mouse button used to remove the most recently added point. **mouse\_stop**[`MouseButton`](../backend_bases_api#matplotlib.backend_bases.MouseButton "matplotlib.backend_bases.MouseButton") or None, default: [`MouseButton.MIDDLE`](../backend_bases_api#matplotlib.backend_bases.MouseButton.MIDDLE "matplotlib.backend_bases.MouseButton.MIDDLE") Mouse button used to stop input. Returns: list of tuples A list of the clicked (x, y) coordinates. #### Notes The keyboard can also be used to select points, in case your mouse does not have one or more of the buttons. The delete and backspace keys act like right-clicking (i.e., remove the last point), the enter key terminates input, and any other key (not already used by the window manager) selects a point. Examples using `matplotlib.pyplot.ginput` ----------------------------------------- [Interactive functions](https://matplotlib.org/stable/gallery/event_handling/ginput_manual_clabel_sgskip.html#sphx-glr-gallery-event-handling-ginput-manual-clabel-sgskip-py) Interactive functions
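A minimal interactive sketch (not from the upstream docstring; requires an interactive backend, and the plotted curve is a placeholder to click on):

```
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 0])  # placeholder curve to click on
plt.title('left-click 3 points; right-click removes; middle-click stops')
pts = plt.ginput(3)  # blocks until 3 clicks or the 30 s default timeout
print(pts)           # list of (x, y) tuples in data coordinates
```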
matplotlib matplotlib.artist.Artist.set_rasterized matplotlib.artist.Artist.set\_rasterized ======================================== Artist.set\_rasterized(*rasterized*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L908-L927) Force rasterized (bitmap) drawing for vector graphics output. Rasterized drawing is not supported by all artists. If you try to enable this on an artist that does not support it, the command has no effect and a warning will be issued. This setting is ignored for pixel-based output. See also [Rasterization for vector graphics](https://matplotlib.org/stable/gallery/misc/rasterization_demo.html). Parameters: **rasterized**bool matplotlib matplotlib.artist.get matplotlib.artist.get ===================== matplotlib.artist.get(*obj*, *property=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L1681-L1714) Return the value of an [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist")'s *property*, or print all of them. Parameters: **obj**[`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") The queried artist; e.g., a [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D"), a [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"), or an [`Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"). **property**str or None, default: None If *property* is 'somename', this function returns `obj.get_somename()`. If it's None (or unset), it *prints* all gettable properties from *obj*. Many properties have aliases for shorter typing, e.g. 'lw' is an alias for 'linewidth'. In the output, aliases and full property names will be listed as: property or alias = value e.g.: linewidth or lw = 2 See also [`setp`](matplotlib.artist.setp#matplotlib.artist.setp "matplotlib.artist.setp") matplotlib mpl_toolkits.axisartist.axis_artist.TickLabels mpl\_toolkits.axisartist.axis\_artist.TickLabels ================================================ *class*mpl\_toolkits.axisartist.axis\_artist.TickLabels(*\**, *axis\_direction='bottom'*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L382-L565) Bases: [`AxisLabel`](mpl_toolkits.axisartist.axis_artist.axislabel#mpl_toolkits.axisartist.axis_artist.AxisLabel "mpl_toolkits.axisartist.axis_artist.AxisLabel") Tick Labels. While derived from Text, this single artist draws all ticklabels. As in AxisLabel, the position of the text is updated on the fly, so changing the text position has no effect. Otherwise, the properties can be changed as for a normal Text. Unlike the ticklabels of mainline matplotlib, the properties of a single ticklabel cannot be modified on their own. To change the pad between ticks and ticklabels, use set\_pad. draw(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L496-L517) Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible ([`Artist.get_visible`](matplotlib.artist.artist.get_visible#matplotlib.artist.Artist.get_visible "matplotlib.artist.Artist.get_visible") returns False). Parameters: **renderer**[`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. #### Notes This method is overridden in the Artist subclasses. 
get\_ref\_artist()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L399-L401) Return the underlying artist that actually defines some properties (e.g., color) of this artist. get\_texts\_widths\_heights\_descents(*renderer*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L551-L565) Return a list of `(width, height, descent)` tuples for ticklabels. Empty labels are left out. get\_window\_extents(*renderer=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L522-L549) invert\_axis\_direction()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L426-L428) set(*\**, *agg\_filter=<UNSET>*, *alpha=<UNSET>*, *animated=<UNSET>*, *axis\_direction=<UNSET>*, *backgroundcolor=<UNSET>*, *bbox=<UNSET>*, *clip\_box=<UNSET>*, *clip\_on=<UNSET>*, *clip\_path=<UNSET>*, *color=<UNSET>*, *default\_alignment=<UNSET>*, *default\_angle=<UNSET>*, *fontfamily=<UNSET>*, *fontproperties=<UNSET>*, *fontsize=<UNSET>*, *fontstretch=<UNSET>*, *fontstyle=<UNSET>*, *fontvariant=<UNSET>*, *fontweight=<UNSET>*, *gid=<UNSET>*, *horizontalalignment=<UNSET>*, *in\_layout=<UNSET>*, *label=<UNSET>*, *linespacing=<UNSET>*, *locs\_angles\_labels=<UNSET>*, *math\_fontfamily=<UNSET>*, *mouseover=<UNSET>*, *multialignment=<UNSET>*, *pad=<UNSET>*, *parse\_math=<UNSET>*, *path\_effects=<UNSET>*, *picker=<UNSET>*, *position=<UNSET>*, *rasterized=<UNSET>*, *rotation=<UNSET>*, *rotation\_mode=<UNSET>*, *sketch\_params=<UNSET>*, *snap=<UNSET>*, *text=<UNSET>*, *transform=<UNSET>*, *transform\_rotates\_text=<UNSET>*, *url=<UNSET>*, *usetex=<UNSET>*, *verticalalignment=<UNSET>*, *visible=<UNSET>*, *wrap=<UNSET>*, *x=<UNSET>*, *y=<UNSET>*, *zorder=<UNSET>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L117-L117) Set multiple properties at once. 
Supported properties are | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`axis_direction`](#mpl_toolkits.axisartist.axis_artist.TickLabels.set_axis_direction "mpl_toolkits.axisartist.axis_artist.TickLabels.set_axis_direction") | unknown | | [`backgroundcolor`](../text_api#matplotlib.text.Text.set_backgroundcolor "matplotlib.text.Text.set_backgroundcolor") | color | | [`bbox`](../text_api#matplotlib.text.Text.set_bbox "matplotlib.text.Text.set_bbox") | dict with properties for [`patches.FancyBboxPatch`](matplotlib.patches.fancybboxpatch#matplotlib.patches.FancyBboxPatch "matplotlib.patches.FancyBboxPatch") | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](../text_api#matplotlib.text.Text.set_color "matplotlib.text.Text.set_color") or c | color | | [`default_alignment`](mpl_toolkits.axisartist.axis_artist.axislabel#mpl_toolkits.axisartist.axis_artist.AxisLabel.set_default_alignment "mpl_toolkits.axisartist.axis_artist.AxisLabel.set_default_alignment") | unknown | | [`default_angle`](mpl_toolkits.axisartist.axis_artist.axislabel#mpl_toolkits.axisartist.axis_artist.AxisLabel.set_default_angle "mpl_toolkits.axisartist.axis_artist.AxisLabel.set_default_angle") | unknown | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fontfamily`](../text_api#matplotlib.text.Text.set_fontfamily "matplotlib.text.Text.set_fontfamily") or family | {FONTNAME, 'serif', 'sans-serif', 'cursive', 'fantasy', 'monospace'} | | [`fontproperties`](../text_api#matplotlib.text.Text.set_fontproperties "matplotlib.text.Text.set_fontproperties") or font or font\_properties | [`font_manager.FontProperties`](../font_manager_api#matplotlib.font_manager.FontProperties "matplotlib.font_manager.FontProperties") or [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") | | [`fontsize`](../text_api#matplotlib.text.Text.set_fontsize "matplotlib.text.Text.set_fontsize") or size | float or {'xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large'} | | [`fontstretch`](../text_api#matplotlib.text.Text.set_fontstretch "matplotlib.text.Text.set_fontstretch") or stretch | {a numeric value in range 0-1000, 'ultra-condensed', 'extra-condensed', 'condensed', 'semi-condensed', 'normal', 
'semi-expanded', 'expanded', 'extra-expanded', 'ultra-expanded'} | | [`fontstyle`](../text_api#matplotlib.text.Text.set_fontstyle "matplotlib.text.Text.set_fontstyle") or style | {'normal', 'italic', 'oblique'} | | [`fontvariant`](../text_api#matplotlib.text.Text.set_fontvariant "matplotlib.text.Text.set_fontvariant") or variant | {'normal', 'small-caps'} | | [`fontweight`](../text_api#matplotlib.text.Text.set_fontweight "matplotlib.text.Text.set_fontweight") or weight | {a numeric value in range 0-1000, 'ultralight', 'light', 'normal', 'regular', 'book', 'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy', 'extra bold', 'black'} | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`horizontalalignment`](../text_api#matplotlib.text.Text.set_horizontalalignment "matplotlib.text.Text.set_horizontalalignment") or ha | {'left', 'center', 'right'} | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linespacing`](../text_api#matplotlib.text.Text.set_linespacing "matplotlib.text.Text.set_linespacing") | float (multiple of font size) | | [`locs_angles_labels`](#mpl_toolkits.axisartist.axis_artist.TickLabels.set_locs_angles_labels "mpl_toolkits.axisartist.axis_artist.TickLabels.set_locs_angles_labels") | unknown | | [`math_fontfamily`](../text_api#matplotlib.text.Text.set_math_fontfamily "matplotlib.text.Text.set_math_fontfamily") | str | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`multialignment`](../text_api#matplotlib.text.Text.set_multialignment "matplotlib.text.Text.set_multialignment") or ma | {'left', 'right', 'center'} | | [`pad`](mpl_toolkits.axisartist.axis_artist.axislabel#mpl_toolkits.axisartist.axis_artist.AxisLabel.set_pad "mpl_toolkits.axisartist.axis_artist.AxisLabel.set_pad") | unknown | | [`parse_math`](../text_api#matplotlib.text.Text.set_parse_math "matplotlib.text.Text.set_parse_math") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.artist.artist.set_picker#matplotlib.artist.Artist.set_picker "matplotlib.artist.Artist.set_picker") | None or bool or float or callable | | [`position`](../text_api#matplotlib.text.Text.set_position "matplotlib.text.Text.set_position") | (float, float) | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`rotation`](../text_api#matplotlib.text.Text.set_rotation "matplotlib.text.Text.set_rotation") | float or {'vertical', 'horizontal'} | | [`rotation_mode`](../text_api#matplotlib.text.Text.set_rotation_mode "matplotlib.text.Text.set_rotation_mode") | {None, 'default', 'anchor'} | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`text`](../text_api#matplotlib.text.Text.set_text "matplotlib.text.Text.set_text") | object | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") | | [`transform_rotates_text`](../text_api#matplotlib.text.Text.set_transform_rotates_text "matplotlib.text.Text.set_transform_rotates_text") | bool | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`usetex`](../text_api#matplotlib.text.Text.set_usetex "matplotlib.text.Text.set_usetex") | bool or None | | [`verticalalignment`](../text_api#matplotlib.text.Text.set_verticalalignment "matplotlib.text.Text.set_verticalalignment") or va | {'bottom', 'baseline', 'center', 'center\_baseline', 'top'} | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`wrap`](../text_api#matplotlib.text.Text.set_wrap "matplotlib.text.Text.set_wrap") | bool | | [`x`](../text_api#matplotlib.text.Text.set_x "matplotlib.text.Text.set_x") | float | | [`y`](../text_api#matplotlib.text.Text.set_y "matplotlib.text.Text.set_y") | float | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float |
set\_axis\_direction(*label\_direction*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L403-L424) Adjust the text angle and text alignment of ticklabels according to the matplotlib convention. The *label\_direction* must be one of [left, right, bottom, top]. | property | left | bottom | right | top | | --- | --- | --- | --- | --- | | ticklabels angle | 90 | 0 | -90 | 180 | | ticklabel va | center | baseline | center | baseline | | ticklabel ha | right | center | right | center | Note that the text angles are actually relative to (90 + angle of the direction to the ticklabel), which gives 0 for the bottom axis. set\_locs\_angles\_labels(*locs\_angles\_labels*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axis_artist.py#L519-L520) matplotlib matplotlib.gridspec.GridSpecBase matplotlib.gridspec.GridSpecBase ================================ *class*matplotlib.gridspec.GridSpecBase(*nrows*, *ncols*, *height\_ratios=None*, *width\_ratios=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L27-L325) Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") A base class of GridSpec that specifies the geometry of the grid in which a subplot will be placed. Parameters: **nrows, ncols**int The number of rows and columns of the grid. **width\_ratios**array-like of length *ncols*, optional Defines the relative widths of the columns. Each column gets a relative width of `width_ratios[i] / sum(width_ratios)`. If not given, all columns will have the same width. **height\_ratios**array-like of length *nrows*, optional Defines the relative heights of the rows. Each row gets a relative height of `height_ratios[i] / sum(height_ratios)`. If not given, all rows will have the same height. 
get\_geometry()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L75-L79) Return a tuple containing the number of rows and columns in the grid. get\_grid\_positions(*fig*, *raw=<deprecated parameter>*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L145-L205) Return the positions of the grid cells in figure coordinates. Parameters: **fig**[`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The figure the grid should be applied to. The subplot parameters (margins and spacing between subplots) are taken from *fig*. **raw**bool, default: False If *True*, the subplot parameters of the figure are not taken into account. The grid spans the range [0, 1] in both directions without margins and there is no space between grid cells. This is used for constrained\_layout. Returns: **bottoms, tops, lefts, rights**array The bottom, top, left, right positions of the grid cells in figure coordinates. get\_height\_ratios()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L137-L143) Return the height ratios. This is *None* if no height ratios have been set explicitly. get\_subplot\_params(*figure=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L81-L83) get\_width\_ratios()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L115-L121) Return the width ratios. This is *None* if no width ratios have been set explicitly. *property*ncols The number of columns in the grid. new\_subplotspec(*loc*, *rowspan=1*, *colspan=1*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L85-L99) Create and return a [`SubplotSpec`](matplotlib.gridspec.subplotspec#matplotlib.gridspec.SubplotSpec "matplotlib.gridspec.SubplotSpec") instance. Parameters: **loc**(int, int) The position of the subplot in the grid as `(row_index, column_index)`. **rowspan, colspan**int, default: 1 The number of rows and columns the subplot should span in the grid. *property*nrows The number of rows in the grid. set\_height\_ratios(*height\_ratios*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L123-L135) Set the relative heights of the rows. *height\_ratios* must be of length *nrows*. Each row gets a relative height of `height_ratios[i] / sum(height_ratios)`. set\_width\_ratios(*width\_ratios*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L101-L113) Set the relative widths of the columns. *width\_ratios* must be of length *ncols*. Each column gets a relative width of `width_ratios[i] / sum(width_ratios)`. subplots(*\**, *sharex=False*, *sharey=False*, *squeeze=True*, *subplot\_kw=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/gridspec.py#L265-L325) Add all subplots specified by this [`GridSpec`](matplotlib.gridspec.gridspec#matplotlib.gridspec.GridSpec "matplotlib.gridspec.GridSpec") to its parent figure. See [`Figure.subplots`](../figure_api#matplotlib.figure.Figure.subplots "matplotlib.figure.Figure.subplots") for detailed documentation. 
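As a quick illustration of the ratio parameters above (not part of the upstream reference), the following sketch uses the concrete [`GridSpec`](matplotlib.gridspec.gridspec#matplotlib.gridspec.GridSpec "matplotlib.gridspec.GridSpec") subclass; the layout values are arbitrary.

```
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure()
# First column twice as wide as the second, first row twice as tall:
# each column gets width_ratios[i] / sum(width_ratios) of the grid.
gs = GridSpec(2, 2, figure=fig,
              width_ratios=[2, 1], height_ratios=[2, 1])

ax_main = fig.add_subplot(gs[0, 0])
ax_side = fig.add_subplot(gs[0, 1])
ax_bottom = fig.add_subplot(gs[1, :])  # SubplotSpec spanning both columns
plt.show()
```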
Examples using `matplotlib.gridspec.GridSpecBase` ------------------------------------------------- [Aligning Labels](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/align_labels_demo.html#sphx-glr-gallery-subplots-axes-and-figures-align-labels-demo-py) Aligning Labels [Resizing axes with constrained layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_constrained_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-constrained-layout-py) Resizing axes with constrained layout [Using Gridspec to make multi-column/row subplot layouts](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_multicolumn.html#sphx-glr-gallery-subplots-axes-and-figures-gridspec-multicolumn-py) Using Gridspec to make multi-column/row subplot layouts [Nested Gridspecs](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_nested.html#sphx-glr-gallery-subplots-axes-and-figures-gridspec-nested-py) Nested Gridspecs [GridSpec demo](https://matplotlib.org/stable/gallery/userdemo/demo_gridspec03.html#sphx-glr-gallery-userdemo-demo-gridspec03-py) GridSpec demo [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide [origin and extent in imshow](https://matplotlib.org/stable/tutorials/intermediate/imshow_extent.html#sphx-glr-tutorials-intermediate-imshow-extent-py) origin and extent in imshow
matplotlib matplotlib.axes.subplot_class_factory matplotlib.axes.subplot\_class\_factory ======================================= matplotlib.axes.subplot\_class\_factory(*axes\_class*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_subplots.py#L2278-L2300) matplotlib mpl_toolkits.axisartist.axes_grid.AxesGrid mpl\_toolkits.axisartist.axes\_grid.AxesGrid ============================================ mpl\_toolkits.axisartist.axes\_grid.AxesGrid[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axes_grid.py#L15-L16) alias of [`ImageGrid`](mpl_toolkits.axisartist.axes_grid.imagegrid#mpl_toolkits.axisartist.axes_grid.ImageGrid "mpl_toolkits.axisartist.axes_grid.ImageGrid") matplotlib matplotlib.axes.Axes.boxplot matplotlib.axes.Axes.boxplot ============================ Axes.boxplot(*x*, *notch=None*, *sym=None*, *vert=None*, *whis=None*, *positions=None*, *widths=None*, *patch\_artist=None*, *bootstrap=None*, *usermedians=None*, *conf\_intervals=None*, *meanline=None*, *showmeans=None*, *showcaps=None*, *showbox=None*, *showfliers=None*, *boxprops=None*, *labels=None*, *flierprops=None*, *medianprops=None*, *meanprops=None*, *capprops=None*, *whiskerprops=None*, *manage\_ticks=True*, *autorange=False*, *zorder=None*, *capwidths=None*, *\**, *data=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L3640-L3948) Draw a box and whisker plot. The box extends from the first quartile (Q1) to the third quartile (Q3) of the data, with a line at the median. The whiskers extend from the box by 1.5x the inter-quartile range (IQR). Flier points are those past the end of the whiskers. See <https://en.wikipedia.org/wiki/Box_plot> for reference. ``` Q1-1.5IQR Q1 median Q3 Q3+1.5IQR |-----:-----| o |--------| : |--------| o o |-----:-----| flier <-----------> fliers IQR ``` Parameters: **x**Array or a sequence of vectors. The input data. If a 2D array, a boxplot is drawn for each column in *x*. If a sequence of 1D arrays, a boxplot is drawn for each array in *x*. **notch**bool, default: False Whether to draw a notched boxplot ([`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)")), or a rectangular boxplot ([`False`](https://docs.python.org/3/library/constants.html#False "(in Python v3.10)")). The notches represent the confidence interval (CI) around the median. The documentation for *bootstrap* describes how the locations of the notches are computed by default, but their locations may also be overridden by setting the *conf\_intervals* parameter. Note In cases where the values of the CI are less than the lower quartile or greater than the upper quartile, the notches will extend beyond the box, giving it a distinctive "flipped" appearance. This is expected behavior and consistent with other statistical visualization packages. **sym**str, optional The default symbol for flier points. An empty string ('') hides the fliers. If [`None`](https://docs.python.org/3/library/constants.html#None "(in Python v3.10)"), then the fliers default to 'b+'. More control is provided by the *flierprops* parameter. **vert**bool, default: True If [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)"), draws vertical boxes. If [`False`](https://docs.python.org/3/library/constants.html#False "(in Python v3.10)"), draw horizontal boxes. **whis**float or (float, float), default: 1.5 The position of the whiskers. 
If a float, the lower whisker is at the lowest datum above `Q1 - whis*(Q3-Q1)`, and the upper whisker at the highest datum below `Q3 + whis*(Q3-Q1)`, where Q1 and Q3 are the first and third quartiles. The default value of `whis = 1.5` corresponds to Tukey's original definition of boxplots. If a pair of floats, they indicate the percentiles at which to draw the whiskers (e.g., (5, 95)). In particular, setting this to (0, 100) results in whiskers covering the whole range of the data. In the edge case where `Q1 == Q3`, *whis* is automatically set to (0, 100) (cover the whole range of the data) if *autorange* is True. Beyond the whiskers, data are considered outliers and are plotted as individual points. **bootstrap**int, optional Specifies whether to bootstrap the confidence intervals around the median for notched boxplots. If *bootstrap* is None, no bootstrapping is performed, and notches are calculated using a Gaussian-based asymptotic approximation (see McGill, R., Tukey, J.W., and Larsen, W.A., 1978, and Kendall and Stuart, 1967). Otherwise, bootstrap specifies the number of times to bootstrap the median to determine its 95% confidence intervals. Values between 1000 and 10000 are recommended. **usermedians**1D array-like, optional A 1D array-like of length `len(x)`. Each entry that is not [`None`](https://docs.python.org/3/library/constants.html#None "(in Python v3.10)") forces the value of the median for the corresponding dataset. For entries that are [`None`](https://docs.python.org/3/library/constants.html#None "(in Python v3.10)"), the medians are computed by Matplotlib as normal. **conf\_intervals**array-like, optional A 2D array-like of shape `(len(x), 2)`. Each entry that is not None forces the location of the corresponding notch (which is only drawn if *notch* is [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)")). For entries that are [`None`](https://docs.python.org/3/library/constants.html#None "(in Python v3.10)"), the notches are computed by the method specified by the other parameters (e.g., *bootstrap*). **positions**array-like, optional The positions of the boxes. The ticks and limits are automatically set to match the positions. Defaults to `range(1, N+1)` where N is the number of boxes to be drawn. **widths**float or array-like The widths of the boxes. The default is 0.5, or `0.15*(distance between extreme positions)`, if that is smaller. **patch\_artist**bool, default: False If [`False`](https://docs.python.org/3/library/constants.html#False "(in Python v3.10)") produces boxes with the Line2D artist. Otherwise, boxes are drawn with Patch artists. **labels**sequence, optional Labels for each dataset (one per dataset). **manage\_ticks**bool, default: True If True, the tick locations and labels will be adjusted to match the boxplot positions. **autorange**bool, default: False When [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)") and the data are distributed such that the 25th and 75th percentiles are equal, *whis* is set to (0, 100) such that the whisker ends are at the minimum and maximum of the data. **meanline**bool, default: False If [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)") (and *showmeans* is [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)")), will try to render the mean as a line spanning the full width of the box according to *meanprops* (see below). Not recommended if *shownotches* is also True. 
Otherwise, means will be shown as points. **zorder**float, default: `Line2D.zorder = 2` The zorder of the boxplot. Returns: dict A dictionary mapping each component of the boxplot to a list of the [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") instances created. That dictionary has the following keys (assuming vertical boxplots): * `boxes`: the main body of the boxplot showing the quartiles and the median's confidence intervals if enabled. * `medians`: horizontal lines at the median of each box. * `whiskers`: the vertical lines extending to the most extreme, non-outlier data points. * `caps`: the horizontal lines at the ends of the whiskers. * `fliers`: points representing data that extend beyond the whiskers (fliers). * `means`: points or lines representing the means. Other Parameters: **showcaps**bool, default: True Show the caps on the ends of whiskers. **showbox**bool, default: True Show the central box. **showfliers**bool, default: True Show the outliers beyond the caps. **showmeans**bool, default: False Show the arithmetic means. **capprops**dict, default: None The style of the caps. **capwidths**float or array, default: None The widths of the caps. **boxprops**dict, default: None The style of the box. **whiskerprops**dict, default: None The style of the whiskers. **flierprops**dict, default: None The style of the fliers. **medianprops**dict, default: None The style of the median. **meanprops**dict, default: None The style of the mean. **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). See also [`violinplot`](matplotlib.axes.axes.violinplot#matplotlib.axes.Axes.violinplot "matplotlib.axes.Axes.violinplot") Draw an estimate of the probability density function. Examples using `matplotlib.axes.Axes.boxplot` --------------------------------------------- [Box plots with custom fill colors](https://matplotlib.org/stable/gallery/statistics/boxplot_color.html#sphx-glr-gallery-statistics-boxplot-color-py) Box plots with custom fill colors [Boxplots](https://matplotlib.org/stable/gallery/statistics/boxplot_demo.html#sphx-glr-gallery-statistics-boxplot-demo-py) Boxplots [Boxplot Demo](https://matplotlib.org/stable/gallery/pyplots/boxplot_demo_pyplot.html#sphx-glr-gallery-pyplots-boxplot-demo-pyplot-py) Boxplot Demo [boxplot(X)](https://matplotlib.org/stable/plot_types/stats/boxplot_plot.html#sphx-glr-plot-types-stats-boxplot-plot-py) boxplot(X) matplotlib matplotlib.axes.Axes.set_rasterization_zorder matplotlib.axes.Axes.set\_rasterization\_zorder =============================================== Axes.set\_rasterization\_zorder(*z*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2775-L2793) Set the zorder threshold for rasterization for vector graphics output. All artists with a zorder below the given value will be rasterized if they support rasterization. This setting is ignored for pixel-based output. See also [Rasterization for vector graphics](https://matplotlib.org/stable/gallery/misc/rasterization_demo.html). Parameters: **z**float or None The zorder below which artists are rasterized. If `None` rasterization based on zorder is deactivated. 
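A brief sketch of the zorder threshold in practice (the output path is purely illustrative): everything below the threshold is rasterized in the vector file, while the axes decorations stay as vector art.

```
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fig, ax = plt.subplots()

# The dense scatter sits at zorder -1, below the threshold of 0 set
# next, so it is rasterized when saved to a vector format such as PDF.
ax.scatter(rng.random(50_000), rng.random(50_000), s=1, zorder=-1)
ax.set_rasterization_zorder(0)
fig.savefig("dense_scatter.pdf")  # illustrative file name
```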
Examples using `matplotlib.axes.Axes.set_rasterization_zorder` -------------------------------------------------------------- [Rasterization for vector graphics](https://matplotlib.org/stable/gallery/misc/rasterization_demo.html#sphx-glr-gallery-misc-rasterization-demo-py) Rasterization for vector graphics matplotlib mpl_toolkits.mplot3d.art3d.juggle_axes mpl\_toolkits.mplot3d.art3d.juggle\_axes ======================================== mpl\_toolkits.mplot3d.art3d.juggle\_axes(*xs*, *ys*, *zs*, *zdir*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L896-L909) Reorder coordinates so that 2D xs, ys can be plotted in the plane orthogonal to zdir. zdir is normally x, y or z. However, if zdir starts with a '-' it is interpreted as a compensation for rotate\_axes. matplotlib matplotlib.axes.Axes.get_ybound matplotlib.axes.Axes.get\_ybound ================================ Axes.get\_ybound()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3777-L3791) Return the lower and upper y-axis bounds, in increasing order. See also [`set_ybound`](matplotlib.axes.axes.set_ybound#matplotlib.axes.Axes.set_ybound "matplotlib.axes.Axes.set_ybound") [`get_ylim`](matplotlib.axes.axes.get_ylim#matplotlib.axes.Axes.get_ylim "matplotlib.axes.Axes.get_ylim"), [`set_ylim`](matplotlib.axes.axes.set_ylim#matplotlib.axes.Axes.set_ylim "matplotlib.axes.Axes.set_ylim") [`invert_yaxis`](matplotlib.axes.axes.invert_yaxis#matplotlib.axes.Axes.invert_yaxis "matplotlib.axes.Axes.invert_yaxis"), [`yaxis_inverted`](matplotlib.axes.axes.yaxis_inverted#matplotlib.axes.Axes.yaxis_inverted "matplotlib.axes.Axes.yaxis_inverted") matplotlib matplotlib.axis.Tick.get_tickdir matplotlib.axis.Tick.get\_tickdir ================================= Tick.get\_tickdir()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L225-L226) matplotlib matplotlib.axes.Axes.set_axis_off matplotlib.axes.Axes.set\_axis\_off =================================== Axes.set\_axis\_off()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L3436-L3443) Turn the x- and y-axis off. This affects the axis lines, ticks, ticklabels, grid and axis labels. 
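A minimal sketch of turning the axis decorations off, e.g. when displaying an image:

```
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.imshow(np.arange(100).reshape(10, 10))
ax.set_axis_off()  # hides spines, ticks, ticklabels, grid and labels
plt.show()
```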
Examples using `matplotlib.axes.Axes.set_axis_off` -------------------------------------------------- [Marker reference](https://matplotlib.org/stable/gallery/lines_bars_and_markers/marker_reference.html#sphx-glr-gallery-lines-bars-and-markers-marker-reference-py) Marker reference [Barcode](https://matplotlib.org/stable/gallery/images_contours_and_fields/barcode_demo.html#sphx-glr-gallery-images-contours-and-fields-barcode-demo-py) Barcode [Blend transparency with color in 2D images](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_transparency_blend.html#sphx-glr-gallery-images-contours-and-fields-image-transparency-blend-py) Blend transparency with color in 2D images [Nested pie charts](https://matplotlib.org/stable/gallery/pie_and_polar_charts/nested_pie.html#sphx-glr-gallery-pie-and-polar-charts-nested-pie-py) Nested pie charts [Annotation arrow style reference](https://matplotlib.org/stable/gallery/text_labels_and_annotations/fancyarrow_demo.html#sphx-glr-gallery-text-labels-and-annotations-fancyarrow-demo-py) Annotation arrow style reference [Text alignment](https://matplotlib.org/stable/gallery/text_labels_and_annotations/text_alignment.html#sphx-glr-gallery-text-labels-and-annotations-text-alignment-py) Text alignment [Drawing fancy boxes](https://matplotlib.org/stable/gallery/shapes_and_collections/fancybox_demo.html#sphx-glr-gallery-shapes-and-collections-fancybox-demo-py) Drawing fancy boxes [Choosing Colormaps in Matplotlib](https://matplotlib.org/stable/tutorials/colors/colormaps.html#sphx-glr-tutorials-colors-colormaps-py) Choosing Colormaps in Matplotlib [Text properties and layout](https://matplotlib.org/stable/tutorials/text/text_props.html#sphx-glr-tutorials-text-text-props-py) Text properties and layout matplotlib matplotlib.axes.Axes.reset_position matplotlib.axes.Axes.reset\_position ==================================== Axes.reset\_position()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L1107-L1116) Reset the active position to the original position. This resets the possible position change due to aspect constraints. For an explanation of the positions see [`set_position`](matplotlib.axes.axes.set_position#matplotlib.axes.Axes.set_position "matplotlib.axes.Axes.set_position"). matplotlib matplotlib.pyplot.cohere matplotlib.pyplot.cohere ======================== matplotlib.pyplot.cohere(*x*, *y*, *NFFT=256*, *Fs=2*, *Fc=0*, *detrend=<function detrend\_none>*, *window=<function window\_hanning>*, *noverlap=0*, *pad\_to=None*, *sides='default'*, *scale\_by\_freq=None*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2428-L2437) Plot the coherence between *x* and *y*. Coherence is the normalized cross spectral density: \[C\_{xy} = \frac{|P\_{xy}|^2}{P\_{xx}P\_{yy}}\] Parameters: **Fs**float, default: 2 The sampling frequency (samples per time unit). It is used to calculate the Fourier frequencies, *freqs*, in cycles per time unit. **window**callable or ndarray, default: [`window_hanning`](../mlab_api#matplotlib.mlab.window_hanning "matplotlib.mlab.window_hanning") A function or a vector of length *NFFT*. 
To create window vectors see [`window_hanning`](../mlab_api#matplotlib.mlab.window_hanning "matplotlib.mlab.window_hanning"), [`window_none`](../mlab_api#matplotlib.mlab.window_none "matplotlib.mlab.window_none"), [`numpy.blackman`](https://numpy.org/doc/stable/reference/generated/numpy.blackman.html#numpy.blackman "(in NumPy v1.23)"), [`numpy.hamming`](https://numpy.org/doc/stable/reference/generated/numpy.hamming.html#numpy.hamming "(in NumPy v1.23)"), [`numpy.bartlett`](https://numpy.org/doc/stable/reference/generated/numpy.bartlett.html#numpy.bartlett "(in NumPy v1.23)"), [`scipy.signal`](https://docs.scipy.org/doc/scipy/reference/signal.html#module-scipy.signal "(in SciPy v1.9.1)"), [`scipy.signal.get_window`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.get_window.html#scipy.signal.get_window "(in SciPy v1.9.1)"), etc. If a function is passed as the argument, it must take a data segment as an argument and return the windowed version of the segment. **sides**{'default', 'onesided', 'twosided'}, optional Which sides of the spectrum to return. 'default' is one-sided for real data and two-sided for complex data. 'onesided' forces the return of a one-sided spectrum, while 'twosided' forces two-sided. **pad\_to**int, optional The number of points to which the data segment is padded when performing the FFT. This can be different from *NFFT*, which specifies the number of data points used. While not increasing the actual resolution of the spectrum (the minimum distance between resolvable peaks), this can give more points in the plot, allowing for more detail. This corresponds to the *n* parameter in the call to [`fft`](https://numpy.org/doc/stable/reference/generated/numpy.fft.fft.html#numpy.fft.fft "(in NumPy v1.23)"). The default is None, which sets *pad\_to* equal to *NFFT*. **NFFT**int, default: 256 The number of data points used in each block for the FFT. A power of 2 is most efficient. This should *NOT* be used to get zero padding, or the scaling of the result will be incorrect; use *pad\_to* for this instead. **detrend**{'none', 'mean', 'linear'} or callable, default: 'none' The function applied to each segment before fft-ing, designed to remove the mean or linear trend. Unlike in MATLAB, where the *detrend* parameter is a vector, in Matplotlib it is a function. The [`mlab`](../mlab_api#module-matplotlib.mlab "matplotlib.mlab") module defines [`detrend_none`](../mlab_api#matplotlib.mlab.detrend_none "matplotlib.mlab.detrend_none"), [`detrend_mean`](../mlab_api#matplotlib.mlab.detrend_mean "matplotlib.mlab.detrend_mean"), and [`detrend_linear`](../mlab_api#matplotlib.mlab.detrend_linear "matplotlib.mlab.detrend_linear"), but you can use a custom function as well. You can also use a string to choose one of the functions: 'none' calls [`detrend_none`](../mlab_api#matplotlib.mlab.detrend_none "matplotlib.mlab.detrend_none"). 'mean' calls [`detrend_mean`](../mlab_api#matplotlib.mlab.detrend_mean "matplotlib.mlab.detrend_mean"). 'linear' calls [`detrend_linear`](../mlab_api#matplotlib.mlab.detrend_linear "matplotlib.mlab.detrend_linear"). **scale\_by\_freq**bool, default: True Whether the resulting density values should be scaled by the scaling frequency, which gives density in units of 1/Hz. This allows for integration over the returned frequency values. The default is True for MATLAB compatibility. **noverlap**int, default: 0 (no overlap) The number of points of overlap between blocks. 
**Fc**int, default: 0 The center frequency of *x*, which offsets the x extents of the plot to reflect the frequency range used when a signal is acquired and then filtered and downsampled to baseband. Returns: **Cxy**1-D array The coherence vector. **freqs**1-D array The frequencies for the elements in *Cxy*. Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y* **\*\*kwargs** Keyword arguments control the [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | [`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | [`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor 
"matplotlib.lines.Line2D.set_gapcolor") | color or None | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") | bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | 
[`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | #### References Bendat & Piersol -- Random Data: Analysis and Measurement Procedures, John Wiley & Sons (1986)
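A short usage sketch (not from the upstream docs; the signal parameters are arbitrary): two noisy signals share a 10 Hz component, so the plotted coherence peaks near 10 Hz.

```
import numpy as np
import matplotlib.pyplot as plt

dt = 1 / 500                      # 500 Hz sampling rate, so Fs = 1/dt
t = np.arange(0, 30, dt)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

# Returns the coherence vector and the corresponding frequencies.
Cxy, freqs = plt.cohere(x, y, NFFT=256, Fs=1 / dt)
plt.xlabel("frequency [Hz]")
plt.show()
```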
matplotlib matplotlib.axes.Axes.stairs matplotlib.axes.Axes.stairs =========================== Axes.stairs(*values*, *edges=None*, *\**, *orientation='vertical'*, *baseline=0*, *fill=False*, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L6872-L6947) A stepwise constant function as a line with bounding edges or a filled plot. Parameters: **values**array-like The step heights. **edges**array-like The edge positions, with `len(edges) == len(values) + 1`, between which the curve takes on the given *values*. **orientation**{'vertical', 'horizontal'}, default: 'vertical' The direction of the steps. Vertical means that *values* are along the y-axis, and edges are along the x-axis. **baseline**float, array-like or None, default: 0 The bottom value of the bounding edges or, when `fill=True`, the position of the lower edge. If *fill* is True or an array is passed to *baseline*, a closed path is drawn. **fill**bool, default: False Whether the area under the step curve should be filled. Returns: **StepPatch**[`matplotlib.patches.StepPatch`](matplotlib.patches.steppatch#matplotlib.patches.StepPatch "matplotlib.patches.StepPatch") Other Parameters: **data**indexable object, optional If given, all parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception). **\*\*kwargs** [`StepPatch`](matplotlib.patches.steppatch#matplotlib.patches.StepPatch "matplotlib.patches.StepPatch") properties matplotlib matplotlib.pyplot.findobj matplotlib.pyplot.findobj ========================= matplotlib.pyplot.findobj(*o=None*, *match=None*, *include\_self=True*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L177-L181) Find artist objects. Recursively find all [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") instances contained in the artist. Parameters: **match** A filter criterion for the matches. This can be * *None*: Return all objects contained in artist. * A function with signature `def match(artist: Artist) -> bool`. The result will only contain artists for which the function returns *True*. * A class instance: e.g., [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D"). The result will only contain artists of this class or its subclasses (`isinstance` check). **include\_self**bool Include *self* in the list to be checked for a match. Returns: list of [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") matplotlib matplotlib.artist.Artist.get_label matplotlib.artist.Artist.get\_label =================================== Artist.get\_label()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L1056-L1058) Return the label used for this artist in the legend. 
Examples using `matplotlib.artist.Artist.get_label` --------------------------------------------------- [Parasite Simple](https://matplotlib.org/stable/gallery/axes_grid1/parasite_simple.html#sphx-glr-gallery-axes-grid1-parasite-simple-py) Parasite Simple [SVG Filter Line](https://matplotlib.org/stable/gallery/misc/svg_filter_line.html#sphx-glr-gallery-misc-svg-filter-line-py) SVG Filter Line [SVG Filter Pie](https://matplotlib.org/stable/gallery/misc/svg_filter_pie.html#sphx-glr-gallery-misc-svg-filter-pie-py) SVG Filter Pie matplotlib matplotlib.axes.Axes.annotate matplotlib.axes.Axes.annotate ============================= Axes.annotate(*text*, *xy*, *xytext=None*, *xycoords='data'*, *textcoords=None*, *arrowprops=None*, *annotation\_clip=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L683-L695) Annotate the point *xy* with text *text*. In the simplest form, the text is placed at *xy*. Optionally, the text can be displayed in another position *xytext*. An arrow pointing from the text to the annotated point *xy* can then be added by defining *arrowprops*. Parameters: **text**str The text of the annotation. **xy**(float, float) The point *(x, y)* to annotate. The coordinate system is determined by *xycoords*. **xytext**(float, float), default: *xy* The position *(x, y)* to place the text at. The coordinate system is determined by *textcoords*. **xycoords**str or [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") or [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") or callable or (float, float), default: 'data' The coordinate system that *xy* is given in. The following types of values are supported: * One of the following strings: | Value | Description | | --- | --- | | 'figure points' | Points from the lower left of the figure | | 'figure pixels' | Pixels from the lower left of the figure | | 'figure fraction' | Fraction of figure from lower left | | 'subfigure points' | Points from the lower left of the subfigure | | 'subfigure pixels' | Pixels from the lower left of the subfigure | | 'subfigure fraction' | Fraction of subfigure from lower left | | 'axes points' | Points from lower left corner of axes | | 'axes pixels' | Pixels from lower left corner of axes | | 'axes fraction' | Fraction of axes from lower left | | 'data' | Use the coordinate system of the object being annotated (default) | | 'polar' | *(theta, r)* if not native 'data' coordinates | Note that 'subfigure pixels' and 'figure pixels' are the same for the parent figure, so users who want code that is usable in a subfigure can use 'subfigure pixels'. * An [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"): *xy* is interpreted as a fraction of the artist's [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox"). E.g. *(0, 0)* would be the lower left corner of the bounding box and *(0.5, 1)* would be the center top of the bounding box. * A [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") to transform *xy* to screen coordinates. * A function with one of the following signatures: ``` def transform(renderer) -> Bbox def transform(renderer) -> Transform ``` where *renderer* is a [`RendererBase`](../backend_bases_api#matplotlib.backend_bases.RendererBase "matplotlib.backend_bases.RendererBase") subclass. 
The result of the function is interpreted like the [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") and [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") cases above. * A tuple *(xcoords, ycoords)* specifying separate coordinate systems for *x* and *y*. *xcoords* and *ycoords* must each be of one of the above described types. See [Advanced Annotations](https://matplotlib.org/stable/tutorials/text/annotations.html#plotting-guide-annotation) for more details. **textcoords**str or [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") or [`Transform`](../transformations#matplotlib.transforms.Transform "matplotlib.transforms.Transform") or callable or (float, float), default: value of *xycoords* The coordinate system that *xytext* is given in. All *xycoords* values are valid as well as the following strings: | Value | Description | | --- | --- | | 'offset points' | Offset (in points) from the *xy* value | | 'offset pixels' | Offset (in pixels) from the *xy* value | **arrowprops**dict, optional The properties used to draw a [`FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch") arrow between the positions *xy* and *xytext*. Defaults to None, i.e. no arrow is drawn. For historical reasons there are two different ways to specify arrows, "simple" and "fancy": **Simple arrow:** If *arrowprops* does not contain the key 'arrowstyle' the allowed keys are: | Key | Description | | --- | --- | | width | The width of the arrow in points | | headwidth | The width of the base of the arrow head in points | | headlength | The length of the arrow head in points | | shrink | Fraction of total length to shrink from both ends | | ? | Any key to [`matplotlib.patches.FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch") | The arrow is attached to the edge of the text box, the exact position (corners or centers) depending on where it's pointing to. **Fancy arrow:** This is used if 'arrowstyle' is provided in the *arrowprops*. Valid keys are the following [`FancyArrowPatch`](matplotlib.patches.fancyarrowpatch#matplotlib.patches.FancyArrowPatch "matplotlib.patches.FancyArrowPatch") parameters: | Key | Description | | --- | --- | | arrowstyle | the arrow style | | connectionstyle | the connection style | | relpos | see below; default is (0.5, 0.5) | | patchA | default is bounding box of the text | | patchB | default is None | | shrinkA | default is 2 points | | shrinkB | default is 2 points | | mutation\_scale | default is text size (in points) | | mutation\_aspect | default is 1. | | ? | any key for [`matplotlib.patches.PathPatch`](matplotlib.patches.pathpatch#matplotlib.patches.PathPatch "matplotlib.patches.PathPatch") | The exact starting point position of the arrow is defined by *relpos*. It's a tuple of relative coordinates of the text box, where (0, 0) is the lower left corner and (1, 1) is the upper right corner. Values <0 and >1 are supported and specify points outside the text box. By default (0.5, 0.5) the starting point is centered in the text box. **annotation\_clip**bool or None, default: None Whether to clip (i.e. not draw) the annotation when the annotation point *xy* is outside the axes area. * If *True*, the annotation will be clipped when *xy* is outside the axes. * If *False*, the annotation will always be drawn. 
* If *None*, the annotation will be clipped when *xy* is outside the axes and *xycoords* is 'data'. **\*\*kwargs** Additional kwargs are passed to [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text"). Returns: [`Annotation`](../text_api#matplotlib.text.Annotation "matplotlib.text.Annotation") See also [Advanced Annotations](https://matplotlib.org/stable/tutorials/text/annotations.html#plotting-guide-annotation) Examples using `matplotlib.axes.Axes.annotate` ---------------------------------------------- [Broken Barh](https://matplotlib.org/stable/gallery/lines_bars_and_markers/broken_barh.html#sphx-glr-gallery-lines-bars-and-markers-broken-barh-py) Broken Barh [Hat graph](https://matplotlib.org/stable/gallery/lines_bars_and_markers/hat_graph.html#sphx-glr-gallery-lines-bars-and-markers-hat-graph-py) Hat graph [Creating a timeline with lines, dates, and text](https://matplotlib.org/stable/gallery/lines_bars_and_markers/timeline.html#sphx-glr-gallery-lines-bars-and-markers-timeline-py) Creating a timeline with lines, dates, and text [Combining two subplots using subplots and GridSpec](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_and_subplots.html#sphx-glr-gallery-subplots-axes-and-figures-gridspec-and-subplots-py) Combining two subplots using subplots and GridSpec [Labeling a pie and a donut](https://matplotlib.org/stable/gallery/pie_and_polar_charts/pie_and_donut_labels.html#sphx-glr-gallery-pie-and-polar-charts-pie-and-donut-labels-py) Labeling a pie and a donut [Scale invariant angle label](https://matplotlib.org/stable/gallery/text_labels_and_annotations/angle_annotation.html#sphx-glr-gallery-text-labels-and-annotations-angle-annotation-py) Scale invariant angle label [Annotating Plots](https://matplotlib.org/stable/gallery/text_labels_and_annotations/annotation_demo.html#sphx-glr-gallery-text-labels-and-annotations-annotation-demo-py) Annotating Plots [Annotation arrow style reference](https://matplotlib.org/stable/gallery/text_labels_and_annotations/fancyarrow_demo.html#sphx-glr-gallery-text-labels-and-annotations-fancyarrow-demo-py) Annotation arrow style reference [Rendering math equations using TeX](https://matplotlib.org/stable/gallery/text_labels_and_annotations/tex_demo.html#sphx-glr-gallery-text-labels-and-annotations-tex-demo-py) Rendering math equations using TeX [Annotate Transform](https://matplotlib.org/stable/gallery/pyplots/annotate_transform.html#sphx-glr-gallery-pyplots-annotate-transform-py) Annotate Transform [Annotating a plot](https://matplotlib.org/stable/gallery/pyplots/annotation_basic.html#sphx-glr-gallery-pyplots-annotation-basic-py) Annotating a plot [Annotation Polar](https://matplotlib.org/stable/gallery/pyplots/annotation_polar.html#sphx-glr-gallery-pyplots-annotation-polar-py) Annotation Polar [Text Commands](https://matplotlib.org/stable/gallery/pyplots/text_commands.html#sphx-glr-gallery-pyplots-text-commands-py) Text Commands [Mmh Donuts!!!](https://matplotlib.org/stable/gallery/shapes_and_collections/donut.html#sphx-glr-gallery-shapes-and-collections-donut-py) Mmh Donuts!!! 
[axis\_direction demo](https://matplotlib.org/stable/gallery/axisartist/demo_axis_direction.html#sphx-glr-gallery-axisartist-demo-axis-direction-py) axis\_direction demo [Simple Axis Pad](https://matplotlib.org/stable/gallery/axisartist/simple_axis_pad.html#sphx-glr-gallery-axisartist-simple-axis-pad-py) Simple Axis Pad [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [Patheffect Demo](https://matplotlib.org/stable/gallery/misc/patheffect_demo.html#sphx-glr-gallery-misc-patheffect-demo-py) Patheffect Demo [Annotation with units](https://matplotlib.org/stable/gallery/units/annotate_with_units.html#sphx-glr-gallery-units-annotate-with-units-py) Annotation with units [Annotate Explain](https://matplotlib.org/stable/gallery/userdemo/annotate_explain.html#sphx-glr-gallery-userdemo-annotate-explain-py) Annotate Explain [Annotate Simple01](https://matplotlib.org/stable/gallery/userdemo/annotate_simple01.html#sphx-glr-gallery-userdemo-annotate-simple01-py) Annotate Simple01 [Annotate Simple02](https://matplotlib.org/stable/gallery/userdemo/annotate_simple02.html#sphx-glr-gallery-userdemo-annotate-simple02-py) Annotate Simple02 [Annotate Simple03](https://matplotlib.org/stable/gallery/userdemo/annotate_simple03.html#sphx-glr-gallery-userdemo-annotate-simple03-py) Annotate Simple03 [Annotate Simple04](https://matplotlib.org/stable/gallery/userdemo/annotate_simple04.html#sphx-glr-gallery-userdemo-annotate-simple04-py) Annotate Simple04 [Annotate Simple Coord01](https://matplotlib.org/stable/gallery/userdemo/annotate_simple_coord01.html#sphx-glr-gallery-userdemo-annotate-simple-coord01-py) Annotate Simple Coord01 [Annotate Simple Coord02](https://matplotlib.org/stable/gallery/userdemo/annotate_simple_coord02.html#sphx-glr-gallery-userdemo-annotate-simple-coord02-py) Annotate Simple Coord02 [Annotate Simple Coord03](https://matplotlib.org/stable/gallery/userdemo/annotate_simple_coord03.html#sphx-glr-gallery-userdemo-annotate-simple-coord03-py) Annotate Simple Coord03 [Connection styles for annotations](https://matplotlib.org/stable/gallery/userdemo/connectionstyle_demo.html#sphx-glr-gallery-userdemo-connectionstyle-demo-py) Connection styles for annotations [Simple Annotate01](https://matplotlib.org/stable/gallery/userdemo/simple_annotate01.html#sphx-glr-gallery-userdemo-simple-annotate01-py) Simple Annotate01 [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Faster rendering by using blitting](https://matplotlib.org/stable/tutorials/advanced/blitting.html#sphx-glr-tutorials-advanced-blitting-py) Faster rendering by using blitting [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial [Text in Matplotlib Plots](https://matplotlib.org/stable/tutorials/text/text_intro.html#sphx-glr-tutorials-text-text-intro-py) Text in Matplotlib Plots [Annotations](https://matplotlib.org/stable/tutorials/text/annotations.html#sphx-glr-tutorials-text-annotations-py) Annotations matplotlib matplotlib.pyplot.get_cmap matplotlib.pyplot.get\_cmap =========================== matplotlib.pyplot.get\_cmap(*name=None*, *lut=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2080-L2081) Get a colormap instance, defaulting to rc values if *name* is None. 
Parameters: **name**[`matplotlib.colors.Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") or str or None, default: None If a [`Colormap`](matplotlib.colors.colormap#matplotlib.colors.Colormap "matplotlib.colors.Colormap") instance, it will be returned. Otherwise, the name of a colormap known to Matplotlib, which will be resampled by *lut*. The default, None, means `[rcParams["image.cmap"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=image.cmap#matplotlibrc-sample)` (default: `'viridis'`). **lut**int or None, default: None If *name* is not already a Colormap instance and *lut* is not None, the colormap will be resampled to have *lut* entries in the lookup table. Returns: Colormap Examples using `matplotlib.pyplot.get_cmap` ------------------------------------------- [pie(x)](https://matplotlib.org/stable/plot_types/stats/pie.html#sphx-glr-plot-types-stats-pie-py) pie(x) matplotlib matplotlib.colors.hsv_to_rgb matplotlib.colors.hsv\_to\_rgb ============================== matplotlib.colors.hsv\_to\_rgb(*hsv*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/colors.py#L2095-L2174) Convert hsv values to rgb. Parameters: **hsv**(..., 3) array-like All values assumed to be in range [0, 1] Returns: (..., 3) ndarray Colors converted to RGB values in range [0, 1] Examples using `matplotlib.colors.hsv_to_rgb` --------------------------------------------- [3D voxel / volumetric plot with cylindrical coordinates](https://matplotlib.org/stable/gallery/mplot3d/voxels_torus.html#sphx-glr-gallery-mplot3d-voxels-torus-py) 3D voxel / volumetric plot with cylindrical coordinates matplotlib matplotlib.axes.Axes.acorr matplotlib.axes.Axes.acorr ========================== Axes.acorr(*x*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L1900-L1970) Plot the autocorrelation of *x*. Parameters: **x**array-like **detrend**callable, default: [`mlab.detrend_none`](../mlab_api#matplotlib.mlab.detrend_none "matplotlib.mlab.detrend_none") (no detrending) A detrending function applied to *x*. It must have the signature ``` detrend(x: np.ndarray) -> np.ndarray ``` **normed**bool, default: True If `True`, input vectors are normalised to unit length. **usevlines**bool, default: True Determines the plot style. If `True`, vertical lines are plotted from 0 to the acorr value using [`Axes.vlines`](matplotlib.axes.axes.vlines#matplotlib.axes.Axes.vlines "matplotlib.axes.Axes.vlines"). Additionally, a horizontal line is plotted at y=0 using [`Axes.axhline`](matplotlib.axes.axes.axhline#matplotlib.axes.Axes.axhline "matplotlib.axes.Axes.axhline"). If `False`, markers are plotted at the acorr values using [`Axes.plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). **maxlags**int, default: 10 Number of lags to show. If `None`, will return all `2 * len(x) - 1` lags. Returns: **lags**array (length `2*maxlags+1`) The lag vector. **c**array (length `2*maxlags+1`) The auto correlation vector. 
**line**[`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") or [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist") added to the Axes of the correlation: * [`LineCollection`](../collections_api#matplotlib.collections.LineCollection "matplotlib.collections.LineCollection") if *usevlines* is True. * [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") if *usevlines* is False. **b**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") or None Horizontal line at 0 if *usevlines* is True; None if *usevlines* is False. Other Parameters: **linestyle**[`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") property, optional The linestyle for plotting the data points. Only used if *usevlines* is `False`. **marker**str, default: 'o' The marker for plotting the data points. Only used if *usevlines* is `False`. **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x* **\*\*kwargs** Additional parameters are passed to [`Axes.vlines`](matplotlib.axes.axes.vlines#matplotlib.axes.Axes.vlines "matplotlib.axes.Axes.vlines") and [`Axes.axhline`](matplotlib.axes.axes.axhline#matplotlib.axes.Axes.axhline "matplotlib.axes.Axes.axhline") if *usevlines* is `True`; otherwise they are passed to [`Axes.plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). #### Notes The cross correlation is performed with [`numpy.correlate`](https://numpy.org/doc/stable/reference/generated/numpy.correlate.html#numpy.correlate "(in NumPy v1.23)") with `mode = "full"`. Examples using `matplotlib.axes.Axes.acorr` ------------------------------------------- [Cross- and Auto-Correlation Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/xcorr_acorr_demo.html#sphx-glr-gallery-lines-bars-and-markers-xcorr-acorr-demo-py) Cross- and Auto-Correlation Demo
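A minimal usage sketch of the vertical-line style described above (the random signal and seed are illustrative, not from this page):

```
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)   # illustrative noise signal
x = rng.normal(size=200)

fig, ax = plt.subplots()
# With usevlines=True (the default), acorr returns
# (lags, c, LineCollection, baseline Line2D).
lags, c, linecol, baseline = ax.acorr(x, maxlags=20, normed=True)
ax.set_xlabel("lag")
plt.show()
```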
matplotlib matplotlib.pyplot.gcf matplotlib.pyplot.gcf ===================== matplotlib.pyplot.gcf()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L817-L830) Get the current figure. If there is currently no figure on the pyplot figure stack, a new one is created using [`figure()`](matplotlib.pyplot.figure#matplotlib.pyplot.figure "matplotlib.pyplot.figure"). (To test whether there is currently a figure on the pyplot figure stack, check whether [`get_fignums()`](matplotlib.pyplot.get_fignums#matplotlib.pyplot.get_fignums "matplotlib.pyplot.get_fignums") is empty.) matplotlib mpl_toolkits.axisartist.axes_grid mpl\_toolkits.axisartist.axes\_grid =================================== Classes ------- | | | | --- | --- | | [`AxesGrid`](mpl_toolkits.axisartist.axes_grid.axesgrid#mpl_toolkits.axisartist.axes_grid.AxesGrid "mpl_toolkits.axisartist.axes_grid.AxesGrid") | alias of [`ImageGrid`](mpl_toolkits.axisartist.axes_grid.imagegrid#mpl_toolkits.axisartist.axes_grid.ImageGrid "mpl_toolkits.axisartist.axes_grid.ImageGrid") | | [`CbarAxes`](mpl_toolkits.axisartist.axes_grid.cbaraxes#mpl_toolkits.axisartist.axes_grid.CbarAxes "mpl_toolkits.axisartist.axes_grid.CbarAxes")(\*args, orientation, \*\*kwargs) | [*Deprecated*] | | [`Grid`](mpl_toolkits.axisartist.axes_grid.grid#mpl_toolkits.axisartist.axes_grid.Grid "mpl_toolkits.axisartist.axes_grid.Grid")(fig, rect, nrows\_ncols[, ngrids, ...]) | Parameters: | | [`ImageGrid`](mpl_toolkits.axisartist.axes_grid.imagegrid#mpl_toolkits.axisartist.axes_grid.ImageGrid "mpl_toolkits.axisartist.axes_grid.ImageGrid")(fig, rect, nrows\_ncols[, ngrids, ...]) | Parameters: | matplotlib matplotlib.axis.Axis.convert_units matplotlib.axis.Axis.convert\_units =================================== Axis.convert\_units(*x*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1654-L1669) matplotlib matplotlib.axes.Axes.relim matplotlib.axes.Axes.relim ========================== Axes.relim(*visible\_only=False*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L2494-L2518) Recompute the data limits based on current artists. At present, [`Collection`](../collections_api#matplotlib.collections.Collection "matplotlib.collections.Collection") instances are not supported. Parameters: **visible\_only**bool, default: False Whether to exclude invisible artists. 
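A typical pattern, sketched below, is mutating an artist's data in place and then rescaling; the data values are illustrative:

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [0, 1, 4])

# Changing the artist's data does not update the stored data limits ...
line.set_data([0, 10, 20], [0, 100, 400])
ax.relim()           # ... so recompute them from the current artists
ax.autoscale_view()  # and apply them to the view limits
plt.show()
```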
Examples using `matplotlib.axes.Axes.relim` ------------------------------------------- [Packed-bubble chart](https://matplotlib.org/stable/gallery/misc/packed_bubbles.html#sphx-glr-gallery-misc-packed-bubbles-py) Packed-bubble chart [Textbox](https://matplotlib.org/stable/gallery/widgets/textbox.html#sphx-glr-gallery-widgets-textbox-py) Textbox matplotlib mpl_toolkits.axes_grid1.axes_grid.ImageGrid mpl\_toolkits.axes\_grid1.axes\_grid.ImageGrid ============================================== *class*mpl\_toolkits.axes\_grid1.axes\_grid.ImageGrid(*fig*, *rect*, *nrows\_ncols*, *ngrids=None*, *direction='row'*, *axes\_pad=0.02*, *\**, *share\_all=False*, *aspect=True*, *label\_mode='L'*, *cbar\_mode=None*, *cbar\_location='right'*, *cbar\_pad=None*, *cbar\_size='5%'*, *cbar\_set\_cax=True*, *axes\_class=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axes_grid1/axes_grid.py#L314-L573) Bases: [`Grid`](mpl_toolkits.axes_grid1.axes_grid.grid#mpl_toolkits.axes_grid1.axes_grid.Grid "mpl_toolkits.axes_grid1.axes_grid.Grid") Parameters: **fig**[`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") The parent figure. **rect**(float, float, float, float) or int The axes position, as a `(left, bottom, width, height)` tuple or as a three-digit subplot position code (e.g., "121"). **nrows\_ncols**(int, int) Number of rows and columns in the grid. **ngrids**int or None, default: None If not None, only the first *ngrids* axes in the grid are created. **direction**{"row", "column"}, default: "row" Whether axes are created in row-major ("row by row") or column-major order ("column by column"). This also affects the order in which axes are accessed using indexing (`grid[index]`). **axes\_pad**float or (float, float), default: 0.02in Padding or (horizontal padding, vertical padding) between axes, in inches. **share\_all**bool, default: False Whether all axes share their x- and y-axis. **aspect**bool, default: True Whether the axes aspect ratio follows the aspect ratio of the data limits. **label\_mode**{"L", "1", "all"}, default: "L" Determines which axes will get tick labels: * "L": All axes on the left column get vertical tick labels; all axes on the bottom row get horizontal tick labels. * "1": Only the bottom left axes is labelled. * "all": all axes are labelled. **cbar\_mode**{"each", "single", "edge", None}, default: None Whether to create a colorbar for "each" axes, a "single" colorbar for the entire grid, colorbars only for axes on the "edge" determined by *cbar\_location*, or no colorbars. The colorbars are stored in the `cbar_axes` attribute. **cbar\_location**{"left", "right", "bottom", "top"}, default: "right" **cbar\_pad**float, default: None Padding between the image axes and the colorbar axes. **cbar\_size**size specification (see `Size.from_any`), default: "5%" Colorbar size. **cbar\_set\_cax**bool, default: True If True, each axes in the grid has a *cax* attribute that is bound to associated *cbar\_axes*. **axes\_class**subclass of [`matplotlib.axes.Axes`](../axes_api#matplotlib.axes.Axes "matplotlib.axes.Axes"), default: None matplotlib matplotlib.axes.Axes.set_title matplotlib.axes.Axes.set\_title =============================== Axes.set\_title(*label*, *fontdict=None*, *loc=None*, *pad=None*, *\**, *y=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_axes.py#L93-L170) Set a title for the Axes. Set one of the three available Axes titles. 
The available titles are positioned above the Axes in the center, flush with the left edge, and flush with the right edge. Parameters: **label**str Text to use for the title. **fontdict**dict A dictionary controlling the appearance of the title text; the default *fontdict* is: ``` {'fontsize': rcParams['axes.titlesize'], 'fontweight': rcParams['axes.titleweight'], 'color': rcParams['axes.titlecolor'], 'verticalalignment': 'baseline', 'horizontalalignment': loc} ``` **loc**{'center', 'left', 'right'}, default: `[rcParams["axes.titlelocation"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.titlelocation#matplotlibrc-sample)` (default: `'center'`) Which title to set. **y**float, default: `[rcParams["axes.titley"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.titley#matplotlibrc-sample)` (default: `None`) Vertical Axes location for the title (1.0 is the top). If None (the default) and `[rcParams["axes.titley"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.titley#matplotlibrc-sample)` (default: `None`) is also None, y is determined automatically to avoid decorators on the Axes. **pad**float, default: `[rcParams["axes.titlepad"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=axes.titlepad#matplotlibrc-sample)` (default: `6.0`) The offset of the title from the top of the Axes, in points. Returns: [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") The matplotlib text instance representing the title. Other Parameters: **\*\*kwargs**[`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") properties Other keyword arguments are text properties; see [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text") for a list of valid text properties. 
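A short sketch setting two of the three available titles (the data and title strings are illustrative):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])

ax.set_title("Quadratic growth")                    # center title (default loc)
ax.set_title("panel (a)", loc="left", fontsize=10)  # independent left-edge title
plt.show()
```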
Examples using `matplotlib.axes.Axes.set_title` ----------------------------------------------- [Bar color demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_colors.html#sphx-glr-gallery-lines-bars-and-markers-bar-colors-py) Bar color demo [Bar Label Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_label_demo.html#sphx-glr-gallery-lines-bars-and-markers-bar-label-demo-py) Bar Label Demo [Stacked bar chart](https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_stacked.html#sphx-glr-gallery-lines-bars-and-markers-bar-stacked-py) Stacked bar chart [Grouped bar chart with labels](https://matplotlib.org/stable/gallery/lines_bars_and_markers/barchart.html#sphx-glr-gallery-lines-bars-and-markers-barchart-py) Grouped bar chart with labels [Horizontal bar chart](https://matplotlib.org/stable/gallery/lines_bars_and_markers/barh.html#sphx-glr-gallery-lines-bars-and-markers-barh-py) Horizontal bar chart [Errorbar subsampling](https://matplotlib.org/stable/gallery/lines_bars_and_markers/errorbar_subsample.html#sphx-glr-gallery-lines-bars-and-markers-errorbar-subsample-py) Errorbar subsampling [EventCollection Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/eventcollection_demo.html#sphx-glr-gallery-lines-bars-and-markers-eventcollection-demo-py) EventCollection Demo [Fill Between and Alpha](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_alpha.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-alpha-py) Fill Between and Alpha [Filling the area between lines](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_demo.html#sphx-glr-gallery-lines-bars-and-markers-fill-between-demo-py) Filling the area between lines [Fill Betweenx Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_betweenx_demo.html#sphx-glr-gallery-lines-bars-and-markers-fill-betweenx-demo-py) Fill Betweenx Demo [Hat graph](https://matplotlib.org/stable/gallery/lines_bars_and_markers/hat_graph.html#sphx-glr-gallery-lines-bars-and-markers-hat-graph-py) Hat graph [Markevery Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/markevery_demo.html#sphx-glr-gallery-lines-bars-and-markers-markevery-demo-py) Markevery Demo [Psd Demo](https://matplotlib.org/stable/gallery/lines_bars_and_markers/psd_demo.html#sphx-glr-gallery-lines-bars-and-markers-psd-demo-py) Psd Demo [Scatter Demo2](https://matplotlib.org/stable/gallery/lines_bars_and_markers/scatter_demo2.html#sphx-glr-gallery-lines-bars-and-markers-scatter-demo2-py) Scatter Demo2 [Using span\_where](https://matplotlib.org/stable/gallery/lines_bars_and_markers/span_regions.html#sphx-glr-gallery-lines-bars-and-markers-span-regions-py) Using span\_where [Stackplots and streamgraphs](https://matplotlib.org/stable/gallery/lines_bars_and_markers/stackplot_demo.html#sphx-glr-gallery-lines-bars-and-markers-stackplot-demo-py) Stackplots and streamgraphs [hlines and vlines](https://matplotlib.org/stable/gallery/lines_bars_and_markers/vline_hline_demo.html#sphx-glr-gallery-lines-bars-and-markers-vline-hline-demo-py) hlines and vlines [Interactive Adjustment of Colormap Range](https://matplotlib.org/stable/gallery/images_contours_and_fields/colormap_interactive_adjustment.html#sphx-glr-gallery-images-contours-and-fields-colormap-interactive-adjustment-py) Interactive Adjustment of Colormap Range [Contour Corner 
Mask](https://matplotlib.org/stable/gallery/images_contours_and_fields/contour_corner_mask.html#sphx-glr-gallery-images-contours-and-fields-contour-corner-mask-py) Contour Corner Mask [Contour Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/contour_demo.html#sphx-glr-gallery-images-contours-and-fields-contour-demo-py) Contour Demo [Contour Label Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/contour_label_demo.html#sphx-glr-gallery-images-contours-and-fields-contour-label-demo-py) Contour Label Demo [Contourf Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/contourf_demo.html#sphx-glr-gallery-images-contours-and-fields-contourf-demo-py) Contourf Demo [Creating annotated heatmaps](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_annotated_heatmap.html#sphx-glr-gallery-images-contours-and-fields-image-annotated-heatmap-py) Creating annotated heatmaps [Image antialiasing](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_antialiasing.html#sphx-glr-gallery-images-contours-and-fields-image-antialiasing-py) Image antialiasing [Image Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_demo.html#sphx-glr-gallery-images-contours-and-fields-image-demo-py) Image Demo [Image Masked](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_masked.html#sphx-glr-gallery-images-contours-and-fields-image-masked-py) Image Masked [Image Nonuniform](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_nonuniform.html#sphx-glr-gallery-images-contours-and-fields-image-nonuniform-py) Image Nonuniform [Interpolations for imshow](https://matplotlib.org/stable/gallery/images_contours_and_fields/interpolation_methods.html#sphx-glr-gallery-images-contours-and-fields-interpolation-methods-py) Interpolations for imshow [Contour plot of irregularly spaced data](https://matplotlib.org/stable/gallery/images_contours_and_fields/irregulardatagrid.html#sphx-glr-gallery-images-contours-and-fields-irregulardatagrid-py) Contour plot of irregularly spaced data [Pcolor Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolor_demo.html#sphx-glr-gallery-images-contours-and-fields-pcolor-demo-py) Pcolor Demo [pcolormesh grids and shading](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_grids.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-grids-py) pcolormesh grids and shading [pcolormesh](https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_levels.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-levels-py) pcolormesh [Advanced quiver and quiverkey functions](https://matplotlib.org/stable/gallery/images_contours_and_fields/quiver_demo.html#sphx-glr-gallery-images-contours-and-fields-quiver-demo-py) Advanced quiver and quiverkey functions [Tricontour Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_demo.html#sphx-glr-gallery-images-contours-and-fields-tricontour-demo-py) Tricontour Demo [Tricontour Smooth Delaunay](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_smooth_delaunay.html#sphx-glr-gallery-images-contours-and-fields-tricontour-smooth-delaunay-py) Tricontour Smooth Delaunay [Tricontour Smooth User](https://matplotlib.org/stable/gallery/images_contours_and_fields/tricontour_smooth_user.html#sphx-glr-gallery-images-contours-and-fields-tricontour-smooth-user-py) Tricontour Smooth User [Trigradient 
Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/trigradient_demo.html#sphx-glr-gallery-images-contours-and-fields-trigradient-demo-py) Trigradient Demo [Tripcolor Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/tripcolor_demo.html#sphx-glr-gallery-images-contours-and-fields-tripcolor-demo-py) Tripcolor Demo [Triplot Demo](https://matplotlib.org/stable/gallery/images_contours_and_fields/triplot_demo.html#sphx-glr-gallery-images-contours-and-fields-triplot-demo-py) Triplot Demo [Axes Demo](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axes-demo-py) Axes Demo [Controlling view limits using margins and sticky\_edges](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_margins.html#sphx-glr-gallery-subplots-axes-and-figures-axes-margins-py) Controlling view limits using margins and sticky\_edges [Resizing axes with constrained layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_constrained_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-constrained-layout-py) Resizing axes with constrained layout [Resizing axes with tight layout](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/demo_tight_layout.html#sphx-glr-gallery-subplots-axes-and-figures-demo-tight-layout-py) Resizing axes with tight layout [Figure labels: suptitle, supxlabel, supylabel](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/figure_title.html#sphx-glr-gallery-subplots-axes-and-figures-figure-title-py) Figure labels: suptitle, supxlabel, supylabel [Invert Axes](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/invert_axes.html#sphx-glr-gallery-subplots-axes-and-figures-invert-axes-py) Invert Axes [Secondary Axis](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/secondary_axis.html#sphx-glr-gallery-subplots-axes-and-figures-secondary-axis-py) Secondary Axis [Figure subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html#sphx-glr-gallery-subplots-axes-and-figures-subfigures-py) Figure subfigures [Creating multiple subplots using plt.subplots](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplots_demo.html#sphx-glr-gallery-subplots-axes-and-figures-subplots-demo-py) Creating multiple subplots using ``plt.subplots`` [Box plots with custom fill colors](https://matplotlib.org/stable/gallery/statistics/boxplot_color.html#sphx-glr-gallery-statistics-boxplot-color-py) Box plots with custom fill colors [Plot a confidence ellipse of a two-dimensional dataset](https://matplotlib.org/stable/gallery/statistics/confidence_ellipse.html#sphx-glr-gallery-statistics-confidence-ellipse-py) Plot a confidence ellipse of a two-dimensional dataset [Violin plot customization](https://matplotlib.org/stable/gallery/statistics/customized_violin.html#sphx-glr-gallery-statistics-customized-violin-py) Violin plot customization [Different ways of specifying error bars](https://matplotlib.org/stable/gallery/statistics/errorbar_features.html#sphx-glr-gallery-statistics-errorbar-features-py) Different ways of specifying error bars [Including upper and lower limits in error bars](https://matplotlib.org/stable/gallery/statistics/errorbar_limits.html#sphx-glr-gallery-statistics-errorbar-limits-py) Including upper and lower limits in error bars [Hexagonal binned plot](https://matplotlib.org/stable/gallery/statistics/hexbin_demo.html#sphx-glr-gallery-statistics-hexbin-demo-py) 
Hexagonal binned plot [Using histograms to plot a cumulative distribution](https://matplotlib.org/stable/gallery/statistics/histogram_cumulative.html#sphx-glr-gallery-statistics-histogram-cumulative-py) Using histograms to plot a cumulative distribution [Some features of the histogram (hist) function](https://matplotlib.org/stable/gallery/statistics/histogram_features.html#sphx-glr-gallery-statistics-histogram-features-py) Some features of the histogram (hist) function [The histogram (hist) function with multiple data sets](https://matplotlib.org/stable/gallery/statistics/histogram_multihist.html#sphx-glr-gallery-statistics-histogram-multihist-py) The histogram (hist) function with multiple data sets [Bar of pie](https://matplotlib.org/stable/gallery/pie_and_polar_charts/bar_of_pie.html#sphx-glr-gallery-pie-and-polar-charts-bar-of-pie-py) Bar of pie [Labeling a pie and a donut](https://matplotlib.org/stable/gallery/pie_and_polar_charts/pie_and_donut_labels.html#sphx-glr-gallery-pie-and-polar-charts-pie-and-donut-labels-py) Labeling a pie and a donut [Polar plot](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_demo.html#sphx-glr-gallery-pie-and-polar-charts-polar-demo-py) Polar plot [Using accented text in Matplotlib](https://matplotlib.org/stable/gallery/text_labels_and_annotations/accented_text.html#sphx-glr-gallery-text-labels-and-annotations-accented-text-py) Using accented text in Matplotlib [Scale invariant angle label](https://matplotlib.org/stable/gallery/text_labels_and_annotations/angle_annotation.html#sphx-glr-gallery-text-labels-and-annotations-angle-annotation-py) Scale invariant angle label [Date tick labels](https://matplotlib.org/stable/gallery/text_labels_and_annotations/date.html#sphx-glr-gallery-text-labels-and-annotations-date-py) Date tick labels [Labeling ticks using engineering notation](https://matplotlib.org/stable/gallery/text_labels_and_annotations/engineering_formatter.html#sphx-glr-gallery-text-labels-and-annotations-engineering-formatter-py) Labeling ticks using engineering notation [Using a ttf font file in Matplotlib](https://matplotlib.org/stable/gallery/text_labels_and_annotations/font_file.html#sphx-glr-gallery-text-labels-and-annotations-font-file-py) Using a ttf font file in Matplotlib [Labelling subplots](https://matplotlib.org/stable/gallery/text_labels_and_annotations/label_subplots.html#sphx-glr-gallery-text-labels-and-annotations-label-subplots-py) Labelling subplots [Legend Demo](https://matplotlib.org/stable/gallery/text_labels_and_annotations/legend_demo.html#sphx-glr-gallery-text-labels-and-annotations-legend-demo-py) Legend Demo [Mathtext](https://matplotlib.org/stable/gallery/text_labels_and_annotations/mathtext_demo.html#sphx-glr-gallery-text-labels-and-annotations-mathtext-demo-py) Mathtext [Math fontfamily](https://matplotlib.org/stable/gallery/text_labels_and_annotations/mathtext_fontfamily_example.html#sphx-glr-gallery-text-labels-and-annotations-mathtext-fontfamily-example-py) Math fontfamily [Multiline](https://matplotlib.org/stable/gallery/text_labels_and_annotations/multiline.html#sphx-glr-gallery-text-labels-and-annotations-multiline-py) Multiline [Rendering math equations using TeX](https://matplotlib.org/stable/gallery/text_labels_and_annotations/tex_demo.html#sphx-glr-gallery-text-labels-and-annotations-tex-demo-py) Rendering math equations using TeX [Title positioning](https://matplotlib.org/stable/gallery/text_labels_and_annotations/titles_demo.html#sphx-glr-gallery-text-labels-and-annotations-titles-demo-py) 
Title positioning [Boxplot Demo](https://matplotlib.org/stable/gallery/pyplots/boxplot_demo_pyplot.html#sphx-glr-gallery-pyplots-boxplot-demo-pyplot-py) Boxplot Demo [Simple axes labels](https://matplotlib.org/stable/gallery/pyplots/fig_axes_labels_simple.html#sphx-glr-gallery-pyplots-fig-axes-labels-simple-py) Simple axes labels [Text Commands](https://matplotlib.org/stable/gallery/pyplots/text_commands.html#sphx-glr-gallery-pyplots-text-commands-py) Text Commands [Color Demo](https://matplotlib.org/stable/gallery/color/color_demo.html#sphx-glr-gallery-color-color-demo-py) Color Demo [Creating a colormap from a list of colors](https://matplotlib.org/stable/gallery/color/custom_cmap.html#sphx-glr-gallery-color-custom-cmap-py) Creating a colormap from a list of colors [Line, Poly and RegularPoly Collection with autoscaling](https://matplotlib.org/stable/gallery/shapes_and_collections/collections.html#sphx-glr-gallery-shapes-and-collections-collections-py) Line, Poly and RegularPoly Collection with autoscaling [Compound path](https://matplotlib.org/stable/gallery/shapes_and_collections/compound_path.html#sphx-glr-gallery-shapes-and-collections-compound-path-py) Compound path [Mmh Donuts!!!](https://matplotlib.org/stable/gallery/shapes_and_collections/donut.html#sphx-glr-gallery-shapes-and-collections-donut-py) Mmh Donuts!!! [Line Collection](https://matplotlib.org/stable/gallery/shapes_and_collections/line_collection.html#sphx-glr-gallery-shapes-and-collections-line-collection-py) Line Collection [Bezier Curve](https://matplotlib.org/stable/gallery/shapes_and_collections/quad_bezier.html#sphx-glr-gallery-shapes-and-collections-quad-bezier-py) Bezier Curve [Bayesian Methods for Hackers style sheet](https://matplotlib.org/stable/gallery/style_sheets/bmh.html#sphx-glr-gallery-style-sheets-bmh-py) Bayesian Methods for Hackers style sheet [Dark background style sheet](https://matplotlib.org/stable/gallery/style_sheets/dark_background.html#sphx-glr-gallery-style-sheets-dark-background-py) Dark background style sheet [FiveThirtyEight style sheet](https://matplotlib.org/stable/gallery/style_sheets/fivethirtyeight.html#sphx-glr-gallery-style-sheets-fivethirtyeight-py) FiveThirtyEight style sheet [Make room for ylabel using axes\_grid](https://matplotlib.org/stable/gallery/axes_grid1/make_room_for_ylabel_using_axesgrid.html#sphx-glr-gallery-axes-grid1-make-room-for-ylabel-using-axesgrid-py) Make room for ylabel using axes\_grid [Axis Direction](https://matplotlib.org/stable/gallery/axisartist/axis_direction.html#sphx-glr-gallery-axisartist-axis-direction-py) Axis Direction [Anatomy of a figure](https://matplotlib.org/stable/gallery/showcase/anatomy.html#sphx-glr-gallery-showcase-anatomy-py) Anatomy of a figure [XKCD](https://matplotlib.org/stable/gallery/showcase/xkcd.html#sphx-glr-gallery-showcase-xkcd-py) XKCD [pyplot animation](https://matplotlib.org/stable/gallery/animation/animation_demo.html#sphx-glr-gallery-animation-animation-demo-py) pyplot animation [Cross hair cursor](https://matplotlib.org/stable/gallery/event_handling/cursor_demo.html#sphx-glr-gallery-event-handling-cursor-demo-py) Cross hair cursor [Data Browser](https://matplotlib.org/stable/gallery/event_handling/data_browser.html#sphx-glr-gallery-event-handling-data-browser-py) Data Browser [Image Slices Viewer](https://matplotlib.org/stable/gallery/event_handling/image_slices_viewer.html#sphx-glr-gallery-event-handling-image-slices-viewer-py) Image Slices Viewer [Keypress 
event](https://matplotlib.org/stable/gallery/event_handling/keypress_demo.html#sphx-glr-gallery-event-handling-keypress-demo-py) Keypress event [Lasso Demo](https://matplotlib.org/stable/gallery/event_handling/lasso_demo.html#sphx-glr-gallery-event-handling-lasso-demo-py) Lasso Demo [Legend Picking](https://matplotlib.org/stable/gallery/event_handling/legend_picking.html#sphx-glr-gallery-event-handling-legend-picking-py) Legend Picking [Looking Glass](https://matplotlib.org/stable/gallery/event_handling/looking_glass.html#sphx-glr-gallery-event-handling-looking-glass-py) Looking Glass [Path Editor](https://matplotlib.org/stable/gallery/event_handling/path_editor.html#sphx-glr-gallery-event-handling-path-editor-py) Path Editor [Pick Event Demo](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo.html#sphx-glr-gallery-event-handling-pick-event-demo-py) Pick Event Demo [Pick Event Demo2](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo2.html#sphx-glr-gallery-event-handling-pick-event-demo2-py) Pick Event Demo2 [Poly Editor](https://matplotlib.org/stable/gallery/event_handling/poly_editor.html#sphx-glr-gallery-event-handling-poly-editor-py) Poly Editor [Trifinder Event Demo](https://matplotlib.org/stable/gallery/event_handling/trifinder_event_demo.html#sphx-glr-gallery-event-handling-trifinder-event-demo-py) Trifinder Event Demo [Viewlims](https://matplotlib.org/stable/gallery/event_handling/viewlims.html#sphx-glr-gallery-event-handling-viewlims-py) Viewlims [Packed-bubble chart](https://matplotlib.org/stable/gallery/misc/packed_bubbles.html#sphx-glr-gallery-misc-packed-bubbles-py) Packed-bubble chart [Pythonic Matplotlib](https://matplotlib.org/stable/gallery/misc/pythonic_matplotlib.html#sphx-glr-gallery-misc-pythonic-matplotlib-py) Pythonic Matplotlib [Rasterization for vector graphics](https://matplotlib.org/stable/gallery/misc/rasterization_demo.html#sphx-glr-gallery-misc-rasterization-demo-py) Rasterization for vector graphics [Zorder Demo](https://matplotlib.org/stable/gallery/misc/zorder_demo.html#sphx-glr-gallery-misc-zorder-demo-py) Zorder Demo [Demo of 3D bar charts](https://matplotlib.org/stable/gallery/mplot3d/3d_bars.html#sphx-glr-gallery-mplot3d-3d-bars-py) Demo of 3D bar charts [Lorenz Attractor](https://matplotlib.org/stable/gallery/mplot3d/lorenz_attractor.html#sphx-glr-gallery-mplot3d-lorenz-attractor-py) Lorenz Attractor [3D wireframe plots in one direction](https://matplotlib.org/stable/gallery/mplot3d/wire3d_zero_stride.html#sphx-glr-gallery-mplot3d-wire3d-zero-stride-py) 3D wireframe plots in one direction [Asinh Demo](https://matplotlib.org/stable/gallery/scales/asinh_demo.html#sphx-glr-gallery-scales-asinh-demo-py) Asinh Demo [Loglog Aspect](https://matplotlib.org/stable/gallery/scales/aspect_loglog.html#sphx-glr-gallery-scales-aspect-loglog-py) Loglog Aspect [Exploring normalizations](https://matplotlib.org/stable/gallery/scales/power_norm.html#sphx-glr-gallery-scales-power-norm-py) Exploring normalizations [Scales](https://matplotlib.org/stable/gallery/scales/scales.html#sphx-glr-gallery-scales-scales-py) Scales [Radar chart (aka spider or star chart)](https://matplotlib.org/stable/gallery/specialty_plots/radar_chart.html#sphx-glr-gallery-specialty-plots-radar-chart-py) Radar chart (aka spider or star chart) [Topographic hillshading](https://matplotlib.org/stable/gallery/specialty_plots/topographic_hillshading.html#sphx-glr-gallery-specialty-plots-topographic-hillshading-py) Topographic hillshading [Spine 
Placement](https://matplotlib.org/stable/gallery/spines/spine_placement_demo.html#sphx-glr-gallery-spines-spine-placement-demo-py) Spine Placement [Spines](https://matplotlib.org/stable/gallery/spines/spines.html#sphx-glr-gallery-spines-spines-py) Spines [Dropped spines](https://matplotlib.org/stable/gallery/spines/spines_dropped.html#sphx-glr-gallery-spines-spines-dropped-py) Dropped spines [Colorbar Tick Labelling](https://matplotlib.org/stable/gallery/ticks/colorbar_tick_labelling_demo.html#sphx-glr-gallery-ticks-colorbar-tick-labelling-demo-py) Colorbar Tick Labelling [Custom tick formatter for time series](https://matplotlib.org/stable/gallery/ticks/date_index_formatter.html#sphx-glr-gallery-ticks-date-index-formatter-py) Custom tick formatter for time series [Date Precision and Epochs](https://matplotlib.org/stable/gallery/ticks/date_precision_and_epochs.html#sphx-glr-gallery-ticks-date-precision-and-epochs-py) Date Precision and Epochs [Move x-axis tick labels to the top](https://matplotlib.org/stable/gallery/ticks/tick_xlabel_top.html#sphx-glr-gallery-ticks-tick-xlabel-top-py) Move x-axis tick labels to the top [Artist tests](https://matplotlib.org/stable/gallery/units/artist_tests.html#sphx-glr-gallery-units-artist-tests-py) Artist tests [Group barchart with units](https://matplotlib.org/stable/gallery/units/bar_unit_demo.html#sphx-glr-gallery-units-bar-unit-demo-py) Group barchart with units [Evans test](https://matplotlib.org/stable/gallery/units/evans_test.html#sphx-glr-gallery-units-evans-test-py) Evans test [Annotated Cursor](https://matplotlib.org/stable/gallery/widgets/annotated_cursor.html#sphx-glr-gallery-widgets-annotated-cursor-py) Annotated Cursor [Rectangle and ellipse selectors](https://matplotlib.org/stable/gallery/widgets/rectangle_selector.html#sphx-glr-gallery-widgets-rectangle-selector-py) Rectangle and ellipse selectors [Span Selector](https://matplotlib.org/stable/gallery/widgets/span_selector.html#sphx-glr-gallery-widgets-span-selector-py) Span Selector [Image tutorial](https://matplotlib.org/stable/tutorials/introductory/images.html#sphx-glr-tutorials-introductory-images-py) Image tutorial [Quick start guide](https://matplotlib.org/stable/tutorials/introductory/quick_start.html#sphx-glr-tutorials-introductory-quick-start-py) Quick start guide [Artist tutorial](https://matplotlib.org/stable/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py) Artist tutorial [Styling with cycler](https://matplotlib.org/stable/tutorials/intermediate/color_cycle.html#sphx-glr-tutorials-intermediate-color-cycle-py) Styling with cycler [Constrained Layout Guide](https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html#sphx-glr-tutorials-intermediate-constrainedlayout-guide-py) Constrained Layout Guide [Tight Layout guide](https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py) Tight Layout guide [Transformations Tutorial](https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html#sphx-glr-tutorials-advanced-transforms-tutorial-py) Transformations Tutorial [Specifying Colors](https://matplotlib.org/stable/tutorials/colors/colors.html#sphx-glr-tutorials-colors-colors-py) Specifying Colors [Colormap Normalization](https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#sphx-glr-tutorials-colors-colormapnorms-py) Colormap Normalization [Text in Matplotlib 
Plots](https://matplotlib.org/stable/tutorials/text/text_intro.html#sphx-glr-tutorials-text-text-intro-py) Text in Matplotlib Plots
matplotlib matplotlib.artist.Artist.set_url matplotlib.artist.Artist.set\_url ================================= Artist.set\_url(*url*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L578-L586) Set the URL for the artist. Parameters: **url**str matplotlib matplotlib.pyplot.savefig matplotlib.pyplot.savefig ========================= matplotlib.pyplot.savefig(*\*args*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L939-L944) Save the current figure. Call signature:

```
savefig(fname, *, dpi='figure', format=None, metadata=None,
        bbox_inches=None, pad_inches=0.1,
        facecolor='auto', edgecolor='auto',
        backend=None, **kwargs)
```

The available output formats depend on the backend being used. Parameters: **fname**str or path-like or binary file-like A path, or a Python file-like object, or possibly some backend-dependent object such as [`matplotlib.backends.backend_pdf.PdfPages`](../backend_pdf_api#matplotlib.backends.backend_pdf.PdfPages "matplotlib.backends.backend_pdf.PdfPages"). If *format* is set, it determines the output format, and the file is saved as *fname*. Note that *fname* is used verbatim, and there is no attempt to make the extension, if any, of *fname* match *format*, and no extension is appended. If *format* is not set, then the format is inferred from the extension of *fname*, if there is one. If *format* is not set and *fname* has no extension, then the file is saved with `[rcParams["savefig.format"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=savefig.format#matplotlibrc-sample)` (default: `'png'`) and the appropriate extension is appended to *fname*. Other Parameters: **dpi**float or 'figure', default: `[rcParams["savefig.dpi"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=savefig.dpi#matplotlibrc-sample)` (default: `'figure'`) The resolution in dots per inch. If 'figure', use the figure's dpi value. **format**str The file format, e.g. 'png', 'pdf', 'svg', ... The behavior when this is unset is documented under *fname*. **metadata**dict, optional Key/value pairs to store in the image metadata. The supported keys and defaults depend on the image format and backend: * 'png' with Agg backend: See the parameter `metadata` of [`print_png`](../backend_agg_api#matplotlib.backends.backend_agg.FigureCanvasAgg.print_png "matplotlib.backends.backend_agg.FigureCanvasAgg.print_png"). * 'pdf' with pdf backend: See the parameter `metadata` of [`PdfPages`](../backend_pdf_api#matplotlib.backends.backend_pdf.PdfPages "matplotlib.backends.backend_pdf.PdfPages"). * 'svg' with svg backend: See the parameter `metadata` of [`print_svg`](../backend_svg_api#matplotlib.backends.backend_svg.FigureCanvasSVG.print_svg "matplotlib.backends.backend_svg.FigureCanvasSVG.print_svg"). * 'eps' and 'ps' with PS backend: Only 'Creator' is supported. **bbox\_inches**str or [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox"), default: `[rcParams["savefig.bbox"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=savefig.bbox#matplotlibrc-sample)` (default: `None`) Bounding box in inches: only the given portion of the figure is saved. If 'tight', try to figure out the tight bbox of the figure. 
**pad\_inches**float, default: `[rcParams["savefig.pad\_inches"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=savefig.pad_inches#matplotlibrc-sample)` (default: `0.1`) Amount of padding around the figure when bbox\_inches is 'tight'. **facecolor**color or 'auto', default: `[rcParams["savefig.facecolor"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=savefig.facecolor#matplotlibrc-sample)` (default: `'auto'`) The facecolor of the figure. If 'auto', use the current figure facecolor. **edgecolor**color or 'auto', default: `[rcParams["savefig.edgecolor"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=savefig.edgecolor#matplotlibrc-sample)` (default: `'auto'`) The edgecolor of the figure. If 'auto', use the current figure edgecolor. **backend**str, optional Use a non-default backend to render the file, e.g. to render a png file with the "cairo" backend rather than the default "agg", or a pdf file with the "pgf" backend rather than the default "pdf". Note that the default backend is normally sufficient. See [The builtin backends](https://matplotlib.org/stable/users/explain/backends.html#the-builtin-backends) for a list of valid backends for each file format. Custom backends can be referenced as "module://...". **orientation**{'landscape', 'portrait'} Currently only supported by the postscript backend. **papertype**str One of 'letter', 'legal', 'executive', 'ledger', 'a0' through 'a10', 'b0' through 'b10'. Only supported for postscript output. **transparent**bool If *True*, the Axes patches will all be transparent; the Figure patch will also be transparent unless *facecolor* and/or *edgecolor* are specified via kwargs. If *False*, the colors of the Axes and Figure patches are left unchanged (unless the Figure patch color is specified via the *facecolor* and/or *edgecolor* keyword arguments, in which case those colors are used). The transparency of these patches will be restored to their original values upon exit of this function. This is useful, for example, for displaying a plot on top of a colored background on a web page. **bbox\_extra\_artists**list of [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"), optional A list of extra artists that will be considered when the tight bbox is calculated. **pil\_kwargs**dict, optional Additional keyword arguments that are passed to [`PIL.Image.Image.save`](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.save "(in Pillow (PIL Fork) v9.2.0)") when saving the figure. 
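A brief sketch of the two *fname*/*format* behaviors described above (the file names are illustrative):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])

# Format inferred from the extension; 'tight' shrinks the saved bounding box.
plt.savefig("example.png", dpi=200, bbox_inches="tight")

# Explicit format: fname is used verbatim and no extension is appended.
plt.savefig("example_vector", format="svg", transparent=True)
```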
Examples using `matplotlib.pyplot.savefig` ------------------------------------------ [Print Stdout](https://matplotlib.org/stable/gallery/misc/print_stdout_sgskip.html#sphx-glr-gallery-misc-print-stdout-sgskip-py) Print Stdout [Rasterization for vector graphics](https://matplotlib.org/stable/gallery/misc/rasterization_demo.html#sphx-glr-gallery-misc-rasterization-demo-py) Rasterization for vector graphics [SVG Filter Line](https://matplotlib.org/stable/gallery/misc/svg_filter_line.html#sphx-glr-gallery-misc-svg-filter-line-py) SVG Filter Line [SVG Filter Pie](https://matplotlib.org/stable/gallery/misc/svg_filter_pie.html#sphx-glr-gallery-misc-svg-filter-pie-py) SVG Filter Pie [SVG Histogram](https://matplotlib.org/stable/gallery/user_interfaces/svg_histogram_sgskip.html#sphx-glr-gallery-user-interfaces-svg-histogram-sgskip-py) SVG Histogram [SVG Tooltip](https://matplotlib.org/stable/gallery/user_interfaces/svg_tooltip_sgskip.html#sphx-glr-gallery-user-interfaces-svg-tooltip-sgskip-py) SVG Tooltip matplotlib matplotlib.artist.Artist.add_callback matplotlib.artist.Artist.add\_callback ====================================== Artist.add\_callback(*func*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/artist.py#L348-L375) Add a callback function that will be called whenever one of the [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist")'s properties changes. Parameters: **func**callable The callback function. It must have the signature: ``` def func(artist: Artist) -> Any ``` where *artist* is the calling [`Artist`](../artist_api#matplotlib.artist.Artist "matplotlib.artist.Artist"). Return values may exist but are ignored. Returns: int The observer id associated with the callback. This id can be used for removing the callback with [`remove_callback`](matplotlib.artist.artist.remove_callback#matplotlib.artist.Artist.remove_callback "matplotlib.artist.Artist.remove_callback") later. See also [`remove_callback`](matplotlib.artist.artist.remove_callback#matplotlib.artist.Artist.remove_callback "matplotlib.artist.Artist.remove_callback") matplotlib mpl_toolkits.axisartist.axislines.GridHelperRectlinear mpl\_toolkits.axisartist.axislines.GridHelperRectlinear ======================================================= *class*mpl\_toolkits.axisartist.axislines.GridHelperRectlinear(*axes*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L359-L437) Bases: [`GridHelperBase`](mpl_toolkits.axisartist.axislines.gridhelperbase#mpl_toolkits.axisartist.axislines.GridHelperBase "mpl_toolkits.axisartist.axislines.GridHelperBase") get\_gridlines(*which='major'*, *axis='both'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L406-L437) Return list of gridline coordinates in data coordinates. 
*which* : "major" or "minor" *axis* : "both", "x" or "y" new\_fixed\_axis(*loc*, *nth\_coord=None*, *axis\_direction=None*, *offset=None*, *axes=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L365-L385) new\_floating\_axis(*nth\_coord*, *value*, *axis\_direction='bottom'*, *axes=None*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/axisartist/axislines.py#L387-L404) matplotlib mpl_toolkits.mplot3d.art3d.text_2d_to_3d mpl\_toolkits.mplot3d.art3d.text\_2d\_to\_3d ============================================ mpl\_toolkits.mplot3d.art3d.text\_2d\_to\_3d(*obj*, *z=0*, *zdir='z'*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/mpl_toolkits/mplot3d/art3d.py#L153-L156) Convert a Text to a Text3D object. matplotlib mpl_toolkits.axisartist.axes_rgb mpl\_toolkits.axisartist.axes\_rgb ================================== Classes ------- | | | | --- | --- | | [`RGBAxes`](mpl_toolkits.axisartist.axes_rgb.rgbaxes#mpl_toolkits.axisartist.axes_rgb.RGBAxes "mpl_toolkits.axisartist.axes_rgb.RGBAxes")(\*args[, pad]) | Parameters: | matplotlib matplotlib.axes.Axes.get_ygridlines matplotlib.axes.Axes.get\_ygridlines ==================================== Axes.get\_ygridlines()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axes/_base.py#L72-L73) Return the yaxis' grid lines as a list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D")s. matplotlib matplotlib.axis.Tick.get_loc matplotlib.axis.Tick.get\_loc ============================= Tick.get\_loc()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L294-L296) Return the tick location (data coords) as a scalar. matplotlib matplotlib.axis.Axis.set_ticklabels matplotlib.axis.Axis.set\_ticklabels ==================================== Axis.set\_ticklabels(*ticklabels*, *\**, *minor=False*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1847-L1927) [*Discouraged*] Set the text values of the tick labels. Discouraged The use of this method is discouraged, because of the dependency on tick positions. In most cases, you'll want to use `set_[x/y]ticks(positions, labels)` instead. If you are using this method, you should always fix the tick positions before, e.g. by using [`Axis.set_ticks`](matplotlib.axis.axis.set_ticks#matplotlib.axis.Axis.set_ticks "matplotlib.axis.Axis.set_ticks") or by explicitly setting a [`FixedLocator`](../ticker_api#matplotlib.ticker.FixedLocator "matplotlib.ticker.FixedLocator"). Otherwise, ticks are free to move and the labels may end up in unexpected positions. Parameters: **ticklabels**sequence of str or of [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text")s Texts for labeling each tick location in the sequence set by [`Axis.set_ticks`](matplotlib.axis.axis.set_ticks#matplotlib.axis.Axis.set_ticks "matplotlib.axis.Axis.set_ticks"); the number of labels must match the number of locations. **minor**bool If True, set minor ticks instead of major ticks. **\*\*kwargs** Text properties. Returns: list of [`Text`](../text_api#matplotlib.text.Text "matplotlib.text.Text")s For each tick, includes `tick.label1` if it is visible, then `tick.label2` if it is visible, in that order. 
matplotlib matplotlib.axis.Axis.get_data_interval matplotlib.axis.Axis.get\_data\_interval ======================================== Axis.get\_data\_interval()[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/axis.py#L1035-L1037) Return the `(min, max)` data limits of this axis. matplotlib matplotlib.pyplot.plot_date matplotlib.pyplot.plot\_date ============================ matplotlib.pyplot.plot\_date(*x*, *y*, *fmt='o'*, *tz=None*, *xdate=True*, *ydate=False*, *\**, *data=None*, *\*\*kwargs*)[[source]](https://github.com/matplotlib/matplotlib/blob/v3.6.0/lib/matplotlib/pyplot.py#L2734-L2740) [*Discouraged*] Plot coercing the axis to treat floats as dates. Discouraged This method exists for historic reasons and will be deprecated in the future. * `datetime`-like data should directly be plotted using [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). * If you need to plot plain numeric data as [Matplotlib date format](../dates_api#date-format) or need to set a timezone, call `ax.xaxis.axis_date` / `ax.yaxis.axis_date` before [`plot`](matplotlib.axes.axes.plot#matplotlib.axes.Axes.plot "matplotlib.axes.Axes.plot"). See [`Axis.axis_date`](matplotlib.axis.axis.axis_date#matplotlib.axis.Axis.axis_date "matplotlib.axis.Axis.axis_date"). Similar to [`plot`](matplotlib.pyplot.plot#matplotlib.pyplot.plot "matplotlib.pyplot.plot"), this plots *y* vs. *x* as lines or markers. However, the axis labels are formatted as dates depending on *xdate* and *ydate*. Note that [`plot`](matplotlib.pyplot.plot#matplotlib.pyplot.plot "matplotlib.pyplot.plot") will work with [`datetime`](https://docs.python.org/3/library/datetime.html#module-datetime "(in Python v3.10)") and [`numpy.datetime64`](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.datetime64 "(in NumPy v1.23)") objects without resorting to this method. Parameters: **x, y**array-like The coordinates of the data points. If *xdate* or *ydate* is *True*, the respective values *x* or *y* are interpreted as [Matplotlib dates](../dates_api#date-format). **fmt**str, optional The plot format string. For details, see the corresponding parameter in [`plot`](matplotlib.pyplot.plot#matplotlib.pyplot.plot "matplotlib.pyplot.plot"). **tz**timezone string or [`datetime.tzinfo`](https://docs.python.org/3/library/datetime.html#datetime.tzinfo "(in Python v3.10)"), default: `[rcParams["timezone"]](https://matplotlib.org/stable/tutorials/introductory/customizing.html?highlight=timezone#matplotlibrc-sample)` (default: `'UTC'`) The time zone to use in labeling dates. **xdate**bool, default: True If *True*, the *x*-axis will be interpreted as Matplotlib dates. **ydate**bool, default: False If *True*, the *y*-axis will be interpreted as Matplotlib dates. Returns: list of [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") Objects representing the plotted data. 
Other Parameters: **data**indexable object, optional If given, the following parameters also accept a string `s`, which is interpreted as `data[s]` (unless this raises an exception): *x*, *y* **\*\*kwargs** Keyword arguments control the [`Line2D`](matplotlib.lines.line2d#matplotlib.lines.Line2D "matplotlib.lines.Line2D") properties: | Property | Description | | --- | --- | | [`agg_filter`](matplotlib.artist.artist.set_agg_filter#matplotlib.artist.Artist.set_agg_filter "matplotlib.artist.Artist.set_agg_filter") | a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image | | [`alpha`](matplotlib.artist.artist.set_alpha#matplotlib.artist.Artist.set_alpha "matplotlib.artist.Artist.set_alpha") | scalar or None | | [`animated`](matplotlib.artist.artist.set_animated#matplotlib.artist.Artist.set_animated "matplotlib.artist.Artist.set_animated") | bool | | [`antialiased`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_antialiased "matplotlib.lines.Line2D.set_antialiased") or aa | bool | | [`clip_box`](matplotlib.artist.artist.set_clip_box#matplotlib.artist.Artist.set_clip_box "matplotlib.artist.Artist.set_clip_box") | [`Bbox`](../transformations#matplotlib.transforms.Bbox "matplotlib.transforms.Bbox") | | [`clip_on`](matplotlib.artist.artist.set_clip_on#matplotlib.artist.Artist.set_clip_on "matplotlib.artist.Artist.set_clip_on") | bool | | [`clip_path`](matplotlib.artist.artist.set_clip_path#matplotlib.artist.Artist.set_clip_path "matplotlib.artist.Artist.set_clip_path") | Patch or (Path, Transform) or None | | [`color`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_color "matplotlib.lines.Line2D.set_color") or c | color | | [`dash_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_capstyle "matplotlib.lines.Line2D.set_dash_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`dash_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dash_joinstyle "matplotlib.lines.Line2D.set_dash_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | [`dashes`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_dashes "matplotlib.lines.Line2D.set_dashes") | sequence of floats (on/off ink in points) or (None, None) | | [`data`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_data "matplotlib.lines.Line2D.set_data") | (2, N) array or two 1D arrays | | [`drawstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_drawstyle "matplotlib.lines.Line2D.set_drawstyle") or ds | {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' | | [`figure`](matplotlib.artist.artist.set_figure#matplotlib.artist.Artist.set_figure "matplotlib.artist.Artist.set_figure") | [`Figure`](../figure_api#matplotlib.figure.Figure "matplotlib.figure.Figure") | | [`fillstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_fillstyle "matplotlib.lines.Line2D.set_fillstyle") | {'full', 'left', 'right', 'bottom', 'top', 'none'} | | [`gapcolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_gapcolor "matplotlib.lines.Line2D.set_gapcolor") | color or None | | [`gid`](matplotlib.artist.artist.set_gid#matplotlib.artist.Artist.set_gid "matplotlib.artist.Artist.set_gid") | str | | [`in_layout`](matplotlib.artist.artist.set_in_layout#matplotlib.artist.Artist.set_in_layout "matplotlib.artist.Artist.set_in_layout") 
| bool | | [`label`](matplotlib.artist.artist.set_label#matplotlib.artist.Artist.set_label "matplotlib.artist.Artist.set_label") | object | | [`linestyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linestyle "matplotlib.lines.Line2D.set_linestyle") or ls | {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} | | [`linewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_linewidth "matplotlib.lines.Line2D.set_linewidth") or lw | float | | [`marker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_marker "matplotlib.lines.Line2D.set_marker") | marker style string, [`Path`](../path_api#matplotlib.path.Path "matplotlib.path.Path") or [`MarkerStyle`](matplotlib.markers.markerstyle#matplotlib.markers.MarkerStyle "matplotlib.markers.MarkerStyle") | | [`markeredgecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgecolor "matplotlib.lines.Line2D.set_markeredgecolor") or mec | color | | [`markeredgewidth`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markeredgewidth "matplotlib.lines.Line2D.set_markeredgewidth") or mew | float | | [`markerfacecolor`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecolor "matplotlib.lines.Line2D.set_markerfacecolor") or mfc | color | | [`markerfacecoloralt`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markerfacecoloralt "matplotlib.lines.Line2D.set_markerfacecoloralt") or mfcalt | color | | [`markersize`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markersize "matplotlib.lines.Line2D.set_markersize") or ms | float | | [`markevery`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_markevery "matplotlib.lines.Line2D.set_markevery") | None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] | | [`mouseover`](matplotlib.artist.artist.set_mouseover#matplotlib.artist.Artist.set_mouseover "matplotlib.artist.Artist.set_mouseover") | bool | | [`path_effects`](matplotlib.artist.artist.set_path_effects#matplotlib.artist.Artist.set_path_effects "matplotlib.artist.Artist.set_path_effects") | [`AbstractPathEffect`](../patheffects_api#matplotlib.patheffects.AbstractPathEffect "matplotlib.patheffects.AbstractPathEffect") | | [`picker`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_picker "matplotlib.lines.Line2D.set_picker") | float or callable[[Artist, Event], tuple[bool, dict]] | | [`pickradius`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_pickradius "matplotlib.lines.Line2D.set_pickradius") | unknown | | [`rasterized`](matplotlib.artist.artist.set_rasterized#matplotlib.artist.Artist.set_rasterized "matplotlib.artist.Artist.set_rasterized") | bool | | [`sketch_params`](matplotlib.artist.artist.set_sketch_params#matplotlib.artist.Artist.set_sketch_params "matplotlib.artist.Artist.set_sketch_params") | (scale: float, length: float, randomness: float) | | [`snap`](matplotlib.artist.artist.set_snap#matplotlib.artist.Artist.set_snap "matplotlib.artist.Artist.set_snap") | bool or None | | [`solid_capstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_capstyle "matplotlib.lines.Line2D.set_solid_capstyle") | [`CapStyle`](../_enums_api#matplotlib._enums.CapStyle "matplotlib._enums.CapStyle") or {'butt', 'projecting', 'round'} | | [`solid_joinstyle`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_solid_joinstyle "matplotlib.lines.Line2D.set_solid_joinstyle") | [`JoinStyle`](../_enums_api#matplotlib._enums.JoinStyle "matplotlib._enums.JoinStyle") or {'miter', 'round', 'bevel'} | | 
[`transform`](matplotlib.artist.artist.set_transform#matplotlib.artist.Artist.set_transform "matplotlib.artist.Artist.set_transform") | unknown | | [`url`](matplotlib.artist.artist.set_url#matplotlib.artist.Artist.set_url "matplotlib.artist.Artist.set_url") | str | | [`visible`](matplotlib.artist.artist.set_visible#matplotlib.artist.Artist.set_visible "matplotlib.artist.Artist.set_visible") | bool | | [`xdata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_xdata "matplotlib.lines.Line2D.set_xdata") | 1D array | | [`ydata`](matplotlib.lines.line2d#matplotlib.lines.Line2D.set_ydata "matplotlib.lines.Line2D.set_ydata") | 1D array | | [`zorder`](matplotlib.artist.artist.set_zorder#matplotlib.artist.Artist.set_zorder "matplotlib.artist.Artist.set_zorder") | float | See also [`matplotlib.dates`](../dates_api#module-matplotlib.dates "matplotlib.dates") Helper functions on dates. [`matplotlib.dates.date2num`](../dates_api#matplotlib.dates.date2num "matplotlib.dates.date2num") Convert dates to num. [`matplotlib.dates.num2date`](../dates_api#matplotlib.dates.num2date "matplotlib.dates.num2date") Convert num to dates. [`matplotlib.dates.drange`](../dates_api#matplotlib.dates.drange "matplotlib.dates.drange") Create an equally spaced sequence of dates. #### Notes If you are using custom date tickers and formatters, it may be necessary to set the formatters/locators after the call to [`plot_date`](#matplotlib.pyplot.plot_date "matplotlib.pyplot.plot_date"). [`plot_date`](#matplotlib.pyplot.plot_date "matplotlib.pyplot.plot_date") will set the default tick locator to [`AutoDateLocator`](../dates_api#matplotlib.dates.AutoDateLocator "matplotlib.dates.AutoDateLocator") (if the tick locator is not already set to a [`DateLocator`](../dates_api#matplotlib.dates.DateLocator "matplotlib.dates.DateLocator") instance) and the default tick formatter to [`AutoDateFormatter`](../dates_api#matplotlib.dates.AutoDateFormatter "matplotlib.dates.AutoDateFormatter") (if the tick formatter is not already set to a [`DateFormatter`](../dates_api#matplotlib.dates.DateFormatter "matplotlib.dates.DateFormatter") instance).
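For illustration, here is a minimal sketch of the pattern these notes describe (the dates and values are made up): the custom date formatter is installed *after* the `plot_date` call, so the defaults described above are not a concern.

```
import datetime

import matplotlib.dates as mdates
import matplotlib.pyplot as plt

# Hypothetical data: one sample per day for ten days.
dates = [datetime.date(2022, 1, 1) + datetime.timedelta(days=i)
         for i in range(10)]
values = list(range(10))

fig, ax = plt.subplots()
ax.plot_date(dates, values, linestyle='-')

# Set the custom formatter after the plot_date call, as the notes
# above suggest, so it is the one left in effect.
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %d'))
fig.autofmt_xdate()
plt.show()
```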
matplotlib mpl_toolkits.mplot3d mpl\_toolkits.mplot3d ===================== The mplot3d toolkit adds simple 3D plotting capabilities (scatter, surface, line, mesh, etc.) to Matplotlib by supplying an Axes object that can create a 2D projection of a 3D scene. The resulting graph will have the same look and feel as regular 2D plots. Not the fastest or most feature complete 3D library out there, but it ships with Matplotlib and thus may be a lighter weight solution for some use cases. See the [mplot3d tutorial](https://matplotlib.org/stable/tutorials/toolkits/mplot3d.html) for more information. The interactive backends also provide the ability to rotate and zoom the 3D scene. One can rotate the 3D scene by simply clicking-and-dragging the scene. Panning is done by clicking the middle mouse button, and zooming is done by right-clicking the scene and dragging the mouse up and down. Unlike 2D plots, the toolbar pan and zoom buttons are not used. * [mplot3d FAQ](mplot3d/faq) + [How is mplot3d different from Mayavi?](mplot3d/faq#how-is-mplot3d-different-from-mayavi) + [My 3D plot doesn't look right at certain viewing angles](mplot3d/faq#my-3d-plot-doesn-t-look-right-at-certain-viewing-angles) + [I don't like how the 3D plot is laid out, how do I change that?](mplot3d/faq#i-don-t-like-how-the-3d-plot-is-laid-out-how-do-i-change-that) * [mplot3d View Angles](mplot3d/view_angles) + [How to define the view angle](mplot3d/view_angles#how-to-define-the-view-angle) + [Primary view planes](mplot3d/view_angles#primary-view-planes) Note [`pyplot`](../pyplot_summary#module-matplotlib.pyplot "matplotlib.pyplot") cannot be used to add content to 3D plots, because its function signatures are strictly 2D and cannot handle the additional information needed for 3D. Instead, use the explicit API by calling the respective methods on the [`Axes3D`](../_as_gen/mpl_toolkits.mplot3d.axes3d.axes3d#mpl_toolkits.mplot3d.axes3d.Axes3D "mpl_toolkits.mplot3d.axes3d.Axes3D") object. axes3d ------ Note 3D plotting in Matplotlib is still not as mature as the 2D case. Please report any functions that do not behave as expected as a bug. In addition, help and patches would be greatly appreciated! | | | | --- | --- | | [`axes3d.Axes3D`](../_as_gen/mpl_toolkits.mplot3d.axes3d.axes3d#mpl_toolkits.mplot3d.axes3d.Axes3D "mpl_toolkits.mplot3d.axes3d.Axes3D")(fig[, rect, elev, azim, roll, ...]) | 3D Axes object. | axis3d ------ Note See `mpl_toolkits.mplot3d.axis3d._axinfo` for a dictionary containing constants that may be modified for controlling the look and feel of mplot3d axes (e.g., label spacing, font colors and panel colors). Historically, axis3d has suffered from having hard-coded constants that precluded user adjustments, and this dictionary was implemented in version 1.1 as a stop-gap measure. | | | | --- | --- | | [`axis3d.Axis`](../_as_gen/mpl_toolkits.mplot3d.axis3d.axis#mpl_toolkits.mplot3d.axis3d.Axis "mpl_toolkits.mplot3d.axis3d.Axis")(axes, \*[, rotate\_label]) | An Axis class for the 3D plots. | art3d ----- | | | | --- | --- | | [`art3d.Line3D`](../_as_gen/mpl_toolkits.mplot3d.art3d.line3d#mpl_toolkits.mplot3d.art3d.Line3D "mpl_toolkits.mplot3d.art3d.Line3D")(xs, ys, zs, \*args, \*\*kwargs) | 3D line object. | | [`art3d.Line3DCollection`](../_as_gen/mpl_toolkits.mplot3d.art3d.line3dcollection#mpl_toolkits.mplot3d.art3d.Line3DCollection "mpl_toolkits.mplot3d.art3d.Line3DCollection")(segments, \*[, zorder]) | A collection of 3D lines. 
| | [`art3d.Patch3D`](../_as_gen/mpl_toolkits.mplot3d.art3d.patch3d#mpl_toolkits.mplot3d.art3d.Patch3D "mpl_toolkits.mplot3d.art3d.Patch3D")(\*args[, zs, zdir]) | 3D patch object. | | [`art3d.Patch3DCollection`](../_as_gen/mpl_toolkits.mplot3d.art3d.patch3dcollection#mpl_toolkits.mplot3d.art3d.Patch3DCollection "mpl_toolkits.mplot3d.art3d.Patch3DCollection")(\*args[, zs, zdir, ...]) | A collection of 3D patches. | | [`art3d.Path3DCollection`](../_as_gen/mpl_toolkits.mplot3d.art3d.path3dcollection#mpl_toolkits.mplot3d.art3d.Path3DCollection "mpl_toolkits.mplot3d.art3d.Path3DCollection")(\*args[, zs, zdir, ...]) | A collection of 3D paths. | | [`art3d.PathPatch3D`](../_as_gen/mpl_toolkits.mplot3d.art3d.pathpatch3d#mpl_toolkits.mplot3d.art3d.PathPatch3D "mpl_toolkits.mplot3d.art3d.PathPatch3D")(path, \*[, zs, zdir]) | 3D PathPatch object. | | [`art3d.Poly3DCollection`](../_as_gen/mpl_toolkits.mplot3d.art3d.poly3dcollection#mpl_toolkits.mplot3d.art3d.Poly3DCollection "mpl_toolkits.mplot3d.art3d.Poly3DCollection")(verts, \*args[, zsort]) | A collection of 3D polygons. | | [`art3d.Text3D`](../_as_gen/mpl_toolkits.mplot3d.art3d.text3d#mpl_toolkits.mplot3d.art3d.Text3D "mpl_toolkits.mplot3d.art3d.Text3D")([x, y, z, text, zdir]) | Text object with 3D position and direction. | | [`art3d.get_dir_vector`](../_as_gen/mpl_toolkits.mplot3d.art3d.get_dir_vector#mpl_toolkits.mplot3d.art3d.get_dir_vector "mpl_toolkits.mplot3d.art3d.get_dir_vector")(zdir) | Return a direction vector. | | [`art3d.juggle_axes`](../_as_gen/mpl_toolkits.mplot3d.art3d.juggle_axes#mpl_toolkits.mplot3d.art3d.juggle_axes "mpl_toolkits.mplot3d.art3d.juggle_axes")(xs, ys, zs, zdir) | Reorder coordinates so that 2D xs, ys can be plotted in the plane orthogonal to zdir. | | [`art3d.line_2d_to_3d`](../_as_gen/mpl_toolkits.mplot3d.art3d.line_2d_to_3d#mpl_toolkits.mplot3d.art3d.line_2d_to_3d "mpl_toolkits.mplot3d.art3d.line_2d_to_3d")(line[, zs, zdir]) | Convert a 2D line to 3D. | | [`art3d.line_collection_2d_to_3d`](../_as_gen/mpl_toolkits.mplot3d.art3d.line_collection_2d_to_3d#mpl_toolkits.mplot3d.art3d.line_collection_2d_to_3d "mpl_toolkits.mplot3d.art3d.line_collection_2d_to_3d")(col[, zs, zdir]) | Convert a LineCollection to a Line3DCollection object. | | [`art3d.patch_2d_to_3d`](../_as_gen/mpl_toolkits.mplot3d.art3d.patch_2d_to_3d#mpl_toolkits.mplot3d.art3d.patch_2d_to_3d "mpl_toolkits.mplot3d.art3d.patch_2d_to_3d")(patch[, z, zdir]) | Convert a Patch to a Patch3D object. | | [`art3d.patch_collection_2d_to_3d`](../_as_gen/mpl_toolkits.mplot3d.art3d.patch_collection_2d_to_3d#mpl_toolkits.mplot3d.art3d.patch_collection_2d_to_3d "mpl_toolkits.mplot3d.art3d.patch_collection_2d_to_3d")(col[, zs, ...]) | Convert a [`PatchCollection`](../collections_api#matplotlib.collections.PatchCollection "matplotlib.collections.PatchCollection") into a `Patch3DCollection` object (or a [`PathCollection`](../collections_api#matplotlib.collections.PathCollection "matplotlib.collections.PathCollection") into a `Path3DCollection` object). | | [`art3d.pathpatch_2d_to_3d`](../_as_gen/mpl_toolkits.mplot3d.art3d.pathpatch_2d_to_3d#mpl_toolkits.mplot3d.art3d.pathpatch_2d_to_3d "mpl_toolkits.mplot3d.art3d.pathpatch_2d_to_3d")(pathpatch[, z, zdir]) | Convert a PathPatch to a PathPatch3D object. 
| | [`art3d.poly_collection_2d_to_3d`](../_as_gen/mpl_toolkits.mplot3d.art3d.poly_collection_2d_to_3d#mpl_toolkits.mplot3d.art3d.poly_collection_2d_to_3d "mpl_toolkits.mplot3d.art3d.poly_collection_2d_to_3d")(col[, zs, zdir]) | Convert a PolyCollection to a Poly3DCollection object. | | [`art3d.rotate_axes`](../_as_gen/mpl_toolkits.mplot3d.art3d.rotate_axes#mpl_toolkits.mplot3d.art3d.rotate_axes "mpl_toolkits.mplot3d.art3d.rotate_axes")(xs, ys, zs, zdir) | Reorder coordinates so that the axes are rotated with zdir along the original z axis. | | [`art3d.text_2d_to_3d`](../_as_gen/mpl_toolkits.mplot3d.art3d.text_2d_to_3d#mpl_toolkits.mplot3d.art3d.text_2d_to_3d "mpl_toolkits.mplot3d.art3d.text_2d_to_3d")(obj[, z, zdir]) | Convert a Text to a Text3D object. | proj3d ------ | | | | --- | --- | | [`proj3d.inv_transform`](../_as_gen/mpl_toolkits.mplot3d.proj3d.inv_transform#mpl_toolkits.mplot3d.proj3d.inv_transform "mpl_toolkits.mplot3d.proj3d.inv_transform")(xs, ys, zs, M) | | | [`proj3d.persp_transformation`](../_as_gen/mpl_toolkits.mplot3d.proj3d.persp_transformation#mpl_toolkits.mplot3d.proj3d.persp_transformation "mpl_toolkits.mplot3d.proj3d.persp_transformation")(zfront, zback, ...) | | | [`proj3d.proj_points`](../_as_gen/mpl_toolkits.mplot3d.proj3d.proj_points#mpl_toolkits.mplot3d.proj3d.proj_points "mpl_toolkits.mplot3d.proj3d.proj_points")(points, M) | | | [`proj3d.proj_trans_points`](../_as_gen/mpl_toolkits.mplot3d.proj3d.proj_trans_points#mpl_toolkits.mplot3d.proj3d.proj_trans_points "mpl_toolkits.mplot3d.proj3d.proj_trans_points")(points, M) | | | [`proj3d.proj_transform`](../_as_gen/mpl_toolkits.mplot3d.proj3d.proj_transform#mpl_toolkits.mplot3d.proj3d.proj_transform "mpl_toolkits.mplot3d.proj3d.proj_transform")(xs, ys, zs, M) | Transform the points by the projection matrix | | [`proj3d.proj_transform_clip`](../_as_gen/mpl_toolkits.mplot3d.proj3d.proj_transform_clip#mpl_toolkits.mplot3d.proj3d.proj_transform_clip "mpl_toolkits.mplot3d.proj3d.proj_transform_clip")(xs, ys, zs, M) | Transform the points by the projection matrix and return the clipping result; returns txs, tys, tzs, tis | | [`proj3d.rot_x`](../_as_gen/mpl_toolkits.mplot3d.proj3d.rot_x#mpl_toolkits.mplot3d.proj3d.rot_x "mpl_toolkits.mplot3d.proj3d.rot_x")(V, alpha) | | | [`proj3d.transform`](../_as_gen/mpl_toolkits.mplot3d.proj3d.transform#mpl_toolkits.mplot3d.proj3d.transform "mpl_toolkits.mplot3d.proj3d.transform")(xs, ys, zs, M) | Transform the points by the projection matrix | | [`proj3d.view_transformation`](../_as_gen/mpl_toolkits.mplot3d.proj3d.view_transformation#mpl_toolkits.mplot3d.proj3d.view_transformation "mpl_toolkits.mplot3d.proj3d.view_transformation")(E, R, V, roll) | | | [`proj3d.world_transformation`](../_as_gen/mpl_toolkits.mplot3d.proj3d.world_transformation#mpl_toolkits.mplot3d.proj3d.world_transformation "mpl_toolkits.mplot3d.proj3d.world_transformation")(xmin, xmax, ...) | Produce a matrix that scales homogeneous coords in the specified ranges to [0, 1], or [0, pb\_aspect[i]] if the plotbox aspect ratio is specified. | matplotlib mpl_toolkits.axisartist mpl\_toolkits.axisartist ======================== The *axisartist* namespace provides a derived Axes implementation (`mpl_toolkits.axisartist.Axes`), designed to support curvilinear grids. The biggest difference is that the artists responsible for drawing axis lines, ticks, ticklabels, and axis labels are separated out from Matplotlib's Axis class.
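As a brief, hedged sketch of what that separation enables (adapted freely in the style of the toolkit's `axislines` demos; the styling choices are arbitrary), each side of the axes is an individually addressable artist on `ax.axis`:

```
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axisartist.axislines import AxesZero

fig = plt.figure()
ax = fig.add_subplot(axes_class=AxesZero)

# Show arrow-tipped axis lines through the origin ...
for direction in ["xzero", "yzero"]:
    ax.axis[direction].set_axisline_style("-|>")
    ax.axis[direction].set_visible(True)

# ... and hide the conventional box around the axes.
for direction in ["left", "right", "bottom", "top"]:
    ax.axis[direction].set_visible(False)

x = np.linspace(-0.5, 1.0, 100)
ax.plot(x, np.sin(x * np.pi))
plt.show()
```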
You can find a tutorial describing usage of axisartist at the [axisartist](https://matplotlib.org/stable/tutorials/toolkits/axisartist.html#axisartist-users-guide-index) user guide. Note This module contains classes and functions that were formerly part of the `mpl_toolkits.axes_grid` module that was removed in 3.6. Additional classes from that older module may also be found in [`mpl_toolkits.axes_grid1`](axes_grid1#module-mpl_toolkits.axes_grid1 "mpl_toolkits.axes_grid1"). **The submodules of the axisartist API are:** | | | | --- | --- | | [`axisartist.angle_helper`](../_as_gen/mpl_toolkits.axisartist.angle_helper#module-mpl_toolkits.axisartist.angle_helper "mpl_toolkits.axisartist.angle_helper") | | | [`axisartist.axes_divider`](../_as_gen/mpl_toolkits.axisartist.axes_divider#module-mpl_toolkits.axisartist.axes_divider "mpl_toolkits.axisartist.axes_divider") | | | [`axisartist.axes_grid`](../_as_gen/mpl_toolkits.axisartist.axes_grid#module-mpl_toolkits.axisartist.axes_grid "mpl_toolkits.axisartist.axes_grid") | | | [`axisartist.axes_rgb`](../_as_gen/mpl_toolkits.axisartist.axes_rgb#module-mpl_toolkits.axisartist.axes_rgb "mpl_toolkits.axisartist.axes_rgb") | | | [`axisartist.axis_artist`](../_as_gen/mpl_toolkits.axisartist.axis_artist#module-mpl_toolkits.axisartist.axis_artist "mpl_toolkits.axisartist.axis_artist") | The [`axis_artist`](../_as_gen/mpl_toolkits.axisartist.axis_artist#module-mpl_toolkits.axisartist.axis_artist "mpl_toolkits.axisartist.axis_artist") module implements custom artists to draw axis elements (axis lines and labels, tick lines and labels, grid lines). | | [`axisartist.axisline_style`](../_as_gen/mpl_toolkits.axisartist.axisline_style#module-mpl_toolkits.axisartist.axisline_style "mpl_toolkits.axisartist.axisline_style") | | | [`axisartist.axislines`](../_as_gen/mpl_toolkits.axisartist.axislines#module-mpl_toolkits.axisartist.axislines "mpl_toolkits.axisartist.axislines") | Axislines includes a modified implementation of the Axes class. | | [`axisartist.clip_path`](../_as_gen/mpl_toolkits.axisartist.clip_path#module-mpl_toolkits.axisartist.clip_path "mpl_toolkits.axisartist.clip_path") | | | [`axisartist.floating_axes`](../_as_gen/mpl_toolkits.axisartist.floating_axes#module-mpl_toolkits.axisartist.floating_axes "mpl_toolkits.axisartist.floating_axes") | Experimental support for curvilinear grids. | | [`axisartist.grid_finder`](../_as_gen/mpl_toolkits.axisartist.grid_finder#module-mpl_toolkits.axisartist.grid_finder "mpl_toolkits.axisartist.grid_finder") | | | [`axisartist.grid_helper_curvelinear`](../_as_gen/mpl_toolkits.axisartist.grid_helper_curvelinear#module-mpl_toolkits.axisartist.grid_helper_curvelinear "mpl_toolkits.axisartist.grid_helper_curvelinear") | Experimental support for curvilinear grids. | | [`axisartist.parasite_axes`](../_as_gen/mpl_toolkits.axisartist.parasite_axes#module-mpl_toolkits.axisartist.parasite_axes "mpl_toolkits.axisartist.parasite_axes") | | matplotlib mpl_toolkits.axes_grid1 mpl\_toolkits.axes\_grid1 ========================= [`mpl_toolkits.axes_grid1`](#module-mpl_toolkits.axes_grid1 "mpl_toolkits.axes_grid1") provides a framework of helper classes to adjust the positioning of multiple fixed-aspect Axes (e.g., displaying images). It can be contrasted with the `aspect` property of Matplotlib Axes, which adjusts the position of a single Axes. See [Overview of mpl\_toolkits.axes\_grid1](https://matplotlib.org/stable/tutorials/toolkits/axes_grid.html#axes-grid1-users-guide-index) for a guide on the usage of axes\_grid1.
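As a minimal sketch of that framework in action (the image data is random and purely illustrative), `ImageGrid` lays out several fixed-aspect images on a uniform grid with controlled padding:

```
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1 import ImageGrid

fig = plt.figure()
# A 2x2 grid of equally sized, tightly packed image axes.
grid = ImageGrid(fig, 111, nrows_ncols=(2, 2), axes_pad=0.1)

for ax in grid:
    # Any fixed-aspect image works here; random data for illustration.
    ax.imshow(np.random.rand(10, 10))

plt.show()
```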
Note This module contains classes and functions that were formerly part of the `mpl_toolkits.axes_grid` module that was removed in 3.6. Additional classes from that older module may also be found in [`mpl_toolkits.axisartist`](axisartist#module-mpl_toolkits.axisartist "mpl_toolkits.axisartist"). **The submodules of the axes\_grid1 API are:** | | | | --- | --- | | [`axes_grid1.anchored_artists`](../_as_gen/mpl_toolkits.axes_grid1.anchored_artists#module-mpl_toolkits.axes_grid1.anchored_artists "mpl_toolkits.axes_grid1.anchored_artists") | | | [`axes_grid1.axes_divider`](../_as_gen/mpl_toolkits.axes_grid1.axes_divider#module-mpl_toolkits.axes_grid1.axes_divider "mpl_toolkits.axes_grid1.axes_divider") | Helper classes to adjust the positions of multiple axes at drawing time. | | [`axes_grid1.axes_grid`](../_as_gen/mpl_toolkits.axes_grid1.axes_grid#module-mpl_toolkits.axes_grid1.axes_grid "mpl_toolkits.axes_grid1.axes_grid") | | | [`axes_grid1.axes_rgb`](../_as_gen/mpl_toolkits.axes_grid1.axes_rgb#module-mpl_toolkits.axes_grid1.axes_rgb "mpl_toolkits.axes_grid1.axes_rgb") | | | [`axes_grid1.axes_size`](../_as_gen/mpl_toolkits.axes_grid1.axes_size#module-mpl_toolkits.axes_grid1.axes_size "mpl_toolkits.axes_grid1.axes_size") | Provides classes of simple units that will be used with AxesDivider class (or others) to determine the size of each axes. | | [`axes_grid1.inset_locator`](../_as_gen/mpl_toolkits.axes_grid1.inset_locator#module-mpl_toolkits.axes_grid1.inset_locator "mpl_toolkits.axes_grid1.inset_locator") | A collection of functions and objects for creating or placing inset axes. | | [`axes_grid1.mpl_axes`](../_as_gen/mpl_toolkits.axes_grid1.mpl_axes#module-mpl_toolkits.axes_grid1.mpl_axes "mpl_toolkits.axes_grid1.mpl_axes") | | | [`axes_grid1.parasite_axes`](../_as_gen/mpl_toolkits.axes_grid1.parasite_axes#module-mpl_toolkits.axes_grid1.parasite_axes "mpl_toolkits.axes_grid1.parasite_axes") | | matplotlib mplot3d FAQ mplot3d FAQ =========== How is mplot3d different from Mayavi? ------------------------------------- [Mayavi](https://docs.enthought.com/mayavi/mayavi/) is a very powerful and featureful 3D graphing library. For advanced 3D scenes and excellent rendering capabilities, it is highly recommended to use Mayavi. mplot3d was intended to allow users to create simple 3D graphs with the same "look-and-feel" as matplotlib's 2D plots. Furthermore, users can use the same toolkit that they are already familiar with to generate both their 2D and 3D plots. My 3D plot doesn't look right at certain viewing angles ------------------------------------------------------- This is probably the most commonly reported issue with mplot3d. The problem is that -- from some viewing angles -- a 3D object would appear in front of another object, even though it is physically behind it. This can result in plots that do not look "physically correct." Unfortunately, while some work is being done to reduce the occurrence of this artifact, it is currently an intractable problem, and cannot be fully solved until matplotlib supports 3D graphics rendering at its core. The problem occurs due to the reduction of 3D data down to 2D + z-order scalar. A single value represents the 3rd dimension for all parts of 3D objects in a collection. Therefore, when the bounding boxes of two collections intersect, it becomes possible for this artifact to occur. Furthermore, the intersection of two 3D objects (such as polygons or patches) cannot be rendered properly in matplotlib's 2D rendering engine.
This problem will likely not be solved until OpenGL support is added to all of the backends (patches are greatly welcomed). Until then, if you need complex 3D scenes, we recommend using [Mayavi](https://docs.enthought.com/mayavi/mayavi/). I don't like how the 3D plot is laid out, how do I change that? --------------------------------------------------------------- Historically, mplot3d has suffered from a hard-coding of parameters used to control visuals such as label spacing, tick length, and grid line width. Work is being done to eliminate this issue. For matplotlib v1.1.0, there is a semi-official manner to modify these parameters. See the note in the [`mplot3d.axis3d`](../mplot3d#module-mpl_toolkits.mplot3d.axis3d "mpl_toolkits.mplot3d.axis3d") section of the mplot3d API documentation for more information. matplotlib mplot3d View Angles mplot3d View Angles =================== How to define the view angle ---------------------------- The position of the viewport "camera" in a 3D plot is defined by three angles: *elevation*, *azimuth*, and *roll*. From the resulting position, it always points towards the center of the plot box volume. The angle direction is a common convention, and is shared with [PyVista](https://docs.pyvista.org/api/core/camera.html) and [MATLAB](https://www.mathworks.com/help/matlab/ref/view.html) (though MATLAB lacks a roll angle). Note that a positive roll angle rotates the viewing plane clockwise, so the 3D axes will appear to rotate counter-clockwise. Rotating the plot using the mouse will control only the azimuth and elevation, but all three angles can be set programmatically: ``` import matplotlib.pyplot as plt ax = plt.figure().add_subplot(projection='3d') ax.view_init(elev=30, azim=45, roll=15) ``` Primary view planes ------------------- To look directly at the primary view planes, the required elevation, azimuth, and roll angles are shown in the diagram of an "unfolded" plot below. These are further documented in the [`mplot3d.axes3d.Axes3D.view_init`](../../_as_gen/mpl_toolkits.mplot3d.axes3d.axes3d#mpl_toolkits.mplot3d.axes3d.Axes3D.view_init "mpl_toolkits.mplot3d.axes3d.Axes3D.view_init") API. ([Source code](https://matplotlib.org/stable/gallery/mplot3d/view_planes_3d.py), [png](https://matplotlib.org/stable/gallery/mplot3d/view_planes_3d.png)) bower API API === Commands -------- Command line reference * [cache](#cache) * [help](#help) * [home](#home) * [info](#info) * [init](#init) * [install](#install) * [link](#link) * [list](#list) * [login](#login) * [lookup](#lookup) * [prune](#prune) * [register](#register) * [search](#search) * [update](#update) * [uninstall](#uninstall) * [unregister](#unregister) * [version](#version) ### cache ``` $ bower cache <command> [<args>] ``` Manage bower cache #### cache clean ``` $ bower cache clean $ bower cache clean <name> [<name> ...] $ bower cache clean <name>#<version> [<name>#<version> ..] ``` Cleans cached packages #### cache list ``` $ bower cache list $ bower cache list <name> [<name> ...] ``` Lists cached packages ### help ``` $ bower help <command> ``` Display help information about Bower ### home ``` $ bower home $ bower home <package> $ bower home <package>#<version> ``` Opens a package homepage in your favorite browser. If no `<package>` is passed, opens the homepage of the local package. ### info ``` $ bower info <package> $ bower info <package> [<property>] $ bower info <package>#<version> [<property>] ``` Displays overall information of a package or of a particular version.
### init ``` $ bower init ``` Interactively create a bower.json file ### install ``` $ bower install [<options>] $ bower install <endpoint> [<endpoint> ..] [<options>] ``` Installs project dependencies recursively. Project dependencies consist of: 1. `dependencies` specified in `bower.json` of the project 2. All “external” dependencies not specified in `bower.json`, but present in `bower_components` 3. Any additional `<endpoint>` passed as an argument to this command When the `--save` flag is used, all additional endpoints are saved to `dependencies` in `bower.json`. Bower recommends always using the `--save` flag to achieve reproducible installs between machines. Endpoints can have multiple forms: * `<package>` * `<package>#<version>` * `<name>=<package>#<version>` Where: * `<package>` is a package URL, physical location or registry name * `<version>` is a valid range, commit, branch, etc. * `<name>` is the name it should have locally. `<package>` can be any one of the following: | | | | --- | --- | | Registered package name | `jquery` `normalize.css` | | Git endpoint | `https://github.com/user/package.git` `git@github.com:user/package.git` | | Git endpoint without .git | `git+https://github.com/user/package` `git+ssh://git@github.com/user/package` | | Local folder | `my/local/folder/` | | Public Subversion endpoint | `svn+http://package.googlecode.com/svn/` | | Private Subversion endpoint | `svn+ssh://package.googlecode.com/svn/` `svn+https://package.googlecode.com/svn/` | | Shorthand (defaults to GitHub) | `user/package` | | URL | `http://example.com/script.js` `http://example.com/style.css` `http://example.com/package.zip` (contents will be extracted) `http://example.com/package.tar` (contents will be extracted) | A version can be: | | | | --- | --- | | semver version | `#1.2.3` | | version range | `#1.2` `#~1.2.3` `#^1.2.3` `#>=1.2.3 <2.0` | | Git tag | `#<tag>` | | Git commit SHA | `#<sha>` | | Git branch | `#<branch>` | | Subversion revision | `#<revision>` | #### install options * `-F`, `--force-latest`: Force latest version on conflict * `-p`, `--production`: Do not install project devDependencies * `-S`, `--save`: Save installed packages into the project’s bower.json dependencies * `-D`, `--save-dev`: Save installed packages into the project’s bower.json devDependencies * `-E`, `--save-exact`: Configure installed packages with an exact version rather than semver ### link ``` $ bower link $ bower link <name> [<local name>] ``` The link functionality allows developers to easily test their packages. Linking is a two-step process. Using ‘bower link’ in a project folder will create a global link. Then, in some other package, `bower link <name>` will create a link in the components folder pointing to the previously created link. This allows you to easily test a package because changes will be reflected immediately. When the link is no longer necessary, simply remove it with `bower uninstall <name>`. ### list ``` $ bower list [<options>] ``` List local packages and possible updates. #### list options * `-p`, `--paths`: Generates a simple JSON source mapping * `-r`, `--relative`: Make paths relative to the directory config property, which defaults to bower\_components ### lookup ``` $ bower lookup <name> ``` Look up a package URL by name ### login ``` $ bower login ``` Authenticate with GitHub and store credentials. Required to unregister packages.
#### login options * `-t`, `--token`: Pass an existing GitHub auth token rather than prompting for username and password ### prune ``` $ bower prune ``` Uninstalls local extraneous packages ### register ``` $ bower register <name> <url> ``` Register a package ### search ``` $ bower search $ bower search <name> ``` Finds all packages or a specific package. ### update ``` $ bower update <name> [<name> ..] [<options>] ``` Updates installed packages to their newest version according to bower.json. #### update options * `-F`, `--force-latest`: Force latest version on conflict * `-p`, `--production`: Do not install project devDependencies * `-S`, `--save`: Update `dependencies` in bower.json * `-D`, `--save-dev`: Update `devDependencies` in bower.json ### uninstall ``` $ bower uninstall <name> [<name> ..] [<options>] ``` Uninstalls a package locally from your bower\_components directory #### uninstall options * `-S`, `--save`: Remove uninstalled packages from the project’s bower.json dependencies * `-D`, `--save-dev`: Remove uninstalled packages from the project’s bower.json devDependencies ### unregister ``` $ bower unregister <package> ``` Unregisters a package. ### version ``` $ bower version [<newversion> | major | minor | patch] ``` Run this in a package directory to bump the version and write the new data back to the bower.json file. The newversion argument should be a valid semver string, or a valid second argument to semver.inc (one of “build”, “patch”, “minor”, or “major”). In the second case, the existing version will be incremented by 1 in the specified field. If run in a git repo, it will also create a version commit and tag, and fail if the repo is not clean. #### version options * `-m`, `--message`: Custom git commit and tag message If supplied with `--message` (shorthand: `-m`) config option, bower will use it as a commit message when creating a version commit. If the message config contains %s then that will be replaced with the resulting version number. For example: ``` $ bower version patch -m "Upgrade to %s for reasons" ``` Options ------- * [force](#force) * [json](#json) * [loglevel](#loglevel) * [offline](#offline) * [quiet](#quiet) * [silent](#silent) * [verbose](#verbose) * [allow-root](#allow-root) ### force ``` -f, --force ``` Makes various commands more forceful * `bower install --force` re-installs all installed components. It also forces installation even when there are non-bower directories with the same name in the components directory. Adding `--force` also bypasses the cache, and writes to the cache anyway. * `bower uninstall <package> --force` continues uninstallation even after a dependency conflict * `bower register <package> --force` and `bower unregister <package> --force` bypasses confirmation. Login is still needed. ### json ``` -j, --json ``` Output consumable JSON ### loglevel ``` -l, --loglevel ``` What level of logs to report. Possible values: error, conflict, warn, action, info, debug ### offline ``` -o, --offline ``` Do not use network connection ### quiet ``` -q, --quiet ``` Only output important information. It is an alias for `--loglevel=warn`. ### silent ``` -s, --silent ``` Do not output anything, besides errors. It is an alias for `--loglevel=error`. Silent is also useful if you have private components that might leak credentials to your CI environment. ### verbose ``` -V, --verbose ``` Makes output more verbose. It is an alias for `--loglevel=debug`. ### allow-root ``` --allow-root ``` Allows running commands as root. 
Bower is a user command, there is no need to execute it with superuser permissions. However, if you still want to run commands with sudo, use the `--allow-root` option. Consuming a package ------------------- You can use [build tools](https://bower.io/docs/tools) to easily consume Bower packages. If you use [`bower list --paths`](#list) or `bower list --paths --json`, you will get a simple name-to-path mapping: ``` $ bower list --paths # or $ bower list --paths --json ``` ``` { "backbone": "bower_components/backbone/backbone.js", "jquery": "bower_components/jquery/dist/jquery.js", "underscore": "bower_components/underscore/underscore.js" } ``` Every command supports the [`--json` option](#json) that makes Bower output JSON. Command results are output to `stdout` and errors/logs to `stderr`. Programmatic API ---------------- Bower provides a powerful, programmatic API. All commands can be accessed through the `bower.commands` object. ``` var bower = require('bower'); bower.commands .install(['jquery'], { save: true }, { /* custom config */ }) .on('end', function (installed) { console.log(installed); }); bower.commands .search('jquery', {}) .on('end', function (results) { console.log(results); }); ``` Commands emit four types of events: `log`, `prompt`, `end`, `error`. * `log` is emitted to report the state/progress of the command. * `prompt` is emitted whenever the user needs to be prompted. * `error` will only be emitted if something goes wrong. * `end` is emitted when the command successfully ends. For a better idea of how this works, you may want to check out [our bin file](https://github.com/bower/bower/blob/master/bin/bower). When using Bower programmatically, prompting is disabled by default. You can enable it when calling commands with `interactive: true` in the config. This requires you to listen for the `prompt` event and handle the prompting yourself. The easiest way is to use the [inquirer](https://npmjs.org/package/inquirer) npm module like so: ``` var inquirer = require('inquirer'); bower.commands .install(['jquery'], { save: true }, { interactive: true }) // .. .on('prompt', function (prompts, callback) { inquirer.prompt(prompts).then(callback); }); ``` Running on a continuous integration server ------------------------------------------ Bower will skip some interactive operations if it finds a `CI` environment variable set to `true`. You will find that the `CI` variable is already set for you on many continuous integration servers, e.g., [CircleCI](https://circleci.com/docs/environment-variables#basics) and [Travis-CI](http://docs.travis-ci.com/user/ci-environment/#Environment-variables). You may try to set the `CI` variable manually before running your Bower commands. On Mac or Linux, use `export CI=true`; on Windows, use `set CI=true`. If for some reason you are unable to set the `CI` environment variable, you can alternately use the `--config.interactive=false` flag. ``` $ bower install --config.interactive=false ``` Non-interactive mode -------------------- Bower works by default in interactive mode. There are a few ways of disabling it: * passing `CI=true` in the environment * passing `--config.interactive=false` to the Bower command * attaching a pipe to Bower (e.g. `bower install | cat`) * redirecting output to a file (e.g.
`bower install > logs.txt`) * running Bower through its [Programmatic API](#programmatic-api) When interactive mode is disabled: * `bower init` does not work * `bower register` and `bower unregister` bypass confirmation * `bower login` fails unless the `--token` parameter is provided * `bower install` fails on resolution conflicts, instead of asking for a choice * `bower uninstall` doesn’t ask for confirmation if a dependency is to be removed Using local cache ----------------- Bower supports installing packages from its local cache – without an internet connection – if the packages were installed before. ``` $ bower install <package> --offline ``` The content of the cache can be listed with [`bower cache list`](#cache-list): ``` $ bower cache list ``` The cache can be cleaned with [`bower cache clean`](#cache-clean): ``` $ bower cache clean ```
bower Creating Packages Creating Packages ================= bower.json ---------- Packages are defined by a manifest file `bower.json`. This is similar to Node’s `package.json` or Ruby’s `Gemfile`. Interactively create a `bower.json` with [`bower init`](api#init) ``` $ bower init ``` Specification ------------- A detailed specification of the `bower.json` file can be found in the [bower/spec](https://github.com/bower/spec/blob/master/json.md) repository. Maintaining dependencies ------------------------ Using `bower install <package> --save` will add `<package>` to your project’s bower.json `dependencies` array. ``` # install package and add it to bower.json dependencies $ bower install <package> --save ``` Similarly, using `bower install <package> --save-dev` will add `<package>` to your project’s bower.json `devDependencies` array. ``` # install package and add it to bower.json devDependencies $ bower install <package> --save-dev ``` Register -------- Registering your package allows others to install it with a short name, like `bower install <my-package-name>`. To register a new package: * The package name **must** adhere to the [bower.json spec](https://github.com/bower/spec/blob/master/json.md#name). * There **must** be a valid `bower.json` in the project’s root directory. * Your package should use [semver](http://semver.org/) Git tags. The `v` prefix is allowed. * Your package **must** be publicly available at a Git endpoint (e.g., GitHub). Remember to push your Git tags! * For private packages (e.g. GitHub Enterprise), please consider running a private [Bower registry](https://github.com/bower/registry). Then use [`bower register`](api#register): ``` $ bower register <my-package-name> <git-endpoint> # for example $ bower register example git://github.com/user/example.git ``` Now anyone can run `bower install <my-package-name>`, and get your library installed. The Bower registry does not have authentication or user management at this point in time. It’s on a first come, first served basis. Bower doesn’t support GitHub-style namespacing (`org/repo`), however you are encouraged to namespace related packages with `-`, for example, `angular-` and `paper-`. Please do not squat on package names. Register your package and claim your name after you have a valid public repo with working code. For package name transfers, intellectual property and other disputes, please try to resolve with the owner first. If there is no resolution, please submit a ticket in the [Bower Registry repo](https://github.com/bower/registry) and the Bower Core Team will assist. ### Unregister You can unregister packages with [`bower unregister`](api#unregister). You first need to authenticate with GitHub with [`bower login`](api#login) to confirm you are a contributor to the package repo. ``` bower login # enter username and password ? Username: ? Password: # unregister packages after successful login bower unregister <package> ``` You’ll likely want to [`bower cache clean`](api#cache-clean) after your change. Please remember it is generally considered bad behavior to remove versions of a library that others are depending on. Think twice :) If the above doesn’t work for you, you can [request a package be unregistered manually](https://github.com/bower/registry/issues). bower Pluggable Resolvers Pluggable Resolvers =================== Pluggable resolvers allow you to use resolvers created by 3rd party JavaScript developers — including overriding default resolvers used by Bower.
For example, resolvers can be used for: * Handling [Mercurial](https://mercurial.selenic.com/) or [Bazaar](http://bazaar.canonical.com/en/) repositories * Speeding up checkouts of services like [GitLab](https://about.gitlab.com/) or [Bitbucket](https://bitbucket.org/) * Allowing the use of packages from [npm](https://www.npmjs.com/) or [component.io](https://github.com/component/component.github.io) * Proxying downloads through a 3rd party service like [Artifactory](http://www.jfrog.com/artifactory/) or [Nexus Repository](http://www.sonatype.com/nexus-repository-oss) * Implementing a custom private registry (hosted on GitHub?) * Adding authentication support for private [GitHub Enterprise](https://enterprise.github.com/) instances Pluggable resolvers were introduced in Bower 1.5. Please make sure your Bower version is correct (`bower --version`). Using ----- A Pluggable Resolver is just an npm package that you install as `devDependency` in the `package.json` of your repository, or install globally with `npm install -g`. Declare what Pluggable resolvers your project uses by adding entries to the `resolvers` section of [.bowerrc](config). ``` { "resolvers": [ "bitbucket-resolver", "github-enterprise-resolver" ] } ``` Bower tries to use resolvers in the order specified. If no custom resolver matches the source being processed, Bower falls back to the default resolvers (git, github, filesystem, svn, registry). You can find the list of available Bower resolvers on the [npm website](https://www.npmjs.com/search?q=bower-resolver). Creating -------- As mentioned, custom resolvers are [npm](https://www.npmjs.com/) packages with a specific API described below. The `package.json` should not list `bower` as a `dependency` or `peerDependency` (both have undesired behavior in npm 2.x, and we don’t want you to use bower internals). Instead, you can check for a proper environment in the resolver’s factory by reading the provided `bower.version` parameter and use any other packages on npm (like [request](https://www.npmjs.com/package/request)). Packages should list `bower-resolver` as one of the `keywords` in `package.json`. Resolvers should also follow the [semver](http://semver.org/) specification. Here is what an example `package.json` of a custom resolver can look like: ``` { "name": "custom-bower-resolver", "version": "1.0.0", "keywords": ["bower-resolver"], "main": "index.js", "dependencies": { "request": "^2.61.0" } } ``` The `index.js` should export a factory for the resolver, as follows: ``` var tmp = require('tmp'); /** * Factory function for resolver * It is called only one time by Bower, to instantiate resolver. * You can instantiate here any caches or create helper functions. */ module.exports = function resolver (bower) { // Resolver factory returns an instance of resolver return { // Match method tells whether resolver supports given source // It can return either boolean or promise of boolean match: function (source) { return source.indexOf('svn://') === 0 }, // Optional: // Can resolve or normalize sources, like: // "jquery" => "git://github.com/jquery/jquery.git" locate: function (source) { return source; }, // Optional: // Allows listing available versions of a given source.
// Bower chooses matching release and passes it to "fetch" releases: function (source) { return [ { target: 'v1.0.0', version: '1.0.0' }, { target: 'v1.0.1', version: '1.0.1' } ] }, // Downloads the package and extracts it to a temporary directory // You can use npm's "tmp" package to create temporary directories // See the "Resolver API" section for details on this method fetch: function (endpoint, cached) { // If a cached version of the package exists, re-use it if (cached && cached.version) { return; } var tempDir = tmp.dirSync(); // ... download package to tempDir return { tempPath: tempDir.name, removeIgnores: true } } } } ``` If you need something more solid, see this real world example: [Mercurial Resolver](https://github.com/phenomnomnominal/mercurial-bower-resolver). Resolver API ------------ ### Resolver package ``` var plugResolver = require('pluggable-resolver') var resolver = plugResolver({ version: '1.5.0', config: {...}, logger: logger }) ``` * `resolver`: `Resolver` - instance of the resolver. * `version`: `String` - Bower’s version that instantiates the resolver. You can validate it. * `config`: `Object` - Bower’s <config>. You can ask authors to put extra configuration in it. * `logger`: `Object` - Bower’s [logger](https://github.com/bower/bower/tree/master/packages/bower-logger). Use it to output important warnings / information. `plugResolver()` returns an instance of the resolver with the API described below. ``` resolver.match() resolver.locate() resolver.releases() resolver.fetch() ``` ### resolver.match() Tells Bower whether or not to use this resolver for a given source. ``` var isMatched = resolver.match( source ) ``` * `source`: `String` - source from bower.json, like `git://github.com/jquery/jquery.git` * `isMatched`: `Boolean` - *Returns* a boolean that tells whether the resolver can handle the given source (either by locating it with the `locate` method, or fetching it with `fetch` + the optional `releases` method). `.match()` can also return a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) of the result. It’s useful e.g. for filesystem checks. ### resolver.locate() Allows implementing a simplified registry. ``` var locatedSource = resolver.locate( source ) ``` * `source`: `String` - source from bower.json, like `jquery/jquery` * `locatedSource`: `String` - *Returns* a resolved source string, like `"git://github.com/jquery/jquery.git"` `.locate()` can also return a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) of the result. It’s useful e.g. for remote registry calls. ### resolver.releases() Bower selects one matching `version` from the result and passes the matching `target` field to the `fetch` method. ``` var resolvedReleases = resolver.releases( source ) ``` * `source`: `String` - source from bower.json, like `git://github.com/jquery/jquery.git` * `resolvedReleases`: `Array` - *Returns* available releases for the given source (like a list of available tags on GitHub) + `target`: `String` - unique target id for release (usually tag name) + `version`: `String` - semantic version for the target above `.releases()` can also return a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) of the result. ### resolver.fetch() Downloads the given endpoint and returns the path to a temporary directory.
``` var fetched = resolver.fetch( endPoint, cached ) ``` * `endpoint`: `Object` - endpoint for the resource to download + `name`: `String` - name of resource (like `jquery`) + `source`: `String` - where to download resource from (like `git://github.com/jquery/jquery.git`) + `target`: `String` - the version or release of resource to download (like `v1.0.0`) * `cached`: `Object` - contains information about cached resource + `endpoint`: `Object` - endpoint of cached resource (the same format as above) + `release`: `String` - release of cached resource + `releases`: `Array` - the result of `releases` method + `version`: `String` - present if the cached resource has been resolved as a version (like `1.0.0`) + `resolution`: `String` - the “resolution” returned from the previous fetch call for the same resource * `fetched`: `Object` - *Returned* + `tempPath`: `String` - path to the temporary directory with the downloaded resource + `removeIgnores`: `Boolean` - tells whether bower should remove files ignored in bower.json. + `resolution`: `Object` - extra object that is saved in `.bower.json` and passed in the `cached` field to the next `fetch` call. It can be used e.g. to download resources conditionally, for example by storing an e-tag or last-modified time. `.fetch()` can also return a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) of the result. **If `.fetch()` returns `undefined`, then Bower re-uses the cached package.** bower Configuration Configuration ============= Bower can be configured using JSON in a .bowerrc file. For example: ``` { "directory": "app/components/", "timeout": 120000, "registry": { "search": [ "http://localhost:8000", "https://registry.bower.io" ] } } ``` Placement & Order ----------------- The config is obtained by merging multiple configurations by this order of importance: * CLI arguments via `--config` * Environment variables * Local `.bowerrc` located in the current working directory * All `.bowerrc` files upwards in the directory tree * `.bowerrc` file located in user’s home folder (`~`) * `.bowerrc` file located in the global folder (`/`) Example of CLI arguments: * `--config.endpoint-parser=<parser>` * `--config.storage.packages=<packages_cache_dir>` Example of valid environment variables: * `bower_endpoint_parser` is evaluated as `endpoint-parser` * `bower_storage__packages` is evaluated as `storage.packages` Example of valid environment variables with Array convention: * `export bower_registry__search='[http://localhost:8080, http://registry.bower.io]'; bower install` .bowerrc specification ---------------------- Available configuration variables, in `.bowerrc` format: ``` { "cwd": "~/.my-project", "directory": "bower_components", "registry": "https://registry.bower.io", "shorthand-resolver": "git://github.com//.git", "proxy": "http://proxy.local", "https-proxy": "http://proxy.local", "ca": "/var/certificate.pem", "color": true, "timeout": 60000, "save": true, "save-exact": true, "strict-ssl": true, "storage": { "packages" : "~/.bower/packages", "registry" : "~/.bower/registry", "links" : "~/.bower/links" }, "interactive": true, "resolvers": [ "mercurial-bower-resolver" ], "shallowCloneHosts": [ "myGitHost.example.com" ], "scripts": { "preinstall": "", "postinstall": "", "preuninstall": "" }, "ignoredDependencies": [ "jquery" ] } ``` A detailed description of available configuration variables can be found in the [bower/spec](https://github.com/bower/spec/blob/master/config.md) repository.
Environment variables in .bowerrc --------------------------------- One can use environment variables in `.bowerrc`, using the following syntax: `${ENV_VAR}`. ``` "storage" : { "packages": "/path/to/${USER}/packages" } ``` Hooks ----- Bower provides 3 separate hooks that can be used to trigger other automated tools during Bower usage. Importantly, these hooks are intended to allow external tools to help wire up the newly installed components into the parent project and other similar tasks. These hooks are not intended to provide a post-installation build step for component authors. As such, the configuration for these hooks is provided in the `.bowerrc` file in the parent project’s directory. In `.bowerrc` do: ``` { "scripts": { "preinstall": "<your command here>", "postinstall": "<your command here>", "preuninstall": "<your command here>" } } ``` The value of each script hook may contain a % character. When your script is called, the % will be replaced with a space-separated list of components being installed or uninstalled. Your script will also include an environment variable `BOWER_PID` containing the PID of the parent Bower process that triggered the script. This can be used to verify that the `preinstall` and `postinstall` steps are part of the same Bower process. homebrew Linux CI in homebrew/core Linux CI in `homebrew/core` =========================== We currently use Ubuntu 16.04 for bottling in `homebrew/core`. Ubuntu vs. other Linux distributions ------------------------------------ As of 2022, around 77% of our users are using Ubuntu. This is the reason why we have chosen this distribution for our base CI image. We have successfully used Ubuntu for CI since version 14.04. The Ubuntu LTS versions are supported for 5 years. A new LTS version is released every 2 years. Our bottles are compatible with other distributions like Debian/CentOS, even when compiled on Ubuntu. Past and next versions ---------------------- We are currently moving our CI to Ubuntu 22.04. This work will probably be done before the end of 2022. Moving from Ubuntu 16.04 to Ubuntu 22.04 (and thus skipping versions 18.04 and 20.04) took longer than expected. We plan to proceed with regular updates from 2022 onwards. We aim to use the latest Ubuntu LTS version for our CI. We will start using the latest Ubuntu LTS version for our CI no earlier than 3 months after its release and, ideally, no more than 12 months after its release. | Distribution | Glibc | GCC | Usage | | --- | --- | --- | --- | | Ubuntu 14.04 | 2.19 | 4 | From 2014 to 2017 | | Ubuntu 16.04 | 2.23 | 5 | From 2017 to 2022 | | Ubuntu 22.04 | 2.35 | 11 | From 2022 to 2024 | | Ubuntu 24.04 | ? | ? | From 2024 to 2026 | Why always use the latest version? ---------------------------------- Homebrew is a rolling-release package manager. We try to ship the newest things as quickly as possible, on macOS and Linux. When a formula needs a newer GCC because our host GCC in CI is too old, we need to make that formula depend on a newer Homebrew GCC. All C++ dependents of that formula immediately acquire a dependency on Homebrew GCC as well. While we have taken the steps to make sure this no longer holds up GCC updates, it still creates a maintenance burden. This problem is more likely for formulae which are very actively maintained and try to use newer features of C++. We decided that we shouldn’t have a maintenance burden for formulae which are doing the right thing by staying up to date.
It makes a lot of sense for Homebrew maintainers to submit upstream fixes when formulae are not working with newer compilers. It makes a lot less sense for Homebrew maintainers to submit fixes because our host compiler is too old. Note that `glibc` will need to be installed for more users as their `glibc` version will often be too old: disk space is cheap and we can handle this situation for our users. This situation will often arise when we update to a new LTS version and adoption of the new Ubuntu is still low during the first months. For the same reasons as above: we prefer to stay on the bleeding edge and give our users a gentle nudge to think about updating their OS. homebrew Xcode Xcode ===== Supported Xcode versions ------------------------ Homebrew supports and recommends the latest Xcode and/or Command Line Tools available for your platform (see `OS::Mac::Xcode.latest_version` and `OS::Mac::CLT.latest_clang_version` in [`Library/Homebrew/os/mac/xcode.rb`](https://github.com/Homebrew/brew/blob/HEAD/Library/Homebrew/os/mac/xcode.rb)). Updating for new Xcode releases ------------------------------- When a new Xcode release is made, the following things need to be updated: * In [`Library/Homebrew/os/mac/xcode.rb`](https://github.com/Homebrew/brew/blob/HEAD/Library/Homebrew/os/mac/xcode.rb) + `OS::Mac::Xcode.latest_version` + `OS::Mac::CLT.latest_clang_version` + `OS::Mac::Xcode.detect_version_from_clang_version` homebrew Interesting Taps & Forks Interesting Taps & Forks ======================== A [tap](taps) is Homebrew-speak for a Git repository containing additional formulae. Homebrew has the capability to add (and remove) multiple taps to your local installation with the `brew tap` and `brew untap` commands; run `man brew` in your terminal for usage information. The main repository at <https://github.com/Homebrew/homebrew-core>, often called `homebrew/core`, is always built-in. Your taps are Git repositories located at `$(brew --repository)/Library/Taps`. Unsupported interesting taps ---------------------------- * [homebrew-ffmpeg/ffmpeg](https://github.com/homebrew-ffmpeg/homebrew-ffmpeg): A tap for FFmpeg with additional options, including nonfree additions. * [denji/nginx](https://github.com/denji/homebrew-nginx): A tap for NGINX modules, intended for its `nginx-full` formula which includes more module options. * [InstantClientTap/instantclient](https://github.com/InstantClientTap/homebrew-instantclient): A tap for Oracle Instant Client. * [osx-cross/avr](https://github.com/osx-cross/homebrew-avr): GNU AVR toolchain (Libc, compilers and other tools for Atmel MCUs), useful for Arduino hackers and AVR programmers. * [petere/postgresql](https://github.com/petere/homebrew-postgresql): Allows installing multiple PostgreSQL versions in parallel. * [osrf/simulation](https://github.com/osrf/homebrew-simulation): Tools for robotics simulation. * [brewsci/bio](https://github.com/brewsci/homebrew-bio): Bioinformatics formulae. * [davidchall/hep](https://github.com/davidchall/homebrew-hep): High energy physics formulae. * [lifepillar/appleii](https://github.com/lifepillar/homebrew-appleii): Formulae for vintage Apple emulation. * [gromgit/fuse](https://github.com/gromgit/homebrew-fuse): macOS FUSE formulae that are no longer available in `homebrew/core`. * [cloudflare/cloudflare](https://github.com/cloudflare/homebrew-cloudflare): Formulae for the applications by Cloudflare, including curl with HTTP/3 support.
Unsupported interesting forks ----------------------------- * [mistydemeo/tigerbrew](https://github.com/mistydemeo/tigerbrew): Experimental Tiger/Leopard PowerPC version.
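To use one of the taps above, `brew tap` it and then install from it. For example (assuming the homebrew-ffmpeg tap still provides an `ffmpeg` formula):

```
brew tap homebrew-ffmpeg/ffmpeg
brew install homebrew-ffmpeg/ffmpeg/ffmpeg

# Remove the tap again:
brew untap homebrew-ffmpeg/ffmpeg
```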
homebrew Homebrew Shell Completion

Homebrew Shell Completion
=========================

Homebrew comes with completion definitions for the `brew` command. Some packages also provide completion definitions for their own programs. `zsh`, `bash` and `fish` are currently supported.

You must manually configure your shell to enable its completion support. This is because the Homebrew-managed completions are stored under `HOMEBREW_PREFIX`, which your system shell may not be aware of, and since it is difficult to automatically configure `bash` and `zsh` completions in a robust manner, the Homebrew installer does not do it for you.

Shell completions for external Homebrew commands are not automatically installed. To opt in to using completions for external commands (if provided), link them to `HOMEBREW_PREFIX` by running `brew completions link`.

Configuring Completions in `bash`
---------------------------------

To make Homebrew’s completions available in `bash`, you must source the definitions as part of your shell’s startup. Add the following to your `~/.bash_profile` (or, if it doesn’t exist, `~/.profile`):

```
if type brew &>/dev/null
then
  HOMEBREW_PREFIX="$(brew --prefix)"
  if [[ -r "${HOMEBREW_PREFIX}/etc/profile.d/bash_completion.sh" ]]
  then
    source "${HOMEBREW_PREFIX}/etc/profile.d/bash_completion.sh"
  else
    for COMPLETION in "${HOMEBREW_PREFIX}/etc/bash_completion.d/"*
    do
      [[ -r "${COMPLETION}" ]] && source "${COMPLETION}"
    done
  fi
fi
```

If you install the `bash-completion` formula, this will automatically source the completions’ initialisation script (so you do not need to follow the instructions in the formula’s caveats). If you are using Homebrew’s `bash` as your shell (i.e. `bash` >= v4) you should use the `bash-completion@2` formula instead.

Configuring Completions in `zsh`
--------------------------------

To make Homebrew’s completions available in `zsh`, you must insert the Homebrew-managed `zsh/site-functions` path into your `FPATH` before initialising `zsh`’s completion facility. Add the following to your `~/.zshrc`:

```
if type brew &>/dev/null
then
  FPATH="$(brew --prefix)/share/zsh/site-functions:${FPATH}"
  autoload -Uz compinit
  compinit
fi
```

This must be done before `compinit` is called. Note that if you are using Oh My Zsh, it will call `compinit` for you, so this must be done before you call `oh-my-zsh.sh`. This may be done by appending the following line to your `~/.zprofile` after Homebrew’s initialization, instead of modifying your `~/.zshrc` as above:

```
FPATH="$(brew --prefix)/share/zsh/site-functions:${FPATH}"
```

You may also need to forcibly rebuild `zcompdump`:

```
rm -f ~/.zcompdump; compinit
```

Additionally, if you receive “zsh compinit: insecure directories” warnings when attempting to load these completions, you may need to run this:

```
chmod -R go-w "$(brew --prefix)/share"
```

Configuring Completions in `fish`
---------------------------------

No configuration is needed if you’re using Homebrew’s `fish`. Friendly!
If your `fish` is from somewhere else, add the following to your `~/.config/fish/config.fish`: ``` if test -d (brew --prefix)"/share/fish/completions" set -gx fish_complete_path $fish_complete_path (brew --prefix)/share/fish/completions end if test -d (brew --prefix)"/share/fish/vendor_completions.d" set -gx fish_complete_path $fish_complete_path (brew --prefix)/share/fish/vendor_completions.d end ``` homebrew Documentation Documentation ============= Users ----- * [`brew` man-page (command documentation)](manpage) * [Homebrew Blog (news on major updates)](https://brew.sh/blog/) * [Troubleshooting](troubleshooting) * [Installation](installation) * [Frequently Asked Questions](faq) * [Common Issues](common-issues) * [`brew` Shell Completion](shell-completion) * [Homebrew on Linux](homebrew-on-linux) * [Tips and Tricks](tips-n'-tricks) * [Bottles (binary packages)](bottles) * [Taps (third-party repositories)](taps) * [Interesting Taps and Forks](interesting-taps-and-forks) * [Anonymous Aggregate User Behaviour Analytics](analytics) * [Querying `brew`](querying-brew) * [C++ Standard Libraries](c++-standard-libraries) * [MD5 and SHA-1 Deprecation](checksum_deprecation) * [Custom GCC and Cross Compilers](custom-gcc-and-cross-compilers) * [External Commands](external-commands) * [Ruby Gems, Python Eggs and Perl Modules](gems,-eggs-and-perl-modules) * [Python](homebrew-and-python) * [How To Build Software Outside Homebrew With Homebrew `keg_only` dependencies](how-to-build-software-outside-homebrew-with-homebrew-keg-only-dependencies) * [Xcode](xcode) * [Creating a Homebrew Issue](creating-a-homebrew-issue) * [Updating Software in Homebrew](updating-software-in-homebrew) * [Adding Software to Homebrew](adding-software-to-homebrew) * [Kickstarter Supporters](https://docs.brew.sh/Kickstarter-Supporters) Contributors ------------ * [How To Open A Pull Request (and get it merged)](how-to-open-a-homebrew-pull-request) * [Formula Cookbook](formula-cookbook) * [Cask Cookbook](cask-cookbook) * [Acceptable Formulae](acceptable-formulae) * [Acceptable Casks](acceptable-casks) * [License Guidelines](license-guidelines) * [Formulae Versions](versions) * [Deprecating, Disabling, and Removing Formulae](deprecating-disabling-and-removing-formulae) * [Node for Formula Authors](node-for-formula-authors) * [Python for Formula Authors](python-for-formula-authors) * [`brew livecheck`](brew-livecheck) * [Migrating A Formula To A Tap](migrating-a-formula-to-a-tap) * [Rename A Formula](rename-a-formula) * [Building Against Non-Homebrew Dependencies](building-against-non-homebrew-dependencies) * [How To Create (And Maintain) A Tap](how-to-create-and-maintain-a-tap) * [Brew Test Bot](brew-test-bot) * [Diagram Guidelines](diagram-guidelines) * [Prose Style Guidelines](prose-style-guidelines) * [Type Checking with Sorbet](typechecking) Maintainers ----------- * [New Maintainer Checklist](https://docs.brew.sh/New-Maintainer-Checklist) * [Maintainers: Avoiding Burnout](https://docs.brew.sh/Maintainers-Avoiding-Burnout) * [Maintainer Guidelines](https://docs.brew.sh/Maintainer-Guidelines) * [Homebrew/brew Maintainer Guide](https://docs.brew.sh/Homebrew-brew-Maintainer-Guide) * [Homebrew/homebrew-core Maintainer Guide](https://docs.brew.sh/Homebrew-homebrew-core-Maintainer-Guide) * [Homebrew/homebrew-cask Maintainer Guide](https://docs.brew.sh/Homebrew-homebrew-cask-Maintainer-Guide) * [Brew Test Bot For Maintainers](https://docs.brew.sh/Brew-Test-Bot-For-Core-Contributors) * [Common Issues for 
Maintainers](https://docs.brew.sh/Common-Issues-for-Core-Contributors) * [Releases](releases) * [Developer/Internal API Documentation](https://rubydoc.brew.sh) Governance ---------- * [Homebrew Governance](https://docs.brew.sh/Homebrew-Governance) * [Homebrew Leadership Responsibilities](homebrew-leadership-responsibilities) * [Homebrew Governance Archives](https://docs.brew.sh/Homebrew-Governance-Archives) homebrew Node for Formula Authors Node for Formula Authors ======================== This document explains how to successfully use Node and npm in a Node module based Homebrew formula. Running `npm install` --------------------- Homebrew provides two helper methods in a `Language::Node` module: `std_npm_install_args` and `local_npm_install_args`. They both set up the correct environment for npm and return arguments for `npm install` for their specific use cases. Your formula should use these instead of invoking `npm install` explicitly. The syntax for a standard Node module installation is: ``` system "npm", "install", *Language::Node.std_npm_install_args(libexec) ``` where `libexec` is the destination prefix (usually the `libexec` variable). Download URL ------------ If the Node module is also available on the npm registry, we prefer npm hosted release tarballs over GitHub (or elsewhere) hosted source tarballs. The advantages of these tarballs are that they don’t include the files from the `.npmignore` (such as tests) resulting in a smaller download size and that any possible transpilation step is already done (e.g. no need to compile CoffeeScript files as a build step). The npm registry URLs usually have the format of: ``` https://registry.npmjs.org/<name>/-/<name>-<version>.tgz ``` Alternatively you could `curl` the JSON at `https://registry.npmjs.org/<name>` and look for the value of `versions[<version>].dist.tarball` for the correct tarball URL. Dependencies ------------ Node modules which are compatible with the latest Node version should declare a dependency on the `node` formula. ``` depends_on "node" ``` If your formula requires being executed with an older Node version you should use one of the versioned node formulae (e.g. `node@12`). ### Special requirements for native addons If your Node module is a native addon or has a native addon somewhere in its dependency tree you have to declare an additional dependency. Since the compilation of the native addon results in an invocation of `node-gyp` we need an additional build time dependency on `"python"` (because GYP depends on Python). ``` depends_on "python" => :build ``` Also note that such a formula would only be compatible with the same Node major version it originally was compiled with. This means that we need to revision every formula with a Node native addon with every major version bump of the `node` formula. To make sure we don’t overlook your formula on a Node major version bump, write a meaningful test which would fail in such a case (invoked with an ABI-incompatible Node version). Installation ------------ Node modules should be installed to `libexec`. This prevents the Node modules from contaminating the global `node_modules`, which is important so that npm doesn’t try to manage Homebrew-installed Node modules. 
In the following we distinguish between two types of Node modules installed using formulae:

* formulae for standard Node modules compatible with npm’s global module format, which should use [`std_npm_install_args`](#installing-global-style-modules-with-std_npm_install_args-to-libexec) (like [`azure-cli`](https://github.com/Homebrew/homebrew-core/blob/0f3b27d252b8112c744e0460d871cfe1def6b993/Formula/azure-cli.rb) or [`webpack`](https://github.com/Homebrew/homebrew-core/blob/6282879973d569962e63da7c81ac4623e1a8336b/Formula/webpack.rb))
* formulae where the `npm install` call is not the only required install step (e.g. they also need to compile non-JavaScript sources), which have to use [`local_npm_install_args`](#installing-module-dependencies-locally-with-local_npm_install_args) (like [`elixirscript`](https://github.com/Homebrew/homebrew-core/blob/4bb491b7b246830aed57b97348a17e9401374978/Formula/elixirscript.rb) or [`grunt-cli`](https://github.com/Homebrew/homebrew-core/blob/93be1840908adb2f9ee8c48c66586ee6327480e3/Formula/grunt-cli.rb))

What both methods have in common is that they set the correct environment for using npm inside Homebrew and return the arguments for invoking `npm install` for their specific use cases. This includes fixing an important edge case with the npm cache (caused by Homebrew’s redirection of `HOME` during the build and test process) by using our own custom `npm_cache` inside `HOMEBREW_CACHE`, which would otherwise result in very long build times and high disk space usage.

To use them you have to require the Node language module at the beginning of your formula file with:

```
require "language/node"
```

### Installing global style modules with `std_npm_install_args` to `libexec`

In your formula’s `install` method, simply `cd` to the top level of your Node module if necessary and then use `system` to invoke `npm install` with `Language::Node.std_npm_install_args` like:

```
system "npm", "install", *Language::Node.std_npm_install_args(libexec)
```

This will install your Node module in npm’s global module style with a custom prefix to `libexec`. All your module’s executables will be automatically resolved by npm into `libexec/bin` for you, which is not symlinked into Homebrew’s prefix. To make sure these executables are available, symlink them into `bin` with:

```
bin.install_symlink Dir["#{libexec}/bin/*"]
```

### Installing module dependencies locally with `local_npm_install_args`

In your formula’s `install` method, do any installation steps which need to be done before the `npm install` step and then `cd` to the top level of the included Node module. Then, use `system` with `Language::Node.local_npm_install_args` to invoke `npm install` like:

```
system "npm", "install", *Language::Node.local_npm_install_args
```

This will install all of your Node module’s dependencies to your local build path. You can now continue with your build steps and take care of the installation into the Homebrew `prefix` on your own, following the [general Homebrew formula instructions](formula-cookbook).

Example
-------

Installing a standard Node module based formula would look like this:

```
require "language/node"

class Foo < Formula
  desc "..."
  homepage "..."
  url "https://registry.npmjs.org/foo/-/foo-1.4.2.tgz"
  sha256 "..."
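  # The url above follows the npm registry tarball format described in the
  # "Download URL" section: https://registry.npmjs.org/<name>/-/<name>-<version>.tgz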
depends_on "node" # uncomment if there is a native addon inside the dependency tree # depends_on "python" => :build def install system "npm", "install", *Language::Node.std_npm_install_args(libexec) bin.install_symlink Dir["#{libexec}/bin/*"] end test do # add a meaningful test here end end ``` Tooling ------- You can use [homebrew-npm-noob](https://github.com/zmwangx/homebrew-npm-noob) to automatically generate a formula like the example above for an npm package. homebrew Homebrew on Linux Homebrew on Linux ================= The Homebrew package manager may be used on Linux and [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/about). Homebrew was formerly referred to as Linuxbrew when running on Linux or WSL. It can be installed in your home directory, in which case it does not use *sudo*. Homebrew does not use any libraries provided by your host system, except *glibc* and *gcc* if they are new enough. Homebrew can install its own current versions of *glibc* and *gcc* for older distributions of Linux. [Features](#features), [installation instructions](#install) and [requirements](#requirements) are described below. Terminology (e.g. the difference between a Cellar, Tap, Cask and so forth) is [explained in the documentation](formula-cookbook#homebrew-terminology). Features -------- * Can install software to your home directory and so does not require *sudo* * Install software not packaged by your host distribution * Install up-to-date versions of software when your host distribution is old * Use the same package manager to manage your macOS, Linux, and Windows systems Install ------- Instructions for a supported install of Homebrew on Linux are on the [homepage](https://brew.sh). The installation script installs Homebrew to `/home/linuxbrew/.linuxbrew` using *sudo* if possible and within your home directory at `~/.linuxbrew` otherwise. Homebrew does not use *sudo* after installation. Using `/home/linuxbrew/.linuxbrew` allows the use of more binary packages (bottles) than installing in your personal home directory. The prefix `/home/linuxbrew/.linuxbrew` was chosen so that users without admin access can ask an admin to create a `linuxbrew` role account and still benefit from precompiled binaries. If you do not yourself have admin privileges, consider asking your admin staff to create a `linuxbrew` role account for you with home directory set to `/home/linuxbrew`. Follow the *Next steps* instructions to add Homebrew to your `PATH` and to your bash shell profile script, either `~/.profile` on Debian/Ubuntu or `~/.bash_profile` on CentOS/Fedora/Red Hat. ``` test -d ~/.linuxbrew && eval "$(~/.linuxbrew/bin/brew shellenv)" test -d /home/linuxbrew/.linuxbrew && eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)" test -r ~/.bash_profile && echo "eval \"\$($(brew --prefix)/bin/brew shellenv)\"" >> ~/.bash_profile echo "eval \"\$($(brew --prefix)/bin/brew shellenv)\"" >> ~/.profile ``` You’re done! Try installing a package: ``` brew install hello ``` If you’re using an older distribution of Linux, installing your first package will also install a recent version of *glibc* and *gcc*. Use `brew doctor` to troubleshoot common issues. 
Requirements
------------

* **GCC** 4.7.0 or newer
* **Linux** 2.6.32 or newer
* **Glibc** 2.13 or newer
* **64-bit x86\_64** CPU

To install build tools, paste at a terminal prompt:

* **Debian or Ubuntu**

```
sudo apt-get install build-essential procps curl file git
```

* **Fedora, CentOS, or Red Hat**

```
sudo yum groupinstall 'Development Tools'
sudo yum install procps-ng curl file git
sudo yum install libxcrypt-compat # needed by Fedora 30 and up
```

### ARM

Homebrew can run on 32-bit ARM (Raspberry Pi and others) and 64-bit ARM (AArch64), but no binary packages (bottles) are available. Support for ARM is on a best-effort basis. Pull requests are welcome to improve the experience on ARM platforms. You may need to install your own Ruby using your system package manager, a PPA, or `rbenv/ruby-build`, as we no longer distribute a Homebrew Portable Ruby for ARM.

### 32-bit x86

Homebrew does not currently support 32-bit x86 platforms. It would be possible for Homebrew to work on 32-bit x86 platforms with some effort. An interested and dedicated person could maintain a fork of Homebrew to develop support for 32-bit x86.

Homebrew on Linux Community
---------------------------

* [@HomebrewOnLinux on Twitter](https://twitter.com/HomebrewOnLinux)
* [Homebrew/discussions (forum)](https://github.com/homebrew/discussions/discussions)

homebrew Type Checking With Sorbet

Type Checking With Sorbet
=========================

The majority of the code in Homebrew is written in Ruby, which is a dynamic language. To gain the benefits of static type checking, we have set up Sorbet in our codebase, which brings static type checking to dynamic languages like Ruby. The [Sorbet Documentation](https://sorbet.org/docs/overview) is a good place to get started if you want to dive deeper into Sorbet and its abilities.

Sorbet in the Homebrew Codebase
-------------------------------

### Inline Type Annotations

To add type annotations to a class or module, we need to first extend it with the `T::Sig` module (read this as `Type::Signature`). This adds the `sig` method, which is used to annotate method signatures. Here’s a simple example:

```
class MyClass
  extend T::Sig

  sig { params(name: String).returns(String) }
  def my_method(name)
    "Hello, #{name}!"
  end
end
```

With `params`, we specify that we have a parameter `name` which must be a `String`, and with `returns`, we specify that this method always returns a `String`. For more information on how to express more complex types, refer to the official documentation:

* [Method Signatures](https://sorbet.org/docs/sigs)
* [Class Types](https://sorbet.org/docs/class-types)
* [Nilable Types](https://sorbet.org/docs/nilable-types)
* [Union Types](https://sorbet.org/docs/union-types)

### Ruby Interface Files (`.rbi`)

RBI files help Sorbet learn about constants, ancestors and methods defined in ways it doesn’t understand natively. We can also create an RBI file to help Sorbet understand dynamic definitions. Sometimes it is necessary to explicitly include the `Kernel` module in order for Sorbet to know that methods such as `puts` are available in a given context. This is mostly necessary for modules, since they can be used in both `BasicObject`s (which don’t include `Kernel`) and `Object`s (which include `Kernel` by default). In this case, it is necessary to create an `.rbi` file ([example](https://github.com/Homebrew/brew/blob/61b79318ed089b5010501e2cbf163fd8e48e2dfc/Library/Homebrew/global.rbi)) since re-including the `Kernel` module in actual code can break things.
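A minimal sketch of such an `.rbi` file (hypothetical module name) could look like:

```
# typed: strict

# Tell Sorbet that MyHelpers may use Kernel methods such as `puts`,
# without re-including Kernel in the actual code.
module MyHelpers
  include Kernel
end
```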
Read more about RBI files [here](https://sorbet.org/docs/rbi).

### The [`Library/Homebrew/sorbet`](https://github.com/Homebrew/brew/tree/master/Library/Homebrew/sorbet) Directory

* The `rbi` directory contains all Ruby Interface (`.rbi`) files auto-generated by running `brew typecheck --update`:
  + RBI files for all gems are generated using [Tapioca](https://github.com/Shopify/tapioca#tapioca).
  + Definitions for dynamic code (i.e. meta-programming) are generated using `srb rbi hidden-definitions`.
  + Definitions for missing constants are generated using `srb rbi todo`.
* The `config` file is a newline-separated list of arguments to pass to `srb tc`, the same as if they’d been passed at the command-line. Arguments in the config file are always passed first, followed by arguments provided on the command-line. We use it to ignore Gem directories which we do not wish to type check.
* Every Ruby file in the codebase has a magic `# typed: <level>` comment at the top, where `<level>` is one of [Sorbet’s strictness levels](https://sorbet.org/docs/static#file-level-granularity-strictness-levels), usually `false`, `true` or `strict`. The `false` files only report errors related to syntax, constant resolution and correctness of method signatures, but no type errors. Our long-term goal is to move all `false` files to `true` and start reporting type errors on those files as well. Therefore, when adding new files, you should ideally mark them with `# typed: true` and work out any resulting type errors.

Using `brew typecheck`
----------------------

When run without any arguments, `brew typecheck` will run considering the strictness levels set in each of the individual Ruby files in the core Homebrew codebase. However, when it is run on a specific file or directory, more errors may show up since Sorbet cannot resolve constants defined outside the scope of the specified file. These problems can be solved with RBI files. Currently `brew typecheck` provides `--quiet`, `--file`, `--dir` and `--ignore` options, but you can explore more options with `srb tc --help` and pass them through `srb tc`.

Resolving Type Errors
---------------------

Sorbet reports type errors along with an error reference code, which can be used to look up more information on how to debug the error, or what causes the error, in the [Sorbet Documentation](https://sorbet.org/docs/overview). Here is how to debug some common type errors:

* Using `T.reveal_type`. In files which are `true` or higher, if we wrap a variable or method call in `T.reveal_type`, Sorbet will show us what type it thinks that variable has in the output of `srb tc`. This is particularly useful when writing [method signatures](https://sorbet.org/docs/sigs) and debugging. Make sure to remove this line from your code before committing your changes, since it is just a debugging tool.
* One of the most frequent errors that we’ve encountered is `7003: Method does not exist.` Since Ruby is a very dynamic language, methods can be defined in ways Sorbet cannot see statically. In such cases, check if the method exists at runtime; if not, then Sorbet has caught a future bug! But it is also possible that even though a method exists at runtime, Sorbet cannot see it. In such cases, we use [`.rbi` files](#ruby-interface-files-rbi).
* Since Sorbet does not automatically assume that Kernel is to be included in Modules, we may encounter many errors while trying to use methods like `puts`, `ohai`, `odebug` et cetera.
A simple workaround for this would be to add an extra `include Kernel` line in the respective RBI file. * The tips above are very generic and apply to lots of cases. For some common gotchas when using Sorbet, refer to the [Sorbet Error Reference](https://sorbet.org/docs/error-reference) and [FAQ](https://sorbet.org/docs/faq).
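To make the `T.reveal_type` workflow above concrete, here is a throwaway sketch (not Homebrew code; delete the `T.reveal_type` line once you have read the output):

```
# typed: true
require "sorbet-runtime"

class Example
  extend T::Sig

  sig { params(input: String).returns(Integer) }
  def self.double_length(input)
    x = input.length * 2
    T.reveal_type(x) # `srb tc` reports: Revealed type: `Integer`
    x
  end
end
```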
homebrew Tips and Tricks Tips and Tricks =============== Install previous versions of formulae ------------------------------------- Some formulae in `homebrew/core` are made available as [versioned formulae](versions) using a special naming format, e.g. `gcc@7`. If the version you’re looking for isn’t available, consider using `brew extract`. Quickly remove something from Homebrew’s prefix ----------------------------------------------- ``` brew unlink <formula> ``` This can be useful if a package can’t build against the version of something you have linked into Homebrew’s prefix. And of course, you can simply `brew link <formula>` again afterwards! Pre-download a file for a formula --------------------------------- Sometimes it’s faster to download a file via means other than the strategies that are available as part of Homebrew. For example, Erlang provides a torrent that’ll let you download at 4–5× compared to the normal HTTP method. Downloads are saved in the `downloads` subdirectory of Homebrew’s cache directory (as specified by `brew --cache`, e.g. `~/Library/Caches/Homebrew`) and renamed as `<url-hash>--<formula>-<version>`. The command `brew --cache --build-from-source <formula>` will print the expected path of the cached download, so after downloading the file, you can run `mv the_tarball "$(brew --cache --build-from-source <formula>)"` to relocate it to the cache. You can also pre-cache the download by using the command `brew fetch <formula>` which also displays the SHA-256 hash. This can be useful for updating formulae to new versions. Install stuff without the Xcode CLT ----------------------------------- ``` brew sh # or: eval "$(brew --env)" gem install ronn # or c-programs ``` This imports the `brew` environment into your existing shell; `gem` will pick up the environment variables and be able to build. As a bonus, `brew`’s automatically determined optimization flags are set. Install only a formula’s dependencies (not the formula) ------------------------------------------------------- ``` brew install --only-dependencies <formula> ``` Use the interactive Homebrew shell ---------------------------------- ``` $ brew irb ==> Interactive Homebrew Shell Example commands available with: `brew irb --examples` irb(main):001:0> Formulary.factory("ace").methods - Object.methods => [:install, :test, :test_defined?, :sbin, :pkgshare, :elisp, :frameworks, :kext_prefix, :any_version_installed?, :etc, :pkgetc, ... :on_macos, :on_linux, :debug?, :quiet?, :verbose?, :with_context] irb(main):002:0> ``` Hide the beer mug emoji when finishing a build ---------------------------------------------- ``` export HOMEBREW_NO_EMOJI=1 ``` This sets the `HOMEBREW_NO_EMOJI` environment variable, causing Homebrew to hide all emoji. The beer emoji can also be replaced with other character(s): ``` export HOMEBREW_INSTALL_BADGE="☕️ 🐸" ``` Editor plugins -------------- ### Sublime Text * [Homebrew-formula-syntax](https://github.com/samueljohn/Homebrew-formula-syntax) can be installed with Package Control in Sublime Text 2/3, which adds highlighting for inline patches. ### Vim * [brew.vim](https://github.com/xu-cheng/brew.vim) adds highlighting to inline patches in Vim. ### Emacs * [homebrew-mode](https://github.com/dunn/homebrew-mode) provides syntax highlighting for inline patches as well as a number of helper functions for editing formula files. * [pcmpl-homebrew](https://github.com/hiddenlotus/pcmpl-homebrew) provides completion for emacs shell-mode and eshell-mode. 
### Atom

* [language-homebrew-formula](https://atom.io/packages/language-homebrew-formula) adds highlighting and diff support (with the [language-diff](https://atom.io/packages/language-diff) plugin).

homebrew How to Build Software Outside Homebrew with Homebrew keg_only Dependencies

How to Build Software Outside Homebrew with Homebrew `keg_only` Dependencies
============================================================================

What does “keg-only” mean?
--------------------------

The [FAQ](faq#what-does-keg-only-mean) briefly explains this. As an example: *OpenSSL isn’t symlinked into my `PATH` and non-Homebrew builds can’t find it!* This is because Homebrew isolates it within its individual prefix, rather than symlinking to the publicly available location.

Advice on potential workarounds
-------------------------------

A number of people in this situation are either forcefully linking keg-only tools with `brew link --force` or moving default system utilities out of the `PATH` and replacing them with manually created symlinks to the Homebrew-provided tool.

*Please* do not remove macOS native tools and forcefully replace them with symlinks back to the Homebrew-provided tool. Doing so can and likely will cause significant breakage when attempting to build software.

`brew link --force` creates a warning in `brew doctor` to let both you and maintainers know that a link exists that could be causing issues. If you’ve linked something and there are no problems at all, feel free to ignore the `brew doctor` warning.

How do I use those tools outside of Homebrew?
---------------------------------------------

Useful, reliable alternatives exist should you wish to use keg-only tools outside of Homebrew.

### Build flags

You can set flags to give configure scripts or Makefiles a nudge in the right direction. An example of flag setting:

```
./configure --prefix=/Users/Dave/Downloads CFLAGS="-I$(brew --prefix)/opt/openssl/include" LDFLAGS="-L$(brew --prefix)/opt/openssl/lib"
```

An example using `pip`:

```
CFLAGS="-I$(brew --prefix)/opt/icu4c/include" LDFLAGS="-L$(brew --prefix)/opt/icu4c/lib" pip install pyicu
```

### `PATH` modification

You can temporarily prepend your `PATH` with the tool’s `bin` directory, such as:

```
export PATH="$(brew --prefix)/opt/openssl/bin:${PATH}"
```

This will prepend the directory to your `PATH`, ensuring any build script that searches the `PATH` will find it first. Changing your `PATH` using this command ensures the change only exists for the duration of the shell session. Once the current session ends, the `PATH` reverts to its prior state.

### `pkg-config` detection

If the tool you are attempting to build is [pkg-config](https://en.wikipedia.org/wiki/Pkg-config) aware, you can amend your `PKG_CONFIG_PATH` to find a keg-only utility’s `.pc` files, if it has any. Not all formulae ship with these files. An example of this is:

```
export PKG_CONFIG_PATH="$(brew --prefix)/opt/openssl/lib/pkgconfig"
```

If you’re curious about the `PKG_CONFIG_PATH` variable, `man pkg-config` goes into more detail. You can get `pkg-config` to print the default search path with:

```
pkg-config --variable pc_path pkg-config
```

homebrew Adding Software To Homebrew

Adding Software To Homebrew
===========================

Is your favorite software missing from Homebrew? Then you’re the perfect person to resolve this problem. If you want to add software that is either closed source or a GUI-only program, you will want to follow the guide for [Casks](#casks).
Otherwise follow the guide for [Formulae](#formulae). See also: [Homebrew Terminology](formula-cookbook#homebrew-terminology) Before you start, please check the open pull requests for [Homebrew/homebrew-core](https://github.com/Homebrew/homebrew-core/pulls) or [Homebrew/homebrew-cask](https://github.com/Homebrew/homebrew-cask/pulls) to make sure no one else beat you to the punch. Next, you will want to go through the [Acceptable Formulae](acceptable-formulae) or [Acceptable Casks](acceptable-casks) documentation to determine if the software is an appropriate addition to Homebrew. If you are creating a formula for an alternative version of software already in Homebrew (e.g. a major/minor version that differs significantly from the existing version), be sure to read the [Versions](versions) documentation to understand versioned formulae requirements. If everything checks out, you’re ready to get started on a new formula! Formulae -------- ### Writing the formula 1. It’s a good idea to find existing formulae in Homebrew that have similarities to the software you want to add. This will help you to understand how specific languages, build methods, etc. are typically handled. 2. If you’re starting from scratch, you can use the [`brew create` command](manpage#create-options-url) to produce a basic version of your formula. This command accepts a number of options and you may be able to save yourself some work by using an appropriate template option like `--python`. 3. You will now have to develop the boilerplate code from `brew create` into a full-fledged formula. Your main references will be the [Formula Cookbook](formula-cookbook), similar formulae in Homebrew, and the upstream documentation for your chosen software. Be sure to also take note of the Homebrew documentation for writing [Python](python-for-formula-authors) and [Node](node-for-formula-authors) formulae, if applicable. 4. Make sure you write a good test as part of your formula. Refer to the [Add a test to the formula](formula-cookbook#add-a-test-to-the-formula) section of the Cookbook for help with this. 5. Try installing your formula using `brew install --build-from-source <formula>`, where <formula> is the name of your formula. If any errors occur, correct your formula and attempt to install it again. The formula installation should finish without errors by the end of this step. If you’re stuck, ask for help on GitHub or [Homebrew/discussions](https://github.com/homebrew/discussions/discussions). The maintainers are very happy to help but we also like to see that you’ve put effort into trying to find a solution first. ### Testing and auditing the formula 1. Run `brew audit --strict --new-formula --online <formula>` with your formula. If any errors occur, correct your formula and run the audit again. The audit should finish without any errors by the end of this step. 2. Run your formula’s test using `brew test <formula>`. The test should finish without any errors. ### Submitting the formula You’re finally ready to submit your formula to the [homebrew-core](https://github.com/Homebrew/homebrew-core/) repository. If you haven’t done this before, you can refer to the [How to Open a Pull Request](how-to-open-a-homebrew-pull-request) documentation for help. Maintainers will review the pull request and provide feedback about any areas that need to be addressed before the formula can be added to Homebrew. If you’ve made it this far, congratulations on submitting a Homebrew formula! 
We appreciate the hard work you put into this and you can take satisfaction in knowing that your work may benefit other Homebrew users as well. Casks ----- **Note:** Before taking the time to craft a new cask: * make sure it can be accepted by checking the [Rejected Casks FAQ](acceptable-casks#rejected-casks), and * check that the cask was not [already refused](https://github.com/Homebrew/homebrew-cask/search?q=is%3Aclosed&type=Issues). ### Writing the cask Making a new cask is easy. Follow the directions in [Getting Set Up To Contribute](https://github.com/Homebrew/homebrew-cask/blob/HEAD/CONTRIBUTING.md#getting-set-up-to-contribute) to begin. #### Examples Here’s a cask for `shuttle` as an example. Note the `verified` parameter below the `url`, which is needed when [the url and homepage hostnames differ](cask-cookbook#when-url-and-homepage-domains-differ-add-verified). ``` cask "shuttle" do version "1.2.9" sha256 "0b80bf62922291da391098f979683e69cc7b65c4bdb986a431e3f1d9175fba20" url "https://github.com/fitztrev/shuttle/releases/download/v#{version}/Shuttle.zip", verified: "github.com/fitztrev/shuttle/" name "Shuttle" desc "Simple shortcut menu" homepage "https://fitztrev.github.io/shuttle/" app "Shuttle.app" zap trash: "~/.shuttle.json" end ``` And here is one for `noisy`. Note that it has an unversioned download (the download `url` does not contain the version number, unlike the example above). It also suppresses the checksum with `sha256 :no_check`, which is necessary because since the download `url` does not contain the version number, its checksum will change when a new version is made available. ``` cask "noisy" do version "1.3" sha256 :no_check url "https://github.com/downloads/jonshea/Noisy/Noisy.zip" name "Noisy" desc "White noise generator" homepage "https://github.com/jonshea/Noisy" app "Noisy.app" end ``` Here is a last example for `airdisplay`, which uses a `pkg` installer to install the application instead of a stand-alone application bundle (`.app`). Note the [`uninstall pkgutil` stanza](cask-cookbook#uninstall-key-pkgutil), which is needed to uninstall all files that were installed using the installer. You will also see how to adapt `version` to the download `url`. Use [our custom `version` methods](cask-cookbook#version-methods) to do so, resorting to the standard [Ruby String methods](https://ruby-doc.org/core/String.html) when they don’t suffice. ``` cask "airdisplay" do version "3.4.2,26581" sha256 "272d14f33b3a4a16e5e0e1ebb2d519db4e0e3da17f95f77c91455b354bee7ee7" url "https://www.avatron.com/updates/software/airdisplay/ad#{version.before_comma.no_dots}.zip" name "Air Display" desc "Utility for using a tablet as a second monitor" homepage "https://avatron.com/applications/air-display/" livecheck do url "https://www.avatron.com/updates/software/airdisplay/appcast.xml" strategy :sparkle end depends_on macos: ">= :mojave" pkg "Air Display Installer.pkg" uninstall pkgutil: [ "com.avatron.pkg.AirDisplay", "com.avatron.pkg.AirDisplayHost2", ] end ``` #### Generating a token for the cask The cask **token** is the mnemonic string people will use to interact with the cask via `brew install`, etc. The name of the cask **file** is simply the token with the extension `.rb` appended. 
The easiest way to generate a token for a cask is to run this command: ``` $(brew --repository homebrew/cask)/developer/bin/generate_cask_token "/full/path/to/new/software.app" ``` If the software you wish to create a cask for is not installed, or does not have an associated App bundle, just give the full proper name of the software instead of a pathname: ``` $(brew --repository homebrew/cask)/developer/bin/generate_cask_token "Google Chrome" ``` If the `generate_cask_token` script does not work for you, see [Cask Token Details](#cask-token-details). #### Creating the cask file Once you know the token, create your cask with the handy-dandy `brew create --cask` command: ``` brew create --cask download-url --set-name my-new-cask ``` This will open `$EDITOR` with a template for your new cask, to be stored in the file `my-new-cask.rb`. Running the `create` command above will get you a template that looks like this: ``` cask "my-new-cask" do version "" sha256 "" url "download-url" name "" desc "" homepage "" app "" end ``` #### Cask stanzas Fill in the following stanzas for your cask: | name | value | | --- | --- | | `version` | application version | | `sha256` | SHA-256 checksum of the file downloaded from `url`, calculated by the command `shasum -a 256 <file>`. Can be suppressed by using the special value `:no_check`. (see [`sha256` Stanza Details](cask-cookbook#stanza-sha256)) | | `url` | URL to the `.dmg`/`.zip`/`.tgz`/`.tbz2` file that contains the application.A [`verified` parameter](cask-cookbook#when-url-and-homepage-domains-differ-add-verified) must be added if the hostnames in the `url` and `homepage` stanzas differ. [Block syntax](cask-cookbook#using-a-block-to-defer-code-execution) is available for URLs that change on every visit | | `name` | the full and proper name defined by the vendor, and any useful alternate names (see [`name` Stanza Details](cask-cookbook#stanza-name)) | | `desc` | one-line description of the software (see [`desc` Stanza Details](cask-cookbook#stanza-desc)) | | `homepage` | application homepage; used for the `brew home` command | | `app` | relative path to an `.app` bundle that should be moved into the `/Applications` folder on installation (see [`app` Stanza Details](cask-cookbook#stanza-app)) | Other commonly used stanzas are: | name | value | | --- | --- | | `livecheck` | Ruby block describing how to find updates for this cask (see [`livecheck` Stanza Details](cask-cookbook#stanza-livecheck)) | | `pkg` | relative path to a `.pkg` file containing the distribution (see [`pkg` Stanza Details](cask-cookbook#stanza-pkg)) | | `caveats` | a string or Ruby block providing the user with cask-specific information at install time (see [`caveats` Stanza Details](cask-cookbook#stanza-caveats)) | | `uninstall` | procedures to uninstall a cask. Optional unless the `pkg` stanza is used. (see [`uninstall` Stanza Details](cask-cookbook#stanza-uninstall)) | | `zap` | additional procedures for a more complete uninstall, including configuration files and shared resources (see [`zap` Stanza Details](cask-cookbook#stanza-zap)) | Additional [`artifact` stanzas](cask-cookbook#at-least-one-artifact-stanza-is-also-required) may be needed for special use cases. Even more special-use stanzas are listed at [Optional Stanzas](cask-cookbook#optional-stanzas). #### Cask token details If a token conflicts with an already-existing cask, authors should manually make the new token unique by prepending the vendor name. 
Example: [unison.rb](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/unison.rb) and [panic-unison.rb](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/panic-unison.rb). If possible, avoid creating tokens that differ only by the placement of hyphens. To generate a token manually, or to learn about exceptions for unusual cases, see the [Token Reference](cask-cookbook#token-reference). #### Archives with subfolders When a downloaded archive expands to a subfolder, the subfolder name must be included in the `app` value. Example: 1. Texmaker is downloaded to the file `TexmakerMacosxLion.zip`. 2. `TexmakerMacosxLion.zip` unzips to a folder called `TexmakerMacosxLion`. 3. The folder `TexmakerMacosxLion` contains the application `texmaker.app`. 4. So, the `app` stanza should include the subfolder as a relative path: ``` app "TexmakerMacosxLion/texmaker.app" ``` ### Testing and auditing the cask Give it a shot with: ``` export HOMEBREW_NO_AUTO_UPDATE=1 brew install my-new-cask ``` Did it install? If something went wrong, edit your cask with `brew edit my-new-cask` to fix it. Test also if the uninstall works successfully: ``` brew uninstall my-new-cask ``` If everything looks good, you’ll also want to make sure your cask passes audit with: ``` brew audit --new-cask my-new-cask ``` You should also check stylistic details with `brew style`: ``` brew style --fix my-new-cask ``` Keep in mind that all these checks will be made when you submit your PR, so by doing them in advance you’re saving everyone a lot of time and trouble. If your application and Homebrew Cask do not work well together, feel free to [file an issue](https://github.com/Homebrew/homebrew-cask#reporting-bugs) after checking out open issues. ### Submitting the cask #### Finding a home for your cask See the [Acceptable Casks documentation](acceptable-casks#finding-a-home-for-your-cask). Hop into your Tap and check to make sure your new cask is there: ``` $ cd "$(brew --repository)"/Library/Taps/homebrew/homebrew-cask $ git status # On branch master # Untracked files: # (use "git add <file>..." to include in what will be committed) # # Casks/my-new-cask.rb ``` So far, so good. Now make a feature branch `my-new-cask-branch` that you’ll use in your pull request: ``` $ git checkout -b my-new-cask-branch Switched to a new branch 'my-new-cask-branch' ``` Stage your cask with: ``` git add Casks/my-new-cask.rb ``` You can view the changes that are to be committed with: ``` git diff --cached ``` Commit your changes with: ``` git commit -v ``` #### Commit messages For any Git project, some good rules for commit messages are: * The first line is the commit summary, 50 characters or less, * Followed by an empty line, * Followed by an explanation of the commit, wrapped to 72 characters. See [A Note About Git Commit Messages](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html) for more. The first line of a commit message becomes the **title** of a pull request on GitHub, like the subject line of an email. Including the key info in the first line will help us respond faster to your pull request. For cask commits in the Homebrew Cask project, we like to include the application name, version number, and purpose of the commit in the first line. 
Examples of good, clear commit summaries: * `Add Transmission.app v1.0` * `Upgrade Transmission.app to v2.82` * `Fix checksum in Transmission.app cask` * `Add CodeBox Latest` Examples of difficult, unclear commit summaries: * `Upgrade to v2.82` * `Checksum was bad` #### Pushing Push your changes on the branch `my-new-cask-branch` to your GitHub account: ``` git push my-new-cask-branch ``` If you are using [GitHub two-factor authentication](https://docs.github.com/en/authentication/securing-your-account-with-two-factor-authentication-2fa) and have set your remote repository as HTTPS you will need to [set up a personal access token](https://docs.github.com/en/repositories/creating-and-managing-repositories/troubleshooting-cloning-errors#provide-an-access-token) and use that instead of your password. #### Filing a pull request on GitHub ##### a) use suggestion from `git push` The `git push` command prints a suggestion for how to create a pull request: ``` remote: Create a pull request for 'new-cask-cask' on GitHub by visiting: remote: https://github.com//homebrew-cask/pull/new/my-new-cask-branch ``` ##### b) use suggestion from GitHub’s website Now go to the [`homebrew-cask` GitHub repository](https://github.com/Homebrew/homebrew-cask). GitHub will often show your `my-new-cask-branch` branch with a handy button to `Compare & pull request`. ##### c) manually create a pull request on GitHub Otherwise, click the `Contribute > Open pull request` button and choose to `compare across forks`. The base fork should be `Homebrew/homebrew-cask @ master`, and the head fork should be `my-github-username/homebrew-cask @ my-new-cask-branch`. You can also add any further comments to your pull request at this stage. ##### Congratulations! You are done now, and your cask should be pulled in or otherwise noticed in a while. If a maintainer suggests some changes, just make them on the `my-new-cask-branch` branch locally and [push](#pushing). ### Cleaning up After your pull request is submitted, you should get yourself back onto `master`, so that `brew update` will pull down new casks properly: ``` cd "$(brew --repository)"/Library/Taps/homebrew/homebrew-cask git checkout master ``` If earlier you set the variable `HOMEBREW_NO_AUTO_UPDATE` then clean it up with: ``` unset HOMEBREW_NO_AUTO_UPDATE ```
programming_docs
homebrew Releases

Releases
========

Since Homebrew 1.0.0, most Homebrew users (those who haven’t run a `dev-cmd` or set `HOMEBREW_DEVELOPER=1`; ~99.9% based on analytics data) require tags on the [Homebrew/brew repository](https://github.com/homebrew/brew) in order to get new versions of Homebrew. There are a few steps in making a new Homebrew release:

1. Check the [Homebrew/brew pull requests](https://github.com/homebrew/brew/pulls), [issues](https://github.com/homebrew/brew/issues), [Homebrew/homebrew-core issues](https://github.com/homebrew/homebrew-core/issues) and [Homebrew/discussions (forum)](https://github.com/homebrew/discussions/discussions) to see if there is anything pressing that needs to be fixed or merged before the next release. If so, fix and merge these changes.
2. Ensure that no code changes have happened for at least a couple of hours (ideally 4 hours), that at least one Homebrew/homebrew-core pull request CI job has completed successfully, that the Homebrew/brew `master` CI job is in a good state (i.e. main jobs green or green after rerunning), and that you are confident there are no major regressions on the current `master` branch.
3. Run `brew release` to create a new draft release. For major or minor version bumps, pass `--major` or `--minor`, respectively.
4. Publish the draft release on [GitHub](https://github.com/Homebrew/brew/releases).

If this is a major or minor release (e.g. X.0.0 or X.Y.0) then there are a few more steps:

1. Before creating the tag you should delete any `odisabled` code, make any `odeprecated` code `odisabled`, uncomment any `# odeprecated` code and add any new `odeprecations` that are desired. Also delete any command argument definitions that pass `replacement: ...`.
2. Write up a release notes blog post for <https://brew.sh>, e.g. [brew.sh#319](https://github.com/Homebrew/brew.sh/pull/319). This should use the output from `brew release [--major|--minor]` as input but have the wording adjusted to be more human readable and explain not just what has changed but why.
3. When the release has shipped and the blog post has been merged, tweet the blog post as the [@MacHomebrew Twitter account](https://twitter.com/MacHomebrew) or tweet it yourself and retweet it with the @MacHomebrew Twitter account (credentials are in 1Password).
4. Consider whether to submit it to other sources, e.g. Hacker News, Reddit.
   * Pros: gets a wider reach and user feedback
   * Cons: negative comments are common and people take this as a chance to complain about Homebrew (regardless of their usage)

Please do not manually create a release based on older commits on the `master` branch. It’s very hard to judge whether these have been sufficiently tested by users or if they will cause negative side-effects with the current state of Homebrew/homebrew-core. If a new release is needed ASAP but there are things on `master` that cannot be released yet (e.g. new deprecations and you want to make a patch release) then revert the relevant PRs, follow the process above and then revert the reverted PRs to reapply them on `master`.

homebrew MD5 and SHA-1 Deprecation

MD5 and SHA-1 Deprecation
=========================

In early 2015 Homebrew started the process of deprecating *SHA1* for package integrity verification. Since then formulae under the Homebrew organisation have been migrated to use *SHA256* for verification; this includes both source packages and our precompiled packages (bottles). Homebrew has since stopped supporting *SHA1* and *MD5* entirely.
*MD5* checksums were removed from core formulae in 2012 and as of April 2015 installing a formula verified by *MD5* is actively blocked. We removed *SHA1* support in **November 2016**, 21 months after we started warning people to move away from it for verification. This is enforced in the same way *MD5* is, by blocking the installation of that individual formula until the checksum is migrated. This means custom taps, local custom formulae, etc. need to be migrated to use *SHA256* before you can install them. homebrew Gems, Eggs and Perl Modules Gems, Eggs and Perl Modules =========================== On a fresh macOS installation there are three empty directories for add-ons available to all users: * `/Library/Ruby` * `/Library/Python` * `/Library/Perl` You need sudo to install to these like so: `sudo gem install`, `sudo easy_install` or `sudo cpan -i`. Python packages (eggs) without sudo using system Python ------------------------------------------------------- An option to avoid sudo is to use an access control list. For example: ``` chmod +a 'user:<YOUR_NAME_HERE> allow add_subdirectory,add_file,delete_child,directory_inherit' /Library/Python/3.y/site-packages ``` will let you add packages to Python 3.y as yourself, which is probably safer than changing the group ownership of the directory. ### So why was I using sudo? Habit maybe? One reason is executables go in `/usr/local/bin`. Usually this isn’t a writable location. But if you installed Homebrew as we recommend on macOS Intel, `/usr/local` will be writable without sudo. So now you are good to install the development tools you need without risking the use of sudo. ### An alternative package path *This is only recommended if you **don’t** use a brewed Python.* On macOS, any Python version X.Y [also searches in `~/Library/Python/X.Y/lib/python/site-packages` for modules](https://docs.python.org/2/install/index.html#alternate-installation-the-user-scheme). That path might not yet exist, but you can create it: ``` mkdir -p ~/Library/Python/2.7/lib/python/site-packages ``` To teach `easy_install` and `pip` to install there, either use the `--user` switch or create a `~/.pydistutils.cfg` file with the following content: ``` [install] install_lib = ~/Library/Python/$py_version_short/lib/python/site-packages ``` ### Using virtualenv (with system Python) [Virtualenv](https://virtualenv.pypa.io/) ships `pip` and creates isolated Python environments with separate `site-packages`, which therefore don’t need sudo. Rubygems without sudo --------------------- *This is only recommended if you **don’t** use rbenv or RVM.* Brewed Ruby installs executables to `$(brew --prefix)/opt/ruby/bin` without sudo. You should add this to your path. See the caveats in the `ruby` formula for up-to-date information. ### With system Ruby To make Ruby install to `/usr/local`, we need to add `gem: -n/usr/local/bin` to your `~/.gemrc`. It’s YAML, so do it manually or use this: ``` echo "gem: -n/usr/local/bin" >> ~/.gemrc ``` **However, all versions of RubyGems before 1.3.6 are buggy** and ignore the above setting. Sadly a fresh install of Snow Leopard comes with 1.3.5. Currently the only known way to get around this is to upgrade rubygems as root: ``` sudo gem update --system ``` ### An alternative gem path Just install everything into the Homebrew prefix like this: ``` echo "export GEM_HOME=\"$(brew --prefix)\"" >> ~/.bashrc ``` ### It doesn’t work! I get some “permissions” error when I try to install stuff! 
*Note that you may not want to do this, since Apple has decided it is not a good default.*

If you ever ran `sudo gem` etc. before, then a lot of files will have been created owned by root. Fix with:

```
sudo chown -R $(whoami) /Library/Ruby/* /Library/Perl/* /Library/Python/*
```

Perl CPAN modules without sudo
------------------------------

The Perl module `local::lib` works similarly to rbenv/RVM (although for modules only, not Perl installations). A simple solution that only pollutes your `/Library/Perl` a little is to install [`local::lib`](https://metacpan.org/pod/local::lib) with sudo:

```
sudo cpan local::lib
```

Note that this will install some other dependencies like `Module::Install`. Then put the appropriate incantation in your shell’s startup, e.g. for `.profile` you’d insert the below; for others see the [`local::lib`](https://metacpan.org/pod/local::lib) docs.

```
eval "$(perl -I$HOME/perl5/lib/perl5 -Mlocal::lib)"
```

Now (after you restart your shell) `cpan` or `perl -MCPAN -eshell` etc. will install modules and binaries in `~/perl5` and the relevant subdirectories will be in your `PATH` and `PERL5LIB`.

### Avoiding sudo altogether for Perl

If you don’t even want (or can’t) use sudo for bootstrapping `local::lib`, just manually install `local::lib` in `~/perl5` and add the relevant path to `PERL5LIB` before the `.bashrc` eval incantation.

Another alternative is to use `perlbrew` to install a separate copy of Perl in your home directory, or wherever you like:

```
curl -L https://install.perlbrew.pl | bash
perlbrew install perl-5.16.2
echo ". ~/perl5/perlbrew/etc/bashrc" >> ~/.bashrc
```

homebrew Common Issues

Common Issues
=============

This is a list of commonly encountered problems, known issues, and their solutions.

Running `brew`
--------------

### `brew` complains about absence of “Command Line Tools”

You need to have the Xcode Command Line Tools installed (and updated): run `xcode-select --install` in the terminal.

### Ruby: `bad interpreter: /usr/bin/ruby^M: no such file or directory`

You cloned with `git`, and your Git configuration is set to use Windows line endings. See this page on [configuring Git to handle line endings](https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings).

### Ruby: `bad interpreter: /usr/bin/ruby`

You don’t have a `/usr/bin/ruby` or it is not executable. It’s not recommended to let this persist; you’d be surprised how many `.app`s, tools and scripts expect your macOS-provided files and directories to be *unmodified* since macOS was installed.

### `brew update` complains about untracked working tree files

After running `brew update`, you receive a Git error warning about untracked files or local changes that would be overwritten by a checkout or merge, followed by a list of files inside your Homebrew installation. This is caused by an old bug in the `update` code that has long since been fixed. However, the nature of the bug requires that you do the following:

```
cd "$(brew --repository)"
git reset --hard FETCH_HEAD
```

If `brew doctor` still complains about uncommitted modifications, also run this command:

```
cd "$(brew --repository)/Library"
git clean -fd
```

### `launchctl` refuses to load launchd plist files

When trying to load a plist file with `launchctl`, you receive an error that resembles either:

```
Bug: launchctl.c:2325 (23930):13: (dbfd = open(g_job_overrides_db_path, [...]
launch_msg(): Socket is not connected ``` or: ``` Could not open job overrides database at: /private/var/db/launchd.db/com.apple.launchd/overrides.plist: 13: Permission denied launch_msg(): Socket is not connected ``` These are likely due to one of four issues: 1. You are using iTerm. The solution is to use Terminal.app when interacting with `launchctl`. 2. You are using a terminal multiplexer such as `tmux` or `screen`. You should interact with `launchctl` from a separate Terminal.app shell. 3. You are attempting to run `launchctl` while logged in remotely. You should enable screen sharing on the remote machine and issue the command using Terminal.app running on that machine. 4. You are `su`‘ed as a different user. ### `brew upgrade` errors out When running `brew upgrade`, you see something like this: ``` Error: undefined method `include?' for nil:NilClass Please report this bug: https://docs.brew.sh/Troubleshooting /usr/local/Library/Homebrew/formula.rb:393:in `canonical_name' /usr/local/Library/Homebrew/formula.rb:425:in `factory' /usr/local/Library/Contributions/examples/brew-upgrade.rb:7 /usr/local/Library/Contributions/examples/brew-upgrade.rb:7:in `map' /usr/local/Library/Contributions/examples/brew-upgrade.rb:7 /usr/local/bin/brew:46:in `require' /usr/local/bin/brew:46:in `require?' /usr/local/bin/brew:79 ``` This happens because an old version of the upgrade command is hanging around for some reason. The fix: ``` cd "$(brew --repository)/Library/Contributions/examples" git clean -n # if this doesn't list anything that you want to keep, then git clean -f # this will remove untracked files ``` ### Python: `easy-install.pth` cannot be linked ``` Warning: Could not link <formula>. Unlinking... Error: The `brew link` step did not complete successfully The formula built, but is not symlinked into /usr/local You can try again using `brew link <formula>' Possible conflicting files are: /usr/local/lib/python2.7/site-packages/site.py /usr/local/lib/python2.7/site-packages/easy-install.pth ==> Could not symlink file: /homebrew/Cellar/<formula>/<version>/lib/python2.7/site-packages/site.py Target /usr/local/lib/python2.7/site-packages/site.py already exists. You may need to delete it. To force the link and overwrite all other conflicting files, do: brew link --overwrite formula_name To list all files that would be deleted: brew link --overwrite --dry-run formula_name ``` Don’t follow the advice here but fix by using `Language::Python.setup_install_args` in the formula as described in [Python for Formula Authors](python-for-formula-authors). Upgrading macOS --------------- Upgrading macOS can cause errors like the following: * `dyld: Library not loaded: /usr/local/opt/icu4c/lib/libicui18n.54.dylib` * `configure: error: Cannot find libz` Following a macOS upgrade it may be necessary to reinstall the Xcode Command Line Tools and then `brew upgrade` all installed formulae: ``` xcode-select --install brew upgrade ``` Cask - cURL error ----------------- First, let’s tackle a common problem: do you have a `.curlrc` file? Check with `ls -A ~ | grep .curlrc` (if you get a result, the file exists). Those are a frequent cause of issues of this nature. Before anything else, remove that file and try again. If it now works, do not open an issue. Incompatible `.curlrc` configurations must be fixed on your side. If, however, you do not have a `.curlrc` or removing it did not work, let’s see if the issue is upstream: 1. Go to the vendor’s website (`brew home <cask_name>`). 2. 
Find the download link for the app and click on it. ### If the download works The cask is outdated. Let’s fix it: 1. Look around the app’s website and find out what the latest version is. It will likely be expressed in the URL used to download it. 2. Take a look at the cask’s version (`brew cat <cask_name>`) and verify it is indeed outdated. If the app’s version is `:latest`, it means the `url` itself is outdated. It will need to be changed to the new one. Help us by [submitting a fix](https://github.com/Homebrew/homebrew-cask/blob/HEAD/CONTRIBUTING.md#updating-a-cask). If you get stumped, [open an issue](https://github.com/Homebrew/homebrew-cask/issues/new?template=01_bug_report.md) explaining your steps so far and where you’re having trouble. ### If the download does not work The issue isn’t in any way related to Homebrew Cask, but with the vendor or your connection. Start by diagnosing your connection (try to download other casks, go around the web). If the problem is with your connection, try a website like [Ask Different](https://apple.stackexchange.com/) to ask for advice. If you’re sure the issue is not with your connection, contact the app’s vendor and let them know their link is down, so they can fix it. **Do not open an issue.** Cask - checksum does not match ------------------------------ First, check if the problem was with your download. Delete the downloaded file (its location will be pointed out in the error message) and try again. If the problem persists, the cask must be outdated. It’ll likely need a new version, but it’s possible the version has remained the same (happens occasionally when the vendor updates the app in place). 1. Go to the vendor’s website (`brew home <cask_name>`). 2. Find out what the latest version is. It may be expressed in the URL used to download it. 3. Take a look at the cask’s version (`brew info <cask_name>`) and verify it is indeed outdated. If it is: Help us by [submitting a fix](https://github.com/Homebrew/homebrew-cask/blob/HEAD/CONTRIBUTING.md#updating-a-cask). If you get stumped, [open an issue](https://github.com/Homebrew/homebrew-cask/issues/new?template=01_bug_report.md) explaining your steps so far and where you’re having trouble. Cask - permission denied ------------------------ In this case, it’s likely your user account has no admin rights so you don’t have permissions to write to `/Applications` (which is the default). You can use [`--appdir`](https://github.com/Homebrew/homebrew-cask/blob/HEAD/USAGE.md#options) to choose where to install your applications. If `--appdir` doesn’t fix the issue or you do have write permissions to `/Applications`, verify you’re the owner of the `Caskroom` directory by running `ls -dl "$(brew --prefix)/Caskroom"` and checking the third field. If you are not the owner, fix it with `sudo chown -R "$(whoami)" "$(brew --prefix)/Caskroom"`. If you are, the problem may lie in the app bundle itself. Some app bundles don’t have certain permissions that are necessary for us to move them to the appropriate location. You may check such permissions with `ls -ls <path_to_app_bundle>`. If you see something like `dr-xr-xr-x` at the start of the output, that may be the cause. To fix it, we change the app bundle’s permission to allow us to move it, and then set it back to what it was (in case the developer set those permissions deliberately). See [`litecoin`](https://github.com/Homebrew/homebrew-cask/blob/0cde71f1fea8ad62d6ec4732fcf35ac0c52d8792/Casks/litecoin.rb#L14L20) for an example of such a cask. 
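In shell terms, that inspect-and-restore dance looks roughly like this (the paths and app name are placeholders; `ls -ld` is used so the bundle itself is shown rather than its contents):

```
# check the bundle's mode: dr-xr-xr-x means no write permission
ls -ld ~/Downloads/SomeApp.app
# grant yourself write permission so the bundle can be moved
chmod u+w ~/Downloads/SomeApp.app
# once the app is in place, restore the original mode in case the
# developer set those permissions deliberately
chmod u-w /Applications/SomeApp.app
```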
Help us by [submitting a fix](https://github.com/Homebrew/homebrew-cask/blob/HEAD/CONTRIBUTING.md#updating-a-cask). If you get stumped, [open an issue](https://github.com/Homebrew/homebrew-cask/issues/new?template=01_bug_report.md) explaining your steps so far and where you’re having trouble.

Cask - source is not there
--------------------------

First, you need to identify which artifact is not being handled correctly anymore. It’s explicit in the error message: if it says `It seems the App source…` the problem is [`app`](cask-cookbook#stanza-app). The pattern is the same across [all artifacts](cask-cookbook#at-least-one-artifact-stanza-is-also-required).

Fixing this error is typically easy, and requires only a bit of time on your part. Start by downloading the package for the cask: `brew fetch <cask_name>`. The last line of output will inform you of the location of the download. Navigate there and manually unpack it. As an example, let’s say the structure inside the archive is as follows:

```
.
├─ Files/SomeApp.app
├─ Files/script.sh
└─ README.md
```

Now, let’s look at the cask (`brew cat <cask_name>`):

```
(…)
app "SomeApp.app"
(…)
```

The cask was expecting `SomeApp.app` to be in the top directory of the archive (see how it says simply `SomeApp.app`) but the developer changed it to inside a `Files` directory. All we have to do is update that line of the cask to follow the new structure: `app "Files/SomeApp.app"`.

Note that occasionally the app’s name changes completely (from `SomeApp.app` to `OtherApp.app`, let’s say). In these instances, the filename of the cask itself, as well as its token, must also change. Consult the [token reference](cask-cookbook#token-reference) for complete instructions on the new name.

Help us by [submitting a fix](https://github.com/Homebrew/homebrew-cask/blob/HEAD/CONTRIBUTING.md#updating-a-cask). If you get stumped, [open an issue](https://github.com/Homebrew/homebrew-cask/issues/new?template=01_bug_report.md) explaining your steps so far and where you’re having trouble.

Cask - wrong number of arguments
--------------------------------

Make sure the issue really lies with your macOS version. To do so, try to install the software manually. If it is incompatible with your macOS version, it will tell you. In that case, there is nothing we can do to help you install the software, but we can add a [`depends_on macos:`](cask-cookbook#depends_on-macos) stanza to prevent the cask from trying to install on incompatible macOS versions.

Help us by [submitting a fix](https://github.com/Homebrew/homebrew-cask/blob/HEAD/CONTRIBUTING.md#updating-a-cask). If you get stumped, [open an issue](https://github.com/Homebrew/homebrew-cask/issues/new?template=01_bug_report.md) explaining your steps so far and where you’re having trouble.

Other local issues
------------------

If your Homebrew installation gets messed up (and fixing the issues found by `brew doctor` doesn’t solve the problem), reinstalling Homebrew may help to reset to a normal state. To easily reinstall Homebrew, use [Homebrew Bundle](https://github.com/Homebrew/homebrew-bundle) to automatically restore your installed formulae and casks. To do so, run `brew bundle dump`, [uninstall](faq#how-do-i-uninstall-homebrew), [reinstall](installation) and run `brew bundle install`.
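For reference, the backup-and-restore cycle described above boils down to something like the following, with the uninstall and reinstall steps from the linked guides in between:

```
brew bundle dump      # writes a Brewfile recording installed taps, formulae and casks
# ...uninstall Homebrew, then reinstall it, then:
brew bundle install   # reinstalls everything recorded in the Brewfile
```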
homebrew Renaming a Formula

Renaming a Formula
==================

Sometimes software and formulae need to be renamed. To rename a formula you need to:

1. Rename the formula file and its class to the new formula name. The new name must meet all the usual rules of formula naming. Fix any test failures that may occur due to the stricter requirements for new formulae than existing formulae (i.e. `brew audit --online --new-formula` must pass for that formula).
2. Create a pull request to the corresponding tap deleting the old formula file, adding the new formula file, and adding it to `formula_renames.json` with a commit message like `newack: renamed from ack`. Use the canonical name (e.g. `ack` instead of `user/repo/ack`).

A `formula_renames.json` example for a formula rename:

```
{
  "ack": "newack"
}
```

homebrew External Commands

External Commands
=================

Homebrew, like Git, supports *external commands*. This lets you create new commands that can be run like:

```
brew mycommand --option1 --option3 <formula>
```

without modifying Homebrew’s internals.

Command types
-------------

External commands come in two flavours: Ruby commands and shell scripts. In both cases, the command file should be executable (`chmod +x`) and live somewhere in your `PATH`.

External commands can be added to a tap to allow easy distribution. See [below](#external-commands-in-taps) for more details.

### Ruby commands

An external command `extcmd` implemented as a Ruby command should be named `brew-extcmd.rb`. The command is executed by doing a `require` on the full pathname. As the command is `require`d, it has full access to the Homebrew “environment”, i.e. all global variables and modules that any internal command has access to. Be wary of using Homebrew internals; they may change at any time without warning.

The command may `Kernel.exit` with a status code if it needs to; if it doesn’t explicitly exit then Homebrew will return `0`.

### Other executable scripts

An executable script for a command named `extcmd` should be named `brew-extcmd`. The script itself can use any suitable shebang (`#!`) line, so an external script can be written in Bash, Ruby, or even Python. Unlike Ruby commands, this file must not end with a language-specific suffix (`.sh` or `.py`). This file will be run via `exec` with some Homebrew variables set as environment variables, and passed any additional command-line arguments.

| Variable | Description |
| --- | --- |
| `HOMEBREW_CACHE` | Where Homebrew caches downloaded tarballs to, by default `~/Library/Caches/Homebrew`. |
| `HOMEBREW_PREFIX` | Where Homebrew installs software. `/usr/local` by default for macOS Intel, `/opt/homebrew` for Apple Silicon and `/home/linuxbrew/.linuxbrew` for Linux. |
| `HOMEBREW_CELLAR` | The location of the Homebrew Cellar, where software is staged. This will be `HOMEBREW_PREFIX/Cellar` if that directory exists, or `HOMEBREW_REPOSITORY/Cellar` otherwise. |
| `HOMEBREW_LIBRARY_PATH` | The directory containing Homebrew’s own application code. |
| `HOMEBREW_REPOSITORY` | The Git repository directory (i.e. where Homebrew’s `.git` directory lives). Usually either the same as `HOMEBREW_PREFIX` or a `Homebrew` subdirectory.
| Providing `--help` ------------------ All internal and external Homebrew commands can provide styled `--help` output by using Homebrew’s [argument parser](https://rubydoc.brew.sh/Homebrew/CLI/Parser.html), as seen in the [`brew services` command](https://github.com/Homebrew/homebrew-services/blob/HEAD/cmd/services.rb); or by including lines starting with `#:` (a comment then `:` character in both Bash and Ruby), as seen in the [header of `update.sh`](https://github.com/Homebrew/brew/blob/cf7def0c68903814c6b4e04a55fe8f3cb3f5605e/Library/Homebrew/cmd/update.sh#L1-L10), which is printed with `brew update --help`. Unofficial external commands ---------------------------- These commands have been contributed by Homebrew users but are not included in the main Homebrew organisation, nor are they installed by the installer script. You can install them manually, as outlined above. Note they are largely untested, and as always, be careful about running untested code on your machine. ### brew-gem Install any `gem` package into a self-contained Homebrew Cellar location: <https://github.com/sportngin/brew-gem> Note this can also be installed with `brew install brew-gem`. External commands in taps ------------------------- External commands can be hosted in a [tap](taps) to allow users to easily install and use them. See [How to Create and Maintain a Tap](how-to-create-and-maintain-a-tap) for more details about creating and maintaining a tap. External commands should be added to a `cmd` directory in the tap. An external command `extcmd` implemented as a Ruby command should live in `cmd/extcmd.rb` (don’t forget to `chmod +x`). To easily use Homebrew’s argument parser, replicate the following Ruby template for external commands (replacing all instances of `foo` with the name of the command): ``` # frozen_string_literal: true module Homebrew module_function def foo_args Homebrew::CLI::Parser.new do description <<~EOS Do something. Place a description here. EOS switch "-f", "--force", description: "Force doing something in the command." flag "--file=", description: "Specify a file to do something with in the command." comma_array "--names", description: "Add a list of names to the command." named_args [:formula, :cask], min: 1 end end def foo args = foo_args.parse something if args.force? something_else if args.file == "file.txt" end end ``` Using the above will generate appropriate help text: ``` $ brew foo --help Usage: brew foo [options] formula|cask [...] Do something. Place a description here. -f, --force Force doing something in the command. --file Specify a file to do something with in the command. --names Add a list of names to the command. -d, --debug Display any debugging information. -q, --quiet Make some output more quiet. -v, --verbose Make some output more verbose. -h, --help Show this message. ``` The usage string is automatically generated based on the specified number and type of named arguments (see below for more details on specifying named arguments). The generated usage string can be overridden by passing the correct usage string to the `usage_banner` method (placed just before the `description` method). See the [`brew tap` command](https://github.com/Homebrew/brew/blob/HEAD/Library/Homebrew/cmd/tap.rb) for an example. Use the `named_args` method to specify the type and number of named arguments that are expected. 
Pass either a symbol to indicate the type of argument expected, an array of symbols to indicate that multiple types should be expected, or an array of strings to specify which specific options should be expected (see the [`brew analytics` command](https://github.com/Homebrew/brew/blob/HEAD/Library/Homebrew/cmd/analytics.rb) for an example of this).

Pass an integer to the `number`, `min`, or `max` parameter of `named_args` to specify the number of named arguments that are expected. See the following examples:

```
# Accept no named args
named_args :none

# Accept any number (including none) of formula arguments
named_args :formula

# Accept exactly one of the specified options as an argument
named_args %w[state off on], number: 1

# Accept at least one argument that is either a formula or a cask
named_args [:formula, :cask], min: 1

# Accept no more than one argument that is a tap
named_args :tap, max: 1

# Accept between one and two named args
named_args min: 1, max: 2
```

Named arguments can be accessed by calling `args.named`. Check out the internal [commands](https://github.com/Homebrew/brew/tree/HEAD/Library/Homebrew/cmd) and [developer commands](https://github.com/Homebrew/brew/tree/HEAD/Library/Homebrew/dev-cmd) for more usage examples.

homebrew Querying brew

Querying `brew`
===============

*In this document we will be using [jq](https://stedolan.github.io/jq/) to parse JSON, available from Homebrew using `brew install jq`.*

Overview
--------

`brew` provides commands for getting common types of information out of the system. `brew list` shows installed formulae. `brew deps foo` shows the dependencies that `foo` needs.

Additional commands, including external commands, can of course be written to provide more detailed information. There are a couple of disadvantages here. First, it requires writing Ruby against a possibly changing Homebrew codebase. There will be more code to touch during refactors, and Homebrew can’t guarantee that external commands will continue to work. Second, it requires designing the commands themselves, specifying input parameters and output formats.

To enable users to do rich queries without the problems above, Homebrew provides the `brew info` command.

`brew info --json`
------------------

`brew info` can output JSON-formatted information about formulae. This JSON can then be parsed using your tools of choice. See more details in `brew info --help`.

The default schema version is `v1`, which returns info about formulae; specify `--json=v2` to include both formulae and casks. Note that fields may be added to the schema as needed without incrementing the schema version. Any significant breaking changes will cause a change to the schema version.

The schema itself is not currently documented outside of the code in [`formula.rb`](https://github.com/Homebrew/brew/blob/2e6b6ab3a20da503ba2a22a37fdd6bd936d818ed/Library/Homebrew/formula.rb#L1922-L2017) that generates it.

Examples
--------

*The top-level element of the JSON output is always an array, so the `map` operator is used to act on the data.*

### Pretty-print a single formula’s info

```
brew info --json=v1 tig | jq .
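# Since the top-level element is always an array, individual fields can be
# extracted with map; e.g. just the one-line description (illustrative):
brew info --json=v1 tig | jq "map(.desc)"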
```

### Installed formulae

To show full JSON information about all installed formulae:

```
brew info --json=v1 --all | jq "map(select(.installed != []))"
```

You’ll note that processing all formulae can be slow; it’s quicker to let `brew` do this:

```
brew info --json=v1 --installed
```

### Linked keg-only formulae

Some formulae are marked as “keg-only”, meaning that installed files are not linked to the shared `bin`, `lib`, etc. directories, as doing so can cause conflicts. Such formulae can be forced to link to the shared directories, but doing so is not recommended (and will cause `brew doctor` to complain).

To find the names of linked keg-only formulae:

```
brew info --json=v1 --installed | jq "map(select(.keg_only == true and .linked_keg != null) | .name)"
```

### Unlinked normal formulae

To find the names of normal (not keg-only) formulae that are installed, but not linked to the shared directories:

```
brew info --json=v1 --installed | jq "map(select(.keg_only == false and .linked_keg == null) | .name)"
```

formulae.brew.sh
----------------

[formulae.brew.sh](https://formulae.brew.sh) has a [documented JSON API](https://formulae.brew.sh/docs/api/) which provides access to the `brew info --json=v1` output without needing to install Homebrew.

Concluding remarks
------------------

By using the JSON output, queries can be made against Homebrew with less risk of being broken due to Homebrew code changes, and without needing to understand Homebrew’s Ruby internals.

If the JSON output does not provide some information that it ought to, please submit a request, preferably with a patch to add the desired information.

homebrew Prose Style Guidelines

Prose Style Guidelines
======================

This is a set of style and usage guidelines for Homebrew’s prose documentation aimed at users, contributors, and maintainers (as opposed to executable computer code). It applies to documents like those in `docs` in the `Homebrew/brew` repository, announcement emails, and other communications with the Homebrew community.

This does not apply to any Ruby or other computer code. You can use it to inform technical documentation extracted from computer code, like embedded man pages, but it’s just a suggestion there.

Goals and audience
------------------

The primary goal of Homebrew’s prose documents is communicating with its community of users and contributors. “Users” includes “contributors” here; wherever you see “users” you can substitute “users and contributors”.

Understandability is more important than any particular style guideline. Users take precedence over maintainers, except in specifically maintainer-focused documents.

Homebrew’s audience includes users with a wide range of education and experience, and users for whom English is not a native language. We aim to support as many of those users as feasible.

We strive for “correct” but not “fancy” usage. Think newspaper article, not academic paper.

This is a set of guidelines to be applied using human judgement, not a set of hard and fast rules. It is like [The Economist’s Style Guide](https://web.archive.org/web/20170830001125/https://www.economist.com/styleguide/introduction) or [Garner’s Modern American Usage](https://en.wikipedia.org/wiki/Garner's_Modern_American_Usage). It is less like the [Ruby Style Guide](https://github.com/rubocop-hq/ruby-style-guide#the-ruby-style-guide). All guidelines here are open to interpretation and discussion. 100% conformance to these guidelines is *not* a goal.
The intent of this document is to help authors make decisions about clarity, style, and consistency. It is not to help settle arguments about who knows English better. Don’t use this document to be a jerk. Guidelines ---------- We prefer: ### Style and usage * British/Commonwealth English over American English, in general * “e.g.” and “i.e.”: Go ahead and use “e.g.” or “i.e.” instead of spelling them out. Don’t worry about putting a comma after them. + “e.g.” means “for example”; “i.e.” means “that is” * Offset nontrivial subordinate clauses with commas ### Personal pronouns * We respect all people’s choice of personal pronouns * Singular “they” when someone’s gender is unknown * Avoid gender-specific language when not necessary ### Structure and markup * Title Case in `h1` headings; sentence case in all other headings * Periods at the ends of list items where most items in that list are complete sentences * More generally, parallel list item structure * Capitalise all list items if you want, even if they’re not complete sentences; just be consistent within each list, and preferably, throughout the whole page * Use a subordinate list item instead of dropping a multi-sentence paragraph-long item into a list of sentence fragments * Prefer Markdown over other markup formats unless their specific features are needed + GitHub Flavoured Markdown. GitHub’s implementation is the standard, period. ### Typographical conventions * Literal text in commands and code is styled in `fixed width font` * Placeholders inside code snippets are marked up with `<...>` brackets + e.g. `git remote add <my-user-name> https://github.com/<my-user-name>/homebrew-core.git` * Names of commands like `git` and `brew` are styled in `fixed width font` * No “$” with environment variables mentioned outside code snippets + e.g. “Set `BLAH` to 5”, not “Set `$BLAH` to 5” * One space after periods, not two * Capitalised proper nouns * We do not defer to extensive nonstandard capitalisation, typesetting, or other styling of brand names, aside from the normal capitalisation of proper nouns and simple internal capitalisation * No “TM”, ™, SM, ©, ®, or other explicit indicators of rights ownership or trademarks; we take these as understood when the brand name is mentioned * Tap names like `homebrew/core` are styled in `fixed width font`. Repository names may be styled in either fixed width font like “`Homebrew/homebrew-core`”, as links like “[Homebrew/homebrew-core](https://github.com/homebrew/homebrew-core)”, or regular text like “Homebrew/homebrew-core”, based on which looks best for a given use. + But be consistent within a single document + Capitalise repository names to match the user and repository names on GitHub. Keep tap names in lower case. * Commas + No Oxford commas + Prefer a “loose” comma style: “when in doubt, leave it out” unless needed for clarity ### Terminology, words, and word styling * “pull request”, not “Pull Request” * “check out” is the verb; “checkout” is the noun * Spell out certain technical words + “repository”, not “repo” + When abbreviating, introduce the abbreviation with the first usage in any document * Some abbreviations (near-universally understood among our user base) are fine, though. 
  + “Mac” is fine; “Macintosh” isn’t necessary
* “macOS” for all versions, “OS X” or “Mac OS X” when describing specific older versions
* “RuboCop”, not “Rubocop”
* A pull request is made “on” a repository; that repository is “at” a URL

How to use these guidelines
---------------------------

Refer to these guidelines to make decisions about style and usage in your own writing for Homebrew documents and communication.

PRs that fix style and usage throughout a document or multiple documents are okay and encouraged. PRs for just one or two style changes are a bit much.

Giving style and usage feedback on a PR or commit that involves documents is okay and encouraged. But keep in mind that these are just guidelines, and for any change, the author may have made a deliberate choice to break these rules in the interest of understandability or aesthetics.

homebrew Homebrew Governance Responsibilities

Homebrew Governance Responsibilities
====================================

Project Leadership Committee
----------------------------

### PLC Sole Responsibilities

* organising the AGM
* voting on maintainer hardware grants (before they are purchased)
* voting on maintainer hackathon/conference/AGM travel expenses (before they are booked)
* responding to and handling Code of Conduct complaints
* removing inactive members (that are not maintainers) that did not vote in the AGM

### PLC Shared Responsibilities

* approving Open Collective expenses that are expected or have already been agreed upon by the PLC (e.g. Homebrew cloud usage on a personal credit card) (only one approval needed)
* blocking abusive GitHub users
* performing GitHub admin operations on the Homebrew GitHub organisation
* performing Slack admin operations on the Homebrew Slack

### PLC Dated Yearly Tasks

* January: check membership, announce AGM votes
  + Ask for nominations for the PLC and project leader, and ask who is interested in serving on the TSC
  + Create ballots for the elections on https://www.opavote.com
  + Ask the project leader and representatives of the PLC and TSC to prepare reports for the AGM
  + Ask for members interested in presenting lightning talks at the AGM
* February: organise the annual general meeting (AGM)
  + Create a dedicated Slack channel
  + Book a group dinner (which Homebrew pays for) and check for any dietary requirements
  + Ask someone to bring a conference/table microphone so that people can participate in the AGM remotely
* February after the AGM:
  + Add the minutes of the AGM to https://github.com/Homebrew/homebrew-governance
  + Create an [issue in homebrew-governance](https://github.com/homebrew/homebrew-governance/issues) to ask members who did not vote in the election whether they wish to remain or step down as members
    - Members that are not maintainers should be at least one of:
      * A current or previously active maintainer, PLC/TSC member or Project Leader
      * A long-standing member of the Homebrew community (e.g. been submitting good bug reports for over two years)
* October: arrange in-person AGM
  + Offer to pay for Homebrew maintainers who are at least one of:
    - active Homebrew maintainers (i.e. not just contributors)
    - new Homebrew maintainers (i.e.
this would be their first AGM) - current members of or running for election for PLC/TSC/Project Leader + Authorise people to book travel Project Leader -------------- ### PL Sole Responsibilities * manage all day-to-day technical decisions * resolve disputes related to the operation of Homebrew between maintainers, members, other contributors, and users * [product management](https://en.wikipedia.org/wiki/Product_management) for the various Homebrew products * in February, before the AGM: checking for activity of non-PLC/TSC maintainers and asking them to step down if they have not been active enough in the past 12 months ### PL Shared Responsibilities * approving new Homebrew maintainers (only one approval needed) * approving Open Collective expenses that are expected or have already been agreed upon by the PLC (e.g. Homebrew cloud usage on a personal credit card) (only one approval needed) * blocking abusive GitHub users * performing GitHub admin operations on the Homebrew GitHub organisation * performing Slack admin operations on the Homebrew Slack Technical Steering Committee ---------------------------- ### TSC Sole Responsibilities * decide on technical disputes between Homebrew maintainers and the Project Leader ### TSC Shared Responsibilities * approving new Homebrew maintainers (only one approval needed) * blocking abusive GitHub users * performing GitHub admin operations on the Homebrew GitHub organisation
homebrew Acceptable Formulae

Acceptable Formulae
===================

Some formulae should not go in [homebrew/core](https://github.com/Homebrew/homebrew-core). But there are additional [Interesting Taps and Forks](interesting-taps-and-forks) and anyone can start their own!

### Supported platforms in `homebrew/core`

The formula needs to build and pass tests on the latest 3 supported macOS versions ([x86\_64 and Apple Silicon/ARM](installation#macos-requirements)) and on x86\_64 [Linux](linux-ci). Please have a look at the continuous integration jobs on a pull request in `homebrew/core` to see the full list of OSs. If upstream does not support one of these platforms, an exception can be made and the formula can be disabled for that platform.

### Dupes in `homebrew/core`

We now accept stuff that comes with macOS as long as it uses `keg_only :provided_by_macos` to be keg-only by default.

### Versioned formulae in `homebrew/core`

We now accept versioned formulae as long as they [meet the requirements](versions).

### We don’t like tools that upgrade themselves

Software that can upgrade itself does not integrate well with Homebrew’s own upgrade functionality. The self-update functionality should be disabled (while minimising complication to the formula).

### We don’t like install scripts that download unversioned things

We don’t like install scripts that pull from the `master` branch of Git repositories or download unversioned, unchecksummed tarballs. These should use `resource` blocks with specific revisions or checksummed tarballs instead. Note that we now allow tools like `cargo`, `gem` and `pip` to download specifically versioned libraries during installation.

### We don’t like binary formulae

Our policy is that formulae in the core tap ([homebrew/core](https://github.com/Homebrew/homebrew-core)) must be open-source with a [Debian Free Software Guidelines license](https://wiki.debian.org/DFSGLicenses) and either built from source or produce cross-platform binaries (e.g. Java, Mono). Binary-only formulae should go to [homebrew/cask](https://github.com/Homebrew/homebrew-cask).

Additionally, [homebrew/core](https://github.com/Homebrew/homebrew-core) formulae must also not depend on casks or any other proprietary software. This includes automatic installation of casks at runtime.

### Stable versions

Formulae in the core repository must have a stable version tagged by the upstream project. Tarballs are preferred to Git checkouts, and tarballs should include the version in the filename whenever possible. We don’t accept software without a tagged version because they regularly break due to upstream changes and we can’t provide [bottles](bottles) for them.

### Niche (or self-submitted) stuff

The software in question must:

* be maintained (i.e. the last release wasn’t ages ago, it works without patching on all Homebrew-supported OS versions and has no outstanding, unpatched security vulnerabilities)
* be known
* be stable (e.g. not declared “unstable” or “beta” by upstream)
* be used
* have a homepage

We will reject formulae that seem too obscure, partly because they won’t get maintained and partly because we have to draw the line somewhere. We frown on authors submitting their own work unless it is very popular.

Don’t forget Homebrew is all Git underneath! [Maintain your own tap](how-to-create-and-maintain-a-tap) if you have to!

There may be exceptions to these rules in the main repository; we may include things that don’t meet these criteria or reject things that do.
Please trust that we need to use our discretion based on our experience running a package manager.

### Stuff that builds an `.app`

Don’t make your formula build an `.app` (native macOS Application); we don’t want those things in Homebrew. Encourage upstream projects to build and support a `.app` that can be distributed by [homebrew/cask](https://github.com/Homebrew/homebrew-cask) (and used without it, too).

### Stuff that builds a GUI by default (but doesn’t have to)

Make it build a command-line tool or a library by default and, if the GUI is useful and would be widely used, also build the GUI. Don’t build X11/XQuartz GUIs as they are a bad user experience on macOS.

### Stuff that doesn’t build with the latest, stable Xcode Clang

Clang is the default C/C++ compiler on macOS (and has been for a long time). Software that doesn’t build with it hasn’t been adequately ported to macOS.

### Stuff that requires heavy manual pre/post-install intervention

We’re a package manager so we want to do things like resolve dependencies and set up applications for our users. If things require too much manual intervention then they aren’t useful in a package manager.

### Stuff that requires vendored versions of Homebrew formulae

Homebrew formulae should avoid having multiple, separate, upstream projects bundled together in a single package to avoid shipping outdated/insecure versions of software that is already a formula. Veracode’s [State of Software Security report](https://www.veracode.com/blog/research/announcing-state-software-security-v11-open-source-edition) concludes

> In fact, 79% of the time, developers never update third-party libraries after including them in a codebase.

For more info see [Debian’s](https://www.debian.org/doc/debian-policy/ch-source.html#s-embeddedfiles) and [Fedora’s](https://docs.fedoraproject.org/en-US/packaging-guidelines/#bundling) stances on this.

### Sometimes there are exceptions

Even if all criteria are met we may not accept the formula. Documentation tends to lag behind current decision-making. Although some rejections may seem arbitrary or strange they are based on years of experience making Homebrew work acceptably for our users.

homebrew brew(1) – The Missing Package Manager for macOS (or Linux)

brew(1) – The Missing Package Manager for macOS (or Linux)
==========================================================

SYNOPSIS
--------

`brew` `--version`

`brew` *`command`* [`--verbose`|`-v`] [*`options`*] [*`formula`*] …

DESCRIPTION
-----------

Homebrew is the easiest and most flexible way to install the UNIX tools Apple didn’t include with macOS. It can also install software not packaged for your Linux distribution to your home directory without requiring `sudo`.

TERMINOLOGY
-----------

**formula**: Homebrew package definition built from upstream sources

**cask**: Homebrew package definition that installs macOS native applications

**keg**: installation destination directory of a given **formula** version e.g. `/usr/local/Cellar/foo/0.1`

**rack**: directory containing one or more versioned kegs e.g. `/usr/local/Cellar/foo`

**keg-only**: a **formula** is **keg-only** if it is not symlinked into Homebrew’s prefix (e.g. `/usr/local`)

**cellar**: directory containing one or more named **racks** e.g. `/usr/local/Cellar`

**Caskroom**: directory containing one or more named **casks** e.g.
`/usr/local/Caskroom` **external command**: `brew` subcommand defined outside of the Homebrew/brew GitHub repository **tap**: directory (and usually Git repository) of **formulae**, **casks** and/or **external commands** **bottle**: pre-built **keg** poured into the **cellar**/**rack** instead of building from upstream sources ESSENTIAL COMMANDS ------------------ For the full command list, see the [COMMANDS](#commands) section. With `--verbose` or `--debug`, many commands print extra debugging information. Note that these options should only appear after a command. Some command behaviour can be customised with environment variables; see the [ENVIRONMENT](#environment) section. ### `install` *`formula`* Install *`formula`*. *`formula`* is usually the name of the formula to install, but it has other syntaxes which are listed in the [SPECIFYING FORMULAE](#specifying-formulae) section. ### `uninstall` *`formula`* Uninstall *`formula`*. ### `list` List all installed formulae. ### `search` [*`text`*|`/`*`text`*`/`] Perform a substring search of cask tokens and formula names for *`text`*. If *`text`* is flanked by slashes, it is interpreted as a regular expression. The search for *`text`* is extended online to `homebrew/core` and `homebrew/cask`. If no search term is provided, all locally available formulae are listed. COMMANDS -------- ### `analytics` [*`subcommand`*] Control Homebrew’s anonymous aggregate user behaviour analytics. Read more at [https://docs.brew.sh/Analytics](analytics). `brew analytics` [`state`] Display the current state of Homebrew’s analytics. `brew analytics` (`on`|`off`) Turn Homebrew’s analytics on or off respectively. `brew analytics regenerate-uuid` Regenerate the UUID used for Homebrew’s analytics. ### `autoremove` [*`--dry-run`*] Uninstall formulae that were only installed as a dependency of another formula and are now no longer needed. * `-n`, `--dry-run`: List what would be uninstalled, but do not actually uninstall anything. ### `casks` List all locally installable casks including short names. ### `cleanup` [*`options`*] [*`formula`*|*`cask`* …] Remove stale lock files and outdated downloads for all formulae and casks, and remove old versions of installed formulae. If arguments are specified, only do this for the given formulae and casks. Removes all downloads more than 120 days old. This can be adjusted with `HOMEBREW_CLEANUP_MAX_AGE_DAYS`. * `--prune`: Remove all cache files older than specified *`days`*. If you want to remove everything, use `--prune=all`. * `-n`, `--dry-run`: Show what would be removed, but do not actually remove anything. * `-s`: Scrub the cache, including downloads for even the latest versions. Note that downloads for any installed formulae or casks will still not be deleted. If you want to delete those too: `rm -rf "$(brew --cache)"` * `--prune-prefix`: Only prune the symlinks and directories from the prefix and remove no other files. ### `commands` [*`--quiet`*] [*`--include-aliases`*] Show lists of built-in and external commands. * `-q`, `--quiet`: List only the names of commands without category headers. * `--include-aliases`: Include aliases of internal commands. ### `completions` [*`subcommand`*] Control whether Homebrew automatically links external tap shell completion files. Read more at [https://docs.brew.sh/Shell-Completion](shell-completion). `brew completions` [`state`] Display the current state of Homebrew’s completions. `brew completions` (`link`|`unlink`) Link or unlink Homebrew’s completions. 
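As a quick illustration of the subcommands above:

```
brew completions        # display whether completion linking is on or off
brew completions link   # opt in to linking external taps' completion files
```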
### `config`, `--config` Show Homebrew and system configuration info useful for debugging. If you file a bug report, you will be required to provide this information. ### `deps` [*`options`*] [*`formula`*|*`cask`* …] Show dependencies for *`formula`*. Additional options specific to *`formula`* may be appended to the command. When given multiple formula arguments, show the intersection of dependencies for each formula. * `-n`: Sort dependencies in topological order. * `--1`: Only show dependencies one level down, instead of recursing. * `--union`: Show the union of dependencies for multiple *`formula`*, instead of the intersection. * `--full-name`: List dependencies by their full name. * `--include-build`: Include `:build` dependencies for *`formula`*. * `--include-optional`: Include `:optional` dependencies for *`formula`*. * `--include-test`: Include `:test` dependencies for *`formula`* (non-recursive). * `--skip-recommended`: Skip `:recommended` dependencies for *`formula`*. * `--include-requirements`: Include requirements in addition to dependencies for *`formula`*. * `--tree`: Show dependencies as a tree. When given multiple formula arguments, show individual trees for each formula. * `--graph`: Show dependencies as a directed graph. * `--dot`: Show text-based graph description in DOT format. * `--annotate`: Mark any build, test, optional, or recommended dependencies as such in the output. * `--installed`: List dependencies for formulae that are currently installed. If *`formula`* is specified, list only its dependencies that are currently installed. * `--all`: List dependencies for all available formulae. * `--for-each`: Switch into the mode used by the `--all` option, but only list dependencies for each provided *`formula`*, one formula per line. This is used for debugging the `--installed`/`--all` display mode. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `desc` [*`options`*] *`formula`*|*`cask`*|*`text`*|`/`*`regex`*`/` […] Display *`formula`*’s name and one-line description. Formula descriptions are cached; the cache is created on the first search, making that search slower than subsequent ones. * `-s`, `--search`: Search both names and descriptions for *`text`*. If *`text`* is flanked by slashes, it is interpreted as a regular expression. * `-n`, `--name`: Search just names for *`text`*. If *`text`* is flanked by slashes, it is interpreted as a regular expression. * `-d`, `--description`: Search just descriptions for *`text`*. If *`text`* is flanked by slashes, it is interpreted as a regular expression. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `developer` [*`subcommand`*] Control Homebrew’s developer mode. When developer mode is enabled, `brew update` will update Homebrew to the latest commit on the `master` branch instead of the latest stable version along with some other behaviour changes. `brew developer` [`state`] Display the current state of Homebrew’s developer mode. `brew developer` (`on`|`off`) Turn Homebrew’s developer mode on or off respectively. ### `doctor`, `dr` [*`--list-checks`*] [*`--audit-debug`*] [*`diagnostic_check`* …] Check your system for potential problems. Will exit with a non-zero status if any potential problems are found. Please note that these warnings are just used to help the Homebrew maintainers with debugging if you file an issue. 
If everything you use Homebrew for is working fine: please don’t worry or file an issue; just ignore this. * `--list-checks`: List all audit methods, which can be run individually if provided as arguments. * `-D`, `--audit-debug`: Enable debugging and profiling of audit methods. ### `fetch` [*`options`*] *`formula`*|*`cask`* […] Download a bottle (if available) or source packages for *`formula`*e and binaries for *`cask`*s. For files, also print SHA-256 checksums. * `--bottle-tag`: Download a bottle for given tag. * `--HEAD`: Fetch HEAD version instead of stable version. * `-f`, `--force`: Remove a previously cached version and re-fetch. * `-v`, `--verbose`: Do a verbose VCS checkout, if the URL represents a VCS. This is useful for seeing if an existing VCS cache has been updated. * `--retry`: Retry if downloading fails or re-download if the checksum of a previously cached version no longer matches. * `--deps`: Also download dependencies for any listed *`formula`*. * `-s`, `--build-from-source`: Download source packages rather than a bottle. * `--build-bottle`: Download source packages (for eventual bottling) rather than a bottle. * `--force-bottle`: Download a bottle if it exists for the current or newest version of macOS, even if it would not be used during installation. * `--[no-]quarantine`: Disable/enable quarantining of downloads (default: enabled). * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `formulae` List all locally installable formulae including short names. ### `gist-logs` [*`options`*] *`formula`* Upload logs for a failed build of *`formula`* to a new Gist. Presents an error message if no logs are found. * `--with-hostname`: Include the hostname in the Gist. * `-n`, `--new-issue`: Automatically create a new issue in the appropriate GitHub repository after creating the Gist. * `-p`, `--private`: The Gist will be marked private and will not appear in listings but will be accessible with its link. ### `home`, `homepage` [*`--formula`*] [*`--cask`*] [*`formula`*|*`cask`* …] Open a *`formula`* or *`cask`*’s homepage in a browser, or open Homebrew’s own homepage if no argument is provided. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `info`, `abv` [*`options`*] [*`formula`*|*`cask`* …] Display brief statistics for your Homebrew installation. If a *`formula`* or *`cask`* is provided, show summary of information about it. * `--analytics`: List global Homebrew analytics data or, if specified, installation and build error data for *`formula`* (provided neither `HOMEBREW_NO_ANALYTICS` nor `HOMEBREW_NO_GITHUB_API` are set). * `--days`: How many days of analytics data to retrieve. The value for *`days`* must be `30`, `90` or `365`. The default is `30`. * `--category`: Which type of analytics data to retrieve. The value for *`category`* must be `install`, `install-on-request` or `build-error`; `cask-install` or `os-version` may be specified if *`formula`* is not. The default is `install`. * `--github`: Open the GitHub source page for *`formula`* and *`cask`* in a browser. To view the history locally: `brew log -p` *`formula`* or *`cask`* * `--json`: Print a JSON representation. Currently the default value for *`version`* is `v1` for *`formula`*. For *`formula`* and *`cask`* use `v2`. See the docs for examples of using the JSON output: [https://docs.brew.sh/Querying-Brew](querying-brew) * `--installed`: Print JSON of formulae that are currently installed. 
* `--all`: Print JSON of all available formulae. * `--variations`: Include the variations hash in each formula’s JSON output. * `-v`, `--verbose`: Show more verbose analytics data for *`formula`*. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `install` [*`options`*] *`formula`*|*`cask`* […] Install a *`formula`* or *`cask`*. Additional options specific to a *`formula`* may be appended to the command. Unless `HOMEBREW_NO_INSTALLED_DEPENDENTS_CHECK` is set, `brew upgrade` or `brew reinstall` will be run for outdated dependents and dependents with broken linkage, respectively. Unless `HOMEBREW_NO_INSTALL_CLEANUP` is set, `brew cleanup` will then be run for the installed formulae or, every 30 days, for all formulae. Unless `HOMEBREW_NO_INSTALL_UPGRADE` is set, `brew install *`formula`*` will upgrade *`formula`* if it is already installed but outdated. * `-d`, `--debug`: If brewing fails, open an interactive debugging session with access to IRB or a shell inside the temporary build directory. * `-f`, `--force`: Install formulae without checking for previously installed keg-only or non-migrated versions. When installing casks, overwrite existing files (binaries and symlinks are excluded, unless originally from the same cask). * `-v`, `--verbose`: Print the verification and postinstall steps. * `--formula`: Treat all named arguments as formulae. * `--ignore-dependencies`: An unsupported Homebrew development flag to skip installing any dependencies of any kind. If the dependencies are not already present, the formula will have issues. If you’re not developing Homebrew, consider adjusting your PATH rather than using this flag. * `--only-dependencies`: Install the dependencies with specified options but do not install the formula itself. * `--cc`: Attempt to compile using the specified *`compiler`*, which should be the name of the compiler’s executable, e.g. `gcc-7` for GCC 7. In order to use LLVM’s clang, specify `llvm_clang`. To use the Apple-provided clang, specify `clang`. This option will only accept compilers that are provided by Homebrew or bundled with macOS. Please do not file issues if you encounter errors while using this option. * `-s`, `--build-from-source`: Compile *`formula`* from source even if a bottle is provided. Dependencies will still be installed from bottles if they are available. * `--force-bottle`: Install from a bottle if it exists for the current or newest version of macOS, even if it would not normally be used for installation. * `--include-test`: Install testing dependencies required to run `brew test` *`formula`*. * `--HEAD`: If *`formula`* defines it, install the HEAD version, aka. main, trunk, unstable, master. * `--fetch-HEAD`: Fetch the upstream repository to detect if the HEAD installation of the formula is outdated. Otherwise, the repository’s HEAD will only be checked for updates when a new stable or development version has been released. * `--keep-tmp`: Retain the temporary files created during installation. * `--debug-symbols`: Generate debug symbols on build. Source will be retained in a cache directory. * `--build-bottle`: Prepare the formula for eventual bottling during installation, skipping any post-install steps. * `--bottle-arch`: Optimise bottles for the specified architecture rather than the oldest architecture supported by the version of macOS the bottles are built on. * `--display-times`: Print install times for each package at the end of the run. 
* `-i`, `--interactive`: Download and patch *`formula`*, then open a shell. This allows the user to run `./configure --help` and otherwise determine how to turn the software package into a Homebrew package. * `-g`, `--git`: Create a Git repository, useful for creating patches to the software. * `--overwrite`: Delete files that already exist in the prefix while linking. * `--cask`: Treat all named arguments as casks. * `--[no-]binaries`: Disable/enable linking of helper executables (default: enabled). * `--require-sha`: Require all casks to have a checksum. * `--[no-]quarantine`: Disable/enable quarantining of downloads (default: enabled). * `--skip-cask-deps`: Skip installing cask dependencies. * `--zap`: For use with `brew reinstall --cask`. Remove all files associated with a cask. *May remove files which are shared between applications.* ### `leaves` [*`--installed-on-request`*] [*`--installed-as-dependency`*] List installed formulae that are not dependencies of another installed formula. * `-r`, `--installed-on-request`: Only list leaves that were manually installed. * `-p`, `--installed-as-dependency`: Only list leaves that were installed as dependencies. ### `link`, `ln` [*`options`*] *`installed_formula`* […] Symlink all of *`formula`*’s installed files into Homebrew’s prefix. This is done automatically when you install formulae but can be useful for DIY installations. * `--overwrite`: Delete files that already exist in the prefix while linking. * `-n`, `--dry-run`: List files which would be linked or deleted by `brew link --overwrite` without actually linking or deleting any files. * `-f`, `--force`: Allow keg-only formulae to be linked. * `--HEAD`: Link the HEAD version of the formula if it is installed. ### `list`, `ls` [*`options`*] [*`installed_formula`*|*`installed_cask`* …] List all installed formulae and casks. If *`formula`* is provided, summarise the paths within its current keg. If *`cask`* is provided, list its artifacts. * `--formula`: List only formulae, or treat all named arguments as formulae. * `--cask`: List only casks, or treat all named arguments as casks. * `--full-name`: Print formulae with fully-qualified names. Unless `--full-name`, `--versions` or `--pinned` are passed, other options (i.e. `-1`, `-l`, `-r` and `-t`) are passed to `ls`(1) which produces the actual output. * `--versions`: Show the version number for installed formulae, or only the specified formulae if *`formula`* are provided. * `--multiple`: Only show formulae with multiple versions installed. * `--pinned`: List only pinned formulae, or only the specified (pinned) formulae if *`formula`* are provided. See also `pin`, `unpin`. * `-1`: Force output to be one entry per line. This is the default when output is not to a terminal. * `-l`: List formulae and/or casks in long format. Has no effect when a formula or cask name is passed as an argument. * `-r`: Reverse the order of the formulae and/or casks sort to list the oldest entries first. Has no effect when a formula or cask name is passed as an argument. * `-t`: Sort formulae and/or casks by time modified, listing most recently modified first. Has no effect when a formula or cask name is passed as an argument. ### `log` [*`options`*] [*`formula`*|*`cask`*] Show the `git log` for *`formula`* or *`cask`*, or show the log for the Homebrew repository if no formula or cask is provided. * `-p`, `--patch`: Also print patch from commit. * `--stat`: Also print diffstat from commit. * `--oneline`: Print only one line per commit. 
* `-1`: Print only one commit. * `-n`, `--max-count`: Print only a specified number of commits. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `migrate` [*`--force`*] [*`--dry-run`*] *`installed_formula`* […] Migrate renamed packages to new names, where *`formula`* are old names of packages. * `-f`, `--force`: Treat installed *`formula`* and provided *`formula`* as if they are from the same taps and migrate them anyway. * `-n`, `--dry-run`: Show what would be migrated, but do not actually migrate anything. ### `missing` [*`--hide`*`=`] [*`formula`* …] Check the given *`formula`* kegs for missing dependencies. If no *`formula`* are provided, check all kegs. Will exit with a non-zero status if any kegs are found to be missing dependencies. * `--hide`: Act as if none of the specified *`hidden`* are installed. *`hidden`* should be a comma-separated list of formulae. ### `options` [*`options`*] [*`formula`* …] Show install options specific to *`formula`*. * `--compact`: Show all options on a single line separated by spaces. * `--installed`: Show options for formulae that are currently installed. * `--all`: Show options for all available formulae. * `--command`: Show options for the specified *`command`*. ### `outdated` [*`options`*] [*`formula`*|*`cask`* …] List installed casks and formulae that have an updated version available. By default, version information is displayed in interactive shells, and suppressed otherwise. * `-q`, `--quiet`: List only the names of outdated kegs (takes precedence over `--verbose`). * `-v`, `--verbose`: Include detailed version information. * `--formula`: List only outdated formulae. * `--cask`: List only outdated casks. * `--json`: Print output in JSON format. There are two versions: `v1` and `v2`. `v1` is deprecated and is currently the default if no version is specified. `v2` prints outdated formulae and casks. * `--fetch-HEAD`: Fetch the upstream repository to detect if the HEAD installation of the formula is outdated. Otherwise, the repository’s HEAD will only be checked for updates when a new stable or development version has been released. * `--greedy`: Also include outdated casks with `auto_updates true` or `version :latest`. * `--greedy-latest`: Also include outdated casks including those with `version :latest`. * `--greedy-auto-updates`: Also include outdated casks including those with `auto_updates true`. ### `pin` *`installed_formula`* […] Pin the specified *`formula`*, preventing them from being upgraded when issuing the `brew upgrade` *`formula`* command. See also `unpin`. ### `postinstall` *`installed_formula`* […] Rerun the post-install steps for *`formula`*. ### `readall` [*`--aliases`*] [*`--syntax`*] [*`tap`* …] Import all items from the specified *`tap`*, or from all installed taps if none is provided. This can be useful for debugging issues across all items when making significant changes to `formula.rb`, testing the performance of loading all items or checking if any current formulae/casks have Ruby issues. * `--aliases`: Verify any alias symlinks in each tap. * `--syntax`: Syntax-check all of Homebrew’s Ruby files (if no `*`tap`*` is passed). ### `reinstall` [*`options`*] *`formula`*|*`cask`* […] Uninstall and then reinstall a *`formula`* or *`cask`* using the same options it was originally installed with, plus any appended options specific to a *`formula`*. 
Unless `HOMEBREW_NO_INSTALLED_DEPENDENTS_CHECK` is set, `brew upgrade` or `brew reinstall` will be run for outdated dependents and dependents with broken linkage, respectively. Unless `HOMEBREW_NO_INSTALL_CLEANUP` is set, `brew cleanup` will then be run for the reinstalled formulae or, every 30 days, for all formulae. * `-d`, `--debug`: If brewing fails, open an interactive debugging session with access to IRB or a shell inside the temporary build directory. * `-f`, `--force`: Install without checking for previously installed keg-only or non-migrated versions. * `-v`, `--verbose`: Print the verification and postinstall steps. * `--formula`: Treat all named arguments as formulae. * `-s`, `--build-from-source`: Compile *`formula`* from source even if a bottle is available. * `-i`, `--interactive`: Download and patch *`formula`*, then open a shell. This allows the user to run `./configure --help` and otherwise determine how to turn the software package into a Homebrew package. * `--force-bottle`: Install from a bottle if it exists for the current or newest version of macOS, even if it would not normally be used for installation. * `--keep-tmp`: Retain the temporary files created during installation. * `--debug-symbols`: Generate debug symbols on build. Source will be retained in a cache directory. * `--display-times`: Print install times for each formula at the end of the run. * `-g`, `--git`: Create a Git repository, useful for creating patches to the software. * `--cask`: Treat all named arguments as casks. * `--[no-]binaries`: Disable/enable linking of helper executables (default: enabled). * `--require-sha`: Require all casks to have a checksum. * `--[no-]quarantine`: Disable/enable quarantining of downloads (default: enabled). * `--skip-cask-deps`: Skip installing cask dependencies. * `--zap`: For use with `brew reinstall --cask`. Remove all files associated with a cask. *May remove files which are shared between applications.* ### `search`, `-S` [*`options`*] *`text`*|`/`*`regex`*`/` […] Perform a substring search of cask tokens and formula names for *`text`*. If *`text`* is flanked by slashes, it is interpreted as a regular expression. The search for *`text`* is extended online to `homebrew/core` and `homebrew/cask`. * `--formula`: Search online and locally for formulae. * `--cask`: Search online and locally for casks. * `--desc`: Search for formulae with a description matching *`text`* and casks with a name or description matching *`text`*. * `--pull-request`: Search for GitHub pull requests containing *`text`*. * `--open`: Search for only open GitHub pull requests. * `--closed`: Search for only closed GitHub pull requests. * `--repology`: Search for *`text`* in the given database. * `--macports`: Search for *`text`* in the given database. * `--fink`: Search for *`text`* in the given database. * `--opensuse`: Search for *`text`* in the given database. * `--fedora`: Search for *`text`* in the given database. * `--archlinux`: Search for *`text`* in the given database. * `--debian`: Search for *`text`* in the given database. * `--ubuntu`: Search for *`text`* in the given database. ### `shellenv` Print export statements. When run in a shell, this installation of Homebrew will be added to your `PATH`, `MANPATH`, and `INFOPATH`. The variables `HOMEBREW_PREFIX`, `HOMEBREW_CELLAR` and `HOMEBREW_REPOSITORY` are also exported to avoid querying them multiple times. 
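For illustration, on an Apple Silicon macOS installation (prefix `/opt/homebrew`; see `--prefix` below) the output looks roughly like the following. The exact variables, paths and quoting vary by platform and shell, so treat this as a sketch rather than canonical output:

```
$ brew shellenv
export HOMEBREW_PREFIX="/opt/homebrew";
export HOMEBREW_CELLAR="/opt/homebrew/Cellar";
export HOMEBREW_REPOSITORY="/opt/homebrew";
export PATH="/opt/homebrew/bin:/opt/homebrew/sbin:$PATH";
export MANPATH="/opt/homebrew/share/man:$MANPATH";
export INFOPATH="/opt/homebrew/share/info:$INFOPATH";
```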
To help guarantee idempotence, this command produces no output when Homebrew’s `bin` and `sbin` directories are first and second respectively in your `PATH`. Consider adding evaluation of this command’s output to your dotfiles (e.g. `~/.profile`, `~/.bash_profile`, or `~/.zprofile`) with: `eval "$(brew shellenv)"` ### `tap` [*`options`*] [*`user`*`/`*`repo`*] [*`URL`*] Tap a formula repository. If no arguments are provided, list all installed taps. With *`URL`* unspecified, tap a formula repository from GitHub using HTTPS. Since so many taps are hosted on GitHub, this command is a shortcut for `brew tap` *`user`*`/`*`repo`* `https://github.com/`*`user`*`/homebrew-`*`repo`*. With *`URL`* specified, tap a formula repository from anywhere, using any transport protocol that `git`(1) handles. The one-argument form of `tap` simplifies but also limits. This two-argument command makes no assumptions, so taps can be cloned from places other than GitHub and using protocols other than HTTPS, e.g. SSH, git, HTTP, FTP(S), rsync. * `--[no-]force-auto-update`: Auto-update tap even if it is not hosted on GitHub. By default, only taps hosted on GitHub are auto-updated (for performance reasons). * `--custom-remote`: Install or change a tap with a custom remote. Useful for mirrors. * `--repair`: Migrate tapped formulae from symlink-based to directory-based structure. * `--list-pinned`: List all pinned taps. ### `tap-info` [*`--installed`*] [*`--json`*] [*`tap`* …] Show detailed information about one or more *`tap`*s. If no *`tap`* names are provided, display brief statistics for all installed taps. * `--installed`: Show information on each installed tap. * `--json`: Print a JSON representation of *`tap`*. Currently the default and only accepted value for *`version`* is `v1`. See the docs for examples of using the JSON output: [https://docs.brew.sh/Querying-Brew](querying-brew) ### `uninstall`, `remove`, `rm` [*`options`*] *`installed_formula`*|*`installed_cask`* […] Uninstall a *`formula`* or *`cask`*. * `-f`, `--force`: Delete all installed versions of *`formula`*. Uninstall even if *`cask`* is not installed, overwrite existing files and ignore errors when removing files. * `--zap`: Remove all files associated with a *`cask`*. *May remove files which are shared between applications.* * `--ignore-dependencies`: Don’t fail uninstall, even if *`formula`* is a dependency of any installed formulae. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `unlink` [*`--dry-run`*] *`installed_formula`* […] Remove symlinks for *`formula`* from Homebrew’s prefix. This can be useful for temporarily disabling a formula: `brew unlink` *`formula`* `&&` *`commands`* `&& brew link` *`formula`* * `-n`, `--dry-run`: List files which would be unlinked without actually unlinking or deleting any files. ### `unpin` *`installed_formula`* […] Unpin *`formula`*, allowing them to be upgraded by `brew upgrade` *`formula`*. See also `pin`. ### `untap` [*`--force`*] *`tap`* […] Remove a tapped formula repository. * `-f`, `--force`: Untap even if formulae or casks from this tap are currently installed. ### `update` [*`options`*] Fetch the newest version of Homebrew and all formulae from GitHub using `git`(1) and perform any necessary migrations. * `--merge`: Use `git merge` to apply updates (rather than `git rebase`). * `--auto-update`: Run on auto-updates (e.g. before `brew install`). Skips some slower steps. 
* `-f`, `--force`: Always do a slower, full update check (even if unnecessary). ### `update-reset` [*`repository`* …] Fetch and reset Homebrew and all tap repositories (or any specified *`repository`*) using `git`(1) to their latest `origin/HEAD`. *Note:* this will destroy all your uncommitted or committed changes. ### `upgrade` [*`options`*] [*`outdated_formula`*|*`outdated_cask`* …] Upgrade outdated casks and outdated, unpinned formulae using the same options they were originally installed with, plus any appended brew formula options. If *`cask`* or *`formula`* are specified, upgrade only the given *`cask`* or *`formula`* kegs (unless they are pinned; see `pin`, `unpin`). Unless `HOMEBREW_NO_INSTALLED_DEPENDENTS_CHECK` is set, `brew upgrade` or `brew reinstall` will be run for outdated dependents and dependents with broken linkage, respectively. Unless `HOMEBREW_NO_INSTALL_CLEANUP` is set, `brew cleanup` will then be run for the upgraded formulae or, every 30 days, for all formulae. * `-d`, `--debug`: If brewing fails, open an interactive debugging session with access to IRB or a shell inside the temporary build directory. * `-f`, `--force`: Install formulae without checking for previously installed keg-only or non-migrated versions. When installing casks, overwrite existing files (binaries and symlinks are excluded, unless originally from the same cask). * `-v`, `--verbose`: Print the verification and postinstall steps. * `-n`, `--dry-run`: Show what would be upgraded, but do not actually upgrade anything. * `--formula`: Treat all named arguments as formulae. If no named arguments are specified, upgrade only outdated formulae. * `-s`, `--build-from-source`: Compile *`formula`* from source even if a bottle is available. * `-i`, `--interactive`: Download and patch *`formula`*, then open a shell. This allows the user to run `./configure --help` and otherwise determine how to turn the software package into a Homebrew package. * `--force-bottle`: Install from a bottle if it exists for the current or newest version of macOS, even if it would not normally be used for installation. * `--fetch-HEAD`: Fetch the upstream repository to detect if the HEAD installation of the formula is outdated. Otherwise, the repository’s HEAD will only be checked for updates when a new stable or development version has been released. * `--ignore-pinned`: Set a successful exit status even if pinned formulae are not upgraded. * `--keep-tmp`: Retain the temporary files created during installation. * `--debug-symbols`: Generate debug symbols on build. Source will be retained in a cache directory. * `--display-times`: Print install times for each package at the end of the run. * `--cask`: Treat all named arguments as casks. If no named arguments are specified, upgrade only outdated casks. * `--[no-]binaries`: Disable/enable linking of helper executables (default: enabled). * `--require-sha`: Require all casks to have a checksum. * `--[no-]quarantine`: Disable/enable quarantining of downloads (default: enabled). * `--skip-cask-deps`: Skip installing cask dependencies. * `--greedy`: Also include casks with `auto_updates true` or `version :latest`. * `--greedy-latest`: Also include casks with `version :latest`. * `--greedy-auto-updates`: Also include casks with `auto_updates true`. ### `uses` [*`options`*] *`formula`* […] Show formulae and casks that specify *`formula`* as a dependency; that is, show dependents of *`formula`*. When given multiple formula arguments, show the intersection of formulae that use *`formula`*. 
By default, `uses` shows all formulae and casks that specify *`formula`* as a required or recommended dependency for their stable builds. * `--recursive`: Resolve more than one level of dependencies. * `--installed`: Only list formulae and casks that are currently installed. * `--include-build`: Include all formulae that specify *`formula`* as `:build` type dependency. * `--include-test`: Include all formulae that specify *`formula`* as `:test` type dependency. * `--include-optional`: Include all formulae that specify *`formula`* as `:optional` type dependency. * `--skip-recommended`: Skip all formulae that specify *`formula`* as `:recommended` type dependency. * `--formula`: Include only formulae. * `--cask`: Include only casks. ### `--cache` [*`options`*] [*`formula`*|*`cask`* …] Display Homebrew’s download cache. See also `HOMEBREW_CACHE`. If *`formula`* is provided, display the file or directory used to cache *`formula`*. * `-s`, `--build-from-source`: Show the cache file used when building from source. * `--force-bottle`: Show the cache file used when pouring a bottle. * `--bottle-tag`: Show the cache file used when pouring a bottle for the given tag. * `--HEAD`: Show the cache file used when building from HEAD. * `--formula`: Only show cache files for formulae. * `--cask`: Only show cache files for casks. ### `--caskroom` [*`cask`* …] Display Homebrew’s Caskroom path. If *`cask`* is provided, display the location in the Caskroom where *`cask`* would be installed, without any sort of versioned directory as the last path. ### `--cellar` [*`formula`* …] Display Homebrew’s Cellar path. *Default:* `$(brew --prefix)/Cellar`, or if that directory doesn’t exist, `$(brew --repository)/Cellar`. If *`formula`* is provided, display the location in the Cellar where *`formula`* would be installed, without any sort of versioned directory as the last path. ### `--env`, `environment` [*`--shell`*`=`] [*`--plain`*] [*`formula`* …] Summarise Homebrew’s build environment as a plain list. If the command’s output is sent through a pipe and no shell is specified, the list is formatted for export to `bash`(1) unless `--plain` is passed. * `--shell`: Generate a list of environment variables for the specified shell, or `--shell=auto` to detect the current shell. * `--plain`: Generate plain output even when piped. ### `--prefix` [*`--unbrewed`*] [*`--installed`*] [*`formula`* …] Display Homebrew’s install path. *Default:* * macOS Intel: `/usr/local` * macOS ARM: `/opt/homebrew` * Linux: `/home/linuxbrew/.linuxbrew` If *`formula`* is provided, display the location where *`formula`* is or would be installed. * `--unbrewed`: List files in Homebrew’s prefix not installed by Homebrew. * `--installed`: Outputs nothing and returns a failing status code if *`formula`* is not installed. ### `--repository`, `--repo` [*`tap`* …] Display where Homebrew’s git repository is located. If *`user`*`/`*`repo`* are provided, display where tap *`user`*`/`*`repo`*’s directory is located. ### `--version`, `-v` Print the version numbers of Homebrew, Homebrew/homebrew-core and Homebrew/homebrew-cask (if tapped) to standard output. DEVELOPER COMMANDS ------------------ ### `audit` [*`options`*] [*`formula`*|*`cask`* …] Check *`formula`* for Homebrew coding style violations. This should be run before submitting a new formula or cask. If no *`formula`*|*`cask`* are provided, check all locally available formulae and casks and skip style checks. Will exit with a non-zero status if any errors are found. 
* `--strict`: Run additional, stricter style checks. * `--git`: Run additional, slower style checks that navigate the Git repository. * `--online`: Run additional, slower style checks that require a network connection. * `--installed`: Only check formulae and casks that are currently installed. * `--new`: Run various additional style checks to determine if a new formula or cask is eligible for Homebrew. This should be used when creating new formulae and implies `--strict` and `--online`. * `--[no-]appcast`: Audit the appcast. * `--token-conflicts`: Audit for token conflicts. * `--tap`: Check the formulae within the given tap, specified as *`user`*`/`*`repo`*. * `--fix`: Fix style violations automatically using RuboCop’s auto-correct feature. * `--display-cop-names`: Include the RuboCop cop name for each violation in the output. * `--display-filename`: Prefix every line of output with the file or formula name being audited, to make output easy to grep. * `--display-failures-only`: Only display casks that fail the audit. This is the default for formulae. * `--skip-style`: Skip running non-RuboCop style checks. Useful if you plan on running `brew style` separately. Enabled by default unless a formula is specified by name. * `-D`, `--audit-debug`: Enable debugging and profiling of audit methods. * `--only`: Specify a comma-separated *`method`* list to only run the methods named `audit_`*`method`*. * `--except`: Specify a comma-separated *`method`* list to skip running the methods named `audit_`*`method`*. * `--only-cops`: Specify a comma-separated *`cops`* list to check for violations of only the listed RuboCop cops. * `--except-cops`: Specify a comma-separated *`cops`* list to skip checking for violations of the listed RuboCop cops. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `bottle` [*`options`*] *`installed_formula`*|*`file`* […] Generate a bottle (binary package) from a formula that was installed with `--build-bottle`. If the formula specifies a rebuild version, it will be incremented in the generated DSL. Passing `--keep-old` will attempt to keep it at its original value, while `--no-rebuild` will remove it. * `--skip-relocation`: Do not check if the bottle can be marked as relocatable. * `--force-core-tap`: Build a bottle even if *`formula`* is not in `homebrew/core` or any installed taps. * `--no-rebuild`: If the formula specifies a rebuild version, remove it from the generated DSL. * `--keep-old`: If the formula specifies a rebuild version, attempt to preserve its value in the generated DSL. * `--json`: Write bottle information to a JSON file, which can be used as the value for `--merge`. * `--merge`: Generate an updated bottle block for a formula and optionally merge it into the formula file. Instead of a formula name, requires the path to a JSON file generated with `brew bottle --json` *`formula`*. * `--write`: Write changes to the formula file. A new commit will be generated unless `--no-commit` is passed. * `--no-commit`: When passed with `--write`, a new commit will not be generated after writing changes to the formula file. * `--only-json-tab`: When passed with `--json`, the tab will be written to the JSON file but not the bottle. * `--committer`: Specify a committer name and email in `git`’s standard author format. * `--root-url`: Use the specified *`URL`* as the root of the bottle’s URL instead of Homebrew’s default.
* `--root-url-using`: Use the specified download strategy class for downloading the bottle’s URL instead of Homebrew’s default. ### `bump` [*`options`*] [*`formula`*|*`cask`* …] Display out-of-date brew formulae and the latest version available. If the returned current and livecheck versions differ or when querying specific formulae, also displays whether a pull request has been opened with the URL. * `--full-name`: Print formulae/casks with fully-qualified names. * `--no-pull-requests`: Do not retrieve pull requests from GitHub. * `--formula`: Check only formulae. * `--cask`: Check only casks. * `--open-pr`: Open a pull request for the new version if there are none already open. * `--limit`: Limit number of package results returned. * `--start-with`: Letter or word that the list of package results should alphabetically follow. ### `bump-cask-pr` [*`options`*] *`cask`* Create a pull request to update *`cask`* with a new version. A best effort to determine the *`SHA-256`* will be made if the value is not supplied by the user. * `-n`, `--dry-run`: Print what would be done rather than doing it. * `--write-only`: Make the expected file modifications without taking any Git actions. * `--commit`: When passed with `--write-only`, generate a new commit after writing changes to the cask file. * `--no-audit`: Don’t run `brew audit` before opening the PR. * `--online`: Run `brew audit --online` before opening the PR. * `--no-style`: Don’t run `brew style --fix` before opening the PR. * `--no-browse`: Print the pull request URL instead of opening in a browser. * `--no-fork`: Don’t try to fork the repository. * `--version`: Specify the new *`version`* for the cask. * `--message`: Append *`message`* to the default pull request message. * `--url`: Specify the *`URL`* for the new download. * `--sha256`: Specify the *`SHA-256`* checksum of the new download. * `--fork-org`: Use the specified GitHub organization for forking. * `-f`, `--force`: Ignore duplicate open PRs. ### `bump-formula-pr` [*`options`*] [*`formula`*] Create a pull request to update *`formula`* with a new URL or a new tag. If a *`URL`* is specified, the *`SHA-256`* checksum of the new download should also be specified. A best effort to determine the *`SHA-256`* and *`formula`* name will be made if either or both values are not supplied by the user. If a *`tag`* is specified, the Git commit *`revision`* corresponding to that tag should also be specified. A best effort to determine the *`revision`* will be made if the value is not supplied by the user. If a *`version`* is specified, a best effort to determine the *`URL`* and *`SHA-256`* or the *`tag`* and *`revision`* will be made if both values are not supplied by the user. *Note:* this command cannot be used to transition a formula from a URL-and-SHA-256 style specification into a tag-and-revision style specification, nor vice versa. It must use whichever style specification the formula already uses. * `-n`, `--dry-run`: Print what would be done rather than doing it. * `--write-only`: Make the expected file modifications without taking any Git actions. * `--commit`: When passed with `--write-only`, generate a new commit after writing changes to the formula file. * `--no-audit`: Don’t run `brew audit` before opening the PR. * `--strict`: Run `brew audit --strict` before opening the PR. * `--online`: Run `brew audit --online` before opening the PR. * `--no-browse`: Print the pull request URL instead of opening in a browser. * `--no-fork`: Don’t try to fork the repository. 
* `--mirror`: Use the specified *`URL`* as a mirror URL. If *`URL`* is a comma-separated list of URLs, multiple mirrors will be added. * `--fork-org`: Use the specified GitHub organization for forking. * `--version`: Use the specified *`version`* to override the value parsed from the URL or tag. Note that `--version=0` can be used to delete an existing version override from a formula if it has become redundant. * `--message`: Append *`message`* to the default pull request message. * `--url`: Specify the *`URL`* for the new download. If a *`URL`* is specified, the *`SHA-256`* checksum of the new download should also be specified. * `--sha256`: Specify the *`SHA-256`* checksum of the new download. * `--tag`: Specify the new git commit *`tag`* for the formula. * `--revision`: Specify the new commit *`revision`* corresponding to the specified git *`tag`* or specified *`version`*. * `-f`, `--force`: Ignore duplicate open PRs. Remove all mirrors if `--mirror` was not specified. * `--python-package-name`: Use the specified *`package-name`* when finding Python resources for *`formula`*. If no package name is specified, it will be inferred from the formula’s stable URL. * `--python-extra-packages`: Include these additional Python packages when finding resources. * `--python-exclude-packages`: Exclude these Python packages when finding resources. ### `bump-revision` [*`options`*] *`formula`* […] Create a commit to increment the revision of *`formula`*. If no revision is present, “revision 1” will be added. * `-n`, `--dry-run`: Print what would be done rather than doing it. * `--remove-bottle-block`: Remove the bottle block in addition to bumping the revision. * `--write-only`: Make the expected file modifications without taking any Git actions. * `--message`: Append *`message`* to the default commit message. ### `bump-unversioned-casks` [*`options`*] *`cask`*|*`tap`* […] Check all casks with unversioned URLs in a given *`tap`* for updates. * `-n`, `--dry-run`: Do everything except caching state and opening pull requests. * `--limit`: Maximum runtime in minutes. * `--state-file`: File for caching state. ### `cat` [*`--formula`*] [*`--cask`*] *`formula`*|*`cask`* Display the source of a *`formula`* or *`cask`*. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. ### `command` *`command`* […] Display the path to the file being used when invoking `brew` *`cmd`*. ### `contributions` *`email|name`* [*`--repositories`*`=`] Summarise contributions to Homebrew repositories for a user. The first argument is a name (e.g. “BrewTestBot”) or an email address (e.g. “[email protected]”). * `--repositories`: Specify a comma-separated (no spaces) list of repositories to search. Supported repositories: `brew`, `core`, `cask`, `aliases`, `autoupdate`, `bundle`, `command-not-found`, `test-bot`, `services`, `cask-drivers`, `cask-fonts` and `cask-versions`. Omitting this flag, or specifying `--repositories=all`, will search all repositories. * `--from`: Date (ISO-8601 format) to start searching contributions. * `--to`: Date (ISO-8601 format) to stop searching contributions. ### `create` [*`options`*] *`URL`* Generate a formula or, with `--cask`, a cask for the downloadable file at *`URL`* and open it in the editor. Homebrew will attempt to automatically derive the formula name and version, but if it fails, you’ll have to make your own template. The `wget` formula serves as a simple example.
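As a sketch, this would create a formula template for a hypothetical CMake-based project (the URL and name are placeholders, not a real package):

```
# brew derives the name "foo" and version "0.1.0" from the URL,
# then opens the generated formula in your editor.
brew create https://example.com/foo-0.1.0.tar.gz --cmake --set-name=foo
```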
For the complete API, see: <https://rubydoc.brew.sh/Formula> * `--autotools`: Create a basic template for an Autotools-style build. * `--cask`: Create a basic template for a cask. * `--cmake`: Create a basic template for a CMake-style build. * `--crystal`: Create a basic template for a Crystal build. * `--go`: Create a basic template for a Go build. * `--meson`: Create a basic template for a Meson-style build. * `--node`: Create a basic template for a Node build. * `--perl`: Create a basic template for a Perl build. * `--python`: Create a basic template for a Python build. * `--ruby`: Create a basic template for a Ruby build. * `--rust`: Create a basic template for a Rust build. * `--no-fetch`: Homebrew will not download *`URL`* to the cache and will thus not add its SHA-256 to the formula for you, nor will it check the GitHub API for GitHub projects (to fill out its description and homepage). * `--HEAD`: Indicate that *`URL`* points to the package’s repository rather than a file. * `--set-name`: Explicitly set the *`name`* of the new formula or cask. * `--set-version`: Explicitly set the *`version`* of the new formula or cask. * `--set-license`: Explicitly set the *`license`* of the new formula. * `--tap`: Generate the new formula within the given tap, specified as *`user`*`/`*`repo`*. * `-f`, `--force`: Ignore errors for disallowed formula names and names that shadow aliases. ### `dispatch-build-bottle` [*`options`*] *`formula`* […] Build bottles for these formulae with GitHub Actions. * `--tap`: Target tap repository (default: `homebrew/core`). * `--timeout`: Build timeout (in minutes, default: 60). * `--issue`: If specified, post a comment to this issue number if the job fails. * `--macos`: Version(s) of macOS the bottle should be built for. * `--workflow`: Dispatch specified workflow (default: `dispatch-build-bottle.yml`). * `--upload`: Upload built bottles. * `--linux`: Dispatch bottle for Linux (using GitHub runners). * `--linux-self-hosted`: Dispatch bottle for Linux (using self-hosted runner). * `--linux-wheezy`: Use Debian Wheezy container for building the bottle on Linux. ### `edit` [*`options`*] [*`formula`*|*`cask`* …] Open a *`formula`* or *`cask`* in the editor set by `EDITOR` or `HOMEBREW_EDITOR`, or open the Homebrew repository for editing if no formula is provided. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. * `--print-path`: Print the file path to be edited, without opening an editor. ### `extract` [*`--version`*`=`] [*`--force`*] *`formula`* *`tap`* Look through repository history to find the most recent version of *`formula`* and create a copy in *`tap`*. Specifically, the command will create the new formula file at *`tap`*`/Formula/`*`formula`*`@`*`version`*`.rb`. If the tap is not installed yet, attempt to install/clone the tap before continuing. To extract a formula from a tap that is not `homebrew/core`, use its fully-qualified form of *`user`*`/`*`repo`*`/`*`formula`*. * `--version`: Extract the specified *`version`* of *`formula`* instead of the most recent. * `-f`, `--force`: Overwrite the destination formula if it already exists. ### `formula` *`formula`* […] Display the path where *`formula`* is located. ### `generate-man-completions` [*`--fail-if-not-changed`*] Generate Homebrew’s manpages and shell completions. * `--fail-if-not-changed`: Return a failing status code if no changes are detected in the manpage outputs. This can be used to notify CI when the manpages are out of date.
Additionally, the date used in new manpages will match those in the existing manpages (to allow comparison without factoring in the date). ### `install-bundler-gems` [*`--groups`*`=`] Install Homebrew’s Bundler gems. * `--groups`: Installs the specified comma-separated list of gem groups (default: last used). ### `irb` [*`--examples`*] [*`--pry`*] Enter the interactive Homebrew Ruby shell. * `--examples`: Show several examples. * `--pry`: Use Pry instead of IRB. Implied if `HOMEBREW_PRY` is set. ### `linkage` [*`options`*] [*`installed_formula`* …] Check the library links from the given *`formula`* kegs. If no *`formula`* are provided, check all kegs. Raises an error if run on uninstalled formulae. * `--test`: Show only missing libraries and exit with a non-zero status if any missing libraries are found. * `--strict`: Exit with a non-zero status if any undeclared dependencies with linkage are found. * `--reverse`: For every library that a keg references, print its dylib path followed by the binaries that link to it. * `--cached`: Print the cached linkage values stored in `HOMEBREW_CACHE`, set by a previous `brew linkage` run. ### `livecheck`, `lc` [*`options`*] [*`formula`*|*`cask`* …] Check for newer versions of formulae and/or casks from upstream. If no formula or cask argument is passed, the list of formulae and casks to check is taken from `HOMEBREW_LIVECHECK_WATCHLIST` or `~/.brew_livecheck_watchlist`. * `--full-name`: Print formulae/casks with fully-qualified names. * `--tap`: Check formulae/casks within the given tap, specified as *`user`*`/`*`repo`*. * `--all`: Check all available formulae/casks. * `--installed`: Check formulae/casks that are currently installed. * `--newer-only`: Show the latest version only if it’s newer than the formula/cask. * `--json`: Output information in JSON format. * `-q`, `--quiet`: Suppress warnings, don’t print a progress bar for JSON output. * `--formula`: Only check formulae. * `--cask`: Only check casks. ### `pr-automerge` [*`options`*] Find pull requests that can be automatically merged using `brew pr-publish`. * `--tap`: Target tap repository (default: `homebrew/core`). * `--workflow`: Workflow file to use with `brew pr-publish`. * `--with-label`: Pull requests must have this label. * `--without-labels`: Pull requests must not have these labels (default: `do not merge`, `new formula`, `automerge-skip`). * `--without-approval`: Pull requests do not require approval to be merged. * `--publish`: Run `brew pr-publish` on matching pull requests. * `--no-autosquash`: Instruct `brew pr-publish` to skip automatically reformatting and rewording commits in the pull request to the preferred format. * `--ignore-failures`: Include pull requests that have failing status checks. ### `pr-publish` [*`options`*] *`pull_request`* […] Publish bottles for a pull request with GitHub Actions. Requires write access to the repository. * `--no-autosquash`: Skip automatically reformatting and rewording commits in the pull request to the preferred format, even if supported on the target tap. * `--branch`: Branch to publish to (default: `master`). * `--message`: Message to include when autosquashing revision bumps, deletions, and rebuilds. * `--tap`: Target tap repository (default: `homebrew/core`). * `--workflow`: Target workflow filename (default: `publish-commit-bottles.yml`). ### `pr-pull` [*`options`*] *`pull_request`* […] Download and publish bottles, and apply the bottle commit from a pull request with artifacts generated by GitHub Actions. 
Requires write access to the repository. * `--no-upload`: Download the bottles but don’t upload them. * `--no-commit`: Do not generate a new commit before uploading. * `-n`, `--dry-run`: Print what would be done rather than doing it. * `--clean`: Do not amend the commits from pull requests. * `--keep-old`: If the formula specifies a rebuild version, attempt to preserve its value in the generated DSL. * `--no-autosquash`: Skip automatically reformatting and rewording commits in the pull request to our preferred format. * `--branch-okay`: Do not warn if pulling to a branch besides the repository default (useful for testing). * `--resolve`: When a patch fails to apply, leave in progress and allow user to resolve, instead of aborting. * `--warn-on-upload-failure`: Warn instead of raising an error if the bottle upload fails. Useful for repairing bottle uploads that previously failed. * `--committer`: Specify a committer name and email in `git`’s standard author format. * `--message`: Message to include when autosquashing revision bumps, deletions, and rebuilds. * `--artifact`: Download artifacts with the specified name (default: `bottles`). * `--tap`: Target tap repository (default: `homebrew/core`). * `--root-url`: Use the specified *`URL`* as the root of the bottle’s URL instead of Homebrew’s default. * `--root-url-using`: Use the specified download strategy class for downloading the bottle’s URL instead of Homebrew’s default. * `--workflows`: Retrieve artifacts from the specified workflow (default: `tests.yml`). Can be a comma-separated list to include multiple workflows. * `--ignore-missing-artifacts`: Comma-separated list of workflows which can be ignored if they have not been run. ### `pr-upload` [*`options`*] Apply the bottle commit and publish bottles to a host. * `--keep-old`: If the formula specifies a rebuild version, attempt to preserve its value in the generated DSL. * `-n`, `--dry-run`: Print what would be done rather than doing it. * `--no-commit`: Do not generate a new commit before uploading. * `--warn-on-upload-failure`: Warn instead of raising an error if the bottle upload fails. Useful for repairing bottle uploads that previously failed. * `--upload-only`: Skip running `brew bottle` before uploading. * `--committer`: Specify a committer name and email in `git`’s standard author format. * `--root-url`: Use the specified *`URL`* as the root of the bottle’s URL instead of Homebrew’s default. * `--root-url-using`: Use the specified download strategy class for downloading the bottle’s URL instead of Homebrew’s default. ### `prof` [*`--stackprof`*] *`command`* […] Run Homebrew with a Ruby profiler. For example, `brew prof readall`. * `--stackprof`: Use `stackprof` instead of `ruby-prof` (the default). ### `release` [*`--major`*] [*`--minor`*] Create a new draft Homebrew/brew release with the appropriate version number and release notes. By default, `brew release` will bump the patch version number. Pass `--major` or `--minor` to bump the major or minor version numbers, respectively. The command will fail if the previous major or minor release was made less than one month ago. Requires write access to the Homebrew/brew repository. * `--major`: Create a major release. * `--minor`: Create a minor release. ### `rubocop` Installs, configures and runs Homebrew’s `rubocop`. ### `ruby` [*`options`*] (`-e` *`text`*|*`file`*) Run a Ruby instance with Homebrew’s libraries loaded. For example, `brew ruby -e "puts :gcc.f.deps"` or `brew ruby script.rb`. * `-r`: Load a library using `require`. 
* `-e`: Execute the given text string as a script. ### `sh` [*`--env`*`=`] [*`--cmd`*`=`] [*`file`*] Enter an interactive shell for Homebrew’s build environment. Use years-battle-hardened build logic to help your `./configure && make && make install` and even your `gem install` succeed. Especially handy if you run Homebrew in an Xcode-only configuration since it adds tools like `make` to your `PATH` which build systems would not find otherwise. * `--env`: Use the standard `PATH` instead of superenv’s when `std` is passed. * `-c`, `--cmd`: Execute commands in a non-interactive shell. ### `sponsors` Update the list of GitHub Sponsors in the `Homebrew/brew` README. ### `style` [*`options`*] [*`file`*|*`tap`*|*`formula`*|*`cask`* …] Check formulae or files for conformance to Homebrew style guidelines. Lists of *`file`*, *`tap`* and *`formula`* may not be combined. If none are provided, `style` will run style checks on the whole Homebrew library, including core code and all formulae. * `--fix`: Fix style violations automatically using RuboCop’s auto-correct feature. * `--display-cop-names`: Include the RuboCop cop name for each violation in the output. * `--reset-cache`: Reset the RuboCop cache. * `--formula`: Treat all named arguments as formulae. * `--cask`: Treat all named arguments as casks. * `--only-cops`: Specify a comma-separated *`cops`* list to check for violations of only the listed RuboCop cops. * `--except-cops`: Specify a comma-separated *`cops`* list to skip checking for violations of the listed RuboCop cops. ### `tap-new` [*`options`*] *`user`*`/`*`repo`* Generate the template files for a new tap. * `--no-git`: Don’t initialize a Git repository for the tap. * `--pull-label`: Label name for pull requests ready to be pulled (default: `pr-pull`). * `--branch`: Initialize Git repository and set up GitHub Actions workflows with the specified branch name (default: `main`). * `--github-packages`: Upload bottles to GitHub Packages. ### `test` [*`options`*] *`installed_formula`* […] Run the test method provided by an installed formula. There is no standard output or return code, but generally it should notify the user if something is wrong with the installed formula. *Example:* `brew install jruby && brew test jruby` * `-f`, `--force`: Test formulae even if they are unlinked. * `--HEAD`: Test the head version of a formula. * `--keep-tmp`: Retain the temporary files created for the test. * `--retry`: Retry if a test fails. ### `tests` [*`options`*] Run Homebrew’s unit and integration tests. * `--coverage`: Generate code coverage reports. * `--generic`: Run only OS-agnostic tests. * `--no-compat`: Do not load the compatibility layer when running tests. * `--online`: Include tests that use the GitHub API and tests that use any of the taps for official external commands. * `--byebug`: Enable debugging using byebug. * `--changed`: Only run tests on files that were changed from the master branch. * `--only`: Run only *`test_script`*`_spec.rb`. Appending `:`*`line_number`* will start at a specific line. * `--seed`: Randomise tests with the specified *`value`* instead of a random seed. ### `typecheck`, `tc` [*`options`*] Check for typechecking errors using Sorbet. * `--fix`: Automatically fix type errors. * `-q`, `--quiet`: Silence all non-critical errors. * `--update`: Update RBI files. * `--all`: Regenerate all RBI files rather than just updated gems. * `--suggest-typed`: Try upgrading `typed` sigils.
* `--fail-if-not-changed`: Return a failing status code if all gems are up to date and gem definitions do not need a tapioca update. * `--dir`: Typecheck all files in a specific directory. * `--file`: Typecheck a single file. * `--ignore`: Ignores input files that contain the given string in their paths (relative to the input path passed to Sorbet). ### `unbottled` [*`options`*] [*`formula`* …] Show the unbottled dependents of formulae. * `--tag`: Use the specified bottle tag (e.g. `big_sur`) instead of the current OS. * `--dependents`: Skip getting analytics data and sort by number of dependents instead. * `--all`: Print the number of unbottled and total formulae. ### `unpack` [*`options`*] *`formula`* […] Unpack the source files for *`formula`* into subdirectories of the current working directory. * `--destdir`: Create subdirectories in the directory named by *`path`* instead. * `--patch`: Patches for *`formula`* will be applied to the unpacked source. * `-g`, `--git`: Initialise a Git repository in the unpacked source. This is useful for creating patches for the software. * `-f`, `--force`: Overwrite the destination directory if it already exists. ### `update-license-data` [*`--fail-if-not-changed`*] Update SPDX license data in the Homebrew repository. * `--fail-if-not-changed`: Return a failing status code if current license data’s version is the same as the upstream. This can be used to notify CI when the SPDX license data is out of date. ### `update-maintainers` Update the list of maintainers in the `Homebrew/brew` README. ### `update-python-resources` [*`options`*] *`formula`* […] Update versions for PyPI resource blocks in *`formula`*. * `-p`, `--print-only`: Print the updated resource blocks instead of changing *`formula`*. * `-s`, `--silent`: Suppress any output. * `--ignore-non-pypi-packages`: Don’t fail if *`formula`* is not a PyPI package. * `--version`: Use the specified *`version`* when finding resources for *`formula`*. If no version is specified, the current version for *`formula`* will be used. * `--package-name`: Use the specified *`package-name`* when finding resources for *`formula`*. If no package name is specified, it will be inferred from the formula’s stable URL. * `--extra-packages`: Include these additional packages when finding resources. * `--exclude-packages`: Exclude these packages when finding resources. ### `update-test` [*`options`*] Run a test of `brew update` with a new repository clone. If no options are passed, use `origin/master` as the start commit. * `--to-tag`: Set `HOMEBREW_UPDATE_TO_TAG` to test updating between tags. * `--keep-tmp`: Retain the temporary directory containing the new repository clone. * `--commit`: Use the specified *`commit`* as the start commit. * `--before`: Use the commit at the specified *`date`* as the start commit. ### `vendor-gems` [*`--update`*`=`] Install and commit Homebrew’s vendored gems. * `--update`: Update all vendored Gems to the latest version. GLOBAL CASK OPTIONS ------------------- These options are applicable to the `install`, `reinstall`, and `upgrade` subcommands with the `--cask` flag. * `--appdir`: Target location for Applications (default: `/Applications`). * `--colorpickerdir`: Target location for Color Pickers (default: `~/Library/ColorPickers`). * `--prefpanedir`: Target location for Preference Panes (default: `~/Library/PreferencePanes`). * `--qlplugindir`: Target location for QuickLook Plugins (default: `~/Library/QuickLook`). 
* `--mdimporterdir`: Target location for Spotlight Plugins (default: `~/Library/Spotlight`). * `--dictionarydir`: Target location for Dictionaries (default: `~/Library/Dictionaries`). * `--fontdir`: Target location for Fonts (default: `~/Library/Fonts`). * `--servicedir`: Target location for Services (default: `~/Library/Services`). * `--input-methoddir`: Target location for Input Methods (default: `~/Library/Input Methods`). * `--internet-plugindir`: Target location for Internet Plugins (default: `~/Library/Internet Plug-Ins`). * `--audio-unit-plugindir`: Target location for Audio Unit Plugins (default: `~/Library/Audio/Plug-Ins/Components`). * `--vst-plugindir`: Target location for VST Plugins (default: `~/Library/Audio/Plug-Ins/VST`). * `--vst3-plugindir`: Target location for VST3 Plugins (default: `~/Library/Audio/Plug-Ins/VST3`). * `--screen-saverdir`: Target location for Screen Savers (default: `~/Library/Screen Savers`). * `--language`: Comma-separated list of language codes to prefer for cask installation. The first matching language is used, otherwise it reverts to the cask’s default language. The default value is the language of your system. GLOBAL OPTIONS -------------- These options are applicable across multiple subcommands. * `-d`, `--debug`: Display any debugging information. * `-q`, `--quiet`: Make some output more quiet. * `-v`, `--verbose`: Make some output more verbose. * `-h`, `--help`: Show this message. OFFICIAL EXTERNAL COMMANDS -------------------------- ### `alias` [*`alias`* … | *`alias`*=*`command`*] Show existing aliases. If no aliases are given, print the whole list. * `--edit`: Edit aliases in a text editor. Either one or all aliases may be opened at once. If the given alias doesn’t exist it’ll be pre-populated with a template. ### `autoupdate` *`subcommand`* [*`interval`*] [*`options`*] An easy, convenient way to automatically update Homebrew. This script will run `brew update` in the background once every 24 hours (by default) until explicitly told to stop, utilising `launchd`. `brew autoupdate start` [*`interval`*] [*`options`*] Start autoupdating, either once every *`interval`* or once every 24 hours. Note that the interval has to be passed in seconds, so 12 hours would be `brew autoupdate start 43200`. Pass `--upgrade` or `--cleanup` to automatically run `brew upgrade` and/or `brew cleanup` respectively. Pass `--enable-notification` to send a notification when the autoupdate process has finished successfully. `brew autoupdate stop` Stop autoupdating, but retain plist & logs. `brew autoupdate delete` Cancel the autoupdate, delete the plist and logs. `brew autoupdate status` Print the current status of this tool. `brew autoupdate version` Output this tool’s current version, and a short changelog. * `--upgrade`: Automatically upgrade your installed formulae. If the Caskroom exists locally Casks will be upgraded as well. Must be passed with `start`. * `--greedy`: Upgrade casks with `--greedy` (include auto-updating casks). Must be passed with `start`. * `--cleanup`: Automatically clean brew’s cache and logs. Must be passed with `start`. * `--enable-notification`: Send a notification when the autoupdate process has finished successfully, if `terminal-notifier` is installed & found. Must be passed with `start`. *Note:* notifications are enabled by default on macOS Catalina and newer. * `--immediate`: Start the autoupdate command immediately, instead of waiting for one interval (24 hours by default) to pass first. Must be passed with `start`.
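For example, to run an update every 12 hours (43200 seconds), upgrading and cleaning up after each run and starting immediately:

```
brew autoupdate start 43200 --upgrade --cleanup --immediate
```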
### `bundle` [*`subcommand`*] Bundler for non-Ruby dependencies from Homebrew, Homebrew Cask, Mac App Store and Whalebrew. `brew bundle` [`install`] Install and upgrade (by default) all dependencies from the `Brewfile`. You can specify the `Brewfile` location using `--file` or by setting the `HOMEBREW_BUNDLE_FILE` environment variable. You can skip the installation of dependencies by adding space-separated values to one or more of the following environment variables: `HOMEBREW_BUNDLE_BREW_SKIP`, `HOMEBREW_BUNDLE_CASK_SKIP`, `HOMEBREW_BUNDLE_MAS_SKIP`, `HOMEBREW_BUNDLE_WHALEBREW_SKIP`, `HOMEBREW_BUNDLE_TAP_SKIP`. `brew bundle` will output a `Brewfile.lock.json` in the same directory as the `Brewfile` if all dependencies are installed successfully. This contains dependency and system status information which can be useful in debugging `brew bundle` failures and replicating a “last known good build” state. You can opt out of this behaviour by setting the `HOMEBREW_BUNDLE_NO_LOCK` environment variable or passing the `--no-lock` option. You may wish to check this file into the same version control system as your `Brewfile` (or ensure your version control system ignores it if you’d prefer to rely on debugging information from a local machine). `brew bundle dump` Write all installed casks/formulae/images/taps into a `Brewfile` in the current directory. `brew bundle cleanup` Uninstall all dependencies not listed in the `Brewfile`. This workflow is useful for maintainers or testers who regularly install lots of formulae. `brew bundle check` Check if all dependencies are installed from the `Brewfile`. This provides a successful exit code if everything is up-to-date, making it useful for scripting. `brew bundle list` List all dependencies present in the `Brewfile`. By default, only Homebrew dependencies are listed. `brew bundle exec` *`command`* Run an external command in an isolated build environment based on the `Brewfile` dependencies. This sanitized build environment ignores unrequested dependencies, which makes sure that things you didn’t specify in your `Brewfile` won’t get picked up by commands like `bundle install`, `npm install`, etc. It will also add compiler flags which will help find keg-only dependencies like `openssl`, `icu4c`, etc. * `--file`: Read the `Brewfile` from this location. Use `--file=-` to pipe to stdin/stdout. * `--global`: Read the `Brewfile` from `~/.Brewfile`. * `-v`, `--verbose`: `install` prints output from commands as they are run. `check` lists all missing dependencies. * `--no-upgrade`: `install` won’t run `brew upgrade` on outdated dependencies. Note they may still be upgraded by `brew install` if needed. * `-f`, `--force`: `dump` overwrites an existing `Brewfile`. `cleanup` actually performs its cleanup operations. * `--cleanup`: `install` performs cleanup operation, same as running `cleanup --force`. * `--no-lock`: `install` won’t output a `Brewfile.lock.json`. * `--all`: `list` all dependencies. * `--formula`: `list` Homebrew dependencies. * `--cask`: `list` Homebrew Cask dependencies. * `--tap`: `list` tap dependencies. * `--mas`: `list` Mac App Store dependencies. * `--whalebrew`: `list` Whalebrew dependencies. * `--describe`: `dump` adds a description comment above each line, unless the dependency does not have a description. * `--no-restart`: `dump` does not add `restart_service` to formula lines. * `--zap`: `cleanup` casks using the `zap` command instead of `uninstall`.
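For illustration, a minimal hypothetical `Brewfile` mixing the dependency types described above (the names and the Mac App Store id are placeholders for your own choices):

```
tap "homebrew/cask"
brew "wget"
brew "node", restart_service: true
cask "firefox"
mas "Keynote", id: 409183694
```

Running `brew bundle` in the directory containing this file installs anything listed that is missing.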
### `command-not-found-init` Print instructions for setting up the command-not-found hook for your shell. If the output is not to a tty, print the appropriate handler script for your shell. ### `services` [*`subcommand`*] Manage background services with macOS’ `launchctl`(1) daemon manager. If `sudo` is passed, operate on `/Library/LaunchDaemons` (started at boot). Otherwise, operate on `~/Library/LaunchAgents` (started at login). [`sudo`] `brew services` [`list`] (`--json`) List information about all managed services for the current user (or root). [`sudo`] `brew services info` (*`formula`*|`--all`|`--json`) List all managed services for the current user (or root). [`sudo`] `brew services run` (*`formula`*|`--all`) Run the service *`formula`* without registering to launch at login (or boot). [`sudo`] `brew services start` (*`formula`*|`--all`|`--file=`) Start the service *`formula`* immediately and register it to launch at login (or boot). [`sudo`] `brew services stop` (*`formula`*|`--all`) Stop the service *`formula`* immediately and unregister it from launching at login (or boot). [`sudo`] `brew services kill` (*`formula`*|`--all`) Stop the service *`formula`* immediately but keep it registered to launch at login (or boot). [`sudo`] `brew services restart` (*`formula`*|`--all`) Stop (if necessary) and start the service *`formula`* immediately and register it to launch at login (or boot). [`sudo`] `brew services cleanup` Remove all unused services. * `--file`: Use the service file from this location to `start` the service. * `--all`: Run *`subcommand`* on all services. * `--json`: Output as JSON. ### `test-bot` [*`options`*] [*`formula`*] Tests the full lifecycle of a Homebrew change to a tap (Git repository). For example, for a GitHub Actions pull request that changes a formula, `brew test-bot` will ensure the system is cleaned and set up to test the formula, install the formula, run various tests and checks on it, bottle (package) the binaries and test formulae that depend on it to ensure they aren’t broken by these changes. Only supports GitHub Actions as a CI provider. This is because Homebrew uses GitHub Actions and it’s freely available for public and private use with macOS and Linux workers. * `--dry-run`: Print what would be done rather than doing it. * `--cleanup`: Clean all state from the Homebrew directory. Use with care! * `--skip-setup`: Don’t check if the local system is set up correctly. * `--build-from-source`: Build from source rather than building bottles. * `--build-dependents-from-source`: Build dependents from source rather than testing bottles. * `--junit`: Generate a JUnit XML test results file. * `--keep-old`: Run `brew bottle --keep-old` to build new bottles for a single platform. * `--skip-relocation`: Run `brew bottle --skip-relocation` to build new bottles that don’t require relocation. * `--only-json-tab`: Run `brew bottle --only-json-tab` to build new bottles that do not contain a tab. * `--local`: Ask Homebrew to write verbose logs under `./logs/` and set `$HOME` to `./home/`. * `--tap`: Use the Git repository of the given tap. Defaults to the core tap for syntax checking. * `--fail-fast`: Immediately exit on a failing step. * `-v`, `--verbose`: Print test step output in real time. Has the side effect of passing output as raw bytes instead of re-encoding in UTF-8. * `--test-default-formula`: Use a default testing formula when not building a tap and no other formulae are specified.
* `--root-url`: Use the specified *`URL`* as the root of the bottle’s URL instead of Homebrew’s default. * `--git-name`: Set the Git author/committer names to the given name. * `--git-email`: Set the Git author/committer email to the given email. * `--publish`: Publish the uploaded bottles. * `--skip-online-checks`: Don’t pass `--online` to `brew audit` and skip `brew livecheck`. * `--skip-dependents`: Don’t test any dependents. * `--skip-recursive-dependents`: Only test the direct dependents. * `--only-cleanup-before`: Only run the pre-cleanup step. Needs `--cleanup`. * `--only-setup`: Only run the local system setup check step. * `--only-tap-syntax`: Only run the tap syntax check step. * `--only-formulae`: Only run the formulae steps. * `--only-formulae-detect`: Only run the formulae detection steps. * `--only-formulae-dependents`: Only run the formulae dependents steps. * `--only-cleanup-after`: Only run the post-cleanup step. Needs `--cleanup`. * `--testing-formulae`: Use these testing formulae rather than running the formulae detection steps. * `--added-formulae`: Use these added formulae rather than running the formulae detection steps. * `--deleted-formulae`: Use these deleted formulae rather than running the formulae detection steps. * `--skipped-or-failed-formulae`: Use these skipped or failed formulae from formulae steps for a formulae dependents step. ### `unalias` *`alias`* […] Remove aliases. ### `which-formula` [*`--explain`*] *`command`* […] Prints the formula(e) which provides the given command. * `--explain`: Output explanation of how to get ‘cmd’ by installing one of the providing formulae. ### `which-update` [*`options`*] [*`database`*] Update the database used by `brew which-formula`. * `--stats`: Print statistics about the database contents (number of commands and formulae, list of missing formulae). * `--commit`: Commit the changes using `git`. * `--update-existing`: Update database entries with outdated formula versions. * `--install-missing`: Install and update formulae that are missing from the database and don’t have bottles. * `--max-downloads`: Specify a maximum number of formulae to download and update. CUSTOM EXTERNAL COMMANDS ------------------------ Homebrew, like `git`(1), supports external commands. These are executable scripts that reside somewhere in the `PATH`, named `brew-`*`cmdname`* or `brew-`*`cmdname`*`.rb`, which can be invoked like `brew` *`cmdname`*. This allows you to create your own commands without modifying Homebrew’s internals. Instructions for creating your own commands can be found in the docs: [https://docs.brew.sh/External-Commands](external-commands) SPECIFYING FORMULAE ------------------- Many Homebrew commands accept one or more *`formula`* arguments. These arguments can take several different forms: * The name of a formula: e.g. `git`, `node`, `wget`. * The fully-qualified name of a tapped formula: Sometimes a formula from a tapped repository may conflict with one in `homebrew/core`. You can still access these formulae by using a special syntax, e.g. `homebrew/dupes/vim` or `homebrew/versions/node4`. * An arbitrary file: Homebrew can install formulae from a local path. It can point to either a formula file or a bottle. Prefix relative paths with `./` to prevent them from being interpreted as a formula or tap name. SPECIFYING CASKS ---------------- Many Homebrew Cask commands accept one or more *`cask`* arguments. These can be specified the same way as the *`formula`* arguments described in `SPECIFYING FORMULAE` above.
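To illustrate the argument forms above (the names and path are examples only):

```
brew install git                  # a simple formula name
brew install homebrew/dupes/vim   # a fully-qualified tapped formula
brew install ./foo.rb             # a local formula file; note the ./ prefix
brew install --cask firefox       # casks are specified the same way
```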
ENVIRONMENT ----------- Note that environment variables must have a value set to be detected. For example, run `export HOMEBREW_NO_INSECURE_REDIRECT=1` rather than just `export HOMEBREW_NO_INSECURE_REDIRECT`. * `HOMEBREW_ADDITIONAL_GOOGLE_ANALYTICS_ID` Additional Google Analytics tracking ID to emit user behaviour analytics to. For more information, see: [https://docs.brew.sh/Analytics](analytics) * `HOMEBREW_ARCH` Linux only: Pass this value to a type name representing the compiler’s `-march` option. *Default:* `native`. * `HOMEBREW_ARTIFACT_DOMAIN` Prefix all download URLs, including those for bottles, with this value. For example, `HOMEBREW_ARTIFACT_DOMAIN=http://localhost:8080` will cause a formula with the URL `https://example.com/foo.tar.gz` to instead download from `http://localhost:8080/https://example.com/foo.tar.gz`. Bottle URLs, however, have their domain replaced with this prefix, so e.g. `https://ghcr.io/v2/homebrew/core/gettext/manifests/0.21` is instead downloaded from `http://localhost:8080/v2/homebrew/core/gettext/manifests/0.21`. * `HOMEBREW_AUTO_UPDATE_SECS` Run `brew update` once every `HOMEBREW_AUTO_UPDATE_SECS` seconds before some commands, e.g. `brew install`, `brew upgrade` and `brew tap`. Alternatively, disable auto-update entirely with HOMEBREW\_NO\_AUTO\_UPDATE. *Default:* `300`. * `HOMEBREW_AUTOREMOVE` If set, calls to `brew cleanup` and `brew uninstall` will automatically remove unused formula dependents and if HOMEBREW\_NO\_INSTALL\_CLEANUP is not set, `brew cleanup` will start running `brew autoremove` periodically. * `HOMEBREW_BAT` If set, use `bat` for the `brew cat` command. * `HOMEBREW_BAT_CONFIG_PATH` Use this as the `bat` configuration file. *Default:* `$HOME/.config/bat/config`. * `HOMEBREW_BAT_THEME` Use this as the `bat` theme for syntax highlighting. *Default:* `$BAT_THEME`. * `HOMEBREW_BOOTSNAP` If set, use Bootsnap to speed up repeated `brew` calls. A no-op when using Homebrew’s vendored, relocatable Ruby on macOS (as it doesn’t work). * `HOMEBREW_BOTTLE_DOMAIN` Use this URL as the download mirror for bottles. If bottles at that URL are temporarily unavailable, the default bottle domain will be used as a fallback mirror. For example, `HOMEBREW_BOTTLE_DOMAIN=http://localhost:8080` will cause all bottles to download from the prefix `http://localhost:8080/`. If bottles are not available at `HOMEBREW_BOTTLE_DOMAIN` they will be downloaded from the default bottle domain. *Default:* `https://ghcr.io/v2/homebrew/core`. * `HOMEBREW_BREW_GIT_REMOTE` Use this URL as the Homebrew/brew `git`(1) remote. *Default:* `https://github.com/Homebrew/brew`. * `HOMEBREW_BROWSER` Use this as the browser when opening project homepages. *Default:* `$BROWSER` or the OS’s default browser. * `HOMEBREW_CACHE` Use this directory as the download cache. *Default:* macOS: `$HOME/Library/Caches/Homebrew`, Linux: `$XDG_CACHE_HOME/Homebrew` or `$HOME/.cache/Homebrew`. * `HOMEBREW_CASK_OPTS` Append these options to all `cask` commands. All `--*dir` options, `--language`, `--require-sha`, `--no-quarantine` and `--no-binaries` are supported. For example, you might add something like the following to your `~/.profile`, `~/.bash_profile`, or `~/.zshenv`: `export HOMEBREW_CASK_OPTS="--appdir=~/Applications --fontdir=/Library/Fonts"` * `HOMEBREW_CLEANUP_PERIODIC_FULL_DAYS` If set, `brew install`, `brew upgrade` and `brew reinstall` will clean up all formulae when this number of days has passed. *Default:* `30`.
* `HOMEBREW_CLEANUP_MAX_AGE_DAYS` Cleanup all cached files older than this many days. *Default:* `120`. * `HOMEBREW_COLOR` If set, force colour output on non-TTY outputs. * `HOMEBREW_CORE_GIT_REMOTE` Use this URL as the Homebrew/homebrew-core `git`(1) remote. *Default:* `https://github.com/Homebrew/homebrew-core`. * `HOMEBREW_CURLRC` If set, do not pass `--disable` when invoking `curl`(1), which disables the use of `curlrc`. * `HOMEBREW_CURL_PATH` Linux only: Set this value to a new enough `curl` executable for Homebrew to use. *Default:* `curl`. * `HOMEBREW_CURL_RETRIES` Pass the given retry count to `--retry` when invoking `curl`(1). *Default:* `3`. * `HOMEBREW_CURL_VERBOSE` If set, pass `--verbose` when invoking `curl`(1). * `HOMEBREW_DEVELOPER` If set, tweak behaviour to be more relevant for Homebrew developers (active or budding) by e.g. turning warnings into errors. * `HOMEBREW_DISABLE_LOAD_FORMULA` If set, refuse to load formulae. This is useful when formulae are not trusted (such as in pull requests). * `HOMEBREW_DISPLAY` Use this X11 display when opening a page in a browser, for example with `brew home`. Primarily useful on Linux. *Default:* `$DISPLAY`. * `HOMEBREW_DISPLAY_INSTALL_TIMES` If set, print install times for each formula at the end of the run. * `HOMEBREW_EDITOR` Use this editor when editing a single formula, or several formulae in the same directory. *Note:* `brew edit` will open all of Homebrew as discontinuous files and directories. Visual Studio Code can handle this correctly in project mode, but many editors will do strange things in this case. *Default:* `$EDITOR` or `$VISUAL`. * `HOMEBREW_FAIL_LOG_LINES` Output this many lines of output on formula `system` failures. *Default:* `15`. * `HOMEBREW_FORBIDDEN_LICENSES` A space-separated list of licenses. Homebrew will refuse to install a formula if it or any of its dependencies has a license on this list. * `HOMEBREW_FORCE_BREWED_CA_CERTIFICATES` If set, always use a Homebrew-installed `ca-certificates` rather than the system version. Automatically set if the system version is too old. * `HOMEBREW_FORCE_BREWED_CURL` If set, always use a Homebrew-installed `curl`(1) rather than the system version. Automatically set if the system version of `curl` is too old. * `HOMEBREW_FORCE_BREWED_GIT` If set, always use a Homebrew-installed `git`(1) rather than the system version. Automatically set if the system version of `git` is too old. * `HOMEBREW_FORCE_VENDOR_RUBY` If set, always use Homebrew’s vendored, relocatable Ruby version even if the system version of Ruby is new enough. * `HOMEBREW_GITHUB_API_TOKEN` Use this personal access token for the GitHub API, for features such as `brew search`. You can create one at <https://github.com/settings/tokens>. If set, GitHub will allow you a greater number of API requests. For more information, see: <https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting> *Note:* Homebrew doesn’t require permissions for any of the scopes, but some developer commands may require additional permissions. * `HOMEBREW_GITHUB_PACKAGES_TOKEN` Use this GitHub personal access token when accessing the GitHub Packages Registry (where bottles may be stored). * `HOMEBREW_DOCKER_REGISTRY_BASIC_AUTH_TOKEN` Use this base64 encoded username and password for authenticating with a Docker registry proxying GitHub Packages. If HOMEBREW\_DOCKER\_REGISTRY\_TOKEN is set, it will be used instead. 
* `HOMEBREW_DOCKER_REGISTRY_TOKEN` Use this bearer token for authenticating with a Docker registry proxying GitHub Packages. Preferred over HOMEBREW\_DOCKER\_REGISTRY\_BASIC\_AUTH\_TOKEN.
* `HOMEBREW_GITHUB_PACKAGES_USER` Use this username when accessing the GitHub Packages Registry (where bottles may be stored).
* `HOMEBREW_GIT_EMAIL` Set the Git author and committer email to this value.
* `HOMEBREW_GIT_NAME` Set the Git author and committer name to this value.
* `HOMEBREW_GIT_PATH` Linux only: Set this value to a new enough `git` executable for Homebrew to use. *Default:* `git`.
* `HOMEBREW_INSTALL_BADGE` Print this text before the installation summary of each successful build. *Default:* The “Beer Mug” emoji.
* `HOMEBREW_INSTALL_FROM_API` If set, install formulae and casks in homebrew/core and homebrew/cask taps using Homebrew’s API instead of needing (large, slow) local checkouts of these repositories. *Note:* Setting HOMEBREW\_INSTALL\_FROM\_API is not compatible with Homebrew’s developer mode and will error (as Homebrew development needs a full clone).
* `HOMEBREW_LIVECHECK_WATCHLIST` Consult this file for the list of formulae to check by default when no formula argument is passed to `brew livecheck`. *Default:* `$HOME/.brew_livecheck_watchlist`.
* `HOMEBREW_LOGS` Use this directory to store log files. *Default:* macOS: `$HOME/Library/Logs/Homebrew`, Linux: `$XDG_CACHE_HOME/Homebrew/Logs` or `$HOME/.cache/Homebrew/Logs`.
* `HOMEBREW_MAKE_JOBS` Use this value as the number of parallel jobs to run when building with `make`(1). *Default:* The number of available CPU cores.
* `HOMEBREW_NO_ANALYTICS` If set, do not send analytics. For more information, see: [https://docs.brew.sh/Analytics](analytics)
* `HOMEBREW_NO_AUTO_UPDATE` If set, do not automatically update before running some commands, e.g. `brew install`, `brew upgrade` and `brew tap`. Alternatively, run this less often by setting HOMEBREW\_AUTO\_UPDATE\_SECS to a value higher than the default.
* `HOMEBREW_NO_BOOTSNAP` If set, do not use Bootsnap to speed up repeated `brew` calls.
* `HOMEBREW_NO_INSTALLED_DEPENDENTS_CHECK` If set, do not check for broken linkage of dependents or outdated dependents after installing, upgrading or reinstalling formulae. This will result in fewer dependents (and their dependencies) being upgraded or reinstalled but may result in more breakage from running `brew install *`formula`*` or `brew upgrade *`formula`*`.
* `HOMEBREW_NO_CLEANUP_FORMULAE` A comma-separated list of formulae. Homebrew will refuse to clean up or autoremove a formula if it appears on this list.
* `HOMEBREW_NO_COLOR` If set, do not print text with colour added. *Default:* `$NO_COLOR`.
* `HOMEBREW_NO_COMPAT` If set, disable all use of legacy compatibility code.
* `HOMEBREW_NO_EMOJI` If set, do not print `HOMEBREW_INSTALL_BADGE` on a successful build. *Note:* Will only try to print emoji on OS X Lion or newer.
* `HOMEBREW_NO_ENV_HINTS` If set, do not print any hints about changing Homebrew’s behaviour with environment variables.
* `HOMEBREW_NO_GITHUB_API` If set, do not use the GitHub API, e.g. for searches or fetching relevant issues after a failed install.
* `HOMEBREW_NO_INSECURE_REDIRECT` If set, forbid redirects from secure HTTPS to insecure HTTP. *Note:* While ensuring your downloads are fully secure, this is likely to cause from-source builds of SourceForge-hosted and some GNU & GNOME-hosted formulae to fail to download.
* `HOMEBREW_NO_INSTALL_CLEANUP` If set, `brew install`, `brew upgrade` and `brew reinstall` will never automatically clean up installed/upgraded/reinstalled formulae or all formulae every `HOMEBREW_CLEANUP_PERIODIC_FULL_DAYS` days. Alternatively, HOMEBREW\_NO\_CLEANUP\_FORMULAE allows specifying particular formulae that should not be cleaned up.
* `HOMEBREW_NO_INSTALL_UPGRADE` If set, `brew install *`formula`*` will not upgrade `*`formula`*` if it is installed but outdated.
* `HOMEBREW_PRY` If set, use Pry for the `brew irb` command.
* `HOMEBREW_SIMULATE_MACOS_ON_LINUX` If set, running Homebrew on Linux will simulate certain macOS code paths. This is useful when auditing macOS formulae while on Linux.
* `HOMEBREW_SSH_CONFIG_PATH` If set, Homebrew will use the given config file instead of `~/.ssh/config` when fetching `git` repos over `ssh`. *Default:* `$HOME/.ssh/config`.
* `HOMEBREW_SKIP_OR_LATER_BOTTLES` If set along with `HOMEBREW_DEVELOPER`, do not use bottles from older versions of macOS. This is useful in development on new macOS versions.
* `HOMEBREW_SORBET_RUNTIME` If set, enable runtime typechecking using Sorbet.
* `HOMEBREW_SVN` Use this as the `svn`(1) binary. *Default:* A Homebrew-built Subversion (if installed), or the system-provided binary.
* `HOMEBREW_TEMP` Use this path as the temporary directory for building packages. Changing this may be needed if your system temporary directory and Homebrew prefix are on different volumes, as macOS has trouble moving symlinks across volumes when the target does not yet exist. This issue typically occurs when using FileVault or custom SSD configurations. *Default:* macOS: `/private/tmp`, Linux: `/tmp`.
* `HOMEBREW_UPDATE_REPORT_ALL_FORMULAE` If set, `brew update` lists changes to all formulae and cask files rather than only showing when they are new and not installed or outdated and installed.
* `HOMEBREW_UPDATE_TO_TAG` If set, always use the latest stable tag (even if developer commands have been run).
* `HOMEBREW_VERBOSE` If set, always assume `--verbose` when running commands.
* `HOMEBREW_DEBUG` If set, always assume `--debug` when running commands.
* `HOMEBREW_VERBOSE_USING_DOTS` If set, verbose output will print a `.` no more than once a minute. This can be useful to avoid long-running Homebrew commands being killed due to no output.
* `all_proxy` Use this SOCKS5 proxy for `curl`(1), `git`(1) and `svn`(1) when downloading through Homebrew.
* `ftp_proxy` Use this FTP proxy for `curl`(1), `git`(1) and `svn`(1) when downloading through Homebrew.
* `http_proxy` Use this HTTP proxy for `curl`(1), `git`(1) and `svn`(1) when downloading through Homebrew.
* `https_proxy` Use this HTTPS proxy for `curl`(1), `git`(1) and `svn`(1) when downloading through Homebrew.
* `no_proxy` A comma-separated list of hostnames and domain names excluded from proxying by `curl`(1), `git`(1) and `svn`(1) when downloading through Homebrew.
* `SUDO_ASKPASS` If set, pass the `-A` option when calling `sudo`(8).
USING HOMEBREW BEHIND A PROXY
-----------------------------
Set the `http_proxy`, `https_proxy`, `all_proxy`, `ftp_proxy` and/or `no_proxy` environment variables documented above.
For example, to use an unauthenticated HTTP or SOCKS5 proxy: ``` export http_proxy=http://$HOST:$PORT export all_proxy=socks5://$HOST:$PORT ``` And for an authenticated HTTP proxy: ``` export http_proxy=http://$USER:$PASSWORD@$HOST:$PORT ``` SEE ALSO -------- Homebrew Documentation: <https://docs.brew.sh> Homebrew API: <https://rubydoc.brew.sh> `git`(1), `git-log`(1) AUTHORS ------- Homebrew’s Project Leader is Mike McQuaid. Homebrew’s Project Leadership Committee is Issy Long, Jonathan Chang, Mike McQuaid, Misty De Méo and Sean Molenaar. Homebrew’s Technical Steering Committee is Bo Anderson, FX Coudert, Michka Popoff, Mike McQuaid and Rylan Polster. Homebrew’s other current maintainers are Alexander Bayandin, Bevan Kay, Branch Vincent, Caleb Xu, Carlo Cabrera, Daniel Nachun, Dawid Dziurla, Dustin Rodrigues, Eric Knibbe, George Adams, Markus Reiter, Maxim Belkin, Miccal Matthews, Michael Cho, Nanda H Krishna, Randall, Rui Chen, Sam Ford, Shaun Jackman, Steve Peters, Thierry Moisan and Vítor Galvão. Former maintainers with significant contributions include Claudia Pellegrino, Seeker, William Woodruff, Jan Viljanen, JCount, commitay, Dominyk Tiller, Tim Smith, Baptiste Fontaine, Xu Cheng, Martin Afanasjew, Brett Koonce, Charlie Sharpsteen, Jack Nagel, Adam Vandenberg, Andrew Janke, Alex Dunn, neutric, Tomasz Pajor, Uladzislau Shablinski, Alyssa Ross, ilovezfs, Chongyu Zhu and Homebrew’s creator: Max Howell. BUGS ---- See our issues on GitHub: * **Homebrew/brew**: <https://github.com/Homebrew/brew/issues> * **Homebrew/homebrew-core**: <https://github.com/Homebrew/homebrew-core/issues> * **Homebrew/homebrew-cask**: <https://github.com/Homebrew/homebrew-cask/issues>
homebrew How To Open a Homebrew Pull Request
How To Open a Homebrew Pull Request
===================================
The following commands are used by Homebrew contributors to set up a fork of Homebrew’s Git repository on GitHub, create a new branch and create a GitHub pull request (“PR”) of the changes in that branch.
Depending on the change you want to make, you need to send the pull request to the appropriate Homebrew repository. If you want to submit a change to Homebrew core code (the `brew` implementation), you should open the pull request on [Homebrew/brew](https://github.com/Homebrew/brew). If you want to submit a change for a formula, you should open the pull request on the [homebrew/core](https://github.com/Homebrew/homebrew-core) tap; for casks, you should open the pull request on the [homebrew/cask](https://github.com/Homebrew/homebrew-cask) tap or another [official tap](https://github.com/Homebrew), depending on the type of software.
Submit a new version of an existing formula
-------------------------------------------
1. Use `brew bump-formula-pr` to do everything (i.e. forking, committing, pushing) with a single command. Run `brew bump-formula-pr --help` to learn more.
Submit a new version of an existing cask
----------------------------------------
1. Use `brew bump-cask-pr` to do everything (i.e. forking, committing, pushing) with a single command. Run `brew bump-cask-pr --help` to learn more.
Set up your own fork of the Homebrew repository
-----------------------------------------------
### Core `brew` code related pull request
1. [Fork the Homebrew/brew repository on GitHub](https://github.com/Homebrew/brew/fork).
	* This creates a personal remote repository that you can push to. This is needed because only Homebrew maintainers have push access to the main repositories.
2. Change to the directory containing your Homebrew installation:
```
cd "$(brew --repository)"
```
3. Add your pushable forked repository as a new remote:
```
git remote add <YOUR_USERNAME> https://github.com/<YOUR_USERNAME>/brew.git
```
	* `<YOUR_USERNAME>` is your GitHub username, not your local machine username.
### Formulae related pull request
1. [Fork the Homebrew/homebrew-core repository on GitHub](https://github.com/Homebrew/homebrew-core/fork).
	* This creates a personal remote repository that you can push to. This is needed because only Homebrew maintainers have push access to the main repositories.
2. Change to the directory containing Homebrew formulae:
```
cd "$(brew --repository homebrew/core)"
```
3. Add your pushable forked repository as a new remote:
```
git remote add <YOUR_USERNAME> https://github.com/<YOUR_USERNAME>/homebrew-core.git
```
	* `<YOUR_USERNAME>` is your GitHub username, not your local machine username.
### Cask related pull request
1. [Fork the Homebrew/homebrew-cask repository on GitHub](https://github.com/Homebrew/homebrew-cask/fork).
	* This creates a personal remote repository that you can push to. This is needed because only Homebrew maintainers have push access to the main repositories.
2. Change to the directory containing Homebrew casks:
```
cd "$(brew --repository homebrew/cask)"
```
3. Add your pushable forked repository as a new remote:
```
git remote add <YOUR_USERNAME> https://github.com/<YOUR_USERNAME>/homebrew-cask.git
```
	* `<YOUR_USERNAME>` is your GitHub username, not your local machine username.
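Whichever repository you forked, you can confirm that the new remote was registered with a plain `git`(1) command. The output below is only a sketch for the Homebrew/brew case; the names and URLs will reflect your own fork:

```
git remote -v
# origin           https://github.com/Homebrew/brew (fetch)
# origin           https://github.com/Homebrew/brew (push)
# <YOUR_USERNAME>  https://github.com/<YOUR_USERNAME>/brew.git (fetch)
# <YOUR_USERNAME>  https://github.com/<YOUR_USERNAME>/brew.git (push)
```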
Create your pull request from a new branch ------------------------------------------ To make a new branch and submit it for review, create a GitHub pull request with the following steps: 1. Check out the `master` branch: ``` git checkout master ``` 2. Retrieve new changes to the `master` branch: ``` brew update ``` 3. Create a new branch from the latest `master` branch: ``` git checkout -b <YOUR_BRANCH_NAME> origin/master ``` 4. Make your changes. For formulae or casks, use `brew edit` or your favourite text editor, following all the guidelines in the [Formula Cookbook](formula-cookbook) or [Cask Cookbook](cask-cookbook). * If there’s a `bottle do` block in the formula, don’t remove or change it; we’ll update it when we pull your PR. 5. Test your changes by running the following, and ensure they all pass without issue. For changed formulae and casks, make sure you do the `brew audit` step while your changed formula/cask is installed. ``` brew tests brew install --build-from-source <CHANGED_FORMULA|CHANGED_CASK> brew test <CHANGED_FORMULA|CHANGED_CASK> brew audit --strict --online <CHANGED_FORMULA|CHANGED_CASK> ``` 6. [Make a separate commit](formula-cookbook#commit) for each changed formula with `git add` and `git commit`. * Please note that our preferred commit message format for simple version updates is “`<FORMULA_NAME> <NEW_VERSION>`”, e.g. “`source-highlight 3.1.8`”. 7. Upload your branch of new commits to your fork: ``` git push --set-upstream <YOUR_USERNAME> <YOUR_BRANCH_NAME> ``` 8. Go to the relevant repository (e.g. <https://github.com/Homebrew/brew>, <https://github.com/Homebrew/homebrew-core>, etc.) and create a pull request to request review and merging of the commits in your pushed branch. Explain why the change is needed and, if fixing a bug, how to reproduce the bug. Make sure you have done each step in the checklist that appears in your new PR. 9. Await feedback or a merge from Homebrew’s maintainers. We typically respond to all PRs within a couple days, but it may take up to a week, depending on the maintainers’ workload. Thank you! Following up ------------ To respond well to feedback: 1. Ask for clarification of anything you don’t understand and for help with anything you don’t know how to do. 2. Post a comment on your pull request if you’ve provided all the requested changes/information and it hasn’t been merged after a week. Post a comment on your pull request if you’re stuck and need help. * A `needs response` label on a PR means that the Homebrew maintainers need you to respond to previous comments. 3. Keep discussion in the pull request unless requested otherwise (i.e. do not email maintainers privately). 4. Do not continue discussion in closed pull requests. 5. Do not argue with Homebrew maintainers. You may disagree but unless they change their mind, please implement what they request. Ultimately they control what is included in Homebrew, as they have to support any changes that are made. To make changes based on feedback: 1. Check out your branch again: ``` git checkout <YOUR_BRANCH_NAME> ``` 2. Make any requested changes and commit them with `git add` and `git commit`. 3. Squash new commits into one commit per formula: ``` git rebase --interactive origin/master ``` * If you are working on a PR for a single formula, `git commit --amend` is a convenient way of keeping your commits squashed as you go. 4. 
Push your new commits to the branch on your remote fork; the open pull request will update automatically:
```
git push --force
```
Once all feedback has been addressed and if it’s a change we want to include (we include most changes), then we’ll add your commit to Homebrew. Note that the PR status may show up as “Closed” instead of “Merged” because of the way we merge contributions. Don’t worry: you will still get author credit in the actual merged commit.
Well done, you are now a Homebrew contributor!
homebrew Acceptable Casks
Acceptable Casks
================
Some casks should not go in [homebrew/cask](https://github.com/Homebrew/homebrew-cask). But there are additional [Interesting Taps and Forks](interesting-taps-and-forks) and anyone can [start their own](taps)!
Finding a Home For Your Cask
----------------------------
We maintain separate Taps for different types of binaries. Our nomenclature is:
* **Stable**: The latest version provided by the developer defined by them as such.
* **Beta, Development, Unstable**: Subsequent versions to **stable**, yet incomplete and under development, aiming to eventually become the new **stable**. Also includes alternate versions specifically targeted at developers.
* **Nightly**: Constantly up-to-date versions of the current development state.
* **Legacy**: Any **stable** version that is not the most recent.
* **Regional, Localized**: Any version that isn’t the US English one, when that exists.
* **Trial**: Time-limited version that stops working entirely after it expires, requiring payment to lift the limitation.
* **Freemium**: Gratis version that works indefinitely but with limitations that can be removed by paying.
* **Fork**: An alternate version of an existing project, with a based-on but modified source and binary.
* **Unofficial**: An *allegedly* unmodified compiled binary, by a third-party, of a binary that has no existing build by the owner of the source code.
* **Vendorless**: A binary distributed without an official website, like a forum posting.
* **Walled**: When the download URL is both behind a login/registration form and from a host that differs from the homepage.
* **Font**: Data file containing a set of glyphs, characters, or symbols that changes typed text.
* **Driver**: Software to make a hardware peripheral recognisable and usable by the system. If the software is useless without the peripheral, it’s considered a driver.
### Stable Versions
Stable versions live in the main repository at [Homebrew/homebrew-cask](https://github.com/Homebrew/homebrew-cask). They should run on the latest release of macOS or the previous point release (High Sierra and Mojave as of late 2018).
### But There Is No Stable Version!
When software is only available as a beta, development, or unstable version, its cask can go in the main repo. When stable versions become available, only those will be accepted as subsequent updates.
### Beta, Unstable, Development, Nightly, or Legacy
Alternative versions should be submitted to [Homebrew/homebrew-cask-versions](https://github.com/Homebrew/homebrew-cask-versions).
### Regional and Localized
When an App exists in more than one language or has different regional editions, [the `language` stanza should be used to switch between languages or regions](cask-cookbook#stanza-language).
### Trial and Freemium Versions
Before submitting a trial, make sure it can be made into a full working version without the need to be redownloaded.
If an App provides a trial but the only way to buy the full version is via the Mac App Store, it does not belong in any of the official repos. Freemium versions are fine.
### Forks and Apps with Conflicting Names
Forks must have the vendor’s name as a prefix on the Cask’s file name and token. If the original software is discontinued, forks still need to follow this rule so as not to surprise the user.
There are two exceptions which allow the fork to replace the main cask:
* The original discontinued software recommends that fork.
* The fork is so overwhelmingly popular that it surpasses the original and is now the de facto project when people think of the name.
For unrelated Apps that share a name, the most popular one (usually the one already present) stays unprefixed. Since this can be subjective, if you disagree with a decision, open an issue and make your case to the maintainers.
### Unofficial, Vendorless, and Walled Builds
We do not accept these casks since they offer a higher-than-normal security risk.
### Fonts
Font Casks live in the [Homebrew/homebrew-cask-fonts](https://github.com/Homebrew/homebrew-cask-fonts) repository. See the font repo [CONTRIBUTING.md](https://github.com/Homebrew/homebrew-cask-fonts/blob/HEAD/CONTRIBUTING.md) for details.
### Drivers
Driver Casks live in the [Homebrew/homebrew-cask-drivers](https://github.com/Homebrew/homebrew-cask-drivers) repository. See the drivers repo [CONTRIBUTING.md](https://github.com/Homebrew/homebrew-cask-drivers/blob/master/CONTRIBUTING.md) for details.
Apps that bundle malware
------------------------
Unfortunately, in the world of software there are bad actors that bundle malware with their apps. Even so, Homebrew Cask has long decided it will not be an active gatekeeper ([macOS already has one](https://support.apple.com/en-us/HT202491)) and [users are expected to know about the software they are installing](#homebrew-cask-is-not-a-discoverability-service).
This means we will not always remove casks that link to these apps, in part because there is no clear line between useful app, potentially unwanted program, and the different shades of malware — what is useful to one user may be seen as malicious by another. But we’d still like for users to enjoy some kind of protection while minimising occurrences of legitimate developers being branded as malware carriers. To do so, we evaluate casks on a case-by-case basis and any user is free to bring a potential malware case to our attention. However, it is important to never forget the last line of defence is *always* the user.
If an app that bundles malware was not signed with an Apple Developer ID and you purposefully disabled or bypassed Gatekeeper, no action will be taken on our part. When you disable security features, you do so at your own risk. If, however, an app that bundles malware is signed, Apple can revoke its permissions and it will no longer run on the computers of users that keep security features on — we all benefit, Homebrew Cask users or not. To report a signed app that bundles malware, use [Apple’s Feedback Assistant](https://feedbackassistant.apple.com).
We are also open to removing casks where we feel there is enough evidence that the app is malicious. To suggest a cask for removal, submit a Pull Request to delete it, together with your reasoning. Typically, this will mean presenting a [VirusTotal](https://www.virustotal.com) scan of the app showing it is malicious, ideally with some other reporting indicating it’s not a false positive.
Likewise, software which provides both “clean” and malware-infested versions might be removed from the repo — even if we could have access to the *good* version — if its developers push for users to install the *bad* version. We do so because in these cases there’s a higher than normal risk that both versions are (or will soon become) compromised in some manner.
If a cask you depend on was removed due to these rules, fear not. Removal of a cask from the official repositories means we won’t support it, but you can do so by hosting your own [tap](how-to-create-and-maintain-a-tap).
Exceptions to the Notability Threshold
--------------------------------------
Casks which do not reach a minimum notability threshold (see [Rejected Casks](#rejected-casks)) aren’t accepted in the main repositories because the increased maintenance burden doesn’t justify the poor usage numbers they will likely get. This notability check is performed automatically by the audit commands we provide, but its decisions aren’t set in stone. A cask which fails the notability check can be added if it is:
1. A popular app that has its own website but whose developers use GitHub to host the binaries. That repository won’t be notable but the app may be.
2. Submitted by a maintainer or prolific contributor. A big part of the reasoning for the notability rule is that unpopular software garners less attention, so the cask ends up abandoned, outdated, and broken. Someone with a proven investment in Homebrew Cask is less likely to let that happen for software they depend on.
3. A piece of software that was recently released to great fanfare—everyone is talking about it on Twitter and Hacker News and we’ve even gotten multiple premature submissions for it. That’s a clear case of an app that will reach the threshold in no time, so that’s a PR we won’t close immediately (but may wait to merge).
Note that none of these exceptions guarantees inclusion; they are examples of situations where we may take a second look.
Homebrew Cask is not a discoverability service
----------------------------------------------
From the inception of Homebrew Cask, various requests fell under the umbrella of this reply. Though a somewhat popular request, after careful consideration on multiple occasions we’ve always come back to the same conclusion: we’re not a discoverability service and our users are expected to have reasonable knowledge about the apps they’re installing through us before doing so. For example, [grouping casks by categories](https://github.com/Homebrew/homebrew-cask/issues/5425) is not within the scope of the project.
Amongst other things, the logistics of such requests are unsustainable for Homebrew Cask. Before making a request of this nature, you must read through previous related issues, as well as any other issues they link to, to get a full understanding of why that is the case, why “but project *x* does *y*” arguments aren’t applicable, and why not every package manager is the same. You should also be able to present clear actionable fixes to those concerns. Simply asking for it without solutions will get your issue closed.
However, there is a difference between discoverability (finding new apps you didn’t know about) and searchability (identifying the app you know about and want to install). While the former is unlikely to ever become part of our goals, the latter is indeed important to us, and we continue to work on it.
Rejected Casks
--------------
Before submitting a Cask to any of our repos, you must read [our documentation on acceptable Casks](#finding-a-home-for-your-cask) and perform at least a quick search to see if there were any previous attempts to introduce it.
Common reasons to reject a Cask entirely:
* We have strong reasons to believe including the Cask can put the whole project at risk. This has happened only once so far, [with Popcorn Time](https://github.com/Homebrew/homebrew-cask/pull/3954).
* The Cask is unreasonably difficult to maintain. Examples once included [Audacity](https://github.com/Homebrew/homebrew-cask/pull/27517) and [older Java development Casks](https://github.com/Homebrew/homebrew-cask/issues/57387).
* The app is a trial version, and the only way to acquire the full version is through the Mac App Store.
	+ Similarly (and trickier to spot), the app has moved to the Mac App Store but still provides old versions via direct download. We reject these in all official repos so users don’t get stuck using an old version, wrongly thinking they’re using the most up-to-date one (which, amongst other things, might be a security risk).
* The app is both open-source and CLI-only (i.e. it only uses the `binary` artifact). In that case, and [in the spirit of deduplication](https://github.com/Homebrew/homebrew-cask/issues/15603), submit it first to [Homebrew/core](https://github.com/Homebrew/homebrew-core) as a formula that builds from source. If it is rejected, you may then try again as a cask (link us to the issue so we can see the discussion and reasoning for rejection).
* The app is open-source and has a GUI but no compiled versions (or only old ones) are provided. It’s better to have them in [Homebrew/core](https://github.com/Homebrew/homebrew-core) so users don’t get perpetually outdated versions. See [`gedit`](https://github.com/Homebrew/homebrew-cask/pull/23360) for example.
* The app has been rejected before due to an issue we cannot fix, and the new submission doesn’t fix that. An example would be [the first submission of `soapui`](https://github.com/Homebrew/homebrew-cask/pull/4939), whose installation problems were not fixed in the two subsequent submissions ([#9969](https://github.com/Homebrew/homebrew-cask/pull/9969), [#10606](https://github.com/Homebrew/homebrew-cask/pull/10606)).
* The Cask is a duplicate. These submissions mostly occur when the [token reference](cask-cookbook#token-reference) was not followed.
* The download URL for the app is both behind a login/registration form and from a host that differs from the homepage, meaning users can’t easily verify its authenticity.
* The Cask is for an unmaintained app (no releases in the last year, or [explicitly discontinued](https://github.com/Homebrew/homebrew-cask/pull/22699)).
* The Cask is for an app that is too obscure. Examples:
	+ An app from a code repository that is not notable enough (under 30 forks, 30 watchers, 75 stars).
	+ [Electronic Identification (eID) software](https://github.com/Homebrew/homebrew-cask/issues/59021).
* The Cask is for an app with no information on the homepage (example: a GitHub repository without a README).
* The author has [specifically asked us not to include it](https://github.com/Homebrew/homebrew-cask/pull/5342).
* The Cask requires [SIP to be disabled](https://github.com/Homebrew/homebrew-cask/pull/41890) to be installed and/or used.
* The Cask is a `pkg` that requires [`allow_untrusted: true`](cask-cookbook#pkg-allow_untrusted).
Common reasons to reject a Cask from the main repo:
* The cask was submitted to the wrong repo. When drafting a cask, consult “[Finding a Home For Your Cask](#finding-a-home-for-your-cask)” to see where it belongs.
No cask is guaranteed to be accepted
------------------------------------
Follow the guidelines above and your submission has a great chance of being accepted. But remember that documentation tends to lag behind current decision-making, and we can’t predict every case. Maintainers may override these rules when experience tells us it will lead to a better overall Homebrew.
homebrew brew livecheck `brew livecheck` ================ The `brew livecheck` command finds the newest version of a formula or cask’s software by checking upstream. Livecheck has [strategies](https://rubydoc.brew.sh/Homebrew/Livecheck/Strategy.html) to identify versions from various sources, such as Git repositories, websites, etc. Behavior -------- When livecheck isn’t given instructions for how to check for upstream versions, it does the following by default: 1. For formulae: Collect the `head`, `stable`, and `homepage` URLs, in that order. For casks: Collect the `url` and `homepage` URLs, in that order. 2. Determine if any strategies apply to the first URL. If not, try the next URL. 3. If a strategy can be applied, use it to check for new versions. 4. Return the newest version (or an error if versions could not be found at any available URLs). It’s sometimes necessary to override this default behavior to create a working check. If a source doesn’t provide the newest version, we need to check a different one. If livecheck doesn’t correctly match version text, we need to provide an appropriate regex or `strategy` block. This can be accomplished by adding a `livecheck` block to the formula/cask. For more information on the available methods, please refer to the [`Livecheck` class documentation](https://rubydoc.brew.sh/Livecheck.html). Creating a check ---------------- 1. **Use the debug output to understand the situation**. `brew livecheck --debug <formula>|<cask>` provides information about which URLs livecheck tries, any strategies that apply, matched versions, etc. 2. **Research available sources to select a URL**. Try removing the file name from `stable`/`url`, to see if this is a directory listing page. If that doesn’t work, try to find a page that links to the file (e.g. a download page). If it’s not possible to find the newest version on the website, try checking other sources from the formula/cask. When necessary, search for other sources outside of the formula/cask. 3. **Create a regex, if necessary**. If the check works without a regex and wouldn’t benefit from having one, it’s usually fine to omit it. More information on creating regexes can be found in the [regex guidelines](#regex-guidelines) section. ### General guidelines * **Only use `strategy` when it’s necessary**. For example, if livecheck is already using `Git` for a URL, it’s not necessary to use `strategy :git`. However, if `Git` applies to a URL but we need to use `PageMatch`, it’s necessary to specify `strategy :page_match`. * **Only use the `GithubLatest` strategy when it’s necessary and correct**. `github.com` rate limits requests and we try to minimize our use of this strategy to avoid hitting the rate limit on CI or when using `brew livecheck --tap` on large taps (e.g. homebrew/core). The `Git` strategy is often sufficient and we only need to use `GithubLatest` when the “latest” release is different than the newest version from the tags. ### URL guidelines * **A `url` is required in a `livecheck` block**. This can be a URL string (e.g. `"https://www.example.com/downloads/"`) or a formula/cask URL symbol (i.e. `:stable`, `:url`, `:head`, `:homepage`). The exception to this rule is a `livecheck` block that only uses `skip`. * **Check for versions in the same location as the stable archive, whenever possible**. * **Avoid checking paginated release pages, when possible**. 
For example, we generally avoid checking the `release` page for a GitHub project because the latest stable version can be pushed off the first page by pre-release versions. In this scenario, it’s more reliable to use the `Git` strategy, which fetches all the tags in the repository.
### Regex guidelines
The `livecheck` block regex restricts matches to a subset of the fetched content and uses a capture group around the version text.
* **Regexes should be made case insensitive, whenever possible**, by adding `i` at the end (e.g. `/.../i` or `%r{...}i`). This improves reliability, as the regex will handle changes in letter case without needing modifications.
* **Regexes should only use a capturing group around the version text**. For example, in `/href=.*?example-v?(\d+(?:\.\d+)+)(?:-src)?\.t/i`, we’re only using a capturing group around the version text (matching a version like `1.2`, `1.2.3`, etc.) and we’re using non-capturing groups elsewhere (e.g. `(?:-src)?`).
* **Anchor the start/end of the regex, to restrict the scope**. For example, on HTML pages we often match file names or version directories in `href` attribute URLs (e.g. `/href=.*?example[._-]v?(\d+(?:\.\d+)+)\.zip/i`). The general idea is that limiting scope will help exclude unwanted matches.
* **Avoid generic catch-alls like `.*` or `.+`** in favor of something non-greedy and/or contextually appropriate. For example, to match characters within the bounds of an HTML attribute, use `[^"' >]+?`.
* **Use `[._-]` in place of a period/underscore/hyphen between the software name and version in a file name**. For a file named `example-1.2.3.tar.gz`, `example[._-]v?(\d+(?:\.\d+)+)\.t` will continue matching if the upstream file name format changes to `example_1.2.3.tar.gz` or `example.1.2.3.tar.gz`.
* **Use `\.t` in place of `\.tgz`, `\.tar\.gz`, etc.** There are a variety of different file extensions for tarballs (e.g. `.tar.bz2`, `tbz2`, `.tar.gz`, `.tgz`, `.tar.xz`, `.txz`, etc.) and the upstream source may switch from one compression format to another over time. `\.t` avoids this issue by matching current and future formats starting with `t`. Outside of tarballs, we use the full file extension in the regex like `\.zip`, `\.jar`, etc.
Example `livecheck` blocks
--------------------------
The following examples cover a number of patterns that you may encounter. These are intended to be representative samples and can be easily adapted. When in doubt, start with one of these examples instead of copy-pasting a `livecheck` block from a random formula/cask.
### File names
When matching the version from a file name on an HTML page, we often restrict matching to `href` attributes. `href=.*?` will match the opening delimiter (`"`, `'`) as well as any part of the URL before the file name.
```
livecheck do
  url "https://www.example.com/downloads/"
  regex(/href=.*?example[._-]v?(\d+(?:\.\d+)+)\.t/i)
end
```
We sometimes make this more explicit to exclude unwanted matches. URLs with a preceding path can use `href=.*?/` and others can use `href=["']?`. For example, this is necessary when the page also contains unwanted files with a longer prefix (`another-example-1.2.tar.gz`).
### Version directories
When checking a directory listing page, sometimes files are separated into version directories (e.g. `1.2.3/`). In this case, we must identify versions from the directory names.
``` livecheck do url "https://www.example.com/releases/example/" regex(%r{href=["']?v?(\d+(?:\.\d+)+)/?["' >]}i) end ``` ### Git tags When the `stable` URL uses the `Git` strategy, the following example will only match tags like `1.2`/`v1.2`, etc. ``` livecheck do url :stable regex(/^v?(\d+(?:\.\d+)+)$/i) end ``` If tags include the software name as a prefix (e.g. `example-1.2.3`), it’s easy to modify the regex accordingly: `/^example[._-]v?(\d+(?:\.\d+)+)$/i` ### Referenced formula/cask A formula/cask can use the same check as another by using `formula` or `cask`. ``` livecheck do formula "another-formula" end ``` The referenced formula/cask should be in the same tap, as a reference to a formula/cask from another tap will generate an error if the user doesn’t already have it tapped. ### `strategy` blocks If the upstream version format needs to be manipulated to match the formula/cask format, a `strategy` block can be used instead of a `regex`. #### `PageMatch` `strategy` block In the example below, we’re converting a date format like `2020-01-01` into `20200101`. ``` livecheck do url :homepage strategy :page_match do |page| page.scan(/href=.*?example[._-]v?(\d{4}-\d{2}-\d{2})\.t/i) .map { |match| match&.first&.gsub(/\D/, "") } end end ``` The `PageMatch` `strategy` block style seen here also applies to any strategy that uses `PageMatch` internally. #### `Git` `strategy` block A `strategy` block for `Git` is a bit different, as the block receives an array of tag strings instead of a page content string. Similar to the `PageMatch` example, this is converting tags with a date format like `2020-01-01` into `20200101`. ``` livecheck do url :stable strategy :git do |tags| tags.map { |tag| tag[/^(\d{4}-\d{2}-\d{2})$/i, 1]&.gsub(/\D/, "") }.compact end end ``` #### `Sparkle` `strategy` block A `strategy` block for `Sparkle` receives an `item` which has methods for the `short_version`, `version`, `url` and `title`. The default pattern for the `Sparkle` strategy is `"#{item.short_version},#{item.version}"` if both are set. In the example below, the `url` also includes a download ID which is needed: ``` livecheck do url "https://www.example.com/example.xml" strategy :sparkle do |item| "#{item.short_version},#{item.version}:#{item.url[%r{/(\d+)/[^/]+\.zip}i, 1]}" end end ``` ### `skip` Livecheck automatically skips some formulae/casks for a number of reasons (deprecated, disabled, discontinued, etc.). However, on rare occasions we need to use a `livecheck` block to do a manual skip. The `skip` method takes a string containing a very brief reason for skipping. ``` livecheck do skip "No version information available" end ``` homebrew Python for Formula Authors Python for Formula Authors ========================== This document explains how to successfully use Python in a Homebrew formula. Homebrew draws a distinction between Python **applications** and Python **libraries**. The difference is that users generally do not care that applications are written in Python; it is unusual that a user would expect to be able to `import foo` after installing an application. Examples of applications are [`ansible`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/ansible.rb) and [`jrnl`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/jrnl.rb). Python libraries exist to be imported by other Python modules; they are often dependencies of Python applications. They are usually no more than incidentally useful in a terminal. 
Examples of libraries are [`py2cairo`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/py2cairo.rb) and the bindings that are installed by [`protobuf`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/protobuf.rb). Bindings are a special case of libraries that allow Python code to interact with a library or application implemented in another language.
Homebrew is happy to accept applications that are built in Python, whether the apps are available from PyPI or not. Homebrew generally won’t accept libraries that can be installed correctly with `pip install foo`. Bindings may be installed for packages that provide them, especially if equivalent functionality isn’t available through pip.
Applications should unconditionally bundle all of their Python-language dependencies and libraries and should install any unsatisfied dependencies; these strategies are discussed in depth in the following sections.
Applications
------------
### Python declarations
Formulae for apps that require Python 3 **should** declare an unconditional dependency on `"python@3.x"`. These apps **must** work with the current Homebrew Python 3.x formula.
Applications that are compatible with Python 2 **should** use the Apple-provided system Python in `/usr/bin` on systems that provide Python 2.7. No explicit Python dependency is needed since `/usr/bin` is always in `PATH` for Homebrew formulae.
### Installing
Applications should be installed into a Python [virtualenv](https://virtualenv.pypa.io/en/stable/) environment rooted in `libexec`. This prevents the app’s Python modules from contaminating the system site-packages and vice versa.
All of the Python module dependencies of the application (and their dependencies, recursively) should be declared as `resource`s in the formula and installed into the virtualenv, as well. Each dependency should be explicitly specified; please do not rely on `setup.py` or `pip` to perform automatic dependency resolution, for the [reasons described here](acceptable-formulae#we-dont-like-install-scripts-that-download-unversioned-things).
You can use `brew update-python-resources` to help you write resource stanzas. To use it, simply run `brew update-python-resources <formula>`.
Sometimes, `brew update-python-resources` won’t be able to automatically update the resources. If this happens, try running `brew update-python-resources --print-only <formula>` to print the resource stanzas instead of applying the changes directly to the file. You can then copy and paste resources as needed.
If using `brew update-python-resources` doesn’t work, you can use [homebrew-pypi-poet](https://pypi.python.org/pypi/homebrew-pypi-poet) to help you write resource stanzas. To use it, set up a virtualenv and install your package and all its dependencies. Then, `pip install homebrew-pypi-poet` into the same virtualenv. Running `poet some_package` will generate the necessary resource stanzas. You can do this like:
```
# Use a temporary directory for the virtual environment
cd "$(mktemp -d)"

# Create and source a new virtual environment in the venv/ directory
python3 -m venv venv
source venv/bin/activate

# Install the package of interest as well as homebrew-pypi-poet
pip install some_package homebrew-pypi-poet
poet some_package

# Destroy the virtual environment
deactivate
rm -rf venv
```
Homebrew provides helper methods for instantiating and populating virtualenvs. You can use them by putting `include Language::Python::Virtualenv` at the top of the `Formula` class definition.
For most applications, all you will need to write is:
```
def install
  virtualenv_install_with_resources
end
```
This is exactly the same as writing:
```
def install
  # Create a virtualenv in `libexec`. If your app needs Python 3, make sure that
  # `depends_on "python"` is declared, and use `virtualenv_create(libexec, "python3")`.
  venv = virtualenv_create(libexec)
  # Install all of the resources declared on the formula into the virtualenv.
  venv.pip_install resources
  # `pip_install_and_link` takes a look at the virtualenv's bin directory
  # before and after installing its argument. New scripts will be symlinked
  # into `bin`. `pip_install_and_link buildpath` will install the package
  # that the formula points to, because buildpath is the location where the
  # formula's tarball was unpacked.
  venv.pip_install_and_link buildpath
end
```
### Example
Installing a formula with dependencies will look like this:
```
class Foo < Formula
  include Language::Python::Virtualenv

  url "..."

  resource "six" do
    url "https://pypi.python.org/packages/source/s/six/six-1.9.0.tar.gz"
    sha256 "e24052411fc4fbd1f672635537c3fc2330d9481b18c0317695b46259512c91d5"
  end

  resource "parsedatetime" do
    url "https://pypi.python.org/packages/source/p/parsedatetime/parsedatetime-1.4.tar.gz"
    sha256 "09bfcd8f3c239c75e77b3ff05d782ab2c1aed0892f250ce2adf948d4308fe9dc"
  end

  def install
    virtualenv_install_with_resources
  end
end
```
You can also use the more verbose form and request that specific resources be installed:
```
def install
  venv = virtualenv_create(libexec)
  %w[six parsedatetime].each do |r|
    venv.pip_install resource(r)
  end
  venv.pip_install_and_link buildpath
end
```
in case you need to do different things for different resources.
Bindings
--------
To add bindings for Python 3, please add `depends_on "python@3.x"` to work with the current Homebrew Python 3.x formula.
Build Python 2 bindings with the system Python by default (don’t add an option) and they should be usable with any binary-compatible Python. If that isn’t the case, it’s an upstream bug; [here’s some advice for resolving it](https://blog.tim-smith.us/2015/09/python-extension-modules-os-x/).
### Dependencies
Bindings should follow the same advice for Python module dependencies as libraries; see below for more.
### Installing bindings
If the bindings are installed by invoking a `setup.py`, do something like:
```
cd "source/python" do
  system Formula["python@3.x"].opt_bin/"python3", *Language::Python.setup_install_args(prefix)
end
```
If the configure script takes a `--with-python` flag, it usually will not need extra help finding Python.
If the `configure` and `make` scripts do not want to install into the Cellar, sometimes you can:
1. call `./configure --without-python` (or a similarly named option)
2. `cd` into the directory containing the Python bindings
3. call `setup.py` with `system` and `Language::Python.setup_install_args` (as described above)
Sometimes we have to edit a `Makefile` on-the-fly to use our prefix for the Python bindings using Homebrew’s [`inreplace`](formula-cookbook#inreplace) helper method.
Libraries
---------
### Python declarations
Libraries built for Python 3 should include `depends_on "python@3.x"`, which will bottle against Homebrew’s Python 3.x. Python 2.x libraries must function when they are installed against either the system Python or brewed Python.
Python 2 libraries need a `uses_from_macos "python@2"` declaration; they will be built with the system Python, but should still be usable with any other Python 2.7.
If this is not the case, it’s an upstream bug; [here’s some advice for resolving it](https://blog.tim-smith.us/2015/09/python-extension-modules-os-x/).
### Installing
Libraries may be installed to `libexec` and added to `sys.path` by writing a `.pth` file (named like “homebrew-foo.pth”) to the `prefix` site-packages. This simplifies the ensuing drama if `pip` is accidentally used to upgrade a Homebrew-installed package and prevents the accumulation of stale .pyc files in Homebrew’s site-packages.
Most formulae presently just install to `prefix`.
### Dependencies
The dependencies of libraries must be installed so that they are importable. To minimise the potential for linking conflicts, dependencies should be installed to `libexec/<vendor>` and added to `sys.path` by writing a second `.pth` file (named like “homebrew-foo-dependencies.pth”) to the `prefix` site-packages.
Further down the rabbit hole
----------------------------
Additional commentary that explains why Homebrew does some of the things it does.
### setuptools vs. distutils vs. pip
Distutils is a module in the Python standard library that provides developers a basic package management API. Setuptools is a module distributed outside the standard library that extends distutils. It is a convention that Python packages provide a `setup.py` that calls the `setup()` function from either distutils or setuptools.
Setuptools provides the `easy_install` command, which is an end-user package management tool that fetches and installs packages from PyPI, the Python Package Index. `pip` is another, newer end-user package management tool, which is also provided outside the standard library. While pip supplants `easy_install`, pip does not replace the other functionality of the setuptools module.
Distutils and pip use a “flat” installation hierarchy that installs modules as individual files under site-packages while `easy_install` installs zipped eggs to site-packages instead.
Distribute (not to be confused with distutils) is an obsolete fork of setuptools. Distlib is a package maintained outside the standard library which is used by pip for some low-level packaging operations and is not relevant to most `setup.py` users.
### Running `setup.py`
In the event that a formula needs to interact with `setup.py` instead of calling `pip`, Homebrew provides a helper method, `Language::Python.setup_install_args`, which returns useful arguments for invoking `setup.py`. Your formula should use this instead of invoking `setup.py` explicitly. The syntax is:
```
system Formula["python@3.x"].opt_bin/"python3", *Language::Python.setup_install_args(prefix)
```
where `prefix` is the destination prefix (usually `libexec` or `prefix`).
### What is `--single-version-externally-managed`?
`--single-version-externally-managed` (“SVEM”) is a setuptools-only [argument to `setup.py install`](https://setuptools.readthedocs.io/en/latest/setuptools.html?#install-run-easy-install-or-old-style-installation). The primary effect of SVEM is to use distutils to perform the install instead of using setuptools’ `easy_install`.
`easy_install` does a few things that we need to avoid:
* fetches and installs dependencies
* upgrades dependencies in `sys.path` in-place
* writes `.pth` and `site.py` files which aren’t useful for us and cause link conflicts
Setuptools requires that SVEM be used in conjunction with `--record`, which provides a list of files that can later be used to uninstall the package.
We don’t need or want this because Homebrew can manage uninstallation, but since setuptools demands it, we comply. The Homebrew convention is to call the record file “installed.txt”.
Detecting whether a `setup.py` uses `setup()` from setuptools or distutils is difficult, but we always need to pass this flag to setuptools-based scripts. `pip` faces the same problem that we do and forces `setup()` to use the setuptools version by loading a shim around `setup.py` that imports setuptools before doing anything else. Since setuptools monkey-patches distutils and replaces its `setup` function, this provides a single, consistent interface. We have borrowed this code and use it in `Language::Python.setup_install_args`.
### `--prefix` vs `--root`
`setup.py` accepts a slightly bewildering array of installation options. The correct switch for Homebrew is `--prefix`, which automatically sets the `--install-foo` family of options using sane POSIX-y values.
`--root` [is used](https://mail.python.org/pipermail/distutils-sig/2010-November/017099.html) when installing into a prefix that will not become part of the final installation location of the files, like when building a .rpm or binary distribution. When using a `setup.py`-based setuptools, `--root` has the side effect of activating `--single-version-externally-managed`. It is not safe to use `--root` with an empty `--prefix` because the `root` is removed from paths when byte-compiling modules.
It is probably safe to use `--prefix` with `--root=/`, which should work with either setuptools or distutils-based `setup.py`’s but is kinda ugly.
### `pip` vs. `setup.py`
[PEP 453](https://legacy.python.org/dev/peps/pep-0453/#recommendations-for-downstream-distributors) makes a recommendation to downstream distributors (us) that sdist tarballs should be installed with `pip` instead of by invoking `setup.py` directly. We do not do this because Apple’s Python distribution does not include pip, so we can’t assume that pip is available. We could do something clever to work around Apple’s piplessness, but the value proposition is not yet clear.
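As a closing, hedged illustration of the `--prefix` and `--root` behaviours described above (the package path is hypothetical, and in a real formula `Language::Python.setup_install_args` assembles the first set of arguments for you):

```
# Prefix-style install, as used for Homebrew packages; the --install-*
# locations are derived from the prefix, and SVEM plus --record are the
# setuptools requirements discussed above.
python3 setup.py install --prefix=/opt/homebrew/Cellar/foo/1.0/libexec \
                         --single-version-externally-managed \
                         --record=installed.txt

# Staging-style install used when building a binary package; the root is
# stripped from paths when modules are byte-compiled.
python3 setup.py install --root=/tmp/stage --prefix=/usr
```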
homebrew Versions
Versions
========
[homebrew/core](https://github.com/homebrew/homebrew-core) supports multiple versions of formulae with a special naming format. For example, the formula for GCC 6 is named `gcc@6` and begins with `class GccAT6 < Formula`.
Acceptable versioned formulae
-----------------------------
Versioned formulae we include in [homebrew/core](https://github.com/homebrew/homebrew-core) must meet the following standards:
* Versioned software should build on all of Homebrew’s supported versions of macOS.
* Versioned formulae should differ in major/minor (not patch) versions from the current stable release. This is because patch versions indicate bug or security updates, and we want to ensure you apply security updates.
* Unstable versions (alpha, beta, development versions) are not acceptable for versioned (or unversioned) formulae.
* Upstream should have a release branch for each formula version, and have an explicit policy of releasing security updates for each version when necessary. For example, [PHP 7.0 was not a supported version but PHP 7.2 was](https://php.net/supported-versions.php) in January 2020. By contrast, most software projects are structured to only release security updates for their latest versions, so their earlier versions are not eligible for versioning.
* Versioned formulae should share a codebase with the main formula. If the project is split into a different repository, we recommend creating a new formula (`formula2` rather than `formula@2` or `formula@1`).
* Formulae that depend on versioned formulae must not depend on the same formula at two different versions in their recursive dependencies. For example, if you depend on `openssl@1.0` and `foo`, and `foo` depends on `openssl`, then you must instead use `openssl`.
* Versioned formulae should only be linkable at the same time as their non-versioned counterpart if the upstream project provides support for it, e.g. using suffixed binaries. If this is not possible, use `keg_only :versioned_formula` to allow users to have multiple versions installed at once.
* A `keg_only :versioned_formula` should not `post_install` anything in the `HOMEBREW_PREFIX` that conflicts with or duplicates the main counterpart (or other versioned formulae). For example, a `node@6` formula should not install its `npm` into `HOMEBREW_PREFIX` like the `node` formula does.
* Versioned formulae submitted should be expected to be used by a large number of people. If this ceases to be the case, they will be removed. We will aim not to remove those in the [top 3,000 `install_on_request` formulae](https://brew.sh/analytics/install-on-request/).
* Versioned formulae should not have `resource`s that require security updates. For example, a `node@6` formula should not have an `npm` resource but instead rely on the `npm` provided by the upstream tarball.
* Versioned formulae should be as similar as possible and sensible compared to the main formulae. Creating or updating a versioned formula should be a chance to ask questions of the main formula and vice versa, e.g. can some unused or useless options be removed or made default?
* No more than five versions of a formula (including the main one) will be supported at any given time, regardless of usage. When removing formulae that violate this, we will aim to do so based on usage and support status rather than age.
* Versioned formulae must be ABI stable for the lifetime of the version branch.
  Updates to the versioned formula must not introduce ABI incompatibilities or otherwise require dependents to be rebuilt; in practice, this means that their dependents should never need `revision` bumps to be rebuilt against newer versions. Version updates which violate this should be rejected and the formula deprecated from that point onwards.

Homebrew’s versions should not be used to “pin” formulae to your personal requirements. You should instead create your own [tap](how-to-create-and-maintain-a-tap) for formulae you or your organisation wish to control the versioning of, or those that do not meet the above standards. Software that has regular API- or ABI-breaking releases still needs to meet all the above requirements; the fact that a `brew upgrade` has broken something for you is not an argument for us to add and maintain a formula for you.

If there is a formula that currently exists in the Homebrew/homebrew-core repository or has existed in the past (i.e. was migrated or deleted), you can recover it for your own use with the `brew extract` command. This will copy the desired version of the formula into a custom tap. For example, if your project depends on `automake` 1.12 instead of the most recent version, you can obtain the `automake` formula at version 1.12 by running `brew extract automake <YOUR_GITHUB_USER>/<YOUR_TAP_REPOSITORY_NAME> --version=1.12`. Formulae obtained this way may contain deprecated, disabled or removed Homebrew syntax (e.g. checksums may be `sha1` instead of `sha256`); the `brew extract` command does not edit or update formulae to meet current standards and style requirements.

We may temporarily add versioned formulae for our own needs that do not meet these standards in [homebrew/core](https://github.com/homebrew/homebrew-core). The presence of a versioned formula there does not imply it will be maintained indefinitely or that we are willing to accept any more versions that do not meet the requirements above.

homebrew Bottles (Binary Packages)

Bottles (Binary Packages)
=========================

Bottles are produced by installing a formula with `brew install --build-bottle <formula>` and then bottling it with `brew bottle <formula>`. This generates a bottle file in the current directory and outputs the bottle DSL for insertion into the formula file.

Usage
-----

When the formula being installed defines a bottle matching your system, it will be downloaded and installed automatically when you run `brew install <formula>`.

Bottles will not be used if:

* the user requests it (by specifying `--build-from-source`),
* the formula requests it (with `pour_bottle?`),
* any options are specified during installation (bottles are all compiled with default options),
* the bottle is not up to date (e.g. missing or mismatched checksum),
* or the bottle’s `cellar` is neither `:any` (it requires being installed to a specific Cellar path) nor equal to the current `HOMEBREW_CELLAR` (the required Cellar path does not match that of the current Homebrew installation).

Creation
--------

Bottles for `homebrew/core` formulae are created by [Brew Test Bot](brew-test-bot) when a pull request is submitted. If the formula builds successfully on each supported platform and a maintainer approves the change, Brew Test Bot updates its `bottle do` block and uploads each bottle to [GitHub Packages](https://github.com/orgs/Homebrew/packages).

By default, bottles will be built for the oldest CPU supported by the OS/architecture you’re building for (Core 2 for 64-bit x86 operating systems).
This ensures that bottles are compatible with all computers you might distribute them to. If you *really* want your bottles to be optimised for something else, you can pass the `--bottle-arch=` option to build for another architecture; for example, `brew install foo --build-bottle --bottle-arch=penryn`. Just remember that if you build for a newer architecture, some of your users might get binaries they can’t run and that would be sad!

Format
------

Bottles are simple gzipped tarballs of compiled binaries. The formula name, version, target operating system and rebuild version are stored in the filename; any other metadata is in the formula’s bottle DSL, and the formula definition is located within the bottle at `<formula>/<version>/.brew/<formula>.rb`.

Bottle DSL (Domain Specific Language)
-------------------------------------

Bottles are specified in formula definitions by a DSL contained within a `bottle do ... end` block. A simple (and typical) example:

```
bottle do
  sha256 arm64_big_sur: "a9ae578b05c3da46cedc07dd428d94a856aeae7f3ef80a0f405bf89b8cde893a"
  sha256 big_sur: "5dc376aa20241233b76e2ec2c1d4e862443a0250916b2838a1ff871e8a6dc2c5"
  sha256 catalina: "924afbbc16549d8c2b80544fd03104ff8c17a4b1460238e3ed17a1313391a2af"
  sha256 mojave: "678d338adc7d6e8c352800fe03fc56660c796bd6da23eda2b1411fed18bd0d8d"
end
```

A full example:

```
bottle do
  root_url "https://example.com"
  rebuild 4
  sha256 cellar: "/opt/homebrew/Cellar", arm64_big_sur: "a9ae578b05c3da46cedc07dd428d94a856aeae7f3ef80a0f405bf89b8cde893a"
  sha256 cellar: :any, big_sur: "5dc376aa20241233b76e2ec2c1d4e862443a0250916b2838a1ff871e8a6dc2c5"
  sha256 catalina: "924afbbc16549d8c2b80544fd03104ff8c17a4b1460238e3ed17a1313391a2af"
  sha256 mojave: "678d338adc7d6e8c352800fe03fc56660c796bd6da23eda2b1411fed18bd0d8d"
end
```

### Root URL (`root_url`)

Optionally contains the URL root used to determine bottle URLs. By default this is omitted and Homebrew’s default bottle URL root is used. This may be useful for taps that wish to provide bottles for their formulae or cater to a non-default `HOMEBREW_CELLAR`.

### Cellar (`cellar`)

Optionally contains the value of `HOMEBREW_CELLAR` in which the bottles were built. Most compiled software contains references to its compiled location, preventing it from being simply relocated anywhere on disk. A value of `:any` or `:any_skip_relocation` means that the bottle can be safely installed in any Cellar as it did not contain any references to the Cellar in which it was originally built. This can be omitted if the bottle was compiled for the given OS/architecture’s default `HOMEBREW_CELLAR`, as is done for all bottles built by Brew Test Bot.

### Rebuild version (`rebuild`)

Optionally contains the rebuild version of the bottle. Sometimes bottles may need to be updated without bumping the version or revision of the formula, e.g. if a new patch was applied. In such cases `rebuild` will have a value of `1` or more.

### Checksum (`sha256`)

Contains the SHA-256 hash of the bottle for the given OS/architecture.

Formula DSL
-----------

An additional bottle-related method is available in the formula DSL.

### Pour bottle (`pour_bottle?`)

Optionally returns a boolean to indicate whether a bottle should be used when installing this formula. For example, a bottle may break if a related formula has been compiled with non-default options, so this method could check for that case and return `false`. A full example:

```
pour_bottle? do
  reason "The bottle needs to be installed into #{Homebrew::DEFAULT_PREFIX}."
  satisfy { HOMEBREW_PREFIX.to_s == Homebrew::DEFAULT_PREFIX }
end
```

Commonly used `pour_bottle?` conditions can be added as preset symbols to the `pour_bottle?` method, allowing them to be specified like this:

```
pour_bottle? only_if: :default_prefix
pour_bottle? only_if: :clt_installed
```

homebrew Building Against Non-Homebrew Dependencies

Building Against Non-Homebrew Dependencies
==========================================

History
-------

Originally Homebrew was a build-from-source package manager and all user environment variables and non-Homebrew-installed software were available to builds. Since then Homebrew has added `Requirement`s to specify dependencies on non-Homebrew software (such as those provided by `brew cask` like X11/XQuartz), the `superenv` build system to strip out unspecified dependencies, environment filtering to stop the user environment leaking into Homebrew builds, and `default_formula` to specify that a `Requirement` can be satisfied by a particular formula. As Homebrew became primarily a binary package manager, most users were fulfilling `Requirement`s with the `default_formula`, not with arbitrary alternatives. To improve quality and reduce variation, Homebrew now exclusively supports using the default formula, as an ordinary dependency, and no longer supports using arbitrary alternatives.

Today
-----

If you wish to build against custom non-Homebrew dependencies that are provided by Homebrew (e.g. a non-Homebrew, non-macOS `ruby`) then you must [create and maintain your own tap](how-to-create-and-maintain-a-tap) as these formulae will not be accepted in Homebrew/homebrew-core. Once you have done that you can specify `env :std` in the formula, which will allow e.g. `which ruby` to access your existing `PATH` variable and allow compilation to link against this Ruby. You can also [include a custom Requirement](https://github.com/Homebrew/brew/tree/HEAD/Library/Homebrew/requirements) in your formula that more accurately describes the non-Homebrew software you build against.

homebrew Troubleshooting

Troubleshooting
===============

**Run `brew update` twice and `brew doctor` (and fix all the warnings) *before* creating an issue!**

This document will help you check for common issues and make sure your issue has not already been reported.

Check for common issues
-----------------------

* Read through the list of [Common Issues](common-issues).

Check to see if the issue has been reported
-------------------------------------------

* Search the appropriate issue tracker to see if someone else has already reported the same issue:
  + [Homebrew/homebrew-core issue tracker](https://github.com/Homebrew/homebrew-core/issues) (formulae)
  + [Homebrew/homebrew-cask issue tracker](https://github.com/Homebrew/homebrew-cask/issues) (casks)
  + [Homebrew/brew issue tracker](https://github.com/Homebrew/brew/issues) (`brew` itself)
* If the formula or cask that has failed to install is part of a non-Homebrew tap, then check that tap’s issue tracker instead.
* Search the [Homebrew discussion forum](https://github.com/homebrew/discussions/discussions) or [Discourse archive](https://discourse.brew.sh/) to see if any discussions have started about the issue.

Create an issue
---------------

If your problem hasn’t been solved or reported, then create an issue:

1. Collect debugging information:
   * If you have a problem with installing a formula: run `brew gist-logs <formula>` (where `<formula>` is the name of the formula) to upload the logs to a new [Gist](https://gist.github.com).
   * If you have a non-formula problem: collect the output of `brew config` and `brew doctor`.
2. Create a new issue on the issue tracker for [Homebrew/homebrew-core](https://github.com/Homebrew/homebrew-core/issues/new/choose), [Homebrew/homebrew-cask](https://github.com/Homebrew/homebrew-cask/issues/new/choose) or [Homebrew/brew](https://github.com/Homebrew/brew/issues/new/choose) and follow the instructions:
   * Give your issue a descriptive title which includes the formula name (if applicable) and the version of macOS or Linux you are using. For example, if a formula fails to build, title your issue “<formula> failed to build on <platform>”, where “<formula>” is the name of the formula that failed to build, and “<platform>” is the name and version of macOS or Linux you are using.
   * Include the URL provided by `brew gist-logs <formula>` (if applicable) plus links to any additional Gists you may have created.
   * Include the output of `brew config` and `brew doctor`.

homebrew Updating Software in Homebrew

Updating Software in Homebrew
=============================

Did you find something in Homebrew that wasn’t the latest version? You can help yourself and others by submitting a pull request to update the formula.

First, check the pull requests in the Homebrew tap repositories to make sure a PR isn’t already open. If you’re submitting a [formula](formula-cookbook#homebrew-terminology), check [homebrew-core](https://github.com/Homebrew/homebrew-core/pulls). If you’re submitting a [cask](formula-cookbook#homebrew-terminology), check [homebrew-cask](https://github.com/Homebrew/homebrew-cask/pulls). You may also want to look through closed pull requests in the repositories, as sometimes packages run into problems preventing them from being updated and it’s better to be aware of any issues before putting significant effort into an update.

The [How To Open a Homebrew Pull Request](how-to-open-a-homebrew-pull-request) documentation should explain almost everything you need to know about the process of creating a PR for a version update. For simple updates, this typically involves changing the URL and SHA256 values.

However, some updates require additional changes to the package. You can look back at previous pull requests to see how others have handled things in the past, but be sure to look at a variety of PRs. Sometimes packages aren’t updated properly, so you may need to use your judgment to determine how best to proceed.

Once you’ve created the pull request in the appropriate Homebrew repository, your commit(s) will be tested on our continuous integration servers, showing a green check mark if everything passed or a red X if there were failures. Maintainers will review your pull request and provide feedback about any changes that need to be made before it can be merged.

We appreciate your help in keeping Homebrew’s repositories up to date as new versions of software are released!

homebrew Custom GCC and Cross Compilers

Custom GCC and Cross Compilers
==============================

Homebrew depends on having an up-to-date version of Xcode because it comes with specific versions of build tools, e.g. `clang`. Installing a custom version of GCC or Autotools into your `PATH` has the potential to break lots of compiles, so we prefer the Apple- or Homebrew-provided compilers. Cross compilers based on GCC will typically be “keg-only” and therefore not linked into your `PATH` by default, or be prefixed with the target architecture, again to avoid conflicting with Apple or Homebrew compilers.
Rather than merging formulae for either of these cases at this time, we’re listing them on this page. If you come up with a formula for a new version of GCC or cross-compiler suite, please link to it here. * Homebrew provides a `gcc` formula for use with Xcode 4.2+. * Homebrew provides older GCC formulae, e.g. `gcc@7`. * Homebrew provides some cross-compilers and toolchains, but these are named to avoid clashing with the default tools, e.g. `i686-elf-gcc`, `x86_64-elf-gcc`. * Homebrew provides LLVM’s Clang, which is bundled with the `llvm` formula. * [RISC-V](https://github.com/riscv/homebrew-riscv) provides the RISC-V toolchain including binutils and GCC. homebrew Formula Cookbook Formula Cookbook ================ A *formula* is a package definition written in Ruby. It can be created with `brew create <URL>` where `<URL>` is a zip or tarball, installed with `brew install <formula>`, and debugged with `brew install --debug --verbose <formula>`. Formulae use the [Formula API](https://rubydoc.brew.sh/Formula) which provides various Homebrew-specific helpers. Homebrew terminology -------------------- | Term | Description | Example | | --- | --- | --- | | **Formula** | The package definition | `/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/foo.rb` | | **Keg** | The installation prefix of a **Formula** | `/usr/local/Cellar/foo/0.1` | | **Keg-only** | A **Formula** is **Keg-only** if it is not linked into the Homebrew prefix | The [`openjdk` formula](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/openjdk.rb) | | **opt prefix** | A symlink to the active version of a **Keg** | `/usr/local/opt/foo` | | **Cellar** | All **Kegs** are installed here | `/usr/local/Cellar` | | **Tap** | A Git repository of **Formulae** and/or commands | `/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core` | | **Bottle** | Pre-built **Keg** used instead of building from source | `qt-4.8.4.catalina.bottle.tar.gz` | | **Cask** | An [extension of Homebrew](https://github.com/Homebrew/homebrew-cask) to install macOS native apps | `/Applications/MacDown.app/Contents/SharedSupport/bin/macdown` | | **Brew Bundle** | An [extension of Homebrew](https://github.com/Homebrew/homebrew-bundle) to describe dependencies | `brew 'myservice', restart_service: true` | An introduction --------------- Homebrew uses Git for downloading updates and contributing to the project. Homebrew installs to the `Cellar` and then symlinks some of the installation into `/usr/local` so that other programs can see what’s going on. We suggest you `brew ls` a few of the kegs in your Cellar to see how it is all arranged. Packages are installed according to their formulae, which live in `/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula`. Check out a simple one, e.g. `brew edit etl` (or [`etl`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/etl.rb)) or a more advanced one, e.g. `brew edit git` (or [`git`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/git.rb)). Basic instructions ------------------ Make sure you run `brew update` before you start. This turns your Homebrew installation into a Git repository. Before submitting a new formula make sure your package: * meets all our [Acceptable Formulae](acceptable-formulae) requirements * isn’t already in Homebrew (check `brew search <formula>`) * isn’t already waiting to be merged (check the [issue tracker](https://github.com/Homebrew/homebrew-core/pulls)) * is still supported by upstream (i.e. 
doesn’t require extensive patching) * has a stable, tagged version (i.e. not just a GitHub repository with no versions) * passes all `brew audit --new-formula <formula>` tests Before submitting a new formula make sure you read over our [contribution guidelines](https://github.com/Homebrew/brew/blob/HEAD/CONTRIBUTING.md#contributing-to-homebrew). ### Grab the URL Run `brew create` with a URL to the source tarball: ``` brew create https://example.com/foo-0.1.tar.gz ``` This creates `/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/foo.rb` and opens it in your `EDITOR`. It’ll look something like: ``` class Foo < Formula desc "" homepage "" url "https://example.com/foo-0.1.tar.gz" sha256 "85cc828a96735bdafcf29eb6291ca91bac846579bcef7308536e0c875d6c81d7" license "" # depends_on "cmake" => :build def install # ENV.deparallelize system "./configure", "--disable-debug", "--disable-dependency-tracking", "--disable-silent-rules", "--prefix=#{prefix}" # system "cmake", ".", *std_cmake_args system "make", "install" end test do system "false" end end ``` If `brew` said `Warning: Version cannot be determined from URL` when doing the `create` step, you’ll need to explicitly add the correct [`version`](https://rubydoc.brew.sh/Formula#version-class_method) to the formula and then save the formula. Homebrew will try to guess the formula’s name from its URL. If it fails to do so you can override this with `brew create <URL> --set-name <name>`. ### Fill in the `homepage` **We don’t accept formulae without a [`homepage`](https://rubydoc.brew.sh/Formula#homepage%3D-class_method)!** An SSL/TLS (https) [`homepage`](https://rubydoc.brew.sh/Formula#homepage%3D-class_method) is preferred, if one is available. Try to summarise from the [`homepage`](https://rubydoc.brew.sh/Formula#homepage%3D-class_method) what the formula does in the [`desc`](https://rubydoc.brew.sh/Formula#desc%3D-class_method)ription. Note that the [`desc`](https://rubydoc.brew.sh/Formula#desc%3D-class_method)ription is automatically prepended with the formula name. ### Fill in the `license` **We don’t accept new formulae into Homebrew/homebrew-core without a [`license`](https://rubydoc.brew.sh/Formula#license-class_method)!** We only accept formulae that use a [Debian Free Software Guidelines license](https://wiki.debian.org/DFSGLicenses) or are released into the public domain following [DFSG Guidelines on Public Domain software](https://wiki.debian.org/DFSGLicenses#Public_Domain). Use the license identifier from the [SPDX License List](https://spdx.org/licenses/) e.g. `license "BSD-2-Clause"`, or use `license :public_domain` for public domain software. Use `:any_of`, `:all_of` or `:with` to describe complex license expressions. `:any_of` should be used when the user can choose which license to use. `:all_of` should be used when the user must use all licenses. `:with` should be used to specify a valid SPDX exception. Add `+` to an identifier to indicate that the formulae can be licensed under later versions of the same license. Check out the [License Guidelines](license-guidelines) for examples of complex license expressions in Homebrew formulae. ### Check the build system ``` brew install --interactive foo ``` You’re now at a new prompt with the tarball extracted to a temporary sandbox. Check the package’s `README`. Does the package install with `./configure`, `cmake`, or something else? Delete the commented out `cmake` lines if the package uses `./configure`. 
### Check for dependencies The `README` probably tells you about dependencies and Homebrew or macOS probably already has them. You can check for Homebrew dependencies with `brew search`. Some common dependencies that macOS comes with: * `libexpat` * `libGL` * `libiconv` * `libpcap` * `libxml2` * `python` * `ruby` There are plenty of others; check `/usr/lib` for them. We generally try not to duplicate system libraries and complicated tools in core Homebrew but we do duplicate some commonly used tools. Special exceptions are OpenSSL and LibreSSL. Things that use either *should* be built using Homebrew’s shipped equivalent and our Brew Test Bot’s post-install `audit` will warn if it detects you haven’t done this. Homebrew’s OpenSSL is [`keg_only`](https://rubydoc.brew.sh/Formula#keg_only-class_method) to avoid conflicting with the system so sometimes formulae need to have environment variables set or special configuration flags passed to locate our OpenSSL. You can see this mechanism in the [`clamav`](https://github.com/Homebrew/homebrew-core/blob/89c4574ef1a6d15e92196637ff315a0a4bb3e289/Formula/clamav.rb#L37) formula. Usually this is unnecessary because Homebrew sets up our [build environment](https://github.com/Homebrew/brew/blob/HEAD/Library/Homebrew/extend/ENV/super.rb) to favour finding [`keg_only`](https://rubydoc.brew.sh/Formula#keg_only-class_method) formulae first. **Important:** `$(brew --prefix)/bin` is NOT on the `PATH` during formula installation. If you have dependencies at build time, you must specify them and `brew` will add them to the `PATH` or create a [`Requirement`](https://rubydoc.brew.sh/Requirement). ### Specifying other formulae as dependencies ``` class Foo < Formula depends_on "pkg-config" depends_on "jpeg" depends_on "readline" => :recommended depends_on "gtk+" => :optional depends_on "httpd" => [:build, :test] depends_on :xcode => "9.3" end ``` A String (e.g. `"jpeg"`) specifies a formula dependency. A Symbol (e.g. `:xcode`) specifies a [`Requirement`](https://rubydoc.brew.sh/Requirement) which can be fulfilled by one or more formulae, casks or other system-wide installed software (e.g. Xcode). A Hash (e.g. `=>`) adds information to a dependency. Given a String or Symbol, the value can be one or more of the following values: * `:build` means that dependency is a build-time only dependency so it can be skipped when installing from a bottle or when listing missing dependencies using `brew missing`. * `:test` means that dependency is only required when running `brew test`. * `:optional` generates an implicit `with-foo` option for the formula. This means that, given `depends_on "foo" => :optional`, the user must pass `--with-foo` in order to use the dependency. * `:recommended` generates an implicit `without-foo` option, meaning that the dependency is enabled by default and the user must pass `--without-foo` to disable this dependency. The default description can be overridden using the normal option syntax (in this case, the option declaration must precede the dependency): ``` option "with-foo", "Compile with foo bindings" # This overrides the generated description if you want to depends_on "foo" => :optional # Generated description would otherwise be "Build with foo support" ``` * Some [`Requirement`](https://rubydoc.brew.sh/Requirement)s can also take a string specifying their minimum version that the formula depends on. **Note:** `:optional` and `:recommended` are not allowed in Homebrew/homebrew-core as they are not tested by CI. 
### Specifying conflicts with other formulae

Sometimes there’s a hard conflict between formulae that can’t be avoided or circumvented with [`keg_only`](https://rubydoc.brew.sh/Formula#keg_only-class_method).

A good example formula for a minor conflict is [`mbedtls`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/mbedtls.rb), which ships and compiles a “Hello World” executable. This is obviously non-essential to `mbedtls`’s functionality, and conflict with the popular GNU [`hello`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/hello.rb) formula would be overkill, so we just [remove it](https://github.com/Homebrew/homebrew-core/blob/966273060ad507fea490bd931971963de8b1a1dc/Formula/mbedtls.rb#L30-L31) during the installation process.

[`pdftohtml`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/pdftohtml.rb) provides an example of a serious conflict, where both formulae ship an identically-named binary that is essential to functionality, so a [`conflicts_with`](https://rubydoc.brew.sh/Formula#conflicts_with-class_method) is preferable.

As a general rule, [`conflicts_with`](https://rubydoc.brew.sh/Formula#conflicts_with-class_method) should be a last-resort option. It’s a fairly blunt instrument. The syntax for a conflict that can’t be worked around is:

```
conflicts_with "blueduck", because: "yellowduck also ships a duck binary"
```

### Formulae revisions

In Homebrew we sometimes accept formula updates that don’t include a version bump. These include resource updates, new patches or fixing a security issue with a formula. Occasionally, these updates require a forced recompile of the formula itself or its dependents to either ensure formulae continue to function as expected or to close a security issue. This forced recompile is known as a [`revision`](https://rubydoc.brew.sh/Formula#revision%3D-class_method) and is inserted underneath the [`homepage`](https://rubydoc.brew.sh/Formula#homepage%3D-class_method)/[`url`](https://rubydoc.brew.sh/Formula#url-class_method)/[`sha256`](https://rubydoc.brew.sh/Formula#sha256%3D-class_method) block.

When a dependent of a formula fails against a new version of that dependency, it must receive a [`revision`](https://rubydoc.brew.sh/Formula#revision%3D-class_method). An example of such a failure can be seen [here](https://github.com/Homebrew/legacy-homebrew/issues/31195) and the fix [here](https://github.com/Homebrew/legacy-homebrew/pull/31207).

[`revision`](https://rubydoc.brew.sh/Formula#revision%3D-class_method)s are also used for formulae that move from the system OpenSSL to the Homebrew-shipped OpenSSL without any other changes to that formula. This ensures users aren’t left exposed to the potential security issues of the outdated OpenSSL. An example of this can be seen in [this commit](https://github.com/Homebrew/homebrew-core/commit/0d4453a91923e6118983961e18d0609e9828a1a4).

### Version scheme changes

Sometimes formulae have version schemes that change such that a direct comparison between two versions no longer produces the correct result. For example, a project might be version `13` and then decide to become `1.0.0`. As `13` is translated to `13.0.0` by our versioning system by default, this requires intervention.

When a version scheme of a formula fails to recognise a new version as newer, it must receive a [`version_scheme`](https://rubydoc.brew.sh/Formula#version_scheme%3D-class_method). An example of this can be seen [here](https://github.com/Homebrew/homebrew-core/pull/4006).
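As a rough sketch of where these stanzas sit (a hypothetical formula with illustrative values), both `revision` and `version_scheme` go underneath the `homepage`/`url`/`sha256` block:

```
class Foo < Formula
  desc "Hypothetical example"
  homepage "https://example.com"
  url "https://example.com/foo-1.0.0.tar.gz"
  sha256 "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
  revision 1       # forces a rebuild without an upstream version bump
  version_scheme 1 # incremented because upstream renumbered, e.g. 13 -> 1.0.0
end
```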
### Double-check for dependencies When you already have a lot of formulae installed, it’s easy to miss a common dependency. You can double-check which libraries a binary links to with the `otool` command (perhaps you need to use `xcrun otool`): ``` $ otool -L /usr/local/bin/ldapvi /usr/local/bin/ldapvi: /usr/local/opt/openssl/lib/libssl.1.0.0.dylib (compatibility version 1.0.0, current version 1.0.0) /usr/local/opt/openssl/lib/libcrypto.1.0.0.dylib (compatibility version 1.0.0, current version 1.0.0) /usr/local/lib/libglib-2.0.0.dylib (compatibility version 4201.0.0, current version 4201.0.0) /usr/local/opt/gettext/lib/libintl.8.dylib (compatibility version 10.0.0, current version 10.2.0) /usr/local/opt/readline/lib/libreadline.6.dylib (compatibility version 6.0.0, current version 6.3.0) /usr/local/lib/libpopt.0.dylib (compatibility version 1.0.0, current version 1.0.0) /usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0) /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP (compatibility version 1.0.0, current version 2.4.0) /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 1.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0) ``` ### Specifying gems, Python modules, Go projects, etc. as dependencies Homebrew doesn’t package already-packaged language-specific libraries. These should be installed directly from `gem`/`cpan`/`pip` etc. If you’re installing an application then use [`resource`](https://rubydoc.brew.sh/Formula#resource-class_method)s for all language-specific dependencies: ``` class Foo < Formula resource "pycrypto" do url "https://files.pythonhosted.org/packages/60/db/645aa9af249f059cc3a368b118de33889219e0362141e75d4eaf6f80f163/pycrypto-2.6.1.tar.gz" sha256 "f2ce1e989b272cfcb677616763e0a2e7ec659effa67a88aa92b3a65528f60a3c" end def install resource("pycrypto").stage { system "python", *Language::Python.setup_install_args(libexec/"vendor") } end end ``` [`jrnl`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/jrnl.rb) is an example of a formula that does this well. The end result means the user doesn’t have to use `pip` or Python and can just run `jrnl`. For Python formulae, running `brew update-python-resources <formula>` will automatically add the necessary [`resource`](https://rubydoc.brew.sh/Formula#resource-class_method) stanzas for the dependencies of your Python application to the formula. Note that `brew update-python-resources` is run automatically by `brew create` if you pass the `--python` flag. If `brew update-python-resources` is unable to determine the correct `resource` stanzas, [homebrew-pypi-poet](https://github.com/tdsmith/homebrew-pypi-poet) is a good third-party alternative that may help. ### Install the formula ``` brew install --build-from-source --verbose --debug foo ``` `--debug` will ask you to open an interactive shell if the build fails so you can try to figure out what went wrong. Check the top of the e.g. `./configure` output. Some configure scripts do not recognise e.g. `--disable-debug`. If you see a warning about it, remove the option from the formula. ### Add a test to the formula Add a valid test to the [`test do`](https://rubydoc.brew.sh/Formula#test-class_method) block of the formula. This will be run by `brew test foo` and the [Brew Test Bot](brew-test-bot). The [`test do`](https://rubydoc.brew.sh/Formula#test-class_method) block automatically creates and changes to a temporary directory which is deleted after run. 
You can access this [`Pathname`](https://rubydoc.brew.sh/Pathname) with the [`testpath`](https://rubydoc.brew.sh/Formula#testpath-instance_method) function. The environment variable `HOME` is set to [`testpath`](https://rubydoc.brew.sh/Formula#testpath-instance_method) within the [`test do`](https://rubydoc.brew.sh/Formula#test-class_method) block.

We want tests that don’t require any user input and test the basic functionality of the application. For example, `foo build-foo input.foo` is a good test and (despite their widespread use) `foo --version` and `foo --help` are bad tests. However, a bad test is better than no test at all.

See [`cmake`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/cmake.rb) for an example of a formula with a good test. The formula writes a basic `CMakeLists.txt` file into the test directory, then calls CMake to generate Makefiles. This test checks that CMake doesn’t e.g. segfault during basic operation.

You can check that the output is as expected using [Formula assertions](https://rubydoc.brew.sh/Homebrew/Assertions.html) such as `assert_equal` or `assert_match` on the command’s output, as in this example from the [envv formula](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/envv.rb):

```
assert_equal "mylist=A:C; export mylist", shell_output("#{bin}/envv del mylist B").strip
```

You can also check that an output file was created:

```
assert_predicate testpath/"output.txt", :exist?
```

Some advice for specific cases:

* If the formula is a library, compile and run some simple code that links against it. It could be taken from upstream’s documentation / source examples. A good example is [`tinyxml2`](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/tinyxml2.rb), which writes a small C++ source file into the test directory, compiles and links it against the tinyxml2 library and finally checks that the resulting program runs successfully.
* If the formula is for a GUI program, try to find some function that runs as command-line only, like a format conversion, reading or displaying a config file, etc.
* If the software cannot function without credentials or requires a virtual machine, Docker instance, etc. to run, a test could be to try to connect with invalid credentials (or without credentials) and confirm that it fails as expected. This is preferred over mocking a dependency.
* Homebrew comes with a number of [standard test fixtures](https://github.com/Homebrew/brew/tree/master/Library/Homebrew/test/support/fixtures), including numerous sample images, sounds, and documents in various formats. You can get the file path to a test fixture with `test_fixtures("test.svg")`.
* If your test requires a test file that isn’t a standard test fixture, you can install it from a source repository during the `test` phase with a resource block, like this:

```
resource("testdata") do
  url "https://example.com/input.foo"
  sha256 "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
end

test do
  resource("testdata").stage do
    assert_match "OK", shell_output("#{bin}/foo build-foo input.foo")
  end
end
```

### Manuals

Homebrew expects to find manual pages in `#{prefix}/share/man/...`, and not in `#{prefix}/man/...`. Some software installs to `man` instead of `share/man`, so check the output and add a `"--mandir=#{man}"` to the `./configure` line if needed.

### Caveats

In case there are specific issues with the Homebrew packaging (compared to how the software is installed from other sources) a `caveats` block can be added to the formula to warn users.
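The block itself is just a method returning a string; a minimal sketch (the paths shown are illustrative) could look like:

```
def caveats
  <<~EOS
    By default, binaries installed by gem will be placed into:
      #{HOMEBREW_PREFIX}/lib/ruby/gems/bin

    You may want to add this to your PATH.
  EOS
end
```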
Such a `caveats` block can indicate non-standard install paths; for example, here is the resulting output from the `ruby` formula:

```
==> Caveats
By default, binaries installed by gem will be placed into:
  /usr/local/lib/ruby/gems/bin

You may want to add this to your PATH.
```

### A quick word on naming

Name the formula like the project markets the product. So it’s `pkg-config`, not `pkgconfig`; `sdl_mixer`, not `sdl-mixer` or `sdlmixer`.

The only exception is stuff like “Apache Ant”. Apache sticks “Apache” in front of everything, but we use the formula name `ant`. We only include the prefix in cases like `gnuplot` (because it’s part of the name) and `gnu-go` (because everyone calls it “GNU Go”—nobody just calls it “Go”). The word “Go” is too common and there are too many implementations of it.

If you’re not sure about the name, check its homepage, Wikipedia page and [what Debian calls it](https://www.debian.org/distrib/packages).

When Homebrew already has a formula called `foo` we typically do not accept requests to replace that formula with something else also named `foo`. This is to avoid confusing users and surprising their expectations.

When two formulae share an upstream name, e.g. [AESCrypt](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/aescrypt.rb) and [AES Crypt](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/aescrypt-packetizer.rb), the newer formula must typically adapt its name to avoid conflict with the current formula.

If you’re *still* not sure, just commit. We’ll apply some arbitrary rule and make a decision 😉.

When importing classes, Homebrew will require the formula and then create an instance of the class. It does this by assuming the formula name can be directly converted to the class name using a `regexp`. The rules are simple:

* `foo-bar.rb` => `FooBar`
* `foobar.rb` => `Foobar`

Thus, if you change the name of the class, you must also rename the file. Filenames should be all lowercase, and class names should be the strict CamelCase equivalent, e.g. formulae `gnu-go` and `sdl_mixer` become classes `GnuGo` and `SdlMixer`, even if part of their name is an acronym.

Add aliases by creating symlinks in an `Aliases` directory in the tap root.

### Audit the formula

You can run `brew audit --strict --online` to test formulae for adherence to Homebrew house style. The `audit` command includes warnings for trailing whitespace, preferred URLs for certain source hosts, and a lot of other style issues. Fixing these warnings before committing will make the process a lot quicker for everyone.

New formulae being submitted to Homebrew should run `brew audit --new-formula foo`. This command is performed by the Brew Test Bot on new submissions as part of the automated build and test process, and highlights more potential issues than the standard audit.

Use `brew info` and check if the version guessed by Homebrew from the URL is correct. Add an explicit [`version`](https://rubydoc.brew.sh/Formula#version-class_method) if not.

### Commit

Everything is built on Git, so contribution is easy:

```
brew update # required in more ways than you think (initialises the brew git repository if you don't already have it)
cd "$(brew --repository homebrew/core)"
# Create a new git branch for your formula so your pull request is easy to
# modify if any changes come up during review.
git checkout -b <some-descriptive-name> origin/master
git add Formula/foo.rb
git commit
```

The established standard for Git commit messages is:

* the first line is a commit summary of *50 characters or less*
* two (2) newlines, then
* explain the commit thoroughly.

At Homebrew, we like to put the name of the formula up front like so: `foobar 7.3 (new formula)`. This may seem crazy short, but you’ll find that forcing yourself to summarise the commit encourages you to be atomic and concise. If you can’t summarise it in 50-80 characters, you’re probably trying to commit two commits as one. For a more thorough explanation, please read Tim Pope’s excellent blog post, [A Note About Git Commit Messages](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).

The preferred commit message format for simple version updates is `foobar 7.3` and for fixes is `foobar: fix flibble matrix.`.

Ensure you reference any relevant GitHub issue, e.g. `Closes #12345`, in the commit message. Homebrew’s history is the first thing future contributors will look to when trying to understand the current state of formulae they’re interested in.

### Push

Now you just need to push your commit to GitHub. If you haven’t forked Homebrew yet, [go to the `homebrew-core` repository and hit the Fork button](https://github.com/Homebrew/homebrew-core).

If you have already forked Homebrew on GitHub, then you can manually push (just make sure you have been pulling from the `Homebrew/homebrew-core` master):

```
git push https://github.com/myname/homebrew-core/ <what-you-called-your-branch>
```

Now, [open a pull request](how-to-open-a-homebrew-pull-request) for your changes.

* One formula per commit; one commit per formula.
* Keep merge commits out of the pull request.

Convenience tools
-----------------

### Messaging

Three commands are provided for displaying informational messages to the user:

* `ohai` for general info
* `opoo` for warning messages
* `odie` for error messages and immediately exiting

Use `odie` when you need to exit a formula gracefully for any reason. For example:

```
if build.head?
  lib_jar = Dir["cfr-*-SNAPSHOT.jar"]
  doc_jar = Dir["cfr-*-SNAPSHOT-javadoc.jar"]
  odie "Unexpected number of artifacts!" if (lib_jar.length != 1) || (doc_jar.length != 1)
end
```

### `bin.install "foo"`

You’ll see stuff like this in some formulae. This moves the file `foo` into the formula’s `bin` directory (`/usr/local/Cellar/pkg/0.1/bin`) and makes it executable (`chmod 0555 foo`).

You can also rename the file during the installation process. This can be useful for adding a prefix to binaries that would otherwise cause conflicts with another formula, or for removing a file extension. For example, to install `foo.py` into the formula’s `bin` directory (`/usr/local/Cellar/pkg/0.1/bin`) as just `foo` instead of `foo.py`:

```
bin.install "foo.py" => "foo"
```

### `inreplace`

[`inreplace`](https://rubydoc.brew.sh/Utils/Inreplace) is a convenience function that can edit files in-place. For example:

```
inreplace "path", before, after
```

`before` and `after` can be strings or regular expressions. You should use the block form if you need to make multiple replacements in a file:

```
inreplace "path" do |s|
  s.gsub!(/foo/, "bar")
  s.gsub! "123", "456"
end
```

Make sure you modify `s`! This block ignores the returned value.

[`inreplace`](https://rubydoc.brew.sh/Utils/Inreplace) should be used instead of patches when patching something that will never be accepted upstream, e.g.
making the software’s build system respect Homebrew’s installation hierarchy. If it’s something that affects both Homebrew and MacPorts (i.e. macOS-specific) it should be turned into an upstream-submitted patch instead.

If you need to modify variables in a `Makefile`, rather than using [`inreplace`](https://rubydoc.brew.sh/Utils/Inreplace), pass them as arguments to `make`:

```
system "make", "target", "VAR1=value1", "VAR2=value2", "VAR3=values can have spaces"
```

```
system "make", "CC=#{ENV.cc}", "PREFIX=#{prefix}"
```

Note that values *can* contain unescaped spaces if you use the multiple-argument form of `system`.

Patches
-------

While [`patch`](https://rubydoc.brew.sh/Formula#patch-class_method)es should generally be avoided, sometimes they are temporarily necessary.

When [`patch`](https://rubydoc.brew.sh/Formula#patch-class_method)ing (i.e. fixing header file inclusion, fixing compiler warnings, etc.) the first thing to do is check whether or not the upstream project is aware of the issue. If not, file a bug report and/or submit your patch for inclusion. We may sometimes still accept your patch before it has been submitted upstream, but by getting the ball rolling on fixing the upstream issue you reduce the length of time we have to carry the patch around.

*Always justify a [`patch`](https://rubydoc.brew.sh/Formula#patch-class_method) with a code comment!* Otherwise, nobody will know when it is safe to remove the patch, or safe to leave it in when updating the formula. The comment should include a link to the relevant upstream issue(s).

External [`patch`](https://rubydoc.brew.sh/Formula#patch-class_method)es can be declared using resource-style blocks:

```
patch do
  url "https://example.com/example_patch.diff"
  sha256 "85cc828a96735bdafcf29eb6291ca91bac846579bcef7308536e0c875d6c81d7"
end
```

A strip level of `-p1` is assumed. It can be overridden using a symbol argument:

```
patch :p0 do
  url "https://example.com/example_patch.diff"
  sha256 "85cc828a96735bdafcf29eb6291ca91bac846579bcef7308536e0c875d6c81d7"
end
```

[`patch`](https://rubydoc.brew.sh/Formula#patch-class_method)es can be declared in [`stable`](https://rubydoc.brew.sh/Formula#stable-class_method) and [`head`](https://rubydoc.brew.sh/Formula#head-class_method) blocks. Always use a block instead of a conditional, i.e. `stable do ... end` instead of `if build.stable? then ... end`.

```
stable do
  # some other things...

  patch do
    url "https://example.com/example_patch.diff"
    sha256 "85cc828a96735bdafcf29eb6291ca91bac846579bcef7308536e0c875d6c81d7"
  end
end
```

Embedded (**END**) patches can be declared like so:

```
patch :DATA
patch :p0, :DATA
```

with the patch data included at the end of the file:

```
__END__
diff --git a/foo/showfigfonts b/foo/showfigfonts
index 643c60b..543379c 100644
--- a/foo/showfigfonts
+++ b/foo/showfigfonts
@@ -14,6 +14,7 @@
…
```

Patches can also be embedded by passing a string. This makes it possible to provide multiple embedded patches while making only some of them conditional.

```
patch :p0, "..."
```

In embedded patches, the string “HOMEBREW\_PREFIX” is replaced with the value of the constant `HOMEBREW_PREFIX` before the patch is applied.

### Creating the diff

```
brew install --interactive --git foo
# (make some edits)
git diff | pbcopy
brew edit foo
```

Now just paste into the formula after `__END__`. Instead of `git diff | pbcopy`, for some editors `git diff >> path/to/your/formula/foo.rb` might help you ensure that the patch is not touched, e.g. white space removal, indentation changes, etc.
Advanced formula tricks ----------------------- If anything isn’t clear, you can usually figure it out by `grep`ping the `$(brew --repository homebrew/core)` directory. Please submit a pull request to amend this document if you think it will help! ### Handling different system configurations Often, formulae need different dependencies, resources, patches, conflicts, deprecations or `keg_only` statuses on different OSes and arches. In these cases, the components can be nested inside `on_macos`, `on_linux`, `on_arm` or `on_intel` blocks. For example, here’s how to add `gcc` as a Linux-only dependency: ``` on_linux do depends_on "gcc" end ``` Components can also be declared for specific macOS versions or version ranges. For example, to declare a dependency only on High Sierra, nest the `depends_on` call inside an `on_high_sierra` block. Add an `:or_older` or `:or_newer` parameter to the `on_high_sierra` method to add the dependency to all macOS versions that meet the condition. For example, to add `gettext` as a build dependency on Mojave and all later macOS versions, use: ``` on_mojave :or_newer do depends_on "gettext" => :build end ``` Sometimes, a dependency is needed on certain macOS versions *and* on Linux. In these cases, a special `on_system` method can be used: ``` on_system :linux, macos: :sierra_or_older do depends_on "gettext" => :build end ``` To check multiple conditions, nest the corresponding blocks. For example, the following code adds a `gettext` build dependency when on ARM *and* macOS: ``` on_macos do on_arm do depends_on "gettext" => :build end end ``` #### Inside `def install` and `test do` Inside `def install` and `test do`, don’t use these `on_*` methods. Instead, use `if` statements and the following conditionals: * `OS.mac?` and `OS.linux?` return `true` or `false` based on the OS * `Hardware::CPU.intel?` and `Hardware::CPU.arm?` return `true` or `false` based on the arch * `MacOS.version` returns the current macOS version. Use `==`, `<=` or `>=` to compare to symbols corresponding to macOS versions (e.g. `if MacOS.version >= :mojave`) See [`rust`](https://github.com/Homebrew/homebrew-core/blob/fe831237a7c24033a48f588a1578ba54f953f922/Formula/rust.rb#L72) for an example. ### `livecheck` blocks When `brew livecheck` is unable to identify versions for a formula, we can control its behavior using a `livecheck` block. Here is a simple example to check a page for links containing a filename like `example-1.2.tar.gz`: ``` livecheck do url "https://www.example.com/downloads/" regex(/href=.*?example[._-]v?(\d+(?:\.\d+)+)\.t/i) end ``` For `url`/`regex` guidelines and additional `livecheck` block examples, refer to the [`brew livecheck` documentation](brew-livecheck). For more technical information on the methods used in a `livecheck` block, please refer to the [`Livecheck` class documentation](https://rubydoc.brew.sh/Livecheck.html). ### Unstable versions (`head`) Formulae can specify an alternate download for the upstream project’s [`head`](https://rubydoc.brew.sh/Formula#head-class_method) (`master`/`trunk`). #### `head` [`head`](https://rubydoc.brew.sh/Formula#head-class_method) URLs (activated by passing `--HEAD`) build the development cutting edge. Specifying it is easy: ``` class Foo < Formula head "https://github.com/mxcl/lastfm-cocoa.git" end ``` Homebrew understands `git`, `svn`, and `hg` URLs, and has a way to specify `cvs` repositories as a URL as well. 
You can test whether the [`head`](https://rubydoc.brew.sh/Formula#head-class_method) is being built with `build.head?`. To use a specific commit, tag, or branch from a repository, specify [`head`](https://rubydoc.brew.sh/Formula#head-class_method) with the `:tag` and `:revision`, `:revision`, or `:branch` option, like so: ``` class Foo < Formula head "https://github.com/some/package.git", revision: "090930930295adslfknsdfsdaffnasd13" # or branch: "main" (the default is "master") # or tag: "1_0_release", revision: "090930930295adslfknsdfsdaffnasd13" end ``` ### Compiler selection Sometimes a package fails to build when using a certain compiler. Since recent [Xcode versions](xcode) no longer include a GCC compiler we cannot simply force the use of GCC. Instead, the correct way to declare this is the [`fails_with`](https://rubydoc.brew.sh/Formula#fails_with-class_method) DSL method. A properly constructed [`fails_with`](https://rubydoc.brew.sh/Formula#fails_with-class_method) block documents the latest compiler build version known to cause compilation to fail, and the cause of the failure. For example: ``` fails_with :clang do build 211 cause "Miscompilation resulting in segfault on queries" end ``` `build` takes a Fixnum (an integer; you can find this number in your `brew --config` output). `cause` takes a String, and the use of heredocs is encouraged to improve readability and allow for more comprehensive documentation. [`fails_with`](https://rubydoc.brew.sh/Formula#fails_with-class_method) declarations can be used with any of `:gcc`, `:llvm`, and `:clang`. Homebrew will use this information to select a working compiler (if one is available). ### Specifying the download strategy explicitly To use one of Homebrew’s built-in download strategies, specify the `:using =>` flag on a [`url`](https://rubydoc.brew.sh/Formula#url-class_method) or [`head`](https://rubydoc.brew.sh/Formula#head-class_method). For example: ``` class Python3 < Formula homepage "https://www.python.org/" url "https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tar.xz" sha256 "b5b3963533768d5fc325a4d7a6bd6f666726002d696f1d399ec06b043ea996b8" head "https://hg.python.org/cpython", :using => :hg ``` Homebrew offers anonymous download strategies. | `:using` value | download strategy | | --- | --- | | `:bzr` | `BazaarDownloadStrategy` | | `:curl` | `CurlDownloadStrategy` | | `:cvs` | `CVSDownloadStrategy` | | `:fossil` | `FossilDownloadStrategy` | | `:git` | `GitDownloadStrategy` | | `:hg` | `MercurialDownloadStrategy` | | `:nounzip` | `NoUnzipCurlDownloadStrategy` | | `:post` | `CurlPostDownloadStrategy` | | `:svn` | `SubversionDownloadStrategy` | If you need more control over the way files are downloaded and staged, you can create a custom download strategy and specify it using the [`url`](https://rubydoc.brew.sh/Formula#url-class_method) method’s `:using` option: ``` class MyDownloadStrategy < SomeHomebrewDownloadStrategy def fetch(timeout: nil, **options) opoo "Unhandled options in #{self.class}#fetch: #{options.keys.join(", ")}" unless options.empty? # downloads output to `temporary_path` end end class Foo < Formula url "something", :using => MyDownloadStrategy end ``` ### Just moving some files When your code in the install function is run, the current working directory is set to the extracted tarball. 
So it is easy to just move some files: ``` prefix.install "file1", "file2" ``` Or everything: ``` prefix.install Dir["output/*"] ``` Generally we’d rather you were specific about what files or directories need to be installed rather than installing everything. #### Variables for directory locations | Name | Default | Example | | --- | --- | --- | | **`HOMEBREW_PREFIX`** | `/usr/local` | | | **`prefix`** | `#{HOMEBREW_PREFIX}/Cellar/#{name}/#{version}` | `/usr/local/Cellar/foo/0.1` | | **`opt_prefix`** | `#{HOMEBREW_PREFIX}/opt/#{name}` | `/usr/local/opt/foo` | | **`bin`** | `#{prefix}/bin` | `/usr/local/Cellar/foo/0.1/bin` | | **`doc`** | `#{prefix}/share/doc/#{name}` | `/usr/local/Cellar/foo/0.1/share/doc/foo` | | **`include`** | `#{prefix}/include` | `/usr/local/Cellar/foo/0.1/include` | | **`info`** | `#{prefix}/share/info` | `/usr/local/Cellar/foo/0.1/share/info` | | **`lib`** | `#{prefix}/lib` | `/usr/local/Cellar/foo/0.1/lib` | | **`libexec`** | `#{prefix}/libexec` | `/usr/local/Cellar/foo/0.1/libexec` | | **`man`** | `#{prefix}/share/man` | `/usr/local/Cellar/foo/0.1/share/man` | | **`man[1-8]`** | `#{prefix}/share/man/man[1-8]` | `/usr/local/Cellar/foo/0.1/share/man/man[1-8]` | | **`sbin`** | `#{prefix}/sbin` | `/usr/local/Cellar/foo/0.1/sbin` | | **`share`** | `#{prefix}/share` | `/usr/local/Cellar/foo/0.1/share` | | **`pkgshare`** | `#{prefix}/share/#{name}` | `/usr/local/Cellar/foo/0.1/share/foo` | | **`elisp`** | `#{prefix}/share/emacs/site-lisp/#{name}` | `/usr/local/Cellar/foo/0.1/share/emacs/site-lisp/foo` | | **`frameworks`** | `#{prefix}/Frameworks` | `/usr/local/Cellar/foo/0.1/Frameworks` | | **`kext_prefix`** | `#{prefix}/Library/Extensions` | `/usr/local/Cellar/foo/0.1/Library/Extensions` | | **`zsh_function`** | `#{prefix}/share/zsh/site-functions` | `/usr/local/Cellar/foo/0.1/share/zsh/site-functions` | | **`fish_function`** | `#{prefix}/share/fish/vendor_functions` | `/usr/local/Cellar/foo/0.1/share/fish/vendor_functions` | | **`bash_completion`** | `#{prefix}/etc/bash_completion.d` | `/usr/local/Cellar/foo/0.1/etc/bash_completion.d` | | **`zsh_completion`** | `#{prefix}/share/zsh/site-functions` | `/usr/local/Cellar/foo/0.1/share/zsh/site-functions` | | **`fish_completion`** | `#{prefix}/share/fish/vendor_completions.d` | `/usr/local/Cellar/foo/0.1/share/fish/vendor_completions.d` | | **`etc`** | `#{HOMEBREW_PREFIX}/etc` | `/usr/local/etc` | | **`pkgetc`** | `#{HOMEBREW_PREFIX}/etc/#{name}` | `/usr/local/etc/foo` | | **`var`** | `#{HOMEBREW_PREFIX}/var` | `/usr/local/var` | | **`buildpath`** | A temporary directory somewhere on your system | `/private/tmp/[formula-name]-0q2b/[formula-name]` | These can be used, for instance, in code such as ``` bin.install Dir["output/*"] ``` to move binaries into their correct location into the Cellar, and ``` man.mkpath ``` to create the directory structure for the manual page location. To install man pages into specific locations, use `man1.install "foo.1", "bar.1"`, `man2.install "foo.2"`, etc. Note that in the context of Homebrew, [`libexec`](https://rubydoc.brew.sh/Formula#libexec-instance_method) is reserved for private use by the formula and therefore is not symlinked into `HOMEBREW_PREFIX`. ### Adding optional steps **Note:** [`option`](https://rubydoc.brew.sh/Formula#option-class_method)s are not allowed in Homebrew/homebrew-core as they are not tested by CI. If you want to add an [`option`](https://rubydoc.brew.sh/Formula#option-class_method): ``` class Yourformula < Formula ... 
option "with-ham", "Description of the option" option "without-spam", "Another description" depends_on "foo" => :optional # will automatically add a with-foo option ... ``` And then to define the effects the [`option`](https://rubydoc.brew.sh/Formula#option-class_method)s have: ``` if build.with? "ham" # note, no "with" in the option name (it is added by the build.with? method) end if build.without? "ham" # works as you'd expect. True if `--without-ham` was given. end ``` [`option`](https://rubydoc.brew.sh/Formula#option-class_method) names should be prefixed with the words `with` or `without`. For example, an option to run a test suite should be named `--with-test` or `--with-check` rather than `--test`, and an option to enable a shared library `--with-shared` rather than `--shared` or `--enable-shared`. [`option`](https://rubydoc.brew.sh/Formula#option-class_method)s that aren’t `build.with?` or `build.without?` should be deprecated with [`deprecated_option`](https://rubydoc.brew.sh/Formula#deprecated_option-class_method). See [`wget`](https://github.com/Homebrew/homebrew-core/blob/3f762b63c6fbbd49191ffdf58574d7e18937d93f/Formula/wget.rb#L27-L31) for an example. ### File level operations You can use the file utilities provided by Ruby’s [`FileUtils`](https://www.ruby-doc.org/stdlib/libdoc/fileutils/rdoc/index.html). These are included in the [`Formula`](https://rubydoc.brew.sh/Formula) class, so you do not need the `FileUtils.` prefix to use them. When creating symlinks, take special care to ensure they are *relative* symlinks. This makes it easier to create a relocatable bottle. For example, to create a symlink in `bin` to an executable in `libexec`, use ``` bin.install_symlink libexec/"name" ``` instead of: ``` ln_s libexec/"name", bin ``` The symlinks created by [`install_symlink`](https://rubydoc.brew.sh/Pathname#install_symlink-instance_method) are guaranteed to be relative. `ln_s` will only produce a relative symlink when given a relative path. ### Rewriting a script shebang Some formulae install executable scripts written in an interpreted language such as Python or Perl. Homebrew provides a `rewrite_shebang` method to rewrite the shebang of a script. This replaces a script’s original interpreter path with the one the formula depends on. This guarantees that the correct interpreter is used at execution time. This isn’t required if the build system already handles it (e.g. often with `pip` or Perl `ExtUtils::MakeMaker`). For example, the [`icdiff` formula](https://github.com/Homebrew/homebrew-core/blob/7beae5ab57c65249403699b2b0700fbccf14e6cb/Formula/icdiff.rb#L16) uses such utility. Note that it is necessary to include the utility in the formula, for example with Python one must use `include Language::Python::Shebang`. ### Handling files that should persist over formula upgrades For example, Ruby 1.9’s gems should be installed to `var/lib/ruby/` so that gems don’t need to be reinstalled when upgrading Ruby. You can usually do this with symlink trickery, or (ideally) a configure option. Another example would be configuration files that should not be overwritten on package upgrades. If after installation you find that to-be-persisted configuration files are not copied but instead *symlinked* into `/usr/local/etc/` from the Cellar, this can often be rectified by passing an appropriate argument to the package’s configure script. 
That argument will vary depending on a given package’s configure script and/or Makefile, but one example might be: `--sysconfdir=#{etc}` ### Service files There are two ways to add plists and systemd services to a formula, so that [`brew services`](https://github.com/Homebrew/homebrew-services) can pick them up: 1. If the formula already provides a service file, the formula can install it into the prefix like so. ``` prefix.install_symlink "file.plist" => "#{plist_name}.plist" prefix.install_symlink "file.service" => "#{service_name}.service" ``` 2. If the formula does not provide a service you can generate one using the following stanza. ``` service do run bin/"script" end ``` #### Service block methods There are many more options you can set within such a block; the following table lists them all. The only required field in a `service` block is the `run` field to indicate what to run. | Method | Default | macOS | Linux | Description | | --- | --- | --- | --- | --- | | `run` | - | yes | yes | Command to execute, an array with arguments or a path | | `run_type` | `:immediate` | yes | yes | The type of service, `:immediate`, `:interval` or `:cron` | | `keep_alive` | `false` | yes | yes | If the service needs to keep the process running after exit | | `interval` | - | yes | yes | Controls the start interval, required for the `:interval` type | | `cron` | - | yes | yes | Controls the trigger times, required for the `:cron` type | | `launch_only_once` | `false` | yes | yes | If the command should only run once | | `environment_variables` | - | yes | yes | A hash of variables to set | | `working_dir` | - | yes | yes | The directory to operate from | | `root_dir` | - | yes | yes | The directory to use as a chroot for the process | | `input_path` | - | yes | yes | Path to use as input for the process | | `log_path` | - | yes | yes | Path to write stdout to | | `error_log_path` | - | yes | yes | Path to write stderr to | | `restart_delay` | - | yes | yes | The delay before restarting a process | | `process_type` | - | yes | no-op | The type of process to manage, `:background`, `:standard`, `:interactive` or `:adaptive` | | `macos_legacy_timers` | - | yes | no-op | Timers created by launchd jobs are coalesced unless this is set | | `sockets` | - | yes | no-op | A socket that is created as an access point to the service | For services that start and keep running, you can use the default `run_type :immediate` like so: ``` service do run [opt_bin/"beanstalkd", "test"] keep_alive true run_type :immediate # This should be omitted since it's the default end ``` If a service needs to run on an interval, use `run_type :interval` and specify an interval: ``` service do run [opt_bin/"beanstalkd", "test"] run_type :interval interval 500 end ``` If a service needs to run at certain times, use `run_type :cron` and specify a time with the crontab syntax: ``` service do run [opt_bin/"beanstalkd", "test"] run_type :cron cron "5 * * * *" end ``` For environment variables you can specify a hash. For the path there is the helper method `std_service_path_env`. This method will set the path to `#{HOMEBREW_PREFIX}/bin:#{HOMEBREW_PREFIX}/sbin:/usr/bin:/bin:/usr/sbin:/sbin` so the service can find other `brew` commands.
``` service do run opt_bin/"beanstalkd" environment_variables PATH: std_service_path_env end ``` #### KeepAlive options The standard option: keep alive regardless of exit status or circumstances ``` service do run [opt_bin/"beanstalkd", "test"] keep_alive true # or false end ``` Same as above in hash form ``` service do run [opt_bin/"beanstalkd", "test"] keep_alive always: true end ``` Keep alive until the job exits with a non-zero return code ``` service do run [opt_bin/"beanstalkd", "test"] keep_alive successful_exit: true end ``` Keep alive only if the job crashed ``` service do run [opt_bin/"beanstalkd", "test"] keep_alive crashed: true end ``` Keep alive as long as a file exists ``` service do run [opt_bin/"beanstalkd", "test"] keep_alive path: "/some/path" end ``` #### Socket format The `sockets` method accepts a formatted socket definition as `<type>://<host>:<port>`. * `type`: `udp` or `tcp` * `host`: The host to run the socket on. For example `0.0.0.0` * `port`: The port the socket should listen on. Please note that sockets will be accessible on IPv4 and IPv6 addresses by default. ### Using environment variables Homebrew has multiple levels of environment variable filtering which affects variables available to formulae. Firstly, the overall environment in which Homebrew runs is filtered to avoid environment contamination breaking from-source builds (<https://github.com/Homebrew/brew/issues/932>). In particular, this process filters all but the given whitelisted variables, but allows environment variables prefixed with `HOMEBREW_`. The specific implementation can be seen in [`bin/brew`](https://github.com/Homebrew/brew/blob/HEAD/bin/brew). The second level of filtering removes sensitive environment variables (such as credentials like keys, passwords or tokens) to avoid malicious subprocesses obtaining them (<https://github.com/Homebrew/brew/pull/2524>). This has the effect of preventing any such variables from reaching a formula’s Ruby code as they are filtered before it is called. The specific implementation can be seen in the [`ENV.clear_sensitive_environment!` method](https://github.com/Homebrew/brew/blob/HEAD/Library/Homebrew/extend/ENV.rb). You can set environment variables in a formula’s `install` method using `ENV["VARIABLE_NAME"] = "VALUE"`. An example can be seen in [the `gh` formula](https://github.com/Homebrew/homebrew-core/blob/fd9ad29f8e3ca9476f838ebb13794ddb7dafba00/Formula/gh.rb#L22). Environment variables can also be set temporarily using the `with_env` method; any variables defined in the call to that method will be restored to their original values at the end of the block. An example can be seen in [the `csound` formula](https://github.com/Homebrew/homebrew-core/blob/c3feaff8cdb578331385676620c865796cfc3388/Formula/csound.rb#L155-L157). In summary, environment variables used by a formula need to conform to these filtering rules in order to be available. ### Deprecating and disabling a formula See our [Deprecating, Disabling, and Removing Formulae](deprecating-disabling-and-removing-formulae) documentation for more information about how and when to deprecate or disable a formula. Updating formulae ----------------- Eventually a new version of the software will be released. In this case you should update the [`url`](https://rubydoc.brew.sh/Formula#url-class_method) and [`sha256`](https://rubydoc.brew.sh/Formula#sha256%3D-class_method).
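For a simple formula this is often just a two-line change. A hedged sketch with placeholder values (the URL is hypothetical and the checksum is deliberately fake):

```
url "https://example.com/foo-0.2.tar.gz" # the new release tarball
# Recompute the checksum with: shasum -a 256 foo-0.2.tar.gz
sha256 "0000000000000000000000000000000000000000000000000000000000000000"
```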
To automate this, you can use: ``` brew bump-formula-pr foo ``` If a [`revision`](https://rubydoc.brew.sh/Formula#revision%3D-class_method) line exists outside any `bottle do` block it should be removed. Leave the `bottle do ... end` block as-is; our CI system will update it when we pull your change. Check if the formula you are updating is a dependency for any other formulae by running `brew uses <formula>`. If it is, run `brew reinstall` for all of its dependents after the updated formula is installed and verify they work correctly. Style guide ----------- Homebrew wants to maintain a consistent Ruby style across all formulae, mostly based on the [Ruby Style Guide](https://github.com/rubocop-hq/ruby-style-guide#the-ruby-style-guide). Other formulae may not have been updated to match this guide yet but all new ones should. Also: * The order of methods in a formula should be consistent with other formulae (e.g.: `def install` goes before `def post_install`). * An empty line is required before the `__END__` line. Troubleshooting for people writing new formulae ----------------------------------------------- ### Version detection fails Homebrew tries to automatically determine the [`version`](https://rubydoc.brew.sh/Formula#version-class_method) from the [`url`](https://rubydoc.brew.sh/Formula#url-class_method) to avoid duplication. If the tarball has an unusual name you may need to manually assign the [`version`](https://rubydoc.brew.sh/Formula#version-class_method). ### Bad makefiles Not all projects have makefiles that will run in parallel so try to deparallelize by adding these lines to the `install` method: ``` ENV.deparallelize system "make" # separate make and make install steps system "make", "install" ``` If that fixes it, please open an [issue](https://github.com/Homebrew/homebrew-core/issues) so that we can fix it for everyone. ### Still won’t work? Check out what MacPorts and Fink do: ``` brew search --macports foo brew search --fink foo ``` Superenv notes -------------- `superenv` is our “super environment” that isolates builds by removing `/usr/local/bin` and all user `PATH`s that are not essential for the build. It does this because user `PATH`s are often full of stuff that breaks builds. `superenv` also removes bad flags from the commands passed to `clang`/`gcc` and injects others (for example all [`keg_only`](https://rubydoc.brew.sh/Formula#keg_only-class_method) dependencies are added to the `-I` and `-L` flags). Fortran ------- Some software requires a Fortran compiler. This can be declared by adding `depends_on "gcc"` to a formula. MPI --- Formulae requiring MPI should use [OpenMPI](https://www.open-mpi.org/) by adding `depends_on "open-mpi"` to the formula, rather than [MPICH](https://www.mpich.org/). These packages have conflicts and provide the same standardised interfaces. Choosing a default implementation and requiring it to be adopted allows software to link against multiple libraries that rely on MPI without creating unanticipated incompatibilities due to differing MPI runtimes. Linear algebra libraries ------------------------ By default packages that require BLAS/LAPACK linear algebra interfaces should link to [OpenBLAS](https://www.openblas.net/) using `depends_on "openblas"` and passing `-DBLA_VENDOR=OpenBLAS` to CMake (applies to CMake-based formulae only) rather than Apple’s Accelerate framework or the reference LAPACK implementation. Apple’s implementation of BLAS/LAPACK is outdated and may introduce hard-to-debug problems.
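As an illustration, here is a minimal sketch of what that looks like in a CMake-based formula; the `Foo` class and source layout are hypothetical:

```
class Foo < Formula
  # ...
  depends_on "cmake" => :build
  depends_on "openblas"

  def install
    # Ask CMake's FindBLAS/FindLAPACK to select OpenBLAS instead of Accelerate
    system "cmake", "-S", ".", "-B", "build",
                    "-DBLA_VENDOR=OpenBLAS",
                    *std_cmake_args
    system "cmake", "--build", "build"
    system "cmake", "--install", "build"
  end
end
```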
The reference `lapack` formula is fine, although it is not actively maintained or tuned. For this reason, formulae needing BLAS/LAPACK should link with OpenBLAS. How to start over (reset to upstream `master`) ---------------------------------------------- Have you created a real mess in Git which stops you from creating a commit you want to submit to us? You might want to consider starting again from scratch. Your changes can be reset to the Homebrew `master` branch by running: ``` git checkout -f master git reset --hard origin/master ```
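If the mess is confined to a single file, a gentler alternative (plain Git usage, shown here with a hypothetical formula path) is to restore just that file from `master`:

```
git checkout master -- Formula/foo.rb
```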
homebrew Python Python ====== This page describes how Python is handled in Homebrew for users. See [Python for Formula Authors](python-for-formula-authors) for advice on writing formulae to install packages written in Python. Homebrew should work with any [CPython](https://stackoverflow.com/questions/2324208/is-there-any-difference-between-cpython-and-python) and defaults to the macOS system Python. Homebrew provides formulae to brew Python 3.y. A `python@2` formula was provided until the end of 2019, at which point it was removed due to the Python 2 deprecation. **Important:** If you choose to use a Python which isn’t either of these two (system Python or brewed Python), the Homebrew team cannot support any breakage that may occur. Python 3.y ---------- Homebrew provides formulae for maintained releases of Python 3.y (`python@3.y`). **Important:** Python may be upgraded to a newer version at any time. Consider using a version manager such as `pyenv` if you require stability of minor or patch versions for virtual environments. The executables are organised as follows: * `python3` points to Homebrew’s Python 3.y (if installed) * `pip3` points to Homebrew’s Python 3.y’s pip (if installed) Unversioned symlinks for `python`, `python-config`, `pip` etc. are installed here: ``` $(brew --prefix)/opt/python/libexec/bin ``` Setuptools, Pip, etc. --------------------- The Python formulae install [pip](https://pip.pypa.io/) (as `pip3`) and [Setuptools](https://pypi.org/project/setuptools/). Setuptools can be updated via pip3, without having to re-brew Python: ``` python3 -m pip install --upgrade setuptools ``` Similarly, pip3 can be used to upgrade itself via: ``` python3 -m pip install --upgrade pip ``` `site-packages` and the `PYTHONPATH` ------------------------------------- `site-packages` is a directory that contains Python modules, including bindings installed by other formulae. Homebrew creates it here: ``` $(brew --prefix)/lib/pythonX.Y/site-packages ``` So, for Python 3.y.z, you’ll find it at `/usr/local/lib/python3.y/site-packages`. Python 3.y also searches for modules in: * `/Library/Python/3.y/site-packages` * `~/Library/Python/3.y/lib/python/site-packages` Homebrew’s `site-packages` directory is first created (1) once any Homebrew formulae with Python bindings are installed, or (2) upon `brew install python`. ### Why here? The reasoning for this location is to preserve your modules between (minor) upgrades or re-installations of Python. Additionally, Homebrew has a strict policy never to write stuff outside of the `brew --prefix`, so we don’t spam your system. Homebrew-provided Python bindings --------------------------------- Some formulae provide Python bindings. **Warning!** Python may crash (see [Common Issues](common-issues)) when you `import <module>` from a brewed Python if you ran `brew install <formula_with_python_bindings>` against the system Python. If you decide to switch to the brewed Python, then reinstall all formulae with Python bindings (e.g. `pyside`, `wxwidgets`, `pyqt`, `pygobject3`, `opencv`, `vtk` and `boost-python`). Policy for non-brewed Python bindings ------------------------------------- These should be installed via `pip install <package>`. To discover packages, you can use `pip search` or <https://pypi.org>. **Note:** macOS’s system Python does not provide `pip`. Follow the [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it for your system Python if you would like it.
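For example, to install a non-brewed package against the brewed Python (the package name is purely illustrative):

```
python3 -m pip install requests
```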
Brewed Python modules --------------------- For brewed Python, modules installed with `pip3` or `python3 setup.py install` will be installed to the `$(brew --prefix)/lib/pythonX.Y/site-packages` directory (explained above). Executable Python scripts will be in `$(brew --prefix)/bin`. Since the system Python may not know which compiler flags to set when building bindings for software installed by Homebrew, you may need to run: ``` CFLAGS="-I$(brew --prefix)/include" LDFLAGS="-L$(brew --prefix)/lib" pip install <package> ``` Virtualenv ---------- **Warning!** When you `brew install` formulae that provide Python bindings, you should **not be in an active virtual environment**. Activate the virtualenv *after* you’ve brewed, or brew in a fresh terminal window. This will ensure Python modules are installed into Homebrew’s `site-packages` and *not* into that of the virtual environment. Virtualenv has a `--system-site-packages` switch to allow “global” (i.e. Homebrew’s) `site-packages` to be accessible from within the virtualenv. Why is Homebrew’s Python being installed as a dependency? --------------------------------------------------------- Formulae that declare an unconditional dependency on the `python` formula are bottled against Homebrew’s Python 3.y and require it to be installed. homebrew Anonymous Aggregate User Behaviour Analytics Anonymous Aggregate User Behaviour Analytics ============================================ Homebrew gathers anonymous aggregate user behaviour analytics using Google Analytics. You will be notified the first time you run `brew update` or install Homebrew. Analytics are not enabled until after this notice is shown, to ensure that you can [opt out](analytics#opting-out) without ever sending analytics data. Why? ---- Homebrew is provided free of charge and run entirely by volunteers in their spare time. As a result, we do not have the resources to do detailed user studies of Homebrew users to decide on how best to design future features and prioritise current work. Anonymous aggregate user analytics allow us to prioritise fixes and features based on how, where and when people use Homebrew. For example: * If a formula is widely used and is failing often it will enable us to prioritise fixing that formula over others. * Collecting the OS version allows us to decide which versions of macOS to prioritise for support and identify build failures that occur only on single versions. How Long? --------- Homebrew’s anonymous user and event data have a 14 month retention period. This is the [lowest possible value for Google Analytics](https://support.google.com/analytics/answer/7667196). What? ----- Homebrew’s analytics record some shared information for every event: * The Homebrew user agent, e.g. `Homebrew/3.3.0 (Macintosh; Intel Mac OS X 10.15.6) curl/7.64.1`. * The [Google Analytics version](https://developers.google.com/analytics/devguides/collection/protocol/v1/parameters#v), i.e. `1`. * The Homebrew [analytics tracking ID](https://developers.google.com/analytics/devguides/collection/protocol/v1/parameters#tid), e.g. `UA-75654628-1`. * A Homebrew [analytics user ID](https://developers.google.com/analytics/devguides/collection/protocol/v1/parameters#cid), e.g. `1BAB65CC-FE7F-4D8C-AB45-B7DB5A6BA9CB`. This is generated by `uuidgen` and stored in the repository-specific Git configuration variable `homebrew.analyticsuuid` within `$(brew --repository)/.git/config`. 
This does not allow us to track individual users, but does enable us to accurately measure user counts versus event counts. The ID is specific to the Homebrew package manager, and does not permit Homebrew maintainers to e.g. track you across websites you visit. * Whether the [Google Analytics anonymous IP setting](https://developers.google.com/analytics/devguides/collection/protocol/v1/parameters#aip) is enabled, i.e. `1`. * The Homebrew [application name](https://developers.google.com/analytics/devguides/collection/protocol/v1/parameters#an), e.g. `Homebrew`. * The Homebrew [application version](https://developers.google.com/analytics/devguides/collection/protocol/v1/parameters#av), e.g. `2.5.0`. * The Homebrew [analytics hit type](https://developers.google.com/analytics/devguides/collection/protocol/v1/parameters#t), e.g. `event`. Homebrew’s analytics record the following different events: * An `event` hit type with the `install` event category and the Homebrew formula from a non-private GitHub tap you install plus any used options (e.g. `wget --HEAD`) as the action, and an event label (e.g. `macOS 10.15, non-/usr/local, CI`) to indicate the OS version, non-standard installation location and invocation as part of CI. This allows us to identify which formulae need fixing and where more easily. * An `event` hit type with the `install_on_request` event category and the Homebrew formula from a non-private GitHub tap you have requested to install (e.g. when explicitly named with a `brew install`) plus options and an event label as above. This allows us to differentiate the formulae that users intend to install from those pulled in as dependencies. * An `event` hit type with the `cask_install` event category and the Homebrew cask from a non-private GitHub tap you install as the action and an event label as above. This allows us to identify which casks need fixing and where more easily. * An `event` hit type with the `BuildError` event category and the Homebrew formula plus options that failed to install as the action and an event label as above, e.g. `wget --HEAD` and `macOS 10.15`. You can also view all the information that is sent by Homebrew’s analytics by setting `HOMEBREW_ANALYTICS_DEBUG=1` in your environment. Please note this will also stop any analytics from being sent. It is impossible for the Homebrew developers to match any particular event to any particular user, even if we had access to the Homebrew analytics user ID (which we do not). An example of the most user-specific information we can see from Google Analytics: *(screenshot omitted)* As far as we can tell it would be impossible for Google to match the randomly generated Homebrew-only analytics user ID to any other Google Analytics user ID. If Google turned evil the only thing they could do would be to lie about anonymising IP addresses and attempt to match users based on IP addresses. When/Where? ----------- Homebrew’s analytics are sent throughout Homebrew’s execution to Google Analytics over HTTPS. Who? ---- Summaries of installation and error analytics are [publicly available](https://formulae.brew.sh/analytics/). A JSON API is also available. The majority of Homebrew maintainers are not granted more detailed analytics data beyond these public resources. How? ---- The code is viewable in [`analytics.rb`](https://github.com/Homebrew/brew/blob/HEAD/Library/Homebrew/utils/analytics.rb) and [`analytics.sh`](https://github.com/Homebrew/brew/blob/HEAD/Library/Homebrew/utils/analytics.sh).
These analytics submissions are made in a separate background process and fail fast to avoid delaying any execution. They will fail immediately and silently if you have no network connection. Opting out ---------- Homebrew analytics helps us maintainers, and leaving it on is appreciated. However, if you want to opt out of Homebrew’s analytics, you can set this variable in your environment: ``` export HOMEBREW_NO_ANALYTICS=1 ``` Alternatively, this will prevent analytics from ever being sent: ``` brew analytics off ``` homebrew C++ Standard Libraries C++ Standard Libraries ====================== There are two C++ standard libraries supported by Apple compilers. The default for 10.9 and later is **libc++**, which is also the default for `clang` on older platforms when building C++11 code. The default for 10.8 and earlier was **libstdc++**, supported by Apple GCC compilers, GNU GCC compilers, and `clang`. This was marked deprecated with a warning during compilation as of Xcode 8. There are subtle incompatibilities between several of the C++ standard libraries, so Homebrew will refuse to install software if a dependency was built with an incompatible C++ library. It’s recommended that you install the dependency tree using a compatible compiler. **If you’ve upgraded to 10.9 or later from an earlier version:** Because the default C++ standard library is now libc++, you may not be able to build software using dependencies that you built on 10.8 or earlier. If you’re reading this page because you were directed here by a build error, you can most likely fix the issue if you reinstall all the dependencies of the package you’re trying to build. Example install using GCC 7: ``` brew install gcc@7 brew install --cc=gcc-7 <formula> ``` homebrew Diagram Guidelines Diagram Guidelines ================== Preferred file format --------------------- For complex diagrams, use the `.drawio.svg` format. Files with the `.drawio.svg` extension are SVG files with embedded [draw.io](https://www.diagrams.net/) source code. Using that format lends itself to a developer-friendly workflow: it is valid SVG, plays well with `git diff` and can be edited in lock-step using various online and offline flavours of draw.io. If you use VS Code, you can use an [extension](https://marketplace.visualstudio.com/items?itemName=hediet.vscode-drawio) for draw.io integration. Files in the `.drawio.svg` format can be processed offline. Embedding a diagram into Markdown --------------------------------- To embed a `.drawio.svg` file into Markdown, use the same syntax as for any image. Example: `![My diagram](my-diagram.drawio.svg)` Mind that GitHub doesn’t allow styling in Markdown documents. Where styling is allowed (e.g. in the exported brew.sh version of the documentation), always set a background colour of `white` for the diagram. That’s the colour draw.io assumes, and keeps the diagram easy to read in dark mode without further customization. You can use the CSS selector `img[src$=".drawio.svg"]` for styling.
Example ------- Example for an SVG image embedded into Markdown: ``` ![Example diagram: Managing Pull Requests](/assets/img/docs/managing-pull-requests.drawio.svg) ``` Result: *(rendered diagram omitted)* Example for styling (where allowed): ``` img[src$=".drawio.svg"] { background-color: white; margin-bottom: 20px; padding: 5%; width: 90%; } @media (prefers-color-scheme: dark) { img[src$=".drawio.svg"] { filter: invert(85%); -webkit-filter: invert(85%); } } ``` homebrew How to Create and Maintain a Tap How to Create and Maintain a Tap ================================ [Taps](taps) are external sources of Homebrew formulae, casks and/or external commands. They can be created by anyone to provide their own formulae, casks and/or external commands to any Homebrew user. Creating a tap -------------- A tap is usually a Git repository available online, but you can use anything as long as it’s a protocol that Git understands, or even just a directory with files in it. If hosted on GitHub, we recommend that the repository’s name start with `homebrew-` so the short `brew tap` command can be used. See the manpage for more information on repository naming. The `brew tap-new` command can be used to create a new tap along with some template files. Tap formulae follow the same format as those in Homebrew/homebrew-core, and can be added under either the `Formula` subdirectory, the `HomebrewFormula` subdirectory or the repository’s root. The first available directory is used; other locations will be ignored. We recommend use of subdirectories because it makes the repository organisation easier to grasp, and top-level files are not mixed with formulae. See [homebrew/core](https://github.com/Homebrew/homebrew-core) for an example of a tap with a `Formula` subdirectory. Naming your formulae to avoid clashes ------------------------------------- If your formulae have the same name as Homebrew/homebrew-core formulae they cannot be installed side-by-side. If you wish to create a different version of a formula that’s in Homebrew/homebrew-core (e.g. with `option`s) consider giving it a different name e.g. `nginx-full` for a more fully-featured `nginx` formula. This will allow both `nginx` and `nginx-full` to be installed at the same time (assuming one is `keg_only` or the linked files do not clash). ### Installing If it’s on GitHub, users can install any of your formulae with `brew install user/repo/formula`. Homebrew will automatically add your `github.com/user/homebrew-repo` tap before installing the formula. `user/repo/formula` points to the `github.com/user/homebrew-repo/**/formula.rb` file here. If they want to get your tap without installing any formula at the same time, users can add it with the [`brew tap` command](taps). If it’s on GitHub, they can use `brew tap user/repo`, where `user` is your GitHub username and `homebrew-repo` is your repository. If it’s hosted outside of GitHub, they have to use `brew tap user/repo <URL>`, where `user` and `repo` will be used to refer to your tap and `<URL>` is your Git clone URL. Users can then install your formulae either with `brew install foo` if there’s no core formula with the same name, or with `brew install user/repo/foo` to avoid conflicts. Maintaining a tap ----------------- A tap is just a Git repository so you don’t have to do anything specific when making modifications, apart from committing and pushing your changes. ### Updating Once your tap is installed, Homebrew will update it each time a user runs `brew update`.
Outdated formulae will be upgraded when a user runs `brew upgrade`, like core formulae. Casks ----- Casks can also be installed from a tap. Casks can be included in taps with formulae, or in a tap with just casks. Place any cask files you wish to make available in a `Casks` directory at the top level of your tap. See [homebrew/cask](https://github.com/Homebrew/homebrew-cask) for an example of a tap with a `Casks` subdirectory. ### Naming Unlike formulae, casks must have globally unique names to avoid clashes. This can be achieved by e.g. prepending the cask name with your GitHub username: `username-cask-name`. External commands ----------------- You can provide your tap users with custom `brew` commands by adding them in a `cmd` subdirectory. [Read more on external commands](external-commands). See [homebrew/aliases](https://github.com/Homebrew/homebrew-aliases) for an example of a tap with external commands. Official Vendor Taps -------------------- Some upstream software providers like to package their software in their own Homebrew tap. When their software is [eligible for Homebrew/homebrew-core](acceptable-formulae) we prefer to maintain software there for ease of updates, improved discoverability and use of tools such as [formulae.brew.sh](https://formulae.brew.sh). We are not willing to remove software packaged in Homebrew/homebrew-core in favour of an upstream tap. We are not willing to instruct users in our formulae to use your formulae instead. If you have issues with how Homebrew packages your software: please file issues (or, ideally, pull requests) to address these problems. There’s an increasing desire among commercial open source vendors to “maintain control”, e.g. to define exactly which binaries are shipped to users. Not allowing users (or even software distributions) to build from source is antithetical to the values of open source. If you think Homebrew’s perspective on this is annoying: try and see how Debian responds to requests to ship your binaries. homebrew Cask Cookbook Cask Cookbook ============= Each Cask is a Ruby block, beginning with a special header line. The Cask definition itself is always enclosed in a `do … end` block. Example: ``` cask "alfred" do version "2.7.1_387" sha256 "a3738d0513d736918a6d71535ef3d85dd184af267c05698e49ac4c6b48f38e17" url "https://cachefly.alfredapp.com/Alfred_#{version}.zip" name "Alfred" desc "Application launcher and productivity software" homepage "https://www.alfredapp.com/" app "Alfred 2.app" app "Alfred 2.app/Contents/Preferences/Alfred Preferences.app" end ``` The Cask Language Is Declarative -------------------------------- Each Cask contains a series of stanzas (or “fields”) which *declare* how the software is to be obtained and installed. In a declarative language, the author does not need to worry about **order**. As long as all the needed fields are present, Homebrew Cask will figure out what needs to be done at install time. To make maintenance easier, the most-frequently-updated stanzas are usually placed at the top. But that’s a convention, not a rule. Exception: `do` blocks such as `postflight` may enclose a block of pure Ruby code. Lines within that block follow a procedural (order-dependent) paradigm. Conditional Statements ---------------------- ### Efficiency Conditional statements are permitted, but only if they are very efficient.
Tests on the following values are known to be acceptable: | value | examples | | --- | --- | | `MacOS.version` | [coconutbattery.rb](https://github.com/Homebrew/homebrew-cask/blob/a11ee55e8ed8255f7dab77120dfb1fb955789559/Casks/coconutbattery.rb#L2-L16), [yasu.rb](https://github.com/Homebrew/homebrew-cask/blob/21d3f7ac8a4adac0fe474b3d4b020d284eeef88d/Casks/yasu.rb#L2-L23) | ### Version Comparisons Tests against `MacOS.version` may use either symbolic names or version strings with numeric comparison operators: ``` if MacOS.version <= :mojave # symbolic name ``` ``` if MacOS.version <= "10.14" # version string ``` The available symbols for macOS versions are: `:el_capitan`, `:sierra`, `:high_sierra`, `:mojave`, `:catalina` and `:big_sur`. The corresponding numeric version strings should be given as major releases containing a single dot. Note that in the official Homebrew Cask repositories only the symbolic names are allowed. The numeric comparison may only be used for third-party taps. ### Always Fall Through to the Newest Case Conditionals should be constructed so that the default is the newest OS version. When using an `if` statement, test for older versions, and then let the `else` statement hold the latest and greatest. This makes it more likely that the Cask will work without alteration when a new OS is released. Example (from [coconutbattery.rb](https://github.com/Homebrew/homebrew-cask/blob/2c801af44be29fff7f3cb2996455fce5dd95d1cc/Casks/coconutbattery.rb)): ``` if MacOS.version <= :sierra # ... elsif MacOS.version <= :mojave # ... else # ... end ``` ### Switch Between Languages or Regions If a cask is available in multiple languages, you can use the `language` stanza to switch between languages or regions based on the system locale. Arbitrary Ruby Methods ---------------------- In the exceptional case that the Cask DSL is insufficient, it is possible to define arbitrary Ruby variables and methods inside the Cask by creating a `Utils` namespace. Example: ``` cask "myapp" do module Utils def self.arbitrary_method ... end end name "MyApp" version "1.0" sha256 "a32565cdb1673f4071593d4cc9e1c26bc884218b62fef8abc450daa47ba8fa92" url "https://#{Utils.arbitrary_method}" homepage "https://www.example.com/" ... end ``` This should be used sparingly: any method which is needed by two or more Casks should instead be rolled into the core. Care must also be taken that such methods be very efficient. Variables and methods should not be defined outside the `Utils` namespace, as they may collide with Homebrew Cask internals. Header Line Details ------------------- The first non-comment line in a Cask follows the form: ``` cask "<cask-token>" do ``` [`<cask-token>`](#token-reference) should match the Cask filename, without the `.rb` extension, enclosed in double quotes. There are currently some arbitrary limitations on Cask tokens which are in the process of being removed. GitHub Actions will catch any errors during the transition. Stanza order ------------ Having a common order for stanzas makes Casks easier to update and parse. Below is the complete stanza sequence (no Cask will have all stanzas). The empty lines shown here are also important, as they help to visually delineate information.
``` version sha256 language url appcast name desc homepage livecheck auto_updates conflicts_with depends_on container suite app pkg installer binary manpage colorpicker dictionary font input_method internet_plugin prefpane qlplugin mdimporter screen_saver service audio_unit_plugin vst_plugin vst3_plugin artifact, target: # target: shown here as it is required with `artifact` stage_only preflight postflight uninstall_preflight uninstall_postflight uninstall zap caveats ``` Note that every stanza that has additional parameters (`:symbols` after a `,`) shall have them on separate lines, one per line, in alphabetical order. An exception is `target:` which typically consists of short lines. Stanzas ------- ### Required Stanzas Each of the following stanzas is required for every Cask. | name | multiple occurrences allowed? | value | | --- | --- | --- | | `version` | no | Application version. See [Version Stanza Details](#stanza-version) for more information. | | `sha256` | no | SHA-256 checksum of the file downloaded from `url`, calculated by the command `shasum -a 256 <file>`. Can be suppressed by using the special value `:no_check`. See [Checksum Stanza Details](#stanza-sha256) for more information. | | `url` | no | URL to the `.dmg`/`.zip`/`.tgz`/`.tbz2` file that contains the application. A [comment](#when-url-and-homepage-hostnames-differ-add-a-comment) should be added if the hostnames in the `url` and `homepage` stanzas differ. Block syntax should be used for URLs that change on every visit. See [URL Stanza Details](#stanza-url) for more information. | | `name` | yes | String providing the full and proper name defined by the vendor. See [Name Stanza Details](#stanza-name) for more information. | | `desc` | no | One-line description of the Cask. Shows when running `brew info`. See [Desc Stanza Details](#stanza-desc) for more information. | | `homepage` | no | Application homepage; used for the `brew home` command. | ### At Least One Artifact Stanza Is Also Required Each Cask must declare one or more *artifacts* (i.e. something to install). | name | multiple occurrences allowed? | value | | --- | --- | --- | | `app` | yes | Relative path to an `.app` that should be moved into the `/Applications` folder on installation. See [App Stanza Details](#stanza-app) for more information. | | `pkg` | yes | Relative path to a `.pkg` file containing the distribution. See [Pkg Stanza Details](#stanza-pkg) for more information. | | `binary` | yes | Relative path to a Binary that should be linked into the `$(brew --prefix)/bin` folder (typically `/usr/local/bin`) on installation. See [Binary Stanza Details](#stanza-binary) for more information. | | `colorpicker` | yes | Relative path to a ColorPicker plugin that should be moved into the `~/Library/ColorPickers` folder on installation. | | `dictionary` | yes | Relative path to a Dictionary that should be moved into the `~/Library/Dictionaries` folder on installation. | | `font` | yes | Relative path to a Font that should be moved into the `~/Library/Fonts` folder on installation. | | `input_method` | yes | Relative path to an Input Method that should be moved into the `~/Library/Input Methods` folder on installation. | | `internet_plugin` | yes | Relative path to an Internet Plugin that should be moved into the `~/Library/Internet Plug-Ins` folder on installation. | | `manpage` | yes | Relative path to a Man Page that should be linked into the respective man page folder on installation, e.g. `/usr/local/share/man/man3` for `my_app.3`.
| | `prefpane` | yes | Relative path to a Preference Pane that should be moved into the `~/Library/PreferencePanes` folder on installation. | | `qlplugin` | yes | Relative path to a QuickLook Plugin that should be moved into the `~/Library/QuickLook` folder on installation. | | `mdimporter` | yes | Relative path to a Spotlight metadata importer that should be moved into the `~/Library/Spotlight` folder on installation. | | `screen_saver` | yes | Relative path to a Screen Saver that should be moved into the `~/Library/Screen Savers` folder on installation. | | `service` | yes | Relative path to a Service that should be moved into the `~/Library/Services` folder on installation. | | `audio_unit_plugin` | yes | Relative path to an Audio Unit plugin that should be moved into the `~/Library/Audio/Components` folder on installation. | | `vst_plugin` | yes | Relative path to a VST Plugin that should be moved into the `~/Library/Audio/VST` folder on installation. | | `vst3_plugin` | yes | Relative path to a VST3 Plugin that should be moved into the `~/Library/Audio/VST3` folder on installation. | | `suite` | yes | Relative path to a containing directory that should be moved into the `/Applications` folder on installation. See [Suite Stanza Details](#stanza-suite) for more information. | | `artifact` | yes | Relative path to an arbitrary path that should be moved on installation. Must provide an absolute path as a `target` (example [alcatraz.rb](https://github.com/Homebrew/homebrew-cask/blob/312ae841f1f1b2ec07f4d88b7dfdd7fbdf8d4f94/Casks/alcatraz.rb#L12)). This is only for unusual cases. The `app` stanza is strongly preferred when moving `.app` bundles. | | `installer` | yes | Describes an executable which must be run to complete the installation. See [Installer Stanza Details](#stanza-installer) for more information. | | `stage_only` | no | `true`. Assert that the Cask contains no activatable artifacts. | ### Optional Stanzas | name | multiple occurrences allowed? | value | | --- | --- | --- | | `uninstall` | yes | Procedures to uninstall a Cask. Optional unless the `pkg` stanza is used. See [Uninstall Stanza Details](#stanza-uninstall) for more information. | | `zap` | yes | Additional procedures for a more complete uninstall, including user files and shared resources. See [Zap Stanza Details](#stanza-zap) for more information. | | `appcast` | no | URL providing an appcast feed to find updates for this Cask. See [Appcast Stanza Details](#stanza-appcast) for more information. | | `depends_on` | yes | List of dependencies and requirements for this Cask. See [Depends\_on Stanza Details](#stanza-depends_on) for more information. | | `conflicts_with` | yes | List of conflicts with this Cask (*not yet functional*). See [Conflicts\_with Stanza Details](#stanza-conflicts_with) for more information. | | `caveats` | yes | String or Ruby block providing the user with Cask-specific information at install time. See [Caveats Stanza Details](#stanza-caveats) for more information. | | `livecheck` | no | Ruby block describing how to find updates for this Cask. See [Livecheck Stanza Details](#stanza-livecheck) for more information. | | `preflight` | yes | Ruby block containing preflight install operations (needed only in very rare cases). | | `postflight` | yes | Ruby block containing postflight install operations. See [Postflight Stanza Details](#stanza-flight) for more information. | | `uninstall_preflight` | yes | Ruby block containing preflight uninstall operations (needed only in very rare cases).
| | `uninstall_postflight` | yes | Ruby block containing postflight uninstall operations. | | `language` | required | Ruby block, called with language code parameters, containing other stanzas and/or a return value. See [Language Stanza Details](#stanza-language) for more information. | | `container nested:` | no | Relative path to an inner container that must be extracted before moving on with the installation. This allows us to support dmg inside tar, zip inside dmg, etc. | | `container type:` | no | Symbol to override container-type autodetect. May be one of: `:air`, `:bz2`, `:cab`, `:dmg`, `:generic_unar`, `:gzip`, `:otf`, `:pkg`, `:rar`, `:seven_zip`, `:sit`, `:tar`, `:ttf`, `:xar`, `:zip`, `:naked`. (Example: [parse.rb](https://github.com/Homebrew/homebrew-cask/blob/312ae841f1f1b2ec07f4d88b7dfdd7fbdf8d4f94/Casks/parse.rb#L11)) | | `auto_updates` | no | `true`. Assert the Cask artifacts auto-update. Use if `Check for Updates…` or similar is present in the app menu, but not if it only opens a webpage and does not do the download and installation for you. | Stanza descriptions ------------------- ### Stanza: `app` In the simple case of a string argument to `app`, the source file is moved to the target `/Applications` directory. For example: ``` app "Alfred 2.app" ``` by default moves the source to: ``` /Applications/Alfred 2.app ``` #### Renaming the Target You can rename the target which appears in your `/Applications` directory by adding a `target:` key to `app`. Example (from [scala-ide.rb](https://github.com/Homebrew/homebrew-cask/blob/312ae841f1f1b2ec07f4d88b7dfdd7fbdf8d4f94/Casks/scala-ide.rb#L21)): ``` app "eclipse/Eclipse.app", target: "Scala IDE.app" ``` #### target: May Contain an Absolute Path If `target:` has a leading slash, it is interpreted as an absolute path. The containing directory for the absolute path will be created if it does not already exist. Example (from [manopen.rb](https://github.com/Homebrew/homebrew-cask/blob/312ae841f1f1b2ec07f4d88b7dfdd7fbdf8d4f94/Casks/manopen.rb#L12)): ``` artifact "openman.1", target: "/usr/local/share/man/man1/openman.1" ``` #### target: Works on Most Artifact Types The `target:` key works similarly for most Cask artifacts, such as `app`, `binary`, `colorpicker`, `dictionary`, `font`, `input_method`, `prefpane`, `qlplugin`, `mdimporter`, `service`, `suite`, and `artifact`. #### target: Should Only Be Used in Select Cases Don’t use `target:` for aesthetic reasons, like removing version numbers (`app "Slack #{version}.app", target: "Slack.app"`). Use it when it makes sense functionally and document your reason clearly in the Cask, using one of the templates: [for clarity](https://github.com/Homebrew/homebrew-cask/blob/312ae841f1f1b2ec07f4d88b7dfdd7fbdf8d4f94/Casks/imagemin.rb#L12); [for consistency](https://github.com/Homebrew/homebrew-cask/blob/d2a6b26df69fc28c4d84d6f5198b2b652c2f414d/Casks/devonthink-pro-office.rb#L16); [to prevent conflicts](https://github.com/Homebrew/homebrew-cask/blob/bd6dc1a64e0bdd35ba0e20789045ea023b0b6aed/Casks/flash-player-debugger.rb#L11); [due to developer suggestion](https://github.com/Homebrew/homebrew-cask/blob/ff3e9c4a6623af44b8a071027e8dcf3f4edfc6d9/Casks/kivy.rb#L12). ### Stanza: `appcast` The value of the `appcast` stanza is a string, holding the URL for an appcast which provides information on future updates. Note: The [`livecheck` stanza](#stanza-livecheck) should be preferred in most cases, as it allows casks to be updated automatically.
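Within a cask the stanza itself is just a URL string; a sketch with a hypothetical GitHub project:

```
appcast "https://github.com/someuser/someproject/releases.atom"
```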
The main casks repo only accepts submissions for stable versions of software (and [documented exceptions](acceptable-casks#but-there-is-no-stable-version)), but it still gets pull requests for unstable versions. By checking the submitted `version` against the contents of an appcast, we can better detect these invalid cases. Example: [`atom.rb`](https://github.com/Homebrew/homebrew-cask/blob/645dbb8228ec2f1f217ed1431e188687aac13ca5/Casks/atom.rb#L7) There are a few different ways the `appcast` can be determined: * If the app is distributed via GitHub releases, the `appcast` will be of the form `https://github.com/<user>/<project_name>/releases.atom`. Example: [`electron.rb`](https://github.com/Homebrew/homebrew-cask/blob/645dbb8228ec2f1f217ed1431e188687aac13ca5/Casks/electron.rb#L7) * If the app is distributed via GitLab releases, the `appcast` will be of the form `https://gitlab.com/<user>/<project_name>/-/tags?format=atom`. Example: [`grafx.rb`](https://github.com/Homebrew/homebrew-cask/blob/b22381902f9da870bb07d21b496558f283dad612/Casks/grafx.rb#L6) * The popular update framework [Sparkle](https://sparkle-project.org/) generally uses the `SUFeedURL` property in `Contents/Info.plist` inside `.app` bundles. Example: [`glyphs.rb`](https://github.com/Homebrew/homebrew-cask/blob/645dbb8228ec2f1f217ed1431e188687aac13ca5/Casks/glyphs.rb#L6) * SourceForge projects follow the form `https://sourceforge.net/projects/<project_name>/rss`. A more specific page can be used as needed, pointing to a specific directory structure: `https://sourceforge.net/projects/<project_name>/rss?path=/<path_here>`. Example: [`seashore.rb`](https://github.com/Homebrew/homebrew-cask/blob/645dbb8228ec2f1f217ed1431e188687aac13ca5/Casks/seashore.rb#L6) * An appcast can be any URL hosted by the app’s developer that changes every time a new release is out or that contains the version number of the current release (e.g. a download HTML page). Webpages that only change on new version releases are preferred, as are sites that do not contain previous version strings (i.e. avoid changelog pages if the download page contains the current version number but not older ones). Example: [`razorsql.rb`](https://github.com/Homebrew/homebrew-cask/blob/645dbb8228ec2f1f217ed1431e188687aac13ca5/Casks/razorsql.rb#L6) The [`find-appcast`](https://github.com/Homebrew/homebrew-cask/blob/HEAD/developer/bin/find-appcast) script is able to identify some of these, as well as `electron-builder` appcasts which are trickier to find by hand. Run it with `"$(brew --repository homebrew/cask)/developer/bin/find-appcast" '</path/to/software.app>'`. #### Parameters | key | value | | --- | --- | | `must_contain:` | a custom string for `brew audit --appcast <cask>` to check against. | Sometimes a `version` doesn’t match a string on the webpage, in which case we tweak what to search for. Example: if `version` is `6.26.1440` and the appcast’s contents only show `6.24`, the check for “is `version` in the appcast feed” will fail. With `must_contain`, the check is told to “look for this string instead of `version`”. In the example, `must_contain: version.major_minor` is saying “look for `6.24`”, making the check succeed. If no `must_contain` is given, the check considers the `version` string from the beginning until the first character that isn’t alphanumeric or a period. Example: if `version` is `6.26b-14,40`, the check will see `6.26b`.
This is so it covers most cases by default, while still allowing complex `version`s suitable for interpolation on the rest of the cask. Example of using `must_contain`: [`hwsensors.rb`](https://github.com/Homebrew/homebrew-cask/blob/87bc3860f43d5b14d0c38ae8de469d24ee7f5b2f/Casks/hwsensors.rb#L6L7) ### Stanza: `binary` In the simple case of a string argument to `binary`, the source file is linked into the `$(brew --prefix)/bin` directory (typically `/usr/local/bin`) on installation. For example (from [operadriver.rb](https://github.com/Homebrew/homebrew-cask/blob/60531a2812005dd5f17dc92f3ce7419af3c5d019/Casks/operadriver.rb#L11)): ``` binary "operadriver" ``` creates a symlink to: ``` $(brew --prefix)/bin/operadriver ``` from a source file such as: ``` /usr/local/Caskroom/operadriver/0.2.2/operadriver ``` A binary (or multiple) can also be contained in an application bundle: ``` app "Atom.app" binary "#{appdir}/Atom.app/Contents/Resources/app/apm/bin/apm" ``` You can rename the target which appears in your binaries directory by adding a `target:` key to `binary`: ``` binary "#{appdir}/Atom.app/Contents/Resources/app/atom.sh", target: "atom" ``` Behaviour and usage of `target:` is [the same as with `app`](#renaming-the-target). However, for `binary` the select cases don’t apply as rigidly. It’s fine to take extra liberties with `target:` to be consistent with other command-line tools, like [changing case](https://github.com/Homebrew/homebrew-cask/blob/9ad93b833961f1d969505bc6bdb1c2ad4e58a433/Casks/openscad.rb#L12), [removing an extension](https://github.com/Homebrew/homebrew-cask/blob/c443d4f5c6864538efe5bb1ecf662565a5ffb438/Casks/filebot.rb#L13), or [cleaning up the name](https://github.com/Homebrew/homebrew-cask/blob/146917cbcc679648de6b0bccff4e9b43fce0e6c8/Casks/minishift.rb#L13). ### Stanza: `caveats` Sometimes there are particularities with the installation of a piece of software that cannot or should not be handled programmatically by Homebrew Cask. In those instances, `caveats` is the way to inform the user. Information in `caveats` is displayed when a cask is invoked with either `install` or `info`. To avoid flooding users with too many messages (thus desensitising them to the important ones), `caveats` should be used sparingly and exclusively for installation-related matters. If you’re not sure a `caveat` you find pertinent is installation-related or not, ask a maintainer. As a general rule, if your case isn’t already covered in our comprehensive [`caveats Mini-DSL`](#caveats-mini-dsl), it’s unlikely to be accepted. #### caveats as a String When `caveats` is a string, it is evaluated at compile time. The following methods are available for interpolation if `caveats` is placed in its customary position at the end of the Cask: | method | description | | --- | --- | | `token` | the Cask token | | `version` | the Cask version | | `homepage` | the Cask homepage | | `caskroom_path` | the containing directory for this Cask, typically `/usr/local/Caskroom/<token>` (only available with block form) | | `staged_path` | the staged location for this Cask, including version number: `/usr/local/Caskroom/<token>/<version>` (only available with block form) | Example: ``` caveats "Using #{token} is hazardous to your health." ``` #### caveats as a Block When `caveats` is a Ruby block, evaluation is deferred until install time. Within a block you may refer to the `@cask` instance variable, and invoke any method available on `@cask`. 
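A minimal sketch of the block form (the message text is purely illustrative):

```
caveats do
  # Evaluated at install time, so block-only methods such as staged_path
  # are available here.
  "#{token} has been staged at #{staged_path}."
end
```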
#### caveats Mini-DSL There is a mini-DSL available within `caveats` blocks. The following methods may be called to generate standard warning messages: | method | description | | --- | --- | | `path_environment_variable "path"` | users should make sure `path` is in their `$PATH` environment variable. | | `zsh_path_helper "path"` | zsh users must take additional steps to make sure `path` is in their `$PATH` environment variable. | | `depends_on_java "version"` | users should make sure they have the specified version of java installed. `version` can be exact (e.g. `6`), a minimum (e.g. `7+`), or omitted (when any version works). | | `logout` | users should log out and log back in to complete installation. | | `reboot` | users should reboot to complete installation. | | `files_in_usr_local` | the Cask installs files to `/usr/local`, which may confuse Homebrew. | | `discontinued` | all software development has been officially discontinued upstream. | | `free_license "web_page"` | users may get an official license to use the software at `web_page`. | | `kext` | users may need to enable their kexts in System Preferences → Security & Privacy → General. | | `unsigned_accessibility` | users will need to re-enable the app on each update in System Preferences → Security & Privacy → Privacy as it is unsigned. | | `license "web_page"` | software has a usage license at `web_page`. | Example: ``` caveats do path_environment_variable "/usr/texbin" end ``` ### Stanza: `conflicts_with` `conflicts_with` is used to declare conflicts that keep a Cask from installing or working correctly. #### conflicts\_with cask The value should be another Cask token. Example use: [`wireshark`](https://github.com/Homebrew/homebrew-cask/blob/903493e09cf33b845e7cf497ecf9cfc9709087ee/Casks/wireshark.rb#L10), which conflicts with `wireshark-chmodbpf`. ``` conflicts_with cask: "wireshark-chmodbpf" ``` #### conflicts\_with formula Note: `conflicts_with formula:` is a stub and is not yet functional. The value should be another formula name. Example use: [`macvim`](https://github.com/Homebrew/homebrew-cask/blob/84b90afd7b571e581f8a48d4bdf9c7bb24ebff3b/Casks/macvim.rb#L10), which conflicts with the `macvim` formula. ``` conflicts_with formula: "macvim" ``` ### Stanza: `depends_on` `depends_on` is used to declare dependencies and requirements for a Cask. `depends_on` is not consulted until `install` is attempted. #### depends\_on cask The value should be another Cask token, needed by the current Cask. Example use: [`cellery`](https://github.com/Homebrew/homebrew-cask/blob/4002df8f6bca93ed6eb40494995fcfa038cf99bf/Casks/cellery.rb#L11) depends on OSXFUSE: ``` depends_on cask: "osxfuse" ``` #### depends\_on formula The value should name a Homebrew Formula needed by the Cask. Example use: some distributions are contained in archive formats such as `7z` which are not supported by stock Apple tools. For these cases, a more capable archive reader may be pulled in at install time by declaring a dependency on the Homebrew Formula `unar`: ``` depends_on formula: "unar" ``` #### depends\_on macos ##### Requiring an Exact macOS Release The value for `depends_on macos:` may be a symbol or an array of symbols, listing the exact compatible macOS releases. 
The available values for macOS releases are: | symbol | corresponding release | | --- | --- | | `:el_capitan` | `10.11` | | `:sierra` | `10.12` | | `:high_sierra` | `10.13` | | `:mojave` | `10.14` | | `:catalina` | `10.15` | | `:big_sur` | `11.0` | | `:monterey` | `12.0` | Only major releases are covered (version numbers containing a single dot). The symbol form is used for readability. The following are all valid ways to enumerate the exact macOS release requirements for a Cask: ``` depends_on macos: :big_sur depends_on macos: [ :catalina, :big_sur, ] ``` ##### Setting a Minimum macOS Release `depends_on macos:` can also accept a string starting with a comparison operator such as `>=`, followed by a macOS release in the form above. The following is a valid expression meaning “at least macOS Big Sur (11.0)”: ``` depends_on macos: ">= :big_sur" ``` A comparison expression cannot be combined with any other form of `depends_on macos:`. #### depends\_on arch The value for `depends_on arch:` may be a symbol or an array of symbols, listing the hardware compatibility requirements for a Cask. The requirement is satisfied at install time if any one of multiple `arch:` values matches the user’s hardware. The available symbols for hardware are: | symbol | meaning | | --- | --- | | `:x86_64` | 64-bit Intel | | `:intel` | 64-bit Intel | | `:arm64` | Apple Silicon | The following are all valid expressions: ``` depends_on arch: :intel depends_on arch: :x86_64 # same meaning as above depends_on arch: [:x86_64] # same meaning as above depends_on arch: :arm64 ``` #### All depends\_on Keys | key | description | | --- | --- | | `formula:` | a Homebrew Formula | | `cask:` | a Cask token | | `macos:` | a symbol, string, array, or comparison expression defining macOS release requirements | | `arch:` | a symbol or array defining hardware requirements | | `java:` | *stub - not yet functional* | ### Stanza: `desc` `desc` accepts a single-line UTF-8 string containing a short description of the software. It’s used to help with searchability and disambiguation, thus it must concisely describe what the software does (or what you can accomplish with it). `desc` is not for app slogans! Vendors’ descriptions tend to be filled with generic adjectives such as “modern” and “lightweight”. Those are meaningless marketing fluff (do you ever see apps proudly describing themselves as outdated and bulky?) which must be deleted. It’s fine to use the information on the software’s website as a starting point, but it will require editing in almost all cases. #### Dos and Don’ts * **Do** start with an uppercase letter. ``` - desc "sound and music editor" + desc "Sound and music editor" ``` * **Do** be brief, i.e. use less than 80 characters. ``` - desc "Sound and music editor which comes with effects, instruments, sounds and all kinds of creative features" + desc "Sound and music editor" ``` * **Do** describe what the software does or is: ``` - desc "Development of musical ideas made easy" + desc "Sound and music editor" ``` * **Do not** include the platform. Casks only work on macOS, so this is redundant information. ``` - desc "Sound and music editor for macOS" + desc "Sound and music editor" ``` * **Do not** include the Cask’s [name](#stanza-name). ``` - desc "Ableton Live is a sound and music editor" + desc "Sound and music editor" ``` * **Do not** include the vendor. This should be added to the Cask’s [name](#stanza-name) instead.
``` - desc "Sound and music editor made by Ableton" + desc "Sound and music editor" ``` * **Do not** add user pronouns. ``` - desc "Edit your music files" + desc "Sound and music editor" ``` * **Do not** use empty marketing jargon. ``` - desc "Beautiful and powerful modern sound and music editor" + desc "Sound and music editor" ``` ### Stanza: `\*flight` #### Evaluation of Blocks is Always Deferred The Ruby blocks defined by `preflight`, `postflight`, `uninstall_preflight`, and `uninstall_postflight` are not evaluated until install time or uninstall time. Within a block, you may refer to the `@cask` instance variable, and invoke any method available on `@cask`. #### \*flight Mini-DSL There is a mini-DSL available within these blocks. The following methods may be called to perform standard tasks: | method | availability | description | | --- | --- | --- | | `set_ownership(paths)` | `preflight`, `postflight`, `uninstall_preflight` | set user and group ownership of `paths`. Example: [`unifi-controller.rb`](https://github.com/Homebrew/homebrew-cask/blob/8a452a41707af6a661049da6254571090fac5418/Casks/unifi-controller.rb#L13) | | `set_permissions(paths, permissions_str)` | `preflight`, `postflight`, `uninstall_preflight` | set permissions in `paths` to `permissions_str`. Example: [`docker-machine.rb`](https://github.com/Homebrew/homebrew-cask/blob/8a452a41707af6a661049da6254571090fac5418/Casks/docker-machine.rb#L16) | `set_ownership(paths)` defaults user ownership to the current user and group ownership to `staff`. These can be changed by passing in extra options: `set_ownership(paths, user: 'user', group: 'group')`. ### Stanza: `installer` This stanza must always be accompanied by [`uninstall`](#stanza-uninstall). The `installer` stanza takes a series of key-value pairs, the first key of which must be `manual:` or `script:`. #### installer manual `installer manual:` takes a single string value, describing a GUI installer which must be run by the user at a later time. The path may be absolute, or relative to the Cask. Example (from [nutstore.rb](https://github.com/Homebrew/homebrew-cask/blob/249ec31048591308e63e50f79dae01d2f933cccf/Casks/nutstore.rb#L9)): ``` installer manual: "Nutstore Installer.app" ``` #### installer script `installer script:` introduces a series of key-value pairs describing a command which will automate completion of the install. **It should never be used for interactive installations.** The form is similar to `uninstall script:`: | key | value | | --- | --- | | `executable:` | path to an install script to be run | | `args:` | array of arguments to the install script | | `input:` | array of lines of input to be sent to `stdin` of the script | | `must_succeed:` | set to `false` if the script is allowed to fail | | `sudo:` | set to `true` if the script needs `sudo` | The path may be absolute, or relative to the Cask. 
Example (from [miniforge.rb](https://github.com/Homebrew/homebrew-cask/blob/ed2033fb3578376c3ee58a2cb459ef96fa6eb37d/Casks/miniforge.rb#L15L18)): ``` installer script: { executable: "Miniforge3-#{version}-MacOSX-x86_64.sh", args: ["-b", "-p", "#{caskroom_path}/base"], } ``` If the `installer script:` does not require any of the key-values, it can point directly to the path of the install script: ``` installer script: "#{staged_path}/install.sh" ``` ### Stanza: `language` The `language` stanza can match [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) language codes, regional identifiers ([ISO 3166-1 Alpha 2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) and script codes ([ISO 15924](https://en.wikipedia.org/wiki/ISO_15924)), or a combination thereof. US English should always be used as the default language: ``` language "zh", "CN" do "zh_CN" end language "de" do "de_DE" end language "en-GB" do "en_GB" end language "en", default: true do "en_US" end ``` Note that the following are not the same: ``` language "en", "GB" do # matches all locales containing "en" or "GB" end language "en-GB" do # matches only locales containing "en" and "GB" end ``` The return value of the matching `language` block can be accessed by simply calling `language`. ``` homepage "https://example.org/#{language}" ``` Examples: [Firefox](https://github.com/Homebrew/homebrew-cask/blob/306b8fbd9502036f1ca742f70c569d8677b62403/Casks/firefox.rb#L4L74), [Battle.net](https://github.com/Homebrew/homebrew-cask/blob/306b8fbd9502036f1ca742f70c569d8677b62403/Casks/battle-net.rb#L5L17) #### Installation To install a cask in a specific language, you can pass the `--language=` option to `brew install`: ``` brew install firefox --language=it ``` ### Stanza: `livecheck` The `livecheck` stanza is used to automatically fetch the latest version of a cask from changelogs, release notes, appcasts, etc. See also: [`brew livecheck` reference](brew-livecheck) Every `livecheck` block must contain a `url`, which can either be a string or a symbol pointing to other URLs in the cask (`:url` or `:homepage`). Additionally, a `livecheck` should specify which `strategy` should be used to extract the version: | `strategy` | Description | | --- | --- | | `:header_match` | extract version from HTTP headers (e.g. `Location` or `Content-Disposition`) | | `:page_match` | extract version from page contents | | `:sparkle` | extract version from Sparkle appcast contents | Here is a basic example, extracting a simple version from a page: ``` livecheck do url "https://example.org/my-app/download" strategy :page_match regex(%r{href=.*?/MyApp-(\d+(?:\.\d+)*)\.zip}i) end ``` If the download URL is present on the homepage, we can use a symbol instead of a string: ``` livecheck do url :homepage strategy :page_match regex(%r{href=.*?/MyApp-(\d+(?:\.\d+)*)\.zip}i) end ``` The `header_match` strategy will try parsing a version from the filename (in the `Content-Disposition` header) and the final URL (in the `Location` header). If that doesn’t work, a `regex` can be specified, e.g.: ``` strategy :header_match regex(/MyApp-(\d+(?:\.\d+)*)\.zip/i) ``` If the version depends on multiple header fields, a block can be specified, e.g.: ``` strategy :header_match do |headers| v = headers["content-disposition"][/MyApp-(\d+(?:\.\d+)*)\.zip/i, 1] id = headers["location"][%r{/(\d+)/download$}i, 1] next if v.blank? || id.blank?
"#{v},#{id}" end ``` Similarly, the `:page_match` strategy can also be used for more complex versions by specifying a block: ``` strategy :page_match do |page| match = page.match(%r{href=.*?/(\d+)/MyApp-(\d+(?:\.\d+)*)\.zip}i) next if match.blank? "#{match[2]},#{match[1]}" end ``` ### Stanza: `name` `name` accepts a UTF-8 string defining the name of the software, including capitalization and punctuation. It is used to help with searchability and disambiguation. Unlike the [token](#token-reference), which is simplified and reduced to a limited set of characters, the `name` stanza can include the proper capitalization, spacing and punctuation to match the official name of the software. For disambiguation purposes, it is recommended to spell out the name of the application, and including the vendor name if necessary. A good example is [`pycharm-ce`](https://github.com/Homebrew/homebrew-cask/blob/fc05c0353aebb28e40db72faba04b82ca832d11a/Casks/pycharm-ce.rb#L6-L7), whose name is spelled out as `Jetbrains PyCharm Community Edition`, even though it is likely never referenced as such anywhere. Additional details about the software can be provided in the [desc](#stanza-desc) stanza. The `name` stanza can be repeated multiple times if there are useful alternative names. The first instance should use the Latin alphabet. For example, see the [`cave-story`](https://github.com/Homebrew/homebrew-cask/blob/0fe48607f5656e4f1de58c6884945378b7e6f960/Casks/cave-story.rb#L7-L9) cask, whose original name does not use the Latin alphabet. ### Stanza: `pkg` This stanza must always be accompanied by [`uninstall`](#stanza-uninstall) The first argument to the `pkg` stanza should be a relative path to the `.pkg` file to be installed. For example: ``` pkg "Unity.pkg" ``` Subsequent arguments to `pkg` are key/value pairs which modify the install process. Currently supported keys are `allow_untrusted:` and `choices:`. #### `pkg allow_untrusted:` `pkg allow_untrusted: true` can be used to install the `.pkg` with an untrusted certificate passing `-allowUntrusted` to `/usr/sbin/installer`. This option is not permitted in official Homebrew Cask taps, it is only provided for use in third-party taps or local Casks. Example ([alinof-timer.rb](https://github.com/Homebrew/homebrew-cask/blob/312ae841f1f1b2ec07f4d88b7dfdd7fbdf8d4f94/Casks/alinof-timer.rb#L10)): ``` pkg "AlinofTimer.pkg", allow_untrusted: true ``` #### `pkg choices:` `pkg choices:` can be used to override `.pkg`’s default install options via `-applyChoiceChangesXML`. It uses a deserialized version of the `choiceChanges` property list (refer to the `CHOICE CHANGES FILE` section of the `installer` manual page by running `man -P 'less --pattern "^CHOICE CHANGES FILE"' installer`). Running the macOS command: ``` installer -showChoicesXML -pkg '/path/to/my.pkg' ``` will output an XML which you can use to extract the `choices:` values, as well as their equivalents to the GUI options. See [this pull request for wireshark-chmodbpf](https://github.com/Homebrew/homebrew-cask/pull/26997) and [this one for wine-staging](https://github.com/Homebrew/homebrew-cask/pull/27937) for some examples of the procedure. 
Example ([wireshark-chmodbpf.rb](https://github.com/Homebrew/homebrew-cask/blob/f95b8a8306b91fe9da7908b842f4a5fa80f7afe0/Casks/wireshark-chmodbpf.rb#L9-L26)): ``` pkg "Wireshark #{version} Intel 64.pkg", choices: [ { "choiceIdentifier" => "wireshark", "choiceAttribute" => "selected", "attributeSetting" => 0, }, { "choiceIdentifier" => "chmodbpf", "choiceAttribute" => "selected", "attributeSetting" => 1, }, { "choiceIdentifier" => "cli", "choiceAttribute" => "selected", "attributeSetting" => 0, }, ] ``` Example ([wine-staging.rb](https://github.com/Homebrew/homebrew-cask/blob/51b65f6a5a25a7f79af4d372e1a0bf1dc3849251/Casks/wine-staging.rb#L11-L18)): ``` pkg "winehq-staging-#{version}.pkg", choices: [ { "choiceIdentifier" => "choice3", "choiceAttribute" => "selected", "attributeSetting" => 1, }, ] ``` ### Stanza: `sha256` #### Calculating the SHA256 The `sha256` value is usually calculated by the command: ``` shasum --algorithm 256 <file> ``` #### Special Value `:no_check` The special value `sha256 :no_check` is used to turn off SHA checking whenever checksumming is impractical due to the upstream configuration. `version :latest` requires `sha256 :no_check`, and this pairing is common. However, `sha256 :no_check` does not require `version :latest`. We use a checksum whenever possible. ### Stanza: `suite` Some distributions provide a suite of multiple applications, or an application with required data, to be installed together in a subdirectory of `/Applications`. For these Casks, use the `suite` stanza to define the directory containing the application suite. Example (from [sketchup.rb](https://github.com/Homebrew/homebrew-cask/blob/312ae841f1f1b2ec07f4d88b7dfdd7fbdf8d4f94/Casks/sketchup.rb#L12)): ``` suite "SketchUp 2016" ``` The value of `suite` is never an `.app` bundle, but a plain directory. ### Stanza: `uninstall` > If you cannot design a working `uninstall` stanza, please submit your cask anyway. The maintainers can help you write an `uninstall` stanza, just ask! > > #### `uninstall pkgutil:` Is The Easiest and Most Useful `pkgutil:` is the easiest and most useful `uninstall` directive. See [Uninstall Key pkgutil:](#uninstall-key-pkgutil). #### `uninstall` Is Required for Casks That Install a pkg or installer manual For most Casks, uninstall actions are determined automatically, and an explicit `uninstall` stanza is not needed. However, a Cask which uses the `pkg` or `installer manual:` stanzas will **not** know how to uninstall correctly unless an `uninstall` stanza is given. So, while the [Cask DSL](#required-stanzas) does not enforce the requirement, it is much better for end-users if every `pkg` and `installer manual:` has a corresponding `uninstall`. The `uninstall` stanza is available for non-`pkg` Casks, and is useful for a few corner cases. However, the documentation below concerns the typical case of using `uninstall` to define procedures for a `pkg`. #### There Are Multiple Uninstall Techniques Since `pkg` installers can do arbitrary things, different techniques are needed to uninstall in each case. You may need to specify one, or several, of the following key/value pairs as arguments to `uninstall`. 
#### Summary of Keys * `early_script:` (string or hash) - like [`script:`](#uninstall-key-script), but runs early (for special cases, best avoided) * [`launchctl:`](#uninstall-key-launchctl) (string or array) - ids of `launchctl` jobs to remove * [`quit:`](#uninstall-key-quit) (string or array) - bundle ids of running applications to quit * [`signal:`](#uninstall-key-signal) (array of arrays) - signal numbers and bundle ids of running applications to send a Unix signal to (used when `quit:` does not work) * [`login_item:`](#uninstall-key-login_item) (string or array) - names of login items to remove * [`kext:`](#uninstall-key-kext) (string or array) - bundle ids of kexts to unload from the system * [`script:`](#uninstall-key-script) (string or hash) - relative path to an uninstall script to be run via sudo; use hash if args are needed + `executable:` - relative path to an uninstall script to be run via sudo (required for hash form) + `args:` - array of arguments to the uninstall script + `input:` - array of lines of input to be sent to `stdin` of the script + `must_succeed:` - set to `false` if the script is allowed to fail + `sudo:` - set to `true` if the script needs `sudo` * [`pkgutil:`](#uninstall-key-pkgutil) (string, regexp or array of strings and regexps) - strings or regexps matching bundle ids of packages to uninstall using `pkgutil` * [`delete:`](#uninstall-key-delete) (string or array) - single-quoted, absolute paths of files or directory trees to remove. `delete:` should only be used as a last resort. `pkgutil:` is strongly preferred. * `rmdir:` (string or array) - single-quoted, absolute paths of directories to remove if empty. Works recursively. * [`trash:`](#uninstall-key-trash) (string or array) - single-quoted, absolute paths of files or directory trees to move to Trash. Each `uninstall` technique is applied according to the order above. The order in which `uninstall` keys appear in the Cask file is ignored. For assistance filling in the right values for `uninstall` keys, there are several helper scripts found under `developer/bin` in the Homebrew Cask repository. Each of these scripts responds to the `-help` option with additional documentation. The easiest way to work out an `uninstall` stanza is on a system where the `pkg` is currently installed and operational. To operate on an uninstalled `pkg` file, see [Working With a pkg File Manually](#working-with-a-pkg-file-manually), below. #### `uninstall` Key `pkgutil:` This is the most useful uninstall key. `pkgutil:` is often sufficient to completely uninstall a `pkg`, and is strongly preferred over `delete:`. IDs for the most recently-installed packages can be listed using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_recent_pkg_ids" ``` `pkgutil:` also accepts a regular expression match against multiple package IDs. The regular expressions are somewhat nonstandard. To test a `pkgutil:` regular expression against currently-installed packages, use the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_pkg_ids_by_regexp" <regular-expression> ``` #### List Files Associated With a pkg Id Once you know the ID for an installed package (above), you can list all files on your system associated with that package ID using the macOS command: ``` pkgutil --files <package.id.goes.here> ``` Listing the associated files can help you assess whether the package included any `launchctl` jobs or kernel extensions (kexts).
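Once the package ID (or a matching regular expression) is known, the stanza itself is short. The following is a minimal sketch; the `com.example.myapp` IDs are hypothetical and would be obtained with the commands above, and the two forms are alternatives rather than something a real Cask would combine:

```
# Remove a single package receipt by its exact ID...
uninstall pkgutil: "com.example.myapp"

# ...or match a family of related package IDs with a regexp.
uninstall pkgutil: /^com\.example\.myapp\./
```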
#### `uninstall` Key `launchctl:` IDs for currently loaded `launchctl` jobs can be listed using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_loaded_launchjob_ids" ``` IDs for all installed `launchctl` jobs can be listed using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_installed_launchjob_ids" ``` #### `uninstall` Key `quit:` Bundle IDs for currently running Applications can be listed using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_running_app_ids" ``` Bundle IDs inside an Application bundle on disk can be listed using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_ids_in_app" '/path/to/application.app' ``` #### `uninstall` Key `signal:` `signal:` should only be needed in the rare case that a process does not respond to `quit:`. Bundle IDs for `signal:` targets may be obtained as for `quit:`. The value for `signal:` is an array-of-arrays, with each cell containing two elements: the desired Unix signal followed by the corresponding bundle ID. The Unix signal may be given in numeric or string form (see the `kill` man page for more details). The elements of the `signal:` array are applied in order, only if there is an existing process associated with the bundle ID, stopping when that process terminates. A bundle ID may be repeated to send more than one signal to the same process. It is better to use the least-severe signals which are sufficient to stop a process. The `KILL` signal in particular can have unwanted side-effects. An example, with commonly-used signals in ascending order of severity: ``` uninstall signal: [ ["TERM", "fr.madrau.switchresx.daemon"], ["QUIT", "fr.madrau.switchresx.daemon"], ["INT", "fr.madrau.switchresx.daemon"], ["HUP", "fr.madrau.switchresx.daemon"], ["KILL", "fr.madrau.switchresx.daemon"], ] ``` Note that when multiple running processes match the given Bundle ID, all matching processes will be signaled. Unlike `quit:` directives, Unix signals originate from the current user, not from the superuser. This is construed as a safety feature, since the superuser is capable of bringing down the system via signals. However, this inconsistency may also be considered a bug, and should be addressed in some fashion in a future version. #### `uninstall` Key `login_item:` Login items associated with an Application bundle on disk can be listed using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_login_items_for_app" '/path/to/application.app' ``` Note that you will likely need to have opened the app at least once for any login items to be present. #### `uninstall` Key `kext:` IDs for currently loaded kernel extensions can be listed using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_loaded_kext_ids" ``` IDs inside a kext bundle you have located on disk can be listed using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_id_in_kext" '/path/to/name.kext' ``` #### `uninstall` Key `script:` `uninstall script:` introduces a series of key-value pairs describing a command which will automate completion of the uninstall.
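For illustration, a minimal sketch using the keys from the summary above (the script name and argument are hypothetical):

```
uninstall script: {
  executable: "uninstall.sh",  # hypothetical script, relative to the Cask
  args:       ["--quiet"],     # hypothetical argument
  sudo:       true,
}
```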
Example (from [gpgtools.rb](#)): ``` uninstall script: { executable: "#{staged_path}/Uninstall.app/Contents/Resources/GPG Suite Uninstaller.app/Contents/Resources/uninstall.sh", sudo: true, } ``` It is important to note that, although `script:` in the above example does attempt to completely uninstall the `pkg`, it should not be used as a replacement for [`pkgutil:`](#uninstall-key-pkgutil), but as a complement when possible. #### `uninstall` Key `delete:` `delete:` should only be used as a last resort, if other `uninstall` methods are insufficient. Arguments to `uninstall delete:` should use the following basic rules: * Basic tilde expansion is performed on paths, i.e. leading `~` is expanded to the home directory. * Paths must be absolute. * Glob expansion is performed using the [standard set of characters](https://en.wikipedia.org/wiki/Glob_(programming)). To remove user-specific files, use the [`zap` stanza](#stanza-zap). #### `uninstall` Key `trash:` `trash:` arguments follow the same rules listed above for `delete:`. #### Working With a pkg File Manually Advanced users may wish to work with a `pkg` file manually, without having the package installed. A list of files which may be installed from a `pkg` can be extracted using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_payload_in_pkg" '/path/to/my.pkg' ``` Candidate application names helpful for determining the name of a Cask may be extracted from a `pkg` file using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_apps_in_pkg" '/path/to/my.pkg' ``` Candidate package IDs which may be useful in a `pkgutil:` key may be extracted from a `pkg` file using the command: ``` "$(brew --repository homebrew/cask)/developer/bin/list_ids_in_pkg" '/path/to/my.pkg' ``` A fully manual method for finding bundle ids in a package file follows: 1. Unpack `/path/to/my.pkg` (replace with your package name) with `pkgutil --expand /path/to/my.pkg /tmp/expanded.unpkg`. 2. The unpacked package is a folder. Bundle ids are contained within files named `PackageInfo`. These files can be found with the command `find /tmp/expanded.unpkg -name PackageInfo`. 3. `PackageInfo` files are XML files, and bundle ids are found within the `identifier` attributes of `<pkg-info>` tags that look like `<pkg-info ... identifier="com.oracle.jdk7u51" ... >`, where extraneous attributes have been snipped out and replaced with ellipses. 4. Kexts inside packages are also described in `PackageInfo` files. If any kernel extensions are present, the command `find /tmp/expanded.unpkg -name PackageInfo -print0 | xargs -0 grep -i kext` should return a `<bundle id>` tag with a `path` attribute that contains a `.kext` extension, for example `<bundle id="com.wavtap.driver.WavTap" ... path="./WavTap.kext" ... />`. 5. Once bundle ids have been identified, the unpacked package directory can be deleted. ### Stanza: `url` #### HTTPS URLs are Preferred If available, an HTTPS URL is preferred. A plain HTTP URL should only be used in the absence of a secure alternative. #### Additional HTTP/S URL Parameters When a plain URL string is insufficient to fetch a file, additional information may be provided to the `curl`-based downloader, in the form of key/value pairs appended to `url`: | key | value | | --- | --- | | `verified:` | a string repeating the beginning of `url`, for verification purposes. [See below](#when-url-and-homepage-domains-differ-add-verified).
| | `using:` | the symbol `:post` is the only legal value | | `cookies:` | a hash of cookies to be set in the download request | | `referer:` | a string holding the URL to set as referer in the download request | | `header:` | a string holding the header to set for the download request | | `user_agent:` | a string holding the user agent to set for the download request. Can also be set to the symbol `:fake`, which will use a generic Browser-like user agent string. We prefer `:fake` when the server does not require a specific user agent. | | `data:` | a hash of parameters to be set in the POST request | Example of using `cookies:`: [java.rb](https://github.com/Homebrew/homebrew-cask/blob/472930df191d66747a57d5c96c0d00511d56e21b/Casks/java.rb#L5-L8) Example of using `referer:`: [rrootage.rb](https://github.com/Homebrew/homebrew-cask/blob/312ae841f1f1b2ec07f4d88b7dfdd7fbdf8d4f94/Casks/rrootage.rb#L5) Example of using `header:`: [issue-325182724](https://github.com/Homebrew/brew/pull/6545#issue-325182724) #### When URL and Homepage Domains Differ, Add `verified:` When the domains of `url` and `homepage` differ, the discrepancy should be documented with the `verified:` parameter, repeating the smallest possible portion of the URL that uniquely identifies the app or vendor, excluding the protocol. Example: [`shotcut.rb`](https://github.com/Homebrew/homebrew-cask/blob/08733296b49c59c58b6beeada59ed4207cef60c3/Casks/shotcut.rb#L5L6). This must be added so a user auditing the cask knows the URL was verified by the Homebrew Cask team as the one provided by the vendor, even though it may look unofficial. It is our responsibility as Homebrew Cask maintainers to verify both the `url` and `homepage` information when first added (or subsequently modified, apart from versioning). The parameter doesn’t mean you should trust the source blindly, but we only approve casks whose authenticity users can easily verify with basic means, such as checking the official homepage or public repository. Occasionally, slightly more elaborate techniques may be used, such as inspecting an [`appcast`](#stanza-appcast) we established as official. Cases where such quick verifications aren’t possible (e.g. when the download URL is behind a registration wall) are [treated in a stricter manner](acceptable-casks#unofficial-vendorless-and-walled-builds). #### Difficulty Finding a URL Web browsers may obscure the direct `url` download location for a variety of reasons. Homebrew Cask supplies a script which can read extended file attributes to extract the actual source URL for most files downloaded by a browser on macOS. The script usually emits multiple candidate URLs; you may have to test each of them: ``` $(brew --repository homebrew/cask)/developer/bin/list_url_attributes_on_file <file> ``` #### Subversion URLs In rare cases, a distribution may not be available over ordinary HTTP/S. Subversion URLs are also supported, and can be specified by appending the following key/value pairs to `url`: | key | value | | --- | --- | | `using:` | the symbol `:svn` is the only legal value | | `revision:` | a string identifying the subversion revision to download | | `trust_cert:` | set to `true` to automatically trust the certificate presented by the server (avoiding an interactive prompt) | #### SourceForge/OSDN URLs SourceForge and OSDN (formerly `SourceForge.JP`) projects are common ways to distribute binaries, but they provide many different styles of URLs to get to the goods.
We prefer URLs of this format: ``` https://downloads.sourceforge.net/<project_name>/<filename>.<ext> ``` Or, if it’s from [OSDN](https://osdn.jp/): ``` http://<subdomain>.osdn.jp/<project_name>/<release_id>/<filename>.<ext> ``` `<subdomain>` is typically of the form `dl` or `<user>.dl`. If these formats are not available, and the application is macOS-exclusive (otherwise a command-line download defaults to the Windows version), we prefer the use of this format: ``` https://sourceforge.net/projects/<project_name>/files/latest/download ``` #### Some Providers Block Command-line Downloads Some hosting providers actively block command-line HTTP clients. Such URLs cannot be used in Casks. Other providers may use URLs that change periodically, or even on each visit (example: FossHub). While some cases [could be circumvented](#using-a-block-to-defer-code-execution), they tend to occur when the vendor is actively trying to prevent automated downloads, so we prefer not to add those casks to the main repository. #### Using a Block to Defer Code Execution Some casks, notably nightlies, have versioned download URLs but are updated so often that they become impractical to keep current with the usual process. For those, we want to dynamically determine `url`. ##### The Problem In theory, one can write arbitrary Ruby code right in the Cask definition to fetch and construct a disposable URL. However, this typically involves an HTTP round trip to a landing site, which may take a long time. Because of the way Homebrew Cask loads and parses Casks, it is not acceptable that such expensive operations be performed directly in the body of a Cask definition. ##### Writing the Block Similar to the `preflight`, `postflight`, `uninstall_preflight`, and `uninstall_postflight` blocks, the `url` stanza offers an optional *block syntax*: ``` url "https://handbrake.fr/nightly.php" do |page| file_path = page[/href=["']?([^"' >]*Handbrake[._-][^"' >]+\.dmg)["' >]/i, 1] file_path ? URI.join(page.url, file_path) : nil end ``` You can also nest `url do` blocks inside `url do` blocks to follow a chain of URLs. The block is only evaluated when needed, for example at download time or when auditing a Cask. Inside a block, you may safely do things such as HTTP/S requests that may take a long time to execute. You may also refer to the `@cask` instance variable, and invoke any method available on `@cask`. The block will be called immediately before downloading; its result value will be assumed to be a `String` (or a pair of a `String` and `Hash` containing parameters) and subsequently used as a download URL. You can use the `url` stanza with either a direct argument or a block but not with both. Example for using the block syntax: [vlc-nightly.rb](https://github.com/Homebrew/homebrew-cask-versions/blob/2bf0f13dd49d263ebec0ca56e58ad8458633f789/Casks/vlc-nightly.rb#L5L10) ##### Mixing Additional URL Parameters With the Block Syntax In rare cases, you might need to set URL parameters like `cookies` or `referer` while also using the block syntax. This is possible by returning a two-element array as a block result. The first element of the array must be the download URL; the second element must be a `Hash` containing the parameters. ### Stanza: `version` `version`, while related to the app’s own versioning, doesn’t have to follow it exactly.
It is common to change it slightly so it can be [interpolated](https://en.wikipedia.org/wiki/String_interpolation#Ruby_/_Crystal) in other stanzas, usually in `url` to create a Cask that only needs `version` and `sha256` changes when updated. This can be taken further, when needed, with [Ruby String methods](https://ruby-doc.org/core/String.html). For example: Instead of ``` version "1.2.3" url "https://example.com/file-version-123.dmg" ``` We can use ``` version "1.2.3" url "https://example.com/file-version-#{version.delete('.')}.dmg" ``` We can also leverage the power of regular expressions. So instead of ``` version "1.2.3build4" url "https://example.com/1.2.3/file-version-1.2.3build4.dmg" ``` We can use ``` version "1.2.3build4" url "https://example.com/#{version.sub(%r{build\d+}, '')}/file-version-#{version}.dmg" ``` #### version :latest The special value `:latest` is used on casks where: 1. the `url` doesn’t contain a version, and 2. having a correct value for `version` is too difficult or impractical, even with our automated systems. Example: [spotify.rb](https://github.com/Homebrew/homebrew-cask/blob/f56e8ba057687690e26a6502623aa9476ff4ac0e/Casks/spotify.rb#L2) #### version methods The examples above can become hard to read, however. Since many of these changes are common, we provide a number of helpers to clearly interpret otherwise obtuse cases: | Method | Input | Output | | --- | --- | --- | | `major` | `1.2.3-a45,ccdd88` | `1` | | `minor` | `1.2.3-a45,ccdd88` | `2` | | `patch` | `1.2.3-a45,ccdd88` | `3-a45` | | `major_minor` | `1.2.3-a45,ccdd88` | `1.2` | | `major_minor_patch` | `1.2.3-a45,ccdd88` | `1.2.3-a45` | | `minor_patch` | `1.2.3-a45,ccdd88` | `2.3-a45` | | `before_comma` | `1.2.3-a45,ccdd88` | `1.2.3-a45` | | `after_comma` | `1.2.3-a45,ccdd88` | `ccdd88` | | `dots_to_hyphens` | `1.2.3-a45,ccdd88` | `1-2-3-a45,ccdd88` | | `no_dots` | `1.2.3-a45,ccdd88` | `123-a45,ccdd88` | Similar to `dots_to_hyphens`, we provide all logical permutations of `{dots,hyphens,underscores}_to_{dots,hyphens,underscores}`. The same applies to `no_dots` in the form of `no_{dots,hyphens,underscores}`, with an extra `no_dividers` that applies all of those at once. Finally, there is `csv` that returns an array of comma-separated values. `csv`, `before_comma` and `after_comma` are extra special to allow for otherwise complex cases, and should be used sparingly. A `version` should contain no more than two commas. ### Stanza: `zap` #### `zap` Stanza Purpose The `zap` stanza describes a more complete uninstallation of files associated with a Cask. The `zap` procedures will never be performed by default, but only if the user uses `--zap` on `uninstall`: ``` brew uninstall --zap firefox ``` `zap` stanzas may remove: * Preference files and caches stored within the user’s `~/Library` directory. * Shared resources such as application updaters. Since shared resources may be removed, other applications may be affected by `brew uninstall --zap`. Understanding that is the responsibility of the end user. `zap` stanzas should not remove: * Files created by the user directly. Appending `--force` to the command will allow you to perform these actions even if the Cask is no longer installed: ``` brew uninstall --zap --force firefox ``` #### `zap` Stanza Syntax The form of the `zap` stanza follows the [`uninstall` stanza](#stanza-uninstall). All of the same directives are available. The `trash:` key is preferred over `delete:`.
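For illustration, a minimal sketch of a `zap` stanza (the paths and bundle ID are hypothetical):

```
zap trash: [
  "~/Library/Application Support/MyApp",
  "~/Library/Caches/com.example.myapp",
  "~/Library/Preferences/com.example.myapp.plist",
]
```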
Example: [dropbox.rb](https://github.com/Homebrew/homebrew-cask/blob/31cd96cc0e00dab1bff74d622e32d816bafd1f6f/Casks/dropbox.rb#L17-L35) #### `zap` Creation The simplest method is to use [@nrlquaker’s CreateZap](https://github.com/nrlquaker/homebrew-createzap), which can automatically generate the stanza. In a few instances it may fail to pick up anything, and manual creation may be required. Manual creation can be facilitated with: * Some of the developer tools already available in Homebrew Cask. * `sudo find / -iname "*<search item>*"` * An uninstaller tool such as [AppCleaner](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/appcleaner.rb). * Inspecting the usual suspects, i.e. `/Library/{'Application Support',LaunchAgents,LaunchDaemons,Frameworks,Logs,Preferences,PrivilegedHelperTools}` and `~/Library/{'Application Support',Caches,Containers,LaunchAgents,Logs,Preferences,'Saved Application State'}`. --- Token reference --------------- This section describes the algorithm implemented in the `generate_cask_token` script, and covers detailed rules and exceptions which are not needed in most cases. * [Purpose](#purpose) * [Finding the Simplified Name of the Vendor’s Distribution](#finding-the-simplified-name-of-the-vendors-distribution) * [Converting the Simplified Name To a Token](#converting-the-simplified-name-to-a-token) * [Cask Filenames](#cask-filenames) * [Cask Headers](#cask-headers) * [Cask Token Examples](#cask-token-examples) * [Tap Specific Cask Token Examples](#tap-specific-cask-token-examples) * [Token Overlap](#token-overlap) Purpose ------- Software vendors are often inconsistent with their naming. By enforcing strict naming conventions we aim to: * Prevent duplicate submissions * Minimize renaming events * Unambiguously boil down the name of the software into a unique identifier Details of software names and brands will inevitably be lost in the conversion to a minimal token. To capture the vendor’s full name for a distribution, use the [`name`](#stanza-name) stanza within a Cask. `name` accepts an unrestricted UTF-8 string. Finding the Simplified Name of the Vendor’s Distribution -------------------------------------------------------- ### Simplified Names of Apps * Start with the exact name of the Application bundle as it appears on disk, such as `Google Chrome.app`. * If the name uses letters outside A-Z, convert it to ASCII as described in [Converting to ASCII](#converting-to-ascii). * Remove `.app` from the end. * Remove from the end: the string “app”, if the vendor styles the name like “Software App.app”. Exception: when “app” is an inseparable part of the name, without which the name would be inherently nonsensical, as in [whatsapp.rb](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/whatsapp.rb). * Remove from the end: version numbers or incremental release designations such as “alpha”, “beta”, or “release candidate”. Strings which distinguish different capabilities or codebases such as “Community Edition” are currently accepted. Exception: when a number is not an incremental release counter, but a differentiator for a different product from a different vendor, as in [kdiff3.rb](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/kdiff3.rb). * If the version number is arranged to occur in the middle of the App name, it should also be removed. * Remove from the end: “Launcher”, “Quick Launcher”. * Remove from the end: strings such as “Desktop”, “for Desktop”.
* Remove from the end: strings such as “Mac”, “for Mac”, “for OS X”, “macOS”, “for macOS”. These terms are generally added to ported software such as “MAME OS X.app”. Exception: when the software is not a port, and “Mac” is an inseparable part of the name, without which the name would be inherently nonsensical, as in [PlayOnMac.app](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/playonmac.rb). * Remove from the end: hardware designations such as “for x86”, “32-bit”, “ppc”. * Remove from the end: software framework names such as “Cocoa”, “Qt”, “Gtk”, “Wx”, “Java”, “Oracle JVM”, etc. Exception: the framework is the product being Casked. * Remove from the end: localization strings such as “en-US”. * If the result of that process is a generic term, such as “Macintosh Installer”, try prepending the name of the vendor or developer, followed by a hyphen. If that doesn’t work, then just create the best name you can, based on the vendor’s web page. * If the result conflicts with the name of an existing Cask, make yours unique by prepending the name of the vendor or developer, followed by a hyphen. Example: [unison.rb](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/unison.rb) and [panic-unison.rb](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/panic-unison.rb). * Inevitably, there are a small number of exceptions not covered by the rules. Don’t hesitate to [use the forum](https://github.com/orgs/Homebrew/discussions) if you have a problem. ### Converting to ASCII * If the vendor provides an English localization string, that is preferred. Here are the places it may be found, in order of preference: + `CFBundleDisplayName` in the main `Info.plist` file of the app bundle + `CFBundleName` in the main `Info.plist` file of the app bundle + `CFBundleDisplayName` in `InfoPlist.strings` of an `en.lproj` localization directory + `CFBundleName` in `InfoPlist.strings` of an `en.lproj` localization directory + `CFBundleDisplayName` in `InfoPlist.strings` of an `English.lproj` localization directory + `CFBundleName` in `InfoPlist.strings` of an `English.lproj` localization directory * When there is no vendor localization string, romanize the name by transliteration or decomposition. * As a last resort, translate the name of the app bundle into English. ### Simplified Names of `pkg`-based Installers * The Simplified Name of a `pkg` may be more tricky to determine than that of an App. If a `pkg` installs an App, then use that App name with the rules above. If not, just create the best name you can, based on the vendor’s web page. ### Simplified Names of non-App Software * Currently, rules for generating a token are not well-defined for Preference Panes, QuickLook plugins, and several other types of software installable by Homebrew Cask. Just create the best name you can, based on the filename on disk or the vendor’s web page. Watch out for duplicates. Non-app tokens should become more standardized in the future. Converting the Simplified Name To a Token ----------------------------------------- The token is the primary identifier for a package in our project. It’s the unique string users refer to when operating on the Cask. To convert the App’s Simplified Name (above) to a token: * Convert all letters to lower case. * Expand the `+` symbol into a separated English word: `-plus-`. * Expand the `@` symbol into a separated English word: `-at-`. * Spaces become hyphens. * Underscores become hyphens. * Middots/Interpuncts become hyphens. * Hyphens stay hyphens. * Digits stay digits. 
* Delete any character which is not alphanumeric or a hyphen. * Collapse a series of multiple hyphens into one hyphen. * Delete a leading or trailing hyphen. (A short illustrative Ruby sketch of these rules appears at the end of this document.) Cask Filenames -------------- Casks are stored in a Ruby file named after the token, with the file extension `.rb`. Cask Headers ------------ The token is also given in the header line for each Cask. Cask Token Examples ------------------- These illustrate most of the rules for generating a token: | App Name on Disk | Simplified App Name | Cask Token | Filename | | --- | --- | --- | --- | | `Audio Hijack Pro.app` | Audio Hijack Pro | audio-hijack-pro | `audio-hijack-pro.rb` | | `VLC.app` | VLC | vlc | `vlc.rb` | | `BetterTouchTool.app` | BetterTouchTool | bettertouchtool | `bettertouchtool.rb` | | `LPK25 Editor.app` | LPK25 Editor | lpk25-editor | `lpk25-editor.rb` | | `Sublime Text 2.app` | Sublime Text | sublime-text | `sublime-text.rb` | Tap Specific Cask Token Examples -------------------------------- Cask taps have naming conventions specific to each tap. [Homebrew/cask-versions](https://github.com/Homebrew/homebrew-cask-versions/blob/HEAD/CONTRIBUTING.md#naming-versions-casks) [Homebrew/cask-fonts](https://github.com/Homebrew/homebrew-cask-fonts/blob/HEAD/CONTRIBUTING.md#naming-font-casks) [Homebrew/cask-drivers](https://github.com/Homebrew/homebrew-cask-drivers/blob/HEAD/CONTRIBUTING.md#naming-driver-casks) Special Affixes --------------- A few situations require a prefix or suffix to be added to the token. Token Overlap ------------- When the token for a new Cask would otherwise conflict with the token of an already existing Cask, the nature of that overlap dictates the token (for possibly both Casks). See [Forks and Apps with Conflicting Names](acceptable-casks#forks-and-apps-with-conflicting-names) for information on how to proceed. Potentially Misleading Name --------------------------- If the token for a piece of unofficial software that interacts with a popular service would make it look official and the vendor is not authorised to use the name, [a prefix must be added](acceptable-casks#forks-and-apps-with-conflicting-names) for disambiguation. In cases where the prefix is ambiguous and would make the app appear official, the `-unofficial` suffix may be used.
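As referenced in the conversion rules above, the following is a rough Ruby sketch of the character-level token conversion. It is for illustration only and is not the actual `generate_cask_token` implementation; the name-simplification and ASCII-conversion steps are not covered, and the method name is hypothetical:

```
# A rough approximation of the "Converting the Simplified Name To a Token" rules.
def simplified_name_to_token(name)
  name.downcase                # convert all letters to lower case
      .gsub("+", "-plus-")     # expand "+" into a separated English word
      .gsub("@", "-at-")       # expand "@" into a separated English word
      .tr(" _·", "---")        # spaces, underscores and middots become hyphens
      .gsub(/[^a-z0-9-]/, "")  # delete any character which is not alphanumeric or a hyphen
      .squeeze("-")            # collapse a series of multiple hyphens into one
      .delete_prefix("-")      # delete a leading hyphen...
      .delete_suffix("-")      # ...and a trailing one
end

simplified_name_to_token("Audio Hijack Pro") # => "audio-hijack-pro"
simplified_name_to_token("LPK25 Editor")     # => "lpk25-editor"
```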
homebrew Brew Test Bot Brew Test Bot ============= `brew test-bot` is the name for the automated review and testing system funded by [our Kickstarter in 2013](https://www.kickstarter.com/projects/homebrew/brew-test-bot). It comprises three Mac Pros hosting virtual machines that run the [`test-bot.rb`](https://github.com/Homebrew/homebrew-test-bot/) external command to perform automated testing of commits to the master branch, pull requests and custom builds requested by maintainers. Pull Requests ------------- The bot automatically builds pull requests and updates their status depending on the result of the job. A job which has been queued but not yet completed, a failed build, and a passed build are each indicated by a status section in the pull request. On failed or passed builds you can click the “Details” link to view the result in GitHub Actions. homebrew License Guidelines License Guidelines ================== We only accept formulae that use a [Debian Free Software Guidelines license](https://wiki.debian.org/DFSGLicenses) or are released into the public domain following [DFSG Guidelines on Public Domain software](https://wiki.debian.org/DFSGLicenses#Public_Domain). Specifying a License -------------------- All licenses are identified by their license identifier from the [SPDX License List](https://spdx.org/licenses/). Specify a license by passing it to the `license` method: ``` license "MIT" ``` The public domain can be indicated using a symbol: ``` license :public_domain ``` If the license for a formula cannot be represented using an SPDX expression: ``` license :cannot_represent ``` Complex SPDX License Expressions -------------------------------- Some formulae have multiple licenses that need to be combined in different ways. In these cases, a more complex license expression can be used. These expressions are based on the [SPDX License Expression Guidelines](https://spdx.github.io/spdx-spec/appendix-IV-SPDX-license-expressions/). Add a `+` to indicate that the user can choose a later version of the same license: ``` license "EPL-1.0+" ``` GNU licenses (`GPL`, `LGPL`, `AGPL` and `GFDL`) require either the `-only` or the `-or-later` suffix to indicate whether a later version of the license is allowed: ``` license "LGPL-2.1-only" ``` ``` license "GPL-1.0-or-later" ``` Use `:any_of` to indicate that the user can choose which license applies: ``` license any_of: ["MIT", "0BSD"] ``` Use `:all_of` to indicate that the user must comply with multiple licenses: ``` license all_of: ["MIT", "0BSD"] ``` Use `:with` to indicate a license exception: ``` license "MIT" => { with: "LLVM-exception" } ``` These expressions can be nested as needed: ``` license any_of: [ "MIT", :public_domain, all_of: ["0BSD", "Zlib", "Artistic-1.0+"], "Apache-2.0" => { with: "LLVM-exception" }, ] ``` Specifying Forbidden Licenses ----------------------------- The `HOMEBREW_FORBIDDEN_LICENSES` environment variable can be set to forbid installation of formulae that require or have dependencies that require certain licenses. `HOMEBREW_FORBIDDEN_LICENSES` should be set to a space-separated list of licenses. Use `public_domain` to forbid installation of formulae with a `:public_domain` license. For example, the following forbids installation of `MIT`, `Artistic-1.0` and `:public_domain` licenses: ``` export HOMEBREW_FORBIDDEN_LICENSES="MIT Artistic-1.0 public_domain" ``` In this example, Homebrew would refuse to install any formula that specifies the `MIT` license.
Homebrew would also forbid installation of any formula that declares a dependency on a formula that specifies `MIT`, even if the original formula has an allowed license. Homebrew interprets complex license expressions and determines whether the licenses allow installation. To continue the above example, Homebrew would not allow installation of a formula with the following license declarations: ``` license any_of: ["MIT", "Artistic-1.0"] ``` ``` license all_of: ["MIT", "0BSD"] ``` Homebrew *would* allow formulae with the following declaration to be installed: ``` license any_of: ["MIT", "0BSD"] ``` `HOMEBREW_FORBIDDEN_LICENSES` can also forbid future versions of specific licenses. For example, to forbid `Artistic-1.0`, `Artistic-2.0` and any future Artistic licenses, use: ``` export HOMEBREW_FORBIDDEN_LICENSES="Artistic-1.0+" ``` For GNU licenses (such as `GPL`, `LGPL`, `AGPL` and `GFDL`), use `-only` or `-or-later`. For example, the following would forbid `GPL-2.0`, `LGPL-2.1` and `LGPL-3.0` formulae from being installed, but would allow `GPL-3.0`: ``` export HOMEBREW_FORBIDDEN_LICENSES="GPL-2.0-only LGPL-2.1-or-later" ``` homebrew Creating a Homebrew Issue Creating a Homebrew Issue ========================= First, check to make sure your issue is not listed in the [FAQ](faq) or [Common Issues](common-issues) and can’t otherwise be resolved with the information in the [Tips and Tricks](tips-n'-tricks) documentation. Next, go through the steps in the [Troubleshooting guide](troubleshooting). If the preceding steps did not help, it may be appropriate to submit an issue. This can be done by navigating to the relevant repository, clicking the “Issues” link, and clicking on the “New issue” button. When creating an issue, make sure you use the provided template, as it’s important in helping others to understand and potentially triage your issue efficiently. homebrew Installation Installation ============ Instructions for a supported install of Homebrew are on the [homepage](https://brew.sh). This script installs Homebrew to its preferred prefix (`/usr/local` for macOS Intel, `/opt/homebrew` for Apple Silicon and `/home/linuxbrew/.linuxbrew` for Linux) so that [you don’t need sudo](faq#why-does-homebrew-say-sudo-is-bad) when you `brew install`. It is a careful script; it can be run even if you have stuff installed in the preferred prefix already. It tells you exactly what it will do before it does it too. You have to confirm everything it will do before it starts. macOS Requirements ------------------ * A 64-bit Intel CPU or Apple Silicon CPU [1](#1) * macOS Catalina (10.15) (or higher) [2](#2) * Command Line Tools (CLT) for Xcode (from `xcode-select --install` or <https://developer.apple.com/download/all/>) or [Xcode](https://itunes.apple.com/us/app/xcode/id497799835) [3](#3) * The Bourne-again shell for installation (i.e. `bash`) [4](#4) Git Remote Mirroring -------------------- You can use geolocalized Git mirrors to speed up Homebrew’s installation and `brew update` by setting `HOMEBREW_BREW_GIT_REMOTE` and/or `HOMEBREW_CORE_GIT_REMOTE` in your shell environment with this script: ``` export HOMEBREW_BREW_GIT_REMOTE="..." # put your Git mirror of Homebrew/brew here export HOMEBREW_CORE_GIT_REMOTE="..." # put your Git mirror of Homebrew/homebrew-core here /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)" ``` The default Git remote will be used if the corresponding environment variable is unset.
Alternative Installs -------------------- ### Linux or Windows 10 Subsystem for Linux Check out [the Homebrew on Linux installation documentation](homebrew-on-linux). ### Untar anywhere Just extract (or `git clone`) Homebrew wherever you want. Just avoid: * Directories with names that contain spaces. Homebrew itself can handle spaces, but many build scripts cannot. * `/tmp` subdirectories because Homebrew gets upset. * `/sw` and `/opt/local` because build scripts get confused when Homebrew is there instead of Fink or MacPorts, respectively. However, do yourself a favour and use the installer to install to the default prefix. Some things may not build when installed elsewhere. One of the reasons Homebrew just works relative to the competition is **because** we recommend installing here. *Pick another prefix at your peril!* ``` mkdir homebrew && curl -L https://github.com/Homebrew/brew/tarball/master | tar xz --strip 1 -C homebrew ``` or ``` git clone https://github.com/Homebrew/brew homebrew ``` then ``` eval "$(homebrew/bin/brew shellenv)" brew update --force --quiet chmod -R go-w "$(brew --prefix)/share/zsh" ``` ### Multiple installations Create a Homebrew installation wherever you extract the tarball. Whichever `brew` command is called is where the packages will be installed. You can use this as you see fit, e.g. to have a system set of libs in the default prefix and tweaked formulae for development in `~/homebrew`. ### Unattended installation If you want a non-interactive run of the Homebrew installer that doesn’t prompt for passwords (e.g. in automation scripts), prepend [`NONINTERACTIVE=1`](https://github.com/Homebrew/install/#install-homebrew-on-macos-or-linux) to the installation command. Uninstallation -------------- Uninstallation is documented in the [FAQ](faq). 1 For 32-bit or PPC support, see [Tigerbrew](https://github.com/mistydemeo/tigerbrew). 2 10.15 or higher is recommended, while 10.11–10.14 are supported on a best-effort basis. For 10.4–10.6, see [Tigerbrew](https://github.com/mistydemeo/tigerbrew). 3 Most formulae require a compiler. A handful require a full Xcode installation. You can install Xcode, the CLT, or both; Homebrew supports all three configurations. Downloading Xcode may require an Apple Developer account on older versions of Mac OS X. Sign up for free at [Apple’s website](https://developer.apple.com/register/index.action). 4 The one-liner installation method found on [brew.sh](https://brew.sh) requires the Bourne-again shell, i.e. `bash`. Notably, `zsh`, `fish`, `tcsh` and `csh` will not work. homebrew Taps (Third-Party Repositories) Taps (Third-Party Repositories) =============================== The `brew tap` command adds more repositories to the list of formulae that Homebrew tracks, updates, and installs from. By default, `tap` assumes that the repositories come from GitHub, but the command isn’t limited to any one location. The `brew tap` command ---------------------- * `brew tap` without arguments lists all currently tapped repositories. For example: ``` $ brew tap homebrew/cask homebrew/core petere/postgresql ``` * `brew tap <user/repo>` makes a clone of the repository at *https://github.com/<user>/homebrew-<repo>* into `$(brew --repository)/Library/Taps`. After that, `brew` will be able to work with those formulae as if they were in Homebrew’s [homebrew/core](https://github.com/Homebrew/homebrew-core) canonical repository. You can install and uninstall them with `brew [un]install`, and the formulae are automatically updated when you run `brew update`.
(See below for details about how `brew tap` handles the names of repositories.) * `brew tap <user/repo> <URL>` makes a clone of the repository at *URL*. Unlike the one-argument version, *URL* is not assumed to be GitHub, and it doesn’t have to be HTTP. Any location and any protocol that Git can handle is fine, although non-GitHub taps require running `brew tap --force-auto-update <user/repo>` to enable automatic updating. * `brew tap --repair` migrates tapped formulae from a symlink-based to a directory-based structure. (This should only need to be run once.) * `brew untap user/repo [user/repo user/repo ...]` removes the given taps. The repositories are deleted and `brew` will no longer be aware of their formulae. `brew untap` can handle multiple removals at once. Repository naming conventions and assumptions --------------------------------------------- On GitHub, your repository must be named `homebrew-something` to use the one-argument form of `brew tap`. The prefix “homebrew-“ is not optional. (The two-argument form doesn’t have this limitation, but it forces you to give the full URL explicitly.) When you use `brew tap` on the command line, however, you can leave out the “homebrew-“ prefix in commands. That is, `brew tap username/foobar` can be used as a shortcut for the long version: `brew tap username/homebrew-foobar`. `brew` will automatically add back the “homebrew-“ prefix whenever it’s necessary. Formula with duplicate names ---------------------------- If your tap contains a formula that is also present in [homebrew/core](https://github.com/Homebrew/homebrew-core), that’s fine, but you would need to specify its fully qualified name in the form `<user>/<repo>/<formula>` to install your version. Whenever a `brew install foo` command is issued, `brew` selects which formula to use by searching in the following order: * core formulae * other taps If you need a formula to be installed from a particular tap, you can use fully qualified names to refer to them. If you were to create a tap for an alternative `vim` formula, the behaviour would be: ``` brew install vim # installs from homebrew/core brew install username/repo/vim # installs from your custom repository ``` As a result, we recommend you give new names to customized formulae if you want to make them easier to install. Note that there is (intentionally) no way of replacing dependencies of core formulae with those from other taps. homebrew Migrating A Formula To A Tap Migrating A Formula To A Tap ============================ There are times when we may wish to migrate a formula from one tap into another tap. To do this: 1. Create a pull request to the new tap adding the formula file as-is from the original tap. Fix any test failures that may occur due to the stricter requirements for new formulae than for existing ones (e.g. `brew audit --strict` must pass for that formula). 2. Create a pull request to the original tap deleting the formula file and add it to `tap_migrations.json` with a commit message like `gv: migrate to homebrew/core`. 3. Put a link for each pull request in the other pull request so the maintainers can merge them both at once. Congratulations, you’ve moved a formula to a tap! For Homebrew maintainers, formulae should only ever be migrated into and within the Homebrew organisation (e.g. from Homebrew/core to Homebrew/cask, or from a third-party tap to Homebrew/core), and never out of it. homebrew FAQ FAQ === Is there a glossary of terms around?
------------------------------------ The Formula Cookbook has a list of [Homebrew terminology](formula-cookbook#homebrew-terminology). How do I update my local packages? ---------------------------------- First update all package definitions (formulae) and Homebrew itself: ``` brew update ``` You can now list which of your installed packages (kegs) are outdated with: ``` brew outdated ``` Upgrade everything with: ``` brew upgrade ``` Or upgrade a specific formula with: ``` brew upgrade <formula> ``` How do I stop certain formulae from being updated? -------------------------------------------------- To stop something from being updated/upgraded: ``` brew pin <formula> ``` To allow that formula to update again: ``` brew unpin <formula> ``` Note that pinned, outdated formulae that another formula depends on need to be upgraded when required, as we do not allow formulae to be built against outdated versions. If this is not desired, you can instead use `brew extract` to [maintain your own copy of the formula in a tap](how-to-create-and-maintain-a-tap). How do I uninstall Homebrew? ---------------------------- To uninstall Homebrew, run the [uninstall script from the Homebrew/install repository](https://github.com/homebrew/install#uninstall-homebrew). How do I keep old versions of a formula when upgrading? ------------------------------------------------------- Homebrew automatically uninstalls old versions of each formula that is upgraded with `brew upgrade`, and periodically performs additional cleanup every 30 days. To **disable** automatic `brew cleanup`: ``` export HOMEBREW_NO_INSTALL_CLEANUP=1 ``` To disable automatic `brew cleanup` only for formulae `foo` and `bar`: ``` export HOMEBREW_NO_CLEANUP_FORMULAE=foo,bar ``` When automatic `brew cleanup` is disabled, if you uninstall a formula, it will only remove the latest version you have installed. It will not remove all versions of the formula that you may have installed in the past. Homebrew will continue to attempt to install the newest version it knows about when you run `brew upgrade`. This can be surprising. In this case, to remove a formula entirely, you may run `brew uninstall --force <formula>`. Be careful as this is a destructive operation. Why does `brew upgrade <formula>` or `brew install <formula>` also upgrade a bunch of other stuff? -------------------------------------------------------------------------------------------------- Homebrew doesn’t support arbitrary mixing and matching of formula versions, so everything a formula depends on, and everything that depends on it in turn, needs to be upgraded to the latest version as that’s the only combination of formulae we test. As a consequence any given `upgrade` or `install` command can upgrade many other (seemingly unrelated) formulae, especially if something important like `python` or `openssl` also needed an upgrade. Where does stuff get downloaded? -------------------------------- ``` brew --cache ``` Which is usually: `~/Library/Caches/Homebrew` My Mac `.app`s don’t find Homebrew utilities! --------------------------------------------- GUI apps on macOS don’t have Homebrew’s prefix in their `PATH` by default. If you’re on Mountain Lion or later, you can fix this by running `sudo launchctl config user path "$(brew --prefix)/bin:${PATH}"` and then rebooting, as documented in `man launchctl`. Note that this sets the `launchctl` `PATH` for *all users*. For earlier versions of macOS, see [this page](https://developer.apple.com/legacy/library/qa/qa1067/_index.html).
How do I contribute to Homebrew?
--------------------------------
Read our [contribution guidelines](https://github.com/Homebrew/brew/blob/HEAD/CONTRIBUTING.md#contributing-to-homebrew).

Why do you compile everything?
------------------------------
Homebrew provides pre-built binary packages for many formulae. These are referred to as bottles and are available at <https://github.com/Homebrew/homebrew-core/packages>.
If available, bottled binaries will be used by default except under the following conditions:
* The `--build-from-source` option is invoked.
* No bottle is available for the machine’s currently running OS version. (Bottles for macOS are generated only for supported macOS versions.)
* Homebrew is installed to a prefix other than the default (although some bottles support this).
* Formula options were passed to the install command. For example, `brew install <formula>` will try to find a bottled binary, but `brew install --with-foo <formula>` will trigger a source build.
We aim to bottle everything.

How do I get a formula from someone else’s pull request?
--------------------------------------------------------
```
brew install hub
brew update
cd "$(brew --repository homebrew/core)"
hub fetch github_username
hub pr checkout pull_request_number
```

Why should I install Homebrew in the default location?
------------------------------------------------------
Homebrew’s pre-built binary packages (known as bottles) of many formulae can only be used if you install in the default installation prefix; otherwise they have to be built from source. Building from source takes a long time, is prone to failure, and is not supported. The default prefix is:
* `/usr/local` for macOS on Intel,
* `/opt/homebrew` for macOS on Apple Silicon/ARM, and
* `/home/linuxbrew/.linuxbrew` for Linux.
Do yourself a favour and install to the default prefix so that you can use our pre-built binary packages. *Pick another prefix at your peril!*

Why is the default installation prefix `/opt/homebrew` on Apple Silicon?
------------------------------------------------------------------------
The prefix `/opt/homebrew` was chosen to allow installations in `/opt/homebrew` for Apple Silicon and `/usr/local` for Rosetta 2 to coexist and use bottles.

Why is the default installation prefix `/home/linuxbrew/.linuxbrew` on Linux?
-----------------------------------------------------------------------------
The prefix `/home/linuxbrew/.linuxbrew` was chosen so that users without admin access can still benefit from precompiled binaries via a `linuxbrew` role account. If you do not yourself have admin privileges, consider asking your admin staff to create a `linuxbrew` role account for you with home directory `/home/linuxbrew`.

Why does Homebrew say sudo is bad?
----------------------------------
**tl;dr** Sudo is dangerous, and you installed TextMate.app without sudo anyway.
Homebrew refuses to work using sudo.
You should only ever sudo a tool you trust. Of course, you can trust Homebrew 😉 — but do you trust the multi-megabyte Makefile that Homebrew runs? Developers often understand C++ far better than they understand `make` syntax. It’s too high a risk to sudo such stuff. It could modify (or upload) any files on your system. And indeed, we’ve seen some build scripts try to modify `/usr` even when the prefix was specified as something else entirely.
We use the macOS sandbox to stop this, but the sandbox doesn’t work when run as the `root` user (which also has read and write access to almost everything on the system).
Did you `chown root /Applications/TextMate.app`? Probably not. So is it that important to `chown root wget`?
If you need to run Homebrew in a multi-user environment, consider creating a separate user account specifically for use of Homebrew.

Why isn’t a particular command documented?
------------------------------------------
If it’s not in [`man brew`](manpage), it’s probably an [external command](external-commands) with documentation available using `--help`.

Why haven’t you merged my pull request?
---------------------------------------
If all maintainer feedback has been addressed and all tests are passing, bump it with a “bump” comment. Sometimes we miss requests and there are plenty of them. In the meantime, rebase your pull request so that it can be more easily merged.

Can I edit formulae myself?
---------------------------
Yes! It’s easy! Just `brew edit <formula>`. You don’t have to submit modifications back to `homebrew/core`; just edit the formula to what you personally need and `brew install <formula>`. As a bonus, `brew update` will merge your changes with upstream so you can still keep the formula up-to-date **with** your personal modifications!

Can I make new formulae?
------------------------
Yes! It’s easy! Just `brew create URL`. Homebrew will then open the formula in `EDITOR` so you can edit it, but it probably already installs; try it: `brew install <formula>`. If you encounter any issues, run the command with the `--debug` switch like so: `brew install --debug <formula>`, which drops you into a debugging shell.
If you want your new formula to be part of `homebrew/core` or want to learn more about writing formulae, then please read the [Formula Cookbook](formula-cookbook).

Why was a formula deleted or disabled?
--------------------------------------
Use `brew log <formula>` to find out! Likely because it had [unresolved issues](acceptable-formulae) and/or [our analytics](https://formulae.brew.sh/analytics/) indicated it was not widely used. For disabled and deprecated formulae, running `brew info <formula>` will also provide an explanation.

Homebrew is a poor name, it’s too generic; why was it chosen?
-------------------------------------------------------------
Homebrew’s creator @mxcl wasn’t too concerned with the beer theme and didn’t consider that the project might actually prove popular. By the time Max realised that it was popular, it was too late. However, today, the first Google hit for “homebrew” is not beer related 😉

What does “keg-only” mean?
--------------------------
It means the formula is installed only into the Cellar and is not linked into the default prefix. This means most tools will not find it. You can see why a formula was installed as keg-only, and instructions for including it in your `PATH`, by running `brew info <formula>`.
You can [modify a tool’s build configuration](how-to-build-software-outside-homebrew-with-homebrew-keg-only-dependencies) to find keg-only dependencies. Or, you can link in the formula if you need to with `brew link <formula>`, though this can cause unexpected behaviour if you are shadowing macOS software.

How can I specify different configure arguments for a formula?
--------------------------------------------------------------
`brew edit <formula>` and edit the formula directly. Currently there is no other way to do this.

Why can’t I open a Mac app from an “unidentified developer”?
------------------------------------------------------------
Chances are that certain apps will give you a popup message saying they can’t be opened because they come from an unidentified developer. This is a [security feature from Apple](https://support.apple.com/en-us/HT202491). The single most important thing to know is that **you can allow individual apps to be exempt from this feature.** This allows the app to run while the rest of the system remains under protection.
**Always leave system-wide protection enabled,** and disable it only for specific apps as needed.
If you’re sure you want to trust the app, you can disable protection for it by right-clicking its icon and choosing *Open*. In the resulting dialog, click the *Open* button to have macOS permanently allow the app to run on this Mac. **Don’t do this unless you’re sure you trust the app.**
Alternatively, you may provide the [`--no-quarantine` flag](https://github.com/Homebrew/homebrew-cask/blob/HEAD/USAGE.md#options) at install time to not add this feature to a specific app.

Why aren’t some apps included during `brew upgrade`?
----------------------------------------------------
After running `brew upgrade`, you may notice that some casks you think should be upgrading aren’t.
As you’re likely aware, a lot of macOS software can upgrade itself. That could cause conflicts when used in tandem with Homebrew Cask’s `upgrade` mechanism. When software uses its built-in mechanisms to upgrade itself, it happens without Homebrew Cask’s knowledge, causing the two versions to get out of sync. If you were to then upgrade through Homebrew Cask while we have a lower version of the software on record, you’d get a downgrade.
There are a few ideas to fix this problem:
* Try to prevent the software’s automated updates. It wouldn’t be a universal solution and may cause the software to break. Most software on Homebrew Cask is closed-source, so we’d be guessing. This is also why pinning casks to a version isn’t available.
* Try to extract the installed software’s version and compare it to the cask, deciding what to do at that time. It’d be a complicated solution that would break other parts of our methodology, such as using versions to interpolate `url` values (a definite win for maintainability). This solution also isn’t universal, as many software developers are inconsistent in their versioning schemes (and app bundles are meant to have two version strings) and it doesn’t work for all types of software we support.
So we let software be. Anything installed with Homebrew Cask should behave the same as if it were installed manually. But since we also want to support software that doesn’t self-upgrade, we add [`auto_updates true`](https://github.com/Homebrew/homebrew-cask/blob/62c0495b254845a481dacac6ea7c8005e27a3fb0/Casks/alfred.rb#L10) to casks for software that does, which excludes them from `brew upgrade`.
Casks which use [`version :latest`](cask-cookbook#version-latest) are also excluded, because we have no way to track their installed version. It helps to ask the developers of such software to provide versioned releases (i.e. include the version in the path of the download `url`).
If you still want to force software to be upgraded via Homebrew Cask, you can reference it specifically in the `upgrade` command:
```
brew upgrade <cask>
```
Or use the `--greedy` flag:
```
brew upgrade --greedy
```
Refer to the `upgrade` section of the [`brew` manual page](manpage) for more details.
homebrew Deprecating, Disabling, and Removing Formulae

Deprecating, Disabling, and Removing Formulae
=============================================
There are many reasons why formulae may be deprecated, disabled, or removed. This document explains the differences between each method as well as when one method should be used over another.

Overview
--------
This general rule of thumb can be followed:
* `deprecate!` should be used for formulae that *should* no longer be used.
* `disable!` should be used for formulae that *cannot* be used.
* Formulae that are no longer acceptable in homebrew/core or have been disabled for over a year should be removed.

Deprecation
-----------
If a user attempts to install a deprecated formula, they will be shown a warning message but the install will proceed.
A formula should be deprecated to indicate to users that the formula should not be used and will be disabled in the future. Deprecated formulae should be maintained by the Homebrew maintainers so they can still build from source and their bottles continue to work (even if unmaintained upstream). If this is not possible, they should be disabled.
The most common reasons for deprecation are when the upstream project is deprecated, unmaintained, or archived.
Formulae with dependents may be deprecated only if at least one of the following is true:
* its dependents are all deprecated
* the formula does not build on any of our supported macOS versions and on Linux
* the formula has outstanding CVEs
To deprecate a formula, add a `deprecate!` call. This call should include a deprecation date (in the ISO 8601 format) and a deprecation reason:
```
deprecate! date: "YYYY-MM-DD", because: :reason
```
The `date` parameter should be set to the date that the project or version became (or will become) deprecated. If there is no clear date but the formula needs to be deprecated, use today’s date. If the `date` parameter is set to a date in the future, the formula will not become deprecated until that date. This can be useful if the upstream developers have indicated a date where the project or version will stop being supported.
The `because` parameter can be a preset reason (using a symbol) or a custom reason. See the [Deprecate and Disable Reasons](#deprecate-and-disable-reasons) section below for more details about the `because` parameter.

Disabling
---------
If a user attempts to install a disabled formula, they will be shown an error message and the install will fail.
A formula should be disabled to indicate to users that the formula cannot be used and will be removed in the future. Disabled formulae may no longer build from source or have working bottles.
The most common reasons for disabling a formula are:
* it cannot be built from source (meaning no bottles can be built)
* it has been deprecated for a long time
* the upstream repository has been removed
* the project has no license
Formulae should not be disabled without a deprecation period of at least three months unless the circumstances are exceptional (e.g. the formula does not build on any supported macOS version or on Linux). Popular formulae should have longer deprecation periods. The popularity of a formula should be based on our analytics data.
**Note: disabled formulae in homebrew/core will be automatically removed one year after their disable date.**
To disable a formula, add a `disable!` call. This call should include a disable date (in the ISO 8601 format) and a disable reason:
```
disable!
date: "YYYY-MM-DD", because: :reason ``` The `date` parameter should be set to the date that the reason for disabling came into effect. If there is no clear date but the formula needs to be disabled, use today’s date. If the `date` parameter is set to a date in the future, the formula will be deprecated until that date (on which the formula will become disabled). The `because` parameter can be a preset reason (using a symbol) or a custom reason. See the [Deprecate and Disable Reasons](#deprecate-and-disable-reasons) section below for more details about the `because` parameter. Removal ------- A formula should be removed if it does not meet our criteria for [acceptable formulae](acceptable-formulae) or [versioned formulae](versions), has a non-open-source license, or has been disabled for over a year. Deprecate and Disable Reasons ----------------------------- When a formula is deprecated or disabled, a reason explaining the action must be provided. There are two ways to indicate the reason. The preferred way is to use a pre-existing symbol to indicate the reason. The available symbols are listed below and can be found in the [`DeprecateDisable` module](https://github.com/Homebrew/brew/blob/master/Library/Homebrew/deprecate_disable.rb): * `:does_not_build`: the formula cannot be built from source * `:no_license`: the formula does not have a license * `:repo_archived`: the upstream repository has been archived * `:repo_removed`: the upstream repository has been removed * `:unmaintained`: the project appears to be abandoned * `:unsupported`: Homebrew’s application of the software is not supported by the upstream developers (e.g. upstream only supports macOS versions prior to 10.14) * `:deprecated_upstream`: the project is deprecated upstream * `:versioned_formula`: the formula is a versioned formula These reasons can be specified by their symbols (the comments show the message that will be displayed to users): ``` # Warning: <formula> has been deprecated because it is deprecated upstream! deprecate! date: "2020-01-01", because: :deprecated_upstream ``` ``` # Error: <formula> has been disabled because it does not build! disable! date: "2020-01-01", because: :does_not_build ``` If these pre-existing reasons do not fit, a custom reason can be specified. These reasons should be written to fit into the sentence `<formula> has been deprecated/disabled because it <reason>!`. A well-worded example of a custom reason would be: ``` # Warning: <formula> has been deprecated because it fetches unversioned dependencies at runtime! deprecate! date: "2020-01-01", because: "fetches unversioned dependencies at runtime" ``` A poorly-worded example of a custom reason would be: ``` # Error: <formula> has been disabled because it invalid license! disable! date: "2020-01-01", because: "invalid license" ``` redux API Reference API Reference ============= The Redux API surface is tiny. Redux defines a set of contracts for you to implement (such as [reducers](https://redux.js.org/understanding/thinking-in-redux/glossary#reducer)) and provides a few helper functions to tie these contracts together. This section documents the complete Redux API. Keep in mind that Redux is only concerned with managing the state. In a real app, you'll also want to use UI bindings like [react-redux](https://github.com/gaearon/react-redux). 
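For orientation, here is a minimal sketch tying these exports together before the per-function pages below (the `counter` reducer and its `'INCREMENT'` action type are illustrative, not part of the Redux API):
```
import { createStore } from 'redux'

// A reducer is a pure function: (state, action) => nextState.
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT':
      return state + 1
    default:
      return state
  }
}

const store = createStore(counter)
store.dispatch({ type: 'INCREMENT' })
console.log(store.getState()) // 1
```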
### Top-Level Exports * [createStore(reducer, [preloadedState], [enhancer])](createstore) * [combineReducers(reducers)](combinereducers) * [applyMiddleware(...middlewares)](applymiddleware) * [bindActionCreators(actionCreators, dispatch)](bindactioncreators) * [compose(...functions)](compose) ### Store API * [Store](store) + [getState()](store#getState) + [dispatch(action)](store#dispatchaction) + [subscribe(listener)](store#subscribelistener) + [replaceReducer(nextReducer)](store#replacereducernextreducer) ### Importing Every function described above is a top-level export. You can import any of them like this: #### ES6 ``` import { createStore } from 'redux' ``` #### ES5 (CommonJS) ``` var createStore = require('redux').createStore ``` #### ES5 (UMD build) ``` var createStore = Redux.createStore ``` redux combineReducers(reducers) `combineReducers(reducers)` =========================== As your app grows more complex, you'll want to split your [reducing function](https://redux.js.org/understanding/thinking-in-redux/glossary#reducer) into separate functions, each managing independent parts of the [state](https://redux.js.org/understanding/thinking-in-redux/glossary#state). The `combineReducers` helper function turns an object whose values are different reducing functions into a single reducing function you can pass to [`createStore`](createstore). The resulting reducer calls every child reducer, and gathers their results into a single state object. **The state produced by `combineReducers()` namespaces the states of each reducer under their keys as passed to `combineReducers()`** Example: ``` rootReducer = combineReducers({potato: potatoReducer, tomato: tomatoReducer}) // This would produce the following state object { potato: { // ... potatoes, and other state managed by the potatoReducer ... }, tomato: { // ... tomatoes, and other state managed by the tomatoReducer, maybe some nice sauce? ... } } ``` You can control state key names by using different keys for the reducers in the passed object. For example, you may call `combineReducers({ todos: myTodosReducer, counter: myCounterReducer })` for the state shape to be `{ todos, counter }`. A popular convention is to name reducers after the state slices they manage, so you can use ES6 property shorthand notation: `combineReducers({ counter, todos })`. This is equivalent to writing `combineReducers({ counter: counter, todos: todos })`. > > ##### A Note for Flux Users > > > This function helps you organize your reducers to manage their own slices of state, similar to how you would have different Flux Stores to manage different state. With Redux, there is just one store, but `combineReducers` helps you keep the same logical division between reducers. > > > #### Arguments 1. `reducers` (*Object*): An object whose values correspond to different reducing functions that need to be combined into one. See the notes below for some rules every passed reducer must follow. > Earlier documentation suggested the use of the ES6 `import * as reducers` syntax to obtain the reducers object. This was the source of a lot of confusion, which is why we now recommend exporting a single reducer obtained using `combineReducers()` from `reducers/index.js` instead. An example is included below. > > #### Returns (*Function*): A reducer that invokes every reducer inside the `reducers` object, and constructs a state object with the same shape. #### Notes This function is mildly opinionated and is skewed towards helping beginners avoid common pitfalls. 
This is why it attempts to enforce some rules that you don't have to follow if you write the root reducer manually. Any reducer passed to `combineReducers` must satisfy these rules: * For any action that is not recognized, it must return the `state` given to it as the first argument. * It must never return `undefined`. It is too easy to do this by mistake via an early `return` statement, so `combineReducers` throws if you do that instead of letting the error manifest itself somewhere else. * If the `state` given to it is `undefined`, it must return the initial state for this specific reducer. According to the previous rule, the initial state must not be `undefined` either. It is handy to specify it with ES6 optional arguments syntax, but you can also explicitly check the first argument for being `undefined`. While `combineReducers` attempts to check that your reducers conform to some of these rules, you should remember them, and do your best to follow them. `combineReducers` will check your reducers by passing `undefined` to them; this is done even if you specify initial state to `Redux.createStore(combineReducers(...), initialState)`. Therefore, you **must** ensure your reducers work properly when receiving `undefined` as state, even if you never intend for them to actually receive `undefined` in your own code. #### Example #### `reducers/todos.js` ``` export default function todos(state = [], action) { switch (action.type) { case 'ADD_TODO': return state.concat([action.text]) default: return state } } ``` #### `reducers/counter.js` ``` export default function counter(state = 0, action) { switch (action.type) { case 'INCREMENT': return state + 1 case 'DECREMENT': return state - 1 default: return state } } ``` #### `reducers/index.js` ``` import { combineReducers } from 'redux' import todos from './todos' import counter from './counter' export default combineReducers({ todos, counter }) ``` #### `App.js` ``` import { createStore } from 'redux' import reducer from './reducers/index' const store = createStore(reducer) console.log(store.getState()) // { // counter: 0, // todos: [] // } store.dispatch({ type: 'ADD_TODO', text: 'Use Redux' }) console.log(store.getState()) // { // counter: 0, // todos: [ 'Use Redux' ] // } ``` #### Tips * This helper is just a convenience! You can write your own `combineReducers` that [works differently](https://github.com/acdlite/reduce-reducers), or even assemble the state object from the child reducers manually and write a root reducing function explicitly, like you would write any other function. * You may call `combineReducers` at any level of the reducer hierarchy. It doesn't have to happen at the top. In fact you may use it again to split the child reducers that get too complicated into independent grandchildren, and so on. redux bindActionCreators(actionCreators, dispatch) `bindActionCreators(actionCreators, dispatch)` ============================================== Turns an object whose values are [action creators](https://redux.js.org/understanding/thinking-in-redux/glossary#action-creator), into an object with the same keys, but with every action creator wrapped into a [`dispatch`](store#dispatchaction) call so they may be invoked directly. Normally you should just call [`dispatch`](store#dispatchaction) directly on your [`Store`](store) instance. If you use Redux with React, [react-redux](https://github.com/gaearon/react-redux) will provide you with the [`dispatch`](store#dispatchaction) function so you can call it directly, too. 
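Before the full example below, a quick sketch of the transformation `bindActionCreators` performs may help (the `addTodo` action creator and `todos` reducer here are hypothetical, chosen just for illustration):
```
import { createStore, bindActionCreators } from 'redux'

const addTodo = text => ({ type: 'ADD_TODO', text })
const todos = (state = [], action) =>
  action.type === 'ADD_TODO' ? state.concat(action.text) : state

const store = createStore(todos)

// Every value in the returned object dispatches as soon as it is called:
const bound = bindActionCreators({ addTodo }, store.dispatch)
bound.addTodo('Read the docs')
// equivalent to: store.dispatch(addTodo('Read the docs'))
console.log(store.getState()) // [ 'Read the docs' ]
```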
The only use case for `bindActionCreators` is when you want to pass some action creators down to a component that isn't aware of Redux, and you don't want to pass [`dispatch`](store#dispatchaction) or the Redux store to it. For convenience, you can also pass an action creator as the first argument, and get a dispatch wrapped function in return. #### Parameters 1. `actionCreators` (*Function* or *Object*): An [action creator](https://redux.js.org/understanding/thinking-in-redux/glossary#action-creator), or an object whose values are action creators. 2. `dispatch` (*Function*): A [`dispatch`](store#dispatchaction) function available on the [`Store`](store) instance. #### Returns (*Function* or *Object*): An object mimicking the original object, but with each function immediately dispatching the action returned by the corresponding action creator. If you passed a function as `actionCreators`, the return value will also be a single function. #### Example #### `TodoActionCreators.js` ``` export function addTodo(text) { return { type: 'ADD_TODO', text } } export function removeTodo(id) { return { type: 'REMOVE_TODO', id } } ``` #### `SomeComponent.js` ``` import { Component } from 'react' import { bindActionCreators } from 'redux' import { connect } from 'react-redux' import * as TodoActionCreators from './TodoActionCreators' console.log(TodoActionCreators) // { // addTodo: Function, // removeTodo: Function // } class TodoListContainer extends Component { constructor(props) { super(props) const { dispatch } = props // Here's a good use case for bindActionCreators: // You want a child component to be completely unaware of Redux. // We create bound versions of these functions now so we can // pass them down to our child later. this.boundActionCreators = bindActionCreators(TodoActionCreators, dispatch) console.log(this.boundActionCreators) // { // addTodo: Function, // removeTodo: Function // } } componentDidMount() { // Injected by react-redux: let { dispatch } = this.props // Note: this won't work: // TodoActionCreators.addTodo('Use Redux') // You're just calling a function that creates an action. // You must dispatch the action, too! // This will work: let action = TodoActionCreators.addTodo('Use Redux') dispatch(action) } render() { // Injected by react-redux: let { todos } = this.props return <TodoList todos={todos} {...this.boundActionCreators} /> // An alternative to bindActionCreators is to pass // just the dispatch function down, but then your child component // needs to import action creators and know about them. // return <TodoList todos={todos} dispatch={dispatch} /> } } export default connect(state => ({ todos: state.todos }))(TodoListContainer) ``` #### Tips * You might ask: why don't we bind the action creators to the store instance right away, like in classical Flux? The problem is that this won't work well with universal apps that need to render on the server. Most likely you want to have a separate store instance per request so you can prepare them with different data, but binding action creators during their definition means you're stuck with a single store instance for all requests. * If you use ES5, instead of `import * as` syntax you can just pass `require('./TodoActionCreators')` to `bindActionCreators` as the first argument. The only thing it cares about is that the values of the `actionCreators` properties are functions. The module system doesn't matter. redux compose(...functions) `compose(...functions)` ======================= Composes functions from right to left. 
This is a functional programming utility, and is included in Redux as a convenience. You might want to use it to apply several [store enhancers](https://redux.js.org/understanding/thinking-in-redux/glossary#store-enhancer) in a row.
#### Arguments
1. (*arguments*): The functions to compose. Each function is expected to accept a single parameter. Its return value will be provided as an argument to the function standing to the left, and so on. The exception is the right-most argument which can accept multiple parameters, as it will provide the signature for the resulting composed function.
#### Returns
(*Function*): The final function obtained by composing the given functions from right to left.
#### Example
This example demonstrates how to use `compose` to enhance a store with [`applyMiddleware`](applymiddleware) and a few developer tools from the [redux-devtools](https://github.com/reduxjs/redux-devtools) package.
```
import { createStore, applyMiddleware, compose } from 'redux'
import thunk from 'redux-thunk'
import DevTools from './containers/DevTools'
import reducer from '../reducers'

const store = createStore(
  reducer,
  compose(applyMiddleware(thunk), DevTools.instrument())
)
```
#### Tips
* All `compose` does is let you write deeply nested function transformations without the rightward drift of the code. Don't give it too much credit!

redux applyMiddleware(...middleware)

`applyMiddleware(...middleware)`
================================
Middleware is the suggested way to extend Redux with custom functionality. Middleware lets you wrap the store's [`dispatch`](store#dispatchaction) method for fun and profit. The key feature of middleware is that it is composable. Multiple middleware can be combined together, where each middleware requires no knowledge of what comes before or after it in the chain.
The most common use case for middleware is to support asynchronous actions without much boilerplate code or a dependency on a library like [Rx](https://github.com/Reactive-Extensions/RxJS). It does so by letting you dispatch [async actions](https://redux.js.org/understanding/thinking-in-redux/glossary#async-action) in addition to normal actions.
For example, [redux-thunk](https://github.com/reduxjs/redux-thunk) lets the action creators invert control by dispatching functions. They would receive [`dispatch`](store#dispatchaction) as an argument and may call it asynchronously. Such functions are called *thunks*. Another example of middleware is [redux-promise](https://github.com/acdlite/redux-promise). It lets you dispatch a [Promise](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Promise) async action, and dispatches a normal action when the Promise resolves.
Middleware is not baked into [`createStore`](createstore) and is not a fundamental part of the Redux architecture, but we consider it useful enough to be supported right in the core. This way, there is a single standard way to extend [`dispatch`](store#dispatchaction) in the ecosystem, and different middleware may compete in expressiveness and utility.
#### Arguments
* `...middleware` (*arguments*): Functions that conform to the Redux *middleware API*. Each middleware receives [`Store`](store)'s [`dispatch`](store#dispatchaction) and [`getState`](store#getState) functions as named arguments, and returns a function.
That function will be given the `next` middleware's dispatch method, and is expected to return a function of `action` calling `next(action)` with a potentially different argument, or at a different time, or maybe not calling it at all. The last middleware in the chain will receive the real store's [`dispatch`](store#dispatchaction) method as the `next` parameter, thus ending the chain. So, the middleware signature is `({ getState, dispatch }) => next => action`. #### Returns (*Function*) A store enhancer that applies the given middleware. The store enhancer signature is `createStore => createStore` but the easiest way to apply it is to pass it to [`createStore()`](createstore) as the last `enhancer` argument. #### Example: Custom Logger Middleware ``` import { createStore, applyMiddleware } from 'redux' import todos from './reducers' function logger({ getState }) { return next => action => { console.log('will dispatch', action) // Call the next dispatch method in the middleware chain. const returnValue = next(action) console.log('state after dispatch', getState()) // This will likely be the action itself, unless // a middleware further in chain changed it. return returnValue } } const store = createStore(todos, ['Use Redux'], applyMiddleware(logger)) store.dispatch({ type: 'ADD_TODO', text: 'Understand the middleware' }) // (These lines will be logged by the middleware:) // will dispatch: { type: 'ADD_TODO', text: 'Understand the middleware' } // state after dispatch: [ 'Use Redux', 'Understand the middleware' ] ``` #### Example: Using Thunk Middleware for Async Actions ``` import { createStore, combineReducers, applyMiddleware } from 'redux' import thunk from 'redux-thunk' import * as reducers from './reducers' const reducer = combineReducers(reducers) // applyMiddleware supercharges createStore with middleware: const store = createStore(reducer, applyMiddleware(thunk)) function fetchSecretSauce() { return fetch('https://www.google.com/search?q=secret+sauce') } // These are the normal action creators you have seen so far. // The actions they return can be dispatched without any middleware. // However, they only express “facts” and not the “async flow”. function makeASandwich(forPerson, secretSauce) { return { type: 'MAKE_SANDWICH', forPerson, secretSauce } } function apologize(fromPerson, toPerson, error) { return { type: 'APOLOGIZE', fromPerson, toPerson, error } } function withdrawMoney(amount) { return { type: 'WITHDRAW', amount } } // Even without middleware, you can dispatch an action: store.dispatch(withdrawMoney(100)) // But what do you do when you need to start an asynchronous action, // such as an API call, or a router transition? // Meet thunks. // A thunk is a function that returns a function. // This is a thunk. function makeASandwichWithSecretSauce(forPerson) { // Invert control! // Return a function that accepts `dispatch` so we can dispatch later. // Thunk middleware knows how to turn thunk async actions into actions. return function (dispatch) { return fetchSecretSauce().then( sauce => dispatch(makeASandwich(forPerson, sauce)), error => dispatch(apologize('The Sandwich Shop', forPerson, error)) ) } } // Thunk middleware lets me dispatch thunk async actions // as if they were actions! store.dispatch(makeASandwichWithSecretSauce('Me')) // It even takes care to return the thunk's return value // from the dispatch, so I can chain Promises as long as I return them. 
store.dispatch(makeASandwichWithSecretSauce('My wife')).then(() => { console.log('Done!') }) // In fact I can write action creators that dispatch // actions and async actions from other action creators, // and I can build my control flow with Promises. function makeSandwichesForEverybody() { return function (dispatch, getState) { if (!getState().sandwiches.isShopOpen) { // You don't have to return Promises, but it's a handy convention // so the caller can always call .then() on async dispatch result. return Promise.resolve() } // We can dispatch both plain object actions and other thunks, // which lets us compose the asynchronous actions in a single flow. return dispatch(makeASandwichWithSecretSauce('My Grandma')) .then(() => Promise.all([ dispatch(makeASandwichWithSecretSauce('Me')), dispatch(makeASandwichWithSecretSauce('My wife')) ]) ) .then(() => dispatch(makeASandwichWithSecretSauce('Our kids'))) .then(() => dispatch( getState().myMoney > 42 ? withdrawMoney(42) : apologize('Me', 'The Sandwich Shop') ) ) } } // This is very useful for server side rendering, because I can wait // until data is available, then synchronously render the app. import { renderToString } from 'react-dom/server' store .dispatch(makeSandwichesForEverybody()) .then(() => response.send(renderToString(<MyApp store={store} />))) // I can also dispatch a thunk async action from a component // any time its props change to load the missing data. import { connect } from 'react-redux' import { Component } from 'react' class SandwichShop extends Component { componentDidMount() { this.props.dispatch(makeASandwichWithSecretSauce(this.props.forPerson)) } componentDidUpdate(prevProps) { if (prevProps.forPerson !== this.props.forPerson) { this.props.dispatch(makeASandwichWithSecretSauce(this.props.forPerson)) } } render() { return <p>{this.props.sandwiches.join('mustard')}</p> } } export default connect(state => ({ sandwiches: state.sandwiches }))(SandwichShop) ``` #### Tips * Middleware only wraps the store's [`dispatch`](store#dispatchaction) function. Technically, anything a middleware can do, you can do manually by wrapping every `dispatch` call, but it's easier to manage this in a single place and define action transformations on the scale of the whole project. * If you use other store enhancers in addition to `applyMiddleware`, make sure to put `applyMiddleware` before them in the composition chain because the middleware is potentially asynchronous. For example, it should go before [redux-devtools](https://github.com/reduxjs/redux-devtools) because otherwise the DevTools won't see the raw actions emitted by the Promise middleware and such. * If you want to conditionally apply a middleware, make sure to only import it when it's needed: ``` let middleware = [a, b] if (process.env.NODE_ENV !== 'production') { const c = require('some-debug-middleware') const d = require('another-debug-middleware') middleware = [...middleware, c, d] } const store = createStore( reducer, preloadedState, applyMiddleware(...middleware) ) ``` This makes it easier for bundling tools to cut out unneeded modules and reduces the size of your builds. * Ever wondered what `applyMiddleware` itself is? It ought to be an extension mechanism more powerful than the middleware itself. Indeed, `applyMiddleware` is an example of the most powerful Redux extension mechanism called [store enhancers](https://redux.js.org/understanding/thinking-in-redux/glossary#store-enhancer). It is highly unlikely you'll ever want to write a store enhancer yourself. 
Another example of a store enhancer is [redux-devtools](https://github.com/reduxjs/redux-devtools). Middleware is less powerful than a store enhancer, but it is easier to write (a minimal enhancer sketch follows after these tips).
* Middleware sounds much more complicated than it really is. The only way to really understand middleware is to see how the existing middleware works, and try to write your own. The function nesting can be intimidating, but most of the middleware you'll find are, in fact, 10-liners, and the nesting and composability is what makes the middleware system powerful.
* To apply multiple store enhancers, you may use [`compose()`](compose).
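For the curious, the enhancer signature mentioned above is easy to see in code. A minimal sketch (the `announceCreation` name is made up for illustration and is not a Redux API):
```
// A store enhancer wraps createStore itself: createStore => createStore.
const announceCreation = createStore => (reducer, preloadedState, enhancer) => {
  console.log('creating a store')
  return createStore(reducer, preloadedState, enhancer)
}

// Usage (sketch): pass it wherever an enhancer is expected.
// const store = createStore(reducer, announceCreation)
```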
redux Store Store ===== A store holds the whole [state tree](https://redux.js.org/understanding/thinking-in-redux/glossary#state) of your application. The only way to change the state inside it is to dispatch an [action](https://redux.js.org/understanding/thinking-in-redux/glossary#action) on it. A store is not a class. It's just an object with a few methods on it. To create it, pass your root [reducing function](https://redux.js.org/understanding/thinking-in-redux/glossary#reducer) to [`createStore`](createstore). > > ##### A Note for Flux Users > > > If you're coming from Flux, there is a single important difference you need to understand. Redux doesn't have a Dispatcher or support many stores. **Instead, there is just a single store with a single root [reducing function](https://redux.js.org/understanding/thinking-in-redux/glossary#reducer).** As your app grows, instead of adding stores, you split the root reducer into smaller reducers independently operating on the different parts of the state tree. You can use a helper like [`combineReducers`](combinereducers) to combine them. This is similar to how there is just one root component in a React app, but it is composed out of many small components. > > > ### Store Methods * [`getState()`](#getstate) * [`dispatch(action)`](#dispatchaction) * [`subscribe(listener)`](#subscribelistener) * [`replaceReducer(nextReducer)`](#replacereducernextreducer) Store Methods ------------- ### getState() Returns the current state tree of your application. It is equal to the last value returned by the store's reducer. #### Returns *(any)*: The current state tree of your application. --- ### dispatch(action) Dispatches an action. This is the only way to trigger a state change. The store's reducing function will be called with the current [`getState()`](#getState) result and the given `action` synchronously. Its return value will be considered the next state. It will be returned from [`getState()`](#getState) from now on, and the change listeners will immediately be notified. > > ##### A Note for Flux Users > > > If you attempt to call `dispatch` from inside the [reducer](https://redux.js.org/understanding/thinking-in-redux/glossary#reducer), it will throw with an error saying “Reducers may not dispatch actions.” This is similar to “Cannot dispatch in a middle of dispatch” error in Flux, but doesn't cause the problems associated with it. In Flux, a dispatch is forbidden while Stores are handling the action and emitting updates. This is unfortunate because it makes it impossible to dispatch actions from component lifecycle hooks or other benign places. > > > In Redux, subscriptions are called after the root reducer has returned the new state, so you *may* dispatch in the subscription listeners. You are only disallowed to dispatch inside the reducers because they must have no side effects. If you want to cause a side effect in response to an action, the right place to do this is in the potentially async [action creator](https://redux.js.org/understanding/thinking-in-redux/glossary#action-creator). > > > #### Arguments 1. `action` (*Object*†): A plain object describing the change that makes sense for your application. Actions are the only way to get data into the store, so any data, whether from the UI events, network callbacks, or other sources such as WebSockets needs to eventually be dispatched as actions. Actions must have a `type` field that indicates the type of action being performed. Types can be defined as constants and imported from another module. 
It's better to use strings for `type` than [Symbols](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Symbol) because strings are serializable.
Other than `type`, the structure of an action object is really up to you. If you're interested, check out [Flux Standard Action](https://github.com/acdlite/flux-standard-action) for recommendations on how actions could be constructed.
#### Returns
(Object†): The dispatched action (see notes).
#### Notes
† The “vanilla” store implementation you get by calling [`createStore`](/api/createstore) only supports plain object actions and hands them immediately to the reducer. However, if you wrap [`createStore`](createstore) with [`applyMiddleware`](applymiddleware), the middleware can interpret actions differently, and provide support for dispatching [async actions](https://redux.js.org/understanding/thinking-in-redux/glossary#async-action). Async actions are usually asynchronous primitives like Promises, Observables, or thunks.
Middleware is created by the community and does not ship with Redux by default. You need to explicitly install packages like [redux-thunk](https://github.com/reduxjs/redux-thunk) or [redux-promise](https://github.com/acdlite/redux-promise) to use it. You may also create your own middleware.
To learn how to describe asynchronous API calls, read the current state inside action creators, perform side effects, or chain them to execute in a sequence, see the examples for [`applyMiddleware`](applymiddleware).
#### Example
```
import { createStore } from 'redux'
const store = createStore(todos, ['Use Redux'])

function addTodo(text) {
  return {
    type: 'ADD_TODO',
    text
  }
}

store.dispatch(addTodo('Read the docs'))
store.dispatch(addTodo('Read about the middleware'))
```
---
### subscribe(listener)
Adds a change listener. It will be called any time an action is dispatched, and some part of the state tree may potentially have changed. You may then call [`getState()`](#getState) to read the current state tree inside the callback.
You may call [`dispatch()`](#dispatchaction) from a change listener, with the following caveats:
1. The listener should only call [`dispatch()`](#dispatchaction) either in response to user actions or under specific conditions (e.g. dispatching an action when the store has a specific field). Calling [`dispatch()`](#dispatchaction) without any conditions is technically possible; however, it leads to an infinite loop as every [`dispatch()`](#dispatchaction) call usually triggers the listener again.
2. The subscriptions are snapshotted just before every [`dispatch()`](#dispatchaction) call. If you subscribe or unsubscribe while the listeners are being invoked, this will not have any effect on the [`dispatch()`](#dispatchaction) that is currently in progress. However, the next [`dispatch()`](#dispatchaction) call, whether nested or not, will use a more recent snapshot of the subscription list.
3. The listener should not expect to see all state changes, as the state might have been updated multiple times during a nested [`dispatch()`](#dispatchaction) before the listener is called. It is, however, guaranteed that all subscribers registered before the [`dispatch()`](#dispatchaction) started will be called with the latest state by the time it exits.
It is a low-level API. Most likely, instead of using it directly, you'll use React (or other) bindings.
If you commonly use the callback as a hook to react to state changes, you might want to [write a custom `observeStore` utility](https://github.com/reduxjs/redux/issues/303#issuecomment-125184409). The `Store` is also an [`Observable`](https://github.com/zenparsing/es-observable), so you can `subscribe` to changes with libraries like [RxJS](https://github.com/ReactiveX/RxJS).
To unsubscribe the change listener, invoke the function returned by `subscribe`.
#### Arguments
1. `listener` (*Function*): The callback to be invoked any time an action has been dispatched, and the state tree might have changed. You may call [`getState()`](#getState) inside this callback to read the current state tree. It is reasonable to expect that the store's reducer is a pure function, so you may compare references to some deep path in the state tree to learn whether its value has changed.
##### Returns
(*Function*): A function that unsubscribes the change listener.
##### Example
```
function select(state) {
  return state.some.deep.property
}

let currentValue
function handleChange() {
  let previousValue = currentValue
  currentValue = select(store.getState())

  if (previousValue !== currentValue) {
    console.log(
      'Some deep nested property changed from',
      previousValue,
      'to',
      currentValue
    )
  }
}

const unsubscribe = store.subscribe(handleChange)
unsubscribe()
```
---
### replaceReducer(nextReducer)
Replaces the reducer currently used by the store to calculate the state.
It is an advanced API. You might need this if your app implements code splitting, and you want to load some of the reducers dynamically. You might also need this if you implement a hot reloading mechanism for Redux.
#### Arguments
1. `nextReducer` (*Function*): The next reducer for the store to use.

redux createStore(reducer, [preloadedState], [enhancer])

`createStore(reducer, [preloadedState], [enhancer])`
====================================================
Creates a Redux store that holds the complete state tree of your app. There should only be a single store in your app.
#### Arguments
1. `reducer` *(Function)*: A [reducing function](https://redux.js.org/understanding/thinking-in-redux/glossary#reducer) that returns the next [state tree](https://redux.js.org/understanding/thinking-in-redux/glossary#state), given the current state tree and an [action](https://redux.js.org/understanding/thinking-in-redux/glossary#action) to handle.
2. [`preloadedState`] *(any)*: The initial state. You may optionally specify it to hydrate the state from the server in universal apps, or to restore a previously serialized user session. If you produced `reducer` with [`combineReducers`](combinereducers), this must be a plain object with the same shape as the keys passed to it. Otherwise, you are free to pass anything that your `reducer` can understand.
3. [`enhancer`] *(Function)*: The store enhancer. You may optionally specify it to enhance the store with third-party capabilities such as middleware, time travel, persistence, etc. The only store enhancer that ships with Redux is [`applyMiddleware()`](applymiddleware).
#### Returns
([*`Store`*](store)): An object that holds the complete state of your app. The only way to change its state is by [dispatching actions](store#dispatchaction). You may also [subscribe](store#subscribelistener) to the changes to its state to update the UI.
#### Example
```
import { createStore } from 'redux'

function todos(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      return state.concat([action.text])
    default:
      return state
  }
}

const store = createStore(todos, ['Use Redux'])

store.dispatch({
  type: 'ADD_TODO',
  text: 'Read the docs'
})

console.log(store.getState())
// [ 'Use Redux', 'Read the docs' ]
```
#### Tips
* Don't create more than one store in an application! Instead, use [`combineReducers`](combinereducers) to create a single root reducer out of many.
* Redux state is normally plain JS objects and arrays.
* If your state is a plain object, make sure you never mutate it! Immutable updates require making copies of each level of data, typically using the object spread operator ( `return { ...state, ...newData }` ).
* For universal apps that run on the server, create a store instance with every request so that they are isolated. Dispatch a few data fetching actions to a store instance and wait for them to complete before rendering the app on the server.
* When a store is created, Redux dispatches a dummy action to your reducer to populate the store with the initial state. You are not meant to handle the dummy action directly. Just remember that your reducer should return some kind of initial state if the state given to it as the first argument is `undefined`, and you're all set.
* To apply multiple store enhancers, you may use [`compose()`](compose).

pytorch torch.nn.quantized.dynamic

torch.nn.quantized.dynamic
==========================

Linear
------
`class torch.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/dynamic/modules/linear.html#Linear)
A dynamic quantized linear module with floating point tensors as inputs and outputs. We adopt the same interface as `torch.nn.Linear`, please see <https://pytorch.org/docs/stable/nn.html#torch.nn.Linear> for documentation.
Similar to [`torch.nn.Linear`](generated/torch.nn.linear#torch.nn.Linear "torch.nn.Linear"), attributes will be randomly initialized at module creation time and will be overwritten later.
Variables
* **~Linear.weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the non-learnable quantized weights of the module, of shape `(out_features, in_features)`.
* **~Linear.bias** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the non-learnable floating point bias of the module, of shape `(out_features)`. If `bias` is `True`, the values are initialized to zero.
Examples:
```
>>> m = nn.quantized.dynamic.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
```
`classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/dynamic/modules/linear.html#Linear.from_float)
Create a dynamic quantized module from a float module or `qparams_dict`
Parameters
**mod** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – a float module, either produced by torch.quantization utilities or provided by the user

LSTM
----
`class torch.nn.quantized.dynamic.LSTM(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/dynamic/modules/rnn.html#LSTM)
A dynamic quantized LSTM module with floating point tensors as inputs and outputs.
We adopt the same interface as `torch.nn.LSTM`, please see <https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM> for documentation.
Examples:
```
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
```

LSTMCell
--------
`class torch.nn.quantized.dynamic.LSTMCell(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/dynamic/modules/rnn.html#LSTMCell)
A long short-term memory (LSTM) cell.
A dynamic quantized LSTMCell module with floating point tensors as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as `torch.nn.LSTMCell`, please see <https://pytorch.org/docs/stable/nn.html#torch.nn.LSTMCell> for documentation.
Examples:
```
>>> rnn = nn.LSTMCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
        hx, cx = rnn(input[i], (hx, cx))
        output.append(hx)
```

GRUCell
-------
`class torch.nn.quantized.dynamic.GRUCell(input_size, hidden_size, bias=True, dtype=torch.qint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/dynamic/modules/rnn.html#GRUCell)
A gated recurrent unit (GRU) cell.
A dynamic quantized GRUCell module with floating point tensors as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as `torch.nn.GRUCell`, please see <https://pytorch.org/docs/stable/nn.html#torch.nn.GRUCell> for documentation.
Examples:
```
>>> rnn = nn.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
        hx = rnn(input[i], hx)
        output.append(hx)
```

RNNCell
-------
`class torch.nn.quantized.dynamic.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', dtype=torch.qint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/dynamic/modules/rnn.html#RNNCell)
An Elman RNN cell with tanh or ReLU non-linearity.
A dynamic quantized RNNCell module with floating point tensors as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as `torch.nn.RNNCell`, please see <https://pytorch.org/docs/stable/nn.html#torch.nn.RNNCell> for documentation.
Examples:
```
>>> rnn = nn.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
        hx = rnn(input[i], hx)
        output.append(hx)
```

pytorch Tensor Attributes

Tensor Attributes
=================
Each `torch.Tensor` has a [`torch.dtype`](#torch.torch.dtype "torch.torch.dtype"), [`torch.device`](#torch.torch.device "torch.torch.device"), and [`torch.layout`](#torch.torch.layout "torch.torch.layout").

torch.dtype
-----------
`class torch.dtype`
A [`torch.dtype`](#torch.torch.dtype "torch.torch.dtype") is an object that represents the data type of a [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor").
PyTorch has twelve different data types:
| Data type | dtype | Legacy Constructors |
| --- | --- | --- |
| 32-bit floating point | `torch.float32` or `torch.float` | `torch.*.FloatTensor` |
| 64-bit floating point | `torch.float64` or `torch.double` | `torch.*.DoubleTensor` |
| 64-bit complex | `torch.complex64` or `torch.cfloat` | |
| 128-bit complex | `torch.complex128` or `torch.cdouble` | |
| 16-bit floating point [1](#id3) | `torch.float16` or `torch.half` | `torch.*.HalfTensor` |
| 16-bit floating point [2](#id4) | `torch.bfloat16` | `torch.*.BFloat16Tensor` |
| 8-bit integer (unsigned) | `torch.uint8` | `torch.*.ByteTensor` |
| 8-bit integer (signed) | `torch.int8` | `torch.*.CharTensor` |
| 16-bit integer (signed) | `torch.int16` or `torch.short` | `torch.*.ShortTensor` |
| 32-bit integer (signed) | `torch.int32` or `torch.int` | `torch.*.IntTensor` |
| 64-bit integer (signed) | `torch.int64` or `torch.long` | `torch.*.LongTensor` |
| Boolean | `torch.bool` | `torch.*.BoolTensor` |
`1` Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important.
`2` Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as `float32`.
To find out if a [`torch.dtype`](#torch.torch.dtype "torch.torch.dtype") is a floating point data type, the property [`is_floating_point`](generated/torch.is_floating_point#torch.is_floating_point "torch.is_floating_point") can be used, which returns `True` if the data type is a floating point data type.
To find out if a [`torch.dtype`](#torch.torch.dtype "torch.torch.dtype") is a complex data type, the property [`is_complex`](generated/torch.is_complex#torch.is_complex "torch.is_complex") can be used, which returns `True` if the data type is a complex data type.
When the dtypes of inputs to an arithmetic operation (`add`, `sub`, `div`, `mul`) differ, we promote by finding the minimum dtype that satisfies the following rules:
* If the type of a scalar operand is of a higher category than tensor operands (where complex > floating > integral > boolean), we promote to a type with sufficient size to hold all scalar operands of that category.
* If a zero-dimension tensor operand has a higher category than dimensioned operands, we promote to a type with sufficient size and category to hold all zero-dim tensor operands of that category.
* If there are no higher-category zero-dim operands, we promote to a type with sufficient size and category to hold all dimensioned operands.
A floating point scalar operand has dtype `torch.get_default_dtype()` and an integral non-boolean scalar operand has dtype `torch.int64`. Unlike numpy, we do not inspect values when determining the minimum `dtypes` of an operand. Quantized and complex types are not yet supported.
Promotion Examples:

```
>>> float_tensor = torch.ones(1, dtype=torch.float)
>>> double_tensor = torch.ones(1, dtype=torch.double)
>>> complex_float_tensor = torch.ones(1, dtype=torch.complex64)
>>> complex_double_tensor = torch.ones(1, dtype=torch.complex128)
>>> int_tensor = torch.ones(1, dtype=torch.int)
>>> long_tensor = torch.ones(1, dtype=torch.long)
>>> uint_tensor = torch.ones(1, dtype=torch.uint8)
>>> bool_tensor = torch.ones(1, dtype=torch.bool)
# zero-dim tensors
>>> long_zerodim = torch.tensor(1, dtype=torch.long)
>>> int_zerodim = torch.tensor(1, dtype=torch.int)

>>> torch.add(5, 5).dtype
torch.int64
# 5 is an int64, but does not have higher category than int_tensor so is not considered.
>>> (int_tensor + 5).dtype
torch.int32
>>> (int_tensor + long_zerodim).dtype
torch.int32
>>> (long_tensor + int_tensor).dtype
torch.int64
>>> (bool_tensor + long_tensor).dtype
torch.int64
>>> (bool_tensor + uint_tensor).dtype
torch.uint8
>>> (float_tensor + double_tensor).dtype
torch.float64
>>> (complex_float_tensor + complex_double_tensor).dtype
torch.complex128
>>> (bool_tensor + int_tensor).dtype
torch.int32
# Since long is a different kind than float, result dtype only needs to be large enough
# to hold the float.
>>> torch.add(long_tensor, float_tensor).dtype
torch.float32
```

When the output tensor of an arithmetic operation is specified, we allow casting to its dtype except that:

* An integral output tensor cannot accept a floating point tensor.
* A boolean output tensor cannot accept a non-boolean tensor.
* A non-complex output tensor cannot accept a complex tensor.

Casting Examples:

```
# allowed:
>>> float_tensor *= double_tensor
>>> float_tensor *= int_tensor
>>> float_tensor *= uint_tensor
>>> float_tensor *= bool_tensor
>>> int_tensor *= long_tensor
>>> int_tensor *= uint_tensor
>>> uint_tensor *= int_tensor

# disallowed (RuntimeError: result type can't be cast to the desired output type):
>>> int_tensor *= float_tensor
>>> bool_tensor *= int_tensor
>>> bool_tensor *= uint_tensor
>>> float_tensor *= complex_float_tensor
```

torch.device
------------

`class torch.device`

A [`torch.device`](#torch.torch.device "torch.torch.device") is an object representing the device on which a [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") is or will be allocated.

The [`torch.device`](#torch.torch.device "torch.torch.device") contains a device type (`'cpu'` or `'cuda'`) and an optional device ordinal for the device type. If the device ordinal is not present, this object will always represent the current device for the device type, even after [`torch.cuda.set_device()`](cuda#torch.cuda.set_device "torch.cuda.set_device") is called; e.g., a [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") constructed with device `'cuda'` is equivalent to `'cuda:X'` where X is the result of [`torch.cuda.current_device()`](cuda#torch.cuda.current_device "torch.cuda.current_device").

A [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor")'s device can be accessed via the [`Tensor.device`](tensors#torch.Tensor.device "torch.Tensor.device") property.
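For example, the device of a freshly constructed tensor can be inspected directly (a minimal sketch; the second call assumes a CUDA build with an available GPU):

```
>>> t = torch.zeros(3)
>>> t.device
device(type='cpu')
>>> t.to('cuda:0').device  # assumes a CUDA device is available
device(type='cuda', index=0)
```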
A [`torch.device`](#torch.torch.device "torch.torch.device") can be constructed via a string, or via a string and device ordinal.

Via a string:

```
>>> torch.device('cuda:0')
device(type='cuda', index=0)

>>> torch.device('cpu')
device(type='cpu')

>>> torch.device('cuda')  # current cuda device
device(type='cuda')
```

Via a string and device ordinal:

```
>>> torch.device('cuda', 0)
device(type='cuda', index=0)

>>> torch.device('cpu', 0)
device(type='cpu', index=0)
```

Note

The [`torch.device`](#torch.torch.device "torch.torch.device") argument in functions can generally be substituted with a string. This allows for fast prototyping of code.

```
>>> # Example of a function that takes in a torch.device
>>> cuda1 = torch.device('cuda:1')
>>> torch.randn((2,3), device=cuda1)
```

```
>>> # You can substitute the torch.device with a string
>>> torch.randn((2,3), device='cuda:1')
```

Note

For legacy reasons, a device can be constructed via a single device ordinal, which is treated as a cuda device. This matches [`Tensor.get_device()`](tensors#torch.Tensor.get_device "torch.Tensor.get_device"), which returns an ordinal for cuda tensors and is not supported for cpu tensors.

```
>>> torch.device(1)
device(type='cuda', index=1)
```

Note

Methods which take a device will generally accept a (properly formatted) string or (legacy) integer device ordinal, i.e. the following are all equivalent:

```
>>> torch.randn((2,3), device=torch.device('cuda:1'))
>>> torch.randn((2,3), device='cuda:1')
>>> torch.randn((2,3), device=1)  # legacy
```

torch.layout
------------

`class torch.layout`

Warning

The `torch.layout` class is in beta and subject to change.

A [`torch.layout`](#torch.torch.layout "torch.torch.layout") is an object that represents the memory layout of a [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor"). Currently, we support `torch.strided` (dense Tensors) and have beta support for `torch.sparse_coo` (sparse COO Tensors).

`torch.strided` represents dense Tensors and is the memory layout that is most commonly used. Each strided tensor has an associated `torch.Storage`, which holds its data. These tensors provide a multi-dimensional, [strided](https://en.wikipedia.org/wiki/Stride_of_an_array) view of a storage. Strides are a list of integers: the k-th stride represents the jump in the memory necessary to go from one element to the next one in the k-th dimension of the Tensor. This concept makes it possible to perform many tensor operations efficiently.

Example:

```
>>> x = torch.Tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
>>> x.stride()
(5, 1)

>>> x.t().stride()
(1, 5)
```

For more information on `torch.sparse_coo` tensors, see [torch.sparse](sparse#sparse-docs).

torch.memory\_format
--------------------

`class torch.memory_format`

A [`torch.memory_format`](#torch.torch.memory_format "torch.torch.memory_format") is an object representing the memory format on which a [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") is or will be allocated.

Possible values are:

* `torch.contiguous_format`: Tensor is or will be allocated in dense non-overlapping memory. Strides represented by values in decreasing order.
* `torch.channels_last`: Tensor is or will be allocated in dense non-overlapping memory. Strides represented by values in `strides[0] > strides[2] > strides[3] > strides[1] == 1` aka NHWC order.
* `torch.preserve_format`: Used in functions like `clone` to preserve the memory format of the input tensor.
If the input tensor is allocated in dense non-overlapping memory, the output tensor strides will be copied from the input. Otherwise, the output strides will follow `torch.contiguous_format`.
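As an illustration of these memory formats, converting an NCHW tensor to `torch.channels_last` reorders only the strides, not the logical shape (a minimal sketch; the stride values shown assume the shape used here):

```
>>> x = torch.randn(1, 3, 32, 32)                 # NCHW, contiguous_format
>>> x.stride()
(3072, 1024, 32, 1)
>>> y = x.to(memory_format=torch.channels_last)   # same shape, NHWC strides
>>> y.shape
torch.Size([1, 3, 32, 32])
>>> y.stride()                                    # strides[0] > strides[2] > strides[3] > strides[1] == 1
(3072, 1, 96, 3)
```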
pytorch Pipeline Parallelism Pipeline Parallelism
====================

Pipeline parallelism was originally introduced in the [GPipe](https://arxiv.org/abs/1811.06965) paper and is an efficient technique to train large models on multiple GPUs.

Warning

Pipeline Parallelism is experimental and subject to change.

Model Parallelism using multiple GPUs
-------------------------------------

Typically, for large models which don't fit on a single GPU, model parallelism is employed, where certain parts of the model are placed on different GPUs. However, if this is done naively for sequential models, the training process suffers from GPU under-utilization, since only one GPU is active at a time, as shown in the figure below:

The figure represents a model with 4 layers placed on 4 different GPUs (vertical axis). The horizontal axis represents training this model through time, demonstrating that only 1 GPU is utilized at a time ([image source](https://arxiv.org/abs/1811.06965)).

Pipelined Execution
-------------------

To alleviate this problem, pipeline parallelism splits the input minibatch into multiple microbatches and pipelines the execution of these microbatches across multiple GPUs. This is outlined in the figure below:

The figure represents a model with 4 layers placed on 4 different GPUs (vertical axis). The horizontal axis represents training this model through time, demonstrating that the GPUs are utilized much more efficiently. However, there still exists a bubble (as demonstrated in the figure) where certain GPUs are not utilized ([image source](https://arxiv.org/abs/1811.06965)).

Pipe APIs in PyTorch
--------------------

`class torch.distributed.pipeline.sync.Pipe(module, chunks=1, checkpoint='except_last', deferred_batch_norm=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/pipeline/sync/pipe.html#Pipe)

Wraps an arbitrary [`nn.Sequential`](generated/torch.nn.sequential#torch.nn.Sequential "torch.nn.Sequential") module so that it can be trained using synchronous pipeline parallelism. If the module requires lots of memory and doesn't fit on a single GPU, pipeline parallelism is a useful technique to employ for training.

The implementation is based on the [torchgpipe](https://arxiv.org/abs/2004.09910) paper.

Pipe combines pipeline parallelism with checkpointing to reduce the peak memory required to train, while minimizing device under-utilization.

You should place all the modules on the appropriate devices and wrap them into an [`nn.Sequential`](generated/torch.nn.sequential#torch.nn.Sequential "torch.nn.Sequential") module defining the desired order of execution.

Parameters

* **module** ([`nn.Sequential`](generated/torch.nn.sequential#torch.nn.Sequential "torch.nn.Sequential")) – sequential module to be parallelized using pipelining. Each module in the sequence has to have all of its parameters on a single device. Each module in the sequence has to either be an nn.Module or [`nn.Sequential`](generated/torch.nn.sequential#torch.nn.Sequential "torch.nn.Sequential") (to combine multiple sequential modules on a single device).
* **chunks** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of micro-batches (default: `1`)
* **checkpoint** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – when to enable checkpointing, one of `'always'`, `'except_last'`, or `'never'` (default: `'except_last'`).
`'never'` disables checkpointing completely, `'except_last'` enables checkpointing for all micro-batches except the last one, and `'always'` enables checkpointing for all micro-batches.

* **deferred\_batch\_norm** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether to use deferred `BatchNorm` moving statistics (default: [`False`](https://docs.python.org/3/library/constants.html#False "(in Python v3.9)")). If set to [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.9)"), we track statistics across multiple micro-batches to update the running statistics per mini-batch.

Raises

* [**TypeError**](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.9)") – the module is not a [`nn.Sequential`](generated/torch.nn.sequential#torch.nn.Sequential "torch.nn.Sequential").
* [**ValueError**](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.9)") – invalid arguments

Example:

Pipeline of two FC layers across GPUs 0 and 1.

```
>>> fc1 = nn.Linear(16, 8).cuda(0)
>>> fc2 = nn.Linear(8, 4).cuda(1)
>>> model = nn.Sequential(fc1, fc2)
>>> model = Pipe(model, chunks=8)
>>> input = torch.rand(16, 16).cuda(0)
>>> output_rref = model(input)
```

Note

You can wrap a [`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe") model with [`torch.nn.parallel.DistributedDataParallel`](generated/torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") only when the checkpoint parameter of [`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe") is `'never'`.

Note

[`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe") only supports intra-node pipelining currently, but will be expanded to support inter-node pipelining in the future. The forward function returns an [`RRef`](rpc#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef") to allow for inter-node pipelining in the future, where the output might be on a remote host. For intra-node pipelining you can use [`local_value()`](rpc#torch.distributed.rpc.RRef.local_value "torch.distributed.rpc.RRef.local_value") to retrieve the output locally.

Warning

[`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe") is experimental and subject to change.

`forward(input)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/pipeline/sync/pipe.html#Pipe.forward)

Processes a single input mini-batch through the pipe and returns an [`RRef`](rpc#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef") pointing to the output. [`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe") is a fairly transparent module wrapper. It doesn't modify the input and output signature of the underlying module. But there is a type restriction: input and output have to be a [`Tensor`](tensors#torch.Tensor "torch.Tensor") or a sequence of tensors. This restriction is applied at partition boundaries too.

The input tensor is split into multiple micro-batches based on the `chunks` parameter used to initialize [`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe"). The batch size is assumed to be the first dimension of the tensor, and if the batch size is less than `chunks`, the number of micro-batches is equal to the batch size.
Parameters

**input** (torch.Tensor or sequence of [`Tensor`](tensors#torch.Tensor "torch.Tensor")) – input mini-batch

Returns

[`RRef`](rpc#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef") to the output of the mini-batch

Raises

[**TypeError**](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.9)") – input is not a tensor or sequence of tensors.

### Skip connections

Certain models like ResNeXt are not completely sequential and have skip connections between layers. Implementing these naively as part of pipeline parallelism would imply that we need to copy outputs for certain layers through multiple GPUs until we eventually reach the GPU where the layer for the skip connection resides. To avoid this copy overhead, we provide the APIs below to stash and pop Tensors in different layers of the model.

`torch.distributed.pipeline.sync.skip.skippable.skippable(stash=(), pop=())` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/pipeline/sync/skip/skippable.html#skippable)

The decorator to define a [`nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") with skip connections. Decorated modules are called "skippable". This functionality works perfectly fine even when the module is not wrapped by [`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe").

Each skip tensor is managed by its name. Before manipulating skip tensors, a skippable module must statically declare the names for skip tensors via the `stash` and/or `pop` parameters. Skip tensors with a pre-declared name can be stashed by `yield stash(name, tensor)` or popped by `tensor = yield pop(name)`.

Here is an example with three layers. A skip tensor named "1to3" is stashed and popped at the first and last layer, respectively:

```
@skippable(stash=['1to3'])
class Layer1(nn.Module):
    def forward(self, input):
        yield stash('1to3', input)
        return f1(input)

class Layer2(nn.Module):
    def forward(self, input):
        return f2(input)

@skippable(pop=['1to3'])
class Layer3(nn.Module):
    def forward(self, input):
        skip_1to3 = yield pop('1to3')
        return f3(input) + skip_1to3

model = nn.Sequential(Layer1(), Layer2(), Layer3())
```

One skippable module can stash or pop multiple skip tensors:

```
@skippable(stash=['alice', 'bob'], pop=['carol'])
class StashStashPop(nn.Module):
    def forward(self, input):
        yield stash('alice', f_alice(input))
        yield stash('bob', f_bob(input))
        carol = yield pop('carol')
        return input + carol
```

Every skip tensor must be associated with exactly one pair of `stash` and `pop`. [`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe") checks this restriction automatically when wrapping a module. You can also check the restriction by [`verify_skippables()`](#torch.distributed.pipeline.sync.skip.skippable.verify_skippables "torch.distributed.pipeline.sync.skip.skippable.verify_skippables") without [`Pipe`](#torch.distributed.pipeline.sync.Pipe "torch.distributed.pipeline.sync.Pipe").

`class torch.distributed.pipeline.sync.skip.skippable.stash(name, tensor)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/pipeline/sync/skip/skippable.html#stash)

The command to stash a skip tensor.
```
def forward(self, input):
    yield stash('name', input)
    return f(input)
```

Parameters

* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – name of skip tensor
* **input** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor") *or* [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.9)")) – tensor to pass to the skip connection

`class torch.distributed.pipeline.sync.skip.skippable.pop(name)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/pipeline/sync/skip/skippable.html#pop)

The command to pop a skip tensor.

```
def forward(self, input):
    skip = yield pop('name')
    return f(input) + skip
```

Parameters

**name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – name of skip tensor

Returns

the skip tensor previously stashed by another layer under the same name

`torch.distributed.pipeline.sync.skip.skippable.verify_skippables(module)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/pipeline/sync/skip/skippable.html#verify_skippables)

Verifies if the underlying skippable modules satisfy integrity. Every skip tensor must have only one pair of `stash` and `pop`. If there are one or more unmatched pairs, it will raise [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.9)") with detailed messages.

Here are a few failure cases. [`verify_skippables()`](#torch.distributed.pipeline.sync.skip.skippable.verify_skippables "torch.distributed.pipeline.sync.skip.skippable.verify_skippables") will report failure for these cases:

```
# Layer1 stashes "1to3".
# Layer3 pops "1to3".

nn.Sequential(Layer1(), Layer2())
#               └──── ?

nn.Sequential(Layer2(), Layer3())
#    ? ────┘

nn.Sequential(Layer1(), Layer2(), Layer3(), Layer3())
#               └───────────────────┘       ^^^^^^

nn.Sequential(Layer1(), Layer1(), Layer2(), Layer3())
#    ^^^^^^     └───────────────────┘
```

To use the same name for multiple skip tensors, they must be isolated by different namespaces. See `isolate()`.

Raises

[**TypeError**](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.9)") – one or more pairs of `stash` and `pop` are not matched.

Acknowledgements
----------------

The implementation for pipeline parallelism is based on [fairscale's pipe implementation](https://github.com/facebookresearch/fairscale/tree/master/fairscale/nn/pipe) and [torchgpipe](https://github.com/kakaobrain/torchgpipe). We would like to thank both teams for their contributions and guidance towards bringing pipeline parallelism into PyTorch.

pytorch torch.utils.dlpack torch.utils.dlpack
==================

`torch.utils.dlpack.from_dlpack(dlpack) → Tensor`

Decodes a DLPack to a tensor.

Parameters

**dlpack** – a PyCapsule object with the dltensor

The tensor will share the memory with the object represented in the dlpack. Note that each dlpack can only be consumed once.

`torch.utils.dlpack.to_dlpack(tensor) → PyCapsule`

Returns a DLPack representing the tensor.

Parameters

**tensor** – a tensor to be exported

The dlpack shares the tensor's memory. Note that each dlpack can only be consumed once.

pytorch DDP Communication Hooks DDP Communication Hooks
=======================

The DDP communication hook is a generic interface to control how gradients are communicated across workers, by overriding the vanilla allreduce in [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel).
A few built-in communication hooks are provided, and users can easily apply any of these hooks to optimize communication. In addition, the hook interface can also support user-defined communication strategies for more advanced use cases.

Warning

The DDP communication hook is experimental and subject to change.

Warning

DDP communication hooks can only support single-process single-device mode on the NCCL backend.

How to Use a Communication Hook?
--------------------------------

To use a communication hook, the user just needs to let the DDP model register the hook before the training loop, as below:

`torch.nn.parallel.DistributedDataParallel.register_comm_hook()`

Default Communication Hooks
---------------------------

Default communication hooks are simple **stateless** hooks, so the input state in `register_comm_hook` is either a process group or `None`.

`torch.distributed.algorithms.ddp_comm_hooks.default_hooks.allreduce_hook(process_group, bucket)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.html#allreduce_hook)

This DDP communication hook just calls `allreduce` using `GradBucket` tensors. Once gradient tensors are aggregated across all workers, its `then` callback takes the mean and returns the result. If the user registers this hook, the DDP results are expected to be the same as the case where no hook was registered. Hence, this won't change the behavior of DDP, and the user can use it as a reference, or modify it to log useful information or for other purposes, without affecting DDP behavior.

Example:

```
>>> ddp_model.register_comm_hook(process_group, allreduce_hook)
```

`torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook(process_group, bucket)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.html#fp16_compress_hook)

This DDP communication hook implements a simple gradient compression approach that converts `GradBucket` tensors whose type is assumed to be `torch.float32` to half-precision floating point format (`torch.float16`). It allreduces those `float16` gradient tensors. Once the compressed gradient tensors are allreduced, its `then` callback, called `decompress`, converts the aggregated result back to `float32` and takes the mean.

Example:

```
>>> ddp_model.register_comm_hook(process_group, fp16_compress_hook)
```

PowerSGD Communication Hook
---------------------------

PowerSGD ([Vogels et al., NeurIPS 2019](https://arxiv.org/abs/1905.13727)) is a gradient compression algorithm which can provide very high compression rates and accelerate bandwidth-bound distributed training. This algorithm needs to maintain both some hyperparameters and internal state. Therefore, the PowerSGD communication hook is a **stateful** hook, and the user needs to provide a state object defined as below.

### PowerSGD State

`class torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState(process_group, matrix_approximation_rank=1, start_powerSGD_iter=10, use_error_feedback=True, warm_start=True, random_seed=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.html#PowerSGDState)

Stores both the algorithm's hyperparameters and the internal state for all the gradients during the training. Particularly, `matrix_approximation_rank` and `start_powerSGD_iter` are the main hyperparameters that should be tuned by the user.
For performance, we suggest keeping the binary hyperparameters `use_error_feedback` and `warm_start` on.

1. `matrix_approximation_rank` controls the size of compressed low-rank tensors, which determines the compression rate. The lower the rank, the stronger the compression.

1.1. If `matrix_approximation_rank` is too low, the model may need more training steps to reach full quality, or may never reach it, resulting in a loss of accuracy.

1.2. Increasing `matrix_approximation_rank` can substantially increase the computation costs of the compression, and the accuracy may not be further improved beyond a certain `matrix_approximation_rank` threshold.

To tune `matrix_approximation_rank`, we suggest starting from 1 and increasing by factors of 2 (like an exponential grid search: 1, 2, 4, ...) until a satisfactory accuracy is reached. Typically only a small value 1-4 is used. For some NLP tasks (as shown in Appendix D of the original paper), this value has been increased to 32.

2. `start_powerSGD_iter` defers PowerSGD compression until step `start_powerSGD_iter`, and vanilla allreduce runs prior to step `start_powerSGD_iter`. This hybrid scheme of **vanilla allreduce + PowerSGD** can effectively improve the accuracy, even when a relatively small `matrix_approximation_rank` is used. This is because the beginning of the training phase is usually very sensitive to inaccurate gradients, and compressing gradients too early may make the training quickly take a suboptimal trajectory, which can result in an irrecoverable impact on the accuracy.

To tune `start_powerSGD_iter`, we suggest starting with 10% of the total training steps and increasing it until a satisfactory accuracy is reached.

Warning

If error feedback or warm-up is enabled, the minimum value of `start_powerSGD_iter` allowed in DDP is 2. This is because there is another internal optimization that rebuilds buckets at iteration 1 in DDP, and this can conflict with any tensor memorized before the rebuild process.

### PowerSGD Hooks

Warning

PowerSGD typically requires extra memory of the same size as the model's gradients to enable error feedback, which can compensate for biased compressed communication and improve accuracy.

Warning

The current implementation may cause gradient overflow for FP16 input.

`torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook(state, bucket)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.html#powerSGD_hook)

This DDP communication hook implements the PowerSGD gradient compression algorithm described in the [paper](https://arxiv.org/abs/1905.13727). Once gradient tensors are aggregated across all workers, this hook applies compression as follows:

1. Views the input flattened 1D gradient tensor as two groups of per-parameter tensors: high-rank tensors and vector-like rank-1 tensors (for biases).

2. Handles rank-1 tensors by allreducing them without compression:

2.1. Allocates contiguous memory for those rank-1 tensors, and allreduces all the rank-1 tensors as a batch, without compression;

2.2. Copies the individual rank-1 tensors from the contiguous memory back to the input tensor.

3. Handles high-rank tensors by PowerSGD compression:

3.1. For each high-rank tensor M, creates two low-rank tensors P and Q for decomposing M, such that M = PQ^T, where Q is initialized from a standard normal distribution and orthogonalized;

3.2. Computes each P in Ps, which is equal to MQ;

3.3. Allreduces Ps as a batch;

3.4. Orthogonalizes each P in Ps;
3.5. Computes each Q in Qs, which is approximately equal to M^TP;

3.6. Allreduces Qs as a batch;

3.7. Computes each M among all the high-rank tensors, which is approximately equal to PQ^T.

Note that this communication hook enforces vanilla allreduce for the first `state.start_powerSGD_iter` iterations. This not only gives the user more control over the tradeoff between speedup and accuracy, but also helps abstract away some complexity of the internal optimization of DDP for future communication hook developers.

Parameters

* **state** ([PowerSGDState](#torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState "torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState")) – State information to configure the compression rate and support error feedback, warm start, etc. To tune the compression configs, one mainly needs to tune `matrix_approximation_rank` and `start_powerSGD_iter`.
* **bucket** (*dist.\_GradBucket*) – Bucket that stores a 1D flattened gradient tensor that batches multiple per-variable tensors. Note that since the DDP comm hook only supports single-process single-device mode at this time, only exactly one tensor is stored in this bucket.

Returns

Future handler of the communication, which updates the gradients in place.

Example:

```
>>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)
>>> ddp_model.register_comm_hook(state, powerSGD_hook)
```

`torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.batched_powerSGD_hook(state, bucket)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.html#batched_powerSGD_hook)

This DDP communication hook implements a simplified PowerSGD gradient compression algorithm described in the [paper](https://arxiv.org/abs/1905.13727). This variant does not compress the gradients layer by layer, but instead compresses the flattened input tensor that batches all the gradients. Therefore, it is **faster** than [`powerSGD_hook()`](#torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook "torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook"), but usually results in a **much lower accuracy**, unless `matrix_approximation_rank` is 1.

Warning

Increasing `matrix_approximation_rank` here may not necessarily increase the accuracy, because batching per-parameter tensors without column/row alignment can destroy low-rank structure. Therefore, the user should always consider [`powerSGD_hook()`](#torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook "torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook") first, and only consider this variant when a satisfactory accuracy can be achieved when `matrix_approximation_rank` is 1.

Once gradient tensors are aggregated across all workers, this hook applies compression as follows:

1. Views the input flattened 1D gradient tensor as a square-shaped tensor M with 0 paddings;
2. Creates two low-rank tensors P and Q for decomposing M, such that M = PQ^T, where Q is initialized from a standard normal distribution and orthogonalized;
3. Computes P, which is equal to MQ;
4. Allreduces P;
5. Orthogonalizes P;
6. Computes Q, which is approximately equal to M^TP;
7. Allreduces Q;
8. Computes M, which is approximately equal to PQ^T;
9. Truncates the input tensor to the original length.

Note that this communication hook enforces vanilla allreduce for the first `state.start_powerSGD_iter` iterations.
This not only gives the user more control over the tradeoff between speedup and accuracy, but also helps abstract away some complexity of the internal optimization of DDP for future communication hook developers.

Parameters

* **state** ([PowerSGDState](#torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState "torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState")) – State information to configure the compression rate and support error feedback, warm start, etc. To tune the compression configs, one mainly needs to tune `matrix_approximation_rank` and `start_powerSGD_iter`.
* **bucket** (*dist.\_GradBucket*) – Bucket that stores a 1D flattened gradient tensor that batches multiple per-variable tensors. Note that since the DDP comm hook only supports single-process single-device mode at this time, only exactly one tensor is stored in this bucket.

Returns

Future handler of the communication, which updates the gradients in place.

Example:

```
>>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1)
>>> ddp_model.register_comm_hook(state, batched_powerSGD_hook)
```

Acknowledgements
----------------

Many thanks to PowerSGD paper author **Thijs Vogels** for the code review on the PowerSGD communication hook, as well as the [comparison experiments](https://observablehq.com/@tvogels/powersgd-benchmark), which show that the performance of the PowerSGD communication hook is on par with the implementation in the original [paper](https://arxiv.org/abs/1905.13727).
pytorch torch.Storage torch.Storage
=============

A `torch.Storage` is a contiguous, one-dimensional array of a single data type. Every [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") has a corresponding storage of the same data type.

`class torch.FloatStorage(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch.html#FloatStorage)

`bfloat16()`

Casts this storage to bfloat16 type

`bool()`

Casts this storage to bool type

`byte()`

Casts this storage to byte type

`char()`

Casts this storage to char type

`clone()`

Returns a copy of this storage

`complex_double()`

Casts this storage to complex double type

`complex_float()`

Casts this storage to complex float type

`copy_()`

`cpu()`

Returns a CPU copy of this storage if it's not already on the CPU

`cuda(device=None, non_blocking=False, **kwargs)`

Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters

* **device** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The destination GPU id. Defaults to the current device.
* **non\_blocking** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True` and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
* **\*\*kwargs** – For compatibility, may contain the key `async` in place of the `non_blocking` argument.

`data_ptr()`

`device`

`double()`

Casts this storage to double type

`dtype`

`element_size()`

`fill_()`

`float()`

Casts this storage to float type

`static from_buffer()`

`static from_file(filename, shared=False, size=0) → Storage`

If `shared` is `True`, then memory is shared between all processes. All changes are written to the file. If `shared` is `False`, then the changes on the storage do not affect the file.

`size` is the number of elements in the storage. If `shared` is `False`, then the file must contain at least `size * sizeof(Type)` bytes (`Type` is the type of storage). If `shared` is `True`, the file will be created if needed.

Parameters

* **filename** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – file name to map
* **shared** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether to share memory
* **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of elements in the storage

`get_device()`

`half()`

Casts this storage to half type

`int()`

Casts this storage to int type

`is_cuda: bool = False`

`is_pinned()`

`is_shared()`

`is_sparse: bool = False`

`long()`

Casts this storage to long type

`new()`

`pin_memory()`

Copies the storage to pinned memory, if it's not already pinned.

`resize_()`

`share_memory_()`

Moves the storage to shared memory. This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

`short()`

Casts this storage to short type

`size()`

`tolist()`

Returns a list containing the elements of this storage

`type(dtype=None, non_blocking=False, **kwargs)`

Returns the type if `dtype` is not provided, else casts this object to the specified type. If this is already of the correct type, no copy is performed and the original object is returned.
Parameters

* **dtype** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.9)") *or* *string*) – The desired type
* **non\_blocking** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
* **\*\*kwargs** – For compatibility, may contain the key `async` in place of the `non_blocking` argument. The `async` arg is deprecated.

pytorch torch.nn.init torch.nn.init
=============

`torch.nn.init.calculate_gain(nonlinearity, param=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#calculate_gain)

Return the recommended gain value for the given nonlinearity function. The values are as follows:

| nonlinearity | gain |
| --- | --- |
| Linear / Identity | $1$ |
| Conv{1,2,3}D | $1$ |
| Sigmoid | $1$ |
| Tanh | $\frac{5}{3}$ |
| ReLU | $\sqrt{2}$ |
| Leaky ReLU | $\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}$ |
| SELU | $\frac{3}{4}$ |

Parameters

* **nonlinearity** – the non-linear function (`nn.functional` name)
* **param** – optional parameter for the non-linear function

#### Examples

```
>>> gain = nn.init.calculate_gain('leaky_relu', 0.2)  # leaky_relu with negative_slope=0.2
```

`torch.nn.init.uniform_(tensor, a=0.0, b=1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#uniform_)

Fills the input Tensor with values drawn from the uniform distribution $\mathcal{U}(a, b)$.

Parameters

* **tensor** – an n-dimensional `torch.Tensor`
* **a** – the lower bound of the uniform distribution
* **b** – the upper bound of the uniform distribution

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.uniform_(w)
```

`torch.nn.init.normal_(tensor, mean=0.0, std=1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#normal_)

Fills the input Tensor with values drawn from the normal distribution $\mathcal{N}(\text{mean}, \text{std}^2)$.

Parameters

* **tensor** – an n-dimensional `torch.Tensor`
* **mean** – the mean of the normal distribution
* **std** – the standard deviation of the normal distribution

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.normal_(w)
```

`torch.nn.init.constant_(tensor, val)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#constant_)

Fills the input Tensor with the value `val`.

Parameters

* **tensor** – an n-dimensional `torch.Tensor`
* **val** – the value to fill the tensor with

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.constant_(w, 0.3)
```

`torch.nn.init.ones_(tensor)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#ones_)

Fills the input Tensor with the scalar value `1`.

Parameters

**tensor** – an n-dimensional `torch.Tensor`

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.ones_(w)
```

`torch.nn.init.zeros_(tensor)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#zeros_)

Fills the input Tensor with the scalar value `0`.

Parameters

**tensor** – an n-dimensional `torch.Tensor`

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.zeros_(w)
```

`torch.nn.init.eye_(tensor)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#eye_)

Fills the 2-dimensional input `Tensor` with the identity matrix. Preserves the identity of the inputs in `Linear` layers, where as many inputs are preserved as possible.
Parameters

**tensor** – a 2-dimensional `torch.Tensor`

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.eye_(w)
```

`torch.nn.init.dirac_(tensor, groups=1)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#dirac_)

Fills the {3, 4, 5}-dimensional input `Tensor` with the Dirac delta function. Preserves the identity of the inputs in `Convolutional` layers, where as many input channels are preserved as possible. In case of groups > 1, each group of channels preserves identity.

Parameters

* **tensor** – a {3, 4, 5}-dimensional `torch.Tensor`
* **groups** (*optional*) – number of groups in the conv layer (default: 1)

#### Examples

```
>>> w = torch.empty(3, 16, 5, 5)
>>> nn.init.dirac_(w)
>>> w = torch.empty(3, 24, 5, 5)
>>> nn.init.dirac_(w, 3)
```

`torch.nn.init.xavier_uniform_(tensor, gain=1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#xavier_uniform_)

Fills the input `Tensor` with values according to the method described in `Understanding the difficulty of training deep feedforward neural networks` - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from $\mathcal{U}(-a, a)$ where

$$a = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}$$

Also known as Glorot initialization.

Parameters

* **tensor** – an n-dimensional `torch.Tensor`
* **gain** – an optional scaling factor

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
```

`torch.nn.init.xavier_normal_(tensor, gain=1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#xavier_normal_)

Fills the input `Tensor` with values according to the method described in `Understanding the difficulty of training deep feedforward neural networks` - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from $\mathcal{N}(0, \text{std}^2)$ where

$$\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}$$

Also known as Glorot initialization.

Parameters

* **tensor** – an n-dimensional `torch.Tensor`
* **gain** – an optional scaling factor

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.xavier_normal_(w)
```

`torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#kaiming_uniform_)

Fills the input `Tensor` with values according to the method described in `Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification` - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from $\mathcal{U}(-\text{bound}, \text{bound})$ where

$$\text{bound} = \text{gain} \times \sqrt{\frac{3}{\text{fan\_mode}}}$$

Also known as He initialization.

Parameters

* **tensor** – an n-dimensional `torch.Tensor`
* **a** – the negative slope of the rectifier used after this layer (only used with `'leaky_relu'`)
* **mode** – either `'fan_in'` (default) or `'fan_out'`. Choosing `'fan_in'` preserves the magnitude of the variance of the weights in the forward pass. Choosing `'fan_out'` preserves the magnitudes in the backwards pass.
* **nonlinearity** – the non-linear function (`nn.functional` name), recommended to use only with `'relu'` or `'leaky_relu'` (default).
#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
```

`torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#kaiming_normal_)

Fills the input `Tensor` with values according to the method described in `Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification` - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from $\mathcal{N}(0, \text{std}^2)$ where

$$\text{std} = \frac{\text{gain}}{\sqrt{\text{fan\_mode}}}$$

Also known as He initialization.

Parameters

* **tensor** – an n-dimensional `torch.Tensor`
* **a** – the negative slope of the rectifier used after this layer (only used with `'leaky_relu'`)
* **mode** – either `'fan_in'` (default) or `'fan_out'`. Choosing `'fan_in'` preserves the magnitude of the variance of the weights in the forward pass. Choosing `'fan_out'` preserves the magnitudes in the backwards pass.
* **nonlinearity** – the non-linear function (`nn.functional` name), recommended to use only with `'relu'` or `'leaky_relu'` (default).

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')
```

`torch.nn.init.orthogonal_(tensor, gain=1)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#orthogonal_)

Fills the input `Tensor` with a (semi) orthogonal matrix, as described in `Exact solutions to the nonlinear dynamics of learning in deep linear neural networks` - Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened.

Parameters

* **tensor** – an n-dimensional `torch.Tensor`, where $n \geq 2$
* **gain** – optional scaling factor

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.orthogonal_(w)
```

`torch.nn.init.sparse_(tensor, sparsity, std=0.01)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/init.html#sparse_)

Fills the 2D input `Tensor` as a sparse matrix, where the non-zero elements will be drawn from the normal distribution $\mathcal{N}(0, 0.01)$, as described in `Deep learning via Hessian-free optimization` - Martens, J. (2010).

Parameters

* **tensor** – an n-dimensional `torch.Tensor`
* **sparsity** – The fraction of elements in each column to be set to zero
* **std** – the standard deviation of the normal distribution used to generate the non-zero values

#### Examples

```
>>> w = torch.empty(3, 5)
>>> nn.init.sparse_(w, sparsity=0.1)
```

pytorch torch.quantization torch.quantization
==================

This module implements the functions you call directly to convert your model from FP32 to quantized form. For example, [`prepare()`](#torch.quantization.prepare "torch.quantization.prepare") is used in post-training quantization to prepare your model for the calibration step, and [`convert()`](#torch.quantization.convert "torch.quantization.convert") actually converts the weights to int8 and replaces the operations with their quantized counterparts. There are other helper functions for things like quantizing the input to your model and performing critical fusions like conv+relu.
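As a minimal end-to-end sketch of the dynamic quantization workflow documented below (the toy model here is an illustrative assumption, not from the original docs):

```
import torch

# Toy float model with a Linear layer (dynamic quantization targets
# layers with large weights, such as Linear and RNN variants).
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
)

# Replace Linear submodules with dynamic weight-only int8 versions.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Inputs and outputs remain float tensors; only the weights are quantized.
out = quantized_model(torch.randn(1, 128))
```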
Top-level quantization APIs
---------------------------

`torch.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#quantize)

Quantize the input float model with post training static quantization. First it prepares the model for calibration, then it calls `run_fn`, which runs the calibration step, and after that it converts the model to a quantized model.

Parameters

* **model** – input float model
* **run\_fn** – a calibration function for calibrating the prepared model
* **run\_args** – positional arguments for `run_fn`
* **inplace** – carry out model transformations in-place, the original module is mutated
* **mapping** – correspondence between original module types and quantized counterparts

Returns

Quantized model.

`torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8, mapping=None, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#quantize_dynamic)

Converts a float model to a dynamic (i.e. weights-only) quantized model. Replaces specified modules with dynamic weight-only quantized versions and outputs the quantized model.

For the simplest usage, provide the `dtype` argument, which can be float16 or qint8. By default, weight-only quantization is performed for layers with large weights, i.e. Linear and RNN variants.

Fine-grained control is possible with `qconfig` and `mapping`, which act similarly to `quantize()`. If `qconfig` is provided, the `dtype` argument is ignored.

Parameters

* **model** – input model
* **qconfig\_spec** – Either:

  + A dictionary that maps from the name or type of a submodule to a quantization configuration; qconfig applies to all submodules of a given module unless qconfig for the submodules is specified (when the submodule already has a qconfig attribute). Entries in the dictionary need to be QConfigDynamic instances.
  + A set of types and/or submodule names to apply dynamic quantization to, in which case the `dtype` argument is used to specify the bit-width

* **inplace** – carry out model transformations in-place, the original module is mutated
* **mapping** – maps the type of a submodule to the type of the corresponding dynamically quantized version with which the submodule needs to be replaced

`torch.quantization.quantize_qat(model, run_fn, run_args, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#quantize_qat)

Do quantization aware training and output a quantized model.

Parameters

* **model** – input model
* **run\_fn** – a function for evaluating the prepared model, can be a function that simply runs the prepared model or a training loop
* **run\_args** – positional arguments for `run_fn`

Returns

Quantized model.

`torch.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#prepare)

Prepares a copy of the model for quantization calibration or quantization-aware training.

Quantization configuration should be assigned preemptively to individual submodules in the `.qconfig` attribute.

The model will be attached with observer or fake quant modules, and qconfig will be propagated.
Parameters

* **model** – input model to be modified in-place
* **inplace** – carry out model transformations in-place, the original module is mutated
* **allow\_list** – list of quantizable modules
* **observer\_non\_leaf\_module\_list** – list of non-leaf modules we want to add observers to
* **prepare\_custom\_config\_dict** – customization configuration dictionary for the prepare function

```
# Example of prepare_custom_config_dict:
prepare_custom_config_dict = {
    # user will manually define the corresponding observed
    # module class which has a from_float class method that converts
    # float custom module to observed custom module
    "float_to_observed_custom_module_class": {
        CustomModule: ObservedCustomModule
    }
}
```

`torch.quantization.prepare_qat(model, mapping=None, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#prepare_qat)

Prepares a copy of the model for quantization calibration or quantization-aware training and converts it to a quantized version.

Quantization configuration should be assigned preemptively to individual submodules in the `.qconfig` attribute.

Parameters

* **model** – input model to be modified in-place
* **mapping** – dictionary that maps float modules to quantized modules to be replaced.
* **inplace** – carry out model transformations in-place, the original module is mutated

`torch.quantization.convert(module, mapping=None, inplace=False, remove_qconfig=True, convert_custom_config_dict=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#convert)

Converts submodules in the input module to a different module according to `mapping` by calling the `from_float` method on the target module class. It also removes `qconfig` at the end if `remove_qconfig` is set to `True`.

Parameters

* **module** – prepared and calibrated module
* **mapping** – a dictionary that maps from source module type to target module type, can be overwritten to allow swapping user defined Modules
* **inplace** – carry out model transformations in-place, the original module is mutated
* **convert\_custom\_config\_dict** – custom configuration dictionary for the convert function

```
# Example of convert_custom_config_dict:
convert_custom_config_dict = {
    # user will manually define the corresponding quantized
    # module class which has a from_observed class method that converts
    # observed custom module to quantized custom module
    "observed_to_quantized_custom_module_class": {
        ObservedCustomModule: QuantizedCustomModule
    }
}
```

`class torch.quantization.QConfig` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/qconfig.html#QConfig)

Describes how to quantize a layer or a part of the network by providing settings (observer classes) for activations and weights respectively.

Note that QConfig needs to contain observer **classes** (like MinMaxObserver) or a callable that returns instances on invocation, not the concrete observer instances themselves. The quantization preparation function will instantiate observers multiple times for each of the layers.
Observer classes usually have reasonable default arguments, but they can be overwritten with the `with_args` method (which behaves like functools.partial):

`my_qconfig = QConfig(activation=MinMaxObserver.with_args(dtype=torch.qint8), weight=default_observer.with_args(dtype=torch.qint8))`

`class torch.quantization.QConfigDynamic` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/qconfig.html#QConfigDynamic)

Describes how to dynamically quantize a layer or a part of the network by providing settings (observer classes) for weights. It's like QConfig, but for dynamic quantization.

Note that QConfigDynamic needs to contain observer **classes** (like MinMaxObserver) or a callable that returns instances on invocation, not the concrete observer instances themselves. The quantization function will instantiate observers multiple times for each of the layers.

Observer classes usually have reasonable default arguments, but they can be overwritten with the `with_args` method (which behaves like functools.partial):

`my_qconfig = QConfigDynamic(weight=default_observer.with_args(dtype=torch.qint8))`

Preparing model for quantization
--------------------------------

`torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=<function fuse_known_modules>, fuse_custom_config_dict=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/fuse_modules.html#fuse_modules)

Fuses a list of modules into a single module.

Fuses only the following sequences of modules: conv, bn; conv, bn, relu; conv, relu; linear, relu; bn, relu. All other sequences are left unchanged. For these sequences, replaces the first item in the list with the fused module, replacing the rest of the modules with identity.

Parameters

* **model** – Model containing the modules to be fused
* **modules\_to\_fuse** – list of lists of module names to fuse. Can also be a list of strings if there is only a single list of modules to fuse.
* **inplace** – bool specifying if fusion happens in place on the model, by default a new model is returned
* **fuser\_func** – Function that takes in a list of modules and outputs a list of fused modules of the same length. For example, fuser\_func([convModule, BNModule]) returns the list [ConvBNModule, nn.Identity()]. Defaults to torch.quantization.fuse\_known\_modules
* **fuse\_custom\_config\_dict** – custom configuration for fusion

```
# Example of fuse_custom_config_dict
fuse_custom_config_dict = {
    # Additional fuser_method mapping
    "additional_fuser_method_mapping": {
        (torch.nn.Conv2d, torch.nn.BatchNorm2d): fuse_conv_bn
    },
}
```

Returns

model with fused modules. A new copy is created if `inplace=False`.

Examples:

```
>>> m = myModel()
>>> # m is a module containing the sub-modules below
>>> modules_to_fuse = [ ['conv1', 'bn1', 'relu1'], ['submodule.conv', 'submodule.relu']]
>>> fused_m = torch.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)

>>> m = myModel()
>>> # Alternately provide a single list of modules to fuse
>>> modules_to_fuse = ['conv1', 'bn1', 'relu1']
>>> fused_m = torch.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)
```

`class torch.quantization.QuantStub(qconfig=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/stubs.html#QuantStub)

Quantize stub module. Before calibration, this is the same as an observer; it will be swapped as `nnq.Quantize` in `convert`.
Parameters

**qconfig** – quantization configuration for the tensor; if qconfig is not provided, we will get qconfig from parent modules

`class torch.quantization.DeQuantStub` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/stubs.html#DeQuantStub)

Dequantize stub module. Before calibration, this is the same as identity; it will be swapped as `nnq.DeQuantize` in `convert`.

`class torch.quantization.QuantWrapper(module)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/stubs.html#QuantWrapper)

A wrapper class that wraps the input module, adds QuantStub and DeQuantStub, and surrounds the call to the module with calls to the quant and dequant modules.

This is used by the `quantization` utility functions to add the quant and dequant modules. Before the `convert` function, `QuantStub` will just be an observer: it observes the input tensor. After `convert`, `QuantStub` will be swapped to `nnq.Quantize`, which does the actual quantization. Similarly for `DeQuantStub`.

`torch.quantization.add_quant_dequant(module)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#add_quant_dequant)

Wraps the leaf child module in QuantWrapper if it has a valid qconfig. Note that this function will modify the children of the module in place, and it can return a new module which wraps the input module as well.

Parameters

**module** – input module with qconfig attributes for all the leaf modules that we want to quantize

Returns

Either the in-place modified module with submodules wrapped in `QuantWrapper` based on qconfig, or a new `QuantWrapper` module which wraps the input module; the latter case only happens when the input module is a leaf module and we want to quantize it.

Utility functions
-----------------

`torch.quantization.add_observer_(module, qconfig_propagation_list=None, non_leaf_module_list=None, device=None, custom_module_class_mapping=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#add_observer_)

Adds an observer for the leaf children of the module. This function inserts an observer module for every leaf child module that has a valid qconfig attribute.

Parameters

* **module** – input module with qconfig attributes for all the leaf modules that we want to quantize
* **device** – parent device, if any
* **non\_leaf\_module\_list** – list of non-leaf modules we want to add observers to

Returns

None, module is modified in place with added observer modules and forward\_hooks

`torch.quantization.swap_module(mod, mapping, custom_module_class_mapping)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#swap_module)

Swaps the module if it has a quantized counterpart and it has an `observer` attached.
Parameters * **mod** – input module * **mapping** – a dictionary that maps from an nn module to an nnq module Returns The corresponding quantized module of `mod` `torch.quantization.propagate_qconfig_(module, qconfig_dict=None, allow_list=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#propagate_qconfig_) Propagate qconfig through the module hierarchy and assign the `qconfig` attribute to each leaf module Parameters * **module** – input module * **qconfig\_dict** – dictionary that maps from the name or type of a submodule to a quantization configuration; a qconfig applies to all submodules of a given module unless a qconfig for the submodules is specified (i.e. when the submodule already has a qconfig attribute) Returns None; the module is modified in place with qconfig attached `torch.quantization.default_eval_fn(model, calib_data)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization.html#default_eval_fn) Default evaluation function: takes a torch.utils.data.Dataset or a list of input Tensors and runs the model on the dataset Observers --------- `class torch.quantization.ObserverBase(dtype)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/observer.html#ObserverBase) Base observer Module. Any observer implementation should derive from this class. Concrete observers should follow the same API. In forward, they will update the statistics of the observed Tensor, and they should provide a `calculate_qparams` function that computes the quantization parameters given the collected statistics. Parameters **dtype** – Quantized data type `classmethod with_args(**kwargs)` Wrapper that allows creation of class factories. This can be useful when there is a need to create classes with the same constructor arguments, but different instances. Example:

```
>>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)
>>> foo_instance1 = foo_builder()
>>> foo_instance2 = foo_builder()
>>> id(foo_instance1) == id(foo_instance2)
False
```

`class torch.quantization.MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/observer.html#MinMaxObserver) Observer module for computing the quantization parameters based on the running min and max values. This observer uses the tensor min/max statistics to compute the quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters * **dtype** – Quantized data type * **qscheme** – Quantization scheme to be used * **reduce\_range** – Reduces the range of the quantized data type by 1 bit * **quant\_min** – Minimum quantization value. If unspecified, it will follow the 8-bit setup. * **quant\_max** – Maximum quantization value. If unspecified, it will follow the 8-bit setup.
Given the running min/max as \(x_\text{min}\) and \(x_\text{max}\), the scale \(s\) and zero point \(z\) are computed as follows. The running minimum/maximum \(x_\text{min/max}\) is computed as:

\[\begin{array}{ll} x_\text{min} &= \begin{cases} \min(X) & \text{if } x_\text{min} = \text{None} \\ \min\left(x_\text{min}, \min(X)\right) & \text{otherwise} \end{cases}\\ x_\text{max} &= \begin{cases} \max(X) & \text{if } x_\text{max} = \text{None} \\ \max\left(x_\text{max}, \max(X)\right) & \text{otherwise} \end{cases} \end{array}\]

where \(X\) is the observed tensor. The scale \(s\) and zero point \(z\) are then computed as:

\[\begin{aligned} \text{if Symmetric:}&\\ &s = 2 \max(|x_\text{min}|, x_\text{max}) / \left(Q_\text{max} - Q_\text{min}\right) \\ &z = \begin{cases} 0 & \text{if dtype is qint8} \\ 128 & \text{otherwise} \end{cases}\\ \text{Otherwise:}&\\ &s = \left(x_\text{max} - x_\text{min}\right) / \left(Q_\text{max} - Q_\text{min}\right) \\ &z = Q_\text{min} - \text{round}(x_\text{min} / s) \end{aligned}\]

where \(Q_\text{min}\) and \(Q_\text{max}\) are the minimum and maximum of the quantized data type. Warning Only works with `torch.per_tensor_symmetric` quantization scheme. Warning `dtype` can only take `torch.qint8` or `torch.quint8`. Note If the running minimum equals the running maximum, the scale and zero\_point are set to 1.0 and 0. `class torch.quantization.MovingAverageMinMaxObserver(averaging_constant=0.01, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/observer.html#MovingAverageMinMaxObserver) Observer module for computing the quantization parameters based on the moving average of the min and max values. This observer computes the quantization parameters based on the moving averages of the minimums and maximums of the incoming tensors. The module records the average minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters * **averaging\_constant** – Averaging constant for min/max. * **dtype** – Quantized data type * **qscheme** – Quantization scheme to be used * **reduce\_range** – Reduces the range of the quantized data type by 1 bit * **quant\_min** – Minimum quantization value. If unspecified, it will follow the 8-bit setup. * **quant\_max** – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The moving average min/max is computed as follows:

\[\begin{array}{ll} x_\text{min} = \begin{cases} \min(X) & \text{if } x_\text{min} = \text{None} \\ (1 - c) x_\text{min} + c \min(X) & \text{otherwise} \end{cases}\\ x_\text{max} = \begin{cases} \max(X) & \text{if } x_\text{max} = \text{None} \\ (1 - c) x_\text{max} + c \max(X) & \text{otherwise} \end{cases} \end{array}\]

where \(x_\text{min/max}\) is the running average min/max, \(X\) is the incoming tensor, and \(c\) is the `averaging_constant`. The scale and zero point are then computed as in `MinMaxObserver`. Note Only works with `torch.per_tensor_affine` quantization scheme. Note If the running minimum equals the running maximum, the scale and zero\_point are set to 1.0 and 0.
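For illustration, here is a minimal sketch (not part of the original reference) of driving an observer by hand; the random input stands in for real calibration data:

```
>>> import torch
>>> from torch.quantization import MinMaxObserver
>>> obs = MinMaxObserver(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric)
>>> _ = obs(torch.randn(4, 8))                 # a forward pass updates the running min/max
>>> scale, zero_point = obs.calculate_qparams()
```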
`class torch.quantization.PerChannelMinMaxObserver(ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/observer.html#PerChannelMinMaxObserver) Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters * **ch\_axis** – Channel axis * **dtype** – Quantized data type * **qscheme** – Quantization scheme to be used * **reduce\_range** – Reduces the range of the quantized data type by 1 bit * **quant\_min** – Minimum quantization value. If unspecified, it will follow the 8-bit setup. * **quant\_max** – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The quantization parameters are computed the same way as in `MinMaxObserver`, with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note If the running minimum equals the running maximum, the scales and zero\_points are set to 1.0 and 0. `class torch.quantization.MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/observer.html#MovingAveragePerChannelMinMaxObserver) Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters * **averaging\_constant** – Averaging constant for min/max. * **ch\_axis** – Channel axis * **dtype** – Quantized data type * **qscheme** – Quantization scheme to be used * **reduce\_range** – Reduces the range of the quantized data type by 1 bit * **quant\_min** – Minimum quantization value. If unspecified, it will follow the 8-bit setup. * **quant\_max** – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The quantization parameters are computed the same way as in `MovingAverageMinMaxObserver`, with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note If the running minimum equals the running maximum, the scales and zero\_points are set to 1.0 and 0. `class torch.quantization.HistogramObserver(bins=2048, upsample_rate=128, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/observer.html#HistogramObserver) The module records the running histogram of tensor values along with min/max values. `calculate_qparams` will calculate scale and zero\_point.
Parameters * **bins** – Number of bins to use for the histogram * **upsample\_rate** – Factor by which the histograms are upsampled; this is used to interpolate histograms with varying ranges across observations * **dtype** – Quantized data type * **qscheme** – Quantization scheme to be used * **reduce\_range** – Reduces the range of the quantized data type by 1 bit The scale and zero point are computed as follows: 1. Create the histogram of the incoming inputs. The histogram is computed continuously, and the ranges per bin change with every new tensor observed. 2. Search the distribution in the histogram for optimal min/max values. The search for the min/max values ensures the minimization of the quantization error with respect to the floating point model. 3. Compute the scale and zero point the same way as in the [`MinMaxObserver`](#torch.quantization.MinMaxObserver "torch.quantization.MinMaxObserver"). `class torch.quantization.FakeQuantize(observer=<class 'torch.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=0, quant_max=255, **observer_kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/fake_quantize.html#FakeQuantize) Simulate the quantize and dequantize operations at training time. The output of this module is given by:

```
x_out = (clamp(round(x / scale + zero_point), quant_min, quant_max) - zero_point) * scale
```

* `scale` defines the scale factor used for quantization. * `zero_point` specifies the quantized value to which 0 in floating point maps. * `quant_min` specifies the minimum allowable quantized value. * `quant_max` specifies the maximum allowable quantized value. * `fake_quant_enable` controls the application of fake quantization on tensors; note that statistics can still be updated. * `observer_enable` controls statistics collection on tensors. * `dtype` specifies the quantized dtype that is being emulated with fake-quantization; allowable values are `torch.qint8` and `torch.quint8`. The values of `quant_min` and `quant_max` should be chosen to be consistent with the dtype. Parameters * **observer** (*module*) – Module for observing statistics on input tensors and calculating scale and zero-point. * **quant\_min** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The minimum allowable quantized value. * **quant\_max** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The maximum allowable quantized value. * **observer\_kwargs** (*optional*) – Arguments for the observer module Variables **~FakeQuantize.observer** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – User provided module that collects statistics on the input tensor and provides a method to calculate scale and zero-point. `class torch.quantization.NoopObserver(dtype=torch.float16, custom_op_name='')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/observer.html#NoopObserver) Observer that doesn’t do anything and just passes its configuration to the quantized module’s `.from_float()`. Primarily used for quantization to float16, which doesn’t require determining ranges. Parameters * **dtype** – Quantized data type * **custom\_op\_name** – (temporary) specify this observer for an operator that doesn’t require any observation (can be used in Graph Mode Passes for special case ops).
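As an illustrative sketch (not part of the original reference), a `FakeQuantize` module can be built around an observer class and applied directly to a tensor; the `dtype` keyword here is forwarded to the observer via `observer_kwargs`:

```
>>> import torch
>>> from torch.quantization import FakeQuantize, MovingAverageMinMaxObserver
>>> fq = FakeQuantize(observer=MovingAverageMinMaxObserver,
...                   quant_min=0, quant_max=255, dtype=torch.quint8)
>>> y = fq(torch.randn(2, 3))   # observes the input, then applies the formula above
```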
Debugging utilities ------------------- `torch.quantization.get_observer_dict(mod, target_dict, prefix='')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/quantize.html#get_observer_dict) Traverse the modules and save all observers into a dict. This is mainly used for quantization accuracy debugging (see the sketch at the end of this section). Parameters * **mod** – the top module whose observers we want to save * **prefix** – the prefix for the current module * **target\_dict** – the dictionary used to save all the observers `class torch.quantization.RecordingObserver(**kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quantization/observer.html#RecordingObserver) This module is mainly for debugging; it records the tensor values during runtime. Parameters * **dtype** – Quantized data type * **qscheme** – Quantization scheme to be used * **reduce\_range** – Reduces the range of the quantized data type by 1 bit | | | | --- | --- | | [`nn.intrinsic`](torch.nn.intrinsic#module-torch.nn.intrinsic "torch.nn.intrinsic") | |
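A minimal usage sketch for `get_observer_dict` (not part of the original reference; `prepared_model` is a hypothetical name for a model that has already gone through `prepare` and calibration):

```
>>> # prepared_model: assumed to be a model already prepared and calibrated,
>>> # i.e. with observer modules attached
>>> observer_dict = {}
>>> torch.quantization.get_observer_dict(prepared_model, observer_dict)
>>> for name, observer in observer_dict.items():
...     print(name, observer)
```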
pytorch torch.nn torch.nn ======== These are the basic building blocks for graphs: * [Containers](#containers) * [Convolution Layers](#convolution-layers) * [Pooling layers](#pooling-layers) * [Padding Layers](#padding-layers) * [Non-linear Activations (weighted sum, nonlinearity)](#non-linear-activations-weighted-sum-nonlinearity) * [Non-linear Activations (other)](#non-linear-activations-other) * [Normalization Layers](#normalization-layers) * [Recurrent Layers](#recurrent-layers) * [Transformer Layers](#transformer-layers) * [Linear Layers](#linear-layers) * [Dropout Layers](#dropout-layers) * [Sparse Layers](#sparse-layers) * [Distance Functions](#distance-functions) * [Loss Functions](#loss-functions) * [Vision Layers](#vision-layers) * [Shuffle Layers](#shuffle-layers) * [DataParallel Layers (multi-GPU, distributed)](#dataparallel-layers-multi-gpu-distributed) * [Utilities](#utilities) * [Quantized Functions](#quantized-functions) * [Lazy Modules Initialization](#lazy-modules-initialization) | | | | --- | --- | | [`Parameter`](generated/torch.nn.parameter.parameter#torch.nn.parameter.Parameter "torch.nn.parameter.Parameter") | A kind of Tensor that is to be considered a module parameter. | | [`UninitializedParameter`](generated/torch.nn.parameter.uninitializedparameter#torch.nn.parameter.UninitializedParameter "torch.nn.parameter.UninitializedParameter") | A parameter that is not initialized. | Containers ---------- | | | | --- | --- | | [`Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") | Base class for all neural network modules. | | [`Sequential`](generated/torch.nn.sequential#torch.nn.Sequential "torch.nn.Sequential") | A sequential container. | | [`ModuleList`](generated/torch.nn.modulelist#torch.nn.ModuleList "torch.nn.ModuleList") | Holds submodules in a list. | | [`ModuleDict`](generated/torch.nn.moduledict#torch.nn.ModuleDict "torch.nn.ModuleDict") | Holds submodules in a dictionary. | | [`ParameterList`](generated/torch.nn.parameterlist#torch.nn.ParameterList "torch.nn.ParameterList") | Holds parameters in a list. | | [`ParameterDict`](generated/torch.nn.parameterdict#torch.nn.ParameterDict "torch.nn.ParameterDict") | Holds parameters in a dictionary. | Global Hooks For Module | | | | --- | --- | | [`register_module_forward_pre_hook`](generated/torch.nn.modules.module.register_module_forward_pre_hook#torch.nn.modules.module.register_module_forward_pre_hook "torch.nn.modules.module.register_module_forward_pre_hook") | Registers a forward pre-hook common to all modules. | | [`register_module_forward_hook`](generated/torch.nn.modules.module.register_module_forward_hook#torch.nn.modules.module.register_module_forward_hook "torch.nn.modules.module.register_module_forward_hook") | Registers a global forward hook for all the modules. | | [`register_module_backward_hook`](generated/torch.nn.modules.module.register_module_backward_hook#torch.nn.modules.module.register_module_backward_hook "torch.nn.modules.module.register_module_backward_hook") | Registers a backward hook common to all the modules. | Convolution Layers ------------------ | | | | --- | --- | | [`nn.Conv1d`](generated/torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d") | Applies a 1D convolution over an input signal composed of several input planes. | | [`nn.Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") | Applies a 2D convolution over an input signal composed of several input planes. 
| | [`nn.Conv3d`](generated/torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d") | Applies a 3D convolution over an input signal composed of several input planes. | | [`nn.ConvTranspose1d`](generated/torch.nn.convtranspose1d#torch.nn.ConvTranspose1d "torch.nn.ConvTranspose1d") | Applies a 1D transposed convolution operator over an input image composed of several input planes. | | [`nn.ConvTranspose2d`](generated/torch.nn.convtranspose2d#torch.nn.ConvTranspose2d "torch.nn.ConvTranspose2d") | Applies a 2D transposed convolution operator over an input image composed of several input planes. | | [`nn.ConvTranspose3d`](generated/torch.nn.convtranspose3d#torch.nn.ConvTranspose3d "torch.nn.ConvTranspose3d") | Applies a 3D transposed convolution operator over an input image composed of several input planes. | | [`nn.LazyConv1d`](generated/torch.nn.lazyconv1d#torch.nn.LazyConv1d "torch.nn.LazyConv1d") | A [`torch.nn.Conv1d`](generated/torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d") module with lazy initialization of the `in_channels` argument of the `Conv1d` that is inferred from the `input.size(1)`. | | [`nn.LazyConv2d`](generated/torch.nn.lazyconv2d#torch.nn.LazyConv2d "torch.nn.LazyConv2d") | A [`torch.nn.Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") module with lazy initialization of the `in_channels` argument of the `Conv2d` that is inferred from the `input.size(1)`. | | [`nn.LazyConv3d`](generated/torch.nn.lazyconv3d#torch.nn.LazyConv3d "torch.nn.LazyConv3d") | A [`torch.nn.Conv3d`](generated/torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d") module with lazy initialization of the `in_channels` argument of the `Conv3d` that is inferred from the `input.size(1)`. | | [`nn.LazyConvTranspose1d`](generated/torch.nn.lazyconvtranspose1d#torch.nn.LazyConvTranspose1d "torch.nn.LazyConvTranspose1d") | A [`torch.nn.ConvTranspose1d`](generated/torch.nn.convtranspose1d#torch.nn.ConvTranspose1d "torch.nn.ConvTranspose1d") module with lazy initialization of the `in_channels` argument of the `ConvTranspose1d` that is inferred from the `input.size(1)`. | | [`nn.LazyConvTranspose2d`](generated/torch.nn.lazyconvtranspose2d#torch.nn.LazyConvTranspose2d "torch.nn.LazyConvTranspose2d") | A [`torch.nn.ConvTranspose2d`](generated/torch.nn.convtranspose2d#torch.nn.ConvTranspose2d "torch.nn.ConvTranspose2d") module with lazy initialization of the `in_channels` argument of the `ConvTranspose2d` that is inferred from the `input.size(1)`. | | [`nn.LazyConvTranspose3d`](generated/torch.nn.lazyconvtranspose3d#torch.nn.LazyConvTranspose3d "torch.nn.LazyConvTranspose3d") | A [`torch.nn.ConvTranspose3d`](generated/torch.nn.convtranspose3d#torch.nn.ConvTranspose3d "torch.nn.ConvTranspose3d") module with lazy initialization of the `in_channels` argument of the `ConvTranspose3d` that is inferred from the `input.size(1)`. | | [`nn.Unfold`](generated/torch.nn.unfold#torch.nn.Unfold "torch.nn.Unfold") | Extracts sliding local blocks from a batched input tensor. | | [`nn.Fold`](generated/torch.nn.fold#torch.nn.Fold "torch.nn.Fold") | Combines an array of sliding local blocks into a large containing tensor. | Pooling layers -------------- | | | | --- | --- | | [`nn.MaxPool1d`](generated/torch.nn.maxpool1d#torch.nn.MaxPool1d "torch.nn.MaxPool1d") | Applies a 1D max pooling over an input signal composed of several input planes. | | [`nn.MaxPool2d`](generated/torch.nn.maxpool2d#torch.nn.MaxPool2d "torch.nn.MaxPool2d") | Applies a 2D max pooling over an input signal composed of several input planes. 
| | [`nn.MaxPool3d`](generated/torch.nn.maxpool3d#torch.nn.MaxPool3d "torch.nn.MaxPool3d") | Applies a 3D max pooling over an input signal composed of several input planes. | | [`nn.MaxUnpool1d`](generated/torch.nn.maxunpool1d#torch.nn.MaxUnpool1d "torch.nn.MaxUnpool1d") | Computes a partial inverse of `MaxPool1d`. | | [`nn.MaxUnpool2d`](generated/torch.nn.maxunpool2d#torch.nn.MaxUnpool2d "torch.nn.MaxUnpool2d") | Computes a partial inverse of `MaxPool2d`. | | [`nn.MaxUnpool3d`](generated/torch.nn.maxunpool3d#torch.nn.MaxUnpool3d "torch.nn.MaxUnpool3d") | Computes a partial inverse of `MaxPool3d`. | | [`nn.AvgPool1d`](generated/torch.nn.avgpool1d#torch.nn.AvgPool1d "torch.nn.AvgPool1d") | Applies a 1D average pooling over an input signal composed of several input planes. | | [`nn.AvgPool2d`](generated/torch.nn.avgpool2d#torch.nn.AvgPool2d "torch.nn.AvgPool2d") | Applies a 2D average pooling over an input signal composed of several input planes. | | [`nn.AvgPool3d`](generated/torch.nn.avgpool3d#torch.nn.AvgPool3d "torch.nn.AvgPool3d") | Applies a 3D average pooling over an input signal composed of several input planes. | | [`nn.FractionalMaxPool2d`](generated/torch.nn.fractionalmaxpool2d#torch.nn.FractionalMaxPool2d "torch.nn.FractionalMaxPool2d") | Applies a 2D fractional max pooling over an input signal composed of several input planes. | | [`nn.LPPool1d`](generated/torch.nn.lppool1d#torch.nn.LPPool1d "torch.nn.LPPool1d") | Applies a 1D power-average pooling over an input signal composed of several input planes. | | [`nn.LPPool2d`](generated/torch.nn.lppool2d#torch.nn.LPPool2d "torch.nn.LPPool2d") | Applies a 2D power-average pooling over an input signal composed of several input planes. | | [`nn.AdaptiveMaxPool1d`](generated/torch.nn.adaptivemaxpool1d#torch.nn.AdaptiveMaxPool1d "torch.nn.AdaptiveMaxPool1d") | Applies a 1D adaptive max pooling over an input signal composed of several input planes. | | [`nn.AdaptiveMaxPool2d`](generated/torch.nn.adaptivemaxpool2d#torch.nn.AdaptiveMaxPool2d "torch.nn.AdaptiveMaxPool2d") | Applies a 2D adaptive max pooling over an input signal composed of several input planes. | | [`nn.AdaptiveMaxPool3d`](generated/torch.nn.adaptivemaxpool3d#torch.nn.AdaptiveMaxPool3d "torch.nn.AdaptiveMaxPool3d") | Applies a 3D adaptive max pooling over an input signal composed of several input planes. | | [`nn.AdaptiveAvgPool1d`](generated/torch.nn.adaptiveavgpool1d#torch.nn.AdaptiveAvgPool1d "torch.nn.AdaptiveAvgPool1d") | Applies a 1D adaptive average pooling over an input signal composed of several input planes. | | [`nn.AdaptiveAvgPool2d`](generated/torch.nn.adaptiveavgpool2d#torch.nn.AdaptiveAvgPool2d "torch.nn.AdaptiveAvgPool2d") | Applies a 2D adaptive average pooling over an input signal composed of several input planes. | | [`nn.AdaptiveAvgPool3d`](generated/torch.nn.adaptiveavgpool3d#torch.nn.AdaptiveAvgPool3d "torch.nn.AdaptiveAvgPool3d") | Applies a 3D adaptive average pooling over an input signal composed of several input planes. | Padding Layers -------------- | | | | --- | --- | | [`nn.ReflectionPad1d`](generated/torch.nn.reflectionpad1d#torch.nn.ReflectionPad1d "torch.nn.ReflectionPad1d") | Pads the input tensor using the reflection of the input boundary. | | [`nn.ReflectionPad2d`](generated/torch.nn.reflectionpad2d#torch.nn.ReflectionPad2d "torch.nn.ReflectionPad2d") | Pads the input tensor using the reflection of the input boundary. 
| | [`nn.ReplicationPad1d`](generated/torch.nn.replicationpad1d#torch.nn.ReplicationPad1d "torch.nn.ReplicationPad1d") | Pads the input tensor using replication of the input boundary. | | [`nn.ReplicationPad2d`](generated/torch.nn.replicationpad2d#torch.nn.ReplicationPad2d "torch.nn.ReplicationPad2d") | Pads the input tensor using replication of the input boundary. | | [`nn.ReplicationPad3d`](generated/torch.nn.replicationpad3d#torch.nn.ReplicationPad3d "torch.nn.ReplicationPad3d") | Pads the input tensor using replication of the input boundary. | | [`nn.ZeroPad2d`](generated/torch.nn.zeropad2d#torch.nn.ZeroPad2d "torch.nn.ZeroPad2d") | Pads the input tensor boundaries with zero. | | [`nn.ConstantPad1d`](generated/torch.nn.constantpad1d#torch.nn.ConstantPad1d "torch.nn.ConstantPad1d") | Pads the input tensor boundaries with a constant value. | | [`nn.ConstantPad2d`](generated/torch.nn.constantpad2d#torch.nn.ConstantPad2d "torch.nn.ConstantPad2d") | Pads the input tensor boundaries with a constant value. | | [`nn.ConstantPad3d`](generated/torch.nn.constantpad3d#torch.nn.ConstantPad3d "torch.nn.ConstantPad3d") | Pads the input tensor boundaries with a constant value. | Non-linear Activations (weighted sum, nonlinearity) --------------------------------------------------- | | | | --- | --- | | [`nn.ELU`](generated/torch.nn.elu#torch.nn.ELU "torch.nn.ELU") | Applies the element-wise function: | | [`nn.Hardshrink`](generated/torch.nn.hardshrink#torch.nn.Hardshrink "torch.nn.Hardshrink") | Applies the hard shrinkage function element-wise: | | [`nn.Hardsigmoid`](generated/torch.nn.hardsigmoid#torch.nn.Hardsigmoid "torch.nn.Hardsigmoid") | Applies the element-wise function: | | [`nn.Hardtanh`](generated/torch.nn.hardtanh#torch.nn.Hardtanh "torch.nn.Hardtanh") | Applies the HardTanh function element-wise | | [`nn.Hardswish`](generated/torch.nn.hardswish#torch.nn.Hardswish "torch.nn.Hardswish") | Applies the hardswish function, element-wise, as described in the paper: | | [`nn.LeakyReLU`](generated/torch.nn.leakyrelu#torch.nn.LeakyReLU "torch.nn.LeakyReLU") | Applies the element-wise function: | | [`nn.LogSigmoid`](generated/torch.nn.logsigmoid#torch.nn.LogSigmoid "torch.nn.LogSigmoid") | Applies the element-wise function: | | [`nn.MultiheadAttention`](generated/torch.nn.multiheadattention#torch.nn.MultiheadAttention "torch.nn.MultiheadAttention") | Allows the model to jointly attend to information from different representation subspaces. 
| | [`nn.PReLU`](generated/torch.nn.prelu#torch.nn.PReLU "torch.nn.PReLU") | Applies the element-wise function: | | [`nn.ReLU`](generated/torch.nn.relu#torch.nn.ReLU "torch.nn.ReLU") | Applies the rectified linear unit function element-wise: | | [`nn.ReLU6`](generated/torch.nn.relu6#torch.nn.ReLU6 "torch.nn.ReLU6") | Applies the element-wise function: | | [`nn.RReLU`](generated/torch.nn.rrelu#torch.nn.RReLU "torch.nn.RReLU") | Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper: | | [`nn.SELU`](generated/torch.nn.selu#torch.nn.SELU "torch.nn.SELU") | Applied element-wise, as: | | [`nn.CELU`](generated/torch.nn.celu#torch.nn.CELU "torch.nn.CELU") | Applies the element-wise function: | | [`nn.GELU`](generated/torch.nn.gelu#torch.nn.GELU "torch.nn.GELU") | Applies the Gaussian Error Linear Units function: | | [`nn.Sigmoid`](generated/torch.nn.sigmoid#torch.nn.Sigmoid "torch.nn.Sigmoid") | Applies the element-wise function: | | [`nn.SiLU`](generated/torch.nn.silu#torch.nn.SiLU "torch.nn.SiLU") | Applies the silu function, element-wise. | | [`nn.Softplus`](generated/torch.nn.softplus#torch.nn.Softplus "torch.nn.Softplus") | Applies the element-wise function: | | [`nn.Softshrink`](generated/torch.nn.softshrink#torch.nn.Softshrink "torch.nn.Softshrink") | Applies the soft shrinkage function elementwise: | | [`nn.Softsign`](generated/torch.nn.softsign#torch.nn.Softsign "torch.nn.Softsign") | Applies the element-wise function: | | [`nn.Tanh`](generated/torch.nn.tanh#torch.nn.Tanh "torch.nn.Tanh") | Applies the element-wise function: | | [`nn.Tanhshrink`](generated/torch.nn.tanhshrink#torch.nn.Tanhshrink "torch.nn.Tanhshrink") | Applies the element-wise function: | | [`nn.Threshold`](generated/torch.nn.threshold#torch.nn.Threshold "torch.nn.Threshold") | Thresholds each element of the input Tensor. | Non-linear Activations (other) ------------------------------ | | | | --- | --- | | [`nn.Softmin`](generated/torch.nn.softmin#torch.nn.Softmin "torch.nn.Softmin") | Applies the Softmin function to an n-dimensional input Tensor rescaling them so that the elements of the n-dimensional output Tensor lie in the range `[0, 1]` and sum to 1. | | [`nn.Softmax`](generated/torch.nn.softmax#torch.nn.Softmax "torch.nn.Softmax") | Applies the Softmax function to an n-dimensional input Tensor rescaling them so that the elements of the n-dimensional output Tensor lie in the range [0,1] and sum to 1. | | [`nn.Softmax2d`](generated/torch.nn.softmax2d#torch.nn.Softmax2d "torch.nn.Softmax2d") | Applies SoftMax over features to each spatial location. | | [`nn.LogSoftmax`](generated/torch.nn.logsoftmax#torch.nn.LogSoftmax "torch.nn.LogSoftmax") | Applies the \(\log(\text{Softmax}(x))\) function to an n-dimensional input Tensor. | | [`nn.AdaptiveLogSoftmaxWithLoss`](generated/torch.nn.adaptivelogsoftmaxwithloss#torch.nn.AdaptiveLogSoftmaxWithLoss "torch.nn.AdaptiveLogSoftmaxWithLoss") | Efficient softmax approximation as described in [Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou](https://arxiv.org/abs/1609.04309). 
| Normalization Layers -------------------- | | | | --- | --- | | [`nn.BatchNorm1d`](generated/torch.nn.batchnorm1d#torch.nn.BatchNorm1d "torch.nn.BatchNorm1d") | Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167) . | | [`nn.BatchNorm2d`](generated/torch.nn.batchnorm2d#torch.nn.BatchNorm2d "torch.nn.BatchNorm2d") | Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167) . | | [`nn.BatchNorm3d`](generated/torch.nn.batchnorm3d#torch.nn.BatchNorm3d "torch.nn.BatchNorm3d") | Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167) . | | [`nn.GroupNorm`](generated/torch.nn.groupnorm#torch.nn.GroupNorm "torch.nn.GroupNorm") | Applies Group Normalization over a mini-batch of inputs as described in the paper [Group Normalization](https://arxiv.org/abs/1803.08494) | | [`nn.SyncBatchNorm`](generated/torch.nn.syncbatchnorm#torch.nn.SyncBatchNorm "torch.nn.SyncBatchNorm") | Applies Batch Normalization over a N-Dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167) . | | [`nn.InstanceNorm1d`](generated/torch.nn.instancenorm1d#torch.nn.InstanceNorm1d "torch.nn.InstanceNorm1d") | Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022). | | [`nn.InstanceNorm2d`](generated/torch.nn.instancenorm2d#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d") | Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022). | | [`nn.InstanceNorm3d`](generated/torch.nn.instancenorm3d#torch.nn.InstanceNorm3d "torch.nn.InstanceNorm3d") | Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022). | | [`nn.LayerNorm`](generated/torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") | Applies Layer Normalization over a mini-batch of inputs as described in the paper [Layer Normalization](https://arxiv.org/abs/1607.06450) | | [`nn.LocalResponseNorm`](generated/torch.nn.localresponsenorm#torch.nn.LocalResponseNorm "torch.nn.LocalResponseNorm") | Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. 
| Recurrent Layers ---------------- | | | | --- | --- | | [`nn.RNNBase`](generated/torch.nn.rnnbase#torch.nn.RNNBase "torch.nn.RNNBase") | | | [`nn.RNN`](generated/torch.nn.rnn#torch.nn.RNN "torch.nn.RNN") | Applies a multi-layer Elman RNN with \(\tanh\) or \(\text{ReLU}\) non-linearity to an input sequence. | | [`nn.LSTM`](generated/torch.nn.lstm#torch.nn.LSTM "torch.nn.LSTM") | Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence. | | [`nn.GRU`](generated/torch.nn.gru#torch.nn.GRU "torch.nn.GRU") | Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. | | [`nn.RNNCell`](generated/torch.nn.rnncell#torch.nn.RNNCell "torch.nn.RNNCell") | An Elman RNN cell with tanh or ReLU non-linearity. | | [`nn.LSTMCell`](generated/torch.nn.lstmcell#torch.nn.LSTMCell "torch.nn.LSTMCell") | A long short-term memory (LSTM) cell. | | [`nn.GRUCell`](generated/torch.nn.grucell#torch.nn.GRUCell "torch.nn.GRUCell") | A gated recurrent unit (GRU) cell | Transformer Layers ------------------ | | | | --- | --- | | [`nn.Transformer`](generated/torch.nn.transformer#torch.nn.Transformer "torch.nn.Transformer") | A transformer model. | | [`nn.TransformerEncoder`](generated/torch.nn.transformerencoder#torch.nn.TransformerEncoder "torch.nn.TransformerEncoder") | TransformerEncoder is a stack of N encoder layers | | [`nn.TransformerDecoder`](generated/torch.nn.transformerdecoder#torch.nn.TransformerDecoder "torch.nn.TransformerDecoder") | TransformerDecoder is a stack of N decoder layers | | [`nn.TransformerEncoderLayer`](generated/torch.nn.transformerencoderlayer#torch.nn.TransformerEncoderLayer "torch.nn.TransformerEncoderLayer") | TransformerEncoderLayer is made up of self-attn and feedforward network. | | [`nn.TransformerDecoderLayer`](generated/torch.nn.transformerdecoderlayer#torch.nn.TransformerDecoderLayer "torch.nn.TransformerDecoderLayer") | TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network. | Linear Layers ------------- | | | | --- | --- | | [`nn.Identity`](generated/torch.nn.identity#torch.nn.Identity "torch.nn.Identity") | A placeholder identity operator that is argument-insensitive. | | [`nn.Linear`](generated/torch.nn.linear#torch.nn.Linear "torch.nn.Linear") | Applies a linear transformation to the incoming data: \(y = xA^T + b\) | | [`nn.Bilinear`](generated/torch.nn.bilinear#torch.nn.Bilinear "torch.nn.Bilinear") | Applies a bilinear transformation to the incoming data: \(y = x_1^T A x_2 + b\) | | [`nn.LazyLinear`](generated/torch.nn.lazylinear#torch.nn.LazyLinear "torch.nn.LazyLinear") | A [`torch.nn.Linear`](generated/torch.nn.linear#torch.nn.Linear "torch.nn.Linear") module with lazy initialization. | Dropout Layers -------------- | | | | --- | --- | | [`nn.Dropout`](generated/torch.nn.dropout#torch.nn.Dropout "torch.nn.Dropout") | During training, randomly zeroes some of the elements of the input tensor with probability `p` using samples from a Bernoulli distribution. | | [`nn.Dropout2d`](generated/torch.nn.dropout2d#torch.nn.Dropout2d "torch.nn.Dropout2d") | Randomly zero out entire channels (a channel is a 2D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor \(\text{input}[i, j]\)). | | [`nn.Dropout3d`](generated/torch.nn.dropout3d#torch.nn.Dropout3d "torch.nn.Dropout3d") | Randomly zero out entire channels (a channel is a 3D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor \(\text{input}[i, j]\)). 
| | [`nn.AlphaDropout`](generated/torch.nn.alphadropout#torch.nn.AlphaDropout "torch.nn.AlphaDropout") | Applies Alpha Dropout over the input. | Sparse Layers ------------- | | | | --- | --- | | [`nn.Embedding`](generated/torch.nn.embedding#torch.nn.Embedding "torch.nn.Embedding") | A simple lookup table that stores embeddings of a fixed dictionary and size. | | [`nn.EmbeddingBag`](generated/torch.nn.embeddingbag#torch.nn.EmbeddingBag "torch.nn.EmbeddingBag") | Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings. | Distance Functions ------------------ | | | | --- | --- | | [`nn.CosineSimilarity`](generated/torch.nn.cosinesimilarity#torch.nn.CosineSimilarity "torch.nn.CosineSimilarity") | Returns cosine similarity between \(x_1\) and \(x_2\), computed along dim. | | [`nn.PairwiseDistance`](generated/torch.nn.pairwisedistance#torch.nn.PairwiseDistance "torch.nn.PairwiseDistance") | Computes the batchwise pairwise distance between vectors \(v_1\), \(v_2\) using the p-norm: | Loss Functions -------------- | | | | --- | --- | | [`nn.L1Loss`](generated/torch.nn.l1loss#torch.nn.L1Loss "torch.nn.L1Loss") | Creates a criterion that measures the mean absolute error (MAE) between each element in the input \(x\) and target \(y\). | | [`nn.MSELoss`](generated/torch.nn.mseloss#torch.nn.MSELoss "torch.nn.MSELoss") | Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input \(x\) and target \(y\). | | [`nn.CrossEntropyLoss`](generated/torch.nn.crossentropyloss#torch.nn.CrossEntropyLoss "torch.nn.CrossEntropyLoss") | This criterion combines [`LogSoftmax`](generated/torch.nn.logsoftmax#torch.nn.LogSoftmax "torch.nn.LogSoftmax") and [`NLLLoss`](generated/torch.nn.nllloss#torch.nn.NLLLoss "torch.nn.NLLLoss") in one single class. | | [`nn.CTCLoss`](generated/torch.nn.ctcloss#torch.nn.CTCLoss "torch.nn.CTCLoss") | The Connectionist Temporal Classification loss. | | [`nn.NLLLoss`](generated/torch.nn.nllloss#torch.nn.NLLLoss "torch.nn.NLLLoss") | The negative log likelihood loss. | | [`nn.PoissonNLLLoss`](generated/torch.nn.poissonnllloss#torch.nn.PoissonNLLLoss "torch.nn.PoissonNLLLoss") | Negative log likelihood loss with Poisson distribution of target. | | [`nn.GaussianNLLLoss`](generated/torch.nn.gaussiannllloss#torch.nn.GaussianNLLLoss "torch.nn.GaussianNLLLoss") | Gaussian negative log likelihood loss. | | [`nn.KLDivLoss`](generated/torch.nn.kldivloss#torch.nn.KLDivLoss "torch.nn.KLDivLoss") | The Kullback-Leibler divergence loss measure | | [`nn.BCELoss`](generated/torch.nn.bceloss#torch.nn.BCELoss "torch.nn.BCELoss") | Creates a criterion that measures the Binary Cross Entropy between the target and the output: | | [`nn.BCEWithLogitsLoss`](generated/torch.nn.bcewithlogitsloss#torch.nn.BCEWithLogitsLoss "torch.nn.BCEWithLogitsLoss") | This loss combines a `Sigmoid` layer and the `BCELoss` in one single class. | | [`nn.MarginRankingLoss`](generated/torch.nn.marginrankingloss#torch.nn.MarginRankingLoss "torch.nn.MarginRankingLoss") | Creates a criterion that measures the loss given inputs \(x_1\), \(x_2\), two 1D mini-batch `Tensors`, and a label 1D mini-batch tensor \(y\) (containing 1 or -1). | | [`nn.HingeEmbeddingLoss`](generated/torch.nn.hingeembeddingloss#torch.nn.HingeEmbeddingLoss "torch.nn.HingeEmbeddingLoss") | Measures the loss given an input tensor \(x\) and a labels tensor \(y\) (containing 1 or -1). 
| | [`nn.MultiLabelMarginLoss`](generated/torch.nn.multilabelmarginloss#torch.nn.MultiLabelMarginLoss "torch.nn.MultiLabelMarginLoss") | Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch `Tensor`) and output \(y\) (which is a 2D `Tensor` of target class indices). | | [`nn.SmoothL1Loss`](generated/torch.nn.smoothl1loss#torch.nn.SmoothL1Loss "torch.nn.SmoothL1Loss") | Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. | | [`nn.SoftMarginLoss`](generated/torch.nn.softmarginloss#torch.nn.SoftMarginLoss "torch.nn.SoftMarginLoss") | Creates a criterion that optimizes a two-class classification logistic loss between input tensor \(x\) and target tensor \(y\) (containing 1 or -1). | | [`nn.MultiLabelSoftMarginLoss`](generated/torch.nn.multilabelsoftmarginloss#torch.nn.MultiLabelSoftMarginLoss "torch.nn.MultiLabelSoftMarginLoss") | Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input \(x\) and target \(y\) of size \((N, C)\). | | [`nn.CosineEmbeddingLoss`](generated/torch.nn.cosineembeddingloss#torch.nn.CosineEmbeddingLoss "torch.nn.CosineEmbeddingLoss") | Creates a criterion that measures the loss given input tensors \(x_1\), \(x_2\) and a `Tensor` label \(y\) with values 1 or -1. | | [`nn.MultiMarginLoss`](generated/torch.nn.multimarginloss#torch.nn.MultiMarginLoss "torch.nn.MultiMarginLoss") | Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch `Tensor`) and output \(y\) (which is a 1D tensor of target class indices, \(0 \leq y \leq \text{x.size}(1)-1\)): | | [`nn.TripletMarginLoss`](generated/torch.nn.tripletmarginloss#torch.nn.TripletMarginLoss "torch.nn.TripletMarginLoss") | Creates a criterion that measures the triplet loss given input tensors \(x_1\), \(x_2\), \(x_3\) and a margin with a value greater than \(0\). | | [`nn.TripletMarginWithDistanceLoss`](generated/torch.nn.tripletmarginwithdistanceloss#torch.nn.TripletMarginWithDistanceLoss "torch.nn.TripletMarginWithDistanceLoss") | Creates a criterion that measures the triplet loss given input tensors \(a\), \(p\), and \(n\) (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function (“distance function”) used to compute the relationship between the anchor and positive example (“positive distance”) and the anchor and negative example (“negative distance”). | Vision Layers ------------- | | | | --- | --- | | [`nn.PixelShuffle`](generated/torch.nn.pixelshuffle#torch.nn.PixelShuffle "torch.nn.PixelShuffle") | Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a tensor of shape \((*, C, H \times r, W \times r)\), where r is an upscale factor. | | [`nn.PixelUnshuffle`](generated/torch.nn.pixelunshuffle#torch.nn.PixelUnshuffle "torch.nn.PixelUnshuffle") | Reverses the [`PixelShuffle`](generated/torch.nn.pixelshuffle#torch.nn.PixelShuffle "torch.nn.PixelShuffle") operation by rearranging elements in a tensor of shape \((*, C, H \times r, W \times r)\) to a tensor of shape \((*, C \times r^2, H, W)\), where r is a downscale factor. | | [`nn.Upsample`](generated/torch.nn.upsample#torch.nn.Upsample "torch.nn.Upsample") | Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data. 
| | [`nn.UpsamplingNearest2d`](generated/torch.nn.upsamplingnearest2d#torch.nn.UpsamplingNearest2d "torch.nn.UpsamplingNearest2d") | Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels. | | [`nn.UpsamplingBilinear2d`](generated/torch.nn.upsamplingbilinear2d#torch.nn.UpsamplingBilinear2d "torch.nn.UpsamplingBilinear2d") | Applies a 2D bilinear upsampling to an input signal composed of several input channels. | Shuffle Layers -------------- | | | | --- | --- | | [`nn.ChannelShuffle`](generated/torch.nn.channelshuffle#torch.nn.ChannelShuffle "torch.nn.ChannelShuffle") | Divide the channels in a tensor of shape \((*, C, H, W)\) into g groups and rearrange them as \((*, C/g, g, H, W)\), while keeping the original tensor shape. | DataParallel Layers (multi-GPU, distributed) -------------------------------------------- | | | | --- | --- | | [`nn.DataParallel`](generated/torch.nn.dataparallel#torch.nn.DataParallel "torch.nn.DataParallel") | Implements data parallelism at the module level. | | [`nn.parallel.DistributedDataParallel`](generated/torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") | Implements distributed data parallelism that is based on `torch.distributed` package at the module level. | Utilities --------- From the `torch.nn.utils` module | | | | --- | --- | | [`clip_grad_norm_`](generated/torch.nn.utils.clip_grad_norm_#torch.nn.utils.clip_grad_norm_ "torch.nn.utils.clip_grad_norm_") | Clips gradient norm of an iterable of parameters. | | [`clip_grad_value_`](generated/torch.nn.utils.clip_grad_value_#torch.nn.utils.clip_grad_value_ "torch.nn.utils.clip_grad_value_") | Clips gradient of an iterable of parameters at specified value. | | [`parameters_to_vector`](generated/torch.nn.utils.parameters_to_vector#torch.nn.utils.parameters_to_vector "torch.nn.utils.parameters_to_vector") | Convert parameters to one vector | | [`vector_to_parameters`](generated/torch.nn.utils.vector_to_parameters#torch.nn.utils.vector_to_parameters "torch.nn.utils.vector_to_parameters") | Convert one vector to the parameters | | | | | --- | --- | | [`prune.BasePruningMethod`](generated/torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod "torch.nn.utils.prune.BasePruningMethod") | Abstract base class for creation of new pruning techniques. | | | | | --- | --- | | [`prune.PruningContainer`](generated/torch.nn.utils.prune.pruningcontainer#torch.nn.utils.prune.PruningContainer "torch.nn.utils.prune.PruningContainer") | Container holding a sequence of pruning methods for iterative pruning. | | [`prune.Identity`](generated/torch.nn.utils.prune.identity#torch.nn.utils.prune.Identity "torch.nn.utils.prune.Identity") | Utility pruning method that does not prune any units but generates the pruning parametrization with a mask of ones. | | [`prune.RandomUnstructured`](generated/torch.nn.utils.prune.randomunstructured#torch.nn.utils.prune.RandomUnstructured "torch.nn.utils.prune.RandomUnstructured") | Prune (currently unpruned) units in a tensor at random. | | [`prune.L1Unstructured`](generated/torch.nn.utils.prune.l1unstructured#torch.nn.utils.prune.L1Unstructured "torch.nn.utils.prune.L1Unstructured") | Prune (currently unpruned) units in a tensor by zeroing out the ones with the lowest L1-norm. 
| | [`prune.RandomStructured`](generated/torch.nn.utils.prune.randomstructured#torch.nn.utils.prune.RandomStructured "torch.nn.utils.prune.RandomStructured") | Prune entire (currently unpruned) channels in a tensor at random. | | [`prune.LnStructured`](generated/torch.nn.utils.prune.lnstructured#torch.nn.utils.prune.LnStructured "torch.nn.utils.prune.LnStructured") | Prune entire (currently unpruned) channels in a tensor based on their Ln-norm. | | [`prune.CustomFromMask`](generated/torch.nn.utils.prune.customfrommask#torch.nn.utils.prune.CustomFromMask "torch.nn.utils.prune.CustomFromMask") | | | [`prune.identity`](generated/torch.nn.utils.prune.identity#torch.nn.utils.prune.identity "torch.nn.utils.prune.identity") | Applies pruning reparametrization to the tensor corresponding to the parameter called `name` in `module` without actually pruning any units. | | [`prune.random_unstructured`](generated/torch.nn.utils.prune.random_unstructured#torch.nn.utils.prune.random_unstructured "torch.nn.utils.prune.random_unstructured") | Prunes tensor corresponding to parameter called `name` in `module` by removing the specified `amount` of (currently unpruned) units selected at random. | | [`prune.l1_unstructured`](generated/torch.nn.utils.prune.l1_unstructured#torch.nn.utils.prune.l1_unstructured "torch.nn.utils.prune.l1_unstructured") | Prunes tensor corresponding to parameter called `name` in `module` by removing the specified `amount` of (currently unpruned) units with the lowest L1-norm. | | [`prune.random_structured`](generated/torch.nn.utils.prune.random_structured#torch.nn.utils.prune.random_structured "torch.nn.utils.prune.random_structured") | Prunes tensor corresponding to parameter called `name` in `module` by removing the specified `amount` of (currently unpruned) channels along the specified `dim` selected at random. | | [`prune.ln_structured`](generated/torch.nn.utils.prune.ln_structured#torch.nn.utils.prune.ln_structured "torch.nn.utils.prune.ln_structured") | Prunes tensor corresponding to parameter called `name` in `module` by removing the specified `amount` of (currently unpruned) channels along the specified `dim` with the lowest Ln-norm. | | [`prune.global_unstructured`](generated/torch.nn.utils.prune.global_unstructured#torch.nn.utils.prune.global_unstructured "torch.nn.utils.prune.global_unstructured") | Globally prunes tensors corresponding to all parameters in `parameters` by applying the specified `pruning_method`. | | [`prune.custom_from_mask`](generated/torch.nn.utils.prune.custom_from_mask#torch.nn.utils.prune.custom_from_mask "torch.nn.utils.prune.custom_from_mask") | Prunes tensor corresponding to parameter called `name` in `module` by applying the pre-computed mask in `mask`. | | [`prune.remove`](generated/torch.nn.utils.prune.remove#torch.nn.utils.prune.remove "torch.nn.utils.prune.remove") | Removes the pruning reparameterization from a module and the pruning method from the forward hook. | | [`prune.is_pruned`](generated/torch.nn.utils.prune.is_pruned#torch.nn.utils.prune.is_pruned "torch.nn.utils.prune.is_pruned") | Check whether `module` is pruned by looking for `forward_pre_hooks` in its modules that inherit from the `BasePruningMethod`. | | [`weight_norm`](generated/torch.nn.utils.weight_norm#torch.nn.utils.weight_norm "torch.nn.utils.weight_norm") | Applies weight normalization to a parameter in the given module. 
| | [`remove_weight_norm`](generated/torch.nn.utils.remove_weight_norm#torch.nn.utils.remove_weight_norm "torch.nn.utils.remove_weight_norm") | Removes the weight normalization reparameterization from a module. | | [`spectral_norm`](generated/torch.nn.utils.spectral_norm#torch.nn.utils.spectral_norm "torch.nn.utils.spectral_norm") | Applies spectral normalization to a parameter in the given module. | | [`remove_spectral_norm`](generated/torch.nn.utils.remove_spectral_norm#torch.nn.utils.remove_spectral_norm "torch.nn.utils.remove_spectral_norm") | Removes the spectral normalization reparameterization from a module. | Utility functions in other modules | | | | --- | --- | | [`nn.utils.rnn.PackedSequence`](generated/torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence "torch.nn.utils.rnn.PackedSequence") | Holds the data and list of `batch_sizes` of a packed sequence. | | [`nn.utils.rnn.pack_padded_sequence`](generated/torch.nn.utils.rnn.pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence "torch.nn.utils.rnn.pack_padded_sequence") | Packs a Tensor containing padded sequences of variable length. | | [`nn.utils.rnn.pad_packed_sequence`](generated/torch.nn.utils.rnn.pad_packed_sequence#torch.nn.utils.rnn.pad_packed_sequence "torch.nn.utils.rnn.pad_packed_sequence") | Pads a packed batch of variable length sequences. | | [`nn.utils.rnn.pad_sequence`](generated/torch.nn.utils.rnn.pad_sequence#torch.nn.utils.rnn.pad_sequence "torch.nn.utils.rnn.pad_sequence") | Pad a list of variable length Tensors with `padding_value` | | [`nn.utils.rnn.pack_sequence`](generated/torch.nn.utils.rnn.pack_sequence#torch.nn.utils.rnn.pack_sequence "torch.nn.utils.rnn.pack_sequence") | Packs a list of variable length Tensors | | [`nn.Flatten`](generated/torch.nn.flatten#torch.nn.Flatten "torch.nn.Flatten") | Flattens a contiguous range of dims into a tensor. | | [`nn.Unflatten`](generated/torch.nn.unflatten#torch.nn.Unflatten "torch.nn.Unflatten") | Unflattens a tensor dim expanding it to a desired shape. | Quantized Functions ------------------- Quantization refers to techniques for performing computations and storing tensors at lower bitwidths than floating point precision. PyTorch supports both per tensor and per channel asymmetric linear quantization. To learn more how to use quantized functions in PyTorch, please refer to the [Quantization](quantization#quantization-doc) documentation. Lazy Modules Initialization --------------------------- | | | | --- | --- | | [`nn.modules.lazy.LazyModuleMixin`](generated/torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin "torch.nn.modules.lazy.LazyModuleMixin") | A mixin for modules that lazily initialize parameters, also known as “lazy modules.” |
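As a quick illustration of the lazy-module mechanics described above (a minimal sketch, not part of the module tables), parameter creation is deferred until the first forward pass:

```
>>> import torch
>>> import torch.nn as nn
>>> lazy = nn.LazyLinear(out_features=4)   # in_features not known yet
>>> out = lazy(torch.randn(2, 10))         # first call materializes the weight
>>> lazy.weight.shape
torch.Size([4, 10])
```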
pytorch Multiprocessing package - torch.multiprocessing Multiprocessing package - torch.multiprocessing =============================================== torch.multiprocessing is a wrapper around the native [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing "(in Python v3.9)") module. It registers custom reducers that use shared memory to provide shared views on the same data in different processes. Once the tensor/storage is moved to shared\_memory (see [`share_memory_()`](tensors#torch.Tensor.share_memory_ "torch.Tensor.share_memory_")), it will be possible to send it to other processes without making any copies. The API is 100% compatible with the original module - it’s enough to change `import multiprocessing` to `import torch.multiprocessing` to have all the tensors sent through the queues or shared via other mechanisms, moved to shared memory. Because of the similarity of APIs we do not document most of this package’s contents, and we recommend referring to the very good docs of the original module. Warning If the main process exits abruptly (e.g. because of an incoming signal), Python’s `multiprocessing` sometimes fails to clean up its children. It’s a known caveat, so if you’re seeing any resource leaks after interrupting the interpreter, it probably means that this has just happened to you. Strategy management ------------------- `torch.multiprocessing.get_all_sharing_strategies()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/multiprocessing.html#get_all_sharing_strategies) Returns a set of sharing strategies supported on the current system. `torch.multiprocessing.get_sharing_strategy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/multiprocessing.html#get_sharing_strategy) Returns the current strategy for sharing CPU tensors. `torch.multiprocessing.set_sharing_strategy(new_strategy)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/multiprocessing.html#set_sharing_strategy) Sets the strategy for sharing CPU tensors. Parameters **new\_strategy** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – Name of the selected strategy. Should be one of the values returned by [`get_all_sharing_strategies()`](#torch.multiprocessing.get_all_sharing_strategies "torch.multiprocessing.get_all_sharing_strategies"). Sharing CUDA tensors -------------------- Sharing CUDA tensors between processes is supported only in Python 3, using the `spawn` or `forkserver` start method. Unlike CPU tensors, the sending process is required to keep the original tensor as long as the receiving process retains a copy of the tensor. The refcounting is implemented under the hood but requires users to follow these best practices. Warning If the consumer process dies abnormally due to a fatal signal, the shared tensor could be kept in memory forever as long as the sending process is running. 1. Release memory ASAP in the consumer.

```
## Good
x = queue.get()
# do something with x
del x
```

```
## Bad
x = queue.get()
# do something with x
# do everything else (producer has to keep x in memory)
```

2. Keep the producer process running until all consumers exit. This will prevent the producer process from releasing memory which is still in use by the consumer.

```
## producer
# send tensors, do something
event.wait()
```

```
## consumer
# receive tensors and use them
event.set()
```

3. Don’t pass received tensors.
```
# not going to work
x = queue.get()
queue_2.put(x)
```

```
# you need to create a process-local copy
x = queue.get()
x_clone = x.clone()
queue_2.put(x_clone)
```

```
# putting and getting from the same queue in the same process will likely result in a segfault
queue.put(tensor)
x = queue.get()
```

Sharing strategies
------------------

This section provides a brief overview of how the different sharing strategies work. Note that it applies only to CPU tensors - CUDA tensors will always use the CUDA API, as that's the only way they can be shared.

### File descriptor - `file_descriptor`

Note

This is the default strategy (except for macOS / OS X, where it's not supported).

This strategy will use file descriptors as shared memory handles. Whenever a storage is moved to shared memory, a file descriptor obtained from `shm_open` is cached with the object, and when it's going to be sent to other processes, the file descriptor will be transferred (e.g. via UNIX sockets) to it. The receiver will also cache the file descriptor and `mmap` it, to obtain a shared view onto the storage data.

Note that if many tensors are shared, this strategy will keep a large number of file descriptors open most of the time. If your system has low limits for the number of open file descriptors, and you can't raise them, you should use the `file_system` strategy.

### File system - `file_system`

This strategy will use file names given to `shm_open` to identify the shared memory regions. This has the benefit of not requiring the implementation to cache the file descriptors obtained from it, but at the same time it is prone to shared memory leaks. The file can't be deleted right after its creation, because other processes need to access it to open their views. If the processes fatally crash, or are killed, and don't call the storage destructors, the files will remain in the system. This is very serious, because the files keep using up memory until the system is restarted, or they're freed manually.

To counter the problem of shared memory file leaks, [`torch.multiprocessing`](#module-torch.multiprocessing "torch.multiprocessing") will spawn a daemon named `torch_shm_manager` that will isolate itself from the current process group, and will keep track of all shared memory allocations. Once all processes connected to it exit, it will wait a moment to ensure there will be no new connections, and will iterate over all shared memory files allocated by the group. If it finds that any of them still exist, they will be deallocated. We've tested this method and it proved to be robust to various failures. Still, if your system has high enough limits, and `file_descriptor` is a supported strategy, we do not recommend switching to this one.

Spawning subprocesses
---------------------

Note

Available for Python >= 3.4. This depends on the `spawn` start method in Python's `multiprocessing` package.

Spawning a number of subprocesses to perform some function can be done by creating `Process` instances and calling `join` to wait for their completion. This approach works fine when dealing with a single subprocess but presents potential issues when dealing with multiple processes. Namely, joining processes sequentially implies waiting for them in order, so if, say, the first process never terminates, failures in the other processes will go unnoticed. Also, there are no native facilities for error propagation.
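The `spawn` function documented below addresses these concerns. As a quick orientation, here is a minimal, hedged sketch of its typical use (the `worker` function and its message argument are illustrative, not part of the API):

```
import torch.multiprocessing as mp

def worker(i, msg):
    # `spawn` calls this as worker(i, *args), where `i` is the
    # process index it prepends automatically.
    print(f"process {i} says: {msg}")

if __name__ == '__main__':
    # Spawns 4 processes and blocks until all of them exit;
    # an error in any child is re-raised here in the parent.
    mp.spawn(worker, args=("hello",), nprocs=4, join=True)
```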
Concretely, `spawn` takes care of error propagation and out-of-order termination, and it will actively terminate the remaining processes upon detecting an error in one of them.

`torch.multiprocessing.spawn(fn, args=(), nprocs=1, join=True, daemon=False, start_method='spawn')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/multiprocessing/spawn.html#spawn)

Spawns `nprocs` processes that run `fn` with `args`.

If one of the processes exits with a non-zero exit status, the remaining processes are killed and an exception is raised with the cause of termination. If an exception is caught in a child process, it is forwarded and its traceback is included in the exception raised in the parent process.

Parameters

* **fn** (*function*) – Function called as the entrypoint of the spawned process. This function must be defined at the top level of a module so it can be pickled and spawned. This is a requirement imposed by multiprocessing. The function is called as `fn(i, *args)`, where `i` is the process index and `args` is the passed-through tuple of arguments.
* **args** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Arguments passed to `fn`.
* **nprocs** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of processes to spawn.
* **join** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Perform a blocking join on all processes.
* **daemon** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – The spawned processes' daemon flag. If set to True, daemonic processes will be created.
* **start\_method** (*string*) – (deprecated) this function will always use `spawn` as the start method. To use a different start method, use `start_processes()`.

Returns

None if `join` is `True`, `ProcessContext` if `join` is `False`

`class torch.multiprocessing.SpawnContext` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/multiprocessing/spawn.html#SpawnContext)

Returned by [`spawn()`](#torch.multiprocessing.spawn "torch.multiprocessing.spawn") when called with `join=False`.

`join(timeout=None)`

Tries to join one or more processes in this spawn context. If one of them exited with a non-zero exit status, this function kills the remaining processes and raises an exception with the cause of the first process exiting.

Returns `True` if all processes have been joined successfully, `False` if there are more processes that need to be joined.

Parameters

**timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Wait this long before giving up on waiting.

pytorch torch.fx torch.fx
========

Overview
--------

**This feature is under a Beta release and its API may change.**

FX is a toolkit for developers to use to transform `nn.Module` instances. FX consists of three main components: a **symbolic tracer,** an **intermediate representation**, and **Python code generation**.
A demonstration of these components in action:

```
import torch
# Simple module for demonstration
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.linear = torch.nn.Linear(4, 5)

    def forward(self, x):
        return self.linear(x + self.param).clamp(min=0.0, max=1.0)

module = MyModule()

from torch.fx import symbolic_trace
# Symbolic tracing frontend - captures the semantics of the module
symbolic_traced : torch.fx.GraphModule = symbolic_trace(module)

# High-level intermediate representation (IR) - Graph representation
print(symbolic_traced.graph)
"""
graph(x):
    %param : [#users=1] = self.param
    %add_1 : [#users=1] = call_function[target=<built-in function add>](args = (%x, %param), kwargs = {})
    %linear_1 : [#users=1] = call_module[target=linear](args = (%add_1,), kwargs = {})
    %clamp_1 : [#users=1] = call_method[target=clamp](args = (%linear_1,), kwargs = {min: 0.0, max: 1.0})
    return clamp_1
"""

# Code generation - valid Python code
print(symbolic_traced.code)
"""
def forward(self, x):
    param = self.param
    add_1 = x + param;  x = param = None
    linear_1 = self.linear(add_1);  add_1 = None
    clamp_1 = linear_1.clamp(min = 0.0, max = 1.0);  linear_1 = None
    return clamp_1
"""
```

The **symbolic tracer** performs "symbolic execution" of the Python code. It feeds fake values, called Proxies, through the code. Operations on these Proxies are recorded. More information about symbolic tracing can be found in the [`symbolic_trace()`](#torch.fx.symbolic_trace "torch.fx.symbolic_trace") and [`Tracer`](#torch.fx.Tracer "torch.fx.Tracer") documentation.

The **intermediate representation** is the container for the operations that were recorded during symbolic tracing. It consists of a list of Nodes that represent function inputs, callsites (to functions, methods, or [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") instances), and return values. More information about the IR can be found in the documentation for [`Graph`](#torch.fx.Graph "torch.fx.Graph"). The IR is the format on which transformations are applied.

**Python code generation** is what makes FX a Python-to-Python (or Module-to-Module) transformation toolkit. For each Graph IR, we can create valid Python code matching the Graph's semantics. This functionality is wrapped up in [`GraphModule`](#torch.fx.GraphModule "torch.fx.GraphModule"), which is a [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") instance that holds a [`Graph`](#torch.fx.Graph "torch.fx.Graph") as well as a `forward` method generated from the Graph.

Taken together, this pipeline of components (symbolic tracing → intermediate representation → transforms → Python code generation) constitutes the Python-to-Python transformation pipeline of FX. In addition, these components can be used separately. For example, symbolic tracing can be used in isolation to capture a form of the code for analysis (and not transformation) purposes. Code generation can be used for programmatically generating models, for example from a config file. There are many uses for FX!

Several example transformations can be found at the [examples](https://github.com/pytorch/examples/tree/master/fx) repository.

Writing Transformations
-----------------------

What is an FX transform? Essentially, it's a function that looks like this.
```
import torch
import torch.fx

def transform(m: torch.nn.Module,
              tracer_class : type = torch.fx.Tracer) -> torch.nn.Module:
    # Step 1: Acquire a Graph representing the code in `m`

    # NOTE: torch.fx.symbolic_trace is a wrapper around a call to
    # fx.Tracer.trace and constructing a GraphModule. We'll
    # split that out in our transform to allow the caller to
    # customize tracing behavior.
    graph : torch.fx.Graph = tracer_class().trace(m)

    # Step 2: Modify this Graph or create a new one
    graph = ...

    # Step 3: Construct a Module to return
    return torch.fx.GraphModule(m, graph)
```

Your transform will take in a [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module"), acquire a [`Graph`](#torch.fx.Graph "torch.fx.Graph") from it, do some modifications, and return a new [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module"). You should think of the [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") that your FX transform returns as identical to a regular [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") – you can pass it to another FX transform, you can pass it to TorchScript, or you can run it. Ensuring that the inputs and outputs of your FX transform are a [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") will allow for composability.

Note

It is also possible to modify an existing [`GraphModule`](#torch.fx.GraphModule "torch.fx.GraphModule") instead of creating a new one, like so:

```
import torch
import torch.fx

def transform(m : torch.nn.Module) -> torch.nn.Module:
    gm : torch.fx.GraphModule = torch.fx.symbolic_trace(m)

    # Modify gm.graph
    # <...>

    # Recompile the forward() method of `gm` from its Graph
    gm.recompile()

    return gm
```

Note that you MUST call [`GraphModule.recompile()`](#torch.fx.GraphModule.recompile "torch.fx.GraphModule.recompile") to bring the generated `forward()` method on the `GraphModule` in sync with the modified [`Graph`](#torch.fx.Graph "torch.fx.Graph").

Given that you've passed in a [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") that has been traced into a [`Graph`](#torch.fx.Graph "torch.fx.Graph"), there are now two primary approaches you can take to building a new [`Graph`](#torch.fx.Graph "torch.fx.Graph").
Let's see what we mean by that with a short example:

```
import torch
import torch.fx

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.linear = torch.nn.Linear(4, 5)

    def forward(self, x):
        return torch.topk(torch.sum(
            self.linear(x + self.linear.weight).relu(), dim=-1), 3)

m = MyModule()
gm = torch.fx.symbolic_trace(m)

gm.graph.print_tabular()
```

Here we define a module `MyModule` for demonstration purposes, instantiate it, symbolically trace it, then call the [`Graph.print_tabular()`](#torch.fx.Graph.print_tabular "torch.fx.Graph.print_tabular") method to print out a table showing the nodes of this [`Graph`](#torch.fx.Graph "torch.fx.Graph"):

| opcode | name | target | args | kwargs |
| --- | --- | --- | --- | --- |
| placeholder | x | x | () | {} |
| get\_attr | linear\_weight | linear.weight | () | {} |
| call\_function | add\_1 | <built-in function add> | (x, linear\_weight) | {} |
| call\_module | linear\_1 | linear | (add\_1,) | {} |
| call\_method | relu\_1 | relu | (linear\_1,) | {} |
| call\_function | sum\_1 | <built-in method sum …> | (relu\_1,) | {'dim': -1} |
| call\_function | topk\_1 | <built-in method topk …> | (sum\_1, 3) | {} |
| output | output | output | (topk\_1,) | {} |

We can use this information to answer the questions we posed above.

* What are the inputs to the method? In FX, method inputs are specified via special `placeholder` nodes. In this case, we have a single `placeholder` node with a `target` of `x`, meaning we have a single (non-self) argument named x.
* What are the operations within the method? The `get_attr`, `call_function`, `call_module`, and `call_method` nodes represent the operations in the method. A full treatment of the semantics of all of these can be found in the [`Node`](#torch.fx.Node "torch.fx.Node") documentation.
* What is the return value of the method? The return value in a [`Graph`](#torch.fx.Graph "torch.fx.Graph") is specified by a special `output` node.

Given that we now know the basics of how code is represented in FX, we can now explore how we would edit a [`Graph`](#torch.fx.Graph "torch.fx.Graph").

### Graph Manipulation

#### Direct Graph Manipulation

One approach to building this new [`Graph`](#torch.fx.Graph "torch.fx.Graph") is to directly manipulate your old one. To aid in this, we can simply take the [`Graph`](#torch.fx.Graph "torch.fx.Graph") we obtain from symbolic tracing and modify it. For example, let's say we desire to replace [`torch.add()`](generated/torch.add#torch.add "torch.add") calls with [`torch.mul()`](generated/torch.mul#torch.mul "torch.mul") calls.

```
import torch
import torch.fx as fx

# Sample module
class M(torch.nn.Module):
    def forward(self, x, y):
        return torch.add(x, y)

def transform(m: torch.nn.Module,
              tracer_class : type = fx.Tracer) -> torch.nn.Module:
    graph : fx.Graph = tracer_class().trace(m)
    # FX represents its Graph as an ordered list of
    # nodes, so we can iterate through them.
    for node in graph.nodes:
        # Checks if we're calling a function (i.e:
        # torch.add)
        if node.op == 'call_function':
            # The target attribute is the function
            # that call_function calls.
            if node.target == torch.add:
                node.target = torch.mul

    graph.lint() # Does some checks to make sure the
                 # Graph is well-formed.

    return fx.GraphModule(m, graph)
```

We can also do more involved [`Graph`](#torch.fx.Graph "torch.fx.Graph") rewrites, such as deleting or appending nodes.
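Whatever the rewrite, applying the resulting transform is an ordinary function call. A small hedged sketch using the `transform` defined above:

```
m = M()
transformed = transform(m)

# The returned GraphModule is a regular nn.Module; `torch.add`
# has been rewritten to `torch.mul`, so the output is x * y.
x, y = torch.rand(3), torch.rand(3)
assert torch.allclose(transformed(x, y), x * y)
```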
To aid in these transformations, FX has utility functions for transforming the graph that can be found in the [`Graph`](#torch.fx.Graph "torch.fx.Graph") documentation. An example of using these APIs to append a `torch.relu()` call can be found below.

```
# Specifies the insertion point. Any nodes added to the
# Graph within this scope will be inserted after `node`
with traced.graph.inserting_after(node):
    # Insert a new `call_function` node calling `torch.relu`
    new_node = traced.graph.call_function(
        torch.relu, args=(node,))

    # We want all places that used the value of `node` to
    # now use that value after the `relu` call we've added.
    # We use the `replace_all_uses_with` API to do this.
    node.replace_all_uses_with(new_node)
```

For simple transformations that only consist of substitutions, you can also make use of the [subgraph rewriter.](https://github.com/pytorch/pytorch/blob/master/torch/fx/subgraph_rewriter.py)

#### Subgraph Rewriting With replace\_pattern()

FX also provides another level of automation on top of direct graph manipulation. The [`replace_pattern()`](#torch.fx.replace_pattern "torch.fx.replace_pattern") API is essentially a "find/replace" tool for editing [`Graph`](#torch.fx.Graph "torch.fx.Graph")s. It allows you to specify a `pattern` and `replacement` function and it will trace through those functions, find instances of the group of operations in the `pattern` graph, and replace those instances with copies of the `replacement` graph. This can help to greatly automate tedious graph manipulation code, which can get unwieldy as the transformations get more complex.

#### Graph Manipulation Examples

* [Replace one op](https://github.com/pytorch/examples/blob/master/fx/replace_op.py)
* [Conv/Batch Norm fusion](https://github.com/pytorch/pytorch/blob/master/torch/fx/experimental/fuser.py)
* [replace\_pattern: Basic usage](https://github.com/pytorch/examples/blob/master/fx/subgraph_rewriter_basic_use.py)
* [Quantization](https://pytorch.org/docs/master/quantization.html#prototype-fx-graph-mode-quantization)
* [Invert Transformation](https://github.com/pytorch/examples/blob/master/fx/invert.py)

### Proxy/Retracing

Another way of manipulating [`Graph`](#torch.fx.Graph "torch.fx.Graph")s is by reusing the [`Proxy`](#torch.fx.Proxy "torch.fx.Proxy") machinery used in symbolic tracing. For example, let's imagine that we wanted to write a transformation that decomposed PyTorch functions into smaller operations. It would transform every `F.relu(x)` call into `(x > 0) * x`. One possibility would be to perform the requisite graph rewriting to insert the comparison and multiplication after the `F.relu`, and then clean up the original `F.relu`. However, we can automate this process by using [`Proxy`](#torch.fx.Proxy "torch.fx.Proxy") objects to automatically record operations into the [`Graph`](#torch.fx.Graph "torch.fx.Graph").

To use this method, we write the operations that we want inserted as regular PyTorch code and invoke that code with [`Proxy`](#torch.fx.Proxy "torch.fx.Proxy") objects as arguments. These [`Proxy`](#torch.fx.Proxy "torch.fx.Proxy") objects will capture the operations that are performed on them and append them to the [`Graph`](#torch.fx.Graph "torch.fx.Graph").
```
import torch
import torch.fx as fx
import torch.nn.functional as F

# Note that this decomposition rule can be read as regular Python
def relu_decomposition(x):
    return (x > 0) * x

decomposition_rules = {}
decomposition_rules[F.relu] = relu_decomposition

def decompose(model: torch.nn.Module,
              tracer_class : type = fx.Tracer) -> torch.nn.Module:
    """
    Decompose `model` into smaller constituent operations.
    Currently, this only supports decomposing ReLU into its
    mathematical definition: (x > 0) * x
    """
    graph : fx.Graph = tracer_class().trace(model)
    new_graph = fx.Graph()
    env = {}
    for node in graph.nodes:
        if node.op == 'call_function' and node.target in decomposition_rules:
            # By wrapping the arguments with proxies,
            # we can dispatch to the appropriate
            # decomposition rule and implicitly add it
            # to the Graph by symbolically tracing it.
            proxy_args = [fx.Proxy(env[x.name])
                          if isinstance(x, fx.Node) else x for x in node.args]
            output_proxy = decomposition_rules[node.target](*proxy_args)

            # Operations on `Proxy` always yield new `Proxy`s, and the
            # return value of our decomposition rule is no exception.
            # We need to extract the underlying `Node` from the `Proxy`
            # to use it in subsequent iterations of this transform.
            new_node = output_proxy.node
            env[node.name] = new_node
        else:
            # Default case: we don't have a decomposition rule for this
            # node, so just copy the node over into the new graph.
            new_node = new_graph.node_copy(node, lambda x: env[x.name])
            env[node.name] = new_node
    return fx.GraphModule(model, new_graph)
```

In addition to avoiding explicit graph manipulation, using [`Proxy`](#torch.fx.Proxy "torch.fx.Proxy")s also allows you to specify your rewrite rules as native Python code. For transformations that require a large amount of rewrite rules (such as vmap or grad), this can often improve readability and maintainability of the rules. A worked example of using [`Proxy`](#torch.fx.Proxy "torch.fx.Proxy")s for [`Graph`](#torch.fx.Graph "torch.fx.Graph") manipulation can be found [here](https://github.com/pytorch/examples/blob/master/fx/proxy_based_graph_creation.py).

### The Interpreter Pattern

A useful code organizational pattern in FX is to loop over all the [`Node`](#torch.fx.Node "torch.fx.Node")s in a [`Graph`](#torch.fx.Graph "torch.fx.Graph") and execute them. This can be used for several things including runtime analysis of values flowing through the graph or transformation of the code via retracing with [`Proxy`](#torch.fx.Proxy "torch.fx.Proxy")s. For example, suppose we want to run a [`GraphModule`](#torch.fx.GraphModule "torch.fx.GraphModule") and record the [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") shape and dtype properties on the nodes as we see them at runtime. That might look like:

```
import torch
import torch.fx
from torch.fx.node import Node

from typing import Dict

class ShapeProp:
    """
    Shape propagation. This class takes a `GraphModule`.
    Then, its `propagate` method executes the `GraphModule`
    node-by-node with the given arguments. As each operation
    executes, the ShapeProp class stores away the shape and
    element type for the output values of each operation on
    the `shape` and `dtype` attributes of the operation's
    `Node`.
    """
    def __init__(self, mod):
        self.mod = mod
        self.graph = mod.graph
        self.modules = dict(self.mod.named_modules())

    def propagate(self, *args):
        args_iter = iter(args)
        env : Dict[str, Node] = {}

        def load_arg(a):
            return torch.fx.graph.map_arg(a, lambda n: env[n.name])

        def fetch_attr(target : str):
            target_atoms = target.split('.')
            attr_itr = self.mod
            for i, atom in enumerate(target_atoms):
                if not hasattr(attr_itr, atom):
                    raise RuntimeError(f"Node referenced nonexistent target {'.'.join(target_atoms[:i])}")
                attr_itr = getattr(attr_itr, atom)
            return attr_itr

        for node in self.graph.nodes:
            if node.op == 'placeholder':
                result = next(args_iter)
            elif node.op == 'get_attr':
                result = fetch_attr(node.target)
            elif node.op == 'call_function':
                result = node.target(*load_arg(node.args), **load_arg(node.kwargs))
            elif node.op == 'call_method':
                self_obj, *args = load_arg(node.args)
                kwargs = load_arg(node.kwargs)
                result = getattr(self_obj, node.target)(*args, **kwargs)
            elif node.op == 'call_module':
                result = self.modules[node.target](*load_arg(node.args), **load_arg(node.kwargs))

            # This is the only code specific to shape propagation.
            # You can delete this `if` branch and this becomes
            # a generic GraphModule interpreter.
            if isinstance(result, torch.Tensor):
                node.shape = result.shape
                node.dtype = result.dtype

            env[node.name] = result

        return load_arg(self.graph.result)
```

As you can see, a full interpreter for FX is not that complicated, but it can be very useful. To ease using this pattern, we provide the [`Interpreter`](#torch.fx.Interpreter "torch.fx.Interpreter") class, which encompasses the above logic in a way that certain aspects of the interpreter's execution can be overridden via method overrides.

In addition to executing operations, we can also generate a new `Graph` by feeding [`Proxy`](#torch.fx.Proxy "torch.fx.Proxy") values through an interpreter. Similarly, we provide the [`Transformer`](#torch.fx.Transformer "torch.fx.Transformer") class to encompass this pattern. [`Transformer`](#torch.fx.Transformer "torch.fx.Transformer") behaves similarly to [`Interpreter`](#torch.fx.Interpreter "torch.fx.Interpreter"), but instead of calling the `run` method to get a concrete output value from the Module, you would call the [`Transformer.transform()`](#torch.fx.Transformer.transform "torch.fx.Transformer.transform") method to return a new [`GraphModule`](#torch.fx.GraphModule "torch.fx.GraphModule") which was subject to any transformation rules you installed as overridden methods.

#### Examples of the Interpreter Pattern

* [Shape Propagation](https://github.com/pytorch/pytorch/blob/master/torch/fx/experimental/shape_prop.py)
* [Performance Profiler](https://github.com/pytorch/tutorials/pull/1319)

Debugging
---------

### Introduction

Often in the course of authoring transformations, our code will not be quite right. In this case, we may need to do some debugging. The key is to work backwards: first, check the results of invoking the generated module to prove or disprove correctness. Then, inspect and debug the generated code. Then, debug the process of transformations that led to the generated code.

If you're not familiar with debuggers, please see the auxiliary section [Available Debuggers](#available-debuggers).
### Checking Correctness of Modules

Because the output of most deep learning modules consists of floating point [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") instances, checking for equivalence between the results of two [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") instances is not as straightforward as doing a simple equality check. To motivate this, let's use an example:

```
import torch
import torch.fx
import torchvision.models as models

def transform(m : torch.nn.Module) -> torch.nn.Module:
    gm = torch.fx.symbolic_trace(m)

    # Imagine we're doing some transforms here
    # <...>

    gm.recompile()

    return gm

resnet18 = models.resnet18()
transformed_resnet18 = transform(resnet18)

input_image = torch.randn(5, 3, 224, 224)

assert resnet18(input_image) == transformed_resnet18(input_image)
"""
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
"""
```

Here, we've tried to check equality of the values of two deep learning models with the `==` equality operator. However, this is not well-defined, both because that operator returns a tensor and not a bool, and because comparison of floating point values should use a margin of error (or epsilon) to account for the non-associativity of floating point operations (see [here](https://floating-point-gui.de/errors/comparison/) for more details). We can use [`torch.allclose()`](generated/torch.allclose#torch.allclose "torch.allclose") instead, which will give us an approximate comparison taking into account a relative and absolute tolerance threshold:

```
assert torch.allclose(resnet18(input_image), transformed_resnet18(input_image))
```

This is the first tool in our toolbox to check if transformed modules are behaving as we expect compared to a reference implementation.

### Debugging the Generated Code

Because FX generates the `forward()` function on [`GraphModule`](#torch.fx.GraphModule "torch.fx.GraphModule")s, using traditional debugging techniques like `print` statements or `pdb` is not as straightforward. Luckily, we have several techniques we can use for debugging the generated code.

#### Use `pdb`

Invoke `pdb` to step into the running program. Although the code that represents the [`Graph`](#torch.fx.Graph "torch.fx.Graph") is not in any source file, we can still step into it manually using `pdb` when the forward pass is invoked.

```
import torch
import torch.fx as fx
import torchvision.models as models

def my_pass(inp: torch.nn.Module, tracer_class : type = fx.Tracer) -> torch.nn.Module:
    graph = tracer_class().trace(inp)
    # Transformation logic here
    # <...>

    # Return new Module
    return fx.GraphModule(inp, graph)

my_module = models.resnet18()
my_module_transformed = my_pass(my_module)

input_value = torch.randn(5, 3, 224, 224)

# When this line is executed at runtime, we will be dropped into an
# interactive `pdb` prompt. We can use the `step` or `s` command to
# step into the execution of the next line
import pdb; pdb.set_trace()

my_module_transformed(input_value)
```

#### Print the Generated Code

If you'd like to run the same code multiple times, then it can be a bit tedious to step to the right code with `pdb`. In that case, one approach is to simply copy-paste the generated `forward` pass into your code and examine it from there.

```
# Assume that `traced` is a GraphModule that has undergone some
# number of transforms

# Copy this code for later
print(traced)
# Print the code generated from symbolic tracing.
# This outputs:
"""
def forward(self, y):
    x = self.x
    add_1 = x + y;  x = y = None
    return add_1
"""

# Subclass the original Module
class SubclassM(M):
    def __init__(self):
        super().__init__()

    # Paste the generated `forward` function (the one we printed and
    # copied above) here
    def forward(self, y):
        x = self.x
        add_1 = x + y;  x = y = None
        return add_1

# Create an instance of the original, untraced Module. Then, create an
# instance of the Module with the copied `forward` function. We can
# now compare the output of both the original and the traced version.
pre_trace = M()
post_trace = SubclassM()
```

#### Use the `to_folder` Function From `GraphModule`

[`GraphModule.to_folder()`](#torch.fx.GraphModule.to_folder "torch.fx.GraphModule.to_folder") is a method in `GraphModule` that allows you to dump out the generated FX code to a folder. Although copying the forward pass into the code often suffices as in [Print the Generated Code](#print-the-generated-code), it may be easier to examine modules and parameters using `to_folder`.

```
from torch.fx import symbolic_trace

m = symbolic_trace(M())
m.to_folder("foo", "Bar")
from foo import Bar
y = Bar()
```

After running the above example, we can then look at the code within `foo/module.py` and modify it as desired (e.g. adding `print` statements or using `pdb`) to debug the generated code.

### Debugging the Transformation

Now that we've identified that a transformation is creating incorrect code, it's time to debug the transformation itself. First, we'll check the [Limitations of Symbolic Tracing](#limitations-of-symbolic-tracing) section in the documentation. Once we verify that tracing is working as expected, the goal becomes figuring out what went wrong during our `GraphModule` transformation. There may be a quick answer in [Writing Transformations](#writing-transformations), but, if not, there are several ways to examine our traced module:

```
import torch
from torch.fx import symbolic_trace

# Sample Module
class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# Create an instance of `M`
m = M()

# Symbolically trace an instance of `M` (returns a GraphModule). In
# this example, we'll only be discussing how to inspect a
# GraphModule, so we aren't showing any sample transforms for the
# sake of brevity.
traced = symbolic_trace(m)

# Print the code produced by tracing the module.
print(traced)
# The generated `forward` function is:
"""
def forward(self, x, y):
    add_1 = x + y;  x = y = None
    return add_1
"""

# Print the internal Graph.
print(traced.graph)
# This print-out returns:
"""
graph(x, y):
    %add_1 : [#users=1] = call_function[target=<built-in function add>](args = (%x, %y), kwargs = {})
    return add_1
"""

# Print a tabular representation of the internal Graph.
traced.graph.print_tabular()
# This gives us:
"""
opcode         name    target                   args    kwargs
-------------  ------  -----------------------  ------  --------
placeholder    x       x                        ()      {}
placeholder    y       y                        ()      {}
call_function  add_1   <built-in function add>  (x, y)  {}
"""
```

Using the utility functions above, we can compare our traced Module before and after we've applied our transformations. Sometimes, a simple visual comparison is enough to trace down a bug. If it's still not clear what's going wrong, a debugger like `pdb` can be a good next step.
Going off of the example above, consider the following code:

```
import torch
import torch.fx as fx

# Sample user-defined function
def transform_graph(module: torch.nn.Module, tracer_class : type = fx.Tracer) -> torch.nn.Module:
    # Get the Graph from our traced Module
    g = tracer_class().trace(module)

    """
    Transformations on `g` go here
    """

    return fx.GraphModule(module, g)

# Transform the Graph
transformed = transform_graph(traced)

# Print the new code after our transforms. Check to see if it was
# what we expected
print(transformed)
```

Using the above example, let's say that the call to `print(transformed)` showed us that there was an error in our transforms. We want to find what goes wrong using a debugger. We start a `pdb` session. We can see what's happening during the transform by breaking on `transform_graph(traced)`, then pressing `s` to "step into" the call to `transform_graph(traced)`.

We may also have good luck by editing the `print_tabular` method to print different attributes of the Nodes in the Graph. (For example, we might want to see the Node's `input_nodes` and `users`.)

### Available Debuggers

The most common Python debugger is [pdb](https://docs.python.org/3/library/pdb.html). You can start your program in "debug mode" with `pdb` by typing `python -m pdb FILENAME.py` into the command line, where `FILENAME` is the name of the file you want to debug. After that, you can use the `pdb` [debugger commands](https://docs.python.org/3/library/pdb.html#debugger-commands) to move through your running program stepwise. It's common to set a breakpoint (`b LINE-NUMBER`) when you start `pdb`, then call `c` to run the program until that point. This prevents you from having to step through each line of execution (using `s` or `n`) to get to the part of the code you want to examine. Alternatively, you can write `import pdb; pdb.set_trace()` before the line you want to break at. If you add `pdb.set_trace()`, your program will automatically start in debug mode when you run it. (In other words, you can just type `python FILENAME.py` into the command line instead of `python -m pdb FILENAME.py`.) Once you're running your file in debug mode, you can step through the code and examine your program's internal state using certain commands. There are many excellent tutorials on `pdb` online, including RealPython's ["Python Debugging With Pdb"](https://realpython.com/python-debugging-pdb/).

IDEs like PyCharm or VSCode usually have a debugger built in. In your IDE, you can choose to either a) use `pdb` by pulling up a terminal window in your IDE (e.g. View → Terminal in VSCode), or b) use the built-in debugger (usually a graphical wrapper around `pdb`).

Limitations of Symbolic Tracing
-------------------------------

FX uses a system of **symbolic tracing** (a.k.a [symbolic execution](https://en.wikipedia.org/wiki/Symbolic_execution)) to capture the semantics of programs in a transformable/analyzable form. The system is **tracing** in that it executes the program (really a [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") or function) to record operations. It is **symbolic** in that the data flowing through the program during this execution is not real data, but rather symbols ([`Proxy`](#torch.fx.Proxy "torch.fx.Proxy") in FX parlance).

Although symbolic tracing works for most neural net code, it has some limitations.

### Dynamic Control Flow

The main limitation of symbolic tracing is that it does not currently support *dynamic control flow*.
That is, loops or `if` statements where the condition may depend on the input values of the program.

For example, let's examine the following program:

```
def func_to_trace(x):
    dim0 = x.size(0)
    if dim0 == 3:
        return torch.relu(x)
    else:
        return torch.neg(x)

traced = torch.fx.symbolic_trace(func_to_trace)
"""
  <...>
  File "dyn.py", line 6, in func_to_trace
    if dim0 == 3:
  File "pytorch/torch/fx/proxy.py", line 155, in __bool__
    return self.tracer.to_bool(self)
  File "pytorch/torch/fx/proxy.py", line 85, in to_bool
    raise TraceError('symbolically traced variables cannot be used as inputs to control flow')
torch.fx.proxy.TraceError: symbolically traced variables cannot be used as inputs to control flow
"""
```

The condition to the `if` statement relies on the value of `dim0`, which eventually relies on the value of `x`, a function input. Since `x` can change (i.e. if you pass a new input tensor to the traced function), this is *dynamic control flow*. The traceback walks back up through your code to show you where this situation happens.

#### Static Control Flow

On the other hand, so-called *static control flow* is supported. Static control flow is loops or `if` statements whose value cannot change across invocations. Typically, in PyTorch programs, this control flow arises for code making decisions about a model's architecture based on hyper-parameters. As a concrete example:

```
import torch
import torch.fx

class MyModule(torch.nn.Module):
    def __init__(self, do_activation : bool = False):
        super().__init__()
        self.do_activation = do_activation
        self.linear = torch.nn.Linear(512, 512)

    def forward(self, x):
        x = self.linear(x)
        # This if-statement is so-called static control flow.
        # Its condition does not depend on any input values
        if self.do_activation:
            x = torch.relu(x)
        return x

without_activation = MyModule(do_activation=False)
with_activation = MyModule(do_activation=True)

traced_without_activation = torch.fx.symbolic_trace(without_activation)
print(traced_without_activation.code)
"""
def forward(self, x):
    linear_1 = self.linear(x);  x = None
    return linear_1
"""

traced_with_activation = torch.fx.symbolic_trace(with_activation)
print(traced_with_activation.code)
"""
import torch
def forward(self, x):
    linear_1 = self.linear(x);  x = None
    relu_1 = torch.relu(linear_1);  linear_1 = None
    return relu_1
"""
```

The if-statement `if self.do_activation` does not depend on any function inputs, thus it is static. `do_activation` can be considered to be a hyper-parameter, and the traces of different instances of `MyModule` with different values for that parameter have different code. This is a valid pattern that is supported by symbolic tracing.

Many instances of dynamic control flow are semantically static control flow. These instances can be made to support symbolic tracing by removing the data dependencies on input values, for example by moving values to `Module` attributes or by passing constant values during symbolic tracing:

```
import torch.fx as fx

def f(x, flag):
    if flag: return x
    else: return x*2

fx.symbolic_trace(f) # Fails!

def wrapper(flag):
    return lambda x: f(x, flag)

new_f = wrapper(flag=True)
fx.symbolic_trace(new_f)
```

In the case of truly dynamic control flow, the sections of the program that contain this code can be traced as calls to a leaf module (see [Customizing Tracing with the Tracer class](#customizing-tracing)) or to a leaf function (see [`wrap()`](#torch.fx.wrap "torch.fx.wrap")) rather than being traced through.
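Relatedly, the `concrete_args` parameter of `symbolic_trace()` (documented in the API Reference below) can bake such constant values in without hand-writing a wrapper. A hedged sketch of that approach:

```
import torch.fx as fx

def f(x, flag):
    if flag: return x
    else: return x*2

# `flag` is treated as a concrete constant rather than a Proxy, so
# the `if` becomes static control flow during tracing.
traced = fx.symbolic_trace(f, concrete_args={'flag': True})
```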
### Non-`torch` Functions

FX uses `__torch_function__` as the mechanism by which it intercepts calls (see the [technical overview](https://github.com/pytorch/pytorch/blob/master/torch/fx/OVERVIEW.md#technical-details) for more information about this). Some functions, such as builtin Python functions or those in the `math` module, are not covered by `__torch_function__`, but we would still like to capture them in symbolic tracing. For example:

```
import torch
import torch.fx
from math import sqrt

def normalize(x):
    """
    Normalize `x` by the size of the batch dimension
    """
    return x / sqrt(len(x))

# It's valid Python code
normalize(torch.rand(3, 4))

traced = torch.fx.symbolic_trace(normalize)
"""
  <...>
  File "sqrt.py", line 9, in normalize
    return x / sqrt(len(x))
  File "pytorch/torch/fx/proxy.py", line 161, in __len__
    raise RuntimeError("'len' is not supported in symbolic tracing by default. If you want "
RuntimeError: 'len' is not supported in symbolic tracing by default. If you want this call to be recorded, please call torch.fx.wrap('len') at module scope
"""
```

The error tells us that the built-in function `len` is not supported. We can make it so that functions like this are recorded in the trace as direct calls using the [`wrap()`](#torch.fx.wrap "torch.fx.wrap") API:

```
torch.fx.wrap('len')
torch.fx.wrap('sqrt')

traced = torch.fx.symbolic_trace(normalize)

print(traced.code)
"""
import math
def forward(self, x):
    len_1 = len(x)
    sqrt_1 = math.sqrt(len_1);  len_1 = None
    truediv = x / sqrt_1;  x = sqrt_1 = None
    return truediv
"""
```

### Customizing Tracing with the `Tracer` class

The [`Tracer`](#torch.fx.Tracer "torch.fx.Tracer") class is the class that underlies the implementation of `symbolic_trace`. The behavior of tracing can be customized by subclassing Tracer, like so:

```
class MyCustomTracer(torch.fx.Tracer):
    # Inside here you can override various methods
    # to customize tracing. See the `Tracer` API
    # reference
    pass

# Let's use this custom tracer to trace through this module
class MyModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + torch.ones(3, 4)

mod = MyModule()

traced_graph = MyCustomTracer().trace(mod)
# trace() returns a Graph. Let's wrap it up in a
# GraphModule to make it runnable
traced = torch.fx.GraphModule(mod, traced_graph)
```

#### Leaf Modules

Leaf Modules are the modules that appear as calls in the symbolic trace rather than being traced through. The default set of leaf modules is the set of standard `torch.nn` module instances. For example:

```
class MySpecialSubmodule(torch.nn.Module):
    def forward(self, x):
        return torch.neg(x)

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 4)
        self.submod = MySpecialSubmodule()

    def forward(self, x):
        return self.submod(self.linear(x))

traced = torch.fx.symbolic_trace(MyModule())
print(traced.code)
# `linear` is preserved as a call, yet `submod` is traced through.
# This is because the default set of "Leaf Modules" includes all
# standard `torch.nn` modules.
"""
import torch
def forward(self, x):
    linear_1 = self.linear(x);  x = None
    neg_1 = torch.neg(linear_1);  linear_1 = None
    return neg_1
"""
```

The set of leaf modules can be customized by overriding [`Tracer.is_leaf_module()`](#torch.fx.Tracer.is_leaf_module "torch.fx.Tracer.is_leaf_module").

### Miscellanea

* Tensor constructors (e.g. `torch.zeros`, `torch.ones`, `torch.rand`, `torch.randn`, `torch.sparse_coo_tensor`) are currently not traceable.
  + The deterministic constructors (`zeros`, `ones`) can be used and the value they produce will be embedded in the trace as a constant. This is only problematic if the arguments to these constructors refer to dynamic input sizes. In this case, `ones_like` or `zeros_like` may be a viable substitute.
  + Nondeterministic constructors (`rand`, `randn`) will have a single random value embedded in the trace. This is likely not the intended behavior.
  + This behavior may be fixed in a future release.
* Type annotations
  + Python 3-style type annotations (e.g. `func(x : torch.Tensor, y : int) -> torch.Tensor`) are supported and will be preserved by symbolic tracing.
  + Python 2-style comment type annotations `# type: (torch.Tensor, int) -> torch.Tensor` are not currently supported.
  + Annotations on local names within a function are not currently supported.

API Reference
-------------

`torch.fx.symbolic_trace(root, concrete_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#symbolic_trace)

Symbolic tracing API

Given an `nn.Module` or function instance `root`, this function will return a `GraphModule` constructed by recording operations seen while tracing through `root`.

Parameters

* **root** (*Union**[*[torch.nn.Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")*,* *Callable**]*) – Module or function to be traced and converted into a Graph representation.
* **concrete\_args** (*Optional**[**Dict**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *any**]**]*) – Concrete arguments that should not be treated as Proxies.

Returns

A Module created from the recorded operations from `root`.

Return type

[GraphModule](#torch.fx.GraphModule "torch.fx.GraphModule")

`torch.fx.wrap(fn_or_name)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#wrap)

This function can be called at module-level scope to register fn\_or\_name as a "leaf function". A "leaf function" will be preserved as a CallFunction node in the FX trace instead of being traced through:

```
# foo/bar/baz.py
def my_custom_function(x, y):
    return x * x + y * y

torch.fx.wrap('my_custom_function')

def fn_to_be_traced(x, y):
    # When symbolic tracing, the below call to my_custom_function will be inserted into
    # the graph rather than tracing it.
    return my_custom_function(x, y)
```

This function can also equivalently be used as a decorator:

```
# foo/bar/baz.py
@torch.fx.wrap
def my_custom_function(x, y):
    return x * x + y * y
```

A wrapped function can be thought of as a "leaf function", analogous to the concept of "leaf modules", that is, a function that is left as a call in the FX trace rather than being traced through.

Parameters

**fn\_or\_name** (*Union**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *Callable**]*) – The function or name of the global function to insert into the graph when it's called

`class torch.fx.GraphModule(root, graph, class_name='GraphModule')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph_module.html#GraphModule)

GraphModule is an nn.Module generated from an fx.Graph. A GraphModule has a `graph` attribute, as well as `code` and `forward` attributes generated from that `graph`.

Warning

When `graph` is reassigned, `code` and `forward` will be automatically regenerated. However, if you edit the contents of the `graph` without reassigning the `graph` attribute itself, you must call `recompile()` to update the generated code.
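A minimal sketch of the workflow described in the warning above: edit the contained `graph` in place, then call `recompile()` to regenerate the code (the add-to-mul rewrite here is illustrative):

```
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.add(x, x)

gm = torch.fx.symbolic_trace(M())

# Edit the contents of `gm.graph` without reassigning the attribute...
for node in gm.graph.nodes:
    if node.op == 'call_function' and node.target == torch.add:
        node.target = torch.mul

# ...so the generated `code` and `forward` must be regenerated by hand.
gm.recompile()
```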
`__init__(root, graph, class_name='GraphModule')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph_module.html#GraphModule.__init__)

Construct a GraphModule.

Parameters

* **root** (*Union**[*[torch.nn.Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")*,* *Dict**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *Any**]**]*) – `root` can either be an nn.Module instance or a Dict mapping strings to any attribute type. In the case that `root` is a Module, any references to Module-based objects (via qualified name) in the Graph's Nodes' `target` field will be copied over from the respective place within `root`'s Module hierarchy into the GraphModule's module hierarchy. In the case that `root` is a dict, the qualified name found in a Node's `target` will be looked up directly in the dict's keys. The object mapped to by the Dict will be copied over into the appropriate place within the GraphModule's module hierarchy.
* **graph** ([Graph](#torch.fx.Graph "torch.fx.Graph")) – `graph` contains the nodes this GraphModule should use for code generation
* **class\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – `class_name` denotes the name of this GraphModule for debugging purposes. If it's unset, all error messages will report as originating from `GraphModule`. It may be helpful to set this to `root`'s original name or a name that makes sense within the context of your transform.

`property code`

Return the Python code generated from the `Graph` underlying this `GraphModule`.

`property graph`

Return the `Graph` underlying this `GraphModule`.

`recompile()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph_module.html#GraphModule.recompile)

Recompile this GraphModule from its `graph` attribute. This should be called after editing the contained `graph`, otherwise the generated code of this `GraphModule` will be out of date.

`to_folder(folder, module_name='FxModule')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph_module.html#GraphModule.to_folder)

Dumps out module to `folder` with `module_name` so that it can be imported with `from <folder> import <module_name>`

Parameters

* **folder** (*Union**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* [os.PathLike](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.9)")*]*) – The folder to write the code out to
* **module\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – Top-level name to use for the `Module` while writing out the code

`class torch.fx.Graph` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph)

`Graph` is the main data structure used in the FX Intermediate Representation. It consists of a series of `Node`s, each representing callsites (or other syntactic constructs). The list of `Node`s, taken together, constitutes a valid Python function.
For example, the following code

```
import torch
import torch.fx

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.linear = torch.nn.Linear(4, 5)

    def forward(self, x):
        return torch.topk(torch.sum(self.linear(x + self.linear.weight).relu(), dim=-1), 3)

m = MyModule()
gm = torch.fx.symbolic_trace(m)
```

will produce the following Graph:

```
print(gm.graph)
```

```
graph(x):
    %linear_weight : [#users=1] = self.linear.weight
    %add_1 : [#users=1] = call_function[target=operator.add](args = (%x, %linear_weight), kwargs = {})
    %linear_1 : [#users=1] = call_module[target=linear](args = (%add_1,), kwargs = {})
    %relu_1 : [#users=1] = call_method[target=relu](args = (%linear_1,), kwargs = {})
    %sum_1 : [#users=1] = call_function[target=torch.sum](args = (%relu_1,), kwargs = {dim: -1})
    %topk_1 : [#users=1] = call_function[target=torch.topk](args = (%sum_1, 3), kwargs = {})
    return topk_1
```

For the semantics of operations represented in the `Graph`, please see [`Node`](#torch.fx.Node "torch.fx.Node").

`__init__()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.__init__)

Construct an empty Graph.

`call_function(the_function, args=None, kwargs=None, type_expr=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.call_function)

Insert a `call_function` `Node` into the `Graph`. A `call_function` node represents a call to a Python callable, specified by `the_function`.

Parameters

* **the\_function** (*Callable**[**..**,* *Any**]*) – The function to be called. Can be any PyTorch operator, Python function, or member of the `builtins` or `operator` namespaces.
* **args** (*Optional**[**Tuple**[**Argument**,* *..**]**]*) – The positional arguments to be passed to the called function.
* **kwargs** (*Optional**[**Dict**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *Argument**]**]*) – The keyword arguments to be passed to the called function
* **type\_expr** (*Optional**[**Any**]*) – an optional type annotation representing the Python type the output of this node will have.

Returns

The newly created and inserted `call_function` node.

Note

The same insertion point and type expression rules apply for this method as [`Graph.create_node()`](#torch.fx.Graph.create_node "torch.fx.Graph.create_node").

`call_method(method_name, args=None, kwargs=None, type_expr=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.call_method)

Insert a `call_method` `Node` into the `Graph`. A `call_method` node represents a call to a given method on the 0th element of `args`.

Parameters

* **method\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The name of the method to apply to the self argument. For example, if args[0] is a `Node` representing a `Tensor`, then to call `relu()` on that `Tensor`, pass `relu` to `method_name`.
* **args** (*Optional**[**Tuple**[**Argument**,* *..**]**]*) – The positional arguments to be passed to the called method. Note that this *should* include a `self` argument.
* **kwargs** (*Optional**[**Dict**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *Argument**]**]*) – The keyword arguments to be passed to the called method
* **type\_expr** (*Optional**[**Any**]*) – an optional type annotation representing the Python type the output of this node will have.

Returns

The newly created and inserted `call_method` node.
Note

The same insertion point and type expression rules apply for this method as [`Graph.create_node()`](#torch.fx.Graph.create_node "torch.fx.Graph.create_node").

`call_module(module_name, args=None, kwargs=None, type_expr=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.call_module)

Insert a `call_module` `Node` into the `Graph`. A `call_module` node represents a call to the forward() function of a `Module` in the `Module` hierarchy.

Parameters

* **module\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The qualified name of the `Module` in the `Module` hierarchy to be called. For example, if the traced `Module` has a submodule named `foo`, which has a submodule named `bar`, the qualified name `foo.bar` should be passed as `module_name` to call that module.
* **args** (*Optional**[**Tuple**[**Argument**,* *..**]**]*) – The positional arguments to be passed to the called method. Note that this should *not* include a `self` argument.
* **kwargs** (*Optional**[**Dict**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *Argument**]**]*) – The keyword arguments to be passed to the called method
* **type\_expr** (*Optional**[**Any**]*) – an optional type annotation representing the Python type the output of this node will have.

Returns

The newly-created and inserted `call_module` node.

Note

The same insertion point and type expression rules apply for this method as [`Graph.create_node()`](#torch.fx.Graph.create_node "torch.fx.Graph.create_node").

`create_node(op, target, args=None, kwargs=None, name=None, type_expr=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.create_node)

Create a `Node` and add it to the `Graph` at the current insert-point. Note that the current insert-point can be set via [`Graph.inserting_before()`](#torch.fx.Graph.inserting_before "torch.fx.Graph.inserting_before") and [`Graph.inserting_after()`](#torch.fx.Graph.inserting_after "torch.fx.Graph.inserting_after").

Parameters

* **op** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – the opcode for this Node. One of 'call\_function', 'call\_method', 'get\_attr', 'call\_module', 'placeholder', or 'output'. The semantics of these opcodes are described in the `Graph` docstring.
* **args** (*Optional**[**Tuple**[**Argument**,* *..**]**]*) – a tuple of arguments to this node.
* **kwargs** (*Optional**[**Dict**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *Argument**]**]*) – the kwargs of this Node
* **name** (*Optional**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*]*) – an optional string name for the `Node`. This will influence the name of the value assigned to in the Python generated code.
* **type\_expr** (*Optional**[**Any**]*) – an optional type annotation representing the Python type the output of this node will have.

Returns

The newly-created and inserted node.

`erase_node(to_erase)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.erase_node)

Erases a `Node` from the `Graph`. Throws an exception if there are still users of that node in the `Graph`.

Parameters

**to\_erase** ([Node](#torch.fx.Node "torch.fx.Node")) – The `Node` to erase from the `Graph`.

`get_attr(qualified_name, type_expr=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.get_attr)

Insert a `get_attr` node into the Graph.
A `get_attr` `Node` represents the fetch of an attribute from the `Module` hierarchy.

Parameters

* **qualified\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – the fully-qualified name of the attribute to be retrieved. For example, if the traced Module has a submodule named `foo`, which has a submodule named `bar`, which has an attribute named `baz`, the qualified name `foo.bar.baz` should be passed as `qualified_name`.
* **type\_expr** (*Optional**[**Any**]*) – an optional type annotation representing the Python type the output of this node will have.

Returns

The newly-created and inserted `get_attr` node.

Note

The same insertion point and type expression rules apply for this method as `Graph.create_node`.

`graph_copy(g, val_map)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.graph_copy)

Copy all nodes from a given graph into `self`.

Parameters

* **g** ([Graph](#torch.fx.Graph "torch.fx.Graph")) – The source graph from which to copy Nodes.
* **val\_map** (*Dict**[*[Node](#torch.fx.Node "torch.fx.Node")*,* [Node](#torch.fx.Node "torch.fx.Node")*]*) – a dictionary that will be populated with a mapping from nodes in `g` to nodes in `self`. Note that `val_map` can be passed in with values in it already to override copying of certain values.

Returns

The value in `self` that is now equivalent to the output value in `g`, if `g` had an `output` node. `None` otherwise.

`inserting_after(n=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.inserting_after)

Set the point at which create\_node and companion methods will insert into the graph. When used within a 'with' statement, this will temporarily set the insert point and then restore it when the with statement exits:

```
with g.inserting_after(n):
    ... # inserting after node n
... # insert point restored to what it was previously
g.inserting_after(n) #  set the insert point permanently
```

Parameters

**n** (*Optional**[*[Node](#torch.fx.Node "torch.fx.Node")*]*) – The node after which to insert. If None this will insert after the beginning of the entire graph.

Returns

A resource manager that will restore the insert point on `__exit__`.

`inserting_before(n=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.inserting_before)

Set the point at which create\_node and companion methods will insert into the graph. When used within a 'with' statement, this will temporarily set the insert point and then restore it when the with statement exits:

```
with g.inserting_before(n):
    ... # inserting before node n
... # insert point restored to what it was previously
g.inserting_before(n) #  set the insert point permanently
```

Parameters

**n** (*Optional**[*[Node](#torch.fx.Node "torch.fx.Node")*]*) – The node before which to insert. If None this will insert before the beginning of the entire graph.

Returns

A resource manager that will restore the insert point on `__exit__`.

`lint(root=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.lint)

Runs various checks on this Graph to make sure it is well-formed. In particular:

- Checks Nodes have correct ownership (owned by this graph)
- Checks Nodes appear in topological order
- If `root` is provided, checks that targets exist in `root`

Parameters

**root** (*Optional**[*[torch.nn.Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")*]*) – The root module with which to check for targets.
This is equivalent to the `root` argument that is passed when constructing a `GraphModule`. `node_copy(node, arg_transform=<function Graph.<lambda>>)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.node_copy) Copy a node from one graph into another. `arg_transform` needs to transform arguments from the graph of node to the graph of self. Example:

```
# Copying all the nodes in `g` into `new_graph`
g : torch.fx.Graph = ...
new_graph = torch.fx.Graph()
value_remap = {}
for node in g.nodes:
    value_remap[node] = new_graph.node_copy(node, lambda n : value_remap[n])
```

Parameters * **node** ([Node](#torch.fx.Node "torch.fx.Node")) – The node to copy into `self`. * **arg\_transform** (*Callable**[**[*[Node](#torch.fx.Node "torch.fx.Node")*]**,* *Argument**]*) – A function that transforms `Node` arguments in node’s `args` and `kwargs` into the equivalent argument in `self`. In the simplest case, this should retrieve a value out of a table mapping Nodes in the original graph to `self`. `property nodes` Get the list of Nodes that constitute this Graph. Note that this `Node` list representation is a doubly-linked list. Mutations during iteration (e.g. delete a Node, add a Node) are safe. Returns A doubly-linked list of Nodes. Note that `reversed` can be called on this list to switch iteration order. `output(result, type_expr=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.output) Insert an `output` `Node` into the `Graph`. An `output` node represents a `return` statement in Python code. `result` is the value that should be returned. Parameters * **result** (*Argument*) – The value to be returned. * **type\_expr** (*Optional**[**Any**]*) – an optional type annotation representing the Python type the output of this node will have. Note The same insertion point and type expression rules apply for this method as `Graph.create_node`. `placeholder(name, type_expr=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.placeholder) Insert a `placeholder` node into the Graph. A `placeholder` represents a function input. Parameters * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – A name for the input value. This corresponds to the name of the positional argument to the function this `Graph` represents. * **type\_expr** (*Optional**[**Any**]*) – an optional type annotation representing the Python type the output of this node will have. This is needed in some cases for proper code generation (e.g. when the function is used subsequently in TorchScript compilation). Note The same insertion point and type expression rules apply for this method as `Graph.create_node`. `print_tabular()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.print_tabular) Prints the intermediate representation of the graph in tabular format. `python_code(root_module)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/graph.html#Graph.python_code) Turn this `Graph` into valid Python code. Parameters **root\_module** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The name of the root module on which to look up qualified name targets. This is usually ‘self’. Returns The string source code generated from this `Graph`.
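To make the insertion APIs above concrete, here is a minimal sketch (assuming PyTorch 1.8; the `weight` attribute name and tensor shapes are illustrative, not part of the API) that builds a `Graph` by hand with `placeholder`, `get_attr`, `call_function`, and `output`, then wraps it in a `GraphModule`:

```
import torch
import torch.fx

# Build a Graph by hand using the insertion methods documented above.
graph = torch.fx.Graph()
x = graph.placeholder('x')                       # function input: x
w = graph.get_attr('weight')                     # fetch self.weight
y = graph.call_function(torch.add, args=(x, w))  # y = torch.add(x, self.weight)
graph.output(y)                                  # return y
graph.lint()                                     # check that the Graph is well-formed

# The root module only needs to carry the attributes the Graph refers to.
root = torch.nn.Module()
root.weight = torch.nn.Parameter(torch.randn(3))

gm = torch.fx.GraphModule(root, graph)
print(gm.code)             # the Python source generated from the Graph
print(gm(torch.randn(3)))  # runs like an ordinary Module
```

Each helper (`placeholder`, `get_attr`, `call_function`, `call_module`, `output`) is a thin wrapper around `create_node` with the corresponding opcode, which is why the same insertion point rules apply to all of them.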
`class torch.fx.Node(graph, name, op, target, args, kwargs, type=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/node.html#Node) `Node` is the data structure that represents individual operations within a `Graph`. For the most part, Nodes represent callsites to various entities, such as operators, methods, and Modules (some exceptions include nodes that specify function inputs and outputs). Each `Node` has a function specified by its `op` property. The `Node` semantics for each value of `op` are as follows: * `placeholder` represents a function input. The `name` attribute specifies the name this value will take on. `target` is similarly the name of the argument. `args` holds either: 1) nothing, or 2) a single argument denoting the default parameter of the function input. `kwargs` is don’t-care. Placeholders correspond to the function parameters (e.g. `x`) in the graph printout. * `get_attr` retrieves a parameter from the module hierarchy. `name` is similarly the name the result of the fetch is assigned to. `target` is the fully-qualified name of the parameter’s position in the module hierarchy. `args` and `kwargs` are don’t-care * `call_function` applies a free function to some values. `name` is similarly the name of the value to assign to. `target` is the function to be applied. `args` and `kwargs` represent the arguments to the function, following the Python calling convention * `call_module` applies a module in the module hierarchy’s `forward()` method to given arguments. `name` is as previous. `target` is the fully-qualified name of the module in the module hierarchy to call. `args` and `kwargs` represent the arguments to invoke the module on, *excluding the self argument* (consistent with `Graph.call_module` above). * `call_method` calls a method on a value. `name` is as previous. `target` is the string name of the method to apply to the `self` argument. `args` and `kwargs` represent the arguments to invoke the method on, *including the self argument* * `output` contains the output of the traced function in its `args[0]` attribute. This corresponds to the “return” statement in the Graph printout. `property all_input_nodes` Return all Nodes that are inputs to this Node. This is equivalent to iterating over `args` and `kwargs` and only collecting the values that are Nodes. Returns List of `Nodes` that appear in the `args` and `kwargs` of this `Node`, in that order. `append(x)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/node.html#Node.append) Insert x after this node in the list of nodes in the graph. Equivalent to `self.next.prepend(x)` Parameters **x** ([Node](#torch.fx.Node "torch.fx.Node")) – The node to put after this node. Must be a member of the same graph. `property args` The tuple of arguments to this `Node`. The interpretation of arguments depends on the node’s opcode. See the [`Node`](#torch.fx.Node "torch.fx.Node") docstring for more information. Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment. `property kwargs` The dict of keyword arguments to this `Node`. The interpretation of arguments depends on the node’s opcode. See the [`Node`](#torch.fx.Node "torch.fx.Node") docstring for more information. Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment. `property next` Returns the next `Node` in the linked list of Nodes. Returns The next `Node` in the linked list of Nodes.
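As a quick illustration of these opcodes (a sketch, not part of the original reference; the exact node names and function reprs printed will vary by version), tracing a small function and dumping each node's `op`, `target`, and `args` shows how the semantics above look in practice:

```
import torch
import torch.fx

def fn(x):
    return torch.relu(x) + 1.0

gm = torch.fx.symbolic_trace(fn)
for n in gm.graph.nodes:
    print(n.op, n.target, n.args)

# Prints roughly:
#   placeholder    x                           ()
#   call_function  <built-in method relu ...>  (x,)
#   call_function  <built-in function add>     (relu, 1.0)
#   output         output                      (add,)
```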
`prepend(x)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/node.html#Node.prepend) Insert x before this node in the list of nodes in the graph. Example:

```
Before: p -> self
        bx -> x -> ax
After:  p -> x -> self
        bx -> ax
```

Parameters **x** ([Node](#torch.fx.Node "torch.fx.Node")) – The node to put before this node. Must be a member of the same graph. `property prev` Returns the previous `Node` in the linked list of Nodes. Returns The previous `Node` in the linked list of Nodes. `replace_all_uses_with(replace_with)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/node.html#Node.replace_all_uses_with) Replace all uses of `self` in the Graph with the Node `replace_with`. Parameters **replace\_with** ([Node](#torch.fx.Node "torch.fx.Node")) – The node to replace all uses of `self` with. Returns The list of Nodes on which this change was made. `class torch.fx.Tracer(autowrap_modules=(math,))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#Tracer) `Tracer` is the class that implements the symbolic tracing functionality of `torch.fx.symbolic_trace`. A call to `symbolic_trace(m)` is equivalent to `Tracer().trace(m)`. Tracer can be subclassed to override various behaviors of the tracing process. The different behaviors that can be overridden are described in the docstrings of the methods on this class. `call_module(m, forward, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#Tracer.call_module) Method that specifies the behavior of this `Tracer` when it encounters a call to an `nn.Module` instance. By default, the behavior is to check if the called module is a leaf module via `is_leaf_module`. If it is, emit a `call_module` node referring to `m` in the `Graph`. Otherwise, call the `Module` normally, tracing through the operations in its `forward` function. This method can be overridden to, for example, create nested traced GraphModules, or to implement any other behavior you would want while tracing across `Module` boundaries. Parameters * **m** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – The module for which a call is being emitted * **forward** (*Callable*) – The forward() method of the `Module` to be invoked * **args** (*Tuple*) – args of the module callsite * **kwargs** (*Dict*) – kwargs of the module callsite Returns The return value from the Module call. In the case that a `call_module` node was emitted, this is a `Proxy` value. Otherwise, it is whatever value was returned from the `Module` invocation. `create_arg(a)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#Tracer.create_arg) A method to specify the behavior of tracing when preparing values to be used as arguments to nodes in the `Graph`. By default, the behavior includes: 1. Iterate through collection types (e.g. tuple, list, dict) and recursively call `create_arg` on the elements. 2. Given a Proxy object, return a reference to the underlying IR `Node` 3. Given a non-Proxy Tensor object, emit IR for various cases: * For a Parameter, emit a `get_attr` node referring to that Parameter * For a non-Parameter Tensor, store the Tensor away in a special attribute on the root `Module` and emit a `get_attr` node referring to that attribute. This method can be overridden to support more types. Parameters **a** (*Any*) – The value to be emitted as an `Argument` in the `Graph`.
Returns The value `a` converted into the appropriate `Argument` `create_args_for_root(root_fn, is_module, concrete_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#Tracer.create_args_for_root) Create `placeholder` nodes corresponding to the signature of the `root` Module. This method introspects root’s signature and emits those nodes accordingly, also supporting `*args` and `**kwargs`. `is_leaf_module(m, module_qualified_name)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#Tracer.is_leaf_module) A method to specify whether a given `nn.Module` is a “leaf” module. Leaf modules are the atomic units that appear in the IR, referenced by `call_module` calls. By default, Modules in the PyTorch standard library namespace (torch.nn) are leaf modules. All other modules are traced through and their constituent ops are recorded, unless specified otherwise via this method. Parameters * **m** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – The module being queried about * **module\_qualified\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The path to this module from the root. For example, if you have a module hierarchy where submodule `foo` contains submodule `bar`, which contains submodule `baz`, that module will appear with the qualified name `foo.bar.baz` here. `path_of_module(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#Tracer.path_of_module) Helper method to find the qualified name of `mod` in the Module hierarchy of `root`. For example, if `root` has a submodule named `foo`, which has a submodule named `bar`, passing `bar` into this function will return the string “foo.bar”. Parameters **mod** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – The `Module` to retrieve the qualified name for. `trace(root, concrete_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/symbolic_trace.html#Tracer.trace) Trace `root` and return the corresponding FX `Graph` representation. `root` can either be an `nn.Module` instance or a Python callable. Note that after this call, `self.root` may be different from the `root` passed in here. For example, when a free function is passed to `trace()`, we will create an `nn.Module` instance to use as the root and add embedded constants to. Parameters **root** (*Union**[*[Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")*,* *Callable**]*) – Either a `Module` or a function to be traced through. Returns A `Graph` representing the semantics of the passed-in `root`. `class torch.fx.Proxy(node, tracer=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/proxy.html#Proxy) `Proxy` objects are `Node` wrappers that flow through the program during symbolic tracing and record all the operations (`torch` function calls, method calls, operators) that they touch into the growing FX Graph. If you’re doing graph transforms, you can wrap a raw `Node` in your own `Proxy` so that you can use the overloaded operators to add additional things to a `Graph`. `class torch.fx.Interpreter(module)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter) An Interpreter executes an FX graph Node-by-Node. This pattern can be useful for many things, including writing code transformations as well as analysis passes.
Methods in the Interpreter class can be overridden to customize the behavior of execution. The map of overrideable methods in terms of call hierarchy:

```
run()
    +-- run_node
        +-- placeholder()
        +-- get_attr()
        +-- call_function()
        +-- call_method()
        +-- call_module()
        +-- output()
```

#### Example Suppose we want to swap all instances of `torch.neg` with `torch.sigmoid` and vice versa (including their `Tensor` method equivalents). We could subclass Interpreter like so:

```
class NegSigmSwapInterpreter(Interpreter):
    def call_function(self, target : Target, args : Tuple, kwargs : Dict) -> Any:
        if target == torch.sigmoid:
            return torch.neg(*args, **kwargs)
        return super().call_function(target, args, kwargs)

    def call_method(self, target : Target, args : Tuple, kwargs : Dict) -> Any:
        if target == 'neg':
            call_self, *args_tail = args
            return call_self.sigmoid(*args_tail, **kwargs)
        return super().call_method(target, args, kwargs)

def fn(x):
    return torch.sigmoid(x).neg()

gm = torch.fx.symbolic_trace(fn)
input = torch.randn(3, 4)
result = NegSigmSwapInterpreter(gm).run(input)
torch.testing.assert_allclose(result, torch.neg(input).sigmoid())
```

Parameters **module** ([GraphModule](#torch.fx.GraphModule "torch.fx.GraphModule")) – The module to be executed `call_function(target, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.call_function) Execute a `call_function` node and return the result. Parameters * **target** (*Target*) – The call target for this node. See [Node](https://pytorch.org/docs/master/fx.html#torch.fx.Node) for details on semantics * **args** (*Tuple*) – Tuple of positional args for this invocation * **kwargs** (*Dict*) – Dict of keyword arguments for this invocation Return Any: The value returned by the function invocation `call_method(target, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.call_method) Execute a `call_method` node and return the result. Parameters * **target** (*Target*) – The call target for this node. See [Node](https://pytorch.org/docs/master/fx.html#torch.fx.Node) for details on semantics * **args** (*Tuple*) – Tuple of positional args for this invocation * **kwargs** (*Dict*) – Dict of keyword arguments for this invocation Return Any: The value returned by the method invocation `call_module(target, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.call_module) Execute a `call_module` node and return the result. Parameters * **target** (*Target*) – The call target for this node. See [Node](https://pytorch.org/docs/master/fx.html#torch.fx.Node) for details on semantics * **args** (*Tuple*) – Tuple of positional args for this invocation * **kwargs** (*Dict*) – Dict of keyword arguments for this invocation Return Any: The value returned by the module invocation `fetch_args_kwargs_from_env(n)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.fetch_args_kwargs_from_env) Fetch the concrete values of `args` and `kwargs` of node `n` from the current execution environment. Parameters **n** ([Node](#torch.fx.Node "torch.fx.Node")) – The node for which `args` and `kwargs` should be fetched. Returns `args` and `kwargs` with concrete values for `n`. Return type Tuple[Tuple, Dict] `fetch_attr(target)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.fetch_attr) Fetch an attribute from the `Module` hierarchy of `self.module`.
Parameters **target** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The fully-qualified name of the attribute to fetch Returns The value of the attribute. Return type Any `get_attr(target, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.get_attr) Execute a `get_attr` node. Will retrieve an attribute value from the `Module` hierarchy of `self.module`. Parameters * **target** (*Target*) – The call target for this node. See [Node](https://pytorch.org/docs/master/fx.html#torch.fx.Node) for details on semantics * **args** (*Tuple*) – Tuple of positional args for this invocation * **kwargs** (*Dict*) – Dict of keyword arguments for this invocation Returns The value of the attribute that was retrieved Return type Any `map_nodes_to_values(args, n)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.map_nodes_to_values) Recursively descend through `args` and look up the concrete value for each `Node` in the current execution environment. Parameters * **args** (*Argument*) – Data structure within which to look up concrete values * **n** ([Node](#torch.fx.Node "torch.fx.Node")) – Node to which `args` belongs. This is only used for error reporting. `output(target, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.output) Execute an `output` node. This really just retrieves the value referenced by the `output` node and returns it. Parameters * **target** (*Target*) – The call target for this node. See [Node](https://pytorch.org/docs/master/fx.html#torch.fx.Node) for details on semantics * **args** (*Tuple*) – Tuple of positional args for this invocation * **kwargs** (*Dict*) – Dict of keyword arguments for this invocation Returns The return value referenced by the output node Return type Any `placeholder(target, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.placeholder) Execute a `placeholder` node. Note that this is stateful: `Interpreter` maintains an internal iterator over arguments passed to `run` and this method returns next() on that iterator. Parameters * **target** (*Target*) – The call target for this node. See [Node](https://pytorch.org/docs/master/fx.html#torch.fx.Node) for details on semantics * **args** (*Tuple*) – Tuple of positional args for this invocation * **kwargs** (*Dict*) – Dict of keyword arguments for this invocation Returns The argument value that was retrieved. Return type Any `run(*args, initial_env=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.run) Run `module` via interpretation and return the result. Parameters * **\*args** – The arguments to the Module to run, in positional order * **initial\_env** (*Optional**[**Dict**[*[Node](#torch.fx.Node "torch.fx.Node")*,* *Any**]**]*) – An optional starting environment for execution. This is a dict mapping `Node` to any value. This can be used, for example, to pre-populate results for certain `Nodes` so as to do only partial evaluation within the interpreter. Returns The value returned from executing the Module Return type Any `run_node(n)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Interpreter.run_node) Run a specific node `n` and return the result.
Calls into placeholder, get\_attr, call\_function, call\_method, call\_module, or output depending on `node.op` Parameters **n** ([Node](#torch.fx.Node "torch.fx.Node")) – The Node to execute Returns The result of executing `n` Return type Any `class torch.fx.Transformer(module)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Transformer) `Transformer` is a special type of interpreter that produces a new `Module`. It exposes a `transform()` method that returns the transformed `Module`. Unlike `Interpreter`, `Transformer` does not require actual arguments to run; it works entirely symbolically. #### Example Suppose we want to swap all instances of `torch.neg` with `torch.sigmoid` and vice versa (including their `Tensor` method equivalents). We could subclass `Transformer` like so:

```
class NegSigmSwapXformer(Transformer):
    def call_function(self, target : 'Target', args : Tuple[Argument, ...], kwargs : Dict[str, Any]) -> Any:
        if target == torch.sigmoid:
            return torch.neg(*args, **kwargs)
        return super().call_function(target, args, kwargs)

    def call_method(self, target : 'Target', args : Tuple[Argument, ...], kwargs : Dict[str, Any]) -> Any:
        if target == 'neg':
            call_self, *args_tail = args
            return call_self.sigmoid(*args_tail, **kwargs)
        return super().call_method(target, args, kwargs)

def fn(x):
    return torch.sigmoid(x).neg()

gm = torch.fx.symbolic_trace(fn)

transformed : torch.nn.Module = NegSigmSwapXformer(gm).transform()
input = torch.randn(3, 4)
torch.testing.assert_allclose(transformed(input), torch.neg(input).sigmoid())
```

Parameters **module** ([GraphModule](#torch.fx.GraphModule "torch.fx.GraphModule")) – The `Module` to be transformed. `get_attr(target, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Transformer.get_attr) Execute a `get_attr` node. In `Transformer`, this is overridden to insert a new `get_attr` node into the output graph. Parameters * **target** (*Target*) – The call target for this node. See [Node](https://pytorch.org/docs/master/fx.html#torch.fx.Node) for details on semantics * **args** (*Tuple*) – Tuple of positional args for this invocation * **kwargs** (*Dict*) – Dict of keyword arguments for this invocation `placeholder(target, args, kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Transformer.placeholder) Execute a `placeholder` node. In `Transformer`, this is overridden to insert a new `placeholder` into the output graph. Parameters * **target** (*Target*) – The call target for this node. See [Node](https://pytorch.org/docs/master/fx.html#torch.fx.Node) for details on semantics * **args** (*Tuple*) – Tuple of positional args for this invocation * **kwargs** (*Dict*) – Dict of keyword arguments for this invocation `transform()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/interpreter.html#Transformer.transform) Transform `self.module` and return the transformed `GraphModule`. `torch.fx.replace_pattern(gm, pattern, replacement)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/fx/subgraph_rewriter.html#replace_pattern) Matches all possible non-overlapping sets of operators and their data dependencies (`pattern`) in the Graph of a GraphModule (`gm`), then replaces each of these matched subgraphs with another subgraph (`replacement`).
Parameters * **gm** – The GraphModule that wraps the Graph to operate on * **pattern** – The subgraph to match in `gm` for replacement * **replacement** – The subgraph to replace `pattern` with Returns A list of `Match` objects representing the places in the original graph that `pattern` was matched to. The list is empty if there are no matches. `Match` is defined as:

```
class Match(NamedTuple):
    # Node from which the match was found
    anchor: Node
    # Maps nodes in the pattern subgraph to nodes in the larger graph
    nodes_map: Dict[Node, Node]
```

Return type List[Match] Examples:

```
import torch
from torch.fx import symbolic_trace, subgraph_rewriter

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, w1, w2):
        m1 = torch.cat([w1, w2]).sum()
        m2 = torch.cat([w1, w2]).sum()
        return x + torch.max(m1) + torch.max(m2)

def pattern(w1, w2):
    return torch.cat([w1, w2]).sum()

def replacement(w1, w2):
    return torch.stack([w1, w2])

traced_module = symbolic_trace(M())

subgraph_rewriter.replace_pattern(traced_module, pattern, replacement)
```

The above code will first match `pattern` in the `forward` method of `traced_module`. Pattern-matching is done based on use-def relationships, not node names. For example, if you had `p = torch.cat([a, b])` in `pattern`, you could match `m = torch.cat([a, b])` in the original `forward` function, despite the variable names being different (`p` vs `m`). The `return` statement in `pattern` is matched based on its value only; it may or may not match to the `return` statement in the larger graph. In other words, the pattern doesn’t have to extend to the end of the larger graph. When the pattern is matched, it will be removed from the larger function and replaced by `replacement`. If there are multiple matches for `pattern` in the larger function, each non-overlapping match will be replaced. In the case of a match overlap, the first found match in the set of overlapping matches will be replaced. (“First” here being defined as the first in a topological ordering of the Nodes’ use-def relationships. In most cases, the first Node is the parameter that appears directly after `self`, while the last Node is whatever the function returns.) One important thing to note is that the parameters of the `pattern` Callable must be used in the Callable itself, and the parameters of the `replacement` Callable must match the pattern. The first rule is why, in the above code block, the `forward` function has parameters `x, w1, w2`, but the `pattern` function only has parameters `w1, w2`. `pattern` doesn’t use `x`, so it shouldn’t specify `x` as a parameter. As an example of the second rule, consider replacing

```
def pattern(x, y):
    return torch.neg(x) + torch.relu(y)
```

with

```
def replacement(x, y):
    return torch.relu(x)
```

In this case, `replacement` needs the same number of parameters as `pattern` (both `x` and `y`), even though the parameter `y` isn’t used in `replacement`. After calling `subgraph_rewriter.replace_pattern`, the generated Python code looks like this:

```
def forward(self, x, w1, w2):
    stack_1 = torch.stack([w1, w2])
    sum_1 = stack_1.sum()
    stack_2 = torch.stack([w1, w2])
    sum_2 = stack_2.sum()
    max_1 = torch.max(sum_1)
    add_1 = x + max_1
    max_2 = torch.max(sum_2)
    add_2 = add_1 + max_2
    return add_2
```
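The `Tracer` class documented earlier on this page has no inline example, so here is a short sketch of its most common customization (assuming PyTorch 1.8; `MyBlock`, `LeafTracer`, and `Net` are illustrative names, not part of the API): overriding `is_leaf_module` so that a user-defined submodule is kept opaque as a single `call_module` node instead of being traced through:

```
import torch
import torch.fx

class MyBlock(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) * 2.0

class LeafTracer(torch.fx.Tracer):
    def is_leaf_module(self, m, module_qualified_name):
        # Keep MyBlock opaque; defer to the default rule otherwise
        # (modules under torch.nn are leaves, everything else is traced through).
        if isinstance(m, MyBlock):
            return True
        return super().is_leaf_module(m, module_qualified_name)

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.block = MyBlock()

    def forward(self, x):
        return self.block(x) + 1.0

graph = LeafTracer().trace(Net())
print(graph)  # the printout contains a single call_module node targeting 'block'
```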
pytorch PyTorch documentation PyTorch documentation ===================== PyTorch is an optimized tensor library for deep learning using GPUs and CPUs. Features described in this documentation are classified by release status: *Stable:* These features will be maintained long-term and there should generally be no major performance limitations or gaps in documentation. We also expect to maintain backwards compatibility (although breaking changes can happen and notice will be given one release ahead of time). *Beta:* Features are tagged as Beta because the API may change based on user feedback, because the performance needs to improve, or because coverage across operators is not yet complete. For Beta features, we are committing to seeing the feature through to the Stable classification. We are not, however, committing to backwards compatibility. *Prototype:* These features are typically not available as part of binary distributions like PyPI or Conda, except sometimes behind run-time flags, and are at an early stage for feedback and testing. Notes * [Automatic Mixed Precision examples](https://pytorch.org/docs/1.8.0/notes/amp_examples.html) * [Autograd mechanics](https://pytorch.org/docs/1.8.0/notes/autograd.html) * [Broadcasting semantics](https://pytorch.org/docs/1.8.0/notes/broadcasting.html) * [CPU threading and TorchScript inference](https://pytorch.org/docs/1.8.0/notes/cpu_threading_torchscript_inference.html) * [CUDA semantics](https://pytorch.org/docs/1.8.0/notes/cuda.html) * [Distributed Data Parallel](https://pytorch.org/docs/1.8.0/notes/ddp.html) * [Extending PyTorch](https://pytorch.org/docs/1.8.0/notes/extending.html) * [Frequently Asked Questions](https://pytorch.org/docs/1.8.0/notes/faq.html) * [Features for large-scale deployments](https://pytorch.org/docs/1.8.0/notes/large_scale_deployments.html) * [Modules](https://pytorch.org/docs/1.8.0/notes/modules.html) * [Multiprocessing best practices](https://pytorch.org/docs/1.8.0/notes/multiprocessing.html) * [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) * [Serialization semantics](https://pytorch.org/docs/1.8.0/notes/serialization.html) * [Windows FAQ](https://pytorch.org/docs/1.8.0/notes/windows.html) Language Bindings * [C++](https://pytorch.org/docs/1.8.0/cpp_index.html) * [Javadoc](https://pytorch.org/javadoc/) Python API * <torch> * [torch.nn](nn) * [torch.nn.functional](nn.functional) * [torch.Tensor](tensors) * [Tensor Attributes](tensor_attributes) * [Tensor Views](tensor_view) * [torch.autograd](autograd) * [torch.cuda](cuda) * [torch.cuda.amp](amp) * [torch.backends](backends) * [torch.distributed](distributed) * [torch.distributions](distributions) * [torch.fft](fft) * [torch.futures](futures) * [torch.fx](fx) * [torch.hub](hub) * [torch.jit](jit) * [torch.linalg](linalg) * <torch.overrides> * [torch.nn.init](nn.init) * [torch.onnx](onnx) * [torch.optim](optim) * [Complex Numbers](complex_numbers) * [DDP Communication Hooks](ddp_comm_hooks) * [Pipeline Parallelism](pipeline) * [Quantization](quantization) * [Distributed RPC Framework](rpc) * [torch.random](random) * [torch.sparse](sparse) * [torch.Storage](storage) * [torch.utils.benchmark](benchmark_utils) * [torch.utils.bottleneck](bottleneck) * [torch.utils.checkpoint](checkpoint) * [torch.utils.cpp\_extension](cpp_extension) * [torch.utils.data](data) * [torch.utils.dlpack](dlpack) * [torch.utils.mobile\_optimizer](mobile_optimizer) * [torch.utils.model\_zoo](model_zoo) * [torch.utils.tensorboard](tensorboard) * [Type Info](type_info) * 
[Named Tensors](named_tensor) * [Named Tensors operator coverage](name_inference) * [torch.\_\_config\_\_](__config__) Libraries * [torchaudio](https://pytorch.org/audio/stable) * [torchtext](https://pytorch.org/text/stable) * [torchvision](https://pytorch.org/vision/stable) * [TorchElastic](https://pytorch.org/elastic/) * [TorchServe](https://pytorch.org/serve) * [PyTorch on XLA Devices](http://pytorch.org/xla/) Community * [PyTorch Contribution Guide](https://pytorch.org/docs/1.8.0/community/contribution_guide.html) * [PyTorch Governance](https://pytorch.org/docs/1.8.0/community/governance.html) * [PyTorch Governance | Persons of Interest](https://pytorch.org/docs/1.8.0/community/persons_of_interest.html) Indices and tables ================== * [Index](https://pytorch.org/docs/1.8.0/genindex.html) * [Module Index](https://pytorch.org/docs/1.8.0/py-modindex.html) pytorch torch torch ===== The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors. Additionally, it provides many utilities for efficiently serializing Tensors and arbitrary types, and other useful utilities. It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0. Tensors ------- | | | | --- | --- | | [`is_tensor`](generated/torch.is_tensor#torch.is_tensor "torch.is_tensor") | Returns True if `obj` is a PyTorch tensor. | | [`is_storage`](generated/torch.is_storage#torch.is_storage "torch.is_storage") | Returns True if `obj` is a PyTorch storage object. | | [`is_complex`](generated/torch.is_complex#torch.is_complex "torch.is_complex") | Returns True if the data type of `input` is a complex data type, i.e., one of `torch.complex64` and `torch.complex128`. | | [`is_floating_point`](generated/torch.is_floating_point#torch.is_floating_point "torch.is_floating_point") | Returns True if the data type of `input` is a floating point data type, i.e., one of `torch.float64`, `torch.float32`, `torch.float16`, and `torch.bfloat16`. | | [`is_nonzero`](generated/torch.is_nonzero#torch.is_nonzero "torch.is_nonzero") | Returns True if the `input` is a single element tensor which is not equal to zero after type conversions. | | [`set_default_dtype`](generated/torch.set_default_dtype#torch.set_default_dtype "torch.set_default_dtype") | Sets the default floating point dtype to `d`. | | [`get_default_dtype`](generated/torch.get_default_dtype#torch.get_default_dtype "torch.get_default_dtype") | Get the current default floating point [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"). | | [`set_default_tensor_type`](generated/torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type") | Sets the default `torch.Tensor` type to floating point tensor type `t`. | | [`numel`](generated/torch.numel#torch.numel "torch.numel") | Returns the total number of elements in the `input` tensor. | | [`set_printoptions`](generated/torch.set_printoptions#torch.set_printoptions "torch.set_printoptions") | Set options for printing. | | [`set_flush_denormal`](generated/torch.set_flush_denormal#torch.set_flush_denormal "torch.set_flush_denormal") | Disables denormal floating numbers on CPU.
| ### Creation Ops Note Random sampling creation ops are listed under [Random sampling](#random-sampling) and include: [`torch.rand()`](generated/torch.rand#torch.rand "torch.rand") [`torch.rand_like()`](generated/torch.rand_like#torch.rand_like "torch.rand_like") [`torch.randn()`](generated/torch.randn#torch.randn "torch.randn") [`torch.randn_like()`](generated/torch.randn_like#torch.randn_like "torch.randn_like") [`torch.randint()`](generated/torch.randint#torch.randint "torch.randint") [`torch.randint_like()`](generated/torch.randint_like#torch.randint_like "torch.randint_like") [`torch.randperm()`](generated/torch.randperm#torch.randperm "torch.randperm") You may also use [`torch.empty()`](generated/torch.empty#torch.empty "torch.empty") with the [In-place random sampling](#inplace-random-sampling) methods to create [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor")s with values sampled from a broader range of distributions. | | | | --- | --- | | [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") | Constructs a tensor with `data`. | | [`sparse_coo_tensor`](generated/torch.sparse_coo_tensor#torch.sparse_coo_tensor "torch.sparse_coo_tensor") | Constructs a [sparse tensor in COO(rdinate) format](sparse#sparse-coo-docs) with specified values at the given `indices`. | | [`as_tensor`](generated/torch.as_tensor#torch.as_tensor "torch.as_tensor") | Convert the data into a `torch.Tensor`. | | [`as_strided`](generated/torch.as_strided#torch.as_strided "torch.as_strided") | Create a view of an existing `torch.Tensor` `input` with specified `size`, `stride` and `storage_offset`. | | [`from_numpy`](generated/torch.from_numpy#torch.from_numpy "torch.from_numpy") | Creates a [`Tensor`](tensors#torch.Tensor "torch.Tensor") from a [`numpy.ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray "(in NumPy v1.20)"). | | [`zeros`](generated/torch.zeros#torch.zeros "torch.zeros") | Returns a tensor filled with the scalar value `0`, with the shape defined by the variable argument `size`. | | [`zeros_like`](generated/torch.zeros_like#torch.zeros_like "torch.zeros_like") | Returns a tensor filled with the scalar value `0`, with the same size as `input`. | | [`ones`](generated/torch.ones#torch.ones "torch.ones") | Returns a tensor filled with the scalar value `1`, with the shape defined by the variable argument `size`. | | [`ones_like`](generated/torch.ones_like#torch.ones_like "torch.ones_like") | Returns a tensor filled with the scalar value `1`, with the same size as `input`. | | [`arange`](generated/torch.arange#torch.arange "torch.arange") | Returns a 1-D tensor of size ⌈(end − start) / step⌉ with values from the interval `[start, end)` taken with common difference `step` beginning from `start`. | | [`range`](generated/torch.range#torch.range "torch.range") | Returns a 1-D tensor of size ⌊(end − start) / step⌋ + 1 with values from `start` to `end` with step `step`. | | [`linspace`](generated/torch.linspace#torch.linspace "torch.linspace") | Creates a one-dimensional tensor of size `steps` whose values are evenly spaced from `start` to `end`, inclusive.
| | [`logspace`](generated/torch.logspace#torch.logspace "torch.logspace") | Creates a one-dimensional tensor of size `steps` whose values are evenly spaced from base^start to base^end, inclusive, on a logarithmic scale with base `base`. | | [`eye`](generated/torch.eye#torch.eye "torch.eye") | Returns a 2-D tensor with ones on the diagonal and zeros elsewhere. | | [`empty`](generated/torch.empty#torch.empty "torch.empty") | Returns a tensor filled with uninitialized data. | | [`empty_like`](generated/torch.empty_like#torch.empty_like "torch.empty_like") | Returns an uninitialized tensor with the same size as `input`. | | [`empty_strided`](generated/torch.empty_strided#torch.empty_strided "torch.empty_strided") | Returns a tensor filled with uninitialized data. | | [`full`](generated/torch.full#torch.full "torch.full") | Creates a tensor of size `size` filled with `fill_value`. | | [`full_like`](generated/torch.full_like#torch.full_like "torch.full_like") | Returns a tensor with the same size as `input` filled with `fill_value`. | | [`quantize_per_tensor`](generated/torch.quantize_per_tensor#torch.quantize_per_tensor "torch.quantize_per_tensor") | Converts a float tensor to a quantized tensor with given scale and zero point. | | [`quantize_per_channel`](generated/torch.quantize_per_channel#torch.quantize_per_channel "torch.quantize_per_channel") | Converts a float tensor to a per-channel quantized tensor with given scales and zero points. | | [`dequantize`](generated/torch.dequantize#torch.dequantize "torch.dequantize") | Returns an fp32 Tensor by dequantizing a quantized Tensor | | [`complex`](generated/torch.complex#torch.complex "torch.complex") | Constructs a complex tensor with its real part equal to [`real`](generated/torch.real#torch.real "torch.real") and its imaginary part equal to [`imag`](generated/torch.imag#torch.imag "torch.imag"). | | [`polar`](generated/torch.polar#torch.polar "torch.polar") | Constructs a complex tensor whose elements are Cartesian coordinates corresponding to the polar coordinates with absolute value [`abs`](generated/torch.abs#torch.abs "torch.abs") and angle [`angle`](generated/torch.angle#torch.angle "torch.angle"). | | [`heaviside`](generated/torch.heaviside#torch.heaviside "torch.heaviside") | Computes the Heaviside step function for each element in `input`. | ### Indexing, Slicing, Joining, Mutating Ops | | | | --- | --- | | [`cat`](generated/torch.cat#torch.cat "torch.cat") | Concatenates the given sequence of `seq` tensors in the given dimension. | | [`chunk`](generated/torch.chunk#torch.chunk "torch.chunk") | Splits a tensor into a specific number of chunks. | | [`column_stack`](generated/torch.column_stack#torch.column_stack "torch.column_stack") | Creates a new tensor by horizontally stacking the tensors in `tensors`. | | [`dstack`](generated/torch.dstack#torch.dstack "torch.dstack") | Stack tensors in sequence depthwise (along third axis). | | [`gather`](generated/torch.gather#torch.gather "torch.gather") | Gathers values along an axis specified by `dim`. | | [`hstack`](generated/torch.hstack#torch.hstack "torch.hstack") | Stack tensors in sequence horizontally (column wise). | | [`index_select`](generated/torch.index_select#torch.index_select "torch.index_select") | Returns a new tensor which indexes the `input` tensor along dimension `dim` using the entries in `index` which is a `LongTensor`.
| | [`masked_select`](generated/torch.masked_select#torch.masked_select "torch.masked_select") | Returns a new 1-D tensor which indexes the `input` tensor according to the boolean mask `mask` which is a `BoolTensor`. | | [`movedim`](generated/torch.movedim#torch.movedim "torch.movedim") | Moves the dimension(s) of `input` at the position(s) in `source` to the position(s) in `destination`. | | [`moveaxis`](generated/torch.moveaxis#torch.moveaxis "torch.moveaxis") | Alias for [`torch.movedim()`](generated/torch.movedim#torch.movedim "torch.movedim"). | | [`narrow`](generated/torch.narrow#torch.narrow "torch.narrow") | Returns a new tensor that is a narrowed version of `input` tensor. | | [`nonzero`](generated/torch.nonzero#torch.nonzero "torch.nonzero") | | | [`reshape`](generated/torch.reshape#torch.reshape "torch.reshape") | Returns a tensor with the same data and number of elements as `input`, but with the specified shape. | | [`row_stack`](generated/torch.row_stack#torch.row_stack "torch.row_stack") | Alias of [`torch.vstack()`](generated/torch.vstack#torch.vstack "torch.vstack"). | | [`scatter`](generated/torch.scatter#torch.scatter "torch.scatter") | Out-of-place version of [`torch.Tensor.scatter_()`](tensors#torch.Tensor.scatter_ "torch.Tensor.scatter_") | | [`scatter_add`](generated/torch.scatter_add#torch.scatter_add "torch.scatter_add") | Out-of-place version of [`torch.Tensor.scatter_add_()`](tensors#torch.Tensor.scatter_add_ "torch.Tensor.scatter_add_") | | [`split`](generated/torch.split#torch.split "torch.split") | Splits the tensor into chunks. | | [`squeeze`](generated/torch.squeeze#torch.squeeze "torch.squeeze") | Returns a tensor with all the dimensions of `input` of size `1` removed. | | [`stack`](generated/torch.stack#torch.stack "torch.stack") | Concatenates a sequence of tensors along a new dimension. | | [`swapaxes`](generated/torch.swapaxes#torch.swapaxes "torch.swapaxes") | Alias for [`torch.transpose()`](generated/torch.transpose#torch.transpose "torch.transpose"). | | [`swapdims`](generated/torch.swapdims#torch.swapdims "torch.swapdims") | Alias for [`torch.transpose()`](generated/torch.transpose#torch.transpose "torch.transpose"). | | [`t`](generated/torch.t#torch.t "torch.t") | Expects `input` to be <= 2-D tensor and transposes dimensions 0 and 1. | | [`take`](generated/torch.take#torch.take "torch.take") | Returns a new tensor with the elements of `input` at the given indices. | | [`tensor_split`](generated/torch.tensor_split#torch.tensor_split "torch.tensor_split") | Splits a tensor into multiple sub-tensors, all of which are views of `input`, along dimension `dim` according to the indices or number of sections specified by `indices_or_sections`. | | [`tile`](generated/torch.tile#torch.tile "torch.tile") | Constructs a tensor by repeating the elements of `input`. | | [`transpose`](generated/torch.transpose#torch.transpose "torch.transpose") | Returns a tensor that is a transposed version of `input`. | | [`unbind`](generated/torch.unbind#torch.unbind "torch.unbind") | Removes a tensor dimension. | | [`unsqueeze`](generated/torch.unsqueeze#torch.unsqueeze "torch.unsqueeze") | Returns a new tensor with a dimension of size one inserted at the specified position. | | [`vstack`](generated/torch.vstack#torch.vstack "torch.vstack") | Stack tensors in sequence vertically (row wise). | | [`where`](generated/torch.where#torch.where "torch.where") | Return a tensor of elements selected from either `x` or `y`, depending on `condition`. 
| Generators ---------- | | | | --- | --- | | [`Generator`](generated/torch.generator#torch.Generator "torch.Generator") | Creates and returns a generator object that manages the state of the algorithm which produces pseudo random numbers. | Random sampling --------------- | | | | --- | --- | | [`seed`](generated/torch.seed#torch.seed "torch.seed") | Sets the seed for generating random numbers to a non-deterministic random number. | | [`manual_seed`](generated/torch.manual_seed#torch.manual_seed "torch.manual_seed") | Sets the seed for generating random numbers. | | [`initial_seed`](generated/torch.initial_seed#torch.initial_seed "torch.initial_seed") | Returns the initial seed for generating random numbers as a Python `long`. | | [`get_rng_state`](generated/torch.get_rng_state#torch.get_rng_state "torch.get_rng_state") | Returns the random number generator state as a `torch.ByteTensor`. | | [`set_rng_state`](generated/torch.set_rng_state#torch.set_rng_state "torch.set_rng_state") | Sets the random number generator state. | `torch.default_generator Returns the default CPU torch.Generator` | | | | --- | --- | | [`bernoulli`](generated/torch.bernoulli#torch.bernoulli "torch.bernoulli") | Draws binary random numbers (0 or 1) from a Bernoulli distribution. | | [`multinomial`](generated/torch.multinomial#torch.multinomial "torch.multinomial") | Returns a tensor where each row contains `num_samples` indices sampled from the multinomial probability distribution located in the corresponding row of tensor `input`. | | [`normal`](generated/torch.normal#torch.normal "torch.normal") | Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given. | | [`poisson`](generated/torch.poisson#torch.poisson "torch.poisson") | Returns a tensor of the same size as `input` with each element sampled from a Poisson distribution with rate parameter given by the corresponding element in `input`, i.e., `out_i ~ Poisson(input_i)`. | | [`rand`](generated/torch.rand#torch.rand "torch.rand") | Returns a tensor filled with random numbers from a uniform distribution on the interval [0, 1) | | [`rand_like`](generated/torch.rand_like#torch.rand_like "torch.rand_like") | Returns a tensor with the same size as `input` that is filled with random numbers from a uniform distribution on the interval [0, 1). | | [`randint`](generated/torch.randint#torch.randint "torch.randint") | Returns a tensor filled with random integers generated uniformly between `low` (inclusive) and `high` (exclusive). | | [`randint_like`](generated/torch.randint_like#torch.randint_like "torch.randint_like") | Returns a tensor with the same shape as Tensor `input` filled with random integers generated uniformly between `low` (inclusive) and `high` (exclusive). | | [`randn`](generated/torch.randn#torch.randn "torch.randn") | Returns a tensor filled with random numbers from a normal distribution with mean `0` and variance `1` (also called the standard normal distribution). | | [`randn_like`](generated/torch.randn_like#torch.randn_like "torch.randn_like") | Returns a tensor with the same size as `input` that is filled with random numbers from a normal distribution with mean 0 and variance 1. | | [`randperm`](generated/torch.randperm#torch.randperm "torch.randperm") | Returns a random permutation of integers from `0` to `n - 1`. | ### In-place random sampling There are a few more in-place random sampling functions defined on Tensors as well.
Click through to refer to their documentation: * [`torch.Tensor.bernoulli_()`](tensors#torch.Tensor.bernoulli_ "torch.Tensor.bernoulli_") - in-place version of [`torch.bernoulli()`](generated/torch.bernoulli#torch.bernoulli "torch.bernoulli") * [`torch.Tensor.cauchy_()`](tensors#torch.Tensor.cauchy_ "torch.Tensor.cauchy_") - numbers drawn from the Cauchy distribution * [`torch.Tensor.exponential_()`](tensors#torch.Tensor.exponential_ "torch.Tensor.exponential_") - numbers drawn from the exponential distribution * [`torch.Tensor.geometric_()`](tensors#torch.Tensor.geometric_ "torch.Tensor.geometric_") - elements drawn from the geometric distribution * [`torch.Tensor.log_normal_()`](tensors#torch.Tensor.log_normal_ "torch.Tensor.log_normal_") - samples from the log-normal distribution * [`torch.Tensor.normal_()`](tensors#torch.Tensor.normal_ "torch.Tensor.normal_") - in-place version of [`torch.normal()`](generated/torch.normal#torch.normal "torch.normal") * [`torch.Tensor.random_()`](tensors#torch.Tensor.random_ "torch.Tensor.random_") - numbers sampled from the discrete uniform distribution * [`torch.Tensor.uniform_()`](tensors#torch.Tensor.uniform_ "torch.Tensor.uniform_") - numbers sampled from the continuous uniform distribution ### Quasi-random sampling | | | | --- | --- | | [`quasirandom.SobolEngine`](generated/torch.quasirandom.sobolengine#torch.quasirandom.SobolEngine "torch.quasirandom.SobolEngine") | The [`torch.quasirandom.SobolEngine`](generated/torch.quasirandom.sobolengine#torch.quasirandom.SobolEngine "torch.quasirandom.SobolEngine") is an engine for generating (scrambled) Sobol sequences. | Serialization ------------- | | | | --- | --- | | [`save`](generated/torch.save#torch.save "torch.save") | Saves an object to a disk file. | | [`load`](generated/torch.load#torch.load "torch.load") | Loads an object saved with [`torch.save()`](generated/torch.save#torch.save "torch.save") from a file. | Parallelism ----------- | | | | --- | --- | | [`get_num_threads`](generated/torch.get_num_threads#torch.get_num_threads "torch.get_num_threads") | Returns the number of threads used for parallelizing CPU operations | | [`set_num_threads`](generated/torch.set_num_threads#torch.set_num_threads "torch.set_num_threads") | Sets the number of threads used for intraop parallelism on CPU. | | [`get_num_interop_threads`](generated/torch.get_num_interop_threads#torch.get_num_interop_threads "torch.get_num_interop_threads") | Returns the number of threads used for inter-op parallelism on CPU (e.g. in JIT interpreter). | | [`set_num_interop_threads`](generated/torch.set_num_interop_threads#torch.set_num_interop_threads "torch.set_num_interop_threads") | Sets the number of threads used for interop parallelism (e.g. in JIT interpreter). | Locally disabling gradient computation -------------------------------------- The context managers [`torch.no_grad()`](generated/torch.no_grad#torch.no_grad "torch.no_grad"), [`torch.enable_grad()`](generated/torch.enable_grad#torch.enable_grad "torch.enable_grad"), and [`torch.set_grad_enabled()`](generated/torch.set_grad_enabled#torch.set_grad_enabled "torch.set_grad_enabled") are helpful for locally disabling and enabling gradient computation. See [Locally disabling gradient computation](autograd#locally-disable-grad) for more details on their usage. These context managers are thread local, so they won’t work if you send work to another thread using the `threading` module, etc. Examples:

```
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> torch.set_grad_enabled(True)  # this can also be used as a function
>>> y = x * 2
>>> y.requires_grad
True
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
```

| | | | --- | --- | | [`no_grad`](generated/torch.no_grad#torch.no_grad "torch.no_grad") | Context-manager that disables gradient calculation. | | [`enable_grad`](generated/torch.enable_grad#torch.enable_grad "torch.enable_grad") | Context-manager that enables gradient calculation. | | [`set_grad_enabled`](generated/torch.set_grad_enabled#torch.set_grad_enabled "torch.set_grad_enabled") | Context-manager that sets gradient calculation to on or off. | Math operations --------------- ### Pointwise Ops | | | | --- | --- | | [`abs`](generated/torch.abs#torch.abs "torch.abs") | Computes the absolute value of each element in `input`. | | [`absolute`](generated/torch.absolute#torch.absolute "torch.absolute") | Alias for [`torch.abs()`](generated/torch.abs#torch.abs "torch.abs") | | [`acos`](generated/torch.acos#torch.acos "torch.acos") | Computes the inverse cosine of each element in `input`. | | [`arccos`](generated/torch.arccos#torch.arccos "torch.arccos") | Alias for [`torch.acos()`](generated/torch.acos#torch.acos "torch.acos"). | | [`acosh`](generated/torch.acosh#torch.acosh "torch.acosh") | Returns a new tensor with the inverse hyperbolic cosine of the elements of `input`. | | [`arccosh`](generated/torch.arccosh#torch.arccosh "torch.arccosh") | Alias for [`torch.acosh()`](generated/torch.acosh#torch.acosh "torch.acosh"). | | [`add`](generated/torch.add#torch.add "torch.add") | Adds the scalar `other` to each element of the input `input` and returns a new resulting tensor. | | [`addcdiv`](generated/torch.addcdiv#torch.addcdiv "torch.addcdiv") | Performs the element-wise division of `tensor1` by `tensor2`, multiplies the result by the scalar `value` and adds it to `input`. | | [`addcmul`](generated/torch.addcmul#torch.addcmul "torch.addcmul") | Performs the element-wise multiplication of `tensor1` by `tensor2`, multiplies the result by the scalar `value` and adds it to `input`. | | [`angle`](generated/torch.angle#torch.angle "torch.angle") | Computes the element-wise angle (in radians) of the given `input` tensor. | | [`asin`](generated/torch.asin#torch.asin "torch.asin") | Returns a new tensor with the arcsine of the elements of `input`. | | [`arcsin`](generated/torch.arcsin#torch.arcsin "torch.arcsin") | Alias for [`torch.asin()`](generated/torch.asin#torch.asin "torch.asin"). | | [`asinh`](generated/torch.asinh#torch.asinh "torch.asinh") | Returns a new tensor with the inverse hyperbolic sine of the elements of `input`. | | [`arcsinh`](generated/torch.arcsinh#torch.arcsinh "torch.arcsinh") | Alias for [`torch.asinh()`](generated/torch.asinh#torch.asinh "torch.asinh"). | | [`atan`](generated/torch.atan#torch.atan "torch.atan") | Returns a new tensor with the arctangent of the elements of `input`. | | [`arctan`](generated/torch.arctan#torch.arctan "torch.arctan") | Alias for [`torch.atan()`](generated/torch.atan#torch.atan "torch.atan"). | | [`atanh`](generated/torch.atanh#torch.atanh "torch.atanh") | Returns a new tensor with the inverse hyperbolic tangent of the elements of `input`. | | [`arctanh`](generated/torch.arctanh#torch.arctanh "torch.arctanh") | Alias for [`torch.atanh()`](generated/torch.atanh#torch.atanh "torch.atanh").
| | [`atan2`](generated/torch.atan2#torch.atan2 "torch.atan2") | Element-wise arctangent of input_i / other_i with consideration of the quadrant. | | [`bitwise_not`](generated/torch.bitwise_not#torch.bitwise_not "torch.bitwise_not") | Computes the bitwise NOT of the given input tensor. | | [`bitwise_and`](generated/torch.bitwise_and#torch.bitwise_and "torch.bitwise_and") | Computes the bitwise AND of `input` and `other`. | | [`bitwise_or`](generated/torch.bitwise_or#torch.bitwise_or "torch.bitwise_or") | Computes the bitwise OR of `input` and `other`. | | [`bitwise_xor`](generated/torch.bitwise_xor#torch.bitwise_xor "torch.bitwise_xor") | Computes the bitwise XOR of `input` and `other`. | | [`ceil`](generated/torch.ceil#torch.ceil "torch.ceil") | Returns a new tensor with the ceil of the elements of `input`, the smallest integer greater than or equal to each element. | | [`clamp`](generated/torch.clamp#torch.clamp "torch.clamp") | Clamp all elements in `input` into the range `[` [`min`](generated/torch.min#torch.min "torch.min"), [`max`](generated/torch.max#torch.max "torch.max") `]`. | | [`clip`](generated/torch.clip#torch.clip "torch.clip") | Alias for [`torch.clamp()`](generated/torch.clamp#torch.clamp "torch.clamp"). | | [`conj`](generated/torch.conj#torch.conj "torch.conj") | Computes the element-wise conjugate of the given `input` tensor. | | [`copysign`](generated/torch.copysign#torch.copysign "torch.copysign") | Create a new floating-point tensor with the magnitude of `input` and the sign of `other`, elementwise. | | [`cos`](generated/torch.cos#torch.cos "torch.cos") | Returns a new tensor with the cosine of the elements of `input`. | | [`cosh`](generated/torch.cosh#torch.cosh "torch.cosh") | Returns a new tensor with the hyperbolic cosine of the elements of `input`. | | [`deg2rad`](generated/torch.deg2rad#torch.deg2rad "torch.deg2rad") | Returns a new tensor with each of the elements of `input` converted from angles in degrees to radians. | | [`div`](generated/torch.div#torch.div "torch.div") | Divides each element of the input `input` by the corresponding element of `other`. | | [`divide`](generated/torch.divide#torch.divide "torch.divide") | Alias for [`torch.div()`](generated/torch.div#torch.div "torch.div"). | | [`digamma`](generated/torch.digamma#torch.digamma "torch.digamma") | Computes the logarithmic derivative of the gamma function on `input`. | | [`erf`](generated/torch.erf#torch.erf "torch.erf") | Computes the error function of each element. | | [`erfc`](generated/torch.erfc#torch.erfc "torch.erfc") | Computes the complementary error function of each element of `input`. | | [`erfinv`](generated/torch.erfinv#torch.erfinv "torch.erfinv") | Computes the inverse error function of each element of `input`. | | [`exp`](generated/torch.exp#torch.exp "torch.exp") | Returns a new tensor with the exponential of the elements of the input tensor `input`. | | [`exp2`](generated/torch.exp2#torch.exp2 "torch.exp2") | Computes the base two exponential function of `input`. | | [`expm1`](generated/torch.expm1#torch.expm1 "torch.expm1") | Returns a new tensor with the exponential of the elements minus 1 of `input`. | | [`fake_quantize_per_channel_affine`](generated/torch.fake_quantize_per_channel_affine#torch.fake_quantize_per_channel_affine "torch.fake_quantize_per_channel_affine") | Returns a new tensor with the data in `input` fake quantized per channel using `scale`, `zero_point`, `quant_min` and `quant_max`, across the channel specified by `axis`.
| | [`fake_quantize_per_tensor_affine`](generated/torch.fake_quantize_per_tensor_affine#torch.fake_quantize_per_tensor_affine "torch.fake_quantize_per_tensor_affine") | Returns a new tensor with the data in `input` fake quantized using `scale`, `zero_point`, `quant_min` and `quant_max`. | | [`fix`](generated/torch.fix#torch.fix "torch.fix") | Alias for [`torch.trunc()`](generated/torch.trunc#torch.trunc "torch.trunc") | | [`float_power`](generated/torch.float_power#torch.float_power "torch.float_power") | Raises `input` to the power of `exponent`, elementwise, in double precision. | | [`floor`](generated/torch.floor#torch.floor "torch.floor") | Returns a new tensor with the floor of the elements of `input`, the largest integer less than or equal to each element. | | [`floor_divide`](generated/torch.floor_divide#torch.floor_divide "torch.floor_divide") | | | [`fmod`](generated/torch.fmod#torch.fmod "torch.fmod") | Computes the element-wise remainder of division. | | [`frac`](generated/torch.frac#torch.frac "torch.frac") | Computes the fractional portion of each element in `input`. | | [`imag`](generated/torch.imag#torch.imag "torch.imag") | Returns a new tensor containing imaginary values of the `self` tensor. | | [`ldexp`](generated/torch.ldexp#torch.ldexp "torch.ldexp") | Multiplies `input` by `2 ** other`. | | [`lerp`](generated/torch.lerp#torch.lerp "torch.lerp") | Does a linear interpolation of two tensors `start` (given by `input`) and `end` based on a scalar or tensor `weight` and returns the resulting `out` tensor. | | [`lgamma`](generated/torch.lgamma#torch.lgamma "torch.lgamma") | Computes the logarithm of the gamma function on `input`. | | [`log`](generated/torch.log#torch.log "torch.log") | Returns a new tensor with the natural logarithm of the elements of `input`. | | [`log10`](generated/torch.log10#torch.log10 "torch.log10") | Returns a new tensor with the logarithm to the base 10 of the elements of `input`. | | [`log1p`](generated/torch.log1p#torch.log1p "torch.log1p") | Returns a new tensor with the natural logarithm of (1 + `input`). | | [`log2`](generated/torch.log2#torch.log2 "torch.log2") | Returns a new tensor with the logarithm to the base 2 of the elements of `input`. | | [`logaddexp`](generated/torch.logaddexp#torch.logaddexp "torch.logaddexp") | Logarithm of the sum of exponentiations of the inputs. | | [`logaddexp2`](generated/torch.logaddexp2#torch.logaddexp2 "torch.logaddexp2") | Logarithm of the sum of exponentiations of the inputs in base-2. | | [`logical_and`](generated/torch.logical_and#torch.logical_and "torch.logical_and") | Computes the element-wise logical AND of the given input tensors. | | [`logical_not`](generated/torch.logical_not#torch.logical_not "torch.logical_not") | Computes the element-wise logical NOT of the given input tensor. | | [`logical_or`](generated/torch.logical_or#torch.logical_or "torch.logical_or") | Computes the element-wise logical OR of the given input tensors. | | [`logical_xor`](generated/torch.logical_xor#torch.logical_xor "torch.logical_xor") | Computes the element-wise logical XOR of the given input tensors. | | [`logit`](generated/torch.logit#torch.logit "torch.logit") | Returns a new tensor with the logit of the elements of `input`. | | [`hypot`](generated/torch.hypot#torch.hypot "torch.hypot") | Given the legs of a right triangle, return its hypotenuse. | | [`i0`](generated/torch.i0#torch.i0 "torch.i0") | Computes the zeroth order modified Bessel function of the first kind for each element of `input`.
| | [`igamma`](generated/torch.igamma#torch.igamma "torch.igamma") | Computes the regularized lower incomplete gamma function. | | [`igammac`](generated/torch.igammac#torch.igammac "torch.igammac") | Computes the regularized upper incomplete gamma function. | | [`mul`](generated/torch.mul#torch.mul "torch.mul") | Multiplies each element of the input `input` with the scalar `other` and returns a new resulting tensor. | | [`multiply`](generated/torch.multiply#torch.multiply "torch.multiply") | Alias for [`torch.mul()`](generated/torch.mul#torch.mul "torch.mul"). | | [`mvlgamma`](generated/torch.mvlgamma#torch.mvlgamma "torch.mvlgamma") | Computes the [multivariate log-gamma function](https://en.wikipedia.org/wiki/Multivariate_gamma_function) with dimension `p` element-wise. | | [`nan_to_num`](generated/torch.nan_to_num#torch.nan_to_num "torch.nan_to_num") | Replaces `NaN`, positive infinity, and negative infinity values in `input` with the values specified by `nan`, `posinf`, and `neginf`, respectively. | | [`neg`](generated/torch.neg#torch.neg "torch.neg") | Returns a new tensor with the negative of the elements of `input`. | | [`negative`](generated/torch.negative#torch.negative "torch.negative") | Alias for [`torch.neg()`](generated/torch.neg#torch.neg "torch.neg") | | [`nextafter`](generated/torch.nextafter#torch.nextafter "torch.nextafter") | Return the next floating-point value after `input` towards `other`, elementwise. | | [`polygamma`](generated/torch.polygamma#torch.polygamma "torch.polygamma") | Computes the n-th derivative of the digamma function on `input`. | | [`pow`](generated/torch.pow#torch.pow "torch.pow") | Takes the power of each element in `input` with `exponent` and returns a tensor with the result. | | [`rad2deg`](generated/torch.rad2deg#torch.rad2deg "torch.rad2deg") | Returns a new tensor with each of the elements of `input` converted from angles in radians to degrees. | | [`real`](generated/torch.real#torch.real "torch.real") | Returns a new tensor containing real values of the `self` tensor. | | [`reciprocal`](generated/torch.reciprocal#torch.reciprocal "torch.reciprocal") | Returns a new tensor with the reciprocal of the elements of `input`. | | [`remainder`](generated/torch.remainder#torch.remainder "torch.remainder") | Computes the element-wise remainder of division. | | [`round`](generated/torch.round#torch.round "torch.round") | Returns a new tensor with each of the elements of `input` rounded to the closest integer. | | [`rsqrt`](generated/torch.rsqrt#torch.rsqrt "torch.rsqrt") | Returns a new tensor with the reciprocal of the square-root of each of the elements of `input`. | | [`sigmoid`](generated/torch.sigmoid#torch.sigmoid "torch.sigmoid") | Returns a new tensor with the sigmoid of the elements of `input`. | | [`sign`](generated/torch.sign#torch.sign "torch.sign") | Returns a new tensor with the signs of the elements of `input`. | | [`sgn`](generated/torch.sgn#torch.sgn "torch.sgn") | For complex tensors, this function returns a new tensor whose elements have the same angle as that of the elements of `input` and absolute value 1. | | [`signbit`](generated/torch.signbit#torch.signbit "torch.signbit") | Tests if each element of `input` has its sign bit set (is less than zero) or not. | | [`sin`](generated/torch.sin#torch.sin "torch.sin") | Returns a new tensor with the sine of the elements of `input`.
| | [`sinc`](generated/torch.sinc#torch.sinc "torch.sinc") | Computes the normalized sinc of `input`. | | [`sinh`](generated/torch.sinh#torch.sinh "torch.sinh") | Returns a new tensor with the hyperbolic sine of the elements of `input`. | | [`sqrt`](generated/torch.sqrt#torch.sqrt "torch.sqrt") | Returns a new tensor with the square-root of the elements of `input`. | | [`square`](generated/torch.square#torch.square "torch.square") | Returns a new tensor with the square of the elements of `input`. | | [`sub`](generated/torch.sub#torch.sub "torch.sub") | Subtracts `other`, scaled by `alpha`, from `input`. | | [`subtract`](generated/torch.subtract#torch.subtract "torch.subtract") | Alias for [`torch.sub()`](generated/torch.sub#torch.sub "torch.sub"). | | [`tan`](generated/torch.tan#torch.tan "torch.tan") | Returns a new tensor with the tangent of the elements of `input`. | | [`tanh`](generated/torch.tanh#torch.tanh "torch.tanh") | Returns a new tensor with the hyperbolic tangent of the elements of `input`. | | [`true_divide`](generated/torch.true_divide#torch.true_divide "torch.true_divide") | Alias for [`torch.div()`](generated/torch.div#torch.div "torch.div") with `rounding_mode=None`. | | [`trunc`](generated/torch.trunc#torch.trunc "torch.trunc") | Returns a new tensor with the truncated integer values of the elements of `input`. | | [`xlogy`](generated/torch.xlogy#torch.xlogy "torch.xlogy") | Computes `input * log(other)` with the following cases. | ### Reduction Ops | | | | --- | --- | | [`argmax`](generated/torch.argmax#torch.argmax "torch.argmax") | Returns the indices of the maximum value of all elements in the `input` tensor. | | [`argmin`](generated/torch.argmin#torch.argmin "torch.argmin") | Returns the indices of the minimum value(s) of the flattened tensor or along a dimension. | | [`amax`](generated/torch.amax#torch.amax "torch.amax") | Returns the maximum value of each slice of the `input` tensor in the given dimension(s) `dim`. | | [`amin`](generated/torch.amin#torch.amin "torch.amin") | Returns the minimum value of each slice of the `input` tensor in the given dimension(s) `dim`. | | [`all`](generated/torch.all#torch.all "torch.all") | Tests if all elements in `input` evaluate to `True`. | | [`any`](generated/torch.any#torch.any "torch.any") | Tests if any element in `input` evaluates to `True`. | | [`max`](generated/torch.max#torch.max "torch.max") | Returns the maximum value of all elements in the `input` tensor. | | [`min`](generated/torch.min#torch.min "torch.min") | Returns the minimum value of all elements in the `input` tensor. | | [`dist`](generated/torch.dist#torch.dist "torch.dist") | Returns the p-norm of (`input` - `other`). | | [`logsumexp`](generated/torch.logsumexp#torch.logsumexp "torch.logsumexp") | Returns the log of summed exponentials of each row of the `input` tensor in the given dimension `dim`. | | [`mean`](generated/torch.mean#torch.mean "torch.mean") | Returns the mean value of all elements in the `input` tensor. | | [`median`](generated/torch.median#torch.median "torch.median") | Returns the median of the values in `input`. | | [`nanmedian`](generated/torch.nanmedian#torch.nanmedian "torch.nanmedian") | Returns the median of the values in `input`, ignoring `NaN` values. | | [`mode`](generated/torch.mode#torch.mode "torch.mode") | Returns a namedtuple `(values, indices)` where `values` is the mode value of each row of the `input` tensor in the given dimension `dim`.
| | [`norm`](generated/torch.norm#torch.norm "torch.norm") | Returns the matrix norm or vector norm of a given tensor. | | [`nansum`](generated/torch.nansum#torch.nansum "torch.nansum") | Returns the sum of all elements, treating Not a Numbers (NaNs) as zero. | | [`prod`](generated/torch.prod#torch.prod "torch.prod") | Returns the product of all elements in the `input` tensor. | | [`quantile`](generated/torch.quantile#torch.quantile "torch.quantile") | Returns the q-th quantiles of all elements in the `input` tensor, doing a linear interpolation when the q-th quantile lies between two data points. | | [`nanquantile`](generated/torch.nanquantile#torch.nanquantile "torch.nanquantile") | This is a variant of [`torch.quantile()`](generated/torch.quantile#torch.quantile "torch.quantile") that “ignores” `NaN` values, computing the quantiles `q` as if `NaN` values in `input` did not exist. | | [`std`](generated/torch.std#torch.std "torch.std") | Returns the standard-deviation of all elements in the `input` tensor. | | [`std_mean`](generated/torch.std_mean#torch.std_mean "torch.std_mean") | Returns the standard-deviation and mean of all elements in the `input` tensor. | | [`sum`](generated/torch.sum#torch.sum "torch.sum") | Returns the sum of all elements in the `input` tensor. | | [`unique`](generated/torch.unique#torch.unique "torch.unique") | Returns the unique elements of the input tensor. | | [`unique_consecutive`](generated/torch.unique_consecutive#torch.unique_consecutive "torch.unique_consecutive") | Eliminates all but the first element from every consecutive group of equivalent elements. | | [`var`](generated/torch.var#torch.var "torch.var") | Returns the variance of all elements in the `input` tensor. | | [`var_mean`](generated/torch.var_mean#torch.var_mean "torch.var_mean") | Returns the variance and mean of all elements in the `input` tensor. | | [`count_nonzero`](generated/torch.count_nonzero#torch.count_nonzero "torch.count_nonzero") | Counts the number of non-zero values in the tensor `input` along the given `dim`. | ### Comparison Ops | | | | --- | --- | | [`allclose`](generated/torch.allclose#torch.allclose "torch.allclose") | This function checks if all `input` and `other` satisfy the condition `abs(input - other) <= atol + rtol * abs(other)`, elementwise. | | [`argsort`](generated/torch.argsort#torch.argsort "torch.argsort") | Returns the indices that sort a tensor along a given dimension in ascending order by value. | | [`eq`](generated/torch.eq#torch.eq "torch.eq") | Computes element-wise equality. | | [`equal`](generated/torch.equal#torch.equal "torch.equal") | `True` if two tensors have the same size and elements, `False` otherwise. | | [`ge`](generated/torch.ge#torch.ge "torch.ge") | Computes `input >= other` element-wise. | | [`greater_equal`](generated/torch.greater_equal#torch.greater_equal "torch.greater_equal") | Alias for [`torch.ge()`](generated/torch.ge#torch.ge "torch.ge"). | | [`gt`](generated/torch.gt#torch.gt "torch.gt") | Computes `input > other` element-wise. | | [`greater`](generated/torch.greater#torch.greater "torch.greater") | Alias for [`torch.gt()`](generated/torch.gt#torch.gt "torch.gt"). | | [`isclose`](generated/torch.isclose#torch.isclose "torch.isclose") | Returns a new tensor with boolean elements representing if each element of `input` is “close” to the corresponding element of `other`. | | [`isfinite`](generated/torch.isfinite#torch.isfinite "torch.isfinite") | Returns a new tensor with boolean elements representing if each element is `finite` or not.
| | [`isinf`](generated/torch.isinf#torch.isinf "torch.isinf") | Tests if each element of `input` is infinite (positive or negative infinity) or not. | | [`isposinf`](generated/torch.isposinf#torch.isposinf "torch.isposinf") | Tests if each element of `input` is positive infinity or not. | | [`isneginf`](generated/torch.isneginf#torch.isneginf "torch.isneginf") | Tests if each element of `input` is negative infinity or not. | | [`isnan`](generated/torch.isnan#torch.isnan "torch.isnan") | Returns a new tensor with boolean elements representing if each element of `input` is NaN or not. | | [`isreal`](generated/torch.isreal#torch.isreal "torch.isreal") | Returns a new tensor with boolean elements representing if each element of `input` is real-valued or not. | | [`kthvalue`](generated/torch.kthvalue#torch.kthvalue "torch.kthvalue") | Returns a namedtuple `(values, indices)` where `values` is the `k` th smallest element of each row of the `input` tensor in the given dimension `dim`. | | [`le`](generated/torch.le#torch.le "torch.le") | Computes `input <= other` element-wise. | | [`less_equal`](generated/torch.less_equal#torch.less_equal "torch.less_equal") | Alias for [`torch.le()`](generated/torch.le#torch.le "torch.le"). | | [`lt`](generated/torch.lt#torch.lt "torch.lt") | Computes `input < other` element-wise. | | [`less`](generated/torch.less#torch.less "torch.less") | Alias for [`torch.lt()`](generated/torch.lt#torch.lt "torch.lt"). | | [`maximum`](generated/torch.maximum#torch.maximum "torch.maximum") | Computes the element-wise maximum of `input` and `other`. | | [`minimum`](generated/torch.minimum#torch.minimum "torch.minimum") | Computes the element-wise minimum of `input` and `other`. | | [`fmax`](generated/torch.fmax#torch.fmax "torch.fmax") | Computes the element-wise maximum of `input` and `other`. | | [`fmin`](generated/torch.fmin#torch.fmin "torch.fmin") | Computes the element-wise minimum of `input` and `other`. | | [`ne`](generated/torch.ne#torch.ne "torch.ne") | Computes `input != other` element-wise. | | [`not_equal`](generated/torch.not_equal#torch.not_equal "torch.not_equal") | Alias for [`torch.ne()`](generated/torch.ne#torch.ne "torch.ne"). | | [`sort`](generated/torch.sort#torch.sort "torch.sort") | Sorts the elements of the `input` tensor along a given dimension in ascending order by value. | | [`topk`](generated/torch.topk#torch.topk "torch.topk") | Returns the `k` largest elements of the given `input` tensor along a given dimension. | | [`msort`](generated/torch.msort#torch.msort "torch.msort") | Sorts the elements of the `input` tensor along its first dimension in ascending order by value. | ### Spectral Ops | | | | --- | --- | | [`stft`](generated/torch.stft#torch.stft "torch.stft") | Short-time Fourier transform (STFT). | | [`istft`](generated/torch.istft#torch.istft "torch.istft") | Inverse short time Fourier Transform. | | [`bartlett_window`](generated/torch.bartlett_window#torch.bartlett_window "torch.bartlett_window") | Bartlett window function. | | [`blackman_window`](generated/torch.blackman_window#torch.blackman_window "torch.blackman_window") | Blackman window function. | | [`hamming_window`](generated/torch.hamming_window#torch.hamming_window "torch.hamming_window") | Hamming window function. | | [`hann_window`](generated/torch.hann_window#torch.hann_window "torch.hann_window") | Hann window function.
| | [`kaiser_window`](generated/torch.kaiser_window#torch.kaiser_window "torch.kaiser_window") | Computes the Kaiser window with window length `window_length` and shape parameter `beta`. | ### Other Operations | | | | --- | --- | | [`atleast_1d`](generated/torch.atleast_1d#torch.atleast_1d "torch.atleast_1d") | Returns a 1-dimensional view of each input tensor with zero dimensions. | | [`atleast_2d`](generated/torch.atleast_2d#torch.atleast_2d "torch.atleast_2d") | Returns a 2-dimensional view of each input tensor with zero dimensions. | | [`atleast_3d`](generated/torch.atleast_3d#torch.atleast_3d "torch.atleast_3d") | Returns a 3-dimensional view of each input tensor with zero dimensions. | | [`bincount`](generated/torch.bincount#torch.bincount "torch.bincount") | Count the frequency of each value in an array of non-negative ints. | | [`block_diag`](generated/torch.block_diag#torch.block_diag "torch.block_diag") | Create a block diagonal matrix from provided tensors. | | [`broadcast_tensors`](generated/torch.broadcast_tensors#torch.broadcast_tensors "torch.broadcast_tensors") | Broadcasts the given tensors according to [Broadcasting semantics](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics). | | [`broadcast_to`](generated/torch.broadcast_to#torch.broadcast_to "torch.broadcast_to") | Broadcasts `input` to the shape `shape`. | | [`broadcast_shapes`](generated/torch.broadcast_shapes#torch.broadcast_shapes "torch.broadcast_shapes") | Similar to [`broadcast_tensors()`](generated/torch.broadcast_tensors#torch.broadcast_tensors "torch.broadcast_tensors") but for shapes. | | [`bucketize`](generated/torch.bucketize#torch.bucketize "torch.bucketize") | Returns the indices of the buckets to which each value in the `input` belongs, where the boundaries of the buckets are set by `boundaries`. | | [`cartesian_prod`](generated/torch.cartesian_prod#torch.cartesian_prod "torch.cartesian_prod") | Does the Cartesian product of the given sequence of tensors. | | [`cdist`](generated/torch.cdist#torch.cdist "torch.cdist") | Computes the batched p-norm distance between each pair of the two collections of row vectors. | | [`clone`](generated/torch.clone#torch.clone "torch.clone") | Returns a copy of `input`. | | [`combinations`](generated/torch.combinations#torch.combinations "torch.combinations") | Compute combinations of length `r` of the given tensor. | | [`cross`](generated/torch.cross#torch.cross "torch.cross") | Returns the cross product of vectors in dimension `dim` of `input` and `other`. | | [`cummax`](generated/torch.cummax#torch.cummax "torch.cummax") | Returns a namedtuple `(values, indices)` where `values` is the cumulative maximum of elements of `input` in the dimension `dim`. | | [`cummin`](generated/torch.cummin#torch.cummin "torch.cummin") | Returns a namedtuple `(values, indices)` where `values` is the cumulative minimum of elements of `input` in the dimension `dim`. | | [`cumprod`](generated/torch.cumprod#torch.cumprod "torch.cumprod") | Returns the cumulative product of elements of `input` in the dimension `dim`. | | [`cumsum`](generated/torch.cumsum#torch.cumsum "torch.cumsum") | Returns the cumulative sum of elements of `input` in the dimension `dim`.
| | [`diag`](generated/torch.diag#torch.diag "torch.diag") | * If `input` is a vector (1-D tensor), then returns a 2-D square tensor | | [`diag_embed`](generated/torch.diag_embed#torch.diag_embed "torch.diag_embed") | Creates a tensor whose diagonals of certain 2D planes (specified by `dim1` and `dim2`) are filled by `input`. | | [`diagflat`](generated/torch.diagflat#torch.diagflat "torch.diagflat") | * If `input` is a vector (1-D tensor), then returns a 2-D square tensor | | [`diagonal`](generated/torch.diagonal#torch.diagonal "torch.diagonal") | Returns a partial view of `input` with its diagonal elements with respect to `dim1` and `dim2` appended as a dimension at the end of the shape. | | [`diff`](generated/torch.diff#torch.diff "torch.diff") | Computes the n-th forward difference along the given dimension. | | [`einsum`](generated/torch.einsum#torch.einsum "torch.einsum") | Sums the product of the elements of the input `operands` along dimensions specified using a notation based on the Einstein summation convention. | | [`flatten`](generated/torch.flatten#torch.flatten "torch.flatten") | Flattens `input` by reshaping it into a one-dimensional tensor. | | [`flip`](generated/torch.flip#torch.flip "torch.flip") | Reverses the order of an n-D tensor along the given axes in `dims`. | | [`fliplr`](generated/torch.fliplr#torch.fliplr "torch.fliplr") | Flip tensor in the left/right direction, returning a new tensor. | | [`flipud`](generated/torch.flipud#torch.flipud "torch.flipud") | Flip tensor in the up/down direction, returning a new tensor. | | [`kron`](generated/torch.kron#torch.kron "torch.kron") | Computes the Kronecker product, denoted by `⊗`, of `input` and `other`. | | [`rot90`](generated/torch.rot90#torch.rot90 "torch.rot90") | Rotates an n-D tensor by 90 degrees in the plane specified by the `dims` axes. | | [`gcd`](generated/torch.gcd#torch.gcd "torch.gcd") | Computes the element-wise greatest common divisor (GCD) of `input` and `other`. | | [`histc`](generated/torch.histc#torch.histc "torch.histc") | Computes the histogram of a tensor. | | [`meshgrid`](generated/torch.meshgrid#torch.meshgrid "torch.meshgrid") | Takes N tensors, each of which can be either a scalar or a 1-dimensional vector, and creates N N-dimensional grids, where the i-th grid is defined by expanding the i-th input over dimensions defined by the other inputs. | | [`lcm`](generated/torch.lcm#torch.lcm "torch.lcm") | Computes the element-wise least common multiple (LCM) of `input` and `other`. | | [`logcumsumexp`](generated/torch.logcumsumexp#torch.logcumsumexp "torch.logcumsumexp") | Returns the logarithm of the cumulative summation of the exponentiation of elements of `input` in the dimension `dim`. | | [`ravel`](generated/torch.ravel#torch.ravel "torch.ravel") | Return a contiguous flattened tensor. | | [`renorm`](generated/torch.renorm#torch.renorm "torch.renorm") | Returns a tensor where each sub-tensor of `input` along dimension `dim` is normalized such that the `p`-norm of the sub-tensor is lower than the value `maxnorm`. | | [`repeat_interleave`](generated/torch.repeat_interleave#torch.repeat_interleave "torch.repeat_interleave") | Repeat elements of a tensor. | | [`roll`](generated/torch.roll#torch.roll "torch.roll") | Roll the tensor along the given dimension(s).
| | [`searchsorted`](generated/torch.searchsorted#torch.searchsorted "torch.searchsorted") | Find the indices from the *innermost* dimension of `sorted_sequence` such that, if the corresponding values in `values` were inserted before the indices, the order of the corresponding *innermost* dimension within `sorted_sequence` would be preserved. | | [`tensordot`](generated/torch.tensordot#torch.tensordot "torch.tensordot") | Returns a contraction of `a` and `b` over multiple dimensions. | | [`trace`](generated/torch.trace#torch.trace "torch.trace") | Returns the sum of the elements of the diagonal of the input 2-D matrix. | | [`tril`](generated/torch.tril#torch.tril "torch.tril") | Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices `input`; the other elements of the result tensor `out` are set to 0. | | [`tril_indices`](generated/torch.tril_indices#torch.tril_indices "torch.tril_indices") | Returns the indices of the lower triangular part of a `row`-by-`col` matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. | | [`triu`](generated/torch.triu#torch.triu "torch.triu") | Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices `input`; the other elements of the result tensor `out` are set to 0. | | [`triu_indices`](generated/torch.triu_indices#torch.triu_indices "torch.triu_indices") | Returns the indices of the upper triangular part of a `row` by `col` matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. | | [`vander`](generated/torch.vander#torch.vander "torch.vander") | Generates a Vandermonde matrix. | | [`view_as_real`](generated/torch.view_as_real#torch.view_as_real "torch.view_as_real") | Returns a view of `input` as a real tensor. | | [`view_as_complex`](generated/torch.view_as_complex#torch.view_as_complex "torch.view_as_complex") | Returns a view of `input` as a complex tensor. | ### BLAS and LAPACK Operations | | | | --- | --- | | [`addbmm`](generated/torch.addbmm#torch.addbmm "torch.addbmm") | Performs a batch matrix-matrix product of matrices stored in `batch1` and `batch2`, with a reduced add step (all matrix multiplications get accumulated along the first dimension). | | [`addmm`](generated/torch.addmm#torch.addmm "torch.addmm") | Performs a matrix multiplication of the matrices `mat1` and `mat2`. | | [`addmv`](generated/torch.addmv#torch.addmv "torch.addmv") | Performs a matrix-vector product of the matrix `mat` and the vector `vec`. | | [`addr`](generated/torch.addr#torch.addr "torch.addr") | Performs the outer-product of vectors `vec1` and `vec2` and adds it to the matrix `input`. | | [`baddbmm`](generated/torch.baddbmm#torch.baddbmm "torch.baddbmm") | Performs a batch matrix-matrix product of matrices in `batch1` and `batch2`. | | [`bmm`](generated/torch.bmm#torch.bmm "torch.bmm") | Performs a batch matrix-matrix product of matrices stored in `input` and `mat2`. | | [`chain_matmul`](generated/torch.chain_matmul#torch.chain_matmul "torch.chain_matmul") | Returns the matrix product of the N 2-D tensors. | | [`cholesky`](generated/torch.cholesky#torch.cholesky "torch.cholesky") | Computes the Cholesky decomposition of a symmetric positive-definite matrix `A` or for batches of symmetric positive-definite matrices.
| | [`cholesky_inverse`](generated/torch.cholesky_inverse#torch.cholesky_inverse "torch.cholesky_inverse") | Computes the inverse of a symmetric positive-definite matrix `A` using its Cholesky factor `u`: returns matrix `inv`. | | [`cholesky_solve`](generated/torch.cholesky_solve#torch.cholesky_solve "torch.cholesky_solve") | Solves a linear system of equations with a positive semidefinite matrix to be inverted given its Cholesky factor matrix `u`. | | [`dot`](generated/torch.dot#torch.dot "torch.dot") | Computes the dot product of two 1D tensors. | | [`eig`](generated/torch.eig#torch.eig "torch.eig") | Computes the eigenvalues and eigenvectors of a real square matrix. | | [`geqrf`](generated/torch.geqrf#torch.geqrf "torch.geqrf") | This is a low-level function for calling LAPACK directly. | | [`ger`](generated/torch.ger#torch.ger "torch.ger") | Alias of [`torch.outer()`](generated/torch.outer#torch.outer "torch.outer"). | | [`inner`](generated/torch.inner#torch.inner "torch.inner") | Computes the dot product for 1D tensors. | | [`inverse`](generated/torch.inverse#torch.inverse "torch.inverse") | Takes the inverse of the square matrix `input`. | | [`det`](generated/torch.det#torch.det "torch.det") | Calculates determinant of a square matrix or batches of square matrices. | | [`logdet`](generated/torch.logdet#torch.logdet "torch.logdet") | Calculates log determinant of a square matrix or batches of square matrices. | | [`slogdet`](generated/torch.slogdet#torch.slogdet "torch.slogdet") | Calculates the sign and log absolute value of the determinant(s) of a square matrix or batches of square matrices. | | [`lstsq`](generated/torch.lstsq#torch.lstsq "torch.lstsq") | Computes the solution to the least squares and least norm problems for a full rank matrix `A` of size `(m × n)` and a matrix `B` of size `(m × k)`. | | [`lu`](generated/torch.lu#torch.lu "torch.lu") | Computes the LU factorization of a matrix or batches of matrices `A`. | | [`lu_solve`](generated/torch.lu_solve#torch.lu_solve "torch.lu_solve") | Returns the LU solve of the linear system `Ax = b` using the partially pivoted LU factorization of A from [`torch.lu()`](generated/torch.lu#torch.lu "torch.lu"). | | [`lu_unpack`](generated/torch.lu_unpack#torch.lu_unpack "torch.lu_unpack") | Unpacks the data and pivots from a LU factorization of a tensor. | | [`matmul`](generated/torch.matmul#torch.matmul "torch.matmul") | Matrix product of two tensors. | | [`matrix_power`](generated/torch.matrix_power#torch.matrix_power "torch.matrix_power") | Returns the matrix raised to the power `n` for square matrices. | | [`matrix_rank`](generated/torch.matrix_rank#torch.matrix_rank "torch.matrix_rank") | Returns the numerical rank of a 2-D tensor. | | [`matrix_exp`](generated/torch.matrix_exp#torch.matrix_exp "torch.matrix_exp") | Returns the matrix exponential. | | [`mm`](generated/torch.mm#torch.mm "torch.mm") | Performs a matrix multiplication of the matrices `input` and `mat2`. | | [`mv`](generated/torch.mv#torch.mv "torch.mv") | Performs a matrix-vector product of the matrix `input` and the vector `vec`. | | [`orgqr`](generated/torch.orgqr#torch.orgqr "torch.orgqr") | Computes the orthogonal matrix `Q` of a QR factorization, from the `(input, input2)` tuple returned by [`torch.geqrf()`](generated/torch.geqrf#torch.geqrf "torch.geqrf").
| | [`ormqr`](generated/torch.ormqr#torch.ormqr "torch.ormqr") | Multiplies `mat` (given by `input3`) by the orthogonal `Q` matrix of the QR factorization formed by [`torch.geqrf()`](generated/torch.geqrf#torch.geqrf "torch.geqrf") that is represented by `(a, tau)` (given by (`input`, `input2`)). | | [`outer`](generated/torch.outer#torch.outer "torch.outer") | Outer product of `input` and `vec2`. | | [`pinverse`](generated/torch.pinverse#torch.pinverse "torch.pinverse") | Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor. | | [`qr`](generated/torch.qr#torch.qr "torch.qr") | Computes the QR decomposition of a matrix or a batch of matrices `input`, and returns a namedtuple (Q, R) of tensors such that `input = QR` with `Q` being an orthogonal matrix or batch of orthogonal matrices and `R` being an upper triangular matrix or batch of upper triangular matrices. | | [`solve`](generated/torch.solve#torch.solve "torch.solve") | This function returns the solution to the system of linear equations represented by `AX = B` and the LU factorization of A, in order as a namedtuple `solution, LU`. | | [`svd`](generated/torch.svd#torch.svd "torch.svd") | Computes the singular value decomposition of either a matrix or batch of matrices `input`. | | [`svd_lowrank`](generated/torch.svd_lowrank#torch.svd_lowrank "torch.svd_lowrank") | Return the singular value decomposition `(U, S, V)` of a matrix, batches of matrices, or a sparse matrix `A` such that `A ≈ U diag(S) V^T`. | | [`pca_lowrank`](generated/torch.pca_lowrank#torch.pca_lowrank "torch.pca_lowrank") | Performs linear Principal Component Analysis (PCA) on a low-rank matrix, batches of such matrices, or sparse matrix. | | [`symeig`](generated/torch.symeig#torch.symeig "torch.symeig") | This function returns eigenvalues and eigenvectors of a real symmetric matrix `input` or a batch of real symmetric matrices, represented by a namedtuple (eigenvalues, eigenvectors). | | [`lobpcg`](generated/torch.lobpcg#torch.lobpcg "torch.lobpcg") | Find the k largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigenvalue problem using matrix-free LOBPCG methods. | | [`trapz`](generated/torch.trapz#torch.trapz "torch.trapz") | Estimate `∫ y dx` along `dim`, using the trapezoid rule. | | [`triangular_solve`](generated/torch.triangular_solve#torch.triangular_solve "torch.triangular_solve") | Solves a system of equations with a triangular coefficient matrix `A` and multiple right-hand sides `b`. | | [`vdot`](generated/torch.vdot#torch.vdot "torch.vdot") | Computes the dot product of two 1D tensors. | Utilities --------- | | | | --- | --- | | [`compiled_with_cxx11_abi`](generated/torch.compiled_with_cxx11_abi#torch.compiled_with_cxx11_abi "torch.compiled_with_cxx11_abi") | Returns whether PyTorch was built with \_GLIBCXX\_USE\_CXX11\_ABI=1 | | [`result_type`](generated/torch.result_type#torch.result_type "torch.result_type") | Returns the [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") that would result from performing an arithmetic operation on the provided input tensors. | | [`can_cast`](generated/torch.can_cast#torch.can_cast "torch.can_cast") | Determines if a type conversion is allowed under PyTorch casting rules described in the type promotion [documentation](tensor_attributes#type-promotion-doc).
| | [`promote_types`](generated/torch.promote_types#torch.promote_types "torch.promote_types") | Returns the [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") with the smallest size and scalar kind that is not smaller nor of lower kind than either `type1` or `type2`. | | [`use_deterministic_algorithms`](generated/torch.use_deterministic_algorithms#torch.use_deterministic_algorithms "torch.use_deterministic_algorithms") | Sets whether PyTorch operations must use “deterministic” algorithms. | | [`are_deterministic_algorithms_enabled`](generated/torch.are_deterministic_algorithms_enabled#torch.are_deterministic_algorithms_enabled "torch.are_deterministic_algorithms_enabled") | Returns True if the global deterministic flag is turned on. | | [`_assert`](generated/torch._assert#torch._assert "torch._assert") | A wrapper around Python’s assert which is symbolically traceable. |
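The dtype utilities above ([`result_type`](generated/torch.result_type#torch.result_type "torch.result_type"), [`promote_types`](generated/torch.promote_types#torch.promote_types "torch.promote_types"), [`can_cast`](generated/torch.can_cast#torch.can_cast "torch.can_cast")) operate on dtypes alone and perform no computation, so they are cheap to call when validating mixed-dtype operations. A minimal sketch; the printed values follow PyTorch's documented type promotion rules:

```
>>> # a Python float scalar promotes an integer tensor to the default float dtype
>>> torch.result_type(torch.tensor([1, 2], dtype=torch.int), 1.0)
torch.float32
>>> # the smallest dtype that can hold both uint8 and int8 values is int16
>>> torch.promote_types(torch.uint8, torch.int8)
torch.int16
>>> # narrowing float-to-float casts are allowed, float-to-int casts are not
>>> torch.can_cast(torch.double, torch.float)
True
>>> torch.can_cast(torch.float, torch.int)
False
```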
pytorch torch.nn.intrinsic.qat torch.nn.intrinsic.qat ====================== This module implements the versions of those fused operations needed for quantization aware training. ConvBn2d -------- `class torch.nn.intrinsic.qat.ConvBn2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/qat/modules/conv_fused.html#ConvBn2d) A ConvBn2d module is a module fused from Conv2d and BatchNorm2d, attached with FakeQuantize modules for weight, used in quantization aware training. We combined the interface of [`torch.nn.Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") and [`torch.nn.BatchNorm2d`](generated/torch.nn.batchnorm2d#torch.nn.BatchNorm2d "torch.nn.BatchNorm2d"). Similar to [`torch.nn.Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d"), with FakeQuantize modules initialized to default. Variables * **~ConvBn2d.freeze\_bn** – * **~ConvBn2d.weight\_fake\_quant** – fake quant module for weight ConvBnReLU2d ------------ `class torch.nn.intrinsic.qat.ConvBnReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/qat/modules/conv_fused.html#ConvBnReLU2d) A ConvBnReLU2d module is a module fused from Conv2d, BatchNorm2d and ReLU, attached with FakeQuantize modules for weight, used in quantization aware training. We combined the interface of [`torch.nn.Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d"), [`torch.nn.BatchNorm2d`](generated/torch.nn.batchnorm2d#torch.nn.BatchNorm2d "torch.nn.BatchNorm2d"), and [`torch.nn.ReLU`](generated/torch.nn.relu#torch.nn.ReLU "torch.nn.ReLU"). Similar to `torch.nn.Conv2d`, with FakeQuantize modules initialized to default. Variables **~ConvBnReLU2d.weight\_fake\_quant** – fake quant module for weight ConvReLU2d ---------- `class torch.nn.intrinsic.qat.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/qat/modules/conv_fused.html#ConvReLU2d) A ConvReLU2d module is a fused module of Conv2d and ReLU, attached with FakeQuantize modules for weight for quantization aware training. We combined the interface of [`Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") and [`ReLU`](generated/torch.nn.relu#torch.nn.ReLU "torch.nn.ReLU"). Variables **~ConvReLU2d.weight\_fake\_quant** – fake quant module for weight LinearReLU ---------- `class torch.nn.intrinsic.qat.LinearReLU(in_features, out_features, bias=True, qconfig=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/qat/modules/linear_relu.html#LinearReLU) A LinearReLU module fused from Linear and ReLU modules, attached with FakeQuantize modules for weight, used in quantization aware training. We adopt the same interface as [`torch.nn.Linear`](generated/torch.nn.linear#torch.nn.Linear "torch.nn.Linear"). Similar to `torch.nn.intrinsic.LinearReLU`, with FakeQuantize modules initialized to default.
Variables **~LinearReLU.weight\_fake\_quant** – fake quant module for weight Examples: ``` >>> m = nn.intrinsic.qat.LinearReLU(20, 30) >>> input = torch.randn(128, 20) >>> output = m(input) >>> print(output.size()) torch.Size([128, 30]) ``` pytorch torch.hub torch.hub ========= Pytorch Hub is a pre-trained model repository designed to facilitate research reproducibility. Publishing models ----------------- Pytorch Hub supports publishing pre-trained models (model definitions and pre-trained weights) to a GitHub repository by adding a simple `hubconf.py` file; `hubconf.py` can have multiple entrypoints. Each entrypoint is defined as a Python function (example: a pre-trained model you want to publish). ``` def entrypoint_name(*args, **kwargs): # args & kwargs are optional, for models which take positional/keyword arguments. ... ``` ### How to implement an entrypoint? Here is a code snippet that specifies an entrypoint for the `resnet18` model, expanding the implementation in `pytorch/vision/hubconf.py`. In most cases importing the right function in `hubconf.py` is sufficient; here we just use the expanded version as an example to show how it works. You can see the full script in the [pytorch/vision repo](https://github.com/pytorch/vision/blob/master/hubconf.py) ``` dependencies = ['torch'] from torchvision.models.resnet import resnet18 as _resnet18 # resnet18 is the name of the entrypoint def resnet18(pretrained=False, **kwargs): """ # This docstring shows up in hub.help() Resnet18 model pretrained (bool): kwargs, load pretrained weights into the model """ # Call the model, load pretrained weights model = _resnet18(pretrained=pretrained, **kwargs) return model ``` * The `dependencies` variable is a **list** of package names required to **load** the model. Note this might be slightly different from the dependencies required for training a model. * `args` and `kwargs` are passed along to the real callable function. * The docstring of the function works as a help message. It explains what the model does and what positional/keyword arguments are allowed. It’s highly recommended to add a few examples here. * An entrypoint function can either return a model (`nn.Module`) or auxiliary tools to make the user workflow smoother, e.g. tokenizers. * Callables prefixed with an underscore are considered helper functions and won’t show up in [`torch.hub.list()`](#torch.hub.list "torch.hub.list"). * Pretrained weights can either be stored locally in the GitHub repo, or be loadable by [`torch.hub.load_state_dict_from_url()`](#torch.hub.load_state_dict_from_url "torch.hub.load_state_dict_from_url"). If the weights are smaller than 2GB, it’s recommended to attach them to a [project release](https://help.github.com/en/articles/distributing-large-binaries) and use the URL from the release. In the example above `torchvision.models.resnet.resnet18` handles `pretrained`; alternatively, you can put the following logic in the entrypoint definition. ``` if pretrained: # For checkpoint saved in local github repo, e.g. <RELATIVE_PATH_TO_CHECKPOINT>=weights/save.pth dirname = os.path.dirname(__file__) checkpoint = os.path.join(dirname, <RELATIVE_PATH_TO_CHECKPOINT>) state_dict = torch.load(checkpoint) model.load_state_dict(state_dict) # For checkpoint saved elsewhere checkpoint = 'https://download.pytorch.org/models/resnet18-5c106cde.pth' model.load_state_dict(torch.hub.load_state_dict_from_url(checkpoint, progress=False)) ``` ### Important Notice * The published models should be at least in a branch or tag; they can’t be a random commit.
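To make the branch/tag requirement concrete, consumers typically pin the published tag directly in the string passed to [`torch.hub.load()`](#torch.hub.load "torch.hub.load"). A hedged sketch; the tag name `v0.9.0` is an assumed example, not taken from this page:

```
>>> # pinning an explicit tag keeps the published entrypoint reproducible across releases
>>> model = torch.hub.load('pytorch/vision:v0.9.0', 'resnet18', pretrained=True)
```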
Loading models from Hub ----------------------- Pytorch Hub provides convenient APIs to explore all available models in hub through [`torch.hub.list()`](#torch.hub.list "torch.hub.list"), show docstring and examples through [`torch.hub.help()`](#torch.hub.help "torch.hub.help") and load the pre-trained models using [`torch.hub.load()`](#torch.hub.load "torch.hub.load"). `torch.hub.list(github, force_reload=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/hub.html#list) List all entrypoints available in `github` hubconf. Parameters * **github** (*string*) – a string with format “repo\_owner/repo\_name[:tag\_name]” with an optional tag/branch. The default branch is `master` if not specified. Example: ‘pytorch/vision[:hub]’ * **force\_reload** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to discard the existing cache and force a fresh download. Default is `False`. Returns a list of available entrypoint names Return type entrypoints #### Example ``` >>> entrypoints = torch.hub.list('pytorch/vision', force_reload=True) ``` `torch.hub.help(github, model, force_reload=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/hub.html#help) Show the docstring of entrypoint `model`. Parameters * **github** (*string*) – a string with format “repo\_owner/repo\_name[:tag\_name]” with an optional tag/branch. The default branch is `master` if not specified. Example: ‘pytorch/vision[:hub]’ * **model** (*string*) – a string of entrypoint name defined in repo’s hubconf.py * **force\_reload** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to discard the existing cache and force a fresh download. Default is `False`. #### Example ``` >>> print(torch.hub.help('pytorch/vision', 'resnet18', force_reload=True)) ``` `torch.hub.load(repo_or_dir, model, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/hub.html#load) Load a model from a GitHub repo or a local directory. Note: Loading a model is the typical use case, but this can also be used for loading other objects such as tokenizers, loss functions, etc. If `source` is `'github'`, `repo_or_dir` is expected to be of the form `repo_owner/repo_name[:tag_name]` with an optional tag/branch. If `source` is `'local'`, `repo_or_dir` is expected to be a path to a local directory. Parameters * **repo\_or\_dir** (*string*) – repo name (`repo_owner/repo_name[:tag_name]`), if `source = 'github'`; or a path to a local directory, if `source = 'local'`. * **model** (*string*) – the name of a callable (entrypoint) defined in the repo/dir’s `hubconf.py`. * **\*args** (*optional*) – the corresponding args for callable `model`. * **source** (*string**,* *optional*) – `'github'` | `'local'`. Specifies how `repo_or_dir` is to be interpreted. Default is `'github'`. * **force\_reload** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to force a fresh download of the github repo unconditionally. Does not have any effect if `source = 'local'`. Default is `False`. * **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `False`, mute messages about hitting local caches. Note that the message about first download cannot be muted. Does not have any effect if `source = 'local'`. Default is `True`. * **\*\*kwargs** (*optional*) – the corresponding kwargs for callable `model`.
Returns The output of the `model` callable when called with the given `*args` and `**kwargs`. #### Example ``` >>> # from a github repo >>> repo = 'pytorch/vision' >>> model = torch.hub.load(repo, 'resnet50', pretrained=True) >>> # from a local directory >>> path = '/some/local/path/pytorch/vision' >>> model = torch.hub.load(path, 'resnet50', pretrained=True) ``` `torch.hub.download_url_to_file(url, dst, hash_prefix=None, progress=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/hub.html#download_url_to_file) Download object at the given URL to a local path. Parameters * **url** (*string*) – URL of the object to download * **dst** (*string*) – Full path where object will be saved, e.g. `/tmp/temporary_file` * **hash\_prefix** (*string**,* *optional*) – If not None, the SHA256 hash of the downloaded file should start with `hash_prefix`. Default: None * **progress** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether or not to display a progress bar to stderr. Default: True #### Example ``` >>> torch.hub.download_url_to_file('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth', '/tmp/temporary_file') ``` `torch.hub.load_state_dict_from_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/hub.html#load_state_dict_from_url) Loads the Torch serialized object at the given URL. If the downloaded file is a zip file, it will be automatically decompressed. If the object is already present in `model_dir`, it’s deserialized and returned. The default value of `model_dir` is `<hub_dir>/checkpoints` where `hub_dir` is the directory returned by [`get_dir()`](#torch.hub.get_dir "torch.hub.get_dir"). Parameters * **url** (*string*) – URL of the object to download * **model\_dir** (*string**,* *optional*) – directory in which to save the object * **map\_location** (*optional*) – a function or a dict specifying how to remap storage locations (see torch.load) * **progress** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether or not to display a progress bar to stderr. Default: True * **check\_hash** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If True, the filename part of the URL should follow the naming convention `filename-<sha256>.ext` where `<sha256>` is the first eight or more digits of the SHA256 hash of the contents of the file. The hash is used to ensure unique names and to verify the contents of the file. Default: False * **file\_name** (*string**,* *optional*) – name for the downloaded file. Filename from `url` will be used if not set. #### Example ``` >>> state_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth') ``` ### Running a loaded model: Note that `*args` and `**kwargs` in [`torch.hub.load()`](#torch.hub.load "torch.hub.load") are used to **instantiate** a model. After you have loaded a model, how can you find out what you can do with the model? A suggested workflow is * `dir(model)` to see all available methods of the model. * `help(model.foo)` to check what arguments `model.foo` takes to run To help users explore without referring to documentation back and forth, we strongly recommend repo owners make function help messages clear and succinct. It’s also helpful to include a minimal working example.
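A minimal sketch of that workflow, assuming the `pytorch/vision` `resnet18` entrypoint used throughout this page:

```
>>> model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
>>> # discover the public methods the loaded model exposes
>>> [name for name in dir(model) if not name.startswith('_')]
>>> # inspect what arguments a particular method takes
>>> help(model.forward)
```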
### Where are my downloaded models saved? The locations are used in the order of * Calling `hub.set_dir(<PATH_TO_HUB_DIR>)` * `$TORCH_HOME/hub`, if environment variable `TORCH_HOME` is set. * `$XDG_CACHE_HOME/torch/hub`, if environment variable `XDG_CACHE_HOME` is set. * `~/.cache/torch/hub` `torch.hub.get_dir()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/hub.html#get_dir) Get the Torch Hub cache directory used for storing downloaded models & weights. If [`set_dir()`](#torch.hub.set_dir "torch.hub.set_dir") is not called, default path is `$TORCH_HOME/hub` where environment variable `$TORCH_HOME` defaults to `$XDG_CACHE_HOME/torch`. `$XDG_CACHE_HOME` follows the X Desktop Group (XDG) specification of the Linux filesystem layout, with a default value `~/.cache` if the environment variable is not set. `torch.hub.set_dir(d)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/hub.html#set_dir) Optionally set the Torch Hub directory used to save downloaded models & weights. Parameters **d** (*string*) – path to a local folder to save downloaded models & weights. ### Caching logic By default, we don’t clean up files after loading them. Hub uses the cache by default if it already exists in the directory returned by [`get_dir()`](#torch.hub.get_dir "torch.hub.get_dir"). Users can force a reload by calling `hub.load(..., force_reload=True)`. This will delete the existing github folder and downloaded weights and start a fresh download. This is useful when updates are published to the same branch, so users can keep up with the latest release. ### Known limitations: Torch hub works by importing the package as if it were installed. There are some side effects introduced by importing in Python. For example, you can see new items in the Python caches `sys.modules` and `sys.path_importer_cache`, which is normal Python behavior. A known limitation worth mentioning here: users **CANNOT** load two different branches of the same repo in the **same Python process**. It’s just like installing two packages with the same name in Python, which is not good. The cache might also give you surprises if you actually try that. Of course, it’s totally fine to load them in separate processes. pytorch torch.overrides torch.overrides =============== This module exposes various helper functions for the `__torch_function__` protocol. See [Extending torch](https://pytorch.org/docs/1.8.0/notes/extending.html#extending-torch) for more detail on the `__torch_function__` protocol. Functions --------- `torch.overrides.get_ignored_functions()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/overrides.html#get_ignored_functions) Return public functions that cannot be overridden by `__torch_function__`. Returns A tuple of functions that are publicly available in the torch API but cannot be overridden with `__torch_function__`. Mostly this is because none of the arguments of these functions are tensors or tensor-likes. Return type Set[Callable] #### Examples ``` >>> torch.Tensor.as_subclass in torch.overrides.get_ignored_functions() True >>> torch.add in torch.overrides.get_ignored_functions() False ``` `torch.overrides.get_overridable_functions()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/overrides.html#get_overridable_functions) List functions that are overridable via \_\_torch\_function\_\_ Returns A dictionary that maps namespaces that contain overridable functions to functions in that namespace that can be overridden.
Return type Dict[Any, List[Callable]] `torch.overrides.get_testing_overrides()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/overrides.html#get_testing_overrides) Return a dict containing dummy overrides for all overridable functions Returns A dictionary that maps overridable functions in the PyTorch API to lambda functions that have the same signature as the real function and unconditionally return -1. These lambda functions are useful for testing API coverage for a type that defines `__torch_function__`. Return type Dict[Callable, Callable] #### Examples ``` >>> import inspect >>> my_add = torch.overrides.get_testing_overrides()[torch.add] >>> inspect.signature(my_add) <Signature (input, other, out=None)> ``` `torch.overrides.handle_torch_function(public_api, relevant_args, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/overrides.html#handle_torch_function) Implement a function with checks for `__torch_function__` overrides. See torch::autograd::handle\_torch\_function for the equivalent of this function in the C++ implementation. Parameters * **public\_api** (*function*) – Function exposed by the public torch API originally called like `public_api(*args, **kwargs)` on which arguments are now being checked. * **relevant\_args** (*iterable*) – Iterable of arguments to check for \_\_torch\_function\_\_ methods. * **args** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Arbitrary positional arguments originally passed into `public_api`. * **kwargs** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Arbitrary keyword arguments originally passed into `public_api`. Returns Result from calling `implementation` or an `__torch_function__` method, as appropriate. Return type [object](https://docs.python.org/3/library/functions.html#object "(in Python v3.9)") Raises **TypeError** – if no implementation is found. #### Example ``` >>> def func(a): ... if type(a) is not torch.Tensor: # This will make func dispatchable by __torch_function__ ... return handle_torch_function(func, (a,), a) ... return a + 0 ``` `torch.overrides.has_torch_function()` Check for \_\_torch\_function\_\_ implementations in the elements of an iterable. Considers exact `Tensor`s and `Parameter`s non-dispatchable. Parameters **relevant\_args** (*iterable*) – Iterable of arguments to check for \_\_torch\_function\_\_ methods. Returns True if any of the elements of relevant\_args have \_\_torch\_function\_\_ implementations, False otherwise. Return type [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)") See also `torch.is_tensor_like()` Checks if something is a Tensor-like, including an exact `Tensor`. `torch.overrides.is_tensor_like(inp)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/overrides.html#is_tensor_like) Returns `True` if the passed-in input is a Tensor-like. Currently, this occurs whenever there’s a `__torch_function__` attribute on the type of the input. #### Examples A subclass of tensor is generally a Tensor-like. ``` >>> class SubTensor(torch.Tensor): ... >>> is_tensor_like(SubTensor([0])) True ``` Built-in or user types aren’t usually Tensor-like. ``` >>> is_tensor_like(6) False >>> is_tensor_like(None) False >>> class NotATensor: ... >>> is_tensor_like(NotATensor()) False ``` But, they can be made Tensor-like by implementing \_\_torch\_function\_\_. ``` >>> class TensorLike: ... def __torch_function__(self, func, types, args, kwargs): ...
return -1 >>> is_tensor_like(TensorLike()) True ``` `torch.overrides.is_tensor_method_or_property(func)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/overrides.html#is_tensor_method_or_property) Returns True if the function passed in is a handler for a method or property belonging to `torch.Tensor`, as passed into `__torch_function__`. Note For properties, their `__get__` method must be passed in. This may be needed, in particular, for the following reasons: 1. Methods/properties sometimes don’t contain a `__module__` slot. 2. They require that the first passed-in argument is an instance of `torch.Tensor`. #### Examples ``` >>> is_tensor_method_or_property(torch.Tensor.add) True >>> is_tensor_method_or_property(torch.add) False ``` `torch.overrides.wrap_torch_function(dispatcher)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/overrides.html#wrap_torch_function) Wraps a given function with `__torch_function__`-related functionality. Parameters **dispatcher** (*Callable*) – A callable that returns an iterable of Tensor-likes passed into the function. Note This decorator may reduce the performance of your code. Generally, it’s enough to express your code as a series of functions that, themselves, support \_\_torch\_function\_\_. If you find yourself in the rare situation where this is not the case, e.g. if you’re wrapping a low-level library and you also need it to work for Tensor-likes, then this function is available. #### Examples ``` >>> def dispatcher(a): # Must have the same signature as func ... return (a,) >>> @torch.overrides.wrap_torch_function(dispatcher) ... def func(a): # This will make func dispatchable by __torch_function__ ... return a + 0 ```
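Putting the pieces above together, here is a hedged sketch of the full dispatch path: the decorated function routes a Tensor-like argument to its `__torch_function__` implementation, while exact tensors take the normal path. The `TensorLike` class and the `-1` return value mirror the examples earlier on this page:

```
>>> import torch
>>> def dispatcher(a):
...     return (a,)
>>> @torch.overrides.wrap_torch_function(dispatcher)
... def func(a):
...     return a + 0
>>> class TensorLike:
...     def __torch_function__(self, func, types, args=(), kwargs=None):
...         return -1
>>> func(torch.tensor(1))  # exact tensors are non-dispatchable; normal path
tensor(1)
>>> func(TensorLike())     # dispatched to TensorLike.__torch_function__
-1
```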
pytorch torch.fft

torch.fft
=========

Discrete Fourier transforms and related functions.

Fast Fourier Transforms
-----------------------

`torch.fft.fft(input, n=None, dim=-1, norm=None) → Tensor`

Computes the one dimensional discrete Fourier transform of `input`.

Note

The Fourier domain representation of any real signal satisfies the Hermitian property: `X[i] = conj(X[-i])`. This function always returns both the positive and negative frequency terms even though, for real inputs, the negative frequencies are redundant. [`rfft()`](#torch.fft.rfft "torch.fft.rfft") returns the more compact one-sided representation where only the positive frequencies are returned.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Signal length. If given, the input will either be zero-padded or trimmed to this length before computing the FFT.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The dimension along which to take the one dimensional FFT.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the forward transform ([`fft()`](#torch.fft.fft "torch.fft.fft")), these correspond to:
  + `"forward"` - normalize by `1/n`
  + `"backward"` - no normalization
  + `"ortho"` - normalize by `1/sqrt(n)` (making the FFT orthonormal)

  Calling the backward transform ([`ifft()`](#torch.fft.ifft "torch.fft.ifft")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`ifft()`](#torch.fft.ifft "torch.fft.ifft") the exact inverse. Default is `"backward"` (no normalization).

#### Example

```
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])
```

```
>>> t = torch.tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])
>>> torch.fft.fft(t)
tensor([12.+16.j, -8.+0.j, -4.-4.j,  0.-8.j])
```

`torch.fft.ifft(input, n=None, dim=-1, norm=None) → Tensor`

Computes the one dimensional inverse discrete Fourier transform of `input`.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Signal length. If given, the input will either be zero-padded or trimmed to this length before computing the IFFT.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The dimension along which to take the one dimensional IFFT.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the backward transform ([`ifft()`](#torch.fft.ifft "torch.fft.ifft")), these correspond to:
  + `"forward"` - no normalization
  + `"backward"` - normalize by `1/n`
  + `"ortho"` - normalize by `1/sqrt(n)` (making the IFFT orthonormal)

  Calling the forward transform ([`fft()`](#torch.fft.fft "torch.fft.fft")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`ifft()`](#torch.fft.ifft "torch.fft.ifft") the exact inverse. Default is `"backward"` (normalize by `1/n`).
#### Example

```
>>> t = torch.tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])
>>> torch.fft.ifft(t)
tensor([0.+0.j, 1.+0.j, 2.+0.j, 3.+0.j])
```

`torch.fft.fft2(input, s=None, dim=(-2, -1), norm=None) → Tensor`

Computes the 2 dimensional discrete Fourier transform of `input`. Equivalent to [`fftn()`](#torch.fft.fftn "torch.fft.fftn") but FFTs only the last two dimensions by default.

Note

The Fourier domain representation of any real signal satisfies the Hermitian property: `X[i, j] = conj(X[-i, -j])`. This function always returns all positive and negative frequency terms even though, for real inputs, half of these values are redundant. [`rfft2()`](#torch.fft.rfft2 "torch.fft.rfft2") returns the more compact one-sided representation where only the positive frequencies of the last dimension are returned.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **s** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the FFT. If a length `-1` is specified, no padding is done in that dimension. Default: `s = [input.size(d) for d in dim]`
* **dim** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Dimensions to be transformed. Default: last two dimensions.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the forward transform ([`fft2()`](#torch.fft.fft2 "torch.fft.fft2")), these correspond to:
  + `"forward"` - normalize by `1/n`
  + `"backward"` - no normalization
  + `"ortho"` - normalize by `1/sqrt(n)` (making the FFT orthonormal)

  Where `n = prod(s)` is the logical FFT size. Calling the backward transform ([`ifft2()`](#torch.fft.ifft2 "torch.fft.ifft2")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`ifft2()`](#torch.fft.ifft2 "torch.fft.ifft2") the exact inverse. Default is `"backward"` (no normalization).

#### Example

```
>>> x = torch.rand(10, 10, dtype=torch.complex64)
>>> fft2 = torch.fft.fft2(x)
```

The discrete Fourier transform is separable, so [`fft2()`](#torch.fft.fft2 "torch.fft.fft2") here is equivalent to two one-dimensional [`fft()`](#torch.fft.fft "torch.fft.fft") calls:

```
>>> two_ffts = torch.fft.fft(torch.fft.fft(x, dim=0), dim=1)
>>> torch.allclose(fft2, two_ffts)
True
```

`torch.fft.ifft2(input, s=None, dim=(-2, -1), norm=None) → Tensor`

Computes the 2 dimensional inverse discrete Fourier transform of `input`. Equivalent to [`ifftn()`](#torch.fft.ifftn "torch.fft.ifftn") but IFFTs only the last two dimensions by default.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **s** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the IFFT. If a length `-1` is specified, no padding is done in that dimension. Default: `s = [input.size(d) for d in dim]`
* **dim** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Dimensions to be transformed. Default: last two dimensions.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the backward transform ([`ifft2()`](#torch.fft.ifft2 "torch.fft.ifft2")), these correspond to:
  + `"forward"` - no normalization
  + `"backward"` - normalize by `1/n`
  + `"ortho"` - normalize by `1/sqrt(n)` (making the IFFT orthonormal)

  Where `n = prod(s)` is the logical IFFT size. Calling the forward transform ([`fft2()`](#torch.fft.fft2 "torch.fft.fft2")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`ifft2()`](#torch.fft.ifft2 "torch.fft.ifft2") the exact inverse. Default is `"backward"` (normalize by `1/n`).

#### Example

```
>>> x = torch.rand(10, 10, dtype=torch.complex64)
>>> ifft2 = torch.fft.ifft2(x)
```

The discrete Fourier transform is separable, so [`ifft2()`](#torch.fft.ifft2 "torch.fft.ifft2") here is equivalent to two one-dimensional [`ifft()`](#torch.fft.ifft "torch.fft.ifft") calls:

```
>>> two_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)
>>> torch.allclose(ifft2, two_iffts)
True
```

`torch.fft.fftn(input, s=None, dim=None, norm=None) → Tensor`

Computes the N dimensional discrete Fourier transform of `input`.

Note

The Fourier domain representation of any real signal satisfies the Hermitian property: `X[i_1, ..., i_n] = conj(X[-i_1, ..., -i_n])`. This function always returns all positive and negative frequency terms even though, for real inputs, half of these values are redundant. [`rfftn()`](#torch.fft.rfftn "torch.fft.rfftn") returns the more compact one-sided representation where only the positive frequencies of the last dimension are returned.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **s** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the FFT. If a length `-1` is specified, no padding is done in that dimension. Default: `s = [input.size(d) for d in dim]`
* **dim** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Dimensions to be transformed. Default: all dimensions, or the last `len(s)` dimensions if `s` is given.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the forward transform ([`fftn()`](#torch.fft.fftn "torch.fft.fftn")), these correspond to:
  + `"forward"` - normalize by `1/n`
  + `"backward"` - no normalization
  + `"ortho"` - normalize by `1/sqrt(n)` (making the FFT orthonormal)

  Where `n = prod(s)` is the logical FFT size. Calling the backward transform ([`ifftn()`](#torch.fft.ifftn "torch.fft.ifftn")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`ifftn()`](#torch.fft.ifftn "torch.fft.ifftn") the exact inverse. Default is `"backward"` (no normalization).
#### Example

```
>>> x = torch.rand(10, 10, dtype=torch.complex64)
>>> fftn = torch.fft.fftn(x)
```

The discrete Fourier transform is separable, so [`fftn()`](#torch.fft.fftn "torch.fft.fftn") here is equivalent to two one-dimensional [`fft()`](#torch.fft.fft "torch.fft.fft") calls:

```
>>> two_ffts = torch.fft.fft(torch.fft.fft(x, dim=0), dim=1)
>>> torch.allclose(fftn, two_ffts)
True
```

`torch.fft.ifftn(input, s=None, dim=None, norm=None) → Tensor`

Computes the N dimensional inverse discrete Fourier transform of `input`.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **s** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the IFFT. If a length `-1` is specified, no padding is done in that dimension. Default: `s = [input.size(d) for d in dim]`
* **dim** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Dimensions to be transformed. Default: all dimensions, or the last `len(s)` dimensions if `s` is given.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the backward transform ([`ifftn()`](#torch.fft.ifftn "torch.fft.ifftn")), these correspond to:
  + `"forward"` - no normalization
  + `"backward"` - normalize by `1/n`
  + `"ortho"` - normalize by `1/sqrt(n)` (making the IFFT orthonormal)

  Where `n = prod(s)` is the logical IFFT size. Calling the forward transform ([`fftn()`](#torch.fft.fftn "torch.fft.fftn")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`ifftn()`](#torch.fft.ifftn "torch.fft.ifftn") the exact inverse. Default is `"backward"` (normalize by `1/n`).

#### Example

```
>>> x = torch.rand(10, 10, dtype=torch.complex64)
>>> ifftn = torch.fft.ifftn(x)
```

The discrete Fourier transform is separable, so [`ifftn()`](#torch.fft.ifftn "torch.fft.ifftn") here is equivalent to two one-dimensional [`ifft()`](#torch.fft.ifft "torch.fft.ifft") calls:

```
>>> two_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)
>>> torch.allclose(ifftn, two_iffts)
True
```

`torch.fft.rfft(input, n=None, dim=-1, norm=None) → Tensor`

Computes the one dimensional Fourier transform of real-valued `input`.

The FFT of a real signal is Hermitian-symmetric, `X[i] = conj(X[-i])`, so the output contains only the positive frequencies below the Nyquist frequency. To compute the full output, use [`fft()`](#torch.fft.fft "torch.fft.fft").

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the real input tensor
* **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Signal length. If given, the input will either be zero-padded or trimmed to this length before computing the real FFT.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The dimension along which to take the one dimensional real FFT.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode.
  For the forward transform ([`rfft()`](#torch.fft.rfft "torch.fft.rfft")), these correspond to:
  + `"forward"` - normalize by `1/n`
  + `"backward"` - no normalization
  + `"ortho"` - normalize by `1/sqrt(n)` (making the FFT orthonormal)

  Calling the backward transform ([`irfft()`](#torch.fft.irfft "torch.fft.irfft")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`irfft()`](#torch.fft.irfft "torch.fft.irfft") the exact inverse. Default is `"backward"` (no normalization).

#### Example

```
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.rfft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j])
```

Compare against the full output from [`fft()`](#torch.fft.fft "torch.fft.fft"):

```
>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])
```

Notice that the symmetric element `T[-1] == T[1].conj()` is omitted. At the Nyquist frequency, `T[-2] == T[2]` is its own symmetric pair, and therefore must always be real-valued.

`torch.fft.irfft(input, n=None, dim=-1, norm=None) → Tensor`

Computes the inverse of [`rfft()`](#torch.fft.rfft "torch.fft.rfft").

`input` is interpreted as a one-sided Hermitian signal in the Fourier domain, as produced by [`rfft()`](#torch.fft.rfft "torch.fft.rfft"). By the Hermitian property, the output will be real-valued.

Note

Some input frequencies must be real-valued to satisfy the Hermitian property. In these cases the imaginary component will be ignored. For example, any imaginary component in the zero-frequency term cannot be represented in a real output and so will always be ignored.

Note

The correct interpretation of the Hermitian input depends on the length of the original data, as given by `n`. This is because each input shape could correspond to either an odd or even length signal. By default, the signal is assumed to be even length and odd signals will not round-trip properly. So, it is recommended to always pass the signal length `n`.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor representing a half-Hermitian signal
* **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Output signal length. This determines the length of the output signal. If given, the input will either be zero-padded or trimmed to this length before computing the real IFFT. Defaults to even output: `n=2*(input.size(dim) - 1)`.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The dimension along which to take the one dimensional real IFFT.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the backward transform ([`irfft()`](#torch.fft.irfft "torch.fft.irfft")), these correspond to:
  + `"forward"` - no normalization
  + `"backward"` - normalize by `1/n`
  + `"ortho"` - normalize by `1/sqrt(n)` (making the real IFFT orthonormal)

  Calling the forward transform ([`rfft()`](#torch.fft.rfft "torch.fft.rfft")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`irfft()`](#torch.fft.irfft "torch.fft.irfft") the exact inverse. Default is `"backward"` (normalize by `1/n`).
#### Example

```
>>> t = torch.arange(5)
>>> t
tensor([0, 1, 2, 3, 4])
>>> T = torch.fft.rfft(t)
>>> T
tensor([10.0000+0.0000j, -2.5000+3.4410j, -2.5000+0.8123j])
```

Without specifying the output length to [`irfft()`](#torch.fft.irfft "torch.fft.irfft"), the output will not round-trip properly because the input is odd-length:

```
>>> torch.fft.irfft(T)
tensor([0.6250, 1.4045, 3.1250, 4.8455])
```

So, it is recommended to always pass the signal length `n`:

```
>>> torch.fft.irfft(T, t.numel())
tensor([0.0000, 1.0000, 2.0000, 3.0000, 4.0000])
```

`torch.fft.rfft2(input, s=None, dim=(-2, -1), norm=None) → Tensor`

Computes the 2-dimensional discrete Fourier transform of real `input`. Equivalent to [`rfftn()`](#torch.fft.rfftn "torch.fft.rfftn") but FFTs only the last two dimensions by default.

The FFT of a real signal is Hermitian-symmetric, `X[i, j] = conj(X[-i, -j])`, so the full [`fft2()`](#torch.fft.fft2 "torch.fft.fft2") output contains redundant information. [`rfft2()`](#torch.fft.rfft2 "torch.fft.rfft2") instead omits the negative frequencies in the last dimension.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **s** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the real FFT. If a length `-1` is specified, no padding is done in that dimension. Default: `s = [input.size(d) for d in dim]`
* **dim** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Dimensions to be transformed. Default: last two dimensions.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the forward transform ([`rfft2()`](#torch.fft.rfft2 "torch.fft.rfft2")), these correspond to:
  + `"forward"` - normalize by `1/n`
  + `"backward"` - no normalization
  + `"ortho"` - normalize by `1/sqrt(n)` (making the real FFT orthonormal)

  Where `n = prod(s)` is the logical FFT size. Calling the backward transform ([`irfft2()`](#torch.fft.irfft2 "torch.fft.irfft2")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`irfft2()`](#torch.fft.irfft2 "torch.fft.irfft2") the exact inverse. Default is `"backward"` (no normalization).

#### Example

```
>>> t = torch.rand(10, 10)
>>> rfft2 = torch.fft.rfft2(t)
>>> rfft2.size()
torch.Size([10, 6])
```

Compared against the full output from [`fft2()`](#torch.fft.fft2 "torch.fft.fft2"), we have all elements up to the Nyquist frequency.

```
>>> fft2 = torch.fft.fft2(t)
>>> torch.allclose(fft2[..., :6], rfft2)
True
```

The discrete Fourier transform is separable, so [`rfft2()`](#torch.fft.rfft2 "torch.fft.rfft2") here is equivalent to a combination of [`fft()`](#torch.fft.fft "torch.fft.fft") and [`rfft()`](#torch.fft.rfft "torch.fft.rfft"):

```
>>> two_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)
>>> torch.allclose(rfft2, two_ffts)
True
```

`torch.fft.irfft2(input, s=None, dim=(-2, -1), norm=None) → Tensor`

Computes the inverse of [`rfft2()`](#torch.fft.rfft2 "torch.fft.rfft2"). Equivalent to [`irfftn()`](#torch.fft.irfftn "torch.fft.irfftn") but IFFTs only the last two dimensions by default.
`input` is interpreted as a one-sided Hermitian signal in the Fourier domain, as produced by [`rfft2()`](#torch.fft.rfft2 "torch.fft.rfft2"). By the Hermitian property, the output will be real-valued. Note Some input frequencies must be real-valued to satisfy the Hermitian property. In these cases the imaginary component will be ignored. For example, any imaginary component in the zero-frequency term cannot be represented in a real output and so will always be ignored. Note The correct interpretation of the Hermitian input depends on the length of the original data, as given by `s`. This is because each input shape could correspond to either an odd or even length signal. By default, the signal is assumed to be even length and odd signals will not round-trip properly. So, it is recommended to always pass the signal shape `s`. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor * **s** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the real FFT. If a length `-1` is specified, no padding is done in that dimension. Defaults to even output in the last dimension: `s[-1] = 2*(input.size(dim[-1]) - 1)`. * **dim** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Dimensions to be transformed. The last dimension must be the half-Hermitian compressed dimension. Default: last two dimensions. * **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the backward transform ([`irfft2()`](#torch.fft.irfft2 "torch.fft.irfft2")), these correspond to: + `"forward"` - no normalization + `"backward"` - normalize by `1/n` + `"ortho"` - normalize by `1/sqrt(n)` (making the real IFFT orthonormal)Where `n = prod(s)` is the logical IFFT size. Calling the forward transform ([`rfft2()`](#torch.fft.rfft2 "torch.fft.rfft2")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`irfft2()`](#torch.fft.irfft2 "torch.fft.irfft2") the exact inverse. Default is `"backward"` (normalize by `1/n`). #### Example ``` >>> t = torch.rand(10, 9) >>> T = torch.fft.rfft2(t) ``` Without specifying the output length to [`irfft2()`](#torch.fft.irfft2 "torch.fft.irfft2"), the output will not round-trip properly because the input is odd-length in the last dimension: ``` >>> torch.fft.irfft2(T).size() torch.Size([10, 10]) ``` So, it is recommended to always pass the signal shape `s`. ``` >>> roundtrip = torch.fft.irfft2(T, t.size()) >>> roundtrip.size() torch.Size([10, 9]) >>> torch.allclose(roundtrip, t) True ``` `torch.fft.rfftn(input, s=None, dim=None, norm=None) → Tensor` Computes the N-dimensional discrete Fourier transform of real `input`. The FFT of a real signal is Hermitian-symmetric, `X[i_1, ..., i_n] = conj(X[-i_1, ..., -i_n])` so the full [`fftn()`](#torch.fft.fftn "torch.fft.fftn") output contains redundant information. [`rfftn()`](#torch.fft.rfftn "torch.fft.rfftn") instead omits the negative frequencies in the last dimension. 
Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **s** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the real FFT. If a length `-1` is specified, no padding is done in that dimension. Default: `s = [input.size(d) for d in dim]`
* **dim** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Dimensions to be transformed. Default: all dimensions, or the last `len(s)` dimensions if `s` is given.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the forward transform ([`rfftn()`](#torch.fft.rfftn "torch.fft.rfftn")), these correspond to:
  + `"forward"` - normalize by `1/n`
  + `"backward"` - no normalization
  + `"ortho"` - normalize by `1/sqrt(n)` (making the real FFT orthonormal)

  Where `n = prod(s)` is the logical FFT size. Calling the backward transform ([`irfftn()`](#torch.fft.irfftn "torch.fft.irfftn")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`irfftn()`](#torch.fft.irfftn "torch.fft.irfftn") the exact inverse. Default is `"backward"` (no normalization).

#### Example

```
>>> t = torch.rand(10, 10)
>>> rfftn = torch.fft.rfftn(t)
>>> rfftn.size()
torch.Size([10, 6])
```

Compared against the full output from [`fftn()`](#torch.fft.fftn "torch.fft.fftn"), we have all elements up to the Nyquist frequency.

```
>>> fftn = torch.fft.fftn(t)
>>> torch.allclose(fftn[..., :6], rfftn)
True
```

The discrete Fourier transform is separable, so [`rfftn()`](#torch.fft.rfftn "torch.fft.rfftn") here is equivalent to a combination of [`fft()`](#torch.fft.fft "torch.fft.fft") and [`rfft()`](#torch.fft.rfft "torch.fft.rfft"):

```
>>> two_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)
>>> torch.allclose(rfftn, two_ffts)
True
```

`torch.fft.irfftn(input, s=None, dim=None, norm=None) → Tensor`

Computes the inverse of [`rfftn()`](#torch.fft.rfftn "torch.fft.rfftn").

`input` is interpreted as a one-sided Hermitian signal in the Fourier domain, as produced by [`rfftn()`](#torch.fft.rfftn "torch.fft.rfftn"). By the Hermitian property, the output will be real-valued.

Note

Some input frequencies must be real-valued to satisfy the Hermitian property. In these cases the imaginary component will be ignored. For example, any imaginary component in the zero-frequency term cannot be represented in a real output and so will always be ignored.

Note

The correct interpretation of the Hermitian input depends on the length of the original data, as given by `s`. This is because each input shape could correspond to either an odd or even length signal. By default, the signal is assumed to be even length and odd signals will not round-trip properly. So, it is recommended to always pass the signal shape `s`.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor
* **s** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the real FFT. If a length `-1` is specified, no padding is done in that dimension.
  Defaults to even output in the last dimension: `s[-1] = 2*(input.size(dim[-1]) - 1)`.

* **dim** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – Dimensions to be transformed. The last dimension must be the half-Hermitian compressed dimension. Default: all dimensions, or the last `len(s)` dimensions if `s` is given.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the backward transform ([`irfftn()`](#torch.fft.irfftn "torch.fft.irfftn")), these correspond to:
  + `"forward"` - no normalization
  + `"backward"` - normalize by `1/n`
  + `"ortho"` - normalize by `1/sqrt(n)` (making the real IFFT orthonormal)

  Where `n = prod(s)` is the logical IFFT size. Calling the forward transform ([`rfftn()`](#torch.fft.rfftn "torch.fft.rfftn")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`irfftn()`](#torch.fft.irfftn "torch.fft.irfftn") the exact inverse. Default is `"backward"` (normalize by `1/n`).

#### Example

```
>>> t = torch.rand(10, 9)
>>> T = torch.fft.rfftn(t)
```

Without specifying the output length to [`irfftn()`](#torch.fft.irfftn "torch.fft.irfftn"), the output will not round-trip properly because the input is odd-length in the last dimension:

```
>>> torch.fft.irfftn(T).size()
torch.Size([10, 10])
```

So, it is recommended to always pass the signal shape `s`.

```
>>> roundtrip = torch.fft.irfftn(T, t.size())
>>> roundtrip.size()
torch.Size([10, 9])
>>> torch.allclose(roundtrip, t)
True
```

`torch.fft.hfft(input, n=None, dim=-1, norm=None) → Tensor`

Computes the one dimensional discrete Fourier transform of a Hermitian symmetric `input` signal.

Note

[`hfft()`](#torch.fft.hfft "torch.fft.hfft")/[`ihfft()`](#torch.fft.ihfft "torch.fft.ihfft") are analogous to [`rfft()`](#torch.fft.rfft "torch.fft.rfft")/[`irfft()`](#torch.fft.irfft "torch.fft.irfft"). The real FFT expects a real signal in the time-domain and gives Hermitian symmetry in the frequency-domain. The Hermitian FFT is the opposite: Hermitian symmetric in the time-domain and real-valued in the frequency-domain. For this reason, special care needs to be taken with the length argument `n`, in the same way as with [`irfft()`](#torch.fft.irfft "torch.fft.irfft").

Note

Because the signal is Hermitian in the time-domain, the result will be real in the frequency domain. Note that some input frequencies must be real-valued to satisfy the Hermitian property. In these cases the imaginary component will be ignored. For example, any imaginary component in `input[0]` would result in one or more complex frequency terms which cannot be represented in a real output and so will always be ignored.

Note

The correct interpretation of the Hermitian input depends on the length of the original data, as given by `n`. This is because each input shape could correspond to either an odd or even length signal. By default, the signal is assumed to be even length and odd signals will not round-trip properly. So, it is recommended to always pass the signal length `n`.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor representing a half-Hermitian signal
* **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Output signal length. This determines the length of the real output.
  If given, the input will either be zero-padded or trimmed to this length before computing the Hermitian FFT. Defaults to even output: `n=2*(input.size(dim) - 1)`.

* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The dimension along which to take the one dimensional Hermitian FFT.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the forward transform ([`hfft()`](#torch.fft.hfft "torch.fft.hfft")), these correspond to:
  + `"forward"` - normalize by `1/n`
  + `"backward"` - no normalization
  + `"ortho"` - normalize by `1/sqrt(n)` (making the Hermitian FFT orthonormal)

  Calling the backward transform ([`ihfft()`](#torch.fft.ihfft "torch.fft.ihfft")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`ihfft()`](#torch.fft.ihfft "torch.fft.ihfft") the exact inverse. Default is `"backward"` (no normalization).

#### Example

Taking a real-valued frequency signal and bringing it into the time domain gives Hermitian symmetric output:

```
>>> t = torch.arange(5)
>>> t
tensor([0, 1, 2, 3, 4])
>>> T = torch.fft.ifft(t)
>>> T
tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j, -0.5000+0.1625j,
        -0.5000+0.6882j])
```

Note that `T[1] == T[-1].conj()` and `T[2] == T[-2].conj()` are redundant. We can thus compute the forward transform without considering negative frequencies:

```
>>> torch.fft.hfft(T[:3], n=5)
tensor([0., 1., 2., 3., 4.])
```

As with [`irfft()`](#torch.fft.irfft "torch.fft.irfft"), the output length must be given in order to recover an odd-length output; otherwise an even length is assumed and the signal does not round-trip:

```
>>> torch.fft.hfft(T[:3])
tensor([0.5000, 1.1236, 2.5000, 3.8764])
```

`torch.fft.ihfft(input, n=None, dim=-1, norm=None) → Tensor`

Computes the inverse of [`hfft()`](#torch.fft.hfft "torch.fft.hfft").

`input` must be a real-valued signal, interpreted in the Fourier domain. The IFFT of a real signal is Hermitian-symmetric, `X[i] = conj(X[-i])`. [`ihfft()`](#torch.fft.ihfft "torch.fft.ihfft") represents this in the one-sided form where only the positive frequencies below the Nyquist frequency are included. To compute the full output, use [`ifft()`](#torch.fft.ifft "torch.fft.ifft").

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the real input tensor
* **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Signal length. If given, the input will either be zero-padded or trimmed to this length before computing the Hermitian IFFT.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The dimension along which to take the one dimensional Hermitian IFFT.
* **norm** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Normalization mode. For the backward transform ([`ihfft()`](#torch.fft.ihfft "torch.fft.ihfft")), these correspond to:
  + `"forward"` - no normalization
  + `"backward"` - normalize by `1/n`
  + `"ortho"` - normalize by `1/sqrt(n)` (making the IFFT orthonormal)

  Calling the forward transform ([`hfft()`](#torch.fft.hfft "torch.fft.hfft")) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make [`ihfft()`](#torch.fft.ihfft "torch.fft.ihfft") the exact inverse. Default is `"backward"` (normalize by `1/n`).
#### Example

```
>>> t = torch.arange(5)
>>> t
tensor([0, 1, 2, 3, 4])
>>> torch.fft.ihfft(t)
tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j])
```

Compare against the full output from [`ifft()`](#torch.fft.ifft "torch.fft.ifft"):

```
>>> torch.fft.ifft(t)
tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j, -0.5000+0.1625j,
        -0.5000+0.6882j])
```

Helper Functions
----------------

`torch.fft.fftfreq(n, d=1.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor`

Computes the discrete Fourier Transform sample frequencies for a signal of size `n`.

Note

By convention, [`fft()`](#torch.fft.fft "torch.fft.fft") returns positive frequency terms first, followed by the negative frequencies in reverse order, so that `f[-i]` for all `0 < i <= n/2` in Python gives the negative frequency terms. For an FFT of length `n` and with inputs spaced in length unit `d`, the frequencies are:

```
f = [0, 1, ..., (n - 1) // 2, -(n // 2), ..., -1] / (d * n)
```

Note

For even lengths, the Nyquist frequency at `f[n/2]` can be thought of as either negative or positive. [`fftfreq()`](#torch.fft.fftfreq "torch.fft.fftfreq") follows NumPy’s convention of taking it to be negative.

Parameters

* **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the FFT length
* **d** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – The sampling length scale. The spacing between individual samples of the FFT input. The default assumes unit spacing; dividing the result by the actual spacing gives the result in physical frequency units.

Keyword Arguments

* **dtype** (`torch.dtype`, optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](generated/torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")).
* **layout** (`torch.layout`, optional) – the desired layout of returned Tensor. Default: `torch.strided`.
* **device** (`torch.device`, optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](generated/torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
* **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`.

#### Example

```
>>> torch.fft.fftfreq(5)
tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])
```

For even input, we can see the Nyquist frequency at `f[2]` is given as negative:

```
>>> torch.fft.fftfreq(4)
tensor([ 0.0000, 0.2500, -0.5000, -0.2500])
```

`torch.fft.rfftfreq(n, d=1.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor`

Computes the sample frequencies for [`rfft()`](#torch.fft.rfft "torch.fft.rfft") with a signal of size `n`.

Note

[`rfft()`](#torch.fft.rfft "torch.fft.rfft") returns Hermitian one-sided output, so only the positive frequency terms are returned. For a real FFT of length `n` and with inputs spaced in length unit `d`, the frequencies are:

```
f = torch.arange(n // 2 + 1) / (d * n)
```

Note

For even lengths, the Nyquist frequency at `f[n/2]` can be thought of as either negative or positive.
Unlike [`fftfreq()`](#torch.fft.fftfreq "torch.fft.fftfreq"), [`rfftfreq()`](#torch.fft.rfftfreq "torch.fft.rfftfreq") always returns it as positive.

Parameters

* **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the real FFT length
* **d** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – The sampling length scale. The spacing between individual samples of the FFT input. The default assumes unit spacing; dividing the result by the actual spacing gives the result in physical frequency units.

Keyword Arguments

* **dtype** (`torch.dtype`, optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](generated/torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")).
* **layout** (`torch.layout`, optional) – the desired layout of returned Tensor. Default: `torch.strided`.
* **device** (`torch.device`, optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](generated/torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
* **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`.

#### Example

```
>>> torch.fft.rfftfreq(5)
tensor([ 0.0000, 0.2000, 0.4000])
```

```
>>> torch.fft.rfftfreq(4)
tensor([ 0.0000, 0.2500, 0.5000])
```

Compared to the output from [`fftfreq()`](#torch.fft.fftfreq "torch.fft.fftfreq"), we see that the Nyquist frequency at `f[2]` has changed sign:

```
>>> torch.fft.fftfreq(4)
tensor([ 0.0000, 0.2500, -0.5000, -0.2500])
```

`torch.fft.fftshift(input, dim=None) → Tensor`

Reorders n-dimensional FFT data, as provided by [`fftn()`](#torch.fft.fftn "torch.fft.fftn"), to have negative frequency terms first.

This performs a periodic shift of n-dimensional data such that the origin `(0, ..., 0)` is moved to the center of the tensor. Specifically, to `input.shape[dim] // 2` in each selected dimension.

Note

By convention, the FFT returns positive frequency terms first, followed by the negative frequencies in reverse order, so that `f[-i]` for all `0 < i <= n/2` in Python gives the negative frequency terms. [`fftshift()`](#torch.fft.fftshift "torch.fft.fftshift") rearranges all frequencies into ascending order from negative to positive with the zero-frequency term in the center.

Note

For even lengths, the Nyquist frequency at `f[n/2]` can be thought of as either negative or positive. [`fftshift()`](#torch.fft.fftshift "torch.fft.fftshift") always puts the Nyquist term at the 0-index. This is the same convention used by [`fftfreq()`](#torch.fft.fftfreq "torch.fft.fftfreq").

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the tensor in FFT order
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – The dimensions to rearrange. Only dimensions specified here will be rearranged, any other dimensions will be left in their original order. Default: All dimensions of `input`.
#### Example

```
>>> f = torch.fft.fftfreq(4)
>>> f
tensor([ 0.0000, 0.2500, -0.5000, -0.2500])
```

```
>>> torch.fft.fftshift(f)
tensor([-0.5000, -0.2500, 0.0000, 0.2500])
```

Also notice that the Nyquist frequency term at `f[2]` was moved to the beginning of the tensor.

This also works for multi-dimensional transforms:

```
>>> x = torch.fft.fftfreq(5, d=1/5) + 0.1 * torch.fft.fftfreq(5, d=1/5).unsqueeze(1)
>>> x
tensor([[ 0.0000, 1.0000, 2.0000, -2.0000, -1.0000],
        [ 0.1000, 1.1000, 2.1000, -1.9000, -0.9000],
        [ 0.2000, 1.2000, 2.2000, -1.8000, -0.8000],
        [-0.2000, 0.8000, 1.8000, -2.2000, -1.2000],
        [-0.1000, 0.9000, 1.9000, -2.1000, -1.1000]])
```

```
>>> torch.fft.fftshift(x)
tensor([[-2.2000, -1.2000, -0.2000, 0.8000, 1.8000],
        [-2.1000, -1.1000, -0.1000, 0.9000, 1.9000],
        [-2.0000, -1.0000, 0.0000, 1.0000, 2.0000],
        [-1.9000, -0.9000, 0.1000, 1.1000, 2.1000],
        [-1.8000, -0.8000, 0.2000, 1.2000, 2.2000]])
```

[`fftshift()`](#torch.fft.fftshift "torch.fft.fftshift") can also be useful for spatial data. If our data is defined on a centered grid (`[-(N//2), (N-1)//2]`) then we can use the standard FFT defined on an uncentered grid (`[0, N)`) by first applying an [`ifftshift()`](#torch.fft.ifftshift "torch.fft.ifftshift").

```
>>> x_centered = torch.arange(-5, 5)
>>> x_uncentered = torch.fft.ifftshift(x_centered)
>>> fft_uncentered = torch.fft.fft(x_uncentered)
```

Similarly, we can convert the frequency domain components to centered convention by applying [`fftshift()`](#torch.fft.fftshift "torch.fft.fftshift").

```
>>> fft_centered = torch.fft.fftshift(fft_uncentered)
```

The inverse transform, from centered Fourier space back to centered spatial data, can be performed by applying the inverse shifts in reverse order:

```
>>> x_centered_2 = torch.fft.fftshift(torch.fft.ifft(torch.fft.ifftshift(fft_centered)))
>>> torch.allclose(x_centered.to(torch.complex64), x_centered_2)
True
```

`torch.fft.ifftshift(input, dim=None) → Tensor`

Inverse of [`fftshift()`](#torch.fft.fftshift "torch.fft.fftshift").

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the tensor in FFT order
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – The dimensions to rearrange. Only dimensions specified here will be rearranged, any other dimensions will be left in their original order. Default: All dimensions of `input`.

#### Example

```
>>> f = torch.fft.fftfreq(5)
>>> f
tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])
```

A round-trip through [`fftshift()`](#torch.fft.fftshift "torch.fft.fftshift") and [`ifftshift()`](#torch.fft.ifftshift "torch.fft.ifftshift") gives the same result:

```
>>> shifted = torch.fft.fftshift(f)
>>> torch.fft.ifftshift(shifted)
tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])
```
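The three `norm` modes are easiest to remember as a pairing contract: whichever scaling the forward transform applies, the matching backward mode applies the complement so the round trip is exact. A short sketch (illustrative only, not part of the reference above) checking this, plus the energy-preserving property of `"ortho"`:

```
import torch

x = torch.randn(8, dtype=torch.complex64)

# For every mode, ifft(fft(x, norm=m), norm=m) recovers x up to
# floating-point error, because the paired scalings multiply to 1/n overall.
for mode in ("forward", "backward", "ortho"):
    X = torch.fft.fft(x, norm=mode)
    assert torch.allclose(torch.fft.ifft(X, norm=mode), x, atol=1e-5)

# "ortho" makes the transform unitary, so signal energy is preserved
# (Parseval's theorem with no extra 1/n factor).
X = torch.fft.fft(x, norm="ortho")
assert torch.allclose(x.abs().pow(2).sum(), X.abs().pow(2).sum(), atol=1e-3)
```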
pytorch torch.sparse

torch.sparse
============

Introduction
------------

PyTorch provides [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") to represent a multi-dimensional array containing elements of a single data type. By default, array elements are stored contiguously in memory, leading to efficient implementations of various array processing algorithms that rely on fast access to array elements. However, there exists an important class of multi-dimensional arrays, so-called sparse arrays, where the contiguous memory storage of array elements turns out to be suboptimal. Sparse arrays have the property that a vast portion of their elements are equal to zero, which means that a lot of memory as well as processor resources can be saved if only the non-zero elements are stored and/or processed. Various sparse storage formats ([such as COO, CSR/CSC, LIL, etc.](https://en.wikipedia.org/wiki/Sparse_matrix)) have been developed that are optimized for a particular structure of non-zero elements in sparse arrays as well as for specific operations on the arrays.

Note

When talking about storing only non-zero elements of a sparse array, the use of the adjective “non-zero” is not strict: one is allowed to also store zeros in the sparse array data structure. Hence, in the following, we use “specified elements” for those array elements that are actually stored. In addition, the unspecified elements are typically assumed to have zero value, though not necessarily, hence we use the term “fill value” to denote such elements.

Note

Using a sparse storage format for storing sparse arrays can be advantageous only when the size and sparsity levels of arrays are high. Otherwise, for small-sized or low-sparsity arrays, using the contiguous memory storage format is likely the most efficient approach.

Warning

The PyTorch API of sparse tensors is in beta and may change in the near future.

Sparse COO tensors
------------------

Currently, PyTorch implements the so-called Coordinate format, or COO format, as the default sparse storage format for storing sparse tensors. In COO format, the specified elements are stored as tuples of element indices and the corresponding values. In particular,

* the indices of specified elements are collected in an `indices` tensor of size `(ndim, nse)` and with element type `torch.int64`,
* the corresponding values are collected in a `values` tensor of size `(nse,)` and with an arbitrary integer or floating point number element type,

where `ndim` is the dimensionality of the tensor and `nse` is the number of specified elements.

Note

The memory consumption of a sparse COO tensor is at least `(ndim * 8 + <size of element type in bytes>) * nse` bytes (plus a constant overhead from storing other tensor data). The memory consumption of a strided tensor is at least `product(<tensor shape>) * <size of element type in bytes>`. For example, the memory consumption of a 10 000 x 10 000 tensor with 100 000 non-zero 32-bit floating point numbers is at least `(2 * 8 + 4) * 100 000 = 2 000 000` bytes when using COO tensor layout and `10 000 * 10 000 * 4 = 400 000 000` bytes when using the default strided tensor layout. Notice the 200-fold memory saving from using the COO storage format.
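The arithmetic in the note above can be reproduced directly; the following sketch (illustrative only) computes both byte estimates:

```
# Byte estimates for the 10 000 x 10 000 example above: 100 000 specified
# float32 elements (4 bytes each) plus int64 indices (8 bytes per dimension).
ndim, nse, elem_bytes = 2, 100_000, 4

coo_bytes = (ndim * 8 + elem_bytes) * nse       # indices + values
dense_bytes = 10_000 * 10_000 * elem_bytes      # every element stored

print(coo_bytes)                 # 2000000
print(dense_bytes)               # 400000000
print(dense_bytes // coo_bytes)  # 200  (the 200-fold saving)
```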
### Construction

A sparse COO tensor can be constructed by providing the two tensors of indices and values, as well as the size of the sparse tensor (when it cannot be inferred from the indices and values tensors), to the function [`torch.sparse_coo_tensor()`](generated/torch.sparse_coo_tensor#torch.sparse_coo_tensor "torch.sparse_coo_tensor").

Suppose we want to define a sparse tensor with the entry 3 at location (0, 2), entry 4 at location (1, 0), and entry 5 at location (1, 2). Unspecified elements are assumed to have the same value, the fill value, which is zero by default. We would then write:

```
>>> i = [[0, 1, 1],
         [2, 0, 2]]
>>> v =  [3, 4, 5]
>>> s = torch.sparse_coo_tensor(i, v, (2, 3))
>>> s
tensor(indices=tensor([[0, 1, 1],
                       [2, 0, 2]]),
       values=tensor([3, 4, 5]),
       size=(2, 3), nnz=3, layout=torch.sparse_coo)
>>> s.to_dense()
tensor([[0, 0, 3],
        [4, 0, 5]])
```

Note that the input `i` is NOT a list of index tuples. If you want to write your indices this way, you should transpose before passing them to the sparse constructor:

```
>>> i = [[0, 2], [1, 0], [1, 2]]
>>> v =  [3, 4, 5]
>>> s = torch.sparse_coo_tensor(list(zip(*i)), v, (2, 3))
>>> # Or another equivalent formulation to get s
>>> s = torch.sparse_coo_tensor(torch.tensor(i).t(), v, (2, 3))
>>> s.to_dense()
tensor([[0, 0, 3],
        [4, 0, 5]])
```

An empty sparse COO tensor can be constructed by specifying its size only:

```
>>> torch.sparse_coo_tensor(size=(2, 3))
tensor(indices=tensor([], size=(2, 0)),
       values=tensor([], size=(0,)),
       size=(2, 3), nnz=0, layout=torch.sparse_coo)
```

### Hybrid sparse COO tensors

PyTorch implements an extension of sparse tensors with scalar values to sparse tensors with (contiguous) tensor values. Such tensors are called hybrid tensors. A PyTorch hybrid COO tensor extends the sparse COO tensor by allowing the `values` tensor to be a multi-dimensional tensor so that we have:

* the indices of specified elements are collected in an `indices` tensor of size `(sparse_dims, nse)` and with element type `torch.int64`,
* the corresponding (tensor) values are collected in a `values` tensor of size `(nse, dense_dims)` and with an arbitrary integer or floating point number element type.

Note

We use an (M + K)-dimensional tensor to denote an N-dimensional hybrid sparse tensor, where M and K are the numbers of sparse and dense dimensions, respectively, such that M + K == N holds.

Suppose we want to create a (2 + 1)-dimensional tensor with the entry [3, 4] at location (0, 2), entry [5, 6] at location (1, 0), and entry [7, 8] at location (1, 2). We would write

```
>>> i = [[0, 1, 1],
         [2, 0, 2]]
>>> v =  [[3, 4], [5, 6], [7, 8]]
>>> s = torch.sparse_coo_tensor(i, v, (2, 3, 2))
>>> s
tensor(indices=tensor([[0, 1, 1],
                       [2, 0, 2]]),
       values=tensor([[3, 4],
                      [5, 6],
                      [7, 8]]),
       size=(2, 3, 2), nnz=3, layout=torch.sparse_coo)
```

```
>>> s.to_dense()
tensor([[[0, 0],
         [0, 0],
         [3, 4]],
        [[5, 6],
         [0, 0],
         [7, 8]]])
```

In general, if `s` is a sparse COO tensor and `M = s.sparse_dim()`, `K = s.dense_dim()`, then we have the following invariants, checked in the sketch below:

* `M + K == len(s.shape) == s.ndim` - dimensionality of a tensor is the sum of the number of sparse and dense dimensions,
* `s.indices().shape == (M, nse)` - sparse indices are stored explicitly,
* `s.values().shape == (nse,) + s.shape[M : M + K]` - the values of a hybrid tensor are K-dimensional tensors,
* `s.values().layout == torch.strided` - values are stored as strided tensors.
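The following sketch (illustrative only; it uses the internal `_nnz()` accessor for the number of specified elements) asserts each invariant on the (2 + 1)-dimensional example above:

```
import torch

i = [[0, 1, 1], [2, 0, 2]]
v = [[3, 4], [5, 6], [7, 8]]
s = torch.sparse_coo_tensor(i, v, (2, 3, 2)).coalesce()  # coalesce to use indices()/values()

M, K, nse = s.sparse_dim(), s.dense_dim(), s._nnz()
assert M + K == len(s.shape) == s.ndim == 3
assert s.indices().shape == (M, nse)                   # (2, 3)
assert s.values().shape == (nse,) + s.shape[M:M + K]   # (3, 2)
assert s.values().layout == torch.strided
```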
Note

Dense dimensions always follow sparse dimensions, that is, mixing of dense and sparse dimensions is not supported.

### Uncoalesced sparse COO tensors

The PyTorch sparse COO tensor format permits *uncoalesced* sparse tensors, where there may be duplicate coordinates in the indices; in this case, the interpretation is that the value at that index is the sum of all duplicate value entries. For example, one can specify multiple values, `3` and `4`, for the same index `1`, which leads to a 1-D uncoalesced tensor:

```
>>> i = [[1, 1]]
>>> v =  [3, 4]
>>> s = torch.sparse_coo_tensor(i, v, (3,))
>>> s
tensor(indices=tensor([[1, 1]]),
       values=tensor([3, 4]),
       size=(3,), nnz=2, layout=torch.sparse_coo)
```

while the coalescing process will accumulate the multi-valued elements into a single value using summation:

```
>>> s.coalesce()
tensor(indices=tensor([[1]]),
       values=tensor([7]),
       size=(3,), nnz=1, layout=torch.sparse_coo)
```

In general, the output of the [`torch.Tensor.coalesce()`](#torch.Tensor.coalesce "torch.Tensor.coalesce") method is a sparse tensor with the following properties:

* the indices of specified tensor elements are unique,
* the indices are sorted in lexicographical order,
* [`torch.Tensor.is_coalesced()`](#torch.Tensor.is_coalesced "torch.Tensor.is_coalesced") returns `True`.

Note

For the most part, you shouldn’t have to care whether or not a sparse tensor is coalesced, as most operations will work identically given a coalesced or uncoalesced sparse tensor. However, some operations can be implemented more efficiently on uncoalesced tensors, and some on coalesced tensors.

For instance, addition of sparse COO tensors is implemented by simply concatenating the indices and values tensors:

```
>>> a = torch.sparse_coo_tensor([[1, 1]], [5, 6], (2,))
>>> b = torch.sparse_coo_tensor([[0, 0]], [7, 8], (2,))
>>> a + b
tensor(indices=tensor([[0, 0, 1, 1]]),
       values=tensor([7, 8, 5, 6]),
       size=(2,), nnz=4, layout=torch.sparse_coo)
```

If you repeatedly perform an operation that can produce duplicate entries (e.g., [`torch.Tensor.add()`](tensors#torch.Tensor.add "torch.Tensor.add")), you should occasionally coalesce your sparse tensors to prevent them from growing too large. On the other hand, the lexicographical ordering of indices can be advantageous for implementing algorithms that involve many element selection operations, such as slicing or matrix products.

### Working with sparse COO tensors

Let’s consider the following example:

```
>>> i = [[0, 1, 1],
         [2, 0, 2]]
>>> v =  [[3, 4], [5, 6], [7, 8]]
>>> s = torch.sparse_coo_tensor(i, v, (2, 3, 2))
```

As mentioned above, a sparse COO tensor is a [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") instance; to distinguish it from the `Tensor` instances that use some other layout, one can use the [`torch.Tensor.is_sparse`](#torch.Tensor.is_sparse "torch.Tensor.is_sparse") or `torch.Tensor.layout` properties:

```
>>> isinstance(s, torch.Tensor)
True
>>> s.is_sparse
True
>>> s.layout == torch.sparse_coo
True
```

The number of sparse and dense dimensions can be acquired using the methods [`torch.Tensor.sparse_dim()`](#torch.Tensor.sparse_dim "torch.Tensor.sparse_dim") and [`torch.Tensor.dense_dim()`](#torch.Tensor.dense_dim "torch.Tensor.dense_dim"), respectively.
For instance:

```
>>> s.sparse_dim(), s.dense_dim()
(2, 1)
```

If `s` is a sparse COO tensor then its COO format data can be acquired using the methods [`torch.Tensor.indices()`](#torch.Tensor.indices "torch.Tensor.indices") and [`torch.Tensor.values()`](#torch.Tensor.values "torch.Tensor.values").

Note

Currently, one can acquire the COO format data only when the tensor instance is coalesced:

```
>>> s.indices()
RuntimeError: Cannot get indices on an uncoalesced tensor, please call .coalesce() first
```

For acquiring the COO format data of an uncoalesced tensor, use `torch.Tensor._values()` and `torch.Tensor._indices()`:

```
>>> s._indices()
tensor([[0, 1, 1],
        [2, 0, 2]])
```

Constructing a new sparse COO tensor results in a tensor that is not coalesced:

```
>>> s.is_coalesced()
False
```

but one can construct a coalesced copy of a sparse COO tensor using the [`torch.Tensor.coalesce()`](#torch.Tensor.coalesce "torch.Tensor.coalesce") method:

```
>>> s2 = s.coalesce()
>>> s2.indices()
tensor([[0, 1, 1],
        [2, 0, 2]])
```

When working with uncoalesced sparse COO tensors, one must take into account the additive nature of uncoalesced data: the values of the same indices are the terms of a sum whose evaluation gives the value of the corresponding tensor element. For example, scalar multiplication on an uncoalesced sparse tensor can be implemented by multiplying all the uncoalesced values with the scalar, because `c * (a + b) == c * a + c * b` holds. However, any nonlinear operation, say, a square root, cannot be implemented by applying the operation to uncoalesced data, because `sqrt(a + b) == sqrt(a) + sqrt(b)` does not hold in general.

Slicing (with positive step) of a sparse COO tensor is supported only for dense dimensions. Indexing is supported for both sparse and dense dimensions:

```
>>> s[1]
tensor(indices=tensor([[0, 2]]),
       values=tensor([[5, 6],
                      [7, 8]]),
       size=(3, 2), nnz=2, layout=torch.sparse_coo)
>>> s[1, 0, 1]
tensor(6)
>>> s[1, 0, 1:]
tensor([6])
```

In PyTorch, the fill value of a sparse tensor cannot be specified explicitly and is assumed to be zero in general. However, there exist operations that may interpret the fill value differently. For instance, [`torch.sparse.softmax()`](#torch.sparse.softmax "torch.sparse.softmax") computes the softmax with the assumption that the fill value is negative infinity.

Supported Linear Algebra operations
-----------------------------------

The following table summarizes supported Linear Algebra operations on sparse matrices where the operand layouts may vary. Here `T[layout]` denotes a tensor with a given layout. Similarly, `M[layout]` denotes a matrix (2-D PyTorch tensor), and `V[layout]` denotes a vector (1-D PyTorch tensor). In addition, `f` denotes a scalar (float or 0-D PyTorch tensor), `*` is element-wise multiplication, and `@` is matrix multiplication.

| PyTorch operation | Sparse grad? | Layout signature |
| --- | --- | --- |
| [`torch.mv()`](generated/torch.mv#torch.mv "torch.mv") | no | `M[sparse_coo] @ V[strided] -> V[strided]` |
| [`torch.matmul()`](generated/torch.matmul#torch.matmul "torch.matmul") | no | `M[sparse_coo] @ M[strided] -> M[strided]` |
| [`torch.mm()`](generated/torch.mm#torch.mm "torch.mm") | no | `M[sparse_coo] @ M[strided] -> M[strided]` |
| [`torch.sparse.mm()`](#torch.sparse.mm "torch.sparse.mm") | yes | `M[sparse_coo] @ M[strided] -> M[strided]` |
| [`torch.smm()`](#torch.smm "torch.smm") | no | `M[sparse_coo] @ M[strided] -> M[sparse_coo]` |
| [`torch.hspmm()`](#torch.hspmm "torch.hspmm") | no | `M[sparse_coo] @ M[strided] -> M[hybrid sparse_coo]` |
| [`torch.bmm()`](generated/torch.bmm#torch.bmm "torch.bmm") | no | `T[sparse_coo] @ T[strided] -> T[strided]` |
| [`torch.addmm()`](generated/torch.addmm#torch.addmm "torch.addmm") | no | `f * M[strided] + f * (M[sparse_coo] @ M[strided]) -> M[strided]` |
| [`torch.sparse.addmm()`](#torch.sparse.addmm "torch.sparse.addmm") | yes | `f * M[strided] + f * (M[sparse_coo] @ M[strided]) -> M[strided]` |
| [`torch.sspaddmm()`](#torch.sspaddmm "torch.sspaddmm") | no | `f * M[sparse_coo] + f * (M[sparse_coo] @ M[strided]) -> M[sparse_coo]` |
| [`torch.lobpcg()`](generated/torch.lobpcg#torch.lobpcg "torch.lobpcg") | no | `GENEIG(M[sparse_coo]) -> M[strided], M[strided]` |
| [`torch.pca_lowrank()`](generated/torch.pca_lowrank#torch.pca_lowrank "torch.pca_lowrank") | yes | `PCA(M[sparse_coo]) -> M[strided], M[strided], M[strided]` |
| [`torch.svd_lowrank()`](generated/torch.svd_lowrank#torch.svd_lowrank "torch.svd_lowrank") | yes | `SVD(M[sparse_coo]) -> M[strided], M[strided], M[strided]` |

The “Sparse grad?” column indicates whether the PyTorch operation supports backward with respect to the sparse matrix argument. All PyTorch operations, except [`torch.smm()`](#torch.smm "torch.smm"), support backward with respect to strided matrix arguments.

Note

Currently, PyTorch does not support matrix multiplication with the layout signature `M[strided] @ M[sparse_coo]`. However, applications can still compute this using the matrix relation `D @ S == (S.t() @ D.t()).t()`.

`class torch.Tensor`

The following methods are specific to [sparse tensors](#sparse-docs):

`is_sparse`

Is `True` if the Tensor uses sparse storage layout, `False` otherwise.

`dense_dim() → int`

Return the number of dense dimensions in a [sparse tensor](#sparse-docs) `self`.

Warning

Throws an error if `self` is not a sparse tensor.

See also [`Tensor.sparse_dim()`](#torch.Tensor.sparse_dim "torch.Tensor.sparse_dim") and [hybrid tensors](#sparse-hybrid-coo-docs).

`sparse_dim() → int`

Return the number of sparse dimensions in a [sparse tensor](#sparse-docs) `self`.

Warning

Throws an error if `self` is not a sparse tensor.

See also [`Tensor.dense_dim()`](#torch.Tensor.dense_dim "torch.Tensor.dense_dim") and [hybrid tensors](#sparse-hybrid-coo-docs).

`sparse_mask(mask) → Tensor`

Returns a new [sparse tensor](#sparse-docs) with values from a strided tensor `self` filtered by the indices of the sparse tensor `mask`. The values of the `mask` sparse tensor are ignored. The `self` and `mask` tensors must have the same shape.

Note

The returned sparse tensor has the same indices as the sparse tensor `mask`, even when the corresponding values in `self` are zeros.
Parameters **mask** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a sparse tensor whose indices are used as a filter Example: ``` >>> nse = 5 >>> dims = (5, 5, 2, 2) >>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)), ... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse) >>> V = torch.randn(nse, dims[2], dims[3]) >>> S = torch.sparse_coo_tensor(I, V, dims).coalesce() >>> D = torch.randn(dims) >>> D.sparse_mask(S) tensor(indices=tensor([[0, 0, 0, 2], [0, 1, 4, 3]]), values=tensor([[[ 1.6550, 0.2397], [-0.1611, -0.0779]], [[ 0.2326, -1.0558], [ 1.4711, 1.9678]], [[-0.5138, -0.0411], [ 1.9417, 0.5158]], [[ 0.0793, 0.0036], [-0.2569, -0.1055]]]), size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo) ``` `sparse_resize_(size, sparse_dim, dense_dim) → Tensor` Resizes the [sparse tensor](#sparse-docs) `self` to the desired size and the desired number of sparse and dense dimensions. Note If the number of specified elements in `self` is zero, then [`size`](tensors#torch.Tensor.size "torch.Tensor.size"), [`sparse_dim`](#torch.Tensor.sparse_dim "torch.Tensor.sparse_dim"), and [`dense_dim`](#torch.Tensor.dense_dim "torch.Tensor.dense_dim") can be any size and positive integers such that `len(size) == sparse_dim + dense_dim`. If `self` specifies one or more elements, however, then each dimension in [`size`](tensors#torch.Tensor.size "torch.Tensor.size") must not be smaller than the corresponding dimension of `self`, [`sparse_dim`](#torch.Tensor.sparse_dim "torch.Tensor.sparse_dim") must equal the number of sparse dimensions in `self`, and [`dense_dim`](#torch.Tensor.dense_dim "torch.Tensor.dense_dim") must equal the number of dense dimensions in `self`. Warning Throws an error if `self` is not a sparse tensor. Parameters * **size** (*torch.Size*) – the desired size. If `self` is a non-empty sparse tensor, the desired size cannot be smaller than the original size. * **sparse\_dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the number of sparse dimensions * **dense\_dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the number of dense dimensions `sparse_resize_and_clear_(size, sparse_dim, dense_dim) → Tensor` Removes all specified elements from a [sparse tensor](#sparse-docs) `self` and resizes `self` to the desired size and the desired number of sparse and dense dimensions. Parameters * **size** (*torch.Size*) – the desired size. * **sparse\_dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the number of sparse dimensions * **dense\_dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the number of dense dimensions `to_dense() → Tensor` Creates a strided copy of `self`. Warning Throws an error if `self` is a strided tensor. Example: ``` >>> s = torch.sparse_coo_tensor( ... torch.tensor([[1, 1], ... [0, 2]]), ... torch.tensor([9, 10]), ... size=(3, 3)) >>> s.to_dense() tensor([[ 0, 0, 0], [ 9, 0, 10], [ 0, 0, 0]]) ``` `to_sparse(sparseDims) → Tensor` Returns a sparse copy of the tensor. PyTorch supports sparse tensors in [coordinate format](#sparse-coo-docs).
Parameters **sparseDims** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the number of sparse dimensions to include in the new sparse tensor Example: ``` >>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]]) >>> d tensor([[ 0, 0, 0], [ 9, 0, 10], [ 0, 0, 0]]) >>> d.to_sparse() tensor(indices=tensor([[1, 1], [0, 2]]), values=tensor([ 9, 10]), size=(3, 3), nnz=2, layout=torch.sparse_coo) >>> d.to_sparse(1) tensor(indices=tensor([[1]]), values=tensor([[ 9, 0, 10]]), size=(3, 3), nnz=1, layout=torch.sparse_coo) ``` `coalesce() → Tensor` Returns a coalesced copy of `self` if `self` is an [uncoalesced tensor](#sparse-uncoalesced-coo-docs). Returns `self` if `self` is a coalesced tensor. Warning Throws an error if `self` is not a sparse COO tensor. `is_coalesced() → bool` Returns `True` if `self` is a [sparse COO tensor](#sparse-coo-docs) that is coalesced, `False` otherwise. Warning Throws an error if `self` is not a sparse COO tensor. See [`coalesce()`](#torch.Tensor.coalesce "torch.Tensor.coalesce") and [uncoalesced tensors](#sparse-uncoalesced-coo-docs). `indices() → Tensor` Return the indices tensor of a [sparse COO tensor](#sparse-coo-docs). Warning Throws an error if `self` is not a sparse COO tensor. See also [`Tensor.values()`](#torch.Tensor.values "torch.Tensor.values"). Note This method can only be called on a coalesced sparse tensor. See [`Tensor.coalesce()`](#torch.Tensor.coalesce "torch.Tensor.coalesce") for details. `values() → Tensor` Return the values tensor of a [sparse COO tensor](#sparse-coo-docs). Warning Throws an error if `self` is not a sparse COO tensor. See also [`Tensor.indices()`](#torch.Tensor.indices "torch.Tensor.indices"). Note This method can only be called on a coalesced sparse tensor. See [`Tensor.coalesce()`](#torch.Tensor.coalesce "torch.Tensor.coalesce") for details. 
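To make the coalescing semantics described above concrete, here is a small, self-contained sketch (the index and value data are chosen arbitrarily for illustration) showing how `coalesce()` sums duplicate coordinates:
```
import torch

# The coordinate (0, 0) appears twice, so the tensor starts out uncoalesced.
i = torch.tensor([[0, 0, 1],
                  [0, 0, 2]])
v = torch.tensor([1., 2., 3.])
s = torch.sparse_coo_tensor(i, v, (2, 3))
print(s.is_coalesced())  # False

# coalesce() sums values that share an index: entry (0, 0) becomes 1 + 2 = 3.
c = s.coalesce()
print(c.indices())  # tensor([[0, 1], [0, 2]])
print(c.values())   # tensor([3., 3.])
```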
The following [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") methods support [sparse COO tensors](#sparse-coo-docs): [`add()`](tensors#torch.Tensor.add "torch.Tensor.add") [`add_()`](tensors#torch.Tensor.add_ "torch.Tensor.add_") [`addmm()`](tensors#torch.Tensor.addmm "torch.Tensor.addmm") [`addmm_()`](tensors#torch.Tensor.addmm_ "torch.Tensor.addmm_") [`any()`](tensors#torch.Tensor.any "torch.Tensor.any") [`asin()`](tensors#torch.Tensor.asin "torch.Tensor.asin") [`asin_()`](tensors#torch.Tensor.asin_ "torch.Tensor.asin_") [`arcsin()`](tensors#torch.Tensor.arcsin "torch.Tensor.arcsin") [`arcsin_()`](tensors#torch.Tensor.arcsin_ "torch.Tensor.arcsin_") [`bmm()`](tensors#torch.Tensor.bmm "torch.Tensor.bmm") [`clone()`](tensors#torch.Tensor.clone "torch.Tensor.clone") [`deg2rad()`](tensors#torch.Tensor.deg2rad "torch.Tensor.deg2rad") `deg2rad_()` [`detach()`](autograd#torch.Tensor.detach "torch.Tensor.detach") [`detach_()`](autograd#torch.Tensor.detach_ "torch.Tensor.detach_") [`dim()`](tensors#torch.Tensor.dim "torch.Tensor.dim") [`div()`](tensors#torch.Tensor.div "torch.Tensor.div") [`div_()`](tensors#torch.Tensor.div_ "torch.Tensor.div_") [`floor_divide()`](tensors#torch.Tensor.floor_divide "torch.Tensor.floor_divide") [`floor_divide_()`](tensors#torch.Tensor.floor_divide_ "torch.Tensor.floor_divide_") [`get_device()`](tensors#torch.Tensor.get_device "torch.Tensor.get_device") [`index_select()`](tensors#torch.Tensor.index_select "torch.Tensor.index_select") [`isnan()`](tensors#torch.Tensor.isnan "torch.Tensor.isnan") [`log1p()`](tensors#torch.Tensor.log1p "torch.Tensor.log1p") [`log1p_()`](tensors#torch.Tensor.log1p_ "torch.Tensor.log1p_") [`mm()`](tensors#torch.Tensor.mm "torch.Tensor.mm") [`mul()`](tensors#torch.Tensor.mul "torch.Tensor.mul") [`mul_()`](tensors#torch.Tensor.mul_ "torch.Tensor.mul_") [`mv()`](tensors#torch.Tensor.mv "torch.Tensor.mv") [`narrow_copy()`](tensors#torch.Tensor.narrow_copy "torch.Tensor.narrow_copy") [`neg()`](tensors#torch.Tensor.neg "torch.Tensor.neg") [`neg_()`](tensors#torch.Tensor.neg_ "torch.Tensor.neg_") [`negative()`](tensors#torch.Tensor.negative "torch.Tensor.negative") [`negative_()`](tensors#torch.Tensor.negative_ "torch.Tensor.negative_") [`numel()`](tensors#torch.Tensor.numel "torch.Tensor.numel") [`rad2deg()`](tensors#torch.Tensor.rad2deg "torch.Tensor.rad2deg") `rad2deg_()` [`resize_as_()`](tensors#torch.Tensor.resize_as_ "torch.Tensor.resize_as_") [`size()`](tensors#torch.Tensor.size "torch.Tensor.size") [`pow()`](tensors#torch.Tensor.pow "torch.Tensor.pow") [`sqrt()`](tensors#torch.Tensor.sqrt "torch.Tensor.sqrt") [`square()`](tensors#torch.Tensor.square "torch.Tensor.square") `smm()` `sspaddmm()` [`sub()`](tensors#torch.Tensor.sub "torch.Tensor.sub") [`sub_()`](tensors#torch.Tensor.sub_ "torch.Tensor.sub_") [`t()`](tensors#torch.Tensor.t "torch.Tensor.t") [`t_()`](tensors#torch.Tensor.t_ "torch.Tensor.t_") [`transpose()`](tensors#torch.Tensor.transpose "torch.Tensor.transpose") [`transpose_()`](tensors#torch.Tensor.transpose_ "torch.Tensor.transpose_") [`zero_()`](tensors#torch.Tensor.zero_ "torch.Tensor.zero_") Sparse tensor functions ----------------------- `torch.sparse_coo_tensor(indices, values, size=None, *, dtype=None, device=None, requires_grad=False) → Tensor` Constructs a [sparse tensor in COO(rdinate) format](#sparse-coo-docs) with specified values at the given `indices`. Note This function returns an [uncoalesced tensor](#sparse-uncoalesced-coo-docs). Parameters * **indices** (*array\_like*) – Initial data for the tensor. 
Can be a list, tuple, NumPy `ndarray`, scalar, and other types. Will be cast to a `torch.LongTensor` internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values. * **values** (*array\_like*) – Initial values for the tensor. Can be a list, tuple, NumPy `ndarray`, scalar, and other types. * **size** (list, tuple, or `torch.Size`, optional) – Size of the sparse tensor. If not provided, the size will be inferred as the minimum size big enough to hold all non-zero elements. Keyword Arguments * **dtype** ([`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of the returned tensor. Default: if None, infers data type from `values`. * **device** ([`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of the returned tensor. Default: if None, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](generated/torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> i = torch.tensor([[0, 1, 1], ... [2, 0, 2]]) >>> v = torch.tensor([3, 4, 5], dtype=torch.float32) >>> torch.sparse_coo_tensor(i, v, [2, 4]) tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), size=(2, 4), nnz=3, layout=torch.sparse_coo) >>> torch.sparse_coo_tensor(i, v) # Shape inference tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), size=(2, 3), nnz=3, layout=torch.sparse_coo) >>> torch.sparse_coo_tensor(i, v, [2, 4], ... dtype=torch.float64, ... device=torch.device('cuda:0')) tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64, layout=torch.sparse_coo) # Create an empty sparse tensor with the following invariants: # 1. sparse_dim + dense_dim = len(SparseTensor.shape) # 2. SparseTensor._indices().shape = (sparse_dim, nnz) # 3. SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:]) # # For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and # sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0)) >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1]) tensor(indices=tensor([], size=(1, 0)), values=tensor([], size=(0,)), size=(1,), nnz=0, layout=torch.sparse_coo) # and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and # sparse_dim = 1 >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2]) tensor(indices=tensor([], size=(1, 0)), values=tensor([], size=(0, 2)), size=(1, 2), nnz=0, layout=torch.sparse_coo) ``` `torch.sparse.sum(input, dim=None, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/sparse.html#sum) Returns the sum of each row of the sparse tensor `input` in the given dimensions `dim`. If `dim` is a list of dimensions, reduce over all of them. When summing over all of the `sparse_dim` dimensions, this method returns a dense tensor instead of a sparse tensor.
All summed `dim` are squeezed (see [`torch.squeeze()`](generated/torch.squeeze#torch.squeeze "torch.squeeze")), resulting in an output tensor having `dim` fewer dimensions than `input`. During backward, only gradients at the `nnz` locations of `input` will propagate back. Note that the gradient of `input` is coalesced. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input sparse tensor * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – a dimension or a list of dimensions to reduce. Default: reduce over all dims. * **dtype** (`torch.dtype`, optional) – the desired data type of the returned Tensor. Default: dtype of `input`. Example: ``` >>> nnz = 3 >>> dims = [5, 5, 2, 3] >>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)), torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz) >>> V = torch.randn(nnz, dims[2], dims[3]) >>> size = torch.Size(dims) >>> S = torch.sparse_coo_tensor(I, V, size) >>> S tensor(indices=tensor([[2, 0, 3], [2, 4, 1]]), values=tensor([[[-0.6438, -1.6467, 1.4004], [ 0.3411, 0.0918, -0.2312]], [[ 0.5348, 0.0634, -2.0494], [-0.7125, -1.0646, 2.1844]], [[ 0.1276, 0.1874, -0.6334], [-1.9682, -0.5340, 0.7483]]]), size=(5, 5, 2, 3), nnz=3, layout=torch.sparse_coo) # when summing over only part of the sparse dims, a sparse tensor is returned >>> torch.sparse.sum(S, [1, 3]) tensor(indices=tensor([[0, 2, 3]]), values=tensor([[-1.4512, 0.4073], [-0.8901, 0.2017], [-0.3183, -1.7539]]), size=(5, 2), nnz=3, layout=torch.sparse_coo) # when summing over all sparse dims, a dense tensor is returned # with the summed dims squeezed >>> torch.sparse.sum(S, [0, 1, 3]) tensor([-2.6596, -1.1450]) ``` `torch.sparse.addmm(mat, mat1, mat2, beta=1.0, alpha=1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/sparse.html#addmm) This function does the exact same thing as [`torch.addmm()`](generated/torch.addmm#torch.addmm "torch.addmm") in the forward pass, except that it supports backward for the sparse matrix `mat1`. `mat1` needs to have `sparse_dim = 2`. Note that the gradient of `mat1` is a coalesced sparse tensor. Parameters * **mat** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a dense matrix to be added * **mat1** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a sparse matrix to be multiplied * **mat2** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a dense matrix to be multiplied * **beta** (*Number*, *optional*) – multiplier for `mat` (β) * **alpha** (*Number*, *optional*) – multiplier for `mat1 @ mat2` (α) `torch.sparse.mm(mat1, mat2)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/sparse.html#mm) Performs a matrix multiplication of the sparse matrix `mat1` and the (sparse or strided) matrix `mat2`. Similar to [`torch.mm()`](generated/torch.mm#torch.mm "torch.mm"), if `mat1` is an (n × m) tensor and `mat2` is an (m × p) tensor, `out` will be an (n × p) tensor. `mat1` needs to have `sparse_dim = 2`. This function also supports backward for both matrices. Note that the gradient of `mat1` is a coalesced sparse tensor.
Parameters * **mat1** (*SparseTensor*) – the first sparse matrix to be multiplied * **mat2** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the second matrix to be multiplied, which could be sparse or dense Shape: The format of the output tensor of this function follows: - sparse x sparse -> sparse - sparse x dense -> dense Example: ``` >>> a = torch.randn(2, 3).to_sparse().requires_grad_(True) >>> a tensor(indices=tensor([[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]), values=tensor([ 1.5901, 0.0183, -0.6146, 1.8061, -0.0112, 0.6302]), size=(2, 3), nnz=6, layout=torch.sparse_coo, requires_grad=True) >>> b = torch.randn(3, 2, requires_grad=True) >>> b tensor([[-0.6479, 0.7874], [-1.2056, 0.5641], [-1.1716, -0.9923]], requires_grad=True) >>> y = torch.sparse.mm(a, b) >>> y tensor([[-0.3323, 1.8723], [-1.8951, 0.7904]], grad_fn=<SparseAddmmBackward>) >>> y.sum().backward() >>> a.grad tensor(indices=tensor([[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]), values=tensor([ 0.1394, -0.6415, -2.1639, 0.1394, -0.6415, -2.1639]), size=(2, 3), nnz=6, layout=torch.sparse_coo) ``` `torch.sspaddmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) → Tensor` Matrix multiplies a sparse tensor `mat1` with a dense tensor `mat2`, then adds the sparse tensor `input` to the result. Note: This function is equivalent to [`torch.addmm()`](generated/torch.addmm#torch.addmm "torch.addmm"), except `input` and `mat1` are sparse. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a sparse matrix to be added * **mat1** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a sparse matrix to be matrix multiplied * **mat2** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a dense matrix to be matrix multiplied Keyword Arguments * **beta** (*Number*, *optional*) – multiplier for `input` (β) * **alpha** (*Number*, *optional*) – multiplier for `mat1 @ mat2` (α) * **out** ([Tensor](tensors#torch.Tensor "torch.Tensor"), *optional*) – the output tensor. `torch.hspmm(mat1, mat2, *, out=None) → Tensor` Performs a matrix multiplication of a [sparse COO matrix](#sparse-coo-docs) `mat1` and a strided matrix `mat2`. The result is a (1 + 1)-dimensional [hybrid COO matrix](#sparse-hybrid-coo-docs). Parameters * **mat1** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the first sparse matrix to be matrix multiplied * **mat2** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the second strided matrix to be matrix multiplied Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor"), *optional*) – the output tensor. `torch.smm(input, mat) → Tensor` Performs a matrix multiplication of the sparse matrix `input` with the dense matrix `mat`. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a sparse matrix to be matrix multiplied * **mat** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – a dense matrix to be matrix multiplied `torch.sparse.softmax(input, dim, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/sparse.html#softmax) Applies a softmax function. Softmax is defined as: Softmax(x\_i) = exp(x\_i) / ∑\_j exp(x\_j), where i, j run over sparse tensor indices and unspecified entries are ignored. This is equivalent to defining unspecified entries as negative infinity, so that exp(x\_k) = 0 when the entry with index k is not specified. It is applied to all slices along `dim`, and will re-scale them so that the elements lie in the range `[0, 1]` and sum to 1.
Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – A dimension along which softmax will be computed. * **dtype** (`torch.dtype`, optional) – the desired data type of the returned tensor. If specified, the input tensor is cast to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None `torch.sparse.log_softmax(input, dim, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/sparse.html#log_softmax) Applies a softmax function followed by logarithm. See [`softmax`](#torch.sparse.softmax "torch.sparse.softmax") for more details. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – A dimension along which softmax will be computed. * **dtype** (`torch.dtype`, optional) – the desired data type of the returned tensor. If specified, the input tensor is cast to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None Other functions --------------- The following `torch` functions support [sparse COO tensors](#sparse-coo-docs): [`cat()`](generated/torch.cat#torch.cat "torch.cat") [`dstack()`](generated/torch.dstack#torch.dstack "torch.dstack") [`empty()`](generated/torch.empty#torch.empty "torch.empty") [`empty_like()`](generated/torch.empty_like#torch.empty_like "torch.empty_like") [`hstack()`](generated/torch.hstack#torch.hstack "torch.hstack") [`index_select()`](generated/torch.index_select#torch.index_select "torch.index_select") [`is_complex()`](generated/torch.is_complex#torch.is_complex "torch.is_complex") [`is_floating_point()`](generated/torch.is_floating_point#torch.is_floating_point "torch.is_floating_point") [`is_nonzero()`](generated/torch.is_nonzero#torch.is_nonzero "torch.is_nonzero") `is_same_size()` `is_signed()` [`is_tensor()`](generated/torch.is_tensor#torch.is_tensor "torch.is_tensor") [`lobpcg()`](generated/torch.lobpcg#torch.lobpcg "torch.lobpcg") [`mm()`](generated/torch.mm#torch.mm "torch.mm") `native_norm()` [`pca_lowrank()`](generated/torch.pca_lowrank#torch.pca_lowrank "torch.pca_lowrank") `select()` [`stack()`](generated/torch.stack#torch.stack "torch.stack") [`svd_lowrank()`](generated/torch.svd_lowrank#torch.svd_lowrank "torch.svd_lowrank") [`unsqueeze()`](generated/torch.unsqueeze#torch.unsqueeze "torch.unsqueeze") [`vstack()`](generated/torch.vstack#torch.vstack "torch.vstack") [`zeros()`](generated/torch.zeros#torch.zeros "torch.zeros") [`zeros_like()`](generated/torch.zeros_like#torch.zeros_like "torch.zeros_like")
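As a closing illustration of the note above about the unsupported `M[strided] @ M[sparse_coo]` signature, the following sketch (shapes chosen arbitrarily) computes a dense-by-sparse product via the transpose identity; it relies only on operations documented in this section (`to_sparse()`, `t()`, and `torch.sparse.mm()`):
```
import torch

D = torch.randn(3, 4)              # strided (dense) matrix
S = torch.randn(4, 2).to_sparse()  # sparse COO matrix

# D @ S is not supported directly, but because M[sparse_coo] @ M[strided]
# is supported, the identity D @ S == (S.t() @ D.t()).t() gives the result.
out = torch.sparse.mm(S.t(), D.t()).t()
print(torch.allclose(out, D @ S.to_dense()))  # True
```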
pytorch TorchScript Language Reference * [Types](#supported-type) * [Expressions](#expressions) * [Statements](#statements) * [Variable Resolution](#variable-resolution) * [Use of Python Values](#use-of-python-values) TorchScript Language Reference ============================== TorchScript is a statically typed subset of Python that can either be written directly (using the [`@torch.jit.script`](generated/torch.jit.script#torch.jit.script "torch.jit.script") decorator) or generated automatically from Python code via tracing. When using tracing, code is automatically converted into this subset of Python by recording only the actual operators on tensors and simply executing and discarding the other surrounding Python code. When writing TorchScript directly using the `@torch.jit.script` decorator, the programmer must use only the subset of Python supported in TorchScript. This section documents what is supported in TorchScript as if it were a language reference for a standalone language. Any features of Python not mentioned in this reference are not part of TorchScript. See `Builtin Functions` for a complete reference of available PyTorch tensor methods, modules, and functions. As a subset of Python, any valid TorchScript function is also a valid Python function. This makes it possible to `disable TorchScript` and debug the function using standard Python tools like `pdb`. The reverse is not true: there are many valid Python programs that are not valid TorchScript programs. Instead, TorchScript focuses specifically on the features of Python that are needed to represent neural network models in PyTorch. Types ----- The largest difference between TorchScript and the full Python language is that TorchScript only supports a small set of types that are needed to express neural net models. In particular, TorchScript supports:
| Type | Description |
| --- | --- |
| `Tensor` | A PyTorch tensor of any dtype, dimension, or backend |
| `Tuple[T0, T1, ..., TN]` | A tuple containing subtypes `T0`, `T1`, etc. (e.g. `Tuple[Tensor, Tensor]`) |
| `bool` | A boolean value |
| `int` | A scalar integer |
| `float` | A scalar floating point number |
| `str` | A string |
| `List[T]` | A list of which all members are type `T` |
| `Optional[T]` | A value which is either None or type `T` |
| `Dict[K, V]` | A dict with key type `K` and value type `V`. Only `str`, `int`, and `float` are allowed as key types. |
| `T` | A [TorchScript Class](#torchscript-class) |
| `E` | A [TorchScript Enum](#torchscript-enum) |
| `NamedTuple[T0, T1, ...]` | A [`collections.namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple "(in Python v3.9)") tuple type |
Unlike Python, each variable in a TorchScript function must have a single static type. This makes it easier to optimize TorchScript functions. Example (a type mismatch) ``` import torch @torch.jit.script def an_error(x): if x: r = torch.rand(1) else: r = 4 return r ``` ``` Traceback (most recent call last): ... RuntimeError: ... Type mismatch: r is set to type Tensor in the true branch and type int in the false branch: @torch.jit.script def an_error(x): if x: ~~~~~ r = torch.rand(1) ~~~~~~~~~~~~~~~~~ else: ~~~~~ r = 4 ~~~~~ <--- HERE return r and was used here: else: r = 4 return r ~ <--- HERE... ``` ### Unsupported Typing Constructs TorchScript does not support all features and types of the [`typing`](https://docs.python.org/3/library/typing.html#module-typing "(in Python v3.9)") module.
Some of these are more fundamental things that are unlikely to be added in the future, while others may be added if there is enough user demand to make it a priority. These types and features from the [`typing`](https://docs.python.org/3/library/typing.html#module-typing "(in Python v3.9)") module are unavailable in TorchScript.
| Item | Description |
| --- | --- |
| [`typing.Any`](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.9)") | [`typing.Any`](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.9)") is currently in development but not yet released |
| [`typing.NoReturn`](https://docs.python.org/3/library/typing.html#typing.NoReturn "(in Python v3.9)") | Not implemented |
| [`typing.Union`](https://docs.python.org/3/library/typing.html#typing.Union "(in Python v3.9)") | Unlikely to be implemented (however [`typing.Optional`](https://docs.python.org/3/library/typing.html#typing.Optional "(in Python v3.9)") is supported) |
| [`typing.Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence "(in Python v3.9)") | Not implemented |
| [`typing.Callable`](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.9)") | Not implemented |
| [`typing.Literal`](https://docs.python.org/3/library/typing.html#typing.Literal "(in Python v3.9)") | Not implemented |
| [`typing.ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar "(in Python v3.9)") | Not implemented |
| [`typing.Final`](https://docs.python.org/3/library/typing.html#typing.Final "(in Python v3.9)") | This is supported for [module attributes](#module-attributes) and class attribute annotations, but not for functions |
| [`typing.AnyStr`](https://docs.python.org/3/library/typing.html#typing.AnyStr "(in Python v3.9)") | TorchScript does not support [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.9)"), so this type is not used |
| [`typing.overload`](https://docs.python.org/3/library/typing.html#typing.overload "(in Python v3.9)") | [`typing.overload`](https://docs.python.org/3/library/typing.html#typing.overload "(in Python v3.9)") is currently in development but not yet released |
| Type aliases | Not implemented |
| Nominal vs structural subtyping | Nominal typing is in development, but structural typing is not |
| NewType | Unlikely to be implemented |
| Generics | Unlikely to be implemented |
Any other functionality from the [`typing`](https://docs.python.org/3/library/typing.html#module-typing "(in Python v3.9)") module not explicitly listed in this documentation is unsupported. ### Default Types By default, all parameters to a TorchScript function are assumed to be Tensor. To specify that an argument to a TorchScript function is another type, it is possible to use MyPy-style type annotations using the types listed above. ``` import torch @torch.jit.script def foo(x, tup): # type: (int, Tuple[Tensor, Tensor]) -> Tensor t0, t1 = tup return t0 + t1 + x print(foo(3, (torch.rand(3), torch.rand(3)))) ``` Note It is also possible to annotate types with Python 3 type hints from the `typing` module. ``` import torch from typing import Tuple @torch.jit.script def foo(x: int, tup: Tuple[torch.Tensor, torch.Tensor]) -> torch.Tensor: t0, t1 = tup return t0 + t1 + x print(foo(3, (torch.rand(3), torch.rand(3)))) ``` An empty list is assumed to be `List[Tensor]` and empty dicts `Dict[str, Tensor]`. To instantiate an empty list or dict of other types, use `Python 3 type hints`.
Example (type annotations for Python 3): ``` import torch import torch.nn as nn from typing import Dict, List, Tuple class EmptyDataStructures(torch.nn.Module): def __init__(self): super(EmptyDataStructures, self).__init__() def forward(self, x: torch.Tensor) -> Tuple[List[Tuple[int, float]], Dict[str, int]]: # This annotates the list to be a `List[Tuple[int, float]]` my_list: List[Tuple[int, float]] = [] for i in range(10): my_list.append((i, x.item())) my_dict: Dict[str, int] = {} return my_list, my_dict x = torch.jit.script(EmptyDataStructures()) ``` ### Optional Type Refinement TorchScript will refine the type of a variable of type `Optional[T]` when a comparison to `None` is made inside the conditional of an if-statement or checked in an `assert`. The compiler can reason about multiple `None` checks that are combined with `and`, `or`, and `not`. Refinement will also occur for else blocks of if-statements that are not explicitly written. The `None` check must be within the if-statement’s condition; assigning a `None` check to a variable and using it in the if-statement’s condition will not refine the types of variables in the check. Only local variables will be refined; an attribute like `self.x` will not be, and must be assigned to a local variable to be refined. Example (refining types on parameters and locals): ``` import torch import torch.nn as nn from typing import Optional class M(nn.Module): z: Optional[int] def __init__(self, z): super(M, self).__init__() # If `z` is None, its type cannot be inferred, so it must # be specified (above) self.z = z def forward(self, x, y, z): # type: (Optional[int], Optional[int], Optional[int]) -> int if x is None: x = 1 x = x + 1 # Refinement for an attribute by assigning it to a local z = self.z if y is not None and z is not None: x = y + z # Refinement via an `assert` assert z is not None x += z return x module = torch.jit.script(M(2)) module = torch.jit.script(M(None)) ``` ### TorchScript Classes Warning TorchScript class support is experimental. Currently it is best suited for simple record-like types (think a `NamedTuple` with methods attached). Python classes can be used in TorchScript if they are annotated with [`@torch.jit.script`](generated/torch.jit.script#torch.jit.script "torch.jit.script"), similar to how you would declare a TorchScript function: ``` @torch.jit.script class Foo: def __init__(self, x, y): self.x = x def aug_add_x(self, inc): self.x += inc ``` This subset is restricted: * All functions must be valid TorchScript functions (including `__init__()`). * Classes must be new-style classes, as we use `__new__()` to construct them with pybind11. * TorchScript classes are statically typed. Members can only be declared by assigning to `self` in the `__init__()` method. For example, assigning to `self` outside of the `__init__()` method: ``` @torch.jit.script class Foo: def assign_x(self): self.x = torch.rand(2, 3) ``` Will result in: ``` RuntimeError: Tried to set nonexistent attribute: x. Did you forget to initialize it in __init__()?: def assign_x(self): self.x = torch.rand(2, 3) ~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE ``` * No expressions except method definitions are allowed in the body of the class. * No support for inheritance or any other polymorphism strategy, except for inheriting from `object` to specify a new-style class.
After a class is defined, it can be used in both TorchScript and Python interchangeably like any other TorchScript type: ``` # Declare a TorchScript class @torch.jit.script class Pair: def __init__(self, first, second): self.first = first self.second = second @torch.jit.script def sum_pair(p): # type: (Pair) -> Tensor return p.first + p.second p = Pair(torch.rand(2, 3), torch.rand(2, 3)) print(sum_pair(p)) ``` ### TorchScript Enums Python enums can be used in TorchScript without any extra annotation or code: ``` from enum import Enum class Color(Enum): RED = 1 GREEN = 2 @torch.jit.script def enum_fn(x: Color, y: Color) -> bool: if x == Color.RED: return True return x == y ``` After an enum is defined, it can be used in both TorchScript and Python interchangeably like any other TorchScript type. The type of the values of an enum must be `int`, `float`, or `str`. All values must be of the same type; heterogeneous types for enum values are not supported. ### Named Tuples Types produced by [`collections.namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple "(in Python v3.9)") can be used in TorchScript. ``` import torch import collections Point = collections.namedtuple('Point', ['x', 'y']) @torch.jit.script def total(point): # type: (Point) -> Tensor return point.x + point.y p = Point(x=torch.rand(3), y=torch.rand(3)) print(total(p)) ``` ### Iterables Some functions (for example, [`zip`](https://docs.python.org/3/library/functions.html#zip "(in Python v3.9)") and [`enumerate`](https://docs.python.org/3/library/functions.html#enumerate "(in Python v3.9)")) can only operate on iterable types. Iterable types in TorchScript include `Tensor`s, lists, tuples, dictionaries, strings, [`torch.nn.ModuleList`](generated/torch.nn.modulelist#torch.nn.ModuleList "torch.nn.ModuleList") and [`torch.nn.ModuleDict`](generated/torch.nn.moduledict#torch.nn.ModuleDict "torch.nn.ModuleDict"). (A short example of iterating with `zip` in a scripted function appears at the end of this reference.) Expressions ----------- The following Python expressions are supported. ### Literals ``` True False None 'string literals' "string literals" 3 # interpreted as int 3.4 # interpreted as a float ``` #### List Construction An empty list is assumed to have type `List[Tensor]`. The types of other list literals are derived from the type of the members. See [Default Types](#default-types) for more details. ``` [3, 4] [] [torch.rand(3), torch.rand(4)] ``` #### Tuple Construction ``` (3, 4) (3,) ``` #### Dict Construction An empty dict is assumed to have type `Dict[str, Tensor]`. The types of other dict literals are derived from the type of the members. See [Default Types](#default-types) for more details. ``` {'hello': 3} {} {'a': torch.rand(3), 'b': torch.rand(4)} ``` ### Variables See [Variable Resolution](#variable-resolution) for how variables are resolved. ``` my_variable_name ``` ### Arithmetic Operators ``` a + b a - b a * b a / b a ^ b a @ b ``` ### Comparison Operators ``` a == b a != b a < b a > b a <= b a >= b ``` ### Logical Operators ``` a and b a or b not b ``` ### Subscripts and Slicing ``` t[0] t[-1] t[0:2] t[1:] t[:1] t[:] t[0, 1] t[0, 1:2] t[0, :1] t[-1, 1:, 0] t[1:, -1, 0] t[i:j, i] ``` ### Function Calls Calls to `builtin functions`: ``` torch.rand(3, dtype=torch.int) ``` Calls to other script functions: ``` import torch @torch.jit.script def foo(x): return x + 1 @torch.jit.script def bar(x): return foo(x) ``` ### Method Calls Calls to methods of builtin types like tensor: `x.mm(y)` On modules, methods must be compiled before they can be called.
The TorchScript compiler recursively compiles methods it sees when compiling other methods. By default, compilation starts on the `forward` method. Any methods called by `forward` will be compiled, and any methods called by those methods, and so on. To start compilation at a method other than `forward`, use the [`@torch.jit.export`](jit#torch.jit.export "torch.jit.export") decorator (`forward` is implicitly marked `@torch.jit.export`). Calling a submodule directly (e.g. `self.resnet(input)`) is equivalent to calling its `forward` method (e.g. `self.resnet.forward(input)`). ``` import torch import torch.nn as nn import torchvision class MyModule(nn.Module): def __init__(self): super(MyModule, self).__init__() means = torch.tensor([103.939, 116.779, 123.68]) self.means = torch.nn.Parameter(means.resize_(1, 3, 1, 1)) resnet = torchvision.models.resnet18() self.resnet = torch.jit.trace(resnet, torch.rand(1, 3, 224, 224)) def helper(self, input): return self.resnet(input - self.means) def forward(self, input): return self.helper(input) # Since nothing in the model calls `top_level_method`, the compiler # must be explicitly told to compile this method @torch.jit.export def top_level_method(self, input): return self.other_helper(input) def other_helper(self, input): return input + 10 # `my_script_module` will have the compiled methods `forward`, `helper`, # `top_level_method`, and `other_helper` my_script_module = torch.jit.script(MyModule()) ``` ### Ternary Expressions ``` x if x > y else y ``` ### Casts ``` float(ten) int(3.5) bool(ten) str(2) ``` ### Accessing Module Parameters ``` self.my_parameter self.my_submodule.my_parameter ``` Statements ---------- TorchScript supports the following types of statements: ### Simple Assignments ``` a = b a += b # short-hand for a = a + b, does not operate in-place on a a -= b ``` ### Pattern Matching Assignments ``` a, b = tuple_or_list a, b, *c = a_tuple ``` Multiple Assignments ``` a = b, c = tup ``` ### Print Statements ``` print("the result of an add:", a + b) ``` ### If Statements ``` if a < 4: r = -a elif a < 3: r = a + a else: r = 3 * a ``` In addition to bools, floats, ints, and Tensors can also be used in a conditional and will be implicitly cast to a boolean. ### While Loops ``` a = 0 while a < 4: print(a) a += 1 ``` ### For loops with range ``` x = 0 for i in range(10): x *= i ``` ### For loops over tuples These unroll the loop, generating a body for each member of the tuple. The body must type-check correctly for each member. ``` tup = (3, torch.rand(4)) for x in tup: print(x) ``` ### For loops over constant nn.ModuleList To use a `nn.ModuleList` inside a compiled method, it must be marked constant by adding the name of the attribute to the `__constants__` list for the type. For loops over a `nn.ModuleList` will unroll the body of the loop at compile time, with each member of the constant module list.
``` class SubModule(torch.nn.Module): def __init__(self): super(SubModule, self).__init__() self.weight = nn.Parameter(torch.randn(2)) def forward(self, input): return self.weight + input class MyModule(torch.nn.Module): __constants__ = ['mods'] def __init__(self): super(MyModule, self).__init__() self.mods = torch.nn.ModuleList([SubModule() for i in range(10)]) def forward(self, v): for module in self.mods: v = module(v) return v m = torch.jit.script(MyModule()) ``` ### Break and Continue ``` for i in range(5): if i == 1: continue if i == 3: break print(i) ``` ### Return ``` return a, b ``` Variable Resolution ------------------- TorchScript supports a subset of Python’s variable resolution (i.e. scoping) rules. Local variables behave the same as in Python, except for the restriction that a variable must have the same type along all paths through a function. If a variable has a different type on different branches of an if statement, it is an error to use it after the end of the if statement. Similarly, a variable is not allowed to be used if it is only *defined* along some paths through the function. Example: ``` @torch.jit.script def foo(x): if x < 0: y = 4 print(y) ``` ``` Traceback (most recent call last): ... RuntimeError: ... y is not defined in the false branch... @torch.jit.script... def foo(x): if x < 0: ~~~~~~~~~ y = 4 ~~~~~ <--- HERE print(y) and was used here: if x < 0: y = 4 print(y) ~ <--- HERE... ``` Non-local variables are resolved to Python values at compile time when the function is defined. These values are then converted into TorchScript values using the rules described in [Use of Python Values](#use-of-python-values). Use of Python Values -------------------- To make writing TorchScript more convenient, we allow script code to refer to Python values in the surrounding scope. For instance, any time there is a reference to `torch`, the TorchScript compiler is actually resolving it to the `torch` Python module when the function is declared. These Python values are not a first-class part of TorchScript. Instead they are de-sugared at compile-time into the primitive types that TorchScript supports. This depends on the dynamic type of the Python value referenced when compilation occurs. This section describes the rules that are used when accessing Python values in TorchScript. ### Functions TorchScript can call Python functions. This functionality is very useful when incrementally converting a model to TorchScript. The model can be moved function-by-function to TorchScript, leaving calls to Python functions in place. This way you can incrementally check the correctness of the model as you go. `torch.jit.is_scripting()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/_jit_internal.html#is_scripting) Function that returns True when in compilation and False otherwise. This is especially useful with the @unused decorator to leave code in your model that is not yet TorchScript compatible: ``` import torch @torch.jit.unused def unsupported_linear_op(x): return x def linear(x): if not torch.jit.is_scripting(): return torch.linear(x) else: return unsupported_linear_op(x) ``` ### Attribute Lookup On Python Modules TorchScript can look up attributes on modules. `Builtin functions` like `torch.add` are accessed this way. This allows TorchScript to call functions defined in other modules. ### Python-defined Constants TorchScript also provides a way to use constants that are defined in Python.
These can be used to hard-code hyper-parameters into the function, or to define universal constants. There are two ways of specifying that a Python value should be treated as a constant. 1. Values looked up as attributes of a module are assumed to be constant: ``` import math import torch @torch.jit.script def fn(): return math.pi ``` 2. Attributes of a ScriptModule can be marked constant by annotating them with `Final[T]`: ``` import torch import torch.nn as nn class Foo(nn.Module): # `Final` from the `typing_extensions` module can also be used a : torch.jit.Final[int] def __init__(self): super(Foo, self).__init__() self.a = 1 + 4 def forward(self, input): return self.a + input f = torch.jit.script(Foo()) ``` Supported constant Python types are: * `int` * `float` * `bool` * `torch.device` * `torch.layout` * `torch.dtype` * tuples containing supported types * `torch.nn.ModuleList` which can be used in a TorchScript for loop ### Module Attributes The `torch.nn.Parameter` wrapper and `register_buffer` can be used to assign tensors to a module. Other values assigned to a module that is compiled will be added to the compiled module if their types can be inferred. All [types](#types) available in TorchScript can be used as module attributes. Tensor attributes are semantically the same as buffers. The type of empty lists and dictionaries and `None` values cannot be inferred and must be specified via [PEP 526-style](https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations) class annotations. If a type cannot be inferred and is not explicitly annotated, it will not be added as an attribute to the resulting `ScriptModule`. Example: ``` from typing import List, Dict class Foo(nn.Module): # `words` is initialized as an empty list, so its type must be specified words: List[str] # The type could potentially be inferred if `a_dict` (below) was not # empty, but this annotation ensures `some_dict` will be made into the # proper type some_dict: Dict[str, int] def __init__(self, a_dict): super(Foo, self).__init__() self.words = [] self.some_dict = a_dict # `int`s can be inferred self.my_int = 10 def forward(self, input): # type: (str) -> int self.words.append(input) return self.some_dict[input] + self.my_int f = torch.jit.script(Foo({'hi': 2})) ```
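To round out the Iterables discussion above, here is a minimal sketch (the function and its argument names are invented for illustration) of iterating with `zip` inside a scripted function; it assumes the standard behavior that `zip` accepts TorchScript lists:
```
import torch
from typing import List

@torch.jit.script
def weighted_sum(xs: List[torch.Tensor], ws: List[float]) -> torch.Tensor:
    # `zip` operates on TorchScript iterable types such as lists.
    total = torch.zeros(1)
    for x, w in zip(xs, ws):
        total = total + w * x
    return total

print(weighted_sum([torch.ones(1), torch.ones(1)], [0.25, 0.75]))  # tensor([1.])
```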
pytorch torch.nn.quantized torch.nn.quantized ================== This module implements the quantized versions of the nn modules and functionals. Functional interface -------------------- Functional interface (quantized). `torch.nn.quantized.functional.linear(input, weight, bias=None, scale=None, zero_point=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#linear) Applies a linear transformation to the incoming quantized data: y = xA^T + b. See [`Linear`](#torch.nn.quantized.Linear "torch.nn.quantized.Linear"). Note The current implementation packs weights on every call, which has a performance penalty. If you want to avoid the overhead, use [`Linear`](#torch.nn.quantized.Linear "torch.nn.quantized.Linear"). Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Quantized input of type `torch.quint8` * **weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Quantized weight of type `torch.qint8` * **bias** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – None or fp32 bias of type `torch.float` * **scale** (*double*) – output scale. If None, derived from the input scale * **zero\_point** (*long*) – output zero point. If None, derived from the input zero\_point Shape: * Input: (N, \*, in\_features), where `*` means any number of additional dimensions * Weight: (out\_features, in\_features) * Bias: (out\_features) * Output: (N, \*, out\_features) `torch.nn.quantized.functional.conv1d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#conv1d) Applies a 1D convolution over a quantized 1D input composed of several input planes. See [`Conv1d`](#torch.nn.quantized.Conv1d "torch.nn.quantized.Conv1d") for details and output shape. Parameters * **input** – quantized input tensor of shape (minibatch, in\_channels, iW) * **weight** – quantized filters of shape (out\_channels, in\_channels / groups, iW) * **bias** – **non-quantized** bias tensor of shape (out\_channels). The tensor type must be `torch.float`. * **stride** – the stride of the convolving kernel. Can be a single number or a tuple `(sW,)`. Default: 1 * **padding** – implicit paddings on both sides of the input. Can be a single number or a tuple `(padW,)`. Default: 0 * **dilation** – the spacing between kernel elements. Can be a single number or a tuple `(dW,)`. Default: 1 * **groups** – split input into groups; in\_channels should be divisible by the number of groups. Default: 1 * **padding\_mode** – the padding mode to use. Only “zeros” is supported for quantized convolution at the moment. Default: “zeros” * **scale** – quantization scale for the output. Default: 1.0 * **zero\_point** – quantization zero\_point for the output. Default: 0 * **dtype** – quantization data type to use.
Default: `torch.quint8` Examples: ``` >>> from torch.nn.quantized import functional as qF >>> filters = torch.randn(33, 16, 3, dtype=torch.float) >>> inputs = torch.randn(20, 16, 50, dtype=torch.float) >>> bias = torch.randn(33, dtype=torch.float) >>> >>> scale, zero_point = 1.0, 0 >>> dtype_inputs = torch.quint8 >>> dtype_filters = torch.qint8 >>> >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters) >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs) >>> qF.conv1d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point) ``` `torch.nn.quantized.functional.conv2d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#conv2d) Applies a 2D convolution over a quantized 2D input composed of several input planes. See [`Conv2d`](#torch.nn.quantized.Conv2d "torch.nn.quantized.Conv2d") for details and output shape. Parameters * **input** – quantized input tensor of shape (minibatch, in\_channels, iH, iW) * **weight** – quantized filters of shape (out\_channels, in\_channels / groups, kH, kW) * **bias** – **non-quantized** bias tensor of shape (out\_channels). The tensor type must be `torch.float`. * **stride** – the stride of the convolving kernel. Can be a single number or a tuple `(sH, sW)`. Default: 1 * **padding** – implicit paddings on both sides of the input. Can be a single number or a tuple `(padH, padW)`. Default: 0 * **dilation** – the spacing between kernel elements. Can be a single number or a tuple `(dH, dW)`. Default: 1 * **groups** – split input into groups; in\_channels should be divisible by the number of groups. Default: 1 * **padding\_mode** – the padding mode to use. Only “zeros” is supported for quantized convolution at the moment. Default: “zeros” * **scale** – quantization scale for the output. Default: 1.0 * **zero\_point** – quantization zero\_point for the output. Default: 0 * **dtype** – quantization data type to use. Default: `torch.quint8` Examples: ``` >>> from torch.nn.quantized import functional as qF >>> filters = torch.randn(8, 4, 3, 3, dtype=torch.float) >>> inputs = torch.randn(1, 4, 5, 5, dtype=torch.float) >>> bias = torch.randn(8, dtype=torch.float) >>> >>> scale, zero_point = 1.0, 0 >>> dtype_inputs = torch.quint8 >>> dtype_filters = torch.qint8 >>> >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters) >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs) >>> qF.conv2d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point) ``` `torch.nn.quantized.functional.conv3d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#conv3d) Applies a 3D convolution over a quantized 3D input composed of several input planes. See [`Conv3d`](#torch.nn.quantized.Conv3d "torch.nn.quantized.Conv3d") for details and output shape.
Parameters * **input** – quantized input tensor of shape (minibatch, in\_channels, iD, iH, iW) * **weight** – quantized filters of shape (out\_channels, in\_channels / groups, kD, kH, kW) * **bias** – **non-quantized** bias tensor of shape (out\_channels). The tensor type must be `torch.float`. * **stride** – the stride of the convolving kernel. Can be a single number or a tuple `(sD, sH, sW)`. Default: 1 * **padding** – implicit paddings on both sides of the input. Can be a single number or a tuple `(padD, padH, padW)`. Default: 0 * **dilation** – the spacing between kernel elements. Can be a single number or a tuple `(dD, dH, dW)`. Default: 1 * **groups** – split input into groups; in\_channels should be divisible by the number of groups. Default: 1 * **padding\_mode** – the padding mode to use. Only “zeros” is supported for quantized convolution at the moment. Default: “zeros” * **scale** – quantization scale for the output. Default: 1.0 * **zero\_point** – quantization zero\_point for the output. Default: 0 * **dtype** – quantization data type to use. Default: `torch.quint8` Examples: ``` >>> from torch.nn.quantized import functional as qF >>> filters = torch.randn(8, 4, 3, 3, 3, dtype=torch.float) >>> inputs = torch.randn(1, 4, 5, 5, 5, dtype=torch.float) >>> bias = torch.randn(8, dtype=torch.float) >>> >>> scale, zero_point = 1.0, 0 >>> dtype_inputs = torch.quint8 >>> dtype_filters = torch.qint8 >>> >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters) >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs) >>> qF.conv3d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point) ``` `torch.nn.quantized.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#max_pool2d) Applies a 2D max pooling over a quantized input signal composed of several quantized input planes. Note The input quantization parameters are propagated to the output. See `MaxPool2d` for details. `torch.nn.quantized.functional.adaptive_avg_pool2d(input, output_size)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#adaptive_avg_pool2d) Applies a 2D adaptive average pooling over a quantized input signal composed of several quantized input planes. Note The input quantization parameters propagate to the output. See `AdaptiveAvgPool2d` for details and output shape. Parameters **output\_size** – the target output size (single integer or double-integer tuple) `torch.nn.quantized.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#avg_pool2d) Applies a 2D average-pooling operation in kH × kW regions by step size sH × sW steps. The number of output features is equal to the number of input planes. Note The input quantization parameters propagate to the output. See `AvgPool2d` for details and output shape. Parameters * **input** – quantized input tensor (minibatch, in\_channels, iH, iW) * **kernel\_size** – size of the pooling region.
Can be a single number or a tuple `(kH, kW)` * **stride** – stride of the pooling operation. Can be a single number or a tuple `(sH, sW)`. Default: `kernel_size` * **padding** – implicit zero paddings on both sides of the input. Can be a single number or a tuple `(padH, padW)`. Default: 0 * **ceil\_mode** – when True, will use `ceil` instead of `floor` in the formula to compute the output shape. Default: `False` * **count\_include\_pad** – when True, will include the zero-padding in the averaging calculation. Default: `True` * **divisor\_override** – if specified, it will be used as the divisor; otherwise the size of the pooling region will be used. Default: None `torch.nn.quantized.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#interpolate) Down/up samples the input to either the given `size` or the given `scale_factor`. See [`torch.nn.functional.interpolate()`](nn.functional#torch.nn.functional.interpolate "torch.nn.functional.interpolate") for implementation details. The input dimensions are interpreted in the form: `mini-batch x channels x [optional depth] x [optional height] x width`. Note The input quantization parameters propagate to the output. Note Only 2D/3D input is supported for quantized inputs. Note Only the following modes are supported for the quantized inputs: * `bilinear` * `nearest` Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size. * **scale\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple. * **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – algorithm used for upsampling: `'nearest'` | `'bilinear'` * **align\_corners** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to `True`, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to `False`, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation *independent* of input size when `scale_factor` is kept the same. This only has an effect when `mode` is `'bilinear'`.
Default: `False` `torch.nn.quantized.functional.hardswish(input, scale, zero_point)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#hardswish) This is the quantized version of [`hardswish()`](nn.functional#torch.nn.functional.hardswish "torch.nn.functional.hardswish"). Parameters * **input** – quantized input * **scale** – quantization scale of the output tensor * **zero\_point** – quantization zero point of the output tensor `torch.nn.quantized.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#upsample) Upsamples the input to either the given `size` or the given `scale_factor`. Warning This function is deprecated in favor of [`torch.nn.quantized.functional.interpolate()`](#torch.nn.quantized.functional.interpolate "torch.nn.quantized.functional.interpolate"). This is equivalent to `nn.quantized.functional.interpolate(...)`. See [`torch.nn.functional.interpolate()`](nn.functional#torch.nn.functional.interpolate "torch.nn.functional.interpolate") for implementation details. The input dimensions are interpreted in the form: `mini-batch x channels x [optional depth] x [optional height] x width`. Note The input quantization parameters propagate to the output. Note Only 2D input is supported for quantized inputs. Note Only the following modes are supported for the quantized inputs: * `bilinear` * `nearest` Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – quantized input tensor * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – output spatial size. * **scale\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* *Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]*) – multiplier for spatial size. Has to be an integer. * **mode** (*str*) – algorithm used for upsampling: `'nearest'` | `'bilinear'` * **align\_corners** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to `True`, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to `False`, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation *independent* of input size when `scale_factor` is kept the same. This only has an effect when `mode` is `'bilinear'`. Default: `False` Warning With `align_corners = True`, the linearly interpolating modes (`bilinear`) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1.
Since then, the default behavior is `align_corners = False`. See [`Upsample`](generated/torch.nn.upsample#torch.nn.Upsample "torch.nn.Upsample") for concrete examples on how this affects the outputs. `torch.nn.quantized.functional.upsample_bilinear(input, size=None, scale_factor=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#upsample_bilinear) Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of [`torch.nn.quantized.functional.interpolate()`](#torch.nn.quantized.functional.interpolate "torch.nn.quantized.functional.interpolate"). This is equivalent to `nn.quantized.functional.interpolate(..., mode='bilinear', align_corners=True)`. Note The input quantization parameters propagate to the output. Note Only 2D inputs are supported. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – quantized input * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – output spatial size. * **scale\_factor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – multiplier for spatial size `torch.nn.quantized.functional.upsample_nearest(input, size=None, scale_factor=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/functional.html#upsample_nearest) Upsamples the input, using nearest neighbours’ pixel values. Warning This function is deprecated in favor of [`torch.nn.quantized.functional.interpolate()`](#torch.nn.quantized.functional.interpolate "torch.nn.quantized.functional.interpolate"). This is equivalent to `nn.quantized.functional.interpolate(..., mode='nearest')`. Note The input quantization parameters propagate to the output. Note Only 2D inputs are supported. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – quantized input * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – output spatial size. * **scale\_factor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – multiplier for spatial size. Has to be an integer. ReLU6 ----- `class torch.nn.quantized.ReLU6(inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/activation.html#ReLU6) Applies the element-wise function ReLU6(x) = min(max(x\_0, x), q(6)), where x\_0 is the zero\_point and q(6) is the quantized representation of the number 6. Parameters **inplace** – can optionally do the operation in-place.
Default: `False` Shape: * Input: (N, \*) where `*` means any number of additional dimensions * Output: (N, \*), same shape as the input Examples: ``` >>> m = nn.quantized.ReLU6() >>> input = torch.randn(2) >>> input = torch.quantize_per_tensor(input, 1.0, 0, dtype=torch.qint32) >>> output = m(input) ``` ELU --- `class torch.nn.quantized.ELU(scale, zero_point, alpha=1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/activation.html#ELU) This is the quantized equivalent of [`ELU`](generated/torch.nn.elu#torch.nn.ELU "torch.nn.ELU"). Parameters * **scale** – quantization scale of the output tensor * **zero\_point** – quantization zero point of the output tensor * **alpha** – the alpha constant Hardswish --------- `class torch.nn.quantized.Hardswish(scale, zero_point)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/activation.html#Hardswish) This is the quantized version of [`Hardswish`](generated/torch.nn.hardswish#torch.nn.Hardswish "torch.nn.Hardswish"). Parameters * **scale** – quantization scale of the output tensor * **zero\_point** – quantization zero point of the output tensor Conv1d ------ `class torch.nn.quantized.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/conv.html#Conv1d) Applies a 1D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see [`Conv1d`](generated/torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d"). Note Only `zeros` is supported for the `padding_mode` argument. Note Only `torch.quint8` is supported for the input data type. Variables * **~Conv1d.weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – packed tensor derived from the learnable weight parameter. * **~Conv1d.scale** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – scalar for the output scale * **~Conv1d.zero\_point** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – scalar for the output zero point See [`Conv1d`](generated/torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d") for other attributes. Examples: ``` >>> m = nn.quantized.Conv1d(16, 33, 3, stride=2) >>> input = torch.randn(20, 16, 100) >>> # quantize input to quint8 >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8) >>> output = m(q_input) ``` `classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/conv.html#Conv1d.from_float) Creates a quantized module from a float module or qparams\_dict. Parameters **mod** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – a float module, either produced by torch.quantization utilities or provided by the user Conv2d ------ `class torch.nn.quantized.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/conv.html#Conv2d) Applies a 2D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see [`Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d"). Note Only `zeros` is supported for the `padding_mode` argument. Note Only `torch.quint8` is supported for the input data type.
Variables * **~Conv2d.weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – packed tensor derived from the learnable weight parameter. * **~Conv2d.scale** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – scalar for the output scale * **~Conv2d.zero\_point** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – scalar for the output zero point See [`Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") for other attributes. Examples: ``` >>> # With square kernels and equal stride >>> m = nn.quantized.Conv2d(16, 33, 3, stride=2) >>> # non-square kernels and unequal stride and with padding >>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2)) >>> # non-square kernels and unequal stride and with padding and dilation >>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1)) >>> input = torch.randn(20, 16, 50, 100) >>> # quantize input to quint8 >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8) >>> output = m(q_input) ``` `classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/conv.html#Conv2d.from_float) Creates a quantized module from a float module or qparams\_dict. Parameters **mod** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – a float module, either produced by torch.quantization utilities or provided by the user Conv3d ------ `class torch.nn.quantized.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/conv.html#Conv3d) Applies a 3D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see [`Conv3d`](generated/torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d"). Note Only `zeros` is supported for the `padding_mode` argument. Note Only `torch.quint8` is supported for the input data type. Variables * **~Conv3d.weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – packed tensor derived from the learnable weight parameter. * **~Conv3d.scale** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – scalar for the output scale * **~Conv3d.zero\_point** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – scalar for the output zero point See [`Conv3d`](generated/torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d") for other attributes. Examples: ``` >>> # With square kernels and equal stride >>> m = nn.quantized.Conv3d(16, 33, 3, stride=2) >>> # non-square kernels and unequal stride and with padding >>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)) >>> # non-square kernels and unequal stride and with padding and dilation >>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2), dilation=(1, 2, 2)) >>> input = torch.randn(20, 16, 56, 56, 56) >>> # quantize input to quint8 >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8) >>> output = m(q_input) ``` `classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/conv.html#Conv3d.from_float) Creates a quantized module from a float module or qparams\_dict. 
Parameters **mod** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – a float module, either produced by torch.quantization utilities or provided by the user FloatFunctional --------------- `class torch.nn.quantized.FloatFunctional` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/functional_modules.html#FloatFunctional) State collector class for float operations. The instance of this class can be used instead of the `torch.` prefix for some operations. See example usage below. Note This class does not provide a `forward` hook. Instead, you must use one of the underlying functions (e.g. `add`). Examples: ``` >>> f_add = FloatFunctional() >>> a = torch.tensor(3.0) >>> b = torch.tensor(4.0) >>> f_add.add(a, b)  # Equivalent to ``torch.add(a, b)`` ``` Valid operation names: * add * cat * mul * add\_relu * add\_scalar * mul\_scalar QFunctional ----------- `class torch.nn.quantized.QFunctional` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/functional_modules.html#QFunctional) Wrapper class for quantized operations. The instance of this class can be used instead of the `torch.ops.quantized` prefix. See example usage below. Note This class does not provide a `forward` hook. Instead, you must use one of the underlying functions (e.g. `add`). Examples: ``` >>> q_add = QFunctional() >>> a = torch.quantize_per_tensor(torch.tensor(3.0), 1.0, 0, torch.qint32) >>> b = torch.quantize_per_tensor(torch.tensor(4.0), 1.0, 0, torch.qint32) >>> q_add.add(a, b)  # Equivalent to ``torch.ops.quantized.add(a, b, 1.0, 0)`` ``` Valid operation names: * add * cat * mul * add\_relu * add\_scalar * mul\_scalar Quantize -------- `class torch.nn.quantized.Quantize(scale, zero_point, dtype)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules.html#Quantize) Quantizes an incoming tensor. Parameters * **scale** – scale of the output Quantized Tensor * **zero\_point** – zero\_point of the output Quantized Tensor * **dtype** – data type of the output Quantized Tensor Variables **scale, zero\_point, dtype** – as described in the Parameters above. Examples: ``` >>> t = torch.tensor([[1., -1.], [1., -1.]]) >>> scale, zero_point, dtype = 1.0, 2, torch.qint8 >>> qm = Quantize(scale, zero_point, dtype) >>> qt = qm(t) >>> print(qt) tensor([[ 1., -1.], [ 1., -1.]], size=(2, 2), dtype=torch.qint8, scale=1.0, zero_point=2) ``` DeQuantize ---------- `class torch.nn.quantized.DeQuantize` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules.html#DeQuantize) Dequantizes an incoming tensor. Examples: ``` >>> input = torch.tensor([[1., -1.], [1., -1.]]) >>> scale, zero_point, dtype = 1.0, 2, torch.qint8 >>> qm = Quantize(scale, zero_point, dtype) >>> quantized_input = qm(input) >>> dqm = DeQuantize() >>> dequantized = dqm(quantized_input) >>> print(dequantized) tensor([[ 1., -1.], [ 1., -1.]], dtype=torch.float32) ``` Linear ------ `class torch.nn.quantized.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/linear.html#Linear) A quantized linear module with quantized tensors as inputs and outputs. We adopt the same interface as `torch.nn.Linear`; please see <https://pytorch.org/docs/stable/nn.html#torch.nn.Linear> for documentation.
Similar to [`Linear`](generated/torch.nn.linear#torch.nn.Linear "torch.nn.Linear"), attributes will be randomly initialized at module creation time and will be overwritten later. Variables * **~Linear.weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the non-learnable quantized weights of the module of shape (out\_features, in\_features). * **~Linear.bias** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the non-learnable bias of the module of shape (out\_features). If `bias` is `True`, the values are initialized to zero. * **~Linear.scale** – `scale` parameter of output Quantized Tensor, type: double * **~Linear.zero\_point** – `zero_point` parameter for output Quantized Tensor, type: long Examples: ``` >>> m = nn.quantized.Linear(20, 30) >>> input = torch.randn(128, 20) >>> input = torch.quantize_per_tensor(input, 1.0, 0, torch.quint8) >>> output = m(input) >>> print(output.size()) torch.Size([128, 30]) ``` `classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/linear.html#Linear.from_float) Create a quantized module from a float module or qparams\_dict. Parameters **mod** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – a float module, either produced by torch.quantization utilities or provided by the user BatchNorm2d ----------- `class torch.nn.quantized.BatchNorm2d(num_features, eps=1e-05, momentum=0.1)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/batchnorm.html#BatchNorm2d) This is the quantized version of [`BatchNorm2d`](generated/torch.nn.batchnorm2d#torch.nn.BatchNorm2d "torch.nn.BatchNorm2d"). BatchNorm3d ----------- `class torch.nn.quantized.BatchNorm3d(num_features, eps=1e-05, momentum=0.1)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/batchnorm.html#BatchNorm3d) This is the quantized version of [`BatchNorm3d`](generated/torch.nn.batchnorm3d#torch.nn.BatchNorm3d "torch.nn.BatchNorm3d"). LayerNorm --------- `class torch.nn.quantized.LayerNorm(normalized_shape, weight, bias, scale, zero_point, eps=1e-05, elementwise_affine=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/normalization.html#LayerNorm) This is the quantized version of [`LayerNorm`](generated/torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm"). Additional args: * **scale** - quantization scale of the output, type: double. * **zero\_point** - quantization zero point of the output, type: long. GroupNorm --------- `class torch.nn.quantized.GroupNorm(num_groups, num_channels, weight, bias, scale, zero_point, eps=1e-05, affine=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/normalization.html#GroupNorm) This is the quantized version of [`GroupNorm`](generated/torch.nn.groupnorm#torch.nn.GroupNorm "torch.nn.GroupNorm"). Additional args: * **scale** - quantization scale of the output, type: double. * **zero\_point** - quantization zero point of the output, type: long. InstanceNorm1d -------------- `class torch.nn.quantized.InstanceNorm1d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/normalization.html#InstanceNorm1d) This is the quantized version of [`InstanceNorm1d`](generated/torch.nn.instancenorm1d#torch.nn.InstanceNorm1d "torch.nn.InstanceNorm1d").
Additional args: * **scale** - quantization scale of the output, type: double. * **zero\_point** - quantization zero point of the output, type: long. InstanceNorm2d -------------- `class torch.nn.quantized.InstanceNorm2d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/normalization.html#InstanceNorm2d) This is the quantized version of [`InstanceNorm2d`](generated/torch.nn.instancenorm2d#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d"). Additional args: * **scale** - quantization scale of the output, type: double. * **zero\_point** - quantization zero point of the output, type: long. InstanceNorm3d -------------- `class torch.nn.quantized.InstanceNorm3d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/normalization.html#InstanceNorm3d) This is the quantized version of [`InstanceNorm3d`](generated/torch.nn.instancenorm3d#torch.nn.InstanceNorm3d "torch.nn.InstanceNorm3d"). Additional args: * **scale** - quantization scale of the output, type: double. * **zero\_point** - quantization zero point of the output, type: long. Embedding --------- `class torch.nn.quantized.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, dtype=torch.quint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/embedding_ops.html#Embedding) A quantized Embedding module with quantized packed weights as inputs. We adopt the same interface as `torch.nn.Embedding`; please see <https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding> for documentation. Similar to [`Embedding`](generated/torch.nn.embedding#torch.nn.Embedding "torch.nn.Embedding"), attributes will be randomly initialized at module creation time and will be overwritten later. Variables **~Embedding.weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the non-learnable quantized weights of the module of shape (num\_embeddings, embedding\_dim). Examples: ``` >>> m = nn.quantized.Embedding(num_embeddings=10, embedding_dim=12) >>> indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8]) >>> output = m(indices) >>> print(output.size()) torch.Size([9, 12]) ``` `classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/embedding_ops.html#Embedding.from_float) Create a quantized embedding module from a float module. Parameters **mod** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – a float module, either produced by torch.quantization utilities or provided by the user EmbeddingBag ------------ `class torch.nn.quantized.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='sum', sparse=False, _weight=None, include_last_offset=False, dtype=torch.quint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/embedding_ops.html#EmbeddingBag) A quantized EmbeddingBag module with quantized packed weights as inputs. We adopt the same interface as `torch.nn.EmbeddingBag`; please see <https://pytorch.org/docs/stable/nn.html#torch.nn.EmbeddingBag> for documentation.
Similar to [`EmbeddingBag`](generated/torch.nn.embeddingbag#torch.nn.EmbeddingBag "torch.nn.EmbeddingBag"), attributes will be randomly initialized at module creation time and will be overwritten later. Variables **~EmbeddingBag.weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the non-learnable quantized weights of the module of shape (num\_embeddings, embedding\_dim). Examples: ``` >>> m = nn.quantized.EmbeddingBag(num_embeddings=10, embedding_dim=12, include_last_offset=True, mode='sum') >>> indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8, 6, 6, 9, 1, 6, 8, 8, 3, 2, 3, 6, 3, 6, 5, 7, 0, 8, 4, 6, 5, 8, 2, 3]) >>> offsets = torch.tensor([0, 19, 20, 28, 28, 32]) >>> output = m(indices, offsets) >>> print(output.size()) torch.Size([5, 12]) ``` `classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/quantized/modules/embedding_ops.html#EmbeddingBag.from_float) Create a quantized embedding\_bag module from a float module. Parameters **mod** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – a float module, either produced by torch.quantization utilities or provided by the user
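In practice, the `from_float` entry points above are usually driven by the eager-mode quantization workflow rather than called by hand. The following is a minimal sketch of that workflow using the standard `torch.quantization` utilities (`QuantStub`, `DeQuantStub`, `get_default_qconfig`, `prepare`, `convert`); the toy network and its layer sizes are made up for illustration:

```
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # float -> quantized boundary
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(8 * 30 * 30, 10)
        self.dequant = torch.quantization.DeQuantStub()  # quantized -> float boundary

    def forward(self, x):
        x = self.relu(self.conv(self.quant(x)))
        x = self.fc(x.flatten(1))
        return self.dequant(x)

m = ToyModel().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(m, inplace=True)   # insert observers
m(torch.randn(1, 3, 32, 32))                  # calibrate on representative data
torch.quantization.convert(m, inplace=True)   # swap in nn.quantized.Conv2d, nn.quantized.Linear, ...
```

After `convert`, `m.conv` and `m.fc` are instances of the quantized modules documented above, created internally via their `from_float` classmethods.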
pytorch Automatic differentiation package - torch.autograd Automatic differentiation package - torch.autograd ================================================== `torch.autograd` provides classes and functions implementing automatic differentiation of arbitrary scalar-valued functions. It requires minimal changes to the existing code - you only need to declare `Tensor`s for which gradients should be computed with the `requires_grad=True` keyword. As of now, we only support autograd for floating point `Tensor` types (half, float, double and bfloat16) and complex `Tensor` types (cfloat, cdouble). `torch.autograd.backward(tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None, inputs=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd.html#backward) Computes the sum of gradients of given tensors w.r.t. graph leaves. The graph is differentiated using the chain rule. If any of `tensors` are non-scalar (i.e. their data has more than one element) and require gradient, then the Jacobian-vector product would be computed; in this case the function additionally requires specifying `grad_tensors`. It should be a sequence of matching length that contains the “vector” in the Jacobian-vector product, usually the gradient of the differentiated function w.r.t. corresponding tensors (`None` is an acceptable value for all tensors that don’t need gradient tensors). This function accumulates gradients in the leaves - you might need to zero `.grad` attributes or set them to `None` before calling it. See [Default gradient layouts](#default-grad-layouts) for details on the memory layout of accumulated gradients. Note Using this method with `create_graph=True` will create a reference cycle between the parameter and its gradient which can cause a memory leak. We recommend using `autograd.grad` when creating the graph to avoid this. If you have to use this function, make sure to reset the `.grad` fields of your parameters to `None` after use to break the cycle and avoid the leak. Note If you run any forward ops, create `grad_tensors`, and/or call `backward` in a user-specified CUDA stream context, see [Stream semantics of backward passes](https://pytorch.org/docs/1.8.0/notes/cuda.html#bwd-cuda-stream-semantics). Parameters * **tensors** (*sequence of Tensor*) – Tensors of which the derivative will be computed. * **grad\_tensors** (*sequence of* *(*[Tensor](tensors#torch.Tensor "torch.Tensor") *or* [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.9)")*)*) – The “vector” in the Jacobian-vector product, usually gradients w.r.t. each element of corresponding tensors. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable for all grad\_tensors, then this argument is optional. * **retain\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `False`, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to `True` is not needed and often can be worked around in a much more efficient way. Defaults to the value of `create_graph`. * **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, graph of the derivative will be constructed, allowing the computation of higher order derivative products. Defaults to `False`. * **inputs** (*sequence of Tensor*) – Inputs w.r.t. which the gradient will be accumulated into `.grad`.
All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute the `tensors`. All the provided inputs must be leaf Tensors. `torch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd.html#grad) Computes and returns the sum of gradients of outputs w.r.t. the inputs. `grad_outputs` should be a sequence of length matching `outputs`, containing the “vector” in the Jacobian-vector product, usually the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require grad, then the gradient can be `None`. If `only_inputs` is `True`, the function will only return a list of gradients w.r.t the specified inputs. If it’s `False`, then gradients w.r.t. all remaining leaves will still be computed, and will be accumulated into their `.grad` attribute. Note If you run any forward ops, create `grad_outputs`, and/or call `grad` in a user-specified CUDA stream context, see [Stream semantics of backward passes](https://pytorch.org/docs/1.8.0/notes/cuda.html#bwd-cuda-stream-semantics). Parameters * **outputs** (*sequence of Tensor*) – outputs of the differentiated function. * **inputs** (*sequence of Tensor*) – Inputs w.r.t. which the gradient will be returned (and not accumulated into `.grad`). * **grad\_outputs** (*sequence of Tensor*) – The “vector” in the Jacobian-vector product. Usually gradients w.r.t. each output. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable for all grad\_outputs, then this argument is optional. Default: None. * **retain\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `False`, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to `True` is not needed and often can be worked around in a much more efficient way. Defaults to the value of `create_graph`. * **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, graph of the derivative will be constructed, allowing the computation of higher order derivative products. Default: `False`. * **allow\_unused** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `False`, specifying inputs that were not used when computing outputs (and therefore their grad is always zero) is an error. Defaults to `False`. Functional higher level API --------------------------- Warning This API is in beta. Even though the function signatures are very unlikely to change, major improvements to performance are planned before we consider this stable. This section contains the higher level API for the autograd that builds on the basic API above and allows you to compute jacobians, hessians, etc. This API works with user-provided functions that take only Tensors as input and return only Tensors. If your function takes other arguments that are not Tensors or Tensors that don’t have requires\_grad set, you can use a lambda to capture them. For example, for a function `f` that takes three inputs, a Tensor for which we want the jacobian, another tensor that should be considered constant, and a boolean flag as `f(input, constant, flag=flag)` you can use it as `functional.jacobian(lambda x: f(x, constant, flag=flag), input)`.
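For instance, here is a minimal sketch of that lambda-capture pattern (the function `f` below and its arguments are made up for illustration):

```
import torch
from torch.autograd.functional import jacobian

def f(x, constant, flag=False):
    # Hypothetical function with a constant tensor and a boolean flag.
    return x * constant if flag else x + constant

inp = torch.rand(3)
constant = torch.tensor(2.0)

# Differentiate w.r.t. `inp` only; `constant` and `flag` are captured by the lambda.
J = jacobian(lambda x: f(x, constant, flag=True), inp)
print(J)  # a 3x3 diagonal matrix with 2.0 on the diagonal
```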
`torch.autograd.functional.jacobian(func, inputs, create_graph=False, strict=False, vectorize=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/functional.html#jacobian) Function that computes the Jacobian of a given function. Parameters * **func** (*function*) – a Python function that takes Tensor inputs and returns a tuple of Tensors or a Tensor. * **inputs** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the function `func`. * **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, the Jacobian will be computed in a differentiable manner. Note that when `strict` is `False`, the result can not require gradients or be disconnected from the inputs. Defaults to `False`. * **strict** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, an error will be raised when we detect that there exists an input such that all the outputs are independent of it. If `False`, we return a Tensor of zeros as the jacobian for said inputs, which is the expected mathematical value. Defaults to `False`. * **vectorize** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – This feature is experimental; please use at your own risk. When computing the jacobian, usually we invoke `autograd.grad` once per row of the jacobian. If this flag is `True`, we use the vmap prototype feature as the backend to vectorize calls to `autograd.grad` so we only invoke it once instead of once per row. This should lead to performance improvements in many use cases; however, because this feature is incomplete, there may be performance cliffs. Please use `torch._C._debug_only_display_vmap_fallback_warnings(True)` to show any performance warnings and file an issue with us if warnings exist for your use case. Defaults to `False`. Returns if there is a single input and output, this will be a single Tensor containing the Jacobian for the linearized inputs and output. If one of the two is a tuple, then the Jacobian will be a tuple of Tensors. If both of them are tuples, then the Jacobian will be a tuple of tuples of Tensors where `Jacobian[i][j]` will contain the Jacobian of the `i`th output and `j`th input and will have as size the concatenation of the sizes of the corresponding output and the corresponding input and will have the same dtype and device as the corresponding input. Return type Jacobian ([Tensor](tensors#torch.Tensor "torch.Tensor") or nested tuple of Tensors) #### Example ``` >>> def exp_reducer(x): ... return x.exp().sum(dim=1) >>> inputs = torch.rand(2, 2) >>> jacobian(exp_reducer, inputs) tensor([[[1.4917, 2.4352], [0.0000, 0.0000]], [[0.0000, 0.0000], [2.4369, 2.3799]]]) ``` ``` >>> jacobian(exp_reducer, inputs, create_graph=True) tensor([[[1.4917, 2.4352], [0.0000, 0.0000]], [[0.0000, 0.0000], [2.4369, 2.3799]]], grad_fn=<ViewBackward>) ``` ``` >>> def exp_adder(x, y): ... return 2 * x.exp() + 3 * y >>> inputs = (torch.rand(2), torch.rand(2)) >>> jacobian(exp_adder, inputs) (tensor([[2.8052, 0.0000], [0.0000, 3.3963]]), tensor([[3., 0.], [0., 3.]])) ``` `torch.autograd.functional.hessian(func, inputs, create_graph=False, strict=False, vectorize=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/functional.html#hessian) Function that computes the Hessian of a given scalar function.
Parameters * **func** (*function*) – a Python function that takes Tensor inputs and returns a Tensor with a single element. * **inputs** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the function `func`. * **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, the Hessian will be computed in a differentiable manner. Note that when `strict` is `False`, the result can not require gradients or be disconnected from the inputs. Defaults to `False`. * **strict** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, an error will be raised when we detect that there exists an input such that all the outputs are independent of it. If `False`, we return a Tensor of zeros as the hessian for said inputs, which is the expected mathematical value. Defaults to `False`. * **vectorize** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – This feature is experimental; please use at your own risk. When computing the hessian, usually we invoke `autograd.grad` once per row of the hessian. If this flag is `True`, we use the vmap prototype feature as the backend to vectorize calls to `autograd.grad` so we only invoke it once instead of once per row. This should lead to performance improvements in many use cases; however, because this feature is incomplete, there may be performance cliffs. Please use `torch._C._debug_only_display_vmap_fallback_warnings(True)` to show any performance warnings and file an issue with us if warnings exist for your use case. Defaults to `False`. Returns if there is a single input, this will be a single Tensor containing the Hessian for the input. If it is a tuple, then the Hessian will be a tuple of tuples where `Hessian[i][j]` will contain the Hessian of the `i`th input and `j`th input with size the sum of the size of the `i`th input plus the size of the `j`th input. `Hessian[i][j]` will have the same dtype and device as the corresponding `i`th input. Return type Hessian ([Tensor](tensors#torch.Tensor "torch.Tensor") or a tuple of tuples of Tensors) #### Example ``` >>> def pow_reducer(x): ... return x.pow(3).sum() >>> inputs = torch.rand(2, 2) >>> hessian(pow_reducer, inputs) tensor([[[[5.2265, 0.0000], [0.0000, 0.0000]], [[0.0000, 4.8221], [0.0000, 0.0000]]], [[[0.0000, 0.0000], [1.9456, 0.0000]], [[0.0000, 0.0000], [0.0000, 3.2550]]]]) ``` ``` >>> hessian(pow_reducer, inputs, create_graph=True) tensor([[[[5.2265, 0.0000], [0.0000, 0.0000]], [[0.0000, 4.8221], [0.0000, 0.0000]]], [[[0.0000, 0.0000], [1.9456, 0.0000]], [[0.0000, 0.0000], [0.0000, 3.2550]]]], grad_fn=<ViewBackward>) ``` ``` >>> def pow_adder_reducer(x, y): ... return (2 * x.pow(2) + 3 * y.pow(2)).sum() >>> inputs = (torch.rand(2), torch.rand(2)) >>> hessian(pow_adder_reducer, inputs) ((tensor([[4., 0.], [0., 4.]]), tensor([[0., 0.], [0., 0.]])), (tensor([[0., 0.], [0., 0.]]), tensor([[6., 0.], [0., 6.]]))) ``` `torch.autograd.functional.vjp(func, inputs, v=None, create_graph=False, strict=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/functional.html#vjp) Function that computes the dot product between a vector `v` and the Jacobian of the given function at the point given by the inputs. Parameters * **func** (*function*) – a Python function that takes Tensor inputs and returns a tuple of Tensors or a Tensor.
* **inputs** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the function `func`. * **v** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – The vector for which the vector Jacobian product is computed. Must be the same size as the output of `func`. This argument is optional when the output of `func` contains a single element and (if it is not provided) will be set as a Tensor containing a single `1`. * **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, both the output and result will be computed in a differentiable way. Note that when `strict` is `False`, the result can not require gradients or be disconnected from the inputs. Defaults to `False`. * **strict** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, an error will be raised when we detect that there exists an input such that all the outputs are independent of it. If `False`, we return a Tensor of zeros as the vjp for said inputs, which is the expected mathematical value. Defaults to `False`. Returns tuple with: func\_output (tuple of Tensors or Tensor): output of `func(inputs)` vjp (tuple of Tensors or Tensor): result of the dot product with the same shape as the inputs. Return type output ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) #### Example ``` >>> def exp_reducer(x): ... return x.exp().sum(dim=1) >>> inputs = torch.rand(4, 4) >>> v = torch.ones(4) >>> vjp(exp_reducer, inputs, v) (tensor([5.7817, 7.2458, 5.7830, 6.7782]), tensor([[1.4458, 1.3962, 1.3042, 1.6354], [2.1288, 1.0652, 1.5483, 2.5035], [2.2046, 1.1292, 1.1432, 1.3059], [1.3225, 1.6652, 1.7753, 2.0152]])) ``` ``` >>> vjp(exp_reducer, inputs, v, create_graph=True) (tensor([5.7817, 7.2458, 5.7830, 6.7782], grad_fn=<SumBackward1>), tensor([[1.4458, 1.3962, 1.3042, 1.6354], [2.1288, 1.0652, 1.5483, 2.5035], [2.2046, 1.1292, 1.1432, 1.3059], [1.3225, 1.6652, 1.7753, 2.0152]], grad_fn=<MulBackward0>)) ``` ``` >>> def adder(x, y): ... return 2 * x + 3 * y >>> inputs = (torch.rand(2), torch.rand(2)) >>> v = torch.ones(2) >>> vjp(adder, inputs, v) (tensor([2.4225, 2.3340]), (tensor([2., 2.]), tensor([3., 3.]))) ``` `torch.autograd.functional.jvp(func, inputs, v=None, create_graph=False, strict=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/functional.html#jvp) Function that computes the dot product between the Jacobian of the given function at the point given by the inputs and a vector `v`. Parameters * **func** (*function*) – a Python function that takes Tensor inputs and returns a tuple of Tensors or a Tensor. * **inputs** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the function `func`. * **v** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – The vector for which the Jacobian vector product is computed. Must be the same size as the input of `func`. This argument is optional when the input to `func` contains a single element and (if it is not provided) will be set as a Tensor containing a single `1`. * **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, both the output and result will be computed in a differentiable way. Note that when `strict` is `False`, the result can not require gradients or be disconnected from the inputs. Defaults to `False`. 
* **strict** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, an error will be raised when we detect that there exists an input such that all the outputs are independent of it. If `False`, we return a Tensor of zeros as the jvp for said inputs, which is the expected mathematical value. Defaults to `False`. Returns tuple with: func\_output (tuple of Tensors or Tensor): output of `func(inputs)` jvp (tuple of Tensors or Tensor): result of the dot product with the same shape as the output. Return type output ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) #### Example ``` >>> def exp_reducer(x): ... return x.exp().sum(dim=1) >>> inputs = torch.rand(4, 4) >>> v = torch.ones(4, 4) >>> jvp(exp_reducer, inputs, v) (tensor([6.3090, 4.6742, 7.9114, 8.2106]), tensor([6.3090, 4.6742, 7.9114, 8.2106])) ``` ``` >>> jvp(exp_reducer, inputs, v, create_graph=True) (tensor([6.3090, 4.6742, 7.9114, 8.2106], grad_fn=<SumBackward1>), tensor([6.3090, 4.6742, 7.9114, 8.2106], grad_fn=<SqueezeBackward1>)) ``` ``` >>> def adder(x, y): ... return 2 * x + 3 * y >>> inputs = (torch.rand(2), torch.rand(2)) >>> v = (torch.ones(2), torch.ones(2)) >>> jvp(adder, inputs, v) (tensor([2.2399, 2.5005]), tensor([5., 5.])) ``` Note The jvp is currently computed by using the backward of the backward (sometimes called the double backwards trick) as we don’t have support for forward mode AD in PyTorch at the moment. `torch.autograd.functional.vhp(func, inputs, v=None, create_graph=False, strict=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/functional.html#vhp) Function that computes the dot product between a vector `v` and the Hessian of a given scalar function at the point given by the inputs. Parameters * **func** (*function*) – a Python function that takes Tensor inputs and returns a Tensor with a single element. * **inputs** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the function `func`. * **v** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – The vector for which the vector Hessian product is computed. Must be the same size as the input of `func`. This argument is optional when `func`’s input contains a single element and (if it is not provided) will be set as a Tensor containing a single `1`. * **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, both the output and result will be computed in a differentiable way. Note that when `strict` is `False`, the result can not require gradients or be disconnected from the inputs. Defaults to `False`. * **strict** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, an error will be raised when we detect that there exists an input such that all the outputs are independent of it. If `False`, we return a Tensor of zeros as the vhp for said inputs, which is the expected mathematical value. Defaults to `False`. Returns tuple with: func\_output (tuple of Tensors or Tensor): output of `func(inputs)` vhp (tuple of Tensors or Tensor): result of the dot product with the same shape as the inputs. Return type output ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) #### Example ``` >>> def pow_reducer(x): ... 
return x.pow(3).sum() >>> inputs = torch.rand(2, 2) >>> v = torch.ones(2, 2) >>> vhp(pow_reducer, inputs, v) (tensor(0.5591), tensor([[1.0689, 1.2431], [3.0989, 4.4456]])) >>> vhp(pow_reducer, inputs, v, create_graph=True) (tensor(0.5591, grad_fn=<SumBackward0>), tensor([[1.0689, 1.2431], [3.0989, 4.4456]], grad_fn=<MulBackward0>)) >>> def pow_adder_reducer(x, y): ... return (2 * x.pow(2) + 3 * y.pow(2)).sum() >>> inputs = (torch.rand(2), torch.rand(2)) >>> v = (torch.zeros(2), torch.ones(2)) >>> vhp(pow_adder_reducer, inputs, v) (tensor(4.8053), (tensor([0., 0.]), tensor([6., 6.]))) ``` `torch.autograd.functional.hvp(func, inputs, v=None, create_graph=False, strict=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/functional.html#hvp) Function that computes the dot product between the Hessian of a given scalar function and a vector `v` at the point given by the inputs. Parameters * **func** (*function*) – a Python function that takes Tensor inputs and returns a Tensor with a single element. * **inputs** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the function `func`. * **v** (*tuple of Tensors* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – The vector for which the Hessian vector product is computed. Must be the same size as the input of `func`. This argument is optional when `func`’s input contains a single element and (if it is not provided) will be set as a Tensor containing a single `1`. * **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, both the output and result will be computed in a differentiable way. Note that when `strict` is `False`, the result can not require gradients or be disconnected from the inputs. Defaults to `False`. * **strict** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, an error will be raised when we detect that there exists an input such that all the outputs are independent of it. If `False`, we return a Tensor of zeros as the hvp for said inputs, which is the expected mathematical value. Defaults to `False`. Returns tuple with: func\_output (tuple of Tensors or Tensor): output of `func(inputs)` hvp (tuple of Tensors or Tensor): result of the dot product with the same shape as the inputs. Return type output ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) #### Example ``` >>> def pow_reducer(x): ... return x.pow(3).sum() >>> inputs = torch.rand(2, 2) >>> v = torch.ones(2, 2) >>> hvp(pow_reducer, inputs, v) (tensor(0.1448), tensor([[2.0239, 1.6456], [2.4988, 1.4310]])) ``` ``` >>> hvp(pow_reducer, inputs, v, create_graph=True) (tensor(0.1448, grad_fn=<SumBackward0>), tensor([[2.0239, 1.6456], [2.4988, 1.4310]], grad_fn=<MulBackward0>)) ``` ``` >>> def pow_adder_reducer(x, y): ... return (2 * x.pow(2) + 3 * y.pow(2)).sum() >>> inputs = (torch.rand(2), torch.rand(2)) >>> v = (torch.zeros(2), torch.ones(2)) >>> hvp(pow_adder_reducer, inputs, v) (tensor(2.3030), (tensor([0., 0.]), tensor([6., 6.]))) ``` Note This function is significantly slower than `vhp` due to backward mode AD constraints. If your function is twice continuously differentiable, then hvp = vhp.t(). So if you know that your function satisfies this condition, you should use vhp instead, which is much faster with the current implementation.
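As a quick sketch of that relationship (the scalar function below is made up for illustration): for a twice continuously differentiable function the Hessian is symmetric, so `vhp` and `hvp` produce the same numbers for a 1-D input, where the transpose is a no-op:

```
import torch
from torch.autograd.functional import vhp, hvp

def f(x):
    # Smooth scalar-valued function, so its Hessian is symmetric.
    return (x.sin() * x).sum()

x = torch.rand(4)
v = torch.rand(4)

_, out_vhp = vhp(f, x, v)  # v^T H
_, out_hvp = hvp(f, x, v)  # H v
assert torch.allclose(out_vhp, out_hvp)
```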
Locally disabling gradient computation -------------------------------------- `class torch.autograd.no_grad` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/grad_mode.html#no_grad) Context-manager that disables gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call `Tensor.backward()`. It will reduce memory consumption for computations that would otherwise have `requires_grad=True`. In this mode, the result of every computation will have `requires_grad=False`, even when the inputs have `requires_grad=True`. This context manager is thread local; it will not affect computation in other threads. Also functions as a decorator. (Make sure to instantiate with parentheses.) Example: ``` >>> x = torch.tensor([1], requires_grad=True) >>> with torch.no_grad(): ... y = x * 2 >>> y.requires_grad False >>> @torch.no_grad() ... def doubler(x): ... return x * 2 >>> z = doubler(x) >>> z.requires_grad False ``` `class torch.autograd.enable_grad` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/grad_mode.html#enable_grad) Context-manager that enables gradient calculation. Enables gradient calculation, if it has been disabled via [`no_grad`](#torch.autograd.no_grad "torch.autograd.no_grad") or [`set_grad_enabled`](#torch.autograd.set_grad_enabled "torch.autograd.set_grad_enabled"). This context manager is thread local; it will not affect computation in other threads. Also functions as a decorator. (Make sure to instantiate with parentheses.) Example: ``` >>> x = torch.tensor([1], requires_grad=True) >>> with torch.no_grad(): ... with torch.enable_grad(): ... y = x * 2 >>> y.requires_grad True >>> y.backward() >>> x.grad >>> @torch.enable_grad() ... def doubler(x): ... return x * 2 >>> with torch.no_grad(): ... z = doubler(x) >>> z.requires_grad True ``` `class torch.autograd.set_grad_enabled(mode)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/grad_mode.html#set_grad_enabled) Context-manager that sets gradient calculation to on or off. `set_grad_enabled` will enable or disable grads based on its argument `mode`. It can be used as a context-manager or as a function. This context manager is thread local; it will not affect computation in other threads. Parameters **mode** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Flag whether to enable grad (`True`), or disable (`False`). This can be used to conditionally enable gradients. Example: ``` >>> x = torch.tensor([1], requires_grad=True) >>> is_train = False >>> with torch.set_grad_enabled(is_train): ... y = x * 2 >>> y.requires_grad False >>> torch.set_grad_enabled(True) >>> y = x * 2 >>> y.requires_grad True >>> torch.set_grad_enabled(False) >>> y = x * 2 >>> y.requires_grad False ``` Default gradient layouts ------------------------ When a non-sparse `param` receives a non-sparse gradient during [`torch.autograd.backward()`](#torch.autograd.backward "torch.autograd.backward") or [`torch.Tensor.backward()`](#torch.Tensor.backward "torch.Tensor.backward"), `param.grad` is accumulated as follows. If `param.grad` is initially `None`: 1. If `param`’s memory is non-overlapping and dense, `.grad` is created with strides matching `param` (thus matching `param`’s layout). 2. Otherwise, `.grad` is created with row-major contiguous strides. If `param` already has a non-sparse `.grad` attribute: 3. If `create_graph=False`, `backward()` accumulates into `.grad` in-place, which preserves its strides. 4.
If `create_graph=True`, `backward()` replaces `.grad` with a new tensor `.grad + new grad`, which attempts (but does not guarantee) matching the preexisting `.grad`’s strides. The default behavior (letting `.grad`s be `None` before the first `backward()`, such that their layout is created according to 1 or 2, and retained over time according to 3 or 4) is recommended for best performance. Calls to `model.zero_grad()` or `optimizer.zero_grad()` will not affect `.grad` layouts. In fact, resetting all `.grad`s to `None` before each accumulation phase, e.g.: ``` for iterations... ... for param in model.parameters(): param.grad = None loss.backward() ``` such that they’re recreated according to 1 or 2 every time, is a valid alternative to `model.zero_grad()` or `optimizer.zero_grad()` that may improve performance for some networks. ### Manual gradient layouts If you need manual control over `.grad`’s strides, assign `param.grad =` a zeroed tensor with desired strides before the first `backward()`, and never reset it to `None`. 3 guarantees your layout is preserved as long as `create_graph=False`. 4 indicates your layout is *likely* preserved even if `create_graph=True`. In-place operations on Tensors ------------------------------ Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases. Autograd’s aggressive buffer freeing and reuse makes it very efficient and there are very few occasions when in-place operations actually lower memory usage by any significant amount. Unless you’re operating under heavy memory pressure, you might never need to use them. ### In-place correctness checks All `Tensor`s keep track of in-place operations applied to them, and if the implementation detects that a tensor was saved for backward in one of the functions, but it was modified in-place afterwards, an error will be raised once the backward pass is started. This ensures that if you’re using in-place functions and not seeing any errors, you can be sure that the computed gradients are correct. Variable (deprecated) --------------------- Warning The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with `requires_grad` set to `True`. Below please find a quick guide on what has changed: * `Variable(tensor)` and `Variable(tensor, requires_grad)` still work as expected, but they return Tensors instead of Variables. * `var.data` is the same thing as `tensor.data`. * Methods such as `var.backward(), var.detach(), var.register_hook()` now work on tensors with the same method names. In addition, one can now create tensors with `requires_grad=True` using factory methods such as [`torch.randn()`](generated/torch.randn#torch.randn "torch.randn"), [`torch.zeros()`](generated/torch.zeros#torch.zeros "torch.zeros"), [`torch.ones()`](generated/torch.ones#torch.ones "torch.ones"), and others like the following: `autograd_tensor = torch.randn((2, 3, 4), requires_grad=True)` Tensor autograd functions ------------------------- `class torch.Tensor` `grad` This attribute is `None` by default and becomes a Tensor the first time a call to [`backward()`](#torch.Tensor.backward "torch.Tensor.backward") computes gradients for `self`. The attribute will then contain the gradients computed and future calls to [`backward()`](#torch.Tensor.backward "torch.Tensor.backward") will accumulate (add) gradients into it. `requires_grad` Is `True` if gradients need to be computed for this Tensor, `False` otherwise.
Note The fact that gradients need to be computed for a Tensor does not mean that the [`grad`](#torch.Tensor.grad "torch.Tensor.grad") attribute will be populated; see [`is_leaf`](#torch.Tensor.is_leaf "torch.Tensor.is_leaf") for more details. `is_leaf` All Tensors that have [`requires_grad`](#torch.Tensor.requires_grad "torch.Tensor.requires_grad") which is `False` will be leaf Tensors by convention. Tensors that have [`requires_grad`](#torch.Tensor.requires_grad "torch.Tensor.requires_grad") which is `True` will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so `grad_fn` is None. Only leaf Tensors will have their [`grad`](#torch.Tensor.grad "torch.Tensor.grad") populated during a call to [`backward()`](#torch.Tensor.backward "torch.Tensor.backward"). To get [`grad`](#torch.Tensor.grad "torch.Tensor.grad") populated for non-leaf Tensors, you can use [`retain_grad()`](#torch.Tensor.retain_grad "torch.Tensor.retain_grad"). Example: ``` >>> a = torch.rand(10, requires_grad=True) >>> a.is_leaf True >>> b = torch.rand(10, requires_grad=True).cuda() >>> b.is_leaf False # b was created by the operation that cast a cpu Tensor into a cuda Tensor >>> c = torch.rand(10, requires_grad=True) + 2 >>> c.is_leaf False # c was created by the addition operation >>> d = torch.rand(10).cuda() >>> d.is_leaf True # d does not require gradients and so has no operation creating it (that is tracked by the autograd engine) >>> e = torch.rand(10).cuda().requires_grad_() >>> e.is_leaf True # e requires gradients and has no operations creating it >>> f = torch.rand(10, requires_grad=True, device="cuda") >>> f.is_leaf True # f requires grad, has no operation creating it ``` `backward(gradient=None, retain_graph=None, create_graph=False, inputs=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.backward) Computes the gradient of the current tensor w.r.t. graph leaves. The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying `gradient`. It should be a tensor of matching type and location that contains the gradient of the differentiated function w.r.t. `self`. This function accumulates gradients in the leaves - you might need to zero `.grad` attributes or set them to `None` before calling it. See [Default gradient layouts](#default-grad-layouts) for details on the memory layout of accumulated gradients. Note If you run any forward ops, create `gradient`, and/or call `backward` in a user-specified CUDA stream context, see [Stream semantics of backward passes](https://pytorch.org/docs/1.8.0/notes/cuda.html#bwd-cuda-stream-semantics). Parameters * **gradient** ([Tensor](tensors#torch.Tensor "torch.Tensor") *or* [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.9)")) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless `create_graph` is True. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable then this argument is optional. * **retain\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `False`, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way.
`detach()`

Returns a new Tensor, detached from the current graph.

The result will never require gradient.

Note

Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as `resize_` / `resize_as_` / `set_` / `transpose_`) to the returned tensor would also update the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error. For sparse tensors: In-place indices / values changes (such as `zero_` / `copy_` / `add_`) to the returned tensor will not update the original tensor anymore, and will instead trigger an error.

`detach_()`

Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.

`register_hook(hook)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.register_hook)

Registers a backward hook.

The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature:

```
hook(grad) -> Tensor or None
```

The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of [`grad`](#torch.Tensor.grad "torch.Tensor.grad").

This function returns a handle with a method `handle.remove()` that removes the hook from the tensor.

Example:

```
>>> v = torch.tensor([0., 0., 0.], requires_grad=True)
>>> h = v.register_hook(lambda grad: grad * 2)  # double the gradient
>>> v.backward(torch.tensor([1., 2., 3.]))
>>> v.grad
tensor([2., 4., 6.])
>>> h.remove()  # removes the hook
```

`retain_grad()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.retain_grad)

Enables the `.grad` attribute for non-leaf Tensors.
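A minimal sketch (an illustrative addition) showing `retain_grad()` making the gradient of an intermediate tensor available:

```
>>> x = torch.tensor([1.0, 2.0], requires_grad=True)
>>> y = x * 3  # non-leaf; its .grad is discarded by default
>>> y.retain_grad()
>>> y.sum().backward()
>>> y.grad
tensor([1., 1.])
>>> x.grad
tensor([3., 3.])
```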
Function
--------

`class torch.autograd.Function` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/function.html#Function)

Records operation history and defines formulas for differentiating ops.

See the Note on extending the autograd engine for more details on how to use this class: <https://pytorch.org/docs/stable/notes/extending.html#extending-torch-autograd>

Every operation performed on `Tensor` s creates a new function object that performs the computation and records that it happened. The history is retained in the form of a DAG of functions, with edges denoting data dependencies (`input <- output`). Then, when backward is called, the graph is processed in topological order, by calling the [`backward()`](#torch.autograd.backward "torch.autograd.backward") methods of each [`Function`](#torch.autograd.Function "torch.autograd.Function") object, and passing returned gradients on to the next [`Function`](#torch.autograd.Function "torch.autograd.Function") s.

Normally, the only way users interact with functions is by creating subclasses and defining new operations. This is the recommended way of extending torch.autograd.

Examples:

```
>>> class Exp(Function):
>>>
>>>     @staticmethod
>>>     def forward(ctx, i):
>>>         result = i.exp()
>>>         ctx.save_for_backward(result)
>>>         return result
>>>
>>>     @staticmethod
>>>     def backward(ctx, grad_output):
>>>         result, = ctx.saved_tensors
>>>         return grad_output * result
>>>
>>> # Use it by calling the apply method:
>>> output = Exp.apply(input)
```

`static backward(ctx, *grad_outputs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/function.html#Function.backward)

Defines a formula for differentiating the operation.

This function is to be overridden by all subclasses.

It must accept a context `ctx` as the first argument, followed by as many outputs as [`forward()`](#torch.autograd.Function.forward "torch.autograd.Function.forward") returned, and it should return as many tensors as there were inputs to [`forward()`](#torch.autograd.Function.forward "torch.autograd.Function.forward"). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute `ctx.needs_input_grad` as a tuple of booleans representing whether each input needs gradient. E.g., [`backward()`](#torch.autograd.backward "torch.autograd.backward") will have `ctx.needs_input_grad[0] = True` if the first input to [`forward()`](#torch.autograd.Function.forward "torch.autograd.Function.forward") needs its gradient computed w.r.t. the output.

`static forward(ctx, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/function.html#Function.forward)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store tensors that can then be retrieved during the backward pass.
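As an illustrative sketch (not from the original reference), here is a two-input `Function` that consults `ctx.needs_input_grad` to avoid computing gradients that are not needed:

```
>>> class Mul(Function):
>>>
>>>     @staticmethod
>>>     def forward(ctx, a, b):
>>>         ctx.save_for_backward(a, b)
>>>         return a * b
>>>
>>>     @staticmethod
>>>     def backward(ctx, grad_output):
>>>         a, b = ctx.saved_tensors
>>>         # Return None for inputs that do not require a gradient
>>>         grad_a = grad_output * b if ctx.needs_input_grad[0] else None
>>>         grad_b = grad_output * a if ctx.needs_input_grad[1] else None
>>>         return grad_a, grad_b
>>>
>>> x = torch.randn(3, requires_grad=True)
>>> y = torch.randn(3)  # no gradient needed for y
>>> Mul.apply(x, y).sum().backward()
```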
Context method mixins
---------------------

When creating a new [`Function`](#torch.autograd.Function "torch.autograd.Function"), the following methods are available to `ctx`.

`class torch.autograd.function._ContextMethodMixin` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/function.html#_ContextMethodMixin)

`mark_dirty(*args)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/function.html#_ContextMethodMixin.mark_dirty)

Marks given tensors as modified in an in-place operation.

**This should be called at most once, only from inside the** `forward()` **method, and all arguments should be inputs.**

Every tensor that’s been modified in-place in a call to `forward()` should be given to this function, to ensure correctness of our checks. It doesn’t matter whether the function is called before or after modification.

`mark_non_differentiable(*args)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/function.html#_ContextMethodMixin.mark_non_differentiable)

Marks outputs as non-differentiable.

**This should be called at most once, only from inside the** `forward()` **method, and all arguments should be outputs.**

This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in `backward()`, but it’s always going to be a zero tensor with the same shape as the corresponding output.

This is used e.g. for indices returned from a max `Function`.

`save_for_backward(*tensors)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/function.html#_ContextMethodMixin.save_for_backward)

Saves given tensors for a future call to `backward()`.

**This should be called at most once, and only from inside the** `forward()` **method.**

Later, saved tensors can be accessed through the `saved_tensors` attribute. Before returning them to the user, a check is made to ensure they weren’t used in any in-place operation that modified their content.

Arguments can also be `None`.

`set_materialize_grads(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/function.html#_ContextMethodMixin.set_materialize_grads)

Sets whether to materialize output grad tensors. Default is `True`.

**This should be called only from inside the** `forward()` **method.**

If `True`, undefined output grad tensors will be expanded to tensors full of zeros prior to calling the `backward()` method.
Numerical gradient checking
---------------------------

`torch.autograd.gradcheck(func, inputs, eps=1e-06, atol=1e-05, rtol=0.001, raise_exception=True, check_sparse_nnz=False, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/gradcheck.html#gradcheck)

Check gradients computed via small finite differences against analytical gradients w.r.t. tensors in `inputs` that are of floating point or complex type and with `requires_grad=True`.

The check between numerical and analytical gradients uses [`allclose()`](generated/torch.allclose#torch.allclose "torch.allclose").

For complex functions, no notion of Jacobian exists. Gradcheck verifies that the numerical and analytical values of the Wirtinger and Conjugate Wirtinger derivatives are consistent. The gradient computation is done under the assumption that the overall function has a real valued output. For functions with complex output, gradcheck compares the numerical and analytical gradients for two values of `grad_output`: 1 and 1j. For more details, check out [Autograd for Complex Numbers](https://pytorch.org/docs/1.8.0/notes/autograd.html#complex-autograd-doc).

Note

The default values are designed for `input` of double precision. This check will likely fail if `input` is of less precision, e.g., `FloatTensor`.

Warning

If any checked tensor in `input` has overlapping memory, i.e., different indices pointing to the same memory address (e.g., from `torch.expand()`), this check will likely fail because the numerical gradients computed by point perturbation at such indices will change values at all other indices that share the same memory address.

Parameters

* **func** (*function*) – a Python function that takes Tensor inputs and returns a Tensor or a tuple of Tensors
* **inputs** (*tuple of Tensor* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the function
* **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – perturbation for finite differences
* **atol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – absolute tolerance
* **rtol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – relative tolerance
* **raise\_exception** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to raise an exception if the check fails. The exception gives more information about the exact nature of the failure. This is helpful when debugging gradchecks.
* **check\_sparse\_nnz** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, gradcheck allows for SparseTensor input, and for any SparseTensor at input, gradcheck will perform its check at nnz positions only.
* **nondet\_tol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – tolerance for non-determinism. When running identical inputs through the differentiation, the results must either match exactly (default, 0.0) or be within this tolerance.
* **check\_undefined\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, check if undefined output grads are supported and treated as zeros, for `Tensor` outputs.
* **check\_batched\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, check if we can compute batched gradients using prototype vmap support. Defaults to `False`.

Returns

`True` if all differences satisfy the allclose condition
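A minimal, illustrative `gradcheck` call; note the double-precision input recommended by the Note above:

```
>>> inp = torch.randn(4, dtype=torch.double, requires_grad=True)
>>> torch.autograd.gradcheck(torch.sigmoid, (inp,), eps=1e-6, atol=1e-4)
True
```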
`torch.autograd.gradgradcheck(func, inputs, grad_outputs=None, eps=1e-06, atol=1e-05, rtol=0.001, gen_non_contig_grad_outputs=False, raise_exception=True, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/gradcheck.html#gradgradcheck)

Check gradients of gradients computed via small finite differences against analytical gradients w.r.t. tensors in `inputs` and `grad_outputs` that are of floating point or complex type and with `requires_grad=True`.

This function checks that backpropagating through the gradients computed to the given `grad_outputs` is correct.

The check between numerical and analytical gradients uses [`allclose()`](generated/torch.allclose#torch.allclose "torch.allclose").

Note

The default values are designed for `input` and `grad_outputs` of double precision. This check will likely fail if they are of less precision, e.g., `FloatTensor`.

Warning

If any checked tensor in `input` and `grad_outputs` has overlapping memory, i.e., different indices pointing to the same memory address (e.g., from `torch.expand()`), this check will likely fail because the numerical gradients computed by point perturbation at such indices will change values at all other indices that share the same memory address.

Parameters

* **func** (*function*) – a Python function that takes Tensor inputs and returns a Tensor or a tuple of Tensors
* **inputs** (*tuple of Tensor* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the function
* **grad\_outputs** (*tuple of Tensor* *or* [Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The gradients with respect to the function’s outputs.
* **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – perturbation for finite differences
* **atol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – absolute tolerance
* **rtol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – relative tolerance
* **gen\_non\_contig\_grad\_outputs** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `grad_outputs` is `None` and `gen_non_contig_grad_outputs` is `True`, the randomly generated gradient outputs are made to be noncontiguous
* **raise\_exception** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to raise an exception if the check fails. The exception gives more information about the exact nature of the failure. This is helpful when debugging gradchecks.
* **nondet\_tol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – tolerance for non-determinism. When running identical inputs through the differentiation, the results must either match exactly (default, 0.0) or be within this tolerance. Note that a small amount of nondeterminism in the gradient will lead to larger inaccuracies in the second derivative.
* **check\_undefined\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, check if undefined output grads are supported and treated as zeros
* **check\_batched\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, check if we can compute batched gradients using prototype vmap support. Defaults to `False`.

Returns

`True` if all differences satisfy the allclose condition
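Similarly, an illustrative `gradgradcheck` call on a function with a well-defined second derivative (with `grad_outputs=None`, random gradient outputs are generated automatically):

```
>>> inp = torch.randn(3, dtype=torch.double, requires_grad=True)
>>> torch.autograd.gradgradcheck(lambda x: x ** 3, (inp,))
True
```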
Profiler
--------

Autograd includes a profiler that lets you inspect the cost of different operators inside your model - both on the CPU and GPU. There are two modes implemented at the moment - CPU-only, using [`profile`](#torch.autograd.profiler.profile "torch.autograd.profiler.profile"), and nvprof-based (registers both CPU and GPU activity), using [`emit_nvtx`](#torch.autograd.profiler.emit_nvtx "torch.autograd.profiler.emit_nvtx").

`class torch.autograd.profiler.profile(enabled=True, *, use_cuda=False, record_shapes=False, with_flops=False, profile_memory=False, with_stack=False, use_kineto=False, use_cpu=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/profiler.html#profile)

Context manager that manages autograd profiler state and holds a summary of results. Under the hood it just records events of functions being executed in C++ and exposes those events to Python. You can wrap any code into it and it will only report runtime of PyTorch functions.

Note: the profiler is thread local and is automatically propagated into async tasks.

Parameters

* **enabled** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Setting this to `False` makes this context manager a no-op.
* **use\_cuda** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Enables timing of CUDA events as well, using the cudaEvent API. Adds approximately 4us of overhead to each tensor operation.
* **record\_shapes** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If shapes recording is set, information about input dimensions will be collected. This allows one to see which dimensions have been used under the hood and further group by them using `prof.key_averages(group_by_input_shape=True)`. Please note that shape recording might skew your profiling data. It is recommended to use separate runs with and without shape recording to validate the timing. Most likely the skew will be negligible for bottom-most events (in a case of nested function calls). But for higher level functions the total self cpu time might be artificially increased because of the shape collection.
* **with\_flops** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If with\_flops is set, the profiler will estimate the FLOPS (floating point operations per second) value using the operator’s input shape and total time. This allows one to estimate the hardware performance. Currently, this option only works for the matrix multiplication and 2D convolution operators.
* **profile\_memory** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – track tensor memory allocation/deallocation.
* **with\_stack** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – record source information (file and line number) for the ops.
* **use\_kineto** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – experimental, enable profiling with the Kineto profiler.
* **use\_cpu** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – profile CPU events; setting to `False` requires `use_kineto=True` and can be used to lower the overhead for GPU-only profiling.

#### Example

```
>>> x = torch.randn((1, 1), requires_grad=True)
>>> with torch.autograd.profiler.profile() as prof:
>>>     for _ in range(100):  # any normal python code, really!
>>>         y = x ** 2
>>>         y.backward()
>>> # NOTE: some columns were removed for brevity
>>> print(prof.key_averages().table(sort_by="self_cpu_time_total"))
-----------------------------------  ---------------  ---------------  ---------------
Name                                 Self CPU total   CPU time avg     Number of Calls
-----------------------------------  ---------------  ---------------  ---------------
mul                                  32.048ms         32.048ms         200
pow                                  27.041ms         27.041ms         200
PowBackward0                         9.727ms          55.483ms         100
torch::autograd::AccumulateGrad      9.148ms          9.148ms          100
torch::autograd::GraphRoot           691.816us        691.816us        100
-----------------------------------  ---------------  ---------------  ---------------
```

`export_chrome_trace(path)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/profiler.html#profile.export_chrome_trace)

Exports an EventList as a Chrome tracing tools file.

The checkpoint can be later loaded and inspected under the `chrome://tracing` URL.

Parameters

**path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – Path where the trace will be written.
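An illustrative usage sketch (the file name `trace.json` is arbitrary):

```
>>> with torch.autograd.profiler.profile() as prof:
...     torch.mm(torch.randn(8, 8), torch.randn(8, 8))
>>> prof.export_chrome_trace("trace.json")  # then open chrome://tracing and load the file
```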
`key_averages(group_by_input_shape=False, group_by_stack_n=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/profiler.html#profile.key_averages)

Averages all function events over their keys.

Parameters

* **group\_by\_input\_shape** – group entries by (event name, input shapes) rather than just event name. This is useful to see which input shapes contribute to the runtime the most and may help with size-specific optimizations or choosing the best candidates for quantization.
* **group\_by\_stack\_n** – group by top n stack trace entries

Returns

An EventList containing FunctionEventAvg objects.

`property self_cpu_time_total`

Returns total time spent on CPU, obtained as a sum of all self times across all the events.

`table(sort_by=None, row_limit=100, max_src_column_width=75, header=None, top_level_events_only=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/profiler.html#profile.table)

Prints an EventList as a nicely formatted table.

Parameters

* **sort\_by** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – Attribute used to sort entries. By default they are printed in the same order as they were registered. Valid keys include: `cpu_time`, `cuda_time`, `cpu_time_total`, `cuda_time_total`, `cpu_memory_usage`, `cuda_memory_usage`, `self_cpu_memory_usage`, `self_cuda_memory_usage`, `count`.
* **top\_level\_events\_only** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Boolean flag to determine the selection of events to display. If `True`, the profiler will only display top-level events, such as a top-level invocation of python `lstm`, python `add` or other functions; nested events, like low-level cpu/cuda ops, are omitted for profiler result readability.

Returns

A string containing the table.

`total_average()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/profiler.html#profile.total_average)

Averages all events.

Returns

A FunctionEventAvg object.

`class torch.autograd.profiler.emit_nvtx(enabled=True, record_shapes=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/profiler.html#emit_nvtx)

Context manager that makes every autograd operation emit an NVTX range.

It is useful when running the program under nvprof:

```
nvprof --profile-from-start off -o trace_name.prof -- <regular command here>
```

Unfortunately, there’s no way to force nvprof to flush the data it collected to disk, so for CUDA profiling one has to use this context manager to annotate nvprof traces and wait for the process to exit before inspecting them. Then, either NVIDIA Visual Profiler (nvvp) can be used to visualize the timeline, or [`torch.autograd.profiler.load_nvprof()`](#torch.autograd.profiler.load_nvprof "torch.autograd.profiler.load_nvprof") can load the results for inspection, e.g. in the Python REPL.

Parameters

* **enabled** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional**,* *default=True*) – Setting `enabled=False` makes this context manager a no-op. Default: `True`.
* **record\_shapes** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional**,* *default=False*) – If `record_shapes=True`, the nvtx range wrapping each autograd op will append information about the sizes of Tensor arguments received by that op, in the following format: `[[arg0.size(0), arg0.size(1), ...], [arg1.size(0), arg1.size(1), ...], ...]` Non-tensor arguments will be represented by `[]`. Arguments will be listed in the order they are received by the backend op. Please note that this order may not match the order in which those arguments were passed on the Python side. Also note that shape recording may increase the overhead of nvtx range creation.
#### Example

```
>>> with torch.cuda.profiler.profile():
...     model(x)  # Warmup CUDA memory allocator and profiler
...     with torch.autograd.profiler.emit_nvtx():
...         model(x)
```

**Forward-backward correlation**

When viewing a profile created using [`emit_nvtx`](#torch.autograd.profiler.emit_nvtx "torch.autograd.profiler.emit_nvtx") in the Nvidia Visual Profiler, correlating each backward-pass op with the corresponding forward-pass op can be difficult. To ease this task, [`emit_nvtx`](#torch.autograd.profiler.emit_nvtx "torch.autograd.profiler.emit_nvtx") appends sequence number information to the ranges it generates.

During the forward pass, each function range is decorated with `seq=<N>`. `seq` is a running counter, incremented each time a new backward Function object is created and stashed for backward. Thus, the `seq=<N>` annotation associated with each forward function range tells you that if a backward Function object is created by this forward function, the backward object will receive sequence number N. During the backward pass, the top-level range wrapping each C++ backward Function’s `apply()` call is decorated with `stashed seq=<M>`. `M` is the sequence number that the backward object was created with. By comparing `stashed seq` numbers in backward with `seq` numbers in forward, you can track down which forward op created each backward Function.

Any functions executed during the backward pass are also decorated with `seq=<N>`. During default backward (with `create_graph=False`) this information is irrelevant, and in fact, `N` may simply be 0 for all such functions. Only the top-level ranges associated with backward Function objects’ `apply()` methods are useful, as a way to correlate these Function objects with the earlier forward pass.

**Double-backward**

If, on the other hand, a backward pass with `create_graph=True` is underway (in other words, if you are setting up for a double-backward), each function’s execution during backward is given a nonzero, useful `seq=<N>`. Those functions may themselves create Function objects to be executed later during double-backward, just as the original functions in the forward pass did. The relationship between backward and double-backward is conceptually the same as the relationship between forward and backward: The functions still emit current-sequence-number-tagged ranges, the Function objects they create still stash those sequence numbers, and during the eventual double-backward, the Function objects’ `apply()` ranges are still tagged with `stashed seq` numbers, which can be compared to `seq` numbers from the backward pass.

`torch.autograd.profiler.load_nvprof(path)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/profiler.html#load_nvprof)

Opens an nvprof trace file and parses autograd annotations.

Parameters

**path** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – path to nvprof trace
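An illustrative sketch, assuming a trace was recorded with the nvprof command shown above:

```
>>> events = torch.autograd.profiler.load_nvprof("trace_name.prof")
>>> print(events)
```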
Anomaly detection
-----------------

`class torch.autograd.detect_anomaly` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/anomaly_mode.html#detect_anomaly)

Context-manager that enables anomaly detection for the autograd engine.

This does two things:

* Running the forward pass with detection enabled will allow the backward pass to print the traceback of the forward operation that created the failing backward function.
* Any backward computation that generates a NaN value will raise an error.

Warning

This mode should be enabled only for debugging, as the different tests will slow down your program execution.

#### Example

```
>>> import torch
>>> from torch import autograd
>>> class MyFunc(autograd.Function):
...     @staticmethod
...     def forward(ctx, inp):
...         return inp.clone()
...     @staticmethod
...     def backward(ctx, gO):
...         # Error during the backward pass
...         raise RuntimeError("Some error in backward")
...         return gO.clone()
>>> def run_fn(a):
...     out = MyFunc.apply(a)
...     return out.sum()
>>> inp = torch.rand(10, 10, requires_grad=True)
>>> out = run_fn(inp)
>>> out.backward()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/your/pytorch/install/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
      File "/your/pytorch/install/torch/autograd/function.py", line 76, in apply
        return self._forward_cls.backward(self, *args)
      File "<stdin>", line 8, in backward
    RuntimeError: Some error in backward
>>> with autograd.detect_anomaly():
...     inp = torch.rand(10, 10, requires_grad=True)
...     out = run_fn(inp)
...     out.backward()
    Traceback of forward call that caused the error:
      File "tmp.py", line 53, in <module>
        out = run_fn(inp)
      File "tmp.py", line 44, in run_fn
        out = MyFunc.apply(a)
    Traceback (most recent call last):
      File "<stdin>", line 4, in <module>
      File "/your/pytorch/install/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
      File "/your/pytorch/install/torch/autograd/function.py", line 76, in apply
        return self._forward_cls.backward(self, *args)
      File "<stdin>", line 8, in backward
    RuntimeError: Some error in backward
```

`class torch.autograd.set_detect_anomaly(mode)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/anomaly_mode.html#set_detect_anomaly)

Context-manager that sets the anomaly detection for the autograd engine on or off.

`set_detect_anomaly` will enable or disable the autograd anomaly detection based on its argument `mode`. It can be used as a context-manager or as a function.

See `detect_anomaly` above for details of the anomaly detection behaviour.

Parameters

**mode** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Flag whether to enable anomaly detection (`True`), or disable (`False`).
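A minimal sketch (added for illustration) of both usages:

```
>>> torch.autograd.set_detect_anomaly(True)   # enable globally, as a function call
>>> with torch.autograd.set_detect_anomaly(False):
...     pass  # anomaly detection disabled inside this block, restored on exit
```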
pytorch Probability distributions - torch.distributions

Probability distributions - torch.distributions
===============================================

The `distributions` package contains parameterizable probability distributions and sampling functions. This allows the construction of stochastic computation graphs and stochastic gradient estimators for optimization. This package generally follows the design of the [TensorFlow Distributions](https://arxiv.org/abs/1711.10604) package.

It is not possible to directly backpropagate through random samples. However, there are two main methods for creating surrogate functions that can be backpropagated through. These are the score function estimator/likelihood ratio estimator/REINFORCE and the pathwise derivative estimator. REINFORCE is commonly seen as the basis for policy gradient methods in reinforcement learning, and the pathwise derivative estimator is commonly seen in the reparameterization trick in variational autoencoders. Whilst the score function only requires the value of samples $f(x)$, the pathwise derivative requires the derivative $f'(x)$. The next sections discuss these two in a reinforcement learning example. For more details see [Gradient Estimation Using Stochastic Computation Graphs](https://arxiv.org/abs/1506.05254).

Score function
--------------

When the probability density function is differentiable with respect to its parameters, we only need `sample()` and `log_prob()` to implement REINFORCE:

$$\Delta\theta = \alpha r \frac{\partial \log p(a \mid \pi^\theta(s))}{\partial \theta}$$

where $\theta$ are the parameters, $\alpha$ is the learning rate, $r$ is the reward and $p(a \mid \pi^\theta(s))$ is the probability of taking action $a$ in state $s$ given policy $\pi^\theta$.

In practice we would sample an action from the output of a network, apply this action in an environment, and then use `log_prob` to construct an equivalent loss function. Note that we use a negative because optimizers use gradient descent, whilst the rule above assumes gradient ascent. With a categorical policy, the code for implementing REINFORCE would be as follows:

```
probs = policy_network(state)
# Note that this is equivalent to what used to be called multinomial
m = Categorical(probs)
action = m.sample()
next_state, reward = env.step(action)
loss = -m.log_prob(action) * reward
loss.backward()
```

Pathwise derivative
-------------------

The other way to implement these stochastic/policy gradients would be to use the reparameterization trick from the `rsample()` method, where the parameterized random variable can be constructed via a parameterized deterministic function of a parameter-free random variable. The reparameterized sample therefore becomes differentiable. The code for implementing the pathwise derivative would be as follows:

```
params = policy_network(state)
m = Normal(*params)
# Any distribution with .has_rsample == True could work based on the application
action = m.rsample()
next_state, reward = env.step(action)  # Assuming that reward is differentiable
loss = -reward
loss.backward()
```

Distribution
------------

`class torch.distributions.distribution.Distribution(batch_shape=torch.Size([]), event_shape=torch.Size([]), validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution)

Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.9)")

Distribution is the abstract base class for probability distributions.
`property arg_constraints`

Returns a dictionary from argument names to [`Constraint`](#torch.distributions.constraints.Constraint "torch.distributions.constraints.Constraint") objects that should be satisfied by each argument of this distribution. Args that are not tensors need not appear in this dict.

`property batch_shape`

Returns the shape over which parameters are batched.

`cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.cdf)

Returns the cumulative density/mass function evaluated at `value`.

Parameters

**value** ([Tensor](tensors#torch.Tensor "torch.Tensor"))

`entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.entropy)

Returns entropy of distribution, batched over batch\_shape.

Returns

Tensor of shape batch\_shape.

`enumerate_support(expand=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.enumerate_support)

Returns tensor containing all values supported by a discrete distribution. The result will enumerate over dimension 0, so the shape of the result will be `(cardinality,) + batch_shape + event_shape` (where `event_shape = ()` for univariate distributions).

Note that this enumerates over all batched tensors in lock-step `[[0, 0], [1, 1], …]`. With `expand=False`, enumeration happens along dim 0, but with the remaining batch dimensions being singleton dimensions, `[[0], [1], ...]`.

To iterate over the full Cartesian product use `itertools.product(m.enumerate_support())`.

Parameters

**expand** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether to expand the support over the batch dims to match the distribution’s `batch_shape`.

Returns

Tensor iterating over dimension 0.

`property event_shape`

Returns the shape of a single sample (without batching).

`expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.expand)

Returns a new distribution instance (or populates an existing instance provided by a derived class) with batch dimensions expanded to `batch_shape`. This method calls [`expand`](tensors#torch.Tensor.expand "torch.Tensor.expand") on the distribution’s parameters. As such, this does not allocate new memory for the expanded distribution instance. Additionally, this does not repeat any args checking or parameter broadcasting in `__init__.py`, when an instance is first created.

Parameters

* **batch\_shape** (*torch.Size*) – the desired expanded size.
* **\_instance** – new instance provided by subclasses that need to override `.expand`.

Returns

New distribution instance with batch dimensions expanded to `batch_shape`.

`icdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.icdf)

Returns the inverse cumulative density/mass function evaluated at `value`.

Parameters

**value** ([Tensor](tensors#torch.Tensor "torch.Tensor"))

`log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.log_prob)

Returns the log of the probability density/mass function evaluated at `value`.

Parameters

**value** ([Tensor](tensors#torch.Tensor "torch.Tensor"))

`property mean`

Returns the mean of the distribution.

`perplexity()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.perplexity)

Returns perplexity of distribution, batched over batch\_shape.

Returns

Tensor of shape batch\_shape.

`rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.rsample)

Generates a sample\_shape shaped reparameterized sample or sample\_shape shaped batch of reparameterized samples if the distribution parameters are batched.

`sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.sample)

Generates a sample\_shape shaped sample or sample\_shape shaped batch of samples if the distribution parameters are batched.

`sample_n(n)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.sample_n)

Generates n samples or n batches of samples if the distribution parameters are batched.

`static set_default_validate_args(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/distribution.html#Distribution.set_default_validate_args)

Sets whether validation is enabled or disabled.

The default behavior mimics Python’s `assert` statement: validation is on by default, but is disabled if Python is run in optimized mode (via `python -O`). Validation may be expensive, so you may want to disable it once a model is working.

Parameters

**value** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to enable validation.

`property stddev`

Returns the standard deviation of the distribution.

`property support`

Returns a [`Constraint`](#torch.distributions.constraints.Constraint "torch.distributions.constraints.Constraint") object representing this distribution’s support.

`property variance`

Returns the variance of the distribution.
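To illustrate the shape semantics above, a minimal sketch (added for illustration) using `Normal`:

```
>>> from torch.distributions import Normal
>>> d = Normal(torch.zeros(3), torch.ones(3))  # a batch of 3 univariate normals
>>> d.batch_shape, d.event_shape
(torch.Size([3]), torch.Size([]))
>>> x = d.sample((5,))  # sample_shape + batch_shape + event_shape
>>> d.log_prob(x).shape
torch.Size([5, 3])
>>> d.expand(torch.Size([2, 3])).batch_shape
torch.Size([2, 3])
```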
ExponentialFamily
-----------------

`class torch.distributions.exp_family.ExponentialFamily(batch_shape=torch.Size([]), event_shape=torch.Size([]), validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exp_family.html#ExponentialFamily)

Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution")

ExponentialFamily is the abstract base class for probability distributions belonging to an exponential family, whose probability mass/density function has the form defined below:

$$p_F(x; \theta) = \exp\bigl(\langle t(x), \theta \rangle - F(\theta) + k(x)\bigr)$$

where $\theta$ denotes the natural parameters, $t(x)$ denotes the sufficient statistic, $F(\theta)$ is the log normalizer function for a given family and $k(x)$ is the carrier measure.

Note

This class is an intermediary between the `Distribution` class and distributions which belong to an exponential family, mainly to check the correctness of the `.entropy()` and analytic KL divergence methods. We use this class to compute the entropy and KL divergence using the AD framework and Bregman divergences (courtesy of: Frank Nielsen and Richard Nock, Entropies and Cross-entropies of Exponential Families).

`entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exp_family.html#ExponentialFamily.entropy)

Method to compute the entropy using Bregman divergence of the log normalizer.
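As an illustrative sketch (added here): `Normal` is an exponential-family distribution, so `entropy()` and `kl_divergence()` are available in closed form:

```
>>> from torch.distributions import Normal
>>> from torch.distributions.kl import kl_divergence
>>> p = Normal(0.0, 1.0)
>>> q = Normal(1.0, 2.0)
>>> p.entropy()      # analytic entropy of a standard normal
tensor(1.4189)
>>> kl_divergence(p, q)  # analytic KL between two normals
tensor(0.4431)
```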
Bernoulli --------- `class torch.distributions.bernoulli.Bernoulli(probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/bernoulli.html#Bernoulli) Bases: [`torch.distributions.exp_family.ExponentialFamily`](#torch.distributions.exp_family.ExponentialFamily "torch.distributions.exp_family.ExponentialFamily") Creates a Bernoulli distribution parameterized by [`probs`](#torch.distributions.bernoulli.Bernoulli.probs "torch.distributions.bernoulli.Bernoulli.probs") or [`logits`](#torch.distributions.bernoulli.Bernoulli.logits "torch.distributions.bernoulli.Bernoulli.logits") (but not both). Samples are binary (0 or 1). They take the value `1` with probability `p` and `0` with probability `1 - p`. Example: ``` >>> m = Bernoulli(torch.tensor([0.3])) >>> m.sample() # 30% chance 1; 70% chance 0 tensor([ 0.]) ``` Parameters * **probs** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the probability of sampling `1` * **logits** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the log-odds of sampling `1` `arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/bernoulli.html#Bernoulli.entropy) `enumerate_support(expand=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/bernoulli.html#Bernoulli.enumerate_support) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/bernoulli.html#Bernoulli.expand) `has_enumerate_support = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/bernoulli.html#Bernoulli.log_prob) `logits` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/bernoulli.html#Bernoulli.logits) `property mean` `property param_shape` `probs` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/bernoulli.html#Bernoulli.probs) `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/bernoulli.html#Bernoulli.sample) `support = Boolean()` `property variance` Beta ---- `class torch.distributions.beta.Beta(concentration1, concentration0, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/beta.html#Beta) Bases: [`torch.distributions.exp_family.ExponentialFamily`](#torch.distributions.exp_family.ExponentialFamily "torch.distributions.exp_family.ExponentialFamily") Beta distribution parameterized by [`concentration1`](#torch.distributions.beta.Beta.concentration1 "torch.distributions.beta.Beta.concentration1") and [`concentration0`](#torch.distributions.beta.Beta.concentration0 "torch.distributions.beta.Beta.concentration0"). 
Example: ``` >>> m = Beta(torch.tensor([0.5]), torch.tensor([0.5])) >>> m.sample() # Beta distributed with concentration concentration1 and concentration0 tensor([ 0.1046]) ``` Parameters * **concentration1** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – 1st concentration parameter of the distribution (often referred to as alpha) * **concentration0** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – 2nd concentration parameter of the distribution (often referred to as beta) `arg_constraints = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}` `property concentration0` `property concentration1` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/beta.html#Beta.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/beta.html#Beta.expand) `has_rsample = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/beta.html#Beta.log_prob) `property mean` `rsample(sample_shape=())` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/beta.html#Beta.rsample) `support = Interval(lower_bound=0.0, upper_bound=1.0)` `property variance` Binomial -------- `class torch.distributions.binomial.Binomial(total_count=1, probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/binomial.html#Binomial) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a Binomial distribution parameterized by `total_count` and either [`probs`](#torch.distributions.binomial.Binomial.probs "torch.distributions.binomial.Binomial.probs") or [`logits`](#torch.distributions.binomial.Binomial.logits "torch.distributions.binomial.Binomial.logits") (but not both). `total_count` must be broadcastable with [`probs`](#torch.distributions.binomial.Binomial.probs "torch.distributions.binomial.Binomial.probs")/[`logits`](#torch.distributions.binomial.Binomial.logits "torch.distributions.binomial.Binomial.logits"). 
Example:

```
>>> m = Binomial(100, torch.tensor([0 , .2, .8, 1]))
>>> x = m.sample()
tensor([   0.,   22.,   71.,  100.])

>>> m = Binomial(torch.tensor([[5.], [10.]]), torch.tensor([0.5, 0.8]))
>>> x = m.sample()
tensor([[ 4.,  5.],
        [ 7.,  6.]])
```

Parameters

* **total\_count** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – number of Bernoulli trials
* **probs** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Event probabilities
* **logits** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Event log-odds

`arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0), 'total_count': IntegerGreaterThan(lower_bound=0)}`

`enumerate_support(expand=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/binomial.html#Binomial.enumerate_support)

`expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/binomial.html#Binomial.expand)

`has_enumerate_support = True`

`log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/binomial.html#Binomial.log_prob)

`logits` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/binomial.html#Binomial.logits)

`property mean`

`property param_shape`

`probs` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/binomial.html#Binomial.probs)

`sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/binomial.html#Binomial.sample)

`property support`

`property variance`

Categorical
-----------

`class torch.distributions.categorical.Categorical(probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/categorical.html#Categorical)

Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution")

Creates a categorical distribution parameterized by either [`probs`](#torch.distributions.categorical.Categorical.probs "torch.distributions.categorical.Categorical.probs") or [`logits`](#torch.distributions.categorical.Categorical.logits "torch.distributions.categorical.Categorical.logits") (but not both).

Note

It is equivalent to the distribution that [`torch.multinomial()`](generated/torch.multinomial#torch.multinomial "torch.multinomial") samples from.

Samples are integers from $\{0, \ldots, K-1\}$ where `K` is `probs.size(-1)`.

If `probs` is 1-dimensional with length `K`, each element is the relative probability of sampling the class at that index.

If `probs` is N-dimensional, the first N-1 dimensions are treated as a batch of relative probability vectors.

Note

The `probs` argument must be non-negative, finite and have a non-zero sum, and it will be normalized to sum to 1 along the last dimension. [`probs`](#torch.distributions.categorical.Categorical.probs "torch.distributions.categorical.Categorical.probs") will return this normalized value. The `logits` argument will be interpreted as unnormalized log probabilities and can therefore be any real number. It will likewise be normalized so that the resulting probabilities sum to 1 along the last dimension. [`logits`](#torch.distributions.categorical.Categorical.logits "torch.distributions.categorical.Categorical.logits") will return this normalized value.
See also: [`torch.multinomial()`](generated/torch.multinomial#torch.multinomial "torch.multinomial") Example: ``` >>> m = Categorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ])) >>> m.sample() # equal probability of 0, 1, 2, 3 tensor(3) ``` Parameters * **probs** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – event probabilities * **logits** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – event log probabilities (unnormalized) `arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/categorical.html#Categorical.entropy) `enumerate_support(expand=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/categorical.html#Categorical.enumerate_support) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/categorical.html#Categorical.expand) `has_enumerate_support = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/categorical.html#Categorical.log_prob) `logits` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/categorical.html#Categorical.logits) `property mean` `property param_shape` `probs` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/categorical.html#Categorical.probs) `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/categorical.html#Categorical.sample) `property support` `property variance` Cauchy ------ `class torch.distributions.cauchy.Cauchy(loc, scale, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/cauchy.html#Cauchy) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Samples from a Cauchy (Lorentz) distribution. The distribution of the ratio of independent normally distributed random variables with means `0` follows a Cauchy distribution. Example: ``` >>> m = Cauchy(torch.tensor([0.0]), torch.tensor([1.0])) >>> m.sample() # sample from a Cauchy distribution with loc=0 and scale=1 tensor([ 2.3214]) ``` Parameters * **loc** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – mode or median of the distribution. * **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – half width at half maximum. 
`arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}` `cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/cauchy.html#Cauchy.cdf) `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/cauchy.html#Cauchy.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/cauchy.html#Cauchy.expand) `has_rsample = True` `icdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/cauchy.html#Cauchy.icdf) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/cauchy.html#Cauchy.log_prob) `property mean` `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/cauchy.html#Cauchy.rsample) `support = Real()` `property variance` Chi2 ---- `class torch.distributions.chi2.Chi2(df, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/chi2.html#Chi2) Bases: [`torch.distributions.gamma.Gamma`](#torch.distributions.gamma.Gamma "torch.distributions.gamma.Gamma") Creates a Chi2 distribution parameterized by shape parameter [`df`](#torch.distributions.chi2.Chi2.df "torch.distributions.chi2.Chi2.df"). This is exactly equivalent to `Gamma(alpha=0.5*df, beta=0.5)` Example: ``` >>> m = Chi2(torch.tensor([1.0])) >>> m.sample() # Chi2 distributed with shape df=1 tensor([ 0.1046]) ``` Parameters **df** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – shape parameter of the distribution `arg_constraints = {'df': GreaterThan(lower_bound=0.0)}` `property df` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/chi2.html#Chi2.expand) ContinuousBernoulli ------------------- `class torch.distributions.continuous_bernoulli.ContinuousBernoulli(probs=None, logits=None, lims=(0.499, 0.501), validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli) Bases: [`torch.distributions.exp_family.ExponentialFamily`](#torch.distributions.exp_family.ExponentialFamily "torch.distributions.exp_family.ExponentialFamily") Creates a continuous Bernoulli distribution parameterized by [`probs`](#torch.distributions.continuous_bernoulli.ContinuousBernoulli.probs "torch.distributions.continuous_bernoulli.ContinuousBernoulli.probs") or [`logits`](#torch.distributions.continuous_bernoulli.ContinuousBernoulli.logits "torch.distributions.continuous_bernoulli.ContinuousBernoulli.logits") (but not both). The distribution is supported in [0, 1] and parameterized by ‘probs’ (in (0,1)) or ‘logits’ (real-valued). Note that, unlike the Bernoulli, ‘probs’ does not correspond to a probability and ‘logits’ does not correspond to log-odds, but the same names are used due to the similarity with the Bernoulli. See [1] for more details. Example: ``` >>> m = ContinuousBernoulli(torch.tensor([0.3])) >>> m.sample() tensor([ 0.2538]) ``` Parameters * **probs** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – (0,1) valued parameters * **logits** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – real valued parameters whose sigmoid matches ‘probs’ [1] The continuous Bernoulli: fixing a pervasive error in variational autoencoders, Loaiza-Ganem G and Cunningham JP, NeurIPS 2019. 
<https://arxiv.org/abs/1907.06845>

`arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}`

`cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.cdf)

`entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.entropy)

`expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.expand)

`has_rsample = True`

`icdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.icdf)

`log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.log_prob)

`logits` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.logits)

`property mean`

`property param_shape`

`probs` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.probs)

`rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.rsample)

`sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/continuous_bernoulli.html#ContinuousBernoulli.sample)

`property stddev`

`support = Interval(lower_bound=0.0, upper_bound=1.0)`

`property variance`

Dirichlet
---------

`class torch.distributions.dirichlet.Dirichlet(concentration, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/dirichlet.html#Dirichlet)

Bases: [`torch.distributions.exp_family.ExponentialFamily`](#torch.distributions.exp_family.ExponentialFamily "torch.distributions.exp_family.ExponentialFamily")

Creates a Dirichlet distribution parameterized by concentration `concentration`.

Example:

```
>>> m = Dirichlet(torch.tensor([0.5, 0.5]))
>>> m.sample()  # Dirichlet distributed with the given concentration
tensor([ 0.1046,  0.8954])
```

Parameters

**concentration** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – concentration parameter of the distribution (often referred to as alpha)

`arg_constraints = {'concentration': IndependentConstraint(GreaterThan(lower_bound=0.0), 1)}`

`entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/dirichlet.html#Dirichlet.entropy)

`expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/dirichlet.html#Dirichlet.expand)

`has_rsample = True`

`log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/dirichlet.html#Dirichlet.log_prob)

`property mean`

`rsample(sample_shape=())` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/dirichlet.html#Dirichlet.rsample)

`support = Simplex()`

`property variance`

Exponential
-----------

`class torch.distributions.exponential.Exponential(rate, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exponential.html#Exponential)

Bases: [`torch.distributions.exp_family.ExponentialFamily`](#torch.distributions.exp_family.ExponentialFamily "torch.distributions.exp_family.ExponentialFamily")

Creates an Exponential distribution parameterized by `rate`.
Example: ``` >>> m = Exponential(torch.tensor([1.0])) >>> m.sample() # Exponential distributed with rate=1 tensor([ 0.1046]) ``` Parameters **rate** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – rate = 1 / scale of the distribution `arg_constraints = {'rate': GreaterThan(lower_bound=0.0)}` `cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exponential.html#Exponential.cdf) `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exponential.html#Exponential.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exponential.html#Exponential.expand) `has_rsample = True` `icdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exponential.html#Exponential.icdf) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exponential.html#Exponential.log_prob) `property mean` `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/exponential.html#Exponential.rsample) `property stddev` `support = GreaterThan(lower_bound=0.0)` `property variance` FisherSnedecor -------------- `class torch.distributions.fishersnedecor.FisherSnedecor(df1, df2, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/fishersnedecor.html#FisherSnedecor) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a Fisher-Snedecor distribution parameterized by `df1` and `df2`. Example: ``` >>> m = FisherSnedecor(torch.tensor([1.0]), torch.tensor([2.0])) >>> m.sample() # Fisher-Snedecor-distributed with df1=1 and df2=2 tensor([ 0.2453]) ``` Parameters * **df1** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – degrees of freedom parameter 1 * **df2** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – degrees of freedom parameter 2 `arg_constraints = {'df1': GreaterThan(lower_bound=0.0), 'df2': GreaterThan(lower_bound=0.0)}` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/fishersnedecor.html#FisherSnedecor.expand) `has_rsample = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/fishersnedecor.html#FisherSnedecor.log_prob) `property mean` `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/fishersnedecor.html#FisherSnedecor.rsample) `support = GreaterThan(lower_bound=0.0)` `property variance` Gamma ----- `class torch.distributions.gamma.Gamma(concentration, rate, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gamma.html#Gamma) Bases: [`torch.distributions.exp_family.ExponentialFamily`](#torch.distributions.exp_family.ExponentialFamily "torch.distributions.exp_family.ExponentialFamily") Creates a Gamma distribution parameterized by shape `concentration` and `rate`. 
Example: 

```
>>> m = Gamma(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample()  # Gamma distributed with concentration=1 and rate=1
tensor([ 0.1046])
```

Parameters * **concentration** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – shape parameter of the distribution (often referred to as alpha) * **rate** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – rate = 1 / scale of the distribution (often referred to as beta) `arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'rate': GreaterThan(lower_bound=0.0)}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gamma.html#Gamma.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gamma.html#Gamma.expand) `has_rsample = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gamma.html#Gamma.log_prob) `property mean` `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gamma.html#Gamma.rsample) `support = GreaterThan(lower_bound=0.0)` `property variance` 

Geometric
---------

`class torch.distributions.geometric.Geometric(probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/geometric.html#Geometric) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a Geometric distribution parameterized by [`probs`](#torch.distributions.geometric.Geometric.probs "torch.distributions.geometric.Geometric.probs"), where [`probs`](#torch.distributions.geometric.Geometric.probs "torch.distributions.geometric.Geometric.probs") is the probability of success of Bernoulli trials. It represents the probability that in k + 1 Bernoulli trials, the first k trials fail before the first success. Samples are non-negative integers [0, ∞). Example: 

```
>>> m = Geometric(torch.tensor([0.3]))
>>> m.sample()  # underlying Bernoulli has 30% chance 1; 70% chance 0
tensor([ 2.])
```

Parameters * **probs** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the probability of sampling `1`. Must be in range (0, 1] * **logits** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the log-odds of sampling `1`. 
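As a quick check of this pmf, here is a minimal sketch (illustrative, not part of the original page) comparing `log_prob` with the closed form P(K = k) = (1 - p)^k * p:

```
import torch
from torch.distributions import Geometric

p = torch.tensor(0.3)
m = Geometric(probs=p)
k = torch.tensor(2.0)  # two failures before the first success

# log P(K = k) = k * log(1 - p) + log(p)
manual = k * torch.log1p(-p) + torch.log(p)
assert torch.allclose(m.log_prob(k), manual)
```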
`arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/geometric.html#Geometric.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/geometric.html#Geometric.expand) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/geometric.html#Geometric.log_prob) `logits` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/geometric.html#Geometric.logits) `property mean` `probs` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/geometric.html#Geometric.probs) `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/geometric.html#Geometric.sample) `support = IntegerGreaterThan(lower_bound=0)` `property variance` Gumbel ------ `class torch.distributions.gumbel.Gumbel(loc, scale, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gumbel.html#Gumbel) Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") Samples from a Gumbel Distribution. Examples: ``` >>> m = Gumbel(torch.tensor([1.0]), torch.tensor([2.0])) >>> m.sample() # sample from Gumbel distribution with loc=1, scale=2 tensor([ 1.0124]) ``` Parameters * **loc** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – Location parameter of the distribution * **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – Scale parameter of the distribution `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gumbel.html#Gumbel.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gumbel.html#Gumbel.expand) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/gumbel.html#Gumbel.log_prob) `property mean` `property stddev` `support = Real()` `property variance` HalfCauchy ---------- `class torch.distributions.half_cauchy.HalfCauchy(scale, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_cauchy.html#HalfCauchy) Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") Creates a half-Cauchy distribution parameterized by `scale` where: ``` X ~ Cauchy(0, scale) Y = |X| ~ HalfCauchy(scale) ``` Example: ``` >>> m = HalfCauchy(torch.tensor([1.0])) >>> m.sample() # half-cauchy distributed with scale=1 tensor([ 2.3214]) ``` Parameters **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – scale of the full Cauchy distribution `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}` `cdf(value)` 
[[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_cauchy.html#HalfCauchy.cdf) `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_cauchy.html#HalfCauchy.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_cauchy.html#HalfCauchy.expand) `has_rsample = True` `icdf(prob)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_cauchy.html#HalfCauchy.icdf) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_cauchy.html#HalfCauchy.log_prob) `property mean` `property scale` `support = GreaterThan(lower_bound=0.0)` `property variance` HalfNormal ---------- `class torch.distributions.half_normal.HalfNormal(scale, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_normal.html#HalfNormal) Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") Creates a half-normal distribution parameterized by `scale` where: ``` X ~ Normal(0, scale) Y = |X| ~ HalfNormal(scale) ``` Example: ``` >>> m = HalfNormal(torch.tensor([1.0])) >>> m.sample() # half-normal distributed with scale=1 tensor([ 0.1046]) ``` Parameters **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – scale of the full Normal distribution `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}` `cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_normal.html#HalfNormal.cdf) `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_normal.html#HalfNormal.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_normal.html#HalfNormal.expand) `has_rsample = True` `icdf(prob)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_normal.html#HalfNormal.icdf) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/half_normal.html#HalfNormal.log_prob) `property mean` `property scale` `support = GreaterThan(lower_bound=0.0)` `property variance` Independent ----------- `class torch.distributions.independent.Independent(base_distribution, reinterpreted_batch_ndims, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/independent.html#Independent) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Reinterprets some of the batch dims of a distribution as event dims. This is mainly useful for changing the shape of the result of [`log_prob()`](#torch.distributions.independent.Independent.log_prob "torch.distributions.independent.Independent.log_prob"). 
For example, to create a diagonal Normal distribution with the same shape as a Multivariate Normal distribution (so they are interchangeable), you can: 

```
>>> loc = torch.zeros(3)
>>> scale = torch.ones(3)
>>> mvn = MultivariateNormal(loc, scale_tril=torch.diag(scale))
>>> [mvn.batch_shape, mvn.event_shape]
[torch.Size(()), torch.Size((3,))]
>>> normal = Normal(loc, scale)
>>> [normal.batch_shape, normal.event_shape]
[torch.Size((3,)), torch.Size(())]
>>> diagn = Independent(normal, 1)
>>> [diagn.batch_shape, diagn.event_shape]
[torch.Size(()), torch.Size((3,))]
```

Parameters * **base\_distribution** ([torch.distributions.distribution.Distribution](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution")) – a base distribution * **reinterpreted\_batch\_ndims** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the number of batch dims to reinterpret as event dims `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/independent.html#Independent.entropy) `enumerate_support(expand=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/independent.html#Independent.enumerate_support) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/independent.html#Independent.expand) `property has_enumerate_support` `property has_rsample` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/independent.html#Independent.log_prob) `property mean` `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/independent.html#Independent.rsample) `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/independent.html#Independent.sample) `property support` `property variance` 

Kumaraswamy
-----------

`class torch.distributions.kumaraswamy.Kumaraswamy(concentration1, concentration0, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/kumaraswamy.html#Kumaraswamy) Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") Samples from a Kumaraswamy distribution. 
Example: 

```
>>> m = Kumaraswamy(torch.Tensor([1.0]), torch.Tensor([1.0]))
>>> m.sample()  # sample from a Kumaraswamy distribution with concentration alpha=1 and beta=1
tensor([ 0.1729])
```

Parameters * **concentration1** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – 1st concentration parameter of the distribution (often referred to as alpha) * **concentration0** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – 2nd concentration parameter of the distribution (often referred to as beta) `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/kumaraswamy.html#Kumaraswamy.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/kumaraswamy.html#Kumaraswamy.expand) `has_rsample = True` `property mean` `support = Interval(lower_bound=0.0, upper_bound=1.0)` `property variance` 

LKJCholesky
-----------

`class torch.distributions.lkj_cholesky.LKJCholesky(dim, concentration=1.0, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lkj_cholesky.html#LKJCholesky) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") LKJ distribution for the lower Cholesky factor of correlation matrices. The distribution is controlled by the `concentration` parameter η to make the probability of a correlation matrix M generated from a Cholesky factor proportional to det(M)^(η − 1). Because of that, when `concentration == 1`, we have a uniform distribution over Cholesky factors of correlation matrices. Note that this distribution samples the Cholesky factor of correlation matrices and not the correlation matrices themselves, and thereby differs slightly from the derivations in [1] for the `LKJCorr` distribution. For sampling, this uses the Onion method from [1] Section 3: 

```
L ~ LKJCholesky(dim, concentration)
X = L @ L' ~ LKJCorr(dim, concentration)
```

Example: 

```
>>> l = LKJCholesky(3, 0.5)
>>> l.sample()  # l @ l.T is a sample of a 3x3 correlation matrix
tensor([[ 1.0000,  0.0000,  0.0000],
        [ 0.3516,  0.9361,  0.0000],
        [-0.1899,  0.4748,  0.8593]])
```

Parameters * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension of the matrices * **concentration** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – concentration/shape parameter of the distribution (often referred to as eta) **References** [1] `Generating random correlation matrices based on vines and extended onion method`, Daniel Lewandowski, Dorota Kurowicka, Harry Joe. 
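A brief sketch (illustrative, not from the original page) confirming that a draw is a lower-triangular Cholesky factor whose product has the unit diagonal a correlation matrix requires:

```
import torch
from torch.distributions import LKJCholesky

l = LKJCholesky(3, 0.5).sample()
corr = l @ l.T  # X = L @ L' is the sampled correlation matrix
assert torch.allclose(corr.diagonal(), torch.ones(3), atol=1e-5)
```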
`arg_constraints = {'concentration': GreaterThan(lower_bound=0.0)}` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lkj_cholesky.html#LKJCholesky.expand) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lkj_cholesky.html#LKJCholesky.log_prob) `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lkj_cholesky.html#LKJCholesky.sample) `support = CorrCholesky()` Laplace ------- `class torch.distributions.laplace.Laplace(loc, scale, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/laplace.html#Laplace) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a Laplace distribution parameterized by `loc` and `scale`. Example: ``` >>> m = Laplace(torch.tensor([0.0]), torch.tensor([1.0])) >>> m.sample() # Laplace distributed with loc=0, scale=1 tensor([ 0.1046]) ``` Parameters * **loc** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – mean of the distribution * **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – scale of the distribution `arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}` `cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/laplace.html#Laplace.cdf) `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/laplace.html#Laplace.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/laplace.html#Laplace.expand) `has_rsample = True` `icdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/laplace.html#Laplace.icdf) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/laplace.html#Laplace.log_prob) `property mean` `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/laplace.html#Laplace.rsample) `property stddev` `support = Real()` `property variance` LogNormal --------- `class torch.distributions.log_normal.LogNormal(loc, scale, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/log_normal.html#LogNormal) Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") Creates a log-normal distribution parameterized by [`loc`](#torch.distributions.log_normal.LogNormal.loc "torch.distributions.log_normal.LogNormal.loc") and [`scale`](#torch.distributions.log_normal.LogNormal.scale "torch.distributions.log_normal.LogNormal.scale") where: ``` X ~ Normal(loc, scale) Y = exp(X) ~ LogNormal(loc, scale) ``` Example: ``` >>> m = LogNormal(torch.tensor([0.0]), torch.tensor([1.0])) >>> m.sample() # log-normal distributed with mean=0 and stddev=1 tensor([ 0.1046]) ``` Parameters * **loc** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – mean of log of distribution * **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* 
[Tensor](tensors#torch.Tensor "torch.Tensor")) – standard deviation of log of the distribution `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/log_normal.html#LogNormal.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/log_normal.html#LogNormal.expand) `has_rsample = True` `property loc` `property mean` `property scale` `support = GreaterThan(lower_bound=0.0)` `property variance` 

LowRankMultivariateNormal
-------------------------

`class torch.distributions.lowrank_multivariate_normal.LowRankMultivariateNormal(loc, cov_factor, cov_diag, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a multivariate normal distribution with covariance matrix having a low-rank form parameterized by `cov_factor` and `cov_diag`: 

```
covariance_matrix = cov_factor @ cov_factor.T + cov_diag
```

#### Example 

```
>>> m = LowRankMultivariateNormal(torch.zeros(2), torch.tensor([[1.], [0.]]), torch.ones(2))
>>> m.sample()  # normally distributed with mean=`[0,0]`, cov_factor=`[[1],[0]]`, cov_diag=`[1,1]`
tensor([-0.2102, -0.5429])
```

Parameters * **loc** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – mean of the distribution with shape `batch_shape + event_shape` * **cov\_factor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – factor part of low-rank form of covariance matrix with shape `batch_shape + event_shape + (rank,)` * **cov\_diag** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – diagonal part of low-rank form of covariance matrix with shape `batch_shape + event_shape` Note Computation of the determinant and inverse of the covariance matrix is avoided when `cov_factor.shape[1] << cov_factor.shape[0]` thanks to the [Woodbury matrix identity](https://en.wikipedia.org/wiki/Woodbury_matrix_identity) and the [matrix determinant lemma](https://en.wikipedia.org/wiki/Matrix_determinant_lemma). 
Thanks to these formulas, we just need to compute the determinant and inverse of the small-size “capacitance” matrix: 

```
capacitance = I + cov_factor.T @ inv(cov_diag) @ cov_factor
```

`arg_constraints = {'cov_diag': IndependentConstraint(GreaterThan(lower_bound=0.0), 1), 'cov_factor': IndependentConstraint(Real(), 2), 'loc': IndependentConstraint(Real(), 1)}` `covariance_matrix` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal.covariance_matrix) `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal.expand) `has_rsample = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal.log_prob) `property mean` `precision_matrix` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal.precision_matrix) `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal.rsample) `scale_tril` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal.scale_tril) `support = IndependentConstraint(Real(), 1)` `variance` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/lowrank_multivariate_normal.html#LowRankMultivariateNormal.variance) 

MixtureSameFamily
-----------------

`class torch.distributions.mixture_same_family.MixtureSameFamily(mixture_distribution, component_distribution, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/mixture_same_family.html#MixtureSameFamily) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") The `MixtureSameFamily` distribution implements a (batch of) mixture distribution where all components are from different parameterizations of the same distribution type. It is parameterized by a `Categorical` “selecting distribution” (over `k` components) and a component distribution, i.e., a `Distribution` with a rightmost batch shape (equal to `[k]`) which indexes each (batch of) component. Examples: 

```
# Construct Gaussian Mixture Model in 1D consisting of 5 equally
# weighted normal distributions
>>> mix = D.Categorical(torch.ones(5,))
>>> comp = D.Normal(torch.randn(5,), torch.rand(5,))
>>> gmm = MixtureSameFamily(mix, comp)

# Construct Gaussian Mixture Model in 2D consisting of 5 equally
# weighted bivariate normal distributions
>>> mix = D.Categorical(torch.ones(5,))
>>> comp = D.Independent(D.Normal(
            torch.randn(5,2), torch.rand(5,2)), 1)
>>> gmm = MixtureSameFamily(mix, comp)

# Construct a batch of 3 Gaussian Mixture Models in 2D each
# consisting of 5 random weighted bivariate normal distributions
>>> mix = D.Categorical(torch.rand(3,5))
>>> comp = D.Independent(D.Normal(
            torch.randn(3,5,2), torch.rand(3,5,2)), 1)
>>> gmm = MixtureSameFamily(mix, comp)
```

Parameters * **mixture\_distribution** – `torch.distributions.Categorical`-like instance. Manages the probability of selecting a component. 
The number of categories must match the rightmost batch dimension of the `component_distribution`. Must have either scalar `batch_shape` or `batch_shape` matching `component_distribution.batch_shape[:-1]` * **component\_distribution** – `torch.distributions.Distribution`-like instance. The right-most batch dimension indexes components. `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {}` `cdf(x)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/mixture_same_family.html#MixtureSameFamily.cdf) `property component_distribution` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/mixture_same_family.html#MixtureSameFamily.expand) `has_rsample = False` `log_prob(x)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/mixture_same_family.html#MixtureSameFamily.log_prob) `property mean` `property mixture_distribution` `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/mixture_same_family.html#MixtureSameFamily.sample) `property support` `property variance` 

Multinomial
-----------

`class torch.distributions.multinomial.Multinomial(total_count=1, probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multinomial.html#Multinomial) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a Multinomial distribution parameterized by [`total_count`](#torch.distributions.multinomial.Multinomial.total_count "torch.distributions.multinomial.Multinomial.total_count") and either [`probs`](#torch.distributions.multinomial.Multinomial.probs "torch.distributions.multinomial.Multinomial.probs") or [`logits`](#torch.distributions.multinomial.Multinomial.logits "torch.distributions.multinomial.Multinomial.logits") (but not both). The innermost dimension of [`probs`](#torch.distributions.multinomial.Multinomial.probs "torch.distributions.multinomial.Multinomial.probs") indexes over categories. All other dimensions index over batches. Note that [`total_count`](#torch.distributions.multinomial.Multinomial.total_count "torch.distributions.multinomial.Multinomial.total_count") need not be specified if only [`log_prob()`](#torch.distributions.multinomial.Multinomial.log_prob "torch.distributions.multinomial.Multinomial.log_prob") is called (see example below). Note The `probs` argument must be non-negative, finite and have a non-zero sum, and it will be normalized to sum to 1 along the last dimension. [`probs`](#torch.distributions.multinomial.Multinomial.probs "torch.distributions.multinomial.Multinomial.probs") will return this normalized value (see the sketch below). The `logits` argument will be interpreted as unnormalized log probabilities and can therefore be any real number. It will likewise be normalized so that the resulting probabilities sum to 1 along the last dimension. [`logits`](#torch.distributions.multinomial.Multinomial.logits "torch.distributions.multinomial.Multinomial.logits") will return this normalized value. * [`sample()`](#torch.distributions.multinomial.Multinomial.sample "torch.distributions.multinomial.Multinomial.sample") requires a single shared `total_count` for all parameters and samples. * [`log_prob()`](#torch.distributions.multinomial.Multinomial.log_prob "torch.distributions.multinomial.Multinomial.log_prob") allows different `total_count` for each parameter and sample. 
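A minimal sketch (illustrative, not part of the original page) of the normalization described in the note above:

```
import torch
from torch.distributions import Multinomial

m = Multinomial(total_count=10, probs=torch.tensor([1., 2., 1.]))
m.probs     # normalized to the simplex: tensor([0.2500, 0.5000, 0.2500])
m.sample()  # counts over the 3 categories, summing to 10
```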
Example: 

```
>>> m = Multinomial(100, torch.tensor([ 1., 1., 1., 1.]))
>>> x = m.sample()  # equal probability of 0, 1, 2, 3
tensor([ 21.,  24.,  30.,  25.])
>>> Multinomial(probs=torch.tensor([1., 1., 1., 1.])).log_prob(x)
tensor([-4.1338])
```

Parameters * **total\_count** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of trials * **probs** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – event probabilities * **logits** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – event log probabilities (unnormalized) `arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multinomial.html#Multinomial.expand) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multinomial.html#Multinomial.log_prob) `property logits` `property mean` `property param_shape` `property probs` `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multinomial.html#Multinomial.sample) `property support` `total_count: int = None` `property variance` 

MultivariateNormal
------------------

`class torch.distributions.multivariate_normal.MultivariateNormal(loc, covariance_matrix=None, precision_matrix=None, scale_tril=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multivariate_normal.html#MultivariateNormal) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a multivariate normal (also called Gaussian) distribution parameterized by a mean vector and a covariance matrix. The multivariate normal distribution can be parameterized either in terms of a positive definite covariance matrix Σ or a positive definite precision matrix Σ⁻¹ or a lower-triangular matrix L with positive-valued diagonal entries, such that Σ = LLᵀ. This triangular matrix can be obtained via e.g. Cholesky decomposition of the covariance. 

#### Example 

```
>>> m = MultivariateNormal(torch.zeros(2), torch.eye(2))
>>> m.sample()  # normally distributed with mean=`[0,0]` and covariance_matrix=`I`
tensor([-0.2102, -0.5429])
```

Parameters * **loc** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – mean of the distribution * **covariance\_matrix** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – positive-definite covariance matrix * **precision\_matrix** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – positive-definite precision matrix * **scale\_tril** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – lower-triangular factor of covariance, with positive-valued diagonal Note Only one of [`covariance_matrix`](#torch.distributions.multivariate_normal.MultivariateNormal.covariance_matrix "torch.distributions.multivariate_normal.MultivariateNormal.covariance_matrix") or [`precision_matrix`](#torch.distributions.multivariate_normal.MultivariateNormal.precision_matrix "torch.distributions.multivariate_normal.MultivariateNormal.precision_matrix") or [`scale_tril`](#torch.distributions.multivariate_normal.MultivariateNormal.scale_tril "torch.distributions.multivariate_normal.MultivariateNormal.scale_tril") can be specified. 
Using [`scale_tril`](#torch.distributions.multivariate_normal.MultivariateNormal.scale_tril "torch.distributions.multivariate_normal.MultivariateNormal.scale_tril") will be more efficient: all computations internally are based on [`scale_tril`](#torch.distributions.multivariate_normal.MultivariateNormal.scale_tril "torch.distributions.multivariate_normal.MultivariateNormal.scale_tril"). If [`covariance_matrix`](#torch.distributions.multivariate_normal.MultivariateNormal.covariance_matrix "torch.distributions.multivariate_normal.MultivariateNormal.covariance_matrix") or [`precision_matrix`](#torch.distributions.multivariate_normal.MultivariateNormal.precision_matrix "torch.distributions.multivariate_normal.MultivariateNormal.precision_matrix") is passed instead, it is only used to compute the corresponding lower triangular matrices using a Cholesky decomposition. `arg_constraints = {'covariance_matrix': PositiveDefinite(), 'loc': IndependentConstraint(Real(), 1), 'precision_matrix': PositiveDefinite(), 'scale_tril': LowerCholesky()}` `covariance_matrix` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multivariate_normal.html#MultivariateNormal.covariance_matrix) `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multivariate_normal.html#MultivariateNormal.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multivariate_normal.html#MultivariateNormal.expand) `has_rsample = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multivariate_normal.html#MultivariateNormal.log_prob) `property mean` `precision_matrix` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multivariate_normal.html#MultivariateNormal.precision_matrix) `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multivariate_normal.html#MultivariateNormal.rsample) `scale_tril` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/multivariate_normal.html#MultivariateNormal.scale_tril) `support = IndependentConstraint(Real(), 1)` `property variance` NegativeBinomial ---------------- `class torch.distributions.negative_binomial.NegativeBinomial(total_count, probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/negative_binomial.html#NegativeBinomial) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a Negative Binomial distribution, i.e. distribution of the number of successful independent and identical Bernoulli trials before `total_count` failures are achieved. The probability of failure of each Bernoulli trial is [`probs`](#torch.distributions.negative_binomial.NegativeBinomial.probs "torch.distributions.negative_binomial.NegativeBinomial.probs"). 
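No usage example appears at this point on the original page; a minimal illustrative sketch in the style of the surrounding entries:

```
import torch
from torch.distributions import NegativeBinomial

# stop after 10 failures, each trial failing with probability 0.4
m = NegativeBinomial(total_count=10, probs=torch.tensor(0.4))
m.sample()  # number of successes seen before the 10th failure
m.mean      # closed form used internally: total_count * probs / (1 - probs)
```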
Parameters * **total\_count** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – non-negative number of negative Bernoulli trials to stop, although the distribution is still valid for real valued count * **probs** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Event probabilities of failure in the half open interval [0, 1) * **logits** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Event log-odds for probabilities of failure `arg_constraints = {'logits': Real(), 'probs': HalfOpenInterval(lower_bound=0.0, upper_bound=1.0), 'total_count': GreaterThanEq(lower_bound=0)}` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/negative_binomial.html#NegativeBinomial.expand) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/negative_binomial.html#NegativeBinomial.log_prob) `logits` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/negative_binomial.html#NegativeBinomial.logits) `property mean` `property param_shape` `probs` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/negative_binomial.html#NegativeBinomial.probs) `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/negative_binomial.html#NegativeBinomial.sample) `support = IntegerGreaterThan(lower_bound=0)` `property variance` Normal ------ `class torch.distributions.normal.Normal(loc, scale, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/normal.html#Normal) Bases: [`torch.distributions.exp_family.ExponentialFamily`](#torch.distributions.exp_family.ExponentialFamily "torch.distributions.exp_family.ExponentialFamily") Creates a normal (also called Gaussian) distribution parameterized by `loc` and `scale`. 
Example: 

```
>>> m = Normal(torch.tensor([0.0]), torch.tensor([1.0]))
>>> m.sample()  # normally distributed with loc=0 and scale=1
tensor([ 0.1046])
```

Parameters * **loc** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – mean of the distribution (often referred to as mu) * **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – standard deviation of the distribution (often referred to as sigma) `arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}` `cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/normal.html#Normal.cdf) `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/normal.html#Normal.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/normal.html#Normal.expand) `has_rsample = True` `icdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/normal.html#Normal.icdf) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/normal.html#Normal.log_prob) `property mean` `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/normal.html#Normal.rsample) `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/normal.html#Normal.sample) `property stddev` `support = Real()` `property variance` 

OneHotCategorical
-----------------

`class torch.distributions.one_hot_categorical.OneHotCategorical(probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/one_hot_categorical.html#OneHotCategorical) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a one-hot categorical distribution parameterized by [`probs`](#torch.distributions.one_hot_categorical.OneHotCategorical.probs "torch.distributions.one_hot_categorical.OneHotCategorical.probs") or [`logits`](#torch.distributions.one_hot_categorical.OneHotCategorical.logits "torch.distributions.one_hot_categorical.OneHotCategorical.logits"). Samples are one-hot coded vectors of size `probs.size(-1)`. Note The `probs` argument must be non-negative, finite and have a non-zero sum, and it will be normalized to sum to 1 along the last dimension. [`probs`](#torch.distributions.one_hot_categorical.OneHotCategorical.probs "torch.distributions.one_hot_categorical.OneHotCategorical.probs") will return this normalized value. The `logits` argument will be interpreted as unnormalized log probabilities and can therefore be any real number. It will likewise be normalized so that the resulting probabilities sum to 1 along the last dimension. [`logits`](#torch.distributions.one_hot_categorical.OneHotCategorical.logits "torch.distributions.one_hot_categorical.OneHotCategorical.logits") will return this normalized value. See also: `torch.distributions.Categorical()` for specifications of [`probs`](#torch.distributions.one_hot_categorical.OneHotCategorical.probs "torch.distributions.one_hot_categorical.OneHotCategorical.probs") and [`logits`](#torch.distributions.one_hot_categorical.OneHotCategorical.logits "torch.distributions.one_hot_categorical.OneHotCategorical.logits"). 
Example: ``` >>> m = OneHotCategorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ])) >>> m.sample() # equal probability of 0, 1, 2, 3 tensor([ 0., 0., 0., 1.]) ``` Parameters * **probs** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – event probabilities * **logits** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – event log probabilities (unnormalized) `arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/one_hot_categorical.html#OneHotCategorical.entropy) `enumerate_support(expand=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/one_hot_categorical.html#OneHotCategorical.enumerate_support) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/one_hot_categorical.html#OneHotCategorical.expand) `has_enumerate_support = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/one_hot_categorical.html#OneHotCategorical.log_prob) `property logits` `property mean` `property param_shape` `property probs` `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/one_hot_categorical.html#OneHotCategorical.sample) `support = OneHot()` `property variance` Pareto ------ `class torch.distributions.pareto.Pareto(scale, alpha, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/pareto.html#Pareto) Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") Samples from a Pareto Type 1 distribution. Example: ``` >>> m = Pareto(torch.tensor([1.0]), torch.tensor([1.0])) >>> m.sample() # sample from a Pareto distribution with scale=1 and alpha=1 tensor([ 1.5623]) ``` Parameters * **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – Scale parameter of the distribution * **alpha** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – Shape parameter of the distribution `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'alpha': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/pareto.html#Pareto.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/pareto.html#Pareto.expand) `property mean` `property support` `property variance` Poisson ------- `class torch.distributions.poisson.Poisson(rate, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/poisson.html#Poisson) Bases: [`torch.distributions.exp_family.ExponentialFamily`](#torch.distributions.exp_family.ExponentialFamily "torch.distributions.exp_family.ExponentialFamily") Creates a Poisson distribution parameterized by `rate`, the rate parameter. 
Samples are nonnegative integers, with a pmf given by rate^k · e^(−rate) / k!. Example: 

```
>>> m = Poisson(torch.tensor([4]))
>>> m.sample()
tensor([ 3.])
```

Parameters **rate** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the rate parameter `arg_constraints = {'rate': GreaterThan(lower_bound=0.0)}` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/poisson.html#Poisson.expand) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/poisson.html#Poisson.log_prob) `property mean` `sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/poisson.html#Poisson.sample) `support = IntegerGreaterThan(lower_bound=0)` `property variance` 

RelaxedBernoulli
----------------

`class torch.distributions.relaxed_bernoulli.RelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_bernoulli.html#RelaxedBernoulli) Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") Creates a RelaxedBernoulli distribution, parametrized by [`temperature`](#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.temperature "torch.distributions.relaxed_bernoulli.RelaxedBernoulli.temperature"), and either [`probs`](#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.probs "torch.distributions.relaxed_bernoulli.RelaxedBernoulli.probs") or [`logits`](#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.logits "torch.distributions.relaxed_bernoulli.RelaxedBernoulli.logits") (but not both). This is a relaxed version of the `Bernoulli` distribution, so the values are in (0, 1), and it has reparametrizable samples. 
Example: ``` >>> m = RelaxedBernoulli(torch.tensor([2.2]), torch.tensor([0.1, 0.2, 0.3, 0.99])) >>> m.sample() tensor([ 0.2951, 0.3442, 0.8918, 0.9021]) ``` Parameters * **temperature** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – relaxation temperature * **probs** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the probability of sampling `1` * **logits** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the log-odds of sampling `1` `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_bernoulli.html#RelaxedBernoulli.expand) `has_rsample = True` `property logits` `property probs` `support = Interval(lower_bound=0.0, upper_bound=1.0)` `property temperature` LogitRelaxedBernoulli --------------------- `class torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_bernoulli.html#LogitRelaxedBernoulli) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a LogitRelaxedBernoulli distribution parameterized by [`probs`](#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.probs "torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.probs") or [`logits`](#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.logits "torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.logits") (but not both), which is the logit of a RelaxedBernoulli distribution. Samples are logits of values in (0, 1). See [1] for more details. 
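No example accompanies this entry on the original page; a brief hedged sketch of the relationship stated above:

```
import torch
from torch.distributions import LogitRelaxedBernoulli

m = LogitRelaxedBernoulli(torch.tensor([2.2]), probs=torch.tensor([0.3]))
x = m.rsample()   # a logit, i.e. an unconstrained real number
torch.sigmoid(x)  # maps it to a RelaxedBernoulli sample in (0, 1)
```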
Parameters * **temperature** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – relaxation temperature * **probs** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the probability of sampling `1` * **logits** (*Number**,* [Tensor](tensors#torch.Tensor "torch.Tensor")) – the log-odds of sampling `1` [1] The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables (Maddison et al, 2017) [2] Categorical Reparametrization with Gumbel-Softmax (Jang et al, 2017) `arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_bernoulli.html#LogitRelaxedBernoulli.expand) `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_bernoulli.html#LogitRelaxedBernoulli.log_prob) `logits` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_bernoulli.html#LogitRelaxedBernoulli.logits) `property param_shape` `probs` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_bernoulli.html#LogitRelaxedBernoulli.probs) `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_bernoulli.html#LogitRelaxedBernoulli.rsample) `support = Real()` 

RelaxedOneHotCategorical
------------------------

`class torch.distributions.relaxed_categorical.RelaxedOneHotCategorical(temperature, probs=None, logits=None, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_categorical.html#RelaxedOneHotCategorical) Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") Creates a RelaxedOneHotCategorical distribution parametrized by [`temperature`](#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.temperature "torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.temperature"), and either [`probs`](#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.probs "torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.probs") or [`logits`](#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.logits "torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.logits"). This is a relaxed version of the `OneHotCategorical` distribution, so its samples are on the simplex, and are reparametrizable. 
Example: 

```
>>> m = RelaxedOneHotCategorical(torch.tensor([2.2]), torch.tensor([0.1, 0.2, 0.3, 0.4]))
>>> m.sample()
tensor([ 0.1294,  0.2324,  0.3859,  0.2523])
```

Parameters * **temperature** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – relaxation temperature * **probs** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – event probabilities * **logits** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – unnormalized log probability for each event `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}` `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/relaxed_categorical.html#RelaxedOneHotCategorical.expand) `has_rsample = True` `property logits` `property probs` `support = Simplex()` `property temperature` 

StudentT
--------

`class torch.distributions.studentT.StudentT(df, loc=0.0, scale=1.0, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/studentT.html#StudentT) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Creates a Student’s t-distribution parameterized by degrees of freedom `df`, mean `loc` and scale `scale`. Example: 

```
>>> m = StudentT(torch.tensor([2.0]))
>>> m.sample()  # Student's t-distributed with degrees of freedom=2
tensor([ 0.1046])
```

Parameters * **df** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – degrees of freedom * **loc** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – mean of the distribution * **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – scale of the distribution `arg_constraints = {'df': GreaterThan(lower_bound=0.0), 'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}` `entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/studentT.html#StudentT.entropy) `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/studentT.html#StudentT.expand) `has_rsample = True` `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/studentT.html#StudentT.log_prob) `property mean` `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/studentT.html#StudentT.rsample) `support = Real()` `property variance` 

TransformedDistribution
-----------------------

`class torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transformed_distribution.html#TransformedDistribution) Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution") Extension of the Distribution class, which applies a sequence of Transforms to a base distribution. 
Let f be the composition of transforms applied: ``` X ~ BaseDistribution Y = f(X) ~ TransformedDistribution(BaseDistribution, f) log p(Y) = log p(X) + log |det (dX/dY)| ``` Note that the `.event_shape` of a [`TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") is the maximum shape of its base distribution and its transforms, since transforms can introduce correlations among events. An example for the usage of [`TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution") would be: ``` # Building a Logistic Distribution # X ~ Uniform(0, 1) # f = a + b * logit(X) # Y ~ f(X) ~ Logistic(a, b) base_distribution = Uniform(0, 1) transforms = [SigmoidTransform().inv, AffineTransform(loc=a, scale=b)] logistic = TransformedDistribution(base_distribution, transforms) ``` For more examples, please look at the implementations of [`Gumbel`](#torch.distributions.gumbel.Gumbel "torch.distributions.gumbel.Gumbel"), [`HalfCauchy`](#torch.distributions.half_cauchy.HalfCauchy "torch.distributions.half_cauchy.HalfCauchy"), [`HalfNormal`](#torch.distributions.half_normal.HalfNormal "torch.distributions.half_normal.HalfNormal"), [`LogNormal`](#torch.distributions.log_normal.LogNormal "torch.distributions.log_normal.LogNormal"), [`Pareto`](#torch.distributions.pareto.Pareto "torch.distributions.pareto.Pareto"), [`Weibull`](#torch.distributions.weibull.Weibull "torch.distributions.weibull.Weibull"), [`RelaxedBernoulli`](#torch.distributions.relaxed_bernoulli.RelaxedBernoulli "torch.distributions.relaxed_bernoulli.RelaxedBernoulli") and [`RelaxedOneHotCategorical`](#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical "torch.distributions.relaxed_categorical.RelaxedOneHotCategorical") `arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {}` `cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transformed_distribution.html#TransformedDistribution.cdf) Computes the cumulative distribution function by inverting the transform(s) and computing the score of the base distribution. `expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transformed_distribution.html#TransformedDistribution.expand) `property has_rsample` `icdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transformed_distribution.html#TransformedDistribution.icdf) Computes the inverse cumulative distribution function using transform(s) and computing the score of the base distribution. `log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transformed_distribution.html#TransformedDistribution.log_prob) Scores the sample by inverting the transform(s) and computing the score using the score of the base distribution and the log abs det jacobian. `rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transformed_distribution.html#TransformedDistribution.rsample) Generates a sample\_shape shaped reparameterized sample or sample\_shape shaped batch of reparameterized samples if the distribution parameters are batched. Samples first from base distribution and applies `transform()` for every transform in the list. 
`sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transformed_distribution.html#TransformedDistribution.sample)

Generates a sample\_shape shaped sample or sample\_shape shaped batch of samples if the distribution parameters are batched. Samples first from base distribution and applies `transform()` for every transform in the list.

`property support`

Uniform
-------

`class torch.distributions.uniform.Uniform(low, high, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/uniform.html#Uniform)

Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution")

Generates uniformly distributed random samples from the half-open interval `[low, high)`.

Example:

```
>>> m = Uniform(torch.tensor([0.0]), torch.tensor([5.0]))
>>> m.sample() # uniformly distributed in the range [0.0, 5.0)
tensor([ 2.3418])
```

Parameters

* **low** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – lower range (inclusive).
* **high** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – upper range (exclusive).

`arg_constraints = {'high': Dependent(), 'low': Dependent()}`

`cdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/uniform.html#Uniform.cdf)

`entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/uniform.html#Uniform.entropy)

`expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/uniform.html#Uniform.expand)

`has_rsample = True`

`icdf(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/uniform.html#Uniform.icdf)

`log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/uniform.html#Uniform.log_prob)

`property mean`

`rsample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/uniform.html#Uniform.rsample)

`property stddev`

`property support`

`property variance`

VonMises
--------

`class torch.distributions.von_mises.VonMises(loc, concentration, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/von_mises.html#VonMises)

Bases: [`torch.distributions.distribution.Distribution`](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution")

A circular von Mises distribution.

This implementation uses polar coordinates. The `loc` and `value` args can be any real number (to facilitate unconstrained optimization), but are interpreted as angles modulo 2 pi.

Example:

```
>>> m = dist.VonMises(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample() # von Mises distributed with loc=1 and concentration=1
tensor([1.9777])
```

Parameters

* **loc** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")) – an angle in radians.
* **concentration** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")) – concentration parameter

`arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'loc': Real()}`

`expand(batch_shape)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/von_mises.html#VonMises.expand)

`has_rsample = False`

`log_prob(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/von_mises.html#VonMises.log_prob)

`property mean`

The provided mean is the circular one.

`sample(sample_shape=torch.Size([]))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/von_mises.html#VonMises.sample)

The sampling algorithm for the von Mises distribution is based on the following paper: Best, D. J., and Nicholas I. Fisher. “Efficient simulation of the von Mises distribution.” Applied Statistics (1979): 152-157.

`support = Real()`

`variance` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/von_mises.html#VonMises.variance)

The provided variance is the circular one.

Weibull
-------

`class torch.distributions.weibull.Weibull(scale, concentration, validate_args=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/weibull.html#Weibull)

Bases: [`torch.distributions.transformed_distribution.TransformedDistribution`](#torch.distributions.transformed_distribution.TransformedDistribution "torch.distributions.transformed_distribution.TransformedDistribution")

Samples from a two-parameter Weibull distribution.

Example:

```
>>> m = Weibull(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample() # sample from a Weibull distribution with scale=1, concentration=1
tensor([ 0.4784])
```

Parameters

* **scale** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – Scale parameter of distribution (lambda).
* **concentration** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – Concentration parameter of distribution (k/shape).

`arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'concentration': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}`

`entropy()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/weibull.html#Weibull.entropy)

`expand(batch_shape, _instance=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/weibull.html#Weibull.expand)

`property mean`

`support = GreaterThan(lower_bound=0.0)`

`property variance`

KL Divergence
-------------

`torch.distributions.kl.kl_divergence(p, q)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/kl.html#kl_divergence)

Compute Kullback-Leibler divergence $KL(p \| q)$ between two distributions.

$$KL(p \| q) = \int p(x) \log\frac{p(x)}{q(x)} \, dx$$

Parameters

* **p** ([Distribution](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution")) – A `Distribution` object.
* **q** ([Distribution](#torch.distributions.distribution.Distribution "torch.distributions.distribution.Distribution")) – A `Distribution` object.

Returns

A batch of KL divergences of shape `batch_shape`.
Return type

[Tensor](tensors#torch.Tensor "torch.Tensor")

Raises

[**NotImplementedError**](https://docs.python.org/3/library/exceptions.html#NotImplementedError "(in Python v3.9)") – If the distribution types have not been registered via [`register_kl()`](#torch.distributions.kl.register_kl "torch.distributions.kl.register_kl").

`torch.distributions.kl.register_kl(type_p, type_q)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/kl.html#register_kl)

Decorator to register a pairwise function with [`kl_divergence()`](#torch.distributions.kl.kl_divergence "torch.distributions.kl.kl_divergence"). Usage:

```
@register_kl(Normal, Normal)
def kl_normal_normal(p, q):
    # insert implementation here
```

Lookup returns the most specific (type,type) match ordered by subclass. If the match is ambiguous, a `RuntimeWarning` is raised. For example, to resolve the ambiguous situation:

```
@register_kl(BaseP, DerivedQ)
def kl_version1(p, q): ...
@register_kl(DerivedP, BaseQ)
def kl_version2(p, q): ...
```

you should register a third most-specific implementation, e.g.:

```
register_kl(DerivedP, DerivedQ)(kl_version1)  # Break the tie.
```

Parameters

* **type\_p** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.9)")) – A subclass of `Distribution`.
* **type\_q** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.9)")) – A subclass of `Distribution`.

Transforms
----------

`class torch.distributions.transforms.Transform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#Transform)

Abstract class for invertible transformations with computable log det jacobians. They are primarily used in `torch.distributions.TransformedDistribution`.

Caching is useful for transforms whose inverses are either expensive or numerically unstable. Note that care must be taken with memoized values since the autograd graph may be reversed. For example, the following works with or without caching:

```
y = t(x)
t.log_abs_det_jacobian(x, y).backward()  # x will receive gradients.
```

However, the following will error when caching due to dependency reversal:

```
y = t(x)
z = t.inv(y)
grad(z.sum(), [y])  # error because z is x
```

Derived classes should implement one or both of `_call()` or `_inverse()`. Derived classes that set `bijective=True` should also implement [`log_abs_det_jacobian()`](#torch.distributions.transforms.Transform.log_abs_det_jacobian "torch.distributions.transforms.Transform.log_abs_det_jacobian").

Parameters

**cache\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Size of cache. If zero, no caching is done. If one, the latest single value is cached. Only 0 and 1 are supported.

Variables

* **~Transform.domain** ([`Constraint`](#torch.distributions.constraints.Constraint "torch.distributions.constraints.Constraint")) – The constraint representing valid inputs to this transform.
* **~Transform.codomain** ([`Constraint`](#torch.distributions.constraints.Constraint "torch.distributions.constraints.Constraint")) – The constraint representing valid outputs to this transform which are inputs to the inverse transform.
* **~Transform.bijective** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether this transform is bijective. A transform `t` is bijective iff `t.inv(t(x)) == x` and `t(t.inv(y)) == y` for every `x` in the domain and `y` in the codomain.
Transforms that are not bijective should at least maintain the weaker pseudoinverse properties `t(t.inv(t(x))) == t(x)` and `t.inv(t(t.inv(y))) == t.inv(y)`.

* **~Transform.sign** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [Tensor](tensors#torch.Tensor "torch.Tensor")) – For bijective univariate transforms, this should be +1 or -1 depending on whether the transform is monotone increasing or decreasing.

`property inv`

Returns the inverse [`Transform`](#torch.distributions.transforms.Transform "torch.distributions.transforms.Transform") of this transform. This should satisfy `t.inv.inv is t`.

`property sign`

Returns the sign of the determinant of the Jacobian, if applicable. In general this only makes sense for bijective transforms.

`log_abs_det_jacobian(x, y)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#Transform.log_abs_det_jacobian)

Computes the log det jacobian `log |dy/dx|` given input and output.

`forward_shape(shape)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#Transform.forward_shape)

Infers the shape of the forward computation, given the input shape. Defaults to preserving shape.

`inverse_shape(shape)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#Transform.inverse_shape)

Infers the shape of the inverse computation, given the output shape. Defaults to preserving shape.

`class torch.distributions.transforms.ComposeTransform(parts, cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#ComposeTransform)

Composes multiple transforms in a chain. The transforms being composed are responsible for caching.

Parameters

* **parts** (list of [`Transform`](#torch.distributions.transforms.Transform "torch.distributions.transforms.Transform")) – A list of transforms to compose.
* **cache\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Size of cache. If zero, no caching is done. If one, the latest single value is cached. Only 0 and 1 are supported.

`class torch.distributions.transforms.IndependentTransform(base_transform, reinterpreted_batch_ndims, cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#IndependentTransform)

Wrapper around another transform to treat `reinterpreted_batch_ndims`-many extra of the rightmost dimensions as dependent. This has no effect on the forward or backward transforms, but does sum out `reinterpreted_batch_ndims`-many of the rightmost dimensions in `log_abs_det_jacobian()`.

Parameters

* **base\_transform** ([`Transform`](#torch.distributions.transforms.Transform "torch.distributions.transforms.Transform")) – A base transform.
* **reinterpreted\_batch\_ndims** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The number of extra rightmost dimensions to treat as dependent.

`class torch.distributions.transforms.ReshapeTransform(in_shape, out_shape, cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#ReshapeTransform)

Unit Jacobian transform to reshape the rightmost part of a tensor.

Note that `in_shape` and `out_shape` must have the same number of elements, just as for [`torch.Tensor.reshape()`](tensors#torch.Tensor.reshape "torch.Tensor.reshape").

Parameters

* **in\_shape** (*torch.Size*) – The input event shape.
* **out\_shape** (*torch.Size*) – The output event shape.
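As a quick illustration of the shape bookkeeping above, here is a minimal sketch (shapes chosen arbitrarily) of `ReshapeTransform` acting on the rightmost event dimensions while leaving batch dimensions untouched:

```
import torch
from torch.distributions.transforms import ReshapeTransform

# Reshape the rightmost event dims from (6,) to (2, 3); the Jacobian is the
# identity, so the log abs det jacobian is zero.
t = ReshapeTransform(torch.Size([6]), torch.Size([2, 3]))
x = torch.randn(5, 6)   # a batch of 5 events, each of shape (6,)
y = t(x)                # shape (5, 2, 3)
assert t.inv(y).shape == x.shape
```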
`class torch.distributions.transforms.ExpTransform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#ExpTransform)

Transform via the mapping $y = \exp(x)$.

`class torch.distributions.transforms.PowerTransform(exponent, cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#PowerTransform)

Transform via the mapping $y = x^{\text{exponent}}$.

`class torch.distributions.transforms.SigmoidTransform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#SigmoidTransform)

Transform via the mapping $y = \frac{1}{1 + \exp(-x)}$ and $x = \text{logit}(y)$.

`class torch.distributions.transforms.TanhTransform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#TanhTransform)

Transform via the mapping $y = \tanh(x)$.

It is equivalent to `ComposeTransform([AffineTransform(0., 2.), SigmoidTransform(), AffineTransform(-1., 2.)])`. However, that composition might not be numerically stable, so it is recommended to use `TanhTransform` instead. Note that one should use `cache_size=1` when `NaN`/`Inf` values can occur.

`class torch.distributions.transforms.AbsTransform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#AbsTransform)

Transform via the mapping $y = |x|$.

`class torch.distributions.transforms.AffineTransform(loc, scale, event_dim=0, cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#AffineTransform)

Transform via the pointwise affine mapping $y = \text{loc} + \text{scale} \times x$.

Parameters

* **loc** ([Tensor](tensors#torch.Tensor "torch.Tensor") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Location parameter.
* **scale** ([Tensor](tensors#torch.Tensor "torch.Tensor") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Scale parameter.
* **event\_dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Optional size of `event_shape`. This should be zero for univariate random variables, 1 for distributions over vectors, 2 for distributions over matrices, etc.

`class torch.distributions.transforms.CorrCholeskyTransform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#CorrCholeskyTransform)

Transforms an unconstrained real vector $x$ with length $D(D-1)/2$ into the Cholesky factor of a $D$-dimensional correlation matrix. This Cholesky factor is a lower triangular matrix with positive diagonals and unit Euclidean norm for each row. The transform is processed as follows:

1. First we convert $x$ into a lower triangular matrix in row order.
2. For each row $X_i$ of the lower triangular part, we apply a *signed* version of class [`StickBreakingTransform`](#torch.distributions.transforms.StickBreakingTransform "torch.distributions.transforms.StickBreakingTransform") to transform $X_i$ into a unit Euclidean length vector using the following steps:
   - Scales into the interval $(-1, 1)$ domain: $r_i = \tanh(X_i)$.
   - Transforms into an unsigned domain: $z_i = r_i^2$.
   - Applies $s_i = \text{StickBreakingTransform}(z_i)$.
   - Transforms back into signed domain: $y_i = \operatorname{sign}(r_i) \cdot \sqrt{s_i}$.
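A minimal sketch of the construction above (the dimension choice is arbitrary): a vector of length D\*(D-1)/2 = 6 yields the Cholesky factor of a 4 x 4 correlation matrix, so the reconstructed matrix has a unit diagonal:

```
import torch
from torch.distributions.transforms import CorrCholeskyTransform

t = CorrCholeskyTransform()
x = torch.randn(6)   # D*(D-1)/2 = 6, so D = 4
L = t(x)             # 4x4 lower-triangular factor with unit-norm rows
corr = L @ L.T       # correlation matrix
print(torch.allclose(torch.diagonal(corr), torch.ones(4), atol=1e-6))  # True
```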
`class torch.distributions.transforms.SoftmaxTransform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#SoftmaxTransform)

Transform from unconstrained space to the simplex via $y = \exp(x)$ then normalizing.

This is not bijective and cannot be used for HMC. However this acts mostly coordinate-wise (except for the final normalization), and thus is appropriate for coordinate-wise optimization algorithms.

`class torch.distributions.transforms.StickBreakingTransform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#StickBreakingTransform)

Transform from unconstrained space to the simplex of one additional dimension via a stick-breaking process.

This transform arises as an iterated sigmoid transform in a stick-breaking construction of the `Dirichlet` distribution: the first logit is transformed via sigmoid to the first probability and the probability of everything else, and then the process recurses.

This is bijective and appropriate for use in HMC; however it mixes coordinates together and is less appropriate for optimization.

`class torch.distributions.transforms.LowerCholeskyTransform(cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#LowerCholeskyTransform)

Transform from unconstrained matrices to lower-triangular matrices with nonnegative diagonal entries.

This is useful for parameterizing positive definite matrices in terms of their Cholesky factorization.

`class torch.distributions.transforms.StackTransform(tseq, dim=0, cache_size=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/transforms.html#StackTransform)

Transform functor that applies a sequence of transforms `tseq` component-wise to each submatrix at `dim` in a way compatible with [`torch.stack()`](generated/torch.stack#torch.stack "torch.stack").

Example:

```
x = torch.stack([torch.arange(1., 11.), torch.arange(1., 11.)], dim=1)
t = StackTransform([ExpTransform(), identity_transform], dim=1)
y = t(x)
```

Constraints
-----------

The following constraints are implemented:

* `constraints.boolean`
* `constraints.cat`
* `constraints.corr_cholesky`
* `constraints.dependent`
* `constraints.greater_than(lower_bound)`
* `constraints.greater_than_eq(lower_bound)`
* `constraints.independent(constraint, reinterpreted_batch_ndims)`
* `constraints.integer_interval(lower_bound, upper_bound)`
* `constraints.interval(lower_bound, upper_bound)`
* `constraints.less_than(upper_bound)`
* `constraints.lower_cholesky`
* `constraints.lower_triangular`
* `constraints.multinomial`
* `constraints.nonnegative_integer`
* `constraints.one_hot`
* `constraints.positive_definite`
* `constraints.positive_integer`
* `constraints.positive`
* `constraints.real_vector`
* `constraints.real`
* `constraints.simplex`
* `constraints.stack`
* `constraints.unit_interval`

`class torch.distributions.constraints.Constraint` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/constraints.html#Constraint)

Abstract base class for constraints.

A constraint object represents a region over which a variable is valid, e.g. within which a variable can be optimized.

Variables

* **~Constraint.is\_discrete** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether constrained space is discrete. Defaults to False.
* **~Constraint.event\_dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of rightmost dimensions that together define an event. The [`check()`](#torch.distributions.constraints.Constraint.check "torch.distributions.constraints.Constraint.check") method will remove this many dimensions when computing validity.

`check(value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/constraints.html#Constraint.check)

Returns a byte tensor of `sample_shape + batch_shape` indicating whether each event in value satisfies this constraint.

`torch.distributions.constraints.dependent_property`

alias of `torch.distributions.constraints._DependentProperty`

`torch.distributions.constraints.independent`

alias of `torch.distributions.constraints._IndependentConstraint`

`torch.distributions.constraints.integer_interval`

alias of `torch.distributions.constraints._IntegerInterval`

`torch.distributions.constraints.greater_than`

alias of `torch.distributions.constraints._GreaterThan`

`torch.distributions.constraints.greater_than_eq`

alias of `torch.distributions.constraints._GreaterThanEq`

`torch.distributions.constraints.less_than`

alias of `torch.distributions.constraints._LessThan`

`torch.distributions.constraints.multinomial`

alias of `torch.distributions.constraints._Multinomial`

`torch.distributions.constraints.interval`

alias of `torch.distributions.constraints._Interval`

`torch.distributions.constraints.half_open_interval`

alias of `torch.distributions.constraints._HalfOpenInterval`

`torch.distributions.constraints.cat`

alias of `torch.distributions.constraints._Cat`

`torch.distributions.constraints.stack`

alias of `torch.distributions.constraints._Stack`

Constraint Registry
-------------------

PyTorch provides two global [`ConstraintRegistry`](#torch.distributions.constraint_registry.ConstraintRegistry "torch.distributions.constraint_registry.ConstraintRegistry") objects that link [`Constraint`](#torch.distributions.constraints.Constraint "torch.distributions.constraints.Constraint") objects to [`Transform`](#torch.distributions.transforms.Transform "torch.distributions.transforms.Transform") objects. Both objects accept constraints as input and return transforms, but they have different guarantees on bijectivity.

1. `biject_to(constraint)` looks up a bijective [`Transform`](#torch.distributions.transforms.Transform "torch.distributions.transforms.Transform") from `constraints.real` to the given `constraint`. The returned transform is guaranteed to have `.bijective = True` and should implement `.log_abs_det_jacobian()`.
2. `transform_to(constraint)` looks up a not-necessarily bijective [`Transform`](#torch.distributions.transforms.Transform "torch.distributions.transforms.Transform") from `constraints.real` to the given `constraint`. The returned transform is not guaranteed to implement `.log_abs_det_jacobian()`.

The `transform_to()` registry is useful for performing unconstrained optimization on constrained parameters of probability distributions, which are indicated by each distribution’s `.arg_constraints` dict.
These transforms often overparameterize a space in order to avoid rotation; they are thus more suitable for coordinate-wise optimization algorithms like Adam:

```
loc = torch.zeros(100, requires_grad=True)
unconstrained = torch.zeros(100, requires_grad=True)
scale = transform_to(Normal.arg_constraints['scale'])(unconstrained)
loss = -Normal(loc, scale).log_prob(data).sum()
```

The `biject_to()` registry is useful for Hamiltonian Monte Carlo, where samples from a probability distribution with constrained `.support` are propagated in an unconstrained space, and algorithms are typically rotation invariant:

```
dist = Exponential(rate)
unconstrained = torch.zeros(100, requires_grad=True)
sample = biject_to(dist.support)(unconstrained)
potential_energy = -dist.log_prob(sample).sum()
```

Note

An example where `transform_to` and `biject_to` differ is `constraints.simplex`: `transform_to(constraints.simplex)` returns a [`SoftmaxTransform`](#torch.distributions.transforms.SoftmaxTransform "torch.distributions.transforms.SoftmaxTransform") that simply exponentiates and normalizes its inputs; this is a cheap and mostly coordinate-wise operation appropriate for algorithms like SVI. In contrast, `biject_to(constraints.simplex)` returns a [`StickBreakingTransform`](#torch.distributions.transforms.StickBreakingTransform "torch.distributions.transforms.StickBreakingTransform") that bijects its input down to a one-fewer-dimensional space; this is a more expensive, less numerically stable transform but is needed for algorithms like HMC.

The `biject_to` and `transform_to` objects can be extended by user-defined constraints and transforms using their `.register()` method either as a function on singleton constraints:

```
transform_to.register(my_constraint, my_transform)
```

or as a decorator on parameterized constraints:

```
@transform_to.register(MyConstraintClass)
def my_factory(constraint):
    assert isinstance(constraint, MyConstraintClass)
    return MyTransform(constraint.param1, constraint.param2)
```

You can create your own registry by creating a new [`ConstraintRegistry`](#torch.distributions.constraint_registry.ConstraintRegistry "torch.distributions.constraint_registry.ConstraintRegistry") object.

`class torch.distributions.constraint_registry.ConstraintRegistry` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/constraint_registry.html#ConstraintRegistry)

Registry to link constraints to transforms.

`register(constraint, factory=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributions/constraint_registry.html#ConstraintRegistry.register)

Registers a [`Constraint`](#torch.distributions.constraints.Constraint "torch.distributions.constraints.Constraint") subclass in this registry. Usage:

```
@my_registry.register(MyConstraintClass)
def construct_transform(constraint):
    assert isinstance(constraint, MyConstraint)
    return MyTransform(constraint.arg_constraints)
```

Parameters

* **constraint** (subclass of [`Constraint`](#torch.distributions.constraints.Constraint "torch.distributions.constraints.Constraint")) – A subclass of [`Constraint`](#torch.distributions.constraints.Constraint "torch.distributions.constraints.Constraint"), or a singleton object of the desired class.
* **factory** (*callable*) – A callable that inputs a constraint object and returns a [`Transform`](#torch.distributions.transforms.Transform "torch.distributions.transforms.Transform") object.
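To make the note above concrete, here is a minimal sketch (the vector length is arbitrary) contrasting the two registries on `constraints.simplex`: the softmax path preserves the input length, while the stick-breaking path maps to a simplex with one additional dimension:

```
import torch
from torch.distributions import biject_to, constraints, transform_to

u = torch.randn(4)                          # unconstrained input
p1 = transform_to(constraints.simplex)(u)   # SoftmaxTransform: shape (4,)
p2 = biject_to(constraints.simplex)(u)      # StickBreakingTransform: shape (5,)
print(p1.sum(), p2.sum())                   # both sum to 1
```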
pytorch Named Tensors

Named Tensors
=============

Named Tensors allow users to give explicit names to tensor dimensions. In most cases, operations that take dimension parameters will accept dimension names, avoiding the need to track dimensions by position. In addition, named tensors use names to automatically check that APIs are being used correctly at runtime, providing extra safety. Names can also be used to rearrange dimensions, for example, to support “broadcasting by name” rather than “broadcasting by position”.

Warning

The named tensor API is a prototype feature and subject to change.

Creating named tensors
----------------------

Factory functions now take a new `names` argument that associates a name with each dimension.

```
>>> torch.zeros(2, 3, names=('N', 'C'))
tensor([[0., 0., 0.],
        [0., 0., 0.]], names=('N', 'C'))
```

Named dimensions, like regular Tensor dimensions, are ordered. `tensor.names[i]` is the name of dimension `i` of `tensor`.

The following factory functions support named tensors:

* [`torch.empty()`](generated/torch.empty#torch.empty "torch.empty")
* [`torch.rand()`](generated/torch.rand#torch.rand "torch.rand")
* [`torch.randn()`](generated/torch.randn#torch.randn "torch.randn")
* [`torch.ones()`](generated/torch.ones#torch.ones "torch.ones")
* [`torch.tensor()`](generated/torch.tensor#torch.tensor "torch.tensor")
* [`torch.zeros()`](generated/torch.zeros#torch.zeros "torch.zeros")

Named dimensions
----------------

See [`names`](#torch.Tensor.names "torch.Tensor.names") for restrictions on tensor names.

Use [`names`](#torch.Tensor.names "torch.Tensor.names") to access the dimension names of a tensor and [`rename()`](#torch.Tensor.rename "torch.Tensor.rename") to rename named dimensions.

```
>>> imgs = torch.randn(1, 2, 2, 3, names=('N', 'C', 'H', 'W'))
>>> imgs.names
('N', 'C', 'H', 'W')
>>> renamed_imgs = imgs.rename(H='height', W='width')
>>> renamed_imgs.names
('N', 'C', 'height', 'width')
```

Named tensors can coexist with unnamed tensors; named tensors are instances of [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor"). Unnamed tensors have `None`-named dimensions. Named tensors do not require all dimensions to be named.

```
>>> imgs = torch.randn(1, 2, 2, 3, names=(None, 'C', 'H', 'W'))
>>> imgs.names
(None, 'C', 'H', 'W')
```

Name propagation semantics
--------------------------

Named tensors use names to automatically check that APIs are being called correctly at runtime. This occurs in a process called *name inference*. More formally, name inference consists of the following two steps:

* **Check names**: an operator may perform automatic checks at runtime to verify that certain dimension names match.
* **Propagate names**: name inference propagates names to output tensors.

All operations that support named tensors propagate names.

```
>>> x = torch.randn(3, 3, names=('N', 'C'))
>>> x.abs().names
('N', 'C')
```

### match semantics

Two names *match* if they are equal (string equality) or if at least one is `None`. Nones are essentially a special “wildcard” name.

`unify(A, B)` determines which of the names `A` and `B` to propagate to the outputs. It returns the more *specific* of the two names, if they match. If the names do not match, then it errors.

Note

In practice, when working with named tensors, one should avoid having unnamed dimensions because their handling can be complicated. It is recommended to lift all unnamed dimensions to be named dimensions by using [`refine_names()`](#torch.Tensor.refine_names "torch.Tensor.refine_names").
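For example, a minimal sketch of that recommendation: [`refine_names()`](#torch.Tensor.refine_names "torch.Tensor.refine_names") lifts each `None` dimension to the given name while leaving already-named dimensions unchanged.

```
>>> imgs = torch.randn(1, 3, names=(None, 'C'))
>>> imgs.refine_names('N', 'C').names
('N', 'C')
```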
### Basic name inference rules

Let’s see how `match` and `unify` are used in name inference in the case of adding two one-dim tensors with no broadcasting.

```
x = torch.randn(3, names=('X',))
y = torch.randn(3)
z = torch.randn(3, names=('Z',))
```

**Check names**: check that the names of the two tensors *match*.

For the following examples:

```
>>> # x + y  # match('X', None) is True
>>> # x + z  # match('X', 'Z') is False
>>> # x + x  # match('X', 'X') is True
>>> x + z
Error when attempting to broadcast dims ['X'] and dims ['Z']: dim 'X' and dim 'Z' are at the same position from the right but do not match.
```

**Propagate names**: *unify* the names to select which one to propagate. In the case of `x + y`, `unify('X', None) = 'X'` because `'X'` is more specific than `None`.

```
>>> (x + y).names
('X',)
>>> (x + x).names
('X',)
```

For a comprehensive list of name inference rules, see [Named Tensors operator coverage](name_inference#name-inference-reference-doc). Here are two common operations that may be useful to go over:

* Binary arithmetic ops: [Unifies names from inputs](name_inference#unifies-names-from-inputs-doc)
* Matrix multiplication ops: [Contracts away dims](name_inference#contracts-away-dims-doc)

Explicit alignment by names
---------------------------

Use [`align_as()`](#torch.Tensor.align_as "torch.Tensor.align_as") or [`align_to()`](#torch.Tensor.align_to "torch.Tensor.align_to") to align tensor dimensions by name to a specified ordering. This is useful for performing “broadcasting by names”.

```
# This function is agnostic to the dimension ordering of `input`,
# as long as it has a `C` dimension somewhere.
def scale_channels(input, scale):
    scale = scale.refine_names('C')
    return input * scale.align_as(input)

>>> num_channels = 3
>>> scale = torch.randn(num_channels, names=('C',))
>>> imgs = torch.rand(3, 3, 3, num_channels, names=('N', 'H', 'W', 'C'))
>>> more_imgs = torch.rand(3, num_channels, 3, 3, names=('N', 'C', 'H', 'W'))
>>> videos = torch.randn(3, num_channels, 3, 3, 3, names=('N', 'C', 'H', 'W', 'D'))

>>> scale_channels(imgs, scale)
>>> scale_channels(more_imgs, scale)
>>> scale_channels(videos, scale)
```

Manipulating dimensions
-----------------------

Use [`align_to()`](#torch.Tensor.align_to "torch.Tensor.align_to") to permute large numbers of dimensions without mentioning all of them as is required by [`permute()`](tensors#torch.Tensor.permute "torch.Tensor.permute").

```
>>> tensor = torch.randn(2, 2, 2, 2, 2, 2)
>>> named_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')

# Move the F (dim 5) and E dimension (dim 4) to the front while keeping
# the rest in the same order
>>> tensor.permute(5, 4, 0, 1, 2, 3)
>>> named_tensor.align_to('F', 'E', ...)
```

Use [`flatten()`](tensors#torch.Tensor.flatten "torch.Tensor.flatten") and [`unflatten()`](#torch.Tensor.unflatten "torch.Tensor.unflatten") to flatten and unflatten dimensions, respectively. These methods are more verbose than [`view()`](tensors#torch.Tensor.view "torch.Tensor.view") and [`reshape()`](tensors#torch.Tensor.reshape "torch.Tensor.reshape"), but have more semantic meaning to someone reading the code.
```
>>> imgs = torch.randn(32, 3, 128, 128)
>>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W')
>>> flat_imgs = imgs.view(32, -1)
>>> named_flat_imgs = named_imgs.flatten(['C', 'H', 'W'], 'features')
>>> named_flat_imgs.names
('N', 'features')

>>> unflattened_imgs = flat_imgs.view(32, 3, 128, 128)
>>> unflattened_named_imgs = named_flat_imgs.unflatten(
        'features', [('C', 3), ('H', 128), ('W', 128)])
```

Autograd support
----------------

Autograd currently supports named tensors in a limited manner: autograd ignores names on all tensors. Gradient computation is still correct but we lose the safety that names give us.

```
>>> x = torch.randn(3, names=('D',))
>>> weight = torch.randn(3, names=('D',), requires_grad=True)
>>> loss = (x - weight).abs()
>>> grad_loss = torch.randn(3)
>>> loss.backward(grad_loss)
>>> weight.grad  # Unnamed for now. Will be named in the future
tensor([-1.8107, -0.6357,  0.0783])

>>> weight.grad.zero_()
>>> grad_loss = grad_loss.refine_names('C')
>>> loss = (x - weight).abs()
# Ideally we'd check that the names of loss and grad_loss match but we don't yet.
>>> loss.backward(grad_loss)
>>> weight.grad
tensor([-1.8107, -0.6357,  0.0783])
```

Currently supported operations and subsystems
---------------------------------------------

### Operators

See [Named Tensors operator coverage](name_inference#name-inference-reference-doc) for a full list of the supported torch and tensor operations. The following are not yet supported and are not covered by that list:

* indexing, advanced indexing.

For `torch.nn.functional` operators, we support the following:

* [`torch.nn.functional.relu()`](nn.functional#torch.nn.functional.relu "torch.nn.functional.relu")
* [`torch.nn.functional.softmax()`](nn.functional#torch.nn.functional.softmax "torch.nn.functional.softmax")
* [`torch.nn.functional.log_softmax()`](nn.functional#torch.nn.functional.log_softmax "torch.nn.functional.log_softmax")
* [`torch.nn.functional.tanh()`](nn.functional#torch.nn.functional.tanh "torch.nn.functional.tanh")
* [`torch.nn.functional.sigmoid()`](nn.functional#torch.nn.functional.sigmoid "torch.nn.functional.sigmoid")
* [`torch.nn.functional.dropout()`](nn.functional#torch.nn.functional.dropout "torch.nn.functional.dropout")

### Subsystems

Autograd is supported, see [Autograd support](#named-tensors-autograd-doc). Because gradients are currently unnamed, optimizers may work but are untested.

NN modules are currently unsupported. This can lead to the following when calling modules with named tensor inputs:

* NN module parameters are unnamed, so outputs may be partially named.
* NN module forward passes have code that doesn't support named tensors and will error out appropriately.

We also do not support the following subsystems, though some may work out of the box:

* distributions
* serialization ([`torch.load()`](generated/torch.load#torch.load "torch.load"), [`torch.save()`](generated/torch.save#torch.save "torch.save"))
* multiprocessing
* JIT
* distributed
* ONNX

If any of these would help your use case, please [search if an issue has already been filed](https://github.com/pytorch/pytorch/issues?q=is%3Aopen+is%3Aissue+label%3A%22module%3A+named+tensor%22) and if not, [file one](https://github.com/pytorch/pytorch/issues/new/choose).

Named tensor API reference
--------------------------

In this section please find the documentation for named tensor specific APIs.
For a comprehensive reference for how names are propagated through other PyTorch operators, see [Named Tensors operator coverage](name_inference#name-inference-reference-doc).

`class torch.Tensor`

`names`

Stores names for each of this tensor’s dimensions.

`names[idx]` corresponds to the name of tensor dimension `idx`. Names are either a string if the dimension is named or `None` if the dimension is unnamed.

Dimension names may contain letters, digits, and underscores. Furthermore, a dimension name must be a valid Python variable name (i.e., it does not start with an underscore).

Tensors may not have two named dimensions with the same name.

Warning

The named tensor API is experimental and subject to change.

`rename(*names, **rename_map)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.rename)

Renames dimension names of `self`.

There are two main usages:

`self.rename(**rename_map)` returns a view on tensor that has dims renamed as specified in the mapping `rename_map`.

`self.rename(*names)` returns a view on tensor, renaming all dimensions positionally using [`names`](#torch.Tensor.names "torch.Tensor.names"). Use `self.rename(None)` to drop names on a tensor.

One cannot specify both positional args [`names`](#torch.Tensor.names "torch.Tensor.names") and keyword args `rename_map`.

Examples:

```
>>> imgs = torch.rand(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> renamed_imgs = imgs.rename(N='batch', C='channels')
>>> renamed_imgs.names
('batch', 'channels', 'H', 'W')

>>> renamed_imgs = imgs.rename(None)
>>> renamed_imgs.names
(None, None, None, None)

>>> renamed_imgs = imgs.rename('batch', 'channel', 'height', 'width')
>>> renamed_imgs.names
('batch', 'channel', 'height', 'width')
```

Warning

The named tensor API is experimental and subject to change.

`rename_(*names, **rename_map)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.rename_)

In-place version of [`rename()`](#torch.Tensor.rename "torch.Tensor.rename").

`refine_names(*names)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.refine_names)

Refines the dimension names of `self` according to [`names`](#torch.Tensor.names "torch.Tensor.names").

Refining is a special case of renaming that “lifts” unnamed dimensions. A `None` dim can be refined to have any name; a named dim can only be refined to have the same name.

Because named tensors can coexist with unnamed tensors, refining names gives a nice way to write named-tensor-aware code that works with both named and unnamed tensors.

[`names`](#torch.Tensor.names "torch.Tensor.names") may contain up to one Ellipsis (`...`). The Ellipsis is expanded greedily; it is expanded in-place to fill [`names`](#torch.Tensor.names "torch.Tensor.names") to the same length as `self.dim()` using names from the corresponding indices of `self.names`.

Python 2 does not support Ellipsis but one may use a string literal instead (`'...'`).

Parameters

**names** (*iterable of str*) – The desired names of the output tensor. May contain up to one Ellipsis.

Examples:

```
>>> imgs = torch.randn(32, 3, 128, 128)
>>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W')
>>> named_imgs.names
('N', 'C', 'H', 'W')

>>> tensor = torch.randn(2, 3, 5, 7, 11)
>>> tensor = tensor.refine_names('A', ..., 'B', 'C')
>>> tensor.names
('A', None, None, 'B', 'C')
```

Warning

The named tensor API is experimental and subject to change.
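As a small illustration of the refinement rule above (a `None` dim can take any name, while a named dim can only keep its name), consider this sketch:

```
>>> x = torch.randn(2, 3, names=('N', 'C'))
>>> x.refine_names('N', 'C').names  # refining to the same names is allowed
('N', 'C')
>>> x.refine_names('N', 'D')  # renaming 'C' to 'D' is not a refinement: raises RuntimeError
```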
`align_as(other) → Tensor` Permutes the dimensions of the `self` tensor to match the dimension order in the `other` tensor, adding size-one dims for any new names. This operation is useful for explicit broadcasting by names (see examples). All of the dims of `self` must be named in order to use this method. The resulting tensor is a view on the original tensor. All dimension names of `self` must be present in `other.names`. `other` may contain named dimensions that are not in `self.names`; the output tensor has a size-one dimension for each of those new names. To align a tensor to a specific order, use [`align_to()`](#torch.Tensor.align_to "torch.Tensor.align_to"). Examples: ``` # Example 1: Applying a mask >>> mask = torch.randint(2, [127, 128], dtype=torch.bool).refine_names('W', 'H') >>> imgs = torch.randn(32, 128, 127, 3, names=('N', 'H', 'W', 'C')) >>> imgs.masked_fill_(mask.align_as(imgs), 0) # Example 2: Applying a per-channel-scale >>> def scale_channels(input, scale): >>> scale = scale.refine_names('C') >>> return input * scale.align_as(input) >>> num_channels = 3 >>> scale = torch.randn(num_channels, names=('C',)) >>> imgs = torch.rand(32, 128, 128, num_channels, names=('N', 'H', 'W', 'C')) >>> more_imgs = torch.rand(32, num_channels, 128, 128, names=('N', 'C', 'H', 'W')) >>> videos = torch.randn(3, num_channels, 128, 128, 128, names=('N', 'C', 'H', 'W', 'D')) # scale_channels is agnostic to the dimension order of the input >>> scale_channels(imgs, scale) >>> scale_channels(more_imgs, scale) >>> scale_channels(videos, scale) ``` Warning The named tensor API is experimental and subject to change. `align_to(*names)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.align_to) Permutes the dimensions of the `self` tensor to match the order specified in [`names`](#torch.Tensor.names "torch.Tensor.names"), adding size-one dims for any new names. All of the dims of `self` must be named in order to use this method. The resulting tensor is a view on the original tensor. All dimension names of `self` must be present in [`names`](#torch.Tensor.names "torch.Tensor.names"). [`names`](#torch.Tensor.names "torch.Tensor.names") may contain additional names that are not in `self.names`; the output tensor has a size-one dimension for each of those new names. [`names`](#torch.Tensor.names "torch.Tensor.names") may contain up to one Ellipsis (`...`). The Ellipsis is expanded to be equal to all dimension names of `self` that are not mentioned in [`names`](#torch.Tensor.names "torch.Tensor.names"), in the order that they appear in `self`. Python 2 does not support Ellipsis but one may use a string literal instead (`'...'`). Parameters **names** (*iterable of str*) – The desired dimension ordering of the output tensor. May contain up to one Ellipsis that is expanded to all unmentioned dim names of `self`. Examples: ``` >>> tensor = torch.randn(2, 2, 2, 2, 2, 2) >>> named_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F') # Move the F and E dims to the front while keeping the rest in order >>> named_tensor.align_to('F', 'E', ...) ``` Warning The named tensor API is experimental and subject to change. `unflatten(dim, sizes)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.unflatten) Expands the dimension [`dim`](tensors#torch.Tensor.dim "torch.Tensor.dim") of the `self` tensor over multiple dimensions of sizes given by `sizes`. 
* `sizes` is the new shape of the unflattened dimension and it can be a `Tuple[int]` as well as `torch.Size` if `self` is a `Tensor`, or `namedshape` (Tuple[(name: str, size: int)]) if `self` is a `NamedTensor`. The total number of elements in `sizes` must match the number of elements in the original dim being unflattened.

Parameters

* **dim** (*Union**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*]*) – Dimension to unflatten
* **sizes** (*Union**[**Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *torch.Size**,* *Tuple**[**Tuple**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**]**]*) – New shape of the unflattened dimension

#### Examples

```
>>> torch.randn(3, 4, 1).unflatten(1, (2, 2)).shape
torch.Size([3, 2, 2, 1])
>>> torch.randn(2, 4, names=('A', 'B')).unflatten('B', (('B1', 2), ('B2', 2)))
tensor([[[-1.1772,  0.0180],
         [ 0.2412,  0.1431]],

        [[-1.1819, -0.8899],
         [ 1.5813,  0.2274]]], names=('A', 'B1', 'B2'))
```

Warning

The named tensor API is experimental and subject to change.

`flatten(dims, out_dim) → Tensor`

Flattens `dims` into a single dimension with name `out_dim`.

All of `dims` must be consecutive in order in the `self` tensor, but not necessarily contiguous in memory.

Examples:

```
>>> imgs = torch.randn(32, 3, 128, 128, names=('N', 'C', 'H', 'W'))
>>> flat_imgs = imgs.flatten(['C', 'H', 'W'], 'features')
>>> flat_imgs.names, flat_imgs.shape
(('N', 'features'), torch.Size([32, 49152]))
```

Warning

The named tensor API is experimental and subject to change.

pytorch Distributed RPC Framework

Distributed RPC Framework
=========================

The distributed RPC framework provides mechanisms for multi-machine model training through a set of primitives to allow for remote communication, and a higher-level API to automatically differentiate models split across several machines.

Warning

APIs in the RPC package are stable. There are multiple ongoing work items to improve performance and error handling, which will ship in future releases.

Note

Please refer to [PyTorch Distributed Overview](https://pytorch.org/tutorials/beginner/dist_overview.html) for a brief introduction to all features related to distributed training.

Basics
------

The distributed RPC framework makes it easy to run functions remotely, supports referencing remote objects without copying the real data around, and provides autograd and optimizer APIs to transparently run backward and update parameters across RPC boundaries. These features can be categorized into four sets of APIs.

1. **Remote Procedure Call (RPC)** supports running a function on the specified destination worker with the given arguments and getting the return value back or creating a reference to the return value. There are three main RPC APIs: [`rpc_sync()`](#torch.distributed.rpc.rpc_sync "torch.distributed.rpc.rpc_sync") (synchronous), [`rpc_async()`](#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async") (asynchronous), and [`remote()`](#torch.distributed.rpc.remote "torch.distributed.rpc.remote") (asynchronous and returns a reference to the remote return value). Use the synchronous API if the user code cannot proceed without the return value. Otherwise, use the asynchronous API to get a future, and wait on the future when the return value is needed on the caller.
The [`remote()`](#torch.distributed.rpc.remote "torch.distributed.rpc.remote") API is useful when the requirement is to create something remotely but never fetch it to the caller. Imagine the case that a driver process is setting up a parameter server and a trainer. The driver can create an embedding table on the parameter server and then share the reference to the embedding table with the trainer, but will never use the embedding table locally itself. In this case, [`rpc_sync()`](#torch.distributed.rpc.rpc_sync "torch.distributed.rpc.rpc_sync") and [`rpc_async()`](#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async") are no longer appropriate, as they always imply that the return value will be returned to the caller immediately or in the future.

2. **Remote Reference (RRef)** serves as a distributed shared pointer to a local or remote object. It can be shared with other workers and reference counting will be handled transparently. Each RRef only has one owner and the object only lives on that owner. Non-owner workers holding RRefs can get copies of the object from the owner by explicitly requesting it. This is useful when a worker needs to access some data object, but is itself neither the creator (the caller of [`remote()`](#torch.distributed.rpc.remote "torch.distributed.rpc.remote")) nor the owner of the object. The distributed optimizer, as we will discuss below, is one example of such use cases.

3. **Distributed Autograd** stitches together local autograd engines on all the workers involved in the forward pass, and automatically reaches out to them during the backward pass to compute gradients. This is especially helpful if the forward pass needs to span multiple machines when conducting, e.g., distributed model parallel training, parameter-server training, etc. With this feature, user code no longer needs to worry about how to send gradients across RPC boundaries and in which order the local autograd engines should be launched, which can become quite complicated when there are nested and inter-dependent RPC calls in the forward pass.

4. **Distributed Optimizer**’s constructor takes an [`Optimizer()`](optim#torch.optim.Optimizer "torch.optim.Optimizer") (e.g., [`SGD()`](optim#torch.optim.SGD "torch.optim.SGD"), [`Adagrad()`](optim#torch.optim.Adagrad "torch.optim.Adagrad"), etc.) and a list of parameter RRefs, creates an [`Optimizer()`](optim#torch.optim.Optimizer "torch.optim.Optimizer") instance on each distinct RRef owner, and updates parameters accordingly when running `step()`. When you have distributed forward and backward passes, parameters and gradients will be scattered across multiple workers, and hence an optimizer is required on each of the involved workers. Distributed Optimizer wraps all those local optimizers into one, and provides a concise constructor and `step()` API.

RPC
---

Before using RPC and distributed autograd primitives, initialization must take place. To initialize the RPC framework we need to use [`init_rpc()`](#torch.distributed.rpc.init_rpc "torch.distributed.rpc.init_rpc"), which initializes the RPC framework, RRef framework, and distributed autograd.

`torch.distributed.rpc.init_rpc(name, backend=None, rank=-1, world_size=None, rpc_backend_options=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc.html#init_rpc)

Initializes RPC primitives such as the local RPC agent and distributed autograd, which immediately makes the current process ready to send and receive RPCs.
Parameters

* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – a globally unique name of this node. (e.g., `Trainer3`, `ParameterServer2`, `Master`, `Worker1`) The name may only contain numbers, letters, underscores, colons, and/or dashes, and must be shorter than 128 characters.
* **backend** ([BackendType](#torch.distributed.rpc.BackendType "torch.distributed.rpc.BackendType")*,* *optional*) – The type of RPC backend implementation. Supported values include `BackendType.TENSORPIPE` (the default) and `BackendType.PROCESS_GROUP`. See [Backends](#rpc-backends) for more information.
* **rank** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – a globally unique id/rank of this node.
* **world\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The number of workers in the group.
* **rpc\_backend\_options** ([RpcBackendOptions](#torch.distributed.rpc.RpcBackendOptions "torch.distributed.rpc.RpcBackendOptions")*,* *optional*) – The options passed to the RpcAgent constructor. It must be an agent-specific subclass of [`RpcBackendOptions`](#torch.distributed.rpc.RpcBackendOptions "torch.distributed.rpc.RpcBackendOptions") and contains agent-specific initialization configurations. By default, for all agents, it sets the default timeout to 60 seconds and performs the rendezvous with an underlying process group initialized using `init_method = "env://"`, meaning that environment variables `MASTER_ADDR` and `MASTER_PORT` need to be set properly. See [Backends](#rpc-backends) for more information about which options are available.

The following APIs allow users to remotely execute functions as well as create references (RRefs) to remote data objects. In these APIs, when passing a `Tensor` as an argument or a return value, the destination worker will try to create a `Tensor` with the same meta (i.e., shape, stride, etc.). We intentionally disallow transmitting CUDA tensors because it might crash if the device lists on source and destination workers do not match. In such cases, applications can always explicitly move the input tensors to CPU on the caller and move them to the desired devices on the callee if necessary.

Warning

TorchScript support in RPC is a prototype feature and subject to change. Since v1.5.0, `torch.distributed.rpc` supports calling TorchScript functions as RPC target functions, and this will help improve parallelism on the callee side as executing TorchScript functions does not require GIL.

`torch.distributed.rpc.rpc_sync(to, func, args=None, kwargs=None, timeout=-1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/api.html#rpc_sync)

Make a blocking RPC call to run function `func` on worker `to`. RPC messages are sent and received in parallel to execution of Python code. This method is thread-safe.

Parameters

* **to** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)") *or* [WorkerInfo](#torch.distributed.rpc.WorkerInfo "torch.distributed.rpc.WorkerInfo") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – name/rank/`WorkerInfo` of the destination worker.
* **func** (*callable*) – a callable function, such as Python callables, builtin operators (e.g. [`add()`](generated/torch.add#torch.add "torch.add")) and annotated TorchScript functions.
* **args** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the argument tuple for the `func` invocation.
* **kwargs** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – a dictionary of keyword arguments for the `func` invocation.
* **timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – timeout in seconds to use for this RPC. If the RPC does not complete in this amount of time, an exception indicating it has timed out will be raised. A value of 0 indicates an infinite timeout, i.e. a timeout error will never be raised. If not provided, the default value set during initialization or with `_set_rpc_timeout` is used.

Returns

Returns the result of running `func` with `args` and `kwargs`.

Warning

Using GPU tensors as arguments or return values of `func` is not supported since we don’t support sending GPU tensors over the wire. You need to explicitly copy GPU tensors to CPU before using them as arguments or return values of `func`.

Example:

Make sure that `MASTER_ADDR` and `MASTER_PORT` are set properly on both workers. Refer to [`init_process_group()`](distributed#torch.distributed.init_process_group "torch.distributed.init_process_group") API for more details. For example,

```
>>> export MASTER_ADDR=localhost
>>> export MASTER_PORT=5678
```

Then run the following code in two different processes:

```
>>> # On worker 0:
>>> import torch
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> ret = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 3))
>>> rpc.shutdown()
```

```
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
```

Below is an example of running a TorchScript function using RPC.

```
>>> # On both workers:
>>> @torch.jit.script
>>> def my_script_add(t1, t2):
>>>    return torch.add(t1, t2)
```

```
>>> # On worker 0:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> ret = rpc.rpc_sync("worker1", my_script_add, args=(torch.ones(2), 3))
>>> rpc.shutdown()
```

```
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
```

`torch.distributed.rpc.rpc_async(to, func, args=None, kwargs=None, timeout=-1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/api.html#rpc_async)

Make a non-blocking RPC call to run function `func` on worker `to`. RPC messages are sent and received in parallel to execution of Python code. This method is thread-safe. This method will immediately return a [`Future`](futures#torch.futures.Future "torch.futures.Future") that can be awaited on.

Parameters

* **to** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)") *or* [WorkerInfo](#torch.distributed.rpc.WorkerInfo "torch.distributed.rpc.WorkerInfo") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – name/rank/`WorkerInfo` of the destination worker.
* **func** (*callable*) – a callable function, such as Python callables, builtin operators (e.g. [`add()`](generated/torch.add#torch.add "torch.add")) and annotated TorchScript functions.
* **args** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the argument tuple for the `func` invocation.
* **kwargs** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – a dictionary of keyword arguments for the `func` invocation.
* **timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – timeout in seconds to use for this RPC. If the RPC does not complete in this amount of time, an exception indicating it has timed out will be raised. A value of 0 indicates an infinite timeout, i.e. a timeout error will never be raised. If not provided, the default value set during initialization or with `_set_rpc_timeout` is used. Returns Returns a [`Future`](futures#torch.futures.Future "torch.futures.Future") object that can be waited on. When completed, the return value of `func` on `args` and `kwargs` can be retrieved from the [`Future`](futures#torch.futures.Future "torch.futures.Future") object. Warning Using GPU tensors as arguments or return values of `func` is not supported since we don’t support sending GPU tensors over the wire. You need to explicitly copy GPU tensors to CPU before using them as arguments or return values of `func`. Warning The `rpc_async` API does not copy storages of argument tensors until sending them over the wire, which could be done by a different thread depending on the RPC backend type. The caller should make sure that the contents of those tensors stay intact until the returned [`Future`](futures#torch.futures.Future "torch.futures.Future") completes. Example:: Make sure that `MASTER_ADDR` and `MASTER_PORT` are set properly on both workers. Refer to [`init_process_group()`](distributed#torch.distributed.init_process_group "torch.distributed.init_process_group") API for more details. For example, ``` >>> export MASTER_ADDR=localhost >>> export MASTER_PORT=5678 ``` Then run the following code in two different processes: ``` >>> # On worker 0: >>> import torch >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker0", rank=0, world_size=2) >>> fut1 = rpc.rpc_async("worker1", torch.add, args=(torch.ones(2), 3)) >>> fut2 = rpc.rpc_async("worker1", min, args=(1, 2)) >>> result = fut1.wait() + fut2.wait() >>> rpc.shutdown() ``` ``` >>> # On worker 1: >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker1", rank=1, world_size=2) >>> rpc.shutdown() ``` Below is an example of running a TorchScript function using RPC. ``` >>> # On both workers: >>> @torch.jit.script >>> def my_script_add(t1, t2): >>> return torch.add(t1, t2) ``` ``` >>> # On worker 0: >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker0", rank=0, world_size=2) >>> fut = rpc.rpc_async("worker1", my_script_add, args=(torch.ones(2), 3)) >>> ret = fut.wait() >>> rpc.shutdown() ``` ``` >>> # On worker 1: >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker1", rank=1, world_size=2) >>> rpc.shutdown() ``` `torch.distributed.rpc.remote(to, func, args=None, kwargs=None, timeout=-1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/api.html#remote) Make a remote call to run `func` on worker `to` and return an [`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef") to the result value immediately. Worker `to` will be the owner of the returned [`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef"), and the worker calling `remote` is a user. The owner manages the global reference count of its [`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef"), and the owner [`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef") is only destructed when globally there are no living references to it. 
Parameters * **to** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)") *or* [WorkerInfo](#torch.distributed.rpc.WorkerInfo "torch.distributed.rpc.WorkerInfo") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – name/rank/`WorkerInfo` of the destination worker. * **func** (*callable*) – a callable, such as a plain Python function, a builtin operator (e.g. [`add()`](generated/torch.add#torch.add "torch.add")), or an annotated TorchScript function. * **args** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the argument tuple for the `func` invocation. * **kwargs** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – a dictionary of keyword arguments for the `func` invocation. * **timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – timeout in seconds for this remote call. If the creation of this [`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef") on worker `to` is not successfully processed on this worker within this timeout, then the next time there is an attempt to use the RRef (such as `to_here()`), a timeout will be raised indicating this failure. A value of 0 indicates an infinite timeout, i.e. a timeout error will never be raised. If not provided, the default value set during initialization or with `_set_rpc_timeout` is used. Returns A user [`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef") instance to the result value. Use the blocking API [`torch.distributed.rpc.RRef.to_here()`](#torch.distributed.rpc.RRef.to_here "torch.distributed.rpc.RRef.to_here") to retrieve the result value locally. Warning Using GPU tensors as arguments or return values of `func` is not supported since we don’t support sending GPU tensors over the wire. You need to explicitly copy GPU tensors to CPU before using them as arguments or return values of `func`. Warning The `remote` API does not copy storages of argument tensors until sending them over the wire, which could be done by a different thread depending on the RPC backend type. The caller should make sure that the contents of those tensors stay intact until the returned RRef is confirmed by the owner, which can be checked using the [`torch.distributed.rpc.RRef.confirmed_by_owner()`](#torch.distributed.rpc.RRef.confirmed_by_owner "torch.distributed.rpc.RRef.confirmed_by_owner") API. Warning Errors such as timeouts for the `remote` API are handled on a best-effort basis: when a remote call initiated by `remote` fails, for example with a timeout error, the error is handled and set on the resulting RRef asynchronously. If the RRef has not been used by the application before this handling (such as a `to_here` or fork call), then future uses of the `RRef` will appropriately raise errors. However, it is possible that the user application will use the `RRef` before the errors are handled, in which case errors may not be raised, as they have not yet been handled. Example:: Make sure that `MASTER_ADDR` and `MASTER_PORT` are set properly on both workers. Refer to [`init_process_group()`](distributed#torch.distributed.init_process_group "torch.distributed.init_process_group") API for more details.
For example, ``` >>> export MASTER_ADDR=localhost >>> export MASTER_PORT=5678 ``` Then run the following code in two different processes: ``` >>> # On worker 0: >>> import torch >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker0", rank=0, world_size=2) >>> rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3)) >>> rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1)) >>> x = rref1.to_here() + rref2.to_here() >>> rpc.shutdown() ``` ``` >>> # On worker 1: >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker1", rank=1, world_size=2) >>> rpc.shutdown() ``` Below is an example of running a TorchScript function using RPC. ``` >>> # On both workers: >>> @torch.jit.script >>> def my_script_add(t1, t2): >>> return torch.add(t1, t2) ``` ``` >>> # On worker 0: >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker0", rank=0, world_size=2) >>> rref = rpc.remote("worker1", my_script_add, args=(torch.ones(2), 3)) >>> rref.to_here() >>> rpc.shutdown() ``` ``` >>> # On worker 1: >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker1", rank=1, world_size=2) >>> rpc.shutdown() ``` `torch.distributed.rpc.get_worker_info(worker_name=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/api.html#get_worker_info) Get [`WorkerInfo`](#torch.distributed.rpc.WorkerInfo "torch.distributed.rpc.WorkerInfo") of a given worker name. Use this [`WorkerInfo`](#torch.distributed.rpc.WorkerInfo "torch.distributed.rpc.WorkerInfo") to avoid passing an expensive string on every invocation. Parameters **worker\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – the string name of a worker. If `None`, return the `WorkerInfo` of the current worker. (default `None`) Returns [`WorkerInfo`](#torch.distributed.rpc.WorkerInfo "torch.distributed.rpc.WorkerInfo") instance for the given `worker_name` or [`WorkerInfo`](#torch.distributed.rpc.WorkerInfo "torch.distributed.rpc.WorkerInfo") of the current worker if `worker_name` is `None`. `torch.distributed.rpc.shutdown(graceful=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/api.html#shutdown) Perform a shutdown of the RPC agent, and then destroy the RPC agent. This stops the local agent from accepting outstanding requests, and shuts down the RPC framework by terminating all RPC threads. If `graceful=True`, this will block until all local and remote RPC processes reach this method and wait for all outstanding work to complete. Otherwise, if `graceful=False`, this is a local shutdown, and it does not wait for other RPC processes to reach this method. Warning For [`Future`](futures#torch.futures.Future "torch.futures.Future") objects returned by [`rpc_async()`](#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async"), `future.wait()` should not be called after `shutdown()`. Parameters **graceful** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to do a graceful shutdown or not. If True, this will 1) wait until there are no pending system messages for `UserRRefs` and delete them; 2) block until all local and remote RPC processes have reached this method and wait for all outstanding work to complete. Example:: Make sure that `MASTER_ADDR` and `MASTER_PORT` are set properly on both workers. Refer to [`init_process_group()`](distributed#torch.distributed.init_process_group "torch.distributed.init_process_group") API for more details.
For example, ``` >>> export MASTER_ADDR=localhost >>> export MASTER_PORT=5678 ``` Then run the following code in two different processes: ``` >>> # On worker 0: >>> import torch >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker0", rank=0, world_size=2) >>> # do some work >>> result = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(1), 1)) >>> # ready to shutdown >>> rpc.shutdown() ``` ``` >>> # On worker 1: >>> import torch.distributed.rpc as rpc >>> rpc.init_rpc("worker1", rank=1, world_size=2) >>> # wait for worker 0 to finish work, and then shutdown. >>> rpc.shutdown() ``` `class torch.distributed.rpc.WorkerInfo` A structure that encapsulates information about a worker in the system. Contains the name and ID of the worker. This class is not meant to be constructed directly; rather, an instance can be retrieved through [`get_worker_info()`](#torch.distributed.rpc.get_worker_info "torch.distributed.rpc.get_worker_info") and the result can be passed in to functions such as [`rpc_sync()`](#torch.distributed.rpc.rpc_sync "torch.distributed.rpc.rpc_sync"), [`rpc_async()`](#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async"), [`remote()`](#torch.distributed.rpc.remote "torch.distributed.rpc.remote") to avoid copying a string on every invocation. `property id` Globally unique id to identify the worker. `property name` The name of the worker. The RPC package also provides decorators which allow applications to specify how a given function should be treated on the callee side. `torch.distributed.rpc.functions.async_execution(fn)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/functions.html#async_execution) A decorator for a function indicating that the return value of the function is guaranteed to be a [`Future`](futures#torch.futures.Future "torch.futures.Future") object and this function can run asynchronously on the RPC callee. More specifically, the callee extracts the [`Future`](futures#torch.futures.Future "torch.futures.Future") returned by the wrapped function and installs subsequent processing steps as a callback to that [`Future`](futures#torch.futures.Future "torch.futures.Future"). The installed callback will read the value from the [`Future`](futures#torch.futures.Future "torch.futures.Future") when completed and send the value back as the RPC response. That also means the returned [`Future`](futures#torch.futures.Future "torch.futures.Future") only exists on the callee side and is never sent through RPC. This decorator is useful when the wrapped function’s (`fn`) execution needs to pause and resume due to, e.g., containing [`rpc_async()`](#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async") or waiting for other signals. Note To enable asynchronous execution, applications must pass the function object returned by this decorator to RPC APIs. If RPC detects attributes installed by this decorator, it knows that this function returns a `Future` object and will handle that accordingly. However, this does not mean this decorator has to be the outermost one when defining a function. For example, when combined with `@staticmethod` or `@classmethod`, `@rpc.functions.async_execution` needs to be the inner decorator to allow the target function to be recognized as a static or class function. This target function can still execute asynchronously because, when accessed, the static or class method preserves attributes installed by `@rpc.functions.async_execution`.
Example:: The returned [`Future`](futures#torch.futures.Future "torch.futures.Future") object can come from [`rpc_async()`](#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async"), [`then()`](futures#torch.futures.Future.then "torch.futures.Future.then"), or the [`Future`](futures#torch.futures.Future "torch.futures.Future") constructor. The example below shows directly using the [`Future`](futures#torch.futures.Future "torch.futures.Future") returned by [`then()`](futures#torch.futures.Future.then "torch.futures.Future.then"). ``` >>> from torch.distributed import rpc >>> >>> # omitting setup and shutdown RPC >>> >>> # On all workers >>> @rpc.functions.async_execution >>> def async_add_chained(to, x, y, z): >>> # This function runs on "worker1" and returns immediately when >>> # the callback is installed through the `then(cb)` API. In the >>> # meantime, the `rpc_async` to "worker2" can run concurrently. >>> # When the return value of that `rpc_async` arrives at >>> # "worker1", "worker1" will run the lambda function accordingly >>> # and set the value for the previously returned `Future`, which >>> # will then trigger RPC to send the result back to "worker0". >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( >>> lambda fut: fut.wait() + z >>> ) >>> >>> # On worker0 >>> ret = rpc.rpc_sync( >>> "worker1", >>> async_add_chained, >>> args=("worker2", torch.ones(2), 1, 1) >>> ) >>> print(ret) # prints tensor([3., 3.]) ``` When combined with TorchScript decorators, this decorator must be the outermost one. ``` >>> from torch import Tensor >>> from torch.futures import Future >>> from torch.distributed import rpc >>> >>> # omitting setup and shutdown RPC >>> >>> # On all workers >>> @torch.jit.script >>> def script_add(x: Tensor, y: Tensor) -> Tensor: >>> return x + y >>> >>> @rpc.functions.async_execution >>> @torch.jit.script >>> def async_add(to: str, x: Tensor, y: Tensor) -> Future[Tensor]: >>> return rpc.rpc_async(to, script_add, (x, y)) >>> >>> # On worker0 >>> ret = rpc.rpc_sync( >>> "worker1", >>> async_add, >>> args=("worker2", torch.ones(2), 1) >>> ) >>> print(ret) # prints tensor([2., 2.]) ``` When combined with a static or class method, this decorator must be the inner one. ``` >>> from torch.distributed import rpc >>> >>> # omitting setup and shutdown RPC >>> >>> # On all workers >>> class AsyncExecutionClass: >>> >>> @staticmethod >>> @rpc.functions.async_execution >>> def static_async_add(to, x, y, z): >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( >>> lambda fut: fut.wait() + z >>> ) >>> >>> @classmethod >>> @rpc.functions.async_execution >>> def class_async_add(cls, to, x, y, z): >>> ret_fut = torch.futures.Future() >>> rpc.rpc_async(to, torch.add, args=(x, y)).then( >>> lambda fut: ret_fut.set_result(fut.wait() + z) >>> ) >>> return ret_fut >>> >>> @rpc.functions.async_execution >>> def bound_async_add(self, to, x, y, z): >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( >>> lambda fut: fut.wait() + z >>> ) >>> >>> # On worker0 >>> ret = rpc.rpc_sync( >>> "worker1", >>> AsyncExecutionClass.static_async_add, >>> args=("worker2", torch.ones(2), 1, 2) >>> ) >>> print(ret) # prints tensor([4., 4.]) >>> >>> ret = rpc.rpc_sync( >>> "worker1", >>> AsyncExecutionClass.class_async_add, >>> args=("worker2", torch.ones(2), 1, 2) >>> ) >>> print(ret) # prints tensor([4., 4.]) ``` This decorator also works with RRef helpers, i.e.,
[`torch.distributed.rpc.RRef.rpc_sync()`](#torch.distributed.rpc.RRef.rpc_sync "torch.distributed.rpc.RRef.rpc_sync"), [`torch.distributed.rpc.RRef.rpc_async()`](#torch.distributed.rpc.RRef.rpc_async "torch.distributed.rpc.RRef.rpc_async"), and [`torch.distributed.rpc.RRef.remote()`](#torch.distributed.rpc.RRef.remote "torch.distributed.rpc.RRef.remote"). ``` >>> from torch.distributed import rpc >>> >>> # reuse the AsyncExecutionClass class above >>> rref = rpc.remote("worker1", AsyncExecutionClass) >>> ret = rref.rpc_sync().static_async_add("worker2", torch.ones(2), 1, 2) >>> print(ret) # prints tensor([4., 4.]) >>> >>> rref = rpc.remote("worker1", AsyncExecutionClass) >>> ret = rref.rpc_async().static_async_add("worker2", torch.ones(2), 1, 2).wait() >>> print(ret) # prints tensor([4., 4.]) >>> >>> rref = rpc.remote("worker1", AsyncExecutionClass) >>> ret = rref.remote().static_async_add("worker2", torch.ones(2), 1, 2).to_here() >>> print(ret) # prints tensor([4., 4.]) ``` ### Backends The RPC module can leverage different backends to perform the communication between the nodes. The backend to be used can be specified in the [`init_rpc()`](#torch.distributed.rpc.init_rpc "torch.distributed.rpc.init_rpc") function, by passing a certain value of the [`BackendType`](#torch.distributed.rpc.BackendType "torch.distributed.rpc.BackendType") enum. Regardless of what backend is used, the rest of the RPC API won’t change. Each backend also defines its own subclass of the [`RpcBackendOptions`](#torch.distributed.rpc.RpcBackendOptions "torch.distributed.rpc.RpcBackendOptions") class, an instance of which can also be passed to [`init_rpc()`](#torch.distributed.rpc.init_rpc "torch.distributed.rpc.init_rpc") to configure the backend’s behavior. `class torch.distributed.rpc.BackendType` An enum class of available backends. PyTorch ships with two builtin backends: `BackendType.TENSORPIPE` and `BackendType.PROCESS_GROUP`. Additional ones can be registered using the `register_backend()` function. `class torch.distributed.rpc.RpcBackendOptions` An abstract structure encapsulating the options passed into the RPC backend. An instance of this class can be passed in to [`init_rpc()`](#torch.distributed.rpc.init_rpc "torch.distributed.rpc.init_rpc") in order to initialize RPC with specific configurations, such as the RPC timeout and `init_method` to be used. `property init_method` URL specifying how to initialize the process group. Default is `env://` `property rpc_timeout` A float indicating the timeout to use for all RPCs. If an RPC does not complete in this timeframe, it will complete with an exception indicating that it has timed out. #### TensorPipe Backend The TensorPipe agent, which is the default, leverages [the TensorPipe library](https://github.com/pytorch/tensorpipe), which provides a natively point-to-point communication primitive specifically suited for machine learning that fundamentally addresses some of the limitations of Gloo. Compared to Gloo, it has the advantage of being asynchronous, which allows a large number of transfers to occur simultaneously, each at their own speed, without blocking each other. It will only open pipes between pairs of nodes when needed, on demand, and when one node fails only its incident pipes will be closed, while all other ones will keep working as normal. 
In addition, it is able to support multiple different transports (TCP, of course, but also shared memory, NVLink, InfiniBand, …) and can automatically detect their availability and negotiate the best transport to use for each pipe. The TensorPipe backend was introduced in PyTorch v1.6 and is being actively developed. At the moment, it only supports CPU tensors, with GPU support coming soon. It comes with a TCP-based transport, just like Gloo. It is also able to automatically chunk and multiplex large tensors over multiple sockets and threads in order to achieve very high bandwidths. The agent will be able to pick the best transport on its own, with no intervention required. Example: ``` >>> import os >>> from torch.distributed import rpc >>> os.environ['MASTER_ADDR'] = 'localhost' >>> os.environ['MASTER_PORT'] = '29500' >>> >>> rpc.init_rpc( >>> "worker1", >>> rank=0, >>> world_size=2, >>> rpc_backend_options=rpc.TensorPipeRpcBackendOptions( >>> num_worker_threads=8, >>> rpc_timeout=20 # 20 second timeout >>> ) >>> ) >>> >>> # omitting init_rpc invocation on worker2 ``` `class torch.distributed.rpc.TensorPipeRpcBackendOptions(*, num_worker_threads=16, rpc_timeout=60.0, init_method='env://', device_maps=None, _transports=None, _channels=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/options.html#TensorPipeRpcBackendOptions) The backend options for `TensorPipeAgent`, derived from [`RpcBackendOptions`](#torch.distributed.rpc.RpcBackendOptions "torch.distributed.rpc.RpcBackendOptions"). Parameters * **num\_worker\_threads** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The number of threads in the thread-pool used by `TensorPipeAgent` to execute requests (default: 16). * **rpc\_timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – The default timeout, in seconds, for RPC requests (default: 60 seconds). If the RPC has not completed in this timeframe, an exception indicating so will be raised. Callers can override this timeout for individual RPCs in [`rpc_sync()`](#torch.distributed.rpc.rpc_sync "torch.distributed.rpc.rpc_sync") and [`rpc_async()`](#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async") if necessary. * **init\_method** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – The URL to initialize the distributed store used for rendezvous. It takes any value accepted for the same argument of [`init_process_group()`](distributed#torch.distributed.init_process_group "torch.distributed.init_process_group") (default: `env://`). * **device\_maps** (*Dict**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *Dict**]*) – Device placement mappings from this worker to the callee. The key is the callee worker name, and the value is a dictionary (`Dict` of `int`, `str`, or `torch.device`) that maps this worker’s devices to the callee worker’s devices. (default: `None`) `property device_maps` The device map locations. `property init_method` URL specifying how to initialize the process group. Default is `env://` `property num_worker_threads` The number of threads in the thread-pool used by `TensorPipeAgent` to execute requests. `property rpc_timeout` A float indicating the timeout to use for all RPCs. If an RPC does not complete in this timeframe, it will complete with an exception indicating that it has timed out.
`set_device_map(to, device_map)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/options.html#TensorPipeRpcBackendOptions.set_device_map) Set device mapping between each RPC caller and callee pair. This function can be called multiple times to incrementally add device placement configurations. Parameters * **to** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – Callee name. * **device\_map** (*Dict of python:int**,* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*, or* [torch.device](tensor_attributes#torch.torch.device "torch.torch.device")) – Device placement mappings from this worker to the callee. This map must be invertible. Example:: ``` >>> # both workers >>> def add(x, y): >>> print(x) # tensor([1., 1.], device='cuda:1') >>> return x + y, (x + y).to(2) >>> >>> # on worker 0 >>> options = TensorPipeRpcBackendOptions( >>> num_worker_threads=8, >>> device_maps={"worker1": {0: 1}} >>> # maps worker0's cuda:0 to worker1's cuda:1 >>> ) >>> options.set_device_map("worker1", {1: 2}) >>> # maps worker0's cuda:1 to worker1's cuda:2 >>> >>> rpc.init_rpc( >>> "worker0", >>> rank=0, >>> world_size=2, >>> backend=rpc.BackendType.TENSORPIPE, >>> rpc_backend_options=options >>> ) >>> >>> x = torch.ones(2) >>> rets = rpc.rpc_sync("worker1", add, args=(x.to(0), 1)) >>> # The first argument will be moved to cuda:1 on worker1. When >>> # sending the return values back, they will follow the inverse of >>> # the device map, and hence will be moved back to cuda:0 and >>> # cuda:1 on worker0 >>> print(rets[0]) # tensor([2., 2.], device='cuda:0') >>> print(rets[1]) # tensor([2., 2.], device='cuda:1') ``` #### Process Group Backend Warning The Process Group Backend will be deprecated soon; we recommend using the TensorPipe Backend instead. The Process Group agent instantiates a process group from the [`distributed`](distributed#module-torch.distributed "torch.distributed") module and utilizes its point-to-point communication capabilities to send RPC messages. Internally, the process group uses [the Gloo library](https://github.com/facebookincubator/gloo/). Gloo has been hardened by years of extensive use in PyTorch and is thus very reliable. However, as it was designed to perform collective communication, it may not always be the best fit for RPC. For example, each networking operation is synchronous and blocking, which means that it cannot be run in parallel with others. Moreover, it opens a connection between all pairs of nodes, and brings down all of them when one fails, thus reducing the resiliency and the elasticity of the system. Example: ``` >>> import os >>> from torch.distributed import rpc >>> os.environ['MASTER_ADDR'] = 'localhost' >>> os.environ['MASTER_PORT'] = '29500' >>> >>> rpc.init_rpc( >>> "worker1", >>> rank=0, >>> world_size=2, >>> backend=rpc.BackendType.PROCESS_GROUP, >>> rpc_backend_options=rpc.ProcessGroupRpcBackendOptions( >>> num_send_recv_threads=16, >>> rpc_timeout=20 # 20 second timeout >>> ) >>> ) >>> >>> # omitting init_rpc invocation on worker2 ``` `class torch.distributed.rpc.ProcessGroupRpcBackendOptions` The backend options class for `ProcessGroupAgent`, which is derived from `RpcBackendOptions`. Parameters * **num\_send\_recv\_threads** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The number of threads in the thread-pool used by `ProcessGroupAgent` (default: 4).
* **rpc\_timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – The default timeout, in seconds, for RPC requests (default: 60 seconds). If the RPC has not completed in this timeframe, an exception indicating so will be raised. Callers can override this timeout for individual RPCs in [`rpc_sync()`](#torch.distributed.rpc.rpc_sync "torch.distributed.rpc.rpc_sync") and [`rpc_async()`](#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async") if necessary. * **init\_method** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – The URL to initialize `ProcessGroupGloo` (default: `env://`). `property init_method` URL specifying how to initialize the process group. Default is `env://` `property num_send_recv_threads` The number of threads in the thread-pool used by ProcessGroupAgent. `property rpc_timeout` A float indicating the timeout to use for all RPCs. If an RPC does not complete in this timeframe, it will complete with an exception indicating that it has timed out. RRef ---- An `RRef` (Remote REFerence) is a reference to a value of some type `T` (e.g. `Tensor`) on a remote worker. This handle keeps the referenced remote value alive on the owner, but there is no implication that the value will be transferred to the local worker in the future. RRefs can be used in multi-machine training by holding references to [nn.Modules](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) that exist on other workers, and calling the appropriate functions to retrieve or modify their parameters during training. See [Remote Reference Protocol](rpc/rref#remote-reference-protocol) for more details. `class torch.distributed.rpc.RRef` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/rpc/api.html#RRef) `backward(self: torch._C._distributed_rpc.PyRRef, dist_autograd_ctx_id: int = -1, retain_graph: bool = False) → None` Runs the backward pass using the RRef as the root. If `dist_autograd_ctx_id` is provided, we perform a distributed backward pass using the provided ctx\_id starting from the owner of the RRef. In this case, [`get_gradients()`](#torch.distributed.autograd.get_gradients "torch.distributed.autograd.get_gradients") should be used to retrieve the gradients. If `dist_autograd_ctx_id` is not provided, it is assumed that this is a local autograd graph and we only perform a local backward pass. In the local case, the node calling this API has to be the owner of the RRef. The value of the RRef is expected to be a scalar Tensor. Parameters * **dist\_autograd\_ctx\_id** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The distributed autograd context id under which the backward pass should run (default: -1). * **retain\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `False`, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to `True` is not needed and often can be worked around in a much more efficient way. Usually, you need to set this to `True` to run backward multiple times (default: False). Example:: ``` >>> import torch.distributed.autograd as dist_autograd >>> with dist_autograd.context() as context_id: >>> rref.backward(context_id) ``` `confirmed_by_owner(self: torch._C._distributed_rpc.PyRRef) → bool` Returns whether this `RRef` has been confirmed by the owner.
`OwnerRRef` always returns true, while `UserRRef` only returns true when the owner knows about this `UserRRef`. `is_owner(self: torch._C._distributed_rpc.PyRRef) → bool` Returns whether or not the current node is the owner of this `RRef`. `local_value(self: torch._C._distributed_rpc.PyRRef) → object` If the current node is the owner, returns a reference to the local value. Otherwise, throws an exception. `owner(self: torch._C._distributed_rpc.PyRRef) → torch._C._distributed_rpc.WorkerInfo` Returns worker information of the node that owns this `RRef`. `owner_name(self: torch._C._distributed_rpc.PyRRef) → str` Returns worker name of the node that owns this `RRef`. `remote(self: torch._C._distributed_rpc.PyRRef, timeout: float = -1.0) → object` Create a helper proxy to easily launch a `remote` using the owner of the RRef as the destination to run functions on the object referenced by this RRef. More specifically, `rref.remote().func_name(*args, **kwargs)` is the same as the following: ``` >>> def run(rref, func_name, args, kwargs): >>> return getattr(rref.local_value(), func_name)(*args, **kwargs) >>> >>> rpc.remote(rref.owner(), run, args=(rref, func_name, args, kwargs)) ``` Parameters **timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Timeout for `rref.remote()`. If the creation of this [`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef") is not successfully completed within the timeout, then the next time there is an attempt to use the RRef (such as `to_here`), a timeout will be raised. If not provided, the default RPC timeout will be used. Please see `rpc.remote()` for specific timeout semantics for [`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef"). Example:: ``` >>> from torch.distributed import rpc >>> rref = rpc.remote("worker1", torch.add, args=(torch.zeros(2, 2), 1)) >>> rref.remote().size().to_here() # returns torch.Size([2, 2]) >>> rref.remote().view(1, 4).to_here() # returns tensor([[1., 1., 1., 1.]]) ``` `rpc_async(self: torch._C._distributed_rpc.PyRRef, timeout: float = -1.0) → object` Create a helper proxy to easily launch an `rpc_async` using the owner of the RRef as the destination to run functions on the object referenced by this RRef. More specifically, `rref.rpc_async().func_name(*args, **kwargs)` is the same as the following: ``` >>> def run(rref, func_name, args, kwargs): >>> return getattr(rref.local_value(), func_name)(*args, **kwargs) >>> >>> rpc.rpc_async(rref.owner(), run, args=(rref, func_name, args, kwargs)) ``` Parameters **timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Timeout for `rref.rpc_async()`. If the call does not complete within this timeframe, an exception indicating so will be raised. If this argument is not provided, the default RPC timeout will be used. Example:: ``` >>> from torch.distributed import rpc >>> rref = rpc.remote("worker1", torch.add, args=(torch.zeros(2, 2), 1)) >>> rref.rpc_async().size().wait() # returns torch.Size([2, 2]) >>> rref.rpc_async().view(1, 4).wait() # returns tensor([[1., 1., 1., 1.]]) ``` `rpc_sync(self: torch._C._distributed_rpc.PyRRef, timeout: float = -1.0) → object` Create a helper proxy to easily launch an `rpc_sync` using the owner of the RRef as the destination to run functions on the object referenced by this RRef.
More specifically, `rref.rpc_sync().func_name(*args, **kwargs)` is the same as the following: ``` >>> def run(rref, func_name, args, kwargs): >>> return getattr(rref.local_value(), func_name)(*args, **kwargs) >>> >>> rpc.rpc_sync(rref.owner(), run, args=(rref, func_name, args, kwargs)) ``` Parameters **timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Timeout for `rref.rpc_sync()`. If the call does not complete within this timeframe, an exception indicating so will be raised. If this argument is not provided, the default RPC timeout will be used. Example:: ``` >>> from torch.distributed import rpc >>> rref = rpc.remote("worker1", torch.add, args=(torch.zeros(2, 2), 1)) >>> rref.rpc_sync().size() # returns torch.Size([2, 2]) >>> rref.rpc_sync().view(1, 4) # returns tensor([[1., 1., 1., 1.]]) ``` `to_here(self: torch._C._distributed_rpc.PyRRef, timeout: float = -1.0) → object` Blocking call that copies the value of the RRef from the owner to the local node and returns it. If the current node is the owner, returns a reference to the local value. Parameters **timeout** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Timeout for `to_here`. If the call does not complete within this timeframe, an exception indicating so will be raised. If this argument is not provided, the default RPC timeout (60s) will be used. More Information about RRef * [Remote Reference Protocol](rpc/rref) + [Background](rpc/rref#background) + [Assumptions](rpc/rref#assumptions) + [RRef Lifetime](rpc/rref#rref-lifetime) - [Design Reasoning](rpc/rref#design-reasoning) - [Implementation](rpc/rref#implementation) + [Protocol Scenarios](rpc/rref#protocol-scenarios) - [User Share RRef with Owner as Return Value](rpc/rref#user-share-rref-with-owner-as-return-value) - [User Share RRef with Owner as Argument](rpc/rref#user-share-rref-with-owner-as-argument) - [Owner Share RRef with User](rpc/rref#owner-share-rref-with-user) - [User Share RRef with User](rpc/rref#user-share-rref-with-user) Distributed Autograd Framework ------------------------------ This module provides an RPC-based distributed autograd framework that can be used for applications such as model parallel training. In short, applications may send and receive gradient recording tensors over RPC. In the forward pass, we record when gradient recording tensors are sent over RPC and during the backward pass we use this information to perform a distributed backward pass using RPC. For more details see [Distributed Autograd Design](rpc/distributed_autograd#distributed-autograd-design). `class torch.distributed.autograd.context` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/autograd.html#context) Context object to wrap forward and backward passes when using distributed autograd. The `context_id` generated in the `with` statement is required to uniquely identify a distributed backward pass on all workers. Each worker stores metadata associated with this `context_id`, which is required to correctly execute a distributed autograd pass. 
Example:: ``` >>> import torch.distributed.autograd as dist_autograd >>> with dist_autograd.context() as context_id: >>> t1 = torch.rand((3, 3), requires_grad=True) >>> t2 = torch.rand((3, 3), requires_grad=True) >>> loss = rpc.rpc_sync("worker1", torch.add, args=(t1, t2)).sum() >>> dist_autograd.backward(context_id, [loss]) ``` `torch.distributed.autograd.backward(context_id: int, roots: List[Tensor], retain_graph = False) → None` Kicks off the distributed backward pass using the provided roots. This currently implements the [FAST mode algorithm](rpc/distributed_autograd#fast-mode-algorithm) which assumes all RPC messages sent in the same distributed autograd context across workers would be part of the autograd graph during the backward pass. We use the provided roots to discover the autograd graph and compute appropriate dependencies. This method blocks until the entire autograd computation is done. We accumulate the gradients in the appropriate [`torch.distributed.autograd.context`](#torch.distributed.autograd.context "torch.distributed.autograd.context") on each of the nodes. The autograd context to be used is looked up given the `context_id` that is passed in when [`torch.distributed.autograd.backward()`](#torch.distributed.autograd.backward "torch.distributed.autograd.backward") is called. If there is no valid autograd context corresponding to the given ID, we throw an error. You can retrieve the accumulated gradients using the [`get_gradients()`](#torch.distributed.autograd.get_gradients "torch.distributed.autograd.get_gradients") API. Parameters * **context\_id** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The autograd context id for which the backward pass should be run. * **roots** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – Tensors which represent the roots of the autograd computation. All the tensors should be scalars. * **retain\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Usually, you need to set this to True to run backward multiple times. Example:: ``` >>> import torch.distributed.autograd as dist_autograd >>> with dist_autograd.context() as context_id: >>> pred = model.forward() >>> loss = loss_func(pred, target) >>> dist_autograd.backward(context_id, [loss]) ``` `torch.distributed.autograd.get_gradients(context_id: int) → Dict[Tensor, Tensor]` Retrieves a map from Tensor to the appropriate gradient for that Tensor accumulated in the provided context corresponding to the given `context_id` as part of the distributed autograd backward pass. Parameters **context\_id** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The autograd context id for which we should retrieve the gradients. Returns A map where the key is the Tensor and the value is the associated gradient for that Tensor.
Example:: ``` >>> import torch.distributed.autograd as dist_autograd >>> with dist_autograd.context() as context_id: >>> t1 = torch.rand((3, 3), requires_grad=True) >>> t2 = torch.rand((3, 3), requires_grad=True) >>> loss = t1 + t2 >>> dist_autograd.backward(context_id, [loss.sum()]) >>> grads = dist_autograd.get_gradients(context_id) >>> print(grads[t1]) >>> print(grads[t2]) ``` More Information about RPC Autograd * [Distributed Autograd Design](rpc/distributed_autograd) + [Background](rpc/distributed_autograd#background) + [Autograd recording during the forward pass](rpc/distributed_autograd#autograd-recording-during-the-forward-pass) + [Distributed Autograd Context](rpc/distributed_autograd#distributed-autograd-context) + [Distributed Backward Pass](rpc/distributed_autograd#distributed-backward-pass) - [Computing dependencies](rpc/distributed_autograd#computing-dependencies) - [FAST mode algorithm](rpc/distributed_autograd#fast-mode-algorithm) - [SMART mode algorithm](rpc/distributed_autograd#smart-mode-algorithm) + [Distributed Optimizer](rpc/distributed_autograd#distributed-optimizer) + [Simple end to end example](rpc/distributed_autograd#simple-end-to-end-example) Distributed Optimizer --------------------- [`torch.distributed.optim`](#module-torch.distributed.optim "torch.distributed.optim") exposes DistributedOptimizer, which takes a list of remote parameters ([`RRef`](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef")) and runs the optimizer locally on the workers where the parameters live. The distributed optimizer can use any of the local optimizer [Algorithms](optim#optimizer-algorithms) to apply the gradients on each worker. `class torch.distributed.optim.DistributedOptimizer(optimizer_class, params_rref, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/optim/optimizer.html#DistributedOptimizer) DistributedOptimizer takes remote references to parameters scattered across workers and applies the given optimizer locally for each parameter. This class uses [`get_gradients()`](#torch.distributed.autograd.get_gradients "torch.distributed.autograd.get_gradients") in order to retrieve the gradients for specific parameters. Concurrent calls to [`step()`](#torch.distributed.optim.DistributedOptimizer.step "torch.distributed.optim.DistributedOptimizer.step"), either from the same or different clients, will be serialized on each worker – as each worker’s optimizer can only work on one set of gradients at a time. However, there is no guarantee that the full forward-backward-optimizer sequence will execute for one client at a time. This means that the gradients being applied may not correspond to the latest forward pass executed on a given worker. Also, there is no guaranteed ordering across workers. `DistributedOptimizer` creates the local optimizer with TorchScript enabled by default, so that optimizer updates are not blocked by the Python Global Interpreter Lock (GIL) during multithreaded training (e.g. Distributed Model Parallel). This feature is currently in beta stage, enabled for optimizers including `Adagrad`, `Adam`, `SGD`, `RMSprop`, `AdamW` and `Adadelta`. We are increasing the coverage to all optimizers in future releases. Parameters * **optimizer\_class** ([optim.Optimizer](optim#torch.optim.Optimizer "torch.optim.Optimizer")) – the class of optimizer to instantiate on each worker. 
* **params\_rref** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")*[*[RRef](#torch.distributed.rpc.RRef "torch.distributed.rpc.RRef")*]*) – list of RRefs to local or remote parameters to optimize. * **args** – arguments to pass to the optimizer constructor on each worker. * **kwargs** – arguments to pass to the optimizer constructor on each worker. Example:: ``` >>> import torch.distributed.autograd as dist_autograd >>> import torch.distributed.rpc as rpc >>> from torch import optim >>> from torch.distributed.optim import DistributedOptimizer >>> >>> with dist_autograd.context() as context_id: >>> # Forward pass. >>> rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3)) >>> rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1)) >>> loss = rref1.to_here() + rref2.to_here() >>> >>> # Backward pass. >>> dist_autograd.backward(context_id, [loss.sum()]) >>> >>> # Optimizer. >>> dist_optim = DistributedOptimizer( >>> optim.SGD, >>> [rref1, rref2], >>> lr=0.05, >>> ) >>> dist_optim.step(context_id) ``` `step(context_id)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/optim/optimizer.html#DistributedOptimizer.step) Performs a single optimization step. This will call [`torch.optim.Optimizer.step()`](optim#torch.optim.Optimizer.step "torch.optim.Optimizer.step") on each worker containing parameters to be optimized, and will block until all workers return. The provided `context_id` will be used to retrieve the corresponding [`context`](#torch.distributed.autograd.context "torch.distributed.autograd.context") that contains the gradients that should be applied to the parameters. Parameters **context\_id** – the autograd context id for which we should run the optimizer step. Design Notes ------------ The distributed autograd design note covers the design of the RPC-based distributed autograd framework that is useful for applications such as model parallel training. * [Distributed Autograd Design](rpc/distributed_autograd#distributed-autograd-design) The RRef design note covers the design of the [RRef](#rref) (Remote REFerence) protocol used to refer to values on remote workers by the framework. * [Remote Reference Protocol](rpc/rref#remote-reference-protocol) Tutorials --------- The RPC tutorials introduce users to the RPC framework, provide several example applications using [torch.distributed.rpc](#distributed-rpc-framework) APIs, and demonstrate how to use [the profiler](https://pytorch.org/docs/stable/autograd.html#profiler) to profile RPC-based workloads. * [Getting started with Distributed RPC Framework](https://pytorch.org/tutorials/intermediate/rpc_tutorial.html) * [Implementing a Parameter Server using Distributed RPC Framework](https://pytorch.org/tutorials/intermediate/rpc_param_server_tutorial.html) * [Combining Distributed DataParallel with Distributed RPC Framework](https://pytorch.org/tutorials/advanced/rpc_ddp_tutorial.html) * [Profiling RPC-based Workloads](https://pytorch.org/tutorials/recipes/distributed_rpc_profiling.html) * [Implementing batch RPC processing](https://pytorch.org/tutorials/intermediate/rpc_async_execution.html) * [Distributed Pipeline Parallel](https://pytorch.org/tutorials/intermediate/dist_pipeline_parallel_tutorial.html)
pytorch torch.utils.cpp_extension torch.utils.cpp\_extension ========================== `torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#CppExtension) Creates a `setuptools.Extension` for C++. Convenience method that creates a `setuptools.Extension` with the bare minimum (but often sufficient) arguments to build a C++ extension. All arguments are forwarded to the `setuptools.Extension` constructor. #### Example ``` >>> from setuptools import setup >>> from torch.utils.cpp_extension import BuildExtension, CppExtension >>> setup( name='extension', ext_modules=[ CppExtension( name='extension', sources=['extension.cpp'], extra_compile_args=['-g']), ], cmdclass={ 'build_ext': BuildExtension }) ``` `torch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#CUDAExtension) Creates a `setuptools.Extension` for CUDA/C++. Convenience method that creates a `setuptools.Extension` with the bare minimum (but often sufficient) arguments to build a CUDA/C++ extension. This includes the CUDA include path, library path and runtime library. All arguments are forwarded to the `setuptools.Extension` constructor. #### Example ``` >>> from setuptools import setup >>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension >>> setup( name='cuda_extension', ext_modules=[ CUDAExtension( name='cuda_extension', sources=['extension.cpp', 'extension_kernel.cu'], extra_compile_args={'cxx': ['-g'], 'nvcc': ['-O2']}) ], cmdclass={ 'build_ext': BuildExtension }) ``` Compute capabilities: By default the extension will be compiled to run on all archs of the cards visible during the building process of the extension, plus PTX. If down the road a new card is installed the extension may need to be recompiled. If a visible card has a compute capability (CC) that’s newer than the newest version for which your nvcc can build fully-compiled binaries, PyTorch will make nvcc fall back to building kernels with the newest version of PTX your nvcc does support (see below for details on PTX). You can override the default behavior using `TORCH_CUDA_ARCH_LIST` to explicitly specify which CCs you want the extension to support: ``` TORCH_CUDA_ARCH_LIST="6.1 8.6" python build_my_extension.py TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python build_my_extension.py ``` The +PTX option causes extension kernel binaries to include PTX instructions for the specified CC. PTX is an intermediate representation that allows kernels to runtime-compile for any CC >= the specified CC (for example, 8.6+PTX generates PTX that can runtime-compile for any GPU with CC >= 8.6). This improves your binary’s forward compatibility. However, relying on older PTX to provide forward compatibility by runtime-compiling for newer CCs can modestly reduce performance on those newer CCs. If you know the exact CC(s) of the GPUs you want to target, you’re always better off specifying them individually. For example, if you want your extension to run on 8.0 and 8.6, "8.0+PTX" would work functionally because it includes PTX that can runtime-compile for 8.6, but "8.0 8.6" would be better. Note that while it’s possible to include all supported archs, the more archs get included the slower the building process will be, as it will build a separate kernel image for each arch.
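To make the override concrete, here is a minimal sketch, not from the original docs, of pinning `TORCH_CUDA_ARCH_LIST` from a Python driver (e.g. a CI script) rather than exporting it in the shell; `build_my_extension.py` stands in for the same hypothetical build script used above.

```
# Sketch: pin the target compute capabilities before launching the build.
# Setting os.environ here is equivalent to exporting the variable in the
# shell, as long as it happens before the child build process starts.
import os
import subprocess

# Fully compiled kernels for 6.1 and 8.6, plus PTX for forward compatibility.
os.environ["TORCH_CUDA_ARCH_LIST"] = "6.1 8.6+PTX"
subprocess.check_call(["python", "build_my_extension.py"])
```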
`torch.utils.cpp_extension.BuildExtension(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#BuildExtension) A custom `setuptools` build extension. This `setuptools.build_ext` subclass takes care of passing the minimum required compiler flags (e.g. `-std=c++14`) as well as mixed C++/CUDA compilation (and support for CUDA files in general). When using [`BuildExtension`](#torch.utils.cpp_extension.BuildExtension "torch.utils.cpp_extension.BuildExtension"), it is allowed to supply a dictionary for `extra_compile_args` (rather than the usual list) that maps from languages (`cxx` or `nvcc`) to a list of additional compiler flags to supply to the compiler. This makes it possible to supply different flags to the C++ and CUDA compiler during mixed compilation. `use_ninja` (bool): If `use_ninja` is `True` (default), then we attempt to build using the Ninja backend. Ninja greatly speeds up compilation compared to the standard `setuptools.build_ext`. It falls back to the standard distutils backend if Ninja is not available. Note By default, the Ninja backend uses #CPUS + 2 workers to build the extension. This may use up too many resources on some systems. One can control the number of workers by setting the `MAX_JOBS` environment variable to a non-negative number. `torch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, is_standalone=False, keep_intermediates=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#load) Loads a PyTorch C++ extension just-in-time (JIT). To load an extension, a Ninja build file is emitted, which is used to compile the given sources into a dynamic library. This library is subsequently loaded into the current Python process as a module and returned from this function, ready for use. By default, the directory to which the build file is emitted and the resulting library compiled to is `<tmp>/torch_extensions/<name>`, where `<tmp>` is the temporary folder on the current platform and `<name>` the name of the extension. This location can be overridden in two ways. First, if the `TORCH_EXTENSIONS_DIR` environment variable is set, it replaces `<tmp>/torch_extensions` and all extensions will be compiled into subfolders of this directory. Second, if the `build_directory` argument to this function is supplied, it overrides the entire path, i.e. the library will be compiled into that folder directly. To compile the sources, the default system compiler (`c++`) is used, which can be overridden by setting the `CXX` environment variable. To pass additional arguments to the compilation process, `extra_cflags` or `extra_ldflags` can be provided. For example, to compile your extension with optimizations, pass `extra_cflags=['-O3']`. You can also use `extra_cflags` to pass further include directories. CUDA support with mixed compilation is provided. Simply pass CUDA source files (`.cu` or `.cuh`) along with other sources. Such files will be detected and compiled with nvcc rather than the C++ compiler. This includes passing the CUDA lib64 directory as a library directory, and linking `cudart`. You can pass additional flags to nvcc via `extra_cuda_cflags`, just like with `extra_cflags` for C++. Various heuristics for finding the CUDA install directory are used, which usually work fine. If not, setting the `CUDA_HOME` environment variable is the safest option.
Parameters * **name** – The name of the extension to build. This MUST be the same as the name of the pybind11 module! * **sources** – A list of relative or absolute paths to C++ source files. * **extra\_cflags** – optional list of compiler flags to forward to the build. * **extra\_cuda\_cflags** – optional list of compiler flags to forward to nvcc when building CUDA sources. * **extra\_ldflags** – optional list of linker flags to forward to the build. * **extra\_include\_paths** – optional list of include directories to forward to the build. * **build\_directory** – optional path to use as build workspace. * **verbose** – If `True`, turns on verbose logging of load steps. * **with\_cuda** – Determines whether CUDA headers and libraries are added to the build. If set to `None` (default), this value is automatically determined based on the existence of `.cu` or `.cuh` in `sources`. Set it to `True` to force CUDA headers and libraries to be included. * **is\_python\_module** – If `True` (default), imports the produced shared library as a Python module. If `False`, behavior depends on `is_standalone`. * **is\_standalone** – If `False` (default), loads the constructed extension into the process as a plain dynamic library. If `True`, builds a standalone executable. Returns If `is_python_module` is `True` (the default): returns the loaded PyTorch extension as a Python module. If `is_python_module` is `False` and `is_standalone` is `False`: returns nothing (the shared library is loaded into the process as a side effect). If `is_standalone` is `True`: returns the path to the executable (on Windows, TORCH\_LIB\_PATH is added to the PATH environment variable as a side effect). #### Example ``` >>> from torch.utils.cpp_extension import load >>> module = load( name='extension', sources=['extension.cpp', 'extension_kernel.cu'], extra_cflags=['-O2'], verbose=True) ``` `torch.utils.cpp_extension.load_inline(name, cpp_sources, cuda_sources=None, functions=None, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, with_pytorch_error_handling=True, keep_intermediates=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#load_inline) Loads a PyTorch C++ extension just-in-time (JIT) from string sources. This function behaves exactly like [`load()`](#torch.utils.cpp_extension.load "torch.utils.cpp_extension.load"), but takes its sources as strings rather than filenames. These strings are stored to files in the build directory, after which the behavior of [`load_inline()`](#torch.utils.cpp_extension.load_inline "torch.utils.cpp_extension.load_inline") is identical to [`load()`](#torch.utils.cpp_extension.load "torch.utils.cpp_extension.load"). See [the tests](https://github.com/pytorch/pytorch/blob/master/test/test_cpp_extensions.py) for good examples of using this function. Sources may omit two required parts of a typical non-inline C++ extension: the necessary header includes, as well as the (pybind11) binding code. More precisely, strings passed to `cpp_sources` are first concatenated into a single `.cpp` file. This file is then prepended with `#include <torch/extension.h>`. Furthermore, if the `functions` argument is supplied, bindings will be automatically generated for each function specified. `functions` can either be a list of function names, or a dictionary mapping from function names to docstrings. If a list is given, the name of each function is used as its docstring.
The sources in `cuda_sources` are concatenated into a separate `.cu` file and prepended with `torch/types.h`, `cuda.h` and `cuda_runtime.h` includes. The `.cpp` and `.cu` files are compiled separately, but ultimately linked into a single library. Note that no bindings are generated for functions in `cuda_sources` per se. To bind to a CUDA kernel, you must create a C++ function that calls it, and either declare or define this C++ function in one of the `cpp_sources` (and include its name in `functions`).

See [`load()`](#torch.utils.cpp_extension.load "torch.utils.cpp_extension.load") for a description of arguments omitted below.

Parameters

* **cpp\_sources** – A string, or list of strings, containing C++ source code.
* **cuda\_sources** – A string, or list of strings, containing CUDA source code.
* **functions** – A list of function names for which to generate function bindings. If a dictionary is given, it should map function names to docstrings (which are otherwise just the function names).
* **with\_cuda** – Determines whether CUDA headers and libraries are added to the build. If set to `None` (default), this value is automatically determined based on whether `cuda_sources` is provided. Set it to `True` to force CUDA headers and libraries to be included.
* **with\_pytorch\_error\_handling** – Determines whether PyTorch error and warning macros are handled by PyTorch instead of pybind11. To do this, each function `foo` is called via an intermediary `_safe_foo` function. This redirection might cause issues in obscure C++ cases; set this flag to `False` when the redirect causes issues.

#### Example

```
>>> from torch.utils.cpp_extension import load_inline
>>> source = '''
at::Tensor sin_add(at::Tensor x, at::Tensor y) {
  return x.sin() + y.sin();
}
'''
>>> module = load_inline(name='inline_extension',
                         cpp_sources=[source],
                         functions=['sin_add'])
```

Note

By default, the Ninja backend uses #CPUS + 2 workers to build the extension. This may use up too many resources on some systems. One can control the number of workers by setting the `MAX_JOBS` environment variable to a non-negative number.

`torch.utils.cpp_extension.include_paths(cuda=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#include_paths)

Get the include paths required to build a C++ or CUDA extension.

Parameters

**cuda** – If `True`, includes CUDA-specific include paths.

Returns

A list of include path strings.

`torch.utils.cpp_extension.check_compiler_abi_compatibility(compiler)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#check_compiler_abi_compatibility)

Verifies that the given compiler is ABI-compatible with PyTorch.

Parameters

**compiler** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The compiler executable name to check (e.g. `g++`). Must be executable in a shell process.

Returns

False if the compiler is (likely) ABI-incompatible with PyTorch, else True.

`torch.utils.cpp_extension.verify_ninja_availability()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#verify_ninja_availability)

Raises `RuntimeError` if the [ninja](https://ninja-build.org/) build system is not available on the system; does nothing otherwise.
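A small hedged sketch of guarding a JIT build with this check (the fallback message is illustrative only, not part of the API):

```
>>> from torch.utils.cpp_extension import verify_ninja_availability
>>> try:
...     verify_ninja_availability()
... except RuntimeError:
...     print('ninja is required for JIT extension builds; install it first')
```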
`torch.utils.cpp_extension.is_ninja_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/cpp_extension.html#is_ninja_available)

Returns `True` if the [ninja](https://ninja-build.org/) build system is available on the system, `False` otherwise.

pytorch torch.cuda

torch.cuda
==========

This package adds support for CUDA tensor types, which implement the same functions as CPU tensors but utilize GPUs for computation.

It is lazily initialized, so you can always import it, and use [`is_available()`](#torch.cuda.is_available "torch.cuda.is_available") to determine if your system supports CUDA.

[CUDA semantics](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-semantics) has more details about working with CUDA.

`torch.cuda.can_device_access_peer(device, peer_device)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#can_device_access_peer)

Checks if peer access between two devices is possible.

`torch.cuda.current_blas_handle()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#current_blas_handle)

Returns a cublasHandle\_t pointer to the current cuBLAS handle.

`torch.cuda.current_device()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#current_device)

Returns the index of the currently selected device.

`torch.cuda.current_stream(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#current_stream)

Returns the currently selected [`Stream`](#torch.cuda.Stream "torch.cuda.Stream") for a given device.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns the currently selected [`Stream`](#torch.cuda.Stream "torch.cuda.Stream") for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

`torch.cuda.default_stream(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#default_stream)

Returns the default [`Stream`](#torch.cuda.Stream "torch.cuda.Stream") for a given device.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns the default [`Stream`](#torch.cuda.Stream "torch.cuda.Stream") for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

`class torch.cuda.device(device)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#device)

Context-manager that changes the selected device.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – device index to select. It’s a no-op if this argument is a negative integer or `None`.

`torch.cuda.device_count()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#device_count)

Returns the number of GPUs available.

`class torch.cuda.device_of(obj)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#device_of)

Context-manager that changes the current device to that of the given object. You can use both tensors and storages as arguments.
If a given object is not allocated on a GPU, this is a no-op.

Parameters

**obj** ([Tensor](tensors#torch.Tensor "torch.Tensor") *or* *Storage*) – object allocated on the selected device.

`torch.cuda.get_arch_list()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#get_arch_list)

Returns the list of CUDA architectures this library was compiled for.

`torch.cuda.get_device_capability(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#get_device_capability)

Gets the CUDA capability of a device.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – device for which to return the device capability. This function is a no-op if this argument is a negative integer. It uses the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Returns

the major and minor CUDA capability of the device

Return type

[tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"))

`torch.cuda.get_device_name(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#get_device_name)

Gets the name of a device.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – device for which to return the name. This function is a no-op if this argument is a negative integer. It uses the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Returns

the name of the device

Return type

[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")

`torch.cuda.get_device_properties(device)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#get_device_properties)

Gets the properties of a device.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – device for which to return the properties.

Returns

the properties of the device

Return type

\_CudaDeviceProperties

`torch.cuda.get_gencode_flags()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#get_gencode_flags)

Returns the NVCC gencode flags this library was compiled with.

`torch.cuda.init()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#init)

Initialize PyTorch’s CUDA state. You may need to call this explicitly if you are interacting with PyTorch via its C API, as Python bindings for CUDA functionality will not be available until this initialization takes place. Ordinary users should not need this, as all of PyTorch’s CUDA methods automatically initialize CUDA state on-demand.

Does nothing if the CUDA state is already initialized.

`torch.cuda.ipc_collect()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#ipc_collect)

Force collects GPU memory after it has been released by CUDA IPC.
Note

Checks if any sent CUDA tensors could be cleaned from memory. Force closes the shared memory file used for reference counting if there are no active counters. Useful when the producer process has stopped actively sending tensors and wants to release unused memory.

`torch.cuda.is_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#is_available)

Returns a bool indicating if CUDA is currently available.

`torch.cuda.is_initialized()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#is_initialized)

Returns whether PyTorch’s CUDA state has been initialized.

`torch.cuda.set_device(device)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#set_device)

Sets the current device.

Usage of this function is discouraged in favor of [`device`](#torch.cuda.device "torch.cuda.device"). In most cases it’s better to use the `CUDA_VISIBLE_DEVICES` environment variable.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – selected device. This function is a no-op if this argument is negative.

`torch.cuda.stream(stream)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#stream)

Context-manager that selects a given stream.

All CUDA kernels queued within its context will be enqueued on the selected stream.

Parameters

**stream** ([Stream](#torch.cuda.Stream "torch.cuda.Stream")) – selected stream. This manager is a no-op if it’s `None`.

Note

Streams are per-device. If the selected stream is not on the current device, this function will also change the current device to match the stream.

`torch.cuda.synchronize(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda.html#synchronize)

Waits for all kernels in all streams on a CUDA device to complete.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – device for which to synchronize. It uses the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Random Number Generator
-----------------------

`torch.cuda.get_rng_state(device='cuda')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#get_rng_state)

Returns the random number generator state of the specified GPU as a ByteTensor.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The device to return the RNG state of. Default: `'cuda'` (i.e., `torch.device('cuda')`, the current CUDA device).

Warning

This function eagerly initializes CUDA.

`torch.cuda.get_rng_state_all()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#get_rng_state_all)

Returns a list of ByteTensor representing the random number states of all devices.

`torch.cuda.set_rng_state(new_state, device='cuda')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#set_rng_state)

Sets the random number generator state of the specified GPU.
Parameters

* **new\_state** (*torch.ByteTensor*) – The desired state.
* **device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The device on which to set the RNG state. Default: `'cuda'` (i.e., `torch.device('cuda')`, the current CUDA device).

`torch.cuda.set_rng_state_all(new_states)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#set_rng_state_all)

Sets the random number generator state of all devices.

Parameters

**new\_states** (*Iterable of torch.ByteTensor*) – The desired state for each device.

`torch.cuda.manual_seed(seed)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#manual_seed)

Sets the seed for generating random numbers for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.

Parameters

**seed** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The desired seed.

Warning

If you are working with a multi-GPU model, this function is insufficient to get determinism. To seed all GPUs, use [`manual_seed_all()`](#torch.cuda.manual_seed_all "torch.cuda.manual_seed_all").

`torch.cuda.manual_seed_all(seed)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#manual_seed_all)

Sets the seed for generating random numbers on all GPUs. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.

Parameters

**seed** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The desired seed.

`torch.cuda.seed()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#seed)

Sets the seed for generating random numbers to a random number for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.

Warning

If you are working with a multi-GPU model, this function will only initialize the seed on one GPU. To initialize all GPUs, use [`seed_all()`](#torch.cuda.seed_all "torch.cuda.seed_all").

`torch.cuda.seed_all()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#seed_all)

Sets the seed for generating random numbers to a random number on all GPUs. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.

`torch.cuda.initial_seed()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/random.html#initial_seed)

Returns the current random seed of the current GPU.

Warning

This function eagerly initializes CUDA.

Communication collectives
-------------------------

`torch.cuda.comm.broadcast(tensor, devices=None, *, out=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/comm.html#broadcast)

Broadcasts a tensor to specified GPU devices.

Parameters

* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – tensor to broadcast. Can be on CPU or GPU.
* **devices** (*Iterable**[*[torch.device](tensor_attributes#torch.torch.device "torch.torch.device")*,* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – an iterable of GPU devices, among which to broadcast.
* **out** (*Sequence**[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]**,* *optional**,* *keyword-only*) – the GPU tensors to store output results.

Note

Exactly one of `devices` and `out` must be specified.
Returns

* If `devices` is specified: a tuple containing copies of `tensor`, placed on `devices`.
* If `out` is specified: a tuple containing `out` tensors, each containing a copy of `tensor`.

`torch.cuda.comm.broadcast_coalesced(tensors, devices, buffer_size=10485760)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/comm.html#broadcast_coalesced)

Broadcasts a sequence of tensors to the specified GPUs. Small tensors are first coalesced into a buffer to reduce the number of synchronizations.

Parameters

* **tensors** (*sequence*) – tensors to broadcast. Must be on the same device, either CPU or GPU.
* **devices** (*Iterable**[*[torch.device](tensor_attributes#torch.torch.device "torch.torch.device")*,* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – an iterable of GPU devices, among which to broadcast.
* **buffer\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – maximum size of the buffer used for coalescing.

Returns

A tuple containing copies of `tensor`, placed on `devices`.

`torch.cuda.comm.reduce_add(inputs, destination=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/comm.html#reduce_add)

Sums tensors from multiple GPUs. All inputs should have matching shapes, dtype, and layout. The output tensor will be of the same shape, dtype, and layout.

Parameters

* **inputs** (*Iterable**[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]*) – an iterable of tensors to add.
* **destination** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – a device on which the output will be placed (default: current device).

Returns

A tensor containing an elementwise sum of all inputs, placed on the `destination` device.

`torch.cuda.comm.scatter(tensor, devices=None, chunk_sizes=None, dim=0, streams=None, *, out=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/comm.html#scatter)

Scatters tensor across multiple GPUs.

Parameters

* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – tensor to scatter. Can be on CPU or GPU.
* **devices** (*Iterable**[*[torch.device](tensor_attributes#torch.torch.device "torch.torch.device")*,* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – an iterable of GPU devices, among which to scatter.
* **chunk\_sizes** (*Iterable**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – sizes of chunks to be placed on each device. It should match `devices` in length and sum to `tensor.size(dim)`. If not specified, `tensor` will be divided into equal chunks.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – A dimension along which to chunk `tensor`. Default: `0`.
* **streams** (*Iterable**[*[Stream](#torch.cuda.Stream "torch.cuda.Stream")*]**,* *optional*) – an iterable of Streams, among which to execute the scatter. If not specified, the default stream will be utilized.
* **out** (*Sequence**[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]**,* *optional**,* *keyword-only*) – the GPU tensors to store output results. Sizes of these tensors must match that of `tensor`, except for `dim`, where the total size must sum to `tensor.size(dim)`.
Note

Exactly one of `devices` and `out` must be specified. When `out` is specified, `chunk_sizes` must not be specified and will be inferred from sizes of `out`.

Returns

* If `devices` is specified: a tuple containing chunks of `tensor`, placed on `devices`.
* If `out` is specified: a tuple containing `out` tensors, each containing a chunk of `tensor`.

`torch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/comm.html#gather)

Gathers tensors from multiple GPU devices.

Parameters

* **tensors** (*Iterable**[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]*) – an iterable of tensors to gather. Tensor sizes in all dimensions other than `dim` have to match.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – a dimension along which the tensors will be concatenated. Default: `0`.
* **destination** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device")*,* [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*, or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the output device. Can be CPU or CUDA. Default: the current CUDA device.
* **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional**,* *keyword-only*) – the tensor to store gather result. Its sizes must match those of `tensors`, except for `dim`, where the size must equal `sum(tensor.size(dim) for tensor in tensors)`. Can be on CPU or CUDA.

Note

`destination` must not be specified when `out` is specified.

Returns

* If `destination` is specified: a tensor located on `destination` device, that is a result of concatenating `tensors` along `dim`.
* If `out` is specified: the `out` tensor, now containing results of concatenating `tensors` along `dim`.

Streams and events
------------------

`class torch.cuda.Stream` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Stream)

Wrapper around a CUDA stream.

A CUDA stream is a linear sequence of execution that belongs to a specific device, independent from other streams. See [CUDA semantics](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-semantics) for details.

Parameters

* **device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – a device on which to allocate the stream. If [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default) or a negative integer, this will use the current device.
* **priority** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – priority of the stream. Can be either -1 (high priority) or 0 (low priority). By default, streams have priority 0.

Note

Although CUDA versions >= 11 support more than two levels of priorities, PyTorch only supports two.

`query()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Stream.query)

Checks if all the work submitted has been completed.

Returns

A boolean indicating if all kernels in this stream are completed.

`record_event(event=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Stream.record_event)

Records an event.

Parameters

**event** ([Event](#torch.cuda.Event "torch.cuda.Event")*,* *optional*) – event to record. If not given, a new one will be allocated.

Returns

Recorded event.
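A brief hedged sketch of `record_event` on a side stream (assumes a CUDA-capable machine; the matmul is a placeholder workload):

```
>>> import torch
>>> s = torch.cuda.Stream()
>>> with torch.cuda.stream(s):
...     y = torch.randn(1000, 1000, device='cuda') @ torch.randn(1000, 1000, device='cuda')
>>> e = s.record_event()  # allocates and records a new Event on this stream
>>> e.synchronize()       # blocks the CPU until the matmul has completed
```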
`synchronize()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Stream.synchronize) Wait for all the kernels in this stream to complete. Note This is a wrapper around `cudaStreamSynchronize()`: see [CUDA Stream documentation](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html) for more info. `wait_event(event)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Stream.wait_event) Makes all future work submitted to the stream wait for an event. Parameters **event** ([Event](#torch.cuda.Event "torch.cuda.Event")) – an event to wait for. Note This is a wrapper around `cudaStreamWaitEvent()`: see [CUDA Stream documentation](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html) for more info. This function returns without waiting for `event`: only future operations are affected. `wait_stream(stream)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Stream.wait_stream) Synchronizes with another stream. All future work submitted to this stream will wait until all kernels submitted to a given stream at the time of call complete. Parameters **stream** ([Stream](#torch.cuda.Stream "torch.cuda.Stream")) – a stream to synchronize. Note This function returns without waiting for currently enqueued kernels in [`stream`](#torch.cuda.stream "torch.cuda.stream"): only future operations are affected. `class torch.cuda.Event` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Event) Wrapper around a CUDA event. CUDA events are synchronization markers that can be used to monitor the device’s progress, to accurately measure timing, and to synchronize CUDA streams. The underlying CUDA events are lazily initialized when the event is first recorded or exported to another process. After creation, only streams on the same device may record the event. However, streams on any device can wait on the event. Parameters * **enable\_timing** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – indicates if the event should measure time (default: `False`) * **blocking** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, [`wait()`](#torch.cuda.Event.wait "torch.cuda.Event.wait") will be blocking (default: `False`) * **interprocess** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – if `True`, the event can be shared between processes (default: `False`) `elapsed_time(end_event)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Event.elapsed_time) Returns the time elapsed in milliseconds after the event was recorded and before the end\_event was recorded. `classmethod from_ipc_handle(device, handle)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Event.from_ipc_handle) Reconstruct an event from an IPC handle on the given device. `ipc_handle()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Event.ipc_handle) Returns an IPC handle of this event. If not recorded yet, the event will use the current device. `query()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Event.query) Checks if all work currently captured by event has completed. Returns A boolean indicating if all work currently captured by event has completed. 
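Tying `elapsed_time` together with `record()` and `synchronize()` (documented just below), a hedged timing sketch; both events need `enable_timing=True`, and a CUDA device is assumed:

```
>>> import torch
>>> start = torch.cuda.Event(enable_timing=True)
>>> end = torch.cuda.Event(enable_timing=True)
>>> start.record()
>>> y = torch.randn(1000, 1000, device='cuda') @ torch.randn(1000, 1000, device='cuda')
>>> end.record()
>>> torch.cuda.synchronize()        # ensure both events have completed
>>> print(start.elapsed_time(end))  # milliseconds between the two records
```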
`record(stream=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Event.record)

Records the event in a given stream.

Uses `torch.cuda.current_stream()` if no stream is specified. The stream’s device must match the event’s device.

`synchronize()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Event.synchronize)

Waits for the event to complete.

Waits until the completion of all work currently captured in this event. This prevents the CPU thread from proceeding until the event completes.

Note

This is a wrapper around `cudaEventSynchronize()`: see [CUDA Event documentation](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__EVENT.html) for more info.

`wait(stream=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/streams.html#Event.wait)

Makes all future work submitted to the given stream wait for this event.

Uses `torch.cuda.current_stream()` if no stream is specified.

Memory management
-----------------

`torch.cuda.empty_cache()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#empty_cache)

Releases all unoccupied cached memory currently held by the caching allocator so that it can be used by other GPU applications and is visible in `nvidia-smi`.

Note

[`empty_cache()`](#torch.cuda.empty_cache "torch.cuda.empty_cache") doesn’t increase the amount of GPU memory available for PyTorch. However, it may help reduce fragmentation of GPU memory in certain cases. See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management.

`torch.cuda.list_gpu_processes(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#list_gpu_processes)

Returns a human-readable printout of the running processes and their GPU memory use for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns printout for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

`torch.cuda.memory_stats(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#memory_stats)

Returns a dictionary of CUDA memory allocator statistics for a given device. The return value of this function is a dictionary of statistics, each of which is a non-negative integer.

Core statistics:

* `"allocated.{all,large_pool,small_pool}.{current,peak,allocated,freed}"`: number of allocation requests received by the memory allocator.
* `"allocated_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"`: amount of allocated memory.
* `"segment.{all,large_pool,small_pool}.{current,peak,allocated,freed}"`: number of reserved segments from `cudaMalloc()`.
* `"reserved_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"`: amount of reserved memory.
* `"active.{all,large_pool,small_pool}.{current,peak,allocated,freed}"`: number of active memory blocks.
* `"active_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"`: amount of active memory.
* `"inactive_split.{all,large_pool,small_pool}.{current,peak,allocated,freed}"`: number of inactive, non-releasable memory blocks.
* `"inactive_split_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}"`: amount of inactive, non-releasable memory. For these core statistics, values are broken down as follows. Pool type: * `all`: combined statistics across all memory pools. * `large_pool`: statistics for the large allocation pool (as of October 2019, for size >= 1MB allocations). * `small_pool`: statistics for the small allocation pool (as of October 2019, for size < 1MB allocations). Metric type: * `current`: current value of this metric. * `peak`: maximum value of this metric. * `allocated`: historical total increase in this metric. * `freed`: historical total decrease in this metric. In addition to the core statistics, we also provide some simple event counters: * `"num_alloc_retries"`: number of failed `cudaMalloc` calls that result in a cache flush and retry. * `"num_ooms"`: number of out-of-memory errors thrown. Parameters **device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns statistics for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default). Note See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management. `torch.cuda.memory_summary(device=None, abbreviated=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#memory_summary) Returns a human-readable printout of the current memory allocator statistics for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions. Parameters * **device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns printout for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default). * **abbreviated** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to return an abbreviated summary (default: False). Note See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management. `torch.cuda.memory_snapshot()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#memory_snapshot) Returns a snapshot of the CUDA memory allocator state across all devices. Interpreting the output of this function requires familiarity with the memory allocator internals. Note See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management. `torch.cuda.memory_allocated(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#memory_allocated) Returns the current GPU memory occupied by tensors in bytes for a given device. Parameters **device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. 
Returns statistic for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Note

This is likely less than the amount shown in `nvidia-smi` since some unused memory can be held by the caching allocator and some context needs to be created on GPU. See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management.

`torch.cuda.max_memory_allocated(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#max_memory_allocated)

Returns the maximum GPU memory occupied by tensors in bytes for a given device.

By default, this returns the peak allocated memory since the beginning of this program. `reset_peak_memory_stats()` can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak allocated memory usage of each iteration in a training loop.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns statistic for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Note

See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management.

`torch.cuda.reset_max_memory_allocated(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#reset_max_memory_allocated)

Resets the starting point in tracking maximum GPU memory occupied by tensors for a given device. See [`max_memory_allocated()`](#torch.cuda.max_memory_allocated "torch.cuda.max_memory_allocated") for details.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns statistic for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Warning

This function now calls `reset_peak_memory_stats()`, which resets *all* peak memory stats.

Note

See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management.

`torch.cuda.memory_reserved(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#memory_reserved)

Returns the current GPU memory managed by the caching allocator in bytes for a given device.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns statistic for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Note

See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management.
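A hedged sketch of the per-iteration peak-tracking pattern mentioned under `max_memory_allocated()` above (assumes a CUDA device; the workload is a placeholder):

```
>>> import torch
>>> for step in range(3):
...     torch.cuda.reset_peak_memory_stats()
...     x = torch.randn(1024, 1024, device='cuda')
...     y = x @ x
...     torch.cuda.synchronize()
...     print(step, torch.cuda.max_memory_allocated())  # peak bytes this step
```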
`torch.cuda.max_memory_reserved(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#max_memory_reserved)

Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.

By default, this returns the peak cached memory since the beginning of this program. `reset_peak_memory_stats()` can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak cached memory amount of each iteration in a training loop.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns statistic for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Note

See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management.

`torch.cuda.set_per_process_memory_fraction(fraction, device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#set_per_process_memory_fraction)

Sets the memory fraction for a process. The fraction limits how much memory the caching allocator may allocate on a CUDA device; the allowed amount equals the total visible memory multiplied by the fraction. If a process tries to allocate more than the allowed amount, the allocator raises an out-of-memory error.

Parameters

* **fraction** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Range: 0~1. Allowed memory equals total\_memory \* fraction.
* **device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. If it is `None` the default CUDA device is used.

Note

In general, the total available free memory is less than the total capacity.

`torch.cuda.memory_cached(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#memory_cached)

Deprecated; see [`memory_reserved()`](#torch.cuda.memory_reserved "torch.cuda.memory_reserved").

`torch.cuda.max_memory_cached(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#max_memory_cached)

Deprecated; see [`max_memory_reserved()`](#torch.cuda.max_memory_reserved "torch.cuda.max_memory_reserved").

`torch.cuda.reset_max_memory_cached(device=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/memory.html#reset_max_memory_cached)

Resets the starting point in tracking maximum GPU memory managed by the caching allocator for a given device. See [`max_memory_cached()`](#torch.cuda.max_memory_cached "torch.cuda.max_memory_cached") for details.

Parameters

**device** ([torch.device](tensor_attributes#torch.torch.device "torch.torch.device") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – selected device. Returns statistic for the current device, given by [`current_device()`](#torch.cuda.current_device "torch.cuda.current_device"), if [`device`](#torch.cuda.device "torch.cuda.device") is `None` (default).

Warning

This function now calls `reset_peak_memory_stats()`, which resets *all* peak memory stats.
Note

See [Memory management](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-management) for more details about GPU memory management.

NVIDIA Tools Extension (NVTX)
-----------------------------

`torch.cuda.nvtx.mark(msg)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/nvtx.html#mark)

Describes an instantaneous event that occurred at some point.

Parameters

**msg** (*string*) – ASCII message to associate with the event.

`torch.cuda.nvtx.range_push(msg)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/nvtx.html#range_push)

Pushes a range onto a stack of nested range spans. Returns the zero-based depth of the range that is started.

Parameters

**msg** (*string*) – ASCII message to associate with the range.

`torch.cuda.nvtx.range_pop()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/nvtx.html#range_pop)

Pops a range off of a stack of nested range spans. Returns the zero-based depth of the range that is ended.
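A hedged usage sketch: bracket regions with nested NVTX ranges so they show up in NVTX-aware profilers (the range and marker names are placeholders):

```
>>> import torch
>>> torch.cuda.nvtx.range_push('forward')   # returns depth 0
>>> torch.cuda.nvtx.range_push('layer1')    # returns depth 1 (nested)
>>> torch.cuda.nvtx.range_pop()             # ends 'layer1'
>>> torch.cuda.nvtx.range_pop()             # ends 'forward'
>>> torch.cuda.nvtx.mark('checkpoint saved')
```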
pytorch torch.nn.intrinsic.quantized

torch.nn.intrinsic.quantized
============================

This module implements the quantized implementations of fused operations like conv + relu.

ConvReLU2d
----------

`class torch.nn.intrinsic.quantized.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/quantized/modules/conv_relu.html#ConvReLU2d)

A ConvReLU2d module is a fused module of Conv2d and ReLU. We adopt the same interface as [`torch.nn.quantized.Conv2d`](torch.nn.quantized#torch.nn.quantized.Conv2d "torch.nn.quantized.Conv2d").

Variables – Same as `torch.nn.quantized.Conv2d`.

ConvReLU3d
----------

`class torch.nn.intrinsic.quantized.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/quantized/modules/conv_relu.html#ConvReLU3d)

A ConvReLU3d module is a fused module of Conv3d and ReLU. We adopt the same interface as [`torch.nn.quantized.Conv3d`](torch.nn.quantized#torch.nn.quantized.Conv3d "torch.nn.quantized.Conv3d").

Variables – Same as `torch.nn.quantized.Conv3d`.

LinearReLU
----------

`class torch.nn.intrinsic.quantized.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/quantized/modules/linear_relu.html#LinearReLU)

A LinearReLU module fused from Linear and ReLU modules. We adopt the same interface as [`torch.nn.quantized.Linear`](torch.nn.quantized#torch.nn.quantized.Linear "torch.nn.quantized.Linear").

Variables – Same as `torch.nn.quantized.Linear`.

Examples:

```
>>> import torch
>>> from torch import nn
>>> m = nn.intrinsic.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
```

pytorch Distributed communication package - torch.distributed

Distributed communication package - torch.distributed
=====================================================

Note

Please refer to [PyTorch Distributed Overview](https://pytorch.org/tutorials/beginner/dist_overview.html) for a brief introduction to all features related to distributed training.

Backends
--------

`torch.distributed` supports three built-in backends, each with different capabilities. The table below shows which functions are available for use with CPU / CUDA tensors. MPI supports CUDA only if the implementation used to build PyTorch supports it.

| Backend | `gloo` | | `mpi` | | `nccl` | |
| --- | --- | --- | --- | --- | --- | --- |
| Device | CPU | GPU | CPU | GPU | CPU | GPU |
| send | ✓ | ✘ | ✓ | ? | ✘ | ✘ |
| recv | ✓ | ✘ | ✓ | ? | ✘ | ✘ |
| broadcast | ✓ | ✓ | ✓ | ? | ✘ | ✓ |
| all\_reduce | ✓ | ✓ | ✓ | ? | ✘ | ✓ |
| reduce | ✓ | ✘ | ✓ | ? | ✘ | ✓ |
| all\_gather | ✓ | ✘ | ✓ | ? | ✘ | ✓ |
| gather | ✓ | ✘ | ✓ | ? | ✘ | ✘ |
| scatter | ✓ | ✘ | ✓ | ? | ✘ | ✘ |
| reduce\_scatter | ✘ | ✘ | ✘ | ✘ | ✘ | ✓ |
| all\_to\_all | ✘ | ✘ | ✓ | ? | ✘ | ✘ |
| barrier | ✓ | ✘ | ✓ | ? | ✘ | ✓ |

### Backends that come with PyTorch

PyTorch distributed package supports Linux (stable), MacOS (stable), and Windows (prototype). By default for Linux, the Gloo and NCCL backends are built and included in PyTorch distributed (NCCL only when building with CUDA). MPI is an optional backend that can only be included if you build PyTorch from source (e.g., building PyTorch on a host that has MPI installed).
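A hedged sketch for checking which backends your build includes (these helpers are documented under Initialization below):

```
>>> import torch.distributed as dist
>>> dist.is_available()       # was the distributed package built at all?
>>> dist.is_mpi_available()   # is the MPI backend compiled in?
>>> dist.is_nccl_available()  # is the NCCL backend compiled in?
```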
Note

As of PyTorch v1.8, Windows supports all collective communications backends except NCCL. If the `init_method` argument of [`init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") points to a file it must adhere to the following schema:

* Local file system, `init_method="file:///d:/tmp/some_file"`
* Shared file system, `init_method="file://////{machine_name}/{share_folder_name}/some_file"`

Same as on the Linux platform, you can enable TcpStore by setting the environment variables MASTER\_ADDR and MASTER\_PORT.

### Which backend to use?

In the past, we were often asked: “which backend should I use?”.

* Rule of thumb
  + Use the NCCL backend for distributed **GPU** training.
  + Use the Gloo backend for distributed **CPU** training.
* GPU hosts with InfiniBand interconnect
  + Use NCCL, since it’s the only backend that currently supports InfiniBand and GPUDirect.
* GPU hosts with Ethernet interconnect
  + Use NCCL, since it currently provides the best distributed GPU training performance, especially for multiprocess single-node or multi-node distributed training. If you encounter any problem with NCCL, use Gloo as the fallback option. (Note that Gloo currently runs slower than NCCL for GPUs.)
* CPU hosts with InfiniBand interconnect
  + If your InfiniBand has enabled IP over IB, use Gloo, otherwise, use MPI instead. We are planning on adding InfiniBand support for Gloo in the upcoming releases.
* CPU hosts with Ethernet interconnect
  + Use Gloo, unless you have specific reasons to use MPI.

### Common environment variables

#### Choosing the network interface to use

By default, both the NCCL and Gloo backends will try to find the right network interface to use. If the automatically detected interface is not correct, you can override it using the following environment variables (applicable to the respective backend):

* **NCCL\_SOCKET\_IFNAME**, for example `export NCCL_SOCKET_IFNAME=eth0`
* **GLOO\_SOCKET\_IFNAME**, for example `export GLOO_SOCKET_IFNAME=eth0`

If you’re using the Gloo backend, you can specify multiple interfaces by separating them by a comma, like this: `export GLOO_SOCKET_IFNAME=eth0,eth1,eth2,eth3`. The backend will dispatch operations in a round-robin fashion across these interfaces. It is imperative that all processes specify the same number of interfaces in this variable.

#### Other NCCL environment variables

NCCL has also provided a number of environment variables for fine-tuning purposes. Commonly used ones include the following for debugging purposes:

* `export NCCL_DEBUG=INFO`
* `export NCCL_DEBUG_SUBSYS=ALL`

For the full list of NCCL environment variables, please refer to [NVIDIA NCCL’s official documentation](https://docs.nvidia.com/deeplearning/sdk/nccl-developer-guide/docs/env.html).

Basics
------

The `torch.distributed` package provides PyTorch support and communication primitives for multiprocess parallelism across several computation nodes running on one or more machines. The class [`torch.nn.parallel.DistributedDataParallel()`](generated/torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") builds on this functionality to provide synchronous distributed training as a wrapper around any PyTorch model.
This differs from the kinds of parallelism provided by [Multiprocessing package - torch.multiprocessing](multiprocessing) and [`torch.nn.DataParallel()`](generated/torch.nn.dataparallel#torch.nn.DataParallel "torch.nn.DataParallel") in that it supports multiple network-connected machines and in that the user must explicitly launch a separate copy of the main training script for each process. In the single-machine synchronous case, `torch.distributed` or the [`torch.nn.parallel.DistributedDataParallel()`](generated/torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") wrapper may still have advantages over other approaches to data-parallelism, including [`torch.nn.DataParallel()`](generated/torch.nn.dataparallel#torch.nn.DataParallel "torch.nn.DataParallel"): * Each process maintains its own optimizer and performs a complete optimization step with each iteration. While this may appear redundant, since the gradients have already been gathered together and averaged across processes and are thus the same for every process, this means that no parameter broadcast step is needed, reducing time spent transferring tensors between nodes. * Each process contains an independent Python interpreter, eliminating the extra interpreter overhead and “GIL-thrashing” that comes from driving several execution threads, model replicas, or GPUs from a single Python process. This is especially important for models that make heavy use of the Python runtime, including models with recurrent layers or many small components. Initialization -------------- The package needs to be initialized using the [`torch.distributed.init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") function before calling any other methods. This blocks until all processes have joined. `torch.distributed.is_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed.html#is_available) Returns `True` if the distributed package is available. Otherwise, `torch.distributed` does not expose any other APIs. Currently, `torch.distributed` is available on Linux, MacOS and Windows. Set `USE_DISTRIBUTED=1` to enable it when building PyTorch from source. Currently, the default value is `USE_DISTRIBUTED=1` for Linux and Windows, `USE_DISTRIBUTED=0` for MacOS. `torch.distributed.init_process_group(backend, init_method=None, timeout=datetime.timedelta(seconds=1800), world_size=-1, rank=-1, store=None, group_name='')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#init_process_group) Initializes the default distributed process group, and this will also initialize the distributed package. There are 2 main ways to initialize a process group: 1. Specify `store`, `rank`, and `world_size` explicitly. 2. Specify `init_method` (a URL string) which indicates where/how to discover peers. Optionally specify `rank` and `world_size`, or encode all required parameters in the URL and omit them. If neither is specified, `init_method` is assumed to be “env://”. Parameters * **backend** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)") *or* [Backend](#torch.distributed.Backend "torch.distributed.Backend")) – The backend to use. Depending on build-time configurations, valid values include `mpi`, `gloo`, and `nccl`. 
This field should be given as a lowercase string (e.g., `"gloo"`), which can also be accessed via [`Backend`](#torch.distributed.Backend "torch.distributed.Backend") attributes (e.g., `Backend.GLOO`). If using multiple processes per machine with `nccl` backend, each process must have exclusive access to every GPU it uses, as sharing GPUs between processes can result in deadlocks. * **init\_method** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – URL specifying how to initialize the process group. Default is “env://” if no `init_method` or `store` is specified. Mutually exclusive with `store`. * **world\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of processes participating in the job. Required if `store` is specified. * **rank** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Rank of the current process (it should be a number between 0 and `world_size`-1). Required if `store` is specified. * **store** ([Store](#torch.distributed.Store "torch.distributed.Store")*,* *optional*) – Key/value store accessible to all workers, used to exchange connection/address information. Mutually exclusive with `init_method`. * **timeout** (*timedelta**,* *optional*) – Timeout for operations executed against the process group. Default value equals 30 minutes. This is applicable for the `gloo` backend. For `nccl`, this is applicable only if the environment variable `NCCL_BLOCKING_WAIT` or `NCCL_ASYNC_ERROR_HANDLING` is set to 1. When `NCCL_BLOCKING_WAIT` is set, this is the duration for which the process will block and wait for collectives to complete before throwing an exception. When `NCCL_ASYNC_ERROR_HANDLING` is set, this is the duration after which collectives will be aborted asynchronously and the process will crash. `NCCL_BLOCKING_WAIT` will provide errors to the user which can be caught and handled, but due to its blocking nature, it has a performance overhead. On the other hand, `NCCL_ASYNC_ERROR_HANDLING` has very little performance overhead, but crashes the process on errors. This is done since CUDA execution is async and it is no longer safe to continue executing user code since failed async NCCL operations might result in subsequent CUDA operations running on corrupted data. Only one of these two environment variables should be set. * **group\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional**,* *deprecated*) – Group name. To enable `backend == Backend.MPI`, PyTorch needs to be built from source on a system that supports MPI. `class torch.distributed.Backend` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#Backend) An enum-like class of available backends: GLOO, NCCL, MPI, and other registered backends. The values of this class are lowercase strings, e.g., `"gloo"`. They can be accessed as attributes, e.g., `Backend.NCCL`. This class can be directly called to parse the string, e.g., `Backend(backend_str)` will check if `backend_str` is valid, and return the parsed lowercase string if so. It also accepts uppercase strings, e.g., `Backend("GLOO")` returns `"gloo"`. Note The entry `Backend.UNDEFINED` is present but only used as initial value of some fields. Users should neither use it directly nor assume its existence. 
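For illustration, a small sketch of the string handling described above:

```
>>> import torch.distributed as dist
>>> dist.Backend("GLOO")   # parses and validates; uppercase input is accepted
'gloo'
>>> dist.Backend.NCCL      # attribute access; values are lowercase strings
'nccl'
```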
`torch.distributed.get_backend(group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#get_backend)

Returns the backend of the given process group.

Parameters

**group** (*ProcessGroup**,* *optional*) – The process group to work on. The default is the general main process group. If another specific group is specified, the calling process must be part of `group`.

Returns

The backend of the given process group as a lower case string.

`torch.distributed.get_rank(group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#get_rank)

Returns the rank of the current process in the given process group.

Rank is a unique identifier assigned to each process within a distributed process group. Ranks are always consecutive integers ranging from 0 to `world_size - 1`.

Parameters

**group** (*ProcessGroup**,* *optional*) – The process group to work on. If None, the default process group will be used.

Returns

The rank of the process group, or -1 if not part of the group.

`torch.distributed.get_world_size(group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#get_world_size)

Returns the number of processes in the current process group.

Parameters

**group** (*ProcessGroup**,* *optional*) – The process group to work on. If None, the default process group will be used.

Returns

The world size of the process group, or -1 if not part of the group.

`torch.distributed.is_initialized()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#is_initialized)

Checks whether the default process group has been initialized.

`torch.distributed.is_mpi_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#is_mpi_available)

Checks if the MPI backend is available.

`torch.distributed.is_nccl_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#is_nccl_available)

Checks if the NCCL backend is available.

Currently three initialization methods are supported:

### TCP initialization

There are two ways to initialize using TCP, both requiring a network address reachable from all processes and a desired `world_size`. The first way requires specifying an address that belongs to the rank 0 process. This initialization method requires that all processes have manually specified ranks.

Note that multicast address is not supported anymore in the latest distributed package. `group_name` is deprecated as well.

```
import torch.distributed as dist

# Use address of one of the machines
dist.init_process_group(backend, init_method='tcp://10.1.1.20:23456',
                        rank=args.rank, world_size=4)
```

### Shared file-system initialization

Another initialization method makes use of a file system that is shared and visible from all machines in a group, along with a desired `world_size`. The URL should start with `file://` and contain a path to a non-existent file (in an existing directory) on a shared file system. File-system initialization will automatically create that file if it doesn’t exist, but will not delete the file. Therefore, it is your responsibility to make sure that the file is cleaned up before the next [`init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") call on the same file path/name.

Note that automatic rank assignment is not supported anymore in the latest distributed package and `group_name` is deprecated as well.
Warning
This method assumes that the file system supports locking using `fcntl` - most local systems and NFS support it.

Warning
This method will always create the file and try its best to clean up and remove the file at the end of the program. In other words, each initialization with the file init method will need a brand new empty file in order for the initialization to succeed. If the same file used by the previous initialization (which happens not to get cleaned up) is used again, this is unexpected behavior and can often cause deadlocks and failures. Therefore, even though this method will try its best to clean up the file, if the auto-delete happens to be unsuccessful, it is your responsibility to ensure that the file is removed at the end of the training to prevent the same file from being reused the next time. This is especially important if you plan to call [`init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") multiple times on the same file name. In other words, if the file is not removed/cleaned up and you call [`init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") again on that file, failures are expected. The rule of thumb here is to make sure that the file is non-existent or empty every time [`init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") is called.

```
import torch.distributed as dist

# rank should always be specified
dist.init_process_group(backend, init_method='file:///mnt/nfs/sharedfile',
                        world_size=4, rank=args.rank)
```

### Environment variable initialization

This method will read the configuration from environment variables, allowing one to fully customize how the information is obtained. The variables to be set are:

* `MASTER_PORT` - required; has to be a free port on the machine with rank 0
* `MASTER_ADDR` - required (except for rank 0); address of the rank 0 node
* `WORLD_SIZE` - required; can be set either here, or in a call to the init function
* `RANK` - required; can be set either here, or in a call to the init function

The machine with rank 0 will be used to set up all connections. This is the default method, meaning that `init_method` does not have to be specified (or can be `env://`).
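As a concrete illustration, the sketch below sets the four variables by hand before calling the init function; in practice a launcher (such as `torch.distributed.launch`, described later) would set them for you. The address, port, and backend used here are placeholder assumptions:

```
import os
import torch.distributed as dist

os.environ["MASTER_ADDR"] = "10.1.1.20"  # address of the rank 0 node
os.environ["MASTER_PORT"] = "23456"      # free port on the rank 0 machine
os.environ["RANK"] = "0"                 # this process's rank
os.environ["WORLD_SIZE"] = "4"           # total number of processes
dist.init_process_group(backend="gloo")  # init_method defaults to "env://"
```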
Distributed Key-Value Store
---------------------------

The distributed package comes with a distributed key-value store, which can be used to share information between processes in the group as well as to initialize the distributed package in [`torch.distributed.init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") (by explicitly creating the store as an alternative to specifying `init_method`). There are 3 choices for Key-Value Stores: [`TCPStore`](#torch.distributed.TCPStore "torch.distributed.TCPStore"), [`FileStore`](#torch.distributed.FileStore "torch.distributed.FileStore"), and [`HashStore`](#torch.distributed.HashStore "torch.distributed.HashStore").

`class torch.distributed.Store`
Base class for all store implementations, such as the 3 provided by PyTorch distributed ([`TCPStore`](#torch.distributed.TCPStore "torch.distributed.TCPStore"), [`FileStore`](#torch.distributed.FileStore "torch.distributed.FileStore"), and [`HashStore`](#torch.distributed.HashStore "torch.distributed.HashStore")).

`class torch.distributed.TCPStore`
A TCP-based distributed key-value store implementation. The server store holds the data, while the client stores can connect to the server store over TCP and perform actions such as `set()` to insert a key-value pair, `get()` to retrieve a key-value pair, etc.

Parameters
* **host\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The hostname or IP address the server store should run on.
* **port** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The port on which the server store should listen for incoming requests.
* **world\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The total number of store users (number of clients + 1 for the server).
* **is\_master** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – True when initializing the server store, False for client stores.
* **timeout** (*timedelta*) – Timeout used by the store during initialization and for methods such as `get()` and `wait()`.

Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Run on process 1 (server)
>>> server_store = dist.TCPStore("127.0.0.1", 1234, 2, True, timedelta(seconds=30))
>>> # Run on process 2 (client)
>>> client_store = dist.TCPStore("127.0.0.1", 1234, 2, False)
>>> # Use any of the store methods from either the client or server after initialization
>>> server_store.set("first_key", "first_value")
>>> client_store.get("first_key")
```

`class torch.distributed.HashStore`
A thread-safe store implementation based on an underlying hashmap. This store can be used within the same process (for example, by other threads), but cannot be used across processes.

Example:
```
>>> import torch.distributed as dist
>>> store = dist.HashStore()
>>> # store can be used from other threads
>>> # Use any of the store methods after initialization
>>> store.set("first_key", "first_value")
```

`class torch.distributed.FileStore`
A store implementation that uses a file to store the underlying key-value pairs.

Parameters
* **file\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – Path of the file in which to store the key-value pairs.
* **world\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The total number of processes using the store.

Example:
```
>>> import torch.distributed as dist
>>> store1 = dist.FileStore("/tmp/filestore", 2)
>>> store2 = dist.FileStore("/tmp/filestore", 2)
>>> # Use any of the store methods from either the client or server after initialization
>>> store1.set("first_key", "first_value")
>>> store2.get("first_key")
```

`class torch.distributed.PrefixStore`
A wrapper around any of the 3 key-value stores ([`TCPStore`](#torch.distributed.TCPStore "torch.distributed.TCPStore"), [`FileStore`](#torch.distributed.FileStore "torch.distributed.FileStore"), and [`HashStore`](#torch.distributed.HashStore "torch.distributed.HashStore")) that adds a prefix to each key inserted into the store.

Parameters
* **prefix** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The prefix string that is prepended to each key before being inserted into the store.
* **store** (*torch.distributed.store*) – A store object that forms the underlying key-value store.
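As mentioned above, a store can also serve as the rendezvous mechanism for [`init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") itself. A minimal sketch, assuming a 2-process job where each process's `rank` is supplied externally (e.g. by a launcher); the host and port are placeholders:

```
import torch.distributed as dist
from datetime import timedelta

rank, world_size = 0, 2  # per-process values, e.g. provided by a launcher
store = dist.TCPStore("10.1.1.20", 23456, world_size, rank == 0,
                      timedelta(seconds=30))
# When a store is passed, rank and world_size are required and
# init_method must not be given.
dist.init_process_group("gloo", store=store, rank=rank,
                        world_size=world_size)
```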
`torch.distributed.Store.set(self: torch._C._distributed_c10d.Store, arg0: str, arg1: str) → None`
Inserts the key-value pair into the store based on the supplied `key` and `value`. If `key` already exists in the store, it will overwrite the old value with the new supplied `value`.

Parameters
* **key** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The key to be added to the store.
* **value** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The value associated with `key` to be added to the store.

Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key", "first_value")
>>> # Should return "first_value"
>>> store.get("first_key")
```

`torch.distributed.Store.get(self: torch._C._distributed_c10d.Store, arg0: str) → bytes`
Retrieves the value associated with the given `key` in the store. If `key` is not present in the store, the function will wait for `timeout`, which is defined when initializing the store, before throwing an exception.

Parameters
**key** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The function will return the value associated with this key.

Returns
Value associated with `key` if `key` is in the store.

Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key", "first_value")
>>> # Should return "first_value"
>>> store.get("first_key")
```

`torch.distributed.Store.add(self: torch._C._distributed_c10d.Store, arg0: str, arg1: int) → int`
The first call to `add()` for a given `key` creates a counter associated with `key` in the store, initialized to `amount`. Subsequent calls to `add()` with the same `key` increment the counter by the specified `amount`. Calling `add()` with a key that has already been set in the store by `set()` will result in an exception.

Parameters
* **key** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The key in the store whose counter will be incremented.
* **amount** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The quantity by which the counter will be incremented.

Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.add("first_key", 1)
>>> store.add("first_key", 6)
>>> # Should return 7
>>> store.get("first_key")
```

`torch.distributed.Store.wait(*args, **kwargs)`
Overloaded function.

1. wait(self: torch.\_C.\_distributed\_c10d.Store, arg0: List[str]) -> None

Waits for each key in `keys` to be added to the store. If not all keys are set before the `timeout` (set during store initialization), then `wait` will throw an exception.

Parameters
**keys** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – List of keys on which to wait until they are set in the store.

Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> # This will throw an exception after 30 seconds
>>> store.wait(["bad_key"])
```
2. wait(self: torch.\_C.\_distributed\_c10d.Store, arg0: List[str], arg1: datetime.timedelta) -> None

Waits for each key in `keys` to be added to the store, and throws an exception if the keys have not been set by the supplied `timeout`.

Parameters
* **keys** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – List of keys on which to wait until they are set in the store.
* **timeout** (*timedelta*) – Time to wait for the keys to be added before throwing an exception.

Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> # This will throw an exception after 10 seconds
>>> store.wait(["bad_key"], timedelta(seconds=10))
```

`torch.distributed.Store.num_keys(self: torch._C._distributed_c10d.Store) → int`
Returns the number of keys set in the store. Note that this number will typically be one greater than the number of keys added by `set()` and `add()` since one key is used to coordinate all the workers using the store.

Warning
When used with the [`FileStore`](#torch.distributed.FileStore "torch.distributed.FileStore"), `num_keys` returns the number of keys written to the underlying file. If the store is destructed and another store is created with the same file, the original keys will be retained.

Returns
The number of keys present in the store.

Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key", "first_value")
>>> # This should return 2
>>> store.num_keys()
```

`torch.distributed.Store.delete_key(self: torch._C._distributed_c10d.Store, arg0: str) → bool`
Deletes the key-value pair associated with `key` from the store. Returns `true` if the key was successfully deleted, and `false` if it was not.

Warning
The `delete_key` API is only supported by the [`TCPStore`](#torch.distributed.TCPStore "torch.distributed.TCPStore") and [`HashStore`](#torch.distributed.HashStore "torch.distributed.HashStore"). Using this API with the [`FileStore`](#torch.distributed.FileStore "torch.distributed.FileStore") will result in an exception.

Parameters
**key** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – The key to be deleted from the store.

Returns
`True` if `key` was deleted, otherwise `False`.

Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, HashStore can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key", "first_value")
>>> # This should return true
>>> store.delete_key("first_key")
>>> # This should return false
>>> store.delete_key("bad_key")
```

`torch.distributed.Store.set_timeout(self: torch._C._distributed_c10d.Store, arg0: datetime.timedelta) → None`
Sets the store's default timeout. This timeout is used during initialization and in `wait()` and `get()`.

Parameters
**timeout** (*timedelta*) – Timeout to be set in the store.
Example:
```
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set_timeout(timedelta(seconds=10))
>>> # This will throw an exception after 10 seconds
>>> store.wait(["bad_key"])
```

Groups
------

By default collectives operate on the default group (also called the world) and require all processes to enter the distributed function call. However, some workloads can benefit from more fine-grained communication. This is where distributed groups come into play. The [`new_group()`](#torch.distributed.new_group "torch.distributed.new_group") function can be used to create new groups, with arbitrary subsets of all processes. It returns an opaque group handle that can be given as a `group` argument to all collectives (collectives are distributed functions to exchange information in certain well-known programming patterns).

`torch.distributed.new_group(ranks=None, timeout=datetime.timedelta(seconds=1800), backend=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#new_group)
Creates a new distributed group.

This function requires that all processes in the main group (i.e. all processes that are part of the distributed job) enter this function, even if they are not going to be members of the group. Additionally, groups should be created in the same order in all processes.

Warning
Using multiple process groups with the `NCCL` backend concurrently is not safe and the user should perform explicit synchronization in their application to ensure only one process group is used at a time. This means collectives from one process group should have completed execution on the device (not just enqueued since CUDA execution is async) before collectives from another process group are enqueued. See [Using multiple NCCL communicators concurrently](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/communicators.html#using-multiple-nccl-communicators-concurrently) for more details.

Parameters
* **ranks** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")]) – List of ranks of group members. If `None`, will be set to all ranks. Default is `None`.
* **timeout** (*timedelta*, *optional*) – Timeout for operations executed against the process group. The default value is 30 minutes. This is only applicable for the `gloo` backend.
* **backend** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)") *or* [Backend](#torch.distributed.Backend "torch.distributed.Backend"), *optional*) – The backend to use. Depending on build-time configurations, valid values are `gloo` and `nccl`. By default uses the same backend as the global group. This field should be given as a lowercase string (e.g., `"gloo"`), which can also be accessed via [`Backend`](#torch.distributed.Backend "torch.distributed.Backend") attributes (e.g., `Backend.GLOO`).

Returns
A handle of the distributed group that can be given to collective calls.
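As a minimal sketch of this pattern, the following splits a 4-process job into two subgroups and runs a collective within each. Every rank creates both groups, in the same order, even the group it does not belong to; the 4-process `gloo` job is an assumption for illustration:

```
import torch
import torch.distributed as dist

# Every rank must create both groups, in the same order.
group_a = dist.new_group(ranks=[0, 1])
group_b = dist.new_group(ranks=[2, 3])
my_group = group_a if dist.get_rank() < 2 else group_b

t = torch.tensor([dist.get_rank()])
dist.all_reduce(t, group=my_group)  # reduces only within the subgroup
```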
Point-to-point communication
----------------------------

`torch.distributed.send(tensor, dst, group=None, tag=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#send)
Sends a tensor synchronously.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Tensor to send.
* **dst** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Destination rank.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **tag** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Tag to match the send with the remote recv.

`torch.distributed.recv(tensor, src=None, group=None, tag=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#recv)
Receives a tensor synchronously.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Tensor to fill with received data.
* **src** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Source rank. Will receive from any process if unspecified.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **tag** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Tag to match the recv with the remote send.

Returns
Sender rank, or -1 if not part of the group

[`isend()`](#torch.distributed.isend "torch.distributed.isend") and [`irecv()`](#torch.distributed.irecv "torch.distributed.irecv") return distributed request objects when used. In general, the type of this object is unspecified as these objects should never be created manually, but they are guaranteed to support two methods:

* `is_completed()` - returns True if the operation has finished
* `wait()` - will block the process until the operation is finished. `is_completed()` is guaranteed to return True once it returns.

`torch.distributed.isend(tensor, dst, group=None, tag=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#isend)
Sends a tensor asynchronously.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Tensor to send.
* **dst** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Destination rank.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **tag** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Tag to match the send with the remote recv.

Returns
A distributed request object. None, if not part of the group

`torch.distributed.irecv(tensor, src=None, group=None, tag=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#irecv)
Receives a tensor asynchronously.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Tensor to fill with received data.
* **src** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Source rank. Will receive from any process if unspecified.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **tag** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Tag to match the recv with the remote send.

Returns
A distributed request object. None, if not part of the group
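A minimal sketch of the request-object pattern described above, assuming a 2-process group is already initialized:

```
import torch
import torch.distributed as dist

t = torch.zeros(1)
if dist.get_rank() == 0:
    t += 1
    req = dist.isend(t, dst=1)   # non-blocking send
else:
    req = dist.irecv(t, src=0)   # non-blocking receive
req.wait()  # blocks until the point-to-point operation has finished
```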
Synchronous and asynchronous collective operations
--------------------------------------------------

Every collective operation function supports the following two kinds of operations, depending on the setting of the `async_op` flag passed into the collective:

**Synchronous operation** - the default mode, when `async_op` is set to `False`. When the function returns, it is guaranteed that the collective operation is performed. In the case of CUDA operations, it is not guaranteed that the CUDA operation is completed, since CUDA operations are asynchronous. For CPU collectives, any further function calls utilizing the output of the collective call will behave as expected. For CUDA collectives, function calls utilizing the output on the same CUDA stream will behave as expected. Users must take care of synchronization when running under different streams. For details on CUDA semantics such as stream synchronization, see [CUDA Semantics](https://pytorch.org/docs/stable/notes/cuda.html). See the script below for examples of the differences in these semantics for CPU and CUDA operations.

**Asynchronous operation** - when `async_op` is set to True. The collective operation function returns a distributed request object. In general, you don't need to create it manually and it is guaranteed to support two methods:

* `is_completed()` - in the case of CPU collectives, returns `True` if completed. In the case of CUDA operations, returns `True` if the operation has been successfully enqueued onto a CUDA stream and the output can be utilized on the default stream without further synchronization.
* `wait()` - in the case of CPU collectives, will block the process until the operation is completed. In the case of CUDA collectives, will block until the operation has been successfully enqueued onto a CUDA stream and the output can be utilized on the default stream without further synchronization.

**Example**

The following code can serve as a reference regarding semantics for CUDA operations when using distributed collectives. It shows the explicit need to synchronize when using collective outputs on different CUDA streams:

```
# Code runs on each rank.
dist.init_process_group("nccl", rank=rank, world_size=2)
output = torch.tensor([rank]).cuda(rank)
s = torch.cuda.Stream()
handle = dist.all_reduce(output, async_op=True)
# Wait ensures the operation is enqueued, but not necessarily complete.
handle.wait()
# Using result on non-default stream.
with torch.cuda.stream(s):
    s.wait_stream(torch.cuda.default_stream())
    output.add_(100)
if rank == 0:
    # if the explicit call to wait_stream was omitted, the output below will be
    # non-deterministically 1 or 101, depending on whether the allreduce overwrote
    # the value after the add completed.
    print(output)
```

Collective functions
--------------------

`torch.distributed.broadcast(tensor, src, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#broadcast)
Broadcasts the tensor to the whole group. `tensor` must have the same number of elements in all processes participating in the collective.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Data to be sent if `src` is the rank of the current process, and tensor to be used to save received data otherwise.
* **src** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Source rank.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group
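As a minimal sketch, assuming a process group is already initialized:

```
import torch
import torch.distributed as dist

# Rank 0 supplies the data; the other ranks supply a buffer of the
# same shape and dtype to receive it.
t = torch.arange(2) if dist.get_rank() == 0 else torch.zeros(2, dtype=torch.int64)
dist.broadcast(t, src=0)
# every rank now holds tensor([0, 1])
```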
`torch.distributed.broadcast_object_list(object_list, src=0, group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#broadcast_object_list)
Broadcasts picklable objects in `object_list` to the whole group. Similar to [`broadcast()`](#torch.distributed.broadcast "torch.distributed.broadcast"), but Python objects can be passed in. Note that all objects in `object_list` must be picklable in order to be broadcasted.

Parameters
* **object\_list** (*List[Any]*) – List of input objects to broadcast. Each object must be picklable. Only objects on the `src` rank will be broadcast, but each rank must provide lists of equal sizes.
* **src** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Source rank from which to broadcast `object_list`.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used. Default is `None`.

Returns
`None`. If rank is part of the group, `object_list` will contain the broadcasted objects from `src` rank.

Note
For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. In this case, the device used is given by `torch.cuda.current_device()` and it is the user's responsibility to ensure that this is set so that each rank has an individual GPU, via `torch.cuda.set_device()`.

Note
Note that this API differs slightly from the [`all_gather()`](#torch.distributed.all_gather "torch.distributed.all_gather") collective since it does not provide an `async_op` handle and thus will be a blocking call.

Warning
[`broadcast_object_list()`](#torch.distributed.broadcast_object_list "torch.distributed.broadcast_object_list") uses the `pickle` module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust.

Example:
```
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> if dist.get_rank() == 0:
>>>     # Assumes world_size of 3.
>>>     objects = ["foo", 12, {1: 2}] # any picklable object
>>> else:
>>>     objects = [None, None, None]
>>> dist.broadcast_object_list(objects, src=0)
>>> objects
['foo', 12, {1: 2}]
```

`torch.distributed.all_reduce(tensor, op=<ReduceOp.SUM: 0>, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#all_reduce)
Reduces the tensor data across all machines in such a way that all get the final result. After the call, `tensor` is going to be bitwise identical in all processes. Complex tensors are supported.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Input and output of the collective. The function operates in-place.
* **op** (*optional*) – One of the values from the `torch.distributed.ReduceOp` enum. Specifies an operation used for element-wise reductions.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group
#### Examples

```
>>> # All tensors below are of torch.int64 type.
>>> # We have 2 process groups, 2 ranks.
>>> tensor = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
>>> tensor
tensor([1, 2]) # Rank 0
tensor([3, 4]) # Rank 1
>>> dist.all_reduce(tensor, op=ReduceOp.SUM)
>>> tensor
tensor([4, 6]) # Rank 0
tensor([4, 6]) # Rank 1
```

```
>>> # All tensors below are of torch.cfloat type.
>>> # We have 2 process groups, 2 ranks.
>>> tensor = torch.tensor([1+1j, 2+2j], dtype=torch.cfloat) + 2 * rank * (1+1j)
>>> tensor
tensor([1.+1.j, 2.+2.j]) # Rank 0
tensor([3.+3.j, 4.+4.j]) # Rank 1
>>> dist.all_reduce(tensor, op=ReduceOp.SUM)
>>> tensor
tensor([4.+4.j, 6.+6.j]) # Rank 0
tensor([4.+4.j, 6.+6.j]) # Rank 1
```

`torch.distributed.reduce(tensor, dst, op=<ReduceOp.SUM: 0>, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#reduce)
Reduces the tensor data across all machines. Only the process with rank `dst` is going to receive the final result.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Input and output of the collective. The function operates in-place.
* **dst** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Destination rank
* **op** (*optional*) – One of the values from the `torch.distributed.ReduceOp` enum. Specifies an operation used for element-wise reductions.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group
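A minimal sketch of [`reduce()`](#torch.distributed.reduce "torch.distributed.reduce"), assuming an initialized process group; unlike `all_reduce`, only the destination rank is guaranteed to hold the final value:

```
import torch
import torch.distributed as dist

t = torch.tensor([float(dist.get_rank() + 1)])
dist.reduce(t, dst=0, op=dist.ReduceOp.SUM)
# on rank 0, t now holds the sum of (rank + 1) over all ranks;
# on other ranks the buffer contents are not meaningful
```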
`torch.distributed.all_gather(tensor_list, tensor, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#all_gather)
Gathers tensors from the whole group in a list. Complex tensors are supported.

Parameters
* **tensor\_list** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[[Tensor](tensors#torch.Tensor "torch.Tensor")]) – Output list. It should contain correctly-sized tensors to be used for output of the collective.
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Tensor to be broadcast from the current process.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group

#### Examples

```
>>> # All tensors below are of torch.int64 dtype.
>>> # We have 2 process groups, 2 ranks.
>>> tensor_list = [torch.zeros(2, dtype=torch.int64) for _ in range(2)]
>>> tensor_list
[tensor([0, 0]), tensor([0, 0])] # Rank 0 and 1
>>> tensor = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
>>> tensor
tensor([1, 2]) # Rank 0
tensor([3, 4]) # Rank 1
>>> dist.all_gather(tensor_list, tensor)
>>> tensor_list
[tensor([1, 2]), tensor([3, 4])] # Rank 0
[tensor([1, 2]), tensor([3, 4])] # Rank 1
```

```
>>> # All tensors below are of torch.cfloat dtype.
>>> # We have 2 process groups, 2 ranks.
>>> tensor_list = [torch.zeros(2, dtype=torch.cfloat) for _ in range(2)]
>>> tensor_list
[tensor([0.+0.j, 0.+0.j]), tensor([0.+0.j, 0.+0.j])] # Rank 0 and 1
>>> tensor = torch.tensor([1+1j, 2+2j], dtype=torch.cfloat) + 2 * rank * (1+1j)
>>> tensor
tensor([1.+1.j, 2.+2.j]) # Rank 0
tensor([3.+3.j, 4.+4.j]) # Rank 1
>>> dist.all_gather(tensor_list, tensor)
>>> tensor_list
[tensor([1.+1.j, 2.+2.j]), tensor([3.+3.j, 4.+4.j])] # Rank 0
[tensor([1.+1.j, 2.+2.j]), tensor([3.+3.j, 4.+4.j])] # Rank 1
```

`torch.distributed.all_gather_object(object_list, obj, group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#all_gather_object)
Gathers picklable objects from the whole group into a list. Similar to [`all_gather()`](#torch.distributed.all_gather "torch.distributed.all_gather"), but Python objects can be passed in. Note that the object must be picklable in order to be gathered.

Parameters
* **object\_list** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[*Any*]) – Output list. It should be correctly sized as the size of the group for this collective and will contain the output.
* **obj** (*Any*) – Picklable Python object to be broadcast from the current process.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used. Default is `None`.

Returns
None. If the calling rank is part of this group, the output of the collective will be populated into the input `object_list`. If the calling rank is not part of the group, the passed in `object_list` will be unmodified.

Note
Note that this API differs slightly from the [`all_gather()`](#torch.distributed.all_gather "torch.distributed.all_gather") collective since it does not provide an `async_op` handle and thus will be a blocking call.

Note
For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. In this case, the device used is given by `torch.cuda.current_device()` and it is the user's responsibility to ensure that this is set so that each rank has an individual GPU, via `torch.cuda.set_device()`.

Warning
[`all_gather_object()`](#torch.distributed.all_gather_object "torch.distributed.all_gather_object") uses the `pickle` module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust.

Example:
```
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> # Assumes world_size of 3.
>>> gather_objects = ["foo", 12, {1: 2}] # any picklable object
>>> output = [None for _ in gather_objects]
>>> dist.all_gather_object(output, gather_objects[dist.get_rank()])
>>> output
['foo', 12, {1: 2}]
```

`torch.distributed.gather(tensor, gather_list=None, dst=0, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#gather)
Gathers a list of tensors in a single process.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Input tensor.
* **gather\_list** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[[Tensor](tensors#torch.Tensor "torch.Tensor")], *optional*) – List of appropriately-sized tensors to use for gathered data (default is None, must be specified on the destination rank)
* **dst** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Destination rank (default is 0)
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group
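A minimal sketch of [`gather()`](#torch.distributed.gather "torch.distributed.gather"), assuming an initialized process group; only the destination rank allocates `gather_list`:

```
import torch
import torch.distributed as dist

t = torch.tensor([dist.get_rank()])
if dist.get_rank() == 0:
    gather_list = [torch.zeros(1, dtype=torch.int64)
                   for _ in range(dist.get_world_size())]
else:
    gather_list = None
dist.gather(t, gather_list=gather_list, dst=0)
# on rank 0: gather_list == [tensor([0]), tensor([1]), ...]
```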
`torch.distributed.gather_object(obj, object_gather_list=None, dst=0, group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#gather_object)
Gathers picklable objects from the whole group in a single process. Similar to [`gather()`](#torch.distributed.gather "torch.distributed.gather"), but Python objects can be passed in. Note that the object must be picklable in order to be gathered.

Parameters
* **obj** (*Any*) – Input object. Must be picklable.
* **object\_gather\_list** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[*Any*]) – Output list. On the `dst` rank, it should be correctly sized as the size of the group for this collective and will contain the output. Must be `None` on non-dst ranks. (default is `None`)
* **dst** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Destination rank. (default is 0)
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used. Default is `None`.

Returns
None. On the `dst` rank, `object_gather_list` will contain the output of the collective.

Note
Note that this API differs slightly from the gather collective since it does not provide an async\_op handle and thus will be a blocking call.

Note
Note that this API is not supported when using the NCCL backend.

Warning
[`gather_object()`](#torch.distributed.gather_object "torch.distributed.gather_object") uses the `pickle` module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust.

Example:
```
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> # Assumes world_size of 3.
>>> gather_objects = ["foo", 12, {1: 2}] # any picklable object
>>> output = [None for _ in gather_objects]
>>> dist.gather_object(
        gather_objects[dist.get_rank()],
        output if dist.get_rank() == 0 else None,
        dst=0
    )
>>> # On rank 0
>>> output
['foo', 12, {1: 2}]
```

`torch.distributed.scatter(tensor, scatter_list=None, src=0, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#scatter)
Scatters a list of tensors to all processes in a group. Each process will receive exactly one tensor and store its data in the `tensor` argument.

Parameters
* **tensor** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Output tensor.
* **scatter\_list** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[[Tensor](tensors#torch.Tensor "torch.Tensor")]) – List of tensors to scatter (default is None, must be specified on the source rank)
* **src** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Source rank (default is 0)
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group
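A minimal sketch of [`scatter()`](#torch.distributed.scatter "torch.distributed.scatter"), assuming an initialized process group; only the source rank provides `scatter_list`:

```
import torch
import torch.distributed as dist

out = torch.zeros(1, dtype=torch.int64)
if dist.get_rank() == 0:
    scatter_list = [torch.tensor([i]) for i in range(dist.get_world_size())]
else:
    scatter_list = None
dist.scatter(out, scatter_list=scatter_list, src=0)
# rank i now holds tensor([i]) in `out`
```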
`torch.distributed.scatter_object_list(scatter_object_output_list, scatter_object_input_list, src=0, group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#scatter_object_list)
Scatters picklable objects in `scatter_object_input_list` to the whole group. Similar to [`scatter()`](#torch.distributed.scatter "torch.distributed.scatter"), but Python objects can be passed in. On each rank, the scattered object will be stored as the first element of `scatter_object_output_list`. Note that all objects in `scatter_object_input_list` must be picklable in order to be scattered.

Parameters
* **scatter\_object\_output\_list** (*List[Any]*) – Non-empty list whose first element will store the object scattered to this rank.
* **scatter\_object\_input\_list** (*List[Any]*) – List of input objects to scatter. Each object must be picklable. Only objects on the `src` rank will be scattered, and the argument can be `None` for non-src ranks.
* **src** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Source rank from which to scatter `scatter_object_input_list`.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used. Default is `None`.

Returns
`None`. If rank is part of the group, `scatter_object_output_list` will have its first element set to the scattered object for this rank.

Note
Note that this API differs slightly from the scatter collective since it does not provide an `async_op` handle and thus will be a blocking call.

Warning
[`scatter_object_list()`](#torch.distributed.scatter_object_list "torch.distributed.scatter_object_list") uses the `pickle` module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust.

Example:
```
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> if dist.get_rank() == 0:
>>>     # Assumes world_size of 3.
>>>     objects = ["foo", 12, {1: 2}] # any picklable object
>>> else:
>>>     # Can be any list on non-src ranks, elements are not used.
>>>     objects = [None, None, None]
>>> output_list = [None]
>>> dist.scatter_object_list(output_list, objects, src=0)
>>> # Rank i gets objects[i]. For example, on rank 2:
>>> output_list
[{1: 2}]
```

`torch.distributed.reduce_scatter(output, input_list, op=<ReduceOp.SUM: 0>, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#reduce_scatter)
Reduces, then scatters a list of tensors to all processes in a group.

Parameters
* **output** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Output tensor.
* **input\_list** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[[Tensor](tensors#torch.Tensor "torch.Tensor")]) – List of tensors to reduce and scatter.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op.

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group.
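A minimal sketch of [`reduce_scatter()`](#torch.distributed.reduce_scatter "torch.distributed.reduce_scatter"), assuming an initialized process group; every rank supplies `world_size` input tensors and receives one reduced chunk:

```
import torch
import torch.distributed as dist

ws = dist.get_world_size()
inputs = [torch.tensor([float(dist.get_rank() + 1)]) for _ in range(ws)]
out = torch.zeros(1)
dist.reduce_scatter(out, inputs, op=dist.ReduceOp.SUM)
# out on rank i holds the sum over all ranks of their inputs[i]
```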
`torch.distributed.all_to_all(output_tensor_list, input_tensor_list, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#all_to_all)
Each process scatters a list of input tensors to all processes in a group and returns a gathered list of tensors in the output list.

Parameters
* **output\_tensor\_list** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[[Tensor](tensors#torch.Tensor "torch.Tensor")]) – List of tensors to be gathered, one per rank.
* **input\_tensor\_list** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")[[Tensor](tensors#torch.Tensor "torch.Tensor")]) – List of tensors to scatter, one per rank.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op.

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group.

Warning
`all_to_all` is experimental and subject to change.

#### Examples

```
>>> input = torch.arange(4) + rank * 4
>>> input = list(input.chunk(4))
>>> input
[tensor([0]), tensor([1]), tensor([2]), tensor([3])] # Rank 0
[tensor([4]), tensor([5]), tensor([6]), tensor([7])] # Rank 1
[tensor([8]), tensor([9]), tensor([10]), tensor([11])] # Rank 2
[tensor([12]), tensor([13]), tensor([14]), tensor([15])] # Rank 3
>>> output = list(torch.empty([4], dtype=torch.int64).chunk(4))
>>> dist.all_to_all(output, input)
>>> output
[tensor([0]), tensor([4]), tensor([8]), tensor([12])] # Rank 0
[tensor([1]), tensor([5]), tensor([9]), tensor([13])] # Rank 1
[tensor([2]), tensor([6]), tensor([10]), tensor([14])] # Rank 2
[tensor([3]), tensor([7]), tensor([11]), tensor([15])] # Rank 3
```

```
>>> # Essentially, it is similar to the following operation:
>>> scatter_list = input
>>> gather_list = output
>>> for i in range(world_size):
>>>     dist.scatter(gather_list[i], scatter_list if i == rank else [], src=i)
```

```
>>> input
tensor([0, 1, 2, 3, 4, 5]) # Rank 0
tensor([10, 11, 12, 13, 14, 15, 16, 17, 18]) # Rank 1
tensor([20, 21, 22, 23, 24]) # Rank 2
tensor([30, 31, 32, 33, 34, 35, 36]) # Rank 3
>>> input_splits
[2, 2, 1, 1] # Rank 0
[3, 2, 2, 2] # Rank 1
[2, 1, 1, 1] # Rank 2
[2, 2, 2, 1] # Rank 3
>>> output_splits
[2, 3, 2, 2] # Rank 0
[2, 2, 1, 2] # Rank 1
[1, 2, 1, 2] # Rank 2
[1, 2, 1, 1] # Rank 3
>>> input = list(input.split(input_splits))
>>> input
[tensor([0, 1]), tensor([2, 3]), tensor([4]), tensor([5])] # Rank 0
[tensor([10, 11, 12]), tensor([13, 14]), tensor([15, 16]), tensor([17, 18])] # Rank 1
[tensor([20, 21]), tensor([22]), tensor([23]), tensor([24])] # Rank 2
[tensor([30, 31]), tensor([32, 33]), tensor([34, 35]), tensor([36])] # Rank 3
>>> output = ...
>>> dist.all_to_all(output, input)
>>> output
[tensor([0, 1]), tensor([10, 11, 12]), tensor([20, 21]), tensor([30, 31])] # Rank 0
[tensor([2, 3]), tensor([13, 14]), tensor([22]), tensor([32, 33])] # Rank 1
[tensor([4]), tensor([15, 16]), tensor([23]), tensor([34, 35])] # Rank 2
[tensor([5]), tensor([17, 18]), tensor([24]), tensor([36])] # Rank 3
```

`torch.distributed.barrier(group=None, async_op=False, device_ids=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#barrier)
Synchronizes all processes. This collective blocks processes until the whole group enters this function (if `async_op` is `False`), or until `wait()` is called on the async work handle.

Parameters
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op
* **device\_ids** ([[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")], *optional*) – List of device/GPU ids. Valid only for the NCCL backend.

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group

`class torch.distributed.ReduceOp`
An enum-like class for available reduction operations: `SUM`, `PRODUCT`, `MIN`, `MAX`, `BAND`, `BOR`, and `BXOR`. Note that `BAND`, `BOR`, and `BXOR` reductions are not available when using the `NCCL` backend. Additionally, `MAX`, `MIN` and `PRODUCT` are not supported for complex tensors.

The values of this class can be accessed as attributes, e.g., `ReduceOp.SUM`. They are used in specifying strategies for reduction collectives, e.g., [`reduce()`](#torch.distributed.reduce "torch.distributed.reduce"), [`all_reduce_multigpu()`](#torch.distributed.all_reduce_multigpu "torch.distributed.all_reduce_multigpu"), etc.

Members: SUM, PRODUCT, MIN, MAX, BAND, BOR, BXOR

`class torch.distributed.reduce_op`
Deprecated enum-like class for reduction operations: `SUM`, `PRODUCT`, `MIN`, and `MAX`. Use [`ReduceOp`](#torch.distributed.ReduceOp "torch.distributed.ReduceOp") instead.

Autograd-enabled communication primitives
-----------------------------------------

If you want to use collective communication functions supporting autograd, you can find an implementation of those in the `torch.distributed.nn.*` module. Functions here are synchronous and will be inserted in the autograd graph, so you need to ensure that all the processes that participated in the collective operation also run the backward pass; otherwise the backward communication cannot happen and may cause a deadlock. Note that currently the only backend where all of these functions are guaranteed to work is `gloo`. The available functions are `torch.distributed.nn.broadcast`, `torch.distributed.nn.gather`, `torch.distributed.nn.scatter`, `torch.distributed.nn.reduce`, `torch.distributed.nn.all_gather`, `torch.distributed.nn.all_to_all`, and `torch.distributed.nn.all_reduce`.
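A minimal sketch of the autograd-aware variant, assuming a `gloo` process group is already initialized; the exact call signature here is an assumption based on the function list above:

```
import torch
import torch.distributed.nn as dist_nn

x = torch.ones(2, requires_grad=True)
y = dist_nn.all_reduce(x)   # differentiable sum across ranks (assumed signature)
y.sum().backward()          # every participating rank must run backward
```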
Multi-GPU collective functions
------------------------------

If you have more than one GPU on each node, then when using the NCCL or Gloo backend, [`broadcast_multigpu()`](#torch.distributed.broadcast_multigpu "torch.distributed.broadcast_multigpu"), [`all_reduce_multigpu()`](#torch.distributed.all_reduce_multigpu "torch.distributed.all_reduce_multigpu"), [`reduce_multigpu()`](#torch.distributed.reduce_multigpu "torch.distributed.reduce_multigpu"), [`all_gather_multigpu()`](#torch.distributed.all_gather_multigpu "torch.distributed.all_gather_multigpu"), and [`reduce_scatter_multigpu()`](#torch.distributed.reduce_scatter_multigpu "torch.distributed.reduce_scatter_multigpu") support distributed collective operations among multiple GPUs within each node. These functions can potentially improve the overall distributed training performance and are easily used by passing a list of tensors. Each Tensor in the passed tensor list needs to be on a separate GPU device of the host where the function is called. Note that the length of the tensor list needs to be identical among all the distributed processes. Also note that currently the multi-GPU collective functions are only supported by the NCCL backend.

For example, suppose the system we use for distributed training has 2 nodes, each of which has 8 GPUs. On each of the 16 GPUs, there is a tensor that we would like to all-reduce. The following code can serve as a reference:

Code running on Node 0

```
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl",
                        init_method="file:///distributed_test",
                        world_size=2,
                        rank=0)
tensor_list = []
for dev_idx in range(torch.cuda.device_count()):
    tensor_list.append(torch.FloatTensor([1]).cuda(dev_idx))

dist.all_reduce_multigpu(tensor_list)
```

Code running on Node 1

```
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl",
                        init_method="file:///distributed_test",
                        world_size=2,
                        rank=1)
tensor_list = []
for dev_idx in range(torch.cuda.device_count()):
    tensor_list.append(torch.FloatTensor([1]).cuda(dev_idx))

dist.all_reduce_multigpu(tensor_list)
```

After the call, all 16 tensors on the two nodes will have the all-reduced value of 16.

`torch.distributed.broadcast_multigpu(tensor_list, src, group=None, async_op=False, src_tensor=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#broadcast_multigpu)
Broadcasts the tensor to the whole group with multiple GPU tensors per node. `tensor` must have the same number of elements in all the GPUs from all processes participating in the collective. Each tensor in the list must be on a different GPU. Only the nccl and gloo backends are currently supported; tensors should only be GPU tensors.

Parameters
* **tensor\_list** (*List[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]*) – Tensors that participate in the collective operation. If `src` is the rank, then the specified `src_tensor` element of `tensor_list` (`tensor_list[src_tensor]`) will be broadcast to all other tensors (on different GPUs) in the src process and all tensors in `tensor_list` of other non-src processes. You also need to make sure that `len(tensor_list)` is the same for all the distributed processes calling this function.
* **src** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Source rank.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op
* **src\_tensor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Source tensor rank within `tensor_list`

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group

`torch.distributed.all_reduce_multigpu(tensor_list, op=<ReduceOp.SUM: 0>, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#all_reduce_multigpu)
Reduces the tensor data across all machines in such a way that all get the final result. This function reduces a number of tensors on every node, while each tensor resides on a different GPU. Therefore, the input tensors in the tensor list need to be GPU tensors. Also, each tensor in the tensor list needs to reside on a different GPU. After the call, every tensor in `tensor_list` is going to be bitwise identical in all processes. Complex tensors are supported. Only the nccl and gloo backends are currently supported; tensors should only be GPU tensors.

Parameters
* **tensor\_list** (*List[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]*) – List of input and output tensors of the collective. The function operates in-place and requires each tensor to be a GPU tensor on a different GPU. You also need to make sure that `len(tensor_list)` is the same for all the distributed processes calling this function.
* **op** (*optional*) – One of the values from the `torch.distributed.ReduceOp` enum. Specifies an operation used for element-wise reductions.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group

`torch.distributed.reduce_multigpu(tensor_list, dst, op=<ReduceOp.SUM: 0>, group=None, async_op=False, dst_tensor=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#reduce_multigpu)
Reduces the tensor data on multiple GPUs across all machines. Each tensor in `tensor_list` should reside on a separate GPU. Only the GPU of `tensor_list[dst_tensor]` on the process with rank `dst` is going to receive the final result. Only the nccl backend is currently supported; tensors should only be GPU tensors.

Parameters
* **tensor\_list** (*List[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]*) – Input and output GPU tensors of the collective. The function operates in-place. You also need to make sure that `len(tensor_list)` is the same for all the distributed processes calling this function.
* **dst** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Destination rank
* **op** (*optional*) – One of the values from the `torch.distributed.ReduceOp` enum. Specifies an operation used for element-wise reductions.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op
* **dst\_tensor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Destination tensor rank within `tensor_list`

Returns
Async work handle, if async\_op is set to True. None, otherwise

`torch.distributed.all_gather_multigpu(output_tensor_lists, input_tensor_list, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#all_gather_multigpu)
Gathers tensors from the whole group in a list. Each tensor in `input_tensor_list` should reside on a separate GPU. Only the nccl backend is currently supported; tensors should only be GPU tensors. Complex tensors are supported.

Parameters
* **output\_tensor\_lists** (*List[List[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]]*) – Output lists. It should contain correctly-sized tensors on each GPU to be used for output of the collective, e.g. `output_tensor_lists[i]` contains the all\_gather result that resides on the GPU of `input_tensor_list[i]`. Note that each element of `output_tensor_lists` has the size of `world_size * len(input_tensor_list)`, since the function all-gathers the result from every single GPU in the group. To interpret each element of `output_tensor_lists[i]`, note that `input_tensor_list[j]` of rank k will appear in `output_tensor_lists[i][k * len(input_tensor_list) + j]`. Also note that `len(output_tensor_lists)`, and the size of each element in `output_tensor_lists` (each element is a list, therefore `len(output_tensor_lists[i])`) need to be the same for all the distributed processes calling this function.
* **input\_tensor\_list** (*List[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]*) – List of tensors (on different GPUs) to be broadcast from the current process. Note that `len(input_tensor_list)` needs to be the same for all the distributed processes calling this function.
* **group** (*ProcessGroup*, *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Whether this op should be an async op

Returns
Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group

`torch.distributed.reduce_scatter_multigpu(output_tensor_list, input_tensor_lists, op=<ReduceOp.SUM: 0>, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#reduce_scatter_multigpu)
Reduces, then scatters a list of tensors to the whole group. Only the nccl backend is currently supported. Each tensor in `output_tensor_list` should reside on a separate GPU, as should each list of tensors in `input_tensor_lists`.

Parameters
* **output\_tensor\_list** (*List[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]*) – Output tensors (on different GPUs) to receive the result of the operation. Note that `len(output_tensor_list)` needs to be the same for all the distributed processes calling this function.
* **input\_tensor\_lists** (*List[List[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]]*) – Input lists. It should contain correctly-sized tensors on each GPU to be used for input of the collective, e.g. `input_tensor_lists[i]` contains the reduce\_scatter input that resides on the GPU of `output_tensor_list[i]`.
`torch.distributed.reduce_scatter_multigpu(output_tensor_list, input_tensor_lists, op=<ReduceOp.SUM: 0>, group=None, async_op=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/distributed/distributed_c10d.html#reduce_scatter_multigpu)

Reduce and scatter a list of tensors to the whole group. Only the NCCL backend is currently supported.

Each tensor in `output_tensor_list` should reside on a separate GPU, as should each list of tensors in `input_tensor_lists`.

Parameters

* **output\_tensor\_list** (*List**[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]*) – Output tensors (on different GPUs) to receive the result of the operation. Note that `len(output_tensor_list)` needs to be the same for all the distributed processes calling this function.
* **input\_tensor\_lists** (*List**[**List**[*[Tensor](tensors#torch.Tensor "torch.Tensor")*]**]*) – Input lists. It should contain correctly-sized tensors on each GPU to be used for input of the collective, e.g. `input_tensor_lists[i]` contains the reduce\_scatter input that resides on the GPU of `output_tensor_list[i]`. Note that each element of `input_tensor_lists` has the size of `world_size * len(output_tensor_list)`, since the function scatters the result from every single GPU in the group. To interpret each element of `input_tensor_lists[i]`, note that `output_tensor_list[j]` of rank k receives the reduce-scattered result from `input_tensor_lists[i][k * world_size + j]`. Also note that `len(input_tensor_lists)`, and the size of each element in `input_tensor_lists` (each element is a list, therefore `len(input_tensor_lists[i])`) need to be the same for all the distributed processes calling this function.
* **group** (*ProcessGroup**,* *optional*) – The process group to work on. If None, the default process group will be used.
* **async\_op** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Whether this op should be an async op.

Returns

Async work handle, if async\_op is set to True. None, if not async\_op or if not part of the group.

Third-party backends
--------------------

Besides the GLOO/MPI/NCCL backends, PyTorch distributed supports third-party backends through a runtime registration mechanism. For references on how to develop a third-party backend through C++ Extension, please refer to [Tutorials - Custom C++ and CUDA Extensions](https://pytorch.org/tutorials/advanced/cpp_extension.html) and `test/cpp_extensions/cpp_c10d_extension.cpp`. The capabilities of third-party backends are determined by their own implementations.

The new backend derives from `c10d.ProcessGroup` and registers the backend name and the instantiating interface through `torch.distributed.Backend.register_backend()` when imported. When manually importing this backend and invoking [`torch.distributed.init_process_group()`](#torch.distributed.init_process_group "torch.distributed.init_process_group") with the corresponding backend name, the `torch.distributed` package runs on the new backend.

Warning

Support for third-party backends is experimental and subject to change.

Launch utility
--------------

The `torch.distributed` package also provides a launch utility in `torch.distributed.launch`. This helper utility can be used to launch multiple processes per node for distributed training.

`torch.distributed.launch` is a module that spawns multiple distributed training processes on each of the training nodes.

The utility can be used for single-node distributed training, in which one or more processes per node will be spawned. The utility can be used for either CPU training or GPU training. If the utility is used for GPU training, each distributed process will operate on a single GPU. This can achieve well-improved single-node training performance. It can also be used in multi-node distributed training, by spawning multiple processes on each node, for well-improved multi-node distributed training performance as well. This is especially beneficial for systems with multiple InfiniBand interfaces that have direct-GPU support, since all of them can be utilized for aggregated communication bandwidth.

In both cases of single-node distributed training or multi-node distributed training, this utility will launch the given number of processes per node (`--nproc_per_node`). If used for GPU training, this number needs to be less than or equal to the number of GPUs on the current system (`nproc_per_node`), and each process will operate on a single GPU from *GPU 0 to GPU (nproc\_per\_node - 1)*.

**How to use this module:**
1. Single-Node multi-process distributed training

```
>>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
           YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other
           arguments of your training script)
```

2. Multi-Node multi-process distributed training (e.g. two nodes)

Node 1: *(IP: 192.168.1.1, and has a free port: 1234)*

```
>>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
           --nnodes=2 --node_rank=0 --master_addr="192.168.1.1"
           --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
           and all other arguments of your training script)
```

Node 2:

```
>>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
           --nnodes=2 --node_rank=1 --master_addr="192.168.1.1"
           --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
           and all other arguments of your training script)
```

3. To look up what optional arguments this module offers:

```
>>> python -m torch.distributed.launch --help
```

**Important Notices:**

1. This utility and multi-process distributed (single-node or multi-node) GPU training currently only achieves the best performance using the NCCL distributed backend. Thus the NCCL backend is the recommended backend to use for GPU training.

2. In your training program, you must parse the command-line argument `--local_rank=LOCAL_PROCESS_RANK`, which will be provided by this module. If your training program uses GPUs, you should ensure that your code only runs on the GPU device of LOCAL\_PROCESS\_RANK. This can be done by parsing the local\_rank argument

```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument("--local_rank", type=int)
>>> args = parser.parse_args()
```

and setting your device to the local rank, using either

```
>>> torch.cuda.set_device(args.local_rank)  # before your code runs
```

or

```
>>> with torch.cuda.device(args.local_rank):
>>>     # your code to run
```

3. In your training program, you are supposed to call the following function at the beginning to start the distributed backend. You need to make sure that the init\_method uses `env://`, which is the only `init_method` supported by this module.

```
torch.distributed.init_process_group(backend='YOUR BACKEND',
                                     init_method='env://')
```

4. In your training program, you can either use regular distributed functions or use the [`torch.nn.parallel.DistributedDataParallel()`](generated/torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") module. If your training program uses GPUs for training and you would like to use the [`torch.nn.parallel.DistributedDataParallel()`](generated/torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") module, here is how to configure it.

```
model = torch.nn.parallel.DistributedDataParallel(model,
                                                  device_ids=[args.local_rank],
                                                  output_device=args.local_rank)
```

Please ensure that the `device_ids` argument is set to be the only GPU device id that your code will be operating on. This is generally the local rank of the process. In other words, `device_ids` needs to be `[args.local_rank]`, and `output_device` needs to be `args.local_rank`, in order to use this utility.

5. Another way to pass `local_rank` to the subprocesses is via the environment variable `LOCAL_RANK`. This behavior is enabled when you launch the script with `--use_env=True`.
You must adjust the subprocess example above to replace `args.local_rank` with `os.environ['LOCAL_RANK']`; the launcher will not pass `--local_rank` when you specify this flag.

Warning

`local_rank` is NOT globally unique: it is only unique per process on a machine. Thus, don't use it to decide if you should, e.g., write to a networked filesystem. See <https://github.com/pytorch/pytorch/issues/12042> for an example of how things can go wrong if you don't do this correctly.

Spawn utility
-------------

The [Multiprocessing package - torch.multiprocessing](multiprocessing#multiprocessing-doc) package also provides a `spawn` function, [`torch.multiprocessing.spawn()`](multiprocessing#torch.multiprocessing.spawn "torch.multiprocessing.spawn"). This helper function can be used to spawn multiple processes: you pass in the function you want to run, and it spawns N processes to run it. This can be used for multiprocess distributed training as well.

For a reference on how to use it, see the [PyTorch example - ImageNet implementation](https://github.com/pytorch/examples/tree/master/imagenet). Note that this function requires Python 3.4 or higher.
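For a minimal sketch of the spawn-based pattern (the `train` body, the Gloo backend choice, and the address/port values are illustrative assumptions; the `env://`-style initialization mirrors what the launcher section above requires):

```
import os
import torch.distributed as dist
import torch.multiprocessing as mp

def train(rank, world_size):
    # spawn() passes the process index as the first argument.
    os.environ["MASTER_ADDR"] = "127.0.0.1"  # illustrative address/port
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    # ... training code using the regular collective ops ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    # Start `world_size` processes running `train` and wait for them to finish.
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)
```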
pytorch torch.utils.data

torch.utils.data
================

At the heart of PyTorch's data loading utility is the [`torch.utils.data.DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") class. It represents a Python iterable over a dataset, with support for

* [map-style and iterable-style datasets](#dataset-types),
* [customizing data loading order](#data-loading-order-and-sampler),
* [automatic batching](#loading-batched-and-non-batched-data),
* [single- and multi-process data loading](#single-and-multi-process-data-loading),
* [automatic memory pinning](#memory-pinning).

These options are configured by the constructor arguments of a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader"), which has signature:

```
DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
           batch_sampler=None, num_workers=0, collate_fn=None,
           pin_memory=False, drop_last=False, timeout=0,
           worker_init_fn=None, *, prefetch_factor=2,
           persistent_workers=False)
```

The sections below describe in detail the effects and usage of these options.

Dataset Types
-------------

The most important argument of the [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") constructor is `dataset`, which indicates a dataset object to load data from. PyTorch supports two different types of datasets:

* [map-style datasets](#map-style-datasets),
* [iterable-style datasets](#iterable-style-datasets).

### Map-style datasets

A map-style dataset is one that implements the `__getitem__()` and `__len__()` protocols, and represents a map from (possibly non-integral) indices/keys to data samples.

For example, such a dataset, when accessed with `dataset[idx]`, could read the `idx`-th image and its corresponding label from a folder on the disk.

See [`Dataset`](#torch.utils.data.Dataset "torch.utils.data.Dataset") for more details.

### Iterable-style datasets

An iterable-style dataset is an instance of a subclass of [`IterableDataset`](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset") that implements the `__iter__()` protocol, and represents an iterable over data samples. This type of dataset is particularly suitable for cases where random reads are expensive or even improbable, and where the batch size depends on the fetched data.

For example, such a dataset, when `iter(dataset)` is called, could return a stream of data read from a database, a remote server, or even logs generated in real time.

See [`IterableDataset`](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset") for more details.

Note

When using an [`IterableDataset`](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset") with [multi-process data loading](#multi-process-data-loading), the same dataset object is replicated on each worker process, and thus the replicas must be configured differently to avoid duplicated data. See the [`IterableDataset`](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset") documentation for how to achieve this.

Data Loading Order and Sampler
------------------------------

For [iterable-style datasets](#iterable-style-datasets), data loading order is entirely controlled by the user-defined iterable. This allows easier implementations of chunk-reading and dynamic batch sizes (e.g., by yielding a batched sample each time). The rest of this section concerns the case with [map-style datasets](#map-style-datasets).
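To make the map-style protocol above concrete, here is a minimal sketch (the `SquaresDataset` name and its contents are invented for illustration):

```
import torch
from torch.utils.data import Dataset

class SquaresDataset(Dataset):
    """Toy map-style dataset mapping index i to the pair (i, i**2)."""

    def __init__(self, n):
        self.n = n

    def __getitem__(self, idx):
        return torch.tensor(idx), torch.tensor(idx ** 2)

    def __len__(self):
        return self.n

ds = SquaresDataset(5)
print(ds[2])    # (tensor(2), tensor(4))
print(len(ds))  # 5
```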
[`torch.utils.data.Sampler`](#torch.utils.data.Sampler "torch.utils.data.Sampler") classes are used to specify the sequence of indices/keys used in data loading. They represent iterable objects over the indices to datasets. E.g., in the common case with stochastic gradient descent (SGD), a [`Sampler`](#torch.utils.data.Sampler "torch.utils.data.Sampler") could randomly permute a list of indices and yield each one at a time, or yield a small number of them for mini-batch SGD.

A sequential or shuffled sampler will be automatically constructed based on the `shuffle` argument to a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader"). Alternatively, users may use the `sampler` argument to specify a custom [`Sampler`](#torch.utils.data.Sampler "torch.utils.data.Sampler") object that yields the next index/key to fetch each time.

A custom [`Sampler`](#torch.utils.data.Sampler "torch.utils.data.Sampler") that yields a list of batch indices at a time can be passed as the `batch_sampler` argument. Automatic batching can also be enabled via the `batch_size` and `drop_last` arguments. See [the next section](#loading-batched-and-non-batched-data) for more details on this.

Note

Neither `sampler` nor `batch_sampler` is compatible with iterable-style datasets, since such datasets have no notion of a key or an index.

Loading Batched and Non-Batched Data
------------------------------------

[`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") supports automatically collating individual fetched data samples into batches via the arguments `batch_size`, `drop_last`, and `batch_sampler`.

### Automatic batching (default)

This is the most common case, and corresponds to fetching a minibatch of data and collating them into batched samples, i.e., samples containing Tensors with one dimension being the batch dimension (usually the first).

When `batch_size` (default `1`) is not `None`, the data loader yields batched samples instead of individual samples. The `batch_size` and `drop_last` arguments are used to specify how the data loader obtains batches of dataset keys. For map-style datasets, users can alternatively specify `batch_sampler`, which yields a list of keys at a time.

Note

The `batch_size` and `drop_last` arguments are essentially used to construct a `batch_sampler` from `sampler`. For map-style datasets, the `sampler` is either provided by the user or constructed based on the `shuffle` argument. For iterable-style datasets, the `sampler` is a dummy infinite one. See [this section](#data-loading-order-and-sampler) for more details on samplers.

Note

When fetching from [iterable-style datasets](#iterable-style-datasets) with [multi-processing](#multi-process-data-loading), the `drop_last` argument drops the last non-full batch of each worker's dataset replica.

After fetching a list of samples using the indices from the sampler, the function passed as the `collate_fn` argument is used to collate lists of samples into batches.

In this case, loading from a map-style dataset is roughly equivalent to:

```
for indices in batch_sampler:
    yield collate_fn([dataset[i] for i in indices])
```

and loading from an iterable-style dataset is roughly equivalent to:

```
dataset_iter = iter(dataset)
for indices in batch_sampler:
    yield collate_fn([next(dataset_iter) for _ in indices])
```

A custom `collate_fn` can be used to customize collation, e.g., padding sequential data to the max length of a batch. See [this section](#dataloader-collate-fn) for more about `collate_fn`.
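For instance, a padding `collate_fn` for variable-length sequences might look like the following sketch (the `(sequence, label)` sample layout and the `pad_collate` name are assumptions made for illustration):

```
import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    # batch is a list of (sequence, label) pairs, where each sequence is a
    # 1-D tensor of varying length and each label is a Python int.
    seqs, labels = zip(*batch)
    lengths = torch.tensor([len(s) for s in seqs])
    # Pad every sequence in the batch to the length of the longest one.
    padded = pad_sequence(list(seqs), batch_first=True, padding_value=0.0)
    return padded, lengths, torch.tensor(labels)

# loader = DataLoader(dataset, batch_size=8, collate_fn=pad_collate)
```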
### Disable automatic batching

In certain cases, users may want to handle batching manually in dataset code, or simply load individual samples. For example, it could be cheaper to directly load batched data (e.g., bulk reads from a database or reading continuous chunks of memory), or the batch size may be data dependent, or the program may be designed to work on individual samples. Under these scenarios, it's likely better to not use automatic batching (where `collate_fn` is used to collate the samples), but let the data loader directly return each member of the `dataset` object.

When both `batch_size` and `batch_sampler` are `None` (the default value for `batch_sampler` is already `None`), automatic batching is disabled. Each sample obtained from the `dataset` is processed with the function passed as the `collate_fn` argument.

**When automatic batching is disabled**, the default `collate_fn` simply converts NumPy arrays into PyTorch Tensors, and keeps everything else untouched.

In this case, loading from a map-style dataset is roughly equivalent to:

```
for index in sampler:
    yield collate_fn(dataset[index])
```

and loading from an iterable-style dataset is roughly equivalent to:

```
for data in iter(dataset):
    yield collate_fn(data)
```

See [this section](#dataloader-collate-fn) for more about `collate_fn`.

### Working with `collate_fn`

The use of `collate_fn` is slightly different when automatic batching is enabled or disabled.

**When automatic batching is disabled**, `collate_fn` is called with each individual data sample, and the output is yielded from the data loader iterator. In this case, the default `collate_fn` simply converts NumPy arrays into PyTorch tensors.

**When automatic batching is enabled**, `collate_fn` is called with a list of data samples each time. It is expected to collate the input samples into a batch for yielding from the data loader iterator. The rest of this section describes the behavior of the default `collate_fn` in this case.

For instance, if each data sample consists of a 3-channel image and an integral class label, i.e., each element of the dataset returns a tuple `(image, class_index)`, the default `collate_fn` collates a list of such tuples into a single tuple of a batched image tensor and a batched class label Tensor. In particular, the default `collate_fn` has the following properties:

* It always prepends a new dimension as the batch dimension.
* It automatically converts NumPy arrays and Python numerical values into PyTorch Tensors.
* It preserves the data structure, e.g., if each sample is a dictionary, it outputs a dictionary with the same set of keys but batched Tensors as values (or lists if the values cannot be converted into Tensors). The same holds for `list` s, `tuple` s, `namedtuple` s, etc.

Users may use a customized `collate_fn` to achieve custom batching, e.g., collating along a dimension other than the first, padding sequences of various lengths (as in the sketch above), or adding support for custom data types.

Single- and Multi-process Data Loading
--------------------------------------

A [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") uses single-process data loading by default.

Within a Python process, the [Global Interpreter Lock (GIL)](https://wiki.python.org/moin/GlobalInterpreterLock) prevents fully parallelizing Python code across threads. To avoid blocking computation code with data loading, PyTorch provides an easy switch to perform multi-process data loading by simply setting the argument `num_workers` to a positive integer.
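For example (a sketch; `dataset` stands in for any of the dataset objects described above):

```
from torch.utils.data import DataLoader

# Batches are fetched and collated in four worker processes, so data
# loading overlaps with computation in the main process.
loader = DataLoader(dataset, batch_size=32, num_workers=4)
```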
### Single-process data loading (default)

In this mode, data fetching is done in the same process in which a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") is initialized. Therefore, data loading may block computing. However, this mode may be preferred when the resources used for sharing data among processes (e.g., shared memory, file descriptors) are limited, or when the entire dataset is small and can be loaded entirely in memory. Additionally, single-process loading often shows more readable error traces and thus is useful for debugging.

### Multi-process data loading

Setting the argument `num_workers` to a positive integer will turn on multi-process data loading with the specified number of loader worker processes.

In this mode, each time an iterator of a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") is created (e.g., when you call `enumerate(dataloader)`), `num_workers` worker processes are created. At this point, the `dataset`, `collate_fn`, and `worker_init_fn` are passed to each worker, where they are used to initialize and fetch data. This means that dataset access, together with its internal IO and transforms (including `collate_fn`), runs in the worker process.

[`torch.utils.data.get_worker_info()`](#torch.utils.data.get_worker_info "torch.utils.data.get_worker_info") returns various useful information in a worker process (including the worker id, dataset replica, initial seed, etc.), and returns `None` in the main process. Users may use this function in dataset code and/or `worker_init_fn` to individually configure each dataset replica, and to determine whether the code is running in a worker process. For example, this can be particularly helpful in sharding the dataset.

For map-style datasets, the main process generates the indices using `sampler` and sends them to the workers. So any shuffle randomization is done in the main process, which guides loading by assigning the indices to load.

For iterable-style datasets, since each worker process gets a replica of the `dataset` object, naive multi-process loading will often result in duplicated data. Using [`torch.utils.data.get_worker_info()`](#torch.utils.data.get_worker_info "torch.utils.data.get_worker_info") and/or `worker_init_fn`, users may configure each replica independently. (See the [`IterableDataset`](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset") documentation for how to achieve this.) For similar reasons, in multi-process loading, the `drop_last` argument drops the last non-full batch of each worker's iterable-style dataset replica.

Workers are shut down once the end of the iteration is reached, or when the iterator is garbage collected.

Warning

It is generally not recommended to return CUDA tensors in multi-process loading because of many subtleties in using CUDA and sharing CUDA tensors in multiprocessing (see [CUDA in multiprocessing](https://pytorch.org/docs/1.8.0/notes/multiprocessing.html#multiprocessing-cuda-note)). Instead, we recommend using [automatic memory pinning](#memory-pinning) (i.e., setting `pin_memory=True`), which enables fast data transfer to CUDA-enabled GPUs.

#### Platform-specific behaviors

Since workers rely on Python [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing "(in Python v3.9)"), worker launch behavior is different on Windows compared to Unix.
* On Unix, `fork()` is the default [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing "(in Python v3.9)") start method. Using `fork()`, child workers typically can access the `dataset` and Python argument functions directly through the cloned address space.
* On Windows, `spawn()` is the default [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing "(in Python v3.9)") start method. Using `spawn()`, another interpreter is launched which runs your main script, followed by the internal worker function that receives the `dataset`, `collate_fn` and other arguments through [`pickle`](https://docs.python.org/3/library/pickle.html#module-pickle "(in Python v3.9)") serialization.

This separate serialization means that you should take two steps to ensure you are compatible with Windows while using multi-process data loading:

* Wrap most of your main script's code within an `if __name__ == '__main__':` block, to make sure it doesn't run again (most likely generating errors) when each worker process is launched. You can place your dataset and [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") instance creation logic here, as it doesn't need to be re-executed in workers.
* Make sure that any custom `collate_fn`, `worker_init_fn` or `dataset` code is declared as a top-level definition, outside of the `__main__` check. This ensures that they are available in worker processes. (This is needed since functions are pickled as references only, not as bytecode.)

#### Randomness in multi-process data loading

By default, each worker will have its PyTorch seed set to `base_seed + worker_id`, where `base_seed` is a long generated by the main process using its RNG (thereby consuming an RNG state). However, seeds for other libraries may be duplicated upon initializing workers (e.g., NumPy), causing each worker to return identical random numbers. (See [this section](https://pytorch.org/docs/1.8.0/notes/faq.html#dataloader-workers-random-seed) in the FAQ.)

In `worker_init_fn`, you may access the PyTorch seed set for each worker with either [`torch.utils.data.get_worker_info().seed`](#torch.utils.data.get_worker_info "torch.utils.data.get_worker_info") or [`torch.initial_seed()`](generated/torch.initial_seed#torch.initial_seed "torch.initial_seed"), and use it to seed other libraries before data loading.

Memory Pinning
--------------

Host-to-GPU copies are much faster when they originate from pinned (page-locked) memory. See [Use pinned memory buffers](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-memory-pinning) for more details on when and how to use pinned memory generally.

For data loading, passing `pin_memory=True` to a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") will automatically put the fetched data Tensors in pinned memory, and thus enable faster data transfer to CUDA-enabled GPUs.

The default memory pinning logic only recognizes Tensors and maps and iterables containing Tensors. By default, if the pinning logic sees a batch that is a custom type (which will occur if you have a `collate_fn` that returns a custom batch type), or if each element of your batch is a custom type, the pinning logic will not recognize them, and it will return that batch (or those elements) without pinning the memory. To enable memory pinning for custom batch or data type(s), define a `pin_memory()` method on your custom type(s). See the example below.
Example:

```
class SimpleCustomBatch:
    def __init__(self, data):
        transposed_data = list(zip(*data))
        self.inp = torch.stack(transposed_data[0], 0)
        self.tgt = torch.stack(transposed_data[1], 0)

    # custom memory pinning method on custom type
    def pin_memory(self):
        self.inp = self.inp.pin_memory()
        self.tgt = self.tgt.pin_memory()
        return self

def collate_wrapper(batch):
    return SimpleCustomBatch(batch)

inps = torch.arange(10 * 5, dtype=torch.float32).view(10, 5)
tgts = torch.arange(10 * 5, dtype=torch.float32).view(10, 5)
dataset = TensorDataset(inps, tgts)

loader = DataLoader(dataset, batch_size=2, collate_fn=collate_wrapper,
                    pin_memory=True)

for batch_ndx, sample in enumerate(loader):
    print(sample.inp.is_pinned())
    print(sample.tgt.is_pinned())
```

`class torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None, *, prefetch_factor=2, persistent_workers=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataloader.html#DataLoader)

Data loader. Combines a dataset and a sampler, and provides an iterable over the given dataset.

The [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning.

See [`torch.utils.data`](#module-torch.utils.data "torch.utils.data") documentation page for more details.

Parameters

* **dataset** ([Dataset](#torch.utils.data.Dataset "torch.utils.data.Dataset")) – dataset from which to load the data.
* **batch\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – how many samples per batch to load (default: `1`).
* **shuffle** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – set to `True` to have the data reshuffled at every epoch (default: `False`).
* **sampler** ([Sampler](#torch.utils.data.Sampler "torch.utils.data.Sampler") *or* *Iterable**,* *optional*) – defines the strategy to draw samples from the dataset. Can be any `Iterable` with `__len__` implemented. If specified, `shuffle` must not be specified.
* **batch\_sampler** ([Sampler](#torch.utils.data.Sampler "torch.utils.data.Sampler") *or* *Iterable**,* *optional*) – like `sampler`, but returns a batch of indices at a time. Mutually exclusive with `batch_size`, `shuffle`, `sampler`, and `drop_last`.
* **num\_workers** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – how many subprocesses to use for data loading. `0` means that the data will be loaded in the main process. (default: `0`)
* **collate\_fn** (*callable**,* *optional*) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.
* **pin\_memory** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, the data loader will copy Tensors into CUDA pinned memory before returning them. If your data elements are a custom type, or your `collate_fn` returns a batch that is a custom type, see the example below.
* **drop\_last** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – set to `True` to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If `False` and the size of the dataset is not divisible by the batch size, then the last batch will be smaller. (default: `False`)
* **timeout** (*numeric**,* *optional*) – if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: `0`)
* **worker\_init\_fn** (*callable**,* *optional*) – If not `None`, this will be called on each worker subprocess with the worker id (an int in `[0, num_workers - 1]`) as input, after seeding and before data loading. (default: `None`)
* **prefetch\_factor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional**,* *keyword-only arg*) – Number of samples loaded in advance by each worker. `2` means there will be a total of 2 \* num\_workers samples prefetched across all workers. (default: `2`)
* **persistent\_workers** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the worker `Dataset` instances alive. (default: `False`)

Warning

If the `spawn` start method is used, `worker_init_fn` cannot be an unpicklable object, e.g., a lambda function. See [Multiprocessing best practices](https://pytorch.org/docs/1.8.0/notes/multiprocessing.html#multiprocessing-best-practices) for more details related to multiprocessing in PyTorch.

Warning

The `len(dataloader)` heuristic is based on the length of the sampler used. When `dataset` is an [`IterableDataset`](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset"), it instead returns an estimate based on `len(dataset) / batch_size`, with proper rounding depending on `drop_last`, regardless of multi-process loading configurations. This represents the best guess PyTorch can make, because PyTorch trusts user `dataset` code to correctly handle multi-process loading and avoid duplicate data. However, if sharding results in multiple workers having incomplete last batches, this estimate can still be inaccurate, because (1) an otherwise complete batch can be broken into multiple ones and (2) more than one batch worth of samples can be dropped when `drop_last` is set. Unfortunately, PyTorch cannot detect such cases in general.

See [Dataset Types](#dataset-types) for more details on these two types of datasets and how [`IterableDataset`](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset") interacts with [Multi-process data loading](#multi-process-data-loading).

Warning

See the [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html#reproducibility), [My data loader workers return identical random numbers](https://pytorch.org/docs/1.8.0/notes/faq.html#dataloader-workers-random-seed), and [Randomness in multi-process data loading](#data-loading-randomness) notes for random-seed-related questions.

`class torch.utils.data.Dataset` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataset.html#Dataset)

An abstract class representing a [`Dataset`](#torch.utils.data.Dataset "torch.utils.data.Dataset").

All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite `__getitem__()`, supporting fetching a data sample for a given key.
Subclasses could also optionally overwrite `__len__()`, which is expected to return the size of the dataset by many [`Sampler`](#torch.utils.data.Sampler "torch.utils.data.Sampler") implementations and the default options of [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader").

Note

[`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.

`class torch.utils.data.IterableDataset` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataset.html#IterableDataset)

An iterable Dataset.

All datasets that represent an iterable of data samples should subclass it. This form of dataset is particularly useful when data comes from a stream.

All subclasses should overwrite `__iter__()`, which would return an iterator of samples in this dataset.

When a subclass is used with [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader"), each item in the dataset will be yielded from the [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") iterator. When `num_workers > 0`, each worker process will have a different copy of the dataset object, so it is often desired to configure each copy independently to avoid having duplicate data returned from the workers. [`get_worker_info()`](#torch.utils.data.get_worker_info "torch.utils.data.get_worker_info"), when called in a worker process, returns information about the worker. It can be used in either the dataset's `__iter__()` method or the [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader")'s `worker_init_fn` option to modify each copy's behavior.

Example 1: splitting workload across all workers in `__iter__()`:

```
>>> class MyIterableDataset(torch.utils.data.IterableDataset):
...     def __init__(self, start, end):
...         super(MyIterableDataset).__init__()
...         assert end > start, "this example code only works with end >= start"
...         self.start = start
...         self.end = end
...
...     def __iter__(self):
...         worker_info = torch.utils.data.get_worker_info()
...         if worker_info is None:  # single-process data loading, return the full iterator
...             iter_start = self.start
...             iter_end = self.end
...         else:  # in a worker process
...             # split workload
...             per_worker = int(math.ceil((self.end - self.start) / float(worker_info.num_workers)))
...             worker_id = worker_info.id
...             iter_start = self.start + worker_id * per_worker
...             iter_end = min(iter_start + per_worker, self.end)
...         return iter(range(iter_start, iter_end))
...
>>> # should give same set of data as range(3, 7), i.e., [3, 4, 5, 6].
>>> ds = MyIterableDataset(start=3, end=7)

>>> # Single-process loading
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
[3, 4, 5, 6]

>>> # Multi-process loading with two worker processes
>>> # Worker 0 fetched [3, 4].  Worker 1 fetched [5, 6].
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2)))
[3, 5, 4, 6]

>>> # With even more workers
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=20)))
[3, 4, 5, 6]
```

Example 2: splitting workload across all workers using `worker_init_fn`:

```
>>> class MyIterableDataset(torch.utils.data.IterableDataset):
...     def __init__(self, start, end):
...         super(MyIterableDataset).__init__()
...         assert end > start, "this example code only works with end >= start"
...         self.start = start
...         self.end = end
...
...     def __iter__(self):
...         return iter(range(self.start, self.end))
...
>>> # should give same set of data as range(3, 7), i.e., [3, 4, 5, 6].
>>> ds = MyIterableDataset(start=3, end=7)

>>> # Single-process loading
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
[3, 4, 5, 6]
>>>
>>> # Directly doing multi-process loading yields duplicate data
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2)))
[3, 3, 4, 4, 5, 5, 6, 6]

>>> # Define a `worker_init_fn` that configures each dataset copy differently
>>> def worker_init_fn(worker_id):
...     worker_info = torch.utils.data.get_worker_info()
...     dataset = worker_info.dataset  # the dataset copy in this worker process
...     overall_start = dataset.start
...     overall_end = dataset.end
...     # configure the dataset to only process the split workload
...     per_worker = int(math.ceil((overall_end - overall_start) / float(worker_info.num_workers)))
...     worker_id = worker_info.id
...     dataset.start = overall_start + worker_id * per_worker
...     dataset.end = min(dataset.start + per_worker, overall_end)
...
>>> # Multi-process loading with the custom `worker_init_fn`
>>> # Worker 0 fetched [3, 4].  Worker 1 fetched [5, 6].
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2, worker_init_fn=worker_init_fn)))
[3, 5, 4, 6]

>>> # With even more workers
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=20, worker_init_fn=worker_init_fn)))
[3, 4, 5, 6]
```

`class torch.utils.data.TensorDataset(*tensors)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataset.html#TensorDataset)

Dataset wrapping tensors.

Each sample will be retrieved by indexing tensors along the first dimension.

Parameters

**\*tensors** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – tensors that have the same size in the first dimension.

`class torch.utils.data.ConcatDataset(datasets)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataset.html#ConcatDataset)

Dataset as a concatenation of multiple datasets.

This class is useful to assemble different existing datasets.

Parameters

**datasets** (*sequence*) – List of datasets to be concatenated

`class torch.utils.data.ChainDataset(datasets)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataset.html#ChainDataset)

Dataset for chaining multiple [`IterableDataset`](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset") s.

This class is useful to assemble different existing dataset streams. The chaining operation is done on-the-fly, so concatenating large-scale datasets with this class will be efficient.

Parameters

**datasets** (*iterable of IterableDataset*) – datasets to be chained together

`class torch.utils.data.BufferedShuffleDataset(dataset, buffer_size)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataset.html#BufferedShuffleDataset)

Dataset shuffled from the original dataset.

This class is useful to shuffle an existing instance of an IterableDataset. The buffer with `buffer_size` is filled with the items from the dataset first. Then, each item will be yielded from the buffer by reservoir sampling via the iterator. `buffer_size` is required to be larger than 0. For `buffer_size == 1`, the dataset is not shuffled. In order to fully shuffle the whole dataset, `buffer_size` is required to be greater than or equal to the size of the dataset.
When it is used with a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader"), each item in the dataset will be yielded from the [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") iterator. The method for setting up a random seed differs based on `num_workers`.

For single-process mode (`num_workers == 0`), the random seed is required to be set before the [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") in the main process.

```
>>> ds = BufferedShuffleDataset(dataset)
>>> random.seed(...)
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
```

For multi-process mode (`num_workers > 0`), the random seed is set by a callable function in each worker.

```
>>> ds = BufferedShuffleDataset(dataset)
>>> def init_fn(worker_id):
...     random.seed(...)
...
>>> print(list(torch.utils.data.DataLoader(ds, ..., num_workers=n, worker_init_fn=init_fn)))
```

Parameters

* **dataset** ([IterableDataset](#torch.utils.data.IterableDataset "torch.utils.data.IterableDataset")) – The original IterableDataset.
* **buffer\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The buffer size for shuffling.

`class torch.utils.data.Subset(dataset, indices)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataset.html#Subset)

Subset of a dataset at specified indices.

Parameters

* **dataset** ([Dataset](#torch.utils.data.Dataset "torch.utils.data.Dataset")) – The whole Dataset
* **indices** (*sequence*) – Indices in the whole set selected for subset

`torch.utils.data.get_worker_info()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/_utils/worker.html#get_worker_info)

Returns the information about the current [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") iterator worker process.

When called in a worker, this returns an object guaranteed to have the following attributes:

* `id`: the current worker id.
* `num_workers`: the total number of workers.
* `seed`: the random seed set for the current worker. This value is determined by the main process RNG and the worker id. See [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader")'s documentation for more details.
* `dataset`: the copy of the dataset object in **this** process. Note that this will be a different object in a different process than the one in the main process.

When called in the main process, this returns `None`.

Note

When used in a `worker_init_fn` passed over to a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader"), this function can be useful to set up each worker process differently, for instance, using `worker_id` to configure the `dataset` object to only read a specific fraction of a sharded dataset, or using `seed` to seed other libraries used in dataset code (e.g., NumPy).
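For example, a minimal `worker_init_fn` along these lines might look as follows (a sketch; seeding NumPy is just one illustrative choice of library to configure):

```
import numpy as np
import torch

def worker_init_fn(worker_id):
    info = torch.utils.data.get_worker_info()
    # Derive a NumPy seed from the per-worker PyTorch seed so that workers
    # do not all produce identical random numbers.
    np.random.seed(info.seed % 2**32)
```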
`torch.utils.data.random_split(dataset, lengths, generator=<torch._C.Generator object>)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/dataset.html#random_split)

Randomly split a dataset into non-overlapping new datasets of given lengths. Optionally fix the generator for reproducible results, e.g.:

```
>>> random_split(range(10), [3, 7], generator=torch.Generator().manual_seed(42))
```

Parameters

* **dataset** ([Dataset](#torch.utils.data.Dataset "torch.utils.data.Dataset")) – Dataset to be split
* **lengths** (*sequence*) – lengths of splits to be produced
* **generator** ([Generator](generated/torch.generator#torch.Generator "torch.Generator")) – Generator used for the random permutation.

`class torch.utils.data.Sampler(data_source)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/sampler.html#Sampler)

Base class for all Samplers.

Every Sampler subclass has to provide an `__iter__()` method, providing a way to iterate over indices of dataset elements, and a `__len__()` method that returns the length of the returned iterators.

Note

The `__len__()` method isn't strictly required by [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader"), but is expected in any calculation involving the length of a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader").

`class torch.utils.data.SequentialSampler(data_source)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/sampler.html#SequentialSampler)

Samples elements sequentially, always in the same order.

Parameters

**data\_source** ([Dataset](#torch.utils.data.Dataset "torch.utils.data.Dataset")) – dataset to sample from

`class torch.utils.data.RandomSampler(data_source, replacement=False, num_samples=None, generator=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/sampler.html#RandomSampler)

Samples elements randomly. If sampling without replacement, samples are drawn from a shuffled dataset. If sampling with replacement, the user can specify `num_samples` to draw.

Parameters

* **data\_source** ([Dataset](#torch.utils.data.Dataset "torch.utils.data.Dataset")) – dataset to sample from
* **replacement** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – samples are drawn on-demand with replacement if `True`, default=`False`
* **num\_samples** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of samples to draw, default=`len(dataset)`. This argument is supposed to be specified only when `replacement` is `True`.
* **generator** ([Generator](generated/torch.generator#torch.Generator "torch.Generator")) – Generator used in sampling.

`class torch.utils.data.SubsetRandomSampler(indices, generator=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/sampler.html#SubsetRandomSampler)

Samples elements randomly from a given list of indices, without replacement.

Parameters

* **indices** (*sequence*) – a sequence of indices
* **generator** ([Generator](generated/torch.generator#torch.Generator "torch.Generator")) – Generator used in sampling.

`class torch.utils.data.WeightedRandomSampler(weights, num_samples, replacement=True, generator=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/sampler.html#WeightedRandomSampler)

Samples elements from `[0,..,len(weights)-1]` with given probabilities (weights).

Parameters

* **weights** (*sequence*) – a sequence of weights, not necessarily summing up to one
* **num\_samples** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of samples to draw
* **replacement** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – if `True`, samples are drawn with replacement.
If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row.
* **generator** ([Generator](generated/torch.generator#torch.Generator "torch.Generator")) – Generator used in sampling.

#### Example

```
>>> list(WeightedRandomSampler([0.1, 0.9, 0.4, 0.7, 3.0, 0.6], 5, replacement=True))
[4, 4, 1, 4, 5]
>>> list(WeightedRandomSampler([0.9, 0.4, 0.05, 0.2, 0.3, 0.1], 5, replacement=False))
[0, 1, 4, 3, 2]
```

`class torch.utils.data.BatchSampler(sampler, batch_size, drop_last)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/sampler.html#BatchSampler)

Wraps another sampler to yield a mini-batch of indices.

Parameters

* **sampler** ([Sampler](#torch.utils.data.Sampler "torch.utils.data.Sampler") *or* *Iterable*) – Base sampler. Can be any iterable object
* **batch\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Size of mini-batch.
* **drop\_last** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, the sampler will drop the last batch if its size would be less than `batch_size`

#### Example

```
>>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
>>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=True))
[[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

`class torch.utils.data.distributed.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True, seed=0, drop_last=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/data/distributed.html#DistributedSampler)

Sampler that restricts data loading to a subset of the dataset.

It is especially useful in conjunction with [`torch.nn.parallel.DistributedDataParallel`](generated/torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel"). In such a case, each process can pass a `DistributedSampler` instance as a [`DataLoader`](#torch.utils.data.DataLoader "torch.utils.data.DataLoader") sampler, and load a subset of the original dataset that is exclusive to it.

Note

Dataset is assumed to be of constant size.

Parameters

* **dataset** – Dataset used for sampling.
* **num\_replicas** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of processes participating in distributed training. By default, `world_size` is retrieved from the current distributed group.
* **rank** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Rank of the current process within `num_replicas`. By default, `rank` is retrieved from the current distributed group.
* **shuffle** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True` (default), sampler will shuffle the indices.
* **seed** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – random seed used to shuffle the sampler if `shuffle=True`. This number should be identical across all processes in the distributed group. Default: `0`.
* **drop\_last** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, then the sampler will drop the tail of the data to make it evenly divisible across the number of replicas. If `False`, the sampler will add extra indices to make the data evenly divisible across the replicas. Default: `False`.
Warning

In distributed mode, calling the `set_epoch()` method at the beginning of each epoch **before** creating the `DataLoader` iterator is necessary to make shuffling work properly across multiple epochs. Otherwise, the same ordering will always be used.

Example:

```
>>> sampler = DistributedSampler(dataset) if is_distributed else None
>>> loader = DataLoader(dataset, shuffle=(sampler is None),
...                     sampler=sampler)
>>> for epoch in range(start_epoch, n_epochs):
...     if is_distributed:
...         sampler.set_epoch(epoch)
...     train(loader)
```
pytorch torch.utils.model_zoo

torch.utils.model\_zoo
======================

Moved to `torch.hub`.

`torch.utils.model_zoo.load_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None)`

Loads the Torch serialized object at the given URL.

If the downloaded file is a zip file, it will be automatically decompressed.

If the object is already present in `model_dir`, it's deserialized and returned. The default value of `model_dir` is `<hub_dir>/checkpoints`, where `hub_dir` is the directory returned by [`get_dir()`](hub#torch.hub.get_dir "torch.hub.get_dir").

Parameters

* **url** (*string*) – URL of the object to download
* **model\_dir** (*string**,* *optional*) – directory in which to save the object
* **map\_location** (*optional*) – a function or a dict specifying how to remap storage locations (see torch.load)
* **progress** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether or not to display a progress bar to stderr. Default: True
* **check\_hash** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If True, the filename part of the URL should follow the naming convention `filename-<sha256>.ext`, where `<sha256>` is the first eight or more digits of the SHA256 hash of the contents of the file. The hash is used to ensure unique names and to verify the contents of the file. Default: False
* **file\_name** (*string**,* *optional*) – name for the downloaded file. The filename from `url` will be used if not set.

#### Example

```
>>> state_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth')
```

pytorch TorchScript

TorchScript
===========

* [Creating TorchScript Code](#creating-torchscript-code)
* [Mixing Tracing and Scripting](#mixing-tracing-and-scripting)
* [TorchScript Language](#torchscript-language)
* [Built-in Functions and Modules](#built-in-functions-and-modules)
  + [PyTorch Functions and Modules](#pytorch-functions-and-modules)
  + [Python Functions and Modules](#python-functions-and-modules)
  + [Python Language Reference Comparison](#python-language-reference-comparison)
* [Debugging](#debugging)
  + [Disable JIT for Debugging](#disable-jit-for-debugging)
  + [Inspecting Code](#inspecting-code)
  + [Interpreting Graphs](#interpreting-graphs)
  + [Tracer](#tracer)
* [Frequently Asked Questions](#frequently-asked-questions)
* [Appendix](#appendix)
  + [Migrating to PyTorch 1.2 Recursive Scripting API](#migrating-to-pytorch-1-2-recursive-scripting-api)
  + [References](#references)

TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency.

We provide tools to incrementally transition a model from a pure Python program to a TorchScript program that can be run independently from Python, such as in a standalone C++ program. This makes it possible to train models in PyTorch using familiar Python tools and then export the model via TorchScript to a production environment where Python programs may be disadvantageous for performance and multi-threading reasons.

For a gentle introduction to TorchScript, see the [Introduction to TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html) tutorial.
For an end-to-end example of converting a PyTorch model to TorchScript and running it in C++, see the [Loading a PyTorch Model in C++](https://pytorch.org/tutorials/advanced/cpp_export.html) tutorial. Creating TorchScript Code ------------------------- | | | | --- | --- | | [`script`](generated/torch.jit.script#torch.jit.script "torch.jit.script")(obj[, optimize, \_frames\_up, \_rcb]) | Scripting a function or `nn.Module` will inspect the source code, compile it as TorchScript code using the TorchScript compiler, and return a [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") or [`ScriptFunction`](generated/torch.jit.scriptfunction#torch.jit.ScriptFunction "torch.jit.ScriptFunction"). | | [`trace`](generated/torch.jit.trace#torch.jit.trace "torch.jit.trace")(func, example\_inputs[, optimize, …]) | Trace a function and return an executable or [`ScriptFunction`](generated/torch.jit.scriptfunction#torch.jit.ScriptFunction "torch.jit.ScriptFunction") that will be optimized using just-in-time compilation. | | [`script_if_tracing`](generated/torch.jit.script_if_tracing#torch.jit.script_if_tracing "torch.jit.script_if_tracing")(fn) | Compiles `fn` when it is first called during tracing. | | [`trace_module`](generated/torch.jit.trace_module#torch.jit.trace_module "torch.jit.trace_module")(mod, inputs[, optimize, …]) | Trace a module and return an executable [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") that will be optimized using just-in-time compilation. | | [`fork`](generated/torch.jit.fork#torch.jit.fork "torch.jit.fork")(func, \*args, \*\*kwargs) | Creates an asynchronous task executing `func` and a reference to the value of the result of this execution. | | [`wait`](generated/torch.jit.wait#torch.jit.wait "torch.jit.wait")(future) | Forces completion of a `torch.jit.Future[T]` asynchronous task, returning the result of the task. | | [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule")() | A wrapper around C++ `torch::jit::Module`. | | [`ScriptFunction`](generated/torch.jit.scriptfunction#torch.jit.ScriptFunction "torch.jit.ScriptFunction") | Functionally equivalent to a [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule"), but represents a single function and does not have any attributes or Parameters. | | [`freeze`](generated/torch.jit.freeze#torch.jit.freeze "torch.jit.freeze")(mod[, preserved\_attrs, optimize\_numerics]) | Freezing a [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") will clone it and attempt to inline the cloned module’s submodules, parameters, and attributes as constants in the TorchScript IR Graph. | | [`save`](generated/torch.jit.save#torch.jit.save "torch.jit.save")(m, f[, \_extra\_files]) | Save an offline version of this module for use in a separate process. 
|
| [`load`](generated/torch.jit.load#torch.jit.load "torch.jit.load")(f[, map\_location, \_extra\_files]) | Load a [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") or [`ScriptFunction`](generated/torch.jit.scriptfunction#torch.jit.ScriptFunction "torch.jit.ScriptFunction") previously saved with [`torch.jit.save`](generated/torch.jit.save#torch.jit.save "torch.jit.save") |
| [`ignore`](generated/torch.jit.ignore#torch.jit.ignore "torch.jit.ignore")([drop]) | This decorator indicates to the compiler that a function or method should be ignored and left as a Python function. |
| [`unused`](generated/torch.jit.unused#torch.jit.unused "torch.jit.unused")(fn) | This decorator indicates to the compiler that a function or method should be ignored and replaced with the raising of an exception. |
| [`isinstance`](generated/torch.jit.isinstance#torch.jit.isinstance "torch.jit.isinstance")(obj, target\_type) | This function provides for container type refinement in TorchScript. |

Mixing Tracing and Scripting
----------------------------

In many cases either tracing or scripting is an easier approach for converting a model to TorchScript. Tracing and scripting can be composed to suit the particular requirements of a part of a model.

Scripted functions can call traced functions. This is particularly useful when you need to use control flow around a simple feed-forward model. For instance, the beam search of a sequence-to-sequence model will typically be written in script but can call an encoder module generated using tracing.

Example (calling a traced function in script):

```
import torch

def foo(x, y):
    return 2 * x + y

traced_foo = torch.jit.trace(foo, (torch.rand(3), torch.rand(3)))

@torch.jit.script
def bar(x):
    return traced_foo(x, x)
```

Traced functions can call script functions. This is useful when a small part of a model requires some control flow even though most of the model is just a feed-forward network. Control flow inside of a script function called by a traced function is preserved correctly.

Example (calling a script function in a traced function):

```
import torch

@torch.jit.script
def foo(x, y):
    if x.max() > y.max():
        r = x
    else:
        r = y
    return r

def bar(x, y, z):
    return foo(x, y) + z

traced_bar = torch.jit.trace(bar, (torch.rand(3), torch.rand(3), torch.rand(3)))
```

This composition works for `nn.Module`s as well, where it can be used to generate a submodule using tracing that can be called from the methods of a script module.

Example (using a traced module):

```
import torch
import torchvision

class MyScriptModule(torch.nn.Module):
    def __init__(self):
        super(MyScriptModule, self).__init__()
        self.means = torch.nn.Parameter(torch.tensor([103.939, 116.779, 123.68])
                                        .resize_(1, 3, 1, 1))
        self.resnet = torch.jit.trace(torchvision.models.resnet18(),
                                      torch.rand(1, 3, 224, 224))

    def forward(self, input):
        return self.resnet(input - self.means)

my_script_module = torch.jit.script(MyScriptModule())
```

TorchScript Language
--------------------

TorchScript is a statically typed subset of Python, so many Python features apply directly to TorchScript. See the full [TorchScript Language Reference](jit_language_reference#language-reference) for details.

Built-in Functions and Modules
------------------------------

TorchScript supports the use of most PyTorch functions and many Python built-ins. See [TorchScript Builtins](jit_builtin_functions#builtin-functions) for a full reference of supported functions.
### PyTorch Functions and Modules

TorchScript supports a subset of the tensor and neural network functions that PyTorch provides. Most methods on Tensor as well as functions in the `torch` namespace, all functions in `torch.nn.functional` and most modules from `torch.nn` are supported in TorchScript.

See [TorchScript Unsupported Pytorch Constructs](jit_unsupported#jit-unsupported) for a list of unsupported PyTorch functions and modules.

### Python Functions and Modules

Many of Python's [built-in functions](https://docs.python.org/3/library/functions.html) are supported in TorchScript. The [`math`](https://docs.python.org/3/library/math.html#module-math "(in Python v3.9)") module is also supported (see [math Module](jit_builtin_functions#math-module) for details), but no other Python modules (built-in or third party) are supported.

### Python Language Reference Comparison

For a full listing of supported Python features, see [Python Language Reference Coverage](jit_python_reference#python-language-reference).

Debugging
---------

### Disable JIT for Debugging

`PYTORCH_JIT`

Setting the environment variable `PYTORCH_JIT=0` will disable all script and tracing annotations. If there is a hard-to-debug error in one of your TorchScript models, you can use this flag to force everything to run using native Python. Since TorchScript (scripting and tracing) is disabled with this flag, you can use tools like `pdb` to debug the model code. For example:

```
@torch.jit.script
def scripted_fn(x : torch.Tensor):
    for i in range(12):
        x = x + x
    return x

def fn(x):
    x = torch.neg(x)
    import pdb; pdb.set_trace()
    return scripted_fn(x)

traced_fn = torch.jit.trace(fn, (torch.rand(4, 5),))
traced_fn(torch.rand(3, 4))
```

Debugging this script with `pdb` works except when we invoke the [`@torch.jit.script`](generated/torch.jit.script#torch.jit.script "torch.jit.script") function. We can globally disable JIT, so that we can call the [`@torch.jit.script`](generated/torch.jit.script#torch.jit.script "torch.jit.script") function as a normal Python function and not compile it. If the above script is called `disable_jit_example.py`, we can invoke it like so:

```
$ PYTORCH_JIT=0 python disable_jit_example.py
```

and we will be able to step into the [`@torch.jit.script`](generated/torch.jit.script#torch.jit.script "torch.jit.script") function as a normal Python function. To disable the TorchScript compiler for a specific function, see [`@torch.jit.ignore`](generated/torch.jit.ignore#torch.jit.ignore "torch.jit.ignore").

### Inspecting Code

TorchScript provides a code pretty-printer for all [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") instances. This pretty-printer gives an interpretation of the script method's code as valid Python syntax. For example:

```
@torch.jit.script
def foo(len):
    # type: (int) -> torch.Tensor
    rv = torch.zeros(3, 4)
    for i in range(len):
        if i < 10:
            rv = rv - 1.0
        else:
            rv = rv + 1.0
    return rv

print(foo.code)
```

A [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") with a single `forward` method will have an attribute `code`, which you can use to inspect the [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule")'s code. If the [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") has more than one method, you will need to access `.code` on the method itself and not the module.
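For example, this minimal sketch (module and method names are illustrative) reads `.code` both on the module, which resolves to `forward`, and on a second exported method:

```
import torch

class MyTwoMethodModule(torch.nn.Module):
    def forward(self, x):
        return x + 1

    @torch.jit.export
    def scale(self, x):
        return x * 2

scripted = torch.jit.script(MyTwoMethodModule())
print(scripted.code)        # pretty-printed code of `forward`
print(scripted.scale.code)  # pretty-printed code of the `scale` method
```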
For a method named `foo`, we can inspect its code by accessing `.foo.code`. The scripted `foo` function shown earlier produces this output:

```
def foo(len: int) -> Tensor:
  rv = torch.zeros([3, 4], dtype=None, layout=None, device=None, pin_memory=None)
  rv0 = rv
  for i in range(len):
    if torch.lt(i, 10):
      rv1 = torch.sub(rv0, 1., 1)
    else:
      rv1 = torch.add(rv0, 1., 1)
    rv0 = rv1
  return rv0
```

This is TorchScript's compilation of the code for the `foo` function. You can use this to ensure TorchScript (tracing or scripting) has captured your model code correctly.

### Interpreting Graphs

TorchScript also has a representation at a lower level than the code pretty-printer, in the form of IR graphs.

TorchScript uses a static single assignment (SSA) intermediate representation (IR) to represent computation. The instructions in this format consist of ATen (the C++ backend of PyTorch) operators and other primitive operators, including control flow operators for loops and conditionals. As an example:

```
@torch.jit.script
def foo(len):
    # type: (int) -> torch.Tensor
    rv = torch.zeros(3, 4)
    for i in range(len):
        if i < 10:
            rv = rv - 1.0
        else:
            rv = rv + 1.0
    return rv

print(foo.graph)
```

`graph` follows the same rules described in the [Inspecting Code](#inspecting-code) section with regard to `forward` method lookup.

The example script above produces the graph:

```
graph(%len.1 : int):
  %24 : int = prim::Constant[value=1]()
  %17 : bool = prim::Constant[value=1]() # test.py:10:5
  %12 : bool? = prim::Constant()
  %10 : Device? = prim::Constant()
  %6 : int? = prim::Constant()
  %1 : int = prim::Constant[value=3]() # test.py:9:22
  %2 : int = prim::Constant[value=4]() # test.py:9:25
  %20 : int = prim::Constant[value=10]() # test.py:11:16
  %23 : float = prim::Constant[value=1]() # test.py:12:23
  %4 : int[] = prim::ListConstruct(%1, %2)
  %rv.1 : Tensor = aten::zeros(%4, %6, %6, %10, %12) # test.py:9:10
  %rv : Tensor = prim::Loop(%len.1, %17, %rv.1) # test.py:10:5
    block0(%i.1 : int, %rv.14 : Tensor):
      %21 : bool = aten::lt(%i.1, %20) # test.py:11:12
      %rv.13 : Tensor = prim::If(%21) # test.py:11:9
        block0():
          %rv.3 : Tensor = aten::sub(%rv.14, %23, %24) # test.py:12:18
          -> (%rv.3)
        block1():
          %rv.6 : Tensor = aten::add(%rv.14, %23, %24) # test.py:14:18
          -> (%rv.6)
      -> (%17, %rv.13)
  return (%rv)
```

Take the instruction `%rv.1 : Tensor = aten::zeros(%4, %6, %6, %10, %12) # test.py:9:10` for example.

* `%rv.1 : Tensor` means we assign the output to a (unique) value named `rv.1`; that value is of `Tensor` type, and we do not know its concrete shape.
* `aten::zeros` is the operator (equivalent to `torch.zeros`) and the input list `(%4, %6, %6, %10, %12)` specifies which values in scope should be passed as inputs. The schema for built-in functions like `aten::zeros` can be found at [Builtin Functions](#builtin-functions).
* `# test.py:9:10` is the location in the original source file that generated this instruction. In this case, it is a file named `test.py`, on line 9, and at character 10.

Notice that operators can also have associated `blocks`, namely the `prim::Loop` and `prim::If` operators. In the graph print-out, these operators are formatted to reflect their equivalent source code forms to facilitate easy debugging.
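Graphs can also be walked programmatically; this small sketch assumes the `nodes()`, `kind()`, and `sourceRange()` accessors exposed on the IR graph objects:

```
import torch

@torch.jit.script
def foo(x):
    return x * 2 + 1

# Print each IR node's operator kind and the source location it came from.
for node in foo.graph.nodes():
    print(node.kind(), node.sourceRange())
```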
Graphs can be inspected as shown above to confirm manually that the computation described by a [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") is correct; the tracer tooling described below automates parts of this check.

### Tracer

#### Tracing Edge Cases

There are some edge cases where the trace of a given Python function/module will not be representative of the underlying code. These cases can include:

* Tracing of control flow that is dependent on inputs (e.g. tensor shapes)
* Tracing of in-place operations of tensor views (e.g. indexing on the left-hand side of an assignment)

Note that these cases may in fact be traceable in the future.

#### Automatic Trace Checking

One way to automatically catch many errors in traces is by using `check_inputs` on the `torch.jit.trace()` API. `check_inputs` takes a list of tuples of inputs that will be used to re-trace the computation and verify the results. For example:

```
def loop_in_traced_fn(x):
    result = x[0]
    for i in range(x.size(0)):
        result = result * x[i]
    return result

inputs = (torch.rand(3, 4, 5),)
check_inputs = [(torch.rand(4, 5, 6),), (torch.rand(2, 3, 4),)]

traced = torch.jit.trace(loop_in_traced_fn, inputs, check_inputs=check_inputs)
```

Gives us the following diagnostic information:

```
ERROR: Graphs differed across invocations!
Graph diff:
    graph(%x : Tensor) {
      %1 : int = prim::Constant[value=0]()
      %2 : int = prim::Constant[value=0]()
      %result.1 : Tensor = aten::select(%x, %1, %2)
      %4 : int = prim::Constant[value=0]()
      %5 : int = prim::Constant[value=0]()
      %6 : Tensor = aten::select(%x, %4, %5)
      %result.2 : Tensor = aten::mul(%result.1, %6)
      %8 : int = prim::Constant[value=0]()
      %9 : int = prim::Constant[value=1]()
      %10 : Tensor = aten::select(%x, %8, %9)
    -   %result : Tensor = aten::mul(%result.2, %10)
    +   %result.3 : Tensor = aten::mul(%result.2, %10)
    ?          ++
      %12 : int = prim::Constant[value=0]()
      %13 : int = prim::Constant[value=2]()
      %14 : Tensor = aten::select(%x, %12, %13)
    +   %result : Tensor = aten::mul(%result.3, %14)
    +   %16 : int = prim::Constant[value=0]()
    +   %17 : int = prim::Constant[value=3]()
    +   %18 : Tensor = aten::select(%x, %16, %17)
    -   %15 : Tensor = aten::mul(%result, %14)
    ?     ^                                 ^
    +   %19 : Tensor = aten::mul(%result, %18)
    ?     ^                                 ^
    -   return (%15);
    ?             ^
    +   return (%19);
    ?             ^
    }
```

This message indicates that the computation differed between when we first traced the function and when we re-traced it with the `check_inputs`. Indeed, the loop within the body of `loop_in_traced_fn` depends on the shape of the input `x`, and thus when we try another `x` with a different shape, the trace differs.
In this case, data-dependent control flow like this can be captured using [`torch.jit.script()`](generated/torch.jit.script#torch.jit.script "torch.jit.script") instead:

```
def fn(x):
    result = x[0]
    for i in range(x.size(0)):
        result = result * x[i]
    return result

inputs = (torch.rand(3, 4, 5),)
check_inputs = [(torch.rand(4, 5, 6),), (torch.rand(2, 3, 4),)]

scripted_fn = torch.jit.script(fn)
print(scripted_fn.graph)

for input_tuple in [inputs] + check_inputs:
    torch.testing.assert_allclose(fn(*input_tuple), scripted_fn(*input_tuple))
```

Which produces:

```
graph(%x : Tensor) {
  %5 : bool = prim::Constant[value=1]()
  %1 : int = prim::Constant[value=0]()
  %result.1 : Tensor = aten::select(%x, %1, %1)
  %4 : int = aten::size(%x, %1)
  %result : Tensor = prim::Loop(%4, %5, %result.1)
    block0(%i : int, %7 : Tensor) {
      %10 : Tensor = aten::select(%x, %1, %i)
      %result.2 : Tensor = aten::mul(%7, %10)
      -> (%5, %result.2)
    }
  return (%result);
}
```

#### Tracer Warnings

The tracer produces warnings for several problematic patterns in traced computation. As an example, take a trace of a function that contains an in-place assignment on a slice (a view) of a Tensor:

```
def fill_row_zero(x):
    x[0] = torch.rand(*x.shape[1:2])
    return x

traced = torch.jit.trace(fill_row_zero, (torch.rand(3, 4),))
print(traced.graph)
```

Produces several warnings and a graph which simply returns the input:

```
fill_row_zero.py:4: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  x[0] = torch.rand(*x.shape[1:2])
fill_row_zero.py:6: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 1] (0.09115803241729736 vs. 0.6782537698745728) and 3 other locations (33.00%)
  traced = torch.jit.trace(fill_row_zero, (torch.rand(3, 4),))
graph(%0 : Float(3, 4)) {
  return (%0);
}
```

We can fix this by modifying the code to not use the in-place update, but rather build up the result tensor out-of-place with `torch.cat`:

```
def fill_row_zero(x):
    # Build the result out-of-place: a random first row concatenated with
    # the remaining rows of the input.
    x = torch.cat((torch.rand(1, *x.shape[1:]), x[1:]), dim=0)
    return x

traced = torch.jit.trace(fill_row_zero, (torch.rand(3, 4),))
print(traced.graph)
```

Frequently Asked Questions
--------------------------

Q: I would like to train a model on GPU and do inference on CPU. What are the best practices?

First convert your model from GPU to CPU and then save it, like so:

```
cpu_model = gpu_model.cpu()
sample_input_cpu = sample_input_gpu.cpu()
traced_cpu = torch.jit.trace(cpu_model, sample_input_cpu)
torch.jit.save(traced_cpu, "cpu.pt")

traced_gpu = torch.jit.trace(gpu_model, sample_input_gpu)
torch.jit.save(traced_gpu, "gpu.pt")

# ... later, when using the model:

if use_gpu:
    model = torch.jit.load("gpu.pt")
else:
    model = torch.jit.load("cpu.pt")

model(input)
```

This is recommended because the tracer may witness tensor creation on a specific device, so casting an already-loaded model may have unexpected effects. Casting the model *before* saving it ensures that the tracer has the correct device information.
Q: How do I store attributes on a [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule")?

Say we have a model like:

```
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.x = 2

    def forward(self):
        return self.x

m = torch.jit.script(Model())
```

Scripting `Model` as shown results in a compilation error, since the compiler doesn't know about `x`. There are four ways to inform the compiler of attributes on [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule"):

1. `nn.Parameter` - Values wrapped in `nn.Parameter` will work as they do on `nn.Module`s
2. `register_buffer` - Values wrapped in `register_buffer` will work as they do on `nn.Module`s. This is equivalent to an attribute (see 4) of type `Tensor`.
3. Constants - Annotating a class member as `Final` (or adding it to a list called `__constants__` at the class definition level) will mark the contained names as constants. Constants are saved directly in the code of the model. See `builtin-constants` for details.
4. Attributes - Values that are a `supported type` can be added as mutable attributes. Most types can be inferred but some may need to be specified, see `module attributes` for details.

Q: I would like to trace a module's method but I keep getting this error:

`RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient`

This error usually means that the method you are tracing uses a module's parameters and you are passing the module's method instead of the module instance (e.g. `my_module_instance.forward` vs `my_module_instance`).

* Invoking `trace` with a module's method captures module parameters (which may require gradients) as **constants**.
* On the other hand, invoking `trace` with a module's instance (e.g. `my_module`) creates a new module and correctly copies parameters into the new module, so they can accumulate gradients if required.

To trace a specific method on a module, see [`torch.jit.trace_module`](generated/torch.jit.trace_module#torch.jit.trace_module "torch.jit.trace_module").

Appendix
--------

### Migrating to PyTorch 1.2 Recursive Scripting API

This section details the changes to TorchScript in PyTorch 1.2. If you are new to TorchScript you can skip this section. There are two main changes to the TorchScript API with PyTorch 1.2.

1. [`torch.jit.script`](generated/torch.jit.script#torch.jit.script "torch.jit.script") will now attempt to recursively compile functions, methods, and classes that it encounters. Once you call `torch.jit.script`, compilation is "opt-out", rather than "opt-in".
2. `torch.jit.script(nn_module_instance)` is now the preferred way to create [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule")s, instead of inheriting from `torch.jit.ScriptModule`.

These changes combine to provide a simpler, easier-to-use API for converting your `nn.Module`s into [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule")s, ready to be optimized and executed in a non-Python environment.
The new usage looks like this:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

my_model = Model()
my_scripted_model = torch.jit.script(my_model)
```

* The module's `forward` is compiled by default. Methods called from `forward` are lazily compiled in the order they are used in `forward`.
* To compile a method other than `forward` that is not called from `forward`, add `@torch.jit.export`.
* To stop the compiler from compiling a method, add [`@torch.jit.ignore`](generated/torch.jit.ignore#torch.jit.ignore "torch.jit.ignore") or [`@torch.jit.unused`](generated/torch.jit.unused#torch.jit.unused "torch.jit.unused"). `@ignore` leaves the method as a call to Python, and `@unused` replaces it with an exception. `@ignore`d methods cannot be exported; `@unused` ones can.
* Most attribute types can be inferred, so `torch.jit.Attribute` is not necessary. For empty container types, annotate their types using [PEP 526-style](https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations) class annotations.
* Constants can be marked with a `Final` class annotation instead of adding the name of the member to `__constants__`.
* Python 3 type hints can be used in place of `torch.jit.annotate`.

As a result of these changes, the following items are considered deprecated and should not appear in new code:

* The `@torch.jit.script_method` decorator
* Classes that inherit from `torch.jit.ScriptModule`
* The `torch.jit.Attribute` wrapper class
* The `__constants__` array
* The `torch.jit.annotate` function

#### Modules

Warning

The [`@torch.jit.ignore`](generated/torch.jit.ignore#torch.jit.ignore "torch.jit.ignore") annotation's behavior changes in PyTorch 1.2. Before PyTorch 1.2, the `@ignore` decorator was used to make a function or method callable from code that is exported. To get this functionality back, use `@torch.jit.unused()`. `@torch.jit.ignore` is now equivalent to `@torch.jit.ignore(drop=False)`. See [`@torch.jit.ignore`](generated/torch.jit.ignore#torch.jit.ignore "torch.jit.ignore") and [`@torch.jit.unused`](generated/torch.jit.unused#torch.jit.unused "torch.jit.unused") for details.

When passed to the [`torch.jit.script`](generated/torch.jit.script#torch.jit.script "torch.jit.script") function, a `torch.nn.Module`'s data is copied to a [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") and the TorchScript compiler compiles the module. The module's `forward` is compiled by default. Methods called from `forward` are lazily compiled in the order they are used in `forward`, as well as any `@torch.jit.export` methods.

`torch.jit.export(fn)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/_jit_internal.html#export)

This decorator indicates that a method on an `nn.Module` is used as an entry point into a [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") and should be compiled.

`forward` is implicitly assumed to be an entry point, so it does not need this decorator. Functions and methods called from `forward` are compiled as they are seen by the compiler, so they do not need this decorator either.
Example (using `@torch.jit.export` on a method):

```
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def implicitly_compiled_method(self, x):
        return x + 99

    # `forward` is implicitly decorated with `@torch.jit.export`,
    # so adding it here would have no effect
    def forward(self, x):
        return x + 10

    @torch.jit.export
    def another_forward(self, x):
        # When the compiler sees this call, it will compile
        # `implicitly_compiled_method`
        return self.implicitly_compiled_method(x)

    def unused_method(self, x):
        return x - 20

# `m` will contain compiled methods:
#     `forward`
#     `another_forward`
#     `implicitly_compiled_method`
# `unused_method` will not be compiled since it was not called from
# any compiled methods and wasn't decorated with `@torch.jit.export`
m = torch.jit.script(MyModule())
```

#### Functions

Functions don't change much; they can be decorated with [`@torch.jit.ignore`](generated/torch.jit.ignore#torch.jit.ignore "torch.jit.ignore") or [`torch.jit.unused`](generated/torch.jit.unused#torch.jit.unused "torch.jit.unused") if needed.

```
# Same behavior as pre-PyTorch 1.2
@torch.jit.script
def some_fn():
    return 2

# Marks a function as ignored; if nothing
# ever calls it, then this has no effect
@torch.jit.ignore
def some_fn2():
    return 2

# As with ignore, if nothing calls it then it has no effect.
# If it is called in script it is replaced with an exception.
@torch.jit.unused
def some_fn3():
    import pdb; pdb.set_trace()
    return 4

# Doesn't do anything, this function is already
# the main entry point
@torch.jit.export
def some_fn4():
    return 2
```

#### TorchScript Classes

Warning

TorchScript class support is experimental. Currently it is best suited for simple record-like types (think a `NamedTuple` with methods attached).

Everything in a user-defined [TorchScript Class](torchscript-class) is exported by default; functions can be decorated with [`@torch.jit.ignore`](generated/torch.jit.ignore#torch.jit.ignore "torch.jit.ignore") if needed.

#### Attributes

The TorchScript compiler needs to know the types of `module attributes`. Most types can be inferred from the value of the member. Empty lists and dicts cannot have their types inferred and must have their types annotated with [PEP 526-style](https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations) class annotations. If a type cannot be inferred and is not explicitly annotated, it will not be added as an attribute to the resulting [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule").

Old API:

```
from typing import Dict
import torch

class MyModule(torch.jit.ScriptModule):
    def __init__(self):
        super(MyModule, self).__init__()
        self.my_dict = torch.jit.Attribute({}, Dict[str, int])
        self.my_int = torch.jit.Attribute(20, int)

m = MyModule()
```

New API:

```
from typing import Dict

class MyModule(torch.nn.Module):
    my_dict: Dict[str, int]

    def __init__(self):
        super(MyModule, self).__init__()
        # This type cannot be inferred and must be specified
        self.my_dict = {}

        # The attribute type here is inferred to be `int`
        self.my_int = 20

    def forward(self):
        pass

m = torch.jit.script(MyModule())
```

#### Constants

The `Final` type constructor can be used to mark members as `constant`. If members are not marked constant, they will be copied to the resulting [`ScriptModule`](generated/torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") as attributes. Using `Final` opens opportunities for optimization if the value is known to be fixed and gives additional type safety.
Old API:

```
class MyModule(torch.jit.ScriptModule):
    __constants__ = ['my_constant']

    def __init__(self):
        super(MyModule, self).__init__()
        self.my_constant = 2

    def forward(self):
        pass

m = MyModule()
```

New API:

```
try:
    from typing_extensions import Final
except ImportError:
    # If you don't have `typing_extensions` installed, you can use a
    # polyfill from `torch.jit`.
    from torch.jit import Final

class MyModule(torch.nn.Module):
    my_constant: Final[int]

    def __init__(self):
        super(MyModule, self).__init__()
        self.my_constant = 2

    def forward(self):
        pass

m = torch.jit.script(MyModule())
```

#### Variables

Containers are assumed to have type `Tensor` and be non-optional (see `Default Types` for more information). Previously, `torch.jit.annotate` was used to tell the TorchScript compiler what the type should be. Python 3 style type hints are now supported.

```
import torch
from typing import Dict, Optional

@torch.jit.script
def make_dict(flag: bool):
    x: Dict[str, int] = {}
    x['hi'] = 2
    b: Optional[int] = None
    if flag:
        b = 2
    return x, b
```

### References

* [Python Language Reference Coverage](jit_python_reference)
* [TorchScript Unsupported Pytorch Constructs](jit_unsupported)
pytorch torch.utils.tensorboard

torch.utils.tensorboard
=======================

Before going further, see <https://www.tensorflow.org/tensorboard/> for more details on TensorBoard.

Once you've installed TensorBoard, these utilities let you log PyTorch models and metrics into a directory for visualization within the TensorBoard UI. Scalars, images, histograms, graphs, and embedding visualizations are all supported for PyTorch models and tensors as well as Caffe2 nets and blobs.

The SummaryWriter class is your main entry to log data for consumption and visualization by TensorBoard. For example:

```
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets, transforms

# Writer will output to ./runs/ directory by default
writer = SummaryWriter()

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
trainset = datasets.MNIST('mnist_train', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
model = torchvision.models.resnet50(False)
# Have ResNet model take in grayscale rather than RGB
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
images, labels = next(iter(trainloader))

grid = torchvision.utils.make_grid(images)
writer.add_image('images', grid, 0)
writer.add_graph(model, images)
writer.close()
```

This can then be visualized with TensorBoard, which should be installable and runnable with:

```
pip install tensorboard
tensorboard --logdir=runs
```

Lots of information can be logged for one experiment. To avoid cluttering the UI and have better result clustering, we can group plots by naming them hierarchically. For example, "Loss/train" and "Loss/test" will be grouped together, while "Accuracy/train" and "Accuracy/test" will be grouped separately in the TensorBoard interface.

```
from torch.utils.tensorboard import SummaryWriter
import numpy as np

writer = SummaryWriter()

for n_iter in range(100):
    writer.add_scalar('Loss/train', np.random.random(), n_iter)
    writer.add_scalar('Loss/test', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
```

`class torch.utils.tensorboard.writer.SummaryWriter(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter)

Writes entries directly to event files in the log\_dir to be consumed by TensorBoard.

The `SummaryWriter` class provides a high-level API to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously. This allows a training program to call methods to add data to the file directly from the training loop, without slowing down training.

`__init__(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.__init__)

Creates a `SummaryWriter` that will write out events and summaries to the event file.

Parameters

* **log\_dir** (*string*) – Save directory location. Default is runs/**CURRENT\_DATETIME\_HOSTNAME**, which changes after each run. Use hierarchical folder structure to compare between runs easily. e.g. pass in 'runs/exp1', 'runs/exp2', etc.
for each new experiment to compare across them.
* **comment** (*string*) – Comment log\_dir suffix appended to the default `log_dir`. If `log_dir` is assigned, this argument has no effect.
* **purge\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – When logging crashes at step T+X and restarts at step T, any events whose global\_step is larger than or equal to T will be purged and hidden from TensorBoard. Note that crashed and resumed experiments should have the same `log_dir`.
* **max\_queue** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Size of the queue for pending events and summaries before one of the 'add' calls forces a flush to disk. Default is ten items.
* **flush\_secs** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – How often, in seconds, to flush the pending events and summaries to disk. Default is every two minutes.
* **filename\_suffix** (*string*) – Suffix added to all event filenames in the log\_dir directory. More details on filename construction in tensorboard.summary.writer.event\_file\_writer.EventFileWriter.

Examples:

```
from torch.utils.tensorboard import SummaryWriter

# create a summary writer with automatically generated folder name.
writer = SummaryWriter()
# folder location: runs/May04_22-14-54_s-MacBook-Pro.local/

# create a summary writer using the specified folder name.
writer = SummaryWriter("my_experiment")
# folder location: my_experiment

# create a summary writer with comment appended.
writer = SummaryWriter(comment="LR_0.1_BATCH_16")
# folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/
```

`add_scalar(tag, scalar_value, global_step=None, walltime=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_scalar)

Add scalar data to summary.

Parameters

* **tag** (*string*) – Data identifier
* **scalar\_value** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* *string/blobname*) – Value to save
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Examples:

```
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
x = range(100)
for i in x:
    writer.add_scalar('y=2x', i * 2, i)
writer.close()
```

`add_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_scalars)

Add many scalars to summary.
Parameters

* **main\_tag** (*string*) – The parent name for the tags
* **tag\_scalar\_dict** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – Key-value pair storing the tag and corresponding values
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Examples:

```
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
r = 5
for i in range(100):
    writer.add_scalars('run_14h', {'xsinx':i*np.sin(i/r),
                                   'xcosx':i*np.cos(i/r),
                                   'tanx': np.tan(i/r)}, i)
writer.close()
# This call adds three values to the same scalar plot with the tag
# 'run_14h' in TensorBoard's scalar section.
```

`add_histogram(tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_histogram)

Add histogram to summary.

Parameters

* **tag** (*string*) – Data identifier
* **values** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")*,* *numpy.array**, or* *string/blobname*) – Values to build histogram
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **bins** (*string*) – One of {'tensorflow', 'auto', 'fd', …}. This determines how the bins are made. You can find other options in: <https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html>
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Examples:

```
from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
for i in range(10):
    x = np.random.random(1000)
    writer.add_histogram('distribution centers', x + i, i)
writer.close()
```

`add_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_image)

Add image data to summary.

Note that this requires the `pillow` package.

Parameters

* **tag** (*string*) – Data identifier
* **img\_tensor** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")*,* *numpy.array**, or* *string/blobname*) – Image data
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Shape:

img\_tensor: Default is (3, H, W). You can use `torchvision.utils.make_grid()` to convert a batch of tensors into 3xHxW format or call `add_images` and let us do the job. A tensor with shape (1, H, W), (H, W), or (H, W, 3) is also suitable as long as the corresponding `dataformats` argument is passed, e.g. `CHW`, `HWC`, `HW`.
Examples:

```
from torch.utils.tensorboard import SummaryWriter
import numpy as np
img = np.zeros((3, 100, 100))
img[0] = np.arange(0, 10000).reshape(100, 100) / 10000
img[1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000

img_HWC = np.zeros((100, 100, 3))
img_HWC[:, :, 0] = np.arange(0, 10000).reshape(100, 100) / 10000
img_HWC[:, :, 1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000

writer = SummaryWriter()
writer.add_image('my_image', img, 0)

# If you have non-default dimension setting, set the dataformats argument.
writer.add_image('my_image_HWC', img_HWC, 0, dataformats='HWC')
writer.close()
```

`add_images(tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_images)

Add batched image data to summary.

Note that this requires the `pillow` package.

Parameters

* **tag** (*string*) – Data identifier
* **img\_tensor** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")*,* *numpy.array**, or* *string/blobname*) – Image data
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event
* **dataformats** (*string*) – Image data format specification of the form NCHW, NHWC, CHW, HWC, HW, WH, etc.

Shape:

img\_tensor: Default is (N, 3, H, W). If `dataformats` is specified, other shapes will be accepted, e.g. NCHW or NHWC.

Examples:

```
from torch.utils.tensorboard import SummaryWriter
import numpy as np

img_batch = np.zeros((16, 3, 100, 100))
for i in range(16):
    img_batch[i, 0] = np.arange(0, 10000).reshape(100, 100) / 10000 / 16 * i
    img_batch[i, 1] = (1 - np.arange(0, 10000).reshape(100, 100) / 10000) / 16 * i

writer = SummaryWriter()
writer.add_images('my_image_batch', img_batch, 0)
writer.close()
```

`add_figure(tag, figure, global_step=None, close=True, walltime=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_figure)

Render matplotlib figure into an image and add it to summary.

Note that this requires the `matplotlib` package.

Parameters

* **tag** (*string*) – Data identifier
* **figure** (*matplotlib.pyplot.figure*) – Figure or a list of figures
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **close** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Flag to automatically close the figure
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

`add_video(tag, vid_tensor, global_step=None, fps=4, walltime=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_video)

Add video data to summary.

Note that this requires the `moviepy` package.
Parameters

* **tag** (*string*) – Data identifier
* **vid\_tensor** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")) – Video data
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **fps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Frames per second
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Shape:

vid\_tensor: (N, T, C, H, W). The values should lie in [0, 255] for type `uint8` or [0, 1] for type `float`.

`add_audio(tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_audio)

Add audio data to summary.

Parameters

* **tag** (*string*) – Data identifier
* **snd\_tensor** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")) – Sound data
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **sample\_rate** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – sample rate in Hz
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Shape:

snd\_tensor: (1, L). The values should lie in [-1, 1].

`add_text(tag, text_string, global_step=None, walltime=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_text)

Add text data to summary.

Parameters

* **tag** (*string*) – Data identifier
* **text\_string** (*string*) – String to save
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Examples:

```
writer.add_text('lstm', 'This is an lstm', 0)
writer.add_text('rnn', 'This is an rnn', 10)
```

`add_graph(model, input_to_model=None, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_graph)

Add graph data to summary.

Parameters

* **model** ([torch.nn.Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – Model to draw.
* **input\_to\_model** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor") *or* *list of torch.Tensor*) – A variable or a tuple of variables to be fed.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to print graph structure in console.

`add_embedding(mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_embedding)

Add embedding projector data to summary.
Parameters

* **mat** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor") *or* *numpy.array*) – A matrix in which each row is the feature vector of a data point
* **metadata** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – A list of labels; each element will be converted to a string
* **label\_img** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")) – Images corresponding to each data point
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **tag** (*string*) – Name for the embedding

Shape:

mat: (N, D), where N is the number of data points and D is the feature dimension

label\_img: (N, C, H, W)

Examples:

```
import keyword
import torch
meta = []
while len(meta)<100:
    meta = meta+keyword.kwlist # get some strings
meta = meta[:100]

for i, v in enumerate(meta):
    meta[i] = v+str(i)

label_img = torch.rand(100, 3, 10, 32)
for i in range(100):
    label_img[i]*=i/100.0

writer.add_embedding(torch.randn(100, 5), metadata=meta, label_img=label_img)
writer.add_embedding(torch.randn(100, 5), label_img=label_img)
writer.add_embedding(torch.randn(100, 5), metadata=meta)
```

`add_pr_curve(tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_pr_curve)

Add a precision-recall curve to summary.

Plotting a precision-recall curve lets you understand your model's performance under different threshold settings. With this function, you provide the ground truth labeling (T/F) and prediction confidence (usually the output of your model) for each target. The TensorBoard UI will let you choose the threshold interactively.

Parameters

* **tag** (*string*) – Data identifier
* **labels** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")*,* *numpy.array**, or* *string/blobname*) – Ground truth data. Binary label for each element.
* **predictions** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")*,* *numpy.array**, or* *string/blobname*) – The probability that an element is classified as true. Values should be in [0, 1]
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **num\_thresholds** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of thresholds used to draw the curve.
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Examples:

```
from torch.utils.tensorboard import SummaryWriter
import numpy as np
labels = np.random.randint(2, size=100)  # binary label
predictions = np.random.rand(100)
writer = SummaryWriter()
writer.add_pr_curve('pr_curve', labels, predictions, 0)
writer.close()
```

`add_custom_scalars(layout)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_custom_scalars)

Create a special chart by collecting chart tags in 'scalars'. Note that this function can only be called once for each SummaryWriter() object. Because it only provides metadata to tensorboard, the function can be called before or after the training loop.

Parameters

**layout** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – {categoryName: *charts*}, where *charts* is also a dictionary {chartName: *ListOfProperties*}.
The first element in *ListOfProperties* is the chart's type (one of **Multiline** or **Margin**) and the second element should be a list containing the tags you have used in the add\_scalar function, which will be collected into the new chart.

Examples:

```
layout = {'Taiwan':{'twse':['Multiline',['twse/0050', 'twse/2330']]},
             'USA':{ 'dow':['Margin',   ['dow/aaa', 'dow/bbb', 'dow/ccc']],
                  'nasdaq':['Margin',   ['nasdaq/aaa', 'nasdaq/bbb', 'nasdaq/ccc']]}}

writer.add_custom_scalars(layout)
```

`add_mesh(tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_mesh)

Add meshes or 3D point clouds to TensorBoard. The visualization is based on Three.js, so it allows users to interact with the rendered object. Besides the basic definitions such as vertices and faces, users can further provide camera parameters, lighting conditions, etc. Please check <https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene> for advanced usage.

Parameters

* **tag** (*string*) – Data identifier
* **vertices** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")) – List of the 3D coordinates of vertices.
* **colors** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")) – Colors for each vertex
* **faces** ([torch.Tensor](tensors#torch.Tensor "torch.Tensor")) – Indices of vertices within each triangle. (Optional)
* **config\_dict** – Dictionary with ThreeJS classes names and configuration.
* **global\_step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Global step value to record
* **walltime** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Optional override of the default walltime (time.time()), in seconds after the epoch of the event

Shape:

vertices: (B, N, 3). (batch, number\_of\_vertices, channels)

colors: (B, N, 3). The values should lie in [0, 255] for type `uint8` or [0, 1] for type `float`.

faces: (B, N, 3). The values should lie in [0, number\_of\_vertices] for type `uint8`.

Examples:

```
from torch.utils.tensorboard import SummaryWriter
vertices_tensor = torch.as_tensor([
    [1, 1, 1],
    [-1, -1, 1],
    [1, -1, -1],
    [-1, 1, -1],
], dtype=torch.float).unsqueeze(0)
colors_tensor = torch.as_tensor([
    [255, 0, 0],
    [0, 255, 0],
    [0, 0, 255],
    [255, 0, 255],
], dtype=torch.int).unsqueeze(0)
faces_tensor = torch.as_tensor([
    [0, 2, 3],
    [0, 3, 1],
    [0, 1, 2],
    [1, 3, 2],
], dtype=torch.int).unsqueeze(0)

writer = SummaryWriter()
writer.add_mesh('my_mesh', vertices=vertices_tensor, colors=colors_tensor, faces=faces_tensor)

writer.close()
```

`add_hparams(hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_hparams)

Add a set of hyperparameters to be compared in TensorBoard.

Parameters

* **hparam\_dict** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – Each key-value pair in the dictionary is the name of the hyperparameter and its corresponding value. The type of the value can be one of `bool`, `string`, `float`, `int`, or `None`.
* **metric\_dict** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – Each key-value pair in the dictionary is the name of the metric and its corresponding value. Note that the key used here should be unique in the tensorboard record.
Otherwise, the value you added by `add_scalar` will be displayed in the hparam plugin, which in most cases is unwanted.
* **hparam\_domain\_discrete** – (Optional[Dict[str, List[Any]]]) A dictionary that contains names of the hyperparameters and all discrete values they can hold
* **run\_name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – Name of the run, to be included as part of the logdir. If unspecified, will use current timestamp.

Examples:

```
from torch.utils.tensorboard import SummaryWriter
with SummaryWriter() as w:
    for i in range(5):
        w.add_hparams({'lr': 0.1*i, 'bsize': i},
                      {'hparam/accuracy': 10*i, 'hparam/loss': 10*i})
```

`flush()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.flush)

Flushes the event file to disk. Call this method to make sure that all pending events have been written to disk.

`close()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.close)
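As a minimal usage sketch (the log directory name and loop are illustrative), `flush()` can be called periodically during a long run, and `close()` once at the end:

```
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/flush_demo")  # illustrative log_dir
for step in range(1000):
    writer.add_scalar("loss", 1.0 / (step + 1), step)
    if step % 100 == 0:
        # Make sure pending events are on disk before a long pause.
        writer.flush()
writer.close()  # flushes and releases the event file
```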
pytorch torch.nn.functional

torch.nn.functional
===================

Convolution functions
---------------------

### conv1d

`torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor`

Applies a 1D convolution over an input signal composed of several input planes.

This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

See [`Conv1d`](generated/torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d") for details and output shape.

Note

In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information.

Parameters

* **input** – input tensor of shape (minibatch, in\_channels, iW)
* **weight** – filters of shape (out\_channels, in\_channels / groups, kW)
* **bias** – optional bias of shape (out\_channels). Default: `None`
* **stride** – the stride of the convolving kernel. Can be a single number or a one-element tuple `(sW,)`. Default: 1
* **padding** – implicit paddings on both sides of the input. Can be a single number or a one-element tuple `(padW,)`. Default: 0
* **dilation** – the spacing between kernel elements. Can be a single number or a one-element tuple `(dW,)`. Default: 1
* **groups** – split input into groups; in\_channels should be divisible by the number of groups. Default: 1

Examples:

```
>>> filters = torch.randn(33, 16, 3)
>>> inputs = torch.randn(20, 16, 50)
>>> F.conv1d(inputs, filters)
```

### conv2d

`torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor`

Applies a 2D convolution over an input image composed of several input planes.

This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

See [`Conv2d`](generated/torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") for details and output shape.

Note

In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information.

Parameters

* **input** – input tensor of shape (minibatch, in\_channels, iH, iW)
* **weight** – filters of shape (out\_channels, in\_channels / groups, kH, kW)
* **bias** – optional bias tensor of shape (out\_channels). Default: `None`
* **stride** – the stride of the convolving kernel. Can be a single number or a tuple `(sH, sW)`. Default: 1
* **padding** – implicit paddings on both sides of the input. Can be a single number or a tuple `(padH, padW)`. Default: 0
* **dilation** – the spacing between kernel elements. Can be a single number or a tuple `(dH, dW)`.
Default: 1
* **groups** – split input into groups; in\_channels should be divisible by the number of groups. Default: 1

Examples:

```
>>> # With square kernels and equal stride
>>> filters = torch.randn(8,4,3,3)
>>> inputs = torch.randn(1,4,5,5)
>>> F.conv2d(inputs, filters, padding=1)
```

### conv3d

`torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor`

Applies a 3D convolution over an input image composed of several input planes.

This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

See [`Conv3d`](generated/torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d") for details and output shape.

Note

In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information.

Parameters

* **input** – input tensor of shape (minibatch, in\_channels, iT, iH, iW)
* **weight** – filters of shape (out\_channels, in\_channels / groups, kT, kH, kW)
* **bias** – optional bias tensor of shape (out\_channels). Default: None
* **stride** – the stride of the convolving kernel. Can be a single number or a tuple `(sT, sH, sW)`. Default: 1
* **padding** – implicit paddings on both sides of the input. Can be a single number or a tuple `(padT, padH, padW)`. Default: 0
* **dilation** – the spacing between kernel elements. Can be a single number or a tuple `(dT, dH, dW)`. Default: 1
* **groups** – split input into groups; in\_channels should be divisible by the number of groups. Default: 1

Examples:

```
>>> filters = torch.randn(33, 16, 3, 3, 3)
>>> inputs = torch.randn(20, 16, 50, 10, 20)
>>> F.conv3d(inputs, filters)
```

### conv\_transpose1d

`torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor`

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

See [`ConvTranspose1d`](generated/torch.nn.convtranspose1d#torch.nn.ConvTranspose1d "torch.nn.ConvTranspose1d") for details and output shape.

Note

In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information.

Parameters

* **input** – input tensor of shape (minibatch, in\_channels, iW)
* **weight** – filters of shape (in\_channels, out\_channels / groups, kW)
* **bias** – optional bias of shape (out\_channels). Default: None
* **stride** – the stride of the convolving kernel.
Can be a single number or a tuple `(sW,)`. Default: 1
* **padding** – `dilation * (kernel_size - 1) - padding` zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple `(padW,)`. Default: 0
* **output\_padding** – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple `(out_padW)`. Default: 0
* **groups** – split input into groups; in\_channels should be divisible by the number of groups. Default: 1
* **dilation** – the spacing between kernel elements. Can be a single number or a tuple `(dW,)`. Default: 1

Examples:

```
>>> inputs = torch.randn(20, 16, 50)
>>> weights = torch.randn(16, 33, 5)
>>> F.conv_transpose1d(inputs, weights)
```

### conv\_transpose2d

`torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor`

Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

See [`ConvTranspose2d`](generated/torch.nn.convtranspose2d#torch.nn.ConvTranspose2d "torch.nn.ConvTranspose2d") for details and output shape.

Note

In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information.

Parameters

* **input** – input tensor of shape (minibatch, in\_channels, iH, iW)
* **weight** – filters of shape (in\_channels, out\_channels / groups, kH, kW)
* **bias** – optional bias of shape (out\_channels). Default: None
* **stride** – the stride of the convolving kernel. Can be a single number or a tuple `(sH, sW)`. Default: 1
* **padding** – `dilation * (kernel_size - 1) - padding` zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple `(padH, padW)`. Default: 0
* **output\_padding** – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple `(out_padH, out_padW)`. Default: 0
* **groups** – split input into groups; in\_channels should be divisible by the number of groups. Default: 1
* **dilation** – the spacing between kernel elements. Can be a single number or a tuple `(dH, dW)`. Default: 1

Examples:

```
>>> # With square kernels and equal stride
>>> inputs = torch.randn(1, 4, 5, 5)
>>> weights = torch.randn(4, 8, 3, 3)
>>> F.conv_transpose2d(inputs, weights, padding=1)
```

### conv\_transpose3d

`torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor`

Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

See [`ConvTranspose3d`](generated/torch.nn.convtranspose3d#torch.nn.ConvTranspose3d "torch.nn.ConvTranspose3d") for details and output shape.
Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information. Parameters * **input** – input tensor of shape (minibatch,in\_channels,iT,iH,iW)(\text{minibatch} , \text{in\\_channels} , iT , iH , iW) * **weight** – filters of shape (in\_channels,out\_channelsgroups,kT,kH,kW)(\text{in\\_channels} , \frac{\text{out\\_channels}}{\text{groups}} , kT , kH , kW) * **bias** – optional bias of shape (out\_channels)(\text{out\\_channels}) . Default: None * **stride** – the stride of the convolving kernel. Can be a single number or a tuple `(sT, sH, sW)`. Default: 1 * **padding** – `dilation * (kernel_size - 1) - padding` zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple `(padT, padH, padW)`. Default: 0 * **output\_padding** – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple `(out_padT, out_padH, out_padW)`. Default: 0 * **groups** – split input into groups, in\_channels\text{in\\_channels} should be divisible by the number of groups. Default: 1 * **dilation** – the spacing between kernel elements. Can be a single number or a tuple `(dT, dH, dW)`. Default: 1 Examples: ``` >>> inputs = torch.randn(20, 16, 50, 10, 20) >>> weights = torch.randn(16, 33, 3, 3, 3) >>> F.conv_transpose3d(inputs, weights) ``` ### unfold `torch.nn.functional.unfold(input, kernel_size, dilation=1, padding=0, stride=1)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#unfold) Extracts sliding local blocks from a batched input tensor. Warning Currently, only 4-D input tensors (batched image-like tensors) are supported. Warning More than one element of the unfolded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensor, please clone it first. See [`torch.nn.Unfold`](generated/torch.nn.unfold#torch.nn.Unfold "torch.nn.Unfold") for details ### fold `torch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#fold) Combines an array of sliding local blocks into a large containing tensor. Warning Currently, only 3-D output tensors (unfolded batched image-like tensors) are supported. See [`torch.nn.Fold`](generated/torch.nn.fold#torch.nn.Fold "torch.nn.Fold") for details Pooling functions ----------------- ### avg\_pool1d `torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) → Tensor` Applies a 1D average pooling over an input signal composed of several input planes. See [`AvgPool1d`](generated/torch.nn.avgpool1d#torch.nn.AvgPool1d "torch.nn.AvgPool1d") for details and output shape. Parameters * **input** – input tensor of shape (minibatch,in\_channels,iW)(\text{minibatch} , \text{in\\_channels} , iW) * **kernel\_size** – the size of the window. Can be a single number or a tuple `(kW,)` * **stride** – the stride of the window. Can be a single number or a tuple `(sW,)`. 
Default: `kernel_size`
* **padding** – implicit zero paddings on both sides of the input. Can be a single number or a tuple `(padW,)`. Default: 0
* **ceil\_mode** – when True, will use `ceil` instead of `floor` to compute the output shape. Default: `False`
* **count\_include\_pad** – when True, will include the zero-padding in the averaging calculation. Default: `True`

Examples:

```
>>> # pool of size=3, stride=2
>>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]], dtype=torch.float32)
>>> F.avg_pool1d(input, kernel_size=3, stride=2)
tensor([[[ 2.,  4.,  6.]]])
```

### avg\_pool2d

`torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor`

Applies 2D average-pooling operation in kH \times kW regions by step size sH \times sW steps. The number of output features is equal to the number of input planes.

See [`AvgPool2d`](generated/torch.nn.avgpool2d#torch.nn.AvgPool2d "torch.nn.AvgPool2d") for details and output shape.

Parameters

* **input** – input tensor of shape (\text{minibatch}, \text{in\\_channels}, iH, iW)
* **kernel\_size** – size of the pooling region. Can be a single number or a tuple `(kH, kW)`
* **stride** – stride of the pooling operation. Can be a single number or a tuple `(sH, sW)`. Default: `kernel_size`
* **padding** – implicit zero paddings on both sides of the input. Can be a single number or a tuple `(padH, padW)`. Default: 0
* **ceil\_mode** – when True, will use `ceil` instead of `floor` in the formula to compute the output shape. Default: `False`
* **count\_include\_pad** – when True, will include the zero-padding in the averaging calculation. Default: `True`
* **divisor\_override** – if specified, it will be used as the divisor; otherwise the size of the pooling region will be used. Default: None

### avg\_pool3d

`torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor`

Applies 3D average-pooling operation in kT \times kH \times kW regions by step size sT \times sH \times sW steps. The number of output features is equal to \lfloor\frac{\text{input planes}}{sT}\rfloor.

See [`AvgPool3d`](generated/torch.nn.avgpool3d#torch.nn.AvgPool3d "torch.nn.AvgPool3d") for details and output shape.

Parameters

* **input** – input tensor of shape (\text{minibatch}, \text{in\\_channels}, iT, iH, iW)
* **kernel\_size** – size of the pooling region. Can be a single number or a tuple `(kT, kH, kW)`
* **stride** – stride of the pooling operation. Can be a single number or a tuple `(sT, sH, sW)`. Default: `kernel_size`
* **padding** – implicit zero paddings on both sides of the input. Can be a single number or a tuple `(padT, padH, padW)`. Default: 0
* **ceil\_mode** – when True, will use `ceil` instead of `floor` in the formula to compute the output shape. Default: `False`
* **count\_include\_pad** – when True, will include the zero-padding in the averaging calculation. Default: `True`
* **divisor\_override** – if specified, it will be used as the divisor; otherwise the size of the pooling region will be used. Default: None

### max\_pool1d

`torch.nn.functional.max_pool1d(*args, **kwargs)`

Applies a 1D max pooling over an input signal composed of several input planes.

See [`MaxPool1d`](generated/torch.nn.maxpool1d#torch.nn.MaxPool1d "torch.nn.MaxPool1d") for details.
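The upstream reference ships no inline example for `max_pool1d`; the following sketch is added here for illustration only (the input values are invented, not from the original docs):

```
>>> import torch
>>> import torch.nn.functional as F
>>> # max over windows of size=3, stride=2, on a (minibatch, channels, width) input
>>> input = torch.tensor([[[1., 2., 3., 4., 5., 6., 7.]]])
>>> F.max_pool1d(input, kernel_size=3, stride=2)
tensor([[[3., 5., 7.]]])
```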
### max\_pool2d `torch.nn.functional.max_pool2d(*args, **kwargs)` Applies a 2D max pooling over an input signal composed of several input planes. See [`MaxPool2d`](generated/torch.nn.maxpool2d#torch.nn.MaxPool2d "torch.nn.MaxPool2d") for details. ### max\_pool3d `torch.nn.functional.max_pool3d(*args, **kwargs)` Applies a 3D max pooling over an input signal composed of several input planes. See [`MaxPool3d`](generated/torch.nn.maxpool3d#torch.nn.MaxPool3d "torch.nn.MaxPool3d") for details. ### max\_unpool1d `torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#max_unpool1d) Computes a partial inverse of `MaxPool1d`. See [`MaxUnpool1d`](generated/torch.nn.maxunpool1d#torch.nn.MaxUnpool1d "torch.nn.MaxUnpool1d") for details. ### max\_unpool2d `torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#max_unpool2d) Computes a partial inverse of `MaxPool2d`. See [`MaxUnpool2d`](generated/torch.nn.maxunpool2d#torch.nn.MaxUnpool2d "torch.nn.MaxUnpool2d") for details. ### max\_unpool3d `torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#max_unpool3d) Computes a partial inverse of `MaxPool3d`. See [`MaxUnpool3d`](generated/torch.nn.maxunpool3d#torch.nn.MaxUnpool3d "torch.nn.MaxUnpool3d") for details. ### lp\_pool1d `torch.nn.functional.lp_pool1d(input, norm_type, kernel_size, stride=None, ceil_mode=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#lp_pool1d) Applies a 1D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of `p` is zero, the gradient is set to zero as well. See [`LPPool1d`](generated/torch.nn.lppool1d#torch.nn.LPPool1d "torch.nn.LPPool1d") for details. ### lp\_pool2d `torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#lp_pool2d) Applies a 2D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of `p` is zero, the gradient is set to zero as well. See [`LPPool2d`](generated/torch.nn.lppool2d#torch.nn.LPPool2d "torch.nn.LPPool2d") for details. ### adaptive\_max\_pool1d `torch.nn.functional.adaptive_max_pool1d(*args, **kwargs)` Applies a 1D adaptive max pooling over an input signal composed of several input planes. See [`AdaptiveMaxPool1d`](generated/torch.nn.adaptivemaxpool1d#torch.nn.AdaptiveMaxPool1d "torch.nn.AdaptiveMaxPool1d") for details and output shape. Parameters * **output\_size** – the target output size (single integer) * **return\_indices** – whether to return pooling indices. Default: `False` ### adaptive\_max\_pool2d `torch.nn.functional.adaptive_max_pool2d(*args, **kwargs)` Applies a 2D adaptive max pooling over an input signal composed of several input planes. See [`AdaptiveMaxPool2d`](generated/torch.nn.adaptivemaxpool2d#torch.nn.AdaptiveMaxPool2d "torch.nn.AdaptiveMaxPool2d") for details and output shape. Parameters * **output\_size** – the target output size (single integer or double-integer tuple) * **return\_indices** – whether to return pooling indices. 
Default: `False` ### adaptive\_max\_pool3d `torch.nn.functional.adaptive_max_pool3d(*args, **kwargs)` Applies a 3D adaptive max pooling over an input signal composed of several input planes. See [`AdaptiveMaxPool3d`](generated/torch.nn.adaptivemaxpool3d#torch.nn.AdaptiveMaxPool3d "torch.nn.AdaptiveMaxPool3d") for details and output shape. Parameters * **output\_size** – the target output size (single integer or triple-integer tuple) * **return\_indices** – whether to return pooling indices. Default: `False` ### adaptive\_avg\_pool1d `torch.nn.functional.adaptive_avg_pool1d(input, output_size) → Tensor` Applies a 1D adaptive average pooling over an input signal composed of several input planes. See [`AdaptiveAvgPool1d`](generated/torch.nn.adaptiveavgpool1d#torch.nn.AdaptiveAvgPool1d "torch.nn.AdaptiveAvgPool1d") for details and output shape. Parameters **output\_size** – the target output size (single integer) ### adaptive\_avg\_pool2d `torch.nn.functional.adaptive_avg_pool2d(input, output_size)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#adaptive_avg_pool2d) Applies a 2D adaptive average pooling over an input signal composed of several input planes. See [`AdaptiveAvgPool2d`](generated/torch.nn.adaptiveavgpool2d#torch.nn.AdaptiveAvgPool2d "torch.nn.AdaptiveAvgPool2d") for details and output shape. Parameters **output\_size** – the target output size (single integer or double-integer tuple) ### adaptive\_avg\_pool3d `torch.nn.functional.adaptive_avg_pool3d(input, output_size)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#adaptive_avg_pool3d) Applies a 3D adaptive average pooling over an input signal composed of several input planes. See [`AdaptiveAvgPool3d`](generated/torch.nn.adaptiveavgpool3d#torch.nn.AdaptiveAvgPool3d "torch.nn.AdaptiveAvgPool3d") for details and output shape. Parameters **output\_size** – the target output size (single integer or triple-integer tuple) Non-linear activation functions ------------------------------- ### threshold `torch.nn.functional.threshold(input, threshold, value, inplace=False)` Thresholds each element of the input Tensor. See [`Threshold`](generated/torch.nn.threshold#torch.nn.Threshold "torch.nn.Threshold") for more details. `torch.nn.functional.threshold_(input, threshold, value) → Tensor` In-place version of [`threshold()`](#torch.nn.functional.threshold "torch.nn.functional.threshold"). ### relu `torch.nn.functional.relu(input, inplace=False) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#relu) Applies the rectified linear unit function element-wise. See [`ReLU`](generated/torch.nn.relu#torch.nn.ReLU "torch.nn.ReLU") for more details. `torch.nn.functional.relu_(input) → Tensor` In-place version of [`relu()`](#torch.nn.functional.relu "torch.nn.functional.relu"). ### hardtanh `torch.nn.functional.hardtanh(input, min_val=-1., max_val=1., inplace=False) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#hardtanh) Applies the HardTanh function element-wise. See [`Hardtanh`](generated/torch.nn.hardtanh#torch.nn.Hardtanh "torch.nn.Hardtanh") for more details. `torch.nn.functional.hardtanh_(input, min_val=-1., max_val=1.) → Tensor` In-place version of [`hardtanh()`](#torch.nn.functional.hardtanh "torch.nn.functional.hardtanh"). 
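As a quick illustrative sketch of the clamping behaviour described above (values invented for this example, not from the upstream docs):

```
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([-2.0, -0.5, 0.5, 2.0])
>>> F.hardtanh(x)  # default range is [min_val, max_val] = [-1., 1.]
tensor([-1.0000, -0.5000,  0.5000,  1.0000])
```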
### hardswish

`torch.nn.functional.hardswish(input, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#hardswish)

Applies the hardswish function, element-wise, as described in the paper: [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244).

\text{Hardswish}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ x & \text{if~} x \ge +3, \\ x \cdot (x + 3) /6 & \text{otherwise} \end{cases}

See [`Hardswish`](generated/torch.nn.hardswish#torch.nn.Hardswish "torch.nn.Hardswish") for more details.

### relu6

`torch.nn.functional.relu6(input, inplace=False) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#relu6)

Applies the element-wise function \text{ReLU6}(x) = \min(\max(0,x), 6).

See [`ReLU6`](generated/torch.nn.relu6#torch.nn.ReLU6 "torch.nn.ReLU6") for more details.

### elu

`torch.nn.functional.elu(input, alpha=1.0, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#elu)

Applies element-wise, \text{ELU}(x) = \max(0,x) + \min(0, \alpha \* (\exp(x) - 1)).

See [`ELU`](generated/torch.nn.elu#torch.nn.ELU "torch.nn.ELU") for more details.

`torch.nn.functional.elu_(input, alpha=1.) → Tensor`

In-place version of [`elu()`](#torch.nn.functional.elu "torch.nn.functional.elu").

### selu

`torch.nn.functional.selu(input, inplace=False) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#selu)

Applies element-wise, \text{SELU}(x) = scale \* (\max(0,x) + \min(0, \alpha \* (\exp(x) - 1))), with \alpha=1.6732632423543772848170429916717 and scale=1.0507009873554804934193349852946.

See [`SELU`](generated/torch.nn.selu#torch.nn.SELU "torch.nn.SELU") for more details.

### celu

`torch.nn.functional.celu(input, alpha=1., inplace=False) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#celu)

Applies element-wise, \text{CELU}(x) = \max(0,x) + \min(0, \alpha \* (\exp(x/\alpha) - 1)).

See [`CELU`](generated/torch.nn.celu#torch.nn.CELU "torch.nn.CELU") for more details.

### leaky\_relu

`torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#leaky_relu)

Applies element-wise, \text{LeakyReLU}(x) = \max(0, x) + \text{negative\\_slope} \* \min(0, x)

See [`LeakyReLU`](generated/torch.nn.leakyrelu#torch.nn.LeakyReLU "torch.nn.LeakyReLU") for more details.

`torch.nn.functional.leaky_relu_(input, negative_slope=0.01) → Tensor`

In-place version of [`leaky_relu()`](#torch.nn.functional.leaky_relu "torch.nn.functional.leaky_relu").

### prelu

`torch.nn.functional.prelu(input, weight) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#prelu)

Applies element-wise the function \text{PReLU}(x) = \max(0,x) + \text{weight} \* \min(0,x) where weight is a learnable parameter.

See [`PReLU`](generated/torch.nn.prelu#torch.nn.PReLU "torch.nn.PReLU") for more details.
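A minimal sketch of `prelu` with a single shared slope (the input and slope values here are invented for illustration, not part of the upstream reference):

```
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([-1.0, 0.0, 1.0])
>>> weight = torch.tensor([0.25])  # one learnable slope shared across all channels
>>> F.prelu(x, weight)             # negative inputs are scaled by 0.25
tensor([-0.2500,  0.0000,  1.0000])
```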
### rrelu `torch.nn.functional.rrelu(input, lower=1./8, upper=1./3, training=False, inplace=False) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#rrelu) Randomized leaky ReLU. See [`RReLU`](generated/torch.nn.rrelu#torch.nn.RReLU "torch.nn.RReLU") for more details. `torch.nn.functional.rrelu_(input, lower=1./8, upper=1./3, training=False) → Tensor` In-place version of [`rrelu()`](#torch.nn.functional.rrelu "torch.nn.functional.rrelu"). ### glu `torch.nn.functional.glu(input, dim=-1) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#glu) The gated linear unit. Computes: GLU(a,b)=a⊗σ(b)\text{GLU}(a, b) = a \otimes \sigma(b) where `input` is split in half along `dim` to form `a` and `b`, σ\sigma is the sigmoid function and ⊗\otimes is the element-wise product between matrices. See [Language Modeling with Gated Convolutional Networks](https://arxiv.org/abs/1612.08083). Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input tensor * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension on which to split the input. Default: -1 ### gelu `torch.nn.functional.gelu(input) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#gelu) Applies element-wise the function GELU(x)=x∗Φ(x)\text{GELU}(x) = x \* \Phi(x) where Φ(x)\Phi(x) is the Cumulative Distribution Function for Gaussian Distribution. See [Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415). ### logsigmoid `torch.nn.functional.logsigmoid(input) → Tensor` Applies element-wise LogSigmoid(xi)=log⁡(11+exp⁡(−xi))\text{LogSigmoid}(x\_i) = \log \left(\frac{1}{1 + \exp(-x\_i)}\right) See [`LogSigmoid`](generated/torch.nn.logsigmoid#torch.nn.LogSigmoid "torch.nn.LogSigmoid") for more details. ### hardshrink `torch.nn.functional.hardshrink(input, lambd=0.5) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#hardshrink) Applies the hard shrinkage function element-wise See [`Hardshrink`](generated/torch.nn.hardshrink#torch.nn.Hardshrink "torch.nn.Hardshrink") for more details. ### tanhshrink `torch.nn.functional.tanhshrink(input) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#tanhshrink) Applies element-wise, Tanhshrink(x)=x−Tanh(x)\text{Tanhshrink}(x) = x - \text{Tanh}(x) See [`Tanhshrink`](generated/torch.nn.tanhshrink#torch.nn.Tanhshrink "torch.nn.Tanhshrink") for more details. ### softsign `torch.nn.functional.softsign(input) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#softsign) Applies element-wise, the function SoftSign(x)=x1+∣x∣\text{SoftSign}(x) = \frac{x}{1 + |x|} See [`Softsign`](generated/torch.nn.softsign#torch.nn.Softsign "torch.nn.Softsign") for more details. ### softplus `torch.nn.functional.softplus(input, beta=1, threshold=20) → Tensor` Applies element-wise, the function Softplus(x)=1β∗log⁡(1+exp⁡(β∗x))\text{Softplus}(x) = \frac{1}{\beta} \* \log(1 + \exp(\beta \* x)) . For numerical stability the implementation reverts to the linear function when input×β>thresholdinput \times \beta > threshold . See [`Softplus`](generated/torch.nn.softplus#torch.nn.Softplus "torch.nn.Softplus") for more details. ### softmin `torch.nn.functional.softmin(input, dim=None, _stacklevel=3, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#softmin) Applies a softmin function. 
Note that \text{Softmin}(x) = \text{Softmax}(-x). See the softmax definition for the mathematical formula.

See [`Softmin`](generated/torch.nn.softmin#torch.nn.Softmin "torch.nn.Softmin") for more details.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – A dimension along which softmin will be computed (so every slice along dim will sum to 1).
* **dtype** (`torch.dtype`, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None.

### softmax

`torch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#softmax)

Applies a softmax function.

Softmax is defined as:

\text{Softmax}(x\_{i}) = \frac{\exp(x\_i)}{\sum\_j \exp(x\_j)}

It is applied to all slices along dim, and will re-scale them so that the elements lie in the range `[0, 1]` and sum to 1.

See [`Softmax`](generated/torch.nn.softmax#torch.nn.Softmax "torch.nn.Softmax") for more details.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – A dimension along which softmax will be computed.
* **dtype** (`torch.dtype`, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None.

Note

This function doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use log\_softmax instead (it’s faster and has better numerical properties).

### softshrink

`torch.nn.functional.softshrink(input, lambd=0.5) → Tensor`

Applies the soft shrinkage function element-wise.

See [`Softshrink`](generated/torch.nn.softshrink#torch.nn.Softshrink "torch.nn.Softshrink") for more details.

### gumbel\_softmax

`torch.nn.functional.gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#gumbel_softmax)

Samples from the Gumbel-Softmax distribution ([Link 1](https://arxiv.org/abs/1611.00712) [Link 2](https://arxiv.org/abs/1611.01144)) and optionally discretizes.

Parameters

* **logits** – `[…, num_features]` unnormalized log probabilities
* **tau** – non-negative scalar temperature
* **hard** – if `True`, the returned samples will be discretized as one-hot vectors, but will be differentiated as if it is the soft sample in autograd
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – A dimension along which softmax will be computed. Default: -1.

Returns

Sampled tensor of same shape as `logits` from the Gumbel-Softmax distribution. If `hard=True`, the returned samples will be one-hot, otherwise they will be probability distributions that sum to 1 across `dim`.

Note

This function is here for legacy reasons and may be removed from nn.Functional in the future.
Note

The main trick for `hard` is to do `y_hard - y_soft.detach() + y_soft`. It achieves two things:

- makes the output value exactly one-hot (since we add then subtract the y\_soft value)
- makes the gradient equal to the y\_soft gradient (since we strip all other gradients)

Examples:

```
>>> logits = torch.randn(20, 32)
>>> # Sample soft categorical using reparametrization trick:
>>> F.gumbel_softmax(logits, tau=1, hard=False)
>>> # Sample hard categorical using "Straight-through" trick:
>>> F.gumbel_softmax(logits, tau=1, hard=True)
```

### log\_softmax

`torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#log_softmax)

Applies a softmax followed by a logarithm.

While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.

See [`LogSoftmax`](generated/torch.nn.logsoftmax#torch.nn.LogSoftmax "torch.nn.LogSoftmax") for more details.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – A dimension along which log\_softmax will be computed.
* **dtype** (`torch.dtype`, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None.

### tanh

`torch.nn.functional.tanh(input) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#tanh)

Applies element-wise, \text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)}

See [`Tanh`](generated/torch.nn.tanh#torch.nn.Tanh "torch.nn.Tanh") for more details.

### sigmoid

`torch.nn.functional.sigmoid(input) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#sigmoid)

Applies the element-wise function \text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)}

See [`Sigmoid`](generated/torch.nn.sigmoid#torch.nn.Sigmoid "torch.nn.Sigmoid") for more details.

### hardsigmoid

`torch.nn.functional.hardsigmoid(input) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#hardsigmoid)

Applies the element-wise function

\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ 1 & \text{if~} x \ge +3, \\ x / 6 + 1 / 2 & \text{otherwise} \end{cases}

Parameters

**inplace** – If set to `True`, will do this operation in-place. Default: `False`

See [`Hardsigmoid`](generated/torch.nn.hardsigmoid#torch.nn.Hardsigmoid "torch.nn.Hardsigmoid") for more details.

### silu

`torch.nn.functional.silu(input, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#silu)

Applies the silu function, element-wise.
\text{silu}(x) = x \* \sigma(x), \text{where } \sigma(x) \text{ is the logistic sigmoid.}

Note

See [Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415) where the SiLU (Sigmoid Linear Unit) was originally coined, and see [Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning](https://arxiv.org/abs/1702.03118) and [Swish: a Self-Gated Activation Function](https://arxiv.org/abs/1710.05941v1) where the SiLU was experimented with later.

See [`SiLU`](generated/torch.nn.silu#torch.nn.SiLU "torch.nn.SiLU") for more details.

Normalization functions
-----------------------

### batch\_norm

`torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#batch_norm)

Applies Batch Normalization for each channel across a batch of data.

See [`BatchNorm1d`](generated/torch.nn.batchnorm1d#torch.nn.BatchNorm1d "torch.nn.BatchNorm1d"), [`BatchNorm2d`](generated/torch.nn.batchnorm2d#torch.nn.BatchNorm2d "torch.nn.BatchNorm2d"), [`BatchNorm3d`](generated/torch.nn.batchnorm3d#torch.nn.BatchNorm3d "torch.nn.BatchNorm3d") for details.

### instance\_norm

`torch.nn.functional.instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#instance_norm)

Applies Instance Normalization for each channel in each data sample in a batch.

See [`InstanceNorm1d`](generated/torch.nn.instancenorm1d#torch.nn.InstanceNorm1d "torch.nn.InstanceNorm1d"), [`InstanceNorm2d`](generated/torch.nn.instancenorm2d#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d"), [`InstanceNorm3d`](generated/torch.nn.instancenorm3d#torch.nn.InstanceNorm3d "torch.nn.InstanceNorm3d") for details.

### layer\_norm

`torch.nn.functional.layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#layer_norm)

Applies Layer Normalization over the last certain number of dimensions.

See [`LayerNorm`](generated/torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") for details.

### local\_response\_norm

`torch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#local_response_norm)

Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels.

See [`LocalResponseNorm`](generated/torch.nn.localresponsenorm#torch.nn.LocalResponseNorm "torch.nn.LocalResponseNorm") for details.

### normalize

`torch.nn.functional.normalize(input, p=2, dim=1, eps=1e-12, out=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#normalize)

Performs L\_p normalization of inputs over specified dimension.

For a tensor `input` of sizes (n\_0, ..., n\_{dim}, ..., n\_k), each n\_{dim}-element vector v along dimension `dim` is transformed as

v = \frac{v}{\max(\lVert v \rVert\_p, \epsilon)}.

With the default arguments it uses the Euclidean norm over vectors along dimension 1 for normalization.
Parameters * **input** – input tensor of any shape * **p** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the exponent value in the norm formulation. Default: 2 * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to reduce. Default: 1 * **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – small value to avoid division by zero. Default: 1e-12 * **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. If `out` is used, this operation won’t be differentiable. Linear functions ---------------- ### linear `torch.nn.functional.linear(input, weight, bias=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#linear) Applies a linear transformation to the incoming data: y=xAT+by = xA^T + b . This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere). Shape: * Input: (N,∗,in\_features)(N, \*, in\\_features) N is the batch size, `*` means any number of additional dimensions * Weight: (out\_features,in\_features)(out\\_features, in\\_features) * Bias: (out\_features)(out\\_features) * Output: (N,∗,out\_features)(N, \*, out\\_features) ### bilinear `torch.nn.functional.bilinear(input1, input2, weight, bias=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#bilinear) Applies a bilinear transformation to the incoming data: y=x1TAx2+by = x\_1^T A x\_2 + b Shape: * input1: (N,∗,Hin1)(N, \*, H\_{in1}) where Hin1=in1\_featuresH\_{in1}=\text{in1\\_features} and ∗\* means any number of additional dimensions. All but the last dimension of the inputs should be the same. * input2: (N,∗,Hin2)(N, \*, H\_{in2}) where Hin2=in2\_featuresH\_{in2}=\text{in2\\_features} * weight: (out\_features,in1\_features,in2\_features)(\text{out\\_features}, \text{in1\\_features}, \text{in2\\_features}) * bias: (out\_features)(\text{out\\_features}) * output: (N,∗,Hout)(N, \*, H\_{out}) where Hout=out\_featuresH\_{out}=\text{out\\_features} and all but the last dimension are the same shape as the input. Dropout functions ----------------- ### dropout `torch.nn.functional.dropout(input, p=0.5, training=True, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#dropout) During training, randomly zeroes some of the elements of the input tensor with probability `p` using samples from a Bernoulli distribution. See [`Dropout`](generated/torch.nn.dropout#torch.nn.Dropout "torch.nn.Dropout") for details. Parameters * **p** – probability of an element to be zeroed. Default: 0.5 * **training** – apply dropout if is `True`. Default: `True` * **inplace** – If set to `True`, will do this operation in-place. Default: `False` ### alpha\_dropout `torch.nn.functional.alpha_dropout(input, p=0.5, training=False, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#alpha_dropout) Applies alpha dropout to the input. See [`AlphaDropout`](generated/torch.nn.alphadropout#torch.nn.AlphaDropout "torch.nn.AlphaDropout") for details. ### feature\_alpha\_dropout `torch.nn.functional.feature_alpha_dropout(input, p=0.5, training=False, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#feature_alpha_dropout) Randomly masks out entire channels (a channel is a feature map, e.g. 
the j-th channel of the i-th sample in the batched input is a tensor \text{input}[i, j]) of the input tensor. Instead of setting activations to zero, as in regular Dropout, the activations are set to the negative saturation value of the SELU activation function.

Each element will be masked independently on every forward call with probability `p` using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit variance.

See `FeatureAlphaDropout` for details.

Parameters

* **p** – dropout probability of a channel to be zeroed. Default: 0.5
* **training** – apply dropout if `True`. Default: `True`
* **inplace** – If set to `True`, will do this operation in-place. Default: `False`

### dropout2d

`torch.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#dropout2d)

Randomly zero out entire channels (a channel is a 2D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor \text{input}[i, j]) of the input tensor. Each channel will be zeroed out independently on every forward call with probability `p` using samples from a Bernoulli distribution.

See [`Dropout2d`](generated/torch.nn.dropout2d#torch.nn.Dropout2d "torch.nn.Dropout2d") for details.

Parameters

* **p** – probability of a channel to be zeroed. Default: 0.5
* **training** – apply dropout if `True`. Default: `True`
* **inplace** – If set to `True`, will do this operation in-place. Default: `False`

### dropout3d

`torch.nn.functional.dropout3d(input, p=0.5, training=True, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#dropout3d)

Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor \text{input}[i, j]) of the input tensor. Each channel will be zeroed out independently on every forward call with probability `p` using samples from a Bernoulli distribution.

See [`Dropout3d`](generated/torch.nn.dropout3d#torch.nn.Dropout3d "torch.nn.Dropout3d") for details.

Parameters

* **p** – probability of a channel to be zeroed. Default: 0.5
* **training** – apply dropout if `True`. Default: `True`
* **inplace** – If set to `True`, will do this operation in-place. Default: `False`

Sparse functions
----------------

### embedding

`torch.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#embedding)

A simple lookup table that looks up embeddings in a fixed dictionary and size.

This module is often used to retrieve word embeddings using indices. The input to the module is a list of indices, and the embedding matrix, and the output is the corresponding word embeddings.

See [`torch.nn.Embedding`](generated/torch.nn.embedding#torch.nn.Embedding "torch.nn.Embedding") for more details.
Parameters * **input** (*LongTensor*) – Tensor containing indices into the embedding matrix * **weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size * **padding\_idx** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – If given, pads the output with the embedding vector at `padding_idx` (initialized to zeros) whenever it encounters the index. * **max\_norm** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – If given, each embedding vector with norm larger than `max_norm` is renormalized to have norm `max_norm`. Note: this will modify `weight` in-place. * **norm\_type** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – The p of the p-norm to compute for the `max_norm` option. Default `2`. * **scale\_grad\_by\_freq** (*boolean**,* *optional*) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default `False`. * **sparse** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, gradient w.r.t. `weight` will be a sparse tensor. See Notes under [`torch.nn.Embedding`](generated/torch.nn.embedding#torch.nn.Embedding "torch.nn.Embedding") for more details regarding sparse gradients. Shape: * Input: LongTensor of arbitrary shape containing the indices to extract * `Weight: Embedding matrix of floating point type with shape (V, embedding_dim),` where V = maximum index + 1 and embedding\_dim = the embedding size * Output: `(*, embedding_dim)`, where `*` is the input shape Examples: ``` >>> # a batch of 2 samples of 4 indices each >>> input = torch.tensor([[1,2,4,5],[4,3,2,9]]) >>> # an embedding matrix containing 10 tensors of size 3 >>> embedding_matrix = torch.rand(10, 3) >>> F.embedding(input, embedding_matrix) tensor([[[ 0.8490, 0.9625, 0.6753], [ 0.9666, 0.7761, 0.6108], [ 0.6246, 0.9751, 0.3618], [ 0.4161, 0.2419, 0.7383]], [[ 0.6246, 0.9751, 0.3618], [ 0.0237, 0.7794, 0.0528], [ 0.9666, 0.7761, 0.6108], [ 0.3385, 0.8612, 0.1867]]]) >>> # example with padding_idx >>> weights = torch.rand(10, 3) >>> weights[0, :].zero_() >>> embedding_matrix = weights >>> input = torch.tensor([[0,2,0,5]]) >>> F.embedding(input, embedding_matrix, padding_idx=0) tensor([[[ 0.0000, 0.0000, 0.0000], [ 0.5609, 0.5384, 0.8720], [ 0.0000, 0.0000, 0.0000], [ 0.6262, 0.2438, 0.7471]]]) ``` ### embedding\_bag `torch.nn.functional.embedding_bag(input, weight, offsets=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False, per_sample_weights=None, include_last_offset=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#embedding_bag) Computes sums, means or maxes of `bags` of embeddings, without instantiating the intermediate embeddings. See [`torch.nn.EmbeddingBag`](generated/torch.nn.embeddingbag#torch.nn.EmbeddingBag "torch.nn.EmbeddingBag") for more details. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information. 
Parameters

* **input** (*LongTensor*) – Tensor containing bags of indices into the embedding matrix
* **weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size
* **offsets** (*LongTensor*, *optional*) – Only used when `input` is 1D. `offsets` determines the starting index position of each bag (sequence) in `input`.
* **max\_norm** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)"), *optional*) – If given, each embedding vector with norm larger than `max_norm` is renormalized to have norm `max_norm`. Note: this will modify `weight` in-place.
* **norm\_type** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)"), *optional*) – The `p` in the `p`-norm to compute for the `max_norm` option. Default `2`.
* **scale\_grad\_by\_freq** (*boolean*, *optional*) – if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default `False`. Note: this option is not supported when `mode="max"`.
* **mode** (*string*, *optional*) – `"sum"`, `"mean"` or `"max"`. Specifies the way to reduce the bag. Default: `"mean"`
* **sparse** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – if `True`, gradient w.r.t. `weight` will be a sparse tensor. See Notes under [`torch.nn.Embedding`](generated/torch.nn.embedding#torch.nn.Embedding "torch.nn.Embedding") for more details regarding sparse gradients. Note: this option is not supported when `mode="max"`.
* **per\_sample\_weights** ([Tensor](tensors#torch.Tensor "torch.Tensor"), *optional*) – a tensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, `per_sample_weights` must have exactly the same shape as input and is treated as having the same `offsets`, if those are not None.
* **include\_last\_offset** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – if `True`, the size of offsets is equal to the number of bags + 1. The last element is the size of the input, or the ending index position of the last bag (sequence).

Shape:

* `input` (LongTensor) and `offsets` (LongTensor, optional)
  + If `input` is 2D of shape `(B, N)`, it will be treated as `B` bags (sequences) each of fixed length `N`, and this will return `B` values aggregated in a way depending on the `mode`. `offsets` is ignored and required to be `None` in this case.
  + If `input` is 1D of shape `(N)`, it will be treated as a concatenation of multiple bags (sequences). `offsets` is required to be a 1D tensor containing the starting index positions of each bag in `input`. Therefore, for `offsets` of shape `(B)`, `input` will be viewed as having `B` bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros.
* `weight` (Tensor): the learnable weights of the module of shape `(num_embeddings, embedding_dim)`
* `per_sample_weights` (Tensor, optional). Has the same shape as `input`.
* `output`: aggregated embedding values of shape `(B, embedding_dim)`

Examples:

```
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding_matrix = torch.rand(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.tensor([1,2,4,5,4,3,2,9])
>>> offsets = torch.tensor([0,4])
>>> F.embedding_bag(input, embedding_matrix, offsets)
tensor([[ 0.3397,  0.3552,  0.5545],
        [ 0.5893,  0.4386,  0.5882]])
```

### one\_hot

`torch.nn.functional.one_hot(tensor, num_classes=-1) → LongTensor`

Takes LongTensor with index values of shape `(*)` and returns a tensor of shape `(*, num_classes)` that has zeros everywhere except where the index of the last dimension matches the corresponding value of the input tensor, in which case it will be 1.

See also [One-hot on Wikipedia](https://en.wikipedia.org/wiki/One-hot).

Parameters

* **tensor** (*LongTensor*) – class values of any shape.
* **num\_classes** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Total number of classes. If set to -1, the number of classes will be inferred as one greater than the largest class value in the input tensor.

Returns

LongTensor that has one more dimension with 1 values at the index of last dimension indicated by the input, and 0 everywhere else.

#### Examples

```
>>> F.one_hot(torch.arange(0, 5) % 3)
tensor([[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1],
        [1, 0, 0],
        [0, 1, 0]])
>>> F.one_hot(torch.arange(0, 5) % 3, num_classes=5)
tensor([[1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0]])
>>> F.one_hot(torch.arange(0, 6).view(3,2) % 3)
tensor([[[1, 0, 0],
         [0, 1, 0]],
        [[0, 0, 1],
         [1, 0, 0]],
        [[0, 1, 0],
         [0, 0, 1]]])
```

Distance functions
------------------

### pairwise\_distance

`torch.nn.functional.pairwise_distance(x1, x2, p=2.0, eps=1e-06, keepdim=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#pairwise_distance)

See [`torch.nn.PairwiseDistance`](generated/torch.nn.pairwisedistance#torch.nn.PairwiseDistance "torch.nn.PairwiseDistance") for details.

### cosine\_similarity

`torch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-8) → Tensor`

Returns cosine similarity between x1 and x2, computed along dim.

\text{similarity} = \dfrac{x\_1 \cdot x\_2}{\max(\Vert x\_1 \Vert \_2 \cdot \Vert x\_2 \Vert \_2, \epsilon)}

Parameters

* **x1** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – First input.
* **x2** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – Second input (of size matching x1).
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Dimension of vectors. Default: 1
* **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)"), *optional*) – Small value to avoid division by zero. Default: 1e-8

Shape:

* Input: (\ast\_1, D, \ast\_2) where D is at position `dim`.
* Output: (\ast\_1, \ast\_2) where 1 is at position `dim`.

Example:

```
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = F.cosine_similarity(input1, input2)
>>> print(output)
```

### pdist

`torch.nn.functional.pdist(input, p=2) → Tensor`

Computes the p-norm distance between every pair of row vectors in the input. This is identical to the upper triangular portion, excluding the diagonal, of `torch.norm(input[:, None] - input, dim=2, p=p)`. This function will be faster if the rows are contiguous.
If input has shape N \times M then the output will have shape \frac{1}{2} N (N - 1).

This function is equivalent to `scipy.spatial.distance.pdist(input, 'minkowski', p=p)` if p \in (0, \infty). When p = 0 it is equivalent to `scipy.spatial.distance.pdist(input, 'hamming') * M`. When p = \infty, the closest scipy function is `scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max())`.

Parameters

* **input** – input tensor of shape N \times M.
* **p** – p value for the p-norm distance to calculate between each vector pair \in [0, \infty].

Loss functions
--------------

### binary\_cross\_entropy

`torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#binary_cross_entropy)

Function that measures the Binary Cross Entropy between the target and the output.

See [`BCELoss`](generated/torch.nn.bceloss#torch.nn.BCELoss "torch.nn.BCELoss") for details.

Parameters

* **input** – Tensor of arbitrary shape
* **target** – Tensor of the same shape as input
* **weight** ([Tensor](tensors#torch.Tensor "torch.Tensor"), *optional*) – a manual rescaling weight; if provided it’s repeated to match input tensor shape
* **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when reduce is `False`. Default: `True`
* **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
* **reduction** (*string*, *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'`

Examples:

```
>>> input = torch.randn((3, 2), requires_grad=True)
>>> target = torch.rand((3, 2), requires_grad=False)
>>> loss = F.binary_cross_entropy(F.sigmoid(input), target)
>>> loss.backward()
```

### binary\_cross\_entropy\_with\_logits

`torch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#binary_cross_entropy_with_logits)

Function that measures Binary Cross Entropy between target and output logits.

See [`BCEWithLogitsLoss`](generated/torch.nn.bcewithlogitsloss#torch.nn.BCEWithLogitsLoss "torch.nn.BCEWithLogitsLoss") for details.
Parameters

* **input** – Tensor of arbitrary shape
* **target** – Tensor of the same shape as input
* **weight** ([Tensor](tensors#torch.Tensor "torch.Tensor"), *optional*) – a manual rescaling weight; if provided it’s repeated to match input tensor shape
* **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when reduce is `False`. Default: `True`
* **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
* **reduction** (*string*, *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'`
* **pos\_weight** ([Tensor](tensors#torch.Tensor "torch.Tensor"), *optional*) – a weight of positive examples. Must be a vector with length equal to the number of classes.

Examples:

```
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> loss = F.binary_cross_entropy_with_logits(input, target)
>>> loss.backward()
```

### poisson\_nll\_loss

`torch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#poisson_nll_loss)

Poisson negative log likelihood loss.

See [`PoissonNLLLoss`](generated/torch.nn.poissonnllloss#torch.nn.PoissonNLLLoss "torch.nn.PoissonNLLLoss") for details.

Parameters

* **input** – expectation of underlying Poisson distribution.
* **target** – random sample target \sim \text{Poisson}(input).
* **log\_input** – if `True` the loss is computed as \exp(\text{input}) - \text{target} \* \text{input}, if `False` then loss is \text{input} - \text{target} \* \log(\text{input}+\text{eps}). Default: `True`
* **full** – whether to compute full loss, i.e. to add the Stirling approximation term \text{target} \* \log(\text{target}) - \text{target} + 0.5 \* \log(2 \* \pi \* \text{target}). Default: `False`
* **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when reduce is `False`.
Default: `True`
* **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)"), *optional*) – Small value to avoid evaluation of \log(0) when `log_input=False`. Default: 1e-8
* **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
* **reduction** (*string*, *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'`

### cosine\_embedding\_loss

`torch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#cosine_embedding_loss)

See [`CosineEmbeddingLoss`](generated/torch.nn.cosineembeddingloss#torch.nn.CosineEmbeddingLoss "torch.nn.CosineEmbeddingLoss") for details.

### cross\_entropy

`torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#cross_entropy)

This criterion combines `log_softmax` and `nll_loss` in a single function.

See [`CrossEntropyLoss`](generated/torch.nn.crossentropyloss#torch.nn.CrossEntropyLoss "torch.nn.CrossEntropyLoss") for details.

Parameters

* **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – (N, C) where `C = number of classes`, or (N, C, H, W) in case of 2D loss, or (N, C, d\_1, d\_2, ..., d\_K) where K \geq 1 in the case of K-dimensional loss.
* **target** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – (N) where each value is 0 \leq \text{targets}[i] \leq C-1, or (N, d\_1, d\_2, ..., d\_K) where K \geq 1 for K-dimensional loss.
* **weight** ([Tensor](tensors#torch.Tensor "torch.Tensor"), *optional*) – a manual rescaling weight given to each class. If given, has to be a Tensor of size `C`
* **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when reduce is `False`. Default: `True`
* **ignore\_index** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)"), *optional*) – Specifies a target value that is ignored and does not contribute to the input gradient. When `size_average` is `True`, the loss is averaged over non-ignored targets. Default: -100
* **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)"), *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`.
When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'` Examples:

```
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randint(5, (3,), dtype=torch.int64)
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
```

### ctc\_loss

`torch.nn.functional.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#ctc_loss) The Connectionist Temporal Classification loss. See [`CTCLoss`](generated/torch.nn.ctcloss#torch.nn.CTCLoss "torch.nn.CTCLoss") for details. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information. Parameters * **log\_probs** – (T, N, C) where `C = number of characters in alphabet including blank`, `T = input length`, and `N = batch size`. The logarithmized probabilities of the outputs (e.g. obtained with [`torch.nn.functional.log_softmax()`](#torch.nn.functional.log_softmax "torch.nn.functional.log_softmax")). * **targets** – (N, S) or `(sum(target_lengths))`. Targets cannot be blank. In the second form, the targets are assumed to be concatenated. * **input\_lengths** – (N). Lengths of the inputs (must each be ≤ T) * **target\_lengths** – (N). Lengths of the targets * **blank** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Blank label. Default: 0. * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the output losses will be divided by the target lengths and then the mean over the batch is taken, `'sum'`: the output will be summed. Default: `'mean'` * **zero\_infinity** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Whether to zero infinite losses and the associated gradients. Default: `False`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets.
Example:

```
>>> log_probs = torch.randn(50, 16, 20).log_softmax(2).detach().requires_grad_()
>>> targets = torch.randint(1, 20, (16, 30), dtype=torch.long)
>>> input_lengths = torch.full((16,), 50, dtype=torch.long)
>>> target_lengths = torch.randint(10, 30, (16,), dtype=torch.long)
>>> loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)
>>> loss.backward()
```

### hinge\_embedding\_loss

`torch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#hinge_embedding_loss) See [`HingeEmbeddingLoss`](generated/torch.nn.hingeembeddingloss#torch.nn.HingeEmbeddingLoss "torch.nn.HingeEmbeddingLoss") for details.

### kl\_div

`torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#kl_div) The [Kullback-Leibler divergence Loss](https://en.wikipedia.org/wiki/Kullback-Leibler_divergence). See [`KLDivLoss`](generated/torch.nn.kldivloss#torch.nn.KLDivLoss "torch.nn.KLDivLoss") for details. Parameters * **input** – Tensor of arbitrary shape * **target** – Tensor of the same shape as input * **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when reduce is `False`. Default: `True` * **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'batchmean'` | `'sum'` | `'mean'`. `'none'`: no reduction will be applied, `'batchmean'`: the sum of the output will be divided by the batch size, `'sum'`: the output will be summed, `'mean'`: the output will be divided by the number of elements in the output. Default: `'mean'` * **log\_target** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – A flag indicating whether `target` is passed in the log space. It is recommended to pass certain distributions (like `softmax`) in the log space to avoid numerical issues caused by explicit `log`. Default: `False` Note `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Note `reduction` = `'mean'` doesn't return the true KL divergence value; please use `reduction` = `'batchmean'`, which aligns with the mathematical definition of KL divergence. In the next major release, `'mean'` will be changed to behave the same as `'batchmean'`.
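A minimal usage sketch (not part of the original page; shapes are illustrative). With the default `log_target=False`, `input` is expected in log space while `target` holds probabilities:

```
>>> input = F.log_softmax(torch.randn(3, 5, requires_grad=True), dim=1)  # log-probabilities
>>> target = F.softmax(torch.randn(3, 5), dim=1)                         # probabilities
>>> loss = F.kl_div(input, target, reduction='batchmean')  # 'batchmean' matches the KL definition
>>> loss.backward()
```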
### l1\_loss

`torch.nn.functional.l1_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#l1_loss) Function that takes the mean element-wise absolute value difference. See [`L1Loss`](generated/torch.nn.l1loss#torch.nn.L1Loss "torch.nn.L1Loss") for details.

### mse\_loss

`torch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#mse_loss) Measures the element-wise mean squared error. See [`MSELoss`](generated/torch.nn.mseloss#torch.nn.MSELoss "torch.nn.MSELoss") for details.

### margin\_ranking\_loss

`torch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#margin_ranking_loss) See [`MarginRankingLoss`](generated/torch.nn.marginrankingloss#torch.nn.MarginRankingLoss "torch.nn.MarginRankingLoss") for details.

### multilabel\_margin\_loss

`torch.nn.functional.multilabel_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#multilabel_margin_loss) See [`MultiLabelMarginLoss`](generated/torch.nn.multilabelmarginloss#torch.nn.MultiLabelMarginLoss "torch.nn.MultiLabelMarginLoss") for details.

### multilabel\_soft\_margin\_loss

`torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=None) → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#multilabel_soft_margin_loss) See [`MultiLabelSoftMarginLoss`](generated/torch.nn.multilabelsoftmarginloss#torch.nn.MultiLabelSoftMarginLoss "torch.nn.MultiLabelSoftMarginLoss") for details.

### multi\_margin\_loss

`torch.nn.functional.multi_margin_loss(input, target, p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean') → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#multi_margin_loss) See [`MultiMarginLoss`](generated/torch.nn.multimarginloss#torch.nn.MultiMarginLoss "torch.nn.MultiMarginLoss") for details.

### nll\_loss

`torch.nn.functional.nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#nll_loss) The negative log likelihood loss. See [`NLLLoss`](generated/torch.nn.nllloss#torch.nn.NLLLoss "torch.nn.NLLLoss") for details. Parameters * **input** – (N, C) where `C = number of classes`, or (N, C, H, W) in the case of 2D loss, or (N, C, d\_1, d\_2, ..., d\_K) with K ≥ 1 in the case of K-dimensional loss. * **target** – (N) where each value satisfies 0 ≤ targets[i] ≤ C−1, or (N, d\_1, d\_2, ..., d\_K) with K ≥ 1 for K-dimensional loss. * **weight** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` * **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when reduce is `False`.
Default: `True` * **ignore\_index** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Specifies a target value that is ignored and does not contribute to the input gradient. When `size_average` is `True`, the loss is averaged over non-ignored targets. Default: -100 * **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'` Example:

```
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = F.nll_loss(F.log_softmax(input, dim=1), target)
>>> output.backward()
```

### smooth\_l1\_loss

`torch.nn.functional.smooth_l1_loss(input, target, size_average=None, reduce=None, reduction='mean', beta=1.0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#smooth_l1_loss) Function that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. See [`SmoothL1Loss`](generated/torch.nn.smoothl1loss#torch.nn.SmoothL1Loss "torch.nn.SmoothL1Loss") for details.
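The page gives no example here; the following minimal sketch (illustrative shapes, not from the docs) shows the `beta` cutoff argument:

```
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = F.smooth_l1_loss(input, target, beta=1.0)  # squared term below beta, L1 term above
>>> loss.backward()
```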
### soft\_margin\_loss

`torch.nn.functional.soft_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#soft_margin_loss) See [`SoftMarginLoss`](generated/torch.nn.softmarginloss#torch.nn.SoftMarginLoss "torch.nn.SoftMarginLoss") for details.

### triplet\_margin\_loss

`torch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#triplet_margin_loss) See [`TripletMarginLoss`](generated/torch.nn.tripletmarginloss#torch.nn.TripletMarginLoss "torch.nn.TripletMarginLoss") for details.

### triplet\_margin\_with\_distance\_loss

`torch.nn.functional.triplet_margin_with_distance_loss(anchor, positive, negative, *, distance_function=None, margin=1.0, swap=False, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#triplet_margin_with_distance_loss) See [`TripletMarginWithDistanceLoss`](generated/torch.nn.tripletmarginwithdistanceloss#torch.nn.TripletMarginWithDistanceLoss "torch.nn.TripletMarginWithDistanceLoss") for details.

Vision functions
----------------

### pixel\_shuffle

`torch.nn.functional.pixel_shuffle(input, upscale_factor) → Tensor` Rearranges elements in a tensor of shape (\*, C × r², H, W) to a tensor of shape (\*, C, H × r, W × r), where r is the `upscale_factor`. See [`PixelShuffle`](generated/torch.nn.pixelshuffle#torch.nn.PixelShuffle "torch.nn.PixelShuffle") for details. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor * **upscale\_factor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – factor to increase spatial resolution by Examples:

```
>>> input = torch.randn(1, 9, 4, 4)
>>> output = torch.nn.functional.pixel_shuffle(input, 3)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
```

### pixel\_unshuffle

`torch.nn.functional.pixel_unshuffle(input, downscale_factor) → Tensor` Reverses the [`PixelShuffle`](generated/torch.nn.pixelshuffle#torch.nn.PixelShuffle "torch.nn.PixelShuffle") operation by rearranging elements in a tensor of shape (\*, C, H × r, W × r) to a tensor of shape (\*, C × r², H, W), where r is the `downscale_factor`. See [`PixelUnshuffle`](generated/torch.nn.pixelunshuffle#torch.nn.PixelUnshuffle "torch.nn.PixelUnshuffle") for details. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor * **downscale\_factor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – factor to decrease spatial resolution by Examples:

```
>>> input = torch.randn(1, 1, 12, 12)
>>> output = torch.nn.functional.pixel_unshuffle(input, 3)
>>> print(output.size())
torch.Size([1, 9, 4, 4])
```

### pad

`torch.nn.functional.pad(input, pad, mode='constant', value=0)` Pads tensor. Padding size: The padding sizes by which to pad some dimensions of `input` are described starting from the last dimension and moving forward. ⌊len(pad) / 2⌋ dimensions of `input` will be padded. For example, to pad only the last dimension of the input tensor, [`pad`](#torch.nn.functional.pad "torch.nn.functional.pad") has the form (padding\_left, padding\_right); to pad the last 2 dimensions of the input tensor, use (padding\_left, padding\_right, padding\_top, padding\_bottom); to pad the last 3 dimensions, use (padding\_left, padding\_right, padding\_top, padding\_bottom, padding\_front, padding\_back). Padding mode: See [`torch.nn.ConstantPad2d`](generated/torch.nn.constantpad2d#torch.nn.ConstantPad2d "torch.nn.ConstantPad2d"), [`torch.nn.ReflectionPad2d`](generated/torch.nn.reflectionpad2d#torch.nn.ReflectionPad2d "torch.nn.ReflectionPad2d"), and [`torch.nn.ReplicationPad2d`](generated/torch.nn.replicationpad2d#torch.nn.ReplicationPad2d "torch.nn.ReplicationPad2d") for concrete examples on how each of the padding modes works. Constant padding is implemented for arbitrary dimensions. Replicate padding is implemented for padding the last 3 dimensions of a 5D input tensor, the last 2 dimensions of a 4D input tensor, or the last dimension of a 3D input tensor. Reflect padding is only implemented for padding the last 2 dimensions of a 4D input tensor, or the last dimension of a 3D input tensor. Note When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for background.
Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – N-dimensional tensor * **pad** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – m-element tuple, where m/2 ≤ the number of dimensions of `input` and m is even. * **mode** – `'constant'`, `'reflect'`, `'replicate'` or `'circular'`. Default: `'constant'` * **value** – fill value for `'constant'` padding. Default: `0` Examples:

```
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p1d = (1, 1) # pad last dim by 1 on each side
>>> out = F.pad(t4d, p1d, "constant", 0) # effectively zero padding
>>> print(out.size())
torch.Size([3, 3, 4, 4])
>>> p2d = (1, 1, 2, 2) # pad last dim by (1, 1) and 2nd to last by (2, 2)
>>> out = F.pad(t4d, p2d, "constant", 0)
>>> print(out.size())
torch.Size([3, 3, 8, 4])
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p3d = (0, 1, 2, 1, 3, 3) # pad by (0, 1), (2, 1), and (3, 3)
>>> out = F.pad(t4d, p3d, "constant", 0)
>>> print(out.size())
torch.Size([3, 9, 7, 3])
```

### interpolate

`torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#interpolate) Down/up samples the input to either the given `size` or the given `scale_factor`. The algorithm used for interpolation is determined by `mode`. Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: `mini-batch x channels x [optional depth] x [optional height] x width`. The modes available for resizing are: `nearest`, `linear` (3D-only), `bilinear`, `bicubic` (4D-only), `trilinear` (5D-only), `area` Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – output spatial size. * **scale\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* *Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]*) – multiplier for spatial size. Has to match input size if it is a tuple. * **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – algorithm used for upsampling: `'nearest'` | `'linear'` | `'bilinear'` | `'bicubic'` | `'trilinear'` | `'area'`. Default: `'nearest'` * **align\_corners** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to `True`, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels.
If set to `False`, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation *independent* of input size when `scale_factor` is kept the same. This only has an effect when `mode` is `'linear'`, `'bilinear'`, `'bicubic'` or `'trilinear'`. Default: `False` * **recompute\_scale\_factor** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – recompute the scale\_factor for use in the interpolation calculation. When `scale_factor` is passed as a parameter, it is used to compute the `output_size`. If `recompute_scale_factor` is `False` or not specified, the passed-in `scale_factor` will be used in the interpolation computation. Otherwise, a new `scale_factor` will be computed based on the output and input sizes for use in the interpolation computation (i.e. the computation will be identical to the one performed if the computed `output_size` were passed in explicitly). Note that when `scale_factor` is floating-point, the recomputed scale\_factor may differ from the one passed in due to rounding and precision issues. Note With `mode='bicubic'`, it's possible to cause overshoot; in other words, it can produce negative values or values greater than 255 for images. Explicitly call `result.clamp(min=0, max=255)` if you want to reduce the overshoot when displaying the image. Warning With `align_corners = True`, the linearly interpolating modes (`linear`, `bilinear`, and `trilinear`) don't proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is `align_corners = False`. See [`Upsample`](generated/torch.nn.upsample#torch.nn.Upsample "torch.nn.Upsample") for concrete examples on how this affects the outputs. Warning When `scale_factor` is specified, if `recompute_scale_factor=True`, `scale_factor` is used to compute the `output_size`, which will then be used to infer new scales for the interpolation. The default behavior for `recompute_scale_factor` changed to `False` in 1.6.0, and `scale_factor` is used in the interpolation calculation. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information.
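No example accompanies `interpolate` on this page; a minimal sketch (shapes are illustrative, not from the docs):

```
>>> x = torch.randn(1, 3, 8, 8)  # N x C x H x W
>>> F.interpolate(x, scale_factor=2, mode='nearest').shape
torch.Size([1, 3, 16, 16])
>>> F.interpolate(x, size=(4, 6), mode='bilinear', align_corners=False).shape
torch.Size([1, 3, 4, 6])
```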
### upsample

`torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#upsample) Upsamples the input to either the given `size` or the given `scale_factor`. Warning This function is deprecated in favor of [`torch.nn.functional.interpolate()`](#torch.nn.functional.interpolate "torch.nn.functional.interpolate"). This is equivalent to `nn.functional.interpolate(...)`. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information. The algorithm used for upsampling is determined by `mode`. Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: `mini-batch x channels x [optional depth] x [optional height] x width`. The modes available for upsampling are: `nearest`, `linear` (3D-only), `bilinear`, `bicubic` (4D-only), `trilinear` (5D-only) Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – output spatial size. * **scale\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* *Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]*) – multiplier for spatial size. Has to match input size if it is a tuple. * **mode** (*string*) – algorithm used for upsampling: `'nearest'` | `'linear'` | `'bilinear'` | `'bicubic'` | `'trilinear'`. Default: `'nearest'` * **align\_corners** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to `True`, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to `False`, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation *independent* of input size when `scale_factor` is kept the same. This only has an effect when `mode` is `'linear'`, `'bilinear'`, `'bicubic'` or `'trilinear'`. Default: `False` Note With `mode='bicubic'`, it's possible to cause overshoot; in other words, it can produce negative values or values greater than 255 for images. Explicitly call `result.clamp(min=0, max=255)` if you want to reduce the overshoot when displaying the image. Warning With `align_corners = True`, the linearly interpolating modes (`linear`, `bilinear`, and `trilinear`) don't proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is `align_corners = False`. See [`Upsample`](generated/torch.nn.upsample#torch.nn.Upsample "torch.nn.Upsample") for concrete examples on how this affects the outputs.

### upsample\_nearest

`torch.nn.functional.upsample_nearest(input, size=None, scale_factor=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#upsample_nearest) Upsamples the input, using nearest neighbours' pixel values. Warning This function is deprecated in favor of [`torch.nn.functional.interpolate()`](#torch.nn.functional.interpolate "torch.nn.functional.interpolate"). This is equivalent to `nn.functional.interpolate(..., mode='nearest')`. Currently spatial and volumetric upsampling are supported (i.e. expected inputs are 4 or 5 dimensional).
Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*] or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – output spatial size. * **scale\_factor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – multiplier for spatial size. Has to be an integer. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information.

### upsample\_bilinear

`torch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#upsample_bilinear) Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of [`torch.nn.functional.interpolate()`](#torch.nn.functional.interpolate "torch.nn.functional.interpolate"). This is equivalent to `nn.functional.interpolate(..., mode='bilinear', align_corners=True)`. Expected inputs are spatial (4 dimensional). Use `upsample_trilinear` for volumetric (5 dimensional) inputs. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – output spatial size. * **scale\_factor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – multiplier for spatial size Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information.

### grid\_sample

`torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#grid_sample) Given an `input` and a flow-field `grid`, computes the `output` using `input` values and pixel locations from `grid`. Currently, only spatial (4-D) and volumetric (5-D) `input` are supported. In the spatial (4-D) case, for `input` with shape (N, C, H\_in, W\_in) and `grid` with shape (N, H\_out, W\_out, 2), the output will have shape (N, C, H\_out, W\_out). For each output location `output[n, :, h, w]`, the size-2 vector `grid[n, h, w]` specifies `input` pixel locations `x` and `y`, which are used to interpolate the output value `output[n, :, h, w]`. In the case of 5D inputs, `grid[n, d, h, w]` specifies the `x`, `y`, `z` pixel locations for interpolating `output[n, :, d, h, w]`. The `mode` argument specifies the `nearest` or `bilinear` interpolation method used to sample the input pixels.
`grid` specifies the sampling pixel locations normalized by the `input` spatial dimensions. Therefore, it should have most values in the range of `[-1, 1]`. For example, values `x = -1, y = -1` correspond to the left-top pixel of `input`, and values `x = 1, y = 1` to the right-bottom pixel of `input`. If `grid` has values outside the range of `[-1, 1]`, the corresponding outputs are handled as defined by `padding_mode`. Options are * `padding_mode="zeros"`: use `0` for out-of-bound grid locations, * `padding_mode="border"`: use border values for out-of-bound grid locations, * `padding_mode="reflection"`: use values at locations reflected by the border for out-of-bound grid locations. Locations far from the border keep being reflected until they fall in bounds, e.g., (normalized) pixel location `x = -3.5` reflects by border `-1` and becomes `x' = 1.5`, then reflects by border `1` and becomes `x'' = -0.5`. Note This function is often used in conjunction with [`affine_grid()`](#torch.nn.functional.affine_grid "torch.nn.functional.affine_grid") to build [Spatial Transformer Networks](https://arxiv.org/abs/1506.02025). Note When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for background. Note NaN values in `grid` would be interpreted as `-1`. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input of shape (N, C, H\_in, W\_in) (4-D case) or (N, C, D\_in, H\_in, W\_in) (5-D case) * **grid** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – flow-field of shape (N, H\_out, W\_out, 2) (4-D case) or (N, D\_out, H\_out, W\_out, 3) (5-D case) * **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – interpolation mode to calculate output values: `'bilinear'` | `'nearest'` | `'bicubic'`. Default: `'bilinear'` Note: `mode='bicubic'` supports only 4-D input. When `mode='bilinear'` and the input is 5-D, the interpolation mode used internally will actually be trilinear. However, when the input is 4-D, the interpolation mode will legitimately be bilinear. * **padding\_mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – padding mode for outside grid values: `'zeros'` | `'border'` | `'reflection'`. Default: `'zeros'` * **align\_corners** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Geometrically, we consider the pixels of the input as squares rather than points. If set to `True`, the extrema (`-1` and `1`) are considered as referring to the center points of the input's corner pixels. If set to `False`, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic. This option parallels the `align_corners` option in [`interpolate()`](#torch.nn.functional.interpolate "torch.nn.functional.interpolate"), and so whichever option is used here should also be used there to resize the input image before grid sampling.
Default: `False` Returns output Tensor Return type output ([Tensor](tensors#torch.Tensor "torch.Tensor")) Warning When `align_corners = True`, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by [`grid_sample()`](#torch.nn.functional.grid_sample "torch.nn.functional.grid_sample") will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was `align_corners = True`. Since then, the default behavior has been changed to `align_corners = False`, in order to bring it in line with the default for [`interpolate()`](#torch.nn.functional.interpolate "torch.nn.functional.interpolate"). Note `mode='bicubic'` is implemented using the [cubic convolution algorithm](https://en.wikipedia.org/wiki/Bicubic_interpolation) with α = −0.75. The constant α may differ from package to package. For example, [PIL](https://github.com/python-pillow/Pillow/blob/4634eafe3c695a014267eefdce830b4a825beed7/src/libImaging/Resample.c#L51) and [OpenCV](https://github.com/opencv/opencv/blob/f345ed564a06178670750bad59526cfa4033be55/modules/imgproc/src/resize.cpp#L908) use -0.5 and -0.75 respectively. This algorithm may "overshoot" the range of values it's interpolating. For example, it may produce negative values or values greater than 255 when interpolating input in [0, 255]. Clamp the results with `torch.clamp()` to ensure they are within the valid range.
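No example is given on this page; the sketch below (not from the docs) pairs `grid_sample` with `affine_grid`, documented next, using an identity transform, so the output should reproduce the input up to floating-point error:

```
>>> input = torch.randn(1, 1, 4, 4)
>>> theta = torch.tensor([[[1., 0., 0.],
...                        [0., 1., 0.]]])  # identity affine matrix, shape (1, 2, 3)
>>> grid = F.affine_grid(theta, size=(1, 1, 4, 4), align_corners=False)
>>> output = F.grid_sample(input, grid, align_corners=False)
>>> torch.allclose(input, output, atol=1e-6)
True
```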
### affine\_grid

`torch.nn.functional.affine_grid(theta, size, align_corners=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/functional.html#affine_grid) Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices `theta`. Note This function is often used in conjunction with [`grid_sample()`](#torch.nn.functional.grid_sample "torch.nn.functional.grid_sample") to build [Spatial Transformer Networks](https://arxiv.org/abs/1506.02025). Parameters * **theta** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – input batch of affine matrices with shape (N × 2 × 3) for 2D or (N × 3 × 4) for 3D * **size** (*torch.Size*) – the target output image size (N × C × H × W for 2D or N × C × D × H × W for 3D). Example: torch.Size((32, 3, 24, 24)) * **align\_corners** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, consider `-1` and `1` to refer to the centers of the corner pixels rather than the image corners. Refer to [`grid_sample()`](#torch.nn.functional.grid_sample "torch.nn.functional.grid_sample") for a more complete description. A grid generated by [`affine_grid()`](#torch.nn.functional.affine_grid "torch.nn.functional.affine_grid") should be passed to [`grid_sample()`](#torch.nn.functional.grid_sample "torch.nn.functional.grid_sample") with the same setting for this option. Default: `False` Returns output Tensor of size (N × H × W × 2) Return type output ([Tensor](tensors#torch.Tensor "torch.Tensor")) Warning When `align_corners = True`, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by [`grid_sample()`](#torch.nn.functional.grid_sample "torch.nn.functional.grid_sample") will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was `align_corners = True`. Since then, the default behavior has been changed to `align_corners = False`, in order to bring it in line with the default for [`interpolate()`](#torch.nn.functional.interpolate "torch.nn.functional.interpolate"). Warning When `align_corners = True`, 2D affine transforms on 1D data and 3D affine transforms on 2D data (that is, when one of the spatial dimensions has unit size) are ill-defined, and not an intended use case. This is not a problem when `align_corners = False`. Up to version 1.2.0, all grid points along a unit dimension were considered arbitrarily to be at `-1`. From version 1.3.0, under `align_corners = True`, all grid points along a unit dimension are considered to be at `0` (the center of the input image).

DataParallel functions (multi-GPU, distributed)
-----------------------------------------------

### data\_parallel

`torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/data_parallel.html#data_parallel) Evaluates module(input) in parallel across the GPUs given in device\_ids. This is the functional version of the DataParallel module. Parameters * **module** ([Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – the module to evaluate in parallel * **inputs** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – inputs to the module * **device\_ids** (*list of python:int* *or* [torch.device](tensor_attributes#torch.torch.device "torch.torch.device")) – GPU ids on which to replicate module * **output\_device** (*list of python:int* *or* [torch.device](tensor_attributes#torch.torch.device "torch.torch.device")) – GPU location of the output. Use -1 to indicate the CPU. (default: device\_ids[0]) Returns a Tensor containing the result of module(input) located on output\_device
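A minimal sketch of the functional form (not from the original page; it assumes at least two visible CUDA devices):

```
>>> module = torch.nn.Linear(10, 5).cuda(0)
>>> inputs = torch.randn(20, 10).cuda(0)
>>> # inputs are scattered along dim 0 across GPUs 0 and 1, the replicas run in
>>> # parallel, and the outputs are gathered back on device_ids[0]
>>> outputs = torch.nn.parallel.data_parallel(module, inputs, device_ids=[0, 1])
>>> outputs.shape
torch.Size([20, 5])
```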
pytorch torch.random torch.random ============ `torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#fork_rng) Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in. Parameters * **devices** (*iterable of CUDA IDs*) – CUDA devices for which to fork the RNG. CPU RNG state is always forked. By default, [`fork_rng()`](#torch.random.fork_rng "torch.random.fork_rng") operates on all devices, but will emit a warning if your machine has a lot of devices, since this function will run very slowly in that case. If you explicitly specify devices, this warning will be suppressed. * **enabled** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – if `False`, the RNG is not forked. This is a convenience argument for easily disabling the context manager without having to delete it and unindent your Python code under it. `torch.random.get_rng_state()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#get_rng_state) Returns the random number generator state as a `torch.ByteTensor`. `torch.random.initial_seed()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#initial_seed) Returns the initial seed for generating random numbers as a Python `long`. `torch.random.manual_seed(seed)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#manual_seed) Sets the seed for generating random numbers. Returns a `torch.Generator` object. Parameters **seed** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The desired seed. Value must be within the inclusive range `[-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]`. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula `0xffff_ffff_ffff_ffff + seed`. `torch.random.seed()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#seed) Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64 bit number used to seed the RNG. `torch.random.set_rng_state(new_state)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#set_rng_state) Sets the random number generator state. Parameters **new\_state** (*torch.ByteTensor*) – The desired state
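A short sketch tying these functions together (not part of the original page; values are illustrative):

```
>>> _ = torch.manual_seed(0)              # make subsequent draws reproducible
>>> state = torch.random.get_rng_state()  # snapshot the CPU RNG state
>>> a = torch.randn(3)
>>> torch.random.set_rng_state(state)     # rewind to the snapshot
>>> b = torch.randn(3)
>>> torch.equal(a, b)
True
>>> with torch.random.fork_rng():         # RNG changes inside are discarded on exit
...     _ = torch.manual_seed(123)
...     noise = torch.randn(3)
```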
pytorch torch.optim torch.optim =========== [`torch.optim`](#module-torch.optim "torch.optim") is a package implementing various optimization algorithms. Most commonly used methods are already supported, and the interface is general enough that more sophisticated ones can also be easily integrated in the future.

How to use an optimizer
-----------------------

To use [`torch.optim`](#module-torch.optim "torch.optim") you have to construct an optimizer object that will hold the current state and will update the parameters based on the computed gradients.

### Constructing it

To construct an [`Optimizer`](#torch.optim.Optimizer "torch.optim.Optimizer") you have to give it an iterable containing the parameters (all should be `Variable` s) to optimize. Then, you can specify optimizer-specific options such as the learning rate, weight decay, etc. Note If you need to move a model to GPU via `.cuda()`, please do so before constructing optimizers for it. Parameters of a model after `.cuda()` will be different objects from those before the call. In general, you should make sure that optimized parameters live in consistent locations when optimizers are constructed and used. Example:

```
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam([var1, var2], lr=0.0001)
```

### Per-parameter options

[`Optimizer`](#torch.optim.Optimizer "torch.optim.Optimizer") instances also support specifying per-parameter options. To do this, instead of passing an iterable of `Variable` s, pass in an iterable of [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)") s. Each of them will define a separate parameter group, and should contain a `params` key, containing a list of parameters belonging to it. Other keys should match the keyword arguments accepted by the optimizers, and will be used as optimization options for this group. Note You can still pass options as keyword arguments. They will be used as defaults, in the groups that didn't override them. This is useful when you only want to vary a single option, while keeping all others consistent between parameter groups. For example, this is very useful when one wants to specify per-layer learning rates:

```
optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
```

This means that `model.base`'s parameters will use the default learning rate of `1e-2`, `model.classifier`'s parameters will use a learning rate of `1e-3`, and a momentum of `0.9` will be used for all parameters.

### Taking an optimization step

All optimizers implement a [`step()`](#torch.optim.Optimizer.step "torch.optim.Optimizer.step") method that updates the parameters. It can be used in two ways:

#### `optimizer.step()`

This is a simplified version supported by most optimizers. The function can be called once the gradients are computed using e.g. `backward()`. Example:

```
for input, target in dataset:
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
```

#### `optimizer.step(closure)`

Some optimization algorithms such as Conjugate Gradient and LBFGS need to reevaluate the function multiple times, so you have to pass in a closure that allows them to recompute your model. The closure should clear the gradients, compute the loss, and return it. Example:

```
for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)
```

Algorithms
----------

`class torch.optim.Optimizer(params, defaults)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/optimizer.html#Optimizer) Base class for all optimizers. Warning Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don't satisfy those properties are sets and iterators over values of dictionaries. Parameters * **params** (*iterable*) – an iterable of [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") s or [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)") s. Specifies what Tensors should be optimized. * **defaults** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – a dict containing default values of optimization options (used when a parameter group doesn't specify them). `add_param_group(param_group)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/optimizer.html#Optimizer.add_param_group) Add a param group to the [`Optimizer`](#torch.optim.Optimizer "torch.optim.Optimizer")'s `param_groups`.
This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the [`Optimizer`](#torch.optim.Optimizer "torch.optim.Optimizer") as training progresses. Parameters **param\_group** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – Specifies what Tensors should be optimized, along with group-specific optimization options. `load_state_dict(state_dict)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/optimizer.html#Optimizer.load_state_dict) Loads the optimizer state. Parameters **state\_dict** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – optimizer state. Should be an object returned from a call to [`state_dict()`](#torch.optim.Optimizer.state_dict "torch.optim.Optimizer.state_dict"). `state_dict()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/optimizer.html#Optimizer.state_dict) Returns the state of the optimizer as a [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)"). It contains two entries: * state - a dict holding current optimization state. Its content differs between optimizer classes. * param\_groups - a dict containing all parameter groups `step(closure)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/optimizer.html#Optimizer.step) Performs a single optimization step (parameter update). Parameters **closure** (*callable*) – A closure that reevaluates the model and returns the loss. Optional for most optimizers. Note Unless otherwise specified, this function should not modify the `.grad` field of the parameters. `zero_grad(set_to_none=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/optimizer.html#Optimizer.zero_grad) Sets the gradients of all optimized [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") s to zero. Parameters **set\_to\_none** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently. 2. If the user requests `zero_grad(set_to_none=True)` followed by a backward pass, `.grad`s are guaranteed to be None for params that did not receive a gradient. 3. `torch.optim` optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
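A short sketch of the base-class methods above (not part of the original page; model and hyperparameters are illustrative):

```
model = torch.nn.Linear(4, 2)
optimizer = optim.SGD([{'params': [model.weight]}], lr=0.1)
optimizer.add_param_group({'params': [model.bias], 'lr': 0.01})  # new group with its own lr
checkpoint = optimizer.state_dict()      # dict with 'state' and 'param_groups' entries
optimizer.load_state_dict(checkpoint)    # restores both entries
optimizer.zero_grad(set_to_none=True)    # grads become None instead of zero-filled tensors
```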
`class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adadelta.html#Adadelta) Implements Adadelta algorithm. It has been proposed in [ADADELTA: An Adaptive Learning Rate Method](https://arxiv.org/abs/1212.5701). Parameters * **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups * **rho** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – coefficient used for computing a running average of squared gradients (default: 0.9) * **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – term added to the denominator to improve numerical stability (default: 1e-6) * **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – coefficient that scales delta before it is applied to the parameters (default: 1.0) * **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay (L2 penalty) (default: 0) `step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adadelta.html#Adadelta.step) Performs a single optimization step. Parameters **closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss. `class torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adagrad.html#Adagrad) Implements Adagrad algorithm. It has been proposed in [Adaptive Subgradient Methods for Online Learning and Stochastic Optimization](http://jmlr.org/papers/v12/duchi11a.html). Parameters * **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups * **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 1e-2) * **lr\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate decay (default: 0) * **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay (L2 penalty) (default: 0) * **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – term added to the denominator to improve numerical stability (default: 1e-10) `step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adagrad.html#Adagrad.step) Performs a single optimization step. Parameters **closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss. `class torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adam.html#Adam) Implements Adam algorithm. It has been proposed in [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980). The implementation of the L2 penalty follows changes proposed in [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101).
Parameters * **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups * **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 1e-3) * **betas** (*Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999)) * **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – term added to the denominator to improve numerical stability (default: 1e-8) * **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay (L2 penalty) (default: 0) * **amsgrad** (*boolean**,* *optional*) – whether to use the AMSGrad variant of this algorithm from the paper [On the Convergence of Adam and Beyond](https://openreview.net/forum?id=ryQu7f-RZ) (default: False) `step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adam.html#Adam.step) Performs a single optimization step. Parameters **closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss. `class torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adamw.html#AdamW) Implements AdamW algorithm. The original Adam algorithm was proposed in [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980). The AdamW variant was proposed in [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101). Parameters * **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups * **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 1e-3) * **betas** (*Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999)) * **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – term added to the denominator to improve numerical stability (default: 1e-8) * **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay coefficient (default: 1e-2) * **amsgrad** (*boolean**,* *optional*) – whether to use the AMSGrad variant of this algorithm from the paper [On the Convergence of Adam and Beyond](https://openreview.net/forum?id=ryQu7f-RZ) (default: False) `step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adamw.html#AdamW.step) Performs a single optimization step. Parameters **closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss. `class torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/sparse_adam.html#SparseAdam) Implements lazy version of Adam algorithm suitable for sparse tensors. 
In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters. Parameters * **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups * **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 1e-3) * **betas** (*Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999)) * **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – term added to the denominator to improve numerical stability (default: 1e-8) `step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/sparse_adam.html#SparseAdam.step) Performs a single optimization step. Parameters **closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss. `class torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adamax.html#Adamax) Implements Adamax algorithm (a variant of Adam based on infinity norm). It has been proposed in [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980). Parameters * **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups * **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 2e-3) * **betas** (*Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – coefficients used for computing running averages of gradient and its square * **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – term added to the denominator to improve numerical stability (default: 1e-8) * **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay (L2 penalty) (default: 0) `step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adamax.html#Adamax.step) Performs a single optimization step. Parameters **closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss. `class torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/asgd.html#ASGD) Implements Averaged Stochastic Gradient Descent. It has been proposed in [Acceleration of stochastic approximation by averaging](https://dl.acm.org/citation.cfm?id=131098). 
`class torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adamax.html#Adamax)

Implements Adamax algorithm (a variant of Adam based on infinity norm). It has been proposed in [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980).

Parameters

* **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups
* **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 2e-3)
* **betas** (*Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999))
* **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – term added to the denominator to improve numerical stability (default: 1e-8)
* **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay (L2 penalty) (default: 0)

`step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/adamax.html#Adamax.step)

Performs a single optimization step.

Parameters

**closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss.

`class torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/asgd.html#ASGD)

Implements Averaged Stochastic Gradient Descent. It has been proposed in [Acceleration of stochastic approximation by averaging](https://dl.acm.org/citation.cfm?id=131098).

Parameters

* **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups
* **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 1e-2)
* **lambd** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – decay term (default: 1e-4)
* **alpha** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – power for eta update (default: 0.75)
* **t0** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – point at which to start averaging (default: 1e6)
* **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay (L2 penalty) (default: 0)

`step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/asgd.html#ASGD.step)

Performs a single optimization step.

Parameters

**closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss.

`class torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lbfgs.html#LBFGS)

Implements L-BFGS algorithm, heavily inspired by [minFunc](https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html).

Warning

This optimizer doesn't support per-parameter options and parameter groups (there can be only one).

Warning

Right now all parameters have to be on a single device. This will be improved in the future.

Note

This is a very memory intensive optimizer (it requires additional `param_bytes * (history_size + 1)` bytes). If it doesn't fit in memory try reducing the history size, or use a different algorithm.

Parameters

* **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – learning rate (default: 1)
* **max\_iter** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – maximal number of iterations per optimization step (default: 20)
* **max\_eval** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – maximal number of function evaluations per optimization step (default: max\_iter \* 1.25).
* **tolerance\_grad** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – termination tolerance on first order optimality (default: 1e-7).
* **tolerance\_change** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – termination tolerance on function value/parameter changes (default: 1e-9).
* **history\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – update history size (default: 100).
* **line\_search\_fn** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – either 'strong\_wolfe' or None (default: None).

`step(closure)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lbfgs.html#LBFGS.step)

Performs a single optimization step.

Parameters

**closure** (*callable*) – A closure that reevaluates the model and returns the loss.
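Unlike the other optimizers here, `step` takes a mandatory closure, so a short hedged sketch may help; `model`, `loss_fn`, `input`, and `target` are again assumed to exist:

```
optimizer = torch.optim.LBFGS(model.parameters(), lr=1,
                              history_size=10, line_search_fn="strong_wolfe")

def closure():
    # L-BFGS re-evaluates the objective several times per step, so the
    # closure must clear stale gradients and return the current loss.
    optimizer.zero_grad()
    loss = loss_fn(model(input), target)
    loss.backward()
    return loss

optimizer.step(closure)
```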
`class torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/rmsprop.html#RMSprop)

Implements RMSprop algorithm. Proposed by G. Hinton in his [course](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). The centered version first appears in [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/pdf/1308.0850v5.pdf). The implementation here takes the square root of the gradient average before adding epsilon (note that TensorFlow interchanges these two operations). The effective learning rate is thus $\alpha/(\sqrt{v} + \epsilon)$, where $\alpha$ is the scheduled learning rate and $v$ is the weighted moving average of the squared gradient.

Parameters

* **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups
* **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 1e-2)
* **momentum** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – momentum factor (default: 0)
* **alpha** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – smoothing constant (default: 0.99)
* **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – term added to the denominator to improve numerical stability (default: 1e-8)
* **centered** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, compute the centered RMSProp, in which the gradient is normalized by an estimate of its variance
* **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay (L2 penalty) (default: 0)

`step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/rmsprop.html#RMSprop.step)

Performs a single optimization step.

Parameters

**closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss.

`class torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50))` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/rprop.html#Rprop)

Implements the resilient backpropagation algorithm.

Parameters

* **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups
* **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – learning rate (default: 1e-2)
* **etas** (*Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – pair of (etaminus, etaplus), the multiplicative decrease and increase factors (default: (0.5, 1.2))
* **step\_sizes** (*Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – a pair of minimal and maximal allowed step sizes (default: (1e-6, 50))

`step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/rprop.html#Rprop.step)

Performs a single optimization step.

Parameters

**closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss.
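A brief construction sketch for the two optimizers above; `model` is assumed to exist and the hyperparameter values are illustrative only:

```
# Centered RMSprop normalizes the update by an estimate of the
# gradient variance rather than the raw second moment.
opt_rms = torch.optim.RMSprop(model.parameters(), lr=0.01,
                              alpha=0.99, momentum=0.9, centered=True)

# Rprop adapts a per-parameter step size multiplicatively between
# step_sizes[0] and step_sizes[1]; it is generally used with
# full-batch gradients rather than noisy minibatch estimates.
opt_rprop = torch.optim.Rprop(model.parameters(), lr=0.01,
                              etas=(0.5, 1.2), step_sizes=(1e-6, 50))
```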
`class torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/sgd.html#SGD)

Implements stochastic gradient descent (optionally with momentum). Nesterov momentum is based on the formula from [On the importance of initialization and momentum in deep learning](http://www.cs.toronto.edu/%7Ehinton/absps/momentum.pdf).

Parameters

* **params** (*iterable*) – iterable of parameters to optimize or dicts defining parameter groups
* **lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – learning rate
* **momentum** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – momentum factor (default: 0)
* **weight\_decay** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – weight decay (L2 penalty) (default: 0)
* **dampening** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – dampening for momentum (default: 0)
* **nesterov** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – enables Nesterov momentum (default: False)

#### Example

```
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
```

Note

The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et al. and implementations in some other frameworks. Considering the specific case of Momentum, the update can be written as

$$\begin{aligned} v_{t+1} & = \mu v_t + g_{t+1}, \\ p_{t+1} & = p_t - \text{lr} \, v_{t+1}, \end{aligned}$$

where $p$, $g$, $v$ and $\mu$ denote the parameters, gradient, velocity, and momentum respectively. This is in contrast to Sutskever et al. and other frameworks which employ an update of the form

$$\begin{aligned} v_{t+1} & = \mu v_t + \text{lr} \, g_{t+1}, \\ p_{t+1} & = p_t - v_{t+1}. \end{aligned}$$

The Nesterov version is analogously modified.

`step(closure=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/sgd.html#SGD.step)

Performs a single optimization step.

Parameters

**closure** (*callable**,* *optional*) – A closure that reevaluates the model and returns the loss.

How to adjust learning rate
---------------------------

`torch.optim.lr_scheduler` provides several methods to adjust the learning rate based on the number of epochs. [`torch.optim.lr_scheduler.ReduceLROnPlateau`](#torch.optim.lr_scheduler.ReduceLROnPlateau "torch.optim.lr_scheduler.ReduceLROnPlateau") allows dynamic learning rate reducing based on some validation measurements.

Learning rate scheduling should be applied after the optimizer's update; e.g., you should write your code this way:

```
>>> scheduler = ...
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
```

Warning

Prior to PyTorch 1.1.0, the learning rate scheduler was expected to be called before the optimizer's update; 1.1.0 changed this behavior in a BC-breaking way. If you use the learning rate scheduler (calling `scheduler.step()`) before the optimizer's update (calling `optimizer.step()`), this will skip the first value of the learning rate schedule. If you are unable to reproduce results after upgrading to PyTorch 1.1.0, please check if you are calling `scheduler.step()` at the wrong time.
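To make the required ordering concrete, here is a hedged sketch of a full loop in which the optimizer update always precedes the scheduler update; `model`, `loss_fn`, and `loader` are placeholders:

```
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):
    for input, target in loader:
        optimizer.zero_grad()
        loss_fn(model(input), target).backward()
        optimizer.step()      # optimizer update first ...
    scheduler.step()          # ... scheduler update once per epoch, after
```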
`class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#LambdaLR)

Sets the learning rate of each parameter group to the initial lr times a given function. When last\_epoch=-1, sets initial lr as lr.

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **lr\_lambda** (*function* *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param\_groups.
* **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The index of last epoch. Default: -1.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.

#### Example

```
>>> # Assuming optimizer has two groups.
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95 ** epoch
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
```

`load_state_dict(state_dict)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#LambdaLR.load_state_dict)

Loads the scheduler's state. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters

**state\_dict** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – scheduler state. Should be an object returned from a call to [`state_dict()`](#torch.optim.lr_scheduler.LambdaLR.state_dict "torch.optim.lr_scheduler.LambdaLR.state_dict").

`state_dict()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#LambdaLR.state_dict)

Returns the state of the scheduler as a [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)"). It contains an entry for every variable in self.\_\_dict\_\_ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

`class torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#MultiplicativeLR)

Multiply the learning rate of each parameter group by the factor given in the specified function. When last\_epoch=-1, sets initial lr as lr.

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **lr\_lambda** (*function* *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param\_groups.
* **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The index of last epoch. Default: -1.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.
#### Example

```
>>> lmbda = lambda epoch: 0.95
>>> scheduler = MultiplicativeLR(optimizer, lr_lambda=lmbda)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
```

`load_state_dict(state_dict)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#MultiplicativeLR.load_state_dict)

Loads the scheduler's state.

Parameters

**state\_dict** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – scheduler state. Should be an object returned from a call to [`state_dict()`](#torch.optim.lr_scheduler.MultiplicativeLR.state_dict "torch.optim.lr_scheduler.MultiplicativeLR.state_dict").

`state_dict()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#MultiplicativeLR.state_dict)

Returns the state of the scheduler as a [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)"). It contains an entry for every variable in self.\_\_dict\_\_ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

`class torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#StepLR)

Decays the learning rate of each parameter group by gamma every step\_size epochs. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last\_epoch=-1, sets initial lr as lr.

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **step\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Period of learning rate decay.
* **gamma** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Multiplicative factor of learning rate decay. Default: 0.1.
* **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The index of last epoch. Default: -1.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.

#### Example

```
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05     if epoch < 30
>>> # lr = 0.005    if 30 <= epoch < 60
>>> # lr = 0.0005   if 60 <= epoch < 90
>>> # ...
>>> scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
```

`class torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#MultiStepLR)

Decays the learning rate of each parameter group by gamma once the number of epochs reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last\_epoch=-1, sets initial lr as lr.

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **milestones** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – List of epoch indices. Must be increasing.
* **gamma** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Multiplicative factor of learning rate decay. Default: 0.1.
* **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The index of last epoch. Default: -1.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.

#### Example

```
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05     if epoch < 30
>>> # lr = 0.005    if 30 <= epoch < 80
>>> # lr = 0.0005   if epoch >= 80
>>> scheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
```

`class torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#ExponentialLR)

Decays the learning rate of each parameter group by gamma every epoch. When last\_epoch=-1, sets initial lr as lr.

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **gamma** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Multiplicative factor of learning rate decay.
* **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The index of last epoch. Default: -1.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.

`class torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#CosineAnnealingLR)

Set the learning rate of each parameter group using a cosine annealing schedule, where $\eta_{max}$ is set to the initial lr and $T_{cur}$ is the number of epochs since the last restart in SGDR:

$$\begin{aligned} \eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} \neq (2k+1)T_{max}; \\ \eta_{t+1} & = \eta_t + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} = (2k+1)T_{max}. \end{aligned}$$

When last\_epoch=-1, sets initial lr as lr. Notice that because the schedule is defined recursively, the learning rate can be simultaneously modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, the learning rate at each step becomes:

$$\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)$$

It has been proposed in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). Note that this only implements the cosine annealing part of SGDR, and not the restarts.

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **T\_max** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Maximum number of iterations.
* **eta\_min** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Minimum learning rate. Default: 0.
* **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The index of last epoch. Default: -1.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.
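CosineAnnealingLR has no example of its own in this section, so here is a minimal sketch in the style of the surrounding examples; `model` and the `train`/`validate` helpers are placeholders:

```
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Anneal from the initial lr of 0.1 down to eta_min over T_max epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=100, eta_min=1e-5)
for epoch in range(100):
    train(...)
    validate(...)
    scheduler.step()
```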
`class torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#ReduceLROnPlateau)

Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity and, if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – One of `min`, `max`. In `min` mode, lr will be reduced when the quantity monitored has stopped decreasing; in `max` mode it will be reduced when the quantity monitored has stopped increasing. Default: 'min'.
* **factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Factor by which the learning rate will be reduced. new\_lr = lr \* factor. Default: 0.1.
* **patience** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of epochs with no improvement after which learning rate will be reduced. For example, if `patience = 2`, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the 3rd epoch if the loss still hasn't improved by then. Default: 10.
* **threshold** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.
* **threshold\_mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – One of `rel`, `abs`. In `rel` mode, dynamic\_threshold = best \* ( 1 + threshold ) in `max` mode or best \* ( 1 - threshold ) in `min` mode. In `abs` mode, dynamic\_threshold = best + threshold in `max` mode or best - threshold in `min` mode. Default: 'rel'.
* **cooldown** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.
* **min\_lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. Default: 0.
* **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.

#### Example

```
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = ReduceLROnPlateau(optimizer, 'min')
>>> for epoch in range(10):
>>>     train(...)
>>>     val_loss = validate(...)
>>>     # Note that step should be called after validate()
>>>     scheduler.step(val_loss)
```

`class torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#CyclicLR)

Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/abs/1506.01186). The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.

Cyclical learning rate policy changes the learning rate after every batch. `step` should be called after a batch has been used for training.

This class has three built-in policies, as put forth in the paper:

* "triangular": A basic triangular cycle without amplitude scaling.
* "triangular2": A basic triangular cycle that scales initial amplitude by half each cycle.
* "exp\_range": A cycle that scales initial amplitude by $\text{gamma}^{\text{cycle iterations}}$ at each cycle iteration.

This implementation was adapted from the github repo: [bckenstler/CLR](https://github.com/bckenstler/CLR)

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **base\_lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – Initial learning rate which is the lower boundary in the cycle for each parameter group.
* **max\_lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – Upper learning rate boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max\_lr - base\_lr). The lr at any cycle is the sum of base\_lr and some scaling of the amplitude; therefore max\_lr may not actually be reached depending on scaling function.
* **step\_size\_up** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of training iterations in the increasing half of a cycle. Default: 2000
* **step\_size\_down** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of training iterations in the decreasing half of a cycle. If step\_size\_down is None, it is set to step\_size\_up. Default: None
* **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – One of {triangular, triangular2, exp\_range}. Values correspond to policies detailed above. If scale\_fn is not None, this argument is ignored. Default: 'triangular'
* **gamma** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Constant in 'exp\_range' scaling function: gamma\*\*(cycle iterations). Default: 1.0
* **scale\_fn** (*function*) – Custom scaling policy defined by a single argument lambda function, where 0 <= scale\_fn(x) <= 1 for all x >= 0. If specified, then 'mode' is ignored. Default: None
* **scale\_mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – {'cycle', 'iterations'}.
Defines whether scale\_fn is evaluated on cycle number or cycle iterations (training iterations since start of cycle). Default: ‘cycle’ * **cycle\_momentum** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, momentum is cycled inversely to learning rate between ‘base\_momentum’ and ‘max\_momentum’. Default: True * **base\_momentum** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is ‘base\_momentum’ and learning rate is ‘max\_lr’. Default: 0.8 * **max\_momentum** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max\_momentum - base\_momentum). The momentum at any cycle is the difference of max\_momentum and some scaling of the amplitude; therefore base\_momentum may not actually be reached depending on scaling function. Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is ‘max\_momentum’ and learning rate is ‘base\_lr’ Default: 0.9 * **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The index of the last batch. This parameter is used when resuming a training job. Since `step()` should be invoked after each batch instead of after each epoch, this number represents the total number of *batches* computed, not the total number of epochs computed. When last\_epoch=-1, the schedule is started from the beginning. Default: -1 * **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`. #### Example ``` >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) >>> scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1) >>> data_loader = torch.utils.data.DataLoader(...) >>> for epoch in range(10): >>> for batch in data_loader: >>> train_batch(...) >>> scheduler.step() ``` `get_lr()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#CyclicLR.get_lr) Calculates the learning rate at batch index. This function treats `self.last_epoch` as the last batch index. If `self.cycle_momentum` is `True`, this function has a side effect of updating the optimizer’s momentum. `class torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, three_phase=False, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#OneCycleLR) Sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate. 
This policy was initially described in the paper [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). The 1cycle learning rate policy changes the learning rate after every batch. `step` should be called after a batch has been used for training. This scheduler is not chainable. Note also that the total number of steps in the cycle can be determined in one of two ways (listed in order of precedence): 1. A value for total\_steps is explicitly provided. 2. A number of epochs (epochs) and a number of steps per epoch (steps\_per\_epoch) are provided. In this case, the number of total steps is inferred by total\_steps = epochs \* steps\_per\_epoch You must either provide a value for total\_steps or provide a value for both epochs and steps\_per\_epoch. The default behaviour of this scheduler follows the fastai implementation of 1cycle, which claims that “unpublished work has shown even better results by using only two phases”. To mimic the behaviour of the original paper instead, set `three_phase=True`. Parameters * **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer. * **max\_lr** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – Upper learning rate boundaries in the cycle for each parameter group. * **total\_steps** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The total number of steps in the cycle. Note that if a value is not provided here, then it must be inferred by providing a value for epochs and steps\_per\_epoch. Default: None * **epochs** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The number of epochs to train for. This is used along with steps\_per\_epoch in order to infer the total number of steps in the cycle if a value for total\_steps is not provided. Default: None * **steps\_per\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The number of steps per epoch to train for. This is used along with epochs in order to infer the total number of steps in the cycle if a value for total\_steps is not provided. Default: None * **pct\_start** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – The percentage of the cycle (in number of steps) spent increasing the learning rate. Default: 0.3 * **anneal\_strategy** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – {‘cos’, ‘linear’} Specifies the annealing strategy: “cos” for cosine annealing, “linear” for linear annealing. Default: ‘cos’ * **cycle\_momentum** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, momentum is cycled inversely to learning rate between ‘base\_momentum’ and ‘max\_momentum’. Default: True * **base\_momentum** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is ‘base\_momentum’ and learning rate is ‘max\_lr’. 
Default: 0.85
* **max\_momentum** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max\_momentum - base\_momentum). Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is 'max\_momentum' and learning rate is 'base\_lr'. Default: 0.95
* **div\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Determines the initial learning rate via initial\_lr = max\_lr/div\_factor. Default: 25
* **final\_div\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Determines the minimum learning rate via min\_lr = initial\_lr/final\_div\_factor. Default: 1e4
* **three\_phase** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, use a third phase of the schedule to annihilate the learning rate according to 'final\_div\_factor' instead of modifying the second phase (the first two phases will be symmetrical about the step indicated by 'pct\_start').
* **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The index of the last batch. This parameter is used when resuming a training job. Since `step()` should be invoked after each batch instead of after each epoch, this number represents the total number of *batches* computed, not the total number of epochs computed. When last\_epoch=-1, the schedule is started from the beginning. Default: -1
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.

#### Example

```
>>> data_loader = torch.utils.data.DataLoader(...)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=len(data_loader), epochs=10)
>>> for epoch in range(10):
>>>     for batch in data_loader:
>>>         train_batch(...)
>>>         scheduler.step()
```

`class torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1, verbose=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#CosineAnnealingWarmRestarts)

Set the learning rate of each parameter group using a cosine annealing schedule, where $\eta_{max}$ is set to the initial lr, $T_{cur}$ is the number of epochs since the last restart and $T_i$ is the number of epochs between two warm restarts in SGDR:

$$\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_i}\pi\right)\right)$$

When $T_{cur}=T_i$, set $\eta_t = \eta_{min}$. When $T_{cur}=0$ after restart, set $\eta_t=\eta_{max}$.

It has been proposed in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983).

Parameters

* **optimizer** ([Optimizer](#torch.optim.Optimizer "torch.optim.Optimizer")) – Wrapped optimizer.
* **T\_0** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of iterations for the first restart.
* **T\_mult** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – A factor by which $T_i$ increases after a restart. Default: 1.
* **eta\_min** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Minimum learning rate. Default: 0.
* **last\_epoch** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – The index of last epoch. Default: -1.
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, prints a message to stdout for each update. Default: `False`.

`step(epoch=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/optim/lr_scheduler.html#CosineAnnealingWarmRestarts.step)

Step could be called after every batch update.

#### Example

```
>>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> iters = len(dataloader)
>>> for epoch in range(20):
>>>     for i, sample in enumerate(dataloader):
>>>         inputs, labels = sample['inputs'], sample['labels']
>>>         optimizer.zero_grad()
>>>         outputs = net(inputs)
>>>         loss = criterion(outputs, labels)
>>>         loss.backward()
>>>         optimizer.step()
>>>         scheduler.step(epoch + i / iters)
```

This function can be called in an interleaved way.

#### Example

```
>>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> for epoch in range(20):
>>>     scheduler.step()
>>> scheduler.step(26)
>>> scheduler.step() # scheduler.step(27), instead of scheduler(20)
```

Stochastic Weight Averaging
---------------------------

`torch.optim.swa_utils` implements Stochastic Weight Averaging (SWA). In particular, the `torch.optim.swa_utils.AveragedModel` class implements SWA models, `torch.optim.swa_utils.SWALR` implements the SWA learning rate scheduler and `torch.optim.swa_utils.update_bn()` is a utility function used to update SWA batch normalization statistics at the end of training.

SWA has been proposed in [Averaging Weights Leads to Wider Optima and Better Generalization](https://arxiv.org/abs/1803.05407).

### Constructing averaged models

The `AveragedModel` class serves to compute the weights of the SWA model. You can create an averaged model by running:

```
>>> swa_model = AveragedModel(model)
```

Here the model `model` can be an arbitrary [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module") object. `swa_model` will keep track of the running averages of the parameters of the `model`. To update these averages, you can use the `update_parameters()` function:

```
>>> swa_model.update_parameters(model)
```

### SWA learning rate schedules

Typically, in SWA the learning rate is set to a high constant value. `SWALR` is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. For example, the following code creates a scheduler that linearly anneals the learning rate from its initial value to 0.05 in 5 epochs within each parameter group:

```
>>> swa_scheduler = torch.optim.swa_utils.SWALR(optimizer, \
>>>         anneal_strategy="linear", anneal_epochs=5, swa_lr=0.05)
```

You can also use cosine annealing to a fixed value instead of linear annealing by setting `anneal_strategy="cos"`.

### Taking care of batch normalization

`update_bn()` is a utility function that computes the batchnorm statistics for the SWA model on a given dataloader `loader` at the end of training:

```
>>> torch.optim.swa_utils.update_bn(loader, swa_model)
```

`update_bn()` applies the `swa_model` to every element in the dataloader and computes the activation statistics for each batch normalization layer in the model.
Warning

`update_bn()` assumes that each batch in the dataloader `loader` is either a tensor or a list of tensors where the first element is the tensor that the network `swa_model` should be applied to. If your dataloader has a different structure, you can update the batch normalization statistics of the `swa_model` by doing a forward pass with the `swa_model` on each element of the dataset.

### Custom averaging strategies

By default, `torch.optim.swa_utils.AveragedModel` computes a running equal average of the parameters that you provide, but you can also use custom averaging functions with the `avg_fn` parameter. In the following example `ema_model` computes an exponential moving average.

Example:

```
>>> ema_avg = lambda averaged_model_parameter, model_parameter, num_averaged:\
>>>         0.1 * averaged_model_parameter + 0.9 * model_parameter
>>> ema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)
```

### Putting it all together

In the example below, `swa_model` is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs and we switch to the SWA learning rate schedule and start to collect SWA averages of the parameters at epoch 160:

```
>>> loader, optimizer, model, loss_fn = ...
>>> swa_model = torch.optim.swa_utils.AveragedModel(model)
>>> scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
>>> swa_start = 160
>>> swa_scheduler = SWALR(optimizer, swa_lr=0.05)
>>>
>>> for epoch in range(300):
>>>     for input, target in loader:
>>>         optimizer.zero_grad()
>>>         loss_fn(model(input), target).backward()
>>>         optimizer.step()
>>>     if epoch > swa_start:
>>>         swa_model.update_parameters(model)
>>>         swa_scheduler.step()
>>>     else:
>>>         scheduler.step()
>>>
>>> # Update bn statistics for the swa_model at the end
>>> torch.optim.swa_utils.update_bn(loader, swa_model)
>>> # Use swa_model to make predictions on test data
>>> preds = swa_model(test_input)
```
pytorch torch.backends

torch.backends
==============

`torch.backends` controls the behavior of various backends that PyTorch supports. These backends include:

* `torch.backends.cuda`
* `torch.backends.cudnn`
* `torch.backends.mkl`
* `torch.backends.mkldnn`
* `torch.backends.openmp`

torch.backends.cuda
-------------------

`torch.backends.cuda.is_built()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/backends/cuda.html#is_built)

Returns whether PyTorch is built with CUDA support. Note that this doesn't necessarily mean CUDA is available; just that if this PyTorch binary were run on a machine with working CUDA drivers and devices, we would be able to use it.

`torch.backends.cuda.matmul.allow_tf32`

A [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)") that controls whether TensorFloat-32 tensor cores may be used in matrix multiplications on Ampere or newer GPUs. See [TensorFloat-32(TF32) on Ampere devices](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

`torch.backends.cuda.cufft_plan_cache`

`cufft_plan_cache` caches the cuFFT plans.

`size`

A readonly [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") that shows the number of plans currently in the cuFFT plan cache.

`max_size`

An [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") that controls the capacity of the cuFFT plan cache.

`clear()`

Clears the cuFFT plan cache.

torch.backends.cudnn
--------------------

`torch.backends.cudnn.version()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/backends/cudnn.html#version)

Returns the version of cuDNN.

`torch.backends.cudnn.is_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/backends/cudnn.html#is_available)

Returns a bool indicating if CUDNN is currently available.

`torch.backends.cudnn.enabled`

A [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)") that controls whether cuDNN is enabled.

`torch.backends.cudnn.allow_tf32`

A [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)") that controls whether TensorFloat-32 tensor cores may be used in cuDNN convolutions on Ampere or newer GPUs. See [TensorFloat-32(TF32) on Ampere devices](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

`torch.backends.cudnn.deterministic`

A [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)") that, if True, causes cuDNN to only use deterministic convolution algorithms. See also [`torch.are_deterministic_algorithms_enabled()`](generated/torch.are_deterministic_algorithms_enabled#torch.are_deterministic_algorithms_enabled "torch.are_deterministic_algorithms_enabled") and [`torch.use_deterministic_algorithms()`](generated/torch.use_deterministic_algorithms#torch.use_deterministic_algorithms "torch.use_deterministic_algorithms").

`torch.backends.cudnn.benchmark`

A [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)") that, if True, causes cuDNN to benchmark multiple convolution algorithms and select the fastest.

torch.backends.mkl
------------------

`torch.backends.mkl.is_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/backends/mkl.html#is_available)

Returns whether PyTorch is built with MKL support.

torch.backends.mkldnn
---------------------

`torch.backends.mkldnn.is_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/backends/mkldnn.html#is_available)

Returns whether PyTorch is built with MKL-DNN support.
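As a quick illustration, here is a hedged sketch of querying the availability checks and toggling the cuDNN and TF32 flags described above; what each call returns depends on how your PyTorch binary was built:

```
import torch

print(torch.backends.cuda.is_built())       # compiled with CUDA support?
print(torch.backends.cudnn.is_available())  # cuDNN usable at runtime?
print(torch.backends.mkl.is_available())    # compiled with MKL?

# Favor speed over reproducibility for fixed-shape convolution workloads.
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False

# Permit TensorFloat-32 tensor cores on Ampere or newer GPUs.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```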
torch.backends.openmp --------------------- `torch.backends.openmp.is_available()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/backends/openmp.html#is_available) Returns whether PyTorch is built with OpenMP support. pytorch Benchmark Utils - torch.utils.benchmark Benchmark Utils - torch.utils.benchmark ======================================= `class torch.utils.benchmark.Timer(stmt='pass', setup='pass', timer=<function timer>, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=<Language.PYTHON: 0>)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/timer.html#Timer) Helper class for measuring execution time of PyTorch statements. For a full tutorial on how to use this class, see: <https://pytorch.org/tutorials/recipes/recipes/benchmark.html> The PyTorch Timer is based on `timeit.Timer` (and in fact uses `timeit.Timer` internally), but with several key differences: 1. Runtime aware: Timer will perform warmups (important as some elements of PyTorch are lazily initialized), set threadpool size so that comparisons are apples-to-apples, and synchronize asynchronous CUDA functions when necessary. 2. Focus on replicates: When measuring code, and particularly complex kernels / models, run-to-run variation is a significant confounding factor. It is expected that all measurements should include replicates to quantify noise and allow median computation, which is more robust than mean. To that effect, this class deviates from the `timeit` API by conceptually merging `timeit.Timer.repeat` and `timeit.Timer.autorange`. (Exact algorithms are discussed in method docstrings.) The `timeit` method is replicated for cases where an adaptive strategy is not desired. 3. Optional metadata: When defining a Timer, one can optionally specify `label`, `sub_label`, `description`, and `env`. (Defined later) These fields are included in the representation of result object and by the `Compare` class to group and display results for comparison. 4. Instruction counts In addition to wall times, Timer can run a statement under Callgrind and report instructions executed. Directly analogous to `timeit.Timer` constructor arguments: `stmt`, `setup`, `timer`, `globals` PyTorch Timer specific constructor arguments: `label`, `sub_label`, `description`, `env`, `num_threads` Parameters * **stmt** – Code snippet to be run in a loop and timed. * **setup** – Optional setup code. Used to define variables used in `stmt` * **timer** – Callable which returns the current time. If PyTorch was built without CUDA or there is no GPU present, this defaults to `timeit.default_timer`; otherwise it will synchronize CUDA before measuring the time. * **globals** – A dict which defines the global variables when `stmt` is being executed. This is the other method for providing variables which `stmt` needs. * **label** – String which summarizes `stmt`. For instance, if `stmt` is “torch.nn.functional.relu(torch.add(x, 1, out=out))” one might set label to “ReLU(x + 1)” to improve readability. * **sub\_label** – Provide supplemental information to disambiguate measurements with identical stmt or label. For instance, in our example above sub\_label might be “float” or “int”, so that it is easy to differentiate: “ReLU(x + 1): (float)” ”ReLU(x + 1): (int)” when printing Measurements or summarizing using `Compare`. * **description** – String to distinguish measurements with identical label and sub\_label. 
The principal use of `description` is to signal to `Compare` the columns of data. For instance one might set it based on the input size to create a table of the form:

```
                        | n=1 | n=4 | ...
                        ------------- ...
ReLU(x + 1): (float)    | ... | ... | ...
ReLU(x + 1): (int)      | ... | ... | ...
```

using `Compare`. It is also included when printing a Measurement.

* **env** – This tag indicates that otherwise identical tasks were run in different environments, and are therefore not equivalent, for instance when A/B testing a change to a kernel. `Compare` will treat Measurements with different `env` specification as distinct when merging replicate runs.
* **num\_threads** – The size of the PyTorch threadpool when executing `stmt`. Single threaded performance is important as both a key inference workload and a good indicator of intrinsic algorithmic efficiency, so the default is set to one. This is in contrast to the default PyTorch threadpool size which tries to utilize all cores.

`blocked_autorange(callback=None, min_run_time=0.2)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/timer.html#Timer.blocked_autorange)

Measure many replicates while keeping timer overhead to a minimum.

At a high level, blocked\_autorange executes the following pseudo-code:

```
`setup`

total_time = 0
while total_time < min_run_time
    start = timer()
    for _ in range(block_size):
        `stmt`
    total_time += (timer() - start)
```

Note the variable `block_size` in the inner loop. The choice of block size is important to measurement quality, and must balance two competing objectives:

1. A small block size results in more replicates and generally better statistics.
2. A large block size better amortizes the cost of `timer` invocation, and results in a less biased measurement. This is important because CUDA synchronization time is non-trivial (order single to low double digit microseconds) and would otherwise bias the measurement.

blocked\_autorange sets block\_size by running a warmup period, increasing block size until timer overhead is less than 0.1% of the overall computation. This value is then used for the main measurement loop.

Returns

A `Measurement` object that contains measured runtimes and repetition counts, and can be used to compute statistics (mean, median, etc.).
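Putting the constructor arguments and `blocked_autorange` together, a minimal hedged sketch; the statement and tensor sizes are invented for illustration:

```
import torch
from torch.utils.benchmark import Timer

t = Timer(
    stmt="torch.nn.functional.relu(torch.add(x, 1))",
    setup="x = torch.randn(1024, 1024)",
    globals={"torch": torch},  # make torch visible to stmt/setup
    label="ReLU(x + 1)",
    sub_label="float",
    num_threads=1,
)
m = t.blocked_autorange(min_run_time=0.5)
print(m)  # a Measurement: median/mean wall time plus the metadata above
```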
`collect_callgrind(number=100, collect_baseline=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#CallgrindStats)

Collect instruction counts using Callgrind.

Unlike wall times, instruction counts are deterministic (modulo non-determinism in the program itself and small amounts of jitter from the Python interpreter). This makes them ideal for detailed performance analysis. This method runs `stmt` in a separate process so that Valgrind can instrument the program. Performance is severely degraded due to the instrumentation, however this is ameliorated by the fact that a small number of iterations is generally sufficient to obtain good measurements.

In order to use this method `valgrind`, `callgrind_control`, and `callgrind_annotate` must be installed.

Because there is a process boundary between the caller (this process) and the `stmt` execution, `globals` cannot contain arbitrary in-memory data structures. (Unlike timing methods) Instead, globals are restricted to builtins, `nn.Modules`'s, and TorchScripted functions/modules to reduce the surprise factor from serialization and subsequent deserialization. The `GlobalsBridge` class provides more detail on this subject. Take particular care with nn.Modules: they rely on pickle and you may need to add an import to `setup` for them to transfer properly.

By default, a profile for an empty statement will be collected and cached to indicate how many instructions are from the Python loop which drives `stmt`.

Returns

A `CallgrindStats` object which provides instruction counts and some basic facilities for analyzing and manipulating results.

`timeit(number=1000000)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/timer.html#Timer.timeit)

Mirrors the semantics of timeit.Timer.timeit(). Execute the main statement (`stmt`) `number` times. <https://docs.python.org/3/library/timeit.html#timeit.Timer.timeit>

`class torch.utils.benchmark.Measurement(number_per_run, raw_times, task_spec, metadata=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/common.html#Measurement)

The result of a Timer measurement. This class stores one or more measurements of a given statement. It is serializable and provides several convenience methods (including a detailed \_\_repr\_\_) for downstream consumers.

`static merge(measurements)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/common.html#Measurement.merge)

Convenience method for merging replicates. Merge will extrapolate times to `number_per_run=1` and will not transfer any metadata. (Since it might differ between replicates)

`property significant_figures`

Approximate significant figure estimate. This property is intended to give a convenient way to estimate the precision of a measurement. It only uses the interquartile region to estimate statistics to try to mitigate skew from the tails, and uses a static z value of 1.645 since it is not expected to be used for small values of `n`, so z can approximate `t`. The significant figure estimation is used in conjunction with the `trim_sigfig` method to provide a more human interpretable data summary. \_\_repr\_\_ does not use this method; it simply displays raw values. Significant figure estimation is intended for `Compare`.

`class torch.utils.benchmark.CallgrindStats(task_spec, number_per_run, built_with_debug_symbols, baseline_inclusive_stats, baseline_exclusive_stats, stmt_inclusive_stats, stmt_exclusive_stats)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#CallgrindStats)

Top level container for Callgrind results collected by Timer. Manipulation is generally done using the FunctionCounts class, which is obtained by calling `CallgrindStats.stats(…)`. Several convenience methods are provided as well; the most significant is `CallgrindStats.as_standardized()`.

`as_standardized()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#CallgrindStats.as_standardized)

Strip library names and some prefixes from function strings. When comparing two different sets of instruction counts, one stumbling block can be path prefixes. Callgrind includes the full filepath when reporting a function (as it should). However, this can cause issues when diffing profiles. If a key component such as Python or PyTorch was built in separate locations in the two profiles, this can result in something resembling:

```
 23234231 /tmp/first_build_dir/thing.c:foo(...)
  9823794 /tmp/first_build_dir/thing.c:bar(...)
  ...
    53453 .../aten/src/Aten/...:function_that_actually_changed(...)
  ...
 -9823794 /tmp/second_build_dir/thing.c:bar(...)
-23234231  /tmp/second_build_dir/thing.c:foo(...)
```

Stripping prefixes can ameliorate this issue by regularizing the strings and causing better cancellation of equivalent call sites when diffing.

`counts(*, denoise=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#CallgrindStats.counts)

Returns the total number of instructions executed. See `FunctionCounts.denoise()` for an explanation of the `denoise` arg.

`delta(other, inclusive=False, subtract_baselines=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#CallgrindStats.delta)

Diff two sets of counts. One common reason to collect instruction counts is to determine the effect that a particular change will have on the number of instructions needed to perform some unit of work. If a change increases that number, the next logical question is “why”. This generally involves looking at what part of the code increased in instruction count. This function automates that process so that one can easily diff counts on both an inclusive and exclusive basis. The `subtract_baselines` argument allows one to disable baseline correction, though in most cases it shouldn’t matter, as the baselines are expected to more or less cancel out.

`stats(inclusive=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#CallgrindStats.stats)

Returns detailed function counts. Conceptually, the FunctionCounts returned can be thought of as a tuple of (count, path\_and\_function\_name) tuples. `inclusive` matches the semantics of callgrind. If True, the counts include instructions executed by children. `inclusive=True` is useful for identifying hot spots in code; `inclusive=False` is useful for reducing noise when diffing counts from two different runs. (See CallgrindStats.delta(…) for more details)

`class torch.utils.benchmark.FunctionCounts(_data, inclusive, _linewidth=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#FunctionCounts)

Container for manipulating Callgrind results. It supports:

1. Addition and subtraction to combine or diff results.
2. Tuple-like indexing.
3. A `denoise` function which strips CPython calls which are known to be non-deterministic and quite noisy.
4. Two higher order methods (`filter` and `transform`) for custom manipulation.

`denoise()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#FunctionCounts.denoise)

Remove known noisy instructions. Several instructions in the CPython interpreter are rather noisy. These instructions involve unicode-to-dictionary lookups which Python uses to map variable names. FunctionCounts is generally a content-agnostic container; however, this is sufficiently important for obtaining reliable results to warrant an exception.

`filter(filter_fn)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#FunctionCounts.filter)

Keep only the elements where `filter_fn` applied to function name returns True.

`transform(map_fn)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.html#FunctionCounts.transform)

Apply `map_fn` to all of the function names. This can be used to regularize function names (e.g.
stripping irrelevant parts of the file path), coalesce entries by mapping multiple functions to the same name (in which case the counts are added together), etc.

pytorch torch.futures

torch.futures
=============

Warning

The `torch.futures` package is experimental and subject to change.

This package provides a [`Future`](#torch.futures.Future "torch.futures.Future") type that encapsulates an asynchronous execution and a set of utility functions to simplify operations on [`Future`](#torch.futures.Future "torch.futures.Future") objects. Currently, the [`Future`](#torch.futures.Future "torch.futures.Future") type is primarily used by the [Distributed RPC Framework](rpc#distributed-rpc-framework).

`class torch.futures.Future`

Wrapper around a `torch._C.Future` which encapsulates an asynchronous execution of a callable, e.g. [`rpc_async()`](rpc#torch.distributed.rpc.rpc_async "torch.distributed.rpc.rpc_async"). It also exposes a set of APIs to add callback functions and set results.

`add_done_callback(self: torch._C.Future, arg0: function) → None`

`done()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/futures.html#Future.done)

Return `True` if this `Future` is done. A `Future` is done if it has a result or an exception.

`set_exception(result)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/futures.html#Future.set_exception)

Set an exception for this `Future`, which will mark this `Future` as completed with an error and trigger all attached callbacks. Note that when calling wait()/value() on this `Future`, the exception set here will be raised inline.

Parameters

**result** ([BaseException](https://docs.python.org/3/library/exceptions.html#BaseException "(in Python v3.9)")) – the exception for this `Future`.

Example:

```
>>> import torch
>>>
>>> fut = torch.futures.Future()
>>> fut.set_exception(ValueError("foo"))
>>> fut.wait()
>>>
>>> # Output:
>>> # ValueError: foo
```

`set_result(result)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/futures.html#Future.set_result)

Set the result for this `Future`, which will mark this `Future` as completed and trigger all attached callbacks. Note that a `Future` cannot be marked completed twice.

Parameters

**result** ([object](https://docs.python.org/3/library/functions.html#object "(in Python v3.9)")) – the result object of this `Future`.

Example:

```
>>> import threading
>>> import time
>>> import torch
>>>
>>> def slow_set_future(fut, value):
>>>     time.sleep(0.5)
>>>     fut.set_result(value)
>>>
>>> fut = torch.futures.Future()
>>> t = threading.Thread(
>>>     target=slow_set_future,
>>>     args=(fut, torch.ones(2) * 3)
>>> )
>>> t.start()
>>>
>>> print(fut.wait())  # tensor([3., 3.])
>>> t.join()
```

`then(callback)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/futures.html#Future.then)

Append the given callback function to this `Future`, which will be run when the `Future` is completed. Multiple callbacks can be added to the same `Future`, and will be invoked in the same order as they were added. The callback must take one argument, which is the reference to this `Future`. The callback function can use the `Future.wait()` API to get the value. Note that if this `Future` is already completed, the given callback will be run immediately inline.

Parameters

**callback** (`Callable`) – a `Callable` that takes this `Future` as the only argument.
Returns

A new `Future` object that holds the return value of the `callback` and will be marked as completed when the given `callback` finishes.

Example:

```
>>> import torch
>>>
>>> def callback(fut):
>>>     print(f"RPC return value is {fut.wait()}.")
>>>
>>> fut = torch.futures.Future()
>>> # The inserted callback will print the return value when
>>> # receiving the response from "worker1"
>>> cb_fut = fut.then(callback)
>>> chain_cb_fut = cb_fut.then(
>>>     lambda x : print(f"Chained cb done. {x.wait()}")
>>> )
>>> fut.set_result(5)
>>>
>>> # Outputs are:
>>> # RPC return value is 5.
>>> # Chained cb done. None
```

`value(self: torch._C.Future) → object`

`wait()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/futures.html#Future.wait)

Block until the value of this `Future` is ready.

Returns

The value held by this `Future`. If the function (callback or RPC) creating the value has thrown an error, this `wait` method will also throw an error.

`torch.futures.collect_all(futures)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/futures.html#collect_all)

Collects the provided [`Future`](#torch.futures.Future "torch.futures.Future") objects into a single combined [`Future`](#torch.futures.Future "torch.futures.Future") that is completed when all of the sub-futures are completed.

Parameters

**futures** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – a list of [`Future`](#torch.futures.Future "torch.futures.Future") objects.

Returns

A [`Future`](#torch.futures.Future "torch.futures.Future") object that resolves to a list of the passed-in Futures.

Example:

```
>>> import torch
>>>
>>> fut0 = torch.futures.Future()
>>> fut1 = torch.futures.Future()
>>>
>>> fut = torch.futures.collect_all([fut0, fut1])
>>>
>>> fut0.set_result(0)
>>> fut1.set_result(1)
>>>
>>> fut_list = fut.wait()
>>> print(f"fut0 result = {fut_list[0].wait()}")
>>> print(f"fut1 result = {fut_list[1].wait()}")
>>> # outputs:
>>> # fut0 result = 0
>>> # fut1 result = 1
```

`torch.futures.wait_all(futures)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/futures.html#wait_all)

Waits for all provided futures to be complete, and returns the list of completed values.

Parameters

**futures** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")) – a list of [`Future`](#torch.futures.Future "torch.futures.Future") objects.

Returns

A list of the completed [`Future`](#torch.futures.Future "torch.futures.Future") results. This method will throw an error if `wait` on any [`Future`](#torch.futures.Future "torch.futures.Future") throws.
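For instance, a minimal sketch mirroring the `collect_all` example above; note that `wait_all` returns the completed values directly, not `Future` objects:

```
>>> import torch
>>>
>>> fut0 = torch.futures.Future()
>>> fut1 = torch.futures.Future()
>>> fut0.set_result(0)
>>> fut1.set_result(1)
>>>
>>> print(torch.futures.wait_all([fut0, fut1]))
>>> # [0, 1]
```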
pytorch torch.nn.intrinsic

torch.nn.intrinsic
==================

This module implements the combined (fused) modules conv + relu, which can then be quantized.

ConvBn1d
--------

`class torch.nn.intrinsic.ConvBn1d(conv, bn)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/modules/fused.html#ConvBn1d)

This is a sequential container which calls the Conv 1d and Batch Norm 1d modules. During quantization this will be replaced with the corresponding fused module.

ConvBn2d
--------

`class torch.nn.intrinsic.ConvBn2d(conv, bn)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/modules/fused.html#ConvBn2d)

This is a sequential container which calls the Conv 2d and Batch Norm 2d modules. During quantization this will be replaced with the corresponding fused module.

ConvBnReLU1d
------------

`class torch.nn.intrinsic.ConvBnReLU1d(conv, bn, relu)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/modules/fused.html#ConvBnReLU1d)

This is a sequential container which calls the Conv 1d, Batch Norm 1d, and ReLU modules. During quantization this will be replaced with the corresponding fused module.

ConvBnReLU2d
------------

`class torch.nn.intrinsic.ConvBnReLU2d(conv, bn, relu)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/modules/fused.html#ConvBnReLU2d)

This is a sequential container which calls the Conv 2d, Batch Norm 2d, and ReLU modules. During quantization this will be replaced with the corresponding fused module.

ConvReLU1d
----------

`class torch.nn.intrinsic.ConvReLU1d(conv, relu)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/modules/fused.html#ConvReLU1d)

This is a sequential container which calls the Conv1d and ReLU modules. During quantization this will be replaced with the corresponding fused module.

ConvReLU2d
----------

`class torch.nn.intrinsic.ConvReLU2d(conv, relu)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/intrinsic/modules/fused.html#ConvReLU2d)

This is a sequential container which calls the Conv2d and ReLU modules. During quantization this will be replaced with the corresponding fused module.

pytorch torch.utils.bottleneck

torch.utils.bottleneck
======================

`torch.utils.bottleneck` is a tool that can be used as an initial step for debugging bottlenecks in your program. It summarizes runs of your script with the Python profiler and PyTorch’s autograd profiler. Run it on the command line with

```
python -m torch.utils.bottleneck /path/to/source/script.py [args]
```

where [args] are any number of arguments to `script.py`, or run `python -m torch.utils.bottleneck -h` for more usage instructions.

Warning

Because your script will be profiled, please ensure that it exits in a finite amount of time.

Warning

Due to the asynchronous nature of CUDA kernels, when running against CUDA code, the cProfile output and CPU-mode autograd profilers may not show correct timings: the reported CPU time reports the amount of time used to launch the kernels but does not include the time the kernel spent executing on a GPU unless the operation does a synchronize. Ops that do synchronize appear to be extremely expensive under regular CPU-mode profilers. In these cases where timings are incorrect, the CUDA-mode autograd profiler may be helpful.

Note

To decide which (CPU-only-mode or CUDA-mode) autograd profiler output to look at, you should first check if your script is CPU-bound (“CPU total time is much greater than CUDA total time”).
If it is CPU-bound, looking at the results of the CPU-mode autograd profiler will help. If on the other hand your script spends most of its time executing on the GPU, then it makes sense to start looking for responsible CUDA operators in the output of the CUDA-mode autograd profiler. Of course the reality is much more complicated, and your script might not be in one of those two extremes depending on the part of the model you’re evaluating. If the profiler outputs don’t help, you could try looking at the result of [`torch.autograd.profiler.emit_nvtx()`](autograd#torch.autograd.profiler.emit_nvtx "torch.autograd.profiler.emit_nvtx") with `nvprof`. However, please take into account that the NVTX overhead is very high and often gives a heavily skewed timeline.

Warning

If you are profiling CUDA code, the first profiler that `bottleneck` runs (cProfile) will include the CUDA startup time (CUDA buffer allocation cost) in its time reporting. This should not matter if your bottlenecks result in code much slower than the CUDA startup time.

For more complicated uses of the profilers (like in a multi-GPU case), please see <https://docs.python.org/3/library/profile.html> or [`torch.autograd.profiler.profile()`](autograd#torch.autograd.profiler.profile "torch.autograd.profiler.profile") for more information.

pytorch torch.utils.checkpoint

torch.utils.checkpoint
======================

Note

Checkpointing is implemented by rerunning a forward-pass segment for each checkpointed segment during backward. This can cause persistent states like the RNG state to be advanced further than they would be without checkpointing. By default, checkpointing includes logic to juggle the RNG state such that checkpointed passes making use of RNG (through dropout for example) have deterministic output as compared to non-checkpointed passes. The logic to stash and restore RNG states can incur a moderate performance hit depending on the runtime of checkpointed operations. If deterministic output compared to non-checkpointed passes is not required, supply `preserve_rng_state=False` to `checkpoint` or `checkpoint_sequential` to omit stashing and restoring the RNG state during each checkpoint.

The stashing logic saves and restores the RNG state for the current device and the device of all cuda Tensor arguments to the `run_fn`. However, the logic has no way to anticipate if the user will move Tensors to a new device within the `run_fn` itself. Therefore, if you move Tensors to a new device (“new” meaning not belonging to the set of [current device + devices of Tensor arguments]) within `run_fn`, deterministic output compared to non-checkpointed passes is never guaranteed.

`torch.utils.checkpoint.checkpoint(function, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/checkpoint.html#checkpoint)

Checkpoint a model or part of the model.

Checkpointing works by trading compute for memory. Rather than storing all intermediate activations of the entire computation graph for computing backward, the checkpointed part does **not** save intermediate activations, and instead recomputes them in the backward pass. It can be applied on any part of a model.

Specifically, in the forward pass, `function` will run in [`torch.no_grad()`](generated/torch.no_grad#torch.no_grad "torch.no_grad") manner, i.e., not storing the intermediate activations. Instead, the forward pass saves the inputs tuple and the `function` parameter.
In the backward pass, the saved inputs and `function` are retrieved, and the forward pass is computed on `function` again, now tracking the intermediate activations, and then the gradients are calculated using these activation values.

Warning

Checkpointing doesn’t work with [`torch.autograd.grad()`](autograd#torch.autograd.grad "torch.autograd.grad"), but only with [`torch.autograd.backward()`](autograd#torch.autograd.backward "torch.autograd.backward").

Warning

If `function` invocation during backward does anything different than the one during forward, e.g., due to some global variable, the checkpointed version won’t be equivalent, and unfortunately it can’t be detected.

Warning

If a checkpointed segment contains tensors detached from the computational graph by `detach()` or `torch.no_grad()`, the backward pass will raise an error. This is because `checkpoint` makes all the outputs require gradients, which causes issues when a tensor is defined to have no gradient in the model. To circumvent this, detach the tensors outside of the `checkpoint` function.

Parameters

* **function** – describes what to run in the forward pass of the model or part of the model. It should also know how to handle the inputs passed as the tuple. For example, in LSTM, if user passes `(activation, hidden)`, `function` should correctly use the first input as `activation` and the second input as `hidden`
* **preserve\_rng\_state** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional**,* *default=True*) – Omit stashing and restoring the RNG state during each checkpoint.
* **args** – tuple containing inputs to the `function`

Returns

Output of running `function` on `*args`

`torch.utils.checkpoint.checkpoint_sequential(functions, segments, input, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/checkpoint.html#checkpoint_sequential)

A helper function for checkpointing sequential models.

Sequential models execute a list of modules/functions in order (sequentially). Therefore, we can divide such a model into various segments and checkpoint each segment. All segments except the last will run in [`torch.no_grad()`](generated/torch.no_grad#torch.no_grad "torch.no_grad") manner, i.e., not storing the intermediate activations. The inputs of each checkpointed segment will be saved for re-running the segment in the backward pass.

See [`checkpoint()`](#torch.utils.checkpoint.checkpoint "torch.utils.checkpoint.checkpoint") on how checkpointing works.

Warning

Checkpointing doesn’t work with [`torch.autograd.grad()`](autograd#torch.autograd.grad "torch.autograd.grad"), but only with [`torch.autograd.backward()`](autograd#torch.autograd.backward "torch.autograd.backward").

Parameters

* **functions** – A [`torch.nn.Sequential`](generated/torch.nn.sequential#torch.nn.Sequential "torch.nn.Sequential") or the list of modules or functions (comprising the model) to run sequentially.
* **segments** – Number of chunks to create in the model
* **input** – A Tensor that is input to `functions`
* **preserve\_rng\_state** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional**,* *default=True*) – Omit stashing and restoring the RNG state during each checkpoint.

Returns

Output of running `functions` sequentially on `*inputs`

#### Example

```
>>> model = nn.Sequential(...)
>>> input_var = checkpoint_sequential(model, chunks, input_var) ``` pytorch torch.Tensor torch.Tensor ============ A [`torch.Tensor`](#torch.Tensor "torch.Tensor") is a multi-dimensional matrix containing elements of a single data type. Torch defines 10 tensor types with CPU and GPU variants which are as follows: | Data type | dtype | CPU tensor | GPU tensor | | --- | --- | --- | --- | | 32-bit floating point | `torch.float32` or `torch.float` | `torch.FloatTensor` | `torch.cuda.FloatTensor` | | 64-bit floating point | `torch.float64` or `torch.double` | `torch.DoubleTensor` | `torch.cuda.DoubleTensor` | | 16-bit floating point [1](#id3) | `torch.float16` or `torch.half` | `torch.HalfTensor` | `torch.cuda.HalfTensor` | | 16-bit floating point [2](#id4) | `torch.bfloat16` | `torch.BFloat16Tensor` | `torch.cuda.BFloat16Tensor` | | 32-bit complex | `torch.complex32` | | | | 64-bit complex | `torch.complex64` | | | | 128-bit complex | `torch.complex128` or `torch.cdouble` | | | | 8-bit integer (unsigned) | `torch.uint8` | `torch.ByteTensor` | `torch.cuda.ByteTensor` | | 8-bit integer (signed) | `torch.int8` | `torch.CharTensor` | `torch.cuda.CharTensor` | | 16-bit integer (signed) | `torch.int16` or `torch.short` | `torch.ShortTensor` | `torch.cuda.ShortTensor` | | 32-bit integer (signed) | `torch.int32` or `torch.int` | `torch.IntTensor` | `torch.cuda.IntTensor` | | 64-bit integer (signed) | `torch.int64` or `torch.long` | `torch.LongTensor` | `torch.cuda.LongTensor` | | Boolean | `torch.bool` | `torch.BoolTensor` | `torch.cuda.BoolTensor` | `1` Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range. `2` Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as `float32` [`torch.Tensor`](#torch.Tensor "torch.Tensor") is an alias for the default tensor type (`torch.FloatTensor`). A tensor can be constructed from a Python [`list`](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)") or sequence using the [`torch.tensor()`](generated/torch.tensor#torch.tensor "torch.tensor") constructor: ``` >>> torch.tensor([[1., -1.], [1., -1.]]) tensor([[ 1.0000, -1.0000], [ 1.0000, -1.0000]]) >>> torch.tensor(np.array([[1, 2, 3], [4, 5, 6]])) tensor([[ 1, 2, 3], [ 4, 5, 6]]) ``` Warning [`torch.tensor()`](generated/torch.tensor#torch.tensor "torch.tensor") always copies `data`. If you have a Tensor `data` and just want to change its `requires_grad` flag, use [`requires_grad_()`](#torch.Tensor.requires_grad_ "torch.Tensor.requires_grad_") or [`detach()`](autograd#torch.Tensor.detach "torch.Tensor.detach") to avoid a copy. If you have a numpy array and want to avoid a copy, use [`torch.as_tensor()`](generated/torch.as_tensor#torch.as_tensor "torch.as_tensor"). 
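As a small illustration of the copy semantics in the warning above (a sketch; the values are arbitrary):

```
>>> import numpy as np
>>> a = np.ones(3)
>>> t_copy = torch.tensor(a)     # always copies the data
>>> t_view = torch.as_tensor(a)  # shares memory with `a` when possible
>>> a[0] = 7.0
>>> t_copy[0].item(), t_view[0].item()
(1.0, 7.0)
```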
A tensor of specific data type can be constructed by passing a [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and/or a [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") to a constructor or tensor creation op:

```
>>> torch.zeros([2, 4], dtype=torch.int32)
tensor([[ 0, 0, 0, 0],
        [ 0, 0, 0, 0]], dtype=torch.int32)
>>> cuda0 = torch.device('cuda:0')
>>> torch.ones([2, 4], dtype=torch.float64, device=cuda0)
tensor([[ 1.0000, 1.0000, 1.0000, 1.0000],
        [ 1.0000, 1.0000, 1.0000, 1.0000]], dtype=torch.float64, device='cuda:0')
```

The contents of a tensor can be accessed and modified using Python’s indexing and slicing notation:

```
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> print(x[1][2])
tensor(6)
>>> x[0][1] = 8
>>> print(x)
tensor([[ 1, 8, 3],
        [ 4, 5, 6]])
```

Use [`torch.Tensor.item()`](#torch.Tensor.item "torch.Tensor.item") to get a Python number from a tensor containing a single value:

```
>>> x = torch.tensor([[1]])
>>> x
tensor([[ 1]])
>>> x.item()
1
>>> x = torch.tensor(2.5)
>>> x
tensor(2.5000)
>>> x.item()
2.5
```

A tensor can be created with `requires_grad=True` so that [`torch.autograd`](autograd#module-torch.autograd "torch.autograd") records operations on it for automatic differentiation.

```
>>> x = torch.tensor([[1., -1.], [1., 1.]], requires_grad=True)
>>> out = x.pow(2).sum()
>>> out.backward()
>>> x.grad
tensor([[ 2.0000, -2.0000],
        [ 2.0000, 2.0000]])
```

Each tensor has an associated `torch.Storage`, which holds its data. The tensor class also provides a multi-dimensional, [strided](https://en.wikipedia.org/wiki/Stride_of_an_array) view of a storage and defines numeric operations on it.

Note

For more information on tensor views, see [Tensor Views](tensor_view#tensor-view-doc).

Note

For more information on the [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"), [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"), and [`torch.layout`](tensor_attributes#torch.torch.layout "torch.torch.layout") attributes of a [`torch.Tensor`](#torch.Tensor "torch.Tensor"), see [Tensor Attributes](tensor_attributes#tensor-attributes-doc).

Note

Methods which mutate a tensor are marked with an underscore suffix. For example, `torch.FloatTensor.abs_()` computes the absolute value in-place and returns the modified tensor, while `torch.FloatTensor.abs()` computes the result in a new tensor.

Note

To change an existing tensor’s [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") and/or [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"), consider using the [`to()`](#torch.Tensor.to "torch.Tensor.to") method on the tensor.

Warning

The current implementation of [`torch.Tensor`](#torch.Tensor "torch.Tensor") introduces memory overhead, and thus it might lead to unexpectedly high memory usage in applications with many tiny tensors. If this is your case, consider using one large structure.

`class torch.Tensor`

There are a few main ways to create a tensor, depending on your use case.

* To create a tensor with pre-existing data, use [`torch.tensor()`](generated/torch.tensor#torch.tensor "torch.tensor").
* To create a tensor with specific size, use `torch.*` tensor creation ops (see [Creation Ops](torch#tensor-creation-ops)).
* To create a tensor with the same size (and similar types) as another tensor, use `torch.*_like` tensor creation ops (see [Creation Ops](torch#tensor-creation-ops)).
* To create a tensor with similar type but different size as another tensor, use `tensor.new_*` creation ops. `new_tensor(data, dtype=None, device=None, requires_grad=False) → Tensor` Returns a new Tensor with `data` as the tensor data. By default, the returned Tensor has the same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. Warning [`new_tensor()`](#torch.Tensor.new_tensor "torch.Tensor.new_tensor") always copies `data`. If you have a Tensor `data` and want to avoid a copy, use [`torch.Tensor.requires_grad_()`](#torch.Tensor.requires_grad_ "torch.Tensor.requires_grad_") or [`torch.Tensor.detach()`](autograd#torch.Tensor.detach "torch.Tensor.detach"). If you have a numpy array and want to avoid a copy, use [`torch.from_numpy()`](generated/torch.from_numpy#torch.from_numpy "torch.from_numpy"). Warning When data is a tensor `x`, [`new_tensor()`](#torch.Tensor.new_tensor "torch.Tensor.new_tensor") reads out ‘the data’ from whatever it is passed, and constructs a leaf variable. Therefore `tensor.new_tensor(x)` is equivalent to `x.clone().detach()` and `tensor.new_tensor(x, requires_grad=True)` is equivalent to `x.clone().detach().requires_grad_(True)`. The equivalents using `clone()` and `detach()` are recommended. Parameters * **data** (*array\_like*) – The returned Tensor copies `data`. * **dtype** ([`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired type of returned tensor. Default: if None, same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") as this tensor. * **device** ([`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if None, same [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> tensor = torch.ones((2,), dtype=torch.int8) >>> data = [[0, 1], [2, 3]] >>> tensor.new_tensor(data) tensor([[ 0, 1], [ 2, 3]], dtype=torch.int8) ``` `new_full(size, fill_value, dtype=None, device=None, requires_grad=False) → Tensor` Returns a Tensor of size [`size`](#torch.Tensor.size "torch.Tensor.size") filled with `fill_value`. By default, the returned Tensor has the same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. Parameters * **fill\_value** (*scalar*) – the number to fill the output tensor with. * **dtype** ([`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired type of returned tensor. Default: if None, same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") as this tensor. * **device** ([`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if None, same [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. 
Example: ``` >>> tensor = torch.ones((2,), dtype=torch.float64) >>> tensor.new_full((3, 4), 3.141592) tensor([[ 3.1416, 3.1416, 3.1416, 3.1416], [ 3.1416, 3.1416, 3.1416, 3.1416], [ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64) ``` `new_empty(size, dtype=None, device=None, requires_grad=False) → Tensor` Returns a Tensor of size [`size`](#torch.Tensor.size "torch.Tensor.size") filled with uninitialized data. By default, the returned Tensor has the same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. Parameters * **dtype** ([`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired type of returned tensor. Default: if None, same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") as this tensor. * **device** ([`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if None, same [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> tensor = torch.ones(()) >>> tensor.new_empty((2, 3)) tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30], [ 3.0949e-41, 4.4842e-44, 0.0000e+00]]) ``` `new_ones(size, dtype=None, device=None, requires_grad=False) → Tensor` Returns a Tensor of size [`size`](#torch.Tensor.size "torch.Tensor.size") filled with `1`. By default, the returned Tensor has the same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. Parameters * **size** (*int...*) – a list, tuple, or `torch.Size` of integers defining the shape of the output tensor. * **dtype** ([`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired type of returned tensor. Default: if None, same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") as this tensor. * **device** ([`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if None, same [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> tensor = torch.tensor((), dtype=torch.int32) >>> tensor.new_ones((2, 3)) tensor([[ 1, 1, 1], [ 1, 1, 1]], dtype=torch.int32) ``` `new_zeros(size, dtype=None, device=None, requires_grad=False) → Tensor` Returns a Tensor of size [`size`](#torch.Tensor.size "torch.Tensor.size") filled with `0`. By default, the returned Tensor has the same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. Parameters * **size** (*int...*) – a list, tuple, or `torch.Size` of integers defining the shape of the output tensor. * **dtype** ([`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired type of returned tensor. 
Default: if None, same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") as this tensor. * **device** ([`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if None, same [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as this tensor. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> tensor = torch.tensor((), dtype=torch.float64) >>> tensor.new_zeros((2, 3)) tensor([[ 0., 0., 0.], [ 0., 0., 0.]], dtype=torch.float64) ``` `is_cuda` Is `True` if the Tensor is stored on the GPU, `False` otherwise. `is_quantized` Is `True` if the Tensor is quantized, `False` otherwise. `is_meta` Is `True` if the Tensor is a meta tensor, `False` otherwise. Meta tensors are like normal tensors, but they carry no data. `device` Is the [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") where this Tensor is. `grad` This attribute is `None` by default and becomes a Tensor the first time a call to [`backward()`](autograd#torch.Tensor.backward "torch.Tensor.backward") computes gradients for `self`. The attribute will then contain the gradients computed and future calls to [`backward()`](autograd#torch.Tensor.backward "torch.Tensor.backward") will accumulate (add) gradients into it. `ndim` Alias for [`dim()`](#torch.Tensor.dim "torch.Tensor.dim") `T` Is this Tensor with its dimensions reversed. If `n` is the number of dimensions in `x`, `x.T` is equivalent to `x.permute(n-1, n-2, ..., 0)`. `real` Returns a new tensor containing real values of the `self` tensor. The returned tensor and `self` share the same underlying storage. Warning [`real()`](generated/torch.real#torch.real "torch.real") is only supported for tensors with complex dtypes. Example:: ``` >>> x=torch.randn(4, dtype=torch.cfloat) >>> x tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)]) >>> x.real tensor([ 0.3100, -0.5445, -1.6492, -0.0638]) ``` `imag` Returns a new tensor containing imaginary values of the `self` tensor. The returned tensor and `self` share the same underlying storage. Warning [`imag()`](generated/torch.imag#torch.imag "torch.imag") is only supported for tensors with complex dtypes. Example:: ``` >>> x=torch.randn(4, dtype=torch.cfloat) >>> x tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)]) >>> x.imag tensor([ 0.3553, -0.7896, -0.0633, -0.8119]) ``` `abs() → Tensor` See [`torch.abs()`](generated/torch.abs#torch.abs "torch.abs") `abs_() → Tensor` In-place version of [`abs()`](#torch.Tensor.abs "torch.Tensor.abs") `absolute() → Tensor` Alias for [`abs()`](generated/torch.abs#torch.abs "torch.abs") `absolute_() → Tensor` In-place version of [`absolute()`](#torch.Tensor.absolute "torch.Tensor.absolute") Alias for [`abs_()`](#torch.Tensor.abs_ "torch.Tensor.abs_") `acos() → Tensor` See [`torch.acos()`](generated/torch.acos#torch.acos "torch.acos") `acos_() → Tensor` In-place version of [`acos()`](#torch.Tensor.acos "torch.Tensor.acos") `arccos() → Tensor` See [`torch.arccos()`](generated/torch.arccos#torch.arccos "torch.arccos") `arccos_() → Tensor` In-place version of [`arccos()`](#torch.Tensor.arccos "torch.Tensor.arccos") `add(other, *, alpha=1) → Tensor` Add a scalar or tensor to `self` tensor. 
If both `alpha` and `other` are specified, each element of `other` is scaled by `alpha` before being used. When `other` is a tensor, the shape of `other` must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) with the shape of the underlying tensor.

See [`torch.add()`](generated/torch.add#torch.add "torch.add")

`add_(other, *, alpha=1) → Tensor`

In-place version of [`add()`](#torch.Tensor.add "torch.Tensor.add")

`addbmm(batch1, batch2, *, beta=1, alpha=1) → Tensor`

See [`torch.addbmm()`](generated/torch.addbmm#torch.addbmm "torch.addbmm")

`addbmm_(batch1, batch2, *, beta=1, alpha=1) → Tensor`

In-place version of [`addbmm()`](#torch.Tensor.addbmm "torch.Tensor.addbmm")

`addcdiv(tensor1, tensor2, *, value=1) → Tensor`

See [`torch.addcdiv()`](generated/torch.addcdiv#torch.addcdiv "torch.addcdiv")

`addcdiv_(tensor1, tensor2, *, value=1) → Tensor`

In-place version of [`addcdiv()`](#torch.Tensor.addcdiv "torch.Tensor.addcdiv")

`addcmul(tensor1, tensor2, *, value=1) → Tensor`

See [`torch.addcmul()`](generated/torch.addcmul#torch.addcmul "torch.addcmul")

`addcmul_(tensor1, tensor2, *, value=1) → Tensor`

In-place version of [`addcmul()`](#torch.Tensor.addcmul "torch.Tensor.addcmul")

`addmm(mat1, mat2, *, beta=1, alpha=1) → Tensor`

See [`torch.addmm()`](generated/torch.addmm#torch.addmm "torch.addmm")

`addmm_(mat1, mat2, *, beta=1, alpha=1) → Tensor`

In-place version of [`addmm()`](#torch.Tensor.addmm "torch.Tensor.addmm")

`sspaddmm(mat1, mat2, *, beta=1, alpha=1) → Tensor`

See [`torch.sspaddmm()`](sparse#torch.sspaddmm "torch.sspaddmm")

`addmv(mat, vec, *, beta=1, alpha=1) → Tensor`

See [`torch.addmv()`](generated/torch.addmv#torch.addmv "torch.addmv")

`addmv_(mat, vec, *, beta=1, alpha=1) → Tensor`

In-place version of [`addmv()`](#torch.Tensor.addmv "torch.Tensor.addmv")

`addr(vec1, vec2, *, beta=1, alpha=1) → Tensor`

See [`torch.addr()`](generated/torch.addr#torch.addr "torch.addr")

`addr_(vec1, vec2, *, beta=1, alpha=1) → Tensor`

In-place version of [`addr()`](#torch.Tensor.addr "torch.Tensor.addr")

`allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor`

See [`torch.allclose()`](generated/torch.allclose#torch.allclose "torch.allclose")

`amax(dim=None, keepdim=False) → Tensor`

See [`torch.amax()`](generated/torch.amax#torch.amax "torch.amax")

`amin(dim=None, keepdim=False) → Tensor`

See [`torch.amin()`](generated/torch.amin#torch.amin "torch.amin")

`angle() → Tensor`

See [`torch.angle()`](generated/torch.angle#torch.angle "torch.angle")

`apply_(callable) → Tensor`

Applies the function `callable` to each element in the tensor, replacing each element with the value returned by `callable`.

Note

This function only works with CPU tensors and should not be used in code sections that require high performance.
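For example, a small sketch of `apply_` on a CPU tensor (illustrative values; the REPL echoes the returned tensor):

```
>>> t = torch.tensor([1.0, 2.0, 3.0])
>>> t.apply_(lambda v: v * v)  # in-place, element by element, CPU only
tensor([1., 4., 9.])
```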
`argmax(dim=None, keepdim=False) → LongTensor` See [`torch.argmax()`](generated/torch.argmax#torch.argmax "torch.argmax") `argmin(dim=None, keepdim=False) → LongTensor` See [`torch.argmin()`](generated/torch.argmin#torch.argmin "torch.argmin") `argsort(dim=-1, descending=False) → LongTensor` See [`torch.argsort()`](generated/torch.argsort#torch.argsort "torch.argsort") `asin() → Tensor` See [`torch.asin()`](generated/torch.asin#torch.asin "torch.asin") `asin_() → Tensor` In-place version of [`asin()`](#torch.Tensor.asin "torch.Tensor.asin") `arcsin() → Tensor` See [`torch.arcsin()`](generated/torch.arcsin#torch.arcsin "torch.arcsin") `arcsin_() → Tensor` In-place version of [`arcsin()`](#torch.Tensor.arcsin "torch.Tensor.arcsin") `as_strided(size, stride, storage_offset=0) → Tensor` See [`torch.as_strided()`](generated/torch.as_strided#torch.as_strided "torch.as_strided") `atan() → Tensor` See [`torch.atan()`](generated/torch.atan#torch.atan "torch.atan") `atan_() → Tensor` In-place version of [`atan()`](#torch.Tensor.atan "torch.Tensor.atan") `arctan() → Tensor` See [`torch.arctan()`](generated/torch.arctan#torch.arctan "torch.arctan") `arctan_() → Tensor` In-place version of [`arctan()`](#torch.Tensor.arctan "torch.Tensor.arctan") `atan2(other) → Tensor` See [`torch.atan2()`](generated/torch.atan2#torch.atan2 "torch.atan2") `atan2_(other) → Tensor` In-place version of [`atan2()`](#torch.Tensor.atan2 "torch.Tensor.atan2") `all(dim=None, keepdim=False) → Tensor` See [`torch.all()`](generated/torch.all#torch.all "torch.all") `any(dim=None, keepdim=False) → Tensor` See [`torch.any()`](generated/torch.any#torch.any "torch.any") `backward(gradient=None, retain_graph=None, create_graph=False, inputs=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.backward) Computes the gradient of current tensor w.r.t. graph leaves. The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying `gradient`. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. `self`. This function accumulates gradients in the leaves - you might need to zero `.grad` attributes or set them to `None` before calling it. See [Default gradient layouts](autograd#default-grad-layouts) for details on the memory layout of accumulated gradients. Note If you run any forward ops, create `gradient`, and/or call `backward` in a user-specified CUDA stream context, see [Stream semantics of backward passes](https://pytorch.org/docs/1.8.0/notes/cuda.html#bwd-cuda-stream-semantics). Parameters * **gradient** ([Tensor](#torch.Tensor "torch.Tensor") *or* [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.9)")) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless `create_graph` is True. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable then this argument is optional. * **retain\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `False`, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of `create_graph`. 
* **create\_graph** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, graph of the derivative will be constructed, allowing one to compute higher-order derivative products. Defaults to `False`.
* **inputs** (*sequence of Tensor*) – Inputs w.r.t. which the gradient will be accumulated into `.grad`. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute this tensor. All the provided inputs must be leaf Tensors.

`baddbmm(batch1, batch2, *, beta=1, alpha=1) → Tensor`

See [`torch.baddbmm()`](generated/torch.baddbmm#torch.baddbmm "torch.baddbmm")

`baddbmm_(batch1, batch2, *, beta=1, alpha=1) → Tensor`

In-place version of [`baddbmm()`](#torch.Tensor.baddbmm "torch.Tensor.baddbmm")

`bernoulli(*, generator=None) → Tensor`

Returns a result tensor where each $\texttt{result[i]}$ is independently sampled from $\text{Bernoulli}(\texttt{self[i]})$. `self` must have floating point `dtype`, and the result will have the same `dtype`.

See [`torch.bernoulli()`](generated/torch.bernoulli#torch.bernoulli "torch.bernoulli")

`bernoulli_()`

`bernoulli_(p=0.5, *, generator=None) → Tensor`

Fills each location of `self` with an independent sample from $\text{Bernoulli}(p)$. `self` can have integral `dtype`.

`bernoulli_(p_tensor, *, generator=None) → Tensor`

`p_tensor` should be a tensor containing probabilities to be used for drawing the binary random number. The $i^{\text{th}}$ element of the `self` tensor will be set to a value sampled from $\text{Bernoulli}(\texttt{p\_tensor[i]})$. `self` can have integral `dtype`, but `p_tensor` must have floating point `dtype`.

See also [`bernoulli()`](#torch.Tensor.bernoulli "torch.Tensor.bernoulli") and [`torch.bernoulli()`](generated/torch.bernoulli#torch.bernoulli "torch.bernoulli")

`bfloat16(memory_format=torch.preserve_format) → Tensor`

`self.bfloat16()` is equivalent to `self.to(torch.bfloat16)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to").

Parameters

**memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`.
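A quick sketch of the dtype-conversion shorthands (shown here for `bfloat16()`; the same pattern applies to `bool()`, `byte()`, `char()`, and friends below):

```
>>> x = torch.randn(2, 2)
>>> x.bfloat16().dtype
torch.bfloat16
>>> x.to(torch.bfloat16).dtype  # the equivalent to() spelling
torch.bfloat16
```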
`bincount(weights=None, minlength=0) → Tensor`

See [`torch.bincount()`](generated/torch.bincount#torch.bincount "torch.bincount")

`bitwise_not() → Tensor`

See [`torch.bitwise_not()`](generated/torch.bitwise_not#torch.bitwise_not "torch.bitwise_not")

`bitwise_not_() → Tensor`

In-place version of [`bitwise_not()`](#torch.Tensor.bitwise_not "torch.Tensor.bitwise_not")

`bitwise_and() → Tensor`

See [`torch.bitwise_and()`](generated/torch.bitwise_and#torch.bitwise_and "torch.bitwise_and")

`bitwise_and_() → Tensor`

In-place version of [`bitwise_and()`](#torch.Tensor.bitwise_and "torch.Tensor.bitwise_and")

`bitwise_or() → Tensor`

See [`torch.bitwise_or()`](generated/torch.bitwise_or#torch.bitwise_or "torch.bitwise_or")

`bitwise_or_() → Tensor`

In-place version of [`bitwise_or()`](#torch.Tensor.bitwise_or "torch.Tensor.bitwise_or")

`bitwise_xor() → Tensor`

See [`torch.bitwise_xor()`](generated/torch.bitwise_xor#torch.bitwise_xor "torch.bitwise_xor")

`bitwise_xor_() → Tensor`

In-place version of [`bitwise_xor()`](#torch.Tensor.bitwise_xor "torch.Tensor.bitwise_xor")

`bmm(batch2) → Tensor`

See [`torch.bmm()`](generated/torch.bmm#torch.bmm "torch.bmm")

`bool(memory_format=torch.preserve_format) → Tensor`

`self.bool()` is equivalent to `self.to(torch.bool)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to").

Parameters

**memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`.

`byte(memory_format=torch.preserve_format) → Tensor`

`self.byte()` is equivalent to `self.to(torch.uint8)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to").

Parameters

**memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`.

`broadcast_to(shape) → Tensor`

See [`torch.broadcast_to()`](generated/torch.broadcast_to#torch.broadcast_to "torch.broadcast_to").

`cauchy_(median=0, sigma=1, *, generator=None) → Tensor`

Fills the tensor with numbers drawn from the Cauchy distribution:

$$f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2}$$

`ceil() → Tensor`

See [`torch.ceil()`](generated/torch.ceil#torch.ceil "torch.ceil")

`ceil_() → Tensor`

In-place version of [`ceil()`](#torch.Tensor.ceil "torch.Tensor.ceil")

`char(memory_format=torch.preserve_format) → Tensor`

`self.char()` is equivalent to `self.to(torch.int8)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to").

Parameters

**memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`.
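As a small sketch of the bitwise methods above (the one-line stubs elide arguments; `bitwise_and` and friends accept an `other` tensor, as the linked `torch.*` docs show; values chosen arbitrarily):

```
>>> a = torch.tensor([12], dtype=torch.uint8)  # 0b1100
>>> b = torch.tensor([10], dtype=torch.uint8)  # 0b1010
>>> a.bitwise_and(b)  # 0b1000
tensor([8], dtype=torch.uint8)
>>> a.bitwise_xor(b)  # 0b0110
tensor([6], dtype=torch.uint8)
```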
`cholesky(upper=False) → Tensor`

See [`torch.cholesky()`](generated/torch.cholesky#torch.cholesky "torch.cholesky")

`cholesky_inverse(upper=False) → Tensor`

See [`torch.cholesky_inverse()`](generated/torch.cholesky_inverse#torch.cholesky_inverse "torch.cholesky_inverse")

`cholesky_solve(input2, upper=False) → Tensor`

See [`torch.cholesky_solve()`](generated/torch.cholesky_solve#torch.cholesky_solve "torch.cholesky_solve")

`chunk(chunks, dim=0) → List of Tensors`

See [`torch.chunk()`](generated/torch.chunk#torch.chunk "torch.chunk")

`clamp(min, max) → Tensor`

See [`torch.clamp()`](generated/torch.clamp#torch.clamp "torch.clamp")

`clamp_(min, max) → Tensor`

In-place version of [`clamp()`](#torch.Tensor.clamp "torch.Tensor.clamp")

`clip(min, max) → Tensor`

Alias for [`clamp()`](#torch.Tensor.clamp "torch.Tensor.clamp").

`clip_(min, max) → Tensor`

Alias for [`clamp_()`](#torch.Tensor.clamp_ "torch.Tensor.clamp_").

`clone(*, memory_format=torch.preserve_format) → Tensor`

See [`torch.clone()`](generated/torch.clone#torch.clone "torch.clone")

`contiguous(memory_format=torch.contiguous_format) → Tensor`

Returns a tensor contiguous in memory containing the same data as the `self` tensor. If the `self` tensor is already in the specified memory format, this function returns the `self` tensor.

Parameters

**memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.contiguous_format`.

`copy_(src, non_blocking=False) → Tensor`

Copies the elements from `src` into `self` tensor and returns `self`. The `src` tensor must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) with the `self` tensor. It may be of a different data type or reside on a different device.

Parameters

* **src** ([Tensor](#torch.Tensor "torch.Tensor")) – the source tensor to copy from
* **non\_blocking** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – if `True` and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.

`conj() → Tensor`

See [`torch.conj()`](generated/torch.conj#torch.conj "torch.conj")

`copysign(other) → Tensor`

See [`torch.copysign()`](generated/torch.copysign#torch.copysign "torch.copysign")

`copysign_(other) → Tensor`

In-place version of [`copysign()`](#torch.Tensor.copysign "torch.Tensor.copysign")

`cos() → Tensor`

See [`torch.cos()`](generated/torch.cos#torch.cos "torch.cos")

`cos_() → Tensor`

In-place version of [`cos()`](#torch.Tensor.cos "torch.Tensor.cos")

`cosh() → Tensor`

See [`torch.cosh()`](generated/torch.cosh#torch.cosh "torch.cosh")

`cosh_() → Tensor`

In-place version of [`cosh()`](#torch.Tensor.cosh "torch.Tensor.cosh")

`count_nonzero(dim=None) → Tensor`

See [`torch.count_nonzero()`](generated/torch.count_nonzero#torch.count_nonzero "torch.count_nonzero")

`acosh() → Tensor`

See [`torch.acosh()`](generated/torch.acosh#torch.acosh "torch.acosh")

`acosh_() → Tensor`

In-place version of [`acosh()`](#torch.Tensor.acosh "torch.Tensor.acosh")

`arccosh() → Tensor`

See [`torch.arccosh()`](generated/torch.arccosh#torch.arccosh "torch.arccosh")

`arccosh_() → Tensor`

In-place version of [`arccosh()`](#torch.Tensor.arccosh "torch.Tensor.arccosh")

`cpu(memory_format=torch.preserve_format) → Tensor`

Returns a copy of this object in CPU memory.
If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned. Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`. `cross(other, dim=-1) → Tensor` See [`torch.cross()`](generated/torch.cross#torch.cross "torch.cross") `cuda(device=None, non_blocking=False, memory_format=torch.preserve_format) → Tensor` Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned. Parameters * **device** ([`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device")) – The destination GPU device. Defaults to the current CUDA device. * **non\_blocking** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True` and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: `False`. * **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`. `logcumsumexp(dim) → Tensor` See [`torch.logcumsumexp()`](generated/torch.logcumsumexp#torch.logcumsumexp "torch.logcumsumexp") `cummax(dim) -> (Tensor, Tensor)` See [`torch.cummax()`](generated/torch.cummax#torch.cummax "torch.cummax") `cummin(dim) -> (Tensor, Tensor)` See [`torch.cummin()`](generated/torch.cummin#torch.cummin "torch.cummin") `cumprod(dim, dtype=None) → Tensor` See [`torch.cumprod()`](generated/torch.cumprod#torch.cumprod "torch.cumprod") `cumprod_(dim, dtype=None) → Tensor` In-place version of [`cumprod()`](#torch.Tensor.cumprod "torch.Tensor.cumprod") `cumsum(dim, dtype=None) → Tensor` See [`torch.cumsum()`](generated/torch.cumsum#torch.cumsum "torch.cumsum") `cumsum_(dim, dtype=None) → Tensor` In-place version of [`cumsum()`](#torch.Tensor.cumsum "torch.Tensor.cumsum") `data_ptr() → int` Returns the address of the first element of `self` tensor. `deg2rad() → Tensor` See [`torch.deg2rad()`](generated/torch.deg2rad#torch.deg2rad "torch.deg2rad") `dequantize() → Tensor` Given a quantized Tensor, dequantize it and return the dequantized float Tensor. `det() → Tensor` See [`torch.det()`](generated/torch.det#torch.det "torch.det") `dense_dim() → int` Return the number of dense dimensions in a [sparse tensor](sparse#sparse-docs) `self`. Warning Throws an error if `self` is not a sparse tensor. See also [`Tensor.sparse_dim()`](sparse#torch.Tensor.sparse_dim "torch.Tensor.sparse_dim") and [hybrid tensors](sparse#sparse-hybrid-coo-docs). `detach()` Returns a new Tensor, detached from the current graph. The result will never require gradient. Note Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as `resize_` / `resize_as_` / `set_` / `transpose_`) to the returned tensor also update the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error. 
For sparse tensors: In-place indices / values changes (such as `zero_` / `copy_` / `add_`) to the returned tensor will not update the original tensor anymore, and will instead trigger an error. `detach_()` Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place. `diag(diagonal=0) → Tensor` See [`torch.diag()`](generated/torch.diag#torch.diag "torch.diag") `diag_embed(offset=0, dim1=-2, dim2=-1) → Tensor` See [`torch.diag_embed()`](generated/torch.diag_embed#torch.diag_embed "torch.diag_embed") `diagflat(offset=0) → Tensor` See [`torch.diagflat()`](generated/torch.diagflat#torch.diagflat "torch.diagflat") `diagonal(offset=0, dim1=0, dim2=1) → Tensor` See [`torch.diagonal()`](generated/torch.diagonal#torch.diagonal "torch.diagonal") `fill_diagonal_(fill_value, wrap=False) → Tensor` Fill the main diagonal of a tensor that has at least 2-dimensions. When dims>2, all dimensions of input must be of equal length. This function modifies the input tensor in-place, and returns the input tensor. Parameters * **fill\_value** (*Scalar*) – the fill value * **wrap** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – the diagonal ‘wrapped’ after N columns for tall matrices. Example: ``` >>> a = torch.zeros(3, 3) >>> a.fill_diagonal_(5) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.]]) >>> b = torch.zeros(7, 3) >>> b.fill_diagonal_(5) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]) >>> c = torch.zeros(7, 3) >>> c.fill_diagonal_(5, wrap=True) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.], [0., 0., 0.], [5., 0., 0.], [0., 5., 0.], [0., 0., 5.]]) ``` `fmax(other) → Tensor` See [`torch.fmax()`](generated/torch.fmax#torch.fmax "torch.fmax") `fmin(other) → Tensor` See [`torch.fmin()`](generated/torch.fmin#torch.fmin "torch.fmin") `diff(n=1, dim=-1, prepend=None, append=None) → Tensor` See [`torch.diff()`](generated/torch.diff#torch.diff "torch.diff") `digamma() → Tensor` See [`torch.digamma()`](generated/torch.digamma#torch.digamma "torch.digamma") `digamma_() → Tensor` In-place version of [`digamma()`](#torch.Tensor.digamma "torch.Tensor.digamma") `dim() → int` Returns the number of dimensions of `self` tensor. `dist(other, p=2) → Tensor` See [`torch.dist()`](generated/torch.dist#torch.dist "torch.dist") `div(value, *, rounding_mode=None) → Tensor` See [`torch.div()`](generated/torch.div#torch.div "torch.div") `div_(value, *, rounding_mode=None) → Tensor` In-place version of [`div()`](#torch.Tensor.div "torch.Tensor.div") `divide(value, *, rounding_mode=None) → Tensor` See [`torch.divide()`](generated/torch.divide#torch.divide "torch.divide") `divide_(value, *, rounding_mode=None) → Tensor` In-place version of [`divide()`](#torch.Tensor.divide "torch.Tensor.divide") `dot(other) → Tensor` See [`torch.dot()`](generated/torch.dot#torch.dot "torch.dot") `double(memory_format=torch.preserve_format) → Tensor` `self.double()` is equivalent to `self.to(torch.float64)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to"). Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`. `eig(eigenvectors=False) -> (Tensor, Tensor)` See [`torch.eig()`](generated/torch.eig#torch.eig "torch.eig") `element_size() → int` Returns the size in bytes of an individual element. 
Example: ``` >>> torch.tensor([]).element_size() 4 >>> torch.tensor([], dtype=torch.uint8).element_size() 1 ``` `eq(other) → Tensor` See [`torch.eq()`](generated/torch.eq#torch.eq "torch.eq") `eq_(other) → Tensor` In-place version of [`eq()`](#torch.Tensor.eq "torch.Tensor.eq") `equal(other) → bool` See [`torch.equal()`](generated/torch.equal#torch.equal "torch.equal") `erf() → Tensor` See [`torch.erf()`](generated/torch.erf#torch.erf "torch.erf") `erf_() → Tensor` In-place version of [`erf()`](#torch.Tensor.erf "torch.Tensor.erf") `erfc() → Tensor` See [`torch.erfc()`](generated/torch.erfc#torch.erfc "torch.erfc") `erfc_() → Tensor` In-place version of [`erfc()`](#torch.Tensor.erfc "torch.Tensor.erfc") `erfinv() → Tensor` See [`torch.erfinv()`](generated/torch.erfinv#torch.erfinv "torch.erfinv") `erfinv_() → Tensor` In-place version of [`erfinv()`](#torch.Tensor.erfinv "torch.Tensor.erfinv") `exp() → Tensor` See [`torch.exp()`](generated/torch.exp#torch.exp "torch.exp") `exp_() → Tensor` In-place version of [`exp()`](#torch.Tensor.exp "torch.Tensor.exp") `expm1() → Tensor` See [`torch.expm1()`](generated/torch.expm1#torch.expm1 "torch.expm1") `expm1_() → Tensor` In-place version of [`expm1()`](#torch.Tensor.expm1 "torch.Tensor.expm1") `expand(*sizes) → Tensor` Returns a new view of the `self` tensor with singleton dimensions expanded to a larger size. Passing -1 as the size for a dimension means not changing the size of that dimension. A tensor can also be expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1. Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the `stride` to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. Parameters **\*sizes** (*torch.Size* *or* *int...*) – the desired expanded size Warning More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first. Example: ``` >>> x = torch.tensor([[1], [2], [3]]) >>> x.size() torch.Size([3, 1]) >>> x.expand(3, 4) tensor([[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 3, 3, 3]]) >>> x.expand(-1, 4) # -1 means not changing the size of that dimension tensor([[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 3, 3, 3]]) ``` `expand_as(other) → Tensor` Expand this tensor to the same size as `other`. `self.expand_as(other)` is equivalent to `self.expand(other.size())`. Please see [`expand()`](#torch.Tensor.expand "torch.Tensor.expand") for more information about `expand`. Parameters **other** ([`torch.Tensor`](#torch.Tensor "torch.Tensor")) – The result tensor has the same size as `other`. `exponential_(lambd=1, *, generator=None) → Tensor` Fills `self` tensor with elements drawn from the exponential distribution: f(x) = \lambda e^{-\lambda x} `fix() → Tensor` See [`torch.fix()`](generated/torch.fix#torch.fix "torch.fix"). `fix_() → Tensor` In-place version of [`fix()`](#torch.Tensor.fix "torch.Tensor.fix") `fill_(value) → Tensor` Fills `self` tensor with the specified value.
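Supplementing the `expand()` warning above: the expanded dimension is a view with stride 0, so several elements alias one memory location and writes should go through a clone. A minimal sketch (values chosen purely for illustration):

```
import torch

x = torch.tensor([[1.0], [2.0], [3.0]])   # shape (3, 1)
v = x.expand(3, 4)                        # a view; no new memory is allocated
print(v.stride())                         # (1, 0) -- stride 0: all columns alias one element
w = v.clone()                             # clone before writing, per the warning above
w[0, 0] = 99.0                            # safe: w owns its own storage
```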
`flatten(input, start_dim=0, end_dim=-1) → Tensor` See [`torch.flatten()`](generated/torch.flatten#torch.flatten "torch.flatten") `flip(dims) → Tensor` See [`torch.flip()`](generated/torch.flip#torch.flip "torch.flip") `fliplr() → Tensor` See [`torch.fliplr()`](generated/torch.fliplr#torch.fliplr "torch.fliplr") `flipud() → Tensor` See [`torch.flipud()`](generated/torch.flipud#torch.flipud "torch.flipud") `float(memory_format=torch.preserve_format) → Tensor` `self.float()` is equivalent to `self.to(torch.float32)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to"). Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`. `float_power(exponent) → Tensor` See [`torch.float_power()`](generated/torch.float_power#torch.float_power "torch.float_power") `float_power_(exponent) → Tensor` In-place version of [`float_power()`](#torch.Tensor.float_power "torch.Tensor.float_power") `floor() → Tensor` See [`torch.floor()`](generated/torch.floor#torch.floor "torch.floor") `floor_() → Tensor` In-place version of [`floor()`](#torch.Tensor.floor "torch.Tensor.floor") `floor_divide(value) → Tensor` See [`torch.floor_divide()`](generated/torch.floor_divide#torch.floor_divide "torch.floor_divide") `floor_divide_(value) → Tensor` In-place version of [`floor_divide()`](#torch.Tensor.floor_divide "torch.Tensor.floor_divide") `fmod(divisor) → Tensor` See [`torch.fmod()`](generated/torch.fmod#torch.fmod "torch.fmod") `fmod_(divisor) → Tensor` In-place version of [`fmod()`](#torch.Tensor.fmod "torch.Tensor.fmod") `frac() → Tensor` See [`torch.frac()`](generated/torch.frac#torch.frac "torch.frac") `frac_() → Tensor` In-place version of [`frac()`](#torch.Tensor.frac "torch.Tensor.frac") `gather(dim, index) → Tensor` See [`torch.gather()`](generated/torch.gather#torch.gather "torch.gather") `gcd(other) → Tensor` See [`torch.gcd()`](generated/torch.gcd#torch.gcd "torch.gcd") `gcd_(other) → Tensor` In-place version of [`gcd()`](#torch.Tensor.gcd "torch.Tensor.gcd") `ge(other) → Tensor` See [`torch.ge()`](generated/torch.ge#torch.ge "torch.ge"). `ge_(other) → Tensor` In-place version of [`ge()`](#torch.Tensor.ge "torch.Tensor.ge"). `greater_equal(other) → Tensor` See [`torch.greater_equal()`](generated/torch.greater_equal#torch.greater_equal "torch.greater_equal"). `greater_equal_(other) → Tensor` In-place version of [`greater_equal()`](#torch.Tensor.greater_equal "torch.Tensor.greater_equal"). `geometric_(p, *, generator=None) → Tensor` Fills `self` tensor with elements drawn from the geometric distribution: f(X=k) = p^{k - 1} (1 - p) `geqrf() -> (Tensor, Tensor)` See [`torch.geqrf()`](generated/torch.geqrf#torch.geqrf "torch.geqrf") `ger(vec2) → Tensor` See [`torch.ger()`](generated/torch.ger#torch.ger "torch.ger") `get_device() -> Device ordinal (Integer)` For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown. Example: ``` >>> x = torch.randn(3, 4, 5, device='cuda:0') >>> x.get_device() 0 >>> x.cpu().get_device() # RuntimeError: get_device is not implemented for type torch.FloatTensor ``` `gt(other) → Tensor` See [`torch.gt()`](generated/torch.gt#torch.gt "torch.gt"). `gt_(other) → Tensor` In-place version of [`gt()`](#torch.Tensor.gt "torch.Tensor.gt"). `greater(other) → Tensor` See [`torch.greater()`](generated/torch.greater#torch.greater "torch.greater").
`greater_(other) → Tensor` In-place version of [`greater()`](#torch.Tensor.greater "torch.Tensor.greater"). `half(memory_format=torch.preserve_format) → Tensor` `self.half()` is equivalent to `self.to(torch.float16)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to"). Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`. `hardshrink(lambd=0.5) → Tensor` See [`torch.nn.functional.hardshrink()`](nn.functional#torch.nn.functional.hardshrink "torch.nn.functional.hardshrink") `heaviside(values) → Tensor` See [`torch.heaviside()`](generated/torch.heaviside#torch.heaviside "torch.heaviside") `histc(bins=100, min=0, max=0) → Tensor` See [`torch.histc()`](generated/torch.histc#torch.histc "torch.histc") `hypot(other) → Tensor` See [`torch.hypot()`](generated/torch.hypot#torch.hypot "torch.hypot") `hypot_(other) → Tensor` In-place version of [`hypot()`](#torch.Tensor.hypot "torch.Tensor.hypot") `i0() → Tensor` See [`torch.i0()`](generated/torch.i0#torch.i0 "torch.i0") `i0_() → Tensor` In-place version of [`i0()`](#torch.Tensor.i0 "torch.Tensor.i0") `igamma(other) → Tensor` See [`torch.igamma()`](generated/torch.igamma#torch.igamma "torch.igamma") `igamma_(other) → Tensor` In-place version of [`igamma()`](#torch.Tensor.igamma "torch.Tensor.igamma") `igammac(other) → Tensor` See [`torch.igammac()`](generated/torch.igammac#torch.igammac "torch.igammac") `igammac_(other) → Tensor` In-place version of [`igammac()`](#torch.Tensor.igammac "torch.Tensor.igammac") `index_add_(dim, index, tensor) → Tensor` Accumulate the elements of [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") into the `self` tensor by adding to the indices in the order given in `index`. For example, if `dim == 0` and `index[i] == j`, then the `i`th row of [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") is added to the `j`th row of `self`. The [`dim`](#torch.Tensor.dim "torch.Tensor.dim")th dimension of [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") must have the same size as the length of `index` (which must be a vector), and all other dimensions must match `self`, or an error will be raised. Note This operation may behave nondeterministically when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information. Parameters * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension along which to index * **index** (*IntTensor* *or* *LongTensor*) – indices of [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") to select from * **tensor** ([Tensor](#torch.Tensor "torch.Tensor")) – the tensor containing values to add Example: ``` >>> x = torch.ones(5, 3) >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 4, 2]) >>> x.index_add_(0, index, t) tensor([[ 2., 3., 4.], [ 1., 1., 1.], [ 8., 9., 10.], [ 1., 1., 1.], [ 5., 6., 7.]]) ``` `index_add(tensor1, dim, index, tensor2) → Tensor` Out-of-place version of [`torch.Tensor.index_add_()`](#torch.Tensor.index_add_ "torch.Tensor.index_add_"). `tensor1` corresponds to `self` in [`torch.Tensor.index_add_()`](#torch.Tensor.index_add_ "torch.Tensor.index_add_"). 
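Supplementing the `index_add_()` entry above, a minimal sketch (illustrative values only) of how duplicate entries in `index` accumulate into the same row:

```
import torch

x = torch.zeros(3, 2)
t = torch.ones(4, 2)
idx = torch.tensor([0, 0, 2, 2])   # duplicates: rows 0 and 2 each receive two additions
x.index_add_(0, idx, t)
print(x)
# tensor([[2., 2.],
#         [0., 0.],
#         [2., 2.]])
```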
`index_copy_(dim, index, tensor) → Tensor` Copies the elements of [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") into the `self` tensor by selecting the indices in the order given in `index`. For example, if `dim == 0` and `index[i] == j`, then the `i`th row of [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") is copied to the `j`th row of `self`. The [`dim`](#torch.Tensor.dim "torch.Tensor.dim")th dimension of [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") must have the same size as the length of `index` (which must be a vector), and all other dimensions must match `self`, or an error will be raised. Note If `index` contains duplicate entries, multiple elements from [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") will be copied to the same index of `self`. The result is nondeterministic since it depends on which copy occurs last. Parameters * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension along which to index * **index** (*LongTensor*) – indices of [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") to select from * **tensor** ([Tensor](#torch.Tensor "torch.Tensor")) – the tensor containing values to copy Example: ``` >>> x = torch.zeros(5, 3) >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 4, 2]) >>> x.index_copy_(0, index, t) tensor([[ 1., 2., 3.], [ 0., 0., 0.], [ 7., 8., 9.], [ 0., 0., 0.], [ 4., 5., 6.]]) ``` `index_copy(tensor1, dim, index, tensor2) → Tensor` Out-of-place version of [`torch.Tensor.index_copy_()`](#torch.Tensor.index_copy_ "torch.Tensor.index_copy_"). `tensor1` corresponds to `self` in [`torch.Tensor.index_copy_()`](#torch.Tensor.index_copy_ "torch.Tensor.index_copy_"). `index_fill_(dim, index, val) → Tensor` Fills the elements of the `self` tensor with value `val` by selecting the indices in the order given in `index`. Parameters * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension along which to index * **index** (*LongTensor*) – indices of `self` tensor to fill in * **val** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the value to fill with Example: ``` >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 2]) >>> x.index_fill_(1, index, -1) tensor([[-1., 2., -1.], [-1., 5., -1.], [-1., 8., -1.]]) ``` `index_fill(tensor1, dim, index, value) → Tensor` Out-of-place version of [`torch.Tensor.index_fill_()`](#torch.Tensor.index_fill_ "torch.Tensor.index_fill_"). `tensor1` corresponds to `self` in [`torch.Tensor.index_fill_()`](#torch.Tensor.index_fill_ "torch.Tensor.index_fill_"). `index_put_(indices, values, accumulate=False) → Tensor` Puts values from the tensor [`values`](sparse#torch.Tensor.values "torch.Tensor.values") into the tensor `self` using the indices specified in [`indices`](sparse#torch.Tensor.indices "torch.Tensor.indices") (which is a tuple of Tensors). The expression `tensor.index_put_(indices, values)` is equivalent to `tensor[indices] = values`. Returns `self`. If `accumulate` is `True`, the elements in [`values`](sparse#torch.Tensor.values "torch.Tensor.values") are added to `self`. If accumulate is `False`, the behavior is undefined if indices contain duplicate elements. Parameters * **indices** (*tuple of LongTensor*) – tensors used to index into `self`.
* **values** ([Tensor](#torch.Tensor "torch.Tensor")) – tensor of same dtype as `self`. * **accumulate** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether to accumulate into self `index_put(tensor1, indices, values, accumulate=False) → Tensor` Out-of-place version of [`index_put_()`](#torch.Tensor.index_put_ "torch.Tensor.index_put_"). `tensor1` corresponds to `self` in [`torch.Tensor.index_put_()`](#torch.Tensor.index_put_ "torch.Tensor.index_put_"). `index_select(dim, index) → Tensor` See [`torch.index_select()`](generated/torch.index_select#torch.index_select "torch.index_select") `indices() → Tensor` Return the indices tensor of a [sparse COO tensor](sparse#sparse-coo-docs). Warning Throws an error if `self` is not a sparse COO tensor. See also [`Tensor.values()`](sparse#torch.Tensor.values "torch.Tensor.values"). Note This method can only be called on a coalesced sparse tensor. See [`Tensor.coalesce()`](sparse#torch.Tensor.coalesce "torch.Tensor.coalesce") for details. `inner(other) → Tensor` See [`torch.inner()`](generated/torch.inner#torch.inner "torch.inner"). `int(memory_format=torch.preserve_format) → Tensor` `self.int()` is equivalent to `self.to(torch.int32)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to"). Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`. `int_repr() → Tensor` Given a quantized Tensor, `self.int_repr()` returns a CPU Tensor with uint8\_t as data type that stores the underlying uint8\_t values of the given Tensor. `inverse() → Tensor` See [`torch.inverse()`](generated/torch.inverse#torch.inverse "torch.inverse") `isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor` See [`torch.isclose()`](generated/torch.isclose#torch.isclose "torch.isclose") `isfinite() → Tensor` See [`torch.isfinite()`](generated/torch.isfinite#torch.isfinite "torch.isfinite") `isinf() → Tensor` See [`torch.isinf()`](generated/torch.isinf#torch.isinf "torch.isinf") `isposinf() → Tensor` See [`torch.isposinf()`](generated/torch.isposinf#torch.isposinf "torch.isposinf") `isneginf() → Tensor` See [`torch.isneginf()`](generated/torch.isneginf#torch.isneginf "torch.isneginf") `isnan() → Tensor` See [`torch.isnan()`](generated/torch.isnan#torch.isnan "torch.isnan") `is_contiguous(memory_format=torch.contiguous_format) → bool` Returns True if `self` tensor is contiguous in memory in the order specified by memory format. Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – Specifies memory allocation order. Default: `torch.contiguous_format`. `is_complex() → bool` Returns True if the data type of `self` is a complex data type. `is_floating_point() → bool` Returns True if the data type of `self` is a floating point data type. `is_leaf` All Tensors that have [`requires_grad`](autograd#torch.Tensor.requires_grad "torch.Tensor.requires_grad") which is `False` will be leaf Tensors by convention. For Tensors that have [`requires_grad`](autograd#torch.Tensor.requires_grad "torch.Tensor.requires_grad") which is `True`, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so `grad_fn` is None.
Only leaf Tensors will have their [`grad`](autograd#torch.Tensor.grad "torch.Tensor.grad") populated during a call to [`backward()`](autograd#torch.Tensor.backward "torch.Tensor.backward"). To get [`grad`](autograd#torch.Tensor.grad "torch.Tensor.grad") populated for non-leaf Tensors, you can use [`retain_grad()`](autograd#torch.Tensor.retain_grad "torch.Tensor.retain_grad"). Example: ``` >>> a = torch.rand(10, requires_grad=True) >>> a.is_leaf True >>> b = torch.rand(10, requires_grad=True).cuda() >>> b.is_leaf False # b was created by the operation that cast a cpu Tensor into a cuda Tensor >>> c = torch.rand(10, requires_grad=True) + 2 >>> c.is_leaf False # c was created by the addition operation >>> d = torch.rand(10).cuda() >>> d.is_leaf True # d does not require gradients and so has no operation creating it (that is tracked by the autograd engine) >>> e = torch.rand(10).cuda().requires_grad_() >>> e.is_leaf True # e requires gradients and has no operations creating it >>> f = torch.rand(10, requires_grad=True, device="cuda") >>> f.is_leaf True # f requires grad, has no operation creating it ``` `is_pinned()` Returns True if this tensor resides in pinned memory. `is_set_to(tensor) → bool` Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride). `is_shared()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.is_shared) Checks if tensor is in shared memory. This is always `True` for CUDA tensors. `is_signed() → bool` Returns True if the data type of `self` is a signed data type. `is_sparse` Is `True` if the Tensor uses sparse storage layout, `False` otherwise. `istft(n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.istft) See [`torch.istft()`](generated/torch.istft#torch.istft "torch.istft") `isreal() → Tensor` See [`torch.isreal()`](generated/torch.isreal#torch.isreal "torch.isreal") `item() → number` Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see [`tolist()`](#torch.Tensor.tolist "torch.Tensor.tolist"). This operation is not differentiable. Example: ``` >>> x = torch.tensor([1.0]) >>> x.item() 1.0 ``` `kthvalue(k, dim=None, keepdim=False) -> (Tensor, LongTensor)` See [`torch.kthvalue()`](generated/torch.kthvalue#torch.kthvalue "torch.kthvalue") `lcm(other) → Tensor` See [`torch.lcm()`](generated/torch.lcm#torch.lcm "torch.lcm") `lcm_(other) → Tensor` In-place version of [`lcm()`](#torch.Tensor.lcm "torch.Tensor.lcm") `ldexp(other) → Tensor` See [`torch.ldexp()`](generated/torch.ldexp#torch.ldexp "torch.ldexp") `ldexp_(other) → Tensor` In-place version of [`ldexp()`](#torch.Tensor.ldexp "torch.Tensor.ldexp") `le(other) → Tensor` See [`torch.le()`](generated/torch.le#torch.le "torch.le"). `le_(other) → Tensor` In-place version of [`le()`](#torch.Tensor.le "torch.Tensor.le"). `less_equal(other) → Tensor` See [`torch.less_equal()`](generated/torch.less_equal#torch.less_equal "torch.less_equal"). `less_equal_(other) → Tensor` In-place version of [`less_equal()`](#torch.Tensor.less_equal "torch.Tensor.less_equal").
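Supplementing the `is_leaf` notes above, a minimal sketch (illustrative values only) of using `retain_grad()` to get `grad` populated for a non-leaf Tensor:

```
import torch

a = torch.rand(3, requires_grad=True)  # leaf: created by the user
b = a * 2                              # non-leaf: result of an operation
b.retain_grad()                        # ask autograd to keep .grad for b
b.sum().backward()
print(a.grad)                          # populated: a is a leaf
print(b.grad)                          # tensor([1., 1., 1.]), only because of retain_grad()
```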
`lerp(end, weight) → Tensor` See [`torch.lerp()`](generated/torch.lerp#torch.lerp "torch.lerp") `lerp_(end, weight) → Tensor` In-place version of [`lerp()`](#torch.Tensor.lerp "torch.Tensor.lerp") `lgamma() → Tensor` See [`torch.lgamma()`](generated/torch.lgamma#torch.lgamma "torch.lgamma") `lgamma_() → Tensor` In-place version of [`lgamma()`](#torch.Tensor.lgamma "torch.Tensor.lgamma") `log() → Tensor` See [`torch.log()`](generated/torch.log#torch.log "torch.log") `log_() → Tensor` In-place version of [`log()`](#torch.Tensor.log "torch.Tensor.log") `logdet() → Tensor` See [`torch.logdet()`](generated/torch.logdet#torch.logdet "torch.logdet") `log10() → Tensor` See [`torch.log10()`](generated/torch.log10#torch.log10 "torch.log10") `log10_() → Tensor` In-place version of [`log10()`](#torch.Tensor.log10 "torch.Tensor.log10") `log1p() → Tensor` See [`torch.log1p()`](generated/torch.log1p#torch.log1p "torch.log1p") `log1p_() → Tensor` In-place version of [`log1p()`](#torch.Tensor.log1p "torch.Tensor.log1p") `log2() → Tensor` See [`torch.log2()`](generated/torch.log2#torch.log2 "torch.log2") `log2_() → Tensor` In-place version of [`log2()`](#torch.Tensor.log2 "torch.Tensor.log2") `log_normal_(mean=1, std=2, *, generator=None)` Fills `self` tensor with numbers sampled from the log-normal distribution parameterized by the given mean μ and standard deviation σ. Note that [`mean`](generated/torch.mean#torch.mean "torch.mean") and [`std`](generated/torch.std#torch.std "torch.std") are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution: f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}} `logaddexp(other) → Tensor` See [`torch.logaddexp()`](generated/torch.logaddexp#torch.logaddexp "torch.logaddexp") `logaddexp2(other) → Tensor` See [`torch.logaddexp2()`](generated/torch.logaddexp2#torch.logaddexp2 "torch.logaddexp2") `logsumexp(dim, keepdim=False) → Tensor` See [`torch.logsumexp()`](generated/torch.logsumexp#torch.logsumexp "torch.logsumexp") `logical_and() → Tensor` See [`torch.logical_and()`](generated/torch.logical_and#torch.logical_and "torch.logical_and") `logical_and_() → Tensor` In-place version of [`logical_and()`](#torch.Tensor.logical_and "torch.Tensor.logical_and") `logical_not() → Tensor` See [`torch.logical_not()`](generated/torch.logical_not#torch.logical_not "torch.logical_not") `logical_not_() → Tensor` In-place version of [`logical_not()`](#torch.Tensor.logical_not "torch.Tensor.logical_not") `logical_or() → Tensor` See [`torch.logical_or()`](generated/torch.logical_or#torch.logical_or "torch.logical_or") `logical_or_() → Tensor` In-place version of [`logical_or()`](#torch.Tensor.logical_or "torch.Tensor.logical_or") `logical_xor() → Tensor` See [`torch.logical_xor()`](generated/torch.logical_xor#torch.logical_xor "torch.logical_xor") `logical_xor_() → Tensor` In-place version of [`logical_xor()`](#torch.Tensor.logical_xor "torch.Tensor.logical_xor") `logit() → Tensor` See [`torch.logit()`](generated/torch.logit#torch.logit "torch.logit") `logit_() → Tensor` In-place version of [`logit()`](#torch.Tensor.logit "torch.Tensor.logit") `long(memory_format=torch.preserve_format) → Tensor` `self.long()` is equivalent to `self.to(torch.int64)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to").
Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`. `lstsq(A) -> (Tensor, Tensor)` See [`torch.lstsq()`](generated/torch.lstsq#torch.lstsq "torch.lstsq") `lt(other) → Tensor` See [`torch.lt()`](generated/torch.lt#torch.lt "torch.lt"). `lt_(other) → Tensor` In-place version of [`lt()`](#torch.Tensor.lt "torch.Tensor.lt"). `less(other) → Tensor` See [`torch.less()`](generated/torch.less#torch.less "torch.less"). `less_(other) → Tensor` In-place version of [`less()`](#torch.Tensor.less "torch.Tensor.less"). `lu(pivot=True, get_infos=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.lu) See [`torch.lu()`](generated/torch.lu#torch.lu "torch.lu") `lu_solve(LU_data, LU_pivots) → Tensor` See [`torch.lu_solve()`](generated/torch.lu_solve#torch.lu_solve "torch.lu_solve") `as_subclass(cls) → Tensor` Makes a `cls` instance with the same data pointer as `self`. Changes in the output mirror changes in `self`, and the output stays attached to the autograd graph. `cls` must be a subclass of `Tensor`. `map_(tensor, callable)` Applies `callable` for each element in `self` tensor and the given [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") and stores the results in `self` tensor. `self` tensor and the given [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics). The `callable` should have the signature: ``` def callable(a, b) -> number ``` `masked_scatter_(mask, source)` Copies elements from `source` into `self` tensor at positions where the `mask` is True. The shape of `mask` must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) with the shape of the underlying tensor. The `source` should have at least as many elements as the number of ones in `mask`. Parameters * **mask** (*BoolTensor*) – the boolean mask * **source** ([Tensor](#torch.Tensor "torch.Tensor")) – the tensor to copy from Note The `mask` operates on the `self` tensor, not on the given `source` tensor. `masked_scatter(mask, tensor) → Tensor` Out-of-place version of [`torch.Tensor.masked_scatter_()`](#torch.Tensor.masked_scatter_ "torch.Tensor.masked_scatter_") `masked_fill_(mask, value)` Fills elements of `self` tensor with `value` where `mask` is True. The shape of `mask` must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) with the shape of the underlying tensor.
Parameters * **mask** (*BoolTensor*) – the boolean mask * **value** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the value to fill in with `masked_fill(mask, value) → Tensor` Out-of-place version of [`torch.Tensor.masked_fill_()`](#torch.Tensor.masked_fill_ "torch.Tensor.masked_fill_") `masked_select(mask) → Tensor` See [`torch.masked_select()`](generated/torch.masked_select#torch.masked_select "torch.masked_select") `matmul(tensor2) → Tensor` See [`torch.matmul()`](generated/torch.matmul#torch.matmul "torch.matmul") `matrix_power(n) → Tensor` See [`torch.matrix_power()`](generated/torch.matrix_power#torch.matrix_power "torch.matrix_power") `matrix_exp() → Tensor` See [`torch.matrix_exp()`](generated/torch.matrix_exp#torch.matrix_exp "torch.matrix_exp") `max(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)` See [`torch.max()`](generated/torch.max#torch.max "torch.max") `maximum(other) → Tensor` See [`torch.maximum()`](generated/torch.maximum#torch.maximum "torch.maximum") `mean(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)` See [`torch.mean()`](generated/torch.mean#torch.mean "torch.mean") `median(dim=None, keepdim=False) -> (Tensor, LongTensor)` See [`torch.median()`](generated/torch.median#torch.median "torch.median") `nanmedian(dim=None, keepdim=False) -> (Tensor, LongTensor)` See [`torch.nanmedian()`](generated/torch.nanmedian#torch.nanmedian "torch.nanmedian") `min(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)` See [`torch.min()`](generated/torch.min#torch.min "torch.min") `minimum(other) → Tensor` See [`torch.minimum()`](generated/torch.minimum#torch.minimum "torch.minimum") `mm(mat2) → Tensor` See [`torch.mm()`](generated/torch.mm#torch.mm "torch.mm") `smm(mat) → Tensor` See [`torch.smm()`](sparse#torch.smm "torch.smm") `mode(dim=None, keepdim=False) -> (Tensor, LongTensor)` See [`torch.mode()`](generated/torch.mode#torch.mode "torch.mode") `movedim(source, destination) → Tensor` See [`torch.movedim()`](generated/torch.movedim#torch.movedim "torch.movedim") `moveaxis(source, destination) → Tensor` See [`torch.moveaxis()`](generated/torch.moveaxis#torch.moveaxis "torch.moveaxis") `msort() → Tensor` See [`torch.msort()`](generated/torch.msort#torch.msort "torch.msort") `mul(value) → Tensor` See [`torch.mul()`](generated/torch.mul#torch.mul "torch.mul"). `mul_(value) → Tensor` In-place version of [`mul()`](#torch.Tensor.mul "torch.Tensor.mul"). `multiply(value) → Tensor` See [`torch.multiply()`](generated/torch.multiply#torch.multiply "torch.multiply"). `multiply_(value) → Tensor` In-place version of [`multiply()`](#torch.Tensor.multiply "torch.Tensor.multiply"). 
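Supplementing the `masked_fill_()` entry above, a minimal sketch (illustrative values only) using the out-of-place variant:

```
import torch

x = torch.arange(6.).reshape(2, 3)
mask = x > 3                       # BoolTensor, broadcastable with x
print(x.masked_fill(mask, -1.0))   # out-of-place variant; x itself is unchanged
# tensor([[ 0.,  1.,  2.],
#         [ 3., -1., -1.]])
```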
`multinomial(num_samples, replacement=False, *, generator=None) → Tensor` See [`torch.multinomial()`](generated/torch.multinomial#torch.multinomial "torch.multinomial") `mv(vec) → Tensor` See [`torch.mv()`](generated/torch.mv#torch.mv "torch.mv") `mvlgamma(p) → Tensor` See [`torch.mvlgamma()`](generated/torch.mvlgamma#torch.mvlgamma "torch.mvlgamma") `mvlgamma_(p) → Tensor` In-place version of [`mvlgamma()`](#torch.Tensor.mvlgamma "torch.Tensor.mvlgamma") `nansum(dim=None, keepdim=False, dtype=None) → Tensor` See [`torch.nansum()`](generated/torch.nansum#torch.nansum "torch.nansum") `narrow(dimension, start, length) → Tensor` See [`torch.narrow()`](generated/torch.narrow#torch.narrow "torch.narrow") Example: ``` >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> x.narrow(0, 0, 2) tensor([[ 1, 2, 3], [ 4, 5, 6]]) >>> x.narrow(1, 1, 2) tensor([[ 2, 3], [ 5, 6], [ 8, 9]]) ``` `narrow_copy(dimension, start, length) → Tensor` Same as [`Tensor.narrow()`](#torch.Tensor.narrow "torch.Tensor.narrow") except returning a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method. Calling `narrow_copy` with `dimension > self.sparse_dim()` will return a copy with the relevant dense dimension narrowed, and `self.shape` updated accordingly. `ndimension() → int` Alias for [`dim()`](#torch.Tensor.dim "torch.Tensor.dim") `nan_to_num(nan=0.0, posinf=None, neginf=None) → Tensor` See [`torch.nan_to_num()`](generated/torch.nan_to_num#torch.nan_to_num "torch.nan_to_num"). `nan_to_num_(nan=0.0, posinf=None, neginf=None) → Tensor` In-place version of [`nan_to_num()`](#torch.Tensor.nan_to_num "torch.Tensor.nan_to_num"). `ne(other) → Tensor` See [`torch.ne()`](generated/torch.ne#torch.ne "torch.ne"). `ne_(other) → Tensor` In-place version of [`ne()`](#torch.Tensor.ne "torch.Tensor.ne"). `not_equal(other) → Tensor` See [`torch.not_equal()`](generated/torch.not_equal#torch.not_equal "torch.not_equal"). `not_equal_(other) → Tensor` In-place version of [`not_equal()`](#torch.Tensor.not_equal "torch.Tensor.not_equal"). `neg() → Tensor` See [`torch.neg()`](generated/torch.neg#torch.neg "torch.neg") `neg_() → Tensor` In-place version of [`neg()`](#torch.Tensor.neg "torch.Tensor.neg") `negative() → Tensor` See [`torch.negative()`](generated/torch.negative#torch.negative "torch.negative") `negative_() → Tensor` In-place version of [`negative()`](#torch.Tensor.negative "torch.Tensor.negative") `nelement() → int` Alias for [`numel()`](#torch.Tensor.numel "torch.Tensor.numel") `nextafter(other) → Tensor` See [`torch.nextafter()`](generated/torch.nextafter#torch.nextafter "torch.nextafter") `nextafter_(other) → Tensor` In-place version of [`nextafter()`](#torch.Tensor.nextafter "torch.Tensor.nextafter") `nonzero() → LongTensor` See [`torch.nonzero()`](generated/torch.nonzero#torch.nonzero "torch.nonzero") `norm(p='fro', dim=None, keepdim=False, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.norm) See [`torch.norm()`](generated/torch.norm#torch.norm "torch.norm") `normal_(mean=0, std=1, *, generator=None) → Tensor` Fills `self` tensor with elements sampled from the normal distribution parameterized by [`mean`](generated/torch.mean#torch.mean "torch.mean") and [`std`](generated/torch.std#torch.std "torch.std"). `numel() → int` See [`torch.numel()`](generated/torch.numel#torch.numel "torch.numel") `numpy() → numpy.ndarray` Returns `self` tensor as a NumPy `ndarray`.
This tensor and the returned `ndarray` share the same underlying storage. Changes to `self` tensor will be reflected in the `ndarray` and vice versa. `orgqr(input2) → Tensor` See [`torch.orgqr()`](generated/torch.orgqr#torch.orgqr "torch.orgqr") `ormqr(input2, input3, left=True, transpose=False) → Tensor` See [`torch.ormqr()`](generated/torch.ormqr#torch.ormqr "torch.ormqr") `outer(vec2) → Tensor` See [`torch.outer()`](generated/torch.outer#torch.outer "torch.outer"). `permute(*dims) → Tensor` Returns a view of the original tensor with its dimensions permuted. Parameters **\*dims** (*int...*) – The desired ordering of dimensions Example: ``` >>> x = torch.randn(2, 3, 5) >>> x.size() torch.Size([2, 3, 5]) >>> x.permute(2, 0, 1).size() torch.Size([5, 2, 3]) ``` `pin_memory() → Tensor` Copies the tensor to pinned memory, if it’s not already pinned. `pinverse() → Tensor` See [`torch.pinverse()`](generated/torch.pinverse#torch.pinverse "torch.pinverse") `polygamma(n) → Tensor` See [`torch.polygamma()`](generated/torch.polygamma#torch.polygamma "torch.polygamma") `polygamma_(n) → Tensor` In-place version of [`polygamma()`](#torch.Tensor.polygamma "torch.Tensor.polygamma") `pow(exponent) → Tensor` See [`torch.pow()`](generated/torch.pow#torch.pow "torch.pow") `pow_(exponent) → Tensor` In-place version of [`pow()`](#torch.Tensor.pow "torch.Tensor.pow") `prod(dim=None, keepdim=False, dtype=None) → Tensor` See [`torch.prod()`](generated/torch.prod#torch.prod "torch.prod") `put_(indices, tensor, accumulate=False) → Tensor` Copies the elements from [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") into the positions specified by indices. For the purpose of indexing, the `self` tensor is treated as if it were a 1-D tensor. If `accumulate` is `True`, the elements in [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor") are added to `self`. If accumulate is `False`, the behavior is undefined if indices contain duplicate elements. Parameters * **indices** (*LongTensor*) – the indices into self * **tensor** ([Tensor](#torch.Tensor "torch.Tensor")) – the tensor containing values to copy from * **accumulate** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether to accumulate into self Example: ``` >>> src = torch.tensor([[4, 3, 5], ... [6, 7, 8]]) >>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10])) tensor([[ 4, 9, 5], [ 10, 7, 8]]) ``` `qr(some=True) -> (Tensor, Tensor)` See [`torch.qr()`](generated/torch.qr#torch.qr "torch.qr") `qscheme() → torch.qscheme` Returns the quantization scheme of a given QTensor. `quantile(q, dim=None, keepdim=False) → Tensor` See [`torch.quantile()`](generated/torch.quantile#torch.quantile "torch.quantile") `nanquantile(q, dim=None, keepdim=False) → Tensor` See [`torch.nanquantile()`](generated/torch.nanquantile#torch.nanquantile "torch.nanquantile") `q_scale() → float` Given a Tensor quantized by linear(affine) quantization, returns the scale of the underlying quantizer. `q_zero_point() → int` Given a Tensor quantized by linear(affine) quantization, returns the zero\_point of the underlying quantizer. `q_per_channel_scales() → Tensor` Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q\_per\_channel\_axis) of the tensor.
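Returning to the `numpy()` entry above, a minimal sketch (illustrative values only) of the shared-storage behavior:

```
import torch

t = torch.ones(3)
a = t.numpy()   # a shares memory with t (CPU tensors only)
t.add_(1)       # an in-place change to t ...
print(a)        # [2. 2. 2.] ... is visible through the ndarray
```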
`q_per_channel_zero_points() → Tensor` Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero\_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q\_per\_channel\_axis) of the tensor. `q_per_channel_axis() → int` Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied. `rad2deg() → Tensor` See [`torch.rad2deg()`](generated/torch.rad2deg#torch.rad2deg "torch.rad2deg") `random_(from=0, to=None, *, generator=None) → Tensor` Fills `self` tensor with numbers sampled from the discrete uniform distribution over `[from, to - 1]`. If not specified, the values are usually only bounded by `self` tensor’s data type. However, for floating point types, if unspecified, range will be `[0, 2^mantissa]` to ensure that every value is representable. For example, `torch.tensor(1, dtype=torch.double).random_()` will be uniform in `[0, 2^53]`. `ravel(input) → Tensor` See [`torch.ravel()`](generated/torch.ravel#torch.ravel "torch.ravel") `reciprocal() → Tensor` See [`torch.reciprocal()`](generated/torch.reciprocal#torch.reciprocal "torch.reciprocal") `reciprocal_() → Tensor` In-place version of [`reciprocal()`](#torch.Tensor.reciprocal "torch.Tensor.reciprocal") `record_stream(stream)` Ensures that the tensor memory is not reused for another tensor until all current work queued on `stream` is complete. Note The caching allocator is aware of only the stream where a tensor was allocated. Due to the awareness, it already correctly manages the life cycle of tensors on only one stream. But if a tensor is used on a stream different from the stream of origin, the allocator might reuse the memory unexpectedly. Calling this method lets the allocator know which streams have used the tensor. `register_hook(hook)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.register_hook) Registers a backward hook. The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature: ``` hook(grad) -> Tensor or None ``` The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of [`grad`](autograd#torch.Tensor.grad "torch.Tensor.grad"). This function returns a handle with a method `handle.remove()` that removes the hook from the module. Example: ``` >>> v = torch.tensor([0., 0., 0.], requires_grad=True) >>> h = v.register_hook(lambda grad: grad * 2) # double the gradient >>> v.backward(torch.tensor([1., 2., 3.])) >>> v.grad 2 4 6 [torch.FloatTensor of size (3,)] >>> h.remove() # removes the hook ``` `remainder(divisor) → Tensor` See [`torch.remainder()`](generated/torch.remainder#torch.remainder "torch.remainder") `remainder_(divisor) → Tensor` In-place version of [`remainder()`](#torch.Tensor.remainder "torch.Tensor.remainder") `renorm(p, dim, maxnorm) → Tensor` See [`torch.renorm()`](generated/torch.renorm#torch.renorm "torch.renorm") `renorm_(p, dim, maxnorm) → Tensor` In-place version of [`renorm()`](#torch.Tensor.renorm "torch.Tensor.renorm") `repeat(*sizes) → Tensor` Repeats this tensor along the specified dimensions. Unlike [`expand()`](#torch.Tensor.expand "torch.Tensor.expand"), this function copies the tensor’s data.
Warning [`repeat()`](#torch.Tensor.repeat "torch.Tensor.repeat") behaves differently from [numpy.repeat](https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html), but is more similar to [numpy.tile](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html). For the operator similar to `numpy.repeat`, see [`torch.repeat_interleave()`](generated/torch.repeat_interleave#torch.repeat_interleave "torch.repeat_interleave"). Parameters **sizes** (*torch.Size* *or* *int...*) – The number of times to repeat this tensor along each dimension Example: ``` >>> x = torch.tensor([1, 2, 3]) >>> x.repeat(4, 2) tensor([[ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3]]) >>> x.repeat(4, 2, 1).size() torch.Size([4, 2, 3]) ``` `repeat_interleave(repeats, dim=None) → Tensor` See [`torch.repeat_interleave()`](generated/torch.repeat_interleave#torch.repeat_interleave "torch.repeat_interleave"). `requires_grad` Is `True` if gradients need to be computed for this Tensor, `False` otherwise. Note The fact that gradients need to be computed for a Tensor does not mean that the [`grad`](autograd#torch.Tensor.grad "torch.Tensor.grad") attribute will be populated, see [`is_leaf`](autograd#torch.Tensor.is_leaf "torch.Tensor.is_leaf") for more details. `requires_grad_(requires_grad=True) → Tensor` Change if autograd should record operations on this tensor: sets this tensor’s [`requires_grad`](autograd#torch.Tensor.requires_grad "torch.Tensor.requires_grad") attribute in-place. Returns this tensor. [`requires_grad_()`](#torch.Tensor.requires_grad_ "torch.Tensor.requires_grad_")’s main use case is to tell autograd to begin recording operations on a Tensor `tensor`. If `tensor` has `requires_grad=False` (because it was obtained through a DataLoader, or required preprocessing or initialization), `tensor.requires_grad_()` makes it so that autograd will begin to record operations on `tensor`. Parameters **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If autograd should record operations on this tensor. Default: `True`. Example: ``` >>> # Let's say we want to preprocess some saved weights and use >>> # the result as new weights. >>> saved_weights = [0.1, 0.2, 0.3, 0.25] >>> loaded_weights = torch.tensor(saved_weights) >>> weights = preprocess(loaded_weights) # some function >>> weights tensor([-0.5503, 0.4926, -2.1158, -0.8303]) >>> # Now, start to record operations done to weights >>> weights.requires_grad_() >>> out = weights.pow(2).sum() >>> out.backward() >>> weights.grad tensor([-1.1007, 0.9853, -4.2316, -1.6606]) ``` `reshape(*shape) → Tensor` Returns a tensor with the same data and number of elements as `self` but with the specified shape. This method returns a view if `shape` is compatible with the current shape. See [`torch.Tensor.view()`](#torch.Tensor.view "torch.Tensor.view") on when it is possible to return a view. See [`torch.reshape()`](generated/torch.reshape#torch.reshape "torch.reshape") Parameters **shape** (*tuple of ints* *or* *int...*) – the desired shape `reshape_as(other) → Tensor` Returns this tensor as the same shape as `other`. `self.reshape_as(other)` is equivalent to `self.reshape(other.sizes())`. This method returns a view if `other.sizes()` is compatible with the current shape. See [`torch.Tensor.view()`](#torch.Tensor.view "torch.Tensor.view") on when it is possible to return a view.
Please see [`reshape()`](generated/torch.reshape#torch.reshape "torch.reshape") for more information about `reshape`. Parameters **other** ([`torch.Tensor`](#torch.Tensor "torch.Tensor")) – The result tensor has the same shape as `other`. `resize_(*sizes, memory_format=torch.contiguous_format) → Tensor` Resizes `self` tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized. Warning This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use [`view()`](#torch.Tensor.view "torch.Tensor.view"), which checks for contiguity, or [`reshape()`](#torch.Tensor.reshape "torch.Tensor.reshape"), which copies data if needed. To change the size in-place with custom strides, see [`set_()`](#torch.Tensor.set_ "torch.Tensor.set_"). Parameters * **sizes** (*torch.Size* *or* *int...*) – the desired size * **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of Tensor. Default: `torch.contiguous_format`. Note that memory format of `self` is going to be unaffected if `self.size()` matches `sizes`. Example: ``` >>> x = torch.tensor([[1, 2], [3, 4], [5, 6]]) >>> x.resize_(2, 2) tensor([[ 1, 2], [ 3, 4]]) ``` `resize_as_(tensor, memory_format=torch.contiguous_format) → Tensor` Resizes the `self` tensor to be the same size as the specified [`tensor`](generated/torch.tensor#torch.tensor "torch.tensor"). This is equivalent to `self.resize_(tensor.size())`. Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of Tensor. Default: `torch.contiguous_format`. Note that memory format of `self` is going to be unaffected if `self.size()` matches `tensor.size()`. `retain_grad()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.retain_grad) Enables .grad attribute for non-leaf Tensors. `roll(shifts, dims) → Tensor` See [`torch.roll()`](generated/torch.roll#torch.roll "torch.roll") `rot90(k, dims) → Tensor` See [`torch.rot90()`](generated/torch.rot90#torch.rot90 "torch.rot90") `round() → Tensor` See [`torch.round()`](generated/torch.round#torch.round "torch.round") `round_() → Tensor` In-place version of [`round()`](#torch.Tensor.round "torch.Tensor.round") `rsqrt() → Tensor` See [`torch.rsqrt()`](generated/torch.rsqrt#torch.rsqrt "torch.rsqrt") `rsqrt_() → Tensor` In-place version of [`rsqrt()`](#torch.Tensor.rsqrt "torch.Tensor.rsqrt") `scatter(dim, index, src) → Tensor` Out-of-place version of [`torch.Tensor.scatter_()`](#torch.Tensor.scatter_ "torch.Tensor.scatter_") `scatter_(dim, index, src, reduce=None) → Tensor` Writes all values from the tensor `src` into `self` at the indices specified in the `index` tensor. For each value in `src`, its output index is specified by its index in `src` for `dimension != dim` and by the corresponding value in `index` for `dimension = dim`. 
For a 3-D tensor, `self` is updated as: ``` self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2 ``` This is the reverse operation of the manner described in [`gather()`](#torch.Tensor.gather "torch.Tensor.gather"). `self`, `index` and `src` (if it is a Tensor) should all have the same number of dimensions. It is also required that `index.size(d) <= src.size(d)` for all dimensions `d`, and that `index.size(d) <= self.size(d)` for all dimensions `d != dim`. Note that `index` and `src` do not broadcast. Moreover, as for [`gather()`](#torch.Tensor.gather "torch.Tensor.gather"), the values of `index` must be between `0` and `self.size(dim) - 1` inclusive. Warning When indices are not unique, the behavior is non-deterministic (one of the values from `src` will be picked arbitrarily) and the gradient will be incorrect (it will be propagated to all locations in the source that correspond to the same index)! Note The backward pass is implemented only for `src.shape == index.shape`. Additionally accepts an optional `reduce` argument that allows specification of an optional reduction operation, which is applied to all values in the tensor `src` into `self` at the indices specified in the `index`. For each value in `src`, the reduction operation is applied to an index in `self` which is specified by its index in `src` for `dimension != dim` and by the corresponding value in `index` for `dimension = dim`. Given a 3-D tensor and reduction using the multiplication operation, `self` is updated as: ``` self[index[i][j][k]][j][k] *= src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] *= src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] *= src[i][j][k] # if dim == 2 ``` Reducing with the addition operation is the same as using [`scatter_add_()`](#torch.Tensor.scatter_add_ "torch.Tensor.scatter_add_"). Parameters * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the axis along which to index * **index** (*LongTensor*) – the indices of elements to scatter, can be either empty or of the same dimensionality as `src`. When empty, the operation returns `self` unchanged. * **src** ([Tensor](#torch.Tensor "torch.Tensor") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the source element(s) to scatter. * **reduce** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – reduction operation to apply, can be either `'add'` or `'multiply'`. Example: ``` >>> src = torch.arange(1, 11).reshape((2, 5)) >>> src tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]) >>> index = torch.tensor([[0, 1, 2, 0]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src) tensor([[1, 0, 0, 4, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0]]) >>> index = torch.tensor([[0, 1, 2], [0, 1, 4]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src) tensor([[1, 2, 3, 0, 0], [6, 7, 0, 0, 8], [0, 0, 0, 0, 0]]) >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), ... 1.23, reduce='multiply') tensor([[2.0000, 2.0000, 2.4600, 2.0000], [2.0000, 2.0000, 2.0000, 2.4600]]) >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), ...
1.23, reduce='add') tensor([[2.0000, 2.0000, 3.2300, 2.0000], [2.0000, 2.0000, 2.0000, 3.2300]]) ``` `scatter_add_(dim, index, src) → Tensor` Adds all values from the tensor `src` into `self` at the indices specified in the `index` tensor in a similar fashion as [`scatter_()`](#torch.Tensor.scatter_ "torch.Tensor.scatter_"). For each value in `src`, it is added to an index in `self` which is specified by its index in `src` for `dimension != dim` and by the corresponding value in `index` for `dimension = dim`. For a 3-D tensor, `self` is updated as: ``` self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2 ``` `self`, `index` and `src` should have the same number of dimensions. It is also required that `index.size(d) <= src.size(d)` for all dimensions `d`, and that `index.size(d) <= self.size(d)` for all dimensions `d != dim`. Note that `index` and `src` do not broadcast. Note This operation may behave nondeterministically when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information. Note The backward pass is implemented only for `src.shape == index.shape`. Parameters * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the axis along which to index * **index** (*LongTensor*) – the indices of elements to scatter and add, can be either empty or of the same dimensionality as `src`. When empty, the operation returns `self` unchanged. * **src** ([Tensor](#torch.Tensor "torch.Tensor")) – the source elements to scatter and add Example: ``` >>> src = torch.ones((2, 5)) >>> index = torch.tensor([[0, 1, 2, 0, 0]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src) tensor([[1., 0., 0., 1., 1.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.]]) >>> index = torch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src) tensor([[2., 0., 0., 1., 1.], [0., 2., 0., 0., 0.], [0., 0., 2., 1., 1.]]) ``` `scatter_add(dim, index, src) → Tensor` Out-of-place version of [`torch.Tensor.scatter_add_()`](#torch.Tensor.scatter_add_ "torch.Tensor.scatter_add_") `select(dim, index) → Tensor` Slices the `self` tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed. Parameters * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to slice * **index** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the index to select with Note [`select()`](#torch.Tensor.select "torch.Tensor.select") is equivalent to slicing. For example, `tensor.select(0, index)` is equivalent to `tensor[index]` and `tensor.select(2, index)` is equivalent to `tensor[:,:,index]`. `set_(source=None, storage_offset=0, size=None, stride=None) → Tensor` Sets the underlying storage, size, and strides. If `source` is a tensor, `self` tensor will share the same storage and have the same size and strides as `source`. Changes to elements in one tensor will be reflected in the other. If `source` is a `Storage`, the method sets the underlying storage, offset, size, and stride.
Parameters * **source** ([Tensor](#torch.Tensor "torch.Tensor") *or* *Storage*) – the tensor or storage to use * **storage\_offset** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the offset in the storage * **size** (*torch.Size**,* *optional*) – the desired size. Defaults to the size of the source. * **stride** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the desired stride. Defaults to C-contiguous strides. `share_memory_()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.share_memory_) Moves the underlying storage to shared memory. This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized. `short(memory_format=torch.preserve_format) → Tensor` `self.short()` is equivalent to `self.to(torch.int16)`. See [`to()`](#torch.Tensor.to "torch.Tensor.to"). Parameters **memory\_format** ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`. `sigmoid() → Tensor` See [`torch.sigmoid()`](generated/torch.sigmoid#torch.sigmoid "torch.sigmoid") `sigmoid_() → Tensor` In-place version of [`sigmoid()`](#torch.Tensor.sigmoid "torch.Tensor.sigmoid") `sign() → Tensor` See [`torch.sign()`](generated/torch.sign#torch.sign "torch.sign") `sign_() → Tensor` In-place version of [`sign()`](#torch.Tensor.sign "torch.Tensor.sign") `signbit() → Tensor` See [`torch.signbit()`](generated/torch.signbit#torch.signbit "torch.signbit") `sgn() → Tensor` See [`torch.sgn()`](generated/torch.sgn#torch.sgn "torch.sgn") `sgn_() → Tensor` In-place version of [`sgn()`](#torch.Tensor.sgn "torch.Tensor.sgn") `sin() → Tensor` See [`torch.sin()`](generated/torch.sin#torch.sin "torch.sin") `sin_() → Tensor` In-place version of [`sin()`](#torch.Tensor.sin "torch.Tensor.sin") `sinc() → Tensor` See [`torch.sinc()`](generated/torch.sinc#torch.sinc "torch.sinc") `sinc_() → Tensor` In-place version of [`sinc()`](#torch.Tensor.sinc "torch.Tensor.sinc") `sinh() → Tensor` See [`torch.sinh()`](generated/torch.sinh#torch.sinh "torch.sinh") `sinh_() → Tensor` In-place version of [`sinh()`](#torch.Tensor.sinh "torch.Tensor.sinh") `asinh() → Tensor` See [`torch.asinh()`](generated/torch.asinh#torch.asinh "torch.asinh") `asinh_() → Tensor` In-place version of [`asinh()`](#torch.Tensor.asinh "torch.Tensor.asinh") `arcsinh() → Tensor` See [`torch.arcsinh()`](generated/torch.arcsinh#torch.arcsinh "torch.arcsinh") `arcsinh_() → Tensor` In-place version of [`arcsinh()`](#torch.Tensor.arcsinh "torch.Tensor.arcsinh") `size() → torch.Size` Returns the size of the `self` tensor. The returned value is a subclass of [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)"). 
Example: ``` >>> torch.empty(3, 4, 5).size() torch.Size([3, 4, 5]) ``` `slogdet() -> (Tensor, Tensor)` See [`torch.slogdet()`](generated/torch.slogdet#torch.slogdet "torch.slogdet") `solve(A) → Tensor, Tensor` See [`torch.solve()`](generated/torch.solve#torch.solve "torch.solve") `sort(dim=-1, descending=False) -> (Tensor, LongTensor)` See [`torch.sort()`](generated/torch.sort#torch.sort "torch.sort") `split(split_size, dim=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.split) See [`torch.split()`](generated/torch.split#torch.split "torch.split") `sparse_mask(mask) → Tensor` Returns a new [sparse tensor](sparse#sparse-docs) with values from a strided tensor `self` filtered by the indices of the sparse tensor `mask`. The values of `mask` sparse tensor are ignored. `self` and `mask` tensors must have the same shape. Note The returned sparse tensor has the same indices as the sparse tensor `mask`, even when the corresponding values in `self` are zeros. Parameters **mask** ([Tensor](#torch.Tensor "torch.Tensor")) – a sparse tensor whose indices are used as a filter Example: ``` >>> nse = 5 >>> dims = (5, 5, 2, 2) >>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)), ... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse) >>> V = torch.randn(nse, dims[2], dims[3]) >>> S = torch.sparse_coo_tensor(I, V, dims).coalesce() >>> D = torch.randn(dims) >>> D.sparse_mask(S) tensor(indices=tensor([[0, 0, 0, 2], [0, 1, 4, 3]]), values=tensor([[[ 1.6550, 0.2397], [-0.1611, -0.0779]], [[ 0.2326, -1.0558], [ 1.4711, 1.9678]], [[-0.5138, -0.0411], [ 1.9417, 0.5158]], [[ 0.0793, 0.0036], [-0.2569, -0.1055]]]), size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo) ``` `sparse_dim() → int` Return the number of sparse dimensions in a [sparse tensor](sparse#sparse-docs) `self`. Warning Throws an error if `self` is not a sparse tensor. See also [`Tensor.dense_dim()`](sparse#torch.Tensor.dense_dim "torch.Tensor.dense_dim") and [hybrid tensors](sparse#sparse-hybrid-coo-docs). `sqrt() → Tensor` See [`torch.sqrt()`](generated/torch.sqrt#torch.sqrt "torch.sqrt") `sqrt_() → Tensor` In-place version of [`sqrt()`](#torch.Tensor.sqrt "torch.Tensor.sqrt") `square() → Tensor` See [`torch.square()`](generated/torch.square#torch.square "torch.square") `square_() → Tensor` In-place version of [`square()`](#torch.Tensor.square "torch.Tensor.square") `squeeze(dim=None) → Tensor` See [`torch.squeeze()`](generated/torch.squeeze#torch.squeeze "torch.squeeze") `squeeze_(dim=None) → Tensor` In-place version of [`squeeze()`](#torch.Tensor.squeeze "torch.Tensor.squeeze") `std(dim=None, unbiased=True, keepdim=False) → Tensor` See [`torch.std()`](generated/torch.std#torch.std "torch.std") `stft(n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.stft) See [`torch.stft()`](generated/torch.stft#torch.stft "torch.stft") Warning This function changed signature at version 0.4.1. Calling with the previous signature may cause an error or return an incorrect result. `storage() → torch.Storage` Returns the underlying storage. `storage_offset() → int` Returns `self` tensor’s offset in the underlying storage in terms of number of storage elements (not bytes). Example: ``` >>> x = torch.tensor([1, 2, 3, 4, 5]) >>> x.storage_offset() 0 >>> x[3:].storage_offset() 3 ``` `storage_type() → type` Returns the type of the underlying storage.
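Supplementing the `squeeze()` entries above, a minimal sketch (illustrative shapes only) of removing size-1 dimensions:

```
import torch

x = torch.zeros(2, 1, 3, 1)
print(x.squeeze().shape)    # torch.Size([2, 3]) -- every size-1 dim removed
print(x.squeeze(1).shape)   # torch.Size([2, 3, 1]) -- only dim 1 removed
```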
`stride(dim) → tuple or int` Returns the stride of `self` tensor. Stride is the jump necessary to go from one element to the next one in the specified dimension [`dim`](#torch.Tensor.dim "torch.Tensor.dim"). A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension [`dim`](#torch.Tensor.dim "torch.Tensor.dim"). Parameters **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the desired dimension in which stride is required Example: ``` >>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]) >>> x.stride() (5, 1) >>> x.stride(0) 5 >>> x.stride(-1) 1 ``` `sub(other, *, alpha=1) → Tensor` See [`torch.sub()`](generated/torch.sub#torch.sub "torch.sub"). `sub_(other, *, alpha=1) → Tensor` In-place version of [`sub()`](#torch.Tensor.sub "torch.Tensor.sub") `subtract(other, *, alpha=1) → Tensor` See [`torch.subtract()`](generated/torch.subtract#torch.subtract "torch.subtract"). `subtract_(other, *, alpha=1) → Tensor` In-place version of [`subtract()`](#torch.Tensor.subtract "torch.Tensor.subtract"). `sum(dim=None, keepdim=False, dtype=None) → Tensor` See [`torch.sum()`](generated/torch.sum#torch.sum "torch.sum") `sum_to_size(*size) → Tensor` Sums `self` tensor to [`size`](#torch.Tensor.size "torch.Tensor.size"). [`size`](#torch.Tensor.size "torch.Tensor.size") must be broadcastable to the size of `self` tensor. Parameters **size** (*int...*) – a sequence of integers defining the shape of the output tensor. `svd(some=True, compute_uv=True) -> (Tensor, Tensor, Tensor)` See [`torch.svd()`](generated/torch.svd#torch.svd "torch.svd") `swapaxes(axis0, axis1) → Tensor` See [`torch.swapaxes()`](generated/torch.swapaxes#torch.swapaxes "torch.swapaxes") `swapdims(dim0, dim1) → Tensor` See [`torch.swapdims()`](generated/torch.swapdims#torch.swapdims "torch.swapdims") `symeig(eigenvectors=False, upper=True) -> (Tensor, Tensor)` See [`torch.symeig()`](generated/torch.symeig#torch.symeig "torch.symeig") `t() → Tensor` See [`torch.t()`](generated/torch.t#torch.t "torch.t") `t_() → Tensor` In-place version of [`t()`](#torch.Tensor.t "torch.Tensor.t") `tensor_split(indices_or_sections, dim=0) → List of Tensors` See [`torch.tensor_split()`](generated/torch.tensor_split#torch.tensor_split "torch.tensor_split") `tile(*reps) → Tensor` See [`torch.tile()`](generated/torch.tile#torch.tile "torch.tile") `to(*args, **kwargs) → Tensor` Performs Tensor dtype and/or device conversion. A [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") are inferred from the arguments of `self.to(*args, **kwargs)`. Note If the `self` Tensor already has the correct [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"), then `self` is returned. Otherwise, the returned tensor is a copy of `self` with the desired [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device"). Here are the ways to call `to`: `to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor` Returns a Tensor with the specified `dtype`. Args: memory\_format ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional): the desired memory format of returned Tensor.
Default: `torch.preserve_format`. `to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor` Returns a Tensor with the specified [`device`](#torch.Tensor.device "torch.Tensor.device") and (optional) `dtype`. If `dtype` is `None` it is inferred to be `self.dtype`. When `non_blocking` is `True`, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When `copy` is set, a new Tensor is created even when the Tensor already matches the desired conversion. Args: memory\_format ([`torch.memory_format`](tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional): the desired memory format of returned Tensor. Default: `torch.preserve_format`. `to(other, non_blocking=False, copy=False) → Tensor` Returns a Tensor with the same [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") and [`torch.device`](tensor_attributes#torch.torch.device "torch.torch.device") as the Tensor `other`. When `non_blocking` is `True`, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When `copy` is set, a new Tensor is created even when the Tensor already matches the desired conversion. Example: ``` >>> tensor = torch.randn(2, 2) # Initially dtype=float32, device=cpu >>> tensor.to(torch.float64) tensor([[-0.5044, 0.0005], [ 0.3310, -0.0584]], dtype=torch.float64) >>> cuda0 = torch.device('cuda:0') >>> tensor.to(cuda0) tensor([[-0.5044, 0.0005], [ 0.3310, -0.0584]], device='cuda:0') >>> tensor.to(cuda0, dtype=torch.float64) tensor([[-0.5044, 0.0005], [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0') >>> other = torch.randn((), dtype=torch.float64, device=cuda0) >>> tensor.to(other, non_blocking=True) tensor([[-0.5044, 0.0005], [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0') ``` `to_mkldnn() → Tensor` Returns a copy of the tensor in `torch.mkldnn` layout. `take(indices) → Tensor` See [`torch.take()`](generated/torch.take#torch.take "torch.take") `tan() → Tensor` See [`torch.tan()`](generated/torch.tan#torch.tan "torch.tan") `tan_() → Tensor` In-place version of [`tan()`](#torch.Tensor.tan "torch.Tensor.tan") `tanh() → Tensor` See [`torch.tanh()`](generated/torch.tanh#torch.tanh "torch.tanh") `tanh_() → Tensor` In-place version of [`tanh()`](#torch.Tensor.tanh "torch.Tensor.tanh") `atanh() → Tensor` See [`torch.atanh()`](generated/torch.atanh#torch.atanh "torch.atanh") `atanh_() → Tensor` In-place version of [`atanh()`](#torch.Tensor.atanh "torch.Tensor.atanh") `arctanh() → Tensor` See [`torch.arctanh()`](generated/torch.arctanh#torch.arctanh "torch.arctanh") `arctanh_() → Tensor` In-place version of [`arctanh()`](#torch.Tensor.arctanh "torch.Tensor.arctanh") `tolist() → list or number` Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with [`item()`](#torch.Tensor.item "torch.Tensor.item"). Tensors are automatically moved to the CPU first if necessary. This operation is not differentiable. Examples: ``` >>> a = torch.randn(2, 2) >>> a.tolist() [[0.012766935862600803, 0.5415473580360413], [-0.08909505605697632, 0.7729271650314331]] >>> a[0,0].tolist() 0.012766935862600803 ``` `topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)` See [`torch.topk()`](generated/torch.topk#torch.topk "torch.topk") `to_sparse(sparseDims) → Tensor` Returns a sparse copy of the tensor.
PyTorch supports sparse tensors in [coordinate format](sparse#sparse-coo-docs). Parameters **sparseDims** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the number of sparse dimensions to include in the new sparse tensor Example: ``` >>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]]) >>> d tensor([[ 0, 0, 0], [ 9, 0, 10], [ 0, 0, 0]]) >>> d.to_sparse() tensor(indices=tensor([[1, 1], [0, 2]]), values=tensor([ 9, 10]), size=(3, 3), nnz=2, layout=torch.sparse_coo) >>> d.to_sparse(1) tensor(indices=tensor([[1]]), values=tensor([[ 9, 0, 10]]), size=(3, 3), nnz=1, layout=torch.sparse_coo) ``` `trace() → Tensor` See [`torch.trace()`](generated/torch.trace#torch.trace "torch.trace") `transpose(dim0, dim1) → Tensor` See [`torch.transpose()`](generated/torch.transpose#torch.transpose "torch.transpose") `transpose_(dim0, dim1) → Tensor` In-place version of [`transpose()`](#torch.Tensor.transpose "torch.Tensor.transpose") `triangular_solve(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)` See [`torch.triangular_solve()`](generated/torch.triangular_solve#torch.triangular_solve "torch.triangular_solve") `tril(k=0) → Tensor` See [`torch.tril()`](generated/torch.tril#torch.tril "torch.tril") `tril_(k=0) → Tensor` In-place version of [`tril()`](#torch.Tensor.tril "torch.Tensor.tril") `triu(k=0) → Tensor` See [`torch.triu()`](generated/torch.triu#torch.triu "torch.triu") `triu_(k=0) → Tensor` In-place version of [`triu()`](#torch.Tensor.triu "torch.Tensor.triu") `true_divide(value) → Tensor` See [`torch.true_divide()`](generated/torch.true_divide#torch.true_divide "torch.true_divide") `true_divide_(value) → Tensor` In-place version of [`true_divide()`](#torch.Tensor.true_divide "torch.Tensor.true_divide") `trunc() → Tensor` See [`torch.trunc()`](generated/torch.trunc#torch.trunc "torch.trunc") `trunc_() → Tensor` In-place version of [`trunc()`](#torch.Tensor.trunc "torch.Tensor.trunc") `type(dtype=None, non_blocking=False, **kwargs) → str or Tensor` Returns the type if `dtype` is not provided, else casts this object to the specified type. If this is already of the correct type, no copy is performed and the original object is returned. Parameters * **dtype** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.9)") *or* *string*) – The desired type * **non\_blocking** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect. * **\*\*kwargs** – For compatibility, may contain the key `async` in place of the `non_blocking` argument. The `async` arg is deprecated. `type_as(tensor) → Tensor` Returns this tensor cast to the type of the given tensor. This is a no-op if the tensor is already of the correct type. This is equivalent to `self.type(tensor.type())`. Parameters **tensor** ([Tensor](#torch.Tensor "torch.Tensor")) – the tensor which has the desired type `unbind(dim=0) → seq` See [`torch.unbind()`](generated/torch.unbind#torch.unbind "torch.unbind") `unfold(dimension, size, step) → Tensor` Returns a view of the original tensor which contains all slices of size [`size`](#torch.Tensor.size "torch.Tensor.size") from `self` tensor in the dimension `dimension`. Step between two slices is given by `step`.
If `sizedim` is the size of dimension `dimension` for `self`, the size of dimension `dimension` in the returned tensor will be `(sizedim - size) / step + 1`. An additional dimension of size [`size`](#torch.Tensor.size "torch.Tensor.size") is appended in the returned tensor. Parameters * **dimension** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension in which unfolding happens * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the size of each slice that is unfolded * **step** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the step between each slice Example: ``` >>> x = torch.arange(1., 8) >>> x tensor([ 1., 2., 3., 4., 5., 6., 7.]) >>> x.unfold(0, 2, 1) tensor([[ 1., 2.], [ 2., 3.], [ 3., 4.], [ 4., 5.], [ 5., 6.], [ 6., 7.]]) >>> x.unfold(0, 2, 2) tensor([[ 1., 2.], [ 3., 4.], [ 5., 6.]]) ``` `uniform_(from=0, to=1) → Tensor` Fills `self` tensor with numbers sampled from the continuous uniform distribution: P(x) = \dfrac{1}{\text{to} - \text{from}} `unique(sorted=True, return_inverse=False, return_counts=False, dim=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.unique) Returns the unique elements of the input tensor. See [`torch.unique()`](generated/torch.unique#torch.unique "torch.unique") `unique_consecutive(return_inverse=False, return_counts=False, dim=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/tensor.html#Tensor.unique_consecutive) Eliminates all but the first element from every consecutive group of equivalent elements. See [`torch.unique_consecutive()`](generated/torch.unique_consecutive#torch.unique_consecutive "torch.unique_consecutive") `unsqueeze(dim) → Tensor` See [`torch.unsqueeze()`](generated/torch.unsqueeze#torch.unsqueeze "torch.unsqueeze") `unsqueeze_(dim) → Tensor` In-place version of [`unsqueeze()`](#torch.Tensor.unsqueeze "torch.Tensor.unsqueeze") `values() → Tensor` Return the values tensor of a [sparse COO tensor](sparse#sparse-coo-docs). Warning Throws an error if `self` is not a sparse COO tensor. See also [`Tensor.indices()`](sparse#torch.Tensor.indices "torch.Tensor.indices"). Note This method can only be called on a coalesced sparse tensor. See [`Tensor.coalesce()`](sparse#torch.Tensor.coalesce "torch.Tensor.coalesce") for details. `var(dim=None, unbiased=True, keepdim=False) → Tensor` See [`torch.var()`](generated/torch.var#torch.var "torch.var") `vdot(other) → Tensor` See [`torch.vdot()`](generated/torch.vdot#torch.vdot "torch.vdot") `view(*shape) → Tensor` Returns a new tensor with the same data as the `self` tensor but of a different `shape`. The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions d, d+1, \dots, d+k that satisfy the following contiguity-like condition: \forall i = d, \dots, d+k-1, \text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]. Otherwise, it will not be possible to view `self` tensor as `shape` without copying it (e.g., via [`contiguous()`](#torch.Tensor.contiguous "torch.Tensor.contiguous")).
When it is unclear whether a [`view()`](#torch.Tensor.view "torch.Tensor.view") can be performed, it is advisable to use [`reshape()`](generated/torch.reshape#torch.reshape "torch.reshape"), which returns a view if the shapes are compatible, and copies (equivalent to calling [`contiguous()`](#torch.Tensor.contiguous "torch.Tensor.contiguous")) otherwise. Parameters **shape** (*torch.Size* *or* *int...*) – the desired size Example: ``` >>> x = torch.randn(4, 4) >>> x.size() torch.Size([4, 4]) >>> y = x.view(16) >>> y.size() torch.Size([16]) >>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions >>> z.size() torch.Size([2, 8]) >>> a = torch.randn(1, 2, 3, 4) >>> a.size() torch.Size([1, 2, 3, 4]) >>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension >>> b.size() torch.Size([1, 3, 2, 4]) >>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory >>> c.size() torch.Size([1, 3, 2, 4]) >>> torch.equal(b, c) False ``` `view(dtype) → Tensor` Returns a new tensor with the same data as the `self` tensor but of a different `dtype`. `dtype` must have the same number of bytes per element as `self`’s dtype. Warning This overload is not supported by TorchScript, and using it in a TorchScript program will cause undefined behavior. Parameters **dtype** ([`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype")) – the desired dtype Example: ``` >>> x = torch.randn(4, 4) >>> x tensor([[ 0.9482, -0.0310, 1.4999, -0.5316], [-0.1520, 0.7472, 0.5617, -0.8649], [-2.4724, -0.0334, -0.2976, -0.8499], [-0.2109, 1.9913, -0.9607, -0.6123]]) >>> x.dtype torch.float32 >>> y = x.view(torch.int32) >>> y tensor([[ 1064483442, -1124191867, 1069546515, -1089989247], [-1105482831, 1061112040, 1057999968, -1084397505], [-1071760287, -1123489973, -1097310419, -1084649136], [-1101533110, 1073668768, -1082790149, -1088634448]], dtype=torch.int32) >>> y[0, 0] = 1000000000 >>> x tensor([[ 0.0047, -0.0310, 1.4999, -0.5316], [-0.1520, 0.7472, 0.5617, -0.8649], [-2.4724, -0.0334, -0.2976, -0.8499], [-0.2109, 1.9913, -0.9607, -0.6123]]) >>> x.view(torch.int16) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: Viewing a tensor as a new dtype with a different number of bytes per element is not supported. ``` `view_as(other) → Tensor` View this tensor as the same size as `other`. `self.view_as(other)` is equivalent to `self.view(other.size())`. Please see [`view()`](#torch.Tensor.view "torch.Tensor.view") for more information about `view`. Parameters **other** ([`torch.Tensor`](#torch.Tensor "torch.Tensor")) – The result tensor has the same size as `other`. `where(condition, y) → Tensor` `self.where(condition, y)` is equivalent to `torch.where(condition, self, y)`. See [`torch.where()`](generated/torch.where#torch.where "torch.where") `xlogy(other) → Tensor` See [`torch.xlogy()`](generated/torch.xlogy#torch.xlogy "torch.xlogy") `xlogy_(other) → Tensor` In-place version of [`xlogy()`](#torch.Tensor.xlogy "torch.Tensor.xlogy") `zero_() → Tensor` Fills `self` tensor with zeros.
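A minimal sketch for `zero_()`, which operates in place and returns `self` (shapes chosen arbitrarily):

```
>>> x = torch.randn(2, 2)
>>> x.zero_()
tensor([[0., 0.],
        [0., 0.]])
```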
pytorch torch.linalg torch.linalg ============ Common linear algebra operations. This module is in BETA. New functions are still being added, and some functions may change in future PyTorch releases. See the documentation of each function for details. Functions --------- `torch.linalg.cholesky(input, *, out=None) → Tensor` Computes the Cholesky decomposition of a Hermitian (or symmetric for real-valued matrices) positive-definite matrix or the Cholesky decompositions for a batch of such matrices. Each decomposition has the form: input = LL^H where L is a lower-triangular matrix and L^H is the conjugate transpose of L, which is just a transpose for the case of real-valued input matrices. In code it translates to `input = L @ L.t()` if `input` is real-valued and `input = L @ L.conj().t()` if `input` is complex-valued. The batch of L matrices is returned. Supports real-valued and complex-valued inputs. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note LAPACK’s `potrf` is used for CPU inputs, and MAGMA’s `potrf` is used for CUDA inputs. Note If `input` is not a Hermitian positive-definite matrix, or if it’s a batch of matrices and one or more of them is not a Hermitian positive-definite matrix, then a RuntimeError will be thrown. If `input` is a batch of matrices, then the error message will include the batch index of the first matrix that is not Hermitian positive-definite. Parameters **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor of size `(*, n, n)` consisting of Hermitian positive-definite n × n matrices, where `*` is zero or more batch dimensions. Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor. Ignored if `None`. Default: `None` Examples: ``` >>> a = torch.randn(2, 2, dtype=torch.complex128) >>> a = torch.mm(a, a.t().conj()) # creates a Hermitian positive-definite matrix >>> l = torch.linalg.cholesky(a) >>> a tensor([[2.5266+0.0000j, 1.9586-2.0626j], [1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128) >>> l tensor([[1.5895+0.0000j, 0.0000+0.0000j], [1.2322+1.2976j, 2.4928+0.0000j]], dtype=torch.complex128) >>> torch.mm(l, l.t().conj()) tensor([[2.5266+0.0000j, 1.9586-2.0626j], [1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128) >>> a = torch.randn(3, 2, 2, dtype=torch.float64) >>> a = torch.matmul(a, a.transpose(-2, -1)) # creates a symmetric positive-definite matrix >>> l = torch.linalg.cholesky(a) >>> a tensor([[[ 1.1629, 2.0237], [ 2.0237, 6.6593]], [[ 0.4187, 0.1830], [ 0.1830, 0.1018]], [[ 1.9348, -2.5744], [-2.5744, 4.6386]]], dtype=torch.float64) >>> l tensor([[[ 1.0784, 0.0000], [ 1.8766, 1.7713]], [[ 0.6471, 0.0000], [ 0.2829, 0.1477]], [[ 1.3910, 0.0000], [-1.8509, 1.1014]]], dtype=torch.float64) >>> torch.allclose(torch.matmul(l, l.transpose(-2, -1)), a) True ``` `torch.linalg.cond(input, p=None, *, out=None) → Tensor` Computes the condition number of a matrix `input`, or of each matrix in a batched `input`, using the matrix norm defined by `p`. For norms `{‘fro’, ‘nuc’, inf, -inf, 1, -1}` this is defined as the matrix norm of `input` times the matrix norm of the inverse of `input` computed using [`torch.linalg.norm()`](#torch.linalg.norm "torch.linalg.norm"). While for norms `{None, 2, -2}` this is defined as the ratio between the largest and smallest singular values computed using [`torch.linalg.svd()`](#torch.linalg.svd "torch.linalg.svd").
This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function may synchronize that device with the CPU depending on which norm `p` is used. Note For norms `{None, 2, -2}`, `input` may be a non-square matrix or batch of non-square matrices. For other norms, however, `input` must be a square matrix or a batch of square matrices, and if this requirement is not satisfied a RuntimeError will be thrown. Note For norms `{‘fro’, ‘nuc’, inf, -inf, 1, -1}` if `input` is a non-invertible matrix then a tensor containing infinity will be returned. If `input` is a batch of matrices and one or more of them is not invertible then a RuntimeError will be thrown. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input matrix of size `(m, n)` or the batch of matrices of size `(*, m, n)` where `*` is one or more batch dimensions. * **p** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *inf**,* *-inf**,* *'fro'**,* *'nuc'**,* *optional*) – the type of the matrix norm to use in the computations. inf refers to `float('inf')`, numpy’s `inf` object, or any equivalent object. The following norms can be used: | p | norm for matrices | | --- | --- | | None | ratio of the largest singular value to the smallest singular value | | ’fro’ | Frobenius norm | | ’nuc’ | nuclear norm | | inf | max(sum(abs(x), dim=1)) | | -inf | min(sum(abs(x), dim=1)) | | 1 | max(sum(abs(x), dim=0)) | | -1 | min(sum(abs(x), dim=0)) | | 2 | ratio of the largest singular value to the smallest singular value | | -2 | ratio of the smallest singular value to the largest singular value | Default: `None` Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – tensor to write the output to. Default is `None`. Returns The condition number of `input`. The output dtype is always real valued even for complex inputs (e.g. float if `input` is cfloat). Examples: ``` >>> a = torch.randn(3, 4, 4, dtype=torch.complex64) >>> torch.linalg.cond(a) >>> a = torch.tensor([[1., 0, -1], [0, 1, 0], [1, 0, 1]]) >>> torch.linalg.cond(a) tensor([1.4142]) >>> torch.linalg.cond(a, 'fro') tensor(3.1623) >>> torch.linalg.cond(a, 'nuc') tensor(9.2426) >>> torch.linalg.cond(a, float('inf')) tensor(2.) >>> torch.linalg.cond(a, float('-inf')) tensor(1.) >>> torch.linalg.cond(a, 1) tensor(2.) >>> torch.linalg.cond(a, -1) tensor(1.) >>> torch.linalg.cond(a, 2) tensor([1.4142]) >>> torch.linalg.cond(a, -2) tensor([0.7071]) >>> a = torch.randn(2, 3, 3) >>> a tensor([[[-0.9204, 1.1140, 1.2055], [ 0.3988, -0.2395, -0.7441], [-0.5160, 0.3115, 0.2619]], [[-2.2128, 0.9241, 2.1492], [-1.1277, 2.7604, -0.8760], [ 1.2159, 0.5960, 0.0498]]]) >>> torch.linalg.cond(a) tensor([[9.5917], [3.2538]]) >>> a = torch.randn(2, 3, 3, dtype=torch.complex64) >>> a tensor([[[-0.4671-0.2137j, -0.1334-0.9508j, 0.6252+0.1759j], [-0.3486-0.2991j, -0.1317+0.1252j, 0.3025-0.1604j], [-0.5634+0.8582j, 0.1118-0.4677j, -0.1121+0.7574j]], [[ 0.3964+0.2533j, 0.9385-0.6417j, -0.0283-0.8673j], [ 0.2635+0.2323j, -0.8929-1.1269j, 0.3332+0.0733j], [ 0.1151+0.1644j, -1.1163+0.3471j, -0.5870+0.1629j]]]) >>> torch.linalg.cond(a) tensor([[4.6245], [4.5671]]) >>> torch.linalg.cond(a, 1) tensor([9.2589, 9.3486]) ``` `torch.linalg.det(input) → Tensor` Computes the determinant of a square matrix `input`, or of each square matrix in a batched `input`. 
This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is computed using LU factorization. LAPACK’s `getrf` is used for CPU inputs, and MAGMA’s `getrf` is used for CUDA inputs. Note Backward through `det` internally uses [`torch.linalg.svd()`](#torch.linalg.svd "torch.linalg.svd") when `input` is not invertible. In this case, double backward through `det` will be unstable when `input` doesn’t have distinct singular values. See [`torch.linalg.svd()`](#torch.linalg.svd "torch.linalg.svd") for more details. Parameters **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input matrix of size `(n, n)` or the batch of matrices of size `(*, n, n)` where `*` is one or more batch dimensions. Example: ``` >>> a = torch.randn(3, 3) >>> a tensor([[ 0.9478, 0.9158, -1.1295], [ 0.9701, 0.7346, -1.8044], [-0.2337, 0.0557, 0.6929]]) >>> torch.linalg.det(a) tensor(0.0934) >>> a = torch.randn(3, 2, 2) >>> a tensor([[[ 0.9254, -0.6213], [-0.5787, 1.6843]], [[ 0.3242, -0.9665], [ 0.4539, -0.0887]], [[ 1.1336, -0.4025], [-0.7089, 0.9032]]]) >>> torch.linalg.det(a) tensor([1.1990, 0.4099, 0.7386]) ``` `torch.linalg.slogdet(input, *, out=None) -> (Tensor, Tensor)` Calculates the sign and natural logarithm of the absolute value of a square matrix’s determinant, or of the absolute values of the determinants of a batch of square matrices `input`. The determinant can be computed with `sign * exp(logabsdet)`. Supports input of float, double, cfloat and cdouble datatypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is computed using LU factorization. LAPACK’s `getrf` is used for CPU inputs, and MAGMA’s `getrf` is used for CUDA inputs. Note For matrices that have zero determinant, this returns `(0, -inf)`. If `input` is batched then the entries in the result tensors corresponding to matrices with the zero determinant have sign 0 and the natural logarithm of the absolute value of the determinant -inf. Parameters **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input matrix of size `(n, n)` or the batch of matrices of size `(*, n, n)` where `*` is one or more batch dimensions. Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – tuple of two tensors to write the output to. Returns A namedtuple (sign, logabsdet) containing the sign of the determinant and the natural logarithm of the absolute value of determinant, respectively. Example: ``` >>> A = torch.randn(3, 3) >>> A tensor([[ 0.0032, -0.2239, -1.1219], [-0.6690, 0.1161, 0.4053], [-1.6218, -0.9273, -0.0082]]) >>> torch.linalg.det(A) tensor(-0.7576) >>> torch.linalg.logdet(A) tensor(nan) >>> torch.linalg.slogdet(A) torch.return_types.linalg_slogdet(sign=tensor(-1.), logabsdet=tensor(-0.2776)) ``` `torch.linalg.eigh(input, UPLO='L', *, out=None) -> (Tensor, Tensor)` Computes the eigenvalues and eigenvectors of a complex Hermitian (or real symmetric) matrix `input`, or of each such matrix in a batched `input`. For a single matrix `input`, the tensor of eigenvalues `w` and the tensor of eigenvectors `V` decompose the `input` such that `input = V diag(w) Vᴴ`, where `Vᴴ` is the transpose of `V` for real-valued `input`, or the conjugate transpose of `V` for complex-valued `input`.
Since the matrix or matrices in `input` are assumed to be Hermitian, the imaginary part of their diagonals is always treated as zero. When `UPLO` is “L”, its default value, only the lower triangular part of each matrix is used in the computation. When `UPLO` is “U” only the upper triangular part of each matrix is used. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The eigenvalues/eigenvectors are computed using LAPACK’s `syevd` and `heevd` routines for CPU inputs, and MAGMA’s `syevd` and `heevd` routines for CUDA inputs. Note The eigenvalues of real symmetric or complex Hermitian matrices are always real. Note The eigenvectors of matrices are not unique, so any eigenvector multiplied by a constant remains a valid eigenvector. This function may compute different eigenvector representations on different device types. Usually the difference is only in the sign of the eigenvector. Note See [`torch.linalg.eigvalsh()`](#torch.linalg.eigvalsh "torch.linalg.eigvalsh") for a related function that computes only eigenvalues. However, that function is not differentiable. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the Hermitian n × n matrix or the batch of such matrices of size `(*, n, n)` where `*` is one or more batch dimensions. * **UPLO** (*'L'**,* *'U'**,* *optional*) – controls whether to use the upper-triangular or the lower-triangular part of `input` in the computations. Default is `'L'`. Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – tuple of two tensors to write the output to. Default is `None`. Returns A namedtuple (eigenvalues, eigenvectors) containing * `eigenvalues (Tensor): Shape (*, m).` The eigenvalues in ascending order. * `eigenvectors (Tensor): Shape (*, m, m).` The orthonormal eigenvectors of the `input`. Return type ([Tensor](tensors#torch.Tensor "torch.Tensor"), [Tensor](tensors#torch.Tensor "torch.Tensor")) Examples: ``` >>> a = torch.randn(2, 2, dtype=torch.complex128) >>> a = a + a.t().conj() # creates a Hermitian matrix >>> a tensor([[2.9228+0.0000j, 0.2029-0.0862j], [0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128) >>> w, v = torch.linalg.eigh(a) >>> w tensor([0.3277, 2.9415], dtype=torch.float64) >>> v tensor([[-0.0846+-0.0000j, -0.9964+0.0000j], [ 0.9170+0.3898j, -0.0779-0.0331j]], dtype=torch.complex128) >>> torch.allclose(torch.matmul(v, torch.matmul(w.to(v.dtype).diag_embed(), v.t().conj())), a) True >>> a = torch.randn(3, 2, 2, dtype=torch.float64) >>> a = a + a.transpose(-2, -1) # creates a symmetric matrix >>> w, v = torch.linalg.eigh(a) >>> torch.allclose(torch.matmul(v, torch.matmul(w.diag_embed(), v.transpose(-2, -1))), a) True ``` `torch.linalg.eigvalsh(input, UPLO='L', *, out=None) → Tensor` Computes the eigenvalues of a complex Hermitian (or real symmetric) matrix `input`, or of each such matrix in a batched `input`. The eigenvalues are returned in ascending order. Since the matrix or matrices in `input` are assumed to be Hermitian, the imaginary part of their diagonals is always treated as zero. When `UPLO` is “L”, its default value, only the lower triangular part of each matrix is used in the computation. When `UPLO` is “U” only the upper triangular part of each matrix is used. Supports input of float, double, cfloat and cdouble dtypes.
Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The eigenvalues are computed using LAPACK’s `syevd` and `heevd` routines for CPU inputs, and MAGMA’s `syevd` and `heevd` routines for CUDA inputs. Note The eigenvalues of real symmetric or complex Hermitian matrices are always real. Note This function doesn’t support backpropagation; please use [`torch.linalg.eigh()`](#torch.linalg.eigh "torch.linalg.eigh") instead, which also computes the eigenvectors. Note See [`torch.linalg.eigh()`](#torch.linalg.eigh "torch.linalg.eigh") for a related function that computes both eigenvalues and eigenvectors. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the Hermitian n × n matrix or the batch of such matrices of size `(*, n, n)` where `*` is one or more batch dimensions. * **UPLO** (*'L'**,* *'U'**,* *optional*) – controls whether to use the upper-triangular or the lower-triangular part of `input` in the computations. Default is `'L'`. Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – tensor to write the output to. Default is `None`. Examples: ``` >>> a = torch.randn(2, 2, dtype=torch.complex128) >>> a = a + a.t().conj() # creates a Hermitian matrix >>> a tensor([[2.9228+0.0000j, 0.2029-0.0862j], [0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128) >>> w = torch.linalg.eigvalsh(a) >>> w tensor([0.3277, 2.9415], dtype=torch.float64) >>> a = torch.randn(3, 2, 2, dtype=torch.float64) >>> a = a + a.transpose(-2, -1) # creates a symmetric matrix >>> a tensor([[[ 2.8050, -0.3850], [-0.3850, 3.2376]], [[-1.0307, -2.7457], [-2.7457, -1.7517]], [[ 1.7166, 2.2207], [ 2.2207, -2.0898]]], dtype=torch.float64) >>> w = torch.linalg.eigvalsh(a) >>> w tensor([[ 2.5797, 3.4629], [-4.1605, 1.3780], [-3.1113, 2.7381]], dtype=torch.float64) ``` `torch.linalg.matrix_rank(input, tol=None, hermitian=False, *, out=None) → Tensor` Computes the numerical rank of a matrix `input`, or of each matrix in a batched `input`. The matrix rank is computed as the number of singular values (or absolute eigenvalues when `hermitian` is `True`) that are greater than the specified `tol` threshold. If `tol` is not specified, `tol` is set to `S.max(dim=-1)*max(input.shape[-2:])*eps`, where `S` is the singular values (or absolute eigenvalues when `hermitian` is `True`), and `eps` is the epsilon value for the datatype of `input`. The epsilon value can be obtained using the `eps` attribute of `torch.finfo`. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The matrix rank is computed using singular value decomposition (see [`torch.linalg.svd()`](#torch.linalg.svd "torch.linalg.svd")) by default. If `hermitian` is `True`, then `input` is assumed to be Hermitian (symmetric if real-valued), and the computation is done by obtaining the eigenvalues (see [`torch.linalg.eigvalsh()`](#torch.linalg.eigvalsh "torch.linalg.eigvalsh")). Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input matrix of size `(m, n)` or the batch of matrices of size `(*, m, n)` where `*` is one or more batch dimensions. * **tol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – the tolerance value. Default is `None` * **hermitian** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – indicates whether `input` is Hermitian.
Default is `False`. Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – tensor to write the output to. Default is `None`. Examples: ``` >>> a = torch.eye(10) >>> torch.linalg.matrix_rank(a) tensor(10) >>> b = torch.eye(10) >>> b[0, 0] = 0 >>> torch.linalg.matrix_rank(b) tensor(9) >>> a = torch.randn(4, 3, 2) >>> torch.linalg.matrix_rank(a) tensor([2, 2, 2, 2]) >>> a = torch.randn(2, 4, 2, 3) >>> torch.linalg.matrix_rank(a) tensor([[2, 2, 2, 2], [2, 2, 2, 2]]) >>> a = torch.randn(2, 4, 3, 3, dtype=torch.complex64) >>> torch.linalg.matrix_rank(a) tensor([[3, 3, 3, 3], [3, 3, 3, 3]]) >>> torch.linalg.matrix_rank(a, hermitian=True) tensor([[3, 3, 3, 3], [3, 3, 3, 3]]) >>> torch.linalg.matrix_rank(a, tol=1.0) tensor([[3, 2, 2, 2], [1, 2, 1, 2]]) >>> torch.linalg.matrix_rank(a, tol=1.0, hermitian=True) tensor([[2, 2, 2, 1], [1, 2, 2, 2]]) ``` `torch.linalg.norm(input, ord=None, dim=None, keepdim=False, *, out=None, dtype=None) → Tensor` Returns the matrix norm or vector norm of a given tensor. This function can calculate one of eight different types of matrix norms, or one of an infinite number of vector norms, depending on both the number of reduction dimensions and the value of the `ord` parameter. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – The input tensor. If `dim` is None, `input` must be 1-D or 2-D, unless `ord` is None. If both `dim` and `ord` are None, the 2-norm of the input flattened to 1-D will be returned. Its data type must be either a floating point or complex type. For complex inputs, the norm is calculated using the absolute values of each element. If the input is complex and neither `dtype` nor `out` is specified, the result’s data type will be the corresponding floating point type (e.g. float if `input` is cfloat). * **ord** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *inf**,* *-inf**,* *'fro'**,* *'nuc'**,* *optional*) – The order of norm. inf refers to `float('inf')`, numpy’s `inf` object, or any equivalent object. The following norms can be calculated: | ord | norm for matrices | norm for vectors | | --- | --- | --- | | None | Frobenius norm | 2-norm | | ’fro’ | Frobenius norm | – not supported – | | ‘nuc’ | nuclear norm | – not supported – | | inf | max(sum(abs(x), dim=1)) | max(abs(x)) | | -inf | min(sum(abs(x), dim=1)) | min(abs(x)) | | 0 | – not supported – | sum(x != 0) | | 1 | max(sum(abs(x), dim=0)) | as below | | -1 | min(sum(abs(x), dim=0)) | as below | | 2 | 2-norm (largest sing. value) | as below | | -2 | smallest singular value | as below | | other | – not supported – | sum(abs(x)\*\*ord)\*\*(1./ord) | Default: `None` * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *2-tuple of python:ints**,* *2-list of python:ints**,* *optional*) – If `dim` is an int, vector norm will be calculated over the specified dimension. If `dim` is a 2-tuple of ints, matrix norm will be calculated over the specified dimensions. If `dim` is None, matrix norm will be calculated when the input tensor has two dimensions, and vector norm will be calculated when the input tensor has one dimension. Default: `None` * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If set to True, the reduced dimensions are retained in the result as dimensions with size one.
Default: `False` Keyword Arguments * **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor. Ignored if `None`. Default: `None` * **dtype** (`torch.dtype`, optional) – If specified, the input tensor is cast to `dtype` before performing the operation, and the returned tensor’s type will be `dtype`. If this argument is used in conjunction with the `out` argument, the output tensor’s type must match this argument or a RuntimeError will be raised. Default: `None` Examples: ``` >>> import torch >>> from torch import linalg as LA >>> a = torch.arange(9, dtype=torch.float) - 4 >>> a tensor([-4., -3., -2., -1., 0., 1., 2., 3., 4.]) >>> b = a.reshape((3, 3)) >>> b tensor([[-4., -3., -2.], [-1., 0., 1.], [ 2., 3., 4.]]) >>> LA.norm(a) tensor(7.7460) >>> LA.norm(b) tensor(7.7460) >>> LA.norm(b, 'fro') tensor(7.7460) >>> LA.norm(a, float('inf')) tensor(4.) >>> LA.norm(b, float('inf')) tensor(9.) >>> LA.norm(a, -float('inf')) tensor(0.) >>> LA.norm(b, -float('inf')) tensor(2.) >>> LA.norm(a, 1) tensor(20.) >>> LA.norm(b, 1) tensor(7.) >>> LA.norm(a, -1) tensor(0.) >>> LA.norm(b, -1) tensor(6.) >>> LA.norm(a, 2) tensor(7.7460) >>> LA.norm(b, 2) tensor(7.3485) >>> LA.norm(a, -2) tensor(0.) >>> LA.norm(b.double(), -2) tensor(1.8570e-16, dtype=torch.float64) >>> LA.norm(a, 3) tensor(5.8480) >>> LA.norm(a, -3) tensor(0.) ``` Using the `dim` argument to compute vector norms: ``` >>> c = torch.tensor([[1., 2., 3.], ... [-1, 1, 4]]) >>> LA.norm(c, dim=0) tensor([1.4142, 2.2361, 5.0000]) >>> LA.norm(c, dim=1) tensor([3.7417, 4.2426]) >>> LA.norm(c, ord=1, dim=1) tensor([6., 6.]) ``` Using the `dim` argument to compute matrix norms: ``` >>> m = torch.arange(8, dtype=torch.float).reshape(2, 2, 2) >>> LA.norm(m, dim=(1,2)) tensor([ 3.7417, 11.2250]) >>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :]) (tensor(3.7417), tensor(11.2250)) ``` `torch.linalg.pinv(input, rcond=1e-15, hermitian=False, *, out=None) → Tensor` Computes the pseudo-inverse (also known as the Moore-Penrose inverse) of a matrix `input`, or of each matrix in a batched `input`. The singular values (or the absolute values of the eigenvalues when `hermitian` is `True`) that are below the specified `rcond` threshold are treated as zero and discarded in the computation. Supports input of float, double, cfloat and cdouble datatypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The pseudo-inverse is computed using singular value decomposition (see [`torch.linalg.svd()`](#torch.linalg.svd "torch.linalg.svd")) by default. If `hermitian` is `True`, then `input` is assumed to be Hermitian (symmetric if real-valued), and the computation of the pseudo-inverse is done by obtaining the eigenvalues and eigenvectors (see [`torch.linalg.eigh()`](#torch.linalg.eigh "torch.linalg.eigh")). Note If singular value decomposition or eigenvalue decomposition algorithms do not converge then a RuntimeError will be thrown. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input matrix of size `(m, n)` or the batch of matrices of size `(*, m, n)` where `*` is one or more batch dimensions. * **rcond** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the tolerance value to determine the cutoff for small singular values. Must be broadcastable to the singular values of `input` as returned by [`torch.svd()`](generated/torch.svd#torch.svd "torch.svd"). Default is `1e-15`. 
* **hermitian** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – indicates whether `input` is Hermitian. Default is `False`. Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor. Ignored if `None`. Default is `None`. Examples: ``` >>> input = torch.randn(3, 5) >>> input tensor([[ 0.5495, 0.0979, -1.4092, -0.1128, 0.4132], [-1.1143, -0.3662, 0.3042, 1.6374, -0.9294], [-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]]) >>> torch.linalg.pinv(input) tensor([[ 0.0600, -0.1933, -0.2090], [-0.0903, -0.0817, -0.4752], [-0.7124, -0.1631, -0.2272], [ 0.1356, 0.3933, -0.5023], [-0.0308, -0.1725, -0.5216]]) Batched linalg.pinv example >>> a = torch.randn(2, 6, 3) >>> b = torch.linalg.pinv(a) >>> torch.matmul(b, a) tensor([[[ 1.0000e+00, 1.6391e-07, -1.1548e-07], [ 8.3121e-08, 1.0000e+00, -2.7567e-07], [ 3.5390e-08, 1.4901e-08, 1.0000e+00]], [[ 1.0000e+00, -8.9407e-08, 2.9802e-08], [-2.2352e-07, 1.0000e+00, 1.1921e-07], [ 0.0000e+00, 8.9407e-08, 1.0000e+00]]]) Hermitian input example >>> a = torch.randn(3, 3, dtype=torch.complex64) >>> a = a + a.t().conj() # creates a Hermitian matrix >>> b = torch.linalg.pinv(a, hermitian=True) >>> torch.matmul(b, a) tensor([[ 1.0000e+00+0.0000e+00j, -1.1921e-07-2.3842e-07j, 5.9605e-08-2.3842e-07j], [ 5.9605e-08+2.3842e-07j, 1.0000e+00+2.3842e-07j, -4.7684e-07+1.1921e-07j], [-1.1921e-07+0.0000e+00j, -2.3842e-07-2.9802e-07j, 1.0000e+00-1.7897e-07j]]) Non-default rcond example >>> rcond = 0.5 >>> a = torch.randn(3, 3) >>> torch.linalg.pinv(a) tensor([[ 0.2971, -0.4280, -2.0111], [-0.0090, 0.6426, -0.1116], [-0.7832, -0.2465, 1.0994]]) >>> torch.linalg.pinv(a, rcond) tensor([[-0.2672, -0.2351, -0.0539], [-0.0211, 0.6467, -0.0698], [-0.4400, -0.3638, -0.0910]]) Matrix-wise rcond example >>> a = torch.randn(5, 6, 2, 3, 3) >>> rcond = torch.rand(2) # different rcond values for each matrix in a[:, :, 0] and a[:, :, 1] >>> torch.linalg.pinv(a, rcond) >>> rcond = torch.randn(5, 6, 2) # different rcond value for each matrix in 'a' >>> torch.linalg.pinv(a, rcond) ``` `torch.linalg.svd(input, full_matrices=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)` Computes the singular value decomposition of either a matrix or batch of matrices `input`. The singular value decomposition is represented as a namedtuple `(U, S, Vh)`, such that `input = U @ diag(S) @ Vh`. If `input` is a batch of tensors, then `U`, `S`, and `Vh` are also batched with the same batch dimensions as `input`. If `full_matrices` is `False`, the method returns the reduced singular value decomposition, i.e., if the last two dimensions of `input` are `m` and `n`, then the returned `U` and `V` matrices will contain only min(n, m) orthonormal columns. If `compute_uv` is `False`, the returned `U` and `Vh` will be empty tensors with no elements and the same device as `input`. The `full_matrices` argument has no effect when `compute_uv` is False. The dtypes of `U` and `V` are the same as `input`’s. `S` will always be real-valued, even if `input` is complex. Note Unlike NumPy’s `linalg.svd`, this always returns a namedtuple of three tensors, even when `compute_uv=False`. This behavior may change in a future PyTorch release. Note The singular values are returned in descending order. If `input` is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order.
Note The implementation of SVD on CPU uses the LAPACK routine `?gesdd` (a divide-and-conquer algorithm) instead of `?gesvd` for speed. Analogously, the SVD on GPU uses the cuSOLVER routines `gesvdj` and `gesvdjBatched` on CUDA 10.1.243 and later, and uses the MAGMA routine `gesdd` on earlier versions of CUDA. Note The returned matrix `U` will be transposed, i.e. with strides `U.contiguous().transpose(-2, -1).stride()`. Note Gradients computed using `U` and `Vh` may be unstable if `input` is not full rank or has non-unique singular values. Note When `full_matrices` = `True`, the gradients on `U[..., :, min(m, n):]` and `V[..., :, min(m, n):]` will be ignored in backward as those vectors can be arbitrary bases of the subspaces. Note The `S` tensor can only be used to compute gradients if `compute_uv` is True. Note Since `U` and `V` of an SVD are not unique, each vector can be multiplied by an arbitrary phase factor e^{iϕ} while the SVD result is still correct. Different platforms, like NumPy, or inputs on different device types, may produce different `U` and `V` tensors. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor of size `(*, m, n)` where `*` is zero or more batch dimensions consisting of m × n matrices. * **full\_matrices** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – controls whether to compute the full or reduced decomposition, and consequently the shape of returned `U` and `V`. Defaults to True. * **compute\_uv** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to compute `U` and `V` or not. Defaults to True. * **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – a tuple of three tensors to use for the outputs. If compute\_uv=False, the 1st and 3rd arguments must be tensors, but they are ignored. E.g. you can pass `(torch.Tensor(), out_S, torch.Tensor())` Example: ``` >>> import torch >>> a = torch.randn(5, 3) >>> a tensor([[-0.3357, -0.2987, -1.1096], [ 1.4894, 1.0016, -0.4572], [-1.9401, 0.7437, 2.0968], [ 0.1515, 1.3812, 1.5491], [-1.8489, -0.5907, -2.5673]]) >>> >>> # reconstruction in the full_matrices=False case >>> u, s, vh = torch.linalg.svd(a, full_matrices=False) >>> u.shape, s.shape, vh.shape (torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3])) >>> torch.dist(a, u @ torch.diag(s) @ vh) tensor(1.0486e-06) >>> >>> # reconstruction in the full_matrices=True case >>> u, s, vh = torch.linalg.svd(a) >>> u.shape, s.shape, vh.shape (torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3])) >>> torch.dist(a, u[:, :3] @ torch.diag(s) @ vh) tensor(1.0486e-06) >>> >>> # extra dimensions >>> a_big = torch.randn(7, 5, 3) >>> u, s, vh = torch.linalg.svd(a_big, full_matrices=False) >>> torch.dist(a_big, u @ torch.diag_embed(s) @ vh) tensor(3.0957e-06) ``` `torch.linalg.solve(input, other, *, out=None) → Tensor` Computes the solution `x` to the matrix equation `matmul(input, x) = other` with a square matrix, or batches of such matrices, `input` and one or more right-hand side vectors `other`. If `input` is batched and `other` is not, then `other` is broadcast to have the same batch dimensions as `input`. The resulting tensor has the same shape as the (possibly broadcast) `other`. Supports input of `float`, `double`, `cfloat` and `cdouble` dtypes.
Note If `input` is a non-square or non-invertible matrix, or a batch containing non-square matrices or one or more non-invertible matrices, then a RuntimeError will be thrown. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the square n × n matrix or the batch of such matrices of size `(*, n, n)` where `*` is one or more batch dimensions. * **other** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – right-hand side tensor of shape `(*, n)` or `(*, n, k)`, where k is the number of right-hand side vectors. Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor. Ignored if `None`. Default: `None` Examples: ``` >>> A = torch.eye(3) >>> b = torch.randn(3) >>> x = torch.linalg.solve(A, b) >>> torch.allclose(A @ x, b) True ``` Batched input: ``` >>> A = torch.randn(2, 3, 3) >>> b = torch.randn(3, 1) >>> x = torch.linalg.solve(A, b) >>> torch.allclose(A @ x, b) True >>> b = torch.rand(3) # b is broadcast internally to (*A.shape[:-2], 3) >>> x = torch.linalg.solve(A, b) >>> x.shape torch.Size([2, 3]) >>> Ax = A @ x.unsqueeze(-1) >>> torch.allclose(Ax, b.unsqueeze(-1).expand_as(Ax)) True ``` `torch.linalg.tensorinv(input, ind=2, *, out=None) → Tensor` Computes a tensor `input_inv` such that `tensordot(input_inv, input, ind) == I_n` (inverse tensor equation), where `I_n` is the n-dimensional identity tensor and `n` is equal to `input.ndim`. The resulting tensor `input_inv` has shape equal to `input.shape[ind:] + input.shape[:ind]`. Supports input of `float`, `double`, `cfloat` and `cdouble` data types. Note If `input` is not invertible or does not satisfy the requirement `prod(input.shape[ind:]) == prod(input.shape[:ind])`, then a RuntimeError will be thrown. Note When `input` is a 2-dimensional tensor and `ind=1`, this function computes the (multiplicative) inverse of `input`, equivalent to calling [`torch.inverse()`](generated/torch.inverse#torch.inverse "torch.inverse"). Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – A tensor to invert. Its shape must satisfy `prod(input.shape[:ind]) == prod(input.shape[ind:])`. * **ind** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – A positive integer that describes the inverse tensor equation. See [`torch.tensordot()`](generated/torch.tensordot#torch.tensordot "torch.tensordot") for details. Default: 2. Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor. Ignored if `None`. Default: `None` Examples: ``` >>> a = torch.eye(4 * 6).reshape((4, 6, 8, 3)) >>> ainv = torch.linalg.tensorinv(a, ind=2) >>> ainv.shape torch.Size([8, 3, 4, 6]) >>> b = torch.randn(4, 6) >>> torch.allclose(torch.tensordot(ainv, b), torch.linalg.tensorsolve(a, b)) True >>> a = torch.randn(4, 4) >>> a_tensorinv = torch.linalg.tensorinv(a, ind=1) >>> a_inv = torch.inverse(a) >>> torch.allclose(a_tensorinv, a_inv) True ``` `torch.linalg.tensorsolve(input, other, dims=None, *, out=None) → Tensor` Computes a tensor `x` such that `tensordot(input, x, dims=x.ndim) = other`. The resulting tensor `x` has shape `input.shape[other.ndim:]`. Supports real-valued and complex-valued inputs.
Note If `input` does not satisfy the requirement `prod(input.shape[other.ndim:]) == prod(input.shape[:other.ndim])` after (optionally) moving the dimensions using `dims`, then a RuntimeError will be thrown. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – “left-hand-side” tensor; it must satisfy the requirement `prod(input.shape[other.ndim:]) == prod(input.shape[:other.ndim])`. * **other** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – “right-hand-side” tensor of shape `input.shape[:other.ndim]`. * **dims** (*Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]*) – dimensions of `input` to be moved to the end before the computation. Equivalent to calling `input = movedim(input, dims, range(-len(dims), 0))`. If None (default), no dimensions are moved. Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor. Ignored if `None`. Default: `None` Examples: ``` >>> a = torch.eye(2 * 3 * 4).reshape((2 * 3, 4, 2, 3, 4)) >>> b = torch.randn(2 * 3, 4) >>> x = torch.linalg.tensorsolve(a, b) >>> x.shape torch.Size([2, 3, 4]) >>> torch.allclose(torch.tensordot(a, x, dims=x.ndim), b) True >>> a = torch.randn(6, 4, 4, 3, 2) >>> b = torch.randn(4, 3, 2) >>> x = torch.linalg.tensorsolve(a, b, dims=(0, 2)) >>> x.shape torch.Size([6, 4]) >>> a = a.permute(1, 3, 4, 0, 2) >>> a.shape[b.ndim:] torch.Size([6, 4]) >>> torch.allclose(torch.tensordot(a, x, dims=x.ndim), b, atol=1e-6) True ``` `torch.linalg.inv(input, *, out=None) → Tensor` Computes the multiplicative inverse matrix of a square matrix `input`, or of each square matrix in a batched `input`. The result satisfies the relation: `matmul(inv(input), input)` = `matmul(input, inv(input))` = `eye(input.shape[0]).expand_as(input)`. Supports input of float, double, cfloat and cdouble data types. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The inverse matrix is computed using LAPACK’s `getrf` and `getri` routines for CPU inputs. For CUDA inputs, cuSOLVER’s `getrf` and `getrs` routines as well as cuBLAS’ `getrf` and `getri` routines are used if CUDA version >= 10.1.243, otherwise MAGMA’s `getrf` and `getri` routines are used instead. Note If `input` is a non-invertible matrix or non-square matrix, or batch with at least one such matrix, then a RuntimeError will be thrown. Parameters **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the square `(n, n)` matrix or the batch of such matrices of size `(*, n, n)` where `*` is one or more batch dimensions. Keyword Arguments **out** ([Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor. Ignored if `None`. Default is `None`.
Examples: ``` >>> x = torch.rand(4, 4) >>> y = torch.linalg.inv(x) >>> z = torch.mm(x, y) >>> z tensor([[ 1.0000, -0.0000, -0.0000, 0.0000], [ 0.0000, 1.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 1.0000, 0.0000], [ 0.0000, -0.0000, -0.0000, 1.0000]]) >>> torch.max(torch.abs(z - torch.eye(4))) # Max non-zero tensor(1.1921e-07) >>> # Batched inverse example >>> x = torch.randn(2, 3, 4, 4) >>> y = torch.linalg.inv(x) >>> z = torch.matmul(x, y) >>> torch.max(torch.abs(z - torch.eye(4).expand_as(x))) # Max non-zero tensor(1.9073e-06) >>> x = torch.rand(4, 4, dtype=torch.cdouble) >>> y = torch.linalg.inv(x) >>> z = torch.mm(x, y) >>> z tensor([[ 1.0000e+00+0.0000e+00j, -1.3878e-16+3.4694e-16j, 5.5511e-17-1.1102e-16j, 0.0000e+00-1.6653e-16j], [ 5.5511e-16-1.6653e-16j, 1.0000e+00+6.9389e-17j, 2.2204e-16-1.1102e-16j, -2.2204e-16+1.1102e-16j], [ 3.8858e-16-1.2490e-16j, 2.7756e-17+3.4694e-17j, 1.0000e+00+0.0000e+00j, -4.4409e-16+5.5511e-17j], [ 4.4409e-16+5.5511e-16j, -3.8858e-16+1.8041e-16j, 2.2204e-16+0.0000e+00j, 1.0000e+00-3.4694e-16j]], dtype=torch.complex128) >>> torch.max(torch.abs(z - torch.eye(4, dtype=torch.cdouble))) # Max non-zero tensor(7.5107e-16, dtype=torch.float64) ``` `torch.linalg.qr(input, mode='reduced', *, out=None) -> (Tensor, Tensor)` Computes the QR decomposition of a matrix or a batch of matrices `input`, and returns a namedtuple (Q, R) of tensors such that input = QR with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices. Depending on the value of `mode` this function returns the reduced or complete QR factorization. See below for a list of valid modes. Note **Differences with** `numpy.linalg.qr`: * `mode='raw'` is not implemented * unlike `numpy.linalg.qr`, this function always returns a tuple of two tensors. When `mode='r'`, the `Q` tensor is an empty tensor. This behavior may change in a future PyTorch release. Note Backpropagation is not supported for `mode='r'`. Use `mode='reduced'` instead. Backpropagation is also not supported if the first min(input.size(-1), input.size(-2)) columns of any matrix in `input` are not linearly independent. While no error will be thrown when this occurs, the values of the “gradient” produced may be anything. This behavior may change in the future. Note This function uses LAPACK for CPU inputs and MAGMA for CUDA inputs, and may produce different (valid) decompositions on different device types or different platforms. Parameters * **input** ([Tensor](tensors#torch.Tensor "torch.Tensor")) – the input tensor of size `(*, m, n)` where `*` is zero or more batch dimensions consisting of matrices of dimension m × n. * **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – if `k = min(m, n)` then: + `'reduced'` : returns `(Q, R)` with dimensions (m, k), (k, n) (default) + `'complete'`: returns `(Q, R)` with dimensions (m, m), (m, n) + `'r'`: computes only `R`; returns `(Q, R)` where `Q` is empty and `R` has dimensions (k, n) Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – tuple of `Q` and `R` tensors. The dimensions of `Q` and `R` are detailed in the description of `mode` above.
Example:

```
>>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.linalg.qr(a)
>>> q
tensor([[-0.8571,  0.3943,  0.3314],
        [-0.4286, -0.9029, -0.0343],
        [ 0.2857, -0.1714,  0.9429]])
>>> r
tensor([[ -14.0000,  -21.0000,   14.0000],
        [   0.0000, -175.0000,   70.0000],
        [   0.0000,    0.0000,  -35.0000]])
>>> torch.mm(q, r).round()
tensor([[  12.,  -51.,    4.],
        [   6.,  167.,  -68.],
        [  -4.,   24.,  -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1.,  0.,  0.],
        [ 0.,  1., -0.],
        [ 0., -0.,  1.]])

>>> q2, r2 = torch.linalg.qr(a, mode='r')
>>> q2
tensor([])
>>> torch.equal(r, r2)
True

>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.linalg.qr(a, mode='complete')
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(4))
True
```

(In the last check, `Q` has shape `(3, 4, 4)` for the complete factorization of 4 × 5 matrices, so `Q^T Q` compares against `torch.eye(4)`.)
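As a usage note (not part of the original examples), here is a short sketch of how the reduced QR factors can be used to solve a least-squares problem; the matrices are illustrative:

```
import torch

A = torch.randn(5, 3, dtype=torch.float64)
b = torch.randn(5, dtype=torch.float64)

# Reduced factorization: Q is (5, 3) with orthonormal columns, R is (3, 3).
Q, R = torch.linalg.qr(A)

# min ||A x - b|| reduces to the triangular system R x = Q^T b.
rhs = Q.t().mv(b).unsqueeze(-1)
x = torch.triangular_solve(rhs, R).solution.squeeze(-1)

# The least-squares residual is orthogonal to the column space of A.
assert torch.allclose(A.t() @ (A @ x - b),
                      torch.zeros(3, dtype=torch.float64), atol=1e-10)
```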
pytorch Automatic Mixed Precision package - torch.cuda.amp Automatic Mixed Precision package - torch.cuda.amp ================================================== `torch.cuda.amp` provides convenience methods for mixed precision, where some operations use the `torch.float32` (`float`) datatype and other operations use `torch.float16` (`half`). Some ops, like linear layers and convolutions, are much faster in `float16`. Other ops, like reductions, often require the dynamic range of `float32`. Mixed precision tries to match each op to its appropriate datatype. Ordinarily, “automatic mixed precision training” uses [`torch.cuda.amp.autocast`](#torch.cuda.amp.autocast "torch.cuda.amp.autocast") and [`torch.cuda.amp.GradScaler`](#torch.cuda.amp.GradScaler "torch.cuda.amp.GradScaler") together, as shown in the [Automatic Mixed Precision examples](https://pytorch.org/docs/1.8.0/notes/amp_examples.html#amp-examples) and [Automatic Mixed Precision recipe](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html). However, [`autocast`](#torch.cuda.amp.autocast "torch.cuda.amp.autocast") and [`GradScaler`](#torch.cuda.amp.GradScaler "torch.cuda.amp.GradScaler") are modular, and may be used separately if desired. * [Autocasting](#autocasting) * [Gradient Scaling](#gradient-scaling) * [Autocast Op Reference](#autocast-op-reference) + [Op Eligibility](#op-eligibility) + [Op-Specific Behavior](#op-specific-behavior) - [Ops that can autocast to `float16`](#ops-that-can-autocast-to-float16) - [Ops that can autocast to `float32`](#ops-that-can-autocast-to-float32) - [Ops that promote to the widest input type](#ops-that-promote-to-the-widest-input-type) - [Prefer `binary_cross_entropy_with_logits` over `binary_cross_entropy`](#prefer-binary-cross-entropy-with-logits-over-binary-cross-entropy) Autocasting ----------- `class torch.cuda.amp.autocast(enabled=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/autocast_mode.html#autocast) Instances of [`autocast`](#torch.cuda.amp.autocast "torch.cuda.amp.autocast") serve as context managers or decorators that allow regions of your script to run in mixed precision. In these regions, CUDA ops run in an op-specific dtype chosen by autocast to improve performance while maintaining accuracy. See the [Autocast Op Reference](#autocast-op-reference) for details. When entering an autocast-enabled region, Tensors may be any type. You should not call `.half()` on your model(s) or inputs when using autocasting. [`autocast`](#torch.cuda.amp.autocast "torch.cuda.amp.autocast") should wrap only the forward pass(es) of your network, including the loss computation(s). Backward passes under autocast are not recommended. Backward ops run in the same type that autocast used for corresponding forward ops. Example: ``` # Creates model and optimizer in default precision model = Net().cuda() optimizer = optim.SGD(model.parameters(), ...) for input, target in data: optimizer.zero_grad() # Enables autocasting for the forward pass (model + loss) with autocast(): output = model(input) loss = loss_fn(output, target) # Exits the context manager before backward() loss.backward() optimizer.step() ``` See the [Automatic Mixed Precision examples](https://pytorch.org/docs/1.8.0/notes/amp_examples.html#amp-examples) for usage (along with gradient scaling) in more complex scenarios (e.g., gradient penalty, multiple models/losses, custom autograd functions). 
[`autocast`](#torch.cuda.amp.autocast "torch.cuda.amp.autocast") can also be used as a decorator, e.g., on the `forward` method of your model: ``` class AutocastModel(nn.Module): ... @autocast() def forward(self, input): ... ``` Floating-point Tensors produced in an autocast-enabled region may be `float16`. After returning to an autocast-disabled region, using them with floating-point Tensors of different dtypes may cause type mismatch errors. If so, cast the Tensor(s) produced in the autocast region back to `float32` (or other dtype if desired). If a Tensor from the autocast region is already `float32`, the cast is a no-op, and incurs no additional overhead. Example: ``` # Creates some tensors in default dtype (here assumed to be float32) a_float32 = torch.rand((8, 8), device="cuda") b_float32 = torch.rand((8, 8), device="cuda") c_float32 = torch.rand((8, 8), device="cuda") d_float32 = torch.rand((8, 8), device="cuda") with autocast(): # torch.mm is on autocast's list of ops that should run in float16. # Inputs are float32, but the op runs in float16 and produces float16 output. # No manual casts are required. e_float16 = torch.mm(a_float32, b_float32) # Also handles mixed input types f_float16 = torch.mm(d_float32, e_float16) # After exiting autocast, calls f_float16.float() to use with d_float32 g_float32 = torch.mm(d_float32, f_float16.float()) ``` Type mismatch errors *in* an autocast-enabled region are a bug; if this is what you observe, please file an issue. `autocast(enabled=False)` subregions can be nested in autocast-enabled regions. Locally disabling autocast can be useful, for example, if you want to force a subregion to run in a particular `dtype`. Disabling autocast gives you explicit control over the execution type. In the subregion, inputs from the surrounding region should be cast to `dtype` before use: ``` # Creates some tensors in default dtype (here assumed to be float32) a_float32 = torch.rand((8, 8), device="cuda") b_float32 = torch.rand((8, 8), device="cuda") c_float32 = torch.rand((8, 8), device="cuda") d_float32 = torch.rand((8, 8), device="cuda") with autocast(): e_float16 = torch.mm(a_float32, b_float32) with autocast(enabled=False): # Calls e_float16.float() to ensure float32 execution # (necessary because e_float16 was created in an autocasted region) f_float32 = torch.mm(c_float32, e_float16.float()) # No manual casts are required when re-entering the autocast-enabled region. # torch.mm again runs in float16 and produces float16 output, regardless of input types. g_float16 = torch.mm(d_float32, f_float32) ``` The autocast state is thread-local. If you want it enabled in a new thread, the context manager or decorator must be invoked in that thread. This affects [`torch.nn.DataParallel`](generated/torch.nn.dataparallel#torch.nn.DataParallel "torch.nn.DataParallel") and [`torch.nn.parallel.DistributedDataParallel`](generated/torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") when used with more than one GPU per process (see [Working with Multiple GPUs](https://pytorch.org/docs/1.8.0/notes/amp_examples.html#amp-multigpu)). Parameters **enabled** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional**,* *default=True*) – Whether autocasting should be enabled in the region. 
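For reference before the [`GradScaler`](#torch.cuda.amp.GradScaler "torch.cuda.amp.GradScaler") API documented below, here is a minimal sketch of the typical combined pattern following the [Automatic Mixed Precision examples](https://pytorch.org/docs/1.8.0/notes/amp_examples.html#amp-examples); `Net`, `loss_fn`, and `data` are illustrative stand-ins:

```
import torch
from torch.cuda.amp import autocast, GradScaler

# Illustrative stand-ins; assumes Net, loss_fn, and data exist.
model = Net().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = GradScaler()

for input, target in data:
    optimizer.zero_grad()
    # Run the forward pass (and loss computation) under autocast.
    with autocast():
        output = model(input)
        loss = loss_fn(output, target)
    # Scale the loss, then call backward() on the scaled loss.
    scaler.scale(loss).backward()
    # step() unscales gradients internally and skips optimizer.step()
    # if infs/NaNs are found in the gradients.
    scaler.step(optimizer)
    # Update the scale factor for the next iteration.
    scaler.update()
```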
`torch.cuda.amp.custom_fwd(fwd=None, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/autocast_mode.html#custom_fwd) Helper decorator for `forward` methods of custom autograd functions (subclasses of [`torch.autograd.Function`](autograd#torch.autograd.Function "torch.autograd.Function")). See the [example page](https://pytorch.org/docs/1.8.0/notes/amp_examples.html#amp-custom-examples) for more detail. Parameters **cast\_inputs** (`torch.dtype` or None, optional, default=None) – If not `None`, when `forward` runs in an autocast-enabled region, casts incoming floating-point CUDA Tensors to the target dtype (non-floating-point Tensors are not affected), then executes `forward` with autocast disabled. If `None`, `forward`’s internal ops execute with the current autocast state. Note If the decorated `forward` is called outside an autocast-enabled region, [`custom_fwd`](#torch.cuda.amp.custom_fwd "torch.cuda.amp.custom_fwd") is a no-op and `cast_inputs` has no effect. `torch.cuda.amp.custom_bwd(bwd)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/autocast_mode.html#custom_bwd) Helper decorator for backward methods of custom autograd functions (subclasses of [`torch.autograd.Function`](autograd#torch.autograd.Function "torch.autograd.Function")). Ensures that `backward` executes with the same autocast state as `forward`. See the [example page](https://pytorch.org/docs/1.8.0/notes/amp_examples.html#amp-custom-examples) for more detail. Gradient Scaling ---------------- If the forward pass for a particular op has `float16` inputs, the backward pass for that op will produce `float16` gradients. Gradient values with small magnitudes may not be representable in `float16`. These values will flush to zero (“underflow”), so the update for the corresponding parameters will be lost. To prevent underflow, “gradient scaling” multiplies the network’s loss(es) by a scale factor and invokes a backward pass on the scaled loss(es). Gradients flowing backward through the network are then scaled by the same factor. In other words, gradient values have a larger magnitude, so they don’t flush to zero. Each parameter’s gradient (`.grad` attribute) should be unscaled before the optimizer updates the parameters, so the scale factor does not interfere with the learning rate. `class torch.cuda.amp.GradScaler(init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler) `get_backoff_factor()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.get_backoff_factor) Returns a Python float containing the scale backoff factor. `get_growth_factor()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.get_growth_factor) Returns a Python float containing the scale growth factor. `get_growth_interval()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.get_growth_interval) Returns a Python int containing the growth interval. `get_scale()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.get_scale) Returns a Python float containing the current scale, or 1.0 if scaling is disabled. Warning [`get_scale()`](#torch.cuda.amp.GradScaler.get_scale "torch.cuda.amp.GradScaler.get_scale") incurs a CPU-GPU sync. 
`is_enabled()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.is_enabled)

Returns a bool indicating whether this instance is enabled.

`load_state_dict(state_dict)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.load_state_dict)

Loads the scaler state. If this instance is disabled, [`load_state_dict()`](#torch.cuda.amp.GradScaler.load_state_dict "torch.cuda.amp.GradScaler.load_state_dict") is a no-op.

Parameters

**state\_dict** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – scaler state. Should be an object returned from a call to [`state_dict()`](#torch.cuda.amp.GradScaler.state_dict "torch.cuda.amp.GradScaler.state_dict").

`scale(outputs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.scale)

Multiplies ('scales') a tensor or list of tensors by the scale factor.

Returns scaled outputs. If this instance of [`GradScaler`](#torch.cuda.amp.GradScaler "torch.cuda.amp.GradScaler") is not enabled, outputs are returned unmodified.

Parameters

**outputs** ([Tensor](tensors#torch.Tensor "torch.Tensor") *or* *iterable of Tensors*) – Outputs to scale.

`set_backoff_factor(new_factor)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.set_backoff_factor)

Parameters

**new\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Value to use as the new scale backoff factor.

`set_growth_factor(new_factor)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.set_growth_factor)

Parameters

**new\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – Value to use as the new scale growth factor.

`set_growth_interval(new_interval)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.set_growth_interval)

Parameters

**new\_interval** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Value to use as the new growth interval.

`state_dict()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.state_dict)

Returns the state of the scaler as a [`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)"). It contains five entries:

* `"scale"` - a Python float containing the current scale
* `"growth_factor"` - a Python float containing the current growth factor
* `"backoff_factor"` - a Python float containing the current backoff factor
* `"growth_interval"` - a Python int containing the current growth interval
* `"_growth_tracker"` - a Python int containing the number of recent consecutive unskipped steps.

If this instance is not enabled, returns an empty dict.

Note

If you wish to checkpoint the scaler's state after a particular iteration, [`state_dict()`](#torch.cuda.amp.GradScaler.state_dict "torch.cuda.amp.GradScaler.state_dict") should be called after [`update()`](#torch.cuda.amp.GradScaler.update "torch.cuda.amp.GradScaler.update").

`step(optimizer, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.step)

[`step()`](#torch.cuda.amp.GradScaler.step "torch.cuda.amp.GradScaler.step") carries out the following two operations:

1. Internally invokes `unscale_(optimizer)` (unless [`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_") was explicitly called for `optimizer` earlier in the iteration). As part of the [`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_"), gradients are checked for infs/NaNs.
2. If no inf/NaN gradients are found, invokes `optimizer.step()` using the unscaled gradients. Otherwise, `optimizer.step()` is skipped to avoid corrupting the params.

`*args` and `**kwargs` are forwarded to `optimizer.step()`.

Returns the return value of `optimizer.step(*args, **kwargs)`.

Parameters

* **optimizer** ([torch.optim.Optimizer](optim#torch.optim.Optimizer "torch.optim.Optimizer")) – Optimizer that applies the gradients.
* **args** – Any arguments.
* **kwargs** – Any keyword arguments.

Warning

Closure use is not currently supported.

`unscale_(optimizer)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.unscale_)

Divides ("unscales") the optimizer's gradient tensors by the scale factor.

[`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_") is optional, serving cases where you need to [modify or inspect gradients](https://pytorch.org/docs/1.8.0/notes/amp_examples.html#working-with-unscaled-gradients) between the backward pass(es) and [`step()`](#torch.cuda.amp.GradScaler.step "torch.cuda.amp.GradScaler.step"). If [`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_") is not called explicitly, gradients will be unscaled automatically during [`step()`](#torch.cuda.amp.GradScaler.step "torch.cuda.amp.GradScaler.step").

Simple example, using [`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_") to enable clipping of unscaled gradients:

```
...
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
scaler.step(optimizer)
scaler.update()
```

Parameters

**optimizer** ([torch.optim.Optimizer](optim#torch.optim.Optimizer "torch.optim.Optimizer")) – Optimizer that owns the gradients to be unscaled.

Note

[`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_") does not incur a CPU-GPU sync.

Warning

[`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_") should only be called once per optimizer per [`step()`](#torch.cuda.amp.GradScaler.step "torch.cuda.amp.GradScaler.step") call, and only after all gradients for that optimizer's assigned parameters have been accumulated. Calling [`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_") twice for a given optimizer between each [`step()`](#torch.cuda.amp.GradScaler.step "torch.cuda.amp.GradScaler.step") triggers a RuntimeError.

Warning

[`unscale_()`](#torch.cuda.amp.GradScaler.unscale_ "torch.cuda.amp.GradScaler.unscale_") may unscale sparse gradients out of place, replacing the `.grad` attribute.

`update(new_scale=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/cuda/amp/grad_scaler.html#GradScaler.update)

Updates the scale factor.

If any optimizer steps were skipped, the scale is multiplied by `backoff_factor` to reduce it. If `growth_interval` unskipped iterations occurred consecutively, the scale is multiplied by `growth_factor` to increase it.

Passing `new_scale` sets the scale directly.
Parameters **new\_scale** (float or `torch.cuda.FloatTensor`, optional, default=None) – New scale factor. Warning [`update()`](#torch.cuda.amp.GradScaler.update "torch.cuda.amp.GradScaler.update") should only be called at the end of the iteration, after `scaler.step(optimizer)` has been invoked for all optimizers used this iteration. Autocast Op Reference --------------------- ### Op Eligibility Only CUDA ops are eligible for autocasting. Ops that run in `float64` or non-floating-point dtypes are not eligible, and will run in these types whether or not autocast is enabled. Only out-of-place ops and Tensor methods are eligible. In-place variants and calls that explicitly supply an `out=...` Tensor are allowed in autocast-enabled regions, but won’t go through autocasting. For example, in an autocast-enabled region `a.addmm(b, c)` can autocast, but `a.addmm_(b, c)` and `a.addmm(b, c, out=d)` cannot. For best performance and stability, prefer out-of-place ops in autocast-enabled regions. Ops called with an explicit `dtype=...` argument are not eligible, and will produce output that respects the `dtype` argument. ### Op-Specific Behavior The following lists describe the behavior of eligible ops in autocast-enabled regions. These ops always go through autocasting whether they are invoked as part of a [`torch.nn.Module`](generated/torch.nn.module#torch.nn.Module "torch.nn.Module"), as a function, or as a [`torch.Tensor`](tensors#torch.Tensor "torch.Tensor") method. If functions are exposed in multiple namespaces, they go through autocasting regardless of the namespace. Ops not listed below do not go through autocasting. They run in the type defined by their inputs. However, autocasting may still change the type in which unlisted ops run if they’re downstream from autocasted ops. If an op is unlisted, we assume it’s numerically stable in `float16`. If you believe an unlisted op is numerically unstable in `float16`, please file an issue. #### Ops that can autocast to `float16` `__matmul__`, `addbmm`, `addmm`, `addmv`, `addr`, `baddbmm`, `bmm`, `chain_matmul`, `conv1d`, `conv2d`, `conv3d`, `conv_transpose1d`, `conv_transpose2d`, `conv_transpose3d`, `GRUCell`, `linear`, `LSTMCell`, `matmul`, `mm`, `mv`, `prelu`, `RNNCell` #### Ops that can autocast to `float32` `__pow__`, `__rdiv__`, `__rpow__`, `__rtruediv__`, `acos`, `asin`, `binary_cross_entropy_with_logits`, `cosh`, `cosine_embedding_loss`, `cdist`, `cosine_similarity`, `cross_entropy`, `cumprod`, `cumsum`, `dist`, `erfinv`, `exp`, `expm1`, `gelu`, `group_norm`, `hinge_embedding_loss`, `kl_div`, `l1_loss`, `layer_norm`, `log`, `log_softmax`, `log10`, `log1p`, `log2`, `margin_ranking_loss`, `mse_loss`, `multilabel_margin_loss`, `multi_margin_loss`, `nll_loss`, `norm`, `normalize`, `pdist`, `poisson_nll_loss`, `pow`, `prod`, `reciprocal`, `rsqrt`, `sinh`, `smooth_l1_loss`, `soft_margin_loss`, `softmax`, `softmin`, `softplus`, `sum`, `renorm`, `tan`, `triplet_margin_loss` #### Ops that promote to the widest input type These ops don’t require a particular dtype for stability, but take multiple inputs and require that the inputs’ dtypes match. If all of the inputs are `float16`, the op runs in `float16`. If any of the inputs is `float32`, autocast casts all inputs to `float32` and runs the op in `float32`. `addcdiv`, `addcmul`, `atan2`, `bilinear`, `cat`, `cross`, `dot`, `equal`, `index_put`, `stack`, `tensordot` Some ops not listed here (e.g., binary ops like `add`) natively promote inputs without autocasting’s intervention. 
If inputs are a mixture of `float16` and `float32`, these ops run in `float32` and produce `float32` output, regardless of whether autocast is enabled.

#### Prefer `binary_cross_entropy_with_logits` over `binary_cross_entropy`

The backward passes of [`torch.nn.functional.binary_cross_entropy()`](nn.functional#torch.nn.functional.binary_cross_entropy "torch.nn.functional.binary_cross_entropy") (and [`torch.nn.BCELoss`](generated/torch.nn.bceloss#torch.nn.BCELoss "torch.nn.BCELoss"), which wraps it) can produce gradients that aren't representable in `float16`. In autocast-enabled regions, the forward input may be `float16`, which means the backward gradient must be representable in `float16` (autocasting `float16` forward inputs to `float32` doesn't help, because that cast must be reversed in backward). Therefore, `binary_cross_entropy` and `BCELoss` raise an error in autocast-enabled regions.

Many models use a sigmoid layer right before the binary cross entropy layer. In this case, combine the two layers using [`torch.nn.functional.binary_cross_entropy_with_logits()`](nn.functional#torch.nn.functional.binary_cross_entropy_with_logits "torch.nn.functional.binary_cross_entropy_with_logits") or [`torch.nn.BCEWithLogitsLoss`](generated/torch.nn.bcewithlogitsloss#torch.nn.BCEWithLogitsLoss "torch.nn.BCEWithLogitsLoss"). `binary_cross_entropy_with_logits` and `BCEWithLogitsLoss` are safe to autocast.
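For instance, here is a minimal sketch of the safe pattern; the `logits` and `target` tensors are illustrative:

```
import torch
import torch.nn.functional as F
from torch.cuda.amp import autocast

logits = torch.randn(8, 1, device="cuda")  # raw scores; no sigmoid applied
target = torch.rand(8, 1, device="cuda")   # probabilities in [0, 1]

with autocast():
    # F.binary_cross_entropy(torch.sigmoid(logits), target) would raise
    # an error here; the fused op below autocasts safely to float32.
    loss = F.binary_cross_entropy_with_logits(logits, target)
```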
pytorch torch.onnx torch.onnx ========== * [Example: End-to-end AlexNet from PyTorch to ONNX](#example-end-to-end-alexnet-from-pytorch-to-onnx) * [Tracing vs Scripting](#tracing-vs-scripting) * [Write PyTorch model in Torch way](#write-pytorch-model-in-torch-way) * [Using dictionaries to handle Named Arguments as model inputs](#using-dictionaries-to-handle-named-arguments-as-model-inputs) * [Indexing](#indexing) + [Getter](#getter) + [Setter](#setter) * [TorchVision support](#torchvision-support) * [Limitations](#limitations) * [Supported operators](#supported-operators) * [Adding support for operators](#adding-support-for-operators) + [ATen operators](#aten-operators) + [Non-ATen operators](#non-aten-operators) + [Custom operators](#custom-operators) * [Operator Export Type](#operator-export-type) + [ONNX](#id2) + [ONNX\_ATEN](#onnx-aten) + [ONNX\_ATEN\_FALLBACK](#onnx-aten-fallback) + [RAW](#raw) + [ONNX\_FALLTHROUGH](#onnx-fallthrough) * [Frequently Asked Questions](#frequently-asked-questions) * [Use external data format](#use-external-data-format) * [Training](#training) * [Functions](#functions) Example: End-to-end AlexNet from PyTorch to ONNX ------------------------------------------------ Here is a simple script which exports a pretrained AlexNet as defined in torchvision into ONNX. It runs a single round of inference and then saves the resulting traced model to `alexnet.onnx`: ``` import torch import torchvision dummy_input = torch.randn(10, 3, 224, 224, device='cuda') model = torchvision.models.alexnet(pretrained=True).cuda() # Providing input and output names sets the display names for values # within the model's graph. Setting these does not change the semantics # of the graph; it is only for readability. # # The inputs to the network consist of the flat list of inputs (i.e. # the values you would pass to the forward() method) followed by the # flat list of parameters. You can partially specify names, i.e. provide # a list here shorter than the number of inputs to the model, and we will # only set that subset of names, starting from the beginning. input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ] output_names = [ "output1" ] torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names) ``` The resulting `alexnet.onnx` is a binary protobuf file which contains both the network structure and parameters of the model you exported (in this case, AlexNet). The keyword argument `verbose=True` causes the exporter to print out a human-readable representation of the network: ``` # These are the inputs and parameters to the network, which have taken on # the names we specified earlier. 
graph(%actual_input_1 : Float(10, 3, 224, 224)
      %learned_0 : Float(64, 3, 11, 11)
      %learned_1 : Float(64)
      %learned_2 : Float(192, 64, 5, 5)
      %learned_3 : Float(192)
      # ---- omitted for brevity ----
      %learned_14 : Float(1000, 4096)
      %learned_15 : Float(1000)) {
  # Every statement consists of some output tensors (and their types),
  # the operator to be run (with its attributes, e.g., kernels, strides,
  # etc.), its input tensors (%actual_input_1, %learned_0, %learned_1)
  %17 : Float(10, 64, 55, 55) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[11, 11], pads=[2, 2, 2, 2], strides=[4, 4]](%actual_input_1, %learned_0, %learned_1), scope: AlexNet/Sequential[features]/Conv2d[0]
  %18 : Float(10, 64, 55, 55) = onnx::Relu(%17), scope: AlexNet/Sequential[features]/ReLU[1]
  %19 : Float(10, 64, 27, 27) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%18), scope: AlexNet/Sequential[features]/MaxPool2d[2]
  # ---- omitted for brevity ----
  %29 : Float(10, 256, 6, 6) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%28), scope: AlexNet/Sequential[features]/MaxPool2d[12]
  # Dynamic means that the shape is not known. This may be because of a
  # limitation of our implementation (which we would like to fix in a
  # future release) or shapes which are truly dynamic.
  %30 : Dynamic = onnx::Shape(%29), scope: AlexNet
  %31 : Dynamic = onnx::Slice[axes=[0], ends=[1], starts=[0]](%30), scope: AlexNet
  %32 : Long() = onnx::Squeeze[axes=[0]](%31), scope: AlexNet
  %33 : Long() = onnx::Constant[value={9216}](), scope: AlexNet
  # ---- omitted for brevity ----
  %output1 : Float(10, 1000) = onnx::Gemm[alpha=1, beta=1, broadcast=1, transB=1](%45, %learned_14, %learned_15), scope: AlexNet/Sequential[classifier]/Linear[6]
  return (%output1);
}
```

You can also verify the protobuf using the [ONNX](https://github.com/onnx/onnx/) library. You can install `ONNX` with conda:

```
conda install -c conda-forge onnx
```

Then, you can run:

```
import onnx

# Load the ONNX model
model = onnx.load("alexnet.onnx")

# Check that the IR is well formed
onnx.checker.check_model(model)

# Print a human-readable representation of the graph
onnx.helper.printable_graph(model.graph)
```

To run the exported model with [caffe2](https://caffe2.ai/), you will need to install `caffe2`. If you don't have it installed already, please [follow the install instructions](https://caffe2.ai/docs/getting-started.html). Once it is installed, you can use the backend for Caffe2:

```
# ...continuing from above
import caffe2.python.onnx.backend as backend
import numpy as np

rep = backend.prepare(model, device="CUDA:0") # or "CPU"
# For the Caffe2 backend:
#     rep.predict_net is the Caffe2 protobuf for the network
#     rep.workspace is the Caffe2 workspace for the network
#       (see the class caffe2.python.onnx.backend.Workspace)
outputs = rep.run(np.random.randn(10, 3, 224, 224).astype(np.float32))
# To run networks with more than one input, pass a tuple
# rather than a single numpy ndarray.
print(outputs[0])
```

You can also run the exported model with [ONNX Runtime](https://github.com/microsoft/onnxruntime); to do so, install `ONNX Runtime` by [following these instructions](https://github.com/microsoft/onnxruntime#installation).
Once it is installed, you can use the backend for ONNX Runtime:

```
# ...continuing from above
import onnxruntime as ort

ort_session = ort.InferenceSession('alexnet.onnx')

outputs = ort_session.run(None, {'actual_input_1': np.random.randn(10, 3, 224, 224).astype(np.float32)})

print(outputs[0])
```

Here is another [tutorial on exporting the SuperResolution model to ONNX](https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html). In the future, there will be backends for other frameworks as well.

Tracing vs Scripting
--------------------

The ONNX exporter can be either a *trace-based* or a *script-based* exporter.

* *trace-based* means that it operates by executing your model once and exporting the operators which were actually run during this run. This means that if your model is dynamic, e.g., changes behavior depending on input data, the export won't be accurate. Similarly, a trace is likely to be valid only for a specific input size (which is one reason why we require explicit inputs on tracing.) We recommend examining the model trace and making sure the traced operators look reasonable. If your model contains control flow such as for loops and if conditions, the *trace-based* exporter will unroll the loops and if conditions, exporting a static graph that exactly matches this run. If you want to export your model with dynamic control flow, you will need to use the *script-based* exporter.
* *script-based* means that the model you are trying to export is a [ScriptModule](jit). `ScriptModule` is the core data structure in `TorchScript`, and `TorchScript` is a subset of the Python language that creates serializable and optimizable models from PyTorch code.

Mixing tracing and scripting is allowed: you can compose them to suit the particular requirements of each part of a model.
Check out this example:

```
import torch

# Trace-based only
class LoopModel(torch.nn.Module):
    def forward(self, x, y):
        for i in range(y):
            x = x + i
        return x

model = LoopModel()
dummy_input = torch.ones(2, 3, dtype=torch.long)
loop_count = torch.tensor(5, dtype=torch.long)

torch.onnx.export(model, (dummy_input, loop_count), 'loop.onnx', verbose=True)
```

With the *trace-based* exporter, we get an ONNX graph that unrolls the for loop:

```
graph(%0 : Long(2, 3),
      %1 : Long()):
  %2 : Tensor = onnx::Constant[value={1}]()
  %3 : Tensor = onnx::Add(%0, %2)
  %4 : Tensor = onnx::Constant[value={2}]()
  %5 : Tensor = onnx::Add(%3, %4)
  %6 : Tensor = onnx::Constant[value={3}]()
  %7 : Tensor = onnx::Add(%5, %6)
  %8 : Tensor = onnx::Constant[value={4}]()
  %9 : Tensor = onnx::Add(%7, %8)
  return (%9)
```

To utilize the *script-based* exporter for capturing the dynamic loop, we can write the loop in script and call it from a regular nn.Module:

```
# Mixing tracing and scripting
@torch.jit.script
def loop(x, y):
    for i in range(int(y)):
        x = x + i
    return x

class LoopModel2(torch.nn.Module):
    def forward(self, x, y):
        return loop(x, y)

model = LoopModel2()
dummy_input = torch.ones(2, 3, dtype=torch.long)
loop_count = torch.tensor(5, dtype=torch.long)
torch.onnx.export(model, (dummy_input, loop_count), 'loop.onnx', verbose=True,
                  input_names=['input_data', 'loop_range'])
```

Now the exported ONNX graph becomes:

```
graph(%input_data : Long(2, 3),
      %loop_range : Long()):
  %2 : Long() = onnx::Constant[value={1}](), scope: LoopModel2/loop
  %3 : Tensor = onnx::Cast[to=9](%2)
  %4 : Long(2, 3) = onnx::Loop(%loop_range, %3, %input_data), scope: LoopModel2/loop # custom_loop.py:240:5
    block0(%i.1 : Long(), %cond : bool, %x.6 : Long(2, 3)):
      %8 : Long(2, 3) = onnx::Add(%x.6, %i.1), scope: LoopModel2/loop # custom_loop.py:241:13
      %9 : Tensor = onnx::Cast[to=9](%2)
      -> (%9, %8)
  return (%4)
```

The dynamic control flow is captured correctly. We can verify this in backends with a different loop range:

```
import caffe2.python.onnx.backend as backend
import numpy as np
import onnx
model = onnx.load('loop.onnx')

rep = backend.prepare(model)
outputs = rep.run((dummy_input.numpy(), np.array(9).astype(np.int64)))
print(outputs[0])
#[[37 37 37]
# [37 37 37]]

import onnxruntime as ort
ort_sess = ort.InferenceSession('loop.onnx')
outputs = ort_sess.run(None, {'input_data': dummy_input.numpy(),
                              'loop_range': np.array(9).astype(np.int64)})
print(outputs)
#[array([[37, 37, 37],
#        [37, 37, 37]], dtype=int64)]
```

To avoid exporting a variable scalar tensor as a fixed-value constant in the ONNX model, avoid using `torch.Tensor.item()`. Torch supports implicit casting of single-element tensors to numbers. E.g.:

```
class LoopModel(torch.nn.Module):
    def forward(self, x, y):
        res = []
        arr = x.split(2, 0)
        for i in range(int(y)):
            res += [arr[i].sum(0, False)]
        return torch.stack(res)

model = torch.jit.script(LoopModel())
inputs = (torch.randn(16), torch.tensor(8))

out = model(*inputs)
torch.onnx.export(model, inputs, 'loop_and_list.onnx', opset_version=11, example_outputs=out)
```

Write PyTorch model in Torch way
--------------------------------

PyTorch models can be written using numpy manipulations, but this does not work when converting to ONNX: the trace-based exporter treats numpy values as constant nodes, so it computes the wrong result if the input changes. The PyTorch model therefore needs to be implemented using torch operators.
For example, do not use numpy operators on numpy tensors:

```
np.concatenate((x, y, z), axis=1)
```

Do not convert to numpy types:

```
y = x.astype(np.int)
```

Always use torch tensors and torch operators: `torch.cat`, etc.

In addition, the Dropout layer needs to be defined in the `__init__` function so that inference can handle it properly, i.e.:

```
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        x = self.dropout(x)
        return x
```

Using dictionaries to handle Named Arguments as model inputs
------------------------------------------------------------

There are two ways to handle models which take named parameters or keyword arguments as inputs:

* The first method is to pass all the inputs in the same order as required by the model, and pass None values for the keyword arguments that do not require a value to be passed
* The second and more intuitive method is to represent the keyword arguments as key-value pairs, where the key represents the name of the argument in the model signature and the value represents the value of the argument to be passed

For example, in the model:

```
class Model(torch.nn.Module):
    def forward(self, x, y=None, z=None):
        if y is not None:
            return x + y
        if z is not None:
            return x + z
        return x

model = Model()
x = torch.randn(2, 3)
z = torch.randn(2, 3)
```

There are two ways of exporting the model:

* Not using a dictionary for the keyword arguments and passing all the inputs in the same order as required by the model:

```
torch.onnx.export(model, (x, None, z), 'test.onnx')
```

* Using a dictionary to represent the keyword arguments. This dictionary is always passed in addition to the non-keyword arguments and is always the last argument in the args tuple:

```
torch.onnx.export(model, (x, {'y': None, 'z': z}), 'test.onnx')
```

For cases in which there are no keyword arguments, models can be exported with either an empty dictionary or no dictionary. For example:

```
torch.onnx.export(model, (x, {}), 'test.onnx')
# or
torch.onnx.export(model, (x, ), 'test.onnx')
```

An exception to this rule is the case in which the last input is also of a dictionary type. In these cases it is mandatory to have an empty dictionary as the last argument in the args tuple. For example:

```
class Model(torch.nn.Module):
    def forward(self, k, x):
        ...
        return x

model = Model()
k = torch.randn(2, 3)
x = {torch.tensor(1.): torch.randn(2, 3)}
```

Without the empty dictionary, the export call assumes that the 'x' input is intended to represent the optional dictionary of named arguments. To prevent this from being an issue, a constraint is placed requiring an empty dictionary as the last input in the args tuple in such cases. The new call would look like this:

```
torch.onnx.export(model, (k, x, {}), 'test.onnx')
```

Indexing
--------

Tensor indexing in PyTorch is very flexible and complicated. There are two categories of indexing. Both are largely supported in exporting today. If you are experiencing issues exporting indexing that belongs to the supported patterns below, please double check that you are exporting with the latest opset (opset\_version=12).

### Getter

This type of indexing occurs on the RHS. Export is supported for ONNX opset version >= 9. E.g.:

```
data = torch.randn(3, 4)
index = torch.tensor([1, 2])

# RHS indexing is supported in ONNX opset >= 9.
class RHSIndexing(torch.nn.Module):
    def forward(self, data, index):
        return data[index]

out = RHSIndexing()(data, index)

torch.onnx.export(RHSIndexing(), (data, index), 'indexing.onnx', opset_version=9)

# onnxruntime
import onnxruntime
sess = onnxruntime.InferenceSession('indexing.onnx')
out_ort = sess.run(None, {
    sess.get_inputs()[0].name: data.numpy(),
    sess.get_inputs()[1].name: index.numpy(),
})

assert torch.all(torch.eq(out, torch.tensor(out_ort)))
```

Below is the list of supported patterns for RHS indexing.

```
# Scalar indices
data[0, 1]

# Slice indices
data[:3]

# Tensor indices
data[torch.tensor([[1, 2], [2, 3]])]
data[torch.tensor([2, 3]), torch.tensor([1, 2])]
data[torch.tensor([[1, 2], [2, 3]]), torch.tensor([2, 3])]
data[torch.tensor([2, 3]), :, torch.tensor([1, 2])]

# Ellipsis
# Not supported in scripting
# i.e. torch.jit.script(model) will fail if model contains this pattern.
# Export is supported under tracing
# i.e. torch.onnx.export(model)
data[...]

# The combination of above
data[2, ..., torch.tensor([2, 1, 3]), 2:4, torch.tensor([[1], [2]])]

# Boolean mask (supported for ONNX opset version >= 11)
data[data != 1]
```

And below is the list of unsupported patterns for RHS indexing.

```
# Tensor indices that include negative values.
data[torch.tensor([[1, 2], [2, -3]]), torch.tensor([-2, 3])]
```

### Setter

In code, this type of indexing occurs on the LHS. Export is supported for ONNX opset version >= 11. E.g.:

```
data = torch.zeros(3, 4)
new_data = torch.arange(4).to(torch.float32)

# LHS indexing is supported in ONNX opset >= 11.
class LHSIndexing(torch.nn.Module):
    def forward(self, data, new_data):
        data[1] = new_data
        return data

out = LHSIndexing()(data, new_data)

data = torch.zeros(3, 4)
new_data = torch.arange(4).to(torch.float32)
torch.onnx.export(LHSIndexing(), (data, new_data), 'inplace_assign.onnx', opset_version=11)

# onnxruntime
import onnxruntime
sess = onnxruntime.InferenceSession('inplace_assign.onnx')
out_ort = sess.run(None, {
    sess.get_inputs()[0].name: torch.zeros(3, 4).numpy(),
    sess.get_inputs()[1].name: new_data.numpy(),
})

assert torch.all(torch.eq(out, torch.tensor(out_ort)))
```

Below is the list of supported patterns for LHS indexing.

```
# Scalar indices
data[0, 1] = new_data

# Slice indices
data[:3] = new_data

# Tensor indices
# If more than one tensor is used as an index, only consecutive 1-d tensor indices are supported.
data[torch.tensor([[1, 2], [2, 3]])] = new_data
data[torch.tensor([2, 3]), torch.tensor([1, 2])] = new_data

# Ellipsis
# Not supported to export in script modules
# i.e. torch.onnx.export(torch.jit.script(model)) will fail if model contains this pattern.
# Export is supported under tracing
# i.e. torch.onnx.export(model)
data[...] = new_data

# The combination of above
data[2, ..., torch.tensor([2, 1, 3]), 2:4] += new_data

# Boolean mask
data[data != 1] = new_data
```

And below is the list of unsupported patterns for LHS indexing.

```
# Multiple tensor indices if any has rank >= 2
data[torch.tensor([[1, 2], [2, 3]]), torch.tensor([2, 3])] = new_data

# Multiple tensor indices that are not consecutive
data[torch.tensor([2, 3]), :, torch.tensor([1, 2])] = new_data

# Tensor indices that include negative values.
data[torch.tensor([1, -2]), torch.tensor([-2, 3])] = new_data
```

If you are experiencing issues exporting indexing that belongs to the above supported patterns, please double check that you are exporting with the latest opset (opset\_version=12).
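As a possible workaround for the unsupported negative tensor indices (a sketch, not from the original reference), the indices can be normalized to non-negative values before indexing so that the export falls into a supported pattern. Note that under tracing, `data.shape[0]` is baked into the graph as a constant:

```
import torch

class NonNegativeIndexing(torch.nn.Module):
    def forward(self, data, index):
        # Map negative indices to their non-negative equivalents so the
        # traced indexing avoids the unsupported negative-index pattern.
        index = torch.where(index < 0, index + data.shape[0], index)
        return data[index]

data = torch.randn(3, 4)
index = torch.tensor([1, -2])
torch.onnx.export(NonNegativeIndexing(), (data, index),
                  'indexing_nonneg.onnx', opset_version=11)
```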
TorchVision support
-------------------

All TorchVision models, except for quantized versions, are exportable to ONNX. More details can be found in [TorchVision](torchvision/models).

Limitations
-----------

* Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Users need to verify their dict inputs carefully, and keep in mind that dynamic lookups are not available.
* PyTorch and ONNX backends (Caffe2, ONNX Runtime, etc.) often have implementations of operators with some numeric differences. Depending on model structure, these differences may be negligible, but they can also cause major divergences in behavior (especially on untrained models). We allow Caffe2 to call directly into Torch implementations of operators, to help you smooth over these differences when precision is important, and to also document these differences.

Supported operators
-------------------

The following operators are supported:

* BatchNorm
* ConstantPadNd
* Conv
* Dropout
* Embedding (no optional arguments supported)
* EmbeddingBag
* FeatureDropout (training mode not supported)
* Index
* MaxPool1d
* MaxPool2d
* MaxPool3d
* RNN
* abs
* absolute
* acos
* adaptive\_avg\_pool1d
* adaptive\_avg\_pool2d
* adaptive\_avg\_pool3d
* adaptive\_max\_pool1d
* adaptive\_max\_pool2d
* adaptive\_max\_pool3d
* add (nonzero alpha not supported)
* addmm
* and
* arange
* argmax
* argmin
* asin
* atan
* avg\_pool1d
* avg\_pool2d
* avg\_pool3d
* as\_strided
* baddbmm
* bitshift
* cat
* ceil
* celu
* clamp
* clamp\_max
* clamp\_min
* concat
* copy
* cos
* cumsum
* det
* dim\_arange
* div
* dropout
* einsum
* elu
* empty
* empty\_like
* eq
* erf
* exp
* expand
* expand\_as
* eye
* flatten
* floor
* floor\_divide
* frobenius\_norm
* full
* full\_like
* gather
* ge
* gelu
* glu
* group\_norm
* gt
* hardswish
* hardtanh
* im2col
* index\_copy
* index\_fill
* index\_put
* index\_select
* instance\_norm
* interpolate
* isnan
* KLDivLoss
* layer\_norm
* le
* leaky\_relu
* len
* log
* log1p
* log2
* log\_sigmoid
* log\_softmax
* logdet
* logsumexp
* lt
* masked\_fill
* masked\_scatter
* masked\_select
* max
* mean
* min
* mm
* mul
* multinomial
* narrow
* ne
* neg
* new\_empty
* new\_full
* new\_zeros
* nll\_loss
* nonzero
* norm
* ones
* ones\_like
* or
* permute
* pixel\_shuffle
* pow
* prelu (single weight shared among input channels not supported)
* prod
* rand
* randn
* randn\_like
* reciprocal
* reflection\_pad
* relu
* repeat
* replication\_pad
* reshape
* reshape\_as
* round
* rrelu
* rsqrt
* rsub
* scalar\_tensor
* scatter
* scatter\_add
* select
* selu
* sigmoid
* sign
* sin
* size
* slice
* softmax
* softplus
* sort
* split
* sqrt
* squeeze
* stack
* std
* sub (nonzero alpha not supported)
* sum
* t
* tan
* tanh
* threshold (non-zero threshold/non-zero value not supported)
* to
* topk
* transpose
* true\_divide
* type\_as
* unbind
* unfold (experimental support with ATen-Caffe2 integration)
* unique
* unsqueeze
* upsample\_nearest1d
* upsample\_nearest2d
* upsample\_nearest3d
* view
* weight\_norm
* where
* zeros
* zeros\_like

The operator set above is sufficient to export the following models:

* AlexNet
* DCGAN
* DenseNet
* Inception (warning: this model is highly sensitive to changes in operator implementation)
* ResNet
* SuperResolution
* VGG
* [word\_language\_model](https://github.com/pytorch/examples/tree/master/word_language_model)

Adding support for operators
----------------------------
Adding export support for operators is an *advanced usage*. To achieve this, developers need to touch the source code of PyTorch. Please follow the [instructions](https://github.com/pytorch/pytorch#from-source) for installing PyTorch from source. If the desired operator is standardized in ONNX, it should be easy to add support for exporting such an operator (adding a symbolic function for the operator). To confirm whether the operator is standardized or not, please check the [ONNX operator list](https://github.com/onnx/onnx/blob/master/docs/Operators.md).

### ATen operators

If the operator is an ATen operator, which means you can find the declaration of the function in `torch/csrc/autograd/generated/VariableType.h` (available in generated code in the PyTorch install dir), you should add the symbolic function in `torch/onnx/symbolic_opset<version>.py` and follow the instructions listed below:

* Define the symbolic function in `torch/onnx/symbolic_opset<version>.py`, for example [torch/onnx/symbolic\_opset9.py](https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_opset9.py). Make sure the function has the same name as the ATen operator/function defined in `VariableType.h`.
* The first parameter is always the exported ONNX graph. Parameter names must EXACTLY match the names in `VariableType.h`, because dispatch is done with keyword arguments.
* Parameter ordering does NOT necessarily match what is in `VariableType.h`; tensors (inputs) are always first, then non-tensor arguments.
* In the symbolic function, if the operator is already standardized in ONNX, we only need to create a node to represent the ONNX operator in the graph.
* If the input argument is a tensor, but ONNX asks for a scalar, we have to explicitly do the conversion. The helper function `_scalar` can convert a scalar tensor into a Python scalar, and `_if_scalar_type_as` can turn a Python scalar into a PyTorch tensor.

### Non-ATen operators

If the operator is a non-ATen operator, the symbolic function has to be added in the corresponding PyTorch Function class. Please read the following instructions:

* Create a symbolic function named `symbolic` in the corresponding Function class.
* The first parameter is always the exported ONNX graph.
* Parameter names except the first must EXACTLY match the names in `forward`.
* The output tuple size must match the outputs of `forward`.
* In the symbolic function, if the operator is already standardized in ONNX, we just need to create a node to represent the ONNX operator in the graph.

Symbolic functions should be implemented in Python. All of these functions interact with Python methods which are implemented via C++-Python bindings, but intuitively the interface they provide looks like this:

```
def operator/symbolic(g, *inputs):
    """
    Modifies Graph (e.g., using "op"), adding the ONNX operations representing
    this PyTorch function, and returning a Value or tuple of Values specifying the
    ONNX outputs whose values correspond to the original PyTorch return values
    of the autograd Function (or None if an output is not supported by ONNX).
    Args:
        g (Graph): graph to write the ONNX representation into
        inputs (Value...): list of values representing the variables which contain
            the inputs for this function
    """

class Value(object):
    """Represents an intermediate tensor value computed in ONNX."""
    def type(self):
        """Returns the Type of the value."""

class Type(object):
    def sizes(self):
        """Returns a tuple of ints representing the shape of a tensor this describes."""

class Graph(object):
    def op(self, opname, *inputs, **attrs):
        """
        Create an ONNX operator 'opname', taking 'args' as inputs
        and attributes 'kwargs' and add it as a node to the current graph,
        returning the value representing the single output of this
        operator (see the `outputs` keyword argument for multi-return
        nodes).

        The set of operators and the inputs/attributes they take
        is documented at https://github.com/onnx/onnx/blob/master/docs/Operators.md

        Args:
            opname (string): The ONNX operator name, e.g., `Abs` or `Add`.
            args (Value...): The inputs to the operator; usually provided
                as arguments to the `symbolic` definition.
            kwargs: The attributes of the ONNX operator, with keys named
                according to the following convention: `alpha_f` indicates
                the `alpha` attribute with type `f`.  The valid type specifiers are
                `f` (float), `i` (int), `s` (string) or `t` (Tensor).  An attribute
                specified with type float accepts either a single float, or a
                list of floats (e.g., you would say `dims_i` for a `dims` attribute
                that takes a list of integers).
            outputs (int, optional):  The number of outputs this operator returns;
                by default an operator is assumed to return a single output.
                If `outputs` is greater than one, this function returns a tuple
                of output `Value`, representing each output of the ONNX operator
                in positional order.
        """
```

The ONNX graph C++ definition is in `torch/csrc/jit/ir/ir.h`.

Here is an example of handling a missing symbolic function for the `elu` operator. We try to export the model and see the error message as below:

```
UserWarning: ONNX export failed on elu because torch.onnx.symbolic_opset9.elu does not exist
RuntimeError: ONNX export failed: Couldn't export operator elu
```

The export fails because PyTorch does not support exporting the `elu` operator. We find `virtual Tensor elu(const Tensor & input, Scalar alpha, bool inplace) const override;` in `VariableType.h`. This means `elu` is an ATen operator. We check the [ONNX operator list](https://github.com/onnx/onnx/blob/master/docs/Operators.md), and confirm that `Elu` is standardized in ONNX. We add the following lines to `symbolic_opset9.py`:

```
def elu(g, input, alpha, inplace=False):
    return g.op("Elu", input, alpha_f=_scalar(alpha))
```

Now PyTorch is able to export the `elu` operator.

There are more examples in [symbolic\_opset9.py](https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_opset9.py), [symbolic\_opset10.py](https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_opset10.py).

The interface for specifying operator definitions is experimental; adventurous users should note that the APIs will probably change in a future release.

### Custom operators

Following the tutorial [Extending TorchScript with Custom C++ Operators](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html), you can create and register your own custom ops implementation in PyTorch.
Here's how to export such a model to ONNX:

```
# Create custom symbolic function
from torch.onnx.symbolic_helper import parse_args
@parse_args('v', 'v', 'f', 'i')
def symbolic_foo_forward(g, input1, input2, attr1, attr2):
    return g.op("Foo", input1, input2, attr1_f=attr1, attr2_i=attr2)

# Register custom symbolic function
from torch.onnx import register_custom_op_symbolic
register_custom_op_symbolic('custom_ops::foo_forward', symbolic_foo_forward, 9)

class FooModel(torch.nn.Module):
    def __init__(self, attr1, attr2):
        super(FooModel, self).__init__()
        self.attr1 = attr1
        self.attr2 = attr2

    def forward(self, input1, input2):
        # Calling custom op
        return torch.ops.custom_ops.foo_forward(input1, input2, self.attr1, self.attr2)

model = FooModel(attr1, attr2)
torch.onnx.export(model, (dummy_input1, dummy_input2), 'model.onnx',
                  custom_opsets={"custom_domain": 2})
```

Depending on the custom operator, you can export it as one or a combination of existing ONNX ops. You can also export it as a custom op in ONNX. In that case, you can specify the custom domain and version (custom opset) using the `custom_opsets` dictionary at export. If not explicitly specified, the custom opset version is set to 1 by default. When using custom ONNX ops, you will need to extend the backend of your choice with a matching custom ops implementation, e.g. [Caffe2 custom ops](https://caffe2.ai/docs/custom-operators.html), [ONNX Runtime custom ops](https://github.com/microsoft/onnxruntime/blob/master/docs/AddingCustomOp.md).

Operator Export Type
--------------------

Exporting models with unsupported ONNX operators can be achieved using the `operator_export_type` flag in the export API. This flag is useful when users try to export ATen and non-ATen operators that are not registered and supported in ONNX.

### ONNX

This mode is used to export all operators as regular ONNX operators. This is the default `operator_export_type` mode.

```
Example torch ir graph:

    graph(%0 : Float(2, 3, 4, strides=[12, 4, 1])):
      %3 : Float(2, 3, 4, strides=[12, 4, 1]) = aten:exp(%0)
      %4 : Float(2, 3, 4, strides=[12, 4, 1]) = aten:div(%0, %3)
      return (%4)

Is exported as:

    graph(%0 : Float(2, 3, 4, strides=[12, 4, 1])):
      %1 : Float(2, 3, 4, strides=[12, 4, 1]) = onnx:Exp(%0)
      %2 : Float(2, 3, 4, strides=[12, 4, 1]) = onnx:Div(%0, %1)
      return (%2)
```

### ONNX\_ATEN

This mode is used to export all operators as ATen ops, and avoid conversion to ONNX.

```
Example torch ir graph:

    graph(%0 : Float(2, 3, 4, strides=[12, 4, 1])):
      %3 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::exp(%0)
      %4 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::div(%0, %3)
      return (%4)

Is exported as:

    graph(%0 : Float(2, 3, 4, strides=[12, 4, 1])):
      %1 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::ATen[operator="exp"](%0)
      %2 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::ATen[operator="div"](%0, %1)
      return (%2)
```

### ONNX\_ATEN\_FALLBACK

To fall back on unsupported ATen operators in ONNX. Supported operators are exported to ONNX regularly. In the following example, aten::triu is not supported in ONNX, so the exporter falls back on this operator.

```
Example torch ir graph:

    graph(%0 : Float):
      %3 : int = prim::Constant[value=0]()
      %4 : Float = aten::triu(%0, %3) # unsupported op
      %5 : Float = aten::mul(%4, %0) # registered op
      return (%5)

is exported as:

    graph(%0 : Float):
      %1 : Long() = onnx::Constant[value={0}]()
      %2 : Float = aten::ATen[operator="triu"](%0, %1) # unsupported op
      %3 : Float = onnx::Mul(%2, %0) # registered op
      return (%3)
```

### RAW

To export the raw IR.
```
Example torch ir graph:

    graph(%x.1 : Float(1, strides=[1])):
      %1 : Tensor = aten::exp(%x.1)
      %2 : Tensor = aten::div(%x.1, %1)
      %y.1 : Tensor[] = prim::ListConstruct(%2)
      return (%y.1)

is exported as:

    graph(%x.1 : Float(1, strides=[1])):
      %1 : Tensor = aten::exp(%x.1)
      %2 : Tensor = aten::div(%x.1, %1)
      %y.1 : Tensor[] = prim::ListConstruct(%2)
      return (%y.1)
```

### ONNX\_FALLTHROUGH

This mode can be used to export any operator (ATen or non-ATen) that is not registered and supported in ONNX. The exporter falls through and exports the operator as-is, as a custom op. Exporting custom operators enables users to register and implement the operator as part of their runtime backend.

```
Example torch ir graph:

    graph(%0 : Float(2, 3, 4, strides=[12, 4, 1]),
          %1 : Float(2, 3, 4, strides=[12, 4, 1])):
      %6 : Float(2, 3, 4, strides=[12, 4, 1]) = foo_namespace::bar(%0, %1) # custom op
      %7 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::div(%6, %0) # registered op
      return (%7)

is exported as:

    graph(%0 : Float(2, 3, 4, strides=[12, 4, 1]),
          %1 : Float(2, 3, 4, strides=[12, 4, 1])):
      %2 : Float(2, 3, 4, strides=[12, 4, 1]) = foo_namespace::bar(%0, %1) # custom op
      %3 : Float(2, 3, 4, strides=[12, 4, 1]) = onnx::Div(%2, %0) # registered op
      return (%3)
```

Frequently Asked Questions
--------------------------

Q: I have exported my LSTM model, but its input size seems to be fixed?

The tracer records the shapes of the example inputs in the graph. If the model should accept inputs of dynamic shapes, use the `dynamic_axes` parameter in the export API.

```
layer_count = 4

model = nn.LSTM(10, 20, num_layers=layer_count, bidirectional=True)
model.eval()

with torch.no_grad():
    input = torch.randn(5, 3, 10)
    h0 = torch.randn(layer_count * 2, 3, 20)
    c0 = torch.randn(layer_count * 2, 3, 20)
    output, (hn, cn) = model(input, (h0, c0))

    # default export
    torch.onnx.export(model, (input, (h0, c0)), 'lstm.onnx')
    onnx_model = onnx.load('lstm.onnx')
    # input shape [5, 3, 10]
    print(onnx_model.graph.input[0])

    # export with `dynamic_axes`
    torch.onnx.export(model, (input, (h0, c0)), 'lstm.onnx',
                      input_names=['input', 'h0', 'c0'],
                      output_names=['output', 'hn', 'cn'],
                      dynamic_axes={'input': {0: 'sequence'}, 'output': {0: 'sequence'}})
    onnx_model = onnx.load('lstm.onnx')
    # input shape ['sequence', 3, 10]
    print(onnx_model.graph.input[0])
```

Q: How do I export models with loops in them?

Please check out [Tracing vs Scripting](#tracing-vs-scripting).

Q: Does ONNX support implicit scalar datatype casting?

No, but the exporter will try to handle that part. Scalars are converted to constant tensors in ONNX. The exporter will try to figure out the right datatype for scalars. However, for cases where it fails to do so, you will need to manually provide the datatype information. This often happens with scripted models, where the datatypes are not recorded. We are trying to improve the datatype propagation in the exporter such that manual changes are not required in the future.

```
class ImplicitCastType(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        # Exporter knows x is float32, will export '2' as float32 as well.
        y = x + 2
        # Without type propagation, exporter doesn't know the datatype of y.
        # Thus '3' is exported as int64 by default.
        return y + 3
        # The following will export correctly.
Q: Does ONNX support implicit scalar datatype casting?

No, but the exporter will try to handle that part. Scalars are converted to constant tensors in ONNX, and the exporter will try to figure out the right datatype for them. For cases where it fails to do so, you will need to provide the datatype information manually. This often happens with scripted models, where the datatypes are not recorded. We are trying to improve datatype propagation in the exporter such that manual changes are not required in the future.

```
import torch


class ImplicitCastType(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        # Exporter knows x is float32, will export '2' as float32 as well.
        y = x + 2
        # Without type propagation, exporter doesn't know the datatype of y.
        # Thus '3' is exported as int64 by default.
        return y + 3
        # The following will export correctly.
        # return y + torch.tensor([3], dtype=torch.float32)

x = torch.tensor([1.0], dtype=torch.float32)
torch.onnx.export(ImplicitCastType(), x, 'models/implicit_cast.onnx',
                  example_outputs=ImplicitCastType()(x))
```

Q: Is tensor in-place indexed assignment like `data[index] = new_data` supported?

Yes, this is supported for ONNX opset version >= 11. Please check out [Indexing](#indexing).

Q: Is a tensor list exportable to ONNX?

Yes, this is supported for ONNX opset version >= 11. ONNX introduced the concept of Sequence in opset 11. Similar to a list, a Sequence is a data type that contains an arbitrary number of Tensors, and associated operators such as SequenceInsert and SequenceAt were also introduced in ONNX. However, in-place list append within loops is not exportable to ONNX. To implement this, please use the in-place add operator, e.g.:

```
import torch


class ListLoopModel(torch.nn.Module):
    def forward(self, x):
        res = []
        res1 = []
        arr = x.split(2, 0)
        res2 = torch.zeros(3, 4, dtype=torch.long)
        for i in range(len(arr)):
            res += [arr[i].sum(0, False)]
            res1 += [arr[-1 - i].sum(0, False)]
            res2 += 1
        return torch.stack(res), torch.stack(res1), res2

model = torch.jit.script(ListLoopModel())
inputs = torch.randn(16)
out = model(inputs)
torch.onnx.export(model, (inputs, ), 'loop_and_list.onnx', opset_version=11,
                  example_outputs=out)

# onnxruntime
import onnxruntime
sess = onnxruntime.InferenceSession('loop_and_list.onnx')
out_ort = sess.run(None, {
    sess.get_inputs()[0].name: inputs.numpy(),
})
# all() so that a single mismatch actually fails the assertion
assert all(torch.allclose(o, torch.tensor(o_ort)) for o, o_ort in zip(out, out_ort))
```

Use external data format
------------------------

The `use_external_data_format` argument in the export API enables export of models in the ONNX external data format. With this option enabled, the exporter stores some model parameters in external binary files rather than in the ONNX file itself. These external binary files are stored in the same location as the ONNX file. The argument 'f' must be a string specifying the location of the model.

```
import torch
import torchvision

model = torchvision.models.mobilenet_v2(pretrained=True)
input = torch.randn(2, 3, 224, 224, requires_grad=True)
torch.onnx.export(model, (input, ), './large_model.onnx', use_external_data_format=True)
```

This argument enables export of large models to ONNX. Models larger than 2GB cannot be exported in one file because of the protobuf size limit; set `use_external_data_format` to `True` to successfully export such models.

Training
--------

The `training` argument in the export API allows users to export models in a training-friendly mode. `TrainingMode.TRAINING` exports the model in a training-friendly mode that avoids certain model optimizations which might interfere with model parameter training. `TrainingMode.PRESERVE` exports the model in inference mode if `model.training` is `False`; otherwise, it exports the model in a training-friendly mode. The default for this argument is `TrainingMode.EVAL`, which exports the model in inference mode.
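A minimal sketch, again assuming `model` and `dummy_input` are placeholders for your own module and example input:

```
import torch

# Keep dropout/batch-norm in training behavior in the exported graph.
torch.onnx.export(model, dummy_input, 'model_train.onnx',
                  training=torch.onnx.TrainingMode.TRAINING)
```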
Functions
---------

`torch.onnx.export(model, args, f, export_params=True, verbose=False, training=<TrainingMode.EVAL: 0>, input_names=None, output_names=None, aten=False, export_raw_ir=False, operator_export_type=None, opset_version=None, _retain_param_name=True, do_constant_folding=True, example_outputs=None, strip_doc_string=True, dynamic_axes=None, keep_initializers_as_inputs=None, custom_opsets=None, enable_onnx_checker=True, use_external_data_format=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/onnx.html#export)

Export a model into ONNX format. This exporter runs your model once in order to get a trace of its execution to be exported; at the moment, it supports a limited set of dynamic models (e.g., RNNs).

Parameters

* **model** ([torch.nn.Module](generated/torch.nn.module#torch.nn.Module "torch.nn.Module")) – the model to be exported.
* **args** (*tuple of arguments* *or* [torch.Tensor](tensors#torch.Tensor "torch.Tensor")*,* *optionally ending with a dictionary of named arguments*) – args can be structured in either of two ways:

  1. ONLY A TUPLE OF ARGUMENTS or a torch.Tensor:

  ```
  args = (x, y, z)
  ```

  The inputs to the model, e.g., such that `model(*args)` is a valid invocation of the model. Any non-Tensor arguments will be hard-coded into the exported model; any Tensor arguments will become inputs of the exported model, in the order they occur in args. If args is a Tensor, this is equivalent to having called it with a 1-ary tuple of that Tensor.

  2. A TUPLE OF ARGUMENTS WITH A DICTIONARY OF NAMED PARAMETERS:

  ```
  args = (x, {'y': input_y, 'z': input_z})
  ```

  The inputs to the model are structured as a tuple of non-keyword arguments, with the last value of the tuple being a dictionary of named parameters and the corresponding inputs as key-value pairs (KEY: str, the named parameter; VALUE: the corresponding input). If a certain named argument is not present in the dictionary, it is assigned the default value, or None if a default value is not provided.

  Cases in which a dictionary input is the last input of the args tuple cause a conflict when a dictionary of named parameters is used. The model below provides such an example:

  ```
  class Model(torch.nn.Module):
      def forward(self, k, x):
          ...
          return x

  m = Model()
  k = torch.randn(2, 3)
  x = {torch.tensor(1.): torch.randn(2, 3)}
  ```

  Previously, the call to the export API would look like `torch.onnx.export(model, (k, x), 'test.onnx')`, and this would work as intended. However, the export function now assumes that the last input is intended to represent the optional dictionary of named arguments. To prevent this from being an issue, a constraint is placed: in such cases, provide an empty dictionary as the last input in the args tuple. The new call looks like `torch.onnx.export(model, (k, x, {}), 'test.onnx')`.

* **f** – a file-like object (has to implement fileno that returns a file descriptor) or a string containing a file name. A binary Protobuf will be written to this file.
* **export\_params** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default True*) – if specified, all parameters will be exported. Set this to False if you want to export an untrained model. In this case, the exported model will first take all of its parameters as arguments, with the ordering as specified by `model.state_dict().values()`
* **verbose** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default False*) – if specified, we will print out a debug description of the trace being exported.
* **training** (*enum**,* *default TrainingMode.EVAL*) – TrainingMode.EVAL: export the model in inference mode. TrainingMode.PRESERVE: export the model in inference mode if model.training is False and in a training-friendly mode if model.training is True. TrainingMode.TRAINING: export the model in a training-friendly mode.
* **input\_names** (*list of strings**,* *default empty list*) – names to assign to the input nodes of the graph, in order
* **output\_names** (*list of strings**,* *default empty list*) – names to assign to the output nodes of the graph, in order
* **aten** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default False*) – [DEPRECATED. use operator\_export\_type] export the model in aten mode. If using aten mode, all the ops originally exported by the functions in symbolic\_opset<version>.py are exported as ATen ops.
* **export\_raw\_ir** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default False*) – [DEPRECATED. use operator\_export\_type] export the internal IR directly instead of converting it to ONNX ops.
* **operator\_export\_type** (*enum**,* *default OperatorExportTypes.ONNX*) – OperatorExportTypes.ONNX: All ops are exported as regular ONNX ops (with ONNX namespace). OperatorExportTypes.ONNX\_ATEN: All ops are exported as ATen ops (with aten namespace). OperatorExportTypes.ONNX\_ATEN\_FALLBACK: If an ATen op is not supported in ONNX or its symbolic is missing, fall back on the ATen op. Registered ops are exported to ONNX regularly. Example graph:

  ```
  graph(%0 : Float):
    %3 : int = prim::Constant[value=0]()
    %4 : Float = aten::triu(%0, %3)  # missing op
    %5 : Float = aten::mul(%4, %0)  # registered op
    return (%5)
  ```

  is exported as:

  ```
  graph(%0 : Float):
    %1 : Long() = onnx::Constant[value={0}]()
    %2 : Float = aten::ATen[operator="triu"](%0, %1)  # missing op
    %3 : Float = onnx::Mul(%2, %0)  # registered op
    return (%3)
  ```

  In the above example, aten::triu is not supported in ONNX, hence the exporter falls back to the ATen op. OperatorExportTypes.RAW: Export the raw IR. OperatorExportTypes.ONNX\_FALLTHROUGH: If an op is not supported in ONNX, fall through and export the operator as is, as a custom ONNX op. Using this mode, the op can be exported and implemented by the user for their runtime backend. Example graph:

  ```
  graph(%x.1 : Long(1, strides=[1])):
    %1 : None = prim::Constant()
    %2 : Tensor = aten::sum(%x.1, %1)
    %y.1 : Tensor[] = prim::ListConstruct(%2)
    return (%y.1)
  ```

  is exported as:

  ```
  graph(%x.1 : Long(1, strides=[1])):
    %1 : Tensor = onnx::ReduceSum[keepdims=0](%x.1)
    %y.1 : Long() = prim::ListConstruct(%1)
    return (%y.1)
  ```

  In the above example, prim::ListConstruct is not supported, hence the exporter falls through.

* **opset\_version** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *default is 9*) – by default we export the model to the opset version of the onnx submodule. Since ONNX's latest opset may evolve before the next stable release, by default we export to one stable opset version. Right now, the supported stable opset version is 9. The opset\_version must be \_onnx\_main\_opset or in \_onnx\_stable\_opsets, which are defined in torch/onnx/symbolic\_helper.py.
* **do\_constant\_folding** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default True*) – If True, the constant-folding optimization is applied to the model during export. Constant-folding optimization will replace some of the ops that have all constant inputs with pre-computed constant nodes.
* **example\_outputs** (*tuple of Tensors**,* *default None*) – Model's example outputs being exported. example\_outputs must be provided when exporting a ScriptModule or TorchScript Function.
* **strip\_doc\_string** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default True*) – if True, strips the field "doc\_string" from the exported model, which contains information about the stack trace.
* **dynamic\_axes** (*dict<string**,* *dict<python:int**,* *string>>* *or* *dict<string**,* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)")*(*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*)**>**,* *default empty dict*) – a dictionary to specify dynamic axes of input/output, such that: KEY: input and/or output names; VALUE: indices of dynamic axes for the given key, and potentially the names to be used for the exported dynamic axes. In general the value is defined in one of the following ways, or a combination of both: (1) a list of integers specifying the dynamic axes of the provided input, in which case automated names are generated and applied to the dynamic axes of that input/output during export; or (2) an inner dictionary that maps the index of a dynamic axis in the corresponding input/output to the name to be applied to that axis during export. For example, if we have the following shapes for inputs and outputs:

  ```
  shape(input_1) = ('b', 3, 'w', 'h')
  and shape(input_2) = ('b', 4)
  and shape(output) = ('b', 'd', 5)
  ```

  Then `dynamic_axes` can be defined either as:

  1. ONLY INDICES:

  ```
  dynamic_axes = {'input_1': [0, 2, 3],
                  'input_2': [0],
                  'output': [0, 1]}
  ```

  where automatic names will be generated for the exported dynamic axes.

  2. INDICES WITH CORRESPONDING NAMES:

  ```
  dynamic_axes = {'input_1': {0: 'batch', 2: 'width', 3: 'height'},
                  'input_2': {0: 'batch'},
                  'output': {0: 'batch', 1: 'detections'}}
  ```

  where the provided names will be applied to the exported dynamic axes.

  3. MIXED MODE OF (1) and (2):

  ```
  dynamic_axes = {'input_1': [0, 2, 3],
                  'input_2': {0: 'batch'},
                  'output': [0, 1]}
  ```

  A consolidated export sketch using `dynamic_axes` appears at the end of this section.

* **keep\_initializers\_as\_inputs** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default None*) – If True, all the initializers (typically corresponding to parameters) in the exported graph will also be added as inputs to the graph. If False, then initializers are not added as inputs to the graph, and only the non-parameter inputs are added as inputs. This may allow for better optimizations (such as constant folding, etc.) by backends/runtimes that execute these graphs. If unspecified (default None), then the behavior is chosen automatically as follows: if operator\_export\_type is OperatorExportTypes.ONNX, the behavior is equivalent to setting this argument to False; for other values of operator\_export\_type, it is equivalent to setting this argument to True. Note that for ONNX opset version < 9, initializers MUST be part of the graph inputs. Therefore, if the opset\_version argument is set to 8 or lower, this argument will be ignored.
* **custom\_opsets** (*dict<string**,* *int>**,* *default empty dict*) – A dictionary to indicate the custom opset domain and version at export. If the model contains a custom opset, it is optional to specify the domain and opset version in the dictionary: KEY: opset domain name; VALUE: opset version. If the custom opset is not provided in this dictionary, the opset version is set to 1 by default.
* **enable\_onnx\_checker** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default True*) – If True, the ONNX model checker will be run as part of the export, to ensure the exported model is a valid ONNX model.
* **use\_external\_data\_format** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *default False*) – If True, the model is exported in ONNX external data format, in which case some of the model parameters are stored in external binary files and not in the ONNX model file itself. See the format details here: <https://github.com/onnx/onnx/blob/8b3f7e2e7a0f2aba0e629e23d89f07c7fc0e6a5e/onnx/onnx.proto#L423>. In this case, argument 'f' must be a string specifying the location of the model; the external binary files will be stored in the same location specified by 'f'. If False, the model is stored in the regular format, i.e. model and parameters are all in one file. This argument is ignored for all export types other than ONNX.

`torch.onnx.export_to_pretty_string(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/onnx.html#export_to_pretty_string)

`torch.onnx.register_custom_op_symbolic(symbolic_name, symbolic_fn, opset_version)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/onnx.html#register_custom_op_symbolic)

`torch.onnx.operators.shape_as_tensor(x)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/onnx/operators.html#shape_as_tensor)

`torch.onnx.select_model_mode_for_export(model, mode)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/onnx.html#select_model_mode_for_export)

A context manager to temporarily set the training mode of 'model' to 'mode', resetting it when we exit the with-block. A no-op if mode is None. Changed in version 1.6: renamed from set\_training.

`torch.onnx.is_in_onnx_export()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/onnx.html#is_in_onnx_export)

Check whether we are in the middle of an ONNX export. This function returns True in the middle of torch.onnx.export(). torch.onnx.export should be executed with a single thread.
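To pull several of the arguments above together, here is a minimal end-to-end sketch; the model choice and the names 'input'/'logits' are illustrative, not part of the API:

```
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, 'resnet18.onnx',
    opset_version=11,
    do_constant_folding=True,
    input_names=['input'],
    output_names=['logits'],
    # Let the batch dimension vary at inference time.
    dynamic_axes={'input': {0: 'batch'}, 'logits': {0: 'batch'}},
)
```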
pytorch Type Info

Type Info
=========

The numerical properties of a [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") can be accessed through either [`torch.finfo`](#torch.torch.finfo "torch.torch.finfo") or [`torch.iinfo`](#torch.torch.iinfo "torch.torch.iinfo").

torch.finfo
-----------

`class torch.finfo`

A [`torch.finfo`](#torch.torch.finfo "torch.torch.finfo") is an object that represents the numerical properties of a floating point [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") (i.e. `torch.float32`, `torch.float64`, and `torch.float16`). This is similar to [numpy.finfo](https://docs.scipy.org/doc/numpy/reference/generated/numpy.finfo.html).

A [`torch.finfo`](#torch.torch.finfo "torch.torch.finfo") provides the following attributes:

| Name | Type | Description |
| --- | --- | --- |
| bits | int | The number of bits occupied by the type. |
| eps | float | The smallest representable number such that `1.0 + eps != 1.0`. |
| max | float | The largest representable number. |
| min | float | The smallest representable number (typically `-max`). |
| tiny | float | The smallest positive representable number. |
| resolution | float | The approximate decimal resolution of this type, i.e., `10**-precision`. |

Note

The constructor of [`torch.finfo`](#torch.torch.finfo "torch.torch.finfo") can be called without argument, in which case the class is created for the pytorch default dtype (as returned by [`torch.get_default_dtype()`](generated/torch.get_default_dtype#torch.get_default_dtype "torch.get_default_dtype")).

torch.iinfo
-----------

`class torch.iinfo`

A [`torch.iinfo`](#torch.torch.iinfo "torch.torch.iinfo") is an object that represents the numerical properties of an integer [`torch.dtype`](tensor_attributes#torch.torch.dtype "torch.torch.dtype") (i.e. `torch.uint8`, `torch.int8`, `torch.int16`, `torch.int32`, and `torch.int64`). This is similar to [numpy.iinfo](https://docs.scipy.org/doc/numpy/reference/generated/numpy.iinfo.html).

A [`torch.iinfo`](#torch.torch.iinfo "torch.torch.iinfo") provides the following attributes:

| Name | Type | Description |
| --- | --- | --- |
| bits | int | The number of bits occupied by the type. |
| max | int | The largest representable number. |
| min | int | The smallest representable number. |
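A quick illustration of both classes; the printed values are the standard IEEE figures for these types:

```
import torch

fi = torch.finfo(torch.float32)
print(fi.bits, fi.eps)          # 32 1.1920928955078125e-07
print(fi.max == -fi.min)        # True

ii = torch.iinfo(torch.int8)
print(ii.bits, ii.min, ii.max)  # 8 -128 127
```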
pytorch torch.utils.mobile_optimizer

torch.utils.mobile\_optimizer
=============================

Warning

This API is in beta and may change in the near future.

Torch mobile supports the `torch.utils.mobile_optimizer.optimize_for_mobile` utility, which runs a list of optimization passes on modules in eval mode. The method takes the following parameters: a torch.jit.ScriptModule object, a blocklisting optimization set, and a preserved method list.

By default, if the optimization blocklist is None or empty, `optimize_for_mobile` will run the following optimizations:

* **Conv2D + BatchNorm fusion** (blocklisting option `MobileOptimizerType::CONV_BN_FUSION`): This optimization pass folds `Conv2d-BatchNorm2d` into `Conv2d` in the `forward` method of this module and all its submodules. The weight and bias of the `Conv2d` are correspondingly updated.
* **Insert and Fold prepacked ops** (blocklisting option `MobileOptimizerType::INSERT_FOLD_PREPACK_OPS`): This optimization pass rewrites the graph to replace 2D convolutions and linear ops with their prepacked counterparts. Prepacked ops are stateful ops in that they require some state to be created, such as weight prepacking, and use this state, i.e. prepacked weights, during op execution. XNNPACK is one such backend that provides prepacked ops, with kernels optimized for mobile platforms (such as ARM CPUs). Prepacking of weights enables efficient memory access and thus faster kernel execution. At the moment this `optimize_for_mobile` pass rewrites the graph to replace `Conv2D/Linear` with 1) an op that pre-packs weight for XNNPACK conv2d/linear ops and 2) an op that takes the pre-packed weight and activation as input and generates output activations. Since step 1 needs to be done only once, we fold the weight pre-packing so that it happens just once, at model load time. This pass of `optimize_for_mobile` performs steps 1 and 2 and then folds, i.e. removes, the weight pre-packing ops.
* **ReLU/Hardtanh fusion**: XNNPACK ops support fusion of clamping. That is, clamping of the output activation is done as part of the kernel, including for 2D convolution and linear op kernels, so clamping effectively comes for free. Any op that can be expressed as a clamping op, such as `ReLU` or `hardtanh`, can therefore be fused with the previous `Conv2D` or `linear` op in XNNPACK. This pass rewrites the graph by finding `ReLU`/`hardtanh` ops that follow the XNNPACK `Conv2D`/`linear` ops written by the previous pass, and fuses them together.
* **Dropout removal** (blocklisting option `MobileOptimizerType::REMOVE_DROPOUT`): This optimization pass removes `dropout` and `dropout_` nodes from this module when training is false.
* **Conv packed params hoisting** (blocklisting option `MobileOptimizerType::HOIST_CONV_PACKED_PARAMS`): This optimization pass moves convolution packed params to the root module, so that the convolution structs can be deleted. This decreases model size without impacting numerics.

`optimize_for_mobile` will also invoke the freeze\_module pass, which only preserves the `forward` method. If you have other methods that need to be preserved, add them to the preserved method list passed into the function.

`torch.utils.mobile_optimizer.optimize_for_mobile(script_module, optimization_blocklist=None, preserved_methods=None, backend='CPU')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/utils/mobile_optimizer.html#optimize_for_mobile)

Parameters

* **script\_module** – An instance of a torch script module with type ScriptModule.
* **optimization\_blocklist** – A set with type MobileOptimizerType. When the set is not passed, the optimization method will run all optimization passes; otherwise, it will run the optimization passes that are not included in optimization\_blocklist.
* **preserved\_methods** – A list of methods that need to be preserved when the freeze\_module pass is invoked
* **backend** – Device type to use for running the result model ('CPU' (default), 'Vulkan' or 'Metal').

Returns

A new optimized torch script module
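A minimal sketch of the workflow, where `MyModel` is a placeholder for your own module:

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.jit.script(MyModel().eval())  # MyModel is hypothetical
optimized = optimize_for_mobile(model)      # runs all default passes
torch.jit.save(optimized, 'model_mobile.pt')
```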
pytorch torch.nn.qat

torch.nn.qat
============

This module implements versions of the key nn modules **Conv2d()** and **Linear()** which run in FP32 but with rounding applied to simulate the effect of INT8 quantization.

Conv2d
------

`class torch.nn.qat.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/qat/modules/conv.html#Conv2d)

A Conv2d module attached with FakeQuantize modules for weight, used for quantization aware training. We adopt the same interface as `torch.nn.Conv2d`; please see <https://pytorch.org/docs/stable/nn.html?highlight=conv2d#torch.nn.Conv2d> for documentation. Similar to `torch.nn.Conv2d`, with FakeQuantize modules initialized to default.

Variables

**~Conv2d.weight\_fake\_quant** – fake quant module for weight

`classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/qat/modules/conv.html#Conv2d.from_float)

Create a qat module from a float module or qparams\_dict.

Parameters

**mod** – a float module, either produced by torch.quantization utilities or directly from the user

Linear
------

`class torch.nn.qat.Linear(in_features, out_features, bias=True, qconfig=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/qat/modules/linear.html#Linear)

A linear module attached with FakeQuantize modules for weight, used for quantization aware training. We adopt the same interface as `torch.nn.Linear`; please see <https://pytorch.org/docs/stable/nn.html#torch.nn.Linear> for documentation. Similar to `torch.nn.Linear`, with FakeQuantize modules initialized to default.

Variables

**~Linear.weight** – fake quant module for weight

`classmethod from_float(mod)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/qat/modules/linear.html#Linear.from_float)

Create a qat module from a float module or qparams\_dict.

Parameters

**mod** – a float module, either produced by torch.quantization utilities or directly from the user
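A hedged sketch of using the QAT Linear module directly; normally these modules are swapped in by `torch.quantization.prepare_qat` rather than constructed by hand, and the layer sizes here are arbitrary:

```
import torch
import torch.nn.qat as nnqat
from torch.quantization import get_default_qat_qconfig

qconfig = get_default_qat_qconfig('fbgemm')
layer = nnqat.Linear(16, 8, qconfig=qconfig)  # FP32 weight + FakeQuantize attached
x = torch.randn(4, 16)
y = layer(x)  # forward runs in FP32 with simulated INT8 rounding on the weight
print(y.shape)  # torch.Size([4, 8])
```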
pytorch torch.allclose

torch.allclose
==============

`torch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) → bool`

This function checks if `input` and `other` satisfy the condition:

\lvert \text{input} - \text{other} \rvert \leq \texttt{atol} + \texttt{rtol} \times \lvert \text{other} \rvert

elementwise, for all elements of `input` and `other`. The behaviour of this function is analogous to [numpy.allclose](https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html).

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – first tensor to compare
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – second tensor to compare
* **atol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – absolute tolerance. Default: 1e-08
* **rtol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – relative tolerance. Default: 1e-05
* **equal\_nan** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, then two `NaN` s will be considered equal. Default: `False`

Example:

```
>>> torch.allclose(torch.tensor([10000., 1e-07]), torch.tensor([10000.1, 1e-08]))
False
>>> torch.allclose(torch.tensor([10000., 1e-08]), torch.tensor([10000.1, 1e-09]))
True
>>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]))
False
>>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]), equal_nan=True)
True
```

pytorch SoftMarginLoss

SoftMarginLoss
==============

`class torch.nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#SoftMarginLoss)

Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1).

\text{loss}(x, y) = \sum\_i \frac{\log(1 + \exp(-y[i] \* x[i]))}{\text{x.nelement}()}

Parameters

* **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True`
* **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
* **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'`

Shape:

* Input: (\*), where \* means any number of additional dimensions
* Target: (\*), same shape as the input
* Output: scalar. If `reduction` is `'none'`, then same shape as the input

pytorch UninitializedParameter

UninitializedParameter
======================

`class torch.nn.parameter.UninitializedParameter` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parameter.html#UninitializedParameter)

A parameter that is not initialized. Uninitialized parameters are a special case of `torch.nn.Parameter` where the shape of the data is still unknown. Unlike a `torch.nn.Parameter`, uninitialized parameters hold no data, and attempting to access some properties, like their shape, will throw a runtime error. The only operations that can be performed on an uninitialized parameter are changing its datatype, moving it to a different device and converting it to a regular `torch.nn.Parameter`.

`materialize(shape, device=None, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parameter.html#UninitializedParameter.materialize)

Create a Parameter with the same properties as the uninitialized one. Given a shape, it materializes a parameter on the same device and with the same `dtype` as the current one, or with the ones specified in the arguments.

Parameters

* **shape** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the shape for the materialized tensor.
* **device** (`torch.device`) – the desired device of the parameters and buffers in this module. Optional.
* **dtype** (`torch.dtype`) – the desired floating point type of the floating point parameters and buffers in this module. Optional.
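A hedged sketch of manual materialization; lazy modules normally call `materialize` for you on the first forward pass, so doing it by hand is mainly illustrative:

```
import torch
from torch.nn.parameter import UninitializedParameter

p = UninitializedParameter()
# Accessing p.shape here would raise -- the parameter holds no data yet.
p.materialize((4, 3))  # allocates data; p now behaves like a regular Parameter
print(p.shape)         # torch.Size([4, 3])
```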
pytorch torch.unsqueeze

torch.unsqueeze
===============

`torch.unsqueeze(input, dim) → Tensor`

Returns a new tensor with a dimension of size one inserted at the specified position. The returned tensor shares the same underlying data with this tensor. A `dim` value within the range `[-input.dim() - 1, input.dim() + 1)` can be used. A negative `dim` corresponds to [`unsqueeze()`](#torch.unsqueeze "torch.unsqueeze") applied at `dim` = `dim + input.dim() + 1`.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the index at which to insert the singleton dimension

Example:

```
>>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[ 1, 2, 3, 4]])
>>> torch.unsqueeze(x, 1)
tensor([[ 1],
        [ 2],
        [ 3],
        [ 4]])
```

pytorch torch.log10

torch.log10
===========

`torch.log10(input, *, out=None) → Tensor`

Returns a new tensor with the logarithm to the base 10 of the elements of `input`.

y\_{i} = \log\_{10} (x\_{i})

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.rand(5)
>>> a
tensor([ 0.5224, 0.9354, 0.7257, 0.1301, 0.2251])
>>> torch.log10(a)
tensor([-0.2820, -0.0290, -0.1392, -0.8857, -0.6476])
```

pytorch torch.result_type

torch.result\_type
==================

`torch.result_type(tensor1, tensor2) → dtype`

Returns the [`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype") that would result from performing an arithmetic operation on the provided input tensors. See type promotion [documentation](../tensor_attributes#type-promotion-doc) for more information on the type promotion logic.

Parameters

* **tensor1** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* *Number*) – an input tensor or number
* **tensor2** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* *Number*) – an input tensor or number

Example:

```
>>> torch.result_type(torch.tensor([1, 2], dtype=torch.int), 1.0)
torch.float32
>>> torch.result_type(torch.tensor([1, 2], dtype=torch.uint8), torch.tensor(1))
torch.uint8
```

pytorch torch.atleast_3d

torch.atleast\_3d
=================

`torch.atleast_3d(*tensors)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#atleast_3d)

Returns a 3-dimensional view of each input tensor with zero dimensions. Input tensors with three or more dimensions are returned as-is.

Parameters **input** (Tensor or list of Tensors) – the tensor(s) to reshape

Returns output (Tensor or tuple of Tensors)

#### Example

```
>>> x = torch.tensor(0.5)
>>> x
tensor(0.5000)
>>> torch.atleast_3d(x)
tensor([[[0.5000]]])
>>> y = torch.randn(2,2)
>>> y
tensor([[-0.8079, 0.7460],
        [-1.1647, 1.4734]])
>>> torch.atleast_3d(y)
tensor([[[-0.8079],
         [ 0.7460]],

        [[-1.1647],
         [ 1.4734]]])
>>> x = torch.randn(1,1,1)
>>> x
tensor([[[-1.5689]]])
>>> torch.atleast_3d(x)
tensor([[[-1.5689]]])
>>> x = torch.tensor(0.5)
>>> y = torch.tensor(1.)
>>> torch.atleast_3d((x,y))
(tensor([[[0.5000]]]), tensor([[[1.]]]))
```

pytorch torch.full_like

torch.full\_like
================

`torch.full_like(input, fill_value, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor`

Returns a tensor with the same size as `input` filled with `fill_value`. `torch.full_like(input, fill_value)` is equivalent to `torch.full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device)`.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the size of `input` will determine the size of the output tensor.
* **fill\_value** – the number to fill the output tensor with.

Keyword Arguments

* **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned Tensor.
Default: if `None`, defaults to the dtype of `input`.
* **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned tensor. Default: if `None`, defaults to the layout of `input`.
* **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, defaults to the device of `input`.
* **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`.
* **memory\_format** ([`torch.memory_format`](../tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`.

pytorch torch.isposinf

torch.isposinf
==============

`torch.isposinf(input, *, out=None) → Tensor`

Tests if each element of `input` is positive infinity or not.

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.tensor([-float('inf'), float('inf'), 1.2])
>>> torch.isposinf(a)
tensor([False, True, False])
```

pytorch CustomFromMask

CustomFromMask
==============

`class torch.nn.utils.prune.CustomFromMask(mask)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#CustomFromMask)

`classmethod apply(module, name, mask)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#CustomFromMask.apply)

Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask.

Parameters

* **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune
* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – parameter name within `module` on which pruning will act.

`apply_mask(module)`

Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor.

Parameters **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune

Returns pruned version of the input tensor

Return type pruned\_tensor ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor"))

`prune(t, default_mask=None, importance_scores=None)`

Computes and returns a pruned version of input tensor `t` according to the pruning rule specified in `compute_mask()`.

Parameters

* **t** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor to prune (of same dimensions as `default_mask`).
* **importance\_scores** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor of importance scores (of same shape as `t`) used to compute mask for pruning `t`. The values in this tensor indicate the importance of the corresponding elements in the `t` that is being pruned. If unspecified or None, the tensor `t` will be used in its place.
* **default\_mask** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones.
Returns pruned version of tensor `t`. `remove(module)` Removes the pruning reparameterization from a module. The pruned parameter named `name` remains permanently pruned, and the parameter named `name+'_orig'` is removed from the parameter list. Similarly, the buffer named `name+'_mask'` is removed from the buffers. Note Pruning itself is NOT undone or reversed!
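Rather than instantiating the class directly, the functional form is the usual entry point. A minimal sketch; the Linear layer and mask values are hypothetical:

```
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(3, 2)
mask = torch.tensor([[1., 0., 1.],
                     [0., 1., 1.]])
prune.custom_from_mask(layer, name='weight', mask=mask)
print(layer.weight)            # zeros exactly where mask is 0
prune.remove(layer, 'weight')  # make the pruning permanent
```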
pytorch Identity

Identity
========

`class torch.nn.utils.prune.Identity` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#Identity)

Utility pruning method that does not prune any units but generates the pruning parametrization with a mask of ones.

`classmethod apply(module, name)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#Identity.apply)

Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask.

Parameters

* **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune
* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – parameter name within `module` on which pruning will act.

`apply_mask(module)`

Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor.

Parameters **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune

Returns pruned version of the input tensor

Return type pruned\_tensor ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor"))

`prune(t, default_mask=None, importance_scores=None)`

Computes and returns a pruned version of input tensor `t` according to the pruning rule specified in `compute_mask()`.

Parameters

* **t** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor to prune (of same dimensions as `default_mask`).
* **importance\_scores** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor of importance scores (of same shape as `t`) used to compute mask for pruning `t`. The values in this tensor indicate the importance of the corresponding elements in the `t` that is being pruned. If unspecified or None, the tensor `t` will be used in its place.
* **default\_mask** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones.

Returns pruned version of tensor `t`.

`remove(module)`

Removes the pruning reparameterization from a module. The pruned parameter named `name` remains permanently pruned, and the parameter named `name+'_orig'` is removed from the parameter list. Similarly, the buffer named `name+'_mask'` is removed from the buffers.

Note

Pruning itself is NOT undone or reversed!

pytorch torch.jit.load

torch.jit.load
==============

`torch.jit.load(f, map_location=None, _extra_files=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/jit/_serialization.html#load)

Load a [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") or [`ScriptFunction`](torch.jit.scriptfunction#torch.jit.ScriptFunction "torch.jit.ScriptFunction") previously saved with [`torch.jit.save`](torch.jit.save#torch.jit.save "torch.jit.save"). All previously saved modules, no matter their device, are first loaded onto CPU, and then are moved to the devices they were saved from. If this fails (e.g. because the runtime system doesn't have certain devices), an exception is raised.
Parameters

* **f** – a file-like object (has to implement read, readline, tell, and seek), or a string containing a file name
* **map\_location** (*string* *or* [torch.device](../tensor_attributes#torch.torch.device "torch.torch.device")) – A simplified version of `map_location` in `torch.jit.save` used to dynamically remap storages to an alternative set of devices.
* **\_extra\_files** (*dictionary of filename to content*) – The extra filenames given in the map would be loaded and their content would be stored in the provided map.

Returns

A [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") object.

Example:

```
import torch
import io

torch.jit.load('scriptmodule.pt')

# Load ScriptModule from io.BytesIO object
with open('scriptmodule.pt', 'rb') as f:
    buffer = io.BytesIO(f.read())

# Load all tensors to the original device
torch.jit.load(buffer)

# Load all tensors onto CPU, using a device
buffer.seek(0)
torch.jit.load(buffer, map_location=torch.device('cpu'))

# Load all tensors onto CPU, using a string
buffer.seek(0)
torch.jit.load(buffer, map_location='cpu')

# Load with extra files.
extra_files = {'foo.txt': ''}  # values will be replaced with data
torch.jit.load('scriptmodule.pt', _extra_files=extra_files)
print(extra_files['foo.txt'])
```

pytorch torch.triu

torch.triu
==========

`torch.triu(input, diagonal=0, *, out=None) → Tensor`

Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices `input`; the other elements of the result tensor `out` are set to 0. The upper triangular part of the matrix is defined as the elements on and above the diagonal.

The argument [`diagonal`](torch.diagonal#torch.diagonal "torch.diagonal") controls which diagonal to consider. If [`diagonal`](torch.diagonal#torch.diagonal "torch.diagonal") = 0, all elements on and above the main diagonal are retained. A positive value excludes just as many diagonals above the main diagonal, and similarly a negative value includes just as many diagonals below the main diagonal. The main diagonal is the set of indices \lbrace (i, i) \rbrace for i \in [0, \min\{d\_{1}, d\_{2}\} - 1], where d\_{1}, d\_{2} are the dimensions of the matrix.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **diagonal** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the diagonal to consider

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
Example:

```
>>> a = torch.randn(3, 3)
>>> a
tensor([[ 0.2309, 0.5207, 2.0049],
        [ 0.2072, -1.0680, 0.6602],
        [ 0.3480, -0.5211, -0.4573]])
>>> torch.triu(a)
tensor([[ 0.2309, 0.5207, 2.0049],
        [ 0.0000, -1.0680, 0.6602],
        [ 0.0000, 0.0000, -0.4573]])
>>> torch.triu(a, diagonal=1)
tensor([[ 0.0000, 0.5207, 2.0049],
        [ 0.0000, 0.0000, 0.6602],
        [ 0.0000, 0.0000, 0.0000]])
>>> torch.triu(a, diagonal=-1)
tensor([[ 0.2309, 0.5207, 2.0049],
        [ 0.2072, -1.0680, 0.6602],
        [ 0.0000, -0.5211, -0.4573]])
>>> b = torch.randn(4, 6)
>>> b
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
        [-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
        [ 0.4333, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
        [-0.9888, 1.0679, -1.3337, -1.6556, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=1)
tensor([[ 0.0000, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
        [ 0.0000, 0.0000, -1.2919, 1.3378, -0.1768, -1.0857],
        [ 0.0000, 0.0000, 0.0000, -1.0432, 0.9348, -0.4410],
        [ 0.0000, 0.0000, 0.0000, 0.0000, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=-1)
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
        [-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
        [ 0.0000, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
        [ 0.0000, 0.0000, -1.3337, -1.6556, 0.4798, 0.2830]])
```

pytorch torch.not_equal

torch.not\_equal
================

`torch.not_equal(input, other, *, out=None) → Tensor`

Alias for [`torch.ne()`](torch.ne#torch.ne "torch.ne").

pytorch KLDivLoss

KLDivLoss
=========

`class torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#KLDivLoss)

The Kullback-Leibler divergence loss. [Kullback-Leibler divergence](https://en.wikipedia.org/wiki/Kullback-Leibler_divergence) is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.

As with [`NLLLoss`](torch.nn.nllloss#torch.nn.NLLLoss "torch.nn.NLLLoss"), the `input` given is expected to contain *log-probabilities* and is not restricted to a 2D Tensor. The targets are interpreted as *probabilities* by default, but can be treated as *log-probabilities* with `log_target` set to `True`.

This criterion expects a `target` `Tensor` of the same size as the `input` `Tensor`. The unreduced (i.e. with `reduction` set to `'none'`) loss can be described as:

l(x, y) = L = \{ l\_1, \dots, l\_N \}, \quad l\_n = y\_n \cdot \left( \log y\_n - x\_n \right)

where the index N spans all dimensions of `input` and L has the same shape as `input`. If `reduction` is not `'none'` (default `'mean'`), then:

\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}

In the default `reduction` mode `'mean'`, the losses are averaged for each minibatch over observations **as well as** over dimensions. `'batchmean'` mode gives the correct KL divergence, where losses are averaged over the batch dimension only. `'mean'` mode's behavior will be changed to match `'batchmean'` in the next major release.

Parameters

* **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch.
Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True`
* **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
* **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'batchmean'` | `'sum'` | `'mean'`. `'none'`: no reduction will be applied. `'batchmean'`: the sum of the output will be divided by batchsize. `'sum'`: the output will be summed. `'mean'`: the output will be divided by the number of elements in the output. Default: `'mean'`
* **log\_target** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Specifies whether `target` is passed in the log space. Default: `False`

Note

`size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`.

Note

`reduction` = `'mean'` doesn't return the true KL divergence value; please use `reduction` = `'batchmean'`, which aligns with the KL math definition. In the next major release, `'mean'` will be changed to be the same as `'batchmean'`.

Shape:

* Input: (N, \*), where \* means any number of additional dimensions
* Target: (N, \*), same shape as the input
* Output: scalar by default. If `reduction` is `'none'`, then (N, \*), the same shape as the input

pytorch torch.sqrt

torch.sqrt
==========

`torch.sqrt(input, *, out=None) → Tensor`

Returns a new tensor with the square-root of the elements of `input`.

\text{out}\_{i} = \sqrt{\text{input}\_{i}}

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(4)
>>> a
tensor([-2.0755, 1.0226, 0.0831, 0.4806])
>>> torch.sqrt(a)
tensor([ nan, 1.0112, 0.2883, 0.6933])
```

pytorch torch.nn.utils.weight_norm

torch.nn.utils.weight\_norm
===========================

`torch.nn.utils.weight_norm(module, name='weight', dim=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/weight_norm.html#weight_norm)

Applies weight normalization to a parameter in the given module.

\mathbf{w} = g \dfrac{\mathbf{v}}{\|\mathbf{v}\|}

Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This replaces the parameter specified by `name` (e.g. `'weight'`) with two parameters: one specifying the magnitude (e.g. `'weight_g'`) and one specifying the direction (e.g. `'weight_v'`). Weight normalization is implemented via a hook that recomputes the weight tensor from the magnitude and direction before every `forward()` call.

By default, with `dim=0`, the norm is computed independently per output channel/plane. To compute a norm over the entire weight tensor, use `dim=None`.
See <https://arxiv.org/abs/1602.07868>

Parameters

* **module** ([Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – containing module
* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – name of weight parameter
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – dimension over which to compute the norm

Returns

The original module with the weight norm hook

Example:

```
>>> m = weight_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_g.size()
torch.Size([40, 1])
>>> m.weight_v.size()
torch.Size([40, 20])
```

pytorch LazyConv3d

LazyConv3d
==========

`class torch.nn.LazyConv3d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/conv.html#LazyConv3d)

A [`torch.nn.Conv3d`](torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d") module with lazy initialization of the `in_channels` argument of the [`Conv3d`](torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d"), which is inferred from `input.size(1)`.

Parameters

* **out\_channels** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of channels produced by the convolution
* **kernel\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Size of the convolving kernel
* **stride** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Stride of the convolution. Default: 1
* **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Zero-padding added to both sides of the input. Default: 0
* **padding\_mode** (*string**,* *optional*) – `'zeros'`, `'reflect'`, `'replicate'` or `'circular'`. Default: `'zeros'`
* **dilation** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Spacing between kernel elements. Default: 1
* **groups** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of blocked connections from input channels to output channels. Default: 1
* **bias** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, adds a learnable bias to the output. Default: `True`

See also [`torch.nn.Conv3d`](torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d") and [`torch.nn.modules.lazy.LazyModuleMixin`](torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin "torch.nn.modules.lazy.LazyModuleMixin")

`cls_to_become`

alias of [`Conv3d`](torch.nn.conv3d#torch.nn.Conv3d "torch.nn.Conv3d")

pytorch torch.cat

torch.cat
=========

`torch.cat(tensors, dim=0, *, out=None) → Tensor`

Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.
[`torch.cat()`](#torch.cat "torch.cat") can be seen as an inverse operation for [`torch.split()`](torch.split#torch.split "torch.split") and [`torch.chunk()`](torch.chunk#torch.chunk "torch.chunk"). [`torch.cat()`](#torch.cat "torch.cat") can be best understood via examples.

Parameters

* **tensors** (*sequence of Tensors*) – any python sequence of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the dimension over which the tensors are concatenated

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790, 0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790, 0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790, 0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790, 0.1497]])
>>> torch.cat((x, x, x), 1)
tensor([[ 0.6580, -1.0969, -0.4614, 0.6580, -1.0969, -0.4614, 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790, 0.1497, -0.1034, -0.5790, 0.1497, -0.1034, -0.5790, 0.1497]])
```

pytorch torch.sign

torch.sign
==========

`torch.sign(input, *, out=None) → Tensor`

Returns a new tensor with the signs of the elements of `input`.

\text{out}\_{i} = \operatorname{sgn}(\text{input}\_{i})

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.tensor([0.7, -1.2, 0., 2.3])
>>> a
tensor([ 0.7000, -1.2000, 0.0000, 2.3000])
>>> torch.sign(a)
tensor([ 1., -1., 0., 1.])
```

pytorch RandomUnstructured

RandomUnstructured
==================

`class torch.nn.utils.prune.RandomUnstructured(amount)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#RandomUnstructured)

Prune (currently unpruned) units in a tensor at random.

Parameters

* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – parameter name within `module` on which pruning will act.
* **amount** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – quantity of parameters to prune. If `float`, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If `int`, it represents the absolute number of parameters to prune.

`classmethod apply(module, name, amount)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#RandomUnstructured.apply)

Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask.

Parameters

* **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune
* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – parameter name within `module` on which pruning will act.
* **amount** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – quantity of parameters to prune. If `float`, should be between 0.0 and 1.0 and represent the fraction of parameters to prune.
If `int`, it represents the absolute number of parameters to prune. `apply_mask(module)` Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune Returns pruned version of the input tensor Return type pruned\_tensor ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) `prune(t, default_mask=None, importance_scores=None)` Computes and returns a pruned version of input tensor `t` according to the pruning rule specified in `compute_mask()`. Parameters * **t** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor to prune (of same dimensions as `default_mask`). * **importance\_scores** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor of importance scores (of same shape as `t`) used to compute mask for pruning `t`. The values in this tensor indicate the importance of the corresponding elements in the `t` that is being pruned. If unspecified or None, the tensor `t` will be used in its place. * **default\_mask** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones. Returns pruned version of tensor `t`. `remove(module)` Removes the pruning reparameterization from a module. The pruned parameter named `name` remains permanently pruned, and the parameter named `name+'_orig'` is removed from the parameter list. Similarly, the buffer named `name+'_mask'` is removed from the buffers. Note Pruning itself is NOT undone or reversed!
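For reference, the functional wrapper is the usual way to apply this method. A minimal sketch; the layer and the 30% amount are arbitrary:

```
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(10, 5)
prune.random_unstructured(layer, name='weight', amount=0.3)
# Roughly 30% of the reparametrized weight entries are now zero.
print(float((layer.weight == 0).sum()) / layer.weight.nelement())
```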
pytorch torch.sparse_coo_tensor torch.sparse\_coo\_tensor ========================= `torch.sparse_coo_tensor(indices, values, size=None, *, dtype=None, device=None, requires_grad=False) → Tensor` Constructs a [sparse tensor in COO(rdinate) format](../sparse#sparse-coo-docs) with specified values at the given `indices`. Note This function returns an [uncoalesced tensor](../sparse#sparse-uncoalesced-coo-docs). Parameters * **indices** (*array\_like*) – Initial data for the tensor. Can be a list, tuple, NumPy `ndarray`, scalar, and other types. Will be cast to a `torch.LongTensor` internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values. * **values** (*array\_like*) – Initial values for the tensor. Can be a list, tuple, NumPy `ndarray`, scalar, and other types. * **size** (list, tuple, or `torch.Size`, optional) – Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements. Keyword Arguments * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if None, infers data type from `values`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> i = torch.tensor([[0, 1, 1], ... [2, 0, 2]]) >>> v = torch.tensor([3, 4, 5], dtype=torch.float32) >>> torch.sparse_coo_tensor(i, v, [2, 4]) tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), size=(2, 4), nnz=3, layout=torch.sparse_coo) >>> torch.sparse_coo_tensor(i, v) # Shape inference tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), size=(2, 3), nnz=3, layout=torch.sparse_coo) >>> torch.sparse_coo_tensor(i, v, [2, 4], ... dtype=torch.float64, ... device=torch.device('cuda:0')) tensor(indices=tensor([[0, 1, 1], [2, 0, 2]]), values=tensor([3., 4., 5.]), device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64, layout=torch.sparse_coo) # Create an empty sparse tensor with the following invariants: # 1. sparse_dim + dense_dim = len(SparseTensor.shape) # 2. SparseTensor._indices().shape = (sparse_dim, nnz) # 3. 
SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:]) # # For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and # sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0)) >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1]) tensor(indices=tensor([], size=(1, 0)), values=tensor([], size=(0,)), size=(1,), nnz=0, layout=torch.sparse_coo) # and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and # sparse_dim = 1 >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2]) tensor(indices=tensor([], size=(1, 0)), values=tensor([], size=(0, 2)), size=(1, 2), nnz=0, layout=torch.sparse_coo) ``` pytorch RNN RNN === `class torch.nn.RNN(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/rnn.html#RNN) Applies a multi-layer Elman RNN with tanh⁡\tanh or ReLU\text{ReLU} non-linearity to an input sequence. For each element in the input sequence, each layer computes the following function: ht=tanh⁡(Wihxt+bih+Whhh(t−1)+bhh)h\_t = \tanh(W\_{ih} x\_t + b\_{ih} + W\_{hh} h\_{(t-1)} + b\_{hh}) where hth\_t is the hidden state at time `t`, xtx\_t is the input at time `t`, and h(t−1)h\_{(t-1)} is the hidden state of the previous layer at time `t-1` or the initial hidden state at time `0`. If `nonlinearity` is `'relu'`, then ReLU\text{ReLU} is used instead of tanh⁡\tanh . Parameters * **input\_size** – The number of expected features in the input `x` * **hidden\_size** – The number of features in the hidden state `h` * **num\_layers** – Number of recurrent layers. E.g., setting `num_layers=2` would mean stacking two RNNs together to form a `stacked RNN`, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1 * **nonlinearity** – The non-linearity to use. Can be either `'tanh'` or `'relu'`. Default: `'tanh'` * **bias** – If `False`, then the layer does not use bias weights `b_ih` and `b_hh`. Default: `True` * **batch\_first** – If `True`, then the input and output tensors are provided as `(batch, seq, feature)`. Default: `False` * **dropout** – If non-zero, introduces a `Dropout` layer on the outputs of each RNN layer except the last layer, with dropout probability equal to `dropout`. Default: 0 * **bidirectional** – If `True`, becomes a bidirectional RNN. Default: `False` Inputs: input, h\_0 * **input** of shape `(seq_len, batch, input_size)`: tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See [`torch.nn.utils.rnn.pack_padded_sequence()`](torch.nn.utils.rnn.pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence "torch.nn.utils.rnn.pack_padded_sequence") or [`torch.nn.utils.rnn.pack_sequence()`](torch.nn.utils.rnn.pack_sequence#torch.nn.utils.rnn.pack_sequence "torch.nn.utils.rnn.pack_sequence") for details. * **h\_0** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num\_directions should be 2, else it should be 1. Outputs: output, h\_n * **output** of shape `(seq_len, batch, num_directions * hidden_size)`: tensor containing the output features (`h_t`) from the last layer of the RNN, for each `t`. If a [`torch.nn.utils.rnn.PackedSequence`](torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence "torch.nn.utils.rnn.PackedSequence") has been given as the input, the output will also be a packed sequence. 
For the unpacked case, the directions can be separated using `output.view(seq_len, batch, num_directions, hidden_size)`, with forward and backward being direction `0` and `1` respectively. Similarly, the directions can be separated in the packed case. * **h\_n** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the hidden state for `t = seq_len`. Like *output*, the layers can be separated using `h_n.view(num_layers, num_directions, batch, hidden_size)`. Shape: * Input1: (L,N,Hin)(L, N, H\_{in}) tensor containing input features where Hin=input\_sizeH\_{in}=\text{input\\_size} and `L` represents a sequence length. * Input2: (S,N,Hout)(S, N, H\_{out}) tensor containing the initial hidden state for each element in the batch. Hout=hidden\_sizeH\_{out}=\text{hidden\\_size} Defaults to zero if not provided. where S=num\_layers∗num\_directionsS=\text{num\\_layers} \* \text{num\\_directions} If the RNN is bidirectional, num\_directions should be 2, else it should be 1. * Output1: (L,N,Hall)(L, N, H\_{all}) where Hall=num\_directions∗hidden\_sizeH\_{all}=\text{num\\_directions} \* \text{hidden\\_size} * Output2: (S,N,Hout)(S, N, H\_{out}) tensor containing the next hidden state for each element in the batch Variables * **~RNN.weight\_ih\_l[k]** – the learnable input-hidden weights of the k-th layer, of shape `(hidden_size, input_size)` for `k = 0`. Otherwise, the shape is `(hidden_size, num_directions * hidden_size)` * **~RNN.weight\_hh\_l[k]** – the learnable hidden-hidden weights of the k-th layer, of shape `(hidden_size, hidden_size)` * **~RNN.bias\_ih\_l[k]** – the learnable input-hidden bias of the k-th layer, of shape `(hidden_size)` * **~RNN.bias\_hh\_l[k]** – the learnable hidden-hidden bias of the k-th layer, of shape `(hidden_size)` Note All the weights and biases are initialized from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=1hidden\_sizek = \frac{1}{\text{hidden\\_size}} Warning There are known non-determinism issues for RNN functions on some versions of cuDNN and CUDA. You can enforce deterministic behavior by setting the following environment variables: On CUDA 10.1, set environment variable `CUDA_LAUNCH_BLOCKING=1`. This may affect performance. On CUDA 10.2 or later, set environment variable (note the leading colon symbol) `CUBLAS_WORKSPACE_CONFIG=:16:8` or `CUBLAS_WORKSPACE_CONFIG=:4096:2`. See the [cuDNN 8 Release Notes](https://docs.nvidia.com/deeplearning/sdk/cudnn-release-notes/rel_8.html) for more information. Note If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype `torch.float16`, 4) a V100 GPU is used, 5) input data is not in `PackedSequence` format, then a persistent algorithm can be selected to improve performance. Examples: ``` >>> rnn = nn.RNN(10, 20, 2) >>> input = torch.randn(5, 3, 10) >>> h0 = torch.randn(2, 3, 20) >>> output, hn = rnn(input, h0) ``` pytorch torch.broadcast_shapes torch.broadcast\_shapes ======================= `torch.broadcast_shapes(*shapes) → Size` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#broadcast_shapes) Similar to [`broadcast_tensors()`](torch.broadcast_tensors#torch.broadcast_tensors "torch.broadcast_tensors") but for shapes. This is equivalent to `torch.broadcast_tensors(*map(torch.empty, shapes))[0].shape` but avoids the need to create intermediate tensors. This is useful for broadcasting tensors of common batch shape but different rightmost shape, e.g. to broadcast mean vectors with covariance matrices. 
Example: ``` >>> torch.broadcast_shapes((2,), (3, 1), (1, 1, 1)) torch.Size([1, 3, 2]) ``` Parameters **\*shapes** (*torch.Size*) – Shapes of tensors. Returns A shape compatible with all input shapes. Return type shape (torch.Size) Raises [**RuntimeError**](https://docs.python.org/3/library/exceptions.html#RuntimeError "(in Python v3.9)") – If shapes are incompatible. pytorch torch.lgamma torch.lgamma ============ `torch.lgamma(input, *, out=None) → Tensor` Computes the logarithm of the gamma function on `input`. outi=log⁡Γ(inputi)\text{out}\_{i} = \log \Gamma(\text{input}\_{i}) Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.arange(0.5, 2, 0.5) >>> torch.lgamma(a) tensor([ 0.5724, 0.0000, -0.1208]) ``` pytorch Threshold Threshold ========= `class torch.nn.Threshold(threshold, value, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#Threshold) Thresholds each element of the input Tensor. Threshold is defined as: y={x, if x>thresholdvalue, otherwise y = \begin{cases} x, &\text{ if } x > \text{threshold} \\ \text{value}, &\text{ otherwise } \end{cases} Parameters * **threshold** – The value to threshold at * **value** – The value to replace with * **inplace** – can optionally do the operation in-place. Default: `False` Shape: * Input: (N,∗)(N, \*) where `*` means, any number of additional dimensions * Output: (N,∗)(N, \*) , same shape as the input Examples: ``` >>> m = nn.Threshold(0.1, 20) >>> input = torch.randn(2) >>> output = m(input) ``` pytorch GELU GELU ==== `class torch.nn.GELU` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#GELU) Applies the Gaussian Error Linear Units function: GELU(x)=x∗Φ(x)\text{GELU}(x) = x \* \Phi(x) where Φ(x)\Phi(x) is the Cumulative Distribution Function for Gaussian Distribution. Shape: * Input: (N,∗)(N, \*) where `*` means, any number of additional dimensions * Output: (N,∗)(N, \*) , same shape as the input Examples: ``` >>> m = nn.GELU() >>> input = torch.randn(2) >>> output = m(input) ``` pytorch torch.range torch.range =========== `torch.range(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor` Returns a 1-D tensor of size ⌊end−startstep⌋+1\left\lfloor \frac{\text{end} - \text{start}}{\text{step}} \right\rfloor + 1 with values from `start` to `end` with step `step`. Step is the gap between two values in the tensor. outi+1=outi+step.\text{out}\_{i+1} = \text{out}\_i + \text{step}. Warning This function is deprecated and will be removed in a future release because its behavior is inconsistent with Python’s range builtin. Instead, use [`torch.arange()`](torch.arange#torch.arange "torch.arange"), which produces values in [start, end). Parameters * **start** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the starting value for the set of points. Default: `0`. * **end** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the ending value for the set of points * **step** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the gap between each pair of adjacent points. Default: `1`. Keyword Arguments * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. 
* **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). If `dtype` is not given, the data type is inferred from the other input arguments. If any of `start`, `end`, or `step` are floating-point, the `dtype` is inferred to be the default dtype, see [`get_default_dtype()`](torch.get_default_dtype#torch.get_default_dtype "torch.get_default_dtype"). Otherwise, the `dtype` is inferred to be `torch.int64`. * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned Tensor. Default: `torch.strided`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> torch.range(1, 4) tensor([ 1., 2., 3., 4.]) >>> torch.range(1, 4, 0.5) tensor([ 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000]) ``` pytorch torch.ger torch.ger ========= `torch.ger(input, vec2, *, out=None) → Tensor` Alias of [`torch.outer()`](torch.outer#torch.outer "torch.outer"). Warning This function is deprecated and will be removed in a future PyTorch release. Use [`torch.outer()`](torch.outer#torch.outer "torch.outer") instead. pytorch Parameter Parameter ========= `class torch.nn.parameter.Parameter` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parameter.html#Parameter) A kind of Tensor that is to be considered a module parameter. Parameters are [`Tensor`](../tensors#torch.Tensor "torch.Tensor") subclasses that have a very special property when used with `Module` s: when they’re assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in the `parameters()` iterator. Assigning a plain Tensor doesn’t have such an effect. This is because one might want to cache some temporary state, like the last hidden state of the RNN, in the model. If there was no such class as [`Parameter`](#torch.nn.parameter.Parameter "torch.nn.parameter.Parameter"), these temporaries would get registered too. Parameters * **data** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – parameter tensor. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if the parameter requires gradient. See [Excluding subgraphs from backward](https://pytorch.org/docs/1.8.0/notes/autograd.html#excluding-subgraphs) for more details. Default: `True` pytorch torch.tensor torch.tensor ============ `torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor` Constructs a tensor with `data`. Warning [`torch.tensor()`](#torch.tensor "torch.tensor") always copies `data`. 
If you have a Tensor `data` and want to avoid a copy, use [`torch.Tensor.requires_grad_()`](../tensors#torch.Tensor.requires_grad_ "torch.Tensor.requires_grad_") or [`torch.Tensor.detach()`](../autograd#torch.Tensor.detach "torch.Tensor.detach"). If you have a NumPy `ndarray` and want to avoid a copy, use [`torch.as_tensor()`](torch.as_tensor#torch.as_tensor "torch.as_tensor"). Warning When data is a tensor `x`, [`torch.tensor()`](#torch.tensor "torch.tensor") reads out ‘the data’ from whatever it is passed, and constructs a leaf variable. Therefore `torch.tensor(x)` is equivalent to `x.clone().detach()` and `torch.tensor(x, requires_grad=True)` is equivalent to `x.clone().detach().requires_grad_(True)`. The equivalents using `clone()` and `detach()` are recommended. Parameters **data** (*array\_like*) – Initial data for the tensor. Can be a list, tuple, NumPy `ndarray`, scalar, and other types. Keyword Arguments * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, infers data type from `data`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. * **pin\_memory** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: `False`. Example: ``` >>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]]) tensor([[ 0.1000, 1.2000], [ 2.2000, 3.1000], [ 4.9000, 5.2000]]) >>> torch.tensor([0, 1]) # Type inference on data tensor([ 0, 1]) >>> torch.tensor([[0.11111, 0.222222, 0.3333333]], ... dtype=torch.float64, ... device=torch.device('cuda:0')) # creates a torch.cuda.DoubleTensor tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0') >>> torch.tensor(3.14159) # Create a scalar (zero-dimensional tensor) tensor(3.1416) >>> torch.tensor([]) # Create an empty tensor (of size (0,)) tensor([]) ```
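The copy-versus-share distinction in the warnings above can be checked directly. A minimal sketch, assuming NumPy is available: `torch.tensor()` copies the ndarray, while `torch.as_tensor()` shares its memory (for a CPU array of a matching dtype), so an in-place change to the array is visible only through the shared tensor.

```
>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> copied = torch.tensor(a)     # always copies the data
>>> shared = torch.as_tensor(a)  # shares memory with `a`
>>> a[0] = -1
>>> copied[0], shared[0]
(tensor(1), tensor(-1))
```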
pytorch Dropout2d Dropout2d ========= `class torch.nn.Dropout2d(p=0.5, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/dropout.html#Dropout2d) Randomly zero out entire channels (a channel is a 2D feature map, e.g., the jj -th channel of the ii -th sample in the batched input is a 2D tensor input[i,j]\text{input}[i, j] ). Each channel will be zeroed out independently on every forward call with probability `p` using samples from a Bernoulli distribution. Usually the input comes from `nn.Conv2d` modules. As described in the paper [Efficient Object Localization Using Convolutional Networks](https://arxiv.org/abs/1411.4280) , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, `nn.Dropout2d()` will help promote independence between feature maps and should be used instead. Parameters * **p** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – probability of an element to be zeroed. * **inplace** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If set to `True`, will do this operation in-place. Shape: * Input: (N,C,H,W)(N, C, H, W) * Output: (N,C,H,W)(N, C, H, W) (same shape as input) Examples: ``` >>> m = nn.Dropout2d(p=0.2) >>> input = torch.randn(20, 16, 32, 32) >>> output = m(input) ``` pytorch torch.chunk torch.chunk =========== `torch.chunk(input, chunks, dim=0) → List of Tensors` Splits a tensor into a specific number of chunks. Each chunk is a view of the input tensor. The last chunk will be smaller if the tensor size along the given dimension `dim` is not divisible by `chunks`. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to split * **chunks** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of chunks to return * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension along which to split the tensor pytorch ConstantPad3d ConstantPad3d ============= `class torch.nn.ConstantPad3d(padding, value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/padding.html#ConstantPad3d) Pads the input tensor boundaries with a constant value. For `N`-dimensional padding, use [`torch.nn.functional.pad()`](../nn.functional#torch.nn.functional.pad "torch.nn.functional.pad"). Parameters **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the size of the padding. If `int`, uses the same padding in all boundaries. 
If a 6-`tuple`, uses (padding\_left\text{padding\\_left} , padding\_right\text{padding\\_right} , padding\_top\text{padding\\_top} , padding\_bottom\text{padding\\_bottom} , padding\_front\text{padding\\_front} , padding\_back\text{padding\\_back} ) Shape: * Input: (N,C,Din,Hin,Win)(N, C, D\_{in}, H\_{in}, W\_{in}) * Output: (N,C,Dout,Hout,Wout)(N, C, D\_{out}, H\_{out}, W\_{out}) where Dout=Din+padding\_front+padding\_backD\_{out} = D\_{in} + \text{padding\\_front} + \text{padding\\_back} Hout=Hin+padding\_top+padding\_bottomH\_{out} = H\_{in} + \text{padding\\_top} + \text{padding\\_bottom} Wout=Win+padding\_left+padding\_rightW\_{out} = W\_{in} + \text{padding\\_left} + \text{padding\\_right} Examples: ``` >>> m = nn.ConstantPad3d(3, 3.5) >>> input = torch.randn(16, 3, 10, 20, 30) >>> output = m(input) >>> # using different paddings for different sides >>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5) >>> output = m(input) ``` pytorch ReLU ReLU ==== `class torch.nn.ReLU(inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#ReLU) Applies the rectified linear unit function element-wise: ReLU(x)=(x)+=max⁡(0,x)\text{ReLU}(x) = (x)^+ = \max(0, x) Parameters **inplace** – can optionally do the operation in-place. Default: `False` Shape: * Input: (N,∗)(N, \*) where `*` means, any number of additional dimensions * Output: (N,∗)(N, \*) , same shape as the input Examples: ``` >>> m = nn.ReLU() >>> input = torch.randn(2) >>> output = m(input) >>> # An implementation of CReLU - https://arxiv.org/abs/1603.05201 >>> m = nn.ReLU() >>> input = torch.randn(2).unsqueeze(0) >>> output = torch.cat((m(input), m(-input))) ``` pytorch torch.bitwise_or torch.bitwise\_or ================= `torch.bitwise_or(input, other, *, out=None) → Tensor` Computes the bitwise OR of `input` and `other`. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical OR. Parameters * **input** – the first input tensor * **other** – the second input tensor Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. #### Example ``` >>> torch.bitwise_or(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8)) tensor([-1, -2, 3], dtype=torch.int8) >>> torch.bitwise_or(torch.tensor([True, True, False]), torch.tensor([False, True, False])) tensor([ True, True, False]) ``` pytorch BCELoss BCELoss ======= `class torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#BCELoss) Creates a criterion that measures the Binary Cross Entropy between the target and the output: The unreduced (i.e. with `reduction` set to `'none'`) loss can be described as: ℓ(x,y)=L={l1,…,lN}⊤,ln=−wn[yn⋅log⁡xn+(1−yn)⋅log⁡(1−xn)],\ell(x, y) = L = \{l\_1,\dots,l\_N\}^\top, \quad l\_n = - w\_n \left[ y\_n \cdot \log x\_n + (1 - y\_n) \cdot \log (1 - x\_n) \right], where NN is the batch size. If `reduction` is not `'none'` (default `'mean'`), then ℓ(x,y)={mean⁡(L),if reduction=‘mean’;sum⁡(L),if reduction=‘sum’.\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets yy should be numbers between 0 and 1. 
Notice that if xnx\_n is either 0 or 1, one of the log terms would be mathematically undefined in the above loss equation. PyTorch chooses to set log⁡(0)=−∞\log (0) = -\infty , since lim⁡x→0log⁡(x)=−∞\lim\_{x\to 0} \log (x) = -\infty . However, an infinite term in the loss equation is not desirable for several reasons. For one, if either yn=0y\_n = 0 or (1−yn)=0(1 - y\_n) = 0 , then we would be multiplying 0 with infinity. Secondly, if we have an infinite loss value, then we would also have an infinite term in our gradient, since lim⁡x→0ddxlog⁡(x)=∞\lim\_{x\to 0} \frac{d}{dx} \log (x) = \infty . This would make BCELoss’s backward method nonlinear with respect to xnx\_n , and using it for things like linear regression would not be straight-forward. Our solution is that BCELoss clamps its log function outputs to be greater than or equal to -100. This way, we can always have a finite loss value and a linear backward method. Parameters * **weight** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size `nbatch`. * **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True` * **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'` Shape: * Input: (N,∗)(N, \*) where ∗\* means, any number of additional dimensions * Target: (N,∗)(N, \*) , same shape as the input * Output: scalar. If `reduction` is `'none'`, then (N,∗)(N, \*) , same shape as input. Examples: ``` >>> m = nn.Sigmoid() >>> loss = nn.BCELoss() >>> input = torch.randn(3, requires_grad=True) >>> target = torch.empty(3).random_(2) >>> output = loss(m(input), target) >>> output.backward() ``` pytorch ParameterDict ParameterDict ============= `class torch.nn.ParameterDict(parameters=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ParameterDict) Holds parameters in a dictionary. ParameterDict can be indexed like a regular Python dictionary, but parameters it contains are properly registered, and will be visible by all Module methods. 
[`ParameterDict`](#torch.nn.ParameterDict "torch.nn.ParameterDict") is an **ordered** dictionary that respects * the order of insertion, and * in [`update()`](#torch.nn.ParameterDict.update "torch.nn.ParameterDict.update"), the order of the merged `OrderedDict` or another [`ParameterDict`](#torch.nn.ParameterDict "torch.nn.ParameterDict") (the argument to [`update()`](#torch.nn.ParameterDict.update "torch.nn.ParameterDict.update")). Note that [`update()`](#torch.nn.ParameterDict.update "torch.nn.ParameterDict.update") with other unordered mapping types (e.g., Python’s plain `dict`) does not preserve the order of the merged mapping. Parameters **parameters** (*iterable**,* *optional*) – a mapping (dictionary) of (string : `Parameter`) or an iterable of key-value pairs of type (string, `Parameter`) Example: ``` class MyModule(nn.Module): def __init__(self): super(MyModule, self).__init__() self.params = nn.ParameterDict({ 'left': nn.Parameter(torch.randn(5, 10)), 'right': nn.Parameter(torch.randn(5, 10)) }) def forward(self, x, choice): x = self.params[choice].mm(x) return x ``` `clear()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ParameterDict.clear) Remove all items from the ParameterDict. `items()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ParameterDict.items) Return an iterable of the ParameterDict key/value pairs. `keys()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ParameterDict.keys) Return an iterable of the ParameterDict keys. `pop(key)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ParameterDict.pop) Remove key from the ParameterDict and return its parameter. Parameters **key** (*string*) – key to pop from the ParameterDict `update(parameters)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ParameterDict.update) Update the [`ParameterDict`](#torch.nn.ParameterDict "torch.nn.ParameterDict") with the key-value pairs from a mapping or an iterable, overwriting existing keys. Note If `parameters` is an `OrderedDict`, a [`ParameterDict`](#torch.nn.ParameterDict "torch.nn.ParameterDict"), or an iterable of key-value pairs, the order of new elements in it is preserved. Parameters **parameters** (*iterable*) – a mapping (dictionary) from string to `Parameter`, or an iterable of key-value pairs of type (string, `Parameter`) `values()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ParameterDict.values) Return an iterable of the ParameterDict values. pytorch torch.tensor_split torch.tensor\_split =================== `torch.tensor_split(input, indices_or_sections, dim=0) → List of Tensors` Splits a tensor into multiple sub-tensors, all of which are views of `input`, along dimension `dim` according to the indices or number of sections specified by `indices_or_sections`. This function is based on NumPy’s [`numpy.array_split()`](https://numpy.org/doc/stable/reference/generated/numpy.array_split.html#numpy.array_split "(in NumPy v1.20)"). 
Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to split * **indices\_or\_sections** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.9)") *or* *tuple of python:ints*) – If `indices_or_sections` is an integer `n` or a zero dimensional long tensor with value `n`, `input` is split into `n` sections along dimension `dim`. If `input` is divisible by `n` along dimension `dim`, each section will be of equal size, `input.size(dim) / n`. If `input` is not divisible by `n`, the first `int(input.size(dim) % n)` sections will have size `int(input.size(dim) / n) + 1`, and the rest will have size `int(input.size(dim) / n)`. If `indices_or_sections` is a list or tuple of ints, or a one-dimensional long tensor, then `input` is split along dimension `dim` at each of the indices in the list, tuple or tensor. For instance, `indices_or_sections=[2, 3]` and `dim=0` would result in the tensors `input[:2]`, `input[2:3]`, and `input[3:]`. If `indices_or_sections` is a tensor, it must be a zero-dimensional or one-dimensional long tensor on the CPU. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – dimension along which to split the tensor. Default: `0` Example: ``` >>> x = torch.arange(8) >>> torch.tensor_split(x, 3) (tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6, 7])) ``` ``` >>> x = torch.arange(7) >>> torch.tensor_split(x, 3) (tensor([0, 1, 2]), tensor([3, 4]), tensor([5, 6])) >>> torch.tensor_split(x, (1, 6)) (tensor([0]), tensor([1, 2, 3, 4, 5]), tensor([6])) ``` ``` >>> x = torch.arange(14).reshape(2, 7) >>> x tensor([[ 0, 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12, 13]]) >>> torch.tensor_split(x, 3, dim=1) (tensor([[0, 1, 2], [7, 8, 9]]), tensor([[ 3, 4], [10, 11]]), tensor([[ 5, 6], [12, 13]])) >>> torch.tensor_split(x, (1, 6), dim=1) (tensor([[0], [7]]), tensor([[ 1, 2, 3, 4, 5], [ 8, 9, 10, 11, 12]]), tensor([[ 6], [13]])) ``` pytorch torch.arcsinh torch.arcsinh ============= `torch.arcsinh(input, *, out=None) → Tensor` Alias for [`torch.asinh()`](torch.asinh#torch.asinh "torch.asinh"). pytorch PixelShuffle PixelShuffle ============ `class torch.nn.PixelShuffle(upscale_factor)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/pixelshuffle.html#PixelShuffle) Rearranges elements in a tensor of shape (∗,C×r2,H,W)(\*, C \times r^2, H, W) to a tensor of shape (∗,C,H×r,W×r)(\*, C, H \times r, W \times r) , where r is an upscale factor. This is useful for implementing efficient sub-pixel convolution with a stride of 1/r1/r . See the paper: [Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network](https://arxiv.org/abs/1609.05158) by Shi et al. (2016) for more details. 
Parameters **upscale\_factor** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – factor to increase spatial resolution by Shape: * Input: (∗,Cin,Hin,Win)(\*, C\_{in}, H\_{in}, W\_{in}) , where \* is zero or more batch dimensions * Output: (∗,Cout,Hout,Wout)(\*, C\_{out}, H\_{out}, W\_{out}) , where Cout=Cin÷upscale\_factor2C\_{out} = C\_{in} \div \text{upscale\\_factor}^2 Hout=Hin×upscale\_factorH\_{out} = H\_{in} \times \text{upscale\\_factor} Wout=Win×upscale\_factorW\_{out} = W\_{in} \times \text{upscale\\_factor} Examples: ``` >>> pixel_shuffle = nn.PixelShuffle(3) >>> input = torch.randn(1, 9, 4, 4) >>> output = pixel_shuffle(input) >>> print(output.size()) torch.Size([1, 1, 12, 12]) ``` pytorch torch.expm1 torch.expm1 =========== `torch.expm1(input, *, out=None) → Tensor` Returns a new tensor with the exponential of the elements of `input` minus 1. yi=exi−1y\_{i} = e^{x\_{i}} - 1 Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> torch.expm1(torch.tensor([0, math.log(2.)])) tensor([ 0., 1.]) ``` pytorch torch.mode torch.mode ========== `torch.mode(input, dim=-1, keepdim=False, *, out=None) -> (Tensor, LongTensor)` Returns a namedtuple `(values, indices)` where `values` is the mode value of each row of the `input` tensor in the given dimension `dim`, i.e. a value which appears most often in that row, and `indices` is the index location of each mode value found. By default, `dim` is the last dimension of the `input` tensor. If `keepdim` is `True`, the output tensors are of the same size as `input` except in the dimension `dim` where they are of size 1. Otherwise, `dim` is squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in the output tensors having 1 fewer dimension than `input`. Note This function is not defined for `torch.cuda.Tensor` yet. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to reduce. * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not. Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the result tuple of two output tensors (values, indices) Example: ``` >>> a = torch.randint(10, (5,)) >>> a tensor([6, 5, 1, 0, 2]) >>> b = a + (torch.randn(50, 1) * 5).long() >>> torch.mode(b, 0) torch.return_types.mode(values=tensor([6, 5, 1, 0, 2]), indices=tensor([2, 2, 2, 2, 2])) ``` pytorch ConstantPad1d ConstantPad1d ============= `class torch.nn.ConstantPad1d(padding, value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/padding.html#ConstantPad1d) Pads the input tensor boundaries with a constant value. For `N`-dimensional padding, use [`torch.nn.functional.pad()`](../nn.functional#torch.nn.functional.pad "torch.nn.functional.pad"). Parameters **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the size of the padding. If `int`, uses the same padding in both boundaries. 
If a 2-`tuple`, uses (padding\_left\text{padding\\_left} , padding\_right\text{padding\\_right} ) Shape: * Input: (N,C,Win)(N, C, W\_{in}) * Output: (N,C,Wout)(N, C, W\_{out}) where Wout=Win+padding\_left+padding\_rightW\_{out} = W\_{in} + \text{padding\\_left} + \text{padding\\_right} Examples: ``` >>> m = nn.ConstantPad1d(2, 3.5) >>> input = torch.randn(1, 2, 4) >>> input tensor([[[-1.0491, -0.7152, -0.0749, 0.8530], [-1.3287, 1.8966, 0.1466, -0.2771]]]) >>> m(input) tensor([[[ 3.5000, 3.5000, -1.0491, -0.7152, -0.0749, 0.8530, 3.5000, 3.5000], [ 3.5000, 3.5000, -1.3287, 1.8966, 0.1466, -0.2771, 3.5000, 3.5000]]]) >>> m = nn.ConstantPad1d(2, 3.5) >>> input = torch.randn(1, 2, 3) >>> input tensor([[[ 1.6616, 1.4523, -1.1255], [-3.6372, 0.1182, -1.8652]]]) >>> m(input) tensor([[[ 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000, 3.5000], [ 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000, 3.5000]]]) >>> # using different paddings for different sides >>> m = nn.ConstantPad1d((3, 1), 3.5) >>> m(input) tensor([[[ 3.5000, 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000], [ 3.5000, 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000]]]) ```
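Both `ConstantPad1d` and its 3D counterpart above refer to `torch.nn.functional.pad()` for the general `N`-dimensional case. A brief sketch of the functional equivalent of the first `ConstantPad1d` example; the padding widths and fill value simply mirror that example:

```
>>> import torch.nn.functional as F
>>> input = torch.randn(1, 2, 4)
>>> out = F.pad(input, (2, 2), mode='constant', value=3.5)  # pad last dim by 2 on each side
>>> out.shape
torch.Size([1, 2, 8])
```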
pytorch torch.min torch.min ========= `torch.min(input) → Tensor` Returns the minimum value of all elements in the `input` tensor. Warning This function produces deterministic (sub)gradients unlike `min(dim=0)` Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Example: ``` >>> a = torch.randn(1, 3) >>> a tensor([[ 0.6750, 1.0857, 1.7197]]) >>> torch.min(a) tensor(0.6750) ``` `torch.min(input, dim, keepdim=False, *, out=None) -> (Tensor, LongTensor)` Returns a namedtuple `(values, indices)` where `values` is the minimum value of each row of the `input` tensor in the given dimension `dim`. And `indices` is the index location of each minimum value found (argmin). If `keepdim` is `True`, the output tensors are of the same size as `input` except in the dimension `dim` where they are of size 1. Otherwise, `dim` is squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in the output tensors having 1 fewer dimension than `input`. Note If there are multiple minimal values in a reduced row then the indices of the first minimal value are returned. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to reduce. * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not. Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the tuple of two output tensors (min, min\_indices) Example: ``` >>> a = torch.randn(4, 4) >>> a tensor([[-0.6248, 1.1334, -1.1899, -0.2803], [-1.4644, -0.2635, -0.3651, 0.6134], [ 0.2457, 0.0384, 1.0128, 0.7015], [-0.1153, 2.9849, 2.1458, 0.5788]]) >>> torch.min(a, 1) torch.return_types.min(values=tensor([-1.1899, -1.4644, 0.0384, -0.1153]), indices=tensor([2, 0, 1, 0])) ``` `torch.min(input, other, *, out=None) → Tensor` See [`torch.minimum()`](torch.minimum#torch.minimum "torch.minimum"). pytorch torch.as_tensor torch.as\_tensor ================ `torch.as_tensor(data, dtype=None, device=None) → Tensor` Convert the data into a `torch.Tensor`. If the data is already a `Tensor` with the same `dtype` and `device`, no copy will be performed, otherwise a new `Tensor` will be returned with computational graph retained if data `Tensor` has `requires_grad=True`. Similarly, if the data is an `ndarray` of the corresponding `dtype` and the `device` is the cpu, no copy will be performed. Parameters * **data** (*array\_like*) – Initial data for the tensor. Can be a list, tuple, NumPy `ndarray`, scalar, and other types. * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, infers data type from `data`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. 
Example: ``` >>> a = numpy.array([1, 2, 3]) >>> t = torch.as_tensor(a) >>> t tensor([ 1, 2, 3]) >>> t[0] = -1 >>> a array([-1, 2, 3]) >>> a = numpy.array([1, 2, 3]) >>> t = torch.as_tensor(a, device=torch.device('cuda')) >>> t tensor([ 1, 2, 3]) >>> t[0] = -1 >>> a array([1, 2, 3]) ``` pytorch torch.vstack torch.vstack ============ `torch.vstack(tensors, *, out=None) → Tensor` Stack tensors in sequence vertically (row wise). This is equivalent to concatenation along the first axis after all 1-D tensors have been reshaped by [`torch.atleast_2d()`](torch.atleast_2d#torch.atleast_2d "torch.atleast_2d"). Parameters **tensors** (*sequence of Tensors*) – sequence of tensors to concatenate Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.tensor([1, 2, 3]) >>> b = torch.tensor([4, 5, 6]) >>> torch.vstack((a,b)) tensor([[1, 2, 3], [4, 5, 6]]) >>> a = torch.tensor([[1],[2],[3]]) >>> b = torch.tensor([[4],[5],[6]]) >>> torch.vstack((a,b)) tensor([[1], [2], [3], [4], [5], [6]]) ``` pytorch Softshrink Softshrink ========== `class torch.nn.Softshrink(lambd=0.5)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#Softshrink) Applies the soft shrinkage function elementwise: SoftShrinkage(x)={x−λ, if x>λx+λ, if x<−λ0, otherwise \text{SoftShrinkage}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases} Parameters **lambd** – the λ\lambda (must be no less than zero) value for the Softshrink formulation. Default: 0.5 Shape: * Input: (N,∗)(N, \*) where `*` means, any number of additional dimensions * Output: (N,∗)(N, \*) , same shape as the input Examples: ``` >>> m = nn.Softshrink() >>> input = torch.randn(2) >>> output = m(input) ``` pytorch torch.logical_not torch.logical\_not ================== `torch.logical_not(input, *, out=None) → Tensor` Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool dtype. If the input tensor is not a bool tensor, zeros are treated as `False` and non-zeros are treated as `True`. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> torch.logical_not(torch.tensor([True, False])) tensor([False, True]) >>> torch.logical_not(torch.tensor([0, 1, -10], dtype=torch.int8)) tensor([ True, False, False]) >>> torch.logical_not(torch.tensor([0., 1.5, -10.], dtype=torch.double)) tensor([ True, False, False]) >>> torch.logical_not(torch.tensor([0., 1., -10.], dtype=torch.double), out=torch.empty(3, dtype=torch.int16)) tensor([1, 0, 0], dtype=torch.int16) ``` pytorch torch.isneginf torch.isneginf ============== `torch.isneginf(input, *, out=None) → Tensor` Tests if each element of `input` is negative infinity or not. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. 
Example: ``` >>> a = torch.tensor([-float('inf'), float('inf'), 1.2]) >>> torch.isneginf(a) tensor([ True, False, False]) ``` pytorch torch.blackman_window torch.blackman\_window ====================== `torch.blackman_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor` Blackman window function. w[n]=0.42−0.5cos⁡(2πnN−1)+0.08cos⁡(4πnN−1)w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) + 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right) where NN is the full window size. The input `window_length` is a positive integer controlling the returned window size. The `periodic` flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like [`torch.stft()`](torch.stft#torch.stft "torch.stft"). Therefore, if `periodic` is true, the NN in the above formula is in fact window\_length+1\text{window\\_length} + 1 . Also, we always have `torch.blackman_window(L, periodic=True)` equal to `torch.blackman_window(L + 1, periodic=False)[:-1]`. Note If `window_length` =1=1 , the returned window contains a single value 1. Parameters * **window\_length** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the size of returned window * **periodic** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If True, returns a window to be used as periodic function. If False, return a symmetric window. Keyword Arguments * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). Only floating point types are supported. * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned window tensor. Only `torch.strided` (dense layout) is supported. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Returns A 1-D tensor of size (window\_length,)(\text{window\\_length},) containing the window Return type [Tensor](../tensors#torch.Tensor "torch.Tensor") pytorch LazyConv1d LazyConv1d ========== `class torch.nn.LazyConv1d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/conv.html#LazyConv1d) A [`torch.nn.Conv1d`](torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d") module with lazy initialization of the `in_channels` argument of the [`Conv1d`](torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d") that is inferred from the `input.size(1)`. 
Parameters * **out\_channels** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of channels produced by the convolution * **kernel\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Size of the convolving kernel * **stride** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Stride of the convolution. Default: 1 * **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Zero-padding added to both sides of the input. Default: 0 * **padding\_mode** (*string**,* *optional*) – `'zeros'`, `'reflect'`, `'replicate'` or `'circular'`. Default: `'zeros'` * **dilation** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Spacing between kernel elements. Default: 1 * **groups** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of blocked connections from input channels to output channels. Default: 1 * **bias** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, adds a learnable bias to the output. Default: `True` See also [`torch.nn.Conv1d`](torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d") and [`torch.nn.modules.lazy.LazyModuleMixin`](torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin "torch.nn.modules.lazy.LazyModuleMixin") `cls_to_become` alias of [`Conv1d`](torch.nn.conv1d#torch.nn.Conv1d "torch.nn.Conv1d") pytorch torch.qr torch.qr ======== `torch.qr(input, some=True, *, out=None) -> (Tensor, Tensor)` Computes the QR decomposition of a matrix or a batch of matrices `input`, and returns a namedtuple (Q, R) of tensors such that input=QR\text{input} = Q R with QQ being an orthogonal matrix or batch of orthogonal matrices and RR being an upper triangular matrix or batch of upper triangular matrices. If `some` is `True`, then this function returns the thin (reduced) QR factorization. Otherwise, if `some` is `False`, this function returns the complete QR factorization. Warning `torch.qr` is deprecated. Please use [`torch.linalg.qr()`](../linalg#torch.linalg.qr "torch.linalg.qr") instead. **Differences with** `torch.linalg.qr`: * `torch.linalg.qr` takes a string parameter `mode` instead of `some`: + `some=True` is equivalent to `mode='reduced'`: both are the default + `some=False` is equivalent to `mode='complete'`. Warning If you plan to backpropagate through QR, note that the current backward implementation is only well-defined when the first min⁡(input.size(−1),input.size(−2))\min(input.size(-1), input.size(-2)) columns of `input` are linearly independent. This behavior will probably change once QR supports pivoting. Note This function uses LAPACK for CPU inputs and MAGMA for CUDA inputs, and may produce different (valid) decompositions on different device types or different platforms. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor of size (∗,m,n)(\*, m, n) where `*` is zero or more batch dimensions consisting of matrices of dimension m×nm \times n . 
* **some** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Set to `True` for reduced QR decomposition and `False` for complete QR decomposition. If `k = min(m, n)` then: + `some=True` : returns `(Q, R)` with dimensions (m, k), (k, n) (default) + `some=False`: returns `(Q, R)` with dimensions (m, m), (m, n) Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – tuple of `Q` and `R` tensors. The dimensions of `Q` and `R` are detailed in the description of `some` above. Example: ``` >>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]]) >>> q, r = torch.qr(a) >>> q tensor([[-0.8571, 0.3943, 0.3314], [-0.4286, -0.9029, -0.0343], [ 0.2857, -0.1714, 0.9429]]) >>> r tensor([[ -14.0000, -21.0000, 14.0000], [ 0.0000, -175.0000, 70.0000], [ 0.0000, 0.0000, -35.0000]]) >>> torch.mm(q, r).round() tensor([[ 12., -51., 4.], [ 6., 167., -68.], [ -4., 24., -41.]]) >>> torch.mm(q.t(), q).round() tensor([[ 1., 0., 0.], [ 0., 1., -0.], [ 0., -0., 1.]]) >>> a = torch.randn(3, 4, 5) >>> q, r = torch.qr(a, some=False) >>> torch.allclose(torch.matmul(q, r), a) True >>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(5)) True ``` pytorch DistributedDataParallel DistributedDataParallel ======================= `class torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, find_unused_parameters=False, check_reduction=False, gradient_as_bucket_view=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/distributed.html#DistributedDataParallel) Implements distributed data parallelism that is based on `torch.distributed` package at the module level. This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged. The batch size should be larger than the number of GPUs used locally. See also: [Basics](../distributed#distributed-basics) and [Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-nn-ddp-instead). The same constraints on input as in [`torch.nn.DataParallel`](torch.nn.dataparallel#torch.nn.DataParallel "torch.nn.DataParallel") apply. Creation of this class requires that `torch.distributed` be already initialized, by calling [`torch.distributed.init_process_group()`](../distributed#torch.distributed.init_process_group "torch.distributed.init_process_group"). `DistributedDataParallel` is proven to be significantly faster than [`torch.nn.DataParallel`](torch.nn.dataparallel#torch.nn.DataParallel "torch.nn.DataParallel") for single-node multi-GPU data parallel training. To use `DistributedDataParallel` on a host with N GPUs, you should spawn up `N` processes, ensuring that each process exclusively works on a single GPU from 0 to N-1. This can be done by either setting `CUDA_VISIBLE_DEVICES` for every process or by calling: ``` >>> torch.cuda.set_device(i) ``` where i is from 0 to N-1. In each process, you should refer to the following to construct this module: ``` >>> torch.distributed.init_process_group( >>> backend='nccl', world_size=N, init_method='...' 
>>> ) >>> model = DistributedDataParallel(model, device_ids=[i], output_device=i) ``` In order to spawn up multiple processes per node, you can use either `torch.distributed.launch` or `torch.multiprocessing.spawn`. Note Please refer to [PyTorch Distributed Overview](https://pytorch.org/tutorials/beginner/dist_overview.html) for a brief introduction to all features related to distributed training. Note `nccl` backend is currently the fastest and highly recommended backend when using GPUs. This applies to both single-node and multi-node distributed training. Note This module also supports mixed-precision distributed training. This means that your model can have different types of parameters such as mixed types of `fp16` and `fp32`, the gradient reduction on these mixed types of parameters will just work fine. Note If you use `torch.save` on one process to checkpoint the module, and `torch.load` on some other processes to recover it, make sure that `map_location` is configured properly for every process. Without `map_location`, `torch.load` would recover the module to devices where the module was saved from. Note When a model is trained on `M` nodes with `batch=N`, the gradient will be `M` times smaller when compared to the same model trained on a single node with `batch=M*N` if the loss is summed (NOT averaged as usual) across instances in a batch (because the gradients between different nodes are averaged). You should take this into consideration when you want to obtain a mathematically equivalent training process compared to the local training counterpart. But in most cases, you can just treat a DistributedDataParallel wrapped model, a DataParallel wrapped model and an ordinary model on a single GPU as the same (E.g. using the same learning rate for equivalent batch size). Note Parameters are never broadcast between processes. The module performs an all-reduce step on gradients and assumes that they will be modified by the optimizer in all processes in the same way. Buffers (e.g. BatchNorm stats) are broadcast from the module in process of rank 0, to all other replicas in the system in every iteration. Note If you are using DistributedDataParallel in conjunction with the [Distributed RPC Framework](../rpc#distributed-rpc-framework), you should always use [`torch.distributed.autograd.backward()`](../rpc#torch.distributed.autograd.backward "torch.distributed.autograd.backward") to compute gradients and [`torch.distributed.optim.DistributedOptimizer`](../rpc#torch.distributed.optim.DistributedOptimizer "torch.distributed.optim.DistributedOptimizer") for optimizing parameters. 
Example: 

```
>>> import torch.distributed.autograd as dist_autograd
>>> from torch.nn.parallel import DistributedDataParallel as DDP
>>> from torch import optim
>>> from torch.distributed import rpc
>>> from torch.distributed.optim import DistributedOptimizer
>>> from torch.distributed.rpc import RRef
>>>
>>> t1 = torch.rand((3, 3), requires_grad=True)
>>> t2 = torch.rand((3, 3), requires_grad=True)
>>> rref = rpc.remote("worker1", torch.add, args=(t1, t2))
>>> ddp_model = DDP(my_model)  # my_model assumed to be defined
>>>
>>> # Setup optimizer
>>> optimizer_params = [rref]
>>> for param in ddp_model.parameters():
>>>     optimizer_params.append(RRef(param))
>>>
>>> dist_optim = DistributedOptimizer(
>>>     optim.SGD,
>>>     optimizer_params,
>>>     lr=0.05,
>>> )
>>>
>>> with dist_autograd.context() as context_id:
>>>     pred = ddp_model(rref.to_here())
>>>     loss = loss_func(pred, target)  # loss_func and target assumed to be defined
>>>     dist_autograd.backward(context_id, loss)
>>>     dist_optim.step()
```

Warning Constructor, forward method, and differentiation of the output (or a function of the output of this module) are distributed synchronization points. Take that into account in case different processes might be executing different code. Warning This module assumes all parameters are registered in the model by the time it is created. No parameters should be added nor removed later. The same applies to buffers. Warning This module assumes that all parameters are registered in the model of each distributed process in the same order. The module itself will conduct gradient `allreduce` following the reverse order of the registered parameters of the model. In other words, it is the users’ responsibility to ensure that each distributed process has the exact same model and thus the exact same parameter registration order. Warning This module allows parameters with non-rowmajor-contiguous strides. For example, your model may contain some parameters whose `torch.memory_format` is `torch.contiguous_format` and others whose format is `torch.channels_last`. However, corresponding parameters in different processes must have the same strides. Warning This module doesn’t work with [`torch.autograd.grad()`](../autograd#torch.autograd.grad "torch.autograd.grad") (i.e. it will only work if gradients are to be accumulated in `.grad` attributes of parameters). Warning If you plan on using this module with a `nccl` backend or a `gloo` backend (that uses Infiniband), together with a DataLoader that uses multiple workers, please change the multiprocessing start method to `forkserver` (Python 3 only) or `spawn`. Unfortunately Gloo (that uses Infiniband) and NCCL2 are not fork safe, and you will likely experience deadlocks if you don’t change this setting. Warning Forward and backward hooks defined on `module` and its submodules won’t be invoked anymore, unless the hooks are initialized in the `forward()` method. Warning You should never try to change your model’s parameters after wrapping up your model with `DistributedDataParallel`. When wrapping your model, the constructor of `DistributedDataParallel` registers additional gradient reduction functions on all of the model’s parameters at construction time. If you change the model’s parameters afterwards, the gradient reduction functions no longer match the correct set of parameters. Warning Using `DistributedDataParallel` in conjunction with the [Distributed RPC Framework](../rpc#distributed-rpc-framework) is experimental and subject to change.
Warning The `gradient_as_bucket_view` mode does not yet work with Automatic Mixed Precision (AMP). AMP maintains stashed gradients that are used for unscaling gradients. With `gradient_as_bucket_view=True`, these stashed gradients will point to communication buckets in the first iteration. In the next iteration, the communication buckets are mutated and thus these stashed gradients will be unexpectedly mutated as well, which might lead to wrong results. Parameters * **module** ([Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module to be parallelized * **device\_ids** (*list of python:int* *or* [torch.device](../tensor_attributes#torch.torch.device "torch.torch.device")) – CUDA devices. This should only be provided when the input module resides on a single CUDA device. For single-device modules, the i’th `module` replica is placed on `device_ids[i]`. For multi-device modules and CPU modules, `device_ids` must be `None` or an empty list, and input data for the forward pass must be placed on the correct device. (default: all visible devices for single-device modules) * **output\_device** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [torch.device](../tensor_attributes#torch.torch.device "torch.torch.device")) – Device location of output for single-device CUDA modules. For multi-device modules and CPU modules, it must be `None`, and the module itself dictates the output location. (default: `device_ids[0]` for single-device modules) * **broadcast\_buffers** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Flag that enables syncing (broadcasting) buffers of the module at beginning of the `forward` function. (default: `True`) * **process\_group** – The process group to be used for distributed data all-reduction. If `None`, the default process group, which is created by [`torch.distributed.init_process_group()`](../distributed#torch.distributed.init_process_group "torch.distributed.init_process_group"), will be used. (default: `None`) * **bucket\_cap\_mb** – `DistributedDataParallel` will bucket parameters into multiple buckets so that gradient reduction of each bucket can potentially overlap with backward computation. `bucket_cap_mb` controls the bucket size in MegaBytes (MB). (default: 25) * **find\_unused\_parameters** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Traverse the autograd graph from all tensors contained in the return value of the wrapped module’s `forward` function. Parameters that don’t receive gradients as part of this graph are preemptively marked as being ready to be reduced. Note that all `forward` outputs that are derived from module parameters must participate in calculating loss and later the gradient computation. If they don’t, this wrapper will hang waiting for autograd to produce gradients for those parameters. Any outputs derived from module parameters that are otherwise unused can be detached from the autograd graph using `torch.Tensor.detach`. (default: `False`) * **check\_reduction** – This argument is deprecated. * **gradient\_as\_bucket\_view** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – This is a prototype feature and subject to changes. When set to `True`, gradients will be views pointing to different offsets of `allreduce` communication buckets. This can reduce peak memory usage, where the saved memory size will be equal to the total gradients size. 
Moreover, it avoids the overhead of copying between gradients and `allreduce` communication buckets. When gradients are views, `detach_()` cannot be called on the gradients. If you hit such errors, fix them by referring to the [`zero_grad()`](../optim#torch.optim.Optimizer.zero_grad "torch.optim.Optimizer.zero_grad") function in `torch/optim/optimizer.py` as a solution. Variables **~DistributedDataParallel.module** ([Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – the module to be parallelized. Example: 

```
>>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')
>>> net = torch.nn.parallel.DistributedDataParallel(model, pg)
```

`join(divide_by_initial_world_size=True, enable=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/distributed.html#DistributedDataParallel.join) A context manager to be used in conjunction with an instance of [`torch.nn.parallel.DistributedDataParallel`](#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") in order to train with uneven inputs across participating processes. This context manager will keep track of already-joined DDP processes, and “shadow” the forward and backward passes by inserting collective communication operations to match the ones created by non-joined DDP processes. This ensures that each collective call has a corresponding call by already-joined DDP processes, preventing hangs or errors that would otherwise happen when training with uneven inputs across processes. Once all DDP processes have joined, the context manager will broadcast the model corresponding to the last joined process to all processes to ensure the model is the same across all processes (which is guaranteed by DDP). To use this to enable training with uneven inputs across processes, simply wrap this context manager around your training loop. No further modifications to the model or data loading are required. Warning This module works only with the multi-process, single-device usage of [`torch.nn.parallel.DistributedDataParallel`](#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel"), which means that a single process works on a single GPU. Warning This module currently does not support custom distributed collective operations in the forward pass, such as `SyncBatchNorm` or other custom defined collectives in the model’s forward pass. Parameters * **divide\_by\_initial\_world\_size** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, will divide gradients by the initial `world_size` DDP training was launched with. If `False`, will compute the effective world size (number of ranks that have not depleted their inputs yet) and divide gradients by that during allreduce. Set `divide_by_initial_world_size=True` to ensure every input sample, including the uneven inputs, has equal weight in terms of how much it contributes to the global gradient. This is achieved by always dividing the gradient by the initial `world_size`, even when we encounter uneven inputs. If you set this to `False`, we divide the gradient by the remaining number of nodes. This ensures parity with training on a smaller `world_size`, although it also means the uneven inputs would contribute more towards the global gradient. Typically, you would want to set this to `True` for cases where the last few inputs of your training job are uneven.
In extreme cases, where there is a large discrepancy in the number of inputs, setting this to `False` might provide better results. * **enable** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to enable uneven input detection or not. Pass in `enable=False` to disable in cases where you know that inputs are even across participating processes. Default is `True`. Example: ``` >>> import torch >>> import torch.distributed as dist >>> import os >>> import torch.multiprocessing as mp >>> import torch.nn as nn >>> # On each spawned worker >>> def worker(rank): >>> dist.init_process_group("nccl", rank=rank, world_size=2) >>> torch.cuda.set_device(rank) >>> model = nn.Linear(1, 1, bias=False).to(rank) >>> model = torch.nn.parallel.DistributedDataParallel( >>> model, device_ids=[rank], output_device=rank >>> ) >>> # Rank 1 gets one more input than rank 0. >>> inputs = [torch.tensor([1]).float() for _ in range(10 + rank)] >>> with model.join(): >>> for _ in range(5): >>> for inp in inputs: >>> loss = model(inp).sum() >>> loss.backward() >>> # Without the join() API, the below synchronization will hang >>> # blocking for rank 1's allreduce to complete. >>> torch.cuda.synchronize(device=rank) ``` `no_sync()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/distributed.html#DistributedDataParallel.no_sync) A context manager to disable gradient synchronizations across DDP processes. Within this context, gradients will be accumulated on module variables, which will later be synchronized in the first forward-backward pass exiting the context. Example: ``` >>> ddp = torch.nn.parallel.DistributedDataParallel(model, pg) >>> with ddp.no_sync(): >>> for input in inputs: >>> ddp(input).backward() # no synchronization, accumulate grads >>> ddp(another_input).backward() # synchronize grads ``` `register_comm_hook(state, hook)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/distributed.html#DistributedDataParallel.register_comm_hook) Registers a communication hook which is an enhancement that provides a flexible hook to users where they can specify how DDP aggregates gradients across multiple workers. This hook would be very useful for researchers to try out new ideas. For example, this hook can be used to implement several algorithms like GossipGrad and gradient compression which involve different communication strategies for parameter syncs while running Distributed DataParallel training. Parameters * **state** ([object](https://docs.python.org/3/library/functions.html#object "(in Python v3.9)")) – Passed to the hook to maintain any state information during the training process. Examples include error feedback in gradient compression, peers to communicate with next in GossipGrad, etc. It is locally stored by each worker and shared by all the gradient tensors on the worker. * **hook** (*callable*) – Averages gradient tensors across workers and defined as: `hook(state: object, bucket: dist._GradBucket) -> torch.futures.Future`: This function is called once the bucket is ready. The hook can perform whatever processing is needed and return a Future indicating completion of any async work (ex: allreduce). If the hook doesn’t perform any communication, it can also just return a completed Future. The Future should hold the new value of grad bucket’s tensors. Once a bucket is ready, c10d reducer would call this hook and use the tensors returned by the Future and copy grads to individual parameters. 
We also provide an API called `get_future` to retrieve a Future associated with the completion of `c10d.ProcessGroup.work`. Warning The grad bucket’s tensors will not be predivided by world\_size. The user is responsible for dividing by world\_size when performing operations like allreduce. Warning The DDP communication hook can only be registered once and should be registered before calling backward. Warning The Future object that the hook returns should contain a result with the same shape as the tensors inside the grad bucket. Warning The DDP communication hook does not support single-process multiple-device mode. GradBucket tensors should consist of only a single tensor. Warning The `get_future` API supports only the NCCL backend and will return a `torch._C.Future`, which is an internal type and should be used with caution. It can still be used by the `register_comm_hook` API, but it is subject to some subtle differences compared to `torch.futures.Future`. Warning The DDP communication hook is experimental and subject to change. Example:: Below is an example of a noop hook that returns the same tensors. 

```
>>> def noop(state: object, bucket: dist._GradBucket) -> torch.futures.Future:
>>>     fut = torch.futures.Future()
>>>     fut.set_result(bucket.get_tensors())
>>>     return fut
```

```
>>> ddp.register_comm_hook(state=None, hook=noop)
```

Example:: Below is an example of a Parallel SGD algorithm where gradients are encoded before allreduce, and then decoded after allreduce (`encode` and `decode` are assumed to be user-defined functions). 

```
>>> def encode_and_decode(state: object, bucket: dist._GradBucket) -> torch.futures.Future:
>>>     tensors = [t / process_group.world_size for t in bucket.get_tensors()]
>>>     encoded_tensors = encode(tensors) # encode gradients
>>>     fut = process_group.allreduce(encoded_tensors).get_future()
>>>     # Define the `then` callback to decode.
>>>     def decode_callback(fut):
>>>         decoded_tensors = decode(fut.value()) # decode gradients
>>>         return decoded_tensors
>>>     return fut.then(decode_callback)
```

```
>>> ddp.register_comm_hook(state=None, hook=encode_and_decode)
```
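The pieces described above (process-group setup, one process per GPU, wrapping the module) are easiest to see together. Below is a minimal single-node sketch, not from the original entry, assuming at least two visible GPUs and an available `nccl` backend; the address, port, model, and data are placeholders:

```
# Minimal single-node DDP sketch: one spawned process per GPU.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank, world_size):
    # Each spawned process initializes the default process group once.
    os.environ["MASTER_ADDR"] = "127.0.0.1"  # placeholder rendezvous address
    os.environ["MASTER_PORT"] = "29500"      # placeholder port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)  # one process <-> one GPU

    model = nn.Linear(10, 10).to(rank)       # placeholder model
    ddp_model = DDP(model, device_ids=[rank], output_device=rank)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    inputs = torch.randn(20, 10, device=rank)   # placeholder data
    targets = torch.randn(20, 10, device=rank)

    optimizer.zero_grad()
    loss = nn.functional.mse_loss(ddp_model(inputs), targets)
    loss.backward()   # gradients are all-reduced across processes here
    optimizer.step()  # every rank applies the same averaged gradients

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

For CPU-only experimentation the same structure applies with the `gloo` backend and the `.to(rank)`/`device_ids` pieces removed.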
pytorch torch.cholesky_inverse torch.cholesky\_inverse ======================= `torch.cholesky_inverse(input, upper=False, *, out=None) → Tensor` Computes the inverse of a symmetric positive-definite matrix A using its Cholesky factor u: returns matrix `inv`. The inverse is computed using the LAPACK routines `dpotri` and `spotri` (and the corresponding MAGMA routines). If `upper` is `False`, u is lower triangular and the returned tensor is inv = (uu^T)^{-1}. If `upper` is `True` or not provided, u is upper triangular and the returned tensor is inv = (u^T u)^{-1}. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input 2-D tensor u, an upper or lower triangular Cholesky factor * **upper** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to return a lower (default) or upper triangular matrix Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor for `inv` Example: 

```
>>> a = torch.randn(3, 3)
>>> a = torch.mm(a, a.t()) + 1e-05 * torch.eye(3) # make symmetric positive definite
>>> u = torch.cholesky(a)
>>> a
tensor([[  0.9935,  -0.6353,   1.5806],
        [ -0.6353,   0.8769,  -1.7183],
        [  1.5806,  -1.7183,  10.6618]])
>>> torch.cholesky_inverse(u)
tensor([[ 1.9314,  1.2251, -0.0889],
        [ 1.2251,  2.4439,  0.2122],
        [-0.0889,  0.2122,  0.1412]])
>>> a.inverse()
tensor([[ 1.9314,  1.2251, -0.0889],
        [ 1.2251,  2.4439,  0.2122],
        [-0.0889,  0.2122,  0.1412]])
```

pytorch torch.multinomial torch.multinomial ================= `torch.multinomial(input, num_samples, replacement=False, *, generator=None, out=None) → LongTensor` Returns a tensor where each row contains `num_samples` indices sampled from the multinomial probability distribution located in the corresponding row of tensor `input`. Note The rows of `input` do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum. Indices are ordered from left to right according to when each was sampled (first samples are placed in the first column). If `input` is a vector, `out` is a vector of size `num_samples`. If `input` is a matrix with `m` rows, `out` is a matrix of shape (m × num_samples). If replacement is `True`, samples are drawn with replacement. If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row. Note When drawn without replacement, `num_samples` must be lower than the number of non-zero elements in `input` (or the minimum number of non-zero elements in each row of `input` if it is a matrix). Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor containing probabilities * **num\_samples** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of samples to draw * **replacement** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to draw with replacement or not Keyword Arguments * **generator** ([`torch.Generator`](torch.generator#torch.Generator "torch.Generator"), optional) – a pseudorandom number generator for sampling * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
Example: 

```
>>> weights = torch.tensor([0, 10, 3, 0], dtype=torch.float) # create a tensor of weights
>>> torch.multinomial(weights, 2)
tensor([1, 2])
>>> torch.multinomial(weights, 4) # ERROR!
RuntimeError: invalid argument 2: invalid multinomial distribution (with replacement=False,
not enough non-negative category to sample) at ../aten/src/TH/generic/THTensorRandom.cpp:320
>>> torch.multinomial(weights, 4, replacement=True)
tensor([ 2, 1, 1, 1])
```

pytorch torch.lstsq torch.lstsq =========== `torch.lstsq(input, A, *, out=None) → Tensor` Computes the solution to the least squares and least norm problems for a full-rank matrix A of size (m × n) and a matrix B of size (m × k). If m ≥ n, [`lstsq()`](#torch.lstsq "torch.lstsq") solves the least-squares problem: min_X ‖AX − B‖_2. If m < n, [`lstsq()`](#torch.lstsq "torch.lstsq") solves the least-norm problem: min_X ‖X‖_2 subject to AX = B. The returned tensor X has shape (max(m, n) × k). The first n rows of X contain the solution. If m ≥ n, the residual sum of squares for the solution in each column is given by the sum of squares of elements in the remaining m − n rows of that column. Note The case when m < n is not supported on the GPU. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the matrix B * **A** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the m by n matrix A Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the optional destination tensor Returns A namedtuple (solution, QR) containing: * **solution** (*Tensor*): the least squares solution * **QR** (*Tensor*): the details of the QR factorization Return type ([Tensor](../tensors#torch.Tensor "torch.Tensor"), [Tensor](../tensors#torch.Tensor "torch.Tensor")) Note The returned matrices will always be transposed, irrespective of the strides of the input matrices. That is, they will have stride `(1, m)` instead of `(m, 1)`. Example: 

```
>>> A = torch.tensor([[1., 1, 1],
...                   [2, 3, 4],
...                   [3, 5, 2],
...                   [4, 2, 5],
...                   [5, 4, 3]])
>>> B = torch.tensor([[-10., -3],
...                   [ 12, 14],
...                   [ 14, 12],
...                   [ 16, 16],
...                   [ 18, 16]])
>>> X, _ = torch.lstsq(B, A)
>>> X
tensor([[  2.0000,   1.0000],
        [  1.0000,   1.0000],
        [  1.0000,   2.0000],
        [ 10.9635,   4.8501],
        [  8.9332,   5.2418]])
```

pytorch torch.fake_quantize_per_tensor_affine torch.fake\_quantize\_per\_tensor\_affine ========================================= `torch.fake_quantize_per_tensor_affine(input, scale, zero_point, quant_min, quant_max) → Tensor` Returns a new tensor with the data in `input` fake quantized using `scale`, `zero_point`, `quant_min` and `quant_max`. output = min(quant_max, max(quant_min, std::nearby_int(input / scale) + zero_point)) Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input value(s), in `torch.float32`.
* **scale** (*double*) – quantization scale * **zero\_point** (*int64*) – quantization zero\_point * **quant\_min** (*int64*) – lower bound of the quantized domain * **quant\_max** (*int64*) – upper bound of the quantized domain Returns A newly fake\_quantized tensor Return type [Tensor](../tensors#torch.Tensor "torch.Tensor") Example: ``` >>> x = torch.randn(4) >>> x tensor([ 0.0552, 0.9730, 0.3973, -1.0780]) >>> torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255) tensor([0.1000, 1.0000, 0.4000, 0.0000]) ``` pytorch ScriptFunction ScriptFunction ============== `class torch.jit.ScriptFunction` Functionally equivalent to a [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule"), but represents a single function and does not have any attributes or Parameters. `get_debug_state(self: torch._C.ScriptFunction) → torch._C.GraphExecutorState` `save(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] = {}) → None` `save_to_buffer(self: torch._C.ScriptFunction, _extra_files: Dict[str, str] = {}) → bytes` pytorch torch.abs torch.abs ========= `torch.abs(input, *, out=None) → Tensor` Computes the absolute value of each element in `input`. outi=∣inputi∣\text{out}\_{i} = |\text{input}\_{i}| Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> torch.abs(torch.tensor([-1, -2, 3])) tensor([ 1, 2, 3]) ``` pytorch torch.tril torch.tril ========== `torch.tril(input, diagonal=0, *, out=None) → Tensor` Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices `input`, the other elements of the result tensor `out` are set to 0. The lower triangular part of the matrix is defined as the elements on and below the diagonal. The argument [`diagonal`](torch.diagonal#torch.diagonal "torch.diagonal") controls which diagonal to consider. If [`diagonal`](torch.diagonal#torch.diagonal "torch.diagonal") = 0, all elements on and below the main diagonal are retained. A positive value includes just as many diagonals above the main diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal. The main diagonal are the set of indices {(i,i)}\lbrace (i, i) \rbrace for i∈[0,min⁡{d1,d2}−1]i \in [0, \min\{d\_{1}, d\_{2}\} - 1] where d1,d2d\_{1}, d\_{2} are the dimensions of the matrix. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **diagonal** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the diagonal to consider Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. 
Example: ``` >>> a = torch.randn(3, 3) >>> a tensor([[-1.0813, -0.8619, 0.7105], [ 0.0935, 0.1380, 2.2112], [-0.3409, -0.9828, 0.0289]]) >>> torch.tril(a) tensor([[-1.0813, 0.0000, 0.0000], [ 0.0935, 0.1380, 0.0000], [-0.3409, -0.9828, 0.0289]]) >>> b = torch.randn(4, 6) >>> b tensor([[ 1.2219, 0.5653, -0.2521, -0.2345, 1.2544, 0.3461], [ 0.4785, -0.4477, 0.6049, 0.6368, 0.8775, 0.7145], [ 1.1502, 3.2716, -1.1243, -0.5413, 0.3615, 0.6864], [-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0978]]) >>> torch.tril(b, diagonal=1) tensor([[ 1.2219, 0.5653, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.4785, -0.4477, 0.6049, 0.0000, 0.0000, 0.0000], [ 1.1502, 3.2716, -1.1243, -0.5413, 0.0000, 0.0000], [-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0000]]) >>> torch.tril(b, diagonal=-1) tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.4785, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [ 1.1502, 3.2716, 0.0000, 0.0000, 0.0000, 0.0000], [-0.0614, -0.7344, -1.3164, 0.0000, 0.0000, 0.0000]]) ``` pytorch torch.column_stack torch.column\_stack =================== `torch.column_stack(tensors, *, out=None) → Tensor` Creates a new tensor by horizontally stacking the tensors in `tensors`. Equivalent to `torch.hstack(tensors)`, except each zero or one dimensional tensor `t` in `tensors` is first reshaped into a `(t.numel(), 1)` column before being stacked horizontally. Parameters **tensors** (*sequence of Tensors*) – sequence of tensors to concatenate Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.tensor([1, 2, 3]) >>> b = torch.tensor([4, 5, 6]) >>> torch.column_stack((a, b)) tensor([[1, 4], [2, 5], [3, 6]]) >>> a = torch.arange(5) >>> b = torch.arange(10).reshape(5, 2) >>> torch.column_stack((a, b, b)) tensor([[0, 0, 1, 0, 1], [1, 2, 3, 2, 3], [2, 4, 5, 4, 5], [3, 6, 7, 6, 7], [4, 8, 9, 8, 9]]) ``` pytorch torch.maximum torch.maximum ============= `torch.maximum(input, other, *, out=None) → Tensor` Computes the element-wise maximum of `input` and `other`. Note If one of the elements being compared is a NaN, then that element is returned. [`maximum()`](#torch.maximum "torch.maximum") is not supported for tensors with complex dtypes. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the second input tensor Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.tensor((1, 2, -1)) >>> b = torch.tensor((3, 0, 4)) >>> torch.maximum(a, b) tensor([3, 2, 4]) ``` pytorch torch.bincount torch.bincount ============== `torch.bincount(input, weights=None, minlength=0) → Tensor` Count the frequency of each value in an array of non-negative ints. The number of bins (size 1) is one larger than the largest value in `input` unless `input` is empty, in which case the result is a tensor of size 0. If `minlength` is specified, the number of bins is at least `minlength` and if `input` is empty, then the result is tensor of size `minlength` filled with zeros. If `n` is the value at position `i`, `out[n] += weights[i]` if `weights` is specified else `out[n] += 1`. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for more information. 
Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – 1-d int tensor * **weights** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – optional, weight for each value in the input tensor. Should be of the same size as the input tensor. * **minlength** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – optional, minimum number of bins. Should be non-negative. Returns a tensor of shape `Size([max(input) + 1])` if `input` is non-empty, else `Size(0)` Return type output ([Tensor](../tensors#torch.Tensor "torch.Tensor")) Example: 

```
>>> input = torch.randint(0, 8, (5,), dtype=torch.int64)
>>> weights = torch.linspace(0, 1, steps=5)
>>> input, weights
(tensor([4, 3, 6, 3, 4]),
 tensor([ 0.0000,  0.2500,  0.5000,  0.7500,  1.0000]))
>>> torch.bincount(input)
tensor([0, 0, 0, 2, 2, 0, 1])
>>> input.bincount(weights)
tensor([0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 0.0000, 0.5000])
```

pytorch torch.logspace torch.logspace ============== `torch.logspace(start, end, steps, base=10.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor` Creates a one-dimensional tensor of size `steps` whose values are evenly spaced from base^start to base^end, inclusive, on a logarithmic scale with base `base`. That is, the values are: (base^start, base^(start + (end − start)/(steps − 1)), …, base^(start + (steps − 2) · (end − start)/(steps − 1)), base^end) Warning Not providing a value for `steps` is deprecated. For backwards compatibility, not providing a value for `steps` will create a tensor with 100 elements. Note that this behavior is not reflected in the documented function signature and should not be relied on. In a future PyTorch release, failing to provide a value for `steps` will throw a runtime error. Parameters * **start** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the starting value for the set of points * **end** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the ending value for the set of points * **steps** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – size of the constructed tensor * **base** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – base of the logarithm function. Default: `10.0`. Keyword Arguments * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned Tensor. Default: `torch.strided`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor.
Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: 

```
>>> torch.logspace(start=-10, end=10, steps=5)
tensor([ 1.0000e-10,  1.0000e-05,  1.0000e+00,  1.0000e+05,  1.0000e+10])
>>> torch.logspace(start=0.1, end=1.0, steps=5)
tensor([  1.2589,   2.1135,   3.5481,   5.9566,  10.0000])
>>> torch.logspace(start=0.1, end=1.0, steps=1)
tensor([1.2589])
>>> torch.logspace(start=2, end=2, steps=1, base=2)
tensor([4.0])
```

pytorch L1Loss L1Loss ====== `class torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#L1Loss) Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y. The unreduced (i.e. with `reduction` set to `'none'`) loss can be described as: \ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = |x_n - y_n|, where N is the batch size. If `reduction` is not `'none'` (default `'mean'`), then: \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases} x and y are tensors of arbitrary shapes with a total of n elements each. The sum operation still operates over all the elements, and divides by n. The division by n can be avoided if one sets `reduction = 'sum'`. Supports real-valued and complex-valued inputs. Parameters * **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True` * **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'` Shape: * Input: (N, \*) where \* means any number of additional dimensions * Target: (N, \*), same shape as the input * Output: scalar.
If `reduction` is `'none'`, then (N, \*), same shape as the input Examples: 

```
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
```
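To make the three `reduction` modes concrete, here is a small sketch with hand-picked values; the expected outputs in the comments follow directly from the formula above:

```
import torch
import torch.nn as nn

input = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.5, 2.0, 1.0])

# Element-wise absolute errors: |1-1.5|=0.5, |2-2|=0.0, |3-1|=2.0
print(nn.L1Loss(reduction='none')(input, target))  # tensor([0.5000, 0.0000, 2.0000])
print(nn.L1Loss(reduction='sum')(input, target))   # tensor(2.5000)
print(nn.L1Loss(reduction='mean')(input, target))  # tensor(0.8333)
```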
pytorch torch.log torch.log ========= `torch.log(input, *, out=None) → Tensor` Returns a new tensor with the natural logarithm of the elements of `input`. y_i = log_e(x_i) Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: 

```
>>> a = torch.randn(5)
>>> a
tensor([-0.7168, -0.5471, -0.8933, -1.4428, -0.1190])
>>> torch.log(a)
tensor([ nan,  nan,  nan,  nan,  nan])
```

pytorch torch.searchsorted torch.searchsorted ================== `torch.searchsorted(sorted_sequence, values, *, out_int32=False, right=False, out=None) → Tensor` Find the indices from the *innermost* dimension of `sorted_sequence` such that, if the corresponding values in `values` were inserted before the indices, the order of the corresponding *innermost* dimension within `sorted_sequence` would be preserved. Return a new tensor with the same size as `values`. If `right` is False (default), then the left boundary of `sorted_sequence` is closed. More formally, the returned index satisfies the following rules: | `sorted_sequence` | `right` | *returned index satisfies* | | --- | --- | --- | | 1-D | False | `sorted_sequence[i-1] < values[m][n]...[l][x] <= sorted_sequence[i]` | | 1-D | True | `sorted_sequence[i-1] <= values[m][n]...[l][x] < sorted_sequence[i]` | | N-D | False | `sorted_sequence[m][n]...[l][i-1] < values[m][n]...[l][x] <= sorted_sequence[m][n]...[l][i]` | | N-D | True | `sorted_sequence[m][n]...[l][i-1] <= values[m][n]...[l][x] < sorted_sequence[m][n]...[l][i]` | Parameters * **sorted\_sequence** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – N-D or 1-D tensor, containing a monotonically increasing sequence on the *innermost* dimension. * **values** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* *Scalar*) – N-D tensor or a Scalar containing the search value(s). Keyword Arguments * **out\_int32** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – indicates the output data type: torch.int32 if True, torch.int64 otherwise. The default value is False, i.e. the default output data type is torch.int64. * **right** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if False, return the first suitable location that is found. If True, return the last such index. If no suitable index is found, return 0 for a non-numerical value (e.g. nan, inf) or the size of the *innermost* dimension within `sorted_sequence` (one past the last index of the *innermost* dimension). In other words, if False, gets the lower bound index for each value in `values` on the corresponding *innermost* dimension of the `sorted_sequence`. If True, gets the upper bound index instead. The default value is False. * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor, must be the same size as `values` if provided. Note If your use case is always a 1-D sorted sequence, [`torch.bucketize()`](torch.bucketize#torch.bucketize "torch.bucketize") is preferred, because it has fewer dimension checks, resulting in slightly better performance.
Example: ``` >>> sorted_sequence = torch.tensor([[1, 3, 5, 7, 9], [2, 4, 6, 8, 10]]) >>> sorted_sequence tensor([[ 1, 3, 5, 7, 9], [ 2, 4, 6, 8, 10]]) >>> values = torch.tensor([[3, 6, 9], [3, 6, 9]]) >>> values tensor([[3, 6, 9], [3, 6, 9]]) >>> torch.searchsorted(sorted_sequence, values) tensor([[1, 3, 4], [1, 2, 4]]) >>> torch.searchsorted(sorted_sequence, values, right=True) tensor([[2, 3, 5], [1, 3, 4]]) >>> sorted_sequence_1d = torch.tensor([1, 3, 5, 7, 9]) >>> sorted_sequence_1d tensor([1, 3, 5, 7, 9]) >>> torch.searchsorted(sorted_sequence_1d, values) tensor([[1, 3, 4], [1, 3, 4]]) ``` pytorch torch.swapdims torch.swapdims ============== `torch.swapdims(input, dim0, dim1) → Tensor` Alias for [`torch.transpose()`](torch.transpose#torch.transpose "torch.transpose"). This function is equivalent to NumPy’s swapaxes function. Examples: ``` >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]]) >>> x tensor([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> torch.swapdims(x, 0, 1) tensor([[[0, 1], [4, 5]], [[2, 3], [6, 7]]]) >>> torch.swapdims(x, 0, 2) tensor([[[0, 4], [2, 6]], [[1, 5], [3, 7]]]) ``` pytorch torch.matrix_exp torch.matrix\_exp ================= `torch.matrix_exp()` Returns the matrix exponential. Supports batched input. For a matrix `A`, the matrix exponential is defined as eA=∑k=0∞Ak/k!\mathrm{e}^A = \sum\_{k=0}^\infty A^k / k! The implementation is based on: Bader, P.; Blanes, S.; Casas, F. Computing the Matrix Exponential with an Optimized Taylor Polynomial Approximation. Mathematics 2019, 7, 1174. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Example: ``` >>> a = torch.randn(2, 2, 2) >>> a[0, :, :] = torch.eye(2, 2) >>> a[1, :, :] = 2 * torch.eye(2, 2) >>> a tensor([[[1., 0.], [0., 1.]], [[2., 0.], [0., 2.]]]) >>> torch.matrix_exp(a) tensor([[[2.7183, 0.0000], [0.0000, 2.7183]], [[7.3891, 0.0000], [0.0000, 7.3891]]]) >>> import math >>> x = torch.tensor([[0, math.pi/3], [-math.pi/3, 0]]) >>> x.matrix_exp() # should be [[cos(pi/3), sin(pi/3)], [-sin(pi/3), cos(pi/3)]] tensor([[ 0.5000, 0.8660], [-0.8660, 0.5000]]) ``` pytorch torch.atleast_1d torch.atleast\_1d ================= `torch.atleast_1d(*tensors)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#atleast_1d) Returns a 1-dimensional view of each input tensor with zero dimensions. Input tensors with one or more dimensions are returned as-is. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* *list of Tensors*) – Returns output (Tensor or tuple of Tensors) Example:: ``` >>> x = torch.randn(2) >>> x tensor([1.4584, 0.7583]) >>> torch.atleast_1d(x) tensor([1.4584, 0.7583]) >>> x = torch.tensor(1.) >>> x tensor(1.) >>> torch.atleast_1d(x) tensor([1.]) >>> x = torch.tensor(0.5) >>> y = torch.tensor(1.) >>> torch.atleast_1d((x,y)) (tensor([0.5000]), tensor([1.])) ``` pytorch torch.get_num_interop_threads torch.get\_num\_interop\_threads ================================ `torch.get_num_interop_threads() → int` Returns the number of threads used for inter-op parallelism on CPU (e.g. in JIT interpreter) pytorch torch.bartlett_window torch.bartlett\_window ====================== `torch.bartlett_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor` Bartlett window function. 
w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases} \frac{2n}{N-1} & \text{if } 0 \leq n \leq \frac{N-1}{2} \\ 2 - \frac{2n}{N-1} & \text{if } \frac{N-1}{2} < n < N \end{cases}, where N is the full window size. The input `window_length` is a positive integer controlling the returned window size. The `periodic` flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like [`torch.stft()`](torch.stft#torch.stft "torch.stft"). Therefore, if `periodic` is true, the N in the above formula is in fact window_length + 1. Also, we always have `torch.bartlett_window(L, periodic=True)` equal to `torch.bartlett_window(L + 1, periodic=False)[:-1]`. Note If `window_length` = 1, the returned window contains a single value 1. Parameters * **window\_length** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the size of returned window * **periodic** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If True, returns a window to be used as a periodic function. If False, returns a symmetric window. Keyword Arguments * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). Only floating point types are supported. * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned window tensor. Only `torch.strided` (dense layout) is supported. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Returns A 1-D tensor of size (window_length,) containing the window Return type [Tensor](../tensors#torch.Tensor "torch.Tensor") pytorch torch.arccos torch.arccos ============ `torch.arccos(input, *, out=None) → Tensor` Alias for [`torch.acos()`](torch.acos#torch.acos "torch.acos"). pytorch torch.amin torch.amin ========== `torch.amin(input, dim, keepdim=False, *, out=None) → Tensor` Returns the minimum value of each slice of the `input` tensor in the given dimension(s) `dim`. Note The differences between `max`/`min` and `amax`/`amin` are: * `amax`/`amin` supports reducing on multiple dimensions, * `amax`/`amin` does not return indices, * `amax`/`amin` evenly distributes gradient between equal values, while `max(dim)`/`min(dim)` propagates gradient only to a single index in the source tensor. If `keepdim` is `True`, the output tensors are of the same size as `input` except in the dimension(s) `dim` where they are of size 1.
Otherwise, `dim`s are squeezed (see :func:`torch.squeeze`), resulting in the output tensors having fewer dimensions than `input`. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – the dimension or dimensions to reduce. * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.randn(4, 4) >>> a tensor([[ 0.6451, -0.4866, 0.2987, -1.3312], [-0.5744, 1.2980, 1.8397, -0.2713], [ 0.9128, 0.9214, -1.7268, -0.2995], [ 0.9023, 0.4853, 0.9075, -1.6165]]) >>> torch.amin(a, 1) tensor([-1.3312, -0.5744, -1.7268, -1.6165]) ``` pytorch torch.isnan torch.isnan =========== `torch.isnan(input) → Tensor` Returns a new tensor with boolean elements representing if each element of `input` is NaN or not. Complex values are considered NaN when either their real and/or imaginary part is NaN. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Returns A boolean tensor that is True where `input` is NaN and False elsewhere Example: ``` >>> torch.isnan(torch.tensor([1, float('nan'), 2])) tensor([False, True, False]) ``` pytorch torch.roll torch.roll ========== `torch.roll(input, shifts, dims=None) → Tensor` Roll the tensor along the given dimension(s). Elements that are shifted beyond the last position are re-introduced at the first position. If a dimension is not specified, the tensor will be flattened before rolling and then restored to the original shape. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **shifts** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – The number of places by which the elements of the tensor are shifted. If shifts is a tuple, dims must be a tuple of the same size, and each dimension will be rolled by the corresponding value * **dims** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – Axis along which to roll Example: ``` >>> x = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(4, 2) >>> x tensor([[1, 2], [3, 4], [5, 6], [7, 8]]) >>> torch.roll(x, 1, 0) tensor([[7, 8], [1, 2], [3, 4], [5, 6]]) >>> torch.roll(x, -1, 0) tensor([[3, 4], [5, 6], [7, 8], [1, 2]]) >>> torch.roll(x, shifts=(2, 1), dims=(0, 1)) tensor([[6, 5], [8, 7], [2, 1], [4, 3]]) ``` pytorch LeakyReLU LeakyReLU ========= `class torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#LeakyReLU) Applies the element-wise function: LeakyReLU(x)=max⁡(0,x)+negative\_slope∗min⁡(0,x)\text{LeakyReLU}(x) = \max(0, x) + \text{negative\\_slope} \* \min(0, x) or LeakyRELU(x)={x, if x≥0negative\_slope×x, otherwise \text{LeakyRELU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{negative\\_slope} \times x, & \text{ otherwise } \end{cases} Parameters * **negative\_slope** – Controls the angle of the negative slope. Default: 1e-2 * **inplace** – can optionally do the operation in-place. 
Default: `False` Shape: * Input: (N, \*) where `*` means any number of additional dimensions * Output: (N, \*), same shape as the input Examples: 

```
>>> m = nn.LeakyReLU(0.1)
>>> input = torch.randn(2)
>>> output = m(input)
```

pytorch torch.initial_seed torch.initial\_seed =================== `torch.initial_seed()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#initial_seed) Returns the initial seed for generating random numbers as a Python `long`. pytorch torch.block_diag torch.block\_diag ================= `torch.block_diag(*tensors)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#block_diag) Create a block diagonal matrix from provided tensors. Parameters **\*tensors** – One or more tensors with 0, 1, or 2 dimensions. Returns A 2 dimensional tensor with all the input tensors arranged in order such that their upper left and lower right corners are diagonally adjacent. All other elements are set to 0. Return type [Tensor](../tensors#torch.Tensor "torch.Tensor") Example: 

```
>>> import torch
>>> A = torch.tensor([[0, 1], [1, 0]])
>>> B = torch.tensor([[3, 4, 5], [6, 7, 8]])
>>> C = torch.tensor(7)
>>> D = torch.tensor([1, 2, 3])
>>> E = torch.tensor([[4], [5], [6]])
>>> torch.block_diag(A, B, C, D, E)
tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 3, 4, 5, 0, 0, 0, 0, 0],
        [0, 0, 6, 7, 8, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 7, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 1, 2, 3, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 4],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 5],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 6]])
```

pytorch UpsamplingBilinear2d UpsamplingBilinear2d ==================== `class torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/upsampling.html#UpsamplingBilinear2d) Applies a 2D bilinear upsampling to an input signal composed of several input channels. To specify the scale, it takes either the `size` or the `scale_factor` as its constructor argument. When `size` is given, it is the output size of the image `(h, w)`. Parameters * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – output spatial sizes * **scale\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* *Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – multiplier for spatial size. Warning This class is deprecated in favor of `interpolate()`. It is equivalent to `nn.functional.interpolate(..., mode='bilinear', align_corners=True)`.
Shape: * Input: (N,C,Hin,Win)(N, C, H\_{in}, W\_{in}) * Output: (N,C,Hout,Wout)(N, C, H\_{out}, W\_{out}) where Hout=⌊Hin×scale\_factor⌋H\_{out} = \left\lfloor H\_{in} \times \text{scale\\_factor} \right\rfloor Wout=⌊Win×scale\_factor⌋W\_{out} = \left\lfloor W\_{in} \times \text{scale\\_factor} \right\rfloor Examples: ``` >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2) >>> input tensor([[[[ 1., 2.], [ 3., 4.]]]]) >>> m = nn.UpsamplingBilinear2d(scale_factor=2) >>> m(input) tensor([[[[ 1.0000, 1.3333, 1.6667, 2.0000], [ 1.6667, 2.0000, 2.3333, 2.6667], [ 2.3333, 2.6667, 3.0000, 3.3333], [ 3.0000, 3.3333, 3.6667, 4.0000]]]]) ``` pytorch torch.meshgrid torch.meshgrid ============== `torch.meshgrid(*tensors)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#meshgrid) Take NN tensors, each of which can be either scalar or 1-dimensional vector, and create NN N-dimensional grids, where the ii th grid is defined by expanding the ii th input over dimensions defined by other inputs. Parameters **tensors** (*list of Tensor*) – list of scalars or 1 dimensional tensors. Scalars will be treated as tensors of size (1,)(1,) automatically Returns If the input has kk tensors of size (N1,),(N2,),…,(Nk,)(N\_1,), (N\_2,), \ldots , (N\_k,) , then the output would also have kk tensors, where all tensors are of size (N1,N2,…,Nk)(N\_1, N\_2, \ldots , N\_k) . Return type seq (sequence of Tensors) Example: ``` >>> x = torch.tensor([1, 2, 3]) >>> y = torch.tensor([4, 5, 6]) >>> grid_x, grid_y = torch.meshgrid(x, y) >>> grid_x tensor([[1, 1, 1], [2, 2, 2], [3, 3, 3]]) >>> grid_y tensor([[4, 5, 6], [4, 5, 6], [4, 5, 6]]) ``` pytorch LazyConvTranspose1d LazyConvTranspose1d =================== `class torch.nn.LazyConvTranspose1d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/conv.html#LazyConvTranspose1d) A [`torch.nn.ConvTranspose1d`](torch.nn.convtranspose1d#torch.nn.ConvTranspose1d "torch.nn.ConvTranspose1d") module with lazy initialization of the `in_channels` argument of the [`ConvTranspose1d`](torch.nn.convtranspose1d#torch.nn.ConvTranspose1d "torch.nn.ConvTranspose1d") that is inferred from the `input.size(1)`. Parameters * **out\_channels** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of channels produced by the convolution * **kernel\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Size of the convolving kernel * **stride** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Stride of the convolution. Default: 1 * **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – `dilation * (kernel_size - 1) - padding` zero-padding will be added to both sides of the input. Default: 0 * **output\_padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Additional size added to one side of the output shape. 
Default: 0 * **groups** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of blocked connections from input channels to output channels. Default: 1 * **bias** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, adds a learnable bias to the output. Default: `True` * **dilation** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Spacing between kernel elements. Default: 1 See also [`torch.nn.ConvTranspose1d`](torch.nn.convtranspose1d#torch.nn.ConvTranspose1d "torch.nn.ConvTranspose1d") and [`torch.nn.modules.lazy.LazyModuleMixin`](torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin "torch.nn.modules.lazy.LazyModuleMixin") `cls_to_become` alias of [`ConvTranspose1d`](torch.nn.convtranspose1d#torch.nn.ConvTranspose1d "torch.nn.ConvTranspose1d")
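A brief sketch, with shapes chosen purely for illustration, of the lazy `in_channels` inference described above: the first forward pass materializes the weight from `input.size(1)`, after which the module behaves like an ordinary `ConvTranspose1d` (per `cls_to_become`):

```
import torch
import torch.nn as nn

# in_channels is deliberately not specified here.
m = nn.LazyConvTranspose1d(out_channels=16, kernel_size=3)

x = torch.randn(4, 8, 32)  # batch=4, channels=8, length=32
y = m(x)                   # first call infers in_channels=8 and creates the weight

print(m.weight.shape)      # torch.Size([8, 16, 3])  (in_channels, out_channels, kernel)
print(y.shape)             # torch.Size([4, 16, 34]) since L_out = (32-1)*1 + (3-1) + 1
```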
pytorch torch.std torch.std ========= `torch.std(input, unbiased=True) → Tensor` Returns the standard-deviation of all elements in the `input` tensor. If `unbiased` is `False`, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **unbiased** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether to use the unbiased estimation or not Example: 

```
>>> a = torch.randn(1, 3)
>>> a
tensor([[-0.8166, -1.3802, -0.3560]])
>>> torch.std(a)
tensor(0.5130)
```

`torch.std(input, dim, unbiased=True, keepdim=False, *, out=None) → Tensor` Returns the standard-deviation of each row of the `input` tensor in the dimension `dim`. If `dim` is a list of dimensions, reduce over all of them. If `keepdim` is `True`, the output tensor is of the same size as `input` except in the dimension(s) `dim` where it is of size 1. Otherwise, `dim` is squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in the output tensor having 1 (or `len(dim)`) fewer dimension(s). If `unbiased` is `False`, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – the dimension or dimensions to reduce. * **unbiased** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether to use the unbiased estimation or not * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: 

```
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.2035,  1.2959,  1.8101, -0.4644],
        [ 1.5027, -0.3270,  0.5905,  0.6538],
        [-1.5745,  1.3330, -0.5596, -0.6548],
        [ 0.1264, -0.5080,  1.6420,  0.1992]])
>>> torch.std(a, dim=1)
tensor([ 1.0311,  0.7477,  1.2204,  0.9087])
```

pytorch torch.floor_divide torch.floor\_divide =================== `torch.floor_divide(input, other, *, out=None) → Tensor` Warning This function’s name is a misnomer. It actually rounds the quotient towards zero instead of taking its floor. This behavior will be deprecated in a future PyTorch release. Computes `input` divided by `other`, elementwise, and rounds each quotient towards zero. Equivalently, it truncates the quotient(s): out_i = \text{trunc}\left(\frac{\text{input}_i}{\text{other}_i}\right) Supports broadcasting to a common shape, type promotion, and integer and float inputs. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* *Number*) – the dividend * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* *Number*) – the divisor Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
Example:

```
>>> a = torch.tensor([4.0, 3.0])
>>> b = torch.tensor([2.0, 2.0])
>>> torch.floor_divide(a, b)
tensor([2.0, 1.0])
>>> torch.floor_divide(a, 1.4)
tensor([2.0, 2.0])
```

pytorch torch.acosh

torch.acosh
===========

`torch.acosh(input, *, out=None) → Tensor`

Returns a new tensor with the inverse hyperbolic cosine of the elements of `input`.

Note

The domain of the inverse hyperbolic cosine is `[1, inf)` and values outside this range will be mapped to `NaN`, except for `+INF` for which the output is mapped to `+INF`.

\text{out}_i = \cosh^{-1}(\text{input}_i)

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(4).uniform_(1, 2)
>>> a
tensor([ 1.3192, 1.9915, 1.9674, 1.7151 ])
>>> torch.acosh(a)
tensor([ 0.7791, 1.3120, 1.2979, 1.1341 ])
```

pytorch torch.clone

torch.clone
===========

`torch.clone(input, *, memory_format=torch.preserve_format) → Tensor`

Returns a copy of `input`.

Note

This function is differentiable, so gradients will flow back from the result of this operation to `input`. To create a tensor without an autograd relationship to `input` see [`detach()`](../autograd#torch.Tensor.detach "torch.Tensor.detach").

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments

**memory\_format** ([`torch.memory_format`](../tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned tensor. Default: `torch.preserve_format`.

pytorch InstanceNorm1d

InstanceNorm1d
==============

`class torch.nn.InstanceNorm1d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/instancenorm.html#InstanceNorm1d)

Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022).

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta

The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \gamma and \beta are learnable parameter vectors of size `C` (where `C` is the input size) if `affine` is `True`. The standard-deviation is calculated via the biased estimator, equivalent to `torch.var(input, unbiased=False)`.

By default, this layer uses instance statistics computed from input data in both training and evaluation modes.

If `track_running_stats` is set to `True`, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default `momentum` of 0.1.

Note

This `momentum` argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.
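As a minimal sketch of this update rule (plain Python with illustrative values, not library code):

```
>>> momentum, running_mean, batch_mean = 0.1, 0.0, 1.0
>>> (1 - momentum) * running_mean + momentum * batch_mean  # the new running estimate
0.1
```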
Note

[`InstanceNorm1d`](#torch.nn.InstanceNorm1d "torch.nn.InstanceNorm1d") and [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") are very similar, but have some subtle differences. [`InstanceNorm1d`](#torch.nn.InstanceNorm1d "torch.nn.InstanceNorm1d") is applied on each channel of channeled data like multidimensional time series, but [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") is usually applied on an entire sample and often in NLP tasks. Additionally, [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") applies an elementwise affine transform, while [`InstanceNorm1d`](#torch.nn.InstanceNorm1d "torch.nn.InstanceNorm1d") usually does not apply an affine transform.

Parameters

* **num\_features** – C from an expected input of size (N, C, L) or L from input of size (N, L)
* **eps** – a value added to the denominator for numerical stability. Default: 1e-5
* **momentum** – the value used for the running\_mean and running\_var computation. Default: 0.1
* **affine** – a boolean value that when set to `True`, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: `False`.
* **track\_running\_stats** – a boolean value that when set to `True`, this module tracks the running mean and variance, and when set to `False`, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: `False`

Shape:

* Input: (N, C, L)
* Output: (N, C, L) (same shape as input)

Examples:

```
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm1d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm1d(100, affine=True)
>>> input = torch.randn(20, 100, 40)
>>> output = m(input)
```

pytorch torch.unbind

torch.unbind
============

`torch.unbind(input, dim=0) → seq`

Removes a tensor dimension.

Returns a tuple of all slices along a given dimension, already without it.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to unbind
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension to remove

Example:

```
>>> torch.unbind(torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
(tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9]))
```

pytorch torch.dequantize

torch.dequantize
================

`torch.dequantize(tensor) → Tensor`

Returns an fp32 Tensor by dequantizing a quantized Tensor

Parameters

**tensor** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – A quantized Tensor

`torch.dequantize(tensors) → sequence of Tensors`

Given a list of quantized Tensors, dequantize them and return a list of fp32 Tensors

Parameters

**tensors** (*sequence of Tensors*) – A list of quantized Tensors

pytorch torch.sub

torch.sub
=========

`torch.sub(input, other, *, alpha=1, out=None) → Tensor`

Subtracts `other`, scaled by `alpha`, from `input`.

\text{out}_i = \text{input}_i - \text{alpha} \times \text{other}_i

Supports [broadcasting to a common shape](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics), [type promotion](../tensor_attributes#type-promotion-doc), and integer, float, and complex inputs.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* *Scalar*) – the tensor or scalar to subtract from `input`

Keyword Arguments

* **alpha** (*Scalar*) – the scalar multiplier for `other`
* **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.tensor((1, 2))
>>> b = torch.tensor((0, 1))
>>> torch.sub(a, b, alpha=2)
tensor([1, 0])
```

pytorch torch.amax

torch.amax
==========

`torch.amax(input, dim, keepdim=False, *, out=None) → Tensor`

Returns the maximum value of each slice of the `input` tensor in the given dimension(s) `dim`.

Note

The difference between `max`/`min` and `amax`/`amin` is:

* `amax`/`amin` supports reducing on multiple dimensions,
* `amax`/`amin` does not return indices,
* `amax`/`amin` evenly distributes gradient between equal values, while `max(dim)`/`min(dim)` propagates gradient only to a single index in the source tensor.

If `keepdim` is `True`, the output tensors are of the same size as `input` except in the dimension(s) `dim` where they are of size 1. Otherwise, `dim`s are squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in the output tensors having fewer dimensions than `input`.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – the dimension or dimensions to reduce.
* **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not.

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.8177,  1.4878, -0.2491,  0.9130],
        [-0.7158,  1.1775,  2.0992,  0.4817],
        [-0.0053,  0.0164, -1.3738, -0.0507],
        [ 1.9700,  1.1106, -1.0318, -1.0816]])
>>> torch.amax(a, 1)
tensor([1.4878, 2.0992, 0.0164, 1.9700])
```

pytorch set_grad_enabled

set\_grad\_enabled
==================

`class torch.set_grad_enabled(mode)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/grad_mode.html#set_grad_enabled)

Context-manager that sets gradient calculation to on or off.

`set_grad_enabled` will enable or disable grads based on its argument `mode`. It can be used as a context-manager or as a function.

This context manager is thread local; it will not affect computation in other threads.

Parameters

**mode** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Flag whether to enable grad (`True`), or disable (`False`). This can be used to conditionally enable gradients.

Example:

```
>>> x = torch.tensor([1], requires_grad=True)
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> torch.set_grad_enabled(True)
>>> y = x * 2
>>> y.requires_grad
True
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
```

pytorch torch.hstack

torch.hstack
============

`torch.hstack(tensors, *, out=None) → Tensor`

Stack tensors in sequence horizontally (column wise).

This is equivalent to concatenation along the first axis for 1-D tensors, and along the second axis for all other tensors.

Parameters

**tensors** (*sequence of Tensors*) – sequence of tensors to concatenate

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
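A hedged sketch of the equivalence stated above, checking `hstack` against `torch.cat` along dim 1 for 2-D inputs:

```
>>> a, b = torch.ones(2, 1), torch.zeros(2, 1)
>>> torch.equal(torch.hstack((a, b)), torch.cat((a, b), dim=1))
True
```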
Example:

```
>>> a = torch.tensor([1, 2, 3])
>>> b = torch.tensor([4, 5, 6])
>>> torch.hstack((a,b))
tensor([1, 2, 3, 4, 5, 6])
>>> a = torch.tensor([[1],[2],[3]])
>>> b = torch.tensor([[4],[5],[6]])
>>> torch.hstack((a,b))
tensor([[1, 4],
        [2, 5],
        [3, 6]])
```

pytorch torch.nn.utils.parameters_to_vector

torch.nn.utils.parameters\_to\_vector
=====================================

`torch.nn.utils.parameters_to_vector(parameters)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/convert_parameters.html#parameters_to_vector)

Convert parameters to one vector

Parameters

**parameters** (*Iterable**[*[Tensor](../tensors#torch.Tensor "torch.Tensor")*]*) – an iterator of Tensors that are the parameters of a model.

Returns

The parameters represented by a single vector

pytorch torch.isfinite

torch.isfinite
==============

`torch.isfinite(input) → Tensor`

Returns a new tensor with boolean elements representing if each element is `finite` or not.

Real values are finite when they are not NaN, negative infinity, or infinity. Complex values are finite when both their real and imaginary parts are finite.

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Returns

A boolean tensor that is `True` where `input` is finite and `False` elsewhere

Example:

```
>>> torch.isfinite(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([True,  False,  True,  False,  False])
```

pytorch torch.rand_like

torch.rand\_like
================

`torch.rand_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor`

Returns a tensor with the same size as `input` that is filled with random numbers from a uniform distribution on the interval [0, 1). `torch.rand_like(input)` is equivalent to `torch.rand(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)`.

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the size of `input` will determine size of the output tensor.

Keyword Arguments

* **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned Tensor. Default: if `None`, defaults to the dtype of `input`.
* **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned tensor. Default: if `None`, defaults to the layout of `input`.
* **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, defaults to the device of `input`.
* **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`.
* **memory\_format** ([`torch.memory_format`](../tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`.

pytorch torch.randint_like

torch.randint\_like
===================

`torch.randint_like(input, low=0, high, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor`

Returns a tensor with the same shape as Tensor `input` filled with random integers generated uniformly between `low` (inclusive) and `high` (exclusive).

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the size of `input` will determine size of the output tensor.
* **low** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Lowest integer to be drawn from the distribution. Default: 0.
* **high** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – One above the highest integer to be drawn from the distribution.

Keyword Arguments

* **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned Tensor. Default: if `None`, defaults to the dtype of `input`.
* **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned tensor. Default: if `None`, defaults to the layout of `input`.
* **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, defaults to the device of `input`.
* **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`.
* **memory\_format** ([`torch.memory_format`](../tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.preserve_format`.

pytorch CosineSimilarity

CosineSimilarity
================

`class torch.nn.CosineSimilarity(dim=1, eps=1e-08)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/distance.html#CosineSimilarity)

Returns cosine similarity between x_1 and x_2, computed along dim.

\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert_2 \cdot \Vert x_2 \Vert_2, \epsilon)}

Parameters

* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Dimension where cosine similarity is computed. Default: 1
* **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Small value to avoid division by zero. Default: 1e-8

Shape:

* Input1: (\ast_1, D, \ast_2) where D is at position `dim`
* Input2: (\ast_1, D, \ast_2), same shape as Input1
* Output: (\ast_1, \ast_2)

Examples:

```
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)
>>> output = cos(input1, input2)
```
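A hedged verification sketch of the formula above, computed manually for a single pair of 1-D inputs:

```
>>> x1, x2 = torch.randn(128), torch.randn(128)
>>> manual = torch.dot(x1, x2) / torch.clamp(x1.norm() * x2.norm(), min=1e-8)
>>> torch.allclose(nn.CosineSimilarity(dim=0, eps=1e-8)(x1, x2), manual)
True
```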
pytorch torch.nn.utils.rnn.pad_packed_sequence

torch.nn.utils.rnn.pad\_packed\_sequence
========================================

`torch.nn.utils.rnn.pad_packed_sequence(sequence, batch_first=False, padding_value=0.0, total_length=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/rnn.html#pad_packed_sequence)

Pads a packed batch of variable length sequences.

It is an inverse operation to [`pack_padded_sequence()`](torch.nn.utils.rnn.pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence "torch.nn.utils.rnn.pack_padded_sequence").

The returned Tensor's data will be of size `T x B x *`, where `T` is the length of the longest sequence and `B` is the batch size. If `batch_first` is True, the data will be transposed into `B x T x *` format.

#### Example

```
>>> from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
>>> seq = torch.tensor([[1,2,0], [3,0,0], [4,5,6]])
>>> lens = [2, 1, 3]
>>> packed = pack_padded_sequence(seq, lens, batch_first=True, enforce_sorted=False)
>>> packed
PackedSequence(data=tensor([4, 1, 3, 5, 2, 6]), batch_sizes=tensor([3, 2, 1]),
               sorted_indices=tensor([2, 0, 1]), unsorted_indices=tensor([1, 2, 0]))
>>> seq_unpacked, lens_unpacked = pad_packed_sequence(packed, batch_first=True)
>>> seq_unpacked
tensor([[1, 2, 0],
        [3, 0, 0],
        [4, 5, 6]])
>>> lens_unpacked
tensor([2, 1, 3])
```

Note

`total_length` is useful to implement the `pack sequence -> recurrent network -> unpack sequence` pattern in a [`Module`](torch.nn.module#torch.nn.Module "torch.nn.Module") wrapped in [`DataParallel`](torch.nn.dataparallel#torch.nn.DataParallel "torch.nn.DataParallel"). See [this FAQ section](https://pytorch.org/docs/1.8.0/notes/faq.html#pack-rnn-unpack-with-data-parallelism) for details.

Parameters

* **sequence** ([PackedSequence](torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence "torch.nn.utils.rnn.PackedSequence")) – batch to pad
* **batch\_first** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, the output will be in `B x T x *` format.
* **padding\_value** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – values for padded elements.
* **total\_length** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – if not `None`, the output will be padded to have length `total_length`. This method will throw [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.9)") if `total_length` is less than the max sequence length in `sequence`.

Returns

Tuple of Tensor containing the padded sequence, and a Tensor containing the list of lengths of each sequence in the batch. Batch elements will be re-ordered as they were ordered originally when the batch was passed to `pack_padded_sequence` or `pack_sequence`.

pytorch torch.pca_lowrank

torch.pca\_lowrank
==================

`torch.pca_lowrank(A, q=None, center=True, niter=2)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/_lowrank.html#pca_lowrank)

Performs linear Principal Component Analysis (PCA) on a low-rank matrix, batches of such matrices, or sparse matrix.

This function returns a namedtuple `(U, S, V)` which is the nearly optimal approximation of a singular value decomposition of a centered matrix A such that A = U diag(S) V^T.
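A minimal usage sketch (the original entry carries no example; sizes here are illustrative), projecting the data onto the first two principal directions as described in the notes below:

```
>>> A = torch.randn(100, 8)              # 100 samples, 8 features
>>> U, S, V = torch.pca_lowrank(A, q=2)
>>> (A @ V[:, :2]).shape                 # project onto 2 principal components
torch.Size([100, 2])
```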
Note

The relation of `(U, S, V)` to PCA is as follows:

* A is a data matrix with `m` samples and `n` features
* the columns of V represent the principal directions
* S ** 2 / (m - 1) contains the eigenvalues of A^T A / (m - 1), which is the covariance of `A` when `center=True` is provided.
* `matmul(A, V[:, :k])` projects data to the first k principal components

Note

Different from the standard SVD, the size of returned matrices depend on the specified rank and q values as follows:

* U is an m x q matrix
* S is a q-vector
* V is an n x q matrix

Note

To obtain repeatable results, reset the seed for the pseudorandom number generator

Parameters

* **A** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor of size (*, m, n)
* **q** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – a slightly overestimated rank of A. By default, `q = min(6, m, n)`.
* **center** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if True, center the input tensor, otherwise, assume that the input is centered.
* **niter** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the number of subspace iterations to conduct; niter must be a nonnegative integer, and defaults to 2.

References:

```
- Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding structure with randomness:
  probabilistic algorithms for constructing approximate matrix decompositions,
  arXiv:0909.4061 [math.NA; math.PR], 2009 (available at
  `arXiv <http://arxiv.org/abs/0909.4061>`_).
```

pytorch torch.count_nonzero

torch.count\_nonzero
====================

`torch.count_nonzero(input, dim=None) → Tensor`

Counts the number of non-zero values in the tensor `input` along the given `dim`. If no dim is specified then all non-zeros in the tensor are counted.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints**,* *optional*) – Dim or tuple of dims along which to count non-zeros.

Example:

```
>>> x = torch.zeros(3,3)
>>> x[torch.randn(3,3) > 0.5] = 1
>>> x
tensor([[0., 1., 1.],
        [0., 0., 0.],
        [0., 0., 1.]])
>>> torch.count_nonzero(x)
tensor(3)
>>> torch.count_nonzero(x, dim=0)
tensor([0, 1, 2])
```

pytorch torch.vander

torch.vander
============

`torch.vander(x, N=None, increasing=False) → Tensor`

Generates a Vandermonde matrix.

The columns of the output matrix are elementwise powers of the input vector x^{(N-1)}, x^{(N-2)}, ..., x^0. If increasing is True, the order of the columns is reversed x^0, x^1, ..., x^{(N-1)}. Such a matrix with a geometric progression in each row is named for Alexandre-Theophile Vandermonde.

Parameters

* **x** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – 1-D input tensor.
* **N** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of columns in the output. If N is not specified, a square array is returned (N = len(x)).
* **increasing** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Order of the powers of the columns. If True, the powers increase from left to right, if False (the default) they are reversed.

Returns

Vandermonde matrix. If increasing is False, the first column is x^{(N-1)}, the second x^{(N-2)}, and so forth.
If increasing is True, the columns are x^0, x^1, ..., x^{(N-1)}.

Return type

[Tensor](../tensors#torch.Tensor "torch.Tensor")

Example:

```
>>> x = torch.tensor([1, 2, 3, 5])
>>> torch.vander(x)
tensor([[  1,   1,   1,   1],
        [  8,   4,   2,   1],
        [ 27,   9,   3,   1],
        [125,  25,   5,   1]])
>>> torch.vander(x, N=3)
tensor([[ 1,  1,  1],
        [ 4,  2,  1],
        [ 9,  3,  1],
        [25,  5,  1]])
>>> torch.vander(x, N=3, increasing=True)
tensor([[ 1,  1,  1],
        [ 1,  2,  4],
        [ 1,  3,  9],
        [ 1,  5, 25]])
```

pytorch torch.fix

torch.fix
=========

`torch.fix(input, *, out=None) → Tensor`

Alias for [`torch.trunc()`](torch.trunc#torch.trunc "torch.trunc")

pytorch torch.nn.modules.module.register_module_forward_hook

torch.nn.modules.module.register\_module\_forward\_hook
=======================================================

`torch.nn.modules.module.register_module_forward_hook(hook)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/module.html#register_module_forward_hook)

Registers a global forward hook for all the modules

Warning

This adds global state to the `nn.module` module and it is only intended for debugging/profiling purposes.

The hook will be called every time after `forward()` has computed an output. It should have the following signature:

```
hook(module, input, output) -> None or modified output
```

The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the `forward`. The hook can modify the output. It can modify the input inplace but it will not have effect on forward since this is called after `forward()` is called.

Returns

a handle that can be used to remove the added hook by calling `handle.remove()`

Return type

`torch.utils.hooks.RemovableHandle`

This hook will be executed before specific module hooks registered with `register_forward_hook`.

pytorch torch.jit.trace_module

torch.jit.trace\_module
=======================

`torch.jit.trace_module(mod, inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=<torch.jit.CompilationUnit object>)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/jit/_trace.html#trace_module)

Trace a module and return an executable [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") that will be optimized using just-in-time compilation. When a module is passed to [`torch.jit.trace`](torch.jit.trace#torch.jit.trace "torch.jit.trace"), only the `forward` method is run and traced. With `trace_module`, you can specify a dictionary of method names to example inputs to trace (see the `inputs` argument below).

See [`torch.jit.trace`](torch.jit.trace#torch.jit.trace "torch.jit.trace") for more information on tracing.

Parameters

* **mod** ([torch.nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – A `torch.nn.Module` containing methods whose names are specified in `inputs`. The given methods will be compiled as a part of a single `ScriptModule`.
* **inputs** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.9)")) – A dict containing sample inputs indexed by method names in `mod`. The inputs will be passed to methods whose names correspond to inputs' keys while tracing. `{ 'forward' : example_forward_input, 'method2': example_method2_input}`

Keyword Arguments

* **check\_trace** (`bool`, optional) – Check if the same inputs run through traced code produce the same outputs. Default: `True`.
You might want to disable this if, for example, your network contains non-deterministic ops or if you are sure that the network is correct despite a checker failure.

* **check\_inputs** (*list of dicts**,* *optional*) – A list of dicts of input arguments that should be used to check the trace against what is expected. Each dict is equivalent to a set of input arguments that would be specified in `inputs`. For best results, pass in a set of checking inputs representative of the space of shapes and types of inputs you expect the network to see. If not specified, the original `inputs` are used for checking.
* **check\_tolerance** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Floating-point comparison tolerance to use in the checker procedure. This can be used to relax the checker strictness in the event that results diverge numerically for a known reason, such as operator fusion.

Returns

A [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") object with a single `forward` method containing the traced code. When `mod` is a `torch.nn.Module`, the returned [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") will have the same set of sub-modules and parameters as `mod`.

Example (tracing a module with multiple methods):

```
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Conv2d(1, 1, 3)

    def forward(self, x):
        return self.conv(x)

    def weighted_kernel_sum(self, weight):
        return weight * self.conv.weight

n = Net()
example_weight = torch.rand(1, 1, 3, 3)
example_forward_input = torch.rand(1, 1, 3, 3)

# Trace a specific method and construct `ScriptModule` with
# a single `forward` method
module = torch.jit.trace(n.forward, example_forward_input)

# Trace a module (implicitly traces `forward`) and construct a
# `ScriptModule` with a single `forward` method
module = torch.jit.trace(n, example_forward_input)

# Trace specific methods on a module (specified in `inputs`), constructs
# a `ScriptModule` with `forward` and `weighted_kernel_sum` methods
inputs = {'forward' : example_forward_input, 'weighted_kernel_sum' : example_weight}
module = torch.jit.trace_module(n, inputs)
```

pytorch torch.max

torch.max
=========

`torch.max(input) → Tensor`

Returns the maximum value of all elements in the `input` tensor.

Warning

This function produces deterministic (sub)gradients unlike `max(dim=0)`

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Example:

```
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6763,  0.7445, -2.2369]])
>>> torch.max(a)
tensor(0.7445)
```

`torch.max(input, dim, keepdim=False, *, out=None) -> (Tensor, LongTensor)`

Returns a namedtuple `(values, indices)` where `values` is the maximum value of each row of the `input` tensor in the given dimension `dim`. And `indices` is the index location of each maximum value found (argmax).

If `keepdim` is `True`, the output tensors are of the same size as `input` except in the dimension `dim` where they are of size 1. Otherwise, `dim` is squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in the output tensors having 1 fewer dimension than `input`.

Note

If there are multiple maximal values in a reduced row then the indices of the first maximal value are returned.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to reduce.
* **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not. Default: `False`.

Keyword Arguments

**out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the result tuple of two output tensors (max, max\_indices)

Example:

```
>>> a = torch.randn(4, 4)
>>> a
tensor([[-1.2360, -0.2942, -0.1222,  0.8475],
        [ 1.1949, -1.1127, -2.2379, -0.6702],
        [ 1.5717, -0.9207,  0.1297, -1.8768],
        [-0.6172,  1.0036, -0.6060, -0.2432]])
>>> torch.max(a, 1)
torch.return_types.max(values=tensor([0.8475, 1.1949, 1.5717, 1.0036]), indices=tensor([3, 0, 0, 1]))
```

`torch.max(input, other, *, out=None) → Tensor`

See [`torch.maximum()`](torch.maximum#torch.maximum "torch.maximum").

pytorch torch.mm

torch.mm
========

`torch.mm(input, mat2, *, out=None) → Tensor`

Performs a matrix multiplication of the matrices `input` and `mat2`.

If `input` is a (n × m) tensor, `mat2` is a (m × p) tensor, `out` will be a (n × p) tensor.

Note

This function does not [broadcast](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics). For broadcasting matrix products, see [`torch.matmul()`](torch.matmul#torch.matmul "torch.matmul").

Supports strided and sparse 2-D tensors as inputs, autograd with respect to strided inputs.

This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the first matrix to be matrix multiplied
* **mat2** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the second matrix to be matrix multiplied

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> mat1 = torch.randn(2, 3)
>>> mat2 = torch.randn(3, 3)
>>> torch.mm(mat1, mat2)
tensor([[ 0.4851,  0.5037, -0.3633],
        [-0.0760, -3.6705,  2.4784]])
```

pytorch torch.dot

torch.dot
=========

`torch.dot(input, other, *, out=None) → Tensor`

Computes the dot product of two 1D tensors.

Note

Unlike NumPy's dot, torch.dot intentionally only supports computing the dot product of two 1D tensors with the same number of elements.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – first tensor in the dot product, must be 1D.
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – second tensor in the dot product, must be 1D.

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> torch.dot(torch.tensor([2, 3]), torch.tensor([2, 1]))
tensor(7)
```

pytorch torch.full

torch.full
==========

`torch.full(size, fill_value, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor`

Creates a tensor of size `size` filled with `fill_value`. The tensor's dtype is inferred from `fill_value`.

Parameters

* **size** (*int...*) – a list, tuple, or `torch.Size` of integers defining the shape of the output tensor.
* **fill\_value** (*Scalar*) – the value to fill the output tensor with.

Keyword Arguments

* **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
* **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor.
Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")).

* **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned Tensor. Default: `torch.strided`.
* **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
* **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`.

Example:

```
>>> torch.full((2, 3), 3.141592)
tensor([[ 3.1416,  3.1416,  3.1416],
        [ 3.1416,  3.1416,  3.1416]])
```

pytorch Hardsigmoid

Hardsigmoid
===========

`class torch.nn.Hardsigmoid(inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#Hardsigmoid)

Applies the element-wise function:

\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if } x \le -3, \\ 1 & \text{if } x \ge +3, \\ x/6 + 1/2 & \text{otherwise} \end{cases}

Parameters

**inplace** – can optionally do the operation in-place. Default: `False`

Shape:

* Input: (N, *) where `*` means any number of additional dimensions
* Output: (N, *), same shape as the input

Examples:

```
>>> m = nn.Hardsigmoid()
>>> input = torch.randn(2)
>>> output = m(input)
```

pytorch LSTM

LSTM
====

`class torch.nn.LSTM(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/rnn.html#LSTM)

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.

For each element in the input sequence, each layer computes the following function:

\begin{array}{ll} i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}) \\ f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\ o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\ c_t = f_t \odot c_{t-1} + i_t \odot g_t \\ h_t = o_t \odot \tanh(c_t) \end{array}

where h_t is the hidden state at time `t`, c_t is the cell state at time `t`, x_t is the input at time `t`, h_{t-1} is the hidden state of the layer at time `t-1` or the initial hidden state at time `0`, and i_t, f_t, g_t, o_t are the input, forget, cell, and output gates, respectively. \sigma is the sigmoid function, and \odot is the Hadamard product.

In a multilayer LSTM, the input x^{(l)}_t of the l-th layer (l >= 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied by dropout \delta^{(l-1)}_t where each \delta^{(l-1)}_t is a Bernoulli random variable which is 0 with probability `dropout`.

If `proj_size > 0` is specified, LSTM with projections will be used. This changes the LSTM cell in the following way.
First, the dimension of h_t will be changed from `hidden_size` to `proj_size` (dimensions of W_{hi} will be changed accordingly). Second, the output hidden state of each layer will be multiplied by a learnable projection matrix: h_t = W_{hr} h_t. Note that as a consequence of this, the output of the LSTM network will be of a different shape as well. See the Inputs/Outputs sections below for exact dimensions of all variables. You can find more details in <https://arxiv.org/abs/1402.1128>.

Parameters

* **input\_size** – The number of expected features in the input `x`
* **hidden\_size** – The number of features in the hidden state `h`
* **num\_layers** – Number of recurrent layers. E.g., setting `num_layers=2` would mean stacking two LSTMs together to form a `stacked LSTM`, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1
* **bias** – If `False`, then the layer does not use bias weights `b_ih` and `b_hh`. Default: `True`
* **batch\_first** – If `True`, then the input and output tensors are provided as (batch, seq, feature). Default: `False`
* **dropout** – If non-zero, introduces a `Dropout` layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to `dropout`. Default: 0
* **bidirectional** – If `True`, becomes a bidirectional LSTM. Default: `False`
* **proj\_size** – If `> 0`, will use LSTM with projections of corresponding size. Default: 0

Inputs: input, (h\_0, c\_0)

* **input** of shape `(seq_len, batch, input_size)`: tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See [`torch.nn.utils.rnn.pack_padded_sequence()`](torch.nn.utils.rnn.pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence "torch.nn.utils.rnn.pack_padded_sequence") or [`torch.nn.utils.rnn.pack_sequence()`](torch.nn.utils.rnn.pack_sequence#torch.nn.utils.rnn.pack_sequence "torch.nn.utils.rnn.pack_sequence") for details.
* **h\_0** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the initial hidden state for each element in the batch. If the LSTM is bidirectional, num\_directions should be 2, else it should be 1. If `proj_size > 0` was specified, the shape has to be `(num_layers * num_directions, batch, proj_size)`.
* **c\_0** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the initial cell state for each element in the batch. If `(h_0, c_0)` is not provided, both **h\_0** and **c\_0** default to zero.

Outputs: output, (h\_n, c\_n)

* **output** of shape `(seq_len, batch, num_directions * hidden_size)`: tensor containing the output features `(h_t)` from the last layer of the LSTM, for each `t`. If a [`torch.nn.utils.rnn.PackedSequence`](torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence "torch.nn.utils.rnn.PackedSequence") has been given as the input, the output will also be a packed sequence. If `proj_size > 0` was specified, output shape will be `(seq_len, batch, num_directions * proj_size)`. For the unpacked case, the directions can be separated using `output.view(seq_len, batch, num_directions, hidden_size)`, with forward and backward being direction `0` and `1` respectively. Similarly, the directions can be separated in the packed case.
* **h\_n** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the hidden state for `t = seq_len`. If `proj_size > 0` was specified, `h_n` shape will be `(num_layers * num_directions, batch, proj_size)`.
Like *output*, the layers can be separated using `h_n.view(num_layers, num_directions, batch, hidden_size)` and similarly for *c\_n*.

* **c\_n** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the cell state for `t = seq_len`.

Variables

* **~LSTM.weight\_ih\_l[k]** – the learnable input-hidden weights of the k-th layer `(W_ii|W_if|W_ig|W_io)`, of shape `(4*hidden_size, input_size)` for `k = 0`. Otherwise, the shape is `(4*hidden_size, num_directions * hidden_size)`
* **~LSTM.weight\_hh\_l[k]** – the learnable hidden-hidden weights of the k-th layer `(W_hi|W_hf|W_hg|W_ho)`, of shape `(4*hidden_size, hidden_size)`. If `proj_size > 0` was specified, the shape will be `(4*hidden_size, proj_size)`.
* **~LSTM.bias\_ih\_l[k]** – the learnable input-hidden bias of the k-th layer `(b_ii|b_if|b_ig|b_io)`, of shape `(4*hidden_size)`
* **~LSTM.bias\_hh\_l[k]** – the learnable hidden-hidden bias of the k-th layer `(b_hi|b_hf|b_hg|b_ho)`, of shape `(4*hidden_size)`
* **~LSTM.weight\_hr\_l[k]** – the learnable projection weights of the k-th layer of shape `(proj_size, hidden_size)`. Only present when `proj_size > 0` was specified.

Note

All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}}

Warning

There are known non-determinism issues for RNN functions on some versions of cuDNN and CUDA. You can enforce deterministic behavior by setting the following environment variables:

On CUDA 10.1, set environment variable `CUDA_LAUNCH_BLOCKING=1`. This may affect performance.

On CUDA 10.2 or later, set environment variable (note the leading colon symbol) `CUBLAS_WORKSPACE_CONFIG=:16:8` or `CUBLAS_WORKSPACE_CONFIG=:4096:2`.

See the [cuDNN 8 Release Notes](https://docs.nvidia.com/deeplearning/sdk/cudnn-release-notes/rel_8.html) for more information.

Note

If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype `torch.float16`, 4) a V100 GPU is used, 5) input data is not in `PackedSequence` format, then the persistent algorithm can be selected to improve performance.

Examples:

```
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
```
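A hedged shape-check sketch for the `proj_size` behavior described above; `h_0`/`h_n` carry `proj_size` while `c_0`/`c_n` keep `hidden_size`:

```
>>> rnn = nn.LSTM(10, 20, 2, proj_size=15)
>>> input = torch.randn(5, 3, 10)
>>> h0, c0 = torch.randn(2, 3, 15), torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
>>> output.shape, hn.shape, cn.shape
(torch.Size([5, 3, 15]), torch.Size([2, 3, 15]), torch.Size([2, 3, 20]))
```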
pytorch SyncBatchNorm

SyncBatchNorm
=============

`class torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/batchnorm.html#SyncBatchNorm)

Applies Batch Normalization over a N-Dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167).

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta

The mean and standard-deviation are calculated per-dimension over all mini-batches of the same process groups. \gamma and \beta are learnable parameter vectors of size `C` (where `C` is the input size). By default, the elements of \gamma are sampled from \mathcal{U}(0, 1) and the elements of \beta are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to `torch.var(input, unbiased=False)`.

Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default `momentum` of 0.1.

If `track_running_stats` is set to `False`, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.

Note

This `momentum` argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.

Because the Batch Normalization is done for each channel in the `C` dimension, computing statistics on `(N, +)` slices, it's common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.

Currently [`SyncBatchNorm`](#torch.nn.SyncBatchNorm "torch.nn.SyncBatchNorm") only supports `DistributedDataParallel` (DDP) with single GPU per process. Use [`torch.nn.SyncBatchNorm.convert_sync_batchnorm()`](#torch.nn.SyncBatchNorm.convert_sync_batchnorm "torch.nn.SyncBatchNorm.convert_sync_batchnorm") to convert `BatchNorm*D` layers to [`SyncBatchNorm`](#torch.nn.SyncBatchNorm "torch.nn.SyncBatchNorm") before wrapping the network with DDP.

Parameters

* **num\_features** – C from an expected input of size (N, C, +)
* **eps** – a value added to the denominator for numerical stability. Default: `1e-5`
* **momentum** – the value used for the running\_mean and running\_var computation. Can be set to `None` for cumulative moving average (i.e. simple average). Default: 0.1
* **affine** – a boolean value that when set to `True`, this module has learnable affine parameters. Default: `True`
* **track\_running\_stats** – a boolean value that when set to `True`, this module tracks the running mean and variance, and when set to `False`, this module does not track such statistics, and initializes statistics buffers `running_mean` and `running_var` as `None`. When these buffers are `None`, this module always uses batch statistics in both training and eval modes. Default: `True`
* **process\_group** – synchronization of stats happens within each process group individually.
Default behavior is synchronization across the whole world.

Shape:

* Input: (N, C, +)
* Output: (N, C, +) (same shape as input)

Examples:

```
>>> # With Learnable Parameters
>>> m = nn.SyncBatchNorm(100)
>>> # creating process group (optional)
>>> # ranks is a list of int identifying rank ids.
>>> ranks = list(range(8))
>>> r1, r2 = ranks[:4], ranks[4:]
>>> # Note: every rank calls into new_group for every
>>> # process group created, even if that rank is not
>>> # part of the group.
>>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]
>>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]
>>> # Without Learnable Parameters
>>> m = nn.SyncBatchNorm(100, affine=False, process_group=process_group)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)
>>> # network is nn.BatchNorm layer
>>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group)
>>> # only single gpu per process is currently supported
>>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel(
>>>                         sync_bn_network,
>>>                         device_ids=[args.local_rank],
>>>                         output_device=args.local_rank)
```

`classmethod convert_sync_batchnorm(module, process_group=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/batchnorm.html#SyncBatchNorm.convert_sync_batchnorm)

Helper function to convert all `BatchNorm*D` layers in the model to [`torch.nn.SyncBatchNorm`](#torch.nn.SyncBatchNorm "torch.nn.SyncBatchNorm") layers.

Parameters

* **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing one or more `BatchNorm*D` layers
* **process\_group** (*optional*) – process group to scope synchronization, default is the whole world

Returns

The original `module` with the converted [`torch.nn.SyncBatchNorm`](#torch.nn.SyncBatchNorm "torch.nn.SyncBatchNorm") layers. If the original `module` is a `BatchNorm*D` layer, a new [`torch.nn.SyncBatchNorm`](#torch.nn.SyncBatchNorm "torch.nn.SyncBatchNorm") layer object will be returned instead.

Example:

```
>>> # Network with nn.BatchNorm layer
>>> module = torch.nn.Sequential(
>>>            torch.nn.Linear(20, 100),
>>>            torch.nn.BatchNorm1d(100),
>>>          ).cuda()
>>> # creating process group (optional)
>>> # ranks is a list of int identifying rank ids.
>>> ranks = list(range(8))
>>> r1, r2 = ranks[:4], ranks[4:]
>>> # Note: every rank calls into new_group for every
>>> # process group created, even if that rank is not
>>> # part of the group.
>>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]
>>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]
>>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)
```

pytorch torch.digamma

torch.digamma
=============

`torch.digamma(input, *, out=None) → Tensor`

Computes the logarithmic derivative of the gamma function on `input`.

\psi(x) = \frac{d}{dx} \ln\left(\Gamma\left(x\right)\right) = \frac{\Gamma'(x)}{\Gamma(x)}

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to compute the digamma function on

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Note

This function is similar to SciPy's `scipy.special.digamma`.

Note

From PyTorch 1.8 onwards, the digamma function returns `-Inf` for `0`. Previously it returned `NaN` for `0`.
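A hedged illustration of the note above:

```
>>> # PyTorch >= 1.8 maps 0 to -inf rather than NaN
>>> torch.digamma(torch.tensor([0.]))
tensor([-inf])
```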
Example:

```
>>> a = torch.tensor([1, 0.5])
>>> torch.digamma(a)
tensor([-0.5772, -1.9635])
```

pytorch torch.jit.freeze

torch.jit.freeze
================

`torch.jit.freeze(mod, preserved_attrs=None, optimize_numerics=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/jit/_freeze.html#freeze)

Freezing a [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") will clone it and attempt to inline the cloned module's submodules, parameters, and attributes as constants in the TorchScript IR Graph. By default, `forward` will be preserved, as well as attributes & methods specified in `preserved_attrs`. Additionally, any attribute that is modified within a preserved method will be preserved.

Freezing currently only accepts ScriptModules that are in eval mode.

Parameters

* **mod** ([`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule")) – a module to be frozen
* **preserved\_attrs** (*Optional**[**List**[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*]**]*) – a list of attributes to preserve in addition to the forward method. Attributes modified in preserved methods will also be preserved.
* **optimize\_numerics** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – If `True`, a set of optimization passes will be run that does not strictly preserve numerics. Full details of the optimization can be found at `torch.jit.optimize_frozen_module`.

Returns

Frozen [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule").

Example (Freezing a simple module with a Parameter):

```
def forward(self, input):
    output = self.weight.mm(input)
    output = self.linear(output)
    return output

scripted_module = torch.jit.script(MyModule(2, 3).eval())
frozen_module = torch.jit.freeze(scripted_module)
# parameters have been removed and inlined into the Graph as constants
assert len(list(frozen_module.named_parameters())) == 0
# See the compiled graph as Python code
print(frozen_module.code)
```

Example (Freezing a module with preserved attributes)

```
def forward(self, input):
    self.modified_tensor += 1
    return input + self.modified_tensor

scripted_module = torch.jit.script(MyModule2().eval())
frozen_module = torch.jit.freeze(scripted_module, preserved_attrs=["version"])
# we've manually preserved `version`, so it still exists on the frozen module and can be modified
assert frozen_module.version == 1
frozen_module.version = 2
# `modified_tensor` is detected as being mutated in the forward, so freezing preserves
# it to retain model semantics
assert frozen_module(torch.tensor(1)) == torch.tensor(12)
# now that we've run it once, the next result will be incremented by one
assert frozen_module(torch.tensor(1)) == torch.tensor(13)
```

Note

If you're not sure why an attribute is not being inlined as a constant, you can run `dump_alias_db` on frozen\_module.forward.graph to see if freezing has detected the attribute is being modified.

pytorch torch.rsqrt

torch.rsqrt
===========

`torch.rsqrt(input, *, out=None) → Tensor`

Returns a new tensor with the reciprocal of the square-root of each of the elements of `input`.

\text{out}_i = \frac{1}{\sqrt{\text{input}_i}}

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
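A hedged equivalence sketch: `rsqrt` matches the reciprocal of `torch.sqrt`:

```
>>> x = torch.tensor([4.0, 16.0])
>>> torch.allclose(torch.rsqrt(x), 1 / torch.sqrt(x))
True
```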
Example:

```
>>> a = torch.randn(4)
>>> a
tensor([-0.0370,  0.2970,  1.5420, -0.9105])
>>> torch.rsqrt(a)
tensor([    nan,  1.8351,  0.8053,     nan])
```

pytorch AdaptiveLogSoftmaxWithLoss

AdaptiveLogSoftmaxWithLoss
==========================

`class torch.nn.AdaptiveLogSoftmaxWithLoss(in_features, n_classes, cutoffs, div_value=4.0, head_bias=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/adaptive.html#AdaptiveLogSoftmaxWithLoss)

Efficient softmax approximation as described in [Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou](https://arxiv.org/abs/1609.04309).

Adaptive softmax is an approximate strategy for training models with large output spaces. It is most effective when the label distribution is highly imbalanced, for example in natural language modelling, where the word frequency distribution approximately follows [Zipf's law](https://en.wikipedia.org/wiki/Zipf%27s_law).

Adaptive softmax partitions the labels into several clusters, according to their frequency. These clusters may contain different numbers of targets each. Additionally, clusters containing less frequent labels assign lower dimensional embeddings to those labels, which speeds up the computation. For each minibatch, only the clusters for which at least one target is present are evaluated.

The idea is that the clusters which are accessed frequently (like the first one, containing the most frequent labels), should also be cheap to compute – that is, contain a small number of assigned labels. We highly recommend taking a look at the original paper for more details.

* `cutoffs` should be an ordered Sequence of integers sorted in increasing order. It controls the number of clusters and the partitioning of targets into clusters. For example, setting `cutoffs = [10, 100, 1000]` means that the first `10` targets will be assigned to the 'head' of the adaptive softmax, targets `11, 12, …, 100` will be assigned to the first cluster, and targets `101, 102, …, 1000` will be assigned to the second cluster, while targets `1001, 1002, …, n_classes - 1` will be assigned to the last, third cluster.
* `div_value` is used to compute the size of each additional cluster, which is given as \left\lfloor \frac{\text{in\_features}}{\text{div\_value}^{idx}} \right\rfloor, where idx is the cluster index (with clusters for less frequent words having larger indices, and indices starting from 1).
* `head_bias` if set to True, adds a bias term to the 'head' of the adaptive softmax. See paper for details. Set to False in the official implementation.

Warning

Labels passed as inputs to this module should be sorted according to their frequency. This means that the most frequent label should be represented by the index `0`, and the least frequent label should be represented by the index `n_classes - 1`.

Note

This module returns a `NamedTuple` with `output` and `loss` fields. See further documentation for details.

Note

To compute log-probabilities for all classes, the `log_prob` method can be used.
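A minimal usage sketch (the original entry carries no example; all sizes are illustrative), showing the `NamedTuple` return and the `log_prob` method mentioned above:

```
>>> asm = nn.AdaptiveLogSoftmaxWithLoss(64, 1000, cutoffs=[10, 100])
>>> x = torch.randn(32, 64)               # (N, in_features)
>>> target = torch.randint(1000, (32,))   # class indices in [0, n_classes)
>>> out, loss = asm(x, target)            # NamedTuple with output and loss fields
>>> asm.log_prob(x).shape                 # log-probabilities for all classes
torch.Size([32, 1000])
```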
Parameters

* **in\_features** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of features in the input tensor
* **n\_classes** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of classes in the dataset
* **cutoffs** (*Sequence*) – Cutoffs used to assign targets to their buckets
* **div\_value** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – value used as an exponent to compute sizes of the clusters. Default: 4.0
* **head\_bias** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, adds a bias term to the 'head' of the adaptive softmax. Default: `False`

Returns

* **output** is a Tensor of size `N` containing computed target log probabilities for each example
* **loss** is a Scalar representing the computed negative log likelihood loss

Return type

`NamedTuple` with `output` and `loss` fields

Shape:

* input: (N, in\_features)
* target: (N) where each value satisfies 0 <= target[i] <= n\_classes
* output1: (N)
* output2: `Scalar`

`log_prob(input)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/adaptive.html#AdaptiveLogSoftmaxWithLoss.log_prob)

Computes log probabilities for all n\_classes classes

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – a minibatch of examples

Returns

log-probabilities for each class c in the range 0 <= c <= n\_classes, where n\_classes is a parameter passed to the `AdaptiveLogSoftmaxWithLoss` constructor.

Shape:

* Input: (N, in\_features)
* Output: (N, n\_classes)

`predict(input)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/adaptive.html#AdaptiveLogSoftmaxWithLoss.predict)

This is equivalent to `self.log_prob(input).argmax(dim=1)`, but is more efficient in some cases.

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – a minibatch of examples

Returns

a class with the highest probability for each example

Return type

output ([Tensor](../tensors#torch.Tensor "torch.Tensor"))

Shape:

* Input: (N, in\_features)
* Output: (N)

pytorch torch.renorm

torch.renorm
============

`torch.renorm(input, p, dim, maxnorm, *, out=None) → Tensor`

Returns a tensor where each sub-tensor of `input` along dimension `dim` is normalized such that the `p`-norm of the sub-tensor is lower than the value `maxnorm`

Note

If the norm of a row is lower than `maxnorm`, the row is unchanged

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **p** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the power for the norm computation
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to slice over to get the sub-tensors
* **maxnorm** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the maximum norm to keep each sub-tensor under

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
Example: ``` >>> x = torch.ones(3, 3) >>> x[1].fill_(2) tensor([ 2., 2., 2.]) >>> x[2].fill_(3) tensor([ 3., 3., 3.]) >>> x tensor([[ 1., 1., 1.], [ 2., 2., 2.], [ 3., 3., 3.]]) >>> torch.renorm(x, 1, 0, 5) tensor([[ 1.0000, 1.0000, 1.0000], [ 1.6667, 1.6667, 1.6667], [ 1.6667, 1.6667, 1.6667]]) ``` pytorch torch.row_stack torch.row\_stack ================ `torch.row_stack(tensors, *, out=None) → Tensor` Alias of [`torch.vstack()`](torch.vstack#torch.vstack "torch.vstack"). pytorch torch.diagflat torch.diagflat ============== `torch.diagflat(input, offset=0) → Tensor` * If `input` is a vector (1-D tensor), then returns a 2-D square tensor with the elements of `input` as the diagonal. * If `input` is a tensor with more than one dimension, then returns a 2-D tensor with diagonal elements equal to a flattened `input`. The argument `offset` controls which diagonal to consider: * If `offset` = 0, it is the main diagonal. * If `offset` > 0, it is above the main diagonal. * If `offset` < 0, it is below the main diagonal. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **offset** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the diagonal to consider. Default: 0 (main diagonal). Examples: ``` >>> a = torch.randn(3) >>> a tensor([-0.2956, -0.9068, 0.1695]) >>> torch.diagflat(a) tensor([[-0.2956, 0.0000, 0.0000], [ 0.0000, -0.9068, 0.0000], [ 0.0000, 0.0000, 0.1695]]) >>> torch.diagflat(a, 1) tensor([[ 0.0000, -0.2956, 0.0000, 0.0000], [ 0.0000, 0.0000, -0.9068, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.1695], [ 0.0000, 0.0000, 0.0000, 0.0000]]) >>> a = torch.randn(2, 2) >>> a tensor([[ 0.2094, -0.3018], [-0.1516, 1.9342]]) >>> torch.diagflat(a) tensor([[ 0.2094, 0.0000, 0.0000, 0.0000], [ 0.0000, -0.3018, 0.0000, 0.0000], [ 0.0000, 0.0000, -0.1516, 0.0000], [ 0.0000, 0.0000, 0.0000, 1.9342]]) ``` pytorch torch.diagonal torch.diagonal ============== `torch.diagonal(input, offset=0, dim1=0, dim2=1) → Tensor` Returns a partial view of `input` with its diagonal elements with respect to `dim1` and `dim2` appended as a dimension at the end of the shape. The argument `offset` controls which diagonal to consider: * If `offset` = 0, it is the main diagonal. * If `offset` > 0, it is above the main diagonal. * If `offset` < 0, it is below the main diagonal. Applying [`torch.diag_embed()`](torch.diag_embed#torch.diag_embed "torch.diag_embed") to the output of this function with the same arguments yields a diagonal matrix with the diagonal entries of the input. However, [`torch.diag_embed()`](torch.diag_embed#torch.diag_embed "torch.diag_embed") has different default dimensions, so those need to be explicitly specified. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Must be at least 2-dimensional. * **offset** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – which diagonal to consider. Default: 0 (main diagonal). * **dim1** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – first dimension with respect to which to take diagonal. Default: 0. * **dim2** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – second dimension with respect to which to take diagonal. Default: 1. Note To take a batch diagonal, pass in dim1=-2, dim2=-1.
Examples: ``` >>> a = torch.randn(3, 3) >>> a tensor([[-1.0854, 1.1431, -0.1752], [ 0.8536, -0.0905, 0.0360], [ 0.6927, -0.3735, -0.4945]]) >>> torch.diagonal(a, 0) tensor([-1.0854, -0.0905, -0.4945]) >>> torch.diagonal(a, 1) tensor([ 1.1431, 0.0360]) >>> x = torch.randn(2, 5, 4, 2) >>> torch.diagonal(x, offset=-1, dim1=1, dim2=2) tensor([[[-1.2631, 0.3755, -1.5977, -1.8172], [-1.1065, 1.0401, -0.2235, -0.7938]], [[-1.7325, -0.3081, 0.6166, 0.2335], [ 1.0500, 0.7336, -0.3836, -1.1015]]]) ```
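As noted above, applying [`torch.diag_embed()`](torch.diag_embed#torch.diag_embed "torch.diag_embed") with the same arguments rebuilds a diagonal matrix from the extracted diagonal. A small sketch of that round trip (shapes chosen arbitrarily; `diag_embed`'s own defaults are `dim1=-2, dim2=-1`, which is why the dimensions are passed explicitly):

```
>>> a = torch.randn(3, 3)
>>> d = torch.diagonal(a, offset=0, dim1=0, dim2=1)    # main diagonal, shape (3,)
>>> m = torch.diag_embed(d, offset=0, dim1=0, dim2=1)  # diagonal matrix built from d
>>> torch.equal(torch.diagonal(m), d)                  # the diagonal survives the round trip
True
```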
pytorch CosineEmbeddingLoss CosineEmbeddingLoss =================== `class torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#CosineEmbeddingLoss) Creates a criterion that measures the loss given input tensors x_1 , x_2 and a `Tensor` label y with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning. The loss function for each sample is: \text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases} Parameters * **margin** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Should be a number from -1 to 1; 0 to 0.5 is suggested. If `margin` is missing, the default value is 0. * **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True` * **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'` pytorch Dropout Dropout ======= `class torch.nn.Dropout(p=0.5, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/dropout.html#Dropout) During training, randomly zeroes some of the elements of the input tensor with probability `p` using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call. This has proven to be an effective technique for regularization and for preventing the co-adaptation of neurons, as described in the paper [Improving neural networks by preventing co-adaptation of feature detectors](https://arxiv.org/abs/1207.0580) . Furthermore, the outputs are scaled by a factor of \frac{1}{1-p} during training. This means that during evaluation the module simply computes an identity function. Parameters * **p** – probability of an element to be zeroed. Default: 0.5 * **inplace** – If set to `True`, will do this operation in-place. Default: `False` Shape: * Input: (\*) . Input can be of any shape * Output: (\*) .
Output is of the same shape as the input. Examples: ``` >>> m = nn.Dropout(p=0.2) >>> input = torch.randn(20, 16) >>> output = m(input) ``` pytorch torch.geqrf torch.geqrf =========== `torch.geqrf(input, *, out=None) -> (Tensor, Tensor)` This is a low-level function for calling LAPACK directly. This function returns a namedtuple (a, tau) as defined in [LAPACK documentation for geqrf](https://software.intel.com/en-us/node/521004) . You'll generally want to use [`torch.qr()`](torch.qr#torch.qr "torch.qr") instead. Computes a QR decomposition of `input`, but without constructing Q and R as explicit separate matrices. Rather, this directly calls the underlying LAPACK function `?geqrf` which produces a sequence of 'elementary reflectors'. See [LAPACK documentation for geqrf](https://software.intel.com/en-us/node/521004) for further details. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input matrix Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the output tuple of (Tensor, Tensor) pytorch BatchNorm2d BatchNorm2d =========== `class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/batchnorm.html#BatchNorm2d) Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167) . y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta The mean and standard-deviation are calculated per-dimension over the mini-batches, and \gamma and \beta are learnable parameter vectors of size `C` (where `C` is the input size). By default, the elements of \gamma are set to 1 and the elements of \beta are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to `torch.var(input, unbiased=False)`. Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default `momentum` of 0.1. If `track_running_stats` is set to `False`, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. Note This `momentum` argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t , where \hat{x} is the estimated statistic and x_t is the new observed value. Because the Batch Normalization is done over the `C` dimension, computing statistics on `(N, H, W)` slices, it's common terminology to call this Spatial Batch Normalization. Parameters * **num\_features** – C from an expected input of size (N, C, H, W) * **eps** – a value added to the denominator for numerical stability. Default: 1e-5 * **momentum** – the value used for the running\_mean and running\_var computation. Can be set to `None` for cumulative moving average (i.e. simple average). Default: 0.1 * **affine** – a boolean value that when set to `True`, this module has learnable affine parameters.
Default: `True` * **track\_running\_stats** – a boolean value that when set to `True`, this module tracks the running mean and variance, and when set to `False`, this module does not track such statistics and initializes the statistics buffers `running_mean` and `running_var` as `None`. When these buffers are `None`, this module always uses batch statistics in both training and eval modes. Default: `True` Shape: * Input: (N, C, H, W) * Output: (N, C, H, W) (same shape as input) Examples: ``` >>> # With Learnable Parameters >>> m = nn.BatchNorm2d(100) >>> # Without Learnable Parameters >>> m = nn.BatchNorm2d(100, affine=False) >>> input = torch.randn(20, 100, 35, 45) >>> output = m(input) ``` pytorch torch.squeeze torch.squeeze ============= `torch.squeeze(input, dim=None, *, out=None) → Tensor` Returns a tensor with all the dimensions of `input` of size `1` removed. For example, if `input` is of shape (A \times 1 \times B \times C \times 1 \times D) , then the `out` tensor will be of shape (A \times B \times C \times D) . When `dim` is given, a squeeze operation is done only in the given dimension. If `input` is of shape (A \times 1 \times B) , `squeeze(input, 0)` leaves the tensor unchanged, but `squeeze(input, 1)` will squeeze the tensor to the shape (A \times B) . Note The returned tensor shares the storage with the input tensor, so changing the contents of one will change the contents of the other. Warning If the tensor has a batch dimension of size 1, then `squeeze(input)` will also remove the batch dimension, which can lead to unexpected errors. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – if given, the input will be squeezed only in this dimension Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> x = torch.zeros(2, 1, 2, 1, 2) >>> x.size() torch.Size([2, 1, 2, 1, 2]) >>> y = torch.squeeze(x) >>> y.size() torch.Size([2, 2, 2]) >>> y = torch.squeeze(x, 0) >>> y.size() torch.Size([2, 1, 2, 1, 2]) >>> y = torch.squeeze(x, 1) >>> y.size() torch.Size([2, 2, 1, 2]) ``` pytorch torch.promote_types torch.promote\_types ==================== `torch.promote_types(type1, type2) → dtype` Returns the [`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype") with the smallest size and scalar kind that is not smaller nor of lower kind than either `type1` or `type2`. See type promotion [documentation](../tensor_attributes#type-promotion-doc) for more information on the type promotion logic. Parameters * **type1** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype")) – * **type2** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype")) – Example: ``` >>> torch.promote_types(torch.int32, torch.float32) torch.float32 >>> torch.promote_types(torch.uint8, torch.long) torch.long ``` pytorch LPPool1d LPPool1d ======== `class torch.nn.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/pooling.html#LPPool1d) Applies a 1D power-average pooling over an input signal composed of several input planes.
On each window, the function computed is: f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} * At p = \infty , one gets Max Pooling * At p = 1, one gets Sum Pooling (which is proportional to Average Pooling) Note If the sum to the power of `p` is zero, the gradient of this function is not defined. This implementation will set the gradient to zero in this case. Parameters * **kernel\_size** – a single int, the size of the window * **stride** – a single int, the stride of the window. Default value is `kernel_size` * **ceil\_mode** – when True, will use `ceil` instead of `floor` to compute the output shape Shape: * Input: (N, C, L_{in}) * Output: (N, C, L_{out}) , where L_{out} = \left\lfloor\frac{L_{in} - \text{kernel\_size}}{\text{stride}} + 1\right\rfloor Examples: ``` >>> # power-2 pool of window of length 3, with stride 2. >>> m = nn.LPPool1d(2, 3, stride=2) >>> input = torch.randn(20, 16, 50) >>> output = m(input) ``` pytorch torch.dist torch.dist ========== `torch.dist(input, other, p=2) → Tensor` Returns the p-norm of (`input` - `other`) The shapes of `input` and `other` must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics). Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the right-hand-side input tensor * **p** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – the norm to be computed Example: ``` >>> x = torch.randn(4) >>> x tensor([-1.5393, -0.8675, 0.5916, 1.6321]) >>> y = torch.randn(4) >>> y tensor([ 0.0967, -1.0511, 0.6295, 0.8360]) >>> torch.dist(x, y, 3.5) tensor(1.6727) >>> torch.dist(x, y, 3) tensor(1.6973) >>> torch.dist(x, y, 0) tensor(inf) >>> torch.dist(x, y, 1) tensor(2.6537) ``` pytorch torch.broadcast_tensors torch.broadcast\_tensors ======================== `torch.broadcast_tensors(*tensors) → List of Tensors` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#broadcast_tensors) Broadcasts the given tensors according to [Broadcasting semantics](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics). Parameters **\*tensors** – any number of tensors of the same type Warning More than one element of a broadcasted tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first. Example: ``` >>> x = torch.arange(3).view(1, 3) >>> y = torch.arange(2).view(2, 1) >>> a, b = torch.broadcast_tensors(x, y) >>> a.size() torch.Size([2, 3]) >>> a tensor([[0, 1, 2], [0, 1, 2]]) ``` pytorch torch.exp2 torch.exp2 ========== `torch.exp2(input, *, out=None) → Tensor` Computes the base two exponential function of `input`. y_{i} = 2^{x_{i}} Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> import math >>> torch.exp2(torch.tensor([0, math.log2(2.), 3, 4])) tensor([ 1., 2., 8., 16.]) ``` pytorch L1Unstructured L1Unstructured ============== `class torch.nn.utils.prune.L1Unstructured(amount)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#L1Unstructured) Prune (currently unpruned) units in a tensor by zeroing out the ones with the lowest L1-norm.
Parameters **amount** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – quantity of parameters to prune. If `float`, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If `int`, it represents the absolute number of parameters to prune. `classmethod apply(module, name, amount, importance_scores=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#L1Unstructured.apply) Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters * **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – parameter name within `module` on which pruning will act. * **amount** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – quantity of parameters to prune. If `float`, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If `int`, it represents the absolute number of parameters to prune. * **importance\_scores** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor of importance scores (of same shape as module parameter) used to compute mask for pruning. The values in this tensor indicate the importance of the corresponding elements in the parameter being pruned. If unspecified or None, the module parameter will be used in its place. `apply_mask(module)` Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune Returns pruned version of the input tensor Return type pruned\_tensor ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) `prune(t, default_mask=None, importance_scores=None)` Computes and returns a pruned version of input tensor `t` according to the pruning rule specified in `compute_mask()`. Parameters * **t** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor to prune (of same dimensions as `default_mask`). * **importance\_scores** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor of importance scores (of same shape as `t`) used to compute mask for pruning `t`. The values in this tensor indicate the importance of the corresponding elements in the `t` that is being pruned. If unspecified or None, the tensor `t` will be used in its place. * **default\_mask** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones. Returns pruned version of tensor `t`. `remove(module)` Removes the pruning reparameterization from a module. The pruned parameter named `name` remains permanently pruned, and the parameter named `name+'_orig'` is removed from the parameter list. Similarly, the buffer named `name+'_mask'` is removed from the buffers. Note Pruning itself is NOT undone or reversed! 
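As a short usage sketch (the module, parameter name, and amount below are arbitrary choices), the functional form `torch.nn.utils.prune.l1_unstructured` applies this method to a named parameter, leaving behind the `weight_orig` parameter and `weight_mask` buffer described above:

```
>>> import torch.nn as nn
>>> import torch.nn.utils.prune as prune
>>> m = nn.Linear(4, 3)
>>> # zero out the 30% of weight entries with the smallest absolute value
>>> m = prune.l1_unstructured(m, name="weight", amount=0.3)
>>> hasattr(m, "weight_orig"), hasattr(m, "weight_mask")
(True, True)
>>> m = prune.remove(m, "weight")  # make the pruning permanent (pruning is not undone)
```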
pytorch MaxUnpool2d MaxUnpool2d =========== `class torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/pooling.html#MaxUnpool2d) Computes a partial inverse of [`MaxPool2d`](torch.nn.maxpool2d#torch.nn.MaxPool2d "torch.nn.MaxPool2d"). [`MaxPool2d`](torch.nn.maxpool2d#torch.nn.MaxPool2d "torch.nn.MaxPool2d") is not fully invertible, since the non-maximal values are lost. [`MaxUnpool2d`](#torch.nn.MaxUnpool2d "torch.nn.MaxUnpool2d") takes in as input the output of [`MaxPool2d`](torch.nn.maxpool2d#torch.nn.MaxPool2d "torch.nn.MaxPool2d") including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero. Note [`MaxPool2d`](torch.nn.maxpool2d#torch.nn.MaxPool2d "torch.nn.MaxPool2d") can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument `output_size` in the forward call. See the Inputs and Example below. Parameters * **kernel\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Size of the max pooling window. * **stride** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Stride of the max pooling window. It is set to `kernel_size` by default. * **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Padding that was added to the input Inputs: * `input`: the input Tensor to invert * `indices`: the indices given out by [`MaxPool2d`](torch.nn.maxpool2d#torch.nn.MaxPool2d "torch.nn.MaxPool2d") * `output_size` (optional): the targeted output size Shape: * Input: (N, C, H_{in}, W_{in}) * Output: (N, C, H_{out}, W_{out}) , where H_{out} = (H_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]} + \text{kernel\_size[0]} and W_{out} = (W_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]} + \text{kernel\_size[1]} , or as given by `output_size` in the call operator Example: ``` >>> pool = nn.MaxPool2d(2, stride=2, return_indices=True) >>> unpool = nn.MaxUnpool2d(2, stride=2) >>> input = torch.tensor([[[[ 1., 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]]]]) >>> output, indices = pool(input) >>> unpool(output, indices) tensor([[[[ 0., 0., 0., 0.], [ 0., 6., 0., 8.], [ 0., 0., 0., 0.], [ 0., 14., 0., 16.]]]]) >>> # specify a different output size than input size >>> unpool(output, indices, output_size=torch.Size([1, 1, 5, 5])) tensor([[[[ 0., 0., 0., 0., 0.], [ 6., 0., 8., 0., 0.], [ 0., 0., 0., 14., 0.], [ 16., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.]]]]) ```
pytorch PackedSequence PackedSequence ============== `class torch.nn.utils.rnn.PackedSequence` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/rnn.html#PackedSequence) Holds the data and list of [`batch_sizes`](#torch.nn.utils.rnn.PackedSequence.batch_sizes "torch.nn.utils.rnn.PackedSequence.batch_sizes") of a packed sequence. All RNN modules accept packed sequences as inputs. Note Instances of this class should never be created manually. They are meant to be instantiated by functions like [`pack_padded_sequence()`](torch.nn.utils.rnn.pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence "torch.nn.utils.rnn.pack_padded_sequence"). Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to [`pack_padded_sequence()`](torch.nn.utils.rnn.pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence "torch.nn.utils.rnn.pack_padded_sequence"). For instance, given data `abc` and `x` the [`PackedSequence`](#torch.nn.utils.rnn.PackedSequence "torch.nn.utils.rnn.PackedSequence") would contain data `axbc` with `batch_sizes=[2,1,1]`. Variables * **~PackedSequence.data** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – Tensor containing the packed sequence * **~PackedSequence.batch\_sizes** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – Tensor of integers holding information about the batch size at each sequence step * **~PackedSequence.sorted\_indices** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – Tensor of integers holding how this [`PackedSequence`](#torch.nn.utils.rnn.PackedSequence "torch.nn.utils.rnn.PackedSequence") is constructed from sequences. * **~PackedSequence.unsorted\_indices** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – Tensor of integers holding how to recover the original sequences with the correct order. Note [`data`](#torch.nn.utils.rnn.PackedSequence.data "torch.nn.utils.rnn.PackedSequence.data") can be on an arbitrary device and of an arbitrary dtype. [`sorted_indices`](#torch.nn.utils.rnn.PackedSequence.sorted_indices "torch.nn.utils.rnn.PackedSequence.sorted_indices") and [`unsorted_indices`](#torch.nn.utils.rnn.PackedSequence.unsorted_indices "torch.nn.utils.rnn.PackedSequence.unsorted_indices") must be `torch.int64` tensors on the same device as [`data`](#torch.nn.utils.rnn.PackedSequence.data "torch.nn.utils.rnn.PackedSequence.data"). However, [`batch_sizes`](#torch.nn.utils.rnn.PackedSequence.batch_sizes "torch.nn.utils.rnn.PackedSequence.batch_sizes") should always be a CPU `torch.int64` tensor. This invariant is maintained throughout the [`PackedSequence`](#torch.nn.utils.rnn.PackedSequence "torch.nn.utils.rnn.PackedSequence") class and by all functions that construct a `PackedSequence` in PyTorch (i.e., they only pass in tensors conforming to this constraint). `property batch_sizes` Alias for field number 1 `count()` Return number of occurrences of value. `property data` Alias for field number 0 `index()` Return first index of value. Raises ValueError if the value is not present. `property is_cuda` Returns true if `self.data` is stored on a GPU `is_pinned()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/rnn.html#PackedSequence.is_pinned) Returns true if `self.data` is stored in pinned memory `property sorted_indices` Alias for field number 2 `to(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/rnn.html#PackedSequence.to) Performs dtype and/or device conversion on `self.data`.
It has a similar signature to [`torch.Tensor.to()`](../tensors#torch.Tensor.to "torch.Tensor.to"), except that optional arguments like `non_blocking` and `copy` should be passed as kwargs, not args, or they will not apply to the index tensors. Note If the `self.data` Tensor already has the correct `torch.dtype` and `torch.device`, then `self` is returned. Otherwise, returns a copy with the desired configuration. `property unsorted_indices` Alias for field number 3 pytorch torch.rot90 torch.rot90 =========== `torch.rot90(input, k, dims) → Tensor` Rotates an n-D tensor by 90 degrees in the plane specified by the `dims` axes. The rotation direction is from the first towards the second axis if k > 0, and from the second towards the first for k < 0. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **k** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of times to rotate * **dims** (*a list* *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the two axes that determine the plane of rotation Example: ``` >>> x = torch.arange(4).view(2, 2) >>> x tensor([[0, 1], [2, 3]]) >>> torch.rot90(x, 1, [0, 1]) tensor([[1, 3], [0, 2]]) >>> x = torch.arange(8).view(2, 2, 2) >>> x tensor([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> torch.rot90(x, 1, [1, 2]) tensor([[[1, 3], [0, 2]], [[5, 7], [4, 6]]]) ``` pytorch torch.nn.utils.remove_spectral_norm torch.nn.utils.remove\_spectral\_norm ===================================== `torch.nn.utils.remove_spectral_norm(module, name='weight')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/spectral_norm.html#remove_spectral_norm) Removes the spectral normalization reparameterization from a module. Parameters * **module** ([Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – containing module * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – name of weight parameter #### Example ``` >>> m = spectral_norm(nn.Linear(40, 10)) >>> remove_spectral_norm(m) ``` pytorch Sigmoid Sigmoid ======= `class torch.nn.Sigmoid` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#Sigmoid) Applies the element-wise function: \text{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)} Shape: * Input: (N, \*) where `*` means any number of additional dimensions * Output: (N, \*) , same shape as the input Examples: ``` >>> m = nn.Sigmoid() >>> input = torch.randn(2) >>> output = m(input) ``` pytorch torch.unique_consecutive torch.unique\_consecutive ========================= `torch.unique_consecutive(*args, **kwargs)` Eliminates all but the first element from every consecutive group of equivalent elements. Note This function is different from [`torch.unique()`](torch.unique#torch.unique "torch.unique") in the sense that this function only eliminates consecutive duplicate values. This semantics is similar to `std::unique` in C++. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor * **return\_inverse** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to also return the indices for where elements in the original input ended up in the returned unique list. * **return\_counts** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to also return the counts for each unique element.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to apply unique. If `None`, the unique of the flattened input is returned. Default: `None` Returns A tensor or a tuple of tensors containing * **output** (*Tensor*): the output list of unique scalar elements. * **inverse\_indices** (*Tensor*): (optional) if `return_inverse` is True, there will be an additional returned tensor (same shape as input) representing the indices for where elements in the original input map to in the output; otherwise, this function will only return a single tensor. * **counts** (*Tensor*): (optional) if `return_counts` is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor. Return type ([Tensor](../tensors#torch.Tensor "torch.Tensor"), [Tensor](../tensors#torch.Tensor "torch.Tensor") (optional), [Tensor](../tensors#torch.Tensor "torch.Tensor") (optional)) Example: ``` >>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2]) >>> output = torch.unique_consecutive(x) >>> output tensor([1, 2, 3, 1, 2]) >>> output, inverse_indices = torch.unique_consecutive(x, return_inverse=True) >>> output tensor([1, 2, 3, 1, 2]) >>> inverse_indices tensor([0, 0, 1, 1, 2, 3, 3, 4]) >>> output, counts = torch.unique_consecutive(x, return_counts=True) >>> output tensor([1, 2, 3, 1, 2]) >>> counts tensor([2, 2, 1, 2, 1]) ``` pytorch torch.logdet torch.logdet ============ `torch.logdet(input) → Tensor` Calculates the log determinant of a square matrix or batches of square matrices. Note The result is `-inf` if `input` has zero determinant, and is `nan` if `input` has a negative determinant. Note Backward through [`logdet()`](#torch.logdet "torch.logdet") internally uses SVD results when `input` is not invertible. In this case, double backward through [`logdet()`](#torch.logdet "torch.logdet") will be unstable when `input` doesn't have distinct singular values. See [`svd()`](torch.svd#torch.svd "torch.svd") for details. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor of size `(*, n, n)` where `*` is zero or more batch dimensions. Example: ``` >>> A = torch.randn(3, 3) >>> torch.det(A) tensor(0.2611) >>> torch.logdet(A) tensor(-1.3430) >>> A = torch.randn(3, 2, 2) >>> A tensor([[[ 0.9254, -0.6213], [-0.5787, 1.6843]], [[ 0.3242, -0.9665], [ 0.4539, -0.0887]], [[ 1.1336, -0.4025], [-0.7089, 0.9032]]]) >>> A.det() tensor([1.1990, 0.4099, 0.7386]) >>> A.det().log() tensor([ 0.1815, -0.8917, -0.3031]) ``` pytorch torch.logit torch.logit =========== `torch.logit(input, eps=None, *, out=None) → Tensor` Returns a new tensor with the logit of the elements of `input`. `input` is clamped to [eps, 1 - eps] when eps is not None. When eps is None and `input` < 0 or `input` > 1, the function will yield NaN. y_{i} = \ln\left(\frac{z_{i}}{1 - z_{i}}\right) , where z_{i} = \begin{cases} x_{i} & \text{if eps is None} \\ \text{eps} & \text{if } x_{i} < \text{eps} \\ x_{i} & \text{if } \text{eps} \leq x_{i} \leq 1 - \text{eps} \\ 1 - \text{eps} & \text{if } x_{i} > 1 - \text{eps} \end{cases} Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **eps** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – the epsilon for input clamp bound.
Default: `None` Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.rand(5) >>> a tensor([0.2796, 0.9331, 0.6486, 0.1523, 0.6516]) >>> torch.logit(a, eps=1e-6) tensor([-0.9466, 2.6352, 0.6131, -1.7169, 0.6261]) ``` pytorch torch.hypot torch.hypot =========== `torch.hypot(input, other, *, out=None) → Tensor` Given the legs of a right triangle, return its hypotenuse. \text{out}_{i} = \sqrt{\text{input}_{i}^{2} + \text{other}_{i}^{2}} The shapes of `input` and `other` must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics). Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the first input tensor * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the second input tensor Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> torch.hypot(torch.tensor([4.0]), torch.tensor([3.0, 4.0, 5.0])) tensor([5.0000, 5.6569, 6.4031]) ``` pytorch torch.ldexp torch.ldexp =========== `torch.ldexp(input, other, *, out=None) → Tensor` Multiplies `input` by `2 ** other`. \text{out}_i = \text{input}_i * 2^{\text{other}_i} Typically this function is used to construct floating point numbers by multiplying mantissas in `input` with integral powers of two created from the exponents in `other`. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – a tensor of exponents, typically integers. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> torch.ldexp(torch.tensor([1.]), torch.tensor([1])) tensor([2.]) >>> torch.ldexp(torch.tensor([1.0]), torch.tensor([1, 2, 3, 4])) tensor([ 2., 4., 8., 16.]) ``` pytorch torch.set_default_tensor_type torch.set\_default\_tensor\_type ================================ `torch.set_default_tensor_type(t)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch.html#set_default_tensor_type) Sets the default `torch.Tensor` type to floating point tensor type `t`. This type will also be used as default floating point type for type inference in [`torch.tensor()`](torch.tensor#torch.tensor "torch.tensor"). The default floating point tensor type is initially `torch.FloatTensor`. Parameters **t** ([type](https://docs.python.org/3/library/functions.html#type "(in Python v3.9)") *or* *string*) – the floating point tensor type or its name Example: ``` >>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32 torch.float32 >>> torch.set_default_tensor_type(torch.DoubleTensor) >>> torch.tensor([1.2, 3]).dtype # a new floating point tensor torch.float64 ``` pytorch GRU GRU === `class torch.nn.GRU(*args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/rnn.html#GRU) Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function: \begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)} + b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array} where h_t is the hidden state at time `t`, x_t is the input at time `t`, h_{(t-1)} is the hidden state of the layer at time `t-1` or the initial hidden state at time `0`, and r_t , z_t , n_t are the reset, update, and new gates, respectively. \sigma is the sigmoid function, and * is the Hadamard product. In a multilayer GRU, the input x^{(l)}_t of the l -th layer ( l \ge 2 ) is the hidden state h^{(l-1)}_t of the previous layer multiplied by dropout \delta^{(l-1)}_t , where each \delta^{(l-1)}_t is a Bernoulli random variable which is 0 with probability `dropout`. Parameters * **input\_size** – The number of expected features in the input `x` * **hidden\_size** – The number of features in the hidden state `h` * **num\_layers** – Number of recurrent layers. E.g., setting `num_layers=2` would mean stacking two GRUs together to form a `stacked GRU`, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1 * **bias** – If `False`, then the layer does not use bias weights `b_ih` and `b_hh`. Default: `True` * **batch\_first** – If `True`, then the input and output tensors are provided as (batch, seq, feature). Default: `False` * **dropout** – If non-zero, introduces a `Dropout` layer on the outputs of each GRU layer except the last layer, with dropout probability equal to `dropout`. Default: 0 * **bidirectional** – If `True`, becomes a bidirectional GRU. Default: `False` Inputs: input, h\_0 * **input** of shape `(seq_len, batch, input_size)`: tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See [`torch.nn.utils.rnn.pack_padded_sequence()`](torch.nn.utils.rnn.pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence "torch.nn.utils.rnn.pack_padded_sequence") for details. * **h\_0** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num\_directions should be 2, else it should be 1. Outputs: output, h\_n * **output** of shape `(seq_len, batch, num_directions * hidden_size)`: tensor containing the output features h\_t from the last layer of the GRU, for each `t`. If a [`torch.nn.utils.rnn.PackedSequence`](torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence "torch.nn.utils.rnn.PackedSequence") has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using `output.view(seq_len, batch, num_directions, hidden_size)`, with forward and backward being direction `0` and `1` respectively. Similarly, the directions can be separated in the packed case. * **h\_n** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the hidden state for `t = seq_len` Like *output*, the layers can be separated using `h_n.view(num_layers, num_directions, batch, hidden_size)`.
Shape: * Input1: (L, N, H_{in}) tensor containing input features, where H_{in} = \text{input\_size} and `L` represents a sequence length. * Input2: (S, N, H_{out}) tensor containing the initial hidden state for each element in the batch, where S = \text{num\_layers} * \text{num\_directions} and H_{out} = \text{hidden\_size} . Defaults to zero if not provided. If the RNN is bidirectional, num\_directions should be 2, else it should be 1. * Output1: (L, N, H_{all}) where H_{all} = \text{num\_directions} * \text{hidden\_size} * Output2: (S, N, H_{out}) tensor containing the next hidden state for each element in the batch Variables * **~GRU.weight\_ih\_l[k]** – the learnable input-hidden weights of the k-th layer (W\_ir|W\_iz|W\_in), of shape `(3*hidden_size, input_size)` for `k = 0`. Otherwise, the shape is `(3*hidden_size, num_directions * hidden_size)` * **~GRU.weight\_hh\_l[k]** – the learnable hidden-hidden weights of the k-th layer (W\_hr|W\_hz|W\_hn), of shape `(3*hidden_size, hidden_size)` * **~GRU.bias\_ih\_l[k]** – the learnable input-hidden bias of the k-th layer (b\_ir|b\_iz|b\_in), of shape `(3*hidden_size)` * **~GRU.bias\_hh\_l[k]** – the learnable hidden-hidden bias of the k-th layer (b\_hr|b\_hz|b\_hn), of shape `(3*hidden_size)` Note All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}} Note If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype `torch.float16`, 4) a V100 GPU is used, and 5) input data is not in `PackedSequence` format, then the persistent algorithm can be selected to improve performance. Examples: ``` >>> rnn = nn.GRU(10, 20, 2) >>> input = torch.randn(5, 3, 10) >>> h0 = torch.randn(2, 3, 20) >>> output, hn = rnn(input, h0) ``` pytorch SiLU SiLU ==== `class torch.nn.SiLU(inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#SiLU) Applies the SiLU function, element-wise: \text{silu}(x) = x * \sigma(x) , where \sigma(x) is the logistic sigmoid. Note See [Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415) where the SiLU (Sigmoid Linear Unit) was originally coined, and see [Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning](https://arxiv.org/abs/1702.03118) and [Swish: a Self-Gated Activation Function](https://arxiv.org/abs/1710.05941v1) where the SiLU was experimented with later. Shape: * Input: (N, \*) where `*` means any number of additional dimensions * Output: (N, \*) , same shape as the input Examples: ``` >>> m = nn.SiLU() >>> input = torch.randn(2) >>> output = m(input) ```
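The GRU entry above notes that packed variable-length inputs can be fed directly to the network; a brief sketch of that path (layer sizes and sequence lengths below are arbitrary), using `pack_padded_sequence` and `pad_packed_sequence` from `torch.nn.utils.rnn`:

```
>>> import torch
>>> import torch.nn as nn
>>> from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
>>> rnn = nn.GRU(10, 20, 2)
>>> padded = torch.randn(5, 3, 10)            # 3 sequences zero-padded to length 5
>>> packed = pack_padded_sequence(padded, lengths=[5, 3, 1])
>>> packed_out, hn = rnn(packed)              # output is also a PackedSequence
>>> out, lens = pad_packed_sequence(packed_out)
>>> out.shape, hn.shape
(torch.Size([5, 3, 20]), torch.Size([2, 3, 20]))
```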
pytorch MultiLabelSoftMarginLoss MultiLabelSoftMarginLoss ======================== `class torch.nn.MultiLabelSoftMarginLoss(weight=None, size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#MultiLabelSoftMarginLoss) Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C) . For each sample in the minibatch: loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log((1 + \exp(-x[i]))^{-1}) + (1-y[i]) * \log\left(\frac{\exp(-x[i])}{1 + \exp(-x[i])}\right) where i \in \left\{0, \; \cdots , \; \text{x.nElement}() - 1\right\} and y[i] \in \left\{0, \; 1\right\} . Parameters * **weight** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size `C`. Otherwise, it is treated as if having all ones. * **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True` * **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'` Shape: * Input: (N, C) where `N` is the batch size and `C` is the number of classes. * Target: (N, C) , label targets padded by -1 ensuring same shape as the input. * Output: scalar. If `reduction` is `'none'`, then (N) . pytorch torch.divide torch.divide ============ `torch.divide(input, other, *, rounding_mode=None, out=None) → Tensor` Alias for [`torch.div()`](torch.div#torch.div "torch.div"). pytorch torch.gather torch.gather ============ `torch.gather(input, dim, index, *, sparse_grad=False, out=None) → Tensor` Gathers values along an axis specified by `dim`. For a 3-D tensor the output is specified by: ``` out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0 out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1 out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2 ``` `input` and `index` must have the same number of dimensions. It is also required that `index.size(d) <= input.size(d)` for all dimensions `d != dim`. `out` will have the same shape as `index`. Note that `input` and `index` do not broadcast against each other.
Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the source tensor * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the axis along which to index * **index** (*LongTensor*) – the indices of elements to gather Keyword Arguments * **sparse\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, gradient w.r.t. `input` will be a sparse tensor. * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the destination tensor Example: ``` >>> t = torch.tensor([[1, 2], [3, 4]]) >>> torch.gather(t, 1, torch.tensor([[0, 0], [1, 0]])) tensor([[ 1, 1], [ 4, 3]]) ``` pytorch PruningContainer PruningContainer ================ `class torch.nn.utils.prune.PruningContainer(*args)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#PruningContainer) Container holding a sequence of pruning methods for iterative pruning. Keeps track of the order in which pruning methods are applied and handles combining successive pruning calls. Accepts as argument an instance of a BasePruningMethod or an iterable of them. `add_pruning_method(method)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#PruningContainer.add_pruning_method) Adds a child pruning `method` to the container. Parameters **method** (*subclass of BasePruningMethod*) – child pruning method to be added to the container. `classmethod apply(module, name, *args, importance_scores=None, **kwargs)` Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters * **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune * **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – parameter name within `module` on which pruning will act. * **args** – arguments passed on to a subclass of [`BasePruningMethod`](torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod "torch.nn.utils.prune.BasePruningMethod") * **importance\_scores** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor of importance scores (of same shape as module parameter) used to compute mask for pruning. The values in this tensor indicate the importance of the corresponding elements in the parameter being pruned. If unspecified or None, the parameter will be used in its place. * **kwargs** – keyword arguments passed on to a subclass of a [`BasePruningMethod`](torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod "torch.nn.utils.prune.BasePruningMethod") `apply_mask(module)` Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune Returns pruned version of the input tensor Return type pruned\_tensor ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) `compute_mask(t, default_mask)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#PruningContainer.compute_mask) Applies the latest `method` by computing the new partial masks and returning its combination with the `default_mask`. 
The new partial mask should be computed on the entries or channels that were not zeroed out by the `default_mask`. Which portions of the tensor `t` the new mask will be calculated from depends on the `PRUNING_TYPE` (handled by the type handler): * for ‘unstructured’, the mask will be computed from the raveled list of nonmasked entries; * for ‘structured’, the mask will be computed from the nonmasked channels in the tensor; * for ‘global’, the mask will be computed across all entries. Parameters * **t** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor representing the parameter to prune (of same dimensions as `default_mask`). * **default\_mask** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – mask from previous pruning iteration. Returns new mask that combines the effects of the `default_mask` and the new mask from the current pruning `method` (of same dimensions as `default_mask` and `t`). Return type mask ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) `prune(t, default_mask=None, importance_scores=None)` Computes and returns a pruned version of input tensor `t` according to the pruning rule specified in [`compute_mask()`](#torch.nn.utils.prune.PruningContainer.compute_mask "torch.nn.utils.prune.PruningContainer.compute_mask"). Parameters * **t** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor to prune (of same dimensions as `default_mask`). * **importance\_scores** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – tensor of importance scores (of same shape as `t`) used to compute mask for pruning `t`. The values in this tensor indicate the importance of the corresponding elements in the `t` that is being pruned. If unspecified or None, the tensor `t` will be used in its place. * **default\_mask** ([torch.Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones. Returns pruned version of tensor `t`. `remove(module)` Removes the pruning reparameterization from a module. The pruned parameter named `name` remains permanently pruned, and the parameter named `name+'_orig'` is removed from the parameter list. Similarly, the buffer named `name+'_mask'` is removed from the buffers. Note Pruning itself is NOT undone or reversed! pytorch torch.matrix_rank torch.matrix\_rank ================== `torch.matrix_rank(input, tol=None, symmetric=False, *, out=None) → Tensor` Returns the numerical rank of a 2-D tensor. The method to compute the matrix rank is done using SVD by default. If `symmetric` is `True`, then `input` is assumed to be symmetric, and the computation of the rank is done by obtaining the eigenvalues. `tol` is the threshold below which the singular values (or the eigenvalues when `symmetric` is `True`) are considered to be 0. If `tol` is not specified, `tol` is set to `S.max() * max(S.size()) * eps` where `S` is the singular values (or the eigenvalues when `symmetric` is `True`), and `eps` is the epsilon value for the datatype of `input`. Note [`torch.matrix_rank()`](#torch.matrix_rank "torch.matrix_rank") is deprecated. Please use [`torch.linalg.matrix_rank()`](../linalg#torch.linalg.matrix_rank "torch.linalg.matrix_rank") instead. The parameter `symmetric` was renamed in [`torch.linalg.matrix_rank()`](../linalg#torch.linalg.matrix_rank "torch.linalg.matrix_rank") to `hermitian`. 
Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input 2-D tensor * **tol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – the tolerance value. Default: `None` * **symmetric** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – indicates whether `input` is symmetric. Default: `False` Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.eye(10) >>> torch.matrix_rank(a) tensor(10) >>> b = torch.eye(10) >>> b[0, 0] = 0 >>> torch.matrix_rank(b) tensor(9) ``` pytorch torch.t torch.t ======= `torch.t(input) → Tensor` Expects `input` to be a <= 2-D tensor and transposes dimensions 0 and 1. 0-D and 1-D tensors are returned as is. When input is a 2-D tensor this is equivalent to `transpose(input, 0, 1)`. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Example: ``` >>> x = torch.randn(()) >>> x tensor(0.1995) >>> torch.t(x) tensor(0.1995) >>> x = torch.randn(3) >>> x tensor([ 2.4320, -0.4608, 0.7702]) >>> torch.t(x) tensor([ 2.4320, -0.4608, 0.7702]) >>> x = torch.randn(2, 3) >>> x tensor([[ 0.4875, 0.9158, -0.5872], [ 0.3938, -0.6929, 0.6932]]) >>> torch.t(x) tensor([[ 0.4875, 0.3938], [ 0.9158, -0.6929], [-0.5872, 0.6932]]) ``` pytorch SELU SELU ==== `class torch.nn.SELU(inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#SELU) Applied element-wise, as: \text{SELU}(x) = \text{scale} * (\max(0,x) + \min(0, \alpha * (\exp(x) - 1))) with \alpha = 1.6732632423543772848170429916717 and \text{scale} = 1.0507009873554804934193349852946 . More details can be found in the paper [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515) . Parameters **inplace** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – can optionally do the operation in-place. Default: `False` Shape: * Input: (N, \*) where `*` means any number of additional dimensions * Output: (N, \*) , same shape as the input Examples: ``` >>> m = nn.SELU() >>> input = torch.randn(2) >>> output = m(input) ``` pytorch torch.tanh torch.tanh ========== `torch.tanh(input, *, out=None) → Tensor` Returns a new tensor with the hyperbolic tangent of the elements of `input`. \text{out}_{i} = \tanh(\text{input}_{i}) Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.randn(4) >>> a tensor([ 0.8986, -0.7279, 1.1745, 0.2611]) >>> torch.tanh(a) tensor([ 0.7156, -0.6218, 0.8257, 0.2553]) ``` pytorch torch.zeros torch.zeros =========== `torch.zeros(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor` Returns a tensor filled with the scalar value `0`, with the shape defined by the variable argument `size`. Parameters **size** (*int...*) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple. Keyword Arguments * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
* **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned Tensor. Default: `torch.strided`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> torch.zeros(2, 3) tensor([[ 0., 0., 0.], [ 0., 0., 0.]]) >>> torch.zeros(5) tensor([ 0., 0., 0., 0., 0.]) ``` pytorch torch.cumsum torch.cumsum ============ `torch.cumsum(input, dim, *, dtype=None, out=None) → Tensor` Returns the cumulative sum of elements of `input` in the dimension `dim`. For example, if `input` is a vector of size N, the result will also be a vector of size N, with elements y\_i = x\_1 + x\_2 + x\_3 + \dots + x\_i . Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to do the operation over Keyword Arguments * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. If specified, the input tensor is cast to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None. * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.randn(10) >>> a tensor([-0.8286, -0.4890, 0.5155, 0.8443, 0.1865, -0.1752, -2.0595, 0.1850, -1.1571, -0.4243]) >>> torch.cumsum(a, dim=0) tensor([-0.8286, -1.3175, -0.8020, 0.0423, 0.2289, 0.0537, -2.0058, -1.8209, -2.9780, -3.4022]) ``` pytorch torch.transpose torch.transpose =============== `torch.transpose(input, dim0, dim1) → Tensor` Returns a tensor that is a transposed version of `input`. The given dimensions `dim0` and `dim1` are swapped. The resulting `out` tensor shares its underlying storage with the `input` tensor, so changing the content of one would change the content of the other. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. 
* **dim0** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the first dimension to be transposed * **dim1** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the second dimension to be transposed Example: ``` >>> x = torch.randn(2, 3) >>> x tensor([[ 1.0028, -0.9893, 0.5809], [-0.1669, 0.7299, 0.4942]]) >>> torch.transpose(x, 0, 1) tensor([[ 1.0028, -0.1669], [-0.9893, 0.7299], [ 0.5809, 0.4942]]) ``` pytorch InstanceNorm3d InstanceNorm3d ============== `class torch.nn.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/instancenorm.html#InstanceNorm3d) Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022). y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \* \gamma + \beta The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \gamma and \beta are learnable parameter vectors of size C (where C is the input size) if `affine` is `True`. The standard-deviation is calculated via the biased estimator, equivalent to `torch.var(input, unbiased=False)`. By default, this layer uses instance statistics computed from input data in both training and evaluation modes. If `track_running_stats` is set to `True`, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default `momentum` of 0.1. Note This `momentum` argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}\_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x\_t , where \hat{x} is the estimated statistic and x\_t is the new observed value. Note [`InstanceNorm3d`](#torch.nn.InstanceNorm3d "torch.nn.InstanceNorm3d") and [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") are very similar, but have some subtle differences. [`InstanceNorm3d`](#torch.nn.InstanceNorm3d "torch.nn.InstanceNorm3d") is applied on each channel of channeled data like 3D models with RGB color, but [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") is usually applied over an entire sample and often in NLP tasks. Additionally, [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") applies an elementwise affine transform, while [`InstanceNorm3d`](#torch.nn.InstanceNorm3d "torch.nn.InstanceNorm3d") usually does not apply an affine transform. Parameters * **num\_features** – C from an expected input of size (N, C, D, H, W) * **eps** – a value added to the denominator for numerical stability. Default: 1e-5 * **momentum** – the value used for the running\_mean and running\_var computation. Default: 0.1 * **affine** – a boolean value that when set to `True`, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: `False`. 
* **track\_running\_stats** – a boolean value that when set to `True`, this module tracks the running mean and variance, and when set to `False`, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: `False` Shape: * Input: (N, C, D, H, W) * Output: (N, C, D, H, W) (same shape as input) Examples: ``` >>> # Without Learnable Parameters >>> m = nn.InstanceNorm3d(100) >>> # With Learnable Parameters >>> m = nn.InstanceNorm3d(100, affine=True) >>> input = torch.randn(20, 100, 35, 45, 10) >>> output = m(input) ```
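To make the `momentum` update rule above concrete, here is a minimal illustrative sketch of how a running mean would be updated (the variable names are ours, not part of the module's API):

```
>>> momentum = 0.1
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> running_mean = torch.zeros(100)            # estimated statistic x_hat
>>> batch_mean = input.mean(dim=(0, 2, 3, 4))  # new observed value x_t, one value per channel
>>> running_mean = (1 - momentum) * running_mean + momentum * batch_mean
```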
pytorch torch.randn torch.randn =========== `torch.randn(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor` Returns a tensor filled with random numbers from a normal distribution with mean `0` and variance `1` (also called the standard normal distribution). outi∼N(0,1)\text{out}\_{i} \sim \mathcal{N}(0, 1) The shape of the tensor is defined by the variable argument `size`. Parameters **size** (*int...*) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple. Keyword Arguments * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned Tensor. Default: `torch.strided`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> torch.randn(4) tensor([-2.1436, 0.9966, 2.3426, -0.6366]) >>> torch.randn(2, 3) tensor([[ 1.5954, 2.8929, -1.0923], [ 1.1719, -0.4709, -0.1996]]) ``` pytorch LazyConvTranspose3d LazyConvTranspose3d =================== `class torch.nn.LazyConvTranspose3d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/conv.html#LazyConvTranspose3d) A [`torch.nn.ConvTranspose3d`](torch.nn.convtranspose3d#torch.nn.ConvTranspose3d "torch.nn.ConvTranspose3d") module with lazy initialization of the `in_channels` argument of the [`ConvTranspose3d`](torch.nn.convtranspose3d#torch.nn.ConvTranspose3d "torch.nn.ConvTranspose3d") that is inferred from the `input.size(1)`. Parameters * **out\_channels** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of channels produced by the convolution * **kernel\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Size of the convolving kernel * **stride** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Stride of the convolution. 
Default: 1 * **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – `dilation * (kernel_size - 1) - padding` zero-padding will be added to both sides of each dimension in the input. Default: 0 * **output\_padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Additional size added to one side of each dimension in the output shape. Default: 0 * **groups** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of blocked connections from input channels to output channels. Default: 1 * **bias** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, adds a learnable bias to the output. Default: `True` * **dilation** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Spacing between kernel elements. Default: 1 See also [`torch.nn.ConvTranspose3d`](torch.nn.convtranspose3d#torch.nn.ConvTranspose3d "torch.nn.ConvTranspose3d") and [`torch.nn.modules.lazy.LazyModuleMixin`](torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin "torch.nn.modules.lazy.LazyModuleMixin") `cls_to_become` alias of [`ConvTranspose3d`](torch.nn.convtranspose3d#torch.nn.ConvTranspose3d "torch.nn.ConvTranspose3d") pytorch torch.svd torch.svd ========= `torch.svd(input, some=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)` Computes the singular value decomposition of either a matrix or batch of matrices `input`. The singular value decomposition is represented as a namedtuple (`U,S,V`), such that `input` = `U` diag(`S`) `Vᴴ`, where `Vᴴ` is the transpose of `V` for the real-valued inputs, or the conjugate transpose of `V` for the complex-valued inputs. If `input` is a batch of tensors, then `U`, `S`, and `V` are also batched with the same batch dimensions as `input`. If `some` is `True` (default), the method returns the reduced singular value decomposition, i.e., if the last two dimensions of `input` are `m` and `n`, then the returned `U` and `V` matrices will contain only min(`n, m`) orthonormal columns. If `compute_uv` is `False`, the returned `U` and `V` will be zero-filled matrices of shape `(m × m)` and `(n × n)` respectively, on the same device as `input`. The `some` argument has no effect when `compute_uv` is `False`. Supports input of float, double, cfloat and cdouble data types. The dtypes of `U` and `V` are the same as `input`’s. `S` will always be real-valued, even if `input` is complex. Warning [`torch.svd()`](#torch.svd "torch.svd") is deprecated. Please use [`torch.linalg.svd()`](../linalg#torch.linalg.svd "torch.linalg.svd") instead, which is similar to NumPy’s `numpy.linalg.svd`. Note Differences with [`torch.linalg.svd()`](../linalg#torch.linalg.svd "torch.linalg.svd"): * `some` is the opposite of [`torch.linalg.svd()`](../linalg#torch.linalg.svd "torch.linalg.svd")’s `full_matrices`. Note that the default value for both is `True`, so the default behavior is effectively the opposite. * [`torch.svd()`](#torch.svd "torch.svd") returns `V`, whereas [`torch.linalg.svd()`](../linalg#torch.linalg.svd "torch.linalg.svd") returns `Vᴴ`. 
* If `compute_uv=False`, [`torch.svd()`](#torch.svd "torch.svd") returns zero-filled tensors for `U` and `V`, whereas [`torch.linalg.svd()`](../linalg#torch.linalg.svd "torch.linalg.svd") returns empty tensors. Note The singular values are returned in descending order. If `input` is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order. Note The implementation of SVD on CPU uses the LAPACK routine `?gesdd` (a divide-and-conquer algorithm) instead of `?gesvd` for speed. Analogously, the SVD on GPU uses the cuSOLVER routines `gesvdj` and `gesvdjBatched` on CUDA 10.1.243 and later, and uses the MAGMA routine `gesdd` on earlier versions of CUDA. Note The returned matrix `U` will be transposed, i.e. with strides `U.contiguous().transpose(-2, -1).stride()`. Note Gradients computed using `U` and `V` may be unstable if `input` is not full rank or has non-unique singular values. Note When `some` = `False`, the gradients on `U[..., :, min(m, n):]` and `V[..., :, min(m, n):]` will be ignored in backward as those vectors can be arbitrary bases of the subspaces. Note The `S` tensor can only be used to compute gradients if `compute_uv` is True. Note With complex-valued input, the backward operation works correctly only for gauge invariant loss functions. Please look at [Gauge problem in AD](https://re-ra.xyz/Gauge-Problem-in-Automatic-Differentiation/) for more details. Note Since `U` and `V` of an SVD are not unique, each vector can be multiplied by an arbitrary phase factor e^{i \phi} while the SVD result is still correct. Different platforms, like NumPy, or inputs on different device types, may produce different `U` and `V` tensors. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor of size `(*, m, n)` where `*` is zero or more batch dimensions consisting of `(m × n)` matrices. * **some** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – controls whether to compute the reduced or full decomposition, and consequently the shape of returned `U` and `V`. Defaults to True. * **compute\_uv** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – controls whether to compute `U` and `V`. Defaults to True. 
Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the output tuple of tensors Example: ``` >>> a = torch.randn(5, 3) >>> a tensor([[ 0.2364, -0.7752, 0.6372], [ 1.7201, 0.7394, -0.0504], [-0.3371, -1.0584, 0.5296], [ 0.3550, -0.4022, 1.5569], [ 0.2445, -0.0158, 1.1414]]) >>> u, s, v = torch.svd(a) >>> u tensor([[ 0.4027, 0.0287, 0.5434], [-0.1946, 0.8833, 0.3679], [ 0.4296, -0.2890, 0.5261], [ 0.6604, 0.2717, -0.2618], [ 0.4234, 0.2481, -0.4733]]) >>> s tensor([2.3289, 2.0315, 0.7806]) >>> v tensor([[-0.0199, 0.8766, 0.4809], [-0.5080, 0.4054, -0.7600], [ 0.8611, 0.2594, -0.4373]]) >>> torch.dist(a, torch.mm(torch.mm(u, torch.diag(s)), v.t())) tensor(8.6531e-07) >>> a_big = torch.randn(7, 5, 3) >>> u, s, v = torch.svd(a_big) >>> torch.dist(a_big, torch.matmul(torch.matmul(u, torch.diag_embed(s)), v.transpose(-2, -1))) tensor(2.6503e-06) ``` pytorch Softplus Softplus ======== `class torch.nn.Softplus(beta=1, threshold=20)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#Softplus) Applies the element-wise function: \text{Softplus}(x) = \frac{1}{\beta} \* \log(1 + \exp(\beta \* x)) SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive. For numerical stability the implementation reverts to the linear function when input \times \beta > threshold. Parameters * **beta** – the \beta value for the Softplus formulation. Default: 1 * **threshold** – values above this revert to a linear function. Default: 20 Shape: * Input: (N, \*) where `*` means any number of additional dimensions * Output: (N, \*), same shape as the input Examples: ``` >>> m = nn.Softplus() >>> input = torch.randn(2) >>> output = m(input) ``` pytorch torch.argmin torch.argmin ============ `torch.argmin(input, dim=None, keepdim=False) → LongTensor` Returns the indices of the minimum value(s) of the flattened tensor or along a dimension. This is the second value returned by [`torch.min()`](torch.min#torch.min "torch.min"). See its documentation for the exact semantics of this method. Note If there are multiple minimal values then the indices of the first minimal value are returned. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to reduce. If `None`, the argmin of the flattened input is returned. * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not. Ignored if `dim=None`. Example: ``` >>> a = torch.randn(4, 4) >>> a tensor([[ 0.1139, 0.2254, -0.1381, 0.3687], [ 1.0100, -1.1975, -0.0102, -0.4732], [-0.9240, 0.1207, -0.7506, -1.0213], [ 1.7809, -1.2960, 0.9384, 0.1438]]) >>> torch.argmin(a) tensor(13) >>> torch.argmin(a, dim=1) tensor([ 2, 1, 3, 1]) >>> torch.argmin(a, dim=1, keepdim=True) tensor([[2], [1], [3], [1]]) ``` pytorch no_grad no\_grad ======== `class torch.no_grad` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/autograd/grad_mode.html#no_grad) Context-manager that disables gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call [`Tensor.backward()`](../autograd#torch.Tensor.backward "torch.Tensor.backward"). 
It will reduce memory consumption for computations that would otherwise have `requires_grad=True`. In this mode, the result of every computation will have `requires_grad=False`, even when the inputs have `requires_grad=True`. This context manager is thread local; it will not affect computation in other threads. Also functions as a decorator. (Make sure to instantiate with parentheses.) Example: ``` >>> x = torch.tensor([1], requires_grad=True) >>> with torch.no_grad(): ... y = x * 2 >>> y.requires_grad False >>> @torch.no_grad() ... def doubler(x): ... return x * 2 >>> z = doubler(x) >>> z.requires_grad False ``` pytorch torch.jit.unused torch.jit.unused ================ `torch.jit.unused(fn)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/_jit_internal.html#unused) This decorator indicates to the compiler that a function or method should be ignored and replaced with the raising of an exception. This allows you to leave code in your model that is not yet TorchScript compatible and still export your model. Example (using `@torch.jit.unused` on a method): ``` import torch import torch.nn as nn class MyModule(nn.Module): def __init__(self, use_memory_efficient): super(MyModule, self).__init__() self.use_memory_efficient = use_memory_efficient @torch.jit.unused def memory_efficient(self, x): import pdb pdb.set_trace() return x + 10 def forward(self, x): # Use not-yet-scriptable memory efficient mode if self.use_memory_efficient: return self.memory_efficient(x) else: return x + 10 m = torch.jit.script(MyModule(use_memory_efficient=False)) m.save("m.pt") m = torch.jit.script(MyModule(use_memory_efficient=True)) # exception raised m(torch.rand(100)) ``` pytorch torch.stack torch.stack =========== `torch.stack(tensors, dim=0, *, out=None) → Tensor` Concatenates a sequence of tensors along a new dimension. All tensors need to be of the same size. Parameters * **tensors** (*sequence of Tensors*) – sequence of tensors to concatenate * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive) Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. pytorch torch.msort torch.msort =========== `torch.msort(input, *, out=None) → Tensor` Sorts the elements of the `input` tensor along its first dimension in ascending order by value. Note `torch.msort(t)` is equivalent to `torch.sort(t, dim=0)[0]`. See also [`torch.sort()`](torch.sort#torch.sort "torch.sort"). Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> t = torch.randn(3, 4) >>> t tensor([[-0.1321, 0.4370, -1.2631, -1.1289], [-2.0527, -1.1250, 0.2275, 0.3077], [-0.0881, -0.1259, -0.5495, 1.0284]]) >>> torch.msort(t) tensor([[-2.0527, -1.1250, -1.2631, -1.1289], [-0.1321, -0.1259, -0.5495, 0.3077], [-0.0881, 0.4370, 0.2275, 1.0284]]) ``` pytorch torch.is_complex torch.is\_complex ================= `torch.is_complex(input) -> (bool)` Returns True if the data type of `input` is a complex data type, i.e., one of `torch.complex64` and `torch.complex128`. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. 
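The page above has no Example section; a minimal usage sketch (ours, in the doctest style of the neighboring pages) would be:

```
>>> torch.is_complex(torch.tensor([1.0, 2.0]))
False
>>> torch.is_complex(torch.tensor([1 + 2j, 3 - 1j]))
True
```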
pytorch torch.i0 torch.i0 ======== `torch.i0(input, *, out=None) → Tensor` Computes the zeroth order modified Bessel function of the first kind for each element of `input`. \text{out}\_{i} = I\_0(\text{input}\_{i}) = \sum\_{k=0}^{\infty} \frac{(\text{input}\_{i}^2/4)^k}{(k!)^2} Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> torch.i0(torch.arange(5, dtype=torch.float32)) tensor([ 1.0000, 1.2661, 2.2796, 4.8808, 11.3019]) ``` pytorch torch.tan torch.tan ========= `torch.tan(input, *, out=None) → Tensor` Returns a new tensor with the tangent of the elements of `input`. \text{out}\_{i} = \tan(\text{input}\_{i}) Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.randn(4) >>> a tensor([-1.2027, -1.7687, 0.4412, -1.3856]) >>> torch.tan(a) tensor([-2.5930, 4.9859, 0.4722, -5.3366]) ``` pytorch torch.cumprod torch.cumprod ============= `torch.cumprod(input, dim, *, dtype=None, out=None) → Tensor` Returns the cumulative product of elements of `input` in the dimension `dim`. For example, if `input` is a vector of size N, the result will also be a vector of size N, with elements y\_i = x\_1 \times x\_2 \times x\_3 \times \dots \times x\_i . Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to do the operation over Keyword Arguments * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. If specified, the input tensor is cast to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None. * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.randn(10) >>> a tensor([ 0.6001, 0.2069, -0.1919, 0.9792, 0.6727, 1.0062, 0.4126, -0.2129, -0.4206, 0.1968]) >>> torch.cumprod(a, dim=0) tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0158, -0.0065, 0.0014, -0.0006, -0.0001]) >>> a[5] = 0.0 >>> torch.cumprod(a, dim=0) tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0000, -0.0000, 0.0000, -0.0000, -0.0000]) ``` pytorch torch.vdot torch.vdot ========== `torch.vdot(input, other, *, out=None) → Tensor` Computes the dot product of two 1D tensors. The vdot(a, b) function handles complex numbers differently than dot(a, b). If the first argument is complex, the complex conjugate of the first argument is used for the calculation of the dot product. Note Unlike NumPy’s vdot, torch.vdot intentionally only supports computing the dot product of two 1D tensors with the same number of elements. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – first tensor in the dot product, must be 1D. Its conjugate is used if it’s complex. * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – second tensor in the dot product, must be 1D. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. 
Example: ``` >>> torch.vdot(torch.tensor([2, 3]), torch.tensor([2, 1])) tensor(7) >>> a = torch.tensor((1 +2j, 3 - 1j)) >>> b = torch.tensor((2 +1j, 4 - 0j)) >>> torch.vdot(a, b) tensor([16.+1.j]) >>> torch.vdot(b, a) tensor([16.-1.j]) ```
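The conjugation convention above can also be checked elementwise; a small sketch (ours, not from the original page), reusing the tensors from the example:

```
>>> a = torch.tensor((1 + 2j, 3 - 1j))
>>> b = torch.tensor((2 + 1j, 4 - 0j))
>>> (a.conj() * b).sum()   # conjugate the first argument, then sum: same value as torch.vdot(a, b)
tensor(16.+1.j)
```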
pytorch torch.randperm torch.randperm ============== `torch.randperm(n, *, generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor` Returns a random permutation of integers from `0` to `n - 1`. Parameters **n** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the upper bound (exclusive) Keyword Arguments * **generator** ([`torch.Generator`](torch.generator#torch.Generator "torch.Generator"), optional) – a pseudorandom number generator for sampling * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: `torch.int64`. * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned Tensor. Default: `torch.strided`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. * **pin\_memory** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: `False`. Example: ``` >>> torch.randperm(4) tensor([2, 1, 0, 3]) ``` pytorch torch.ones torch.ones ========== `torch.ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor` Returns a tensor filled with the scalar value `1`, with the shape defined by the variable argument `size`. Parameters **size** (*int...*) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple. Keyword Arguments * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned Tensor. Default: `torch.strided`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. 
* **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. Example: ``` >>> torch.ones(2, 3) tensor([[ 1., 1., 1.], [ 1., 1., 1.]]) >>> torch.ones(5) tensor([ 1., 1., 1., 1., 1.]) ``` pytorch torch.nn.utils.prune.is_pruned torch.nn.utils.prune.is\_pruned =============================== `torch.nn.utils.prune.is_pruned(module)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#is_pruned) Check whether `module` is pruned by looking for `forward_pre_hooks` in its modules that inherit from the [`BasePruningMethod`](torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod "torch.nn.utils.prune.BasePruningMethod"). Parameters **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – object that is either pruned or unpruned Returns binary answer to whether `module` is pruned. #### Examples ``` >>> m = nn.Linear(5, 7) >>> print(prune.is_pruned(m)) False >>> prune.random_unstructured(m, name='weight', amount=0.2) >>> print(prune.is_pruned(m)) True ``` pytorch torch.nn.utils.vector_to_parameters torch.nn.utils.vector\_to\_parameters ===================================== `torch.nn.utils.vector_to_parameters(vec, parameters)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/convert_parameters.html#vector_to_parameters) Convert one vector to the parameters Parameters * **vec** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – a single vector represents the parameters of a model. * **parameters** (*Iterable**[*[Tensor](../tensors#torch.Tensor "torch.Tensor")*]*) – an iterator of Tensors that are the parameters of a model. pytorch torch.logical_and torch.logical\_and ================== `torch.logical_and(input, other, *, out=None) → Tensor` Computes the element-wise logical AND of the given input tensors. Zeros are treated as `False` and nonzeros are treated as `True`. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to compute AND with Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> torch.logical_and(torch.tensor([True, False, True]), torch.tensor([True, False, False])) tensor([ True, False, False]) >>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8) >>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8) >>> torch.logical_and(a, b) tensor([False, False, True, False]) >>> torch.logical_and(a.double(), b.double()) tensor([False, False, True, False]) >>> torch.logical_and(a.double(), b) tensor([False, False, True, False]) >>> torch.logical_and(a, b, out=torch.empty(4, dtype=torch.bool)) tensor([False, False, True, False]) ``` pytorch LSTMCell LSTMCell ======== `class torch.nn.LSTMCell(input_size, hidden_size, bias=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/rnn.html#LSTMCell) A long short-term memory (LSTM) cell. 
\begin{array}{ll} i = \sigma(W\_{ii} x + b\_{ii} + W\_{hi} h + b\_{hi}) \\ f = \sigma(W\_{if} x + b\_{if} + W\_{hf} h + b\_{hf}) \\ g = \tanh(W\_{ig} x + b\_{ig} + W\_{hg} h + b\_{hg}) \\ o = \sigma(W\_{io} x + b\_{io} + W\_{ho} h + b\_{ho}) \\ c' = f \* c + i \* g \\ h' = o \* \tanh(c') \\ \end{array} where \sigma is the sigmoid function, and \* is the Hadamard product. Parameters * **input\_size** – The number of expected features in the input `x` * **hidden\_size** – The number of features in the hidden state `h` * **bias** – If `False`, then the layer does not use bias weights `b_ih` and `b_hh`. Default: `True` Inputs: input, (h\_0, c\_0) * **input** of shape `(batch, input_size)`: tensor containing input features * **h\_0** of shape `(batch, hidden_size)`: tensor containing the initial hidden state for each element in the batch. * **c\_0** of shape `(batch, hidden_size)`: tensor containing the initial cell state for each element in the batch. If `(h_0, c_0)` is not provided, both **h\_0** and **c\_0** default to zero. Outputs: (h\_1, c\_1) * **h\_1** of shape `(batch, hidden_size)`: tensor containing the next hidden state for each element in the batch * **c\_1** of shape `(batch, hidden_size)`: tensor containing the next cell state for each element in the batch Variables * **~LSTMCell.weight\_ih** – the learnable input-hidden weights, of shape `(4*hidden_size, input_size)` * **~LSTMCell.weight\_hh** – the learnable hidden-hidden weights, of shape `(4*hidden_size, hidden_size)` * **~LSTMCell.bias\_ih** – the learnable input-hidden bias, of shape `(4*hidden_size)` * **~LSTMCell.bias\_hh** – the learnable hidden-hidden bias, of shape `(4*hidden_size)` Note All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\\_size}}. Examples: ``` >>> rnn = nn.LSTMCell(10, 20) # (input_size, hidden_size) >>> input = torch.randn(6, 3, 10) # (time_steps, batch, input_size) >>> hx = torch.randn(3, 20) # (batch, hidden_size) >>> cx = torch.randn(3, 20) >>> output = [] >>> for i in range(6): hx, cx = rnn(input[i], (hx, cx)) output.append(hx) ``` pytorch torch.sin torch.sin ========= `torch.sin(input, *, out=None) → Tensor` Returns a new tensor with the sine of the elements of `input`. \text{out}\_{i} = \sin(\text{input}\_{i}) Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.randn(4) >>> a tensor([-0.5461, 0.1347, -2.7266, -0.2746]) >>> torch.sin(a) tensor([-0.5194, 0.1343, -0.4032, -0.2711]) ``` pytorch torch.heaviside torch.heaviside =============== `torch.heaviside(input, values, *, out=None) → Tensor` Computes the Heaviside step function for each element in `input`. The Heaviside step function is defined as: \text{heaviside}(input, values) = \begin{cases} 0, & \text{if input < 0}\\ values, & \text{if input == 0}\\ 1, & \text{if input > 0} \end{cases} Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **values** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – The values to use where `input` is zero. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. 
Example: ``` >>> input = torch.tensor([-1.5, 0, 2.0]) >>> values = torch.tensor([0.5]) >>> torch.heaviside(input, values) tensor([0.0000, 0.5000, 1.0000]) >>> values = torch.tensor([1.2, -2.0, 3.5]) >>> torch.heaviside(input, values) tensor([0., -2., 1.]) ``` pytorch torch.triu_indices torch.triu\_indices =================== `torch.triu_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) → Tensor` Returns the indices of the upper triangular part of a `row` by `col` matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are ordered based on rows and then columns. The upper triangular part of the matrix is defined as the elements on and above the diagonal. The argument `offset` controls which diagonal to consider. If `offset` = 0, all elements on and above the main diagonal are retained. A positive value excludes just as many diagonals above the main diagonal, and similarly a negative value includes just as many diagonals below the main diagonal. The main diagonal is the set of indices \lbrace (i, i) \rbrace for i \in [0, \min\{d\_{1}, d\_{2}\} - 1] where d\_{1}, d\_{2} are the dimensions of the matrix. Note When running on CUDA, `row * col` must be less than 2^{59} to prevent overflow during calculation. Parameters * **row** (`int`) – number of rows in the 2-D matrix. * **col** (`int`) – number of columns in the 2-D matrix. * **offset** (`int`) – diagonal offset from the main diagonal. Default: if not provided, 0. Keyword Arguments * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, `torch.long`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – currently only supports `torch.strided`. Example: ``` >>> a = torch.triu_indices(3, 3) >>> a tensor([[0, 0, 0, 1, 1, 2], [0, 1, 2, 1, 2, 2]]) ``` ``` >>> a = torch.triu_indices(4, 3, -1) >>> a tensor([[0, 0, 0, 1, 1, 1, 2, 2, 3], [0, 1, 2, 0, 1, 2, 1, 2, 2]]) ``` ``` >>> a = torch.triu_indices(4, 3, 1) >>> a tensor([[0, 0, 1], [1, 2, 2]]) ``` pytorch UpsamplingNearest2d UpsamplingNearest2d =================== `class torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/upsampling.html#UpsamplingNearest2d) Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels. To specify the scale, it takes either the `size` or the `scale_factor` as its constructor argument. When `size` is given, it is the output size of the image `(h, w)`. 
Parameters * **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple**[*[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*]**,* *optional*) – output spatial sizes * **scale\_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* *Tuple**[*[float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*]**,* *optional*) – multiplier for spatial size. Warning This class is deprecated in favor of `interpolate()`. Shape: * Input: (N,C,Hin,Win)(N, C, H\_{in}, W\_{in}) * Output: (N,C,Hout,Wout)(N, C, H\_{out}, W\_{out}) where Hout=⌊Hin×scale\_factor⌋H\_{out} = \left\lfloor H\_{in} \times \text{scale\\_factor} \right\rfloor Wout=⌊Win×scale\_factor⌋W\_{out} = \left\lfloor W\_{in} \times \text{scale\\_factor} \right\rfloor Examples: ``` >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2) >>> input tensor([[[[ 1., 2.], [ 3., 4.]]]]) >>> m = nn.UpsamplingNearest2d(scale_factor=2) >>> m(input) tensor([[[[ 1., 1., 2., 2.], [ 1., 1., 2., 2.], [ 3., 3., 4., 4.], [ 3., 3., 4., 4.]]]]) ``` pytorch torch.asin torch.asin ========== `torch.asin(input, *, out=None) → Tensor` Returns a new tensor with the arcsine of the elements of `input`. outi=sin⁡−1(inputi)\text{out}\_{i} = \sin^{-1}(\text{input}\_{i}) Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.randn(4) >>> a tensor([-0.5962, 1.4985, -0.4396, 1.4525]) >>> torch.asin(a) tensor([-0.6387, nan, -0.4552, nan]) ``` pytorch torch.cosh torch.cosh ========== `torch.cosh(input, *, out=None) → Tensor` Returns a new tensor with the hyperbolic cosine of the elements of `input`. outi=cosh⁡(inputi)\text{out}\_{i} = \cosh(\text{input}\_{i}) Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example: ``` >>> a = torch.randn(4) >>> a tensor([ 0.1632, 1.1835, -0.6979, -0.7325]) >>> torch.cosh(a) tensor([ 1.0133, 1.7860, 1.2536, 1.2805]) ``` Note When `input` is on the CPU, the implementation of torch.cosh may use the Sleef library, which rounds very large results to infinity or negative infinity. See [here](https://sleef.org/purec.xhtml) for details. pytorch torch.jit.trace torch.jit.trace =============== `torch.jit.trace(func, example_inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=<torch.jit.CompilationUnit object>)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/jit/_trace.html#trace) Trace a function and return an executable or [`ScriptFunction`](torch.jit.scriptfunction#torch.jit.ScriptFunction "torch.jit.ScriptFunction") that will be optimized using just-in-time compilation. Tracing is ideal for code that operates only on `Tensor`s and lists, dictionaries, and tuples of `Tensor`s. 
Using `torch.jit.trace` and `torch.jit.trace_module`, you can turn an existing module or Python function into a TorchScript [`ScriptFunction`](torch.jit.scriptfunction#torch.jit.ScriptFunction "torch.jit.ScriptFunction") or [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule"). You must provide example inputs, and we run the function, recording the operations performed on all the tensors. * The resulting recording of a standalone function produces `ScriptFunction`. * The resulting recording of `nn.Module.forward` or `nn.Module` produces `ScriptModule`. This module also contains any parameters that the original module had. Warning Tracing only correctly records functions and modules which are not data dependent (e.g., do not have conditionals on data in tensors) and do not have any untracked external dependencies (e.g., perform input/output or access global variables). Tracing only records operations done when the given function is run on the given tensors. Therefore, the returned `ScriptModule` will always run the same traced graph on any input. This has some important implications when your module is expected to run different sets of operations, depending on the input and/or the module state. For example, * Tracing will not record any control-flow like if-statements or loops. When this control-flow is constant across your module, this is fine and it often inlines the control-flow decisions. But sometimes the control-flow is actually part of the model itself. For instance, a recurrent network is a loop over the (possibly dynamic) length of an input sequence. * In the returned [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule"), operations that have different behaviors in `training` and `eval` modes will always behave as if it is in the mode it was in during tracing, no matter which mode the `ScriptModule` is in. In cases like these, tracing would not be appropriate and [`scripting`](torch.jit.script#torch.jit.script "torch.jit.script") is a better choice. If you trace such models, you may silently get incorrect results on subsequent invocations of the model. The tracer will try to emit warnings when doing something that may cause an incorrect trace to be produced. Parameters * **func** (*callable* *or* [torch.nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – A Python function or `torch.nn.Module` that will be run with `example_inputs`. `func` arguments and return values must be tensors or (possibly nested) tuples that contain tensors. When a module is passed to `torch.jit.trace`, only the `forward` method is run and traced (see [`torch.jit.trace_module`](torch.jit.trace_module#torch.jit.trace_module "torch.jit.trace_module") for details). * **example\_inputs** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)") *or* [torch.Tensor](../tensors#torch.Tensor "torch.Tensor")) – A tuple of example inputs that will be passed to the function while tracing. The resulting trace can be run with inputs of different types and shapes assuming the traced operations support those types and shapes. `example_inputs` may also be a single Tensor in which case it is automatically wrapped in a tuple. Keyword Arguments * **check\_trace** (`bool`, optional) – Check if the same inputs run through traced code produce the same outputs. Default: `True`. 
You might want to disable this if, for example, your network contains non-deterministic ops or if you are sure that the network is correct despite a checker failure. * **check\_inputs** (*list of tuples**,* *optional*) – A list of tuples of input arguments that should be used to check the trace against what is expected. Each tuple is equivalent to a set of input arguments that would be specified in `example_inputs`. For best results, pass in a set of checking inputs representative of the space of shapes and types of inputs you expect the network to see. If not specified, the original `example_inputs` are used for checking. * **check\_tolerance** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Floating-point comparison tolerance to use in the checker procedure. This can be used to relax the checker strictness in the event that results diverge numerically for a known reason, such as operator fusion. * **strict** (`bool`, optional) – run the tracer in a strict mode or not (default: `True`). Only turn this off when you want the tracer to record your mutable container types (currently `list`/`dict`) and you are sure that the container you are using in your problem is a `constant` structure and does not get used as control flow (if, for) conditions. Returns If `func` is `nn.Module` or `forward` of `nn.Module`, `trace` returns a [`ScriptModule`](torch.jit.scriptmodule#torch.jit.ScriptModule "torch.jit.ScriptModule") object with a single `forward` method containing the traced code. The returned `ScriptModule` will have the same set of sub-modules and parameters as the original `nn.Module`. If `func` is a standalone function, `trace` returns `ScriptFunction`. Example (tracing a function): ``` import torch def foo(x, y): return 2 * x + y # Run `foo` with the provided inputs and record the tensor operations traced_foo = torch.jit.trace(foo, (torch.rand(3), torch.rand(3))) # `traced_foo` can now be run with the TorchScript interpreter or saved # and loaded in a Python-free environment ``` Example (tracing an existing module): ``` import torch import torch.nn as nn class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv = nn.Conv2d(1, 1, 3) def forward(self, x): return self.conv(x) n = Net() example_weight = torch.rand(1, 1, 3, 3) example_forward_input = torch.rand(1, 1, 3, 3) # Trace a specific method and construct `ScriptModule` with # a single `forward` method module = torch.jit.trace(n.forward, example_forward_input) # Trace a module (implicitly traces `forward`) and construct a # `ScriptModule` with a single `forward` method module = torch.jit.trace(n, example_forward_input) ```
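The checker keyword arguments above have no example on this page; a hedged sketch of passing `check_inputs` with additional representative shapes (the shapes are illustrative, not from the original docs):

```
import torch

def foo(x, y):
    return 2 * x + y

# Trace with example inputs, then verify the trace against extra shapes;
# each tuple in check_inputs mirrors the structure of example_inputs.
traced_foo = torch.jit.trace(
    foo,
    (torch.rand(3), torch.rand(3)),
    check_inputs=[
        (torch.rand(4), torch.rand(4)),
        (torch.rand(6), torch.rand(6)),
    ],
)
```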
pytorch CTCLoss CTCLoss ======= `class torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#CTCLoss) The Connectionist Temporal Classification loss. Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be “many-to-one”, which limits the length of the target sequence such that it must be ≤ the input length. Parameters * **blank** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – blank label. Default: 0. * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the output losses will be divided by the target lengths and then the mean over the batch is taken. Default: `'mean'` * **zero\_infinity** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Whether to zero infinite losses and the associated gradients. Default: `False` Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Shape: * Log\_probs: Tensor of size (T, N, C) , where T = input length, N = batch size, and C = number of classes (including blank). The logarithmized probabilities of the outputs (e.g. obtained with [`torch.nn.functional.log_softmax()`](../nn.functional#torch.nn.functional.log_softmax "torch.nn.functional.log_softmax")). * Targets: Tensor of size (N, S) or (sum(target\_lengths)) , where N = batch size and S = max target length, if shape is (N, S). It represents the target sequences. Each element in the target sequence is a class index, and the target index cannot be blank (default=0). In the (N, S) form, targets are padded to the length of the longest sequence, and stacked. In the (sum(target\_lengths)) form, the targets are assumed to be un-padded and concatenated within 1 dimension. * Input\_lengths: Tuple or tensor of size (N) , where N = batch size. It represents the lengths of the inputs (must each be ≤ T). The lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. * Target\_lengths: Tuple or tensor of size (N) , where N = batch size. It represents the lengths of the targets. Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. If target shape is (N, S) , target\_lengths are effectively the stop index s\_n for each target sequence, such that `target_n = targets[n,0:s_n]` for each target in a batch. Lengths must each be ≤ S. If the targets are given as a 1d tensor that is the concatenation of individual targets, the target\_lengths must add up to the total length of the tensor. * Output: scalar. If `reduction` is `'none'`, then (N) , where N = batch size. 
Examples: ``` >>> # Targets are to be padded >>> T = 50 # Input sequence length >>> C = 20 # Number of classes (including blank) >>> N = 16 # Batch size >>> S = 30 # Target sequence length of longest target in batch (padding length) >>> S_min = 10 # Minimum target length, for demonstration purposes >>> >>> # Initialize random batch of input vectors, for *size = (T,N,C) >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_() >>> >>> # Initialize random batch of targets (0 = blank, 1:C = classes) >>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long) >>> >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long) >>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long) >>> ctc_loss = nn.CTCLoss() >>> loss = ctc_loss(input, target, input_lengths, target_lengths) >>> loss.backward() >>> >>> >>> # Targets are to be un-padded >>> T = 50 # Input sequence length >>> C = 20 # Number of classes (including blank) >>> N = 16 # Batch size >>> >>> # Initialize random batch of input vectors, for *size = (T,N,C) >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_() >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long) >>> >>> # Initialize random batch of targets (0 = blank, 1:C = classes) >>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long) >>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long) >>> ctc_loss = nn.CTCLoss() >>> loss = ctc_loss(input, target, input_lengths, target_lengths) >>> loss.backward() ``` Reference: A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: <https://www.cs.toronto.edu/~graves/icml_2006.pdf> Note In order to use CuDNN, the following must be satisfied: `targets` must be in concatenated format, all `input_lengths` must be `T`, blank = 0, `target_lengths` ≤ 256, and the integer arguments must be of dtype `torch.int32`. The regular implementation uses the (more common in PyTorch) `torch.long` dtype. Note In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. Please see the notes on [Reproducibility](https://pytorch.org/docs/1.8.0/notes/randomness.html) for background. pytorch torch.stft torch.stft ========== `torch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#stft) Short-time Fourier transform (STFT). Warning From version 1.8.0, `return_complex` must always be given explicitly for real inputs and `return_complex=False` has been deprecated. Strongly prefer `return_complex=True` as in a future pytorch release, this function will only return complex tensors. Note that [`torch.view_as_real()`](torch.view_as_real#torch.view_as_real "torch.view_as_real") can be used to recover a real tensor with an extra last dimension for real and imaginary components. The STFT computes the Fourier transform of short overlapping windows of the input. This gives the frequency components of the signal as they change over time. 
The interface of this function is modeled after the [librosa](https://librosa.org/doc/latest/generated/librosa.stft.html) stft function. Ignoring the optional batch dimension, this method computes the following expression: X[m,ω]=∑k=0win\_length-1window[k] input[m×hop\_length+k]exp⁡(−j2π⋅ωkwin\_length),X[m, \omega] = \sum\_{k = 0}^{\text{win\\_length-1}}% \text{window}[k]\ \text{input}[m \times \text{hop\\_length} + k]\ % \exp\left(- j \frac{2 \pi \cdot \omega k}{\text{win\\_length}}\right), where mm is the index of the sliding window, and ω\omega is the frequency that 0≤ω<n\_fft0 \leq \omega < \text{n\\_fft} . When `onesided` is the default value `True`, * `input` must be either a 1-D time sequence or a 2-D batch of time sequences. * If `hop_length` is `None` (default), it is treated as equal to `floor(n_fft / 4)`. * If `win_length` is `None` (default), it is treated as equal to `n_fft`. * `window` can be a 1-D tensor of size `win_length`, e.g., from [`torch.hann_window()`](torch.hann_window#torch.hann_window "torch.hann_window"). If `window` is `None` (default), it is treated as if having 11 everywhere in the window. If win\_length<n\_fft\text{win\\_length} < \text{n\\_fft} , `window` will be padded on both sides to length `n_fft` before being applied. * If `center` is `True` (default), `input` will be padded on both sides so that the tt -th frame is centered at time t×hop\_lengtht \times \text{hop\\_length} . Otherwise, the tt -th frame begins at time t×hop\_lengtht \times \text{hop\\_length} . * `pad_mode` determines the padding method used on `input` when `center` is `True`. See [`torch.nn.functional.pad()`](../nn.functional#torch.nn.functional.pad "torch.nn.functional.pad") for all available options. Default is `"reflect"`. * If `onesided` is `True` (default for real input), only values for ω\omega in [0,1,2,…,⌊n\_fft2⌋+1]\left[0, 1, 2, \dots, \left\lfloor \frac{\text{n\\_fft}}{2} \right\rfloor + 1\right] are returned because the real-to-complex Fourier transform satisfies the conjugate symmetry, i.e., X[m,ω]=X[m,n\_fft−ω]∗X[m, \omega] = X[m, \text{n\\_fft} - \omega]^\* . Note if the input or window tensors are complex, then `onesided` output is not possible. * If `normalized` is `True` (default is `False`), the function returns the normalized STFT results, i.e., multiplied by (frame\_length)−0.5(\text{frame\\_length})^{-0.5} . * If `return_complex` is `True` (default if input is complex), the return is a `input.dim() + 1` dimensional complex tensor. If `False`, the output is a `input.dim() + 2` dimensional real tensor where the last dimension represents the real and imaginary components. Returns either a complex tensor of size (∗×N×T)(\* \times N \times T) if `return_complex` is true, or a real tensor of size (∗×N×T×2)(\* \times N \times T \times 2) . Where ∗\* is the optional batch size of `input`, NN is the number of frequencies where STFT is applied and TT is the total number of frames used. Warning This function changed signature at version 0.4.1. Calling with the previous signature may cause error or return incorrect result. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor * **n\_fft** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – size of Fourier transform * **hop\_length** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the distance between neighboring sliding window frames. 
Default: `None` (treated as equal to `floor(n_fft / 4)`)
* **win_length** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the size of window frame and STFT filter. Default: `None` (treated as equal to `n_fft`)
* **window** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the optional window function. Default: `None` (treated as a window of all ones)
* **center** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to pad `input` on both sides so that the t-th frame is centered at time t × hop_length. Default: `True`
* **pad_mode** (*string**,* *optional*) – controls the padding method used when `center` is `True`. Default: `"reflect"`
* **normalized** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – controls whether to return the normalized STFT results. Default: `False`
* **onesided** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – controls whether to return half of the results to avoid redundancy for real inputs. Default: `True` for real `input` and `window`, `False` otherwise.
* **return_complex** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether to return a complex tensor, or a real tensor with an extra last dimension for the real and imaginary components.

Returns A tensor containing the STFT result with shape described above

Return type [Tensor](../tensors#torch.Tensor "torch.Tensor")
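The entry above ships without a usage example. A minimal sketch (the signal is random; the output shape follows from the defaults described above, with n_fft // 2 + 1 = 201 one-sided frequencies and 11 centered frames for a 1000-sample input):

```
>>> signal = torch.randn(1000)
>>> window = torch.hann_window(400)
>>> spec = torch.stft(signal, n_fft=400, hop_length=100,
...                   window=window, return_complex=True)
>>> spec.shape  # (n_fft // 2 + 1 frequencies, number of frames)
torch.Size([201, 11])
```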
pytorch torch.less_equal torch.less\_equal ================= `torch.less_equal(input, other, *, out=None) → Tensor` Alias for [`torch.le()`](torch.le#torch.le "torch.le").

pytorch torch.diag_embed torch.diag\_embed ================= `torch.diag_embed(input, offset=0, dim1=-2, dim2=-1) → Tensor` Creates a tensor whose diagonals of certain 2D planes (specified by `dim1` and `dim2`) are filled by `input`. To facilitate creating batched diagonal matrices, the 2D planes formed by the last two dimensions of the returned tensor are chosen by default. The argument `offset` controls which diagonal to consider:

* If `offset` = 0, it is the main diagonal.
* If `offset` > 0, it is above the main diagonal.
* If `offset` < 0, it is below the main diagonal.

The size of the new matrix is calculated so that the specified diagonal has the size of the last input dimension. Note that for `offset` other than 0, the order of `dim1` and `dim2` matters. Exchanging them is equivalent to changing the sign of `offset`. Applying [`torch.diagonal()`](torch.diagonal#torch.diagonal "torch.diagonal") to the output of this function with the same arguments yields a matrix identical to `input`. However, [`torch.diagonal()`](torch.diagonal#torch.diagonal "torch.diagonal") has different default dimensions, so those need to be explicitly specified.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Must be at least 1-dimensional.
* **offset** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – which diagonal to consider. Default: 0 (main diagonal).
* **dim1** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – first dimension with respect to which to take diagonal. Default: -2.
* **dim2** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – second dimension with respect to which to take diagonal. Default: -1.

Example:

```
>>> a = torch.randn(2, 3)
>>> torch.diag_embed(a)
tensor([[[ 1.5410,  0.0000,  0.0000],
         [ 0.0000, -0.2934,  0.0000],
         [ 0.0000,  0.0000, -2.1788]],

        [[ 0.5684,  0.0000,  0.0000],
         [ 0.0000, -1.0845,  0.0000],
         [ 0.0000,  0.0000, -1.3986]]])

>>> torch.diag_embed(a, offset=1, dim1=0, dim2=2)
tensor([[[ 0.0000,  1.5410,  0.0000,  0.0000],
         [ 0.0000,  0.5684,  0.0000,  0.0000]],

        [[ 0.0000,  0.0000, -0.2934,  0.0000],
         [ 0.0000,  0.0000, -1.0845,  0.0000]],

        [[ 0.0000,  0.0000,  0.0000, -2.1788],
         [ 0.0000,  0.0000,  0.0000, -1.3986]],

        [[ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000]]])
```

pytorch torch.matmul torch.matmul ============ `torch.matmul(input, other, *, out=None) → Tensor` Matrix product of two tensors. The behavior depends on the dimensionality of the tensors as follows:

* If both tensors are 1-dimensional, the dot product (scalar) is returned.
* If both arguments are 2-dimensional, the matrix-matrix product is returned.
* If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
* If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
* If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are [broadcasted](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) (and thus must be broadcastable). For example, if `input` is a (j × 1 × n × n) tensor and `other` is a (k × n × n) tensor, `out` will be a (j × k × n × n) tensor. Note that the broadcasting logic only looks at the batch dimensions when determining if the inputs are broadcastable, and not the matrix dimensions. For example, if `input` is a (j × 1 × n × m) tensor and `other` is a (k × m × p) tensor, these inputs are valid for broadcasting even though the final two dimensions (i.e. the matrix dimensions) are different. `out` will be a (j × k × n × p) tensor.

This operator supports [TensorFloat32](https://pytorch.org/docs/1.8.0/notes/cuda.html#tf32-on-ampere).

Note The 1-dimensional dot product version of this function does not support an `out` parameter.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the first tensor to be multiplied
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the second tensor to be multiplied

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
Example:

```
>>> # vector x vector
>>> tensor1 = torch.randn(3)
>>> tensor2 = torch.randn(3)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([])
>>> # matrix x vector
>>> tensor1 = torch.randn(3, 4)
>>> tensor2 = torch.randn(4)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([3])
>>> # batched matrix x broadcasted vector
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(4)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3])
>>> # batched matrix x batched matrix
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(10, 4, 5)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3, 5])
>>> # batched matrix x broadcasted matrix
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(4, 5)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3, 5])
```

pytorch torch.atan torch.atan ========== `torch.atan(input, *, out=None) → Tensor` Returns a new tensor with the arctangent of the elements of `input`.

\text{out}_{i} = \tan^{-1}(\text{input}_{i})

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(4)
>>> a
tensor([ 0.2341,  0.2539, -0.6256, -0.6448])
>>> torch.atan(a)
tensor([ 0.2299,  0.2487, -0.5591, -0.5727])
```

pytorch torch.jit.isinstance torch.jit.isinstance ==================== `torch.jit.isinstance(obj, target_type)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/jit.html#isinstance) This function provides for container type refinement in TorchScript. It can refine parameterized containers of the List, Dict, Tuple, and Optional types. E.g. `List[str]`, `Dict[str, List[torch.Tensor]]`, `Optional[Tuple[int,str,int]]`. It can also refine basic types such as bools and ints that are available in TorchScript.

Parameters

* **obj** – object to refine the type of
* **target_type** – type to try to refine obj to

Returns True if obj was successfully refined to the type of target_type, False otherwise with no new type refinement

Return type `bool`

Example (using `torch.jit.isinstance` for type refinement):

```
import torch
from typing import Any, Dict, List

class MyModule(torch.nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()

    def forward(self, input: Any):  # note the Any type
        if torch.jit.isinstance(input, List[torch.Tensor]):
            for t in input:
                y = t.clamp(0, 0.5)
        elif torch.jit.isinstance(input, Dict[str, str]):
            for val in input.values():
                print(val)

m = torch.jit.script(MyModule())
x = [torch.rand(3,3), torch.rand(4,3)]
m(x)
y = {"key1":"val1","key2":"val2"}
m(y)
```
pytorch ModuleDict ModuleDict ========== `class torch.nn.ModuleDict(modules=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ModuleDict) Holds submodules in a dictionary. [`ModuleDict`](#torch.nn.ModuleDict "torch.nn.ModuleDict") can be indexed like a regular Python dictionary, but modules it contains are properly registered, and will be visible by all [`Module`](torch.nn.module#torch.nn.Module "torch.nn.Module") methods. [`ModuleDict`](#torch.nn.ModuleDict "torch.nn.ModuleDict") is an **ordered** dictionary that respects * the order of insertion, and * in [`update()`](#torch.nn.ModuleDict.update "torch.nn.ModuleDict.update"), the order of the merged `OrderedDict`, `dict` (started from Python 3.6) or another [`ModuleDict`](#torch.nn.ModuleDict "torch.nn.ModuleDict") (the argument to [`update()`](#torch.nn.ModuleDict.update "torch.nn.ModuleDict.update")). Note that [`update()`](#torch.nn.ModuleDict.update "torch.nn.ModuleDict.update") with other unordered mapping types (e.g., Python’s plain `dict` before Python version 3.6) does not preserve the order of the merged mapping. Parameters **modules** (*iterable**,* *optional*) – a mapping (dictionary) of (string: module) or an iterable of key-value pairs of type (string, module) Example: ``` class MyModule(nn.Module): def __init__(self): super(MyModule, self).__init__() self.choices = nn.ModuleDict({ 'conv': nn.Conv2d(10, 10, 3), 'pool': nn.MaxPool2d(3) }) self.activations = nn.ModuleDict([ ['lrelu', nn.LeakyReLU()], ['prelu', nn.PReLU()] ]) def forward(self, x, choice, act): x = self.choices[choice](x) x = self.activations[act](x) return x ``` `clear()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ModuleDict.clear) Remove all items from the ModuleDict. `items()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ModuleDict.items) Return an iterable of the ModuleDict key/value pairs. `keys()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ModuleDict.keys) Return an iterable of the ModuleDict keys. `pop(key)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ModuleDict.pop) Remove key from the ModuleDict and return its module. Parameters **key** (*string*) – key to pop from the ModuleDict `update(modules)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ModuleDict.update) Update the [`ModuleDict`](#torch.nn.ModuleDict "torch.nn.ModuleDict") with the key-value pairs from a mapping or an iterable, overwriting existing keys. Note If `modules` is an `OrderedDict`, a [`ModuleDict`](#torch.nn.ModuleDict "torch.nn.ModuleDict"), or an iterable of key-value pairs, the order of new elements in it is preserved. Parameters **modules** (*iterable*) – a mapping (dictionary) from string to [`Module`](torch.nn.module#torch.nn.Module "torch.nn.Module"), or an iterable of key-value pairs of type (string, [`Module`](torch.nn.module#torch.nn.Module "torch.nn.Module")) `values()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/container.html#ModuleDict.values) Return an iterable of the ModuleDict values. pytorch torch.imag torch.imag ========== `torch.imag(input) → Tensor` Returns a new tensor containing imaginary values of the `self` tensor. The returned tensor and `self` share the same underlying storage. Warning [`imag()`](#torch.imag "torch.imag") is only supported for tensors with complex dtypes. 
Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Example:

```
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.imag
tensor([ 0.3553, -0.7896, -0.0633, -0.8119])
```

pytorch torch.seed torch.seed ========== `torch.seed()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#seed) Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64-bit number used to seed the RNG.
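A brief sketch of how the returned value can be reused (the seed itself is nondeterministic by design, so only reproducibility is checked):

```
>>> s = torch.seed()           # re-seed non-deterministically; s is the new 64-bit seed
>>> a = torch.rand(2)
>>> _ = torch.manual_seed(s)   # restoring the same seed reproduces the RNG state
>>> b = torch.rand(2)
>>> torch.equal(a, b)
True
```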
pytorch torch.outer torch.outer =========== `torch.outer(input, vec2, *, out=None) → Tensor` Outer product of `input` and `vec2`. If `input` is a vector of size n and `vec2` is a vector of size m, then `out` must be a matrix of size (n × m).

Note This function does not [broadcast](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics).

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – 1-D input vector
* **vec2** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – 1-D input vector

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – optional output matrix

Example:

```
>>> v1 = torch.arange(1., 5.)
>>> v2 = torch.arange(1., 4.)
>>> torch.outer(v1, v2)
tensor([[  1.,   2.,   3.],
        [  2.,   4.,   6.],
        [  3.,   6.,   9.],
        [  4.,   8.,  12.]])
```

pytorch torch.logical_or torch.logical\_or ================= `torch.logical_or(input, other, *, out=None) → Tensor` Computes the element-wise logical OR of the given input tensors. Zeros are treated as `False` and nonzeros are treated as `True`.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to compute OR with

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> torch.logical_or(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([ True, False,  True])
>>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)
>>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)
>>> torch.logical_or(a, b)
tensor([ True,  True,  True, False])
>>> torch.logical_or(a.double(), b.double())
tensor([ True,  True,  True, False])
>>> torch.logical_or(a.double(), b)
tensor([ True,  True,  True, False])
>>> torch.logical_or(a, b, out=torch.empty(4, dtype=torch.bool))
tensor([ True,  True,  True, False])
```

pytorch torch.eq torch.eq ======== `torch.eq(input, other, *, out=None) → Tensor` Computes element-wise equality. The second argument can be a number or a tensor whose shape is [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) with the first argument.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to compare
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the tensor or value to compare

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Returns A boolean tensor that is True where `input` is equal to `other` and False elsewhere

Example:

```
>>> torch.eq(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[ True, False],
        [False,  True]])
```

pytorch torch.trace torch.trace =========== `torch.trace(input) → Tensor` Returns the sum of the elements of the diagonal of the input 2-D matrix.

Example:

```
>>> x = torch.arange(1., 10.).view(3, 3)
>>> x
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.],
        [ 7.,  8.,  9.]])
>>> torch.trace(x)
tensor(15.)
```

pytorch torch.nanquantile torch.nanquantile ================= `torch.nanquantile(input, q, dim=None, keepdim=False, *, out=None) → Tensor` This is a variant of [`torch.quantile()`](torch.quantile#torch.quantile "torch.quantile") that “ignores” `NaN` values, computing the quantiles `q` as if `NaN` values in `input` did not exist. If all values in a reduced row are `NaN` then the quantiles for that reduction will be `NaN`. See the documentation for [`torch.quantile()`](torch.quantile#torch.quantile "torch.quantile").

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **q** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* [Tensor](../tensors#torch.Tensor "torch.Tensor")) – a scalar or 1D tensor of quantile values in the range [0, 1]
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to reduce.
* **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not.

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> t = torch.tensor([float('nan'), 1, 2])
>>> t.quantile(0.5)
tensor(nan)
>>> t.nanquantile(0.5)
tensor(1.5000)
>>> t = torch.tensor([[float('nan'), float('nan')], [1, 2]])
>>> t
tensor([[nan, nan],
        [1., 2.]])
>>> t.nanquantile(0.5, dim=0)
tensor([1., 2.])
>>> t.nanquantile(0.5, dim=1)
tensor([   nan, 1.5000])
```

pytorch Tanh Tanh ==== `class torch.nn.Tanh` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#Tanh) Applies the element-wise function:

\text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)}

Shape:

* Input: (N, ∗) where ∗ means any number of additional dimensions
* Output: (N, ∗), same shape as the input

Examples:

```
>>> m = nn.Tanh()
>>> input = torch.randn(2)
>>> output = m(input)
```

pytorch SobolEngine SobolEngine =========== `class torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quasirandom.html#SobolEngine) The [`torch.quasirandom.SobolEngine`](#torch.quasirandom.SobolEngine "torch.quasirandom.SobolEngine") is an engine for generating (scrambled) Sobol sequences. Sobol sequences are an example of low discrepancy quasi-random sequences. This implementation of an engine for Sobol sequences is capable of sampling sequences up to a maximum dimension of 21201. It uses direction numbers from <https://web.maths.unsw.edu.au/~fkuo/sobol/> obtained using the search criterion D(6) up to the dimension 21201. This is the recommended choice by the authors.

#### References

* Art B. Owen. Scrambling Sobol and Niederreiter-Xing points. Journal of Complexity, 14(4):466-489, December 1998.
* I. M. Sobol. The distribution of points in a cube and the accurate evaluation of integrals. Zh.
Vychisl. Mat. i Mat. Phys., 7:784-802, 1967.

Parameters

* **dimension** (*Int*) – The dimensionality of the sequence to be drawn
* **scramble** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Setting this to `True` will produce scrambled Sobol sequences. Scrambling is capable of producing better Sobol sequences. Default: `False`.
* **seed** (*Int**,* *optional*) – This is the seed for the scrambling. The seed of the random number generator is set to this, if specified. Otherwise, it uses a random seed. Default: `None`

Examples:

```
>>> soboleng = torch.quasirandom.SobolEngine(dimension=5)
>>> soboleng.draw(3)
tensor([[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
        [0.7500, 0.2500, 0.7500, 0.2500, 0.7500],
        [0.2500, 0.7500, 0.2500, 0.7500, 0.2500]])
```

`draw(n=1, out=None, dtype=torch.float32)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quasirandom.html#SobolEngine.draw) Function to draw a sequence of `n` points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (n, dimension).

Parameters

* **n** (*Int**,* *optional*) – The length of sequence of points to draw. Default: 1
* **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor
* **dtype** (`torch.dtype`, optional) – the desired data type of the returned tensor. Default: `torch.float32`

`draw_base2(m, out=None, dtype=torch.float32)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quasirandom.html#SobolEngine.draw_base2) Function to draw a sequence of `2**m` points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (2**m, dimension).

Parameters

* **m** (*Int*) – The (base2) exponent of the number of points to draw.
* **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – The output tensor
* **dtype** (`torch.dtype`, optional) – the desired data type of the returned tensor. Default: `torch.float32`

`fast_forward(n)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quasirandom.html#SobolEngine.fast_forward) Function to fast-forward the state of the `SobolEngine` by `n` steps. This is equivalent to drawing `n` samples without using the samples.

Parameters **n** (*Int*) – The number of steps to fast-forward by.

`reset()` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/quasirandom.html#SobolEngine.reset) Function to reset the `SobolEngine` to base state.
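A small sketch of `fast_forward()`: skipping n points is equivalent to drawing and discarding them, so after fast-forwarding by 2, the next point matches the third row of the example above (the first two coordinates of a Sobol sequence do not depend on the total dimension; worth verifying on your version):

```
>>> engine = torch.quasirandom.SobolEngine(dimension=2)
>>> _ = engine.fast_forward(2)  # skip the first two points of the sequence
>>> engine.draw(1)
tensor([[0.2500, 0.7500]])
```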
Example:

```
>>> a = torch.randn(4)
>>> a
tensor([-1.7120,  0.1734, -0.0478, -0.0922])
>>> torch.clamp(a, min=-0.5, max=0.5)
tensor([-0.5000,  0.1734, -0.0478, -0.0922])
```

`torch.clamp(input, *, min, out=None) → Tensor` Clamps all elements in `input` to be larger than or equal to [`min`](torch.min#torch.min "torch.min").

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments

* **min** (*Number*) – minimal value of each element in the output
* **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(4)
>>> a
tensor([-0.0299, -2.3184,  2.1593, -0.8883])
>>> torch.clamp(a, min=0.5)
tensor([ 0.5000,  0.5000,  2.1593,  0.5000])
```

`torch.clamp(input, *, max, out=None) → Tensor` Clamps all elements in `input` to be smaller than or equal to [`max`](torch.max#torch.max "torch.max").

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments

* **max** (*Number*) – maximal value of each element in the output
* **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(4)
>>> a
tensor([ 0.7753, -0.4702, -0.4599,  1.1899])
>>> torch.clamp(a, max=0.5)
tensor([ 0.5000, -0.4702, -0.4599,  0.5000])
```

pytorch torch.get_num_threads torch.get\_num\_threads ======================= `torch.get_num_threads() → int` Returns the number of threads used for parallelizing CPU operations.
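A trivial usage sketch (the returned count is machine-dependent; the value shown is illustrative):

```
>>> n = torch.get_num_threads()      # e.g. 8 on an 8-core machine
>>> torch.set_num_threads(max(1, n // 2))  # intra-op parallelism can be tuned down
```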
pytorch torch.nansum torch.nansum ============ `torch.nansum(input, *, dtype=None) → Tensor` Returns the sum of all elements, treating Not a Numbers (NaNs) as zero.

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. If specified, the input tensor is casted to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None.

Example:

```
>>> a = torch.tensor([1., 2., float('nan'), 4.])
>>> torch.nansum(a)
tensor(7.)
```

`torch.nansum(input, dim, keepdim=False, *, dtype=None) → Tensor` Returns the sum of each row of the `input` tensor in the given dimension `dim`, treating Not a Numbers (NaNs) as zero. If `dim` is a list of dimensions, reduce over all of them. If `keepdim` is `True`, the output tensor is of the same size as `input` except in the dimension(s) `dim` where it is of size 1. Otherwise, `dim` is squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in the output tensor having 1 (or `len(dim)`) fewer dimension(s).

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – the dimension or dimensions to reduce.
* **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not.

Keyword Arguments **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. If specified, the input tensor is casted to `dtype` before the operation is performed. This is useful for preventing data type overflows. Default: None.

Example:

```
>>> torch.nansum(torch.tensor([1., float("nan")]))
1.0
>>> a = torch.tensor([[1, 2], [3., float("nan")]])
>>> torch.nansum(a)
tensor(6.)
>>> torch.nansum(a, dim=0)
tensor([4., 2.])
>>> torch.nansum(a, dim=1)
tensor([3., 3.])
```

pytorch torch._assert torch.\_assert ============== `torch._assert(condition, message)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch.html#_assert) A wrapper around Python’s assert which is symbolically traceable.
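A minimal sketch (the condition and message are illustrative); unlike a plain `assert` statement, this call survives symbolic tracing:

```
>>> x = torch.ones(2, 3)
>>> torch._assert(x.dim() == 2, "expected a 2-D tensor")  # passes silently
>>> torch._assert(x.dim() == 3, "expected a 3-D tensor")  # fails
Traceback (most recent call last):
    ...
AssertionError: expected a 3-D tensor
```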
pytorch BCEWithLogitsLoss BCEWithLogitsLoss ================= `class torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#BCEWithLogitsLoss) This loss combines a `Sigmoid` layer and the `BCELoss` in one single class. This version is more numerically stable than using a plain `Sigmoid` followed by a `BCELoss` as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.

The unreduced (i.e. with `reduction` set to `'none'`) loss can be described as:

\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = -w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log(1 - \sigma(x_n)) \right],

where N is the batch size. If `reduction` is not `'none'` (default `'mean'`), then

\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}

This is used for measuring the error of a reconstruction in for example an auto-encoder. Note that the targets `t[i]` should be numbers between 0 and 1. It’s possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:

\ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = -w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log(1 - \sigma(x_{n,c})) \right],

where c is the class number (c > 1 for multi-label binary classification, c = 1 for single-label binary classification), n is the number of the sample in the batch and p_c is the weight of the positive answer for the class c. p_c > 1 increases the recall, p_c < 1 increases the precision. For example, if a dataset contains 100 positive and 300 negative examples of a single class, then `pos_weight` for the class should be equal to 300/100 = 3. The loss would act as if the dataset contains 3 × 100 = 300 positive examples.

Examples:

```
>>> target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
>>> output = torch.full([10, 64], 1.5)  # A prediction (logit)
>>> pos_weight = torch.ones([64])  # All weights are equal to 1
>>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
>>> criterion(output, target)  # -log(sigmoid(1.5))
tensor(0.2014)
```

Parameters

* **weight** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size `nbatch`.
* **size_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True`
* **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
* **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'`
* **pos_weight** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – a weight of positive examples. Must be a vector with length equal to the number of classes.

Shape:

* Input: (N, ∗) where ∗ means any number of additional dimensions
* Target: (N, ∗), same shape as the input
* Output: scalar. If `reduction` is `'none'`, then (N, ∗), same shape as input.

Examples:

```
>>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward()
```
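As a minimal sketch of the `pos_weight` arithmetic described above (the dataset counts are hypothetical):

```
>>> # Hypothetical single class with 100 positive and 300 negative examples
>>> pos_weight = torch.tensor([300. / 100.])        # = 3: positive term weighted 3x
>>> criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
>>> output = torch.randn(8, 1, requires_grad=True)  # raw logits, batch of 8
>>> target = torch.empty(8, 1).random_(2)
>>> criterion(output, target).backward()
```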
pytorch torch.atleast_2d torch.atleast\_2d ================= `torch.atleast_2d(*tensors)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#atleast_2d) Returns a 2-dimensional view of each input tensor with zero dimensions. Input tensors with two or more dimensions are returned as-is.

Parameters **input** (*Tensor or list of Tensors*)

Returns output (Tensor or tuple of Tensors)

Example:

```
>>> x = torch.tensor(1.)
>>> x
tensor(1.)
>>> torch.atleast_2d(x)
tensor([[1.]])
>>> x = torch.randn(2,2)
>>> x
tensor([[2.2086, 2.5165],
        [0.1757, 0.5194]])
>>> torch.atleast_2d(x)
tensor([[2.2086, 2.5165],
        [0.1757, 0.5194]])
>>> x = torch.tensor(0.5)
>>> y = torch.tensor(1.)
>>> torch.atleast_2d((x,y))
(tensor([[0.5000]]), tensor([[1.]]))
```

pytorch torch.flipud torch.flipud ============ `torch.flipud(input) → Tensor` Flip tensor in the up/down direction, returning a new tensor. Flip the entries in each column in the up/down direction. Rows are preserved, but appear in a different order than before.

Note Requires the tensor to be at least 1-D.

Note `torch.flipud` makes a copy of `input`’s data. This is different from NumPy’s `np.flipud`, which returns a view in constant time. Since copying a tensor’s data is more work than viewing that data, `torch.flipud` is expected to be slower than `np.flipud`.

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – Must be at least 1-dimensional.

Example:

```
>>> x = torch.arange(4).view(2, 2)
>>> x
tensor([[0, 1],
        [2, 3]])
>>> torch.flipud(x)
tensor([[2, 3],
        [0, 1]])
```

pytorch torch.atanh torch.atanh =========== `torch.atanh(input, *, out=None) → Tensor` Returns a new tensor with the inverse hyperbolic tangent of the elements of `input`.

Note The domain of the inverse hyperbolic tangent is `(-1, 1)` and values outside this range will be mapped to `NaN`, except for the values `1` and `-1` for which the output is mapped to `+/-INF` respectively.

\text{out}_{i} = \tanh^{-1}(\text{input}_{i})

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(4).uniform_(-1, 1)
>>> a
tensor([ -0.9385, 0.2968, -0.8591, -0.1871 ])
>>> torch.atanh(a)
tensor([ -1.7253, 0.3060, -1.2899, -0.1893 ])
```

pytorch torch.argsort torch.argsort ============= `torch.argsort(input, dim=-1, descending=False) → LongTensor` Returns the indices that sort a tensor along a given dimension in ascending order by value. This is the second value returned by [`torch.sort()`](torch.sort#torch.sort "torch.sort"). See its documentation for the exact semantics of this method.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the dimension to sort along * **descending** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – controls the sorting order (ascending or descending) Example: ``` >>> a = torch.randn(4, 4) >>> a tensor([[ 0.0785, 1.5267, -0.8521, 0.4065], [ 0.1598, 0.0788, -0.0745, -1.2700], [ 1.2208, 1.0722, -0.7064, 1.2564], [ 0.0669, -0.2318, -0.8229, -0.9280]]) >>> torch.argsort(a, dim=1) tensor([[2, 0, 3, 1], [3, 2, 1, 0], [2, 1, 0, 3], [3, 2, 1, 0]]) ``` pytorch torch.empty torch.empty =========== `torch.empty(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor` Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the variable argument `size`. Parameters **size** (*int...*) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple. Keyword Arguments * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of returned tensor. Default: if `None`, uses a global default (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). * **layout** ([`torch.layout`](../tensor_attributes#torch.torch.layout "torch.torch.layout"), optional) – the desired layout of returned Tensor. Default: `torch.strided`. * **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device of returned tensor. Default: if `None`, uses the current device for the default tensor type (see [`torch.set_default_tensor_type()`](torch.set_default_tensor_type#torch.set_default_tensor_type "torch.set_default_tensor_type")). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires\_grad** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If autograd should record operations on the returned tensor. Default: `False`. * **pin\_memory** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: `False`. * **memory\_format** ([`torch.memory_format`](../tensor_attributes#torch.torch.memory_format "torch.torch.memory_format"), optional) – the desired memory format of returned Tensor. Default: `torch.contiguous_format`. Example: ``` >>> torch.empty(2, 3) tensor(1.00000e-08 * [[ 6.3984, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000]]) ``` pytorch torch.flatten torch.flatten ============= `torch.flatten(input, start_dim=0, end_dim=-1) → Tensor` Flattens `input` by reshaping it into a one-dimensional tensor. If `start_dim` or `end_dim` are passed, only dimensions starting with `start_dim` and ending with `end_dim` are flattened. The order of elements in `input` is unchanged. Unlike NumPy’s flatten, which always copies input’s data, this function may return the original object, a view, or copy. If no dimensions are flattened, then the original object `input` is returned. Otherwise, if input can be viewed as the flattened shape, then that view is returned. 
Finally, the data is copied only if the input cannot be viewed as the flattened shape. See [`torch.Tensor.view()`](../tensors#torch.Tensor.view "torch.Tensor.view") for details on when a view will be returned.

Note Flattening a zero-dimensional tensor will return a one-dimensional view.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **start_dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the first dim to flatten
* **end_dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the last dim to flatten

Example:

```
>>> t = torch.tensor([[[1, 2],
...                    [3, 4]],
...                   [[5, 6],
...                    [7, 8]]])
>>> torch.flatten(t)
tensor([1, 2, 3, 4, 5, 6, 7, 8])
>>> torch.flatten(t, start_dim=1)
tensor([[1, 2, 3, 4],
        [5, 6, 7, 8]])
```

pytorch torch.igammac torch.igammac ============= `torch.igammac(input, other, *, out=None) → Tensor` Computes the regularized upper incomplete gamma function:

\text{out}_{i} = \frac{1}{\Gamma(\text{input}_i)} \int_{\text{other}_i}^{\infty} t^{\text{input}_i - 1} e^{-t} \, dt

where both input_i and other_i are weakly positive and at least one is strictly positive. If both are zero or either is negative then out_i = nan. Γ(·) in the equation above is the gamma function,

\Gamma(\text{input}_i) = \int_0^\infty t^{(\text{input}_i - 1)} e^{-t} \, dt.

See [`torch.igamma()`](torch.igamma#torch.igamma "torch.igamma") and [`torch.lgamma()`](torch.lgamma#torch.lgamma "torch.lgamma") for related functions. Supports [broadcasting to a common shape](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) and float inputs.

Note The backward pass with respect to `input` is not yet supported. Please open an issue on PyTorch’s Github to request it.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the first non-negative input tensor
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the second non-negative input tensor

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a1 = torch.tensor([4.0])
>>> a2 = torch.tensor([3.0, 4.0, 5.0])
>>> torch.igammac(a1, a2)
tensor([0.6472, 0.4335, 0.2650])
>>> torch.igamma(a1, a2) + torch.igammac(a1, a2)
tensor([1., 1., 1.])
```

pytorch Upsample Upsample ======== `class torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/upsampling.html#Upsample) Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data. The input data is assumed to be of the form `minibatch x channels x [optional depth] x [optional height] x width`. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor. The algorithms available for upsampling are nearest neighbor and linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input Tensor, respectively. One can either give a `scale_factor` or the target output `size` to calculate the output size.
(You cannot give both, as it is ambiguous.)

Parameters

* **size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *Tuple[int] or Tuple[int, int] or Tuple[int, int, int]*, *optional*) – output spatial sizes
* **scale_factor** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)") *or* *Tuple[float] or Tuple[float, float] or Tuple[float, float, float]*, *optional*) – multiplier for spatial size. Has to match input size if it is a tuple.
* **mode** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")*,* *optional*) – the upsampling algorithm: one of `'nearest'`, `'linear'`, `'bilinear'`, `'bicubic'` and `'trilinear'`. Default: `'nearest'`
* **align_corners** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, the corner pixels of the input and output tensors are aligned, and thus preserving the values at those pixels. This only has effect when `mode` is `'linear'`, `'bilinear'`, or `'trilinear'`. Default: `False`

Shape:

* Input: (N, C, W_in), (N, C, H_in, W_in) or (N, C, D_in, H_in, W_in)
* Output: (N, C, W_out), (N, C, H_out, W_out) or (N, C, D_out, H_out, W_out), where

D_out = ⌊D_in × scale_factor⌋, H_out = ⌊H_in × scale_factor⌋, W_out = ⌊W_in × scale_factor⌋

Warning With `align_corners = True`, the linearly interpolating modes (`linear`, `bilinear`, `bicubic`, and `trilinear`) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is `align_corners = False`. See below for concrete examples on how this affects the outputs.

Note If you want downsampling/general resizing, you should use `interpolate()`.
Examples:

```
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='nearest')
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> m(input)
tensor([[[[ 1.0000,  1.2500,  1.7500,  2.0000],
          [ 1.5000,  1.7500,  2.2500,  2.5000],
          [ 2.5000,  2.7500,  3.2500,  3.5000],
          [ 3.0000,  3.2500,  3.7500,  4.0000]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])

>>> # Try scaling the same data in a larger tensor
>>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)
>>> input_3x3[:, :, :2, :2].copy_(input)
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> input_3x3
tensor([[[[ 1.,  2.,  0.],
          [ 3.,  4.,  0.],
          [ 0.,  0.,  0.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> # Notice that values in top left corner are the same with the small input (except at boundary)
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.2500,  1.7500,  1.5000,  0.5000,  0.0000],
          [ 1.5000,  1.7500,  2.2500,  1.8750,  0.6250,  0.0000],
          [ 2.5000,  2.7500,  3.2500,  2.6250,  0.8750,  0.0000],
          [ 2.2500,  2.4375,  2.8125,  2.2500,  0.7500,  0.0000],
          [ 0.7500,  0.8125,  0.9375,  0.7500,  0.2500,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> # Notice that values in top left corner are now changed
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.4000,  1.8000,  1.6000,  0.8000,  0.0000],
          [ 1.8000,  2.2000,  2.6000,  2.2400,  1.1200,  0.0000],
          [ 2.6000,  3.0000,  3.4000,  2.8800,  1.4400,  0.0000],
          [ 2.4000,  2.7200,  3.0400,  2.5600,  1.2800,  0.0000],
          [ 1.2000,  1.3600,  1.5200,  1.2800,  0.6400,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])
```

pytorch Dropout3d Dropout3d ========= `class torch.nn.Dropout3d(p=0.5, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/dropout.html#Dropout3d) Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor input[i, j]). Each channel will be zeroed out independently on every forward call with probability `p` using samples from a Bernoulli distribution. Usually the input comes from `nn.Conv3d` modules. As described in the paper [Efficient Object Localization Using Convolutional Networks](https://arxiv.org/abs/1411.4280), if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, `nn.Dropout3d()` will help promote independence between feature maps and should be used instead.

Parameters

* **p** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – probability of an element to be zeroed.
* **inplace** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If set to `True`, will do this operation in-place

Shape:

* Input: (N, C, D, H, W)
* Output: (N, C, D, H, W) (same shape as input)

Examples:

```
>>> m = nn.Dropout3d(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
```

pytorch ChannelShuffle ChannelShuffle ============== `class torch.nn.ChannelShuffle(groups)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/channelshuffle.html#ChannelShuffle) Divide the channels in a tensor of shape (∗, C, H, W) into g groups and rearrange them as (∗, C/g, g, H, W), while keeping the original tensor shape.

Parameters **groups** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – number of groups to divide channels in.

Examples (the original example printed Python lists; rewritten here with a concrete input so it is runnable, the channel ordering is unchanged):

```
>>> channel_shuffle = nn.ChannelShuffle(2)
>>> input = torch.arange(1, 17, dtype=torch.float32).view(1, 4, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]],
         [[ 5.,  6.],
          [ 7.,  8.]],
         [[ 9., 10.],
          [11., 12.]],
         [[13., 14.],
          [15., 16.]]]])
>>> channel_shuffle(input)
tensor([[[[ 1.,  2.],
          [ 3.,  4.]],
         [[ 9., 10.],
          [11., 12.]],
         [[ 5.,  6.],
          [ 7.,  8.]],
         [[13., 14.],
          [15., 16.]]]])
```

pytorch torch.nextafter torch.nextafter =============== `torch.nextafter(input, other, *, out=None) → Tensor` Return the next floating-point value after `input` towards `other`, elementwise. The shapes of `input` and `other` must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics).

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the first input tensor
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the second input tensor

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> eps = torch.finfo(torch.float32).eps
>>> torch.nextafter(torch.Tensor([1, 2]), torch.Tensor([2, 1])) == torch.Tensor([eps + 1, 2 - eps])
tensor([True, True])
```

pytorch ConstantPad2d ConstantPad2d ============= `class torch.nn.ConstantPad2d(padding, value)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/padding.html#ConstantPad2d) Pads the input tensor boundaries with a constant value. For `N`-dimensional padding, use [`torch.nn.functional.pad()`](../nn.functional#torch.nn.functional.pad "torch.nn.functional.pad").

Parameters **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the size of the padding. If it is an `int`, uses the same padding in all boundaries.
If a 4-`tuple`, uses (padding_left, padding_right, padding_top, padding_bottom)

Shape:

* Input: (N, C, H_in, W_in)
* Output: (N, C, H_out, W_out), where

H_out = H_in + padding_top + padding_bottom, W_out = W_in + padding_left + padding_right

Examples:

```
>>> m = nn.ConstantPad2d(2, 3.5)
>>> input = torch.randn(1, 2, 2)
>>> input
tensor([[[ 1.6585,  0.4320],
         [-0.8701, -0.4649]]])
>>> m(input)
tensor([[[ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  1.6585,  0.4320,  3.5000,  3.5000],
         [ 3.5000,  3.5000, -0.8701, -0.4649,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000]]])
>>> # using different paddings for different sides
>>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5)
>>> m(input)
tensor([[[ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  1.6585,  0.4320],
         [ 3.5000,  3.5000,  3.5000, -0.8701, -0.4649],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000]]])
```
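To connect the second example above to the shape formulas, a quick check of the output size under the same asymmetric padding (H_out = 2 + 2 + 1 = 5 and W_out = 2 + 3 + 0 = 5):

```
>>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5)
>>> input = torch.randn(1, 2, 2)
>>> m(input).shape
torch.Size([1, 5, 5])
```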
pytorch torch.logsumexp torch.logsumexp =============== `torch.logsumexp(input, dim, keepdim=False, *, out=None)` Returns the log of summed exponentials of each row of the `input` tensor in the given dimension `dim`. The computation is numerically stabilized. For summation index j given by `dim` and other indices i, the result is

\text{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij})

If `keepdim` is `True`, the output tensor is of the same size as `input` except in the dimension(s) `dim` where it is of size 1. Otherwise, `dim` is squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in the output tensor having 1 (or `len(dim)`) fewer dimension(s).

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – the dimension or dimensions to reduce.
* **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not.

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(3, 3)
>>> torch.logsumexp(a, 1)
tensor([ 0.8442, 1.4322, 0.8711])
```

pytorch torch.inverse torch.inverse ============= `torch.inverse(input, *, out=None) → Tensor` Takes the inverse of the square matrix `input`. `input` can be batches of 2D square tensors, in which case this function would return a tensor composed of individual inverses. Supports real and complex input.

Note [`torch.inverse()`](#torch.inverse "torch.inverse") is deprecated. Please use [`torch.linalg.inv()`](../linalg#torch.linalg.inv "torch.linalg.inv") instead.

Note Irrespective of the original strides, the returned tensors will be transposed, i.e. with strides like `input.contiguous().transpose(-2, -1).stride()`

Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor of size (∗, n, n) where ∗ is zero or more batch dimensions

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.
Examples:

```
>>> x = torch.rand(4, 4)
>>> y = torch.inverse(x)
>>> z = torch.mm(x, y)
>>> z
tensor([[ 1.0000, -0.0000, -0.0000,  0.0000],
        [ 0.0000,  1.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  1.0000,  0.0000],
        [ 0.0000, -0.0000, -0.0000,  1.0000]])
>>> torch.max(torch.abs(z - torch.eye(4)))  # Max non-zero
tensor(1.1921e-07)
>>> # Batched inverse example
>>> x = torch.randn(2, 3, 4, 4)
>>> y = torch.inverse(x)
>>> z = torch.matmul(x, y)
>>> torch.max(torch.abs(z - torch.eye(4).expand_as(x)))  # Max non-zero
tensor(1.9073e-06)
>>> x = torch.rand(4, 4, dtype=torch.cdouble)
>>> y = torch.inverse(x)
>>> z = torch.mm(x, y)
>>> z
tensor([[ 1.0000e+00+0.0000e+00j, -1.3878e-16+3.4694e-16j,
          5.5511e-17-1.1102e-16j,  0.0000e+00-1.6653e-16j],
        [ 5.5511e-16-1.6653e-16j,  1.0000e+00+6.9389e-17j,
          2.2204e-16-1.1102e-16j, -2.2204e-16+1.1102e-16j],
        [ 3.8858e-16-1.2490e-16j,  2.7756e-17+3.4694e-17j,
          1.0000e+00+0.0000e+00j, -4.4409e-16+5.5511e-17j],
        [ 4.4409e-16+5.5511e-16j, -3.8858e-16+1.8041e-16j,
          2.2204e-16+0.0000e+00j,  1.0000e+00-3.4694e-16j]],
       dtype=torch.complex128)
>>> torch.max(torch.abs(z - torch.eye(4, dtype=torch.cdouble)))  # Max non-zero
tensor(7.5107e-16, dtype=torch.float64)
```

pytorch torch.normal torch.normal ============ `torch.normal(mean, std, *, generator=None, out=None) → Tensor` Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given. The `mean` is a tensor with the mean of each output element’s normal distribution. The `std` is a tensor with the standard deviation of each output element’s normal distribution. The shapes of `mean` and `std` don’t need to match, but the total number of elements in each tensor needs to be the same.

Note When the shapes do not match, the shape of `mean` is used as the shape for the returned output tensor.

Parameters

* **mean** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor of per-element means
* **std** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor of per-element standard deviations

Keyword Arguments

* **generator** ([`torch.Generator`](torch.generator#torch.Generator "torch.Generator"), optional) – a pseudorandom number generator for sampling
* **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1))
tensor([  1.0425,   3.5672,   2.7969,   4.2925,   4.7229,   6.2134,
          8.0505,   8.1408,   9.0563,  10.0566])
```

`torch.normal(mean=0.0, std, *, out=None) → Tensor` Similar to the function above, but the means are shared among all drawn elements.

Parameters

* **mean** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – the mean for all distributions
* **std** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor of per-element standard deviations

Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> torch.normal(mean=0.5, std=torch.arange(1., 6.))
tensor([-1.2793, -1.0732, -2.0687,  5.1177, -1.2303])
```

`torch.normal(mean, std=1.0, *, out=None) → Tensor` Similar to the function above, but the standard deviations are shared among all drawn elements.
Parameters * **mean** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor of per-element means * **std** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – the standard deviation for all distributions Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example:

```
>>> torch.normal(mean=torch.arange(1., 6.))
tensor([ 1.1552,  2.6148,  2.6535,  5.8318,  4.2361])
```

`torch.normal(mean, std, size, *, out=None) → Tensor` Similar to the function above, but the means and standard deviations are shared among all drawn elements. The resulting tensor has size given by `size`. Parameters * **mean** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the mean for all distributions * **std** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the standard deviation for all distributions * **size** (*int...*) – a sequence of integers defining the shape of the output tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example:

```
>>> torch.normal(2, 3, size=(1, 4))
tensor([[-1.3987, -1.9544,  3.6048,  0.7909]])
```

pytorch torch.sort torch.sort ========== `torch.sort(input, dim=-1, descending=False, *, out=None) -> (Tensor, LongTensor)` Sorts the elements of the `input` tensor along a given dimension in ascending order by value. If `dim` is not given, the last dimension of the `input` is chosen. If `descending` is `True` then the elements are sorted in descending order by value. A namedtuple of (values, indices) is returned, where the `values` are the sorted values and `indices` are the indices of the elements in the original `input` tensor. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the dimension to sort along * **descending** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – controls the sorting order (ascending or descending) Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the output tuple of (`Tensor`, `LongTensor`) that can be optionally given to be used as output buffers Example:

```
>>> x = torch.randn(3, 4)
>>> sorted, indices = torch.sort(x)
>>> sorted
tensor([[-0.2162,  0.0608,  0.6719,  2.3332],
        [-0.5793,  0.0061,  0.6058,  0.9497],
        [-0.5071,  0.3343,  0.9553,  1.0960]])
>>> indices
tensor([[ 1,  0,  2,  3],
        [ 3,  1,  0,  2],
        [ 0,  3,  1,  2]])

>>> sorted, indices = torch.sort(x, 0)
>>> sorted
tensor([[-0.5071, -0.2162,  0.6719, -0.5793],
        [ 0.0608,  0.0061,  0.9497,  0.3343],
        [ 0.6058,  0.9553,  1.0960,  2.3332]])
>>> indices
tensor([[ 2,  0,  0,  1],
        [ 0,  1,  1,  2],
        [ 1,  2,  2,  0]])
```

pytorch torch.mean torch.mean ========== `torch.mean(input) → Tensor` Returns the mean value of all elements in the `input` tensor. Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Example:

```
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.2294, -0.5481,  1.3288]])
>>> torch.mean(a)
tensor(0.3367)
```

`torch.mean(input, dim, keepdim=False, *, out=None) → Tensor` Returns the mean value of each row of the `input` tensor in the given dimension `dim`. If `dim` is a list of dimensions, reduce over all of them.
If `keepdim` is `True`, the output tensor is of the same size as `input` except in the dimension(s) `dim` where it is of size 1. Otherwise, `dim` is squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in the output tensor having 1 (or `len(dim)`) fewer dimension(s). Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* *tuple of python:ints*) – the dimension or dimensions to reduce. * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example:

```
>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3841,  0.6320,  0.4254, -0.7384],
        [-0.9644,  1.0131, -0.6549, -1.4279],
        [-0.2951, -1.3350, -0.7694,  0.5600],
        [ 1.0842, -0.9580,  0.3623,  0.2343]])
>>> torch.mean(a, 1)
tensor([-0.0163, -0.5085, -0.4599,  0.1807])
>>> torch.mean(a, 1, True)
tensor([[-0.0163],
        [-0.5085],
        [-0.4599],
        [ 0.1807]])
```

pytorch DataParallel DataParallel ============ `class torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/parallel/data_parallel.html#DataParallel) Implements data parallelism at the module level. This container parallelizes the application of the given `module` by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backward pass, gradients from each replica are summed into the original module. The batch size should be larger than the number of GPUs used. Warning It is recommended to use [`DistributedDataParallel`](torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel "torch.nn.parallel.DistributedDataParallel") instead of this class to do multi-GPU training, even if there is only a single node. See: [Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel](https://pytorch.org/docs/1.8.0/notes/cuda.html#cuda-nn-ddp-instead) and [Distributed Data Parallel](https://pytorch.org/docs/1.8.0/notes/ddp.html#ddp). Arbitrary positional and keyword inputs are allowed to be passed into DataParallel, but some types are specially handled. Tensors will be **scattered** on the dim specified (default 0). Tuple, list and dict types will be shallow copied. The other types will be shared among different threads and can be corrupted if written to in the model's forward pass. The parallelized `module` must have its parameters and buffers on `device_ids[0]` before running this [`DataParallel`](#torch.nn.DataParallel "torch.nn.DataParallel") module. Warning In each forward, `module` is **replicated** on each device, so any updates to the running module in `forward` will be lost. For example, if `module` has a counter attribute that is incremented in each `forward`, it will always stay at the initial value because the update is done on the replicas, which are destroyed after `forward`. However, [`DataParallel`](#torch.nn.DataParallel "torch.nn.DataParallel") guarantees that the replica on `device[0]` will have its parameters and buffers sharing storage with the base parallelized `module`. So **in-place** updates to the parameters or buffers on `device[0]` will be recorded. E.g., [`BatchNorm2d`](torch.nn.batchnorm2d#torch.nn.BatchNorm2d "torch.nn.BatchNorm2d") and [`spectral_norm()`](torch.nn.utils.spectral_norm#torch.nn.utils.spectral_norm "torch.nn.utils.spectral_norm") rely on this behavior to update the buffers. Warning Forward and backward hooks defined on `module` and its submodules will be invoked `len(device_ids)` times, each with inputs located on a particular device. In particular, the hooks are only guaranteed to be executed in the correct order with respect to operations on corresponding devices. For example, it is not guaranteed that hooks set via [`register_forward_pre_hook()`](torch.nn.module#torch.nn.Module.register_forward_pre_hook "torch.nn.Module.register_forward_pre_hook") are executed before *all* `len(device_ids)` [`forward()`](torch.nn.module#torch.nn.Module.forward "torch.nn.Module.forward") calls, only that each such hook is executed before the corresponding [`forward()`](torch.nn.module#torch.nn.Module.forward "torch.nn.Module.forward") call of that device. Warning When `module` returns a scalar (i.e., 0-dimensional tensor) in `forward()`, this wrapper will return a vector of length equal to the number of devices used in data parallelism, containing the result from each device. Note There is a subtlety in using the `pack sequence -> recurrent network -> unpack sequence` pattern in a [`Module`](torch.nn.module#torch.nn.Module "torch.nn.Module") wrapped in [`DataParallel`](#torch.nn.DataParallel "torch.nn.DataParallel"). See [My recurrent network doesn’t work with data parallelism](https://pytorch.org/docs/1.8.0/notes/faq.html#pack-rnn-unpack-with-data-parallelism) section in FAQ for details. Parameters * **module** ([Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module to be parallelized * **device\_ids** (*list of python:int* *or* [torch.device](../tensor_attributes#torch.torch.device "torch.torch.device")) – CUDA devices (default: all devices) * **output\_device** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [torch.device](../tensor_attributes#torch.torch.device "torch.torch.device")) – device location of output (default: device\_ids[0]) Variables **~DataParallel.module** ([Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – the module to be parallelized Example:

```
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var)  # input_var can be on any device, including CPU
```

pytorch LazyConv2d LazyConv2d ========== `class torch.nn.LazyConv2d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/conv.html#LazyConv2d) A [`torch.nn.Conv2d`](torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") module with lazy initialization of the `in_channels` argument of the [`Conv2d`](torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") that is inferred from `input.size(1)`.
Parameters * **out\_channels** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of channels produced by the convolution * **kernel\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Size of the convolving kernel * **stride** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Stride of the convolution. Default: 1 * **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Zero-padding added to both sides of the input. Default: 0 * **padding\_mode** (*string**,* *optional*) – `'zeros'`, `'reflect'`, `'replicate'` or `'circular'`. Default: `'zeros'` * **dilation** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Spacing between kernel elements. Default: 1 * **groups** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of blocked connections from input channels to output channels. Default: 1 * **bias** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, adds a learnable bias to the output. Default: `True` See also [`torch.nn.Conv2d`](torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") and [`torch.nn.modules.lazy.LazyModuleMixin`](torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin "torch.nn.modules.lazy.LazyModuleMixin") `cls_to_become` alias of [`Conv2d`](torch.nn.conv2d#torch.nn.Conv2d "torch.nn.Conv2d") pytorch torch.bucketize torch.bucketize =============== `torch.bucketize(input, boundaries, *, out_int32=False, right=False, out=None) → Tensor` Returns the indices of the buckets to which each value in the `input` belongs, where the boundaries of the buckets are set by `boundaries`. Returns a new tensor with the same size as `input`. If `right` is `False` (default), then the left boundary of each bucket is open and the right boundary is closed. More formally, the returned index satisfies the following rules: | `right` | *returned index satisfies* | | --- | --- | | False | `boundaries[i-1] < input[m][n]...[l][x] <= boundaries[i]` | | True | `boundaries[i-1] <= input[m][n]...[l][x] < boundaries[i]` | Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* *Scalar*) – N-D tensor or a Scalar containing the search value(s). * **boundaries** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – 1-D tensor, must contain a monotonically increasing sequence. Keyword Arguments * **out\_int32** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – indicate the output data type. torch.int32 if True, torch.int64 otherwise. Default value is False, i.e. the default output data type is torch.int64. * **right** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if False, return the first suitable location that is found. If True, return the last such index. If no suitable index is found, return 0 for non-numerical values (e.g. nan, inf) or the size of `boundaries` (one past the last index). In other words, if False, this returns the lower bound index for each value in `input` from `boundaries`; if True, the upper bound index. Default value is False. * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor, must be the same size as `input` if provided. Example:

```
>>> boundaries = torch.tensor([1, 3, 5, 7, 9])
>>> boundaries
tensor([1, 3, 5, 7, 9])
>>> v = torch.tensor([[3, 6, 9], [3, 6, 9]])
>>> v
tensor([[3, 6, 9],
        [3, 6, 9]])
>>> torch.bucketize(v, boundaries)
tensor([[1, 3, 4],
        [1, 3, 4]])
>>> torch.bucketize(v, boundaries, right=True)
tensor([[2, 3, 5],
        [2, 3, 5]])
```
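As a cross-check (an illustrative sketch, not part of the original reference), the element-wise behaviour of `torch.bucketize` matches Python's standard-library `bisect` module: `right=False` corresponds to `bisect.bisect_left` and `right=True` to `bisect.bisect_right`. Reusing the `boundaries` and `v` tensors from the example above:

```
import bisect
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
v = torch.tensor([[3, 6, 9], [3, 6, 9]])

# right=False: boundaries[i-1] < x <= boundaries[i], like bisect_left
left = torch.bucketize(v, boundaries)
assert left.flatten().tolist() == [
    bisect.bisect_left(boundaries.tolist(), x) for x in v.flatten().tolist()
]

# right=True: boundaries[i-1] <= x < boundaries[i], like bisect_right
right = torch.bucketize(v, boundaries, right=True)
assert right.flatten().tolist() == [
    bisect.bisect_right(boundaries.tolist(), x) for x in v.flatten().tolist()
]
```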
pytorch torch.true_divide torch.true\_divide ================== `torch.true_divide(dividend, divisor, *, out) → Tensor` Alias for [`torch.div()`](torch.div#torch.div "torch.div") with `rounding_mode=None`. pytorch torch.equal torch.equal =========== `torch.equal(input, other) → bool` `True` if two tensors have the same size and elements, `False` otherwise. Example:

```
>>> torch.equal(torch.tensor([1, 2]), torch.tensor([1, 2]))
True
```

pytorch HingeEmbeddingLoss HingeEmbeddingLoss ================== `class torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#HingeEmbeddingLoss) Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as x, and is typically used for learning nonlinear embeddings or semi-supervised learning. The loss function for the n-th sample in the mini-batch is l\_n = \begin{cases} x\_n, & \text{if}\; y\_n = 1,\\ \max \{0, \Delta - x\_n\}, & \text{if}\; y\_n = -1, \end{cases} and the total loss function is \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} where L = \{l\_1,\dots,l\_N\}^\top. Parameters * **margin** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – Has a default value of `1`. * **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True` * **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'` Shape: * Input: (\*), where \* means any number of dimensions. The sum operation operates over all the elements. * Target: (\*), same shape as the input * Output: scalar. If `reduction` is `'none'`, then same shape as the input pytorch torch.lu torch.lu ======== `torch.lu(*args, **kwargs)` Computes the LU factorization of a matrix or batches of matrices `A`. Returns a tuple containing the LU factorization and pivots of `A`. Pivoting is done if `pivot` is set to `True`. Note The pivots returned by the function are 1-indexed. If `pivot` is `False`, then the returned pivots is a tensor filled with zeros of the appropriate size. Note LU factorization with `pivot` = `False` is not available for CPU, and attempting to do so will throw an error. However, LU factorization with `pivot` = `False` is available for CUDA. Note This function does not check if the factorization was successful or not if `get_infos` is `True`, since the status of the factorization is present in the third element of the return tuple. Note In the case of batches of square matrices with size less or equal to 32 on a CUDA device, the LU factorization is repeated for singular matrices due to a bug in the MAGMA library (see magma issue 13). Note `L`, `U`, and `P` can be derived using [`torch.lu_unpack()`](torch.lu_unpack#torch.lu_unpack "torch.lu_unpack"). Warning The LU factorization does have backward support, but only for square inputs of full rank. Parameters * **A** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to factor of size (\*, m, n) * **pivot** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – controls whether pivoting is done. Default: `True` * **get\_infos** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if set to `True`, returns an info IntTensor. Default: `False` * **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – optional output tuple. If `get_infos` is `True`, then the elements in the tuple are Tensor, IntTensor, and IntTensor. If `get_infos` is `False`, then the elements in the tuple are Tensor, IntTensor. Default: `None` Returns A tuple of tensors containing * **factorization** (*Tensor*): the factorization of size (\*, m, n) * **pivots** (*IntTensor*): the pivots of size (\*, \text{min}(m, n)). `pivots` stores all the intermediate transpositions of rows. The final permutation `perm` could be reconstructed by applying `swap(perm[i], perm[pivots[i] - 1])` for `i = 0, ..., pivots.size(-1) - 1`, where `perm` is initially the identity permutation of m elements (essentially this is what [`torch.lu_unpack()`](torch.lu_unpack#torch.lu_unpack "torch.lu_unpack") is doing). * **infos** (*IntTensor*, *optional*): if `get_infos` is `True`, this is a tensor of size (\*) where non-zero values indicate whether factorization for the matrix or each minibatch has succeeded or failed Return type ([Tensor](../tensors#torch.Tensor "torch.Tensor"), IntTensor, IntTensor (optional)) Example:

```
>>> A = torch.randn(2, 3, 3)
>>> A_LU, pivots = torch.lu(A)
>>> A_LU
tensor([[[ 1.3506,  2.5558, -0.0816],
         [ 0.1684,  1.1551,  0.1940],
         [ 0.1193,  0.6189, -0.5497]],

        [[ 0.4526,  1.2526, -0.3285],
         [-0.7988,  0.7175, -0.9701],
         [ 0.2634, -0.9255, -0.3459]]])
>>> pivots
tensor([[ 3,  3,  3],
        [ 3,  3,  3]], dtype=torch.int32)
>>> A_LU, pivots, info = torch.lu(A, get_infos=True)
>>> if info.nonzero().size(0) == 0:
...     print('LU factorization succeeded for all samples!')
LU factorization succeeded for all samples!
```

pytorch torch.lt torch.lt ======== `torch.lt(input, other, *, out=None) → Tensor` Computes \text{input} < \text{other} element-wise. The second argument can be a number or a tensor whose shape is [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) with the first argument.
Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to compare * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor") *or* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")) – the tensor or value to compare Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Returns A boolean tensor that is True where `input` is less than `other` and False elsewhere Example:

```
>>> torch.lt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, False],
        [True, False]])
```

pytorch torch.norm torch.norm ========== `torch.norm(input, p='fro', dim=None, keepdim=False, out=None, dtype=None)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/functional.html#norm) Returns the matrix norm or vector norm of a given tensor. Warning torch.norm is deprecated and may be removed in a future PyTorch release. Use [`torch.linalg.norm()`](../linalg#torch.linalg.norm "torch.linalg.norm") instead, but note that [`torch.linalg.norm()`](../linalg#torch.linalg.norm "torch.linalg.norm") has a different signature and slightly different behavior that is more consistent with NumPy's numpy.linalg.norm. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – The input tensor. Its data type must be either a floating point or complex type. For complex inputs, the norm is calculated using the absolute value of each element. If the input is complex and neither `dtype` nor `out` is specified, the result's data type will be the corresponding floating point type (e.g. float if `input` is complexfloat). * **p** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* [float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *inf**,* *-inf**,* *'fro'**,* *'nuc'**,* *optional*) – the order of norm. Default: `'fro'` The following norms can be calculated: | ord | matrix norm | vector norm | | --- | --- | --- | | 'fro' | Frobenius norm | – | | 'nuc' | nuclear norm | – | | Number | – | sum(abs(x)\*\*ord)\*\*(1./ord) | The vector norm can be calculated across any number of dimensions. The corresponding dimensions of `input` are flattened into one dimension, and the norm is calculated on the flattened dimension. Frobenius norm produces the same result as `p=2` in all cases except when `dim` is a list of three or more dims, in which case Frobenius norm throws an error. Nuclear norm can only be calculated across exactly two dimensions. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *tuple of python:ints**,* *list of python:ints**,* *optional*) – Specifies which dimension or dimensions of `input` to calculate the norm across. If `dim` is `None`, the norm will be calculated across all dimensions of `input`. If the norm type indicated by `p` does not support the specified number of dimensions, an error will occur. * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – whether the output tensors have `dim` retained or not. Ignored if `dim` = `None` and `out` = `None`. Default: `False` * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Ignored if `dim` = `None` and `out` = `None`. * **dtype** ([`torch.dtype`](../tensor_attributes#torch.torch.dtype "torch.torch.dtype"), optional) – the desired data type of the returned tensor. If specified, the input tensor is cast to `dtype` while performing the operation. Default: None. Note Even though `p='fro'` supports any number of dimensions, the true mathematical definition of Frobenius norm only applies to tensors with exactly two dimensions. [`torch.linalg.norm()`](../linalg#torch.linalg.norm "torch.linalg.norm") with `ord='fro'` aligns with the mathematical definition, since it can only be applied across exactly two dimensions. Example:

```
>>> import torch
>>> a = torch.arange(9, dtype= torch.float) - 4
>>> b = a.reshape((3, 3))
>>> torch.norm(a)
tensor(7.7460)
>>> torch.norm(b)
tensor(7.7460)
>>> torch.norm(a, float('inf'))
tensor(4.)
>>> torch.norm(b, float('inf'))
tensor(4.)
>>> c = torch.tensor([[ 1, 2, 3],[-1, 1, 4]] , dtype= torch.float)
>>> torch.norm(c, dim=0)
tensor([1.4142, 2.2361, 5.0000])
>>> torch.norm(c, dim=1)
tensor([3.7417, 4.2426])
>>> torch.norm(c, p=1, dim=1)
tensor([6., 6.])
>>> d = torch.arange(8, dtype= torch.float).reshape(2,2,2)
>>> torch.norm(d, dim=(1,2))
tensor([ 3.7417, 11.2250])
>>> torch.norm(d[0, :, :]), torch.norm(d[1, :, :])
(tensor(3.7417), tensor(11.2250))
```

pytorch Generator Generator ========= `class torch.Generator(device='cpu') → Generator` Creates and returns a generator object that manages the state of the algorithm which produces pseudo random numbers. Used as a keyword argument in many [In-place random sampling](../torch#inplace-random-sampling) functions. Parameters **device** ([`torch.device`](../tensor_attributes#torch.torch.device "torch.torch.device"), optional) – the desired device for the generator. Returns A torch.Generator object. Return type [Generator](#torch.Generator "torch.Generator") Example:

```
>>> g_cpu = torch.Generator()
>>> g_cuda = torch.Generator(device='cuda')
```

`device` Generator.device -> device Gets the current device of the generator. Example:

```
>>> g_cpu = torch.Generator()
>>> g_cpu.device
device(type='cpu')
```

`get_state() → Tensor` Returns the Generator state as a `torch.ByteTensor`. Returns A `torch.ByteTensor` which contains all the necessary bits to restore a Generator to a specific point in time. Return type [Tensor](../tensors#torch.Tensor "torch.Tensor") Example:

```
>>> g_cpu = torch.Generator()
>>> g_cpu.get_state()
```

`initial_seed() → int` Returns the initial seed for generating random numbers. Example:

```
>>> g_cpu = torch.Generator()
>>> g_cpu.initial_seed()
2147483647
```

`manual_seed(seed) → Generator` Sets the seed for generating random numbers. Returns a `torch.Generator` object. It is recommended to set a large seed, i.e. a number that has a good balance of 0 and 1 bits. Avoid having many 0 bits in the seed. Parameters **seed** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The desired seed. Value must be within the inclusive range `[-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]`. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula `0xffff_ffff_ffff_ffff + seed`. Returns A torch.Generator object. Return type [Generator](#torch.Generator "torch.Generator") Example:

```
>>> g_cpu = torch.Generator()
>>> g_cpu.manual_seed(2147483647)
```

`seed() → int` Gets a non-deterministic random number from std::random\_device or the current time and uses it to seed a Generator. Example:

```
>>> g_cpu = torch.Generator()
>>> g_cpu.seed()
1516516984916
```

`set_state(new_state) → void` Sets the Generator state. Parameters **new\_state** (*torch.ByteTensor*) – The desired state.
Example:

```
>>> g_cpu = torch.Generator()
>>> g_cpu_other = torch.Generator()
>>> g_cpu.set_state(g_cpu_other.get_state())
```

pytorch torch.isclose torch.isclose ============= `torch.isclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor` Returns a new tensor with boolean elements representing if each element of `input` is "close" to the corresponding element of `other`. Closeness is defined as: \lvert \text{input} - \text{other} \rvert \leq \texttt{atol} + \texttt{rtol} \times \lvert \text{other} \rvert where `input` and `other` are finite. Where `input` and/or `other` are nonfinite they are close if and only if they are equal, with NaNs being considered equal to each other when `equal_nan` is True. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – first tensor to compare * **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – second tensor to compare * **atol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – absolute tolerance. Default: 1e-08 * **rtol** ([float](https://docs.python.org/3/library/functions.html#float "(in Python v3.9)")*,* *optional*) – relative tolerance. Default: 1e-05 * **equal\_nan** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – if `True`, then two `NaN` s will be considered equal. Default: `False` Examples:

```
>>> torch.isclose(torch.tensor((1., 2, 3)), torch.tensor((1 + 1e-10, 3, 4)))
tensor([ True, False, False])
>>> torch.isclose(torch.tensor((float('inf'), 4)), torch.tensor((float('inf'), 6)), rtol=.5)
tensor([True, True])
```

pytorch torch.kthvalue torch.kthvalue ============== `torch.kthvalue(input, k, dim=None, keepdim=False, *, out=None) -> (Tensor, LongTensor)` Returns a namedtuple `(values, indices)` where `values` is the `k`-th smallest element of each row of the `input` tensor in the given dimension `dim`, and `indices` is the index location of each element found. If `dim` is not given, the last dimension of the `input` is chosen. If `keepdim` is `True`, both the `values` and `indices` tensors are the same size as `input`, except in the dimension `dim` where they are of size 1. Otherwise, `dim` is squeezed (see [`torch.squeeze()`](torch.squeeze#torch.squeeze "torch.squeeze")), resulting in both the `values` and `indices` tensors having 1 fewer dimension than the `input` tensor. Note When `input` is a CUDA tensor and there are multiple valid `k`-th values, this function may nondeterministically return `indices` for any of them. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **k** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – k for the k-th smallest element * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – the dimension to find the kth value along * **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not. Keyword Arguments **out** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – the output tuple of (Tensor, LongTensor) can be optionally given to be used as output buffers Example:

```
>>> x = torch.arange(1., 6.)
>>> x
tensor([ 1.,  2.,  3.,  4.,  5.])
>>> torch.kthvalue(x, 4)
torch.return_types.kthvalue(values=tensor(4.), indices=tensor(3))

>>> x = torch.arange(1., 7.).resize_(2, 3)
>>> x
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.]])
>>> torch.kthvalue(x, 2, 0, True)
torch.return_types.kthvalue(values=tensor([[4., 5., 6.]]), indices=tensor([[1, 1, 1]]))
```

pytorch ELU ELU === `class torch.nn.ELU(alpha=1.0, inplace=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#ELU) Applies the element-wise function: \text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha \* (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases} Parameters * **alpha** – the \alpha value for the ELU formulation. Default: 1.0 * **inplace** – can optionally do the operation in-place. Default: `False` Shape: * Input: (N, \*) where `*` means any number of additional dimensions * Output: (N, \*), same shape as the input Examples:

```
>>> m = nn.ELU()
>>> input = torch.randn(2)
>>> output = m(input)
```

pytorch torch.index_select torch.index\_select =================== `torch.index_select(input, dim, index, *, out=None) → Tensor` Returns a new tensor which indexes the `input` tensor along dimension `dim` using the entries in `index` which is a `LongTensor`. The returned tensor has the same number of dimensions as the original tensor (`input`). The `dim`th dimension has the same size as the length of `index`; other dimensions have the same size as in the original tensor. Note The returned tensor does **not** use the same storage as the original tensor. If `out` has a different shape than expected, we silently change it to the correct shape, reallocating the underlying storage if necessary. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension in which we index * **index** (*IntTensor* *or* *LongTensor*) – the 1-D tensor containing the indices to index Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example:

```
>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.1427,  0.0231, -0.5414, -1.0009],
        [-0.4664,  0.2647, -0.1228, -1.1068],
        [-1.1734, -0.6571,  0.7230, -0.6004]])
>>> indices = torch.tensor([0, 2])
>>> torch.index_select(x, 0, indices)
tensor([[ 0.1427,  0.0231, -0.5414, -1.0009],
        [-1.1734, -0.6571,  0.7230, -0.6004]])
>>> torch.index_select(x, 1, indices)
tensor([[ 0.1427, -0.5414],
        [-0.4664, -0.1228],
        [-1.1734,  0.7230]])
```
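As a point of comparison (an illustrative sketch, not part of the original reference), `torch.index_select(x, dim, indices)` returns the same values as advanced indexing along that dimension; the difference noted above is that the result never shares storage with `x`:

```
import torch

x = torch.arange(12.).reshape(3, 4)
indices = torch.tensor([0, 2])

# Selecting rows: index_select along dim 0 equals x[indices]
assert torch.equal(torch.index_select(x, 0, indices), x[indices])

# Selecting columns: index_select along dim 1 equals x[:, indices]
assert torch.equal(torch.index_select(x, 1, indices), x[:, indices])
```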
pytorch torch.jit.fork torch.jit.fork ============== `torch.jit.fork(func, *args, **kwargs)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/jit/_async.html#fork) Creates an asynchronous task executing `func` and a reference to the value of the result of this execution. `fork` will return immediately, so the return value of `func` may not have been computed yet. To force completion of the task and access the return value, invoke `torch.jit.wait` on the Future. `fork` invoked with a `func` which returns `T` is typed as `torch.jit.Future[T]`. `fork` calls can be arbitrarily nested, and may be invoked with positional and keyword arguments. Asynchronous execution will only occur when run in TorchScript. If run in pure Python, `fork` will not execute in parallel. `fork` will also not execute in parallel when invoked while tracing, however the `fork` and `wait` calls will be captured in the exported IR Graph. Warning `fork` tasks will execute non-deterministically. We recommend only spawning parallel fork tasks for pure functions that do not modify their inputs, module attributes, or global state. Parameters * **func** (*callable* *or* [torch.nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – A Python function or `torch.nn.Module` that will be invoked. If executed in TorchScript, it will execute asynchronously, otherwise it will not. Traced invocations of fork will be captured in the IR. * ***args**, **\*\*kwargs** – arguments to invoke `func` with. Returns a reference to the execution of `func`. The value `T` can only be accessed by forcing completion of `func` through `torch.jit.wait`. Return type `torch.jit.Future[T]` Example (fork a free function):

```
import torch
from torch import Tensor

def foo(a : Tensor, b : int) -> Tensor:
    return a + b

def bar(a):
    fut : torch.jit.Future[Tensor] = torch.jit.fork(foo, a, b=2)
    return torch.jit.wait(fut)

script_bar = torch.jit.script(bar)
input = torch.tensor(2)
# only the scripted version executes asynchronously
assert script_bar(input) == bar(input)
# trace is not run asynchronously, but fork is captured in IR
graph = torch.jit.trace(bar, (input,)).graph
assert "fork" in str(graph)
```

Example (fork a module method):

```
import torch
from torch import Tensor

class AddMod(torch.nn.Module):
    def forward(self, a: Tensor, b : int):
        return a + b

class Mod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.mod = AddMod()

    def forward(self, input):
        # fork the submodule's forward on the current input
        fut = torch.jit.fork(self.mod, input, b=2)
        return torch.jit.wait(fut)

input = torch.tensor(2)
mod = Mod()
assert mod(input) == torch.jit.script(mod).forward(input)
```

pytorch torch.unique torch.unique ============ `torch.unique(*args, **kwargs)` Returns the unique elements of the input tensor. Note This function is different from [`torch.unique_consecutive()`](torch.unique_consecutive#torch.unique_consecutive "torch.unique_consecutive") in the sense that this function also eliminates non-consecutive duplicate values. Note Currently, in the CUDA implementation and in the CPU implementation when dim is specified, `torch.unique` always sorts the tensor at the beginning regardless of the `sort` argument. Sorting could be slow, so if your input tensor is already sorted, it is recommended to use [`torch.unique_consecutive()`](torch.unique_consecutive#torch.unique_consecutive "torch.unique_consecutive") which avoids the sorting.
Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor * **sorted** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to sort the unique elements in ascending order before returning as output. * **return\_inverse** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to also return the indices for where elements in the original input ended up in the returned unique list. * **return\_counts** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Whether to also return the counts for each unique element. * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to apply unique. If `None`, the unique of the flattened input is returned. default: `None` Returns A tensor or a tuple of tensors containing * **output** (*Tensor*): the output list of unique scalar elements. * **inverse\_indices** (*Tensor*): (optional) if `return_inverse` is True, there will be an additional returned tensor (same shape as input) representing the indices for where elements in the original input map to in the output; otherwise, this function will only return a single tensor. * **counts** (*Tensor*): (optional) if `return_counts` is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor. Return type ([Tensor](../tensors#torch.Tensor "torch.Tensor"), [Tensor](../tensors#torch.Tensor "torch.Tensor") (optional), [Tensor](../tensors#torch.Tensor "torch.Tensor") (optional)) Example: ``` >>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long)) >>> output tensor([ 2, 3, 1]) >>> output, inverse_indices = torch.unique( ... torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True) >>> output tensor([ 1, 2, 3]) >>> inverse_indices tensor([ 0, 2, 1, 2]) >>> output, inverse_indices = torch.unique( ... torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True) >>> output tensor([ 1, 2, 3]) >>> inverse_indices tensor([[ 0, 2], [ 1, 2]]) ``` pytorch MaxUnpool3d MaxUnpool3d =========== `class torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/pooling.html#MaxUnpool3d) Computes a partial inverse of [`MaxPool3d`](torch.nn.maxpool3d#torch.nn.MaxPool3d "torch.nn.MaxPool3d"). [`MaxPool3d`](torch.nn.maxpool3d#torch.nn.MaxPool3d "torch.nn.MaxPool3d") is not fully invertible, since the non-maximal values are lost. [`MaxUnpool3d`](#torch.nn.MaxUnpool3d "torch.nn.MaxUnpool3d") takes in as input the output of [`MaxPool3d`](torch.nn.maxpool3d#torch.nn.MaxPool3d "torch.nn.MaxPool3d") including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero. Note [`MaxPool3d`](torch.nn.maxpool3d#torch.nn.MaxPool3d "torch.nn.MaxPool3d") can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument `output_size` in the forward call. See the Inputs section below. Parameters * **kernel\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Size of the max pooling window. 
* **stride** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Stride of the max pooling window. It is set to `kernel_size` by default. * **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Padding that was added to the input. Inputs: * `input`: the input Tensor to invert * `indices`: the indices given out by [`MaxPool3d`](torch.nn.maxpool3d#torch.nn.MaxPool3d "torch.nn.MaxPool3d") * `output_size` (optional): the targeted output size Shape: * Input: (N, C, D\_{in}, H\_{in}, W\_{in}) * Output: (N, C, D\_{out}, H\_{out}, W\_{out}), where D\_{out} = (D\_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]} + \text{kernel\\_size[0]} H\_{out} = (H\_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]} + \text{kernel\\_size[1]} W\_{out} = (W\_{in} - 1) \times \text{stride[2]} - 2 \times \text{padding[2]} + \text{kernel\\_size[2]} or as given by `output_size` in the call operator Example:

```
>>> # pool of square window of size=3, stride=2
>>> pool = nn.MaxPool3d(3, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool3d(3, stride=2)
>>> output, indices = pool(torch.randn(20, 16, 51, 33, 15))
>>> unpooled_output = unpool(output, indices)
>>> unpooled_output.size()
torch.Size([20, 16, 51, 33, 15])
```

pytorch MultiLabelMarginLoss MultiLabelMarginLoss ==================== `class torch.nn.MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/loss.html#MultiLabelMarginLoss) Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch `Tensor`) and output y (which is a 2D `Tensor` of target class indices). For each sample in the mini-batch: \text{loss}(x, y) = \sum\_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)} where x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}, y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}, 0 \leq y[j] \leq \text{x.size}(0)-1, and i \neq y[j] for all i and j. y and x must have the same size. The criterion only considers a contiguous block of non-negative targets that starts at the front. This allows for different samples to have variable amounts of target classes. Parameters * **size\_average** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True` * **reduce** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True` * **reduction** (*string**,* *optional*) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'` Shape: * Input: (C) or (N, C) where `N` is the batch size and `C` is the number of classes. * Target: (C) or (N, C), label targets padded by -1 ensuring same shape as the input. * Output: scalar. If `reduction` is `'none'`, then (N). Examples:

```
>>> loss = nn.MultiLabelMarginLoss()
>>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
>>> # for target y, only consider labels 3 and 0, not after label -1
>>> y = torch.LongTensor([[3, 0, -1, 1]])
>>> loss(x, y)
>>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
tensor(0.8500)
```

pytorch torch.tile torch.tile ========== `torch.tile(input, reps) → Tensor` Constructs a tensor by repeating the elements of `input`. The `reps` argument specifies the number of repetitions in each dimension. If `reps` specifies fewer dimensions than `input` has, then ones are prepended to `reps` until all dimensions are specified. For example, if `input` has shape (8, 6, 4, 2) and `reps` is (2, 2), then `reps` is treated as (1, 1, 2, 2). Analogously, if `input` has fewer dimensions than `reps` specifies, then `input` is treated as if it were unsqueezed at dimension zero until it has as many dimensions as `reps` specifies. For example, if `input` has shape (4, 2) and `reps` is (3, 3, 2, 2), then `input` is treated as if it had the shape (1, 1, 4, 2). Note This function is similar to NumPy's tile function. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor whose elements to repeat. * **reps** ([tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – the number of repetitions per dimension. Example:

```
>>> x = torch.tensor([1, 2, 3])
>>> x.tile((2,))
tensor([1, 2, 3, 1, 2, 3])
>>> y = torch.tensor([[1, 2], [3, 4]])
>>> torch.tile(y, (2, 2))
tensor([[1, 2, 1, 2],
        [3, 4, 3, 4],
        [1, 2, 1, 2],
        [3, 4, 3, 4]])
```

pytorch torch.set_flush_denormal torch.set\_flush\_denormal ========================== `torch.set_flush_denormal(mode) → bool` Disables denormal floating numbers on CPU. Returns `True` if your system supports flushing denormal numbers and it successfully configures flush denormal mode. [`set_flush_denormal()`](#torch.set_flush_denormal "torch.set_flush_denormal") is only supported on x86 architectures supporting SSE3. Parameters **mode** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – Controls whether to enable flush denormal mode or not Example:

```
>>> torch.set_flush_denormal(True)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor([ 0.], dtype=torch.float64)
>>> torch.set_flush_denormal(False)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor(9.88131e-324 *
       [ 1.0000], dtype=torch.float64)
```

pytorch torch.addmv torch.addmv =========== `torch.addmv(input, mat, vec, *, beta=1, alpha=1, out=None) → Tensor` Performs a matrix-vector product of the matrix `mat` and the vector `vec`. The vector `input` is added to the final result.
If `mat` is a (n \times m) tensor and `vec` is a 1-D tensor of size `m`, then `input` must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics) with a 1-D tensor of size `n` and `out` will be a 1-D tensor of size `n`. `alpha` and `beta` are scaling factors on the matrix-vector product between `mat` and `vec` and the added tensor `input` respectively. \text{out} = \beta\ \text{input} + \alpha\ (\text{mat} \mathbin{@} \text{vec}) If `beta` is 0, then `input` will be ignored, and `nan` and `inf` in it will not be propagated. For inputs of type `FloatTensor` or `DoubleTensor`, arguments `beta` and `alpha` must be real numbers, otherwise they should be integers. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – vector to be added * **mat** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – matrix to be matrix multiplied * **vec** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – vector to be matrix multiplied Keyword Arguments * **beta** (*Number**,* *optional*) – multiplier for `input` (\beta) * **alpha** (*Number**,* *optional*) – multiplier for mat @ vec (\alpha) * **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example:

```
>>> M = torch.randn(2)
>>> mat = torch.randn(2, 3)
>>> vec = torch.randn(3)
>>> torch.addmv(M, mat, vec)
tensor([-0.3768, -5.5565])
```

pytorch torch.floor torch.floor =========== `torch.floor(input, *, out=None) → Tensor` Returns a new tensor with the floor of the elements of `input`, the largest integer less than or equal to each element. \text{out}\_{i} = \left\lfloor \text{input}\_{i} \right\rfloor Parameters **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. Keyword Arguments **out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor. Example:

```
>>> a = torch.randn(4)
>>> a
tensor([-0.8166,  1.5308, -0.2530, -0.2091])
>>> torch.floor(a)
tensor([-1.,  1., -1., -1.])
```

pytorch torch.manual_seed torch.manual\_seed ================== `torch.manual_seed(seed)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/random.html#manual_seed) Sets the seed for generating random numbers. Returns a `torch.Generator` object. Parameters **seed** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – The desired seed. Value must be within the inclusive range `[-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]`. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula `0xffff_ffff_ffff_ffff + seed`. pytorch torch.narrow torch.narrow ============ `torch.narrow(input, dim, start, length) → Tensor` Returns a new tensor that is a narrowed version of the `input` tensor. Along dimension `dim`, the output contains the elements from index `start` up to (but not including) `start + length`. The returned tensor and the `input` tensor share the same underlying storage. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the tensor to narrow * **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension along which to narrow * **start** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the index at which the narrowed slice starts * **length** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the number of elements in the narrowed slice Example:

```
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> torch.narrow(x, 0, 0, 2)
tensor([[ 1,  2,  3],
        [ 4,  5,  6]])
>>> torch.narrow(x, 1, 1, 2)
tensor([[ 2,  3],
        [ 5,  6],
        [ 8,  9]])
```

pytorch BatchNorm3d BatchNorm3d =========== `class torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/batchnorm.html#BatchNorm3d) Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167). y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} \* \gamma + \beta The mean and standard-deviation are calculated per-dimension over the mini-batches, and \gamma and \beta are learnable parameter vectors of size `C` (where `C` is the input size). By default, the elements of \gamma are set to 1 and the elements of \beta are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to `torch.var(input, unbiased=False)`. Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default `momentum` of 0.1. If `track_running_stats` is set to `False`, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. Note This `momentum` argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}\_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x\_t, where \hat{x} is the estimated statistic and x\_t is the new observed value. Because the Batch Normalization is done over the `C` dimension, computing statistics on `(N, D, H, W)` slices, it's common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization. Parameters * **num\_features** – C from an expected input of size (N, C, D, H, W) * **eps** – a value added to the denominator for numerical stability. Default: 1e-5 * **momentum** – the value used for the running\_mean and running\_var computation. Can be set to `None` for cumulative moving average (i.e. simple average). Default: 0.1 * **affine** – a boolean value that when set to `True`, this module has learnable affine parameters. Default: `True` * **track\_running\_stats** – a boolean value that when set to `True`, this module tracks the running mean and variance, and when set to `False`, this module does not track such statistics, and initializes statistics buffers `running_mean` and `running_var` as `None`. When these buffers are `None`, this module always uses batch statistics in both training and eval modes.
Default: `True` Shape: * Input: (N, C, D, H, W) * Output: (N, C, D, H, W) (same shape as input) Examples:

```
>>> # With Learnable Parameters
>>> m = nn.BatchNorm3d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm3d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)
```
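To make the running-statistics update rule from the Note above concrete, here is an illustrative sketch (the seed, channel count, and input shape are arbitrary assumptions): after one training-mode forward pass, `running_mean` should equal (1 - momentum) * 0 + momentum * batch_mean, where the batch mean is taken over the (N, D, H, W) slices of each channel:

```
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.BatchNorm3d(4, momentum=0.1)   # running_mean starts at 0, running_var at 1
x = torch.randn(2, 4, 3, 3, 3)

m.train()
_ = m(x)

# Per-channel batch mean over the (N, D, H, W) dimensions
batch_mean = x.mean(dim=(0, 2, 3, 4))
expected = (1 - 0.1) * torch.zeros(4) + 0.1 * batch_mean
assert torch.allclose(m.running_mean, expected, atol=1e-6)
```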
pytorch torch.broadcast_to torch.broadcast\_to =================== `torch.broadcast_to(input, shape) → Tensor` Broadcasts `input` to the shape `shape`. Equivalent to calling `input.expand(shape)`. See [`expand()`](../tensors#torch.Tensor.expand "torch.Tensor.expand") for details. Parameters * **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor. * **shape** (list, tuple, or `torch.Size`) – the new shape. Example: ``` >>> x = torch.tensor([1, 2, 3]) >>> torch.broadcast_to(x, (3, 3)) tensor([[1, 2, 3], [1, 2, 3], [1, 2, 3]]) ``` pytorch torch.set_num_threads torch.set\_num\_threads ======================= `torch.set_num_threads(int)` Sets the number of threads used for intraop parallelism on CPU. Warning To ensure that the correct number of threads is used, set\_num\_threads must be called before running eager, JIT or autograd code. pytorch InstanceNorm2d InstanceNorm2d ============== `class torch.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/instancenorm.html#InstanceNorm2d) Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022). y=x−E[x]Var[x]+ϵ∗γ+βy = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} \* \gamma + \beta The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. γ\gamma and β\beta are learnable parameter vectors of size `C` (where `C` is the input size) if `affine` is `True`. The standard-deviation is calculated via the biased estimator, equivalent to `torch.var(input, unbiased=False)`. By default, this layer uses instance statistics computed from input data in both training and evaluation modes. If `track_running_stats` is set to `True`, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default `momentum` of 0.1. Note This `momentum` argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is x^new=(1−momentum)×x^+momentum×xt\hat{x}\_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x\_t , where x^\hat{x} is the estimated statistic and xtx\_t is the new observed value. Note [`InstanceNorm2d`](#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d") and [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") are very similar, but have some subtle differences. [`InstanceNorm2d`](#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d") is applied on each channel of channeled data like RGB images, but [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") is usually applied on entire sample and often in NLP tasks. Additionally, [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") applies elementwise affine transform, while [`InstanceNorm2d`](#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d") usually don’t apply affine transform. Parameters * **num\_features** – CC from an expected input of size (N,C,H,W)(N, C, H, W) * **eps** – a value added to the denominator for numerical stability. Default: 1e-5 * **momentum** – the value used for the running\_mean and running\_var computation. 
InstanceNorm2d
==============

`class torch.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/instancenorm.html#InstanceNorm2d)

Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022).

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta

The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \gamma and \beta are learnable parameter vectors of size `C` (where `C` is the input size) if `affine` is `True`. The standard-deviation is calculated via the biased estimator, equivalent to `torch.var(input, unbiased=False)`.

By default, this layer uses instance statistics computed from input data in both training and evaluation modes. If `track_running_stats` is set to `True`, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default `momentum` of 0.1.

Note

This `momentum` argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.

Note

[`InstanceNorm2d`](#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d") and [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") are very similar, but have some subtle differences. [`InstanceNorm2d`](#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d") is applied on each channel of channeled data like RGB images, but [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") is usually applied over an entire sample, often in NLP tasks. Additionally, [`LayerNorm`](torch.nn.layernorm#torch.nn.LayerNorm "torch.nn.LayerNorm") applies an elementwise affine transform, while [`InstanceNorm2d`](#torch.nn.InstanceNorm2d "torch.nn.InstanceNorm2d") usually doesn't apply an affine transform.

Parameters

* **num\_features** – C from an expected input of size (N, C, H, W)
* **eps** – a value added to the denominator for numerical stability. Default: 1e-5
* **momentum** – the value used for the running\_mean and running\_var computation. Default: 0.1
* **affine** – a boolean value that, when set to `True`, gives this module learnable affine parameters, initialized the same way as for batch normalization. Default: `False`.
* **track\_running\_stats** – a boolean value that, when set to `True`, makes this module track the running mean and variance; when set to `False`, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: `False`

Shape:

* Input: (N, C, H, W)
* Output: (N, C, H, W) (same shape as input)

Examples:

```
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm2d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm2d(100, affine=True)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input)
```
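To make the "biased estimator" and the per-instance, per-channel statistics concrete, here is a small verification sketch (not from the original page; it assumes only the documented default behavior and the module's `eps` attribute):

```
import torch
import torch.nn as nn

x = torch.randn(2, 3, 4, 4)   # (N, C, H, W)
m = nn.InstanceNorm2d(3)      # defaults: no affine, no running stats
out = m(x)

# Statistics are computed over the spatial dims (H, W), separately
# for every (sample, channel) pair, using the biased variance.
mean = x.mean(dim=(2, 3), keepdim=True)
var = x.var(dim=(2, 3), unbiased=False, keepdim=True)
manual = (x - mean) / torch.sqrt(var + m.eps)

print(torch.allclose(out, manual, atol=1e-5))  # True
```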
torch.atan2
===========

`torch.atan2(input, other, *, out=None) → Tensor`

Element-wise arctangent of \text{input}_i / \text{other}_i with consideration of the quadrant. Returns a new tensor with the signed angles in radians between vector (\text{other}_i, \text{input}_i) and vector (1, 0). (Note that \text{other}_i, the second parameter, is the x-coordinate, while \text{input}_i, the first parameter, is the y-coordinate.)

The shapes of `input` and `other` must be [broadcastable](https://pytorch.org/docs/1.8.0/notes/broadcasting.html#broadcasting-semantics).

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the first input tensor
* **other** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the second input tensor

Keyword Arguments

**out** ([Tensor](../tensors#torch.Tensor "torch.Tensor")*,* *optional*) – the output tensor.

Example:

```
>>> a = torch.randn(4)
>>> a
tensor([ 0.9041,  0.0196, -0.3108, -2.4423])
>>> torch.atan2(a, torch.randn(4))
tensor([ 0.9833,  0.0811, -1.9743, -1.4151])
```
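The random example above doesn't show the quadrant handling explicitly, so here is a small illustrative sketch (not from the original page) with hand-picked points in all four quadrants:

```
import math
import torch

y = torch.tensor([1., 1., -1., -1.])   # input (y-coordinates)
x = torch.tensor([1., -1., 1., -1.])   # other (x-coordinates)

# atan2 uses the signs of both arguments to pick the quadrant;
# plain atan(y / x) would collapse quadrants II and IV onto I and III.
deg = torch.atan2(y, x) * 180 / math.pi
print(deg)  # tensor([  45.,  135.,  -45., -135.])
```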
torch.nn.utils.prune.custom\_from\_mask
=======================================

`torch.nn.utils.prune.custom_from_mask(module, name, mask)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/utils/prune.html#custom_from_mask)

Prunes the tensor corresponding to the parameter called `name` in `module` by applying the pre-computed mask in `mask`. Modifies the module in place (and also returns the modified module) by:

1. adding a named buffer called `name+'_mask'` corresponding to the binary mask applied to the parameter `name` by the pruning method.
2. replacing the parameter `name` by its pruned version, while the original (unpruned) parameter is stored in a new parameter named `name+'_orig'`.

Parameters

* **module** ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module")) – module containing the tensor to prune
* **name** ([str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.9)")) – parameter name within `module` on which pruning will act.
* **mask** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – binary mask to be applied to the parameter.

Returns

modified (i.e. pruned) version of the input module

Return type

module ([nn.Module](torch.nn.module#torch.nn.Module "torch.nn.Module"))

#### Examples

```
>>> m = prune.custom_from_mask(
...     nn.Linear(5, 3), name='bias', mask=torch.Tensor([0, 1, 0])
... )
>>> print(m.bias_mask)
tensor([0., 1., 0.])
```
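As a follow-up sketch (not part of the original page), the reparametrization described in steps 1 and 2 can be inspected directly: the effective `bias` is recomputed from `bias_orig` and `bias_mask`:

```
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

m = prune.custom_from_mask(
    nn.Linear(5, 3), name='bias', mask=torch.Tensor([0, 1, 0])
)

# 'bias' is now a plain attribute recomputed by a forward pre-hook;
# the trainable parameter is 'bias_orig', and 'bias_mask' is a buffer.
print([n for n, _ in m.named_parameters()])  # ['weight', 'bias_orig']
print([n for n, _ in m.named_buffers()])     # ['bias_mask']
print(torch.equal(m.bias, m.bias_orig * m.bias_mask))  # True
```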
LazyLinear
==========

`class torch.nn.LazyLinear(out_features, bias=True)` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/linear.html#LazyLinear)

A [`torch.nn.Linear`](torch.nn.linear#torch.nn.Linear "torch.nn.Linear") module with lazy initialization.

In this module, the `weight` and `bias` are of `torch.nn.UninitializedParameter` class. They will be initialized after the first call to `forward` is done and the module will become a regular [`torch.nn.Linear`](torch.nn.linear#torch.nn.Linear "torch.nn.Linear") module. Check [`torch.nn.modules.lazy.LazyModuleMixin`](torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin "torch.nn.modules.lazy.LazyModuleMixin") for further documentation on lazy modules and their limitations.

Parameters

* **out\_features** – size of each output sample
* **bias** – If set to `False`, the layer will not learn an additive bias. Default: `True`

Variables

* **~LazyLinear.weight** – the learnable weights of the module, of shape (\text{out\_features}, \text{in\_features}). The values are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = \frac{1}{\text{in\_features}}
* **~LazyLinear.bias** – the learnable bias of the module, of shape (\text{out\_features}). If `bias` is `True`, the values are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = \frac{1}{\text{in\_features}}

`cls_to_become`

alias of [`Linear`](torch.nn.linear#torch.nn.Linear "torch.nn.Linear")
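The page has no example; a minimal sketch of the lazy-initialization flow, assuming only the behavior documented above, could be:

```
import torch
import torch.nn as nn

m = nn.LazyLinear(out_features=4)
# Before the first forward call, weight and bias are uninitialized
# placeholders with no shape yet.
print(m.weight)

x = torch.randn(8, 16)
y = m(x)                 # in_features=16 is inferred from the input

print(m.weight.shape)    # torch.Size([4, 16])
print(type(m))           # per the page, now a regular nn.Linear (cls_to_become)
```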
LazyConvTranspose2d
===================

`class torch.nn.LazyConvTranspose2d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/conv.html#LazyConvTranspose2d)

A [`torch.nn.ConvTranspose2d`](torch.nn.convtranspose2d#torch.nn.ConvTranspose2d "torch.nn.ConvTranspose2d") module with lazy initialization of the `in_channels` argument of the [`ConvTranspose2d`](torch.nn.convtranspose2d#torch.nn.ConvTranspose2d "torch.nn.ConvTranspose2d"), which is inferred from `input.size(1)`.

Parameters

* **out\_channels** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – Number of channels produced by the convolution
* **kernel\_size** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – Size of the convolving kernel
* **stride** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Stride of the convolution. Default: 1
* **padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – `dilation * (kernel_size - 1) - padding` zero-padding will be added to both sides of each dimension in the input. Default: 0
* **output\_padding** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Additional size added to one side of each dimension in the output shape. Default: 0
* **groups** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")*,* *optional*) – Number of blocked connections from input channels to output channels. Default: 1
* **bias** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")*,* *optional*) – If `True`, adds a learnable bias to the output. Default: `True`
* **dilation** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)") *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")*,* *optional*) – Spacing between kernel elements. Default: 1

See also

[`torch.nn.ConvTranspose2d`](torch.nn.convtranspose2d#torch.nn.ConvTranspose2d "torch.nn.ConvTranspose2d") and [`torch.nn.modules.lazy.LazyModuleMixin`](torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin "torch.nn.modules.lazy.LazyModuleMixin")

`cls_to_become`

alias of [`ConvTranspose2d`](torch.nn.convtranspose2d#torch.nn.ConvTranspose2d "torch.nn.ConvTranspose2d")
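Again no example is given on the page itself; a short sketch of the shape inference, assuming only the documented behavior, might be:

```
import torch
import torch.nn as nn

m = nn.LazyConvTranspose2d(out_channels=8, kernel_size=3)

x = torch.randn(1, 5, 10, 10)   # in_channels=5 will be read from input.size(1)
y = m(x)

# ConvTranspose2d stores its kernel as (in_channels, out_channels/groups, kH, kW)
print(m.weight.shape)  # torch.Size([5, 8, 3, 3])
print(y.shape)         # torch.Size([1, 8, 12, 12])
```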
torch.is\_nonzero
=================

`torch.is_nonzero(input) -> (bool)`

Returns True if the `input` is a single-element tensor which is not equal to zero after type conversions, i.e. not equal to `torch.tensor([0.])` or `torch.tensor([0])` or `torch.tensor([False])`. Throws a `RuntimeError` if `input.numel() != 1` (even in case of sparse tensors).

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Examples:

```
>>> torch.is_nonzero(torch.tensor([0.]))
False
>>> torch.is_nonzero(torch.tensor([1.5]))
True
>>> torch.is_nonzero(torch.tensor([False]))
False
>>> torch.is_nonzero(torch.tensor([3]))
True
>>> torch.is_nonzero(torch.tensor([1, 3, 5]))
Traceback (most recent call last):
...
RuntimeError: bool value of Tensor with more than one value is ambiguous
>>> torch.is_nonzero(torch.tensor([]))
Traceback (most recent call last):
...
RuntimeError: bool value of Tensor with no values is ambiguous
```

torch.flip
==========

`torch.flip(input, dims) → Tensor`

Reverses the order of an n-D tensor along the given axes in `dims`.

Note

`torch.flip` makes a copy of `input`'s data. This is different from NumPy's `np.flip`, which returns a view in constant time. Since copying a tensor's data is more work than viewing that data, `torch.flip` is expected to be slower than `np.flip`.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dims** (*a list* *or* [tuple](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.9)")) – axes to flip on

Example:

```
>>> x = torch.arange(8).view(2, 2, 2)
>>> x
tensor([[[ 0,  1],
         [ 2,  3]],

        [[ 4,  5],
         [ 6,  7]]])
>>> torch.flip(x, [0, 1])
tensor([[[ 6,  7],
         [ 4,  5]],

        [[ 2,  3],
         [ 0,  1]]])
```

LogSigmoid
==========

`class torch.nn.LogSigmoid` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/activation.html#LogSigmoid)

Applies the element-wise function:

\text{LogSigmoid}(x) = \log\left(\frac{1}{1 + \exp(-x)}\right)

Shape:

* Input: (N, *) where `*` means any number of additional dimensions
* Output: (N, *), same shape as the input

Examples:

```
>>> m = nn.LogSigmoid()
>>> input = torch.randn(2)
>>> output = m(input)
```

torch.solve
===========

`torch.solve(input, A, *, out=None) -> (Tensor, Tensor)`

This function returns the solution to the system of linear equations represented by AX = B and the LU factorization of A, in order, as a namedtuple `solution, LU`.

`LU` contains the `L` and `U` factors for the LU factorization of `A`.

`torch.solve(B, A)` can take in 2D inputs `B, A` or inputs that are batches of 2D matrices. If the inputs are batches, it returns batched outputs `solution, LU`.

Supports real-valued and complex-valued inputs.

Note

Irrespective of the original strides, the returned matrices `solution` and `LU` will be transposed, i.e. with strides like `B.contiguous().transpose(-1, -2).stride()` and `A.contiguous().transpose(-1, -2).stride()` respectively.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – input matrix B of size (*, m, k), where * is zero or more batch dimensions.
* **A** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – input square matrix of size (*, m, m), where * is zero or more batch dimensions.

Keyword Arguments

**out** (*(*[Tensor](../tensors#torch.Tensor "torch.Tensor")*,* [Tensor](../tensors#torch.Tensor "torch.Tensor")*)**,* *optional*) – optional output tuple.

Example:

```
>>> A = torch.tensor([[6.80, -2.11,  5.66,  5.97,  8.23],
...                   [-6.05, -3.30,  5.36, -4.44,  1.08],
...                   [-0.45,  2.58, -2.70,  0.27,  9.04],
...                   [8.32,  2.71,  4.35, -7.17,  2.14],
...                   [-9.67, -5.14, -7.26,  6.08, -6.87]]).t()
>>> B = torch.tensor([[4.02,  6.19, -8.22, -7.57, -3.03],
...                   [-1.56,  4.00, -8.67,  1.75,  2.86],
...                   [9.81, -4.09, -4.57, -8.61,  8.99]]).t()
>>> X, LU = torch.solve(B, A)
>>> torch.dist(B, torch.mm(A, X))
tensor(1.00000e-06 * 7.0977)

>>> # Batched solver example
>>> A = torch.randn(2, 3, 1, 4, 4)
>>> B = torch.randn(2, 3, 1, 4, 6)
>>> X, LU = torch.solve(B, A)
>>> torch.dist(B, A.matmul(X))
tensor(1.00000e-06 * 3.6386)
```

torch.nanmedian
===============

`torch.nanmedian(input) → Tensor`

Returns the median of the values in `input`, ignoring `NaN` values.

This function is identical to [`torch.median()`](torch.median#torch.median "torch.median") when there are no `NaN` values in `input`. When `input` has one or more `NaN` values, [`torch.median()`](torch.median#torch.median "torch.median") will always return `NaN`, while this function will return the median of the non-`NaN` elements in `input`. If all the elements in `input` are `NaN`, it will also return `NaN`.

Parameters

**input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.

Example:

```
>>> a = torch.tensor([1, float('nan'), 3, 2])
>>> a.median()
tensor(nan)
>>> a.nanmedian()
tensor(2.)
```

`torch.nanmedian(input, dim=-1, keepdim=False, *, out=None) -> (Tensor, LongTensor)`

Returns a namedtuple `(values, indices)` where `values` contains the median of each row of `input` in the dimension `dim`, ignoring `NaN` values, and `indices` contains the index of the median values found in the dimension `dim`.

This function is identical to [`torch.median()`](torch.median#torch.median "torch.median") when there are no `NaN` values in a reduced row. When a reduced row has one or more `NaN` values, [`torch.median()`](torch.median#torch.median "torch.median") will always reduce it to `NaN`, while this function will reduce it to the median of the non-`NaN` elements. If all the elements in a reduced row are `NaN`, then it will be reduced to `NaN`, too.

Parameters

* **input** ([Tensor](../tensors#torch.Tensor "torch.Tensor")) – the input tensor.
* **dim** ([int](https://docs.python.org/3/library/functions.html#int "(in Python v3.9)")) – the dimension to reduce.
* **keepdim** ([bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.9)")) – whether the output tensor has `dim` retained or not.

Keyword Arguments

**out** (*(*[Tensor](../tensors#torch.Tensor "torch.Tensor")*,* [Tensor](../tensors#torch.Tensor "torch.Tensor")*)**,* *optional*) – The first tensor will be populated with the median values and the second tensor, which must have dtype long, with their indices in the dimension `dim` of `input`.

Example:

```
>>> a = torch.tensor([[2, 3, 1], [float('nan'), 1, float('nan')]])
>>> a
tensor([[2., 3., 1.],
        [nan, 1., nan]])
>>> a.median(0)
torch.return_types.median(values=tensor([nan, 1., nan]), indices=tensor([1, 1, 1]))
>>> a.nanmedian(0)
torch.return_types.nanmedian(values=tensor([2., 1., 1.]), indices=tensor([0, 1, 0]))
```

RNNCell
=======

`class torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh')` [[source]](https://pytorch.org/docs/1.8.0/_modules/torch/nn/modules/rnn.html#RNNCell)

An Elman RNN cell with tanh or ReLU non-linearity.

h' = \tanh(W_{ih} x + b_{ih} + W_{hh} h + b_{hh})

If `nonlinearity` is `'relu'`, then ReLU is used in place of tanh.

Parameters

* **input\_size** – The number of expected features in the input `x`
* **hidden\_size** – The number of features in the hidden state `h`
* **bias** – If `False`, then the layer does not use bias weights `b_ih` and `b_hh`. Default: `True`
* **nonlinearity** – The non-linearity to use. Can be either `'tanh'` or `'relu'`. Default: `'tanh'`

Inputs: input, hidden

* **input** of shape `(batch, input_size)`: tensor containing input features
* **hidden** of shape `(batch, hidden_size)`: tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.

Outputs: h'

* **h'** of shape `(batch, hidden_size)`: tensor containing the next hidden state for each element in the batch

Shape:

* Input1: (N, H_{in}) tensor containing input features, where H_{in} = `input_size`
* Input2: (N, H_{out}) tensor containing the initial hidden state for each element in the batch, where H_{out} = `hidden_size`. Defaults to zero if not provided.
* Output: (N, H_{out}) tensor containing the next hidden state for each element in the batch

Variables

* **~RNNCell.weight\_ih** – the learnable input-hidden weights, of shape `(hidden_size, input_size)`
* **~RNNCell.weight\_hh** – the learnable hidden-hidden weights, of shape `(hidden_size, hidden_size)`
* **~RNNCell.bias\_ih** – the learnable input-hidden bias, of shape `(hidden_size)`
* **~RNNCell.bias\_hh** – the learnable hidden-hidden bias, of shape `(hidden_size)`

Note

All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = \frac{1}{\text{hidden\_size}}

Examples:

```
>>> rnn = nn.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
```
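The example above steps through the sequence manually. As a side-by-side sketch (not from the original page, and assuming only the standard single-layer `nn.RNN` parameter names `weight_ih_l0`, `weight_hh_l0`, `bias_ih_l0`, `bias_hh_l0`), the same recurrence can be run by the sequence-level `nn.RNN` module once the weights are shared:

```
import torch
import torch.nn as nn

cell = nn.RNNCell(10, 20)
rnn = nn.RNN(10, 20)   # one tanh layer: the same recurrence over a whole sequence

# Copy the cell's weights into the sequence-level module so the two match.
with torch.no_grad():
    rnn.weight_ih_l0.copy_(cell.weight_ih)
    rnn.weight_hh_l0.copy_(cell.weight_hh)
    rnn.bias_ih_l0.copy_(cell.bias_ih)
    rnn.bias_hh_l0.copy_(cell.bias_hh)

x = torch.randn(6, 3, 10)    # (seq_len, batch, input_size)
h0 = torch.zeros(1, 3, 20)   # (num_layers, batch, hidden_size)

out_rnn, _ = rnn(x, h0)

# Unrolled equivalent using the cell, one time step at a time.
h = h0[0]
steps = []
for t in range(x.size(0)):
    h = cell(x[t], h)
    steps.append(h)
out_cell = torch.stack(steps)  # (seq_len, batch, hidden_size)

print(torch.allclose(out_rnn, out_cell, atol=1e-6))  # True
```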