tkinter — Python interface to Tcl/Tk ==================================== **Source code:** [Lib/tkinter/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/__init__.py) The [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") package (“Tk interface”) is the standard Python interface to the Tcl/Tk GUI toolkit. Both Tk and [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") are available on most Unix platforms, including macOS, as well as on Windows systems. Running `python -m tkinter` from the command line should open a window demonstrating a simple Tk interface, letting you know that [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") is properly installed on your system, and also showing what version of Tcl/Tk is installed, so you can read the Tcl/Tk documentation specific to that version. See also * [TkDocs](http://tkdocs.com/) Extensive tutorial on creating user interfaces with Tkinter. Explains key concepts, and illustrates recommended approaches using the modern API. * [Tkinter 8.5 reference: a GUI for Python](https://www.tkdocs.com/shipman/) Reference documentation for Tkinter 8.5 detailing available classes, methods, and options. Tcl/Tk Resources: * [Tk commands](https://www.tcl.tk/man/tcl8.6/TkCmd/contents.htm) Comprehensive reference to each of the underlying Tcl/Tk commands used by Tkinter. * [Tcl/Tk Home Page](https://www.tcl.tk) Additional documentation, and links to Tcl/Tk core development. Books: * [Modern Tkinter for Busy Python Developers](https://tkdocs.com/book.html) By Mark Roseman. (ISBN 978-1999149567) * [Python GUI Programming with Tkinter](https://www.packtpub.com/product/python-gui-programming-with-tkinter/9781788835886) By Alan D. Moore. (ISBN 978-1788835886) * [Programming Python](http://learning-python.com/about-pp4e.html) By Mark Lutz; has excellent coverage of Tkinter. (ISBN 978-0596158101) * [Tcl and the Tk Toolkit (2nd edition)](https://www.amazon.com/exec/obidos/ASIN/032133633X) By John Ousterhout, inventor of Tcl/Tk, and Ken Jones; does not cover Tkinter. (ISBN 978-0321336330) Tkinter Modules --------------- Support for Tkinter is spread across several modules. Most applications will need the main [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") module, as well as the [`tkinter.ttk`](tkinter.ttk#module-tkinter.ttk "tkinter.ttk: Tk themed widget set") module, which provides the modern themed widget set and API: ``` from tkinter import * from tkinter import ttk ``` `class tkinter.Tk(screenName=None, baseName=None, className='Tk', useTk=True, sync=False, use=None)` Construct a toplevel Tk widget, which is usually the main window of an application, and initialize a Tcl interpreter for this widget. Each instance has its own associated Tcl interpreter. The [`Tk`](#tkinter.Tk "tkinter.Tk") class is typically instantiated using all default values. However, the following keyword arguments are currently recognized: *screenName* When given (as a string), sets the `DISPLAY` environment variable. (X11 only) *baseName* Name of the profile file. By default, *baseName* is derived from the program name (`sys.argv[0]`). *className* Name of the widget class. Used as a profile file and also as the name with which Tcl is invoked (*argv0* in *interp*). *useTk* If `True`, initialize the Tk subsystem. 
The [`tkinter.Tcl()`](#tkinter.Tcl "tkinter.Tcl") function sets this to `False`. *sync* If `True`, execute all X server commands synchronously, so that errors are reported immediately. Can be used for debugging. (X11 only) *use* Specifies the *id* of the window in which to embed the application, instead of it being created as an independent toplevel window. *id* must be specified in the same way as the value for the -use option for toplevel widgets (that is, it has a form like that returned by `winfo_id()`). Note that on some platforms this will only work correctly if *id* refers to a Tk frame or toplevel that has its -container option enabled. [`Tk`](#tkinter.Tk "tkinter.Tk") reads and interprets profile files, named `.className.tcl` and `.baseName.tcl`, into the Tcl interpreter and calls [`exec()`](functions#exec "exec") on the contents of `.className.py` and `.baseName.py`. The path for the profile files is the `HOME` environment variable or, if that isn’t defined, then [`os.curdir`](os#os.curdir "os.curdir"). `tk` The Tk application object created by instantiating [`Tk`](#tkinter.Tk "tkinter.Tk"). This provides access to the Tcl interpreter. Each widget that is attached to the same instance of [`Tk`](#tkinter.Tk "tkinter.Tk") has the same value for its [`tk`](#tkinter.Tk.tk "tkinter.Tk.tk") attribute. `master` The widget object that contains this widget. For [`Tk`](#tkinter.Tk "tkinter.Tk"), the *master* is [`None`](constants#None "None") because it is the main window. The terms *master* and *parent* are similar and sometimes used interchangeably as argument names; however, calling `winfo_parent()` returns a string of the widget name whereas [`master`](#tkinter.Tk.master "tkinter.Tk.master") returns the object. *parent*/*child* reflects the tree-like relationship while *master*/*slave* reflects the container structure. `children` The immediate descendants of this widget as a [`dict`](stdtypes#dict "dict") with the child widget names as the keys and the child instance objects as the values. `tkinter.Tcl(screenName=None, baseName=None, className='Tk', useTk=False)` The [`Tcl()`](#tkinter.Tcl "tkinter.Tcl") function is a factory function which creates an object much like that created by the [`Tk`](#tkinter.Tk "tkinter.Tk") class, except that it does not initialize the Tk subsystem. This is most often useful when driving the Tcl interpreter in an environment where one doesn’t want to create extraneous toplevel windows, or where one cannot (such as Unix/Linux systems without an X server). An object created by the [`Tcl()`](#tkinter.Tcl "tkinter.Tcl") function can have a Toplevel window created (and the Tk subsystem initialized) by calling its `loadtk()` method. The modules that provide Tk support include: [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") Main Tkinter module. [`tkinter.colorchooser`](tkinter.colorchooser#module-tkinter.colorchooser "tkinter.colorchooser: Color choosing dialog (Tk)") Dialog to let the user choose a color. [`tkinter.commondialog`](dialog#module-tkinter.commondialog "tkinter.commondialog: Tkinter base class for dialogs (Tk)") Base class for the dialogs defined in the other modules listed here. [`tkinter.filedialog`](dialog#module-tkinter.filedialog "tkinter.filedialog: Dialog classes for file selection (Tk)") Common dialogs to allow the user to specify a file to open or save. [`tkinter.font`](tkinter.font#module-tkinter.font "tkinter.font: Tkinter font-wrapping class (Tk)") Utilities to help work with fonts. 
[`tkinter.messagebox`](tkinter.messagebox#module-tkinter.messagebox "tkinter.messagebox: Various types of alert dialogs (Tk)") Access to standard Tk dialog boxes. [`tkinter.scrolledtext`](tkinter.scrolledtext#module-tkinter.scrolledtext "tkinter.scrolledtext: Text widget with a vertical scroll bar. (Tk)") Text widget with a vertical scroll bar built in. [`tkinter.simpledialog`](dialog#module-tkinter.simpledialog "tkinter.simpledialog: Simple dialog windows (Tk)") Basic dialogs and convenience functions. [`tkinter.ttk`](tkinter.ttk#module-tkinter.ttk "tkinter.ttk: Tk themed widget set") Themed widget set introduced in Tk 8.5, providing modern alternatives for many of the classic widgets in the main [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") module. Additional modules: `_tkinter` A binary module that contains the low-level interface to Tcl/Tk. It is automatically imported by the main [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") module, and should never be used directly by application programmers. It is usually a shared library (or DLL), but might in some cases be statically linked with the Python interpreter. `idlelib` Python’s Integrated Development and Learning Environment (IDLE). Based on [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces"). `tkinter.constants` Symbolic constants that can be used in place of strings when passing various parameters to Tkinter calls. Automatically imported by the main [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") module. [`tkinter.dnd`](tkinter.dnd#module-tkinter.dnd "tkinter.dnd: Tkinter drag-and-drop interface (Tk)") (experimental) Drag-and-drop support for [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces"). This will become deprecated when it is replaced with the Tk DND. [`tkinter.tix`](tkinter.tix#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") (deprecated) An older third-party Tcl/Tk package that adds several new widgets. Better alternatives for most can be found in [`tkinter.ttk`](tkinter.ttk#module-tkinter.ttk "tkinter.ttk: Tk themed widget set"). [`turtle`](turtle#module-turtle "turtle: An educational framework for simple graphics applications") Turtle graphics in a Tk window. Tkinter Life Preserver ---------------------- This section is not designed to be an exhaustive tutorial on either Tk or Tkinter. Rather, it is intended as a stop gap, providing some introductory orientation on the system. Credits: * Tk was written by John Ousterhout while at Berkeley. * Tkinter was written by Steen Lumholt and Guido van Rossum. * This Life Preserver was written by Matt Conway at the University of Virginia. * The HTML rendering, and some liberal editing, was produced from a FrameMaker version by Ken Manheimer. * Fredrik Lundh elaborated and revised the class interface descriptions, to get them current with Tk 4.2. * Mike Clarkson converted the documentation to LaTeX, and compiled the User Interface chapter of the reference manual. ### How To Use This Section This section is designed in two parts: the first half (roughly) covers background material, while the second half can be taken to the keyboard as a handy reference. 
When trying to answer questions of the form “how do I do blah”, it is often best to find out how to do “blah” in straight Tk, and then convert this back into the corresponding [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") call. Python programmers can often guess at the correct Python command by looking at the Tk documentation. This means that in order to use Tkinter, you will have to know a little bit about Tk. This document can’t fulfill that role, so the best we can do is point you to the best documentation that exists. Here are some hints: * The authors strongly suggest getting a copy of the Tk man pages. Specifically, the man pages in the `manN` directory are most useful. The `man3` man pages describe the C interface to the Tk library and thus are not especially helpful for script writers. * Addison-Wesley publishes a book called Tcl and the Tk Toolkit by John Ousterhout (ISBN 0-201-63337-X) which is a good introduction to Tcl and Tk for the novice. The book is not exhaustive, and for many details it defers to the man pages. * `tkinter/__init__.py` is a last resort for most, but can be a good place to go when nothing else makes sense. ### A Simple Hello World Program ``` import tkinter as tk class Application(tk.Frame): def __init__(self, master=None): super().__init__(master) self.master = master self.pack() self.create_widgets() def create_widgets(self): self.hi_there = tk.Button(self) self.hi_there["text"] = "Hello World\n(click me)" self.hi_there["command"] = self.say_hi self.hi_there.pack(side="top") self.quit = tk.Button(self, text="QUIT", fg="red", command=self.master.destroy) self.quit.pack(side="bottom") def say_hi(self): print("hi there, everyone!") root = tk.Tk() app = Application(master=root) app.mainloop() ``` A (Very) Quick Look at Tcl/Tk ----------------------------- The class hierarchy looks complicated, but in actual practice, application programmers almost always refer to the classes at the very bottom of the hierarchy. Notes: * These classes are provided for the purposes of organizing certain functions under one namespace. They aren’t meant to be instantiated independently. * The [`Tk`](#tkinter.Tk "tkinter.Tk") class is meant to be instantiated only once in an application. Application programmers need not instantiate one explicitly, the system creates one whenever any of the other classes are instantiated. * The `Widget` class is not meant to be instantiated, it is meant only for subclassing to make “real” widgets (in C++, this is called an ‘abstract class’). To make use of this reference material, there will be times when you will need to know how to read short passages of Tk and how to identify the various parts of a Tk command. (See section [Mapping Basic Tk into Tkinter](#tkinter-basic-mapping) for the [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") equivalents of what’s below.) Tk scripts are Tcl programs. Like all Tcl programs, Tk scripts are just lists of tokens separated by spaces. A Tk widget is just its *class*, the *options* that help configure it, and the *actions* that make it do useful things. To make a widget in Tk, the command is always of the form: ``` classCommand newPathname options ``` *classCommand* denotes which kind of widget to make (a button, a label, a menu…) *newPathname* is the new name for this widget. All names in Tk must be unique. To help enforce this, widgets in Tk are named with *pathnames*, just like files in a file system. 
The top level widget, the *root*, is called `.` (period) and children are delimited by more periods. For example, `.myApp.controlPanel.okButton` might be the name of a widget. *options* configure the widget’s appearance and in some cases, its behavior. The options come in the form of a list of flags and values. Flags are preceded by a `-`, like Unix shell command flags, and values are put in quotes if they are more than one word. For example: ``` button .fred -fg red -text "hi there" ^ ^ \______________________/ | | | class new options command widget (-opt val -opt val ...) ``` Once created, the pathname to the widget becomes a new command. This new *widget command* is the programmer’s handle for getting the new widget to perform some *action*. In C, you’d express this as `someAction(fred, someOptions)`; in C++, as `fred.someAction(someOptions)`; and in Tk, you say: ``` .fred someAction someOptions ``` Note that the object name, `.fred`, starts with a dot. As you’d expect, the legal values for *someAction* will depend on the widget’s class: `.fred disable` works if fred is a button (fred gets greyed out), but does not work if fred is a label (disabling of labels is not supported in Tk). The legal values of *someOptions* are action dependent. Some actions, like `disable`, require no arguments; others, like a text-entry box’s `delete` command, need arguments to specify what range of text to delete. Mapping Basic Tk into Tkinter ----------------------------- Class commands in Tk correspond to class constructors in Tkinter. ``` button .fred =====> fred = Button() ``` The master of an object is implicit in the new name given to it at creation time. In Tkinter, masters are specified explicitly. ``` button .panel.fred =====> fred = Button(panel) ``` The configuration options in Tk are given in lists of hyphened tags followed by values. In Tkinter, options are specified as keyword arguments in the instance constructor, and as keyword arguments to configure calls or as instance indices, in dictionary style, for established instances. See section [Setting Options](#tkinter-setting-options) on setting options. ``` button .fred -fg red =====> fred = Button(panel, fg="red") .fred configure -fg red =====> fred["fg"] = "red" OR ==> fred.config(fg="red") ``` In Tk, to perform an action on a widget, use the widget name as a command, and follow it with an action name, possibly with arguments (options). In Tkinter, you call methods on the class instance to invoke actions on the widget. The actions (methods) that a given widget can perform are listed in `tkinter/__init__.py`. ``` .fred invoke =====> fred.invoke() ``` To give a widget to the packer (geometry manager), you call pack with optional arguments. In Tkinter, the Pack class holds all this functionality, and the various forms of the pack command are implemented as methods. All widgets in [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") are subclassed from the Packer, and so inherit all the packing methods. See the [`tkinter.tix`](tkinter.tix#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") module documentation for additional information on the Form geometry manager. ``` pack .fred -side left =====> fred.pack(side="left") ``` How Tk and Tkinter are Related ------------------------------ From the top down: Your App Here (Python) A Python application makes a [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") call. 
tkinter (Python Package) This call (say, for example, creating a button widget) is implemented in the [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") package, which is written in Python. This Python function will parse the commands and the arguments and convert them into a form that makes them look as if they had come from a Tk script instead of a Python script. \_tkinter (C) These commands and their arguments will be passed to a C function in the `_tkinter` - note the underscore - extension module. Tk Widgets (C and Tcl) This C function is able to make calls into other C modules, including the C functions that make up the Tk library. Tk is implemented in C and some Tcl. The Tcl part of the Tk widgets is used to bind certain default behaviors to widgets, and is executed once at the point where the Python [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") package is imported. (The user never sees this stage.) Tk (C) The Tk part of the Tk Widgets implements the final mapping to … Xlib (C) the Xlib library to draw graphics on the screen. Handy Reference --------------- ### Setting Options Options control things like the color and border width of a widget. Options can be set in three ways: At object creation time, using keyword arguments ``` fred = Button(self, fg="red", bg="blue") ``` After object creation, treating the option name like a dictionary index ``` fred["fg"] = "red" fred["bg"] = "blue" ``` Use the config() method to update multiple attributes after object creation ``` fred.config(fg="red", bg="blue") ``` For a complete explanation of a given option and its behavior, see the Tk man pages for the widget in question. Note that the man pages list “STANDARD OPTIONS” and “WIDGET SPECIFIC OPTIONS” for each widget. The former is a list of options that are common to many widgets; the latter are the options that are idiosyncratic to that particular widget. The Standard Options are documented on the *[options(3)](https://manpages.debian.org/options(3))* man page. No distinction between standard and widget-specific options is made in this document. Some options don’t apply to some kinds of widgets. Whether a given widget responds to a particular option depends on the class of the widget; buttons have a `command` option, labels do not. The options supported by a given widget are listed in that widget’s man page, or can be queried at runtime by calling the `config()` method without arguments, or by calling the `keys()` method on that widget. The return value of these calls is a dictionary whose keys are option names as strings (for example, `'relief'`) and whose values are 5-tuples. Some options, like `bg`, are synonyms for common options with long names (`bg` is shorthand for “background”). Passing the `config()` method the name of a shorthand option will return a 2-tuple, not a 5-tuple. The 2-tuple passed back will contain the name of the synonym and the “real” option (such as `('bg', 'background')`). | Index | Meaning | Example | | --- | --- | --- | | 0 | option name | `'relief'` | | 1 | option name for database lookup | `'relief'` | | 2 | option class for database lookup | `'Relief'` | | 3 | default value | `'raised'` | | 4 | current value | `'groove'` | Example: ``` >>> print(fred.config()) {'relief': ('relief', 'relief', 'Relief', 'raised', 'groove')} ``` Of course, the dictionary printed will include all the options available and their values. This is meant only as an example. 
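To see the three mechanisms and the query methods together, here is a minimal, self-contained sketch (the widget name `fred` follows the convention used above; the default and current values in the returned tuples vary by platform and Tk version):

```
import tkinter as tk

root = tk.Tk()

# 1. At object creation time, using keyword arguments.
fred = tk.Button(root, fg="red", bg="blue")

# 2. After object creation, treating the option name like a dictionary index.
fred["fg"] = "green"

# 3. Using config() to update several options at once.
fred.config(fg="red", bg="blue")

# Querying at runtime: a shorthand name returns the 2-tuple naming the
# synonym and the "real" option...
print(fred.config("bg"))          # e.g. ('bg', 'background')

# ...while a full option name returns the 5-tuple described above.
print(fred.config("background"))  # name, db name, db class, default, current

root.destroy()
```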
### The Packer The packer is one of Tk’s geometry-management mechanisms. Geometry managers are used to specify the relative positioning of widgets within their container - their mutual *master*. In contrast to the more cumbersome *placer* (which is used less commonly, and we do not cover here), the packer takes qualitative relationship specification - *above*, *to the left of*, *filling*, etc - and works everything out to determine the exact placement coordinates for you. The size of any *master* widget is determined by the size of the “slave widgets” inside. The packer is used to control where slave widgets appear inside the master into which they are packed. You can pack widgets into frames, and frames into other frames, in order to achieve the kind of layout you desire. Additionally, the arrangement is dynamically adjusted to accommodate incremental changes to the configuration, once it is packed. Note that widgets do not appear until they have had their geometry specified with a geometry manager. It’s a common early mistake to leave out the geometry specification, and then be surprised when the widget is created but nothing appears. A widget will appear only after it has had, for example, the packer’s `pack()` method applied to it. The pack() method can be called with keyword-option/value pairs that control where the widget is to appear within its container, and how it is to behave when the main application window is resized. Here are some examples: ``` fred.pack() # defaults to side = "top" fred.pack(side="left") fred.pack(expand=1) ``` ### Packer Options For more extensive information on the packer and the options that it can take, see the man pages and page 183 of John Ousterhout’s book. anchor Anchor type. Denotes where the packer is to place each slave in its parcel. expand Boolean, `0` or `1`. fill Legal values: `'x'`, `'y'`, `'both'`, `'none'`. ipadx and ipady A distance - designating internal padding on each side of the slave widget. padx and pady A distance - designating external padding on each side of the slave widget. side Legal values are: `'left'`, `'right'`, `'top'`, `'bottom'`. ### Coupling Widget Variables The current-value setting of some widgets (like text entry widgets) can be connected directly to application variables by using special options. These options are `variable`, `textvariable`, `onvalue`, `offvalue`, and `value`. This connection works both ways: if the variable changes for any reason, the widget it’s connected to will be updated to reflect the new value. Unfortunately, in the current implementation of [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") it is not possible to hand over an arbitrary Python variable to a widget through a `variable` or `textvariable` option. The only kinds of variables for which this works are variables that are subclassed from a class called Variable, defined in [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces"). There are many useful subclasses of Variable already defined: `StringVar`, `IntVar`, `DoubleVar`, and `BooleanVar`. To read the current value of such a variable, call the `get()` method on it, and to change its value you call the `set()` method. If you follow this protocol, the widget will always track the value of the variable, with no further intervention on your part. 
For example: ``` import tkinter as tk class App(tk.Frame): def __init__(self, master): super().__init__(master) self.pack() self.entrythingy = tk.Entry() self.entrythingy.pack() # Create the application variable. self.contents = tk.StringVar() # Set it to some value. self.contents.set("this is a variable") # Tell the entry widget to watch this variable. self.entrythingy["textvariable"] = self.contents # Define a callback for when the user hits return. # It prints the current value of the variable. self.entrythingy.bind('<Key-Return>', self.print_contents) def print_contents(self, event): print("Hi. The current entry content is:", self.contents.get()) root = tk.Tk() myapp = App(root) myapp.mainloop() ``` ### The Window Manager In Tk, there is a utility command, `wm`, for interacting with the window manager. Options to the `wm` command allow you to control things like titles, placement, icon bitmaps, and the like. In [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces"), these commands have been implemented as methods on the `Wm` class. Toplevel widgets are subclassed from the `Wm` class, and so can call the `Wm` methods directly. To get at the toplevel window that contains a given widget, you can often just refer to the widget’s master. Of course if the widget has been packed inside of a frame, the master won’t represent a toplevel window. To get at the toplevel window that contains an arbitrary widget, you can call the `_root()` method. This method begins with an underscore to denote the fact that this function is part of the implementation, and not an interface to Tk functionality. Here are some examples of typical usage: ``` import tkinter as tk class App(tk.Frame): def __init__(self, master=None): super().__init__(master) self.pack() # create the application myapp = App() # # here are method calls to the window manager class # myapp.master.title("My Do-Nothing Application") myapp.master.maxsize(1000, 400) # start the program myapp.mainloop() ``` ### Tk Option Data Types anchor Legal values are points of the compass: `"n"`, `"ne"`, `"e"`, `"se"`, `"s"`, `"sw"`, `"w"`, `"nw"`, and also `"center"`. bitmap There are eight built-in, named bitmaps: `'error'`, `'gray25'`, `'gray50'`, `'hourglass'`, `'info'`, `'questhead'`, `'question'`, `'warning'`. To specify an X bitmap filename, give the full path to the file, preceded with an `@`, as in `"@/usr/contrib/bitmap/gumby.bit"`. boolean You can pass integers 0 or 1 or the strings `"yes"` or `"no"`. callback This is any Python function that takes no arguments. For example: ``` def print_it(): print("hi there") fred["command"] = print_it ``` color Colors can be given as the names of X colors in the rgb.txt file, or as strings representing RGB values in 4 bit: `"#RGB"`, 8 bit: `"#RRGGBB"`, 12 bit: `"#RRRGGGBBB"`, or 16 bit: `"#RRRRGGGGBBBB"` ranges, where R,G,B here represent any legal hex digit. See page 160 of Ousterhout’s book for details. cursor The standard X cursor names from `cursorfont.h` can be used, without the `XC_` prefix. For example to get a hand cursor (`XC_hand2`), use the string `"hand2"`. You can also specify a bitmap and mask file of your own. See page 179 of Ousterhout’s book. distance Screen distances can be specified in either pixels or absolute distances. Pixels are given as numbers and absolute distances as strings, with the trailing character denoting units: `c` for centimetres, `i` for inches, `m` for millimetres, `p` for printer’s points. For example, 3.5 inches is expressed as `"3.5i"`. 
font Tk uses a list font name format, such as `{courier 10 bold}`. Font sizes with positive numbers are measured in points; sizes with negative numbers are measured in pixels. geometry This is a string of the form `widthxheight`, where width and height are measured in pixels for most widgets (in characters for widgets displaying text). For example: `fred["geometry"] = "200x100"`. justify Legal values are the strings: `"left"`, `"center"`, `"right"`, and `"fill"`. region This is a string with four space-delimited elements, each of which is a legal distance (see above). For example: `"2 3 4 5"` and `"3i 2i 4.5i 2i"` and `"3c 2c 4c 10.43c"` are all legal regions. relief Determines what the border style of a widget will be. Legal values are: `"raised"`, `"sunken"`, `"flat"`, `"groove"`, and `"ridge"`. scrollcommand This is almost always the `set()` method of some scrollbar widget, but can be any widget method that takes a single argument. wrap Must be one of: `"none"`, `"char"`, or `"word"`. ### Bindings and Events The bind method from the widget command allows you to watch for certain events and to have a callback function trigger when that event type occurs. The form of the bind method is: ``` def bind(self, sequence, func, add=''): ``` where: sequence is a string that denotes the target kind of event. (See the bind man page and page 201 of John Ousterhout’s book for details). func is a Python function, taking one argument, to be invoked when the event occurs. An Event instance will be passed as the argument. (Functions deployed this way are commonly known as *callbacks*.) add is optional, either `''` or `'+'`. Passing an empty string denotes that this binding is to replace any other bindings that this event is associated with. Passing a `'+'` means that this function is to be added to the list of functions bound to this event type. For example: ``` def turn_red(self, event): event.widget["activeforeground"] = "red" self.button.bind("<Enter>", self.turn_red) ``` Notice how the widget field of the event is being accessed in the `turn_red()` callback. This field contains the widget that caught the X event. The following table lists the other event fields you can access, and how they are denoted in Tk, which can be useful when referring to the Tk man pages. | Tk | Tkinter Event Field | Tk | Tkinter Event Field | | --- | --- | --- | --- | | %f | focus | %A | char | | %h | height | %E | send\_event | | %k | keycode | %K | keysym | | %s | state | %N | keysym\_num | | %t | time | %T | type | | %w | width | %W | widget | | %x | x | %X | x\_root | | %y | y | %Y | y\_root | ### The index Parameter A number of widgets require “index” parameters to be passed. These are used to point at a specific place in a Text widget, or to particular characters in an Entry widget, or to particular menu items in a Menu widget. Entry widget indexes (index, view index, etc.) Entry widgets have options that refer to character positions in the text being displayed. You can use these [`tkinter`](#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") functions to access these special points in text widgets: Text widget indexes The index notation for Text widgets is very rich and is best described in the Tk man pages. Menu indexes (menu.invoke(), menu.entryconfig(), etc.) Some options and methods for menus manipulate specific menu entries. 
Anytime a menu index is needed for an option or a parameter, you may pass in: * an integer which refers to the numeric position of the entry in the widget, counted from the top, starting with 0; * the string `"active"`, which refers to the menu position that is currently under the cursor; * the string `"last"` which refers to the last menu item; * An integer preceded by `@`, as in `@6`, where the integer is interpreted as a y pixel coordinate in the menu’s coordinate system; * the string `"none"`, which indicates no menu entry at all, most often used with menu.activate() to deactivate all entries, and finally, * a text string that is pattern matched against the label of the menu entry, as scanned from the top of the menu to the bottom. Note that this index type is considered after all the others, which means that matches for menu items labelled `last`, `active`, or `none` may be interpreted as the above literals, instead. ### Images Images of different formats can be created through the corresponding subclass of `tkinter.Image`: * `BitmapImage` for images in XBM format. * `PhotoImage` for images in PGM, PPM, GIF and PNG formats. The latter is supported starting with Tk 8.6. Either type of image is created through either the `file` or the `data` option (other options are available as well). The image object can then be used wherever an `image` option is supported by some widget (e.g. labels, buttons, menus). In these cases, Tk will not keep a reference to the image. When the last Python reference to the image object is deleted, the image data is deleted as well, and Tk will display an empty box wherever the image was used. See also The [Pillow](http://python-pillow.org/) package adds support for formats such as BMP, JPEG, TIFF, and WebP, among others. File Handlers ------------- Tk allows you to register and unregister a callback function which will be called from the Tk mainloop when I/O is possible on a file descriptor. Only one handler may be registered per file descriptor. Example code: ``` import tkinter widget = tkinter.Tk() mask = tkinter.READABLE | tkinter.WRITABLE widget.tk.createfilehandler(file, mask, callback) ... widget.tk.deletefilehandler(file) ``` This feature is not available on Windows. Since you don’t know how many bytes are available for reading, you may not want to use the [`BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase") or [`TextIOBase`](io#io.TextIOBase "io.TextIOBase") [`read()`](io#io.BufferedIOBase.read "io.BufferedIOBase.read") or [`readline()`](io#io.IOBase.readline "io.IOBase.readline") methods, since these will insist on reading a predefined number of bytes. For sockets, the [`recv()`](socket#socket.socket.recv "socket.socket.recv") or [`recvfrom()`](socket#socket.socket.recvfrom "socket.socket.recvfrom") methods will work fine; for other files, use raw reads or `os.read(file.fileno(), maxbytecount)`. `Widget.tk.createfilehandler(file, mask, func)` Registers the file handler callback function *func*. The *file* argument may either be an object with a [`fileno()`](io#io.IOBase.fileno "io.IOBase.fileno") method (such as a file or socket object), or an integer file descriptor. The *mask* argument is an ORed combination of any of the three constants below. The callback is called as follows: ``` callback(file, mask) ``` `Widget.tk.deletefilehandler(file)` Unregisters a file handler. `tkinter.READABLE` `tkinter.WRITABLE` `tkinter.EXCEPTION` Constants used in the *mask* arguments.
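The example above leaves `file` and `callback` undefined. Here is a more complete sketch of the same pattern using a socket; the server address is hypothetical, and as noted, none of this works on Windows:

```
import socket
import tkinter

root = tkinter.Tk()

# Hypothetical endpoint; substitute a server that is actually listening.
sock = socket.create_connection(("localhost", 8888))

def on_readable(file, mask):
    # Runs inside the Tk mainloop whenever the socket has data.
    data = file.recv(4096)   # recv() is safe here; buffered read() is not
    if not data:             # peer closed the connection
        root.tk.deletefilehandler(file)
        file.close()
    else:
        print("received", len(data), "bytes")

root.tk.createfilehandler(sock, tkinter.READABLE, on_readable)
root.mainloop()
```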
Generic Operating System Services ================================= The modules described in this chapter provide interfaces to operating system features that are available on (almost) all operating systems, such as files and a clock. The interfaces are generally modeled after the Unix or C interfaces, but they are available on most other systems as well. Here’s an overview: * [`os` — Miscellaneous operating system interfaces](os) + [File Names, Command Line Arguments, and Environment Variables](os#file-names-command-line-arguments-and-environment-variables) + [Process Parameters](os#process-parameters) + [File Object Creation](os#file-object-creation) + [File Descriptor Operations](os#file-descriptor-operations) - [Querying the size of a terminal](os#querying-the-size-of-a-terminal) - [Inheritance of File Descriptors](os#inheritance-of-file-descriptors) + [Files and Directories](os#files-and-directories) - [Linux extended attributes](os#linux-extended-attributes) + [Process Management](os#process-management) + [Interface to the scheduler](os#interface-to-the-scheduler) + [Miscellaneous System Information](os#miscellaneous-system-information) + [Random numbers](os#random-numbers) * [`io` — Core tools for working with streams](io) + [Overview](io#overview) - [Text I/O](io#text-i-o) - [Binary I/O](io#binary-i-o) - [Raw I/O](io#raw-i-o) + [High-level Module Interface](io#high-level-module-interface) + [Class hierarchy](io#class-hierarchy) - [I/O Base Classes](io#i-o-base-classes) - [Raw File I/O](io#raw-file-i-o) - [Buffered Streams](io#buffered-streams) - [Text I/O](io#id1) + [Performance](io#performance) - [Binary I/O](io#id2) - [Text I/O](io#id3) - [Multi-threading](io#multi-threading) - [Reentrancy](io#reentrancy) * [`time` — Time access and conversions](time) + [Functions](time#functions) + [Clock ID Constants](time#clock-id-constants) + [Timezone Constants](time#timezone-constants) * [`argparse` — Parser for command-line options, arguments and sub-commands](argparse) + [Example](argparse#example) - [Creating a parser](argparse#creating-a-parser) - [Adding arguments](argparse#adding-arguments) - [Parsing arguments](argparse#parsing-arguments) + [ArgumentParser objects](argparse#argumentparser-objects) - [prog](argparse#prog) - [usage](argparse#usage) - [description](argparse#description) - [epilog](argparse#epilog) - [parents](argparse#parents) - [formatter\_class](argparse#formatter-class) - [prefix\_chars](argparse#prefix-chars) - [fromfile\_prefix\_chars](argparse#fromfile-prefix-chars) - [argument\_default](argparse#argument-default) - [allow\_abbrev](argparse#allow-abbrev) - [conflict\_handler](argparse#conflict-handler) - [add\_help](argparse#add-help) - [exit\_on\_error](argparse#exit-on-error) + [The add\_argument() method](argparse#the-add-argument-method) - [name or flags](argparse#name-or-flags) - [action](argparse#action) - [nargs](argparse#nargs) - [const](argparse#const) - [default](argparse#default) - [type](argparse#type) - [choices](argparse#choices) - [required](argparse#required) - [help](argparse#help) - [metavar](argparse#metavar) - [dest](argparse#dest) - [Action classes](argparse#action-classes) + [The parse\_args() method](argparse#the-parse-args-method) - [Option value syntax](argparse#option-value-syntax) - [Invalid arguments](argparse#invalid-arguments) - [Arguments containing `-`](argparse#arguments-containing) - [Argument abbreviations (prefix matching)](argparse#argument-abbreviations-prefix-matching) - [Beyond 
`sys.argv`](argparse#beyond-sys-argv) - [The Namespace object](argparse#the-namespace-object) + [Other utilities](argparse#other-utilities) - [Sub-commands](argparse#sub-commands) - [FileType objects](argparse#filetype-objects) - [Argument groups](argparse#argument-groups) - [Mutual exclusion](argparse#mutual-exclusion) - [Parser defaults](argparse#parser-defaults) - [Printing help](argparse#printing-help) - [Partial parsing](argparse#partial-parsing) - [Customizing file parsing](argparse#customizing-file-parsing) - [Exiting methods](argparse#exiting-methods) - [Intermixed parsing](argparse#intermixed-parsing) + [Upgrading optparse code](argparse#upgrading-optparse-code) * [`getopt` — C-style parser for command line options](getopt) * [`logging` — Logging facility for Python](logging) + [Logger Objects](logging#logger-objects) + [Logging Levels](logging#logging-levels) + [Handler Objects](logging#handler-objects) + [Formatter Objects](logging#formatter-objects) + [Filter Objects](logging#filter-objects) + [LogRecord Objects](logging#logrecord-objects) + [LogRecord attributes](logging#logrecord-attributes) + [LoggerAdapter Objects](logging#loggeradapter-objects) + [Thread Safety](logging#thread-safety) + [Module-Level Functions](logging#module-level-functions) + [Module-Level Attributes](logging#module-level-attributes) + [Integration with the warnings module](logging#integration-with-the-warnings-module) * [`logging.config` — Logging configuration](logging.config) + [Configuration functions](logging.config#configuration-functions) + [Security considerations](logging.config#security-considerations) + [Configuration dictionary schema](logging.config#configuration-dictionary-schema) - [Dictionary Schema Details](logging.config#dictionary-schema-details) - [Incremental Configuration](logging.config#incremental-configuration) - [Object connections](logging.config#object-connections) - [User-defined objects](logging.config#user-defined-objects) - [Access to external objects](logging.config#access-to-external-objects) - [Access to internal objects](logging.config#access-to-internal-objects) - [Import resolution and custom importers](logging.config#import-resolution-and-custom-importers) + [Configuration file format](logging.config#configuration-file-format) * [`logging.handlers` — Logging handlers](logging.handlers) + [StreamHandler](logging.handlers#streamhandler) + [FileHandler](logging.handlers#filehandler) + [NullHandler](logging.handlers#nullhandler) + [WatchedFileHandler](logging.handlers#watchedfilehandler) + [BaseRotatingHandler](logging.handlers#baserotatinghandler) + [RotatingFileHandler](logging.handlers#rotatingfilehandler) + [TimedRotatingFileHandler](logging.handlers#timedrotatingfilehandler) + [SocketHandler](logging.handlers#sockethandler) + [DatagramHandler](logging.handlers#datagramhandler) + [SysLogHandler](logging.handlers#sysloghandler) + [NTEventLogHandler](logging.handlers#nteventloghandler) + [SMTPHandler](logging.handlers#smtphandler) + [MemoryHandler](logging.handlers#memoryhandler) + [HTTPHandler](logging.handlers#httphandler) + [QueueHandler](logging.handlers#queuehandler) + [QueueListener](logging.handlers#queuelistener) * [`getpass` — Portable password input](getpass) * [`curses` — Terminal handling for character-cell displays](curses) + [Functions](curses#functions) + [Window Objects](curses#window-objects) + [Constants](curses#constants) * [`curses.textpad` — Text input widget for curses programs](curses#module-curses.textpad) + [Textbox 
objects](curses#textbox-objects) * [`curses.ascii` — Utilities for ASCII characters](curses.ascii) * [`curses.panel` — A panel stack extension for curses](curses.panel) + [Functions](curses.panel#functions) + [Panel Objects](curses.panel#panel-objects) * [`platform` — Access to underlying platform’s identifying data](platform) + [Cross Platform](platform#cross-platform) + [Java Platform](platform#java-platform) + [Windows Platform](platform#windows-platform) + [macOS Platform](platform#macos-platform) + [Unix Platforms](platform#unix-platforms) * [`errno` — Standard errno system symbols](errno) * [`ctypes` — A foreign function library for Python](ctypes) + [ctypes tutorial](ctypes#ctypes-tutorial) - [Loading dynamic link libraries](ctypes#loading-dynamic-link-libraries) - [Accessing functions from loaded dlls](ctypes#accessing-functions-from-loaded-dlls) - [Calling functions](ctypes#calling-functions) - [Fundamental data types](ctypes#fundamental-data-types) - [Calling functions, continued](ctypes#calling-functions-continued) - [Calling functions with your own custom data types](ctypes#calling-functions-with-your-own-custom-data-types) - [Specifying the required argument types (function prototypes)](ctypes#specifying-the-required-argument-types-function-prototypes) - [Return types](ctypes#return-types) - [Passing pointers (or: passing parameters by reference)](ctypes#passing-pointers-or-passing-parameters-by-reference) - [Structures and unions](ctypes#structures-and-unions) - [Structure/union alignment and byte order](ctypes#structure-union-alignment-and-byte-order) - [Bit fields in structures and unions](ctypes#bit-fields-in-structures-and-unions) - [Arrays](ctypes#arrays) - [Pointers](ctypes#pointers) - [Type conversions](ctypes#type-conversions) - [Incomplete Types](ctypes#incomplete-types) - [Callback functions](ctypes#callback-functions) - [Accessing values exported from dlls](ctypes#accessing-values-exported-from-dlls) - [Surprises](ctypes#surprises) - [Variable-sized data types](ctypes#variable-sized-data-types) + [ctypes reference](ctypes#ctypes-reference) - [Finding shared libraries](ctypes#finding-shared-libraries) - [Loading shared libraries](ctypes#loading-shared-libraries) - [Foreign functions](ctypes#foreign-functions) - [Function prototypes](ctypes#function-prototypes) - [Utility functions](ctypes#utility-functions) - [Data types](ctypes#data-types) - [Fundamental data types](ctypes#ctypes-fundamental-data-types-2) - [Structured data types](ctypes#structured-data-types) - [Arrays and pointers](ctypes#arrays-and-pointers) zoneinfo — IANA time zone support ================================= New in version 3.9. The [`zoneinfo`](#module-zoneinfo "zoneinfo: IANA time zone support") module provides a concrete time zone implementation to support the IANA time zone database as originally specified in [**PEP 615**](https://www.python.org/dev/peps/pep-0615). By default, [`zoneinfo`](#module-zoneinfo "zoneinfo: IANA time zone support") uses the system’s time zone data if available; if no system time zone data is available, the library will fall back to using the first-party [tzdata](https://pypi.org/project/tzdata/) package available on PyPI. 
See also `Module:` [`datetime`](datetime#module-datetime "datetime: Basic date and time types.") Provides the [`time`](datetime#datetime.time "datetime.time") and [`datetime`](datetime#datetime.datetime "datetime.datetime") types with which the [`ZoneInfo`](#zoneinfo.ZoneInfo "zoneinfo.ZoneInfo") class is designed to be used. Package [tzdata](https://pypi.org/project/tzdata/) First-party package maintained by the CPython core developers to supply time zone data via PyPI. Using `ZoneInfo` ---------------- [`ZoneInfo`](#zoneinfo.ZoneInfo "zoneinfo.ZoneInfo") is a concrete implementation of the [`datetime.tzinfo`](datetime#datetime.tzinfo "datetime.tzinfo") abstract base class, and is intended to be attached to `tzinfo`, either via the constructor, the [`datetime.replace`](datetime#datetime.datetime.replace "datetime.datetime.replace") method or [`datetime.astimezone`](datetime#datetime.datetime.astimezone "datetime.datetime.astimezone"): ``` >>> from zoneinfo import ZoneInfo >>> from datetime import datetime, timedelta >>> dt = datetime(2020, 10, 31, 12, tzinfo=ZoneInfo("America/Los_Angeles")) >>> print(dt) 2020-10-31 12:00:00-07:00 >>> dt.tzname() 'PDT' ``` Datetimes constructed in this way are compatible with datetime arithmetic and handle daylight saving time transitions with no further intervention: ``` >>> dt_add = dt + timedelta(days=1) >>> print(dt_add) 2020-11-01 12:00:00-08:00 >>> dt_add.tzname() 'PST' ``` These time zones also support the [`fold`](datetime#datetime.datetime.fold "datetime.datetime.fold") attribute introduced in [**PEP 495**](https://www.python.org/dev/peps/pep-0495). During offset transitions which induce ambiguous times (such as a daylight saving time to standard time transition), the offset from *before* the transition is used when `fold=0`, and the offset *after* the transition is used when `fold=1`, for example: ``` >>> dt = datetime(2020, 11, 1, 1, tzinfo=ZoneInfo("America/Los_Angeles")) >>> print(dt) 2020-11-01 01:00:00-07:00 >>> print(dt.replace(fold=1)) 2020-11-01 01:00:00-08:00 ``` When converting from another time zone, the fold will be set to the correct value: ``` >>> from datetime import timezone >>> LOS_ANGELES = ZoneInfo("America/Los_Angeles") >>> dt_utc = datetime(2020, 11, 1, 8, tzinfo=timezone.utc) >>> # Before the PDT -> PST transition >>> print(dt_utc.astimezone(LOS_ANGELES)) 2020-11-01 01:00:00-07:00 >>> # After the PDT -> PST transition >>> print((dt_utc + timedelta(hours=1)).astimezone(LOS_ANGELES)) 2020-11-01 01:00:00-08:00 ``` Data sources ------------ The `zoneinfo` module does not directly provide time zone data, and instead pulls time zone information from the system time zone database or the first-party PyPI package [tzdata](https://pypi.org/project/tzdata/), if available. Some systems, including notably Windows systems, do not have an IANA database available, and so for projects targeting cross-platform compatibility that require time zone data, it is recommended to declare a dependency on tzdata. If neither system data nor tzdata are available, all calls to [`ZoneInfo`](#zoneinfo.ZoneInfo "zoneinfo.ZoneInfo") will raise [`ZoneInfoNotFoundError`](#zoneinfo.ZoneInfoNotFoundError "zoneinfo.ZoneInfoNotFoundError"). ### Configuring the data sources When `ZoneInfo(key)` is called, the constructor first searches the directories specified in [`TZPATH`](#zoneinfo.TZPATH "zoneinfo.TZPATH") for a file matching `key`, and on failure looks for a match in the tzdata package. This behavior can be configured in three ways: 1. 
The default [`TZPATH`](#zoneinfo.TZPATH "zoneinfo.TZPATH") when not otherwise specified can be configured at [compile time](#zoneinfo-data-compile-time-config). 2. [`TZPATH`](#zoneinfo.TZPATH "zoneinfo.TZPATH") can be configured using [an environment variable](#zoneinfo-data-environment-var). 3. At [runtime](#zoneinfo-data-runtime-config), the search path can be manipulated using the [`reset_tzpath()`](#zoneinfo.reset_tzpath "zoneinfo.reset_tzpath") function. #### Compile-time configuration The default [`TZPATH`](#zoneinfo.TZPATH "zoneinfo.TZPATH") includes several common deployment locations for the time zone database (except on Windows, where there are no “well-known” locations for time zone data). On POSIX systems, downstream distributors and those building Python from source who know where their system time zone data is deployed may change the default time zone path by specifying the compile-time option `TZPATH` (or, more likely, the `configure` flag `--with-tzpath`), which should be a string delimited by [`os.pathsep`](os#os.pathsep "os.pathsep"). On all platforms, the configured value is available as the `TZPATH` key in [`sysconfig.get_config_var()`](sysconfig#sysconfig.get_config_var "sysconfig.get_config_var"). #### Environment configuration When initializing [`TZPATH`](#zoneinfo.TZPATH "zoneinfo.TZPATH") (either at import time or whenever [`reset_tzpath()`](#zoneinfo.reset_tzpath "zoneinfo.reset_tzpath") is called with no arguments), the `zoneinfo` module will use the environment variable `PYTHONTZPATH`, if it exists, to set the search path. `PYTHONTZPATH` This is an [`os.pathsep`](os#os.pathsep "os.pathsep")-separated string containing the time zone search path to use. It must consist of only absolute rather than relative paths. Relative components specified in `PYTHONTZPATH` will not be used, but otherwise the behavior when a relative path is specified is implementation-defined; CPython will raise [`InvalidTZPathWarning`](#zoneinfo.InvalidTZPathWarning "zoneinfo.InvalidTZPathWarning"), but other implementations are free to silently ignore the erroneous component or raise an exception. To set the system to ignore the system data and use the tzdata package instead, set `PYTHONTZPATH=""`. #### Runtime configuration The TZ search path can also be configured at runtime using the [`reset_tzpath()`](#zoneinfo.reset_tzpath "zoneinfo.reset_tzpath") function. This is generally not an advisable operation, though it is reasonable to use it in test functions that require the use of a specific time zone path (or require disabling access to the system time zones). The `ZoneInfo` class -------------------- `class zoneinfo.ZoneInfo(key)` A concrete [`datetime.tzinfo`](datetime#datetime.tzinfo "datetime.tzinfo") subclass that represents an IANA time zone specified by the string `key`. Calls to the primary constructor will always return objects that compare identically; put another way, barring cache invalidation via [`ZoneInfo.clear_cache()`](#zoneinfo.ZoneInfo.clear_cache "zoneinfo.ZoneInfo.clear_cache"), for all values of `key`, the following assertion will always be true: ``` a = ZoneInfo(key) b = ZoneInfo(key) assert a is b ``` `key` must be in the form of a relative, normalized POSIX path, with no up-level references. The constructor will raise [`ValueError`](exceptions#ValueError "ValueError") if a non-conforming key is passed. If no file matching `key` is found, the constructor will raise [`ZoneInfoNotFoundError`](#zoneinfo.ZoneInfoNotFoundError "zoneinfo.ZoneInfoNotFoundError"). 
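A short sketch of the guarantees just described — the caching behavior of the primary constructor and its two failure modes (`Mars/Olympus_Mons` is a deliberately nonexistent key used for illustration):

```
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

# The primary constructor caches: equal keys yield the identical object.
assert ZoneInfo("America/New_York") is ZoneInfo("America/New_York")

# A non-conforming key (here, an up-level reference) raises ValueError.
try:
    ZoneInfo("../America/New_York")
except ValueError as exc:
    print("bad key:", exc)

# A well-formed key with no matching data raises ZoneInfoNotFoundError.
try:
    ZoneInfo("Mars/Olympus_Mons")
except ZoneInfoNotFoundError as exc:
    print("not found:", exc)
```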
The `ZoneInfo` class has two alternate constructors: `classmethod ZoneInfo.from_file(fobj, /, key=None)` Constructs a `ZoneInfo` object from a file-like object returning bytes (e.g. a file opened in binary mode or an [`io.BytesIO`](io#io.BytesIO "io.BytesIO") object). Unlike the primary constructor, this always constructs a new object. The `key` parameter sets the name of the zone for the purposes of [`__str__()`](../reference/datamodel#object.__str__ "object.__str__") and [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__"). Objects created via this constructor cannot be pickled (see [pickling](#pickling)). `classmethod ZoneInfo.no_cache(key)` An alternate constructor that bypasses the constructor’s cache. It is identical to the primary constructor, but returns a new object on each call. This is most likely to be useful for testing or demonstration purposes, but it can also be used to create a system with a different cache invalidation strategy. Objects created via this constructor will also bypass the cache of a deserializing process when unpickled. Caution Using this constructor may change the semantics of your datetimes in surprising ways; only use it if you know that you need to. The following class methods are also available: `classmethod ZoneInfo.clear_cache(*, only_keys=None)` A method for invalidating the cache on the `ZoneInfo` class. If no arguments are passed, all caches are invalidated and the next call to the primary constructor for each key will return a new instance. If an iterable of key names is passed to the `only_keys` parameter, only the specified keys will be removed from the cache. Keys passed to `only_keys` but not found in the cache are ignored. Warning Invoking this function may change the semantics of datetimes using `ZoneInfo` in surprising ways; this modifies process-wide global state and thus may have wide-ranging effects. Only use it if you know that you need to. The class has one attribute: `ZoneInfo.key` This is a read-only [attribute](../glossary#term-attribute) that returns the value of `key` passed to the constructor, which should be a lookup key in the IANA time zone database (e.g. `America/New_York`, `Europe/Paris` or `Asia/Tokyo`). For zones constructed from a file without specifying a `key` parameter, this will be set to `None`. Note Although it is a somewhat common practice to expose these to end users, these values are designed to be primary keys for representing the relevant zones and not necessarily user-facing elements. Projects like CLDR (the Unicode Common Locale Data Repository) can be used to get more user-friendly strings from these keys. ### String representations The string representation returned when calling [`str`](stdtypes#str "str") on a [`ZoneInfo`](#zoneinfo.ZoneInfo "zoneinfo.ZoneInfo") object defaults to using the [`ZoneInfo.key`](#zoneinfo.ZoneInfo.key "zoneinfo.ZoneInfo.key") attribute (see the note on usage in the attribute documentation): ``` >>> zone = ZoneInfo("Pacific/Kwajalein") >>> str(zone) 'Pacific/Kwajalein' >>> dt = datetime(2020, 4, 1, 3, 15, tzinfo=zone) >>> f"{dt.isoformat()} [{dt.tzinfo}]" '2020-04-01T03:15:00+12:00 [Pacific/Kwajalein]' ``` For objects constructed from a file without specifying a `key` parameter, `str` falls back to calling [`repr()`](functions#repr "repr"). `ZoneInfo`’s `repr` is implementation-defined and not necessarily stable between versions, but it is guaranteed not to be a valid `ZoneInfo` key. 
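To contrast the three constructors, here is a small sketch; the time zone database path assumes a conventional POSIX layout and may differ on your system:

```
from zoneinfo import ZoneInfo

a = ZoneInfo("Europe/Paris")            # primary constructor: cached
b = ZoneInfo.no_cache("Europe/Paris")   # bypasses the cache: always a new object
print(a is ZoneInfo("Europe/Paris"))    # True
print(a is b)                           # False

# from_file() also builds a fresh object; key only affects str() and repr().
with open("/usr/share/zoneinfo/Europe/Paris", "rb") as f:  # assumed POSIX path
    c = ZoneInfo.from_file(f, key="Europe/Paris")
print(str(c))                           # 'Europe/Paris'
```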
### Pickle serialization Rather than serializing all transition data, `ZoneInfo` objects are serialized by key, and `ZoneInfo` objects constructed from files (even those with a value for `key` specified) cannot be pickled. The behavior of a `ZoneInfo` object depends on how it was constructed: 1. `ZoneInfo(key)`: When constructed with the primary constructor, a `ZoneInfo` object is serialized by key, and when deserialized, the deserializing process uses the primary constructor, and thus these are expected to be the same object as other references to the same time zone. For example, if `europe_berlin_pkl` is a string containing a pickle constructed from `ZoneInfo("Europe/Berlin")`, one would expect the following behavior: ``` >>> a = ZoneInfo("Europe/Berlin") >>> b = pickle.loads(europe_berlin_pkl) >>> a is b True ``` 2. `ZoneInfo.no_cache(key)`: When constructed from the cache-bypassing constructor, the `ZoneInfo` object is also serialized by key, but when deserialized, the deserializing process uses the cache-bypassing constructor. If `europe_berlin_pkl_nc` is a string containing a pickle constructed from `ZoneInfo.no_cache("Europe/Berlin")`, one would expect the following behavior: ``` >>> a = ZoneInfo("Europe/Berlin") >>> b = pickle.loads(europe_berlin_pkl_nc) >>> a is b False ``` 3. `ZoneInfo.from_file(fobj, /, key=None)`: When constructed from a file, the `ZoneInfo` object raises an exception on pickling. If an end user wants to pickle a `ZoneInfo` constructed from a file, it is recommended that they use a wrapper type or a custom serialization function: either serializing by key or storing the contents of the file object and serializing that. This method of serialization requires that the time zone data for the required key be available on both the serializing and deserializing side, similar to the way that references to classes and functions are expected to exist in both the serializing and deserializing environments. It also means that no guarantees are made about the consistency of results when unpickling a `ZoneInfo` pickled in an environment with a different version of the time zone data. Functions --------- `zoneinfo.available_timezones()` Get a set containing all the valid keys for IANA time zones available anywhere on the time zone path. This is recalculated on every call to the function. This function only includes canonical zone names and does not include “special” zones such as those under the `posix/` and `right/` directories, or the `posixrules` zone. Caution This function may open a large number of files, as the best way to determine if a file on the time zone path is a valid time zone is to read the “magic string” at the beginning. Note These values are not designed to be exposed to end-users; for user-facing elements, applications should use something like CLDR (the Unicode Common Locale Data Repository) to get more user-friendly strings. See also the cautionary note on [`ZoneInfo.key`](#zoneinfo.ZoneInfo.key "zoneinfo.ZoneInfo.key"). `zoneinfo.reset_tzpath(to=None)` Sets or resets the time zone search path ([`TZPATH`](#zoneinfo.TZPATH "zoneinfo.TZPATH")) for the module. When called with no arguments, [`TZPATH`](#zoneinfo.TZPATH "zoneinfo.TZPATH") is set to the default value. Calling `reset_tzpath` will not invalidate the [`ZoneInfo`](#zoneinfo.ZoneInfo "zoneinfo.ZoneInfo") cache, and so calls to the primary `ZoneInfo` constructor will only use the new `TZPATH` in the case of a cache miss. 
The `to` parameter must be a [sequence](../glossary#term-sequence) of strings or [`os.PathLike`](os#os.PathLike "os.PathLike") and not a string, all of which must be absolute paths. [`ValueError`](exceptions#ValueError "ValueError") will be raised if something other than an absolute path is passed. Globals ------- `zoneinfo.TZPATH` A read-only sequence representing the time zone search path – when constructing a `ZoneInfo` from a key, the key is joined to each entry in the `TZPATH`, and the first file found is used. `TZPATH` may contain only absolute paths, never relative paths, regardless of how it is configured. The object that `zoneinfo.TZPATH` points to may change in response to a call to [`reset_tzpath()`](#zoneinfo.reset_tzpath "zoneinfo.reset_tzpath"), so it is recommended to use `zoneinfo.TZPATH` rather than importing `TZPATH` from `zoneinfo` or assigning a long-lived variable to `zoneinfo.TZPATH`. For more information on configuring the time zone search path, see [Configuring the data sources](#zoneinfo-data-configuration). Exceptions and warnings ----------------------- `exception zoneinfo.ZoneInfoNotFoundError` Raised when construction of a [`ZoneInfo`](#zoneinfo.ZoneInfo "zoneinfo.ZoneInfo") object fails because the specified key could not be found on the system. This is a subclass of [`KeyError`](exceptions#KeyError "KeyError"). `exception zoneinfo.InvalidTZPathWarning` Raised when [`PYTHONTZPATH`](#envvar-PYTHONTZPATH) contains an invalid component that will be filtered out, such as a relative path.
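As a hedged sketch of how the pieces above fit together, the following restricts the search path and falls back when a key is missing; the directory and the availability of the `"UTC"` key are assumptions about the host system:

```
import zoneinfo
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

# Absolute paths only; a relative path would raise ValueError.
zoneinfo.reset_tzpath(to=["/usr/share/zoneinfo"])

try:
    tz = ZoneInfo("Mars/Olympus_Mons")  # hypothetical, invalid key
except ZoneInfoNotFoundError:
    tz = ZoneInfo("UTC")  # assumed to be available on this system
print(tz.key)
```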
python calendar — General calendar-related functions calendar — General calendar-related functions ============================================= **Source code:** [Lib/calendar.py](https://github.com/python/cpython/tree/3.9/Lib/calendar.py) This module allows you to output calendars like the Unix **cal** program, and provides additional useful functions related to the calendar. By default, these calendars have Monday as the first day of the week, and Sunday as the last (the European convention). Use [`setfirstweekday()`](#calendar.setfirstweekday "calendar.setfirstweekday") to set the first day of the week to Sunday (6) or to any other weekday. Parameters that specify dates are given as integers. For related functionality, see also the [`datetime`](datetime#module-datetime "datetime: Basic date and time types.") and [`time`](time#module-time "time: Time access and conversions.") modules. The functions and classes defined in this module use an idealized calendar, the current Gregorian calendar extended indefinitely in both directions. This matches the definition of the “proleptic Gregorian” calendar in Dershowitz and Reingold’s book “Calendrical Calculations”, where it’s the base calendar for all computations. Zero and negative years are interpreted as prescribed by the ISO 8601 standard. Year 0 is 1 BC, year -1 is 2 BC, and so on. `class calendar.Calendar(firstweekday=0)` Creates a [`Calendar`](#calendar.Calendar "calendar.Calendar") object. *firstweekday* is an integer specifying the first day of the week. `0` is Monday (the default), `6` is Sunday. A [`Calendar`](#calendar.Calendar "calendar.Calendar") object provides several methods that can be used for preparing the calendar data for formatting. This class doesn’t do any formatting itself. This is the job of subclasses. [`Calendar`](#calendar.Calendar "calendar.Calendar") instances have the following methods: `iterweekdays()` Return an iterator for the week day numbers that will be used for one week. The first value from the iterator will be the same as the value of the [`firstweekday`](#calendar.firstweekday "calendar.firstweekday") property. `itermonthdates(year, month)` Return an iterator for the month *month* (1–12) in the year *year*. This iterator will return all days (as [`datetime.date`](datetime#datetime.date "datetime.date") objects) for the month and all days before the start of the month or after the end of the month that are required to get a complete week. `itermonthdays(year, month)` Return an iterator for the month *month* in the year *year* similar to [`itermonthdates()`](#calendar.Calendar.itermonthdates "calendar.Calendar.itermonthdates"), but not restricted by the [`datetime.date`](datetime#datetime.date "datetime.date") range. Days returned will simply be day of the month numbers. For the days outside of the specified month, the day number is `0`. `itermonthdays2(year, month)` Return an iterator for the month *month* in the year *year* similar to [`itermonthdates()`](#calendar.Calendar.itermonthdates "calendar.Calendar.itermonthdates"), but not restricted by the [`datetime.date`](datetime#datetime.date "datetime.date") range. Days returned will be tuples consisting of a day of the month number and a week day number. `itermonthdays3(year, month)` Return an iterator for the month *month* in the year *year* similar to [`itermonthdates()`](#calendar.Calendar.itermonthdates "calendar.Calendar.itermonthdates"), but not restricted by the [`datetime.date`](datetime#datetime.date "datetime.date") range. 
Days returned will be tuples consisting of a year, a month and a day of the month numbers. New in version 3.7.

`itermonthdays4(year, month)` Return an iterator for the month *month* in the year *year* similar to [`itermonthdates()`](#calendar.Calendar.itermonthdates "calendar.Calendar.itermonthdates"), but not restricted by the [`datetime.date`](datetime#datetime.date "datetime.date") range. Days returned will be tuples consisting of a year, a month, a day of the month, and a day of the week numbers. New in version 3.7.

`monthdatescalendar(year, month)` Return a list of the weeks in the month *month* of the *year* as full weeks. Weeks are lists of seven [`datetime.date`](datetime#datetime.date "datetime.date") objects.

`monthdays2calendar(year, month)` Return a list of the weeks in the month *month* of the *year* as full weeks. Weeks are lists of seven tuples of day numbers and weekday numbers.

`monthdayscalendar(year, month)` Return a list of the weeks in the month *month* of the *year* as full weeks. Weeks are lists of seven day numbers.

`yeardatescalendar(year, width=3)` Return the data for the specified year ready for formatting. The return value is a list of month rows. Each month row contains up to *width* months (defaulting to 3). Each month contains between 4 and 6 weeks and each week contains 1–7 days. Days are [`datetime.date`](datetime#datetime.date "datetime.date") objects.

`yeardays2calendar(year, width=3)` Return the data for the specified year ready for formatting (similar to [`yeardatescalendar()`](#calendar.Calendar.yeardatescalendar "calendar.Calendar.yeardatescalendar")). Entries in the week lists are tuples of day numbers and weekday numbers. Day numbers outside this month are zero.

`yeardayscalendar(year, width=3)` Return the data for the specified year ready for formatting (similar to [`yeardatescalendar()`](#calendar.Calendar.yeardatescalendar "calendar.Calendar.yeardatescalendar")). Entries in the week lists are day numbers. Day numbers outside this month are zero.

`class calendar.TextCalendar(firstweekday=0)` This class can be used to generate plain text calendars. [`TextCalendar`](#calendar.TextCalendar "calendar.TextCalendar") instances have the following methods:

`formatmonth(theyear, themonth, w=0, l=0)` Return a month’s calendar in a multi-line string. If *w* is provided, it specifies the width of the date columns, which are centered. If *l* is given, it specifies the number of lines that each week will use. Depends on the first weekday as specified in the constructor or set by the [`setfirstweekday()`](#calendar.setfirstweekday "calendar.setfirstweekday") method.

`prmonth(theyear, themonth, w=0, l=0)` Print a month’s calendar as returned by [`formatmonth()`](#calendar.TextCalendar.formatmonth "calendar.TextCalendar.formatmonth").

`formatyear(theyear, w=2, l=1, c=6, m=3)` Return an *m*-column calendar for an entire year as a multi-line string. Optional parameters *w*, *l*, and *c* are for date column width, lines per week, and number of spaces between month columns, respectively. Depends on the first weekday as specified in the constructor or set by the [`setfirstweekday()`](#calendar.setfirstweekday "calendar.setfirstweekday") method. The earliest year for which a calendar can be generated is platform-dependent.

`pryear(theyear, w=2, l=1, c=6, m=3)` Print the calendar for an entire year as returned by [`formatyear()`](#calendar.TextCalendar.formatyear "calendar.TextCalendar.formatyear").
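For illustration, a small sketch of `TextCalendar` using the methods just described; the exact output shape depends on the *w* and *l* parameters:

```
import calendar

tc = calendar.TextCalendar(firstweekday=calendar.SUNDAY)
print(tc.formatmonth(2021, 7))  # July 2021, weeks starting on Sunday
tc.prmonth(2021, 7, w=4)        # same month with wider date columns
```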
`class calendar.HTMLCalendar(firstweekday=0)` This class can be used to generate HTML calendars. `HTMLCalendar` instances have the following methods:

`formatmonth(theyear, themonth, withyear=True)` Return a month’s calendar as an HTML table. If *withyear* is true the year will be included in the header, otherwise just the month name will be used.

`formatyear(theyear, width=3)` Return a year’s calendar as an HTML table. *width* (defaulting to 3) specifies the number of months per row.

`formatyearpage(theyear, width=3, css='calendar.css', encoding=None)` Return a year’s calendar as a complete HTML page. *width* (defaulting to 3) specifies the number of months per row. *css* is the name for the cascading style sheet to be used. [`None`](constants#None "None") can be passed if no style sheet should be used. *encoding* specifies the encoding to be used for the output (defaulting to the system default encoding).

`HTMLCalendar` has the following attributes you can override to customize the CSS classes used by the calendar:

`cssclasses` A list of CSS classes used for each weekday. The default class list is:

```
cssclasses = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
```

More styles can be added for each day:

```
cssclasses = ["mon text-bold", "tue", "wed", "thu", "fri", "sat", "sun red"]
```

Note that the length of this list must be seven items.

`cssclass_noday` The CSS class for a weekday occurring in the previous or coming month. New in version 3.7.

`cssclasses_weekday_head` A list of CSS classes used for weekday names in the header row. The default is the same as [`cssclasses`](#calendar.HTMLCalendar.cssclasses "calendar.HTMLCalendar.cssclasses"). New in version 3.7.

`cssclass_month_head` The month’s head CSS class (used by `formatmonthname()`). The default value is `"month"`. New in version 3.7.

`cssclass_month` The CSS class for the whole month’s table (used by [`formatmonth()`](#calendar.HTMLCalendar.formatmonth "calendar.HTMLCalendar.formatmonth")). The default value is `"month"`. New in version 3.7.

`cssclass_year` The CSS class for the whole year’s table of tables (used by [`formatyear()`](#calendar.HTMLCalendar.formatyear "calendar.HTMLCalendar.formatyear")). The default value is `"year"`. New in version 3.7.

`cssclass_year_head` The CSS class for the table head for the whole year (used by [`formatyear()`](#calendar.HTMLCalendar.formatyear "calendar.HTMLCalendar.formatyear")). The default value is `"year"`. New in version 3.7.

Note that although the naming for the above described class attributes is singular (e.g. `cssclass_month`, `cssclass_noday`), one can replace the single CSS class with a space separated list of CSS classes, for example:

```
"text-bold text-red"
```

Here is an example of how `HTMLCalendar` can be customized:

```
class CustomHTMLCal(calendar.HTMLCalendar):
    cssclasses = [style + " text-nowrap" for style in
                  calendar.HTMLCalendar.cssclasses]
    cssclass_month_head = "text-center month-head"
    cssclass_month = "text-center month"
    cssclass_year = "text-italic lead"
```

`class calendar.LocaleTextCalendar(firstweekday=0, locale=None)` This subclass of [`TextCalendar`](#calendar.TextCalendar "calendar.TextCalendar") can be passed a locale name in the constructor and will return month and weekday names in the specified locale. If this locale includes an encoding, all strings containing month and weekday names will be returned as unicode.
`class calendar.LocaleHTMLCalendar(firstweekday=0, locale=None)` This subclass of [`HTMLCalendar`](#calendar.HTMLCalendar "calendar.HTMLCalendar") can be passed a locale name in the constructor and will return month and weekday names in the specified locale. If this locale includes an encoding, all strings containing month and weekday names will be returned as unicode.

Note The `formatweekday()` and `formatmonthname()` methods of these two classes temporarily change the current locale to the given *locale*. Because the current locale is a process-wide setting, they are not thread-safe.

For simple text calendars this module provides the following functions.

`calendar.setfirstweekday(weekday)` Sets the weekday (`0` is Monday, `6` is Sunday) to start each week. The values `MONDAY`, `TUESDAY`, `WEDNESDAY`, `THURSDAY`, `FRIDAY`, `SATURDAY`, and `SUNDAY` are provided for convenience. For example, to set the first weekday to Sunday:

```
import calendar
calendar.setfirstweekday(calendar.SUNDAY)
```

`calendar.firstweekday()` Returns the current setting for the weekday to start each week.

`calendar.isleap(year)` Returns [`True`](constants#True "True") if *year* is a leap year, otherwise [`False`](constants#False "False").

`calendar.leapdays(y1, y2)` Returns the number of leap years in the range from *y1* to *y2* (exclusive), where *y1* and *y2* are years. This function works for ranges spanning a century change.

`calendar.weekday(year, month, day)` Returns the day of the week (`0` is Monday) for *year* (`1970`–…), *month* (`1`–`12`), *day* (`1`–`31`).

`calendar.weekheader(n)` Return a header containing abbreviated weekday names. *n* specifies the width in characters for one weekday.

`calendar.monthrange(year, month)` Returns the weekday of the first day of the month and the number of days in the month, for the specified *year* and *month*.

`calendar.monthcalendar(year, month)` Returns a matrix representing a month’s calendar. Each row represents a week; days outside of the month are represented by zeros. Each week begins with Monday unless set by [`setfirstweekday()`](#calendar.setfirstweekday "calendar.setfirstweekday").

`calendar.prmonth(theyear, themonth, w=0, l=0)` Prints a month’s calendar as returned by [`month()`](#calendar.month "calendar.month").

`calendar.month(theyear, themonth, w=0, l=0)` Returns a month’s calendar in a multi-line string using the `formatmonth()` of the [`TextCalendar`](#calendar.TextCalendar "calendar.TextCalendar") class.

`calendar.prcal(year, w=0, l=0, c=6, m=3)` Prints the calendar for an entire year as returned by [`calendar()`](#module-calendar "calendar: Functions for working with calendars, including some emulation of the Unix cal program.").

`calendar.calendar(year, w=2, l=1, c=6, m=3)` Returns a 3-column calendar for an entire year as a multi-line string using the `formatyear()` of the [`TextCalendar`](#calendar.TextCalendar "calendar.TextCalendar") class.

`calendar.timegm(tuple)` An unrelated but handy function that takes a time tuple such as returned by the [`gmtime()`](time#time.gmtime "time.gmtime") function in the [`time`](time#module-time "time: Time access and conversions.") module, and returns the corresponding Unix timestamp value, assuming an epoch of 1970, and the POSIX encoding. In fact, [`time.gmtime()`](time#time.gmtime "time.gmtime") and [`timegm()`](#calendar.timegm "calendar.timegm") are each other’s inverse.
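A few of the module-level helpers above in a short sketch; the final line demonstrates the inverse relationship between `timegm()` and `time.gmtime()` just mentioned:

```
import calendar
import time

print(calendar.isleap(2024))            # True
print(calendar.weekday(2021, 7, 1))     # 3 (Thursday)
print(calendar.monthrange(2021, 2))     # (0, 28): Feb 2021 starts on a Monday

# timegm() inverts time.gmtime(), so the epoch round-trips to 0.
print(calendar.timegm(time.gmtime(0)))  # 0
```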
The [`calendar`](#module-calendar "calendar: Functions for working with calendars, including some emulation of the Unix cal program.") module exports the following data attributes: `calendar.day_name` An array that represents the days of the week in the current locale. `calendar.day_abbr` An array that represents the abbreviated days of the week in the current locale. `calendar.month_name` An array that represents the months of the year in the current locale. This follows normal convention of January being month number 1, so it has a length of 13 and `month_name[0]` is the empty string. `calendar.month_abbr` An array that represents the abbreviated months of the year in the current locale. This follows normal convention of January being month number 1, so it has a length of 13 and `month_abbr[0]` is the empty string. See also `Module` [`datetime`](datetime#module-datetime "datetime: Basic date and time types.") Object-oriented interface to dates and times with similar functionality to the [`time`](time#module-time "time: Time access and conversions.") module. `Module` [`time`](time#module-time "time: Time access and conversions.") Low-level time related functions. python xml.sax — Support for SAX2 parsers xml.sax — Support for SAX2 parsers ================================== **Source code:** [Lib/xml/sax/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/xml/sax/__init__.py) The [`xml.sax`](#module-xml.sax "xml.sax: Package containing SAX2 base classes and convenience functions.") package provides a number of modules which implement the Simple API for XML (SAX) interface for Python. The package itself provides the SAX exceptions and the convenience functions which will be most used by users of the SAX API. Warning The [`xml.sax`](#module-xml.sax "xml.sax: Package containing SAX2 base classes and convenience functions.") module is not secure against maliciously constructed data. If you need to parse untrusted or unauthenticated data see [XML vulnerabilities](xml#xml-vulnerabilities). Changed in version 3.7.1: The SAX parser no longer processes general external entities by default to increase security. Before, the parser created network connections to fetch remote files or loaded local files from the file system for DTD and entities. The feature can be enabled again with method [`setFeature()`](xml.sax.reader#xml.sax.xmlreader.XMLReader.setFeature "xml.sax.xmlreader.XMLReader.setFeature") on the parser object and argument [`feature_external_ges`](xml.sax.handler#xml.sax.handler.feature_external_ges "xml.sax.handler.feature_external_ges"). The convenience functions are: `xml.sax.make_parser(parser_list=[])` Create and return a SAX [`XMLReader`](xml.sax.reader#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") object. The first parser found will be used. If *parser\_list* is provided, it must be an iterable of strings which name modules that have a function named `create_parser()`. Modules listed in *parser\_list* will be used before modules in the default list of parsers. Changed in version 3.8: The *parser\_list* argument can be any iterable, not just a list. `xml.sax.parse(filename_or_stream, handler, error_handler=handler.ErrorHandler())` Create a SAX parser and use it to parse a document. The document, passed in as *filename\_or\_stream*, can be a filename or a file object. The *handler* parameter needs to be a SAX [`ContentHandler`](xml.sax.handler#xml.sax.handler.ContentHandler "xml.sax.handler.ContentHandler") instance. 
If *error\_handler* is given, it must be a SAX [`ErrorHandler`](xml.sax.handler#xml.sax.handler.ErrorHandler "xml.sax.handler.ErrorHandler") instance; if omitted, [`SAXParseException`](#xml.sax.SAXParseException "xml.sax.SAXParseException") will be raised on all errors. There is no return value; all work must be done by the *handler* passed in. `xml.sax.parseString(string, handler, error_handler=handler.ErrorHandler())` Similar to [`parse()`](#xml.sax.parse "xml.sax.parse"), but parses from a buffer *string* received as a parameter. *string* must be a [`str`](stdtypes#str "str") instance or a [bytes-like object](../glossary#term-bytes-like-object). Changed in version 3.5: Added support of [`str`](stdtypes#str "str") instances. A typical SAX application uses three kinds of objects: readers, handlers and input sources. “Reader” in this context is another term for parser, i.e. some piece of code that reads the bytes or characters from the input source, and produces a sequence of events. The events then get distributed to the handler objects, i.e. the reader invokes a method on the handler. A SAX application must therefore obtain a reader object, create or open the input sources, create the handlers, and connect these objects all together. As the final step of preparation, the reader is called to parse the input. During parsing, methods on the handler objects are called based on structural and syntactic events from the input data. For these objects, only the interfaces are relevant; they are normally not instantiated by the application itself. Since Python does not have an explicit notion of interface, they are formally introduced as classes, but applications may use implementations which do not inherit from the provided classes. The [`InputSource`](xml.sax.reader#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource"), [`Locator`](xml.sax.reader#xml.sax.xmlreader.Locator "xml.sax.xmlreader.Locator"), `Attributes`, `AttributesNS`, and [`XMLReader`](xml.sax.reader#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") interfaces are defined in the module [`xml.sax.xmlreader`](xml.sax.reader#module-xml.sax.xmlreader "xml.sax.xmlreader: Interface which SAX-compliant XML parsers must implement."). The handler interfaces are defined in [`xml.sax.handler`](xml.sax.handler#module-xml.sax.handler "xml.sax.handler: Base classes for SAX event handlers."). For convenience, [`InputSource`](xml.sax.reader#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource") (which is often instantiated directly) and the handler classes are also available from [`xml.sax`](#module-xml.sax "xml.sax: Package containing SAX2 base classes and convenience functions."). These interfaces are described below. In addition to these classes, [`xml.sax`](#module-xml.sax "xml.sax: Package containing SAX2 base classes and convenience functions.") provides the following exception classes. `exception xml.sax.SAXException(msg, exception=None)` Encapsulate an XML error or warning. This class can contain basic error or warning information from either the XML parser or the application: it can be subclassed to provide additional functionality or to add localization. Note that although the handlers defined in the [`ErrorHandler`](xml.sax.handler#xml.sax.handler.ErrorHandler "xml.sax.handler.ErrorHandler") interface receive instances of this exception, it is not required to actually raise the exception — it is also useful as a container for information. 
When instantiated, *msg* should be a human-readable description of the error. The optional *exception* parameter, if given, should be `None` or an exception that was caught by the parsing code and is being passed along as information. This is the base class for the other SAX exception classes. `exception xml.sax.SAXParseException(msg, exception, locator)` Subclass of [`SAXException`](#xml.sax.SAXException "xml.sax.SAXException") raised on parse errors. Instances of this class are passed to the methods of the SAX [`ErrorHandler`](xml.sax.handler#xml.sax.handler.ErrorHandler "xml.sax.handler.ErrorHandler") interface to provide information about the parse error. This class supports the SAX [`Locator`](xml.sax.reader#xml.sax.xmlreader.Locator "xml.sax.xmlreader.Locator") interface as well as the [`SAXException`](#xml.sax.SAXException "xml.sax.SAXException") interface. `exception xml.sax.SAXNotRecognizedException(msg, exception=None)` Subclass of [`SAXException`](#xml.sax.SAXException "xml.sax.SAXException") raised when a SAX [`XMLReader`](xml.sax.reader#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") is confronted with an unrecognized feature or property. SAX applications and extensions may use this class for similar purposes. `exception xml.sax.SAXNotSupportedException(msg, exception=None)` Subclass of [`SAXException`](#xml.sax.SAXException "xml.sax.SAXException") raised when a SAX [`XMLReader`](xml.sax.reader#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") is asked to enable a feature that is not supported, or to set a property to a value that the implementation does not support. SAX applications and extensions may use this class for similar purposes. See also [SAX: The Simple API for XML](http://www.saxproject.org/) This site is the focal point for the definition of the SAX API. It provides a Java implementation and online documentation. Links to implementations and historical information are also available. `Module` [`xml.sax.handler`](xml.sax.handler#module-xml.sax.handler "xml.sax.handler: Base classes for SAX event handlers.") Definitions of the interfaces for application-provided objects. `Module` [`xml.sax.saxutils`](xml.sax.utils#module-xml.sax.saxutils "xml.sax.saxutils: Convenience functions and classes for use with SAX.") Convenience functions for use in SAX applications. `Module` [`xml.sax.xmlreader`](xml.sax.reader#module-xml.sax.xmlreader "xml.sax.xmlreader: Interface which SAX-compliant XML parsers must implement.") Definitions of the interfaces for parser-provided objects. SAXException Objects -------------------- The [`SAXException`](#xml.sax.SAXException "xml.sax.SAXException") exception class supports the following methods: `SAXException.getMessage()` Return a human-readable message describing the error condition. `SAXException.getException()` Return an encapsulated exception object, or `None`.
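To tie the reader/handler flow described above together, here is a minimal sketch of a content handler driven by `parseString()`; the element name `title` and the input document are purely illustrative:

```
import xml.sax

class TitleHandler(xml.sax.ContentHandler):
    """Print the text content of every <title> element."""

    def __init__(self):
        super().__init__()
        self.in_title = False

    def startElement(self, name, attrs):
        if name == "title":
            self.in_title = True

    def endElement(self, name):
        if name == "title":
            self.in_title = False

    def characters(self, content):
        if self.in_title:
            print(content)

xml.sax.parseString(b"<doc><title>Hello, SAX</title></doc>", TitleHandler())
```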
python xmlrpc.server — Basic XML-RPC servers xmlrpc.server — Basic XML-RPC servers ===================================== **Source code:** [Lib/xmlrpc/server.py](https://github.com/python/cpython/tree/3.9/Lib/xmlrpc/server.py) The [`xmlrpc.server`](#module-xmlrpc.server "xmlrpc.server: Basic XML-RPC server implementations.") module provides a basic server framework for XML-RPC servers written in Python. Servers can either be free standing, using [`SimpleXMLRPCServer`](#xmlrpc.server.SimpleXMLRPCServer "xmlrpc.server.SimpleXMLRPCServer"), or embedded in a CGI environment, using [`CGIXMLRPCRequestHandler`](#xmlrpc.server.CGIXMLRPCRequestHandler "xmlrpc.server.CGIXMLRPCRequestHandler"). Warning The [`xmlrpc.server`](#module-xmlrpc.server "xmlrpc.server: Basic XML-RPC server implementations.") module is not secure against maliciously constructed data. If you need to parse untrusted or unauthenticated data see [XML vulnerabilities](xml#xml-vulnerabilities). `class xmlrpc.server.SimpleXMLRPCServer(addr, requestHandler=SimpleXMLRPCRequestHandler, logRequests=True, allow_none=False, encoding=None, bind_and_activate=True, use_builtin_types=False)` Create a new server instance. This class provides methods for registration of functions that can be called by the XML-RPC protocol. The *requestHandler* parameter should be a factory for request handler instances; it defaults to [`SimpleXMLRPCRequestHandler`](#xmlrpc.server.SimpleXMLRPCRequestHandler "xmlrpc.server.SimpleXMLRPCRequestHandler"). The *addr* and *requestHandler* parameters are passed to the [`socketserver.TCPServer`](socketserver#socketserver.TCPServer "socketserver.TCPServer") constructor. If *logRequests* is true (the default), requests will be logged; setting this parameter to false will turn off logging. The *allow\_none* and *encoding* parameters are passed on to [`xmlrpc.client`](xmlrpc.client#module-xmlrpc.client "xmlrpc.client: XML-RPC client access.") and control the XML-RPC responses that will be returned from the server. The *bind\_and\_activate* parameter controls whether `server_bind()` and `server_activate()` are called immediately by the constructor; it defaults to true. Setting it to false allows code to manipulate the *allow\_reuse\_address* class variable before the address is bound. The *use\_builtin\_types* parameter is passed to the [`loads()`](xmlrpc.client#xmlrpc.client.loads "xmlrpc.client.loads") function and controls which types are processed when date/times values or binary data are received; it defaults to false. Changed in version 3.3: The *use\_builtin\_types* flag was added. `class xmlrpc.server.CGIXMLRPCRequestHandler(allow_none=False, encoding=None, use_builtin_types=False)` Create a new instance to handle XML-RPC requests in a CGI environment. The *allow\_none* and *encoding* parameters are passed on to [`xmlrpc.client`](xmlrpc.client#module-xmlrpc.client "xmlrpc.client: XML-RPC client access.") and control the XML-RPC responses that will be returned from the server. The *use\_builtin\_types* parameter is passed to the [`loads()`](xmlrpc.client#xmlrpc.client.loads "xmlrpc.client.loads") function and controls which types are processed when date/times values or binary data are received; it defaults to false. Changed in version 3.3: The *use\_builtin\_types* flag was added. `class xmlrpc.server.SimpleXMLRPCRequestHandler` Create a new request handler instance. 
This request handler supports `POST` requests and modifies logging so that the *logRequests* parameter to the [`SimpleXMLRPCServer`](#xmlrpc.server.SimpleXMLRPCServer "xmlrpc.server.SimpleXMLRPCServer") constructor is honored.

SimpleXMLRPCServer Objects
--------------------------

The [`SimpleXMLRPCServer`](#xmlrpc.server.SimpleXMLRPCServer "xmlrpc.server.SimpleXMLRPCServer") class is based on [`socketserver.TCPServer`](socketserver#socketserver.TCPServer "socketserver.TCPServer") and provides a means of creating simple, stand-alone XML-RPC servers.

`SimpleXMLRPCServer.register_function(function=None, name=None)` Register a function that can respond to XML-RPC requests. If *name* is given, it will be the method name associated with *function*, otherwise `function.__name__` will be used. *name* is a string, and may contain characters not legal in Python identifiers, including the period character.

This method can also be used as a decorator. When used as a decorator, *name* can only be given as a keyword argument to register *function* under *name*. If no *name* is given, `function.__name__` will be used.

Changed in version 3.7: [`register_function()`](#xmlrpc.server.SimpleXMLRPCServer.register_function "xmlrpc.server.SimpleXMLRPCServer.register_function") can be used as a decorator.

`SimpleXMLRPCServer.register_instance(instance, allow_dotted_names=False)` Register an object which is used to expose method names which have not been registered using [`register_function()`](#xmlrpc.server.SimpleXMLRPCServer.register_function "xmlrpc.server.SimpleXMLRPCServer.register_function"). If *instance* contains a `_dispatch()` method, it is called with the requested method name and the parameters from the request. Its API is `def _dispatch(self, method, params)` (note that *params* does not represent a variable argument list). If it calls an underlying function to perform its task, that function is called as `func(*params)`, expanding the parameter list. The return value from `_dispatch()` is returned to the client as the result. If *instance* does not have a `_dispatch()` method, it is searched for an attribute matching the name of the requested method.

If the optional *allow\_dotted\_names* argument is true and the instance does not have a `_dispatch()` method, then if the requested method name contains periods, each component of the method name is searched for individually, with the effect that a simple hierarchical search is performed. The value found from this search is then called with the parameters from the request, and the return value is passed back to the client.

Warning Enabling the *allow\_dotted\_names* option allows intruders to access your module’s global variables and may allow intruders to execute arbitrary code on your machine. Only use this option on a secure, closed network.

`SimpleXMLRPCServer.register_introspection_functions()` Registers the XML-RPC introspection functions `system.listMethods`, `system.methodHelp` and `system.methodSignature`.

`SimpleXMLRPCServer.register_multicall_functions()` Registers the XML-RPC multicall function `system.multicall`.

`SimpleXMLRPCRequestHandler.rpc_paths` An attribute value that must be a tuple listing valid path portions of the URL for receiving XML-RPC requests. Requests posted to other paths will result in a 404 “no such page” HTTP error. If this tuple is empty, all paths will be considered valid. The default value is `('/', '/RPC2')`.
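As a hedged sketch of the `_dispatch()` protocol described above (the class and the `echo` method name are illustrative, not part of the library), note that *params* arrives as a tuple rather than as expanded arguments:

```
from xmlrpc.server import SimpleXMLRPCServer

class Dispatcher:
    """Route every incoming XML-RPC call through one entry point."""

    def _dispatch(self, method, params):
        # 'method' is the requested method name; 'params' is a tuple.
        if method == "echo":
            return list(params)
        raise Exception("method %r is not supported" % method)

with SimpleXMLRPCServer(("localhost", 8000)) as server:
    server.register_instance(Dispatcher())
    server.serve_forever()
```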
### SimpleXMLRPCServer Example

Server code:

```
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.server import SimpleXMLRPCRequestHandler

# Restrict to a particular path.
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)

# Create server
with SimpleXMLRPCServer(('localhost', 8000),
                        requestHandler=RequestHandler) as server:
    server.register_introspection_functions()

    # Register pow() function; this will use the value of
    # pow.__name__ as the name, which is just 'pow'.
    server.register_function(pow)

    # Register a function under a different name
    def adder_function(x, y):
        return x + y
    server.register_function(adder_function, 'add')

    # Register an instance; all the methods of the instance are
    # published as XML-RPC methods (in this case, just 'mul').
    class MyFuncs:
        def mul(self, x, y):
            return x * y

    server.register_instance(MyFuncs())

    # Run the server's main loop
    server.serve_forever()
```

The following client code will call the methods made available by the preceding server:

```
import xmlrpc.client

s = xmlrpc.client.ServerProxy('http://localhost:8000')
print(s.pow(2,3))  # Returns 2**3 = 8
print(s.add(2,3))  # Returns 5
print(s.mul(5,2))  # Returns 5*2 = 10

# Print list of available methods
print(s.system.listMethods())
```

`register_function()` can also be used as a decorator. The previous server example can register functions using decorators:

```
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.server import SimpleXMLRPCRequestHandler

class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)

with SimpleXMLRPCServer(('localhost', 8000),
                        requestHandler=RequestHandler) as server:
    server.register_introspection_functions()

    # Register pow() function; this will use the value of
    # pow.__name__ as the name, which is just 'pow'.
    server.register_function(pow)

    # Register a function under a different name, using
    # register_function as a decorator. *name* can only be given
    # as a keyword argument.
    @server.register_function(name='add')
    def adder_function(x, y):
        return x + y

    # Register a function under function.__name__.
    @server.register_function
    def mul(x, y):
        return x * y

    server.serve_forever()
```

The following example included in the `Lib/xmlrpc/server.py` module shows a server allowing dotted names and registering a multicall function.

Warning Enabling the *allow\_dotted\_names* option allows intruders to access your module’s global variables and may allow intruders to execute arbitrary code on your machine. Only use this example within a secure, closed network.
```
import datetime
import sys

from xmlrpc.server import SimpleXMLRPCServer

class ExampleService:
    def getData(self):
        return '42'

    class currentTime:
        @staticmethod
        def getCurrentTime():
            return datetime.datetime.now()

with SimpleXMLRPCServer(("localhost", 8000)) as server:
    server.register_function(pow)
    server.register_function(lambda x,y: x+y, 'add')
    server.register_instance(ExampleService(), allow_dotted_names=True)
    server.register_multicall_functions()
    print('Serving XML-RPC on localhost port 8000')
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        print("\nKeyboard interrupt received, exiting.")
        sys.exit(0)
```

This ExampleService demo can be invoked from the command line:

```
python -m xmlrpc.server
```

The client that interacts with the above server is included in `Lib/xmlrpc/client.py`:

```
from xmlrpc.client import ServerProxy, Error, MultiCall

server = ServerProxy("http://localhost:8000")

try:
    print(server.currentTime.getCurrentTime())
except Error as v:
    print("ERROR", v)

multi = MultiCall(server)
multi.getData()
multi.pow(2,9)
multi.add(1,2)
try:
    for response in multi():
        print(response)
except Error as v:
    print("ERROR", v)
```

This client, which interacts with the demo XML-RPC server, can be invoked as:

```
python -m xmlrpc.client
```

CGIXMLRPCRequestHandler
-----------------------

The [`CGIXMLRPCRequestHandler`](#xmlrpc.server.CGIXMLRPCRequestHandler "xmlrpc.server.CGIXMLRPCRequestHandler") class can be used to handle XML-RPC requests sent to Python CGI scripts.

`CGIXMLRPCRequestHandler.register_function(function=None, name=None)` Register a function that can respond to XML-RPC requests. If *name* is given, it will be the method name associated with *function*, otherwise `function.__name__` will be used. *name* is a string, and may contain characters not legal in Python identifiers, including the period character.

This method can also be used as a decorator. When used as a decorator, *name* can only be given as a keyword argument to register *function* under *name*. If no *name* is given, `function.__name__` will be used.

Changed in version 3.7: [`register_function()`](#xmlrpc.server.CGIXMLRPCRequestHandler.register_function "xmlrpc.server.CGIXMLRPCRequestHandler.register_function") can be used as a decorator.

`CGIXMLRPCRequestHandler.register_instance(instance)` Register an object which is used to expose method names which have not been registered using [`register_function()`](#xmlrpc.server.CGIXMLRPCRequestHandler.register_function "xmlrpc.server.CGIXMLRPCRequestHandler.register_function"). If instance contains a `_dispatch()` method, it is called with the requested method name and the parameters from the request; the return value is returned to the client as the result. If instance does not have a `_dispatch()` method, it is searched for an attribute matching the name of the requested method; if the requested method name contains periods, each component of the method name is searched for individually, with the effect that a simple hierarchical search is performed. The value found from this search is then called with the parameters from the request, and the return value is passed back to the client.

`CGIXMLRPCRequestHandler.register_introspection_functions()` Register the XML-RPC introspection functions `system.listMethods`, `system.methodHelp` and `system.methodSignature`.

`CGIXMLRPCRequestHandler.register_multicall_functions()` Register the XML-RPC multicall function `system.multicall`.

`CGIXMLRPCRequestHandler.handle_request(request_text=None)` Handle an XML-RPC request.
If *request\_text* is given, it should be the POST data provided by the HTTP server, otherwise the contents of stdin will be used.

Example:

```
class MyFuncs:
    def mul(self, x, y):
        return x * y


handler = CGIXMLRPCRequestHandler()
handler.register_function(pow)
handler.register_function(lambda x,y: x+y, 'add')
handler.register_introspection_functions()
handler.register_instance(MyFuncs())
handler.handle_request()
```

Documenting XMLRPC server
-------------------------

These classes extend the above classes to serve HTML documentation in response to HTTP GET requests. Servers can either be free standing, using [`DocXMLRPCServer`](#xmlrpc.server.DocXMLRPCServer "xmlrpc.server.DocXMLRPCServer"), or embedded in a CGI environment, using [`DocCGIXMLRPCRequestHandler`](#xmlrpc.server.DocCGIXMLRPCRequestHandler "xmlrpc.server.DocCGIXMLRPCRequestHandler").

`class xmlrpc.server.DocXMLRPCServer(addr, requestHandler=DocXMLRPCRequestHandler, logRequests=True, allow_none=False, encoding=None, bind_and_activate=True, use_builtin_types=True)` Create a new server instance. All parameters have the same meaning as for [`SimpleXMLRPCServer`](#xmlrpc.server.SimpleXMLRPCServer "xmlrpc.server.SimpleXMLRPCServer"); *requestHandler* defaults to [`DocXMLRPCRequestHandler`](#xmlrpc.server.DocXMLRPCRequestHandler "xmlrpc.server.DocXMLRPCRequestHandler").

Changed in version 3.3: The *use\_builtin\_types* flag was added.

`class xmlrpc.server.DocCGIXMLRPCRequestHandler` Create a new instance to handle XML-RPC requests in a CGI environment.

`class xmlrpc.server.DocXMLRPCRequestHandler` Create a new request handler instance. This request handler supports XML-RPC POST requests, documentation GET requests, and modifies logging so that the *logRequests* parameter to the [`DocXMLRPCServer`](#xmlrpc.server.DocXMLRPCServer "xmlrpc.server.DocXMLRPCServer") constructor is honored.

DocXMLRPCServer Objects
-----------------------

The [`DocXMLRPCServer`](#xmlrpc.server.DocXMLRPCServer "xmlrpc.server.DocXMLRPCServer") class is derived from [`SimpleXMLRPCServer`](#xmlrpc.server.SimpleXMLRPCServer "xmlrpc.server.SimpleXMLRPCServer") and provides a means of creating self-documenting, stand-alone XML-RPC servers. HTTP POST requests are handled as XML-RPC method calls. HTTP GET requests are handled by generating pydoc-style HTML documentation. This allows a server to provide its own web-based documentation.

`DocXMLRPCServer.set_server_title(server_title)` Set the title used in the generated HTML documentation. This title will be used inside the HTML “title” element.

`DocXMLRPCServer.set_server_name(server_name)` Set the name used in the generated HTML documentation. This name will appear at the top of the generated documentation inside a “h1” element.

`DocXMLRPCServer.set_server_documentation(server_documentation)` Set the description used in the generated HTML documentation. This description will appear as a paragraph, below the server name, in the documentation.

DocCGIXMLRPCRequestHandler
--------------------------

The [`DocCGIXMLRPCRequestHandler`](#xmlrpc.server.DocCGIXMLRPCRequestHandler "xmlrpc.server.DocCGIXMLRPCRequestHandler") class is derived from [`CGIXMLRPCRequestHandler`](#xmlrpc.server.CGIXMLRPCRequestHandler "xmlrpc.server.CGIXMLRPCRequestHandler") and provides a means of creating self-documenting, XML-RPC CGI scripts. HTTP POST requests are handled as XML-RPC method calls. HTTP GET requests are handled by generating pydoc-style HTML documentation. This allows a server to provide its own web-based documentation.
`DocCGIXMLRPCRequestHandler.set_server_title(server_title)` Set the title used in the generated HTML documentation. This title will be used inside the HTML “title” element.

`DocCGIXMLRPCRequestHandler.set_server_name(server_name)` Set the name used in the generated HTML documentation. This name will appear at the top of the generated documentation inside a “h1” element.

`DocCGIXMLRPCRequestHandler.set_server_documentation(server_documentation)` Set the description used in the generated HTML documentation. This description will appear as a paragraph, below the server name, in the documentation.

python dbm — Interfaces to Unix “databases”

dbm — Interfaces to Unix “databases”
====================================

**Source code:** [Lib/dbm/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/dbm/__init__.py)

[`dbm`](#module-dbm "dbm: Interfaces to various Unix \"database\" formats.") is a generic interface to variants of the DBM database — [`dbm.gnu`](#module-dbm.gnu "dbm.gnu: GNU's reinterpretation of dbm. (Unix)") or [`dbm.ndbm`](#module-dbm.ndbm "dbm.ndbm: The standard \"database\" interface, based on ndbm. (Unix)"). If none of these modules is installed, the slow-but-simple implementation in module [`dbm.dumb`](#module-dbm.dumb "dbm.dumb: Portable implementation of the simple DBM interface.") will be used. There is a [third party interface](https://www.jcea.es/programacion/pybsddb.htm) to the Oracle Berkeley DB.

`exception dbm.error` A tuple containing the exceptions that can be raised by each of the supported modules, with a unique exception also named [`dbm.error`](#dbm.error "dbm.error") as the first item — the latter is used when [`dbm.error`](#dbm.error "dbm.error") is raised.

`dbm.whichdb(filename)` This function attempts to guess which of the several simple database modules available — [`dbm.gnu`](#module-dbm.gnu "dbm.gnu: GNU's reinterpretation of dbm. (Unix)"), [`dbm.ndbm`](#module-dbm.ndbm "dbm.ndbm: The standard \"database\" interface, based on ndbm. (Unix)") or [`dbm.dumb`](#module-dbm.dumb "dbm.dumb: Portable implementation of the simple DBM interface.") — should be used to open a given file. Returns one of the following values: `None` if the file can’t be opened because it’s unreadable or doesn’t exist; the empty string (`''`) if the file’s format can’t be guessed; or a string containing the required module name, such as `'dbm.ndbm'` or `'dbm.gnu'`.

`dbm.open(file, flag='r', mode=0o666)` Open the database file *file* and return a corresponding object. If the database file already exists, the [`whichdb()`](#dbm.whichdb "dbm.whichdb") function is used to determine its type and the appropriate module is used; if it does not exist, the first module listed above that can be imported is used.

The optional *flag* argument can be:

| Value | Meaning |
| --- | --- |
| `'r'` | Open existing database for reading only (default) |
| `'w'` | Open existing database for reading and writing |
| `'c'` | Open database for reading and writing, creating it if it doesn’t exist |
| `'n'` | Always create a new, empty database, open for reading and writing |

The optional *mode* argument is the Unix mode of the file, used only when the database has to be created. It defaults to octal `0o666` (and will be modified by the prevailing umask).
The object returned by [`open()`](#dbm.open "dbm.open") supports the same basic functionality as dictionaries; keys and their corresponding values can be stored, retrieved, and deleted, and the [`in`](../reference/expressions#in) operator and the `keys()` method are available, as well as `get()` and `setdefault()`.

Changed in version 3.2: `get()` and `setdefault()` are now available in all database modules.

Changed in version 3.8: Deleting a key from a read-only database raises a database-module-specific error instead of [`KeyError`](exceptions#KeyError "KeyError").

Keys and values are always stored as bytes. This means that when strings are used they are implicitly converted to the default encoding before being stored.

These objects also support being used in a [`with`](../reference/compound_stmts#with) statement, which will automatically close them when done.

Changed in version 3.4: Added native support for the context management protocol to the objects returned by [`open()`](#dbm.open "dbm.open").

The following example records some hostnames and a corresponding title, and then prints out the contents of the database:

```
import dbm

# Open database, creating it if necessary.
with dbm.open('cache', 'c') as db:

    # Record some values
    db[b'hello'] = b'there'
    db['www.python.org'] = 'Python Website'
    db['www.cnn.com'] = 'Cable News Network'

    # Note that the keys are considered bytes now.
    assert db[b'www.python.org'] == b'Python Website'
    # Notice how the value is now in bytes.
    assert db['www.cnn.com'] == b'Cable News Network'

    # Often-used methods of the dict interface work too.
    print(db.get('python.org', b'not present'))

    # Storing a non-string key or value will raise an exception (most
    # likely a TypeError).
    db['www.yahoo.com'] = 4

# db is automatically closed when leaving the with statement.
```

See also `Module` [`shelve`](shelve#module-shelve "shelve: Python object persistence.") Persistence module which stores non-string data.

The individual submodules are described in the following sections.

dbm.gnu — GNU’s reinterpretation of dbm
---------------------------------------

**Source code:** [Lib/dbm/gnu.py](https://github.com/python/cpython/tree/3.9/Lib/dbm/gnu.py)

This module is quite similar to the [`dbm`](#module-dbm "dbm: Interfaces to various Unix \"database\" formats.") module, but uses the GNU library `gdbm` instead to provide some additional functionality. Please note that the file formats created by [`dbm.gnu`](#module-dbm.gnu "dbm.gnu: GNU's reinterpretation of dbm. (Unix)") and [`dbm.ndbm`](#module-dbm.ndbm "dbm.ndbm: The standard \"database\" interface, based on ndbm. (Unix)") are incompatible.

The [`dbm.gnu`](#module-dbm.gnu "dbm.gnu: GNU's reinterpretation of dbm. (Unix)") module provides an interface to the GNU DBM library. `dbm.gnu.gdbm` objects behave like mappings (dictionaries), except that keys and values are always converted to bytes before storing. Printing a `gdbm` object doesn’t print the keys and values, and the `items()` and `values()` methods are not supported.

`exception dbm.gnu.error` Raised on [`dbm.gnu`](#module-dbm.gnu "dbm.gnu: GNU's reinterpretation of dbm. (Unix)")-specific errors, such as I/O errors. [`KeyError`](exceptions#KeyError "KeyError") is raised for general mapping errors like specifying an incorrect key.

`dbm.gnu.open(filename[, flag[, mode]])` Open a `gdbm` database and return a `gdbm` object. The *filename* argument is the name of the database file.
The optional *flag* argument can be:

| Value | Meaning |
| --- | --- |
| `'r'` | Open existing database for reading only (default) |
| `'w'` | Open existing database for reading and writing |
| `'c'` | Open database for reading and writing, creating it if it doesn’t exist |
| `'n'` | Always create a new, empty database, open for reading and writing |

The following additional characters may be appended to the flag to control how the database is opened:

| Value | Meaning |
| --- | --- |
| `'f'` | Open the database in fast mode. Writes to the database will not be synchronized. |
| `'s'` | Synchronized mode. This will cause changes to the database to be immediately written to the file. |
| `'u'` | Do not lock database. |

Not all flags are valid for all versions of `gdbm`. The module constant `open_flags` is a string of supported flag characters. The exception [`error`](#dbm.gnu.error "dbm.gnu.error") is raised if an invalid flag is specified.

The optional *mode* argument is the Unix mode of the file, used only when the database has to be created. It defaults to octal `0o666`.

In addition to the dictionary-like methods, `gdbm` objects have the following methods:

`gdbm.firstkey()` It’s possible to loop over every key in the database using this method and the [`nextkey()`](#dbm.gnu.gdbm.nextkey "dbm.gnu.gdbm.nextkey") method. The traversal is ordered by `gdbm`’s internal hash values, and won’t be sorted by the key values. This method returns the starting key.

`gdbm.nextkey(key)` Returns the key that follows *key* in the traversal. The following code prints every key in the database `db`, without having to create a list in memory that contains them all:

```
k = db.firstkey()
while k is not None:
    print(k)
    k = db.nextkey(k)
```

`gdbm.reorganize()` If you have carried out a lot of deletions and would like to shrink the space used by the `gdbm` file, this routine will reorganize the database. `gdbm` objects will not shorten the length of a database file except by using this reorganization; otherwise, deleted file space will be kept and reused as new (key, value) pairs are added.

`gdbm.sync()` When the database has been opened in fast mode, this method forces any unwritten data to be written to the disk.

`gdbm.close()` Close the `gdbm` database.

dbm.ndbm — Interface based on ndbm
----------------------------------

**Source code:** [Lib/dbm/ndbm.py](https://github.com/python/cpython/tree/3.9/Lib/dbm/ndbm.py)

The [`dbm.ndbm`](#module-dbm.ndbm "dbm.ndbm: The standard \"database\" interface, based on ndbm. (Unix)") module provides an interface to the Unix “(n)dbm” library. Dbm objects behave like mappings (dictionaries), except that keys and values are always stored as bytes. Printing a `dbm` object doesn’t print the keys and values, and the `items()` and `values()` methods are not supported.

This module can be used with the “classic” ndbm interface or the GNU GDBM compatibility interface. On Unix, the **configure** script will attempt to locate the appropriate header file to simplify building this module.

`exception dbm.ndbm.error` Raised on [`dbm.ndbm`](#module-dbm.ndbm "dbm.ndbm: The standard \"database\" interface, based on ndbm. (Unix)")-specific errors, such as I/O errors. [`KeyError`](exceptions#KeyError "KeyError") is raised for general mapping errors like specifying an incorrect key.

`dbm.ndbm.library` Name of the `ndbm` implementation library used.

`dbm.ndbm.open(filename[, flag[, mode]])` Open a dbm database and return a `ndbm` object.
The *filename* argument is the name of the database file (without the `.dir` or `.pag` extensions).

The optional *flag* argument must be one of these values:

| Value | Meaning |
| --- | --- |
| `'r'` | Open existing database for reading only (default) |
| `'w'` | Open existing database for reading and writing |
| `'c'` | Open database for reading and writing, creating it if it doesn’t exist |
| `'n'` | Always create a new, empty database, open for reading and writing |

The optional *mode* argument is the Unix mode of the file, used only when the database has to be created. It defaults to octal `0o666` (and will be modified by the prevailing umask).

In addition to the dictionary-like methods, `ndbm` objects provide the following method:

`ndbm.close()` Close the `ndbm` database.

dbm.dumb — Portable DBM implementation
--------------------------------------

**Source code:** [Lib/dbm/dumb.py](https://github.com/python/cpython/tree/3.9/Lib/dbm/dumb.py)

Note The [`dbm.dumb`](#module-dbm.dumb "dbm.dumb: Portable implementation of the simple DBM interface.") module is intended as a last resort fallback for the [`dbm`](#module-dbm "dbm: Interfaces to various Unix \"database\" formats.") module when a more robust module is not available. The [`dbm.dumb`](#module-dbm.dumb "dbm.dumb: Portable implementation of the simple DBM interface.") module is not written for speed and is not nearly as heavily used as the other database modules.

The [`dbm.dumb`](#module-dbm.dumb "dbm.dumb: Portable implementation of the simple DBM interface.") module provides a persistent dictionary-like interface which is written entirely in Python. Unlike other modules such as [`dbm.gnu`](#module-dbm.gnu "dbm.gnu: GNU's reinterpretation of dbm. (Unix)") no external library is required. As with other persistent mappings, the keys and values are always stored as bytes.

The module defines the following:

`exception dbm.dumb.error` Raised on [`dbm.dumb`](#module-dbm.dumb "dbm.dumb: Portable implementation of the simple DBM interface.")-specific errors, such as I/O errors. [`KeyError`](exceptions#KeyError "KeyError") is raised for general mapping errors like specifying an incorrect key.

`dbm.dumb.open(filename[, flag[, mode]])` Open a `dumbdbm` database and return a dumbdbm object. The *filename* argument is the basename of the database file (without any specific extensions). When a dumbdbm database is created, files with `.dat` and `.dir` extensions are created.

The optional *flag* argument can be:

| Value | Meaning |
| --- | --- |
| `'r'` | Open existing database for reading only (default) |
| `'w'` | Open existing database for reading and writing |
| `'c'` | Open database for reading and writing, creating it if it doesn’t exist |
| `'n'` | Always create a new, empty database, open for reading and writing |

The optional *mode* argument is the Unix mode of the file, used only when the database has to be created. It defaults to octal `0o666` (and will be modified by the prevailing umask).

Warning It is possible to crash the Python interpreter when loading a database with a sufficiently large/complex entry due to stack depth limitations in Python’s AST compiler.

Changed in version 3.5: [`open()`](#dbm.dumb.open "dbm.dumb.open") always creates a new database when the flag has the value `'n'`.

Changed in version 3.8: A database opened with flags `'r'` is now read-only. Opening with flags `'r'` and `'w'` no longer creates a database if it does not exist.
In addition to the methods provided by the [`collections.abc.MutableMapping`](collections.abc#collections.abc.MutableMapping "collections.abc.MutableMapping") class, `dumbdbm` objects provide the following methods:

`dumbdbm.sync()` Synchronize the on-disk directory and data files. This method is called by the `shelve.Shelf.sync()` method.

`dumbdbm.close()` Close the `dumbdbm` database.
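A small sketch combining the generic `open()` with `whichdb()`; the filename is illustrative, and the reported backend depends on which of the submodules above is installed:

```
import dbm

# Create a database with whichever backend is available.
with dbm.open('example', 'c') as db:
    db['key'] = 'value'

# Report the backend that was used, e.g. 'dbm.gnu', 'dbm.ndbm' or 'dbm.dumb'.
print(dbm.whichdb('example'))
```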
python graphlib — Functionality to operate with graph-like structures

graphlib — Functionality to operate with graph-like structures
==============================================================

**Source code:** [Lib/graphlib.py](https://github.com/python/cpython/tree/3.9/Lib/graphlib.py)

`class graphlib.TopologicalSorter(graph=None)` Provides functionality to topologically sort a graph of hashable nodes. A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes before vertex v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this example, a topological ordering is just a valid sequence for the tasks. A complete topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph.

If the optional *graph* argument is provided it must be a dictionary representing a directed acyclic graph where the keys are nodes and the values are iterables of all predecessors of that node in the graph (the nodes that have edges that point to the value in the key). Additional nodes can be added to the graph using the [`add()`](#graphlib.TopologicalSorter.add "graphlib.TopologicalSorter.add") method.

In the general case, the steps required to perform the sorting of a given graph are as follows:

* Create an instance of the [`TopologicalSorter`](#graphlib.TopologicalSorter "graphlib.TopologicalSorter") with an optional initial graph.
* Add additional nodes to the graph.
* Call [`prepare()`](#graphlib.TopologicalSorter.prepare "graphlib.TopologicalSorter.prepare") on the graph.
* While [`is_active()`](#graphlib.TopologicalSorter.is_active "graphlib.TopologicalSorter.is_active") is `True`, iterate over the nodes returned by [`get_ready()`](#graphlib.TopologicalSorter.get_ready "graphlib.TopologicalSorter.get_ready") and process them. Call [`done()`](#graphlib.TopologicalSorter.done "graphlib.TopologicalSorter.done") on each node as it finishes processing.

In case just an immediate sorting of the nodes in the graph is required and no parallelism is involved, the convenience method [`TopologicalSorter.static_order()`](#graphlib.TopologicalSorter.static_order "graphlib.TopologicalSorter.static_order") can be used directly:

```
>>> graph = {"D": {"B", "C"}, "C": {"A"}, "B": {"A"}}
>>> ts = TopologicalSorter(graph)
>>> tuple(ts.static_order())
('A', 'C', 'B', 'D')
```

The class is designed to easily support parallel processing of the nodes as they become ready. For instance:

```
topological_sorter = TopologicalSorter()

# Add nodes to 'topological_sorter'...

topological_sorter.prepare()
while topological_sorter.is_active():
    for node in topological_sorter.get_ready():
        # Worker threads or processes take nodes to work on off the
        # 'task_queue' queue.
        task_queue.put(node)

    # When the work for a node is done, workers put the node in
    # 'finalized_tasks_queue' so we can get more nodes to work on.
    # The definition of 'is_active()' guarantees that, at this point, at
    # least one node has been placed on 'task_queue' that hasn't yet
    # been passed to 'done()', so this blocking 'get()' must (eventually)
    # succeed. After calling 'done()', we loop back to call 'get_ready()'
    # again, so put newly freed nodes on 'task_queue' as soon as
    # logically possible.
node = finalized_tasks_queue.get() topological_sorter.done(node) ``` `add(node, *predecessors)` Add a new node and its predecessors to the graph. Both the *node* and all elements in *predecessors* must be hashable. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. It is possible to add a node with no dependencies (*predecessors* is not provided) or to provide a dependency twice. If a node that has not been provided before is included among *predecessors*, it will be automatically added to the graph with no predecessors of its own. Raises [`ValueError`](exceptions#ValueError "ValueError") if called after [`prepare()`](#graphlib.TopologicalSorter.prepare "graphlib.TopologicalSorter.prepare"). `prepare()` Mark the graph as finished and check for cycles in the graph. If any cycle is detected, [`CycleError`](#graphlib.CycleError "graphlib.CycleError") will be raised, but [`get_ready()`](#graphlib.TopologicalSorter.get_ready "graphlib.TopologicalSorter.get_ready") can still be used to obtain as many nodes as possible until cycles block more progress. After a call to this function, the graph cannot be modified, and therefore no more nodes can be added using [`add()`](#graphlib.TopologicalSorter.add "graphlib.TopologicalSorter.add"). `is_active()` Returns `True` if more progress can be made and `False` otherwise. Progress can be made if cycles do not block the resolution and either there are still nodes ready that haven’t yet been returned by [`TopologicalSorter.get_ready()`](#graphlib.TopologicalSorter.get_ready "graphlib.TopologicalSorter.get_ready") or the number of nodes marked [`TopologicalSorter.done()`](#graphlib.TopologicalSorter.done "graphlib.TopologicalSorter.done") is less than the number that have been returned by [`TopologicalSorter.get_ready()`](#graphlib.TopologicalSorter.get_ready "graphlib.TopologicalSorter.get_ready"). The `__bool__()` method of this class defers to this function, so instead of: ``` if ts.is_active(): ... ``` it is possible to simply do: ``` if ts: ... ``` Raises [`ValueError`](exceptions#ValueError "ValueError") if called without calling [`prepare()`](#graphlib.TopologicalSorter.prepare "graphlib.TopologicalSorter.prepare") previously. `done(*nodes)` Marks a set of nodes returned by [`TopologicalSorter.get_ready()`](#graphlib.TopologicalSorter.get_ready "graphlib.TopologicalSorter.get_ready") as processed, unblocking any successor of each node in *nodes* so that it can be returned in the future by a call to [`TopologicalSorter.get_ready()`](#graphlib.TopologicalSorter.get_ready "graphlib.TopologicalSorter.get_ready"). Raises [`ValueError`](exceptions#ValueError "ValueError") if any node in *nodes* has already been marked as processed by a previous call to this method, if a node was not added to the graph by using [`TopologicalSorter.add()`](#graphlib.TopologicalSorter.add "graphlib.TopologicalSorter.add"), if called without calling [`prepare()`](#graphlib.TopologicalSorter.prepare "graphlib.TopologicalSorter.prepare") or if a node has not yet been returned by [`get_ready()`](#graphlib.TopologicalSorter.get_ready "graphlib.TopologicalSorter.get_ready"). `get_ready()` Returns a `tuple` with all the nodes that are ready.
Initially it returns all nodes with no predecessors, and once those are marked as processed by calling [`TopologicalSorter.done()`](#graphlib.TopologicalSorter.done "graphlib.TopologicalSorter.done"), further calls will return all new nodes that have all their predecessors already processed. Once no more progress can be made, empty tuples are returned. Raises [`ValueError`](exceptions#ValueError "ValueError") if called without calling [`prepare()`](#graphlib.TopologicalSorter.prepare "graphlib.TopologicalSorter.prepare") previously. `static_order()` Returns an iterator object which will iterate over nodes in a topological order. When using this method, [`prepare()`](#graphlib.TopologicalSorter.prepare "graphlib.TopologicalSorter.prepare") and [`done()`](#graphlib.TopologicalSorter.done "graphlib.TopologicalSorter.done") should not be called. This method is equivalent to: ``` def static_order(self): self.prepare() while self.is_active(): node_group = self.get_ready() yield from node_group self.done(*node_group) ``` The particular order that is returned may depend on the specific order in which the items were inserted in the graph. For example: ``` >>> ts = TopologicalSorter() >>> ts.add(3, 2, 1) >>> ts.add(1, 0) >>> print([*ts.static_order()]) [2, 0, 1, 3] >>> ts2 = TopologicalSorter() >>> ts2.add(1, 0) >>> ts2.add(3, 2, 1) >>> print([*ts2.static_order()]) [0, 2, 1, 3] ``` This is due to the fact that “0” and “2” are in the same level in the graph (they would have been returned in the same call to [`get_ready()`](#graphlib.TopologicalSorter.get_ready "graphlib.TopologicalSorter.get_ready")) and the order between them is determined by the order of insertion. If any cycle is detected, [`CycleError`](#graphlib.CycleError "graphlib.CycleError") will be raised. New in version 3.9. Exceptions ---------- The [`graphlib`](#module-graphlib "graphlib: Functionality to operate with graph-like structures") module defines the following exception classes: `exception graphlib.CycleError` Subclass of [`ValueError`](exceptions#ValueError "ValueError") raised by [`TopologicalSorter.prepare()`](#graphlib.TopologicalSorter.prepare "graphlib.TopologicalSorter.prepare") if cycles exist in the working graph. If multiple cycles exist, only one of them, chosen arbitrarily, will be reported and included in the exception. The detected cycle can be accessed via the second element in the `args` attribute of the exception instance and consists of a list of nodes, such that each node is, in the graph, an immediate predecessor of the next node in the list. In the reported list, the first and the last node will be the same, to make it clear that it is cyclic. python email.utils: Miscellaneous utilities email.utils: Miscellaneous utilities ==================================== **Source code:** [Lib/email/utils.py](https://github.com/python/cpython/tree/3.9/Lib/email/utils.py) There are a couple of useful utilities provided in the [`email.utils`](#module-email.utils "email.utils: Miscellaneous email package utilities.") module: `email.utils.localtime(dt=None, isdst=-1)` Return local time as an aware datetime object. If called without arguments, return current time. Otherwise *dt* argument should be a [`datetime`](datetime#datetime.datetime "datetime.datetime") instance, and it is converted to the local time zone according to the system time zone database. If *dt* is naive (that is, `dt.tzinfo` is `None`), it is assumed to be in local time.
In this case, a positive or zero value for *isdst* causes `localtime` to presume initially that summer time (for example, Daylight Saving Time) is or is not (respectively) in effect for the specified time. A negative value for *isdst* causes `localtime` to attempt to divine whether summer time is in effect for the specified time. New in version 3.3. `email.utils.make_msgid(idstring=None, domain=None)` Returns a string suitable for an [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html)-compliant *Message-ID* header. Optional *idstring*, if given, is a string used to strengthen the uniqueness of the message id. Optional *domain*, if given, provides the portion of the msgid after the ‘@’. The default is the local hostname. It is not normally necessary to override this default, but it may be useful in certain cases, such as constructing a distributed system that uses a consistent domain name across multiple hosts. Changed in version 3.2: Added the *domain* keyword. The remaining functions are part of the legacy (`Compat32`) email API. There is no need to directly use these with the new API, since the parsing and formatting they provide is done automatically by the header parsing machinery of the new API. `email.utils.quote(str)` Return a new string with backslashes in *str* replaced by two backslashes, and double quotes replaced by backslash-double quote. `email.utils.unquote(str)` Return a new string which is an *unquoted* version of *str*. If *str* begins and ends with double quotes, they are stripped off. Likewise if *str* begins and ends with angle brackets, they are stripped off. `email.utils.parseaddr(address)` Parse address – which should be the value of some address-containing field such as *To* or *Cc* – into its constituent *realname* and *email address* parts. Returns a tuple of that information, unless the parse fails, in which case a 2-tuple of `('', '')` is returned. `email.utils.formataddr(pair, charset='utf-8')` The inverse of [`parseaddr()`](#email.utils.parseaddr "email.utils.parseaddr"), this takes a 2-tuple of the form `(realname, email_address)` and returns the string value suitable for a *To* or *Cc* header. If the first element of *pair* is false, then the second element is returned unmodified. Optional *charset* is the character set that will be used in the [**RFC 2047**](https://tools.ietf.org/html/rfc2047.html) encoding of the `realname` if the `realname` contains non-ASCII characters. Can be an instance of [`str`](stdtypes#str "str") or a [`Charset`](email.charset#email.charset.Charset "email.charset.Charset"). Defaults to `utf-8`. Changed in version 3.3: Added the *charset* option. `email.utils.getaddresses(fieldvalues)` This function returns a list of 2-tuples of the form returned by `parseaddr()`. *fieldvalues* is a sequence of header field values as might be returned by [`Message.get_all`](email.compat32-message#email.message.Message.get_all "email.message.Message.get_all"). Here’s a simple example that gets all the recipients of a message: ``` from email.utils import getaddresses tos = msg.get_all('to', []) ccs = msg.get_all('cc', []) resent_tos = msg.get_all('resent-to', []) resent_ccs = msg.get_all('resent-cc', []) all_recipients = getaddresses(tos + ccs + resent_tos + resent_ccs) ``` `email.utils.parsedate(date)` Attempts to parse a date according to the rules in [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html).
However, some mailers don’t follow that format as specified, so [`parsedate()`](#email.utils.parsedate "email.utils.parsedate") tries to guess correctly in such cases. *date* is a string containing an [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html) date, such as `"Mon, 20 Nov 1995 19:12:08 -0500"`. If it succeeds in parsing the date, [`parsedate()`](#email.utils.parsedate "email.utils.parsedate") returns a 9-tuple that can be passed directly to [`time.mktime()`](time#time.mktime "time.mktime"); otherwise `None` will be returned. Note that indexes 6, 7, and 8 of the result tuple are not usable. `email.utils.parsedate_tz(date)` Performs the same function as [`parsedate()`](#email.utils.parsedate "email.utils.parsedate"), but returns either `None` or a 10-tuple; the first 9 elements make up a tuple that can be passed directly to [`time.mktime()`](time#time.mktime "time.mktime"), and the tenth is the offset of the date’s timezone from UTC (which is the official term for Greenwich Mean Time) [1](#id2). If the input string has no timezone, the last element of the tuple returned is `0`, which represents UTC. Note that indexes 6, 7, and 8 of the result tuple are not usable. `email.utils.parsedate_to_datetime(date)` The inverse of [`format_datetime()`](#email.utils.format_datetime "email.utils.format_datetime"). Performs the same function as [`parsedate()`](#email.utils.parsedate "email.utils.parsedate"), but on success returns a [`datetime`](datetime#datetime.datetime "datetime.datetime"). If the input date has a timezone of `-0000`, the `datetime` will be a naive `datetime`, and if the date is conforming to the RFCs it will represent a time in UTC but with no indication of the actual source timezone of the message the date comes from. If the input date has any other valid timezone offset, the `datetime` will be an aware `datetime` with the corresponding [`timezone`](datetime#datetime.timezone "datetime.timezone") [`tzinfo`](datetime#datetime.tzinfo "datetime.tzinfo"). New in version 3.3. `email.utils.mktime_tz(tuple)` Turn a 10-tuple as returned by [`parsedate_tz()`](#email.utils.parsedate_tz "email.utils.parsedate_tz") into a UTC timestamp (seconds since the Epoch). If the timezone item in the tuple is `None`, assume local time. `email.utils.formatdate(timeval=None, localtime=False, usegmt=False)` Returns a date string as per [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html), e.g.: ``` Fri, 09 Nov 2001 01:08:47 -0000 ``` Optional *timeval*, if given, is a floating point time value as accepted by [`time.gmtime()`](time#time.gmtime "time.gmtime") and [`time.localtime()`](time#time.localtime "time.localtime"); otherwise the current time is used. Optional *localtime* is a flag that, when `True`, interprets *timeval* and returns a date relative to the local timezone instead of UTC, properly taking daylight saving time into account. The default is `False`, meaning UTC is used. Optional *usegmt* is a flag that, when `True`, outputs a date string with the timezone as an ASCII string `GMT`, rather than a numeric `-0000`. This is needed for some protocols (such as HTTP). This only applies when *localtime* is `False`. The default is `False`. `email.utils.format_datetime(dt, usegmt=False)` Like `formatdate`, but the input is a [`datetime`](datetime#module-datetime "datetime: Basic date and time types.") instance. If it is a naive datetime, it is assumed to be “UTC with no information about the source timezone”, and the conventional `-0000` is used for the timezone.
If it is an aware `datetime`, then the numeric timezone offset is used. If it is an aware `datetime` with an offset of zero, then *usegmt* may be set to `True`, in which case the string `GMT` is used instead of the numeric timezone offset. This provides a way to generate standards-conformant HTTP date headers. New in version 3.3. `email.utils.decode_rfc2231(s)` Decode the string *s* according to [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html). `email.utils.encode_rfc2231(s, charset=None, language=None)` Encode the string *s* according to [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html). Optional *charset* and *language*, if given, are the character set name and language name to use. If neither is given, *s* is returned as-is. If *charset* is given but *language* is not, the string is encoded using the empty string for *language*. `email.utils.collapse_rfc2231_value(value, errors='replace', fallback_charset='us-ascii')` When a header parameter is encoded in [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html) format, [`Message.get_param`](email.compat32-message#email.message.Message.get_param "email.message.Message.get_param") may return a 3-tuple containing the character set, language, and value. [`collapse_rfc2231_value()`](#email.utils.collapse_rfc2231_value "email.utils.collapse_rfc2231_value") turns this into a unicode string. Optional *errors* is passed to the *errors* argument of [`str`](stdtypes#str "str")’s [`encode()`](stdtypes#str.encode "str.encode") method; it defaults to `'replace'`. Optional *fallback\_charset* specifies the character set to use if the one in the [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html) header is not known by Python; it defaults to `'us-ascii'`. For convenience, if the *value* passed to [`collapse_rfc2231_value()`](#email.utils.collapse_rfc2231_value "email.utils.collapse_rfc2231_value") is not a tuple, it should be a string and it is returned unquoted. `email.utils.decode_params(params)` Decode a list of parameters according to [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html). *params* is a sequence of 2-tuples containing elements of the form `(content-type, string-value)`. #### Footnotes `1` Note that the sign of the timezone offset is the opposite of the sign of the `time.timezone` variable for the same timezone; the latter variable follows the POSIX standard while this module follows [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html).
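As a quick illustration of how the date and address utilities above fit together, a hedged sketch (the date string and address are arbitrary examples):

```
from email.utils import parsedate_tz, mktime_tz, formatdate, parseaddr, formataddr

# Round-trip an RFC 2822 date through a UTC timestamp.
date = "Mon, 20 Nov 1995 19:12:08 -0500"
timestamp = mktime_tz(parsedate_tz(date))
print(formatdate(timestamp, usegmt=True))  # e.g. Tue, 21 Nov 1995 00:12:08 GMT

# parseaddr() and formataddr() are inverses for address fields.
name, addr = parseaddr("Jane Doe <jane@example.com>")
print(formataddr((name, addr)))            # Jane Doe <jane@example.com>
```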
python weakref — Weak references weakref — Weak references ========================= **Source code:** [Lib/weakref.py](https://github.com/python/cpython/tree/3.9/Lib/weakref.py) The [`weakref`](#module-weakref "weakref: Support for weak references and weak dictionaries.") module allows the Python programmer to create *weak references* to objects. In the following, the term *referent* means the object which is referred to by a weak reference. A weak reference to an object is not enough to keep the object alive: when the only remaining references to a referent are weak references, [garbage collection](../glossary#term-garbage-collection) is free to destroy the referent and reuse its memory for something else. However, until the object is actually destroyed the weak reference may return the object even if there are no strong references to it. A primary use for weak references is to implement caches or mappings holding large objects, where it’s desired that a large object not be kept alive solely because it appears in a cache or mapping. For example, if you have a number of large binary image objects, you may wish to associate a name with each. If you used a Python dictionary to map names to images, or images to names, the image objects would remain alive just because they appeared as values or keys in the dictionaries. The [`WeakKeyDictionary`](#weakref.WeakKeyDictionary "weakref.WeakKeyDictionary") and [`WeakValueDictionary`](#weakref.WeakValueDictionary "weakref.WeakValueDictionary") classes supplied by the [`weakref`](#module-weakref "weakref: Support for weak references and weak dictionaries.") module are an alternative, using weak references to construct mappings that don’t keep objects alive solely because they appear in the mapping objects. If, for example, an image object is a value in a [`WeakValueDictionary`](#weakref.WeakValueDictionary "weakref.WeakValueDictionary"), then when the last remaining references to that image object are the weak references held by weak mappings, garbage collection can reclaim the object, and its corresponding entries in weak mappings are simply deleted. [`WeakKeyDictionary`](#weakref.WeakKeyDictionary "weakref.WeakKeyDictionary") and [`WeakValueDictionary`](#weakref.WeakValueDictionary "weakref.WeakValueDictionary") use weak references in their implementation, setting up callback functions on the weak references that notify the weak dictionaries when a key or value has been reclaimed by garbage collection. [`WeakSet`](#weakref.WeakSet "weakref.WeakSet") implements the [`set`](stdtypes#set "set") interface, but keeps weak references to its elements, just like a [`WeakKeyDictionary`](#weakref.WeakKeyDictionary "weakref.WeakKeyDictionary") does. [`finalize`](#weakref.finalize "weakref.finalize") provides a straightforward way to register a cleanup function to be called when an object is garbage collected. This is simpler to use than setting up a callback function on a raw weak reference, since the module automatically ensures that the finalizer remains alive until the object is collected. Most programs should find that using one of these weak container types or [`finalize`](#weakref.finalize "weakref.finalize") is all they need – it’s not usually necessary to create your own weak references directly. The low-level machinery is exposed by the [`weakref`](#module-weakref "weakref: Support for weak references and weak dictionaries.") module for the benefit of advanced uses.
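The cache pattern described above can be sketched as follows; `Image` and `get_image()` are hypothetical stand-ins for any large, weakly referenceable object and its loader:

```
import weakref

class Image:
    """Stand-in for a large object; its instances support weak references."""
    def __init__(self, name):
        self.name = name

_cache = weakref.WeakValueDictionary()

def get_image(name):
    # Return the cached image if it is still alive elsewhere; otherwise
    # (re)load it. The cache entry disappears once no strong references
    # to the image remain outside the cache.
    image = _cache.get(name)
    if image is None:
        image = Image(name)  # an expensive load in a real program
        _cache[name] = image
    return image
```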
Not all objects can be weakly referenced; those objects which can be weakly referenced include class instances, functions written in Python (but not in C), instance methods, sets, frozensets, some [file objects](../glossary#term-file-object), [generators](../glossary#term-generator), type objects, sockets, arrays, deques, regular expression pattern objects, and code objects. Changed in version 3.2: Added support for thread.lock, threading.Lock, and code objects. Several built-in types such as [`list`](stdtypes#list "list") and [`dict`](stdtypes#dict "dict") do not directly support weak references but can add support through subclassing: ``` class Dict(dict): pass obj = Dict(red=1, green=2, blue=3) # this object is weak referenceable ``` **CPython implementation detail:** Other built-in types such as [`tuple`](stdtypes#tuple "tuple") and [`int`](functions#int "int") do not support weak references even when subclassed. Extension types can easily be made to support weak references; see [Weak Reference Support](../extending/newtypes#weakref-support). When `__slots__` are defined for a given type, weak reference support is disabled unless a `'__weakref__'` string is also present in the sequence of strings in the `__slots__` declaration. See [\_\_slots\_\_ documentation](../reference/datamodel#slots) for details. `class weakref.ref(object[, callback])` Return a weak reference to *object*. The original object can be retrieved by calling the reference object if the referent is still alive; if the referent is no longer alive, calling the reference object will cause [`None`](constants#None "None") to be returned. If *callback* is provided and not [`None`](constants#None "None"), and the returned weakref object is still alive, the callback will be called when the object is about to be finalized; the weak reference object will be passed as the only parameter to the callback; the referent will no longer be available. It is allowable for many weak references to be constructed for the same object. Callbacks registered for each weak reference will be called from the most recently registered callback to the oldest registered callback. Exceptions raised by the callback will be noted on the standard error output, but cannot be propagated; they are handled in exactly the same way as exceptions raised from an object’s [`__del__()`](../reference/datamodel#object.__del__ "object.__del__") method. Weak references are [hashable](../glossary#term-hashable) if the *object* is hashable. They will maintain their hash value even after the *object* was deleted. If [`hash()`](functions#hash "hash") is called the first time only after the *object* was deleted, the call will raise [`TypeError`](exceptions#TypeError "TypeError"). Weak references support tests for equality, but not ordering. If the referents are still alive, two references have the same equality relationship as their referents (regardless of the *callback*). If either referent has been deleted, the references are equal only if the reference objects are the same object. This is a subclassable type rather than a factory function. `__callback__` This read-only attribute returns the callback currently associated to the weakref. If there is no callback or if the referent of the weakref is no longer alive then this attribute will have value `None`. Changed in version 3.4: Added the [`__callback__`](#weakref.ref.__callback__ "weakref.ref.__callback__") attribute. `weakref.proxy(object[, callback])` Return a proxy to *object* which uses a weak reference.
This supports use of the proxy in most contexts instead of requiring the explicit dereferencing used with weak reference objects. The returned object will have a type of either `ProxyType` or `CallableProxyType`, depending on whether *object* is callable. Proxy objects are not [hashable](../glossary#term-hashable) regardless of the referent; this avoids a number of problems related to their fundamentally mutable nature, and prevents their use as dictionary keys. *callback* is the same as the parameter of the same name to the [`ref()`](#weakref.ref "weakref.ref") function. Changed in version 3.8: Extended the operator support on proxy objects to include the matrix multiplication operators `@` and `@=`. `weakref.getweakrefcount(object)` Return the number of weak references and proxies which refer to *object*. `weakref.getweakrefs(object)` Return a list of all weak reference and proxy objects which refer to *object*. `class weakref.WeakKeyDictionary([dict])` Mapping class that references keys weakly. Entries in the dictionary will be discarded when there is no longer a strong reference to the key. This can be used to associate additional data with an object owned by other parts of an application without adding attributes to those objects. This can be especially useful with objects that override attribute accesses. Changed in version 3.9: Added support for `|` and `|=` operators, specified in [**PEP 584**](https://www.python.org/dev/peps/pep-0584). [`WeakKeyDictionary`](#weakref.WeakKeyDictionary "weakref.WeakKeyDictionary") objects have an additional method that exposes the internal references directly. The references are not guaranteed to be “live” at the time they are used, so the result of calling the references needs to be checked before being used. This can be used to avoid creating references that will cause the garbage collector to keep the keys around longer than needed. `WeakKeyDictionary.keyrefs()` Return an iterable of the weak references to the keys. `class weakref.WeakValueDictionary([dict])` Mapping class that references values weakly. Entries in the dictionary will be discarded when no strong reference to the value exists any more. Changed in version 3.9: Added support for `|` and `|=` operators, as specified in [**PEP 584**](https://www.python.org/dev/peps/pep-0584). [`WeakValueDictionary`](#weakref.WeakValueDictionary "weakref.WeakValueDictionary") objects have an additional method that has the same issues as the `keyrefs()` method of [`WeakKeyDictionary`](#weakref.WeakKeyDictionary "weakref.WeakKeyDictionary") objects. `WeakValueDictionary.valuerefs()` Return an iterable of the weak references to the values. `class weakref.WeakSet([elements])` Set class that keeps weak references to its elements. An element will be discarded when no strong reference to it exists any more. `class weakref.WeakMethod(method)` A custom [`ref`](#weakref.ref "weakref.ref") subclass which simulates a weak reference to a bound method (i.e., a method defined on a class and looked up on an instance). Since a bound method is ephemeral, a standard weak reference cannot keep hold of it. [`WeakMethod`](#weakref.WeakMethod "weakref.WeakMethod") has special code to recreate the bound method until either the object or the original function dies: ``` >>> import gc >>> import weakref >>> class C: ... def method(self): ... print("method called!") ... >>> c = C() >>> r = weakref.ref(c.method) >>> r() >>> r = weakref.WeakMethod(c.method) >>> r() <bound method C.method of <__main__.C object at 0x7fc859830220>> >>> r()() method called!
>>> del c >>> gc.collect() 0 >>> r() >>> ``` New in version 3.4. `class weakref.finalize(obj, func, /, *args, **kwargs)` Return a callable finalizer object which will be called when *obj* is garbage collected. Unlike an ordinary weak reference, a finalizer will always survive until the reference object is collected, greatly simplifying lifecycle management. A finalizer is considered *alive* until it is called (either explicitly or at garbage collection), and after that it is *dead*. Calling a live finalizer returns the result of evaluating `func(*args, **kwargs)`, whereas calling a dead finalizer returns [`None`](constants#None "None"). Exceptions raised by finalizer callbacks during garbage collection will be shown on the standard error output, but cannot be propagated. They are handled in the same way as exceptions raised from an object’s [`__del__()`](../reference/datamodel#object.__del__ "object.__del__") method or a weak reference’s callback. When the program exits, each remaining live finalizer is called unless its [`atexit`](atexit#module-atexit "atexit: Register and execute cleanup functions.") attribute has been set to false. They are called in reverse order of creation. A finalizer will never invoke its callback during the later part of the [interpreter shutdown](../glossary#term-interpreter-shutdown) when module globals are liable to have been replaced by [`None`](constants#None "None"). `__call__()` If *self* is alive then mark it as dead and return the result of calling `func(*args, **kwargs)`. If *self* is dead then return [`None`](constants#None "None"). `detach()` If *self* is alive then mark it as dead and return the tuple `(obj, func, args, kwargs)`. If *self* is dead then return [`None`](constants#None "None"). `peek()` If *self* is alive then return the tuple `(obj, func, args, kwargs)`. If *self* is dead then return [`None`](constants#None "None"). `alive` Property which is true if the finalizer is alive, false otherwise. `atexit` A writable boolean property which by default is true. When the program exits, it calls all remaining live finalizers for which [`atexit`](#weakref.finalize.atexit "weakref.finalize.atexit") is true. They are called in reverse order of creation. Note It is important to ensure that *func*, *args* and *kwargs* do not own any references to *obj*, either directly or indirectly, since otherwise *obj* will never be garbage collected. In particular, *func* should not be a bound method of *obj*. New in version 3.4. `weakref.ReferenceType` The type object for weak reference objects. `weakref.ProxyType` The type object for proxies of objects which are not callable. `weakref.CallableProxyType` The type object for proxies of callable objects. `weakref.ProxyTypes` Sequence containing all the type objects for proxies. This can make it simpler to test if an object is a proxy without being dependent on naming both proxy types. See also [**PEP 205**](https://www.python.org/dev/peps/pep-0205) - Weak References The proposal and rationale for this feature, including links to earlier implementations and information about similar features in other languages. Weak Reference Objects ---------------------- Weak reference objects have no methods and no attributes besides [`ref.__callback__`](#weakref.ref.__callback__ "weakref.ref.__callback__"). A weak reference object allows the referent to be obtained, if it still exists, by calling it: ``` >>> import weakref >>> class Object: ... pass ...
>>> o = Object() >>> r = weakref.ref(o) >>> o2 = r() >>> o is o2 True ``` If the referent no longer exists, calling the reference object returns [`None`](constants#None "None"): ``` >>> del o, o2 >>> print(r()) None ``` Testing that a weak reference object is still live should be done using the expression `ref() is not None`. Normally, application code that needs to use a reference object should follow this pattern: ``` # r is a weak reference object o = r() if o is None: # referent has been garbage collected print("Object has been deallocated; can't frobnicate.") else: print("Object is still live!") o.do_something_useful() ``` Using a separate test for “liveness” creates race conditions in threaded applications; another thread can cause a weak reference to become invalidated before the weak reference is called; the idiom shown above is safe in threaded applications as well as single-threaded applications. Specialized versions of [`ref`](#weakref.ref "weakref.ref") objects can be created through subclassing. This is used in the implementation of the [`WeakValueDictionary`](#weakref.WeakValueDictionary "weakref.WeakValueDictionary") to reduce the memory overhead for each entry in the mapping. This may be most useful to associate additional information with a reference, but could also be used to insert additional processing on calls to retrieve the referent. This example shows how a subclass of [`ref`](#weakref.ref "weakref.ref") can be used to store additional information about an object and affect the value that’s returned when the referent is accessed: ``` import weakref class ExtendedRef(weakref.ref): def __init__(self, ob, callback=None, /, **annotations): super().__init__(ob, callback) self.__counter = 0 for k, v in annotations.items(): setattr(self, k, v) def __call__(self): """Return a pair containing the referent and the number of times the reference has been called. """ ob = super().__call__() if ob is not None: self.__counter += 1 ob = (ob, self.__counter) return ob ``` Example ------- This simple example shows how an application can use object IDs to retrieve objects that it has seen before. The IDs of the objects can then be used in other data structures without forcing the objects to remain alive, but the objects can still be retrieved by ID if they do. ``` import weakref _id2obj_dict = weakref.WeakValueDictionary() def remember(obj): oid = id(obj) _id2obj_dict[oid] = obj return oid def id2obj(oid): return _id2obj_dict[oid] ``` Finalizer Objects ----------------- The main benefit of using [`finalize`](#weakref.finalize "weakref.finalize") is that it makes it simple to register a callback without needing to preserve the returned finalizer object. For instance ``` >>> import weakref >>> class Object: ... pass ... >>> kenny = Object() >>> weakref.finalize(kenny, print, "You killed Kenny!") <finalize object at ...; for 'Object' at ...> >>> del kenny You killed Kenny! ``` The finalizer can be called directly as well. However the finalizer will invoke the callback at most once. ``` >>> def callback(x, y, z): ... print("CALLBACK") ... return x + y + z ... >>> obj = Object() >>> f = weakref.finalize(obj, callback, 1, 2, z=3) >>> assert f.alive >>> assert f() == 6 CALLBACK >>> assert not f.alive >>> f() # callback not called because finalizer dead >>> del obj # callback not called because finalizer dead ``` You can unregister a finalizer using its [`detach()`](#weakref.finalize.detach "weakref.finalize.detach") method. 
This kills the finalizer and returns the arguments passed to the constructor when it was created. ``` >>> obj = Object() >>> f = weakref.finalize(obj, callback, 1, 2, z=3) >>> f.detach() (<...Object object ...>, <function callback ...>, (1, 2), {'z': 3}) >>> newobj, func, args, kwargs = _ >>> assert not f.alive >>> assert newobj is obj >>> assert func(*args, **kwargs) == 6 CALLBACK ``` Unless you set the [`atexit`](#weakref.finalize.atexit "weakref.finalize.atexit") attribute to [`False`](constants#False "False"), a finalizer will be called when the program exits if it is still alive. For instance ``` >>> obj = Object() >>> weakref.finalize(obj, print, "obj dead or exiting") <finalize object at ...; for 'Object' at ...> >>> exit() obj dead or exiting ``` Comparing finalizers with \_\_del\_\_() methods ----------------------------------------------- Suppose we want to create a class whose instances represent temporary directories. The directories should be deleted with their contents when the first of the following events occurs: * the object is garbage collected, * the object’s `remove()` method is called, or * the program exits. We might try to implement the class using a [`__del__()`](../reference/datamodel#object.__del__ "object.__del__") method as follows: ``` class TempDir: def __init__(self): self.name = tempfile.mkdtemp() def remove(self): if self.name is not None: shutil.rmtree(self.name) self.name = None @property def removed(self): return self.name is None def __del__(self): self.remove() ``` Starting with Python 3.4, [`__del__()`](../reference/datamodel#object.__del__ "object.__del__") methods no longer prevent reference cycles from being garbage collected, and module globals are no longer forced to [`None`](constants#None "None") during [interpreter shutdown](../glossary#term-interpreter-shutdown). So this code should work without any issues on CPython. However, handling of [`__del__()`](../reference/datamodel#object.__del__ "object.__del__") methods is notoriously implementation specific, since it depends on internal details of the interpreter’s garbage collector implementation. A more robust alternative can be to define a finalizer which only references the specific functions and objects that it needs, rather than having access to the full state of the object: ``` class TempDir: def __init__(self): self.name = tempfile.mkdtemp() self._finalizer = weakref.finalize(self, shutil.rmtree, self.name) def remove(self): self._finalizer() @property def removed(self): return not self._finalizer.alive ``` Defined like this, our finalizer only receives a reference to the details it needs to clean up the directory appropriately. If the object never gets garbage collected, the finalizer will still be called at exit. The other advantage of weakref-based finalizers is that they can be used to register finalizers for classes where the definition is controlled by a third party, such as running code when a module is unloaded: ``` import weakref, sys def unloading_module(): # implicit reference to the module globals from the function body weakref.finalize(sys.modules[__name__], unloading_module) ``` Note If you create a finalizer object in a daemonic thread just as the program exits, then there is the possibility that the finalizer does not get called at exit. However, in a daemonic thread [`atexit.register()`](atexit#atexit.register "atexit.register"), `try: ... finally: ...` and `with: ...` do not guarantee that cleanup occurs either.
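To round out the finalizer-based `TempDir` above, a short usage sketch (it assumes the class as defined in the previous section, plus the `tempfile`, `shutil` and `weakref` imports it relies on):

```
d = TempDir()
assert not d.removed   # the temporary directory exists
d.remove()             # invokes the finalizer, which runs shutil.rmtree
assert d.removed       # the finalizer is now dead
d.remove()             # safe: calling a dead finalizer just returns None
```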
python Importing Modules Importing Modules ================= The modules described in this chapter provide new ways to import other Python modules and hooks for customizing the import process. The full list of modules described in this chapter is: * [`zipimport` — Import modules from Zip archives](zipimport) + [zipimporter Objects](zipimport#zipimporter-objects) + [Examples](zipimport#examples) * [`pkgutil` — Package extension utility](pkgutil) * [`modulefinder` — Find modules used by a script](modulefinder) + [Example usage of `ModuleFinder`](modulefinder#example-usage-of-modulefinder) * [`runpy` — Locating and executing Python modules](runpy) * [`importlib` — The implementation of `import`](importlib) + [Introduction](importlib#introduction) + [Functions](importlib#functions) + [`importlib.abc` – Abstract base classes related to import](importlib#module-importlib.abc) + [`importlib.resources` – Resources](importlib#module-importlib.resources) + [`importlib.machinery` – Importers and path hooks](importlib#module-importlib.machinery) + [`importlib.util` – Utility code for importers](importlib#module-importlib.util) + [Examples](importlib#examples) - [Importing programmatically](importlib#importing-programmatically) - [Checking if a module can be imported](importlib#checking-if-a-module-can-be-imported) - [Importing a source file directly](importlib#importing-a-source-file-directly) - [Setting up an importer](importlib#setting-up-an-importer) - [Approximating `importlib.import_module()`](importlib#approximating-importlib-import-module) * [Using `importlib.metadata`](importlib.metadata) + [Overview](importlib.metadata#overview) + [Functional API](importlib.metadata#functional-api) - [Entry points](importlib.metadata#entry-points) - [Distribution metadata](importlib.metadata#distribution-metadata) - [Distribution versions](importlib.metadata#distribution-versions) - [Distribution files](importlib.metadata#distribution-files) - [Distribution requirements](importlib.metadata#distribution-requirements) + [Distributions](importlib.metadata#distributions) + [Extending the search algorithm](importlib.metadata#extending-the-search-algorithm) python Low-level API Index Low-level API Index =================== This page lists all low-level asyncio APIs. Obtaining the Event Loop ------------------------ | | | | --- | --- | | [`asyncio.get_running_loop()`](asyncio-eventloop#asyncio.get_running_loop "asyncio.get_running_loop") | The **preferred** function to get the running event loop. | | [`asyncio.get_event_loop()`](asyncio-eventloop#asyncio.get_event_loop "asyncio.get_event_loop") | Get an event loop instance (current or via the policy). | | [`asyncio.set_event_loop()`](asyncio-eventloop#asyncio.set_event_loop "asyncio.set_event_loop") | Set the event loop as current via the current policy. | | [`asyncio.new_event_loop()`](asyncio-eventloop#asyncio.new_event_loop "asyncio.new_event_loop") | Create a new event loop. | #### Examples * [Using asyncio.get\_running\_loop()](asyncio-future#asyncio-example-future). Event Loop Methods ------------------ See also the main documentation section about the [event loop methods](asyncio-eventloop#asyncio-event-loop). #### Lifecycle | | | | --- | --- | | [`loop.run_until_complete()`](asyncio-eventloop#asyncio.loop.run_until_complete "asyncio.loop.run_until_complete") | Run a Future/Task/awaitable until complete. | | [`loop.run_forever()`](asyncio-eventloop#asyncio.loop.run_forever "asyncio.loop.run_forever") | Run the event loop forever. 
| | [`loop.stop()`](asyncio-eventloop#asyncio.loop.stop "asyncio.loop.stop") | Stop the event loop. | | [`loop.close()`](asyncio-eventloop#asyncio.loop.close "asyncio.loop.close") | Close the event loop. | | [`loop.is_running()`](asyncio-eventloop#asyncio.loop.is_running "asyncio.loop.is_running") | Return `True` if the event loop is running. | | [`loop.is_closed()`](asyncio-eventloop#asyncio.loop.is_closed "asyncio.loop.is_closed") | Return `True` if the event loop is closed. | | `await` [`loop.shutdown_asyncgens()`](asyncio-eventloop#asyncio.loop.shutdown_asyncgens "asyncio.loop.shutdown_asyncgens") | Close asynchronous generators. | #### Debugging | | | | --- | --- | | [`loop.set_debug()`](asyncio-eventloop#asyncio.loop.set_debug "asyncio.loop.set_debug") | Enable or disable the debug mode. | | [`loop.get_debug()`](asyncio-eventloop#asyncio.loop.get_debug "asyncio.loop.get_debug") | Get the current debug mode. | #### Scheduling Callbacks | | | | --- | --- | | [`loop.call_soon()`](asyncio-eventloop#asyncio.loop.call_soon "asyncio.loop.call_soon") | Invoke a callback soon. | | [`loop.call_soon_threadsafe()`](asyncio-eventloop#asyncio.loop.call_soon_threadsafe "asyncio.loop.call_soon_threadsafe") | A thread-safe variant of [`loop.call_soon()`](asyncio-eventloop#asyncio.loop.call_soon "asyncio.loop.call_soon"). | | [`loop.call_later()`](asyncio-eventloop#asyncio.loop.call_later "asyncio.loop.call_later") | Invoke a callback *after* the given time. | | [`loop.call_at()`](asyncio-eventloop#asyncio.loop.call_at "asyncio.loop.call_at") | Invoke a callback *at* the given time. | #### Thread/Process Pool | | | | --- | --- | | `await` [`loop.run_in_executor()`](asyncio-eventloop#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor") | Run a CPU-bound or other blocking function in a [`concurrent.futures`](concurrent.futures#module-concurrent.futures "concurrent.futures: Execute computations concurrently using threads or processes.") executor. | | [`loop.set_default_executor()`](asyncio-eventloop#asyncio.loop.set_default_executor "asyncio.loop.set_default_executor") | Set the default executor for [`loop.run_in_executor()`](asyncio-eventloop#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor"). | #### Tasks and Futures | | | | --- | --- | | [`loop.create_future()`](asyncio-eventloop#asyncio.loop.create_future "asyncio.loop.create_future") | Create a [`Future`](asyncio-future#asyncio.Future "asyncio.Future") object. | | [`loop.create_task()`](asyncio-eventloop#asyncio.loop.create_task "asyncio.loop.create_task") | Schedule coroutine as a [`Task`](asyncio-task#asyncio.Task "asyncio.Task"). | | [`loop.set_task_factory()`](asyncio-eventloop#asyncio.loop.set_task_factory "asyncio.loop.set_task_factory") | Set a factory used by [`loop.create_task()`](asyncio-eventloop#asyncio.loop.create_task "asyncio.loop.create_task") to create [`Tasks`](asyncio-task#asyncio.Task "asyncio.Task"). | | [`loop.get_task_factory()`](asyncio-eventloop#asyncio.loop.get_task_factory "asyncio.loop.get_task_factory") | Get the factory [`loop.create_task()`](asyncio-eventloop#asyncio.loop.create_task "asyncio.loop.create_task") uses to create [`Tasks`](asyncio-task#asyncio.Task "asyncio.Task"). | #### DNS | | | | --- | --- | | `await` [`loop.getaddrinfo()`](asyncio-eventloop#asyncio.loop.getaddrinfo "asyncio.loop.getaddrinfo") | Asynchronous version of [`socket.getaddrinfo()`](socket#socket.getaddrinfo "socket.getaddrinfo"). 
| | `await` [`loop.getnameinfo()`](asyncio-eventloop#asyncio.loop.getnameinfo "asyncio.loop.getnameinfo") | Asynchronous version of [`socket.getnameinfo()`](socket#socket.getnameinfo "socket.getnameinfo"). | #### Networking and IPC | | | | --- | --- | | `await` [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection") | Open a TCP connection. | | `await` [`loop.create_server()`](asyncio-eventloop#asyncio.loop.create_server "asyncio.loop.create_server") | Create a TCP server. | | `await` [`loop.create_unix_connection()`](asyncio-eventloop#asyncio.loop.create_unix_connection "asyncio.loop.create_unix_connection") | Open a Unix socket connection. | | `await` [`loop.create_unix_server()`](asyncio-eventloop#asyncio.loop.create_unix_server "asyncio.loop.create_unix_server") | Create a Unix socket server. | | `await` [`loop.connect_accepted_socket()`](asyncio-eventloop#asyncio.loop.connect_accepted_socket "asyncio.loop.connect_accepted_socket") | Wrap a [`socket`](socket#socket.socket "socket.socket") into a `(transport, protocol)` pair. | | `await` [`loop.create_datagram_endpoint()`](asyncio-eventloop#asyncio.loop.create_datagram_endpoint "asyncio.loop.create_datagram_endpoint") | Open a datagram (UDP) connection. | | `await` [`loop.sendfile()`](asyncio-eventloop#asyncio.loop.sendfile "asyncio.loop.sendfile") | Send a file over a transport. | | `await` [`loop.start_tls()`](asyncio-eventloop#asyncio.loop.start_tls "asyncio.loop.start_tls") | Upgrade an existing connection to TLS. | | `await` [`loop.connect_read_pipe()`](asyncio-eventloop#asyncio.loop.connect_read_pipe "asyncio.loop.connect_read_pipe") | Wrap a read end of a pipe into a `(transport, protocol)` pair. | | `await` [`loop.connect_write_pipe()`](asyncio-eventloop#asyncio.loop.connect_write_pipe "asyncio.loop.connect_write_pipe") | Wrap a write end of a pipe into a `(transport, protocol)` pair. | #### Sockets | | | | --- | --- | | `await` [`loop.sock_recv()`](asyncio-eventloop#asyncio.loop.sock_recv "asyncio.loop.sock_recv") | Receive data from the [`socket`](socket#socket.socket "socket.socket"). | | `await` [`loop.sock_recv_into()`](asyncio-eventloop#asyncio.loop.sock_recv_into "asyncio.loop.sock_recv_into") | Receive data from the [`socket`](socket#socket.socket "socket.socket") into a buffer. | | `await` [`loop.sock_sendall()`](asyncio-eventloop#asyncio.loop.sock_sendall "asyncio.loop.sock_sendall") | Send data to the [`socket`](socket#socket.socket "socket.socket"). | | `await` [`loop.sock_connect()`](asyncio-eventloop#asyncio.loop.sock_connect "asyncio.loop.sock_connect") | Connect the [`socket`](socket#socket.socket "socket.socket"). | | `await` [`loop.sock_accept()`](asyncio-eventloop#asyncio.loop.sock_accept "asyncio.loop.sock_accept") | Accept a [`socket`](socket#socket.socket "socket.socket") connection. | | `await` [`loop.sock_sendfile()`](asyncio-eventloop#asyncio.loop.sock_sendfile "asyncio.loop.sock_sendfile") | Send a file over the [`socket`](socket#socket.socket "socket.socket"). | | [`loop.add_reader()`](asyncio-eventloop#asyncio.loop.add_reader "asyncio.loop.add_reader") | Start watching a file descriptor for read availability. | | [`loop.remove_reader()`](asyncio-eventloop#asyncio.loop.remove_reader "asyncio.loop.remove_reader") | Stop watching a file descriptor for read availability. | | [`loop.add_writer()`](asyncio-eventloop#asyncio.loop.add_writer "asyncio.loop.add_writer") | Start watching a file descriptor for write availability. 
| | [`loop.remove_writer()`](asyncio-eventloop#asyncio.loop.remove_writer "asyncio.loop.remove_writer") | Stop watching a file descriptor for write availability. | #### Unix Signals | | | | --- | --- | | [`loop.add_signal_handler()`](asyncio-eventloop#asyncio.loop.add_signal_handler "asyncio.loop.add_signal_handler") | Add a handler for a [`signal`](signal#module-signal "signal: Set handlers for asynchronous events."). | | [`loop.remove_signal_handler()`](asyncio-eventloop#asyncio.loop.remove_signal_handler "asyncio.loop.remove_signal_handler") | Remove a handler for a [`signal`](signal#module-signal "signal: Set handlers for asynchronous events."). | #### Subprocesses | | | | --- | --- | | [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec") | Spawn a subprocess. | | [`loop.subprocess_shell()`](asyncio-eventloop#asyncio.loop.subprocess_shell "asyncio.loop.subprocess_shell") | Spawn a subprocess from a shell command. | #### Error Handling | | | | --- | --- | | [`loop.call_exception_handler()`](asyncio-eventloop#asyncio.loop.call_exception_handler "asyncio.loop.call_exception_handler") | Call the exception handler. | | [`loop.set_exception_handler()`](asyncio-eventloop#asyncio.loop.set_exception_handler "asyncio.loop.set_exception_handler") | Set a new exception handler. | | [`loop.get_exception_handler()`](asyncio-eventloop#asyncio.loop.get_exception_handler "asyncio.loop.get_exception_handler") | Get the current exception handler. | | [`loop.default_exception_handler()`](asyncio-eventloop#asyncio.loop.default_exception_handler "asyncio.loop.default_exception_handler") | The default exception handler implementation. | #### Examples * [Using asyncio.get\_event\_loop() and loop.run\_forever()](asyncio-eventloop#asyncio-example-lowlevel-helloworld). * [Using loop.call\_later()](asyncio-eventloop#asyncio-example-call-later). * Using `loop.create_connection()` to implement [an echo-client](asyncio-protocol#asyncio-example-tcp-echo-client-protocol). * Using `loop.create_connection()` to [connect a socket](asyncio-protocol#asyncio-example-create-connection). * [Using add\_reader() to watch an FD for read events](asyncio-eventloop#asyncio-example-watch-fd). * [Using loop.add\_signal\_handler()](asyncio-eventloop#asyncio-example-unix-signals). * [Using loop.subprocess\_exec()](asyncio-protocol#asyncio-example-subprocess-proto). Transports ---------- All transports implement the following methods: | | | | --- | --- | | [`transport.close()`](asyncio-protocol#asyncio.BaseTransport.close "asyncio.BaseTransport.close") | Close the transport. | | [`transport.is_closing()`](asyncio-protocol#asyncio.BaseTransport.is_closing "asyncio.BaseTransport.is_closing") | Return `True` if the transport is closing or is closed. | | [`transport.get_extra_info()`](asyncio-protocol#asyncio.BaseTransport.get_extra_info "asyncio.BaseTransport.get_extra_info") | Request for information about the transport. | | [`transport.set_protocol()`](asyncio-protocol#asyncio.BaseTransport.set_protocol "asyncio.BaseTransport.set_protocol") | Set a new protocol. | | [`transport.get_protocol()`](asyncio-protocol#asyncio.BaseTransport.get_protocol "asyncio.BaseTransport.get_protocol") | Return the current protocol. | Transports that can receive data (TCP and Unix connections, pipes, etc). 
Returned from methods like [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection"), [`loop.create_unix_connection()`](asyncio-eventloop#asyncio.loop.create_unix_connection "asyncio.loop.create_unix_connection"), [`loop.connect_read_pipe()`](asyncio-eventloop#asyncio.loop.connect_read_pipe "asyncio.loop.connect_read_pipe"), etc: #### Read Transports | | | | --- | --- | | [`transport.is_reading()`](asyncio-protocol#asyncio.ReadTransport.is_reading "asyncio.ReadTransport.is_reading") | Return `True` if the transport is receiving. | | [`transport.pause_reading()`](asyncio-protocol#asyncio.ReadTransport.pause_reading "asyncio.ReadTransport.pause_reading") | Pause receiving. | | [`transport.resume_reading()`](asyncio-protocol#asyncio.ReadTransport.resume_reading "asyncio.ReadTransport.resume_reading") | Resume receiving. | Transports that can send data (TCP and Unix connections, pipes, etc). Returned from methods like [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection"), [`loop.create_unix_connection()`](asyncio-eventloop#asyncio.loop.create_unix_connection "asyncio.loop.create_unix_connection"), [`loop.connect_write_pipe()`](asyncio-eventloop#asyncio.loop.connect_write_pipe "asyncio.loop.connect_write_pipe"), etc: #### Write Transports | | | | --- | --- | | [`transport.write()`](asyncio-protocol#asyncio.WriteTransport.write "asyncio.WriteTransport.write") | Write data to the transport. | | [`transport.writelines()`](asyncio-protocol#asyncio.WriteTransport.writelines "asyncio.WriteTransport.writelines") | Write buffers to the transport. | | [`transport.can_write_eof()`](asyncio-protocol#asyncio.WriteTransport.can_write_eof "asyncio.WriteTransport.can_write_eof") | Return [`True`](constants#True "True") if the transport supports sending EOF. | | [`transport.write_eof()`](asyncio-protocol#asyncio.WriteTransport.write_eof "asyncio.WriteTransport.write_eof") | Close and send EOF after flushing buffered data. | | [`transport.abort()`](asyncio-protocol#asyncio.WriteTransport.abort "asyncio.WriteTransport.abort") | Close the transport immediately. | | [`transport.get_write_buffer_size()`](asyncio-protocol#asyncio.WriteTransport.get_write_buffer_size "asyncio.WriteTransport.get_write_buffer_size") | Return high and low water marks for write flow control. | | [`transport.set_write_buffer_limits()`](asyncio-protocol#asyncio.WriteTransport.set_write_buffer_limits "asyncio.WriteTransport.set_write_buffer_limits") | Set new high and low water marks for write flow control. | Transports returned by [`loop.create_datagram_endpoint()`](asyncio-eventloop#asyncio.loop.create_datagram_endpoint "asyncio.loop.create_datagram_endpoint"): #### Datagram Transports | | | | --- | --- | | [`transport.sendto()`](asyncio-protocol#asyncio.DatagramTransport.sendto "asyncio.DatagramTransport.sendto") | Send data to the remote peer. | | [`transport.abort()`](asyncio-protocol#asyncio.DatagramTransport.abort "asyncio.DatagramTransport.abort") | Close the transport immediately. | Low-level transport abstraction over subprocesses.
Returned by [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec") and [`loop.subprocess_shell()`](asyncio-eventloop#asyncio.loop.subprocess_shell "asyncio.loop.subprocess_shell"): #### Subprocess Transports | | | | --- | --- | | [`transport.get_pid()`](asyncio-protocol#asyncio.SubprocessTransport.get_pid "asyncio.SubprocessTransport.get_pid") | Return the subprocess process id. | | [`transport.get_pipe_transport()`](asyncio-protocol#asyncio.SubprocessTransport.get_pipe_transport "asyncio.SubprocessTransport.get_pipe_transport") | Return the transport for the requested communication pipe (*stdin*, *stdout*, or *stderr*). | | [`transport.get_returncode()`](asyncio-protocol#asyncio.SubprocessTransport.get_returncode "asyncio.SubprocessTransport.get_returncode") | Return the subprocess return code. | | [`transport.kill()`](asyncio-protocol#asyncio.SubprocessTransport.kill "asyncio.SubprocessTransport.kill") | Kill the subprocess. | | [`transport.send_signal()`](asyncio-protocol#asyncio.SubprocessTransport.send_signal "asyncio.SubprocessTransport.send_signal") | Send a signal to the subprocess. | | [`transport.terminate()`](asyncio-protocol#asyncio.SubprocessTransport.terminate "asyncio.SubprocessTransport.terminate") | Stop the subprocess. | | [`transport.close()`](asyncio-protocol#asyncio.SubprocessTransport.close "asyncio.SubprocessTransport.close") | Kill the subprocess and close all pipes. | Protocols --------- Protocol classes can implement the following **callback methods**: | | | | --- | --- | | `callback` [`connection_made()`](asyncio-protocol#asyncio.BaseProtocol.connection_made "asyncio.BaseProtocol.connection_made") | Called when a connection is made. | | `callback` [`connection_lost()`](asyncio-protocol#asyncio.BaseProtocol.connection_lost "asyncio.BaseProtocol.connection_lost") | Called when the connection is lost or closed. | | `callback` [`pause_writing()`](asyncio-protocol#asyncio.BaseProtocol.pause_writing "asyncio.BaseProtocol.pause_writing") | Called when the transport’s buffer goes over the high water mark. | | `callback` [`resume_writing()`](asyncio-protocol#asyncio.BaseProtocol.resume_writing "asyncio.BaseProtocol.resume_writing") | Called when the transport’s buffer drains below the low water mark. | #### Streaming Protocols (TCP, Unix Sockets, Pipes) | | | | --- | --- | | `callback` [`data_received()`](asyncio-protocol#asyncio.Protocol.data_received "asyncio.Protocol.data_received") | Called when some data is received. | | `callback` [`eof_received()`](asyncio-protocol#asyncio.Protocol.eof_received "asyncio.Protocol.eof_received") | Called when an EOF is received. | #### Buffered Streaming Protocols | | | | --- | --- | | `callback` [`get_buffer()`](asyncio-protocol#asyncio.BufferedProtocol.get_buffer "asyncio.BufferedProtocol.get_buffer") | Called to allocate a new receive buffer. | | `callback` [`buffer_updated()`](asyncio-protocol#asyncio.BufferedProtocol.buffer_updated "asyncio.BufferedProtocol.buffer_updated") | Called when the buffer was updated with the received data. | | `callback` [`eof_received()`](asyncio-protocol#asyncio.BufferedProtocol.eof_received "asyncio.BufferedProtocol.eof_received") | Called when an EOF is received. | #### Datagram Protocols | | | | --- | --- | | `callback` [`datagram_received()`](asyncio-protocol#asyncio.DatagramProtocol.datagram_received "asyncio.DatagramProtocol.datagram_received") | Called when a datagram is received. 
| | `callback` [`error_received()`](asyncio-protocol#asyncio.DatagramProtocol.error_received "asyncio.DatagramProtocol.error_received") | Called when a previous send or receive operation raises an [`OSError`](exceptions#OSError "OSError"). | #### Subprocess Protocols | | | | --- | --- | | `callback` [`pipe_data_received()`](asyncio-protocol#asyncio.SubprocessProtocol.pipe_data_received "asyncio.SubprocessProtocol.pipe_data_received") | Called when the child process writes data into its *stdout* or *stderr* pipe. | | `callback` [`pipe_connection_lost()`](asyncio-protocol#asyncio.SubprocessProtocol.pipe_connection_lost "asyncio.SubprocessProtocol.pipe_connection_lost") | Called when one of the pipes communicating with the child process is closed. | | `callback` [`process_exited()`](asyncio-protocol#asyncio.SubprocessProtocol.process_exited "asyncio.SubprocessProtocol.process_exited") | Called when the child process has exited. | Event Loop Policies ------------------- Policies are a low-level mechanism to alter the behavior of functions like [`asyncio.get_event_loop()`](asyncio-eventloop#asyncio.get_event_loop "asyncio.get_event_loop"). See also the main [policies section](asyncio-policy#asyncio-policies) for more details. #### Accessing Policies | | | | --- | --- | | [`asyncio.get_event_loop_policy()`](asyncio-policy#asyncio.get_event_loop_policy "asyncio.get_event_loop_policy") | Return the current process-wide policy. | | [`asyncio.set_event_loop_policy()`](asyncio-policy#asyncio.set_event_loop_policy "asyncio.set_event_loop_policy") | Set a new process-wide policy. | | [`AbstractEventLoopPolicy`](asyncio-policy#asyncio.AbstractEventLoopPolicy "asyncio.AbstractEventLoopPolicy") | Base class for policy objects. |
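To see how the transport methods and protocol callbacks above fit together, here is a minimal sketch of a streaming protocol driven through `loop.create_connection()`. The host, port, and payload are placeholders, and the sketch assumes an echo server is already listening there:

```
import asyncio

class EchoClientProtocol(asyncio.Protocol):
    def __init__(self, message, on_con_lost):
        self.message = message
        self.on_con_lost = on_con_lost

    def connection_made(self, transport):
        # The transport is ready: send the payload.
        transport.write(self.message.encode())

    def data_received(self, data):
        print('Received:', data.decode())

    def connection_lost(self, exc):
        # Tell the waiting coroutine that the transport was closed.
        self.on_con_lost.set_result(True)

async def main():
    loop = asyncio.get_running_loop()
    on_con_lost = loop.create_future()
    transport, protocol = await loop.create_connection(
        lambda: EchoClientProtocol('hello', on_con_lost), '127.0.0.1', 8888)
    try:
        await on_con_lost
    finally:
        transport.close()

# asyncio.run(main())  # placeholder endpoint; requires a server on 127.0.0.1:8888
```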
python File and Directory Access File and Directory Access ========================= The modules described in this chapter deal with disk files and directories. For example, there are modules for reading the properties of files, manipulating paths in a portable way, and creating temporary files. The full list of modules in this chapter is: * [`pathlib` — Object-oriented filesystem paths](pathlib) + [Basic use](pathlib#basic-use) + [Pure paths](pathlib#pure-paths) - [General properties](pathlib#general-properties) - [Operators](pathlib#operators) - [Accessing individual parts](pathlib#accessing-individual-parts) - [Methods and properties](pathlib#methods-and-properties) + [Concrete paths](pathlib#concrete-paths) - [Methods](pathlib#methods) + [Correspondence to tools in the `os` module](pathlib#correspondence-to-tools-in-the-os-module) * [`os.path` — Common pathname manipulations](os.path) * [`fileinput` — Iterate over lines from multiple input streams](fileinput) * [`stat` — Interpreting `stat()` results](stat) * [`filecmp` — File and Directory Comparisons](filecmp) + [The `dircmp` class](filecmp#the-dircmp-class) * [`tempfile` — Generate temporary files and directories](tempfile) + [Examples](tempfile#examples) + [Deprecated functions and variables](tempfile#deprecated-functions-and-variables) * [`glob` — Unix style pathname pattern expansion](glob) * [`fnmatch` — Unix filename pattern matching](fnmatch) * [`linecache` — Random access to text lines](linecache) * [`shutil` — High-level file operations](shutil) + [Directory and files operations](shutil#directory-and-files-operations) - [Platform-dependent efficient copy operations](shutil#platform-dependent-efficient-copy-operations) - [copytree example](shutil#copytree-example) - [rmtree example](shutil#rmtree-example) + [Archiving operations](shutil#archiving-operations) - [Archiving example](shutil#archiving-example) - [Archiving example with base\_dir](shutil#archiving-example-with-base-dir) + [Querying the size of the output terminal](shutil#querying-the-size-of-the-output-terminal) See also `Module` [`os`](os#module-os "os: Miscellaneous operating system interfaces.") Operating system interfaces, including functions to work with files at a lower level than Python [file objects](../glossary#term-file-object). `Module` [`io`](io#module-io "io: Core tools for working with streams.") Python’s built-in I/O library, including both abstract classes and some concrete classes such as file I/O. `Built-in function` [`open()`](functions#open "open") The standard way to open files for reading and writing with Python. python tarfile — Read and write tar archive files tarfile — Read and write tar archive files ========================================== **Source code:** [Lib/tarfile.py](https://github.com/python/cpython/tree/3.9/Lib/tarfile.py) The [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") module makes it possible to read and write tar archives, including those using gzip, bz2 and lzma compression. Use the [`zipfile`](zipfile#module-zipfile "zipfile: Read and write ZIP-format archive files.") module to read or write `.zip` files, or the higher-level functions in [shutil](shutil#archiving-operations). 
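Before the details below, a minimal sketch of listing the members of an archive with transparent compression; the filename is a placeholder for an existing archive:

```
import tarfile

# Mode 'r:*' detects gzip, bz2 or lzma compression automatically.
with tarfile.open("sample.tar.gz", "r:*") as tar:
    for name in tar.getnames():
        print(name)
```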
Some facts and figures: * reads and writes [`gzip`](gzip#module-gzip "gzip: Interfaces for gzip compression and decompression using file objects."), [`bz2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") and [`lzma`](lzma#module-lzma "lzma: A Python wrapper for the liblzma compression library.") compressed archives if the respective modules are available. * read/write support for the POSIX.1-1988 (ustar) format. * read/write support for the GNU tar format including *longname* and *longlink* extensions, read-only support for all variants of the *sparse* extension including restoration of sparse files. * read/write support for the POSIX.1-2001 (pax) format. * handles directories, regular files, hardlinks, symbolic links, fifos, character devices and block devices and is able to acquire and restore file information like timestamp, access permissions and owner. Changed in version 3.3: Added support for [`lzma`](lzma#module-lzma "lzma: A Python wrapper for the liblzma compression library.") compression. `tarfile.open(name=None, mode='r', fileobj=None, bufsize=10240, **kwargs)` Return a [`TarFile`](#tarfile.TarFile "tarfile.TarFile") object for the pathname *name*. For detailed information on [`TarFile`](#tarfile.TarFile "tarfile.TarFile") objects and the keyword arguments that are allowed, see [TarFile Objects](#tarfile-objects). *mode* has to be a string of the form `'filemode[:compression]'`; it defaults to `'r'`. Here is a full list of mode combinations: | mode | action | | --- | --- | | `'r' or 'r:*'` | Open for reading with transparent compression (recommended). | | `'r:'` | Open for reading exclusively without compression. | | `'r:gz'` | Open for reading with gzip compression. | | `'r:bz2'` | Open for reading with bzip2 compression. | | `'r:xz'` | Open for reading with lzma compression. | | `'x'` or `'x:'` | Create a tarfile exclusively without compression. Raise a [`FileExistsError`](exceptions#FileExistsError "FileExistsError") exception if it already exists. | | `'x:gz'` | Create a tarfile with gzip compression. Raise a [`FileExistsError`](exceptions#FileExistsError "FileExistsError") exception if it already exists. | | `'x:bz2'` | Create a tarfile with bzip2 compression. Raise a [`FileExistsError`](exceptions#FileExistsError "FileExistsError") exception if it already exists. | | `'x:xz'` | Create a tarfile with lzma compression. Raise a [`FileExistsError`](exceptions#FileExistsError "FileExistsError") exception if it already exists. | | `'a' or 'a:'` | Open for appending with no compression. The file is created if it does not exist. | | `'w' or 'w:'` | Open for uncompressed writing. | | `'w:gz'` | Open for gzip compressed writing. | | `'w:bz2'` | Open for bzip2 compressed writing. | | `'w:xz'` | Open for lzma compressed writing. | Note that `'a:gz'`, `'a:bz2'` or `'a:xz'` is not possible. If *mode* is not suitable to open a certain (compressed) file for reading, [`ReadError`](#tarfile.ReadError "tarfile.ReadError") is raised. Use *mode* `'r'` to avoid this. If a compression method is not supported, [`CompressionError`](#tarfile.CompressionError "tarfile.CompressionError") is raised. If *fileobj* is specified, it is used as an alternative to a [file object](../glossary#term-file-object) opened in binary mode for *name*. It is supposed to be at position 0. 
For modes `'w:gz'`, `'r:gz'`, `'w:bz2'`, `'r:bz2'`, `'x:gz'`, `'x:bz2'`, [`tarfile.open()`](#tarfile.open "tarfile.open") accepts the keyword argument *compresslevel* (default `9`) to specify the compression level of the file. For modes `'w:xz'` and `'x:xz'`, [`tarfile.open()`](#tarfile.open "tarfile.open") accepts the keyword argument *preset* to specify the compression level of the file. For special purposes, there is a second format for *mode*: `'filemode|[compression]'`. [`tarfile.open()`](#tarfile.open "tarfile.open") will return a [`TarFile`](#tarfile.TarFile "tarfile.TarFile") object that processes its data as a stream of blocks. No random seeking will be done on the file. If given, *fileobj* may be any object that has a `read()` or `write()` method (depending on the *mode*). *bufsize* specifies the blocksize and defaults to `20 * 512` bytes. Use this variant in combination with e.g. `sys.stdin`, a socket [file object](../glossary#term-file-object) or a tape device. However, such a [`TarFile`](#tarfile.TarFile "tarfile.TarFile") object is limited in that it does not allow random access; see [Examples](#tar-examples). The currently possible modes: | Mode | Action | | --- | --- | | `'r|*'` | Open a *stream* of tar blocks for reading with transparent compression. | | `'r|'` | Open a *stream* of uncompressed tar blocks for reading. | | `'r|gz'` | Open a gzip compressed *stream* for reading. | | `'r|bz2'` | Open a bzip2 compressed *stream* for reading. | | `'r|xz'` | Open an lzma compressed *stream* for reading. | | `'w|'` | Open an uncompressed *stream* for writing. | | `'w|gz'` | Open a gzip compressed *stream* for writing. | | `'w|bz2'` | Open a bzip2 compressed *stream* for writing. | | `'w|xz'` | Open an lzma compressed *stream* for writing. | Changed in version 3.5: The `'x'` (exclusive creation) mode was added. Changed in version 3.6: The *name* parameter accepts a [path-like object](../glossary#term-path-like-object). `class tarfile.TarFile` Class for reading and writing tar archives. Do not use this class directly: use [`tarfile.open()`](#tarfile.open "tarfile.open") instead. See [TarFile Objects](#tarfile-objects). `tarfile.is_tarfile(name)` Return [`True`](constants#True "True") if *name* is a tar archive file that the [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") module can read. *name* may be a [`str`](stdtypes#str "str"), file, or file-like object. Changed in version 3.9: Support for file and file-like objects. The [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") module defines the following exceptions: `exception tarfile.TarError` Base class for all [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") exceptions. `exception tarfile.ReadError` Is raised when a tar archive is opened that either cannot be handled by the [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") module or is somehow invalid. `exception tarfile.CompressionError` Is raised when a compression method is not supported or when the data cannot be decoded properly. `exception tarfile.StreamError` Is raised for the limitations that are typical for stream-like [`TarFile`](#tarfile.TarFile "tarfile.TarFile") objects. `exception tarfile.ExtractError` Is raised for *non-fatal* errors when using [`TarFile.extract()`](#tarfile.TarFile.extract "tarfile.TarFile.extract"), but only if `TarFile.errorlevel == 2`. 
`exception tarfile.HeaderError` Is raised by [`TarInfo.frombuf()`](#tarfile.TarInfo.frombuf "tarfile.TarInfo.frombuf") if the buffer it gets is invalid. The following constants are available at the module level: `tarfile.ENCODING` The default character encoding: `'utf-8'` on Windows, the value returned by [`sys.getfilesystemencoding()`](sys#sys.getfilesystemencoding "sys.getfilesystemencoding") otherwise. Each of the following constants defines a tar archive format that the [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") module is able to create. See section [Supported tar formats](#tar-formats) for details. `tarfile.USTAR_FORMAT` POSIX.1-1988 (ustar) format. `tarfile.GNU_FORMAT` GNU tar format. `tarfile.PAX_FORMAT` POSIX.1-2001 (pax) format. `tarfile.DEFAULT_FORMAT` The default format for creating archives. This is currently [`PAX_FORMAT`](#tarfile.PAX_FORMAT "tarfile.PAX_FORMAT"). Changed in version 3.8: The default format for new archives was changed to [`PAX_FORMAT`](#tarfile.PAX_FORMAT "tarfile.PAX_FORMAT") from [`GNU_FORMAT`](#tarfile.GNU_FORMAT "tarfile.GNU_FORMAT"). See also `Module` [`zipfile`](zipfile#module-zipfile "zipfile: Read and write ZIP-format archive files.") Documentation of the [`zipfile`](zipfile#module-zipfile "zipfile: Read and write ZIP-format archive files.") standard module. [Archiving operations](shutil#archiving-operations) Documentation of the higher-level archiving facilities provided by the standard [`shutil`](shutil#module-shutil "shutil: High-level file operations, including copying.") module. [GNU tar manual, Basic Tar Format](https://www.gnu.org/software/tar/manual/html_node/Standard.html) Documentation for tar archive files, including GNU tar extensions. TarFile Objects --------------- The [`TarFile`](#tarfile.TarFile "tarfile.TarFile") object provides an interface to a tar archive. A tar archive is a sequence of blocks. An archive member (a stored file) is made up of a header block followed by data blocks. It is possible to store a file in a tar archive several times. Each archive member is represented by a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object, see [TarInfo Objects](#tarinfo-objects) for details. A [`TarFile`](#tarfile.TarFile "tarfile.TarFile") object can be used as a context manager in a [`with`](../reference/compound_stmts#with) statement. It will automatically be closed when the block is completed. Please note that in the event of an exception an archive opened for writing will not be finalized; only the internally used file object will be closed. See the [Examples](#tar-examples) section for a use case. New in version 3.2: Added support for the context management protocol. `class tarfile.TarFile(name=None, mode='r', fileobj=None, format=DEFAULT_FORMAT, tarinfo=TarInfo, dereference=False, ignore_zeros=False, encoding=ENCODING, errors='surrogateescape', pax_headers=None, debug=0, errorlevel=0)` All following arguments are optional and can be accessed as instance attributes as well. *name* is the pathname of the archive. *name* may be a [path-like object](../glossary#term-path-like-object). It can be omitted if *fileobj* is given. In this case, the file object’s `name` attribute is used if it exists. *mode* is either `'r'` to read from an existing archive, `'a'` to append data to an existing file, `'w'` to create a new file overwriting an existing one, or `'x'` to create a new file only if it does not already exist. If *fileobj* is given, it is used for reading or writing data. 
If it can be determined, *mode* is overridden by *fileobj*’s mode. *fileobj* will be used from position 0. Note *fileobj* is not closed when [`TarFile`](#tarfile.TarFile "tarfile.TarFile") is closed. *format* controls the archive format for writing. It must be one of the constants [`USTAR_FORMAT`](#tarfile.USTAR_FORMAT "tarfile.USTAR_FORMAT"), [`GNU_FORMAT`](#tarfile.GNU_FORMAT "tarfile.GNU_FORMAT") or [`PAX_FORMAT`](#tarfile.PAX_FORMAT "tarfile.PAX_FORMAT") that are defined at module level. When reading, format will be automatically detected, even if different formats are present in a single archive. The *tarinfo* argument can be used to replace the default [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") class with a different one. If *dereference* is [`False`](constants#False "False"), add symbolic and hard links to the archive. If it is [`True`](constants#True "True"), add the content of the target files to the archive. This has no effect on systems that do not support symbolic links. If *ignore\_zeros* is [`False`](constants#False "False"), treat an empty block as the end of the archive. If it is [`True`](constants#True "True"), skip empty (and invalid) blocks and try to get as many members as possible. This is only useful for reading concatenated or damaged archives. *debug* can be set from `0` (no debug messages) up to `3` (all debug messages). The messages are written to `sys.stderr`. If *errorlevel* is `0`, all errors are ignored when using [`TarFile.extract()`](#tarfile.TarFile.extract "tarfile.TarFile.extract"). Nevertheless, they appear as error messages in the debug output when debugging is enabled. If `1`, all *fatal* errors are raised as [`OSError`](exceptions#OSError "OSError") exceptions. If `2`, all *non-fatal* errors are raised as [`TarError`](#tarfile.TarError "tarfile.TarError") exceptions as well. The *encoding* and *errors* arguments define the character encoding to be used for reading or writing the archive and how conversion errors are going to be handled. The default settings will work for most users. See section [Unicode issues](#tar-unicode) for in-depth information. The *pax\_headers* argument is an optional dictionary of strings which will be added as a pax global header if *format* is [`PAX_FORMAT`](#tarfile.PAX_FORMAT "tarfile.PAX_FORMAT"). Changed in version 3.2: Use `'surrogateescape'` as the default for the *errors* argument. Changed in version 3.5: The `'x'` (exclusive creation) mode was added. Changed in version 3.6: The *name* parameter accepts a [path-like object](../glossary#term-path-like-object). `classmethod TarFile.open(...)` Alternative constructor. The [`tarfile.open()`](#tarfile.open "tarfile.open") function is actually a shortcut to this classmethod. `TarFile.getmember(name)` Return a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object for member *name*. If *name* cannot be found in the archive, [`KeyError`](exceptions#KeyError "KeyError") is raised. Note If a member occurs more than once in the archive, its last occurrence is assumed to be the most up-to-date version. `TarFile.getmembers()` Return the members of the archive as a list of [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") objects. The list has the same order as the members in the archive. `TarFile.getnames()` Return the members as a list of their names. It has the same order as the list returned by [`getmembers()`](#tarfile.TarFile.getmembers "tarfile.TarFile.getmembers"). `TarFile.list(verbose=True, *, members=None)` Print a table of contents to `sys.stdout`. 
If *verbose* is [`False`](constants#False "False"), only the names of the members are printed. If it is [`True`](constants#True "True"), output similar to that of **ls -l** is produced. If optional *members* is given, it must be a subset of the list returned by [`getmembers()`](#tarfile.TarFile.getmembers "tarfile.TarFile.getmembers"). Changed in version 3.5: Added the *members* parameter. `TarFile.next()` Return the next member of the archive as a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object, when [`TarFile`](#tarfile.TarFile "tarfile.TarFile") is opened for reading. Return [`None`](constants#None "None") if there is no more available. `TarFile.extractall(path=".", members=None, *, numeric_owner=False)` Extract all members from the archive to the current working directory or directory *path*. If optional *members* is given, it must be a subset of the list returned by [`getmembers()`](#tarfile.TarFile.getmembers "tarfile.TarFile.getmembers"). Directory information like owner, modification time and permissions are set after all members have been extracted. This is done to work around two problems: A directory’s modification time is reset each time a file is created in it. And, if a directory’s permissions do not allow writing, extracting files to it will fail. If *numeric\_owner* is [`True`](constants#True "True"), the uid and gid numbers from the tarfile are used to set the owner/group for the extracted files. Otherwise, the named values from the tarfile are used. Warning Never extract archives from untrusted sources without prior inspection. It is possible that files are created outside of *path*, e.g. members that have absolute filenames starting with `"/"` or filenames with two dots `".."`. Changed in version 3.5: Added the *numeric\_owner* parameter. Changed in version 3.6: The *path* parameter accepts a [path-like object](../glossary#term-path-like-object). `TarFile.extract(member, path="", set_attrs=True, *, numeric_owner=False)` Extract a member from the archive to the current working directory, using its full name. Its file information is extracted as accurately as possible. *member* may be a filename or a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object. You can specify a different directory using *path*. *path* may be a [path-like object](../glossary#term-path-like-object). File attributes (owner, mtime, mode) are set unless *set\_attrs* is false. If *numeric\_owner* is [`True`](constants#True "True"), the uid and gid numbers from the tarfile are used to set the owner/group for the extracted files. Otherwise, the named values from the tarfile are used. Note The [`extract()`](#tarfile.TarFile.extract "tarfile.TarFile.extract") method does not take care of several extraction issues. In most cases you should consider using the [`extractall()`](#tarfile.TarFile.extractall "tarfile.TarFile.extractall") method. Warning See the warning for [`extractall()`](#tarfile.TarFile.extractall "tarfile.TarFile.extractall"). Changed in version 3.2: Added the *set\_attrs* parameter. Changed in version 3.5: Added the *numeric\_owner* parameter. Changed in version 3.6: The *path* parameter accepts a [path-like object](../glossary#term-path-like-object). `TarFile.extractfile(member)` Extract a member from the archive as a file object. *member* may be a filename or a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object. If *member* is a regular file or a link, an [`io.BufferedReader`](io#io.BufferedReader "io.BufferedReader") object is returned. 
For all other existing members, [`None`](constants#None "None") is returned. If *member* does not appear in the archive, [`KeyError`](exceptions#KeyError "KeyError") is raised. Changed in version 3.3: Return an [`io.BufferedReader`](io#io.BufferedReader "io.BufferedReader") object. `TarFile.add(name, arcname=None, recursive=True, *, filter=None)` Add the file *name* to the archive. *name* may be any type of file (directory, fifo, symbolic link, etc.). If given, *arcname* specifies an alternative name for the file in the archive. Directories are added recursively by default. This can be avoided by setting *recursive* to [`False`](constants#False "False"). Recursion adds entries in sorted order. If *filter* is given, it should be a function that takes a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object argument and returns the changed [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object. If it instead returns [`None`](constants#None "None") the [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object will be excluded from the archive. See [Examples](#tar-examples) for an example. Changed in version 3.2: Added the *filter* parameter. Changed in version 3.7: Recursion adds entries in sorted order. `TarFile.addfile(tarinfo, fileobj=None)` Add the [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object *tarinfo* to the archive. If *fileobj* is given, it should be a [binary file](../glossary#term-binary-file), and `tarinfo.size` bytes are read from it and added to the archive. You can create [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") objects directly, or by using [`gettarinfo()`](#tarfile.TarFile.gettarinfo "tarfile.TarFile.gettarinfo"). `TarFile.gettarinfo(name=None, arcname=None, fileobj=None)` Create a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object from the result of [`os.stat()`](os#os.stat "os.stat") or equivalent on an existing file. The file is either named by *name*, or specified as a [file object](../glossary#term-file-object) *fileobj* with a file descriptor. *name* may be a [path-like object](../glossary#term-path-like-object). If given, *arcname* specifies an alternative name for the file in the archive, otherwise, the name is taken from *fileobj*’s [`name`](io#io.FileIO.name "io.FileIO.name") attribute, or the *name* argument. The name should be a text string. You can modify some of the [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo")’s attributes before you add it using [`addfile()`](#tarfile.TarFile.addfile "tarfile.TarFile.addfile"). If the file object is not an ordinary file object positioned at the beginning of the file, attributes such as [`size`](#tarfile.TarInfo.size "tarfile.TarInfo.size") may need modifying. This is the case for objects such as [`GzipFile`](gzip#gzip.GzipFile "gzip.GzipFile"). The [`name`](#tarfile.TarInfo.name "tarfile.TarInfo.name") may also be modified, in which case *arcname* could be a dummy string. Changed in version 3.6: The *name* parameter accepts a [path-like object](../glossary#term-path-like-object). `TarFile.close()` Close the [`TarFile`](#tarfile.TarFile "tarfile.TarFile"). In write mode, two finishing zero blocks are appended to the archive. `TarFile.pax_headers` A dictionary containing key-value pairs of pax global headers. TarInfo Objects --------------- A [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object represents one member in a [`TarFile`](#tarfile.TarFile "tarfile.TarFile"). 
Aside from storing all required attributes of a file (like file type, size, time, permissions, owner etc.), it provides some useful methods to determine its type. It does *not* contain the file’s data itself. [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") objects are returned by [`TarFile`](#tarfile.TarFile "tarfile.TarFile")’s methods `getmember()`, `getmembers()` and `gettarinfo()`. `class tarfile.TarInfo(name="")` Create a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object. `classmethod TarInfo.frombuf(buf, encoding, errors)` Create and return a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object from string buffer *buf*. Raises [`HeaderError`](#tarfile.HeaderError "tarfile.HeaderError") if the buffer is invalid. `classmethod TarInfo.fromtarfile(tarfile)` Read the next member from the [`TarFile`](#tarfile.TarFile "tarfile.TarFile") object *tarfile* and return it as a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object. `TarInfo.tobuf(format=DEFAULT_FORMAT, encoding=ENCODING, errors='surrogateescape')` Create a string buffer from a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object. For information on the arguments see the constructor of the [`TarFile`](#tarfile.TarFile "tarfile.TarFile") class. Changed in version 3.2: Use `'surrogateescape'` as the default for the *errors* argument. A `TarInfo` object has the following public data attributes: `TarInfo.name` Name of the archive member. `TarInfo.size` Size in bytes. `TarInfo.mtime` Time of last modification. `TarInfo.mode` Permission bits. `TarInfo.type` File type. *type* is usually one of these constants: `REGTYPE`, `AREGTYPE`, `LNKTYPE`, `SYMTYPE`, `DIRTYPE`, `FIFOTYPE`, `CONTTYPE`, `CHRTYPE`, `BLKTYPE`, `GNUTYPE_SPARSE`. To determine the type of a [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object more conveniently, use the `is*()` methods below. `TarInfo.linkname` Name of the target file, which is only present in [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") objects of type `LNKTYPE` and `SYMTYPE`. `TarInfo.uid` User ID of the user who originally stored this member. `TarInfo.gid` Group ID of the user who originally stored this member. `TarInfo.uname` User name. `TarInfo.gname` Group name. `TarInfo.pax_headers` A dictionary containing key-value pairs of an associated pax extended header. A [`TarInfo`](#tarfile.TarInfo "tarfile.TarInfo") object also provides some convenient query methods: `TarInfo.isfile()` Return [`True`](constants#True "True") if the `TarInfo` object is a regular file. `TarInfo.isreg()` Same as [`isfile()`](#tarfile.TarInfo.isfile "tarfile.TarInfo.isfile"). `TarInfo.isdir()` Return [`True`](constants#True "True") if it is a directory. `TarInfo.issym()` Return [`True`](constants#True "True") if it is a symbolic link. `TarInfo.islnk()` Return [`True`](constants#True "True") if it is a hard link. `TarInfo.ischr()` Return [`True`](constants#True "True") if it is a character device. `TarInfo.isblk()` Return [`True`](constants#True "True") if it is a block device. `TarInfo.isfifo()` Return [`True`](constants#True "True") if it is a FIFO. `TarInfo.isdev()` Return [`True`](constants#True "True") if it is one of character device, block device or FIFO. Command-Line Interface ---------------------- New in version 3.4. The [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") module provides a simple command-line interface to interact with tar archives. 
If you want to create a new tar archive, specify its name after the [`-c`](#cmdoption-tarfile-c) option and then list the filename(s) that should be included: ``` $ python -m tarfile -c monty.tar spam.txt eggs.txt ``` Passing a directory is also acceptable: ``` $ python -m tarfile -c monty.tar life-of-brian_1979/ ``` If you want to extract a tar archive into the current directory, use the [`-e`](#cmdoption-tarfile-e) option: ``` $ python -m tarfile -e monty.tar ``` You can also extract a tar archive into a different directory by passing the directory’s name: ``` $ python -m tarfile -e monty.tar other-dir/ ``` For a list of the files in a tar archive, use the [`-l`](#cmdoption-tarfile-l) option: ``` $ python -m tarfile -l monty.tar ``` ### Command-line options `-l <tarfile>` `--list <tarfile>` List files in a tarfile. `-c <tarfile> <source1> ... <sourceN>` `--create <tarfile> <source1> ... <sourceN>` Create tarfile from source files. `-e <tarfile> [<output_dir>]` `--extract <tarfile> [<output_dir>]` Extract tarfile into the current directory if *output\_dir* is not specified. `-t <tarfile>` `--test <tarfile>` Test whether the tarfile is valid or not. `-v, --verbose` Verbose output. Examples -------- How to extract an entire tar archive to the current working directory: ``` import tarfile tar = tarfile.open("sample.tar.gz") tar.extractall() tar.close() ``` How to extract a subset of a tar archive with [`TarFile.extractall()`](#tarfile.TarFile.extractall "tarfile.TarFile.extractall") using a generator function instead of a list: ``` import os import tarfile def py_files(members): for tarinfo in members: if os.path.splitext(tarinfo.name)[1] == ".py": yield tarinfo tar = tarfile.open("sample.tar.gz") tar.extractall(members=py_files(tar)) tar.close() ``` How to create an uncompressed tar archive from a list of filenames: ``` import tarfile tar = tarfile.open("sample.tar", "w") for name in ["foo", "bar", "quux"]: tar.add(name) tar.close() ``` The same example using the [`with`](../reference/compound_stmts#with) statement: ``` import tarfile with tarfile.open("sample.tar", "w") as tar: for name in ["foo", "bar", "quux"]: tar.add(name) ``` How to read a gzip compressed tar archive and display some member information: ``` import tarfile tar = tarfile.open("sample.tar.gz", "r:gz") for tarinfo in tar: print(tarinfo.name, "is", tarinfo.size, "bytes in size and is ", end="") if tarinfo.isreg(): print("a regular file.") elif tarinfo.isdir(): print("a directory.") else: print("something else.") tar.close() ``` How to create an archive and reset the user information using the *filter* parameter in [`TarFile.add()`](#tarfile.TarFile.add "tarfile.TarFile.add"): ``` import tarfile def reset(tarinfo): tarinfo.uid = tarinfo.gid = 0 tarinfo.uname = tarinfo.gname = "root" return tarinfo tar = tarfile.open("sample.tar.gz", "w:gz") tar.add("foo", filter=reset) tar.close() ``` Supported tar formats --------------------- There are three tar formats that can be created with the [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") module: * The POSIX.1-1988 ustar format ([`USTAR_FORMAT`](#tarfile.USTAR_FORMAT "tarfile.USTAR_FORMAT")). It supports filenames up to a length of at best 256 characters and linknames up to 100 characters. The maximum file size is 8 GiB. This is an old and limited but widely supported format. * The GNU tar format ([`GNU_FORMAT`](#tarfile.GNU_FORMAT "tarfile.GNU_FORMAT")). It supports long filenames and linknames, files bigger than 8 GiB and sparse files. 
It is the de facto standard on GNU/Linux systems. [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") fully supports the GNU tar extensions for long names; sparse file support is read-only. * The POSIX.1-2001 pax format ([`PAX_FORMAT`](#tarfile.PAX_FORMAT "tarfile.PAX_FORMAT")). It is the most flexible format with virtually no limits. It supports long filenames and linknames, large files and stores pathnames in a portable way. Modern tar implementations, including GNU tar, bsdtar/libarchive and star, fully support extended *pax* features; some old or unmaintained libraries may not, but should treat *pax* archives as if they were in the universally-supported *ustar* format. It is the current default format for new archives. It extends the existing *ustar* format with extra headers for information that cannot be stored otherwise. There are two flavours of pax headers: extended headers only affect the subsequent file header, while global headers are valid for the complete archive and affect all following files. All the data in a pax header is encoded in *UTF-8* for portability reasons. There are some more variants of the tar format which can be read, but not created: * The ancient V7 format. This is the first tar format from Unix Seventh Edition, storing only regular files and directories. Names must not be longer than 100 characters, and there is no user/group name information. Some archives have miscalculated header checksums in case of fields with non-ASCII characters. * The SunOS tar extended format. This format is a variant of the POSIX.1-2001 pax format, but is not compatible. Unicode issues -------------- The tar format was originally conceived to make backups on tape drives with the main focus on preserving file system information. Nowadays tar archives are commonly used for file distribution and exchanging archives over networks. One problem of the original format (which is the basis of all other formats) is that there is no concept of supporting different character encodings. For example, an ordinary tar archive created on a *UTF-8* system cannot be read correctly on a *Latin-1* system if it contains non-*ASCII* characters. Textual metadata (like filenames, linknames, user/group names) will appear damaged. Unfortunately, there is no way to autodetect the encoding of an archive. The pax format was designed to solve this problem. It stores non-ASCII metadata using the universal character encoding *UTF-8*. The details of character conversion in [`tarfile`](#module-tarfile "tarfile: Read and write tar-format archive files.") are controlled by the *encoding* and *errors* keyword arguments of the [`TarFile`](#tarfile.TarFile "tarfile.TarFile") class. *encoding* defines the character encoding to use for the metadata in the archive. The default value is [`sys.getfilesystemencoding()`](sys#sys.getfilesystemencoding "sys.getfilesystemencoding") or `'ascii'` as a fallback. Depending on whether the archive is read or written, the metadata must be either decoded or encoded. If *encoding* is not set appropriately, this conversion may fail. The *errors* argument defines how characters are treated that cannot be converted. Possible values are listed in section [Error Handlers](codecs#error-handlers). The default scheme is `'surrogateescape'` which Python also uses for its file system calls, see [File Names, Command Line Arguments, and Environment Variables](os#os-filenames). 
For [`PAX_FORMAT`](#tarfile.PAX_FORMAT "tarfile.PAX_FORMAT") archives (the default), *encoding* is generally not needed because all the metadata is stored using *UTF-8*. *encoding* is only used in the rare cases when binary pax headers are decoded or when strings with surrogate characters are stored.
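As a minimal sketch of these arguments in use, reading an old non-pax archive whose member names are known to be *Latin-1* encoded (the filename is a placeholder):

```
import tarfile

# encoding/errors are forwarded by tarfile.open() to the TarFile class.
with tarfile.open("legacy.tar", "r",
                  encoding="latin-1", errors="surrogateescape") as tar:
    for name in tar.getnames():
        print(name)
```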
python errno — Standard errno system symbols errno — Standard errno system symbols ===================================== This module makes available standard `errno` system symbols. The value of each symbol is the corresponding integer value. The names and descriptions are borrowed from `linux/include/errno.h`, which should be all-inclusive. `errno.errorcode` Dictionary providing a mapping from the errno value to the string name in the underlying system. For instance, `errno.errorcode[errno.EPERM]` maps to `'EPERM'`. To translate a numeric error code to an error message, use [`os.strerror()`](os#os.strerror "os.strerror"). Of the following list, symbols that are not used on the current platform are not defined by the module. The specific list of defined symbols is available as `errno.errorcode.keys()`. Symbols available can include: `errno.EPERM` Operation not permitted. This error is mapped to the exception [`PermissionError`](exceptions#PermissionError "PermissionError"). `errno.ENOENT` No such file or directory. This error is mapped to the exception [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError"). `errno.ESRCH` No such process. This error is mapped to the exception [`ProcessLookupError`](exceptions#ProcessLookupError "ProcessLookupError"). `errno.EINTR` Interrupted system call. This error is mapped to the exception [`InterruptedError`](exceptions#InterruptedError "InterruptedError"). `errno.EIO` I/O error `errno.ENXIO` No such device or address `errno.E2BIG` Arg list too long `errno.ENOEXEC` Exec format error `errno.EBADF` Bad file number `errno.ECHILD` No child processes. This error is mapped to the exception [`ChildProcessError`](exceptions#ChildProcessError "ChildProcessError"). `errno.EAGAIN` Try again. This error is mapped to the exception [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError"). `errno.ENOMEM` Out of memory `errno.EACCES` Permission denied. This error is mapped to the exception [`PermissionError`](exceptions#PermissionError "PermissionError"). `errno.EFAULT` Bad address `errno.ENOTBLK` Block device required `errno.EBUSY` Device or resource busy `errno.EEXIST` File exists. This error is mapped to the exception [`FileExistsError`](exceptions#FileExistsError "FileExistsError"). `errno.EXDEV` Cross-device link `errno.ENODEV` No such device `errno.ENOTDIR` Not a directory. This error is mapped to the exception [`NotADirectoryError`](exceptions#NotADirectoryError "NotADirectoryError"). `errno.EISDIR` Is a directory. This error is mapped to the exception [`IsADirectoryError`](exceptions#IsADirectoryError "IsADirectoryError"). `errno.EINVAL` Invalid argument `errno.ENFILE` File table overflow `errno.EMFILE` Too many open files `errno.ENOTTY` Not a typewriter `errno.ETXTBSY` Text file busy `errno.EFBIG` File too large `errno.ENOSPC` No space left on device `errno.ESPIPE` Illegal seek `errno.EROFS` Read-only file system `errno.EMLINK` Too many links `errno.EPIPE` Broken pipe. This error is mapped to the exception [`BrokenPipeError`](exceptions#BrokenPipeError "BrokenPipeError"). `errno.EDOM` Math argument out of domain of func `errno.ERANGE` Math result not representable `errno.EDEADLK` Resource deadlock would occur `errno.ENAMETOOLONG` File name too long `errno.ENOLCK` No record locks available `errno.ENOSYS` Function not implemented `errno.ENOTEMPTY` Directory not empty `errno.ELOOP` Too many symbolic links encountered `errno.EWOULDBLOCK` Operation would block. 
This error is mapped to the exception [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError"). `errno.ENOMSG` No message of desired type `errno.EIDRM` Identifier removed `errno.ECHRNG` Channel number out of range `errno.EL2NSYNC` Level 2 not synchronized `errno.EL3HLT` Level 3 halted `errno.EL3RST` Level 3 reset `errno.ELNRNG` Link number out of range `errno.EUNATCH` Protocol driver not attached `errno.ENOCSI` No CSI structure available `errno.EL2HLT` Level 2 halted `errno.EBADE` Invalid exchange `errno.EBADR` Invalid request descriptor `errno.EXFULL` Exchange full `errno.ENOANO` No anode `errno.EBADRQC` Invalid request code `errno.EBADSLT` Invalid slot `errno.EDEADLOCK` File locking deadlock error `errno.EBFONT` Bad font file format `errno.ENOSTR` Device not a stream `errno.ENODATA` No data available `errno.ETIME` Timer expired `errno.ENOSR` Out of streams resources `errno.ENONET` Machine is not on the network `errno.ENOPKG` Package not installed `errno.EREMOTE` Object is remote `errno.ENOLINK` Link has been severed `errno.EADV` Advertise error `errno.ESRMNT` Srmount error `errno.ECOMM` Communication error on send `errno.EPROTO` Protocol error `errno.EMULTIHOP` Multihop attempted `errno.EDOTDOT` RFS specific error `errno.EBADMSG` Not a data message `errno.EOVERFLOW` Value too large for defined data type `errno.ENOTUNIQ` Name not unique on network `errno.EBADFD` File descriptor in bad state `errno.EREMCHG` Remote address changed `errno.ELIBACC` Can not access a needed shared library `errno.ELIBBAD` Accessing a corrupted shared library `errno.ELIBSCN` .lib section in a.out corrupted `errno.ELIBMAX` Attempting to link in too many shared libraries `errno.ELIBEXEC` Cannot exec a shared library directly `errno.EILSEQ` Illegal byte sequence `errno.ERESTART` Interrupted system call should be restarted `errno.ESTRPIPE` Streams pipe error `errno.EUSERS` Too many users `errno.ENOTSOCK` Socket operation on non-socket `errno.EDESTADDRREQ` Destination address required `errno.EMSGSIZE` Message too long `errno.EPROTOTYPE` Protocol wrong type for socket `errno.ENOPROTOOPT` Protocol not available `errno.EPROTONOSUPPORT` Protocol not supported `errno.ESOCKTNOSUPPORT` Socket type not supported `errno.EOPNOTSUPP` Operation not supported on transport endpoint `errno.EPFNOSUPPORT` Protocol family not supported `errno.EAFNOSUPPORT` Address family not supported by protocol `errno.EADDRINUSE` Address already in use `errno.EADDRNOTAVAIL` Cannot assign requested address `errno.ENETDOWN` Network is down `errno.ENETUNREACH` Network is unreachable `errno.ENETRESET` Network dropped connection because of reset `errno.ECONNABORTED` Software caused connection abort. This error is mapped to the exception [`ConnectionAbortedError`](exceptions#ConnectionAbortedError "ConnectionAbortedError"). `errno.ECONNRESET` Connection reset by peer. This error is mapped to the exception [`ConnectionResetError`](exceptions#ConnectionResetError "ConnectionResetError"). `errno.ENOBUFS` No buffer space available `errno.EISCONN` Transport endpoint is already connected `errno.ENOTCONN` Transport endpoint is not connected `errno.ESHUTDOWN` Cannot send after transport endpoint shutdown. This error is mapped to the exception [`BrokenPipeError`](exceptions#BrokenPipeError "BrokenPipeError"). `errno.ETOOMANYREFS` Too many references: cannot splice `errno.ETIMEDOUT` Connection timed out. This error is mapped to the exception [`TimeoutError`](exceptions#TimeoutError "TimeoutError"). `errno.ECONNREFUSED` Connection refused. 
This error is mapped to the exception [`ConnectionRefusedError`](exceptions#ConnectionRefusedError "ConnectionRefusedError"). `errno.EHOSTDOWN` Host is down `errno.EHOSTUNREACH` No route to host `errno.EALREADY` Operation already in progress. This error is mapped to the exception [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError"). `errno.EINPROGRESS` Operation now in progress. This error is mapped to the exception [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError"). `errno.ESTALE` Stale NFS file handle `errno.EUCLEAN` Structure needs cleaning `errno.ENOTNAM` Not a XENIX named type file `errno.ENAVAIL` No XENIX semaphores available `errno.EISNAM` Is a named type file `errno.EREMOTEIO` Remote I/O error `errno.EDQUOT` Quota exceeded python rlcompleter — Completion function for GNU readline rlcompleter — Completion function for GNU readline ================================================== **Source code:** [Lib/rlcompleter.py](https://github.com/python/cpython/tree/3.9/Lib/rlcompleter.py) The [`rlcompleter`](#module-rlcompleter "rlcompleter: Python identifier completion, suitable for the GNU readline library.") module defines a completion function suitable for the [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") module by completing valid Python identifiers and keywords. When this module is imported on a Unix platform with the [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") module available, an instance of the `Completer` class is automatically created and its `complete()` method is set as the [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") completer. Example: ``` >>> import rlcompleter >>> import readline >>> readline.parse_and_bind("tab: complete") >>> readline. <TAB PRESSED> readline.__doc__ readline.get_line_buffer( readline.read_init_file( readline.__file__ readline.insert_text( readline.set_completer( readline.__name__ readline.parse_and_bind( >>> readline. ``` The [`rlcompleter`](#module-rlcompleter "rlcompleter: Python identifier completion, suitable for the GNU readline library.") module is designed for use with Python’s [interactive mode](../tutorial/interpreter#tut-interactive). Unless Python is run with the [`-S`](../using/cmdline#id3) option, the module is automatically imported and configured (see [Readline configuration](site#rlcompleter-config)). On platforms without [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)"), the `Completer` class defined by this module can still be used for custom purposes. Completer Objects ----------------- Completer objects have the following method: `Completer.complete(text, state)` Return the *state*th completion for *text*. If called for *text* that doesn’t include a period character (`'.'`), it will complete from names currently defined in [`__main__`](__main__#module-__main__ "__main__: The environment where the top-level script is run."), [`builtins`](builtins#module-builtins "builtins: The module that provides the built-in namespace.") and keywords (as defined by the [`keyword`](keyword#module-keyword "keyword: Test whether a string is a keyword in Python.") module). 
If called for a dotted name, it will try to evaluate anything without obvious side-effects (functions will not be evaluated, but it can generate calls to [`__getattr__()`](../reference/datamodel#object.__getattr__ "object.__getattr__")) up to the last part, and find matches for the rest via the [`dir()`](functions#dir "dir") function. Any exception raised during the evaluation of the expression is caught, silenced and [`None`](constants#None "None") is returned. python html — HyperText Markup Language support html — HyperText Markup Language support ======================================== **Source code:** [Lib/html/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/html/__init__.py) This module defines utilities to manipulate HTML. `html.escape(s, quote=True)` Convert the characters `&`, `<` and `>` in string *s* to HTML-safe sequences. Use this if you need to display text that might contain such characters in HTML. If the optional flag *quote* is true, the characters (`"`) and (`'`) are also translated; this helps for inclusion in an HTML attribute value delimited by quotes, as in `<a href="...">`. New in version 3.2. `html.unescape(s)` Convert all named and numeric character references (e.g. `&gt;`, `&#62;`, `&#x3e;`) in the string *s* to the corresponding Unicode characters. This function uses the rules defined by the HTML 5 standard for both valid and invalid character references, and the [`list of HTML 5 named character references`](html.entities#html.entities.html5 "html.entities.html5"). New in version 3.4. Submodules in the `html` package are: * [`html.parser`](html.parser#module-html.parser "html.parser: A simple parser that can handle HTML and XHTML.") – HTML/XHTML parser with lenient parsing mode * [`html.entities`](html.entities#module-html.entities "html.entities: Definitions of HTML general entities.") – HTML entity definitions python base64 — Base16, Base32, Base64, Base85 Data Encodings base64 — Base16, Base32, Base64, Base85 Data Encodings ====================================================== **Source code:** [Lib/base64.py](https://github.com/python/cpython/tree/3.9/Lib/base64.py) This module provides functions for encoding binary data to printable ASCII characters and decoding such encodings back to binary data. It provides encoding and decoding functions for the encodings specified in [**RFC 3548**](https://tools.ietf.org/html/rfc3548.html), which defines the Base16, Base32, and Base64 algorithms, and for the de-facto standard Ascii85 and Base85 encodings. The [**RFC 3548**](https://tools.ietf.org/html/rfc3548.html) encodings are suitable for encoding binary data so that it can safely be sent by email, used as parts of URLs, or included as part of an HTTP POST request. The encoding algorithm is not the same as the **uuencode** program. There are two interfaces provided by this module. The modern interface supports encoding [bytes-like objects](../glossary#term-bytes-like-object) to ASCII [`bytes`](stdtypes#bytes "bytes"), and decoding [bytes-like objects](../glossary#term-bytes-like-object) or strings containing ASCII to [`bytes`](stdtypes#bytes "bytes"). Both base-64 alphabets defined in [**RFC 3548**](https://tools.ietf.org/html/rfc3548.html) (normal, and URL- and filesystem-safe) are supported. The legacy interface does not support decoding from strings, but it does provide functions for encoding and decoding to and from [file objects](../glossary#term-file-object). 
It only supports the Base64 standard alphabet, and it adds newlines every 76 characters as per [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html). Note that if you are looking for [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html) support you probably want to be looking at the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package instead. Changed in version 3.3: ASCII-only Unicode strings are now accepted by the decoding functions of the modern interface. Changed in version 3.4: Any [bytes-like objects](../glossary#term-bytes-like-object) are now accepted by all encoding and decoding functions in this module. Ascii85/Base85 support added. The modern interface provides: `base64.b64encode(s, altchars=None)` Encode the [bytes-like object](../glossary#term-bytes-like-object) *s* using Base64 and return the encoded [`bytes`](stdtypes#bytes "bytes"). Optional *altchars* must be a [bytes-like object](../glossary#term-bytes-like-object) of at least length 2 (additional characters are ignored) which specifies an alternative alphabet for the `+` and `/` characters. This allows an application to e.g. generate URL or filesystem safe Base64 strings. The default is `None`, for which the standard Base64 alphabet is used. `base64.b64decode(s, altchars=None, validate=False)` Decode the Base64 encoded [bytes-like object](../glossary#term-bytes-like-object) or ASCII string *s* and return the decoded [`bytes`](stdtypes#bytes "bytes"). Optional *altchars* must be a [bytes-like object](../glossary#term-bytes-like-object) or ASCII string of at least length 2 (additional characters are ignored) which specifies the alternative alphabet used instead of the `+` and `/` characters. A [`binascii.Error`](binascii#binascii.Error "binascii.Error") exception is raised if *s* is incorrectly padded. If *validate* is `False` (the default), characters that are neither in the normal base-64 alphabet nor the alternative alphabet are discarded prior to the padding check. If *validate* is `True`, these non-alphabet characters in the input result in a [`binascii.Error`](binascii#binascii.Error "binascii.Error"). `base64.standard_b64encode(s)` Encode [bytes-like object](../glossary#term-bytes-like-object) *s* using the standard Base64 alphabet and return the encoded [`bytes`](stdtypes#bytes "bytes"). `base64.standard_b64decode(s)` Decode [bytes-like object](../glossary#term-bytes-like-object) or ASCII string *s* using the standard Base64 alphabet and return the decoded [`bytes`](stdtypes#bytes "bytes"). `base64.urlsafe_b64encode(s)` Encode [bytes-like object](../glossary#term-bytes-like-object) *s* using the URL- and filesystem-safe alphabet, which substitutes `-` instead of `+` and `_` instead of `/` in the standard Base64 alphabet, and return the encoded [`bytes`](stdtypes#bytes "bytes"). The result can still contain `=`. `base64.urlsafe_b64decode(s)` Decode [bytes-like object](../glossary#term-bytes-like-object) or ASCII string *s* using the URL- and filesystem-safe alphabet, which substitutes `-` instead of `+` and `_` instead of `/` in the standard Base64 alphabet, and return the decoded [`bytes`](stdtypes#bytes "bytes"). `base64.b32encode(s)` Encode the [bytes-like object](../glossary#term-bytes-like-object) *s* using Base32 and return the encoded [`bytes`](stdtypes#bytes "bytes"). 
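For illustration, a Base32 round trip on an arbitrary three-byte input (the decoding call is documented just below):

```
>>> import base64
>>> base64.b32encode(b'abc')
b'MFRGG==='
>>> base64.b32decode(b'MFRGG===')
b'abc'
```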
`base64.b32decode(s, casefold=False, map01=None)` Decode the Base32 encoded [bytes-like object](../glossary#term-bytes-like-object) or ASCII string *s* and return the decoded [`bytes`](stdtypes#bytes "bytes"). Optional *casefold* is a flag specifying whether a lowercase alphabet is acceptable as input. For security purposes, the default is `False`. [**RFC 3548**](https://tools.ietf.org/html/rfc3548.html) allows for optional mapping of the digit 0 (zero) to the letter O (oh), and for optional mapping of the digit 1 (one) to either the letter I (eye) or letter L (el). The optional argument *map01* when not `None`, specifies which letter the digit 1 should be mapped to (when *map01* is not `None`, the digit 0 is always mapped to the letter O). For security purposes the default is `None`, so that 0 and 1 are not allowed in the input. A [`binascii.Error`](binascii#binascii.Error "binascii.Error") is raised if *s* is incorrectly padded or if there are non-alphabet characters present in the input. `base64.b16encode(s)` Encode the [bytes-like object](../glossary#term-bytes-like-object) *s* using Base16 and return the encoded [`bytes`](stdtypes#bytes "bytes"). `base64.b16decode(s, casefold=False)` Decode the Base16 encoded [bytes-like object](../glossary#term-bytes-like-object) or ASCII string *s* and return the decoded [`bytes`](stdtypes#bytes "bytes"). Optional *casefold* is a flag specifying whether a lowercase alphabet is acceptable as input. For security purposes, the default is `False`. A [`binascii.Error`](binascii#binascii.Error "binascii.Error") is raised if *s* is incorrectly padded or if there are non-alphabet characters present in the input. `base64.a85encode(b, *, foldspaces=False, wrapcol=0, pad=False, adobe=False)` Encode the [bytes-like object](../glossary#term-bytes-like-object) *b* using Ascii85 and return the encoded [`bytes`](stdtypes#bytes "bytes"). *foldspaces* is an optional flag that uses the special short sequence ‘y’ instead of 4 consecutive spaces (ASCII 0x20) as supported by ‘btoa’. This feature is not supported by the “standard” Ascii85 encoding. *wrapcol* controls whether the output should have newline (`b'\n'`) characters added to it. If this is non-zero, each output line will be at most this many characters long. *pad* controls whether the input is padded to a multiple of 4 before encoding. Note that the `btoa` implementation always pads. *adobe* controls whether the encoded byte sequence is framed with `<~` and `~>`, which is used by the Adobe implementation. New in version 3.4. `base64.a85decode(b, *, foldspaces=False, adobe=False, ignorechars=b' \t\n\r\v')` Decode the Ascii85 encoded [bytes-like object](../glossary#term-bytes-like-object) or ASCII string *b* and return the decoded [`bytes`](stdtypes#bytes "bytes"). *foldspaces* is a flag that specifies whether the ‘y’ short sequence should be accepted as shorthand for 4 consecutive spaces (ASCII 0x20). This feature is not supported by the “standard” Ascii85 encoding. *adobe* controls whether the input sequence is in Adobe Ascii85 format (i.e. is framed with <~ and ~>). *ignorechars* should be a [bytes-like object](../glossary#term-bytes-like-object) or ASCII string containing characters to ignore from the input. This should only contain whitespace characters, and by default contains all whitespace characters in ASCII. New in version 3.4. `base64.b85encode(b, pad=False)` Encode the [bytes-like object](../glossary#term-bytes-like-object) *b* using base85 (as used in e.g. 
git-style binary diffs) and return the encoded [`bytes`](stdtypes#bytes "bytes"). If *pad* is true, the input is padded with `b'\0'` so its length is a multiple of 4 bytes before encoding. New in version 3.4. `base64.b85decode(b)` Decode the base85-encoded [bytes-like object](../glossary#term-bytes-like-object) or ASCII string *b* and return the decoded [`bytes`](stdtypes#bytes "bytes"). Padding is implicitly removed, if necessary. New in version 3.4. The legacy interface: `base64.decode(input, output)` Decode the contents of the binary *input* file and write the resulting binary data to the *output* file. *input* and *output* must be [file objects](../glossary#term-file-object). *input* will be read until `input.readline()` returns an empty bytes object. `base64.decodebytes(s)` Decode the [bytes-like object](../glossary#term-bytes-like-object) *s*, which must contain one or more lines of base64 encoded data, and return the decoded [`bytes`](stdtypes#bytes "bytes"). New in version 3.1. `base64.encode(input, output)` Encode the contents of the binary *input* file and write the resulting base64 encoded data to the *output* file. *input* and *output* must be [file objects](../glossary#term-file-object). *input* will be read until `input.read()` returns an empty bytes object. [`encode()`](#base64.encode "base64.encode") inserts a newline character (`b'\n'`) after every 76 bytes of the output, as well as ensuring that the output always ends with a newline, as per [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html) (MIME). `base64.encodebytes(s)` Encode the [bytes-like object](../glossary#term-bytes-like-object) *s*, which can contain arbitrary binary data, and return [`bytes`](stdtypes#bytes "bytes") containing the base64-encoded data, with newlines (`b'\n'`) inserted after every 76 bytes of output, and ensuring that there is a trailing newline, as per [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html) (MIME). New in version 3.1. An example usage of the module: ``` >>> import base64 >>> encoded = base64.b64encode(b'data to be encoded') >>> encoded b'ZGF0YSB0byBiZSBlbmNvZGVk' >>> data = base64.b64decode(encoded) >>> data b'data to be encoded' ``` See also `Module` [`binascii`](binascii#module-binascii "binascii: Tools for converting between binary and various ASCII-encoded binary representations.") Support module containing ASCII-to-binary and binary-to-ASCII conversions. [**RFC 1521**](https://tools.ietf.org/html/rfc1521.html) - MIME (Multipurpose Internet Mail Extensions) Part One: Mechanisms for Specifying and Describing the Format of Internet Message Bodies Section 5.2, “Base64 Content-Transfer-Encoding,” provides the definition of the base64 encoding.
python http.client — HTTP protocol client http.client — HTTP protocol client ================================== **Source code:** [Lib/http/client.py](https://github.com/python/cpython/tree/3.9/Lib/http/client.py) This module defines classes which implement the client side of the HTTP and HTTPS protocols. It is normally not used directly — the module [`urllib.request`](urllib.request#module-urllib.request "urllib.request: Extensible library for opening URLs.") uses it to handle URLs that use HTTP and HTTPS. See also The [Requests package](https://requests.readthedocs.io/en/master/) is recommended for a higher-level HTTP client interface. Note HTTPS support is only available if Python was compiled with SSL support (through the [`ssl`](ssl#module-ssl "ssl: TLS/SSL wrapper for socket objects") module). The module provides the following classes: `class http.client.HTTPConnection(host, port=None, [timeout, ]source_address=None, blocksize=8192)` An [`HTTPConnection`](#http.client.HTTPConnection "http.client.HTTPConnection") instance represents one transaction with an HTTP server. It should be instantiated passing it a host and optional port number. If no port number is passed, the port is extracted from the host string if it has the form `host:port`, else the default HTTP port (80) is used. If the optional *timeout* parameter is given, blocking operations (like connection attempts) will timeout after that many seconds (if it is not given, the global default timeout setting is used). The optional *source\_address* parameter may be a tuple of (host, port) to use as the source address the HTTP connection is made from. The optional *blocksize* parameter sets the buffer size in bytes for sending a file-like message body. For example, the following calls all create instances that connect to the server at the same host and port: ``` >>> h1 = http.client.HTTPConnection('www.python.org') >>> h2 = http.client.HTTPConnection('www.python.org:80') >>> h3 = http.client.HTTPConnection('www.python.org', 80) >>> h4 = http.client.HTTPConnection('www.python.org', 80, timeout=10) ``` Changed in version 3.2: *source\_address* was added. Changed in version 3.4: The *strict* parameter was removed. HTTP 0.9-style “Simple Responses” are no longer supported. Changed in version 3.7: *blocksize* parameter was added. `class http.client.HTTPSConnection(host, port=None, key_file=None, cert_file=None, [timeout, ]source_address=None, *, context=None, check_hostname=None, blocksize=8192)` A subclass of [`HTTPConnection`](#http.client.HTTPConnection "http.client.HTTPConnection") that uses SSL for communication with secure servers. Default port is `443`. If *context* is specified, it must be a [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") instance describing the various SSL options. Please read [Security considerations](ssl#ssl-security) for more information on best practices. Changed in version 3.2: *source\_address*, *context* and *check\_hostname* were added. Changed in version 3.2: This class now supports HTTPS virtual hosts if possible (that is, if [`ssl.HAS_SNI`](ssl#ssl.HAS_SNI "ssl.HAS_SNI") is true). Changed in version 3.4: The *strict* parameter was removed. HTTP 0.9-style “Simple Responses” are no longer supported. Changed in version 3.4.3: This class now performs all the necessary certificate and hostname checks by default. To revert to the previous, unverified, behavior `ssl._create_unverified_context()` can be passed to the *context* parameter. 
Changed in version 3.8: This class now enables TLS 1.3 [`ssl.SSLContext.post_handshake_auth`](ssl#ssl.SSLContext.post_handshake_auth "ssl.SSLContext.post_handshake_auth") for the default *context* or when *cert\_file* is passed with a custom *context*. Deprecated since version 3.6: *key\_file* and *cert\_file* are deprecated in favor of *context*. Please use [`ssl.SSLContext.load_cert_chain()`](ssl#ssl.SSLContext.load_cert_chain "ssl.SSLContext.load_cert_chain") instead, or let [`ssl.create_default_context()`](ssl#ssl.create_default_context "ssl.create_default_context") select the system’s trusted CA certificates for you. The *check\_hostname* parameter is also deprecated; the [`ssl.SSLContext.check_hostname`](ssl#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") attribute of *context* should be used instead. `class http.client.HTTPResponse(sock, debuglevel=0, method=None, url=None)` Class whose instances are returned upon successful connection. Not instantiated directly by user. Changed in version 3.4: The *strict* parameter was removed. HTTP 0.9 style “Simple Responses” are no longer supported. This module provides the following function: `http.client.parse_headers(fp)` Parse the headers from a file pointer *fp* representing an HTTP request/response. The file has to be a `BufferedIOBase` reader (i.e. not text) and must provide a valid [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html) style header. This function returns an instance of `http.client.HTTPMessage` that holds the header fields, but no payload (the same as [`HTTPResponse.msg`](#http.client.HTTPResponse.msg "http.client.HTTPResponse.msg") and [`http.server.BaseHTTPRequestHandler.headers`](http.server#http.server.BaseHTTPRequestHandler.headers "http.server.BaseHTTPRequestHandler.headers")). After returning, the file pointer *fp* is ready to read the HTTP body. Note [`parse_headers()`](#http.client.parse_headers "http.client.parse_headers") does not parse the start-line of an HTTP message; it only parses the `Name: value` lines. The file has to be ready to read these field lines, so the first line should already be consumed before calling the function. The following exceptions are raised as appropriate: `exception http.client.HTTPException` The base class of the other exceptions in this module. It is a subclass of [`Exception`](exceptions#Exception "Exception"). `exception http.client.NotConnected` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"). `exception http.client.InvalidURL` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"), raised if a port is given and is either non-numeric or empty. `exception http.client.UnknownProtocol` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"). `exception http.client.UnknownTransferEncoding` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"). `exception http.client.UnimplementedFileMode` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"). `exception http.client.IncompleteRead` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"). `exception http.client.ImproperConnectionState` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"). `exception http.client.CannotSendRequest` A subclass of [`ImproperConnectionState`](#http.client.ImproperConnectionState "http.client.ImproperConnectionState"). 
`exception http.client.CannotSendHeader` A subclass of [`ImproperConnectionState`](#http.client.ImproperConnectionState "http.client.ImproperConnectionState"). `exception http.client.ResponseNotReady` A subclass of [`ImproperConnectionState`](#http.client.ImproperConnectionState "http.client.ImproperConnectionState"). `exception http.client.BadStatusLine` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"). Raised if a server responds with an HTTP status code that we don’t understand. `exception http.client.LineTooLong` A subclass of [`HTTPException`](#http.client.HTTPException "http.client.HTTPException"). Raised if an excessively long line is received in the HTTP protocol from the server. `exception http.client.RemoteDisconnected` A subclass of [`ConnectionResetError`](exceptions#ConnectionResetError "ConnectionResetError") and [`BadStatusLine`](#http.client.BadStatusLine "http.client.BadStatusLine"). Raised by [`HTTPConnection.getresponse()`](#http.client.HTTPConnection.getresponse "http.client.HTTPConnection.getresponse") when the attempt to read the response results in no data read from the connection, indicating that the remote end has closed the connection. New in version 3.5: Previously, [`BadStatusLine`](#http.client.BadStatusLine "http.client.BadStatusLine")`('')` was raised. The constants defined in this module are: `http.client.HTTP_PORT` The default port for the HTTP protocol (always `80`). `http.client.HTTPS_PORT` The default port for the HTTPS protocol (always `443`). `http.client.responses` This dictionary maps the HTTP 1.1 status codes to the W3C names. Example: `http.client.responses[http.client.NOT_FOUND]` is `'Not Found'`. See [HTTP status codes](http#http-status-codes) for a list of HTTP status codes that are available in this module as constants. HTTPConnection Objects ---------------------- [`HTTPConnection`](#http.client.HTTPConnection "http.client.HTTPConnection") instances have the following methods: `HTTPConnection.request(method, url, body=None, headers={}, *, encode_chunked=False)` This will send a request to the server using the HTTP request method *method* and the selector *url*. If *body* is specified, the specified data is sent after the headers are finished. It may be a [`str`](stdtypes#str "str"), a [bytes-like object](../glossary#term-bytes-like-object), an open [file object](../glossary#term-file-object), or an iterable of [`bytes`](stdtypes#bytes "bytes"). If *body* is a string, it is encoded as ISO-8859-1, the default for HTTP. If it is a bytes-like object, the bytes are sent as is. If it is a [file object](../glossary#term-file-object), the contents of the file are sent; this file object should support at least the `read()` method. If the file object is an instance of [`io.TextIOBase`](io#io.TextIOBase "io.TextIOBase"), the data returned by the `read()` method will be encoded as ISO-8859-1, otherwise the data returned by `read()` is sent as is. If *body* is an iterable, the elements of the iterable are sent as is until the iterable is exhausted. The *headers* argument should be a mapping of extra HTTP headers to send with the request. If *headers* contains neither Content-Length nor Transfer-Encoding, but there is a request body, one of those header fields will be added automatically. If *body* is `None`, the Content-Length header is set to `0` for methods that expect a body (`PUT`, `POST`, and `PATCH`). 
If *body* is a string or a bytes-like object that is not also a [file](../glossary#term-file-object), the Content-Length header is set to its length. Any other type of *body* (files and iterables in general) will be chunk-encoded, and the Transfer-Encoding header will automatically be set instead of Content-Length. The *encode\_chunked* argument is only relevant if Transfer-Encoding is specified in *headers*. If *encode\_chunked* is `False`, the HTTPConnection object assumes that all encoding is handled by the calling code. If it is `True`, the body will be chunk-encoded. Note Chunked transfer encoding has been added to the HTTP protocol version 1.1. Unless the HTTP server is known to handle HTTP 1.1, the caller must either specify the Content-Length, or must pass a [`str`](stdtypes#str "str") or bytes-like object that is not also a file as the body representation. New in version 3.2: *body* can now be an iterable. Changed in version 3.6: If neither Content-Length nor Transfer-Encoding are set in *headers*, file and iterable *body* objects are now chunk-encoded. The *encode\_chunked* argument was added. No attempt is made to determine the Content-Length for file objects. `HTTPConnection.getresponse()` Should be called after a request is sent to get the response from the server. Returns an [`HTTPResponse`](#http.client.HTTPResponse "http.client.HTTPResponse") instance. Note Note that you must have read the whole response before you can send a new request to the server. Changed in version 3.5: If a [`ConnectionError`](exceptions#ConnectionError "ConnectionError") or subclass is raised, the [`HTTPConnection`](#http.client.HTTPConnection "http.client.HTTPConnection") object will be ready to reconnect when a new request is sent. `HTTPConnection.set_debuglevel(level)` Set the debugging level. The default debug level is `0`, meaning no debugging output is printed. Any value greater than `0` will cause all currently defined debug output to be printed to stdout. The `debuglevel` is passed to any new [`HTTPResponse`](#http.client.HTTPResponse "http.client.HTTPResponse") objects that are created. New in version 3.1. `HTTPConnection.set_tunnel(host, port=None, headers=None)` Set the host and the port for HTTP Connect Tunnelling. This allows running the connection through a proxy server. The host and port arguments specify the endpoint of the tunneled connection (i.e. the address included in the CONNECT request, *not* the address of the proxy server). The headers argument should be a mapping of extra HTTP headers to send with the CONNECT request. For example, to tunnel through an HTTPS proxy server running locally on port 8080, we would pass the address of the proxy to the [`HTTPSConnection`](#http.client.HTTPSConnection "http.client.HTTPSConnection") constructor, and the address of the host that we eventually want to reach to the [`set_tunnel()`](#http.client.HTTPConnection.set_tunnel "http.client.HTTPConnection.set_tunnel") method: ``` >>> import http.client >>> conn = http.client.HTTPSConnection("localhost", 8080) >>> conn.set_tunnel("www.python.org") >>> conn.request("HEAD","/index.html") ``` New in version 3.2. `HTTPConnection.connect()` Connect to the server specified when the object was created. By default, this is called automatically when making a request if the client does not already have a connection. `HTTPConnection.close()` Close the connection to the server. `HTTPConnection.blocksize` Buffer size in bytes for sending a file-like message body. New in version 3.7. 
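The body-handling rules described under [`request()`](#http.client.HTTPConnection.request "http.client.HTTPConnection.request") above can be seen in a short sketch; the host, port, paths, and file name below are placeholders, not part of the API:

```
import http.client

conn = http.client.HTTPConnection("localhost", 8000)  # placeholder server

# A str body is encoded as ISO-8859-1 and Content-Length is set automatically.
conn.request("POST", "/submit", body="key=value",
             headers={"Content-Type": "application/x-www-form-urlencoded"})
resp = conn.getresponse()
resp.read()  # the whole response must be read before sending the next request

# A file object body, with neither Content-Length nor Transfer-Encoding
# supplied, is sent chunk-encoded (Python 3.6 and later).
with open("payload.bin", "rb") as f:
    conn.request("POST", "/upload", body=f)
print(conn.getresponse().status)
```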
As an alternative to using the `request()` method described above, you can also send your request step by step, by using the four functions below. `HTTPConnection.putrequest(method, url, skip_host=False, skip_accept_encoding=False)` This should be the first call after the connection to the server has been made. It sends a line to the server consisting of the *method* string, the *url* string, and the HTTP version (`HTTP/1.1`). To disable automatic sending of `Host:` or `Accept-Encoding:` headers (for example to accept additional content encodings), specify *skip\_host* or *skip\_accept\_encoding* with non-False values. `HTTPConnection.putheader(header, argument[, ...])` Send an [**RFC 822**](https://tools.ietf.org/html/rfc822.html)-style header to the server. It sends a line to the server consisting of the header, a colon and a space, and the first argument. If more arguments are given, continuation lines are sent, each consisting of a tab and an argument. `HTTPConnection.endheaders(message_body=None, *, encode_chunked=False)` Send a blank line to the server, signalling the end of the headers. The optional *message\_body* argument can be used to pass a message body associated with the request. If *encode\_chunked* is `True`, the result of each iteration of *message\_body* will be chunk-encoded as specified in [**RFC 7230**](https://tools.ietf.org/html/rfc7230.html), Section 3.3.1. How the data is encoded is dependent on the type of *message\_body*. If *message\_body* implements the [buffer interface](../c-api/buffer#bufferobjects) the encoding will result in a single chunk. If *message\_body* is a [`collections.abc.Iterable`](collections.abc#collections.abc.Iterable "collections.abc.Iterable"), each iteration of *message\_body* will result in a chunk. If *message\_body* is a [file object](../glossary#term-file-object), each call to `.read()` will result in a chunk. The method automatically signals the end of the chunk-encoded data immediately after *message\_body*. Note Due to the chunked encoding specification, empty chunks yielded by an iterator body will be ignored by the chunk-encoder. This is to avoid premature termination of the read of the request by the target server due to malformed encoding. New in version 3.6: Chunked encoding support. The *encode\_chunked* parameter was added. `HTTPConnection.send(data)` Send data to the server. This should be used directly only after the [`endheaders()`](#http.client.HTTPConnection.endheaders "http.client.HTTPConnection.endheaders") method has been called and before [`getresponse()`](#http.client.HTTPConnection.getresponse "http.client.HTTPConnection.getresponse") is called. HTTPResponse Objects -------------------- An [`HTTPResponse`](#http.client.HTTPResponse "http.client.HTTPResponse") instance wraps the HTTP response from the server. It provides access to the request headers and the entity body. The response is an iterable object and can be used in a with statement. Changed in version 3.5: The [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase") interface is now implemented and all of its reader operations are supported. `HTTPResponse.read([amt])` Reads and returns the response body, or up to the next *amt* bytes. `HTTPResponse.readinto(b)` Reads up to the next len(b) bytes of the response body into the buffer *b*. Returns the number of bytes read. New in version 3.3. `HTTPResponse.getheader(name, default=None)` Return the value of the header *name*, or *default* if there is no header matching *name*. 
If there is more than one header with the name *name*, return all of the values joined by `', '`. If *default* is any iterable other than a single string, its elements are similarly returned joined by commas. `HTTPResponse.getheaders()` Return a list of (header, value) tuples. `HTTPResponse.fileno()` Return the `fileno` of the underlying socket. `HTTPResponse.msg` A `http.client.HTTPMessage` instance containing the response headers. `http.client.HTTPMessage` is a subclass of [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message"). `HTTPResponse.version` HTTP protocol version used by server. 10 for HTTP/1.0, 11 for HTTP/1.1. `HTTPResponse.url` URL of the resource retrieved, commonly used to determine if a redirect was followed. `HTTPResponse.headers` Headers of the response in the form of an [`email.message.EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") instance. `HTTPResponse.status` Status code returned by server. `HTTPResponse.reason` Reason phrase returned by server. `HTTPResponse.debuglevel` A debugging hook. If [`debuglevel`](#http.client.HTTPResponse.debuglevel "http.client.HTTPResponse.debuglevel") is greater than zero, messages will be printed to stdout as the response is read and parsed. `HTTPResponse.closed` Is `True` if the stream is closed. `HTTPResponse.geturl()` Deprecated since version 3.9: Deprecated in favor of [`url`](#http.client.HTTPResponse.url "http.client.HTTPResponse.url"). `HTTPResponse.info()` Deprecated since version 3.9: Deprecated in favor of [`headers`](#http.client.HTTPResponse.headers "http.client.HTTPResponse.headers"). `HTTPResponse.getcode()` Deprecated since version 3.9: Deprecated in favor of [`status`](#http.client.HTTPResponse.status "http.client.HTTPResponse.status"). Examples -------- Here is an example session that uses the `GET` method: ``` >>> import http.client >>> conn = http.client.HTTPSConnection("www.python.org") >>> conn.request("GET", "/") >>> r1 = conn.getresponse() >>> print(r1.status, r1.reason) 200 OK >>> data1 = r1.read() # This will return entire content. >>> # The following example demonstrates reading data in chunks. >>> conn.request("GET", "/") >>> r1 = conn.getresponse() >>> while chunk := r1.read(200): ... print(repr(chunk)) b'<!doctype html>\n<!--[if"... ... >>> # Example of an invalid request >>> conn = http.client.HTTPSConnection("docs.python.org") >>> conn.request("GET", "/parrot.spam") >>> r2 = conn.getresponse() >>> print(r2.status, r2.reason) 404 Not Found >>> data2 = r2.read() >>> conn.close() ``` Here is an example session that uses the `HEAD` method. Note that the `HEAD` method never returns any data. ``` >>> import http.client >>> conn = http.client.HTTPSConnection("www.python.org") >>> conn.request("HEAD", "/") >>> res = conn.getresponse() >>> print(res.status, res.reason) 200 OK >>> data = res.read() >>> print(len(data)) 0 >>> data == b'' True ``` Here is an example session that shows how to `POST` requests: ``` >>> import http.client, urllib.parse >>> params = urllib.parse.urlencode({'@number': 12524, '@type': 'issue', '@action': 'show'}) >>> headers = {"Content-type": "application/x-www-form-urlencoded", ... 
"Accept": "text/plain"} >>> conn = http.client.HTTPConnection("bugs.python.org") >>> conn.request("POST", "", params, headers) >>> response = conn.getresponse() >>> print(response.status, response.reason) 302 Found >>> data = response.read() >>> data b'Redirecting to <a href="http://bugs.python.org/issue12524">http://bugs.python.org/issue12524</a>' >>> conn.close() ``` Client side `HTTP PUT` requests are very similar to `POST` requests. The difference lies only the server side where HTTP server will allow resources to be created via `PUT` request. It should be noted that custom HTTP methods are also handled in [`urllib.request.Request`](urllib.request#urllib.request.Request "urllib.request.Request") by setting the appropriate method attribute. Here is an example session that shows how to send a `PUT` request using http.client: ``` >>> # This creates an HTTP message >>> # with the content of BODY as the enclosed representation >>> # for the resource http://localhost:8080/file ... >>> import http.client >>> BODY = "***filecontents***" >>> conn = http.client.HTTPConnection("localhost", 8080) >>> conn.request("PUT", "/file", BODY) >>> response = conn.getresponse() >>> print(response.status, response.reason) 200, OK ``` HTTPMessage Objects ------------------- An `http.client.HTTPMessage` instance holds the headers from an HTTP response. It is implemented using the [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message") class.
python tkinter.dnd — Drag and drop support tkinter.dnd — Drag and drop support =================================== **Source code:** [Lib/tkinter/dnd.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/dnd.py) Note This is experimental and due to be deprecated when it is replaced with the Tk DND. The [`tkinter.dnd`](#module-tkinter.dnd "tkinter.dnd: Tkinter drag-and-drop interface (Tk)") module provides drag-and-drop support for objects within a single application, within the same window or between windows. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you bind a ButtonPress event to a callback function that you write (see [Bindings and Events](tkinter#bindings-and-events)). The function should call [`dnd_start()`](#tkinter.dnd.dnd_start "tkinter.dnd.dnd_start"), where ‘source’ is the object to be dragged, and ‘event’ is the event that invoked the call (the argument to your callback function). Selection of a target object occurs as follows: 1. Top-down search of area under mouse for target widget * Target widget should have a callable *dnd\_accept* attribute * If *dnd\_accept* is not present or returns None, search moves to parent widget * If no target widget is found, then the target object is None 2. Call to *<old\_target>.dnd\_leave(source, event)* 3. Call to *<new\_target>.dnd\_enter(source, event)* 4. Call to *<target>.dnd\_commit(source, event)* to notify of drop 5. Call to *<source>.dnd\_end(target, event)* to signal end of drag-and-drop `class tkinter.dnd.DndHandler(source, event)` The *DndHandler* class handles drag-and-drop events tracking Motion and ButtonRelease events on the root of the event widget. `cancel(event=None)` Cancel the drag-and-drop process. `finish(event, commit=0)` Execute end of drag-and-drop functions. `on_motion(event)` Inspect area below mouse for target objects while drag is performed. `on_release(event)` Signal end of drag when the release pattern is triggered. `tkinter.dnd.dnd_start(source, event)` Factory function for drag-and-drop process. See also [Bindings and Events](tkinter#bindings-and-events) python argparse — Parser for command-line options, arguments and sub-commands argparse — Parser for command-line options, arguments and sub-commands ====================================================================== New in version 3.2. **Source code:** [Lib/argparse.py](https://github.com/python/cpython/tree/3.9/Lib/argparse.py) The [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") module makes it easy to write user-friendly command-line interfaces. The program defines what arguments it requires, and [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") will figure out how to parse those out of [`sys.argv`](sys#sys.argv "sys.argv"). The [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") module also automatically generates help and usage messages and issues errors when users give the program invalid arguments. 
Example ------- The following code is a Python program that takes a list of integers and produces either the sum or the max: ``` import argparse parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('integers', metavar='N', type=int, nargs='+', help='an integer for the accumulator') parser.add_argument('--sum', dest='accumulate', action='store_const', const=sum, default=max, help='sum the integers (default: find the max)') args = parser.parse_args() print(args.accumulate(args.integers)) ``` Assuming the Python code above is saved into a file called `prog.py`, it can be run at the command line and provides useful help messages: ``` $ python prog.py -h usage: prog.py [-h] [--sum] N [N ...] Process some integers. positional arguments: N an integer for the accumulator optional arguments: -h, --help show this help message and exit --sum sum the integers (default: find the max) ``` When run with the appropriate arguments, it prints either the sum or the max of the command-line integers: ``` $ python prog.py 1 2 3 4 4 $ python prog.py 1 2 3 4 --sum 10 ``` If invalid arguments are passed in, it will issue an error: ``` $ python prog.py a b c usage: prog.py [-h] [--sum] N [N ...] prog.py: error: argument N: invalid int value: 'a' ``` The following sections walk you through this example. ### Creating a parser The first step in using [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") is creating an [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") object: ``` >>> parser = argparse.ArgumentParser(description='Process some integers.') ``` The [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") object will hold all the information necessary to parse the command line into Python data types. ### Adding arguments Filling an [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") with information about program arguments is done by making calls to the [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") method. Generally, these calls tell the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") how to take the strings on the command line and turn them into objects. This information is stored and used when [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") is called. For example: ``` >>> parser.add_argument('integers', metavar='N', type=int, nargs='+', ... help='an integer for the accumulator') >>> parser.add_argument('--sum', dest='accumulate', action='store_const', ... const=sum, default=max, ... help='sum the integers (default: find the max)') ``` Later, calling [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") will return an object with two attributes, `integers` and `accumulate`. The `integers` attribute will be a list of one or more ints, and the `accumulate` attribute will be either the [`sum()`](functions#sum "sum") function, if `--sum` was specified at the command line, or the [`max()`](functions#max "max") function if it was not. ### Parsing arguments [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") parses arguments through the [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") method. This will inspect the command line, convert each argument to the appropriate type and then invoke the appropriate action. 
In most cases, this means a simple [`Namespace`](#argparse.Namespace "argparse.Namespace") object will be built up from attributes parsed out of the command line: ``` >>> parser.parse_args(['--sum', '7', '-1', '42']) Namespace(accumulate=<built-in function sum>, integers=[7, -1, 42]) ``` In a script, [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") will typically be called with no arguments, and the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") will automatically determine the command-line arguments from [`sys.argv`](sys#sys.argv "sys.argv"). ArgumentParser objects ---------------------- `class argparse.ArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=argparse.HelpFormatter, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='error', add_help=True, allow_abbrev=True, exit_on_error=True)` Create a new [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") object. All parameters should be passed as keyword arguments. Each parameter has its own more detailed description below, but in short they are: * [prog](#prog) - The name of the program (default: `sys.argv[0]`) * [usage](#usage) - The string describing the program usage (default: generated from arguments added to parser) * [description](#description) - Text to display before the argument help (default: none) * [epilog](#epilog) - Text to display after the argument help (default: none) * [parents](#parents) - A list of [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects whose arguments should also be included * [formatter\_class](#formatter-class) - A class for customizing the help output * [prefix\_chars](#prefix-chars) - The set of characters that prefix optional arguments (default: ‘-‘) * [fromfile\_prefix\_chars](#fromfile-prefix-chars) - The set of characters that prefix files from which additional arguments should be read (default: `None`) * [argument\_default](#argument-default) - The global default value for arguments (default: `None`) * [conflict\_handler](#conflict-handler) - The strategy for resolving conflicting optionals (usually unnecessary) * [add\_help](#add-help) - Add a `-h/--help` option to the parser (default: `True`) * [allow\_abbrev](#allow-abbrev) - Allows long options to be abbreviated if the abbreviation is unambiguous. (default: `True`) * [exit\_on\_error](#exit-on-error) - Determines whether or not ArgumentParser exits with error info when an error occurs. (default: `True`) Changed in version 3.5: *allow\_abbrev* parameter was added. Changed in version 3.8: In previous versions, *allow\_abbrev* also disabled grouping of short flags such as `-vv` to mean `-v -v`. Changed in version 3.9: *exit\_on\_error* parameter was added. The following sections describe how each of these are used. ### prog By default, [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects use `sys.argv[0]` to determine how to display the name of the program in help messages. This default is almost always desirable because it will make the help messages match how the program was invoked on the command line. 
For example, consider a file named `myprogram.py` with the following code: ``` import argparse parser = argparse.ArgumentParser() parser.add_argument('--foo', help='foo help') args = parser.parse_args() ``` The help for this program will display `myprogram.py` as the program name (regardless of where the program was invoked from): ``` $ python myprogram.py --help usage: myprogram.py [-h] [--foo FOO] optional arguments: -h, --help show this help message and exit --foo FOO foo help $ cd .. $ python subdir/myprogram.py --help usage: myprogram.py [-h] [--foo FOO] optional arguments: -h, --help show this help message and exit --foo FOO foo help ``` To change this default behavior, another value can be supplied using the `prog=` argument to [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"): ``` >>> parser = argparse.ArgumentParser(prog='myprogram') >>> parser.print_help() usage: myprogram [-h] optional arguments: -h, --help show this help message and exit ``` Note that the program name, whether determined from `sys.argv[0]` or from the `prog=` argument, is available to help messages using the `%(prog)s` format specifier. ``` >>> parser = argparse.ArgumentParser(prog='myprogram') >>> parser.add_argument('--foo', help='foo of the %(prog)s program') >>> parser.print_help() usage: myprogram [-h] [--foo FOO] optional arguments: -h, --help show this help message and exit --foo FOO foo of the myprogram program ``` ### usage By default, [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") calculates the usage message from the arguments it contains: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('--foo', nargs='?', help='foo help') >>> parser.add_argument('bar', nargs='+', help='bar help') >>> parser.print_help() usage: PROG [-h] [--foo [FOO]] bar [bar ...] positional arguments: bar bar help optional arguments: -h, --help show this help message and exit --foo [FOO] foo help ``` The default message can be overridden with the `usage=` keyword argument: ``` >>> parser = argparse.ArgumentParser(prog='PROG', usage='%(prog)s [options]') >>> parser.add_argument('--foo', nargs='?', help='foo help') >>> parser.add_argument('bar', nargs='+', help='bar help') >>> parser.print_help() usage: PROG [options] positional arguments: bar bar help optional arguments: -h, --help show this help message and exit --foo [FOO] foo help ``` The `%(prog)s` format specifier is available to fill in the program name in your usage messages. ### description Most calls to the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") constructor will use the `description=` keyword argument. This argument gives a brief description of what the program does and how it works. In help messages, the description is displayed between the command-line usage string and the help messages for the various arguments: ``` >>> parser = argparse.ArgumentParser(description='A foo that bars') >>> parser.print_help() usage: argparse.py [-h] A foo that bars optional arguments: -h, --help show this help message and exit ``` By default, the description will be line-wrapped so that it fits within the given space. To change this behavior, see the [formatter\_class](#formatter-class) argument. ### epilog Some programs like to display additional description of the program after the description of the arguments. 
Such text can be specified using the `epilog=` argument to [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"): ``` >>> parser = argparse.ArgumentParser( ... description='A foo that bars', ... epilog="And that's how you'd foo a bar") >>> parser.print_help() usage: argparse.py [-h] A foo that bars optional arguments: -h, --help show this help message and exit And that's how you'd foo a bar ``` As with the [description](#description) argument, the `epilog=` text is by default line-wrapped, but this behavior can be adjusted with the [formatter\_class](#formatter-class) argument to [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"). ### parents Sometimes, several parsers share a common set of arguments. Rather than repeating the definitions of these arguments, a single parser containing all the shared arguments can be passed to the `parents=` argument of [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"). The `parents=` argument takes a list of [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects, collects all the positional and optional actions from them, and adds these actions to the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") object being constructed: ``` >>> parent_parser = argparse.ArgumentParser(add_help=False) >>> parent_parser.add_argument('--parent', type=int) >>> foo_parser = argparse.ArgumentParser(parents=[parent_parser]) >>> foo_parser.add_argument('foo') >>> foo_parser.parse_args(['--parent', '2', 'XXX']) Namespace(foo='XXX', parent=2) >>> bar_parser = argparse.ArgumentParser(parents=[parent_parser]) >>> bar_parser.add_argument('--bar') >>> bar_parser.parse_args(['--bar', 'YYY']) Namespace(bar='YYY', parent=None) ``` Note that most parent parsers will specify `add_help=False`. Otherwise, the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") will see two `-h/--help` options (one in the parent and one in the child) and raise an error. Note You must fully initialize the parsers before passing them via `parents=`. If you change the parent parsers after the child parser, those changes will not be reflected in the child. ### formatter\_class [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects allow the help formatting to be customized by specifying an alternate formatting class. Currently, there are four such classes: `class argparse.RawDescriptionHelpFormatter` `class argparse.RawTextHelpFormatter` `class argparse.ArgumentDefaultsHelpFormatter` `class argparse.MetavarTypeHelpFormatter` [`RawDescriptionHelpFormatter`](#argparse.RawDescriptionHelpFormatter "argparse.RawDescriptionHelpFormatter") and [`RawTextHelpFormatter`](#argparse.RawTextHelpFormatter "argparse.RawTextHelpFormatter") give more control over how textual descriptions are displayed. By default, [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects line-wrap the [description](#description) and [epilog](#epilog) texts in command-line help messages: ``` >>> parser = argparse.ArgumentParser( ... prog='PROG', ... description='''this description ... was indented weird ... but that is okay''', ... epilog=''' ... likewise for this epilog whose whitespace will ... be cleaned up and whose words will be wrapped ... 
across a couple lines''') >>> parser.print_help() usage: PROG [-h] this description was indented weird but that is okay optional arguments: -h, --help show this help message and exit likewise for this epilog whose whitespace will be cleaned up and whose words will be wrapped across a couple lines ``` Passing [`RawDescriptionHelpFormatter`](#argparse.RawDescriptionHelpFormatter "argparse.RawDescriptionHelpFormatter") as `formatter_class=` indicates that [description](#description) and [epilog](#epilog) are already correctly formatted and should not be line-wrapped: ``` >>> parser = argparse.ArgumentParser( ... prog='PROG', ... formatter_class=argparse.RawDescriptionHelpFormatter, ... description=textwrap.dedent('''\ ... Please do not mess up this text! ... -------------------------------- ... I have indented it ... exactly the way ... I want it ... ''')) >>> parser.print_help() usage: PROG [-h] Please do not mess up this text! -------------------------------- I have indented it exactly the way I want it optional arguments: -h, --help show this help message and exit ``` [`RawTextHelpFormatter`](#argparse.RawTextHelpFormatter "argparse.RawTextHelpFormatter") maintains whitespace for all sorts of help text, including argument descriptions. However, multiple new lines are replaced with one. If you wish to preserve multiple blank lines, add spaces between the newlines. [`ArgumentDefaultsHelpFormatter`](#argparse.ArgumentDefaultsHelpFormatter "argparse.ArgumentDefaultsHelpFormatter") automatically adds information about default values to each of the argument help messages: ``` >>> parser = argparse.ArgumentParser( ... prog='PROG', ... formatter_class=argparse.ArgumentDefaultsHelpFormatter) >>> parser.add_argument('--foo', type=int, default=42, help='FOO!') >>> parser.add_argument('bar', nargs='*', default=[1, 2, 3], help='BAR!') >>> parser.print_help() usage: PROG [-h] [--foo FOO] [bar ...] positional arguments: bar BAR! (default: [1, 2, 3]) optional arguments: -h, --help show this help message and exit --foo FOO FOO! (default: 42) ``` [`MetavarTypeHelpFormatter`](#argparse.MetavarTypeHelpFormatter "argparse.MetavarTypeHelpFormatter") uses the name of the [type](#type) argument for each argument as the display name for its values (rather than using the [dest](#dest) as the regular formatter does): ``` >>> parser = argparse.ArgumentParser( ... prog='PROG', ... formatter_class=argparse.MetavarTypeHelpFormatter) >>> parser.add_argument('--foo', type=int) >>> parser.add_argument('bar', type=float) >>> parser.print_help() usage: PROG [-h] [--foo int] float positional arguments: float optional arguments: -h, --help show this help message and exit --foo int ``` ### prefix\_chars Most command-line options will use `-` as the prefix, e.g. `-f/--foo`. Parsers that need to support different or additional prefix characters, e.g. for options like `+f` or `/foo`, may specify them using the `prefix_chars=` argument to the ArgumentParser constructor: ``` >>> parser = argparse.ArgumentParser(prog='PROG', prefix_chars='-+') >>> parser.add_argument('+f') >>> parser.add_argument('++bar') >>> parser.parse_args('+f X ++bar Y'.split()) Namespace(bar='Y', f='X') ``` The `prefix_chars=` argument defaults to `'-'`. Supplying a set of characters that does not include `-` will cause `-f/--foo` options to be disallowed. ### fromfile\_prefix\_chars Sometimes, for example when dealing with a particularly long argument list, it may make sense to keep the list of arguments in a file rather than typing it out at the command line. 
If the `fromfile_prefix_chars=` argument is given to the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") constructor, then arguments that start with any of the specified characters will be treated as files, and will be replaced by the arguments they contain. For example: ``` >>> with open('args.txt', 'w') as fp: ... fp.write('-f\nbar') >>> parser = argparse.ArgumentParser(fromfile_prefix_chars='@') >>> parser.add_argument('-f') >>> parser.parse_args(['-f', 'foo', '@args.txt']) Namespace(f='bar') ``` Arguments read from a file must by default be one per line (but see also [`convert_arg_line_to_args()`](#argparse.ArgumentParser.convert_arg_line_to_args "argparse.ArgumentParser.convert_arg_line_to_args")) and are treated as if they were in the same place as the original file referencing argument on the command line. So in the example above, the expression `['-f', 'foo', '@args.txt']` is considered equivalent to the expression `['-f', 'foo', '-f', 'bar']`. The `fromfile_prefix_chars=` argument defaults to `None`, meaning that arguments will never be treated as file references. ### argument\_default Generally, argument defaults are specified either by passing a default to [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") or by calling the [`set_defaults()`](#argparse.ArgumentParser.set_defaults "argparse.ArgumentParser.set_defaults") method with a specific set of name-value pairs. Sometimes, however, it may be useful to specify a single parser-wide default for arguments. This can be accomplished by passing the `argument_default=` keyword argument to [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"). For example, to globally suppress attribute creation on [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") calls, we supply `argument_default=SUPPRESS`: ``` >>> parser = argparse.ArgumentParser(argument_default=argparse.SUPPRESS) >>> parser.add_argument('--foo') >>> parser.add_argument('bar', nargs='?') >>> parser.parse_args(['--foo', '1', 'BAR']) Namespace(bar='BAR', foo='1') >>> parser.parse_args([]) Namespace() ``` ### allow\_abbrev Normally, when you pass an argument list to the [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") method of an [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"), it [recognizes abbreviations](#prefix-matching) of long options. This feature can be disabled by setting `allow_abbrev` to `False`: ``` >>> parser = argparse.ArgumentParser(prog='PROG', allow_abbrev=False) >>> parser.add_argument('--foobar', action='store_true') >>> parser.add_argument('--foonley', action='store_false') >>> parser.parse_args(['--foon']) usage: PROG [-h] [--foobar] [--foonley] PROG: error: unrecognized arguments: --foon ``` New in version 3.5. ### conflict\_handler [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects do not allow two actions with the same option string. By default, [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects raise an exception if an attempt is made to create an argument with an option string that is already in use: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('-f', '--foo', help='old foo help') >>> parser.add_argument('--foo', help='new foo help') Traceback (most recent call last): .. ArgumentError: argument --foo: conflicting option string(s): --foo ``` Sometimes (e.g. 
when using [parents](#parents)) it may be useful to simply override any older arguments with the same option string. To get this behavior, the value `'resolve'` can be supplied to the `conflict_handler=` argument of [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"): ``` >>> parser = argparse.ArgumentParser(prog='PROG', conflict_handler='resolve') >>> parser.add_argument('-f', '--foo', help='old foo help') >>> parser.add_argument('--foo', help='new foo help') >>> parser.print_help() usage: PROG [-h] [-f FOO] [--foo FOO] optional arguments: -h, --help show this help message and exit -f FOO old foo help --foo FOO new foo help ``` Note that [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects only remove an action if all of its option strings are overridden. So, in the example above, the old `-f/--foo` action is retained as the `-f` action, because only the `--foo` option string was overridden. ### add\_help By default, ArgumentParser objects add an option which simply displays the parser’s help message. For example, consider a file named `myprogram.py` containing the following code: ``` import argparse parser = argparse.ArgumentParser() parser.add_argument('--foo', help='foo help') args = parser.parse_args() ``` If `-h` or `--help` is supplied at the command line, the ArgumentParser help will be printed: ``` $ python myprogram.py --help usage: myprogram.py [-h] [--foo FOO] optional arguments: -h, --help show this help message and exit --foo FOO foo help ``` Occasionally, it may be useful to disable the addition of this help option. This can be achieved by passing `False` as the `add_help=` argument to [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"): ``` >>> parser = argparse.ArgumentParser(prog='PROG', add_help=False) >>> parser.add_argument('--foo', help='foo help') >>> parser.print_help() usage: PROG [--foo FOO] optional arguments: --foo FOO foo help ``` The help option is typically `-h/--help`. The exception to this is if the `prefix_chars=` is specified and does not include `-`, in which case `-h` and `--help` are not valid options. In this case, the first character in `prefix_chars` is used to prefix the help options: ``` >>> parser = argparse.ArgumentParser(prog='PROG', prefix_chars='+/') >>> parser.print_help() usage: PROG [+h] optional arguments: +h, ++help show this help message and exit ``` ### exit\_on\_error Normally, when you pass an invalid argument list to the [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") method of an [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"), it will exit with error info. If the user would like to catch errors manually, the feature can be enabled by setting `exit_on_error` to `False`: ``` >>> parser = argparse.ArgumentParser(exit_on_error=False) >>> parser.add_argument('--integers', type=int) _StoreAction(option_strings=['--integers'], dest='integers', nargs=None, const=None, default=None, type=<class 'int'>, choices=None, help=None, metavar=None) >>> try: ... parser.parse_args('--integers a'.split()) ... except argparse.ArgumentError: ... print('Catching an argumentError') ... Catching an argumentError ``` New in version 3.9. The add\_argument() method -------------------------- `ArgumentParser.add_argument(name or flags...[, action][, nargs][, const][, default][, type][, choices][, required][, help][, metavar][, dest])` Define how a single command-line argument should be parsed. 
Each parameter has its own more detailed description below, but in short they are: * [name or flags](#name-or-flags) - Either a name or a list of option strings, e.g. `foo` or `-f, --foo`. * [action](#action) - The basic type of action to be taken when this argument is encountered at the command line. * [nargs](#nargs) - The number of command-line arguments that should be consumed. * [const](#const) - A constant value required by some [action](#action) and [nargs](#nargs) selections. * [default](#default) - The value produced if the argument is absent from the command line and if it is absent from the namespace object. * [type](#type) - The type to which the command-line argument should be converted. * [choices](#choices) - A container of the allowable values for the argument. * [required](#required) - Whether or not the command-line option may be omitted (optionals only). * [help](#help) - A brief description of what the argument does. * [metavar](#metavar) - A name for the argument in usage messages. * [dest](#dest) - The name of the attribute to be added to the object returned by [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args"). The following sections describe how each of these are used. ### name or flags The [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") method must know whether an optional argument, like `-f` or `--foo`, or a positional argument, like a list of filenames, is expected. The first arguments passed to [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") must therefore be either a series of flags, or a simple argument name. For example, an optional argument could be created like: ``` >>> parser.add_argument('-f', '--foo') ``` while a positional argument could be created like: ``` >>> parser.add_argument('bar') ``` When [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") is called, optional arguments will be identified by the `-` prefix, and the remaining arguments will be assumed to be positional: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('-f', '--foo') >>> parser.add_argument('bar') >>> parser.parse_args(['BAR']) Namespace(bar='BAR', foo=None) >>> parser.parse_args(['BAR', '--foo', 'FOO']) Namespace(bar='BAR', foo='FOO') >>> parser.parse_args(['--foo', 'FOO']) usage: PROG [-h] [-f FOO] bar PROG: error: the following arguments are required: bar ``` ### action [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") objects associate command-line arguments with actions. These actions can do just about anything with the command-line arguments associated with them, though most actions simply add an attribute to the object returned by [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args"). The `action` keyword argument specifies how the command-line arguments should be handled. The supplied actions are: * `'store'` - This just stores the argument’s value. This is the default action. For example: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo') >>> parser.parse_args('--foo 1'.split()) Namespace(foo='1') ``` * `'store_const'` - This stores the value specified by the [const](#const) keyword argument. The `'store_const'` action is most commonly used with optional arguments that specify some sort of flag. 
For example: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', action='store_const', const=42) >>> parser.parse_args(['--foo']) Namespace(foo=42) ``` * `'store_true'` and `'store_false'` - These are special cases of `'store_const'` used for storing the values `True` and `False` respectively. In addition, they create default values of `False` and `True` respectively. For example: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', action='store_true') >>> parser.add_argument('--bar', action='store_false') >>> parser.add_argument('--baz', action='store_false') >>> parser.parse_args('--foo --bar'.split()) Namespace(foo=True, bar=False, baz=True) ``` * `'append'` - This stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. Example usage: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', action='append') >>> parser.parse_args('--foo 1 --foo 2'.split()) Namespace(foo=['1', '2']) ``` * `'append_const'` - This stores a list, and appends the value specified by the [const](#const) keyword argument to the list. (Note that the [const](#const) keyword argument defaults to `None`.) The `'append_const'` action is typically useful when multiple arguments need to store constants to the same list. For example: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--str', dest='types', action='append_const', const=str) >>> parser.add_argument('--int', dest='types', action='append_const', const=int) >>> parser.parse_args('--str --int'.split()) Namespace(types=[<class 'str'>, <class 'int'>]) ``` * `'count'` - This counts the number of times a keyword argument occurs. For example, this is useful for increasing verbosity levels: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--verbose', '-v', action='count', default=0) >>> parser.parse_args(['-vvv']) Namespace(verbose=3) ``` Note, the *default* will be `None` unless explicitly set to *0*. * `'help'` - This prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser. See [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") for details of how the output is created. * `'version'` - This expects a `version=` keyword argument in the [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") call, and prints version information and exits when invoked: ``` >>> import argparse >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('--version', action='version', version='%(prog)s 2.0') >>> parser.parse_args(['--version']) PROG 2.0 ``` * `'extend'` - This stores a list, and extends each argument value to the list. Example usage: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument("--foo", action="extend", nargs="+", type=str) >>> parser.parse_args(["--foo", "f1", "--foo", "f2", "f3", "f4"]) Namespace(foo=['f1', 'f2', 'f3', 'f4']) ``` New in version 3.8. You may also specify an arbitrary action by passing an Action subclass or other object that implements the same interface. The `BooleanOptionalAction` is available in `argparse` and adds support for boolean actions such as `--foo` and `--no-foo`: ``` >>> import argparse >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', action=argparse.BooleanOptionalAction) >>> parser.parse_args(['--no-foo']) Namespace(foo=False) ``` New in version 3.9. 
The recommended way to create a custom action is to extend [`Action`](#argparse.Action "argparse.Action"), overriding the `__call__` method and optionally the `__init__` and `format_usage` methods. An example of a custom action: ``` >>> class FooAction(argparse.Action): ... def __init__(self, option_strings, dest, nargs=None, **kwargs): ... if nargs is not None: ... raise ValueError("nargs not allowed") ... super().__init__(option_strings, dest, **kwargs) ... def __call__(self, parser, namespace, values, option_string=None): ... print('%r %r %r' % (namespace, values, option_string)) ... setattr(namespace, self.dest, values) ... >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', action=FooAction) >>> parser.add_argument('bar', action=FooAction) >>> args = parser.parse_args('1 --foo 2'.split()) Namespace(bar=None, foo=None) '1' None Namespace(bar='1', foo=None) '2' '--foo' >>> args Namespace(bar='1', foo='2') ``` For more details, see [`Action`](#argparse.Action "argparse.Action"). ### nargs ArgumentParser objects usually associate a single command-line argument with a single action to be taken. The `nargs` keyword argument associates a different number of command-line arguments with a single action. The supported values are: * `N` (an integer). `N` arguments from the command line will be gathered together into a list. For example: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', nargs=2) >>> parser.add_argument('bar', nargs=1) >>> parser.parse_args('c --foo a b'.split()) Namespace(bar=['c'], foo=['a', 'b']) ``` Note that `nargs=1` produces a list of one item. This is different from the default, in which the item is produced by itself. * `'?'`. One argument will be consumed from the command line if possible, and produced as a single item. If no command-line argument is present, the value from [default](#default) will be produced. Note that for optional arguments, there is an additional case - the option string is present but not followed by a command-line argument. In this case the value from [const](#const) will be produced. Some examples to illustrate this: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', nargs='?', const='c', default='d') >>> parser.add_argument('bar', nargs='?', default='d') >>> parser.parse_args(['XX', '--foo', 'YY']) Namespace(bar='XX', foo='YY') >>> parser.parse_args(['XX', '--foo']) Namespace(bar='XX', foo='c') >>> parser.parse_args([]) Namespace(bar='d', foo='d') ``` One of the more common uses of `nargs='?'` is to allow optional input and output files: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('infile', nargs='?', type=argparse.FileType('r'), ... default=sys.stdin) >>> parser.add_argument('outfile', nargs='?', type=argparse.FileType('w'), ... default=sys.stdout) >>> parser.parse_args(['input.txt', 'output.txt']) Namespace(infile=<_io.TextIOWrapper name='input.txt' encoding='UTF-8'>, outfile=<_io.TextIOWrapper name='output.txt' encoding='UTF-8'>) >>> parser.parse_args([]) Namespace(infile=<_io.TextIOWrapper name='<stdin>' encoding='UTF-8'>, outfile=<_io.TextIOWrapper name='<stdout>' encoding='UTF-8'>) ``` * `'*'`. All command-line arguments present are gathered into a list. Note that it generally doesn’t make much sense to have more than one positional argument with `nargs='*'`, but multiple optional arguments with `nargs='*'` is possible. 
For example: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', nargs='*') >>> parser.add_argument('--bar', nargs='*') >>> parser.add_argument('baz', nargs='*') >>> parser.parse_args('a b --foo x y --bar 1 2'.split()) Namespace(bar=['1', '2'], baz=['a', 'b'], foo=['x', 'y']) ``` * `'+'`. Just like `'*'`, all command-line arguments present are gathered into a list. Additionally, an error message will be generated if there wasn’t at least one command-line argument present. For example: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('foo', nargs='+') >>> parser.parse_args(['a', 'b']) Namespace(foo=['a', 'b']) >>> parser.parse_args([]) usage: PROG [-h] foo [foo ...] PROG: error: the following arguments are required: foo ``` If the `nargs` keyword argument is not provided, the number of arguments consumed is determined by the [action](#action). Generally this means a single command-line argument will be consumed and a single item (not a list) will be produced. ### const The `const` argument of [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") is used to hold constant values that are not read from the command line but are required for the various [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") actions. The two most common uses of it are: * When [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") is called with `action='store_const'` or `action='append_const'`. These actions add the `const` value to one of the attributes of the object returned by [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args"). See the [action](#action) description for examples. * When [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") is called with option strings (like `-f` or `--foo`) and `nargs='?'`. This creates an optional argument that can be followed by zero or one command-line arguments. When parsing the command line, if the option string is encountered with no command-line argument following it, the value of `const` will be assumed instead. See the [nargs](#nargs) description for examples. With the `'store_const'` and `'append_const'` actions, the `const` keyword argument must be given. For other actions, it defaults to `None`. ### default All optional arguments and some positional arguments may be omitted at the command line. The `default` keyword argument of [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument"), whose value defaults to `None`, specifies what value should be used if the command-line argument is not present. For optional arguments, the `default` value is used when the option string was not present at the command line: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', default=42) >>> parser.parse_args(['--foo', '2']) Namespace(foo='2') >>> parser.parse_args([]) Namespace(foo=42) ``` If the target namespace already has an attribute set, the action *default* will not overwrite it: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', default=42) >>> parser.parse_args([], namespace=argparse.Namespace(foo=101)) Namespace(foo=101) ``` If the `default` value is a string, the parser parses the value as if it were a command-line argument.
In particular, the parser applies any [type](#type) conversion argument, if provided, before setting the attribute on the [`Namespace`](#argparse.Namespace "argparse.Namespace") return value. Otherwise, the parser uses the value as is: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--length', default='10', type=int) >>> parser.add_argument('--width', default=10.5, type=int) >>> parser.parse_args() Namespace(length=10, width=10.5) ``` For positional arguments with [nargs](#nargs) equal to `?` or `*`, the `default` value is used when no command-line argument was present: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('foo', nargs='?', default=42) >>> parser.parse_args(['a']) Namespace(foo='a') >>> parser.parse_args([]) Namespace(foo=42) ``` Providing `default=argparse.SUPPRESS` causes no attribute to be added if the command-line argument was not present: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', default=argparse.SUPPRESS) >>> parser.parse_args([]) Namespace() >>> parser.parse_args(['--foo', '1']) Namespace(foo='1') ``` ### type By default, the parser reads command-line arguments in as simple strings. However, quite often the command-line string should instead be interpreted as another type, such as a [`float`](functions#float "float") or [`int`](functions#int "int"). The `type` keyword for [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") allows any necessary type-checking and type conversions to be performed. If the [type](#type) keyword is used with the [default](#default) keyword, the type converter is only applied if the default is a string. The argument to `type` can be any callable that accepts a single string. If the function raises `ArgumentTypeError`, [`TypeError`](exceptions#TypeError "TypeError"), or [`ValueError`](exceptions#ValueError "ValueError"), the exception is caught and a nicely formatted error message is displayed. No other exception types are handled. Common built-in types and functions can be used as type converters: ``` import argparse import pathlib parser = argparse.ArgumentParser() parser.add_argument('count', type=int) parser.add_argument('distance', type=float) parser.add_argument('street', type=ascii) parser.add_argument('code_point', type=ord) parser.add_argument('source_file', type=open) parser.add_argument('dest_file', type=argparse.FileType('w', encoding='latin-1')) parser.add_argument('datapath', type=pathlib.Path) ``` User defined functions can be used as well: ``` >>> def hyphenated(string): ... return '-'.join([word[:4] for word in string.casefold().split()]) ... >>> parser = argparse.ArgumentParser() >>> _ = parser.add_argument('short_title', type=hyphenated) >>> parser.parse_args(['"The Tale of Two Cities"']) Namespace(short_title='"the-tale-of-two-citi') ``` The [`bool()`](functions#bool "bool") function is not recommended as a type converter. All it does is convert empty strings to `False` and non-empty strings to `True`. This is usually not what is desired. In general, the `type` keyword is a convenience that should only be used for simple conversions that can only raise one of the three supported exceptions. Anything with more interesting error-handling or resource management should be done downstream after the arguments are parsed. For example, JSON or YAML conversions have complex error cases that require better reporting than can be given by the `type` keyword. 
A [`JSONDecodeError`](json#json.JSONDecodeError "json.JSONDecodeError") would not be well formatted and a `FileNotFoundError` exception would not be handled at all. Even [`FileType`](#argparse.FileType "argparse.FileType") has its limitations for use with the `type` keyword. If one argument uses *FileType* and then a subsequent argument fails, an error is reported but the file is not automatically closed. In this case, it would be better to wait until after the parser has run and then use the [`with`](../reference/compound_stmts#with)-statement to manage the files. For type checkers that simply check against a fixed set of values, consider using the [choices](#choices) keyword instead. ### choices Some command-line arguments should be selected from a restricted set of values. These can be handled by passing a container object as the *choices* keyword argument to [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument"). When the command line is parsed, argument values will be checked, and an error message will be displayed if the argument was not one of the acceptable values: ``` >>> parser = argparse.ArgumentParser(prog='game.py') >>> parser.add_argument('move', choices=['rock', 'paper', 'scissors']) >>> parser.parse_args(['rock']) Namespace(move='rock') >>> parser.parse_args(['fire']) usage: game.py [-h] {rock,paper,scissors} game.py: error: argument move: invalid choice: 'fire' (choose from 'rock', 'paper', 'scissors') ``` Note that inclusion in the *choices* container is checked after any [type](#type) conversions have been performed, so the type of the objects in the *choices* container should match the [type](#type) specified: ``` >>> parser = argparse.ArgumentParser(prog='doors.py') >>> parser.add_argument('door', type=int, choices=range(1, 4)) >>> print(parser.parse_args(['3'])) Namespace(door=3) >>> parser.parse_args(['4']) usage: doors.py [-h] {1,2,3} doors.py: error: argument door: invalid choice: 4 (choose from 1, 2, 3) ``` Any container can be passed as the *choices* value, so [`list`](stdtypes#list "list") objects, [`set`](stdtypes#set "set") objects, and custom containers are all supported. Use of [`enum.Enum`](enum#enum.Enum "enum.Enum") is not recommended because it is difficult to control its appearance in usage, help, and error messages. Formatted choices override the default *metavar*, which is normally derived from *dest*. This is usually what you want because the user never sees the *dest* parameter. If this display isn’t desirable (perhaps because there are many choices), just specify an explicit [metavar](#metavar). ### required In general, the [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") module assumes that flags like `-f` and `--bar` indicate *optional* arguments, which can always be omitted at the command line. To make an option *required*, `True` can be specified for the `required=` keyword argument to [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument"): ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', required=True) >>> parser.parse_args(['--foo', 'BAR']) Namespace(foo='BAR') >>> parser.parse_args([]) usage: [-h] --foo FOO : error: the following arguments are required: --foo ``` As the example shows, if an option is marked as `required`, [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") will report an error if that option is not present at the command line.
Note Required options are generally considered bad form because users expect *options* to be *optional*, and thus they should be avoided when possible. ### help The `help` value is a string containing a brief description of the argument. When a user requests help (usually by using `-h` or `--help` at the command line), these `help` descriptions will be displayed with each argument: ``` >>> parser = argparse.ArgumentParser(prog='frobble') >>> parser.add_argument('--foo', action='store_true', ... help='foo the bars before frobbling') >>> parser.add_argument('bar', nargs='+', ... help='one of the bars to be frobbled') >>> parser.parse_args(['-h']) usage: frobble [-h] [--foo] bar [bar ...] positional arguments: bar one of the bars to be frobbled optional arguments: -h, --help show this help message and exit --foo foo the bars before frobbling ``` The `help` strings can include various format specifiers to avoid repetition of things like the program name or the argument [default](#default). The available specifiers include the program name, `%(prog)s` and most keyword arguments to [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument"), e.g. `%(default)s`, `%(type)s`, etc.: ``` >>> parser = argparse.ArgumentParser(prog='frobble') >>> parser.add_argument('bar', nargs='?', type=int, default=42, ... help='the bar to %(prog)s (default: %(default)s)') >>> parser.print_help() usage: frobble [-h] [bar] positional arguments: bar the bar to frobble (default: 42) optional arguments: -h, --help show this help message and exit ``` As the help string supports %-formatting, if you want a literal `%` to appear in the help string, you must escape it as `%%`. [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") supports silencing the help entry for certain options, by setting the `help` value to `argparse.SUPPRESS`: ``` >>> parser = argparse.ArgumentParser(prog='frobble') >>> parser.add_argument('--foo', help=argparse.SUPPRESS) >>> parser.print_help() usage: frobble [-h] optional arguments: -h, --help show this help message and exit ``` ### metavar When [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") generates help messages, it needs some way to refer to each expected argument. By default, ArgumentParser objects use the [dest](#dest) value as the “name” of each object. By default, for positional argument actions, the [dest](#dest) value is used directly, and for optional argument actions, the [dest](#dest) value is uppercased. So, a single positional argument with `dest='bar'` will be referred to as `bar`. A single optional argument `--foo` that should be followed by a single command-line argument will be referred to as `FOO`. 
An example: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo') >>> parser.add_argument('bar') >>> parser.parse_args('X --foo Y'.split()) Namespace(bar='X', foo='Y') >>> parser.print_help() usage: [-h] [--foo FOO] bar positional arguments: bar optional arguments: -h, --help show this help message and exit --foo FOO ``` An alternative name can be specified with `metavar`: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', metavar='YYY') >>> parser.add_argument('bar', metavar='XXX') >>> parser.parse_args('X --foo Y'.split()) Namespace(bar='X', foo='Y') >>> parser.print_help() usage: [-h] [--foo YYY] XXX positional arguments: XXX optional arguments: -h, --help show this help message and exit --foo YYY ``` Note that `metavar` only changes the *displayed* name - the name of the attribute on the [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") object is still determined by the [dest](#dest) value. Different values of `nargs` may cause the metavar to be used multiple times. Providing a tuple to `metavar` specifies a different display for each of the arguments: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('-x', nargs=2) >>> parser.add_argument('--foo', nargs=2, metavar=('bar', 'baz')) >>> parser.print_help() usage: PROG [-h] [-x X X] [--foo bar baz] optional arguments: -h, --help show this help message and exit -x X X --foo bar baz ``` ### dest Most [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") actions add some value as an attribute of the object returned by [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args"). The name of this attribute is determined by the `dest` keyword argument of [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument"). For positional argument actions, `dest` is normally supplied as the first argument to [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument"): ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('bar') >>> parser.parse_args(['XXX']) Namespace(bar='XXX') ``` For optional argument actions, the value of `dest` is normally inferred from the option strings. [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") generates the value of `dest` by taking the first long option string and stripping away the initial `--` string. If no long option strings were supplied, `dest` will be derived from the first short option string by stripping the initial `-` character. Any internal `-` characters will be converted to `_` characters to make sure the string is a valid attribute name. The examples below illustrate this behavior: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('-f', '--foo-bar', '--foo') >>> parser.add_argument('-x', '-y') >>> parser.parse_args('-f 1 -x 2'.split()) Namespace(foo_bar='1', x='2') >>> parser.parse_args('--foo 1 -y 2'.split()) Namespace(foo_bar='1', x='2') ``` `dest` allows a custom attribute name to be provided: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', dest='bar') >>> parser.parse_args('--foo XXX'.split()) Namespace(bar='XXX') ``` ### Action classes Action classes implement the Action API, a callable which returns a callable that processes arguments from the command line. Any object which follows this API may be passed as the `action` parameter to `add_argument()`.
`class argparse.Action(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)` Action objects are used by an ArgumentParser to represent the information needed to parse a single argument from one or more strings from the command line. The Action class must accept the two positional arguments plus any keyword arguments passed to [`ArgumentParser.add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") except for the `action` itself. Instances of Action (or the return value of any callable passed to the `action` parameter) should have attributes “dest”, “option\_strings”, “default”, “type”, “required”, “help”, etc. defined. The easiest way to ensure these attributes are defined is to call `Action.__init__`. Action instances should be callable, so subclasses must override the `__call__` method, which should accept four parameters: * `parser` - The ArgumentParser object which contains this action. * `namespace` - The [`Namespace`](#argparse.Namespace "argparse.Namespace") object that will be returned by [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args"). Most actions add an attribute to this object using [`setattr()`](functions#setattr "setattr"). * `values` - The associated command-line arguments, with any type conversions applied. Type conversions are specified with the [type](#type) keyword argument to [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument"). * `option_string` - The option string that was used to invoke this action. The `option_string` argument is optional, and will be absent if the action is associated with a positional argument. The `__call__` method may perform arbitrary actions, but will typically set attributes on the `namespace` based on `dest` and `values`. Action subclasses can define a `format_usage` method that takes no arguments and returns a string that will be used when printing the usage of the program. If such a method is not provided, a sensible default will be used. The parse\_args() method ------------------------ `ArgumentParser.parse_args(args=None, namespace=None)` Convert argument strings to objects and assign them as attributes of the namespace. Return the populated namespace. Previous calls to [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") determine exactly what objects are created and how they are assigned. See the documentation for [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") for details. * [args](#args) - List of strings to parse. The default is taken from [`sys.argv`](sys#sys.argv "sys.argv"). * [namespace](#namespace) - An object to take the attributes. The default is a new empty [`Namespace`](#argparse.Namespace "argparse.Namespace") object. ### Option value syntax The [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") method supports several ways of specifying the value of an option (if it takes one).
In the simplest case, the option and its value are passed as two separate arguments: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('-x') >>> parser.add_argument('--foo') >>> parser.parse_args(['-x', 'X']) Namespace(foo=None, x='X') >>> parser.parse_args(['--foo', 'FOO']) Namespace(foo='FOO', x=None) ``` For long options (options with names longer than a single character), the option and value can also be passed as a single command-line argument, using `=` to separate them: ``` >>> parser.parse_args(['--foo=FOO']) Namespace(foo='FOO', x=None) ``` For short options (options only one character long), the option and its value can be concatenated: ``` >>> parser.parse_args(['-xX']) Namespace(foo=None, x='X') ``` Several short options can be joined together, using only a single `-` prefix, as long as only the last option (or none of them) requires a value: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('-x', action='store_true') >>> parser.add_argument('-y', action='store_true') >>> parser.add_argument('-z') >>> parser.parse_args(['-xyzZ']) Namespace(x=True, y=True, z='Z') ``` ### Invalid arguments While parsing the command line, [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") checks for a variety of errors, including ambiguous options, invalid types, invalid options, wrong number of positional arguments, etc. When it encounters such an error, it exits and prints the error along with a usage message: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('--foo', type=int) >>> parser.add_argument('bar', nargs='?') >>> # invalid type >>> parser.parse_args(['--foo', 'spam']) usage: PROG [-h] [--foo FOO] [bar] PROG: error: argument --foo: invalid int value: 'spam' >>> # invalid option >>> parser.parse_args(['--bar']) usage: PROG [-h] [--foo FOO] [bar] PROG: error: no such option: --bar >>> # wrong number of arguments >>> parser.parse_args(['spam', 'badger']) usage: PROG [-h] [--foo FOO] [bar] PROG: error: extra arguments found: badger ``` ### Arguments containing `-` The [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") method attempts to give errors whenever the user has clearly made a mistake, but some situations are inherently ambiguous. For example, the command-line argument `-1` could either be an attempt to specify an option or an attempt to provide a positional argument. 
The [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") method is cautious here: positional arguments may only begin with `-` if they look like negative numbers and there are no options in the parser that look like negative numbers: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('-x') >>> parser.add_argument('foo', nargs='?') >>> # no negative number options, so -1 is a positional argument >>> parser.parse_args(['-x', '-1']) Namespace(foo=None, x='-1') >>> # no negative number options, so -1 and -5 are positional arguments >>> parser.parse_args(['-x', '-1', '-5']) Namespace(foo='-5', x='-1') >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('-1', dest='one') >>> parser.add_argument('foo', nargs='?') >>> # negative number options present, so -1 is an option >>> parser.parse_args(['-1', 'X']) Namespace(foo=None, one='X') >>> # negative number options present, so -2 is an option >>> parser.parse_args(['-2']) usage: PROG [-h] [-1 ONE] [foo] PROG: error: no such option: -2 >>> # negative number options present, so both -1s are options >>> parser.parse_args(['-1', '-1']) usage: PROG [-h] [-1 ONE] [foo] PROG: error: argument -1: expected one argument ``` If you have positional arguments that must begin with `-` and don’t look like negative numbers, you can insert the pseudo-argument `'--'` which tells [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") that everything after that is a positional argument: ``` >>> parser.parse_args(['--', '-f']) Namespace(foo='-f', one=None) ``` ### Argument abbreviations (prefix matching) The [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") method [by default](#allow-abbrev) allows long options to be abbreviated to a prefix, if the abbreviation is unambiguous (the prefix matches a unique option): ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('-bacon') >>> parser.add_argument('-badger') >>> parser.parse_args('-bac MMM'.split()) Namespace(bacon='MMM', badger=None) >>> parser.parse_args('-bad WOOD'.split()) Namespace(bacon=None, badger='WOOD') >>> parser.parse_args('-ba BA'.split()) usage: PROG [-h] [-bacon BACON] [-badger BADGER] PROG: error: ambiguous option: -ba could match -badger, -bacon ``` An error is produced for arguments that could match more than one option. This feature can be disabled by setting [allow\_abbrev](#allow-abbrev) to `False`. ### Beyond `sys.argv` Sometimes it may be useful to have an ArgumentParser parse arguments other than those of [`sys.argv`](sys#sys.argv "sys.argv"). This can be accomplished by passing a list of strings to [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args"). This is useful for testing at the interactive prompt: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument( ... 'integers', metavar='int', type=int, choices=range(10), ... nargs='+', help='an integer in the range 0..9') >>> parser.add_argument( ... '--sum', dest='accumulate', action='store_const', const=sum, ...
default=max, help='sum the integers (default: find the max)') >>> parser.parse_args(['1', '2', '3', '4']) Namespace(accumulate=<built-in function max>, integers=[1, 2, 3, 4]) >>> parser.parse_args(['1', '2', '3', '4', '--sum']) Namespace(accumulate=<built-in function sum>, integers=[1, 2, 3, 4]) ``` ### The Namespace object `class argparse.Namespace` Simple class used by default by [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") to create an object holding attributes and return it. This class is deliberately simple, just an [`object`](functions#object "object") subclass with a readable string representation. If you prefer to have a dict-like view of the attributes, you can use the standard Python idiom, [`vars()`](functions#vars "vars"): ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo') >>> args = parser.parse_args(['--foo', 'BAR']) >>> vars(args) {'foo': 'BAR'} ``` It may also be useful to have an [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") assign attributes to an already existing object, rather than a new [`Namespace`](#argparse.Namespace "argparse.Namespace") object. This can be achieved by specifying the `namespace=` keyword argument: ``` >>> class C: ... pass ... >>> c = C() >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo') >>> parser.parse_args(args=['--foo', 'BAR'], namespace=c) >>> c.foo 'BAR' ``` Other utilities --------------- ### Sub-commands `ArgumentParser.add_subparsers([title][, description][, prog][, parser_class][, action][, option_string][, dest][, required][, help][, metavar])` Many programs split up their functionality into a number of sub-commands, for example, the `svn` program can invoke sub-commands like `svn checkout`, `svn update`, and `svn commit`. Splitting up functionality this way can be a particularly good idea when a program performs several different functions which require different kinds of command-line arguments. [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") supports the creation of such sub-commands with the [`add_subparsers()`](#argparse.ArgumentParser.add_subparsers "argparse.ArgumentParser.add_subparsers") method. The [`add_subparsers()`](#argparse.ArgumentParser.add_subparsers "argparse.ArgumentParser.add_subparsers") method is normally called with no arguments and returns a special action object. This object has a single method, `add_parser()`, which takes a command name and any [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") constructor arguments, and returns an [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") object that can be modified as usual. Description of parameters: * title - title for the sub-parser group in help output; by default “subcommands” if description is provided, otherwise uses title for positional arguments * description - description for the sub-parser group in help output, by default `None` * prog - usage information that will be displayed with sub-command help, by default the name of the program and any positional arguments before the subparser argument * parser\_class - class which will be used to create sub-parser instances, by default the class of the current parser (e.g.
ArgumentParser) * [action](#action) - the basic type of action to be taken when this argument is encountered at the command line * [dest](#dest) - name of the attribute under which sub-command name will be stored; by default `None` and no value is stored * [required](#required) - Whether or not a subcommand must be provided, by default `False` (added in 3.7) * [help](#help) - help for sub-parser group in help output, by default `None` * [metavar](#metavar) - string presenting available sub-commands in help; by default it is `None` and presents sub-commands in form {cmd1, cmd2, ..} Some example usage: ``` >>> # create the top-level parser >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('--foo', action='store_true', help='foo help') >>> subparsers = parser.add_subparsers(help='sub-command help') >>> >>> # create the parser for the "a" command >>> parser_a = subparsers.add_parser('a', help='a help') >>> parser_a.add_argument('bar', type=int, help='bar help') >>> >>> # create the parser for the "b" command >>> parser_b = subparsers.add_parser('b', help='b help') >>> parser_b.add_argument('--baz', choices='XYZ', help='baz help') >>> >>> # parse some argument lists >>> parser.parse_args(['a', '12']) Namespace(bar=12, foo=False) >>> parser.parse_args(['--foo', 'b', '--baz', 'Z']) Namespace(baz='Z', foo=True) ``` Note that the object returned by [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") will only contain attributes for the main parser and the subparser that was selected by the command line (and not any other subparsers). So in the example above, when the `a` command is specified, only the `foo` and `bar` attributes are present, and when the `b` command is specified, only the `foo` and `baz` attributes are present. Similarly, when a help message is requested from a subparser, only the help for that particular parser will be printed. The help message will not include parent parser or sibling parser messages. (A help message for each subparser command, however, can be given by supplying the `help=` argument to `add_parser()` as above.) ``` >>> parser.parse_args(['--help']) usage: PROG [-h] [--foo] {a,b} ... positional arguments: {a,b} sub-command help a a help b b help optional arguments: -h, --help show this help message and exit --foo foo help >>> parser.parse_args(['a', '--help']) usage: PROG a [-h] bar positional arguments: bar bar help optional arguments: -h, --help show this help message and exit >>> parser.parse_args(['b', '--help']) usage: PROG b [-h] [--baz {X,Y,Z}] optional arguments: -h, --help show this help message and exit --baz {X,Y,Z} baz help ``` The [`add_subparsers()`](#argparse.ArgumentParser.add_subparsers "argparse.ArgumentParser.add_subparsers") method also supports `title` and `description` keyword arguments. When either is present, the subparser’s commands will appear in their own group in the help output. For example: ``` >>> parser = argparse.ArgumentParser() >>> subparsers = parser.add_subparsers(title='subcommands', ... description='valid subcommands', ... help='additional help') >>> subparsers.add_parser('foo') >>> subparsers.add_parser('bar') >>> parser.parse_args(['-h']) usage: [-h] {foo,bar} ... optional arguments: -h, --help show this help message and exit subcommands: valid subcommands {foo,bar} additional help ``` Furthermore, `add_parser` supports an additional `aliases` argument, which allows multiple strings to refer to the same subparser. 
This example, like `svn`, aliases `co` as a shorthand for `checkout`: ``` >>> parser = argparse.ArgumentParser() >>> subparsers = parser.add_subparsers() >>> checkout = subparsers.add_parser('checkout', aliases=['co']) >>> checkout.add_argument('foo') >>> parser.parse_args(['co', 'bar']) Namespace(foo='bar') ``` One particularly effective way of handling sub-commands is to combine the use of the [`add_subparsers()`](#argparse.ArgumentParser.add_subparsers "argparse.ArgumentParser.add_subparsers") method with calls to [`set_defaults()`](#argparse.ArgumentParser.set_defaults "argparse.ArgumentParser.set_defaults") so that each subparser knows which Python function it should execute. For example: ``` >>> # sub-command functions >>> def foo(args): ... print(args.x * args.y) ... >>> def bar(args): ... print('((%s))' % args.z) ... >>> # create the top-level parser >>> parser = argparse.ArgumentParser() >>> subparsers = parser.add_subparsers() >>> >>> # create the parser for the "foo" command >>> parser_foo = subparsers.add_parser('foo') >>> parser_foo.add_argument('-x', type=int, default=1) >>> parser_foo.add_argument('y', type=float) >>> parser_foo.set_defaults(func=foo) >>> >>> # create the parser for the "bar" command >>> parser_bar = subparsers.add_parser('bar') >>> parser_bar.add_argument('z') >>> parser_bar.set_defaults(func=bar) >>> >>> # parse the args and call whatever function was selected >>> args = parser.parse_args('foo 1 -x 2'.split()) >>> args.func(args) 2.0 >>> >>> # parse the args and call whatever function was selected >>> args = parser.parse_args('bar XYZYX'.split()) >>> args.func(args) ((XYZYX)) ``` This way, you can let [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") do the job of calling the appropriate function after argument parsing is complete. Associating functions with actions like this is typically the easiest way to handle the different actions for each of your subparsers. However, if it is necessary to check the name of the subparser that was invoked, the `dest` keyword argument to the [`add_subparsers()`](#argparse.ArgumentParser.add_subparsers "argparse.ArgumentParser.add_subparsers") call will work: ``` >>> parser = argparse.ArgumentParser() >>> subparsers = parser.add_subparsers(dest='subparser_name') >>> subparser1 = subparsers.add_parser('1') >>> subparser1.add_argument('-x') >>> subparser2 = subparsers.add_parser('2') >>> subparser2.add_argument('y') >>> parser.parse_args(['2', 'frobble']) Namespace(subparser_name='2', y='frobble') ``` Changed in version 3.7: New *required* keyword argument. ### FileType objects `class argparse.FileType(mode='r', bufsize=-1, encoding=None, errors=None)` The [`FileType`](#argparse.FileType "argparse.FileType") factory creates objects that can be passed to the type argument of [`ArgumentParser.add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument"). 
Arguments that have [`FileType`](#argparse.FileType "argparse.FileType") objects as their type will open command-line arguments as files with the requested modes, buffer sizes, encodings and error handling (see the [`open()`](functions#open "open") function for more details): ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--raw', type=argparse.FileType('wb', 0)) >>> parser.add_argument('out', type=argparse.FileType('w', encoding='UTF-8')) >>> parser.parse_args(['--raw', 'raw.dat', 'file.txt']) Namespace(out=<_io.TextIOWrapper name='file.txt' mode='w' encoding='UTF-8'>, raw=<_io.FileIO name='raw.dat' mode='wb'>) ``` FileType objects understand the pseudo-argument `'-'` and automatically convert this into `sys.stdin` for readable [`FileType`](#argparse.FileType "argparse.FileType") objects and `sys.stdout` for writable [`FileType`](#argparse.FileType "argparse.FileType") objects: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('infile', type=argparse.FileType('r')) >>> parser.parse_args(['-']) Namespace(infile=<_io.TextIOWrapper name='<stdin>' encoding='UTF-8'>) ``` New in version 3.4: The *encoding* and *errors* keyword arguments. ### Argument groups `ArgumentParser.add_argument_group(title=None, description=None)` By default, [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") groups command-line arguments into “positional arguments” and “optional arguments” when displaying help messages. When there is a better conceptual grouping of arguments than this default one, appropriate groups can be created using the [`add_argument_group()`](#argparse.ArgumentParser.add_argument_group "argparse.ArgumentParser.add_argument_group") method: ``` >>> parser = argparse.ArgumentParser(prog='PROG', add_help=False) >>> group = parser.add_argument_group('group') >>> group.add_argument('--foo', help='foo help') >>> group.add_argument('bar', help='bar help') >>> parser.print_help() usage: PROG [--foo FOO] bar group: bar bar help --foo FOO foo help ``` The [`add_argument_group()`](#argparse.ArgumentParser.add_argument_group "argparse.ArgumentParser.add_argument_group") method returns an argument group object which has an [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") method just like a regular [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"). When an argument is added to the group, the parser treats it just like a normal argument, but displays the argument in a separate group for help messages. The [`add_argument_group()`](#argparse.ArgumentParser.add_argument_group "argparse.ArgumentParser.add_argument_group") method accepts *title* and *description* arguments which can be used to customize this display: ``` >>> parser = argparse.ArgumentParser(prog='PROG', add_help=False) >>> group1 = parser.add_argument_group('group1', 'group1 description') >>> group1.add_argument('foo', help='foo help') >>> group2 = parser.add_argument_group('group2', 'group2 description') >>> group2.add_argument('--bar', help='bar help') >>> parser.print_help() usage: PROG [--bar BAR] foo group1: group1 description foo foo help group2: group2 description --bar BAR bar help ``` Note that any arguments not in your user-defined groups will end up back in the usual “positional arguments” and “optional arguments” sections. ### Mutual exclusion `ArgumentParser.add_mutually_exclusive_group(required=False)` Create a mutually exclusive group.
[`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") will make sure that only one of the arguments in the mutually exclusive group was present on the command line: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> group = parser.add_mutually_exclusive_group() >>> group.add_argument('--foo', action='store_true') >>> group.add_argument('--bar', action='store_false') >>> parser.parse_args(['--foo']) Namespace(bar=True, foo=True) >>> parser.parse_args(['--bar']) Namespace(bar=False, foo=False) >>> parser.parse_args(['--foo', '--bar']) usage: PROG [-h] [--foo | --bar] PROG: error: argument --bar: not allowed with argument --foo ``` The [`add_mutually_exclusive_group()`](#argparse.ArgumentParser.add_mutually_exclusive_group "argparse.ArgumentParser.add_mutually_exclusive_group") method also accepts a *required* argument, to indicate that at least one of the mutually exclusive arguments is required: ``` >>> parser = argparse.ArgumentParser(prog='PROG') >>> group = parser.add_mutually_exclusive_group(required=True) >>> group.add_argument('--foo', action='store_true') >>> group.add_argument('--bar', action='store_false') >>> parser.parse_args([]) usage: PROG [-h] (--foo | --bar) PROG: error: one of the arguments --foo --bar is required ``` Note that currently mutually exclusive argument groups do not support the *title* and *description* arguments of [`add_argument_group()`](#argparse.ArgumentParser.add_argument_group "argparse.ArgumentParser.add_argument_group"). ### Parser defaults `ArgumentParser.set_defaults(**kwargs)` Most of the time, the attributes of the object returned by [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") will be fully determined by inspecting the command-line arguments and the argument actions. [`set_defaults()`](#argparse.ArgumentParser.set_defaults "argparse.ArgumentParser.set_defaults") allows some additional attributes that are determined without any inspection of the command line to be added: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('foo', type=int) >>> parser.set_defaults(bar=42, baz='badger') >>> parser.parse_args(['736']) Namespace(bar=42, baz='badger', foo=736) ``` Note that parser-level defaults always override argument-level defaults: ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', default='bar') >>> parser.set_defaults(foo='spam') >>> parser.parse_args([]) Namespace(foo='spam') ``` Parser-level defaults can be particularly useful when working with multiple parsers. See the [`add_subparsers()`](#argparse.ArgumentParser.add_subparsers "argparse.ArgumentParser.add_subparsers") method for an example of this type. `ArgumentParser.get_default(dest)` Get the default value for a namespace attribute, as set by either [`add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") or by [`set_defaults()`](#argparse.ArgumentParser.set_defaults "argparse.ArgumentParser.set_defaults"): ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', default='badger') >>> parser.get_default('foo') 'badger' ``` ### Printing help In most typical applications, [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") will take care of formatting and printing any usage or error messages. 
However, several formatting methods are available: `ArgumentParser.print_usage(file=None)` Print a brief description of how the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") should be invoked on the command line. If *file* is `None`, [`sys.stdout`](sys#sys.stdout "sys.stdout") is assumed. `ArgumentParser.print_help(file=None)` Print a help message, including the program usage and information about the arguments registered with the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"). If *file* is `None`, [`sys.stdout`](sys#sys.stdout "sys.stdout") is assumed. There are also variants of these methods that simply return a string instead of printing it: `ArgumentParser.format_usage()` Return a string containing a brief description of how the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") should be invoked on the command line. `ArgumentParser.format_help()` Return a string containing a help message, including the program usage and information about the arguments registered with the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser"). ### Partial parsing `ArgumentParser.parse_known_args(args=None, namespace=None)` Sometimes a script may only parse a few of the command-line arguments, passing the remaining arguments on to another script or program. In these cases, the [`parse_known_args()`](#argparse.ArgumentParser.parse_known_args "argparse.ArgumentParser.parse_known_args") method can be useful. It works much like [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args") except that it does not produce an error when extra arguments are present. Instead, it returns a two item tuple containing the populated namespace and the list of remaining argument strings. ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', action='store_true') >>> parser.add_argument('bar') >>> parser.parse_known_args(['--foo', '--badger', 'BAR', 'spam']) (Namespace(bar='BAR', foo=True), ['--badger', 'spam']) ``` Warning [Prefix matching](#prefix-matching) rules apply to `parse_known_args()`. The parser may consume an option even if it’s just a prefix of one of its known options, instead of leaving it in the remaining arguments list. ### Customizing file parsing `ArgumentParser.convert_arg_line_to_args(arg_line)` Arguments that are read from a file (see the *fromfile\_prefix\_chars* keyword argument to the [`ArgumentParser`](#argparse.ArgumentParser "argparse.ArgumentParser") constructor) are read one argument per line. [`convert_arg_line_to_args()`](#argparse.ArgumentParser.convert_arg_line_to_args "argparse.ArgumentParser.convert_arg_line_to_args") can be overridden for fancier reading. This method takes a single argument *arg\_line* which is a string read from the argument file. It returns a list of arguments parsed from this string. The method is called once per line read from the argument file, in order. A useful override of this method is one that treats each space-separated word as an argument. The following example demonstrates how to do this: ``` class MyArgumentParser(argparse.ArgumentParser): def convert_arg_line_to_args(self, arg_line): return arg_line.split() ``` ### Exiting methods `ArgumentParser.exit(status=0, message=None)` This method terminates the program, exiting with the specified *status* and, if given, it prints a *message* before that. 
The user can override this method to handle these steps differently: ``` class ErrorCatchingArgumentParser(argparse.ArgumentParser): def exit(self, status=0, message=None): if status: raise Exception(f'Exiting because of an error: {message}') exit(status) ``` `ArgumentParser.error(message)` This method prints a usage message including the *message* to the standard error and terminates the program with a status code of 2. ### Intermixed parsing `ArgumentParser.parse_intermixed_args(args=None, namespace=None)` `ArgumentParser.parse_known_intermixed_args(args=None, namespace=None)` A number of Unix commands allow the user to intermix optional arguments with positional arguments. The [`parse_intermixed_args()`](#argparse.ArgumentParser.parse_intermixed_args "argparse.ArgumentParser.parse_intermixed_args") and [`parse_known_intermixed_args()`](#argparse.ArgumentParser.parse_known_intermixed_args "argparse.ArgumentParser.parse_known_intermixed_args") methods support this parsing style. These parsers do not support all the argparse features, and will raise exceptions if unsupported features are used. In particular, subparsers, `argparse.REMAINDER`, and mutually exclusive groups that include both optionals and positionals are not supported. The following example shows the difference between [`parse_known_args()`](#argparse.ArgumentParser.parse_known_args "argparse.ArgumentParser.parse_known_args") and [`parse_intermixed_args()`](#argparse.ArgumentParser.parse_intermixed_args "argparse.ArgumentParser.parse_intermixed_args"): the former returns `['2', '3']` as unparsed arguments, while the latter collects all the positionals into `rest`. ``` >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo') >>> parser.add_argument('cmd') >>> parser.add_argument('rest', nargs='*', type=int) >>> parser.parse_known_args('doit 1 --foo bar 2 3'.split()) (Namespace(cmd='doit', foo='bar', rest=[1]), ['2', '3']) >>> parser.parse_intermixed_args('doit 1 --foo bar 2 3'.split()) Namespace(cmd='doit', foo='bar', rest=[1, 2, 3]) ``` [`parse_known_intermixed_args()`](#argparse.ArgumentParser.parse_known_intermixed_args "argparse.ArgumentParser.parse_known_intermixed_args") returns a two item tuple containing the populated namespace and the list of remaining argument strings. [`parse_intermixed_args()`](#argparse.ArgumentParser.parse_intermixed_args "argparse.ArgumentParser.parse_intermixed_args") raises an error if there are any remaining unparsed argument strings. New in version 3.7. Upgrading optparse code ----------------------- Originally, the [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") module had attempted to maintain compatibility with [`optparse`](optparse#module-optparse "optparse: Command-line option parsing library. (deprecated)"). However, [`optparse`](optparse#module-optparse "optparse: Command-line option parsing library. (deprecated)") was difficult to extend transparently, particularly with the changes required to support the new `nargs=` specifiers and better usage messages. When most everything in [`optparse`](optparse#module-optparse "optparse: Command-line option parsing library. (deprecated)") had either been copy-pasted over or monkey-patched, it no longer seemed practical to try to maintain the backwards compatibility. The [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") module improves on the standard library [`optparse`](optparse#module-optparse "optparse: Command-line option parsing library. 
(deprecated)") module in a number of ways including: * Handling positional arguments. * Supporting sub-commands. * Allowing alternative option prefixes like `+` and `/`. * Handling zero-or-more and one-or-more style arguments. * Producing more informative usage messages. * Providing a much simpler interface for custom `type` and `action`. A partial upgrade path from [`optparse`](optparse#module-optparse "optparse: Command-line option parsing library. (deprecated)") to [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library."): * Replace all [`optparse.OptionParser.add_option()`](optparse#optparse.OptionParser.add_option "optparse.OptionParser.add_option") calls with [`ArgumentParser.add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") calls. * Replace `(options, args) = parser.parse_args()` with `args = parser.parse_args()` and add additional [`ArgumentParser.add_argument()`](#argparse.ArgumentParser.add_argument "argparse.ArgumentParser.add_argument") calls for the positional arguments. Keep in mind that what was previously called `options`, now in the [`argparse`](#module-argparse "argparse: Command-line option and argument parsing library.") context is called `args`. * Replace [`optparse.OptionParser.disable_interspersed_args()`](optparse#optparse.OptionParser.disable_interspersed_args "optparse.OptionParser.disable_interspersed_args") by using [`parse_intermixed_args()`](#argparse.ArgumentParser.parse_intermixed_args "argparse.ArgumentParser.parse_intermixed_args") instead of [`parse_args()`](#argparse.ArgumentParser.parse_args "argparse.ArgumentParser.parse_args"). * Replace callback actions and the `callback_*` keyword arguments with `type` or `action` arguments. * Replace string names for `type` keyword arguments with the corresponding type objects (e.g. int, float, complex, etc). * Replace `optparse.Values` with [`Namespace`](#argparse.Namespace "argparse.Namespace") and `optparse.OptionError` and `optparse.OptionValueError` with `ArgumentError`. * Replace strings with implicit arguments such as `%default` or `%prog` with the standard Python syntax to use dictionaries to format strings, that is, `%(default)s` and `%(prog)s`. * Replace the OptionParser constructor `version` argument with a call to `parser.add_argument('--version', action='version', version='<the version>')`.
python ftplib — FTP protocol client ftplib — FTP protocol client ============================ **Source code:** [Lib/ftplib.py](https://github.com/python/cpython/tree/3.9/Lib/ftplib.py) This module defines the class [`FTP`](#ftplib.FTP "ftplib.FTP") and a few related items. The [`FTP`](#ftplib.FTP "ftplib.FTP") class implements the client side of the FTP protocol. You can use this to write Python programs that perform a variety of automated FTP jobs, such as mirroring other FTP servers. It is also used by the module [`urllib.request`](urllib.request#module-urllib.request "urllib.request: Extensible library for opening URLs.") to handle URLs that use FTP. For more information on FTP (File Transfer Protocol), see Internet [**RFC 959**](https://tools.ietf.org/html/rfc959.html). The default encoding is UTF-8, following [**RFC 2640**](https://tools.ietf.org/html/rfc2640.html). Here’s a sample session using the [`ftplib`](#module-ftplib "ftplib: FTP protocol client (requires sockets).") module: ``` >>> from ftplib import FTP >>> ftp = FTP('ftp.us.debian.org') # connect to host, default port >>> ftp.login() # user anonymous, passwd anonymous@ '230 Login successful.' >>> ftp.cwd('debian') # change into "debian" directory '250 Directory successfully changed.' >>> ftp.retrlines('LIST') # list directory contents -rw-rw-r-- 1 1176 1176 1063 Jun 15 10:18 README ... drwxr-sr-x 5 1176 1176 4096 Dec 19 2000 pool drwxr-sr-x 4 1176 1176 4096 Nov 17 2008 project drwxr-xr-x 3 1176 1176 4096 Oct 10 2012 tools '226 Directory send OK.' >>> with open('README', 'wb') as fp: ... ftp.retrbinary('RETR README', fp.write) ... '226 Transfer complete.' >>> ftp.quit() '221 Goodbye.' ``` The module defines the following items: `class ftplib.FTP(host='', user='', passwd='', acct='', timeout=None, source_address=None, *, encoding='utf-8')` Return a new instance of the [`FTP`](#ftplib.FTP "ftplib.FTP") class. When *host* is given, the method call `connect(host)` is made. When *user* is given, additionally the method call `login(user, passwd, acct)` is made (where *passwd* and *acct* default to the empty string when not given). The optional *timeout* parameter specifies a timeout in seconds for blocking operations like the connection attempt (if it is not specified, the global default timeout setting will be used). *source\_address* is a 2-tuple `(host, port)` for the socket to bind to as its source address before connecting. The *encoding* parameter specifies the encoding for directories and filenames. The [`FTP`](#ftplib.FTP "ftplib.FTP") class supports the [`with`](../reference/compound_stmts#with) statement, e.g.: ``` >>> from ftplib import FTP >>> with FTP("ftp1.at.proftpd.org") as ftp: ... ftp.login() ... ftp.dir() ... '230 Anonymous login ok, restrictions apply.' dr-xr-xr-x 9 ftp ftp 154 May 6 10:43 . dr-xr-xr-x 9 ftp ftp 154 May 6 10:43 .. dr-xr-xr-x 5 ftp ftp 4096 May 6 10:43 CentOS dr-xr-xr-x 3 ftp ftp 18 Jul 10 2008 Fedora >>> ``` Changed in version 3.2: Support for the [`with`](../reference/compound_stmts#with) statement was added. Changed in version 3.3: *source\_address* parameter was added. Changed in version 3.9: If the *timeout* parameter is set to be zero, it will raise a [`ValueError`](exceptions#ValueError "ValueError") to prevent the creation of a non-blocking socket. The *encoding* parameter was added, and the default was changed from Latin-1 to UTF-8 to follow [**RFC 2640**](https://tools.ietf.org/html/rfc2640.html).
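As a small illustration of these constructor defaults, an instance can also be created without a *host* and connected explicitly later; `ftp.example.org` below is a placeholder host name:

```
>>> from ftplib import FTP
>>> ftp = FTP()                      # no host given, so no connection is made yet
>>> ftp.connect('ftp.example.org')   # placeholder host; equivalent to FTP('ftp.example.org')
>>> ftp.login()                      # anonymous login, as if user='anonymous' had been passed
>>> ftp.quit()
```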
`class ftplib.FTP_TLS(host='', user='', passwd='', acct='', keyfile=None, certfile=None, context=None, timeout=None, source_address=None, *, encoding='utf-8')` An [`FTP`](#ftplib.FTP "ftplib.FTP") subclass which adds TLS support to FTP as described in [**RFC 4217**](https://tools.ietf.org/html/rfc4217.html). Connect as usual to port 21, implicitly securing the FTP control connection before authenticating. Securing the data connection requires the user to explicitly ask for it by calling the [`prot_p()`](#ftplib.FTP_TLS.prot_p "ftplib.FTP_TLS.prot_p") method. *context* is a [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") object which allows bundling SSL configuration options, certificates and private keys into a single (potentially long-lived) structure. Please read [Security considerations](ssl#ssl-security) for best practices. *keyfile* and *certfile* are a legacy alternative to *context* – they can point to PEM-formatted private key and certificate chain files (respectively) for the SSL connection. New in version 3.2. Changed in version 3.3: *source\_address* parameter was added. Changed in version 3.4: The class now supports hostname check with [`ssl.SSLContext.check_hostname`](ssl#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") and *Server Name Indication* (see [`ssl.HAS_SNI`](ssl#ssl.HAS_SNI "ssl.HAS_SNI")). Deprecated since version 3.6: *keyfile* and *certfile* are deprecated in favor of *context*. Please use [`ssl.SSLContext.load_cert_chain()`](ssl#ssl.SSLContext.load_cert_chain "ssl.SSLContext.load_cert_chain") instead, or let [`ssl.create_default_context()`](ssl#ssl.create_default_context "ssl.create_default_context") select the system’s trusted CA certificates for you. Changed in version 3.9: If the *timeout* parameter is set to zero, it will raise a [`ValueError`](exceptions#ValueError "ValueError") to prevent the creation of a non-blocking socket. The *encoding* parameter was added, and the default was changed from Latin-1 to UTF-8 to follow [**RFC 2640**](https://tools.ietf.org/html/rfc2640.html). Here’s a sample session using the [`FTP_TLS`](#ftplib.FTP_TLS "ftplib.FTP_TLS") class: ``` >>> ftps = FTP_TLS('ftp.pureftpd.org') >>> ftps.login() '230 Anonymous user logged in' >>> ftps.prot_p() '200 Data protection level set to "private"' >>> ftps.nlst() ['6jack', 'OpenBSD', 'antilink', 'blogbench', 'bsdcam', 'clockspeed', 'djbdns-jedi', 'docs', 'eaccelerator-jedi', 'favicon.ico', 'francotone', 'fugu', 'ignore', 'libpuzzle', 'metalog', 'minidentd', 'misc', 'mysql-udf-global-user-variables', 'php-jenkins-hash', 'php-skein-hash', 'php-webdav', 'phpaudit', 'phpbench', 'pincaster', 'ping', 'posto', 'pub', 'public', 'public_keys', 'pure-ftpd', 'qscan', 'qtc', 'sharedance', 'skycache', 'sound', 'tmp', 'ucarp'] ``` `exception ftplib.error_reply` Exception raised when an unexpected reply is received from the server. `exception ftplib.error_temp` Exception raised when an error code signifying a temporary error (response codes in the range 400–499) is received. `exception ftplib.error_perm` Exception raised when an error code signifying a permanent error (response codes in the range 500–599) is received. `exception ftplib.error_proto` Exception raised when a reply is received from the server that does not fit the response specifications of the File Transfer Protocol, i.e. does not begin with a digit in the range 1–5.
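Because these exception classes map onto the FTP reply categories, callers can branch on them to decide whether an operation is worth retrying. A short, hedged sketch (the host name and path are placeholders, not a real service):

```
import ftplib

ftp = ftplib.FTP('ftp.example.com', timeout=30)  # placeholder host
try:
    ftp.login()                    # anonymous login
    ftp.cwd('/restricted')         # may be refused by the server
except ftplib.error_perm as exc:   # 5xx reply: permanent, do not retry
    print('permanent failure:', exc)
except ftplib.error_temp as exc:   # 4xx reply: temporary, retry later
    print('temporary failure:', exc)
finally:
    ftp.close()
```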
`ftplib.all_errors` The set of all exceptions (as a tuple) that methods of [`FTP`](#ftplib.FTP "ftplib.FTP") instances may raise as a result of problems with the FTP connection (as opposed to programming errors made by the caller). This set includes the four exceptions listed above as well as [`OSError`](exceptions#OSError "OSError") and [`EOFError`](exceptions#EOFError "EOFError"). See also `Module` [`netrc`](netrc#module-netrc "netrc: Loading of .netrc files.") Parser for the `.netrc` file format. The file `.netrc` is typically used by FTP clients to load user authentication information before prompting the user. FTP Objects ----------- Several methods are available in two flavors: one for handling text files and another for binary files. These are named for the command which is used followed by `lines` for the text version or `binary` for the binary version. [`FTP`](#ftplib.FTP "ftplib.FTP") instances have the following methods: `FTP.set_debuglevel(level)` Set the instance’s debugging level. This controls the amount of debugging output printed. The default, `0`, produces no debugging output. A value of `1` produces a moderate amount of debugging output, generally a single line per request. A value of `2` or higher produces the maximum amount of debugging output, logging each line sent and received on the control connection. `FTP.connect(host='', port=0, timeout=None, source_address=None)` Connect to the given host and port. The default port number is `21`, as specified by the FTP protocol specification. It is rarely needed to specify a different port number. This function should be called only once for each instance; it should not be called at all if a host was given when the instance was created. All other methods can only be used after a connection has been made. The optional *timeout* parameter specifies a timeout in seconds for the connection attempt. If no *timeout* is passed, the global default timeout setting will be used. *source\_address* is a 2-tuple `(host, port)` for the socket to bind to as its source address before connecting. Raises an [auditing event](sys#auditing) `ftplib.connect` with arguments `self`, `host`, `port`. Changed in version 3.3: *source\_address* parameter was added. `FTP.getwelcome()` Return the welcome message sent by the server in reply to the initial connection. (This message sometimes contains disclaimers or help information that may be relevant to the user.) `FTP.login(user='anonymous', passwd='', acct='')` Log in as the given *user*. The *passwd* and *acct* parameters are optional and default to the empty string. If no *user* is specified, it defaults to `'anonymous'`. If *user* is `'anonymous'`, the default *passwd* is `'anonymous@'`. This function should be called only once for each instance, after a connection has been established; it should not be called at all if a host and user were given when the instance was created. Most FTP commands are only allowed after the client has logged in. The *acct* parameter supplies “accounting information”; few systems implement this. `FTP.abort()` Abort a file transfer that is in progress. Using this does not always work, but it’s worth a try. `FTP.sendcmd(cmd)` Send a simple command string to the server and return the response string. Raises an [auditing event](sys#auditing) `ftplib.sendcmd` with arguments `self`, `cmd`. `FTP.voidcmd(cmd)` Send a simple command string to the server and handle the response. Return nothing if a response code corresponding to success (codes in the range 200–299) is received. 
Raise [`error_reply`](#ftplib.error_reply "ftplib.error_reply") otherwise. Raises an [auditing event](sys#auditing) `ftplib.sendcmd` with arguments `self`, `cmd`. `FTP.retrbinary(cmd, callback, blocksize=8192, rest=None)` Retrieve a file in binary transfer mode. *cmd* should be an appropriate `RETR` command: `'RETR filename'`. The *callback* function is called for each block of data received, with a single bytes argument giving the data block. The optional *blocksize* argument specifies the maximum chunk size to read on the low-level socket object created to do the actual transfer (which will also be the largest size of the data blocks passed to *callback*). A reasonable default is chosen. *rest* means the same thing as in the [`transfercmd()`](#ftplib.FTP.transfercmd "ftplib.FTP.transfercmd") method. `FTP.retrlines(cmd, callback=None)` Retrieve a file or directory listing in the encoding specified by the *encoding* parameter at initialization. *cmd* should be an appropriate `RETR` command (see [`retrbinary()`](#ftplib.FTP.retrbinary "ftplib.FTP.retrbinary")) or a command such as `LIST` or `NLST` (usually just the string `'LIST'`). `LIST` retrieves a list of files and information about those files. `NLST` retrieves a list of file names. The *callback* function is called for each line with a string argument containing the line with the trailing CRLF stripped. The default *callback* prints the line to `sys.stdout`. `FTP.set_pasv(val)` Enable “passive” mode if *val* is true, otherwise disable passive mode. Passive mode is on by default. `FTP.storbinary(cmd, fp, blocksize=8192, callback=None, rest=None)` Store a file in binary transfer mode. *cmd* should be an appropriate `STOR` command: `"STOR filename"`. *fp* is a [file object](../glossary#term-file-object) (opened in binary mode) which is read until EOF using its `read()` method in blocks of size *blocksize* to provide the data to be stored. The *blocksize* argument defaults to 8192. *callback* is an optional single parameter callable that is called on each block of data after it is sent. *rest* means the same thing as in the [`transfercmd()`](#ftplib.FTP.transfercmd "ftplib.FTP.transfercmd") method. Changed in version 3.2: *rest* parameter added. `FTP.storlines(cmd, fp, callback=None)` Store a file in line mode. *cmd* should be an appropriate `STOR` command (see [`storbinary()`](#ftplib.FTP.storbinary "ftplib.FTP.storbinary")). Lines are read until EOF from the [file object](../glossary#term-file-object) *fp* (opened in binary mode) using its [`readline()`](io#io.IOBase.readline "io.IOBase.readline") method to provide the data to be stored. *callback* is an optional single parameter callable that is called on each line after it is sent. `FTP.transfercmd(cmd, rest=None)` Initiate a transfer over the data connection. If the transfer is active, send an `EPRT` or `PORT` command and the transfer command specified by *cmd*, and accept the connection. If the server is passive, send an `EPSV` or `PASV` command, connect to it, and start the transfer command. Either way, return the socket for the connection. If optional *rest* is given, a `REST` command is sent to the server, passing *rest* as an argument. *rest* is usually a byte offset into the requested file, telling the server to restart sending the file’s bytes at the requested offset, skipping over the initial bytes. 
Note however that the [`transfercmd()`](#ftplib.FTP.transfercmd "ftplib.FTP.transfercmd") method converts *rest* to a string with the *encoding* parameter specified at initialization, but no check is performed on the string’s contents. If the server does not recognize the `REST` command, an [`error_reply`](#ftplib.error_reply "ftplib.error_reply") exception will be raised. If this happens, simply call [`transfercmd()`](#ftplib.FTP.transfercmd "ftplib.FTP.transfercmd") without a *rest* argument. `FTP.ntransfercmd(cmd, rest=None)` Like [`transfercmd()`](#ftplib.FTP.transfercmd "ftplib.FTP.transfercmd"), but returns a tuple of the data connection and the expected size of the data. If the expected size could not be computed, `None` will be returned as the expected size. *cmd* and *rest* mean the same thing as in [`transfercmd()`](#ftplib.FTP.transfercmd "ftplib.FTP.transfercmd"). `FTP.mlsd(path="", facts=[])` List a directory in a standardized format by using the `MLSD` command ([**RFC 3659**](https://tools.ietf.org/html/rfc3659.html)). If *path* is omitted, the current directory is assumed. *facts* is a list of strings representing the type of information desired (e.g. `["type", "size", "perm"]`). Return a generator object yielding a tuple of two elements for every file found in *path*. The first element is the file name; the second is a dictionary containing facts about that file. The content of this dictionary might be limited by the *facts* argument, but the server is not guaranteed to return all requested facts. New in version 3.3. `FTP.nlst(argument[, ...])` Return a list of file names as returned by the `NLST` command. The optional *argument* is a directory to list (default is the current server directory). Multiple arguments can be used to pass non-standard options to the `NLST` command. Note If your server supports the command, [`mlsd()`](#ftplib.FTP.mlsd "ftplib.FTP.mlsd") offers a better API. `FTP.dir(argument[, ...])` Produce a directory listing as returned by the `LIST` command, printing it to standard output. The optional *argument* is a directory to list (default is the current server directory). Multiple arguments can be used to pass non-standard options to the `LIST` command. If the last argument is a function, it is used as a *callback* function as for [`retrlines()`](#ftplib.FTP.retrlines "ftplib.FTP.retrlines"); the default prints to `sys.stdout`. This method returns `None`. Note If your server supports the command, [`mlsd()`](#ftplib.FTP.mlsd "ftplib.FTP.mlsd") offers a better API. `FTP.rename(fromname, toname)` Rename file *fromname* on the server to *toname*. `FTP.delete(filename)` Remove the file named *filename* from the server. If successful, returns the text of the response, otherwise raises [`error_perm`](#ftplib.error_perm "ftplib.error_perm") on permission errors or [`error_reply`](#ftplib.error_reply "ftplib.error_reply") on other errors. `FTP.cwd(pathname)` Set the current directory on the server. `FTP.mkd(pathname)` Create a new directory on the server. `FTP.pwd()` Return the pathname of the current directory on the server. `FTP.rmd(dirname)` Remove the directory named *dirname* on the server. `FTP.size(filename)` Request the size of the file named *filename* on the server. On success, the size of the file is returned as an integer, otherwise `None` is returned. Note that the `SIZE` command is not standardized, but is supported by many common server implementations. `FTP.quit()` Send a `QUIT` command to the server and close the connection. 
This is the “polite” way to close a connection, but it may raise an exception if the server responds with an error to the `QUIT` command. This implies a call to the [`close()`](#ftplib.FTP.close "ftplib.FTP.close") method, which renders the [`FTP`](#ftplib.FTP "ftplib.FTP") instance useless for subsequent calls (see below). `FTP.close()` Close the connection unilaterally. This should not be applied to an already closed connection, such as after a successful call to [`quit()`](#ftplib.FTP.quit "ftplib.FTP.quit"). After this call the [`FTP`](#ftplib.FTP "ftplib.FTP") instance should not be used any more (after a call to [`close()`](#ftplib.FTP.close "ftplib.FTP.close") or [`quit()`](#ftplib.FTP.quit "ftplib.FTP.quit") you cannot reopen the connection by issuing another [`login()`](#ftplib.FTP.login "ftplib.FTP.login") call). FTP\_TLS Objects ---------------- The [`FTP_TLS`](#ftplib.FTP_TLS "ftplib.FTP_TLS") class inherits from [`FTP`](#ftplib.FTP "ftplib.FTP"), defining these additional objects: `FTP_TLS.ssl_version` The SSL version to use (defaults to [`ssl.PROTOCOL_SSLv23`](ssl#ssl.PROTOCOL_SSLv23 "ssl.PROTOCOL_SSLv23")). `FTP_TLS.auth()` Set up a secure control connection by using TLS or SSL, depending on what is specified in the [`ssl_version`](#ftplib.FTP_TLS.ssl_version "ftplib.FTP_TLS.ssl_version") attribute. Changed in version 3.4: The method now supports hostname check with [`ssl.SSLContext.check_hostname`](ssl#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") and *Server Name Indication* (see [`ssl.HAS_SNI`](ssl#ssl.HAS_SNI "ssl.HAS_SNI")). `FTP_TLS.ccc()` Revert the control channel to plaintext. This can be useful to take advantage of firewalls that know how to handle NAT with non-secure FTP without opening fixed ports. New in version 3.3. `FTP_TLS.prot_p()` Set up a secure data connection. `FTP_TLS.prot_c()` Set up a clear-text data connection. gzip — Support for gzip files ============================= **Source code:** [Lib/gzip.py](https://github.com/python/cpython/tree/3.9/Lib/gzip.py) This module provides a simple interface to compress and decompress files just like the GNU programs **gzip** and **gunzip** would. The data compression is provided by the [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") module. The [`gzip`](#module-gzip "gzip: Interfaces for gzip compression and decompression using file objects.") module provides the [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") class, as well as the [`open()`](#gzip.open "gzip.open"), [`compress()`](#gzip.compress "gzip.compress") and [`decompress()`](#gzip.decompress "gzip.decompress") convenience functions. The [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") class reads and writes **gzip**-format files, automatically compressing or decompressing the data so that it looks like an ordinary [file object](../glossary#term-file-object). Note that additional file formats which can be decompressed by the **gzip** and **gunzip** programs, such as those produced by **compress** and **pack**, are not supported by this module. The module defines the following items: `gzip.open(filename, mode='rb', compresslevel=9, encoding=None, errors=None, newline=None)` Open a gzip-compressed file in binary or text mode, returning a [file object](../glossary#term-file-object). 
The *filename* argument can be an actual filename (a [`str`](stdtypes#str "str") or [`bytes`](stdtypes#bytes "bytes") object), or an existing file object to read from or write to. The *mode* argument can be any of `'r'`, `'rb'`, `'a'`, `'ab'`, `'w'`, `'wb'`, `'x'` or `'xb'` for binary mode, or `'rt'`, `'at'`, `'wt'`, or `'xt'` for text mode. The default is `'rb'`. The *compresslevel* argument is an integer from 0 to 9, as for the [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") constructor. For binary mode, this function is equivalent to the [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") constructor: `GzipFile(filename, mode, compresslevel)`. In this case, the *encoding*, *errors* and *newline* arguments must not be provided. For text mode, a [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") object is created, and wrapped in an [`io.TextIOWrapper`](io#io.TextIOWrapper "io.TextIOWrapper") instance with the specified encoding, error handling behavior, and line ending(s). Changed in version 3.3: Added support for *filename* being a file object, support for text mode, and the *encoding*, *errors* and *newline* arguments. Changed in version 3.4: Added support for the `'x'`, `'xb'` and `'xt'` modes. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `exception gzip.BadGzipFile` An exception raised for invalid gzip files. It inherits [`OSError`](exceptions#OSError "OSError"). [`EOFError`](exceptions#EOFError "EOFError") and [`zlib.error`](zlib#zlib.error "zlib.error") can also be raised for invalid gzip files. New in version 3.8. `class gzip.GzipFile(filename=None, mode=None, compresslevel=9, fileobj=None, mtime=None)` Constructor for the [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") class, which simulates most of the methods of a [file object](../glossary#term-file-object), with the exception of the `truncate()` method. At least one of *fileobj* and *filename* must be given a non-trivial value. The new class instance is based on *fileobj*, which can be a regular file, an [`io.BytesIO`](io#io.BytesIO "io.BytesIO") object, or any other object which simulates a file. It defaults to `None`, in which case *filename* is opened to provide a file object. When *fileobj* is not `None`, the *filename* argument is only used to be included in the **gzip** file header, which may include the original filename of the uncompressed file. It defaults to the filename of *fileobj*, if discernible; otherwise, it defaults to the empty string, and in this case the original filename is not included in the header. The *mode* argument can be any of `'r'`, `'rb'`, `'a'`, `'ab'`, `'w'`, `'wb'`, `'x'`, or `'xb'`, depending on whether the file will be read or written. The default is the mode of *fileobj* if discernible; otherwise, the default is `'rb'`. In future Python releases the mode of *fileobj* will not be used. It is better to always specify *mode* for writing. Note that the file is always opened in binary mode. To open a compressed file in text mode, use [`open()`](#gzip.open "gzip.open") (or wrap your [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") with an [`io.TextIOWrapper`](io#io.TextIOWrapper "io.TextIOWrapper")). The *compresslevel* argument is an integer from `0` to `9` controlling the level of compression; `1` is fastest and produces the least compression, and `9` is slowest and produces the most compression. `0` is no compression. The default is `9`. The *mtime* argument is an optional numeric timestamp to be written to the last modification time field in the stream when compressing. 
It should only be provided in compression mode. If omitted or `None`, the current time is used. See the [`mtime`](#gzip.GzipFile.mtime "gzip.GzipFile.mtime") attribute for more details. Calling a [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") object’s `close()` method does not close *fileobj*, since you might wish to append more material after the compressed data. This also allows you to pass an [`io.BytesIO`](io#io.BytesIO "io.BytesIO") object opened for writing as *fileobj*, and retrieve the resulting memory buffer using the [`io.BytesIO`](io#io.BytesIO "io.BytesIO") object’s [`getvalue()`](io#io.BytesIO.getvalue "io.BytesIO.getvalue") method. [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") supports the [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase") interface, including iteration and the [`with`](../reference/compound_stmts#with) statement. Only the `truncate()` method isn’t implemented. [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") also provides the following method and attribute: `peek(n)` Read *n* uncompressed bytes without advancing the file position. At most one single read on the compressed stream is done to satisfy the call. The number of bytes returned may be more or less than requested. Note While calling [`peek()`](#gzip.GzipFile.peek "gzip.GzipFile.peek") does not change the file position of the [`GzipFile`](#gzip.GzipFile "gzip.GzipFile"), it may change the position of the underlying file object (e.g. if the [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") was constructed with the *fileobj* parameter). New in version 3.2. `mtime` When decompressing, the value of the last modification time field in the most recently read header may be read from this attribute, as an integer. The initial value before reading any headers is `None`. All **gzip** compressed streams are required to contain this timestamp field. Some programs, such as **gunzip**, make use of the timestamp. The format is the same as the return value of [`time.time()`](time#time.time "time.time") and the [`st_mtime`](os#os.stat_result.st_mtime "os.stat_result.st_mtime") attribute of the object returned by [`os.stat()`](os#os.stat "os.stat"). Changed in version 3.1: Support for the [`with`](../reference/compound_stmts#with) statement was added, along with the *mtime* constructor argument and [`mtime`](#gzip.GzipFile.mtime "gzip.GzipFile.mtime") attribute. Changed in version 3.2: Support for zero-padded and unseekable files was added. Changed in version 3.3: The [`io.BufferedIOBase.read1()`](io#io.BufferedIOBase.read1 "io.BufferedIOBase.read1") method is now implemented. Changed in version 3.4: Added support for the `'x'` and `'xb'` modes. Changed in version 3.5: Added support for writing arbitrary [bytes-like objects](../glossary#term-bytes-like-object). The [`read()`](io#io.BufferedIOBase.read "io.BufferedIOBase.read") method now accepts an argument of `None`. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Deprecated since version 3.9: Opening [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") for writing without specifying the *mode* argument is deprecated. `gzip.compress(data, compresslevel=9, *, mtime=None)` Compress the *data*, returning a [`bytes`](stdtypes#bytes "bytes") object containing the compressed data. *compresslevel* and *mtime* have the same meaning as in the [`GzipFile`](#gzip.GzipFile "gzip.GzipFile") constructor above. New in version 3.2. Changed in version 3.8: Added the *mtime* parameter for reproducible output. 
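Together with the `decompress()` function described next, `compress()` gives a simple in-memory round trip; a minimal sketch:

```
import gzip

original = b'payload ' * 1000
blob = gzip.compress(original, compresslevel=9)  # bytes -> gzip-format bytes
assert gzip.decompress(blob) == original         # and back again, losslessly
print(len(original), '->', len(blob))            # the size saving is visible
```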
`gzip.decompress(data)` Decompress the *data*, returning a [`bytes`](stdtypes#bytes "bytes") object containing the uncompressed data. New in version 3.2. Examples of usage ----------------- Example of how to read a compressed file: ``` import gzip with gzip.open('/home/joe/file.txt.gz', 'rb') as f: file_content = f.read() ``` Example of how to create a compressed GZIP file: ``` import gzip content = b"Lots of content here" with gzip.open('/home/joe/file.txt.gz', 'wb') as f: f.write(content) ``` Example of how to GZIP compress an existing file: ``` import gzip import shutil with open('/home/joe/file.txt', 'rb') as f_in: with gzip.open('/home/joe/file.txt.gz', 'wb') as f_out: shutil.copyfileobj(f_in, f_out) ``` Example of how to GZIP compress a binary string: ``` import gzip s_in = b"Lots of content here" s_out = gzip.compress(s_in) ``` See also `Module` [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") The basic data compression module needed to support the **gzip** file format. Command Line Interface ---------------------- The [`gzip`](#module-gzip "gzip: Interfaces for gzip compression and decompression using file objects.") module provides a simple command line interface to compress or decompress files. Once executed, the [`gzip`](#module-gzip "gzip: Interfaces for gzip compression and decompression using file objects.") module keeps the input file(s). Changed in version 3.8: Added a new command line interface with usage information. By default, the CLI uses a compression level of 6. ### Command line options `file` If *file* is not specified, read from [`sys.stdin`](sys#sys.stdin "sys.stdin"). `--fast` Indicates the fastest compression method (less compression). `--best` Indicates the slowest compression method (best compression). `-d, --decompress` Decompress the given file. `-h, --help` Show the help message.
turtle — Turtle graphics ======================== **Source code:** [Lib/turtle.py](https://github.com/python/cpython/tree/3.9/Lib/turtle.py) Introduction ------------ Turtle graphics is a popular way to introduce programming to kids. It was part of the original Logo programming language developed by Wally Feurzeig, Seymour Papert and Cynthia Solomon in 1967. Imagine a robotic turtle starting at (0, 0) in the x-y plane. After an `import turtle`, give it the command `turtle.forward(15)`, and it moves (on-screen!) 15 pixels in the direction it is facing, drawing a line as it moves. Give it the command `turtle.right(25)`, and it rotates in-place 25 degrees clockwise. By combining these and similar commands, intricate shapes and pictures can easily be drawn. The [`turtle`](#module-turtle "turtle: An educational framework for simple graphics applications") module is an extended reimplementation of the same-named module from the Python standard distribution up to version Python 2.5. It tries to keep the merits of the old turtle module and to be (nearly) 100% compatible with it. Above all, this means enabling the learning programmer to use all the commands, classes and methods interactively when using the module from within IDLE run with the `-n` switch. The turtle module provides turtle graphics primitives, in both object-oriented and procedure-oriented ways. Because it uses [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") for the underlying graphics, it needs a version of Python installed with Tk support. The object-oriented interface uses essentially two+two classes: 1. The [`TurtleScreen`](#turtle.TurtleScreen "turtle.TurtleScreen") class defines graphics windows as a playground for the drawing turtles. Its constructor needs a `tkinter.Canvas` or a [`ScrolledCanvas`](#turtle.ScrolledCanvas "turtle.ScrolledCanvas") as argument. It should be used when [`turtle`](#module-turtle "turtle: An educational framework for simple graphics applications") is used as part of some application. The function [`Screen()`](#turtle.Screen "turtle.Screen") returns a singleton object of a [`TurtleScreen`](#turtle.TurtleScreen "turtle.TurtleScreen") subclass. This function should be used when [`turtle`](#module-turtle "turtle: An educational framework for simple graphics applications") is used as a standalone tool for doing graphics. Because the returned object is a singleton, inheriting from its class is not possible. All methods of TurtleScreen/Screen also exist as functions, i.e. as part of the procedure-oriented interface. 2. [`RawTurtle`](#turtle.RawTurtle "turtle.RawTurtle") (alias: [`RawPen`](#turtle.RawPen "turtle.RawPen")) defines Turtle objects which draw on a [`TurtleScreen`](#turtle.TurtleScreen "turtle.TurtleScreen"). Its constructor needs a Canvas, ScrolledCanvas or TurtleScreen as argument, so the RawTurtle objects know where to draw. Derived from RawTurtle is the subclass [`Turtle`](#turtle.Turtle "turtle.Turtle") (alias: `Pen`), which draws on “the” [`Screen`](#turtle.Screen "turtle.Screen") instance which is automatically created, if not already present. All methods of RawTurtle/Turtle also exist as functions, i.e. as part of the procedure-oriented interface. The procedural interface provides functions which are derived from the methods of the classes [`Screen`](#turtle.Screen "turtle.Screen") and [`Turtle`](#turtle.Turtle "turtle.Turtle"). They have the same names as the corresponding methods. 
A screen object is automatically created whenever a function derived from a Screen method is called. An (unnamed) turtle object is automatically created whenever any of the functions derived from a Turtle method is called. To use multiple turtles on a screen one has to use the object-oriented interface. Note In the following documentation the argument list for functions is given. Methods, of course, have the additional first argument *self* which is omitted here. Overview of available Turtle and Screen methods ----------------------------------------------- ### Turtle methods Turtle motion Move and draw Tell Turtle’s state Setting and measurement Pen control Drawing state Color control Filling More drawing control Turtle state Visibility Appearance Using events Special Turtle methods ### Methods of TurtleScreen/Screen Window control Animation control Using screen events Settings and special methods Input methods Methods specific to Screen Methods of RawTurtle/Turtle and corresponding functions ------------------------------------------------------- Most of the examples in this section refer to a Turtle instance called `turtle`. ### Turtle motion `turtle.forward(distance)` `turtle.fd(distance)` Parameters **distance** – a number (integer or float) Move the turtle forward by the specified *distance*, in the direction the turtle is headed. ``` >>> turtle.position() (0.00,0.00) >>> turtle.forward(25) >>> turtle.position() (25.00,0.00) >>> turtle.forward(-75) >>> turtle.position() (-50.00,0.00) ``` `turtle.back(distance)` `turtle.bk(distance)` `turtle.backward(distance)` Parameters **distance** – a number Move the turtle backward by *distance*, opposite to the direction the turtle is headed. Do not change the turtle’s heading. ``` >>> turtle.position() (0.00,0.00) >>> turtle.backward(30) >>> turtle.position() (-30.00,0.00) ``` `turtle.right(angle)` `turtle.rt(angle)` Parameters **angle** – a number (integer or float) Turn turtle right by *angle* units. (Units are by default degrees, but can be set via the [`degrees()`](#turtle.degrees "turtle.degrees") and [`radians()`](#turtle.radians "turtle.radians") functions.) Angle orientation depends on the turtle mode, see [`mode()`](#turtle.mode "turtle.mode"). ``` >>> turtle.heading() 22.0 >>> turtle.right(45) >>> turtle.heading() 337.0 ``` `turtle.left(angle)` `turtle.lt(angle)` Parameters **angle** – a number (integer or float) Turn turtle left by *angle* units. (Units are by default degrees, but can be set via the [`degrees()`](#turtle.degrees "turtle.degrees") and [`radians()`](#turtle.radians "turtle.radians") functions.) Angle orientation depends on the turtle mode, see [`mode()`](#turtle.mode "turtle.mode"). ``` >>> turtle.heading() 22.0 >>> turtle.left(45) >>> turtle.heading() 67.0 ``` `turtle.goto(x, y=None)` `turtle.setpos(x, y=None)` `turtle.setposition(x, y=None)` Parameters * **x** – a number or a pair/vector of numbers * **y** – a number or `None` If *y* is `None`, *x* must be a pair of coordinates or a [`Vec2D`](#turtle.Vec2D "turtle.Vec2D") (e.g. as returned by [`pos()`](#turtle.pos "turtle.pos")). Move turtle to an absolute position. If the pen is down, draw line. Do not change the turtle’s orientation. 
``` >>> tp = turtle.pos() >>> tp (0.00,0.00) >>> turtle.setpos(60,30) >>> turtle.pos() (60.00,30.00) >>> turtle.setpos((20,80)) >>> turtle.pos() (20.00,80.00) >>> turtle.setpos(tp) >>> turtle.pos() (0.00,0.00) ``` `turtle.setx(x)` Parameters **x** – a number (integer or float) Set the turtle’s first coordinate to *x*, leave second coordinate unchanged. ``` >>> turtle.position() (0.00,240.00) >>> turtle.setx(10) >>> turtle.position() (10.00,240.00) ``` `turtle.sety(y)` Parameters **y** – a number (integer or float) Set the turtle’s second coordinate to *y*, leave first coordinate unchanged. ``` >>> turtle.position() (0.00,40.00) >>> turtle.sety(-10) >>> turtle.position() (0.00,-10.00) ``` `turtle.setheading(to_angle)` `turtle.seth(to_angle)` Parameters **to\_angle** – a number (integer or float) Set the orientation of the turtle to *to\_angle*. Here are some common directions in degrees: | standard mode | logo mode | | --- | --- | | 0 - east | 0 - north | | 90 - north | 90 - east | | 180 - west | 180 - south | | 270 - south | 270 - west | ``` >>> turtle.setheading(90) >>> turtle.heading() 90.0 ``` `turtle.home()` Move turtle to the origin – coordinates (0,0) – and set its heading to its start-orientation (which depends on the mode, see [`mode()`](#turtle.mode "turtle.mode")). ``` >>> turtle.heading() 90.0 >>> turtle.position() (0.00,-10.00) >>> turtle.home() >>> turtle.position() (0.00,0.00) >>> turtle.heading() 0.0 ``` `turtle.circle(radius, extent=None, steps=None)` Parameters * **radius** – a number * **extent** – a number (or `None`) * **steps** – an integer (or `None`) Draw a circle with given *radius*. The center is *radius* units left of the turtle; *extent* – an angle – determines which part of the circle is drawn. If *extent* is not given, draw the entire circle. If *extent* is not a full circle, one endpoint of the arc is the current pen position. Draw the arc in counterclockwise direction if *radius* is positive, otherwise in clockwise direction. Finally the direction of the turtle is changed by the amount of *extent*. As the circle is approximated by an inscribed regular polygon, *steps* determines the number of steps to use. If not given, it will be calculated automatically. May be used to draw regular polygons. ``` >>> turtle.home() >>> turtle.position() (0.00,0.00) >>> turtle.heading() 0.0 >>> turtle.circle(50) >>> turtle.position() (-0.00,0.00) >>> turtle.heading() 0.0 >>> turtle.circle(120, 180) # draw a semicircle >>> turtle.position() (0.00,240.00) >>> turtle.heading() 180.0 ``` `turtle.dot(size=None, *color)` Parameters * **size** – an integer >= 1 (if given) * **color** – a colorstring or a numeric color tuple Draw a circular dot with diameter *size*, using *color*. If *size* is not given, the maximum of pensize+4 and 2\*pensize is used. ``` >>> turtle.home() >>> turtle.dot() >>> turtle.fd(50); turtle.dot(20, "blue"); turtle.fd(50) >>> turtle.position() (100.00,-0.00) >>> turtle.heading() 0.0 ``` `turtle.stamp()` Stamp a copy of the turtle shape onto the canvas at the current turtle position. Return a stamp\_id for that stamp, which can be used to delete it by calling `clearstamp(stamp_id)`. ``` >>> turtle.color("blue") >>> turtle.stamp() 11 >>> turtle.fd(50) ``` `turtle.clearstamp(stampid)` Parameters **stampid** – an integer, must be return value of previous [`stamp()`](#turtle.stamp "turtle.stamp") call Delete stamp with given *stampid*. 
``` >>> turtle.position() (150.00,-0.00) >>> turtle.color("blue") >>> astamp = turtle.stamp() >>> turtle.fd(50) >>> turtle.position() (200.00,-0.00) >>> turtle.clearstamp(astamp) >>> turtle.position() (200.00,-0.00) ``` `turtle.clearstamps(n=None)` Parameters **n** – an integer (or `None`) Delete all or the first/last *n* of the turtle’s stamps. If *n* is `None`, delete all stamps, if *n* > 0 delete the first *n* stamps, else if *n* < 0 delete the last *n* stamps. ``` >>> for i in range(8): ... turtle.stamp(); turtle.fd(30) 13 14 15 16 17 18 19 20 >>> turtle.clearstamps(2) >>> turtle.clearstamps(-2) >>> turtle.clearstamps() ``` `turtle.undo()` Undo (repeatedly) the last turtle action(s). The number of available undo actions is determined by the size of the undobuffer. ``` >>> for i in range(4): ... turtle.fd(50); turtle.lt(80) ... >>> for i in range(8): ... turtle.undo() ``` `turtle.speed(speed=None)` Parameters **speed** – an integer in the range 0..10 or a speedstring (see below) Set the turtle’s speed to an integer value in the range 0..10. If no argument is given, return current speed. If input is a number greater than 10 or smaller than 0.5, speed is set to 0. Speedstrings are mapped to speedvalues as follows: * “fastest”: 0 * “fast”: 10 * “normal”: 6 * “slow”: 3 * “slowest”: 1 Speeds from 1 to 10 enforce increasingly faster animation of line drawing and turtle turning. Attention: *speed* = 0 means that *no* animation takes place. forward/back make the turtle jump, and likewise left/right make the turtle turn instantly. ``` >>> turtle.speed() 3 >>> turtle.speed('normal') >>> turtle.speed() 6 >>> turtle.speed(9) >>> turtle.speed() 9 ``` ### Tell Turtle’s state `turtle.position()` `turtle.pos()` Return the turtle’s current location (x,y) (as a [`Vec2D`](#turtle.Vec2D "turtle.Vec2D") vector). ``` >>> turtle.pos() (440.00,-0.00) ``` `turtle.towards(x, y=None)` Parameters * **x** – a number or a pair/vector of numbers or a turtle instance * **y** – a number if *x* is a number, else `None` Return the angle of the line from the turtle’s position to the position specified by (x,y), the vector, or the other turtle. This depends on the turtle’s start orientation, which depends on the mode - “standard”/“world” or “logo”. ``` >>> turtle.goto(10, 10) >>> turtle.towards(0,0) 225.0 ``` `turtle.xcor()` Return the turtle’s x coordinate. ``` >>> turtle.home() >>> turtle.left(50) >>> turtle.forward(100) >>> turtle.pos() (64.28,76.60) >>> print(round(turtle.xcor(), 5)) 64.27876 ``` `turtle.ycor()` Return the turtle’s y coordinate. ``` >>> turtle.home() >>> turtle.left(60) >>> turtle.forward(100) >>> print(turtle.pos()) (50.00,86.60) >>> print(round(turtle.ycor(), 5)) 86.60254 ``` `turtle.heading()` Return the turtle’s current heading (value depends on the turtle mode, see [`mode()`](#turtle.mode "turtle.mode")). ``` >>> turtle.home() >>> turtle.left(67) >>> turtle.heading() 67.0 ``` `turtle.distance(x, y=None)` Parameters * **x** – a number or a pair/vector of numbers or a turtle instance * **y** – a number if *x* is a number, else `None` Return the distance from the turtle to (x,y), the given vector, or the given other turtle, in turtle step units. ``` >>> turtle.home() >>> turtle.distance(30,40) 50.0 >>> turtle.distance((30,40)) 50.0 >>> joe = Turtle() >>> joe.forward(77) >>> turtle.distance(joe) 77.0 ``` ### Settings for measurement `turtle.degrees(fullcircle=360.0)` Parameters **fullcircle** – a number Set angle measurement units, i.e. set the number of “degrees” for a full circle. Default value is 360 degrees. 
``` >>> turtle.home() >>> turtle.left(90) >>> turtle.heading() 90.0 Change angle measurement unit to grad (also known as gon, grade, or gradian and equals 1/100-th of the right angle.) >>> turtle.degrees(400.0) >>> turtle.heading() 100.0 >>> turtle.degrees(360) >>> turtle.heading() 90.0 ``` `turtle.radians()` Set the angle measurement units to radians. Equivalent to `degrees(2*math.pi)`. ``` >>> turtle.home() >>> turtle.left(90) >>> turtle.heading() 90.0 >>> turtle.radians() >>> turtle.heading() 1.5707963267948966 ``` ### Pen control #### Drawing state `turtle.pendown()` `turtle.pd()` `turtle.down()` Pull the pen down – drawing when moving. `turtle.penup()` `turtle.pu()` `turtle.up()` Pull the pen up – no drawing when moving. `turtle.pensize(width=None)` `turtle.width(width=None)` Parameters **width** – a positive number Set the line thickness to *width* or return it. If resizemode is set to “auto” and turtleshape is a polygon, that polygon is drawn with the same line thickness. If no argument is given, the current pensize is returned. ``` >>> turtle.pensize() 1 >>> turtle.pensize(10) # from here on lines of width 10 are drawn ``` `turtle.pen(pen=None, **pendict)` Parameters * **pen** – a dictionary with some or all of the below listed keys * **pendict** – one or more keyword-arguments with the below listed keys as keywords Return or set the pen’s attributes in a “pen-dictionary” with the following key/value pairs: * “shown”: True/False * “pendown”: True/False * “pencolor”: color-string or color-tuple * “fillcolor”: color-string or color-tuple * “pensize”: positive number * “speed”: number in range 0..10 * “resizemode”: “auto” or “user” or “noresize” * “stretchfactor”: (positive number, positive number) * “outline”: positive number * “tilt”: number This dictionary can be used as argument for a subsequent call to [`pen()`](#turtle.pen "turtle.pen") to restore the former pen-state. Moreover one or more of these attributes can be provided as keyword-arguments. This can be used to set several pen attributes in one statement. ``` >>> turtle.pen(fillcolor="black", pencolor="red", pensize=10) >>> sorted(turtle.pen().items()) [('fillcolor', 'black'), ('outline', 1), ('pencolor', 'red'), ('pendown', True), ('pensize', 10), ('resizemode', 'noresize'), ('shearfactor', 0.0), ('shown', True), ('speed', 9), ('stretchfactor', (1.0, 1.0)), ('tilt', 0.0)] >>> penstate=turtle.pen() >>> turtle.color("yellow", "") >>> turtle.penup() >>> sorted(turtle.pen().items())[:3] [('fillcolor', ''), ('outline', 1), ('pencolor', 'yellow')] >>> turtle.pen(penstate, fillcolor="green") >>> sorted(turtle.pen().items())[:3] [('fillcolor', 'green'), ('outline', 1), ('pencolor', 'red')] ``` `turtle.isdown()` Return `True` if pen is down, `False` if it’s up. ``` >>> turtle.penup() >>> turtle.isdown() False >>> turtle.pendown() >>> turtle.isdown() True ``` #### Color control `turtle.pencolor(*args)` Return or set the pencolor. Four input formats are allowed: `pencolor()` Return the current pencolor as color specification string or as a tuple (see example). May be used as input to another color/pencolor/fillcolor call. `pencolor(colorstring)` Set pencolor to *colorstring*, which is a Tk color specification string, such as `"red"`, `"yellow"`, or `"#33cc8c"`. `pencolor((r, g, b))` Set pencolor to the RGB color represented by the tuple of *r*, *g*, and *b*. Each of *r*, *g*, and *b* must be in the range 0..colormode, where colormode is either 1.0 or 255 (see [`colormode()`](#turtle.colormode "turtle.colormode")). 
`pencolor(r, g, b)` Set pencolor to the RGB color represented by *r*, *g*, and *b*. Each of *r*, *g*, and *b* must be in the range 0..colormode. If turtleshape is a polygon, the outline of that polygon is drawn with the newly set pencolor. ``` >>> colormode() 1.0 >>> turtle.pencolor() 'red' >>> turtle.pencolor("brown") >>> turtle.pencolor() 'brown' >>> tup = (0.2, 0.8, 0.55) >>> turtle.pencolor(tup) >>> turtle.pencolor() (0.2, 0.8, 0.5490196078431373) >>> colormode(255) >>> turtle.pencolor() (51.0, 204.0, 140.0) >>> turtle.pencolor('#32c18f') >>> turtle.pencolor() (50.0, 193.0, 143.0) ``` `turtle.fillcolor(*args)` Return or set the fillcolor. Four input formats are allowed: `fillcolor()` Return the current fillcolor as color specification string, possibly in tuple format (see example). May be used as input to another color/pencolor/fillcolor call. `fillcolor(colorstring)` Set fillcolor to *colorstring*, which is a Tk color specification string, such as `"red"`, `"yellow"`, or `"#33cc8c"`. `fillcolor((r, g, b))` Set fillcolor to the RGB color represented by the tuple of *r*, *g*, and *b*. Each of *r*, *g*, and *b* must be in the range 0..colormode, where colormode is either 1.0 or 255 (see [`colormode()`](#turtle.colormode "turtle.colormode")). `fillcolor(r, g, b)` Set fillcolor to the RGB color represented by *r*, *g*, and *b*. Each of *r*, *g*, and *b* must be in the range 0..colormode. If turtleshape is a polygon, the interior of that polygon is drawn with the newly set fillcolor. ``` >>> turtle.fillcolor("violet") >>> turtle.fillcolor() 'violet' >>> turtle.pencolor() (50.0, 193.0, 143.0) >>> turtle.fillcolor((50, 193, 143)) # Integers, not floats >>> turtle.fillcolor() (50.0, 193.0, 143.0) >>> turtle.fillcolor('#ffffff') >>> turtle.fillcolor() (255.0, 255.0, 255.0) ``` `turtle.color(*args)` Return or set pencolor and fillcolor. Several input formats are allowed. They use 0 to 3 arguments as follows: `color()` Return the current pencolor and the current fillcolor as a pair of color specification strings or tuples as returned by [`pencolor()`](#turtle.pencolor "turtle.pencolor") and [`fillcolor()`](#turtle.fillcolor "turtle.fillcolor"). `color(colorstring), color((r,g,b)), color(r,g,b)` Inputs as in [`pencolor()`](#turtle.pencolor "turtle.pencolor"), set both, fillcolor and pencolor, to the given value. `color(colorstring1, colorstring2), color((r1,g1,b1), (r2,g2,b2))` Equivalent to `pencolor(colorstring1)` and `fillcolor(colorstring2)` and analogously if the other input format is used. If turtleshape is a polygon, outline and interior of that polygon is drawn with the newly set colors. ``` >>> turtle.color("red", "green") >>> turtle.color() ('red', 'green') >>> color("#285078", "#a0c8f0") >>> color() ((40.0, 80.0, 120.0), (160.0, 200.0, 240.0)) ``` See also: Screen method [`colormode()`](#turtle.colormode "turtle.colormode"). #### Filling `turtle.filling()` Return fillstate (`True` if filling, `False` else). ``` >>> turtle.begin_fill() >>> if turtle.filling(): ... turtle.pensize(5) ... else: ... turtle.pensize(3) ``` `turtle.begin_fill()` To be called just before drawing a shape to be filled. `turtle.end_fill()` Fill the shape drawn after the last call to [`begin_fill()`](#turtle.begin_fill "turtle.begin_fill"). Whether or not overlap regions for self-intersecting polygons or multiple shapes are filled depends on the operating system graphics, type of overlap, and number of overlaps. For example, the Turtle star above may be either all yellow or have some white regions. 
``` >>> turtle.color("black", "red") >>> turtle.begin_fill() >>> turtle.circle(80) >>> turtle.end_fill() ``` #### More drawing control `turtle.reset()` Delete the turtle’s drawings from the screen, re-center the turtle and set variables to the default values. ``` >>> turtle.goto(0,-22) >>> turtle.left(100) >>> turtle.position() (0.00,-22.00) >>> turtle.heading() 100.0 >>> turtle.reset() >>> turtle.position() (0.00,0.00) >>> turtle.heading() 0.0 ``` `turtle.clear()` Delete the turtle’s drawings from the screen. Do not move turtle. State and position of the turtle as well as drawings of other turtles are not affected. `turtle.write(arg, move=False, align="left", font=("Arial", 8, "normal"))` Parameters * **arg** – object to be written to the TurtleScreen * **move** – True/False * **align** – one of the strings “left”, “center” or “right” * **font** – a triple (fontname, fontsize, fonttype) Write text - the string representation of *arg* - at the current turtle position according to *align* (“left”, “center” or “right”) and with the given font. If *move* is true, the pen is moved to the bottom-right corner of the text. By default, *move* is `False`. ``` >>> turtle.write("Home = ", True, align="center") >>> turtle.write((0,0), True) ``` ### Turtle state #### Visibility `turtle.hideturtle()` `turtle.ht()` Make the turtle invisible. It’s a good idea to do this while you’re in the middle of doing some complex drawing, because hiding the turtle speeds up the drawing observably. ``` >>> turtle.hideturtle() ``` `turtle.showturtle()` `turtle.st()` Make the turtle visible. ``` >>> turtle.showturtle() ``` `turtle.isvisible()` Return `True` if the Turtle is shown, `False` if it’s hidden. ``` >>> turtle.hideturtle() >>> turtle.isvisible() False >>> turtle.showturtle() >>> turtle.isvisible() True ``` #### Appearance `turtle.shape(name=None)` Parameters **name** – a string which is a valid shapename Set turtle shape to shape with given *name* or, if name is not given, return name of current shape. Shape with *name* must exist in the TurtleScreen’s shape dictionary. Initially there are the following polygon shapes: “arrow”, “turtle”, “circle”, “square”, “triangle”, “classic”. To learn about how to deal with shapes, see the Screen method [`register_shape()`](#turtle.register_shape "turtle.register_shape"). ``` >>> turtle.shape() 'classic' >>> turtle.shape("turtle") >>> turtle.shape() 'turtle' ``` `turtle.resizemode(rmode=None)` Parameters **rmode** – one of the strings “auto”, “user”, “noresize” Set resizemode to one of the values: “auto”, “user”, “noresize”. If *rmode* is not given, return current resizemode. Different resizemodes have the following effects: * “auto”: adapts the appearance of the turtle corresponding to the value of pensize. * “user”: adapts the appearance of the turtle according to the values of stretchfactor and outlinewidth (outline), which are set by [`shapesize()`](#turtle.shapesize "turtle.shapesize"). * “noresize”: no adaptation of the turtle’s appearance takes place. `resizemode("user")` is called by [`shapesize()`](#turtle.shapesize "turtle.shapesize") when used with arguments. 
``` >>> turtle.resizemode() 'noresize' >>> turtle.resizemode("auto") >>> turtle.resizemode() 'auto' ``` `turtle.shapesize(stretch_wid=None, stretch_len=None, outline=None)` `turtle.turtlesize(stretch_wid=None, stretch_len=None, outline=None)` Parameters * **stretch\_wid** – positive number * **stretch\_len** – positive number * **outline** – positive number Return or set the pen’s attributes x/y-stretchfactors and/or outline. Set resizemode to “user”. If and only if resizemode is set to “user”, the turtle will be displayed stretched according to its stretchfactors: *stretch\_wid* is stretchfactor perpendicular to its orientation, *stretch\_len* is stretchfactor in direction of its orientation, *outline* determines the width of the shape’s outline. ``` >>> turtle.shapesize() (1.0, 1.0, 1) >>> turtle.resizemode("user") >>> turtle.shapesize(5, 5, 12) >>> turtle.shapesize() (5, 5, 12) >>> turtle.shapesize(outline=8) >>> turtle.shapesize() (5, 5, 8) ``` `turtle.shearfactor(shear=None)` Parameters **shear** – number (optional) Set or return the current shearfactor. Shear the turtleshape according to the given shearfactor shear, which is the tangent of the shear angle. Do *not* change the turtle’s heading (direction of movement). If shear is not given: return the current shearfactor, i.e. the tangent of the shear angle, by which lines parallel to the heading of the turtle are sheared. ``` >>> turtle.shape("circle") >>> turtle.shapesize(5,2) >>> turtle.shearfactor(0.5) >>> turtle.shearfactor() 0.5 ``` `turtle.tilt(angle)` Parameters **angle** – a number Rotate the turtleshape by *angle* from its current tilt-angle, but do *not* change the turtle’s heading (direction of movement). ``` >>> turtle.reset() >>> turtle.shape("circle") >>> turtle.shapesize(5,2) >>> turtle.tilt(30) >>> turtle.fd(50) >>> turtle.tilt(30) >>> turtle.fd(50) ``` `turtle.settiltangle(angle)` Parameters **angle** – a number Rotate the turtleshape to point in the direction specified by *angle*, regardless of its current tilt-angle. *Do not* change the turtle’s heading (direction of movement). ``` >>> turtle.reset() >>> turtle.shape("circle") >>> turtle.shapesize(5,2) >>> turtle.settiltangle(45) >>> turtle.fd(50) >>> turtle.settiltangle(-45) >>> turtle.fd(50) ``` Deprecated since version 3.1. `turtle.tiltangle(angle=None)` Parameters **angle** – a number (optional) Set or return the current tilt-angle. If angle is given, rotate the turtleshape to point in the direction specified by angle, regardless of its current tilt-angle. Do *not* change the turtle’s heading (direction of movement). If angle is not given: return the current tilt-angle, i.e. the angle between the orientation of the turtleshape and the heading of the turtle (its direction of movement). ``` >>> turtle.reset() >>> turtle.shape("circle") >>> turtle.shapesize(5,2) >>> turtle.tilt(45) >>> turtle.tiltangle() 45.0 ``` `turtle.shapetransform(t11=None, t12=None, t21=None, t22=None)` Parameters * **t11** – a number (optional) * **t12** – a number (optional) * **t21** – a number (optional) * **t22** – a number (optional) Set or return the current transformation matrix of the turtle shape. If none of the matrix elements are given, return the transformation matrix as a tuple of 4 elements. Otherwise set the given elements and transform the turtleshape according to the matrix consisting of first row t11, t12 and second row t21, t22. The determinant t11 \* t22 - t12 \* t21 must not be zero, otherwise an error is raised. 
Modify stretchfactor, shearfactor and tiltangle according to the given matrix. ``` >>> turtle = Turtle() >>> turtle.shape("square") >>> turtle.shapesize(4,2) >>> turtle.shearfactor(-0.5) >>> turtle.shapetransform() (4.0, -1.0, -0.0, 2.0) ``` `turtle.get_shapepoly()` Return the current shape polygon as tuple of coordinate pairs. This can be used to define a new shape or components of a compound shape. ``` >>> turtle.shape("square") >>> turtle.shapetransform(4, -1, 0, 2) >>> turtle.get_shapepoly() ((50, -20), (30, 20), (-50, 20), (-30, -20)) ``` ### Using events `turtle.onclick(fun, btn=1, add=None)` Parameters * **fun** – a function with two arguments which will be called with the coordinates of the clicked point on the canvas * **btn** – number of the mouse-button, defaults to 1 (left mouse button) * **add** – `True` or `False` – if `True`, a new binding will be added, otherwise it will replace a former binding Bind *fun* to mouse-click events on this turtle. If *fun* is `None`, existing bindings are removed. Example for the anonymous turtle, i.e. the procedural way: ``` >>> def turn(x, y): ... left(180) ... >>> onclick(turn) # Now clicking into the turtle will turn it. >>> onclick(None) # event-binding will be removed ``` `turtle.onrelease(fun, btn=1, add=None)` Parameters * **fun** – a function with two arguments which will be called with the coordinates of the clicked point on the canvas * **btn** – number of the mouse-button, defaults to 1 (left mouse button) * **add** – `True` or `False` – if `True`, a new binding will be added, otherwise it will replace a former binding Bind *fun* to mouse-button-release events on this turtle. If *fun* is `None`, existing bindings are removed. ``` >>> class MyTurtle(Turtle): ... def glow(self,x,y): ... self.fillcolor("red") ... def unglow(self,x,y): ... self.fillcolor("") ... >>> turtle = MyTurtle() >>> turtle.onclick(turtle.glow) # clicking on turtle turns fillcolor red, >>> turtle.onrelease(turtle.unglow) # releasing turns it to transparent. ``` `turtle.ondrag(fun, btn=1, add=None)` Parameters * **fun** – a function with two arguments which will be called with the coordinates of the clicked point on the canvas * **btn** – number of the mouse-button, defaults to 1 (left mouse button) * **add** – `True` or `False` – if `True`, a new binding will be added, otherwise it will replace a former binding Bind *fun* to mouse-move events on this turtle. If *fun* is `None`, existing bindings are removed. Remark: Every sequence of mouse-move-events on a turtle is preceded by a mouse-click event on that turtle. ``` >>> turtle.ondrag(turtle.goto) ``` Subsequently, clicking and dragging the Turtle will move it across the screen thereby producing handdrawings (if pen is down). ### Special Turtle methods `turtle.begin_poly()` Start recording the vertices of a polygon. Current turtle position is first vertex of polygon. `turtle.end_poly()` Stop recording the vertices of a polygon. Current turtle position is last vertex of polygon. This will be connected with the first vertex. `turtle.get_poly()` Return the last recorded polygon. ``` >>> turtle.home() >>> turtle.begin_poly() >>> turtle.fd(100) >>> turtle.left(20) >>> turtle.fd(30) >>> turtle.left(60) >>> turtle.fd(50) >>> turtle.end_poly() >>> p = turtle.get_poly() >>> register_shape("myFavouriteShape", p) ``` `turtle.clone()` Create and return a clone of the turtle with same position, heading and turtle properties. 
```
>>> mick = Turtle()
>>> joe = mick.clone()
```

`turtle.getturtle()`

`turtle.getpen()`

Return the Turtle object itself. Only reasonable use: as a function to return the "anonymous turtle":

```
>>> pet = getturtle()
>>> pet.fd(50)
>>> pet
<turtle.Turtle object at 0x...>
```

`turtle.getscreen()`

Return the [`TurtleScreen`](#turtle.TurtleScreen "turtle.TurtleScreen") object the turtle is drawing on. TurtleScreen methods can then be called for that object.

```
>>> ts = turtle.getscreen()
>>> ts
<turtle._Screen object at 0x...>
>>> ts.bgcolor("pink")
```

`turtle.setundobuffer(size)`

Parameters

**size** – an integer or `None`

Set or disable undobuffer. If *size* is an integer, an empty undobuffer of given size is installed. *size* gives the maximum number of turtle actions that can be undone by the [`undo()`](#turtle.undo "turtle.undo") method/function. If *size* is `None`, the undobuffer is disabled.

```
>>> turtle.setundobuffer(42)
```

`turtle.undobufferentries()`

Return number of entries in the undobuffer.

```
>>> while undobufferentries():
...     undo()
```

### Compound shapes

To use compound turtle shapes, which consist of several polygons of different color, you must use the helper class [`Shape`](#turtle.Shape "turtle.Shape") explicitly as described below:

1. Create an empty Shape object of type "compound".
2. Add as many components to this object as desired, using the `addcomponent()` method. For example:

```
>>> s = Shape("compound")
>>> poly1 = ((0,0),(10,-5),(0,10),(-10,-5))
>>> s.addcomponent(poly1, "red", "blue")
>>> poly2 = ((0,0),(10,-5),(-10,-5))
>>> s.addcomponent(poly2, "blue", "red")
```

3. Now add the Shape to the Screen's shapelist and use it:

```
>>> register_shape("myshape", s)
>>> shape("myshape")
```

Note

The [`Shape`](#turtle.Shape "turtle.Shape") class is used internally by the [`register_shape()`](#turtle.register_shape "turtle.register_shape") method in different ways. The application programmer has to deal with the Shape class *only* when using compound shapes as shown above!

Methods of TurtleScreen/Screen and corresponding functions
----------------------------------------------------------

Most of the examples in this section refer to a TurtleScreen instance called `screen`.

### Window control

`turtle.bgcolor(*args)`

Parameters

**args** – a color string or three numbers in the range 0..colormode or a 3-tuple of such numbers

Set or return background color of the TurtleScreen.

```
>>> screen.bgcolor("orange")
>>> screen.bgcolor()
'orange'
>>> screen.bgcolor("#800080")
>>> screen.bgcolor()
(128.0, 0.0, 128.0)
```

`turtle.bgpic(picname=None)`

Parameters

**picname** – a string, name of a gif-file or `"nopic"`, or `None`

Set background image or return name of current backgroundimage. If *picname* is a filename, set the corresponding image as background. If *picname* is `"nopic"`, delete background image, if present. If *picname* is `None`, return the filename of the current backgroundimage.

```
>>> screen.bgpic()
'nopic'
>>> screen.bgpic("landscape.gif")
>>> screen.bgpic()
'landscape.gif'
```

`turtle.clear()`

Note

This TurtleScreen method is available as a global function only under the name `clearscreen`. The global function `clear` is a different one derived from the Turtle method `clear`.

`turtle.clearscreen()`

Delete all drawings and all turtles from the TurtleScreen. Reset the now empty TurtleScreen to its initial state: white background, no background image, no event bindings and tracing on.
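For instance, the window-control calls above can be combined like this (a minimal sketch, assuming the usual `screen` and `turtle` instances used throughout this section):

```
>>> screen.bgcolor("lightblue")   # tint the background
>>> turtle.circle(40)             # draw something on it
>>> screen.clearscreen()          # back to a white, empty screen in its initial state
```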
`turtle.reset()` Note This TurtleScreen method is available as a global function only under the name `resetscreen`. The global function `reset` is another one derived from the Turtle method `reset`. `turtle.resetscreen()` Reset all Turtles on the Screen to their initial state. `turtle.screensize(canvwidth=None, canvheight=None, bg=None)` Parameters * **canvwidth** – positive integer, new width of canvas in pixels * **canvheight** – positive integer, new height of canvas in pixels * **bg** – colorstring or color-tuple, new background color If no arguments are given, return current (canvaswidth, canvasheight). Else resize the canvas the turtles are drawing on. Do not alter the drawing window. To observe hidden parts of the canvas, use the scrollbars. With this method, one can make visible those parts of a drawing which were outside the canvas before. ``` >>> screen.screensize() (400, 300) >>> screen.screensize(2000,1500) >>> screen.screensize() (2000, 1500) ``` e.g. to search for an erroneously escaped turtle ;-) `turtle.setworldcoordinates(llx, lly, urx, ury)` Parameters * **llx** – a number, x-coordinate of lower left corner of canvas * **lly** – a number, y-coordinate of lower left corner of canvas * **urx** – a number, x-coordinate of upper right corner of canvas * **ury** – a number, y-coordinate of upper right corner of canvas Set up user-defined coordinate system and switch to mode “world” if necessary. This performs a `screen.reset()`. If mode “world” is already active, all drawings are redrawn according to the new coordinates. **ATTENTION**: in user-defined coordinate systems angles may appear distorted. ``` >>> screen.reset() >>> screen.setworldcoordinates(-50,-7.5,50,7.5) >>> for _ in range(72): ... left(10) ... >>> for _ in range(8): ... left(45); fd(2) # a regular octagon ``` ### Animation control `turtle.delay(delay=None)` Parameters **delay** – positive integer Set or return the drawing *delay* in milliseconds. (This is approximately the time interval between two consecutive canvas updates.) The longer the drawing delay, the slower the animation. Optional argument: ``` >>> screen.delay() 10 >>> screen.delay(5) >>> screen.delay() 5 ``` `turtle.tracer(n=None, delay=None)` Parameters * **n** – nonnegative integer * **delay** – nonnegative integer Turn turtle animation on/off and set delay for update drawings. If *n* is given, only each n-th regular screen update is really performed. (Can be used to accelerate the drawing of complex graphics.) When called without arguments, returns the currently stored value of n. Second argument sets delay value (see [`delay()`](#turtle.delay "turtle.delay")). ``` >>> screen.tracer(8, 25) >>> dist = 2 >>> for i in range(200): ... fd(dist) ... rt(90) ... dist += 2 ``` `turtle.update()` Perform a TurtleScreen update. To be used when tracer is turned off. See also the RawTurtle/Turtle method [`speed()`](#turtle.speed "turtle.speed"). ### Using screen events `turtle.listen(xdummy=None, ydummy=None)` Set focus on TurtleScreen (in order to collect key-events). Dummy arguments are provided in order to be able to pass [`listen()`](#turtle.listen "turtle.listen") to the onclick method. `turtle.onkey(fun, key)` `turtle.onkeyrelease(fun, key)` Parameters * **fun** – a function with no arguments or `None` * **key** – a string: key (e.g. “a”) or key-symbol (e.g. “space”) Bind *fun* to key-release event of key. If *fun* is `None`, event bindings are removed. Remark: in order to be able to register key-events, TurtleScreen must have the focus. 
(See method [`listen()`](#turtle.listen "turtle.listen").)

```
>>> def f():
...     fd(50)
...     lt(60)
...
>>> screen.onkey(f, "Up")
>>> screen.listen()
```

`turtle.onkeypress(fun, key=None)`

Parameters

* **fun** – a function with no arguments or `None`
* **key** – a string: key (e.g. "a") or key-symbol (e.g. "space")

Bind *fun* to key-press event of key if key is given, or to any key-press-event if no key is given. Remark: in order to be able to register key-events, TurtleScreen must have focus. (See method [`listen()`](#turtle.listen "turtle.listen").)

```
>>> def f():
...     fd(50)
...
>>> screen.onkeypress(f, "Up")
>>> screen.listen()
```

`turtle.onclick(fun, btn=1, add=None)`

`turtle.onscreenclick(fun, btn=1, add=None)`

Parameters

* **fun** – a function with two arguments which will be called with the coordinates of the clicked point on the canvas
* **btn** – number of the mouse-button, defaults to 1 (left mouse button)
* **add** – `True` or `False` – if `True`, a new binding will be added, otherwise it will replace a former binding

Bind *fun* to mouse-click events on this screen. If *fun* is `None`, existing bindings are removed. Example for a TurtleScreen instance named `screen` and a Turtle instance named `turtle`:

```
>>> screen.onclick(turtle.goto) # Subsequently clicking into the TurtleScreen will
>>>                             # make the turtle move to the clicked point.
>>> screen.onclick(None)        # remove event binding again
```

Note

This TurtleScreen method is available as a global function only under the name `onscreenclick`. The global function `onclick` is another one derived from the Turtle method `onclick`.

`turtle.ontimer(fun, t=0)`

Parameters

* **fun** – a function with no arguments
* **t** – a number >= 0

Install a timer that calls *fun* after *t* milliseconds.

```
>>> running = True
>>> def f():
...     if running:
...         fd(50)
...         lt(60)
...         screen.ontimer(f, 250)
>>> f()   ### makes the turtle march around
>>> running = False
```

`turtle.mainloop()`

`turtle.done()`

Starts event loop - calling Tkinter's mainloop function. Must be the last statement in a turtle graphics program. Must *not* be used if a script is run from within IDLE in -n mode (No subprocess) - for interactive use of turtle graphics.

```
>>> screen.mainloop()
```

### Input methods

`turtle.textinput(title, prompt)`

Parameters

* **title** – string
* **prompt** – string

Pop up a dialog window for input of a string. Parameter title is the title of the dialog window, prompt is a text mostly describing what information to input. Return the string input. If the dialog is canceled, return `None`.

```
>>> screen.textinput("NIM", "Name of first player:")
```

`turtle.numinput(title, prompt, default=None, minval=None, maxval=None)`

Parameters

* **title** – string
* **prompt** – string
* **default** – number (optional)
* **minval** – number (optional)
* **maxval** – number (optional)

Pop up a dialog window for input of a number. title is the title of the dialog window, prompt is a text mostly describing what numerical information to input. default: default value, minval: minimum value for input, maxval: maximum value for input. The number input must be in the range minval .. maxval if these are given. If not, a hint is issued and the dialog remains open for correction. Return the number input. If the dialog is canceled, return `None`.
``` >>> screen.numinput("Poker", "Your stakes:", 1000, minval=10, maxval=10000) ``` ### Settings and special methods `turtle.mode(mode=None)` Parameters **mode** – one of the strings “standard”, “logo” or “world” Set turtle mode (“standard”, “logo” or “world”) and perform reset. If mode is not given, current mode is returned. Mode “standard” is compatible with old [`turtle`](#module-turtle "turtle: An educational framework for simple graphics applications"). Mode “logo” is compatible with most Logo turtle graphics. Mode “world” uses user-defined “world coordinates”. **Attention**: in this mode angles appear distorted if `x/y` unit-ratio doesn’t equal 1. | Mode | Initial turtle heading | positive angles | | --- | --- | --- | | “standard” | to the right (east) | counterclockwise | | “logo” | upward (north) | clockwise | ``` >>> mode("logo") # resets turtle heading to north >>> mode() 'logo' ``` `turtle.colormode(cmode=None)` Parameters **cmode** – one of the values 1.0 or 255 Return the colormode or set it to 1.0 or 255. Subsequently *r*, *g*, *b* values of color triples have to be in the range 0..*cmode*. ``` >>> screen.colormode(1) >>> turtle.pencolor(240, 160, 80) Traceback (most recent call last): ... TurtleGraphicsError: bad color sequence: (240, 160, 80) >>> screen.colormode() 1.0 >>> screen.colormode(255) >>> screen.colormode() 255 >>> turtle.pencolor(240,160,80) ``` `turtle.getcanvas()` Return the Canvas of this TurtleScreen. Useful for insiders who know what to do with a Tkinter Canvas. ``` >>> cv = screen.getcanvas() >>> cv <turtle.ScrolledCanvas object ...> ``` `turtle.getshapes()` Return a list of names of all currently available turtle shapes. ``` >>> screen.getshapes() ['arrow', 'blank', 'circle', ..., 'turtle'] ``` `turtle.register_shape(name, shape=None)` `turtle.addshape(name, shape=None)` There are three different ways to call this function: 1. *name* is the name of a gif-file and *shape* is `None`: Install the corresponding image shape. ``` >>> screen.register_shape("turtle.gif") ``` Note Image shapes *do not* rotate when turning the turtle, so they do not display the heading of the turtle! 2. *name* is an arbitrary string and *shape* is a tuple of pairs of coordinates: Install the corresponding polygon shape. ``` >>> screen.register_shape("triangle", ((5,-3), (0,5), (-5,-3))) ``` 3. *name* is an arbitrary string and shape is a (compound) [`Shape`](#turtle.Shape "turtle.Shape") object: Install the corresponding compound shape. Add a turtle shape to TurtleScreen’s shapelist. Only thusly registered shapes can be used by issuing the command `shape(shapename)`. `turtle.turtles()` Return the list of turtles on the screen. ``` >>> for turtle in screen.turtles(): ... turtle.color("red") ``` `turtle.window_height()` Return the height of the turtle window. ``` >>> screen.window_height() 480 ``` `turtle.window_width()` Return the width of the turtle window. ``` >>> screen.window_width() 640 ``` ### Methods specific to Screen, not inherited from TurtleScreen `turtle.bye()` Shut the turtlegraphics window. `turtle.exitonclick()` Bind `bye()` method to mouse clicks on the Screen. If the value “using\_IDLE” in the configuration dictionary is `False` (default value), also enter mainloop. Remark: If IDLE with the `-n` switch (no subprocess) is used, this value should be set to `True` in `turtle.cfg`. In this case IDLE’s own mainloop is active also for the client script. 
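As an illustration of these Screen-specific methods, a tiny standalone script (a sketch, not taken from the reference) typically ends with `exitonclick()` so the window stays open until it is clicked:

```
from turtle import Screen, Turtle

screen = Screen()
t = Turtle()
for _ in range(4):        # draw a square
    t.fd(100)
    t.lt(90)
screen.exitonclick()      # clicking the window calls bye() and closes it
```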
`turtle.setup(width=_CFG["width"], height=_CFG["height"], startx=_CFG["leftright"], starty=_CFG["topbottom"])`

Set the size and position of the main window. Default values of arguments are stored in the configuration dictionary and can be changed via a `turtle.cfg` file.

Parameters

* **width** – if an integer, a size in pixels, if a float, a fraction of the screen; default is 50% of screen
* **height** – if an integer, the height in pixels, if a float, a fraction of the screen; default is 75% of screen
* **startx** – if positive, starting position in pixels from the left edge of the screen, if negative from the right edge, if `None`, center window horizontally
* **starty** – if positive, starting position in pixels from the top edge of the screen, if negative from the bottom edge, if `None`, center window vertically

```
>>> screen.setup(width=200, height=200, startx=0, starty=0)
>>> # sets window to 200x200 pixels, in upper left of screen
>>> screen.setup(width=.75, height=0.5, startx=None, starty=None)
>>> # sets window to 75% of screen by 50% of screen and centers
```

`turtle.title(titlestring)`

Parameters

**titlestring** – a string that is shown in the titlebar of the turtle graphics window

Set title of turtle window to *titlestring*.

```
>>> screen.title("Welcome to the turtle zoo!")
```

Public classes
--------------

`class turtle.RawTurtle(canvas)`

`class turtle.RawPen(canvas)`

Parameters

**canvas** – a `tkinter.Canvas`, a [`ScrolledCanvas`](#turtle.ScrolledCanvas "turtle.ScrolledCanvas") or a [`TurtleScreen`](#turtle.TurtleScreen "turtle.TurtleScreen")

Create a turtle. The turtle has all methods described above as "methods of Turtle/RawTurtle".

`class turtle.Turtle`

Subclass of RawTurtle, has the same interface but draws on a default [`Screen`](#turtle.Screen "turtle.Screen") object created automatically when needed for the first time.

`class turtle.TurtleScreen(cv)`

Parameters

**cv** – a `tkinter.Canvas`

Provides screen oriented methods like `setbg()` etc. that are described above.

`class turtle.Screen`

Subclass of TurtleScreen, with [four methods added](#screenspecific).

`class turtle.ScrolledCanvas(master)`

Parameters

**master** – some Tkinter widget to contain the ScrolledCanvas, i.e. a Tkinter-canvas with scrollbars added

Used by class Screen, which thus automatically provides a ScrolledCanvas as playground for the turtles.

`class turtle.Shape(type_, data)`

Parameters

**type\_** – one of the strings "polygon", "image", "compound"

Data structure modeling shapes. The pair `(type_, data)` must follow this specification:

| *type\_* | *data* |
| --- | --- |
| "polygon" | a polygon-tuple, i.e. a tuple of pairs of coordinates |
| "image" | an image (in this form only used internally!) |
| "compound" | `None` (a compound shape has to be constructed using the [`addcomponent()`](#turtle.Shape.addcomponent "turtle.Shape.addcomponent") method) |

`addcomponent(poly, fill, outline=None)`

Parameters

* **poly** – a polygon, i.e. a tuple of pairs of numbers
* **fill** – a color the *poly* will be filled with
* **outline** – a color for the poly's outline (if given)

Example:

```
>>> poly = ((0,0),(10,-5),(0,10),(-10,-5))
>>> s = Shape("compound")
>>> s.addcomponent(poly, "red", "blue")
>>> # ... add more components and then use register_shape()
```

See [Compound shapes](#compoundshapes).

`class turtle.Vec2D(x, y)`

A two-dimensional vector class, used as a helper class for implementing turtle graphics. May be useful for turtle graphics programs too. Derived from tuple, so a vector is a tuple!
Provides (for *a*, *b* vectors, *k* number):

* `a + b` vector addition
* `a - b` vector subtraction
* `a * b` inner product
* `k * a` and `a * k` multiplication with scalar
* `abs(a)` absolute value of a
* `a.rotate(angle)` rotation

Help and configuration
----------------------

### How to use help

The public methods of the Screen and Turtle classes are documented extensively via docstrings. So these can be used as online-help via the Python help facilities:

* When using IDLE, tooltips show the signatures and first lines of the docstrings of typed-in function/method calls.
* Calling [`help()`](functions#help "help") on methods or functions displays the docstrings:

```
>>> help(Screen.bgcolor)
Help on method bgcolor in module turtle:

bgcolor(self, *args) unbound turtle.Screen method
    Set or return backgroundcolor of the TurtleScreen.

    Arguments (if given): a color string or three numbers
    in the range 0..colormode or a 3-tuple of such numbers.

    >>> screen.bgcolor("orange")
    >>> screen.bgcolor()
    "orange"
    >>> screen.bgcolor(0.5,0,0.5)
    >>> screen.bgcolor()
    "#800080"

>>> help(Turtle.penup)
Help on method penup in module turtle:

penup(self) unbound turtle.Turtle method
    Pull the pen up -- no drawing when moving.

    Aliases: penup | pu | up

    No argument

    >>> turtle.penup()
```

* The docstrings of the functions which are derived from methods have a modified form:

```
>>> help(bgcolor)
Help on function bgcolor in module turtle:

bgcolor(*args)
    Set or return backgroundcolor of the TurtleScreen.

    Arguments (if given): a color string or three numbers
    in the range 0..colormode or a 3-tuple of such numbers.

    Example:
    >>> bgcolor("orange")
    >>> bgcolor()
    "orange"
    >>> bgcolor(0.5,0,0.5)
    >>> bgcolor()
    "#800080"

>>> help(penup)
Help on function penup in module turtle:

penup()
    Pull the pen up -- no drawing when moving.

    Aliases: penup | pu | up

    No argument

    Example:
    >>> penup()
```

These modified docstrings are created automatically together with the function definitions that are derived from the methods at import time.

### Translation of docstrings into different languages

There is a utility to create a dictionary whose keys are the method names and whose values are the docstrings of the public methods of the classes Screen and Turtle.

`turtle.write_docstringdict(filename="turtle_docstringdict")`

Parameters

**filename** – a string, used as filename

Create and write docstring-dictionary to a Python script with the given filename. This function has to be called explicitly (it is not used by the turtle graphics classes). The docstring dictionary will be written to the Python script `*filename*.py`. It is intended to serve as a template for translation of the docstrings into different languages.

If you (or your students) want to use [`turtle`](#module-turtle "turtle: An educational framework for simple graphics applications") with online help in your native language, you have to translate the docstrings and save the resulting file as e.g. `turtle_docstringdict_german.py`.

If you have an appropriate entry in your `turtle.cfg` file this dictionary will be read in at import time and will replace the original English docstrings.

At the time of this writing there are docstring dictionaries in German and in Italian. (Requests please to glingl@aon.at.)

### How to configure Screen and Turtles

The built-in default configuration mimics the appearance and behaviour of the old turtle module in order to retain best possible compatibility with it.
If you want to use a different configuration which better reflects the features of this module or which better fits your needs, e.g. for use in a classroom, you can prepare a configuration file `turtle.cfg` which will be read at import time and modify the configuration according to its settings.

The built-in configuration would correspond to the following turtle.cfg:

```
width = 0.5
height = 0.75
leftright = None
topbottom = None
canvwidth = 400
canvheight = 300
mode = standard
colormode = 1.0
delay = 10
undobuffersize = 1000
shape = classic
pencolor = black
fillcolor = black
resizemode = noresize
visible = True
language = english
exampleturtle = turtle
examplescreen = screen
title = Python Turtle Graphics
using_IDLE = False
```

Short explanation of selected entries:

* The first four lines correspond to the arguments of the `Screen.setup()` method.
* Lines 5 and 6 correspond to the arguments of the method `Screen.screensize()`.
* *shape* can be any of the built-in shapes, e.g. arrow, turtle, etc. For more info try `help(shape)`.
* If you want to use no fillcolor (i.e. make the turtle transparent), you have to write `fillcolor = ""` (note that nonempty strings must not be quoted in the cfg file).
* If you want the turtle's appearance to reflect its state, you have to use `resizemode = auto`.
* If you set e.g. `language = italian` the docstringdict `turtle_docstringdict_italian.py` will be loaded at import time (if present on the import path, e.g. in the same directory as [`turtle`](#module-turtle "turtle: An educational framework for simple graphics applications")).
* The entries *exampleturtle* and *examplescreen* define the names of these objects as they occur in the docstrings. The transformation of method-docstrings to function-docstrings will delete these names from the docstrings.
* *using\_IDLE*: Set this to `True` if you regularly work with IDLE and its -n switch ("no subprocess"). This will prevent [`exitonclick()`](#turtle.exitonclick "turtle.exitonclick") from entering the mainloop.

There can be a `turtle.cfg` file in the directory where [`turtle`](#module-turtle "turtle: An educational framework for simple graphics applications") is stored and an additional one in the current working directory. The latter will override the settings of the first one.

The `Lib/turtledemo` directory contains a `turtle.cfg` file. You can study it as an example and see its effects when running the demos (preferably not from within the demo-viewer).

turtledemo — Demo scripts
-------------------------

The [`turtledemo`](#module-turtledemo "turtledemo: A viewer for example turtle scripts") package includes a set of demo scripts. These scripts can be run and viewed using the supplied demo viewer as follows:

```
python -m turtledemo
```

Alternatively, you can run the demo scripts individually. For example,

```
python -m turtledemo.bytedesign
```

The [`turtledemo`](#module-turtledemo "turtledemo: A viewer for example turtle scripts") package directory contains:

* A demo viewer `__main__.py` which can be used to view the source code of the scripts and run them at the same time.
* Multiple scripts demonstrating different features of the [`turtle`](#module-turtle "turtle: An educational framework for simple graphics applications") module. Examples can be accessed via the Examples menu. They can also be run standalone.
* A `turtle.cfg` file which serves as an example of how to write and use such files.
The demo scripts are:

| Name | Description | Features |
| --- | --- | --- |
| bytedesign | complex classical turtle graphics pattern | `tracer()`, delay, `update()` |
| chaos | graphs Verhulst dynamics; shows that a computer's computations can generate results that sometimes run against common-sense expectations | world coordinates |
| clock | analog clock showing the time of your computer | turtles as clock's hands, ontimer |
| colormixer | experiment with r, g, b | `ondrag()` |
| forest | 3 breadth-first trees | randomization |
| fractalcurves | Hilbert & Koch curves | recursion |
| lindenmayer | ethnomathematics (Indian kolams) | L-System |
| minimal\_hanoi | Towers of Hanoi | Rectangular Turtles as Hanoi discs (shape, shapesize) |
| nim | play the classical nim game with three heaps of sticks against the computer. | turtles as nimsticks, event driven (mouse, keyboard) |
| paint | super minimalistic drawing program | `onclick()` |
| peace | elementary | turtle: appearance and animation |
| penrose | aperiodic tiling with kites and darts | `stamp()` |
| planet\_and\_moon | simulation of gravitational system | compound shapes, `Vec2D` |
| round\_dance | dancing turtles rotating pairwise in opposite direction | compound shapes, clone shapesize, tilt, get\_shapepoly, update |
| sorting\_animate | visual demonstration of different sorting methods | simple alignment, randomization |
| tree | a (graphical) breadth-first tree (using generators) | `clone()` |
| two\_canvases | simple design | turtles on two canvases |
| wikipedia | a pattern from the wikipedia article on turtle graphics | `clone()`, `undo()` |
| yinyang | another elementary example | `circle()` |

Have fun!

Changes since Python 2.6
------------------------

* The methods `Turtle.tracer()`, `Turtle.window_width()` and `Turtle.window_height()` have been eliminated. Methods with these names and functionality are now available only as methods of `Screen`. The functions derived from these remain available. (In fact already in Python 2.6 these methods were merely duplications of the corresponding `TurtleScreen`/`Screen` methods.)
* The method `Turtle.fill()` has been eliminated. The behaviour of `begin_fill()` and `end_fill()` has changed slightly: now every filling process must be completed with an `end_fill()` call.
* A method `Turtle.filling()` has been added. It returns a boolean value: `True` if a filling process is under way, `False` otherwise. This behaviour corresponds to a `fill()` call without arguments in Python 2.6.

Changes since Python 3.0
------------------------

* The methods `Turtle.shearfactor()`, `Turtle.shapetransform()` and `Turtle.get_shapepoly()` have been added. Thus the full range of regular linear transforms is now available for transforming turtle shapes. `Turtle.tiltangle()` has been enhanced in functionality: it now can be used to get or set the tilt angle. `Turtle.settiltangle()` has been deprecated.
* The method `Screen.onkeypress()` has been added as a complement to `Screen.onkey()`, which in fact binds actions to the key-release event. Accordingly, the latter has been given an alias: `Screen.onkeyrelease()`.
* The method `Screen.mainloop()` has been added, so when working only with Screen and Turtle objects one no longer needs to import `mainloop()` separately.
* Two input methods have been added: `Screen.textinput()` and `Screen.numinput()`. These pop up input dialogs and return strings and numbers, respectively.
* Two example scripts `tdemo_nim.py` and `tdemo_round_dance.py` have been added to the `Lib/turtledemo` directory.
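A short sketch exercising several of the additions listed above (no output shown; the calls assume the usual `turtle` and `screen` instances from the earlier sections):

```
>>> turtle.shape("square")
>>> turtle.shearfactor(0.3)        # shear the shape (regular linear transform)
>>> turtle.tiltangle(15)           # tiltangle() can now set as well as get
>>> screen.onkeypress(lambda: turtle.fd(10), "Up")
>>> screen.listen()
```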
python uuid — UUID objects according to RFC 4122

uuid — UUID objects according to RFC 4122
=========================================

**Source code:** [Lib/uuid.py](https://github.com/python/cpython/tree/3.9/Lib/uuid.py)

This module provides immutable [`UUID`](#uuid.UUID "uuid.UUID") objects (the [`UUID`](#uuid.UUID "uuid.UUID") class) and the functions [`uuid1()`](#uuid.uuid1 "uuid.uuid1"), [`uuid3()`](#uuid.uuid3 "uuid.uuid3"), [`uuid4()`](#uuid.uuid4 "uuid.uuid4"), [`uuid5()`](#uuid.uuid5 "uuid.uuid5") for generating version 1, 3, 4, and 5 UUIDs as specified in [**RFC 4122**](https://tools.ietf.org/html/rfc4122.html).

If all you want is a unique ID, you should probably call [`uuid1()`](#uuid.uuid1 "uuid.uuid1") or [`uuid4()`](#uuid.uuid4 "uuid.uuid4"). Note that [`uuid1()`](#uuid.uuid1 "uuid.uuid1") may compromise privacy since it creates a UUID containing the computer's network address. [`uuid4()`](#uuid.uuid4 "uuid.uuid4") creates a random UUID.

Depending on support from the underlying platform, [`uuid1()`](#uuid.uuid1 "uuid.uuid1") may or may not return a "safe" UUID. A safe UUID is one which is generated using synchronization methods that ensure no two processes can obtain the same UUID. All instances of [`UUID`](#uuid.UUID "uuid.UUID") have an `is_safe` attribute which relays any information about the UUID's safety, using this enumeration:

`class uuid.SafeUUID`

New in version 3.7.

`safe`

The UUID was generated by the platform in a multiprocessing-safe way.

`unsafe`

The UUID was not generated in a multiprocessing-safe way.

`unknown`

The platform does not provide information on whether the UUID was generated safely or not.

`class uuid.UUID(hex=None, bytes=None, bytes_le=None, fields=None, int=None, version=None, *, is_safe=SafeUUID.unknown)`

Create a UUID from either a string of 32 hexadecimal digits, a string of 16 bytes in big-endian order as the *bytes* argument, a string of 16 bytes in little-endian order as the *bytes\_le* argument, a tuple of six integers (32-bit *time\_low*, 16-bit *time\_mid*, 16-bit *time\_hi\_version*, 8-bit *clock\_seq\_hi\_variant*, 8-bit *clock\_seq\_low*, 48-bit *node*) as the *fields* argument, or a single 128-bit integer as the *int* argument. When a string of hex digits is given, curly braces, hyphens, and a URN prefix are all optional. For example, these expressions all yield the same UUID:

```
UUID('{12345678-1234-5678-1234-567812345678}')
UUID('12345678123456781234567812345678')
UUID('urn:uuid:12345678-1234-5678-1234-567812345678')
UUID(bytes=b'\x12\x34\x56\x78'*4)
UUID(bytes_le=b'\x78\x56\x34\x12\x34\x12\x78\x56' +
              b'\x12\x34\x56\x78\x12\x34\x56\x78')
UUID(fields=(0x12345678, 0x1234, 0x5678, 0x12, 0x34, 0x567812345678))
UUID(int=0x12345678123456781234567812345678)
```

Exactly one of *hex*, *bytes*, *bytes\_le*, *fields*, or *int* must be given. The *version* argument is optional; if given, the resulting UUID will have its variant and version number set according to [**RFC 4122**](https://tools.ietf.org/html/rfc4122.html), overriding bits in the given *hex*, *bytes*, *bytes\_le*, *fields*, or *int*.

Comparison of UUID objects is made by way of comparing their [`UUID.int`](#uuid.UUID.int "uuid.UUID.int") attributes. Comparison with a non-UUID object raises a [`TypeError`](exceptions#TypeError "TypeError").

`str(uuid)` returns a string in the form `12345678-1234-5678-1234-567812345678` where the 32 hexadecimal digits represent the UUID.
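For example, because comparison is delegated to the 128-bit integer value, UUIDs order numerically (a minimal sketch):

```
>>> import uuid
>>> a = uuid.UUID(int=1)
>>> b = uuid.UUID(int=2)
>>> a < b
True
>>> str(a)
'00000000-0000-0000-0000-000000000001'
```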
[`UUID`](#uuid.UUID "uuid.UUID") instances have these read-only attributes:

`UUID.bytes`

The UUID as a 16-byte string (containing the six integer fields in big-endian byte order).

`UUID.bytes_le`

The UUID as a 16-byte string (with *time\_low*, *time\_mid*, and *time\_hi\_version* in little-endian byte order).

`UUID.fields`

A tuple of the six integer fields of the UUID, which are also available as six individual attributes and two derived attributes:

| Field | Meaning |
| --- | --- |
| `time_low` | the first 32 bits of the UUID |
| `time_mid` | the next 16 bits of the UUID |
| `time_hi_version` | the next 16 bits of the UUID |
| `clock_seq_hi_variant` | the next 8 bits of the UUID |
| `clock_seq_low` | the next 8 bits of the UUID |
| `node` | the last 48 bits of the UUID |
| `time` | the 60-bit timestamp |
| `clock_seq` | the 14-bit sequence number |

`UUID.hex`

The UUID as a 32-character lowercase hexadecimal string.

`UUID.int`

The UUID as a 128-bit integer.

`UUID.urn`

The UUID as a URN as specified in [**RFC 4122**](https://tools.ietf.org/html/rfc4122.html).

`UUID.variant`

The UUID variant, which determines the internal layout of the UUID. This will be one of the constants [`RESERVED_NCS`](#uuid.RESERVED_NCS "uuid.RESERVED_NCS"), [`RFC_4122`](#uuid.RFC_4122 "uuid.RFC_4122"), [`RESERVED_MICROSOFT`](#uuid.RESERVED_MICROSOFT "uuid.RESERVED_MICROSOFT"), or [`RESERVED_FUTURE`](#uuid.RESERVED_FUTURE "uuid.RESERVED_FUTURE").

`UUID.version`

The UUID version number (1 through 5, meaningful only when the variant is [`RFC_4122`](#uuid.RFC_4122 "uuid.RFC_4122")).

`UUID.is_safe`

An enumeration of [`SafeUUID`](#uuid.SafeUUID "uuid.SafeUUID") which indicates whether the platform generated the UUID in a multiprocessing-safe way.

New in version 3.7.

The [`uuid`](#module-uuid "uuid: UUID objects (universally unique identifiers) according to RFC 4122") module defines the following functions:

`uuid.getnode()`

Get the hardware address as a 48-bit positive integer. The first time this runs, it may launch a separate program, which could be quite slow. If all attempts to obtain the hardware address fail, we choose a random 48-bit number with the multicast bit (least significant bit of the first octet) set to 1 as recommended in [**RFC 4122**](https://tools.ietf.org/html/rfc4122.html). "Hardware address" means the MAC address of a network interface. On a machine with multiple network interfaces, universally administered MAC addresses (i.e. where the second least significant bit of the first octet is *unset*) will be preferred over locally administered MAC addresses, but with no other ordering guarantees.

Changed in version 3.7: Universally administered MAC addresses are preferred over locally administered MAC addresses, since the former are guaranteed to be globally unique, while the latter are not.

`uuid.uuid1(node=None, clock_seq=None)`

Generate a UUID from a host ID, sequence number, and the current time. If *node* is not given, [`getnode()`](#uuid.getnode "uuid.getnode") is used to obtain the hardware address. If *clock\_seq* is given, it is used as the sequence number; otherwise a random 14-bit sequence number is chosen.

`uuid.uuid3(namespace, name)`

Generate a UUID based on the MD5 hash of a namespace identifier (which is a UUID) and a name (which is a string).

`uuid.uuid4()`

Generate a random UUID.

`uuid.uuid5(namespace, name)`

Generate a UUID based on the SHA-1 hash of a namespace identifier (which is a UUID) and a name (which is a string).
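Since [`uuid3()`](#uuid.uuid3 "uuid.uuid3") and [`uuid5()`](#uuid.uuid5 "uuid.uuid5") are pure hashes of their inputs, the same namespace and name always yield the same UUID, which makes them suitable for reproducible identifiers; a quick sketch using one of the namespace identifiers described next:

```
>>> import uuid
>>> u = uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
>>> u == uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
True
>>> u.version
5
```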
The [`uuid`](#module-uuid "uuid: UUID objects (universally unique identifiers) according to RFC 4122") module defines the following namespace identifiers for use with [`uuid3()`](#uuid.uuid3 "uuid.uuid3") or [`uuid5()`](#uuid.uuid5 "uuid.uuid5"). `uuid.NAMESPACE_DNS` When this namespace is specified, the *name* string is a fully-qualified domain name. `uuid.NAMESPACE_URL` When this namespace is specified, the *name* string is a URL. `uuid.NAMESPACE_OID` When this namespace is specified, the *name* string is an ISO OID. `uuid.NAMESPACE_X500` When this namespace is specified, the *name* string is an X.500 DN in DER or a text output format. The [`uuid`](#module-uuid "uuid: UUID objects (universally unique identifiers) according to RFC 4122") module defines the following constants for the possible values of the `variant` attribute: `uuid.RESERVED_NCS` Reserved for NCS compatibility. `uuid.RFC_4122` Specifies the UUID layout given in [**RFC 4122**](https://tools.ietf.org/html/rfc4122.html). `uuid.RESERVED_MICROSOFT` Reserved for Microsoft compatibility. `uuid.RESERVED_FUTURE` Reserved for future definition. See also [**RFC 4122**](https://tools.ietf.org/html/rfc4122.html) - A Universally Unique IDentifier (UUID) URN Namespace This specification defines a Uniform Resource Name namespace for UUIDs, the internal format of UUIDs, and methods of generating UUIDs. Example ------- Here are some examples of typical usage of the [`uuid`](#module-uuid "uuid: UUID objects (universally unique identifiers) according to RFC 4122") module: ``` >>> import uuid >>> # make a UUID based on the host ID and current time >>> uuid.uuid1() UUID('a8098c1a-f86e-11da-bd1a-00112444be1e') >>> # make a UUID using an MD5 hash of a namespace UUID and a name >>> uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org') UUID('6fa459ea-ee8a-3ca4-894e-db77e160355e') >>> # make a random UUID >>> uuid.uuid4() UUID('16fd2706-8baf-433b-82eb-8c7fada847da') >>> # make a UUID using a SHA-1 hash of a namespace UUID and a name >>> uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org') UUID('886313e1-3b8a-5372-9b90-0c9aee199e5d') >>> # make a UUID from a string of hex digits (braces and hyphens ignored) >>> x = uuid.UUID('{00010203-0405-0607-0809-0a0b0c0d0e0f}') >>> # convert a UUID to a string of hex digits in standard form >>> str(x) '00010203-0405-0607-0809-0a0b0c0d0e0f' >>> # get the raw 16 bytes of the UUID >>> x.bytes b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f' >>> # make a UUID from a 16-byte string >>> uuid.UUID(bytes=x.bytes) UUID('00010203-0405-0607-0809-0a0b0c0d0e0f') ``` python inspect — Inspect live objects inspect — Inspect live objects ============================== **Source code:** [Lib/inspect.py](https://github.com/python/cpython/tree/3.9/Lib/inspect.py) The [`inspect`](#module-inspect "inspect: Extract information and source code from live objects.") module provides several useful functions to help get information about live objects such as modules, classes, methods, functions, tracebacks, frame objects, and code objects. For example, it can help you examine the contents of a class, retrieve the source code of a method, extract and format the argument list for a function, or get all the information you need to display a detailed traceback. There are four main kinds of services provided by this module: type checking, getting source code, inspecting classes and functions, and examining the interpreter stack. 
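As a quick orientation before the reference material below, here is a minimal sketch (the function name is illustrative) combining the type-checking predicates with the member-retrieval function documented in the next section:

```
import inspect

def greet(name):
    """Return a greeting."""
    return "Hello, " + name

print(inspect.isfunction(greet))    # True
print(inspect.ismethod(greet))      # False: not bound to an instance

# The "is" predicates work well as getmembers() filters:
functions = inspect.getmembers(inspect, inspect.isfunction)
print(len(functions) > 0)           # True: inspect defines many functions
```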
Types and members ----------------- The [`getmembers()`](#inspect.getmembers "inspect.getmembers") function retrieves the members of an object such as a class or module. The functions whose names begin with “is” are mainly provided as convenient choices for the second argument to [`getmembers()`](#inspect.getmembers "inspect.getmembers"). They also help you determine when you can expect to find the following special attributes: | Type | Attribute | Description | | --- | --- | --- | | module | \_\_doc\_\_ | documentation string | | | \_\_file\_\_ | filename (missing for built-in modules) | | class | \_\_doc\_\_ | documentation string | | | \_\_name\_\_ | name with which this class was defined | | | \_\_qualname\_\_ | qualified name | | | \_\_module\_\_ | name of module in which this class was defined | | method | \_\_doc\_\_ | documentation string | | | \_\_name\_\_ | name with which this method was defined | | | \_\_qualname\_\_ | qualified name | | | \_\_func\_\_ | function object containing implementation of method | | | \_\_self\_\_ | instance to which this method is bound, or `None` | | | \_\_module\_\_ | name of module in which this method was defined | | function | \_\_doc\_\_ | documentation string | | | \_\_name\_\_ | name with which this function was defined | | | \_\_qualname\_\_ | qualified name | | | \_\_code\_\_ | code object containing compiled function [bytecode](../glossary#term-bytecode) | | | \_\_defaults\_\_ | tuple of any default values for positional or keyword parameters | | | \_\_kwdefaults\_\_ | mapping of any default values for keyword-only parameters | | | \_\_globals\_\_ | global namespace in which this function was defined | | | \_\_annotations\_\_ | mapping of parameters names to annotations; `"return"` key is reserved for return annotations. 
| | | \_\_module\_\_ | name of module in which this function was defined | | traceback | tb\_frame | frame object at this level | | | tb\_lasti | index of last attempted instruction in bytecode | | | tb\_lineno | current line number in Python source code | | | tb\_next | next inner traceback object (called by this level) | | frame | f\_back | next outer frame object (this frame’s caller) | | | f\_builtins | builtins namespace seen by this frame | | | f\_code | code object being executed in this frame | | | f\_globals | global namespace seen by this frame | | | f\_lasti | index of last attempted instruction in bytecode | | | f\_lineno | current line number in Python source code | | | f\_locals | local namespace seen by this frame | | | f\_trace | tracing function for this frame, or `None` | | code | co\_argcount | number of arguments (not including keyword only arguments, \* or \*\* args) | | | co\_code | string of raw compiled bytecode | | | co\_cellvars | tuple of names of cell variables (referenced by containing scopes) | | | co\_consts | tuple of constants used in the bytecode | | | co\_filename | name of file in which this code object was created | | | co\_firstlineno | number of first line in Python source code | | | co\_flags | bitmap of `CO_*` flags, read more [here](#inspect-module-co-flags) | | | co\_lnotab | encoded mapping of line numbers to bytecode indices | | | co\_freevars | tuple of names of free variables (referenced via a function’s closure) | | | co\_posonlyargcount | number of positional only arguments | | | co\_kwonlyargcount | number of keyword only arguments (not including \*\* arg) | | | co\_name | name with which this code object was defined | | | co\_names | tuple of names of local variables | | | co\_nlocals | number of local variables | | | co\_stacksize | virtual machine stack space required | | | co\_varnames | tuple of names of arguments and local variables | | generator | \_\_name\_\_ | name | | | \_\_qualname\_\_ | qualified name | | | gi\_frame | frame | | | gi\_running | is the generator running? | | | gi\_code | code | | | gi\_yieldfrom | object being iterated by `yield from`, or `None` | | coroutine | \_\_name\_\_ | name | | | \_\_qualname\_\_ | qualified name | | | cr\_await | object being awaited on, or `None` | | | cr\_frame | frame | | | cr\_running | is the coroutine running? | | | cr\_code | code | | | cr\_origin | where coroutine was created, or `None`. See [`sys.set_coroutine_origin_tracking_depth()`](sys#sys.set_coroutine_origin_tracking_depth "sys.set_coroutine_origin_tracking_depth") | | builtin | \_\_doc\_\_ | documentation string | | | \_\_name\_\_ | original name of this function or method | | | \_\_qualname\_\_ | qualified name | | | \_\_self\_\_ | instance to which a method is bound, or `None` | Changed in version 3.5: Add `__qualname__` and `gi_yieldfrom` attributes to generators. The `__name__` attribute of generators is now set from the function name, instead of the code name, and it can now be modified. Changed in version 3.7: Add `cr_origin` attribute to coroutines. `inspect.getmembers(object[, predicate])` Return all the members of an object in a list of `(name, value)` pairs sorted by name. If the optional *predicate* argument—which will be called with the `value` object of each member—is supplied, only members for which the predicate returns a true value are included. 
Note

[`getmembers()`](#inspect.getmembers "inspect.getmembers") will only return class attributes defined in the metaclass when the argument is a class and those attributes have been listed in the metaclass' custom [`__dir__()`](../reference/datamodel#object.__dir__ "object.__dir__").

`inspect.getmodulename(path)`

Return the name of the module named by the file *path*, without including the names of enclosing packages. The file extension is checked against all of the entries in [`importlib.machinery.all_suffixes()`](importlib#importlib.machinery.all_suffixes "importlib.machinery.all_suffixes"). If it matches, the final path component is returned with the extension removed. Otherwise, `None` is returned.

Note that this function *only* returns a meaningful name for actual Python modules - paths that potentially refer to Python packages will still return `None`.

Changed in version 3.3: The function is based directly on [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.").

`inspect.ismodule(object)`

Return `True` if the object is a module.

`inspect.isclass(object)`

Return `True` if the object is a class, whether built-in or created in Python code.

`inspect.ismethod(object)`

Return `True` if the object is a bound method written in Python.

`inspect.isfunction(object)`

Return `True` if the object is a Python function, which includes functions created by a [lambda](../glossary#term-lambda) expression.

`inspect.isgeneratorfunction(object)`

Return `True` if the object is a Python generator function.

Changed in version 3.8: Functions wrapped in [`functools.partial()`](functools#functools.partial "functools.partial") now return `True` if the wrapped function is a Python generator function.

`inspect.isgenerator(object)`

Return `True` if the object is a generator.

`inspect.iscoroutinefunction(object)`

Return `True` if the object is a [coroutine function](../glossary#term-coroutine-function) (a function defined with an [`async def`](../reference/compound_stmts#async-def) syntax).

New in version 3.5.

Changed in version 3.8: Functions wrapped in [`functools.partial()`](functools#functools.partial "functools.partial") now return `True` if the wrapped function is a [coroutine function](../glossary#term-coroutine-function).

`inspect.iscoroutine(object)`

Return `True` if the object is a [coroutine](../glossary#term-coroutine) created by an [`async def`](../reference/compound_stmts#async-def) function.

New in version 3.5.

`inspect.isawaitable(object)`

Return `True` if the object can be used in an [`await`](../reference/expressions#await) expression. Can also be used to distinguish generator-based coroutines from regular generators:

```
def gen():
    yield

@types.coroutine
def gen_coro():
    yield

assert not isawaitable(gen())
assert isawaitable(gen_coro())
```

New in version 3.5.

`inspect.isasyncgenfunction(object)`

Return `True` if the object is an [asynchronous generator](../glossary#term-asynchronous-generator) function, for example:

```
>>> async def agen():
...     yield 1
...
>>> inspect.isasyncgenfunction(agen)
True
```

New in version 3.6.

Changed in version 3.8: Functions wrapped in [`functools.partial()`](functools#functools.partial "functools.partial") now return `True` if the wrapped function is an [asynchronous generator](../glossary#term-asynchronous-generator) function.
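The coroutine-related predicates distinguish the function from the object it produces; a minimal sketch:

```
>>> import inspect
>>> async def coro():
...     return 1
...
>>> inspect.iscoroutinefunction(coro)   # the async def function itself
True
>>> c = coro()
>>> inspect.iscoroutine(c)              # the coroutine object it returns
True
>>> inspect.isawaitable(c)
True
>>> c.close()                           # avoid a "never awaited" warning
```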
`inspect.isasyncgen(object)`

Return `True` if the object is an [asynchronous generator iterator](../glossary#term-asynchronous-generator-iterator) created by an [asynchronous generator](../glossary#term-asynchronous-generator) function.

New in version 3.6.

`inspect.istraceback(object)`

Return `True` if the object is a traceback.

`inspect.isframe(object)`

Return `True` if the object is a frame.

`inspect.iscode(object)`

Return `True` if the object is a code object.

`inspect.isbuiltin(object)`

Return `True` if the object is a built-in function or a bound built-in method.

`inspect.isroutine(object)`

Return `True` if the object is a user-defined or built-in function or method.

`inspect.isabstract(object)`

Return `True` if the object is an abstract base class.

`inspect.ismethoddescriptor(object)`

Return `True` if the object is a method descriptor, but not if [`ismethod()`](#inspect.ismethod "inspect.ismethod"), [`isclass()`](#inspect.isclass "inspect.isclass"), [`isfunction()`](#inspect.isfunction "inspect.isfunction") or [`isbuiltin()`](#inspect.isbuiltin "inspect.isbuiltin") are true. This, for example, is true of `int.__add__`. An object passing this test has a [`__get__()`](../reference/datamodel#object.__get__ "object.__get__") method but not a [`__set__()`](../reference/datamodel#object.__set__ "object.__set__") method, but beyond that the set of attributes varies. A [`__name__`](stdtypes#definition.__name__ "definition.__name__") attribute is usually sensible, and `__doc__` often is. Methods implemented via descriptors that also pass one of the other tests return `False` from the [`ismethoddescriptor()`](#inspect.ismethoddescriptor "inspect.ismethoddescriptor") test, simply because the other tests promise more – you can, e.g., count on having the `__func__` attribute (etc) when an object passes [`ismethod()`](#inspect.ismethod "inspect.ismethod").

`inspect.isdatadescriptor(object)`

Return `True` if the object is a data descriptor. Data descriptors have a [`__set__`](../reference/datamodel#object.__set__ "object.__set__") or a [`__delete__`](../reference/datamodel#object.__delete__ "object.__delete__") method. Examples are properties (defined in Python), getsets, and members. The latter two are defined in C and there are more specific tests available for those types, which is robust across Python implementations. Typically, data descriptors will also have [`__name__`](stdtypes#definition.__name__ "definition.__name__") and `__doc__` attributes (properties, getsets, and members have both of these attributes), but this is not guaranteed.

`inspect.isgetsetdescriptor(object)`

Return `True` if the object is a getset descriptor.

**CPython implementation detail:** getsets are attributes defined in extension modules via [`PyGetSetDef`](../c-api/structures#c.PyGetSetDef "PyGetSetDef") structures. For Python implementations without such types, this method will always return `False`.

`inspect.ismemberdescriptor(object)`

Return `True` if the object is a member descriptor.

**CPython implementation detail:** Member descriptors are attributes defined in extension modules via [`PyMemberDef`](../c-api/structures#c.PyMemberDef "PyMemberDef") structures. For Python implementations without such types, this method will always return `False`.

Retrieving source code
----------------------

`inspect.getdoc(object)`

Get the documentation string for an object, cleaned up with [`cleandoc()`](#inspect.cleandoc "inspect.cleandoc").
If the documentation string for an object is not provided and the object is a class, a method, a property or a descriptor, retrieve the documentation string from the inheritance hierarchy. Changed in version 3.5: Documentation strings are now inherited if not overridden. `inspect.getcomments(object)` Return in a single string any lines of comments immediately preceding the object’s source code (for a class, function, or method), or at the top of the Python source file (if the object is a module). If the object’s source code is unavailable, return `None`. This could happen if the object has been defined in C or the interactive shell. `inspect.getfile(object)` Return the name of the (text or binary) file in which an object was defined. This will fail with a [`TypeError`](exceptions#TypeError "TypeError") if the object is a built-in module, class, or function. `inspect.getmodule(object)` Try to guess which module an object was defined in. `inspect.getsourcefile(object)` Return the name of the Python source file in which an object was defined. This will fail with a [`TypeError`](exceptions#TypeError "TypeError") if the object is a built-in module, class, or function. `inspect.getsourcelines(object)` Return a list of source lines and starting line number for an object. The argument may be a module, class, method, function, traceback, frame, or code object. The source code is returned as a list of the lines corresponding to the object and the line number indicates where in the original source file the first line of code was found. An [`OSError`](exceptions#OSError "OSError") is raised if the source code cannot be retrieved. Changed in version 3.3: [`OSError`](exceptions#OSError "OSError") is raised instead of [`IOError`](exceptions#IOError "IOError"), now an alias of the former. `inspect.getsource(object)` Return the text of the source code for an object. The argument may be a module, class, method, function, traceback, frame, or code object. The source code is returned as a single string. An [`OSError`](exceptions#OSError "OSError") is raised if the source code cannot be retrieved. Changed in version 3.3: [`OSError`](exceptions#OSError "OSError") is raised instead of [`IOError`](exceptions#IOError "IOError"), now an alias of the former. `inspect.cleandoc(doc)` Clean up indentation from docstrings that are indented to line up with blocks of code. All leading whitespace is removed from the first line. Any leading whitespace that can be uniformly removed from the second line onwards is removed. Empty lines at the beginning and end are subsequently removed. Also, all tabs are expanded to spaces. Introspecting callables with the Signature object ------------------------------------------------- New in version 3.3. The Signature object represents the call signature of a callable object and its return annotation. To retrieve a Signature object, use the [`signature()`](#inspect.signature "inspect.signature") function. `inspect.signature(callable, *, follow_wrapped=True)` Return a [`Signature`](#inspect.Signature "inspect.Signature") object for the given `callable`: ``` >>> from inspect import signature >>> def foo(a, *, b:int, **kwargs): ... pass >>> sig = signature(foo) >>> str(sig) '(a, *, b:int, **kwargs)' >>> str(sig.parameters['b']) 'b:int' >>> sig.parameters['b'].annotation <class 'int'> ``` Accepts a wide range of Python callables, from plain functions and classes to [`functools.partial()`](functools#functools.partial "functools.partial") objects. 
Raises [`ValueError`](exceptions#ValueError "ValueError") if no signature can be provided, and [`TypeError`](exceptions#TypeError "TypeError") if that type of object is not supported.

A slash (/) in the signature of a function denotes that the parameters prior to it are positional-only. For more info, see [the FAQ entry on positional-only parameters](../faq/programming#faq-positional-only-arguments).

New in version 3.5: `follow_wrapped` parameter. Pass `False` to get a signature of `callable` specifically (`callable.__wrapped__` will not be used to unwrap decorated callables.)

Note

Some callables may not be introspectable in certain implementations of Python. For example, in CPython, some built-in functions defined in C provide no metadata about their arguments.

`class inspect.Signature(parameters=None, *, return_annotation=Signature.empty)`

A Signature object represents the call signature of a function and its return annotation. For each parameter accepted by the function it stores a [`Parameter`](#inspect.Parameter "inspect.Parameter") object in its [`parameters`](#inspect.Signature.parameters "inspect.Signature.parameters") collection.

The optional *parameters* argument is a sequence of [`Parameter`](#inspect.Parameter "inspect.Parameter") objects, which is validated to check that there are no parameters with duplicate names, and that the parameters are in the right order, i.e. positional-only first, then positional-or-keyword, and that parameters with defaults follow parameters without defaults.

The optional *return\_annotation* argument can be an arbitrary Python object; it is the "return" annotation of the callable.

Signature objects are *immutable*. Use [`Signature.replace()`](#inspect.Signature.replace "inspect.Signature.replace") to make a modified copy.

Changed in version 3.5: Signature objects are picklable and hashable.

`empty`

A special class-level marker to specify absence of a return annotation.

`parameters`

An ordered mapping of parameters' names to the corresponding [`Parameter`](#inspect.Parameter "inspect.Parameter") objects. Parameters appear in strict definition order, including keyword-only parameters.

Changed in version 3.7: Python only explicitly guaranteed that it preserved the declaration order of keyword-only parameters as of version 3.7, although in practice this order had always been preserved in Python 3.

`return_annotation`

The "return" annotation for the callable. If the callable has no "return" annotation, this attribute is set to [`Signature.empty`](#inspect.Signature.empty "inspect.Signature.empty").

`bind(*args, **kwargs)`

Create a mapping from positional and keyword arguments to parameters. Returns [`BoundArguments`](#inspect.BoundArguments "inspect.BoundArguments") if `*args` and `**kwargs` match the signature, or raises a [`TypeError`](exceptions#TypeError "TypeError").

`bind_partial(*args, **kwargs)`

Works the same way as [`Signature.bind()`](#inspect.Signature.bind "inspect.Signature.bind"), but allows the omission of some required arguments (mimics [`functools.partial()`](functools#functools.partial "functools.partial") behavior.) Returns [`BoundArguments`](#inspect.BoundArguments "inspect.BoundArguments"), or raises a [`TypeError`](exceptions#TypeError "TypeError") if the passed arguments do not match the signature.

`replace(*[, parameters][, return_annotation])`

Create a new Signature instance based on the instance replace was invoked on.
It is possible to pass different `parameters` and/or `return_annotation` to override the corresponding properties of the base signature. To remove return\_annotation from the copied Signature, pass in [`Signature.empty`](#inspect.Signature.empty "inspect.Signature.empty"). ``` >>> def test(a, b): ... pass >>> sig = signature(test) >>> new_sig = sig.replace(return_annotation="new return anno") >>> str(new_sig) "(a, b) -> 'new return anno'" ``` `classmethod from_callable(obj, *, follow_wrapped=True)` Return a [`Signature`](#inspect.Signature "inspect.Signature") (or its subclass) object for a given callable `obj`. Pass `follow_wrapped=False` to get a signature of `obj` without unwrapping its `__wrapped__` chain. This method simplifies subclassing of [`Signature`](#inspect.Signature "inspect.Signature"): ``` class MySignature(Signature): pass sig = MySignature.from_callable(min) assert isinstance(sig, MySignature) ``` New in version 3.5. `class inspect.Parameter(name, kind, *, default=Parameter.empty, annotation=Parameter.empty)` Parameter objects are *immutable*. Instead of modifying a Parameter object, you can use [`Parameter.replace()`](#inspect.Parameter.replace "inspect.Parameter.replace") to create a modified copy. Changed in version 3.5: Parameter objects are picklable and hashable. `empty` A special class-level marker to specify absence of default values and annotations. `name` The name of the parameter as a string. The name must be a valid Python identifier. **CPython implementation detail:** CPython generates implicit parameter names of the form `.0` on the code objects used to implement comprehensions and generator expressions. Changed in version 3.6: These parameter names are exposed by this module as names like `implicit0`. `default` The default value for the parameter. If the parameter has no default value, this attribute is set to [`Parameter.empty`](#inspect.Parameter.empty "inspect.Parameter.empty"). `annotation` The annotation for the parameter. If the parameter has no annotation, this attribute is set to [`Parameter.empty`](#inspect.Parameter.empty "inspect.Parameter.empty"). `kind` Describes how argument values are bound to the parameter. Possible values (accessible via [`Parameter`](#inspect.Parameter "inspect.Parameter"), like `Parameter.KEYWORD_ONLY`): | Name | Meaning | | --- | --- | | *POSITIONAL\_ONLY* | Value must be supplied as a positional argument. Positional only parameters are those which appear before a `/` entry (if present) in a Python function definition. | | *POSITIONAL\_OR\_KEYWORD* | Value may be supplied as either a keyword or positional argument (this is the standard binding behaviour for functions implemented in Python.) | | *VAR\_POSITIONAL* | A tuple of positional arguments that aren’t bound to any other parameter. This corresponds to a `*args` parameter in a Python function definition. | | *KEYWORD\_ONLY* | Value must be supplied as a keyword argument. Keyword only parameters are those which appear after a `*` or `*args` entry in a Python function definition. | | *VAR\_KEYWORD* | A dict of keyword arguments that aren’t bound to any other parameter. This corresponds to a `**kwargs` parameter in a Python function definition. | Example: print all keyword-only arguments without default values: ``` >>> def foo(a, b, *, c, d=10): ... pass >>> sig = signature(foo) >>> for param in sig.parameters.values(): ... if (param.kind == param.KEYWORD_ONLY and ... param.default is param.empty): ... 
print('Parameter:', param) Parameter: c ``` `kind.description` Describes an enum value of Parameter.kind. New in version 3.8. Example: print all descriptions of arguments: ``` >>> def foo(a, b, *, c, d=10): ... pass >>> sig = signature(foo) >>> for param in sig.parameters.values(): ... print(param.kind.description) positional or keyword positional or keyword keyword-only keyword-only ``` `replace(*[, name][, kind][, default][, annotation])` Create a new Parameter instance based on the instance `replace` was invoked on. To override a [`Parameter`](#inspect.Parameter "inspect.Parameter") attribute, pass the corresponding argument. To remove a default value and/or an annotation from a Parameter, pass [`Parameter.empty`](#inspect.Parameter.empty "inspect.Parameter.empty"). ``` >>> from inspect import Parameter >>> param = Parameter('foo', Parameter.KEYWORD_ONLY, default=42) >>> str(param) 'foo=42' >>> str(param.replace()) # Will create a shallow copy of 'param' 'foo=42' >>> str(param.replace(default=Parameter.empty, annotation='spam')) "foo:'spam'" ``` Changed in version 3.4: In Python 3.3 Parameter objects were allowed to have `name` set to `None` if their `kind` was set to `POSITIONAL_ONLY`. This is no longer permitted. `class inspect.BoundArguments` Result of a [`Signature.bind()`](#inspect.Signature.bind "inspect.Signature.bind") or [`Signature.bind_partial()`](#inspect.Signature.bind_partial "inspect.Signature.bind_partial") call. Holds the mapping of arguments to the function’s parameters. `arguments` A mutable mapping of parameters’ names to arguments’ values. Contains only explicitly bound arguments. Changes in [`arguments`](#inspect.BoundArguments.arguments "inspect.BoundArguments.arguments") will be reflected in [`args`](#inspect.BoundArguments.args "inspect.BoundArguments.args") and [`kwargs`](#inspect.BoundArguments.kwargs "inspect.BoundArguments.kwargs"). Should be used in conjunction with [`Signature.parameters`](#inspect.Signature.parameters "inspect.Signature.parameters") for any argument processing purposes. Note Arguments for which [`Signature.bind()`](#inspect.Signature.bind "inspect.Signature.bind") or [`Signature.bind_partial()`](#inspect.Signature.bind_partial "inspect.Signature.bind_partial") relied on a default value are skipped. However, if needed, use [`BoundArguments.apply_defaults()`](#inspect.BoundArguments.apply_defaults "inspect.BoundArguments.apply_defaults") to add them. Changed in version 3.9: [`arguments`](#inspect.BoundArguments.arguments "inspect.BoundArguments.arguments") is now of type [`dict`](stdtypes#dict "dict"). Formerly, it was of type [`collections.OrderedDict`](collections#collections.OrderedDict "collections.OrderedDict"). `args` A tuple of positional argument values. Dynamically computed from the [`arguments`](#inspect.BoundArguments.arguments "inspect.BoundArguments.arguments") attribute. `kwargs` A dict of keyword argument values. Dynamically computed from the [`arguments`](#inspect.BoundArguments.arguments "inspect.BoundArguments.arguments") attribute. `signature` A reference to the parent [`Signature`](#inspect.Signature "inspect.Signature") object. `apply_defaults()` Set default values for missing arguments. For variable-positional arguments (`*args`) the default is an empty tuple. For variable-keyword arguments (`**kwargs`) the default is an empty dict. ``` >>> def foo(a, b='ham', *args): pass >>> ba = inspect.signature(foo).bind('spam') >>> ba.apply_defaults() >>> ba.arguments {'a': 'spam', 'b': 'ham', 'args': ()} ``` New in version 3.5.
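Similarly, [`Signature.bind_partial()`](#inspect.Signature.bind_partial "inspect.Signature.bind_partial"), described earlier, produces a [`BoundArguments`](#inspect.BoundArguments "inspect.BoundArguments") holding only the arguments actually supplied; a minimal sketch (with a throwaway example function):

```
>>> def foo(a, b, c=3):
...     pass
>>> ba = inspect.signature(foo).bind_partial(1)
>>> ba.arguments
{'a': 1}
```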
The [`args`](#inspect.BoundArguments.args "inspect.BoundArguments.args") and [`kwargs`](#inspect.BoundArguments.kwargs "inspect.BoundArguments.kwargs") properties can be used to invoke functions: ``` def test(a, *, b): ... sig = signature(test) ba = sig.bind(10, b=20) test(*ba.args, **ba.kwargs) ``` See also [**PEP 362**](https://www.python.org/dev/peps/pep-0362) - Function Signature Object. The detailed specification, implementation details and examples. Classes and functions --------------------- `inspect.getclasstree(classes, unique=False)` Arrange the given list of classes into a hierarchy of nested lists. Where a nested list appears, it contains classes derived from the class whose entry immediately precedes the list. Each entry is a 2-tuple containing a class and a tuple of its base classes. If the *unique* argument is true, exactly one entry appears in the returned structure for each class in the given list. Otherwise, classes using multiple inheritance and their descendants will appear multiple times. `inspect.getargspec(func)` Get the names and default values of a Python function’s parameters. A [named tuple](../glossary#term-named-tuple) `ArgSpec(args, varargs, keywords, defaults)` is returned. *args* is a list of the parameter names. *varargs* and *keywords* are the names of the `*` and `**` parameters or `None`. *defaults* is a tuple of default argument values or `None` if there are no default arguments; if this tuple has *n* elements, they correspond to the last *n* elements listed in *args*. Deprecated since version 3.0: Use [`getfullargspec()`](#inspect.getfullargspec "inspect.getfullargspec") for an updated API that is usually a drop-in replacement, but also correctly handles function annotations and keyword-only parameters. Alternatively, use [`signature()`](#inspect.signature "inspect.signature") and [Signature Object](#inspect-signature-object), which provide a more structured introspection API for callables. `inspect.getfullargspec(func)` Get the names and default values of a Python function’s parameters. A [named tuple](../glossary#term-named-tuple) is returned: `FullArgSpec(args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations)` *args* is a list of the positional parameter names. *varargs* is the name of the `*` parameter or `None` if arbitrary positional arguments are not accepted. *varkw* is the name of the `**` parameter or `None` if arbitrary keyword arguments are not accepted. *defaults* is an *n*-tuple of default argument values corresponding to the last *n* positional parameters, or `None` if there are no such defaults defined. *kwonlyargs* is a list of keyword-only parameter names in declaration order. *kwonlydefaults* is a dictionary mapping parameter names from *kwonlyargs* to the default values used if no argument is supplied. *annotations* is a dictionary mapping parameter names to annotations. The special key `"return"` is used to report the function return value annotation (if any). Note that [`signature()`](#inspect.signature "inspect.signature") and [Signature Object](#inspect-signature-object) provide the recommended API for callable introspection, and support additional behaviours (like positional-only arguments) that are sometimes encountered in extension module APIs. This function is retained primarily for use in code that needs to maintain compatibility with the Python 2 `inspect` module API. 
Changed in version 3.4: This function is now based on [`signature()`](#inspect.signature "inspect.signature"), but still ignores `__wrapped__` attributes and includes the already bound first parameter in the signature output for bound methods. Changed in version 3.6: This function was previously documented as deprecated in favour of [`signature()`](#inspect.signature "inspect.signature") in Python 3.5, but that decision has been reversed in order to restore a clearly supported standard interface for single-source Python 2/3 code migrating away from the legacy [`getargspec()`](#inspect.getargspec "inspect.getargspec") API. Changed in version 3.7: Python only explicitly guaranteed that it preserved the declaration order of keyword-only parameters as of version 3.7, although in practice this order had always been preserved in Python 3. `inspect.getargvalues(frame)` Get information about arguments passed into a particular frame. A [named tuple](../glossary#term-named-tuple) `ArgInfo(args, varargs, keywords, locals)` is returned. *args* is a list of the argument names. *varargs* and *keywords* are the names of the `*` and `**` arguments or `None`. *locals* is the locals dictionary of the given frame. Note This function was inadvertently marked as deprecated in Python 3.5. `inspect.formatargspec(args[, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations[, formatarg, formatvarargs, formatvarkw, formatvalue, formatreturns, formatannotations]])` Format a pretty argument spec from the values returned by [`getfullargspec()`](#inspect.getfullargspec "inspect.getfullargspec"). The first seven arguments are (`args`, `varargs`, `varkw`, `defaults`, `kwonlyargs`, `kwonlydefaults`, `annotations`). The other six arguments are functions that are called to turn argument names, `*` argument name, `**` argument name, default values, return annotation and individual annotations into strings, respectively. For example: ``` >>> from inspect import formatargspec, getfullargspec >>> def f(a: int, b: float): ... pass ... >>> formatargspec(*getfullargspec(f)) '(a: int, b: float)' ``` Deprecated since version 3.5: Use [`signature()`](#inspect.signature "inspect.signature") and [Signature Object](#inspect-signature-object), which provide a better introspecting API for callables. `inspect.formatargvalues(args[, varargs, varkw, locals, formatarg, formatvarargs, formatvarkw, formatvalue])` Format a pretty argument spec from the four values returned by [`getargvalues()`](#inspect.getargvalues "inspect.getargvalues"). The format\* arguments are the corresponding optional formatting functions that are called to turn names and values into strings. Note This function was inadvertently marked as deprecated in Python 3.5. `inspect.getmro(cls)` Return a tuple of class cls’s base classes, including cls, in method resolution order. No class appears more than once in this tuple. Note that the method resolution order depends on cls’s type. Unless a very peculiar user-defined metatype is in use, cls will be the first element of the tuple. `inspect.getcallargs(func, /, *args, **kwds)` Bind the *args* and *kwds* to the argument names of the Python function or method *func*, as if it were called with them. For bound methods, also bind the first argument (typically named `self`) to the associated instance. A dict is returned, mapping the argument names (including the names of the `*` and `**` arguments, if any) to their values from *args* and *kwds*. If *func* is invoked incorrectly, i.e.
whenever `func(*args, **kwds)` would raise an exception because of incompatible signature, an exception of the same type and the same or similar message is raised. For example: ``` >>> from inspect import getcallargs >>> def f(a, b=1, *pos, **named): ... pass >>> getcallargs(f, 1, 2, 3) == {'a': 1, 'named': {}, 'b': 2, 'pos': (3,)} True >>> getcallargs(f, a=2, x=4) == {'a': 2, 'named': {'x': 4}, 'b': 1, 'pos': ()} True >>> getcallargs(f) Traceback (most recent call last): ... TypeError: f() missing 1 required positional argument: 'a' ``` New in version 3.2. Deprecated since version 3.5: Use [`Signature.bind()`](#inspect.Signature.bind "inspect.Signature.bind") and [`Signature.bind_partial()`](#inspect.Signature.bind_partial "inspect.Signature.bind_partial") instead. `inspect.getclosurevars(func)` Get the mapping of external name references in a Python function or method *func* to their current values. A [named tuple](../glossary#term-named-tuple) `ClosureVars(nonlocals, globals, builtins, unbound)` is returned. *nonlocals* maps referenced names to lexical closure variables, *globals* to the function’s module globals and *builtins* to the builtins visible from the function body. *unbound* is the set of names referenced in the function that could not be resolved at all given the current module globals and builtins. [`TypeError`](exceptions#TypeError "TypeError") is raised if *func* is not a Python function or method. New in version 3.3. `inspect.unwrap(func, *, stop=None)` Get the object wrapped by *func*. It follows the chain of `__wrapped__` attributes returning the last object in the chain. *stop* is an optional callback accepting an object in the wrapper chain as its sole argument that allows the unwrapping to be terminated early if the callback returns a true value. If the callback never returns a true value, the last object in the chain is returned as usual. For example, [`signature()`](#inspect.signature "inspect.signature") uses this to stop unwrapping if any object in the chain has a `__signature__` attribute defined. [`ValueError`](exceptions#ValueError "ValueError") is raised if a cycle is encountered. New in version 3.4. The interpreter stack --------------------- When the following functions return “frame records,” each record is a [named tuple](../glossary#term-named-tuple) `FrameInfo(frame, filename, lineno, function, code_context, index)`. The tuple contains the frame object, the filename, the line number of the current line, the function name, a list of lines of context from the source code, and the index of the current line within that list. Changed in version 3.5: Return a named tuple instead of a tuple. Note Keeping references to frame objects, as found in the first element of the frame records these functions return, can cause your program to create reference cycles. Once a reference cycle has been created, the lifespan of all objects which can be accessed from the objects which form the cycle can become much longer even if Python’s optional cycle detector is enabled. If such cycles must be created, it is important to ensure they are explicitly broken to avoid the delayed destruction of objects and increased memory consumption which occurs. Though the cycle detector will catch these, destruction of the frames (and local variables) can be made deterministic by removing the cycle in a [`finally`](../reference/compound_stmts#finally) clause. 
This is also important if the cycle detector was disabled when Python was compiled or using [`gc.disable()`](gc#gc.disable "gc.disable"). For example: ``` def handle_stackframe_without_leak(): frame = inspect.currentframe() try: # do something with the frame finally: del frame ``` If you want to keep the frame around (for example to print a traceback later), you can also break reference cycles by using the [`frame.clear()`](../reference/datamodel#frame.clear "frame.clear") method. The optional *context* argument supported by most of these functions specifies the number of lines of context to return, which are centered around the current line. `inspect.getframeinfo(frame, context=1)` Get information about a frame or traceback object. A [named tuple](../glossary#term-named-tuple) `Traceback(filename, lineno, function, code_context, index)` is returned. `inspect.getouterframes(frame, context=1)` Get a list of frame records for a frame and all outer frames. These frames represent the calls that led to the creation of *frame*. The first entry in the returned list represents *frame*; the last entry represents the outermost call on *frame*’s stack. Changed in version 3.5: A list of [named tuples](../glossary#term-named-tuple) `FrameInfo(frame, filename, lineno, function, code_context, index)` is returned. `inspect.getinnerframes(traceback, context=1)` Get a list of frame records for a traceback’s frame and all inner frames. These frames represent calls made as a consequence of *frame*. The first entry in the list represents *traceback*; the last entry represents where the exception was raised. Changed in version 3.5: A list of [named tuples](../glossary#term-named-tuple) `FrameInfo(frame, filename, lineno, function, code_context, index)` is returned. `inspect.currentframe()` Return the frame object for the caller’s stack frame. **CPython implementation detail:** This function relies on Python stack frame support in the interpreter, which isn’t guaranteed to exist in all implementations of Python. If running in an implementation without Python stack frame support this function returns `None`. `inspect.stack(context=1)` Return a list of frame records for the caller’s stack. The first entry in the returned list represents the caller; the last entry represents the outermost call on the stack. Changed in version 3.5: A list of [named tuples](../glossary#term-named-tuple) `FrameInfo(frame, filename, lineno, function, code_context, index)` is returned. `inspect.trace(context=1)` Return a list of frame records for the stack between the current frame and the frame in which an exception currently being handled was raised. The first entry in the list represents the caller; the last entry represents where the exception was raised. Changed in version 3.5: A list of [named tuples](../glossary#term-named-tuple) `FrameInfo(frame, filename, lineno, function, code_context, index)` is returned. Fetching attributes statically ------------------------------ Both [`getattr()`](functions#getattr "getattr") and [`hasattr()`](functions#hasattr "hasattr") can trigger code execution when fetching or checking for the existence of attributes. Descriptors, like properties, will be invoked and [`__getattr__()`](../reference/datamodel#object.__getattr__ "object.__getattr__") and [`__getattribute__()`](../reference/datamodel#object.__getattribute__ "object.__getattribute__") may be called. For cases where you want passive introspection, like documentation tools, this can be inconvenient.
[`getattr_static()`](#inspect.getattr_static "inspect.getattr_static") has the same signature as [`getattr()`](functions#getattr "getattr") but avoids executing code when it fetches attributes. `inspect.getattr_static(obj, attr, default=None)` Retrieve attributes without triggering dynamic lookup via the descriptor protocol, [`__getattr__()`](../reference/datamodel#object.__getattr__ "object.__getattr__") or [`__getattribute__()`](../reference/datamodel#object.__getattribute__ "object.__getattribute__"). Note: this function may not be able to retrieve all attributes that getattr can fetch (like dynamically created attributes) and may find attributes that getattr can’t (like descriptors that raise AttributeError). It can also return descriptor objects instead of instance members. If the instance [`__dict__`](stdtypes#object.__dict__ "object.__dict__") is shadowed by another member (for example a property) then this function will be unable to find instance members. New in version 3.2. [`getattr_static()`](#inspect.getattr_static "inspect.getattr_static") does not resolve descriptors, for example slot descriptors or getset descriptors on objects implemented in C. The descriptor object is returned instead of the underlying attribute. You can handle these with code like the following. Note that for arbitrary getset descriptors invoking these may trigger code execution: ``` # example code for resolving the builtin descriptor types class _foo: __slots__ = ['foo'] slot_descriptor = type(_foo.foo) getset_descriptor = type(type(open(__file__)).name) wrapper_descriptor = type(str.__dict__['__add__']) descriptor_types = (slot_descriptor, getset_descriptor, wrapper_descriptor) result = getattr_static(some_object, 'foo') if type(result) in descriptor_types: try: result = result.__get__() except AttributeError: # descriptors can raise AttributeError to # indicate there is no underlying value # in which case the descriptor itself will # have to do pass ``` Current State of Generators and Coroutines ------------------------------------------ When implementing coroutine schedulers and for other advanced uses of generators, it is useful to determine whether a generator is currently executing, is waiting to start or resume execution, or has already terminated. [`getgeneratorstate()`](#inspect.getgeneratorstate "inspect.getgeneratorstate") allows the current state of a generator to be determined easily. `inspect.getgeneratorstate(generator)` Get current state of a generator-iterator. Possible states are: * GEN\_CREATED: Waiting to start execution. * GEN\_RUNNING: Currently being executed by the interpreter. * GEN\_SUSPENDED: Currently suspended at a yield expression. * GEN\_CLOSED: Execution has completed. New in version 3.2. `inspect.getcoroutinestate(coroutine)` Get current state of a coroutine object. The function is intended to be used with coroutine objects created by [`async def`](../reference/compound_stmts#async-def) functions, but will accept any coroutine-like object that has `cr_running` and `cr_frame` attributes. Possible states are: * CORO\_CREATED: Waiting to start execution. * CORO\_RUNNING: Currently being executed by the interpreter. * CORO\_SUSPENDED: Currently suspended at an await expression. * CORO\_CLOSED: Execution has completed. New in version 3.5. The current internal state of the generator can also be queried.
This is mostly useful for testing purposes, to ensure that internal state is being updated as expected: `inspect.getgeneratorlocals(generator)` Get the mapping of live local variables in *generator* to their current values. A dictionary is returned that maps from variable names to values. This is the equivalent of calling [`locals()`](functions#locals "locals") in the body of the generator, and all the same caveats apply. If *generator* is a [generator](../glossary#term-generator) with no currently associated frame, then an empty dictionary is returned. [`TypeError`](exceptions#TypeError "TypeError") is raised if *generator* is not a Python generator object. **CPython implementation detail:** This function relies on the generator exposing a Python stack frame for introspection, which isn’t guaranteed to be the case in all implementations of Python. In such cases, this function will always return an empty dictionary. New in version 3.3. `inspect.getcoroutinelocals(coroutine)` This function is analogous to [`getgeneratorlocals()`](#inspect.getgeneratorlocals "inspect.getgeneratorlocals"), but works for coroutine objects created by [`async def`](../reference/compound_stmts#async-def) functions. New in version 3.5. Code Objects Bit Flags ---------------------- Python code objects have a `co_flags` attribute, which is a bitmap of the following flags: `inspect.CO_OPTIMIZED` The code object is optimized, using fast locals. `inspect.CO_NEWLOCALS` If set, a new dict will be created for the frame’s `f_locals` when the code object is executed. `inspect.CO_VARARGS` The code object has a variable positional parameter (`*args`-like). `inspect.CO_VARKEYWORDS` The code object has a variable keyword parameter (`**kwargs`-like). `inspect.CO_NESTED` The flag is set when the code object is a nested function. `inspect.CO_GENERATOR` The flag is set when the code object is a generator function, i.e. a generator object is returned when the code object is executed. `inspect.CO_NOFREE` The flag is set if there are no free or cell variables. `inspect.CO_COROUTINE` The flag is set when the code object is a coroutine function. When the code object is executed it returns a coroutine object. See [**PEP 492**](https://www.python.org/dev/peps/pep-0492) for more details. New in version 3.5. `inspect.CO_ITERABLE_COROUTINE` The flag is used to transform generators into generator-based coroutines. Generator objects with this flag can be used in an `await` expression, and can `yield from` coroutine objects. See [**PEP 492**](https://www.python.org/dev/peps/pep-0492) for more details. New in version 3.5. `inspect.CO_ASYNC_GENERATOR` The flag is set when the code object is an asynchronous generator function. When the code object is executed it returns an asynchronous generator object. See [**PEP 525**](https://www.python.org/dev/peps/pep-0525) for more details. New in version 3.6. Note The flags are specific to CPython, and may not be defined in other Python implementations. Furthermore, the flags are an implementation detail, and can be removed or deprecated in future Python releases. It’s recommended to use public APIs from the [`inspect`](#module-inspect "inspect: Extract information and source code from live objects.") module for any introspection needs. Command Line Interface ---------------------- The [`inspect`](#module-inspect "inspect: Extract information and source code from live objects.") module also provides a basic introspection capability from the command line.
By default, it accepts the name of a module and prints the source of that module. A class or function within the module can be printed instead by appending a colon and the qualified name of the target object. `--details` Print information about the specified object rather than the source code.
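For instance, the following shell session is a sketch of both modes (the module and class names are arbitrary examples; the exact output depends on the installed version):

```
$ python -m inspect reprlib:Repr     # print the source of the Repr class
$ python -m inspect --details reprlib   # print metadata instead of the source
```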
reprlib — Alternate repr() implementation ========================================= **Source code:** [Lib/reprlib.py](https://github.com/python/cpython/tree/3.9/Lib/reprlib.py) The [`reprlib`](#module-reprlib "reprlib: Alternate repr() implementation with size limits.") module provides a means for producing object representations with limits on the size of the resulting strings. This is used in the Python debugger and may be useful in other contexts as well. This module provides a class, an instance, and a function: `class reprlib.Repr` Class which provides formatting services useful in implementing functions similar to the built-in [`repr()`](functions#repr "repr"); size limits for different object types are added to avoid the generation of representations which are excessively long. `reprlib.aRepr` This is an instance of [`Repr`](#reprlib.Repr "reprlib.Repr") which is used to provide the [`repr()`](#reprlib.repr "reprlib.repr") function described below. Changing the attributes of this object will affect the size limits used by [`repr()`](#reprlib.repr "reprlib.repr") and the Python debugger. `reprlib.repr(obj)` This is the [`repr()`](#reprlib.Repr.repr "reprlib.Repr.repr") method of `aRepr`. It returns a string similar to that returned by the built-in function of the same name, but with limits on most sizes. In addition to size-limiting tools, the module also provides a decorator for detecting recursive calls to [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__") and substituting a placeholder string instead. `@reprlib.recursive_repr(fillvalue="...")` Decorator for [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__") methods to detect recursive calls within the same thread. If a recursive call is made, the *fillvalue* is returned, otherwise, the usual [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__") call is made. For example: ``` >>> from reprlib import recursive_repr >>> class MyList(list): ... @recursive_repr() ... def __repr__(self): ... return '<' + '|'.join(map(repr, self)) + '>' ... >>> m = MyList('abc') >>> m.append(m) >>> m.append('x') >>> print(m) <'a'|'b'|'c'|...|'x'> ``` New in version 3.2. Repr Objects ------------ [`Repr`](#reprlib.Repr "reprlib.Repr") instances provide several attributes which can be used to provide size limits for the representations of different object types, and methods which format specific object types. `Repr.maxlevel` Depth limit on the creation of recursive representations. The default is `6`. `Repr.maxdict` `Repr.maxlist` `Repr.maxtuple` `Repr.maxset` `Repr.maxfrozenset` `Repr.maxdeque` `Repr.maxarray` Limits on the number of entries represented for the named object type. The default is `4` for [`maxdict`](#reprlib.Repr.maxdict "reprlib.Repr.maxdict"), `5` for [`maxarray`](#reprlib.Repr.maxarray "reprlib.Repr.maxarray"), and `6` for the others. `Repr.maxlong` Maximum number of characters in the representation for an integer. Digits are dropped from the middle. The default is `40`. `Repr.maxstring` Limit on the number of characters in the representation of the string. Note that the “normal” representation of the string is used as the character source: if escape sequences are needed in the representation, these may be mangled when the representation is shortened. The default is `30`.
`Repr.maxother` This limit is used to control the size of object types for which no specific formatting method is available on the [`Repr`](#reprlib.Repr "reprlib.Repr") object. It is applied in a similar manner as [`maxstring`](#reprlib.Repr.maxstring "reprlib.Repr.maxstring"). The default is `20`. `Repr.repr(obj)` The equivalent to the built-in [`repr()`](functions#repr "repr") that uses the formatting imposed by the instance. `Repr.repr1(obj, level)` Recursive implementation used by [`repr()`](#reprlib.Repr.repr "reprlib.Repr.repr"). This uses the type of *obj* to determine which formatting method to call, passing it *obj* and *level*. The type-specific methods should call [`repr1()`](#reprlib.Repr.repr1 "reprlib.Repr.repr1") to perform recursive formatting, with `level - 1` for the value of *level* in the recursive call. `Repr.repr_TYPE(obj, level)` Formatting methods for specific types are implemented as methods with a name based on the type name. In the method name, **TYPE** is replaced by `'_'.join(type(obj).__name__.split())`. Dispatch to these methods is handled by [`repr1()`](#reprlib.Repr.repr1 "reprlib.Repr.repr1"). Type-specific methods which need to recursively format a value should call `self.repr1(subobj, level - 1)`. Subclassing Repr Objects ------------------------ The use of dynamic dispatching by [`Repr.repr1()`](#reprlib.Repr.repr1 "reprlib.Repr.repr1") allows subclasses of [`Repr`](#reprlib.Repr "reprlib.Repr") to add support for additional built-in object types or to modify the handling of types already supported. This example shows how special support for file objects could be added: ``` import reprlib import sys class MyRepr(reprlib.Repr): def repr_TextIOWrapper(self, obj, level): if obj.name in {'<stdin>', '<stdout>', '<stderr>'}: return obj.name return repr(obj) aRepr = MyRepr() print(aRepr.repr(sys.stdin)) # prints '<stdin>' ``` xmlrpc — XMLRPC server and client modules ========================================= XML-RPC is a Remote Procedure Call method that uses XML passed via HTTP as a transport. With it, a client can call methods with parameters on a remote server (the server is named by a URI) and get back structured data. `xmlrpc` is a package that collects server and client modules implementing XML-RPC. The modules are: * [`xmlrpc.client`](xmlrpc.client#module-xmlrpc.client "xmlrpc.client: XML-RPC client access.") * [`xmlrpc.server`](xmlrpc.server#module-xmlrpc.server "xmlrpc.server: Basic XML-RPC server implementations.") Exceptions ========== **Source code:** [Lib/asyncio/exceptions.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/exceptions.py) `exception asyncio.TimeoutError` The operation has exceeded the given deadline. Important This exception is different from the builtin [`TimeoutError`](exceptions#TimeoutError "TimeoutError") exception. `exception asyncio.CancelledError` The operation has been cancelled. This exception can be caught to perform custom operations when asyncio Tasks are cancelled. In almost all situations the exception must be re-raised. Changed in version 3.8: [`CancelledError`](#asyncio.CancelledError "asyncio.CancelledError") is now a subclass of [`BaseException`](exceptions#BaseException "BaseException"). `exception asyncio.InvalidStateError` Invalid internal state of [`Task`](asyncio-task#asyncio.Task "asyncio.Task") or [`Future`](asyncio-future#asyncio.Future "asyncio.Future").
Can be raised in situations like setting a result value for a *Future* object that already has a result value set. `exception asyncio.SendfileNotAvailableError` The “sendfile” syscall is not available for the given socket or file type. A subclass of [`RuntimeError`](exceptions#RuntimeError "RuntimeError"). `exception asyncio.IncompleteReadError` The requested read operation did not complete fully. Raised by the [asyncio stream APIs](asyncio-stream#asyncio-streams). This exception is a subclass of [`EOFError`](exceptions#EOFError "EOFError"). `expected` The total number ([`int`](functions#int "int")) of expected bytes. `partial` A string of [`bytes`](stdtypes#bytes "bytes") read before the end of stream was reached. `exception asyncio.LimitOverrunError` Reached the buffer size limit while looking for a separator. Raised by the [asyncio stream APIs](asyncio-stream#asyncio-streams). `consumed` The total number of bytes to be consumed. pprint — Data pretty printer ============================ **Source code:** [Lib/pprint.py](https://github.com/python/cpython/tree/3.9/Lib/pprint.py) The [`pprint`](#module-pprint "pprint: Data pretty printer.") module provides a capability to “pretty-print” arbitrary Python data structures in a form which can be used as input to the interpreter. If the formatted structures include objects which are not fundamental Python types, the representation may not be loadable. This may be the case if objects such as files, sockets or classes are included, as well as many other objects which are not representable as Python literals. The formatted representation keeps objects on a single line if it can, and breaks them onto multiple lines if they don’t fit within the allowed width. Construct [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") objects explicitly if you need to adjust the width constraint. Dictionaries are sorted by key before the display is computed. Changed in version 3.9: Added support for pretty-printing [`types.SimpleNamespace`](types#types.SimpleNamespace "types.SimpleNamespace"). The [`pprint`](#module-pprint "pprint: Data pretty printer.") module defines one class: `class pprint.PrettyPrinter(indent=1, width=80, depth=None, stream=None, *, compact=False, sort_dicts=True)` Construct a [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") instance. This constructor understands several keyword parameters. An output stream may be set using the *stream* keyword; the only method used on the stream object is the file protocol’s `write()` method. If not specified, the [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") adopts `sys.stdout`. The amount of indentation added for each recursive level is specified by *indent*; the default is one. Other values can cause output to look a little odd, but can make nesting easier to spot. The number of levels which may be printed is controlled by *depth*; if the data structure being printed is too deep, the next contained level is replaced by `...`. By default, there is no constraint on the depth of the objects being formatted. The desired output width is constrained using the *width* parameter; the default is 80 characters. If a structure cannot be formatted within the constrained width, a best effort will be made. If *compact* is false (the default), each item of a long sequence will be formatted on a separate line. If *compact* is true, as many items as will fit within the *width* will be formatted on each output line.
If *sort\_dicts* is true (the default), dictionaries will be formatted with their keys sorted, otherwise they will display in insertion order. Changed in version 3.4: Added the *compact* parameter. Changed in version 3.8: Added the *sort\_dicts* parameter. ``` >>> import pprint >>> stuff = ['spam', 'eggs', 'lumberjack', 'knights', 'ni'] >>> stuff.insert(0, stuff[:]) >>> pp = pprint.PrettyPrinter(indent=4) >>> pp.pprint(stuff) [ ['spam', 'eggs', 'lumberjack', 'knights', 'ni'], 'spam', 'eggs', 'lumberjack', 'knights', 'ni'] >>> pp = pprint.PrettyPrinter(width=41, compact=True) >>> pp.pprint(stuff) [['spam', 'eggs', 'lumberjack', 'knights', 'ni'], 'spam', 'eggs', 'lumberjack', 'knights', 'ni'] >>> tup = ('spam', ('eggs', ('lumberjack', ('knights', ('ni', ('dead', ... ('parrot', ('fresh fruit',)))))))) >>> pp = pprint.PrettyPrinter(depth=6) >>> pp.pprint(tup) ('spam', ('eggs', ('lumberjack', ('knights', ('ni', ('dead', (...))))))) ``` The [`pprint`](#module-pprint "pprint: Data pretty printer.") module also provides several shortcut functions: `pprint.pformat(object, indent=1, width=80, depth=None, *, compact=False, sort_dicts=True)` Return the formatted representation of *object* as a string. *indent*, *width*, *depth*, *compact* and *sort\_dicts* will be passed to the [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") constructor as formatting parameters. Changed in version 3.4: Added the *compact* parameter. Changed in version 3.8: Added the *sort\_dicts* parameter. `pprint.pp(object, *args, sort_dicts=False, **kwargs)` Prints the formatted representation of *object* followed by a newline. If *sort\_dicts* is false (the default), dictionaries will be displayed with their keys in insertion order, otherwise the dict keys will be sorted. *args* and *kwargs* will be passed to [`pprint()`](#module-pprint "pprint: Data pretty printer.") as formatting parameters. New in version 3.8. `pprint.pprint(object, stream=None, indent=1, width=80, depth=None, *, compact=False, sort_dicts=True)` Prints the formatted representation of *object* on *stream*, followed by a newline. If *stream* is `None`, `sys.stdout` is used. This may be used in the interactive interpreter instead of the [`print()`](functions#print "print") function for inspecting values (you can even reassign `print = pprint.pprint` for use within a scope). *indent*, *width*, *depth*, *compact* and *sort\_dicts* will be passed to the [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") constructor as formatting parameters. Changed in version 3.4: Added the *compact* parameter. Changed in version 3.8: Added the *sort\_dicts* parameter. ``` >>> import pprint >>> stuff = ['spam', 'eggs', 'lumberjack', 'knights', 'ni'] >>> stuff.insert(0, stuff) >>> pprint.pprint(stuff) [<Recursion on list with id=...>, 'spam', 'eggs', 'lumberjack', 'knights', 'ni'] ``` `pprint.isreadable(object)` Determine if the formatted representation of *object* is “readable”, or can be used to reconstruct the value using [`eval()`](functions#eval "eval"). This always returns `False` for recursive objects. ``` >>> pprint.isreadable(stuff) False ``` `pprint.isrecursive(object)` Determine if *object* requires a recursive representation. One more support function is also defined: `pprint.saferepr(object)` Return a string representation of *object*, protected against recursive data structures. If the representation of *object* exposes a recursive entry, the recursive reference will be represented as `<Recursion on typename with id=number>`. 
The representation is not otherwise formatted. ``` >>> pprint.saferepr(stuff) "[<Recursion on list with id=...>, 'spam', 'eggs', 'lumberjack', 'knights', 'ni']" ``` PrettyPrinter Objects --------------------- [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") instances have the following methods: `PrettyPrinter.pformat(object)` Return the formatted representation of *object*. This takes into account the options passed to the [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") constructor. `PrettyPrinter.pprint(object)` Print the formatted representation of *object* on the configured stream, followed by a newline. The following methods provide the implementations for the corresponding functions of the same names. Using these methods on an instance is slightly more efficient since new [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") objects don’t need to be created. `PrettyPrinter.isreadable(object)` Determine if the formatted representation of the object is “readable,” or can be used to reconstruct the value using [`eval()`](functions#eval "eval"). Note that this returns `False` for recursive objects. If the *depth* parameter of the [`PrettyPrinter`](#pprint.PrettyPrinter "pprint.PrettyPrinter") is set and the object is deeper than allowed, this returns `False`. `PrettyPrinter.isrecursive(object)` Determine if the object requires a recursive representation. This method is provided as a hook to allow subclasses to modify the way objects are converted to strings. The default implementation uses the internals of the [`saferepr()`](#pprint.saferepr "pprint.saferepr") implementation. `PrettyPrinter.format(object, context, maxlevels, level)` Returns three values: the formatted version of *object* as a string, a flag indicating whether the result is readable, and a flag indicating whether recursion was detected. The first argument is the object to be presented. The second is a dictionary which contains the [`id()`](functions#id "id") of objects that are part of the current presentation context (direct and indirect containers for *object* that are affecting the presentation) as the keys; if an object needs to be presented which is already represented in *context*, the third return value should be `True`. Recursive calls to the [`format()`](#pprint.PrettyPrinter.format "pprint.PrettyPrinter.format") method should add additional entries for containers to this dictionary. The third argument, *maxlevels*, gives the requested limit to recursion; this will be `0` if there is no requested limit. This argument should be passed unmodified to recursive calls. The fourth argument, *level*, gives the current level; recursive calls should be passed a value less than that of the current call. Example ------- To demonstrate several uses of the [`pprint()`](#module-pprint "pprint: Data pretty printer.") function and its parameters, let’s fetch information about a project from [PyPI](https://pypi.org): ``` >>> import json >>> import pprint >>> from urllib.request import urlopen >>> with urlopen('https://pypi.org/pypi/sampleproject/json') as resp: ... 
project_info = json.load(resp)['info'] ``` In its basic form, [`pprint()`](#module-pprint "pprint: Data pretty printer.") shows the whole object: ``` >>> pprint.pprint(project_info) {'author': 'The Python Packaging Authority', 'author_email': '[email protected]', 'bugtrack_url': None, 'classifiers': ['Development Status :: 3 - Alpha', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.2', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Topic :: Software Development :: Build Tools'], 'description': 'A sample Python project\n' '=======================\n' '\n' 'This is the description file for the project.\n' '\n' 'The file should use UTF-8 encoding and be written using ' 'ReStructured Text. It\n' 'will be used to generate the project webpage on PyPI, and ' 'should be written for\n' 'that purpose.\n' '\n' 'Typical contents for this file would include an overview of ' 'the project, basic\n' 'usage examples, etc. Generally, including the project ' 'changelog in here is not\n' 'a good idea, although a simple "What\'s New" section for the ' 'most recent version\n' 'may be appropriate.', 'description_content_type': None, 'docs_url': None, 'download_url': 'UNKNOWN', 'downloads': {'last_day': -1, 'last_month': -1, 'last_week': -1}, 'home_page': 'https://github.com/pypa/sampleproject', 'keywords': 'sample setuptools development', 'license': 'MIT', 'maintainer': None, 'maintainer_email': None, 'name': 'sampleproject', 'package_url': 'https://pypi.org/project/sampleproject/', 'platform': 'UNKNOWN', 'project_url': 'https://pypi.org/project/sampleproject/', 'project_urls': {'Download': 'UNKNOWN', 'Homepage': 'https://github.com/pypa/sampleproject'}, 'release_url': 'https://pypi.org/project/sampleproject/1.2.0/', 'requires_dist': None, 'requires_python': None, 'summary': 'A sample Python project', 'version': '1.2.0'} ``` The result can be limited to a certain *depth* (ellipsis is used for deeper contents): ``` >>> pprint.pprint(project_info, depth=1) {'author': 'The Python Packaging Authority', 'author_email': '[email protected]', 'bugtrack_url': None, 'classifiers': [...], 'description': 'A sample Python project\n' '=======================\n' '\n' 'This is the description file for the project.\n' '\n' 'The file should use UTF-8 encoding and be written using ' 'ReStructured Text. It\n' 'will be used to generate the project webpage on PyPI, and ' 'should be written for\n' 'that purpose.\n' '\n' 'Typical contents for this file would include an overview of ' 'the project, basic\n' 'usage examples, etc. 
Generally, including the project ' 'changelog in here is not\n' 'a good idea, although a simple "What\'s New" section for the ' 'most recent version\n' 'may be appropriate.', 'description_content_type': None, 'docs_url': None, 'download_url': 'UNKNOWN', 'downloads': {...}, 'home_page': 'https://github.com/pypa/sampleproject', 'keywords': 'sample setuptools development', 'license': 'MIT', 'maintainer': None, 'maintainer_email': None, 'name': 'sampleproject', 'package_url': 'https://pypi.org/project/sampleproject/', 'platform': 'UNKNOWN', 'project_url': 'https://pypi.org/project/sampleproject/', 'project_urls': {...}, 'release_url': 'https://pypi.org/project/sampleproject/1.2.0/', 'requires_dist': None, 'requires_python': None, 'summary': 'A sample Python project', 'version': '1.2.0'} ``` Additionally, maximum character *width* can be suggested. If a long object cannot be split, the specified width will be exceeded: ``` >>> pprint.pprint(project_info, depth=1, width=60) {'author': 'The Python Packaging Authority', 'author_email': '[email protected]', 'bugtrack_url': None, 'classifiers': [...], 'description': 'A sample Python project\n' '=======================\n' '\n' 'This is the description file for the ' 'project.\n' '\n' 'The file should use UTF-8 encoding and be ' 'written using ReStructured Text. It\n' 'will be used to generate the project ' 'webpage on PyPI, and should be written ' 'for\n' 'that purpose.\n' '\n' 'Typical contents for this file would ' 'include an overview of the project, ' 'basic\n' 'usage examples, etc. Generally, including ' 'the project changelog in here is not\n' 'a good idea, although a simple "What\'s ' 'New" section for the most recent version\n' 'may be appropriate.', 'description_content_type': None, 'docs_url': None, 'download_url': 'UNKNOWN', 'downloads': {...}, 'home_page': 'https://github.com/pypa/sampleproject', 'keywords': 'sample setuptools development', 'license': 'MIT', 'maintainer': None, 'maintainer_email': None, 'name': 'sampleproject', 'package_url': 'https://pypi.org/project/sampleproject/', 'platform': 'UNKNOWN', 'project_url': 'https://pypi.org/project/sampleproject/', 'project_urls': {...}, 'release_url': 'https://pypi.org/project/sampleproject/1.2.0/', 'requires_dist': None, 'requires_python': None, 'summary': 'A sample Python project', 'version': '1.2.0'} ```
email — An email and MIME handling package ========================================== **Source code:** [Lib/email/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/email/__init__.py) The [`email`](#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package is a library for managing email messages. It is specifically *not* designed to do any sending of email messages to SMTP ([**RFC 2821**](https://tools.ietf.org/html/rfc2821.html)), NNTP, or other servers; those are functions of modules such as [`smtplib`](smtplib#module-smtplib "smtplib: SMTP protocol client (requires sockets).") and [`nntplib`](nntplib#module-nntplib "nntplib: NNTP protocol client (requires sockets). (deprecated)"). The [`email`](#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package attempts to be as RFC-compliant as possible, supporting [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html) and [**RFC 6532**](https://tools.ietf.org/html/rfc6532.html), as well as such MIME-related RFCs as [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html), [**RFC 2046**](https://tools.ietf.org/html/rfc2046.html), [**RFC 2047**](https://tools.ietf.org/html/rfc2047.html), [**RFC 2183**](https://tools.ietf.org/html/rfc2183.html), and [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html). The overall structure of the email package can be divided into three major components, plus a fourth component that controls the behavior of the other components. The central component of the package is an “object model” that represents email messages. An application interacts with the package primarily through the object model interface defined in the [`message`](email.message#module-email.message "email.message: The base class representing email messages.") sub-module. The application can use this API to ask questions about an existing email, to construct a new email, or to add or remove email subcomponents that themselves use the same object model interface. That is, following the nature of email messages and their MIME subcomponents, the email object model is a tree structure of objects that all provide the [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") API. The other two major components of the package are the [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") and the [`generator`](email.generator#module-email.generator "email.generator: Generate flat text email messages from a message structure."). The parser takes the serialized version of an email message (a stream of bytes) and converts it into a tree of [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") objects. The generator takes an [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") and turns it back into a serialized byte stream. (The parser and generator also handle streams of text characters, but this usage is discouraged as it is too easy to end up with messages that are not valid in one way or another.) The control component is the [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages") module.
Every [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage"), every [`generator`](email.generator#module-email.generator "email.generator: Generate flat text email messages from a message structure."), and every [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") has an associated [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages") object that controls its behavior. Usually an application only needs to specify the policy when an [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") is created, either by directly instantiating an [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") to create a new email, or by parsing an input stream using a [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure."). But the policy can be changed when the message is serialized using a [`generator`](email.generator#module-email.generator "email.generator: Generate flat text email messages from a message structure."). This allows, for example, a generic email message to be parsed from disk, but to serialize it using standard SMTP settings when sending it to an email server. The email package does its best to hide the details of the various governing RFCs from the application. Conceptually the application should be able to treat the email message as a structured tree of unicode text and binary attachments, without having to worry about how these are represented when serialized. In practice, however, it is often necessary to be aware of at least some of the rules governing MIME messages and their structure, specifically the names and nature of the MIME “content types” and how they identify multipart documents. For the most part this knowledge should only be required for more complex applications, and even then it should only be the high level structure in question, and not the details of how those structures are represented. Since MIME content types are used widely in modern internet software (not just email), this will be a familiar concept to many programmers. The following sections describe the functionality of the [`email`](#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package. We start with the [`message`](email.message#module-email.message "email.message: The base class representing email messages.") object model, which is the primary interface an application will use, and follow that with the [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") and [`generator`](email.generator#module-email.generator "email.generator: Generate flat text email messages from a message structure.") components. Then we cover the [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages") controls, which completes the treatment of the main components of the library. The next three sections cover the exceptions the package may raise and the defects (non-compliance with the RFCs) that the [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") may detect. 
Then we cover the [`headerregistry`](email.headerregistry#module-email.headerregistry "email.headerregistry: Automatic Parsing of headers based on the field name") and the [`contentmanager`](email.contentmanager#module-email.contentmanager "email.contentmanager: Storing and Retrieving Content from MIME Parts") sub-components, which provide tools for doing more detailed manipulation of headers and payloads, respectively. Both of these components contain features relevant to consuming and producing non-trivial messages, but also document their extensibility APIs, which will be of interest to advanced applications. Following those is a set of examples of using the fundamental parts of the APIs covered in the preceding sections. The foregoing represent the modern (unicode friendly) API of the email package. The remaining sections, starting with the [`Message`](email.compat32-message#email.message.Message "email.message.Message") class, cover the legacy [`compat32`](email.policy#email.policy.compat32 "email.policy.compat32") API that deals much more directly with the details of how email messages are represented. The [`compat32`](email.policy#email.policy.compat32 "email.policy.compat32") API does *not* hide the details of the RFCs from the application, but for applications that need to operate at that level, they can be useful tools. This documentation is also relevant for applications that are still using the [`compat32`](email.policy#email.policy.compat32 "email.policy.compat32") API for backward compatibility reasons. Changed in version 3.6: Docs reorganized and rewritten to promote the new [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage")/[`EmailPolicy`](email.policy#email.policy.EmailPolicy "email.policy.EmailPolicy") API. 
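To make the overview concrete before diving into the sub-module pages, here is a minimal sketch of the parse-then-reserialize pattern described above. The file names are hypothetical; the classes and policies shown are documented in the pages listed below.

```
from email import message_from_binary_file
from email.generator import BytesGenerator
from email.policy import SMTP, default

# Parse a message from disk using the modern default policy...
with open('message.eml', 'rb') as src:
    msg = message_from_binary_file(src, policy=default)

# ...then serialize it for transmission with the SMTP policy,
# which uses the CRLF line endings required on the wire.
with open('outgoing.eml', 'wb') as dst:
    BytesGenerator(dst, policy=SMTP).flatten(msg)
```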
Contents of the [`email`](#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package documentation: * [`email.message`: Representing an email message](email.message) * [`email.parser`: Parsing email messages](email.parser) + [FeedParser API](email.parser#feedparser-api) + [Parser API](email.parser#parser-api) + [Additional notes](email.parser#additional-notes) * [`email.generator`: Generating MIME documents](email.generator) * [`email.policy`: Policy Objects](email.policy) * [`email.errors`: Exception and Defect classes](email.errors) * [`email.headerregistry`: Custom Header Objects](email.headerregistry) * [`email.contentmanager`: Managing MIME Content](email.contentmanager) + [Content Manager Instances](email.contentmanager#content-manager-instances) * [`email`: Examples](email.examples) Legacy API: * [`email.message.Message`: Representing an email message using the `compat32` API](email.compat32-message) * [`email.mime`: Creating email and MIME objects from scratch](email.mime) * [`email.header`: Internationalized headers](email.header) * [`email.charset`: Representing character sets](email.charset) * [`email.encoders`: Encoders](email.encoders) * [`email.utils`: Miscellaneous utilities](email.utils) * [`email.iterators`: Iterators](email.iterators) See also `Module` [`smtplib`](smtplib#module-smtplib "smtplib: SMTP protocol client (requires sockets).") SMTP (Simple Mail Transport Protocol) client `Module` [`poplib`](poplib#module-poplib "poplib: POP3 protocol client (requires sockets).") POP (Post Office Protocol) client `Module` [`imaplib`](imaplib#module-imaplib "imaplib: IMAP4 protocol client (requires sockets).") IMAP (Internet Message Access Protocol) client `Module` [`nntplib`](nntplib#module-nntplib "nntplib: NNTP protocol client (requires sockets). (deprecated)") NNTP (Net News Transport Protocol) client `Module` [`mailbox`](mailbox#module-mailbox "mailbox: Manipulate mailboxes in various formats") Tools for creating, reading, and managing collections of messages on disk using a variety of standard formats. `Module` [`smtpd`](smtpd#module-smtpd "smtpd: A SMTP server implementation in Python. (deprecated)") SMTP server framework (primarily useful for testing) python glob — Unix style pathname pattern expansion glob — Unix style pathname pattern expansion ============================================ **Source code:** [Lib/glob.py](https://github.com/python/cpython/tree/3.9/Lib/glob.py) The [`glob`](#module-glob "glob: Unix shell style pathname pattern expansion.") module finds all the pathnames matching a specified pattern according to the rules used by the Unix shell, although results are returned in arbitrary order. No tilde expansion is done, but `*`, `?`, and character ranges expressed with `[]` will be correctly matched. This is done by using the [`os.scandir()`](os#os.scandir "os.scandir") and [`fnmatch.fnmatch()`](fnmatch#fnmatch.fnmatch "fnmatch.fnmatch") functions in concert, and not by actually invoking a subshell. Note that files beginning with a dot (`.`) can only be matched by patterns that also start with a dot, unlike [`fnmatch.fnmatch()`](fnmatch#fnmatch.fnmatch "fnmatch.fnmatch") or [`pathlib.Path.glob()`](pathlib#pathlib.Path.glob "pathlib.Path.glob"). (For tilde and shell variable expansion, use [`os.path.expanduser()`](os.path#os.path.expanduser "os.path.expanduser") and [`os.path.expandvars()`](os.path#os.path.expandvars "os.path.expandvars").) For a literal match, wrap the meta-characters in brackets.
For example, `'[?]'` matches the character `'?'`. See also The [`pathlib`](pathlib#module-pathlib "pathlib: Object-oriented filesystem paths") module offers high-level path objects. `glob.glob(pathname, *, recursive=False)` Return a possibly-empty list of path names that match *pathname*, which must be a string containing a path specification. *pathname* can be either absolute (like `/usr/src/Python-1.5/Makefile`) or relative (like `../../Tools/*/*.gif`), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell). Whether or not the results are sorted depends on the file system. If a file that satisfies conditions is removed or added during the call of this function, whether a path name for that file will be included is unspecified. If *recursive* is true, the pattern “`**`” will match any files and zero or more directories, subdirectories and symbolic links to directories. If the pattern is followed by an [`os.sep`](os#os.sep "os.sep") or [`os.altsep`](os#os.altsep "os.altsep") then files will not match. Raises an [auditing event](sys#auditing) `glob.glob` with arguments `pathname`, `recursive`. Note Using the “`**`” pattern in large directory trees may consume an inordinate amount of time. Changed in version 3.5: Support for recursive globs using “`**`”. `glob.iglob(pathname, *, recursive=False)` Return an [iterator](../glossary#term-iterator) which yields the same values as [`glob()`](#module-glob "glob: Unix shell style pathname pattern expansion.") without actually storing them all simultaneously. Raises an [auditing event](sys#auditing) `glob.glob` with arguments `pathname`, `recursive`. `glob.escape(pathname)` Escape all special characters (`'?'`, `'*'` and `'['`). This is useful if you want to match an arbitrary literal string that may have special characters in it. Special characters in drive/UNC sharepoints are not escaped, e.g. on Windows `escape('//?/c:/Quo vadis?.txt')` returns `'//?/c:/Quo vadis[?].txt'`. New in version 3.4. For example, consider a directory containing the following files: `1.gif`, `2.txt`, `card.gif` and a subdirectory `sub` which contains only the file `3.txt`. [`glob()`](#module-glob "glob: Unix shell style pathname pattern expansion.") will produce the following results. Notice how any leading components of the path are preserved. ``` >>> import glob >>> glob.glob('./[0-9].*') ['./1.gif', './2.txt'] >>> glob.glob('*.gif') ['1.gif', 'card.gif'] >>> glob.glob('?.gif') ['1.gif'] >>> glob.glob('**/*.txt', recursive=True) ['2.txt', 'sub/3.txt'] >>> glob.glob('./**/', recursive=True) ['./', './sub/'] ``` If the directory contains files starting with `.` they won’t be matched by default. For example, consider a directory containing `card.gif` and `.card.gif`: ``` >>> import glob >>> glob.glob('*.gif') ['card.gif'] >>> glob.glob('.c*') ['.card.gif'] ``` See also `Module` [`fnmatch`](fnmatch#module-fnmatch "fnmatch: Unix shell style filename pattern matching.") Shell-style filename (not path) expansion python tokenize — Tokenizer for Python source tokenize — Tokenizer for Python source ====================================== **Source code:** [Lib/tokenize.py](https://github.com/python/cpython/tree/3.9/Lib/tokenize.py) The [`tokenize`](#module-tokenize "tokenize: Lexical scanner for Python source code.") module provides a lexical scanner for Python source code, implemented in Python.
The scanner in this module returns comments as tokens as well, making it useful for implementing “pretty-printers”, including colorizers for on-screen displays. To simplify token stream handling, all [operator](../reference/lexical_analysis#operators) and [delimiter](../reference/lexical_analysis#delimiters) tokens and [`Ellipsis`](constants#Ellipsis "Ellipsis") are returned using the generic [`OP`](token#token.OP "token.OP") token type. The exact type can be determined by checking the `exact_type` property on the [named tuple](../glossary#term-named-tuple) returned from [`tokenize.tokenize()`](#tokenize.tokenize "tokenize.tokenize"). Tokenizing Input ---------------- The primary entry point is a [generator](../glossary#term-generator): `tokenize.tokenize(readline)` The [`tokenize()`](#tokenize.tokenize "tokenize.tokenize") generator requires one argument, *readline*, which must be a callable object which provides the same interface as the [`io.IOBase.readline()`](io#io.IOBase.readline "io.IOBase.readline") method of file objects. Each call to the function should return one line of input as bytes. The generator produces 5-tuples with these members: the token type; the token string; a 2-tuple `(srow, scol)` of ints specifying the row and column where the token begins in the source; a 2-tuple `(erow, ecol)` of ints specifying the row and column where the token ends in the source; and the line on which the token was found. The line passed (the last tuple item) is the *physical* line. The 5 tuple is returned as a [named tuple](../glossary#term-named-tuple) with the field names: `type string start end line`. The returned [named tuple](../glossary#term-named-tuple) has an additional property named `exact_type` that contains the exact operator type for [`OP`](token#token.OP "token.OP") tokens. For all other token types `exact_type` equals the named tuple `type` field. Changed in version 3.1: Added support for named tuples. Changed in version 3.3: Added support for `exact_type`. [`tokenize()`](#tokenize.tokenize "tokenize.tokenize") determines the source encoding of the file by looking for a UTF-8 BOM or encoding cookie, according to [**PEP 263**](https://www.python.org/dev/peps/pep-0263). `tokenize.generate_tokens(readline)` Tokenize a source reading unicode strings instead of bytes. Like [`tokenize()`](#tokenize.tokenize "tokenize.tokenize"), the *readline* argument is a callable returning a single line of input. However, [`generate_tokens()`](#tokenize.generate_tokens "tokenize.generate_tokens") expects *readline* to return a str object rather than bytes. The result is an iterator yielding named tuples, exactly like [`tokenize()`](#tokenize.tokenize "tokenize.tokenize"). It does not yield an [`ENCODING`](token#token.ENCODING "token.ENCODING") token. All constants from the [`token`](token#module-token "token: Constants representing terminal nodes of the parse tree.") module are also exported from [`tokenize`](#module-tokenize "tokenize: Lexical scanner for Python source code."). Another function is provided to reverse the tokenization process. This is useful for creating tools that tokenize a script, modify the token stream, and write back the modified script. `tokenize.untokenize(iterable)` Converts tokens back into Python source code. The *iterable* must return sequences with at least two elements, the token type and the token string. Any additional sequence elements are ignored. The reconstructed script is returned as a single string. 
The result is guaranteed to tokenize back to match the input so that the conversion is lossless and round-trips are assured. The guarantee applies only to the token type and token string as the spacing between tokens (column positions) may change. It returns bytes, encoded using the [`ENCODING`](token#token.ENCODING "token.ENCODING") token, which is the first token sequence output by [`tokenize()`](#tokenize.tokenize "tokenize.tokenize"). If there is no encoding token in the input, it returns a str instead. [`tokenize()`](#tokenize.tokenize "tokenize.tokenize") needs to detect the encoding of source files it tokenizes. The function it uses to do this is available: `tokenize.detect_encoding(readline)` The [`detect_encoding()`](#tokenize.detect_encoding "tokenize.detect_encoding") function is used to detect the encoding that should be used to decode a Python source file. It requires one argument, readline, in the same way as the [`tokenize()`](#tokenize.tokenize "tokenize.tokenize") generator. It will call readline a maximum of twice, and return the encoding used (as a string) and a list of any lines (not decoded from bytes) it has read in. It detects the encoding from the presence of a UTF-8 BOM or an encoding cookie as specified in [**PEP 263**](https://www.python.org/dev/peps/pep-0263). If both a BOM and a cookie are present, but disagree, a [`SyntaxError`](exceptions#SyntaxError "SyntaxError") will be raised. Note that if the BOM is found, `'utf-8-sig'` will be returned as an encoding. If no encoding is specified, then the default of `'utf-8'` will be returned. Use [`open()`](#tokenize.open "tokenize.open") to open Python source files: it uses [`detect_encoding()`](#tokenize.detect_encoding "tokenize.detect_encoding") to detect the file encoding. `tokenize.open(filename)` Open a file in read only mode using the encoding detected by [`detect_encoding()`](#tokenize.detect_encoding "tokenize.detect_encoding"). New in version 3.2. `exception tokenize.TokenError` Raised when either a docstring or expression that may be split over several lines is not completed anywhere in the file, for example: ``` """Beginning of docstring ``` or: ``` [1, 2, 3 ``` Note that unclosed single-quoted strings do not cause an error to be raised. They are tokenized as [`ERRORTOKEN`](token#token.ERRORTOKEN "token.ERRORTOKEN"), followed by the tokenization of their contents. Command-Line Usage ------------------ New in version 3.3. The [`tokenize`](#module-tokenize "tokenize: Lexical scanner for Python source code.") module can be executed as a script from the command line. It is as simple as: ``` python -m tokenize [-e] [filename.py] ``` The following options are accepted: `-h, --help` show this help message and exit `-e, --exact` display token names using the exact type If `filename.py` is specified its contents are tokenized to stdout. Otherwise, tokenization is performed on stdin. Examples -------- Example of a script rewriter that transforms float literals into Decimal objects: ``` from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP from io import BytesIO def decistmt(s): """Substitute Decimals for floats in a string of statements. >>> from decimal import Decimal >>> s = 'print(+21.3e-5*-.1234/81.7)' >>> decistmt(s) "print (+Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7'))" The format of the exponent is inherited from the platform C library. Known cases are "e-007" (Windows) and "e-07" (not Windows). 
Since we're only showing 12 digits, and the 13th isn't close to 5, the rest of the output should be platform-independent. >>> exec(s) #doctest: +ELLIPSIS -3.21716034272e-0...7 Output from calculations with Decimal should be identical across all platforms. >>> exec(decistmt(s)) -3.217160342717258261933904529E-7 """ result = [] g = tokenize(BytesIO(s.encode('utf-8')).readline) # tokenize the string for toknum, tokval, _, _, _ in g: if toknum == NUMBER and '.' in tokval: # replace NUMBER tokens result.extend([ (NAME, 'Decimal'), (OP, '('), (STRING, repr(tokval)), (OP, ')') ]) else: result.append((toknum, tokval)) return untokenize(result).decode('utf-8') ``` Example of tokenizing from the command line. The script: ``` def say_hello(): print("Hello, World!") say_hello() ``` will be tokenized to the following output where the first column is the range of the line/column coordinates where the token is found, the second column is the name of the token, and the final column is the value of the token (if any) ``` $ python -m tokenize hello.py 0,0-0,0: ENCODING 'utf-8' 1,0-1,3: NAME 'def' 1,4-1,13: NAME 'say_hello' 1,13-1,14: OP '(' 1,14-1,15: OP ')' 1,15-1,16: OP ':' 1,16-1,17: NEWLINE '\n' 2,0-2,4: INDENT ' ' 2,4-2,9: NAME 'print' 2,9-2,10: OP '(' 2,10-2,25: STRING '"Hello, World!"' 2,25-2,26: OP ')' 2,26-2,27: NEWLINE '\n' 3,0-3,1: NL '\n' 4,0-4,0: DEDENT '' 4,0-4,9: NAME 'say_hello' 4,9-4,10: OP '(' 4,10-4,11: OP ')' 4,11-4,12: NEWLINE '\n' 5,0-5,0: ENDMARKER '' ``` The exact token type names can be displayed using the [`-e`](#cmdoption-tokenize-e) option: ``` $ python -m tokenize -e hello.py 0,0-0,0: ENCODING 'utf-8' 1,0-1,3: NAME 'def' 1,4-1,13: NAME 'say_hello' 1,13-1,14: LPAR '(' 1,14-1,15: RPAR ')' 1,15-1,16: COLON ':' 1,16-1,17: NEWLINE '\n' 2,0-2,4: INDENT ' ' 2,4-2,9: NAME 'print' 2,9-2,10: LPAR '(' 2,10-2,25: STRING '"Hello, World!"' 2,25-2,26: RPAR ')' 2,26-2,27: NEWLINE '\n' 3,0-3,1: NL '\n' 4,0-4,0: DEDENT '' 4,0-4,9: NAME 'say_hello' 4,9-4,10: LPAR '(' 4,10-4,11: RPAR ')' 4,11-4,12: NEWLINE '\n' 5,0-5,0: ENDMARKER '' ``` Example of tokenizing a file programmatically, reading unicode strings instead of bytes with [`generate_tokens()`](#tokenize.generate_tokens "tokenize.generate_tokens"): ``` import tokenize with tokenize.open('hello.py') as f: tokens = tokenize.generate_tokens(f.readline) for token in tokens: print(token) ``` Or reading bytes directly with [`tokenize()`](#tokenize.tokenize "tokenize.tokenize"): ``` import tokenize with open('hello.py', 'rb') as f: tokens = tokenize.tokenize(f.readline) for token in tokens: print(token) ```
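The `detect_encoding()` function described above can also be used on its own; a small sketch, reusing the `hello.py` file from the examples:

```
import tokenize

with open('hello.py', 'rb') as f:
    encoding, first_lines = tokenize.detect_encoding(f.readline)

print(encoding)     # 'utf-8', or 'utf-8-sig' if the file starts with a BOM
print(first_lines)  # the raw (undecoded) lines read while detecting
```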
python json — JSON encoder and decoder json — JSON encoder and decoder =============================== **Source code:** [Lib/json/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/json/__init__.py) [JSON (JavaScript Object Notation)](https://json.org), specified by [**RFC 7159**](https://tools.ietf.org/html/rfc7159.html) (which obsoletes [**RFC 4627**](https://tools.ietf.org/html/rfc4627.html)) and by [ECMA-404](https://www.ecma-international.org/publications-and-standards/standards/ecma-404/), is a lightweight data interchange format inspired by [JavaScript](https://en.wikipedia.org/wiki/JavaScript) object literal syntax (although it is not a strict subset of JavaScript [1](#rfc-errata) ). Warning Be cautious when parsing JSON data from untrusted sources. A malicious JSON string may cause the decoder to consume considerable CPU and memory resources. Limiting the size of data to be parsed is recommended. [`json`](#module-json "json: Encode and decode the JSON format.") exposes an API familiar to users of the standard library [`marshal`](marshal#module-marshal "marshal: Convert Python objects to streams of bytes and back (with different constraints).") and [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") modules. Encoding basic Python object hierarchies: ``` >>> import json >>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}]) '["foo", {"bar": ["baz", null, 1.0, 2]}]' >>> print(json.dumps("\"foo\bar")) "\"foo\bar" >>> print(json.dumps('\u1234')) "\u1234" >>> print(json.dumps('\\')) "\\" >>> print(json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True)) {"a": 0, "b": 0, "c": 0} >>> from io import StringIO >>> io = StringIO() >>> json.dump(['streaming API'], io) >>> io.getvalue() '["streaming API"]' ``` Compact encoding: ``` >>> import json >>> json.dumps([1, 2, 3, {'4': 5, '6': 7}], separators=(',', ':')) '[1,2,3,{"4":5,"6":7}]' ``` Pretty printing: ``` >>> import json >>> print(json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=4)) { "4": 5, "6": 7 } ``` Decoding JSON: ``` >>> import json >>> json.loads('["foo", {"bar":["baz", null, 1.0, 2]}]') ['foo', {'bar': ['baz', None, 1.0, 2]}] >>> json.loads('"\\"foo\\bar"') '"foo\x08ar' >>> from io import StringIO >>> io = StringIO('["streaming API"]') >>> json.load(io) ['streaming API'] ``` Specializing JSON object decoding: ``` >>> import json >>> def as_complex(dct): ... if '__complex__' in dct: ... return complex(dct['real'], dct['imag']) ... return dct ... >>> json.loads('{"__complex__": true, "real": 1, "imag": 2}', ... object_hook=as_complex) (1+2j) >>> import decimal >>> json.loads('1.1', parse_float=decimal.Decimal) Decimal('1.1') ``` Extending [`JSONEncoder`](#json.JSONEncoder "json.JSONEncoder"): ``` >>> import json >>> class ComplexEncoder(json.JSONEncoder): ... def default(self, obj): ... if isinstance(obj, complex): ... return [obj.real, obj.imag] ... # Let the base class default method raise the TypeError ... return json.JSONEncoder.default(self, obj) ... 
>>> json.dumps(2 + 1j, cls=ComplexEncoder) '[2.0, 1.0]' >>> ComplexEncoder().encode(2 + 1j) '[2.0, 1.0]' >>> list(ComplexEncoder().iterencode(2 + 1j)) ['[2.0', ', 1.0', ']'] ``` Using [`json.tool`](#module-json.tool "json.tool: A command line to validate and pretty-print JSON.") from the shell to validate and pretty-print: ``` $ echo '{"json":"obj"}' | python -m json.tool { "json": "obj" } $ echo '{1.2:3.4}' | python -m json.tool Expecting property name enclosed in double quotes: line 1 column 2 (char 1) ``` See [Command Line Interface](#json-commandline) for detailed documentation. Note JSON is a subset of [YAML](http://yaml.org/) 1.2. The JSON produced by this module’s default settings (in particular, the default *separators* value) is also a subset of YAML 1.0 and 1.1. This module can thus also be used as a YAML serializer. Note This module’s encoders and decoders preserve input and output order by default. Order is only lost if the underlying containers are unordered. Basic Usage ----------- `json.dump(obj, fp, *, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, cls=None, indent=None, separators=None, default=None, sort_keys=False, **kw)` Serialize *obj* as a JSON formatted stream to *fp* (a `.write()`-supporting [file-like object](../glossary#term-file-like-object)) using this [conversion table](#py-to-json-table). If *skipkeys* is true (default: `False`), then dict keys that are not of a basic type ([`str`](stdtypes#str "str"), [`int`](functions#int "int"), [`float`](functions#float "float"), [`bool`](functions#bool "bool"), `None`) will be skipped instead of raising a [`TypeError`](exceptions#TypeError "TypeError"). The [`json`](#module-json "json: Encode and decode the JSON format.") module always produces [`str`](stdtypes#str "str") objects, not [`bytes`](stdtypes#bytes "bytes") objects. Therefore, `fp.write()` must support [`str`](stdtypes#str "str") input. If *ensure\_ascii* is true (the default), the output is guaranteed to have all incoming non-ASCII characters escaped. If *ensure\_ascii* is false, these characters will be output as-is. If *check\_circular* is false (default: `True`), then the circular reference check for container types will be skipped and a circular reference will result in a [`RecursionError`](exceptions#RecursionError "RecursionError") (or worse). If *allow\_nan* is false (default: `True`), then it will be a [`ValueError`](exceptions#ValueError "ValueError") to serialize out of range [`float`](functions#float "float") values (`nan`, `inf`, `-inf`) in strict compliance with the JSON specification. If *allow\_nan* is true, their JavaScript equivalents (`NaN`, `Infinity`, `-Infinity`) will be used. If *indent* is a non-negative integer or string, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0, negative, or `""` will only insert newlines. `None` (the default) selects the most compact representation. Using a positive integer indent indents that many spaces per level. If *indent* is a string (such as `"\t"`), that string is used to indent each level. Changed in version 3.2: Allow strings for *indent* in addition to integers. If specified, *separators* should be an `(item_separator, key_separator)` tuple. The default is `(', ', ': ')` if *indent* is `None` and `(',', ': ')` otherwise. To get the most compact JSON representation, you should specify `(',', ':')` to eliminate whitespace. Changed in version 3.4: Use `(',', ': ')` as default if *indent* is not `None`.
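The examples at the top of this page use `dumps()`; for completeness, a minimal sketch of `dump()` writing straight to a file object (`data.json` is a hypothetical path):

```
import json

record = {'id': 17, 'ratio': 0.25, 'tags': ['a', 'b']}

# Serialize directly to an open text file; indent=4 pretty-prints,
# sort_keys=True makes the output deterministic.
with open('data.json', 'w', encoding='utf-8') as f:
    json.dump(record, f, indent=4, sort_keys=True)
```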
If specified, *default* should be a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a [`TypeError`](exceptions#TypeError "TypeError"). If not specified, [`TypeError`](exceptions#TypeError "TypeError") is raised. If *sort\_keys* is true (default: `False`), then the output of dictionaries will be sorted by key. To use a custom [`JSONEncoder`](#json.JSONEncoder "json.JSONEncoder") subclass (e.g. one that overrides the `default()` method to serialize additional types), specify it with the *cls* kwarg; otherwise [`JSONEncoder`](#json.JSONEncoder "json.JSONEncoder") is used. Changed in version 3.6: All optional parameters are now [keyword-only](../glossary#keyword-only-parameter). Note Unlike [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") and [`marshal`](marshal#module-marshal "marshal: Convert Python objects to streams of bytes and back (with different constraints)."), JSON is not a framed protocol, so trying to serialize multiple objects with repeated calls to [`dump()`](#json.dump "json.dump") using the same *fp* will result in an invalid JSON file. `json.dumps(obj, *, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, cls=None, indent=None, separators=None, default=None, sort_keys=False, **kw)` Serialize *obj* to a JSON formatted [`str`](stdtypes#str "str") using this [conversion table](#py-to-json-table). The arguments have the same meaning as in [`dump()`](#json.dump "json.dump"). Note Keys in key/value pairs of JSON are always of the type [`str`](stdtypes#str "str"). When a dictionary is converted into JSON, all the keys of the dictionary are coerced to strings. As a result of this, if a dictionary is converted into JSON and then back into a dictionary, the dictionary may not equal the original one. That is, `loads(dumps(x)) != x` if x has non-string keys. `json.load(fp, *, cls=None, object_hook=None, parse_float=None, parse_int=None, parse_constant=None, object_pairs_hook=None, **kw)` Deserialize *fp* (a `.read()`-supporting [text file](../glossary#term-text-file) or [binary file](../glossary#term-binary-file) containing a JSON document) to a Python object using this [conversion table](#json-to-py-table). *object\_hook* is an optional function that will be called with the result of any object literal decoded (a [`dict`](stdtypes#dict "dict")). The return value of *object\_hook* will be used instead of the [`dict`](stdtypes#dict "dict"). This feature can be used to implement custom decoders (e.g. [JSON-RPC](http://www.jsonrpc.org) class hinting). *object\_pairs\_hook* is an optional function that will be called with the result of any object literal decoded with an ordered list of pairs. The return value of *object\_pairs\_hook* will be used instead of the [`dict`](stdtypes#dict "dict"). This feature can be used to implement custom decoders. If *object\_hook* is also defined, the *object\_pairs\_hook* takes priority. Changed in version 3.1: Added support for *object\_pairs\_hook*. *parse\_float*, if specified, will be called with the string of every JSON float to be decoded. By default, this is equivalent to `float(num_str)`. This can be used to use another datatype or parser for JSON floats (e.g. [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal")). *parse\_int*, if specified, will be called with the string of every JSON int to be decoded. By default, this is equivalent to `int(num_str)`. 
This can be used to use another datatype or parser for JSON integers (e.g. [`float`](functions#float "float")). Changed in version 3.9.14: The default *parse\_int* of [`int()`](functions#int "int") now limits the maximum length of the integer string via the interpreter’s [integer string conversion length limitation](stdtypes#int-max-str-digits) to help avoid denial of service attacks. *parse\_constant*, if specified, will be called with one of the following strings: `'-Infinity'`, `'Infinity'`, `'NaN'`. This can be used to raise an exception if invalid JSON numbers are encountered. Changed in version 3.1: *parse\_constant* doesn’t get called on ‘null’, ‘true’, ‘false’ anymore. To use a custom [`JSONDecoder`](#json.JSONDecoder "json.JSONDecoder") subclass, specify it with the `cls` kwarg; otherwise [`JSONDecoder`](#json.JSONDecoder "json.JSONDecoder") is used. Additional keyword arguments will be passed to the constructor of the class. If the data being deserialized is not a valid JSON document, a [`JSONDecodeError`](#json.JSONDecodeError "json.JSONDecodeError") will be raised. Changed in version 3.6: All optional parameters are now [keyword-only](../glossary#keyword-only-parameter). Changed in version 3.6: *fp* can now be a [binary file](../glossary#term-binary-file). The input encoding should be UTF-8, UTF-16 or UTF-32. `json.loads(s, *, cls=None, object_hook=None, parse_float=None, parse_int=None, parse_constant=None, object_pairs_hook=None, **kw)` Deserialize *s* (a [`str`](stdtypes#str "str"), [`bytes`](stdtypes#bytes "bytes") or [`bytearray`](stdtypes#bytearray "bytearray") instance containing a JSON document) to a Python object using this [conversion table](#json-to-py-table). The other arguments have the same meaning as in [`load()`](#json.load "json.load"). If the data being deserialized is not a valid JSON document, a [`JSONDecodeError`](#json.JSONDecodeError "json.JSONDecodeError") will be raised. Changed in version 3.6: *s* can now be of type [`bytes`](stdtypes#bytes "bytes") or [`bytearray`](stdtypes#bytearray "bytearray"). The input encoding should be UTF-8, UTF-16 or UTF-32. Changed in version 3.9: The keyword argument *encoding* has been removed. Encoders and Decoders --------------------- `class json.JSONDecoder(*, object_hook=None, parse_float=None, parse_int=None, parse_constant=None, strict=True, object_pairs_hook=None)` Simple JSON decoder. Performs the following translations in decoding by default: | JSON | Python | | --- | --- | | object | dict | | array | list | | string | str | | number (int) | int | | number (real) | float | | true | True | | false | False | | null | None | It also understands `NaN`, `Infinity`, and `-Infinity` as their corresponding `float` values, which is outside the JSON spec. *object\_hook*, if specified, will be called with the result of every JSON object decoded and its return value will be used in place of the given [`dict`](stdtypes#dict "dict"). This can be used to provide custom deserializations (e.g. to support [JSON-RPC](http://www.jsonrpc.org) class hinting). *object\_pairs\_hook*, if specified will be called with the result of every JSON object decoded with an ordered list of pairs. The return value of *object\_pairs\_hook* will be used instead of the [`dict`](stdtypes#dict "dict"). This feature can be used to implement custom decoders. If *object\_hook* is also defined, the *object\_pairs\_hook* takes priority. Changed in version 3.1: Added support for *object\_pairs\_hook*. 
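As a small illustration of the *object\_pairs\_hook* parameter just described: any callable that accepts a list of `(key, value)` pairs works, so it can preserve order or, for example, reject duplicate keys (a sketch):

```
>>> import json
>>> from collections import OrderedDict
>>> json.loads('{"b": 1, "a": 2}', object_pairs_hook=OrderedDict)
OrderedDict([('b', 1), ('a', 2)])
>>> def reject_duplicates(pairs):
...     result = {}
...     for key, value in pairs:
...         if key in result:
...             raise ValueError('duplicate key: %r' % key)
...         result[key] = value
...     return result
...
>>> json.loads('{"x": 1, "x": 2}', object_pairs_hook=reject_duplicates)
Traceback (most recent call last):
  ...
ValueError: duplicate key: 'x'
```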
*parse\_float*, if specified, will be called with the string of every JSON float to be decoded. By default, this is equivalent to `float(num_str)`. This can be used to use another datatype or parser for JSON floats (e.g. [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal")). *parse\_int*, if specified, will be called with the string of every JSON int to be decoded. By default, this is equivalent to `int(num_str)`. This can be used to use another datatype or parser for JSON integers (e.g. [`float`](functions#float "float")). *parse\_constant*, if specified, will be called with one of the following strings: `'-Infinity'`, `'Infinity'`, `'NaN'`. This can be used to raise an exception if invalid JSON numbers are encountered. If *strict* is false (`True` is the default), then control characters will be allowed inside strings. Control characters in this context are those with character codes in the 0–31 range, including `'\t'` (tab), `'\n'`, `'\r'` and `'\0'`. If the data being deserialized is not a valid JSON document, a [`JSONDecodeError`](#json.JSONDecodeError "json.JSONDecodeError") will be raised. Changed in version 3.6: All parameters are now [keyword-only](../glossary#keyword-only-parameter). `decode(s)` Return the Python representation of *s* (a [`str`](stdtypes#str "str") instance containing a JSON document). [`JSONDecodeError`](#json.JSONDecodeError "json.JSONDecodeError") will be raised if the given JSON document is not valid. `raw_decode(s)` Decode a JSON document from *s* (a [`str`](stdtypes#str "str") beginning with a JSON document) and return a 2-tuple of the Python representation and the index in *s* where the document ended. This can be used to decode a JSON document from a string that may have extraneous data at the end. `class json.JSONEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)` Extensible JSON encoder for Python data structures. Supports the following objects and types by default: | Python | JSON | | --- | --- | | dict | object | | list, tuple | array | | str | string | | int, float, int- & float-derived Enums | number | | True | true | | False | false | | None | null | Changed in version 3.4: Added support for int- and float-derived Enum classes. To extend this to recognize other objects, subclass and implement a [`default()`](#json.JSONEncoder.default "json.JSONEncoder.default") method that returns a serializable object for `o` if possible; otherwise it should call the superclass implementation (to raise [`TypeError`](exceptions#TypeError "TypeError")). If *skipkeys* is false (the default), a [`TypeError`](exceptions#TypeError "TypeError") will be raised when trying to encode keys that are not [`str`](stdtypes#str "str"), [`int`](functions#int "int"), [`float`](functions#float "float") or `None`. If *skipkeys* is true, such items are simply skipped. If *ensure\_ascii* is true (the default), the output is guaranteed to have all incoming non-ASCII characters escaped. If *ensure\_ascii* is false, these characters will be output as-is. If *check\_circular* is true (the default), then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause a [`RecursionError`](exceptions#RecursionError "RecursionError")). Otherwise, no such check takes place. If *allow\_nan* is true (the default), then `NaN`, `Infinity`, and `-Infinity` will be encoded as such.
This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a [`ValueError`](exceptions#ValueError "ValueError") to encode such floats. If *sort\_keys* is true (default: `False`), then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis. If *indent* is a non-negative integer or string, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0, negative, or `""` will only insert newlines. `None` (the default) selects the most compact representation. Using a positive integer indent indents that many spaces per level. If *indent* is a string (such as `"\t"`), that string is used to indent each level. Changed in version 3.2: Allow strings for *indent* in addition to integers. If specified, *separators* should be an `(item_separator, key_separator)` tuple. The default is `(', ', ': ')` if *indent* is `None` and `(',', ': ')` otherwise. To get the most compact JSON representation, you should specify `(',', ':')` to eliminate whitespace. Changed in version 3.4: Use `(',', ': ')` as default if *indent* is not `None`. If specified, *default* should be a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a [`TypeError`](exceptions#TypeError "TypeError"). If not specified, [`TypeError`](exceptions#TypeError "TypeError") is raised. Changed in version 3.6: All parameters are now [keyword-only](../glossary#keyword-only-parameter). `default(o)` Implement this method in a subclass such that it returns a serializable object for *o*, or calls the base implementation (to raise a [`TypeError`](exceptions#TypeError "TypeError")). For example, to support arbitrary iterators, you could implement [`default()`](#json.JSONEncoder.default "json.JSONEncoder.default") like this: ``` def default(self, o): try: iterable = iter(o) except TypeError: pass else: return list(iterable) # Let the base class default method raise the TypeError return json.JSONEncoder.default(self, o) ``` `encode(o)` Return a JSON string representation of a Python data structure, *o*. For example: ``` >>> json.JSONEncoder().encode({"foo": ["bar", "baz"]}) '{"foo": ["bar", "baz"]}' ``` `iterencode(o)` Encode the given object, *o*, and yield each string representation as available. For example: ``` for chunk in json.JSONEncoder().iterencode(bigobject): mysocket.write(chunk) ``` Exceptions ---------- `exception json.JSONDecodeError(msg, doc, pos)` Subclass of [`ValueError`](exceptions#ValueError "ValueError") with the following additional attributes: `msg` The unformatted error message. `doc` The JSON document being parsed. `pos` The start index of *doc* where parsing failed. `lineno` The line corresponding to *pos*. `colno` The column corresponding to *pos*. New in version 3.5. Standard Compliance and Interoperability ---------------------------------------- The JSON format is specified by [**RFC 7159**](https://tools.ietf.org/html/rfc7159.html) and by [ECMA-404](https://www.ecma-international.org/publications-and-standards/standards/ecma-404/). This section details this module’s level of compliance with the RFC. For simplicity, [`JSONEncoder`](#json.JSONEncoder "json.JSONEncoder") and [`JSONDecoder`](#json.JSONDecoder "json.JSONDecoder") subclasses, and parameters other than those explicitly mentioned, are not considered. 
This module does not comply with the RFC in a strict fashion, implementing some extensions that are valid JavaScript but not valid JSON. In particular: * Infinite and NaN number values are accepted and output; * Repeated names within an object are accepted, and only the value of the last name-value pair is used. Since the RFC permits RFC-compliant parsers to accept input texts that are not RFC-compliant, this module’s deserializer is technically RFC-compliant under default settings. ### Character Encodings The RFC requires that JSON be represented using either UTF-8, UTF-16, or UTF-32, with UTF-8 being the recommended default for maximum interoperability. As permitted, though not required, by the RFC, this module’s serializer sets *ensure\_ascii=True* by default, thus escaping the output so that the resulting strings only contain ASCII characters. Other than the *ensure\_ascii* parameter, this module is defined strictly in terms of conversion between Python objects and [`Unicode strings`](stdtypes#str "str"), and thus does not otherwise directly address the issue of character encodings. The RFC prohibits adding a byte order mark (BOM) to the start of a JSON text, and this module’s serializer does not add a BOM to its output. The RFC permits, but does not require, JSON deserializers to ignore an initial BOM in their input. This module’s deserializer raises a [`ValueError`](exceptions#ValueError "ValueError") when an initial BOM is present. The RFC does not explicitly forbid JSON strings which contain byte sequences that don’t correspond to valid Unicode characters (e.g. unpaired UTF-16 surrogates), but it does note that they may cause interoperability problems. By default, this module accepts and outputs (when present in the original [`str`](stdtypes#str "str")) code points for such sequences. ### Infinite and NaN Number Values The RFC does not permit the representation of infinite or NaN number values. Despite that, by default, this module accepts and outputs `Infinity`, `-Infinity`, and `NaN` as if they were valid JSON number literal values: ``` >>> # Neither of these calls raises an exception, but the results are not valid JSON >>> json.dumps(float('-inf')) '-Infinity' >>> json.dumps(float('nan')) 'NaN' >>> # Same when deserializing >>> json.loads('-Infinity') -inf >>> json.loads('NaN') nan ``` In the serializer, the *allow\_nan* parameter can be used to alter this behavior. In the deserializer, the *parse\_constant* parameter can be used to alter this behavior. ### Repeated Names Within an Object The RFC specifies that the names within a JSON object should be unique, but does not mandate how repeated names in JSON objects should be handled. By default, this module does not raise an exception; instead, it ignores all but the last name-value pair for a given name: ``` >>> weird_json = '{"x": 1, "x": 2, "x": 3}' >>> json.loads(weird_json) {'x': 3} ``` The *object\_pairs\_hook* parameter can be used to alter this behavior. ### Top-level Non-Object, Non-Array Values The old version of JSON specified by the obsolete [**RFC 4627**](https://tools.ietf.org/html/rfc4627.html) required that the top-level value of a JSON text must be either a JSON object or array (Python [`dict`](stdtypes#dict "dict") or [`list`](stdtypes#list "list")), and could not be a JSON null, boolean, number, or string value. [**RFC 7159**](https://tools.ietf.org/html/rfc7159.html) removed that restriction, and this module does not and has never implemented that restriction in either its serializer or its deserializer. 
Regardless, for maximum interoperability, you may wish to voluntarily adhere to the restriction yourself. ### Implementation Limitations Some JSON deserializer implementations may set limits on: * the size of accepted JSON texts * the maximum level of nesting of JSON objects and arrays * the range and precision of JSON numbers * the content and maximum length of JSON strings This module does not impose any such limits beyond those of the relevant Python datatypes themselves or the Python interpreter itself. When serializing to JSON, beware any such limitations in applications that may consume your JSON. In particular, it is common for JSON numbers to be deserialized into IEEE 754 double precision numbers and thus subject to that representation’s range and precision limitations. This is especially relevant when serializing Python [`int`](functions#int "int") values of extremely large magnitude, or when serializing instances of “exotic” numerical types such as [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal"). Command Line Interface ---------------------- **Source code:** [Lib/json/tool.py](https://github.com/python/cpython/tree/3.9/Lib/json/tool.py) The [`json.tool`](#module-json.tool "json.tool: A command line to validate and pretty-print JSON.") module provides a simple command line interface to validate and pretty-print JSON objects. If the optional `infile` and `outfile` arguments are not specified, [`sys.stdin`](sys#sys.stdin "sys.stdin") and [`sys.stdout`](sys#sys.stdout "sys.stdout") will be used respectively: ``` $ echo '{"json": "obj"}' | python -m json.tool { "json": "obj" } $ echo '{1.2:3.4}' | python -m json.tool Expecting property name enclosed in double quotes: line 1 column 2 (char 1) ``` Changed in version 3.5: The output is now in the same order as the input. Use the [`--sort-keys`](#cmdoption-json-tool-sort-keys) option to sort the output of dictionaries alphabetically by key. ### Command line options `infile` The JSON file to be validated or pretty-printed: ``` $ python -m json.tool mp_films.json [ { "title": "And Now for Something Completely Different", "year": 1971 }, { "title": "Monty Python and the Holy Grail", "year": 1975 } ] ``` If *infile* is not specified, read from [`sys.stdin`](sys#sys.stdin "sys.stdin"). `outfile` Write the output of the *infile* to the given *outfile*. Otherwise, write it to [`sys.stdout`](sys#sys.stdout "sys.stdout"). `--sort-keys` Sort the output of dictionaries alphabetically by key. New in version 3.5. `--no-ensure-ascii` Disable escaping of non-ascii characters, see [`json.dumps()`](#json.dumps "json.dumps") for more information. New in version 3.9. `--json-lines` Parse every input line as separate JSON object. New in version 3.8. `--indent, --tab, --no-indent, --compact` Mutually exclusive options for whitespace control. New in version 3.9. `-h, --help` Show the help message. #### Footnotes `1` As noted in [the errata for RFC 7159](https://www.rfc-editor.org/errata_search.php?rfc=7159), JSON permits literal U+2028 (LINE SEPARATOR) and U+2029 (PARAGRAPH SEPARATOR) characters in strings, whereas JavaScript (as of ECMAScript Edition 5.1) does not.
python zlib — Compression compatible with gzip zlib — Compression compatible with gzip ======================================= For applications that require data compression, the functions in this module allow compression and decompression, using the zlib library. The zlib library has its own home page at <https://www.zlib.net>. There are known incompatibilities between the Python module and versions of the zlib library earlier than 1.1.3; 1.1.3 has a [security vulnerability](https://zlib.net/zlib_faq.html#faq33), so we recommend using 1.1.4 or later. zlib’s functions have many options and often need to be used in a particular order. This documentation doesn’t attempt to cover all of the permutations; consult the zlib manual at <http://www.zlib.net/manual.html> for authoritative information. For reading and writing `.gz` files see the [`gzip`](gzip#module-gzip "gzip: Interfaces for gzip compression and decompression using file objects.") module. The available exception and functions in this module are: `exception zlib.error` Exception raised on compression and decompression errors. `zlib.adler32(data[, value])` Computes an Adler-32 checksum of *data*. (An Adler-32 checksum is almost as reliable as a CRC32 but can be computed much more quickly.) The result is an unsigned 32-bit integer. If *value* is present, it is used as the starting value of the checksum; otherwise, a default value of 1 is used. Passing in *value* allows computing a running checksum over the concatenation of several inputs. The algorithm is not cryptographically strong, and should not be used for authentication or digital signatures. Since the algorithm is designed for use as a checksum algorithm, it is not suitable for use as a general hash algorithm. Changed in version 3.0: The result is always unsigned. To generate the same numeric value when using Python 2 or earlier, use `adler32(data) & 0xffffffff`. `zlib.compress(data, /, level=-1)` Compresses the bytes in *data*, returning a bytes object containing compressed data. *level* is an integer from `0` to `9` or `-1` controlling the level of compression; `1` (Z\_BEST\_SPEED) is fastest and produces the least compression, `9` (Z\_BEST\_COMPRESSION) is slowest and produces the most. `0` (Z\_NO\_COMPRESSION) is no compression. The default value is `-1` (Z\_DEFAULT\_COMPRESSION). Z\_DEFAULT\_COMPRESSION represents a default compromise between speed and compression (currently equivalent to level 6). Raises the [`error`](#zlib.error "zlib.error") exception if any error occurs. Changed in version 3.6: *level* can now be used as a keyword parameter. `zlib.compressobj(level=-1, method=DEFLATED, wbits=MAX_WBITS, memLevel=DEF_MEM_LEVEL, strategy=Z_DEFAULT_STRATEGY[, zdict])` Returns a compression object, to be used for compressing data streams that won’t fit into memory at once. *level* is the compression level – an integer from `0` to `9` or `-1`. A value of `1` (Z\_BEST\_SPEED) is fastest and produces the least compression, while a value of `9` (Z\_BEST\_COMPRESSION) is slowest and produces the most. `0` (Z\_NO\_COMPRESSION) is no compression. The default value is `-1` (Z\_DEFAULT\_COMPRESSION). Z\_DEFAULT\_COMPRESSION represents a default compromise between speed and compression (currently equivalent to level 6). *method* is the compression algorithm. Currently, the only supported value is `DEFLATED`. The *wbits* argument controls the size of the history buffer (or the “window size”) used when compressing data, and whether a header and trailer is included in the output. 
It can take several ranges of values, defaulting to `15` (MAX\_WBITS): * +9 to +15: The base-two logarithm of the window size, which therefore ranges between 512 and 32768. Larger values produce better compression at the expense of greater memory usage. The resulting output will include a zlib-specific header and trailer. * −9 to −15: Uses the absolute value of *wbits* as the window size logarithm, while producing a raw output stream with no header or trailing checksum. * +25 to +31 = 16 + (9 to 15): Uses the low 4 bits of the value as the window size logarithm, while including a basic **gzip** header and trailing checksum in the output. The *memLevel* argument controls the amount of memory used for the internal compression state. Valid values range from `1` to `9`. Higher values use more memory, but are faster and produce smaller output. *strategy* is used to tune the compression algorithm. Possible values are `Z_DEFAULT_STRATEGY`, `Z_FILTERED`, `Z_HUFFMAN_ONLY`, `Z_RLE` (zlib 1.2.0.1) and `Z_FIXED` (zlib 1.2.2.2). *zdict* is a predefined compression dictionary. This is a sequence of bytes (such as a [`bytes`](stdtypes#bytes "bytes") object) containing subsequences that are expected to occur frequently in the data that is to be compressed. Those subsequences that are expected to be most common should come at the end of the dictionary. Changed in version 3.3: Added the *zdict* parameter and keyword argument support. `zlib.crc32(data[, value])` Computes a CRC (Cyclic Redundancy Check) checksum of *data*. The result is an unsigned 32-bit integer. If *value* is present, it is used as the starting value of the checksum; otherwise, a default value of 0 is used. Passing in *value* allows computing a running checksum over the concatenation of several inputs. The algorithm is not cryptographically strong, and should not be used for authentication or digital signatures. Since the algorithm is designed for use as a checksum algorithm, it is not suitable for use as a general hash algorithm. Changed in version 3.0: The result is always unsigned. To generate the same numeric value when using Python 2 or earlier, use `crc32(data) & 0xffffffff`. `zlib.decompress(data, /, wbits=MAX_WBITS, bufsize=DEF_BUF_SIZE)` Decompresses the bytes in *data*, returning a bytes object containing the uncompressed data. The *wbits* parameter depends on the format of *data*, and is discussed further below. If *bufsize* is given, it is used as the initial size of the output buffer. Raises the [`error`](#zlib.error "zlib.error") exception if any error occurs. The *wbits* parameter controls the size of the history buffer (or “window size”), and what header and trailer format is expected. It is similar to the parameter for [`compressobj()`](#zlib.compressobj "zlib.compressobj"), but accepts more ranges of values: * +8 to +15: The base-two logarithm of the window size. The input must include a zlib header and trailer. * 0: Automatically determine the window size from the zlib header. Only supported since zlib 1.2.3.5. * −8 to −15: Uses the absolute value of *wbits* as the window size logarithm. The input must be a raw stream with no header or trailer. * +24 to +31 = 16 + (8 to 15): Uses the low 4 bits of the value as the window size logarithm. The input must include a gzip header and trailer. * +40 to +47 = 32 + (8 to 15): Uses the low 4 bits of the value as the window size logarithm, and automatically accepts either the zlib or gzip format. 
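To make the *wbits* ranges above concrete, a short sketch (the payload is made up; only a few representative values are shown):

```
import zlib

payload = b'example payload ' * 64

# zlib container (header + Adler-32 trailer): wbits=+15 on both sides
z = zlib.compress(payload)                      # compress() defaults to 15
assert zlib.decompress(z, wbits=15) == payload

# raw DEFLATE stream (no header or trailer): wbits=-15 on both sides
co = zlib.compressobj(wbits=-15)
raw = co.compress(payload) + co.flush()
assert zlib.decompress(raw, wbits=-15) == payload

# 32 + 15 = 47 auto-detects zlib or gzip framing when decompressing
assert zlib.decompress(z, wbits=47) == payload
```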
When decompressing a stream, the window size must not be smaller than the size originally used to compress the stream; using a too-small value may result in an [`error`](#zlib.error "zlib.error") exception. The default *wbits* value corresponds to the largest window size and requires a zlib header and trailer to be included. *bufsize* is the initial size of the buffer used to hold decompressed data. If more space is required, the buffer size will be increased as needed, so you don’t have to get this value exactly right; tuning it will only save a few calls to `malloc()`. Changed in version 3.6: *wbits* and *bufsize* can be used as keyword arguments. `zlib.decompressobj(wbits=MAX_WBITS[, zdict])` Returns a decompression object, to be used for decompressing data streams that won’t fit into memory at once. The *wbits* parameter controls the size of the history buffer (or the “window size”), and what header and trailer format is expected. It has the same meaning as [described for decompress()](#decompress-wbits). The *zdict* parameter specifies a predefined compression dictionary. If provided, this must be the same dictionary as was used by the compressor that produced the data that is to be decompressed. Note If *zdict* is a mutable object (such as a [`bytearray`](stdtypes#bytearray "bytearray")), you must not modify its contents between the call to [`decompressobj()`](#zlib.decompressobj "zlib.decompressobj") and the first call to the decompressor’s `decompress()` method. Changed in version 3.3: Added the *zdict* parameter. Compression objects support the following methods: `Compress.compress(data)` Compress *data*, returning a bytes object containing compressed data for at least part of the data in *data*. This data should be concatenated to the output produced by any preceding calls to the [`compress()`](#zlib.compress "zlib.compress") method. Some input may be kept in internal buffers for later processing. `Compress.flush([mode])` All pending input is processed, and a bytes object containing the remaining compressed output is returned. *mode* can be selected from the constants `Z_NO_FLUSH`, `Z_PARTIAL_FLUSH`, `Z_SYNC_FLUSH`, `Z_FULL_FLUSH`, `Z_BLOCK` (zlib 1.2.3.4), or `Z_FINISH`, defaulting to `Z_FINISH`. Except `Z_FINISH`, all constants allow compressing further bytestrings of data, while `Z_FINISH` finishes the compressed stream and prevents compressing any more data. After calling [`flush()`](#zlib.Compress.flush "zlib.Compress.flush") with *mode* set to `Z_FINISH`, the [`compress()`](#zlib.compress "zlib.compress") method cannot be called again; the only realistic action is to delete the object. `Compress.copy()` Returns a copy of the compression object. This can be used to efficiently compress a set of data that share a common initial prefix. Changed in version 3.8: Added [`copy.copy()`](copy#copy.copy "copy.copy") and [`copy.deepcopy()`](copy#copy.deepcopy "copy.deepcopy") support to compression objects. Decompression objects support the following methods and attributes: `Decompress.unused_data` A bytes object which contains any bytes past the end of the compressed data. That is, this remains `b""` until the last byte that contains compression data is available. If the whole bytestring turned out to contain compressed data, this is `b""`, an empty bytes object. `Decompress.unconsumed_tail` A bytes object that contains any data that was not consumed by the last [`decompress()`](#zlib.decompress "zlib.decompress") call because it exceeded the limit for the uncompressed data buffer. 
This data has not yet been seen by the zlib machinery, so you must feed it (possibly with further data concatenated to it) back to a subsequent [`decompress()`](#zlib.decompress "zlib.decompress") method call in order to get correct output. `Decompress.eof` A boolean indicating whether the end of the compressed data stream has been reached. This makes it possible to distinguish between a properly-formed compressed stream, and an incomplete or truncated one. New in version 3.3. `Decompress.decompress(data, max_length=0)` Decompress *data*, returning a bytes object containing the uncompressed data corresponding to at least part of the data in *data*. This data should be concatenated to the output produced by any preceding calls to the [`decompress()`](#zlib.decompress "zlib.decompress") method. Some of the input data may be preserved in internal buffers for later processing. If the optional parameter *max\_length* is non-zero then the return value will be no longer than *max\_length*. This may mean that not all of the compressed input can be processed; unconsumed data will be stored in the attribute [`unconsumed_tail`](#zlib.Decompress.unconsumed_tail "zlib.Decompress.unconsumed_tail"). This bytestring must be passed to a subsequent call to [`decompress()`](#zlib.decompress "zlib.decompress") if decompression is to continue. If *max\_length* is zero then the whole input is decompressed, and [`unconsumed_tail`](#zlib.Decompress.unconsumed_tail "zlib.Decompress.unconsumed_tail") is empty. Changed in version 3.6: *max\_length* can be used as a keyword argument. `Decompress.flush([length])` All pending input is processed, and a bytes object containing the remaining uncompressed output is returned. After calling [`flush()`](#zlib.Decompress.flush "zlib.Decompress.flush"), the [`decompress()`](#zlib.decompress "zlib.decompress") method cannot be called again; the only realistic action is to delete the object. The optional parameter *length* sets the initial size of the output buffer. `Decompress.copy()` Returns a copy of the decompression object. This can be used to save the state of the decompressor midway through the data stream in order to speed up random seeks into the stream at a future point. Changed in version 3.8: Added [`copy.copy()`](copy#copy.copy "copy.copy") and [`copy.deepcopy()`](copy#copy.deepcopy "copy.deepcopy") support to decompression objects. Information about the version of the zlib library in use is available through the following constants: `zlib.ZLIB_VERSION` The version string of the zlib library that was used for building the module. This may be different from the zlib library actually used at runtime, which is available as [`ZLIB_RUNTIME_VERSION`](#zlib.ZLIB_RUNTIME_VERSION "zlib.ZLIB_RUNTIME_VERSION"). `zlib.ZLIB_RUNTIME_VERSION` The version string of the zlib library actually loaded by the interpreter. New in version 3.3. See also `Module` [`gzip`](gzip#module-gzip "gzip: Interfaces for gzip compression and decompression using file objects.") Reading and writing **gzip**-format files. <http://www.zlib.net> The zlib library home page. <http://www.zlib.net/manual.html> The zlib manual explains the semantics and usage of the library’s many functions.
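Finally, a minimal end-to-end sketch of the one-shot and streaming interfaces described above (payload and chunk size are arbitrary):

```
import zlib

data = b'witch which has which witches wrist watch ' * 100

# Compress incrementally, then finish the stream with flush().
co = zlib.compressobj(level=9)
compressed = co.compress(data[:2048]) + co.compress(data[2048:]) + co.flush()

# Decompress incrementally; flush() returns any remaining output.
do = zlib.decompressobj()
restored = do.decompress(compressed) + do.flush()

assert restored == data
assert do.eof  # the complete stream was consumed
```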
python html.parser — Simple HTML and XHTML parser html.parser — Simple HTML and XHTML parser ========================================== **Source code:** [Lib/html/parser.py](https://github.com/python/cpython/tree/3.9/Lib/html/parser.py) This module defines a class [`HTMLParser`](#html.parser.HTMLParser "html.parser.HTMLParser") which serves as the basis for parsing text files formatted in HTML (HyperText Mark-up Language) and XHTML. `class html.parser.HTMLParser(*, convert_charrefs=True)` Create a parser instance able to parse invalid markup. If *convert\_charrefs* is `True` (the default), all character references (except the ones in `script`/`style` elements) are automatically converted to the corresponding Unicode characters. An [`HTMLParser`](#html.parser.HTMLParser "html.parser.HTMLParser") instance is fed HTML data and calls handler methods when start tags, end tags, text, comments, and other markup elements are encountered. The user should subclass [`HTMLParser`](#html.parser.HTMLParser "html.parser.HTMLParser") and override its methods to implement the desired behavior. This parser does not check that end tags match start tags or call the end-tag handler for elements which are closed implicitly by closing an outer element. Changed in version 3.4: *convert\_charrefs* keyword argument added. Changed in version 3.5: The default value for argument *convert\_charrefs* is now `True`. Example HTML Parser Application ------------------------------- As a basic example, below is a simple HTML parser that uses the [`HTMLParser`](#html.parser.HTMLParser "html.parser.HTMLParser") class to print out start tags, end tags, and data as they are encountered: ``` from html.parser import HTMLParser class MyHTMLParser(HTMLParser): def handle_starttag(self, tag, attrs): print("Encountered a start tag:", tag) def handle_endtag(self, tag): print("Encountered an end tag :", tag) def handle_data(self, data): print("Encountered some data :", data) parser = MyHTMLParser() parser.feed('<html><head><title>Test</title></head>' '<body><h1>Parse me!</h1></body></html>') ``` The output will then be: ``` Encountered a start tag: html Encountered a start tag: head Encountered a start tag: title Encountered some data : Test Encountered an end tag : title Encountered an end tag : head Encountered a start tag: body Encountered a start tag: h1 Encountered some data : Parse me! Encountered an end tag : h1 Encountered an end tag : body Encountered an end tag : html ``` HTMLParser Methods ------------------ [`HTMLParser`](#html.parser.HTMLParser "html.parser.HTMLParser") instances have the following methods: `HTMLParser.feed(data)` Feed some text to the parser. It is processed insofar as it consists of complete elements; incomplete data is buffered until more data is fed or [`close()`](#html.parser.HTMLParser.close "html.parser.HTMLParser.close") is called. *data* must be [`str`](stdtypes#str "str"). `HTMLParser.close()` Force processing of all buffered data as if it were followed by an end-of-file mark. This method may be redefined by a derived class to define additional processing at the end of the input, but the redefined version should always call the [`HTMLParser`](#html.parser.HTMLParser "html.parser.HTMLParser") base class method [`close()`](#html.parser.HTMLParser.close "html.parser.HTMLParser.close"). `HTMLParser.reset()` Reset the instance. Loses all unprocessed data. This is called implicitly at instantiation time. `HTMLParser.getpos()` Return current line number and offset. 
`HTMLParser.get_starttag_text()` Return the text of the most recently opened start tag. This should not normally be needed for structured processing, but may be useful in dealing with HTML “as deployed” or for re-generating input with minimal changes (whitespace between attributes can be preserved, etc.). The following methods are called when data or markup elements are encountered and they are meant to be overridden in a subclass. The base class implementations do nothing (except for [`handle_startendtag()`](#html.parser.HTMLParser.handle_startendtag "html.parser.HTMLParser.handle_startendtag")): `HTMLParser.handle_starttag(tag, attrs)` This method is called to handle the start tag of an element (e.g. `<div id="main">`). The *tag* argument is the name of the tag converted to lower case. The *attrs* argument is a list of `(name, value)` pairs containing the attributes found inside the tag’s `<>` brackets. The *name* will be translated to lower case, and quotes in the *value* have been removed, and character and entity references have been replaced. For instance, for the tag `<A HREF="https://www.cwi.nl/">`, this method would be called as `handle_starttag('a', [('href', 'https://www.cwi.nl/')])`. All entity references from [`html.entities`](html.entities#module-html.entities "html.entities: Definitions of HTML general entities.") are replaced in the attribute values. `HTMLParser.handle_endtag(tag)` This method is called to handle the end tag of an element (e.g. `</div>`). The *tag* argument is the name of the tag converted to lower case. `HTMLParser.handle_startendtag(tag, attrs)` Similar to [`handle_starttag()`](#html.parser.HTMLParser.handle_starttag "html.parser.HTMLParser.handle_starttag"), but called when the parser encounters an XHTML-style empty tag (`<img ... />`). This method may be overridden by subclasses which require this particular lexical information; the default implementation simply calls [`handle_starttag()`](#html.parser.HTMLParser.handle_starttag "html.parser.HTMLParser.handle_starttag") and [`handle_endtag()`](#html.parser.HTMLParser.handle_endtag "html.parser.HTMLParser.handle_endtag"). `HTMLParser.handle_data(data)` This method is called to process arbitrary data (e.g. text nodes and the content of `<script>...</script>` and `<style>...</style>`). `HTMLParser.handle_entityref(name)` This method is called to process a named character reference of the form `&name;` (e.g. `&gt;`), where *name* is a general entity reference (e.g. `'gt'`). This method is never called if *convert\_charrefs* is `True`. `HTMLParser.handle_charref(name)` This method is called to process decimal and hexadecimal numeric character references of the form `&#NNN;` and `&#xNNN;`. For example, the decimal equivalent for `&gt;` is `&#62;`, whereas the hexadecimal is `&#x3E;`; in this case the method will receive `'62'` or `'x3E'`. This method is never called if *convert\_charrefs* is `True`. `HTMLParser.handle_comment(data)` This method is called when a comment is encountered (e.g. `<!--comment-->`). For example, the comment `<!-- comment -->` will cause this method to be called with the argument `' comment '`. The content of Internet Explorer conditional comments (condcoms) will also be sent to this method, so, for `<!--[if IE 9]>IE9-specific content<![endif]-->`, this method will receive `'[if IE 9]>IE9-specific content<![endif]'`. `HTMLParser.handle_decl(decl)` This method is called to handle an HTML doctype declaration (e.g. `<!DOCTYPE html>`). 
The *decl* parameter will be the entire contents of the declaration inside the `<!...>` markup (e.g. `'DOCTYPE html'`). `HTMLParser.handle_pi(data)` Method called when a processing instruction is encountered. The *data* parameter will contain the entire processing instruction. For example, for the processing instruction `<?proc color='red'>`, this method would be called as `handle_pi("proc color='red'")`. It is intended to be overridden by a derived class; the base class implementation does nothing. Note The [`HTMLParser`](#html.parser.HTMLParser "html.parser.HTMLParser") class uses the SGML syntactic rules for processing instructions. An XHTML processing instruction using the trailing `'?'` will cause the `'?'` to be included in *data*. `HTMLParser.unknown_decl(data)` This method is called when an unrecognized declaration is read by the parser. The *data* parameter will be the entire contents of the declaration inside the `<![...]>` markup. It is sometimes useful to be overridden by a derived class. The base class implementation does nothing. Examples -------- The following class implements a parser that will be used to illustrate more examples: ``` from html.parser import HTMLParser from html.entities import name2codepoint class MyHTMLParser(HTMLParser): def handle_starttag(self, tag, attrs): print("Start tag:", tag) for attr in attrs: print(" attr:", attr) def handle_endtag(self, tag): print("End tag :", tag) def handle_data(self, data): print("Data :", data) def handle_comment(self, data): print("Comment :", data) def handle_entityref(self, name): c = chr(name2codepoint[name]) print("Named ent:", c) def handle_charref(self, name): if name.startswith('x'): c = chr(int(name[1:], 16)) else: c = chr(int(name)) print("Num ent :", c) def handle_decl(self, data): print("Decl :", data) parser = MyHTMLParser() ``` Parsing a doctype: ``` >>> parser.feed('<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" ' ... '"http://www.w3.org/TR/html4/strict.dtd">') Decl : DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd" ``` Parsing an element with a few attributes and a title: ``` >>> parser.feed('<img src="python-logo.png" alt="The Python logo">') Start tag: img attr: ('src', 'python-logo.png') attr: ('alt', 'The Python logo') >>> >>> parser.feed('<h1>Python</h1>') Start tag: h1 Data : Python End tag : h1 ``` The content of `script` and `style` elements is returned as is, without further parsing: ``` >>> parser.feed('<style type="text/css">#python { color: green }</style>') Start tag: style attr: ('type', 'text/css') Data : #python { color: green } End tag : style >>> parser.feed('<script type="text/javascript">' ... 'alert("<strong>hello!</strong>");</script>') Start tag: script attr: ('type', 'text/javascript') Data : alert("<strong>hello!</strong>"); End tag : script ``` Parsing comments: ``` >>> parser.feed('<!-- a comment -->' ... 
'<!--[if IE 9]>IE-specific content<![endif]-->') Comment : a comment Comment : [if IE 9]>IE-specific content<![endif] ``` Parsing named and numeric character references and converting them to the correct char (note: these 3 references are all equivalent to `'>'`; because [`handle_entityref()`](#html.parser.HTMLParser.handle_entityref "html.parser.HTMLParser.handle_entityref") and [`handle_charref()`](#html.parser.HTMLParser.handle_charref "html.parser.HTMLParser.handle_charref") are never called when *convert\_charrefs* is `True`, this example assumes a parser created with `convert_charrefs=False`): ``` >>> parser.feed('&gt;&#62;&#x3E;') Named ent: > Num ent : > Num ent : > ``` Feeding incomplete chunks to [`feed()`](#html.parser.HTMLParser.feed "html.parser.HTMLParser.feed") works, but [`handle_data()`](#html.parser.HTMLParser.handle_data "html.parser.HTMLParser.handle_data") might be called more than once (unless *convert\_charrefs* is set to `True`): ``` >>> for chunk in ['<sp', 'an>buff', 'ered ', 'text</s', 'pan>']: ... parser.feed(chunk) ... Start tag: span Data : buff Data : ered Data : text End tag : span ``` Parsing invalid HTML (e.g. unquoted attributes) also works: ``` >>> parser.feed('<p><a class=link href=#main>tag soup</p ></a>') Start tag: p Start tag: a attr: ('class', 'link') attr: ('href', '#main') Data : tag soup End tag : p End tag : a ```
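As one more illustration of subclassing the handler methods, below is a small sketch of a link extractor; the `LinkCollector` class and its `links` attribute are illustrative names, not part of the module:

```
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href values of all <a> tags seen in the input."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            # attrs is a list of (name, value) pairs; names are lowercased
            # and values may be None for valueless attributes.
            for name, value in attrs:
                if name == "href" and value is not None:
                    self.links.append(value)

collector = LinkCollector()
collector.feed('<p>See <a href="https://docs.python.org">the docs</a> '
               'and <a href="https://peps.python.org">the PEPs</a>.</p>')
collector.close()
print(collector.links)
# ['https://docs.python.org', 'https://peps.python.org']
```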
python winsound — Sound-playing interface for Windows winsound — Sound-playing interface for Windows ============================================== The [`winsound`](#module-winsound "winsound: Access to the sound-playing machinery for Windows. (Windows)") module provides access to the basic sound-playing machinery provided by Windows platforms. It includes functions and several constants. `winsound.Beep(frequency, duration)` Beep the PC’s speaker. The *frequency* parameter specifies frequency, in hertz, of the sound, and must be in the range 37 through 32,767. The *duration* parameter specifies the number of milliseconds the sound should last. If the system is not able to beep the speaker, [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. `winsound.PlaySound(sound, flags)` Call the underlying `PlaySound()` function from the Platform API. The *sound* parameter may be a filename, a system sound alias, audio data as a [bytes-like object](../glossary#term-bytes-like-object), or `None`. Its interpretation depends on the value of *flags*, which can be a bitwise ORed combination of the constants described below. If the *sound* parameter is `None`, any currently playing waveform sound is stopped. If the system indicates an error, [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. `winsound.MessageBeep(type=MB_OK)` Call the underlying `MessageBeep()` function from the Platform API. This plays a sound as specified in the registry. The *type* argument specifies which sound to play; possible values are `-1`, `MB_ICONASTERISK`, `MB_ICONEXCLAMATION`, `MB_ICONHAND`, `MB_ICONQUESTION`, and `MB_OK`, all described below. The value `-1` produces a “simple beep”; this is the final fallback if a sound cannot be played otherwise. If the system indicates an error, [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. `winsound.SND_FILENAME` The *sound* parameter is the name of a WAV file. Do not use with [`SND_ALIAS`](#winsound.SND_ALIAS "winsound.SND_ALIAS"). `winsound.SND_ALIAS` The *sound* parameter is a sound association name from the registry. If the registry contains no such name, play the system default sound unless [`SND_NODEFAULT`](#winsound.SND_NODEFAULT "winsound.SND_NODEFAULT") is also specified. If no default sound is registered, raise [`RuntimeError`](exceptions#RuntimeError "RuntimeError"). Do not use with [`SND_FILENAME`](#winsound.SND_FILENAME "winsound.SND_FILENAME"). All Win32 systems support at least the following; most systems support many more: | [`PlaySound()`](#winsound.PlaySound "winsound.PlaySound") *name* | Corresponding Control Panel Sound name | | --- | --- | | `'SystemAsterisk'` | Asterisk | | `'SystemExclamation'` | Exclamation | | `'SystemExit'` | Exit Windows | | `'SystemHand'` | Critical Stop | | `'SystemQuestion'` | Question | For example: ``` import winsound # Play Windows exit sound. winsound.PlaySound("SystemExit", winsound.SND_ALIAS) # Probably play Windows default sound, if any is registered (because # "*" probably isn't the registered name of any sound). winsound.PlaySound("*", winsound.SND_ALIAS) ``` `winsound.SND_LOOP` Play the sound repeatedly. The [`SND_ASYNC`](#winsound.SND_ASYNC "winsound.SND_ASYNC") flag must also be used to avoid blocking. Cannot be used with [`SND_MEMORY`](#winsound.SND_MEMORY "winsound.SND_MEMORY"). `winsound.SND_MEMORY` The *sound* parameter to [`PlaySound()`](#winsound.PlaySound "winsound.PlaySound") is a memory image of a WAV file, as a [bytes-like object](../glossary#term-bytes-like-object). 
Note This module does not support playing from a memory image asynchronously, so a combination of this flag and [`SND_ASYNC`](#winsound.SND_ASYNC "winsound.SND_ASYNC") will raise [`RuntimeError`](exceptions#RuntimeError "RuntimeError"). `winsound.SND_PURGE` Stop playing all instances of the specified sound. Note This flag is not supported on modern Windows platforms. `winsound.SND_ASYNC` Return immediately, allowing sounds to play asynchronously. `winsound.SND_NODEFAULT` If the specified sound cannot be found, do not play the system default sound. `winsound.SND_NOSTOP` Do not interrupt sounds currently playing. `winsound.SND_NOWAIT` Return immediately if the sound driver is busy. Note This flag is not supported on modern Windows platforms. `winsound.MB_ICONASTERISK` Play the `SystemDefault` sound. `winsound.MB_ICONEXCLAMATION` Play the `SystemExclamation` sound. `winsound.MB_ICONHAND` Play the `SystemHand` sound. `winsound.MB_ICONQUESTION` Play the `SystemQuestion` sound. `winsound.MB_OK` Play the `SystemDefault` sound. python Subprocesses Subprocesses ============ **Source code:** [Lib/asyncio/subprocess.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/subprocess.py), [Lib/asyncio/base\_subprocess.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/base_subprocess.py) This section describes high-level async/await asyncio APIs to create and manage subprocesses. Here’s an example of how asyncio can run a shell command and obtain its result: ``` import asyncio async def run(cmd): proc = await asyncio.create_subprocess_shell( cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE) stdout, stderr = await proc.communicate() print(f'[{cmd!r} exited with {proc.returncode}]') if stdout: print(f'[stdout]\n{stdout.decode()}') if stderr: print(f'[stderr]\n{stderr.decode()}') asyncio.run(run('ls /zzz')) ``` will print: ``` ['ls /zzz' exited with 1] [stderr] ls: /zzz: No such file or directory ``` Because all asyncio subprocess functions are asynchronous and asyncio provides many tools to work with such functions, it is easy to execute and monitor multiple subprocesses in parallel. It is indeed trivial to modify the above example to run several commands simultaneously: ``` async def main(): await asyncio.gather( run('ls /zzz'), run('sleep 1; echo "hello"')) asyncio.run(main()) ``` See also the [Examples](#examples) subsection. Creating Subprocesses --------------------- `coroutine asyncio.create_subprocess_exec(program, *args, stdin=None, stdout=None, stderr=None, loop=None, limit=None, **kwds)` Create a subprocess. The *limit* argument sets the buffer limit for [`StreamReader`](asyncio-stream#asyncio.StreamReader "asyncio.StreamReader") wrappers for `Process.stdout` and `Process.stderr` (if [`subprocess.PIPE`](subprocess#subprocess.PIPE "subprocess.PIPE") is passed to *stdout* and *stderr* arguments). Return a [`Process`](#asyncio.subprocess.Process "asyncio.subprocess.Process") instance. See the documentation of [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec") for other parameters. Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter. `coroutine asyncio.create_subprocess_shell(cmd, stdin=None, stdout=None, stderr=None, loop=None, limit=None, **kwds)` Run the *cmd* shell command. 
The *limit* argument sets the buffer limit for [`StreamReader`](asyncio-stream#asyncio.StreamReader "asyncio.StreamReader") wrappers for `Process.stdout` and `Process.stderr` (if [`subprocess.PIPE`](subprocess#subprocess.PIPE "subprocess.PIPE") is passed to *stdout* and *stderr* arguments). Return a [`Process`](#asyncio.subprocess.Process "asyncio.subprocess.Process") instance. See the documentation of [`loop.subprocess_shell()`](asyncio-eventloop#asyncio.loop.subprocess_shell "asyncio.loop.subprocess_shell") for other parameters. Important It is the application’s responsibility to ensure that all whitespace and special characters are quoted appropriately to avoid [shell injection](https://en.wikipedia.org/wiki/Shell_injection#Shell_injection) vulnerabilities. The [`shlex.quote()`](shlex#shlex.quote "shlex.quote") function can be used to properly escape whitespace and special shell characters in strings that are going to be used to construct shell commands. Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter. Note Subprocesses are available for Windows if a [`ProactorEventLoop`](asyncio-eventloop#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") is used. See [Subprocess Support on Windows](asyncio-platforms#asyncio-windows-subprocess) for details. See also asyncio also has the following *low-level* APIs to work with subprocesses: [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec"), [`loop.subprocess_shell()`](asyncio-eventloop#asyncio.loop.subprocess_shell "asyncio.loop.subprocess_shell"), [`loop.connect_read_pipe()`](asyncio-eventloop#asyncio.loop.connect_read_pipe "asyncio.loop.connect_read_pipe"), [`loop.connect_write_pipe()`](asyncio-eventloop#asyncio.loop.connect_write_pipe "asyncio.loop.connect_write_pipe"), as well as the [Subprocess Transports](asyncio-protocol#asyncio-subprocess-transports) and [Subprocess Protocols](asyncio-protocol#asyncio-subprocess-protocols). Constants --------- `asyncio.subprocess.PIPE` Can be passed to the *stdin*, *stdout* or *stderr* parameters. If *PIPE* is passed to *stdin* argument, the [`Process.stdin`](#asyncio.subprocess.Process.stdin "asyncio.subprocess.Process.stdin") attribute will point to a `StreamWriter` instance. If *PIPE* is passed to *stdout* or *stderr* arguments, the [`Process.stdout`](#asyncio.subprocess.Process.stdout "asyncio.subprocess.Process.stdout") and [`Process.stderr`](#asyncio.subprocess.Process.stderr "asyncio.subprocess.Process.stderr") attributes will point to `StreamReader` instances. `asyncio.subprocess.STDOUT` Special value that can be used as the *stderr* argument and indicates that standard error should be redirected into standard output. `asyncio.subprocess.DEVNULL` Special value that can be used as the *stdin*, *stdout* or *stderr* argument to process creation functions. It indicates that the special file [`os.devnull`](os#os.devnull "os.devnull") will be used for the corresponding subprocess stream. Interacting with Subprocesses ----------------------------- Both [`create_subprocess_exec()`](#asyncio.create_subprocess_exec "asyncio.create_subprocess_exec") and [`create_subprocess_shell()`](#asyncio.create_subprocess_shell "asyncio.create_subprocess_shell") functions return instances of the *Process* class. *Process* is a high-level wrapper that allows communicating with subprocesses and watching for their completion. 
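Before the detailed class reference that follows, here is a minimal sketch of that interaction pattern: open pipes on both ends, send input, and collect output with `communicate()`. The `tr` command is just an illustrative POSIX filter:

```
import asyncio

async def upper(text: str) -> str:
    # Spawn an external filter with piped stdin and stdout.
    proc = await asyncio.create_subprocess_exec(
        'tr', 'a-z', 'A-Z',
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE)
    # communicate() writes the input, reads stdout to EOF,
    # and waits for the process to exit.
    stdout, _ = await proc.communicate(input=text.encode())
    return stdout.decode()

print(asyncio.run(upper('hello')))  # HELLO
```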
`class asyncio.subprocess.Process` An object that wraps OS processes created by the `create_subprocess_exec()` and `create_subprocess_shell()` functions. This class is designed to have a similar API to the [`subprocess.Popen`](subprocess#subprocess.Popen "subprocess.Popen") class, but there are some notable differences: * unlike Popen, Process instances do not have an equivalent to the [`poll()`](subprocess#subprocess.Popen.poll "subprocess.Popen.poll") method; * the [`communicate()`](#asyncio.subprocess.Process.communicate "asyncio.subprocess.Process.communicate") and [`wait()`](#asyncio.subprocess.Process.wait "asyncio.subprocess.Process.wait") methods don’t have a *timeout* parameter: use the `wait_for()` function; * the [`Process.wait()`](#asyncio.subprocess.Process.wait "asyncio.subprocess.Process.wait") method is asynchronous, whereas the [`subprocess.Popen.wait()`](subprocess#subprocess.Popen.wait "subprocess.Popen.wait") method is implemented as a blocking busy loop; * the *universal\_newlines* parameter is not supported. This class is [not thread safe](asyncio-dev#asyncio-multithreading). See also the [Subprocess and Threads](#asyncio-subprocess-threads) section. `coroutine wait()` Wait for the child process to terminate. Set and return the [`returncode`](#asyncio.subprocess.Process.returncode "asyncio.subprocess.Process.returncode") attribute. Note This method can deadlock when using `stdout=PIPE` or `stderr=PIPE` and the child process generates so much output that it blocks waiting for the OS pipe buffer to accept more data. Use the [`communicate()`](#asyncio.subprocess.Process.communicate "asyncio.subprocess.Process.communicate") method when using pipes to avoid this condition. `coroutine communicate(input=None)` Interact with the process: 1. send data to *stdin* (if *input* is not `None`); 2. read data from *stdout* and *stderr*, until EOF is reached; 3. wait for the process to terminate. The optional *input* argument is the data ([`bytes`](stdtypes#bytes "bytes") object) that will be sent to the child process. Return a tuple `(stdout_data, stderr_data)`. If a [`BrokenPipeError`](exceptions#BrokenPipeError "BrokenPipeError") or [`ConnectionResetError`](exceptions#ConnectionResetError "ConnectionResetError") exception is raised when writing *input* into *stdin*, the exception is ignored. This condition occurs when the process exits before all data are written into *stdin*. If it is desired to send data to the process’ *stdin*, the process needs to be created with `stdin=PIPE`. Similarly, to get anything other than `None` in the result tuple, the process has to be created with `stdout=PIPE` and/or `stderr=PIPE` arguments. Note that the data read is buffered in memory, so do not use this method if the data size is large or unlimited. `send_signal(signal)` Sends the signal *signal* to the child process. Note On Windows, `SIGTERM` is an alias for [`terminate()`](#asyncio.subprocess.Process.terminate "asyncio.subprocess.Process.terminate"). `CTRL_C_EVENT` and `CTRL_BREAK_EVENT` can be sent to processes started with a *creationflags* parameter which includes `CREATE_NEW_PROCESS_GROUP`. `terminate()` Stop the child process. On POSIX systems this method sends [`signal.SIGTERM`](signal#signal.SIGTERM "signal.SIGTERM") to the child process. On Windows the Win32 API function `TerminateProcess()` is called to stop the child process. `kill()` Kill the child process. On POSIX systems this method sends `SIGKILL` to the child process.
On Windows this method is an alias for [`terminate()`](#asyncio.subprocess.Process.terminate "asyncio.subprocess.Process.terminate"). `stdin` Standard input stream (`StreamWriter`) or `None` if the process was created with `stdin=None`. `stdout` Standard output stream (`StreamReader`) or `None` if the process was created with `stdout=None`. `stderr` Standard error stream (`StreamReader`) or `None` if the process was created with `stderr=None`. Warning Use the [`communicate()`](#asyncio.subprocess.Process.communicate "asyncio.subprocess.Process.communicate") method rather than [`process.stdin.write()`](#asyncio.subprocess.Process.stdin "asyncio.subprocess.Process.stdin"), [`await process.stdout.read()`](#asyncio.subprocess.Process.stdout "asyncio.subprocess.Process.stdout") or [`await process.stderr.read()`](#asyncio.subprocess.Process.stderr "asyncio.subprocess.Process.stderr"). This avoids deadlocks due to streams pausing reading or writing and blocking the child process. `pid` Process identification number (PID). Note that for processes created by the `create_subprocess_shell()` function, this attribute is the PID of the spawned shell. `returncode` Return code of the process when it exits. A `None` value indicates that the process has not terminated yet. A negative value `-N` indicates that the child was terminated by signal `N` (POSIX only). ### Subprocess and Threads The standard asyncio event loop supports running subprocesses from different threads by default. On Windows, subprocesses are provided by [`ProactorEventLoop`](asyncio-eventloop#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") only (the default); [`SelectorEventLoop`](asyncio-eventloop#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") has no subprocess support. On UNIX, *child watchers* are used to wait for subprocess termination; see [Process Watchers](asyncio-policy#asyncio-watchers) for more info. Changed in version 3.8: UNIX switched to use [`ThreadedChildWatcher`](asyncio-policy#asyncio.ThreadedChildWatcher "asyncio.ThreadedChildWatcher") for spawning subprocesses from different threads without any limitation. Spawning a subprocess while the current child watcher is *inactive* raises [`RuntimeError`](exceptions#RuntimeError "RuntimeError"). Note that alternative event loop implementations might have their own limitations; please refer to their documentation. See also The [Concurrency and multithreading in asyncio](asyncio-dev#asyncio-multithreading) section. ### Examples An example using the [`Process`](#asyncio.subprocess.Process "asyncio.subprocess.Process") class to control a subprocess and the [`StreamReader`](asyncio-stream#asyncio.StreamReader "asyncio.StreamReader") class to read from its standard output. The subprocess is created by the [`create_subprocess_exec()`](#asyncio.create_subprocess_exec "asyncio.create_subprocess_exec") function: ``` import asyncio import sys async def get_date(): code = 'import datetime; print(datetime.datetime.now())' # Create the subprocess; redirect the standard output # into a pipe. proc = await asyncio.create_subprocess_exec( sys.executable, '-c', code, stdout=asyncio.subprocess.PIPE) # Read one line of output. data = await proc.stdout.readline() line = data.decode('ascii').rstrip() # Wait for the subprocess exit. await proc.wait() return line date = asyncio.run(get_date()) print(f"Current date: {date}") ``` See also the [same example](asyncio-protocol#asyncio-example-subprocess-proto) written using low-level APIs.
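As the reference above notes, `wait()` and `communicate()` take no *timeout* parameter; the sketch below shows one way to impose a deadline with `asyncio.wait_for()` instead (the `sleep` command is illustrative):

```
import asyncio

async def run_with_deadline() -> int:
    proc = await asyncio.create_subprocess_exec('sleep', '10')
    try:
        # Wrap the timeout-less wait() in wait_for() to get a deadline.
        await asyncio.wait_for(proc.wait(), timeout=1.0)
    except asyncio.TimeoutError:
        proc.kill()        # SIGKILL on POSIX
        await proc.wait()  # reap the child so it does not linger
    return proc.returncode

print(asyncio.run(run_with_deadline()))  # -9 on POSIX (killed by SIGKILL)
```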
python logging.handlers — Logging handlers logging.handlers — Logging handlers =================================== **Source code:** [Lib/logging/handlers.py](https://github.com/python/cpython/tree/3.9/Lib/logging/handlers.py) The following useful handlers are provided in the package. Note that three of the handlers ([`StreamHandler`](#logging.StreamHandler "logging.StreamHandler"), [`FileHandler`](#logging.FileHandler "logging.FileHandler") and [`NullHandler`](#logging.NullHandler "logging.NullHandler")) are actually defined in the [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") module itself, but have been documented here along with the other handlers. StreamHandler ------------- The [`StreamHandler`](#logging.StreamHandler "logging.StreamHandler") class, located in the core [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") package, sends logging output to streams such as *sys.stdout*, *sys.stderr* or any file-like object (or, more precisely, any object which supports `write()` and `flush()` methods). `class logging.StreamHandler(stream=None)` Returns a new instance of the [`StreamHandler`](#logging.StreamHandler "logging.StreamHandler") class. If *stream* is specified, the instance will use it for logging output; otherwise, *sys.stderr* will be used. `emit(record)` If a formatter is specified, it is used to format the record. The record is then written to the stream followed by [`terminator`](#logging.StreamHandler.terminator "logging.StreamHandler.terminator"). If exception information is present, it is formatted using [`traceback.print_exception()`](traceback#traceback.print_exception "traceback.print_exception") and appended to the stream. `flush()` Flushes the stream by calling its [`flush()`](#logging.StreamHandler.flush "logging.StreamHandler.flush") method. Note that the `close()` method is inherited from [`Handler`](logging#logging.Handler "logging.Handler") and so does no output, so an explicit [`flush()`](#logging.StreamHandler.flush "logging.StreamHandler.flush") call may be needed at times. `setStream(stream)` Sets the instance’s stream to the specified value, if it is different. The old stream is flushed before the new stream is set. Parameters **stream** – The stream that the handler should use. Returns the old stream, if the stream was changed, or *None* if it wasn’t. New in version 3.7. `terminator` String used as the terminator when writing a formatted record to a stream. Default value is `'\n'`. If you don’t want a newline termination, you can set the handler instance’s `terminator` attribute to the empty string. In earlier versions, the terminator was hardcoded as `'\n'`. New in version 3.2. FileHandler ----------- The [`FileHandler`](#logging.FileHandler "logging.FileHandler") class, located in the core [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") package, sends logging output to a disk file. It inherits the output functionality from [`StreamHandler`](#logging.StreamHandler "logging.StreamHandler"). `class logging.FileHandler(filename, mode='a', encoding=None, delay=False, errors=None)` Returns a new instance of the [`FileHandler`](#logging.FileHandler "logging.FileHandler") class. The specified file is opened and used as the stream for logging. If *mode* is not specified, `'a'` is used. If *encoding* is not `None`, it is used to open the file with that encoding. 
If *delay* is true, then file opening is deferred until the first call to [`emit()`](#logging.FileHandler.emit "logging.FileHandler.emit"). By default, the file grows indefinitely. If *errors* is specified, it’s used to determine how encoding errors are handled. Changed in version 3.6: As well as string values, [`Path`](pathlib#pathlib.Path "pathlib.Path") objects are also accepted for the *filename* argument. Changed in version 3.9: The *errors* parameter was added. `close()` Closes the file. `emit(record)` Outputs the record to the file. NullHandler ----------- New in version 3.1. The [`NullHandler`](#logging.NullHandler "logging.NullHandler") class, located in the core [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") package, does not do any formatting or output. It is essentially a ‘no-op’ handler for use by library developers. `class logging.NullHandler` Returns a new instance of the [`NullHandler`](#logging.NullHandler "logging.NullHandler") class. `emit(record)` This method does nothing. `handle(record)` This method does nothing. `createLock()` This method returns `None` for the lock, since there is no underlying I/O to which access needs to be serialized. See [Configuring Logging for a Library](../howto/logging#library-config) for more information on how to use [`NullHandler`](#logging.NullHandler "logging.NullHandler"). WatchedFileHandler ------------------ The [`WatchedFileHandler`](#logging.handlers.WatchedFileHandler "logging.handlers.WatchedFileHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, is a `FileHandler` which watches the file it is logging to. If the file changes, it is closed and reopened using the file name. A file change can happen because of usage of programs such as *newsyslog* and *logrotate* which perform log file rotation. This handler, intended for use under Unix/Linux, watches the file to see if it has changed since the last emit. (A file is deemed to have changed if its device or inode have changed.) If the file has changed, the old file stream is closed, and the file opened to get a new stream. This handler is not appropriate for use under Windows, because under Windows open log files cannot be moved or renamed - logging opens the files with exclusive locks - and so there is no need for such a handler. Furthermore, *ST\_INO* is not supported under Windows; [`stat()`](os#os.stat "os.stat") always returns zero for this value. `class logging.handlers.WatchedFileHandler(filename, mode='a', encoding=None, delay=False, errors=None)` Returns a new instance of the [`WatchedFileHandler`](#logging.handlers.WatchedFileHandler "logging.handlers.WatchedFileHandler") class. The specified file is opened and used as the stream for logging. If *mode* is not specified, `'a'` is used. If *encoding* is not `None`, it is used to open the file with that encoding. If *delay* is true, then file opening is deferred until the first call to [`emit()`](#logging.handlers.WatchedFileHandler.emit "logging.handlers.WatchedFileHandler.emit"). By default, the file grows indefinitely. If *errors* is provided, it determines how encoding errors are handled. Changed in version 3.6: As well as string values, [`Path`](pathlib#pathlib.Path "pathlib.Path") objects are also accepted for the *filename* argument. Changed in version 3.9: The *errors* parameter was added. `reopenIfNeeded()` Checks to see if the file has changed. 
If it has, the existing stream is flushed and closed and the file opened again, typically as a precursor to outputting the record to the file. New in version 3.6. `emit(record)` Outputs the record to the file, but first calls [`reopenIfNeeded()`](#logging.handlers.WatchedFileHandler.reopenIfNeeded "logging.handlers.WatchedFileHandler.reopenIfNeeded") to reopen the file if it has changed. BaseRotatingHandler ------------------- The [`BaseRotatingHandler`](#logging.handlers.BaseRotatingHandler "logging.handlers.BaseRotatingHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, is the base class for the rotating file handlers, [`RotatingFileHandler`](#logging.handlers.RotatingFileHandler "logging.handlers.RotatingFileHandler") and [`TimedRotatingFileHandler`](#logging.handlers.TimedRotatingFileHandler "logging.handlers.TimedRotatingFileHandler"). You should not need to instantiate this class, but it has attributes and methods you may need to override. `class logging.handlers.BaseRotatingHandler(filename, mode, encoding=None, delay=False, errors=None)` The parameters are as for `FileHandler`. The attributes are: `namer` If this attribute is set to a callable, the [`rotation_filename()`](#logging.handlers.BaseRotatingHandler.rotation_filename "logging.handlers.BaseRotatingHandler.rotation_filename") method delegates to this callable. The parameters passed to the callable are those passed to [`rotation_filename()`](#logging.handlers.BaseRotatingHandler.rotation_filename "logging.handlers.BaseRotatingHandler.rotation_filename"). Note The namer function is called quite a few times during rollover, so it should be as simple and as fast as possible. It should also return the same output every time for a given input, otherwise the rollover behaviour may not work as expected. It’s also worth noting that care should be taken when using a namer to preserve certain attributes in the filename which are used during rotation. For example, [`RotatingFileHandler`](#logging.handlers.RotatingFileHandler "logging.handlers.RotatingFileHandler") expects to have a set of log files whose names contain successive integers, so that rotation works as expected, and [`TimedRotatingFileHandler`](#logging.handlers.TimedRotatingFileHandler "logging.handlers.TimedRotatingFileHandler") deletes old log files (based on the `backupCount` parameter passed to the handler’s initializer) by determining the oldest files to delete. For this to happen, the filenames should be sortable using the date/time portion of the filename, and a namer needs to respect this. (If a namer is wanted that doesn’t respect this scheme, it will need to be used in a subclass of [`TimedRotatingFileHandler`](#logging.handlers.TimedRotatingFileHandler "logging.handlers.TimedRotatingFileHandler") which overrides the [`getFilesToDelete()`](#logging.handlers.TimedRotatingFileHandler.getFilesToDelete "logging.handlers.TimedRotatingFileHandler.getFilesToDelete") method to fit in with the custom naming scheme.) New in version 3.3. `rotator` If this attribute is set to a callable, the [`rotate()`](#logging.handlers.BaseRotatingHandler.rotate "logging.handlers.BaseRotatingHandler.rotate") method delegates to this callable. The parameters passed to the callable are those passed to [`rotate()`](#logging.handlers.BaseRotatingHandler.rotate "logging.handlers.BaseRotatingHandler.rotate"). New in version 3.3. 
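Before the method descriptions that follow, here is a short sketch of the two attributes just described, along the lines of the rotator/namer recipe in the logging cookbook; the callables below gzip each rotated file, and the file name and size limits are illustrative:

```
import gzip
import logging.handlers
import os
import shutil

def namer(default_name):
    # Invoked via rotation_filename(); append .gz to the rotated name.
    return default_name + ".gz"

def rotator(source, dest):
    # Invoked via rotate(); compress the closed log file into place.
    with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(source)

handler = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1_000_000, backupCount=5)
handler.namer = namer
handler.rotator = rotator
```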
`rotation_filename(default_name)` Modify the filename of a log file when rotating. This is provided so that a custom filename can be supplied. The default implementation calls the ‘namer’ attribute of the handler, if it’s callable, passing the default name to it. If the attribute isn’t callable (the default is `None`), the name is returned unchanged. Parameters **default\_name** – The default name for the log file. New in version 3.3. `rotate(source, dest)` When rotating, rotate the current log. The default implementation calls the ‘rotator’ attribute of the handler, if it’s callable, passing the source and dest arguments to it. If the attribute isn’t callable (the default is `None`), the source is simply renamed to the destination. Parameters * **source** – The source filename. This is normally the base filename, e.g. ‘test.log’. * **dest** – The destination filename. This is normally what the source is rotated to, e.g. ‘test.log.1’. New in version 3.3. The reason the attributes exist is to save you from having to subclass - you can use the same callables for instances of [`RotatingFileHandler`](#logging.handlers.RotatingFileHandler "logging.handlers.RotatingFileHandler") and [`TimedRotatingFileHandler`](#logging.handlers.TimedRotatingFileHandler "logging.handlers.TimedRotatingFileHandler"). If either the namer or rotator callable raises an exception, this will be handled in the same way as any other exception during an `emit()` call, i.e. via the `handleError()` method of the handler. If you need to make more significant changes to rotation processing, you can override the methods. For an example, see [Using a rotator and namer to customize log rotation processing](../howto/logging-cookbook#cookbook-rotator-namer). RotatingFileHandler ------------------- The [`RotatingFileHandler`](#logging.handlers.RotatingFileHandler "logging.handlers.RotatingFileHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports rotation of disk log files. `class logging.handlers.RotatingFileHandler(filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=False, errors=None)` Returns a new instance of the [`RotatingFileHandler`](#logging.handlers.RotatingFileHandler "logging.handlers.RotatingFileHandler") class. The specified file is opened and used as the stream for logging. If *mode* is not specified, `'a'` is used. If *encoding* is not `None`, it is used to open the file with that encoding. If *delay* is true, then file opening is deferred until the first call to [`emit()`](#logging.handlers.RotatingFileHandler.emit "logging.handlers.RotatingFileHandler.emit"). By default, the file grows indefinitely. If *errors* is provided, it determines how encoding errors are handled. You can use the *maxBytes* and *backupCount* values to allow the file to *roll over* at a predetermined size. When the size is about to be exceeded, the file is closed and a new file is silently opened for output. Rollover occurs whenever the current log file is nearly *maxBytes* in length, but if either of *maxBytes* or *backupCount* is zero, rollover never occurs, so you generally want to set *backupCount* to at least 1, and have a non-zero *maxBytes*. When *backupCount* is non-zero, the system will save old log files by appending the extensions ‘.1’, ‘.2’ etc., to the filename. For example, with a *backupCount* of 5 and a base file name of `app.log`, you would get `app.log`, `app.log.1`, `app.log.2`, up to `app.log.5`.
The file being written to is always `app.log`. When this file is filled, it is closed and renamed to `app.log.1`, and if files `app.log.1`, `app.log.2`, etc. exist, then they are renamed to `app.log.2`, `app.log.3` etc. respectively. Changed in version 3.6: As well as string values, [`Path`](pathlib#pathlib.Path "pathlib.Path") objects are also accepted for the *filename* argument. Changed in version 3.9: The *errors* parameter was added. `doRollover()` Does a rollover, as described above. `emit(record)` Outputs the record to the file, catering for rollover as described previously. TimedRotatingFileHandler ------------------------ The [`TimedRotatingFileHandler`](#logging.handlers.TimedRotatingFileHandler "logging.handlers.TimedRotatingFileHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports rotation of disk log files at certain timed intervals. `class logging.handlers.TimedRotatingFileHandler(filename, when='h', interval=1, backupCount=0, encoding=None, delay=False, utc=False, atTime=None, errors=None)` Returns a new instance of the [`TimedRotatingFileHandler`](#logging.handlers.TimedRotatingFileHandler "logging.handlers.TimedRotatingFileHandler") class. The specified file is opened and used as the stream for logging. On rotating it also sets the filename suffix. Rotating happens based on the product of *when* and *interval*. You can use the *when* to specify the type of *interval*. The list of possible values is below. Note that they are not case sensitive. | Value | Type of interval | If/how *atTime* is used | | --- | --- | --- | | `'S'` | Seconds | Ignored | | `'M'` | Minutes | Ignored | | `'H'` | Hours | Ignored | | `'D'` | Days | Ignored | | `'W0'-'W6'` | Weekday (0=Monday) | Used to compute initial rollover time | | `'midnight'` | Roll over at midnight, if *atTime* not specified, else at time *atTime* | Used to compute initial rollover time | When using weekday-based rotation, specify ‘W0’ for Monday, ‘W1’ for Tuesday, and so on up to ‘W6’ for Sunday. In this case, the value passed for *interval* isn’t used. The system will save old log files by appending extensions to the filename. The extensions are date-and-time based, using the strftime format `%Y-%m-%d_%H-%M-%S` or a leading portion thereof, depending on the rollover interval. When computing the next rollover time for the first time (when the handler is created), the last modification time of an existing log file, or else the current time, is used to compute when the next rotation will occur. If the *utc* argument is true, times in UTC will be used; otherwise local time is used. If *backupCount* is nonzero, at most *backupCount* files will be kept, and if more would be created when rollover occurs, the oldest one is deleted. The deletion logic uses the interval to determine which files to delete, so changing the interval may leave old files lying around. If *delay* is true, then file opening is deferred until the first call to [`emit()`](#logging.handlers.TimedRotatingFileHandler.emit "logging.handlers.TimedRotatingFileHandler.emit"). If *atTime* is not `None`, it must be a `datetime.time` instance which specifies the time of day when rollover occurs, for the cases where rollover is set to happen “at midnight” or “on a particular weekday”. Note that in these cases, the *atTime* value is effectively used to compute the *initial* rollover, and subsequent rollovers would be calculated via the normal interval calculation. 
If *errors* is specified, it’s used to determine how encoding errors are handled. Note Calculation of the initial rollover time is done when the handler is initialised. Calculation of subsequent rollover times is done only when rollover occurs, and rollover occurs only when emitting output. If this is not kept in mind, it might lead to some confusion. For example, if an interval of “every minute” is set, that does not mean you will always see log files with times (in the filename) separated by a minute; if, during application execution, logging output is generated more frequently than once a minute, *then* you can expect to see log files with times separated by a minute. If, on the other hand, logging messages are only output once every five minutes (say), then there will be gaps in the file times corresponding to the minutes where no output (and hence no rollover) occurred. Changed in version 3.4: *atTime* parameter was added. Changed in version 3.6: As well as string values, [`Path`](pathlib#pathlib.Path "pathlib.Path") objects are also accepted for the *filename* argument. Changed in version 3.9: The *errors* parameter was added. `doRollover()` Does a rollover, as described above. `emit(record)` Outputs the record to the file, catering for rollover as described above. `getFilesToDelete()` Returns a list of filenames which should be deleted as part of rollover. These are the absolute paths of the oldest backup log files written by the handler. SocketHandler ------------- The [`SocketHandler`](#logging.handlers.SocketHandler "logging.handlers.SocketHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, sends logging output to a network socket. The base class uses a TCP socket. `class logging.handlers.SocketHandler(host, port)` Returns a new instance of the [`SocketHandler`](#logging.handlers.SocketHandler "logging.handlers.SocketHandler") class intended to communicate with a remote machine whose address is given by *host* and *port*. Changed in version 3.4: If `port` is specified as `None`, a Unix domain socket is created using the value in `host` - otherwise, a TCP socket is created. `close()` Closes the socket. `emit()` Pickles the record’s attribute dictionary and writes it to the socket in binary format. If there is an error with the socket, silently drops the packet. If the connection was previously lost, re-establishes the connection. To unpickle the record at the receiving end into a [`LogRecord`](logging#logging.LogRecord "logging.LogRecord"), use the [`makeLogRecord()`](logging#logging.makeLogRecord "logging.makeLogRecord") function. `handleError()` Handles an error which has occurred during [`emit()`](#logging.handlers.SocketHandler.emit "logging.handlers.SocketHandler.emit"). The most likely cause is a lost connection. Closes the socket so that we can retry on the next event. `makeSocket()` This is a factory method which allows subclasses to define the precise type of socket they want. The default implementation creates a TCP socket ([`socket.SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM")). `makePickle(record)` Pickles the record’s attribute dictionary in binary format with a length prefix, and returns it ready for transmission across the socket. The details of this operation are equivalent to: ``` data = pickle.dumps(record_attr_dict, 1) datalen = struct.pack('>L', len(data)) return datalen + data ``` Note that pickles aren’t completely secure. 
If you are concerned about security, you may want to override this method to implement a more secure mechanism. For example, you can sign pickles using HMAC and then verify them on the receiving end, or alternatively you can disable unpickling of global objects on the receiving end. `send(packet)` Send a pickled byte-string *packet* to the socket. The format of the sent byte-string is as described in the documentation for [`makePickle()`](#logging.handlers.SocketHandler.makePickle "logging.handlers.SocketHandler.makePickle"). This function allows for partial sends, which can happen when the network is busy. `createSocket()` Tries to create a socket; on failure, uses an exponential back-off algorithm. On initial failure, the handler will drop the message it was trying to send. When subsequent messages are handled by the same instance, it will not try connecting until some time has passed. The default parameters are such that the initial delay is one second, and if after that delay the connection still can’t be made, the handler will double the delay each time up to a maximum of 30 seconds. This behaviour is controlled by the following handler attributes: * `retryStart` (initial delay, defaulting to 1.0 seconds). * `retryFactor` (multiplier, defaulting to 2.0). * `retryMax` (maximum delay, defaulting to 30.0 seconds). This means that if the remote listener starts up *after* the handler has been used, you could lose messages (since the handler won’t even attempt a connection until the delay has elapsed, but just silently drop messages during the delay period). DatagramHandler --------------- The [`DatagramHandler`](#logging.handlers.DatagramHandler "logging.handlers.DatagramHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, inherits from [`SocketHandler`](#logging.handlers.SocketHandler "logging.handlers.SocketHandler") to support sending logging messages over UDP sockets. `class logging.handlers.DatagramHandler(host, port)` Returns a new instance of the [`DatagramHandler`](#logging.handlers.DatagramHandler "logging.handlers.DatagramHandler") class intended to communicate with a remote machine whose address is given by *host* and *port*. Changed in version 3.4: If `port` is specified as `None`, a Unix domain socket is created using the value in `host` - otherwise, a UDP socket is created. `emit()` Pickles the record’s attribute dictionary and writes it to the socket in binary format. If there is an error with the socket, silently drops the packet. To unpickle the record at the receiving end into a [`LogRecord`](logging#logging.LogRecord "logging.LogRecord"), use the [`makeLogRecord()`](logging#logging.makeLogRecord "logging.makeLogRecord") function. `makeSocket()` The factory method of [`SocketHandler`](#logging.handlers.SocketHandler "logging.handlers.SocketHandler") is here overridden to create a UDP socket ([`socket.SOCK_DGRAM`](socket#socket.SOCK_DGRAM "socket.SOCK_DGRAM")). `send(s)` Send a pickled byte-string to a socket. The format of the sent byte-string is as described in the documentation for [`SocketHandler.makePickle()`](#logging.handlers.SocketHandler.makePickle "logging.handlers.SocketHandler.makePickle"). 
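A minimal sender-side sketch for the two socket-based handlers above; it assumes a listener (for example, one built around `makeLogRecord()`) is already running on the default logging ports:

```
import logging
import logging.handlers

# TCP: records are pickled by makePickle() and sent over a stream socket.
tcp_handler = logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)

# UDP: same wire format, but each record travels in a single datagram.
udp_handler = logging.handlers.DatagramHandler(
    'localhost', logging.handlers.DEFAULT_UDP_LOGGING_PORT)

logger = logging.getLogger('netdemo')
logger.addHandler(tcp_handler)
logger.error('sent to the TCP listener; silently dropped if none is up')
```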
SysLogHandler ------------- The [`SysLogHandler`](#logging.handlers.SysLogHandler "logging.handlers.SysLogHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports sending logging messages to a remote or local Unix syslog. `class logging.handlers.SysLogHandler(address=('localhost', SYSLOG_UDP_PORT), facility=LOG_USER, socktype=socket.SOCK_DGRAM)` Returns a new instance of the [`SysLogHandler`](#logging.handlers.SysLogHandler "logging.handlers.SysLogHandler") class intended to communicate with a remote Unix machine whose address is given by *address* in the form of a `(host, port)` tuple. If *address* is not specified, `('localhost', 514)` is used. The address is used to open a socket. An alternative to providing a `(host, port)` tuple is providing an address as a string, for example ‘/dev/log’. In this case, a Unix domain socket is used to send the message to the syslog. If *facility* is not specified, `LOG_USER` is used. The type of socket opened depends on the *socktype* argument, which defaults to [`socket.SOCK_DGRAM`](socket#socket.SOCK_DGRAM "socket.SOCK_DGRAM") and thus opens a UDP socket. To open a TCP socket (for use with the newer syslog daemons such as rsyslog), specify a value of [`socket.SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM"). Note that if your server is not listening on UDP port 514, [`SysLogHandler`](#logging.handlers.SysLogHandler "logging.handlers.SysLogHandler") may appear not to work. In that case, check what address you should be using for a domain socket - it’s system dependent. For example, on Linux it’s usually ‘/dev/log’ but on OS/X it’s ‘/var/run/syslog’. You’ll need to check your platform and use the appropriate address (you may need to do this check at runtime if your application needs to run on several platforms). On Windows, you pretty much have to use the UDP option. Changed in version 3.2: *socktype* was added. `close()` Closes the socket to the remote host. `emit(record)` The record is formatted, and then sent to the syslog server. If exception information is present, it is *not* sent to the server. Changed in version 3.2.1: (See: [bpo-12168](https://bugs.python.org/issue?@action=redirect&bpo=12168).) In earlier versions, the message sent to the syslog daemons was always terminated with a NUL byte, because early versions of these daemons expected a NUL terminated message - even though it’s not in the relevant specification ([**RFC 5424**](https://tools.ietf.org/html/rfc5424.html)). More recent versions of these daemons don’t expect the NUL byte but strip it off if it’s there, and even more recent daemons (which adhere more closely to RFC 5424) pass the NUL byte on as part of the message. To enable easier handling of syslog messages in the face of all these differing daemon behaviours, the appending of the NUL byte has been made configurable, through the use of a class-level attribute, `append_nul`. This defaults to `True` (preserving the existing behaviour) but can be set to `False` on a `SysLogHandler` instance in order for that instance to *not* append the NUL terminator. Changed in version 3.3: (See: [bpo-12419](https://bugs.python.org/issue?@action=redirect&bpo=12419).) In earlier versions, there was no facility for an “ident” or “tag” prefix to identify the source of the message. 
This can now be specified using a class-level attribute, defaulting to `""` to preserve existing behaviour, but which can be overridden on a `SysLogHandler` instance in order for that instance to prepend the ident to every message handled. Note that the provided ident must be text, not bytes, and is prepended to the message exactly as is. `encodePriority(facility, priority)` Encodes the facility and priority into an integer. You can pass in strings or integers - if strings are passed, internal mapping dictionaries are used to convert them to integers. The symbolic `LOG_` values are defined in [`SysLogHandler`](#logging.handlers.SysLogHandler "logging.handlers.SysLogHandler") and mirror the values defined in the `sys/syslog.h` header file. **Priorities** | Name (string) | Symbolic value | | --- | --- | | `alert` | LOG\_ALERT | | `crit` or `critical` | LOG\_CRIT | | `debug` | LOG\_DEBUG | | `emerg` or `panic` | LOG\_EMERG | | `err` or `error` | LOG\_ERR | | `info` | LOG\_INFO | | `notice` | LOG\_NOTICE | | `warn` or `warning` | LOG\_WARNING | **Facilities** | Name (string) | Symbolic value | | --- | --- | | `auth` | LOG\_AUTH | | `authpriv` | LOG\_AUTHPRIV | | `cron` | LOG\_CRON | | `daemon` | LOG\_DAEMON | | `ftp` | LOG\_FTP | | `kern` | LOG\_KERN | | `lpr` | LOG\_LPR | | `mail` | LOG\_MAIL | | `news` | LOG\_NEWS | | `syslog` | LOG\_SYSLOG | | `user` | LOG\_USER | | `uucp` | LOG\_UUCP | | `local0` | LOG\_LOCAL0 | | `local1` | LOG\_LOCAL1 | | `local2` | LOG\_LOCAL2 | | `local3` | LOG\_LOCAL3 | | `local4` | LOG\_LOCAL4 | | `local5` | LOG\_LOCAL5 | | `local6` | LOG\_LOCAL6 | | `local7` | LOG\_LOCAL7 | `mapPriority(levelname)` Maps a logging level name to a syslog priority name. You may need to override this if you are using custom levels, or if the default algorithm is not suitable for your needs. The default algorithm maps `DEBUG`, `INFO`, `WARNING`, `ERROR` and `CRITICAL` to the equivalent syslog names, and all other level names to ‘warning’. NTEventLogHandler ----------------- The [`NTEventLogHandler`](#logging.handlers.NTEventLogHandler "logging.handlers.NTEventLogHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports sending logging messages to a local Windows NT, Windows 2000 or Windows XP event log. Before you can use it, you need Mark Hammond’s Win32 extensions for Python installed. `class logging.handlers.NTEventLogHandler(appname, dllname=None, logtype='Application')` Returns a new instance of the [`NTEventLogHandler`](#logging.handlers.NTEventLogHandler "logging.handlers.NTEventLogHandler") class. The *appname* is used to define the application name as it appears in the event log. An appropriate registry entry is created using this name. The *dllname* should give the fully qualified pathname of a .dll or .exe which contains message definitions to hold in the log (if not specified, `'win32service.pyd'` is used - this is installed with the Win32 extensions and contains some basic placeholder message definitions. Note that use of these placeholders will make your event logs big, as the entire message source is held in the log. If you want slimmer logs, you have to pass in the name of your own .dll or .exe which contains the message definitions you want to use in the event log). The *logtype* is one of `'Application'`, `'System'` or `'Security'`, and defaults to `'Application'`. `close()` At this point, you can remove the application name from the registry as a source of event log entries. 
However, if you do this, you will not be able to see the events as you intended in the Event Log Viewer - it needs to be able to access the registry to get the .dll name. The current version does not do this.

`emit(record)`

Determines the message ID, event category and event type, and then logs the message in the NT event log.

`getEventCategory(record)`

Returns the event category for the record. Override this if you want to specify your own categories. This version returns 0.

`getEventType(record)`

Returns the event type for the record. Override this if you want to specify your own types. This version does a mapping using the handler's typemap attribute, which is set up in [`__init__()`](../reference/datamodel#object.__init__ "object.__init__") to a dictionary which contains mappings for `DEBUG`, `INFO`, `WARNING`, `ERROR` and `CRITICAL`. If you are using your own levels, you will either need to override this method or place a suitable dictionary in the handler's *typemap* attribute.

`getMessageID(record)`

Returns the message ID for the record. If you are using your own messages, you could do this by having the *msg* passed to the logger be an ID rather than a format string. Then, in here, you could use a dictionary lookup to get the message ID. This version returns 1, which is the base message ID in `win32service.pyd`.

SMTPHandler
-----------

The [`SMTPHandler`](#logging.handlers.SMTPHandler "logging.handlers.SMTPHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports sending logging messages to an email address via SMTP.

`class logging.handlers.SMTPHandler(mailhost, fromaddr, toaddrs, subject, credentials=None, secure=None, timeout=1.0)`

Returns a new instance of the [`SMTPHandler`](#logging.handlers.SMTPHandler "logging.handlers.SMTPHandler") class. The instance is initialized with the from and to addresses and subject line of the email. The *toaddrs* should be a list of strings. To specify a non-standard SMTP port, use the (host, port) tuple format for the *mailhost* argument. If you use a string, the standard SMTP port is used. If your SMTP server requires authentication, you can specify a (username, password) tuple for the *credentials* argument.

To specify the use of a secure protocol (TLS), pass in a tuple to the *secure* argument. This will only be used when authentication credentials are supplied. The tuple should be either an empty tuple, or a single-value tuple with the name of a keyfile, or a 2-value tuple with the names of the keyfile and certificate file. (This tuple is passed to the [`smtplib.SMTP.starttls()`](smtplib#smtplib.SMTP.starttls "smtplib.SMTP.starttls") method.)

A timeout can be specified for communication with the SMTP server using the *timeout* argument.

New in version 3.3: The *timeout* argument was added.

`emit(record)`

Formats the record and sends it to the specified addressees.

`getSubject(record)`

If you want to specify a subject line which is record-dependent, override this method.

MemoryHandler
-------------

The [`MemoryHandler`](#logging.handlers.MemoryHandler "logging.handlers.MemoryHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports buffering of logging records in memory, periodically flushing them to a *target* handler. Flushing occurs whenever the buffer is full, or when an event of a certain severity or greater is seen.
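For illustration, here is a minimal sketch of that buffering pattern; the handler and logger names used are illustrative, not part of the API:

```
import logging
import logging.handlers

# Buffer up to 100 records; flush them to the console handler as soon as a
# record of severity ERROR (or higher) arrives, or whenever the buffer fills.
console = logging.StreamHandler()
buffered = logging.handlers.MemoryHandler(capacity=100,
                                          flushLevel=logging.ERROR,
                                          target=console)

logger = logging.getLogger("app")
logger.addHandler(buffered)

logger.warning("held in the buffer, nothing written yet")
logger.error("reaches flushLevel, so both records are flushed now")
```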
[`MemoryHandler`](#logging.handlers.MemoryHandler "logging.handlers.MemoryHandler") is a subclass of the more general [`BufferingHandler`](#logging.handlers.BufferingHandler "logging.handlers.BufferingHandler"), which is an abstract class. This buffers logging records in memory. Whenever a record is added to the buffer, a check is made by calling `shouldFlush()` to see if the buffer should be flushed. If it should, then `flush()` is expected to do the flushing.

`class logging.handlers.BufferingHandler(capacity)`

Initializes the handler with a buffer of the specified capacity. Here, *capacity* means the number of logging records buffered.

`emit(record)`

Append the record to the buffer. If [`shouldFlush()`](#logging.handlers.BufferingHandler.shouldFlush "logging.handlers.BufferingHandler.shouldFlush") returns true, call [`flush()`](#logging.handlers.BufferingHandler.flush "logging.handlers.BufferingHandler.flush") to process the buffer.

`flush()`

You can override this to implement custom flushing behavior. This version just zaps the buffer to empty.

`shouldFlush(record)`

Return `True` if the buffer is up to capacity. This method can be overridden to implement custom flushing strategies.

`class logging.handlers.MemoryHandler(capacity, flushLevel=ERROR, target=None, flushOnClose=True)`

Returns a new instance of the [`MemoryHandler`](#logging.handlers.MemoryHandler "logging.handlers.MemoryHandler") class. The instance is initialized with a buffer size of *capacity* (number of records buffered). If *flushLevel* is not specified, `ERROR` is used. If no *target* is specified, the target will need to be set using [`setTarget()`](#logging.handlers.MemoryHandler.setTarget "logging.handlers.MemoryHandler.setTarget") before this handler does anything useful. If *flushOnClose* is specified as `False`, then the buffer is *not* flushed when the handler is closed. If not specified or specified as `True`, the previous behaviour of flushing the buffer will occur when the handler is closed.

Changed in version 3.6: The *flushOnClose* parameter was added.

`close()`

Calls [`flush()`](#logging.handlers.MemoryHandler.flush "logging.handlers.MemoryHandler.flush"), sets the target to `None` and clears the buffer.

`flush()`

For a [`MemoryHandler`](#logging.handlers.MemoryHandler "logging.handlers.MemoryHandler"), flushing means just sending the buffered records to the target, if there is one. The buffer is also cleared when this happens. Override if you want different behavior.

`setTarget(target)`

Sets the target handler for this handler.

`shouldFlush(record)`

Checks for buffer full or a record at the *flushLevel* or higher.

HTTPHandler
-----------

The [`HTTPHandler`](#logging.handlers.HTTPHandler "logging.handlers.HTTPHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports sending logging messages to a Web server, using either `GET` or `POST` semantics.

`class logging.handlers.HTTPHandler(host, url, method='GET', secure=False, credentials=None, context=None)`

Returns a new instance of the [`HTTPHandler`](#logging.handlers.HTTPHandler "logging.handlers.HTTPHandler") class. The *host* can be of the form `host:port`, should you need to use a specific port number. If no *method* is specified, `GET` is used. If *secure* is true, an HTTPS connection will be used. The *context* parameter may be set to a [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") instance to configure the SSL settings used for the HTTPS connection.
If *credentials* is specified, it should be a 2-tuple consisting of userid and password, which will be placed in an HTTP 'Authorization' header using Basic authentication. If you specify credentials, you should also specify secure=True so that your userid and password are not passed in cleartext across the wire.

Changed in version 3.5: The *context* parameter was added.

`mapLogRecord(record)`

Provides a dictionary, based on `record`, which is to be URL-encoded and sent to the web server. The default implementation just returns `record.__dict__`. This method can be overridden if e.g. only a subset of [`LogRecord`](logging#logging.LogRecord "logging.LogRecord") is to be sent to the web server, or if more specific customization of what's sent to the server is required.

`emit(record)`

Sends the record to the Web server as a URL-encoded dictionary. The [`mapLogRecord()`](#logging.handlers.HTTPHandler.mapLogRecord "logging.handlers.HTTPHandler.mapLogRecord") method is used to convert the record to the dictionary to be sent.

Note

Since preparing a record for sending it to a Web server is not the same as a generic formatting operation, using [`setFormatter()`](logging#logging.Handler.setFormatter "logging.Handler.setFormatter") to specify a [`Formatter`](logging#logging.Formatter "logging.Formatter") for an [`HTTPHandler`](#logging.handlers.HTTPHandler "logging.handlers.HTTPHandler") has no effect. Instead of calling [`format()`](logging#logging.Handler.format "logging.Handler.format"), this handler calls [`mapLogRecord()`](#logging.handlers.HTTPHandler.mapLogRecord "logging.handlers.HTTPHandler.mapLogRecord") and then [`urllib.parse.urlencode()`](urllib.parse#urllib.parse.urlencode "urllib.parse.urlencode") to encode the dictionary in a form suitable for sending to a Web server.

QueueHandler
------------

New in version 3.2.

The [`QueueHandler`](#logging.handlers.QueueHandler "logging.handlers.QueueHandler") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports sending logging messages to a queue, such as those implemented in the [`queue`](queue#module-queue "queue: A synchronized queue class.") or [`multiprocessing`](multiprocessing#module-multiprocessing "multiprocessing: Process-based parallelism.") modules. Along with the [`QueueListener`](#logging.handlers.QueueListener "logging.handlers.QueueListener") class, [`QueueHandler`](#logging.handlers.QueueHandler "logging.handlers.QueueHandler") can be used to let handlers do their work on a separate thread from the one which does the logging. This is important in Web applications and also other service applications where threads servicing clients need to respond as quickly as possible, while any potentially slow operations (such as sending an email via [`SMTPHandler`](#logging.handlers.SMTPHandler "logging.handlers.SMTPHandler")) are done on a separate thread.

`class logging.handlers.QueueHandler(queue)`

Returns a new instance of the [`QueueHandler`](#logging.handlers.QueueHandler "logging.handlers.QueueHandler") class. The instance is initialized with the queue to send messages to. The *queue* can be any queue-like object; it's used as-is by the [`enqueue()`](#logging.handlers.QueueHandler.enqueue "logging.handlers.QueueHandler.enqueue") method, which needs to know how to send messages to it. The queue is not *required* to have the task tracking API, which means that you can use [`SimpleQueue`](queue#queue.SimpleQueue "queue.SimpleQueue") instances for *queue*.
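Before the method details, a minimal sketch of the pattern described above, pairing a `QueueHandler` with the `QueueListener` documented further below (the choice of `StreamHandler` as the downstream handler is illustrative):

```
import logging
import logging.handlers
import queue

q = queue.SimpleQueue()  # no task tracking API required, as noted above
logging.getLogger().addHandler(logging.handlers.QueueHandler(q))

# The listener pops records off the queue on its own internal thread and
# passes them to the "real" handlers - here, a simple StreamHandler.
listener = logging.handlers.QueueListener(q, logging.StreamHandler())
listener.start()

logging.getLogger().warning("enqueued by the caller, emitted by the listener")

listener.stop()  # process remaining records and join the internal thread
```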
`emit(record)`

Enqueues the result of preparing the LogRecord. Should an exception occur (e.g. because a bounded queue has filled up), the [`handleError()`](logging#logging.Handler.handleError "logging.Handler.handleError") method is called to handle the error. This can result in the record silently being dropped (if `logging.raiseExceptions` is `False`) or a message printed to `sys.stderr` (if `logging.raiseExceptions` is `True`).

`prepare(record)`

Prepares a record for queuing. The object returned by this method is enqueued. The base implementation formats the record to merge the message, arguments, and exception information, if present. It also removes unpickleable items from the record in-place. You might want to override this method if you want to convert the record to a dict or JSON string, or send a modified copy of the record while leaving the original intact.

`enqueue(record)`

Enqueues the record on the queue using `put_nowait()`; you may want to override this if you want to use blocking behaviour, or a timeout, or a customized queue implementation.

QueueListener
-------------

New in version 3.2.

The [`QueueListener`](#logging.handlers.QueueListener "logging.handlers.QueueListener") class, located in the [`logging.handlers`](#module-logging.handlers "logging.handlers: Handlers for the logging module.") module, supports receiving logging messages from a queue, such as those implemented in the [`queue`](queue#module-queue "queue: A synchronized queue class.") or [`multiprocessing`](multiprocessing#module-multiprocessing "multiprocessing: Process-based parallelism.") modules. The messages are received from a queue in an internal thread and passed, on the same thread, to one or more handlers for processing. While [`QueueListener`](#logging.handlers.QueueListener "logging.handlers.QueueListener") is not itself a handler, it is documented here because it works hand-in-hand with [`QueueHandler`](#logging.handlers.QueueHandler "logging.handlers.QueueHandler").

Along with the [`QueueHandler`](#logging.handlers.QueueHandler "logging.handlers.QueueHandler") class, [`QueueListener`](#logging.handlers.QueueListener "logging.handlers.QueueListener") can be used to let handlers do their work on a separate thread from the one which does the logging. This is important in Web applications and also other service applications where threads servicing clients need to respond as quickly as possible, while any potentially slow operations (such as sending an email via [`SMTPHandler`](#logging.handlers.SMTPHandler "logging.handlers.SMTPHandler")) are done on a separate thread.

`class logging.handlers.QueueListener(queue, *handlers, respect_handler_level=False)`

Returns a new instance of the [`QueueListener`](#logging.handlers.QueueListener "logging.handlers.QueueListener") class. The instance is initialized with the queue to get messages from and a list of handlers which will handle entries placed on the queue. The queue can be any queue-like object; it's passed as-is to the [`dequeue()`](#logging.handlers.QueueListener.dequeue "logging.handlers.QueueListener.dequeue") method, which needs to know how to get messages from it. The queue is not *required* to have the task tracking API (though it's used if available), which means that you can use [`SimpleQueue`](queue#queue.SimpleQueue "queue.SimpleQueue") instances for *queue*.
If `respect_handler_level` is `True`, a handler's level is respected (compared with the level for the message) when deciding whether to pass messages to that handler; otherwise, the behaviour is as in previous Python versions - to always pass each message to each handler.

Changed in version 3.5: The `respect_handler_level` argument was added.

`dequeue(block)`

Dequeues a record and returns it, optionally blocking. The base implementation uses `get()`. You may want to override this method if you want to use timeouts or work with custom queue implementations.

`prepare(record)`

Prepare a record for handling. This implementation just returns the passed-in record. You may want to override this method if you need to do any custom marshalling or manipulation of the record before passing it to the handlers.

`handle(record)`

Handle a record. This just loops through the handlers offering them the record to handle. The actual object passed to the handlers is that which is returned from [`prepare()`](#logging.handlers.QueueListener.prepare "logging.handlers.QueueListener.prepare").

`start()`

Starts the listener. This starts up a background thread to monitor the queue for LogRecords to process.

`stop()`

Stops the listener. This asks the thread to terminate, and then waits for it to do so. Note that if you don't call this before your application exits, there may be some records still left on the queue, which won't be processed.

`enqueue_sentinel()`

Writes a sentinel to the queue to tell the listener to quit. This implementation uses `put_nowait()`. You may want to override this method if you want to use timeouts or work with custom queue implementations.

New in version 3.3.

See also

`Module` [`logging`](logging#module-logging "logging: Flexible event logging system for applications.")

API reference for the logging module.

`Module` [`logging.config`](logging.config#module-logging.config "logging.config: Configuration of the logging module.")

Configuration API for the logging module.
python sysconfig — Provide access to Python's configuration information

sysconfig — Provide access to Python's configuration information
================================================================

New in version 3.2.

**Source code:** [Lib/sysconfig.py](https://github.com/python/cpython/tree/3.9/Lib/sysconfig.py)

The [`sysconfig`](#module-sysconfig "sysconfig: Python's configuration information") module provides access to Python's configuration information like the list of installation paths and the configuration variables relevant for the current platform.

Configuration variables
-----------------------

A Python distribution contains a `Makefile` and a `pyconfig.h` header file that are necessary to build both the Python binary itself and third-party C extensions compiled using [`distutils`](distutils#module-distutils "distutils: Support for building and installing Python modules into an existing Python installation."). [`sysconfig`](#module-sysconfig "sysconfig: Python's configuration information") puts all variables found in these files in a dictionary that can be accessed using [`get_config_vars()`](#sysconfig.get_config_vars "sysconfig.get_config_vars") or [`get_config_var()`](#sysconfig.get_config_var "sysconfig.get_config_var"). Notice that on Windows, it's a much smaller set.

`sysconfig.get_config_vars(*args)`

With no arguments, return a dictionary of all configuration variables relevant for the current platform. With arguments, return a list of values that result from looking up each argument in the configuration variable dictionary. For each argument, if the value is not found, return `None`.

`sysconfig.get_config_var(name)`

Return the value of a single variable *name*. Equivalent to `get_config_vars().get(name)`. If *name* is not found, return `None`.

Example of usage:

```
>>> import sysconfig
>>> sysconfig.get_config_var('Py_ENABLE_SHARED')
0
>>> sysconfig.get_config_var('LIBDIR')
'/usr/local/lib'
>>> sysconfig.get_config_vars('AR', 'CXX')
['ar', 'g++']
```

Installation paths
------------------

Python uses an installation scheme that differs depending on the platform and on the installation options. These schemes are stored in [`sysconfig`](#module-sysconfig "sysconfig: Python's configuration information") under unique identifiers based on the value returned by [`os.name`](os#os.name "os.name"). Every new component that is installed using [`distutils`](distutils#module-distutils "distutils: Support for building and installing Python modules into an existing Python installation.") or a Distutils-based system will follow the same scheme to copy its files in the right places.

Python currently supports six schemes:

* *posix\_prefix*: scheme for POSIX platforms like Linux or macOS. This is the default scheme used when Python or a component is installed.
* *posix\_home*: scheme for POSIX platforms used when a *home* option is used upon installation. This scheme is used when a component is installed through Distutils with a specific home prefix.
* *posix\_user*: scheme for POSIX platforms used when a component is installed through Distutils and the *user* option is used. This scheme defines paths located under the user home directory.
* *nt*: scheme for NT platforms like Windows.
* *nt\_user*: scheme for NT platforms, when the *user* option is used.
* *osx\_framework\_user*: scheme for macOS, when the *user* option is used.

Each scheme is itself composed of a series of paths and each path has a unique identifier.
Python currently uses eight paths:

* *stdlib*: directory containing the standard Python library files that are not platform-specific.
* *platstdlib*: directory containing the standard Python library files that are platform-specific.
* *platlib*: directory for site-specific, platform-specific files.
* *purelib*: directory for site-specific, non-platform-specific files.
* *include*: directory for non-platform-specific header files.
* *platinclude*: directory for platform-specific header files.
* *scripts*: directory for script files.
* *data*: directory for data files.

[`sysconfig`](#module-sysconfig "sysconfig: Python's configuration information") provides some functions to determine these paths.

`sysconfig.get_scheme_names()`

Return a tuple containing all schemes currently supported in [`sysconfig`](#module-sysconfig "sysconfig: Python's configuration information").

`sysconfig.get_path_names()`

Return a tuple containing all path names currently supported in [`sysconfig`](#module-sysconfig "sysconfig: Python's configuration information").

`sysconfig.get_path(name[, scheme[, vars[, expand]]])`

Return an installation path corresponding to the path *name*, from the install scheme named *scheme*. *name* has to be a value from the list returned by [`get_path_names()`](#sysconfig.get_path_names "sysconfig.get_path_names").

[`sysconfig`](#module-sysconfig "sysconfig: Python's configuration information") stores installation paths corresponding to each path name, for each platform, with variables to be expanded. For instance the *stdlib* path for the *nt* scheme is: `{base}/Lib`. [`get_path()`](#sysconfig.get_path "sysconfig.get_path") will use the variables returned by [`get_config_vars()`](#sysconfig.get_config_vars "sysconfig.get_config_vars") to expand the path. All variables have default values for each platform so one may call this function and get the default value.

If *scheme* is provided, it must be a value from the list returned by [`get_scheme_names()`](#sysconfig.get_scheme_names "sysconfig.get_scheme_names"). Otherwise, the default scheme for the current platform is used.

If *vars* is provided, it must be a dictionary of variables that will update the dictionary returned by [`get_config_vars()`](#sysconfig.get_config_vars "sysconfig.get_config_vars").

If *expand* is set to `False`, the path will not be expanded using the variables.

If *name* is not found, raise a [`KeyError`](exceptions#KeyError "KeyError").

`sysconfig.get_paths([scheme[, vars[, expand]]])`

Return a dictionary containing all installation paths corresponding to an installation scheme. See [`get_path()`](#sysconfig.get_path "sysconfig.get_path") for more information. If *scheme* is not provided, will use the default scheme for the current platform. If *vars* is provided, it must be a dictionary of variables that will update the dictionary used to expand the paths. If *expand* is set to `False`, the paths will not be expanded. If *scheme* is not an existing scheme, [`get_paths()`](#sysconfig.get_paths "sysconfig.get_paths") will raise a [`KeyError`](exceptions#KeyError "KeyError").

Other functions
---------------

`sysconfig.get_python_version()`

Return the `MAJOR.MINOR` Python version number as a string. Similar to `'%d.%d' % sys.version_info[:2]`.

`sysconfig.get_platform()`

Return a string that identifies the current platform. This is used mainly to distinguish platform-specific build directories and platform-specific built distributions.
Typically includes the OS name and version and the architecture (as supplied by `os.uname()`), although the exact information included depends on the OS; e.g., on Linux, the kernel version isn't particularly important.

Examples of returned values:

* linux-i586
* linux-alpha (?)
* solaris-2.6-sun4u

Windows will return one of:

* win-amd64 (64bit Windows on AMD64, aka x86\_64, Intel64, and EM64T)
* win32 (all others - specifically, sys.platform is returned)

macOS can return:

* macosx-10.6-ppc
* macosx-10.4-ppc64
* macosx-10.3-i386
* macosx-10.4-fat

For other non-POSIX platforms, currently just returns [`sys.platform`](sys#sys.platform "sys.platform").

`sysconfig.is_python_build()`

Return `True` if the running Python interpreter was built from source and is being run from its built location, and not from a location resulting from e.g. running `make install` or installing via a binary installer.

`sysconfig.parse_config_h(fp[, vars])`

Parse a `config.h`-style file. *fp* is a file-like object pointing to the `config.h`-like file. A dictionary containing name/value pairs is returned. If an optional dictionary is passed in as the second argument, it is used instead of a new dictionary, and updated with the values read in the file.

`sysconfig.get_config_h_filename()`

Return the path of `pyconfig.h`.

`sysconfig.get_makefile_filename()`

Return the path of `Makefile`.

Using sysconfig as a script
---------------------------

You can use [`sysconfig`](#module-sysconfig "sysconfig: Python's configuration information") as a script with Python's *-m* option:

```
$ python -m sysconfig
Platform: "macosx-10.4-i386"
Python version: "3.2"
Current installation scheme: "posix_prefix"

Paths:
        data = "/usr/local"
        include = "/Users/tarek/Dev/svn.python.org/py3k/Include"
        platinclude = "."
        platlib = "/usr/local/lib/python3.2/site-packages"
        platstdlib = "/usr/local/lib/python3.2"
        purelib = "/usr/local/lib/python3.2/site-packages"
        scripts = "/usr/local/bin"
        stdlib = "/usr/local/lib/python3.2"

Variables:
        AC_APPLE_UNIVERSAL_BUILD = "0"
        AIX_GENUINE_CPLUSPLUS = "0"
        AR = "ar"
        ARFLAGS = "rc"
        ...
```

This call will print in the standard output the information returned by [`get_platform()`](#sysconfig.get_platform "sysconfig.get_platform"), [`get_python_version()`](#sysconfig.get_python_version "sysconfig.get_python_version"), [`get_path()`](#sysconfig.get_path "sysconfig.get_path") and [`get_config_vars()`](#sysconfig.get_config_vars "sysconfig.get_config_vars").

python operator — Standard operators as functions

operator — Standard operators as functions
==========================================

**Source code:** [Lib/operator.py](https://github.com/python/cpython/tree/3.9/Lib/operator.py)

The [`operator`](#module-operator "operator: Functions corresponding to the standard operators.") module exports a set of efficient functions corresponding to the intrinsic operators of Python. For example, `operator.add(x, y)` is equivalent to the expression `x+y`. Many function names are those used for special methods, without the double underscores. For backward compatibility, many of these have a variant with the double underscores kept. The variants without the double underscores are preferred for clarity.

The functions fall into categories that perform object comparisons, logical operations, mathematical operations and sequence operations.
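As a quick interactive illustration of that equivalence (a doctest-style sketch, not part of the reference text):

```
>>> import operator
>>> operator.add(3, 4)           # same result as the expression 3 + 4
7
>>> operator.add(3, 4) == 3 + 4
True
>>> operator.__add__(3, 4)       # dunder variant, kept for backward compatibility
7
```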
The object comparison functions are useful for all objects, and are named after the rich comparison operators they support:

`operator.lt(a, b)`
`operator.le(a, b)`
`operator.eq(a, b)`
`operator.ne(a, b)`
`operator.ge(a, b)`
`operator.gt(a, b)`
`operator.__lt__(a, b)`
`operator.__le__(a, b)`
`operator.__eq__(a, b)`
`operator.__ne__(a, b)`
`operator.__ge__(a, b)`
`operator.__gt__(a, b)`

Perform "rich comparisons" between *a* and *b*. Specifically, `lt(a, b)` is equivalent to `a < b`, `le(a, b)` is equivalent to `a <= b`, `eq(a, b)` is equivalent to `a == b`, `ne(a, b)` is equivalent to `a != b`, `gt(a, b)` is equivalent to `a > b` and `ge(a, b)` is equivalent to `a >= b`.

Note that these functions can return any value, which may or may not be interpretable as a Boolean value. See [Comparisons](../reference/expressions#comparisons) for more information about rich comparisons.

The logical operations are also generally applicable to all objects, and support truth tests, identity tests, and boolean operations:

`operator.not_(obj)`
`operator.__not__(obj)`

Return the outcome of [`not`](../reference/expressions#not) *obj*. (Note that there is no [`__not__()`](#operator.__not__ "operator.__not__") method for object instances; only the interpreter core defines this operation. The result is affected by the [`__bool__()`](../reference/datamodel#object.__bool__ "object.__bool__") and [`__len__()`](../reference/datamodel#object.__len__ "object.__len__") methods.)

`operator.truth(obj)`

Return [`True`](constants#True "True") if *obj* is true, and [`False`](constants#False "False") otherwise. This is equivalent to using the [`bool`](functions#bool "bool") constructor.

`operator.is_(a, b)`

Return `a is b`. Tests object identity.

`operator.is_not(a, b)`

Return `a is not b`. Tests object identity.

The mathematical and bitwise operations are the most numerous:

`operator.abs(obj)`
`operator.__abs__(obj)`

Return the absolute value of *obj*.

`operator.add(a, b)`
`operator.__add__(a, b)`

Return `a + b`, for *a* and *b* numbers.

`operator.and_(a, b)`
`operator.__and__(a, b)`

Return the bitwise and of *a* and *b*.

`operator.floordiv(a, b)`
`operator.__floordiv__(a, b)`

Return `a // b`.

`operator.index(a)`
`operator.__index__(a)`

Return *a* converted to an integer. Equivalent to `a.__index__()`.

`operator.inv(obj)`
`operator.invert(obj)`
`operator.__inv__(obj)`
`operator.__invert__(obj)`

Return the bitwise inverse of the number *obj*. This is equivalent to `~obj`.

`operator.lshift(a, b)`
`operator.__lshift__(a, b)`

Return *a* shifted left by *b*.

`operator.mod(a, b)`
`operator.__mod__(a, b)`

Return `a % b`.

`operator.mul(a, b)`
`operator.__mul__(a, b)`

Return `a * b`, for *a* and *b* numbers.

`operator.matmul(a, b)`
`operator.__matmul__(a, b)`

Return `a @ b`.

New in version 3.5.

`operator.neg(obj)`
`operator.__neg__(obj)`

Return *obj* negated (`-obj`).

`operator.or_(a, b)`
`operator.__or__(a, b)`

Return the bitwise or of *a* and *b*.

`operator.pos(obj)`
`operator.__pos__(obj)`

Return *obj* positive (`+obj`).

`operator.pow(a, b)`
`operator.__pow__(a, b)`

Return `a ** b`, for *a* and *b* numbers.

`operator.rshift(a, b)`
`operator.__rshift__(a, b)`

Return *a* shifted right by *b*.

`operator.sub(a, b)`
`operator.__sub__(a, b)`

Return `a - b`.

`operator.truediv(a, b)`
`operator.__truediv__(a, b)`

Return `a / b` where 2/3 is .66 rather than 0. This is also known as "true" division.

`operator.xor(a, b)`
`operator.__xor__(a, b)`

Return the bitwise exclusive or of *a* and *b*.
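Because each of these is an ordinary callable, it composes directly with higher-order functions; a small sketch using [`functools.reduce()`](functools#functools.reduce "functools.reduce") (the values chosen are illustrative):

```
>>> from functools import reduce
>>> import operator
>>> reduce(operator.mul, range(1, 6))    # 1*2*3*4*5, i.e. 5! == 120
120
>>> reduce(operator.add, [1, 2, 3], 10)  # start from 10, then add each item
16
```

Using `operator.mul` here avoids defining a throwaway `lambda a, b: a * b`, which is the usual reason to reach for this module.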
Operations which work with sequences (some of them with mappings too) include:

`operator.concat(a, b)`
`operator.__concat__(a, b)`

Return `a + b` for *a* and *b* sequences.

`operator.contains(a, b)`
`operator.__contains__(a, b)`

Return the outcome of the test `b in a`. Note the reversed operands.

`operator.countOf(a, b)`

Return the number of occurrences of *b* in *a*.

`operator.delitem(a, b)`
`operator.__delitem__(a, b)`

Remove the value of *a* at index *b*.

`operator.getitem(a, b)`
`operator.__getitem__(a, b)`

Return the value of *a* at index *b*.

`operator.indexOf(a, b)`

Return the index of the first occurrence of *b* in *a*.

`operator.setitem(a, b, c)`
`operator.__setitem__(a, b, c)`

Set the value of *a* at index *b* to *c*.

`operator.length_hint(obj, default=0)`

Return an estimated length for the object *obj*. First try to return its actual length, then an estimate using [`object.__length_hint__()`](../reference/datamodel#object.__length_hint__ "object.__length_hint__"), and finally return the default value.

New in version 3.4.

The [`operator`](#module-operator "operator: Functions corresponding to the standard operators.") module also defines tools for generalized attribute and item lookups. These are useful for making fast field extractors as arguments for [`map()`](functions#map "map"), [`sorted()`](functions#sorted "sorted"), [`itertools.groupby()`](itertools#itertools.groupby "itertools.groupby"), or other functions that expect a function argument.

`operator.attrgetter(attr)`
`operator.attrgetter(*attrs)`

Return a callable object that fetches *attr* from its operand. If more than one attribute is requested, returns a tuple of attributes. The attribute names can also contain dots. For example:

* After `f = attrgetter('name')`, the call `f(b)` returns `b.name`.
* After `f = attrgetter('name', 'date')`, the call `f(b)` returns `(b.name, b.date)`.
* After `f = attrgetter('name.first', 'name.last')`, the call `f(b)` returns `(b.name.first, b.name.last)`.

Equivalent to:

```
def attrgetter(*items):
    if any(not isinstance(item, str) for item in items):
        raise TypeError('attribute name must be a string')
    if len(items) == 1:
        attr = items[0]
        def g(obj):
            return resolve_attr(obj, attr)
    else:
        def g(obj):
            return tuple(resolve_attr(obj, attr) for attr in items)
    return g

def resolve_attr(obj, attr):
    for name in attr.split("."):
        obj = getattr(obj, name)
    return obj
```

`operator.itemgetter(item)`
`operator.itemgetter(*items)`

Return a callable object that fetches *item* from its operand using the operand's [`__getitem__()`](#operator.__getitem__ "operator.__getitem__") method. If multiple items are specified, returns a tuple of lookup values. For example:

* After `f = itemgetter(2)`, the call `f(r)` returns `r[2]`.
* After `g = itemgetter(2, 5, 3)`, the call `g(r)` returns `(r[2], r[5], r[3])`.

Equivalent to:

```
def itemgetter(*items):
    if len(items) == 1:
        item = items[0]
        def g(obj):
            return obj[item]
    else:
        def g(obj):
            return tuple(obj[item] for item in items)
    return g
```

The items can be any type accepted by the operand's [`__getitem__()`](#operator.__getitem__ "operator.__getitem__") method. Dictionaries accept any hashable value.
Lists, tuples, and strings accept an index or a slice:

```
>>> itemgetter(1)('ABCDEFG')
'B'
>>> itemgetter(1, 3, 5)('ABCDEFG')
('B', 'D', 'F')
>>> itemgetter(slice(2, None))('ABCDEFG')
'CDEFG'
>>> soldier = dict(rank='captain', name='dotterbart')
>>> itemgetter('rank')(soldier)
'captain'
```

Example of using [`itemgetter()`](#operator.itemgetter "operator.itemgetter") to retrieve specific fields from a tuple record:

```
>>> inventory = [('apple', 3), ('banana', 2), ('pear', 5), ('orange', 1)]
>>> getcount = itemgetter(1)
>>> list(map(getcount, inventory))
[3, 2, 5, 1]
>>> sorted(inventory, key=getcount)
[('orange', 1), ('banana', 2), ('apple', 3), ('pear', 5)]
```

`operator.methodcaller(name, /, *args, **kwargs)`

Return a callable object that calls the method *name* on its operand. If additional arguments and/or keyword arguments are given, they will be given to the method as well. For example:

* After `f = methodcaller('name')`, the call `f(b)` returns `b.name()`.
* After `f = methodcaller('name', 'foo', bar=1)`, the call `f(b)` returns `b.name('foo', bar=1)`.

Equivalent to:

```
def methodcaller(name, /, *args, **kwargs):
    def caller(obj):
        return getattr(obj, name)(*args, **kwargs)
    return caller
```

Mapping Operators to Functions
------------------------------

This table shows how abstract operations correspond to operator symbols in the Python syntax and the functions in the [`operator`](#module-operator "operator: Functions corresponding to the standard operators.") module.

| Operation | Syntax | Function |
| --- | --- | --- |
| Addition | `a + b` | `add(a, b)` |
| Concatenation | `seq1 + seq2` | `concat(seq1, seq2)` |
| Containment Test | `obj in seq` | `contains(seq, obj)` |
| Division | `a / b` | `truediv(a, b)` |
| Division | `a // b` | `floordiv(a, b)` |
| Bitwise And | `a & b` | `and_(a, b)` |
| Bitwise Exclusive Or | `a ^ b` | `xor(a, b)` |
| Bitwise Inversion | `~ a` | `invert(a)` |
| Bitwise Or | `a \| b` | `or_(a, b)` |
| Exponentiation | `a ** b` | `pow(a, b)` |
| Identity | `a is b` | `is_(a, b)` |
| Identity | `a is not b` | `is_not(a, b)` |
| Indexed Assignment | `obj[k] = v` | `setitem(obj, k, v)` |
| Indexed Deletion | `del obj[k]` | `delitem(obj, k)` |
| Indexing | `obj[k]` | `getitem(obj, k)` |
| Left Shift | `a << b` | `lshift(a, b)` |
| Modulo | `a % b` | `mod(a, b)` |
| Multiplication | `a * b` | `mul(a, b)` |
| Matrix Multiplication | `a @ b` | `matmul(a, b)` |
| Negation (Arithmetic) | `- a` | `neg(a)` |
| Negation (Logical) | `not a` | `not_(a)` |
| Positive | `+ a` | `pos(a)` |
| Right Shift | `a >> b` | `rshift(a, b)` |
| Slice Assignment | `seq[i:j] = values` | `setitem(seq, slice(i, j), values)` |
| Slice Deletion | `del seq[i:j]` | `delitem(seq, slice(i, j))` |
| Slicing | `seq[i:j]` | `getitem(seq, slice(i, j))` |
| String Formatting | `s % obj` | `mod(s, obj)` |
| Subtraction | `a - b` | `sub(a, b)` |
| Truth Test | `obj` | `truth(obj)` |
| Ordering | `a < b` | `lt(a, b)` |
| Ordering | `a <= b` | `le(a, b)` |
| Equality | `a == b` | `eq(a, b)` |
| Difference | `a != b` | `ne(a, b)` |
| Ordering | `a >= b` | `ge(a, b)` |
| Ordering | `a > b` | `gt(a, b)` |

In-place Operators
------------------

Many operations have an "in-place" version. Listed below are functions providing a more primitive access to in-place operators than the usual syntax does; for example, the [statement](../glossary#term-statement) `x += y` is equivalent to `x = operator.iadd(x, y)`.
Another way to put it is to say that `z = operator.iadd(x, y)` is equivalent to the compound statement `z = x; z += y`.

In those examples, note that when an in-place method is called, the computation and assignment are performed in two separate steps. The in-place functions listed below only do the first step, calling the in-place method. The second step, assignment, is not handled.

For immutable targets such as strings, numbers, and tuples, the updated value is computed, but not assigned back to the input variable:

```
>>> a = 'hello'
>>> iadd(a, ' world')
'hello world'
>>> a
'hello'
```

For mutable targets such as lists and dictionaries, the in-place method will perform the update, so no subsequent assignment is necessary:

```
>>> s = ['h', 'e', 'l', 'l', 'o']
>>> iadd(s, [' ', 'w', 'o', 'r', 'l', 'd'])
['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']
>>> s
['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']
```

`operator.iadd(a, b)`
`operator.__iadd__(a, b)`

`a = iadd(a, b)` is equivalent to `a += b`.

`operator.iand(a, b)`
`operator.__iand__(a, b)`

`a = iand(a, b)` is equivalent to `a &= b`.

`operator.iconcat(a, b)`
`operator.__iconcat__(a, b)`

`a = iconcat(a, b)` is equivalent to `a += b` for *a* and *b* sequences.

`operator.ifloordiv(a, b)`
`operator.__ifloordiv__(a, b)`

`a = ifloordiv(a, b)` is equivalent to `a //= b`.

`operator.ilshift(a, b)`
`operator.__ilshift__(a, b)`

`a = ilshift(a, b)` is equivalent to `a <<= b`.

`operator.imod(a, b)`
`operator.__imod__(a, b)`

`a = imod(a, b)` is equivalent to `a %= b`.

`operator.imul(a, b)`
`operator.__imul__(a, b)`

`a = imul(a, b)` is equivalent to `a *= b`.

`operator.imatmul(a, b)`
`operator.__imatmul__(a, b)`

`a = imatmul(a, b)` is equivalent to `a @= b`.

New in version 3.5.

`operator.ior(a, b)`
`operator.__ior__(a, b)`

`a = ior(a, b)` is equivalent to `a |= b`.

`operator.ipow(a, b)`
`operator.__ipow__(a, b)`

`a = ipow(a, b)` is equivalent to `a **= b`.

`operator.irshift(a, b)`
`operator.__irshift__(a, b)`

`a = irshift(a, b)` is equivalent to `a >>= b`.

`operator.isub(a, b)`
`operator.__isub__(a, b)`

`a = isub(a, b)` is equivalent to `a -= b`.

`operator.itruediv(a, b)`
`operator.__itruediv__(a, b)`

`a = itruediv(a, b)` is equivalent to `a /= b`.

`operator.ixor(a, b)`
`operator.__ixor__(a, b)`

`a = ixor(a, b)` is equivalent to `a ^= b`.
python \_\_main\_\_ — Top-level script environment

\_\_main\_\_ — Top-level script environment
===========================================

`'__main__'` is the name of the scope in which top-level code executes. A module's `__name__` is set equal to `'__main__'` when read from standard input, a script, or from an interactive prompt.

A module can discover whether or not it is running in the main scope by checking its own `__name__`, which allows a common idiom for conditionally executing code in a module when it is run as a script or with `python -m` but not when it is imported:

```
if __name__ == "__main__":
    # execute only if run as a script
    main()
```

For a package, the same effect can be achieved by including a `__main__.py` module, the contents of which will be executed when the module is run with `-m`.

python io — Core tools for working with streams

io — Core tools for working with streams
========================================

**Source code:** [Lib/io.py](https://github.com/python/cpython/tree/3.9/Lib/io.py)

Overview
--------

The [`io`](#module-io "io: Core tools for working with streams.") module provides Python's main facilities for dealing with various types of I/O. There are three main types of I/O: *text I/O*, *binary I/O* and *raw I/O*. These are generic categories, and various backing stores can be used for each of them. A concrete object belonging to any of these categories is called a [file object](../glossary#term-file-object). Other common terms are *stream* and *file-like object*.

Independent of its category, each concrete stream object will also have various capabilities: it can be read-only, write-only, or read-write. It can also allow arbitrary random access (seeking forwards or backwards to any location), or only sequential access (for example in the case of a socket or pipe).

All streams are careful about the type of data you give to them. For example giving a [`str`](stdtypes#str "str") object to the `write()` method of a binary stream will raise a [`TypeError`](exceptions#TypeError "TypeError"). So will giving a [`bytes`](stdtypes#bytes "bytes") object to the `write()` method of a text stream.

Changed in version 3.3: Operations that used to raise [`IOError`](exceptions#IOError "IOError") now raise [`OSError`](exceptions#OSError "OSError"), since [`IOError`](exceptions#IOError "IOError") is now an alias of [`OSError`](exceptions#OSError "OSError").

### Text I/O

Text I/O expects and produces [`str`](stdtypes#str "str") objects. This means that whenever the backing store is natively made of bytes (such as in the case of a file), encoding and decoding of data is made transparently as well as optional translation of platform-specific newline characters.

The easiest way to create a text stream is with [`open()`](functions#open "open"), optionally specifying an encoding:

```
f = open("myfile.txt", "r", encoding="utf-8")
```

In-memory text streams are also available as [`StringIO`](#io.StringIO "io.StringIO") objects:

```
f = io.StringIO("some initial text data")
```

The text stream API is described in detail in the documentation of [`TextIOBase`](#io.TextIOBase "io.TextIOBase").

### Binary I/O

Binary I/O (also called *buffered I/O*) expects [bytes-like objects](../glossary#term-bytes-like-object) and produces [`bytes`](stdtypes#bytes "bytes") objects. No encoding, decoding, or newline translation is performed. This category of streams can be used for all kinds of non-text data, and also when manual control over the handling of text data is desired.
The easiest way to create a binary stream is with [`open()`](functions#open "open") with `'b'` in the mode string:

```
f = open("myfile.jpg", "rb")
```

In-memory binary streams are also available as [`BytesIO`](#io.BytesIO "io.BytesIO") objects:

```
f = io.BytesIO(b"some initial binary data: \x00\x01")
```

The binary stream API is described in detail in the docs of [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase").

Other library modules may provide additional ways to create text or binary streams. See [`socket.socket.makefile()`](socket#socket.socket.makefile "socket.socket.makefile") for example.

### Raw I/O

Raw I/O (also called *unbuffered I/O*) is generally used as a low-level building-block for binary and text streams; it is rarely useful to directly manipulate a raw stream from user code. Nevertheless, you can create a raw stream by opening a file in binary mode with buffering disabled:

```
f = open("myfile.jpg", "rb", buffering=0)
```

The raw stream API is described in detail in the docs of [`RawIOBase`](#io.RawIOBase "io.RawIOBase").

High-level Module Interface
---------------------------

`io.DEFAULT_BUFFER_SIZE`

An int containing the default buffer size used by the module's buffered I/O classes. [`open()`](functions#open "open") uses the file's blksize (as obtained by [`os.stat()`](os#os.stat "os.stat")) if possible.

`io.open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)`

This is an alias for the builtin [`open()`](functions#open "open") function.

This function raises an [auditing event](sys#auditing) `open` with arguments `path`, `mode` and `flags`. The `mode` and `flags` arguments may have been modified or inferred from the original call.

`io.open_code(path)`

Opens the provided file with mode `'rb'`. This function should be used when the intent is to treat the contents as executable code. `path` should be a [`str`](stdtypes#str "str") and an absolute path.

The behavior of this function may be overridden by an earlier call to the [`PyFile_SetOpenCodeHook()`](../c-api/file#c.PyFile_SetOpenCodeHook "PyFile_SetOpenCodeHook"). However, assuming that `path` is a [`str`](stdtypes#str "str") and an absolute path, `open_code(path)` should always behave the same as `open(path, 'rb')`. Overriding the behavior is intended for additional validation or preprocessing of the file.

New in version 3.8.

`exception io.BlockingIOError`

This is a compatibility alias for the builtin [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") exception.

`exception io.UnsupportedOperation`

An exception inheriting [`OSError`](exceptions#OSError "OSError") and [`ValueError`](exceptions#ValueError "ValueError") that is raised when an unsupported operation is called on a stream.

See also

[`sys`](sys#module-sys "sys: Access system-specific parameters and functions.") contains the standard IO streams: [`sys.stdin`](sys#sys.stdin "sys.stdin"), [`sys.stdout`](sys#sys.stdout "sys.stdout"), and [`sys.stderr`](sys#sys.stderr "sys.stderr").

Class hierarchy
---------------

The implementation of I/O streams is organized as a hierarchy of classes. First [abstract base classes](../glossary#term-abstract-base-class) (ABCs), which are used to specify the various categories of streams, then concrete classes providing the standard stream implementations.

Note

The abstract base classes also provide default implementations of some methods in order to help implementation of concrete stream classes.
For example, [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") provides unoptimized implementations of `readinto()` and [`readline()`](#io.IOBase.readline "io.IOBase.readline").

At the top of the I/O hierarchy is the abstract base class [`IOBase`](#io.IOBase "io.IOBase"). It defines the basic interface to a stream. Note, however, that there is no separation between reading and writing to streams; implementations are allowed to raise [`UnsupportedOperation`](#io.UnsupportedOperation "io.UnsupportedOperation") if they do not support a given operation.

The [`RawIOBase`](#io.RawIOBase "io.RawIOBase") ABC extends [`IOBase`](#io.IOBase "io.IOBase"). It deals with the reading and writing of bytes to a stream. [`FileIO`](#io.FileIO "io.FileIO") subclasses [`RawIOBase`](#io.RawIOBase "io.RawIOBase") to provide an interface to files in the machine's file system.

The [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") ABC extends [`IOBase`](#io.IOBase "io.IOBase"). It deals with buffering on a raw binary stream ([`RawIOBase`](#io.RawIOBase "io.RawIOBase")). Its subclasses, [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter"), [`BufferedReader`](#io.BufferedReader "io.BufferedReader"), and [`BufferedRWPair`](#io.BufferedRWPair "io.BufferedRWPair") buffer raw binary streams that are readable, writable, and both readable and writable, respectively. [`BufferedRandom`](#io.BufferedRandom "io.BufferedRandom") provides a buffered interface to seekable streams. Another [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") subclass, [`BytesIO`](#io.BytesIO "io.BytesIO"), is a stream of in-memory bytes.

The [`TextIOBase`](#io.TextIOBase "io.TextIOBase") ABC extends [`IOBase`](#io.IOBase "io.IOBase"). It deals with streams whose bytes represent text, and handles encoding and decoding to and from strings. [`TextIOWrapper`](#io.TextIOWrapper "io.TextIOWrapper"), which extends [`TextIOBase`](#io.TextIOBase "io.TextIOBase"), is a buffered text interface to a buffered raw stream ([`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase")). Finally, [`StringIO`](#io.StringIO "io.StringIO") is an in-memory stream for text.

Argument names are not part of the specification, and only the arguments of [`open()`](functions#open "open") are intended to be used as keyword arguments.

The following table summarizes the ABCs provided by the [`io`](#module-io "io: Core tools for working with streams.") module:

| ABC | Inherits | Stub Methods | Mixin Methods and Properties |
| --- | --- | --- | --- |
| [`IOBase`](#io.IOBase "io.IOBase") | | `fileno`, `seek`, and `truncate` | `close`, `closed`, `__enter__`, `__exit__`, `flush`, `isatty`, `__iter__`, `__next__`, `readable`, `readline`, `readlines`, `seekable`, `tell`, `writable`, and `writelines` |
| [`RawIOBase`](#io.RawIOBase "io.RawIOBase") | [`IOBase`](#io.IOBase "io.IOBase") | `readinto` and `write` | Inherited [`IOBase`](#io.IOBase "io.IOBase") methods, `read`, and `readall` |
| [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") | [`IOBase`](#io.IOBase "io.IOBase") | `detach`, `read`, `read1`, and `write` | Inherited [`IOBase`](#io.IOBase "io.IOBase") methods, `readinto`, and `readinto1` |
| [`TextIOBase`](#io.TextIOBase "io.TextIOBase") | [`IOBase`](#io.IOBase "io.IOBase") | `detach`, `read`, `readline`, and `write` | Inherited [`IOBase`](#io.IOBase "io.IOBase") methods, `encoding`, `errors`, and `newlines` |

### I/O Base Classes

`class io.IOBase`

The abstract base class for all I/O classes.
This class provides empty abstract implementations for many methods that derived classes can override selectively; the default implementations represent a file that cannot be read, written or seeked.

Even though [`IOBase`](#io.IOBase "io.IOBase") does not declare `read()` or `write()` because their signatures will vary, implementations and clients should consider those methods part of the interface. Also, implementations may raise a [`ValueError`](exceptions#ValueError "ValueError") (or [`UnsupportedOperation`](#io.UnsupportedOperation "io.UnsupportedOperation")) when operations they do not support are called.

The basic type used for binary data read from or written to a file is [`bytes`](stdtypes#bytes "bytes"). Other [bytes-like objects](../glossary#term-bytes-like-object) are accepted as method arguments too. Text I/O classes work with [`str`](stdtypes#str "str") data.

Note that calling any method (even inquiries) on a closed stream is undefined. Implementations may raise [`ValueError`](exceptions#ValueError "ValueError") in this case.

[`IOBase`](#io.IOBase "io.IOBase") (and its subclasses) supports the iterator protocol, meaning that an [`IOBase`](#io.IOBase "io.IOBase") object can be iterated over yielding the lines in a stream. Lines are defined slightly differently depending on whether the stream is a binary stream (yielding bytes), or a text stream (yielding character strings). See [`readline()`](#io.IOBase.readline "io.IOBase.readline") below.

[`IOBase`](#io.IOBase "io.IOBase") is also a context manager and therefore supports the [`with`](../reference/compound_stmts#with) statement. In this example, *file* is closed after the `with` statement's suite is finished—even if an exception occurs:

```
with open('spam.txt', 'w') as file:
    file.write('Spam and eggs!')
```

[`IOBase`](#io.IOBase "io.IOBase") provides these data attributes and methods:

`close()`

Flush and close this stream. This method has no effect if the file is already closed. Once the file is closed, any operation on the file (e.g. reading or writing) will raise a [`ValueError`](exceptions#ValueError "ValueError"). As a convenience, it is allowed to call this method more than once; only the first call, however, will have an effect.

`closed`

`True` if the stream is closed.

`fileno()`

Return the underlying file descriptor (an integer) of the stream if it exists. An [`OSError`](exceptions#OSError "OSError") is raised if the IO object does not use a file descriptor.

`flush()`

Flush the write buffers of the stream if applicable. This does nothing for read-only and non-blocking streams.

`isatty()`

Return `True` if the stream is interactive (i.e., connected to a terminal/tty device).

`readable()`

Return `True` if the stream can be read from. If `False`, `read()` will raise [`OSError`](exceptions#OSError "OSError").

`readline(size=-1, /)`

Read and return one line from the stream. If *size* is specified, at most *size* bytes will be read.

The line terminator is always `b'\n'` for binary files; for text files, the *newline* argument to [`open()`](functions#open "open") can be used to select the line terminator(s) recognized.

`readlines(hint=-1, /)`

Read and return a list of lines from the stream. *hint* can be specified to control the number of lines read: no more lines will be read if the total size (in bytes/characters) of all lines so far exceeds *hint*. *hint* values of `0` or less, as well as `None`, are treated as no hint.
Note that it's already possible to iterate on file objects using `for line in file: ...` without calling `file.readlines()`.

`seek(offset, whence=SEEK_SET, /)`

Change the stream position to the given byte *offset*. *offset* is interpreted relative to the position indicated by *whence*. The default value for *whence* is `SEEK_SET`. Values for *whence* are:

* `SEEK_SET` or `0` – start of the stream (the default); *offset* should be zero or positive
* `SEEK_CUR` or `1` – current stream position; *offset* may be negative
* `SEEK_END` or `2` – end of the stream; *offset* is usually negative

Return the new absolute position.

New in version 3.1: The `SEEK_*` constants.

New in version 3.3: Some operating systems could support additional values, like `os.SEEK_HOLE` or `os.SEEK_DATA`. The valid values for a file could depend on it being open in text or binary mode.

`seekable()`

Return `True` if the stream supports random access. If `False`, [`seek()`](#io.IOBase.seek "io.IOBase.seek"), [`tell()`](#io.IOBase.tell "io.IOBase.tell") and [`truncate()`](#io.IOBase.truncate "io.IOBase.truncate") will raise [`OSError`](exceptions#OSError "OSError").

`tell()`

Return the current stream position.

`truncate(size=None, /)`

Resize the stream to the given *size* in bytes (or the current position if *size* is not specified). The current stream position isn't changed. This resizing can extend or reduce the current file size. In case of extension, the contents of the new file area depend on the platform (on most systems, additional bytes are zero-filled). The new file size is returned.

Changed in version 3.5: Windows will now zero-fill files when extending.

`writable()`

Return `True` if the stream supports writing. If `False`, `write()` and [`truncate()`](#io.IOBase.truncate "io.IOBase.truncate") will raise [`OSError`](exceptions#OSError "OSError").

`writelines(lines, /)`

Write a list of lines to the stream. Line separators are not added, so it is usual for each of the lines provided to have a line separator at the end.

`__del__()`

Prepare for object destruction. [`IOBase`](#io.IOBase "io.IOBase") provides a default implementation of this method that calls the instance's [`close()`](#io.IOBase.close "io.IOBase.close") method.

`class io.RawIOBase`

Base class for raw binary streams. It inherits [`IOBase`](#io.IOBase "io.IOBase").

Raw binary streams typically provide low-level access to an underlying OS device or API, and do not try to encapsulate it in high-level primitives (this functionality is done at a higher-level in buffered binary streams and text streams, described later in this page).

[`RawIOBase`](#io.RawIOBase "io.RawIOBase") provides these methods in addition to those from [`IOBase`](#io.IOBase "io.IOBase"):

`read(size=-1, /)`

Read up to *size* bytes from the object and return them. As a convenience, if *size* is unspecified or -1, all bytes until EOF are returned. Otherwise, only one system call is ever made. Fewer than *size* bytes may be returned if the operating system call returns fewer than *size* bytes.

If 0 bytes are returned, and *size* was not 0, this indicates end of file. If the object is in non-blocking mode and no bytes are available, `None` is returned.

The default implementation defers to [`readall()`](#io.RawIOBase.readall "io.RawIOBase.readall") and [`readinto()`](#io.RawIOBase.readinto "io.RawIOBase.readinto").

`readall()`

Read and return all the bytes from the stream until EOF, using multiple calls to the stream if necessary.
`readinto(b, /)`

Read bytes into a pre-allocated, writable [bytes-like object](../glossary#term-bytes-like-object) *b*, and return the number of bytes read. For example, *b* might be a [`bytearray`](stdtypes#bytearray "bytearray"). If the object is in non-blocking mode and no bytes are available, `None` is returned.

`write(b, /)`

Write the given [bytes-like object](../glossary#term-bytes-like-object), *b*, to the underlying raw stream, and return the number of bytes written. This can be less than the length of *b* in bytes, depending on specifics of the underlying raw stream, and especially if it is in non-blocking mode. `None` is returned if the raw stream is set not to block and no single byte could be readily written to it. The caller may release or mutate *b* after this method returns, so the implementation should only access *b* during the method call.

`class io.BufferedIOBase`

Base class for binary streams that support some kind of buffering. It inherits [`IOBase`](#io.IOBase "io.IOBase").

The main difference with [`RawIOBase`](#io.RawIOBase "io.RawIOBase") is that methods [`read()`](#io.BufferedIOBase.read "io.BufferedIOBase.read"), [`readinto()`](#io.BufferedIOBase.readinto "io.BufferedIOBase.readinto") and [`write()`](#io.BufferedIOBase.write "io.BufferedIOBase.write") will try (respectively) to read as much input as requested or to consume all given output, at the expense of making perhaps more than one system call.

In addition, those methods can raise [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") if the underlying raw stream is in non-blocking mode and cannot take or give enough data; unlike their [`RawIOBase`](#io.RawIOBase "io.RawIOBase") counterparts, they will never return `None`.

Besides, the [`read()`](#io.BufferedIOBase.read "io.BufferedIOBase.read") method does not have a default implementation that defers to [`readinto()`](#io.BufferedIOBase.readinto "io.BufferedIOBase.readinto").

A typical [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") implementation should not inherit from a [`RawIOBase`](#io.RawIOBase "io.RawIOBase") implementation, but wrap one, like [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter") and [`BufferedReader`](#io.BufferedReader "io.BufferedReader") do.

[`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") provides or overrides these data attributes and methods in addition to those from [`IOBase`](#io.IOBase "io.IOBase"):

`raw`

The underlying raw stream (a [`RawIOBase`](#io.RawIOBase "io.RawIOBase") instance) that [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") deals with. This is not part of the [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") API and may not exist on some implementations.

`detach()`

Separate the underlying raw stream from the buffer and return it.

After the raw stream has been detached, the buffer is in an unusable state.

Some buffers, like [`BytesIO`](#io.BytesIO "io.BytesIO"), do not have the concept of a single raw stream to return from this method. They raise [`UnsupportedOperation`](#io.UnsupportedOperation "io.UnsupportedOperation").

New in version 3.1.

`read(size=-1, /)`

Read and return up to *size* bytes. If the argument is omitted, `None`, or negative, data is read and returned until EOF is reached. An empty [`bytes`](stdtypes#bytes "bytes") object is returned if the stream is already at EOF.

If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first).
But for interactive raw streams, at most one raw read will be issued, and a short result does not imply that EOF is imminent. A [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") is raised if the underlying raw stream is in non-blocking mode and has no data available at the moment. `read1(size=-1, /)` Read and return up to *size* bytes, with at most one call to the underlying raw stream’s [`read()`](#io.RawIOBase.read "io.RawIOBase.read") (or [`readinto()`](#io.RawIOBase.readinto "io.RawIOBase.readinto")) method. This can be useful if you are implementing your own buffering on top of a [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") object. If *size* is `-1` (the default), an arbitrary number of bytes are returned (more than zero unless EOF is reached). `readinto(b, /)` Read bytes into a pre-allocated, writable [bytes-like object](../glossary#term-bytes-like-object) *b* and return the number of bytes read. For example, *b* might be a [`bytearray`](stdtypes#bytearray "bytearray"). Like [`read()`](#io.BufferedIOBase.read "io.BufferedIOBase.read"), multiple reads may be issued to the underlying raw stream, unless the latter is interactive. A [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") is raised if the underlying raw stream is in non-blocking mode and has no data available at the moment. `readinto1(b, /)` Read bytes into a pre-allocated, writable [bytes-like object](../glossary#term-bytes-like-object) *b*, using at most one call to the underlying raw stream’s [`read()`](#io.RawIOBase.read "io.RawIOBase.read") (or [`readinto()`](#io.RawIOBase.readinto "io.RawIOBase.readinto")) method. Return the number of bytes read. A [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") is raised if the underlying raw stream is in non-blocking mode and has no data available at the moment. New in version 3.5. `write(b, /)` Write the given [bytes-like object](../glossary#term-bytes-like-object), *b*, and return the number of bytes written (always equal to the length of *b* in bytes, since if the write fails an [`OSError`](exceptions#OSError "OSError") will be raised). Depending on the actual implementation, these bytes may be readily written to the underlying stream, or held in a buffer for performance and latency reasons. When in non-blocking mode, a [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") is raised if the data needs to be written to the raw stream but the stream cannot accept all of it without blocking. The caller may release or mutate *b* after this method returns, so the implementation should only access *b* during the method call. ### Raw File I/O `class io.FileIO(name, mode='r', closefd=True, opener=None)` A raw binary stream representing an OS-level file containing bytes data. It inherits [`RawIOBase`](#io.RawIOBase "io.RawIOBase"). The *name* can be one of two things: * a character string or [`bytes`](stdtypes#bytes "bytes") object representing the path to the file which will be opened. In this case *closefd* must be `True` (the default), otherwise an error will be raised. * an integer representing the number of an existing OS-level file descriptor to which the resulting [`FileIO`](#io.FileIO "io.FileIO") object will give access. When the FileIO object is closed this fd will be closed as well, unless *closefd* is set to `False`. The *mode* can be `'r'`, `'w'`, `'x'` or `'a'` for reading (default), writing, exclusive creation or appending.
The file will be created if it doesn’t exist when opened for writing or appending; it will be truncated when opened for writing. [`FileExistsError`](exceptions#FileExistsError "FileExistsError") will be raised if it already exists when opened for creating. Opening a file for creating implies writing, so this mode behaves in a similar way to `'w'`. Add a `'+'` to the mode to allow simultaneous reading and writing. The `read()` (when called with a positive argument), `readinto()` and `write()` methods on this class will only make one system call. A custom opener can be used by passing a callable as *opener*. The underlying file descriptor for the file object is then obtained by calling *opener* with (*name*, *flags*). *opener* must return an open file descriptor (passing [`os.open`](os#os.open "os.open") as *opener* results in functionality similar to passing `None`). The newly created file is [non-inheritable](os#fd-inheritance). See the [`open()`](functions#open "open") built-in function for examples on using the *opener* parameter. Changed in version 3.3: The *opener* parameter was added. The `'x'` mode was added. Changed in version 3.4: The file is now non-inheritable. [`FileIO`](#io.FileIO "io.FileIO") provides these data attributes in addition to those from [`RawIOBase`](#io.RawIOBase "io.RawIOBase") and [`IOBase`](#io.IOBase "io.IOBase"): `mode` The mode as given in the constructor. `name` The file name. This is the file descriptor of the file when no name is given in the constructor. ### Buffered Streams Buffered I/O streams provide a higher-level interface to an I/O device than raw I/O does. `class io.BytesIO(initial_bytes=b'')` A binary stream using an in-memory bytes buffer. It inherits [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase"). The buffer is discarded when the [`close()`](#io.IOBase.close "io.IOBase.close") method is called. The optional argument *initial\_bytes* is a [bytes-like object](../glossary#term-bytes-like-object) that contains initial data. [`BytesIO`](#io.BytesIO "io.BytesIO") provides or overrides these methods in addition to those from [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") and [`IOBase`](#io.IOBase "io.IOBase"): `getbuffer()` Return a readable and writable view over the contents of the buffer without copying them. Also, mutating the view will transparently update the contents of the buffer:

```
>>> b = io.BytesIO(b"abcdef")
>>> view = b.getbuffer()
>>> view[2:4] = b"56"
>>> b.getvalue()
b'ab56ef'
```

Note As long as the view exists, the [`BytesIO`](#io.BytesIO "io.BytesIO") object cannot be resized or closed. New in version 3.2. `getvalue()` Return [`bytes`](stdtypes#bytes "bytes") containing the entire contents of the buffer. `read1(size=-1, /)` In [`BytesIO`](#io.BytesIO "io.BytesIO"), this is the same as [`read()`](#io.BufferedIOBase.read "io.BufferedIOBase.read"). Changed in version 3.7: The *size* argument is now optional. `readinto1(b, /)` In [`BytesIO`](#io.BytesIO "io.BytesIO"), this is the same as [`readinto()`](#io.BufferedIOBase.readinto "io.BufferedIOBase.readinto"). New in version 3.5. `class io.BufferedReader(raw, buffer_size=DEFAULT_BUFFER_SIZE)` A buffered binary stream providing higher-level access to a readable, non seekable [`RawIOBase`](#io.RawIOBase "io.RawIOBase") raw binary stream. It inherits [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase"). When reading data from this object, a larger amount of data may be requested from the underlying raw stream, and kept in an internal buffer.
The buffered data can then be returned directly on subsequent reads. The constructor creates a [`BufferedReader`](#io.BufferedReader "io.BufferedReader") for the given readable *raw* stream and *buffer\_size*. If *buffer\_size* is omitted, [`DEFAULT_BUFFER_SIZE`](#io.DEFAULT_BUFFER_SIZE "io.DEFAULT_BUFFER_SIZE") is used. [`BufferedReader`](#io.BufferedReader "io.BufferedReader") provides or overrides these methods in addition to those from [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") and [`IOBase`](#io.IOBase "io.IOBase"): `peek(size=0, /)` Return bytes from the stream without advancing the position. At most one single read on the raw stream is done to satisfy the call. The number of bytes returned may be less or more than requested. `read(size=-1, /)` Read and return *size* bytes, or if *size* is not given or negative, until EOF or if the read call would block in non-blocking mode. `read1(size=-1, /)` Read and return up to *size* bytes with only one call on the raw stream. If at least one byte is buffered, only buffered bytes are returned. Otherwise, one raw stream read call is made. Changed in version 3.7: The *size* argument is now optional. `class io.BufferedWriter(raw, buffer_size=DEFAULT_BUFFER_SIZE)` A buffered binary stream providing higher-level access to a writeable, non seekable [`RawIOBase`](#io.RawIOBase "io.RawIOBase") raw binary stream. It inherits [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase"). When writing to this object, data is normally placed into an internal buffer. The buffer will be written out to the underlying [`RawIOBase`](#io.RawIOBase "io.RawIOBase") object under various conditions, including: * when the buffer gets too small for all pending data; * when [`flush()`](#io.BufferedWriter.flush "io.BufferedWriter.flush") is called; * when a `seek()` is requested (for [`BufferedRandom`](#io.BufferedRandom "io.BufferedRandom") objects); * when the [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter") object is closed or destroyed. The constructor creates a [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter") for the given writeable *raw* stream. If the *buffer\_size* is not given, it defaults to [`DEFAULT_BUFFER_SIZE`](#io.DEFAULT_BUFFER_SIZE "io.DEFAULT_BUFFER_SIZE"). [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter") provides or overrides these methods in addition to those from [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") and [`IOBase`](#io.IOBase "io.IOBase"): `flush()` Force bytes held in the buffer into the raw stream. A [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") should be raised if the raw stream blocks. `write(b, /)` Write the [bytes-like object](../glossary#term-bytes-like-object), *b*, and return the number of bytes written. When in non-blocking mode, a [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") is raised if the buffer needs to be written out but the raw stream blocks. `class io.BufferedRandom(raw, buffer_size=DEFAULT_BUFFER_SIZE)` A buffered binary stream providing higher-level access to a seekable [`RawIOBase`](#io.RawIOBase "io.RawIOBase") raw binary stream. It inherits [`BufferedReader`](#io.BufferedReader "io.BufferedReader") and [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter"). The constructor creates a reader and writer for a seekable raw stream, given in the first argument. If the *buffer\_size* is omitted it defaults to [`DEFAULT_BUFFER_SIZE`](#io.DEFAULT_BUFFER_SIZE "io.DEFAULT_BUFFER_SIZE"). 
[`BufferedRandom`](#io.BufferedRandom "io.BufferedRandom") is capable of anything [`BufferedReader`](#io.BufferedReader "io.BufferedReader") or [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter") can do. In addition, `seek()` and `tell()` are guaranteed to be implemented. `class io.BufferedRWPair(reader, writer, buffer_size=DEFAULT_BUFFER_SIZE, /)` A buffered binary stream providing higher-level access to two non seekable [`RawIOBase`](#io.RawIOBase "io.RawIOBase") raw binary streams—one readable, the other writeable. It inherits [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase"). *reader* and *writer* are [`RawIOBase`](#io.RawIOBase "io.RawIOBase") objects that are readable and writeable respectively. If the *buffer\_size* is omitted it defaults to [`DEFAULT_BUFFER_SIZE`](#io.DEFAULT_BUFFER_SIZE "io.DEFAULT_BUFFER_SIZE"). [`BufferedRWPair`](#io.BufferedRWPair "io.BufferedRWPair") implements all of [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase")’s methods except for [`detach()`](#io.BufferedIOBase.detach "io.BufferedIOBase.detach"), which raises [`UnsupportedOperation`](#io.UnsupportedOperation "io.UnsupportedOperation"). Warning [`BufferedRWPair`](#io.BufferedRWPair "io.BufferedRWPair") does not attempt to synchronize accesses to its underlying raw streams. You should not pass it the same object as reader and writer; use [`BufferedRandom`](#io.BufferedRandom "io.BufferedRandom") instead. ### Text I/O `class io.TextIOBase` Base class for text streams. This class provides a character and line based interface to stream I/O. It inherits [`IOBase`](#io.IOBase "io.IOBase"). [`TextIOBase`](#io.TextIOBase "io.TextIOBase") provides or overrides these data attributes and methods in addition to those from [`IOBase`](#io.IOBase "io.IOBase"): `encoding` The name of the encoding used to decode the stream’s bytes into strings, and to encode strings into bytes. `errors` The error setting of the decoder or encoder. `newlines` A string, a tuple of strings, or `None`, indicating the newlines translated so far. Depending on the implementation and the initial constructor flags, this may not be available. `buffer` The underlying binary buffer (a [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") instance) that [`TextIOBase`](#io.TextIOBase "io.TextIOBase") deals with. This is not part of the [`TextIOBase`](#io.TextIOBase "io.TextIOBase") API and may not exist in some implementations. `detach()` Separate the underlying binary buffer from the [`TextIOBase`](#io.TextIOBase "io.TextIOBase") and return it. After the underlying buffer has been detached, the [`TextIOBase`](#io.TextIOBase "io.TextIOBase") is in an unusable state. Some [`TextIOBase`](#io.TextIOBase "io.TextIOBase") implementations, like [`StringIO`](#io.StringIO "io.StringIO"), may not have the concept of an underlying buffer and calling this method will raise [`UnsupportedOperation`](#io.UnsupportedOperation "io.UnsupportedOperation"). New in version 3.1. `read(size=-1, /)` Read and return at most *size* characters from the stream as a single [`str`](stdtypes#str "str"). If *size* is negative or `None`, reads until EOF. `readline(size=-1, /)` Read until newline or EOF and return a single `str`. If the stream is already at EOF, an empty string is returned. If *size* is specified, at most *size* characters will be read. `seek(offset, whence=SEEK_SET, /)` Change the stream position to the given *offset*. Behaviour depends on the *whence* parameter. The default value for *whence* is `SEEK_SET`. 
* `SEEK_SET` or `0`: seek from the start of the stream (the default); *offset* must either be a number returned by [`TextIOBase.tell()`](#io.TextIOBase.tell "io.TextIOBase.tell"), or zero. Any other *offset* value produces undefined behaviour. * `SEEK_CUR` or `1`: “seek” to the current position; *offset* must be zero, which is a no-operation (all other values are unsupported). * `SEEK_END` or `2`: seek to the end of the stream; *offset* must be zero (all other values are unsupported). Return the new absolute position as an opaque number. New in version 3.1: The `SEEK_*` constants. `tell()` Return the current stream position as an opaque number. The number does not usually represent a number of bytes in the underlying binary storage. `write(s, /)` Write the string *s* to the stream and return the number of characters written. `class io.TextIOWrapper(buffer, encoding=None, errors=None, newline=None, line_buffering=False, write_through=False)` A buffered text stream providing higher-level access to a [`BufferedIOBase`](#io.BufferedIOBase "io.BufferedIOBase") buffered binary stream. It inherits [`TextIOBase`](#io.TextIOBase "io.TextIOBase"). *encoding* gives the name of the encoding that the stream will be decoded or encoded with. It defaults to [`locale.getpreferredencoding(False)`](locale#locale.getpreferredencoding "locale.getpreferredencoding"). *errors* is an optional string that specifies how encoding and decoding errors are to be handled. Pass `'strict'` to raise a [`ValueError`](exceptions#ValueError "ValueError") exception if there is an encoding error (the default of `None` has the same effect), or pass `'ignore'` to ignore errors. (Note that ignoring encoding errors can lead to data loss.) `'replace'` causes a replacement marker (such as `'?'`) to be inserted where there is malformed data. `'backslashreplace'` causes malformed data to be replaced by a backslashed escape sequence. When writing, `'xmlcharrefreplace'` (replace with the appropriate XML character reference) or `'namereplace'` (replace with `\N{...}` escape sequences) can be used. Any other error handling name that has been registered with [`codecs.register_error()`](codecs#codecs.register_error "codecs.register_error") is also valid. *newline* controls how line endings are handled. It can be `None`, `''`, `'\n'`, `'\r'`, and `'\r\n'`. It works as follows: * When reading input from the stream, if *newline* is `None`, [universal newlines](../glossary#term-universal-newlines) mode is enabled. Lines in the input can end in `'\n'`, `'\r'`, or `'\r\n'`, and these are translated into `'\n'` before being returned to the caller. If *newline* is `''`, universal newlines mode is enabled, but line endings are returned to the caller untranslated. If *newline* has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated. * When writing output to the stream, if *newline* is `None`, any `'\n'` characters written are translated to the system default line separator, [`os.linesep`](os#os.linesep "os.linesep"). If *newline* is `''` or `'\n'`, no translation takes place. If *newline* is any of the other legal values, any `'\n'` characters written are translated to the given string. If *line\_buffering* is `True`, `flush()` is implied when a call to write contains a newline character or a carriage return. 
If *write\_through* is `True`, calls to `write()` are guaranteed not to be buffered: any data written on the [`TextIOWrapper`](#io.TextIOWrapper "io.TextIOWrapper") object is immediately handed to its underlying binary *buffer*. Changed in version 3.3: The *write\_through* argument has been added. Changed in version 3.3: The default *encoding* is now `locale.getpreferredencoding(False)` instead of `locale.getpreferredencoding()`. Don’t temporarily change the locale encoding using [`locale.setlocale()`](locale#locale.setlocale "locale.setlocale"); use the current locale encoding instead of the user’s preferred encoding. [`TextIOWrapper`](#io.TextIOWrapper "io.TextIOWrapper") provides these data attributes and methods in addition to those from [`TextIOBase`](#io.TextIOBase "io.TextIOBase") and [`IOBase`](#io.IOBase "io.IOBase"): `line_buffering` Whether line buffering is enabled. `write_through` Whether writes are passed immediately to the underlying binary buffer. New in version 3.7. `reconfigure(*[, encoding][, errors][, newline][, line_buffering][, write_through])` Reconfigure this text stream using new settings for *encoding*, *errors*, *newline*, *line\_buffering* and *write\_through*. Parameters not specified keep current settings, except `errors='strict'` is used when *encoding* is specified but *errors* is not specified. It is not possible to change the encoding or newline if some data has already been read from the stream. On the other hand, changing encoding after write is possible. This method does an implicit stream flush before setting the new parameters. New in version 3.7. `class io.StringIO(initial_value='', newline='\n')` A text stream using an in-memory text buffer. It inherits [`TextIOBase`](#io.TextIOBase "io.TextIOBase"). The text buffer is discarded when the [`close()`](#io.IOBase.close "io.IOBase.close") method is called. The initial value of the buffer can be set by providing *initial\_value*. If newline translation is enabled, newlines will be encoded as if by [`write()`](#io.TextIOBase.write "io.TextIOBase.write"). The stream is positioned at the start of the buffer. The *newline* argument works like that of [`TextIOWrapper`](#io.TextIOWrapper "io.TextIOWrapper"), except that when writing output to the stream, if *newline* is `None`, newlines are written as `\n` on all platforms. [`StringIO`](#io.StringIO "io.StringIO") provides this method in addition to those from [`TextIOBase`](#io.TextIOBase "io.TextIOBase") and [`IOBase`](#io.IOBase "io.IOBase"): `getvalue()` Return a `str` containing the entire contents of the buffer. Newlines are decoded as if by [`read()`](#io.TextIOBase.read "io.TextIOBase.read"), although the stream position is not changed. Example usage:

```
import io

output = io.StringIO()
output.write('First line.\n')
print('Second line.', file=output)

# Retrieve file contents -- this will be
# 'First line.\nSecond line.\n'
contents = output.getvalue()

# Close object and discard memory buffer --
# .getvalue() will now raise an exception.
output.close()
```

`class io.IncrementalNewlineDecoder` A helper codec that decodes newlines for [universal newlines](../glossary#term-universal-newlines) mode. It inherits [`codecs.IncrementalDecoder`](codecs#codecs.IncrementalDecoder "codecs.IncrementalDecoder"). Performance ----------- This section discusses the performance of the provided concrete I/O implementations.
### Binary I/O By reading and writing only large chunks of data even when the user asks for a single byte, buffered I/O hides any inefficiency in calling and executing the operating system’s unbuffered I/O routines. The gain depends on the OS and the kind of I/O which is performed. For example, on some modern OSes such as Linux, unbuffered disk I/O can be as fast as buffered I/O. The bottom line, however, is that buffered I/O offers predictable performance regardless of the platform and the backing device. Therefore, it is almost always preferable to use buffered I/O rather than unbuffered I/O for binary data. ### Text I/O Text I/O over a binary storage (such as a file) is significantly slower than binary I/O over the same storage, because it requires conversions between unicode and binary data using a character codec. This can become noticeable handling huge amounts of text data like large log files. Also, `TextIOWrapper.tell()` and `TextIOWrapper.seek()` are both quite slow due to the reconstruction algorithm used. [`StringIO`](#io.StringIO "io.StringIO"), however, is a native in-memory unicode container and will exhibit similar speed to [`BytesIO`](#io.BytesIO "io.BytesIO"). ### Multi-threading [`FileIO`](#io.FileIO "io.FileIO") objects are thread-safe to the extent that the operating system calls (such as `read(2)` under Unix) they wrap are thread-safe too. Binary buffered objects (instances of [`BufferedReader`](#io.BufferedReader "io.BufferedReader"), [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter"), [`BufferedRandom`](#io.BufferedRandom "io.BufferedRandom") and [`BufferedRWPair`](#io.BufferedRWPair "io.BufferedRWPair")) protect their internal structures using a lock; it is therefore safe to call them from multiple threads at once. [`TextIOWrapper`](#io.TextIOWrapper "io.TextIOWrapper") objects are not thread-safe. ### Reentrancy Binary buffered objects (instances of [`BufferedReader`](#io.BufferedReader "io.BufferedReader"), [`BufferedWriter`](#io.BufferedWriter "io.BufferedWriter"), [`BufferedRandom`](#io.BufferedRandom "io.BufferedRandom") and [`BufferedRWPair`](#io.BufferedRWPair "io.BufferedRWPair")) are not reentrant. While reentrant calls will not happen in normal situations, they can arise from doing I/O in a [`signal`](signal#module-signal "signal: Set handlers for asynchronous events.") handler. If a thread tries to re-enter a buffered object which it is already accessing, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. Note this doesn’t prohibit a different thread from entering the buffered object. The above implicitly extends to text files, since the [`open()`](functions#open "open") function will wrap a buffered object inside a [`TextIOWrapper`](#io.TextIOWrapper "io.TextIOWrapper"). This includes standard streams and therefore affects the built-in [`print()`](functions#print "print") function as well.
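To make the layering described on this page concrete, here is a minimal sketch of a [`TextIOWrapper`](#io.TextIOWrapper "io.TextIOWrapper") over an in-memory [`BytesIO`](#io.BytesIO "io.BytesIO") buffer (all names are illustrative); passing *encoding* and *newline* explicitly avoids the locale-dependent defaults discussed above:

```
import io

binary = io.BytesIO()
text = io.TextIOWrapper(binary, encoding="utf-8", newline="\n")

text.write("first line\n")
text.flush()                 # push buffered text down into the binary layer
print(binary.getvalue())     # b'first line\n'

buffer = text.detach()       # the wrapper is now unusable
print(buffer is binary)      # True
```

Because the text layer buffers independently of the binary layer, the bytes only become visible in the underlying buffer after `flush()` (or when *write_through* is enabled).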
python faulthandler — Dump the Python traceback faulthandler — Dump the Python traceback ======================================== New in version 3.3. This module contains functions to dump Python tracebacks explicitly, on a fault, after a timeout, or on a user signal. Call [`faulthandler.enable()`](#faulthandler.enable "faulthandler.enable") to install fault handlers for the `SIGSEGV`, `SIGFPE`, `SIGABRT`, `SIGBUS`, and `SIGILL` signals. You can also enable them at startup by setting the [`PYTHONFAULTHANDLER`](../using/cmdline#envvar-PYTHONFAULTHANDLER) environment variable or by using the [`-X`](../using/cmdline#id5) `faulthandler` command line option. The fault handler is compatible with system fault handlers like Apport or the Windows fault handler. The module uses an alternative stack for signal handlers if the `sigaltstack()` function is available. This allows it to dump the traceback even on a stack overflow. The fault handler is called on catastrophic cases and therefore can only use signal-safe functions (e.g. it cannot allocate memory on the heap). Because of this limitation traceback dumping is minimal compared to normal Python tracebacks: * Only ASCII is supported. The `backslashreplace` error handler is used on encoding. * Each string is limited to 500 characters. * Only the filename, the function name and the line number are displayed. (no source code) * It is limited to 100 frames and 100 threads. * The order is reversed: the most recent call is shown first. By default, the Python traceback is written to [`sys.stderr`](sys#sys.stderr "sys.stderr"). To see tracebacks, applications must be run in the terminal. A log file can alternatively be passed to [`faulthandler.enable()`](#faulthandler.enable "faulthandler.enable"). The module is implemented in C, so tracebacks can be dumped on a crash or when Python is deadlocked. The [Python Development Mode](devmode#devmode) calls [`faulthandler.enable()`](#faulthandler.enable "faulthandler.enable") at Python startup. Dumping the traceback --------------------- `faulthandler.dump_traceback(file=sys.stderr, all_threads=True)` Dump the tracebacks of all threads into *file*. If *all\_threads* is `False`, dump only the current thread. Changed in version 3.5: Added support for passing file descriptor to this function. Fault handler state ------------------- `faulthandler.enable(file=sys.stderr, all_threads=True)` Enable the fault handler: install handlers for the `SIGSEGV`, `SIGFPE`, `SIGABRT`, `SIGBUS` and `SIGILL` signals to dump the Python traceback. If *all\_threads* is `True`, produce tracebacks for every running thread. Otherwise, dump only the current thread. The *file* must be kept open until the fault handler is disabled: see [issue with file descriptors](#faulthandler-fd). Changed in version 3.5: Added support for passing file descriptor to this function. Changed in version 3.6: On Windows, a handler for Windows exception is also installed. `faulthandler.disable()` Disable the fault handler: uninstall the signal handlers installed by [`enable()`](#faulthandler.enable "faulthandler.enable"). `faulthandler.is_enabled()` Check if the fault handler is enabled. Dumping the tracebacks after a timeout -------------------------------------- `faulthandler.dump_traceback_later(timeout, repeat=False, file=sys.stderr, exit=False)` Dump the tracebacks of all threads, after a timeout of *timeout* seconds, or every *timeout* seconds if *repeat* is `True`. If *exit* is `True`, call `_exit()` with status=1 after dumping the tracebacks. 
(Note `_exit()` exits the process immediately, which means it doesn’t do any cleanup like flushing file buffers.) If the function is called twice, the new call replaces previous parameters and resets the timeout. The timer has a sub-second resolution. The *file* must be kept open until the traceback is dumped or [`cancel_dump_traceback_later()`](#faulthandler.cancel_dump_traceback_later "faulthandler.cancel_dump_traceback_later") is called: see [issue with file descriptors](#faulthandler-fd). This function is implemented using a watchdog thread. Changed in version 3.7: This function is now always available. Changed in version 3.5: Added support for passing file descriptor to this function. `faulthandler.cancel_dump_traceback_later()` Cancel the last call to [`dump_traceback_later()`](#faulthandler.dump_traceback_later "faulthandler.dump_traceback_later"). Dumping the traceback on a user signal -------------------------------------- `faulthandler.register(signum, file=sys.stderr, all_threads=True, chain=False)` Register a user signal: install a handler for the *signum* signal to dump the traceback of all threads, or of the current thread if *all\_threads* is `False`, into *file*. Call the previous handler if chain is `True`. The *file* must be kept open until the signal is unregistered by [`unregister()`](#faulthandler.unregister "faulthandler.unregister"): see [issue with file descriptors](#faulthandler-fd). Not available on Windows. Changed in version 3.5: Added support for passing file descriptor to this function. `faulthandler.unregister(signum)` Unregister a user signal: uninstall the handler of the *signum* signal installed by [`register()`](#faulthandler.register "faulthandler.register"). Return `True` if the signal was registered, `False` otherwise. Not available on Windows. Issue with file descriptors --------------------------- [`enable()`](#faulthandler.enable "faulthandler.enable"), [`dump_traceback_later()`](#faulthandler.dump_traceback_later "faulthandler.dump_traceback_later") and [`register()`](#faulthandler.register "faulthandler.register") keep the file descriptor of their *file* argument. If the file is closed and its file descriptor is reused by a new file, or if [`os.dup2()`](os#os.dup2 "os.dup2") is used to replace the file descriptor, the traceback will be written into a different file. Call these functions again each time that the file is replaced. Example ------- Example of a segmentation fault on Linux with and without enabling the fault handler:

```
$ python3 -c "import ctypes; ctypes.string_at(0)"
Segmentation fault

$ python3 -q -X faulthandler
>>> import ctypes
>>> ctypes.string_at(0)
Fatal Python error: Segmentation fault

Current thread 0x00007fb899f39700 (most recent call first):
  File "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at
  File "<stdin>", line 1 in <module>
Segmentation fault
```

python wave — Read and write WAV files wave — Read and write WAV files =============================== **Source code:** [Lib/wave.py](https://github.com/python/cpython/tree/3.9/Lib/wave.py) The [`wave`](#module-wave "wave: Provide an interface to the WAV sound format.") module provides a convenient interface to the WAV sound format. It does not support compression/decompression, but it does support mono/stereo.
The [`wave`](#module-wave "wave: Provide an interface to the WAV sound format.") module defines the following function and exception: `wave.open(file, mode=None)` If *file* is a string, open the file by that name, otherwise treat it as a file-like object. *mode* can be: `'rb'` Read only mode. `'wb'` Write only mode. Note that it does not allow read/write WAV files. A *mode* of `'rb'` returns a `Wave_read` object, while a *mode* of `'wb'` returns a `Wave_write` object. If *mode* is omitted and a file-like object is passed as *file*, `file.mode` is used as the default value for *mode*. If you pass in a file-like object, the wave object will not close it when its `close()` method is called; it is the caller’s responsibility to close the file object. The [`open()`](#wave.open "wave.open") function may be used in a [`with`](../reference/compound_stmts#with) statement. When the `with` block completes, the [`Wave_read.close()`](#wave.Wave_read.close "wave.Wave_read.close") or [`Wave_write.close()`](#wave.Wave_write.close "wave.Wave_write.close") method is called. Changed in version 3.4: Added support for unseekable files. `exception wave.Error` An error raised when something is impossible because it violates the WAV specification or hits an implementation deficiency. Wave\_read Objects ------------------ Wave\_read objects, as returned by [`open()`](#wave.open "wave.open"), have the following methods: `Wave_read.close()` Close the stream if it was opened by [`wave`](#module-wave "wave: Provide an interface to the WAV sound format."), and make the instance unusable. This is called automatically on object collection. `Wave_read.getnchannels()` Returns number of audio channels (`1` for mono, `2` for stereo). `Wave_read.getsampwidth()` Returns sample width in bytes. `Wave_read.getframerate()` Returns sampling frequency. `Wave_read.getnframes()` Returns number of audio frames. `Wave_read.getcomptype()` Returns compression type (`'NONE'` is the only supported type). `Wave_read.getcompname()` Human-readable version of [`getcomptype()`](#wave.Wave_read.getcomptype "wave.Wave_read.getcomptype"). Usually `'not compressed'` parallels `'NONE'`. `Wave_read.getparams()` Returns a [`namedtuple()`](collections#collections.namedtuple "collections.namedtuple") `(nchannels, sampwidth, framerate, nframes, comptype, compname)`, equivalent to output of the `get*()` methods. `Wave_read.readframes(n)` Reads and returns at most *n* frames of audio, as a [`bytes`](stdtypes#bytes "bytes") object. `Wave_read.rewind()` Rewind the file pointer to the beginning of the audio stream. The following two methods are defined for compatibility with the [`aifc`](aifc#module-aifc "aifc: Read and write audio files in AIFF or AIFC format. (deprecated)") module, and don’t do anything interesting. `Wave_read.getmarkers()` Returns `None`. `Wave_read.getmark(id)` Raise an error. The following two methods define a term “position” which is compatible between them, and is otherwise implementation dependent. `Wave_read.setpos(pos)` Set the file pointer to the specified position. `Wave_read.tell()` Return current file pointer position. Wave\_write Objects ------------------- For seekable output streams, the `wave` header will automatically be updated to reflect the number of frames actually written. For unseekable streams, the *nframes* value must be accurate when the first frame data is written. 
An accurate *nframes* value can be achieved either by calling [`setnframes()`](#wave.Wave_write.setnframes "wave.Wave_write.setnframes") or [`setparams()`](#wave.Wave_write.setparams "wave.Wave_write.setparams") with the number of frames that will be written before [`close()`](#wave.Wave_write.close "wave.Wave_write.close") is called and then using [`writeframesraw()`](#wave.Wave_write.writeframesraw "wave.Wave_write.writeframesraw") to write the frame data, or by calling [`writeframes()`](#wave.Wave_write.writeframes "wave.Wave_write.writeframes") with all of the frame data to be written. In the latter case [`writeframes()`](#wave.Wave_write.writeframes "wave.Wave_write.writeframes") will calculate the number of frames in the data and set *nframes* accordingly before writing the frame data. Wave\_write objects, as returned by [`open()`](#wave.open "wave.open"), have the following methods: Changed in version 3.4: Added support for unseekable files. `Wave_write.close()` Make sure *nframes* is correct, and close the file if it was opened by [`wave`](#module-wave "wave: Provide an interface to the WAV sound format."). This method is called upon object collection. It will raise an exception if the output stream is not seekable and *nframes* does not match the number of frames actually written. `Wave_write.setnchannels(n)` Set the number of channels. `Wave_write.setsampwidth(n)` Set the sample width to *n* bytes. `Wave_write.setframerate(n)` Set the frame rate to *n*. Changed in version 3.2: A non-integral input to this method is rounded to the nearest integer. `Wave_write.setnframes(n)` Set the number of frames to *n*. This will be changed later if the number of frames actually written is different (this update attempt will raise an error if the output stream is not seekable). `Wave_write.setcomptype(type, name)` Set the compression type and description. At the moment, only compression type `NONE` is supported, meaning no compression. `Wave_write.setparams(tuple)` The *tuple* should be `(nchannels, sampwidth, framerate, nframes, comptype, compname)`, with values valid for the `set*()` methods. Sets all parameters. `Wave_write.tell()` Return current position in the file, with the same disclaimer for the [`Wave_read.tell()`](#wave.Wave_read.tell "wave.Wave_read.tell") and [`Wave_read.setpos()`](#wave.Wave_read.setpos "wave.Wave_read.setpos") methods. `Wave_write.writeframesraw(data)` Write audio frames, without correcting *nframes*. Changed in version 3.4: Any [bytes-like object](../glossary#term-bytes-like-object) is now accepted. `Wave_write.writeframes(data)` Write audio frames and make sure *nframes* is correct. It will raise an error if the output stream is not seekable and the total number of frames that have been written after *data* has been written does not match the previously set value for *nframes*. Changed in version 3.4: Any [bytes-like object](../glossary#term-bytes-like-object) is now accepted. Note that it is invalid to set any parameters after calling `writeframes()` or `writeframesraw()`, and any attempt to do so will raise [`wave.Error`](#wave.Error "wave.Error"). 
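As a minimal sketch of the writing and reading interfaces described above (the file name `silence.wav` and the audio parameters are illustrative placeholders):

```
import wave

FRAMERATE = 44100

# Write one second of 16-bit mono silence.
with wave.open('silence.wav', 'wb') as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 2 bytes per sample (16-bit)
    w.setframerate(FRAMERATE)
    # writeframes() computes nframes from the data before writing it.
    w.writeframes(b'\x00\x00' * FRAMERATE)

# Read the parameters and the frames back.
with wave.open('silence.wav', 'rb') as r:
    print(r.getparams())     # namedtuple echoing the set*() values
    data = r.readframes(r.getnframes())
    print(len(data))         # 88200 bytes: 44100 frames * 2 bytes each
```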
python gettext — Multilingual internationalization services gettext — Multilingual internationalization services ==================================================== **Source code:** [Lib/gettext.py](https://github.com/python/cpython/tree/3.9/Lib/gettext.py) The [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module provides internationalization (I18N) and localization (L10N) services for your Python modules and applications. It supports both the GNU **gettext** message catalog API and a higher level, class-based API that may be more appropriate for Python files. The interface described below allows you to write your module and application messages in one natural language, and provide a catalog of translated messages for running under different natural languages. Some hints on localizing your Python modules and applications are also given. GNU **gettext** API ------------------- The [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module defines the following API, which is very similar to the GNU **gettext** API. If you use this API you will affect the translation of your entire application globally. Often this is what you want if your application is monolingual, with the choice of language dependent on the locale of your user. If you are localizing a Python module, or if your application needs to switch languages on the fly, you probably want to use the class-based API instead. `gettext.bindtextdomain(domain, localedir=None)` Bind the *domain* to the locale directory *localedir*. More concretely, [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") will look for binary `.mo` files for the given domain using the path (on Unix): `*localedir*/*language*/LC_MESSAGES/*domain*.mo`, where *language* is searched for in the environment variables `LANGUAGE`, `LC_ALL`, `LC_MESSAGES`, and `LANG` respectively. If *localedir* is omitted or `None`, then the current binding for *domain* is returned. [1](#id3) `gettext.bind_textdomain_codeset(domain, codeset=None)` Bind the *domain* to *codeset*, changing the encoding of byte strings returned by the [`lgettext()`](#gettext.lgettext "gettext.lgettext"), [`ldgettext()`](#gettext.ldgettext "gettext.ldgettext"), [`lngettext()`](#gettext.lngettext "gettext.lngettext") and [`ldngettext()`](#gettext.ldngettext "gettext.ldngettext") functions. If *codeset* is omitted, then the current binding is returned. Deprecated since version 3.8, will be removed in version 3.10. `gettext.textdomain(domain=None)` Change or query the current global domain. If *domain* is `None`, then the current global domain is returned, otherwise the global domain is set to *domain*, which is returned. `gettext.gettext(message)` Return the localized translation of *message*, based on the current global domain, language, and locale directory. This function is usually aliased as `_()` in the local namespace (see examples below). `gettext.dgettext(domain, message)` Like [`gettext()`](#gettext.gettext "gettext.gettext"), but look the message up in the specified *domain*. `gettext.ngettext(singular, plural, n)` Like [`gettext()`](#gettext.gettext "gettext.gettext"), but consider plural forms. If a translation is found, apply the plural formula to *n*, and return the resulting message (some languages have more than two plural forms). If no translation is found, return *singular* if *n* is 1; return *plural* otherwise. The Plural formula is taken from the catalog header. 
It is a C or Python expression that has a free variable *n*; the expression evaluates to the index of the plural in the catalog. See [the GNU gettext documentation](https://www.gnu.org/software/gettext/manual/gettext.html) for the precise syntax to be used in `.po` files and the formulas for a variety of languages. `gettext.dngettext(domain, singular, plural, n)` Like [`ngettext()`](#gettext.ngettext "gettext.ngettext"), but look the message up in the specified *domain*. `gettext.pgettext(context, message)` `gettext.dpgettext(domain, context, message)` `gettext.npgettext(context, singular, plural, n)` `gettext.dnpgettext(domain, context, singular, plural, n)` Similar to the corresponding functions without the `p` in the prefix (that is, [`gettext()`](#module-gettext "gettext: Multilingual internationalization services."), [`dgettext()`](#gettext.dgettext "gettext.dgettext"), [`ngettext()`](#gettext.ngettext "gettext.ngettext"), [`dngettext()`](#gettext.dngettext "gettext.dngettext")), but the translation is restricted to the given message *context*. New in version 3.8. `gettext.lgettext(message)` `gettext.ldgettext(domain, message)` `gettext.lngettext(singular, plural, n)` `gettext.ldngettext(domain, singular, plural, n)` Equivalent to the corresponding functions without the `l` prefix ([`gettext()`](#gettext.gettext "gettext.gettext"), [`dgettext()`](#gettext.dgettext "gettext.dgettext"), [`ngettext()`](#gettext.ngettext "gettext.ngettext") and [`dngettext()`](#gettext.dngettext "gettext.dngettext")), but the translation is returned as a byte string encoded in the preferred system encoding if no other encoding was explicitly set with [`bind_textdomain_codeset()`](#gettext.bind_textdomain_codeset "gettext.bind_textdomain_codeset"). Warning These functions should be avoided in Python 3, because they return encoded bytes. It’s much better to use alternatives which return Unicode strings instead, since most Python applications will want to manipulate human readable text as strings instead of bytes. Further, it’s possible that you may get unexpected Unicode-related exceptions if there are encoding problems with the translated strings. Deprecated since version 3.8, will be removed in version 3.10. Note that GNU **gettext** also defines a `dcgettext()` method, but this was deemed not useful and so it is currently unimplemented. Here’s an example of typical usage for this API:

```
import gettext
gettext.bindtextdomain('myapplication', '/path/to/my/language/directory')
gettext.textdomain('myapplication')
_ = gettext.gettext
# ...
print(_('This is a translatable string.'))
```

Class-based API --------------- The class-based API of the [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module gives you more flexibility and greater convenience than the GNU **gettext** API. It is the recommended way of localizing your Python applications and modules. `gettext` defines a [`GNUTranslations`](#gettext.GNUTranslations "gettext.GNUTranslations") class which implements the parsing of GNU `.mo` format files, and has methods for returning strings. Instances of this class can also install themselves in the built-in namespace as the function `_()`. `gettext.find(domain, localedir=None, languages=None, all=False)` This function implements the standard `.mo` file search algorithm. It takes a *domain*, identical to what [`textdomain()`](#gettext.textdomain "gettext.textdomain") takes.
Optional *localedir* is as in [`bindtextdomain()`](#gettext.bindtextdomain "gettext.bindtextdomain"). Optional *languages* is a list of strings, where each string is a language code. If *localedir* is not given, then the default system locale directory is used. [2](#id4) If *languages* is not given, then the following environment variables are searched: `LANGUAGE`, `LC_ALL`, `LC_MESSAGES`, and `LANG`. The first one returning a non-empty value is used for the *languages* variable. The environment variables should contain a colon separated list of languages, which will be split on the colon to produce the expected list of language code strings. [`find()`](#gettext.find "gettext.find") then expands and normalizes the languages, and then iterates through them, searching for an existing file built of these components: `*localedir*/*language*/LC_MESSAGES/*domain*.mo` The first such file name that exists is returned by [`find()`](#gettext.find "gettext.find"). If no such file is found, then `None` is returned. If *all* is given, it returns a list of all file names, in the order in which they appear in the languages list or the environment variables. `gettext.translation(domain, localedir=None, languages=None, class_=None, fallback=False, codeset=None)` Return a `*Translations` instance based on the *domain*, *localedir*, and *languages*, which are first passed to [`find()`](#gettext.find "gettext.find") to get a list of the associated `.mo` file paths. Instances with identical `.mo` file names are cached. The actual class instantiated is *class\_* if provided, otherwise [`GNUTranslations`](#gettext.GNUTranslations "gettext.GNUTranslations"). The class’s constructor must take a single [file object](../glossary#term-file-object) argument. If provided, *codeset* will change the charset used to encode translated strings in the [`lgettext()`](#gettext.NullTranslations.lgettext "gettext.NullTranslations.lgettext") and [`lngettext()`](#gettext.NullTranslations.lngettext "gettext.NullTranslations.lngettext") methods. If multiple files are found, later files are used as fallbacks for earlier ones. To allow setting the fallback, [`copy.copy()`](copy#copy.copy "copy.copy") is used to clone each translation object from the cache; the actual instance data is still shared with the cache. If no `.mo` file is found, this function raises [`OSError`](exceptions#OSError "OSError") if *fallback* is false (which is the default), and returns a [`NullTranslations`](#gettext.NullTranslations "gettext.NullTranslations") instance if *fallback* is true. Changed in version 3.3: [`IOError`](exceptions#IOError "IOError") used to be raised instead of [`OSError`](exceptions#OSError "OSError"). Deprecated since version 3.8, will be removed in version 3.10: The *codeset* parameter. `gettext.install(domain, localedir=None, codeset=None, names=None)` This installs the function `_()` in Python’s builtins namespace, based on *domain*, *localedir*, and *codeset* which are passed to the function [`translation()`](#gettext.translation "gettext.translation"). For the *names* parameter, please see the description of the translation object’s [`install()`](#gettext.NullTranslations.install "gettext.NullTranslations.install") method. 
As seen below, you usually mark the strings in your application that are candidates for translation, by wrapping them in a call to the `_()` function, like this: ``` print(_('This string will be translated.')) ``` For convenience, you want the `_()` function to be installed in Python’s builtins namespace, so it is easily accessible in all modules of your application. Deprecated since version 3.8, will be removed in version 3.10: The *codeset* parameter. ### The [`NullTranslations`](#gettext.NullTranslations "gettext.NullTranslations") class Translation classes are what actually implement the translation of original source file message strings to translated message strings. The base class used by all translation classes is [`NullTranslations`](#gettext.NullTranslations "gettext.NullTranslations"); this provides the basic interface you can use to write your own specialized translation classes. Here are the methods of `NullTranslations`: `class gettext.NullTranslations(fp=None)` Takes an optional [file object](../glossary#term-file-object) *fp*, which is ignored by the base class. Initializes “protected” instance variables *\_info* and *\_charset* which are set by derived classes, as well as *\_fallback*, which is set through [`add_fallback()`](#gettext.NullTranslations.add_fallback "gettext.NullTranslations.add_fallback"). It then calls `self._parse(fp)` if *fp* is not `None`. `_parse(fp)` No-op in the base class, this method takes file object *fp*, and reads the data from the file, initializing its message catalog. If you have an unsupported message catalog file format, you should override this method to parse your format. `add_fallback(fallback)` Add *fallback* as the fallback object for the current translation object. A translation object should consult the fallback if it cannot provide a translation for a given message. `gettext(message)` If a fallback has been set, forward `gettext()` to the fallback. Otherwise, return *message*. Overridden in derived classes. `ngettext(singular, plural, n)` If a fallback has been set, forward `ngettext()` to the fallback. Otherwise, return *singular* if *n* is 1; return *plural* otherwise. Overridden in derived classes. `pgettext(context, message)` If a fallback has been set, forward [`pgettext()`](#gettext.pgettext "gettext.pgettext") to the fallback. Otherwise, return the translated message. Overridden in derived classes. New in version 3.8. `npgettext(context, singular, plural, n)` If a fallback has been set, forward [`npgettext()`](#gettext.npgettext "gettext.npgettext") to the fallback. Otherwise, return the translated message. Overridden in derived classes. New in version 3.8. `lgettext(message)` `lngettext(singular, plural, n)` Equivalent to [`gettext()`](#gettext.NullTranslations.gettext "gettext.NullTranslations.gettext") and [`ngettext()`](#gettext.NullTranslations.ngettext "gettext.NullTranslations.ngettext"), but the translation is returned as a byte string encoded in the preferred system encoding if no encoding was explicitly set with [`set_output_charset()`](#gettext.NullTranslations.set_output_charset "gettext.NullTranslations.set_output_charset"). Overridden in derived classes. Warning These methods should be avoided in Python 3. See the warning for the [`lgettext()`](#gettext.lgettext "gettext.lgettext") function. Deprecated since version 3.8, will be removed in version 3.10. `info()` Return the “protected” `_info` variable, a dictionary containing the metadata found in the message catalog file. 
`charset()` Return the encoding of the message catalog file. `output_charset()` Return the encoding used to return translated messages in [`lgettext()`](#gettext.NullTranslations.lgettext "gettext.NullTranslations.lgettext") and [`lngettext()`](#gettext.NullTranslations.lngettext "gettext.NullTranslations.lngettext"). Deprecated since version 3.8, will be removed in version 3.10. `set_output_charset(charset)` Change the encoding used to return translated messages. Deprecated since version 3.8, will be removed in version 3.10. `install(names=None)` This method installs [`gettext()`](#gettext.NullTranslations.gettext "gettext.NullTranslations.gettext") into the built-in namespace, binding it to `_`. If the *names* parameter is given, it must be a sequence containing the names of functions you want to install in the builtins namespace in addition to `_()`. Supported names are `'gettext'`, `'ngettext'`, `'pgettext'`, `'npgettext'`, `'lgettext'`, and `'lngettext'`. Note that this is only one way, albeit the most convenient way, to make the `_()` function available to your application. Because it affects the entire application globally, and specifically the built-in namespace, localized modules should never install `_()`. Instead, they should use this code to make `_()` available to their module:

```
import gettext
t = gettext.translation('mymodule', ...)
_ = t.gettext
```

This puts `_()` only in the module’s global namespace and so only affects calls within this module. Changed in version 3.8: Added `'pgettext'` and `'npgettext'`. ### The [`GNUTranslations`](#gettext.GNUTranslations "gettext.GNUTranslations") class The [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module provides one additional class derived from [`NullTranslations`](#gettext.NullTranslations "gettext.NullTranslations"): [`GNUTranslations`](#gettext.GNUTranslations "gettext.GNUTranslations"). This class overrides `_parse()` to enable reading GNU **gettext** format `.mo` files in both big-endian and little-endian format. [`GNUTranslations`](#gettext.GNUTranslations "gettext.GNUTranslations") parses optional metadata out of the translation catalog. It is convention with GNU **gettext** to include metadata as the translation for the empty string. This metadata is in [**RFC 822**](https://tools.ietf.org/html/rfc822.html)-style `key: value` pairs, and should contain the `Project-Id-Version` key. If the key `Content-Type` is found, then the `charset` property is used to initialize the “protected” `_charset` instance variable, defaulting to `None` if not found. If the charset encoding is specified, then all message ids and message strings read from the catalog are converted to Unicode using this encoding, else ASCII is assumed. Since message ids are read as Unicode strings too, all `*gettext()` methods will assume message ids as Unicode strings, not byte strings. The entire set of key/value pairs is placed into a dictionary and set as the “protected” `_info` instance variable. If the `.mo` file’s magic number is invalid, the major version number is unexpected, or if other problems occur while reading the file, instantiating a [`GNUTranslations`](#gettext.GNUTranslations "gettext.GNUTranslations") class can raise [`OSError`](exceptions#OSError "OSError"). `class gettext.GNUTranslations` The following methods are overridden from the base class implementation: `gettext(message)` Look up the *message* id in the catalog and return the corresponding message string, as a Unicode string.
If there is no entry in the catalog for the *message* id, and a fallback has been set, the lookup is forwarded to the fallback’s [`gettext()`](#gettext.NullTranslations.gettext "gettext.NullTranslations.gettext") method. Otherwise, the *message* id is returned. `ngettext(singular, plural, n)` Do a plural-forms lookup of a message id. *singular* is used as the message id for purposes of lookup in the catalog, while *n* is used to determine which plural form to use. The returned message string is a Unicode string. If the message id is not found in the catalog, and a fallback is specified, the request is forwarded to the fallback’s [`ngettext()`](#gettext.NullTranslations.ngettext "gettext.NullTranslations.ngettext") method. Otherwise, when *n* is 1 *singular* is returned, and *plural* is returned in all other cases. Here is an example:

```
n = len(os.listdir('.'))
cat = GNUTranslations(somefile)
message = cat.ngettext(
    'There is %(num)d file in this directory',
    'There are %(num)d files in this directory',
    n) % {'num': n}
```

`pgettext(context, message)` Look up the *context* and *message* id in the catalog and return the corresponding message string, as a Unicode string. If there is no entry in the catalog for the *message* id and *context*, and a fallback has been set, the lookup is forwarded to the fallback’s [`pgettext()`](#gettext.pgettext "gettext.pgettext") method. Otherwise, the *message* id is returned. New in version 3.8. `npgettext(context, singular, plural, n)` Do a plural-forms lookup of a message id. *singular* is used as the message id for purposes of lookup in the catalog, while *n* is used to determine which plural form to use. If the message id for *context* is not found in the catalog, and a fallback is specified, the request is forwarded to the fallback’s [`npgettext()`](#gettext.npgettext "gettext.npgettext") method. Otherwise, when *n* is 1 *singular* is returned, and *plural* is returned in all other cases. New in version 3.8. `lgettext(message)` `lngettext(singular, plural, n)` Equivalent to [`gettext()`](#gettext.GNUTranslations.gettext "gettext.GNUTranslations.gettext") and [`ngettext()`](#gettext.GNUTranslations.ngettext "gettext.GNUTranslations.ngettext"), but the translation is returned as a byte string encoded in the preferred system encoding if no encoding was explicitly set with [`set_output_charset()`](#gettext.NullTranslations.set_output_charset "gettext.NullTranslations.set_output_charset"). Warning These methods should be avoided in Python 3. See the warning for the [`lgettext()`](#gettext.lgettext "gettext.lgettext") function. Deprecated since version 3.8, will be removed in version 3.10. ### Solaris message catalog support The Solaris operating system defines its own binary `.mo` file format, but since no documentation can be found on this format, it is not supported at this time. ### The Catalog constructor GNOME uses a version of the [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module by James Henstridge, but this version has a slightly different API. Its documented usage was:

```
import gettext
cat = gettext.Catalog(domain, localedir)
_ = cat.gettext
print(_('hello world'))
```

For compatibility with this older module, the function `Catalog()` is an alias for the [`translation()`](#gettext.translation "gettext.translation") function described above.
One difference between this module and Henstridge’s: his catalog objects supported access through a mapping API, but this appears to be unused and so is not currently supported.

Internationalizing your programs and modules
--------------------------------------------

Internationalization (I18N) refers to the operation by which a program is made aware of multiple languages. Localization (L10N) refers to the adaptation of your program, once internationalized, to the local language and cultural habits. In order to provide multilingual messages for your Python programs, you need to take the following steps:

1. prepare your program or module by specially marking translatable strings
2. run a suite of tools over your marked files to generate raw message catalogs
3. create language-specific translations of the message catalogs
4. use the [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module so that message strings are properly translated

In order to prepare your code for I18N, you need to look at all the strings in your files. Any string that needs to be translated should be marked by wrapping it in `_('...')` — that is, a call to the function `_()`. For example:

```
filename = 'mylog.txt'
message = _('writing a log message')
with open(filename, 'w') as fp:
    fp.write(message)
```

In this example, the string `'writing a log message'` is marked as a candidate for translation, while the strings `'mylog.txt'` and `'w'` are not.

There are a few tools to extract the strings meant for translation. The original GNU **gettext** only supported C or C++ source code but its extended version **xgettext** scans code written in a number of languages, including Python, to find strings marked as translatable. [Babel](http://babel.pocoo.org/) is a Python internationalization library that includes a `pybabel` script to extract and compile message catalogs. François Pinard’s program called **xpot** does a similar job and is available as part of his [po-utils package](https://github.com/pinard/po-utils).

(Python also includes pure-Python versions of these programs, called **pygettext.py** and **msgfmt.py**; some Python distributions will install them for you. **pygettext.py** is similar to **xgettext**, but only understands Python source code and cannot handle other programming languages such as C or C++. **pygettext.py** supports a command-line interface similar to **xgettext**; for details on its use, run `pygettext.py --help`. **msgfmt.py** is binary compatible with GNU **msgfmt**. With these two programs, you may not need the GNU **gettext** package to internationalize your Python applications.)

**xgettext**, **pygettext**, and similar tools generate `.po` files that are message catalogs. They are structured human-readable files that contain every marked string in the source code, along with a placeholder for the translated versions of these strings.

Copies of these `.po` files are then handed over to the individual human translators who write translations for every supported natural language. They send back the completed language-specific versions as a `<language-name>.po` file that’s compiled into a machine-readable `.mo` binary catalog file using the **msgfmt** program. The `.mo` files are used by the [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module for the actual translation processing at run-time.
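To tie the pieces together, here is a minimal, hedged sketch of loading such a compiled catalog at run time; the `myapp` domain, the `locale` directory, and the German catalog are assumptions for illustration only:

```
import gettext

# Assumes locale/de/LC_MESSAGES/myapp.mo exists (compiled with msgfmt).
t = gettext.translation('myapp', localedir='locale', languages=['de'])
_ = t.gettext
print(_('writing a log message'))  # the German translation, if the catalog has one
```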
How you use the [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module in your code depends on whether you are internationalizing a single module or your entire application. The next two sections will discuss each case.

### Localizing your module

If you are localizing your module, you must take care not to make global changes, e.g. to the built-in namespace. You should not use the GNU **gettext** API but instead the class-based API.

Let’s say your module is called “spam” and the module’s various natural language translation `.mo` files reside in `/usr/share/locale` in GNU **gettext** format. Here’s what you would put at the top of your module:

```
import gettext
t = gettext.translation('spam', '/usr/share/locale')
_ = t.gettext
```

### Localizing your application

If you are localizing your application, you can install the `_()` function globally into the built-in namespace, usually in the main driver file of your application. This will let all your application-specific files just use `_('...')` without having to explicitly install it in each file.

In the simple case, then, you need only add the following bit of code to the main driver file of your application:

```
import gettext
gettext.install('myapplication')
```

If you need to set the locale directory, you can pass it into the [`install()`](#gettext.install "gettext.install") function:

```
import gettext
gettext.install('myapplication', '/usr/share/locale')
```

### Changing languages on the fly

If your program needs to support many languages at the same time, you may want to create multiple translation instances and then switch between them explicitly, like so:

```
import gettext

lang1 = gettext.translation('myapplication', languages=['en'])
lang2 = gettext.translation('myapplication', languages=['fr'])
lang3 = gettext.translation('myapplication', languages=['de'])

# start by using lang1
lang1.install()

# ... time goes by, user selects language 2
lang2.install()

# ... more time goes by, user selects language 3
lang3.install()
```

### Deferred translations

In most coding situations, strings are translated where they are coded. Occasionally, however, you need to mark strings for translation, but defer actual translation until later. A classic example is:

```
animals = ['mollusk',
           'albatross',
           'rat',
           'penguin',
           'python', ]
# ...
for a in animals:
    print(a)
```

Here, you want to mark the strings in the `animals` list as being translatable, but you don’t actually want to translate them until they are printed.

Here is one way you can handle this situation:

```
def _(message): return message

animals = [_('mollusk'),
           _('albatross'),
           _('rat'),
           _('penguin'),
           _('python'), ]

del _

# ...
for a in animals:
    print(_(a))
```

This works because the dummy definition of `_()` simply returns the string unchanged. And this dummy definition will temporarily override any definition of `_()` in the built-in namespace (until the [`del`](../reference/simple_stmts#del) command). Take care, though, if you have a previous definition of `_()` in the local namespace.

Note that the second use of `_()` will not identify “a” as being translatable to the **gettext** program, because the parameter is not a string literal.

Another way to handle this is with the following example:

```
def N_(message): return message

animals = [N_('mollusk'),
           N_('albatross'),
           N_('rat'),
           N_('penguin'),
           N_('python'), ]

# ...
for a in animals:
    print(_(a))
```

In this case, you are marking translatable strings with the function `N_()`, which won’t conflict with any definition of `_()`. However, you will need to teach your message extraction program to look for translatable strings marked with `N_()`. **xgettext**, **pygettext**, `pybabel extract`, and **xpot** all support this through the use of the `-k` command-line switch. The choice of `N_()` here is totally arbitrary; it could just as easily have been `MarkThisStringForTranslation()`.

Acknowledgements
----------------

The following people contributed code, feedback, design suggestions, previous implementations, and valuable experience to the creation of this module:

* Peter Funk
* James Henstridge
* Juan David Ibáñez Palomar
* Marc-André Lemburg
* Martin von Löwis
* François Pinard
* Barry Warsaw
* Gustavo Niemeyer

#### Footnotes

`1` The default locale directory is system dependent; for example, on RedHat Linux it is `/usr/share/locale`, but on Solaris it is `/usr/lib/locale`. The [`gettext`](#module-gettext "gettext: Multilingual internationalization services.") module does not try to support these system dependent defaults; instead its default is `sys.base_prefix/share/locale` (see [`sys.base_prefix`](sys#sys.base_prefix "sys.base_prefix")). For this reason, it is always best to call [`bindtextdomain()`](#gettext.bindtextdomain "gettext.bindtextdomain") with an explicit absolute path at the start of your application.

`2` See the footnote for [`bindtextdomain()`](#gettext.bindtextdomain "gettext.bindtextdomain") above.
python webbrowser — Convenient Web-browser controller

webbrowser — Convenient Web-browser controller
==============================================

**Source code:** [Lib/webbrowser.py](https://github.com/python/cpython/tree/3.9/Lib/webbrowser.py)

The [`webbrowser`](#module-webbrowser "webbrowser: Easy-to-use controller for Web browsers.") module provides a high-level interface to allow displaying Web-based documents to users. Under most circumstances, simply calling the [`open()`](#webbrowser.open "webbrowser.open") function from this module will do the right thing.

Under Unix, graphical browsers are preferred under X11, but text-mode browsers will be used if graphical browsers are not available or an X11 display isn’t available. If text-mode browsers are used, the calling process will block until the user exits the browser.

If the environment variable `BROWSER` exists, it is interpreted as the [`os.pathsep`](os#os.pathsep "os.pathsep")-separated list of browsers to try ahead of the platform defaults. When the value of a list part contains the string `%s`, then it is interpreted as a literal browser command line to be used with the argument URL substituted for `%s`; if the part does not contain `%s`, it is simply interpreted as the name of the browser to launch. [1](#id2)

For non-Unix platforms, or when a remote browser is available on Unix, the controlling process will not wait for the user to finish with the browser, but allow the remote browser to maintain its own windows on the display. If remote browsers are not available on Unix, the controlling process will launch a new browser and wait.

The script **webbrowser** can be used as a command-line interface for the module. It accepts a URL as the argument and the following optional parameters: `-n` opens the URL in a new browser window, if possible; `-t` opens the URL in a new browser page (“tab”). The options are, naturally, mutually exclusive. Usage example:

```
python -m webbrowser -t "https://www.python.org"
```

The following exception is defined:

`exception webbrowser.Error` Exception raised when a browser control error occurs.

The following functions are defined:

`webbrowser.open(url, new=0, autoraise=True)` Display *url* using the default browser. If *new* is 0, the *url* is opened in the same browser window if possible. If *new* is 1, a new browser window is opened if possible. If *new* is 2, a new browser page (“tab”) is opened if possible. If *autoraise* is `True`, the window is raised if possible (note that under many window managers this will occur regardless of the setting of this variable).

Note that on some platforms, trying to open a filename using this function may work and start the operating system’s associated program. However, this is neither supported nor portable.

Raises an [auditing event](sys#auditing) `webbrowser.open` with argument `url`.

`webbrowser.open_new(url)` Open *url* in a new window of the default browser, if possible, otherwise, open *url* in the only browser window.

`webbrowser.open_new_tab(url)` Open *url* in a new page (“tab”) of the default browser, if possible, otherwise equivalent to [`open_new()`](#webbrowser.open_new "webbrowser.open_new").

`webbrowser.get(using=None)` Return a controller object for the browser type *using*. If *using* is `None`, return a controller for a default browser appropriate to the caller’s environment.

`webbrowser.register(name, constructor, instance=None, *, preferred=False)` Register the browser type *name*.
Once a browser type is registered, the [`get()`](#webbrowser.get "webbrowser.get") function can return a controller for that browser type. If *instance* is not provided, or is `None`, *constructor* will be called without parameters to create an instance when needed. If *instance* is provided, *constructor* will never be called, and may be `None`.

Setting *preferred* to `True` makes this browser a preferred result for a [`get()`](#webbrowser.get "webbrowser.get") call with no argument. Otherwise, this entry point is only useful if you plan to either set the `BROWSER` variable or call [`get()`](#webbrowser.get "webbrowser.get") with a nonempty argument matching the name of a handler you declare.

Changed in version 3.7: *preferred* keyword-only parameter was added.

A number of browser types are predefined. This table gives the type names that may be passed to the [`get()`](#webbrowser.get "webbrowser.get") function and the corresponding instantiations for the controller classes, all defined in this module.

| Type Name | Class Name | Notes |
| --- | --- | --- |
| `'mozilla'` | `Mozilla('mozilla')` | |
| `'firefox'` | `Mozilla('mozilla')` | |
| `'netscape'` | `Mozilla('netscape')` | |
| `'galeon'` | `Galeon('galeon')` | |
| `'epiphany'` | `Galeon('epiphany')` | |
| `'skipstone'` | `BackgroundBrowser('skipstone')` | |
| `'kfmclient'` | `Konqueror()` | (1) |
| `'konqueror'` | `Konqueror()` | (1) |
| `'kfm'` | `Konqueror()` | (1) |
| `'mosaic'` | `BackgroundBrowser('mosaic')` | |
| `'opera'` | `Opera()` | |
| `'grail'` | `Grail()` | |
| `'links'` | `GenericBrowser('links')` | |
| `'elinks'` | `Elinks('elinks')` | |
| `'lynx'` | `GenericBrowser('lynx')` | |
| `'w3m'` | `GenericBrowser('w3m')` | |
| `'windows-default'` | `WindowsDefault` | (2) |
| `'macosx'` | `MacOSXOSAScript('default')` | (3) |
| `'safari'` | `MacOSXOSAScript('safari')` | (3) |
| `'google-chrome'` | `Chrome('google-chrome')` | |
| `'chrome'` | `Chrome('chrome')` | |
| `'chromium'` | `Chromium('chromium')` | |
| `'chromium-browser'` | `Chromium('chromium-browser')` | |

Notes:

1. “Konqueror” is the file manager for the KDE desktop environment for Unix, and only makes sense to use if KDE is running. Some way of reliably detecting KDE would be nice; the `KDEDIR` variable is not sufficient. Note also that the name “kfm” is used even when using the **konqueror** command with KDE 2 — the implementation selects the best strategy for running Konqueror.
2. Only on Windows platforms.
3. Only on macOS platform.

New in version 3.3: Support for Chrome/Chromium has been added.

Here are some simple examples:

```
import webbrowser

url = 'https://docs.python.org/'

# Open URL in a new tab, if a browser window is already open.
webbrowser.open_new_tab(url)

# Open URL in new window, raising the window if possible.
webbrowser.open_new(url)
```

Browser Controller Objects
--------------------------

Browser controllers provide these methods which parallel three of the module-level convenience functions:

`controller.open(url, new=0, autoraise=True)` Display *url* using the browser handled by this controller. If *new* is 1, a new browser window is opened if possible. If *new* is 2, a new browser page (“tab”) is opened if possible.

`controller.open_new(url)` Open *url* in a new window of the browser handled by this controller, if possible, otherwise, open *url* in the only browser window. Alias [`open_new()`](#webbrowser.open_new "webbrowser.open_new").
`controller.open_new_tab(url)` Open *url* in a new page (“tab”) of the browser handled by this controller, if possible, otherwise equivalent to [`open_new()`](#webbrowser.open_new "webbrowser.open_new"). #### Footnotes `1` Executables named here without a full path will be searched in the directories given in the `PATH` environment variable. python collections.abc — Abstract Base Classes for Containers collections.abc — Abstract Base Classes for Containers ====================================================== New in version 3.3: Formerly, this module was part of the [`collections`](collections#module-collections "collections: Container datatypes") module. **Source code:** [Lib/\_collections\_abc.py](https://github.com/python/cpython/tree/3.9/Lib/_collections_abc.py) This module provides [abstract base classes](../glossary#term-abstract-base-class) that can be used to test whether a class provides a particular interface; for example, whether it is hashable or whether it is a mapping. New in version 3.9: These abstract classes now support `[]`. See [Generic Alias Type](stdtypes#types-genericalias) and [**PEP 585**](https://www.python.org/dev/peps/pep-0585). Collections Abstract Base Classes --------------------------------- The collections module offers the following [ABCs](../glossary#term-abstract-base-class): | ABC | Inherits from | Abstract Methods | Mixin Methods | | --- | --- | --- | --- | | [`Container`](#collections.abc.Container "collections.abc.Container") | | `__contains__` | | | [`Hashable`](#collections.abc.Hashable "collections.abc.Hashable") | | `__hash__` | | | [`Iterable`](#collections.abc.Iterable "collections.abc.Iterable") | | `__iter__` | | | [`Iterator`](#collections.abc.Iterator "collections.abc.Iterator") | [`Iterable`](#collections.abc.Iterable "collections.abc.Iterable") | `__next__` | `__iter__` | | [`Reversible`](#collections.abc.Reversible "collections.abc.Reversible") | [`Iterable`](#collections.abc.Iterable "collections.abc.Iterable") | `__reversed__` | | | [`Generator`](#collections.abc.Generator "collections.abc.Generator") | [`Iterator`](#collections.abc.Iterator "collections.abc.Iterator") | `send`, `throw` | `close`, `__iter__`, `__next__` | | [`Sized`](#collections.abc.Sized "collections.abc.Sized") | | `__len__` | | | [`Callable`](#collections.abc.Callable "collections.abc.Callable") | | `__call__` | | | [`Collection`](#collections.abc.Collection "collections.abc.Collection") | [`Sized`](#collections.abc.Sized "collections.abc.Sized"), [`Iterable`](#collections.abc.Iterable "collections.abc.Iterable"), [`Container`](#collections.abc.Container "collections.abc.Container") | `__contains__`, `__iter__`, `__len__` | | | [`Sequence`](#collections.abc.Sequence "collections.abc.Sequence") | [`Reversible`](#collections.abc.Reversible "collections.abc.Reversible"), [`Collection`](#collections.abc.Collection "collections.abc.Collection") | `__getitem__`, `__len__` | `__contains__`, `__iter__`, `__reversed__`, `index`, and `count` | | [`MutableSequence`](#collections.abc.MutableSequence "collections.abc.MutableSequence") | [`Sequence`](#collections.abc.Sequence "collections.abc.Sequence") | `__getitem__`, `__setitem__`, `__delitem__`, `__len__`, `insert` | Inherited [`Sequence`](#collections.abc.Sequence "collections.abc.Sequence") methods and `append`, `reverse`, `extend`, `pop`, `remove`, and `__iadd__` | | [`ByteString`](#collections.abc.ByteString "collections.abc.ByteString") | [`Sequence`](#collections.abc.Sequence "collections.abc.Sequence") | 
`__getitem__`, `__len__` | Inherited [`Sequence`](#collections.abc.Sequence "collections.abc.Sequence") methods | | [`Set`](#collections.abc.Set "collections.abc.Set") | [`Collection`](#collections.abc.Collection "collections.abc.Collection") | `__contains__`, `__iter__`, `__len__` | `__le__`, `__lt__`, `__eq__`, `__ne__`, `__gt__`, `__ge__`, `__and__`, `__or__`, `__sub__`, `__xor__`, and `isdisjoint` | | [`MutableSet`](#collections.abc.MutableSet "collections.abc.MutableSet") | [`Set`](#collections.abc.Set "collections.abc.Set") | `__contains__`, `__iter__`, `__len__`, `add`, `discard` | Inherited [`Set`](#collections.abc.Set "collections.abc.Set") methods and `clear`, `pop`, `remove`, `__ior__`, `__iand__`, `__ixor__`, and `__isub__` | | [`Mapping`](#collections.abc.Mapping "collections.abc.Mapping") | [`Collection`](#collections.abc.Collection "collections.abc.Collection") | `__getitem__`, `__iter__`, `__len__` | `__contains__`, `keys`, `items`, `values`, `get`, `__eq__`, and `__ne__` | | [`MutableMapping`](#collections.abc.MutableMapping "collections.abc.MutableMapping") | [`Mapping`](#collections.abc.Mapping "collections.abc.Mapping") | `__getitem__`, `__setitem__`, `__delitem__`, `__iter__`, `__len__` | Inherited [`Mapping`](#collections.abc.Mapping "collections.abc.Mapping") methods and `pop`, `popitem`, `clear`, `update`, and `setdefault` | | [`MappingView`](#collections.abc.MappingView "collections.abc.MappingView") | [`Sized`](#collections.abc.Sized "collections.abc.Sized") | | `__len__` | | [`ItemsView`](#collections.abc.ItemsView "collections.abc.ItemsView") | [`MappingView`](#collections.abc.MappingView "collections.abc.MappingView"), [`Set`](#collections.abc.Set "collections.abc.Set") | | `__contains__`, `__iter__` | | [`KeysView`](#collections.abc.KeysView "collections.abc.KeysView") | [`MappingView`](#collections.abc.MappingView "collections.abc.MappingView"), [`Set`](#collections.abc.Set "collections.abc.Set") | | `__contains__`, `__iter__` | | [`ValuesView`](#collections.abc.ValuesView "collections.abc.ValuesView") | [`MappingView`](#collections.abc.MappingView "collections.abc.MappingView"), [`Collection`](#collections.abc.Collection "collections.abc.Collection") | | `__contains__`, `__iter__` | | [`Awaitable`](#collections.abc.Awaitable "collections.abc.Awaitable") | | `__await__` | | | [`Coroutine`](#collections.abc.Coroutine "collections.abc.Coroutine") | [`Awaitable`](#collections.abc.Awaitable "collections.abc.Awaitable") | `send`, `throw` | `close` | | [`AsyncIterable`](#collections.abc.AsyncIterable "collections.abc.AsyncIterable") | | `__aiter__` | | | [`AsyncIterator`](#collections.abc.AsyncIterator "collections.abc.AsyncIterator") | [`AsyncIterable`](#collections.abc.AsyncIterable "collections.abc.AsyncIterable") | `__anext__` | `__aiter__` | | [`AsyncGenerator`](#collections.abc.AsyncGenerator "collections.abc.AsyncGenerator") | [`AsyncIterator`](#collections.abc.AsyncIterator "collections.abc.AsyncIterator") | `asend`, `athrow` | `aclose`, `__aiter__`, `__anext__` | `class collections.abc.Container` ABC for classes that provide the [`__contains__()`](../reference/datamodel#object.__contains__ "object.__contains__") method. `class collections.abc.Hashable` ABC for classes that provide the [`__hash__()`](../reference/datamodel#object.__hash__ "object.__hash__") method. `class collections.abc.Sized` ABC for classes that provide the [`__len__()`](../reference/datamodel#object.__len__ "object.__len__") method. 
`class collections.abc.Callable` ABC for classes that provide the [`__call__()`](../reference/datamodel#object.__call__ "object.__call__") method. `class collections.abc.Iterable` ABC for classes that provide the [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__") method. Checking `isinstance(obj, Iterable)` detects classes that are registered as [`Iterable`](#collections.abc.Iterable "collections.abc.Iterable") or that have an [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__") method, but it does not detect classes that iterate with the [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__") method. The only reliable way to determine whether an object is [iterable](../glossary#term-iterable) is to call `iter(obj)`. `class collections.abc.Collection` ABC for sized iterable container classes. New in version 3.6. `class collections.abc.Iterator` ABC for classes that provide the [`__iter__()`](stdtypes#iterator.__iter__ "iterator.__iter__") and [`__next__()`](stdtypes#iterator.__next__ "iterator.__next__") methods. See also the definition of [iterator](../glossary#term-iterator). `class collections.abc.Reversible` ABC for iterable classes that also provide the [`__reversed__()`](../reference/datamodel#object.__reversed__ "object.__reversed__") method. New in version 3.6. `class collections.abc.Generator` ABC for generator classes that implement the protocol defined in [**PEP 342**](https://www.python.org/dev/peps/pep-0342) that extends iterators with the [`send()`](../reference/expressions#generator.send "generator.send"), [`throw()`](../reference/expressions#generator.throw "generator.throw") and [`close()`](../reference/expressions#generator.close "generator.close") methods. See also the definition of [generator](../glossary#term-generator). New in version 3.5. `class collections.abc.Sequence` `class collections.abc.MutableSequence` `class collections.abc.ByteString` ABCs for read-only and mutable [sequences](../glossary#term-sequence). Implementation note: Some of the mixin methods, such as [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__"), [`__reversed__()`](../reference/datamodel#object.__reversed__ "object.__reversed__") and `index()`, make repeated calls to the underlying [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__") method. Consequently, if [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__") is implemented with constant access speed, the mixin methods will have linear performance; however, if the underlying method is linear (as it would be with a linked list), the mixins will have quadratic performance and will likely need to be overridden. Changed in version 3.5: The index() method added support for *stop* and *start* arguments. `class collections.abc.Set` `class collections.abc.MutableSet` ABCs for read-only and mutable sets. `class collections.abc.Mapping` `class collections.abc.MutableMapping` ABCs for read-only and mutable [mappings](../glossary#term-mapping). `class collections.abc.MappingView` `class collections.abc.ItemsView` `class collections.abc.KeysView` `class collections.abc.ValuesView` ABCs for mapping, items, keys, and values [views](../glossary#term-dictionary-view). `class collections.abc.Awaitable` ABC for [awaitable](../glossary#term-awaitable) objects, which can be used in [`await`](../reference/expressions#await) expressions. 
Custom implementations must provide the [`__await__()`](../reference/datamodel#object.__await__ "object.__await__") method.

[Coroutine](../glossary#term-coroutine) objects and instances of the [`Coroutine`](#collections.abc.Coroutine "collections.abc.Coroutine") ABC are all instances of this ABC.

Note In CPython, generator-based coroutines (generators decorated with [`types.coroutine()`](types#types.coroutine "types.coroutine") or [`asyncio.coroutine()`](asyncio-task#asyncio.coroutine "asyncio.coroutine")) are *awaitables*, even though they do not have an [`__await__()`](../reference/datamodel#object.__await__ "object.__await__") method. Using `isinstance(gencoro, Awaitable)` for them will return `False`. Use [`inspect.isawaitable()`](inspect#inspect.isawaitable "inspect.isawaitable") to detect them.

New in version 3.5.

`class collections.abc.Coroutine` ABC for coroutine-compatible classes. These implement the following methods, defined in [Coroutine Objects](../reference/datamodel#coroutine-objects): [`send()`](../reference/datamodel#coroutine.send "coroutine.send"), [`throw()`](../reference/datamodel#coroutine.throw "coroutine.throw"), and [`close()`](../reference/datamodel#coroutine.close "coroutine.close"). Custom implementations must also implement [`__await__()`](../reference/datamodel#object.__await__ "object.__await__"). All [`Coroutine`](#collections.abc.Coroutine "collections.abc.Coroutine") instances are also instances of [`Awaitable`](#collections.abc.Awaitable "collections.abc.Awaitable"). See also the definition of [coroutine](../glossary#term-coroutine).

Note In CPython, generator-based coroutines (generators decorated with [`types.coroutine()`](types#types.coroutine "types.coroutine") or [`asyncio.coroutine()`](asyncio-task#asyncio.coroutine "asyncio.coroutine")) are *awaitables*, even though they do not have an [`__await__()`](../reference/datamodel#object.__await__ "object.__await__") method. Using `isinstance(gencoro, Coroutine)` for them will return `False`. Use [`inspect.isawaitable()`](inspect#inspect.isawaitable "inspect.isawaitable") to detect them.

New in version 3.5.

`class collections.abc.AsyncIterable` ABC for classes that provide the `__aiter__` method. See also the definition of [asynchronous iterable](../glossary#term-asynchronous-iterable).

New in version 3.5.

`class collections.abc.AsyncIterator` ABC for classes that provide the `__aiter__` and `__anext__` methods. See also the definition of [asynchronous iterator](../glossary#term-asynchronous-iterator).

New in version 3.5.

`class collections.abc.AsyncGenerator` ABC for asynchronous generator classes that implement the protocol defined in [**PEP 525**](https://www.python.org/dev/peps/pep-0525) and [**PEP 492**](https://www.python.org/dev/peps/pep-0492).

New in version 3.6.

These ABCs allow us to ask classes or instances if they provide particular functionality, for example:

```
size = None
if isinstance(myvar, collections.abc.Sized):
    size = len(myvar)
```

Several of the ABCs are also useful as mixins that make it easier to develop classes supporting container APIs. For example, to write a class supporting the full [`Set`](#collections.abc.Set "collections.abc.Set") API, it is only necessary to supply the three underlying abstract methods: [`__contains__()`](../reference/datamodel#object.__contains__ "object.__contains__"), [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__"), and [`__len__()`](../reference/datamodel#object.__len__ "object.__len__").
The ABC supplies the remaining methods such as [`__and__()`](../reference/datamodel#object.__and__ "object.__and__") and `isdisjoint()`:

```
class ListBasedSet(collections.abc.Set):
    ''' Alternate set implementation favoring space over speed
        and not requiring the set elements to be hashable. '''
    def __init__(self, iterable):
        self.elements = lst = []
        for value in iterable:
            if value not in lst:
                lst.append(value)

    def __iter__(self):
        return iter(self.elements)

    def __contains__(self, value):
        return value in self.elements

    def __len__(self):
        return len(self.elements)

s1 = ListBasedSet('abcdef')
s2 = ListBasedSet('defghi')
overlap = s1 & s2            # The __and__() method is supported automatically
```

Notes on using [`Set`](#collections.abc.Set "collections.abc.Set") and [`MutableSet`](#collections.abc.MutableSet "collections.abc.MutableSet") as a mixin:

1. Since some set operations create new sets, the default mixin methods need a way to create new instances from an iterable. The class constructor is assumed to have a signature in the form `ClassName(iterable)`. That assumption is factored out to an internal classmethod called `_from_iterable()` which calls `cls(iterable)` to produce a new set. If the [`Set`](#collections.abc.Set "collections.abc.Set") mixin is being used in a class with a different constructor signature, you will need to override `_from_iterable()` with a classmethod or regular method that can construct new instances from an iterable argument.
2. To override the comparisons (presumably for speed, as the semantics are fixed), redefine [`__le__()`](../reference/datamodel#object.__le__ "object.__le__") and [`__ge__()`](../reference/datamodel#object.__ge__ "object.__ge__"), then the other operations will automatically follow suit.
3. The [`Set`](#collections.abc.Set "collections.abc.Set") mixin provides a `_hash()` method to compute a hash value for the set; however, [`__hash__()`](../reference/datamodel#object.__hash__ "object.__hash__") is not defined because not all sets are hashable or immutable. To add set hashability using mixins, inherit from both [`Set()`](#collections.abc.Set "collections.abc.Set") and [`Hashable()`](#collections.abc.Hashable "collections.abc.Hashable"), then define `__hash__ = Set._hash`.

See also

* [OrderedSet recipe](https://code.activestate.com/recipes/576694/) for an example built on [`MutableSet`](#collections.abc.MutableSet "collections.abc.MutableSet").
* For more about ABCs, see the [`abc`](abc#module-abc "abc: Abstract base classes according to :pep:`3119`.") module and [**PEP 3119**](https://www.python.org/dev/peps/pep-3119).
python zipimport — Import modules from Zip archives zipimport — Import modules from Zip archives ============================================ **Source code:** [Lib/zipimport.py](https://github.com/python/cpython/tree/3.9/Lib/zipimport.py) This module adds the ability to import Python modules (`*.py`, `*.pyc`) and packages from ZIP-format archives. It is usually not needed to use the [`zipimport`](#module-zipimport "zipimport: Support for importing Python modules from ZIP archives.") module explicitly; it is automatically used by the built-in [`import`](../reference/simple_stmts#import) mechanism for [`sys.path`](sys#sys.path "sys.path") items that are paths to ZIP archives. Typically, [`sys.path`](sys#sys.path "sys.path") is a list of directory names as strings. This module also allows an item of [`sys.path`](sys#sys.path "sys.path") to be a string naming a ZIP file archive. The ZIP archive can contain a subdirectory structure to support package imports, and a path within the archive can be specified to only import from a subdirectory. For example, the path `example.zip/lib/` would only import from the `lib/` subdirectory within the archive. Any files may be present in the ZIP archive, but importers are only invoked for `.py` and `.pyc` files. ZIP import of dynamic modules (`.pyd`, `.so`) is disallowed. Note that if an archive only contains `.py` files, Python will not attempt to modify the archive by adding the corresponding `.pyc` file, meaning that if a ZIP archive doesn’t contain `.pyc` files, importing may be rather slow. Changed in version 3.8: Previously, ZIP archives with an archive comment were not supported. See also [PKZIP Application Note](https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT) Documentation on the ZIP file format by Phil Katz, the creator of the format and algorithms used. [**PEP 273**](https://www.python.org/dev/peps/pep-0273) - Import Modules from Zip Archives Written by James C. Ahlstrom, who also provided an implementation. Python 2.3 follows the specification in [**PEP 273**](https://www.python.org/dev/peps/pep-0273), but uses an implementation written by Just van Rossum that uses the import hooks described in [**PEP 302**](https://www.python.org/dev/peps/pep-0302). [**PEP 302**](https://www.python.org/dev/peps/pep-0302) - New Import Hooks The PEP to add the import hooks that help this module work. This module defines an exception: `exception zipimport.ZipImportError` Exception raised by zipimporter objects. It’s a subclass of [`ImportError`](exceptions#ImportError "ImportError"), so it can be caught as [`ImportError`](exceptions#ImportError "ImportError"), too. zipimporter Objects ------------------- [`zipimporter`](#zipimport.zipimporter "zipimport.zipimporter") is the class for importing ZIP files. `class zipimport.zipimporter(archivepath)` Create a new zipimporter instance. *archivepath* must be a path to a ZIP file, or to a specific path within a ZIP file. For example, an *archivepath* of `foo/bar.zip/lib` will look for modules in the `lib` directory inside the ZIP file `foo/bar.zip` (provided that it exists). [`ZipImportError`](#zipimport.ZipImportError "zipimport.ZipImportError") is raised if *archivepath* doesn’t point to a valid ZIP archive. `find_module(fullname[, path])` Search for a module specified by *fullname*. *fullname* must be the fully qualified (dotted) module name. It returns the zipimporter instance itself if the module was found, or [`None`](constants#None "None") if it wasn’t. 
The optional *path* argument is ignored—it’s there for compatibility with the importer protocol.

`get_code(fullname)` Return the code object for the specified module. Raise [`ZipImportError`](#zipimport.ZipImportError "zipimport.ZipImportError") if the module couldn’t be found.

`get_data(pathname)` Return the data associated with *pathname*. Raise [`OSError`](exceptions#OSError "OSError") if the file wasn’t found.

Changed in version 3.3: [`IOError`](exceptions#IOError "IOError") used to be raised instead of [`OSError`](exceptions#OSError "OSError").

`get_filename(fullname)` Return the value `__file__` would be set to if the specified module was imported. Raise [`ZipImportError`](#zipimport.ZipImportError "zipimport.ZipImportError") if the module couldn’t be found.

New in version 3.1.

`get_source(fullname)` Return the source code for the specified module. Raise [`ZipImportError`](#zipimport.ZipImportError "zipimport.ZipImportError") if the module couldn’t be found, return [`None`](constants#None "None") if the archive does contain the module, but has no source for it.

`is_package(fullname)` Return `True` if the module specified by *fullname* is a package. Raise [`ZipImportError`](#zipimport.ZipImportError "zipimport.ZipImportError") if the module couldn’t be found.

`load_module(fullname)` Load the module specified by *fullname*. *fullname* must be the fully qualified (dotted) module name. It returns the imported module, or raises [`ZipImportError`](#zipimport.ZipImportError "zipimport.ZipImportError") if it wasn’t found.

`archive` The file name of the importer’s associated ZIP file, without a possible subpath.

`prefix` The subpath within the ZIP file where modules are searched. This is the empty string for zipimporter objects which point to the root of the ZIP file.

The [`archive`](#zipimport.zipimporter.archive "zipimport.zipimporter.archive") and [`prefix`](#zipimport.zipimporter.prefix "zipimport.zipimporter.prefix") attributes, when combined with a slash, equal the original *archivepath* argument given to the [`zipimporter`](#zipimport.zipimporter "zipimport.zipimporter") constructor.

Examples
--------

Here is an example that imports a module from a ZIP archive - note that the [`zipimport`](#module-zipimport "zipimport: Support for importing Python modules from ZIP archives.") module is not explicitly used.

```
$ unzip -l example.zip
Archive:  example.zip
  Length     Date   Time    Name
 --------    ----   ----    ----
     8467  11-26-02 22:30   jwzthreading.py
 --------                   -------
     8467                   1 file
$ ./python
Python 2.3 (#1, Aug 1 2003, 19:54:32)
>>> import sys
>>> sys.path.insert(0, 'example.zip')  # Add .zip file to front of path
>>> import jwzthreading
>>> jwzthreading.__file__
'example.zip/jwzthreading.py'
```

python decimal — Decimal fixed point and floating point arithmetic

decimal — Decimal fixed point and floating point arithmetic
===========================================================

**Source code:** [Lib/decimal.py](https://github.com/python/cpython/tree/3.9/Lib/decimal.py)

The [`decimal`](#module-decimal "decimal: Implementation of the General Decimal Arithmetic Specification.") module provides support for fast correctly-rounded decimal floating point arithmetic.
It offers several advantages over the [`float`](functions#float "float") datatype: * Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.” – excerpt from the decimal arithmetic specification. * Decimal numbers can be represented exactly. In contrast, numbers like `1.1` and `2.2` do not have exact representations in binary floating point. End users typically would not expect `1.1 + 2.2` to display as `3.3000000000000003` as it does with binary floating point. * The exactness carries over into arithmetic. In decimal floating point, `0.1 + 0.1 + 0.1 - 0.3` is exactly equal to zero. In binary floating point, the result is `5.5511151231257827e-017`. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants. * The decimal module incorporates a notion of significant places so that `1.30 + 1.20` is `2.50`. The trailing zero is kept to indicate significance. This is the customary presentation for monetary applications. For multiplication, the “schoolbook” approach uses all the figures in the multiplicands. For instance, `1.3 * 1.2` gives `1.56` while `1.30 * 1.20` gives `1.5600`. * Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem: ``` >>> from decimal import * >>> getcontext().prec = 6 >>> Decimal(1) / Decimal(7) Decimal('0.142857') >>> getcontext().prec = 28 >>> Decimal(1) / Decimal(7) Decimal('0.1428571428571428571428571429') ``` * Both binary and decimal floating point are implemented in terms of published standards. While the built-in float type exposes only a modest portion of its capabilities, the decimal module exposes all required parts of the standard. When needed, the programmer has full control over rounding and signal handling. This includes an option to enforce exact arithmetic by using exceptions to block any inexact operations. * The decimal module was designed to support “without prejudice, both exact unrounded decimal arithmetic (sometimes called fixed-point arithmetic) and rounded floating-point arithmetic.” – excerpt from the decimal arithmetic specification. The module design is centered around three concepts: the decimal number, the context for arithmetic, and signals. A decimal number is immutable. It has a sign, coefficient digits, and an exponent. To preserve significance, the coefficient digits do not truncate trailing zeros. Decimals also include special values such as `Infinity`, `-Infinity`, and `NaN`. The standard also differentiates `-0` from `+0`. The context for arithmetic is an environment specifying precision, rounding rules, limits on exponents, flags indicating the results of operations, and trap enablers which determine whether signals are treated as exceptions. 
Rounding options include [`ROUND_CEILING`](#decimal.ROUND_CEILING "decimal.ROUND_CEILING"), [`ROUND_DOWN`](#decimal.ROUND_DOWN "decimal.ROUND_DOWN"), [`ROUND_FLOOR`](#decimal.ROUND_FLOOR "decimal.ROUND_FLOOR"), [`ROUND_HALF_DOWN`](#decimal.ROUND_HALF_DOWN "decimal.ROUND_HALF_DOWN"), [`ROUND_HALF_EVEN`](#decimal.ROUND_HALF_EVEN "decimal.ROUND_HALF_EVEN"), [`ROUND_HALF_UP`](#decimal.ROUND_HALF_UP "decimal.ROUND_HALF_UP"), [`ROUND_UP`](#decimal.ROUND_UP "decimal.ROUND_UP"), and [`ROUND_05UP`](#decimal.ROUND_05UP "decimal.ROUND_05UP"). Signals are groups of exceptional conditions arising during the course of computation. Depending on the needs of the application, signals may be ignored, considered as informational, or treated as exceptions. The signals in the decimal module are: [`Clamped`](#decimal.Clamped "decimal.Clamped"), [`InvalidOperation`](#decimal.InvalidOperation "decimal.InvalidOperation"), [`DivisionByZero`](#decimal.DivisionByZero "decimal.DivisionByZero"), [`Inexact`](#decimal.Inexact "decimal.Inexact"), [`Rounded`](#decimal.Rounded "decimal.Rounded"), [`Subnormal`](#decimal.Subnormal "decimal.Subnormal"), [`Overflow`](#decimal.Overflow "decimal.Overflow"), [`Underflow`](#decimal.Underflow "decimal.Underflow") and [`FloatOperation`](#decimal.FloatOperation "decimal.FloatOperation"). For each signal there is a flag and a trap enabler. When a signal is encountered, its flag is set to one, then, if the trap enabler is set to one, an exception is raised. Flags are sticky, so the user needs to reset them before monitoring a calculation. See also * IBM’s General Decimal Arithmetic Specification, [The General Decimal Arithmetic Specification](http://speleotrove.com/decimal/decarith.html). Quick-start Tutorial -------------------- The usual start to using decimals is importing the module, viewing the current context with [`getcontext()`](#decimal.getcontext "decimal.getcontext") and, if necessary, setting new values for precision, rounding, or enabled traps: ``` >>> from decimal import * >>> getcontext() Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[Overflow, DivisionByZero, InvalidOperation]) >>> getcontext().prec = 7 # Set a new precision ``` Decimal instances can be constructed from integers, strings, floats, or tuples. Construction from an integer or a float performs an exact conversion of the value of that integer or float. 
Decimal numbers include special values such as `NaN` which stands for “Not a number”, positive and negative `Infinity`, and `-0`: ``` >>> getcontext().prec = 28 >>> Decimal(10) Decimal('10') >>> Decimal('3.14') Decimal('3.14') >>> Decimal(3.14) Decimal('3.140000000000000124344978758017532527446746826171875') >>> Decimal((0, (3, 1, 4), -2)) Decimal('3.14') >>> Decimal(str(2.0 ** 0.5)) Decimal('1.4142135623730951') >>> Decimal(2) ** Decimal('0.5') Decimal('1.414213562373095048801688724') >>> Decimal('NaN') Decimal('NaN') >>> Decimal('-Infinity') Decimal('-Infinity') ``` If the [`FloatOperation`](#decimal.FloatOperation "decimal.FloatOperation") signal is trapped, accidental mixing of decimals and floats in constructors or ordering comparisons raises an exception: ``` >>> c = getcontext() >>> c.traps[FloatOperation] = True >>> Decimal(3.14) Traceback (most recent call last): File "<stdin>", line 1, in <module> decimal.FloatOperation: [<class 'decimal.FloatOperation'>] >>> Decimal('3.5') < 3.7 Traceback (most recent call last): File "<stdin>", line 1, in <module> decimal.FloatOperation: [<class 'decimal.FloatOperation'>] >>> Decimal('3.5') == 3.5 True ``` New in version 3.3. The significance of a new Decimal is determined solely by the number of digits input. Context precision and rounding only come into play during arithmetic operations. ``` >>> getcontext().prec = 6 >>> Decimal('3.0') Decimal('3.0') >>> Decimal('3.1415926535') Decimal('3.1415926535') >>> Decimal('3.1415926535') + Decimal('2.7182818285') Decimal('5.85987') >>> getcontext().rounding = ROUND_UP >>> Decimal('3.1415926535') + Decimal('2.7182818285') Decimal('5.85988') ``` If the internal limits of the C version are exceeded, constructing a decimal raises [`InvalidOperation`](#decimal.InvalidOperation "decimal.InvalidOperation"): ``` >>> Decimal("1e9999999999999999999") Traceback (most recent call last): File "<stdin>", line 1, in <module> decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>] ``` Changed in version 3.3. Decimals interact well with much of the rest of Python. Here is a small decimal floating point flying circus: ``` >>> data = list(map(Decimal, '1.34 1.87 3.45 2.35 1.00 0.03 9.25'.split())) >>> max(data) Decimal('9.25') >>> min(data) Decimal('0.03') >>> sorted(data) [Decimal('0.03'), Decimal('1.00'), Decimal('1.34'), Decimal('1.87'), Decimal('2.35'), Decimal('3.45'), Decimal('9.25')] >>> sum(data) Decimal('19.29') >>> a,b,c = data[:3] >>> str(a) '1.34' >>> float(a) 1.34 >>> round(a, 1) Decimal('1.3') >>> int(a) 1 >>> a * 5 Decimal('6.70') >>> a * b Decimal('2.5058') >>> c % a Decimal('0.77') ``` And some mathematical functions are also available to Decimal: ``` >>> getcontext().prec = 28 >>> Decimal(2).sqrt() Decimal('1.414213562373095048801688724') >>> Decimal(1).exp() Decimal('2.718281828459045235360287471') >>> Decimal('10').ln() Decimal('2.302585092994045684017991455') >>> Decimal('10').log10() Decimal('1') ``` The `quantize()` method rounds a number to a fixed exponent. This method is useful for monetary applications that often round results to a fixed number of places: ``` >>> Decimal('7.325').quantize(Decimal('.01'), rounding=ROUND_DOWN) Decimal('7.32') >>> Decimal('7.325').quantize(Decimal('1.'), rounding=ROUND_UP) Decimal('8') ``` As shown above, the [`getcontext()`](#decimal.getcontext "decimal.getcontext") function accesses the current context and allows the settings to be changed. This approach meets the needs of most applications. 
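When a setting should apply only to a block of code, the module’s [`localcontext()`](#decimal.localcontext "decimal.localcontext") helper can be used as a context manager that copies the current context and restores it on exit; a minimal illustrative sketch:

```
>>> from decimal import Decimal, localcontext
>>> with localcontext() as ctx:   # ctx is a copy of the current context
...     ctx.prec = 5              # the change is undone when the block exits
...     Decimal(1) / Decimal(7)
...
Decimal('0.14286')
```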
For more advanced work, it may be useful to create alternate contexts using the Context() constructor. To make an alternate active, use the [`setcontext()`](#decimal.setcontext "decimal.setcontext") function.

In accordance with the standard, the [`decimal`](#module-decimal "decimal: Implementation of the General Decimal Arithmetic Specification.") module provides two ready to use standard contexts, [`BasicContext`](#decimal.BasicContext "decimal.BasicContext") and [`ExtendedContext`](#decimal.ExtendedContext "decimal.ExtendedContext"). The former is especially useful for debugging because many of the traps are enabled:

```
>>> myothercontext = Context(prec=60, rounding=ROUND_HALF_DOWN)
>>> setcontext(myothercontext)
>>> Decimal(1) / Decimal(7)
Decimal('0.142857142857142857142857142857142857142857142857142857142857')

>>> ExtendedContext
Context(prec=9, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999,
        capitals=1, clamp=0, flags=[], traps=[])
>>> setcontext(ExtendedContext)
>>> Decimal(1) / Decimal(7)
Decimal('0.142857143')
>>> Decimal(42) / Decimal(0)
Decimal('Infinity')

>>> setcontext(BasicContext)
>>> Decimal(42) / Decimal(0)
Traceback (most recent call last):
  File "<pyshell#143>", line 1, in -toplevel-
    Decimal(42) / Decimal(0)
DivisionByZero: x / 0
```

Contexts also have signal flags for monitoring exceptional conditions encountered during computations. The flags remain set until explicitly cleared, so it is best to clear the flags before each set of monitored computations by using the `clear_flags()` method.

```
>>> setcontext(ExtendedContext)
>>> getcontext().clear_flags()
>>> Decimal(355) / Decimal(113)
Decimal('3.14159292')
>>> getcontext()
Context(prec=9, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999,
        capitals=1, clamp=0, flags=[Inexact, Rounded], traps=[])
```

The *flags* entry shows that the rational approximation to `Pi` was rounded (digits beyond the context precision were thrown away) and that the result is inexact (some of the discarded digits were non-zero).

Individual traps are set using the dictionary in the `traps` field of a context:

```
>>> setcontext(ExtendedContext)
>>> Decimal(1) / Decimal(0)
Decimal('Infinity')
>>> getcontext().traps[DivisionByZero] = 1
>>> Decimal(1) / Decimal(0)
Traceback (most recent call last):
  File "<pyshell#112>", line 1, in -toplevel-
    Decimal(1) / Decimal(0)
DivisionByZero: x / 0
```

Most programs adjust the current context only once, at the beginning of the program. And, in many applications, data is converted to [`Decimal`](#decimal.Decimal "decimal.Decimal") with a single cast inside a loop. With context set and decimals created, the bulk of the program manipulates the data no differently than with other Python numeric types.

Decimal objects
---------------

`class decimal.Decimal(value="0", context=None)` Construct a new [`Decimal`](#decimal.Decimal "decimal.Decimal") object based on *value*. *value* can be an integer, string, tuple, [`float`](functions#float "float"), or another [`Decimal`](#decimal.Decimal "decimal.Decimal") object. If no *value* is given, returns `Decimal('0')`. If *value* is a string, it should conform to the decimal numeric string syntax after leading and trailing whitespace characters, as well as underscores throughout, are removed:

```
sign           ::=  '+' | '-'
digit          ::=  '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
indicator      ::=  'e' | 'E'
digits         ::=  digit [digit]...
decimal-part   ::=  digits '.' [digits] | ['.'] digits
exponent-part  ::=  indicator [sign] digits
infinity       ::=  'Infinity' | 'Inf'
nan            ::=  'NaN' [digits] | 'sNaN' [digits]
numeric-value  ::=  decimal-part [exponent-part] | infinity
numeric-string ::=  [sign] numeric-value | [sign] nan
```

Other Unicode decimal digits are also permitted where `digit` appears above. These include decimal digits from various other alphabets (for example, Arabic-Indic and Devanāgarī digits) along with the fullwidth digits `'\uff10'` through `'\uff19'`.

If *value* is a [`tuple`](stdtypes#tuple "tuple"), it should have three components: a sign (`0` for positive or `1` for negative), a [`tuple`](stdtypes#tuple "tuple") of digits, and an integer exponent. For example, `Decimal((0, (1, 4, 1, 4), -3))` returns `Decimal('1.414')`.

If *value* is a [`float`](functions#float "float"), the binary floating point value is losslessly converted to its exact decimal equivalent. This conversion can often require 53 or more digits of precision. For example, `Decimal(float('1.1'))` converts to `Decimal('1.100000000000000088817841970012523233890533447265625')`.

The *context* precision does not affect how many digits are stored. That is determined exclusively by the number of digits in *value*. For example, `Decimal('3.00000')` records all five zeros even if the context precision is only three.

The purpose of the *context* argument is determining what to do if *value* is a malformed string. If the context traps [`InvalidOperation`](#decimal.InvalidOperation "decimal.InvalidOperation"), an exception is raised; otherwise, the constructor returns a new Decimal with the value of `NaN`.

Once constructed, [`Decimal`](#decimal.Decimal "decimal.Decimal") objects are immutable.

Changed in version 3.2: The argument to the constructor is now permitted to be a [`float`](functions#float "float") instance.

Changed in version 3.3: [`float`](functions#float "float") arguments raise an exception if the [`FloatOperation`](#decimal.FloatOperation "decimal.FloatOperation") trap is set. By default the trap is off.

Changed in version 3.6: Underscores are allowed for grouping, as with integral and floating-point literals in code.

Decimal floating point objects share many properties with the other built-in numeric types such as [`float`](functions#float "float") and [`int`](functions#int "int"). All of the usual math operations and special methods apply. Likewise, decimal objects can be copied, pickled, printed, used as dictionary keys, used as set elements, compared, sorted, and coerced to another type (such as [`float`](functions#float "float") or [`int`](functions#int "int")).

There are some small differences between arithmetic on Decimal objects and arithmetic on integers and floats. When the remainder operator `%` is applied to Decimal objects, the sign of the result is the sign of the *dividend* rather than the sign of the divisor:

```
>>> (-7) % 4
1
>>> Decimal(-7) % Decimal(4)
Decimal('-3')
```

The integer division operator `//` behaves analogously, returning the integer part of the true quotient (truncating towards zero) rather than its floor, so as to preserve the usual identity `x == (x // y) * y + x % y`:

```
>>> -7 // 4
-2
>>> Decimal(-7) // Decimal(4)
Decimal('-1')
```

The `%` and `//` operators implement the `remainder` and `divide-integer` operations (respectively) as described in the specification.
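As a quick, hedged check of that identity using the values from the snippets above:

```
>>> x, y = Decimal(-7), Decimal(4)
>>> (x // y) * y + x % y == x    # the identity holds despite truncation
True
>>> divmod(x, y)                 # both results at once
(Decimal('-1'), Decimal('-3'))
```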
Decimal objects cannot generally be combined with floats or instances of [`fractions.Fraction`](fractions#fractions.Fraction "fractions.Fraction") in arithmetic operations: an attempt to add a [`Decimal`](#decimal.Decimal "decimal.Decimal") to a [`float`](functions#float "float"), for example, will raise a [`TypeError`](exceptions#TypeError "TypeError"). However, it is possible to use Python’s comparison operators to compare a [`Decimal`](#decimal.Decimal "decimal.Decimal") instance `x` with another number `y`. This avoids confusing results when doing equality comparisons between numbers of different types. Changed in version 3.2: Mixed-type comparisons between [`Decimal`](#decimal.Decimal "decimal.Decimal") instances and other numeric types are now fully supported. In addition to the standard numeric properties, decimal floating point objects also have a number of specialized methods: `adjusted()` Return the adjusted exponent after shifting out the coefficient’s rightmost digits until only the lead digit remains: `Decimal('321e+5').adjusted()` returns seven. Used for determining the position of the most significant digit with respect to the decimal point. `as_integer_ratio()` Return a pair `(n, d)` of integers that represent the given [`Decimal`](#decimal.Decimal "decimal.Decimal") instance as a fraction, in lowest terms and with a positive denominator: ``` >>> Decimal('-3.14').as_integer_ratio() (-157, 50) ``` The conversion is exact. Raise OverflowError on infinities and ValueError on NaNs. New in version 3.6. `as_tuple()` Return a [named tuple](../glossary#term-named-tuple) representation of the number: `DecimalTuple(sign, digits, exponent)`. `canonical()` Return the canonical encoding of the argument. Currently, the encoding of a [`Decimal`](#decimal.Decimal "decimal.Decimal") instance is always canonical, so this operation returns its argument unchanged. `compare(other, context=None)` Compare the values of two Decimal instances. [`compare()`](#decimal.Decimal.compare "decimal.Decimal.compare") returns a Decimal instance, and if either operand is a NaN then the result is a NaN: ``` a or b is a NaN ==> Decimal('NaN') a < b ==> Decimal('-1') a == b ==> Decimal('0') a > b ==> Decimal('1') ``` `compare_signal(other, context=None)` This operation is identical to the [`compare()`](#decimal.Decimal.compare "decimal.Decimal.compare") method, except that all NaNs signal. That is, if neither operand is a signaling NaN then any quiet NaN operand is treated as though it were a signaling NaN. `compare_total(other, context=None)` Compare two operands using their abstract representation rather than their numerical value. Similar to the [`compare()`](#decimal.Decimal.compare "decimal.Decimal.compare") method, but the result gives a total ordering on [`Decimal`](#decimal.Decimal "decimal.Decimal") instances. Two [`Decimal`](#decimal.Decimal "decimal.Decimal") instances with the same numeric value but different representations compare unequal in this ordering: ``` >>> Decimal('12.0').compare_total(Decimal('12')) Decimal('-1') ``` Quiet and signaling NaNs are also included in the total ordering. The result of this function is `Decimal('0')` if both operands have the same representation, `Decimal('-1')` if the first operand is lower in the total order than the second, and `Decimal('1')` if the first operand is higher in the total order than the second operand. See the specification for details of the total order. 
This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly. `compare_total_mag(other, context=None)` Compare two operands using their abstract representation rather than their value as in [`compare_total()`](#decimal.Decimal.compare_total "decimal.Decimal.compare_total"), but ignoring the sign of each operand. `x.compare_total_mag(y)` is equivalent to `x.copy_abs().compare_total(y.copy_abs())`. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly. `conjugate()` Just returns self; this method exists only to comply with the Decimal Specification. `copy_abs()` Return the absolute value of the argument. This operation is unaffected by the context and is quiet: no flags are changed and no rounding is performed. `copy_negate()` Return the negation of the argument. This operation is unaffected by the context and is quiet: no flags are changed and no rounding is performed. `copy_sign(other, context=None)` Return a copy of the first operand with the sign set to be the same as the sign of the second operand. For example: ``` >>> Decimal('2.3').copy_sign(Decimal('-1.5')) Decimal('-2.3') ``` This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly. `exp(context=None)` Return the value of the (natural) exponential function `e**x` at the given number. The result is correctly rounded using the [`ROUND_HALF_EVEN`](#decimal.ROUND_HALF_EVEN "decimal.ROUND_HALF_EVEN") rounding mode. ``` >>> Decimal(1).exp() Decimal('2.718281828459045235360287471') >>> Decimal(321).exp() Decimal('2.561702493119680037517373933E+139') ``` `from_float(f)` Classmethod that converts a float to a decimal number, exactly. Note `Decimal.from_float(0.1)` is not the same as `Decimal('0.1')`. Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is `0x1.999999999999ap-4`. That equivalent value in decimal is `0.1000000000000000055511151231257827021181583404541015625`. Note From Python 3.2 onwards, a [`Decimal`](#decimal.Decimal "decimal.Decimal") instance can also be constructed directly from a [`float`](functions#float "float"). ``` >>> Decimal.from_float(0.1) Decimal('0.1000000000000000055511151231257827021181583404541015625') >>> Decimal.from_float(float('nan')) Decimal('NaN') >>> Decimal.from_float(float('inf')) Decimal('Infinity') >>> Decimal.from_float(float('-inf')) Decimal('-Infinity') ``` New in version 3.1. `fma(other, third, context=None)` Fused multiply-add. Return self\*other+third with no rounding of the intermediate product self\*other. ``` >>> Decimal(2).fma(3, 5) Decimal('11') ``` `is_canonical()` Return [`True`](constants#True "True") if the argument is canonical and [`False`](constants#False "False") otherwise. Currently, a [`Decimal`](#decimal.Decimal "decimal.Decimal") instance is always canonical, so this operation always returns [`True`](constants#True "True"). `is_finite()` Return [`True`](constants#True "True") if the argument is a finite number, and [`False`](constants#False "False") if the argument is an infinity or a NaN.
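For instance, finiteness testing distinguishes ordinary values from the special values (a short illustrative check):

```
>>> Decimal('2.50').is_finite()
True
>>> Decimal('Infinity').is_finite()
False
>>> Decimal('NaN').is_finite()
False
```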
`is_infinite()` Return [`True`](constants#True "True") if the argument is either positive or negative infinity and [`False`](constants#False "False") otherwise. `is_nan()` Return [`True`](constants#True "True") if the argument is a (quiet or signaling) NaN and [`False`](constants#False "False") otherwise. `is_normal(context=None)` Return [`True`](constants#True "True") if the argument is a *normal* finite number. Return [`False`](constants#False "False") if the argument is zero, subnormal, infinite or a NaN. `is_qnan()` Return [`True`](constants#True "True") if the argument is a quiet NaN, and [`False`](constants#False "False") otherwise. `is_signed()` Return [`True`](constants#True "True") if the argument has a negative sign and [`False`](constants#False "False") otherwise. Note that zeros and NaNs can both carry signs. `is_snan()` Return [`True`](constants#True "True") if the argument is a signaling NaN and [`False`](constants#False "False") otherwise. `is_subnormal(context=None)` Return [`True`](constants#True "True") if the argument is subnormal, and [`False`](constants#False "False") otherwise. `is_zero()` Return [`True`](constants#True "True") if the argument is a (positive or negative) zero and [`False`](constants#False "False") otherwise. `ln(context=None)` Return the natural (base e) logarithm of the operand. The result is correctly rounded using the [`ROUND_HALF_EVEN`](#decimal.ROUND_HALF_EVEN "decimal.ROUND_HALF_EVEN") rounding mode. `log10(context=None)` Return the base ten logarithm of the operand. The result is correctly rounded using the [`ROUND_HALF_EVEN`](#decimal.ROUND_HALF_EVEN "decimal.ROUND_HALF_EVEN") rounding mode. `logb(context=None)` For a nonzero number, return the adjusted exponent of its operand as a [`Decimal`](#decimal.Decimal "decimal.Decimal") instance. If the operand is a zero then `Decimal('-Infinity')` is returned and the [`DivisionByZero`](#decimal.DivisionByZero "decimal.DivisionByZero") flag is raised. If the operand is an infinity then `Decimal('Infinity')` is returned. `logical_and(other, context=None)` [`logical_and()`](#decimal.Decimal.logical_and "decimal.Decimal.logical_and") is a logical operation which takes two *logical operands* (see [Logical operands](#logical-operands-label)). The result is the digit-wise `and` of the two operands. `logical_invert(context=None)` [`logical_invert()`](#decimal.Decimal.logical_invert "decimal.Decimal.logical_invert") is a logical operation. The result is the digit-wise inversion of the operand. `logical_or(other, context=None)` [`logical_or()`](#decimal.Decimal.logical_or "decimal.Decimal.logical_or") is a logical operation which takes two *logical operands* (see [Logical operands](#logical-operands-label)). The result is the digit-wise `or` of the two operands. `logical_xor(other, context=None)` [`logical_xor()`](#decimal.Decimal.logical_xor "decimal.Decimal.logical_xor") is a logical operation which takes two *logical operands* (see [Logical operands](#logical-operands-label)). The result is the digit-wise exclusive or of the two operands. `max(other, context=None)` Like `max(self, other)` except that the context rounding rule is applied before returning and that `NaN` values are either signaled or ignored (depending on the context and whether they are signaling or quiet). `max_mag(other, context=None)` Similar to the [`max()`](#decimal.Decimal.max "decimal.Decimal.max") method, but the comparison is done using the absolute values of the operands. 
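The difference between value-based and magnitude-based comparison can be seen directly (illustrative, default context):

```
>>> Decimal('2').max(Decimal('-3'))
Decimal('2')
>>> Decimal('2').max_mag(Decimal('-3'))
Decimal('-3')
```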
`min(other, context=None)` Like `min(self, other)` except that the context rounding rule is applied before returning and that `NaN` values are either signaled or ignored (depending on the context and whether they are signaling or quiet). `min_mag(other, context=None)` Similar to the [`min()`](#decimal.Decimal.min "decimal.Decimal.min") method, but the comparison is done using the absolute values of the operands. `next_minus(context=None)` Return the largest number representable in the given context (or in the current thread’s context if no context is given) that is smaller than the given operand. `next_plus(context=None)` Return the smallest number representable in the given context (or in the current thread’s context if no context is given) that is larger than the given operand. `next_toward(other, context=None)` If the two operands are unequal, return the number closest to the first operand in the direction of the second operand. If both operands are numerically equal, return a copy of the first operand with the sign set to be the same as the sign of the second operand. `normalize(context=None)` Normalize the number by stripping the rightmost trailing zeros and converting any result equal to `Decimal('0')` to `Decimal('0e0')`. Used for producing canonical values for attributes of an equivalence class. For example, `Decimal('32.100')` and `Decimal('0.321000e+2')` both normalize to the equivalent value `Decimal('32.1')`. `number_class(context=None)` Return a string describing the *class* of the operand. The returned value is one of the following ten strings. * `"-Infinity"`, indicating that the operand is negative infinity. * `"-Normal"`, indicating that the operand is a negative normal number. * `"-Subnormal"`, indicating that the operand is negative and subnormal. * `"-Zero"`, indicating that the operand is a negative zero. * `"+Zero"`, indicating that the operand is a positive zero. * `"+Subnormal"`, indicating that the operand is positive and subnormal. * `"+Normal"`, indicating that the operand is a positive normal number. * `"+Infinity"`, indicating that the operand is positive infinity. * `"NaN"`, indicating that the operand is a quiet NaN (Not a Number). * `"sNaN"`, indicating that the operand is a signaling NaN. `quantize(exp, rounding=None, context=None)` Return a value equal to the first operand after rounding and having the exponent of the second operand. ``` >>> Decimal('1.41421356').quantize(Decimal('1.000')) Decimal('1.414') ``` Unlike other operations, if the length of the coefficient after the quantize operation would be greater than precision, then an [`InvalidOperation`](#decimal.InvalidOperation "decimal.InvalidOperation") is signaled. This guarantees that, unless there is an error condition, the quantized exponent is always equal to that of the right-hand operand. Also unlike other operations, quantize never signals Underflow, even if the result is subnormal and inexact. If the exponent of the second operand is larger than that of the first then rounding may be necessary. In this case, the rounding mode is determined by the `rounding` argument if given, else by the given `context` argument; if neither argument is given the rounding mode of the current thread’s context is used. An error is returned whenever the resulting exponent is greater than `Emax` or less than `Etiny`. `radix()` Return `Decimal(10)`, the radix (base) in which the [`Decimal`](#decimal.Decimal "decimal.Decimal") class does all its arithmetic. Included for compatibility with the specification. 
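Because `quantize()` accepts an explicit *rounding* argument, the rounding direction can be selected per call rather than taken from the context; a brief illustrative sketch:

```
>>> from decimal import Decimal, ROUND_UP, ROUND_DOWN
>>> Decimal('1.41421356').quantize(Decimal('0.01'), rounding=ROUND_UP)
Decimal('1.42')
>>> Decimal('1.41421356').quantize(Decimal('0.01'), rounding=ROUND_DOWN)
Decimal('1.41')
```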
`remainder_near(other, context=None)` Return the remainder from dividing *self* by *other*. This differs from `self % other` in that the sign of the remainder is chosen so as to minimize its absolute value. More precisely, the return value is `self - n * other` where `n` is the integer nearest to the exact value of `self / other`, and if two integers are equally near then the even one is chosen. If the result is zero then its sign will be the sign of *self*. ``` >>> Decimal(18).remainder_near(Decimal(10)) Decimal('-2') >>> Decimal(25).remainder_near(Decimal(10)) Decimal('5') >>> Decimal(35).remainder_near(Decimal(10)) Decimal('-5') ``` `rotate(other, context=None)` Return the result of rotating the digits of the first operand by an amount specified by the second operand. The second operand must be an integer in the range -precision through precision. The absolute value of the second operand gives the number of places to rotate. If the second operand is positive then rotation is to the left; otherwise rotation is to the right. The coefficient of the first operand is padded on the left with zeros to length precision if necessary. The sign and exponent of the first operand are unchanged. `same_quantum(other, context=None)` Test whether self and other have the same exponent or whether both are `NaN`. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly. `scaleb(other, context=None)` Return the first operand with exponent adjusted by the second. Equivalently, return the first operand multiplied by `10**other`. The second operand must be an integer. `shift(other, context=None)` Return the result of shifting the digits of the first operand by an amount specified by the second operand. The second operand must be an integer in the range -precision through precision. The absolute value of the second operand gives the number of places to shift. If the second operand is positive then the shift is to the left; otherwise the shift is to the right. Digits shifted into the coefficient are zeros. The sign and exponent of the first operand are unchanged. `sqrt(context=None)` Return the square root of the argument to full precision. `to_eng_string(context=None)` Convert to a string, using engineering notation if an exponent is needed. Engineering notation has an exponent which is a multiple of 3. This can leave up to 3 digits to the left of the decimal place and may require the addition of either one or two trailing zeros. For example, this converts `Decimal('123E+1')` to `Decimal('1.23E+3')`. `to_integral(rounding=None, context=None)` Identical to the [`to_integral_value()`](#decimal.Decimal.to_integral_value "decimal.Decimal.to_integral_value") method. The `to_integral` name has been kept for compatibility with older versions. `to_integral_exact(rounding=None, context=None)` Round to the nearest integer, signaling [`Inexact`](#decimal.Inexact "decimal.Inexact") or [`Rounded`](#decimal.Rounded "decimal.Rounded") as appropriate if rounding occurs. The rounding mode is determined by the `rounding` parameter if given, else by the given `context`. If neither parameter is given then the rounding mode of the current context is used. `to_integral_value(rounding=None, context=None)` Round to the nearest integer without signaling [`Inexact`](#decimal.Inexact "decimal.Inexact") or [`Rounded`](#decimal.Rounded "decimal.Rounded"). 
If given, applies *rounding*; otherwise, uses the rounding method in either the supplied *context* or the current context. ### Logical operands The `logical_and()`, `logical_invert()`, `logical_or()`, and `logical_xor()` methods expect their arguments to be *logical operands*. A *logical operand* is a [`Decimal`](#decimal.Decimal "decimal.Decimal") instance whose exponent and sign are both zero, and whose digits are all either `0` or `1`. Context objects --------------- Contexts are environments for arithmetic operations. They govern precision, set rules for rounding, determine which signals are treated as exceptions, and limit the range for exponents. Each thread has its own current context which is accessed or changed using the [`getcontext()`](#decimal.getcontext "decimal.getcontext") and [`setcontext()`](#decimal.setcontext "decimal.setcontext") functions: `decimal.getcontext()` Return the current context for the active thread. `decimal.setcontext(c)` Set the current context for the active thread to *c*. You can also use the [`with`](../reference/compound_stmts#with) statement and the [`localcontext()`](#decimal.localcontext "decimal.localcontext") function to temporarily change the active context. `decimal.localcontext(ctx=None)` Return a context manager that will set the current context for the active thread to a copy of *ctx* on entry to the with-statement and restore the previous context when exiting the with-statement. If no context is specified, a copy of the current context is used. For example, the following code sets the current decimal precision to 42 places, performs a calculation, and then automatically restores the previous context: ``` from decimal import localcontext with localcontext() as ctx: ctx.prec = 42 # Perform a high precision calculation s = calculate_something() s = +s # Round the final result back to the default precision ``` New contexts can also be created using the [`Context`](#decimal.Context "decimal.Context") constructor described below. In addition, the module provides three pre-made contexts: `class decimal.BasicContext` This is a standard context defined by the General Decimal Arithmetic Specification. Precision is set to nine. Rounding is set to [`ROUND_HALF_UP`](#decimal.ROUND_HALF_UP "decimal.ROUND_HALF_UP"). All flags are cleared. All traps are enabled (treated as exceptions) except [`Inexact`](#decimal.Inexact "decimal.Inexact"), [`Rounded`](#decimal.Rounded "decimal.Rounded"), and [`Subnormal`](#decimal.Subnormal "decimal.Subnormal"). Because many of the traps are enabled, this context is useful for debugging. `class decimal.ExtendedContext` This is a standard context defined by the General Decimal Arithmetic Specification. Precision is set to nine. Rounding is set to [`ROUND_HALF_EVEN`](#decimal.ROUND_HALF_EVEN "decimal.ROUND_HALF_EVEN"). All flags are cleared. No traps are enabled (so that exceptions are not raised during computations). Because the traps are disabled, this context is useful for applications that prefer to have a result value of `NaN` or `Infinity` instead of raising exceptions. This allows an application to complete a run in the presence of conditions that would otherwise halt the program. `class decimal.DefaultContext` This context is used by the [`Context`](#decimal.Context "decimal.Context") constructor as a prototype for new contexts. Changing a field (such as precision) has the effect of changing the default for new contexts created by the [`Context`](#decimal.Context "decimal.Context") constructor.
This context is most useful in multi-threaded environments. Changing one of the fields before threads are started has the effect of setting system-wide defaults. Changing the fields after threads have started is not recommended as it would require thread synchronization to prevent race conditions. In single threaded environments, it is preferable to not use this context at all. Instead, simply create contexts explicitly as described below. The default values are `prec`=`28`, `rounding`=[`ROUND_HALF_EVEN`](#decimal.ROUND_HALF_EVEN "decimal.ROUND_HALF_EVEN"), and enabled traps for [`Overflow`](#decimal.Overflow "decimal.Overflow"), [`InvalidOperation`](#decimal.InvalidOperation "decimal.InvalidOperation"), and [`DivisionByZero`](#decimal.DivisionByZero "decimal.DivisionByZero"). In addition to the three supplied contexts, new contexts can be created with the [`Context`](#decimal.Context "decimal.Context") constructor. `class decimal.Context(prec=None, rounding=None, Emin=None, Emax=None, capitals=None, clamp=None, flags=None, traps=None)` Creates a new context. If a field is not specified or is [`None`](constants#None "None"), the default values are copied from the [`DefaultContext`](#decimal.DefaultContext "decimal.DefaultContext"). If the *flags* field is not specified or is [`None`](constants#None "None"), all flags are cleared. *prec* is an integer in the range [`1`, [`MAX_PREC`](#decimal.MAX_PREC "decimal.MAX_PREC")] that sets the precision for arithmetic operations in the context. The *rounding* option is one of the constants listed in the section [Rounding Modes](#rounding-modes). The *traps* and *flags* fields list any signals to be set. Generally, new contexts should only set traps and leave the flags clear. The *Emin* and *Emax* fields are integers specifying the outer limits allowable for exponents. *Emin* must be in the range [[`MIN_EMIN`](#decimal.MIN_EMIN "decimal.MIN_EMIN"), `0`], *Emax* in the range [`0`, [`MAX_EMAX`](#decimal.MAX_EMAX "decimal.MAX_EMAX")]. The *capitals* field is either `0` or `1` (the default). If set to `1`, exponents are printed with a capital `E`; otherwise, a lowercase `e` is used: `Decimal('6.02e+23')`. The *clamp* field is either `0` (the default) or `1`. If set to `1`, the exponent `e` of a [`Decimal`](#decimal.Decimal "decimal.Decimal") instance representable in this context is strictly limited to the range `Emin - prec + 1 <= e <= Emax - prec + 1`. If *clamp* is `0` then a weaker condition holds: the adjusted exponent of the [`Decimal`](#decimal.Decimal "decimal.Decimal") instance is at most `Emax`. When *clamp* is `1`, a large normal number will, where possible, have its exponent reduced and a corresponding number of zeros added to its coefficient, in order to fit the exponent constraints; this preserves the value of the number but loses information about significant trailing zeros. For example: ``` >>> Context(prec=6, Emax=999, clamp=1).create_decimal('1.23e999') Decimal('1.23000E+999') ``` A *clamp* value of `1` allows compatibility with the fixed-width decimal interchange formats specified in IEEE 754. The [`Context`](#decimal.Context "decimal.Context") class defines several general purpose methods as well as a large number of methods for doing arithmetic directly in a given context. In addition, for each of the [`Decimal`](#decimal.Decimal "decimal.Decimal") methods described above (with the exception of the `adjusted()` and `as_tuple()` methods) there is a corresponding [`Context`](#decimal.Context "decimal.Context") method. 
For example, for a [`Context`](#decimal.Context "decimal.Context") instance `C` and [`Decimal`](#decimal.Decimal "decimal.Decimal") instance `x`, `C.exp(x)` is equivalent to `x.exp(context=C)`. Each [`Context`](#decimal.Context "decimal.Context") method accepts a Python integer (an instance of [`int`](functions#int "int")) anywhere that a Decimal instance is accepted. `clear_flags()` Resets all of the flags to `0`. `clear_traps()` Resets all of the traps to `0`. New in version 3.3. `copy()` Return a duplicate of the context. `copy_decimal(num)` Return a copy of the Decimal instance num. `create_decimal(num)` Creates a new Decimal instance from *num* but using *self* as context. Unlike the [`Decimal`](#decimal.Decimal "decimal.Decimal") constructor, the context precision, rounding method, flags, and traps are applied to the conversion. This is useful because constants are often given to a greater precision than is needed by the application. Another benefit is that rounding immediately eliminates unintended effects from digits beyond the current precision. In the following example, using unrounded inputs means that adding zero to a sum can change the result: ``` >>> getcontext().prec = 3 >>> Decimal('3.4445') + Decimal('1.0023') Decimal('4.45') >>> Decimal('3.4445') + Decimal(0) + Decimal('1.0023') Decimal('4.44') ``` This method implements the to-number operation of the IBM specification. If the argument is a string, no leading or trailing whitespace or underscores are permitted. `create_decimal_from_float(f)` Creates a new Decimal instance from a float *f* but rounding using *self* as the context. Unlike the [`Decimal.from_float()`](#decimal.Decimal.from_float "decimal.Decimal.from_float") class method, the context precision, rounding method, flags, and traps are applied to the conversion. ``` >>> context = Context(prec=5, rounding=ROUND_DOWN) >>> context.create_decimal_from_float(math.pi) Decimal('3.1415') >>> context = Context(prec=5, traps=[Inexact]) >>> context.create_decimal_from_float(math.pi) Traceback (most recent call last): ... decimal.Inexact: None ``` New in version 3.1. `Etiny()` Returns a value equal to `Emin - prec + 1` which is the minimum exponent value for subnormal results. When underflow occurs, the exponent is set to [`Etiny`](#decimal.Context.Etiny "decimal.Context.Etiny"). `Etop()` Returns a value equal to `Emax - prec + 1`. The usual approach to working with decimals is to create [`Decimal`](#decimal.Decimal "decimal.Decimal") instances and then apply arithmetic operations which take place within the current context for the active thread. An alternative approach is to use context methods for calculating within a specific context. The methods are similar to those for the [`Decimal`](#decimal.Decimal "decimal.Decimal") class and are only briefly recounted here. `abs(x)` Returns the absolute value of *x*. `add(x, y)` Return the sum of *x* and *y*. `canonical(x)` Returns the same Decimal object *x*. `compare(x, y)` Compares *x* and *y* numerically. `compare_signal(x, y)` Compares the values of the two operands numerically. `compare_total(x, y)` Compares two operands using their abstract representation. `compare_total_mag(x, y)` Compares two operands using their abstract representation, ignoring sign. `copy_abs(x)` Returns a copy of *x* with the sign set to 0. `copy_negate(x)` Returns a copy of *x* with the sign inverted. `copy_sign(x, y)` Copies the sign from *y* to *x*. `divide(x, y)` Return *x* divided by *y*. 
`divide_int(x, y)` Return *x* divided by *y*, truncated to an integer. `divmod(x, y)` Divides two numbers and returns the integer part of the result. `exp(x)` Returns `e ** x`. `fma(x, y, z)` Returns *x* multiplied by *y*, plus *z*. `is_canonical(x)` Returns `True` if *x* is canonical; otherwise returns `False`. `is_finite(x)` Returns `True` if *x* is finite; otherwise returns `False`. `is_infinite(x)` Returns `True` if *x* is infinite; otherwise returns `False`. `is_nan(x)` Returns `True` if *x* is a qNaN or sNaN; otherwise returns `False`. `is_normal(x)` Returns `True` if *x* is a normal number; otherwise returns `False`. `is_qnan(x)` Returns `True` if *x* is a quiet NaN; otherwise returns `False`. `is_signed(x)` Returns `True` if *x* is negative; otherwise returns `False`. `is_snan(x)` Returns `True` if *x* is a signaling NaN; otherwise returns `False`. `is_subnormal(x)` Returns `True` if *x* is subnormal; otherwise returns `False`. `is_zero(x)` Returns `True` if *x* is a zero; otherwise returns `False`. `ln(x)` Returns the natural (base e) logarithm of *x*. `log10(x)` Returns the base 10 logarithm of *x*. `logb(x)` Returns the exponent of the magnitude of the operand’s MSD. `logical_and(x, y)` Applies the logical operation *and* between each operand’s digits. `logical_invert(x)` Invert all the digits in *x*. `logical_or(x, y)` Applies the logical operation *or* between each operand’s digits. `logical_xor(x, y)` Applies the logical operation *xor* between each operand’s digits. `max(x, y)` Compares two values numerically and returns the maximum. `max_mag(x, y)` Compares the values numerically with their sign ignored. `min(x, y)` Compares two values numerically and returns the minimum. `min_mag(x, y)` Compares the values numerically with their sign ignored. `minus(x)` Minus corresponds to the unary prefix minus operator in Python. `multiply(x, y)` Return the product of *x* and *y*. `next_minus(x)` Returns the largest representable number smaller than *x*. `next_plus(x)` Returns the smallest representable number larger than *x*. `next_toward(x, y)` Returns the number closest to *x*, in direction towards *y*. `normalize(x)` Reduces *x* to its simplest form. `number_class(x)` Returns an indication of the class of *x*. `plus(x)` Plus corresponds to the unary prefix plus operator in Python. This operation applies the context precision and rounding, so it is *not* an identity operation. `power(x, y, modulo=None)` Return `x` to the power of `y`, reduced modulo `modulo` if given. With two arguments, compute `x**y`. If `x` is negative then `y` must be integral. The result will be inexact unless `y` is integral and the result is finite and can be expressed exactly in ‘precision’ digits. The rounding mode of the context is used. Results are always correctly-rounded in the Python version. `Decimal(0) ** Decimal(0)` results in `InvalidOperation`, and if `InvalidOperation` is not trapped, then results in `Decimal('NaN')`. Changed in version 3.3: The C module computes [`power()`](#decimal.Context.power "decimal.Context.power") in terms of the correctly-rounded [`exp()`](#decimal.Context.exp "decimal.Context.exp") and [`ln()`](#decimal.Context.ln "decimal.Context.ln") functions. The result is well-defined but only “almost always correctly-rounded”. With three arguments, compute `(x**y) % modulo`. 
For the three argument form, the following restrictions on the arguments hold: * all three arguments must be integral * `y` must be nonnegative * at least one of `x` or `y` must be nonzero * `modulo` must be nonzero and have at most ‘precision’ digits The value resulting from `Context.power(x, y, modulo)` is equal to the value that would be obtained by computing `(x**y) % modulo` with unbounded precision, but is computed more efficiently. The exponent of the result is zero, regardless of the exponents of `x`, `y` and `modulo`. The result is always exact. `quantize(x, y)` Returns a value equal to *x* (rounded), having the exponent of *y*. `radix()` Just returns 10, as this is Decimal, :) `remainder(x, y)` Returns the remainder from integer division. The sign of the result, if non-zero, is the same as that of the original dividend. `remainder_near(x, y)` Returns `x - y * n`, where *n* is the integer nearest the exact value of `x / y` (if the result is 0 then its sign will be the sign of *x*). `rotate(x, y)` Returns a rotated copy of *x*, *y* times. `same_quantum(x, y)` Returns `True` if the two operands have the same exponent. `scaleb(x, y)` Returns the first operand after adding the second value to its exponent. `shift(x, y)` Returns a shifted copy of *x*, *y* times. `sqrt(x)` Square root of a non-negative number to context precision. `subtract(x, y)` Return the difference between *x* and *y*. `to_eng_string(x)` Convert to a string, using engineering notation if an exponent is needed. Engineering notation has an exponent which is a multiple of 3. This can leave up to 3 digits to the left of the decimal place and may require the addition of either one or two trailing zeros. `to_integral_exact(x)` Rounds to an integer. `to_sci_string(x)` Converts a number to a string using scientific notation. Constants --------- The constants in this section are only relevant for the C module. They are also included in the pure Python version for compatibility. | | 32-bit | 64-bit | | --- | --- | --- | | `decimal.MAX_PREC` | `425000000` | `999999999999999999` | | `decimal.MAX_EMAX` | `425000000` | `999999999999999999` | | `decimal.MIN_EMIN` | `-425000000` | `-999999999999999999` | | `decimal.MIN_ETINY` | `-849999999` | `-1999999999999999997` | `decimal.HAVE_THREADS` The value is `True`. Deprecated, because Python now always has threads. Deprecated since version 3.9. `decimal.HAVE_CONTEXTVAR` The default value is `True`. If Python is compiled `--without-decimal-contextvar`, the C version uses a thread-local rather than a coroutine-local context and the value is `False`. This is slightly faster in some nested context scenarios. New in version 3.9: backported to 3.7 and 3.8. Rounding modes -------------- `decimal.ROUND_CEILING` Round towards `Infinity`. `decimal.ROUND_DOWN` Round towards zero. `decimal.ROUND_FLOOR` Round towards `-Infinity`. `decimal.ROUND_HALF_DOWN` Round to nearest with ties going towards zero. `decimal.ROUND_HALF_EVEN` Round to nearest with ties going to nearest even integer. `decimal.ROUND_HALF_UP` Round to nearest with ties going away from zero. `decimal.ROUND_UP` Round away from zero. `decimal.ROUND_05UP` Round away from zero if last digit after rounding towards zero would have been 0 or 5; otherwise round towards zero. Signals ------- Signals represent conditions that arise during computation. Each corresponds to one context flag and one context trap enabler. The context flag is set whenever the condition is encountered.
After the computation, flags may be checked for informational purposes (for instance, to determine whether a computation was exact). After checking the flags, be sure to clear all flags before starting the next computation. If the context’s trap enabler is set for the signal, then the condition causes a Python exception to be raised. For example, if the [`DivisionByZero`](#decimal.DivisionByZero "decimal.DivisionByZero") trap is set, then a [`DivisionByZero`](#decimal.DivisionByZero "decimal.DivisionByZero") exception is raised upon encountering the condition. `class decimal.Clamped` Altered an exponent to fit representation constraints. Typically, clamping occurs when an exponent falls outside the context’s `Emin` and `Emax` limits. If possible, the exponent is reduced to fit by adding zeros to the coefficient. `class decimal.DecimalException` Base class for other signals and a subclass of [`ArithmeticError`](exceptions#ArithmeticError "ArithmeticError"). `class decimal.DivisionByZero` Signals the division of a non-infinite number by zero. Can occur with division, modulo division, or when raising a number to a negative power. If this signal is not trapped, returns `Infinity` or `-Infinity` with the sign determined by the inputs to the calculation. `class decimal.Inexact` Indicates that rounding occurred and the result is not exact. Signals when non-zero digits were discarded during rounding. The rounded result is returned. The signal flag or trap is used to detect when results are inexact. `class decimal.InvalidOperation` An invalid operation was performed. Indicates that an operation was requested that does not make sense. If not trapped, returns `NaN`. Possible causes include: ``` Infinity - Infinity 0 * Infinity Infinity / Infinity x % 0 Infinity % x sqrt(-x) and x > 0 0 ** 0 x ** (non-integer) x ** Infinity ``` `class decimal.Overflow` Numerical overflow. Indicates the exponent is larger than `Emax` after rounding has occurred. If not trapped, the result depends on the rounding mode, either pulling inward to the largest representable finite number or rounding outward to `Infinity`. In either case, [`Inexact`](#decimal.Inexact "decimal.Inexact") and [`Rounded`](#decimal.Rounded "decimal.Rounded") are also signaled. `class decimal.Rounded` Rounding occurred though possibly no information was lost. Signaled whenever rounding discards digits; even if those digits are zero (such as rounding `5.00` to `5.0`). If not trapped, returns the result unchanged. This signal is used to detect loss of significant digits. `class decimal.Subnormal` Exponent was lower than `Emin` prior to rounding. Occurs when an operation result is subnormal (the exponent is too small). If not trapped, returns the result unchanged. `class decimal.Underflow` Numerical underflow with result rounded to zero. Occurs when a subnormal result is pushed to zero by rounding. [`Inexact`](#decimal.Inexact "decimal.Inexact") and [`Subnormal`](#decimal.Subnormal "decimal.Subnormal") are also signaled. `class decimal.FloatOperation` Enable stricter semantics for mixing floats and Decimals. If the signal is not trapped (default), mixing floats and Decimals is permitted in the [`Decimal`](#decimal.Decimal "decimal.Decimal") constructor, [`create_decimal()`](#decimal.Context.create_decimal "decimal.Context.create_decimal") and all comparison operators. Both conversion and comparisons are exact. 
Any occurrence of a mixed operation is silently recorded by setting [`FloatOperation`](#decimal.FloatOperation "decimal.FloatOperation") in the context flags. Explicit conversions with [`from_float()`](#decimal.Decimal.from_float "decimal.Decimal.from_float") or [`create_decimal_from_float()`](#decimal.Context.create_decimal_from_float "decimal.Context.create_decimal_from_float") do not set the flag. Otherwise (the signal is trapped), only equality comparisons and explicit conversions are silent. All other mixed operations raise [`FloatOperation`](#decimal.FloatOperation "decimal.FloatOperation"). The following table summarizes the hierarchy of signals: ``` exceptions.ArithmeticError(exceptions.Exception) DecimalException Clamped DivisionByZero(DecimalException, exceptions.ZeroDivisionError) Inexact Overflow(Inexact, Rounded) Underflow(Inexact, Rounded, Subnormal) InvalidOperation Rounded Subnormal FloatOperation(DecimalException, exceptions.TypeError) ``` Floating Point Notes -------------------- ### Mitigating round-off error with increased precision The use of decimal floating point eliminates decimal representation error (making it possible to represent `0.1` exactly); however, some operations can still incur round-off error when non-zero digits exceed the fixed precision. The effects of round-off error can be amplified by the addition or subtraction of nearly offsetting quantities resulting in loss of significance. Knuth provides two instructive examples where rounded floating point arithmetic with insufficient precision causes the breakdown of the associative and distributive properties of addition: ``` # Examples from Seminumerical Algorithms, Section 4.2.2. >>> from decimal import Decimal, getcontext >>> getcontext().prec = 8 >>> u, v, w = Decimal(11111113), Decimal(-11111111), Decimal('7.51111111') >>> (u + v) + w Decimal('9.5111111') >>> u + (v + w) Decimal('10') >>> u, v, w = Decimal(20000), Decimal(-6), Decimal('6.0000003') >>> (u*v) + (u*w) Decimal('0.01') >>> u * (v+w) Decimal('0.0060000') ``` The [`decimal`](#module-decimal "decimal: Implementation of the General Decimal Arithmetic Specification.") module makes it possible to restore the identities by expanding the precision sufficiently to avoid loss of significance: ``` >>> getcontext().prec = 20 >>> u, v, w = Decimal(11111113), Decimal(-11111111), Decimal('7.51111111') >>> (u + v) + w Decimal('9.51111111') >>> u + (v + w) Decimal('9.51111111') >>> >>> u, v, w = Decimal(20000), Decimal(-6), Decimal('6.0000003') >>> (u*v) + (u*w) Decimal('0.0060000') >>> u * (v+w) Decimal('0.0060000') ``` ### Special values The number system for the [`decimal`](#module-decimal "decimal: Implementation of the General Decimal Arithmetic Specification.") module provides special values including `NaN`, `sNaN`, `-Infinity`, `Infinity`, and two zeros, `+0` and `-0`. Infinities can be constructed directly with: `Decimal('Infinity')`. Also, they can arise from dividing by zero when the [`DivisionByZero`](#decimal.DivisionByZero "decimal.DivisionByZero") signal is not trapped. Likewise, when the [`Overflow`](#decimal.Overflow "decimal.Overflow") signal is not trapped, infinity can result from rounding beyond the limits of the largest representable number. The infinities are signed (affine) and can be used in arithmetic operations where they get treated as very large, indeterminate numbers. For instance, adding a constant to infinity gives another infinite result. 
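As a brief illustration (assuming the default context, with the trap cleared only inside a local context), an untrapped division by zero yields a signed infinity that then propagates through further arithmetic:

```
>>> from decimal import Decimal, localcontext, DivisionByZero
>>> with localcontext() as ctx:
...     ctx.traps[DivisionByZero] = False
...     result = Decimal(1) / Decimal(0)
...
>>> result
Decimal('Infinity')
>>> result + 42
Decimal('Infinity')
```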
Some operations are indeterminate and return `NaN`, or if the [`InvalidOperation`](#decimal.InvalidOperation "decimal.InvalidOperation") signal is trapped, raise an exception. For example, `0/0` returns `NaN` which means “not a number”. This variety of `NaN` is quiet and, once created, will flow through other computations always resulting in another `NaN`. This behavior can be useful for a series of computations that occasionally have missing inputs — it allows the calculation to proceed while flagging specific results as invalid. A variant is `sNaN` which signals rather than remaining quiet after every operation. This is a useful return value when an invalid result needs to interrupt a calculation for special handling. The behavior of Python’s comparison operators can be a little surprising where a `NaN` is involved. A test for equality where one of the operands is a quiet or signaling `NaN` always returns [`False`](constants#False "False") (even when doing `Decimal('NaN')==Decimal('NaN')`), while a test for inequality always returns [`True`](constants#True "True"). An attempt to compare two Decimals using any of the `<`, `<=`, `>` or `>=` operators will raise the [`InvalidOperation`](#decimal.InvalidOperation "decimal.InvalidOperation") signal if either operand is a `NaN`, and return [`False`](constants#False "False") if this signal is not trapped. Note that the General Decimal Arithmetic specification does not specify the behavior of direct comparisons; these rules for comparisons involving a `NaN` were taken from the IEEE 854 standard (see Table 3 in section 5.7). To ensure strict standards-compliance, use the `compare()` and `compare_signal()` methods instead. The signed zeros can result from calculations that underflow. They keep the sign that would have resulted if the calculation had been carried out to greater precision. Since their magnitude is zero, both positive and negative zeros are treated as equal and their sign is informational. In addition to the two signed zeros which are distinct yet equal, there are various representations of zero with differing precisions yet equivalent in value. This takes a bit of getting used to. For an eye accustomed to normalized floating point representations, it is not immediately obvious that the following calculation returns a value equal to zero: ``` >>> 1 / Decimal('Infinity') Decimal('0E-1000026') ``` Working with threads -------------------- The [`getcontext()`](#decimal.getcontext "decimal.getcontext") function accesses a different [`Context`](#decimal.Context "decimal.Context") object for each thread. Having separate thread contexts means that threads may make changes (such as `getcontext().prec=10`) without interfering with other threads. Likewise, the [`setcontext()`](#decimal.setcontext "decimal.setcontext") function automatically assigns its target to the current thread. If [`setcontext()`](#decimal.setcontext "decimal.setcontext") has not been called before [`getcontext()`](#decimal.getcontext "decimal.getcontext"), then [`getcontext()`](#decimal.getcontext "decimal.getcontext") will automatically create a new context for use in the current thread. The new context is copied from a prototype context called *DefaultContext*. To control the defaults so that each thread will use the same values throughout the application, directly modify the *DefaultContext* object.
This should be done *before* any threads are started so that there won’t be a race condition between threads calling [`getcontext()`](#decimal.getcontext "decimal.getcontext"). For example: ``` # Set applicationwide defaults for all threads about to be launched DefaultContext.prec = 12 DefaultContext.rounding = ROUND_DOWN DefaultContext.traps = ExtendedContext.traps.copy() DefaultContext.traps[InvalidOperation] = 1 setcontext(DefaultContext) # Afterwards, the threads can be started t1.start() t2.start() t3.start() . . . ``` Recipes ------- Here are a few recipes that serve as utility functions and that demonstrate ways to work with the [`Decimal`](#decimal.Decimal "decimal.Decimal") class: ``` def moneyfmt(value, places=2, curr='', sep=',', dp='.', pos='', neg='-', trailneg=''): """Convert Decimal to a money formatted string. places: required number of places after the decimal point curr: optional currency symbol before the sign (may be blank) sep: optional grouping separator (comma, period, space, or blank) dp: decimal point indicator (comma or period) only specify as blank when places is zero pos: optional sign for positive numbers: '+', space or blank neg: optional sign for negative numbers: '-', '(', space or blank trailneg:optional trailing minus indicator: '-', ')', space or blank >>> d = Decimal('-1234567.8901') >>> moneyfmt(d, curr='$') '-$1,234,567.89' >>> moneyfmt(d, places=0, sep='.', dp='', neg='', trailneg='-') '1.234.568-' >>> moneyfmt(d, curr='$', neg='(', trailneg=')') '($1,234,567.89)' >>> moneyfmt(Decimal(123456789), sep=' ') '123 456 789.00' >>> moneyfmt(Decimal('-0.02'), neg='<', trailneg='>') '<0.02>' """ q = Decimal(10) ** -places # 2 places --> '0.01' sign, digits, exp = value.quantize(q).as_tuple() result = [] digits = list(map(str, digits)) build, next = result.append, digits.pop if sign: build(trailneg) for i in range(places): build(next() if digits else '0') if places: build(dp) if not digits: build('0') i = 0 while digits: build(next()) i += 1 if i == 3 and digits: i = 0 build(sep) build(curr) build(neg if sign else pos) return ''.join(reversed(result)) def pi(): """Compute Pi to the current precision. >>> print(pi()) 3.141592653589793238462643383 """ getcontext().prec += 2 # extra digits for intermediate steps three = Decimal(3) # substitute "three=3.0" for regular floats lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24 while s != lasts: lasts = s n, na = n+na, na+8 d, da = d+da, da+32 t = (t * n) / d s += t getcontext().prec -= 2 return +s # unary plus applies the new precision def exp(x): """Return e raised to the power of x. Result type matches input type. >>> print(exp(Decimal(1))) 2.718281828459045235360287471 >>> print(exp(Decimal(2))) 7.389056098930650227230427461 >>> print(exp(2.0)) 7.38905609893 >>> print(exp(2+0j)) (7.38905609893+0j) """ getcontext().prec += 2 i, lasts, s, fact, num = 0, 0, 1, 1, 1 while s != lasts: lasts = s i += 1 fact *= i num *= x s += num / fact getcontext().prec -= 2 return +s def cos(x): """Return the cosine of x as measured in radians. The Taylor series approximation works best for a small value of x. For larger values, first compute x = x % (2 * pi). 
>>> print(cos(Decimal('0.5'))) 0.8775825618903727161162815826 >>> print(cos(0.5)) 0.87758256189 >>> print(cos(0.5+0j)) (0.87758256189+0j) """ getcontext().prec += 2 i, lasts, s, fact, num, sign = 0, 0, 1, 1, 1, 1 while s != lasts: lasts = s i += 2 fact *= i * (i-1) num *= x * x sign *= -1 s += num / fact * sign getcontext().prec -= 2 return +s def sin(x): """Return the sine of x as measured in radians. The Taylor series approximation works best for a small value of x. For larger values, first compute x = x % (2 * pi). >>> print(sin(Decimal('0.5'))) 0.4794255386042030002732879352 >>> print(sin(0.5)) 0.479425538604 >>> print(sin(0.5+0j)) (0.479425538604+0j) """ getcontext().prec += 2 i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1 while s != lasts: lasts = s i += 2 fact *= i * (i-1) num *= x * x sign *= -1 s += num / fact * sign getcontext().prec -= 2 return +s ``` Decimal FAQ ----------- Q. It is cumbersome to type `decimal.Decimal('1234.5')`. Is there a way to minimize typing when using the interactive interpreter? A. Some users abbreviate the constructor to just a single letter: ``` >>> D = decimal.Decimal >>> D('1.23') + D('3.45') Decimal('4.68') ``` Q. In a fixed-point application with two decimal places, some inputs have many places and need to be rounded. Others are not supposed to have excess digits and need to be validated. What methods should be used? A. The `quantize()` method rounds to a fixed number of decimal places. If the [`Inexact`](#decimal.Inexact "decimal.Inexact") trap is set, it is also useful for validation: ``` >>> TWOPLACES = Decimal(10) ** -2 # same as Decimal('0.01') ``` ``` >>> # Round to two places >>> Decimal('3.214').quantize(TWOPLACES) Decimal('3.21') ``` ``` >>> # Validate that a number does not exceed two places >>> Decimal('3.21').quantize(TWOPLACES, context=Context(traps=[Inexact])) Decimal('3.21') ``` ``` >>> Decimal('3.214').quantize(TWOPLACES, context=Context(traps=[Inexact])) Traceback (most recent call last): ... Inexact: None ``` Q. Once I have valid two place inputs, how do I maintain that invariant throughout an application? A. Some operations like addition, subtraction, and multiplication by an integer will automatically preserve fixed point. Other operations, like division and non-integer multiplication, will change the number of decimal places and need to be followed up with a `quantize()` step: ``` >>> a = Decimal('102.72') # Initial fixed-point values >>> b = Decimal('3.17') >>> a + b # Addition preserves fixed-point Decimal('105.89') >>> a - b Decimal('99.55') >>> a * 42 # So does integer multiplication Decimal('4314.24') >>> (a * b).quantize(TWOPLACES) # Must quantize non-integer multiplication Decimal('325.62') >>> (b / a).quantize(TWOPLACES) # And quantize division Decimal('0.03') ``` In developing fixed-point applications, it is convenient to define functions to handle the `quantize()` step: ``` >>> def mul(x, y, fp=TWOPLACES): ... return (x * y).quantize(fp) >>> def div(x, y, fp=TWOPLACES): ... return (x / y).quantize(fp) ``` ``` >>> mul(a, b) # Automatically preserve fixed-point Decimal('325.62') >>> div(b, a) Decimal('0.03') ``` Q. There are many ways to express the same value. The numbers `200`, `200.000`, `2E2`, and `.02E+4` all have the same value at various precisions. Is there a way to transform them to a single recognizable canonical value? A.
The `normalize()` method maps all equivalent values to a single representative: ``` >>> values = map(Decimal, '200 200.000 2E2 .02E+4'.split()) >>> [v.normalize() for v in values] [Decimal('2E+2'), Decimal('2E+2'), Decimal('2E+2'), Decimal('2E+2')] ``` Q. Some decimal values always print with exponential notation. Is there a way to get a non-exponential representation? A. For some values, exponential notation is the only way to express the number of significant places in the coefficient. For example, expressing `5.0E+3` as `5000` keeps the value constant but cannot show the original’s two-place significance. If an application does not care about tracking significance, it is easy to remove the exponent and trailing zeroes, losing significance, but keeping the value unchanged: ``` >>> def remove_exponent(d): ... return d.quantize(Decimal(1)) if d == d.to_integral() else d.normalize() ``` ``` >>> remove_exponent(Decimal('5E+3')) Decimal('5000') ``` Q. Is there a way to convert a regular float to a [`Decimal`](#decimal.Decimal "decimal.Decimal")? A. Yes, any binary floating point number can be exactly expressed as a Decimal, though an exact conversion may take more precision than intuition would suggest: ``` >>> Decimal(math.pi) Decimal('3.141592653589793115997963468544185161590576171875') ``` Q. Within a complex calculation, how can I make sure that I haven’t gotten a spurious result because of insufficient precision or rounding anomalies? A. The decimal module makes it easy to test results. A best practice is to re-run calculations using greater precision and with various rounding modes. Widely differing results indicate insufficient precision, rounding mode issues, ill-conditioned inputs, or a numerically unstable algorithm. Q. I noticed that context precision is applied to the results of operations but not to the inputs. Is there anything to watch out for when mixing values of different precisions? A. Yes. The principle is that all values are considered to be exact and so is the arithmetic on those values. Only the results are rounded. The advantage for inputs is that “what you type is what you get”. A disadvantage is that the results can look odd if you forget that the inputs haven’t been rounded: ``` >>> getcontext().prec = 3 >>> Decimal('3.104') + Decimal('2.104') Decimal('5.21') >>> Decimal('3.104') + Decimal('0.000') + Decimal('2.104') Decimal('5.20') ``` The solution is either to increase precision or to force rounding of inputs using the unary plus operation: ``` >>> getcontext().prec = 3 >>> +Decimal('1.23456789') # unary plus triggers rounding Decimal('1.23') ``` Alternatively, inputs can be rounded upon creation using the [`Context.create_decimal()`](#decimal.Context.create_decimal "decimal.Context.create_decimal") method: ``` >>> Context(prec=5, rounding=ROUND_DOWN).create_decimal('1.2345678') Decimal('1.2345') ``` Q. Is the CPython implementation fast for large numbers? A. Yes. In the CPython and PyPy3 implementations, the C/CFFI versions of the decimal module integrate the high speed [libmpdec](https://www.bytereef.org/mpdecimal/doc/libmpdec/index.html) library for arbitrary precision correctly-rounded decimal floating point arithmetic [1](#id4). `libmpdec` uses [Karatsuba multiplication](https://en.wikipedia.org/wiki/Karatsuba_algorithm) for medium-sized numbers and the [Number Theoretic Transform](https://en.wikipedia.org/wiki/Discrete_Fourier_transform_(general)#Number-theoretic_transform) for very large numbers.
The context must be adapted for exact arbitrary precision arithmetic. `Emin` and `Emax` should always be set to the maximum values, `clamp` should always be 0 (the default). Setting `prec` requires some care. The easiest approach for trying out bignum arithmetic is to use the maximum value for `prec` as well [2](#id5): ``` >>> setcontext(Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN)) >>> x = Decimal(2) ** 256 >>> x / 128 Decimal('904625697166532776746648320380374280103671755200316906558262375061821325312') ``` For inexact results, [`MAX_PREC`](#decimal.MAX_PREC "decimal.MAX_PREC") is far too large on 64-bit platforms and the available memory will be insufficient: ``` >>> Decimal(1) / 3 Traceback (most recent call last): File "<stdin>", line 1, in <module> MemoryError ``` On systems with overallocation (e.g. Linux), a more sophisticated approach is to adjust `prec` to the amount of available RAM. Suppose that you have 8GB of RAM and expect 10 simultaneous operands using a maximum of 500MB each: ``` >>> import sys >>> >>> # Maximum number of digits for a single operand using 500MB in 8-byte words >>> # with 19 digits per word (4-byte and 9 digits for the 32-bit build): >>> maxdigits = 19 * ((500 * 1024**2) // 8) >>> >>> # Check that this works: >>> c = Context(prec=maxdigits, Emax=MAX_EMAX, Emin=MIN_EMIN) >>> c.traps[Inexact] = True >>> setcontext(c) >>> >>> # Fill the available precision with nines: >>> x = Decimal(0).logical_invert() * 9 >>> sys.getsizeof(x) 524288112 >>> x + 2 Traceback (most recent call last): File "<stdin>", line 1, in <module> decimal.Inexact: [<class 'decimal.Inexact'>] ``` In general (and especially on systems without overallocation), it is recommended to estimate even tighter bounds and set the [`Inexact`](#decimal.Inexact "decimal.Inexact") trap if all calculations are expected to be exact. `1` New in version 3.3. `2` Changed in version 3.9: This approach now works for all exact results except for non-integer powers.
python termios — POSIX style tty control termios — POSIX style tty control ================================= This module provides an interface to the POSIX calls for tty I/O control. For a complete description of these calls, see *[termios(3)](https://manpages.debian.org/termios(3))* Unix manual page. It is only available for those Unix versions that support POSIX *termios* style tty I/O control configured during installation. All functions in this module take a file descriptor *fd* as their first argument. This can be an integer file descriptor, such as returned by `sys.stdin.fileno()`, or a [file object](../glossary#term-file-object), such as `sys.stdin` itself. This module also defines all the constants needed to work with the functions provided here; these have the same name as their counterparts in C. Please refer to your system documentation for more information on using these terminal control interfaces. The module defines the following functions: `termios.tcgetattr(fd)` Return a list containing the tty attributes for file descriptor *fd*, as follows: `[iflag, oflag, cflag, lflag, ispeed, ospeed, cc]` where *cc* is a list of the tty special characters (each a string of length 1, except the items with indices `VMIN` and `VTIME`, which are integers when these fields are defined). The interpretation of the flags and the speeds as well as the indexing in the *cc* array must be done using the symbolic constants defined in the [`termios`](#module-termios "termios: POSIX style tty control. (Unix)") module. `termios.tcsetattr(fd, when, attributes)` Set the tty attributes for file descriptor *fd* from the *attributes*, which is a list like the one returned by [`tcgetattr()`](#termios.tcgetattr "termios.tcgetattr"). The *when* argument determines when the attributes are changed: `TCSANOW` to change immediately, `TCSADRAIN` to change after transmitting all queued output, or `TCSAFLUSH` to change after transmitting all queued output and discarding all queued input. `termios.tcsendbreak(fd, duration)` Send a break on file descriptor *fd*. A zero *duration* sends a break for 0.25–0.5 seconds; a nonzero *duration* has a system dependent meaning. `termios.tcdrain(fd)` Wait until all output written to file descriptor *fd* has been transmitted. `termios.tcflush(fd, queue)` Discard queued data on file descriptor *fd*. The *queue* selector specifies which queue: `TCIFLUSH` for the input queue, `TCOFLUSH` for the output queue, or `TCIOFLUSH` for both queues. `termios.tcflow(fd, action)` Suspend or resume input or output on file descriptor *fd*. The *action* argument can be `TCOOFF` to suspend output, `TCOON` to restart output, `TCIOFF` to suspend input, or `TCION` to restart input. See also `Module` [`tty`](tty#module-tty "tty: Utility functions that perform common terminal control operations. (Unix)") Convenience functions for common terminal control operations. Example ------- Here’s a function that prompts for a password with echoing turned off. 
Note the technique using a separate [`tcgetattr()`](#termios.tcgetattr "termios.tcgetattr") call and a [`try`](../reference/compound_stmts#try) … [`finally`](../reference/compound_stmts#finally) statement to ensure that the old tty attributes are restored exactly no matter what happens: ``` def getpass(prompt="Password: "): import termios, sys fd = sys.stdin.fileno() old = termios.tcgetattr(fd) new = termios.tcgetattr(fd) new[3] = new[3] & ~termios.ECHO # lflags try: termios.tcsetattr(fd, termios.TCSADRAIN, new) passwd = input(prompt) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old) return passwd ``` python tkinter.messagebox — Tkinter message prompts tkinter.messagebox — Tkinter message prompts ============================================ **Source code:** [Lib/tkinter/messagebox.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/messagebox.py) The [`tkinter.messagebox`](#module-tkinter.messagebox "tkinter.messagebox: Various types of alert dialogs (Tk)") module provides a template base class as well as a variety of convenience methods for commonly used configurations. The message boxes are modal and will return a subset of (True, False, OK, None, Yes, No) based on the user’s selection. Common message box styles and layouts include but are not limited to: `class tkinter.messagebox.Message(master=None, **options)` Create a default information message box. **Information message box** `tkinter.messagebox.showinfo(title=None, message=None, **options)` **Warning message boxes** `tkinter.messagebox.showwarning(title=None, message=None, **options)` `tkinter.messagebox.showerror(title=None, message=None, **options)` **Question message boxes** `tkinter.messagebox.askquestion(title=None, message=None, **options)` `tkinter.messagebox.askokcancel(title=None, message=None, **options)` `tkinter.messagebox.askretrycancel(title=None, message=None, **options)` `tkinter.messagebox.askyesno(title=None, message=None, **options)` `tkinter.messagebox.askyesnocancel(title=None, message=None, **options)` python Python Development Mode Python Development Mode ======================= New in version 3.7. The Python Development Mode introduces additional runtime checks that are too expensive to be enabled by default. It should not be more verbose than the default if the code is correct; new warnings are only emitted when an issue is detected. It can be enabled using the [`-X dev`](../using/cmdline#id5) command line option or by setting the [`PYTHONDEVMODE`](../using/cmdline#envvar-PYTHONDEVMODE) environment variable to `1`. python resource — Resource usage information resource — Resource usage information ===================================== This module provides basic mechanisms for measuring and controlling system resources utilized by a program. Symbolic constants are used to specify particular system resources and to request usage information about either the current process or its children. An [`OSError`](exceptions#OSError "OSError") is raised on syscall failure. `exception resource.error` A deprecated alias of [`OSError`](exceptions#OSError "OSError"). Changed in version 3.3: Following [**PEP 3151**](https://www.python.org/dev/peps/pep-3151), this class was made an alias of [`OSError`](exceptions#OSError "OSError"). Resource Limits --------------- Resource usage can be limited using the [`setrlimit()`](#resource.setrlimit "resource.setrlimit") function described below. Each resource is controlled by a pair of limits: a soft limit and a hard limit.
The soft limit is the current limit, and may be lowered or raised by a process over time. The soft limit can never exceed the hard limit. The hard limit can be lowered to any value greater than the soft limit, but not raised. (Only processes with the effective UID of the super-user can raise a hard limit.)

The specific resources that can be limited are system dependent. They are described in the *[getrlimit(2)](https://manpages.debian.org/getrlimit(2))* man page. The resources listed below are supported when the underlying operating system supports them; resources which cannot be checked or controlled by the operating system are not defined in this module for those platforms.

`resource.RLIM_INFINITY`

Constant used to represent the limit for an unlimited resource.

`resource.getrlimit(resource)`

Returns a tuple `(soft, hard)` with the current soft and hard limits of *resource*. Raises [`ValueError`](exceptions#ValueError "ValueError") if an invalid resource is specified, or [`error`](#resource.error "resource.error") if the underlying system call fails unexpectedly.

`resource.setrlimit(resource, limits)`

Sets new limits of consumption of *resource*. The *limits* argument must be a tuple `(soft, hard)` of two integers describing the new limits. A value of [`RLIM_INFINITY`](#resource.RLIM_INFINITY "resource.RLIM_INFINITY") can be used to request a limit that is unlimited.

Raises [`ValueError`](exceptions#ValueError "ValueError") if an invalid resource is specified, if the new soft limit exceeds the hard limit, or if a process tries to raise its hard limit. Specifying a limit of [`RLIM_INFINITY`](#resource.RLIM_INFINITY "resource.RLIM_INFINITY") when the hard or system limit for that resource is not unlimited will result in a [`ValueError`](exceptions#ValueError "ValueError"). A process with the effective UID of super-user can request any valid limit value, including unlimited, but [`ValueError`](exceptions#ValueError "ValueError") will still be raised if the requested limit exceeds the system imposed limit.

`setrlimit` may also raise [`error`](#resource.error "resource.error") if the underlying system call fails.

VxWorks only supports setting [`RLIMIT_NOFILE`](#resource.RLIMIT_NOFILE "resource.RLIMIT_NOFILE").

Raises an [auditing event](sys#auditing) `resource.setrlimit` with arguments `resource`, `limits`.

`resource.prlimit(pid, resource[, limits])`

Combines [`setrlimit()`](#resource.setrlimit "resource.setrlimit") and [`getrlimit()`](#resource.getrlimit "resource.getrlimit") in one function and supports getting and setting the resource limits of an arbitrary process. If *pid* is 0, then the call applies to the current process. *resource* and *limits* have the same meaning as in [`setrlimit()`](#resource.setrlimit "resource.setrlimit"), except that *limits* is optional.

When *limits* is not given the function returns the *resource* limit of the process *pid*. When *limits* is given the *resource* limit of the process is set and the former resource limit is returned.

Raises [`ProcessLookupError`](exceptions#ProcessLookupError "ProcessLookupError") when *pid* can’t be found and [`PermissionError`](exceptions#PermissionError "PermissionError") when the user doesn’t have `CAP_SYS_RESOURCE` for the process.

Raises an [auditing event](sys#auditing) `resource.prlimit` with arguments `pid`, `resource`, `limits`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.36 or later with glibc 2.13 or later.

New in version 3.4.
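For illustration, here is a minimal sketch (not part of the reference text above; the choice of `RLIMIT_NOFILE` is just an example and is Unix-specific) showing how `getrlimit()` and `setrlimit()` cooperate. It raises the soft limit on open file descriptors up to the hard limit, which an unprivileged process is normally allowed to do:

```
import resource

# Query the current (soft, hard) pair for open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# An unprivileged process may move its soft limit anywhere up to,
# but never above, the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

On some systems the hard limit can be `RLIM_INFINITY` while a separate system-imposed ceiling still applies; in that case this call can raise `ValueError`, as described above.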
These symbols define resources whose consumption can be controlled using the [`setrlimit()`](#resource.setrlimit "resource.setrlimit") and [`getrlimit()`](#resource.getrlimit "resource.getrlimit") functions described above. The values of these symbols are exactly the constants used by C programs.

The Unix man page for *[getrlimit(2)](https://manpages.debian.org/getrlimit(2))* lists the available resources. Note that not all systems use the same symbol or same value to denote the same resource. This module does not attempt to mask platform differences — symbols not defined for a platform will not be available from this module on that platform.

`resource.RLIMIT_CORE`

The maximum size (in bytes) of a core file that the current process can create. This may result in the creation of a partial core file if a larger core would be required to contain the entire process image.

`resource.RLIMIT_CPU`

The maximum amount of processor time (in seconds) that a process can use. If this limit is exceeded, a `SIGXCPU` signal is sent to the process. (See the [`signal`](signal#module-signal "signal: Set handlers for asynchronous events.") module documentation for information about how to catch this signal and do something useful, e.g. flush open files to disk.)

`resource.RLIMIT_FSIZE`

The maximum size of a file which the process may create.

`resource.RLIMIT_DATA`

The maximum size (in bytes) of the process’s heap.

`resource.RLIMIT_STACK`

The maximum size (in bytes) of the call stack for the current process. This only affects the stack of the main thread in a multi-threaded process.

`resource.RLIMIT_RSS`

The maximum resident set size that should be made available to the process.

`resource.RLIMIT_NPROC`

The maximum number of processes the current process may create.

`resource.RLIMIT_NOFILE`

The maximum number of open file descriptors for the current process.

`resource.RLIMIT_OFILE`

The BSD name for [`RLIMIT_NOFILE`](#resource.RLIMIT_NOFILE "resource.RLIMIT_NOFILE").

`resource.RLIMIT_MEMLOCK`

The maximum address space which may be locked in memory.

`resource.RLIMIT_VMEM`

The largest area of mapped memory which the process may occupy.

`resource.RLIMIT_AS`

The maximum area (in bytes) of address space which may be taken by the process.

`resource.RLIMIT_MSGQUEUE`

The number of bytes that can be allocated for POSIX message queues.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.8 or later.

New in version 3.4.

`resource.RLIMIT_NICE`

The ceiling for the process’s nice level (calculated as 20 - rlim\_cur).

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.12 or later.

New in version 3.4.

`resource.RLIMIT_RTPRIO`

The ceiling of the real-time priority.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.12 or later.

New in version 3.4.

`resource.RLIMIT_RTTIME`

The time limit (in microseconds) on CPU time that a process can spend under real-time scheduling without making a blocking syscall.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.25 or later.

New in version 3.4.

`resource.RLIMIT_SIGPENDING`

The number of signals which the process may queue.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.8 or later.

New in version 3.4.

`resource.RLIMIT_SBSIZE`

The maximum size (in bytes) of socket buffer usage for this user. This limits the amount of network memory, and hence the amount of mbufs, that this user may hold at any time.
[Availability](https://docs.python.org/3.9/library/intro.html#availability): FreeBSD 9 or later.

New in version 3.4.

`resource.RLIMIT_SWAP`

The maximum size (in bytes) of the swap space that may be reserved or used by all of this user id’s processes. This limit is enforced only if bit 1 of the vm.overcommit sysctl is set. Please see [tuning(7)](https://www.freebsd.org/cgi/man.cgi?query=tuning&sektion=7) for a complete description of this sysctl.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): FreeBSD 9 or later.

New in version 3.4.

`resource.RLIMIT_NPTS`

The maximum number of pseudo-terminals created by this user id.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): FreeBSD 9 or later.

New in version 3.4.

Resource Usage
--------------

These functions are used to retrieve resource usage information:

`resource.getrusage(who)`

This function returns an object that describes the resources consumed by either the current process or its children, as specified by the *who* parameter. The *who* parameter should be specified using one of the `RUSAGE_*` constants described below.

A simple example:

```
from resource import *
import time

# a non CPU-bound task
time.sleep(3)
print(getrusage(RUSAGE_SELF))

# a CPU-bound task
for i in range(10 ** 8):
    _ = 1 + 1
print(getrusage(RUSAGE_SELF))
```

The fields of the return value each describe how a particular system resource has been used, e.g. amount of time spent running in user mode or number of times the process was swapped out of main memory. Some values are dependent on the clock tick interval, e.g. the amount of memory the process is using.

For backward compatibility, the return value is also accessible as a tuple of 16 elements.

The fields `ru_utime` and `ru_stime` of the return value are floating point values representing the amount of time spent executing in user mode and the amount of time spent executing in system mode, respectively. The remaining values are integers. Consult the *[getrusage(2)](https://manpages.debian.org/getrusage(2))* man page for detailed information about these values. A brief summary is presented here:

| Index | Field | Resource |
| --- | --- | --- |
| `0` | `ru_utime` | time in user mode (float seconds) |
| `1` | `ru_stime` | time in system mode (float seconds) |
| `2` | `ru_maxrss` | maximum resident set size |
| `3` | `ru_ixrss` | shared memory size |
| `4` | `ru_idrss` | unshared memory size |
| `5` | `ru_isrss` | unshared stack size |
| `6` | `ru_minflt` | page faults not requiring I/O |
| `7` | `ru_majflt` | page faults requiring I/O |
| `8` | `ru_nswap` | number of swap outs |
| `9` | `ru_inblock` | block input operations |
| `10` | `ru_oublock` | block output operations |
| `11` | `ru_msgsnd` | messages sent |
| `12` | `ru_msgrcv` | messages received |
| `13` | `ru_nsignals` | signals received |
| `14` | `ru_nvcsw` | voluntary context switches |
| `15` | `ru_nivcsw` | involuntary context switches |

This function will raise a [`ValueError`](exceptions#ValueError "ValueError") if an invalid *who* parameter is specified. It may also raise an [`error`](#resource.error "resource.error") exception in unusual circumstances.

`resource.getpagesize()`

Returns the number of bytes in a system page. (This need not be the same as the hardware page size.)

The following `RUSAGE_*` symbols are passed to the [`getrusage()`](#resource.getrusage "resource.getrusage") function to specify which process’s information should be provided for.
`resource.RUSAGE_SELF`

Pass to [`getrusage()`](#resource.getrusage "resource.getrusage") to request resources consumed by the calling process, which is the sum of resources used by all threads in the process.

`resource.RUSAGE_CHILDREN`

Pass to [`getrusage()`](#resource.getrusage "resource.getrusage") to request resources consumed by child processes of the calling process which have been terminated and waited for.

`resource.RUSAGE_BOTH`

Pass to [`getrusage()`](#resource.getrusage "resource.getrusage") to request resources consumed by both the current process and child processes. May not be available on all systems.

`resource.RUSAGE_THREAD`

Pass to [`getrusage()`](#resource.getrusage "resource.getrusage") to request resources consumed by the current thread. May not be available on all systems.

New in version 3.2.

Synchronization Primitives
==========================

**Source code:** [Lib/asyncio/locks.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/locks.py)

asyncio synchronization primitives are designed to be similar to those of the [`threading`](threading#module-threading "threading: Thread-based parallelism.") module with two important caveats:

* asyncio primitives are not thread-safe, therefore they should not be used for OS thread synchronization (use [`threading`](threading#module-threading "threading: Thread-based parallelism.") for that);
* methods of these synchronization primitives do not accept the *timeout* argument; use the [`asyncio.wait_for()`](asyncio-task#asyncio.wait_for "asyncio.wait_for") function to perform operations with timeouts.

asyncio has the following basic synchronization primitives:

* [`Lock`](#asyncio.Lock "asyncio.Lock")
* [`Event`](#asyncio.Event "asyncio.Event")
* [`Condition`](#asyncio.Condition "asyncio.Condition")
* [`Semaphore`](#asyncio.Semaphore "asyncio.Semaphore")
* [`BoundedSemaphore`](#asyncio.BoundedSemaphore "asyncio.BoundedSemaphore")

Lock
----

`class asyncio.Lock(*, loop=None)`

Implements a mutex lock for asyncio tasks. Not thread-safe.

An asyncio lock can be used to guarantee exclusive access to a shared resource.

The preferred way to use a Lock is an [`async with`](../reference/compound_stmts#async-with) statement:

```
lock = asyncio.Lock()

# ... later
async with lock:
    # access shared state
```

which is equivalent to:

```
lock = asyncio.Lock()

# ... later
await lock.acquire()
try:
    # access shared state
finally:
    lock.release()
```

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

`coroutine acquire()`

Acquire the lock.

This method waits until the lock is *unlocked*, sets it to *locked* and returns `True`.

When more than one coroutine is blocked in [`acquire()`](#asyncio.Lock.acquire "asyncio.Lock.acquire") waiting for the lock to be unlocked, only one coroutine eventually proceeds.

Acquiring a lock is *fair*: the coroutine that proceeds will be the first coroutine that started waiting on the lock.

`release()`

Release the lock.

When the lock is *locked*, reset it to *unlocked* and return.

If the lock is *unlocked*, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised.

`locked()`

Return `True` if the lock is *locked*.

Event
-----

`class asyncio.Event(*, loop=None)`

An event object. Not thread-safe.

An asyncio event can be used to notify multiple asyncio tasks that some event has happened.
An Event object manages an internal flag that can be set to *true* with the [`set()`](#asyncio.Event.set "asyncio.Event.set") method and reset to *false* with the [`clear()`](#asyncio.Event.clear "asyncio.Event.clear") method. The [`wait()`](#asyncio.Event.wait "asyncio.Event.wait") method blocks until the flag is set to *true*. The flag is set to *false* initially.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

Example:

```
async def waiter(event):
    print('waiting for it ...')
    await event.wait()
    print('... got it!')

async def main():
    # Create an Event object.
    event = asyncio.Event()

    # Spawn a Task to wait until 'event' is set.
    waiter_task = asyncio.create_task(waiter(event))

    # Sleep for 1 second and set the event.
    await asyncio.sleep(1)
    event.set()

    # Wait until the waiter task is finished.
    await waiter_task

asyncio.run(main())
```

`coroutine wait()`

Wait until the event is set.

If the event is set, return `True` immediately. Otherwise block until another task calls [`set()`](#asyncio.Event.set "asyncio.Event.set").

`set()`

Set the event. All tasks waiting for the event to be set will be immediately awakened.

`clear()`

Clear (unset) the event. Tasks awaiting on [`wait()`](#asyncio.Event.wait "asyncio.Event.wait") will now block until the [`set()`](#asyncio.Event.set "asyncio.Event.set") method is called again.

`is_set()`

Return `True` if the event is set.

Condition
---------

`class asyncio.Condition(lock=None, *, loop=None)`

A Condition object. Not thread-safe.

An asyncio condition primitive can be used by a task to wait for some event to happen and then get exclusive access to a shared resource.

In essence, a Condition object combines the functionality of an [`Event`](#asyncio.Event "asyncio.Event") and a [`Lock`](#asyncio.Lock "asyncio.Lock"). It is possible to have multiple Condition objects share one Lock, which allows coordinating exclusive access to a shared resource between different tasks interested in particular states of that shared resource.

The optional *lock* argument must be a [`Lock`](#asyncio.Lock "asyncio.Lock") object or `None`. In the latter case a new Lock object is created automatically.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

The preferred way to use a Condition is an [`async with`](../reference/compound_stmts#async-with) statement:

```
cond = asyncio.Condition()

# ... later
async with cond:
    await cond.wait()
```

which is equivalent to:

```
cond = asyncio.Condition()

# ... later
await cond.acquire()
try:
    await cond.wait()
finally:
    cond.release()
```

`coroutine acquire()`

Acquire the underlying lock.

This method waits until the underlying lock is *unlocked*, sets it to *locked* and returns `True`.

`notify(n=1)`

Wake up at most *n* tasks (1 by default) waiting on this condition. The method is a no-op if no tasks are waiting.

The lock must be acquired before this method is called and released shortly after. If called with an *unlocked* lock a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised.

`locked()`

Return `True` if the underlying lock is acquired.

`notify_all()`

Wake up all tasks waiting on this condition.

This method acts like [`notify()`](#asyncio.Condition.notify "asyncio.Condition.notify"), but wakes up all waiting tasks.

The lock must be acquired before this method is called and released shortly after. If called with an *unlocked* lock a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised.

`release()`

Release the underlying lock.
When invoked on an unlocked lock, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised.

`coroutine wait()`

Wait until notified.

If the calling task has not acquired the lock when this method is called, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised.

This method releases the underlying lock, and then blocks until it is awakened by a [`notify()`](#asyncio.Condition.notify "asyncio.Condition.notify") or [`notify_all()`](#asyncio.Condition.notify_all "asyncio.Condition.notify_all") call. Once awakened, the Condition re-acquires its lock and this method returns `True`.

`coroutine wait_for(predicate)`

Wait until a predicate becomes *true*.

The predicate must be a callable whose result will be interpreted as a boolean value. The final value of the predicate is the return value.

Semaphore
---------

`class asyncio.Semaphore(value=1, *, loop=None)`

A Semaphore object. Not thread-safe.

A semaphore manages an internal counter which is decremented by each [`acquire()`](#asyncio.Semaphore.acquire "asyncio.Semaphore.acquire") call and incremented by each [`release()`](#asyncio.Semaphore.release "asyncio.Semaphore.release") call. The counter can never go below zero; when [`acquire()`](#asyncio.Semaphore.acquire "asyncio.Semaphore.acquire") finds that it is zero, it blocks, waiting until some task calls [`release()`](#asyncio.Semaphore.release "asyncio.Semaphore.release").

The optional *value* argument gives the initial value for the internal counter (`1` by default). If the given value is less than `0` a [`ValueError`](exceptions#ValueError "ValueError") is raised.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

The preferred way to use a Semaphore is an [`async with`](../reference/compound_stmts#async-with) statement:

```
sem = asyncio.Semaphore(10)

# ... later
async with sem:
    # work with shared resource
```

which is equivalent to:

```
sem = asyncio.Semaphore(10)

# ... later
await sem.acquire()
try:
    # work with shared resource
finally:
    sem.release()
```

`coroutine acquire()`

Acquire a semaphore.

If the internal counter is greater than zero, decrement it by one and return `True` immediately. If it is zero, wait until a [`release()`](#asyncio.Semaphore.release "asyncio.Semaphore.release") is called and return `True`.

`locked()`

Returns `True` if the semaphore cannot be acquired immediately.

`release()`

Release a semaphore, incrementing the internal counter by one. Can wake up a task waiting to acquire the semaphore.

Unlike [`BoundedSemaphore`](#asyncio.BoundedSemaphore "asyncio.BoundedSemaphore"), [`Semaphore`](#asyncio.Semaphore "asyncio.Semaphore") allows making more `release()` calls than `acquire()` calls.

BoundedSemaphore
----------------

`class asyncio.BoundedSemaphore(value=1, *, loop=None)`

A bounded semaphore object. Not thread-safe.

A bounded semaphore is a version of [`Semaphore`](#asyncio.Semaphore "asyncio.Semaphore") that raises a [`ValueError`](exceptions#ValueError "ValueError") in [`release()`](#asyncio.Semaphore.release "asyncio.Semaphore.release") if it increases the internal counter above the initial *value*.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

Changed in version 3.9: Acquiring a lock using `await lock` or `yield from lock` and/or [`with`](../reference/compound_stmts#with) statement (`with await lock`, `with (yield from lock)`) was removed. Use `async with lock` instead.
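To tie these primitives together, here is a minimal sketch (not from the reference above; the task names and sleep duration are placeholders) that uses a [`Semaphore`](#asyncio.Semaphore "asyncio.Semaphore") to cap how many tasks touch a shared resource at once:

```
import asyncio

async def worker(name, sem):
    # At most three workers run this block concurrently;
    # the rest wait in acquire() until a slot is released.
    async with sem:
        print(f"{name} working")
        await asyncio.sleep(1)

async def main():
    sem = asyncio.Semaphore(3)
    await asyncio.gather(*(worker(f"task-{i}", sem) for i in range(10)))

asyncio.run(main())
```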
unittest.mock — mock object library
===================================

New in version 3.3.

**Source code:** [Lib/unittest/mock.py](https://github.com/python/cpython/tree/3.9/Lib/unittest/mock.py)

[`unittest.mock`](#module-unittest.mock "unittest.mock: Mock object library.") is a library for testing in Python. It allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.

[`unittest.mock`](#module-unittest.mock "unittest.mock: Mock object library.") provides a core [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") class removing the need to create a host of stubs throughout your test suite. After performing an action, you can make assertions about which methods / attributes were used and the arguments they were called with. You can also specify return values and set needed attributes in the normal way.

Additionally, mock provides a [`patch()`](#unittest.mock.patch "unittest.mock.patch") decorator that handles patching module and class level attributes within the scope of a test, along with [`sentinel`](#unittest.mock.sentinel "unittest.mock.sentinel") for creating unique objects. See the [quick guide](#quick-guide) for some examples of how to use [`Mock`](#unittest.mock.Mock "unittest.mock.Mock"), [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") and [`patch()`](#unittest.mock.patch "unittest.mock.patch").

Mock is designed for use with [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") and is based on the ‘action -> assertion’ pattern instead of ‘record -> replay’ used by many mocking frameworks.

There is a backport of [`unittest.mock`](#module-unittest.mock "unittest.mock: Mock object library.") for earlier versions of Python, available as [mock on PyPI](https://pypi.org/project/mock).

Quick Guide
-----------

[`Mock`](#unittest.mock.Mock "unittest.mock.Mock") and [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") objects create all attributes and methods as you access them and store details of how they have been used. You can configure them to specify return values or limit what attributes are available, and then make assertions about how they have been used:

```
>>> from unittest.mock import MagicMock
>>> thing = ProductionClass()
>>> thing.method = MagicMock(return_value=3)
>>> thing.method(3, 4, 5, key='value')
3
>>> thing.method.assert_called_with(3, 4, 5, key='value')
```

`side_effect` allows you to perform side effects, including raising an exception when a mock is called:

```
>>> mock = Mock(side_effect=KeyError('foo'))
>>> mock()
Traceback (most recent call last):
...
KeyError: 'foo'
```

```
>>> values = {'a': 1, 'b': 2, 'c': 3}
>>> def side_effect(arg):
...     return values[arg]
...
>>> mock.side_effect = side_effect
>>> mock('a'), mock('b'), mock('c')
(1, 2, 3)
>>> mock.side_effect = [5, 4, 3, 2, 1]
>>> mock(), mock(), mock()
(5, 4, 3)
```

Mock has many other ways you can configure it and control its behaviour. For example the *spec* argument configures the mock to take its specification from another object. Attempting to access attributes or methods on the mock that don’t exist on the spec will fail with an [`AttributeError`](exceptions#AttributeError "AttributeError").

The [`patch()`](#unittest.mock.patch "unittest.mock.patch") decorator / context manager makes it easy to mock classes or objects in a module under test.
The object you specify will be replaced with a mock (or other object) during the test and restored when the test ends:

```
>>> from unittest.mock import patch
>>> @patch('module.ClassName2')
... @patch('module.ClassName1')
... def test(MockClass1, MockClass2):
...     module.ClassName1()
...     module.ClassName2()
...     assert MockClass1 is module.ClassName1
...     assert MockClass2 is module.ClassName2
...     assert MockClass1.called
...     assert MockClass2.called
...
>>> test()
```

Note

When you nest patch decorators the mocks are passed in to the decorated function in the same order they are applied (the normal *Python* order that decorators are applied). This means from the bottom up, so in the example above the mock for `module.ClassName1` is passed in first.

With [`patch()`](#unittest.mock.patch "unittest.mock.patch") it matters that you patch objects in the namespace where they are looked up. This is normally straightforward, but for a quick guide read [where to patch](#where-to-patch).

As well as a decorator, [`patch()`](#unittest.mock.patch "unittest.mock.patch") can be used as a context manager in a with statement:

```
>>> with patch.object(ProductionClass, 'method', return_value=None) as mock_method:
...     thing = ProductionClass()
...     thing.method(1, 2, 3)
...
>>> mock_method.assert_called_once_with(1, 2, 3)
```

There is also [`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") for setting values in a dictionary just during a scope and restoring the dictionary to its original state when the test ends:

```
>>> foo = {'key': 'value'}
>>> original = foo.copy()
>>> with patch.dict(foo, {'newkey': 'newvalue'}, clear=True):
...     assert foo == {'newkey': 'newvalue'}
...
>>> assert foo == original
```

Mock supports the mocking of Python [magic methods](#magic-methods). The easiest way of using magic methods is with the [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") class. It allows you to do things like:

```
>>> mock = MagicMock()
>>> mock.__str__.return_value = 'foobarbaz'
>>> str(mock)
'foobarbaz'
>>> mock.__str__.assert_called_with()
```

Mock allows you to assign functions (or other Mock instances) to magic methods and they will be called appropriately. The [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") class is just a Mock variant that has all of the magic methods pre-created for you (well, all the useful ones anyway).

The following is an example of using magic methods with the ordinary Mock class:

```
>>> mock = Mock()
>>> mock.__str__ = Mock(return_value='wheeeeee')
>>> str(mock)
'wheeeeee'
```

For ensuring that the mock objects in your tests have the same api as the objects they are replacing, you can use [auto-speccing](#auto-speccing). Auto-speccing can be done through the *autospec* argument to patch, or the [`create_autospec()`](#unittest.mock.create_autospec "unittest.mock.create_autospec") function. Auto-speccing creates mock objects that have the same attributes and methods as the objects they are replacing, and any functions and methods (including constructors) have the same call signature as the real object.

This ensures that your mocks will fail in the same way as your production code if they are used incorrectly:

```
>>> from unittest.mock import create_autospec
>>> def function(a, b, c):
...     pass
...
>>> mock_function = create_autospec(function, return_value='fishy')
>>> mock_function(1, 2, 3)
'fishy'
>>> mock_function.assert_called_once_with(1, 2, 3)
>>> mock_function('wrong arguments')
Traceback (most recent call last):
...
TypeError: <lambda>() takes exactly 3 arguments (1 given)
```

[`create_autospec()`](#unittest.mock.create_autospec "unittest.mock.create_autospec") can also be used on classes, where it copies the signature of the `__init__` method, and on callable objects where it copies the signature of the `__call__` method.

The Mock Class
--------------

[`Mock`](#unittest.mock.Mock "unittest.mock.Mock") is a flexible mock object intended to replace the use of stubs and test doubles throughout your code. Mocks are callable and create attributes as new mocks when you access them [1](#id3). Accessing the same attribute will always return the same mock. Mocks record how you use them, allowing you to make assertions about what your code has done to them.

[`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") is a subclass of [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") with all the magic methods pre-created and ready to use. There are also non-callable variants, useful when you are mocking out objects that aren’t callable: [`NonCallableMock`](#unittest.mock.NonCallableMock "unittest.mock.NonCallableMock") and [`NonCallableMagicMock`](#unittest.mock.NonCallableMagicMock "unittest.mock.NonCallableMagicMock").

The [`patch()`](#unittest.mock.patch "unittest.mock.patch") decorators make it easy to temporarily replace classes in a particular module with a [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") object. By default [`patch()`](#unittest.mock.patch "unittest.mock.patch") will create a [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") for you. You can specify an alternative class of [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") using the *new\_callable* argument to [`patch()`](#unittest.mock.patch "unittest.mock.patch").

`class unittest.mock.Mock(spec=None, side_effect=None, return_value=DEFAULT, wraps=None, name=None, spec_set=None, unsafe=False, **kwargs)`

Create a new [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") object. [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") takes several optional arguments that specify the behaviour of the Mock object:

* *spec*: This can be either a list of strings or an existing object (a class or instance) that acts as the specification for the mock object. If you pass in an object then a list of strings is formed by calling dir on the object (excluding unsupported magic attributes and methods). Accessing any attribute not in this list will raise an [`AttributeError`](exceptions#AttributeError "AttributeError").

  If *spec* is an object (rather than a list of strings) then [`__class__`](stdtypes#instance.__class__ "instance.__class__") returns the class of the spec object. This allows mocks to pass [`isinstance()`](functions#isinstance "isinstance") tests.
* *spec\_set*: A stricter variant of *spec*. If used, attempting to *set* or get an attribute on the mock that isn’t on the object passed as *spec\_set* will raise an [`AttributeError`](exceptions#AttributeError "AttributeError").
* *side\_effect*: A function to be called whenever the Mock is called. See the [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") attribute. Useful for raising exceptions or dynamically changing return values. The function is called with the same arguments as the mock, and unless it returns [`DEFAULT`](#unittest.mock.DEFAULT "unittest.mock.DEFAULT"), the return value of this function is used as the return value.

  Alternatively *side\_effect* can be an exception class or instance.
  In this case the exception will be raised when the mock is called.

  If *side\_effect* is an iterable then each call to the mock will return the next value from the iterable.

  A *side\_effect* can be cleared by setting it to `None`.
* *return\_value*: The value returned when the mock is called. By default this is a new Mock (created on first access). See the [`return_value`](#unittest.mock.Mock.return_value "unittest.mock.Mock.return_value") attribute.
* *unsafe*: By default, accessing any attribute whose name starts with *assert* or *assret* will raise an [`AttributeError`](exceptions#AttributeError "AttributeError"). Passing `unsafe=True` will allow access to these attributes.

  New in version 3.5.
* *wraps*: Item for the mock object to wrap. If *wraps* is not `None` then calling the Mock will pass the call through to the wrapped object (returning the real result). Attribute access on the mock will return a Mock object that wraps the corresponding attribute of the wrapped object (so attempting to access an attribute that doesn’t exist will raise an [`AttributeError`](exceptions#AttributeError "AttributeError")).

  If the mock has an explicit *return\_value* set then calls are not passed to the wrapped object and the *return\_value* is returned instead.
* *name*: If the mock has a name then it will be used in the repr of the mock. This can be useful for debugging. The name is propagated to child mocks.

Mocks can also be called with arbitrary keyword arguments. These will be used to set attributes on the mock after it is created. See the [`configure_mock()`](#unittest.mock.Mock.configure_mock "unittest.mock.Mock.configure_mock") method for details.

`assert_called()`

Assert that the mock was called at least once.

```
>>> mock = Mock()
>>> mock.method()
<Mock name='mock.method()' id='...'>
>>> mock.method.assert_called()
```

New in version 3.6.

`assert_called_once()`

Assert that the mock was called exactly once.

```
>>> mock = Mock()
>>> mock.method()
<Mock name='mock.method()' id='...'>
>>> mock.method.assert_called_once()
>>> mock.method()
<Mock name='mock.method()' id='...'>
>>> mock.method.assert_called_once()
Traceback (most recent call last):
...
AssertionError: Expected 'method' to have been called once. Called 2 times.
```

New in version 3.6.

`assert_called_with(*args, **kwargs)`

This method is a convenient way of asserting that the last call has been made in a particular way:

```
>>> mock = Mock()
>>> mock.method(1, 2, 3, test='wow')
<Mock name='mock.method()' id='...'>
>>> mock.method.assert_called_with(1, 2, 3, test='wow')
```

`assert_called_once_with(*args, **kwargs)`

Assert that the mock was called exactly once and that call was with the specified arguments.

```
>>> mock = Mock(return_value=None)
>>> mock('foo', bar='baz')
>>> mock.assert_called_once_with('foo', bar='baz')
>>> mock('other', bar='values')
>>> mock.assert_called_once_with('other', bar='values')
Traceback (most recent call last):
...
AssertionError: Expected 'mock' to be called once. Called 2 times.
```

`assert_any_call(*args, **kwargs)`

Assert the mock has been called with the specified arguments.
The assert passes if the mock has *ever* been called, unlike [`assert_called_with()`](#unittest.mock.Mock.assert_called_with "unittest.mock.Mock.assert_called_with") and [`assert_called_once_with()`](#unittest.mock.Mock.assert_called_once_with "unittest.mock.Mock.assert_called_once_with") that only pass if the call is the most recent one, and in the case of [`assert_called_once_with()`](#unittest.mock.Mock.assert_called_once_with "unittest.mock.Mock.assert_called_once_with") it must also be the only call.

```
>>> mock = Mock(return_value=None)
>>> mock(1, 2, arg='thing')
>>> mock('some', 'thing', 'else')
>>> mock.assert_any_call(1, 2, arg='thing')
```

`assert_has_calls(calls, any_order=False)`

Assert the mock has been called with the specified calls. The [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") list is checked for the calls.

If *any\_order* is false then the calls must be sequential. There can be extra calls before or after the specified calls.

If *any\_order* is true then the calls can be in any order, but they must all appear in [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls").

```
>>> mock = Mock(return_value=None)
>>> mock(1)
>>> mock(2)
>>> mock(3)
>>> mock(4)
>>> calls = [call(2), call(3)]
>>> mock.assert_has_calls(calls)
>>> calls = [call(4), call(2), call(3)]
>>> mock.assert_has_calls(calls, any_order=True)
```

`assert_not_called()`

Assert the mock was never called.

```
>>> m = Mock()
>>> m.hello.assert_not_called()
>>> obj = m.hello()
>>> m.hello.assert_not_called()
Traceback (most recent call last):
...
AssertionError: Expected 'hello' to not have been called. Called 1 times.
```

New in version 3.5.

`reset_mock(*, return_value=False, side_effect=False)`

The reset\_mock method resets all the call attributes on a mock object:

```
>>> mock = Mock(return_value=None)
>>> mock('hello')
>>> mock.called
True
>>> mock.reset_mock()
>>> mock.called
False
```

Changed in version 3.6: Added two keyword-only arguments to the reset\_mock function.

This can be useful where you want to make a series of assertions that reuse the same object. Note that [`reset_mock()`](#unittest.mock.Mock.reset_mock "unittest.mock.Mock.reset_mock") *doesn’t* clear the return value, [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") or any child attributes you have set using normal assignment by default. In case you want to reset *return\_value* or [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect"), then pass the corresponding parameter as `True`. Child mocks and the return value mock (if any) are reset as well.

Note

*return\_value* and [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") are keyword-only arguments.

`mock_add_spec(spec, spec_set=False)`

Add a spec to a mock. *spec* can either be an object or a list of strings. Only attributes on the *spec* can be fetched as attributes from the mock.

If *spec\_set* is true then only attributes on the spec can be set.

`attach_mock(mock, attribute)`

Attach a mock as an attribute of this one, replacing its name and parent. Calls to the attached mock will be recorded in the [`method_calls`](#unittest.mock.Mock.method_calls "unittest.mock.Mock.method_calls") and [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") attributes of this one.

`configure_mock(**kwargs)`

Set attributes on the mock through keyword arguments.
Attributes plus return values and side effects can be set on child mocks using standard dot notation and unpacking a dictionary in the method call:

```
>>> mock = Mock()
>>> attrs = {'method.return_value': 3, 'other.side_effect': KeyError}
>>> mock.configure_mock(**attrs)
>>> mock.method()
3
>>> mock.other()
Traceback (most recent call last):
...
KeyError
```

The same thing can be achieved in the constructor call to mocks:

```
>>> attrs = {'method.return_value': 3, 'other.side_effect': KeyError}
>>> mock = Mock(some_attribute='eggs', **attrs)
>>> mock.some_attribute
'eggs'
>>> mock.method()
3
>>> mock.other()
Traceback (most recent call last):
...
KeyError
```

[`configure_mock()`](#unittest.mock.Mock.configure_mock "unittest.mock.Mock.configure_mock") exists to make it easier to do configuration after the mock has been created.

`__dir__()`

[`Mock`](#unittest.mock.Mock "unittest.mock.Mock") objects limit the results of `dir(some_mock)` to useful results. For mocks with a *spec* this includes all the permitted attributes for the mock.

See [`FILTER_DIR`](#unittest.mock.FILTER_DIR "unittest.mock.FILTER_DIR") for what this filtering does, and how to switch it off.

`_get_child_mock(**kw)`

Create the child mocks for attributes and return value. By default child mocks will be the same type as the parent. Subclasses of Mock may want to override this to customize the way child mocks are made.

For non-callable mocks the callable variant will be used (rather than any custom subclass).

`called`

A boolean representing whether or not the mock object has been called:

```
>>> mock = Mock(return_value=None)
>>> mock.called
False
>>> mock()
>>> mock.called
True
```

`call_count`

An integer telling you how many times the mock object has been called:

```
>>> mock = Mock(return_value=None)
>>> mock.call_count
0
>>> mock()
>>> mock()
>>> mock.call_count
2
```

`return_value`

Set this to configure the value returned by calling the mock:

```
>>> mock = Mock()
>>> mock.return_value = 'fish'
>>> mock()
'fish'
```

The default return value is a mock object and you can configure it in the normal way:

```
>>> mock = Mock()
>>> mock.return_value.attribute = sentinel.Attribute
>>> mock.return_value()
<Mock name='mock()()' id='...'>
>>> mock.return_value.assert_called_with()
```

[`return_value`](#unittest.mock.Mock.return_value "unittest.mock.Mock.return_value") can also be set in the constructor:

```
>>> mock = Mock(return_value=3)
>>> mock.return_value
3
>>> mock()
3
```

`side_effect`

This can either be a function to be called when the mock is called, an iterable or an exception (class or instance) to be raised.

If you pass in a function it will be called with the same arguments as the mock and unless the function returns the [`DEFAULT`](#unittest.mock.DEFAULT "unittest.mock.DEFAULT") singleton the call to the mock will then return whatever the function returns. If the function returns [`DEFAULT`](#unittest.mock.DEFAULT "unittest.mock.DEFAULT") then the mock will return its normal value (from the [`return_value`](#unittest.mock.Mock.return_value "unittest.mock.Mock.return_value")).

If you pass in an iterable, it is used to retrieve an iterator which must yield a value on every call. This value can either be an exception instance to be raised, or a value to be returned from the call to the mock ([`DEFAULT`](#unittest.mock.DEFAULT "unittest.mock.DEFAULT") handling is identical to the function case).
An example of a mock that raises an exception (to test exception handling of an API): ``` >>> mock = Mock() >>> mock.side_effect = Exception('Boom!') >>> mock() Traceback (most recent call last): ... Exception: Boom! ``` Using [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") to return a sequence of values: ``` >>> mock = Mock() >>> mock.side_effect = [3, 2, 1] >>> mock(), mock(), mock() (3, 2, 1) ``` Using a callable: ``` >>> mock = Mock(return_value=3) >>> def side_effect(*args, **kwargs): ... return DEFAULT ... >>> mock.side_effect = side_effect >>> mock() 3 ``` [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") can be set in the constructor. Here’s an example that adds one to the value the mock is called with and returns it: ``` >>> side_effect = lambda value: value + 1 >>> mock = Mock(side_effect=side_effect) >>> mock(3) 4 >>> mock(-8) -7 ``` Setting [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") to `None` clears it: ``` >>> m = Mock(side_effect=KeyError, return_value=3) >>> m() Traceback (most recent call last): ... KeyError >>> m.side_effect = None >>> m() 3 ``` `call_args` This is either `None` (if the mock hasn’t been called), or the arguments that the mock was last called with. This will be in the form of a tuple: the first member, which can also be accessed through the `args` property, is any ordered arguments the mock was called with (or an empty tuple) and the second member, which can also be accessed through the `kwargs` property, is any keyword arguments (or an empty dictionary). ``` >>> mock = Mock(return_value=None) >>> print(mock.call_args) None >>> mock() >>> mock.call_args call() >>> mock.call_args == () True >>> mock(3, 4) >>> mock.call_args call(3, 4) >>> mock.call_args == ((3, 4),) True >>> mock.call_args.args (3, 4) >>> mock.call_args.kwargs {} >>> mock(3, 4, 5, key='fish', next='w00t!') >>> mock.call_args call(3, 4, 5, key='fish', next='w00t!') >>> mock.call_args.args (3, 4, 5) >>> mock.call_args.kwargs {'key': 'fish', 'next': 'w00t!'} ``` [`call_args`](#unittest.mock.Mock.call_args "unittest.mock.Mock.call_args"), along with members of the lists [`call_args_list`](#unittest.mock.Mock.call_args_list "unittest.mock.Mock.call_args_list"), [`method_calls`](#unittest.mock.Mock.method_calls "unittest.mock.Mock.method_calls") and [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") are [`call`](#unittest.mock.call "unittest.mock.call") objects. These are tuples, so they can be unpacked to get at the individual arguments and make more complex assertions. See [calls as tuples](#calls-as-tuples). Changed in version 3.8: Added `args` and `kwargs` properties. `call_args_list` This is a list of all the calls made to the mock object in sequence (so the length of the list is the number of times it has been called). Before any calls have been made it is an empty list. The [`call`](#unittest.mock.call "unittest.mock.call") object can be used for conveniently constructing lists of calls to compare with [`call_args_list`](#unittest.mock.Mock.call_args_list "unittest.mock.Mock.call_args_list"). 
```
>>> mock = Mock(return_value=None)
>>> mock()
>>> mock(3, 4)
>>> mock(key='fish', next='w00t!')
>>> mock.call_args_list
[call(), call(3, 4), call(key='fish', next='w00t!')]
>>> expected = [(), ((3, 4),), ({'key': 'fish', 'next': 'w00t!'},)]
>>> mock.call_args_list == expected
True
```

Members of [`call_args_list`](#unittest.mock.Mock.call_args_list "unittest.mock.Mock.call_args_list") are [`call`](#unittest.mock.call "unittest.mock.call") objects. These can be unpacked as tuples to get at the individual arguments. See [calls as tuples](#calls-as-tuples).

`method_calls`

As well as tracking calls to themselves, mocks also track calls to methods and attributes, and *their* methods and attributes:

```
>>> mock = Mock()
>>> mock.method()
<Mock name='mock.method()' id='...'>
>>> mock.property.method.attribute()
<Mock name='mock.property.method.attribute()' id='...'>
>>> mock.method_calls
[call.method(), call.property.method.attribute()]
```

Members of [`method_calls`](#unittest.mock.Mock.method_calls "unittest.mock.Mock.method_calls") are [`call`](#unittest.mock.call "unittest.mock.call") objects. These can be unpacked as tuples to get at the individual arguments. See [calls as tuples](#calls-as-tuples).

`mock_calls`

[`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") records *all* calls to the mock object, its methods, magic methods *and* return value mocks.

```
>>> mock = MagicMock()
>>> result = mock(1, 2, 3)
>>> mock.first(a=3)
<MagicMock name='mock.first()' id='...'>
>>> mock.second()
<MagicMock name='mock.second()' id='...'>
>>> int(mock)
1
>>> result(1)
<MagicMock name='mock()()' id='...'>
>>> expected = [call(1, 2, 3), call.first(a=3), call.second(),
...             call.__int__(), call()(1)]
>>> mock.mock_calls == expected
True
```

Members of [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") are [`call`](#unittest.mock.call "unittest.mock.call") objects. These can be unpacked as tuples to get at the individual arguments. See [calls as tuples](#calls-as-tuples).

Note

The way [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") are recorded means that where nested calls are made, the parameters of ancestor calls are not recorded and so will always compare equal:

```
>>> mock = MagicMock()
>>> mock.top(a=3).bottom()
<MagicMock name='mock.top().bottom()' id='...'>
>>> mock.mock_calls
[call.top(a=3), call.top().bottom()]
>>> mock.mock_calls[-1] == call.top(a=-1).bottom()
True
```

`__class__`

Normally the [`__class__`](#unittest.mock.Mock.__class__ "unittest.mock.Mock.__class__") attribute of an object will return its type. For a mock object with a `spec`, `__class__` returns the spec class instead. This allows mock objects to pass [`isinstance()`](functions#isinstance "isinstance") tests for the object they are replacing / masquerading as:

```
>>> mock = Mock(spec=3)
>>> isinstance(mock, int)
True
```

[`__class__`](#unittest.mock.Mock.__class__ "unittest.mock.Mock.__class__") is assignable to; this allows a mock to pass an [`isinstance()`](functions#isinstance "isinstance") check without forcing you to use a spec:

```
>>> mock = Mock()
>>> mock.__class__ = dict
>>> isinstance(mock, dict)
True
```

`class unittest.mock.NonCallableMock(spec=None, wraps=None, name=None, spec_set=None, **kwargs)`

A non-callable version of [`Mock`](#unittest.mock.Mock "unittest.mock.Mock").
The constructor parameters have the same meaning as for [`Mock`](#unittest.mock.Mock "unittest.mock.Mock"), with the exception of *return\_value* and *side\_effect* which have no meaning on a non-callable mock.

Mock objects that use a class or an instance as a `spec` or `spec_set` are able to pass [`isinstance()`](functions#isinstance "isinstance") tests:

```
>>> mock = Mock(spec=SomeClass)
>>> isinstance(mock, SomeClass)
True
>>> mock = Mock(spec_set=SomeClass())
>>> isinstance(mock, SomeClass)
True
```

The [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") classes have support for mocking magic methods. See [magic methods](#magic-methods) for the full details.

The mock classes and the [`patch()`](#unittest.mock.patch "unittest.mock.patch") decorators all take arbitrary keyword arguments for configuration. For the [`patch()`](#unittest.mock.patch "unittest.mock.patch") decorators the keywords are passed to the constructor of the mock being created. The keyword arguments are for configuring attributes of the mock:

```
>>> m = MagicMock(attribute=3, other='fish')
>>> m.attribute
3
>>> m.other
'fish'
```

The return value and side effect of child mocks can be set in the same way, using dotted notation. As you can’t use dotted names directly in a call you have to create a dictionary and unpack it using `**`:

```
>>> attrs = {'method.return_value': 3, 'other.side_effect': KeyError}
>>> mock = Mock(some_attribute='eggs', **attrs)
>>> mock.some_attribute
'eggs'
>>> mock.method()
3
>>> mock.other()
Traceback (most recent call last):
...
KeyError
```

A callable mock which was created with a *spec* (or a *spec\_set*) will introspect the specification object’s signature when matching calls to the mock. Therefore, it can match the actual call’s arguments regardless of whether they were passed positionally or by name:

```
>>> def f(a, b, c): pass
...
>>> mock = Mock(spec=f)
>>> mock(1, 2, c=3)
<Mock name='mock()' id='140161580456576'>
>>> mock.assert_called_with(1, 2, 3)
>>> mock.assert_called_with(a=1, b=2, c=3)
```

This applies to [`assert_called_with()`](#unittest.mock.Mock.assert_called_with "unittest.mock.Mock.assert_called_with"), [`assert_called_once_with()`](#unittest.mock.Mock.assert_called_once_with "unittest.mock.Mock.assert_called_once_with"), [`assert_has_calls()`](#unittest.mock.Mock.assert_has_calls "unittest.mock.Mock.assert_has_calls") and [`assert_any_call()`](#unittest.mock.Mock.assert_any_call "unittest.mock.Mock.assert_any_call"). When [Autospeccing](#auto-speccing), it will also apply to method calls on the mock object.

Changed in version 3.4: Added signature introspection on specced and autospecced mock objects.

`class unittest.mock.PropertyMock(*args, **kwargs)`

A mock intended to be used as a property, or other descriptor, on a class. [`PropertyMock`](#unittest.mock.PropertyMock "unittest.mock.PropertyMock") provides [`__get__()`](../reference/datamodel#object.__get__ "object.__get__") and [`__set__()`](../reference/datamodel#object.__set__ "object.__set__") methods so you can specify a return value when it is fetched.

Fetching a [`PropertyMock`](#unittest.mock.PropertyMock "unittest.mock.PropertyMock") instance from an object calls the mock, with no args. Setting it calls the mock with the value being set.

```
>>> class Foo:
...     @property
...     def foo(self):
...         return 'something'
...     @foo.setter
...     def foo(self, value):
...         pass
...
>>> with patch('__main__.Foo.foo', new_callable=PropertyMock) as mock_foo:
...     mock_foo.return_value = 'mockity-mock'
...     this_foo = Foo()
...     print(this_foo.foo)
...     this_foo.foo = 6
...
mockity-mock
>>> mock_foo.mock_calls
[call(), call(6)]
```

Because of the way mock attributes are stored you can’t directly attach a [`PropertyMock`](#unittest.mock.PropertyMock "unittest.mock.PropertyMock") to a mock object. Instead you can attach it to the mock type object:

```
>>> m = MagicMock()
>>> p = PropertyMock(return_value=3)
>>> type(m).foo = p
>>> m.foo
3
>>> p.assert_called_once_with()
```

`class unittest.mock.AsyncMock(spec=None, side_effect=None, return_value=DEFAULT, wraps=None, name=None, spec_set=None, unsafe=False, **kwargs)`

An asynchronous version of [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock"). The [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") object will behave so the object is recognized as an async function, and the result of a call is an awaitable.

```
>>> mock = AsyncMock()
>>> asyncio.iscoroutinefunction(mock)
True
>>> inspect.isawaitable(mock())
True
```

The result of `mock()` is an async function which will have the outcome of `side_effect` or `return_value` after it has been awaited:

* if `side_effect` is a function, the async function will return the result of that function,
* if `side_effect` is an exception, the async function will raise the exception,
* if `side_effect` is an iterable, the async function will return the next value of the iterable; however, if the sequence of results is exhausted, `StopAsyncIteration` is raised immediately,
* if `side_effect` is not defined, the async function will return the value defined by `return_value`, hence, by default, the async function returns a new [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") object.

Setting the *spec* of a [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") or [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") to an async function will result in a coroutine object being returned after calling.

```
>>> async def async_func(): pass
...
>>> mock = MagicMock(async_func)
>>> mock
<MagicMock spec='function' id='...'>
>>> mock()
<coroutine object AsyncMockMixin._mock_call at ...>
```

Setting the *spec* of a [`Mock`](#unittest.mock.Mock "unittest.mock.Mock"), [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock"), or [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") to a class with asynchronous and synchronous functions will automatically detect the synchronous functions and set them as [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") (if the parent mock is [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") or [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock")) or [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") (if the parent mock is [`Mock`](#unittest.mock.Mock "unittest.mock.Mock")). All asynchronous functions will be [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock").

```
>>> class ExampleClass:
...     def sync_foo():
...         pass
...     async def async_foo():
...         pass
...
>>> a_mock = AsyncMock(ExampleClass)
>>> a_mock.sync_foo
<MagicMock name='mock.sync_foo' id='...'>
>>> a_mock.async_foo
<AsyncMock name='mock.async_foo' id='...'>
>>> mock = Mock(ExampleClass)
>>> mock.sync_foo
<Mock name='mock.sync_foo' id='...'>
>>> mock.async_foo
<AsyncMock name='mock.async_foo' id='...'>
```

New in version 3.8.

`assert_awaited()`

Assert that the mock was awaited at least once.
Note that this is separate from the object having been called, the `await` keyword must be used: ``` >>> mock = AsyncMock() >>> async def main(coroutine_mock): ... await coroutine_mock ... >>> coroutine_mock = mock() >>> mock.called True >>> mock.assert_awaited() Traceback (most recent call last): ... AssertionError: Expected mock to have been awaited. >>> asyncio.run(main(coroutine_mock)) >>> mock.assert_awaited() ``` `assert_awaited_once()` Assert that the mock was awaited exactly once. ``` >>> mock = AsyncMock() >>> async def main(): ... await mock() ... >>> asyncio.run(main()) >>> mock.assert_awaited_once() >>> asyncio.run(main()) >>> mock.method.assert_awaited_once() Traceback (most recent call last): ... AssertionError: Expected mock to have been awaited once. Awaited 2 times. ``` `assert_awaited_with(*args, **kwargs)` Assert that the last await was with the specified arguments. ``` >>> mock = AsyncMock() >>> async def main(*args, **kwargs): ... await mock(*args, **kwargs) ... >>> asyncio.run(main('foo', bar='bar')) >>> mock.assert_awaited_with('foo', bar='bar') >>> mock.assert_awaited_with('other') Traceback (most recent call last): ... AssertionError: expected call not found. Expected: mock('other') Actual: mock('foo', bar='bar') ``` `assert_awaited_once_with(*args, **kwargs)` Assert that the mock was awaited exactly once and with the specified arguments. ``` >>> mock = AsyncMock() >>> async def main(*args, **kwargs): ... await mock(*args, **kwargs) ... >>> asyncio.run(main('foo', bar='bar')) >>> mock.assert_awaited_once_with('foo', bar='bar') >>> asyncio.run(main('foo', bar='bar')) >>> mock.assert_awaited_once_with('foo', bar='bar') Traceback (most recent call last): ... AssertionError: Expected mock to have been awaited once. Awaited 2 times. ``` `assert_any_await(*args, **kwargs)` Assert the mock has ever been awaited with the specified arguments. ``` >>> mock = AsyncMock() >>> async def main(*args, **kwargs): ... await mock(*args, **kwargs) ... >>> asyncio.run(main('foo', bar='bar')) >>> asyncio.run(main('hello')) >>> mock.assert_any_await('foo', bar='bar') >>> mock.assert_any_await('other') Traceback (most recent call last): ... AssertionError: mock('other') await not found ``` `assert_has_awaits(calls, any_order=False)` Assert the mock has been awaited with the specified calls. The [`await_args_list`](#unittest.mock.AsyncMock.await_args_list "unittest.mock.AsyncMock.await_args_list") list is checked for the awaits. If *any\_order* is false then the awaits must be sequential. There can be extra calls before or after the specified awaits. If *any\_order* is true then the awaits can be in any order, but they must all appear in [`await_args_list`](#unittest.mock.AsyncMock.await_args_list "unittest.mock.AsyncMock.await_args_list"). ``` >>> mock = AsyncMock() >>> async def main(*args, **kwargs): ... await mock(*args, **kwargs) ... >>> calls = [call("foo"), call("bar")] >>> mock.assert_has_awaits(calls) Traceback (most recent call last): ... AssertionError: Awaits not found. Expected: [call('foo'), call('bar')] Actual: [] >>> asyncio.run(main('foo')) >>> asyncio.run(main('bar')) >>> mock.assert_has_awaits(calls) ``` `assert_not_awaited()` Assert that the mock was never awaited. ``` >>> mock = AsyncMock() >>> mock.assert_not_awaited() ``` `reset_mock(*args, **kwargs)` See [`Mock.reset_mock()`](#unittest.mock.Mock.reset_mock "unittest.mock.Mock.reset_mock"). 
Also sets [`await_count`](#unittest.mock.AsyncMock.await_count "unittest.mock.AsyncMock.await_count") to 0, [`await_args`](#unittest.mock.AsyncMock.await_args "unittest.mock.AsyncMock.await_args") to None, and clears the [`await_args_list`](#unittest.mock.AsyncMock.await_args_list "unittest.mock.AsyncMock.await_args_list"). `await_count` An integer keeping track of how many times the mock object has been awaited. ``` >>> mock = AsyncMock() >>> async def main(): ... await mock() ... >>> asyncio.run(main()) >>> mock.await_count 1 >>> asyncio.run(main()) >>> mock.await_count 2 ``` `await_args` This is either `None` (if the mock hasn’t been awaited), or the arguments that the mock was last awaited with. Functions the same as [`Mock.call_args`](#unittest.mock.Mock.call_args "unittest.mock.Mock.call_args"). ``` >>> mock = AsyncMock() >>> async def main(*args): ... await mock(*args) ... >>> mock.await_args >>> asyncio.run(main('foo')) >>> mock.await_args call('foo') >>> asyncio.run(main('bar')) >>> mock.await_args call('bar') ``` `await_args_list` This is a list of all the awaits made to the mock object in sequence (so the length of the list is the number of times it has been awaited). Before any awaits have been made it is an empty list. ``` >>> mock = AsyncMock() >>> async def main(*args): ... await mock(*args) ... >>> mock.await_args_list [] >>> asyncio.run(main('foo')) >>> mock.await_args_list [call('foo')] >>> asyncio.run(main('bar')) >>> mock.await_args_list [call('foo'), call('bar')] ``` ### Calling Mock objects are callable. The call will return the value set as the [`return_value`](#unittest.mock.Mock.return_value "unittest.mock.Mock.return_value") attribute. The default return value is a new Mock object; it is created the first time the return value is accessed (either explicitly or by calling the Mock) - but it is stored and the same one returned each time. Calls made to the object will be recorded in the attributes like [`call_args`](#unittest.mock.Mock.call_args "unittest.mock.Mock.call_args") and [`call_args_list`](#unittest.mock.Mock.call_args_list "unittest.mock.Mock.call_args_list"). If [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") is set then it will be called after the call has been recorded, so if `side_effect` raises an exception the call is still recorded. The simplest way to make a mock raise an exception when called is to make [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") an exception class or instance: ``` >>> m = MagicMock(side_effect=IndexError) >>> m(1, 2, 3) Traceback (most recent call last): ... IndexError >>> m.mock_calls [call(1, 2, 3)] >>> m.side_effect = KeyError('Bang!') >>> m('two', 'three', 'four') Traceback (most recent call last): ... KeyError: 'Bang!' >>> m.mock_calls [call(1, 2, 3), call('two', 'three', 'four')] ``` If `side_effect` is a function then whatever that function returns is what calls to the mock return. The `side_effect` function is called with the same arguments as the mock. This allows you to vary the return value of the call dynamically, based on the input: ``` >>> def side_effect(value): ... return value + 1 ... >>> m = MagicMock(side_effect=side_effect) >>> m(1) 2 >>> m(2) 3 >>> m.mock_calls [call(1), call(2)] ``` If you want the mock to still return the default return value (a new mock), or any set return value, then there are two ways of doing this. 
Either return `mock.return_value` from inside `side_effect`, or return [`DEFAULT`](#unittest.mock.DEFAULT "unittest.mock.DEFAULT"): ``` >>> m = MagicMock() >>> def side_effect(*args, **kwargs): ... return m.return_value ... >>> m.side_effect = side_effect >>> m.return_value = 3 >>> m() 3 >>> def side_effect(*args, **kwargs): ... return DEFAULT ... >>> m.side_effect = side_effect >>> m() 3 ``` To remove a `side_effect`, and return to the default behaviour, set the `side_effect` to `None`: ``` >>> m = MagicMock(return_value=6) >>> def side_effect(*args, **kwargs): ... return 3 ... >>> m.side_effect = side_effect >>> m() 3 >>> m.side_effect = None >>> m() 6 ``` The `side_effect` can also be any iterable object. Repeated calls to the mock will return values from the iterable (until the iterable is exhausted and a [`StopIteration`](exceptions#StopIteration "StopIteration") is raised): ``` >>> m = MagicMock(side_effect=[1, 2, 3]) >>> m() 1 >>> m() 2 >>> m() 3 >>> m() Traceback (most recent call last): ... StopIteration ``` If any members of the iterable are exceptions they will be raised instead of returned: ``` >>> iterable = (33, ValueError, 66) >>> m = MagicMock(side_effect=iterable) >>> m() 33 >>> m() Traceback (most recent call last): ... ValueError >>> m() 66 ``` ### Deleting Attributes Mock objects create attributes on demand. This allows them to pretend to be objects of any type. You may want a mock object to return `False` to a [`hasattr()`](functions#hasattr "hasattr") call, or raise an [`AttributeError`](exceptions#AttributeError "AttributeError") when an attribute is fetched. You can do this by providing an object as a `spec` for a mock, but that isn’t always convenient. You “block” attributes by deleting them. Once deleted, accessing an attribute will raise an [`AttributeError`](exceptions#AttributeError "AttributeError"). ``` >>> mock = MagicMock() >>> hasattr(mock, 'm') True >>> del mock.m >>> hasattr(mock, 'm') False >>> del mock.f >>> mock.f Traceback (most recent call last): ... AttributeError: f ``` ### Mock names and the name attribute Since “name” is an argument to the [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") constructor, if you want your mock object to have a “name” attribute you can’t just pass it in at creation time. There are two alternatives. One option is to use [`configure_mock()`](#unittest.mock.Mock.configure_mock "unittest.mock.Mock.configure_mock"): ``` >>> mock = MagicMock() >>> mock.configure_mock(name='my_name') >>> mock.name 'my_name' ``` A simpler option is to simply set the “name” attribute after mock creation: ``` >>> mock = MagicMock() >>> mock.name = "foo" ``` ### Attaching Mocks as Attributes When you attach a mock as an attribute of another mock (or as the return value) it becomes a “child” of that mock. Calls to the child are recorded in the [`method_calls`](#unittest.mock.Mock.method_calls "unittest.mock.Mock.method_calls") and [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") attributes of the parent. 
This is useful for configuring child mocks and then attaching them to the parent, or for attaching mocks to a parent that records all calls to the children and allows you to make assertions about the order of calls between mocks: ``` >>> parent = MagicMock() >>> child1 = MagicMock(return_value=None) >>> child2 = MagicMock(return_value=None) >>> parent.child1 = child1 >>> parent.child2 = child2 >>> child1(1) >>> child2(2) >>> parent.mock_calls [call.child1(1), call.child2(2)] ``` The exception to this is if the mock has a name. This allows you to prevent the “parenting” if for some reason you don’t want it to happen. ``` >>> mock = MagicMock() >>> not_a_child = MagicMock(name='not-a-child') >>> mock.attribute = not_a_child >>> mock.attribute() <MagicMock name='not-a-child()' id='...'> >>> mock.mock_calls [] ``` Mocks created for you by [`patch()`](#unittest.mock.patch "unittest.mock.patch") are automatically given names. To attach mocks that have names to a parent you use the [`attach_mock()`](#unittest.mock.Mock.attach_mock "unittest.mock.Mock.attach_mock") method: ``` >>> thing1 = object() >>> thing2 = object() >>> parent = MagicMock() >>> with patch('__main__.thing1', return_value=None) as child1: ... with patch('__main__.thing2', return_value=None) as child2: ... parent.attach_mock(child1, 'child1') ... parent.attach_mock(child2, 'child2') ... child1('one') ... child2('two') ... >>> parent.mock_calls [call.child1('one'), call.child2('two')] ``` `1` The only exceptions are magic methods and attributes (those that have leading and trailing double underscores). Mock doesn’t create these but instead raises an [`AttributeError`](exceptions#AttributeError "AttributeError"). This is because the interpreter will often implicitly request these methods, and gets *very* confused to get a new Mock object when it expects a magic method. If you need magic method support see [magic methods](#magic-methods). The patchers ------------ The patch decorators are used for patching objects only within the scope of the function they decorate. They automatically handle the unpatching for you, even if exceptions are raised. All of these functions can also be used in with statements or as class decorators. ### patch Note The key is to do the patching in the right namespace. See the section [where to patch](#id6). `unittest.mock.patch(target, new=DEFAULT, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs)` [`patch()`](#unittest.mock.patch "unittest.mock.patch") acts as a function decorator, class decorator or a context manager. Inside the body of the function or with statement, the *target* is patched with a *new* object. When the function/with statement exits the patch is undone. If *new* is omitted, then the target is replaced with an [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") if the patched object is an async function or a [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") otherwise. If [`patch()`](#unittest.mock.patch "unittest.mock.patch") is used as a decorator and *new* is omitted, the created mock is passed in as an extra argument to the decorated function. If [`patch()`](#unittest.mock.patch "unittest.mock.patch") is used as a context manager the created mock is returned by the context manager. *target* should be a string in the form `'package.module.ClassName'`. 
The *target* is imported and the specified object replaced with the *new* object, so the *target* must be importable from the environment you are calling [`patch()`](#unittest.mock.patch "unittest.mock.patch") from. The target is imported when the decorated function is executed, not at decoration time.

The *spec* and *spec\_set* keyword arguments are passed to the [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") if patch is creating one for you.

In addition you can pass `spec=True` or `spec_set=True`, which causes patch to pass in the object being mocked as the spec/spec\_set object.

*new\_callable* allows you to specify a different class, or callable object, that will be called to create the *new* object. By default [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") is used for async functions and [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") for the rest.

A more powerful form of *spec* is *autospec*. If you set `autospec=True` then the mock will be created with a spec from the object being replaced. All attributes of the mock will also have the spec of the corresponding attribute of the object being replaced. Methods and functions being mocked will have their arguments checked and will raise a [`TypeError`](exceptions#TypeError "TypeError") if they are called with the wrong signature. For mocks replacing a class, their return value (the ‘instance’) will have the same spec as the class. See the [`create_autospec()`](#unittest.mock.create_autospec "unittest.mock.create_autospec") function and [Autospeccing](#auto-speccing).

Instead of `autospec=True` you can pass `autospec=some_object` to use an arbitrary object as the spec instead of the one being replaced.

By default [`patch()`](#unittest.mock.patch "unittest.mock.patch") will fail to replace attributes that don’t exist. If you pass in `create=True`, and the attribute doesn’t exist, patch will create the attribute for you when the patched function is called, and delete it again after the patched function has exited. This is useful for writing tests against attributes that your production code creates at runtime. It is off by default because it can be dangerous. With it switched on you can write passing tests against APIs that don’t actually exist!

Note

Changed in version 3.5: If you are patching builtins in a module then you don’t need to pass `create=True`; it will be added by default.

Patch can be used as a `TestCase` class decorator. It works by decorating each test method in the class. This reduces the boilerplate code when your test methods share a common set of patches. [`patch()`](#unittest.mock.patch "unittest.mock.patch") finds tests by looking for method names that start with `patch.TEST_PREFIX`. By default this is `'test'`, which matches the way [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") finds tests. You can specify an alternative prefix by setting `patch.TEST_PREFIX`.

Patch can be used as a context manager, with the with statement. Here the patching applies to the indented block after the with statement. If you use “as” then the patched object will be bound to the name after the “as”; very useful if [`patch()`](#unittest.mock.patch "unittest.mock.patch") is creating a mock object for you.

[`patch()`](#unittest.mock.patch "unittest.mock.patch") takes arbitrary keyword arguments.
These will be passed to [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") if the patched object is asynchronous, to [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") otherwise or to *new\_callable* if specified. `patch.dict(...)`, `patch.multiple(...)` and `patch.object(...)` are available for alternate use-cases. [`patch()`](#unittest.mock.patch "unittest.mock.patch") as function decorator, creating the mock for you and passing it into the decorated function: ``` >>> @patch('__main__.SomeClass') ... def function(normal_argument, mock_class): ... print(mock_class is SomeClass) ... >>> function(None) True ``` Patching a class replaces the class with a [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") *instance*. If the class is instantiated in the code under test then it will be the [`return_value`](#unittest.mock.Mock.return_value "unittest.mock.Mock.return_value") of the mock that will be used. If the class is instantiated multiple times you could use [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") to return a new mock each time. Alternatively you can set the *return\_value* to be anything you want. To configure return values on methods of *instances* on the patched class you must do this on the `return_value`. For example: ``` >>> class Class: ... def method(self): ... pass ... >>> with patch('__main__.Class') as MockClass: ... instance = MockClass.return_value ... instance.method.return_value = 'foo' ... assert Class() is instance ... assert Class().method() == 'foo' ... ``` If you use *spec* or *spec\_set* and [`patch()`](#unittest.mock.patch "unittest.mock.patch") is replacing a *class*, then the return value of the created mock will have the same spec. ``` >>> Original = Class >>> patcher = patch('__main__.Class', spec=True) >>> MockClass = patcher.start() >>> instance = MockClass() >>> assert isinstance(instance, Original) >>> patcher.stop() ``` The *new\_callable* argument is useful where you want to use an alternative class to the default [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") for the created mock. For example, if you wanted a [`NonCallableMock`](#unittest.mock.NonCallableMock "unittest.mock.NonCallableMock") to be used: ``` >>> thing = object() >>> with patch('__main__.thing', new_callable=NonCallableMock) as mock_thing: ... assert thing is mock_thing ... thing() ... Traceback (most recent call last): ... TypeError: 'NonCallableMock' object is not callable ``` Another use case might be to replace an object with an [`io.StringIO`](io#io.StringIO "io.StringIO") instance: ``` >>> from io import StringIO >>> def foo(): ... print('Something') ... >>> @patch('sys.stdout', new_callable=StringIO) ... def test(mock_stdout): ... foo() ... assert mock_stdout.getvalue() == 'Something\n' ... >>> test() ``` When [`patch()`](#unittest.mock.patch "unittest.mock.patch") is creating a mock for you, it is common that the first thing you need to do is to configure the mock. Some of that configuration can be done in the call to patch. 
Any arbitrary keywords you pass into the call will be used to set attributes on the created mock:

```
>>> patcher = patch('__main__.thing', first='one', second='two')
>>> mock_thing = patcher.start()
>>> mock_thing.first
'one'
>>> mock_thing.second
'two'
```

As well as attributes on the created mock, attributes like the [`return_value`](#unittest.mock.Mock.return_value "unittest.mock.Mock.return_value") and [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") of child mocks can also be configured. These aren’t syntactically valid to pass in directly as keyword arguments, but a dictionary with these as keys can still be expanded into a [`patch()`](#unittest.mock.patch "unittest.mock.patch") call using `**`:

```
>>> config = {'method.return_value': 3, 'other.side_effect': KeyError}
>>> patcher = patch('__main__.thing', **config)
>>> mock_thing = patcher.start()
>>> mock_thing.method()
3
>>> mock_thing.other()
Traceback (most recent call last):
...
KeyError
```

By default, attempting to patch a function in a module (or a method or an attribute in a class) that does not exist will fail with [`AttributeError`](exceptions#AttributeError "AttributeError"):

```
>>> @patch('sys.non_existing_attribute', 42)
... def test():
...     assert sys.non_existing_attribute == 42
...
>>> test()
Traceback (most recent call last):
...
AttributeError: <module 'sys' (built-in)> does not have the attribute 'non_existing_attribute'
```

but adding `create=True` in the call to [`patch()`](#unittest.mock.patch "unittest.mock.patch") will make the previous example work as expected (because *new* is supplied explicitly, no mock is passed to the function):

```
>>> @patch('sys.non_existing_attribute', 42, create=True)
... def test():
...     assert sys.non_existing_attribute == 42
...
>>> test()
```

Changed in version 3.8: [`patch()`](#unittest.mock.patch "unittest.mock.patch") now returns an [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") if the target is an async function.

### patch.object

`patch.object(target, attribute, new=DEFAULT, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs)`

patch the named member (*attribute*) on an object (*target*) with a mock object.

[`patch.object()`](#unittest.mock.patch.object "unittest.mock.patch.object") can be used as a decorator, class decorator or a context manager. Arguments *new*, *spec*, *create*, *spec\_set*, *autospec* and *new\_callable* have the same meaning as for [`patch()`](#unittest.mock.patch "unittest.mock.patch"). Like [`patch()`](#unittest.mock.patch "unittest.mock.patch"), [`patch.object()`](#unittest.mock.patch.object "unittest.mock.patch.object") takes arbitrary keyword arguments for configuring the mock object it creates.

When used as a class decorator [`patch.object()`](#unittest.mock.patch.object "unittest.mock.patch.object") honours `patch.TEST_PREFIX` for choosing which methods to wrap.

You can either call [`patch.object()`](#unittest.mock.patch.object "unittest.mock.patch.object") with three arguments or two arguments. The three argument form takes the object to be patched, the attribute name and the object to replace the attribute with.

When calling with the two argument form you omit the replacement object, and a mock is created for you and passed in as an extra argument to the decorated function:

```
>>> @patch.object(SomeClass, 'class_method')
... def test(mock_method):
...     SomeClass.class_method(3)
...     mock_method.assert_called_with(3)
...
>>> test()
```
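With the three argument form you supply the replacement object yourself, so nothing extra is passed to the decorated function. A minimal sketch, reusing the same hypothetical `SomeClass` from the example above:

```
>>> # SomeClass is a hypothetical class assumed to be defined in this namespace
>>> @patch.object(SomeClass, 'class_method', Mock(return_value=6))
... def test():
...     assert SomeClass.class_method(3) == 6
...
>>> test()
```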
*spec*, *create* and the other arguments to [`patch.object()`](#unittest.mock.patch.object "unittest.mock.patch.object") have the same meaning as they do for [`patch()`](#unittest.mock.patch "unittest.mock.patch").

### patch.dict

`patch.dict(in_dict, values=(), clear=False, **kwargs)`

Patch a dictionary, or dictionary like object, and restore the dictionary to its original state after the test.

*in\_dict* can be a dictionary or a mapping like container. If it is a mapping then it must at least support getting, setting and deleting items plus iterating over keys.

*in\_dict* can also be a string specifying the name of the dictionary, which will then be fetched by importing it.

*values* can be a dictionary of values to set in the dictionary. *values* can also be an iterable of `(key, value)` pairs.

If *clear* is true then the dictionary will be cleared before the new values are set.

[`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") can also be called with arbitrary keyword arguments to set values in the dictionary.

Changed in version 3.8: [`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") now returns the patched dictionary when used as a context manager.

[`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") can be used as a context manager, decorator or class decorator:

```
>>> foo = {}
>>> @patch.dict(foo, {'newkey': 'newvalue'})
... def test():
...     assert foo == {'newkey': 'newvalue'}
>>> test()
>>> assert foo == {}
```

When used as a class decorator [`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") honours `patch.TEST_PREFIX` (which defaults to `'test'`) for choosing which methods to wrap:

```
>>> import os
>>> import unittest
>>> from unittest.mock import patch
>>> @patch.dict('os.environ', {'newkey': 'newvalue'})
... class TestSample(unittest.TestCase):
...     def test_sample(self):
...         self.assertEqual(os.environ['newkey'], 'newvalue')
```

If you want to use a different prefix for your test, you can inform the patchers of the different prefix by setting `patch.TEST_PREFIX`. For more details about how to change its value, see [TEST\_PREFIX](#test-prefix).

[`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") can be used to add members to a dictionary, or simply let a test change a dictionary, and ensure the dictionary is restored when the test ends.

```
>>> foo = {}
>>> with patch.dict(foo, {'newkey': 'newvalue'}) as patched_foo:
...     assert foo == {'newkey': 'newvalue'}
...     assert patched_foo == {'newkey': 'newvalue'}
...     # You can add, update or delete keys of foo (or patched_foo, it's the same dict)
...     patched_foo['spam'] = 'eggs'
...
>>> assert foo == {}
>>> assert patched_foo == {}
```

```
>>> import os
>>> with patch.dict('os.environ', {'newkey': 'newvalue'}):
...     print(os.environ['newkey'])
...
newvalue
>>> assert 'newkey' not in os.environ
```

Keywords can be used in the [`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") call to set values in the dictionary:

```
>>> mymodule = MagicMock()
>>> mymodule.function.return_value = 'fish'
>>> with patch.dict('sys.modules', mymodule=mymodule):
...     import mymodule
...     mymodule.function('some', 'args')
...
'fish'
```

[`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") can be used with dictionary like objects that aren’t actually dictionaries. At the very minimum they must support item getting, setting, deleting and either iteration or membership test.
This corresponds to the magic methods [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__"), [`__setitem__()`](../reference/datamodel#object.__setitem__ "object.__setitem__"), [`__delitem__()`](../reference/datamodel#object.__delitem__ "object.__delitem__") and either [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__") or [`__contains__()`](../reference/datamodel#object.__contains__ "object.__contains__").

```
>>> class Container:
...     def __init__(self):
...         self.values = {}
...     def __getitem__(self, name):
...         return self.values[name]
...     def __setitem__(self, name, value):
...         self.values[name] = value
...     def __delitem__(self, name):
...         del self.values[name]
...     def __iter__(self):
...         return iter(self.values)
...
>>> thing = Container()
>>> thing['one'] = 1
>>> with patch.dict(thing, one=2, two=3):
...     assert thing['one'] == 2
...     assert thing['two'] == 3
...
>>> assert thing['one'] == 1
>>> assert list(thing) == ['one']
```

### patch.multiple

`patch.multiple(target, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs)`

Perform multiple patches in a single call. It takes the object to be patched (either as an object or a string to fetch the object by importing) and keyword arguments for the patches:

```
with patch.multiple(settings, FIRST_PATCH='one', SECOND_PATCH='two'):
    ...
```

Use [`DEFAULT`](#unittest.mock.DEFAULT "unittest.mock.DEFAULT") as the value if you want [`patch.multiple()`](#unittest.mock.patch.multiple "unittest.mock.patch.multiple") to create mocks for you. In this case the created mocks are passed into a decorated function by keyword, and a dictionary is returned when [`patch.multiple()`](#unittest.mock.patch.multiple "unittest.mock.patch.multiple") is used as a context manager.

[`patch.multiple()`](#unittest.mock.patch.multiple "unittest.mock.patch.multiple") can be used as a decorator, class decorator or a context manager. The arguments *spec*, *spec\_set*, *create*, *autospec* and *new\_callable* have the same meaning as for [`patch()`](#unittest.mock.patch "unittest.mock.patch"). These arguments will be applied to *all* patches done by [`patch.multiple()`](#unittest.mock.patch.multiple "unittest.mock.patch.multiple").

When used as a class decorator [`patch.multiple()`](#unittest.mock.patch.multiple "unittest.mock.patch.multiple") honours `patch.TEST_PREFIX` for choosing which methods to wrap.

For example, when [`patch.multiple()`](#unittest.mock.patch.multiple "unittest.mock.patch.multiple") is used as a decorator with [`DEFAULT`](#unittest.mock.DEFAULT "unittest.mock.DEFAULT") values, the created mocks are passed into the decorated function by keyword:

```
>>> thing = object()
>>> other = object()
>>> @patch.multiple('__main__', thing=DEFAULT, other=DEFAULT)
... def test_function(thing, other):
...     assert isinstance(thing, MagicMock)
...     assert isinstance(other, MagicMock)
...
>>> test_function()
```

[`patch.multiple()`](#unittest.mock.patch.multiple "unittest.mock.patch.multiple") can be nested with other `patch` decorators, but put arguments passed by keyword *after* any of the standard arguments created by [`patch()`](#unittest.mock.patch "unittest.mock.patch"):

```
>>> @patch('sys.exit')
... @patch.multiple('__main__', thing=DEFAULT, other=DEFAULT)
... def test_function(mock_exit, other, thing):
...     assert 'other' in repr(other)
...
assert 'thing' in repr(thing) ... assert 'exit' in repr(mock_exit) ... >>> test_function() ``` If [`patch.multiple()`](#unittest.mock.patch.multiple "unittest.mock.patch.multiple") is used as a context manager, the value returned by the context manager is a dictionary where created mocks are keyed by name: ``` >>> with patch.multiple('__main__', thing=DEFAULT, other=DEFAULT) as values: ... assert 'other' in repr(values['other']) ... assert 'thing' in repr(values['thing']) ... assert values['thing'] is thing ... assert values['other'] is other ... ``` ### patch methods: start and stop All the patchers have `start()` and `stop()` methods. These make it simpler to do patching in `setUp` methods or where you want to do multiple patches without nesting decorators or with statements. To use them call [`patch()`](#unittest.mock.patch "unittest.mock.patch"), [`patch.object()`](#unittest.mock.patch.object "unittest.mock.patch.object") or [`patch.dict()`](#unittest.mock.patch.dict "unittest.mock.patch.dict") as normal and keep a reference to the returned `patcher` object. You can then call `start()` to put the patch in place and `stop()` to undo it. If you are using [`patch()`](#unittest.mock.patch "unittest.mock.patch") to create a mock for you then it will be returned by the call to `patcher.start`. ``` >>> patcher = patch('package.module.ClassName') >>> from package import module >>> original = module.ClassName >>> new_mock = patcher.start() >>> assert module.ClassName is not original >>> assert module.ClassName is new_mock >>> patcher.stop() >>> assert module.ClassName is original >>> assert module.ClassName is not new_mock ``` A typical use case for this might be for doing multiple patches in the `setUp` method of a `TestCase`: ``` >>> class MyTest(unittest.TestCase): ... def setUp(self): ... self.patcher1 = patch('package.module.Class1') ... self.patcher2 = patch('package.module.Class2') ... self.MockClass1 = self.patcher1.start() ... self.MockClass2 = self.patcher2.start() ... ... def tearDown(self): ... self.patcher1.stop() ... self.patcher2.stop() ... ... def test_something(self): ... assert package.module.Class1 is self.MockClass1 ... assert package.module.Class2 is self.MockClass2 ... >>> MyTest('test_something').run() ``` Caution If you use this technique you must ensure that the patching is “undone” by calling `stop`. This can be fiddlier than you might think, because if an exception is raised in the `setUp` then `tearDown` is not called. [`unittest.TestCase.addCleanup()`](unittest#unittest.TestCase.addCleanup "unittest.TestCase.addCleanup") makes this easier: ``` >>> class MyTest(unittest.TestCase): ... def setUp(self): ... patcher = patch('package.module.Class') ... self.MockClass = patcher.start() ... self.addCleanup(patcher.stop) ... ... def test_something(self): ... assert package.module.Class is self.MockClass ... ``` As an added bonus you no longer need to keep a reference to the `patcher` object. It is also possible to stop all patches which have been started by using [`patch.stopall()`](#unittest.mock.patch.stopall "unittest.mock.patch.stopall"). `patch.stopall()` Stop all active patches. Only stops patches started with `start`. ### patch builtins You can patch any builtins within a module. The following example patches builtin [`ord()`](functions#ord "ord"): ``` >>> @patch('__main__.ord') ... def test(mock_ord): ... mock_ord.return_value = 101 ... print(ord('c')) ... >>> test() 101 ``` ### TEST\_PREFIX All of the patchers can be used as class decorators. 
When used in this way they wrap every test method on the class. The patchers recognise methods that start with `'test'` as being test methods. This is the same way that the [`unittest.TestLoader`](unittest#unittest.TestLoader "unittest.TestLoader") finds test methods by default.

It is possible that you want to use a different prefix for your tests. You can inform the patchers of the different prefix by setting `patch.TEST_PREFIX`:

```
>>> patch.TEST_PREFIX = 'foo'
>>> value = 3
>>>
>>> @patch('__main__.value', 'not three')
... class Thing:
...     def foo_one(self):
...         print(value)
...     def foo_two(self):
...         print(value)
...
>>>
>>> Thing().foo_one()
not three
>>> Thing().foo_two()
not three
>>> value
3
```

### Nesting Patch Decorators

If you want to perform multiple patches then you can simply stack up the decorators:

```
>>> @patch.object(SomeClass, 'class_method')
... @patch.object(SomeClass, 'static_method')
... def test(mock1, mock2):
...     assert SomeClass.static_method is mock1
...     assert SomeClass.class_method is mock2
...     SomeClass.static_method('foo')
...     SomeClass.class_method('bar')
...     return mock1, mock2
...
>>> mock1, mock2 = test()
>>> mock1.assert_called_once_with('foo')
>>> mock2.assert_called_once_with('bar')
```

Note that the decorators are applied from the bottom upwards. This is the standard way that Python applies decorators. The order of the created mocks passed into your test function matches this order.

### Where to patch

[`patch()`](#unittest.mock.patch "unittest.mock.patch") works by (temporarily) changing the object that a *name* points to with another one. There can be many names pointing to any individual object, so for patching to work you must ensure that you patch the name used by the system under test.

The basic principle is that you patch where an object is *looked up*, which is not necessarily the same place as where it is defined. A couple of examples will help to clarify this.

Imagine we have a project that we want to test with the following structure:

```
a.py
    -> Defines SomeClass

b.py
    -> from a import SomeClass
    -> some_function instantiates SomeClass
```

Now we want to test `some_function` but we want to mock out `SomeClass` using [`patch()`](#unittest.mock.patch "unittest.mock.patch"). The problem is that when we import module b, which we will have to do, it imports `SomeClass` from module a. If we use [`patch()`](#unittest.mock.patch "unittest.mock.patch") to mock out `a.SomeClass` then it will have no effect on our test; module b already has a reference to the *real* `SomeClass` and it looks like our patching had no effect.

The key is to patch out `SomeClass` where it is used (or where it is looked up). In this case `some_function` will actually look up `SomeClass` in module b, where we have imported it. The patching should look like:

```
@patch('b.SomeClass')
```

However, consider the alternative scenario where instead of `from a import SomeClass` module b does `import a` and `some_function` uses `a.SomeClass`. Both of these import forms are common. In this case the class we want to patch is being looked up in the module and so we have to patch `a.SomeClass` instead:

```
@patch('a.SomeClass')
```

### Patching Descriptors and Proxy Objects

Both [patch](#patch) and [patch.object](#patch-object) correctly patch and restore descriptors: class methods, static methods and properties. You should patch these on the *class* rather than an instance.
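For example, a static method can be patched on the class and is restored once the patch is undone. A minimal sketch, with a hypothetical `MyClass` standing in for the object under test:

```
>>> class MyClass:                      # hypothetical class for the example
...     @staticmethod
...     def static_method():
...         return 'original'
...
>>> with patch.object(MyClass, 'static_method', return_value='patched'):
...     assert MyClass.static_method() == 'patched'
...
>>> MyClass.static_method()
'original'
```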
Both helpers also work with *some* objects that proxy attribute access, like the [django settings object](http://www.voidspace.org.uk/python/weblog/arch_d7_2010_12_04.shtml#e1198).

MagicMock and magic method support
----------------------------------

### Mocking Magic Methods

[`Mock`](#unittest.mock.Mock "unittest.mock.Mock") supports mocking the Python protocol methods, also known as “magic methods”. This allows mock objects to replace containers or other objects that implement Python protocols.

Because magic methods are looked up differently from normal methods [2](#id9), this support has been specially implemented. This means that only specific magic methods are supported. The supported list includes *almost* all of them. If there are any missing that you need please let us know.

You mock magic methods by setting the method you are interested in to a function or a mock instance. If you are using a function then it *must* take `self` as the first argument [3](#id10).

```
>>> def __str__(self):
...     return 'fooble'
...
>>> mock = Mock()
>>> mock.__str__ = __str__
>>> str(mock)
'fooble'
```

```
>>> mock = Mock()
>>> mock.__str__ = Mock()
>>> mock.__str__.return_value = 'fooble'
>>> str(mock)
'fooble'
```

```
>>> mock = Mock()
>>> mock.__iter__ = Mock(return_value=iter([]))
>>> list(mock)
[]
```

One use case for this is for mocking objects used as context managers in a [`with`](../reference/compound_stmts#with) statement:

```
>>> mock = Mock()
>>> mock.__enter__ = Mock(return_value='foo')
>>> mock.__exit__ = Mock(return_value=False)
>>> with mock as m:
...     assert m == 'foo'
...
>>> mock.__enter__.assert_called_with()
>>> mock.__exit__.assert_called_with(None, None, None)
```

Calls to magic methods do not appear in [`method_calls`](#unittest.mock.Mock.method_calls "unittest.mock.Mock.method_calls"), but they are recorded in [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls").

Note

If you use the *spec* keyword argument to create a mock then attempting to set a magic method that isn’t in the spec will raise an [`AttributeError`](exceptions#AttributeError "AttributeError").

The full list of supported magic methods is:

* `__hash__`, `__sizeof__`, `__repr__` and `__str__`
* `__dir__`, `__format__` and `__subclasses__`
* `__round__`, `__floor__`, `__trunc__` and `__ceil__`
* Comparisons: `__lt__`, `__gt__`, `__le__`, `__ge__`, `__eq__` and `__ne__`
* Container methods: `__getitem__`, `__setitem__`, `__delitem__`, `__contains__`, `__len__`, `__iter__`, `__reversed__` and `__missing__`
* Context manager: `__enter__`, `__exit__`, `__aenter__` and `__aexit__`
* Unary numeric methods: `__neg__`, `__pos__` and `__invert__`
* The numeric methods (including right hand and in-place variants): `__add__`, `__sub__`, `__mul__`, `__matmul__`, `__truediv__`, `__floordiv__`, `__mod__`, `__divmod__`, `__lshift__`, `__rshift__`, `__and__`, `__xor__`, `__or__`, and `__pow__`
* Numeric conversion methods: `__complex__`, `__int__`, `__float__` and `__index__`
* Descriptor methods: `__get__`, `__set__` and `__delete__`
* Pickling: `__reduce__`, `__reduce_ex__`, `__getinitargs__`, `__getnewargs__`, `__getstate__` and `__setstate__`
* File system path representation: `__fspath__`
* Asynchronous iteration methods: `__aiter__` and `__anext__`

Changed in version 3.8: Added support for [`os.PathLike.__fspath__()`](os#os.PathLike.__fspath__ "os.PathLike.__fspath__").

Changed in version 3.8: Added support for `__aenter__`, `__aexit__`, `__aiter__` and `__anext__`.
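The asynchronous protocol methods can be configured in the same way as their synchronous counterparts. A minimal sketch of mocking an asynchronous context manager (it assumes `asyncio` and `AsyncMock` are already imported, as in the `AsyncMock` examples):

```
>>> # assumes: import asyncio; from unittest.mock import AsyncMock, MagicMock
>>> mock = MagicMock()
>>> mock.__aenter__ = AsyncMock(return_value='foo')
>>> mock.__aexit__ = AsyncMock(return_value=False)
>>> async def main():
...     async with mock as m:
...         assert m == 'foo'
...
>>> asyncio.run(main())
>>> mock.__aenter__.assert_awaited_once()
```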
The following methods exist but are *not* supported as they are either in use by mock, can’t be set dynamically, or can cause problems:

* `__getattr__`, `__setattr__`, `__init__` and `__new__`
* `__prepare__`, `__instancecheck__`, `__subclasscheck__`, `__del__`

### Magic Mock

There are two `MagicMock` variants: [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") and [`NonCallableMagicMock`](#unittest.mock.NonCallableMagicMock "unittest.mock.NonCallableMagicMock").

`class unittest.mock.MagicMock(*args, **kw)`

`MagicMock` is a subclass of [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") with default implementations of most of the magic methods. You can use `MagicMock` without having to configure the magic methods yourself.

The constructor parameters have the same meaning as for [`Mock`](#unittest.mock.Mock "unittest.mock.Mock").

If you use the *spec* or *spec\_set* arguments then *only* magic methods that exist in the spec will be created.

`class unittest.mock.NonCallableMagicMock(*args, **kw)`

A non-callable version of [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock").

The constructor parameters have the same meaning as for [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock"), with the exception of *return\_value* and *side\_effect* which have no meaning on a non-callable mock.

The magic methods are set up with [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") objects, so you can configure them and use them in the usual way:

```
>>> mock = MagicMock()
>>> mock[3] = 'fish'
>>> mock.__setitem__.assert_called_with(3, 'fish')
>>> mock.__getitem__.return_value = 'result'
>>> mock[2]
'result'
```

By default many of the protocol methods are required to return objects of a specific type. These methods are preconfigured with a default return value, so that they can be used without you having to do anything if you aren’t interested in the return value. You can still *set* the return value manually if you want to change the default.

Methods and their defaults:

* `__lt__`: `NotImplemented`
* `__gt__`: `NotImplemented`
* `__le__`: `NotImplemented`
* `__ge__`: `NotImplemented`
* `__int__`: `1`
* `__contains__`: `False`
* `__len__`: `0`
* `__iter__`: `iter([])`
* `__exit__`: `False`
* `__aexit__`: `False`
* `__complex__`: `1j`
* `__float__`: `1.0`
* `__bool__`: `True`
* `__index__`: `1`
* `__hash__`: default hash for the mock
* `__str__`: default str for the mock
* `__sizeof__`: default sizeof for the mock

For example:

```
>>> mock = MagicMock()
>>> int(mock)
1
>>> len(mock)
0
>>> list(mock)
[]
>>> object() in mock
False
```

The two equality methods, [`__eq__()`](../reference/datamodel#object.__eq__ "object.__eq__") and [`__ne__()`](../reference/datamodel#object.__ne__ "object.__ne__"), are special.
They do the default equality comparison on identity, using the [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") attribute, unless you change their return value to return something else:

```
>>> MagicMock() == 3
False
>>> MagicMock() != 3
True
>>> mock = MagicMock()
>>> mock.__eq__.return_value = True
>>> mock == 3
True
```

The return value of `MagicMock.__iter__()` can be any iterable object and isn’t required to be an iterator:

```
>>> mock = MagicMock()
>>> mock.__iter__.return_value = ['a', 'b', 'c']
>>> list(mock)
['a', 'b', 'c']
>>> list(mock)
['a', 'b', 'c']
```

If the return value *is* an iterator, then iterating over it once will consume it and subsequent iterations will result in an empty list:

```
>>> mock.__iter__.return_value = iter(['a', 'b', 'c'])
>>> list(mock)
['a', 'b', 'c']
>>> list(mock)
[]
```

`MagicMock` has all of the supported magic methods configured except for some of the obscure and obsolete ones. You can still set these up if you want.

Magic methods that are supported but not set up by default in `MagicMock` are:

* `__subclasses__`
* `__dir__`
* `__format__`
* `__get__`, `__set__` and `__delete__`
* `__reversed__` and `__missing__`
* `__reduce__`, `__reduce_ex__`, `__getinitargs__`, `__getnewargs__`, `__getstate__` and `__setstate__`
* `__getformat__` and `__setformat__`

`2` Magic methods *should* be looked up on the class rather than the instance. Different versions of Python are inconsistent about applying this rule. The supported protocol methods should work with all supported versions of Python.

`3` The function is basically hooked up to the class, but each `Mock` instance is kept isolated from the others.

Helpers
-------

### sentinel

`unittest.mock.sentinel`

The `sentinel` object provides a convenient way of providing unique objects for your tests.

Attributes are created on demand when you access them by name. Accessing the same attribute will always return the same object. The objects returned have a sensible repr so that test failure messages are readable.

Changed in version 3.7: The `sentinel` attributes now preserve their identity when they are [`copied`](copy#module-copy "copy: Shallow and deep copy operations.") or [`pickled`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.").

Sometimes when testing you need to test that a specific object is passed as an argument to another method, or returned. It can be common to create named sentinel objects to test this. [`sentinel`](#unittest.mock.sentinel "unittest.mock.sentinel") provides a convenient way of creating and testing the identity of objects like this.

In this example we monkey patch `method` to return `sentinel.some_object`:

```
>>> real = ProductionClass()
>>> real.method = Mock(name="method")
>>> real.method.return_value = sentinel.some_object
>>> result = real.method()
>>> assert result is sentinel.some_object
>>> result
sentinel.some_object
```

### DEFAULT

`unittest.mock.DEFAULT`

The [`DEFAULT`](#unittest.mock.DEFAULT "unittest.mock.DEFAULT") object is a pre-created sentinel (actually `sentinel.DEFAULT`). It can be used by [`side_effect`](#unittest.mock.Mock.side_effect "unittest.mock.Mock.side_effect") functions to indicate that the normal return value should be used.
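A minimal sketch of that pattern, with a hypothetical `side_effect` that only intercepts one particular argument and falls back to the normal return value otherwise:

```
>>> m = MagicMock(return_value='normal')
>>> def side_effect(arg):
...     if arg == 'special':        # hypothetical special case for the example
...         return 'special-case'
...     return DEFAULT              # fall back to m.return_value
...
>>> m.side_effect = side_effect
>>> m('special')
'special-case'
>>> m('anything else')
'normal'
```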
### call `unittest.mock.call(*args, **kwargs)` [`call()`](#unittest.mock.call "unittest.mock.call") is a helper object for making simpler assertions, for comparing with [`call_args`](#unittest.mock.Mock.call_args "unittest.mock.Mock.call_args"), [`call_args_list`](#unittest.mock.Mock.call_args_list "unittest.mock.Mock.call_args_list"), [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") and [`method_calls`](#unittest.mock.Mock.method_calls "unittest.mock.Mock.method_calls"). [`call()`](#unittest.mock.call "unittest.mock.call") can also be used with [`assert_has_calls()`](#unittest.mock.Mock.assert_has_calls "unittest.mock.Mock.assert_has_calls"). ``` >>> m = MagicMock(return_value=None) >>> m(1, 2, a='foo', b='bar') >>> m() >>> m.call_args_list == [call(1, 2, a='foo', b='bar'), call()] True ``` `call.call_list()` For a call object that represents multiple calls, [`call_list()`](#unittest.mock.call.call_list "unittest.mock.call.call_list") returns a list of all the intermediate calls as well as the final call. `call_list` is particularly useful for making assertions on “chained calls”. A chained call is multiple calls on a single line of code. This results in multiple entries in [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") on a mock. Manually constructing the sequence of calls can be tedious. [`call_list()`](#unittest.mock.call.call_list "unittest.mock.call.call_list") can construct the sequence of calls from the same chained call: ``` >>> m = MagicMock() >>> m(1).method(arg='foo').other('bar')(2.0) <MagicMock name='mock().method().other()()' id='...'> >>> kall = call(1).method(arg='foo').other('bar')(2.0) >>> kall.call_list() [call(1), call().method(arg='foo'), call().method().other('bar'), call().method().other()(2.0)] >>> m.mock_calls == kall.call_list() True ``` A `call` object is either a tuple of (positional args, keyword args) or (name, positional args, keyword args) depending on how it was constructed. When you construct them yourself this isn’t particularly interesting, but the `call` objects that are in the [`Mock.call_args`](#unittest.mock.Mock.call_args "unittest.mock.Mock.call_args"), [`Mock.call_args_list`](#unittest.mock.Mock.call_args_list "unittest.mock.Mock.call_args_list") and [`Mock.mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls") attributes can be introspected to get at the individual arguments they contain. The `call` objects in [`Mock.call_args`](#unittest.mock.Mock.call_args "unittest.mock.Mock.call_args") and [`Mock.call_args_list`](#unittest.mock.Mock.call_args_list "unittest.mock.Mock.call_args_list") are two-tuples of (positional args, keyword args) whereas the `call` objects in [`Mock.mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls"), along with ones you construct yourself, are three-tuples of (name, positional args, keyword args). You can use their “tupleness” to pull out the individual arguments for more complex introspection and assertions. 
The positional arguments are a tuple (an empty tuple if there are no positional arguments) and the keyword arguments are a dictionary:

```
>>> m = MagicMock(return_value=None)
>>> m(1, 2, 3, arg='one', arg2='two')
>>> kall = m.call_args
>>> kall.args
(1, 2, 3)
>>> kall.kwargs
{'arg': 'one', 'arg2': 'two'}
>>> kall.args is kall[0]
True
>>> kall.kwargs is kall[1]
True
```

```
>>> m = MagicMock()
>>> m.foo(4, 5, 6, arg='two', arg2='three')
<MagicMock name='mock.foo()' id='...'>
>>> kall = m.mock_calls[0]
>>> name, args, kwargs = kall
>>> name
'foo'
>>> args
(4, 5, 6)
>>> kwargs
{'arg': 'two', 'arg2': 'three'}
>>> name is m.mock_calls[0][0]
True
```

### create\_autospec

`unittest.mock.create_autospec(spec, spec_set=False, instance=False, **kwargs)`

Create a mock object using another object as a spec. Attributes on the mock will use the corresponding attribute on the *spec* object as their spec.

Functions or methods being mocked will have their arguments checked to ensure that they are called with the correct signature.

If *spec\_set* is `True` then attempting to set attributes that don’t exist on the spec object will raise an [`AttributeError`](exceptions#AttributeError "AttributeError").

If a class is used as a spec then the return value of the mock (the instance of the class) will have the same spec. You can use a class as the spec for an instance object by passing `instance=True`. The returned mock will only be callable if instances of the mock are callable.

[`create_autospec()`](#unittest.mock.create_autospec "unittest.mock.create_autospec") also takes arbitrary keyword arguments that are passed to the constructor of the created mock.

See [Autospeccing](#auto-speccing) for examples of how to use auto-speccing with [`create_autospec()`](#unittest.mock.create_autospec "unittest.mock.create_autospec") and the *autospec* argument to [`patch()`](#unittest.mock.patch "unittest.mock.patch").

Changed in version 3.8: [`create_autospec()`](#unittest.mock.create_autospec "unittest.mock.create_autospec") now returns an [`AsyncMock`](#unittest.mock.AsyncMock "unittest.mock.AsyncMock") if the target is an async function.

### ANY

`unittest.mock.ANY`

Sometimes you may need to make assertions about *some* of the arguments in a call to mock, but either not care about some of the arguments or want to pull them individually out of [`call_args`](#unittest.mock.Mock.call_args "unittest.mock.Mock.call_args") and make more complex assertions on them.

To ignore certain arguments you can pass in objects that compare equal to *everything*. Calls to [`assert_called_with()`](#unittest.mock.Mock.assert_called_with "unittest.mock.Mock.assert_called_with") and [`assert_called_once_with()`](#unittest.mock.Mock.assert_called_once_with "unittest.mock.Mock.assert_called_once_with") will then succeed no matter what was passed in.

```
>>> mock = Mock(return_value=None)
>>> mock('foo', bar=object())
>>> mock.assert_called_once_with('foo', bar=ANY)
```

[`ANY`](#unittest.mock.ANY "unittest.mock.ANY") can also be used in comparisons with call lists like [`mock_calls`](#unittest.mock.Mock.mock_calls "unittest.mock.Mock.mock_calls"):

```
>>> m = MagicMock(return_value=None)
>>> m(1)
>>> m(1, 2)
>>> m(object())
>>> m.mock_calls == [call(1), call(1, 2), ANY]
True
```

### FILTER\_DIR

`unittest.mock.FILTER_DIR`

[`FILTER_DIR`](#unittest.mock.FILTER_DIR "unittest.mock.FILTER_DIR") is a module level variable that controls the way mock objects respond to [`dir()`](functions#dir "dir").
The default is `True`, which uses the filtering described below, to only show useful members. If you dislike this filtering, or need to switch it off for diagnostic purposes, then set `mock.FILTER_DIR = False`. With filtering on, `dir(some_mock)` shows only useful attributes and will include any dynamically created attributes that wouldn’t normally be shown. If the mock was created with a *spec* (or *autospec* of course) then all the attributes from the original are shown, even if they haven’t been accessed yet: ``` >>> dir(Mock()) ['assert_any_call', 'assert_called', 'assert_called_once', 'assert_called_once_with', 'assert_called_with', 'assert_has_calls', 'assert_not_called', 'attach_mock', ... >>> from urllib import request >>> dir(Mock(spec=request)) ['AbstractBasicAuthHandler', 'AbstractDigestAuthHandler', 'AbstractHTTPHandler', 'BaseHandler', ... ``` Many of the not-very-useful (private to [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") rather than the thing being mocked) underscore and double underscore prefixed attributes have been filtered from the result of calling [`dir()`](functions#dir "dir") on a [`Mock`](#unittest.mock.Mock "unittest.mock.Mock"). If you dislike this behaviour you can switch it off by setting the module level switch [`FILTER_DIR`](#unittest.mock.FILTER_DIR "unittest.mock.FILTER_DIR"): ``` >>> from unittest import mock >>> mock.FILTER_DIR = False >>> dir(mock.Mock()) ['_NonCallableMock__get_return_value', '_NonCallableMock__get_side_effect', '_NonCallableMock__return_value_doc', '_NonCallableMock__set_return_value', '_NonCallableMock__set_side_effect', '__call__', '__class__', ... ``` Alternatively you can just use `vars(my_mock)` (instance members) and `dir(type(my_mock))` (type members) to bypass the filtering irrespective of `mock.FILTER_DIR`. ### mock\_open `unittest.mock.mock_open(mock=None, read_data=None)` A helper function to create a mock to replace the use of [`open()`](functions#open "open"). It works for [`open()`](functions#open "open") called directly or used as a context manager. The *mock* argument is the mock object to configure. If `None` (the default) then a [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") will be created for you, with the API limited to methods or attributes available on standard file handles. *read\_data* is a string for the `read()`, [`readline()`](io#io.IOBase.readline "io.IOBase.readline"), and [`readlines()`](io#io.IOBase.readlines "io.IOBase.readlines") methods of the file handle to return. Calls to those methods will take data from *read\_data* until it is depleted. The mock of these methods is pretty simplistic: every time the *mock* is called, the *read\_data* is rewound to the start. If you need more control over the data that you are feeding to the tested code you will need to customize this mock for yourself. When that is insufficient, one of the in-memory filesystem packages on [PyPI](https://pypi.org) can offer a realistic filesystem for testing. Changed in version 3.4: Added [`readline()`](io#io.IOBase.readline "io.IOBase.readline") and [`readlines()`](io#io.IOBase.readlines "io.IOBase.readlines") support. The mock of `read()` changed to consume *read\_data* rather than returning it on each call. Changed in version 3.5: *read\_data* is now reset on each call to the *mock*. Changed in version 3.8: Added [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__") to implementation so that iteration (such as in for loops) correctly consumes *read\_data*. 
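For example, because *read\_data* is rewound on each call to the mock, two separately opened handles both read from the start. A minimal sketch (the path `'foo'` is just a placeholder):

```
>>> m = mock_open(read_data='first line\nsecond line\n')
>>> with patch('__main__.open', m):
...     with open('foo') as h:
...         h.readline()
...     with open('foo') as h:
...         h.readline()
...
'first line\n'
'first line\n'
```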
Using [`open()`](functions#open "open") as a context manager is a great way to ensure your file handles are closed properly and is becoming common: ``` with open('/some/path', 'w') as f: f.write('something') ``` The issue is that even if you mock out the call to [`open()`](functions#open "open") it is the *returned object* that is used as a context manager (and has [`__enter__()`](../reference/datamodel#object.__enter__ "object.__enter__") and [`__exit__()`](../reference/datamodel#object.__exit__ "object.__exit__") called). Mocking context managers with a [`MagicMock`](#unittest.mock.MagicMock "unittest.mock.MagicMock") is common enough and fiddly enough that a helper function is useful. ``` >>> m = mock_open() >>> with patch('__main__.open', m): ... with open('foo', 'w') as h: ... h.write('some stuff') ... >>> m.mock_calls [call('foo', 'w'), call().__enter__(), call().write('some stuff'), call().__exit__(None, None, None)] >>> m.assert_called_once_with('foo', 'w') >>> handle = m() >>> handle.write.assert_called_once_with('some stuff') ``` And for reading files: ``` >>> with patch('__main__.open', mock_open(read_data='bibble')) as m: ... with open('foo') as h: ... result = h.read() ... >>> m.assert_called_once_with('foo') >>> assert result == 'bibble' ``` ### Autospeccing Autospeccing is based on the existing `spec` feature of mock. It limits the api of mocks to the api of an original object (the spec), but it is recursive (implemented lazily) so that attributes of mocks only have the same api as the attributes of the spec. In addition mocked functions / methods have the same call signature as the original so they raise a [`TypeError`](exceptions#TypeError "TypeError") if they are called incorrectly. Before I explain how auto-speccing works, here’s why it is needed. [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") is a very powerful and flexible object, but it suffers from two flaws when used to mock out objects from a system under test. One of these flaws is specific to the [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") api and the other is a more general problem with using mock objects. First the problem specific to [`Mock`](#unittest.mock.Mock "unittest.mock.Mock"). [`Mock`](#unittest.mock.Mock "unittest.mock.Mock") has two assert methods that are extremely handy: [`assert_called_with()`](#unittest.mock.Mock.assert_called_with "unittest.mock.Mock.assert_called_with") and [`assert_called_once_with()`](#unittest.mock.Mock.assert_called_once_with "unittest.mock.Mock.assert_called_once_with"). ``` >>> mock = Mock(name='Thing', return_value=None) >>> mock(1, 2, 3) >>> mock.assert_called_once_with(1, 2, 3) >>> mock(1, 2, 3) >>> mock.assert_called_once_with(1, 2, 3) Traceback (most recent call last): ... AssertionError: Expected 'mock' to be called once. Called 2 times. ``` Because mocks auto-create attributes on demand, and allow you to call them with arbitrary arguments, if you misspell one of these assert methods then your assertion is gone: ``` >>> mock = Mock(name='Thing', return_value=None) >>> mock(1, 2, 3) >>> mock.assret_called_once_with(4, 5, 6) # Intentional typo! ``` Your tests can pass silently and incorrectly because of the typo. The second issue is more general to mocking. If you refactor some of your code, rename members and so on, any tests for code that is still using the *old api* but uses mocks instead of the real objects will still pass. This means your tests can all pass even though your code is broken. 
Note that this is another reason why you need integration tests as well as unit tests. Testing everything in isolation is all fine and dandy, but if you don’t test how your units are “wired together” there is still lots of room for bugs that tests might have caught. `mock` already provides a feature to help with this, called speccing. If you use a class or instance as the `spec` for a mock then you can only access attributes on the mock that exist on the real class: ``` >>> from urllib import request >>> mock = Mock(spec=request.Request) >>> mock.assret_called_with # Intentional typo! Traceback (most recent call last): ... AttributeError: Mock object has no attribute 'assret_called_with' ``` The spec only applies to the mock itself, so we still have the same issue with any methods on the mock: ``` >>> mock.has_data() <mock.Mock object at 0x...> >>> mock.has_data.assret_called_with() # Intentional typo! ``` Auto-speccing solves this problem. You can either pass `autospec=True` to [`patch()`](#unittest.mock.patch "unittest.mock.patch") / [`patch.object()`](#unittest.mock.patch.object "unittest.mock.patch.object") or use the [`create_autospec()`](#unittest.mock.create_autospec "unittest.mock.create_autospec") function to create a mock with a spec. If you use the `autospec=True` argument to [`patch()`](#unittest.mock.patch "unittest.mock.patch") then the object that is being replaced will be used as the spec object. Because the speccing is done “lazily” (the spec is created as attributes on the mock are accessed) you can use it with very complex or deeply nested objects (like modules that import modules that import modules) without a big performance hit. Here’s an example of it in use: ``` >>> from urllib import request >>> patcher = patch('__main__.request', autospec=True) >>> mock_request = patcher.start() >>> request is mock_request True >>> mock_request.Request <MagicMock name='request.Request' spec='Request' id='...'> ``` You can see that `request.Request` has a spec. `request.Request` takes two arguments in the constructor (one of which is *self*). Here’s what happens if we try to call it incorrectly: ``` >>> req = request.Request() Traceback (most recent call last): ... TypeError: <lambda>() takes at least 2 arguments (1 given) ``` The spec also applies to instantiated classes (i.e. the return value of specced mocks): ``` >>> req = request.Request('foo') >>> req <NonCallableMagicMock name='request.Request()' spec='Request' id='...'> ``` `Request` objects are not callable, so the return value of instantiating our mocked out `request.Request` is a non-callable mock. With the spec in place any typos in our asserts will raise the correct error: ``` >>> req.add_header('spam', 'eggs') <MagicMock name='request.Request().add_header()' id='...'> >>> req.add_header.assret_called_with # Intentional typo! Traceback (most recent call last): ... AttributeError: Mock object has no attribute 'assret_called_with' >>> req.add_header.assert_called_with('spam', 'eggs') ``` In many cases you will just be able to add `autospec=True` to your existing [`patch()`](#unittest.mock.patch "unittest.mock.patch") calls and then be protected against bugs due to typos and api changes. 
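As a hedged, runnable sketch of that workflow — the choice of `json.dumps()` as the patch target is purely illustrative — a misspelled assert now fails loudly instead of passing silently:

```
>>> import json
>>> patcher = patch('json.dumps', autospec=True)
>>> mock_dumps = patcher.start()
>>> json.dumps({'a': 1})
<MagicMock name='dumps()' id='...'>
>>> mock_dumps.assert_called_once_with({'a': 1})
>>> mock_dumps.assret_called_once_with({'a': 1})  # Intentional typo!
Traceback (most recent call last):
  ...
AttributeError: Mock object has no attribute 'assret_called_once_with'
>>> patcher.stop()
```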
As well as using *autospec* through [`patch()`](#unittest.mock.patch "unittest.mock.patch"), there is a [`create_autospec()`](#unittest.mock.create_autospec "unittest.mock.create_autospec") for creating autospecced mocks directly: ``` >>> from urllib import request >>> mock_request = create_autospec(request) >>> mock_request.Request('foo', 'bar') <NonCallableMagicMock name='mock.Request()' spec='Request' id='...'> ``` This isn’t without caveats and limitations, however, which is why it is not the default behaviour. In order to know what attributes are available on the spec object, autospec has to introspect (access attributes) the spec. As you traverse attributes on the mock, a corresponding traversal of the original object is happening under the hood. If any of your specced objects have properties or descriptors that can trigger code execution then you may not be able to use autospec. On the other hand it is much better to design your objects so that introspection is safe [4](#id12). A more serious problem is that it is common for instance attributes to be created in the [`__init__()`](../reference/datamodel#object.__init__ "object.__init__") method and not to exist on the class at all. *autospec* can’t know about any dynamically created attributes and restricts the api to visible attributes. ``` >>> class Something: ... def __init__(self): ... self.a = 33 ... >>> with patch('__main__.Something', autospec=True): ... thing = Something() ... thing.a ... Traceback (most recent call last): ... AttributeError: Mock object has no attribute 'a' ``` There are a few different ways of resolving this problem. The easiest, but not necessarily the least annoying, way is to simply set the required attributes on the mock after creation. Just because *autospec* doesn’t allow you to fetch attributes that don’t exist on the spec, it doesn’t prevent you from setting them: ``` >>> with patch('__main__.Something', autospec=True): ... thing = Something() ... thing.a = 33 ... ``` There is a more aggressive version of both *spec* and *autospec* that *does* prevent you from setting non-existent attributes. This is useful if you want to ensure your code only *sets* valid attributes too, but obviously it prevents this particular scenario: ``` >>> with patch('__main__.Something', autospec=True, spec_set=True): ... thing = Something() ... thing.a = 33 ... Traceback (most recent call last): ... AttributeError: Mock object has no attribute 'a' ``` Probably the best way of solving the problem is to add class attributes as default values for instance members initialised in [`__init__()`](../reference/datamodel#object.__init__ "object.__init__"). Note that if you are only setting default attributes in [`__init__()`](../reference/datamodel#object.__init__ "object.__init__") then providing them via class attributes (shared between instances of course) is faster too. e.g. ``` class Something: a = 33 ``` This brings up another issue. It is relatively common to provide a default value of `None` for members that will later be an object of a different type. `None` would be useless as a spec because it wouldn’t let you access *any* attributes or methods on it. As `None` is *never* going to be useful as a spec, and probably indicates a member that will normally be of some other type, autospec doesn’t use a spec for members that are set to `None`. These will just be ordinary mocks (well - MagicMocks): ``` >>> class Something: ... member = None ...
>>> mock = create_autospec(Something) >>> mock.member.foo.bar.baz() <MagicMock name='mock.member.foo.bar.baz()' id='...'> ``` If modifying your production classes to add defaults isn’t to your liking, then there are more options. One of these is simply to use an instance as the spec rather than the class. The other is to create a subclass of the production class and add the defaults to the subclass without affecting the production class. Both of these require you to use an alternative object as the spec. Thankfully [`patch()`](#unittest.mock.patch "unittest.mock.patch") supports this - you can simply pass the alternative object as the *autospec* argument: ``` >>> class Something: ... def __init__(self): ... self.a = 33 ... >>> class SomethingForTest(Something): ... a = 33 ... >>> p = patch('__main__.Something', autospec=SomethingForTest) >>> mock = p.start() >>> mock.a <NonCallableMagicMock name='Something.a' spec='int' id='...'> ``` `4` This only applies to classes or already instantiated objects. Calling a mocked class to create a mock instance *does not* create a real instance. It is only attribute lookups - along with calls to [`dir()`](functions#dir "dir") - that are done. ### Sealing mocks `unittest.mock.seal(mock)` Sealing disables the automatic creation of mocks when an attribute of the sealed mock is accessed, and does so recursively for any of its attributes that are already mocks. If a mock instance with a name or a spec is assigned to an attribute, it won’t be considered in the sealing chain. This allows one to keep part of the mock object out of the seal. ``` >>> mock = Mock() >>> mock.submock.attribute1 = 2 >>> mock.not_submock = mock.Mock(name="sample_name") >>> seal(mock) >>> mock.new_attribute # This will raise AttributeError. >>> mock.submock.attribute2 # This will raise AttributeError. >>> mock.not_submock.attribute2 # This won't raise. ``` New in version 3.7.
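For a slightly fuller, runnable sketch of sealing (the attribute names here are arbitrary), attributes created *before* sealing remain usable, while brand-new ones are refused:

```
>>> from unittest.mock import Mock, seal
>>> m = Mock()
>>> m.existing.value = 1  # created before sealing, so it is kept
>>> seal(m)
>>> m.existing.value
1
>>> m.brand_new  # This will raise AttributeError.
```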
python Data Compression and Archiving Data Compression and Archiving ============================== The modules described in this chapter support data compression with the zlib, gzip, bzip2 and lzma algorithms, and the creation of ZIP- and tar-format archives. See also [Archiving operations](shutil#archiving-operations) provided by the [`shutil`](shutil#module-shutil "shutil: High-level file operations, including copying.") module. * [`zlib` — Compression compatible with **gzip**](zlib) * [`gzip` — Support for **gzip** files](gzip) + [Examples of usage](gzip#examples-of-usage) + [Command Line Interface](gzip#command-line-interface) - [Command line options](gzip#command-line-options) * [`bz2` — Support for **bzip2** compression](bz2) + [(De)compression of files](bz2#de-compression-of-files) + [Incremental (de)compression](bz2#incremental-de-compression) + [One-shot (de)compression](bz2#one-shot-de-compression) + [Examples of usage](bz2#examples-of-usage) * [`lzma` — Compression using the LZMA algorithm](lzma) + [Reading and writing compressed files](lzma#reading-and-writing-compressed-files) + [Compressing and decompressing data in memory](lzma#compressing-and-decompressing-data-in-memory) + [Miscellaneous](lzma#miscellaneous) + [Specifying custom filter chains](lzma#specifying-custom-filter-chains) + [Examples](lzma#examples) * [`zipfile` — Work with ZIP archives](zipfile) + [ZipFile Objects](zipfile#zipfile-objects) + [Path Objects](zipfile#path-objects) + [PyZipFile Objects](zipfile#pyzipfile-objects) + [ZipInfo Objects](zipfile#zipinfo-objects) + [Command-Line Interface](zipfile#command-line-interface) - [Command-line options](zipfile#command-line-options) + [Decompression pitfalls](zipfile#decompression-pitfalls) - [From file itself](zipfile#from-file-itself) - [File System limitations](zipfile#file-system-limitations) - [Resources limitations](zipfile#resources-limitations) - [Interruption](zipfile#interruption) - [Default behaviors of extraction](zipfile#default-behaviors-of-extraction) * [`tarfile` — Read and write tar archive files](tarfile) + [TarFile Objects](tarfile#tarfile-objects) + [TarInfo Objects](tarfile#tarinfo-objects) + [Command-Line Interface](tarfile#command-line-interface) - [Command-line options](tarfile#command-line-options) + [Examples](tarfile#examples) + [Supported tar formats](tarfile#supported-tar-formats) + [Unicode issues](tarfile#unicode-issues) python xml.dom — The Document Object Model API xml.dom — The Document Object Model API ======================================= **Source code:** [Lib/xml/dom/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/xml/dom/__init__.py) The Document Object Model, or “DOM,” is a cross-language API from the World Wide Web Consortium (W3C) for accessing and modifying XML documents. A DOM implementation presents an XML document as a tree structure, or allows client code to build such a structure from scratch. It then gives access to the structure through a set of objects which provide well-known interfaces. The DOM is extremely useful for random-access applications. SAX only allows you a view of one bit of the document at a time. If you are looking at one SAX element, you have no access to another. If you are looking at a text node, you have no access to a containing element. When you write a SAX application, you need to keep track of your program’s position in the document somewhere in your own code. SAX does not do it for you. Also, if you need to look ahead in the XML document, you are just out of luck.
Some applications are simply impossible in an event-driven model with no access to a tree. Of course you could build some sort of tree yourself in SAX events, but the DOM allows you to avoid writing that code. The DOM is a standard tree representation for XML data. The Document Object Model is being defined by the W3C in stages, or “levels” in their terminology. The Python mapping of the API is substantially based on the DOM Level 2 recommendation. DOM applications typically start by parsing some XML into a DOM. How this is accomplished is not covered at all by DOM Level 1, and Level 2 provides only limited improvements: There is a `DOMImplementation` object class which provides access to `Document` creation methods, but no way to access an XML reader/parser/Document builder in an implementation-independent way. There is also no well-defined way to access these methods without an existing `Document` object. In Python, each DOM implementation will provide a function [`getDOMImplementation()`](#xml.dom.getDOMImplementation "xml.dom.getDOMImplementation"). DOM Level 3 adds a Load/Store specification, which defines an interface to the reader, but this is not yet available in the Python standard library. Once you have a DOM document object, you can access the parts of your XML document through its properties and methods. These properties are defined in the DOM specification; this portion of the reference manual describes the interpretation of the specification in Python. The specification provided by the W3C defines the DOM API for Java, ECMAScript, and OMG IDL. The Python mapping defined here is based in large part on the IDL version of the specification, but strict compliance is not required (though implementations are free to support the strict mapping from IDL). See section [Conformance](#dom-conformance) for a detailed discussion of mapping requirements. See also [Document Object Model (DOM) Level 2 Specification](https://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/) The W3C recommendation upon which the Python DOM API is based. [Document Object Model (DOM) Level 1 Specification](https://www.w3.org/TR/REC-DOM-Level-1/) The W3C recommendation for the DOM supported by [`xml.dom.minidom`](xml.dom.minidom#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation."). [Python Language Mapping Specification](https://www.omg.org/spec/PYTH/1.2/PDF) This specifies the mapping from OMG IDL to Python. Module Contents --------------- The [`xml.dom`](#module-xml.dom "xml.dom: Document Object Model API for Python.") module contains the following functions: `xml.dom.registerDOMImplementation(name, factory)` Register the *factory* function with the name *name*. The factory function should return an object which implements the `DOMImplementation` interface. The factory function can return the same object every time, or a new one for each call, as appropriate for the specific implementation (e.g. if that implementation supports some customization). `xml.dom.getDOMImplementation(name=None, features=())` Return a suitable DOM implementation. The *name* is either well-known, the module name of a DOM implementation, or `None`. If it is not `None`, imports the corresponding module and returns a `DOMImplementation` object if the import succeeds. If no name is given, and if the environment variable `PYTHON_DOM` is set, this variable is used to find the implementation. If name is not given, this examines the available implementations to find one with the required feature set.
If no implementation can be found, raise an [`ImportError`](exceptions#ImportError "ImportError"). The features list must be a sequence of `(feature, version)` pairs which are passed to the `hasFeature()` method on available `DOMImplementation` objects. Some convenience constants are also provided: `xml.dom.EMPTY_NAMESPACE` The value used to indicate that no namespace is associated with a node in the DOM. This is typically found as the `namespaceURI` of a node, or used as the *namespaceURI* parameter to a namespaces-specific method. `xml.dom.XML_NAMESPACE` The namespace URI associated with the reserved prefix `xml`, as defined by [Namespaces in XML](https://www.w3.org/TR/REC-xml-names/) (section 4). `xml.dom.XMLNS_NAMESPACE` The namespace URI for namespace declarations, as defined by [Document Object Model (DOM) Level 2 Core Specification](https://www.w3.org/TR/DOM-Level-2-Core/core.html) (section 1.1.8). `xml.dom.XHTML_NAMESPACE` The URI of the XHTML namespace as defined by [XHTML 1.0: The Extensible HyperText Markup Language](https://www.w3.org/TR/xhtml1/) (section 3.1.1). In addition, [`xml.dom`](#module-xml.dom "xml.dom: Document Object Model API for Python.") contains a base `Node` class and the DOM exception classes. The `Node` class provided by this module does not implement any of the methods or attributes defined by the DOM specification; concrete DOM implementations must provide those. The `Node` class provided as part of this module does provide the constants used for the `nodeType` attribute on concrete `Node` objects; they are located within the class rather than at the module level to conform with the DOM specifications. Objects in the DOM ------------------ The definitive documentation for the DOM is the DOM specification from the W3C. Note that DOM attributes may also be manipulated as nodes instead of as simple strings. It is fairly rare that you must do this, however, so this usage is not yet documented. | Interface | Section | Purpose | | --- | --- | --- | | `DOMImplementation` | [DOMImplementation Objects](#dom-implementation-objects) | Interface to the underlying implementation. | | `Node` | [Node Objects](#dom-node-objects) | Base interface for most objects in a document. | | `NodeList` | [NodeList Objects](#dom-nodelist-objects) | Interface for a sequence of nodes. | | `DocumentType` | [DocumentType Objects](#dom-documenttype-objects) | Information about the declarations needed to process a document. | | `Document` | [Document Objects](#dom-document-objects) | Object which represents an entire document. | | `Element` | [Element Objects](#dom-element-objects) | Element nodes in the document hierarchy. | | `Attr` | [Attr Objects](#dom-attr-objects) | Attribute value nodes on element nodes. | | `Comment` | [Comment Objects](#dom-comment-objects) | Representation of comments in the source document. | | `Text` | [Text and CDATASection Objects](#dom-text-objects) | Nodes containing textual content from the document. | | `ProcessingInstruction` | [ProcessingInstruction Objects](#dom-pi-objects) | Processing instruction representation. | An additional section describes the exceptions defined for working with the DOM in Python. ### DOMImplementation Objects The `DOMImplementation` interface provides a way for applications to determine the availability of particular features in the DOM they are using. DOM Level 2 added the ability to create new `Document` and `DocumentType` objects using the `DOMImplementation` as well. 
`DOMImplementation.hasFeature(feature, version)` Return `True` if the feature identified by the pair of strings *feature* and *version* is implemented. `DOMImplementation.createDocument(namespaceUri, qualifiedName, doctype)` Return a new `Document` object (the root of the DOM), with a child `Element` object having the given *namespaceUri* and *qualifiedName*. The *doctype* must be a `DocumentType` object created by [`createDocumentType()`](#xml.dom.DOMImplementation.createDocumentType "xml.dom.DOMImplementation.createDocumentType"), or `None`. In the Python DOM API, the first two arguments can also be `None` in order to indicate that no `Element` child is to be created. `DOMImplementation.createDocumentType(qualifiedName, publicId, systemId)` Return a new `DocumentType` object that encapsulates the given *qualifiedName*, *publicId*, and *systemId* strings, representing the information contained in an XML document type declaration. ### Node Objects All of the components of an XML document are subclasses of `Node`. `Node.nodeType` An integer representing the node type. Symbolic constants for the types are on the `Node` object: `ELEMENT_NODE`, `ATTRIBUTE_NODE`, `TEXT_NODE`, `CDATA_SECTION_NODE`, `ENTITY_NODE`, `PROCESSING_INSTRUCTION_NODE`, `COMMENT_NODE`, `DOCUMENT_NODE`, `DOCUMENT_TYPE_NODE`, `NOTATION_NODE`. This is a read-only attribute. `Node.parentNode` The parent of the current node, or `None` for the document node. The value is always a `Node` object or `None`. For `Element` nodes, this will be the parent element, except for the root element, in which case it will be the `Document` object. For `Attr` nodes, this is always `None`. This is a read-only attribute. `Node.attributes` A `NamedNodeMap` of attribute objects. Only elements have actual values for this; others provide `None` for this attribute. This is a read-only attribute. `Node.previousSibling` The node that immediately precedes this one with the same parent. For instance the element with an end-tag that comes just before the *self* element’s start-tag. Of course, XML documents are made up of more than just elements so the previous sibling could be text, a comment, or something else. If this node is the first child of the parent, this attribute will be `None`. This is a read-only attribute. `Node.nextSibling` The node that immediately follows this one with the same parent. See also [`previousSibling`](#xml.dom.Node.previousSibling "xml.dom.Node.previousSibling"). If this is the last child of the parent, this attribute will be `None`. This is a read-only attribute. `Node.childNodes` A list of nodes contained within this node. This is a read-only attribute. `Node.firstChild` The first child of the node, if there are any, or `None`. This is a read-only attribute. `Node.lastChild` The last child of the node, if there are any, or `None`. This is a read-only attribute. `Node.localName` The part of the `tagName` following the colon if there is one, else the entire `tagName`. The value is a string. `Node.prefix` The part of the `tagName` preceding the colon if there is one, else the empty string. The value is a string, or `None`. `Node.namespaceURI` The namespace associated with the element name. This will be a string or `None`. This is a read-only attribute. `Node.nodeName` This has a different meaning for each node type; see the DOM specification for details. You can always get the information you would get here from another property such as the `tagName` property for elements or the `name` property for attributes. 
For all node types, the value of this attribute will be either a string or `None`. This is a read-only attribute. `Node.nodeValue` This has a different meaning for each node type; see the DOM specification for details. The situation is similar to that with [`nodeName`](#xml.dom.Node.nodeName "xml.dom.Node.nodeName"). The value is a string or `None`. `Node.hasAttributes()` Return `True` if the node has any attributes. `Node.hasChildNodes()` Return `True` if the node has any child nodes. `Node.isSameNode(other)` Return `True` if *other* refers to the same node as this node. This is especially useful for DOM implementations which use any sort of proxy architecture (because more than one object can refer to the same node). Note This is based on a proposed DOM Level 3 API which is still in the “working draft” stage, but this particular interface appears uncontroversial. Changes from the W3C will not necessarily affect this method in the Python DOM interface (though any new W3C API for this would also be supported). `Node.appendChild(newChild)` Add a new child node to this node at the end of the list of children, returning *newChild*. If the node was already in the tree, it is removed first. `Node.insertBefore(newChild, refChild)` Insert a new child node before an existing child. It must be the case that *refChild* is a child of this node; if not, [`ValueError`](exceptions#ValueError "ValueError") is raised. *newChild* is returned. If *refChild* is `None`, it inserts *newChild* at the end of the children’s list. `Node.removeChild(oldChild)` Remove a child node. *oldChild* must be a child of this node; if not, [`ValueError`](exceptions#ValueError "ValueError") is raised. *oldChild* is returned on success. If *oldChild* will not be used further, its `unlink()` method should be called. `Node.replaceChild(newChild, oldChild)` Replace an existing node with a new node. It must be the case that *oldChild* is a child of this node; if not, [`ValueError`](exceptions#ValueError "ValueError") is raised. `Node.normalize()` Join adjacent text nodes so that all stretches of text are stored as single `Text` instances. This simplifies processing text from a DOM tree for many applications. `Node.cloneNode(deep)` Clone this node. Setting *deep* means to clone all child nodes as well. This returns the clone. ### NodeList Objects A `NodeList` represents a sequence of nodes. These objects are used in two ways in the DOM Core recommendation: an `Element` object provides one as its list of child nodes, and the `getElementsByTagName()` and `getElementsByTagNameNS()` methods of `Node` return objects with this interface to represent query results. The DOM Level 2 recommendation defines one method and one attribute for these objects: `NodeList.item(i)` Return the *i*’th item from the sequence, if there is one, or `None`. The index *i* is not allowed to be less than zero or greater than or equal to the length of the sequence. `NodeList.length` The number of nodes in the sequence. In addition, the Python DOM interface requires that some additional support is provided to allow `NodeList` objects to be used as Python sequences. All `NodeList` implementations must include support for [`__len__()`](../reference/datamodel#object.__len__ "object.__len__") and [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__"); this allows iteration over the `NodeList` in [`for`](../reference/compound_stmts#for) statements and proper support for the [`len()`](functions#len "len") built-in function. 
If a DOM implementation supports modification of the document, the `NodeList` implementation must also support the [`__setitem__()`](../reference/datamodel#object.__setitem__ "object.__setitem__") and [`__delitem__()`](../reference/datamodel#object.__delitem__ "object.__delitem__") methods. ### DocumentType Objects Information about the notations and entities declared by a document (including the external subset if the parser uses it and can provide the information) is available from a `DocumentType` object. The `DocumentType` for a document is available from the `Document` object’s `doctype` attribute; if there is no `DOCTYPE` declaration for the document, the document’s `doctype` attribute will be set to `None` instead of an instance of this interface. `DocumentType` is a specialization of `Node`, and adds the following attributes: `DocumentType.publicId` The public identifier for the external subset of the document type definition. This will be a string or `None`. `DocumentType.systemId` The system identifier for the external subset of the document type definition. This will be a URI as a string, or `None`. `DocumentType.internalSubset` A string giving the complete internal subset from the document. This does not include the brackets which enclose the subset. If the document has no internal subset, this should be `None`. `DocumentType.name` The name of the root element as given in the `DOCTYPE` declaration, if present. `DocumentType.entities` This is a `NamedNodeMap` giving the definitions of external entities. For entity names defined more than once, only the first definition is provided (others are ignored as required by the XML recommendation). This may be `None` if the information is not provided by the parser, or if no entities are defined. `DocumentType.notations` This is a `NamedNodeMap` giving the definitions of notations. For notation names defined more than once, only the first definition is provided (others are ignored as required by the XML recommendation). This may be `None` if the information is not provided by the parser, or if no notations are defined. ### Document Objects A `Document` represents an entire XML document, including its constituent elements, attributes, processing instructions, comments etc. Remember that it inherits properties from `Node`. `Document.documentElement` The one and only root element of the document. `Document.createElement(tagName)` Create and return a new element node. The element is not inserted into the document when it is created. You need to explicitly insert it with one of the other methods such as `insertBefore()` or `appendChild()`. `Document.createElementNS(namespaceURI, tagName)` Create and return a new element with a namespace. The *tagName* may have a prefix. The element is not inserted into the document when it is created. You need to explicitly insert it with one of the other methods such as `insertBefore()` or `appendChild()`. `Document.createTextNode(data)` Create and return a text node containing the data passed as a parameter. As with the other creation methods, this one does not insert the node into the tree. `Document.createComment(data)` Create and return a comment node containing the data passed as a parameter. As with the other creation methods, this one does not insert the node into the tree. `Document.createProcessingInstruction(target, data)` Create and return a processing instruction node containing the *target* and *data* passed as parameters. 
As with the other creation methods, this one does not insert the node into the tree. `Document.createAttribute(name)` Create and return an attribute node. This method does not associate the attribute node with any particular element. You must use `setAttributeNode()` on the appropriate `Element` object to use the newly created attribute instance. `Document.createAttributeNS(namespaceURI, qualifiedName)` Create and return an attribute node with a namespace. The *qualifiedName* may have a prefix. This method does not associate the attribute node with any particular element. You must use `setAttributeNode()` on the appropriate `Element` object to use the newly created attribute instance. `Document.getElementsByTagName(tagName)` Search for all descendants (direct children, children’s children, etc.) with a particular element type name. `Document.getElementsByTagNameNS(namespaceURI, localName)` Search for all descendants (direct children, children’s children, etc.) with a particular namespace URI and localname. The localname is the part of the namespace after the prefix. ### Element Objects `Element` is a subclass of `Node`, so inherits all the attributes of that class. `Element.tagName` The element type name. In a namespace-using document it may have colons in it. The value is a string. `Element.getElementsByTagName(tagName)` Same as the equivalent method in the `Document` class. `Element.getElementsByTagNameNS(namespaceURI, localName)` Same as the equivalent method in the `Document` class. `Element.hasAttribute(name)` Return `True` if the element has an attribute named by *name*. `Element.hasAttributeNS(namespaceURI, localName)` Return `True` if the element has an attribute named by *namespaceURI* and *localName*. `Element.getAttribute(name)` Return the value of the attribute named by *name* as a string. If no such attribute exists, an empty string is returned, as if the attribute had no value. `Element.getAttributeNode(attrname)` Return the `Attr` node for the attribute named by *attrname*. `Element.getAttributeNS(namespaceURI, localName)` Return the value of the attribute named by *namespaceURI* and *localName* as a string. If no such attribute exists, an empty string is returned, as if the attribute had no value. `Element.getAttributeNodeNS(namespaceURI, localName)` Return an attribute value as a node, given a *namespaceURI* and *localName*. `Element.removeAttribute(name)` Remove an attribute by name. If there is no matching attribute, a [`NotFoundErr`](#xml.dom.NotFoundErr "xml.dom.NotFoundErr") is raised. `Element.removeAttributeNode(oldAttr)` Remove and return *oldAttr* from the attribute list, if present. If *oldAttr* is not present, [`NotFoundErr`](#xml.dom.NotFoundErr "xml.dom.NotFoundErr") is raised. `Element.removeAttributeNS(namespaceURI, localName)` Remove an attribute by name. Note that it uses a localName, not a qname. No exception is raised if there is no matching attribute. `Element.setAttribute(name, value)` Set an attribute value from a string. `Element.setAttributeNode(newAttr)` Add a new attribute node to the element, replacing an existing attribute if the `name` attribute matches. If a replacement occurs, the old attribute node will be returned. If *newAttr* is already in use, [`InuseAttributeErr`](#xml.dom.InuseAttributeErr "xml.dom.InuseAttributeErr") will be raised. `Element.setAttributeNodeNS(newAttr)` Add a new attribute node to the element, replacing an existing attribute if the `namespaceURI` and `localName` attributes match.
If a replacement occurs, the old attribute node will be returned. If *newAttr* is already in use, [`InuseAttributeErr`](#xml.dom.InuseAttributeErr "xml.dom.InuseAttributeErr") will be raised. `Element.setAttributeNS(namespaceURI, qname, value)` Set an attribute value from a string, given a *namespaceURI* and a *qname*. Note that a qname is the whole attribute name. This is different from the methods above. ### Attr Objects `Attr` inherits from `Node`, so inherits all its attributes. `Attr.name` The attribute name. In a namespace-using document it may include a colon. `Attr.localName` The part of the name following the colon if there is one, else the entire name. This is a read-only attribute. `Attr.prefix` The part of the name preceding the colon if there is one, else the empty string. `Attr.value` The text value of the attribute. This is a synonym for the `nodeValue` attribute. ### NamedNodeMap Objects `NamedNodeMap` does *not* inherit from `Node`. `NamedNodeMap.length` The length of the attribute list. `NamedNodeMap.item(index)` Return an attribute with a particular index. The order you get the attributes in is arbitrary but will be consistent for the life of a DOM. Each item is an attribute node. Get its value with the `value` attribute. There are also experimental methods that give this class more mapping behavior. You can use them or you can use the standardized `getAttribute*()` family of methods on the `Element` objects. ### Comment Objects `Comment` represents a comment in the XML document. It is a subclass of `Node`, but cannot have child nodes. `Comment.data` The content of the comment as a string. The attribute contains all characters between the leading `<!--` and trailing `-->`, but does not include them. ### Text and CDATASection Objects The `Text` interface represents text in the XML document. If the parser and DOM implementation support the DOM’s XML extension, portions of the text enclosed in CDATA marked sections are stored in `CDATASection` objects. These two interfaces are identical, but provide different values for the `nodeType` attribute. These interfaces extend the `Node` interface. They cannot have child nodes. `Text.data` The content of the text node as a string. Note The use of a `CDATASection` node does not indicate that the node represents a complete CDATA marked section, only that the content of the node was part of a CDATA section. A single CDATA section may be represented by more than one node in the document tree. There is no way to determine whether two adjacent `CDATASection` nodes represent different CDATA marked sections. ### ProcessingInstruction Objects Represents a processing instruction in the XML document; this inherits from the `Node` interface and cannot have child nodes. `ProcessingInstruction.target` The content of the processing instruction up to the first whitespace character. This is a read-only attribute. `ProcessingInstruction.data` The content of the processing instruction following the first whitespace character. ### Exceptions The DOM Level 2 recommendation defines a single exception, [`DOMException`](#xml.dom.DOMException "xml.dom.DOMException"), and a number of constants that allow applications to determine what sort of error occurred. [`DOMException`](#xml.dom.DOMException "xml.dom.DOMException") instances carry a `code` attribute that provides the appropriate value for the specific exception.
The Python DOM interface provides the constants, but also expands the set of exceptions so that a specific exception exists for each of the exception codes defined by the DOM. The implementations must raise the appropriate specific exception, each of which carries the appropriate value for the [`code`](code#module-code "code: Facilities to implement read-eval-print loops.") attribute. `exception xml.dom.DOMException` Base exception class used for all specific DOM exceptions. This exception class cannot be directly instantiated. `exception xml.dom.DomstringSizeErr` Raised when a specified range of text does not fit into a string. This is not known to be used in the Python DOM implementations, but may be received from DOM implementations not written in Python. `exception xml.dom.HierarchyRequestErr` Raised when an attempt is made to insert a node where the node type is not allowed. `exception xml.dom.IndexSizeErr` Raised when an index or size parameter to a method is negative or exceeds the allowed values. `exception xml.dom.InuseAttributeErr` Raised when an attempt is made to insert an `Attr` node that is already present elsewhere in the document. `exception xml.dom.InvalidAccessErr` Raised if a parameter or an operation is not supported on the underlying object. `exception xml.dom.InvalidCharacterErr` This exception is raised when a string parameter contains a character that is not permitted in the context it’s being used in by the XML 1.0 recommendation. For example, attempting to create an `Element` node with a space in the element type name will cause this error to be raised. `exception xml.dom.InvalidModificationErr` Raised when an attempt is made to modify the type of a node. `exception xml.dom.InvalidStateErr` Raised when an attempt is made to use an object that is not defined or is no longer usable. `exception xml.dom.NamespaceErr` If an attempt is made to change any object in a way that is not permitted with regard to the [Namespaces in XML](https://www.w3.org/TR/REC-xml-names/) recommendation, this exception is raised. `exception xml.dom.NotFoundErr` Exception when a node does not exist in the referenced context. For example, `NamedNodeMap.removeNamedItem()` will raise this if the node passed in does not exist in the map. `exception xml.dom.NotSupportedErr` Raised when the implementation does not support the requested type of object or operation. `exception xml.dom.NoDataAllowedErr` This is raised if data is specified for a node which does not support data. `exception xml.dom.NoModificationAllowedErr` Raised on attempts to modify an object where modifications are not allowed (such as for read-only nodes). `exception xml.dom.SyntaxErr` Raised when an invalid or illegal string is specified. `exception xml.dom.WrongDocumentErr` Raised when a node is inserted in a different document than it currently belongs to, and the implementation does not support migrating the node from one document to the other. 
The exception codes defined in the DOM recommendation map to the exceptions described above according to this table: | Constant | Exception | | --- | --- | | `DOMSTRING_SIZE_ERR` | [`DomstringSizeErr`](#xml.dom.DomstringSizeErr "xml.dom.DomstringSizeErr") | | `HIERARCHY_REQUEST_ERR` | [`HierarchyRequestErr`](#xml.dom.HierarchyRequestErr "xml.dom.HierarchyRequestErr") | | `INDEX_SIZE_ERR` | [`IndexSizeErr`](#xml.dom.IndexSizeErr "xml.dom.IndexSizeErr") | | `INUSE_ATTRIBUTE_ERR` | [`InuseAttributeErr`](#xml.dom.InuseAttributeErr "xml.dom.InuseAttributeErr") | | `INVALID_ACCESS_ERR` | [`InvalidAccessErr`](#xml.dom.InvalidAccessErr "xml.dom.InvalidAccessErr") | | `INVALID_CHARACTER_ERR` | [`InvalidCharacterErr`](#xml.dom.InvalidCharacterErr "xml.dom.InvalidCharacterErr") | | `INVALID_MODIFICATION_ERR` | [`InvalidModificationErr`](#xml.dom.InvalidModificationErr "xml.dom.InvalidModificationErr") | | `INVALID_STATE_ERR` | [`InvalidStateErr`](#xml.dom.InvalidStateErr "xml.dom.InvalidStateErr") | | `NAMESPACE_ERR` | [`NamespaceErr`](#xml.dom.NamespaceErr "xml.dom.NamespaceErr") | | `NOT_FOUND_ERR` | [`NotFoundErr`](#xml.dom.NotFoundErr "xml.dom.NotFoundErr") | | `NOT_SUPPORTED_ERR` | [`NotSupportedErr`](#xml.dom.NotSupportedErr "xml.dom.NotSupportedErr") | | `NO_DATA_ALLOWED_ERR` | [`NoDataAllowedErr`](#xml.dom.NoDataAllowedErr "xml.dom.NoDataAllowedErr") | | `NO_MODIFICATION_ALLOWED_ERR` | [`NoModificationAllowedErr`](#xml.dom.NoModificationAllowedErr "xml.dom.NoModificationAllowedErr") | | `SYNTAX_ERR` | [`SyntaxErr`](#xml.dom.SyntaxErr "xml.dom.SyntaxErr") | | `WRONG_DOCUMENT_ERR` | [`WrongDocumentErr`](#xml.dom.WrongDocumentErr "xml.dom.WrongDocumentErr") | Conformance ----------- This section describes the conformance requirements and relationships between the Python DOM API, the W3C DOM recommendations, and the OMG IDL mapping for Python. ### Type Mapping The IDL types used in the DOM specification are mapped to Python types according to the following table. | IDL Type | Python Type | | --- | --- | | `boolean` | `bool` or `int` | | `int` | `int` | | `long int` | `int` | | `unsigned int` | `int` | | `DOMString` | `str` or `bytes` | | `null` | `None` | ### Accessor Methods The mapping from OMG IDL to Python defines accessor functions for IDL `attribute` declarations in much the way the Java mapping does. Mapping the IDL declarations ``` readonly attribute string someValue; attribute string anotherValue; ``` yields three accessor functions: a “get” method for `someValue` (`_get_someValue()`), and “get” and “set” methods for `anotherValue` (`_get_anotherValue()` and `_set_anotherValue()`). The mapping, in particular, does not require that the IDL attributes are accessible as normal Python attributes: `object.someValue` is *not* required to work, and may raise an [`AttributeError`](exceptions#AttributeError "AttributeError"). The Python DOM API, however, *does* require that normal attribute access work. This means that the typical surrogates generated by Python IDL compilers are not likely to work, and wrapper objects may be needed on the client if the DOM objects are accessed via CORBA. While this does require some additional consideration for CORBA DOM clients, the implementers with experience using DOM over CORBA from Python do not consider this a problem. Attributes that are declared `readonly` may not restrict write access in all DOM implementations. In the Python DOM API, accessor functions are not required. 
If provided, they should take the form defined by the Python IDL mapping, but these methods are considered unnecessary since the attributes are accessible directly from Python. “Set” accessors should never be provided for `readonly` attributes. The IDL definitions do not fully embody the requirements of the W3C DOM API, such as the notion of certain objects, such as the return value of `getElementsByTagName()`, being “live”. The Python DOM API does not require implementations to enforce such requirements.
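To see the core interfaces described in this document in action, here is a brief, runnable sketch using the [`xml.dom.minidom`](xml.dom.minidom#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.") implementation bundled with the standard library; the element and attribute names are arbitrary, and `toxml()` is a minidom convenience rather than part of the DOM API itself:

```
from xml.dom.minidom import getDOMImplementation

impl = getDOMImplementation()
doc = impl.createDocument(None, "catalog", None)  # no namespace, no doctype

book = doc.createElement("book")          # created, but not yet in the tree
book.setAttribute("id", "1")
book.appendChild(doc.createTextNode("An Example Title"))
doc.documentElement.appendChild(book)     # insert it under the root element

print(doc.documentElement.toxml())
# <catalog><book id="1">An Example Title</book></catalog>
```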
python MS Windows Specific Services MS Windows Specific Services ============================ This chapter describes modules that are only available on MS Windows platforms. * [`msvcrt` — Useful routines from the MS VC++ runtime](msvcrt) + [File Operations](msvcrt#file-operations) + [Console I/O](msvcrt#console-i-o) + [Other Functions](msvcrt#other-functions) * [`winreg` — Windows registry access](winreg) + [Functions](winreg#functions) + [Constants](winreg#constants) - [HKEY\_\* Constants](winreg#hkey-constants) - [Access Rights](winreg#access-rights) * [64-bit Specific](winreg#bit-specific) - [Value Types](winreg#value-types) + [Registry Handle Objects](winreg#registry-handle-objects) * [`winsound` — Sound-playing interface for Windows](winsound) python sqlite3 — DB-API 2.0 interface for SQLite databases sqlite3 — DB-API 2.0 interface for SQLite databases =================================================== **Source code:** [Lib/sqlite3/](https://github.com/python/cpython/tree/3.9/Lib/sqlite3/) SQLite is a C library that provides a lightweight disk-based database that doesn’t require a separate server process and allows accessing the database using a nonstandard variant of the SQL query language. Some applications can use SQLite for internal data storage. It’s also possible to prototype an application using SQLite and then port the code to a larger database such as PostgreSQL or Oracle. The sqlite3 module was written by Gerhard Häring. It provides an SQL interface compliant with the DB-API 2.0 specification described by [**PEP 249**](https://www.python.org/dev/peps/pep-0249). To use the module, start by creating a [`Connection`](#sqlite3.Connection "sqlite3.Connection") object that represents the database. Here the data will be stored in the `example.db` file: ``` import sqlite3 con = sqlite3.connect('example.db') ``` The special path name `:memory:` can be provided to create a temporary database in RAM. Once a [`Connection`](#sqlite3.Connection "sqlite3.Connection") has been established, create a [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") object and call its [`execute()`](#sqlite3.Cursor.execute "sqlite3.Cursor.execute") method to perform SQL commands: ``` cur = con.cursor() # Create table cur.execute('''CREATE TABLE stocks (date text, trans text, symbol text, qty real, price real)''') # Insert a row of data cur.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)") # Save (commit) the changes con.commit() # We can also close the connection if we are done with it. # Just be sure any changes have been committed or they will be lost. con.close() ``` The saved data is persistent: it can be reloaded in a subsequent session even after restarting the Python interpreter: ``` import sqlite3 con = sqlite3.connect('example.db') cur = con.cursor() ``` To retrieve data after executing a SELECT statement, either treat the cursor as an [iterator](../glossary#term-iterator), call the cursor’s [`fetchone()`](#sqlite3.Cursor.fetchone "sqlite3.Cursor.fetchone") method to retrieve a single matching row, or call [`fetchall()`](#sqlite3.Cursor.fetchall "sqlite3.Cursor.fetchall") to get a list of the matching rows. This example uses the iterator form: ``` >>> for row in cur.execute('SELECT * FROM stocks ORDER BY price'): print(row) ('2006-01-05', 'BUY', 'RHAT', 100, 35.14) ('2006-03-28', 'BUY', 'IBM', 1000, 45.0) ('2006-04-06', 'SELL', 'IBM', 500, 53.0) ('2006-04-05', 'BUY', 'MSFT', 1000, 72.0) ``` SQL operations usually need to use values from Python variables. 
However, beware of using Python’s string operations to assemble queries, as they are vulnerable to SQL injection attacks (see the [xkcd webcomic](https://xkcd.com/327/) for a humorous example of what can go wrong): ``` # Never do this -- insecure! symbol = 'RHAT' cur.execute("SELECT * FROM stocks WHERE symbol = '%s'" % symbol) ``` Instead, use the DB-API’s parameter substitution. To insert a variable into a query string, use a placeholder in the string, and substitute the actual values into the query by providing them as a [`tuple`](stdtypes#tuple "tuple") of values to the second argument of the cursor’s [`execute()`](#sqlite3.Cursor.execute "sqlite3.Cursor.execute") method. An SQL statement may use one of two kinds of placeholders: question marks (qmark style) or named placeholders (named style). For the qmark style, `parameters` must be a [sequence](../glossary#term-sequence). For the named style, it can be either a [sequence](../glossary#term-sequence) or [`dict`](stdtypes#dict "dict") instance. The length of the [sequence](../glossary#term-sequence) must match the number of placeholders, or a [`ProgrammingError`](#sqlite3.ProgrammingError "sqlite3.ProgrammingError") is raised. If a [`dict`](stdtypes#dict "dict") is given, it must contain keys for all named parameters. Any extra items are ignored. Here’s an example of both styles: ``` import sqlite3 con = sqlite3.connect(":memory:") cur = con.cursor() cur.execute("create table lang (name, first_appeared)") # This is the qmark style: cur.execute("insert into lang values (?, ?)", ("C", 1972)) # The qmark style used with executemany(): lang_list = [ ("Fortran", 1957), ("Python", 1991), ("Go", 2009), ] cur.executemany("insert into lang values (?, ?)", lang_list) # And this is the named style: cur.execute("select * from lang where first_appeared=:year", {"year": 1972}) print(cur.fetchall()) con.close() ``` See also <https://www.sqlite.org> The SQLite web page; the documentation describes the syntax and the available data types for the supported SQL dialect. <https://www.w3schools.com/sql/> Tutorial, reference and examples for learning SQL syntax. [**PEP 249**](https://www.python.org/dev/peps/pep-0249) - Database API Specification 2.0 PEP written by Marc-André Lemburg. Module functions and constants ------------------------------ `sqlite3.apilevel` String constant stating the supported DB-API level. Required by the DB-API. Hard-coded to `"2.0"`. `sqlite3.paramstyle` String constant stating the type of parameter marker formatting expected by the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module. Required by the DB-API. Hard-coded to `"qmark"`. Note The [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module supports both `qmark` and `numeric` DB-API parameter styles, because that is what the underlying SQLite library supports. However, the DB-API does not allow multiple values for the `paramstyle` attribute. `sqlite3.version` The version number of this module, as a string. This is not the version of the SQLite library. `sqlite3.version_info` The version number of this module, as a tuple of integers. This is not the version of the SQLite library. `sqlite3.sqlite_version` The version number of the run-time SQLite library, as a string. `sqlite3.sqlite_version_info` The version number of the run-time SQLite library, as a tuple of integers. 
`sqlite3.threadsafety` Integer constant required by the DB-API, stating the level of thread safety the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module supports. Currently hard-coded to `1`, meaning *“Threads may share the module, but not connections.”* However, this may not always be true. You can check the underlying SQLite library’s compile-time threaded mode using the following query: ``` import sqlite3 con = sqlite3.connect(":memory:") con.execute(""" select * from pragma_compile_options where compile_options like 'THREADSAFE=%' """).fetchall() ``` Note that the [SQLITE\_THREADSAFE levels](https://sqlite.org/compile.html#threadsafe) do not match the DB-API 2.0 `threadsafety` levels. `sqlite3.PARSE_DECLTYPES` This constant is meant to be used with the *detect\_types* parameter of the [`connect()`](#sqlite3.connect "sqlite3.connect") function. Setting it makes the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module parse the declared type for each column it returns. It will parse out the first word of the declared type, i. e. for “integer primary key”, it will parse out “integer”, or for “number(10)” it will parse out “number”. Then for that column, it will look into the converters dictionary and use the converter function registered for that type there. `sqlite3.PARSE_COLNAMES` This constant is meant to be used with the *detect\_types* parameter of the [`connect()`](#sqlite3.connect "sqlite3.connect") function. Setting this makes the SQLite interface parse the column name for each column it returns. It will look for a string formed [mytype] in there, and then decide that ‘mytype’ is the type of the column. It will try to find an entry of ‘mytype’ in the converters dictionary and then use the converter function found there to return the value. The column name found in [`Cursor.description`](#sqlite3.Cursor.description "sqlite3.Cursor.description") does not include the type, i. e. if you use something like `'as "Expiration date [datetime]"'` in your SQL, then we will parse out everything until the first `'['` for the column name and strip the preceding space: the column name would simply be “Expiration date”. `sqlite3.connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri])` Opens a connection to the SQLite database file *database*. By default returns a [`Connection`](#sqlite3.Connection "sqlite3.Connection") object, unless a custom *factory* is given. *database* is a [path-like object](../glossary#term-path-like-object) giving the pathname (absolute or relative to the current working directory) of the database file to be opened. You can use `":memory:"` to open a database connection to a database that resides in RAM instead of on disk. When a database is accessed by multiple connections, and one of the processes modifies the database, the SQLite database is locked until that transaction is committed. The *timeout* parameter specifies how long the connection should wait for the lock to go away until raising an exception. The default for the timeout parameter is 5.0 (five seconds). For the *isolation\_level* parameter, please see the [`isolation_level`](#sqlite3.Connection.isolation_level "sqlite3.Connection.isolation_level") property of [`Connection`](#sqlite3.Connection "sqlite3.Connection") objects. SQLite natively supports only the types TEXT, INTEGER, REAL, BLOB and NULL. If you want to use other types you must add support for them yourself. 
The *detect\_types* parameter and custom **converters** registered with the module-level [`register_converter()`](#sqlite3.register_converter "sqlite3.register_converter") function allow you to easily do that. *detect\_types* defaults to 0 (i.e. off, no type detection); you can set it to any combination of [`PARSE_DECLTYPES`](#sqlite3.PARSE_DECLTYPES "sqlite3.PARSE_DECLTYPES") and [`PARSE_COLNAMES`](#sqlite3.PARSE_COLNAMES "sqlite3.PARSE_COLNAMES") to turn type detection on. Due to SQLite behaviour, types can’t be detected for generated fields (for example `max(data)`), even when the *detect\_types* parameter is set. In such a case, the returned type is [`str`](stdtypes#str "str"). By default, *check\_same\_thread* is [`True`](constants#True "True") and only the creating thread may use the connection. If set [`False`](constants#False "False"), the returned connection may be shared across multiple threads. When using multiple threads with the same connection, writing operations should be serialized by the user to avoid data corruption. By default, the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module uses its [`Connection`](#sqlite3.Connection "sqlite3.Connection") class for the connect call. You can, however, subclass the [`Connection`](#sqlite3.Connection "sqlite3.Connection") class and make [`connect()`](#sqlite3.connect "sqlite3.connect") use your class instead by providing your class for the *factory* parameter. Consult the section [SQLite and Python types](#sqlite3-types) of this manual for details. The [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module internally uses a statement cache to avoid SQL parsing overhead. If you want to explicitly set the number of statements that are cached for the connection, you can set the *cached\_statements* parameter. The currently implemented default is to cache 100 statements. If *uri* is [`True`](constants#True "True"), *database* is interpreted as a URI with a file path and an optional query string. The scheme part *must* be `"file:"`. The path can be a relative or absolute file path. The query string allows us to pass parameters to SQLite. Some useful URI tricks include: ``` # Open a database in read-only mode. con = sqlite3.connect("file:template.db?mode=ro", uri=True) # Don't implicitly create a new database file if it does not already exist. # Will raise sqlite3.OperationalError if unable to open a database file. con = sqlite3.connect("file:nosuchdb.db?mode=rw", uri=True) # Create a shared named in-memory database. con1 = sqlite3.connect("file:mem1?mode=memory&cache=shared", uri=True) con2 = sqlite3.connect("file:mem1?mode=memory&cache=shared", uri=True) con1.executescript("create table t(t); insert into t values(28);") rows = con2.execute("select * from t").fetchall() ``` More information about this feature, including a list of recognized parameters, can be found in the [SQLite URI documentation](https://www.sqlite.org/uri.html). Raises an [auditing event](sys#auditing) `sqlite3.connect` with argument `database`. Changed in version 3.4: Added the *uri* parameter. Changed in version 3.7: *database* can now also be a [path-like object](../glossary#term-path-like-object), not only a string. `sqlite3.register_converter(typename, callable)` Registers a callable to convert a bytestring from the database into a custom Python type. The callable will be invoked for all database values that are of the type *typename*.
See the *detect\_types* parameter of the [`connect()`](#sqlite3.connect "sqlite3.connect") function for how the type detection works. Note that *typename* and the name of the type in your query are matched in a case-insensitive manner.

`sqlite3.register_adapter(type, callable)` Registers a callable to convert the custom Python type *type* into one of SQLite’s supported types. The callable *callable* accepts the Python value as its single parameter, and must return a value of one of the following types: int, float, str or bytes.

`sqlite3.complete_statement(sql)` Returns [`True`](constants#True "True") if the string *sql* contains one or more complete SQL statements terminated by semicolons. It does not verify that the SQL is syntactically correct, only that there are no unclosed string literals and the statement is terminated by a semicolon. This can be used to build a shell for SQLite, as in the following example:

```
# A minimal SQLite shell for experiments
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None
cur = con.cursor()

buffer = ""

print("Enter your SQL commands to execute in sqlite3.")
print("Enter a blank line to exit.")

while True:
    line = input()
    if line == "":
        break
    buffer += line
    if sqlite3.complete_statement(buffer):
        try:
            buffer = buffer.strip()
            cur.execute(buffer)

            if buffer.lstrip().upper().startswith("SELECT"):
                print(cur.fetchall())
        except sqlite3.Error as e:
            print("An error occurred:", e.args[0])
        buffer = ""

con.close()
```

`sqlite3.enable_callback_tracebacks(flag)` By default you will not get any tracebacks in user-defined functions, aggregates, converters, authorizer callbacks etc. If you want to debug them, you can call this function with *flag* set to `True`. Afterwards, you will get tracebacks from callbacks on `sys.stderr`. Use [`False`](constants#False "False") to disable the feature again.

Connection Objects
------------------

`class sqlite3.Connection` An SQLite database connection has the following attributes and methods:

`isolation_level` Get or set the current default isolation level. [`None`](constants#None "None") for autocommit mode or one of “DEFERRED”, “IMMEDIATE” or “EXCLUSIVE”. See section [Controlling Transactions](#sqlite3-controlling-transactions) for a more detailed explanation.

`in_transaction` [`True`](constants#True "True") if a transaction is active (there are uncommitted changes), [`False`](constants#False "False") otherwise. Read-only attribute. New in version 3.2.

`cursor(factory=Cursor)` The cursor method accepts a single optional parameter *factory*. If supplied, this must be a callable returning an instance of [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") or its subclasses.

`commit()` This method commits the current transaction. If you don’t call this method, anything you did since the last call to `commit()` is not visible from other database connections. If you wonder why you don’t see the data you’ve written to the database, check that you didn’t forget to call this method.

`rollback()` This method rolls back any changes to the database since the last call to [`commit()`](#sqlite3.Connection.commit "sqlite3.Connection.commit").

`close()` This closes the database connection. Note that this does not automatically call [`commit()`](#sqlite3.Connection.commit "sqlite3.Connection.commit"). If you just close your database connection without calling [`commit()`](#sqlite3.Connection.commit "sqlite3.Connection.commit") first, your changes will be lost!
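For illustration, a minimal sketch of the commit-before-close pattern described above (the `example.db` file name and `log` table are hypothetical):

```
import sqlite3

con = sqlite3.connect("example.db")  # hypothetical database file
cur = con.cursor()
cur.execute("create table if not exists log(msg)")
cur.execute("insert into log(msg) values (?)", ("hello",))

con.commit()  # make the insert visible to other connections
con.close()   # close() does not commit; uncommitted changes would be lost
```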
`execute(sql[, parameters])` Create a new [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") object and call [`execute()`](#sqlite3.Cursor.execute "sqlite3.Cursor.execute") on it with the given *sql* and *parameters*. Return the new cursor object.

`executemany(sql[, parameters])` Create a new [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") object and call [`executemany()`](#sqlite3.Cursor.executemany "sqlite3.Cursor.executemany") on it with the given *sql* and *parameters*. Return the new cursor object.

`executescript(sql_script)` Create a new [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") object and call [`executescript()`](#sqlite3.Cursor.executescript "sqlite3.Cursor.executescript") on it with the given *sql\_script*. Return the new cursor object.

`create_function(name, num_params, func, *, deterministic=False)` Creates a user-defined function that you can later use from within SQL statements under the function name *name*. *num\_params* is the number of parameters the function accepts (if *num\_params* is -1, the function may take any number of arguments), and *func* is a Python callable that is called as the SQL function. If *deterministic* is true, the created function is marked as [deterministic](https://sqlite.org/deterministic.html), which allows SQLite to perform additional optimizations. This flag is supported by SQLite 3.8.3 or higher; [`NotSupportedError`](#sqlite3.NotSupportedError "sqlite3.NotSupportedError") will be raised if it is used with older versions. The function can return any of the types supported by SQLite: bytes, str, int, float and `None`. Changed in version 3.8: The *deterministic* parameter was added. Example:

```
import sqlite3
import hashlib

def md5sum(t):
    return hashlib.md5(t).hexdigest()

con = sqlite3.connect(":memory:")
con.create_function("md5", 1, md5sum)
cur = con.cursor()
cur.execute("select md5(?)", (b"foo",))
print(cur.fetchone()[0])
con.close()
```

`create_aggregate(name, num_params, aggregate_class)` Creates a user-defined aggregate function. The aggregate class must implement a `step` method accepting *num\_params* parameters (if *num\_params* is -1, the function may take any number of arguments), and a `finalize` method which returns the final result of the aggregate. The `finalize` method can return any of the types supported by SQLite: bytes, str, int, float and `None`. Example:

```
import sqlite3

class MySum:
    def __init__(self):
        self.count = 0

    def step(self, value):
        self.count += value

    def finalize(self):
        return self.count

con = sqlite3.connect(":memory:")
con.create_aggregate("mysum", 1, MySum)
cur = con.cursor()
cur.execute("create table test(i)")
cur.execute("insert into test(i) values (1)")
cur.execute("insert into test(i) values (2)")
cur.execute("select mysum(i) from test")
print(cur.fetchone()[0])
con.close()
```

`create_collation(name, callable)` Creates a collation with the specified *name* and *callable*. The callable will be passed two string arguments. It should return -1 if the first is ordered lower than the second, 0 if they are ordered equal and 1 if the first is ordered higher than the second. Note that this controls sorting (ORDER BY in SQL), so your comparisons don’t affect other SQL operations. Note that the callable will get its parameters as Python strings (`str`).
The following example shows a custom collation that sorts “the wrong way”: ``` import sqlite3 def collate_reverse(string1, string2): if string1 == string2: return 0 elif string1 < string2: return 1 else: return -1 con = sqlite3.connect(":memory:") con.create_collation("reverse", collate_reverse) cur = con.cursor() cur.execute("create table test(x)") cur.executemany("insert into test(x) values (?)", [("a",), ("b",)]) cur.execute("select x from test order by x collate reverse") for row in cur: print(row) con.close() ``` To remove a collation, call `create_collation` with `None` as callable: ``` con.create_collation("reverse", None) ``` `interrupt()` You can call this method from a different thread to abort any queries that might be executing on the connection. The query will then abort and the caller will get an exception. `set_authorizer(authorizer_callback)` This routine registers a callback. The callback is invoked for each attempt to access a column of a table in the database. The callback should return `SQLITE_OK` if access is allowed, `SQLITE_DENY` if the entire SQL statement should be aborted with an error and `SQLITE_IGNORE` if the column should be treated as a NULL value. These constants are available in the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module. The first argument to the callback signifies what kind of operation is to be authorized. The second and third argument will be arguments or [`None`](constants#None "None") depending on the first argument. The 4th argument is the name of the database (“main”, “temp”, etc.) if applicable. The 5th argument is the name of the inner-most trigger or view that is responsible for the access attempt or [`None`](constants#None "None") if this access attempt is directly from input SQL code. Please consult the SQLite documentation about the possible values for the first argument and the meaning of the second and third argument depending on the first one. All necessary constants are available in the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module. `set_progress_handler(handler, n)` This routine registers a callback. The callback is invoked for every *n* instructions of the SQLite virtual machine. This is useful if you want to get called from SQLite during long-running operations, for example to update a GUI. If you want to clear any previously installed progress handler, call the method with [`None`](constants#None "None") for *handler*. Returning a non-zero value from the handler function will terminate the currently executing query and cause it to raise an [`OperationalError`](#sqlite3.OperationalError "sqlite3.OperationalError") exception. `set_trace_callback(trace_callback)` Registers *trace\_callback* to be called for each SQL statement that is actually executed by the SQLite backend. The only argument passed to the callback is the statement (as [`str`](stdtypes#str "str")) that is being executed. The return value of the callback is ignored. Note that the backend does not only run statements passed to the [`Cursor.execute()`](#sqlite3.Cursor.execute "sqlite3.Cursor.execute") methods. Other sources include the [transaction management](#sqlite3-controlling-transactions) of the sqlite3 module and the execution of triggers defined in the current database. Passing [`None`](constants#None "None") as *trace\_callback* will disable the trace callback. Note Exceptions raised in the trace callback are not propagated. 
As a development and debugging aid, use [`enable_callback_tracebacks()`](#sqlite3.enable_callback_tracebacks "sqlite3.enable_callback_tracebacks") to enable printing tracebacks from exceptions raised in the trace callback. New in version 3.3. `enable_load_extension(enabled)` This routine allows/disallows the SQLite engine to load SQLite extensions from shared libraries. SQLite extensions can define new functions, aggregates or whole new virtual table implementations. One well-known extension is the fulltext-search extension distributed with SQLite. Loadable extensions are disabled by default. See [1](#f1). New in version 3.2. ``` import sqlite3 con = sqlite3.connect(":memory:") # enable extension loading con.enable_load_extension(True) # Load the fulltext search extension con.execute("select load_extension('./fts3.so')") # alternatively you can load the extension using an API call: # con.load_extension("./fts3.so") # disable extension loading again con.enable_load_extension(False) # example from SQLite wiki con.execute("create virtual table recipe using fts3(name, ingredients)") con.executescript(""" insert into recipe (name, ingredients) values ('broccoli stew', 'broccoli peppers cheese tomatoes'); insert into recipe (name, ingredients) values ('pumpkin stew', 'pumpkin onions garlic celery'); insert into recipe (name, ingredients) values ('broccoli pie', 'broccoli cheese onions flour'); insert into recipe (name, ingredients) values ('pumpkin pie', 'pumpkin sugar flour butter'); """) for row in con.execute("select rowid, name, ingredients from recipe where name match 'pie'"): print(row) con.close() ``` `load_extension(path)` This routine loads an SQLite extension from a shared library. You have to enable extension loading with [`enable_load_extension()`](#sqlite3.Connection.enable_load_extension "sqlite3.Connection.enable_load_extension") before you can use this routine. Loadable extensions are disabled by default. See [1](#f1). New in version 3.2. `row_factory` You can change this attribute to a callable that accepts the cursor and the original row as a tuple and will return the real result row. This way, you can implement more advanced ways of returning results, such as returning an object that can also access columns by name. Example: ``` import sqlite3 def dict_factory(cursor, row): d = {} for idx, col in enumerate(cursor.description): d[col[0]] = row[idx] return d con = sqlite3.connect(":memory:") con.row_factory = dict_factory cur = con.cursor() cur.execute("select 1 as a") print(cur.fetchone()["a"]) con.close() ``` If returning a tuple doesn’t suffice and you want name-based access to columns, you should consider setting [`row_factory`](#sqlite3.Connection.row_factory "sqlite3.Connection.row_factory") to the highly-optimized [`sqlite3.Row`](#sqlite3.Row "sqlite3.Row") type. [`Row`](#sqlite3.Row "sqlite3.Row") provides both index-based and case-insensitive name-based access to columns with almost no memory overhead. It will probably be better than your own custom dictionary-based approach or even a db\_row based solution. `text_factory` Using this attribute you can control what objects are returned for the `TEXT` data type. By default, this attribute is set to [`str`](stdtypes#str "str") and the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module will return [`str`](stdtypes#str "str") objects for `TEXT`. If you want to return [`bytes`](stdtypes#bytes "bytes") instead, you can set it to [`bytes`](stdtypes#bytes "bytes"). 
You can also set it to any other callable that accepts a single bytestring parameter and returns the resulting object. See the following example code for illustration:

```
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

AUSTRIA = "Österreich"

# by default, rows are returned as str
cur.execute("select ?", (AUSTRIA,))
row = cur.fetchone()
assert row[0] == AUSTRIA

# but we can make sqlite3 always return bytestrings ...
con.text_factory = bytes
cur.execute("select ?", (AUSTRIA,))
row = cur.fetchone()
assert type(row[0]) is bytes
# the bytestrings will be encoded in UTF-8, unless you stored garbage in the
# database ...
assert row[0] == AUSTRIA.encode("utf-8")

# we can also implement a custom text_factory ...
# here we implement one that appends "foo" to all strings
con.text_factory = lambda x: x.decode("utf-8") + "foo"
cur.execute("select ?", ("bar",))
row = cur.fetchone()
assert row[0] == "barfoo"

con.close()
```

`total_changes` Returns the total number of database rows that have been modified, inserted, or deleted since the database connection was opened.

`iterdump()` Returns an iterator to dump the database in an SQL text format. Useful when saving an in-memory database for later restoration. This function provides the same capabilities as the `.dump` command in the **sqlite3** shell. Example:

```
# Convert file existing_db.db to SQL dump file dump.sql
import sqlite3

con = sqlite3.connect('existing_db.db')
with open('dump.sql', 'w') as f:
    for line in con.iterdump():
        f.write('%s\n' % line)
con.close()
```

`backup(target, *, pages=-1, progress=None, name="main", sleep=0.250)` This method makes a backup of an SQLite database even while it’s being accessed by other clients, or concurrently by the same connection. The copy will be written into the mandatory argument *target*, which must be another [`Connection`](#sqlite3.Connection "sqlite3.Connection") instance. By default, or when *pages* is either `0` or a negative integer, the entire database is copied in a single step; otherwise the method performs a loop copying up to *pages* pages at a time. If *progress* is specified, it must either be `None` or a callable object that will be executed at each iteration with three integer arguments, respectively the *status* of the last iteration, the *remaining* number of pages still to be copied and the *total* number of pages. The *name* argument specifies the database name that will be copied: it must be a string containing either `"main"`, the default, to indicate the main database, `"temp"` to indicate the temporary database, or the name specified after the `AS` keyword in an `ATTACH DATABASE` statement for an attached database. The *sleep* argument specifies the number of seconds to sleep between successive attempts to back up remaining pages; it can be specified either as an integer or a floating-point value. Example 1, copy an existing database into another:

```
import sqlite3

def progress(status, remaining, total):
    print(f'Copied {total-remaining} of {total} pages...')

con = sqlite3.connect('existing_db.db')
bck = sqlite3.connect('backup.db')
with bck:
    con.backup(bck, pages=1, progress=progress)
bck.close()
con.close()
```

Example 2, copy an existing database into a transient copy:

```
import sqlite3

source = sqlite3.connect('existing_db.db')
dest = sqlite3.connect(':memory:')
source.backup(dest)
```

Availability: SQLite 3.6.11 or higher. New in version 3.7.
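Complementing the [`iterdump()`](#sqlite3.Connection.iterdump "sqlite3.Connection.iterdump") example above, a dump written to a file can be loaded back with [`executescript()`](#sqlite3.Connection.executescript "sqlite3.Connection.executescript"); a minimal sketch, assuming a `dump.sql` file produced as shown earlier:

```
import sqlite3

# Restore the SQL dump written by the iterdump() example into a
# fresh in-memory database.
con = sqlite3.connect(":memory:")
with open('dump.sql') as f:
    con.executescript(f.read())
con.close()
```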
Cursor Objects -------------- `class sqlite3.Cursor` A [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") instance has the following attributes and methods. `execute(sql[, parameters])` Executes an SQL statement. Values may be bound to the statement using [placeholders](#sqlite3-placeholders). [`execute()`](#sqlite3.Cursor.execute "sqlite3.Cursor.execute") will only execute a single SQL statement. If you try to execute more than one statement with it, it will raise a [`Warning`](#sqlite3.Warning "sqlite3.Warning"). Use [`executescript()`](#sqlite3.Cursor.executescript "sqlite3.Cursor.executescript") if you want to execute multiple SQL statements with one call. `executemany(sql, seq_of_parameters)` Executes a [parameterized](#sqlite3-placeholders) SQL command against all parameter sequences or mappings found in the sequence *seq\_of\_parameters*. The [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module also allows using an [iterator](../glossary#term-iterator) yielding parameters instead of a sequence. ``` import sqlite3 class IterChars: def __init__(self): self.count = ord('a') def __iter__(self): return self def __next__(self): if self.count > ord('z'): raise StopIteration self.count += 1 return (chr(self.count - 1),) # this is a 1-tuple con = sqlite3.connect(":memory:") cur = con.cursor() cur.execute("create table characters(c)") theIter = IterChars() cur.executemany("insert into characters(c) values (?)", theIter) cur.execute("select c from characters") print(cur.fetchall()) con.close() ``` Here’s a shorter example using a [generator](../glossary#term-generator): ``` import sqlite3 import string def char_generator(): for c in string.ascii_lowercase: yield (c,) con = sqlite3.connect(":memory:") cur = con.cursor() cur.execute("create table characters(c)") cur.executemany("insert into characters(c) values (?)", char_generator()) cur.execute("select c from characters") print(cur.fetchall()) con.close() ``` `executescript(sql_script)` This is a nonstandard convenience method for executing multiple SQL statements at once. It issues a `COMMIT` statement first, then executes the SQL script it gets as a parameter. This method disregards `isolation_level`; any transaction control must be added to *sql\_script*. *sql\_script* can be an instance of [`str`](stdtypes#str "str"). Example: ``` import sqlite3 con = sqlite3.connect(":memory:") cur = con.cursor() cur.executescript(""" create table person( firstname, lastname, age ); create table book( title, author, published ); insert into book(title, author, published) values ( 'Dirk Gently''s Holistic Detective Agency', 'Douglas Adams', 1987 ); """) con.close() ``` `fetchone()` Fetches the next row of a query result set, returning a single sequence, or [`None`](constants#None "None") when no more data is available. `fetchmany(size=cursor.arraysize)` Fetches the next set of rows of a query result, returning a list. An empty list is returned when no more rows are available. The number of rows to fetch per call is specified by the *size* parameter. If it is not given, the cursor’s arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned. Note there are performance considerations involved with the *size* parameter. For optimal performance, it is usually best to use the arraysize attribute. 
If the *size* parameter is used, then it is best for it to retain the same value from one [`fetchmany()`](#sqlite3.Cursor.fetchmany "sqlite3.Cursor.fetchmany") call to the next.

`fetchall()` Fetches all (remaining) rows of a query result, returning a list. Note that the cursor’s arraysize attribute can affect the performance of this operation. An empty list is returned when no rows are available.

`close()` Close the cursor now (rather than whenever `__del__` is called). The cursor will be unusable from this point forward; a [`ProgrammingError`](#sqlite3.ProgrammingError "sqlite3.ProgrammingError") exception will be raised if any operation is attempted with the cursor.

`setinputsizes(sizes)` Required by the DB-API. Does nothing in [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.").

`setoutputsize(size[, column])` Required by the DB-API. Does nothing in [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.").

`rowcount` Although the [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") class of the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module implements this attribute, the database engine’s own support for the determination of “rows affected”/”rows selected” is quirky. For [`executemany()`](#sqlite3.Cursor.executemany "sqlite3.Cursor.executemany") statements, the number of modifications is summed up into [`rowcount`](#sqlite3.Cursor.rowcount "sqlite3.Cursor.rowcount"). As required by the Python DB API Spec, the [`rowcount`](#sqlite3.Cursor.rowcount "sqlite3.Cursor.rowcount") attribute “is -1 in case no `executeXX()` has been performed on the cursor or the rowcount of the last operation is not determinable by the interface”. This includes `SELECT` statements, because the number of rows a query produces cannot be determined until all rows have been fetched. With SQLite versions before 3.6.5, [`rowcount`](#sqlite3.Cursor.rowcount "sqlite3.Cursor.rowcount") is set to 0 if you make a `DELETE FROM table` without any condition.

`lastrowid` This read-only attribute provides the row id of the last inserted row. It is only updated after successful `INSERT` or `REPLACE` statements using the [`execute()`](#sqlite3.Cursor.execute "sqlite3.Cursor.execute") method. For other statements, after [`executemany()`](#sqlite3.Cursor.executemany "sqlite3.Cursor.executemany") or [`executescript()`](#sqlite3.Cursor.executescript "sqlite3.Cursor.executescript"), or if the insertion failed, the value of `lastrowid` is left unchanged. The initial value of `lastrowid` is [`None`](constants#None "None"). Note Inserts into `WITHOUT ROWID` tables are not recorded. Changed in version 3.6: Added support for the `REPLACE` statement.

`arraysize` Read/write attribute that controls the number of rows returned by [`fetchmany()`](#sqlite3.Cursor.fetchmany "sqlite3.Cursor.fetchmany"). The default value is 1, which means a single row is fetched per call.

`description` This read-only attribute provides the column names of the last query. To remain compatible with the Python DB API, it returns a 7-tuple for each column where the last six items of each tuple are [`None`](constants#None "None"). It is set for `SELECT` statements without any matching rows as well.

`connection` This read-only attribute provides the SQLite database [`Connection`](#sqlite3.Connection "sqlite3.Connection") used by the [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") object.
A [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") object created by calling [`con.cursor()`](#sqlite3.Connection.cursor "sqlite3.Connection.cursor") will have a [`connection`](#sqlite3.Cursor.connection "sqlite3.Cursor.connection") attribute that refers to *con*: ``` >>> con = sqlite3.connect(":memory:") >>> cur = con.cursor() >>> cur.connection == con True ``` Row Objects ----------- `class sqlite3.Row` A [`Row`](#sqlite3.Row "sqlite3.Row") instance serves as a highly optimized [`row_factory`](#sqlite3.Connection.row_factory "sqlite3.Connection.row_factory") for [`Connection`](#sqlite3.Connection "sqlite3.Connection") objects. It tries to mimic a tuple in most of its features. It supports mapping access by column name and index, iteration, representation, equality testing and [`len()`](functions#len "len"). If two [`Row`](#sqlite3.Row "sqlite3.Row") objects have exactly the same columns and their members are equal, they compare equal. `keys()` This method returns a list of column names. Immediately after a query, it is the first member of each tuple in [`Cursor.description`](#sqlite3.Cursor.description "sqlite3.Cursor.description"). Changed in version 3.5: Added support of slicing. Let’s assume we initialize a table as in the example given above: ``` con = sqlite3.connect(":memory:") cur = con.cursor() cur.execute('''create table stocks (date text, trans text, symbol text, qty real, price real)''') cur.execute("""insert into stocks values ('2006-01-05','BUY','RHAT',100,35.14)""") con.commit() cur.close() ``` Now we plug [`Row`](#sqlite3.Row "sqlite3.Row") in: ``` >>> con.row_factory = sqlite3.Row >>> cur = con.cursor() >>> cur.execute('select * from stocks') <sqlite3.Cursor object at 0x7f4e7dd8fa80> >>> r = cur.fetchone() >>> type(r) <class 'sqlite3.Row'> >>> tuple(r) ('2006-01-05', 'BUY', 'RHAT', 100.0, 35.14) >>> len(r) 5 >>> r[2] 'RHAT' >>> r.keys() ['date', 'trans', 'symbol', 'qty', 'price'] >>> r['qty'] 100.0 >>> for member in r: ... print(member) ... 2006-01-05 BUY RHAT 100.0 35.14 ``` Exceptions ---------- `exception sqlite3.Warning` A subclass of [`Exception`](exceptions#Exception "Exception"). `exception sqlite3.Error` The base class of the other exceptions in this module. It is a subclass of [`Exception`](exceptions#Exception "Exception"). `exception sqlite3.DatabaseError` Exception raised for errors that are related to the database. `exception sqlite3.IntegrityError` Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails. It is a subclass of [`DatabaseError`](#sqlite3.DatabaseError "sqlite3.DatabaseError"). `exception sqlite3.ProgrammingError` Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, etc. It is a subclass of [`DatabaseError`](#sqlite3.DatabaseError "sqlite3.DatabaseError"). `exception sqlite3.OperationalError` Exception raised for errors that are related to the database’s operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs, the data source name is not found, a transaction could not be processed, etc. It is a subclass of [`DatabaseError`](#sqlite3.DatabaseError "sqlite3.DatabaseError"). `exception sqlite3.NotSupportedError` Exception raised in case a method or database API was used which is not supported by the database, e.g. 
calling the [`rollback()`](#sqlite3.Connection.rollback "sqlite3.Connection.rollback") method on a connection that does not support transactions or has transactions turned off. It is a subclass of [`DatabaseError`](#sqlite3.DatabaseError "sqlite3.DatabaseError").

SQLite and Python types
-----------------------

### Introduction

SQLite natively supports the following types: `NULL`, `INTEGER`, `REAL`, `TEXT`, `BLOB`. The following Python types can thus be sent to SQLite without any problem:

| Python type | SQLite type |
| --- | --- |
| [`None`](constants#None "None") | `NULL` |
| [`int`](functions#int "int") | `INTEGER` |
| [`float`](functions#float "float") | `REAL` |
| [`str`](stdtypes#str "str") | `TEXT` |
| [`bytes`](stdtypes#bytes "bytes") | `BLOB` |

This is how SQLite types are converted to Python types by default:

| SQLite type | Python type |
| --- | --- |
| `NULL` | [`None`](constants#None "None") |
| `INTEGER` | [`int`](functions#int "int") |
| `REAL` | [`float`](functions#float "float") |
| `TEXT` | depends on [`text_factory`](#sqlite3.Connection.text_factory "sqlite3.Connection.text_factory"), [`str`](stdtypes#str "str") by default |
| `BLOB` | [`bytes`](stdtypes#bytes "bytes") |

The type system of the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module is extensible in two ways: you can store additional Python types in an SQLite database via object adaptation, and you can let the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module convert SQLite types to different Python types via converters.

### Using adapters to store additional Python types in SQLite databases

As described before, SQLite supports only a limited set of types natively. To use other Python types with SQLite, you must **adapt** them to one of the sqlite3 module’s supported types for SQLite: one of NoneType, int, float, str, bytes. There are two ways to enable the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module to adapt a custom Python type to one of the supported ones.

#### Letting your object adapt itself

This is a good approach if you write the class yourself. Let’s suppose you have a class like this:

```
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
```

Now you want to store the point in a single SQLite column. First you’ll have to choose one of the supported types to be used for representing the point. Let’s just use str and separate the coordinates using a semicolon. Then you need to give your class a method `__conform__(self, protocol)` which must return the converted value. The parameter *protocol* will be `PrepareProtocol`.

```
import sqlite3

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __conform__(self, protocol):
        if protocol is sqlite3.PrepareProtocol:
            return "%f;%f" % (self.x, self.y)

con = sqlite3.connect(":memory:")
cur = con.cursor()

p = Point(4.0, -3.2)
cur.execute("select ?", (p,))
print(cur.fetchone()[0])
con.close()
```

#### Registering an adapter callable

The other possibility is to create a function that converts the type to the string representation and register the function with [`register_adapter()`](#sqlite3.register_adapter "sqlite3.register_adapter").
``` import sqlite3 class Point: def __init__(self, x, y): self.x, self.y = x, y def adapt_point(point): return "%f;%f" % (point.x, point.y) sqlite3.register_adapter(Point, adapt_point) con = sqlite3.connect(":memory:") cur = con.cursor() p = Point(4.0, -3.2) cur.execute("select ?", (p,)) print(cur.fetchone()[0]) con.close() ``` The [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module has two default adapters for Python’s built-in [`datetime.date`](datetime#datetime.date "datetime.date") and [`datetime.datetime`](datetime#datetime.datetime "datetime.datetime") types. Now let’s suppose we want to store [`datetime.datetime`](datetime#datetime.datetime "datetime.datetime") objects not in ISO representation, but as a Unix timestamp. ``` import sqlite3 import datetime import time def adapt_datetime(ts): return time.mktime(ts.timetuple()) sqlite3.register_adapter(datetime.datetime, adapt_datetime) con = sqlite3.connect(":memory:") cur = con.cursor() now = datetime.datetime.now() cur.execute("select ?", (now,)) print(cur.fetchone()[0]) con.close() ``` ### Converting SQLite values to custom Python types Writing an adapter lets you send custom Python types to SQLite. But to make it really useful we need to make the Python to SQLite to Python roundtrip work. Enter converters. Let’s go back to the `Point` class. We stored the x and y coordinates separated via semicolons as strings in SQLite. First, we’ll define a converter function that accepts the string as a parameter and constructs a `Point` object from it. Note Converter functions **always** get called with a [`bytes`](stdtypes#bytes "bytes") object, no matter under which data type you sent the value to SQLite. ``` def convert_point(s): x, y = map(float, s.split(b";")) return Point(x, y) ``` Now you need to make the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module know that what you select from the database is actually a point. There are two ways of doing this: * Implicitly via the declared type * Explicitly via the column name Both ways are described in section [Module functions and constants](#sqlite3-module-contents), in the entries for the constants [`PARSE_DECLTYPES`](#sqlite3.PARSE_DECLTYPES "sqlite3.PARSE_DECLTYPES") and [`PARSE_COLNAMES`](#sqlite3.PARSE_COLNAMES "sqlite3.PARSE_COLNAMES"). The following example illustrates both approaches. 
```
import sqlite3

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        return "(%f;%f)" % (self.x, self.y)

def adapt_point(point):
    return ("%f;%f" % (point.x, point.y)).encode('ascii')

def convert_point(s):
    x, y = list(map(float, s.split(b";")))
    return Point(x, y)

# Register the adapter
sqlite3.register_adapter(Point, adapt_point)

# Register the converter
sqlite3.register_converter("point", convert_point)

p = Point(4.0, -3.2)

#########################
# 1) Using declared types
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test(p point)")

cur.execute("insert into test(p) values (?)", (p,))
cur.execute("select p from test")
print("with declared types:", cur.fetchone()[0])
cur.close()
con.close()

#######################
# 2) Using column names
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_COLNAMES)
cur = con.cursor()
cur.execute("create table test(p)")

cur.execute("insert into test(p) values (?)", (p,))
cur.execute('select p as "p [point]" from test')
print("with column names:", cur.fetchone()[0])
cur.close()
con.close()
```

### Default adapters and converters

There are default adapters for the date and datetime types in the datetime module. They will be sent as ISO dates/ISO timestamps to SQLite. The default converters are registered under the name “date” for [`datetime.date`](datetime#datetime.date "datetime.date") and under the name “timestamp” for [`datetime.datetime`](datetime#datetime.datetime "datetime.datetime"). This way, you can use date/timestamps from Python without any additional fiddling in most cases. The format of the adapters is also compatible with the experimental SQLite date/time functions. The following example demonstrates this.

```
import sqlite3
import datetime

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES)
cur = con.cursor()
cur.execute("create table test(d date, ts timestamp)")

today = datetime.date.today()
now = datetime.datetime.now()

cur.execute("insert into test(d, ts) values (?, ?)", (today, now))
cur.execute("select d, ts from test")
row = cur.fetchone()
print(today, "=>", row[0], type(row[0]))
print(now, "=>", row[1], type(row[1]))

cur.execute('select current_date as "d [date]", current_timestamp as "ts [timestamp]"')
row = cur.fetchone()
print("current_date", row[0], type(row[0]))
print("current_timestamp", row[1], type(row[1]))
con.close()
```

If a timestamp stored in SQLite has a fractional part longer than 6 digits, its value will be truncated to microsecond precision by the timestamp converter.

Note The default “timestamp” converter ignores UTC offsets in the database and always returns a naive [`datetime.datetime`](datetime#datetime.datetime "datetime.datetime") object. To preserve UTC offsets in timestamps, either leave converters disabled, or register an offset-aware converter with [`register_converter()`](#sqlite3.register_converter "sqlite3.register_converter").

Controlling Transactions
------------------------

The underlying `sqlite3` library operates in `autocommit` mode by default, but the Python [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module by default does not. `autocommit` mode means that statements that modify the database take effect immediately. A `BEGIN` or `SAVEPOINT` statement disables `autocommit` mode, and a `COMMIT`, a `ROLLBACK`, or a `RELEASE` that ends the outermost transaction, turns `autocommit` mode back on.
The Python [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module by default issues a `BEGIN` statement implicitly before a Data Modification Language (DML) statement (i.e. `INSERT`/`UPDATE`/`DELETE`/`REPLACE`). You can control which kind of `BEGIN` statements [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") implicitly executes via the *isolation\_level* parameter to the [`connect()`](#sqlite3.connect "sqlite3.connect") call, or via the `isolation_level` property of connections. If you specify no *isolation\_level*, a plain `BEGIN` is used, which is equivalent to specifying `DEFERRED`. Other possible values are `IMMEDIATE` and `EXCLUSIVE`. You can disable the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module’s implicit transaction management by setting `isolation_level` to `None`. This will leave the underlying `sqlite3` library operating in `autocommit` mode. You can then completely control the transaction state by explicitly issuing `BEGIN`, `ROLLBACK`, `SAVEPOINT`, and `RELEASE` statements in your code. Note that [`executescript()`](#sqlite3.Cursor.executescript "sqlite3.Cursor.executescript") disregards `isolation_level`; any transaction control must be added explicitly. Changed in version 3.6: [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") used to implicitly commit an open transaction before DDL statements. This is no longer the case. Using sqlite3 efficiently ------------------------- ### Using shortcut methods Using the nonstandard `execute()`, `executemany()` and `executescript()` methods of the [`Connection`](#sqlite3.Connection "sqlite3.Connection") object, your code can be written more concisely because you don’t have to create the (often superfluous) [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") objects explicitly. Instead, the [`Cursor`](#sqlite3.Cursor "sqlite3.Cursor") objects are created implicitly and these shortcut methods return the cursor objects. This way, you can execute a `SELECT` statement and iterate over it directly using only a single call on the [`Connection`](#sqlite3.Connection "sqlite3.Connection") object. ``` import sqlite3 langs = [ ("C++", 1985), ("Objective-C", 1984), ] con = sqlite3.connect(":memory:") # Create the table con.execute("create table lang(name, first_appeared)") # Fill the table con.executemany("insert into lang(name, first_appeared) values (?, ?)", langs) # Print the table contents for row in con.execute("select name, first_appeared from lang"): print(row) print("I just deleted", con.execute("delete from lang").rowcount, "rows") # close is not a shortcut method and it's not called automatically, # so the connection object should be closed manually con.close() ``` ### Accessing columns by name instead of by index One useful feature of the [`sqlite3`](#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") module is the built-in [`sqlite3.Row`](#sqlite3.Row "sqlite3.Row") class designed to be used as a row factory. 
Rows wrapped with this class can be accessed both by index (like tuples) and case-insensitively by name:

```
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row

cur = con.cursor()
cur.execute("select 'John' as name, 42 as age")
for row in cur:
    assert row[0] == row["name"]
    assert row["name"] == row["nAmE"]
    assert row[1] == row["age"]
    assert row[1] == row["AgE"]

con.close()
```

### Using the connection as a context manager

Connection objects can be used as context managers that automatically commit or roll back transactions. In the event of an exception, the transaction is rolled back; otherwise, the transaction is committed:

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table lang (id integer primary key, name varchar unique)")

# Successful, con.commit() is called automatically afterwards
with con:
    con.execute("insert into lang(name) values (?)", ("Python",))

# con.rollback() is called after the with block finishes with an exception;
# the exception is still raised and must be caught
try:
    with con:
        con.execute("insert into lang(name) values (?)", ("Python",))
except sqlite3.IntegrityError:
    print("couldn't add Python twice")

# Connection object used as context manager only commits or rolls back
# transactions, so the connection object should be closed manually
con.close()
```

#### Footnotes

`1` The sqlite3 module is not built with loadable extension support by default, because some platforms (notably macOS) have SQLite libraries which are compiled without this feature. To get loadable extension support, you must pass `--enable-loadable-sqlite-extensions` to configure.
python urllib — URL handling modules urllib — URL handling modules ============================= **Source code:** [Lib/urllib/](https://github.com/python/cpython/tree/3.9/Lib/urllib/) `urllib` is a package that collects several modules for working with URLs: * [`urllib.request`](urllib.request#module-urllib.request "urllib.request: Extensible library for opening URLs.") for opening and reading URLs * [`urllib.error`](urllib.error#module-urllib.error "urllib.error: Exception classes raised by urllib.request.") containing the exceptions raised by [`urllib.request`](urllib.request#module-urllib.request "urllib.request: Extensible library for opening URLs.") * [`urllib.parse`](urllib.parse#module-urllib.parse "urllib.parse: Parse URLs into or assemble them from components.") for parsing URLs * [`urllib.robotparser`](urllib.robotparser#module-urllib.robotparser "urllib.robotparser: Load a robots.txt file and answer questions about fetchability of other URLs.") for parsing `robots.txt` files python email.parser: Parsing email messages email.parser: Parsing email messages ==================================== **Source code:** [Lib/email/parser.py](https://github.com/python/cpython/tree/3.9/Lib/email/parser.py) Message object structures can be created in one of two ways: they can be created from whole cloth by creating an [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") object, adding headers using the dictionary interface, and adding payload(s) using [`set_content()`](email.message#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content") and related methods, or they can be created by parsing a serialized representation of the email message. The [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package provides a standard parser that understands most email document structures, including MIME documents. You can pass the parser a bytes, string or file object, and the parser will return to you the root [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") instance of the object structure. For simple, non-MIME messages the payload of this root object will likely be a string containing the text of the message. For MIME messages, the root object will return `True` from its [`is_multipart()`](email.message#email.message.EmailMessage.is_multipart "email.message.EmailMessage.is_multipart") method, and the subparts can be accessed via the payload manipulation methods, such as [`get_body()`](email.message#email.message.EmailMessage.get_body "email.message.EmailMessage.get_body"), [`iter_parts()`](email.message#email.message.EmailMessage.iter_parts "email.message.EmailMessage.iter_parts"), and [`walk()`](email.message#email.message.EmailMessage.walk "email.message.EmailMessage.walk"). There are actually two parser interfaces available for use, the [`Parser`](#email.parser.Parser "email.parser.Parser") API and the incremental [`FeedParser`](#email.parser.FeedParser "email.parser.FeedParser") API. The [`Parser`](#email.parser.Parser "email.parser.Parser") API is most useful if you have the entire text of the message in memory, or if the entire message lives in a file on the file system. [`FeedParser`](#email.parser.FeedParser "email.parser.FeedParser") is more appropriate when you are reading the message from a stream which might block waiting for more input (such as reading an email message from a socket). 
The [`FeedParser`](#email.parser.FeedParser "email.parser.FeedParser") can consume and parse the message incrementally, and only returns the root object when you close the parser. Note that the parser can be extended in limited ways, and of course you can implement your own parser completely from scratch. All of the logic that connects the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package’s bundled parser and the [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") class is embodied in the `policy` class, so a custom parser can create message object trees any way it finds necessary by implementing custom versions of the appropriate `policy` methods. FeedParser API -------------- The [`BytesFeedParser`](#email.parser.BytesFeedParser "email.parser.BytesFeedParser"), imported from the `email.feedparser` module, provides an API that is conducive to incremental parsing of email messages, such as would be necessary when reading the text of an email message from a source that can block (such as a socket). The [`BytesFeedParser`](#email.parser.BytesFeedParser "email.parser.BytesFeedParser") can of course be used to parse an email message fully contained in a [bytes-like object](../glossary#term-bytes-like-object), string, or file, but the [`BytesParser`](#email.parser.BytesParser "email.parser.BytesParser") API may be more convenient for such use cases. The semantics and results of the two parser APIs are identical. The [`BytesFeedParser`](#email.parser.BytesFeedParser "email.parser.BytesFeedParser")’s API is simple; you create an instance, feed it a bunch of bytes until there’s no more to feed it, then close the parser to retrieve the root message object. The [`BytesFeedParser`](#email.parser.BytesFeedParser "email.parser.BytesFeedParser") is extremely accurate when parsing standards-compliant messages, and it does a very good job of parsing non-compliant messages, providing information about how a message was deemed broken. It will populate a message object’s [`defects`](email.message#email.message.EmailMessage.defects "email.message.EmailMessage.defects") attribute with a list of any problems it found in a message. See the [`email.errors`](email.errors#module-email.errors "email.errors: The exception classes used by the email package.") module for the list of defects that it can find. Here is the API for the [`BytesFeedParser`](#email.parser.BytesFeedParser "email.parser.BytesFeedParser"): `class email.parser.BytesFeedParser(_factory=None, *, policy=policy.compat32)` Create a [`BytesFeedParser`](#email.parser.BytesFeedParser "email.parser.BytesFeedParser") instance. Optional *\_factory* is a no-argument callable; if not specified use the [`message_factory`](email.policy#email.policy.Policy.message_factory "email.policy.Policy.message_factory") from the *policy*. Call *\_factory* whenever a new message object is needed. If *policy* is specified use the rules it specifies to update the representation of the message. If *policy* is not set, use the [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32") policy, which maintains backward compatibility with the Python 3.2 version of the email package and provides [`Message`](email.compat32-message#email.message.Message "email.message.Message") as the default factory. All other policies provide [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") as the default *\_factory*. 
For more information on what else *policy* controls, see the [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages") documentation. Note: **The policy keyword should always be specified**; the default will change to [`email.policy.default`](email.policy#email.policy.default "email.policy.default") in a future version of Python. New in version 3.2. Changed in version 3.3: Added the *policy* keyword. Changed in version 3.6: *\_factory* defaults to the policy `message_factory`.

`feed(data)` Feed the parser some more data. *data* should be a [bytes-like object](../glossary#term-bytes-like-object) containing one or more lines. The lines can be partial and the parser will stitch such partial lines together properly. The lines can have any of the three common line endings: carriage return, newline, or carriage return and newline (they can even be mixed).

`close()` Complete the parsing of all previously fed data and return the root message object. It is undefined what happens if [`feed()`](#email.parser.BytesFeedParser.feed "email.parser.BytesFeedParser.feed") is called after this method has been called.

`class email.parser.FeedParser(_factory=None, *, policy=policy.compat32)` Works like [`BytesFeedParser`](#email.parser.BytesFeedParser "email.parser.BytesFeedParser") except that the input to the [`feed()`](#email.parser.BytesFeedParser.feed "email.parser.BytesFeedParser.feed") method must be a string. This is of limited utility, since the only way for such a message to be valid is for it to contain only ASCII text or, if `utf8` is `True`, no binary attachments. Changed in version 3.3: Added the *policy* keyword.

Parser API
----------

The [`BytesParser`](#email.parser.BytesParser "email.parser.BytesParser") class, imported from the [`email.parser`](#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") module, provides an API that can be used to parse a message when the complete contents of the message are available in a [bytes-like object](../glossary#term-bytes-like-object) or file. The [`email.parser`](#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") module also provides [`Parser`](#email.parser.Parser "email.parser.Parser") for parsing strings, and header-only parsers, [`BytesHeaderParser`](#email.parser.BytesHeaderParser "email.parser.BytesHeaderParser") and [`HeaderParser`](#email.parser.HeaderParser "email.parser.HeaderParser"), which can be used if you’re only interested in the headers of the message. [`BytesHeaderParser`](#email.parser.BytesHeaderParser "email.parser.BytesHeaderParser") and [`HeaderParser`](#email.parser.HeaderParser "email.parser.HeaderParser") can be much faster in these situations, since they do not attempt to parse the message body, instead setting the payload to the raw body.

`class email.parser.BytesParser(_class=None, *, policy=policy.compat32)` Create a [`BytesParser`](#email.parser.BytesParser "email.parser.BytesParser") instance. The *\_class* and *policy* arguments have the same meaning and semantics as the *\_factory* and *policy* arguments of [`BytesFeedParser`](#email.parser.BytesFeedParser "email.parser.BytesFeedParser"). Note: **The policy keyword should always be specified**; the default will change to [`email.policy.default`](email.policy#email.policy.default "email.policy.default") in a future version of Python.
Changed in version 3.3: Removed the *strict* argument that was deprecated in 2.4. Added the *policy* keyword. Changed in version 3.6: *\_class* defaults to the policy `message_factory`. `parse(fp, headersonly=False)` Read all the data from the binary file-like object *fp*, parse the resulting bytes, and return the message object. *fp* must support both the [`readline()`](io#io.IOBase.readline "io.IOBase.readline") and the `read()` methods. The bytes contained in *fp* must be formatted as a block of [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html) (or, if `utf8` is `True`, [**RFC 6532**](https://tools.ietf.org/html/rfc6532.html)) style headers and header continuation lines, optionally preceded by an envelope header. The header block is terminated either by the end of the data or by a blank line. Following the header block is the body of the message (which may contain MIME-encoded subparts, including subparts with a *Content-Transfer-Encoding* of `8bit`). Optional *headersonly* is a flag specifying whether to stop parsing after reading the headers or not. The default is `False`, meaning it parses the entire contents of the file. `parsebytes(bytes, headersonly=False)` Similar to the [`parse()`](#email.parser.BytesParser.parse "email.parser.BytesParser.parse") method, except it takes a [bytes-like object](../glossary#term-bytes-like-object) instead of a file-like object. Calling this method on a [bytes-like object](../glossary#term-bytes-like-object) is equivalent to wrapping *bytes* in a [`BytesIO`](io#io.BytesIO "io.BytesIO") instance first and calling [`parse()`](#email.parser.BytesParser.parse "email.parser.BytesParser.parse"). Optional *headersonly* is as with the [`parse()`](#email.parser.BytesParser.parse "email.parser.BytesParser.parse") method. New in version 3.2. `class email.parser.BytesHeaderParser(_class=None, *, policy=policy.compat32)` Exactly like [`BytesParser`](#email.parser.BytesParser "email.parser.BytesParser"), except that *headersonly* defaults to `True`. New in version 3.3. `class email.parser.Parser(_class=None, *, policy=policy.compat32)` This class is parallel to [`BytesParser`](#email.parser.BytesParser "email.parser.BytesParser"), but handles string input. Changed in version 3.3: Removed the *strict* argument. Added the *policy* keyword. Changed in version 3.6: *\_class* defaults to the policy `message_factory`. `parse(fp, headersonly=False)` Read all the data from the text-mode file-like object *fp*, parse the resulting text, and return the root message object. *fp* must support both the [`readline()`](io#io.TextIOBase.readline "io.TextIOBase.readline") and the [`read()`](io#io.TextIOBase.read "io.TextIOBase.read") methods on file-like objects. Other than the text mode requirement, this method operates like [`BytesParser.parse()`](#email.parser.BytesParser.parse "email.parser.BytesParser.parse"). `parsestr(text, headersonly=False)` Similar to the [`parse()`](#email.parser.Parser.parse "email.parser.Parser.parse") method, except it takes a string object instead of a file-like object. Calling this method on a string is equivalent to wrapping *text* in a [`StringIO`](io#io.StringIO "io.StringIO") instance first and calling [`parse()`](#email.parser.Parser.parse "email.parser.Parser.parse"). Optional *headersonly* is as with the [`parse()`](#email.parser.Parser.parse "email.parser.Parser.parse") method. 
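For illustration, a minimal sketch of parsing with [`BytesParser`](#email.parser.BytesParser "email.parser.BytesParser"); the raw message bytes here are hypothetical, and real input would typically come from a file or socket:

```
from email import policy
from email.parser import BytesParser

# Hypothetical raw message bytes.
raw = b"From: a@example.com\r\nSubject: Hi\r\n\r\nBody text\r\n"

msg = BytesParser(policy=policy.default).parsebytes(raw)
print(msg["Subject"])      # Hi
print(msg.get_content())   # Body text

# With headersonly=True, parsing stops after the header block and the
# body is left as the raw payload.
headers = BytesParser(policy=policy.default).parsebytes(raw, headersonly=True)
print(headers["From"])     # a@example.com
```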
`class email.parser.HeaderParser(_class=None, *, policy=policy.compat32)` Exactly like [`Parser`](#email.parser.Parser "email.parser.Parser"), except that *headersonly* defaults to `True`. Since creating a message object structure from a string or a file object is such a common task, four functions are provided as a convenience. They are available in the top-level [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package namespace. `email.message_from_bytes(s, _class=None, *, policy=policy.compat32)` Return a message object structure from a [bytes-like object](../glossary#term-bytes-like-object). This is equivalent to `BytesParser().parsebytes(s)`. Optional *\_class* and *policy* are interpreted as with the [`BytesParser`](#email.parser.BytesParser "email.parser.BytesParser") class constructor. New in version 3.2. Changed in version 3.3: Removed the *strict* argument. Added the *policy* keyword. `email.message_from_binary_file(fp, _class=None, *, policy=policy.compat32)` Return a message object structure tree from an open binary [file object](../glossary#term-file-object). This is equivalent to `BytesParser().parse(fp)`. *\_class* and *policy* are interpreted as with the [`BytesParser`](#email.parser.BytesParser "email.parser.BytesParser") class constructor. New in version 3.2. Changed in version 3.3: Removed the *strict* argument. Added the *policy* keyword. `email.message_from_string(s, _class=None, *, policy=policy.compat32)` Return a message object structure from a string. This is equivalent to `Parser().parsestr(s)`. *\_class* and *policy* are interpreted as with the [`Parser`](#email.parser.Parser "email.parser.Parser") class constructor. Changed in version 3.3: Removed the *strict* argument. Added the *policy* keyword. `email.message_from_file(fp, _class=None, *, policy=policy.compat32)` Return a message object structure tree from an open [file object](../glossary#term-file-object). This is equivalent to `Parser().parse(fp)`. *\_class* and *policy* are interpreted as with the [`Parser`](#email.parser.Parser "email.parser.Parser") class constructor. Changed in version 3.3: Removed the *strict* argument. Added the *policy* keyword. Changed in version 3.6: *\_class* defaults to the policy `message_factory`. Here’s an example of how you might use [`message_from_bytes()`](#email.message_from_bytes "email.message_from_bytes") at an interactive Python prompt: ``` >>> import email >>> msg = email.message_from_bytes(myBytes) ``` Additional notes ---------------- Here are some notes on the parsing semantics: * Most non-*multipart* type messages are parsed as a single message object with a string payload. These objects will return `False` for [`is_multipart()`](email.message#email.message.EmailMessage.is_multipart "email.message.EmailMessage.is_multipart"), and [`iter_parts()`](email.message#email.message.EmailMessage.iter_parts "email.message.EmailMessage.iter_parts") will yield an empty list. * All *multipart* type messages will be parsed as a container message object with a list of sub-message objects for their payload. The outer container message will return `True` for [`is_multipart()`](email.message#email.message.EmailMessage.is_multipart "email.message.EmailMessage.is_multipart"), and [`iter_parts()`](email.message#email.message.EmailMessage.iter_parts "email.message.EmailMessage.iter_parts") will yield a list of subparts. 
* Most messages with a content type of *message/\** (such as *message/delivery-status* and *message/rfc822*) will also be parsed as a container object containing a list payload of length 1. Their [`is_multipart()`](email.message#email.message.EmailMessage.is_multipart "email.message.EmailMessage.is_multipart") method will return `True`. The single element yielded by [`iter_parts()`](email.message#email.message.EmailMessage.iter_parts "email.message.EmailMessage.iter_parts") will be a sub-message object.
* Some non-standards-compliant messages may not be internally consistent about their *multipart*-edness. Such messages may have a *Content-Type* header of type *multipart*, but their [`is_multipart()`](email.message#email.message.EmailMessage.is_multipart "email.message.EmailMessage.is_multipart") method may return `False`. If such messages were parsed with the [`FeedParser`](#email.parser.FeedParser "email.parser.FeedParser"), they will have an instance of the `MultipartInvariantViolationDefect` class in their *defects* attribute list. See [`email.errors`](email.errors#module-email.errors "email.errors: The exception classes used by the email package.") for details.

python queue — A synchronized queue class

queue — A synchronized queue class
==================================

**Source code:** [Lib/queue.py](https://github.com/python/cpython/tree/3.9/Lib/queue.py)

The [`queue`](#module-queue "queue: A synchronized queue class.") module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The [`Queue`](#queue.Queue "queue.Queue") class in this module implements all the required locking semantics.

The module implements three types of queue, which differ only in the order in which the entries are retrieved. In a FIFO queue, the first tasks added are the first retrieved. In a LIFO queue, the most recently added entry is the first retrieved (operating like a stack). With a priority queue, the entries are kept sorted (using the [`heapq`](heapq#module-heapq "heapq: Heap queue algorithm (a.k.a. priority queue).") module) and the lowest valued entry is retrieved first.

Internally, those three types of queues use locks to temporarily block competing threads; however, they are not designed to handle reentrancy within a thread.

In addition, the module implements a “simple” FIFO queue type, [`SimpleQueue`](#queue.SimpleQueue "queue.SimpleQueue"), whose specific implementation provides additional guarantees in exchange for reduced functionality.

The [`queue`](#module-queue "queue: A synchronized queue class.") module defines the following classes and exceptions:

`class queue.Queue(maxsize=0)` Constructor for a FIFO queue. *maxsize* is an integer that sets the upper bound on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If *maxsize* is less than or equal to zero, the queue size is infinite.

`class queue.LifoQueue(maxsize=0)` Constructor for a LIFO queue. *maxsize* is an integer that sets the upper bound on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If *maxsize* is less than or equal to zero, the queue size is infinite.

`class queue.PriorityQueue(maxsize=0)` Constructor for a priority queue.
*maxsize* is an integer that sets the upper bound on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If *maxsize* is less than or equal to zero, the queue size is infinite.

The lowest valued entries are retrieved first (the lowest valued entry is the one returned by `sorted(list(entries))[0]`). A typical pattern for entries is a tuple in the form: `(priority_number, data)`.

If the *data* elements are not comparable, the data can be wrapped in a class that ignores the data item and only compares the priority number:

```
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class PrioritizedItem:
    priority: int
    item: Any = field(compare=False)
```

`class queue.SimpleQueue` Constructor for an unbounded FIFO queue. Simple queues lack advanced functionality such as task tracking.

New in version 3.7.

`exception queue.Empty` Exception raised when non-blocking [`get()`](#queue.Queue.get "queue.Queue.get") (or [`get_nowait()`](#queue.Queue.get_nowait "queue.Queue.get_nowait")) is called on a [`Queue`](#queue.Queue "queue.Queue") object which is empty.

`exception queue.Full` Exception raised when non-blocking [`put()`](#queue.Queue.put "queue.Queue.put") (or [`put_nowait()`](#queue.Queue.put_nowait "queue.Queue.put_nowait")) is called on a [`Queue`](#queue.Queue "queue.Queue") object which is full.

Queue Objects
-------------

Queue objects ([`Queue`](#queue.Queue "queue.Queue"), [`LifoQueue`](#queue.LifoQueue "queue.LifoQueue"), or [`PriorityQueue`](#queue.PriorityQueue "queue.PriorityQueue")) provide the public methods described below.

`Queue.qsize()` Return the approximate size of the queue. Note that qsize() > 0 doesn’t guarantee that a subsequent get() will not block, nor will qsize() < maxsize guarantee that put() will not block.

`Queue.empty()` Return `True` if the queue is empty, `False` otherwise. If empty() returns `True` it doesn’t guarantee that a subsequent call to put() will not block. Similarly, if empty() returns `False` it doesn’t guarantee that a subsequent call to get() will not block.

`Queue.full()` Return `True` if the queue is full, `False` otherwise. If full() returns `True` it doesn’t guarantee that a subsequent call to get() will not block. Similarly, if full() returns `False` it doesn’t guarantee that a subsequent call to put() will not block.

`Queue.put(item, block=True, timeout=None)` Put *item* into the queue. If the optional argument *block* is true and *timeout* is `None` (the default), block if necessary until a free slot is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the [`Full`](#queue.Full "queue.Full") exception if no free slot was available within that time. Otherwise (*block* is false), put an item on the queue if a free slot is immediately available, else raise the [`Full`](#queue.Full "queue.Full") exception (*timeout* is ignored in that case).

`Queue.put_nowait(item)` Equivalent to `put(item, block=False)`.

`Queue.get(block=True, timeout=None)` Remove and return an item from the queue. If the optional argument *block* is true and *timeout* is `None` (the default), block if necessary until an item is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the [`Empty`](#queue.Empty "queue.Empty") exception if no item was available within that time.
Otherwise (*block* is false), return an item if one is immediately available, else raise the [`Empty`](#queue.Empty "queue.Empty") exception (*timeout* is ignored in that case).

Prior to 3.0 on POSIX systems, and for all versions on Windows, if *block* is true and *timeout* is `None`, this operation goes into an uninterruptible wait on an underlying lock. This means that no exceptions can occur, and in particular a SIGINT will not trigger a [`KeyboardInterrupt`](exceptions#KeyboardInterrupt "KeyboardInterrupt").

`Queue.get_nowait()` Equivalent to `get(False)`.

Two methods are offered to support tracking whether enqueued tasks have been fully processed by daemon consumer threads.

`Queue.task_done()` Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each [`get()`](#queue.Queue.get "queue.Queue.get") used to fetch a task, a subsequent call to [`task_done()`](#queue.Queue.task_done "queue.Queue.task_done") tells the queue that the processing on the task is complete.

If a [`join()`](#queue.Queue.join "queue.Queue.join") is currently blocking, it will resume when all items have been processed (meaning that a [`task_done()`](#queue.Queue.task_done "queue.Queue.task_done") call was received for every item that had been [`put()`](#queue.Queue.put "queue.Queue.put") into the queue).

Raises a [`ValueError`](exceptions#ValueError "ValueError") if called more times than there were items placed in the queue.

`Queue.join()` Blocks until all items in the queue have been retrieved and processed.

The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls [`task_done()`](#queue.Queue.task_done "queue.Queue.task_done") to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, [`join()`](#queue.Queue.join "queue.Queue.join") unblocks.

Example of how to wait for enqueued tasks to be completed:

```
import threading, queue

q = queue.Queue()

def worker():
    while True:
        item = q.get()
        print(f'Working on {item}')
        print(f'Finished {item}')
        q.task_done()

# turn on the worker thread
threading.Thread(target=worker, daemon=True).start()

# send thirty task requests to the worker
for item in range(30):
    q.put(item)
print('All task requests sent\n', end='')

# block until all tasks are done
q.join()
print('All work completed')
```

SimpleQueue Objects
-------------------

[`SimpleQueue`](#queue.SimpleQueue "queue.SimpleQueue") objects provide the public methods described below.

`SimpleQueue.qsize()` Return the approximate size of the queue. Note that qsize() > 0 doesn’t guarantee that a subsequent get() will not block.

`SimpleQueue.empty()` Return `True` if the queue is empty, `False` otherwise. If empty() returns `False` it doesn’t guarantee that a subsequent call to get() will not block.

`SimpleQueue.put(item, block=True, timeout=None)` Put *item* into the queue. The method never blocks and always succeeds (except for potential low-level errors such as failure to allocate memory). The optional args *block* and *timeout* are ignored and only provided for compatibility with [`Queue.put()`](#queue.Queue.put "queue.Queue.put").

**CPython implementation detail:** This method has a C implementation which is reentrant. That is, a `put()` or `get()` call can be interrupted by another `put()` call in the same thread without deadlocking or corrupting internal state inside the queue.
This makes it appropriate for use in destructors such as `__del__` methods or [`weakref`](weakref#module-weakref "weakref: Support for weak references and weak dictionaries.") callbacks.

`SimpleQueue.put_nowait(item)` Equivalent to `put(item, block=False)`, provided for compatibility with [`Queue.put_nowait()`](#queue.Queue.put_nowait "queue.Queue.put_nowait").

`SimpleQueue.get(block=True, timeout=None)` Remove and return an item from the queue. If the optional argument *block* is true and *timeout* is `None` (the default), block if necessary until an item is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the [`Empty`](#queue.Empty "queue.Empty") exception if no item was available within that time. Otherwise (*block* is false), return an item if one is immediately available, else raise the [`Empty`](#queue.Empty "queue.Empty") exception (*timeout* is ignored in that case).

`SimpleQueue.get_nowait()` Equivalent to `get(False)`.

See also

`class` [`multiprocessing.Queue`](multiprocessing#multiprocessing.Queue "multiprocessing.Queue") A queue class for use in a multi-processing (rather than multi-threading) context.

[`collections.deque`](collections#collections.deque "collections.deque") is an alternative implementation of unbounded queues with fast atomic [`append()`](collections#collections.deque.append "collections.deque.append") and [`popleft()`](collections#collections.deque.popleft "collections.deque.popleft") operations that do not require locking and also support indexing.
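Rounding out the `SimpleQueue` methods above, here is a minimal runnable sketch (the item values are arbitrary):

```
import queue

q = queue.SimpleQueue()
q.put('job-1')          # never blocks; SimpleQueue is unbounded
q.put('job-2')

print(q.qsize())        # 2 (approximate size)
print(q.get())          # 'job-1' (FIFO order)
print(q.get_nowait())   # 'job-2'

try:
    q.get_nowait()      # queue is now empty
except queue.Empty:
    print('queue drained')
```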
python stat — Interpreting stat() results

stat — Interpreting stat() results
==================================

**Source code:** [Lib/stat.py](https://github.com/python/cpython/tree/3.9/Lib/stat.py)

The [`stat`](#module-stat "stat: Utilities for interpreting the results of os.stat(), os.lstat() and os.fstat().") module defines constants and functions for interpreting the results of [`os.stat()`](os#os.stat "os.stat"), [`os.fstat()`](os#os.fstat "os.fstat") and [`os.lstat()`](os#os.lstat "os.lstat") (if they exist). For complete details about the `stat()`, `fstat()` and `lstat()` calls, consult the documentation for your system.

Changed in version 3.4: The stat module is backed by a C implementation.

The [`stat`](#module-stat "stat: Utilities for interpreting the results of os.stat(), os.lstat() and os.fstat().") module defines the following functions to test for specific file types:

`stat.S_ISDIR(mode)` Return non-zero if the mode is from a directory.

`stat.S_ISCHR(mode)` Return non-zero if the mode is from a character special device file.

`stat.S_ISBLK(mode)` Return non-zero if the mode is from a block special device file.

`stat.S_ISREG(mode)` Return non-zero if the mode is from a regular file.

`stat.S_ISFIFO(mode)` Return non-zero if the mode is from a FIFO (named pipe).

`stat.S_ISLNK(mode)` Return non-zero if the mode is from a symbolic link.

`stat.S_ISSOCK(mode)` Return non-zero if the mode is from a socket.

`stat.S_ISDOOR(mode)` Return non-zero if the mode is from a door. New in version 3.4.

`stat.S_ISPORT(mode)` Return non-zero if the mode is from an event port. New in version 3.4.

`stat.S_ISWHT(mode)` Return non-zero if the mode is from a whiteout. New in version 3.4.

Two additional functions are defined for more general manipulation of the file’s mode:

`stat.S_IMODE(mode)` Return the portion of the file’s mode that can be set by [`os.chmod()`](os#os.chmod "os.chmod"): that is, the file’s permission bits, plus the sticky bit, set-group-id, and set-user-id bits (on systems that support them).

`stat.S_IFMT(mode)` Return the portion of the file’s mode that describes the file type (used by the `S_IS*()` functions above).

Normally, you would use the `os.path.is*()` functions for testing the type of a file; the functions here are useful when you are doing multiple tests of the same file and wish to avoid the overhead of the `stat()` system call for each test. These are also useful when checking for information about a file that isn’t handled by [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."), like the tests for block and character devices.

Example:

```
import os, sys
from stat import *

def walktree(top, callback):
    '''recursively descend the directory tree rooted at top,
    calling the callback function for each regular file'''

    for f in os.listdir(top):
        pathname = os.path.join(top, f)
        mode = os.lstat(pathname).st_mode
        if S_ISDIR(mode):
            # It's a directory, recurse into it
            walktree(pathname, callback)
        elif S_ISREG(mode):
            # It's a file, call the callback function
            callback(pathname)
        else:
            # Unknown file type, print a message
            print('Skipping %s' % pathname)

def visitfile(file):
    print('visiting', file)

if __name__ == '__main__':
    walktree(sys.argv[1], visitfile)
```

An additional utility function is provided to convert a file’s mode to a human-readable string:

`stat.filemode(mode)` Convert a file’s mode to a string of the form ‘-rwxrwxrwx’.

New in version 3.3.
Changed in version 3.4: The function supports [`S_IFDOOR`](#stat.S_IFDOOR "stat.S_IFDOOR"), [`S_IFPORT`](#stat.S_IFPORT "stat.S_IFPORT") and [`S_IFWHT`](#stat.S_IFWHT "stat.S_IFWHT").

All the variables below are simply symbolic indexes into the 10-tuple returned by [`os.stat()`](os#os.stat "os.stat"), [`os.fstat()`](os#os.fstat "os.fstat") or [`os.lstat()`](os#os.lstat "os.lstat").

`stat.ST_MODE` Inode protection mode.

`stat.ST_INO` Inode number.

`stat.ST_DEV` Device inode resides on.

`stat.ST_NLINK` Number of links to the inode.

`stat.ST_UID` User id of the owner.

`stat.ST_GID` Group id of the owner.

`stat.ST_SIZE` Size in bytes of a plain file; amount of data waiting on some special files.

`stat.ST_ATIME` Time of last access.

`stat.ST_MTIME` Time of last modification.

`stat.ST_CTIME` The “ctime” as reported by the operating system. On some systems (like Unix) it is the time of the last metadata change, and on others (like Windows) it is the creation time (see platform documentation for details).

The interpretation of “file size” changes according to the file type. For plain files this is the size of the file in bytes. For FIFOs and sockets under most flavors of Unix (including Linux in particular), the “size” is the number of bytes waiting to be read at the time of the call to [`os.stat()`](os#os.stat "os.stat"), [`os.fstat()`](os#os.fstat "os.fstat"), or [`os.lstat()`](os#os.lstat "os.lstat"); this can sometimes be useful, especially for polling one of these special files after a non-blocking open. The meaning of the size field for other character and block devices varies more, depending on the implementation of the underlying system call.

The variables below define the flags used in the [`ST_MODE`](#stat.ST_MODE "stat.ST_MODE") field. Use of the functions above is more portable than use of the first set of flags:

`stat.S_IFSOCK` Socket.

`stat.S_IFLNK` Symbolic link.

`stat.S_IFREG` Regular file.

`stat.S_IFBLK` Block device.

`stat.S_IFDIR` Directory.

`stat.S_IFCHR` Character device.

`stat.S_IFIFO` FIFO.

`stat.S_IFDOOR` Door. New in version 3.4.

`stat.S_IFPORT` Event port. New in version 3.4.

`stat.S_IFWHT` Whiteout. New in version 3.4.

Note [`S_IFDOOR`](#stat.S_IFDOOR "stat.S_IFDOOR"), [`S_IFPORT`](#stat.S_IFPORT "stat.S_IFPORT") or [`S_IFWHT`](#stat.S_IFWHT "stat.S_IFWHT") are defined as 0 when the platform does not have support for the file types.

The following flags can also be used in the *mode* argument of [`os.chmod()`](os#os.chmod "os.chmod"):

`stat.S_ISUID` Set UID bit.

`stat.S_ISGID` Set-group-ID bit. This bit has several special uses. For a directory it indicates that BSD semantics are to be used for that directory: files created there inherit their group ID from the directory, not from the effective group ID of the creating process, and directories created there will also get the [`S_ISGID`](#stat.S_ISGID "stat.S_ISGID") bit set. For a file that does not have the group execution bit ([`S_IXGRP`](#stat.S_IXGRP "stat.S_IXGRP")) set, the set-group-ID bit indicates mandatory file/record locking (see also [`S_ENFMT`](#stat.S_ENFMT "stat.S_ENFMT")).

`stat.S_ISVTX` Sticky bit. When this bit is set on a directory it means that a file in that directory can be renamed or deleted only by the owner of the file, by the owner of the directory, or by a privileged process.

`stat.S_IRWXU` Mask for file owner permissions.

`stat.S_IRUSR` Owner has read permission.

`stat.S_IWUSR` Owner has write permission.

`stat.S_IXUSR` Owner has execute permission.

`stat.S_IRWXG` Mask for group permissions.
`stat.S_IRGRP` Group has read permission. `stat.S_IWGRP` Group has write permission. `stat.S_IXGRP` Group has execute permission. `stat.S_IRWXO` Mask for permissions for others (not in group). `stat.S_IROTH` Others have read permission. `stat.S_IWOTH` Others have write permission. `stat.S_IXOTH` Others have execute permission. `stat.S_ENFMT` System V file locking enforcement. This flag is shared with [`S_ISGID`](#stat.S_ISGID "stat.S_ISGID"): file/record locking is enforced on files that do not have the group execution bit ([`S_IXGRP`](#stat.S_IXGRP "stat.S_IXGRP")) set. `stat.S_IREAD` Unix V7 synonym for [`S_IRUSR`](#stat.S_IRUSR "stat.S_IRUSR"). `stat.S_IWRITE` Unix V7 synonym for [`S_IWUSR`](#stat.S_IWUSR "stat.S_IWUSR"). `stat.S_IEXEC` Unix V7 synonym for [`S_IXUSR`](#stat.S_IXUSR "stat.S_IXUSR"). The following flags can be used in the *flags* argument of [`os.chflags()`](os#os.chflags "os.chflags"): `stat.UF_NODUMP` Do not dump the file. `stat.UF_IMMUTABLE` The file may not be changed. `stat.UF_APPEND` The file may only be appended to. `stat.UF_OPAQUE` The directory is opaque when viewed through a union stack. `stat.UF_NOUNLINK` The file may not be renamed or deleted. `stat.UF_COMPRESSED` The file is stored compressed (macOS 10.6+). `stat.UF_HIDDEN` The file should not be displayed in a GUI (macOS 10.5+). `stat.SF_ARCHIVED` The file may be archived. `stat.SF_IMMUTABLE` The file may not be changed. `stat.SF_APPEND` The file may only be appended to. `stat.SF_NOUNLINK` The file may not be renamed or deleted. `stat.SF_SNAPSHOT` The file is a snapshot file. See the \*BSD or macOS systems man page *[chflags(2)](https://manpages.debian.org/chflags(2))* for more information. On Windows, the following file attribute constants are available for use when testing bits in the `st_file_attributes` member returned by [`os.stat()`](os#os.stat "os.stat"). See the [Windows API documentation](https://msdn.microsoft.com/en-us/library/windows/desktop/gg258117.aspx) for more detail on the meaning of these constants. `stat.FILE_ATTRIBUTE_ARCHIVE` `stat.FILE_ATTRIBUTE_COMPRESSED` `stat.FILE_ATTRIBUTE_DEVICE` `stat.FILE_ATTRIBUTE_DIRECTORY` `stat.FILE_ATTRIBUTE_ENCRYPTED` `stat.FILE_ATTRIBUTE_HIDDEN` `stat.FILE_ATTRIBUTE_INTEGRITY_STREAM` `stat.FILE_ATTRIBUTE_NORMAL` `stat.FILE_ATTRIBUTE_NOT_CONTENT_INDEXED` `stat.FILE_ATTRIBUTE_NO_SCRUB_DATA` `stat.FILE_ATTRIBUTE_OFFLINE` `stat.FILE_ATTRIBUTE_READONLY` `stat.FILE_ATTRIBUTE_REPARSE_POINT` `stat.FILE_ATTRIBUTE_SPARSE_FILE` `stat.FILE_ATTRIBUTE_SYSTEM` `stat.FILE_ATTRIBUTE_TEMPORARY` `stat.FILE_ATTRIBUTE_VIRTUAL` New in version 3.5. On Windows, the following constants are available for comparing against the `st_reparse_tag` member returned by [`os.lstat()`](os#os.lstat "os.lstat"). These are well-known constants, but are not an exhaustive list. `stat.IO_REPARSE_TAG_SYMLINK` `stat.IO_REPARSE_TAG_MOUNT_POINT` `stat.IO_REPARSE_TAG_APPEXECLINK` New in version 3.8. python linecache — Random access to text lines linecache — Random access to text lines ======================================= **Source code:** [Lib/linecache.py](https://github.com/python/cpython/tree/3.9/Lib/linecache.py) The [`linecache`](#module-linecache "linecache: Provides random access to individual lines from text files.") module allows one to get any line from a Python source file, while attempting to optimize internally, using a cache, the common case where many lines are read from a single file. 
This is used by the [`traceback`](traceback#module-traceback "traceback: Print or retrieve a stack traceback.") module to retrieve source lines for inclusion in the formatted traceback.

The [`tokenize.open()`](tokenize#tokenize.open "tokenize.open") function is used to open files. This function uses [`tokenize.detect_encoding()`](tokenize#tokenize.detect_encoding "tokenize.detect_encoding") to get the encoding of the file; in the absence of an encoding token, the file encoding defaults to UTF-8.

The [`linecache`](#module-linecache "linecache: Provides random access to individual lines from text files.") module defines the following functions:

`linecache.getline(filename, lineno, module_globals=None)` Get line *lineno* from file named *filename*. This function will never raise an exception; it will return `''` on errors (the terminating newline character will be included for lines that are found).

If a file named *filename* is not found, the function first checks for a [**PEP 302**](https://www.python.org/dev/peps/pep-0302) `__loader__` in *module\_globals*. If there is such a loader and it defines a `get_source` method, then that determines the source lines (if `get_source()` returns `None`, then `''` is returned). Finally, if *filename* is a relative filename, it is looked up relative to the entries in the module search path, `sys.path`.

`linecache.clearcache()` Clear the cache. Use this function if you no longer need lines from files previously read using [`getline()`](#linecache.getline "linecache.getline").

`linecache.checkcache(filename=None)` Check the cache for validity. Use this function if files in the cache may have changed on disk, and you require the updated version. If *filename* is omitted, it will check all the entries in the cache.

`linecache.lazycache(filename, module_globals)` Capture enough detail about a non-file-based module to permit getting its lines later via [`getline()`](#linecache.getline "linecache.getline") even if *module\_globals* is `None` in the later call. This avoids doing I/O until a line is actually needed, without having to carry the module globals around indefinitely.

New in version 3.5.

Example:

```
>>> import linecache
>>> linecache.getline(linecache.__file__, 8)
'import sys\n'
```

python Coroutines and Tasks

Coroutines and Tasks
====================

This section outlines high-level asyncio APIs to work with coroutines and Tasks.

* [Coroutines](#coroutines)
* [Awaitables](#awaitables)
* [Running an asyncio Program](#running-an-asyncio-program)
* [Creating Tasks](#creating-tasks)
* [Sleeping](#sleeping)
* [Running Tasks Concurrently](#running-tasks-concurrently)
* [Shielding From Cancellation](#shielding-from-cancellation)
* [Timeouts](#timeouts)
* [Waiting Primitives](#waiting-primitives)
* [Running in Threads](#running-in-threads)
* [Scheduling From Other Threads](#scheduling-from-other-threads)
* [Introspection](#introspection)
* [Task Object](#task-object)
* [Generator-based Coroutines](#generator-based-coroutines)

Coroutines
----------

[Coroutines](../glossary#term-coroutine) declared with the async/await syntax are the preferred way of writing asyncio applications. For example, the following snippet of code prints “hello”, waits 1 second, and then prints “world”:

``` >>> import asyncio >>> async def main(): ... print('hello') ... await asyncio.sleep(1) ...
print('world') >>> asyncio.run(main()) hello world ``` Note that simply calling a coroutine will not schedule it to be executed: ``` >>> main() <coroutine object main at 0x1053bb7c8> ``` To actually run a coroutine, asyncio provides three main mechanisms: * The [`asyncio.run()`](#asyncio.run "asyncio.run") function to run the top-level entry point “main()” function (see the above example.) * Awaiting on a coroutine. The following snippet of code will print “hello” after waiting for 1 second, and then print “world” after waiting for *another* 2 seconds: ``` import asyncio import time async def say_after(delay, what): await asyncio.sleep(delay) print(what) async def main(): print(f"started at {time.strftime('%X')}") await say_after(1, 'hello') await say_after(2, 'world') print(f"finished at {time.strftime('%X')}") asyncio.run(main()) ``` Expected output: ``` started at 17:13:52 hello world finished at 17:13:55 ``` * The [`asyncio.create_task()`](#asyncio.create_task "asyncio.create_task") function to run coroutines concurrently as asyncio [`Tasks`](#asyncio.Task "asyncio.Task"). Let’s modify the above example and run two `say_after` coroutines *concurrently*: ``` async def main(): task1 = asyncio.create_task( say_after(1, 'hello')) task2 = asyncio.create_task( say_after(2, 'world')) print(f"started at {time.strftime('%X')}") # Wait until both tasks are completed (should take # around 2 seconds.) await task1 await task2 print(f"finished at {time.strftime('%X')}") ``` Note that expected output now shows that the snippet runs 1 second faster than before: ``` started at 17:14:32 hello world finished at 17:14:34 ``` Awaitables ---------- We say that an object is an **awaitable** object if it can be used in an [`await`](../reference/expressions#await) expression. Many asyncio APIs are designed to accept awaitables. There are three main types of *awaitable* objects: **coroutines**, **Tasks**, and **Futures**. #### Coroutines Python coroutines are *awaitables* and therefore can be awaited from other coroutines: ``` import asyncio async def nested(): return 42 async def main(): # Nothing happens if we just call "nested()". # A coroutine object is created but not awaited, # so it *won't run at all*. nested() # Let's do it differently now and await it: print(await nested()) # will print "42". asyncio.run(main()) ``` Important In this documentation the term “coroutine” can be used for two closely related concepts: * a *coroutine function*: an [`async def`](../reference/compound_stmts#async-def) function; * a *coroutine object*: an object returned by calling a *coroutine function*. asyncio also supports legacy [generator-based](#asyncio-generator-based-coro) coroutines. #### Tasks *Tasks* are used to schedule coroutines *concurrently*. When a coroutine is wrapped into a *Task* with functions like [`asyncio.create_task()`](#asyncio.create_task "asyncio.create_task") the coroutine is automatically scheduled to run soon: ``` import asyncio async def nested(): return 42 async def main(): # Schedule nested() to run soon concurrently # with "main()". task = asyncio.create_task(nested()) # "task" can now be used to cancel "nested()", or # can simply be awaited to wait until it is complete: await task asyncio.run(main()) ``` #### Futures A [`Future`](asyncio-future#asyncio.Future "asyncio.Future") is a special **low-level** awaitable object that represents an **eventual result** of an asynchronous operation. 
When a Future object is *awaited* it means that the coroutine will wait until the Future is resolved in some other place.

Future objects in asyncio are needed to allow callback-based code to be used with async/await.

Normally **there is no need** to create Future objects at the application level.

Future objects, sometimes exposed by libraries and some asyncio APIs, can be awaited:

```
async def main():
    await function_that_returns_a_future_object()

    # this is also valid:
    await asyncio.gather(
        function_that_returns_a_future_object(),
        some_python_coroutine()
    )
```

A good example of a low-level function that returns a Future object is [`loop.run_in_executor()`](asyncio-eventloop#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor").

Running an asyncio Program
--------------------------

`asyncio.run(coro, *, debug=False)` Execute the [coroutine](../glossary#term-coroutine) *coro* and return the result.

This function runs the passed coroutine, taking care of managing the asyncio event loop, *finalizing asynchronous generators*, and closing the threadpool.

This function cannot be called when another asyncio event loop is running in the same thread.

If *debug* is `True`, the event loop will be run in debug mode.

This function always creates a new event loop and closes it at the end. It should be used as a main entry point for asyncio programs, and should ideally only be called once.

Example:

```
async def main():
    await asyncio.sleep(1)
    print('hello')

asyncio.run(main())
```

New in version 3.7.

Changed in version 3.9: Updated to use [`loop.shutdown_default_executor()`](asyncio-eventloop#asyncio.loop.shutdown_default_executor "asyncio.loop.shutdown_default_executor").

Note The source code for `asyncio.run()` can be found in [Lib/asyncio/runners.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/runners.py).

Creating Tasks
--------------

`asyncio.create_task(coro, *, name=None)` Wrap the *coro* [coroutine](#coroutine) into a [`Task`](#asyncio.Task "asyncio.Task") and schedule its execution. Return the Task object.

If *name* is not `None`, it is set as the name of the task using [`Task.set_name()`](#asyncio.Task.set_name "asyncio.Task.set_name").

The task is executed in the loop returned by [`get_running_loop()`](asyncio-eventloop#asyncio.get_running_loop "asyncio.get_running_loop"); a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised if there is no running loop in the current thread.

Important Save a reference to the result of this function, to avoid a task disappearing mid-execution.

New in version 3.7.

Changed in version 3.8: Added the `name` parameter.

Sleeping
--------

`coroutine asyncio.sleep(delay, result=None, *, loop=None)` Block for *delay* seconds.

If *result* is provided, it is returned to the caller when the coroutine completes.

`sleep()` always suspends the current task, allowing other tasks to run.

Setting the delay to 0 provides an optimized path to allow other tasks to run. This can be used by long-running functions to avoid blocking the event loop for the full duration of the function call.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.
Example of a coroutine displaying the current date every second for 5 seconds:

```
import asyncio
import datetime

async def display_date():
    loop = asyncio.get_running_loop()
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)

asyncio.run(display_date())
```

Running Tasks Concurrently
--------------------------

`awaitable asyncio.gather(*aws, loop=None, return_exceptions=False)` Run [awaitable objects](#asyncio-awaitables) in the *aws* sequence *concurrently*.

If any awaitable in *aws* is a coroutine, it is automatically scheduled as a Task.

If all awaitables are completed successfully, the result is an aggregate list of returned values. The order of result values corresponds to the order of awaitables in *aws*.

If *return\_exceptions* is `False` (default), the first raised exception is immediately propagated to the task that awaits on `gather()`. Other awaitables in the *aws* sequence **won’t be cancelled** and will continue to run.

If *return\_exceptions* is `True`, exceptions are treated the same as successful results, and aggregated in the result list.

If `gather()` is *cancelled*, all submitted awaitables (that have not completed yet) are also *cancelled*.

If any Task or Future from the *aws* sequence is *cancelled*, it is treated as if it raised [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") – the `gather()` call is **not** cancelled in this case. This is to prevent the cancellation of one submitted Task/Future from causing other Tasks/Futures to be cancelled.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

Example:

```
import asyncio

async def factorial(name, number):
    f = 1
    for i in range(2, number + 1):
        print(f"Task {name}: Compute factorial({number}), currently i={i}...")
        await asyncio.sleep(1)
        f *= i
    print(f"Task {name}: factorial({number}) = {f}")
    return f

async def main():
    # Schedule three calls *concurrently*:
    L = await asyncio.gather(
        factorial("A", 2),
        factorial("B", 3),
        factorial("C", 4),
    )
    print(L)

asyncio.run(main())

# Expected output:
#
# Task A: Compute factorial(2), currently i=2...
# Task B: Compute factorial(3), currently i=2...
# Task C: Compute factorial(4), currently i=2...
# Task A: factorial(2) = 2
# Task B: Compute factorial(3), currently i=3...
# Task C: Compute factorial(4), currently i=3...
# Task B: factorial(3) = 6
# Task C: Compute factorial(4), currently i=4...
# Task C: factorial(4) = 24
# [2, 6, 24]
```

Note If *return\_exceptions* is False, cancelling gather() after it has been marked done won’t cancel any submitted awaitables. For instance, gather can be marked done after propagating an exception to the caller; therefore, calling `gather.cancel()` after catching an exception (raised by one of the awaitables) from gather won’t cancel any other awaitables.

Changed in version 3.7: If the *gather* itself is cancelled, the cancellation is propagated regardless of *return\_exceptions*.

Shielding From Cancellation
---------------------------

`awaitable asyncio.shield(aw, *, loop=None)` Protect an [awaitable object](#asyncio-awaitables) from being [`cancelled`](#asyncio.Task.cancel "asyncio.Task.cancel").

If *aw* is a coroutine it is automatically scheduled as a Task.

The statement:

```
res = await shield(something())
```

is equivalent to:

```
res = await something()
```

*except* that if the coroutine containing it is cancelled, the Task running in `something()` is not cancelled.
From the point of view of `something()`, the cancellation did not happen. Its caller, however, is still cancelled, so the “await” expression still raises a [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError").

If `something()` is cancelled by other means (e.g. from within itself) that would also cancel `shield()`.

If it is desired to completely ignore cancellation (not recommended) the `shield()` function should be combined with a try/except clause, as follows:

```
try:
    res = await shield(something())
except CancelledError:
    res = None
```

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

Timeouts
--------

`coroutine asyncio.wait_for(aw, timeout, *, loop=None)` Wait for the *aw* [awaitable](#asyncio-awaitables) to complete with a timeout.

If *aw* is a coroutine it is automatically scheduled as a Task.

*timeout* can either be `None` or a float or int number of seconds to wait for. If *timeout* is `None`, block until the future completes.

If a timeout occurs, it cancels the task and raises [`asyncio.TimeoutError`](asyncio-exceptions#asyncio.TimeoutError "asyncio.TimeoutError").

To avoid the task [`cancellation`](#asyncio.Task.cancel "asyncio.Task.cancel"), wrap it in [`shield()`](#asyncio.shield "asyncio.shield").

The function will wait until the future is actually cancelled, so the total wait time may exceed the *timeout*. If an exception happens during cancellation, it is propagated.

If the wait is cancelled, the future *aw* is also cancelled.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

Example:

```
async def eternity():
    # Sleep for one hour
    await asyncio.sleep(3600)
    print('yay!')

async def main():
    # Wait for at most 1 second
    try:
        await asyncio.wait_for(eternity(), timeout=1.0)
    except asyncio.TimeoutError:
        print('timeout!')

asyncio.run(main())

# Expected output:
#
# timeout!
```

Changed in version 3.7: When *aw* is cancelled due to a timeout, `wait_for` waits for *aw* to be cancelled. Previously, it raised [`asyncio.TimeoutError`](asyncio-exceptions#asyncio.TimeoutError "asyncio.TimeoutError") immediately.

Waiting Primitives
------------------

`coroutine asyncio.wait(aws, *, loop=None, timeout=None, return_when=ALL_COMPLETED)` Run [awaitable objects](#asyncio-awaitables) in the *aws* iterable concurrently and block until the condition specified by *return\_when* is met.

The *aws* iterable must not be empty.

Returns two sets of Tasks/Futures: `(done, pending)`.

Usage:

```
done, pending = await asyncio.wait(aws)
```

*timeout* (a float or int), if specified, can be used to control the maximum number of seconds to wait before returning.

Note that this function does not raise [`asyncio.TimeoutError`](asyncio-exceptions#asyncio.TimeoutError "asyncio.TimeoutError"). Futures or Tasks that aren’t done when the timeout occurs are simply returned in the second set.

*return\_when* indicates when this function should return. It must be one of the following constants:

| Constant | Description |
| --- | --- |
| `FIRST_COMPLETED` | The function will return when any future finishes or is cancelled. |
| `FIRST_EXCEPTION` | The function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to `ALL_COMPLETED`. |
| `ALL_COMPLETED` | The function will return when all futures finish or are cancelled. |

Unlike [`wait_for()`](#asyncio.wait_for "asyncio.wait_for"), `wait()` does not cancel the futures when a timeout occurs.
Deprecated since version 3.8: If any awaitable in *aws* is a coroutine, it is automatically scheduled as a Task. Passing coroutine objects to `wait()` directly is deprecated as it leads to [confusing behavior](#asyncio-example-wait-coroutine).

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

Note `wait()` schedules coroutines as Tasks automatically and later returns those implicitly created Task objects in `(done, pending)` sets. Therefore the following code won’t work as expected:

```
async def foo():
    return 42

coro = foo()
done, pending = await asyncio.wait({coro})

if coro in done:
    # This branch will never be run!
    ...
```

Here is how the above snippet can be fixed:

```
async def foo():
    return 42

task = asyncio.create_task(foo())
done, pending = await asyncio.wait({task})

if task in done:
    # Everything will work as expected now.
    ...
```

Deprecated since version 3.8, will be removed in version 3.11: Passing coroutine objects to `wait()` directly is deprecated.

`asyncio.as_completed(aws, *, loop=None, timeout=None)` Run [awaitable objects](#asyncio-awaitables) in the *aws* iterable concurrently. Return an iterator of coroutines. Each coroutine returned can be awaited to get the earliest next result from the iterable of the remaining awaitables.

Raises [`asyncio.TimeoutError`](asyncio-exceptions#asyncio.TimeoutError "asyncio.TimeoutError") if the timeout occurs before all Futures are done.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

Example:

```
for coro in as_completed(aws):
    earliest_result = await coro
    # ...
```

Running in Threads
------------------

`coroutine asyncio.to_thread(func, /, *args, **kwargs)` Asynchronously run function *func* in a separate thread.

Any \*args and \*\*kwargs supplied for this function are directly passed to *func*. Also, the current [`contextvars.Context`](contextvars#contextvars.Context "contextvars.Context") is propagated, allowing context variables from the event loop thread to be accessed in the separate thread.

Return a coroutine that can be awaited to get the eventual result of *func*.

This coroutine function is primarily intended to be used for executing IO-bound functions/methods that would otherwise block the event loop if they were run in the main thread. For example:

```
def blocking_io():
    print(f"start blocking_io at {time.strftime('%X')}")
    # Note that time.sleep() can be replaced with any blocking
    # IO-bound operation, such as file operations.
    time.sleep(1)
    print(f"blocking_io complete at {time.strftime('%X')}")

async def main():
    print(f"started main at {time.strftime('%X')}")

    await asyncio.gather(
        asyncio.to_thread(blocking_io),
        asyncio.sleep(1))

    print(f"finished main at {time.strftime('%X')}")

asyncio.run(main())

# Expected output:
#
# started main at 19:50:53
# start blocking_io at 19:50:53
# blocking_io complete at 19:50:54
# finished main at 19:50:54
```

Directly calling `blocking_io()` in any coroutine would block the event loop for its duration, resulting in an additional 1 second of run time. Instead, by using `asyncio.to_thread()`, we can run it in a separate thread without blocking the event loop.

Note Due to the [GIL](../glossary#term-gil), `asyncio.to_thread()` can typically only be used to make IO-bound functions non-blocking. However, for extension modules that release the GIL or alternative Python implementations that don’t have one, `asyncio.to_thread()` can also be used for CPU-bound functions.

New in version 3.9.
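Before moving on, here is a runnable sketch that fleshes out the `as_completed()` fragment shown a little earlier; the coroutine, names, and delays are invented for illustration:

```
import asyncio

async def fetch(delay, value):
    # Stand-in for real I/O work.
    await asyncio.sleep(delay)
    return value

async def main():
    aws = [fetch(2, 'slow'), fetch(1, 'fast')]
    # Results arrive in completion order, not submission order.
    for coro in asyncio.as_completed(aws):
        earliest_result = await coro
        print(earliest_result)   # prints 'fast', then 'slow'

asyncio.run(main())
```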
Scheduling From Other Threads
-----------------------------

`asyncio.run_coroutine_threadsafe(coro, loop)` Submit a coroutine to the given event loop. Thread-safe.

Return a [`concurrent.futures.Future`](concurrent.futures#concurrent.futures.Future "concurrent.futures.Future") to wait for the result from another OS thread.

This function is meant to be called from a different OS thread than the one where the event loop is running. Example:

```
# Create a coroutine
coro = asyncio.sleep(1, result=3)

# Submit the coroutine to a given loop
future = asyncio.run_coroutine_threadsafe(coro, loop)

# Wait for the result with an optional timeout argument
assert future.result(timeout) == 3
```

If an exception is raised in the coroutine, the returned Future will be notified. It can also be used to cancel the task in the event loop:

```
try:
    result = future.result(timeout)
except asyncio.TimeoutError:
    print('The coroutine took too long, cancelling the task...')
    future.cancel()
except Exception as exc:
    print(f'The coroutine raised an exception: {exc!r}')
else:
    print(f'The coroutine returned: {result!r}')
```

See the [concurrency and multithreading](asyncio-dev#asyncio-multithreading) section of the documentation.

Unlike other asyncio functions, this function requires the *loop* argument to be passed explicitly.

New in version 3.5.1.

Introspection
-------------

`asyncio.current_task(loop=None)` Return the currently running [`Task`](#asyncio.Task "asyncio.Task") instance, or `None` if no task is running.

If *loop* is `None` [`get_running_loop()`](asyncio-eventloop#asyncio.get_running_loop "asyncio.get_running_loop") is used to get the current loop.

New in version 3.7.

`asyncio.all_tasks(loop=None)` Return a set of not yet finished [`Task`](#asyncio.Task "asyncio.Task") objects run by the loop.

If *loop* is `None`, [`get_running_loop()`](asyncio-eventloop#asyncio.get_running_loop "asyncio.get_running_loop") is used to get the current loop.

New in version 3.7.

Task Object
-----------

`class asyncio.Task(coro, *, loop=None, name=None)` A [`Future-like`](asyncio-future#asyncio.Future "asyncio.Future") object that runs a Python [coroutine](#coroutine). Not thread-safe.

Tasks are used to run coroutines in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the completion of the Future. When the Future is *done*, the execution of the wrapped coroutine resumes.

Event loops use cooperative scheduling: an event loop runs one Task at a time. While a Task waits for the completion of a Future, the event loop runs other Tasks, callbacks, or performs IO operations.

Use the high-level [`asyncio.create_task()`](#asyncio.create_task "asyncio.create_task") function to create Tasks, or the low-level [`loop.create_task()`](asyncio-eventloop#asyncio.loop.create_task "asyncio.loop.create_task") or [`ensure_future()`](asyncio-future#asyncio.ensure_future "asyncio.ensure_future") functions. Manual instantiation of Tasks is discouraged.

To cancel a running Task use the [`cancel()`](#asyncio.Task.cancel "asyncio.Task.cancel") method. Calling it will cause the Task to throw a [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") exception into the wrapped coroutine. If a coroutine is awaiting on a Future object during cancellation, the Future object will be cancelled.

[`cancelled()`](#asyncio.Task.cancelled "asyncio.Task.cancelled") can be used to check if the Task was cancelled.
The method returns `True` if the wrapped coroutine did not suppress the [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") exception and was actually cancelled.

[`asyncio.Task`](#asyncio.Task "asyncio.Task") inherits from [`Future`](asyncio-future#asyncio.Future "asyncio.Future") all of its APIs except [`Future.set_result()`](asyncio-future#asyncio.Future.set_result "asyncio.Future.set_result") and [`Future.set_exception()`](asyncio-future#asyncio.Future.set_exception "asyncio.Future.set_exception").

Tasks support the [`contextvars`](contextvars#module-contextvars "contextvars: Context Variables") module. When a Task is created it copies the current context and later runs its coroutine in the copied context.

Changed in version 3.7: Added support for the [`contextvars`](contextvars#module-contextvars "contextvars: Context Variables") module.

Changed in version 3.8: Added the `name` parameter.

Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter.

`cancel(msg=None)` Request the Task to be cancelled.

This arranges for a [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") exception to be thrown into the wrapped coroutine on the next cycle of the event loop.

The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a [`try`](../reference/compound_stmts#try) … `except CancelledError` … [`finally`](../reference/compound_stmts#finally) block. Therefore, unlike [`Future.cancel()`](asyncio-future#asyncio.Future.cancel "asyncio.Future.cancel"), [`Task.cancel()`](#asyncio.Task.cancel "asyncio.Task.cancel") does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged.

Changed in version 3.9: Added the `msg` parameter.

The following example illustrates how coroutines can intercept the cancellation request:

```
async def cancel_me():
    print('cancel_me(): before sleep')

    try:
        # Wait for 1 hour
        await asyncio.sleep(3600)
    except asyncio.CancelledError:
        print('cancel_me(): cancel sleep')
        raise
    finally:
        print('cancel_me(): after sleep')

async def main():
    # Create a "cancel_me" Task
    task = asyncio.create_task(cancel_me())

    # Wait for 1 second
    await asyncio.sleep(1)

    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("main(): cancel_me is cancelled now")

asyncio.run(main())

# Expected output:
#
# cancel_me(): before sleep
# cancel_me(): cancel sleep
# cancel_me(): after sleep
# main(): cancel_me is cancelled now
```

`cancelled()` Return `True` if the Task is *cancelled*.

The Task is *cancelled* when the cancellation was requested with [`cancel()`](#asyncio.Task.cancel "asyncio.Task.cancel") and the wrapped coroutine propagated the [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") exception thrown into it.

`done()` Return `True` if the Task is *done*.

A Task is *done* when the wrapped coroutine either returned a value, raised an exception, or the Task was cancelled.

`result()` Return the result of the Task.

If the Task is *done*, the result of the wrapped coroutine is returned (or if the coroutine raised an exception, that exception is re-raised.)

If the Task has been *cancelled*, this method raises a [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") exception.
If the Task’s result isn’t yet available, this method raises an [`InvalidStateError`](asyncio-exceptions#asyncio.InvalidStateError "asyncio.InvalidStateError") exception.

`exception()` Return the exception of the Task.

If the wrapped coroutine raised an exception, that exception is returned. If the wrapped coroutine returned normally this method returns `None`.

If the Task has been *cancelled*, this method raises a [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") exception.

If the Task isn’t *done* yet, this method raises an [`InvalidStateError`](asyncio-exceptions#asyncio.InvalidStateError "asyncio.InvalidStateError") exception.

`add_done_callback(callback, *, context=None)` Add a callback to be run when the Task is *done*.

This method should only be used in low-level callback-based code.

See the documentation of [`Future.add_done_callback()`](asyncio-future#asyncio.Future.add_done_callback "asyncio.Future.add_done_callback") for more details.

`remove_done_callback(callback)` Remove *callback* from the callbacks list.

This method should only be used in low-level callback-based code.

See the documentation of [`Future.remove_done_callback()`](asyncio-future#asyncio.Future.remove_done_callback "asyncio.Future.remove_done_callback") for more details.

`get_stack(*, limit=None)` Return the list of stack frames for this Task.

If the wrapped coroutine is not done, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames.

The frames are always ordered from oldest to newest.

Only one stack frame is returned for a suspended coroutine.

The optional *limit* argument sets the maximum number of frames to return; by default all available frames are returned. The ordering of the returned list differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. (This matches the behavior of the traceback module.)

`print_stack(*, limit=None, file=None)` Print the stack or traceback for this Task.

This produces output similar to that of the traceback module for the frames retrieved by [`get_stack()`](#asyncio.Task.get_stack "asyncio.Task.get_stack").

The *limit* argument is passed to [`get_stack()`](#asyncio.Task.get_stack "asyncio.Task.get_stack") directly.

The *file* argument is an I/O stream to which the output is written; by default output is written to [`sys.stderr`](sys#sys.stderr "sys.stderr").

`get_coro()` Return the coroutine object wrapped by the [`Task`](#asyncio.Task "asyncio.Task").

New in version 3.8.

`get_name()` Return the name of the Task.

If no name has been explicitly assigned to the Task, the default asyncio Task implementation generates a default name during instantiation.

New in version 3.8.

`set_name(value)` Set the name of the Task.

The *value* argument can be any object, which is then converted to a string.

In the default Task implementation, the name will be visible in the [`repr()`](functions#repr "repr") output of a task object.

New in version 3.8.

Generator-based Coroutines
--------------------------

Note Support for generator-based coroutines is **deprecated** and is scheduled for removal in Python 3.11.

Generator-based coroutines predate async/await syntax. They are Python generators that use `yield from` expressions to await on Futures and other coroutines.
Generator-based coroutines should be decorated with [`@asyncio.coroutine`](#asyncio.coroutine "asyncio.coroutine"), although this is not enforced. `@asyncio.coroutine` Decorator to mark generator-based coroutines. This decorator enables legacy generator-based coroutines to be compatible with async/await code: ``` @asyncio.coroutine def old_style_coroutine(): yield from asyncio.sleep(1) async def main(): await old_style_coroutine() ``` This decorator should not be used for [`async def`](../reference/compound_stmts#async-def) coroutines. Deprecated since version 3.8, will be removed in version 3.11: Use [`async def`](../reference/compound_stmts#async-def) instead. `asyncio.iscoroutine(obj)` Return `True` if *obj* is a [coroutine object](#coroutine). This method is different from [`inspect.iscoroutine()`](inspect#inspect.iscoroutine "inspect.iscoroutine") because it returns `True` for generator-based coroutines. `asyncio.iscoroutinefunction(func)` Return `True` if *func* is a [coroutine function](#coroutine). This method is different from [`inspect.iscoroutinefunction()`](inspect#inspect.iscoroutinefunction "inspect.iscoroutinefunction") because it returns `True` for generator-based coroutine functions decorated with [`@coroutine`](#asyncio.coroutine "asyncio.coroutine").
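To make the distinction between the two introspection helpers concrete, here is a short runnable sketch; the function name is arbitrary:

```
import asyncio

async def compute():
    return 42

print(asyncio.iscoroutinefunction(compute))  # True: an async def function

coro = compute()
print(asyncio.iscoroutine(coro))             # True: a coroutine object

# Run it so the coroutine is awaited (avoids a "never awaited" warning).
print(asyncio.run(coro))                     # 42
```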
python Functional Programming Modules Functional Programming Modules ============================== The modules described in this chapter provide functions and classes that support a functional programming style, and general operations on callables. The following modules are documented in this chapter: * [`itertools` — Functions creating iterators for efficient looping](itertools) + [Itertool functions](itertools#itertool-functions) + [Itertools Recipes](itertools#itertools-recipes) * [`functools` — Higher-order functions and operations on callable objects](functools) + [`partial` Objects](functools#partial-objects) * [`operator` — Standard operators as functions](operator) + [Mapping Operators to Functions](operator#mapping-operators-to-functions) + [In-place Operators](operator#in-place-operators) python winreg — Windows registry access winreg — Windows registry access ================================ These functions expose the Windows registry API to Python. Instead of using an integer as the registry handle, a [handle object](#handle-object) is used to ensure that the handles are closed correctly, even if the programmer neglects to explicitly close them. Changed in version 3.3: Several functions in this module used to raise a [`WindowsError`](exceptions#WindowsError "WindowsError"), which is now an alias of [`OSError`](exceptions#OSError "OSError"). Functions --------- This module offers the following functions: `winreg.CloseKey(hkey)` Closes a previously opened registry key. The *hkey* argument specifies a previously opened key. Note If *hkey* is not closed using this method (or via [`hkey.Close()`](#winreg.PyHKEY.Close "winreg.PyHKEY.Close")), it is closed when the *hkey* object is destroyed by Python. `winreg.ConnectRegistry(computer_name, key)` Establishes a connection to a predefined registry handle on another computer, and returns a [handle object](#handle-object). *computer\_name* is the name of the remote computer, of the form `r"\\computername"`. If `None`, the local computer is used. *key* is the predefined handle to connect to. The return value is the handle of the opened key. If the function fails, an [`OSError`](exceptions#OSError "OSError") exception is raised. Raises an [auditing event](sys#auditing) `winreg.ConnectRegistry` with arguments `computer_name`, `key`. Changed in version 3.3: See [above](#exception-changed). `winreg.CreateKey(key, sub_key)` Creates or opens the specified key, returning a [handle object](#handle-object). *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *sub\_key* is a string that names the key this method opens or creates. If *key* is one of the predefined keys, *sub\_key* may be `None`. In that case, the handle returned is the same key handle passed in to the function. If the key already exists, this function opens the existing key. The return value is the handle of the opened key. If the function fails, an [`OSError`](exceptions#OSError "OSError") exception is raised. Raises an [auditing event](sys#auditing) `winreg.CreateKey` with arguments `key`, `sub_key`, `access`. Raises an [auditing event](sys#auditing) `winreg.OpenKey/result` with argument `key`. Changed in version 3.3: See [above](#exception-changed). `winreg.CreateKeyEx(key, sub_key, reserved=0, access=KEY_WRITE)` Creates or opens the specified key, returning a [handle object](#handle-object). *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). 
*sub\_key* is a string that names the key this method opens or creates. *reserved* is a reserved integer, and must be zero. The default is zero. *access* is an integer that specifies an access mask that describes the desired security access for the key. Default is [`KEY_WRITE`](#winreg.KEY_WRITE "winreg.KEY_WRITE"). See [Access Rights](#access-rights) for other allowed values. If *key* is one of the predefined keys, *sub\_key* may be `None`. In that case, the handle returned is the same key handle passed in to the function. If the key already exists, this function opens the existing key. The return value is the handle of the opened key. If the function fails, an [`OSError`](exceptions#OSError "OSError") exception is raised. Raises an [auditing event](sys#auditing) `winreg.CreateKey` with arguments `key`, `sub_key`, `access`. Raises an [auditing event](sys#auditing) `winreg.OpenKey/result` with argument `key`. New in version 3.2. Changed in version 3.3: See [above](#exception-changed). `winreg.DeleteKey(key, sub_key)` Deletes the specified key. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *sub\_key* is a string that must be a subkey of the key identified by the *key* parameter. This value must not be `None`, and the key may not have subkeys. *This method cannot delete keys with subkeys.* If the method succeeds, the entire key, including all of its values, is removed. If the method fails, an [`OSError`](exceptions#OSError "OSError") exception is raised. Raises an [auditing event](sys#auditing) `winreg.DeleteKey` with arguments `key`, `sub_key`, `access`. Changed in version 3.3: See [above](#exception-changed). `winreg.DeleteKeyEx(key, sub_key, access=KEY_WOW64_64KEY, reserved=0)` Deletes the specified key. Note The [`DeleteKeyEx()`](#winreg.DeleteKeyEx "winreg.DeleteKeyEx") function is implemented with the RegDeleteKeyEx Windows API function, which is specific to 64-bit versions of Windows. See the [RegDeleteKeyEx documentation](https://msdn.microsoft.com/en-us/library/ms724847%28VS.85%29.aspx). *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *sub\_key* is a string that must be a subkey of the key identified by the *key* parameter. This value must not be `None`, and the key may not have subkeys. *reserved* is a reserved integer, and must be zero. The default is zero. *access* is an integer that specifies an access mask that describes the desired security access for the key. Default is [`KEY_WOW64_64KEY`](#winreg.KEY_WOW64_64KEY "winreg.KEY_WOW64_64KEY"). See [Access Rights](#access-rights) for other allowed values. *This method cannot delete keys with subkeys.* If the method succeeds, the entire key, including all of its values, is removed. If the method fails, an [`OSError`](exceptions#OSError "OSError") exception is raised. On unsupported Windows versions, [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") is raised. Raises an [auditing event](sys#auditing) `winreg.DeleteKey` with arguments `key`, `sub_key`, `access`. New in version 3.2. Changed in version 3.3: See [above](#exception-changed). `winreg.DeleteValue(key, value)` Removes a named value from a registry key. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *value* is a string that identifies the value to remove. Raises an [auditing event](sys#auditing) `winreg.DeleteValue` with arguments `key`, `value`.
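A minimal sketch tying the creation and deletion functions together (Windows only; the `Software\ExampleApp` subkey name is a hypothetical throwaway, and `SetValueEx()` is documented further below):

```
import winreg

SUBKEY = r"Software\ExampleApp"  # hypothetical throwaway subkey

# Create (or open) the subkey with write access.
key = winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, SUBKEY, 0, winreg.KEY_WRITE)
try:
    # Store a string value, then remove it again.
    winreg.SetValueEx(key, "greeting", 0, winreg.REG_SZ, "hello")
    winreg.DeleteValue(key, "greeting")
finally:
    winreg.CloseKey(key)

# The subkey has no children, so DeleteKey() is allowed to remove it.
winreg.DeleteKey(winreg.HKEY_CURRENT_USER, SUBKEY)
```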
`winreg.EnumKey(key, index)` Enumerates subkeys of an open registry key, returning a string. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *index* is an integer that identifies the index of the key to retrieve. The function retrieves the name of one subkey each time it is called. It is typically called repeatedly until an [`OSError`](exceptions#OSError "OSError") exception is raised, indicating that no more subkeys are available. Raises an [auditing event](sys#auditing) `winreg.EnumKey` with arguments `key`, `index`. Changed in version 3.3: See [above](#exception-changed). `winreg.EnumValue(key, index)` Enumerates values of an open registry key, returning a tuple. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *index* is an integer that identifies the index of the value to retrieve. The function retrieves one value each time it is called. It is typically called repeatedly until an [`OSError`](exceptions#OSError "OSError") exception is raised, indicating no more values. The result is a tuple of 3 items: | Index | Meaning | | --- | --- | | `0` | A string that identifies the value name | | `1` | An object that holds the value data, and whose type depends on the underlying registry type | | `2` | An integer that identifies the type of the value data (see table in docs for [`SetValueEx()`](#winreg.SetValueEx "winreg.SetValueEx")) | Raises an [auditing event](sys#auditing) `winreg.EnumValue` with arguments `key`, `index`. Changed in version 3.3: See [above](#exception-changed). `winreg.ExpandEnvironmentStrings(str)` Expands environment variable placeholders `%NAME%` in strings like [`REG_EXPAND_SZ`](#winreg.REG_EXPAND_SZ "winreg.REG_EXPAND_SZ"): ``` >>> ExpandEnvironmentStrings('%windir%') 'C:\\Windows' ``` Raises an [auditing event](sys#auditing) `winreg.ExpandEnvironmentStrings` with argument `str`. `winreg.FlushKey(key)` Writes all the attributes of a key to the registry. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). It is not necessary to call [`FlushKey()`](#winreg.FlushKey "winreg.FlushKey") to change a key. Registry changes are flushed to disk by the registry using its lazy flusher. Registry changes are also flushed to disk at system shutdown. Unlike [`CloseKey()`](#winreg.CloseKey "winreg.CloseKey"), the [`FlushKey()`](#winreg.FlushKey "winreg.FlushKey") method returns only when all the data has been written to the registry. An application should only call [`FlushKey()`](#winreg.FlushKey "winreg.FlushKey") if it requires absolute certainty that registry changes are on disk. Note If you don’t know whether a [`FlushKey()`](#winreg.FlushKey "winreg.FlushKey") call is required, it probably isn’t. `winreg.LoadKey(key, sub_key, file_name)` Creates a subkey under the specified key and stores registration information from a specified file into that subkey. *key* is a handle returned by [`ConnectRegistry()`](#winreg.ConnectRegistry "winreg.ConnectRegistry") or one of the constants [`HKEY_USERS`](#winreg.HKEY_USERS "winreg.HKEY_USERS") or [`HKEY_LOCAL_MACHINE`](#winreg.HKEY_LOCAL_MACHINE "winreg.HKEY_LOCAL_MACHINE"). *sub\_key* is a string that identifies the subkey to load. *file\_name* is the name of the file to load registry data from. This file must have been created with the [`SaveKey()`](#winreg.SaveKey "winreg.SaveKey") function. Under the file allocation table (FAT) file system, the filename may not have an extension.
A call to [`LoadKey()`](#winreg.LoadKey "winreg.LoadKey") fails if the calling process does not have the `SE_RESTORE_PRIVILEGE` privilege. Note that privileges are different from permissions – see the [RegLoadKey documentation](https://msdn.microsoft.com/en-us/library/ms724889%28v=VS.85%29.aspx) for more details. If *key* is a handle returned by [`ConnectRegistry()`](#winreg.ConnectRegistry "winreg.ConnectRegistry"), then the path specified in *file\_name* is relative to the remote computer. Raises an [auditing event](sys#auditing) `winreg.LoadKey` with arguments `key`, `sub_key`, `file_name`. `winreg.OpenKey(key, sub_key, reserved=0, access=KEY_READ)` `winreg.OpenKeyEx(key, sub_key, reserved=0, access=KEY_READ)` Opens the specified key, returning a [handle object](#handle-object). *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *sub\_key* is a string that identifies the subkey to open. *reserved* is a reserved integer, and must be zero. The default is zero. *access* is an integer that specifies an access mask that describes the desired security access for the key. Default is [`KEY_READ`](#winreg.KEY_READ "winreg.KEY_READ"). See [Access Rights](#access-rights) for other allowed values. The result is a new handle to the specified key. If the function fails, [`OSError`](exceptions#OSError "OSError") is raised. Raises an [auditing event](sys#auditing) `winreg.OpenKey` with arguments `key`, `sub_key`, `access`. Raises an [auditing event](sys#auditing) `winreg.OpenKey/result` with argument `key`. Changed in version 3.2: Allow the use of named arguments. Changed in version 3.3: See [above](#exception-changed). `winreg.QueryInfoKey(key)` Returns information about a key, as a tuple. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). The result is a tuple of 3 items: | Index | Meaning | | --- | --- | | `0` | An integer giving the number of sub keys this key has. | | `1` | An integer giving the number of values this key has. | | `2` | An integer giving when the key was last modified (if available), in units of 100 nanoseconds since Jan 1, 1601. | Raises an [auditing event](sys#auditing) `winreg.QueryInfoKey` with argument `key`. `winreg.QueryValue(key, sub_key)` Retrieves the unnamed value for a key, as a string. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *sub\_key* is a string that holds the name of the subkey with which the value is associated. If this parameter is `None` or empty, the function retrieves the value set by the [`SetValue()`](#winreg.SetValue "winreg.SetValue") method for the key identified by *key*. Values in the registry have name, type, and data components. This method retrieves the data for a key’s first value that has a `NULL` name. But the underlying API call doesn’t return the type, so always use [`QueryValueEx()`](#winreg.QueryValueEx "winreg.QueryValueEx") if possible. Raises an [auditing event](sys#auditing) `winreg.QueryValue` with arguments `key`, `sub_key`, `value_name`. `winreg.QueryValueEx(key, value_name)` Retrieves the type and data for a specified value name associated with an open registry key. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *value\_name* is a string indicating the value to query. The result is a tuple of 2 items: | Index | Meaning | | --- | --- | | `0` | The value of the registry item.
| | `1` | An integer giving the registry type for this value (see table in docs for [`SetValueEx()`](#winreg.SetValueEx "winreg.SetValueEx")) | Raises an [auditing event](sys#auditing) `winreg.QueryValue` with arguments `key`, `sub_key`, `value_name`. `winreg.SaveKey(key, file_name)` Saves the specified key, and all its subkeys, to the specified file. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *file\_name* is the name of the file to save registry data to. This file cannot already exist. If this filename includes an extension, it cannot be used on file allocation table (FAT) file systems by the [`LoadKey()`](#winreg.LoadKey "winreg.LoadKey") method. If *key* represents a key on a remote computer, the path described by *file\_name* is relative to the remote computer. The caller of this method must possess the `SeBackupPrivilege` security privilege. Note that privileges are different from permissions – see the [Conflicts Between User Rights and Permissions documentation](https://msdn.microsoft.com/en-us/library/ms724878%28v=VS.85%29.aspx) for more details. This function passes `NULL` for *security\_attributes* to the API. Raises an [auditing event](sys#auditing) `winreg.SaveKey` with arguments `key`, `file_name`. `winreg.SetValue(key, sub_key, type, value)` Associates a value with a specified key. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *sub\_key* is a string that names the subkey with which the value is associated. *type* is an integer that specifies the type of the data. Currently this must be [`REG_SZ`](#winreg.REG_SZ "winreg.REG_SZ"), meaning only strings are supported. Use the [`SetValueEx()`](#winreg.SetValueEx "winreg.SetValueEx") function for support for other data types. *value* is a string that specifies the new value. If the key specified by the *sub\_key* parameter does not exist, the SetValue function creates it. Value lengths are limited by available memory. Long values (more than 2048 bytes) should be stored as files with the filenames stored in the configuration registry. This helps the registry perform efficiently. The key identified by the *key* parameter must have been opened with [`KEY_SET_VALUE`](#winreg.KEY_SET_VALUE "winreg.KEY_SET_VALUE") access. Raises an [auditing event](sys#auditing) `winreg.SetValue` with arguments `key`, `sub_key`, `type`, `value`. `winreg.SetValueEx(key, value_name, reserved, type, value)` Stores data in the value field of an open registry key. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). *value\_name* is a string that names the value to be set. *reserved* can be anything – zero is always passed to the API. *type* is an integer that specifies the type of the data. See [Value Types](#value-types) for the available types. *value* specifies the new value; its Python type must correspond to the registry type given by *type*. This method can also set additional value and type information for the specified key. The key identified by the key parameter must have been opened with [`KEY_SET_VALUE`](#winreg.KEY_SET_VALUE "winreg.KEY_SET_VALUE") access. To open the key, use the [`CreateKey()`](#winreg.CreateKey "winreg.CreateKey") or [`OpenKey()`](#winreg.OpenKey "winreg.OpenKey") methods. Value lengths are limited by available memory. Long values (more than 2048 bytes) should be stored as files with the filenames stored in the configuration registry. This helps the registry perform efficiently.
Raises an [auditing event](sys#auditing) `winreg.SetValue` with arguments `key`, `sub_key`, `type`, `value`. `winreg.DisableReflectionKey(key)` Disables registry reflection for 32-bit processes running on a 64-bit operating system. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). Will generally raise [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") if executed on a 32-bit operating system. If the key is not on the reflection list, the function succeeds but has no effect. Disabling reflection for a key does not affect reflection of any subkeys. Raises an [auditing event](sys#auditing) `winreg.DisableReflectionKey` with argument `key`. `winreg.EnableReflectionKey(key)` Restores registry reflection for the specified disabled key. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). Will generally raise [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") if executed on a 32-bit operating system. Restoring reflection for a key does not affect reflection of any subkeys. Raises an [auditing event](sys#auditing) `winreg.EnableReflectionKey` with argument `key`. `winreg.QueryReflectionKey(key)` Determines the reflection state for the specified key. *key* is an already open key, or one of the predefined [HKEY\_\* constants](#hkey-constants). Returns `True` if reflection is disabled. Will generally raise [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") if executed on a 32-bit operating system. Raises an [auditing event](sys#auditing) `winreg.QueryReflectionKey` with argument `key`. Constants --------- The following constants are defined for use in many `winreg` functions. ### HKEY\_\* Constants `winreg.HKEY_CLASSES_ROOT` Registry entries subordinate to this key define types (or classes) of documents and the properties associated with those types. Shell and COM applications use the information stored under this key. `winreg.HKEY_CURRENT_USER` Registry entries subordinate to this key define the preferences of the current user. These preferences include the settings of environment variables, data about program groups, colors, printers, network connections, and application preferences. `winreg.HKEY_LOCAL_MACHINE` Registry entries subordinate to this key define the physical state of the computer, including data about the bus type, system memory, and installed hardware and software. `winreg.HKEY_USERS` Registry entries subordinate to this key define the default user configuration for new users on the local computer and the user configuration for the current user. `winreg.HKEY_PERFORMANCE_DATA` Registry entries subordinate to this key allow you to access performance data. The data is not actually stored in the registry; the registry functions cause the system to collect the data from its source. `winreg.HKEY_CURRENT_CONFIG` Contains information about the current hardware profile of the local computer system. `winreg.HKEY_DYN_DATA` This key is not used in versions of Windows after 98. ### Access Rights For more information, see [Registry Key Security and Access](https://msdn.microsoft.com/en-us/library/ms724878%28v=VS.85%29.aspx).
`winreg.KEY_ALL_ACCESS` Combines the STANDARD\_RIGHTS\_REQUIRED, [`KEY_QUERY_VALUE`](#winreg.KEY_QUERY_VALUE "winreg.KEY_QUERY_VALUE"), [`KEY_SET_VALUE`](#winreg.KEY_SET_VALUE "winreg.KEY_SET_VALUE"), [`KEY_CREATE_SUB_KEY`](#winreg.KEY_CREATE_SUB_KEY "winreg.KEY_CREATE_SUB_KEY"), [`KEY_ENUMERATE_SUB_KEYS`](#winreg.KEY_ENUMERATE_SUB_KEYS "winreg.KEY_ENUMERATE_SUB_KEYS"), [`KEY_NOTIFY`](#winreg.KEY_NOTIFY "winreg.KEY_NOTIFY"), and [`KEY_CREATE_LINK`](#winreg.KEY_CREATE_LINK "winreg.KEY_CREATE_LINK") access rights. `winreg.KEY_WRITE` Combines the STANDARD\_RIGHTS\_WRITE, [`KEY_SET_VALUE`](#winreg.KEY_SET_VALUE "winreg.KEY_SET_VALUE"), and [`KEY_CREATE_SUB_KEY`](#winreg.KEY_CREATE_SUB_KEY "winreg.KEY_CREATE_SUB_KEY") access rights. `winreg.KEY_READ` Combines the STANDARD\_RIGHTS\_READ, [`KEY_QUERY_VALUE`](#winreg.KEY_QUERY_VALUE "winreg.KEY_QUERY_VALUE"), [`KEY_ENUMERATE_SUB_KEYS`](#winreg.KEY_ENUMERATE_SUB_KEYS "winreg.KEY_ENUMERATE_SUB_KEYS"), and [`KEY_NOTIFY`](#winreg.KEY_NOTIFY "winreg.KEY_NOTIFY") values. `winreg.KEY_EXECUTE` Equivalent to [`KEY_READ`](#winreg.KEY_READ "winreg.KEY_READ"). `winreg.KEY_QUERY_VALUE` Required to query the values of a registry key. `winreg.KEY_SET_VALUE` Required to create, delete, or set a registry value. `winreg.KEY_CREATE_SUB_KEY` Required to create a subkey of a registry key. `winreg.KEY_ENUMERATE_SUB_KEYS` Required to enumerate the subkeys of a registry key. `winreg.KEY_NOTIFY` Required to request change notifications for a registry key or for subkeys of a registry key. `winreg.KEY_CREATE_LINK` Reserved for system use. #### 64-bit Specific For more information, see [Accessing an Alternate Registry View](https://msdn.microsoft.com/en-us/library/aa384129(v=VS.85).aspx). `winreg.KEY_WOW64_64KEY` Indicates that an application on 64-bit Windows should operate on the 64-bit registry view. `winreg.KEY_WOW64_32KEY` Indicates that an application on 64-bit Windows should operate on the 32-bit registry view. ### Value Types For more information, see [Registry Value Types](https://msdn.microsoft.com/en-us/library/ms724884%28v=VS.85%29.aspx). `winreg.REG_BINARY` Binary data in any form. `winreg.REG_DWORD` 32-bit number. `winreg.REG_DWORD_LITTLE_ENDIAN` A 32-bit number in little-endian format. Equivalent to [`REG_DWORD`](#winreg.REG_DWORD "winreg.REG_DWORD"). `winreg.REG_DWORD_BIG_ENDIAN` A 32-bit number in big-endian format. `winreg.REG_EXPAND_SZ` Null-terminated string containing references to environment variables (`%PATH%`). `winreg.REG_LINK` A Unicode symbolic link. `winreg.REG_MULTI_SZ` A sequence of null-terminated strings, terminated by two null characters. (Python handles this termination automatically.) `winreg.REG_NONE` No defined value type. `winreg.REG_QWORD` A 64-bit number. New in version 3.6. `winreg.REG_QWORD_LITTLE_ENDIAN` A 64-bit number in little-endian format. Equivalent to [`REG_QWORD`](#winreg.REG_QWORD "winreg.REG_QWORD"). New in version 3.6. `winreg.REG_RESOURCE_LIST` A device-driver resource list. `winreg.REG_FULL_RESOURCE_DESCRIPTOR` A hardware setting. `winreg.REG_RESOURCE_REQUIREMENTS_LIST` A hardware resource list. `winreg.REG_SZ` A null-terminated string. Registry Handle Objects ----------------------- This object wraps a Windows HKEY object, automatically closing it when the object is destroyed. To guarantee cleanup, you can call either the [`Close()`](#winreg.PyHKEY.Close "winreg.PyHKEY.Close") method on the object, or the [`CloseKey()`](#winreg.CloseKey "winreg.CloseKey") function. 
All registry functions in this module return one of these objects. All registry functions in this module which accept a handle object also accept an integer; however, use of the handle object is encouraged. Handle objects provide semantics for [`__bool__()`](../reference/datamodel#object.__bool__ "object.__bool__") – thus ``` if handle: print("Yes") ``` will print `Yes` if the handle is currently valid (has not been closed or detached). The object also supports comparison semantics, so handle objects will compare true if they both reference the same underlying Windows handle value. Handle objects can be converted to an integer (e.g., using the built-in [`int()`](functions#int "int") function), in which case the underlying Windows handle value is returned. You can also use the [`Detach()`](#winreg.PyHKEY.Detach "winreg.PyHKEY.Detach") method to return the integer handle, and also disconnect the Windows handle from the handle object. `PyHKEY.Close()` Closes the underlying Windows handle. If the handle is already closed, no error is raised. `PyHKEY.Detach()` Detaches the Windows handle from the handle object. The result is an integer that holds the value of the handle before it is detached. If the handle is already detached or closed, this will return zero. After calling this function, the handle is effectively invalidated, but the handle is not closed. You would call this function when you need the underlying Win32 handle to exist beyond the lifetime of the handle object. Raises an [auditing event](sys#auditing) `winreg.PyHKEY.Detach` with argument `key`. `PyHKEY.__enter__()` `PyHKEY.__exit__(*exc_info)` The HKEY object implements [`__enter__()`](../reference/datamodel#object.__enter__ "object.__enter__") and [`__exit__()`](../reference/datamodel#object.__exit__ "object.__exit__") and thus supports the context protocol for the [`with`](../reference/compound_stmts#with) statement: ``` with OpenKey(HKEY_LOCAL_MACHINE, "foo") as key: ... # work with key ``` will automatically close *key* when control leaves the [`with`](../reference/compound_stmts#with) block.
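A minimal sketch of the enumeration pattern described under `EnumValue()` above, combined with a handle used as a context manager (assumes the standard per-user `Environment` key is present):

```
import winreg

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Environment") as key:
    index = 0
    while True:
        try:
            name, data, type_id = winreg.EnumValue(key, index)
        except OSError:
            break  # no more values to enumerate
        print(f"{name!r} = {data!r} (type {type_id})")
        index += 1
```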
python Security Considerations Security Considerations ======================= The following modules have specific security considerations: * [`cgi`](cgi#module-cgi "cgi: Helpers for running Python scripts via the Common Gateway Interface. (deprecated)"): [CGI security considerations](cgi#cgi-security) * [`hashlib`](hashlib#module-hashlib "hashlib: Secure hash and message digest algorithms."): [all constructors take a “usedforsecurity” keyword-only argument disabling known insecure and blocked algorithms](hashlib#hashlib-usedforsecurity) * [`http.server`](http.server#module-http.server "http.server: HTTP server and request handlers.") is not suitable for production use, only implementing basic security checks. See the [security considerations](http.server#http-server-security). * [`logging`](logging#module-logging "logging: Flexible event logging system for applications."): [Logging configuration uses eval()](logging.config#logging-eval-security) * [`multiprocessing`](multiprocessing#module-multiprocessing "multiprocessing: Process-based parallelism."): [Connection.recv() uses pickle](multiprocessing#multiprocessing-recv-pickle-security) * [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back."): [Restricting globals in pickle](pickle#pickle-restrict) * [`random`](random#module-random "random: Generate pseudo-random numbers with various common distributions.") shouldn’t be used for security purposes, use [`secrets`](secrets#module-secrets "secrets: Generate secure random numbers for managing secrets.") instead * [`shelve`](shelve#module-shelve "shelve: Python object persistence."): [shelve is based on pickle and thus unsuitable for dealing with untrusted sources](shelve#shelve-security) * [`ssl`](ssl#module-ssl "ssl: TLS/SSL wrapper for socket objects"): [SSL/TLS security considerations](ssl#ssl-security) * [`subprocess`](subprocess#module-subprocess "subprocess: Subprocess management."): [Subprocess security considerations](subprocess#subprocess-security) * [`tempfile`](tempfile#module-tempfile "tempfile: Generate temporary files and directories."): [mktemp is deprecated due to vulnerability to race conditions](tempfile#tempfile-mktemp-deprecated) * [`xml`](xml#module-xml "xml: Package containing XML processing modules"): [XML vulnerabilities](xml#xml-vulnerabilities) * [`zipfile`](zipfile#module-zipfile "zipfile: Read and write ZIP-format archive files."): [maliciously prepared .zip files can cause disk volume exhaustion](zipfile#zipfile-resources-limitations) python Transports and Protocols Transports and Protocols ======================== #### Preface Transports and Protocols are used by the **low-level** event loop APIs such as [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection"). They use callback-based programming style and enable high-performance implementations of network or IPC protocols (e.g. HTTP). Essentially, transports and protocols should only be used in libraries and frameworks and never in high-level asyncio applications. This documentation page covers both [Transports](#transports) and [Protocols](#protocols). #### Introduction At the highest level, the transport is concerned with *how* bytes are transmitted, while the protocol determines *which* bytes to transmit (and to some extent when). 
A different way of saying the same thing: a transport is an abstraction for a socket (or similar I/O endpoint) while a protocol is an abstraction for an application, from the transport’s point of view. Yet another view is that the transport and protocol interfaces together define an abstract interface for using network I/O and interprocess I/O. There is always a 1:1 relationship between transport and protocol objects: the protocol calls transport methods to send data, while the transport calls protocol methods to pass it data that has been received. Most connection-oriented event loop methods (such as [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection")) accept a *protocol\_factory* argument used to create a *Protocol* object for an accepted connection, which is represented by a *Transport* object. Such methods usually return a tuple of `(transport, protocol)`. #### Contents This documentation page contains the following sections: * The [Transports](#transports) section documents asyncio [`BaseTransport`](#asyncio.BaseTransport "asyncio.BaseTransport"), [`ReadTransport`](#asyncio.ReadTransport "asyncio.ReadTransport"), [`WriteTransport`](#asyncio.WriteTransport "asyncio.WriteTransport"), [`Transport`](#asyncio.Transport "asyncio.Transport"), [`DatagramTransport`](#asyncio.DatagramTransport "asyncio.DatagramTransport"), and [`SubprocessTransport`](#asyncio.SubprocessTransport "asyncio.SubprocessTransport") classes. * The [Protocols](#protocols) section documents asyncio [`BaseProtocol`](#asyncio.BaseProtocol "asyncio.BaseProtocol"), [`Protocol`](#asyncio.Protocol "asyncio.Protocol"), [`BufferedProtocol`](#asyncio.BufferedProtocol "asyncio.BufferedProtocol"), [`DatagramProtocol`](#asyncio.DatagramProtocol "asyncio.DatagramProtocol"), and [`SubprocessProtocol`](#asyncio.SubprocessProtocol "asyncio.SubprocessProtocol") classes. * The [Examples](#examples) section showcases how to work with transports, protocols, and low-level event loop APIs. Transports ---------- **Source code:** [Lib/asyncio/transports.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/transports.py) Transports are classes provided by [`asyncio`](asyncio#module-asyncio "asyncio: Asynchronous I/O.") in order to abstract various kinds of communication channels. Transport objects are always instantiated by an [asyncio event loop](asyncio-eventloop#asyncio-event-loop). asyncio implements transports for TCP, UDP, SSL, and subprocess pipes. The methods available on a transport depend on the transport’s kind. The transport classes are [not thread safe](asyncio-dev#asyncio-multithreading). ### Transports Hierarchy `class asyncio.BaseTransport` Base class for all transports. Contains methods that all asyncio transports share. `class asyncio.WriteTransport(BaseTransport)` A base transport for write-only connections. Instances of the *WriteTransport* class are returned from the [`loop.connect_write_pipe()`](asyncio-eventloop#asyncio.loop.connect_write_pipe "asyncio.loop.connect_write_pipe") event loop method and are also used by subprocess-related methods like [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec"). `class asyncio.ReadTransport(BaseTransport)` A base transport for read-only connections.
Instances of the *ReadTransport* class are returned from the [`loop.connect_read_pipe()`](asyncio-eventloop#asyncio.loop.connect_read_pipe "asyncio.loop.connect_read_pipe") event loop method and are also used by subprocess-related methods like [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec"). `class asyncio.Transport(WriteTransport, ReadTransport)` Interface representing a bidirectional transport, such as a TCP connection. The user does not instantiate a transport directly; they call a utility function, passing it a protocol factory and other information necessary to create the transport and protocol. Instances of the *Transport* class are returned from or used by event loop methods like [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection"), [`loop.create_unix_connection()`](asyncio-eventloop#asyncio.loop.create_unix_connection "asyncio.loop.create_unix_connection"), [`loop.create_server()`](asyncio-eventloop#asyncio.loop.create_server "asyncio.loop.create_server"), [`loop.sendfile()`](asyncio-eventloop#asyncio.loop.sendfile "asyncio.loop.sendfile"), etc. `class asyncio.DatagramTransport(BaseTransport)` A transport for datagram (UDP) connections. Instances of the *DatagramTransport* class are returned from the [`loop.create_datagram_endpoint()`](asyncio-eventloop#asyncio.loop.create_datagram_endpoint "asyncio.loop.create_datagram_endpoint") event loop method. `class asyncio.SubprocessTransport(BaseTransport)` An abstraction to represent a connection between a parent and its child OS process. Instances of the *SubprocessTransport* class are returned from event loop methods [`loop.subprocess_shell()`](asyncio-eventloop#asyncio.loop.subprocess_shell "asyncio.loop.subprocess_shell") and [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec"). ### Base Transport `BaseTransport.close()` Close the transport. If the transport has a buffer for outgoing data, buffered data will be flushed asynchronously. No more data will be received. After all buffered data is flushed, the protocol’s [`protocol.connection_lost()`](#asyncio.BaseProtocol.connection_lost "asyncio.BaseProtocol.connection_lost") method will be called with [`None`](constants#None "None") as its argument. `BaseTransport.is_closing()` Return `True` if the transport is closing or is closed. `BaseTransport.get_extra_info(name, default=None)` Return information about the transport or underlying resources it uses. *name* is a string representing the piece of transport-specific information to get. *default* is the value to return if the information is not available, or if the transport does not support querying it with the given third-party event loop implementation or on the current platform. 
For example, the following code attempts to get the underlying socket object of the transport: ``` sock = transport.get_extra_info('socket') if sock is not None: print(sock.getsockopt(...)) ``` Categories of information that can be queried on some transports: * socket: + `'peername'`: the remote address to which the socket is connected, result of [`socket.socket.getpeername()`](socket#socket.socket.getpeername "socket.socket.getpeername") (`None` on error) + `'socket'`: [`socket.socket`](socket#socket.socket "socket.socket") instance + `'sockname'`: the socket’s own address, result of [`socket.socket.getsockname()`](socket#socket.socket.getsockname "socket.socket.getsockname") * SSL socket: + `'compression'`: the compression algorithm being used as a string, or `None` if the connection isn’t compressed; result of [`ssl.SSLSocket.compression()`](ssl#ssl.SSLSocket.compression "ssl.SSLSocket.compression") + `'cipher'`: a three-value tuple containing the name of the cipher being used, the version of the SSL protocol that defines its use, and the number of secret bits being used; result of [`ssl.SSLSocket.cipher()`](ssl#ssl.SSLSocket.cipher "ssl.SSLSocket.cipher") + `'peercert'`: peer certificate; result of [`ssl.SSLSocket.getpeercert()`](ssl#ssl.SSLSocket.getpeercert "ssl.SSLSocket.getpeercert") + `'sslcontext'`: [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") instance + `'ssl_object'`: [`ssl.SSLObject`](ssl#ssl.SSLObject "ssl.SSLObject") or [`ssl.SSLSocket`](ssl#ssl.SSLSocket "ssl.SSLSocket") instance * pipe: + `'pipe'`: pipe object * subprocess: + `'subprocess'`: [`subprocess.Popen`](subprocess#subprocess.Popen "subprocess.Popen") instance `BaseTransport.set_protocol(protocol)` Set a new protocol. Switching protocol should only be done when both protocols are documented to support the switch. `BaseTransport.get_protocol()` Return the current protocol. ### Read-only Transports `ReadTransport.is_reading()` Return `True` if the transport is receiving new data. New in version 3.7. `ReadTransport.pause_reading()` Pause the receiving end of the transport. No data will be passed to the protocol’s [`protocol.data_received()`](#asyncio.Protocol.data_received "asyncio.Protocol.data_received") method until [`resume_reading()`](#asyncio.ReadTransport.resume_reading "asyncio.ReadTransport.resume_reading") is called. Changed in version 3.7: The method is idempotent, i.e. it can be called when the transport is already paused or closed. `ReadTransport.resume_reading()` Resume the receiving end. The protocol’s [`protocol.data_received()`](#asyncio.Protocol.data_received "asyncio.Protocol.data_received") method will be called once again if some data is available for reading. Changed in version 3.7: The method is idempotent, i.e. it can be called when the transport is already reading. ### Write-only Transports `WriteTransport.abort()` Close the transport immediately, without waiting for pending operations to complete. Buffered data will be lost. No more data will be received. The protocol’s [`protocol.connection_lost()`](#asyncio.BaseProtocol.connection_lost "asyncio.BaseProtocol.connection_lost") method will eventually be called with [`None`](constants#None "None") as its argument. `WriteTransport.can_write_eof()` Return [`True`](constants#True "True") if the transport supports [`write_eof()`](#asyncio.WriteTransport.write_eof "asyncio.WriteTransport.write_eof"), [`False`](constants#False "False") if not. 
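A small sketch of the half-close pattern enabled by this method; the `finish_sending()` helper is hypothetical, and `write_eof()` is documented just below:

```
import asyncio

def finish_sending(transport: asyncio.WriteTransport, payload: bytes) -> None:
    """Send final data, then close only the write side if possible."""
    transport.write(payload)
    if transport.can_write_eof():
        # Half-close: flush buffered data and signal EOF,
        # while still being able to receive data.
        transport.write_eof()
    else:
        # e.g. SSL transports: fall back to a full close.
        transport.close()
```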
`WriteTransport.get_write_buffer_size()` Return the current size of the output buffer used by the transport. `WriteTransport.get_write_buffer_limits()` Get the *high* and *low* watermarks for write flow control. Return a tuple `(low, high)` where *low* and *high* are positive numbers of bytes. Use [`set_write_buffer_limits()`](#asyncio.WriteTransport.set_write_buffer_limits "asyncio.WriteTransport.set_write_buffer_limits") to set the limits. New in version 3.4.2. `WriteTransport.set_write_buffer_limits(high=None, low=None)` Set the *high* and *low* watermarks for write flow control. These two values (measured in number of bytes) control when the protocol’s [`protocol.pause_writing()`](#asyncio.BaseProtocol.pause_writing "asyncio.BaseProtocol.pause_writing") and [`protocol.resume_writing()`](#asyncio.BaseProtocol.resume_writing "asyncio.BaseProtocol.resume_writing") methods are called. If specified, the low watermark must be less than or equal to the high watermark. Neither *high* nor *low* can be negative. [`pause_writing()`](#asyncio.BaseProtocol.pause_writing "asyncio.BaseProtocol.pause_writing") is called when the buffer size goes strictly over the *high* value. If writing has been paused, [`resume_writing()`](#asyncio.BaseProtocol.resume_writing "asyncio.BaseProtocol.resume_writing") is called when the buffer size becomes less than or equal to the *low* value. The defaults are implementation-specific. If only the high watermark is given, the low watermark defaults to an implementation-specific value less than or equal to the high watermark. Setting *high* to zero forces *low* to zero as well, and causes [`pause_writing()`](#asyncio.BaseProtocol.pause_writing "asyncio.BaseProtocol.pause_writing") to be called whenever the buffer becomes non-empty. Setting *low* to zero causes [`resume_writing()`](#asyncio.BaseProtocol.resume_writing "asyncio.BaseProtocol.resume_writing") to be called only once the buffer is empty. Use of zero for either limit is generally sub-optimal as it reduces opportunities for doing I/O and computation concurrently. Use [`get_write_buffer_limits()`](#asyncio.WriteTransport.get_write_buffer_limits "asyncio.WriteTransport.get_write_buffer_limits") to get the limits. `WriteTransport.write(data)` Write some *data* bytes to the transport. This method does not block; it buffers the data and arranges for it to be sent out asynchronously. `WriteTransport.writelines(list_of_data)` Write a list (or any iterable) of data bytes to the transport. This is functionally equivalent to calling [`write()`](#asyncio.WriteTransport.write "asyncio.WriteTransport.write") on each element yielded by the iterable, but may be implemented more efficiently. `WriteTransport.write_eof()` Close the write end of the transport after flushing all buffered data. Data may still be received. This method can raise [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") if the transport (e.g. SSL) doesn’t support half-closed connections. ### Datagram Transports `DatagramTransport.sendto(data, addr=None)` Send the *data* bytes to the remote peer given by *addr* (a transport-dependent target address). If *addr* is [`None`](constants#None "None"), the data is sent to the target address given on transport creation. This method does not block; it buffers the data and arranges for it to be sent out asynchronously. `DatagramTransport.abort()` Close the transport immediately, without waiting for pending operations to complete. Buffered data will be lost.
No more data will be received. The protocol’s [`protocol.connection_lost()`](#asyncio.BaseProtocol.connection_lost "asyncio.BaseProtocol.connection_lost") method will eventually be called with [`None`](constants#None "None") as its argument. ### Subprocess Transports `SubprocessTransport.get_pid()` Return the subprocess process id as an integer. `SubprocessTransport.get_pipe_transport(fd)` Return the transport for the communication pipe corresponding to the integer file descriptor *fd*: * `0`: readable streaming transport of the standard input (*stdin*), or [`None`](constants#None "None") if the subprocess was not created with `stdin=PIPE` * `1`: writable streaming transport of the standard output (*stdout*), or [`None`](constants#None "None") if the subprocess was not created with `stdout=PIPE` * `2`: writable streaming transport of the standard error (*stderr*), or [`None`](constants#None "None") if the subprocess was not created with `stderr=PIPE` * other *fd*: [`None`](constants#None "None") `SubprocessTransport.get_returncode()` Return the subprocess return code as an integer or [`None`](constants#None "None") if it hasn’t returned, which is similar to the [`subprocess.Popen.returncode`](subprocess#subprocess.Popen.returncode "subprocess.Popen.returncode") attribute. `SubprocessTransport.kill()` Kill the subprocess. On POSIX systems, the function sends SIGKILL to the subprocess. On Windows, this method is an alias for [`terminate()`](#asyncio.SubprocessTransport.terminate "asyncio.SubprocessTransport.terminate"). See also [`subprocess.Popen.kill()`](subprocess#subprocess.Popen.kill "subprocess.Popen.kill"). `SubprocessTransport.send_signal(signal)` Send the *signal* number to the subprocess, as in [`subprocess.Popen.send_signal()`](subprocess#subprocess.Popen.send_signal "subprocess.Popen.send_signal"). `SubprocessTransport.terminate()` Stop the subprocess. On POSIX systems, this method sends SIGTERM to the subprocess. On Windows, the Windows API function TerminateProcess() is called to stop the subprocess. See also [`subprocess.Popen.terminate()`](subprocess#subprocess.Popen.terminate "subprocess.Popen.terminate"). `SubprocessTransport.close()` Kill the subprocess by calling the [`kill()`](#asyncio.SubprocessTransport.kill "asyncio.SubprocessTransport.kill") method if the subprocess hasn’t returned yet, and close the transports of the *stdin*, *stdout*, and *stderr* pipes. Protocols --------- **Source code:** [Lib/asyncio/protocols.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/protocols.py) asyncio provides a set of abstract base classes that should be used to implement network protocols. Those classes are meant to be used together with [transports](#asyncio-transport). Subclasses of abstract base protocol classes may implement some or all methods. All these methods are callbacks: they are called by transports on certain events, for example when some data is received. A base protocol method should be called by the corresponding transport. ### Base Protocols `class asyncio.BaseProtocol` Base protocol with methods that all protocols share. `class asyncio.Protocol(BaseProtocol)` The base class for implementing streaming protocols (TCP, Unix sockets, etc). `class asyncio.BufferedProtocol(BaseProtocol)` A base class for implementing streaming protocols with manual control of the receive buffer. `class asyncio.DatagramProtocol(BaseProtocol)` The base class for implementing datagram (UDP) protocols.
`class asyncio.SubprocessProtocol(BaseProtocol)` The base class for implementing protocols communicating with child processes (unidirectional pipes). ### Base Protocol All asyncio protocols can implement Base Protocol callbacks. #### Connection Callbacks Connection callbacks are called on all protocols, exactly once per successful connection. All other protocol callbacks can only be called between those two methods. `BaseProtocol.connection_made(transport)` Called when a connection is made. The *transport* argument is the transport representing the connection. The protocol is responsible for storing the reference to its transport. `BaseProtocol.connection_lost(exc)` Called when the connection is lost or closed. The argument is either an exception object or [`None`](constants#None "None"). The latter means a regular EOF is received, or the connection was aborted or closed by this side of the connection. #### Flow Control Callbacks Flow control callbacks can be called by transports to pause or resume writing performed by the protocol. See the documentation of the [`set_write_buffer_limits()`](#asyncio.WriteTransport.set_write_buffer_limits "asyncio.WriteTransport.set_write_buffer_limits") method for more details. `BaseProtocol.pause_writing()` Called when the transport’s buffer goes over the high watermark. `BaseProtocol.resume_writing()` Called when the transport’s buffer drains below the low watermark. If the buffer size equals the high watermark, [`pause_writing()`](#asyncio.BaseProtocol.pause_writing "asyncio.BaseProtocol.pause_writing") is not called: the buffer size must go strictly over. Conversely, [`resume_writing()`](#asyncio.BaseProtocol.resume_writing "asyncio.BaseProtocol.resume_writing") is called when the buffer size is equal to or lower than the low watermark. These end conditions are important to ensure that things go as expected when either mark is zero. A minimal protocol sketch using these callbacks appears at the end of the [Examples](#examples) section below. ### Streaming Protocols Event methods, such as [`loop.create_server()`](asyncio-eventloop#asyncio.loop.create_server "asyncio.loop.create_server"), [`loop.create_unix_server()`](asyncio-eventloop#asyncio.loop.create_unix_server "asyncio.loop.create_unix_server"), [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection"), [`loop.create_unix_connection()`](asyncio-eventloop#asyncio.loop.create_unix_connection "asyncio.loop.create_unix_connection"), [`loop.connect_accepted_socket()`](asyncio-eventloop#asyncio.loop.connect_accepted_socket "asyncio.loop.connect_accepted_socket"), [`loop.connect_read_pipe()`](asyncio-eventloop#asyncio.loop.connect_read_pipe "asyncio.loop.connect_read_pipe"), and [`loop.connect_write_pipe()`](asyncio-eventloop#asyncio.loop.connect_write_pipe "asyncio.loop.connect_write_pipe") accept factories that return streaming protocols. `Protocol.data_received(data)` Called when some data is received. *data* is a non-empty bytes object containing the incoming data. Whether the data is buffered, chunked or reassembled depends on the transport. In general, you shouldn’t rely on specific semantics and instead make your parsing generic and flexible. However, data is always received in the correct order. The method can be called an arbitrary number of times while a connection is open. However, [`protocol.eof_received()`](#asyncio.Protocol.eof_received "asyncio.Protocol.eof_received") is called at most once. Once `eof_received()` is called, `data_received()` is not called anymore.
`Protocol.eof_received()` Called when the other end signals it won’t send any more data (for example by calling [`transport.write_eof()`](#asyncio.WriteTransport.write_eof "asyncio.WriteTransport.write_eof"), if the other end also uses asyncio). This method may return a false value (including `None`), in which case the transport will close itself. Conversely, if this method returns a true value, the protocol used determines whether to close the transport. Since the default implementation returns `None`, it implicitly closes the connection. Some transports, including SSL, don’t support half-closed connections, in which case returning true from this method will result in the connection being closed. State machine: ``` start -> connection_made [-> data_received]* [-> eof_received]? -> connection_lost -> end ``` ### Buffered Streaming Protocols New in version 3.7. Buffered Protocols can be used with any event loop method that supports [Streaming Protocols](#streaming-protocols). `BufferedProtocol` implementations allow explicit manual allocation and control of the receive buffer. Event loops can then use the buffer provided by the protocol to avoid unnecessary data copies. This can result in noticeable performance improvement for protocols that receive big amounts of data. Sophisticated protocol implementations can significantly reduce the number of buffer allocations. The following callbacks are called on [`BufferedProtocol`](#asyncio.BufferedProtocol "asyncio.BufferedProtocol") instances: `BufferedProtocol.get_buffer(sizehint)` Called to allocate a new receive buffer. *sizehint* is the recommended minimum size for the returned buffer. It is acceptable to return smaller or larger buffers than what *sizehint* suggests. When set to -1, the buffer size can be arbitrary. It is an error to return a buffer with a zero size. `get_buffer()` must return an object implementing the [buffer protocol](../c-api/buffer#bufferobjects). `BufferedProtocol.buffer_updated(nbytes)` Called when the buffer was updated with the received data. *nbytes* is the total number of bytes that were written to the buffer. `BufferedProtocol.eof_received()` See the documentation of the [`protocol.eof_received()`](#asyncio.Protocol.eof_received "asyncio.Protocol.eof_received") method. [`get_buffer()`](#asyncio.BufferedProtocol.get_buffer "asyncio.BufferedProtocol.get_buffer") can be called an arbitrary number of times during a connection. However, [`protocol.eof_received()`](#asyncio.Protocol.eof_received "asyncio.Protocol.eof_received") is called at most once and, if called, [`get_buffer()`](#asyncio.BufferedProtocol.get_buffer "asyncio.BufferedProtocol.get_buffer") and [`buffer_updated()`](#asyncio.BufferedProtocol.buffer_updated "asyncio.BufferedProtocol.buffer_updated") won’t be called after it. State machine: ``` start -> connection_made [-> get_buffer [-> buffer_updated]? ]* [-> eof_received]? -> connection_lost -> end ``` ### Datagram Protocols Datagram Protocol instances should be constructed by protocol factories passed to the [`loop.create_datagram_endpoint()`](asyncio-eventloop#asyncio.loop.create_datagram_endpoint "asyncio.loop.create_datagram_endpoint") method. `DatagramProtocol.datagram_received(data, addr)` Called when a datagram is received. *data* is a bytes object containing the incoming data. *addr* is the address of the peer sending the data; the exact format depends on the transport. 
`DatagramProtocol.error_received(exc)` Called when a previous send or receive operation raises an [`OSError`](exceptions#OSError "OSError"). *exc* is the [`OSError`](exceptions#OSError "OSError") instance. This method is called in rare conditions, when the transport (e.g. UDP) detects that a datagram could not be delivered to its recipient. In many conditions though, undeliverable datagrams will be silently dropped. Note On BSD systems (macOS, FreeBSD, etc.) flow control is not supported for datagram protocols, because there is no reliable way to detect send failures caused by writing too many packets. The socket always appears ‘ready’ and excess packets are dropped. An [`OSError`](exceptions#OSError "OSError") with `errno` set to [`errno.ENOBUFS`](errno#errno.ENOBUFS "errno.ENOBUFS") may or may not be raised; if it is raised, it will be reported to [`DatagramProtocol.error_received()`](#asyncio.DatagramProtocol.error_received "asyncio.DatagramProtocol.error_received") but otherwise ignored. ### Subprocess Protocols Subprocess Protocol instances should be constructed by protocol factories passed to the [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec") and [`loop.subprocess_shell()`](asyncio-eventloop#asyncio.loop.subprocess_shell "asyncio.loop.subprocess_shell") methods. `SubprocessProtocol.pipe_data_received(fd, data)` Called when the child process writes data into its stdout or stderr pipe. *fd* is the integer file descriptor of the pipe. *data* is a non-empty bytes object containing the received data. `SubprocessProtocol.pipe_connection_lost(fd, exc)` Called when one of the pipes communicating with the child process is closed. *fd* is the integer file descriptor that was closed. `SubprocessProtocol.process_exited()` Called when the child process has exited. Examples -------- ### TCP Echo Server Create a TCP echo server using the [`loop.create_server()`](asyncio-eventloop#asyncio.loop.create_server "asyncio.loop.create_server") method, send back received data, and close the connection: ``` import asyncio class EchoServerProtocol(asyncio.Protocol): def connection_made(self, transport): peername = transport.get_extra_info('peername') print('Connection from {}'.format(peername)) self.transport = transport def data_received(self, data): message = data.decode() print('Data received: {!r}'.format(message)) print('Send: {!r}'.format(message)) self.transport.write(data) print('Close the client socket') self.transport.close() async def main(): # Get a reference to the event loop as we plan to use # low-level APIs. loop = asyncio.get_running_loop() server = await loop.create_server( lambda: EchoServerProtocol(), '127.0.0.1', 8888) async with server: await server.serve_forever() asyncio.run(main()) ``` See also The [TCP echo server using streams](asyncio-stream#asyncio-tcp-echo-server-streams) example uses the high-level [`asyncio.start_server()`](asyncio-stream#asyncio.start_server "asyncio.start_server") function. 
### TCP Echo Client A TCP echo client, using the [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection") method, sends data and waits until the connection is closed: ``` import asyncio class EchoClientProtocol(asyncio.Protocol): def __init__(self, message, on_con_lost): self.message = message self.on_con_lost = on_con_lost def connection_made(self, transport): transport.write(self.message.encode()) print('Data sent: {!r}'.format(self.message)) def data_received(self, data): print('Data received: {!r}'.format(data.decode())) def connection_lost(self, exc): print('The server closed the connection') self.on_con_lost.set_result(True) async def main(): # Get a reference to the event loop as we plan to use # low-level APIs. loop = asyncio.get_running_loop() on_con_lost = loop.create_future() message = 'Hello World!' transport, protocol = await loop.create_connection( lambda: EchoClientProtocol(message, on_con_lost), '127.0.0.1', 8888) # Wait until the protocol signals that the connection # is lost and close the transport. try: await on_con_lost finally: transport.close() asyncio.run(main()) ``` See also The [TCP echo client using streams](asyncio-stream#asyncio-tcp-echo-client-streams) example uses the high-level [`asyncio.open_connection()`](asyncio-stream#asyncio.open_connection "asyncio.open_connection") function. ### UDP Echo Server A UDP echo server, using the [`loop.create_datagram_endpoint()`](asyncio-eventloop#asyncio.loop.create_datagram_endpoint "asyncio.loop.create_datagram_endpoint") method, sends back received data: ``` import asyncio class EchoServerProtocol: def connection_made(self, transport): self.transport = transport def datagram_received(self, data, addr): message = data.decode() print('Received %r from %s' % (message, addr)) print('Send %r to %s' % (message, addr)) self.transport.sendto(data, addr) async def main(): print("Starting UDP server") # Get a reference to the event loop as we plan to use # low-level APIs. loop = asyncio.get_running_loop() # One protocol instance will be created to serve all # client requests. transport, protocol = await loop.create_datagram_endpoint( lambda: EchoServerProtocol(), local_addr=('127.0.0.1', 9999)) try: await asyncio.sleep(3600) # Serve for 1 hour. finally: transport.close() asyncio.run(main()) ``` ### UDP Echo Client A UDP echo client, using the [`loop.create_datagram_endpoint()`](asyncio-eventloop#asyncio.loop.create_datagram_endpoint "asyncio.loop.create_datagram_endpoint") method, sends data and closes the transport when it receives the answer: ``` import asyncio class EchoClientProtocol: def __init__(self, message, on_con_lost): self.message = message self.on_con_lost = on_con_lost self.transport = None def connection_made(self, transport): self.transport = transport print('Send:', self.message) self.transport.sendto(self.message.encode()) def datagram_received(self, data, addr): print("Received:", data.decode()) print("Close the socket") self.transport.close() def error_received(self, exc): print('Error received:', exc) def connection_lost(self, exc): print("Connection closed") self.on_con_lost.set_result(True) async def main(): # Get a reference to the event loop as we plan to use # low-level APIs. loop = asyncio.get_running_loop() on_con_lost = loop.create_future() message = "Hello World!"
transport, protocol = await loop.create_datagram_endpoint( lambda: EchoClientProtocol(message, on_con_lost), remote_addr=('127.0.0.1', 9999)) try: await on_con_lost finally: transport.close() asyncio.run(main()) ``` ### Connecting Existing Sockets Wait until a socket receives data using the [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection") method with a protocol: ``` import asyncio import socket class MyProtocol(asyncio.Protocol): def __init__(self, on_con_lost): self.transport = None self.on_con_lost = on_con_lost def connection_made(self, transport): self.transport = transport def data_received(self, data): print("Received:", data.decode()) # We are done: close the transport; # connection_lost() will be called automatically. self.transport.close() def connection_lost(self, exc): # The socket has been closed self.on_con_lost.set_result(True) async def main(): # Get a reference to the event loop as we plan to use # low-level APIs. loop = asyncio.get_running_loop() on_con_lost = loop.create_future() # Create a pair of connected sockets rsock, wsock = socket.socketpair() # Register the socket to wait for data. transport, protocol = await loop.create_connection( lambda: MyProtocol(on_con_lost), sock=rsock) # Simulate the reception of data from the network. loop.call_soon(wsock.send, 'abc'.encode()) try: await protocol.on_con_lost finally: transport.close() wsock.close() asyncio.run(main()) ``` See also The [watch a file descriptor for read events](asyncio-eventloop#asyncio-example-watch-fd) example uses the low-level [`loop.add_reader()`](asyncio-eventloop#asyncio.loop.add_reader "asyncio.loop.add_reader") method to register an FD. The [register an open socket to wait for data using streams](asyncio-stream#asyncio-example-create-connection-streams) example uses high-level streams created by the [`open_connection()`](asyncio-stream#asyncio.open_connection "asyncio.open_connection") function in a coroutine. ### loop.subprocess\_exec() and SubprocessProtocol An example of a subprocess protocol used to get the output of a subprocess and to wait for the subprocess exit. The subprocess is created by the [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec") method: ``` import asyncio import sys class DateProtocol(asyncio.SubprocessProtocol): def __init__(self, exit_future): self.exit_future = exit_future self.output = bytearray() def pipe_data_received(self, fd, data): self.output.extend(data) def process_exited(self): self.exit_future.set_result(True) async def get_date(): # Get a reference to the event loop as we plan to use # low-level APIs. loop = asyncio.get_running_loop() code = 'import datetime; print(datetime.datetime.now())' exit_future = asyncio.Future(loop=loop) # Create the subprocess controlled by DateProtocol; # redirect the standard output into a pipe. transport, protocol = await loop.subprocess_exec( lambda: DateProtocol(exit_future), sys.executable, '-c', code, stdin=None, stderr=None) # Wait for the subprocess exit using the process_exited() # method of the protocol. await exit_future # Close the stdout pipe. transport.close() # Read the output which was collected by the # pipe_data_received() method of the protocol. data = bytes(protocol.output) return data.decode('ascii').rstrip() date = asyncio.run(get_date()) print(f"Current date: {date}") ``` See also the [same example](asyncio-subprocess#asyncio-example-create-subprocess-exec) written using high-level APIs.
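### Flow Control Sketch

A minimal sketch of a protocol that reacts to the Flow Control Callbacks described earlier on this page; the `ThrottledProtocol` class name and the watermark values are illustrative assumptions, not part of asyncio:

```
import asyncio

class ThrottledProtocol(asyncio.Protocol):
    """Gate writes on the transport's buffer watermarks."""

    def connection_made(self, transport):
        self.transport = transport
        self.can_write = asyncio.Event()
        self.can_write.set()
        # Illustrative watermarks: pause once more than 64 KiB is
        # buffered, resume when the buffer drains to 16 KiB or less.
        transport.set_write_buffer_limits(high=64 * 1024, low=16 * 1024)

    def pause_writing(self):
        # The buffer went strictly over the high watermark.
        self.can_write.clear()

    def resume_writing(self):
        # The buffer drained to the low watermark or below.
        self.can_write.set()

    async def send(self, data: bytes):
        # Wait for permission before handing more data to the transport.
        await self.can_write.wait()
        self.transport.write(data)
```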
python ssl — TLS/SSL wrapper for socket objects ssl — TLS/SSL wrapper for socket objects ========================================

**Source code:** [Lib/ssl.py](https://github.com/python/cpython/tree/3.9/Lib/ssl.py)

This module provides access to Transport Layer Security (often known as “Secure Sockets Layer”) encryption and peer authentication facilities for network sockets, both client-side and server-side. This module uses the OpenSSL library. It is available on all modern Unix systems, Windows, macOS, and probably additional platforms, as long as OpenSSL is installed on that platform.

Note Some behavior may be platform dependent, since calls are made to the operating system socket APIs. The installed version of OpenSSL may also cause variations in behavior. For example, TLSv1.1 and TLSv1.2 come with openssl version 1.0.1.

Warning Don’t use this module without reading the [Security considerations](#ssl-security). Doing so may lead to a false sense of security, as the default settings of the ssl module are not necessarily appropriate for your application.

This section documents the objects and functions in the `ssl` module; for more general information about TLS, SSL, and certificates, the reader is referred to the documents in the “See Also” section at the bottom.

This module provides a class, [`ssl.SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket"), which is derived from the [`socket.socket`](socket#socket.socket "socket.socket") type, and provides a socket-like wrapper that also encrypts and decrypts the data going over the socket with SSL. It supports additional methods such as `getpeercert()`, which retrieves the certificate of the other side of the connection, and `cipher()`, which retrieves the cipher being used for the secure connection. For more sophisticated applications, the [`ssl.SSLContext`](#ssl.SSLContext "ssl.SSLContext") class helps manage settings and certificates, which can then be inherited by SSL sockets created through the [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket") method.

Changed in version 3.5.3: Updated to support linking with OpenSSL 1.1.0.

Changed in version 3.6: OpenSSL 0.9.8, 1.0.0 and 1.0.1 are deprecated and no longer supported. In the future the ssl module will require at least OpenSSL 1.0.2 or 1.1.0.

Functions, Constants, and Exceptions ------------------------------------

### Socket creation

Since Python 3.2 and 2.7.9, it is recommended to use the [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket") method of an [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") instance to wrap sockets as [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") objects. The helper function [`create_default_context()`](#ssl.create_default_context "ssl.create_default_context") returns a new context with secure default settings. The old [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket") function is deprecated since it is both inefficient and has no support for server name indication (SNI) and hostname matching.
Client socket example with default context and IPv4/IPv6 dual stack: ``` import socket import ssl hostname = 'www.python.org' context = ssl.create_default_context() with socket.create_connection((hostname, 443)) as sock: with context.wrap_socket(sock, server_hostname=hostname) as ssock: print(ssock.version()) ``` Client socket example with custom context and IPv4: ``` hostname = 'www.python.org' # PROTOCOL_TLS_CLIENT requires valid cert chain and hostname context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations('path/to/cabundle.pem') with socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) as sock: with context.wrap_socket(sock, server_hostname=hostname) as ssock: print(ssock.version()) ``` Server socket example listening on localhost IPv4: ``` context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) context.load_cert_chain('/path/to/certchain.pem', '/path/to/private.key') with socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) as sock: sock.bind(('127.0.0.1', 8443)) sock.listen(5) with context.wrap_socket(sock, server_side=True) as ssock: conn, addr = ssock.accept() ... ``` ### Context creation A convenience function helps create [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") objects for common purposes. `ssl.create_default_context(purpose=Purpose.SERVER_AUTH, cafile=None, capath=None, cadata=None)` Return a new [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") object with default settings for the given *purpose*. The settings are chosen by the [`ssl`](#module-ssl "ssl: TLS/SSL wrapper for socket objects") module, and usually represent a higher security level than when calling the [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") constructor directly. *cafile*, *capath*, *cadata* represent optional CA certificates to trust for certificate verification, as in [`SSLContext.load_verify_locations()`](#ssl.SSLContext.load_verify_locations "ssl.SSLContext.load_verify_locations"). If all three are [`None`](constants#None "None"), this function can choose to trust the system’s default CA certificates instead. The settings are: [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"), [`OP_NO_SSLv2`](#ssl.OP_NO_SSLv2 "ssl.OP_NO_SSLv2"), and [`OP_NO_SSLv3`](#ssl.OP_NO_SSLv3 "ssl.OP_NO_SSLv3") with high encryption cipher suites without RC4 and without unauthenticated cipher suites. Passing [`SERVER_AUTH`](#ssl.Purpose.SERVER_AUTH "ssl.Purpose.SERVER_AUTH") as *purpose* sets [`verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") to [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED") and either loads CA certificates (when at least one of *cafile*, *capath* or *cadata* is given) or uses [`SSLContext.load_default_certs()`](#ssl.SSLContext.load_default_certs "ssl.SSLContext.load_default_certs") to load default CA certificates. When [`keylog_filename`](#ssl.SSLContext.keylog_filename "ssl.SSLContext.keylog_filename") is supported and the environment variable `SSLKEYLOGFILE` is set, [`create_default_context()`](#ssl.create_default_context "ssl.create_default_context") enables key logging. Note The protocol, options, cipher and other settings may change to more restrictive values anytime without prior deprecation. The values represent a fair balance between compatibility and security. If your application needs specific settings, you should create a [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") and apply the settings yourself. 
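As a hedged illustration of the defaults described above (not part of the original page), a context created for `Purpose.SERVER_AUTH` already has certificate and hostname verification switched on:

```
import ssl

# create_default_context() picks secure defaults; the exact option set
# may change between Python/OpenSSL versions, as the note above warns.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```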
Note If certain older clients or servers attempt to connect with an [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") created by this function and get an error stating “Protocol or cipher suite mismatch”, it may be that they only support SSL3.0, which this function excludes using [`OP_NO_SSLv3`](#ssl.OP_NO_SSLv3 "ssl.OP_NO_SSLv3"). SSL3.0 is widely considered to be [completely broken](https://en.wikipedia.org/wiki/POODLE). If you still wish to use this function but also allow SSL 3.0 connections, you can re-enable them using:

```
ctx = ssl.create_default_context(Purpose.CLIENT_AUTH)
ctx.options &= ~ssl.OP_NO_SSLv3
```

New in version 3.4.

Changed in version 3.4.4: RC4 was dropped from the default cipher string.

Changed in version 3.6: ChaCha20/Poly1305 was added to the default cipher string. 3DES was dropped from the default cipher string.

Changed in version 3.8: Support for key logging to `SSLKEYLOGFILE` was added.

### Exceptions

`exception ssl.SSLError` Raised to signal an error from the underlying SSL implementation (currently provided by the OpenSSL library). This signifies some problem in the higher-level encryption and authentication layer that’s superimposed on the underlying network connection. This error is a subtype of [`OSError`](exceptions#OSError "OSError"). The error code and message of [`SSLError`](#ssl.SSLError "ssl.SSLError") instances are provided by the OpenSSL library. Changed in version 3.3: [`SSLError`](#ssl.SSLError "ssl.SSLError") used to be a subtype of [`socket.error`](socket#socket.error "socket.error").

`library` A string mnemonic designating the OpenSSL submodule in which the error occurred, such as `SSL`, `PEM` or `X509`. The range of possible values depends on the OpenSSL version. New in version 3.3.

`reason` A string mnemonic designating the reason this error occurred, for example `CERTIFICATE_VERIFY_FAILED`. The range of possible values depends on the OpenSSL version. New in version 3.3.

`exception ssl.SSLZeroReturnError` A subclass of [`SSLError`](#ssl.SSLError "ssl.SSLError") raised when trying to read or write and the SSL connection has been closed cleanly. Note that this doesn’t mean that the underlying transport (read TCP) has been closed. New in version 3.3.

`exception ssl.SSLWantReadError` A subclass of [`SSLError`](#ssl.SSLError "ssl.SSLError") raised by a [non-blocking SSL socket](#ssl-nonblocking) when trying to read or write data, but more data needs to be received on the underlying TCP transport before the request can be fulfilled. New in version 3.3.

`exception ssl.SSLWantWriteError` A subclass of [`SSLError`](#ssl.SSLError "ssl.SSLError") raised by a [non-blocking SSL socket](#ssl-nonblocking) when trying to read or write data, but more data needs to be sent on the underlying TCP transport before the request can be fulfilled. New in version 3.3.

`exception ssl.SSLSyscallError` A subclass of [`SSLError`](#ssl.SSLError "ssl.SSLError") raised when a system error was encountered while trying to fulfill an operation on an SSL socket. Unfortunately, there is no easy way to inspect the original errno number. New in version 3.3.

`exception ssl.SSLEOFError` A subclass of [`SSLError`](#ssl.SSLError "ssl.SSLError") raised when the SSL connection has been terminated abruptly. Generally, you shouldn’t try to reuse the underlying transport when this error is encountered. New in version 3.3.
`exception ssl.SSLCertVerificationError` A subclass of [`SSLError`](#ssl.SSLError "ssl.SSLError") raised when certificate validation has failed. New in version 3.7. `verify_code` A numeric error number that denotes the verification error. `verify_message` A human readable string of the verification error. `exception ssl.CertificateError` An alias for [`SSLCertVerificationError`](#ssl.SSLCertVerificationError "ssl.SSLCertVerificationError"). Changed in version 3.7: The exception is now an alias for [`SSLCertVerificationError`](#ssl.SSLCertVerificationError "ssl.SSLCertVerificationError"). ### Random generation `ssl.RAND_bytes(num)` Return *num* cryptographically strong pseudo-random bytes. Raises an [`SSLError`](#ssl.SSLError "ssl.SSLError") if the PRNG has not been seeded with enough data or if the operation is not supported by the current RAND method. [`RAND_status()`](#ssl.RAND_status "ssl.RAND_status") can be used to check the status of the PRNG and [`RAND_add()`](#ssl.RAND_add "ssl.RAND_add") can be used to seed the PRNG. For almost all applications [`os.urandom()`](os#os.urandom "os.urandom") is preferable. Read the Wikipedia article, [Cryptographically secure pseudorandom number generator (CSPRNG)](https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator), to get the requirements of a cryptographically strong generator. New in version 3.3. `ssl.RAND_pseudo_bytes(num)` Return (bytes, is\_cryptographic): bytes are *num* pseudo-random bytes, is\_cryptographic is `True` if the bytes generated are cryptographically strong. Raises an [`SSLError`](#ssl.SSLError "ssl.SSLError") if the operation is not supported by the current RAND method. Generated pseudo-random byte sequences will be unique if they are of sufficient length, but are not necessarily unpredictable. They can be used for non-cryptographic purposes and for certain purposes in cryptographic protocols, but usually not for key generation etc. For almost all applications [`os.urandom()`](os#os.urandom "os.urandom") is preferable. New in version 3.3. Deprecated since version 3.6: OpenSSL has deprecated [`ssl.RAND_pseudo_bytes()`](#ssl.RAND_pseudo_bytes "ssl.RAND_pseudo_bytes"), use [`ssl.RAND_bytes()`](#ssl.RAND_bytes "ssl.RAND_bytes") instead. `ssl.RAND_status()` Return `True` if the SSL pseudo-random number generator has been seeded with ‘enough’ randomness, and `False` otherwise. You can use [`ssl.RAND_egd()`](#ssl.RAND_egd "ssl.RAND_egd") and [`ssl.RAND_add()`](#ssl.RAND_add "ssl.RAND_add") to increase the randomness of the pseudo-random number generator. `ssl.RAND_egd(path)` If you are running an entropy-gathering daemon (EGD) somewhere, and *path* is the pathname of a socket connection open to it, this will read 256 bytes of randomness from the socket, and add it to the SSL pseudo-random number generator to increase the security of generated secret keys. This is typically only necessary on systems without better sources of randomness. See <http://egd.sourceforge.net/> or <http://prngd.sourceforge.net/> for sources of entropy-gathering daemons. [Availability](https://docs.python.org/3.9/library/intro.html#availability): not available with LibreSSL and OpenSSL > 1.1.0. `ssl.RAND_add(bytes, entropy)` Mix the given *bytes* into the SSL pseudo-random number generator. The parameter *entropy* (a float) is a lower bound on the entropy contained in string (so you can always use `0.0`). See [**RFC 1750**](https://tools.ietf.org/html/rfc1750.html) for more information on sources of entropy. 
Changed in version 3.5: Writable [bytes-like object](../glossary#term-bytes-like-object) is now accepted.

### Certificate handling

`ssl.match_hostname(cert, hostname)` Verify that *cert* (in decoded format as returned by [`SSLSocket.getpeercert()`](#ssl.SSLSocket.getpeercert "ssl.SSLSocket.getpeercert")) matches the given *hostname*. The rules applied are those for checking the identity of HTTPS servers as outlined in [**RFC 2818**](https://tools.ietf.org/html/rfc2818.html), [**RFC 5280**](https://tools.ietf.org/html/rfc5280.html) and [**RFC 6125**](https://tools.ietf.org/html/rfc6125.html). In addition to HTTPS, this function should be suitable for checking the identity of servers in various SSL-based protocols such as FTPS, IMAPS, POPS and others. [`CertificateError`](#ssl.CertificateError "ssl.CertificateError") is raised on failure. On success, the function returns nothing:

```
>>> cert = {'subject': ((('commonName', 'example.com'),),)}
>>> ssl.match_hostname(cert, "example.com")
>>> ssl.match_hostname(cert, "example.org")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/py3k/Lib/ssl.py", line 130, in match_hostname
ssl.CertificateError: hostname 'example.org' doesn't match 'example.com'
```

New in version 3.2.

Changed in version 3.3.3: The function now follows [**RFC 6125**](https://tools.ietf.org/html/rfc6125.html), section 6.4.3 and matches neither multiple wildcards (e.g. `*.*.com` or `*a*.example.org`) nor a wildcard inside an internationalized domain name (IDN) fragment. IDN A-labels such as `www*.xn--pthon-kva.org` are still supported, but `x*.python.org` no longer matches `xn--tda.python.org`.

Changed in version 3.5: Matching of IP addresses, when present in the subjectAltName field of the certificate, is now supported.

Changed in version 3.7: The function is no longer used for TLS connections; hostname matching is now performed by OpenSSL. A wildcard is allowed only when it is the leftmost and the only character in that segment. Partial wildcards like `www*.example.com` are no longer supported.

Deprecated since version 3.7.

`ssl.cert_time_to_seconds(cert_time)` Return the time in seconds since the Epoch, given the `cert_time` string representing the “notBefore” or “notAfter” date from a certificate in `"%b %d %H:%M:%S %Y %Z"` strptime format (C locale). Here’s an example:

```
>>> import ssl
>>> timestamp = ssl.cert_time_to_seconds("Jan 5 09:34:43 2018 GMT")
>>> timestamp
1515144883
>>> from datetime import datetime
>>> print(datetime.utcfromtimestamp(timestamp))
2018-01-05 09:34:43
```

“notBefore” or “notAfter” dates must use GMT ([**RFC 5280**](https://tools.ietf.org/html/rfc5280.html)).

Changed in version 3.5: Interpret the input time as a time in UTC as specified by ‘GMT’ timezone in the input string. Local timezone was used previously. Return an integer (no fractions of a second in the input format).

`ssl.get_server_certificate(addr, ssl_version=PROTOCOL_TLS, ca_certs=None)` Given the address `addr` of an SSL-protected server, as a (*hostname*, *port-number*) pair, fetches the server’s certificate, and returns it as a PEM-encoded string. If `ssl_version` is specified, uses that version of the SSL protocol to attempt to connect to the server. If `ca_certs` is specified, it should be a file containing a list of root certificates, the same format as used for the same parameter in [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket").
The call will attempt to validate the server certificate against that set of root certificates, and will fail if the validation attempt fails. Changed in version 3.3: This function is now IPv6-compatible. Changed in version 3.5: The default *ssl\_version* is changed from [`PROTOCOL_SSLv3`](#ssl.PROTOCOL_SSLv3 "ssl.PROTOCOL_SSLv3") to [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS") for maximum compatibility with modern servers. `ssl.DER_cert_to_PEM_cert(DER_cert_bytes)` Given a certificate as a DER-encoded blob of bytes, returns a PEM-encoded string version of the same certificate. `ssl.PEM_cert_to_DER_cert(PEM_cert_string)` Given a certificate as an ASCII PEM string, returns a DER-encoded sequence of bytes for that same certificate. `ssl.get_default_verify_paths()` Returns a named tuple with paths to OpenSSL’s default cafile and capath. The paths are the same as used by [`SSLContext.set_default_verify_paths()`](#ssl.SSLContext.set_default_verify_paths "ssl.SSLContext.set_default_verify_paths"). The return value is a [named tuple](../glossary#term-named-tuple) `DefaultVerifyPaths`: * `cafile` - resolved path to cafile or `None` if the file doesn’t exist, * `capath` - resolved path to capath or `None` if the directory doesn’t exist, * `openssl_cafile_env` - OpenSSL’s environment key that points to a cafile, * `openssl_cafile` - hard coded path to a cafile, * `openssl_capath_env` - OpenSSL’s environment key that points to a capath, * `openssl_capath` - hard coded path to a capath directory [Availability](https://docs.python.org/3.9/library/intro.html#availability): LibreSSL ignores the environment vars `openssl_cafile_env` and `openssl_capath_env`. New in version 3.4. `ssl.enum_certificates(store_name)` Retrieve certificates from Windows’ system cert store. *store\_name* may be one of `CA`, `ROOT` or `MY`. Windows may provide additional cert stores, too. The function returns a list of (cert\_bytes, encoding\_type, trust) tuples. The encoding\_type specifies the encoding of cert\_bytes. It is either `x509_asn` for X.509 ASN.1 data or `pkcs_7_asn` for PKCS#7 ASN.1 data. Trust specifies the purpose of the certificate as a set of OIDS or exactly `True` if the certificate is trustworthy for all purposes. Example: ``` >>> ssl.enum_certificates("CA") [(b'data...', 'x509_asn', {'1.3.6.1.5.5.7.3.1', '1.3.6.1.5.5.7.3.2'}), (b'data...', 'x509_asn', True)] ``` [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows. New in version 3.4. `ssl.enum_crls(store_name)` Retrieve CRLs from Windows’ system cert store. *store\_name* may be one of `CA`, `ROOT` or `MY`. Windows may provide additional cert stores, too. The function returns a list of (cert\_bytes, encoding\_type, trust) tuples. The encoding\_type specifies the encoding of cert\_bytes. It is either `x509_asn` for X.509 ASN.1 data or `pkcs_7_asn` for PKCS#7 ASN.1 data. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows. New in version 3.4. `ssl.wrap_socket(sock, keyfile=None, certfile=None, server_side=False, cert_reqs=CERT_NONE, ssl_version=PROTOCOL_TLS, ca_certs=None, do_handshake_on_connect=True, suppress_ragged_eofs=True, ciphers=None)` Takes an instance `sock` of [`socket.socket`](socket#socket.socket "socket.socket"), and returns an instance of [`ssl.SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket"), a subtype of [`socket.socket`](socket#socket.socket "socket.socket"), which wraps the underlying socket in an SSL context. 
`sock` must be a [`SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM") socket; other socket types are unsupported. Internally, the function creates an [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") with protocol *ssl\_version* and [`SSLContext.options`](#ssl.SSLContext.options "ssl.SSLContext.options") set to *cert\_reqs*. If parameters *keyfile*, *certfile*, *ca\_certs* or *ciphers* are set, then the values are passed to [`SSLContext.load_cert_chain()`](#ssl.SSLContext.load_cert_chain "ssl.SSLContext.load_cert_chain"), [`SSLContext.load_verify_locations()`](#ssl.SSLContext.load_verify_locations "ssl.SSLContext.load_verify_locations"), and [`SSLContext.set_ciphers()`](#ssl.SSLContext.set_ciphers "ssl.SSLContext.set_ciphers"). The arguments *server\_side*, *do\_handshake\_on\_connect*, and *suppress\_ragged\_eofs* have the same meaning as in [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket").

Deprecated since version 3.7: Since Python 3.2 and 2.7.9, it is recommended to use the [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket") method instead of [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket"). The top-level function is limited and creates an insecure client socket without server name indication or hostname matching.

### Constants

All constants are now [`enum.IntEnum`](enum#enum.IntEnum "enum.IntEnum") or [`enum.IntFlag`](enum#enum.IntFlag "enum.IntFlag") collections. New in version 3.6.

`ssl.CERT_NONE` Possible value for [`SSLContext.verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode"), or the `cert_reqs` parameter to [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket"). Except for [`PROTOCOL_TLS_CLIENT`](#ssl.PROTOCOL_TLS_CLIENT "ssl.PROTOCOL_TLS_CLIENT"), it is the default mode. With client-side sockets, just about any cert is accepted. Validation errors, such as untrusted or expired cert, are ignored and do not abort the TLS/SSL handshake. In server mode, no certificate is requested from the client, so the client does not send any for client cert authentication. See the discussion of [Security considerations](#ssl-security) below.

`ssl.CERT_OPTIONAL` Possible value for [`SSLContext.verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode"), or the `cert_reqs` parameter to [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket"). In client mode, [`CERT_OPTIONAL`](#ssl.CERT_OPTIONAL "ssl.CERT_OPTIONAL") has the same meaning as [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED"). It is recommended to use [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED") for client-side sockets instead. In server mode, a client certificate request is sent to the client. The client may either ignore the request or send a certificate in order to perform TLS client cert authentication. If the client chooses to send a certificate, it is verified. Any verification error immediately aborts the TLS handshake. Use of this setting requires a valid set of CA certificates to be passed, either to [`SSLContext.load_verify_locations()`](#ssl.SSLContext.load_verify_locations "ssl.SSLContext.load_verify_locations") or as a value of the `ca_certs` parameter to [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket").

`ssl.CERT_REQUIRED` Possible value for [`SSLContext.verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode"), or the `cert_reqs` parameter to [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket").
In this mode, certificates are required from the other side of the socket connection; an [`SSLError`](#ssl.SSLError "ssl.SSLError") will be raised if no certificate is provided, or if its validation fails. This mode is **not** sufficient to verify a certificate in client mode as it does not match hostnames. [`check_hostname`](#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") must be enabled as well to verify the authenticity of a cert. [`PROTOCOL_TLS_CLIENT`](#ssl.PROTOCOL_TLS_CLIENT "ssl.PROTOCOL_TLS_CLIENT") uses [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED") and enables [`check_hostname`](#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") by default. With server socket, this mode provides mandatory TLS client cert authentication. A client certificate request is sent to the client and the client must provide a valid and trusted certificate. Use of this setting requires a valid set of CA certificates to be passed, either to [`SSLContext.load_verify_locations()`](#ssl.SSLContext.load_verify_locations "ssl.SSLContext.load_verify_locations") or as a value of the `ca_certs` parameter to [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket"). `class ssl.VerifyMode` [`enum.IntEnum`](enum#enum.IntEnum "enum.IntEnum") collection of CERT\_\* constants. New in version 3.6. `ssl.VERIFY_DEFAULT` Possible value for [`SSLContext.verify_flags`](#ssl.SSLContext.verify_flags "ssl.SSLContext.verify_flags"). In this mode, certificate revocation lists (CRLs) are not checked. By default OpenSSL does neither require nor verify CRLs. New in version 3.4. `ssl.VERIFY_CRL_CHECK_LEAF` Possible value for [`SSLContext.verify_flags`](#ssl.SSLContext.verify_flags "ssl.SSLContext.verify_flags"). In this mode, only the peer cert is checked but none of the intermediate CA certificates. The mode requires a valid CRL that is signed by the peer cert’s issuer (its direct ancestor CA). If no proper CRL has been loaded with [`SSLContext.load_verify_locations`](#ssl.SSLContext.load_verify_locations "ssl.SSLContext.load_verify_locations"), validation will fail. New in version 3.4. `ssl.VERIFY_CRL_CHECK_CHAIN` Possible value for [`SSLContext.verify_flags`](#ssl.SSLContext.verify_flags "ssl.SSLContext.verify_flags"). In this mode, CRLs of all certificates in the peer cert chain are checked. New in version 3.4. `ssl.VERIFY_X509_STRICT` Possible value for [`SSLContext.verify_flags`](#ssl.SSLContext.verify_flags "ssl.SSLContext.verify_flags") to disable workarounds for broken X.509 certificates. New in version 3.4. `ssl.VERIFY_X509_TRUSTED_FIRST` Possible value for [`SSLContext.verify_flags`](#ssl.SSLContext.verify_flags "ssl.SSLContext.verify_flags"). It instructs OpenSSL to prefer trusted certificates when building the trust chain to validate a certificate. This flag is enabled by default. New in version 3.4.4. `class ssl.VerifyFlags` [`enum.IntFlag`](enum#enum.IntFlag "enum.IntFlag") collection of VERIFY\_\* constants. New in version 3.6. `ssl.PROTOCOL_TLS` Selects the highest protocol version that both the client and server support. Despite the name, this option can select both “SSL” and “TLS” protocols. New in version 3.6. `ssl.PROTOCOL_TLS_CLIENT` Auto-negotiate the highest protocol version like [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"), but only support client-side [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") connections. 
The protocol enables [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED") and [`check_hostname`](#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") by default. New in version 3.6.

`ssl.PROTOCOL_TLS_SERVER` Auto-negotiate the highest protocol version like [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"), but only support server-side [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") connections. New in version 3.6.

`ssl.PROTOCOL_SSLv23` Alias for [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"). Deprecated since version 3.6: Use [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS") instead.

`ssl.PROTOCOL_SSLv2` Selects SSL version 2 as the channel encryption protocol. This protocol is not available if OpenSSL is compiled with the `OPENSSL_NO_SSL2` flag. Warning SSL version 2 is insecure. Its use is highly discouraged. Deprecated since version 3.6: OpenSSL has removed support for SSLv2.

`ssl.PROTOCOL_SSLv3` Selects SSL version 3 as the channel encryption protocol. This protocol is not available if OpenSSL is compiled with the `OPENSSL_NO_SSLv3` flag. Warning SSL version 3 is insecure. Its use is highly discouraged. Deprecated since version 3.6: OpenSSL has deprecated all version specific protocols. Use the default protocol [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS") with flags like [`OP_NO_SSLv3`](#ssl.OP_NO_SSLv3 "ssl.OP_NO_SSLv3") instead.

`ssl.PROTOCOL_TLSv1` Selects TLS version 1.0 as the channel encryption protocol. Deprecated since version 3.6: OpenSSL has deprecated all version specific protocols. Use the default protocol [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS") with flags like [`OP_NO_SSLv3`](#ssl.OP_NO_SSLv3 "ssl.OP_NO_SSLv3") instead.

`ssl.PROTOCOL_TLSv1_1` Selects TLS version 1.1 as the channel encryption protocol. Available only with openssl version 1.0.1+. New in version 3.4. Deprecated since version 3.6: OpenSSL has deprecated all version specific protocols. Use the default protocol [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS") with flags like [`OP_NO_SSLv3`](#ssl.OP_NO_SSLv3 "ssl.OP_NO_SSLv3") instead.

`ssl.PROTOCOL_TLSv1_2` Selects TLS version 1.2 as the channel encryption protocol. This is the most modern version, and probably the best choice for maximum protection, if both sides can speak it. Available only with openssl version 1.0.1+. New in version 3.4. Deprecated since version 3.6: OpenSSL has deprecated all version specific protocols. Use the default protocol [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS") with flags like [`OP_NO_SSLv3`](#ssl.OP_NO_SSLv3 "ssl.OP_NO_SSLv3") instead.

`ssl.OP_ALL` Enables workarounds for various bugs present in other SSL implementations. This option is set by default. It does not necessarily set the same flags as OpenSSL’s `SSL_OP_ALL` constant. New in version 3.2.

`ssl.OP_NO_SSLv2` Prevents an SSLv2 connection. This option is only applicable in conjunction with [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"). It prevents the peers from choosing SSLv2 as the protocol version. New in version 3.2. Deprecated since version 3.6: SSLv2 is deprecated.

`ssl.OP_NO_SSLv3` Prevents an SSLv3 connection. This option is only applicable in conjunction with [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"). It prevents the peers from choosing SSLv3 as the protocol version. New in version 3.2. Deprecated since version 3.6: SSLv3 is deprecated.

`ssl.OP_NO_TLSv1` Prevents a TLSv1 connection.
This option is only applicable in conjunction with [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"). It prevents the peers from choosing TLSv1 as the protocol version. New in version 3.2. Deprecated since version 3.7: The option is deprecated since OpenSSL 1.1.0, use the new [`SSLContext.minimum_version`](#ssl.SSLContext.minimum_version "ssl.SSLContext.minimum_version") and [`SSLContext.maximum_version`](#ssl.SSLContext.maximum_version "ssl.SSLContext.maximum_version") instead. `ssl.OP_NO_TLSv1_1` Prevents a TLSv1.1 connection. This option is only applicable in conjunction with [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"). It prevents the peers from choosing TLSv1.1 as the protocol version. Available only with openssl version 1.0.1+. New in version 3.4. Deprecated since version 3.7: The option is deprecated since OpenSSL 1.1.0. `ssl.OP_NO_TLSv1_2` Prevents a TLSv1.2 connection. This option is only applicable in conjunction with [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"). It prevents the peers from choosing TLSv1.2 as the protocol version. Available only with openssl version 1.0.1+. New in version 3.4. Deprecated since version 3.7: The option is deprecated since OpenSSL 1.1.0. `ssl.OP_NO_TLSv1_3` Prevents a TLSv1.3 connection. This option is only applicable in conjunction with [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"). It prevents the peers from choosing TLSv1.3 as the protocol version. TLS 1.3 is available with OpenSSL 1.1.1 or later. When Python has been compiled against an older version of OpenSSL, the flag defaults to *0*. New in version 3.7. Deprecated since version 3.7: The option is deprecated since OpenSSL 1.1.0. It was added to 2.7.15, 3.6.3 and 3.7.0 for backwards compatibility with OpenSSL 1.0.2. `ssl.OP_NO_RENEGOTIATION` Disable all renegotiation in TLSv1.2 and earlier. Do not send HelloRequest messages, and ignore renegotiation requests via ClientHello. This option is only available with OpenSSL 1.1.0h and later. New in version 3.7. `ssl.OP_CIPHER_SERVER_PREFERENCE` Use the server’s cipher ordering preference, rather than the client’s. This option has no effect on client sockets and SSLv2 server sockets. New in version 3.3. `ssl.OP_SINGLE_DH_USE` Prevents re-use of the same DH key for distinct SSL sessions. This improves forward secrecy but requires more computational resources. This option only applies to server sockets. New in version 3.3. `ssl.OP_SINGLE_ECDH_USE` Prevents re-use of the same ECDH key for distinct SSL sessions. This improves forward secrecy but requires more computational resources. This option only applies to server sockets. New in version 3.3. `ssl.OP_ENABLE_MIDDLEBOX_COMPAT` Send dummy Change Cipher Spec (CCS) messages in TLS 1.3 handshake to make a TLS 1.3 connection look more like a TLS 1.2 connection. This option is only available with OpenSSL 1.1.1 and later. New in version 3.8. `ssl.OP_NO_COMPRESSION` Disable compression on the SSL channel. This is useful if the application protocol supports its own compression scheme. This option is only available with OpenSSL 1.0.0 and later. New in version 3.3. `class ssl.Options` [`enum.IntFlag`](enum#enum.IntFlag "enum.IntFlag") collection of OP\_\* constants. `ssl.OP_NO_TICKET` Prevent client side from requesting a session ticket. New in version 3.6. `ssl.OP_IGNORE_UNEXPECTED_EOF` Ignore unexpected shutdown of TLS connections. This option is only available with OpenSSL 3.0.0 and later. New in version 3.10. 
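The `OP_*` values above are members of the [`Options`](#ssl.Options "ssl.Options") IntFlag, so they combine with ordinary bitwise operators on `SSLContext.options`. A minimal sketch (an illustration, not part of the original page):

```
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
# |= adds a flag while preserving the secure defaults (e.g. OP_NO_SSLv2,
# OP_NO_SSLv3) that the context already carries.
ctx.options |= ssl.OP_NO_COMPRESSION
print(bool(ctx.options & ssl.OP_NO_COMPRESSION))  # True
```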
`ssl.HAS_ALPN` Whether the OpenSSL library has built-in support for the *Application-Layer Protocol Negotiation* TLS extension as described in [**RFC 7301**](https://tools.ietf.org/html/rfc7301.html). New in version 3.5.

`ssl.HAS_NEVER_CHECK_COMMON_NAME` Whether the OpenSSL library has built-in support for not checking the subject common name, in which case [`SSLContext.hostname_checks_common_name`](#ssl.SSLContext.hostname_checks_common_name "ssl.SSLContext.hostname_checks_common_name") is writeable. New in version 3.7.

`ssl.HAS_ECDH` Whether the OpenSSL library has built-in support for the Elliptic Curve-based Diffie-Hellman key exchange. This should be true unless the feature was explicitly disabled by the distributor. New in version 3.3.

`ssl.HAS_SNI` Whether the OpenSSL library has built-in support for the *Server Name Indication* extension (as defined in [**RFC 6066**](https://tools.ietf.org/html/rfc6066.html)). New in version 3.2.

`ssl.HAS_NPN` Whether the OpenSSL library has built-in support for the *Next Protocol Negotiation* as described in the [Application Layer Protocol Negotiation](https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation). When true, you can use the [`SSLContext.set_npn_protocols()`](#ssl.SSLContext.set_npn_protocols "ssl.SSLContext.set_npn_protocols") method to advertise which protocols you want to support. New in version 3.3.

`ssl.HAS_SSLv2` Whether the OpenSSL library has built-in support for the SSL 2.0 protocol. New in version 3.7.

`ssl.HAS_SSLv3` Whether the OpenSSL library has built-in support for the SSL 3.0 protocol. New in version 3.7.

`ssl.HAS_TLSv1` Whether the OpenSSL library has built-in support for the TLS 1.0 protocol. New in version 3.7.

`ssl.HAS_TLSv1_1` Whether the OpenSSL library has built-in support for the TLS 1.1 protocol. New in version 3.7.

`ssl.HAS_TLSv1_2` Whether the OpenSSL library has built-in support for the TLS 1.2 protocol. New in version 3.7.

`ssl.HAS_TLSv1_3` Whether the OpenSSL library has built-in support for the TLS 1.3 protocol. New in version 3.7.

`ssl.CHANNEL_BINDING_TYPES` List of supported TLS channel binding types. Strings in this list can be used as arguments to [`SSLSocket.get_channel_binding()`](#ssl.SSLSocket.get_channel_binding "ssl.SSLSocket.get_channel_binding"). New in version 3.3.

`ssl.OPENSSL_VERSION` The version string of the OpenSSL library loaded by the interpreter:

```
>>> ssl.OPENSSL_VERSION
'OpenSSL 1.0.2k 26 Jan 2017'
```

New in version 3.2.

`ssl.OPENSSL_VERSION_INFO` A tuple of five integers representing version information about the OpenSSL library:

```
>>> ssl.OPENSSL_VERSION_INFO
(1, 0, 2, 11, 15)
```

New in version 3.2.

`ssl.OPENSSL_VERSION_NUMBER` The raw version number of the OpenSSL library, as a single integer:

```
>>> ssl.OPENSSL_VERSION_NUMBER
268443839
>>> hex(ssl.OPENSSL_VERSION_NUMBER)
'0x100020bf'
```

New in version 3.2.

`ssl.ALERT_DESCRIPTION_HANDSHAKE_FAILURE` `ssl.ALERT_DESCRIPTION_INTERNAL_ERROR` `ALERT_DESCRIPTION_*` Alert Descriptions from [**RFC 5246**](https://tools.ietf.org/html/rfc5246.html) and others. The [IANA TLS Alert Registry](https://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-6) contains this list and references to the RFCs where their meaning is defined. Used as the return value of the callback function in [`SSLContext.set_servername_callback()`](#ssl.SSLContext.set_servername_callback "ssl.SSLContext.set_servername_callback"). New in version 3.4.
`class ssl.AlertDescription` [`enum.IntEnum`](enum#enum.IntEnum "enum.IntEnum") collection of ALERT\_DESCRIPTION\_\* constants. New in version 3.6. `Purpose.SERVER_AUTH` Option for [`create_default_context()`](#ssl.create_default_context "ssl.create_default_context") and [`SSLContext.load_default_certs()`](#ssl.SSLContext.load_default_certs "ssl.SSLContext.load_default_certs"). This value indicates that the context may be used to authenticate Web servers (therefore, it will be used to create client-side sockets). New in version 3.4. `Purpose.CLIENT_AUTH` Option for [`create_default_context()`](#ssl.create_default_context "ssl.create_default_context") and [`SSLContext.load_default_certs()`](#ssl.SSLContext.load_default_certs "ssl.SSLContext.load_default_certs"). This value indicates that the context may be used to authenticate Web clients (therefore, it will be used to create server-side sockets). New in version 3.4. `class ssl.SSLErrorNumber` [`enum.IntEnum`](enum#enum.IntEnum "enum.IntEnum") collection of SSL\_ERROR\_\* constants. New in version 3.6. `class ssl.TLSVersion` [`enum.IntEnum`](enum#enum.IntEnum "enum.IntEnum") collection of SSL and TLS versions for [`SSLContext.maximum_version`](#ssl.SSLContext.maximum_version "ssl.SSLContext.maximum_version") and [`SSLContext.minimum_version`](#ssl.SSLContext.minimum_version "ssl.SSLContext.minimum_version"). New in version 3.7. `TLSVersion.MINIMUM_SUPPORTED` `TLSVersion.MAXIMUM_SUPPORTED` The minimum or maximum supported SSL or TLS version. These are magic constants. Their values don’t reflect the lowest and highest available TLS/SSL versions. `TLSVersion.SSLv3` `TLSVersion.TLSv1` `TLSVersion.TLSv1_1` `TLSVersion.TLSv1_2` `TLSVersion.TLSv1_3` SSL 3.0 to TLS 1.3. SSL Sockets ----------- `class ssl.SSLSocket(socket.socket)` SSL sockets provide the following methods of [Socket Objects](socket#socket-objects): * [`accept()`](socket#socket.socket.accept "socket.socket.accept") * [`bind()`](socket#socket.socket.bind "socket.socket.bind") * [`close()`](socket#socket.socket.close "socket.socket.close") * [`connect()`](socket#socket.socket.connect "socket.socket.connect") * [`detach()`](socket#socket.socket.detach "socket.socket.detach") * [`fileno()`](socket#socket.socket.fileno "socket.socket.fileno") * [`getpeername()`](socket#socket.socket.getpeername "socket.socket.getpeername"), [`getsockname()`](socket#socket.socket.getsockname "socket.socket.getsockname") * [`getsockopt()`](socket#socket.socket.getsockopt "socket.socket.getsockopt"), [`setsockopt()`](socket#socket.socket.setsockopt "socket.socket.setsockopt") * [`gettimeout()`](socket#socket.socket.gettimeout "socket.socket.gettimeout"), [`settimeout()`](socket#socket.socket.settimeout "socket.socket.settimeout"), [`setblocking()`](socket#socket.socket.setblocking "socket.socket.setblocking") * [`listen()`](socket#socket.socket.listen "socket.socket.listen") * [`makefile()`](socket#socket.socket.makefile "socket.socket.makefile") * [`recv()`](socket#socket.socket.recv "socket.socket.recv"), [`recv_into()`](socket#socket.socket.recv_into "socket.socket.recv_into") (but passing a non-zero `flags` argument is not allowed) * [`send()`](socket#socket.socket.send "socket.socket.send"), [`sendall()`](socket#socket.socket.sendall "socket.socket.sendall") (with the same limitation) * [`sendfile()`](socket#socket.socket.sendfile "socket.socket.sendfile") (but [`os.sendfile`](os#os.sendfile "os.sendfile") will be used for plain-text sockets only, else [`send()`](socket#socket.socket.send 
"socket.socket.send") will be used) * [`shutdown()`](socket#socket.socket.shutdown "socket.socket.shutdown") However, since the SSL (and TLS) protocol has its own framing atop of TCP, the SSL sockets abstraction can, in certain respects, diverge from the specification of normal, OS-level sockets. See especially the [notes on non-blocking sockets](#ssl-nonblocking). Instances of [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") must be created using the [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket") method. Changed in version 3.5: The `sendfile()` method was added. Changed in version 3.5: The `shutdown()` does not reset the socket timeout each time bytes are received or sent. The socket timeout is now to maximum total duration of the shutdown. Deprecated since version 3.6: It is deprecated to create a [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") instance directly, use [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket") to wrap a socket. Changed in version 3.7: [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") instances must to created with [`wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket"). In earlier versions, it was possible to create instances directly. This was never documented or officially supported. SSL sockets also have the following additional methods and attributes: `SSLSocket.read(len=1024, buffer=None)` Read up to *len* bytes of data from the SSL socket and return the result as a `bytes` instance. If *buffer* is specified, then read into the buffer instead, and return the number of bytes read. Raise [`SSLWantReadError`](#ssl.SSLWantReadError "ssl.SSLWantReadError") or [`SSLWantWriteError`](#ssl.SSLWantWriteError "ssl.SSLWantWriteError") if the socket is [non-blocking](#ssl-nonblocking) and the read would block. As at any time a re-negotiation is possible, a call to [`read()`](#ssl.SSLSocket.read "ssl.SSLSocket.read") can also cause write operations. Changed in version 3.5: The socket timeout is no more reset each time bytes are received or sent. The socket timeout is now to maximum total duration to read up to *len* bytes. Deprecated since version 3.6: Use `recv()` instead of [`read()`](#ssl.SSLSocket.read "ssl.SSLSocket.read"). `SSLSocket.write(buf)` Write *buf* to the SSL socket and return the number of bytes written. The *buf* argument must be an object supporting the buffer interface. Raise [`SSLWantReadError`](#ssl.SSLWantReadError "ssl.SSLWantReadError") or [`SSLWantWriteError`](#ssl.SSLWantWriteError "ssl.SSLWantWriteError") if the socket is [non-blocking](#ssl-nonblocking) and the write would block. As at any time a re-negotiation is possible, a call to [`write()`](#ssl.SSLSocket.write "ssl.SSLSocket.write") can also cause read operations. Changed in version 3.5: The socket timeout is no more reset each time bytes are received or sent. The socket timeout is now to maximum total duration to write *buf*. Deprecated since version 3.6: Use `send()` instead of [`write()`](#ssl.SSLSocket.write "ssl.SSLSocket.write"). Note The [`read()`](#ssl.SSLSocket.read "ssl.SSLSocket.read") and [`write()`](#ssl.SSLSocket.write "ssl.SSLSocket.write") methods are the low-level methods that read and write unencrypted, application-level data and decrypt/encrypt it to encrypted, wire-level data. These methods require an active SSL connection, i.e. the handshake was completed and [`SSLSocket.unwrap()`](#ssl.SSLSocket.unwrap "ssl.SSLSocket.unwrap") was not called. 
Normally you should use the socket API methods like [`recv()`](socket#socket.socket.recv "socket.socket.recv") and [`send()`](socket#socket.socket.send "socket.socket.send") instead of these methods.

`SSLSocket.do_handshake()` Perform the SSL setup handshake. Changed in version 3.4: The handshake method also performs [`match_hostname()`](#ssl.match_hostname "ssl.match_hostname") when the [`check_hostname`](#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") attribute of the socket’s [`context`](#ssl.SSLSocket.context "ssl.SSLSocket.context") is true. Changed in version 3.5: The socket timeout is no longer reset each time bytes are received or sent. The socket timeout is now the maximum total duration of the handshake. Changed in version 3.7: Hostname or IP address is matched by OpenSSL during handshake. The function [`match_hostname()`](#ssl.match_hostname "ssl.match_hostname") is no longer used. In case OpenSSL refuses a hostname or IP address, the handshake is aborted early and a TLS alert message is sent to the peer.

`SSLSocket.getpeercert(binary_form=False)` If there is no certificate for the peer on the other end of the connection, return `None`. If the SSL handshake hasn’t been done yet, raise [`ValueError`](exceptions#ValueError "ValueError"). If the `binary_form` parameter is [`False`](constants#False "False"), and a certificate was received from the peer, this method returns a [`dict`](stdtypes#dict "dict") instance. If the certificate was not validated, the dict is empty. If the certificate was validated, it returns a dict with several keys, amongst them `subject` (the principal for which the certificate was issued) and `issuer` (the principal issuing the certificate). If a certificate contains an instance of the *Subject Alternative Name* extension (see [**RFC 3280**](https://tools.ietf.org/html/rfc3280.html)), there will also be a `subjectAltName` key in the dictionary. The `subject` and `issuer` fields are tuples containing the sequence of relative distinguished names (RDNs) given in the certificate’s data structure for the respective fields, and each RDN is a sequence of name-value pairs. Here is a real-world example:

```
{'issuer': ((('countryName', 'IL'),),
            (('organizationName', 'StartCom Ltd.'),),
            (('organizationalUnitName', 'Secure Digital Certificate Signing'),),
            (('commonName', 'StartCom Class 2 Primary Intermediate Server CA'),)),
 'notAfter': 'Nov 22 08:15:19 2013 GMT',
 'notBefore': 'Nov 21 03:09:52 2011 GMT',
 'serialNumber': '95F0',
 'subject': ((('description', '571208-SLe257oHY9fVQ07Z'),),
             (('countryName', 'US'),),
             (('stateOrProvinceName', 'California'),),
             (('localityName', 'San Francisco'),),
             (('organizationName', 'Electronic Frontier Foundation, Inc.'),),
             (('commonName', '*.eff.org'),),
             (('emailAddress', '[email protected]'),)),
 'subjectAltName': (('DNS', '*.eff.org'), ('DNS', 'eff.org')),
 'version': 3}
```

Note To validate a certificate for a particular service, you can use the [`match_hostname()`](#ssl.match_hostname "ssl.match_hostname") function.

If the `binary_form` parameter is [`True`](constants#True "True"), and a certificate was provided, this method returns the DER-encoded form of the entire certificate as a sequence of bytes, or [`None`](constants#None "None") if the peer did not provide a certificate.
Whether the peer provides a certificate depends on the SSL socket’s role:

* for a client SSL socket, the server will always provide a certificate, regardless of whether validation was required;
* for a server SSL socket, the client will only provide a certificate when requested by the server; therefore [`getpeercert()`](#ssl.SSLSocket.getpeercert "ssl.SSLSocket.getpeercert") will return [`None`](constants#None "None") if you used [`CERT_NONE`](#ssl.CERT_NONE "ssl.CERT_NONE") (rather than [`CERT_OPTIONAL`](#ssl.CERT_OPTIONAL "ssl.CERT_OPTIONAL") or [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED")).

Changed in version 3.2: The returned dictionary includes additional items such as `issuer` and `notBefore`.

Changed in version 3.4: [`ValueError`](exceptions#ValueError "ValueError") is raised when the handshake isn’t done. The returned dictionary includes additional X509v3 extension items such as `crlDistributionPoints`, `caIssuers` and `OCSP` URIs.

Changed in version 3.9: IPv6 address strings no longer have a trailing newline.

`SSLSocket.cipher()` Returns a three-value tuple containing the name of the cipher being used, the version of the SSL protocol that defines its use, and the number of secret bits being used. If no connection has been established, returns `None`.

`SSLSocket.shared_ciphers()` Return the list of ciphers shared by the client during the handshake. Each entry of the returned list is a three-value tuple containing the name of the cipher, the version of the SSL protocol that defines its use, and the number of secret bits the cipher uses. [`shared_ciphers()`](#ssl.SSLSocket.shared_ciphers "ssl.SSLSocket.shared_ciphers") returns `None` if no connection has been established or the socket is a client socket. New in version 3.5.

`SSLSocket.compression()` Return the compression algorithm being used as a string, or `None` if the connection isn’t compressed. If the higher-level protocol supports its own compression mechanism, you can use [`OP_NO_COMPRESSION`](#ssl.OP_NO_COMPRESSION "ssl.OP_NO_COMPRESSION") to disable SSL-level compression. New in version 3.3.

`SSLSocket.get_channel_binding(cb_type="tls-unique")` Get channel binding data for current connection, as a bytes object. Returns `None` if not connected or the handshake has not been completed. The *cb\_type* parameter allows selection of the desired channel binding type. Valid channel binding types are listed in the [`CHANNEL_BINDING_TYPES`](#ssl.CHANNEL_BINDING_TYPES "ssl.CHANNEL_BINDING_TYPES") list. Currently only the ‘tls-unique’ channel binding, defined by [**RFC 5929**](https://tools.ietf.org/html/rfc5929.html), is supported. [`ValueError`](exceptions#ValueError "ValueError") will be raised if an unsupported channel binding type is requested. New in version 3.3.

`SSLSocket.selected_alpn_protocol()` Return the protocol that was selected during the TLS handshake. If [`SSLContext.set_alpn_protocols()`](#ssl.SSLContext.set_alpn_protocols "ssl.SSLContext.set_alpn_protocols") was not called, if the other party does not support ALPN, if this socket does not support any of the client’s proposed protocols, or if the handshake has not happened yet, `None` is returned. New in version 3.5.

`SSLSocket.selected_npn_protocol()` Return the higher-level protocol that was selected during the TLS/SSL handshake.
If [`SSLContext.set_npn_protocols()`](#ssl.SSLContext.set_npn_protocols "ssl.SSLContext.set_npn_protocols") was not called, or if the other party does not support NPN, or if the handshake has not yet happened, this will return `None`. New in version 3.3. `SSLSocket.unwrap()` Performs the SSL shutdown handshake, which removes the TLS layer from the underlying socket, and returns the underlying socket object. This can be used to go from encrypted operation over a connection to unencrypted. The returned socket should always be used for further communication with the other side of the connection, rather than the original socket. `SSLSocket.verify_client_post_handshake()` Requests post-handshake authentication (PHA) from a TLS 1.3 client. PHA can only be initiated for a TLS 1.3 connection from a server-side socket, after the initial TLS handshake and with PHA enabled on both sides, see [`SSLContext.post_handshake_auth`](#ssl.SSLContext.post_handshake_auth "ssl.SSLContext.post_handshake_auth"). The method does not perform a cert exchange immediately. The server-side sends a CertificateRequest during the next write event and expects the client to respond with a certificate on the next read event. If any precondition isn’t met (e.g. not TLS 1.3, PHA not enabled), an [`SSLError`](#ssl.SSLError "ssl.SSLError") is raised. Note Only available with OpenSSL 1.1.1 and TLS 1.3 enabled. Without TLS 1.3 support, the method raises [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError"). New in version 3.8. `SSLSocket.version()` Return the actual SSL protocol version negotiated by the connection as a string, or `None` if no secure connection is established. As of this writing, possible return values include `"SSLv2"`, `"SSLv3"`, `"TLSv1"`, `"TLSv1.1"` and `"TLSv1.2"`. Recent OpenSSL versions may define more return values. New in version 3.5. `SSLSocket.pending()` Returns the number of already decrypted bytes available for read, pending on the connection. `SSLSocket.context` The [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") object this SSL socket is tied to. If the SSL socket was created using the deprecated [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket") function (rather than [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket")), this is a custom context object created for this SSL socket. New in version 3.2. `SSLSocket.server_side` A boolean which is `True` for server-side sockets and `False` for client-side sockets. New in version 3.2. `SSLSocket.server_hostname` Hostname of the server: [`str`](stdtypes#str "str") type, or `None` for server-side socket or if the hostname was not specified in the constructor. New in version 3.2. Changed in version 3.7: The attribute is now always ASCII text. When `server_hostname` is an internationalized domain name (IDN), this attribute now stores the A-label form (`"xn--pythn-mua.org"`), rather than the U-label form (`"pythön.org"`). `SSLSocket.session` The [`SSLSession`](#ssl.SSLSession "ssl.SSLSession") for this SSL connection. The session is available for client and server side sockets after the TLS handshake has been performed. For client sockets the session can be set before [`do_handshake()`](#ssl.SSLSocket.do_handshake "ssl.SSLSocket.do_handshake") has been called to reuse a session. New in version 3.6. `SSLSocket.session_reused` New in version 3.6. SSL Contexts ------------ New in version 3.2. 
An SSL context holds various data longer-lived than single SSL connections, such as SSL configuration options, certificate(s) and private key(s). It also manages a cache of SSL sessions for server-side sockets, in order to speed up repeated connections from the same clients.

`class ssl.SSLContext(protocol=PROTOCOL_TLS)` Create a new SSL context. You may pass *protocol* which must be one of the `PROTOCOL_*` constants defined in this module. The parameter specifies which version of the SSL protocol to use. Typically, the server chooses a particular protocol version, and the client must adapt to the server’s choice. Most of the versions are not interoperable with the other versions. If not specified, the default is [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"); it provides the most compatibility with other versions. Here’s a table showing which versions in a client (down the side) can connect to which versions in a server (along the top):

| *client* / **server** | **SSLv2** | **SSLv3** | **TLS** [3](#id9) | **TLSv1** | **TLSv1.1** | **TLSv1.2** |
| --- | --- | --- | --- | --- | --- | --- |
| *SSLv2* | yes | no | no [1](#id7) | no | no | no |
| *SSLv3* | no | yes | no [2](#id8) | no | no | no |
| *TLS* (*SSLv23*) [3](#id9) | no [1](#id7) | no [2](#id8) | yes | yes | yes | yes |
| *TLSv1* | no | no | yes | yes | no | no |
| *TLSv1.1* | no | no | yes | no | yes | no |
| *TLSv1.2* | no | no | yes | no | no | yes |

#### Footnotes

1. [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") disables SSLv2 with [`OP_NO_SSLv2`](#ssl.OP_NO_SSLv2 "ssl.OP_NO_SSLv2") by default.
2. [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") disables SSLv3 with [`OP_NO_SSLv3`](#ssl.OP_NO_SSLv3 "ssl.OP_NO_SSLv3") by default.
3. TLS 1.3 protocol will be available with [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS") in OpenSSL >= 1.1.1. There is no dedicated PROTOCOL constant for just TLS 1.3.

See also [`create_default_context()`](#ssl.create_default_context "ssl.create_default_context") lets the [`ssl`](#module-ssl "ssl: TLS/SSL wrapper for socket objects") module choose security settings for a given purpose.

Changed in version 3.6: The context is created with secure default values. The options [`OP_NO_COMPRESSION`](#ssl.OP_NO_COMPRESSION "ssl.OP_NO_COMPRESSION"), [`OP_CIPHER_SERVER_PREFERENCE`](#ssl.OP_CIPHER_SERVER_PREFERENCE "ssl.OP_CIPHER_SERVER_PREFERENCE"), [`OP_SINGLE_DH_USE`](#ssl.OP_SINGLE_DH_USE "ssl.OP_SINGLE_DH_USE"), [`OP_SINGLE_ECDH_USE`](#ssl.OP_SINGLE_ECDH_USE "ssl.OP_SINGLE_ECDH_USE"), [`OP_NO_SSLv2`](#ssl.OP_NO_SSLv2 "ssl.OP_NO_SSLv2") (except for [`PROTOCOL_SSLv2`](#ssl.PROTOCOL_SSLv2 "ssl.PROTOCOL_SSLv2")), and [`OP_NO_SSLv3`](#ssl.OP_NO_SSLv3 "ssl.OP_NO_SSLv3") (except for [`PROTOCOL_SSLv3`](#ssl.PROTOCOL_SSLv3 "ssl.PROTOCOL_SSLv3")) are set by default. The initial cipher suite list contains only `HIGH` ciphers, no `NULL` ciphers and no `MD5` ciphers (except for [`PROTOCOL_SSLv2`](#ssl.PROTOCOL_SSLv2 "ssl.PROTOCOL_SSLv2")).

[`SSLContext`](#ssl.SSLContext "ssl.SSLContext") objects have the following methods and attributes:

`SSLContext.cert_store_stats()` Get statistics about the quantities of loaded X.509 certificates, X.509 certificates flagged as CA certificates, and certificate revocation lists, as a dictionary. Example for a context with one CA cert and one other cert:

```
>>> context.cert_store_stats()
{'crl': 0, 'x509_ca': 1, 'x509': 2}
```

New in version 3.4.
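The secure defaults applied by the constructor (see the footnotes to the table above) can be observed directly. A small, hedged sketch, not part of the original page:

```
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
# SSLv2 and SSLv3 are disabled out of the box on a PROTOCOL_TLS context.
print(bool(ctx.options & ssl.OP_NO_SSLv2))  # True
print(bool(ctx.options & ssl.OP_NO_SSLv3))  # True
```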
`SSLContext.load_cert_chain(certfile, keyfile=None, password=None)` Load a private key and the corresponding certificate. The *certfile* string must be the path to a single file in PEM format containing the certificate as well as any number of CA certificates needed to establish the certificate’s authenticity. The *keyfile* string, if present, must point to a file containing the private key. Otherwise the private key will be taken from *certfile* as well. See the discussion of [Certificates](#ssl-certificates) for more information on how the certificate is stored in the *certfile*.

The *password* argument may be a function to call to get the password for decrypting the private key. It will only be called if the private key is encrypted and a password is necessary. It will be called with no arguments, and it should return a string, bytes, or bytearray. If the return value is a string it will be encoded as UTF-8 before using it to decrypt the key. Alternatively, a string, bytes, or bytearray value may be supplied directly as the *password* argument. It will be ignored if the private key is not encrypted and no password is needed. If the *password* argument is not specified and a password is required, OpenSSL’s built-in password prompting mechanism will be used to interactively prompt the user for a password.

An [`SSLError`](#ssl.SSLError "ssl.SSLError") is raised if the private key doesn’t match the certificate.

Changed in version 3.3: New optional argument *password*.

`SSLContext.load_default_certs(purpose=Purpose.SERVER_AUTH)` Load a set of default “certification authority” (CA) certificates from default locations. On Windows it loads CA certs from the `CA` and `ROOT` system stores. On all systems it calls [`SSLContext.set_default_verify_paths()`](#ssl.SSLContext.set_default_verify_paths "ssl.SSLContext.set_default_verify_paths"). In the future the method may load CA certificates from other locations, too.

The *purpose* flag specifies what kind of CA certificates are loaded. The default setting [`Purpose.SERVER_AUTH`](#ssl.Purpose.SERVER_AUTH "ssl.Purpose.SERVER_AUTH") loads certificates that are flagged and trusted for TLS web server authentication (client-side sockets). [`Purpose.CLIENT_AUTH`](#ssl.Purpose.CLIENT_AUTH "ssl.Purpose.CLIENT_AUTH") loads CA certificates for client certificate verification on the server side.

New in version 3.4.

`SSLContext.load_verify_locations(cafile=None, capath=None, cadata=None)` Load a set of “certification authority” (CA) certificates used to validate other peers’ certificates when [`verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") is other than [`CERT_NONE`](#ssl.CERT_NONE "ssl.CERT_NONE"). At least one of *cafile* or *capath* must be specified.

This method can also load certificate revocation lists (CRLs) in PEM or DER format. In order to make use of CRLs, [`SSLContext.verify_flags`](#ssl.SSLContext.verify_flags "ssl.SSLContext.verify_flags") must be configured properly.

The *cafile* string, if present, is the path to a file of concatenated CA certificates in PEM format. See the discussion of [Certificates](#ssl-certificates) for more information about how to arrange the certificates in this file.

The *capath* string, if present, is the path to a directory containing several CA certificates in PEM format, following an [OpenSSL specific layout](https://www.openssl.org/docs/manmaster/man3/SSL_CTX_load_verify_locations.html).
The *cadata* object, if present, is either an ASCII string of one or more PEM-encoded certificates or a [bytes-like object](../glossary#term-bytes-like-object) of DER-encoded certificates. As with *capath*, extra lines around PEM-encoded certificates are ignored, but at least one certificate must be present.

Changed in version 3.4: New optional argument *cadata*.

`SSLContext.get_ca_certs(binary_form=False)` Get a list of loaded “certification authority” (CA) certificates. If the `binary_form` parameter is [`False`](constants#False "False"), each list entry is a dict like the output of [`SSLSocket.getpeercert()`](#ssl.SSLSocket.getpeercert "ssl.SSLSocket.getpeercert"). Otherwise the method returns a list of DER-encoded certificates. The returned list does not contain certificates from *capath* unless a certificate was requested and loaded by an SSL connection.

Note Certificates in a capath directory aren’t loaded unless they have been used at least once.

New in version 3.4.

`SSLContext.get_ciphers()` Get a list of enabled ciphers. The list is in order of cipher priority. See [`SSLContext.set_ciphers()`](#ssl.SSLContext.set_ciphers "ssl.SSLContext.set_ciphers"). Example:

```
>>> ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
>>> ctx.set_ciphers('ECDHE+AESGCM:!ECDSA')
>>> ctx.get_ciphers()  # OpenSSL 1.0.x
[{'alg_bits': 256,
  'description': 'ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA '
                 'Enc=AESGCM(256) Mac=AEAD',
  'id': 50380848,
  'name': 'ECDHE-RSA-AES256-GCM-SHA384',
  'protocol': 'TLSv1/SSLv3',
  'strength_bits': 256},
 {'alg_bits': 128,
  'description': 'ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA '
                 'Enc=AESGCM(128) Mac=AEAD',
  'id': 50380847,
  'name': 'ECDHE-RSA-AES128-GCM-SHA256',
  'protocol': 'TLSv1/SSLv3',
  'strength_bits': 128}]
```

On OpenSSL 1.1 and newer the cipher dict contains additional fields:

```
>>> ctx.get_ciphers()  # OpenSSL 1.1+
[{'aead': True,
  'alg_bits': 256,
  'auth': 'auth-rsa',
  'description': 'ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA '
                 'Enc=AESGCM(256) Mac=AEAD',
  'digest': None,
  'id': 50380848,
  'kea': 'kx-ecdhe',
  'name': 'ECDHE-RSA-AES256-GCM-SHA384',
  'protocol': 'TLSv1.2',
  'strength_bits': 256,
  'symmetric': 'aes-256-gcm'},
 {'aead': True,
  'alg_bits': 128,
  'auth': 'auth-rsa',
  'description': 'ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA '
                 'Enc=AESGCM(128) Mac=AEAD',
  'digest': None,
  'id': 50380847,
  'kea': 'kx-ecdhe',
  'name': 'ECDHE-RSA-AES128-GCM-SHA256',
  'protocol': 'TLSv1.2',
  'strength_bits': 128,
  'symmetric': 'aes-128-gcm'}]
```

[Availability](https://docs.python.org/3.9/library/intro.html#availability): OpenSSL 1.0.2+.

New in version 3.6.

`SSLContext.set_default_verify_paths()` Load a set of default “certification authority” (CA) certificates from a filesystem path defined when building the OpenSSL library. Unfortunately, there’s no easy way to know whether this method succeeds: no error is returned if no certificates are to be found. When the OpenSSL library is provided as part of the operating system, though, it is likely to be configured properly.

`SSLContext.set_ciphers(ciphers)` Set the available ciphers for sockets created with this context. It should be a string in the [OpenSSL cipher list format](https://www.openssl.org/docs/manmaster/man1/ciphers.html). If no cipher can be selected (because compile-time options or other configuration forbids use of all the specified ciphers), an [`SSLError`](#ssl.SSLError "ssl.SSLError") will be raised.
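For illustration, here is a minimal sketch (the cipher string is just an example) that restricts a context to ECDHE AES-GCM suites and inspects the result:

```
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
ctx.set_ciphers('ECDHE+AESGCM')  # raises ssl.SSLError if no cipher matches
print([cipher['name'] for cipher in ctx.get_ciphers()])
```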
Note When connected, the [`SSLSocket.cipher()`](#ssl.SSLSocket.cipher "ssl.SSLSocket.cipher") method of SSL sockets will give the currently selected cipher.

OpenSSL 1.1.1 has TLS 1.3 cipher suites enabled by default. The suites cannot be disabled with [`set_ciphers()`](#ssl.SSLContext.set_ciphers "ssl.SSLContext.set_ciphers").

`SSLContext.set_alpn_protocols(protocols)` Specify which protocols the socket should advertise during the SSL/TLS handshake. It should be a list of ASCII strings, like `['http/1.1', 'spdy/2']`, ordered by preference. The selection of a protocol will happen during the handshake, and will play out according to [**RFC 7301**](https://tools.ietf.org/html/rfc7301.html). After a successful handshake, the [`SSLSocket.selected_alpn_protocol()`](#ssl.SSLSocket.selected_alpn_protocol "ssl.SSLSocket.selected_alpn_protocol") method will return the agreed-upon protocol.

This method will raise [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") if [`HAS_ALPN`](#ssl.HAS_ALPN "ssl.HAS_ALPN") is `False`.

OpenSSL 1.1.0 to 1.1.0e will abort the handshake and raise [`SSLError`](#ssl.SSLError "ssl.SSLError") when both sides support ALPN but cannot agree on a protocol. OpenSSL 1.1.0f+ behaves like 1.0.2: [`SSLSocket.selected_alpn_protocol()`](#ssl.SSLSocket.selected_alpn_protocol "ssl.SSLSocket.selected_alpn_protocol") returns `None`.

New in version 3.5.

`SSLContext.set_npn_protocols(protocols)` Specify which protocols the socket should advertise during the SSL/TLS handshake. It should be a list of strings, like `['http/1.1', 'spdy/2']`, ordered by preference. The selection of a protocol will happen during the handshake, and will play out according to the [Application Layer Protocol Negotiation](https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation). After a successful handshake, the [`SSLSocket.selected_npn_protocol()`](#ssl.SSLSocket.selected_npn_protocol "ssl.SSLSocket.selected_npn_protocol") method will return the agreed-upon protocol.

This method will raise [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") if [`HAS_NPN`](#ssl.HAS_NPN "ssl.HAS_NPN") is `False`.

New in version 3.3.

`SSLContext.sni_callback` Register a callback function that will be called after the TLS Client Hello handshake message has been received by the SSL/TLS server when the TLS client specifies a server name indication. The server name indication mechanism is specified in [**RFC 6066**](https://tools.ietf.org/html/rfc6066.html) section 3 - Server Name Indication.

Only one callback can be set per `SSLContext`. If *sni\_callback* is set to `None` then the callback is disabled. Calling this function a subsequent time will disable the previously registered callback.

The callback function will be called with three arguments: the first being the [`ssl.SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket"), the second a string that represents the server name that the client is intending to communicate with (or [`None`](constants#None "None") if the TLS Client Hello does not contain a server name), and the third the original [`SSLContext`](#ssl.SSLContext "ssl.SSLContext"). The server name argument is text. For an internationalized domain name, the server name is an IDN A-label (`"xn--pythn-mua.org"`).
A typical use of this callback is to change the [`ssl.SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket")’s [`SSLSocket.context`](#ssl.SSLSocket.context "ssl.SSLSocket.context") attribute to a new object of type [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") representing a certificate chain that matches the server name.

Due to the early negotiation phase of the TLS connection, only limited methods and attributes are usable, such as [`SSLSocket.selected_alpn_protocol()`](#ssl.SSLSocket.selected_alpn_protocol "ssl.SSLSocket.selected_alpn_protocol") and [`SSLSocket.context`](#ssl.SSLSocket.context "ssl.SSLSocket.context"). The [`SSLSocket.getpeercert()`](#ssl.SSLSocket.getpeercert "ssl.SSLSocket.getpeercert"), [`SSLSocket.cipher()`](#ssl.SSLSocket.cipher "ssl.SSLSocket.cipher") and [`SSLSocket.compression()`](#ssl.SSLSocket.compression "ssl.SSLSocket.compression") methods require that the TLS connection has progressed beyond the TLS Client Hello and therefore will not return meaningful values, nor can they be called safely.

The *sni\_callback* function must return `None` to allow the TLS negotiation to continue. If a TLS failure is required, a constant [`ALERT_DESCRIPTION_*`](#ssl.ALERT_DESCRIPTION_INTERNAL_ERROR "ssl.ALERT_DESCRIPTION_INTERNAL_ERROR") can be returned. Other return values will result in a TLS fatal error with [`ALERT_DESCRIPTION_INTERNAL_ERROR`](#ssl.ALERT_DESCRIPTION_INTERNAL_ERROR "ssl.ALERT_DESCRIPTION_INTERNAL_ERROR").

If an exception is raised from the *sni\_callback* function, the TLS connection will terminate with a fatal TLS alert message [`ALERT_DESCRIPTION_HANDSHAKE_FAILURE`](#ssl.ALERT_DESCRIPTION_HANDSHAKE_FAILURE "ssl.ALERT_DESCRIPTION_HANDSHAKE_FAILURE").

This method will raise [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") if the OpenSSL library had OPENSSL\_NO\_TLSEXT defined when it was built.

New in version 3.7.

`SSLContext.set_servername_callback(server_name_callback)` This is a legacy API retained for backwards compatibility. When possible, you should use [`sni_callback`](#ssl.SSLContext.sni_callback "ssl.SSLContext.sni_callback") instead. The given *server\_name\_callback* is similar to *sni\_callback*, except that when the server hostname is an IDN-encoded internationalized domain name, the *server\_name\_callback* receives a decoded U-label (`"pythön.org"`).

If there is a decoding error on the server name, the TLS connection will terminate with an [`ALERT_DESCRIPTION_INTERNAL_ERROR`](#ssl.ALERT_DESCRIPTION_INTERNAL_ERROR "ssl.ALERT_DESCRIPTION_INTERNAL_ERROR") fatal TLS alert message to the client.

New in version 3.4.

`SSLContext.load_dh_params(dhfile)` Load the key generation parameters for Diffie-Hellman (DH) key exchange. Using DH key exchange improves forward secrecy at the expense of computational resources (both on the server and on the client). The *dhfile* parameter should be the path to a file containing DH parameters in PEM format.

This setting doesn’t apply to client sockets. You can also use the [`OP_SINGLE_DH_USE`](#ssl.OP_SINGLE_DH_USE "ssl.OP_SINGLE_DH_USE") option to further improve security.

New in version 3.3.

`SSLContext.set_ecdh_curve(curve_name)` Set the curve name for Elliptic Curve-based Diffie-Hellman (ECDH) key exchange. ECDH is significantly faster than regular DH while arguably as secure. The *curve\_name* parameter should be a string describing a well-known elliptic curve, for example `prime256v1` for a widely supported curve.

This setting doesn’t apply to client sockets.
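As a brief sketch of the call described above (a server-side context, using the example curve from the text):

```
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ecdh_curve('prime256v1')  # widely supported curve; no effect on client sockets
```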
You can also use the [`OP_SINGLE_ECDH_USE`](#ssl.OP_SINGLE_ECDH_USE "ssl.OP_SINGLE_ECDH_USE") option to further improve security.

This method is not available if [`HAS_ECDH`](#ssl.HAS_ECDH "ssl.HAS_ECDH") is `False`.

New in version 3.3.

See also [SSL/TLS & Perfect Forward Secrecy](https://vincent.bernat.im/en/blog/2011-ssl-perfect-forward-secrecy) Vincent Bernat.

`SSLContext.wrap_socket(sock, server_side=False, do_handshake_on_connect=True, suppress_ragged_eofs=True, server_hostname=None, session=None)` Wrap an existing Python socket *sock* and return an instance of [`SSLContext.sslsocket_class`](#ssl.SSLContext.sslsocket_class "ssl.SSLContext.sslsocket_class") (default [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket")). The returned SSL socket is tied to the context, its settings and certificates. *sock* must be a [`SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM") socket; other socket types are unsupported.

The parameter `server_side` is a boolean which identifies whether server-side or client-side behavior is desired from this socket.

For client-side sockets, the context construction is lazy; if the underlying socket isn’t connected yet, the context construction will be performed after `connect()` is called on the socket. For server-side sockets, if the socket has no remote peer, it is assumed to be a listening socket, and the server-side SSL wrapping is automatically performed on client connections accepted via the `accept()` method. The method may raise [`SSLError`](#ssl.SSLError "ssl.SSLError").

On client connections, the optional parameter *server\_hostname* specifies the hostname of the service which we are connecting to. This allows a single server to host multiple SSL-based services with distinct certificates, quite similarly to HTTP virtual hosts. Specifying *server\_hostname* will raise a [`ValueError`](exceptions#ValueError "ValueError") if *server\_side* is true.

The parameter `do_handshake_on_connect` specifies whether to do the SSL handshake automatically after doing a `socket.connect()`, or whether the application program will call it explicitly, by invoking the [`SSLSocket.do_handshake()`](#ssl.SSLSocket.do_handshake "ssl.SSLSocket.do_handshake") method. Calling [`SSLSocket.do_handshake()`](#ssl.SSLSocket.do_handshake "ssl.SSLSocket.do_handshake") explicitly gives the program control over the blocking behavior of the socket I/O involved in the handshake.

The parameter `suppress_ragged_eofs` specifies how the `SSLSocket.recv()` method should signal unexpected EOF from the other end of the connection. If specified as [`True`](constants#True "True") (the default), it returns a normal EOF (an empty bytes object) in response to unexpected EOF errors raised from the underlying socket; if [`False`](constants#False "False"), it will raise the exceptions back to the caller.

For *session*, see [`session`](#ssl.SSLSocket.session "ssl.SSLSocket.session").

Changed in version 3.5: Always allow a server\_hostname to be passed, even if OpenSSL does not have SNI.

Changed in version 3.6: *session* argument was added.

Changed in version 3.7: The method returns an instance of [`SSLContext.sslsocket_class`](#ssl.SSLContext.sslsocket_class "ssl.SSLContext.sslsocket_class") instead of hard-coded [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket").

`SSLContext.sslsocket_class` The return type of [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket"), defaults to [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket").
The attribute can be overridden on an instance of the class in order to return a custom subclass of [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket").

New in version 3.7.

`SSLContext.wrap_bio(incoming, outgoing, server_side=False, server_hostname=None, session=None)` Wrap the BIO objects *incoming* and *outgoing* and return an instance of [`SSLContext.sslobject_class`](#ssl.SSLContext.sslobject_class "ssl.SSLContext.sslobject_class") (default [`SSLObject`](#ssl.SSLObject "ssl.SSLObject")). The SSL routines will read input data from the incoming BIO and write data to the outgoing BIO.

The *server\_side*, *server\_hostname* and *session* parameters have the same meaning as in [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket").

Changed in version 3.6: *session* argument was added.

Changed in version 3.7: The method returns an instance of [`SSLContext.sslobject_class`](#ssl.SSLContext.sslobject_class "ssl.SSLContext.sslobject_class") instead of hard-coded [`SSLObject`](#ssl.SSLObject "ssl.SSLObject").

`SSLContext.sslobject_class` The return type of [`SSLContext.wrap_bio()`](#ssl.SSLContext.wrap_bio "ssl.SSLContext.wrap_bio"), defaults to [`SSLObject`](#ssl.SSLObject "ssl.SSLObject"). The attribute can be overridden on an instance of the class in order to return a custom subclass of [`SSLObject`](#ssl.SSLObject "ssl.SSLObject").

New in version 3.7.

`SSLContext.session_stats()` Get statistics about the SSL sessions created or managed by this context. A dictionary is returned which maps the names of each [piece of information](https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_sess_number.html) to their numeric values. For example, here is the total number of hits and misses in the session cache since the context was created:

```
>>> stats = context.session_stats()
>>> stats['hits'], stats['misses']
(0, 0)
```

`SSLContext.check_hostname` Whether to match the peer cert’s hostname in [`SSLSocket.do_handshake()`](#ssl.SSLSocket.do_handshake "ssl.SSLSocket.do_handshake"). The context’s [`verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") must be set to [`CERT_OPTIONAL`](#ssl.CERT_OPTIONAL "ssl.CERT_OPTIONAL") or [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED"), and you must pass *server\_hostname* to [`wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket") in order to match the hostname. Enabling hostname checking automatically sets [`verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") from [`CERT_NONE`](#ssl.CERT_NONE "ssl.CERT_NONE") to [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED"). It cannot be set back to [`CERT_NONE`](#ssl.CERT_NONE "ssl.CERT_NONE") as long as hostname checking is enabled. The [`PROTOCOL_TLS_CLIENT`](#ssl.PROTOCOL_TLS_CLIENT "ssl.PROTOCOL_TLS_CLIENT") protocol enables hostname checking by default. With other protocols, hostname checking must be enabled explicitly.

Example:

```
import socket, ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True
context.load_default_certs()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssl_sock = context.wrap_socket(s, server_hostname='www.verisign.com')
ssl_sock.connect(('www.verisign.com', 443))
```

New in version 3.4.
Changed in version 3.7: [`verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") is now automatically changed to [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED") when hostname checking is enabled and [`verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") is [`CERT_NONE`](#ssl.CERT_NONE "ssl.CERT_NONE"). Previously the same operation would have failed with a [`ValueError`](exceptions#ValueError "ValueError").

Note This feature requires OpenSSL 0.9.8f or newer.

`SSLContext.keylog_filename` Write TLS keys to a keylog file, whenever key material is generated or received. The keylog file is designed for debugging purposes only. The file format is specified by NSS and used by many traffic analyzers such as Wireshark. The log file is opened in append-only mode. Writes are synchronized between threads, but not between processes.

New in version 3.8.

Note This feature requires OpenSSL 1.1.1 or newer.

`SSLContext.maximum_version` A [`TLSVersion`](#ssl.TLSVersion "ssl.TLSVersion") enum member representing the highest supported TLS version. The value defaults to [`TLSVersion.MAXIMUM_SUPPORTED`](#ssl.TLSVersion.MAXIMUM_SUPPORTED "ssl.TLSVersion.MAXIMUM_SUPPORTED"). The attribute is read-only for protocols other than [`PROTOCOL_TLS`](#ssl.PROTOCOL_TLS "ssl.PROTOCOL_TLS"), [`PROTOCOL_TLS_CLIENT`](#ssl.PROTOCOL_TLS_CLIENT "ssl.PROTOCOL_TLS_CLIENT"), and [`PROTOCOL_TLS_SERVER`](#ssl.PROTOCOL_TLS_SERVER "ssl.PROTOCOL_TLS_SERVER").

The attributes [`maximum_version`](#ssl.SSLContext.maximum_version "ssl.SSLContext.maximum_version"), [`minimum_version`](#ssl.SSLContext.minimum_version "ssl.SSLContext.minimum_version") and [`SSLContext.options`](#ssl.SSLContext.options "ssl.SSLContext.options") all affect the supported SSL and TLS versions of the context. The implementation does not prevent invalid combinations. For example, a context with [`OP_NO_TLSv1_2`](#ssl.OP_NO_TLSv1_2 "ssl.OP_NO_TLSv1_2") in [`options`](#ssl.SSLContext.options "ssl.SSLContext.options") and [`maximum_version`](#ssl.SSLContext.maximum_version "ssl.SSLContext.maximum_version") set to [`TLSVersion.TLSv1_2`](#ssl.TLSVersion.TLSv1_2 "ssl.TLSVersion.TLSv1_2") will not be able to establish a TLS 1.2 connection.

Note This attribute is not available unless the ssl module is compiled with OpenSSL 1.1.0g or newer.

New in version 3.7.

`SSLContext.minimum_version` Like [`SSLContext.maximum_version`](#ssl.SSLContext.maximum_version "ssl.SSLContext.maximum_version") except it is the lowest supported version or [`TLSVersion.MINIMUM_SUPPORTED`](#ssl.TLSVersion.MINIMUM_SUPPORTED "ssl.TLSVersion.MINIMUM_SUPPORTED").

Note This attribute is not available unless the ssl module is compiled with OpenSSL 1.1.0g or newer.

New in version 3.7.

`SSLContext.num_tickets` Control the number of TLS 1.3 session tickets of a [`PROTOCOL_TLS_SERVER`](#ssl.PROTOCOL_TLS_SERVER "ssl.PROTOCOL_TLS_SERVER") context. The setting has no impact on TLS 1.0 to 1.2 connections.

Note This attribute is not available unless the ssl module is compiled with OpenSSL 1.1.1 or newer.

New in version 3.8.

`SSLContext.options` An integer representing the set of SSL options enabled on this context. The default value is [`OP_ALL`](#ssl.OP_ALL "ssl.OP_ALL"), but you can specify other options such as [`OP_NO_SSLv2`](#ssl.OP_NO_SSLv2 "ssl.OP_NO_SSLv2") by ORing them together.

Note With versions of OpenSSL older than 0.9.8m, it is only possible to set options, not to clear them. Attempting to clear an option (by resetting the corresponding bits) will raise a [`ValueError`](exceptions#ValueError "ValueError").
Changed in version 3.6: [`SSLContext.options`](#ssl.SSLContext.options "ssl.SSLContext.options") returns [`Options`](#ssl.Options "ssl.Options") flags:

```
>>> ssl.create_default_context().options
<Options.OP_ALL|OP_NO_SSLv3|OP_NO_SSLv2|OP_NO_COMPRESSION: 2197947391>
```

`SSLContext.post_handshake_auth` Enable TLS 1.3 post-handshake client authentication. Post-handshake auth is disabled by default and a server can only request a TLS client certificate during the initial handshake. When enabled, a server may request a TLS client certificate at any time after the handshake.

When enabled on client-side sockets, the client signals the server that it supports post-handshake authentication.

When enabled on server-side sockets, [`SSLContext.verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") must be set to [`CERT_OPTIONAL`](#ssl.CERT_OPTIONAL "ssl.CERT_OPTIONAL") or [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED"), too. The actual client cert exchange is delayed until [`SSLSocket.verify_client_post_handshake()`](#ssl.SSLSocket.verify_client_post_handshake "ssl.SSLSocket.verify_client_post_handshake") is called and some I/O is performed.

Note Only available with OpenSSL 1.1.1 and TLS 1.3 enabled. Without TLS 1.3 support, the property value is `None` and can’t be modified.

New in version 3.8.

`SSLContext.protocol` The protocol version chosen when constructing the context. This attribute is read-only.

`SSLContext.hostname_checks_common_name` Whether [`check_hostname`](#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") falls back to verify the cert’s subject common name in the absence of a subject alternative name extension (default: true).

Note Only writeable with OpenSSL 1.1.0 or higher.

New in version 3.7.

Changed in version 3.9.3: The flag had no effect with OpenSSL before version 1.1.1k. Python 3.8.9, 3.9.3, and 3.10 include workarounds for previous versions.

`SSLContext.verify_flags` The flags for certificate verification operations. You can set flags like [`VERIFY_CRL_CHECK_LEAF`](#ssl.VERIFY_CRL_CHECK_LEAF "ssl.VERIFY_CRL_CHECK_LEAF") by ORing them together. By default OpenSSL neither requires nor verifies certificate revocation lists (CRLs). Available only with OpenSSL version 0.9.8+.

New in version 3.4.

Changed in version 3.6: [`SSLContext.verify_flags`](#ssl.SSLContext.verify_flags "ssl.SSLContext.verify_flags") returns [`VerifyFlags`](#ssl.VerifyFlags "ssl.VerifyFlags") flags:

```
>>> ssl.create_default_context().verify_flags
<VerifyFlags.VERIFY_X509_TRUSTED_FIRST: 32768>
```

`SSLContext.verify_mode` Whether to try to verify other peers’ certificates and how to behave if verification fails. This attribute must be one of [`CERT_NONE`](#ssl.CERT_NONE "ssl.CERT_NONE"), [`CERT_OPTIONAL`](#ssl.CERT_OPTIONAL "ssl.CERT_OPTIONAL") or [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED").

Changed in version 3.6: [`SSLContext.verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") returns [`VerifyMode`](#ssl.VerifyMode "ssl.VerifyMode") enum:

```
>>> ssl.create_default_context().verify_mode
<VerifyMode.CERT_REQUIRED: 2>
```

Certificates
------------

Certificates in general are part of a public-key / private-key system. In this system, each *principal* (which may be a machine, or a person, or an organization) is assigned a unique two-part encryption key. One part of the key is public, and is called the *public key*; the other part is kept secret, and is called the *private key*.
The two parts are related, in that if you encrypt a message with one of the parts, you can decrypt it with the other part, and **only** with the other part.

A certificate contains information about two principals. It contains the name of a *subject*, and the subject’s public key. It also contains a statement by a second principal, the *issuer*, that the subject is who they claim to be, and that this is indeed the subject’s public key. The issuer’s statement is signed with the issuer’s private key, which only the issuer knows. However, anyone can verify the issuer’s statement by finding the issuer’s public key, decrypting the statement with it, and comparing it to the other information in the certificate. The certificate also contains information about the time period over which it is valid. This is expressed as two fields, called “notBefore” and “notAfter”.

In the Python use of certificates, a client or server can use a certificate to prove who they are. The other side of a network connection can also be required to produce a certificate, and that certificate can be validated to the satisfaction of the client or server that requires such validation. The connection attempt can be set to raise an exception if the validation fails. Validation is done automatically, by the underlying OpenSSL framework; the application need not concern itself with its mechanics. But the application does usually need to provide sets of certificates to allow this process to take place.

Python uses files to contain certificates. They should be formatted as “PEM” (see [**RFC 1422**](https://tools.ietf.org/html/rfc1422.html)), which is a base-64 encoded form wrapped with a header line and a footer line:

```
-----BEGIN CERTIFICATE-----
... (certificate in base64 PEM encoding) ...
-----END CERTIFICATE-----
```

### Certificate chains

The Python files which contain certificates can contain a sequence of certificates, sometimes called a *certificate chain*. This chain should start with the specific certificate for the principal who “is” the client or server, and then the certificate for the issuer of that certificate, and then the certificate for the issuer of *that* certificate, and so on up the chain till you get to a certificate which is *self-signed*, that is, a certificate which has the same subject and issuer, sometimes called a *root certificate*. The certificates should just be concatenated together in the certificate file. For example, suppose we had a three-certificate chain, from our server certificate to the certificate of the certification authority that signed our server certificate, to the root certificate of the agency which issued the certification authority’s certificate:

```
-----BEGIN CERTIFICATE-----
... (certificate for your server)...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the certificate for the CA)...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the root certificate for the CA's issuer)...
-----END CERTIFICATE-----
```

### CA certificates

If you are going to require validation of the other side of the connection’s certificate, you need to provide a “CA certs” file, filled with the certificate chains for each issuer you are willing to trust. Again, this file just contains these chains concatenated together. For validation, Python will use the first chain it finds in the file which matches.
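As a hedged sketch (the bundle file name here is hypothetical), pointing a client context at such a CA certs file so that peer certificates are validated against the chains it contains:

```
import ssl

# PROTOCOL_TLS_CLIENT enables CERT_REQUIRED and check_hostname by default.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile="ca-bundle.pem")  # hypothetical file of concatenated chains
```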
The platform’s certificates file can be used by calling [`SSLContext.load_default_certs()`](#ssl.SSLContext.load_default_certs "ssl.SSLContext.load_default_certs"); this is done automatically with [`create_default_context()`](#ssl.create_default_context "ssl.create_default_context").

### Combined key and certificate

Often the private key is stored in the same file as the certificate; in this case, only the `certfile` parameter to [`SSLContext.load_cert_chain()`](#ssl.SSLContext.load_cert_chain "ssl.SSLContext.load_cert_chain") and [`wrap_socket()`](#ssl.wrap_socket "ssl.wrap_socket") needs to be passed. If the private key is stored with the certificate, it should come before the first certificate in the certificate chain:

```
-----BEGIN RSA PRIVATE KEY-----
... (private key in base64 encoding) ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... (certificate in base64 PEM encoding) ...
-----END CERTIFICATE-----
```

### Self-signed certificates

If you are going to create a server that provides SSL-encrypted connection services, you will need to acquire a certificate for that service. There are many ways of acquiring appropriate certificates, such as buying one from a certification authority. Another common practice is to generate a self-signed certificate. The simplest way to do this is with the OpenSSL package, using something like the following:

```
% openssl req -new -x509 -days 365 -nodes -out cert.pem -keyout cert.pem
Generating a 1024 bit RSA private key
.......++++++
.............................++++++
writing new private key to 'cert.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:MyState
Locality Name (eg, city) []:Some City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Organization, Inc.
Organizational Unit Name (eg, section) []:My Group
Common Name (eg, YOUR name) []:myserver.mygroup.myorganization.com
Email Address []:[email protected]
%
```

The disadvantage of a self-signed certificate is that it is its own root certificate, and no one else will have it in their cache of known (and trusted) root certificates.

Examples
--------

### Testing for SSL support

To test for the presence of SSL support in a Python installation, user code should use the following idiom:

```
try:
    import ssl
except ImportError:
    pass
else:
    ...
    # do something that requires SSL support
```

### Client-side operation

This example creates an SSL context with the recommended security settings for client sockets, including automatic certificate verification:

```
>>> context = ssl.create_default_context()
```

If you prefer to tune security settings yourself, you might create a context from scratch (but beware that you might not get the settings right):

```
>>> context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
>>> context.load_verify_locations("/etc/ssl/certs/ca-bundle.crt")
```

(this snippet assumes your operating system places a bundle of all CA certificates in `/etc/ssl/certs/ca-bundle.crt`; if not, you’ll get an error and have to adjust the location)

The [`PROTOCOL_TLS_CLIENT`](#ssl.PROTOCOL_TLS_CLIENT "ssl.PROTOCOL_TLS_CLIENT") protocol configures the context for cert validation and hostname verification. [`verify_mode`](#ssl.SSLContext.verify_mode "ssl.SSLContext.verify_mode") is set to [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED") and [`check_hostname`](#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") is set to `True`. All other protocols create SSL contexts with insecure defaults.

When you use the context to connect to a server, [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED") and [`check_hostname`](#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") validate the server certificate: it ensures that the server certificate was signed with one of the CA certificates, checks the signature for correctness, and verifies other properties like validity and identity of the hostname:

```
>>> conn = context.wrap_socket(socket.socket(socket.AF_INET),
...                            server_hostname="www.python.org")
>>> conn.connect(("www.python.org", 443))
```

You may then fetch the certificate:

```
>>> cert = conn.getpeercert()
```

Visual inspection shows that the certificate does identify the desired service (that is, the HTTPS host `www.python.org`):

```
>>> pprint.pprint(cert)
{'OCSP': ('http://ocsp.digicert.com',),
 'caIssuers': ('http://cacerts.digicert.com/DigiCertSHA2ExtendedValidationServerCA.crt',),
 'crlDistributionPoints': ('http://crl3.digicert.com/sha2-ev-server-g1.crl',
                           'http://crl4.digicert.com/sha2-ev-server-g1.crl'),
 'issuer': ((('countryName', 'US'),),
            (('organizationName', 'DigiCert Inc'),),
            (('organizationalUnitName', 'www.digicert.com'),),
            (('commonName', 'DigiCert SHA2 Extended Validation Server CA'),)),
 'notAfter': 'Sep 9 12:00:00 2016 GMT',
 'notBefore': 'Sep 5 00:00:00 2014 GMT',
 'serialNumber': '01BB6F00122B177F36CAB49CEA8B6B26',
 'subject': ((('businessCategory', 'Private Organization'),),
             (('1.3.6.1.4.1.311.60.2.1.3', 'US'),),
             (('1.3.6.1.4.1.311.60.2.1.2', 'Delaware'),),
             (('serialNumber', '3359300'),),
             (('streetAddress', '16 Allen Rd'),),
             (('postalCode', '03894-4801'),),
             (('countryName', 'US'),),
             (('stateOrProvinceName', 'NH'),),
             (('localityName', 'Wolfeboro'),),
             (('organizationName', 'Python Software Foundation'),),
             (('commonName', 'www.python.org'),)),
 'subjectAltName': (('DNS', 'www.python.org'),
                    ('DNS', 'python.org'),
                    ('DNS', 'pypi.org'),
                    ('DNS', 'docs.python.org'),
                    ('DNS', 'testpypi.org'),
                    ('DNS', 'bugs.python.org'),
                    ('DNS', 'wiki.python.org'),
                    ('DNS', 'hg.python.org'),
                    ('DNS', 'mail.python.org'),
                    ('DNS', 'packaging.python.org'),
                    ('DNS', 'pythonhosted.org'),
                    ('DNS', 'www.pythonhosted.org'),
                    ('DNS', 'test.pythonhosted.org'),
                    ('DNS', 'us.pycon.org'),
                    ('DNS', 'id.python.org')),
 'version': 3}
```

Now that the SSL channel is established and the certificate verified, you can proceed to talk with the server:
```
>>> conn.sendall(b"HEAD / HTTP/1.0\r\nHost: linuxfr.org\r\n\r\n")
>>> pprint.pprint(conn.recv(1024).split(b"\r\n"))
[b'HTTP/1.1 200 OK',
 b'Date: Sat, 18 Oct 2014 18:27:20 GMT',
 b'Server: nginx',
 b'Content-Type: text/html; charset=utf-8',
 b'X-Frame-Options: SAMEORIGIN',
 b'Content-Length: 45679',
 b'Accept-Ranges: bytes',
 b'Via: 1.1 varnish',
 b'Age: 2188',
 b'X-Served-By: cache-lcy1134-LCY',
 b'X-Cache: HIT',
 b'X-Cache-Hits: 11',
 b'Vary: Cookie',
 b'Strict-Transport-Security: max-age=63072000; includeSubDomains',
 b'Connection: close',
 b'',
 b'']
```

See the discussion of [Security considerations](#ssl-security) below.

### Server-side operation

For server operation, typically you’ll need to have a server certificate and private key, each in a file. You’ll first create a context holding the key and the certificate, so that clients can check your authenticity. Then you’ll open a socket, bind it to a port, call `listen()` on it, and start waiting for clients to connect:

```
import socket, ssl

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.load_cert_chain(certfile="mycertfile", keyfile="mykeyfile")

bindsocket = socket.socket()
bindsocket.bind(('myaddr.example.com', 10023))
bindsocket.listen(5)
```

When a client connects, you’ll call `accept()` on the socket to get the new socket from the other end, and use the context’s [`SSLContext.wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket") method to create a server-side SSL socket for the connection:

```
while True:
    newsocket, fromaddr = bindsocket.accept()
    connstream = context.wrap_socket(newsocket, server_side=True)
    try:
        deal_with_client(connstream)
    finally:
        connstream.shutdown(socket.SHUT_RDWR)
        connstream.close()
```

Then you’ll read data from the `connstream` and do something with it till you are finished with the client (or the client is finished with you):

```
def deal_with_client(connstream):
    data = connstream.recv(1024)
    # empty data means the client is finished with us
    while data:
        if not do_something(connstream, data):
            # we'll assume do_something returns False
            # when we're finished with client
            break
        data = connstream.recv(1024)
    # finished with client
```

And go back to listening for new client connections (of course, a real server would probably handle each client connection in a separate thread, or put the sockets in [non-blocking mode](#ssl-nonblocking) and use an event loop).

Notes on non-blocking sockets
-----------------------------

SSL sockets behave slightly differently from regular sockets in non-blocking mode. When working with non-blocking sockets, there are thus several things you need to be aware of:

* Most [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") methods will raise either [`SSLWantWriteError`](#ssl.SSLWantWriteError "ssl.SSLWantWriteError") or [`SSLWantReadError`](#ssl.SSLWantReadError "ssl.SSLWantReadError") instead of [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") if an I/O operation would block. [`SSLWantReadError`](#ssl.SSLWantReadError "ssl.SSLWantReadError") will be raised if a read operation on the underlying socket is necessary, and [`SSLWantWriteError`](#ssl.SSLWantWriteError "ssl.SSLWantWriteError") for a write operation on the underlying socket. Note that attempts to *write* to an SSL socket may require *reading* from the underlying socket first, and attempts to *read* from the SSL socket may require a prior *write* to the underlying socket.
Changed in version 3.5: In earlier Python versions, the `SSLSocket.send()` method returned zero instead of raising [`SSLWantWriteError`](#ssl.SSLWantWriteError "ssl.SSLWantWriteError") or [`SSLWantReadError`](#ssl.SSLWantReadError "ssl.SSLWantReadError").

* Calling [`select()`](select#select.select "select.select") tells you that the OS-level socket can be read from (or written to), but it does not imply that there is sufficient data at the upper SSL layer. For example, only part of an SSL frame might have arrived. Therefore, you must be ready to handle `SSLSocket.recv()` and `SSLSocket.send()` failures, and retry after another call to [`select()`](select#select.select "select.select").
* Conversely, since the SSL layer has its own framing, an SSL socket may still have data available for reading without [`select()`](select#select.select "select.select") being aware of it. Therefore, you should first call `SSLSocket.recv()` to drain any potentially available data, and then only block on a [`select()`](select#select.select "select.select") call if still necessary. (of course, similar provisions apply when using other primitives such as [`poll()`](select#select.poll "select.poll"), or those in the [`selectors`](selectors#module-selectors "selectors: High-level I/O multiplexing.") module)
* The SSL handshake itself will be non-blocking: the [`SSLSocket.do_handshake()`](#ssl.SSLSocket.do_handshake "ssl.SSLSocket.do_handshake") method has to be retried until it returns successfully. Here is a synopsis using [`select()`](select#select.select "select.select") to wait for the socket’s readiness:

```
while True:
    try:
        sock.do_handshake()
        break
    except ssl.SSLWantReadError:
        select.select([sock], [], [])
    except ssl.SSLWantWriteError:
        select.select([], [sock], [])
```

See also The [`asyncio`](asyncio#module-asyncio "asyncio: Asynchronous I/O.") module supports [non-blocking SSL sockets](#ssl-nonblocking) and provides a higher level API. It polls for events using the [`selectors`](selectors#module-selectors "selectors: High-level I/O multiplexing.") module and handles [`SSLWantWriteError`](#ssl.SSLWantWriteError "ssl.SSLWantWriteError"), [`SSLWantReadError`](#ssl.SSLWantReadError "ssl.SSLWantReadError") and [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError") exceptions. It runs the SSL handshake asynchronously as well.

Memory BIO Support
------------------

New in version 3.5.

Ever since the SSL module was introduced in Python 2.6, the [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") class has provided two related but distinct areas of functionality:

* SSL protocol handling
* Network IO

The network IO API is identical to that provided by [`socket.socket`](socket#socket.socket "socket.socket"), from which [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") also inherits. This allows an SSL socket to be used as a drop-in replacement for a regular socket, making it very easy to add SSL support to an existing application.

Combining SSL protocol handling and network IO usually works well, but there are some cases where it doesn’t. An example is async IO frameworks that want to use a different IO multiplexing model than the “select/poll on a file descriptor” (readiness based) model that is assumed by [`socket.socket`](socket#socket.socket "socket.socket") and by the internal OpenSSL socket IO routines. This is mostly relevant for platforms like Windows where this model is not efficient.
For this purpose, a reduced-scope variant of [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") called [`SSLObject`](#ssl.SSLObject "ssl.SSLObject") is provided.

`class ssl.SSLObject` A reduced-scope variant of [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") representing an SSL protocol instance that does not contain any network IO methods. This class is typically used by framework authors that want to implement asynchronous IO for SSL through memory buffers.

This class implements an interface on top of a low-level SSL object as implemented by OpenSSL. This object captures the state of an SSL connection but does not provide any network IO itself. IO needs to be performed through separate “BIO” objects which are OpenSSL’s IO abstraction layer.

This class has no public constructor. An [`SSLObject`](#ssl.SSLObject "ssl.SSLObject") instance must be created using the [`wrap_bio()`](#ssl.SSLContext.wrap_bio "ssl.SSLContext.wrap_bio") method. This method will create the [`SSLObject`](#ssl.SSLObject "ssl.SSLObject") instance and bind it to a pair of BIOs. The *incoming* BIO is used to pass data from Python to the SSL protocol instance, while the *outgoing* BIO is used to pass data the other way around.

The following methods and attributes are available:

* [`context`](#ssl.SSLSocket.context "ssl.SSLSocket.context")
* [`server_side`](#ssl.SSLSocket.server_side "ssl.SSLSocket.server_side")
* [`server_hostname`](#ssl.SSLSocket.server_hostname "ssl.SSLSocket.server_hostname")
* [`session`](#ssl.SSLSocket.session "ssl.SSLSocket.session")
* [`session_reused`](#ssl.SSLSocket.session_reused "ssl.SSLSocket.session_reused")
* [`read()`](#ssl.SSLSocket.read "ssl.SSLSocket.read")
* [`write()`](#ssl.SSLSocket.write "ssl.SSLSocket.write")
* [`getpeercert()`](#ssl.SSLSocket.getpeercert "ssl.SSLSocket.getpeercert")
* [`selected_alpn_protocol()`](#ssl.SSLSocket.selected_alpn_protocol "ssl.SSLSocket.selected_alpn_protocol")
* [`selected_npn_protocol()`](#ssl.SSLSocket.selected_npn_protocol "ssl.SSLSocket.selected_npn_protocol")
* [`cipher()`](#ssl.SSLSocket.cipher "ssl.SSLSocket.cipher")
* [`shared_ciphers()`](#ssl.SSLSocket.shared_ciphers "ssl.SSLSocket.shared_ciphers")
* [`compression()`](#ssl.SSLSocket.compression "ssl.SSLSocket.compression")
* [`pending()`](#ssl.SSLSocket.pending "ssl.SSLSocket.pending")
* [`do_handshake()`](#ssl.SSLSocket.do_handshake "ssl.SSLSocket.do_handshake")
* [`verify_client_post_handshake()`](#ssl.SSLSocket.verify_client_post_handshake "ssl.SSLSocket.verify_client_post_handshake")
* [`unwrap()`](#ssl.SSLSocket.unwrap "ssl.SSLSocket.unwrap")
* [`get_channel_binding()`](#ssl.SSLSocket.get_channel_binding "ssl.SSLSocket.get_channel_binding")
* [`version()`](#ssl.SSLSocket.version "ssl.SSLSocket.version")

When compared to [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket"), this object lacks the following features:

* Any form of network IO; `recv()` and `send()` read and write only to the underlying [`MemoryBIO`](#ssl.MemoryBIO "ssl.MemoryBIO") buffers.
* There is no *do\_handshake\_on\_connect* machinery. You must always manually call [`do_handshake()`](#ssl.SSLSocket.do_handshake "ssl.SSLSocket.do_handshake") to start the handshake.
* There is no handling of *suppress\_ragged\_eofs*. All end-of-file conditions that are in violation of the protocol are reported via the [`SSLEOFError`](#ssl.SSLEOFError "ssl.SSLEOFError") exception.
* The [`unwrap()`](#ssl.SSLSocket.unwrap "ssl.SSLSocket.unwrap") call does not return anything, unlike for an SSL socket where it returns the underlying socket.
* The *server\_name\_callback* callback passed to [`SSLContext.set_servername_callback()`](#ssl.SSLContext.set_servername_callback "ssl.SSLContext.set_servername_callback") will get an [`SSLObject`](#ssl.SSLObject "ssl.SSLObject") instance instead of an [`SSLSocket`](#ssl.SSLSocket "ssl.SSLSocket") instance as its first parameter.

Some notes related to the use of [`SSLObject`](#ssl.SSLObject "ssl.SSLObject"):

* All IO on an [`SSLObject`](#ssl.SSLObject "ssl.SSLObject") is [non-blocking](#ssl-nonblocking). This means that for example [`read()`](#ssl.SSLSocket.read "ssl.SSLSocket.read") will raise an [`SSLWantReadError`](#ssl.SSLWantReadError "ssl.SSLWantReadError") if it needs more data than the incoming BIO has available.
* There is no module-level `wrap_bio()` call like there is for [`wrap_socket()`](#ssl.SSLContext.wrap_socket "ssl.SSLContext.wrap_socket"). An [`SSLObject`](#ssl.SSLObject "ssl.SSLObject") is always created via an [`SSLContext`](#ssl.SSLContext "ssl.SSLContext").

Changed in version 3.7: [`SSLObject`](#ssl.SSLObject "ssl.SSLObject") instances must be created with [`wrap_bio()`](#ssl.SSLContext.wrap_bio "ssl.SSLContext.wrap_bio"). In earlier versions, it was possible to create instances directly. This was never documented or officially supported.

An SSLObject communicates with the outside world using memory buffers. The class [`MemoryBIO`](#ssl.MemoryBIO "ssl.MemoryBIO") provides a memory buffer that can be used for this purpose. It wraps an OpenSSL memory BIO (Basic IO) object:

`class ssl.MemoryBIO` A memory buffer that can be used to pass data between Python and an SSL protocol instance.

`pending` Return the number of bytes currently in the memory buffer.

`eof` A boolean indicating whether the memory BIO is currently at the end-of-file position.

`read(n=-1)` Read up to *n* bytes from the memory buffer. If *n* is not specified or negative, all bytes are returned.

`write(buf)` Write the bytes from *buf* to the memory BIO. The *buf* argument must be an object supporting the buffer protocol. The return value is the number of bytes written, which is always equal to the length of *buf*.

`write_eof()` Write an EOF marker to the memory BIO. After this method has been called, it is illegal to call [`write()`](#ssl.MemoryBIO.write "ssl.MemoryBIO.write"). The attribute [`eof`](#ssl.MemoryBIO.eof "ssl.MemoryBIO.eof") will become true after all data currently in the buffer has been read.

SSL session
-----------

New in version 3.6.

`class ssl.SSLSession` Session object used by [`session`](#ssl.SSLSocket.session "ssl.SSLSocket.session").

`id` `time` `timeout` `ticket_lifetime_hint` `has_ticket`

Security considerations
-----------------------

### Best defaults

For **client use**, if you don’t have any special requirements for your security policy, it is highly recommended that you use the [`create_default_context()`](#ssl.create_default_context "ssl.create_default_context") function to create your SSL context. It will load the system’s trusted CA certificates, enable certificate validation and hostname checking, and try to choose reasonably secure protocol and cipher settings.
For example, here is how you would use the [`smtplib.SMTP`](smtplib#smtplib.SMTP "smtplib.SMTP") class to create a trusted, secure connection to an SMTP server:

```
>>> import ssl, smtplib
>>> smtp = smtplib.SMTP("mail.python.org", port=587)
>>> context = ssl.create_default_context()
>>> smtp.starttls(context=context)
(220, b'2.0.0 Ready to start TLS')
```

If a client certificate is needed for the connection, it can be added with [`SSLContext.load_cert_chain()`](#ssl.SSLContext.load_cert_chain "ssl.SSLContext.load_cert_chain").

By contrast, if you create the SSL context by calling the [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") constructor yourself, it will not have certificate validation or hostname checking enabled by default. If you do so, please read the paragraphs below to achieve a good security level.

### Manual settings

#### Verifying certificates

When calling the [`SSLContext`](#ssl.SSLContext "ssl.SSLContext") constructor directly, [`CERT_NONE`](#ssl.CERT_NONE "ssl.CERT_NONE") is the default. Since it does not authenticate the other peer, it can be insecure, especially in client mode where most of the time you would like to ensure the authenticity of the server you’re talking to. Therefore, when in client mode, it is highly recommended to use [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED"). However, it is in itself not sufficient; you also have to check that the server certificate, which can be obtained by calling [`SSLSocket.getpeercert()`](#ssl.SSLSocket.getpeercert "ssl.SSLSocket.getpeercert"), matches the desired service. For many protocols and applications, the service can be identified by the hostname; in this case, the [`match_hostname()`](#ssl.match_hostname "ssl.match_hostname") function can be used. This common check is automatically performed when [`SSLContext.check_hostname`](#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") is enabled.

Changed in version 3.7: Hostname matching is now performed by OpenSSL. Python no longer uses [`match_hostname()`](#ssl.match_hostname "ssl.match_hostname").

In server mode, if you want to authenticate your clients using the SSL layer (rather than using a higher-level authentication mechanism), you’ll also have to specify [`CERT_REQUIRED`](#ssl.CERT_REQUIRED "ssl.CERT_REQUIRED") and similarly check the client certificate.

#### Protocol versions

SSL versions 2 and 3 are considered insecure and are therefore dangerous to use. If you want maximum compatibility between clients and servers, it is recommended to use [`PROTOCOL_TLS_CLIENT`](#ssl.PROTOCOL_TLS_CLIENT "ssl.PROTOCOL_TLS_CLIENT") or [`PROTOCOL_TLS_SERVER`](#ssl.PROTOCOL_TLS_SERVER "ssl.PROTOCOL_TLS_SERVER") as the protocol version. SSLv2 and SSLv3 are disabled by default.

```
>>> client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
>>> client_context.options |= ssl.OP_NO_TLSv1
>>> client_context.options |= ssl.OP_NO_TLSv1_1
```

The SSL context created above will only allow TLSv1.2 and later (if supported by your system) connections to a server. [`PROTOCOL_TLS_CLIENT`](#ssl.PROTOCOL_TLS_CLIENT "ssl.PROTOCOL_TLS_CLIENT") implies certificate validation and hostname checks by default. You have to load certificates into the context.

#### Cipher selection

If you have advanced security requirements, fine-tuning of the ciphers enabled when negotiating an SSL session is possible through the [`SSLContext.set_ciphers()`](#ssl.SSLContext.set_ciphers "ssl.SSLContext.set_ciphers") method.
Starting from Python 3.2.3, the ssl module disables certain weak ciphers by default, but you may want to further restrict the cipher choice. Be sure to read OpenSSL’s documentation about the [cipher list format](https://www.openssl.org/docs/manmaster/man1/ciphers.html#CIPHER-LIST-FORMAT). If you want to check which ciphers are enabled by a given cipher list, use [`SSLContext.get_ciphers()`](#ssl.SSLContext.get_ciphers "ssl.SSLContext.get_ciphers") or the `openssl ciphers` command on your system.

### Multi-processing

If using this module as part of a multi-processed application (using, for example, the [`multiprocessing`](multiprocessing#module-multiprocessing "multiprocessing: Process-based parallelism.") or [`concurrent.futures`](concurrent.futures#module-concurrent.futures "concurrent.futures: Execute computations concurrently using threads or processes.") modules), be aware that OpenSSL’s internal random number generator does not properly handle forked processes. Applications must change the PRNG state of the parent process if they use any SSL feature with [`os.fork()`](os#os.fork "os.fork"). Any successful call of [`RAND_add()`](#ssl.RAND_add "ssl.RAND_add"), [`RAND_bytes()`](#ssl.RAND_bytes "ssl.RAND_bytes") or [`RAND_pseudo_bytes()`](#ssl.RAND_pseudo_bytes "ssl.RAND_pseudo_bytes") is sufficient.

TLS 1.3
-------

New in version 3.7.

Python has provisional and experimental support for TLS 1.3 with OpenSSL 1.1.1. The new protocol behaves slightly differently than previous versions of TLS/SSL. Some new TLS 1.3 features are not yet available.

* TLS 1.3 uses a disjunct set of cipher suites. All AES-GCM and ChaCha20 cipher suites are enabled by default. The method [`SSLContext.set_ciphers()`](#ssl.SSLContext.set_ciphers "ssl.SSLContext.set_ciphers") cannot enable or disable any TLS 1.3 ciphers yet, but [`SSLContext.get_ciphers()`](#ssl.SSLContext.get_ciphers "ssl.SSLContext.get_ciphers") returns them.
* Session tickets are no longer sent as part of the initial handshake and are handled differently. [`SSLSocket.session`](#ssl.SSLSocket.session "ssl.SSLSocket.session") and [`SSLSession`](#ssl.SSLSession "ssl.SSLSession") are not compatible with TLS 1.3.
* Client-side certificates are also no longer verified during the initial handshake. A server can request a certificate at any time. Clients process certificate requests while they send or receive application data from the server.
* TLS 1.3 features like early data, deferred TLS client cert request, signature algorithm configuration, and rekeying are not supported yet.

LibreSSL support
----------------

LibreSSL is a fork of OpenSSL 1.0.1. The ssl module has limited support for LibreSSL. Some features are not available when the ssl module is compiled with LibreSSL.

* LibreSSL >= 2.6.1 no longer supports NPN. The methods [`SSLContext.set_npn_protocols()`](#ssl.SSLContext.set_npn_protocols "ssl.SSLContext.set_npn_protocols") and [`SSLSocket.selected_npn_protocol()`](#ssl.SSLSocket.selected_npn_protocol "ssl.SSLSocket.selected_npn_protocol") are not available.
* [`SSLContext.set_default_verify_paths()`](#ssl.SSLContext.set_default_verify_paths "ssl.SSLContext.set_default_verify_paths") ignores the env vars `SSL_CERT_FILE` and `SSL_CERT_PATH` although [`get_default_verify_paths()`](#ssl.get_default_verify_paths "ssl.get_default_verify_paths") still reports them.
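Because feature availability varies across OpenSSL and LibreSSL builds, here is a hedged sketch of guarding optional functionality at runtime using the module's `HAS_*` flags:

```
import ssl

# NPN is unavailable with LibreSSL >= 2.6.1, so guard its use at runtime.
if ssl.HAS_NPN:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
    ctx.set_npn_protocols(['http/1.1'])
else:
    print("NPN not supported by this build:", ssl.OPENSSL_VERSION)
```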
See also `Class` [`socket.socket`](socket#socket.socket "socket.socket") Documentation of the underlying [`socket`](socket#module-socket "socket: Low-level networking interface.") class [SSL/TLS Strong Encryption: An Introduction](https://httpd.apache.org/docs/trunk/en/ssl/ssl_intro.html) Intro from the Apache HTTP Server documentation [**RFC 1422: Privacy Enhancement for Internet Electronic Mail: Part II: Certificate-Based Key Management**](https://tools.ietf.org/html/rfc1422.html) Steve Kent [**RFC 4086: Randomness Requirements for Security**](https://tools.ietf.org/html/rfc4086.html) D. Eastlake 3rd, J. Schiller, S. Crocker [**RFC 5280: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile**](https://tools.ietf.org/html/rfc5280.html) D. Cooper [**RFC 5246: The Transport Layer Security (TLS) Protocol Version 1.2**](https://tools.ietf.org/html/rfc5246.html) T. Dierks et al. [**RFC 6066: Transport Layer Security (TLS) Extensions**](https://tools.ietf.org/html/rfc6066.html) D. Eastlake 3rd [IANA TLS: Transport Layer Security (TLS) Parameters](https://www.iana.org/assignments/tls-parameters/tls-parameters.xml) IANA [**RFC 7525: Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS)**](https://tools.ietf.org/html/rfc7525.html) IETF [Mozilla’s Server Side TLS recommendations](https://wiki.mozilla.org/Security/Server_Side_TLS) Mozilla
python copyreg — Register pickle support functions copyreg — Register pickle support functions =========================================== **Source code:** [Lib/copyreg.py](https://github.com/python/cpython/tree/3.9/Lib/copyreg.py) The [`copyreg`](#module-copyreg "copyreg: Register pickle support functions.") module offers a way to define functions used while pickling specific objects. The [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") and [`copy`](copy#module-copy "copy: Shallow and deep copy operations.") modules use those functions when pickling/copying those objects. The module provides configuration information about object constructors which are not classes. Such constructors may be factory functions or class instances. `copyreg.constructor(object)` Declares *object* to be a valid constructor. If *object* is not callable (and hence not valid as a constructor), raises [`TypeError`](exceptions#TypeError "TypeError"). `copyreg.pickle(type, function, constructor=None)` Declares that *function* should be used as a “reduction” function for objects of type *type*. *function* should return either a string or a tuple containing two or three elements. The optional *constructor* parameter, if provided, is a callable object which can be used to reconstruct the object when called with the tuple of arguments returned by *function* at pickling time. A [`TypeError`](exceptions#TypeError "TypeError") is raised if the *constructor* is not callable. See the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") module for more details on the interface expected of *function* and *constructor*. Note that the [`dispatch_table`](pickle#pickle.Pickler.dispatch_table "pickle.Pickler.dispatch_table") attribute of a pickler object or subclass of [`pickle.Pickler`](pickle#pickle.Pickler "pickle.Pickler") can also be used for declaring reduction functions. Example ------- The example below shows how to register a pickle function and how it will be used:

```
>>> import copyreg, copy, pickle
>>> class C:
...     def __init__(self, a):
...         self.a = a
...
>>> def pickle_c(c):
...     print("pickling a C instance...")
...     return C, (c.a,)
...
>>> copyreg.pickle(C, pickle_c)
>>> c = C(1)
>>> d = copy.copy(c)
pickling a C instance...
>>> p = pickle.dumps(c)
pickling a C instance...
```

python Streams Streams ======= **Source code:** [Lib/asyncio/streams.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/streams.py) Streams are high-level async/await-ready primitives to work with network connections. Streams allow sending and receiving data without using callbacks or low-level protocols and transports. Here is an example of a TCP echo client written using asyncio streams:

```
import asyncio

async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection(
        '127.0.0.1', 8888)

    print(f'Send: {message!r}')
    writer.write(message.encode())
    await writer.drain()

    data = await reader.read(100)
    print(f'Received: {data.decode()!r}')

    print('Close the connection')
    writer.close()
    await writer.wait_closed()

asyncio.run(tcp_echo_client('Hello World!'))
```

See also the [Examples](#examples) section below.
#### Stream Functions The following top-level asyncio functions can be used to create and work with streams: `coroutine asyncio.open_connection(host=None, port=None, *, loop=None, limit=None, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None, happy_eyeballs_delay=None, interleave=None)` Establish a network connection and return a pair of `(reader, writer)` objects. The returned *reader* and *writer* objects are instances of [`StreamReader`](#asyncio.StreamReader "asyncio.StreamReader") and [`StreamWriter`](#asyncio.StreamWriter "asyncio.StreamWriter") classes. The *loop* argument is optional and can always be determined automatically when this function is awaited from a coroutine. *limit* determines the buffer size limit used by the returned [`StreamReader`](#asyncio.StreamReader "asyncio.StreamReader") instance. By default the *limit* is set to 64 KiB. The rest of the arguments are passed directly to [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection"). New in version 3.7: The *ssl\_handshake\_timeout* parameter. New in version 3.8: Added *happy\_eyeballs\_delay* and *interleave* parameters. `coroutine asyncio.start_server(client_connected_cb, host=None, port=None, *, loop=None, limit=None, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, ssl_handshake_timeout=None, start_serving=True)` Start a socket server. The *client\_connected\_cb* callback is called whenever a new client connection is established. It receives a `(reader, writer)` pair as two arguments, instances of the [`StreamReader`](#asyncio.StreamReader "asyncio.StreamReader") and [`StreamWriter`](#asyncio.StreamWriter "asyncio.StreamWriter") classes. *client\_connected\_cb* can be a plain callable or a [coroutine function](asyncio-task#coroutine); if it is a coroutine function, it will be automatically scheduled as a [`Task`](asyncio-task#asyncio.Task "asyncio.Task"). The *loop* argument is optional and can always be determined automatically when this method is awaited from a coroutine. *limit* determines the buffer size limit used by the returned [`StreamReader`](#asyncio.StreamReader "asyncio.StreamReader") instance. By default the *limit* is set to 64 KiB. The rest of the arguments are passed directly to [`loop.create_server()`](asyncio-eventloop#asyncio.loop.create_server "asyncio.loop.create_server"). New in version 3.7: The *ssl\_handshake\_timeout* and *start\_serving* parameters. #### Unix Sockets `coroutine asyncio.open_unix_connection(path=None, *, loop=None, limit=None, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None)` Establish a Unix socket connection and return a pair of `(reader, writer)`. Similar to [`open_connection()`](#asyncio.open_connection "asyncio.open_connection") but operates on Unix sockets. See also the documentation of [`loop.create_unix_connection()`](asyncio-eventloop#asyncio.loop.create_unix_connection "asyncio.loop.create_unix_connection"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.7: The *ssl\_handshake\_timeout* parameter. 
Changed in version 3.7: The *path* parameter can now be a [path-like object](../glossary#term-path-like-object) `coroutine asyncio.start_unix_server(client_connected_cb, path=None, *, loop=None, limit=None, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, start_serving=True)` Start a Unix socket server. Similar to [`start_server()`](#asyncio.start_server "asyncio.start_server") but works with Unix sockets. See also the documentation of [`loop.create_unix_server()`](asyncio-eventloop#asyncio.loop.create_unix_server "asyncio.loop.create_unix_server"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.7: The *ssl\_handshake\_timeout* and *start\_serving* parameters. Changed in version 3.7: The *path* parameter can now be a [path-like object](../glossary#term-path-like-object). StreamReader ------------ `class asyncio.StreamReader` Represents a reader object that provides APIs to read data from the IO stream. It is not recommended to instantiate *StreamReader* objects directly; use [`open_connection()`](#asyncio.open_connection "asyncio.open_connection") and [`start_server()`](#asyncio.start_server "asyncio.start_server") instead. `coroutine read(n=-1)` Read up to *n* bytes. If *n* is not provided, or set to `-1`, read until EOF and return all read bytes. If EOF was received and the internal buffer is empty, return an empty `bytes` object. `coroutine readline()` Read one line, where “line” is a sequence of bytes ending with `\n`. If EOF is received and `\n` was not found, the method returns partially read data. If EOF is received and the internal buffer is empty, return an empty `bytes` object. `coroutine readexactly(n)` Read exactly *n* bytes. Raise an [`IncompleteReadError`](asyncio-exceptions#asyncio.IncompleteReadError "asyncio.IncompleteReadError") if EOF is reached before *n* can be read. Use the [`IncompleteReadError.partial`](asyncio-exceptions#asyncio.IncompleteReadError.partial "asyncio.IncompleteReadError.partial") attribute to get the partially read data. `coroutine readuntil(separator=b'\n')` Read data from the stream until *separator* is found. On success, the data and separator will be removed from the internal buffer (consumed). Returned data will include the separator at the end. If the amount of data read exceeds the configured stream limit, a [`LimitOverrunError`](asyncio-exceptions#asyncio.LimitOverrunError "asyncio.LimitOverrunError") exception is raised, and the data is left in the internal buffer and can be read again. If EOF is reached before the complete separator is found, an [`IncompleteReadError`](asyncio-exceptions#asyncio.IncompleteReadError "asyncio.IncompleteReadError") exception is raised, and the internal buffer is reset. The [`IncompleteReadError.partial`](asyncio-exceptions#asyncio.IncompleteReadError.partial "asyncio.IncompleteReadError.partial") attribute may contain a portion of the separator. New in version 3.5.2. `at_eof()` Return `True` if the buffer is empty and `feed_eof()` was called. StreamWriter ------------ `class asyncio.StreamWriter` Represents a writer object that provides APIs to write data to the IO stream. It is not recommended to instantiate *StreamWriter* objects directly; use [`open_connection()`](#asyncio.open_connection "asyncio.open_connection") and [`start_server()`](#asyncio.start_server "asyncio.start_server") instead. `write(data)` The method attempts to write the *data* to the underlying socket immediately. 
If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the `drain()` method:

```
stream.write(data)
await stream.drain()
```

`writelines(data)` The method writes a list (or any iterable) of bytes to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the `drain()` method:

```
stream.writelines(lines)
await stream.drain()
```

`close()` The method closes the stream and the underlying socket. The method should be used along with the `wait_closed()` method:

```
stream.close()
await stream.wait_closed()
```

`can_write_eof()` Return `True` if the underlying transport supports the [`write_eof()`](#asyncio.StreamWriter.write_eof "asyncio.StreamWriter.write_eof") method, `False` otherwise. `write_eof()` Close the write end of the stream after the buffered write data is flushed. `transport` Return the underlying asyncio transport. `get_extra_info(name, default=None)` Access optional transport information; see [`BaseTransport.get_extra_info()`](asyncio-protocol#asyncio.BaseTransport.get_extra_info "asyncio.BaseTransport.get_extra_info") for details. `coroutine drain()` Wait until it is appropriate to resume writing to the stream. Example:

```
writer.write(data)
await writer.drain()
```

This is a flow control method that interacts with the underlying IO write buffer. When the size of the buffer reaches the high watermark, *drain()* blocks until the size of the buffer is drained down to the low watermark and writing can be resumed. When there is nothing to wait for, [`drain()`](#asyncio.StreamWriter.drain "asyncio.StreamWriter.drain") returns immediately. `is_closing()` Return `True` if the stream is closed or in the process of being closed. New in version 3.7. `coroutine wait_closed()` Wait until the stream is closed. Should be called after [`close()`](#asyncio.StreamWriter.close "asyncio.StreamWriter.close") to wait until the underlying connection is closed. New in version 3.7. Examples -------- ### TCP echo client using streams TCP echo client using the [`asyncio.open_connection()`](#asyncio.open_connection "asyncio.open_connection") function:

```
import asyncio

async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection(
        '127.0.0.1', 8888)

    print(f'Send: {message!r}')
    writer.write(message.encode())
    await writer.drain()

    data = await reader.read(100)
    print(f'Received: {data.decode()!r}')

    print('Close the connection')
    writer.close()
    await writer.wait_closed()

asyncio.run(tcp_echo_client('Hello World!'))
```

See also The [TCP echo client protocol](asyncio-protocol#asyncio-example-tcp-echo-client-protocol) example uses the low-level [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection") method.
### TCP echo server using streams TCP echo server using the [`asyncio.start_server()`](#asyncio.start_server "asyncio.start_server") function: ``` import asyncio async def handle_echo(reader, writer): data = await reader.read(100) message = data.decode() addr = writer.get_extra_info('peername') print(f"Received {message!r} from {addr!r}") print(f"Send: {message!r}") writer.write(data) await writer.drain() print("Close the connection") writer.close() async def main(): server = await asyncio.start_server( handle_echo, '127.0.0.1', 8888) addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets) print(f'Serving on {addrs}') async with server: await server.serve_forever() asyncio.run(main()) ``` See also The [TCP echo server protocol](asyncio-protocol#asyncio-example-tcp-echo-server-protocol) example uses the [`loop.create_server()`](asyncio-eventloop#asyncio.loop.create_server "asyncio.loop.create_server") method. ### Get HTTP headers Simple example querying HTTP headers of the URL passed on the command line: ``` import asyncio import urllib.parse import sys async def print_http_headers(url): url = urllib.parse.urlsplit(url) if url.scheme == 'https': reader, writer = await asyncio.open_connection( url.hostname, 443, ssl=True) else: reader, writer = await asyncio.open_connection( url.hostname, 80) query = ( f"HEAD {url.path or '/'} HTTP/1.0\r\n" f"Host: {url.hostname}\r\n" f"\r\n" ) writer.write(query.encode('latin-1')) while True: line = await reader.readline() if not line: break line = line.decode('latin1').rstrip() if line: print(f'HTTP header> {line}') # Ignore the body, close the socket writer.close() url = sys.argv[1] asyncio.run(print_http_headers(url)) ``` Usage: ``` python example.py http://example.com/path/page.html ``` or with HTTPS: ``` python example.py https://example.com/path/page.html ``` ### Register an open socket to wait for data using streams Coroutine waiting until a socket receives data using the [`open_connection()`](#asyncio.open_connection "asyncio.open_connection") function: ``` import asyncio import socket async def wait_for_data(): # Get a reference to the current event loop because # we want to access low-level APIs. loop = asyncio.get_running_loop() # Create a pair of connected sockets. rsock, wsock = socket.socketpair() # Register the open socket to wait for data. reader, writer = await asyncio.open_connection(sock=rsock) # Simulate the reception of data from the network loop.call_soon(wsock.send, 'abc'.encode()) # Wait for data data = await reader.read(100) # Got data, we are done: close the socket print("Received:", data.decode()) writer.close() # Close the second socket wsock.close() asyncio.run(wait_for_data()) ``` See also The [register an open socket to wait for data using a protocol](asyncio-protocol#asyncio-example-create-connection) example uses a low-level protocol and the [`loop.create_connection()`](asyncio-eventloop#asyncio.loop.create_connection "asyncio.loop.create_connection") method. The [watch a file descriptor for read events](asyncio-eventloop#asyncio-example-watch-fd) example uses the low-level [`loop.add_reader()`](asyncio-eventloop#asyncio.loop.add_reader "asyncio.loop.add_reader") method to watch a file descriptor. 
python imaplib — IMAP4 protocol client imaplib — IMAP4 protocol client =============================== **Source code:** [Lib/imaplib.py](https://github.com/python/cpython/tree/3.9/Lib/imaplib.py) This module defines three classes, [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4"), [`IMAP4_SSL`](#imaplib.IMAP4_SSL "imaplib.IMAP4_SSL") and [`IMAP4_stream`](#imaplib.IMAP4_stream "imaplib.IMAP4_stream"), which encapsulate a connection to an IMAP4 server and implement a large subset of the IMAP4rev1 client protocol as defined in [**RFC 2060**](https://tools.ietf.org/html/rfc2060.html). It is backward compatible with IMAP4 ([**RFC 1730**](https://tools.ietf.org/html/rfc1730.html)) servers, but note that the `STATUS` command is not supported in IMAP4. Three classes are provided by the [`imaplib`](#module-imaplib "imaplib: IMAP4 protocol client (requires sockets).") module; [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") is the base class: `class imaplib.IMAP4(host='', port=IMAP4_PORT, timeout=None)` This class implements the actual IMAP4 protocol. The connection is created and protocol version (IMAP4 or IMAP4rev1) is determined when the instance is initialized. If *host* is not specified, `''` (the local host) is used. If *port* is omitted, the standard IMAP4 port (143) is used. The optional *timeout* parameter specifies a timeout in seconds for the connection attempt. If timeout is not given or is None, the global default socket timeout is used. The [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") class supports the [`with`](../reference/compound_stmts#with) statement. When used like this, the IMAP4 `LOGOUT` command is issued automatically when the `with` statement exits. E.g.:

```
>>> from imaplib import IMAP4
>>> with IMAP4("domain.org") as M:
...     M.noop()
...
('OK', [b'Nothing Accomplished. d25if65hy903weo.87'])
```

Changed in version 3.5: Support for the [`with`](../reference/compound_stmts#with) statement was added. Changed in version 3.9: The optional *timeout* parameter was added. Three exceptions are defined as attributes of the [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") class: `exception IMAP4.error` Exception raised on any errors. The reason for the exception is passed to the constructor as a string. `exception IMAP4.abort` IMAP4 server errors cause this exception to be raised. This is a sub-class of [`IMAP4.error`](#imaplib.IMAP4.error "imaplib.IMAP4.error"). Note that closing the instance and instantiating a new one will usually allow recovery from this exception. `exception IMAP4.readonly` This exception is raised when a writable mailbox has its status changed by the server. This is a sub-class of [`IMAP4.error`](#imaplib.IMAP4.error "imaplib.IMAP4.error"). Some other client now has write permission, and the mailbox will need to be re-opened to re-obtain write permission. There’s also a subclass for secure connections: `class imaplib.IMAP4_SSL(host='', port=IMAP4_SSL_PORT, keyfile=None, certfile=None, ssl_context=None, timeout=None)` This is a subclass derived from [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") that connects over an SSL encrypted socket (to use this class you need a socket module that was compiled with SSL support). If *host* is not specified, `''` (the local host) is used. If *port* is omitted, the standard IMAP4-over-SSL port (993) is used. *ssl\_context* is a [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") object which allows bundling SSL configuration options, certificates and private keys into a single (potentially long-lived) structure.
Please read [Security considerations](ssl#ssl-security) for best practices. *keyfile* and *certfile* are a legacy alternative to *ssl\_context* - they can point to PEM-formatted private key and certificate chain files for the SSL connection. Note that the *keyfile*/*certfile* parameters are mutually exclusive with *ssl\_context*; a [`ValueError`](exceptions#ValueError "ValueError") is raised if *keyfile*/*certfile* is provided along with *ssl\_context*. The optional *timeout* parameter specifies a timeout in seconds for the connection attempt. If timeout is not given or is None, the global default socket timeout is used. Changed in version 3.3: *ssl\_context* parameter was added. Changed in version 3.4: The class now supports hostname check with [`ssl.SSLContext.check_hostname`](ssl#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") and *Server Name Indication* (see [`ssl.HAS_SNI`](ssl#ssl.HAS_SNI "ssl.HAS_SNI")). Deprecated since version 3.6: *keyfile* and *certfile* are deprecated in favor of *ssl\_context*. Please use [`ssl.SSLContext.load_cert_chain()`](ssl#ssl.SSLContext.load_cert_chain "ssl.SSLContext.load_cert_chain") instead, or let [`ssl.create_default_context()`](ssl#ssl.create_default_context "ssl.create_default_context") select the system’s trusted CA certificates for you. Changed in version 3.9: The optional *timeout* parameter was added. The second subclass allows for connections created by a child process: `class imaplib.IMAP4_stream(command)` This is a subclass derived from [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") that connects to the `stdin/stdout` file descriptors created by passing *command* to `subprocess.Popen()`. The following utility functions are defined: `imaplib.Internaldate2tuple(datestr)` Parse an IMAP4 `INTERNALDATE` string and return corresponding local time. The return value is a [`time.struct_time`](time#time.struct_time "time.struct_time") tuple or `None` if the string has the wrong format. `imaplib.Int2AP(num)` Converts an integer into a bytes representation using characters from the set [`A` .. `P`]. `imaplib.ParseFlags(flagstr)` Converts an IMAP4 `FLAGS` response to a tuple of individual flags. `imaplib.Time2Internaldate(date_time)` Convert *date\_time* to an IMAP4 `INTERNALDATE` representation. The return value is a string in the form: `"DD-Mmm-YYYY HH:MM:SS +HHMM"` (including double-quotes). The *date\_time* argument can be a number (int or float) representing seconds since epoch (as returned by [`time.time()`](time#time.time "time.time")), a 9-tuple representing local time, an instance of [`time.struct_time`](time#time.struct_time "time.struct_time") (as returned by [`time.localtime()`](time#time.localtime "time.localtime")), an aware instance of [`datetime.datetime`](datetime#datetime.datetime "datetime.datetime"), or a double-quoted string. In the last case, it is assumed to already be in the correct format. Note that IMAP4 message numbers change as the mailbox changes; in particular, after an `EXPUNGE` command performs deletions the remaining messages are renumbered. So it is highly advisable to use UIDs instead, with the UID command. At the end of the module, there is a test section that contains a more extensive example of usage. See also Documents describing the protocol, and sources for servers implementing it, from the University of Washington’s IMAP Information Center can all be found at (**Source Code**) <https://github.com/uw-imap/imap> (**Not Maintained**).
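Following the note above about message numbers, here is a minimal sketch of the UID-based approach; it assumes `M` is an authenticated [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") instance with a mailbox selected:

```
# Search and fetch by UID so the identifiers stay valid even if
# an EXPUNGE renumbers the mailbox between the two commands.
typ, data = M.uid('SEARCH', None, 'ALL')
for uid in data[0].split():
    typ, msg_data = M.uid('FETCH', uid, '(RFC822)')
```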
IMAP4 Objects ------------- All IMAP4rev1 commands are represented by methods of the same name, either upper-case or lower-case. All arguments to commands are converted to strings, except for `AUTHENTICATE`, and the last argument to `APPEND` which is passed as an IMAP4 literal. If necessary (the string contains IMAP4 protocol-sensitive characters and isn’t enclosed with either parentheses or double quotes), each string is quoted. However, the *password* argument to the `LOGIN` command is always quoted. If you want to avoid having an argument string quoted (eg: the *flags* argument to `STORE`) then enclose the string in parentheses (eg: `r'(\Deleted)'`). Each command returns a tuple: `(type, [data, ...])` where *type* is usually `'OK'` or `'NO'`, and *data* is either the text from the command response, or mandated results from the command. Each *data* is either a `bytes`, or a tuple. If a tuple, then the first part is the header of the response, and the second part contains the data (ie: ‘literal’ value). The *message\_set* option to commands below is a string specifying one or more messages to be acted upon. It may be a simple message number (`'1'`), a range of message numbers (`'2:4'`), or a group of non-contiguous ranges separated by commas (`'1:3,6:9'`). A range can contain an asterisk to indicate an infinite upper bound (`'3:*'`). An [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") instance has the following methods: `IMAP4.append(mailbox, flags, date_time, message)` Append *message* to named mailbox. `IMAP4.authenticate(mechanism, authobject)` Authenticate command — requires response processing. *mechanism* specifies which authentication mechanism is to be used - it should appear in the instance variable `capabilities` in the form `AUTH=mechanism`. *authobject* must be a callable object:

```
data = authobject(response)
```

It will be called to process server continuation responses; the *response* argument passed to it will be `bytes`. It should return `bytes` *data* that will be base64 encoded and sent to the server. It should return `None` if the client abort response `*` should be sent instead. Changed in version 3.5: string usernames and passwords are now encoded to `utf-8` instead of being limited to ASCII. `IMAP4.check()` Checkpoint mailbox on server. `IMAP4.close()` Close currently selected mailbox. Deleted messages are removed from writable mailbox. This is the recommended command before `LOGOUT`. `IMAP4.copy(message_set, new_mailbox)` Copy *message\_set* messages onto end of *new\_mailbox*. `IMAP4.create(mailbox)` Create new mailbox named *mailbox*. `IMAP4.delete(mailbox)` Delete old mailbox named *mailbox*. `IMAP4.deleteacl(mailbox, who)` Delete the ACLs (remove any rights) set for who on mailbox. `IMAP4.enable(capability)` Enable *capability* (see [**RFC 5161**](https://tools.ietf.org/html/rfc5161.html)). Most capabilities do not need to be enabled. Currently only the `UTF8=ACCEPT` capability is supported (see [**RFC 6855**](https://tools.ietf.org/html/rfc6855.html)). New in version 3.5: The [`enable()`](#imaplib.IMAP4.enable "imaplib.IMAP4.enable") method itself, and [**RFC 6855**](https://tools.ietf.org/html/rfc6855.html) support. `IMAP4.expunge()` Permanently remove deleted items from selected mailbox. Generates an `EXPUNGE` response for each deleted message. Returned data contains a list of `EXPUNGE` message numbers in order received. `IMAP4.fetch(message_set, message_parts)` Fetch (parts of) messages.
*message\_parts* should be a string of message part names enclosed within parentheses, eg: `"(UID BODY[TEXT])"`. Returned data are tuples of message part envelope and data. `IMAP4.getacl(mailbox)` Get the `ACL`s for *mailbox*. The method is non-standard, but is supported by the `Cyrus` server. `IMAP4.getannotation(mailbox, entry, attribute)` Retrieve the specified `ANNOTATION`s for *mailbox*. The method is non-standard, but is supported by the `Cyrus` server. `IMAP4.getquota(root)` Get the `quota` *root*’s resource usage and limits. This method is part of the IMAP4 QUOTA extension defined in rfc2087. `IMAP4.getquotaroot(mailbox)` Get the list of `quota` `roots` for the named *mailbox*. This method is part of the IMAP4 QUOTA extension defined in rfc2087. `IMAP4.list([directory[, pattern]])` List mailbox names in *directory* matching *pattern*. *directory* defaults to the top-level mail folder, and *pattern* defaults to match anything. Returned data contains a list of `LIST` responses. `IMAP4.login(user, password)` Identify the client using a plaintext password. The *password* will be quoted. `IMAP4.login_cram_md5(user, password)` Force use of `CRAM-MD5` authentication when identifying the client to protect the password. Will only work if the server `CAPABILITY` response includes the phrase `AUTH=CRAM-MD5`. `IMAP4.logout()` Shutdown connection to server. Returns server `BYE` response. Changed in version 3.8: The method no longer silently ignores arbitrary exceptions. `IMAP4.lsub(directory='""', pattern='*')` List subscribed mailbox names in directory matching pattern. *directory* defaults to the top level directory and *pattern* defaults to match any mailbox. Returned data are tuples of message part envelope and data. `IMAP4.myrights(mailbox)` Show my ACLs for a mailbox (i.e. the rights that I have on mailbox). `IMAP4.namespace()` Returns IMAP namespaces as defined in [**RFC 2342**](https://tools.ietf.org/html/rfc2342.html). `IMAP4.noop()` Send `NOOP` to server. `IMAP4.open(host, port, timeout=None)` Opens socket to *port* at *host*. The optional *timeout* parameter specifies a timeout in seconds for the connection attempt. If timeout is not given or is None, the global default socket timeout is used. Also note that if the *timeout* parameter is set to be zero, it will raise a [`ValueError`](exceptions#ValueError "ValueError") to reject creating a non-blocking socket. This method is implicitly called by the [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") constructor. The connection objects established by this method will be used in the [`IMAP4.read()`](#imaplib.IMAP4.read "imaplib.IMAP4.read"), [`IMAP4.readline()`](#imaplib.IMAP4.readline "imaplib.IMAP4.readline"), [`IMAP4.send()`](#imaplib.IMAP4.send "imaplib.IMAP4.send"), and [`IMAP4.shutdown()`](#imaplib.IMAP4.shutdown "imaplib.IMAP4.shutdown") methods. You may override this method. Raises an [auditing event](sys#auditing) `imaplib.open` with arguments `self`, `host`, `port`. Changed in version 3.9: The *timeout* parameter was added. `IMAP4.partial(message_num, message_part, start, length)` Fetch truncated part of a message. Returned data is a tuple of message part envelope and data. `IMAP4.proxyauth(user)` Assume authentication as *user*. Allows an authorised administrator to proxy into any user’s mailbox. `IMAP4.read(size)` Reads *size* bytes from the remote server. You may override this method. `IMAP4.readline()` Reads one line from the remote server. You may override this method. `IMAP4.recent()` Prompt server for an update.
Returned data is `None` if no new messages, else value of `RECENT` response. `IMAP4.rename(oldmailbox, newmailbox)` Rename mailbox named *oldmailbox* to *newmailbox*. `IMAP4.response(code)` Return data for response *code* if received, or `None`. Returns the given code, instead of the usual type. `IMAP4.search(charset, criterion[, ...])` Search mailbox for matching messages. *charset* may be `None`, in which case no `CHARSET` will be specified in the request to the server. The IMAP protocol requires that at least one criterion be specified; an exception will be raised when the server returns an error. *charset* must be `None` if the `UTF8=ACCEPT` capability was enabled using the [`enable()`](#imaplib.IMAP4.enable "imaplib.IMAP4.enable") command. Example: ``` # M is a connected IMAP4 instance... typ, msgnums = M.search(None, 'FROM', '"LDJ"') # or: typ, msgnums = M.search(None, '(FROM "LDJ")') ``` `IMAP4.select(mailbox='INBOX', readonly=False)` Select a mailbox. Returned data is the count of messages in *mailbox* (`EXISTS` response). The default *mailbox* is `'INBOX'`. If the *readonly* flag is set, modifications to the mailbox are not allowed. `IMAP4.send(data)` Sends `data` to the remote server. You may override this method. Raises an [auditing event](sys#auditing) `imaplib.send` with arguments `self`, `data`. `IMAP4.setacl(mailbox, who, what)` Set an `ACL` for *mailbox*. The method is non-standard, but is supported by the `Cyrus` server. `IMAP4.setannotation(mailbox, entry, attribute[, ...])` Set `ANNOTATION`s for *mailbox*. The method is non-standard, but is supported by the `Cyrus` server. `IMAP4.setquota(root, limits)` Set the `quota` *root*’s resource *limits*. This method is part of the IMAP4 QUOTA extension defined in rfc2087. `IMAP4.shutdown()` Close connection established in `open`. This method is implicitly called by [`IMAP4.logout()`](#imaplib.IMAP4.logout "imaplib.IMAP4.logout"). You may override this method. `IMAP4.socket()` Returns socket instance used to connect to server. `IMAP4.sort(sort_criteria, charset, search_criterion[, ...])` The `sort` command is a variant of `search` with sorting semantics for the results. Returned data contains a space separated list of matching message numbers. Sort has two arguments before the *search\_criterion* argument(s); a parenthesized list of *sort\_criteria*, and the searching *charset*. Note that unlike `search`, the searching *charset* argument is mandatory. There is also a `uid sort` command which corresponds to `sort` the way that `uid search` corresponds to `search`. The `sort` command first searches the mailbox for messages that match the given searching criteria using the charset argument for the interpretation of strings in the searching criteria. It then returns the numbers of matching messages. This is an `IMAP4rev1` extension command. `IMAP4.starttls(ssl_context=None)` Send a `STARTTLS` command. The *ssl\_context* argument is optional and should be a [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") object. This will enable encryption on the IMAP connection. Please read [Security considerations](ssl#ssl-security) for best practices. New in version 3.2. Changed in version 3.4: The method now supports hostname check with [`ssl.SSLContext.check_hostname`](ssl#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") and *Server Name Indication* (see [`ssl.HAS_SNI`](ssl#ssl.HAS_SNI "ssl.HAS_SNI")). `IMAP4.status(mailbox, names)` Request named status conditions for *mailbox*. 
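For example (a minimal sketch; `M` is assumed to be an authenticated [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4") instance, and the status items are the standard [**RFC 3501**](https://tools.ietf.org/html/rfc3501.html) ones):

```
typ, data = M.status('INBOX', '(MESSAGES UNSEEN)')
```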
`IMAP4.store(message_set, command, flag_list)` Alters flag dispositions for messages in mailbox. *command* is specified by section 6.4.6 of [**RFC 2060**](https://tools.ietf.org/html/rfc2060.html) as being one of “FLAGS”, “+FLAGS”, or “-FLAGS”, optionally with a suffix of “.SILENT”. For example, to set the delete flag on all messages:

```
typ, data = M.search(None, 'ALL')
for num in data[0].split():
    M.store(num, '+FLAGS', '\\Deleted')
M.expunge()
```

Note Creating flags containing ‘]’ (for example: “[test]”) violates [**RFC 3501**](https://tools.ietf.org/html/rfc3501.html) (the IMAP protocol). However, imaplib has historically allowed creation of such tags, and popular IMAP servers, such as Gmail, accept and produce such flags. There are non-Python programs which also create such tags. Although it is an RFC violation and IMAP clients and servers are supposed to be strict, imaplib nonetheless continues to allow such tags to be created for backward compatibility reasons, and as of Python 3.6, handles them if they are sent from the server, since this improves real-world compatibility. `IMAP4.subscribe(mailbox)` Subscribe to new mailbox. `IMAP4.thread(threading_algorithm, charset, search_criterion[, ...])` The `thread` command is a variant of `search` with threading semantics for the results. Returned data contains a space separated list of thread members. Thread members consist of zero or more message numbers, delimited by spaces, indicating successive parent and child. Thread has two arguments before the *search\_criterion* argument(s); a *threading\_algorithm*, and the searching *charset*. Note that unlike `search`, the searching *charset* argument is mandatory. There is also a `uid thread` command which corresponds to `thread` the way that `uid search` corresponds to `search`. The `thread` command first searches the mailbox for messages that match the given searching criteria using the charset argument for the interpretation of strings in the searching criteria. It then returns the matching messages threaded according to the specified threading algorithm. This is an `IMAP4rev1` extension command. `IMAP4.uid(command, arg[, ...])` Execute command args with messages identified by UID, rather than message number. Returns response appropriate to command. At least one argument must be supplied; if none are provided, the server will return an error and an exception will be raised. `IMAP4.unsubscribe(mailbox)` Unsubscribe from old mailbox. `IMAP4.unselect()` [`imaplib.IMAP4.unselect()`](#imaplib.IMAP4.unselect "imaplib.IMAP4.unselect") frees server’s resources associated with the selected mailbox and returns the server to the authenticated state. This command performs the same actions as [`imaplib.IMAP4.close()`](#imaplib.IMAP4.close "imaplib.IMAP4.close"), except that no messages are permanently removed from the currently selected mailbox. New in version 3.9. `IMAP4.xatom(name[, ...])` Allow simple extension commands notified by server in `CAPABILITY` response. The following attributes are defined on instances of [`IMAP4`](#imaplib.IMAP4 "imaplib.IMAP4"): `IMAP4.PROTOCOL_VERSION` The most recent supported protocol in the `CAPABILITY` response from the server. `IMAP4.debug` Integer value to control debugging output. The initial value is taken from the module variable `Debug`. Values greater than three trace each command.
`IMAP4.utf8_enabled` Boolean value that is normally `False`, but is set to `True` if an [`enable()`](#imaplib.IMAP4.enable "imaplib.IMAP4.enable") command is successfully issued for the `UTF8=ACCEPT` capability. New in version 3.5. IMAP4 Example ------------- Here is a minimal example (without error checking) that opens a mailbox and retrieves and prints all messages: ``` import getpass, imaplib M = imaplib.IMAP4() M.login(getpass.getuser(), getpass.getpass()) M.select() typ, data = M.search(None, 'ALL') for num in data[0].split(): typ, data = M.fetch(num, '(RFC822)') print('Message %s\n%s\n' % (num, data[0][1])) M.close() M.logout() ```
python xml.sax.saxutils — SAX Utilities xml.sax.saxutils — SAX Utilities ================================ **Source code:** [Lib/xml/sax/saxutils.py](https://github.com/python/cpython/tree/3.9/Lib/xml/sax/saxutils.py) The module [`xml.sax.saxutils`](#module-xml.sax.saxutils "xml.sax.saxutils: Convenience functions and classes for use with SAX.") contains a number of classes and functions that are commonly useful when creating SAX applications, either in direct use, or as base classes. `xml.sax.saxutils.escape(data, entities={})` Escape `'&'`, `'<'`, and `'>'` in a string of data. You can escape other strings of data by passing a dictionary as the optional *entities* parameter. The keys and values must all be strings; each key will be replaced with its corresponding value. The characters `'&'`, `'<'` and `'>'` are always escaped, even if *entities* is provided. `xml.sax.saxutils.unescape(data, entities={})` Unescape `'&amp;'`, `'&lt;'`, and `'&gt;'` in a string of data. You can unescape other strings of data by passing a dictionary as the optional *entities* parameter. The keys and values must all be strings; each key will be replaced with its corresponding value. `'&amp;'`, `'&lt;'`, and `'&gt;'` are always unescaped, even if *entities* is provided. `xml.sax.saxutils.quoteattr(data, entities={})` Similar to [`escape()`](#xml.sax.saxutils.escape "xml.sax.saxutils.escape"), but also prepares *data* to be used as an attribute value. The return value is a quoted version of *data* with any additional required replacements. [`quoteattr()`](#xml.sax.saxutils.quoteattr "xml.sax.saxutils.quoteattr") will select a quote character based on the content of *data*, attempting to avoid encoding any quote characters in the string. If both single- and double-quote characters are already in *data*, the double-quote characters will be encoded and *data* will be wrapped in double-quotes. The resulting string can be used directly as an attribute value:

```
>>> print("<element attr=%s>" % quoteattr("ab ' cd \" ef"))
<element attr="ab ' cd &quot; ef">
```

This function is useful when generating attribute values for HTML or any SGML using the reference concrete syntax. `class xml.sax.saxutils.XMLGenerator(out=None, encoding='iso-8859-1', short_empty_elements=False)` This class implements the [`ContentHandler`](xml.sax.handler#xml.sax.handler.ContentHandler "xml.sax.handler.ContentHandler") interface by writing SAX events back into an XML document. In other words, using an [`XMLGenerator`](#xml.sax.saxutils.XMLGenerator "xml.sax.saxutils.XMLGenerator") as the content handler will reproduce the original document being parsed. *out* should be a file-like object which will default to *sys.stdout*. *encoding* is the encoding of the output stream which defaults to `'iso-8859-1'`. *short\_empty\_elements* controls the formatting of elements that contain no content: if `False` (the default) they are emitted as a pair of start/end tags, if set to `True` they are emitted as a single self-closed tag. New in version 3.2: The *short\_empty\_elements* parameter. `class xml.sax.saxutils.XMLFilterBase(base)` This class is designed to sit between an [`XMLReader`](xml.sax.reader#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") and the client application’s event handlers. By default, it does nothing but pass requests up to the reader and events on to the handlers unmodified, but subclasses can override specific methods to modify the event stream or the configuration requests as they pass through.
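As a sketch of what such a subclass might look like (the class name `ShoutFilter` and its behavior are purely illustrative):

```
import sys
from io import StringIO
from xml.sax import make_parser
from xml.sax.saxutils import XMLFilterBase, XMLGenerator

class ShoutFilter(XMLFilterBase):
    # Illustrative filter: upper-case all character data and pass
    # every other event through to the downstream handler unchanged.
    def characters(self, content):
        super().characters(content.upper())

parser = ShoutFilter(make_parser())
parser.setContentHandler(XMLGenerator(sys.stdout))
parser.parse(StringIO("<greeting>hello</greeting>"))
```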
`xml.sax.saxutils.prepare_input_source(source, base='')` This function takes an input source and an optional base URL and returns a fully resolved [`InputSource`](xml.sax.reader#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource") object ready for reading. The input source can be given as a string, a file-like object, or an [`InputSource`](xml.sax.reader#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource") object; parsers will use this function to implement the polymorphic *source* argument to their `parse()` method. python tabnanny — Detection of ambiguous indentation tabnanny — Detection of ambiguous indentation ============================================= **Source code:** [Lib/tabnanny.py](https://github.com/python/cpython/tree/3.9/Lib/tabnanny.py) For the time being, this module is intended to be called as a script. However, it is possible to import it into an IDE and use the function [`check()`](#tabnanny.check "tabnanny.check") described below. Note The API provided by this module is likely to change in future releases; such changes may not be backward compatible. `tabnanny.check(file_or_dir)` If *file\_or\_dir* is a directory and not a symbolic link, then recursively descend the directory tree named by *file\_or\_dir*, checking all `.py` files along the way. If *file\_or\_dir* is an ordinary Python source file, it is checked for whitespace related problems. The diagnostic messages are written to standard output using the [`print()`](functions#print "print") function. `tabnanny.verbose` Flag indicating whether to print verbose messages. This is incremented by the `-v` option if called as a script. `tabnanny.filename_only` Flag indicating whether to print only the filenames of files containing whitespace related problems. This is set to true by the `-q` option if called as a script. `exception tabnanny.NannyNag` Raised by [`process_tokens()`](#tabnanny.process_tokens "tabnanny.process_tokens") if detecting an ambiguous indent. Captured and handled in [`check()`](#tabnanny.check "tabnanny.check"). `tabnanny.process_tokens(tokens)` This function is used by [`check()`](#tabnanny.check "tabnanny.check") to process tokens generated by the [`tokenize`](tokenize#module-tokenize "tokenize: Lexical scanner for Python source code.") module. See also `Module` [`tokenize`](tokenize#module-tokenize "tokenize: Lexical scanner for Python source code.") Lexical scanner for Python source code. python time — Time access and conversions time — Time access and conversions ================================== This module provides various time-related functions. For related functionality, see also the [`datetime`](datetime#module-datetime "datetime: Basic date and time types.") and [`calendar`](calendar#module-calendar "calendar: Functions for working with calendars, including some emulation of the Unix cal program.") modules. Although this module is always available, not all functions are available on all platforms. Most of the functions defined in this module call platform C library functions with the same name. It may sometimes be helpful to consult the platform documentation, because the semantics of these functions vary among platforms. An explanation of some terminology and conventions is in order. * The *epoch* is the point where the time starts, and is platform dependent. For Unix, the epoch is January 1, 1970, 00:00:00 (UTC). To find out what the epoch is on a given platform, look at `time.gmtime(0)`.
* The term *seconds since the epoch* refers to the total number of elapsed seconds since the epoch, typically excluding [leap seconds](https://en.wikipedia.org/wiki/Leap_second). Leap seconds are excluded from this total on all POSIX-compliant platforms. * The functions in this module may not handle dates and times before the epoch or far in the future. The cut-off point in the future is determined by the C library; for 32-bit systems, it is typically in 2038. * Function [`strptime()`](#time.strptime "time.strptime") can parse 2-digit years when given `%y` format code. When 2-digit years are parsed, they are converted according to the POSIX and ISO C standards: values 69–99 are mapped to 1969–1999, and values 0–68 are mapped to 2000–2068. * UTC is Coordinated Universal Time (formerly known as Greenwich Mean Time, or GMT). The acronym UTC is not a mistake but a compromise between English and French. * DST is Daylight Saving Time, an adjustment of the timezone by (usually) one hour during part of the year. DST rules are magic (determined by local law) and can change from year to year. The C library has a table containing the local rules (often it is read from a system file for flexibility) and is the only source of True Wisdom in this respect. * The precision of the various real-time functions may be less than suggested by the units in which their value or argument is expressed. E.g. on most Unix systems, the clock “ticks” only 50 or 100 times a second. * On the other hand, the precision of [`time()`](#time.time "time.time") and [`sleep()`](#time.sleep "time.sleep") is better than their Unix equivalents: times are expressed as floating point numbers, [`time()`](#time.time "time.time") returns the most accurate time available (using Unix `gettimeofday()` where available), and [`sleep()`](#time.sleep "time.sleep") will accept a time with a nonzero fraction (Unix `select()` is used to implement this, where available). * The time value as returned by [`gmtime()`](#time.gmtime "time.gmtime"), [`localtime()`](#time.localtime "time.localtime"), and [`strptime()`](#time.strptime "time.strptime"), and accepted by [`asctime()`](#time.asctime "time.asctime"), [`mktime()`](#time.mktime "time.mktime") and [`strftime()`](#time.strftime "time.strftime"), is a sequence of 9 integers. The return values of [`gmtime()`](#time.gmtime "time.gmtime"), [`localtime()`](#time.localtime "time.localtime"), and [`strptime()`](#time.strptime "time.strptime") also offer attribute names for individual fields. See [`struct_time`](#time.struct_time "time.struct_time") for a description of these objects. Changed in version 3.3: The [`struct_time`](#time.struct_time "time.struct_time") type was extended to provide the `tm_gmtoff` and `tm_zone` attributes when platform supports corresponding `struct tm` members. Changed in version 3.6: The [`struct_time`](#time.struct_time "time.struct_time") attributes `tm_gmtoff` and `tm_zone` are now available on all platforms. 
* Use the following functions to convert between time representations: | From | To | Use | | --- | --- | --- | | seconds since the epoch | [`struct_time`](#time.struct_time "time.struct_time") in UTC | [`gmtime()`](#time.gmtime "time.gmtime") | | seconds since the epoch | [`struct_time`](#time.struct_time "time.struct_time") in local time | [`localtime()`](#time.localtime "time.localtime") | | [`struct_time`](#time.struct_time "time.struct_time") in UTC | seconds since the epoch | [`calendar.timegm()`](calendar#calendar.timegm "calendar.timegm") | | [`struct_time`](#time.struct_time "time.struct_time") in local time | seconds since the epoch | [`mktime()`](#time.mktime "time.mktime") | Functions --------- `time.asctime([t])` Convert a tuple or [`struct_time`](#time.struct_time "time.struct_time") representing a time as returned by [`gmtime()`](#time.gmtime "time.gmtime") or [`localtime()`](#time.localtime "time.localtime") to a string of the following form: `'Sun Jun 20 23:21:05 1993'`. The day field is two characters long and is space padded if the day is a single digit, e.g.: `'Wed Jun  9 04:26:40 1993'`. If *t* is not provided, the current time as returned by [`localtime()`](#time.localtime "time.localtime") is used. Locale information is not used by [`asctime()`](#time.asctime "time.asctime"). Note Unlike the C function of the same name, [`asctime()`](#time.asctime "time.asctime") does not add a trailing newline. `time.pthread_getcpuclockid(thread_id)` Return the *clk\_id* of the thread-specific CPU-time clock for the specified *thread\_id*. Use [`threading.get_ident()`](threading#threading.get_ident "threading.get_ident") or the [`ident`](threading#threading.Thread.ident "threading.Thread.ident") attribute of [`threading.Thread`](threading#threading.Thread "threading.Thread") objects to get a suitable value for *thread\_id*. Warning Passing an invalid or expired *thread\_id* may result in undefined behavior, such as segmentation fault. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix (see the man page for *[pthread\_getcpuclockid(3)](https://manpages.debian.org/pthread_getcpuclockid(3))* for further information). New in version 3.7. `time.clock_getres(clk_id)` Return the resolution (precision) of the specified clock *clk\_id*. Refer to [Clock ID Constants](#time-clock-id-constants) for a list of accepted values for *clk\_id*. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `time.clock_gettime(clk_id) → float` Return the time of the specified clock *clk\_id*. Refer to [Clock ID Constants](#time-clock-id-constants) for a list of accepted values for *clk\_id*. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `time.clock_gettime_ns(clk_id) → int` Similar to [`clock_gettime()`](#time.clock_gettime "time.clock_gettime") but return time as nanoseconds. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.7. `time.clock_settime(clk_id, time: float)` Set the time of the specified clock *clk\_id*. Currently, [`CLOCK_REALTIME`](#time.CLOCK_REALTIME "time.CLOCK_REALTIME") is the only accepted value for *clk\_id*. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `time.clock_settime_ns(clk_id, time: int)` Similar to [`clock_settime()`](#time.clock_settime "time.clock_settime") but set time with nanoseconds. 
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.7. `time.ctime([secs])` Convert a time expressed in seconds since the epoch to a string of a form: `'Sun Jun 20 23:21:05 1993'` representing local time. The day field is two characters long and is space padded if the day is a single digit, e.g.: `'Wed Jun  9 04:26:40 1993'`. If *secs* is not provided or [`None`](constants#None "None"), the current time as returned by [`time()`](#time.time "time.time") is used. `ctime(secs)` is equivalent to `asctime(localtime(secs))`. Locale information is not used by [`ctime()`](#time.ctime "time.ctime"). `time.get_clock_info(name)` Get information on the specified clock as a namespace object. Supported clock names and the corresponding functions to read their value are: * `'monotonic'`: [`time.monotonic()`](#time.monotonic "time.monotonic") * `'perf_counter'`: [`time.perf_counter()`](#time.perf_counter "time.perf_counter") * `'process_time'`: [`time.process_time()`](#time.process_time "time.process_time") * `'thread_time'`: [`time.thread_time()`](#time.thread_time "time.thread_time") * `'time'`: [`time.time()`](#time.time "time.time") The result has the following attributes: * *adjustable*: `True` if the clock can be changed automatically (e.g. by a NTP daemon) or manually by the system administrator, `False` otherwise * *implementation*: The name of the underlying C function used to get the clock value. Refer to [Clock ID Constants](#time-clock-id-constants) for possible values. * *monotonic*: `True` if the clock cannot go backward, `False` otherwise * *resolution*: The resolution of the clock in seconds ([`float`](functions#float "float")) New in version 3.3. `time.gmtime([secs])` Convert a time expressed in seconds since the epoch to a [`struct_time`](#time.struct_time "time.struct_time") in UTC in which the dst flag is always zero. If *secs* is not provided or [`None`](constants#None "None"), the current time as returned by [`time()`](#time.time "time.time") is used. Fractions of a second are ignored. See above for a description of the [`struct_time`](#time.struct_time "time.struct_time") object. See [`calendar.timegm()`](calendar#calendar.timegm "calendar.timegm") for the inverse of this function. `time.localtime([secs])` Like [`gmtime()`](#time.gmtime "time.gmtime") but converts to local time. If *secs* is not provided or [`None`](constants#None "None"), the current time as returned by [`time()`](#time.time "time.time") is used. The dst flag is set to `1` when DST applies to the given time. [`localtime()`](#time.localtime "time.localtime") may raise [`OverflowError`](exceptions#OverflowError "OverflowError"), if the timestamp is outside the range of values supported by the platform C `localtime()` or `gmtime()` functions, and [`OSError`](exceptions#OSError "OSError") on `localtime()` or `gmtime()` failure. It’s common for this to be restricted to years between 1970 and 2038. `time.mktime(t)` This is the inverse function of [`localtime()`](#time.localtime "time.localtime"). Its argument is the [`struct_time`](#time.struct_time "time.struct_time") or full 9-tuple (since the dst flag is needed; use `-1` as the dst flag if it is unknown) which expresses the time in *local* time, not UTC. It returns a floating point number, for compatibility with [`time()`](#time.time "time.time"). 
If the input value cannot be represented as a valid time, either [`OverflowError`](exceptions#OverflowError "OverflowError") or [`ValueError`](exceptions#ValueError "ValueError") will be raised (which depends on whether the invalid value is caught by Python or the underlying C libraries). The earliest date for which it can generate a time is platform-dependent. `time.monotonic() → float` Return the value (in fractional seconds) of a monotonic clock, i.e. a clock that cannot go backwards. The clock is not affected by system clock updates. The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid. New in version 3.3. Changed in version 3.5: The function is now always available and always system-wide. `time.monotonic_ns() → int` Similar to [`monotonic()`](#time.monotonic "time.monotonic"), but return time as nanoseconds. New in version 3.7. `time.perf_counter() → float` Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide. The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid. New in version 3.3. `time.perf_counter_ns() → int` Similar to [`perf_counter()`](#time.perf_counter "time.perf_counter"), but return time as nanoseconds. New in version 3.7. `time.process_time() → float` Return the value (in fractional seconds) of the sum of the system and user CPU time of the current process. It does not include time elapsed during sleep. It is process-wide by definition. The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid. New in version 3.3. `time.process_time_ns() → int` Similar to [`process_time()`](#time.process_time "time.process_time") but return time as nanoseconds. New in version 3.7. `time.sleep(secs)` Suspend execution of the calling thread for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time. The actual suspension time may be less than that requested because any caught signal will terminate the [`sleep()`](#time.sleep "time.sleep") following execution of that signal’s catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system. Changed in version 3.5: The function now sleeps at least *secs* even if the sleep is interrupted by a signal, except if the signal handler raises an exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale). `time.strftime(format[, t])` Convert a tuple or [`struct_time`](#time.struct_time "time.struct_time") representing a time as returned by [`gmtime()`](#time.gmtime "time.gmtime") or [`localtime()`](#time.localtime "time.localtime") to a string as specified by the *format* argument. If *t* is not provided, the current time as returned by [`localtime()`](#time.localtime "time.localtime") is used. *format* must be a string. [`ValueError`](exceptions#ValueError "ValueError") is raised if any field in *t* is outside of the allowed range. 0 is a legal argument for any position in the time tuple; if it is normally illegal the value is forced to a correct one. The following directives can be embedded in the *format* string. 
They are shown without the optional field width and precision specification, and are replaced by the indicated characters in the [`strftime()`](#time.strftime "time.strftime") result:

| Directive | Meaning | Notes |
| --- | --- | --- |
| `%a` | Locale’s abbreviated weekday name. | |
| `%A` | Locale’s full weekday name. | |
| `%b` | Locale’s abbreviated month name. | |
| `%B` | Locale’s full month name. | |
| `%c` | Locale’s appropriate date and time representation. | |
| `%d` | Day of the month as a decimal number [01,31]. | |
| `%H` | Hour (24-hour clock) as a decimal number [00,23]. | |
| `%I` | Hour (12-hour clock) as a decimal number [01,12]. | |
| `%j` | Day of the year as a decimal number [001,366]. | |
| `%m` | Month as a decimal number [01,12]. | |
| `%M` | Minute as a decimal number [00,59]. | |
| `%p` | Locale’s equivalent of either AM or PM. | (1) |
| `%S` | Second as a decimal number [00,61]. | (2) |
| `%U` | Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Sunday are considered to be in week 0. | (3) |
| `%w` | Weekday as a decimal number [0(Sunday),6]. | |
| `%W` | Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Monday are considered to be in week 0. | (3) |
| `%x` | Locale’s appropriate date representation. | |
| `%X` | Locale’s appropriate time representation. | |
| `%y` | Year without century as a decimal number [00,99]. | |
| `%Y` | Year with century as a decimal number. | |
| `%z` | Time zone offset indicating a positive or negative time difference from UTC/GMT of the form +HHMM or -HHMM, where H represents decimal hour digits and M represents decimal minute digits [-23:59, +23:59]. [1](#id4) | |
| `%Z` | Time zone name (no characters if no time zone exists). Deprecated. [1](#id4) | |
| `%%` | A literal `'%'` character. | |

Notes:

1. When used with the [`strptime()`](#time.strptime "time.strptime") function, the `%p` directive only affects the output hour field if the `%I` directive is used to parse the hour.
2. The range really is `0` to `61`; value `60` is valid in timestamps representing [leap seconds](https://en.wikipedia.org/wiki/Leap_second) and value `61` is supported for historical reasons.
3. When used with the [`strptime()`](#time.strptime "time.strptime") function, `%U` and `%W` are only used in calculations when the day of the week and the year are specified.

Here is an example, a format for dates compatible with that specified in the [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html) Internet email standard. [1](#id4)

```
>>> from time import gmtime, strftime
>>> strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())
'Thu, 28 Jun 2001 14:17:15 +0000'
```

Additional directives may be supported on certain platforms, but only the ones listed here have a meaning standardized by ANSI C. To see the full set of format codes supported on your platform, consult the *[strftime(3)](https://manpages.debian.org/strftime(3))* documentation. On some platforms, an optional field width and precision specification can immediately follow the initial `'%'` of a directive in the following order; this is also not portable. The field width is normally 2 except for `%j` where it is 3. `time.strptime(string[, format])` Parse a string representing a time according to a format.
The return value is a [`struct_time`](#time.struct_time "time.struct_time") as returned by [`gmtime()`](#time.gmtime "time.gmtime") or [`localtime()`](#time.localtime "time.localtime"). The *format* parameter uses the same directives as those used by [`strftime()`](#time.strftime "time.strftime"); it defaults to `"%a %b %d %H:%M:%S %Y"` which matches the formatting returned by [`ctime()`](#time.ctime "time.ctime"). If *string* cannot be parsed according to *format*, or if it has excess data after parsing, [`ValueError`](exceptions#ValueError "ValueError") is raised. The default values used to fill in any missing data when more accurate values cannot be inferred are `(1900, 1, 1, 0, 0, 0, 0, 1, -1)`. Both *string* and *format* must be strings. For example:

```
>>> import time
>>> time.strptime("30 Nov 00", "%d %b %y")
time.struct_time(tm_year=2000, tm_mon=11, tm_mday=30, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=335, tm_isdst=-1)
```

Support for the `%Z` directive is based on the values contained in `tzname` and whether `daylight` is true. Because of this, it is platform-specific except for recognizing UTC and GMT which are always known (and are considered to be non-daylight savings timezones). Only the directives specified in the documentation are supported. Because `strftime()` is implemented per platform it can sometimes offer more directives than those listed. But `strptime()` is independent of any platform and thus does not necessarily support all directives available that are not documented as supported. `class time.struct_time` The type of the time value sequence returned by [`gmtime()`](#time.gmtime "time.gmtime"), [`localtime()`](#time.localtime "time.localtime"), and [`strptime()`](#time.strptime "time.strptime"). It is an object with a [named tuple](../glossary#term-named-tuple) interface: values can be accessed by index and by attribute name. The following values are present:

| Index | Attribute | Values |
| --- | --- | --- |
| 0 | `tm_year` | (for example, 1993) |
| 1 | `tm_mon` | range [1, 12] |
| 2 | `tm_mday` | range [1, 31] |
| 3 | `tm_hour` | range [0, 23] |
| 4 | `tm_min` | range [0, 59] |
| 5 | `tm_sec` | range [0, 61]; see **(2)** in [`strftime()`](#time.strftime "time.strftime") description |
| 6 | `tm_wday` | range [0, 6], Monday is 0 |
| 7 | `tm_yday` | range [1, 366] |
| 8 | `tm_isdst` | 0, 1 or -1; see below |
| N/A | `tm_zone` | abbreviation of timezone name |
| N/A | `tm_gmtoff` | offset east of UTC in seconds |

Note that unlike the C structure, the month value is a range of [1, 12], not [0, 11]. In calls to [`mktime()`](#time.mktime "time.mktime"), `tm_isdst` may be set to 1 when daylight savings time is in effect, and 0 when it is not. A value of -1 indicates that this is not known, and will usually result in the correct state being filled in. When a tuple with an incorrect length is passed to a function expecting a [`struct_time`](#time.struct_time "time.struct_time"), or having elements of the wrong type, a [`TypeError`](exceptions#TypeError "TypeError") is raised. `time.time() → float` Return the time in seconds since the [epoch](#epoch) as a floating point number. The specific date of the epoch and the handling of [leap seconds](https://en.wikipedia.org/wiki/Leap_second) is platform dependent. On Windows and most Unix systems, the epoch is January 1, 1970, 00:00:00 (UTC) and leap seconds are not counted towards the time in seconds since the epoch. This is commonly referred to as [Unix time](https://en.wikipedia.org/wiki/Unix_time).
To find out what the epoch is on a given platform, look at `gmtime(0)`. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls. The number returned by [`time()`](#time.time "time.time") may be converted into a more common time format (i.e. year, month, day, hour, etc.) in UTC by passing it to the [`gmtime()`](#time.gmtime "time.gmtime") function or in local time by passing it to the [`localtime()`](#time.localtime "time.localtime") function. In both cases a [`struct_time`](#time.struct_time "time.struct_time") object is returned, from which the components of the calendar date may be accessed as attributes. `time.thread_time() → float` Return the value (in fractional seconds) of the sum of the system and user CPU time of the current thread. It does not include time elapsed during sleep. It is thread-specific by definition. The reference point of the returned value is undefined, so that only the difference between the results of two calls in the same thread is valid. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows, Linux, Unix systems supporting `CLOCK_THREAD_CPUTIME_ID`. New in version 3.7. `time.thread_time_ns() → int` Similar to [`thread_time()`](#time.thread_time "time.thread_time") but return time as nanoseconds. New in version 3.7. `time.time_ns() → int` Similar to [`time()`](#time.time "time.time") but returns time as an integer number of nanoseconds since the [epoch](#epoch). New in version 3.7. `time.tzset()` Reset the time conversion rules used by the library routines. The environment variable `TZ` specifies how this is done. It will also set the variables `tzname` (from the `TZ` environment variable), `timezone` (non-DST seconds west of UTC), `altzone` (DST seconds west of UTC) and `daylight` (to 0 if this timezone does not have any daylight saving time rules, or to nonzero if there is a time, past, present or future when daylight saving time applies). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Note Although in many cases, changing the `TZ` environment variable may affect the output of functions like [`localtime()`](#time.localtime "time.localtime") without calling [`tzset()`](#time.tzset "time.tzset"), this behavior should not be relied on. The `TZ` environment variable should contain no whitespace. The standard format of the `TZ` environment variable is (whitespace added for clarity):

```
std offset [dst [offset [,start[/time], end[/time]]]]
```

Where the components are: `std and dst` Three or more alphanumerics giving the timezone abbreviations. These will be propagated into `time.tzname`. `offset` The offset has the form: `± hh[:mm[:ss]]`. This indicates the value added to the local time to arrive at UTC. If preceded by a ‘-’, the timezone is east of the Prime Meridian; otherwise, it is west. If no offset follows dst, summer time is assumed to be one hour ahead of standard time. `start[/time], end[/time]` Indicates when to change to and back from DST. The format of the start and end dates is one of the following: `Jn` The Julian day *n* (1 <= *n* <= 365). Leap days are not counted, so in all years February 28 is day 59 and March 1 is day 60. `n` The zero-based Julian day (0 <= *n* <= 365). Leap days are counted, and it is possible to refer to February 29.
`Mm.n.d` The *d*’th day (0 <= *d* <= 6) of week *n* of month *m* of the year (1 <= *n* <= 5, 1 <= *m* <= 12, where week 5 means “the last *d* day in month *m*” which may occur in either the fourth or the fifth week). Week 1 is the first week in which the *d*’th day occurs. Day zero is a Sunday. `time` has the same format as `offset` except that no leading sign (‘-’ or ‘+’) is allowed. The default, if time is not given, is 02:00:00.

```
>>> os.environ['TZ'] = 'EST+05EDT,M4.1.0,M10.5.0'
>>> time.tzset()
>>> time.strftime('%X %x %Z')
'02:07:36 05/08/03 EDT'
>>> os.environ['TZ'] = 'AEST-10AEDT-11,M10.5.0,M3.5.0'
>>> time.tzset()
>>> time.strftime('%X %x %Z')
'16:08:12 05/08/03 AEST'
```

On many Unix systems (including \*BSD, Linux, Solaris, and Darwin), it is more convenient to use the system’s zoneinfo (*[tzfile(5)](https://manpages.debian.org/tzfile(5))*) database to specify the timezone rules. To do this, set the `TZ` environment variable to the path of the required timezone datafile, relative to the root of the system’s ‘zoneinfo’ timezone database, usually located at `/usr/share/zoneinfo`. For example, `'US/Eastern'`, `'Australia/Melbourne'`, `'Egypt'` or `'Europe/Amsterdam'`.

```
>>> os.environ['TZ'] = 'US/Eastern'
>>> time.tzset()
>>> time.tzname
('EST', 'EDT')
>>> os.environ['TZ'] = 'Egypt'
>>> time.tzset()
>>> time.tzname
('EET', 'EEST')
```

Clock ID Constants ------------------ These constants are used as parameters for [`clock_getres()`](#time.clock_getres "time.clock_getres") and [`clock_gettime()`](#time.clock_gettime "time.clock_gettime"). `time.CLOCK_BOOTTIME` Identical to [`CLOCK_MONOTONIC`](#time.CLOCK_MONOTONIC "time.CLOCK_MONOTONIC"), except it also includes any time that the system is suspended. This allows applications to get a suspend-aware monotonic clock without having to deal with the complications of [`CLOCK_REALTIME`](#time.CLOCK_REALTIME "time.CLOCK_REALTIME"), which may have discontinuities if the time is changed using `settimeofday()` or similar. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.39 or later. New in version 3.7. `time.CLOCK_HIGHRES` The Solaris OS has a `CLOCK_HIGHRES` timer that attempts to use an optimal hardware source, and may give close to nanosecond resolution. `CLOCK_HIGHRES` is the nonadjustable, high-resolution clock. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Solaris. New in version 3.3. `time.CLOCK_MONOTONIC` Clock that cannot be set and represents monotonic time since some unspecified starting point. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `time.CLOCK_MONOTONIC_RAW` Similar to [`CLOCK_MONOTONIC`](#time.CLOCK_MONOTONIC "time.CLOCK_MONOTONIC"), but provides access to a raw hardware-based time that is not subject to NTP adjustments. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.28 and newer, macOS 10.12 and newer. New in version 3.3. `time.CLOCK_PROCESS_CPUTIME_ID` High-resolution per-process timer from the CPU. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `time.CLOCK_PROF` High-resolution per-process timer from the CPU. [Availability](https://docs.python.org/3.9/library/intro.html#availability): FreeBSD, NetBSD 7 or later, OpenBSD. New in version 3.7.
`time.CLOCK_TAI` [International Atomic Time](https://www.nist.gov/pml/time-and-frequency-division/nist-time-frequently-asked-questions-faq#tai). The system must have a current leap second table in order for this to give the correct answer. PTP or NTP software can maintain a leap second table. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux. New in version 3.9. `time.CLOCK_THREAD_CPUTIME_ID` Thread-specific CPU-time clock. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `time.CLOCK_UPTIME` Time whose absolute value is the time the system has been running and not suspended, providing accurate uptime measurement, both absolute and interval. [Availability](https://docs.python.org/3.9/library/intro.html#availability): FreeBSD, OpenBSD 5.5 or later. New in version 3.7. `time.CLOCK_UPTIME_RAW` Clock that increments monotonically, tracking the time since an arbitrary point, unaffected by frequency or time adjustments and not incremented while the system is asleep. [Availability](https://docs.python.org/3.9/library/intro.html#availability): macOS 10.12 and newer. New in version 3.8. The following constant is the only parameter that can be sent to [`clock_settime()`](#time.clock_settime "time.clock_settime"). `time.CLOCK_REALTIME` System-wide real-time clock. Setting this clock requires appropriate privileges. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. Timezone Constants ------------------ `time.altzone` The offset of the local DST timezone, in seconds west of UTC, if one is defined. This is negative if the local DST timezone is east of UTC (as in Western Europe, including the UK). Only use this if `daylight` is nonzero. See note below. `time.daylight` Nonzero if a DST timezone is defined. See note below. `time.timezone` The offset of the local (non-DST) timezone, in seconds west of UTC (negative in most of Western Europe, positive in the US, zero in the UK). See note below. `time.tzname` A tuple of two strings: the first is the name of the local non-DST timezone, the second is the name of the local DST timezone. If no DST timezone is defined, the second string should not be used. See note below. Note For the above Timezone constants ([`altzone`](#time.altzone "time.altzone"), [`daylight`](#time.daylight "time.daylight"), [`timezone`](#time.timezone "time.timezone"), and [`tzname`](#time.tzname "time.tzname")), the value is determined by the timezone rules in effect at module load time or the last time [`tzset()`](#time.tzset "time.tzset") is called and may be incorrect for times in the past. It is recommended to use the `tm_gmtoff` and `tm_zone` results from [`localtime()`](#time.localtime "time.localtime") to obtain timezone information. See also `Module` [`datetime`](datetime#module-datetime "datetime: Basic date and time types.") More object-oriented interface to dates and times. `Module` [`locale`](locale#module-locale "locale: Internationalization services.") Internationalization services. The locale setting affects the interpretation of many format specifiers in [`strftime()`](#time.strftime "time.strftime") and [`strptime()`](#time.strptime "time.strptime"). `Module` [`calendar`](calendar#module-calendar "calendar: Functions for working with calendars, including some emulation of the Unix cal program.") General calendar-related functions.
[`timegm()`](calendar#calendar.timegm "calendar.timegm") is the inverse of [`gmtime()`](#time.gmtime "time.gmtime") from this module. #### Footnotes `1(1,2,3)` The use of `%Z` is now deprecated, but the `%z` escape that expands to the preferred hour/minute offset is not supported by all ANSI C libraries. Also, a strict reading of the original 1982 [**RFC 822**](https://tools.ietf.org/html/rfc822.html) standard calls for a two-digit year (`%y` rather than `%Y`), but practice moved to 4-digit years long before the year 2000. After that, [**RFC 822**](https://tools.ietf.org/html/rfc822.html) became obsolete and the 4-digit year was first recommended by [**RFC 1123**](https://tools.ietf.org/html/rfc1123.html) and then mandated by [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html).
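The conversion functions documented above obey two inverse relationships, which the following sketch checks (an illustrative example only; the exact values printed vary with the platform and the current time):

```
import time

now = time.time()  # seconds since the epoch, as a float

# mktime() is the inverse of localtime(); fractions of a second are dropped,
# so the round trip agrees to within one second
local = time.localtime(now)
assert abs(time.mktime(local) - now) < 1.0

# strptime() parses what strftime() produces; the default strptime() format
# matches the fixed-width layout used by ctime() and asctime()
stamp = time.strftime("%a %b %d %H:%M:%S %Y", local)
parsed = time.strptime(stamp)
assert parsed.tm_year == local.tm_year and parsed.tm_yday == local.tm_yday
```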
python threading — Thread-based parallelism threading — Thread-based parallelism ==================================== **Source code:** [Lib/threading.py](https://github.com/python/cpython/tree/3.9/Lib/threading.py) This module constructs higher-level threading interfaces on top of the lower level [`_thread`](_thread#module-_thread "_thread: Low-level threading API.") module. See also the [`queue`](queue#module-queue "queue: A synchronized queue class.") module. Changed in version 3.7: This module used to be optional; it is now always available. Note While they are not listed below, the `camelCase` names used for some methods and functions in this module in the Python 2.x series are still supported by this module. **CPython implementation detail:** In CPython, due to the [Global Interpreter Lock](../glossary#term-global-interpreter-lock), only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use [`multiprocessing`](multiprocessing#module-multiprocessing "multiprocessing: Process-based parallelism.") or [`concurrent.futures.ProcessPoolExecutor`](concurrent.futures#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor"). However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously. This module defines the following functions: `threading.active_count()` Return the number of [`Thread`](#threading.Thread "threading.Thread") objects currently alive. The returned count is equal to the length of the list returned by [`enumerate()`](#threading.enumerate "threading.enumerate"). `threading.current_thread()` Return the current [`Thread`](#threading.Thread "threading.Thread") object, corresponding to the caller’s thread of control. If the caller’s thread of control was not created through the [`threading`](#module-threading "threading: Thread-based parallelism.") module, a dummy thread object with limited functionality is returned. `threading.excepthook(args, /)` Handle an uncaught exception raised by [`Thread.run()`](#threading.Thread.run "threading.Thread.run"). The *args* argument has the following attributes: * *exc\_type*: Exception type. * *exc\_value*: Exception value, can be `None`. * *exc\_traceback*: Exception traceback, can be `None`. * *thread*: Thread which raised the exception, can be `None`. If *exc\_type* is [`SystemExit`](exceptions#SystemExit "SystemExit"), the exception is silently ignored. Otherwise, the exception is printed out on [`sys.stderr`](sys#sys.stderr "sys.stderr"). If this function raises an exception, [`sys.excepthook()`](sys#sys.excepthook "sys.excepthook") is called to handle it. [`threading.excepthook()`](#threading.excepthook "threading.excepthook") can be overridden to control how uncaught exceptions raised by [`Thread.run()`](#threading.Thread.run "threading.Thread.run") are handled. Storing *exc\_value* using a custom hook can create a reference cycle. It should be cleared explicitly to break the reference cycle when the exception is no longer needed. Storing *thread* using a custom hook can resurrect it if it is set to an object which is being finalized. Avoid storing *thread* after the custom hook completes to avoid resurrecting objects. See also [`sys.excepthook()`](sys#sys.excepthook "sys.excepthook") handles uncaught exceptions. New in version 3.8.
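For instance, a custom hook might report uncaught thread exceptions in its own format. The following is a minimal sketch (the hook name and output format are illustrative assumptions, not part of the API):

```
import threading
import traceback

def log_thread_exception(args):
    # `args` carries exc_type, exc_value, exc_traceback and thread attributes
    name = args.thread.name if args.thread is not None else "<unknown>"
    print(f"Uncaught exception in thread {name}:")
    traceback.print_exception(args.exc_type, args.exc_value, args.exc_traceback)
    # Deliberately keep no references to args.exc_value or args.thread, to
    # avoid the reference-cycle and resurrection issues noted above.

threading.excepthook = log_thread_exception

t = threading.Thread(target=lambda: 1 / 0, name="worker")
t.start()
t.join()  # prints "Uncaught exception in thread worker:" and a traceback
```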
`threading.get_ident()` Return the ‘thread identifier’ of the current thread. This is a nonzero integer. Its value has no direct meaning; it is intended as a magic cookie to be used e.g. to index a dictionary of thread-specific data. Thread identifiers may be recycled when a thread exits and another thread is created. New in version 3.3. `threading.get_native_id()` Return the native integral Thread ID of the current thread assigned by the kernel. This is a non-negative integer. Its value may be used to uniquely identify this particular thread system-wide (until the thread terminates, after which the value may be recycled by the OS). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows, FreeBSD, Linux, macOS, OpenBSD, NetBSD, AIX. New in version 3.8. `threading.enumerate()` Return a list of all [`Thread`](#threading.Thread "threading.Thread") objects currently active. The list includes daemonic threads and dummy thread objects created by [`current_thread()`](#threading.current_thread "threading.current_thread"). It excludes terminated threads and threads that have not yet been started. However, the main thread is always part of the result, even when terminated. `threading.main_thread()` Return the main [`Thread`](#threading.Thread "threading.Thread") object. In normal conditions, the main thread is the thread from which the Python interpreter was started. New in version 3.4. `threading.settrace(func)` Set a trace function for all threads started from the [`threading`](#module-threading "threading: Thread-based parallelism.") module. The *func* will be passed to [`sys.settrace()`](sys#sys.settrace "sys.settrace") for each thread, before its [`run()`](#threading.Thread.run "threading.Thread.run") method is called. `threading.setprofile(func)` Set a profile function for all threads started from the [`threading`](#module-threading "threading: Thread-based parallelism.") module. The *func* will be passed to [`sys.setprofile()`](sys#sys.setprofile "sys.setprofile") for each thread, before its [`run()`](#threading.Thread.run "threading.Thread.run") method is called. `threading.stack_size([size])` Return the thread stack size used when creating new threads. The optional *size* argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive integer value of at least 32,768 (32 KiB). If *size* is not specified, 0 is used. If changing the thread stack size is unsupported, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. If the specified stack size is invalid, a [`ValueError`](exceptions#ValueError "ValueError") is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size - platform documentation should be referred to for more information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows, systems with POSIX threads. 
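Several of the functions above can be seen together in a short sketch (illustrative only; `get_native_id()` is limited to the platforms listed above):

```
import threading

def report():
    me = threading.current_thread()
    print(me.name, threading.get_ident(), threading.get_native_id())

workers = [threading.Thread(target=report) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# In the thread that started the interpreter, these hold by definition:
assert threading.current_thread() is threading.main_thread()
assert threading.active_count() == len(threading.enumerate())
```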
This module also defines the following constant: `threading.TIMEOUT_MAX` The maximum value allowed for the *timeout* parameter of blocking functions ([`Lock.acquire()`](#threading.Lock.acquire "threading.Lock.acquire"), [`RLock.acquire()`](#threading.RLock.acquire "threading.RLock.acquire"), [`Condition.wait()`](#threading.Condition.wait "threading.Condition.wait"), etc.). Specifying a timeout greater than this value will raise an [`OverflowError`](exceptions#OverflowError "OverflowError"). New in version 3.2. This module defines a number of classes, which are detailed in the sections below. The design of this module is loosely based on Java’s threading model. However, where Java makes locks and condition variables basic behavior of every object, they are separate objects in Python. Python’s [`Thread`](#threading.Thread "threading.Thread") class supports a subset of the behavior of Java’s Thread class; currently, there are no priorities, no thread groups, and threads cannot be destroyed, stopped, suspended, resumed, or interrupted. The static methods of Java’s Thread class, when implemented, are mapped to module-level functions. All of the methods described below are executed atomically. Thread-Local Data ----------------- Thread-local data is data whose values are thread specific. To manage thread-local data, just create an instance of [`local`](#threading.local "threading.local") (or a subclass) and store attributes on it:

```
mydata = threading.local()
mydata.x = 1
```

The instance’s values will be different for separate threads. `class threading.local` A class that represents thread-local data. For more details and extensive examples, see the documentation string of the `_threading_local` module. Thread Objects -------------- The [`Thread`](#threading.Thread "threading.Thread") class represents an activity that is run in a separate thread of control. There are two ways to specify the activity: by passing a callable object to the constructor, or by overriding the [`run()`](#threading.Thread.run "threading.Thread.run") method in a subclass. No other methods (except for the constructor) should be overridden in a subclass. In other words, *only* override the `__init__()` and [`run()`](#threading.Thread.run "threading.Thread.run") methods of this class. Once a thread object is created, its activity must be started by calling the thread’s [`start()`](#threading.Thread.start "threading.Thread.start") method. This invokes the [`run()`](#threading.Thread.run "threading.Thread.run") method in a separate thread of control. Once the thread’s activity is started, the thread is considered ‘alive’. It stops being alive when its [`run()`](#threading.Thread.run "threading.Thread.run") method terminates – either normally, or by raising an unhandled exception. The [`is_alive()`](#threading.Thread.is_alive "threading.Thread.is_alive") method tests whether the thread is alive. Other threads can call a thread’s [`join()`](#threading.Thread.join "threading.Thread.join") method. This blocks the calling thread until the thread whose [`join()`](#threading.Thread.join "threading.Thread.join") method is called is terminated. A thread has a name. The name can be passed to the constructor, and read or changed through the [`name`](#threading.Thread.name "threading.Thread.name") attribute. If the [`run()`](#threading.Thread.run "threading.Thread.run") method raises an exception, [`threading.excepthook()`](#threading.excepthook "threading.excepthook") is called to handle it.
By default, [`threading.excepthook()`](#threading.excepthook "threading.excepthook") silently ignores [`SystemExit`](exceptions#SystemExit "SystemExit"). A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the [`daemon`](#threading.Thread.daemon "threading.Thread.daemon") property or the *daemon* constructor argument. Note Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an [`Event`](#threading.Event "threading.Event"). There is a “main thread” object; this corresponds to the initial thread of control in the Python program. It is not a daemon thread. There is the possibility that “dummy thread objects” are created. These are thread objects corresponding to “alien threads”, which are threads of control started outside the threading module, such as directly from C code. Dummy thread objects have limited functionality; they are always considered alive and daemonic, and cannot be [`join()`](#threading.Thread.join "threading.Thread.join")ed. They are never deleted, since it is impossible to detect the termination of alien threads. `class threading.Thread(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)` This constructor should always be called with keyword arguments. Arguments are: *group* should be `None`; reserved for future extension when a `ThreadGroup` class is implemented. *target* is the callable object to be invoked by the [`run()`](#threading.Thread.run "threading.Thread.run") method. Defaults to `None`, meaning nothing is called. *name* is the thread name. By default, a unique name is constructed of the form “Thread-*N*” where *N* is a small decimal number. *args* is the argument tuple for the target invocation. Defaults to `()`. *kwargs* is a dictionary of keyword arguments for the target invocation. Defaults to `{}`. If not `None`, *daemon* explicitly sets whether the thread is daemonic. If `None` (the default), the daemonic property is inherited from the current thread. If the subclass overrides the constructor, it must make sure to invoke the base class constructor (`Thread.__init__()`) before doing anything else to the thread. Changed in version 3.3: Added the *daemon* argument. `start()` Start the thread’s activity. It must be called at most once per thread object. It arranges for the object’s [`run()`](#threading.Thread.run "threading.Thread.run") method to be invoked in a separate thread of control. This method will raise a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") if called more than once on the same thread object. `run()` Method representing the thread’s activity. You may override this method in a subclass. The standard [`run()`](#threading.Thread.run "threading.Thread.run") method invokes the callable object passed to the object’s constructor as the *target* argument, if any, with positional and keyword arguments taken from the *args* and *kwargs* arguments, respectively. `join(timeout=None)` Wait until the thread terminates.
This blocks the calling thread until the thread whose [`join()`](#threading.Thread.join "threading.Thread.join") method is called terminates – either normally or through an unhandled exception – or until the optional timeout occurs. When the *timeout* argument is present and not `None`, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). As [`join()`](#threading.Thread.join "threading.Thread.join") always returns `None`, you must call [`is_alive()`](#threading.Thread.is_alive "threading.Thread.is_alive") after [`join()`](#threading.Thread.join "threading.Thread.join") to decide whether a timeout happened – if the thread is still alive, the [`join()`](#threading.Thread.join "threading.Thread.join") call timed out. When the *timeout* argument is not present or `None`, the operation will block until the thread terminates. A thread can be [`join()`](#threading.Thread.join "threading.Thread.join")ed many times. [`join()`](#threading.Thread.join "threading.Thread.join") raises a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") if an attempt is made to join the current thread as that would cause a deadlock. It is also an error to [`join()`](#threading.Thread.join "threading.Thread.join") a thread before it has been started and attempts to do so raise the same exception. `name` A string used for identification purposes only. It has no semantics. Multiple threads may be given the same name. The initial name is set by the constructor. `getName()` `setName()` Old getter/setter API for [`name`](#threading.Thread.name "threading.Thread.name"); use it directly as a property instead. `ident` The ‘thread identifier’ of this thread or `None` if the thread has not been started. This is a nonzero integer. See the [`get_ident()`](#threading.get_ident "threading.get_ident") function. Thread identifiers may be recycled when a thread exits and another thread is created. The identifier is available even after the thread has exited. `native_id` The native integral thread ID of this thread. This is a non-negative integer, or `None` if the thread has not been started. See the [`get_native_id()`](#threading.get_native_id "threading.get_native_id") function. This represents the Thread ID (`TID`) as assigned to the thread by the OS (kernel). Its value may be used to uniquely identify this particular thread system-wide (until the thread terminates, after which the value may be recycled by the OS). Note Similar to Process IDs, Thread IDs are only valid (guaranteed unique system-wide) from the time the thread is created until the thread has been terminated. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Requires [`get_native_id()`](#threading.get_native_id "threading.get_native_id") function. New in version 3.8. `is_alive()` Return whether the thread is alive. This method returns `True` just before the [`run()`](#threading.Thread.run "threading.Thread.run") method starts until just after the [`run()`](#threading.Thread.run "threading.Thread.run") method terminates. The module function [`enumerate()`](#threading.enumerate "threading.enumerate") returns a list of all alive threads. `daemon` A boolean value indicating whether this thread is a daemon thread (True) or not (False). This must be set before [`start()`](#threading.Thread.start "threading.Thread.start") is called, otherwise [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. 
Its initial value is inherited from the creating thread; the main thread is not a daemon thread and therefore all threads created in the main thread default to [`daemon`](#threading.Thread.daemon "threading.Thread.daemon") = `False`. The entire Python program exits when no alive non-daemon threads are left. `isDaemon()` `setDaemon()` Old getter/setter API for [`daemon`](#threading.Thread.daemon "threading.Thread.daemon"); use it directly as a property instead. Lock Objects ------------ A primitive lock is a synchronization primitive that is not owned by a particular thread when locked. In Python, it is currently the lowest level synchronization primitive available, implemented directly by the [`_thread`](_thread#module-_thread "_thread: Low-level threading API.") extension module. A primitive lock is in one of two states, “locked” or “unlocked”. It is created in the unlocked state. It has two basic methods, [`acquire()`](#threading.Lock.acquire "threading.Lock.acquire") and [`release()`](#threading.Lock.release "threading.Lock.release"). When the state is unlocked, [`acquire()`](#threading.Lock.acquire "threading.Lock.acquire") changes the state to locked and returns immediately. When the state is locked, [`acquire()`](#threading.Lock.acquire "threading.Lock.acquire") blocks until a call to [`release()`](#threading.Lock.release "threading.Lock.release") in another thread changes it to unlocked, then the [`acquire()`](#threading.Lock.acquire "threading.Lock.acquire") call resets it to locked and returns. The [`release()`](#threading.Lock.release "threading.Lock.release") method should only be called in the locked state; it changes the state to unlocked and returns immediately. If an attempt is made to release an unlocked lock, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") will be raised. Locks also support the [context management protocol](#with-locks). When more than one thread is blocked in [`acquire()`](#threading.Lock.acquire "threading.Lock.acquire") waiting for the state to turn to unlocked, only one thread proceeds when a [`release()`](#threading.Lock.release "threading.Lock.release") call resets the state to unlocked; which one of the waiting threads proceeds is not defined, and may vary across implementations. All methods are executed atomically. `class threading.Lock` The class implementing primitive lock objects. Once a thread has acquired a lock, subsequent attempts to acquire it block, until it is released; any thread may release it. Note that `Lock` is actually a factory function which returns an instance of the most efficient version of the concrete Lock class that is supported by the platform. `acquire(blocking=True, timeout=-1)` Acquire a lock, blocking or non-blocking. When invoked with the *blocking* argument set to `True` (the default), block until the lock is unlocked, then set it to locked and return `True`. When invoked with the *blocking* argument set to `False`, do not block. If a call with *blocking* set to `True` would block, return `False` immediately; otherwise, set the lock to locked and return `True`. When invoked with the floating-point *timeout* argument set to a positive value, block for at most the number of seconds specified by *timeout* and as long as the lock cannot be acquired. A *timeout* argument of `-1` specifies an unbounded wait. It is forbidden to specify a *timeout* when *blocking* is false. The return value is `True` if the lock is acquired successfully, `False` if not (for example if the *timeout* expired). 
Changed in version 3.2: The *timeout* parameter is new. Changed in version 3.2: Lock acquisition can now be interrupted by signals on POSIX if the underlying threading implementation supports it. `release()` Release a lock. This can be called from any thread, not only the thread which has acquired the lock. When the lock is locked, reset it to unlocked, and return. If any other threads are blocked waiting for the lock to become unlocked, allow exactly one of them to proceed. When invoked on an unlocked lock, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. There is no return value. `locked()` Return `True` if the lock is acquired. RLock Objects ------------- A reentrant lock is a synchronization primitive that may be acquired multiple times by the same thread. Internally, it uses the concepts of “owning thread” and “recursion level” in addition to the locked/unlocked state used by primitive locks. In the locked state, some thread owns the lock; in the unlocked state, no thread owns it. To lock the lock, a thread calls its [`acquire()`](#threading.RLock.acquire "threading.RLock.acquire") method; this returns once the thread owns the lock. To unlock the lock, a thread calls its [`release()`](#threading.Lock.release "threading.Lock.release") method. [`acquire()`](#threading.Lock.acquire "threading.Lock.acquire")/[`release()`](#threading.Lock.release "threading.Lock.release") call pairs may be nested; only the final [`release()`](#threading.Lock.release "threading.Lock.release") (the [`release()`](#threading.Lock.release "threading.Lock.release") of the outermost pair) resets the lock to unlocked and allows another thread blocked in [`acquire()`](#threading.Lock.acquire "threading.Lock.acquire") to proceed. Reentrant locks also support the [context management protocol](#with-locks). `class threading.RLock` This class implements reentrant lock objects. A reentrant lock must be released by the thread that acquired it. Once a thread has acquired a reentrant lock, the same thread may acquire it again without blocking; the thread must release it once for each time it has acquired it. Note that `RLock` is actually a factory function which returns an instance of the most efficient version of the concrete RLock class that is supported by the platform. `acquire(blocking=True, timeout=-1)` Acquire a lock, blocking or non-blocking. When invoked without arguments: if this thread already owns the lock, increment the recursion level by one, and return immediately. Otherwise, if another thread owns the lock, block until the lock is unlocked. Once the lock is unlocked (not owned by any thread), then grab ownership, set the recursion level to one, and return. If more than one thread is blocked waiting until the lock is unlocked, only one at a time will be able to grab ownership of the lock. There is no return value in this case. When invoked with the *blocking* argument set to true, do the same thing as when called without arguments, and return `True`. When invoked with the *blocking* argument set to false, do not block. If a call without an argument would block, return `False` immediately; otherwise, do the same thing as when called without arguments, and return `True`. When invoked with the floating-point *timeout* argument set to a positive value, block for at most the number of seconds specified by *timeout* and as long as the lock cannot be acquired. Return `True` if the lock has been acquired, `False` if the timeout has elapsed. Changed in version 3.2: The *timeout* parameter is new.
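The recursion level behaviour described above can be seen in a brief sketch (illustrative only), contrasted with a primitive lock:

```
import threading

rlock = threading.RLock()
with rlock:        # outer acquire: recursion level 1
    with rlock:    # same thread re-acquires without blocking: level 2
        pass       # inner release: back to level 1
# outer release: the lock is unlocked again

plain = threading.Lock()
plain.acquire()
print(plain.acquire(blocking=False))  # False: a plain Lock is not reentrant
plain.release()
```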
`release()` Release a lock, decrementing the recursion level. If after the decrement it is zero, reset the lock to unlocked (not owned by any thread), and if any other threads are blocked waiting for the lock to become unlocked, allow exactly one of them to proceed. If after the decrement the recursion level is still nonzero, the lock remains locked and owned by the calling thread. Only call this method when the calling thread owns the lock. A [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised if this method is called when the lock is unlocked. There is no return value. Condition Objects ----------------- A condition variable is always associated with some kind of lock; this can be passed in or one will be created by default. Passing one in is useful when several condition variables must share the same lock. The lock is part of the condition object: you don’t have to track it separately. A condition variable obeys the [context management protocol](#with-locks): using the `with` statement acquires the associated lock for the duration of the enclosed block. The [`acquire()`](#threading.Condition.acquire "threading.Condition.acquire") and [`release()`](#threading.Condition.release "threading.Condition.release") methods also call the corresponding methods of the associated lock. Other methods must be called with the associated lock held. The [`wait()`](#threading.Condition.wait "threading.Condition.wait") method releases the lock, and then blocks until another thread awakens it by calling [`notify()`](#threading.Condition.notify "threading.Condition.notify") or [`notify_all()`](#threading.Condition.notify_all "threading.Condition.notify_all"). Once awakened, [`wait()`](#threading.Condition.wait "threading.Condition.wait") re-acquires the lock and returns. It is also possible to specify a timeout. The [`notify()`](#threading.Condition.notify "threading.Condition.notify") method wakes up one of the threads waiting for the condition variable, if any are waiting. The [`notify_all()`](#threading.Condition.notify_all "threading.Condition.notify_all") method wakes up all threads waiting for the condition variable. Note: the [`notify()`](#threading.Condition.notify "threading.Condition.notify") and [`notify_all()`](#threading.Condition.notify_all "threading.Condition.notify_all") methods don’t release the lock; this means that the thread or threads awakened will not return from their [`wait()`](#threading.Condition.wait "threading.Condition.wait") call immediately, but only when the thread that called [`notify()`](#threading.Condition.notify "threading.Condition.notify") or [`notify_all()`](#threading.Condition.notify_all "threading.Condition.notify_all") finally relinquishes ownership of the lock. The typical programming style using condition variables uses the lock to synchronize access to some shared state; threads that are interested in a particular change of state call [`wait()`](#threading.Condition.wait "threading.Condition.wait") repeatedly until they see the desired state, while threads that modify the state call [`notify()`](#threading.Condition.notify "threading.Condition.notify") or [`notify_all()`](#threading.Condition.notify_all "threading.Condition.notify_all") when they change the state in such a way that it could possibly be a desired state for one of the waiters. 
For example, the following code is a generic producer-consumer situation with unlimited buffer capacity:

```
# Consume one item
with cv:
    while not an_item_is_available():
        cv.wait()
    get_an_available_item()

# Produce one item
with cv:
    make_an_item_available()
    cv.notify()
```

The `while` loop checking for the application’s condition is necessary because [`wait()`](#threading.Condition.wait "threading.Condition.wait") can return after an arbitrarily long time, and the condition which prompted the [`notify()`](#threading.Condition.notify "threading.Condition.notify") call may no longer hold true. This is inherent to multi-threaded programming. The [`wait_for()`](#threading.Condition.wait_for "threading.Condition.wait_for") method can be used to automate the condition checking, and eases the computation of timeouts:

```
# Consume an item
with cv:
    cv.wait_for(an_item_is_available)
    get_an_available_item()
```

To choose between [`notify()`](#threading.Condition.notify "threading.Condition.notify") and [`notify_all()`](#threading.Condition.notify_all "threading.Condition.notify_all"), consider whether one state change can be interesting for only one or several waiting threads. E.g. in a typical producer-consumer situation, adding one item to the buffer only needs to wake up one consumer thread. `class threading.Condition(lock=None)` This class implements condition variable objects. A condition variable allows one or more threads to wait until they are notified by another thread. If the *lock* argument is given and not `None`, it must be a [`Lock`](#threading.Lock "threading.Lock") or [`RLock`](#threading.RLock "threading.RLock") object, and it is used as the underlying lock. Otherwise, a new [`RLock`](#threading.RLock "threading.RLock") object is created and used as the underlying lock. Changed in version 3.3: changed from a factory function to a class. `acquire(*args)` Acquire the underlying lock. This method calls the corresponding method on the underlying lock; the return value is whatever that method returns. `release()` Release the underlying lock. This method calls the corresponding method on the underlying lock; there is no return value. `wait(timeout=None)` Wait until notified or until a timeout occurs. If the calling thread has not acquired the lock when this method is called, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. This method releases the underlying lock, and then blocks until it is awakened by a [`notify()`](#threading.Condition.notify "threading.Condition.notify") or [`notify_all()`](#threading.Condition.notify_all "threading.Condition.notify_all") call for the same condition variable in another thread, or until the optional timeout occurs. Once awakened or timed out, it re-acquires the lock and returns. When the *timeout* argument is present and not `None`, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). When the underlying lock is an [`RLock`](#threading.RLock "threading.RLock"), it is not released using its [`release()`](#threading.Condition.release "threading.Condition.release") method, since this may not actually unlock the lock when it was acquired multiple times recursively. Instead, an internal interface of the [`RLock`](#threading.RLock "threading.RLock") class is used, which really unlocks it even when it has been recursively acquired several times. Another internal interface is then used to restore the recursion level when the lock is reacquired.
The return value is `True` unless a given *timeout* expired, in which case it is `False`. Changed in version 3.2: Previously, the method always returned `None`. `wait_for(predicate, timeout=None)` Wait until a condition evaluates to true. *predicate* should be a callable whose result will be interpreted as a boolean value. A *timeout* may be provided giving the maximum time to wait. This utility method may call [`wait()`](#threading.Condition.wait "threading.Condition.wait") repeatedly until the predicate is satisfied, or until a timeout occurs. The return value is the last return value of the predicate and will evaluate to `False` if the method timed out. Ignoring the timeout feature, calling this method is roughly equivalent to writing:

```
while not predicate():
    cv.wait()
```

Therefore, the same rules apply as with [`wait()`](#threading.Condition.wait "threading.Condition.wait"): The lock must be held when called and is re-acquired on return. The predicate is evaluated with the lock held. New in version 3.2. `notify(n=1)` By default, wake up one thread waiting on this condition, if any. If the calling thread has not acquired the lock when this method is called, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. This method wakes up at most *n* of the threads waiting for the condition variable; it is a no-op if no threads are waiting. The current implementation wakes up exactly *n* threads, if at least *n* threads are waiting. However, it’s not safe to rely on this behavior. A future, optimized implementation may occasionally wake up more than *n* threads. Note: an awakened thread does not actually return from its [`wait()`](#threading.Condition.wait "threading.Condition.wait") call until it can reacquire the lock. Since [`notify()`](#threading.Condition.notify "threading.Condition.notify") does not release the lock, its caller should. `notify_all()` Wake up all threads waiting on this condition. This method acts like [`notify()`](#threading.Condition.notify "threading.Condition.notify"), but wakes up all waiting threads instead of one. If the calling thread has not acquired the lock when this method is called, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. Semaphore Objects ----------------- This is one of the oldest synchronization primitives in the history of computer science, invented by the early Dutch computer scientist Edsger W. Dijkstra (he used the names `P()` and `V()` instead of [`acquire()`](#threading.Semaphore.acquire "threading.Semaphore.acquire") and [`release()`](#threading.Semaphore.release "threading.Semaphore.release")). A semaphore manages an internal counter which is decremented by each [`acquire()`](#threading.Semaphore.acquire "threading.Semaphore.acquire") call and incremented by each [`release()`](#threading.Semaphore.release "threading.Semaphore.release") call. The counter can never go below zero; when [`acquire()`](#threading.Semaphore.acquire "threading.Semaphore.acquire") finds that it is zero, it blocks, waiting until some other thread calls [`release()`](#threading.Semaphore.release "threading.Semaphore.release"). Semaphores also support the [context management protocol](#with-locks). `class threading.Semaphore(value=1)` This class implements semaphore objects.
A semaphore manages an atomic counter representing the number of [`release()`](#threading.Semaphore.release "threading.Semaphore.release") calls minus the number of [`acquire()`](#threading.Semaphore.acquire "threading.Semaphore.acquire") calls, plus an initial value. The [`acquire()`](#threading.Semaphore.acquire "threading.Semaphore.acquire") method blocks if necessary until it can return without making the counter negative. The optional argument gives the initial *value* for the internal counter; it defaults to `1`. If the *value* given is less than 0, [`ValueError`](exceptions#ValueError "ValueError") is raised. Changed in version 3.3: changed from a factory function to a class. `acquire(blocking=True, timeout=None)` Acquire a semaphore. When invoked without arguments: * If the internal counter is larger than zero on entry, decrement it by one and return `True` immediately. * If the internal counter is zero on entry, block until awoken by a call to [`release()`](#threading.Semaphore.release "threading.Semaphore.release"). Once awoken (and the counter is greater than 0), decrement the counter by 1 and return `True`. Exactly one thread will be awoken by each call to [`release()`](#threading.Semaphore.release "threading.Semaphore.release"). The order in which threads are awoken should not be relied on. When invoked with *blocking* set to false, do not block. If a call without an argument would block, return `False` immediately; otherwise, do the same thing as when called without arguments, and return `True`. When invoked with a *timeout* other than `None`, it will block for at most *timeout* seconds. If acquire does not complete successfully in that interval, return `False`. Return `True` otherwise. Changed in version 3.2: The *timeout* parameter is new. `release(n=1)` Release a semaphore, incrementing the internal counter by *n*. When it was zero on entry and other threads are waiting for it to become larger than zero again, wake up *n* of those threads. Changed in version 3.9: Added the *n* parameter to release multiple waiting threads at once. `class threading.BoundedSemaphore(value=1)` Class implementing bounded semaphore objects. A bounded semaphore checks to make sure its current value doesn’t exceed its initial value. If it does, [`ValueError`](exceptions#ValueError "ValueError") is raised. In most situations semaphores are used to guard resources with limited capacity. If the semaphore is released too many times it’s a sign of a bug. If not given, *value* defaults to 1. Changed in version 3.3: changed from a factory function to a class. ### [`Semaphore`](#threading.Semaphore "threading.Semaphore") Example Semaphores are often used to guard resources with limited capacity, for example, a database server. In any situation where the size of the resource is fixed, you should use a bounded semaphore. Before spawning any worker threads, your main thread would initialize the semaphore:

```
maxconnections = 5
# ...
pool_sema = BoundedSemaphore(value=maxconnections)
```

Once spawned, worker threads call the semaphore’s acquire and release methods when they need to connect to the server:

```
with pool_sema:
    conn = connectdb()
    try:
        # ... use connection ...
    finally:
        conn.close()
```

The use of a bounded semaphore reduces the chance that a programming error which causes the semaphore to be released more than it’s acquired will go undetected.
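Since `connectdb()` above is only a placeholder, a self-contained variant of the same pattern might look like the following sketch (the worker function and the sleep are illustrative stand-ins for real connection work):

```
import threading
import time

maxconnections = 5
pool_sema = threading.BoundedSemaphore(value=maxconnections)

def worker(n):
    with pool_sema:      # at most five workers hold a "connection" at once
        print(f"worker {n} connected")
        time.sleep(0.1)  # stand-in for work done over the connection

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```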
Event Objects ------------- This is one of the simplest mechanisms for communication between threads: one thread signals an event and other threads wait for it. An event object manages an internal flag that can be set to true with the [`set()`](#threading.Event.set "threading.Event.set") method and reset to false with the [`clear()`](#threading.Event.clear "threading.Event.clear") method. The [`wait()`](#threading.Event.wait "threading.Event.wait") method blocks until the flag is true. `class threading.Event` Class implementing event objects. An event manages a flag that can be set to true with the [`set()`](#threading.Event.set "threading.Event.set") method and reset to false with the [`clear()`](#threading.Event.clear "threading.Event.clear") method. The [`wait()`](#threading.Event.wait "threading.Event.wait") method blocks until the flag is true. The flag is initially false. Changed in version 3.3: changed from a factory function to a class. `is_set()` Return `True` if and only if the internal flag is true. `set()` Set the internal flag to true. All threads waiting for it to become true are awakened. Threads that call [`wait()`](#threading.Event.wait "threading.Event.wait") once the flag is true will not block at all. `clear()` Reset the internal flag to false. Subsequently, threads calling [`wait()`](#threading.Event.wait "threading.Event.wait") will block until [`set()`](#threading.Event.set "threading.Event.set") is called to set the internal flag to true again. `wait(timeout=None)` Block until the internal flag is true. If the internal flag is true on entry, return immediately. Otherwise, block until another thread calls [`set()`](#threading.Event.set "threading.Event.set") to set the flag to true, or until the optional timeout occurs. When the timeout argument is present and not `None`, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). This method returns `True` if and only if the internal flag has been set to true, either before the wait call or after the wait starts, so it will always return `True` except if a timeout is given and the operation times out. Changed in version 3.1: Previously, the method always returned `None`. Timer Objects ------------- This class represents an action that should be run only after a certain amount of time has passed — a timer. [`Timer`](#threading.Timer "threading.Timer") is a subclass of [`Thread`](#threading.Thread "threading.Thread") and as such also functions as an example of creating custom threads. Timers are started, as with threads, by calling their `start()` method. The timer can be stopped (before its action has begun) by calling the [`cancel()`](#threading.Timer.cancel "threading.Timer.cancel") method. The interval the timer will wait before executing its action may not be exactly the same as the interval specified by the user. For example:

```
def hello():
    print("hello, world")

t = Timer(30.0, hello)
t.start()  # after 30 seconds, "hello, world" will be printed
```

`class threading.Timer(interval, function, args=None, kwargs=None)` Create a timer that will run *function* with arguments *args* and keyword arguments *kwargs*, after *interval* seconds have passed. If *args* is `None` (the default) then an empty list will be used. If *kwargs* is `None` (the default) then an empty dict will be used. Changed in version 3.3: changed from a factory function to a class. `cancel()` Stop the timer, and cancel the execution of the timer’s action.
This will only work if the timer is still in its waiting stage. Barrier Objects --------------- New in version 3.2. This class provides a simple synchronization primitive for use by a fixed number of threads that need to wait for each other. Each of the threads tries to pass the barrier by calling the [`wait()`](#threading.Barrier.wait "threading.Barrier.wait") method and will block until all of the threads have made their [`wait()`](#threading.Barrier.wait "threading.Barrier.wait") calls. At this point, the threads are released simultaneously. The barrier can be reused any number of times for the same number of threads. As an example, here is a simple way to synchronize a client and server thread: ``` b = Barrier(2, timeout=5) def server(): start_server() b.wait() while True: connection = accept_connection() process_server_connection(connection) def client(): b.wait() while True: connection = make_connection() process_client_connection(connection) ``` `class threading.Barrier(parties, action=None, timeout=None)` Create a barrier object for *parties* number of threads. An *action*, when provided, is a callable to be called by one of the threads when they are released. *timeout* is the default timeout value if none is specified for the [`wait()`](#threading.Barrier.wait "threading.Barrier.wait") method. `wait(timeout=None)` Pass the barrier. When all the threads party to the barrier have called this function, they are all released simultaneously. If a *timeout* is provided, it is used in preference to any that was supplied to the class constructor. The return value is an integer in the range 0 to *parties* – 1, different for each thread. This can be used to select a thread to do some special housekeeping, e.g.: ``` i = barrier.wait() if i == 0: # Only one thread needs to print this print("passed the barrier") ``` If an *action* was provided to the constructor, one of the threads will have called it prior to being released. Should this call raise an error, the barrier is put into the broken state. If the call times out, the barrier is put into the broken state. This method may raise a [`BrokenBarrierError`](#threading.BrokenBarrierError "threading.BrokenBarrierError") exception if the barrier is broken or reset while a thread is waiting. `reset()` Return the barrier to the default, empty state. Any threads waiting on it will receive the [`BrokenBarrierError`](#threading.BrokenBarrierError "threading.BrokenBarrierError") exception. Note that using this function may require some external synchronization if there are other threads whose state is unknown. If a barrier is broken it may be better to just leave it and create a new one. `abort()` Put the barrier into a broken state. This causes any active or future calls to [`wait()`](#threading.Barrier.wait "threading.Barrier.wait") to fail with the [`BrokenBarrierError`](#threading.BrokenBarrierError "threading.BrokenBarrierError"). Use this for example if one of the threads needs to abort, to avoid deadlocking the application. It may be preferable to simply create the barrier with a sensible *timeout* value to automatically guard against one of the threads going awry. `parties` The number of threads required to pass the barrier. `n_waiting` The number of threads currently waiting in the barrier. `broken` A boolean that is `True` if the barrier is in the broken state. 
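A broken barrier surfaces as the exception documented next. The following sketch (the thread counts and timing here are arbitrary) shows [`abort()`](#threading.Barrier.abort "threading.Barrier.abort") releasing waiters that would otherwise deadlock:
```
import threading
import time

b = threading.Barrier(3)

def worker(n):
    try:
        b.wait()
        print(f"worker {n} passed the barrier")
    except threading.BrokenBarrierError:
        print(f"worker {n}: barrier was broken, giving up")

# Start only two of the three required parties, so wait() cannot complete.
threads = [threading.Thread(target=worker, args=(n,)) for n in range(2)]
for t in threads:
    t.start()

time.sleep(0.5)  # let both workers reach wait()
b.abort()        # both waiters now raise BrokenBarrierError
for t in threads:
    t.join()
```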
`exception threading.BrokenBarrierError` This exception, a subclass of [`RuntimeError`](exceptions#RuntimeError "RuntimeError"), is raised when the [`Barrier`](#threading.Barrier "threading.Barrier") object is reset or broken. Using locks, conditions, and semaphores in the `with` statement --------------------------------------------------------------- All of the objects provided by this module that have `acquire()` and `release()` methods can be used as context managers for a [`with`](../reference/compound_stmts#with) statement. The `acquire()` method will be called when the block is entered, and `release()` will be called when the block is exited. Hence, the following snippet: ``` with some_lock: # do something... ``` is equivalent to: ``` some_lock.acquire() try: # do something... finally: some_lock.release() ``` Currently, [`Lock`](#threading.Lock "threading.Lock"), [`RLock`](#threading.RLock "threading.RLock"), [`Condition`](#threading.Condition "threading.Condition"), [`Semaphore`](#threading.Semaphore "threading.Semaphore"), and [`BoundedSemaphore`](#threading.BoundedSemaphore "threading.BoundedSemaphore") objects may be used as [`with`](../reference/compound_stmts#with) statement context managers.
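Because [`RLock`](#threading.RLock "threading.RLock") is reentrant, the context-manager form can also be nested within a single thread; a brief sketch:
```
import threading

rlock = threading.RLock()

def outer():
    with rlock:   # acquired once
        inner()   # the same thread may re-acquire an RLock

def inner():
    with rlock:   # nested acquire; released again on block exit
        print("both levels hold the lock")

outer()
```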
python email.contentmanager: Managing MIME Content email.contentmanager: Managing MIME Content =========================================== **Source code:** [Lib/email/contentmanager.py](https://github.com/python/cpython/tree/3.9/Lib/email/contentmanager.py) New in version 3.6: [1](#id2) `class email.contentmanager.ContentManager` Base class for content managers. Provides the standard registry mechanisms to register converters between MIME content and other representations, as well as the `get_content` and `set_content` dispatch methods. `get_content(msg, *args, **kw)` Look up a handler function based on the `mimetype` of *msg* (see next paragraph), call it, passing through all arguments, and return the result of the call. The expectation is that the handler will extract the payload from *msg* and return an object that encodes information about the extracted data. To find the handler, look for the following keys in the registry, stopping with the first one found: * the string representing the full MIME type (`maintype/subtype`) * the string representing the `maintype` * the empty string If none of these keys produce a handler, raise a [`KeyError`](exceptions#KeyError "KeyError") for the full MIME type. `set_content(msg, obj, *args, **kw)` If the `maintype` is `multipart`, raise a [`TypeError`](exceptions#TypeError "TypeError"); otherwise look up a handler function based on the type of *obj* (see next paragraph), call [`clear_content()`](email.message#email.message.EmailMessage.clear_content "email.message.EmailMessage.clear_content") on the *msg*, and call the handler function, passing through all arguments. The expectation is that the handler will transform and store *obj* into *msg*, possibly making other changes to *msg* as well, such as adding various MIME headers to encode information needed to interpret the stored data. To find the handler, obtain the type of *obj* (`typ = type(obj)`), and look for the following keys in the registry, stopping with the first one found: * the type itself (`typ`) * the type’s fully qualified name (`typ.__module__ + '.' + typ.__qualname__`). * the type’s qualname (`typ.__qualname__`) * the type’s name (`typ.__name__`). If none of the above match, repeat all of the checks above for each of the types in the [MRO](../glossary#term-mro) (`typ.__mro__`). Finally, if no other key yields a handler, check for a handler for the key `None`. If there is no handler for `None`, raise a [`KeyError`](exceptions#KeyError "KeyError") for the fully qualified name of the type. Also add a *MIME-Version* header if one is not present (see also [`MIMEPart`](email.message#email.message.MIMEPart "email.message.MIMEPart")). `add_get_handler(key, handler)` Record the function *handler* as the handler for *key*. For the possible values of *key*, see [`get_content()`](#email.contentmanager.get_content "email.contentmanager.get_content"). `add_set_handler(typekey, handler)` Record *handler* as the function to call when an object of a type matching *typekey* is passed to [`set_content()`](#email.contentmanager.set_content "email.contentmanager.set_content"). For the possible values of *typekey*, see [`set_content()`](#email.contentmanager.set_content "email.contentmanager.set_content"). Content Manager Instances ------------------------- Currently the email package provides only one concrete content manager, [`raw_data_manager`](#email.contentmanager.raw_data_manager "email.contentmanager.raw_data_manager"), although more may be added in the future. 
[`raw_data_manager`](#email.contentmanager.raw_data_manager "email.contentmanager.raw_data_manager") is the [`content_manager`](email.policy#email.policy.EmailPolicy.content_manager "email.policy.EmailPolicy.content_manager") provided by [`EmailPolicy`](email.policy#email.policy.EmailPolicy "email.policy.EmailPolicy") and its derivatives. `email.contentmanager.raw_data_manager` This content manager provides only a minimum interface beyond that provided by [`Message`](email.compat32-message#email.message.Message "email.message.Message") itself: it deals only with text, raw byte strings, and [`Message`](email.compat32-message#email.message.Message "email.message.Message") objects. Nevertheless, it provides significant advantages compared to the base API: `get_content` on a text part will return a unicode string without the application needing to manually decode it, `set_content` provides a rich set of options for controlling the headers added to a part and controlling the content transfer encoding, and it enables the use of the various `add_` methods, thereby simplifying the creation of multipart messages. `email.contentmanager.get_content(msg, errors='replace')` Return the payload of the part as either a string (for `text` parts), an [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") object (for `message/rfc822` parts), or a `bytes` object (for all other non-multipart types). Raise a [`KeyError`](exceptions#KeyError "KeyError") if called on a `multipart`. If the part is a `text` part and *errors* is specified, use it as the error handler when decoding the payload to unicode. The default error handler is `replace`. `email.contentmanager.set_content(msg, <'str'>, subtype="plain", charset='utf-8', cte=None, disposition=None, filename=None, cid=None, params=None, headers=None)` `email.contentmanager.set_content(msg, <'bytes'>, maintype, subtype, cte="base64", disposition=None, filename=None, cid=None, params=None, headers=None)` `email.contentmanager.set_content(msg, <'EmailMessage'>, cte=None, disposition=None, filename=None, cid=None, params=None, headers=None)` Add headers and payload to *msg*: Add a *Content-Type* header with a `maintype/subtype` value. * For `str`, set the MIME `maintype` to `text`, and set the subtype to *subtype* if it is specified, or `plain` if it is not. * For `bytes`, use the specified *maintype* and *subtype*, or raise a [`TypeError`](exceptions#TypeError "TypeError") if they are not specified. * For [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") objects, set the maintype to `message`, and set the subtype to *subtype* if it is specified or `rfc822` if it is not. If *subtype* is `partial`, raise an error (`bytes` objects must be used to construct `message/partial` parts). If *charset* is provided (which is valid only for `str`), encode the string to bytes using the specified character set. The default is `utf-8`. If the specified *charset* is a known alias for a standard MIME charset name, use the standard charset instead. If *cte* is set, encode the payload using the specified content transfer encoding, and set the *Content-Transfer-Encoding* header to that value. Possible values for *cte* are `quoted-printable`, `base64`, `7bit`, `8bit`, and `binary`. If the input cannot be encoded in the specified encoding (for example, specifying a *cte* of `7bit` for an input that contains non-ASCII values), raise a [`ValueError`](exceptions#ValueError "ValueError"). 
* For `str` objects, if *cte* is not set use heuristics to determine the most compact encoding. * For [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage"), per [**RFC 2046**](https://tools.ietf.org/html/rfc2046.html), raise an error if a *cte* of `quoted-printable` or `base64` is requested for *subtype* `rfc822`, and for any *cte* other than `7bit` for *subtype* `external-body`. For `message/rfc822`, use `8bit` if *cte* is not specified. For all other values of *subtype*, use `7bit`. Note A *cte* of `binary` does not actually work correctly yet. The `EmailMessage` object as modified by `set_content` is correct, but [`BytesGenerator`](email.generator#email.generator.BytesGenerator "email.generator.BytesGenerator") does not serialize it correctly. If *disposition* is set, use it as the value of the *Content-Disposition* header. If not specified, and *filename* is specified, add the header with the value `attachment`. If *disposition* is not specified and *filename* is also not specified, do not add the header. The only valid values for *disposition* are `attachment` and `inline`. If *filename* is specified, use it as the value of the `filename` parameter of the *Content-Disposition* header. If *cid* is specified, add a *Content-ID* header with *cid* as its value. If *params* is specified, iterate its `items` method and use the resulting `(key, value)` pairs to set additional parameters on the *Content-Type* header. If *headers* is specified and is a list of strings of the form `headername: headervalue` or a list of `header` objects (distinguished from strings by having a `name` attribute), add the headers to *msg*. #### Footnotes `1` Originally added in 3.4 as a [provisional module](../glossary#term-provisional-package) python pwd — The password database pwd — The password database =========================== This module provides access to the Unix user account and password database. It is available on all Unix versions. Password database entries are reported as a tuple-like object, whose attributes correspond to the members of the `passwd` structure (Attribute field below, see `<pwd.h>`): | Index | Attribute | Meaning | | --- | --- | --- | | 0 | `pw_name` | Login name | | 1 | `pw_passwd` | Optional encrypted password | | 2 | `pw_uid` | Numerical user ID | | 3 | `pw_gid` | Numerical group ID | | 4 | `pw_gecos` | User name or comment field | | 5 | `pw_dir` | User home directory | | 6 | `pw_shell` | User command interpreter | The uid and gid items are integers, all others are strings. [`KeyError`](exceptions#KeyError "KeyError") is raised if the entry asked for cannot be found. Note In traditional Unix the field `pw_passwd` usually contains a password encrypted with a DES derived algorithm (see module [`crypt`](crypt#module-crypt "crypt: The crypt() function used to check Unix passwords. (deprecated) (Unix)")). However most modern unices use a so-called *shadow password* system. On those unices the *pw\_passwd* field only contains an asterisk (`'*'`) or the letter `'x'` where the encrypted password is stored in a file `/etc/shadow` which is not world readable. Whether the *pw\_passwd* field contains anything useful is system-dependent. If available, the [`spwd`](spwd#module-spwd "spwd: The shadow password database (getspnam() and friends). (deprecated) (Unix)") module should be used where access to the encrypted password is required. It defines the following items: `pwd.getpwuid(uid)` Return the password database entry for the given numeric user ID. 
`pwd.getpwnam(name)` Return the password database entry for the given user name. `pwd.getpwall()` Return a list of all available password database entries, in arbitrary order. See also `Module` [`grp`](grp#module-grp "grp: The group database (getgrnam() and friends). (Unix)") An interface to the group database, similar to this. `Module` [`spwd`](spwd#module-spwd "spwd: The shadow password database (getspnam() and friends). (deprecated) (Unix)") An interface to the shadow password database, similar to this. python compileall — Byte-compile Python libraries compileall — Byte-compile Python libraries ========================================== **Source code:** [Lib/compileall.py](https://github.com/python/cpython/tree/3.9/Lib/compileall.py) This module provides some utility functions to support installing Python libraries. These functions compile Python source files in a directory tree. This module can be used to create the cached byte-code files at library installation time, which makes them available for use even by users who don’t have write permission to the library directories. Command-line use ---------------- This module can work as a script (using **python -m compileall**) to compile Python sources. `directory ...` `file ...` Positional arguments are files to compile or directories that contain source files, traversed recursively. If no argument is given, behave as if the command line was `-l <directories from sys.path>`. `-l` Do not recurse into subdirectories, only compile source code files directly contained in the named or implied directories. `-f` Force rebuild even if timestamps are up-to-date. `-q` Do not print the list of files compiled. If passed once, error messages will still be printed. If passed twice (`-qq`), all output is suppressed. `-d destdir` Directory prepended to the path to each file being compiled. This will appear in compilation time tracebacks, and is also compiled in to the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed. `-s strip_prefix` `-p prepend_prefix` Remove (`-s`) or append (`-p`) the given prefix of paths recorded in the `.pyc` files. Cannot be combined with `-d`. `-x regex` regex is used to search the full path to each file considered for compilation, and if the regex produces a match, the file is skipped. `-i list` Read the file `list` and add each line that it contains to the list of files and directories to compile. If `list` is `-`, read lines from `stdin`. `-b` Write the byte-code files to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their [**PEP 3147**](https://www.python.org/dev/peps/pep-3147) locations and names, which allows byte-code files from multiple versions of Python to coexist. `-r` Control the maximum recursion level for subdirectories. If this is given, then the `-l` option will not be taken into account. **python -m compileall <directory> -r 0** is equivalent to **python -m compileall <directory> -l**. `-j N` Use *N* workers to compile the files within the given directory. If `0` is used, then the result of [`os.cpu_count()`](os#os.cpu_count "os.cpu_count") will be used. `--invalidation-mode [timestamp|checked-hash|unchecked-hash]` Control how the generated byte-code files are invalidated at runtime. The `timestamp` value means that `.pyc` files with the source timestamp and size embedded will be generated. 
The `checked-hash` and `unchecked-hash` values cause hash-based pycs to be generated. Hash-based pycs embed a hash of the source file contents rather than a timestamp. See [Cached bytecode invalidation](../reference/import#pyc-invalidation) for more information on how Python validates bytecode cache files at runtime. The default is `timestamp` if the `SOURCE_DATE_EPOCH` environment variable is not set, and `checked-hash` if the `SOURCE_DATE_EPOCH` environment variable is set. `-o level` Compile with the given optimization level. May be used multiple times to compile for multiple levels at a time (for example, `compileall -o 1 -o 2`). `-e dir` Ignore symlinks pointing outside the given directory. `--hardlink-dupes` If two `.pyc` files with different optimization level have the same content, use hard links to consolidate duplicate files. Changed in version 3.2: Added the `-i`, `-b` and `-h` options. Changed in version 3.5: Added the `-j`, `-r`, and `-qq` options. `-q` option was changed to a multilevel value. `-b` will always produce a byte-code file ending in `.pyc`, never `.pyo`. Changed in version 3.7: Added the `--invalidation-mode` option. Changed in version 3.9: Added the `-s`, `-p`, `-e` and `--hardlink-dupes` options. Raised the default recursion limit from 10 to [`sys.getrecursionlimit()`](sys#sys.getrecursionlimit "sys.getrecursionlimit"). Added the possibility to specify the `-o` option multiple times. There is no command-line option to control the optimization level used by the [`compile()`](functions#compile "compile") function, because the Python interpreter itself already provides the option: **python -O -m compileall**. Similarly, the [`compile()`](functions#compile "compile") function respects the [`sys.pycache_prefix`](sys#sys.pycache_prefix "sys.pycache_prefix") setting. The generated bytecode cache will only be useful if [`compile()`](functions#compile "compile") is run with the same [`sys.pycache_prefix`](sys#sys.pycache_prefix "sys.pycache_prefix") (if any) that will be used at runtime. Public functions ---------------- `compileall.compile_dir(dir, maxlevels=sys.getrecursionlimit(), ddir=None, force=False, rx=None, quiet=0, legacy=False, optimize=-1, workers=1, invalidation_mode=None, *, stripdir=None, prependdir=None, limit_sl_dest=None, hardlink_dupes=False)` Recursively descend the directory tree named by *dir*, compiling all `.py` files along the way. Return a true value if all the files compiled successfully, and a false value otherwise. The *maxlevels* parameter is used to limit the depth of the recursion; it defaults to `sys.getrecursionlimit()`. If *ddir* is given, it is prepended to the path to each file being compiled for use in compilation time tracebacks, and is also compiled in to the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed. If *force* is true, modules are re-compiled even if the timestamps are up to date. If *rx* is given, its `search` method is called on the complete path to each file considered for compilation, and if it returns a true value, the file is skipped. This can be used to exclude files matching a regular expression, given as a [re.Pattern](re#re-objects) object. If *quiet* is `False` or `0` (the default), the filenames and other information are printed to standard out. Set to `1`, only errors are printed. Set to `2`, all output is suppressed. 
If *legacy* is true, byte-code files are written to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their [**PEP 3147**](https://www.python.org/dev/peps/pep-3147) locations and names, which allows byte-code files from multiple versions of Python to coexist. *optimize* specifies the optimization level for the compiler. It is passed to the built-in [`compile()`](functions#compile "compile") function. Accepts also a sequence of optimization levels which lead to multiple compilations of one `.py` file in one call. The argument *workers* specifies how many workers are used to compile files in parallel. The default is to not use multiple workers. If the platform can’t use multiple workers and *workers* argument is given, then sequential compilation will be used as a fallback. If *workers* is 0, the number of cores in the system is used. If *workers* is lower than `0`, a [`ValueError`](exceptions#ValueError "ValueError") will be raised. *invalidation\_mode* should be a member of the [`py_compile.PycInvalidationMode`](py_compile#py_compile.PycInvalidationMode "py_compile.PycInvalidationMode") enum and controls how the generated pycs are invalidated at runtime. The *stripdir*, *prependdir* and *limit\_sl\_dest* arguments correspond to the `-s`, `-p` and `-e` options described above. They may be specified as `str`, `bytes` or [`os.PathLike`](os#os.PathLike "os.PathLike"). If *hardlink\_dupes* is true and two `.pyc` files with different optimization level have the same content, use hard links to consolidate duplicate files. Changed in version 3.2: Added the *legacy* and *optimize* parameter. Changed in version 3.5: Added the *workers* parameter. Changed in version 3.5: *quiet* parameter was changed to a multilevel value. Changed in version 3.5: The *legacy* parameter only writes out `.pyc` files, not `.pyo` files no matter what the value of *optimize* is. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.7: The *invalidation\_mode* parameter was added. Changed in version 3.7.2: The *invalidation\_mode* parameter’s default value is updated to None. Changed in version 3.8: Setting *workers* to 0 now chooses the optimal number of cores. Changed in version 3.9: Added *stripdir*, *prependdir*, *limit\_sl\_dest* and *hardlink\_dupes* arguments. Default value of *maxlevels* was changed from `10` to `sys.getrecursionlimit()` `compileall.compile_file(fullname, ddir=None, force=False, rx=None, quiet=0, legacy=False, optimize=-1, invalidation_mode=None, *, stripdir=None, prependdir=None, limit_sl_dest=None, hardlink_dupes=False)` Compile the file with path *fullname*. Return a true value if the file compiled successfully, and a false value otherwise. If *ddir* is given, it is prepended to the path to the file being compiled for use in compilation time tracebacks, and is also compiled in to the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed. If *rx* is given, its `search` method is passed the full path name to the file being compiled, and if it returns a true value, the file is not compiled and `True` is returned. This can be used to exclude files matching a regular expression, given as a [re.Pattern](re#re-objects) object. If *quiet* is `False` or `0` (the default), the filenames and other information are printed to standard out. 
Set to `1`, only errors are printed. Set to `2`, all output is suppressed. If *legacy* is true, byte-code files are written to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their [**PEP 3147**](https://www.python.org/dev/peps/pep-3147) locations and names, which allows byte-code files from multiple versions of Python to coexist. *optimize* specifies the optimization level for the compiler. It is passed to the built-in [`compile()`](functions#compile "compile") function. Accepts also a sequence of optimization levels which lead to multiple compilations of one `.py` file in one call. *invalidation\_mode* should be a member of the [`py_compile.PycInvalidationMode`](py_compile#py_compile.PycInvalidationMode "py_compile.PycInvalidationMode") enum and controls how the generated pycs are invalidated at runtime. The *stripdir*, *prependdir* and *limit\_sl\_dest* arguments correspond to the `-s`, `-p` and `-e` options described above. They may be specified as `str`, `bytes` or [`os.PathLike`](os#os.PathLike "os.PathLike"). If *hardlink\_dupes* is true and two `.pyc` files with different optimization level have the same content, use hard links to consolidate duplicate files. New in version 3.2. Changed in version 3.5: *quiet* parameter was changed to a multilevel value. Changed in version 3.5: The *legacy* parameter only writes out `.pyc` files, not `.pyo` files no matter what the value of *optimize* is. Changed in version 3.7: The *invalidation\_mode* parameter was added. Changed in version 3.7.2: The *invalidation\_mode* parameter’s default value is updated to None. Changed in version 3.9: Added *stripdir*, *prependdir*, *limit\_sl\_dest* and *hardlink\_dupes* arguments. `compileall.compile_path(skip_curdir=True, maxlevels=0, force=False, quiet=0, legacy=False, optimize=-1, invalidation_mode=None)` Byte-compile all the `.py` files found along `sys.path`. Return a true value if all the files compiled successfully, and a false value otherwise. If *skip\_curdir* is true (the default), the current directory is not included in the search. All other parameters are passed to the [`compile_dir()`](#compileall.compile_dir "compileall.compile_dir") function. Note that unlike the other compile functions, `maxlevels` defaults to `0`. Changed in version 3.2: Added the *legacy* and *optimize* parameter. Changed in version 3.5: *quiet* parameter was changed to a multilevel value. Changed in version 3.5: The *legacy* parameter only writes out `.pyc` files, not `.pyo` files no matter what the value of *optimize* is. Changed in version 3.7: The *invalidation\_mode* parameter was added. Changed in version 3.7.2: The *invalidation\_mode* parameter’s default value is updated to None. To force a recompile of all the `.py` files in the `Lib/` subdirectory and all its subdirectories: ``` import compileall compileall.compile_dir('Lib/', force=True) # Perform same compilation, excluding files in .svn directories. import re compileall.compile_dir('Lib/', rx=re.compile(r'[/\\][.]svn'), force=True) # pathlib.Path objects can also be used. import pathlib compileall.compile_dir(pathlib.Path('Lib/'), force=True) ``` See also `Module` [`py_compile`](py_compile#module-py_compile "py_compile: Generate byte-code files from Python source files.") Byte-compile a single source file.
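Complementing the [`compile_dir()`](#compileall.compile_dir "compileall.compile_dir") examples above, here is a sketch of byte-compiling a single file with hash-based invalidation (the path is hypothetical):
```
import compileall
import py_compile

# Embed a checked source hash in the .pyc so the interpreter
# revalidates it against the source at import time.
ok = compileall.compile_file(
    'Lib/netrc.py',  # hypothetical path
    quiet=1,
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)
if not ok:
    print('compilation failed')
```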
python mimetypes — Map filenames to MIME types mimetypes — Map filenames to MIME types ======================================= **Source code:** [Lib/mimetypes.py](https://github.com/python/cpython/tree/3.9/Lib/mimetypes.py) The [`mimetypes`](#module-mimetypes "mimetypes: Mapping of filename extensions to MIME types.") module converts between a filename or URL and the MIME type associated with the filename extension. Conversions are provided from filename to MIME type and from MIME type to filename extension; encodings are not supported for the latter conversion. The module provides one class and a number of convenience functions. The functions are the normal interface to this module, but some applications may be interested in the class as well. The functions described below provide the primary interface for this module. If the module has not been initialized, they will call [`init()`](#mimetypes.init "mimetypes.init") if they rely on the information [`init()`](#mimetypes.init "mimetypes.init") sets up. `mimetypes.guess_type(url, strict=True)` Guess the type of a file based on its filename, path or URL, given by *url*. URL can be a string or a [path-like object](../glossary#term-path-like-object). The return value is a tuple `(type, encoding)` where *type* is `None` if the type can’t be guessed (missing or unknown suffix) or a string of the form `'type/subtype'`, usable for a MIME *content-type* header. *encoding* is `None` for no encoding or the name of the program used to encode (e.g. **compress** or **gzip**). The encoding is suitable for use as a *Content-Encoding* header, **not** as a *Content-Transfer-Encoding* header. The mappings are table driven. Encoding suffixes are case sensitive; type suffixes are first tried case sensitively, then case insensitively. The optional *strict* argument is a flag specifying whether the list of known MIME types is limited to only the official types [registered with IANA](https://www.iana.org/assignments/media-types/media-types.xhtml). When *strict* is `True` (the default), only the IANA types are supported; when *strict* is `False`, some additional non-standard but commonly used MIME types are also recognized. Changed in version 3.8: Added support for url being a [path-like object](../glossary#term-path-like-object). `mimetypes.guess_all_extensions(type, strict=True)` Guess the extensions for a file based on its MIME type, given by *type*. The return value is a list of strings giving all possible filename extensions, including the leading dot (`'.'`). The extensions are not guaranteed to have been associated with any particular data stream, but would be mapped to the MIME type *type* by [`guess_type()`](#mimetypes.guess_type "mimetypes.guess_type"). The optional *strict* argument has the same meaning as with the [`guess_type()`](#mimetypes.guess_type "mimetypes.guess_type") function. `mimetypes.guess_extension(type, strict=True)` Guess the extension for a file based on its MIME type, given by *type*. The return value is a string giving a filename extension, including the leading dot (`'.'`). The extension is not guaranteed to have been associated with any particular data stream, but would be mapped to the MIME type *type* by [`guess_type()`](#mimetypes.guess_type "mimetypes.guess_type"). If no extension can be guessed for *type*, `None` is returned. The optional *strict* argument has the same meaning as with the [`guess_type()`](#mimetypes.guess_type "mimetypes.guess_type") function. 
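A short interactive sketch of these guessing functions (the exact values depend on the local type map, so treat the output as illustrative):
```
>>> import mimetypes
>>> mimetypes.guess_type('archive.tar.gz')
('application/x-tar', 'gzip')
>>> mimetypes.guess_type('page.html')
('text/html', None)
>>> mimetypes.guess_extension('application/json')
'.json'
```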
Some additional functions and data items are available for controlling the behavior of the module. `mimetypes.init(files=None)` Initialize the internal data structures. If given, *files* must be a sequence of file names which should be used to augment the default type map. If omitted, the file names to use are taken from [`knownfiles`](#mimetypes.knownfiles "mimetypes.knownfiles"); on Windows, the current registry settings are loaded. Each file named in *files* or [`knownfiles`](#mimetypes.knownfiles "mimetypes.knownfiles") takes precedence over those named before it. Calling [`init()`](#mimetypes.init "mimetypes.init") repeatedly is allowed. Specifying an empty list for *files* will prevent the system defaults from being applied: only the well-known values will be present from a built-in list. If *files* is `None` the internal data structure is completely rebuilt to its initial default value. This is a stable operation and will produce the same results when called multiple times. Changed in version 3.2: Previously, Windows registry settings were ignored. `mimetypes.read_mime_types(filename)` Load the type map given in the file *filename*, if it exists. The type map is returned as a dictionary mapping filename extensions, including the leading dot (`'.'`), to strings of the form `'type/subtype'`. If the file *filename* does not exist or cannot be read, `None` is returned. `mimetypes.add_type(type, ext, strict=True)` Add a mapping from the MIME type *type* to the extension *ext*. When the extension is already known, the new type will replace the old one. When the type is already known the extension will be added to the list of known extensions. When *strict* is `True` (the default), the mapping will be added to the official MIME types, otherwise to the non-standard ones. `mimetypes.inited` Flag indicating whether or not the global data structures have been initialized. This is set to `True` by [`init()`](#mimetypes.init "mimetypes.init"). `mimetypes.knownfiles` List of type map file names commonly installed. These files are typically named `mime.types` and are installed in different locations by different packages. `mimetypes.suffix_map` Dictionary mapping suffixes to suffixes. This is used to allow recognition of encoded files for which the encoding and the type are indicated by the same extension. For example, the `.tgz` extension is mapped to `.tar.gz` to allow the encoding and type to be recognized separately. `mimetypes.encodings_map` Dictionary mapping filename extensions to encoding types. `mimetypes.types_map` Dictionary mapping filename extensions to MIME types. `mimetypes.common_types` Dictionary mapping filename extensions to non-standard, but commonly found MIME types. An example usage of the module: ``` >>> import mimetypes >>> mimetypes.init() >>> mimetypes.knownfiles ['/etc/mime.types', '/etc/httpd/mime.types', ... ] >>> mimetypes.suffix_map['.tgz'] '.tar.gz' >>> mimetypes.encodings_map['.gz'] 'gzip' >>> mimetypes.types_map['.tgz'] 'application/x-tar-gz' ``` MimeTypes Objects ----------------- The [`MimeTypes`](#mimetypes.MimeTypes "mimetypes.MimeTypes") class may be useful for applications which may want more than one MIME-type database; it provides an interface similar to the one of the [`mimetypes`](#module-mimetypes "mimetypes: Mapping of filename extensions to MIME types.") module. `class mimetypes.MimeTypes(filenames=(), strict=True)` This class represents a MIME-types database. By default, it provides access to the same database as the rest of this module. 
The initial database is a copy of that provided by the module, and may be extended by loading additional `mime.types`-style files into the database using the [`read()`](#mimetypes.MimeTypes.read "mimetypes.MimeTypes.read") or [`readfp()`](#mimetypes.MimeTypes.readfp "mimetypes.MimeTypes.readfp") methods. The mapping dictionaries may also be cleared before loading additional data if the default data is not desired. The optional *filenames* parameter can be used to cause additional files to be loaded “on top” of the default database. `suffix_map` Dictionary mapping suffixes to suffixes. This is used to allow recognition of encoded files for which the encoding and the type are indicated by the same extension. For example, the `.tgz` extension is mapped to `.tar.gz` to allow the encoding and type to be recognized separately. This is initially a copy of the global [`suffix_map`](#mimetypes.suffix_map "mimetypes.suffix_map") defined in the module. `encodings_map` Dictionary mapping filename extensions to encoding types. This is initially a copy of the global [`encodings_map`](#mimetypes.encodings_map "mimetypes.encodings_map") defined in the module. `types_map` Tuple containing two dictionaries, mapping filename extensions to MIME types: the first dictionary is for the non-standard types and the second one is for the standard types. They are initialized by [`common_types`](#mimetypes.common_types "mimetypes.common_types") and [`types_map`](#mimetypes.types_map "mimetypes.types_map"). `types_map_inv` Tuple containing two dictionaries, mapping MIME types to a list of filename extensions: the first dictionary is for the non-standard types and the second one is for the standard types. They are initialized by [`common_types`](#mimetypes.common_types "mimetypes.common_types") and [`types_map`](#mimetypes.types_map "mimetypes.types_map"). `guess_extension(type, strict=True)` Similar to the [`guess_extension()`](#mimetypes.guess_extension "mimetypes.guess_extension") function, using the tables stored as part of the object. `guess_type(url, strict=True)` Similar to the [`guess_type()`](#mimetypes.guess_type "mimetypes.guess_type") function, using the tables stored as part of the object. `guess_all_extensions(type, strict=True)` Similar to the [`guess_all_extensions()`](#mimetypes.guess_all_extensions "mimetypes.guess_all_extensions") function, using the tables stored as part of the object. `read(filename, strict=True)` Load MIME information from a file named *filename*. This uses [`readfp()`](#mimetypes.MimeTypes.readfp "mimetypes.MimeTypes.readfp") to parse the file. If *strict* is `True`, information will be added to the list of standard types, else to the list of non-standard types. `readfp(fp, strict=True)` Load MIME type information from an open file *fp*. The file must have the format of the standard `mime.types` files. If *strict* is `True`, information will be added to the list of standard types, else to the list of non-standard types. `read_windows_registry(strict=True)` Load MIME type information from the Windows registry. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows. If *strict* is `True`, information will be added to the list of standard types, else to the list of non-standard types. New in version 3.2. 
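A sketch of keeping a private database layered on top of the defaults (the extra `mime.types` path is hypothetical):
```
import mimetypes
import os

db = mimetypes.MimeTypes()  # starts as a copy of the module-level database
extra = '/usr/local/etc/mime.types'  # hypothetical site-specific map file
if os.path.exists(extra):
    db.read(extra)
print(db.guess_type('example.md'))  # consults only this instance's tables
```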
python netrc — netrc file processing netrc — netrc file processing ============================= **Source code:** [Lib/netrc.py](https://github.com/python/cpython/tree/3.9/Lib/netrc.py) The [`netrc`](#netrc.netrc "netrc.netrc") class parses and encapsulates the netrc file format used by the Unix **ftp** program and other FTP clients. `class netrc.netrc([file])` A [`netrc`](#netrc.netrc "netrc.netrc") instance or subclass instance encapsulates data from a netrc file. The initialization argument, if present, specifies the file to parse. If no argument is given, the file `.netrc` in the user’s home directory – as determined by [`os.path.expanduser()`](os.path#os.path.expanduser "os.path.expanduser") – will be read. If the file to be read does not exist, a [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") exception will be raised. Parse errors will raise [`NetrcParseError`](#netrc.NetrcParseError "netrc.NetrcParseError") with diagnostic information including the file name, line number, and terminating token. If no argument is specified on a POSIX system, the presence of passwords in the `.netrc` file will raise a [`NetrcParseError`](#netrc.NetrcParseError "netrc.NetrcParseError") if the file ownership or permissions are insecure (owned by a user other than the user running the process, or accessible for read or write by any other user). This implements security behavior equivalent to that of ftp and other programs that use `.netrc`. Changed in version 3.4: Added the POSIX permission check. Changed in version 3.7: [`os.path.expanduser()`](os.path#os.path.expanduser "os.path.expanduser") is used to find the location of the `.netrc` file when *file* is not passed as argument. `exception netrc.NetrcParseError` Exception raised by the [`netrc`](#netrc.netrc "netrc.netrc") class when syntactical errors are encountered in source text. Instances of this exception provide three interesting attributes: `msg` is a textual explanation of the error, `filename` is the name of the source file, and `lineno` gives the line number on which the error was found. netrc Objects ------------- A [`netrc`](#netrc.netrc "netrc.netrc") instance has the following methods: `netrc.authenticators(host)` Return a 3-tuple `(login, account, password)` of authenticators for *host*. If the netrc file did not contain an entry for the given host, return the tuple associated with the ‘default’ entry. If neither matching host nor default entry is available, return `None`. `netrc.__repr__()` Dump the class data as a string in the format of a netrc file. (This discards comments and may reorder the entries.) Instances of [`netrc`](#netrc.netrc "netrc.netrc") have public instance variables: `netrc.hosts` Dictionary mapping host names to `(login, account, password)` tuples. The ‘default’ entry, if any, is represented as a pseudo-host by that name. `netrc.macros` Dictionary mapping macro names to string lists. Note Passwords are limited to a subset of the ASCII character set. All ASCII punctuation is allowed in passwords; however, whitespace and non-printable characters are not. This is a limitation of the way the .netrc file is parsed and may be removed in the future. 
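A sketch of the typical lookup pattern (the host name is hypothetical):
```
import netrc

# Parses ~/.netrc; raises FileNotFoundError if the file does not exist
# and NetrcParseError on syntax or (on POSIX) permission problems.
auth = netrc.netrc()
entry = auth.authenticators('ftp.example.org')  # hypothetical host
if entry is not None:
    login, account, password = entry
    print(f"would log in as {login}")
```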
python string — Common string operations string — Common string operations ================================= **Source code:** [Lib/string.py](https://github.com/python/cpython/tree/3.9/Lib/string.py) See also [Text Sequence Type — str](stdtypes#textseq) [String Methods](stdtypes#string-methods) String constants ---------------- The constants defined in this module are: `string.ascii_letters` The concatenation of the [`ascii_lowercase`](#string.ascii_lowercase "string.ascii_lowercase") and [`ascii_uppercase`](#string.ascii_uppercase "string.ascii_uppercase") constants described below. This value is not locale-dependent. `string.ascii_lowercase` The lowercase letters `'abcdefghijklmnopqrstuvwxyz'`. This value is not locale-dependent and will not change. `string.ascii_uppercase` The uppercase letters `'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`. This value is not locale-dependent and will not change. `string.digits` The string `'0123456789'`. `string.hexdigits` The string `'0123456789abcdefABCDEF'`. `string.octdigits` The string `'01234567'`. `string.punctuation` String of ASCII characters which are considered punctuation characters in the `C` locale: `!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~`. `string.printable` String of ASCII characters which are considered printable. This is a combination of [`digits`](#string.digits "string.digits"), [`ascii_letters`](#string.ascii_letters "string.ascii_letters"), [`punctuation`](#string.punctuation "string.punctuation"), and [`whitespace`](#string.whitespace "string.whitespace"). `string.whitespace` A string containing all ASCII characters that are considered whitespace. This includes the characters space, tab, linefeed, return, formfeed, and vertical tab. Custom String Formatting ------------------------ The built-in string class provides the ability to do complex variable substitutions and value formatting via the [`format()`](stdtypes#str.format "str.format") method described in [**PEP 3101**](https://www.python.org/dev/peps/pep-3101). The [`Formatter`](#string.Formatter "string.Formatter") class in the [`string`](#module-string "string: Common string operations.") module allows you to create and customize your own string formatting behaviors using the same implementation as the built-in [`format()`](stdtypes#str.format "str.format") method. `class string.Formatter` The [`Formatter`](#string.Formatter "string.Formatter") class has the following public methods: `format(format_string, /, *args, **kwargs)` The primary API method. It takes a format string and an arbitrary set of positional and keyword arguments. It is just a wrapper that calls [`vformat()`](#string.Formatter.vformat "string.Formatter.vformat"). Changed in version 3.7: A format string argument is now [positional-only](../glossary#positional-only-parameter). `vformat(format_string, args, kwargs)` This function does the actual work of formatting. It is exposed as a separate function for cases where you want to pass in a predefined dictionary of arguments, rather than unpacking and repacking the dictionary as individual arguments using the `*args` and `**kwargs` syntax. [`vformat()`](#string.Formatter.vformat "string.Formatter.vformat") does the work of breaking up the format string into character data and replacement fields. It calls the various methods described below. 
In addition, the [`Formatter`](#string.Formatter "string.Formatter") defines a number of methods that are intended to be replaced by subclasses: `parse(format_string)` Loop over the format\_string and return an iterable of tuples (*literal\_text*, *field\_name*, *format\_spec*, *conversion*). This is used by [`vformat()`](#string.Formatter.vformat "string.Formatter.vformat") to break the string into either literal text or replacement fields. The values in the tuple conceptually represent a span of literal text followed by a single replacement field. If there is no literal text (which can happen if two replacement fields occur consecutively), then *literal\_text* will be a zero-length string. If there is no replacement field, then the values of *field\_name*, *format\_spec* and *conversion* will be `None`. `get_field(field_name, args, kwargs)` Given *field\_name* as returned by [`parse()`](#string.Formatter.parse "string.Formatter.parse") (see above), convert it to an object to be formatted. Returns a tuple (obj, used\_key). The default version takes strings of the form defined in [**PEP 3101**](https://www.python.org/dev/peps/pep-3101), such as “0[name]” or “label.title”. *args* and *kwargs* are as passed in to [`vformat()`](#string.Formatter.vformat "string.Formatter.vformat"). The return value *used\_key* has the same meaning as the *key* parameter to [`get_value()`](#string.Formatter.get_value "string.Formatter.get_value"). `get_value(key, args, kwargs)` Retrieve a given field value. The *key* argument will be either an integer or a string. If it is an integer, it represents the index of the positional argument in *args*; if it is a string, then it represents a named argument in *kwargs*. The *args* parameter is set to the list of positional arguments to [`vformat()`](#string.Formatter.vformat "string.Formatter.vformat"), and the *kwargs* parameter is set to the dictionary of keyword arguments. For compound field names, these functions are only called for the first component of the field name; subsequent components are handled through normal attribute and indexing operations. So for example, the field expression ‘0.name’ would cause [`get_value()`](#string.Formatter.get_value "string.Formatter.get_value") to be called with a *key* argument of 0. The `name` attribute will be looked up after [`get_value()`](#string.Formatter.get_value "string.Formatter.get_value") returns by calling the built-in [`getattr()`](functions#getattr "getattr") function. If the index or keyword refers to an item that does not exist, then an [`IndexError`](exceptions#IndexError "IndexError") or [`KeyError`](exceptions#KeyError "KeyError") should be raised. `check_unused_args(used_args, args, kwargs)` Implement checking for unused arguments if desired. The arguments to this function are the set of all argument keys that were actually referred to in the format string (integers for positional arguments, and strings for named arguments), and a reference to the *args* and *kwargs* that were passed to vformat. The set of unused args can be calculated from these parameters. [`check_unused_args()`](#string.Formatter.check_unused_args "string.Formatter.check_unused_args") is assumed to raise an exception if the check fails. `format_field(value, format_spec)` [`format_field()`](#string.Formatter.format_field "string.Formatter.format_field") simply calls the global [`format()`](functions#format "format") built-in. The method is provided so that subclasses can override it. 
`convert_field(value, conversion)` Converts the value (returned by [`get_field()`](#string.Formatter.get_field "string.Formatter.get_field")) given a conversion type (as in the tuple returned by the [`parse()`](#string.Formatter.parse "string.Formatter.parse") method). The default version understands ‘s’ (str), ‘r’ (repr) and ‘a’ (ascii) conversion types. Format String Syntax -------------------- The [`str.format()`](stdtypes#str.format "str.format") method and the [`Formatter`](#string.Formatter "string.Formatter") class share the same syntax for format strings (although in the case of [`Formatter`](#string.Formatter "string.Formatter"), subclasses can define their own format string syntax). The syntax is related to that of [formatted string literals](../reference/lexical_analysis#f-strings), but it is less sophisticated and, in particular, does not support arbitrary expressions. Format strings contain “replacement fields” surrounded by curly braces `{}`. Anything that is not contained in braces is considered literal text, which is copied unchanged to the output. If you need to include a brace character in the literal text, it can be escaped by doubling: `{{` and `}}`. The grammar for a replacement field is as follows: ``` **replacement\_field** ::= "{" [[field\_name](#grammar-token-field-name)] ["!" [conversion](../reference/lexical_analysis#grammar-token-conversion)] [":" [format\_spec](../reference/lexical_analysis#grammar-token-format-spec)] "}" **field\_name** ::= arg_name ("." [attribute\_name](#grammar-token-attribute-name) | "[" [element\_index](#grammar-token-element-index) "]")* **arg\_name** ::= [[identifier](../reference/lexical_analysis#grammar-token-identifier) | [digit](../reference/lexical_analysis#grammar-token-digit)+] **attribute\_name** ::= [identifier](../reference/lexical_analysis#grammar-token-identifier) **element\_index** ::= [digit](../reference/lexical_analysis#grammar-token-digit)+ | [index\_string](#grammar-token-index-string) **index\_string** ::= <any source character except "]"> + **conversion** ::= "r" | "s" | "a" **format\_spec** ::= <described in the next section> ``` In less formal terms, the replacement field can start with a *field\_name* that specifies the object whose value is to be formatted and inserted into the output instead of the replacement field. The *field\_name* is optionally followed by a *conversion* field, which is preceded by an exclamation point `'!'`, and a *format\_spec*, which is preceded by a colon `':'`. These specify a non-default format for the replacement value. See also the [Format Specification Mini-Language](#formatspec) section. The *field\_name* itself begins with an *arg\_name* that is either a number or a keyword. If it’s a number, it refers to a positional argument, and if it’s a keyword, it refers to a named keyword argument. If the numerical arg\_names in a format string are 0, 1, 2, … in sequence, they can all be omitted (not just some) and the numbers 0, 1, 2, … will be automatically inserted in that order. Because *arg\_name* is not quote-delimited, it is not possible to specify arbitrary dictionary keys (e.g., the strings `'10'` or `':-]'`) within a format string. The *arg\_name* can be followed by any number of index or attribute expressions. An expression of the form `'.name'` selects the named attribute using [`getattr()`](functions#getattr "getattr"), while an expression of the form `'[index]'` does an index lookup using [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__"). 
Changed in version 3.1: The positional argument specifiers can be omitted for [`str.format()`](stdtypes#str.format "str.format"), so `'{} {}'.format(a, b)` is equivalent to `'{0} {1}'.format(a, b)`. Changed in version 3.4: The positional argument specifiers can be omitted for [`Formatter`](#string.Formatter "string.Formatter"). Some simple format string examples: ``` "First, thou shalt count to {0}" # References first positional argument "Bring me a {}" # Implicitly references the first positional argument "From {} to {}" # Same as "From {0} to {1}" "My quest is {name}" # References keyword argument 'name' "Weight in tons {0.weight}" # 'weight' attribute of first positional arg "Units destroyed: {players[0]}" # First element of keyword argument 'players'. ``` The *conversion* field causes a type coercion before formatting. Normally, the job of formatting a value is done by the [`__format__()`](../reference/datamodel#object.__format__ "object.__format__") method of the value itself. However, in some cases it is desirable to force a type to be formatted as a string, overriding its own definition of formatting. By converting the value to a string before calling [`__format__()`](../reference/datamodel#object.__format__ "object.__format__"), the normal formatting logic is bypassed. Three conversion flags are currently supported: `'!s'` which calls [`str()`](stdtypes#str "str") on the value, `'!r'` which calls [`repr()`](functions#repr "repr") and `'!a'` which calls [`ascii()`](functions#ascii "ascii"). Some examples: ``` "Harold's a clever {0!s}" # Calls str() on the argument first "Bring out the holy {name!r}" # Calls repr() on the argument first "More {!a}" # Calls ascii() on the argument first ``` The *format\_spec* field contains a specification of how the value should be presented, including such details as field width, alignment, padding, decimal precision and so on. Each value type can define its own “formatting mini-language” or interpretation of the *format\_spec*. Most built-in types support a common formatting mini-language, which is described in the next section. A *format\_spec* field can also include nested replacement fields within it. These nested replacement fields may contain a field name, conversion flag and format specification, but deeper nesting is not allowed. The replacement fields within the format\_spec are substituted before the *format\_spec* string is interpreted. This allows the formatting of a value to be dynamically specified. See the [Format examples](#formatexamples) section for some examples. ### Format Specification Mini-Language “Format specifications” are used within replacement fields contained within a format string to define how individual values are presented (see [Format String Syntax](#formatstrings) and [Formatted string literals](../reference/lexical_analysis#f-strings)). They can also be passed directly to the built-in [`format()`](functions#format "format") function. Each formattable type may define how the format specification is to be interpreted. Most built-in types implement the following options for format specifications, although some of the formatting options are only supported by the numeric types. A general convention is that an empty format specification produces the same result as if you had called [`str()`](stdtypes#str "str") on the value. A non-empty format specification typically modifies the result. 
The general form of a *standard format specifier* is: ``` **format\_spec** ::= [[[fill](#grammar-token-fill)][align](#grammar-token-align)][[sign](#grammar-token-sign)][#][0][[width](#grammar-token-width)][[grouping\_option](#grammar-token-grouping-option)][.[precision](#grammar-token-precision)][[type](#grammar-token-type)] **fill** ::= <any character> **align** ::= "<" | ">" | "=" | "^" **sign** ::= "+" | "-" | " " **width** ::= [digit](../reference/lexical_analysis#grammar-token-digit)+ **grouping\_option** ::= "_" | "," **precision** ::= [digit](../reference/lexical_analysis#grammar-token-digit)+ **type** ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%" ``` If a valid *align* value is specified, it can be preceded by a *fill* character that can be any character and defaults to a space if omitted. It is not possible to use a literal curly brace (”`{`” or “`}`”) as the *fill* character in a [formatted string literal](../reference/lexical_analysis#f-strings) or when using the [`str.format()`](stdtypes#str.format "str.format") method. However, it is possible to insert a curly brace with a nested replacement field. This limitation doesn’t affect the [`format()`](functions#format "format") function. The meaning of the various alignment options is as follows: | Option | Meaning | | --- | --- | | `'<'` | Forces the field to be left-aligned within the available space (this is the default for most objects). | | `'>'` | Forces the field to be right-aligned within the available space (this is the default for numbers). | | `'='` | Forces the padding to be placed after the sign (if any) but before the digits. This is used for printing fields in the form ‘+000000120’. This alignment option is only valid for numeric types. It becomes the default when ‘0’ immediately precedes the field width. | | `'^'` | Forces the field to be centered within the available space. | Note that unless a minimum field width is defined, the field width will always be the same size as the data to fill it, so that the alignment option has no meaning in this case. The *sign* option is only valid for number types, and can be one of the following: | Option | Meaning | | --- | --- | | `'+'` | indicates that a sign should be used for both positive as well as negative numbers. | | `'-'` | indicates that a sign should be used only for negative numbers (this is the default behavior). | | space | indicates that a leading space should be used on positive numbers, and a minus sign on negative numbers. | The `'#'` option causes the “alternate form” to be used for the conversion. The alternate form is defined differently for different types. This option is only valid for integer, float and complex types. For integers, when binary, octal, or hexadecimal output is used, this option adds the respective prefix `'0b'`, `'0o'`, `'0x'`, or `'0X'` to the output value. For float and complex the alternate form causes the result of the conversion to always contain a decimal-point character, even if no digits follow it. Normally, a decimal-point character appears in the result of these conversions only if a digit follows it. In addition, for `'g'` and `'G'` conversions, trailing zeros are not removed from the result. The `','` option signals the use of a comma for a thousands separator. For a locale aware separator, use the `'n'` integer presentation type instead. Changed in version 3.1: Added the `','` option (see also [**PEP 378**](https://www.python.org/dev/peps/pep-0378)). 
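For instance, the `'#'` and `','` options behave as follows:
```
>>> format(255, '#x')      # '#' adds the 0x prefix
'0xff'
>>> format(255, '#o')
'0o377'
>>> format(1234567, ',d')  # ',' inserts a thousands separator
'1,234,567'
```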
The `'_'` option signals the use of an underscore for a thousands separator for floating point presentation types and for integer presentation type `'d'`. For integer presentation types `'b'`, `'o'`, `'x'`, and `'X'`, underscores will be inserted every 4 digits. For other presentation types, specifying this option is an error.

Changed in version 3.6: Added the `'_'` option (see also [**PEP 515**](https://www.python.org/dev/peps/pep-0515)).

*width* is a decimal integer defining the minimum total field width, including any prefixes, separators, and other formatting characters. If not specified, then the field width will be determined by the content.

When no explicit alignment is given, preceding the *width* field by a zero (`'0'`) character enables sign-aware zero-padding for numeric types. This is equivalent to a *fill* character of `'0'` with an *alignment* type of `'='`.

The *precision* is a decimal integer indicating how many digits should be displayed after the decimal point for presentation types `'f'` and `'F'`, or before and after the decimal point for presentation types `'g'` or `'G'`. For string presentation types the field indicates the maximum field size - in other words, how many characters will be used from the field content. The *precision* is not allowed for integer presentation types.

Finally, the *type* determines how the data should be presented.

The available string presentation types are:

| Type | Meaning |
| --- | --- |
| `'s'` | String format. This is the default type for strings and may be omitted. |
| None | The same as `'s'`. |

The available integer presentation types are:

| Type | Meaning |
| --- | --- |
| `'b'` | Binary format. Outputs the number in base 2. |
| `'c'` | Character. Converts the integer to the corresponding unicode character before printing. |
| `'d'` | Decimal Integer. Outputs the number in base 10. |
| `'o'` | Octal format. Outputs the number in base 8. |
| `'x'` | Hex format. Outputs the number in base 16, using lower-case letters for the digits above 9. |
| `'X'` | Hex format. Outputs the number in base 16, using upper-case letters for the digits above 9. In case `'#'` is specified, the prefix `'0x'` will be upper-cased to `'0X'` as well. |
| `'n'` | Number. This is the same as `'d'`, except that it uses the current locale setting to insert the appropriate number separator characters. |
| None | The same as `'d'`. |

In addition to the above presentation types, integers can be formatted with the floating point presentation types listed below (except `'n'` and `None`). When doing so, [`float()`](functions#float "float") is used to convert the integer to a floating point number before formatting.

The available presentation types for [`float`](functions#float "float") and [`Decimal`](decimal#decimal.Decimal "decimal.Decimal") values are:

| Type | Meaning |
| --- | --- |
| `'e'` | Scientific notation. For a given precision `p`, formats the number in scientific notation with the letter ‘e’ separating the coefficient from the exponent. The coefficient has one digit before and `p` digits after the decimal point, for a total of `p + 1` significant digits. With no precision given, uses a precision of `6` digits after the decimal point for [`float`](functions#float "float"), and shows all coefficient digits for [`Decimal`](decimal#decimal.Decimal "decimal.Decimal"). If no digits follow the decimal point, the decimal point is also removed unless the `#` option is used. |
| `'E'` | Scientific notation. Same as `'e'` except it uses an upper case ‘E’ as the separator character. |
| `'f'` | Fixed-point notation. For a given precision `p`, formats the number as a decimal number with exactly `p` digits following the decimal point. With no precision given, uses a precision of `6` digits after the decimal point for [`float`](functions#float "float"), and uses a precision large enough to show all coefficient digits for [`Decimal`](decimal#decimal.Decimal "decimal.Decimal"). If no digits follow the decimal point, the decimal point is also removed unless the `#` option is used. |
| `'F'` | Fixed-point notation. Same as `'f'`, but converts `nan` to `NAN` and `inf` to `INF`. |
| `'g'` | General format. For a given precision `p >= 1`, this rounds the number to `p` significant digits and then formats the result in either fixed-point format or in scientific notation, depending on its magnitude. A precision of `0` is treated as equivalent to a precision of `1`. The precise rules are as follows: suppose that the result formatted with presentation type `'e'` and precision `p-1` would have exponent `exp`. Then, if `m <= exp < p`, where `m` is -4 for floats and -6 for [`Decimals`](decimal#decimal.Decimal "decimal.Decimal"), the number is formatted with presentation type `'f'` and precision `p-1-exp`. Otherwise, the number is formatted with presentation type `'e'` and precision `p-1`. In both cases insignificant trailing zeros are removed from the significand, and the decimal point is also removed if there are no remaining digits following it, unless the `'#'` option is used. With no precision given, uses a precision of `6` significant digits for [`float`](functions#float "float"). For [`Decimal`](decimal#decimal.Decimal "decimal.Decimal"), the coefficient of the result is formed from the coefficient digits of the value; scientific notation is used for values smaller than `1e-6` in absolute value and values where the place value of the least significant digit is larger than 1, and fixed-point notation is used otherwise. Positive and negative infinity, positive and negative zero, and nans, are formatted as `inf`, `-inf`, `0`, `-0` and `nan` respectively, regardless of the precision. |
| `'G'` | General format. Same as `'g'` except switches to `'E'` if the number gets too large. The representations of infinity and NaN are uppercased, too. |
| `'n'` | Number. This is the same as `'g'`, except that it uses the current locale setting to insert the appropriate number separator characters. |
| `'%'` | Percentage. Multiplies the number by 100 and displays in fixed (`'f'`) format, followed by a percent sign. |
| None | For [`float`](functions#float "float") this is the same as `'g'`, except that when fixed-point notation is used to format the result, it always includes at least one digit past the decimal point. The precision used is as large as needed to represent the given value faithfully. For [`Decimal`](decimal#decimal.Decimal "decimal.Decimal"), this is the same as either `'g'` or `'G'` depending on the value of `context.capitals` for the current decimal context. The overall effect is to match the output of [`str()`](stdtypes#str "str") as altered by the other format modifiers. |

### Format examples

This section contains examples of the [`str.format()`](stdtypes#str.format "str.format") syntax and comparison with the old `%`-formatting. In most of the cases the syntax is similar to the old `%`-formatting, with the addition of the `{}` and with `:` used instead of `%`.
For example, `'%03.2f'` can be translated to `'{:03.2f}'`. The new format syntax also supports new and different options, shown in the following examples. Accessing arguments by position: ``` >>> '{0}, {1}, {2}'.format('a', 'b', 'c') 'a, b, c' >>> '{}, {}, {}'.format('a', 'b', 'c') # 3.1+ only 'a, b, c' >>> '{2}, {1}, {0}'.format('a', 'b', 'c') 'c, b, a' >>> '{2}, {1}, {0}'.format(*'abc') # unpacking argument sequence 'c, b, a' >>> '{0}{1}{0}'.format('abra', 'cad') # arguments' indices can be repeated 'abracadabra' ``` Accessing arguments by name: ``` >>> 'Coordinates: {latitude}, {longitude}'.format(latitude='37.24N', longitude='-115.81W') 'Coordinates: 37.24N, -115.81W' >>> coord = {'latitude': '37.24N', 'longitude': '-115.81W'} >>> 'Coordinates: {latitude}, {longitude}'.format(**coord) 'Coordinates: 37.24N, -115.81W' ``` Accessing arguments’ attributes: ``` >>> c = 3-5j >>> ('The complex number {0} is formed from the real part {0.real} ' ... 'and the imaginary part {0.imag}.').format(c) 'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.' >>> class Point: ... def __init__(self, x, y): ... self.x, self.y = x, y ... def __str__(self): ... return 'Point({self.x}, {self.y})'.format(self=self) ... >>> str(Point(4, 2)) 'Point(4, 2)' ``` Accessing arguments’ items: ``` >>> coord = (3, 5) >>> 'X: {0[0]}; Y: {0[1]}'.format(coord) 'X: 3; Y: 5' ``` Replacing `%s` and `%r`: ``` >>> "repr() shows quotes: {!r}; str() doesn't: {!s}".format('test1', 'test2') "repr() shows quotes: 'test1'; str() doesn't: test2" ``` Aligning the text and specifying a width: ``` >>> '{:<30}'.format('left aligned') 'left aligned ' >>> '{:>30}'.format('right aligned') ' right aligned' >>> '{:^30}'.format('centered') ' centered ' >>> '{:*^30}'.format('centered') # use '*' as a fill char '***********centered***********' ``` Replacing `%+f`, `%-f`, and `% f` and specifying a sign: ``` >>> '{:+f}; {:+f}'.format(3.14, -3.14) # show it always '+3.140000; -3.140000' >>> '{: f}; {: f}'.format(3.14, -3.14) # show a space for positive numbers ' 3.140000; -3.140000' >>> '{:-f}; {:-f}'.format(3.14, -3.14) # show only the minus -- same as '{:f}; {:f}' '3.140000; -3.140000' ``` Replacing `%x` and `%o` and converting the value to different bases: ``` >>> # format also supports binary numbers >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42) 'int: 42; hex: 2a; oct: 52; bin: 101010' >>> # with 0x, 0o, or 0b as prefix: >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42) 'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010' ``` Using the comma as a thousands separator: ``` >>> '{:,}'.format(1234567890) '1,234,567,890' ``` Expressing a percentage: ``` >>> points = 19 >>> total = 22 >>> 'Correct answers: {:.2%}'.format(points/total) 'Correct answers: 86.36%' ``` Using type-specific formatting: ``` >>> import datetime >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58) >>> '{:%Y-%m-%d %H:%M:%S}'.format(d) '2010-07-04 12:15:58' ``` Nesting arguments and more complex examples: ``` >>> for align, text in zip('<^>', ['left', 'center', 'right']): ... '{0:{fill}{align}16}'.format(text, fill=align, align=align) ... 'left<<<<<<<<<<<<' '^^^^^center^^^^^' '>>>>>>>>>>>right' >>> >>> octets = [192, 168, 0, 1] >>> '{:02X}{:02X}{:02X}{:02X}'.format(*octets) 'C0A80001' >>> int(_, 16) 3232235521 >>> >>> width = 5 >>> for num in range(5,12): ... for base in 'dXob': ... print('{0:{width}{base}}'.format(num, base=base, width=width), end=' ') ... print() ... 
    5     5     5   101
    6     6     6   110
    7     7     7   111
    8     8    10  1000
    9     9    11  1001
   10     A    12  1010
   11     B    13  1011
```

Template strings
----------------

Template strings provide simpler string substitutions as described in [**PEP 292**](https://www.python.org/dev/peps/pep-0292). A primary use case for template strings is for internationalization (i18n) since in that context, the simpler syntax and functionality make it easier to translate than other built-in string formatting facilities in Python. As an example of a library built on template strings for i18n, see the [flufl.i18n](http://flufli18n.readthedocs.io/en/latest/) package.

Template strings support `$`-based substitutions, using the following rules:

* `$$` is an escape; it is replaced with a single `$`.
* `$identifier` names a substitution placeholder matching a mapping key of `"identifier"`. By default, `"identifier"` is restricted to any case-insensitive ASCII alphanumeric string (including underscores) that starts with an underscore or ASCII letter. The first non-identifier character after the `$` character terminates this placeholder specification.
* `${identifier}` is equivalent to `$identifier`. It is required when valid identifier characters follow the placeholder but are not part of the placeholder, such as `"${noun}ification"`.

Any other appearance of `$` in the string will result in a [`ValueError`](exceptions#ValueError "ValueError") being raised.

The [`string`](#module-string "string: Common string operations.") module provides a [`Template`](#string.Template "string.Template") class that implements these rules. The methods of [`Template`](#string.Template "string.Template") are:

`class string.Template(template)`

The constructor takes a single argument which is the template string.

`substitute(mapping={}, /, **kwds)`

Performs the template substitution, returning a new string. *mapping* is any dictionary-like object with keys that match the placeholders in the template. Alternatively, you can provide keyword arguments, where the keywords are the placeholders. When both *mapping* and *kwds* are given and there are duplicates, the placeholders from *kwds* take precedence.

`safe_substitute(mapping={}, /, **kwds)`

Like [`substitute()`](#string.Template.substitute "string.Template.substitute"), except that if placeholders are missing from *mapping* and *kwds*, instead of raising a [`KeyError`](exceptions#KeyError "KeyError") exception, the original placeholder will appear in the resulting string intact. Also, unlike with [`substitute()`](#string.Template.substitute "string.Template.substitute"), any other appearances of the `$` will simply return `$` instead of raising [`ValueError`](exceptions#ValueError "ValueError"). While other exceptions may still occur, this method is called “safe” because it always tries to return a usable string instead of raising an exception. In another sense, [`safe_substitute()`](#string.Template.safe_substitute "string.Template.safe_substitute") may be anything other than safe, since it will silently ignore malformed templates containing dangling delimiters, unmatched braces, or placeholders that are not valid Python identifiers.

[`Template`](#string.Template "string.Template") instances also provide one public data attribute:

`template`

This is the object passed to the constructor’s *template* argument. In general, you shouldn’t change it, but read-only access is not enforced.
Here is an example of how to use a Template: ``` >>> from string import Template >>> s = Template('$who likes $what') >>> s.substitute(who='tim', what='kung pao') 'tim likes kung pao' >>> d = dict(who='tim') >>> Template('Give $who $100').substitute(d) Traceback (most recent call last): ... ValueError: Invalid placeholder in string: line 1, col 11 >>> Template('$who likes $what').substitute(d) Traceback (most recent call last): ... KeyError: 'what' >>> Template('$who likes $what').safe_substitute(d) 'tim likes $what' ``` Advanced usage: you can derive subclasses of [`Template`](#string.Template "string.Template") to customize the placeholder syntax, delimiter character, or the entire regular expression used to parse template strings. To do this, you can override these class attributes: * *delimiter* – This is the literal string describing a placeholder introducing delimiter. The default value is `$`. Note that this should *not* be a regular expression, as the implementation will call [`re.escape()`](re#re.escape "re.escape") on this string as needed. Note further that you cannot change the delimiter after class creation (i.e. a different delimiter must be set in the subclass’s class namespace). * *idpattern* – This is the regular expression describing the pattern for non-braced placeholders. The default value is the regular expression `(?a:[_a-z][_a-z0-9]*)`. If this is given and *braceidpattern* is `None` this pattern will also apply to braced placeholders. Note Since default *flags* is `re.IGNORECASE`, pattern `[a-z]` can match with some non-ASCII characters. That’s why we use the local `a` flag here. Changed in version 3.7: *braceidpattern* can be used to define separate patterns used inside and outside the braces. * *braceidpattern* – This is like *idpattern* but describes the pattern for braced placeholders. Defaults to `None` which means to fall back to *idpattern* (i.e. the same pattern is used both inside and outside braces). If given, this allows you to define different patterns for braced and unbraced placeholders. New in version 3.7. * *flags* – The regular expression flags that will be applied when compiling the regular expression used for recognizing substitutions. The default value is `re.IGNORECASE`. Note that `re.VERBOSE` will always be added to the flags, so custom *idpattern*s must follow conventions for verbose regular expressions. New in version 3.2. Alternatively, you can provide the entire regular expression pattern by overriding the class attribute *pattern*. If you do this, the value must be a regular expression object with four named capturing groups. The capturing groups correspond to the rules given above, along with the invalid placeholder rule: * *escaped* – This group matches the escape sequence, e.g. `$$`, in the default pattern. * *named* – This group matches the unbraced placeholder name; it should not include the delimiter in capturing group. * *braced* – This group matches the brace enclosed placeholder name; it should not include either the delimiter or braces in the capturing group. * *invalid* – This group matches any other delimiter pattern (usually a single delimiter), and it should appear last in the regular expression. Helper functions ---------------- `string.capwords(s, sep=None)` Split the argument into words using [`str.split()`](stdtypes#str.split "str.split"), capitalize each word using [`str.capitalize()`](stdtypes#str.capitalize "str.capitalize"), and join the capitalized words using [`str.join()`](stdtypes#str.join "str.join"). 
If the optional second argument *sep* is absent or `None`, runs of whitespace characters are replaced by a single space and leading and trailing whitespace are removed, otherwise *sep* is used to split and join the words.
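For example:

```
>>> import string
>>> string.capwords(' the  quick brown\tfox ')
'The Quick Brown Fox'
>>> string.capwords('first-second-third', sep='-')
'First-Second-Third'
```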
python Tkinter Dialogs

Tkinter Dialogs
===============

tkinter.simpledialog — Standard Tkinter input dialogs
-----------------------------------------------------

**Source code:** [Lib/tkinter/simpledialog.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/simpledialog.py)

The [`tkinter.simpledialog`](#module-tkinter.simpledialog "tkinter.simpledialog: Simple dialog windows (Tk)") module contains convenience classes and functions for creating simple modal dialogs to get a value from the user.

`tkinter.simpledialog.askfloat(title, prompt, **kw)`

`tkinter.simpledialog.askinteger(title, prompt, **kw)`

`tkinter.simpledialog.askstring(title, prompt, **kw)`

The above three functions provide dialogs that prompt the user to enter a value of the desired type.

`class tkinter.simpledialog.Dialog(parent, title=None)`

The base class for custom dialogs.

`body(master)`

Override to construct the dialog’s interface and return the widget that should have initial focus.

`buttonbox()`

Default behaviour adds OK and Cancel buttons. Override for custom button layouts.

tkinter.filedialog — File selection dialogs
-------------------------------------------

**Source code:** [Lib/tkinter/filedialog.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/filedialog.py)

The [`tkinter.filedialog`](#module-tkinter.filedialog "tkinter.filedialog: Dialog classes for file selection (Tk)") module provides classes and factory functions for creating file/directory selection windows.

### Native Load/Save Dialogs

The following classes and functions provide file dialog windows that combine a native look-and-feel with configuration options to customize behaviour. The following keyword arguments are applicable to the classes and functions listed below:

*parent* - the window to place the dialog on top of

*title* - the title of the window

*initialdir* - the directory that the dialog starts in

*initialfile* - the file selected upon opening of the dialog

*filetypes* - a sequence of (label, pattern) tuples, the `'*'` wildcard is allowed

*defaultextension* - default extension to append to file (save dialogs)

*multiple* - when true, selection of multiple items is allowed

**Static factory functions**

The below functions when called create a modal, native look-and-feel dialog, wait for the user’s selection, then return the selected value(s) or `None` to the caller.

`tkinter.filedialog.askopenfile(mode="r", **options)`

`tkinter.filedialog.askopenfiles(mode="r", **options)`

The above two functions create an [`Open`](#tkinter.filedialog.Open "tkinter.filedialog.Open") dialog and return the opened file object(s) in read-only mode.

`tkinter.filedialog.asksaveasfile(mode="w", **options)`

Create a [`SaveAs`](#tkinter.filedialog.SaveAs "tkinter.filedialog.SaveAs") dialog and return a file object opened in write-only mode.

`tkinter.filedialog.askopenfilename(**options)`

`tkinter.filedialog.askopenfilenames(**options)`

The above two functions create an [`Open`](#tkinter.filedialog.Open "tkinter.filedialog.Open") dialog and return the selected filename(s) that correspond to existing file(s).

`tkinter.filedialog.asksaveasfilename(**options)`

Create a [`SaveAs`](#tkinter.filedialog.SaveAs "tkinter.filedialog.SaveAs") dialog and return the selected filename.

`tkinter.filedialog.askdirectory(**options)`

Prompt the user to select a directory. Additional keyword option: *mustexist* - determines if selection must be an existing directory.

`class tkinter.filedialog.Open(master=None, **options)`

`class tkinter.filedialog.SaveAs(master=None, **options)`

The above two classes provide native dialog windows for saving and loading files.

**Convenience classes**

The below classes are used for creating file/directory windows from scratch. These do not emulate the native look-and-feel of the platform.

`class tkinter.filedialog.Directory(master=None, **options)`

Create a dialog prompting the user to select a directory.

Note The *FileDialog* class should be subclassed for custom event handling and behaviour.

`class tkinter.filedialog.FileDialog(master, title=None)`

Create a basic file selection dialog.
`cancel_command(event=None)` Trigger the termination of the dialog window. `dirs_double_event(event)` Event handler for double-click event on directory. `dirs_select_event(event)` Event handler for click event on directory. `files_double_event(event)` Event handler for double-click event on file. `files_select_event(event)` Event handler for single-click event on file. `filter_command(event=None)` Filter the files by directory. `get_filter()` Retrieve the file filter currently in use. `get_selection()` Retrieve the currently selected item. `go(dir_or_file=os.curdir, pattern="*", default="", key=None)` Render dialog and start event loop. `ok_event(event)` Exit dialog returning current selection. `quit(how=None)` Exit dialog returning filename, if any. `set_filter(dir, pat)` Set the file filter. `set_selection(file)` Update the current file selection to *file*. `class tkinter.filedialog.LoadFileDialog(master, title=None)` A subclass of FileDialog that creates a dialog window for selecting an existing file. `ok_command()` Test that a file is provided and that the selection indicates an already existing file. `class tkinter.filedialog.SaveFileDialog(master, title=None)` A subclass of FileDialog that creates a dialog window for selecting a destination file. `ok_command()` Test whether or not the selection points to a valid file that is not a directory. Confirmation is required if an already existing file is selected. tkinter.commondialog — Dialog window templates ---------------------------------------------- **Source code:** [Lib/tkinter/commondialog.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/commondialog.py) The [`tkinter.commondialog`](#module-tkinter.commondialog "tkinter.commondialog: Tkinter base class for dialogs (Tk)") module provides the [`Dialog`](#tkinter.commondialog.Dialog "tkinter.commondialog.Dialog") class that is the base class for dialogs defined in other supporting modules. `class tkinter.commondialog.Dialog(master=None, **options)` `show(color=None, **options)` Render the Dialog window. See also Modules [`tkinter.messagebox`](tkinter.messagebox#module-tkinter.messagebox "tkinter.messagebox: Various types of alert dialogs (Tk)"), [Reading and Writing Files](../tutorial/inputoutput#tut-files) python pkgutil — Package extension utility pkgutil — Package extension utility =================================== **Source code:** [Lib/pkgutil.py](https://github.com/python/cpython/tree/3.9/Lib/pkgutil.py) This module provides utilities for the import system, in particular package support. `class pkgutil.ModuleInfo(module_finder, name, ispkg)` A namedtuple that holds a brief summary of a module’s info. New in version 3.6. `pkgutil.extend_path(path, name)` Extend the search path for the modules which comprise a package. Intended use is to place the following code in a package’s `__init__.py`: ``` from pkgutil import extend_path __path__ = extend_path(__path__, __name__) ``` This will add to the package’s `__path__` all subdirectories of directories on [`sys.path`](sys#sys.path "sys.path") named after the package. This is useful if one wants to distribute different parts of a single logical package as multiple directories. It also looks for `*.pkg` files beginning where `*` matches the *name* argument. This feature is similar to `*.pth` files (see the [`site`](site#module-site "site: Module responsible for site-specific configuration.") module for more information), except that it doesn’t special-case lines starting with `import`. 
A `*.pkg` file is trusted at face value: apart from checking for duplicates, all entries found in a `*.pkg` file are added to the path, regardless of whether they exist on the filesystem. (This is a feature.) If the input path is not a list (as is the case for frozen packages) it is returned unchanged. The input path is not modified; an extended copy is returned. Items are only appended to the copy at the end. It is assumed that [`sys.path`](sys#sys.path "sys.path") is a sequence. Items of [`sys.path`](sys#sys.path "sys.path") that are not strings referring to existing directories are ignored. Unicode items on [`sys.path`](sys#sys.path "sys.path") that cause errors when used as filenames may cause this function to raise an exception (in line with [`os.path.isdir()`](os.path#os.path.isdir "os.path.isdir") behavior). `class pkgutil.ImpImporter(dirname=None)` [**PEP 302**](https://www.python.org/dev/peps/pep-0302) Finder that wraps Python’s “classic” import algorithm. If *dirname* is a string, a [**PEP 302**](https://www.python.org/dev/peps/pep-0302) finder is created that searches that directory. If *dirname* is `None`, a [**PEP 302**](https://www.python.org/dev/peps/pep-0302) finder is created that searches the current [`sys.path`](sys#sys.path "sys.path"), plus any modules that are frozen or built-in. Note that [`ImpImporter`](#pkgutil.ImpImporter "pkgutil.ImpImporter") does not currently support being used by placement on [`sys.meta_path`](sys#sys.meta_path "sys.meta_path"). Deprecated since version 3.3: This emulation is no longer needed, as the standard import mechanism is now fully [**PEP 302**](https://www.python.org/dev/peps/pep-0302) compliant and available in [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery."). `class pkgutil.ImpLoader(fullname, file, filename, etc)` [Loader](../glossary#term-loader) that wraps Python’s “classic” import algorithm. Deprecated since version 3.3: This emulation is no longer needed, as the standard import mechanism is now fully [**PEP 302**](https://www.python.org/dev/peps/pep-0302) compliant and available in [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery."). `pkgutil.find_loader(fullname)` Retrieve a module [loader](../glossary#term-loader) for the given *fullname*. This is a backwards compatibility wrapper around [`importlib.util.find_spec()`](importlib#importlib.util.find_spec "importlib.util.find_spec") that converts most failures to [`ImportError`](exceptions#ImportError "ImportError") and only returns the loader rather than the full `ModuleSpec`. Changed in version 3.3: Updated to be based directly on [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") rather than relying on the package internal [**PEP 302**](https://www.python.org/dev/peps/pep-0302) import emulation. Changed in version 3.4: Updated to be based on [**PEP 451**](https://www.python.org/dev/peps/pep-0451) `pkgutil.get_importer(path_item)` Retrieve a [finder](../glossary#term-finder) for the given *path\_item*. The returned finder is cached in [`sys.path_importer_cache`](sys#sys.path_importer_cache "sys.path_importer_cache") if it was newly created by a path hook. The cache (or part of it) can be cleared manually if a rescan of [`sys.path_hooks`](sys#sys.path_hooks "sys.path_hooks") is necessary. 
Changed in version 3.3: Updated to be based directly on [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") rather than relying on the package internal [**PEP 302**](https://www.python.org/dev/peps/pep-0302) import emulation. `pkgutil.get_loader(module_or_name)` Get a [loader](../glossary#term-loader) object for *module\_or\_name*. If the module or package is accessible via the normal import mechanism, a wrapper around the relevant part of that machinery is returned. Returns `None` if the module cannot be found or imported. If the named module is not already imported, its containing package (if any) is imported, in order to establish the package `__path__`. Changed in version 3.3: Updated to be based directly on [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") rather than relying on the package internal [**PEP 302**](https://www.python.org/dev/peps/pep-0302) import emulation. Changed in version 3.4: Updated to be based on [**PEP 451**](https://www.python.org/dev/peps/pep-0451) `pkgutil.iter_importers(fullname='')` Yield [finder](../glossary#term-finder) objects for the given module name. If fullname contains a `'.'`, the finders will be for the package containing fullname, otherwise they will be all registered top level finders (i.e. those on both [`sys.meta_path`](sys#sys.meta_path "sys.meta_path") and [`sys.path_hooks`](sys#sys.path_hooks "sys.path_hooks")). If the named module is in a package, that package is imported as a side effect of invoking this function. If no module name is specified, all top level finders are produced. Changed in version 3.3: Updated to be based directly on [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") rather than relying on the package internal [**PEP 302**](https://www.python.org/dev/peps/pep-0302) import emulation. `pkgutil.iter_modules(path=None, prefix='')` Yields [`ModuleInfo`](#pkgutil.ModuleInfo "pkgutil.ModuleInfo") for all submodules on *path*, or, if *path* is `None`, all top-level modules on [`sys.path`](sys#sys.path "sys.path"). *path* should be either `None` or a list of paths to look for modules in. *prefix* is a string to output on the front of every module name on output. Note Only works for a [finder](../glossary#term-finder) which defines an `iter_modules()` method. This interface is non-standard, so the module also provides implementations for [`importlib.machinery.FileFinder`](importlib#importlib.machinery.FileFinder "importlib.machinery.FileFinder") and [`zipimport.zipimporter`](zipimport#zipimport.zipimporter "zipimport.zipimporter"). Changed in version 3.3: Updated to be based directly on [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") rather than relying on the package internal [**PEP 302**](https://www.python.org/dev/peps/pep-0302) import emulation. `pkgutil.walk_packages(path=None, prefix='', onerror=None)` Yields [`ModuleInfo`](#pkgutil.ModuleInfo "pkgutil.ModuleInfo") for all modules recursively on *path*, or, if *path* is `None`, all accessible modules. *path* should be either `None` or a list of paths to look for modules in. *prefix* is a string to output on the front of every module name on output. Note that this function must import all *packages* (*not* all modules!) on the given *path*, in order to access the `__path__` attribute to find submodules. 
*onerror* is a function which gets called with one argument (the name of the package which was being imported) if any exception occurs while trying to import a package. If no *onerror* function is supplied, [`ImportError`](exceptions#ImportError "ImportError")s are caught and ignored, while all other exceptions are propagated, terminating the search. Examples: ``` # list all modules python can access walk_packages() # list all submodules of ctypes walk_packages(ctypes.__path__, ctypes.__name__ + '.') ``` Note Only works for a [finder](../glossary#term-finder) which defines an `iter_modules()` method. This interface is non-standard, so the module also provides implementations for [`importlib.machinery.FileFinder`](importlib#importlib.machinery.FileFinder "importlib.machinery.FileFinder") and [`zipimport.zipimporter`](zipimport#zipimport.zipimporter "zipimport.zipimporter"). Changed in version 3.3: Updated to be based directly on [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") rather than relying on the package internal [**PEP 302**](https://www.python.org/dev/peps/pep-0302) import emulation. `pkgutil.get_data(package, resource)` Get a resource from a package. This is a wrapper for the [loader](../glossary#term-loader) [`get_data`](importlib#importlib.abc.ResourceLoader.get_data "importlib.abc.ResourceLoader.get_data") API. The *package* argument should be the name of a package, in standard module format (`foo.bar`). The *resource* argument should be in the form of a relative filename, using `/` as the path separator. The parent directory name `..` is not allowed, and nor is a rooted name (starting with a `/`). The function returns a binary string that is the contents of the specified resource. For packages located in the filesystem, which have already been imported, this is the rough equivalent of: ``` d = os.path.dirname(sys.modules[package].__file__) data = open(os.path.join(d, resource), 'rb').read() ``` If the package cannot be located or loaded, or it uses a [loader](../glossary#term-loader) which does not support [`get_data`](importlib#importlib.abc.ResourceLoader.get_data "importlib.abc.ResourceLoader.get_data"), then `None` is returned. In particular, the [loader](../glossary#term-loader) for [namespace packages](../glossary#term-namespace-package) does not support [`get_data`](importlib#importlib.abc.ResourceLoader.get_data "importlib.abc.ResourceLoader.get_data"). `pkgutil.resolve_name(name)` Resolve a name to an object. This functionality is used in numerous places in the standard library (see [bpo-12915](https://bugs.python.org/issue?@action=redirect&bpo=12915)) - and equivalent functionality is also in widely used third-party packages such as setuptools, Django and Pyramid. It is expected that *name* will be a string in one of the following formats, where W is shorthand for a valid Python identifier and dot stands for a literal period in these pseudo-regexes: * `W(.W)*` * `W(.W)*:(W(.W)*)?` The first form is intended for backward compatibility only. It assumes that some part of the dotted name is a package, and the rest is an object somewhere within that package, possibly nested inside other objects. Because the place where the package stops and the object hierarchy starts can’t be inferred by inspection, repeated attempts to import must be done with this form. 
In the second form, the caller makes the division point clear through the provision of a single colon: the dotted name to the left of the colon is a package to be imported, and the dotted name to the right is the object hierarchy within that package. Only one import is needed in this form. If it ends with the colon, then a module object is returned.

The function will return an object (which might be a module), or raise one of the following exceptions:

[`ValueError`](exceptions#ValueError "ValueError") – if *name* isn’t in a recognised format.

[`ImportError`](exceptions#ImportError "ImportError") – if an import failed when it shouldn’t have.

[`AttributeError`](exceptions#AttributeError "AttributeError") – If a failure occurred when traversing the object hierarchy within the imported package to get to the desired object.

New in version 3.9.

python http.cookies — HTTP state management

http.cookies — HTTP state management
====================================

**Source code:** [Lib/http/cookies.py](https://github.com/python/cpython/tree/3.9/Lib/http/cookies.py)

The [`http.cookies`](#module-http.cookies "http.cookies: Support for HTTP state management (cookies).") module defines classes for abstracting the concept of cookies, an HTTP state management mechanism. It supports both simple string-only cookies, and provides an abstraction for having any serializable data-type as cookie value.

The module formerly strictly applied the parsing rules described in the [**RFC 2109**](https://tools.ietf.org/html/rfc2109.html) and [**RFC 2068**](https://tools.ietf.org/html/rfc2068.html) specifications. It has since been discovered that MSIE 3.0x doesn’t follow the character rules outlined in those specs and also many current day browsers and servers have relaxed parsing rules when it comes to Cookie handling. As a result, the parsing rules used are a bit less strict.

The character set, [`string.ascii_letters`](string#string.ascii_letters "string.ascii_letters"), [`string.digits`](string#string.digits "string.digits") and ``!#$%&'*+-.^_`|~:`` denote the set of valid characters allowed by this module in Cookie name (as [`key`](#http.cookies.Morsel.key "http.cookies.Morsel.key")).

Changed in version 3.3: Allowed ‘:’ as a valid Cookie name character.

Note On encountering an invalid cookie, [`CookieError`](#http.cookies.CookieError "http.cookies.CookieError") is raised, so if your cookie data comes from a browser you should always prepare for invalid data and catch [`CookieError`](#http.cookies.CookieError "http.cookies.CookieError") on parsing.

`exception http.cookies.CookieError`

Exception failing because of [**RFC 2109**](https://tools.ietf.org/html/rfc2109.html) invalidity: incorrect attributes, incorrect *Set-Cookie* header, etc.

`class http.cookies.BaseCookie([input])`

This class is a dictionary-like object whose keys are strings and whose values are [`Morsel`](#http.cookies.Morsel "http.cookies.Morsel") instances. Note that upon setting a key to a value, the value is first converted to a [`Morsel`](#http.cookies.Morsel "http.cookies.Morsel") containing the key and the value. If *input* is given, it is passed to the [`load()`](#http.cookies.BaseCookie.load "http.cookies.BaseCookie.load") method.

`class http.cookies.SimpleCookie([input])`

This class derives from [`BaseCookie`](#http.cookies.BaseCookie "http.cookies.BaseCookie") and overrides `value_decode()` and `value_encode()`. SimpleCookie supports strings as cookie values.
When setting the value, SimpleCookie calls the builtin [`str()`](stdtypes#str "str") to convert the value to a string. Values received from HTTP are kept as strings.

See also

Module [`http.cookiejar`](http.cookiejar#module-http.cookiejar "http.cookiejar: Classes for automatic handling of HTTP cookies.")

HTTP cookie handling for web *clients*. The [`http.cookiejar`](http.cookiejar#module-http.cookiejar "http.cookiejar: Classes for automatic handling of HTTP cookies.") and [`http.cookies`](#module-http.cookies "http.cookies: Support for HTTP state management (cookies).") modules do not depend on each other.

[**RFC 2109**](https://tools.ietf.org/html/rfc2109.html) - HTTP State Management Mechanism

This is the state management specification implemented by this module.

Cookie Objects
--------------

`BaseCookie.value_decode(val)`

Return a tuple `(real_value, coded_value)` from a string representation. `real_value` can be any type. This method does no decoding in [`BaseCookie`](#http.cookies.BaseCookie "http.cookies.BaseCookie") — it exists so it can be overridden.

`BaseCookie.value_encode(val)`

Return a tuple `(real_value, coded_value)`. *val* can be any type, but `coded_value` will always be converted to a string. This method does no encoding in [`BaseCookie`](#http.cookies.BaseCookie "http.cookies.BaseCookie") — it exists so it can be overridden. In general, it should be the case that [`value_encode()`](#http.cookies.BaseCookie.value_encode "http.cookies.BaseCookie.value_encode") and [`value_decode()`](#http.cookies.BaseCookie.value_decode "http.cookies.BaseCookie.value_decode") are inverses on the range of *value\_decode*.

`BaseCookie.output(attrs=None, header='Set-Cookie:', sep='\r\n')`

Return a string representation suitable to be sent as HTTP headers. *attrs* and *header* are sent to each [`Morsel`](#http.cookies.Morsel "http.cookies.Morsel")’s [`output()`](#http.cookies.BaseCookie.output "http.cookies.BaseCookie.output") method. *sep* is used to join the headers together, and is by default the combination `'\r\n'` (CRLF).

`BaseCookie.js_output(attrs=None)`

Return an embeddable JavaScript snippet, which, if run on a browser which supports JavaScript, will act the same as if the HTTP headers were sent. The meaning for *attrs* is the same as in [`output()`](#http.cookies.BaseCookie.output "http.cookies.BaseCookie.output").

`BaseCookie.load(rawdata)`

If *rawdata* is a string, parse it as an `HTTP_COOKIE` and add the values found there as [`Morsel`](#http.cookies.Morsel "http.cookies.Morsel")s. If it is a dictionary, it is equivalent to:

```
for k, v in rawdata.items():
    cookie[k] = v
```

Morsel Objects
--------------

`class http.cookies.Morsel`

Abstract a key/value pair, which has some [**RFC 2109**](https://tools.ietf.org/html/rfc2109.html) attributes. Morsels are dictionary-like objects, whose set of keys is constant — the valid [**RFC 2109**](https://tools.ietf.org/html/rfc2109.html) attributes, which are

* `expires`
* `path`
* `comment`
* `domain`
* `max-age`
* `secure`
* `version`
* `httponly`
* `samesite`

The attribute `httponly` specifies that the cookie is only transferred in HTTP requests, and is not accessible through JavaScript. This is intended to mitigate some forms of cross-site scripting.

The attribute `samesite` specifies that the browser is not allowed to send the cookie along with cross-site requests. This helps to mitigate CSRF attacks. Valid values for this attribute are “Strict” and “Lax”.

The keys are case-insensitive and their default value is `''`.
Changed in version 3.5: `__eq__()` now takes [`key`](#http.cookies.Morsel.key "http.cookies.Morsel.key") and [`value`](#http.cookies.Morsel.value "http.cookies.Morsel.value") into account. Changed in version 3.7: Attributes [`key`](#http.cookies.Morsel.key "http.cookies.Morsel.key"), [`value`](#http.cookies.Morsel.value "http.cookies.Morsel.value") and [`coded_value`](#http.cookies.Morsel.coded_value "http.cookies.Morsel.coded_value") are read-only. Use [`set()`](#http.cookies.Morsel.set "http.cookies.Morsel.set") for setting them. Changed in version 3.8: Added support for the `samesite` attribute. `Morsel.value` The value of the cookie. `Morsel.coded_value` The encoded value of the cookie — this is what should be sent. `Morsel.key` The name of the cookie. `Morsel.set(key, value, coded_value)` Set the *key*, *value* and *coded\_value* attributes. `Morsel.isReservedKey(K)` Whether *K* is a member of the set of keys of a [`Morsel`](#http.cookies.Morsel "http.cookies.Morsel"). `Morsel.output(attrs=None, header='Set-Cookie:')` Return a string representation of the Morsel, suitable to be sent as an HTTP header. By default, all the attributes are included, unless *attrs* is given, in which case it should be a list of attributes to use. *header* is by default `"Set-Cookie:"`. `Morsel.js_output(attrs=None)` Return an embeddable JavaScript snippet, which, if run on a browser which supports JavaScript, will act the same as if the HTTP header was sent. The meaning for *attrs* is the same as in [`output()`](#http.cookies.Morsel.output "http.cookies.Morsel.output"). `Morsel.OutputString(attrs=None)` Return a string representing the Morsel, without any surrounding HTTP or JavaScript. The meaning for *attrs* is the same as in [`output()`](#http.cookies.Morsel.output "http.cookies.Morsel.output"). `Morsel.update(values)` Update the values in the Morsel dictionary with the values in the dictionary *values*. Raise an error if any of the keys in the *values* dict is not a valid [**RFC 2109**](https://tools.ietf.org/html/rfc2109.html) attribute. Changed in version 3.5: an error is raised for invalid keys. `Morsel.copy(value)` Return a shallow copy of the Morsel object. Changed in version 3.5: return a Morsel object instead of a dict. `Morsel.setdefault(key, value=None)` Raise an error if key is not a valid [**RFC 2109**](https://tools.ietf.org/html/rfc2109.html) attribute, otherwise behave the same as [`dict.setdefault()`](stdtypes#dict.setdefault "dict.setdefault"). Example ------- The following example demonstrates how to use the [`http.cookies`](#module-http.cookies "http.cookies: Support for HTTP state management (cookies).") module. 
``` >>> from http import cookies >>> C = cookies.SimpleCookie() >>> C["fig"] = "newton" >>> C["sugar"] = "wafer" >>> print(C) # generate HTTP headers Set-Cookie: fig=newton Set-Cookie: sugar=wafer >>> print(C.output()) # same thing Set-Cookie: fig=newton Set-Cookie: sugar=wafer >>> C = cookies.SimpleCookie() >>> C["rocky"] = "road" >>> C["rocky"]["path"] = "/cookie" >>> print(C.output(header="Cookie:")) Cookie: rocky=road; Path=/cookie >>> print(C.output(attrs=[], header="Cookie:")) Cookie: rocky=road >>> C = cookies.SimpleCookie() >>> C.load("chips=ahoy; vienna=finger") # load from a string (HTTP header) >>> print(C) Set-Cookie: chips=ahoy Set-Cookie: vienna=finger >>> C = cookies.SimpleCookie() >>> C.load('keebler="E=everybody; L=\\"Loves\\"; fudge=\\012;";') >>> print(C) Set-Cookie: keebler="E=everybody; L=\"Loves\"; fudge=\012;" >>> C = cookies.SimpleCookie() >>> C["oreo"] = "doublestuff" >>> C["oreo"]["path"] = "/" >>> print(C) Set-Cookie: oreo=doublestuff; Path=/ >>> C = cookies.SimpleCookie() >>> C["twix"] = "none for you" >>> C["twix"].value 'none for you' >>> C = cookies.SimpleCookie() >>> C["number"] = 7 # equivalent to C["number"] = str(7) >>> C["string"] = "seven" >>> C["number"].value '7' >>> C["string"].value 'seven' >>> print(C) Set-Cookie: number=7 Set-Cookie: string=seven ```
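The `value_decode()`/`value_encode()` hooks described above exist so that subclasses can carry non-string values. Below is a minimal sketch (not part of the module; the `JSONCookie` name and the base64-of-JSON encoding are illustrative choices, not a documented recipe) of a [`BaseCookie`](#http.cookies.BaseCookie "http.cookies.BaseCookie") subclass that stores JSON-serializable values while keeping the encoded form within cookie-safe characters:

```
import base64
import json
from http import cookies

class JSONCookie(cookies.BaseCookie):
    """Sketch: cookies whose values are JSON-serializable objects."""

    def value_encode(self, val):
        # Return (real_value, coded_value); coded_value must be a string.
        coded = base64.urlsafe_b64encode(json.dumps(val).encode('utf-8'))
        return val, coded.decode('ascii')

    def value_decode(self, val):
        # Inverse of value_encode on its coded output.
        real = json.loads(base64.urlsafe_b64decode(val.encode('ascii')))
        return real, val

C = JSONCookie()
C['prefs'] = {'theme': 'dark', 'count': 3}
print(C['prefs'].value)   # -> {'theme': 'dark', 'count': 3}
```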
python Built-in Exceptions Built-in Exceptions =================== In Python, all exceptions must be instances of a class that derives from [`BaseException`](#BaseException "BaseException"). In a [`try`](../reference/compound_stmts#try) statement with an [`except`](../reference/compound_stmts#except) clause that mentions a particular class, that clause also handles any exception classes derived from that class (but not exception classes from which *it* is derived). Two exception classes that are not related via subclassing are never equivalent, even if they have the same name. The built-in exceptions listed below can be generated by the interpreter or built-in functions. Except where mentioned, they have an “associated value” indicating the detailed cause of the error. This may be a string or a tuple of several items of information (e.g., an error code and a string explaining the code). The associated value is usually passed as arguments to the exception class’s constructor. User code can raise built-in exceptions. This can be used to test an exception handler or to report an error condition “just like” the situation in which the interpreter raises the same exception; but beware that there is nothing to prevent user code from raising an inappropriate error. The built-in exception classes can be subclassed to define new exceptions; programmers are encouraged to derive new exceptions from the [`Exception`](#Exception "Exception") class or one of its subclasses, and not from [`BaseException`](#BaseException "BaseException"). More information on defining exceptions is available in the Python Tutorial under [User-defined Exceptions](../tutorial/errors#tut-userexceptions). Exception context ----------------- When raising a new exception while another exception is already being handled, the new exception’s `__context__` attribute is automatically set to the handled exception. An exception may be handled when an [`except`](../reference/compound_stmts#except) or [`finally`](../reference/compound_stmts#finally) clause, or a [`with`](../reference/compound_stmts#with) statement, is used. This implicit exception context can be supplemented with an explicit cause by using `from` with [`raise`](../reference/simple_stmts#raise): ``` raise new_exc from original_exc ``` The expression following [`from`](../reference/simple_stmts#raise) must be an exception or `None`. It will be set as `__cause__` on the raised exception. Setting `__cause__` also implicitly sets the `__suppress_context__` attribute to `True`, so that using `raise new_exc from None` effectively replaces the old exception with the new one for display purposes (e.g. converting [`KeyError`](#KeyError "KeyError") to [`AttributeError`](#AttributeError "AttributeError")), while leaving the old exception available in `__context__` for introspection when debugging. The default traceback display code shows these chained exceptions in addition to the traceback for the exception itself. An explicitly chained exception in `__cause__` is always shown when present. An implicitly chained exception in `__context__` is shown only if `__cause__` is [`None`](constants#None "None") and `__suppress_context__` is false. In either case, the exception itself is always shown after any chained exceptions so that the final line of the traceback always shows the last exception that was raised. Inheriting from built-in exceptions ----------------------------------- User code can create subclasses that inherit from an exception type. 
It’s recommended to only subclass one exception type at a time to avoid any possible conflicts between how the bases handle the `args` attribute, as well as due to possible memory layout incompatibilities. **CPython implementation detail:** Most built-in exceptions are implemented in C for efficiency, see: [Objects/exceptions.c](https://github.com/python/cpython/tree/3.9/Objects/exceptions.c). Some have custom memory layouts which makes it impossible to create a subclass that inherits from multiple exception types. The memory layout of a type is an implementation detail and might change between Python versions, leading to new conflicts in the future. Therefore, it’s recommended to avoid subclassing multiple exception types altogether. Base classes ------------ The following exceptions are used mostly as base classes for other exceptions. `exception BaseException` The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use [`Exception`](#Exception "Exception")). If [`str()`](stdtypes#str "str") is called on an instance of this class, the representation of the argument(s) to the instance are returned, or the empty string when there were no arguments. `args` The tuple of arguments given to the exception constructor. Some built-in exceptions (like [`OSError`](#OSError "OSError")) expect a certain number of arguments and assign a special meaning to the elements of this tuple, while others are usually called only with a single string giving an error message. `with_traceback(tb)` This method sets *tb* as the new traceback for the exception and returns the exception object. It is usually used in exception handling code like this: ``` try: ... except SomeException: tb = sys.exc_info()[2] raise OtherException(...).with_traceback(tb) ``` `exception Exception` All built-in, non-system-exiting exceptions are derived from this class. All user-defined exceptions should also be derived from this class. `exception ArithmeticError` The base class for those built-in exceptions that are raised for various arithmetic errors: [`OverflowError`](#OverflowError "OverflowError"), [`ZeroDivisionError`](#ZeroDivisionError "ZeroDivisionError"), [`FloatingPointError`](#FloatingPointError "FloatingPointError"). `exception BufferError` Raised when a [buffer](../c-api/buffer#bufferobjects) related operation cannot be performed. `exception LookupError` The base class for the exceptions that are raised when a key or index used on a mapping or sequence is invalid: [`IndexError`](#IndexError "IndexError"), [`KeyError`](#KeyError "KeyError"). This can be raised directly by [`codecs.lookup()`](codecs#codecs.lookup "codecs.lookup"). Concrete exceptions ------------------- The following exceptions are the exceptions that are usually raised. `exception AssertionError` Raised when an [`assert`](../reference/simple_stmts#assert) statement fails. `exception AttributeError` Raised when an attribute reference (see [Attribute references](../reference/expressions#attribute-references)) or assignment fails. (When an object does not support attribute references or attribute assignments at all, [`TypeError`](#TypeError "TypeError") is raised.) `exception EOFError` Raised when the [`input()`](functions#input "input") function hits an end-of-file condition (EOF) without reading any data. (N.B.: the `io.IOBase.read()` and [`io.IOBase.readline()`](io#io.IOBase.readline "io.IOBase.readline") methods return an empty string when they hit EOF.) 
`exception FloatingPointError` Not currently used. `exception GeneratorExit` Raised when a [generator](../glossary#term-generator) or [coroutine](../glossary#term-coroutine) is closed; see [`generator.close()`](../reference/expressions#generator.close "generator.close") and [`coroutine.close()`](../reference/datamodel#coroutine.close "coroutine.close"). It directly inherits from [`BaseException`](#BaseException "BaseException") instead of [`Exception`](#Exception "Exception") since it is technically not an error. `exception ImportError` Raised when the [`import`](../reference/simple_stmts#import) statement has troubles trying to load a module. Also raised when the “from list” in `from ... import` has a name that cannot be found. The `name` and `path` attributes can be set using keyword-only arguments to the constructor. When set they represent the name of the module that was attempted to be imported and the path to any file which triggered the exception, respectively. Changed in version 3.3: Added the `name` and `path` attributes. `exception ModuleNotFoundError` A subclass of [`ImportError`](#ImportError "ImportError") which is raised by [`import`](../reference/simple_stmts#import) when a module could not be located. It is also raised when `None` is found in [`sys.modules`](sys#sys.modules "sys.modules"). New in version 3.6. `exception IndexError` Raised when a sequence subscript is out of range. (Slice indices are silently truncated to fall in the allowed range; if an index is not an integer, [`TypeError`](#TypeError "TypeError") is raised.) `exception KeyError` Raised when a mapping (dictionary) key is not found in the set of existing keys. `exception KeyboardInterrupt` Raised when the user hits the interrupt key (normally `Control-C` or `Delete`). During execution, a check for interrupts is made regularly. The exception inherits from [`BaseException`](#BaseException "BaseException") so as to not be accidentally caught by code that catches [`Exception`](#Exception "Exception") and thus prevent the interpreter from exiting. Note Catching a [`KeyboardInterrupt`](#KeyboardInterrupt "KeyboardInterrupt") requires special consideration. Because it can be raised at unpredictable points, it may, in some circumstances, leave the running program in an inconsistent state. It is generally best to allow [`KeyboardInterrupt`](#KeyboardInterrupt "KeyboardInterrupt") to end the program as quickly as possible or avoid raising it entirely. (See [Note on Signal Handlers and Exceptions](signal#handlers-and-exceptions).) `exception MemoryError` Raised when an operation runs out of memory but the situation may still be rescued (by deleting some objects). The associated value is a string indicating what kind of (internal) operation ran out of memory. Note that because of the underlying memory management architecture (C’s `malloc()` function), the interpreter may not always be able to completely recover from this situation; it nevertheless raises an exception so that a stack traceback can be printed, in case a run-away program was the cause. `exception NameError` Raised when a local or global name is not found. This applies only to unqualified names. The associated value is an error message that includes the name that could not be found. `exception NotImplementedError` This exception is derived from [`RuntimeError`](#RuntimeError "RuntimeError"). 
In user defined base classes, abstract methods should raise this exception when they require derived classes to override the method, or while the class is being developed to indicate that the real implementation still needs to be added. Note It should not be used to indicate that an operator or method is not meant to be supported at all – in that case either leave the operator / method undefined or, if a subclass, set it to [`None`](constants#None "None"). Note `NotImplementedError` and `NotImplemented` are not interchangeable, even though they have similar names and purposes. See [`NotImplemented`](constants#NotImplemented "NotImplemented") for details on when to use it. `exception OSError([arg])` `exception OSError(errno, strerror[, filename[, winerror[, filename2]]])` This exception is raised when a system function returns a system-related error, including I/O failures such as “file not found” or “disk full” (not for illegal argument types or other incidental errors). The second form of the constructor sets the corresponding attributes, described below. The attributes default to [`None`](constants#None "None") if not specified. For backwards compatibility, if three arguments are passed, the [`args`](#BaseException.args "BaseException.args") attribute contains only a 2-tuple of the first two constructor arguments. The constructor often actually returns a subclass of [`OSError`](#OSError "OSError"), as described in [OS exceptions](#os-exceptions) below. The particular subclass depends on the final [`errno`](#OSError.errno "OSError.errno") value. This behaviour only occurs when constructing [`OSError`](#OSError "OSError") directly or via an alias, and is not inherited when subclassing. `errno` A numeric error code from the C variable `errno`. `winerror` Under Windows, this gives you the native Windows error code. The [`errno`](#OSError.errno "OSError.errno") attribute is then an approximate translation, in POSIX terms, of that native error code. Under Windows, if the *winerror* constructor argument is an integer, the [`errno`](#OSError.errno "OSError.errno") attribute is determined from the Windows error code, and the *errno* argument is ignored. On other platforms, the *winerror* argument is ignored, and the [`winerror`](#OSError.winerror "OSError.winerror") attribute does not exist. `strerror` The corresponding error message, as provided by the operating system. It is formatted by the C functions `perror()` under POSIX, and `FormatMessage()` under Windows. `filename` `filename2` For exceptions that involve a file system path (such as [`open()`](functions#open "open") or [`os.unlink()`](os#os.unlink "os.unlink")), [`filename`](#OSError.filename "OSError.filename") is the file name passed to the function. For functions that involve two file system paths (such as [`os.rename()`](os#os.rename "os.rename")), [`filename2`](#OSError.filename2 "OSError.filename2") corresponds to the second file name passed to the function. Changed in version 3.3: [`EnvironmentError`](#EnvironmentError "EnvironmentError"), [`IOError`](#IOError "IOError"), [`WindowsError`](#WindowsError "WindowsError"), [`socket.error`](socket#socket.error "socket.error"), [`select.error`](select#select.error "select.error") and `mmap.error` have been merged into [`OSError`](#OSError "OSError"), and the constructor may return a subclass. 
Changed in version 3.4: The [`filename`](#OSError.filename "OSError.filename") attribute is now the original file name passed to the function, instead of the name encoded to or decoded from the filesystem encoding. Also, the *filename2* constructor argument and attribute were added. `exception OverflowError` Raised when the result of an arithmetic operation is too large to be represented. This cannot occur for integers (which would rather raise [`MemoryError`](#MemoryError "MemoryError") than give up). However, for historical reasons, OverflowError is sometimes raised for integers that are outside a required range. Because of the lack of standardization of floating point exception handling in C, most floating point operations are not checked. `exception RecursionError` This exception is derived from [`RuntimeError`](#RuntimeError "RuntimeError"). It is raised when the interpreter detects that the maximum recursion depth (see [`sys.getrecursionlimit()`](sys#sys.getrecursionlimit "sys.getrecursionlimit")) is exceeded. New in version 3.5: Previously, a plain [`RuntimeError`](#RuntimeError "RuntimeError") was raised. `exception ReferenceError` This exception is raised when a weak reference proxy, created by the [`weakref.proxy()`](weakref#weakref.proxy "weakref.proxy") function, is used to access an attribute of the referent after it has been garbage collected. For more information on weak references, see the [`weakref`](weakref#module-weakref "weakref: Support for weak references and weak dictionaries.") module. `exception RuntimeError` Raised when an error is detected that doesn’t fall in any of the other categories. The associated value is a string indicating what precisely went wrong. `exception StopIteration` Raised by built-in function [`next()`](functions#next "next") and an [iterator](../glossary#term-iterator)’s [`__next__()`](stdtypes#iterator.__next__ "iterator.__next__") method to signal that there are no further items produced by the iterator. The exception object has a single attribute `value`, which is given as an argument when constructing the exception, and defaults to [`None`](constants#None "None"). When a [generator](../glossary#term-generator) or [coroutine](../glossary#term-coroutine) function returns, a new [`StopIteration`](#StopIteration "StopIteration") instance is raised, and the value returned by the function is used as the `value` parameter to the constructor of the exception. If generator code directly or indirectly raises [`StopIteration`](#StopIteration "StopIteration"), it is converted into a [`RuntimeError`](#RuntimeError "RuntimeError") (retaining the [`StopIteration`](#StopIteration "StopIteration") as the new exception’s cause). Changed in version 3.3: Added `value` attribute and the ability for generator functions to use it to return a value. Changed in version 3.5: Introduced the RuntimeError transformation via `from __future__ import generator_stop`, see [**PEP 479**](https://www.python.org/dev/peps/pep-0479). Changed in version 3.7: Enable [**PEP 479**](https://www.python.org/dev/peps/pep-0479) for all code by default: a [`StopIteration`](#StopIteration "StopIteration") error raised in a generator is transformed into a [`RuntimeError`](#RuntimeError "RuntimeError"). `exception StopAsyncIteration` Must be raised by the [`__anext__()`](../reference/datamodel#object.__anext__ "object.__anext__") method of an [asynchronous iterator](../glossary#term-asynchronous-iterator) object to stop the iteration. New in version 3.5. 
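To make the generator behaviour described above for [`StopIteration`](#StopIteration "StopIteration") concrete, here is a brief illustrative session (not part of the original reference text): the value a generator `return`s is carried on the exception’s `value` attribute.

```
>>> def gen():
...     yield 1
...     return "finished"
...
>>> g = gen()
>>> next(g)
1
>>> try:
...     next(g)
... except StopIteration as exc:
...     exc.value
...
'finished'
```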
`exception SyntaxError(message, details)` Raised when the parser encounters a syntax error. This may occur in an [`import`](../reference/simple_stmts#import) statement, in a call to the built-in functions [`compile()`](functions#compile "compile"), [`exec()`](functions#exec "exec"), or [`eval()`](functions#eval "eval"), or when reading the initial script or standard input (also interactively). The [`str()`](stdtypes#str "str") of the exception instance returns only the error message. Details is a tuple whose members are also available as separate attributes. `filename` The name of the file the syntax error occurred in. `lineno` Which line number in the file the error occurred in. This is 1-indexed: the first line in the file has a `lineno` of 1. `offset` The column in the line where the error occurred. This is 1-indexed: the first character in the line has an `offset` of 1. `text` The source code text involved in the error. For errors in f-string fields, the message is prefixed by “f-string: ” and the offsets are offsets in a text constructed from the replacement expression. For example, compiling `f'Bad {a b} field'` results in this `args` attribute: `('f-string: ...', ('', 1, 4, '(a b)\n'))`. `exception IndentationError` Base class for syntax errors related to incorrect indentation. This is a subclass of [`SyntaxError`](#SyntaxError "SyntaxError"). `exception TabError` Raised when indentation contains an inconsistent use of tabs and spaces. This is a subclass of [`IndentationError`](#IndentationError "IndentationError"). `exception SystemError` Raised when the interpreter finds an internal error, but the situation does not look serious enough to cause it to abandon all hope. The associated value is a string indicating what went wrong (in low-level terms). You should report this to the author or maintainer of your Python interpreter. Be sure to report the version of the Python interpreter (`sys.version`; it is also printed at the start of an interactive Python session), the exact error message (the exception’s associated value) and if possible the source of the program that triggered the error. `exception SystemExit` This exception is raised by the [`sys.exit()`](sys#sys.exit "sys.exit") function. It inherits from [`BaseException`](#BaseException "BaseException") instead of [`Exception`](#Exception "Exception") so that it is not accidentally caught by code that catches [`Exception`](#Exception "Exception"). This allows the exception to properly propagate up and cause the interpreter to exit. When it is not handled, the Python interpreter exits; no stack traceback is printed. The constructor accepts the same optional argument passed to [`sys.exit()`](sys#sys.exit "sys.exit"). If the value is an integer, it specifies the system exit status (passed to C’s `exit()` function); if it is `None`, the exit status is zero; if it has another type (such as a string), the object’s value is printed and the exit status is one. A call to [`sys.exit()`](sys#sys.exit "sys.exit") is translated into an exception so that clean-up handlers ([`finally`](../reference/compound_stmts#finally) clauses of [`try`](../reference/compound_stmts#try) statements) can be executed, and so that a debugger can execute a script without running the risk of losing control. The [`os._exit()`](os#os._exit "os._exit") function can be used if it is absolutely positively necessary to exit immediately (for example, in the child process after a call to [`os.fork()`](os#os.fork "os.fork")). 
`code` The exit status or error message that is passed to the constructor. (Defaults to `None`.) `exception TypeError` Raised when an operation or function is applied to an object of inappropriate type. The associated value is a string giving details about the type mismatch. This exception may be raised by user code to indicate that an attempted operation on an object is not supported, and is not meant to be. If an object is meant to support a given operation but has not yet provided an implementation, [`NotImplementedError`](#NotImplementedError "NotImplementedError") is the proper exception to raise. Passing arguments of the wrong type (e.g. passing a [`list`](stdtypes#list "list") when an [`int`](functions#int "int") is expected) should result in a [`TypeError`](#TypeError "TypeError"), but passing arguments with the wrong value (e.g. a number outside expected boundaries) should result in a [`ValueError`](#ValueError "ValueError"). `exception UnboundLocalError` Raised when a reference is made to a local variable in a function or method, but no value has been bound to that variable. This is a subclass of [`NameError`](#NameError "NameError"). `exception UnicodeError` Raised when a Unicode-related encoding or decoding error occurs. It is a subclass of [`ValueError`](#ValueError "ValueError"). [`UnicodeError`](#UnicodeError "UnicodeError") has attributes that describe the encoding or decoding error. For example, `err.object[err.start:err.end]` gives the particular invalid input that the codec failed on. `encoding` The name of the encoding that raised the error. `reason` A string describing the specific codec error. `object` The object the codec was attempting to encode or decode. `start` The first index of invalid data in [`object`](functions#object "object"). `end` The index after the last invalid data in [`object`](functions#object "object"). `exception UnicodeEncodeError` Raised when a Unicode-related error occurs during encoding. It is a subclass of [`UnicodeError`](#UnicodeError "UnicodeError"). `exception UnicodeDecodeError` Raised when a Unicode-related error occurs during decoding. It is a subclass of [`UnicodeError`](#UnicodeError "UnicodeError"). `exception UnicodeTranslateError` Raised when a Unicode-related error occurs during translation. It is a subclass of [`UnicodeError`](#UnicodeError "UnicodeError"). `exception ValueError` Raised when an operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as [`IndexError`](#IndexError "IndexError"). `exception ZeroDivisionError` Raised when the second argument of a division or modulo operation is zero. The associated value is a string indicating the type of the operands and the operation. The following exceptions are kept for compatibility with previous versions; starting from Python 3.3, they are aliases of [`OSError`](#OSError "OSError"). `exception EnvironmentError` `exception IOError` `exception WindowsError` Only available on Windows. ### OS exceptions The following exceptions are subclasses of [`OSError`](#OSError "OSError"); they are raised depending on the system error code. `exception BlockingIOError` Raised when an operation would block on an object (e.g. socket) set for non-blocking operation. 
Corresponds to `errno` [`EAGAIN`](errno#errno.EAGAIN "errno.EAGAIN"), [`EALREADY`](errno#errno.EALREADY "errno.EALREADY"), [`EWOULDBLOCK`](errno#errno.EWOULDBLOCK "errno.EWOULDBLOCK") and [`EINPROGRESS`](errno#errno.EINPROGRESS "errno.EINPROGRESS"). In addition to those of [`OSError`](#OSError "OSError"), [`BlockingIOError`](#BlockingIOError "BlockingIOError") can have one more attribute: `characters_written` An integer containing the number of characters written to the stream before it blocked. This attribute is available when using the buffered I/O classes from the [`io`](io#module-io "io: Core tools for working with streams.") module. `exception ChildProcessError` Raised when an operation on a child process failed. Corresponds to `errno` [`ECHILD`](errno#errno.ECHILD "errno.ECHILD"). `exception ConnectionError` A base class for connection-related issues. Subclasses are [`BrokenPipeError`](#BrokenPipeError "BrokenPipeError"), [`ConnectionAbortedError`](#ConnectionAbortedError "ConnectionAbortedError"), [`ConnectionRefusedError`](#ConnectionRefusedError "ConnectionRefusedError") and [`ConnectionResetError`](#ConnectionResetError "ConnectionResetError"). `exception BrokenPipeError` A subclass of [`ConnectionError`](#ConnectionError "ConnectionError"), raised when trying to write on a pipe while the other end has been closed, or trying to write on a socket which has been shutdown for writing. Corresponds to `errno` [`EPIPE`](errno#errno.EPIPE "errno.EPIPE") and [`ESHUTDOWN`](errno#errno.ESHUTDOWN "errno.ESHUTDOWN"). `exception ConnectionAbortedError` A subclass of [`ConnectionError`](#ConnectionError "ConnectionError"), raised when a connection attempt is aborted by the peer. Corresponds to `errno` [`ECONNABORTED`](errno#errno.ECONNABORTED "errno.ECONNABORTED"). `exception ConnectionRefusedError` A subclass of [`ConnectionError`](#ConnectionError "ConnectionError"), raised when a connection attempt is refused by the peer. Corresponds to `errno` [`ECONNREFUSED`](errno#errno.ECONNREFUSED "errno.ECONNREFUSED"). `exception ConnectionResetError` A subclass of [`ConnectionError`](#ConnectionError "ConnectionError"), raised when a connection is reset by the peer. Corresponds to `errno` [`ECONNRESET`](errno#errno.ECONNRESET "errno.ECONNRESET"). `exception FileExistsError` Raised when trying to create a file or directory which already exists. Corresponds to `errno` [`EEXIST`](errno#errno.EEXIST "errno.EEXIST"). `exception FileNotFoundError` Raised when a file or directory is requested but doesn’t exist. Corresponds to `errno` [`ENOENT`](errno#errno.ENOENT "errno.ENOENT"). `exception InterruptedError` Raised when a system call is interrupted by an incoming signal. Corresponds to `errno` [`EINTR`](errno#errno.EINTR "errno.EINTR"). Changed in version 3.5: Python now retries system calls when a syscall is interrupted by a signal, except if the signal handler raises an exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale), instead of raising [`InterruptedError`](#InterruptedError "InterruptedError"). `exception IsADirectoryError` Raised when a file operation (such as [`os.remove()`](os#os.remove "os.remove")) is requested on a directory. Corresponds to `errno` [`EISDIR`](errno#errno.EISDIR "errno.EISDIR"). `exception NotADirectoryError` Raised when a directory operation (such as [`os.listdir()`](os#os.listdir "os.listdir")) is requested on something which is not a directory. 
On most POSIX platforms, it may also be raised if an operation attempts to open or traverse a non-directory file as if it were a directory. Corresponds to `errno` [`ENOTDIR`](errno#errno.ENOTDIR "errno.ENOTDIR"). `exception PermissionError` Raised when trying to run an operation without the adequate access rights - for example filesystem permissions. Corresponds to `errno` [`EACCES`](errno#errno.EACCES "errno.EACCES") and [`EPERM`](errno#errno.EPERM "errno.EPERM"). `exception ProcessLookupError` Raised when a given process doesn’t exist. Corresponds to `errno` [`ESRCH`](errno#errno.ESRCH "errno.ESRCH"). `exception TimeoutError` Raised when a system function timed out at the system level. Corresponds to `errno` [`ETIMEDOUT`](errno#errno.ETIMEDOUT "errno.ETIMEDOUT"). New in version 3.3: All the above [`OSError`](#OSError "OSError") subclasses were added. See also [**PEP 3151**](https://www.python.org/dev/peps/pep-3151) - Reworking the OS and IO exception hierarchy Warnings -------- The following exceptions are used as warning categories; see the [Warning Categories](warnings#warning-categories) documentation for more details. `exception Warning` Base class for warning categories. `exception UserWarning` Base class for warnings generated by user code. `exception DeprecationWarning` Base class for warnings about deprecated features when those warnings are intended for other Python developers. Ignored by the default warning filters, except in the `__main__` module ([**PEP 565**](https://www.python.org/dev/peps/pep-0565)). Enabling the [Python Development Mode](devmode#devmode) shows this warning. The deprecation policy is described in [**PEP 387**](https://www.python.org/dev/peps/pep-0387). `exception PendingDeprecationWarning` Base class for warnings about features which are obsolete and expected to be deprecated in the future, but are not deprecated at the moment. This class is rarely used as emitting a warning about a possible upcoming deprecation is unusual, and [`DeprecationWarning`](#DeprecationWarning "DeprecationWarning") is preferred for already active deprecations. Ignored by the default warning filters. Enabling the [Python Development Mode](devmode#devmode) shows this warning. The deprecation policy is described in [**PEP 387**](https://www.python.org/dev/peps/pep-0387). `exception SyntaxWarning` Base class for warnings about dubious syntax. `exception RuntimeWarning` Base class for warnings about dubious runtime behavior. `exception FutureWarning` Base class for warnings about deprecated features when those warnings are intended for end users of applications that are written in Python. `exception ImportWarning` Base class for warnings about probable mistakes in module imports. Ignored by the default warning filters. Enabling the [Python Development Mode](devmode#devmode) shows this warning. `exception UnicodeWarning` Base class for warnings related to Unicode. `exception BytesWarning` Base class for warnings related to [`bytes`](stdtypes#bytes "bytes") and [`bytearray`](stdtypes#bytearray "bytearray"). `exception ResourceWarning` Base class for warnings related to resource usage. Ignored by the default warning filters. Enabling the [Python Development Mode](devmode#devmode) shows this warning. New in version 3.2. 
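As a quick, illustrative sketch of how these categories behave in practice (using only the standard [`warnings`](warnings#module-warnings "warnings: Issue warning messages and control their disposition.") machinery; the message text is made up), a warning can be captured and its category inspected:

```
>>> import warnings
>>> with warnings.catch_warnings(record=True) as caught:
...     warnings.simplefilter("always")
...     warnings.warn("this API is deprecated", DeprecationWarning)
...
>>> caught[0].category
<class 'DeprecationWarning'>
>>> issubclass(caught[0].category, Warning)
True
```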
Exception hierarchy ------------------- The class hierarchy for built-in exceptions is: ``` BaseException +-- SystemExit +-- KeyboardInterrupt +-- GeneratorExit +-- Exception +-- StopIteration +-- StopAsyncIteration +-- ArithmeticError | +-- FloatingPointError | +-- OverflowError | +-- ZeroDivisionError +-- AssertionError +-- AttributeError +-- BufferError +-- EOFError +-- ImportError | +-- ModuleNotFoundError +-- LookupError | +-- IndexError | +-- KeyError +-- MemoryError +-- NameError | +-- UnboundLocalError +-- OSError | +-- BlockingIOError | +-- ChildProcessError | +-- ConnectionError | | +-- BrokenPipeError | | +-- ConnectionAbortedError | | +-- ConnectionRefusedError | | +-- ConnectionResetError | +-- FileExistsError | +-- FileNotFoundError | +-- InterruptedError | +-- IsADirectoryError | +-- NotADirectoryError | +-- PermissionError | +-- ProcessLookupError | +-- TimeoutError +-- ReferenceError +-- RuntimeError | +-- NotImplementedError | +-- RecursionError +-- SyntaxError | +-- IndentationError | +-- TabError +-- SystemError +-- TypeError +-- ValueError | +-- UnicodeError | +-- UnicodeDecodeError | +-- UnicodeEncodeError | +-- UnicodeTranslateError +-- Warning +-- DeprecationWarning +-- PendingDeprecationWarning +-- RuntimeWarning +-- SyntaxWarning +-- UserWarning +-- FutureWarning +-- ImportWarning +-- UnicodeWarning +-- BytesWarning +-- ResourceWarning ```
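One practical consequence of this hierarchy is that an `except` clause naming a base class also catches every subclass below it. A minimal sketch (the file path is hypothetical):

```
>>> try:
...     open("/no/such/file")
... except OSError as exc:
...     # FileNotFoundError is a subclass of OSError, so it is caught here
...     type(exc).__name__
...
'FileNotFoundError'
```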
python email.errors: Exception and Defect classes email.errors: Exception and Defect classes ========================================== **Source code:** [Lib/email/errors.py](https://github.com/python/cpython/tree/3.9/Lib/email/errors.py) The following exception classes are defined in the [`email.errors`](#module-email.errors "email.errors: The exception classes used by the email package.") module: `exception email.errors.MessageError` This is the base class for all exceptions that the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package can raise. It is derived from the standard [`Exception`](exceptions#Exception "Exception") class and defines no additional methods. `exception email.errors.MessageParseError` This is the base class for exceptions raised by the [`Parser`](email.parser#email.parser.Parser "email.parser.Parser") class. It is derived from [`MessageError`](#email.errors.MessageError "email.errors.MessageError"). This class is also used internally by the parser used by [`headerregistry`](email.headerregistry#module-email.headerregistry "email.headerregistry: Automatic Parsing of headers based on the field name"). `exception email.errors.HeaderParseError` Raised under some error conditions when parsing the [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html) headers of a message, this class is derived from [`MessageParseError`](#email.errors.MessageParseError "email.errors.MessageParseError"). The [`set_boundary()`](email.message#email.message.EmailMessage.set_boundary "email.message.EmailMessage.set_boundary") method will raise this error if the content type is unknown when the method is called. [`Header`](email.header#email.header.Header "email.header.Header") may raise this error for certain base64 decoding errors, and when an attempt is made to create a header that appears to contain an embedded header (that is, there is what is supposed to be a continuation line that has no leading whitespace and looks like a header). `exception email.errors.BoundaryError` Deprecated and no longer used. `exception email.errors.MultipartConversionError` Raised when a payload is added to a [`Message`](email.compat32-message#email.message.Message "email.message.Message") object using `add_payload()`, but the payload is already a scalar and the message’s *Content-Type* main type is not either *multipart* or missing. [`MultipartConversionError`](#email.errors.MultipartConversionError "email.errors.MultipartConversionError") multiply inherits from [`MessageError`](#email.errors.MessageError "email.errors.MessageError") and the built-in [`TypeError`](exceptions#TypeError "TypeError"). Since `Message.add_payload()` is deprecated, this exception is rarely raised in practice. However the exception may also be raised if the [`attach()`](email.compat32-message#email.message.Message.attach "email.message.Message.attach") method is called on an instance of a class derived from [`MIMENonMultipart`](email.mime#email.mime.nonmultipart.MIMENonMultipart "email.mime.nonmultipart.MIMENonMultipart") (e.g. [`MIMEImage`](email.mime#email.mime.image.MIMEImage "email.mime.image.MIMEImage")). Here is the list of the defects that the [`FeedParser`](email.parser#email.parser.FeedParser "email.parser.FeedParser") can find while parsing messages. 
Note that the defects are added to the message where the problem was found, so for example, if a message nested inside a *multipart/alternative* had a malformed header, that nested message object would have a defect, but the containing messages would not. All defect classes are subclassed from `email.errors.MessageDefect`. * `NoBoundaryInMultipartDefect` – A message claimed to be a multipart, but had no *boundary* parameter. * `StartBoundaryNotFoundDefect` – The start boundary claimed in the *Content-Type* header was never found. * `CloseBoundaryNotFoundDefect` – A start boundary was found, but no corresponding close boundary was ever found. New in version 3.3. * `FirstHeaderLineIsContinuationDefect` – The message had a continuation line as its first header line. * `MisplacedEnvelopeHeaderDefect` - A “Unix From” header was found in the middle of a header block. * `MissingHeaderBodySeparatorDefect` - A line was found while parsing headers that had no leading white space but contained no ‘:’. Parsing continues assuming that the line represents the first line of the body. New in version 3.3. * `MalformedHeaderDefect` – A header was found that was missing a colon, or was otherwise malformed. Deprecated since version 3.3: This defect has not been used for several Python versions. * `MultipartInvariantViolationDefect` – A message claimed to be a *multipart*, but no subparts were found. Note that when a message has this defect, its [`is_multipart()`](email.compat32-message#email.message.Message.is_multipart "email.message.Message.is_multipart") method may return `False` even though its content type claims to be *multipart*. * `InvalidBase64PaddingDefect` – When decoding a block of base64 encoded bytes, the padding was not correct. Enough padding is added to perform the decode, but the resulting decoded bytes may be invalid. * `InvalidBase64CharactersDefect` – When decoding a block of base64 encoded bytes, characters outside the base64 alphabet were encountered. The characters are ignored, but the resulting decoded bytes may be invalid. * `InvalidBase64LengthDefect` – When decoding a block of base64 encoded bytes, the number of non-padding base64 characters was invalid (1 more than a multiple of 4). The encoded block was kept as-is. python statistics — Mathematical statistics functions statistics — Mathematical statistics functions ============================================== New in version 3.4. **Source code:** [Lib/statistics.py](https://github.com/python/cpython/tree/3.9/Lib/statistics.py) This module provides functions for calculating mathematical statistics of numeric ([`Real`](numbers#numbers.Real "numbers.Real")-valued) data. The module is not intended to be a competitor to third-party libraries such as [NumPy](https://numpy.org), [SciPy](https://www.scipy.org/), or proprietary full-featured statistics packages aimed at professional statisticians such as Minitab, SAS and Matlab. It is aimed at the level of graphing and scientific calculators. Unless explicitly noted, these functions support [`int`](functions#int "int"), [`float`](functions#float "float"), [`Decimal`](decimal#decimal.Decimal "decimal.Decimal") and [`Fraction`](fractions#fractions.Fraction "fractions.Fraction"). Behaviour with other types (whether in the numeric tower or not) is currently unsupported. Collections with a mix of types are also undefined and implementation-dependent. 
If your input data consists of mixed types, you may be able to use [`map()`](functions#map "map") to ensure a consistent result, for example: `map(float, input_data)`. Averages and measures of central location ----------------------------------------- These functions calculate an average or typical value from a population or sample. | | | | --- | --- | | [`mean()`](#statistics.mean "statistics.mean") | Arithmetic mean (“average”) of data. | | [`fmean()`](#statistics.fmean "statistics.fmean") | Fast, floating point arithmetic mean. | | [`geometric_mean()`](#statistics.geometric_mean "statistics.geometric_mean") | Geometric mean of data. | | [`harmonic_mean()`](#statistics.harmonic_mean "statistics.harmonic_mean") | Harmonic mean of data. | | [`median()`](#statistics.median "statistics.median") | Median (middle value) of data. | | [`median_low()`](#statistics.median_low "statistics.median_low") | Low median of data. | | [`median_high()`](#statistics.median_high "statistics.median_high") | High median of data. | | [`median_grouped()`](#statistics.median_grouped "statistics.median_grouped") | Median, or 50th percentile, of grouped data. | | [`mode()`](#statistics.mode "statistics.mode") | Single mode (most common value) of discrete or nominal data. | | [`multimode()`](#statistics.multimode "statistics.multimode") | List of modes (most common values) of discrete or nominal data. | | [`quantiles()`](#statistics.quantiles "statistics.quantiles") | Divide data into intervals with equal probability. | Measures of spread ------------------ These functions calculate a measure of how much the population or sample tends to deviate from the typical or average values. | | | | --- | --- | | [`pstdev()`](#statistics.pstdev "statistics.pstdev") | Population standard deviation of data. | | [`pvariance()`](#statistics.pvariance "statistics.pvariance") | Population variance of data. | | [`stdev()`](#statistics.stdev "statistics.stdev") | Sample standard deviation of data. | | [`variance()`](#statistics.variance "statistics.variance") | Sample variance of data. | Function details ---------------- Note: The functions do not require the data given to them to be sorted. However, for reading convenience, most of the examples show sorted sequences. `statistics.mean(data)` Return the sample arithmetic mean of *data* which can be a sequence or iterable. The arithmetic mean is the sum of the data divided by the number of data points. It is commonly called “the average”, although it is only one of many different mathematical averages. It is a measure of the central location of the data. If *data* is empty, [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") will be raised. Some examples of use: ``` >>> mean([1, 2, 3, 4, 4]) 2.8 >>> mean([-1.0, 2.5, 3.25, 5.75]) 2.625 >>> from fractions import Fraction as F >>> mean([F(3, 7), F(1, 21), F(5, 3), F(1, 3)]) Fraction(13, 21) >>> from decimal import Decimal as D >>> mean([D("0.5"), D("0.75"), D("0.625"), D("0.375")]) Decimal('0.5625') ``` Note The mean is strongly affected by [outliers](https://en.wikipedia.org/wiki/Outlier) and is not necessarily a typical example of the data points. For a more robust, although less efficient, measure of [central tendency](https://en.wikipedia.org/wiki/Central_tendency), see [`median()`](#statistics.median "statistics.median"). 
The sample mean gives an unbiased estimate of the true population mean, so that when taken on average over all the possible samples, `mean(sample)` converges on the true mean of the entire population. If *data* represents the entire population rather than a sample, then `mean(data)` is equivalent to calculating the true population mean μ. `statistics.fmean(data)` Convert *data* to floats and compute the arithmetic mean. This runs faster than the [`mean()`](#statistics.mean "statistics.mean") function and it always returns a [`float`](functions#float "float"). The *data* may be a sequence or iterable. If the input dataset is empty, raises a [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError"). ``` >>> fmean([3.5, 4.0, 5.25]) 4.25 ``` New in version 3.8. `statistics.geometric_mean(data)` Convert *data* to floats and compute the geometric mean. The geometric mean indicates the central tendency or typical value of the *data* using the product of the values (as opposed to the arithmetic mean which uses their sum). Raises a [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") if the input dataset is empty, if it contains a zero, or if it contains a negative value. The *data* may be a sequence or iterable. No special efforts are made to achieve exact results. (However, this may change in the future.) ``` >>> round(geometric_mean([54, 24, 36]), 1) 36.0 ``` New in version 3.8. `statistics.harmonic_mean(data)` Return the harmonic mean of *data*, a sequence or iterable of real-valued numbers. The harmonic mean, sometimes called the subcontrary mean, is the reciprocal of the arithmetic [`mean()`](#statistics.mean "statistics.mean") of the reciprocals of the data. For example, the harmonic mean of three values *a*, *b* and *c* will be equivalent to `3/(1/a + 1/b + 1/c)`. If one of the values is zero, the result will be zero. The harmonic mean is a type of average, a measure of the central location of the data. It is often appropriate when averaging rates or ratios, for example speeds. Suppose a car travels 10 km at 40 km/hr, then another 10 km at 60 km/hr. What is the average speed? ``` >>> harmonic_mean([40, 60]) 48.0 ``` Suppose an investor purchases an equal value of shares in each of three companies, with P/E (price/earning) ratios of 2.5, 3 and 10. What is the average P/E ratio for the investor’s portfolio? ``` >>> harmonic_mean([2.5, 3, 10]) # For an equal investment portfolio. 3.6 ``` [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") is raised if *data* is empty, or any element is less than zero. The current algorithm has an early-out when it encounters a zero in the input. This means that the subsequent inputs are not tested for validity. (This behavior may change in the future.) New in version 3.6. `statistics.median(data)` Return the median (middle value) of numeric data, using the common “mean of middle two” method. If *data* is empty, [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") is raised. *data* can be a sequence or iterable. The median is a robust measure of central location and is less affected by the presence of outliers. 
When the number of data points is odd, the middle data point is returned: ``` >>> median([1, 3, 5]) 3 ``` When the number of data points is even, the median is interpolated by taking the average of the two middle values: ``` >>> median([1, 3, 5, 7]) 4.0 ``` This is suited for when your data is discrete, and you don’t mind that the median may not be an actual data point. If the data is ordinal (supports order operations) but not numeric (doesn’t support addition), consider using [`median_low()`](#statistics.median_low "statistics.median_low") or [`median_high()`](#statistics.median_high "statistics.median_high") instead. `statistics.median_low(data)` Return the low median of numeric data. If *data* is empty, [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") is raised. *data* can be a sequence or iterable. The low median is always a member of the data set. When the number of data points is odd, the middle value is returned. When it is even, the smaller of the two middle values is returned. ``` >>> median_low([1, 3, 5]) 3 >>> median_low([1, 3, 5, 7]) 3 ``` Use the low median when your data are discrete and you prefer the median to be an actual data point rather than interpolated. `statistics.median_high(data)` Return the high median of data. If *data* is empty, [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") is raised. *data* can be a sequence or iterable. The high median is always a member of the data set. When the number of data points is odd, the middle value is returned. When it is even, the larger of the two middle values is returned. ``` >>> median_high([1, 3, 5]) 3 >>> median_high([1, 3, 5, 7]) 5 ``` Use the high median when your data are discrete and you prefer the median to be an actual data point rather than interpolated. `statistics.median_grouped(data, interval=1)` Return the median of grouped continuous data, calculated as the 50th percentile, using interpolation. If *data* is empty, [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") is raised. *data* can be a sequence or iterable. ``` >>> median_grouped([52, 52, 53, 54]) 52.5 ``` In the following example, the data are rounded, so that each value represents the midpoint of data classes, e.g. 1 is the midpoint of the class 0.5–1.5, 2 is the midpoint of 1.5–2.5, 3 is the midpoint of 2.5–3.5, etc. With the data given, the middle value falls somewhere in the class 3.5–4.5, and interpolation is used to estimate it: ``` >>> median_grouped([1, 2, 2, 3, 4, 4, 4, 4, 4, 5]) 3.7 ``` Optional argument *interval* represents the class interval, and defaults to 1. Changing the class interval naturally will change the interpolation: ``` >>> median_grouped([1, 3, 3, 5, 7], interval=1) 3.25 >>> median_grouped([1, 3, 3, 5, 7], interval=2) 3.5 ``` This function does not check whether the data points are at least *interval* apart. **CPython implementation detail:** Under some circumstances, [`median_grouped()`](#statistics.median_grouped "statistics.median_grouped") may coerce data points to floats. This behaviour is likely to change in the future. See also * “Statistics for the Behavioral Sciences”, Frederick J Gravetter and Larry B Wallnau (8th Edition). * The [SSMEDIAN](https://help.gnome.org/users/gnumeric/stable/gnumeric.html#gnumeric-function-SSMEDIAN) function in the Gnome Gnumeric spreadsheet, including [this discussion](https://mail.gnome.org/archives/gnumeric-list/2011-April/msg00018.html). 
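The ordinal-data advice above for [`median_low()`](#statistics.median_low "statistics.median_low") and [`median_high()`](#statistics.median_high "statistics.median_high") can be made concrete: since these functions only compare values, they also work on orderable non-numeric data, as in this small sketch:

```
>>> median_low(["ack", "bar", "foo", "zap"])
'bar'
>>> median_high(["ack", "bar", "foo", "zap"])
'foo'
```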
`statistics.mode(data)` Return the single most common data point from discrete or nominal *data*. The mode (when it exists) is the most typical value and serves as a measure of central location. If there are multiple modes with the same frequency, returns the first one encountered in the *data*. If the smallest or largest of those is desired instead, use `min(multimode(data))` or `max(multimode(data))`. If the input *data* is empty, [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") is raised. `mode` assumes discrete data and returns a single value. This is the standard treatment of the mode as commonly taught in schools: ``` >>> mode([1, 1, 2, 3, 3, 3, 3, 4]) 3 ``` The mode is unique in that it is the only statistic in this package that also applies to nominal (non-numeric) data: ``` >>> mode(["red", "blue", "blue", "red", "green", "red", "red"]) 'red' ``` Changed in version 3.8: Now handles multimodal datasets by returning the first mode encountered. Formerly, it raised [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") when more than one mode was found. `statistics.multimode(data)` Return a list of the most frequently occurring values in the order they were first encountered in the *data*. Will return more than one result if there are multiple modes or an empty list if the *data* is empty: ``` >>> multimode('aabbbbccddddeeffffgg') ['b', 'd', 'f'] >>> multimode('') [] ``` New in version 3.8. `statistics.pstdev(data, mu=None)` Return the population standard deviation (the square root of the population variance). See [`pvariance()`](#statistics.pvariance "statistics.pvariance") for arguments and other details. ``` >>> pstdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75]) 0.986893273527251 ``` `statistics.pvariance(data, mu=None)` Return the population variance of *data*, a non-empty sequence or iterable of real-valued numbers. Variance, or second moment about the mean, is a measure of the variability (spread or dispersion) of data. A large variance indicates that the data is spread out; a small variance indicates it is clustered closely around the mean. If the optional second argument *mu* is given, it is typically the mean of the *data*. It can also be used to compute the second moment around a point that is not the mean. If it is missing or `None` (the default), the arithmetic mean is automatically calculated. Use this function to calculate the variance from the entire population. To estimate the variance from a sample, the [`variance()`](#statistics.variance "statistics.variance") function is usually a better choice. Raises [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") if *data* is empty. Examples: ``` >>> data = [0.0, 0.25, 0.25, 1.25, 1.5, 1.75, 2.75, 3.25] >>> pvariance(data) 1.25 ``` If you have already calculated the mean of your data, you can pass it as the optional second argument *mu* to avoid recalculation: ``` >>> mu = mean(data) >>> pvariance(data, mu) 1.25 ``` Decimals and Fractions are supported: ``` >>> from decimal import Decimal as D >>> pvariance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")]) Decimal('24.815') >>> from fractions import Fraction as F >>> pvariance([F(1, 4), F(5, 4), F(1, 2)]) Fraction(13, 72) ``` Note When called with the entire population, this gives the population variance σ². When called on a sample instead, this is the biased sample variance s², also known as variance with N degrees of freedom. 
If you somehow know the true population mean μ, you may use this function to calculate the variance of a sample, giving the known population mean as the second argument. Provided the data points are a random sample of the population, the result will be an unbiased estimate of the population variance. `statistics.stdev(data, xbar=None)` Return the sample standard deviation (the square root of the sample variance). See [`variance()`](#statistics.variance "statistics.variance") for arguments and other details. ``` >>> stdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75]) 1.0810874155219827 ``` `statistics.variance(data, xbar=None)` Return the sample variance of *data*, an iterable of at least two real-valued numbers. Variance, or second moment about the mean, is a measure of the variability (spread or dispersion) of data. A large variance indicates that the data is spread out; a small variance indicates it is clustered closely around the mean. If the optional second argument *xbar* is given, it should be the mean of *data*. If it is missing or `None` (the default), the mean is automatically calculated. Use this function when your data is a sample from a population. To calculate the variance from the entire population, see [`pvariance()`](#statistics.pvariance "statistics.pvariance"). Raises [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") if *data* has fewer than two values. Examples: ``` >>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5] >>> variance(data) 1.3720238095238095 ``` If you have already calculated the mean of your data, you can pass it as the optional second argument *xbar* to avoid recalculation: ``` >>> m = mean(data) >>> variance(data, m) 1.3720238095238095 ``` This function does not attempt to verify that you have passed the actual mean as *xbar*. Using arbitrary values for *xbar* can lead to invalid or impossible results. Decimal and Fraction values are supported: ``` >>> from decimal import Decimal as D >>> variance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")]) Decimal('31.01875') >>> from fractions import Fraction as F >>> variance([F(1, 6), F(1, 2), F(5, 3)]) Fraction(67, 108) ``` Note This is the sample variance s² with Bessel’s correction, also known as variance with N-1 degrees of freedom. Provided that the data points are representative (e.g. independent and identically distributed), the result should be an unbiased estimate of the true population variance. If you somehow know the actual population mean μ you should pass it to the [`pvariance()`](#statistics.pvariance "statistics.pvariance") function as the *mu* parameter to get the variance of a sample. `statistics.quantiles(data, *, n=4, method='exclusive')` Divide *data* into *n* continuous intervals with equal probability. Returns a list of `n - 1` cut points separating the intervals. Set *n* to 4 for quartiles (the default). Set *n* to 10 for deciles. Set *n* to 100 for percentiles which gives the 99 cut points that separate *data* into 100 equal sized groups. Raises [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") if *n* is not at least 1. The *data* can be any iterable containing sample data. For meaningful results, the number of data points in *data* should be larger than *n*. Raises [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") if there are not at least two data points. The cut points are linearly interpolated from the two nearest data points. 
For example, if a cut point falls one-third of the distance between two sample values, `100` and `112`, the cut-point will evaluate to `104`. The *method* for computing quantiles can be varied depending on whether the *data* includes or excludes the lowest and highest possible values from the population. The default *method* is “exclusive” and is used for data sampled from a population that can have more extreme values than found in the samples. The portion of the population falling below the *i-th* of *m* sorted data points is computed as `i / (m + 1)`. Given nine sample values, the method sorts them and assigns the following percentiles: 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%. Setting the *method* to “inclusive” is used for describing population data or for samples that are known to include the most extreme values from the population. The minimum value in *data* is treated as the 0th percentile and the maximum value is treated as the 100th percentile. The portion of the population falling below the *i-th* of *m* sorted data points is computed as `(i - 1) / (m - 1)`. Given 11 sample values, the method sorts them and assigns the following percentiles: 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%. ``` # Decile cut points for empirically sampled data >>> data = [105, 129, 87, 86, 111, 111, 89, 81, 108, 92, 110, ... 100, 75, 105, 103, 109, 76, 119, 99, 91, 103, 129, ... 106, 101, 84, 111, 74, 87, 86, 103, 103, 106, 86, ... 111, 75, 87, 102, 121, 111, 88, 89, 101, 106, 95, ... 103, 107, 101, 81, 109, 104] >>> [round(q, 1) for q in quantiles(data, n=10)] [81.0, 86.2, 89.0, 99.4, 102.5, 103.6, 106.0, 109.8, 111.0] ``` New in version 3.8. Exceptions ---------- A single exception is defined: `exception statistics.StatisticsError` Subclass of [`ValueError`](exceptions#ValueError "ValueError") for statistics-related exceptions. NormalDist objects ------------------ [`NormalDist`](#statistics.NormalDist "statistics.NormalDist") is a tool for creating and manipulating normal distributions of a [random variable](http://www.stat.yale.edu/Courses/1997-98/101/ranvar.htm). It is a class that treats the mean and standard deviation of data measurements as a single entity. Normal distributions arise from the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) and have a wide range of applications in statistics. `class statistics.NormalDist(mu=0.0, sigma=1.0)` Returns a new *NormalDist* object where *mu* represents the [arithmetic mean](https://en.wikipedia.org/wiki/Arithmetic_mean) and *sigma* represents the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation). If *sigma* is negative, raises [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError"). `mean` A read-only property for the [arithmetic mean](https://en.wikipedia.org/wiki/Arithmetic_mean) of a normal distribution. `median` A read-only property for the [median](https://en.wikipedia.org/wiki/Median) of a normal distribution. `mode` A read-only property for the [mode](https://en.wikipedia.org/wiki/Mode_(statistics)) of a normal distribution. `stdev` A read-only property for the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of a normal distribution. `variance` A read-only property for the [variance](https://en.wikipedia.org/wiki/Variance) of a normal distribution. Equal to the square of the standard deviation. 
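A short sketch of constructing a distribution and reading the properties just described:

```
>>> from statistics import NormalDist
>>> iq = NormalDist(mu=100.0, sigma=15.0)
>>> iq.mean
100.0
>>> iq.stdev
15.0
>>> iq.variance
225.0
```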
`classmethod from_samples(data)` Makes a normal distribution instance with *mu* and *sigma* parameters estimated from the *data* using [`fmean()`](#statistics.fmean "statistics.fmean") and [`stdev()`](#statistics.stdev "statistics.stdev"). The *data* can be any [iterable](../glossary#term-iterable) and should consist of values that can be converted to type [`float`](functions#float "float"). If *data* does not contain at least two elements, raises [`StatisticsError`](#statistics.StatisticsError "statistics.StatisticsError") because it takes at least one point to estimate a central value and at least two points to estimate dispersion. `samples(n, *, seed=None)` Generates *n* random samples for a given mean and standard deviation. Returns a [`list`](stdtypes#list "list") of [`float`](functions#float "float") values. If *seed* is given, creates a new instance of the underlying random number generator. This is useful for creating reproducible results, even in a multi-threading context. `pdf(x)` Using a [probability density function (pdf)](https://en.wikipedia.org/wiki/Probability_density_function), compute the relative likelihood that a random variable *X* will be near the given value *x*. Mathematically, it is the limit of the ratio `P(x <= X < x+dx) / dx` as *dx* approaches zero. The relative likelihood is computed as the probability of a sample occurring in a narrow range divided by the width of the range (hence the word “density”). Since the likelihood is relative to other points, its value can be greater than `1.0`. `cdf(x)` Using a [cumulative distribution function (cdf)](https://en.wikipedia.org/wiki/Cumulative_distribution_function), compute the probability that a random variable *X* will be less than or equal to *x*. Mathematically, it is written `P(X <= x)`. `inv_cdf(p)` Compute the inverse cumulative distribution function, also known as the [quantile function](https://en.wikipedia.org/wiki/Quantile_function) or the [percent-point](https://www.statisticshowto.datasciencecentral.com/inverse-distribution-function/) function. Mathematically, it is written `x : P(X <= x) = p`. Finds the value *x* of the random variable *X* such that the probability of the variable being less than or equal to that value equals the given probability *p*. `overlap(other)` Measures the agreement between two normal probability distributions. Returns a value between 0.0 and 1.0 giving [the overlapping area for the two probability density functions](https://www.rasch.org/rmt/rmt101r.htm). `quantiles(n=4)` Divide the normal distribution into *n* continuous intervals with equal probability. Returns a list of (n - 1) cut points separating the intervals. Set *n* to 4 for quartiles (the default). Set *n* to 10 for deciles. Set *n* to 100 for percentiles which gives the 99 cut points that separate the normal distribution into 100 equal sized groups. `zscore(x)` Compute the [Standard Score](https://www.statisticshowto.com/probability-and-statistics/z-score/) describing *x* in terms of the number of standard deviations above or below the mean of the normal distribution: `(x - mean) / stdev`. New in version 3.9. Instances of [`NormalDist`](#statistics.NormalDist "statistics.NormalDist") support addition, subtraction, multiplication and division by a constant. These operations are used for translation and scaling. 
For example: ``` >>> temperature_february = NormalDist(5, 2.5) # Celsius >>> temperature_february * (9/5) + 32 # Fahrenheit NormalDist(mu=41.0, sigma=4.5) ``` Dividing a constant by an instance of [`NormalDist`](#statistics.NormalDist "statistics.NormalDist") is not supported because the result wouldn’t be normally distributed. Since normal distributions arise from additive effects of independent variables, it is possible to [add and subtract two independent normally distributed random variables](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables) represented as instances of [`NormalDist`](#statistics.NormalDist "statistics.NormalDist"). For example: ``` >>> birth_weights = NormalDist.from_samples([2.5, 3.1, 2.1, 2.4, 2.7, 3.5]) >>> drug_effects = NormalDist(0.4, 0.15) >>> combined = birth_weights + drug_effects >>> round(combined.mean, 1) 3.1 >>> round(combined.stdev, 1) 0.5 ``` New in version 3.8. ### [`NormalDist`](#statistics.NormalDist "statistics.NormalDist") Examples and Recipes [`NormalDist`](#statistics.NormalDist "statistics.NormalDist") readily solves classic probability problems. For example, given [historical data for SAT exams](https://nces.ed.gov/programs/digest/d17/tables/dt17_226.40.asp) showing that scores are normally distributed with a mean of 1060 and a standard deviation of 195, determine the percentage of students with test scores between 1100 and 1200, after rounding to the nearest whole number: ``` >>> sat = NormalDist(1060, 195) >>> fraction = sat.cdf(1200 + 0.5) - sat.cdf(1100 - 0.5) >>> round(fraction * 100.0, 1) 18.4 ``` Find the [quartiles](https://en.wikipedia.org/wiki/Quartile) and [deciles](https://en.wikipedia.org/wiki/Decile) for the SAT scores: ``` >>> list(map(round, sat.quantiles())) [928, 1060, 1192] >>> list(map(round, sat.quantiles(n=10))) [810, 896, 958, 1011, 1060, 1109, 1162, 1224, 1310] ``` To estimate the distribution for a model that isn’t easy to solve analytically, [`NormalDist`](#statistics.NormalDist "statistics.NormalDist") can generate input samples for a [Monte Carlo simulation](https://en.wikipedia.org/wiki/Monte_Carlo_method): ``` >>> def model(x, y, z): ... return (3*x + 7*x*y - 5*y) / (11 * z) ... >>> n = 100_000 >>> X = NormalDist(10, 2.5).samples(n, seed=3652260728) >>> Y = NormalDist(15, 1.75).samples(n, seed=4582495471) >>> Z = NormalDist(50, 1.25).samples(n, seed=6582483453) >>> quantiles(map(model, X, Y, Z)) [1.4591308524824727, 1.8035946855390597, 2.175091447274739] ``` Normal distributions can be used to approximate [Binomial distributions](http://mathworld.wolfram.com/BinomialDistribution.html) when the sample size is large and when the probability of a successful trial is near 50%. For example, an open source conference has 750 attendees and two rooms with a 500 person capacity. There is a talk about Python and another about Ruby. In previous conferences, 65% of the attendees preferred to listen to Python talks. Assuming the population preferences haven’t changed, what is the probability that the Python room will stay within its capacity limits? 
``` >>> n = 750 # Sample size >>> p = 0.65 # Preference for Python >>> q = 1.0 - p # Preference for Ruby >>> k = 500 # Room capacity >>> # Approximation using the cumulative normal distribution >>> from math import sqrt >>> round(NormalDist(mu=n*p, sigma=sqrt(n*p*q)).cdf(k + 0.5), 4) 0.8402 >>> # Solution using the cumulative binomial distribution >>> from math import comb, fsum >>> round(fsum(comb(n, r) * p**r * q**(n-r) for r in range(k+1)), 4) 0.8402 >>> # Approximation using a simulation >>> from random import seed, choices >>> seed(8675309) >>> def trial(): ... return choices(('Python', 'Ruby'), (p, q), k=n).count('Python') >>> mean(trial() <= k for i in range(10_000)) 0.8398 ``` Normal distributions commonly arise in machine learning problems. Wikipedia has a [nice example of a Naive Bayesian Classifier](https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Sex_classification). The challenge is to predict a person’s gender from measurements of normally distributed features including height, weight, and foot size. We’re given a training dataset with measurements for eight people. The measurements are assumed to be normally distributed, so we summarize the data with [`NormalDist`](#statistics.NormalDist "statistics.NormalDist"): ``` >>> height_male = NormalDist.from_samples([6, 5.92, 5.58, 5.92]) >>> height_female = NormalDist.from_samples([5, 5.5, 5.42, 5.75]) >>> weight_male = NormalDist.from_samples([180, 190, 170, 165]) >>> weight_female = NormalDist.from_samples([100, 150, 130, 150]) >>> foot_size_male = NormalDist.from_samples([12, 11, 12, 10]) >>> foot_size_female = NormalDist.from_samples([6, 8, 7, 9]) ``` Next, we encounter a new person whose feature measurements are known but whose gender is unknown: ``` >>> ht = 6.0 # height >>> wt = 130 # weight >>> fs = 8 # foot size ``` Starting with a 50% [prior probability](https://en.wikipedia.org/wiki/Prior_probability) of being male or female, we compute the posterior as the prior times the product of likelihoods for the feature measurements given the gender: ``` >>> prior_male = 0.5 >>> prior_female = 0.5 >>> posterior_male = (prior_male * height_male.pdf(ht) * ... weight_male.pdf(wt) * foot_size_male.pdf(fs)) >>> posterior_female = (prior_female * height_female.pdf(ht) * ... weight_female.pdf(wt) * foot_size_female.pdf(fs)) ``` The final prediction goes to the largest posterior. This is known as the [maximum a posteriori](https://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation) or MAP: ``` >>> 'male' if posterior_male > posterior_female else 'female' 'female' ```
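A possible follow-up step, not part of the original recipe: the two posteriors above are unnormalized, so dividing each by their sum (assuming the variables from the preceding example are still defined) yields probabilities that sum to 1.0:

```
>>> total = posterior_male + posterior_female
>>> p_male = posterior_male / total  # P(male | height, weight, foot size)
>>> p_female = posterior_female / total  # P(female | height, weight, foot size)
>>> round(p_male + p_female, 10)
1.0
```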
python fileinput — Iterate over lines from multiple input streams fileinput — Iterate over lines from multiple input streams ========================================================== **Source code:** [Lib/fileinput.py](https://github.com/python/cpython/tree/3.9/Lib/fileinput.py) This module implements a helper class and functions to quickly write a loop over standard input or a list of files. If you just want to read or write one file see [`open()`](functions#open "open"). The typical use is: ``` import fileinput for line in fileinput.input(): process(line) ``` This iterates over the lines of all files listed in `sys.argv[1:]`, defaulting to `sys.stdin` if the list is empty. If a filename is `'-'`, it is also replaced by `sys.stdin` and the optional arguments *mode* and *openhook* are ignored. To specify an alternative list of filenames, pass it as the first argument to [`input()`](#fileinput.input "fileinput.input"). A single file name is also allowed. All files are opened in text mode by default, but you can override this by specifying the *mode* parameter in the call to [`input()`](#fileinput.input "fileinput.input") or [`FileInput`](#fileinput.FileInput "fileinput.FileInput"). If an I/O error occurs during opening or reading a file, [`OSError`](exceptions#OSError "OSError") is raised. Changed in version 3.3: [`IOError`](exceptions#IOError "IOError") used to be raised; it is now an alias of [`OSError`](exceptions#OSError "OSError"). If `sys.stdin` is used more than once, the second and further use will return no lines, except perhaps for interactive use, or if it has been explicitly reset (e.g. using `sys.stdin.seek(0)`). Empty files are opened and immediately closed; the only time their presence in the list of filenames is noticeable at all is when the last file opened is empty. Lines are returned with any newlines intact, which means that the last line in a file may not have one. You can control how files are opened by providing an opening hook via the *openhook* parameter to [`fileinput.input()`](#fileinput.input "fileinput.input") or [`FileInput()`](#fileinput.FileInput "fileinput.FileInput"). The hook must be a function that takes two arguments, *filename* and *mode*, and returns an accordingly opened file-like object. Two useful hooks are already provided by this module. The following function is the primary interface of this module: `fileinput.input(files=None, inplace=False, backup='', *, mode='r', openhook=None)` Create an instance of the [`FileInput`](#fileinput.FileInput "fileinput.FileInput") class. The instance will be used as global state for the functions of this module, and is also returned to use during iteration. The parameters to this function will be passed along to the constructor of the [`FileInput`](#fileinput.FileInput "fileinput.FileInput") class. The [`FileInput`](#fileinput.FileInput "fileinput.FileInput") instance can be used as a context manager in the [`with`](../reference/compound_stmts#with) statement. In this example, *input* is closed after the `with` statement is exited, even if an exception occurs: ``` with fileinput.input(files=('spam.txt', 'eggs.txt')) as f: for line in f: process(line) ``` Changed in version 3.2: Can be used as a context manager. Changed in version 3.8: The keyword parameters *mode* and *openhook* are now keyword-only. The following functions use the global state created by [`fileinput.input()`](#fileinput.input "fileinput.input"); if there is no active state, [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. 
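Before the individual descriptions, here is a small sketch of several of these global-state helpers used together (the file names are hypothetical):

```
import fileinput

# Each global-state helper reflects the file and line most recently read.
for line in fileinput.input(files=("spam.txt", "eggs.txt")):
    if fileinput.isfirstline():
        print("reading", fileinput.filename())
    print(fileinput.filelineno(), line, end="")
```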
`fileinput.filename()` Return the name of the file currently being read. Before the first line has been read, returns `None`. `fileinput.fileno()` Return the integer “file descriptor” for the current file. When no file is opened (before the first line and between files), returns `-1`. `fileinput.lineno()` Return the cumulative line number of the line that has just been read. Before the first line has been read, returns `0`. After the last line of the last file has been read, returns the line number of that line. `fileinput.filelineno()` Return the line number in the current file. Before the first line has been read, returns `0`. After the last line of the last file has been read, returns the line number of that line within the file. `fileinput.isfirstline()` Return `True` if the line just read is the first line of its file, otherwise return `False`. `fileinput.isstdin()` Return `True` if the last line was read from `sys.stdin`, otherwise return `False`. `fileinput.nextfile()` Close the current file so that the next iteration will read the first line from the next file (if any); lines not read from the file will not count towards the cumulative line count. The filename is not changed until after the first line of the next file has been read. Before the first line has been read, this function has no effect; it cannot be used to skip the first file. After the last line of the last file has been read, this function has no effect. `fileinput.close()` Close the sequence. The class which implements the sequence behavior provided by the module is available for subclassing as well: `class fileinput.FileInput(files=None, inplace=False, backup='', *, mode='r', openhook=None)` Class [`FileInput`](#fileinput.FileInput "fileinput.FileInput") is the implementation; its methods [`filename()`](#fileinput.filename "fileinput.filename"), [`fileno()`](#fileinput.fileno "fileinput.fileno"), [`lineno()`](#fileinput.lineno "fileinput.lineno"), [`filelineno()`](#fileinput.filelineno "fileinput.filelineno"), [`isfirstline()`](#fileinput.isfirstline "fileinput.isfirstline"), [`isstdin()`](#fileinput.isstdin "fileinput.isstdin"), [`nextfile()`](#fileinput.nextfile "fileinput.nextfile") and [`close()`](#fileinput.close "fileinput.close") correspond to the functions of the same name in the module. In addition it has a [`readline()`](io#io.TextIOBase.readline "io.TextIOBase.readline") method which returns the next input line, and a [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__") method which implements the sequence behavior. The sequence must be accessed in strictly sequential order; random access and [`readline()`](io#io.TextIOBase.readline "io.TextIOBase.readline") cannot be mixed. With *mode* you can specify which file mode will be passed to [`open()`](functions#open "open"). It must be one of `'r'`, `'rU'`, `'U'` and `'rb'`. The *openhook*, when given, must be a function that takes two arguments, *filename* and *mode*, and returns an accordingly opened file-like object. You cannot use *inplace* and *openhook* together. A [`FileInput`](#fileinput.FileInput "fileinput.FileInput") instance can be used as a context manager in the [`with`](../reference/compound_stmts#with) statement. In this example, *input* is closed after the `with` statement is exited, even if an exception occurs: ``` with FileInput(files=('spam.txt', 'eggs.txt')) as input: process(input) ``` Changed in version 3.2: Can be used as a context manager. Deprecated since version 3.4: The `'rU'` and `'U'` modes. 
Deprecated since version 3.8: Support for the [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__") method is deprecated. Changed in version 3.8: The keyword parameters *mode* and *openhook* are now keyword-only. **Optional in-place filtering:** if the keyword argument `inplace=True` is passed to [`fileinput.input()`](#fileinput.input "fileinput.input") or to the [`FileInput`](#fileinput.FileInput "fileinput.FileInput") constructor, the file is moved to a backup file and standard output is directed to the input file (if a file of the same name as the backup file already exists, it will be replaced silently). This makes it possible to write a filter that rewrites its input file in place. If the *backup* parameter is given (typically as `backup='.<some extension>'`), it specifies the extension for the backup file, and the backup file remains around; by default, the extension is `'.bak'` and it is deleted when the output file is closed. In-place filtering is disabled when standard input is read. The two following opening hooks are provided by this module: `fileinput.hook_compressed(filename, mode)` Transparently opens files compressed with gzip and bzip2 (recognized by the extensions `'.gz'` and `'.bz2'`) using the [`gzip`](gzip#module-gzip "gzip: Interfaces for gzip compression and decompression using file objects.") and [`bz2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") modules. If the filename extension is not `'.gz'` or `'.bz2'`, the file is opened normally (i.e., using [`open()`](functions#open "open") without any decompression). Usage example: `fi = fileinput.FileInput(openhook=fileinput.hook_compressed)` `fileinput.hook_encoded(encoding, errors=None)` Returns a hook which opens each file with [`open()`](functions#open "open"), using the given *encoding* and *errors* to read the file. Usage example: `fi = fileinput.FileInput(openhook=fileinput.hook_encoded("utf-8", "surrogateescape"))` Changed in version 3.6: Added the optional *errors* parameter. python IDLE IDLE ==== **Source code:** [Lib/idlelib/](https://github.com/python/cpython/tree/3.9/Lib/idlelib/) IDLE is Python’s Integrated Development and Learning Environment. IDLE has the following features: * coded in 100% pure Python, using the [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") GUI toolkit * cross-platform: works mostly the same on Windows, Unix, and macOS * Python shell window (interactive interpreter) with colorizing of code input, output, and error messages * multi-window text editor with multiple undo, Python colorizing, smart indent, call tips, auto completion, and other features * search within any window, replace within editor windows, and search through multiple files (grep) * debugger with persistent breakpoints, stepping, and viewing of global and local namespaces * configuration, browsers, and other dialogs Menus ----- IDLE has two main window types, the Shell window and the Editor window. It is possible to have multiple editor windows simultaneously. On Windows and Linux, each has its own top menu. Each menu documented below indicates which window type it is associated with. Output windows, such as those used for Edit => Find in Files, are a subtype of editor window. They currently have the same top menu but a different default title and context menu. On macOS, there is one application menu. It dynamically changes according to the window currently selected.
It has an IDLE menu, and some entries described below are moved around to conform to Apple guidelines. ### File menu (Shell and Editor) New File Create a new file editing window. Open… Open an existing file with an Open dialog. Recent Files Open a list of recent files. Click one to open it. Open Module… Open an existing module (searches sys.path). Class Browser Show functions, classes, and methods in the current Editor file in a tree structure. In the shell, open a module first. Path Browser Show sys.path directories, modules, functions, classes and methods in a tree structure. Save Save the current window to the associated file, if there is one. Windows that have been changed since being opened or last saved have a \* before and after the window title. If there is no associated file, do Save As instead. Save As… Save the current window with a Save As dialog. The file saved becomes the new associated file for the window. Save Copy As… Save the current window to a different file without changing the associated file. Print Window Print the current window to the default printer. Close Window Close the current window (if an unsaved editor, ask to save; if an unsaved Shell, ask to quit execution). Calling `exit()` or `close()` in the Shell window also closes Shell. If this is the only window, also exit IDLE. Exit IDLE Close all windows and quit IDLE (ask to save unsaved edit windows). ### Edit menu (Shell and Editor) Undo Undo the last change to the current window. A maximum of 1000 changes may be undone. Redo Redo the last undone change to the current window. Cut Copy selection into the system-wide clipboard; then delete the selection. Copy Copy selection into the system-wide clipboard. Paste Insert contents of the system-wide clipboard into the current window. The clipboard functions are also available in context menus. Select All Select the entire contents of the current window. Find… Open a search dialog with many options. Find Again Repeat the last search, if there is one. Find Selection Search for the currently selected string, if there is one. Find in Files… Open a file search dialog. Put results in a new output window. Replace… Open a search-and-replace dialog. Go to Line Move the cursor to the beginning of the line requested and make that line visible. A request past the end of the file goes to the end. Clear any selection and update the line and column status. Show Completions Open a scrollable list allowing selection of existing names. See [Completions](#completions) in the Editing and navigation section below. Expand Word Expand a prefix you have typed to match a full word in the same window; repeat to get a different expansion. Show call tip After an unclosed parenthesis for a function, open a small window with function parameter hints. See [Calltips](#calltips) in the Editing and navigation section below. Show surrounding parens Highlight the surrounding parenthesis. ### Format menu (Editor window only) Indent Region Shift selected lines right by the indent width (default 4 spaces). Dedent Region Shift selected lines left by the indent width (default 4 spaces). Comment Out Region Insert ## in front of selected lines. Uncomment Region Remove leading # or ## from selected lines. Tabify Region Turn *leading* stretches of spaces into tabs. (Note: We recommend using 4 space blocks to indent Python code.) Untabify Region Turn *all* tabs into the correct number of spaces. Toggle Tabs Open a dialog to switch between indenting with spaces and tabs.
New Indent Width Open a dialog to change indent width. The accepted default by the Python community is 4 spaces. Format Paragraph Reformat the current blank-line-delimited paragraph in comment block or multiline string or selected line in a string. All lines in the paragraph will be formatted to less than N columns, where N defaults to 72. Strip trailing whitespace Remove trailing space and other whitespace characters after the last non-whitespace character of a line by applying str.rstrip to each line, including lines within multiline strings. Except for Shell windows, remove extra newlines at the end of the file. ### Run menu (Editor window only) Run Module Do [Check Module](#check-module). If no error, restart the shell to clean the environment, then execute the module. Output is displayed in the Shell window. Note that output requires use of `print` or `write`. When execution is complete, the Shell retains focus and displays a prompt. At this point, one may interactively explore the result of execution. This is similar to executing a file with `python -i file` at a command line. Run… Customized Same as [Run Module](#run-module), but run the module with customized settings. *Command Line Arguments* extend [`sys.argv`](sys#sys.argv "sys.argv") as if passed on a command line. The module can be run in the Shell without restarting. Check Module Check the syntax of the module currently open in the Editor window. If the module has not been saved, IDLE will either prompt the user to save or autosave, as selected in the General tab of the Idle Settings dialog. If there is a syntax error, the approximate location is indicated in the Editor window. Python Shell Open or wake up the Python Shell window. ### Shell menu (Shell window only) View Last Restart Scroll the shell window to the last Shell restart. Restart Shell Restart the shell to clean the environment and reset display and exception handling. Previous History Cycle through earlier commands in history which match the current entry. Next History Cycle through later commands in history which match the current entry. Interrupt Execution Stop a running program. ### Debug menu (Shell window only) Go to File/Line Look on the current line, with the cursor, and the line above for a filename and line number. If found, open the file if not already open, and show the line. Use this to view source lines referenced in an exception traceback and lines found by Find in Files. Also available in the context menu of the Shell window and Output windows. Debugger (toggle) When activated, code entered in the Shell or run from an Editor will run under the debugger. In the Editor, breakpoints can be set with the context menu. This feature is still incomplete and somewhat experimental. Stack Viewer Show the stack traceback of the last exception in a tree widget, with access to locals and globals. Auto-open Stack Viewer Toggle automatically opening the stack viewer on an unhandled exception. ### Options menu (Shell and Editor) Configure IDLE Open a configuration dialog and change preferences for the following: fonts, indentation, keybindings, text color themes, startup windows and size, additional help sources, and extensions. On macOS, open the configuration dialog by selecting Preferences in the application menu. For more details, see [Setting preferences](#preferences) under Help and preferences. Most configuration options apply to all windows or all future windows. The option items below only apply to the active window.
Show/Hide Code Context (Editor Window only) Open a pane at the top of the edit window which shows the block context of the code which has scrolled above the top of the window. See [Code Context](#code-context) in the Editing and Navigation section below. Show/Hide Line Numbers (Editor Window only) Open a column to the left of the edit window which shows the number of each line of text. The default is off, which may be changed in the preferences (see [Setting preferences](#preferences)). Zoom/Restore Height Toggles the window between normal size and maximum height. The initial size defaults to 40 lines by 80 chars unless changed on the General tab of the Configure IDLE dialog. The maximum height for a screen is determined by momentarily maximizing a window the first time one is zoomed on the screen. Changing screen settings may invalidate the saved height. This toggle has no effect when a window is maximized. ### Window menu (Shell and Editor) Lists the names of all open windows; select one to bring it to the foreground (deiconifying it if necessary). ### Help menu (Shell and Editor) About IDLE Display version, copyright, license, credits, and more. IDLE Help Display this IDLE document, detailing the menu options, basic editing and navigation, and other tips. Python Docs Access local Python documentation, if installed, or start a web browser and open docs.python.org showing the latest Python documentation. Turtle Demo Run the turtledemo module with example Python code and turtle drawings. Additional help sources may be added here with the Configure IDLE dialog under the General tab. See the [Help sources](#help-sources) subsection below for more on Help menu choices. ### Context Menus Open a context menu by right-clicking in a window (Control-click on macOS). Context menus have the standard clipboard functions also on the Edit menu. Cut Copy selection into the system-wide clipboard; then delete the selection. Copy Copy selection into the system-wide clipboard. Paste Insert contents of the system-wide clipboard into the current window. Editor windows also have breakpoint functions. Lines with a breakpoint set are specially marked. Breakpoints only have an effect when running under the debugger. Breakpoints for a file are saved in the user’s `.idlerc` directory. Set Breakpoint Set a breakpoint on the current line. Clear Breakpoint Clear the breakpoint on that line. Shell and Output windows also have the following. Go to file/line Same as in Debug menu. The Shell window also has an output squeezing facility explained in the *Python Shell window* subsection below. Squeeze If the cursor is over an output line, squeeze all the output between the code above and the prompt below down to a ‘Squeezed text’ label. Editing and navigation ---------------------- ### Editor windows IDLE may open editor windows when it starts, depending on settings and how you start IDLE. Thereafter, use the File menu. There can be only one open editor window for a given file. The title bar contains the name of the file, the full path, and the version of Python and IDLE running the window. The status bar contains the line number (‘Ln’) and column number (‘Col’). Line numbers start with 1; column numbers with 0. IDLE assumes that files with a known .py\* extension contain Python code and that other files do not. Run Python code with the Run menu. ### Key bindings In this section, ‘C’ refers to the `Control` key on Windows and Unix and the `Command` key on macOS. 
* `Backspace` deletes to the left; `Del` deletes to the right * `C-Backspace` delete word left; `C-Del` delete word to the right * Arrow keys and `Page Up`/`Page Down` to move around * `C-LeftArrow` and `C-RightArrow` moves by words * `Home`/`End` go to begin/end of line * `C-Home`/`C-End` go to begin/end of file * Some useful Emacs bindings are inherited from Tcl/Tk: + `C-a` beginning of line + `C-e` end of line + `C-k` kill line (but doesn’t put it in clipboard) + `C-l` center window around the insertion point + `C-b` go backward one character without deleting (usually you can also use the cursor key for this) + `C-f` go forward one character without deleting (usually you can also use the cursor key for this) + `C-p` go up one line (usually you can also use the cursor key for this) + `C-d` delete next character Standard keybindings (like `C-c` to copy and `C-v` to paste) may work. Keybindings are selected in the Configure IDLE dialog. ### Automatic indentation After a block-opening statement, the next line is indented by 4 spaces (in the Python Shell window by one tab). After certain keywords (break, return etc.) the next line is dedented. In leading indentation, `Backspace` deletes up to 4 spaces if they are there. `Tab` inserts spaces (in the Python Shell window one tab), number depends on Indent width. Currently, tabs are restricted to four spaces due to Tcl/Tk limitations. See also the indent/dedent region commands on the [Format menu](#format-menu). ### Completions Completions are supplied, when requested and available, for module names, attributes of classes or functions, or filenames. Each request method displays a completion box with existing names. (See tab completions below for an exception.) For any box, change the name being completed and the item highlighted in the box by typing and deleting characters; by hitting `Up`, `Down`, `PageUp`, `PageDown`, `Home`, and `End` keys; and by a single click within the box. Close the box with `Escape`, `Enter`, and double `Tab` keys or clicks outside the box. A double click within the box selects and closes. One way to open a box is to type a key character and wait for a predefined interval. This defaults to 2 seconds; customize it in the settings dialog. (To prevent auto popups, set the delay to a large number of milliseconds, such as 100000000.) For imported module names or class or function attributes, type ‘.’. For filenames in the root directory, type [`os.sep`](os#os.sep "os.sep") or [`os.altsep`](os#os.altsep "os.altsep") immediately after an opening quote. (On Windows, one can specify a drive first.) Move into subdirectories by typing a directory name and a separator. Instead of waiting, or after a box is closed, open a completion box immediately with Show Completions on the Edit menu. The default hot key is `C-space`. If one types a prefix for the desired name before opening the box, the first match or near miss is made visible. The result is the same as if one enters a prefix after the box is displayed. Show Completions after a quote completes filenames in the current directory instead of a root directory. Hitting `Tab` after a prefix usually has the same effect as Show Completions. (With no prefix, it indents.) However, if there is only one match to the prefix, that match is immediately added to the editor text without opening a box. Invoking ‘Show Completions’, or hitting `Tab` after a prefix, outside of a string and without a preceding ‘.’ opens a box with keywords, builtin names, and available module-level names. 
When editing code in an editor (as opposed to Shell), increase the available module-level names by running your code and not restarting the Shell thereafter. This is especially useful after adding imports at the top of a file. This also increases possible attribute completions. Completion boxes initially exclude names beginning with ‘\_’ or, for modules, not included in ‘\_\_all\_\_’. The hidden names can be accessed by typing ‘\_’ after ‘.’, either before or after the box is opened. ### Calltips A calltip is shown automatically when one types `(` after the name of an *accessible* function. A function name expression may include dots and subscripts. A calltip remains until it is clicked, the cursor is moved out of the argument area, or `)` is typed. Whenever the cursor is in the argument part of a definition, select Edit and “Show Call Tip” on the menu or enter its shortcut to display a calltip. The calltip consists of the function’s signature and docstring up to the latter’s first blank line or the fifth non-blank line. (Some builtin functions lack an accessible signature.) A ‘/’ or ‘\*’ in the signature indicates that the preceding or following arguments are passed by position or name (keyword) only. Details are subject to change. In Shell, the accessible functions depend on what modules have been imported into the user process, including those imported by Idle itself, and which definitions have been run, all since the last restart. For example, restart the Shell and enter `itertools.count(`. A calltip appears because Idle imports itertools into the user process for its own use. (This could change.) Enter `turtle.write(` and nothing appears. Idle does not itself import turtle. The menu entry and shortcut also do nothing. Enter `import turtle`. Thereafter, `turtle.write(` will display a calltip. In an editor, import statements have no effect until one runs the file. One might want to run a file after writing import statements, after adding function definitions, or after opening an existing file. ### Code Context Within an editor window containing Python code, code context can be toggled in order to show or hide a pane at the top of the window. When shown, this pane freezes the opening lines for block code, such as those beginning with `class`, `def`, or `if` keywords, that would have otherwise scrolled out of view. The size of the pane will be expanded and contracted as needed to show all current levels of context, up to the maximum number of lines defined in the Configure IDLE dialog (which defaults to 15). If there are no current context lines and the feature is toggled on, a single blank line will display. Clicking on a line in the context pane will move that line to the top of the editor. The text and background colors for the context pane can be configured under the Highlights tab in the Configure IDLE dialog. ### Python Shell window With IDLE’s Shell, one enters, edits, and recalls complete statements. Most consoles and terminals only work with a single physical line at a time. When one pastes code into Shell, it is not compiled and possibly executed until one hits `Return`. One may edit pasted code first. If one pastes more than one statement into Shell, the result will be a [`SyntaxError`](exceptions#SyntaxError "SyntaxError") when multiple statements are compiled as if they were one. The editing features described in previous subsections work when entering code interactively. IDLE’s Shell window also responds to the following keys.
* `C-c` interrupts executing command * `C-d` sends end-of-file; closes window if typed at a `>>>` prompt * `Alt-/` (Expand word) is also useful to reduce typing Command history + `Alt-p` retrieves previous command matching what you have typed. On macOS use `C-p`. + `Alt-n` retrieves next. On macOS use `C-n`. + `Return` while on any previous command retrieves that command ### Text colors Idle defaults to black on white text, but colors text with special meanings. For the shell, these are shell output, shell error, user output, and user error. For Python code, at the shell prompt or in an editor, these are keywords, builtin class and function names, names following `class` and `def`, strings, and comments. For any text window, these are the cursor (when present), found text (when possible), and selected text. Text coloring is done in the background, so uncolorized text is occasionally visible. To change the color scheme, use the Configure IDLE dialog Highlighting tab. The marking of debugger breakpoint lines in the editor and text in popups and dialogs is not user-configurable. Startup and code execution -------------------------- Upon startup with the `-s` option, IDLE will execute the file referenced by the environment variables `IDLESTARTUP` or [`PYTHONSTARTUP`](../using/cmdline#envvar-PYTHONSTARTUP). IDLE first checks for `IDLESTARTUP`; if `IDLESTARTUP` is present the file referenced is run. If `IDLESTARTUP` is not present, IDLE checks for `PYTHONSTARTUP`. Files referenced by these environment variables are convenient places to store functions that are used frequently from the IDLE shell, or for executing import statements to import common modules. In addition, `Tk` also loads a startup file if it is present. Note that the Tk file is loaded unconditionally. This additional file is `.Idle.py` and is looked for in the user’s home directory. Statements in this file will be executed in the Tk namespace, so this file is not useful for importing functions to be used from IDLE’s Python shell. ### Command line usage ``` idle.py [-c command] [-d] [-e] [-h] [-i] [-r file] [-s] [-t title] [-] [arg] ... -c command run command in the shell window -d enable debugger and open shell window -e open editor window -h print help message with legal combinations and exit -i open shell window -r file run file in shell window -s run $IDLESTARTUP or $PYTHONSTARTUP first, in shell window -t title set title of shell window - run stdin in shell (- must be last option before args) ``` If there are arguments: * If `-`, `-c`, or `-r` is used, all arguments are placed in `sys.argv[1:...]` and `sys.argv[0]` is set to `''`, `'-c'`, or `'-r'`. No editor window is opened, even if that is the default set in the Options dialog. * Otherwise, arguments are files opened for editing and `sys.argv` reflects the arguments passed to IDLE itself. ### Startup failure IDLE uses a socket to communicate between the IDLE GUI process and the user code execution process. A connection must be established whenever the Shell starts or restarts. (The latter is indicated by a divider line that says ‘RESTART’). If the user process fails to connect to the GUI process, it usually displays a `Tk` error box with a ‘cannot connect’ message that directs the user here. It then exits. One specific connection failure on Unix systems results from misconfigured masquerading rules somewhere in a system’s network setup. When IDLE is started from a terminal, one will see a message starting with `** Invalid host:`.
The valid value is `127.0.0.1 (idlelib.rpc.LOCALHOST)`. One can diagnose with `tcpconnect -irv 127.0.0.1 6543` in one terminal window and `tcplisten <same args>` in another. A common cause of failure is a user-written file with the same name as a standard library module, such as *random.py* and *tkinter.py*. When such a file is located in the same directory as a file that is about to be run, IDLE cannot import the stdlib file. The current fix is to rename the user file. Though less common than in the past, an antivirus or firewall program may stop the connection. If the program cannot be taught to allow the connection, then it must be turned off for IDLE to work. It is safe to allow this internal connection because no data is visible on external ports. A similar problem is a network mis-configuration that blocks connections. Python installation issues occasionally stop IDLE: multiple versions can clash, or a single installation might need admin access. If one undoes the clash, or cannot or does not want to run as admin, it might be easiest to completely remove Python and start over. A zombie pythonw.exe process could be a problem. On Windows, use Task Manager to check for one and stop it if there is one. Sometimes a restart initiated by a program crash or Keyboard Interrupt (control-C) may fail to connect. Dismissing the error box or using Restart Shell on the Shell menu may fix a temporary problem. When IDLE first starts, it attempts to read user configuration files in `~/.idlerc/` (~ is one’s home directory). If there is a problem, an error message should be displayed. Leaving aside random disk glitches, this can be prevented by never editing the files by hand. Instead, use the configuration dialog, under Options. Once there is an error in a user configuration file, the best solution may be to delete it and start over with the settings dialog. If IDLE quits with no message, and it was not started from a console, try starting it from a console or terminal (`python -m idlelib`) and see if this results in an error message. On Unix-based systems with tcl/tk older than `8.6.11` (see `About IDLE`) certain characters of certain fonts can cause a tk failure with a message to the terminal. This can happen either if one starts IDLE to edit a file with such a character or later when entering such a character. If one cannot upgrade tcl/tk, then re-configure IDLE to use a font that works better. ### Running user code With rare exceptions, the result of executing Python code with IDLE is intended to be the same as executing the same code by the default method, directly with Python in a text-mode system console or terminal window. However, the different interface and operation occasionally affect visible results. For instance, `sys.modules` starts with more entries, and `threading.active_count()` returns 2 instead of 1. By default, IDLE runs user code in a separate OS process rather than in the user interface process that runs the shell and editor. In the execution process, it replaces `sys.stdin`, `sys.stdout`, and `sys.stderr` with objects that get input from and send output to the Shell window. The original values stored in `sys.__stdin__`, `sys.__stdout__`, and `sys.__stderr__` are not touched, but may be `None`. Sending print output from one process to a text widget in another is slower than printing to a system terminal in the same process. This has the most effect when printing multiple arguments, as the string for each argument, each separator, and the newline are sent separately.
For development, this is usually not a problem, but if one wants to print faster in IDLE, format and join together everything one wants displayed together and then print a single string. Both format strings and [`str.join()`](stdtypes#str.join "str.join") can help combine fields and lines. IDLE’s standard stream replacements are not inherited by subprocesses created in the execution process, whether directly by user code or by modules such as multiprocessing. If such a subprocess uses `input` from sys.stdin, or `print` or `write` to sys.stdout or sys.stderr, IDLE should be started in a command line window. The secondary subprocess will then be attached to that window for input and output. If `sys` is reset by user code, such as with `importlib.reload(sys)`, IDLE’s changes are lost and input from the keyboard and output to the screen will not work correctly. When Shell has the focus, it controls the keyboard and screen. This is normally transparent, but functions that directly access the keyboard and screen will not work. These include system-specific functions that determine whether a key has been pressed and if so, which. The IDLE code running in the execution process adds frames to the call stack that would not be there otherwise. IDLE wraps `sys.getrecursionlimit` and `sys.setrecursionlimit` to reduce the effect of the additional stack frames. When user code raises SystemExit either directly or by calling sys.exit, IDLE returns to a Shell prompt instead of exiting. ### User output in Shell When a program outputs text, the result is determined by the corresponding output device. When IDLE executes user code, `sys.stdout` and `sys.stderr` are connected to the display area of IDLE’s Shell. Some of its features are inherited from the underlying Tk Text widget. Others are programmed additions. Where it matters, Shell is designed for development rather than production runs. For instance, Shell never throws away output. A program that sends unlimited output to Shell will eventually fill memory, resulting in a memory error. In contrast, some system text windows only keep the last n lines of output. A Windows console, for instance, keeps a user-settable 1 to 9999 lines, with 300 the default. A Tk Text widget, and hence IDLE’s Shell, displays characters (codepoints) in the BMP (Basic Multilingual Plane) subset of Unicode. Which characters are displayed with a proper glyph and which with a replacement box depends on the operating system and installed fonts. Tab characters cause the following text to begin after the next tab stop. (They occur every 8 ‘characters’). Newline characters cause following text to appear on a new line. Other control characters are ignored or displayed as a space, box, or something else, depending on the operating system and font. (Moving the text cursor through such output with arrow keys may exhibit some surprising spacing behavior.) ``` >>> s = 'a\tb\a<\x02><\r>\bc\nd' # Enter 22 chars. >>> len(s) 14 >>> s # Display repr(s) 'a\tb\x07<\x02><\r>\x08c\nd' >>> print(s, end='') # Display s as is. # Result varies by OS and font. Try it. ``` The `repr` function is used for interactive echo of expression values. It returns an altered version of the input string in which control codes, some BMP codepoints, and all non-BMP codepoints are replaced with escape codes. As demonstrated above, it allows one to identify the characters in a string, regardless of how they are displayed. Normal and error output are generally kept separate (on separate lines) from code input and each other.
They each get different highlight colors. For SyntaxError tracebacks, the normal ‘^’ marking where the error was detected is replaced by coloring the text with an error highlight. When code run from a file causes other exceptions, one may right click on a traceback line to jump to the corresponding line in an IDLE editor. The file will be opened if necessary. Shell has a special facility for squeezing output lines down to a ‘Squeezed text’ label. This is done automatically for output over N lines (N = 50 by default). N can be changed in the PyShell section of the General page of the Settings dialog. Output with fewer lines can be squeezed by right clicking on the output. This can be useful for lines long enough to slow down scrolling. Squeezed output is expanded in place by double-clicking the label. It can also be sent to the clipboard or a separate view window by right-clicking the label. ### Developing tkinter applications IDLE is intentionally different from standard Python in order to facilitate development of tkinter programs. Enter `import tkinter as tk; root = tk.Tk()` in standard Python and nothing appears. Enter the same in IDLE and a tk window appears. In standard Python, one must also enter `root.update()` to see the window. IDLE does the equivalent in the background, about 20 times a second, which is about every 50 milliseconds. Next enter `b = tk.Button(root, text='button'); b.pack()`. Again, nothing visibly changes in standard Python until one enters `root.update()`. Most tkinter programs run `root.mainloop()`, which usually does not return until the tk app is destroyed. If the program is run with `python -i` or from an IDLE editor, a `>>>` shell prompt does not appear until `mainloop()` returns, at which time there is nothing left to interact with. When running a tkinter program from an IDLE editor, one can comment out the mainloop call. One then gets a shell prompt immediately and can interact with the live application. One just has to remember to re-enable the mainloop call when running in standard Python. ### Running without a subprocess By default, IDLE executes user code in a separate subprocess via a socket, which uses the internal loopback interface. This connection is not externally visible and no data is sent to or received from the Internet. If firewall software complains anyway, you can ignore it. If the attempt to make the socket connection fails, Idle will notify you. Such failures are sometimes transient, but if persistent, the problem may be either a firewall blocking the connection or misconfiguration of a particular system. Until the problem is fixed, one can run Idle with the -n command line switch. If IDLE is started with the -n command line switch, it will run in a single process and will not create the subprocess which runs the RPC Python execution server. This can be useful if Python cannot create the subprocess or the RPC socket interface on your platform. However, in this mode user code is not isolated from IDLE itself. Also, the environment is not restarted when Run/Run Module (F5) is selected. If your code has been modified, you must reload() the affected modules and re-import any specific items (e.g. from foo import baz) if the changes are to take effect. For these reasons, it is preferable to run IDLE with the default subprocess if at all possible. Deprecated since version 3.4. Help and preferences -------------------- ### Help sources Help menu entry “IDLE Help” displays a formatted html version of the IDLE chapter of the Library Reference.
The result, in a read-only tkinter text window, is close to what one sees in a web browser. Navigate through the text with a mousewheel, the scrollbar, or up and down arrow keys held down. Or click the TOC (Table of Contents) button and select a section header in the opened box. Help menu entry “Python Docs” opens the extensive sources of help, including tutorials, available at `docs.python.org/x.y`, where ‘x.y’ is the currently running Python version. If your system has an off-line copy of the docs (this may be an installation option), that will be opened instead. Selected URLs can be added or removed from the help menu at any time using the General tab of the Configure IDLE dialog. ### Setting preferences The font preferences, highlighting, keys, and general preferences can be changed via Configure IDLE on the Options menu. Non-default user settings are saved in a `.idlerc` directory in the user’s home directory. Problems caused by bad user configuration files are solved by editing or deleting one or more of the files in `.idlerc`. On the Font tab, see the text sample for the effect of font face and size on multiple characters in multiple languages. Edit the sample to add other characters of personal interest. Use the sample to select monospaced fonts. If particular characters have problems in Shell or an editor, add them to the top of the sample and try changing first size and then font. On the Highlights and Keys tab, select a built-in or custom color theme and key set. To use a newer built-in color theme or key set with older IDLEs, save it as a new custom theme or key set and it will be accessible to older IDLEs. ### IDLE on macOS Under System Preferences: Dock, one can set “Prefer tabs when opening documents” to “Always”. This setting is not compatible with the tk/tkinter GUI framework used by IDLE, and it breaks a few IDLE features. ### Extensions IDLE contains an extension facility. Preferences for extensions can be changed with the Extensions tab of the preferences dialog. See the beginning of config-extensions.def in the idlelib directory for further information. The only current default extension is zzdummy, an example also used for testing.
python numbers — Numeric abstract base classes numbers — Numeric abstract base classes ======================================= **Source code:** [Lib/numbers.py](https://github.com/python/cpython/tree/3.9/Lib/numbers.py) The [`numbers`](#module-numbers "numbers: Numeric abstract base classes (Complex, Real, Integral, etc.).") module ([**PEP 3141**](https://www.python.org/dev/peps/pep-3141)) defines a hierarchy of numeric [abstract base classes](../glossary#term-abstract-base-class) which progressively define more operations. None of the types defined in this module are intended to be instantiated. `class numbers.Number` The root of the numeric hierarchy. If you just want to check if an argument *x* is a number, without caring what kind, use `isinstance(x, Number)`. The numeric tower ----------------- `class numbers.Complex` Subclasses of this type describe complex numbers and include the operations that work on the built-in [`complex`](functions#complex "complex") type. These are: conversions to [`complex`](functions#complex "complex") and [`bool`](functions#bool "bool"), [`real`](#numbers.Complex.real "numbers.Complex.real"), [`imag`](#numbers.Complex.imag "numbers.Complex.imag"), `+`, `-`, `*`, `/`, `**`, [`abs()`](functions#abs "abs"), [`conjugate()`](#numbers.Complex.conjugate "numbers.Complex.conjugate"), `==`, and `!=`. All except `-` and `!=` are abstract. `real` Abstract. Retrieves the real component of this number. `imag` Abstract. Retrieves the imaginary component of this number. `abstractmethod conjugate()` Abstract. Returns the complex conjugate. For example, `(1+3j).conjugate() == (1-3j)`. `class numbers.Real` To [`Complex`](#numbers.Complex "numbers.Complex"), [`Real`](#numbers.Real "numbers.Real") adds the operations that work on real numbers. In short, those are: a conversion to [`float`](functions#float "float"), [`math.trunc()`](math#math.trunc "math.trunc"), [`round()`](functions#round "round"), [`math.floor()`](math#math.floor "math.floor"), [`math.ceil()`](math#math.ceil "math.ceil"), [`divmod()`](functions#divmod "divmod"), `//`, `%`, `<`, `<=`, `>`, and `>=`. Real also provides defaults for [`complex()`](functions#complex "complex"), [`real`](#numbers.Complex.real "numbers.Complex.real"), [`imag`](#numbers.Complex.imag "numbers.Complex.imag"), and [`conjugate()`](#numbers.Complex.conjugate "numbers.Complex.conjugate"). `class numbers.Rational` Subtypes [`Real`](#numbers.Real "numbers.Real") and adds [`numerator`](#numbers.Rational.numerator "numbers.Rational.numerator") and [`denominator`](#numbers.Rational.denominator "numbers.Rational.denominator") properties, which should be in lowest terms. With these, it provides a default for [`float()`](functions#float "float"). `numerator` Abstract. `denominator` Abstract. `class numbers.Integral` Subtypes [`Rational`](#numbers.Rational "numbers.Rational") and adds a conversion to [`int`](functions#int "int"). Provides defaults for [`float()`](functions#float "float"), [`numerator`](#numbers.Rational.numerator "numbers.Rational.numerator"), and [`denominator`](#numbers.Rational.denominator "numbers.Rational.denominator"). Adds abstract methods for [`pow()`](functions#pow "pow") with modulus and bit-string operations: `<<`, `>>`, `&`, `^`, `|`, `~`. Notes for type implementors --------------------------- Implementors should be careful to make equal numbers equal and hash them to the same values. This may be subtle if there are two different extensions of the real numbers. 
For example, [`fractions.Fraction`](fractions#fractions.Fraction "fractions.Fraction") implements [`hash()`](functions#hash "hash") as follows: ``` def __hash__(self): if self.denominator == 1: # Get integers right. return hash(self.numerator) # Expensive check, but definitely correct. if self == float(self): return hash(float(self)) else: # Use tuple's hash to avoid a high collision rate on # simple fractions. return hash((self.numerator, self.denominator)) ``` ### Adding More Numeric ABCs There are, of course, more possible ABCs for numbers, and this would be a poor hierarchy if it precluded the possibility of adding those. You can add `MyFoo` between [`Complex`](#numbers.Complex "numbers.Complex") and [`Real`](#numbers.Real "numbers.Real") with: ``` class MyFoo(Complex): ... MyFoo.register(Real) ``` ### Implementing the arithmetic operations We want to implement the arithmetic operations so that mixed-mode operations either call an implementation whose author knew about the types of both arguments, or convert both to the nearest built in type and do the operation there. For subtypes of [`Integral`](#numbers.Integral "numbers.Integral"), this means that [`__add__()`](../reference/datamodel#object.__add__ "object.__add__") and [`__radd__()`](../reference/datamodel#object.__radd__ "object.__radd__") should be defined as: ``` class MyIntegral(Integral): def __add__(self, other): if isinstance(other, MyIntegral): return do_my_adding_stuff(self, other) elif isinstance(other, OtherTypeIKnowAbout): return do_my_other_adding_stuff(self, other) else: return NotImplemented def __radd__(self, other): if isinstance(other, MyIntegral): return do_my_adding_stuff(other, self) elif isinstance(other, OtherTypeIKnowAbout): return do_my_other_adding_stuff(other, self) elif isinstance(other, Integral): return int(other) + int(self) elif isinstance(other, Real): return float(other) + float(self) elif isinstance(other, Complex): return complex(other) + complex(self) else: return NotImplemented ``` There are 5 different cases for a mixed-type operation on subclasses of [`Complex`](#numbers.Complex "numbers.Complex"). I’ll refer to all of the above code that doesn’t refer to `MyIntegral` and `OtherTypeIKnowAbout` as “boilerplate”. `a` will be an instance of `A`, which is a subtype of [`Complex`](#numbers.Complex "numbers.Complex") (`a : A <: Complex`), and `b : B <: Complex`. I’ll consider `a + b`: 1. If `A` defines an [`__add__()`](../reference/datamodel#object.__add__ "object.__add__") which accepts `b`, all is well. 2. If `A` falls back to the boilerplate code, and it were to return a value from [`__add__()`](../reference/datamodel#object.__add__ "object.__add__"), we’d miss the possibility that `B` defines a more intelligent [`__radd__()`](../reference/datamodel#object.__radd__ "object.__radd__"), so the boilerplate should return [`NotImplemented`](constants#NotImplemented "NotImplemented") from [`__add__()`](../reference/datamodel#object.__add__ "object.__add__"). (Or `A` may not implement [`__add__()`](../reference/datamodel#object.__add__ "object.__add__") at all.) 3. Then `B`’s [`__radd__()`](../reference/datamodel#object.__radd__ "object.__radd__") gets a chance. If it accepts `a`, all is well. 4. If it falls back to the boilerplate, there are no more possible methods to try, so this is where the default implementation should live. 5. If `B <: A`, Python tries `B.__radd__` before `A.__add__`. 
This is ok, because it was implemented with knowledge of `A`, so it can handle those instances before delegating to [`Complex`](#numbers.Complex "numbers.Complex"). If `A <: Complex` and `B <: Real` without sharing any other knowledge, then the appropriate shared operation is the one involving the built in [`complex`](functions#complex "complex"), and both [`__radd__()`](../reference/datamodel#object.__radd__ "object.__radd__") s land there, so `a+b == b+a`. Because most of the operations on any given type will be very similar, it can be useful to define a helper function which generates the forward and reverse instances of any given operator. For example, [`fractions.Fraction`](fractions#fractions.Fraction "fractions.Fraction") uses: ``` def _operator_fallbacks(monomorphic_operator, fallback_operator): def forward(a, b): if isinstance(b, (int, Fraction)): return monomorphic_operator(a, b) elif isinstance(b, float): return fallback_operator(float(a), b) elif isinstance(b, complex): return fallback_operator(complex(a), b) else: return NotImplemented forward.__name__ = '__' + fallback_operator.__name__ + '__' forward.__doc__ = monomorphic_operator.__doc__ def reverse(b, a): if isinstance(a, Rational): # Includes ints. return monomorphic_operator(a, b) elif isinstance(a, numbers.Real): return fallback_operator(float(a), float(b)) elif isinstance(a, numbers.Complex): return fallback_operator(complex(a), complex(b)) else: return NotImplemented reverse.__name__ = '__r' + fallback_operator.__name__ + '__' reverse.__doc__ = monomorphic_operator.__doc__ return forward, reverse def _add(a, b): """a + b""" return Fraction(a.numerator * b.denominator + b.numerator * a.denominator, a.denominator * b.denominator) __add__, __radd__ = _operator_fallbacks(_add, operator.add) # ... ``` python fractions — Rational numbers fractions — Rational numbers ============================ **Source code:** [Lib/fractions.py](https://github.com/python/cpython/tree/3.9/Lib/fractions.py) The [`fractions`](#module-fractions "fractions: Rational numbers.") module provides support for rational number arithmetic. A Fraction instance can be constructed from a pair of integers, from another rational number, or from a string. `class fractions.Fraction(numerator=0, denominator=1)` `class fractions.Fraction(other_fraction)` `class fractions.Fraction(float)` `class fractions.Fraction(decimal)` `class fractions.Fraction(string)` The first version requires that *numerator* and *denominator* are instances of [`numbers.Rational`](numbers#numbers.Rational "numbers.Rational") and returns a new [`Fraction`](#fractions.Fraction "fractions.Fraction") instance with value `numerator/denominator`. If *denominator* is `0`, it raises a [`ZeroDivisionError`](exceptions#ZeroDivisionError "ZeroDivisionError"). The second version requires that *other\_fraction* is an instance of [`numbers.Rational`](numbers#numbers.Rational "numbers.Rational") and returns a [`Fraction`](#fractions.Fraction "fractions.Fraction") instance with the same value. The next two versions accept either a [`float`](functions#float "float") or a [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal") instance, and return a [`Fraction`](#fractions.Fraction "fractions.Fraction") instance with exactly the same value. 
Note that due to the usual issues with binary floating-point (see [Floating Point Arithmetic: Issues and Limitations](../tutorial/floatingpoint#tut-fp-issues)), the argument to `Fraction(1.1)` is not exactly equal to 11/10, and so `Fraction(1.1)` does *not* return `Fraction(11, 10)` as one might expect. (But see the documentation for the [`limit_denominator()`](#fractions.Fraction.limit_denominator "fractions.Fraction.limit_denominator") method below.) The last version of the constructor expects a string or unicode instance. The usual form for this instance is: ``` [sign] numerator ['/' denominator] ``` where the optional `sign` may be either ‘+’ or ‘-’ and `numerator` and `denominator` (if present) are strings of decimal digits. In addition, any string that represents a finite value and is accepted by the [`float`](functions#float "float") constructor is also accepted by the [`Fraction`](#fractions.Fraction "fractions.Fraction") constructor. In either form the input string may also have leading and/or trailing whitespace. Here are some examples: ``` >>> from fractions import Fraction >>> Fraction(16, -10) Fraction(-8, 5) >>> Fraction(123) Fraction(123, 1) >>> Fraction() Fraction(0, 1) >>> Fraction('3/7') Fraction(3, 7) >>> Fraction(' -3/7 ') Fraction(-3, 7) >>> Fraction('1.414213 \t\n') Fraction(1414213, 1000000) >>> Fraction('-.125') Fraction(-1, 8) >>> Fraction('7e-6') Fraction(7, 1000000) >>> Fraction(2.25) Fraction(9, 4) >>> Fraction(1.1) Fraction(2476979795053773, 2251799813685248) >>> from decimal import Decimal >>> Fraction(Decimal('1.1')) Fraction(11, 10) ``` The [`Fraction`](#fractions.Fraction "fractions.Fraction") class inherits from the abstract base class [`numbers.Rational`](numbers#numbers.Rational "numbers.Rational"), and implements all of the methods and operations from that class. [`Fraction`](#fractions.Fraction "fractions.Fraction") instances are hashable, and should be treated as immutable. In addition, [`Fraction`](#fractions.Fraction "fractions.Fraction") has the following properties and methods: Changed in version 3.2: The [`Fraction`](#fractions.Fraction "fractions.Fraction") constructor now accepts [`float`](functions#float "float") and [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal") instances. Changed in version 3.9: The [`math.gcd()`](math#math.gcd "math.gcd") function is now used to normalize the *numerator* and *denominator*. [`math.gcd()`](math#math.gcd "math.gcd") always returns an [`int`](functions#int "int"). Previously, the GCD type depended on *numerator* and *denominator*. `numerator` Numerator of the Fraction in lowest terms. `denominator` Denominator of the Fraction in lowest terms. `as_integer_ratio()` Return a tuple of two integers, whose ratio is equal to the Fraction and with a positive denominator. New in version 3.8. `from_float(flt)` This class method constructs a [`Fraction`](#fractions.Fraction "fractions.Fraction") representing the exact value of *flt*, which must be a [`float`](functions#float "float"). Beware that `Fraction.from_float(0.3)` is not the same value as `Fraction(3, 10)`. Note From Python 3.2 onwards, you can also construct a [`Fraction`](#fractions.Fraction "fractions.Fraction") instance directly from a [`float`](functions#float "float"). `from_decimal(dec)` This class method constructs a [`Fraction`](#fractions.Fraction "fractions.Fraction") representing the exact value of *dec*, which must be a [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal") instance.
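A short illustration of the two class methods just described, including the caveat about binary floats:

```
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Fraction.from_float(0.3) == Fraction(3, 10)  # 0.3 is not exactly 3/10
False
>>> Fraction.from_decimal(Decimal('0.3')) == Fraction(3, 10)
True
```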
Note From Python 3.2 onwards, you can also construct a [`Fraction`](#fractions.Fraction "fractions.Fraction") instance directly from a [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal") instance. `limit_denominator(max_denominator=1000000)` Finds and returns the closest [`Fraction`](#fractions.Fraction "fractions.Fraction") to `self` that has denominator at most max\_denominator. This method is useful for finding rational approximations to a given floating-point number: ``` >>> from fractions import Fraction >>> Fraction('3.1415926535897932').limit_denominator(1000) Fraction(355, 113) ``` or for recovering a rational number that’s represented as a float: ``` >>> from math import pi, cos >>> Fraction(cos(pi/3)) Fraction(4503599627370497, 9007199254740992) >>> Fraction(cos(pi/3)).limit_denominator() Fraction(1, 2) >>> Fraction(1.1).limit_denominator() Fraction(11, 10) ``` `__floor__()` Returns the greatest [`int`](functions#int "int") `<= self`. This method can also be accessed through the [`math.floor()`](math#math.floor "math.floor") function: ``` >>> from math import floor >>> floor(Fraction(355, 113)) 3 ``` `__ceil__()` Returns the least [`int`](functions#int "int") `>= self`. This method can also be accessed through the [`math.ceil()`](math#math.ceil "math.ceil") function. `__round__()` `__round__(ndigits)` The first version returns the nearest [`int`](functions#int "int") to `self`, rounding half to even. The second version rounds `self` to the nearest multiple of `Fraction(1, 10**ndigits)` (logically, if `ndigits` is negative), again rounding half toward even. This method can also be accessed through the [`round()`](functions#round "round") function. See also `Module` [`numbers`](numbers#module-numbers "numbers: Numeric abstract base classes (Complex, Real, Integral, etc.).") The abstract base classes making up the numeric tower. python fcntl — The fcntl and ioctl system calls fcntl — The fcntl and ioctl system calls ======================================== This module performs file control and I/O control on file descriptors. It is an interface to the `fcntl()` and `ioctl()` Unix routines. For a complete description of these calls, see *[fcntl(2)](https://manpages.debian.org/fcntl(2))* and *[ioctl(2)](https://manpages.debian.org/ioctl(2))* Unix manual pages. All functions in this module take a file descriptor *fd* as their first argument. This can be an integer file descriptor, such as returned by `sys.stdin.fileno()`, or an [`io.IOBase`](io#io.IOBase "io.IOBase") object, such as `sys.stdin` itself, which provides a [`fileno()`](io#io.IOBase.fileno "io.IOBase.fileno") that returns a genuine file descriptor. Changed in version 3.3: Operations in this module used to raise an [`IOError`](exceptions#IOError "IOError") where they now raise an [`OSError`](exceptions#OSError "OSError"). Changed in version 3.8: The fcntl module now contains `F_ADD_SEALS`, `F_GET_SEALS`, and `F_SEAL_*` constants for sealing of [`os.memfd_create()`](os#os.memfd_create "os.memfd_create") file descriptors. Changed in version 3.9: On macOS, the fcntl module exposes the `F_GETPATH` constant, which obtains the path of a file from a file descriptor. On Linux(>=3.15), the fcntl module exposes the `F_OFD_GETLK`, `F_OFD_SETLK` and `F_OFD_SETLKW` constants, which are used when working with open file description locks. 
The module defines the following functions: `fcntl.fcntl(fd, cmd, arg=0)` Perform the operation *cmd* on file descriptor *fd* (file objects providing a [`fileno()`](io#io.IOBase.fileno "io.IOBase.fileno") method are accepted as well). The values used for *cmd* are operating system dependent, and are available as constants in the [`fcntl`](#module-fcntl "fcntl: The fcntl() and ioctl() system calls. (Unix)") module, using the same names as used in the relevant C header files. The argument *arg* can either be an integer value, or a [`bytes`](stdtypes#bytes "bytes") object. With an integer value, the return value of this function is the integer return value of the C `fcntl()` call. When the argument is bytes, it represents a binary structure, e.g. created by [`struct.pack()`](struct#struct.pack "struct.pack"). The binary data is copied to a buffer whose address is passed to the C `fcntl()` call. The return value after a successful call is the contents of the buffer, converted to a [`bytes`](stdtypes#bytes "bytes") object. The length of the returned object will be the same as the length of the *arg* argument. This is limited to 1024 bytes. If the information returned in the buffer by the operating system is larger than 1024 bytes, this is most likely to result in a segmentation violation or a more subtle data corruption. If the `fcntl()` fails, an [`OSError`](exceptions#OSError "OSError") is raised. Raises an [auditing event](sys#auditing) `fcntl.fcntl` with arguments `fd`, `cmd`, `arg`. `fcntl.ioctl(fd, request, arg=0, mutate_flag=True)` This function is identical to the [`fcntl()`](#fcntl.fcntl "fcntl.fcntl") function, except that the argument handling is even more complicated. The *request* parameter is limited to values that can fit in 32 bits. Additional constants of interest for use as the *request* argument can be found in the [`termios`](termios#module-termios "termios: POSIX style tty control. (Unix)") module, under the same names as used in the relevant C header files. The parameter *arg* can be one of an integer, an object supporting the read-only buffer interface (like [`bytes`](stdtypes#bytes "bytes")) or an object supporting the read-write buffer interface (like [`bytearray`](stdtypes#bytearray "bytearray")). In all but the last case, behaviour is as for the [`fcntl()`](#fcntl.fcntl "fcntl.fcntl") function. If a mutable buffer is passed, then the behaviour is determined by the value of the *mutate\_flag* parameter. If it is false, the buffer’s mutability is ignored and behaviour is as for a read-only buffer, except that the 1024 byte limit mentioned above is avoided – so long as the buffer you pass is at least as long as what the operating system wants to put there, things should work. If *mutate\_flag* is true (the default), then the buffer is (in effect) passed to the underlying [`ioctl()`](#fcntl.ioctl "fcntl.ioctl") system call, the latter’s return code is passed back to the calling Python, and the buffer’s new contents reflect the action of the [`ioctl()`](#fcntl.ioctl "fcntl.ioctl"). This is a slight simplification, because if the supplied buffer is less than 1024 bytes long it is first copied into a static buffer 1024 bytes long which is then passed to [`ioctl()`](#fcntl.ioctl "fcntl.ioctl") and copied back into the supplied buffer. If the `ioctl()` fails, an [`OSError`](exceptions#OSError "OSError") exception is raised. 
An example: ``` >>> import array, fcntl, struct, termios, os >>> os.getpgrp() 13341 >>> struct.unpack('h', fcntl.ioctl(0, termios.TIOCGPGRP, " "))[0] 13341 >>> buf = array.array('h', [0]) >>> fcntl.ioctl(0, termios.TIOCGPGRP, buf, 1) 0 >>> buf array('h', [13341]) ``` Raises an [auditing event](sys#auditing) `fcntl.ioctl` with arguments `fd`, `request`, `arg`. `fcntl.flock(fd, operation)` Perform the lock operation *operation* on file descriptor *fd* (file objects providing a [`fileno()`](io#io.IOBase.fileno "io.IOBase.fileno") method are accepted as well). See the Unix manual *[flock(2)](https://manpages.debian.org/flock(2))* for details. (On some systems, this function is emulated using `fcntl()`.) If the `flock()` fails, an [`OSError`](exceptions#OSError "OSError") exception is raised. Raises an [auditing event](sys#auditing) `fcntl.flock` with arguments `fd`, `operation`. `fcntl.lockf(fd, cmd, len=0, start=0, whence=0)` This is essentially a wrapper around the [`fcntl()`](#fcntl.fcntl "fcntl.fcntl") locking calls. *fd* is the file descriptor (file objects providing a [`fileno()`](io#io.IOBase.fileno "io.IOBase.fileno") method are accepted as well) of the file to lock or unlock, and *cmd* is one of the following values: * `LOCK_UN` – unlock * `LOCK_SH` – acquire a shared lock * `LOCK_EX` – acquire an exclusive lock When *cmd* is `LOCK_SH` or `LOCK_EX`, it can also be bitwise ORed with `LOCK_NB` to avoid blocking on lock acquisition. If `LOCK_NB` is used and the lock cannot be acquired, an [`OSError`](exceptions#OSError "OSError") will be raised and the exception will have an *errno* attribute set to `EACCES` or `EAGAIN` (depending on the operating system; for portability, check for both values). On at least some systems, `LOCK_EX` can only be used if the file descriptor refers to a file opened for writing. *len* is the number of bytes to lock, *start* is the byte offset at which the lock starts, relative to *whence*, and *whence* is as with [`io.IOBase.seek()`](io#io.IOBase.seek "io.IOBase.seek"), specifically: * `0` – relative to the start of the file ([`os.SEEK_SET`](os#os.SEEK_SET "os.SEEK_SET")) * `1` – relative to the current buffer position ([`os.SEEK_CUR`](os#os.SEEK_CUR "os.SEEK_CUR")) * `2` – relative to the end of the file ([`os.SEEK_END`](os#os.SEEK_END "os.SEEK_END")) The default for *start* is 0, which means to start at the beginning of the file. The default for *len* is 0 which means to lock to the end of the file. The default for *whence* is also 0. Raises an [auditing event](sys#auditing) `fcntl.lockf` with arguments `fd`, `cmd`, `len`, `start`, `whence`. Examples (all on an SVR4-compliant system): ``` import struct, fcntl, os f = open(...) rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY) lockdata = struct.pack('hhllhh', fcntl.F_WRLCK, 0, 0, 0, 0, 0) rv = fcntl.fcntl(f, fcntl.F_SETLKW, lockdata) ``` Note that in the first example the return value variable *rv* will hold an integer value; in the second example it will hold a [`bytes`](stdtypes#bytes "bytes") object. The structure layout for the *lockdata* variable is system dependent — therefore using the [`flock()`](#fcntl.flock "fcntl.flock") call may be better. 
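The note above suggests preferring [`flock()`](#fcntl.flock "fcntl.flock") when the `fcntl()` structure layout is a concern. A minimal sketch of that approach (the file name `counter.txt` is hypothetical, used only for illustration) serializes updates to a shared file with an exclusive advisory lock:

```
import fcntl

# Hold an exclusive advisory lock while updating a shared file.
with open("counter.txt", "a+") as f:
    fcntl.flock(f, fcntl.LOCK_EX)      # blocks until the lock is granted
    try:
        f.seek(0)
        value = int(f.read() or 0)     # an empty file counts as zero
        f.seek(0)
        f.truncate()
        f.write(str(value + 1))
    finally:
        fcntl.flock(f, fcntl.LOCK_UN)  # release the lock explicitly
```

The lock is advisory: it coordinates only processes that also call `flock()` on the same file, and it is released when the underlying file descriptor is closed.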
See also `Module` [`os`](os#module-os "os: Miscellaneous operating system interfaces.") If the locking flags [`O_SHLOCK`](os#os.O_SHLOCK "os.O_SHLOCK") and [`O_EXLOCK`](os#os.O_EXLOCK "os.O_EXLOCK") are present in the [`os`](os#module-os "os: Miscellaneous operating system interfaces.") module (on BSD only), the [`os.open()`](os#os.open "os.open") function provides an alternative to the [`lockf()`](#fcntl.lockf "fcntl.lockf") and [`flock()`](#fcntl.flock "fcntl.flock") functions.
python grp — The group database grp — The group database ======================== This module provides access to the Unix group database. It is available on all Unix versions. Group database entries are reported as a tuple-like object, whose attributes correspond to the members of the `group` structure (Attribute field below, see `<grp.h>`): | Index | Attribute | Meaning | | --- | --- | --- | | 0 | gr\_name | the name of the group | | 1 | gr\_passwd | the (encrypted) group password; often empty | | 2 | gr\_gid | the numerical group ID | | 3 | gr\_mem | all the group members’ user names | The gid is an integer, name and password are strings, and the member list is a list of strings. (Note that most users are not explicitly listed as members of the group they are in according to the password database. Check both databases to get complete membership information. Also note that a `gr_name` that starts with a `+` or `-` is likely to be a YP/NIS reference and may not be accessible via [`getgrnam()`](#grp.getgrnam "grp.getgrnam") or [`getgrgid()`](#grp.getgrgid "grp.getgrgid").) It defines the following items: `grp.getgrgid(gid)` Return the group database entry for the given numeric group ID. [`KeyError`](exceptions#KeyError "KeyError") is raised if the entry asked for cannot be found. Deprecated since version 3.6: Support for non-integer arguments, such as floats or strings, in [`getgrgid()`](#grp.getgrgid "grp.getgrgid") is deprecated. `grp.getgrnam(name)` Return the group database entry for the given group name. [`KeyError`](exceptions#KeyError "KeyError") is raised if the entry asked for cannot be found. `grp.getgrall()` Return a list of all available group entries, in arbitrary order. See also `Module` [`pwd`](pwd#module-pwd "pwd: The password database (getpwnam() and friends). (Unix)") An interface to the user database, similar to this. `Module` [`spwd`](spwd#module-spwd "spwd: The shadow password database (getspnam() and friends). (deprecated) (Unix)") An interface to the shadow password database, similar to this. python ensurepip — Bootstrapping the pip installer ensurepip — Bootstrapping the pip installer =========================================== New in version 3.4. The [`ensurepip`](#module-ensurepip "ensurepip: Bootstrapping the \"pip\" installer into an existing Python installation or virtual environment.") package provides support for bootstrapping the `pip` installer into an existing Python installation or virtual environment. This bootstrapping approach reflects the fact that `pip` is an independent project with its own release cycle, and the latest available stable version is bundled with maintenance and feature releases of the CPython reference interpreter. In most cases, end users of Python shouldn’t need to invoke this module directly (as `pip` should be bootstrapped by default), but it may be needed if installing `pip` was skipped when installing Python (or when creating a virtual environment) or after explicitly uninstalling `pip`. Note This module *does not* access the internet. All of the components needed to bootstrap `pip` are included as internal parts of the package. See also [Installing Python Modules](../installing/index#installing-index) The end user guide for installing Python packages [**PEP 453**](https://www.python.org/dev/peps/pep-0453): Explicit bootstrapping of pip in Python installations The original rationale and specification for this module. 
Command line interface ---------------------- The command line interface is invoked using the interpreter’s `-m` switch. The simplest possible invocation is: ``` python -m ensurepip ``` This invocation will install `pip` if it is not already installed, but otherwise does nothing. To ensure the installed version of `pip` is at least as recent as the one bundled with `ensurepip`, pass the `--upgrade` option: ``` python -m ensurepip --upgrade ``` By default, `pip` is installed into the current virtual environment (if one is active) or into the system site packages (if there is no active virtual environment). The installation location can be controlled through two additional command line options: * `--root <dir>`: Installs `pip` relative to the given root directory rather than the root of the currently active virtual environment (if any) or the default root for the current Python installation. * `--user`: Installs `pip` into the user site packages directory rather than globally for the current Python installation (this option is not permitted inside an active virtual environment). By default, the scripts `pipX` and `pipX.Y` will be installed (where X.Y stands for the version of Python used to invoke `ensurepip`). The scripts installed can be controlled through two additional command line options: * `--altinstall`: if an alternate installation is requested, the `pipX` script will *not* be installed. * `--default-pip`: if a “default pip” installation is requested, the `pip` script will be installed in addition to the two regular scripts. Providing both of the script selection options will trigger an exception. Module API ---------- [`ensurepip`](#module-ensurepip "ensurepip: Bootstrapping the \"pip\" installer into an existing Python installation or virtual environment.") exposes two functions for programmatic use: `ensurepip.version()` Returns a string specifying the bundled version of pip that will be installed when bootstrapping an environment. `ensurepip.bootstrap(root=None, upgrade=False, user=False, altinstall=False, default_pip=False, verbosity=0)` Bootstraps `pip` into the current or designated environment. *root* specifies an alternative root directory to install relative to. If *root* is `None`, then installation uses the default install location for the current environment. *upgrade* indicates whether or not to upgrade an existing installation of an earlier version of `pip` to the bundled version. *user* indicates whether to use the user scheme rather than installing globally. By default, the scripts `pipX` and `pipX.Y` will be installed (where X.Y stands for the current version of Python). If *altinstall* is set, then `pipX` will *not* be installed. If *default\_pip* is set, then `pip` will be installed in addition to the two regular scripts. Setting both *altinstall* and *default\_pip* will trigger [`ValueError`](exceptions#ValueError "ValueError"). *verbosity* controls the level of output to [`sys.stdout`](sys#sys.stdout "sys.stdout") from the bootstrapping operation. Raises an [auditing event](sys#auditing) `ensurepip.bootstrap` with argument `root`. Note The bootstrapping process has side effects on both `sys.path` and `os.environ`. Invoking the command line interface in a subprocess instead allows these side effects to be avoided. Note The bootstrapping process may install additional modules required by `pip`, but other software should not assume those dependencies will always be present by default (as the dependencies may be removed in a future version of `pip`). 
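Combining the notes above, a minimal sketch: query the bundled `pip` version in-process, but run the actual bootstrap through the command line interface in a subprocess so that the side effects on `sys.path` and `os.environ` stay out of the current process:

```
import subprocess
import sys

import ensurepip

# Report which pip version would be bootstrapped (no side effects).
print("bundled pip:", ensurepip.version())

# Bootstrap in a subprocess so sys.path and os.environ in this
# process are left untouched.
subprocess.run([sys.executable, "-m", "ensurepip", "--upgrade"], check=True)
```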
python Using importlib.metadata Using importlib.metadata ======================== **Source code:** [Lib/importlib/metadata.py](https://github.com/python/cpython/tree/3.9/Lib/importlib/metadata.py) New in version 3.8. Note This functionality is provisional and may deviate from the usual version semantics of the standard library. `importlib.metadata` is a library that provides access to installed package metadata. Built in part on Python’s import system, this library intends to replace similar functionality in the [entry point API](https://setuptools.readthedocs.io/en/latest/pkg_resources.html#entry-points) and [metadata API](https://setuptools.readthedocs.io/en/latest/pkg_resources.html#metadata-api) of `pkg_resources`. Along with [`importlib.resources`](importlib#module-importlib.resources "importlib.resources: Package resource reading, opening, and access") in Python 3.7 and newer (backported as [importlib\_resources](https://importlib-resources.readthedocs.io/en/latest/index.html) for older versions of Python), this can eliminate the need to use the older and less efficient `pkg_resources` package. By “installed package” we generally mean a third-party package installed into Python’s `site-packages` directory via tools such as [pip](https://pypi.org/project/pip/). Specifically, it means a package with either a discoverable `dist-info` or `egg-info` directory, and metadata defined by [**PEP 566**](https://www.python.org/dev/peps/pep-0566) or its older specifications. By default, package metadata can live on the file system or in zip archives on [`sys.path`](sys#sys.path "sys.path"). Through an extension mechanism, the metadata can live almost anywhere. Overview -------- Let’s say you wanted to get the version string for a package you’ve installed using `pip`. We start by creating a virtual environment and installing something into it: ``` $ python3 -m venv example $ source example/bin/activate (example) $ pip install wheel ``` You can get the version string for `wheel` by running the following: ``` (example) $ python >>> from importlib.metadata import version >>> version('wheel') '0.32.3' ``` You can also get the set of entry points keyed by group, such as `console_scripts`, `distutils.commands` and others. Each group contains a sequence of [EntryPoint](#entry-points) objects. You can get the [metadata for a distribution](#metadata): ``` >>> list(metadata('wheel')) ['Metadata-Version', 'Name', 'Version', 'Summary', 'Home-page', 'Author', 'Author-email', 'Maintainer', 'Maintainer-email', 'License', 'Project-URL', 'Project-URL', 'Project-URL', 'Keywords', 'Platform', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Classifier', 'Requires-Python', 'Provides-Extra', 'Requires-Dist', 'Requires-Dist'] ``` You can also get a [distribution’s version number](#version), list its [constituent files](#files), and get a list of the distribution’s [Distribution requirements](#requirements). Functional API -------------- This package provides the following functionality via its public API. ### Entry points The `entry_points()` function returns a dictionary of all entry points, keyed by group. Entry points are represented by `EntryPoint` instances; each `EntryPoint` has `.name`, `.group`, and `.value` attributes and a `.load()` method to resolve the value. 
There are also `.module`, `.attr`, and `.extras` attributes for getting the components of the `.value` attribute: ``` >>> eps = entry_points() >>> list(eps) ['console_scripts', 'distutils.commands', 'distutils.setup_keywords', 'egg_info.writers', 'setuptools.installation'] >>> scripts = eps['console_scripts'] >>> wheel = [ep for ep in scripts if ep.name == 'wheel'][0] >>> wheel EntryPoint(name='wheel', value='wheel.cli:main', group='console_scripts') >>> wheel.module 'wheel.cli' >>> wheel.attr 'main' >>> wheel.extras [] >>> main = wheel.load() >>> main <function main at 0x103528488> ``` The `group` and `name` are arbitrary values defined by the package author and usually a client will wish to resolve all entry points for a particular group. Read [the setuptools docs](https://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins) for more information on entry points, their definition, and usage. ### Distribution metadata Every distribution includes some metadata, which you can extract using the `metadata()` function: ``` >>> wheel_metadata = metadata('wheel') ``` The keys of the returned data structure [1](#f1) name the metadata keywords, and their values are returned unparsed from the distribution metadata: ``` >>> wheel_metadata['Requires-Python'] '>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*' ``` ### Distribution versions The `version()` function is the quickest way to get a distribution’s version number, as a string: ``` >>> version('wheel') '0.32.3' ``` ### Distribution files You can also get the full set of files contained within a distribution. The `files()` function takes a distribution package name and returns all of the files installed by this distribution. Each file object returned is a `PackagePath`, a [`pathlib.PurePath`](pathlib#pathlib.PurePath "pathlib.PurePath")-derived object with additional `dist`, `size`, and `hash` properties as indicated by the metadata. For example: ``` >>> util = [p for p in files('wheel') if 'util.py' in str(p)][0] >>> util PackagePath('wheel/util.py') >>> util.size 859 >>> util.dist <importlib.metadata._hooks.PathDistribution object at 0x101e0cef0> >>> util.hash <FileHash mode: sha256 value: bYkw5oMccfazVCoYQwKkkemoVyMAFoR34mmKBx8R1NI> ``` Once you have the file, you can also read its contents: ``` >>> print(util.read_text()) import base64 import sys ... def as_bytes(s): if isinstance(s, text_type): return s.encode('utf-8') return s ``` You can also use the `locate` method to get the absolute path to the file: ``` >>> util.locate() PosixPath('/home/gustav/example/lib/site-packages/wheel/util.py') ``` In the case where the metadata file listing files (RECORD or SOURCES.txt) is missing, `files()` will return `None`. The caller may wish to wrap calls to `files()` in [always\_iterable](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.always_iterable) or otherwise guard against this condition if the target distribution is not known to have the metadata present. ### Distribution requirements To get the full set of requirements for a distribution, use the `requires()` function: ``` >>> requires('wheel') ["pytest (>=3.0.0) ; extra == 'test'", "pytest-cov ; extra == 'test'"] ``` Distributions ------------- While the above API is the most common and convenient usage, you can get all of that information from the `Distribution` class. A `Distribution` is an abstract object that represents the metadata for a Python package. 
You can get the `Distribution` instance: ``` >>> from importlib.metadata import distribution >>> dist = distribution('wheel') ``` Thus, an alternative way to get the version number is through the `Distribution` instance: ``` >>> dist.version '0.32.3' ``` There are all kinds of additional metadata available on the `Distribution` instance: ``` >>> dist.metadata['Requires-Python'] '>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*' >>> dist.metadata['License'] 'MIT' ``` The full set of available metadata is not described here. See [**PEP 566**](https://www.python.org/dev/peps/pep-0566) for additional details. Extending the search algorithm ------------------------------ Because package metadata is not available through [`sys.path`](sys#sys.path "sys.path") searches, or package loaders directly, the metadata for a package is found through import system [finders](../reference/import#finders-and-loaders). To find a distribution package’s metadata, `importlib.metadata` queries the list of [meta path finders](../glossary#term-meta-path-finder) on [`sys.meta_path`](sys#sys.meta_path "sys.meta_path"). The default `PathFinder` for Python includes a hook that calls into `importlib.metadata.MetadataPathFinder` for finding distributions loaded from typical file-system-based paths. The abstract class [`importlib.abc.MetaPathFinder`](importlib#importlib.abc.MetaPathFinder "importlib.abc.MetaPathFinder") defines the interface expected of finders by Python’s import system. `importlib.metadata` extends this protocol by looking for an optional `find_distributions` callable on the finders from [`sys.meta_path`](sys#sys.meta_path "sys.meta_path") and presents this extended interface as the `DistributionFinder` abstract base class, which defines this abstract method: ``` @abc.abstractmethod def find_distributions(context=DistributionFinder.Context()): """Return an iterable of all Distribution instances capable of loading the metadata for packages for the indicated ``context``. """ ``` The `DistributionFinder.Context` object provides `.path` and `.name` properties indicating the path to search and name to match and may supply other relevant context. What this means in practice is that to support finding distribution package metadata in locations other than the file system, subclass `Distribution` and implement the abstract methods. Then from a custom finder, return instances of this derived `Distribution` in the `find_distributions()` method. #### Footnotes `1` Technically, the returned distribution metadata object is an [`email.message.EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") instance, but this is an implementation detail, and not part of the stable API. You should only use dictionary-like methods and syntax to access the metadata contents. python zipfile — Work with ZIP archives zipfile — Work with ZIP archives ================================ **Source code:** [Lib/zipfile.py](https://github.com/python/cpython/tree/3.9/Lib/zipfile.py) The ZIP file format is a common archive and compression standard. This module provides tools to create, read, write, append, and list a ZIP file. Any advanced use of this module will require an understanding of the format, as defined in [PKZIP Application Note](https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT). This module does not currently handle multi-disk ZIP files. It can handle ZIP files that use the ZIP64 extensions (that is ZIP files that are more than 4 GiB in size). 
It supports decryption of encrypted files in ZIP archives, but it currently cannot create an encrypted file. Decryption is extremely slow as it is implemented in native Python rather than C. The module defines the following items: `exception zipfile.BadZipFile` The error raised for bad ZIP files. New in version 3.2. `exception zipfile.BadZipfile` Alias of [`BadZipFile`](#zipfile.BadZipFile "zipfile.BadZipFile"), for compatibility with older Python versions. Deprecated since version 3.2. `exception zipfile.LargeZipFile` The error raised when a ZIP file would require ZIP64 functionality but that has not been enabled. `class zipfile.ZipFile` The class for reading and writing ZIP files. See section [ZipFile Objects](#zipfile-objects) for constructor details. `class zipfile.Path` A pathlib-compatible wrapper for zip files. See section [Path Objects](#path-objects) for details. New in version 3.8. `class zipfile.PyZipFile` Class for creating ZIP archives containing Python libraries. `class zipfile.ZipInfo(filename='NoName', date_time=(1980, 1, 1, 0, 0, 0))` Class used to represent information about a member of an archive. Instances of this class are returned by the [`getinfo()`](#zipfile.ZipFile.getinfo "zipfile.ZipFile.getinfo") and [`infolist()`](#zipfile.ZipFile.infolist "zipfile.ZipFile.infolist") methods of [`ZipFile`](#zipfile.ZipFile "zipfile.ZipFile") objects. Most users of the [`zipfile`](#module-zipfile "zipfile: Read and write ZIP-format archive files.") module will not need to create these, but only use those created by this module. *filename* should be the full name of the archive member, and *date\_time* should be a tuple containing six fields which describe the time of the last modification to the file; the fields are described in section [ZipInfo Objects](#zipinfo-objects). `zipfile.is_zipfile(filename)` Returns `True` if *filename* is a valid ZIP file based on its magic number, otherwise returns `False`. *filename* may be a file or file-like object too. Changed in version 3.1: Support for file and file-like objects. `zipfile.ZIP_STORED` The numeric constant for an uncompressed archive member. `zipfile.ZIP_DEFLATED` The numeric constant for the usual ZIP compression method. This requires the [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") module. `zipfile.ZIP_BZIP2` The numeric constant for the BZIP2 compression method. This requires the [`bz2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") module. New in version 3.3. `zipfile.ZIP_LZMA` The numeric constant for the LZMA compression method. This requires the [`lzma`](lzma#module-lzma "lzma: A Python wrapper for the liblzma compression library.") module. New in version 3.3. Note The ZIP file format specification has included support for bzip2 compression since 2001, and for LZMA compression since 2006. However, some tools (including older Python releases) do not support these compression methods, and may either refuse to process the ZIP file altogether, or fail to extract individual files. See also [PKZIP Application Note](https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT) Documentation on the ZIP file format by Phil Katz, the creator of the format and algorithms used. [Info-ZIP Home Page](http://www.info-zip.org/) Information about the Info-ZIP project’s ZIP archive programs and development libraries. 
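As a short illustration of the items above (the archive and member names are hypothetical), the following sketch selects [`ZIP_DEFLATED`](#zipfile.ZIP_DEFLATED "zipfile.ZIP_DEFLATED") when the required [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") module is available, falls back to [`ZIP_STORED`](#zipfile.ZIP_STORED "zipfile.ZIP_STORED") otherwise, and then checks the result with [`is_zipfile()`](#zipfile.is_zipfile "zipfile.is_zipfile"):

```
import zipfile

# Prefer DEFLATE compression, but fall back to an uncompressed
# archive if the zlib module is unavailable.
try:
    import zlib  # noqa: F401 -- imported only to test availability
    compression = zipfile.ZIP_DEFLATED
except ImportError:
    compression = zipfile.ZIP_STORED

# "archive.zip" and "hello.txt" are hypothetical names.
with zipfile.ZipFile("archive.zip", "w", compression=compression) as zf:
    zf.writestr("hello.txt", "Hello, world!\n")

print(zipfile.is_zipfile("archive.zip"))  # True for a valid ZIP file
```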
ZipFile Objects --------------- `class zipfile.ZipFile(file, mode='r', compression=ZIP_STORED, allowZip64=True, compresslevel=None, *, strict_timestamps=True)` Open a ZIP file, where *file* can be a path to a file (a string), a file-like object or a [path-like object](../glossary#term-path-like-object). The *mode* parameter should be `'r'` to read an existing file, `'w'` to truncate and write a new file, `'a'` to append to an existing file, or `'x'` to exclusively create and write a new file. If *mode* is `'x'` and *file* refers to an existing file, a [`FileExistsError`](exceptions#FileExistsError "FileExistsError") will be raised. If *mode* is `'a'` and *file* refers to an existing ZIP file, then additional files are added to it. If *file* does not refer to a ZIP file, then a new ZIP archive is appended to the file. This is meant for adding a ZIP archive to another file (such as `python.exe`). If *mode* is `'a'` and the file does not exist at all, it is created. If *mode* is `'r'` or `'a'`, the file should be seekable. *compression* is the ZIP compression method to use when writing the archive, and should be [`ZIP_STORED`](#zipfile.ZIP_STORED "zipfile.ZIP_STORED"), [`ZIP_DEFLATED`](#zipfile.ZIP_DEFLATED "zipfile.ZIP_DEFLATED"), [`ZIP_BZIP2`](#zipfile.ZIP_BZIP2 "zipfile.ZIP_BZIP2") or [`ZIP_LZMA`](#zipfile.ZIP_LZMA "zipfile.ZIP_LZMA"); unrecognized values will cause [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") to be raised. If [`ZIP_DEFLATED`](#zipfile.ZIP_DEFLATED "zipfile.ZIP_DEFLATED"), [`ZIP_BZIP2`](#zipfile.ZIP_BZIP2 "zipfile.ZIP_BZIP2") or [`ZIP_LZMA`](#zipfile.ZIP_LZMA "zipfile.ZIP_LZMA") is specified but the corresponding module ([`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip."), [`bz2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") or [`lzma`](lzma#module-lzma "lzma: A Python wrapper for the liblzma compression library.")) is not available, [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. The default is [`ZIP_STORED`](#zipfile.ZIP_STORED "zipfile.ZIP_STORED"). If *allowZip64* is `True` (the default) zipfile will create ZIP files that use the ZIP64 extensions when the zipfile is larger than 4 GiB. If it is `false`, [`zipfile`](#module-zipfile "zipfile: Read and write ZIP-format archive files.") will raise an exception when the ZIP file would require ZIP64 extensions. The *compresslevel* parameter controls the compression level to use when writing files to the archive. When using [`ZIP_STORED`](#zipfile.ZIP_STORED "zipfile.ZIP_STORED") or [`ZIP_LZMA`](#zipfile.ZIP_LZMA "zipfile.ZIP_LZMA") it has no effect. When using [`ZIP_DEFLATED`](#zipfile.ZIP_DEFLATED "zipfile.ZIP_DEFLATED") integers `0` through `9` are accepted (see [`zlib`](zlib#zlib.compressobj "zlib.compressobj") for more information). When using [`ZIP_BZIP2`](#zipfile.ZIP_BZIP2 "zipfile.ZIP_BZIP2") integers `1` through `9` are accepted (see [`bz2`](bz2#bz2.BZ2File "bz2.BZ2File") for more information). The *strict\_timestamps* argument, when set to `False`, allows zipping files older than 1980-01-01 at the cost of setting the timestamp to 1980-01-01. Similar behavior occurs with files newer than 2107-12-31: the timestamp is also set to the limit. 
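A minimal sketch of these constructor parameters (the file names are hypothetical): mode `'x'` refuses to overwrite an existing archive, and `compresslevel=9` requests the strongest DEFLATE compression:

```
from zipfile import ZIP_DEFLATED, ZipFile

# "reports.zip" is created exclusively; FileExistsError is raised if it
# already exists. "summary.txt" is a hypothetical existing file to add.
with ZipFile("reports.zip", mode="x", compression=ZIP_DEFLATED,
             compresslevel=9) as zf:
    zf.write("summary.txt")
```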
If the file is created with mode `'w'`, `'x'` or `'a'` and then [`closed`](#zipfile.ZipFile.close "zipfile.ZipFile.close") without adding any files to the archive, the appropriate ZIP structures for an empty archive will be written to the file. ZipFile is also a context manager and therefore supports the [`with`](../reference/compound_stmts#with) statement. In the example, *myzip* is closed after the `with` statement’s suite is finished—even if an exception occurs: ``` with ZipFile('spam.zip', 'w') as myzip: myzip.write('eggs.txt') ``` New in version 3.2: Added the ability to use [`ZipFile`](#zipfile.ZipFile "zipfile.ZipFile") as a context manager. Changed in version 3.3: Added support for [`bzip2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") and [`lzma`](lzma#module-lzma "lzma: A Python wrapper for the liblzma compression library.") compression. Changed in version 3.4: ZIP64 extensions are enabled by default. Changed in version 3.5: Added support for writing to unseekable streams. Added support for the `'x'` mode. Changed in version 3.6: Previously, a plain [`RuntimeError`](exceptions#RuntimeError "RuntimeError") was raised for unrecognized compression values. Changed in version 3.6.2: The *file* parameter accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.7: Add the *compresslevel* parameter. New in version 3.8: The *strict\_timestamps* keyword-only argument `ZipFile.close()` Close the archive file. You must call [`close()`](#zipfile.ZipFile.close "zipfile.ZipFile.close") before exiting your program or essential records will not be written. `ZipFile.getinfo(name)` Return a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") object with information about the archive member *name*. Calling [`getinfo()`](#zipfile.ZipFile.getinfo "zipfile.ZipFile.getinfo") for a name not currently contained in the archive will raise a [`KeyError`](exceptions#KeyError "KeyError"). `ZipFile.infolist()` Return a list containing a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") object for each member of the archive. The objects are in the same order as their entries in the actual ZIP file on disk if an existing archive was opened. `ZipFile.namelist()` Return a list of archive members by name. `ZipFile.open(name, mode='r', pwd=None, *, force_zip64=False)` Access a member of the archive as a binary file-like object. *name* can be either the name of a file within the archive or a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") object. The *mode* parameter, if included, must be `'r'` (the default) or `'w'`. *pwd* is the password used to decrypt encrypted ZIP files. [`open()`](#zipfile.ZipFile.open "zipfile.ZipFile.open") is also a context manager and therefore supports the [`with`](../reference/compound_stmts#with) statement: ``` with ZipFile('spam.zip') as myzip: with myzip.open('eggs.txt') as myfile: print(myfile.read()) ``` With *mode* `'r'` the file-like object (`ZipExtFile`) is read-only and provides the following methods: [`read()`](io#io.BufferedIOBase.read "io.BufferedIOBase.read"), [`readline()`](io#io.IOBase.readline "io.IOBase.readline"), [`readlines()`](io#io.IOBase.readlines "io.IOBase.readlines"), [`seek()`](io#io.IOBase.seek "io.IOBase.seek"), [`tell()`](io#io.IOBase.tell "io.IOBase.tell"), [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__"), [`__next__()`](stdtypes#iterator.__next__ "iterator.__next__"). These objects can operate independently of the ZipFile. 
With `mode='w'`, a writable file handle is returned, which supports the [`write()`](io#io.BufferedIOBase.write "io.BufferedIOBase.write") method. While a writable file handle is open, attempting to read or write other files in the ZIP file will raise a [`ValueError`](exceptions#ValueError "ValueError"). When writing a file, if the file size is not known in advance but may exceed 2 GiB, pass `force_zip64=True` to ensure that the header format is capable of supporting large files. If the file size is known in advance, construct a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") object with [`file_size`](#zipfile.ZipInfo.file_size "zipfile.ZipInfo.file_size") set, and use that as the *name* parameter. Note The [`open()`](#zipfile.ZipFile.open "zipfile.ZipFile.open"), [`read()`](#zipfile.ZipFile.read "zipfile.ZipFile.read") and [`extract()`](#zipfile.ZipFile.extract "zipfile.ZipFile.extract") methods can take a filename or a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") object. You will appreciate this when trying to read a ZIP file that contains members with duplicate names. Changed in version 3.6: Removed support of `mode='U'`. Use [`io.TextIOWrapper`](io#io.TextIOWrapper "io.TextIOWrapper") for reading compressed text files in [universal newlines](../glossary#term-universal-newlines) mode. Changed in version 3.6: [`ZipFile.open()`](#zipfile.ZipFile.open "zipfile.ZipFile.open") can now be used to write files into the archive with the `mode='w'` option. Changed in version 3.6: Calling [`open()`](#zipfile.ZipFile.open "zipfile.ZipFile.open") on a closed ZipFile will raise a [`ValueError`](exceptions#ValueError "ValueError"). Previously, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") was raised. `ZipFile.extract(member, path=None, pwd=None)` Extract a member from the archive to the current working directory; *member* must be its full name or a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") object. Its file information is extracted as accurately as possible. *path* specifies a different directory to extract to. *member* can be a filename or a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") object. *pwd* is the password used for encrypted files. Returns the normalized path created (a directory or new file). Note If a member filename is an absolute path, a drive/UNC sharepoint and leading (back)slashes will be stripped, e.g.: `///foo/bar` becomes `foo/bar` on Unix, and `C:\foo\bar` becomes `foo\bar` on Windows. And all `".."` components in a member filename will be removed, e.g.: `../../foo../../ba..r` becomes `foo../ba..r`. On Windows, illegal characters (`:`, `<`, `>`, `|`, `"`, `?`, and `*`) are replaced by underscore (`_`). Changed in version 3.6: Calling [`extract()`](#zipfile.ZipFile.extract "zipfile.ZipFile.extract") on a closed ZipFile will raise a [`ValueError`](exceptions#ValueError "ValueError"). Previously, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") was raised. Changed in version 3.6.2: The *path* parameter accepts a [path-like object](../glossary#term-path-like-object). `ZipFile.extractall(path=None, members=None, pwd=None)` Extract all members from the archive to the current working directory. *path* specifies a different directory to extract to. *members* is optional and must be a subset of the list returned by [`namelist()`](#zipfile.ZipFile.namelist "zipfile.ZipFile.namelist"). *pwd* is the password used for encrypted files. Warning Never extract archives from untrusted sources without prior inspection. 
It is possible that files are created outside of *path*, e.g. members that have absolute filenames starting with `"/"` or filenames with two dots `".."`. This module attempts to prevent that. See [`extract()`](#zipfile.ZipFile.extract "zipfile.ZipFile.extract") note. Changed in version 3.6: Calling [`extractall()`](#zipfile.ZipFile.extractall "zipfile.ZipFile.extractall") on a closed ZipFile will raise a [`ValueError`](exceptions#ValueError "ValueError"). Previously, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") was raised. Changed in version 3.6.2: The *path* parameter accepts a [path-like object](../glossary#term-path-like-object). `ZipFile.printdir()` Print a table of contents for the archive to `sys.stdout`. `ZipFile.setpassword(pwd)` Set *pwd* as default password to extract encrypted files. `ZipFile.read(name, pwd=None)` Return the bytes of the file *name* in the archive. *name* is the name of the file in the archive, or a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") object. The archive must be open for read or append. *pwd* is the password used for encrypted files and, if specified, it will override the default password set with [`setpassword()`](#zipfile.ZipFile.setpassword "zipfile.ZipFile.setpassword"). Calling [`read()`](#zipfile.ZipFile.read "zipfile.ZipFile.read") on a ZipFile that uses a compression method other than [`ZIP_STORED`](#zipfile.ZIP_STORED "zipfile.ZIP_STORED"), [`ZIP_DEFLATED`](#zipfile.ZIP_DEFLATED "zipfile.ZIP_DEFLATED"), [`ZIP_BZIP2`](#zipfile.ZIP_BZIP2 "zipfile.ZIP_BZIP2") or [`ZIP_LZMA`](#zipfile.ZIP_LZMA "zipfile.ZIP_LZMA") will raise a [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError"). An error will also be raised if the corresponding compression module is not available. Changed in version 3.6: Calling [`read()`](#zipfile.ZipFile.read "zipfile.ZipFile.read") on a closed ZipFile will raise a [`ValueError`](exceptions#ValueError "ValueError"). Previously, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") was raised. `ZipFile.testzip()` Read all the files in the archive and check their CRC’s and file headers. Return the name of the first bad file, or else return `None`. Changed in version 3.6: Calling [`testzip()`](#zipfile.ZipFile.testzip "zipfile.ZipFile.testzip") on a closed ZipFile will raise a [`ValueError`](exceptions#ValueError "ValueError"). Previously, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") was raised. `ZipFile.write(filename, arcname=None, compress_type=None, compresslevel=None)` Write the file named *filename* to the archive, giving it the archive name *arcname* (by default, this will be the same as *filename*, but without a drive letter and with leading path separators removed). If given, *compress\_type* overrides the value given for the *compression* parameter to the constructor for the new entry. Similarly, *compresslevel* will override the constructor if given. The archive must be open with mode `'w'`, `'x'` or `'a'`. Note Archive names should be relative to the archive root, that is, they should not start with a path separator. Note If `arcname` (or `filename`, if `arcname` is not given) contains a null byte, the name of the file in the archive will be truncated at the null byte. Note A leading slash in the filename may lead to the archive being impossible to open in some zip programs on Windows systems. 
Changed in version 3.6: Calling [`write()`](#zipfile.ZipFile.write "zipfile.ZipFile.write") on a ZipFile created with mode `'r'` or a closed ZipFile will raise a [`ValueError`](exceptions#ValueError "ValueError"). Previously, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") was raised. `ZipFile.writestr(zinfo_or_arcname, data, compress_type=None, compresslevel=None)` Write a file into the archive. The contents are *data*, which may be either a [`str`](stdtypes#str "str") or a [`bytes`](stdtypes#bytes "bytes") instance; if it is a [`str`](stdtypes#str "str"), it is encoded as UTF-8 first. *zinfo\_or\_arcname* is either the file name it will be given in the archive, or a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") instance. If it’s an instance, at least the filename, date, and time must be given. If it’s a name, the date and time are set to the current date and time. The archive must be opened with mode `'w'`, `'x'` or `'a'`. If given, *compress\_type* overrides the value given for the *compression* parameter to the constructor for the new entry, or in the *zinfo\_or\_arcname* (if that is a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") instance). Similarly, *compresslevel* will override the constructor if given. Note When passing a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") instance as the *zinfo\_or\_arcname* parameter, the compression method used will be that specified in the *compress\_type* member of the given [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") instance. By default, the [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") constructor sets this member to [`ZIP_STORED`](#zipfile.ZIP_STORED "zipfile.ZIP_STORED"). Changed in version 3.2: The *compress\_type* argument. Changed in version 3.6: Calling [`writestr()`](#zipfile.ZipFile.writestr "zipfile.ZipFile.writestr") on a ZipFile created with mode `'r'` or a closed ZipFile will raise a [`ValueError`](exceptions#ValueError "ValueError"). Previously, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") was raised. The following data attributes are also available: `ZipFile.filename` Name of the ZIP file. `ZipFile.debug` The level of debug output to use. This may be set from `0` (the default, no output) to `3` (the most output). Debugging information is written to `sys.stdout`. `ZipFile.comment` The comment associated with the ZIP file as a [`bytes`](stdtypes#bytes "bytes") object. If assigning a comment to a [`ZipFile`](#zipfile.ZipFile "zipfile.ZipFile") instance created with mode `'w'`, `'x'` or `'a'`, it should be no longer than 65535 bytes. Comments longer than this will be truncated. Path Objects ------------ `class zipfile.Path(root, at='')` Construct a Path object from a `root` zipfile (which may be a [`ZipFile`](#zipfile.ZipFile "zipfile.ZipFile") instance or `file` suitable for passing to the [`ZipFile`](#zipfile.ZipFile "zipfile.ZipFile") constructor). `at` specifies the location of this Path within the zipfile, e.g. ‘dir/file.txt’, ‘dir/’, or ‘’. Defaults to the empty string, indicating the root. Path objects expose the following features of [`pathlib.Path`](pathlib#pathlib.Path "pathlib.Path") objects: Path objects are traversable using the `/` operator. `Path.name` The final path component. `Path.open(mode='r', *, pwd, **)` Invoke [`ZipFile.open()`](#zipfile.ZipFile.open "zipfile.ZipFile.open") on the current path. Allows opening for read or write, text or binary through supported modes: ‘r’, ‘w’, ‘rb’, ‘wb’. 
Positional and keyword arguments are passed through to [`io.TextIOWrapper`](io#io.TextIOWrapper "io.TextIOWrapper") when opened as text and ignored otherwise. `pwd` is the `pwd` parameter to [`ZipFile.open()`](#zipfile.ZipFile.open "zipfile.ZipFile.open"). Changed in version 3.9: Added support for text and binary modes for open. Default mode is now text. `Path.iterdir()` Enumerate the children of the current directory. `Path.is_dir()` Return `True` if the current context references a directory. `Path.is_file()` Return `True` if the current context references a file. `Path.exists()` Return `True` if the current context references a file or directory in the zip file. `Path.read_text(*, **)` Read the current file as unicode text. Positional and keyword arguments are passed through to [`io.TextIOWrapper`](io#io.TextIOWrapper "io.TextIOWrapper") (except `buffer`, which is implied by the context). `Path.read_bytes()` Read the current file as bytes. PyZipFile Objects ----------------- The [`PyZipFile`](#zipfile.PyZipFile "zipfile.PyZipFile") constructor takes the same parameters as the [`ZipFile`](#zipfile.ZipFile "zipfile.ZipFile") constructor, and one additional parameter, *optimize*. `class zipfile.PyZipFile(file, mode='r', compression=ZIP_STORED, allowZip64=True, optimize=-1)` New in version 3.2: The *optimize* parameter. Changed in version 3.4: ZIP64 extensions are enabled by default. Instances have one method in addition to those of [`ZipFile`](#zipfile.ZipFile "zipfile.ZipFile") objects: `writepy(pathname, basename='', filterfunc=None)` Search for files `*.py` and add the corresponding file to the archive. If the *optimize* parameter to [`PyZipFile`](#zipfile.PyZipFile "zipfile.PyZipFile") was not given or `-1`, the corresponding file is a `*.pyc` file, compiling if necessary. If the *optimize* parameter to [`PyZipFile`](#zipfile.PyZipFile "zipfile.PyZipFile") was `0`, `1` or `2`, only files with that optimization level (see [`compile()`](functions#compile "compile")) are added to the archive, compiling if necessary. If *pathname* is a file, the filename must end with `.py`, and just the (corresponding `*.pyc`) file is added at the top level (no path information). If *pathname* is a file that does not end with `.py`, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") will be raised. If it is a directory, and the directory is not a package directory, then all the files `*.pyc` are added at the top level. If the directory is a package directory, then all `*.pyc` are added under the package name as a file path, and if any subdirectories are package directories, all of these are added recursively in sorted order. *basename* is intended for internal use only. *filterfunc*, if given, must be a function taking a single string argument. It will be passed each path (including each individual full file path) before it is added to the archive. If *filterfunc* returns a false value, the path will not be added, and if it is a directory its contents will be ignored. For example, if our test files are all either in `test` directories or start with the string `test_`, we can use a *filterfunc* to exclude them: ``` >>> zf = PyZipFile('myprog.zip') >>> def notests(s): ... fn = os.path.basename(s) ... 
return (not (fn == 'test' or fn.startswith('test_'))) >>> zf.writepy('myprog', filterfunc=notests) ``` The [`writepy()`](#zipfile.PyZipFile.writepy "zipfile.PyZipFile.writepy") method makes archives with file names like this: ``` string.pyc # Top level name test/__init__.pyc # Package directory test/testall.pyc # Module test.testall test/bogus/__init__.pyc # Subpackage directory test/bogus/myfile.pyc # Submodule test.bogus.myfile ``` New in version 3.4: The *filterfunc* parameter. Changed in version 3.6.2: The *pathname* parameter accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.7: Recursion sorts directory entries. ZipInfo Objects --------------- Instances of the [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") class are returned by the [`getinfo()`](#zipfile.ZipFile.getinfo "zipfile.ZipFile.getinfo") and [`infolist()`](#zipfile.ZipFile.infolist "zipfile.ZipFile.infolist") methods of [`ZipFile`](#zipfile.ZipFile "zipfile.ZipFile") objects. Each object stores information about a single member of the ZIP archive. There is one classmethod to make a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") instance for a filesystem file: `classmethod ZipInfo.from_file(filename, arcname=None, *, strict_timestamps=True)` Construct a [`ZipInfo`](#zipfile.ZipInfo "zipfile.ZipInfo") instance for a file on the filesystem, in preparation for adding it to a zip file. *filename* should be the path to a file or directory on the filesystem. If *arcname* is specified, it is used as the name within the archive. If *arcname* is not specified, the name will be the same as *filename*, but with any drive letter and leading path separators removed. The *strict\_timestamps* argument, when set to `False`, allows to zip files older than 1980-01-01 at the cost of setting the timestamp to 1980-01-01. Similar behavior occurs with files newer than 2107-12-31, the timestamp is also set to the limit. New in version 3.6. Changed in version 3.6.2: The *filename* parameter accepts a [path-like object](../glossary#term-path-like-object). New in version 3.8: The *strict\_timestamps* keyword-only argument Instances have the following methods and attributes: `ZipInfo.is_dir()` Return `True` if this archive member is a directory. This uses the entry’s name: directories should always end with `/`. New in version 3.6. `ZipInfo.filename` Name of the file in the archive. `ZipInfo.date_time` The time and date of the last modification to the archive member. This is a tuple of six values: | Index | Value | | --- | --- | | `0` | Year (>= 1980) | | `1` | Month (one-based) | | `2` | Day of month (one-based) | | `3` | Hours (zero-based) | | `4` | Minutes (zero-based) | | `5` | Seconds (zero-based) | Note The ZIP file format does not support timestamps before 1980. `ZipInfo.compress_type` Type of compression for the archive member. `ZipInfo.comment` Comment for the individual archive member as a [`bytes`](stdtypes#bytes "bytes") object. `ZipInfo.extra` Expansion field data. The [PKZIP Application Note](https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT) contains some comments on the internal structure of the data contained in this [`bytes`](stdtypes#bytes "bytes") object. `ZipInfo.create_system` System which created ZIP archive. `ZipInfo.create_version` PKZIP version which created ZIP archive. `ZipInfo.extract_version` PKZIP version needed to extract archive. `ZipInfo.reserved` Must be zero. `ZipInfo.flag_bits` ZIP flag bits. `ZipInfo.volume` Volume number of file header. 
`ZipInfo.internal_attr` Internal attributes. `ZipInfo.external_attr` External file attributes. `ZipInfo.header_offset` Byte offset to the file header. `ZipInfo.CRC` CRC-32 of the uncompressed file. `ZipInfo.compress_size` Size of the compressed data. `ZipInfo.file_size` Size of the uncompressed file. Command-Line Interface ---------------------- The [`zipfile`](#module-zipfile "zipfile: Read and write ZIP-format archive files.") module provides a simple command-line interface to interact with ZIP archives. If you want to create a new ZIP archive, specify its name after the [`-c`](#cmdoption-zipfile-c) option and then list the filename(s) that should be included: ``` $ python -m zipfile -c monty.zip spam.txt eggs.txt ``` Passing a directory is also acceptable: ``` $ python -m zipfile -c monty.zip life-of-brian_1979/ ``` If you want to extract a ZIP archive into the specified directory, use the [`-e`](#cmdoption-zipfile-e) option: ``` $ python -m zipfile -e monty.zip target-dir/ ``` For a list of the files in a ZIP archive, use the [`-l`](#cmdoption-zipfile-l) option: ``` $ python -m zipfile -l monty.zip ``` ### Command-line options `-l <zipfile>` `--list <zipfile>` List files in a zipfile. `-c <zipfile> <source1> ... <sourceN>` `--create <zipfile> <source1> ... <sourceN>` Create zipfile from source files. `-e <zipfile> <output_dir>` `--extract <zipfile> <output_dir>` Extract zipfile into target directory. `-t <zipfile>` `--test <zipfile>` Test whether the zipfile is valid or not. Decompression pitfalls ---------------------- Extraction with the zipfile module can fail due to the pitfalls listed below. ### From the file itself Decompression may fail because of an incorrect password, a failed CRC check, an invalid ZIP format, or an unsupported compression or decryption method. ### File system limitations Exceeding the limits of the target file system can cause extraction to fail: for example, limits on the characters allowed in directory entries, the length of a file name or pathname, the size of a single file, or the total number of files. ### Resource limitations Insufficient memory or disk space can cause extraction to fail. For example, decompression bombs (aka [ZIP bombs](https://en.wikipedia.org/wiki/Zip_bomb)) can exhaust disk space when processed with the zipfile library. ### Interruption Interrupting extraction, for example by pressing Control-C or killing the decompression process, may leave the archive only partially extracted. ### Default behaviors of extraction Not knowing the default extraction behaviors can lead to unexpected results; for example, extracting the same archive twice overwrites existing files without asking.
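Since extracting the same archive twice overwrites files without asking, a cautious caller can check for clashes before extracting. A minimal sketch, with hypothetical archive and directory names:

```
import os
import zipfile

# Refuse to extract members that would overwrite existing files.
with zipfile.ZipFile("backup.zip") as zf:
    clashes = [name for name in zf.namelist()
               if os.path.exists(os.path.join("target-dir", name))]
    if clashes:
        raise SystemExit(f"refusing to overwrite: {clashes}")
    zf.extractall("target-dir")
```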
python random — Generate pseudo-random numbers random — Generate pseudo-random numbers ======================================= **Source code:** [Lib/random.py](https://github.com/python/cpython/tree/3.9/Lib/random.py) This module implements pseudo-random number generators for various distributions. For integers, there is uniform selection from a range. For sequences, there is uniform selection of a random element, a function to generate a random permutation of a list in-place, and a function for random sampling without replacement. On the real line, there are functions to compute uniform, normal (Gaussian), lognormal, negative exponential, gamma, and beta distributions. For generating distributions of angles, the von Mises distribution is available. Almost all module functions depend on the basic function [`random()`](#random.random "random.random"), which generates a random float uniformly in the semi-open range [0.0, 1.0). Python uses the Mersenne Twister as the core generator. It produces 53-bit precision floats and has a period of 2\*\*19937-1. The underlying implementation in C is both fast and threadsafe. The Mersenne Twister is one of the most extensively tested random number generators in existence. However, being completely deterministic, it is not suitable for all purposes, and is completely unsuitable for cryptographic purposes. The functions supplied by this module are actually bound methods of a hidden instance of the [`random.Random`](#random.Random "random.Random") class. You can instantiate your own instances of [`Random`](#random.Random "random.Random") to get generators that don’t share state. Class [`Random`](#random.Random "random.Random") can also be subclassed if you want to use a different basic generator of your own devising: in that case, override the `random()`, `seed()`, `getstate()`, and `setstate()` methods. Optionally, a new generator can supply a `getrandbits()` method — this allows [`randrange()`](#random.randrange "random.randrange") to produce selections over an arbitrarily large range. The [`random`](#module-random "random: Generate pseudo-random numbers with various common distributions.") module also provides the [`SystemRandom`](#random.SystemRandom "random.SystemRandom") class which uses the system function [`os.urandom()`](os#os.urandom "os.urandom") to generate random numbers from sources provided by the operating system. Warning The pseudo-random generators of this module should not be used for security purposes. For security or cryptographic uses, see the [`secrets`](secrets#module-secrets "secrets: Generate secure random numbers for managing secrets.") module. See also M. Matsumoto and T. Nishimura, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator”, ACM Transactions on Modeling and Computer Simulation Vol. 8, No. 1, January pp.3–30 1998. [Complementary-Multiply-with-Carry recipe](https://code.activestate.com/recipes/576707/) for a compatible alternative random number generator with a long period and comparatively simple update operations. Bookkeeping functions --------------------- `random.seed(a=None, version=2)` Initialize the random number generator. If *a* is omitted or `None`, the current system time is used. If randomness sources are provided by the operating system, they are used instead of the system time (see the [`os.urandom()`](os#os.urandom "os.urandom") function for details on availability). If *a* is an int, it is used directly. 
With version 2 (the default), a [`str`](stdtypes#str "str"), [`bytes`](stdtypes#bytes "bytes"), or [`bytearray`](stdtypes#bytearray "bytearray") object gets converted to an [`int`](functions#int "int") and all of its bits are used. With version 1 (provided for reproducing random sequences from older versions of Python), the algorithm for [`str`](stdtypes#str "str") and [`bytes`](stdtypes#bytes "bytes") generates a narrower range of seeds. Changed in version 3.2: Moved to the version 2 scheme which uses all of the bits in a string seed. Deprecated since version 3.9: In the future, the *seed* must be one of the following types: *NoneType*, [`int`](functions#int "int"), [`float`](functions#float "float"), [`str`](stdtypes#str "str"), [`bytes`](stdtypes#bytes "bytes"), or [`bytearray`](stdtypes#bytearray "bytearray"). `random.getstate()` Return an object capturing the current internal state of the generator. This object can be passed to [`setstate()`](#random.setstate "random.setstate") to restore the state. `random.setstate(state)` *state* should have been obtained from a previous call to [`getstate()`](#random.getstate "random.getstate"), and [`setstate()`](#random.setstate "random.setstate") restores the internal state of the generator to what it was at the time [`getstate()`](#random.getstate "random.getstate") was called. Functions for bytes ------------------- `random.randbytes(n)` Generate *n* random bytes. This method should not be used for generating security tokens. Use [`secrets.token_bytes()`](secrets#secrets.token_bytes "secrets.token_bytes") instead. New in version 3.9. Functions for integers ---------------------- `random.randrange(stop)` `random.randrange(start, stop[, step])` Return a randomly selected element from `range(start, stop, step)`. This is equivalent to `choice(range(start, stop, step))`, but doesn’t actually build a range object. The positional argument pattern matches that of [`range()`](stdtypes#range "range"). Keyword arguments should not be used because the function may use them in unexpected ways. Changed in version 3.2: [`randrange()`](#random.randrange "random.randrange") is more sophisticated about producing equally distributed values. Formerly it used a style like `int(random()*n)` which could produce slightly uneven distributions. `random.randint(a, b)` Return a random integer *N* such that `a <= N <= b`. Alias for `randrange(a, b+1)`. `random.getrandbits(k)` Returns a non-negative Python integer with *k* random bits. This method is supplied with the Mersenne Twister generator and some other generators may also provide it as an optional part of the API. When available, [`getrandbits()`](#random.getrandbits "random.getrandbits") enables [`randrange()`](#random.randrange "random.randrange") to handle arbitrarily large ranges. Changed in version 3.9: This method now accepts zero for *k*. Functions for sequences ----------------------- `random.choice(seq)` Return a random element from the non-empty sequence *seq*. If *seq* is empty, raises [`IndexError`](exceptions#IndexError "IndexError"). `random.choices(population, weights=None, *, cum_weights=None, k=1)` Return a *k* sized list of elements chosen from the *population* with replacement. If the *population* is empty, raises [`IndexError`](exceptions#IndexError "IndexError"). If a *weights* sequence is specified, selections are made according to the relative weights. 
Alternatively, if a *cum\_weights* sequence is given, the selections are made according to the cumulative weights (perhaps computed using [`itertools.accumulate()`](itertools#itertools.accumulate "itertools.accumulate")). For example, the relative weights `[10, 5, 30, 5]` are equivalent to the cumulative weights `[10, 15, 45, 50]`. Internally, the relative weights are converted to cumulative weights before making selections, so supplying the cumulative weights saves work. If neither *weights* nor *cum\_weights* are specified, selections are made with equal probability. If a weights sequence is supplied, it must be the same length as the *population* sequence. It is a [`TypeError`](exceptions#TypeError "TypeError") to specify both *weights* and *cum\_weights*. The *weights* or *cum\_weights* can use any numeric type that interoperates with the [`float`](functions#float "float") values returned by [`random()`](#module-random "random: Generate pseudo-random numbers with various common distributions.") (that includes integers, floats, and fractions but excludes decimals). Behavior is undefined if any weight is negative. A [`ValueError`](exceptions#ValueError "ValueError") is raised if all weights are zero. For a given seed, the [`choices()`](#random.choices "random.choices") function with equal weighting typically produces a different sequence than repeated calls to [`choice()`](#random.choice "random.choice"). The algorithm used by [`choices()`](#random.choices "random.choices") uses floating point arithmetic for internal consistency and speed. The algorithm used by [`choice()`](#random.choice "random.choice") defaults to integer arithmetic with repeated selections to avoid small biases from round-off error. New in version 3.6. Changed in version 3.9: Raises a [`ValueError`](exceptions#ValueError "ValueError") if all weights are zero. `random.shuffle(x[, random])` Shuffle the sequence *x* in place. The optional argument *random* is a 0-argument function returning a random float in [0.0, 1.0); by default, this is the function [`random()`](#random.random "random.random"). To shuffle an immutable sequence and return a new shuffled list, use `sample(x, k=len(x))` instead. Note that even for small `len(x)`, the total number of permutations of *x* can quickly grow larger than the period of most random number generators. This implies that most permutations of a long sequence can never be generated. For example, a sequence of length 2080 is the largest that can fit within the period of the Mersenne Twister random number generator. Deprecated since version 3.9, will be removed in version 3.11: The optional parameter *random*. `random.sample(population, k, *, counts=None)` Return a *k* length list of unique elements chosen from the population sequence or set. Used for random sampling without replacement. Returns a new list containing elements from the population while leaving the original population unchanged. The resulting list is in selection order so that all sub-slices will also be valid random samples. This allows raffle winners (the sample) to be partitioned into grand prize and second place winners (the subslices). Members of the population need not be [hashable](../glossary#term-hashable) or unique. If the population contains repeats, then each occurrence is a possible selection in the sample. Repeated elements can be specified one at a time or with the optional keyword-only *counts* parameter. 
For example, `sample(['red', 'blue'], counts=[4, 2], k=5)` is equivalent to `sample(['red', 'red', 'red', 'red', 'blue', 'blue'], k=5)`. To choose a sample from a range of integers, use a [`range()`](stdtypes#range "range") object as an argument. This is especially fast and space-efficient for sampling from a large population: `sample(range(10000000), k=60)`. If the sample size is larger than the population size, a [`ValueError`](exceptions#ValueError "ValueError") is raised. Changed in version 3.9: Added the *counts* parameter. Deprecated since version 3.9: In the future, the *population* must be a sequence. Instances of [`set`](stdtypes#set "set") will no longer be supported. The set must first be converted to a [`list`](stdtypes#list "list") or [`tuple`](stdtypes#tuple "tuple"), preferably in a deterministic order so that the sample is reproducible. Real-valued distributions ------------------------- The following functions generate specific real-valued distributions. Function parameters are named after the corresponding variables in the distribution’s equation, as used in common mathematical practice; most of these equations can be found in any statistics text. `random.random()` Return the next random floating point number in the range [0.0, 1.0). `random.uniform(a, b)` Return a random floating point number *N* such that `a <= N <= b` for `a <= b` and `b <= N <= a` for `b < a`. The end-point value `b` may or may not be included in the range depending on floating-point rounding in the equation `a + (b-a) * random()`. `random.triangular(low, high, mode)` Return a random floating point number *N* such that `low <= N <= high` and with the specified *mode* between those bounds. The *low* and *high* bounds default to zero and one. The *mode* argument defaults to the midpoint between the bounds, giving a symmetric distribution. `random.betavariate(alpha, beta)` Beta distribution. Conditions on the parameters are `alpha > 0` and `beta > 0`. Returned values range between 0 and 1. `random.expovariate(lambd)` Exponential distribution. *lambd* is 1.0 divided by the desired mean. It should be nonzero. (The parameter would be called “lambda”, but that is a reserved word in Python.) Returned values range from 0 to positive infinity if *lambd* is positive, and from negative infinity to 0 if *lambd* is negative. `random.gammavariate(alpha, beta)` Gamma distribution. (*Not* the gamma function!) Conditions on the parameters are `alpha > 0` and `beta > 0`. The probability distribution function is: ``` pdf(x) = x ** (alpha - 1) * math.exp(-x / beta) / (math.gamma(alpha) * beta ** alpha) ``` `random.gauss(mu, sigma)` Gaussian distribution. *mu* is the mean, and *sigma* is the standard deviation. This is slightly faster than the [`normalvariate()`](#random.normalvariate "random.normalvariate") function defined below. Multithreading note: When two threads call this function simultaneously, it is possible that they will receive the same return value. This can be avoided in three ways. 1) Have each thread use a different instance of the random number generator. 2) Put locks around all calls. 3) Use the slower, but thread-safe [`normalvariate()`](#random.normalvariate "random.normalvariate") function instead. `random.lognormvariate(mu, sigma)` Log normal distribution. If you take the natural logarithm of this distribution, you’ll get a normal distribution with mean *mu* and standard deviation *sigma*. *mu* can have any value, and *sigma* must be greater than zero. 
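As an aside on the multithreading note under `gauss()` above, here is a minimal sketch of the first suggested remedy, giving each thread its own generator instance (the worker function and the seed values are illustrative, not part of the module):

```
import threading
from random import Random

def worker(seed, out, index):
    # A per-thread Random instance: concurrent gauss() calls cannot
    # interfere with each other's hidden internal state.
    rng = Random(seed)
    out[index] = [rng.gauss(0.0, 1.0) for _ in range(3)]

results = [None] * 2
threads = [threading.Thread(target=worker, args=(seed, results, i))
           for i, seed in enumerate((1, 2))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # reproducible, since each thread seeded its own generator
```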
`random.normalvariate(mu, sigma)` Normal distribution. *mu* is the mean, and *sigma* is the standard deviation. `random.vonmisesvariate(mu, kappa)` *mu* is the mean angle, expressed in radians between 0 and 2\**pi*, and *kappa* is the concentration parameter, which must be greater than or equal to zero. If *kappa* is equal to zero, this distribution reduces to a uniform random angle over the range 0 to 2\**pi*. `random.paretovariate(alpha)` Pareto distribution. *alpha* is the shape parameter. `random.weibullvariate(alpha, beta)` Weibull distribution. *alpha* is the scale parameter and *beta* is the shape parameter. Alternative Generator --------------------- `class random.Random([seed])` Class that implements the default pseudo-random number generator used by the [`random`](#module-random "random: Generate pseudo-random numbers with various common distributions.") module. Deprecated since version 3.9: In the future, the *seed* must be one of the following types: `NoneType`, [`int`](functions#int "int"), [`float`](functions#float "float"), [`str`](stdtypes#str "str"), [`bytes`](stdtypes#bytes "bytes"), or [`bytearray`](stdtypes#bytearray "bytearray"). `class random.SystemRandom([seed])` Class that uses the [`os.urandom()`](os#os.urandom "os.urandom") function for generating random numbers from sources provided by the operating system. Not available on all systems. Does not rely on software state, and sequences are not reproducible. Accordingly, the [`seed()`](#random.seed "random.seed") method has no effect and is ignored. The [`getstate()`](#random.getstate "random.getstate") and [`setstate()`](#random.setstate "random.setstate") methods raise [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") if called. Notes on Reproducibility ------------------------ Sometimes it is useful to be able to reproduce the sequences given by a pseudo-random number generator. By re-using a seed value, the same sequence should be reproducible from run to run as long as multiple threads are not running. Most of the random module’s algorithms and seeding functions are subject to change across Python versions, but two aspects are guaranteed not to change: * If a new seeding method is added, then a backward compatible seeder will be offered. * The generator’s `random()` method will continue to produce the same sequence when the compatible seeder is given the same seed. Examples -------- Basic examples: ``` >>> random() # Random float: 0.0 <= x < 1.0 0.37444887175646646 >>> uniform(2.5, 10.0) # Random float: 2.5 <= x <= 10.0 3.1800146073117523 >>> expovariate(1 / 5) # Interval between arrivals averaging 5 seconds 5.148957571865031 >>> randrange(10) # Integer from 0 to 9 inclusive 7 >>> randrange(0, 101, 2) # Even integer from 0 to 100 inclusive 26 >>> choice(['win', 'lose', 'draw']) # Single random element from a sequence 'draw' >>> deck = 'ace two three four'.split() >>> shuffle(deck) # Shuffle a list >>> deck ['four', 'two', 'ace', 'three'] >>> sample([10, 20, 30, 40, 50], k=4) # Four samples without replacement [40, 10, 50, 30] ``` Simulations: ``` >>> # Six roulette wheel spins (weighted sampling with replacement) >>> choices(['red', 'black', 'green'], [18, 18, 2], k=6) ['red', 'green', 'black', 'black', 'red', 'black'] >>> # Deal 20 cards without replacement from a deck >>> # of 52 playing cards, and determine the proportion of cards >>> # with a ten-value: ten, jack, queen, or king. 
>>> dealt = sample(['tens', 'low cards'], counts=[16, 36], k=20) >>> dealt.count('tens') / 20 0.15 >>> # Estimate the probability of getting 5 or more heads from 7 spins >>> # of a biased coin that settles on heads 60% of the time. >>> def trial(): ... return choices('HT', cum_weights=(0.60, 1.00), k=7).count('H') >= 5 ... >>> sum(trial() for i in range(10_000)) / 10_000 0.4169 >>> # Probability of the median of 5 samples being in middle two quartiles >>> def trial(): ... return 2_500 <= sorted(choices(range(10_000), k=5))[2] < 7_500 ... >>> sum(trial() for i in range(10_000)) / 10_000 0.7958 ``` Example of [statistical bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) using resampling with replacement to estimate a confidence interval for the mean of a sample: ``` # http://statistics.about.com/od/Applications/a/Example-Of-Bootstrapping.htm from statistics import fmean as mean from random import choices data = [41, 50, 29, 37, 81, 30, 73, 63, 20, 35, 68, 22, 60, 31, 95] means = sorted(mean(choices(data, k=len(data))) for i in range(100)) print(f'The sample mean of {mean(data):.1f} has a 90% confidence ' f'interval from {means[5]:.1f} to {means[94]:.1f}') ``` Example of a [resampling permutation test](https://en.wikipedia.org/wiki/Resampling_(statistics)#Permutation_tests) to determine the statistical significance or [p-value](https://en.wikipedia.org/wiki/P-value) of an observed difference between the effects of a drug versus a placebo: ``` # Example from "Statistics is Easy" by Dennis Shasha and Manda Wilson from statistics import fmean as mean from random import shuffle drug = [54, 73, 53, 70, 73, 68, 52, 65, 65] placebo = [54, 51, 58, 44, 55, 52, 42, 47, 58, 46] observed_diff = mean(drug) - mean(placebo) n = 10_000 count = 0 combined = drug + placebo for i in range(n): shuffle(combined) new_diff = mean(combined[:len(drug)]) - mean(combined[len(drug):]) count += (new_diff >= observed_diff) print(f'{n} label reshufflings produced only {count} instances with a difference') print(f'at least as extreme as the observed difference of {observed_diff:.1f}.') print(f'The one-sided p-value of {count / n:.4f} leads us to reject the null') print(f'hypothesis that there is no difference between the drug and the placebo.') ``` Simulation of arrival times and service deliveries for a multiserver queue: ``` from heapq import heapify, heapreplace from random import expovariate, gauss from statistics import mean, median, stdev average_arrival_interval = 5.6 average_service_time = 15.0 stdev_service_time = 3.5 num_servers = 3 waits = [] arrival_time = 0.0 servers = [0.0] * num_servers # time when each server becomes available heapify(servers) for i in range(1_000_000): arrival_time += expovariate(1.0 / average_arrival_interval) next_server_available = servers[0] wait = max(0.0, next_server_available - arrival_time) waits.append(wait) service_duration = max(0.0, gauss(average_service_time, stdev_service_time)) service_completed = arrival_time + wait + service_duration heapreplace(servers, service_completed) print(f'Mean wait: {mean(waits):.1f}. Stdev wait: {stdev(waits):.1f}.') print(f'Median wait: {median(waits):.1f}. Max wait: {max(waits):.1f}.') ``` See also [Statistics for Hackers](https://www.youtube.com/watch?v=Iq9DzN6mvYA) a video tutorial by [Jake Vanderplas](https://us.pycon.org/2016/speaker/profile/295/) on statistical analysis using just a few fundamental concepts including simulation, sampling, shuffling, and cross-validation. 
[Economics Simulation](http://nbviewer.jupyter.org/url/norvig.com/ipython/Economics.ipynb) a simulation of a marketplace by [Peter Norvig](http://norvig.com/bio.html) that shows effective use of many of the tools and distributions provided by this module (gauss, uniform, sample, betavariate, choice, triangular, and randrange). [A Concrete Introduction to Probability (using Python)](http://nbviewer.jupyter.org/url/norvig.com/ipython/Probability.ipynb) a tutorial by [Peter Norvig](http://norvig.com/bio.html) covering the basics of probability theory, how to write simulations, and how to perform data analysis using Python. Recipes ------- The default [`random()`](#random.random "random.random") returns multiples of 2⁻⁵³ in the range *0.0 ≤ x < 1.0*. All such numbers are evenly spaced and are exactly representable as Python floats. However, many other representable floats in that interval are not possible selections. For example, `0.05954861408025609` isn’t an integer multiple of 2⁻⁵³. The following recipe takes a different approach. All floats in the interval are possible selections. The mantissa comes from a uniform distribution of integers in the range *2⁵² ≤ mantissa < 2⁵³*. The exponent comes from a geometric distribution where exponents smaller than *-53* occur half as often as the next larger exponent. ``` from random import Random from math import ldexp class FullRandom(Random): def random(self): mantissa = 0x10_0000_0000_0000 | self.getrandbits(52) exponent = -53 x = 0 while not x: x = self.getrandbits(32) exponent += x.bit_length() - 32 return ldexp(mantissa, exponent) ``` All [real valued distributions](#real-valued-distributions) in the class will use the new method: ``` >>> fr = FullRandom() >>> fr.random() 0.05954861408025609 >>> fr.expovariate(0.25) 8.87925541791544 ``` The recipe is conceptually equivalent to an algorithm that chooses from all the multiples of 2⁻¹⁰⁷⁴ in the range *0.0 ≤ x < 1.0*. All such numbers are evenly spaced, but most have to be rounded down to the nearest representable Python float. (The value 2⁻¹⁰⁷⁴ is the smallest positive unnormalized float and is equal to `math.ulp(0.0)`.) See also [Generating Pseudo-random Floating-Point Values](https://allendowney.com/research/rand/downey07randfloat.pdf) a paper by Allen B. Downey describing ways to generate more fine-grained floats than normally generated by [`random()`](#random.random "random.random").
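As a quick sanity check of the two numeric claims made in the Recipes section, the following sketch confirms that `random()` returns a multiple of 2⁻⁵³ and that `math.ulp(0.0)` equals 2⁻¹⁰⁷⁴:

```
import math
import random

# Every value from random() is k * 2**-53 for some integer k,
# so scaling by 2**53 must give an exact integer.
x = random.random()
assert (x * 2**53).is_integer()

# The smallest positive (subnormal) float is 2**-1074 == math.ulp(0.0).
assert math.ulp(0.0) == 2**-1074
```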
unittest — Unit testing framework ================================= **Source code:** [Lib/unittest/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/unittest/__init__.py) (If you are already familiar with the basic concepts of testing, you might want to skip to [the list of assert methods](#assert-methods).) The [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") unit testing framework was originally inspired by JUnit and has a similar flavor to major unit testing frameworks in other languages. It supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework. To achieve this, [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") supports some important concepts in an object-oriented way: test fixture A *test fixture* represents the preparation needed to perform one or more tests, and any associated cleanup actions. This may involve, for example, creating temporary or proxy databases, directories, or starting a server process. test case A *test case* is the individual unit of testing. It checks for a specific response to a particular set of inputs. [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") provides a base class, [`TestCase`](#unittest.TestCase "unittest.TestCase"), which may be used to create new test cases. test suite A *test suite* is a collection of test cases, test suites, or both. It is used to aggregate tests that should be executed together. test runner A *test runner* is a component which orchestrates the execution of tests and provides the outcome to the user. The runner may use a graphical interface, a textual interface, or return a special value to indicate the results of executing the tests. See also `Module` [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.") Another test-support module with a very different flavor. [Simple Smalltalk Testing: With Patterns](https://web.archive.org/web/20150315073817/http://www.xprogramming.com/testfram.htm) Kent Beck’s original paper on testing frameworks using the pattern shared by [`unittest`](#module-unittest "unittest: Unit testing framework for Python."). [pytest](https://docs.pytest.org/) Third-party unittest framework with a lighter-weight syntax for writing tests. For example, `assert func(10) == 42`. [The Python Testing Tools Taxonomy](https://wiki.python.org/moin/PythonTestingToolsTaxonomy) An extensive list of Python testing tools including functional testing frameworks and mock object libraries. [Testing in Python Mailing List](http://lists.idyll.org/listinfo/testing-in-python) A special-interest-group for discussion of testing, and testing tools, in Python. The script `Tools/unittestgui/unittestgui.py` in the Python source distribution is a GUI tool for test discovery and execution. This is intended largely for ease of use for those new to unit testing. For production environments it is recommended that tests be driven by a continuous integration system such as [Buildbot](https://buildbot.net/), [Jenkins](https://jenkins.io/), [Travis-CI](https://travis-ci.com), or [AppVeyor](https://www.appveyor.com/). Basic example ------------- The [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") module provides a rich set of tools for constructing and running tests. 
This section demonstrates that a small subset of the tools suffice to meet the needs of most users. Here is a short script to test three string methods: ``` import unittest class TestStringMethods(unittest.TestCase): def test_upper(self): self.assertEqual('foo'.upper(), 'FOO') def test_isupper(self): self.assertTrue('FOO'.isupper()) self.assertFalse('Foo'.isupper()) def test_split(self): s = 'hello world' self.assertEqual(s.split(), ['hello', 'world']) # check that s.split fails when the separator is not a string with self.assertRaises(TypeError): s.split(2) if __name__ == '__main__': unittest.main() ``` A testcase is created by subclassing [`unittest.TestCase`](#unittest.TestCase "unittest.TestCase"). The three individual tests are defined with methods whose names start with the letters `test`. This naming convention informs the test runner about which methods represent tests. The crux of each test is a call to [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual") to check for an expected result; [`assertTrue()`](#unittest.TestCase.assertTrue "unittest.TestCase.assertTrue") or [`assertFalse()`](#unittest.TestCase.assertFalse "unittest.TestCase.assertFalse") to verify a condition; or [`assertRaises()`](#unittest.TestCase.assertRaises "unittest.TestCase.assertRaises") to verify that a specific exception gets raised. These methods are used instead of the [`assert`](../reference/simple_stmts#assert) statement so the test runner can accumulate all test results and produce a report. The [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") and [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown") methods allow you to define instructions that will be executed before and after each test method. They are covered in more detail in the section [Organizing test code](#organizing-tests). The final block shows a simple way to run the tests. [`unittest.main()`](#unittest.main "unittest.main") provides a command-line interface to the test script. When run from the command line, the above script produces an output that looks like this: ``` ... ---------------------------------------------------------------------- Ran 3 tests in 0.000s OK ``` Passing the `-v` option to your test script will instruct [`unittest.main()`](#unittest.main "unittest.main") to enable a higher level of verbosity, and produce the following output: ``` test_isupper (__main__.TestStringMethods) ... ok test_split (__main__.TestStringMethods) ... ok test_upper (__main__.TestStringMethods) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.001s OK ``` The above examples show the most commonly used [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") features which are sufficient to meet many everyday testing needs. The remainder of the documentation explores the full feature set from first principles. Command-Line Interface ---------------------- The unittest module can be used from the command line to run tests from modules, classes or even individual test methods: ``` python -m unittest test_module1 test_module2 python -m unittest test_module.TestClass python -m unittest test_module.TestClass.test_method ``` You can pass in a list with any combination of module names, and fully qualified class or method names. Test modules can be specified by file path as well: ``` python -m unittest tests/test_something.py ``` This allows you to use the shell filename completion to specify the test module. 
The file specified must still be importable as a module. The path is converted to a module name by removing the ‘.py’ and converting path separators into ‘.’. If you want to execute a test file that isn’t importable as a module you should execute the file directly instead. You can run tests with more detail (higher verbosity) by passing in the -v flag: ``` python -m unittest -v test_module ``` When executed without arguments [Test Discovery](#unittest-test-discovery) is started: ``` python -m unittest ``` For a list of all the command-line options: ``` python -m unittest -h ``` Changed in version 3.2: In earlier versions it was only possible to run individual test methods and not modules or classes. ### Command-line options **unittest** supports these command-line options: `-b, --buffer` The standard output and standard error streams are buffered during the test run. Output during a passing test is discarded. Output is echoed normally on test fail or error and is added to the failure messages. `-c, --catch` `Control-C` during the test run waits for the current test to end and then reports all the results so far. A second `Control-C` raises the normal [`KeyboardInterrupt`](exceptions#KeyboardInterrupt "KeyboardInterrupt") exception. See [Signal Handling](#signal-handling) for the functions that provide this functionality. `-f, --failfast` Stop the test run on the first error or failure. `-k` Only run test methods and classes that match the pattern or substring. This option may be used multiple times, in which case all test cases that match any of the given patterns are included. Patterns that contain a wildcard character (`*`) are matched against the test name using [`fnmatch.fnmatchcase()`](fnmatch#fnmatch.fnmatchcase "fnmatch.fnmatchcase"); otherwise simple case-sensitive substring matching is used. Patterns are matched against the fully qualified test method name as imported by the test loader. For example, `-k foo` matches `foo_tests.SomeTest.test_something`, `bar_tests.SomeTest.test_foo`, but not `bar_tests.FooTest.test_something`. `--locals` Show local variables in tracebacks. New in version 3.2: The command-line options `-b`, `-c` and `-f` were added. New in version 3.5: The command-line option `--locals`. New in version 3.7: The command-line option `-k`. The command line can also be used for test discovery, for running all of the tests in a project or just a subset. Test Discovery -------------- New in version 3.2. Unittest supports simple test discovery. In order to be compatible with test discovery, all of the test files must be [modules](../tutorial/modules#tut-modules) or [packages](../tutorial/modules#tut-packages) (including [namespace packages](../glossary#term-namespace-package)) importable from the top-level directory of the project (this means that their filenames must be valid [identifiers](../reference/lexical_analysis#identifiers)). Test discovery is implemented in [`TestLoader.discover()`](#unittest.TestLoader.discover "unittest.TestLoader.discover"), but can also be used from the command line. The basic command-line usage is: ``` cd project_directory python -m unittest discover ``` Note As a shortcut, `python -m unittest` is the equivalent of `python -m unittest discover`. If you want to pass arguments to test discovery the `discover` sub-command must be used explicitly. 
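Since discovery is implemented in `TestLoader.discover()`, it can also be driven from Python code rather than the command line. A minimal sketch (the directory name and pattern here are illustrative):

```
import unittest

# Programmatic equivalent of:
#   python -m unittest discover -s project_directory -p "*_test.py"
loader = unittest.TestLoader()
suite = loader.discover(start_dir='project_directory', pattern='*_test.py')
unittest.TextTestRunner(verbosity=2).run(suite)
```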
The `discover` sub-command has the following options: `-v, --verbose` Verbose output `-s, --start-directory directory` Directory to start discovery (`.` default) `-p, --pattern pattern` Pattern to match test files (`test*.py` default) `-t, --top-level-directory directory` Top level directory of project (defaults to start directory) The [`-s`](#cmdoption-unittest-discover-s), [`-p`](#cmdoption-unittest-discover-p), and [`-t`](#cmdoption-unittest-discover-t) options can be passed in as positional arguments in that order. The following two command lines are equivalent: ``` python -m unittest discover -s project_directory -p "*_test.py" python -m unittest discover project_directory "*_test.py" ``` As well as being a path it is possible to pass a package name, for example `myproject.subpackage.test`, as the start directory. The package name you supply will then be imported and its location on the filesystem will be used as the start directory. Caution Test discovery loads tests by importing them. Once test discovery has found all the test files from the start directory you specify it turns the paths into package names to import. For example `foo/bar/baz.py` will be imported as `foo.bar.baz`. If you have a package installed globally and attempt test discovery on a different copy of the package then the import *could* happen from the wrong place. If this happens test discovery will warn you and exit. If you supply the start directory as a package name rather than a path to a directory then discover assumes that whichever location it imports from is the location you intended, so you will not get the warning. Test modules and packages can customize test loading and discovery through the [load\_tests protocol](#load-tests-protocol). Changed in version 3.4: Test discovery supports [namespace packages](../glossary#term-namespace-package) for the start directory. Note that you need to specify the top level directory too (e.g. `python -m unittest discover -s root/namespace -t root`). Organizing test code -------------------- The basic building blocks of unit testing are *test cases* — single scenarios that must be set up and checked for correctness. In [`unittest`](#module-unittest "unittest: Unit testing framework for Python."), test cases are represented by [`unittest.TestCase`](#unittest.TestCase "unittest.TestCase") instances. To make your own test cases you must write subclasses of [`TestCase`](#unittest.TestCase "unittest.TestCase") or use [`FunctionTestCase`](#unittest.FunctionTestCase "unittest.FunctionTestCase"). The testing code of a [`TestCase`](#unittest.TestCase "unittest.TestCase") instance should be entirely self-contained, such that it can be run either in isolation or in arbitrary combination with any number of other test cases. The simplest [`TestCase`](#unittest.TestCase "unittest.TestCase") subclass will simply implement a test method (i.e. a method whose name starts with `test`) in order to perform specific testing code: ``` import unittest class DefaultWidgetSizeTestCase(unittest.TestCase): def test_default_widget_size(self): widget = Widget('The widget') self.assertEqual(widget.size(), (50, 50)) ``` Note that in order to test something, we use one of the `assert*()` methods provided by the [`TestCase`](#unittest.TestCase "unittest.TestCase") base class. If the test fails, an exception will be raised with an explanatory message, and [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") will identify the test case as a *failure*. 
Any other exceptions will be treated as *errors*. Tests can be numerous, and their set-up can be repetitive. Luckily, we can factor out set-up code by implementing a method called [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp"), which the testing framework will automatically call for every single test we run: ``` import unittest class WidgetTestCase(unittest.TestCase): def setUp(self): self.widget = Widget('The widget') def test_default_widget_size(self): self.assertEqual(self.widget.size(), (50,50), 'incorrect default size') def test_widget_resize(self): self.widget.resize(100,150) self.assertEqual(self.widget.size(), (100,150), 'wrong size after resize') ``` Note The order in which the various tests will be run is determined by sorting the test method names with respect to the built-in ordering for strings. If the [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") method raises an exception while the test is running, the framework will consider the test to have suffered an error, and the test method will not be executed. Similarly, we can provide a [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown") method that tidies up after the test method has been run: ``` import unittest class WidgetTestCase(unittest.TestCase): def setUp(self): self.widget = Widget('The widget') def tearDown(self): self.widget.dispose() ``` If [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") succeeded, [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown") will be run whether the test method succeeded or not. Such a working environment for the testing code is called a *test fixture*. A new TestCase instance is created as a unique test fixture used to execute each individual test method. Thus [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp"), [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown"), and `__init__()` will be called once per test. It is recommended that you use TestCase implementations to group tests together according to the features they test. [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") provides a mechanism for this: the *test suite*, represented by [`unittest`](#module-unittest "unittest: Unit testing framework for Python.")’s [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") class. In most cases, calling [`unittest.main()`](#unittest.main "unittest.main") will do the right thing and collect all the module’s test cases for you and execute them. However, should you want to customize the building of your test suite, you can do it yourself: ``` def suite(): suite = unittest.TestSuite() suite.addTest(WidgetTestCase('test_default_widget_size')) suite.addTest(WidgetTestCase('test_widget_resize')) return suite if __name__ == '__main__': runner = unittest.TextTestRunner() runner.run(suite()) ``` You can place the definitions of test cases and test suites in the same modules as the code they are to test (such as `widget.py`), but there are several advantages to placing the test code in a separate module, such as `test_widget.py`: * The test module can be run standalone from the command line. * The test code can more easily be separated from shipped code. * There is less temptation to change test code to fit the code it tests without a good reason. * Test code should be modified much less frequently than the code it tests. * Tested code can be refactored more easily. 
* Tests for modules written in C must be in separate modules anyway, so why not be consistent? * If the testing strategy changes, there is no need to change the source code. Re-using old test code ---------------------- Some users will find that they have existing test code that they would like to run from [`unittest`](#module-unittest "unittest: Unit testing framework for Python."), without converting every old test function to a [`TestCase`](#unittest.TestCase "unittest.TestCase") subclass. For this reason, [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") provides a [`FunctionTestCase`](#unittest.FunctionTestCase "unittest.FunctionTestCase") class. This subclass of [`TestCase`](#unittest.TestCase "unittest.TestCase") can be used to wrap an existing test function. Set-up and tear-down functions can also be provided. Given the following test function: ``` def testSomething(): something = makeSomething() assert something.name is not None # ... ``` one can create an equivalent test case instance as follows, with optional set-up and tear-down methods: ``` testcase = unittest.FunctionTestCase(testSomething, setUp=makeSomethingDB, tearDown=deleteSomethingDB) ``` Note Even though [`FunctionTestCase`](#unittest.FunctionTestCase "unittest.FunctionTestCase") can be used to quickly convert an existing test base over to a [`unittest`](#module-unittest "unittest: Unit testing framework for Python.")-based system, this approach is not recommended. Taking the time to set up proper [`TestCase`](#unittest.TestCase "unittest.TestCase") subclasses will make future test refactorings infinitely easier. In some cases, the existing tests may have been written using the [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.") module. If so, [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.") provides a `DocTestSuite` class that can automatically build [`unittest.TestSuite`](#unittest.TestSuite "unittest.TestSuite") instances from the existing [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.")-based tests. Skipping tests and expected failures ------------------------------------ New in version 3.1. Unittest supports skipping individual test methods and even whole classes of tests. In addition, it supports marking a test as an “expected failure,” a test that is broken and will fail, but shouldn’t be counted as a failure on a [`TestResult`](#unittest.TestResult "unittest.TestResult"). Skipping a test is simply a matter of using the [`skip()`](#unittest.skip "unittest.skip") [decorator](../glossary#term-decorator) or one of its conditional variants, calling [`TestCase.skipTest()`](#unittest.TestCase.skipTest "unittest.TestCase.skipTest") within a [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") or test method, or raising [`SkipTest`](#unittest.SkipTest "unittest.SkipTest") directly. Basic skipping looks like this: ``` class MyTestCase(unittest.TestCase): @unittest.skip("demonstrating skipping") def test_nothing(self): self.fail("shouldn't happen") @unittest.skipIf(mylib.__version__ < (1, 3), "not supported in this library version") def test_format(self): # Tests that work for only a certain version of the library. 
pass @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows") def test_windows_support(self): # windows specific testing code pass def test_maybe_skipped(self): if not external_resource_available(): self.skipTest("external resource not available") # test code that depends on the external resource pass ``` This is the output of running the example above in verbose mode: ``` test_format (__main__.MyTestCase) ... skipped 'not supported in this library version' test_nothing (__main__.MyTestCase) ... skipped 'demonstrating skipping' test_maybe_skipped (__main__.MyTestCase) ... skipped 'external resource not available' test_windows_support (__main__.MyTestCase) ... skipped 'requires Windows' ---------------------------------------------------------------------- Ran 4 tests in 0.005s OK (skipped=4) ``` Classes can be skipped just like methods: ``` @unittest.skip("showing class skipping") class MySkippedTestCase(unittest.TestCase): def test_not_run(self): pass ``` [`TestCase.setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") can also skip the test. This is useful when a resource that needs to be set up is not available. Expected failures use the [`expectedFailure()`](#unittest.expectedFailure "unittest.expectedFailure") decorator. ``` class ExpectedFailureTestCase(unittest.TestCase): @unittest.expectedFailure def test_fail(self): self.assertEqual(1, 0, "broken") ``` It’s easy to roll your own skipping decorators by making a decorator that calls [`skip()`](#unittest.skip "unittest.skip") on the test when it wants it to be skipped. This decorator skips the test unless the passed object has a certain attribute: ``` def skipUnlessHasattr(obj, attr): if hasattr(obj, attr): return lambda func: func return unittest.skip("{!r} doesn't have {!r}".format(obj, attr)) ``` The following decorators and exception implement test skipping and expected failures: `@unittest.skip(reason)` Unconditionally skip the decorated test. *reason* should describe why the test is being skipped. `@unittest.skipIf(condition, reason)` Skip the decorated test if *condition* is true. `@unittest.skipUnless(condition, reason)` Skip the decorated test unless *condition* is true. `@unittest.expectedFailure` Mark the test as an expected failure or error. If the test fails or errors in the test function itself (rather than in one of the *test fixture* methods) then it will be considered a success. If the test passes, it will be considered a failure. `exception unittest.SkipTest(reason)` This exception is raised to skip a test. Usually you can use [`TestCase.skipTest()`](#unittest.TestCase.skipTest "unittest.TestCase.skipTest") or one of the skipping decorators instead of raising this directly. Skipped tests will not have [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") or [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown") run around them. Skipped classes will not have [`setUpClass()`](#unittest.TestCase.setUpClass "unittest.TestCase.setUpClass") or [`tearDownClass()`](#unittest.TestCase.tearDownClass "unittest.TestCase.tearDownClass") run. Skipped modules will not have `setUpModule()` or `tearDownModule()` run. Distinguishing test iterations using subtests --------------------------------------------- New in version 3.4. When there are very small differences among your tests, for instance some parameters, unittest allows you to distinguish them inside the body of a test method using the [`subTest()`](#unittest.TestCase.subTest "unittest.TestCase.subTest") context manager. 
For example, the following test: ``` class NumbersTest(unittest.TestCase): def test_even(self): """ Test that numbers between 0 and 5 are all even. """ for i in range(0, 6): with self.subTest(i=i): self.assertEqual(i % 2, 0) ``` will produce the following output: ``` ====================================================================== FAIL: test_even (__main__.NumbersTest) (i=1) ---------------------------------------------------------------------- Traceback (most recent call last): File "subtests.py", line 32, in test_even self.assertEqual(i % 2, 0) AssertionError: 1 != 0 ====================================================================== FAIL: test_even (__main__.NumbersTest) (i=3) ---------------------------------------------------------------------- Traceback (most recent call last): File "subtests.py", line 32, in test_even self.assertEqual(i % 2, 0) AssertionError: 1 != 0 ====================================================================== FAIL: test_even (__main__.NumbersTest) (i=5) ---------------------------------------------------------------------- Traceback (most recent call last): File "subtests.py", line 32, in test_even self.assertEqual(i % 2, 0) AssertionError: 1 != 0 ``` Without using a subtest, execution would stop after the first failure, and the error would be less easy to diagnose because the value of `i` wouldn’t be displayed: ``` ====================================================================== FAIL: test_even (__main__.NumbersTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "subtests.py", line 32, in test_even self.assertEqual(i % 2, 0) AssertionError: 1 != 0 ``` Classes and functions --------------------- This section describes in depth the API of [`unittest`](#module-unittest "unittest: Unit testing framework for Python."). ### Test cases `class unittest.TestCase(methodName='runTest')` Instances of the [`TestCase`](#unittest.TestCase "unittest.TestCase") class represent the logical test units in the [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") universe. This class is intended to be used as a base class, with specific tests being implemented by concrete subclasses. This class implements the interface needed by the test runner to allow it to drive the tests, and methods that the test code can use to check for and report various kinds of failure. Each instance of [`TestCase`](#unittest.TestCase "unittest.TestCase") will run a single base method: the method named *methodName*. In most uses of [`TestCase`](#unittest.TestCase "unittest.TestCase"), you will neither change the *methodName* nor reimplement the default `runTest()` method. Changed in version 3.2: [`TestCase`](#unittest.TestCase "unittest.TestCase") can be instantiated successfully without providing a *methodName*. This makes it easier to experiment with [`TestCase`](#unittest.TestCase "unittest.TestCase") from the interactive interpreter. [`TestCase`](#unittest.TestCase "unittest.TestCase") instances provide three groups of methods: one group used to run the test, another used by the test implementation to check conditions and report failures, and some inquiry methods allowing information about the test itself to be gathered. Methods in the first group (running the test) are: `setUp()` Method called to prepare the test fixture. 
This is called immediately before calling the test method; other than [`AssertionError`](exceptions#AssertionError "AssertionError") or [`SkipTest`](#unittest.SkipTest "unittest.SkipTest"), any exception raised by this method will be considered an error rather than a test failure. The default implementation does nothing. `tearDown()` Method called immediately after the test method has been called and the result recorded. This is called even if the test method raised an exception, so the implementation in subclasses may need to be particularly careful about checking internal state. Any exception, other than [`AssertionError`](exceptions#AssertionError "AssertionError") or [`SkipTest`](#unittest.SkipTest "unittest.SkipTest"), raised by this method will be considered an additional error rather than a test failure (thus increasing the total number of reported errors). This method will only be called if the [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") succeeds, regardless of the outcome of the test method. The default implementation does nothing. `setUpClass()` A class method called before tests in an individual class are run. `setUpClass` is called with the class as the only argument and must be decorated as a [`classmethod()`](functions#classmethod "classmethod"): ``` @classmethod def setUpClass(cls): ... ``` See [Class and Module Fixtures](#class-and-module-fixtures) for more details. New in version 3.2. `tearDownClass()` A class method called after tests in an individual class have run. `tearDownClass` is called with the class as the only argument and must be decorated as a [`classmethod()`](functions#classmethod "classmethod"): ``` @classmethod def tearDownClass(cls): ... ``` See [Class and Module Fixtures](#class-and-module-fixtures) for more details. New in version 3.2. `run(result=None)` Run the test, collecting the result into the [`TestResult`](#unittest.TestResult "unittest.TestResult") object passed as *result*. If *result* is omitted or `None`, a temporary result object is created (by calling the [`defaultTestResult()`](#unittest.TestCase.defaultTestResult "unittest.TestCase.defaultTestResult") method) and used. The result object is returned to [`run()`](#unittest.TestCase.run "unittest.TestCase.run")’s caller. The same effect may be had by simply calling the [`TestCase`](#unittest.TestCase "unittest.TestCase") instance. Changed in version 3.3: Previous versions of `run` did not return the result. Neither did calling an instance. `skipTest(reason)` Calling this during a test method or [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") skips the current test. See [Skipping tests and expected failures](#unittest-skipping) for more information. New in version 3.1. `subTest(msg=None, **params)` Return a context manager which executes the enclosed code block as a subtest. *msg* and *params* are optional, arbitrary values which are displayed whenever a subtest fails, allowing you to identify them clearly. A test case can contain any number of subtest declarations, and they can be arbitrarily nested. See [Distinguishing test iterations using subtests](#subtests) for more information. New in version 3.4. `debug()` Run the test without collecting the result. This allows exceptions raised by the test to be propagated to the caller, and can be used to support running tests under a debugger. The [`TestCase`](#unittest.TestCase "unittest.TestCase") class provides several assert methods to check for and report failures. 
The following table lists the most commonly used methods (see the tables below for more assert methods):

| Method | Checks that | New in |
| --- | --- | --- |
| [`assertEqual(a, b)`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual") | `a == b` | |
| [`assertNotEqual(a, b)`](#unittest.TestCase.assertNotEqual "unittest.TestCase.assertNotEqual") | `a != b` | |
| [`assertTrue(x)`](#unittest.TestCase.assertTrue "unittest.TestCase.assertTrue") | `bool(x) is True` | |
| [`assertFalse(x)`](#unittest.TestCase.assertFalse "unittest.TestCase.assertFalse") | `bool(x) is False` | |
| [`assertIs(a, b)`](#unittest.TestCase.assertIs "unittest.TestCase.assertIs") | `a is b` | 3.1 |
| [`assertIsNot(a, b)`](#unittest.TestCase.assertIsNot "unittest.TestCase.assertIsNot") | `a is not b` | 3.1 |
| [`assertIsNone(x)`](#unittest.TestCase.assertIsNone "unittest.TestCase.assertIsNone") | `x is None` | 3.1 |
| [`assertIsNotNone(x)`](#unittest.TestCase.assertIsNotNone "unittest.TestCase.assertIsNotNone") | `x is not None` | 3.1 |
| [`assertIn(a, b)`](#unittest.TestCase.assertIn "unittest.TestCase.assertIn") | `a in b` | 3.1 |
| [`assertNotIn(a, b)`](#unittest.TestCase.assertNotIn "unittest.TestCase.assertNotIn") | `a not in b` | 3.1 |
| [`assertIsInstance(a, b)`](#unittest.TestCase.assertIsInstance "unittest.TestCase.assertIsInstance") | `isinstance(a, b)` | 3.2 |
| [`assertNotIsInstance(a, b)`](#unittest.TestCase.assertNotIsInstance "unittest.TestCase.assertNotIsInstance") | `not isinstance(a, b)` | 3.2 |

All the assert methods accept a *msg* argument that, if specified, is used as the error message on failure (see also [`longMessage`](#unittest.TestCase.longMessage "unittest.TestCase.longMessage")). Note that the *msg* keyword argument can be passed to [`assertRaises()`](#unittest.TestCase.assertRaises "unittest.TestCase.assertRaises"), [`assertRaisesRegex()`](#unittest.TestCase.assertRaisesRegex "unittest.TestCase.assertRaisesRegex"), [`assertWarns()`](#unittest.TestCase.assertWarns "unittest.TestCase.assertWarns"), [`assertWarnsRegex()`](#unittest.TestCase.assertWarnsRegex "unittest.TestCase.assertWarnsRegex") only when they are used as a context manager. `assertEqual(first, second, msg=None)` Test that *first* and *second* are equal. If the values do not compare equal, the test will fail. In addition, if *first* and *second* are the exact same type and one of list, tuple, dict, set, frozenset or str or any type that a subclass registers with [`addTypeEqualityFunc()`](#unittest.TestCase.addTypeEqualityFunc "unittest.TestCase.addTypeEqualityFunc") the type-specific equality function will be called in order to generate a more useful default error message (see also the [list of type-specific methods](#type-specific-methods)). Changed in version 3.1: Added the automatic calling of type-specific equality function. Changed in version 3.2: [`assertMultiLineEqual()`](#unittest.TestCase.assertMultiLineEqual "unittest.TestCase.assertMultiLineEqual") added as the default type equality function for comparing strings. `assertNotEqual(first, second, msg=None)` Test that *first* and *second* are not equal. If the values do compare equal, the test will fail. `assertTrue(expr, msg=None)` `assertFalse(expr, msg=None)` Test that *expr* is true (or false). Note that this is equivalent to `bool(expr) is True` and not to `expr is True` (use `assertIs(expr, True)` for the latter). This method should also be avoided when more specific methods are available (e.g. 
`assertEqual(a, b)` instead of `assertTrue(a == b)`), because they provide a better error message in case of failure. `assertIs(first, second, msg=None)` `assertIsNot(first, second, msg=None)` Test that *first* and *second* are (or are not) the same object. New in version 3.1. `assertIsNone(expr, msg=None)` `assertIsNotNone(expr, msg=None)` Test that *expr* is (or is not) `None`. New in version 3.1. `assertIn(member, container, msg=None)` `assertNotIn(member, container, msg=None)` Test that *member* is (or is not) in *container*. New in version 3.1. `assertIsInstance(obj, cls, msg=None)` `assertNotIsInstance(obj, cls, msg=None)` Test that *obj* is (or is not) an instance of *cls* (which can be a class or a tuple of classes, as supported by [`isinstance()`](functions#isinstance "isinstance")). To check for the exact type, use [`assertIs(type(obj), cls)`](#unittest.TestCase.assertIs "unittest.TestCase.assertIs"). New in version 3.2. It is also possible to check the production of exceptions, warnings, and log messages using the following methods:

| Method | Checks that | New in |
| --- | --- | --- |
| [`assertRaises(exc, fun, *args, **kwds)`](#unittest.TestCase.assertRaises "unittest.TestCase.assertRaises") | `fun(*args, **kwds)` raises *exc* | |
| [`assertRaisesRegex(exc, r, fun, *args, **kwds)`](#unittest.TestCase.assertRaisesRegex "unittest.TestCase.assertRaisesRegex") | `fun(*args, **kwds)` raises *exc* and the message matches regex *r* | 3.1 |
| [`assertWarns(warn, fun, *args, **kwds)`](#unittest.TestCase.assertWarns "unittest.TestCase.assertWarns") | `fun(*args, **kwds)` raises *warn* | 3.2 |
| [`assertWarnsRegex(warn, r, fun, *args, **kwds)`](#unittest.TestCase.assertWarnsRegex "unittest.TestCase.assertWarnsRegex") | `fun(*args, **kwds)` raises *warn* and the message matches regex *r* | 3.2 |
| [`assertLogs(logger, level)`](#unittest.TestCase.assertLogs "unittest.TestCase.assertLogs") | The `with` block logs on *logger* with minimum *level* | 3.4 |

`assertRaises(exception, callable, *args, **kwds)` `assertRaises(exception, *, msg=None)` Test that an exception is raised when *callable* is called with any positional or keyword arguments that are also passed to [`assertRaises()`](#unittest.TestCase.assertRaises "unittest.TestCase.assertRaises"). The test passes if *exception* is raised, is an error if another exception is raised, or fails if no exception is raised. To catch any of a group of exceptions, a tuple containing the exception classes may be passed as *exception*. If only the *exception* and possibly the *msg* arguments are given, return a context manager so that the code under test can be written inline rather than as a function: ``` with self.assertRaises(SomeException): do_something() ``` When used as a context manager, [`assertRaises()`](#unittest.TestCase.assertRaises "unittest.TestCase.assertRaises") accepts the additional keyword argument *msg*. The context manager will store the caught exception object in its `exception` attribute. This can be useful if the intention is to perform additional checks on the exception raised: ``` with self.assertRaises(SomeException) as cm: do_something() the_exception = cm.exception self.assertEqual(the_exception.error_code, 3) ``` Changed in version 3.1: Added the ability to use [`assertRaises()`](#unittest.TestCase.assertRaises "unittest.TestCase.assertRaises") as a context manager. Changed in version 3.2: Added the `exception` attribute. Changed in version 3.3: Added the *msg* keyword argument when used as a context manager. 
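The tuple form mentioned above, where raising any one of several exception classes satisfies the assertion, can be exercised with a small self-contained test (the class and method names are illustrative):

```
import unittest

class TupleRaisesTest(unittest.TestCase):
    def test_any_of_several_exceptions(self):
        # The assertion passes if any exception class in the tuple is
        # raised: int('x') raises ValueError, int(None) raises TypeError.
        for bad in ('x', None):
            with self.subTest(bad=bad):
                with self.assertRaises((ValueError, TypeError)):
                    int(bad)

if __name__ == '__main__':
    unittest.main()
```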
`assertRaisesRegex(exception, regex, callable, *args, **kwds)`
`assertRaisesRegex(exception, regex, *, msg=None)`

Like [`assertRaises()`](#unittest.TestCase.assertRaises "unittest.TestCase.assertRaises") but also tests that *regex* matches on the string representation of the raised exception. *regex* may be a regular expression object or a string containing a regular expression suitable for use by [`re.search()`](re#re.search "re.search"). Examples:

```
self.assertRaisesRegex(ValueError, "invalid literal for.*XYZ'$", int, 'XYZ')
```

or:

```
with self.assertRaisesRegex(ValueError, 'literal'):
    int('XYZ')
```

New in version 3.1: Added under the name `assertRaisesRegexp`.

Changed in version 3.2: Renamed to [`assertRaisesRegex()`](#unittest.TestCase.assertRaisesRegex "unittest.TestCase.assertRaisesRegex").

Changed in version 3.3: Added the *msg* keyword argument when used as a context manager.

`assertWarns(warning, callable, *args, **kwds)`
`assertWarns(warning, *, msg=None)`

Test that a warning is triggered when *callable* is called with any positional or keyword arguments that are also passed to [`assertWarns()`](#unittest.TestCase.assertWarns "unittest.TestCase.assertWarns"). The test passes if *warning* is triggered and fails if it isn't. Any exception is an error. To catch any of a group of warnings, a tuple containing the warning classes may be passed as *warning*.

If only the *warning* and possibly the *msg* arguments are given, return a context manager so that the code under test can be written inline rather than as a function:

```
with self.assertWarns(SomeWarning):
    do_something()
```

When used as a context manager, [`assertWarns()`](#unittest.TestCase.assertWarns "unittest.TestCase.assertWarns") accepts the additional keyword argument *msg*.

The context manager will store the caught warning object in its `warning` attribute, and the source line which triggered the warning in the `filename` and `lineno` attributes. This can be useful if the intention is to perform additional checks on the warning caught:

```
with self.assertWarns(SomeWarning) as cm:
    do_something()

self.assertIn('myfile.py', cm.filename)
self.assertEqual(320, cm.lineno)
```

This method works regardless of the warning filters in place when it is called.

New in version 3.2.

Changed in version 3.3: Added the *msg* keyword argument when used as a context manager.

`assertWarnsRegex(warning, regex, callable, *args, **kwds)`
`assertWarnsRegex(warning, regex, *, msg=None)`

Like [`assertWarns()`](#unittest.TestCase.assertWarns "unittest.TestCase.assertWarns") but also tests that *regex* matches on the message of the triggered warning. *regex* may be a regular expression object or a string containing a regular expression suitable for use by [`re.search()`](re#re.search "re.search"). Example:

```
self.assertWarnsRegex(DeprecationWarning,
                      r'legacy_function\(\) is deprecated',
                      legacy_function, 'XYZ')
```

or:

```
with self.assertWarnsRegex(RuntimeWarning, 'unsafe frobnicating'):
    frobnicate('/etc/passwd')
```

New in version 3.2.

Changed in version 3.3: Added the *msg* keyword argument when used as a context manager.

`assertLogs(logger=None, level=None)`

A context manager to test that at least one message is logged on the *logger* or one of its children, with at least the given *level*.

If given, *logger* should be a [`logging.Logger`](logging#logging.Logger "logging.Logger") object or a [`str`](stdtypes#str "str") giving the name of a logger.
The default is the root logger, which will catch all messages that were not blocked by a non-propagating descendant logger.

If given, *level* should be either a numeric logging level or its string equivalent (for example either `"ERROR"` or `logging.ERROR`). The default is `logging.INFO`.

The test passes if at least one message emitted inside the `with` block matches the *logger* and *level* conditions, otherwise it fails.

The object returned by the context manager is a recording helper which keeps track of the matching log messages. It has two attributes:

`records`

A list of [`logging.LogRecord`](logging#logging.LogRecord "logging.LogRecord") objects of the matching log messages.

`output`

A list of [`str`](stdtypes#str "str") objects with the formatted output of matching messages.

Example:

```
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
```

New in version 3.4.

There are also other methods used to perform more specific checks, such as:

| Method | Checks that | New in |
| --- | --- | --- |
| [`assertAlmostEqual(a, b)`](#unittest.TestCase.assertAlmostEqual "unittest.TestCase.assertAlmostEqual") | `round(a-b, 7) == 0` | |
| [`assertNotAlmostEqual(a, b)`](#unittest.TestCase.assertNotAlmostEqual "unittest.TestCase.assertNotAlmostEqual") | `round(a-b, 7) != 0` | |
| [`assertGreater(a, b)`](#unittest.TestCase.assertGreater "unittest.TestCase.assertGreater") | `a > b` | 3.1 |
| [`assertGreaterEqual(a, b)`](#unittest.TestCase.assertGreaterEqual "unittest.TestCase.assertGreaterEqual") | `a >= b` | 3.1 |
| [`assertLess(a, b)`](#unittest.TestCase.assertLess "unittest.TestCase.assertLess") | `a < b` | 3.1 |
| [`assertLessEqual(a, b)`](#unittest.TestCase.assertLessEqual "unittest.TestCase.assertLessEqual") | `a <= b` | 3.1 |
| [`assertRegex(s, r)`](#unittest.TestCase.assertRegex "unittest.TestCase.assertRegex") | `r.search(s)` | 3.1 |
| [`assertNotRegex(s, r)`](#unittest.TestCase.assertNotRegex "unittest.TestCase.assertNotRegex") | `not r.search(s)` | 3.2 |
| [`assertCountEqual(a, b)`](#unittest.TestCase.assertCountEqual "unittest.TestCase.assertCountEqual") | *a* and *b* have the same elements in the same number, regardless of their order. | 3.2 |

`assertAlmostEqual(first, second, places=7, msg=None, delta=None)`
`assertNotAlmostEqual(first, second, places=7, msg=None, delta=None)`

Test that *first* and *second* are approximately (or not approximately) equal by computing the difference, rounding to the given number of decimal *places* (default 7), and comparing to zero. Note that these methods round the values to the given number of *decimal places* (i.e. like the [`round()`](functions#round "round") function) and not *significant digits*.

If *delta* is supplied instead of *places* then the difference between *first* and *second* must be less than or equal to (or greater than) *delta*.

Supplying both *delta* and *places* raises a [`TypeError`](exceptions#TypeError "TypeError").

Changed in version 3.2: [`assertAlmostEqual()`](#unittest.TestCase.assertAlmostEqual "unittest.TestCase.assertAlmostEqual") automatically considers almost equal objects that compare equal. [`assertNotAlmostEqual()`](#unittest.TestCase.assertNotAlmostEqual "unittest.TestCase.assertNotAlmostEqual") automatically fails if the objects compare equal. Added the *delta* keyword argument.
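For example, the following assertions all pass (the values are illustrative only):

```
import unittest

class ToleranceTestCase(unittest.TestCase):
    def test_places(self):
        # The difference rounds to zero at the default 7 decimal places.
        self.assertAlmostEqual(1.000000001, 1.0)
        # Fewer places makes the comparison looser.
        self.assertAlmostEqual(3.14, 3.141, places=2)

    def test_delta(self):
        # With delta, the absolute difference must be <= delta.
        self.assertAlmostEqual(100, 103, delta=5)
        self.assertNotAlmostEqual(100, 110, delta=5)

if __name__ == '__main__':
    unittest.main()
```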
`assertGreater(first, second, msg=None)`
`assertGreaterEqual(first, second, msg=None)`
`assertLess(first, second, msg=None)`
`assertLessEqual(first, second, msg=None)`

Test that *first* is respectively >, >=, < or <= *second*, depending on the method name. If not, the test will fail:

```
>>> self.assertGreaterEqual(3, 4)
AssertionError: "3" unexpectedly not greater than or equal to "4"
```

New in version 3.1.

`assertRegex(text, regex, msg=None)`
`assertNotRegex(text, regex, msg=None)`

Test that a *regex* search matches (or does not match) *text*. In case of failure, the error message will include the pattern and the *text* (or the pattern and the part of *text* that unexpectedly matched). *regex* may be a regular expression object or a string containing a regular expression suitable for use by [`re.search()`](re#re.search "re.search").

New in version 3.1: Added under the name `assertRegexpMatches`.

Changed in version 3.2: The method `assertRegexpMatches()` has been renamed to [`assertRegex()`](#unittest.TestCase.assertRegex "unittest.TestCase.assertRegex").

New in version 3.2: [`assertNotRegex()`](#unittest.TestCase.assertNotRegex "unittest.TestCase.assertNotRegex").

Deprecated since version 3.5: The name `assertNotRegexpMatches` is a deprecated alias for [`assertNotRegex()`](#unittest.TestCase.assertNotRegex "unittest.TestCase.assertNotRegex").

`assertCountEqual(first, second, msg=None)`

Test that sequence *first* contains the same elements as *second*, regardless of their order. When they don't, an error message listing the differences between the sequences will be generated.

Duplicate elements are *not* ignored when comparing *first* and *second*. It verifies whether each element has the same count in both sequences. Equivalent to: `assertEqual(Counter(list(first)), Counter(list(second)))` but works with sequences of unhashable objects as well.

New in version 3.2.

The [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual") method dispatches the equality check for objects of the same type to different type-specific methods. These methods are already implemented for most of the built-in types, but it's also possible to register new methods using [`addTypeEqualityFunc()`](#unittest.TestCase.addTypeEqualityFunc "unittest.TestCase.addTypeEqualityFunc"):

`addTypeEqualityFunc(typeobj, function)`

Registers a type-specific method called by [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual") to check if two objects of exactly the same *typeobj* (not subclasses) compare equal. *function* must take two positional arguments and a third `msg=None` keyword argument just as [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual") does. It must raise [`self.failureException(msg)`](#unittest.TestCase.failureException "unittest.TestCase.failureException") when inequality between the first two parameters is detected, possibly providing useful information and explaining the inequalities in detail in the error message.

New in version 3.1.

The list of type-specific methods automatically used by [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual") is summarized in the following table. Note that it's usually not necessary to invoke these methods directly.
| Method | Used to compare | New in |
| --- | --- | --- |
| [`assertMultiLineEqual(a, b)`](#unittest.TestCase.assertMultiLineEqual "unittest.TestCase.assertMultiLineEqual") | strings | 3.1 |
| [`assertSequenceEqual(a, b)`](#unittest.TestCase.assertSequenceEqual "unittest.TestCase.assertSequenceEqual") | sequences | 3.1 |
| [`assertListEqual(a, b)`](#unittest.TestCase.assertListEqual "unittest.TestCase.assertListEqual") | lists | 3.1 |
| [`assertTupleEqual(a, b)`](#unittest.TestCase.assertTupleEqual "unittest.TestCase.assertTupleEqual") | tuples | 3.1 |
| [`assertSetEqual(a, b)`](#unittest.TestCase.assertSetEqual "unittest.TestCase.assertSetEqual") | sets or frozensets | 3.1 |
| [`assertDictEqual(a, b)`](#unittest.TestCase.assertDictEqual "unittest.TestCase.assertDictEqual") | dicts | 3.1 |

`assertMultiLineEqual(first, second, msg=None)`

Test that the multiline string *first* is equal to the string *second*. When not equal, a diff of the two strings highlighting the differences will be included in the error message. This method is used by default when comparing strings with [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual").

New in version 3.1.

`assertSequenceEqual(first, second, msg=None, seq_type=None)`

Tests that two sequences are equal. If a *seq\_type* is supplied, both *first* and *second* must be instances of *seq\_type* or a failure will be raised. If the sequences are different, an error message is constructed that shows the difference between the two.

This method is not called directly by [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual"), but it's used to implement [`assertListEqual()`](#unittest.TestCase.assertListEqual "unittest.TestCase.assertListEqual") and [`assertTupleEqual()`](#unittest.TestCase.assertTupleEqual "unittest.TestCase.assertTupleEqual").

New in version 3.1.

`assertListEqual(first, second, msg=None)`
`assertTupleEqual(first, second, msg=None)`

Tests that two lists or tuples are equal. If not, an error message is constructed that shows only the differences between the two. An error is also raised if either of the parameters is of the wrong type. These methods are used by default when comparing lists or tuples with [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual").

New in version 3.1.

`assertSetEqual(first, second, msg=None)`

Tests that two sets are equal. If not, an error message is constructed that lists the differences between the sets. This method is used by default when comparing sets or frozensets with [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual").

Fails if either of *first* or *second* does not have a `set.difference()` method.

New in version 3.1.

`assertDictEqual(first, second, msg=None)`

Test that two dictionaries are equal. If not, an error message is constructed that shows the differences in the dictionaries. This method will be used by default to compare dictionaries in calls to [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual").

New in version 3.1.

Finally, the [`TestCase`](#unittest.TestCase "unittest.TestCase") provides the following methods and attributes:

`fail(msg=None)`

Signals a test failure unconditionally, with *msg* or `None` for the error message.

`failureException`

This class attribute gives the exception raised by the test method.
If a test framework needs to use a specialized exception, possibly to carry additional information, it must subclass this exception in order to “play fair” with the framework. The initial value of this attribute is [`AssertionError`](exceptions#AssertionError "AssertionError").

`longMessage`

This class attribute determines what happens when a custom failure message is passed as the *msg* argument to an assertXYZ call that fails. `True` is the default value. In this case, the custom message is appended to the end of the standard failure message. When set to `False`, the custom message replaces the standard message.

The class setting can be overridden in individual test methods by assigning an instance attribute, `self.longMessage`, to `True` or `False` before calling the assert methods. The class setting gets reset before each test call.

New in version 3.1.

`maxDiff`

This attribute controls the maximum length of diffs output by assert methods that report diffs on failure. It defaults to 80\*8 characters. Assert methods affected by this attribute are [`assertSequenceEqual()`](#unittest.TestCase.assertSequenceEqual "unittest.TestCase.assertSequenceEqual") (including all the sequence comparison methods that delegate to it), [`assertDictEqual()`](#unittest.TestCase.assertDictEqual "unittest.TestCase.assertDictEqual") and [`assertMultiLineEqual()`](#unittest.TestCase.assertMultiLineEqual "unittest.TestCase.assertMultiLineEqual").

Setting `maxDiff` to `None` means that there is no maximum length of diffs.

New in version 3.2.

Testing frameworks can use the following methods to collect information on the test:

`countTestCases()`

Return the number of tests represented by this test object. For [`TestCase`](#unittest.TestCase "unittest.TestCase") instances, this will always be `1`.

`defaultTestResult()`

Return an instance of the test result class that should be used for this test case class (if no other result instance is provided to the [`run()`](#unittest.TestCase.run "unittest.TestCase.run") method). For [`TestCase`](#unittest.TestCase "unittest.TestCase") instances, this will always be an instance of [`TestResult`](#unittest.TestResult "unittest.TestResult"); subclasses of [`TestCase`](#unittest.TestCase "unittest.TestCase") should override this as necessary.

`id()`

Return a string identifying the specific test case. This is usually the full name of the test method, including the module and class name.

`shortDescription()`

Returns a description of the test, or `None` if no description has been provided. The default implementation of this method returns the first line of the test method's docstring, if available, or `None`.

Changed in version 3.1: In 3.1 this was changed to add the test name to the short description even in the presence of a docstring. This caused compatibility issues with unittest extensions and adding the test name was moved to the [`TextTestResult`](#unittest.TextTestResult "unittest.TextTestResult") in Python 3.2.

`addCleanup(function, /, *args, **kwargs)`

Add a function to be called after [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown") to clean up resources used during the test. Functions will be called in reverse order to the order they are added (LIFO). They are called with any arguments and keyword arguments passed into [`addCleanup()`](#unittest.TestCase.addCleanup "unittest.TestCase.addCleanup") when they are added.
If [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") fails, meaning that [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown") is not called, then any cleanup functions added will still be called.

New in version 3.1.

`doCleanups()`

This method is called unconditionally after [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown"), or after [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") if [`setUp()`](#unittest.TestCase.setUp "unittest.TestCase.setUp") raises an exception.

It is responsible for calling all the cleanup functions added by [`addCleanup()`](#unittest.TestCase.addCleanup "unittest.TestCase.addCleanup"). If you need cleanup functions to be called *prior* to [`tearDown()`](#unittest.TestCase.tearDown "unittest.TestCase.tearDown") then you can call [`doCleanups()`](#unittest.TestCase.doCleanups "unittest.TestCase.doCleanups") yourself.

[`doCleanups()`](#unittest.TestCase.doCleanups "unittest.TestCase.doCleanups") pops methods off the stack of cleanup functions one at a time, so it can be called at any time.

New in version 3.1.

`classmethod addClassCleanup(function, /, *args, **kwargs)`

Add a function to be called after [`tearDownClass()`](#unittest.TestCase.tearDownClass "unittest.TestCase.tearDownClass") to clean up resources used during the test class. Functions will be called in reverse order to the order they are added (LIFO). They are called with any arguments and keyword arguments passed into [`addClassCleanup()`](#unittest.TestCase.addClassCleanup "unittest.TestCase.addClassCleanup") when they are added.

If [`setUpClass()`](#unittest.TestCase.setUpClass "unittest.TestCase.setUpClass") fails, meaning that [`tearDownClass()`](#unittest.TestCase.tearDownClass "unittest.TestCase.tearDownClass") is not called, then any cleanup functions added will still be called.

New in version 3.8.

`classmethod doClassCleanups()`

This method is called unconditionally after [`tearDownClass()`](#unittest.TestCase.tearDownClass "unittest.TestCase.tearDownClass"), or after [`setUpClass()`](#unittest.TestCase.setUpClass "unittest.TestCase.setUpClass") if [`setUpClass()`](#unittest.TestCase.setUpClass "unittest.TestCase.setUpClass") raises an exception.

It is responsible for calling all the cleanup functions added by [`addClassCleanup()`](#unittest.TestCase.addClassCleanup "unittest.TestCase.addClassCleanup"). If you need cleanup functions to be called *prior* to [`tearDownClass()`](#unittest.TestCase.tearDownClass "unittest.TestCase.tearDownClass") then you can call [`doClassCleanups()`](#unittest.TestCase.doClassCleanups "unittest.TestCase.doClassCleanups") yourself.

[`doClassCleanups()`](#unittest.TestCase.doClassCleanups "unittest.TestCase.doClassCleanups") pops methods off the stack of cleanup functions one at a time, so it can be called at any time.

New in version 3.8.

`class unittest.IsolatedAsyncioTestCase(methodName='runTest')`

This class provides an API similar to [`TestCase`](#unittest.TestCase "unittest.TestCase") and also accepts coroutines as test functions.

New in version 3.8.

`coroutine asyncSetUp()`

Method called to prepare the test fixture. This is called after `setUp()`. This is called immediately before calling the test method; other than [`AssertionError`](exceptions#AssertionError "AssertionError") or [`SkipTest`](#unittest.SkipTest "unittest.SkipTest"), any exception raised by this method will be considered an error rather than a test failure. The default implementation does nothing.
`coroutine asyncTearDown()`

Method called immediately after the test method has been called and the result recorded. This is called before `tearDown()`. This is called even if the test method raised an exception, so the implementation in subclasses may need to be particularly careful about checking internal state. Any exception, other than [`AssertionError`](exceptions#AssertionError "AssertionError") or [`SkipTest`](#unittest.SkipTest "unittest.SkipTest"), raised by this method will be considered an additional error rather than a test failure (thus increasing the total number of reported errors). This method will only be called if [`asyncSetUp()`](#unittest.IsolatedAsyncioTestCase.asyncSetUp "unittest.IsolatedAsyncioTestCase.asyncSetUp") succeeds, regardless of the outcome of the test method. The default implementation does nothing.

`addAsyncCleanup(function, /, *args, **kwargs)`

This method accepts a coroutine that can be used as a cleanup function.

`run(result=None)`

Sets up a new event loop to run the test, collecting the result into the [`TestResult`](#unittest.TestResult "unittest.TestResult") object passed as *result*. If *result* is omitted or `None`, a temporary result object is created (by calling the `defaultTestResult()` method) and used. The result object is returned to [`run()`](#unittest.IsolatedAsyncioTestCase.run "unittest.IsolatedAsyncioTestCase.run")'s caller. At the end of the test all the tasks in the event loop are cancelled.

An example illustrating the order:

```
import unittest
from unittest import IsolatedAsyncioTestCase

events = []

class Test(IsolatedAsyncioTestCase):
    def setUp(self):
        events.append("setUp")

    async def asyncSetUp(self):
        # AsyncConnection is an illustrative placeholder, not a real class.
        self._async_connection = await AsyncConnection()
        events.append("asyncSetUp")

    async def test_response(self):
        events.append("test_response")
        response = await self._async_connection.get("https://example.com")
        self.assertEqual(response.status_code, 200)
        self.addAsyncCleanup(self.on_cleanup)

    def tearDown(self):
        events.append("tearDown")

    async def asyncTearDown(self):
        await self._async_connection.close()
        events.append("asyncTearDown")

    async def on_cleanup(self):
        events.append("cleanup")

if __name__ == "__main__":
    unittest.main()
```

After running the test, `events` would contain `["setUp", "asyncSetUp", "test_response", "asyncTearDown", "tearDown", "cleanup"]`.

`class unittest.FunctionTestCase(testFunc, setUp=None, tearDown=None, description=None)`

This class implements the portion of the [`TestCase`](#unittest.TestCase "unittest.TestCase") interface which allows the test runner to drive the test, but does not provide the methods which test code can use to check and report errors. This is used to create test cases using legacy test code, allowing it to be integrated into a [`unittest`](#module-unittest "unittest: Unit testing framework for Python.")-based test framework.

#### Deprecated aliases

For historical reasons, some of the [`TestCase`](#unittest.TestCase "unittest.TestCase") methods had one or more aliases that are now deprecated.
The following table lists the correct names along with their deprecated aliases: | Method Name | Deprecated alias | Deprecated alias | | --- | --- | --- | | [`assertEqual()`](#unittest.TestCase.assertEqual "unittest.TestCase.assertEqual") | failUnlessEqual | assertEquals | | [`assertNotEqual()`](#unittest.TestCase.assertNotEqual "unittest.TestCase.assertNotEqual") | failIfEqual | assertNotEquals | | [`assertTrue()`](#unittest.TestCase.assertTrue "unittest.TestCase.assertTrue") | failUnless | assert\_ | | [`assertFalse()`](#unittest.TestCase.assertFalse "unittest.TestCase.assertFalse") | failIf | | | [`assertRaises()`](#unittest.TestCase.assertRaises "unittest.TestCase.assertRaises") | failUnlessRaises | | | [`assertAlmostEqual()`](#unittest.TestCase.assertAlmostEqual "unittest.TestCase.assertAlmostEqual") | failUnlessAlmostEqual | assertAlmostEquals | | [`assertNotAlmostEqual()`](#unittest.TestCase.assertNotAlmostEqual "unittest.TestCase.assertNotAlmostEqual") | failIfAlmostEqual | assertNotAlmostEquals | | [`assertRegex()`](#unittest.TestCase.assertRegex "unittest.TestCase.assertRegex") | | assertRegexpMatches | | [`assertNotRegex()`](#unittest.TestCase.assertNotRegex "unittest.TestCase.assertNotRegex") | | assertNotRegexpMatches | | [`assertRaisesRegex()`](#unittest.TestCase.assertRaisesRegex "unittest.TestCase.assertRaisesRegex") | | assertRaisesRegexp | Deprecated since version 3.1: The fail\* aliases listed in the second column have been deprecated. Deprecated since version 3.2: The assert\* aliases listed in the third column have been deprecated. Deprecated since version 3.2: `assertRegexpMatches` and `assertRaisesRegexp` have been renamed to [`assertRegex()`](#unittest.TestCase.assertRegex "unittest.TestCase.assertRegex") and [`assertRaisesRegex()`](#unittest.TestCase.assertRaisesRegex "unittest.TestCase.assertRaisesRegex"). Deprecated since version 3.5: The `assertNotRegexpMatches` name is deprecated in favor of [`assertNotRegex()`](#unittest.TestCase.assertNotRegex "unittest.TestCase.assertNotRegex"). ### Grouping tests `class unittest.TestSuite(tests=())` This class represents an aggregation of individual test cases and test suites. The class presents the interface needed by the test runner to allow it to be run as any other test case. Running a [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") instance is the same as iterating over the suite, running each test individually. If *tests* is given, it must be an iterable of individual test cases or other test suites that will be used to build the suite initially. Additional methods are provided to add test cases and suites to the collection later on. [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") objects behave much like [`TestCase`](#unittest.TestCase "unittest.TestCase") objects, except they do not actually implement a test. Instead, they are used to aggregate tests into groups of tests that should be run together. Some additional methods are available to add tests to [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") instances: `addTest(test)` Add a [`TestCase`](#unittest.TestCase "unittest.TestCase") or [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") to the suite. `addTests(tests)` Add all the tests from an iterable of [`TestCase`](#unittest.TestCase "unittest.TestCase") and [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") instances to this test suite. This is equivalent to iterating over *tests*, calling [`addTest()`](#unittest.TestSuite.addTest "unittest.TestSuite.addTest") for each element. 
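A short sketch of building a suite by hand; the test classes here are hypothetical stand-ins for real test code:

```
import unittest

class TestFoo(unittest.TestCase):
    def test_one(self):
        self.assertTrue(True)

class TestBar(unittest.TestCase):
    def test_two(self):
        self.assertEqual(1 + 1, 2)

def make_suite():
    suite = unittest.TestSuite()
    # Add a single test method by name...
    suite.addTest(TestFoo('test_one'))
    # ...or merge in whole suites and cases with addTests().
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestBar))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner().run(make_suite())
```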
[`TestSuite`](#unittest.TestSuite "unittest.TestSuite") shares the following methods with [`TestCase`](#unittest.TestCase "unittest.TestCase"):

`run(result)`

Run the tests associated with this suite, collecting the result into the test result object passed as *result*. Note that unlike [`TestCase.run()`](#unittest.TestCase.run "unittest.TestCase.run"), [`TestSuite.run()`](#unittest.TestSuite.run "unittest.TestSuite.run") requires the result object to be passed in.

`debug()`

Run the tests associated with this suite without collecting the result. This allows exceptions raised by the test to be propagated to the caller and can be used to support running tests under a debugger.

`countTestCases()`

Return the number of tests represented by this test object, including all individual tests and sub-suites.

`__iter__()`

Tests grouped by a [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") are always accessed by iteration. Subclasses can lazily provide tests by overriding [`__iter__()`](#unittest.TestSuite.__iter__ "unittest.TestSuite.__iter__"). Note that this method may be called several times on a single suite (for example when counting tests or comparing for equality), so the tests returned by repeated iterations before [`TestSuite.run()`](#unittest.TestSuite.run "unittest.TestSuite.run") must be the same for each call. After [`TestSuite.run()`](#unittest.TestSuite.run "unittest.TestSuite.run"), callers should not rely on the tests returned by this method unless the caller uses a subclass that overrides `TestSuite._removeTestAtIndex()` to preserve test references.

Changed in version 3.2: In earlier versions the [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") accessed tests directly rather than through iteration, so overriding [`__iter__()`](#unittest.TestSuite.__iter__ "unittest.TestSuite.__iter__") wasn't sufficient for providing tests.

Changed in version 3.4: In earlier versions the [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") held references to each [`TestCase`](#unittest.TestCase "unittest.TestCase") after [`TestSuite.run()`](#unittest.TestSuite.run "unittest.TestSuite.run"). Subclasses can restore that behavior by overriding `TestSuite._removeTestAtIndex()`.

In the typical usage of a [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") object, the [`run()`](#unittest.TestSuite.run "unittest.TestSuite.run") method is invoked by a `TestRunner` rather than by the end-user test harness.

### Loading and running tests

`class unittest.TestLoader`

The [`TestLoader`](#unittest.TestLoader "unittest.TestLoader") class is used to create test suites from classes and modules. Normally, there is no need to create an instance of this class; the [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") module provides an instance that can be shared as [`unittest.defaultTestLoader`](#unittest.defaultTestLoader "unittest.defaultTestLoader"). Using a subclass or instance, however, allows customization of some configurable properties.

[`TestLoader`](#unittest.TestLoader "unittest.TestLoader") objects have the following attributes:

`errors`

A list of the non-fatal errors encountered while loading tests. Not reset by the loader at any point. Fatal errors are signalled by the relevant method raising an exception to the caller. Non-fatal errors are also indicated by a synthetic test that will raise the original error when run.

New in version 3.5.
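For example, a loading failure can be inspected without aborting the run. This sketch uses `loadTestsFromName()` (described below) with a deliberately bogus name, and assumes the entries in `errors` are the formatted traceback strings of the original errors:

```
import unittest

loader = unittest.TestLoader()
# The import failure is recorded rather than raised immediately.
suite = loader.loadTestsFromName('no_such_module.TestSomething')
for error in loader.errors:
    print(error)  # the original error, formatted
# Running the suite re-raises the error as a failing synthetic test.
unittest.TextTestRunner().run(suite)
```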
[`TestLoader`](#unittest.TestLoader "unittest.TestLoader") objects have the following methods: `loadTestsFromTestCase(testCaseClass)` Return a suite of all test cases contained in the [`TestCase`](#unittest.TestCase "unittest.TestCase")-derived `testCaseClass`. A test case instance is created for each method named by [`getTestCaseNames()`](#unittest.TestLoader.getTestCaseNames "unittest.TestLoader.getTestCaseNames"). By default these are the method names beginning with `test`. If [`getTestCaseNames()`](#unittest.TestLoader.getTestCaseNames "unittest.TestLoader.getTestCaseNames") returns no methods, but the `runTest()` method is implemented, a single test case is created for that method instead. `loadTestsFromModule(module, pattern=None)` Return a suite of all test cases contained in the given module. This method searches *module* for classes derived from [`TestCase`](#unittest.TestCase "unittest.TestCase") and creates an instance of the class for each test method defined for the class. Note While using a hierarchy of [`TestCase`](#unittest.TestCase "unittest.TestCase")-derived classes can be convenient in sharing fixtures and helper functions, defining test methods on base classes that are not intended to be instantiated directly does not play well with this method. Doing so, however, can be useful when the fixtures are different and defined in subclasses. If a module provides a `load_tests` function it will be called to load the tests. This allows modules to customize test loading. This is the [load\_tests protocol](#load-tests-protocol). The *pattern* argument is passed as the third argument to `load_tests`. Changed in version 3.2: Support for `load_tests` added. Changed in version 3.5: The undocumented and unofficial *use\_load\_tests* default argument is deprecated and ignored, although it is still accepted for backward compatibility. The method also now accepts a keyword-only argument *pattern* which is passed to `load_tests` as the third argument. `loadTestsFromName(name, module=None)` Return a suite of all test cases given a string specifier. The specifier *name* is a “dotted name” that may resolve either to a module, a test case class, a test method within a test case class, a [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") instance, or a callable object which returns a [`TestCase`](#unittest.TestCase "unittest.TestCase") or [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") instance. These checks are applied in the order listed here; that is, a method on a possible test case class will be picked up as “a test method within a test case class”, rather than “a callable object”. For example, if you have a module `SampleTests` containing a [`TestCase`](#unittest.TestCase "unittest.TestCase")-derived class `SampleTestCase` with three test methods (`test_one()`, `test_two()`, and `test_three()`), the specifier `'SampleTests.SampleTestCase'` would cause this method to return a suite which will run all three test methods. Using the specifier `'SampleTests.SampleTestCase.test_two'` would cause it to return a test suite which will run only the `test_two()` test method. The specifier can refer to modules and packages which have not been imported; they will be imported as a side-effect. The method optionally resolves *name* relative to the given *module*. 
Changed in version 3.5: If an [`ImportError`](exceptions#ImportError "ImportError") or [`AttributeError`](exceptions#AttributeError "AttributeError") occurs while traversing *name* then a synthetic test that raises that error when run will be returned. These errors are included in the errors accumulated by `self.errors`.

`loadTestsFromNames(names, module=None)`

Similar to [`loadTestsFromName()`](#unittest.TestLoader.loadTestsFromName "unittest.TestLoader.loadTestsFromName"), but takes a sequence of names rather than a single name. The return value is a test suite which supports all the tests defined for each name.

`getTestCaseNames(testCaseClass)`

Return a sorted sequence of method names found within *testCaseClass*; this should be a subclass of [`TestCase`](#unittest.TestCase "unittest.TestCase").

`discover(start_dir, pattern='test*.py', top_level_dir=None)`

Find all the test modules by recursing into subdirectories from the specified start directory, and return a TestSuite object containing them. Only test files that match *pattern* will be loaded (using shell-style pattern matching). Only module names that are importable (i.e. are valid Python identifiers) will be loaded. All test modules must be importable from the top level of the project. If the start directory is not the top level directory then the top level directory must be specified separately.

If importing a module fails, for example due to a syntax error, then this will be recorded as a single error and discovery will continue. If the import failure is due to [`SkipTest`](#unittest.SkipTest "unittest.SkipTest") being raised, it will be recorded as a skip instead of an error.

If a package (a directory containing a file named `__init__.py`) is found, the package will be checked for a `load_tests` function. If this exists then it will be called `package.load_tests(loader, tests, pattern)`. Test discovery takes care to ensure that a package is only checked for tests once during an invocation, even if the load\_tests function itself calls `loader.discover`.

If `load_tests` exists then discovery does *not* recurse into the package, `load_tests` is responsible for loading all tests in the package.

The pattern is deliberately not stored as a loader attribute so that packages can continue discovery themselves. *top\_level\_dir* is stored so `load_tests` does not need to pass this argument in to `loader.discover()`.

*start\_dir* can be a dotted module name as well as a directory.

New in version 3.2.

Changed in version 3.4: Modules that raise [`SkipTest`](#unittest.SkipTest "unittest.SkipTest") on import are recorded as skips, not errors.

Changed in version 3.4: *start\_dir* can be a [namespace package](../glossary#term-namespace-package).

Changed in version 3.4: Paths are sorted before being imported so that execution order is the same even if the underlying file system's ordering is not dependent on file name.

Changed in version 3.5: Found packages are now checked for `load_tests` regardless of whether their path matches *pattern*, because it is impossible for a package name to match the default pattern.

The following attributes of a [`TestLoader`](#unittest.TestLoader "unittest.TestLoader") can be configured either by subclassing or assignment on an instance:

`testMethodPrefix`

String giving the prefix of method names which will be interpreted as test methods. The default value is `'test'`.
This affects [`getTestCaseNames()`](#unittest.TestLoader.getTestCaseNames "unittest.TestLoader.getTestCaseNames") and all the `loadTestsFrom*()` methods.

`sortTestMethodsUsing`

Function to be used to compare method names when sorting them in [`getTestCaseNames()`](#unittest.TestLoader.getTestCaseNames "unittest.TestLoader.getTestCaseNames") and all the `loadTestsFrom*()` methods.

`suiteClass`

Callable object that constructs a test suite from a list of tests. No methods on the resulting object are needed. The default value is the [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") class.

This affects all the `loadTestsFrom*()` methods.

`testNamePatterns`

List of Unix shell-style wildcard test name patterns that test methods have to match to be included in test suites (see the `-k` option).

If this attribute is not `None` (the default), all test methods to be included in test suites must match one of the patterns in this list. Note that matches are always performed using [`fnmatch.fnmatchcase()`](fnmatch#fnmatch.fnmatchcase "fnmatch.fnmatchcase"), so unlike patterns passed to the `-k` option, simple substring patterns will have to be converted using `*` wildcards.

This affects all the `loadTestsFrom*()` methods.

New in version 3.7.

`class unittest.TestResult`

This class is used to compile information about which tests have succeeded and which have failed.

A [`TestResult`](#unittest.TestResult "unittest.TestResult") object stores the results of a set of tests. The [`TestCase`](#unittest.TestCase "unittest.TestCase") and [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") classes ensure that results are properly recorded; test authors do not need to worry about recording the outcome of tests.

Testing frameworks built on top of [`unittest`](#module-unittest "unittest: Unit testing framework for Python.") may want access to the [`TestResult`](#unittest.TestResult "unittest.TestResult") object generated by running a set of tests for reporting purposes; a [`TestResult`](#unittest.TestResult "unittest.TestResult") instance is returned by the `TestRunner.run()` method for this purpose.

[`TestResult`](#unittest.TestResult "unittest.TestResult") instances have the following attributes that will be of interest when inspecting the results of running a set of tests:

`errors`

A list containing 2-tuples of [`TestCase`](#unittest.TestCase "unittest.TestCase") instances and strings holding formatted tracebacks. Each tuple represents a test which raised an unexpected exception.

`failures`

A list containing 2-tuples of [`TestCase`](#unittest.TestCase "unittest.TestCase") instances and strings holding formatted tracebacks. Each tuple represents a test where a failure was explicitly signalled using the `TestCase.assert*()` methods.

`skipped`

A list containing 2-tuples of [`TestCase`](#unittest.TestCase "unittest.TestCase") instances and strings holding the reason for skipping the test.

New in version 3.1.

`expectedFailures`

A list containing 2-tuples of [`TestCase`](#unittest.TestCase "unittest.TestCase") instances and strings holding formatted tracebacks. Each tuple represents an expected failure or error of the test case.

`unexpectedSuccesses`

A list containing [`TestCase`](#unittest.TestCase "unittest.TestCase") instances that were marked as expected failures, but succeeded.

`shouldStop`

Set to `True` when the execution of tests should stop by [`stop()`](#unittest.TestResult.stop "unittest.TestResult.stop").

`testsRun`

The total number of tests run so far.
`buffer`

If set to true, `sys.stdout` and `sys.stderr` will be buffered in between [`startTest()`](#unittest.TestResult.startTest "unittest.TestResult.startTest") and [`stopTest()`](#unittest.TestResult.stopTest "unittest.TestResult.stopTest") being called. Collected output will only be echoed onto the real `sys.stdout` and `sys.stderr` if the test fails or errors. Any output is also attached to the failure / error message.

New in version 3.2.

`failfast`

If set to true, [`stop()`](#unittest.TestResult.stop "unittest.TestResult.stop") will be called on the first failure or error, halting the test run.

New in version 3.2.

`tb_locals`

If set to true, local variables will be shown in tracebacks.

New in version 3.5.

`wasSuccessful()`

Return `True` if all tests run so far have passed, otherwise return `False`.

Changed in version 3.4: Returns `False` if there were any [`unexpectedSuccesses`](#unittest.TestResult.unexpectedSuccesses "unittest.TestResult.unexpectedSuccesses") from tests marked with the [`expectedFailure()`](#unittest.expectedFailure "unittest.expectedFailure") decorator.

`stop()`

This method can be called to signal that the set of tests being run should be aborted by setting the [`shouldStop`](#unittest.TestResult.shouldStop "unittest.TestResult.shouldStop") attribute to `True`. `TestRunner` objects should respect this flag and return without running any additional tests.

For example, this feature is used by the [`TextTestRunner`](#unittest.TextTestRunner "unittest.TextTestRunner") class to stop the test framework when the user signals an interrupt from the keyboard. Interactive tools which provide `TestRunner` implementations can use this in a similar manner.

The following methods of the [`TestResult`](#unittest.TestResult "unittest.TestResult") class are used to maintain the internal data structures, and may be extended in subclasses to support additional reporting requirements. This is particularly useful in building tools which support interactive reporting while tests are being run.

`startTest(test)`

Called when the test case *test* is about to be run.

`stopTest(test)`

Called after the test case *test* has been executed, regardless of the outcome.

`startTestRun()`

Called once before any tests are executed.

New in version 3.1.

`stopTestRun()`

Called once after all tests are executed.

New in version 3.1.

`addError(test, err)`

Called when the test case *test* raises an unexpected exception. *err* is a tuple of the form returned by [`sys.exc_info()`](sys#sys.exc_info "sys.exc_info"): `(type, value, traceback)`.

The default implementation appends a tuple `(test, formatted_err)` to the instance's [`errors`](#unittest.TestResult.errors "unittest.TestResult.errors") attribute, where *formatted\_err* is a formatted traceback derived from *err*.

`addFailure(test, err)`

Called when the test case *test* signals a failure. *err* is a tuple of the form returned by [`sys.exc_info()`](sys#sys.exc_info "sys.exc_info"): `(type, value, traceback)`.

The default implementation appends a tuple `(test, formatted_err)` to the instance's [`failures`](#unittest.TestResult.failures "unittest.TestResult.failures") attribute, where *formatted\_err* is a formatted traceback derived from *err*.

`addSuccess(test)`

Called when the test case *test* succeeds.

The default implementation does nothing.

`addSkip(test, reason)`

Called when the test case *test* is skipped. *reason* is the reason the test gave for skipping.
The default implementation appends a tuple `(test, reason)` to the instance's [`skipped`](#unittest.TestResult.skipped "unittest.TestResult.skipped") attribute.

`addExpectedFailure(test, err)`

Called when the test case *test* fails or errors, but was marked with the [`expectedFailure()`](#unittest.expectedFailure "unittest.expectedFailure") decorator.

The default implementation appends a tuple `(test, formatted_err)` to the instance's [`expectedFailures`](#unittest.TestResult.expectedFailures "unittest.TestResult.expectedFailures") attribute, where *formatted\_err* is a formatted traceback derived from *err*.

`addUnexpectedSuccess(test)`

Called when the test case *test* was marked with the [`expectedFailure()`](#unittest.expectedFailure "unittest.expectedFailure") decorator, but succeeded.

The default implementation appends the test to the instance's [`unexpectedSuccesses`](#unittest.TestResult.unexpectedSuccesses "unittest.TestResult.unexpectedSuccesses") attribute.

`addSubTest(test, subtest, outcome)`

Called when a subtest finishes. *test* is the test case corresponding to the test method. *subtest* is a custom [`TestCase`](#unittest.TestCase "unittest.TestCase") instance describing the subtest.

If *outcome* is [`None`](constants#None "None"), the subtest succeeded. Otherwise, it failed with an exception where *outcome* is a tuple of the form returned by [`sys.exc_info()`](sys#sys.exc_info "sys.exc_info"): `(type, value, traceback)`.

The default implementation does nothing when the outcome is a success, and records subtest failures as normal failures.

New in version 3.4.

`class unittest.TextTestResult(stream, descriptions, verbosity)`

A concrete implementation of [`TestResult`](#unittest.TestResult "unittest.TestResult") used by the [`TextTestRunner`](#unittest.TextTestRunner "unittest.TextTestRunner").

New in version 3.2: This class was previously named `_TextTestResult`. The old name still exists as an alias but is deprecated.

`unittest.defaultTestLoader`

Instance of the [`TestLoader`](#unittest.TestLoader "unittest.TestLoader") class intended to be shared. If no customization of the [`TestLoader`](#unittest.TestLoader "unittest.TestLoader") is needed, this instance can be used instead of repeatedly creating new instances.

`class unittest.TextTestRunner(stream=None, descriptions=True, verbosity=1, failfast=False, buffer=False, resultclass=None, warnings=None, *, tb_locals=False)`

A basic test runner implementation that outputs results to a stream. If *stream* is `None`, the default, [`sys.stderr`](sys#sys.stderr "sys.stderr") is used as the output stream. This class has a few configurable parameters, but is essentially very simple. Graphical applications which run test suites should provide alternate implementations. Such implementations should accept `**kwargs`, as the interface to construct runners changes when features are added to unittest.

By default this runner shows [`DeprecationWarning`](exceptions#DeprecationWarning "DeprecationWarning"), [`PendingDeprecationWarning`](exceptions#PendingDeprecationWarning "PendingDeprecationWarning"), [`ResourceWarning`](exceptions#ResourceWarning "ResourceWarning") and [`ImportWarning`](exceptions#ImportWarning "ImportWarning") even if they are [ignored by default](warnings#warning-ignored). Deprecation warnings caused by [deprecated unittest methods](#deprecated-aliases) are also special-cased and, when the warning filters are `'default'` or `'always'`, they will appear only once per module, in order to avoid too many warning messages.
This behavior can be overridden using Python’s `-Wd` or `-Wa` options (see [Warning control](../using/cmdline#using-on-warnings)) and leaving *warnings* to `None`. Changed in version 3.2: Added the `warnings` argument. Changed in version 3.2: The default stream is set to [`sys.stderr`](sys#sys.stderr "sys.stderr") at instantiation time rather than import time. Changed in version 3.5: Added the tb\_locals parameter. `_makeResult()` This method returns the instance of `TestResult` used by [`run()`](#unittest.TextTestRunner.run "unittest.TextTestRunner.run"). It is not intended to be called directly, but can be overridden in subclasses to provide a custom `TestResult`. `_makeResult()` instantiates the class or callable passed in the `TextTestRunner` constructor as the `resultclass` argument. It defaults to [`TextTestResult`](#unittest.TextTestResult "unittest.TextTestResult") if no `resultclass` is provided. The result class is instantiated with the following arguments: ``` stream, descriptions, verbosity ``` `run(test)` This method is the main public interface to the `TextTestRunner`. This method takes a [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") or [`TestCase`](#unittest.TestCase "unittest.TestCase") instance. A [`TestResult`](#unittest.TestResult "unittest.TestResult") is created by calling [`_makeResult()`](#unittest.TextTestRunner._makeResult "unittest.TextTestRunner._makeResult") and the test(s) are run and the results printed to stdout. `unittest.main(module='__main__', defaultTest=None, argv=None, testRunner=None, testLoader=unittest.defaultTestLoader, exit=True, verbosity=1, failfast=None, catchbreak=None, buffer=None, warnings=None)` A command-line program that loads a set of tests from *module* and runs them; this is primarily for making test modules conveniently executable. The simplest use for this function is to include the following line at the end of a test script: ``` if __name__ == '__main__': unittest.main() ``` You can run tests with more detailed information by passing in the verbosity argument: ``` if __name__ == '__main__': unittest.main(verbosity=2) ``` The *defaultTest* argument is either the name of a single test or an iterable of test names to run if no test names are specified via *argv*. If not specified or `None` and no test names are provided via *argv*, all tests found in *module* are run. The *argv* argument can be a list of options passed to the program, with the first element being the program name. If not specified or `None`, the values of [`sys.argv`](sys#sys.argv "sys.argv") are used. The *testRunner* argument can either be a test runner class or an already created instance of it. By default `main` calls [`sys.exit()`](sys#sys.exit "sys.exit") with an exit code indicating success or failure of the tests run. The *testLoader* argument has to be a [`TestLoader`](#unittest.TestLoader "unittest.TestLoader") instance, and defaults to [`defaultTestLoader`](#unittest.defaultTestLoader "unittest.defaultTestLoader"). `main` supports being used from the interactive interpreter by passing in the argument `exit=False`. This displays the result on standard output without calling [`sys.exit()`](sys#sys.exit "sys.exit"): ``` >>> from unittest import main >>> main(module='test_module', exit=False) ``` The *failfast*, *catchbreak* and *buffer* parameters have the same effect as the same-name [command-line options](#command-line-options). 
The *warnings* argument specifies the [warning filter](warnings#warning-filter) that should be used while running the tests. If it's not specified, it will remain `None` if a `-W` option is passed to **python** (see [Warning control](../using/cmdline#using-on-warnings)), otherwise it will be set to `'default'`.

Calling `main` actually returns an instance of the `TestProgram` class. This stores the result of the tests run as the `result` attribute.

Changed in version 3.1: The *exit* parameter was added.

Changed in version 3.2: The *verbosity*, *failfast*, *catchbreak*, *buffer* and *warnings* parameters were added.

Changed in version 3.4: The *defaultTest* parameter was changed to also accept an iterable of test names.

#### load\_tests Protocol

New in version 3.2.

Modules or packages can customize how tests are loaded from them during normal test runs or test discovery by implementing a function called `load_tests`.

If a test module defines `load_tests`, it will be called by [`TestLoader.loadTestsFromModule()`](#unittest.TestLoader.loadTestsFromModule "unittest.TestLoader.loadTestsFromModule") with the following arguments:

```
load_tests(loader, standard_tests, pattern)
```

where *pattern* is passed straight through from `loadTestsFromModule`. It defaults to `None`. It should return a [`TestSuite`](#unittest.TestSuite "unittest.TestSuite").

*loader* is the instance of [`TestLoader`](#unittest.TestLoader "unittest.TestLoader") doing the loading. *standard\_tests* are the tests that would be loaded by default from the module. It is common for test modules to only want to add or remove tests from the standard set of tests. The third argument is used when loading packages as part of test discovery.

A typical `load_tests` function that loads tests from a specific set of [`TestCase`](#unittest.TestCase "unittest.TestCase") classes may look like:

```
from unittest import TestSuite

# TestCase1, TestCase2 and TestCase3 are the test case classes to load.
test_cases = (TestCase1, TestCase2, TestCase3)

def load_tests(loader, tests, pattern):
    suite = TestSuite()
    for test_class in test_cases:
        tests = loader.loadTestsFromTestCase(test_class)
        suite.addTests(tests)
    return suite
```

If discovery is started in a directory containing a package, either from the command line or by calling [`TestLoader.discover()`](#unittest.TestLoader.discover "unittest.TestLoader.discover"), then the package `__init__.py` will be checked for `load_tests`. If that function does not exist, discovery will recurse into the package as though it were just another directory. Otherwise, discovery of the package's tests will be left up to `load_tests` which is called with the following arguments:

```
load_tests(loader, standard_tests, pattern)
```

This should return a [`TestSuite`](#unittest.TestSuite "unittest.TestSuite") representing all the tests from the package. (`standard_tests` will only contain tests collected from `__init__.py`.)

Because the pattern is passed into `load_tests` the package is free to continue (and potentially modify) test discovery. A 'do nothing' `load_tests` function for a test package would look like:

```
import os

def load_tests(loader, standard_tests, pattern):
    # top level directory cached on loader instance
    this_dir = os.path.dirname(__file__)
    package_tests = loader.discover(start_dir=this_dir, pattern=pattern)
    standard_tests.addTests(package_tests)
    return standard_tests
```

Changed in version 3.5: Discovery no longer checks package names for matching *pattern* due to the impossibility of package names matching the default pattern.
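Tying the pieces of this section together, a minimal sketch of loading and running tests programmatically (the directory name is hypothetical):

```
import unittest

loader = unittest.TestLoader()
# Discover test modules matching the default 'test*.py' pattern.
suite = loader.discover(start_dir='tests', top_level_dir='.')
runner = unittest.TextTestRunner(verbosity=2)
result = runner.run(suite)
# result is a TestResult; inspect it rather than parsing the output.
print('ran', result.testsRun, 'tests; success:', result.wasSuccessful())
```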
Class and Module Fixtures
-------------------------

Class and module level fixtures are implemented in [`TestSuite`](#unittest.TestSuite "unittest.TestSuite"). When the test suite encounters a test from a new class, `tearDownClass()` from the previous class (if there is one) is called, followed by `setUpClass()` from the new class. Similarly, if a test is from a different module from the previous test then `tearDownModule` from the previous module is run, followed by `setUpModule` from the new module.

After all the tests have run the final `tearDownClass` and `tearDownModule` are run.

Note that shared fixtures do not play well with [potential] features like test parallelization and they break test isolation. They should be used with care.

The default ordering of tests created by the unittest test loaders is to group all tests from the same modules and classes together. This will lead to `setUpClass` / `setUpModule` (etc) being called exactly once per class and module. If you randomize the order, so that tests from different modules and classes are adjacent to each other, then these shared fixture functions may be called multiple times in a single test run.

Shared fixtures are not intended to work with suites with non-standard ordering. A `BaseTestSuite` still exists for frameworks that don't want to support shared fixtures.

If there are any exceptions raised during one of the shared fixture functions, the test is reported as an error. Because there is no corresponding test instance an `_ErrorHolder` object (that has the same interface as a [`TestCase`](#unittest.TestCase "unittest.TestCase")) is created to represent the error. If you are just using the standard unittest test runner then this detail doesn't matter, but if you are a framework author it may be relevant.

### setUpClass and tearDownClass

These must be implemented as class methods:

```
import unittest

class Test(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls._connection = createExpensiveConnectionObject()

    @classmethod
    def tearDownClass(cls):
        cls._connection.destroy()
```

If you want the `setUpClass` and `tearDownClass` on base classes called then you must call up to them yourself. The implementations in [`TestCase`](#unittest.TestCase "unittest.TestCase") are empty.

If an exception is raised during a `setUpClass` then the tests in the class are not run and the `tearDownClass` is not run. Skipped classes will not have `setUpClass` or `tearDownClass` run. If the exception is a [`SkipTest`](#unittest.SkipTest "unittest.SkipTest") exception then the class will be reported as having been skipped instead of as an error.

### setUpModule and tearDownModule

These should be implemented as functions:

```
def setUpModule():
    createConnection()

def tearDownModule():
    closeConnection()
```

If an exception is raised in a `setUpModule` then none of the tests in the module will be run and the `tearDownModule` will not be run. If the exception is a [`SkipTest`](#unittest.SkipTest "unittest.SkipTest") exception then the module will be reported as having been skipped instead of as an error.

To add cleanup code that must be run even in the case of an exception, use `addModuleCleanup`:

`unittest.addModuleCleanup(function, /, *args, **kwargs)`

Add a function to be called after `tearDownModule()` to clean up resources used during the test module. Functions will be called in reverse order to the order they are added (LIFO).
They are called with any arguments and keyword arguments passed into [`addModuleCleanup()`](#unittest.addModuleCleanup "unittest.addModuleCleanup") when they are added.

If `setUpModule()` fails, meaning that `tearDownModule()` is not called, then any cleanup functions added will still be called.

New in version 3.8.

`unittest.doModuleCleanups()`

This function is called unconditionally after `tearDownModule()`, or after `setUpModule()` if `setUpModule()` raises an exception.

It is responsible for calling all the cleanup functions added by [`addModuleCleanup()`](#unittest.addModuleCleanup "unittest.addModuleCleanup"). If you need cleanup functions to be called *prior* to `tearDownModule()` then you can call [`doModuleCleanups()`](#unittest.doModuleCleanups "unittest.doModuleCleanups") yourself.

[`doModuleCleanups()`](#unittest.doModuleCleanups "unittest.doModuleCleanups") pops methods off the stack of cleanup functions one at a time, so it can be called at any time.

New in version 3.8.

Signal Handling
---------------

New in version 3.2.

The [`-c/--catch`](#cmdoption-unittest-c) command-line option to unittest, along with the `catchbreak` parameter to [`unittest.main()`](#unittest.main "unittest.main"), provide more friendly handling of control-C during a test run. With catch break behavior enabled, control-C will allow the currently running test to complete, and the test run will then end and report all the results so far. A second control-C will raise a [`KeyboardInterrupt`](exceptions#KeyboardInterrupt "KeyboardInterrupt") in the usual way.

The control-C signal handler attempts to remain compatible with code or tests that install their own [`signal.SIGINT`](signal#signal.SIGINT "signal.SIGINT") handler. If the `unittest` handler is called but *isn’t* the installed [`signal.SIGINT`](signal#signal.SIGINT "signal.SIGINT") handler, i.e. it has been replaced by the system under test and delegated to, then it calls the default handler. This is normally the behavior expected by code that replaces an installed handler and delegates to it. For individual tests that need `unittest` control-C handling disabled, the [`removeHandler()`](#unittest.removeHandler "unittest.removeHandler") decorator can be used.

There are a few utility functions for framework authors to enable control-C handling functionality within test frameworks.

`unittest.installHandler()`

Install the control-C handler. When a [`signal.SIGINT`](signal#signal.SIGINT "signal.SIGINT") is received (usually in response to the user pressing control-C) all registered results have [`stop()`](#unittest.TestResult.stop "unittest.TestResult.stop") called.

`unittest.registerResult(result)`

Register a [`TestResult`](#unittest.TestResult "unittest.TestResult") object for control-C handling. Registering a result stores a weak reference to it, so it doesn’t prevent the result from being garbage collected.

Registering a [`TestResult`](#unittest.TestResult "unittest.TestResult") object has no side-effects if control-C handling is not enabled, so test frameworks can unconditionally register all results they create independently of whether or not handling is enabled.

`unittest.removeResult(result)`

Remove a registered result. Once a result has been removed then [`stop()`](#unittest.TestResult.stop "unittest.TestResult.stop") will no longer be called on that result object in response to a control-C.

`unittest.removeHandler(function=None)`

When called without arguments this function removes the control-C handler if it has been installed.
This function can also be used as a test decorator to temporarily remove the handler while the test is being executed: ``` @unittest.removeHandler def test_signal_handling(self): ... ```
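For illustration, a framework author might wire these helpers together roughly as follows. This is only a sketch of the intended calling pattern, not the implementation used by `unittest.main()`:

```
import unittest

# Build a suite and a result object however the framework prefers.
suite = unittest.defaultTestLoader.discover('.')
result = unittest.TestResult()

unittest.installHandler()        # install the control-C handler
unittest.registerResult(result)  # a control-C now calls result.stop()
try:
    suite.run(result)
finally:
    unittest.removeResult(result)

print('Ran', result.testsRun, 'tests;',
      'OK' if result.wasSuccessful() else 'FAILED')
```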
python functools — Higher-order functions and operations on callable objects

functools — Higher-order functions and operations on callable objects
======================================================================

**Source code:** [Lib/functools.py](https://github.com/python/cpython/tree/3.9/Lib/functools.py)

The [`functools`](#module-functools "functools: Higher-order functions and operations on callable objects.") module is for higher-order functions: functions that act on or return other functions. In general, any callable object can be treated as a function for the purposes of this module.

The [`functools`](#module-functools "functools: Higher-order functions and operations on callable objects.") module defines the following functions:

`@functools.cache(user_function)`

Simple lightweight unbounded function cache. Sometimes called [“memoize”](https://en.wikipedia.org/wiki/Memoization).

Returns the same as `lru_cache(maxsize=None)`, creating a thin wrapper around a dictionary lookup for the function arguments. Because it never needs to evict old values, this is smaller and faster than [`lru_cache()`](#functools.lru_cache "functools.lru_cache") with a size limit.

For example:

```
@cache
def factorial(n):
    return n * factorial(n-1) if n else 1

>>> factorial(10)      # no previously cached result, makes 11 recursive calls
3628800
>>> factorial(5)       # just looks up the cached value
120
>>> factorial(12)      # makes two new recursive calls, the other 10 are cached
479001600
```

New in version 3.9.

`@functools.cached_property(func)`

Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to [`property()`](functions#property "property"), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable.

Example:

```
class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = tuple(sequence_of_numbers)

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)
```

The mechanics of [`cached_property()`](#functools.cached_property "functools.cached_property") are somewhat different from [`property()`](functions#property "property"). A regular property blocks attribute writes unless a setter is defined. In contrast, a *cached\_property* allows writes.

The *cached\_property* decorator only runs on lookups and only when an attribute of the same name doesn’t exist. When it does run, the *cached\_property* writes to the attribute with the same name. Subsequent attribute reads and writes take precedence over the *cached\_property* method and it works like a normal attribute.

The cached value can be cleared by deleting the attribute. This allows the *cached\_property* method to run again.

Note that this decorator interferes with the operation of [**PEP 412**](https://www.python.org/dev/peps/pep-0412) key-sharing dictionaries. This means that instance dictionaries can take more space than usual.

Also, this decorator requires that the `__dict__` attribute on each instance be a mutable mapping. This means it will not work with some types, such as metaclasses (since the `__dict__` attributes on type instances are read-only proxies for the class namespace), and those that specify `__slots__` without including `__dict__` as one of the defined slots (as such classes don’t provide a `__dict__` attribute at all).
If a mutable mapping is not available or if space-efficient key sharing is desired, an effect similar to [`cached_property()`](#functools.cached_property "functools.cached_property") can be achieved by stacking [`property()`](functions#property "property") on top of [`cache()`](#functools.cache "functools.cache"):

```
class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = sequence_of_numbers

    @property
    @cache
    def stdev(self):
        return statistics.stdev(self._data)
```

New in version 3.8.

`functools.cmp_to_key(func)`

Transform an old-style comparison function to a [key function](../glossary#term-key-function). Used with tools that accept key functions (such as [`sorted()`](functions#sorted "sorted"), [`min()`](functions#min "min"), [`max()`](functions#max "max"), [`heapq.nlargest()`](heapq#heapq.nlargest "heapq.nlargest"), [`heapq.nsmallest()`](heapq#heapq.nsmallest "heapq.nsmallest"), [`itertools.groupby()`](itertools#itertools.groupby "itertools.groupby")). This function is primarily used as a transition tool for programs being converted from Python 2 which supported the use of comparison functions.

A comparison function is any callable that accepts two arguments, compares them, and returns a negative number for less-than, zero for equality, or a positive number for greater-than. A key function is a callable that accepts one argument and returns another value to be used as the sort key.

Example:

```
sorted(iterable, key=cmp_to_key(locale.strcoll))  # locale-aware sort order
```

For sorting examples and a brief sorting tutorial, see [Sorting HOW TO](../howto/sorting#sortinghowto).

New in version 3.2.

`@functools.lru_cache(user_function)`

`@functools.lru_cache(maxsize=128, typed=False)`

Decorator to wrap a function with a memoizing callable that saves up to the *maxsize* most recent calls. It can save time when an expensive or I/O bound function is periodically called with the same arguments.

Since a dictionary is used to cache results, the positional and keyword arguments to the function must be hashable.

Distinct argument patterns may be considered to be distinct calls with separate cache entries. For example, `f(a=1, b=2)` and `f(b=2, a=1)` differ in their keyword argument order and may have two separate cache entries.

If *user\_function* is specified, it must be a callable. This allows the *lru\_cache* decorator to be applied directly to a user function, leaving the *maxsize* at its default value of 128:

```
@lru_cache
def count_vowels(sentence):
    sentence = sentence.casefold()
    return sum(sentence.count(vowel) for vowel in 'aeiou')
```

If *maxsize* is set to `None`, the LRU feature is disabled and the cache can grow without bound.

If *typed* is set to true, function arguments of different types will be cached separately. For example, `f(3)` and `f(3.0)` will be treated as distinct calls with distinct results.

The wrapped function is instrumented with a `cache_parameters()` function that returns a new [`dict`](stdtypes#dict "dict") showing the values for *maxsize* and *typed*. This is for information purposes only. Mutating the values has no effect.

To help measure the effectiveness of the cache and tune the *maxsize* parameter, the wrapped function is instrumented with a `cache_info()` function that returns a [named tuple](../glossary#term-named-tuple) showing *hits*, *misses*, *maxsize* and *currsize*. In a multi-threaded environment, the hits and misses are approximate.

The decorator also provides a `cache_clear()` function for clearing or invalidating the cache.
The original underlying function is accessible through the `__wrapped__` attribute. This is useful for introspection, for bypassing the cache, or for rewrapping the function with a different cache. An [LRU (least recently used) cache](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)) works best when the most recent calls are the best predictors of upcoming calls (for example, the most popular articles on a news server tend to change each day). The cache’s size limit assures that the cache does not grow without bound on long-running processes such as web servers. In general, the LRU cache should only be used when you want to reuse previously computed values. Accordingly, it doesn’t make sense to cache functions with side-effects, functions that need to create distinct mutable objects on each call, or impure functions such as time() or random(). Example of an LRU cache for static web content: ``` @lru_cache(maxsize=32) def get_pep(num): 'Retrieve text of a Python Enhancement Proposal' resource = 'https://www.python.org/dev/peps/pep-%04d/' % num try: with urllib.request.urlopen(resource) as s: return s.read() except urllib.error.HTTPError: return 'Not Found' >>> for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991: ... pep = get_pep(n) ... print(n, len(pep)) >>> get_pep.cache_info() CacheInfo(hits=3, misses=8, maxsize=32, currsize=8) ``` Example of efficiently computing [Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number) using a cache to implement a [dynamic programming](https://en.wikipedia.org/wiki/Dynamic_programming) technique: ``` @lru_cache(maxsize=None) def fib(n): if n < 2: return n return fib(n-1) + fib(n-2) >>> [fib(n) for n in range(16)] [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610] >>> fib.cache_info() CacheInfo(hits=28, misses=16, maxsize=None, currsize=16) ``` New in version 3.2. Changed in version 3.3: Added the *typed* option. Changed in version 3.8: Added the *user\_function* option. New in version 3.9: Added the function `cache_parameters()` `@functools.total_ordering` Given a class defining one or more rich comparison ordering methods, this class decorator supplies the rest. This simplifies the effort involved in specifying all of the possible rich comparison operations: The class must define one of [`__lt__()`](../reference/datamodel#object.__lt__ "object.__lt__"), [`__le__()`](../reference/datamodel#object.__le__ "object.__le__"), [`__gt__()`](../reference/datamodel#object.__gt__ "object.__gt__"), or [`__ge__()`](../reference/datamodel#object.__ge__ "object.__ge__"). In addition, the class should supply an [`__eq__()`](../reference/datamodel#object.__eq__ "object.__eq__") method. For example: ``` @total_ordering class Student: def _is_valid_operand(self, other): return (hasattr(other, "lastname") and hasattr(other, "firstname")) def __eq__(self, other): if not self._is_valid_operand(other): return NotImplemented return ((self.lastname.lower(), self.firstname.lower()) == (other.lastname.lower(), other.firstname.lower())) def __lt__(self, other): if not self._is_valid_operand(other): return NotImplemented return ((self.lastname.lower(), self.firstname.lower()) < (other.lastname.lower(), other.firstname.lower())) ``` Note While this decorator makes it easy to create well behaved totally ordered types, it *does* come at the cost of slower execution and more complex stack traces for the derived comparison methods. 
If performance benchmarking indicates this is a bottleneck for a given application, implementing all six rich comparison methods instead is likely to provide an easy speed boost.

New in version 3.2.

Changed in version 3.4: Returning NotImplemented from the underlying comparison function for unrecognised types is now supported.

`functools.partial(func, /, *args, **keywords)`

Return a new [partial object](#partial-objects) which when called will behave like *func* called with the positional arguments *args* and keyword arguments *keywords*. If more arguments are supplied to the call, they are appended to *args*. If additional keyword arguments are supplied, they extend and override *keywords*. Roughly equivalent to:

```
def partial(func, /, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = {**keywords, **fkeywords}
        return func(*args, *fargs, **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc
```

[`partial()`](#functools.partial "functools.partial") is used for partial function application, which “freezes” some portion of a function’s arguments and/or keywords, resulting in a new object with a simplified signature. For example, [`partial()`](#functools.partial "functools.partial") can be used to create a callable that behaves like the [`int()`](functions#int "int") function where the *base* argument defaults to two:

```
>>> from functools import partial
>>> basetwo = partial(int, base=2)
>>> basetwo.__doc__ = 'Convert base 2 string to an int.'
>>> basetwo('10010')
18
```

`class functools.partialmethod(func, /, *args, **keywords)`

Return a new [`partialmethod`](#functools.partialmethod "functools.partialmethod") descriptor which behaves like [`partial`](#functools.partial "functools.partial") except that it is designed to be used as a method definition rather than being directly callable.

*func* must be a [descriptor](../glossary#term-descriptor) or a callable (objects which are both, like normal functions, are handled as descriptors).

When *func* is a descriptor (such as a normal Python function, [`classmethod()`](functions#classmethod "classmethod"), [`staticmethod()`](functions#staticmethod "staticmethod"), `abstractmethod()` or another instance of [`partialmethod`](#functools.partialmethod "functools.partialmethod")), calls to `__get__` are delegated to the underlying descriptor, and an appropriate [partial object](#partial-objects) is returned as the result.

When *func* is a non-descriptor callable, an appropriate bound method is created dynamically. This behaves like a normal Python function when used as a method: the *self* argument will be inserted as the first positional argument, even before the *args* and *keywords* supplied to the [`partialmethod`](#functools.partialmethod "functools.partialmethod") constructor.

Example:

```
>>> class Cell:
...     def __init__(self):
...         self._alive = False
...     @property
...     def alive(self):
...         return self._alive
...     def set_state(self, state):
...         self._alive = bool(state)
...     set_alive = partialmethod(set_state, True)
...     set_dead = partialmethod(set_state, False)
...
>>> c = Cell()
>>> c.alive
False
>>> c.set_alive()
>>> c.alive
True
```

New in version 3.4.

`functools.reduce(function, iterable[, initializer])`

Apply *function* of two arguments cumulatively to the items of *iterable*, from left to right, so as to reduce the iterable to a single value. For example, `reduce(lambda x, y: x+y, [1, 2, 3, 4, 5])` calculates `((((1+2)+3)+4)+5)`.
The left argument, *x*, is the accumulated value and the right argument, *y*, is the update value from the *iterable*. If the optional *initializer* is present, it is placed before the items of the iterable in the calculation, and serves as a default when the iterable is empty. If *initializer* is not given and *iterable* contains only one item, the first item is returned. Roughly equivalent to: ``` def reduce(function, iterable, initializer=None): it = iter(iterable) if initializer is None: value = next(it) else: value = initializer for element in it: value = function(value, element) return value ``` See [`itertools.accumulate()`](itertools#itertools.accumulate "itertools.accumulate") for an iterator that yields all intermediate values. `@functools.singledispatch` Transform a function into a [single-dispatch](../glossary#term-single-dispatch) [generic function](../glossary#term-generic-function). To define a generic function, decorate it with the `@singledispatch` decorator. When defining a function using `@singledispatch`, note that the dispatch happens on the type of the first argument: ``` >>> from functools import singledispatch >>> @singledispatch ... def fun(arg, verbose=False): ... if verbose: ... print("Let me just say,", end=" ") ... print(arg) ``` To add overloaded implementations to the function, use the `register()` attribute of the generic function, which can be used as a decorator. For functions annotated with types, the decorator will infer the type of the first argument automatically: ``` >>> @fun.register ... def _(arg: int, verbose=False): ... if verbose: ... print("Strength in numbers, eh?", end=" ") ... print(arg) ... >>> @fun.register ... def _(arg: list, verbose=False): ... if verbose: ... print("Enumerate this:") ... for i, elem in enumerate(arg): ... print(i, elem) ``` For code which doesn’t use type annotations, the appropriate type argument can be passed explicitly to the decorator itself: ``` >>> @fun.register(complex) ... def _(arg, verbose=False): ... if verbose: ... print("Better than complicated.", end=" ") ... print(arg.real, arg.imag) ... ``` To enable registering [lambdas](../glossary#term-lambda) and pre-existing functions, the `register()` attribute can also be used in a functional form: ``` >>> def nothing(arg, verbose=False): ... print("Nothing.") ... >>> fun.register(type(None), nothing) ``` The `register()` attribute returns the undecorated function. This enables decorator stacking, [`pickling`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back."), and the creation of unit tests for each variant independently: ``` >>> @fun.register(float) ... @fun.register(Decimal) ... def fun_num(arg, verbose=False): ... if verbose: ... print("Half of your number:", end=" ") ... print(arg / 2) ... >>> fun_num is fun False ``` When called, the generic function dispatches on the type of the first argument: ``` >>> fun("Hello, world.") Hello, world. >>> fun("test.", verbose=True) Let me just say, test. >>> fun(42, verbose=True) Strength in numbers, eh? 42 >>> fun(['spam', 'spam', 'eggs', 'spam'], verbose=True) Enumerate this: 0 spam 1 spam 2 eggs 3 spam >>> fun(None) Nothing. >>> fun(1.23) 0.615 ``` Where there is no registered implementation for a specific type, its method resolution order is used to find a more generic implementation. The original function decorated with `@singledispatch` is registered for the base [`object`](functions#object "object") type, which means it is used if no better implementation is found. 
If an implementation is registered to an [abstract base class](../glossary#term-abstract-base-class), virtual subclasses of the base class will be dispatched to that implementation:

```
>>> from collections.abc import Mapping
>>> @fun.register
... def _(arg: Mapping, verbose=False):
...     if verbose:
...         print("Keys & Values")
...     for key, value in arg.items():
...         print(key, "=>", value)
...
>>> fun({"a": "b"})
a => b
```

To check which implementation the generic function will choose for a given type, use the `dispatch()` attribute:

```
>>> fun.dispatch(float)
<function fun_num at 0x1035a2840>
>>> fun.dispatch(dict)    # note: default implementation
<function fun at 0x103fe0000>
```

To access all registered implementations, use the read-only `registry` attribute:

```
>>> fun.registry.keys()
dict_keys([<class 'NoneType'>, <class 'int'>, <class 'object'>,
          <class 'decimal.Decimal'>, <class 'list'>,
          <class 'float'>])
>>> fun.registry[float]
<function fun_num at 0x1035a2840>
>>> fun.registry[object]
<function fun at 0x103fe0000>
```

New in version 3.4.

Changed in version 3.7: The `register()` attribute now supports using type annotations.

`class functools.singledispatchmethod(func)`

Transform a method into a [single-dispatch](../glossary#term-single-dispatch) [generic function](../glossary#term-generic-function).

To define a generic method, decorate it with the `@singledispatchmethod` decorator. When defining a function using `@singledispatchmethod`, note that the dispatch happens on the type of the first non-*self* or non-*cls* argument:

```
class Negator:
    @singledispatchmethod
    def neg(self, arg):
        raise NotImplementedError("Cannot negate a")

    @neg.register
    def _(self, arg: int):
        return -arg

    @neg.register
    def _(self, arg: bool):
        return not arg
```

`@singledispatchmethod` supports nesting with other decorators such as [`@classmethod`](functions#classmethod "classmethod"). Note that to allow for `dispatcher.register`, `singledispatchmethod` must be the *outermost* decorator. Here is the `Negator` class with the `neg` methods bound to the class, rather than an instance of the class:

```
class Negator:
    @singledispatchmethod
    @classmethod
    def neg(cls, arg):
        raise NotImplementedError("Cannot negate a")

    @neg.register
    @classmethod
    def _(cls, arg: int):
        return -arg

    @neg.register
    @classmethod
    def _(cls, arg: bool):
        return not arg
```

The same pattern can be used for other similar decorators: [`@staticmethod`](functions#staticmethod "staticmethod"), [`@abstractmethod`](abc#abc.abstractmethod "abc.abstractmethod"), and others.

New in version 3.8.

`functools.update_wrapper(wrapper, wrapped, assigned=WRAPPER_ASSIGNMENTS, updated=WRAPPER_UPDATES)`

Update a *wrapper* function to look like the *wrapped* function. The optional arguments are tuples to specify which attributes of the original function are assigned directly to the matching attributes on the wrapper function and which attributes of the wrapper function are updated with the corresponding attributes from the original function. The default values for these arguments are the module level constants `WRAPPER_ASSIGNMENTS` (which assigns to the wrapper function’s `__module__`, `__name__`, `__qualname__`, `__annotations__` and `__doc__`, the documentation string) and `WRAPPER_UPDATES` (which updates the wrapper function’s `__dict__`, i.e. the instance dictionary).

To allow access to the original function for introspection and other purposes (e.g.
bypassing a caching decorator such as [`lru_cache()`](#functools.lru_cache "functools.lru_cache")), this function automatically adds a `__wrapped__` attribute to the wrapper that refers to the function being wrapped.

The main intended use for this function is in [decorator](../glossary#term-decorator) functions which wrap the decorated function and return the wrapper. If the wrapper function is not updated, the metadata of the returned function will reflect the wrapper definition rather than the original function definition, which is typically less than helpful.

[`update_wrapper()`](#functools.update_wrapper "functools.update_wrapper") may be used with callables other than functions. Any attributes named in *assigned* or *updated* that are missing from the object being wrapped are ignored (i.e. this function will not attempt to set them on the wrapper function). [`AttributeError`](exceptions#AttributeError "AttributeError") is still raised if the wrapper function itself is missing any attributes named in *updated*.

New in version 3.2: Automatic addition of the `__wrapped__` attribute.

New in version 3.2: Copying of the `__annotations__` attribute by default.

Changed in version 3.2: Missing attributes no longer trigger an [`AttributeError`](exceptions#AttributeError "AttributeError").

Changed in version 3.4: The `__wrapped__` attribute now always refers to the wrapped function, even if that function defined a `__wrapped__` attribute. (see [bpo-17482](https://bugs.python.org/issue?@action=redirect&bpo=17482))

`@functools.wraps(wrapped, assigned=WRAPPER_ASSIGNMENTS, updated=WRAPPER_UPDATES)`

This is a convenience function for invoking [`update_wrapper()`](#functools.update_wrapper "functools.update_wrapper") as a function decorator when defining a wrapper function. It is equivalent to `partial(update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated)`. For example:

```
>>> from functools import wraps
>>> def my_decorator(f):
...     @wraps(f)
...     def wrapper(*args, **kwds):
...         print('Calling decorated function')
...         return f(*args, **kwds)
...     return wrapper
...
>>> @my_decorator
... def example():
...     """Docstring"""
...     print('Called example function')
...
>>> example()
Calling decorated function
Called example function
>>> example.__name__
'example'
>>> example.__doc__
'Docstring'
```

Without the use of this decorator factory, the name of the example function would have been `'wrapper'`, and the docstring of the original `example()` would have been lost.

partial Objects
---------------

[`partial`](#functools.partial "functools.partial") objects are callable objects created by [`partial()`](#functools.partial "functools.partial"). They have three read-only attributes:

`partial.func`

A callable object or function. Calls to the [`partial`](#functools.partial "functools.partial") object will be forwarded to [`func`](#functools.partial.func "functools.partial.func") with new arguments and keywords.

`partial.args`

The leftmost positional arguments that will be prepended to the positional arguments provided to a [`partial`](#functools.partial "functools.partial") object call.

`partial.keywords`

The keyword arguments that will be supplied when the [`partial`](#functools.partial "functools.partial") object is called.

[`partial`](#functools.partial "functools.partial") objects are like `function` objects in that they are callable, weakly referenceable, and can have attributes. There are some important differences.
For instance, the [`__name__`](stdtypes#definition.__name__ "definition.__name__") and `__doc__` attributes are not created automatically. Also, [`partial`](#functools.partial "functools.partial") objects defined in classes behave like static methods and do not transform into bound methods during instance attribute look-up.
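To make the three read-only attributes concrete, here is a short interactive session reusing the `basetwo` example from above (the output shown is what CPython prints):

```
>>> from functools import partial
>>> basetwo = partial(int, base=2)
>>> basetwo.func
<class 'int'>
>>> basetwo.args
()
>>> basetwo.keywords
{'base': 2}
```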
python platform — Access to underlying platform’s identifying data

platform — Access to underlying platform’s identifying data
===========================================================

**Source code:** [Lib/platform.py](https://github.com/python/cpython/tree/3.9/Lib/platform.py)

Note Specific platforms are listed alphabetically, with Linux included in the Unix section.

Cross Platform
--------------

`platform.architecture(executable=sys.executable, bits='', linkage='')`

Queries the given executable (defaults to the Python interpreter binary) for various architecture information.

Returns a tuple `(bits, linkage)` which contains information about the bit architecture and the linkage format used for the executable. Both values are returned as strings.

Values that cannot be determined are returned as given by the parameter presets. If bits is given as `''`, the `sizeof(pointer)` (or `sizeof(long)` on Python version < 1.5.2) is used as an indicator for the supported pointer size.

The function relies on the system’s `file` command to do the actual work. This command is available on most if not all Unix platforms and some non-Unix platforms, and it only works when the executable points to the Python interpreter. Reasonable defaults are used when the above needs are not met.

Note On macOS (and perhaps other platforms), executable files may be universal files containing multiple architectures. To get at the “64-bitness” of the current interpreter, it is more reliable to query the [`sys.maxsize`](sys#sys.maxsize "sys.maxsize") attribute:

```
is_64bits = sys.maxsize > 2**32
```

`platform.machine()`

Returns the machine type, e.g. `'i386'`. An empty string is returned if the value cannot be determined.

`platform.node()`

Returns the computer’s network name (may not be fully qualified!). An empty string is returned if the value cannot be determined.

`platform.platform(aliased=0, terse=0)`

Returns a single string identifying the underlying platform with as much useful information as possible.

The output is intended to be *human readable* rather than machine parseable. It may look different on different platforms and this is intended.

If *aliased* is true, the function will use aliases for various platforms that report system names which differ from their common names, for example SunOS will be reported as Solaris. The [`system_alias()`](#platform.system_alias "platform.system_alias") function is used to implement this.

Setting *terse* to true causes the function to return only the absolute minimum information needed to identify the platform.

Changed in version 3.8: On macOS, the function now uses [`mac_ver()`](#platform.mac_ver "platform.mac_ver"), if it returns a non-empty release string, to get the macOS version rather than the darwin version.

`platform.processor()`

Returns the (real) processor name, e.g. `'amdk6'`.

An empty string is returned if the value cannot be determined. Note that many platforms do not provide this information or simply return the same value as for [`machine()`](#platform.machine "platform.machine"). NetBSD does this.

`platform.python_build()`

Returns a tuple `(buildno, builddate)` stating the Python build number and date as strings.

`platform.python_compiler()`

Returns a string identifying the compiler used for compiling Python.

`platform.python_branch()`

Returns a string identifying the Python implementation SCM branch.

`platform.python_implementation()`

Returns a string identifying the Python implementation. Possible return values are: ‘CPython’, ‘IronPython’, ‘Jython’, ‘PyPy’.
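For example (the values shown are from a hypothetical Linux build of CPython; they vary by interpreter and machine):

```
>>> import platform
>>> platform.python_implementation()
'CPython'
>>> platform.machine()
'x86_64'
```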
`platform.python_revision()`

Returns a string identifying the Python implementation SCM revision.

`platform.python_version()`

Returns the Python version as a string `'major.minor.patchlevel'`.

Note that unlike the Python `sys.version`, the returned value will always include the patchlevel (it defaults to 0).

`platform.python_version_tuple()`

Returns the Python version as a tuple `(major, minor, patchlevel)` of strings.

Note that unlike the Python `sys.version`, the returned value will always include the patchlevel (it defaults to `'0'`).

`platform.release()`

Returns the system’s release, e.g. `'2.2.0'` or `'NT'`. An empty string is returned if the value cannot be determined.

`platform.system()`

Returns the system/OS name, such as `'Linux'`, `'Darwin'`, `'Java'`, `'Windows'`. An empty string is returned if the value cannot be determined.

`platform.system_alias(system, release, version)`

Returns `(system, release, version)` aliased to common marketing names used for some systems. It also does some reordering of the information in some cases where it would otherwise cause confusion.

`platform.version()`

Returns the system’s release version, e.g. `'#3 on degas'`. An empty string is returned if the value cannot be determined.

`platform.uname()`

Fairly portable uname interface. Returns a [`namedtuple()`](collections#collections.namedtuple "collections.namedtuple") containing six attributes: [`system`](#platform.system "platform.system"), [`node`](#platform.node "platform.node"), [`release`](#platform.release "platform.release"), [`version`](#platform.version "platform.version"), [`machine`](#platform.machine "platform.machine"), and [`processor`](#platform.processor "platform.processor").

Note that this adds a sixth attribute ([`processor`](#platform.processor "platform.processor")) not present in the [`os.uname()`](os#os.uname "os.uname") result. Also, the attribute names are different for the first two attributes; [`os.uname()`](os#os.uname "os.uname") names them `sysname` and `nodename`.

Entries which cannot be determined are set to `''`.

Changed in version 3.3: Result changed from a tuple to a [`namedtuple()`](collections#collections.namedtuple "collections.namedtuple").

Java Platform
-------------

`platform.java_ver(release='', vendor='', vminfo=('', '', ''), osinfo=('', '', ''))`

Version interface for Jython.

Returns a tuple `(release, vendor, vminfo, osinfo)` with *vminfo* being a tuple `(vm_name, vm_release, vm_vendor)` and *osinfo* being a tuple `(os_name, os_version, os_arch)`. Values which cannot be determined are set to the defaults given as parameters (which all default to `''`).

Windows Platform
----------------

`platform.win32_ver(release='', version='', csd='', ptype='')`

Get additional version information from the Windows Registry and return a tuple `(release, version, csd, ptype)` referring to OS release, version number, CSD level (service pack) and OS type (multi/single processor). Values which cannot be determined are set to the defaults given as parameters (which all default to an empty string).

As a hint: *ptype* is `'Uniprocessor Free'` on single processor NT machines and `'Multiprocessor Free'` on multi processor machines. The *‘Free’* refers to the OS version being free of debugging code. It could also state *‘Checked’* which means the OS version uses debugging code, i.e. code that checks arguments, ranges, etc.

`platform.win32_edition()`

Returns a string representing the current Windows edition, or `None` if the value cannot be determined.
Possible values include but are not limited to `'Enterprise'`, `'IoTUAP'`, `'ServerStandard'`, and `'nanoserver'`.

New in version 3.8.

`platform.win32_is_iot()`

Return `True` if the Windows edition returned by [`win32_edition()`](#platform.win32_edition "platform.win32_edition") is recognized as an IoT edition.

New in version 3.8.

macOS Platform
--------------

`platform.mac_ver(release='', versioninfo=('', '', ''), machine='')`

Get macOS version information and return it as a tuple `(release, versioninfo, machine)` with *versioninfo* being a tuple `(version, dev_stage, non_release_version)`.

Entries which cannot be determined are set to `''`. All tuple entries are strings.

Unix Platforms
--------------

`platform.libc_ver(executable=sys.executable, lib='', version='', chunksize=16384)`

Tries to determine the libc version against which the file executable (defaults to the Python interpreter) is linked. Returns a tuple of strings `(lib, version)` which default to the given parameters in case the lookup fails.

Note that this function has intimate knowledge of how different libc versions add symbols to the executable and is probably only usable for executables compiled using **gcc**.

The file is read and scanned in chunks of *chunksize* bytes.

python secrets — Generate secure random numbers for managing secrets

secrets — Generate secure random numbers for managing secrets
=============================================================

New in version 3.6.

**Source code:** [Lib/secrets.py](https://github.com/python/cpython/tree/3.9/Lib/secrets.py)

The [`secrets`](#module-secrets "secrets: Generate secure random numbers for managing secrets.") module is used for generating cryptographically strong random numbers suitable for managing data such as passwords, account authentication, security tokens, and related secrets.

In particular, [`secrets`](#module-secrets "secrets: Generate secure random numbers for managing secrets.") should be used in preference to the default pseudo-random number generator in the [`random`](random#module-random "random: Generate pseudo-random numbers with various common distributions.") module, which is designed for modelling and simulation, not security or cryptography.

See also [**PEP 506**](https://www.python.org/dev/peps/pep-0506)

Random numbers
--------------

The [`secrets`](#module-secrets "secrets: Generate secure random numbers for managing secrets.") module provides access to the most secure source of randomness that your operating system provides.

`class secrets.SystemRandom`

A class for generating random numbers using the highest-quality sources provided by the operating system. See [`random.SystemRandom`](random#random.SystemRandom "random.SystemRandom") for additional details.

`secrets.choice(sequence)`

Return a randomly-chosen element from a non-empty sequence.

`secrets.randbelow(n)`

Return a random int in the range [0, *n*).

`secrets.randbits(k)`

Return an int with *k* random bits.

Generating tokens
-----------------

The [`secrets`](#module-secrets "secrets: Generate secure random numbers for managing secrets.") module provides functions for generating secure tokens, suitable for applications such as password resets, hard-to-guess URLs, and similar.

`secrets.token_bytes([nbytes=None])`

Return a random byte string containing *nbytes* bytes. If *nbytes* is `None` or not supplied, a reasonable default is used.
``` >>> token_bytes(16) b'\xebr\x17D*t\xae\xd4\xe3S\xb6\xe2\xebP1\x8b' ``` `secrets.token_hex([nbytes=None])` Return a random text string, in hexadecimal. The string has *nbytes* random bytes, each byte converted to two hex digits. If *nbytes* is `None` or not supplied, a reasonable default is used. ``` >>> token_hex(16) 'f9bf78b9a18ce6d46a0cd2b0b86df9da' ``` `secrets.token_urlsafe([nbytes=None])` Return a random URL-safe text string, containing *nbytes* random bytes. The text is Base64 encoded, so on average each byte results in approximately 1.3 characters. If *nbytes* is `None` or not supplied, a reasonable default is used. ``` >>> token_urlsafe(16) 'Drmhze6EPcv0fN_81Bj-nA' ``` ### How many bytes should tokens use? To be secure against [brute-force attacks](https://en.wikipedia.org/wiki/Brute-force_attack), tokens need to have sufficient randomness. Unfortunately, what is considered sufficient will necessarily increase as computers get more powerful and able to make more guesses in a shorter period. As of 2015, it is believed that 32 bytes (256 bits) of randomness is sufficient for the typical use-case expected for the [`secrets`](#module-secrets "secrets: Generate secure random numbers for managing secrets.") module. For those who want to manage their own token length, you can explicitly specify how much randomness is used for tokens by giving an [`int`](functions#int "int") argument to the various `token_*` functions. That argument is taken as the number of bytes of randomness to use. Otherwise, if no argument is provided, or if the argument is `None`, the `token_*` functions will use a reasonable default instead. Note That default is subject to change at any time, including during maintenance releases. Other functions --------------- `secrets.compare_digest(a, b)` Return `True` if strings *a* and *b* are equal, otherwise `False`, in such a way as to reduce the risk of [timing attacks](https://codahale.com/a-lesson-in-timing-attacks/). See [`hmac.compare_digest()`](hmac#hmac.compare_digest "hmac.compare_digest") for additional details. Recipes and best practices -------------------------- This section shows recipes and best practices for using [`secrets`](#module-secrets "secrets: Generate secure random numbers for managing secrets.") to manage a basic level of security. Generate an eight-character alphanumeric password: ``` import string import secrets alphabet = string.ascii_letters + string.digits password = ''.join(secrets.choice(alphabet) for i in range(8)) ``` Note Applications should not [store passwords in a recoverable format](http://cwe.mitre.org/data/definitions/257.html), whether plain text or encrypted. They should be salted and hashed using a cryptographically-strong one-way (irreversible) hash function. Generate a ten-character alphanumeric password with at least one lowercase character, at least one uppercase character, and at least three digits: ``` import string import secrets alphabet = string.ascii_letters + string.digits while True: password = ''.join(secrets.choice(alphabet) for i in range(10)) if (any(c.islower() for c in password) and any(c.isupper() for c in password) and sum(c.isdigit() for c in password) >= 3): break ``` Generate an [XKCD-style passphrase](https://xkcd.com/936/): ``` import secrets # On standard Linux systems, use a convenient dictionary file. # Other platforms may need to provide their own word-list. 
with open('/usr/share/dict/words') as f:
    words = [word.strip() for word in f]
    password = ' '.join(secrets.choice(words) for i in range(4))
```

Generate a hard-to-guess temporary URL containing a security token suitable for password recovery applications:

```
import secrets
url = 'https://example.com/reset=' + secrets.token_urlsafe()
```

python msvcrt — Useful routines from the MS VC++ runtime

msvcrt — Useful routines from the MS VC++ runtime
=================================================

These functions provide access to some useful capabilities on Windows platforms. Some higher-level modules use these functions to build the Windows implementations of their services. For example, the [`getpass`](getpass#module-getpass "getpass: Portable reading of passwords and retrieval of the userid.") module uses this in the implementation of the [`getpass()`](getpass#module-getpass "getpass: Portable reading of passwords and retrieval of the userid.") function.

Further documentation on these functions can be found in the Platform API documentation.

The module implements both the normal and wide char variants of the console I/O API. The normal API deals only with ASCII characters and is of limited use for internationalized applications. The wide char API should be used wherever possible.

Changed in version 3.3: Operations in this module now raise [`OSError`](exceptions#OSError "OSError") where [`IOError`](exceptions#IOError "IOError") was raised.

File Operations
---------------

`msvcrt.locking(fd, mode, nbytes)`

Lock part of a file based on file descriptor *fd* from the C runtime. Raises [`OSError`](exceptions#OSError "OSError") on failure. The locked region of the file extends from the current file position for *nbytes* bytes, and may continue beyond the end of the file. *mode* must be one of the `LK_*` constants listed below. Multiple regions in a file may be locked at the same time, but may not overlap. Adjacent regions are not merged; they must be unlocked individually.

Raises an [auditing event](sys#auditing) `msvcrt.locking` with arguments `fd`, `mode`, `nbytes`.

`msvcrt.LK_LOCK`

`msvcrt.LK_RLCK`

Locks the specified bytes. If the bytes cannot be locked, the program immediately tries again after 1 second. If, after 10 attempts, the bytes cannot be locked, [`OSError`](exceptions#OSError "OSError") is raised.

`msvcrt.LK_NBLCK`

`msvcrt.LK_NBRLCK`

Locks the specified bytes. If the bytes cannot be locked, [`OSError`](exceptions#OSError "OSError") is raised.

`msvcrt.LK_UNLCK`

Unlocks the specified bytes, which must have been previously locked.

`msvcrt.setmode(fd, flags)`

Set the line-end translation mode for the file descriptor *fd*. To set it to text mode, *flags* should be [`os.O_TEXT`](os#os.O_TEXT "os.O_TEXT"); for binary, it should be [`os.O_BINARY`](os#os.O_BINARY "os.O_BINARY").

`msvcrt.open_osfhandle(handle, flags)`

Create a C runtime file descriptor from the file handle *handle*. The *flags* parameter should be a bitwise OR of [`os.O_APPEND`](os#os.O_APPEND "os.O_APPEND"), [`os.O_RDONLY`](os#os.O_RDONLY "os.O_RDONLY"), and [`os.O_TEXT`](os#os.O_TEXT "os.O_TEXT"). The returned file descriptor may be used as a parameter to [`os.fdopen()`](os#os.fdopen "os.fdopen") to create a file object.

Raises an [auditing event](sys#auditing) `msvcrt.open_osfhandle` with arguments `handle`, `flags`.

`msvcrt.get_osfhandle(fd)`

Return the file handle for the file descriptor *fd*. Raises [`OSError`](exceptions#OSError "OSError") if *fd* is not recognized.
Raises an [auditing event](sys#auditing) `msvcrt.get_osfhandle` with argument `fd`. Console I/O ----------- `msvcrt.kbhit()` Return `True` if a keypress is waiting to be read. `msvcrt.getch()` Read a keypress and return the resulting character as a byte string. Nothing is echoed to the console. This call will block if a keypress is not already available, but will not wait for `Enter` to be pressed. If the pressed key was a special function key, this will return `'\000'` or `'\xe0'`; the next call will return the keycode. The `Control-C` keypress cannot be read with this function. `msvcrt.getwch()` Wide char variant of [`getch()`](#msvcrt.getch "msvcrt.getch"), returning a Unicode value. `msvcrt.getche()` Similar to [`getch()`](#msvcrt.getch "msvcrt.getch"), but the keypress will be echoed if it represents a printable character. `msvcrt.getwche()` Wide char variant of [`getche()`](#msvcrt.getche "msvcrt.getche"), returning a Unicode value. `msvcrt.putch(char)` Print the byte string *char* to the console without buffering. `msvcrt.putwch(unicode_char)` Wide char variant of [`putch()`](#msvcrt.putch "msvcrt.putch"), accepting a Unicode value. `msvcrt.ungetch(char)` Cause the byte string *char* to be “pushed back” into the console buffer; it will be the next character read by [`getch()`](#msvcrt.getch "msvcrt.getch") or [`getche()`](#msvcrt.getche "msvcrt.getche"). `msvcrt.ungetwch(unicode_char)` Wide char variant of [`ungetch()`](#msvcrt.ungetch "msvcrt.ungetch"), accepting a Unicode value. Other Functions --------------- `msvcrt.heapmin()` Force the `malloc()` heap to clean itself up and return unused blocks to the operating system. On failure, this raises [`OSError`](exceptions#OSError "OSError"). python test — Regression tests package for Python test — Regression tests package for Python ========================================== Note The [`test`](#module-test "test: Regression tests package containing the testing suite for Python.") package is meant for internal use by Python only. It is documented for the benefit of the core developers of Python. Any use of this package outside of Python’s standard library is discouraged as code mentioned here can change or be removed without notice between releases of Python. The [`test`](#module-test "test: Regression tests package containing the testing suite for Python.") package contains all regression tests for Python as well as the modules [`test.support`](#module-test.support "test.support: Support for Python's regression test suite.") and `test.regrtest`. [`test.support`](#module-test.support "test.support: Support for Python's regression test suite.") is used to enhance your tests while `test.regrtest` drives the testing suite. Each module in the [`test`](#module-test "test: Regression tests package containing the testing suite for Python.") package whose name starts with `test_` is a testing suite for a specific module or feature. All new tests should be written using the [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") or [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.") module. Some older tests are written using a “traditional” testing style that compares output printed to `sys.stdout`; this style of test is considered deprecated. See also `Module` [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") Writing PyUnit regression tests. 
`Module` [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.") Tests embedded in documentation strings.

Writing Unit Tests for the test package
---------------------------------------

It is preferred that tests that use the [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") module follow a few guidelines. One is to name the test module by starting it with `test_` and ending it with the name of the module being tested. The test methods in the test module should start with `test_` and end with a description of what the method is testing. This is needed so that the methods are recognized by the test driver as test methods. Also, no documentation string for the method should be included. A comment (such as `# Tests function returns only True or False`) should be used to provide documentation for test methods. This is done because documentation strings get printed out if they exist, and thus what test is being run is not stated.

A basic boilerplate is often used:

```
import unittest
from test import support

class MyTestCase1(unittest.TestCase):

    # Only use setUp() and tearDown() if necessary

    def setUp(self):
        ... code to execute in preparation for tests ...

    def tearDown(self):
        ... code to execute to clean up after tests ...

    def test_feature_one(self):
        # Test feature one.
        ... testing code ...

    def test_feature_two(self):
        # Test feature two.
        ... testing code ...

    ... more test methods ...

class MyTestCase2(unittest.TestCase):
    ... same structure as MyTestCase1 ...

... more test classes ...

if __name__ == '__main__':
    unittest.main()
```

This code pattern allows the testing suite to be run by `test.regrtest`, on its own as a script that supports the [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") CLI, or via the `python -m unittest` CLI.

The goal for regression testing is to try to break code. This leads to a few guidelines to be followed:

* The testing suite should exercise all classes, functions, and constants. This includes not just the external API that is to be presented to the outside world but also “private” code.
* Whitebox testing (examining the code being tested when the tests are being written) is preferred. Blackbox testing (testing only the published user interface) is not complete enough to make sure all boundary and edge cases are tested.
* Make sure all possible values are tested including invalid ones. This makes sure that not only all valid values are acceptable but also that improper values are handled correctly.
* Exhaust as many code paths as possible. Test where branching occurs and thus tailor input to make sure as many different paths through the code are taken.
* Add an explicit test for any bugs discovered for the tested code. This will make sure that the error does not crop up again if the code is changed in the future.
* Make sure to clean up after your tests (such as close and remove all temporary files).
* If a test is dependent on a specific condition of the operating system then verify the condition already exists before attempting the test.
* Import as few modules as possible and do it as soon as possible. This minimizes external dependencies of tests and also minimizes possible anomalous behavior from side-effects of importing a module.
* Try to maximize code reuse. On occasion, tests will vary by something as small as what type of input is used.
Minimize code duplication by subclassing a basic test class with a class that specifies the input:

```
class TestFuncAcceptsSequencesMixin:

    func = mySuperWhammyFunction

    def test_func(self):
        self.func(self.arg)

class AcceptLists(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = [1, 2, 3]

class AcceptStrings(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = 'abc'

class AcceptTuples(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = (1, 2, 3)
```

When using this pattern, remember that all classes that inherit from [`unittest.TestCase`](unittest#unittest.TestCase "unittest.TestCase") are run as tests. The `Mixin` class in the example above does not have any data and so can’t be run by itself, thus it does not inherit from [`unittest.TestCase`](unittest#unittest.TestCase "unittest.TestCase").

See also Test Driven Development A book by Kent Beck on writing tests before code.

Running tests using the command-line interface
----------------------------------------------

The [`test`](#module-test "test: Regression tests package containing the testing suite for Python.") package can be run as a script to drive Python’s regression test suite, thanks to the [`-m`](../using/cmdline#cmdoption-m) option: **python -m test**. Under the hood, it uses `test.regrtest`; the call **python -m test.regrtest** used in previous Python versions still works. Running the script by itself automatically starts running all regression tests in the [`test`](#module-test "test: Regression tests package containing the testing suite for Python.") package. It does this by finding all modules in the package whose name starts with `test_`, importing them, and executing the function `test_main()` if present or loading the tests via `unittest.TestLoader.loadTestsFromModule` if `test_main` does not exist. The names of tests to execute may also be passed to the script. Specifying a single regression test (**python -m test test\_spam**) will minimize output and only print whether the test passed or failed.

Running [`test`](#module-test "test: Regression tests package containing the testing suite for Python.") directly allows you to set which resources are available for tests to use. You do this by using the `-u` command-line option. Specifying `all` as the value for the `-u` option enables all possible resources: **python -m test -uall**. If all but one resource is desired (a more common case), a comma-separated list of resources that are not desired may be listed after `all`. The command **python -m test -uall,-audio,-largefile** will run [`test`](#module-test "test: Regression tests package containing the testing suite for Python.") with all resources except the `audio` and `largefile` resources. For a list of all resources and more command-line options, run **python -m test -h**.

Some other ways to execute the regression tests depend on what platform the tests are being executed on. On Unix, you can run **make test** at the top-level directory where Python was built. On Windows, executing **rt.bat** from your `PCbuild` directory will run all regression tests.
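As a sketch of how an individual test can cooperate with the resource mechanism described above, [`test.support`](#module-test.support "test.support: Support for Python's regression test suite.") provides a `requires()` helper. Under **python -m test**, it raises `ResourceDenied` unless the named resource was enabled with `-u` (the placeholder body below follows the boilerplate style used earlier):

```
import unittest
from test import support

class NetworkTests(unittest.TestCase):

    def test_download(self):
        # Skipped (ResourceDenied) under "python -m test" unless the
        # 'network' resource was enabled, e.g. with -unetwork or -uall.
        support.requires('network')
        ... testing code that uses the network ...
```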
python pickletools — Tools for pickle developers

pickletools — Tools for pickle developers
=========================================

**Source code:** [Lib/pickletools.py](https://github.com/python/cpython/tree/3.9/Lib/pickletools.py)

This module contains various constants relating to the intimate details of the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") module, some lengthy comments about the implementation, and a few useful functions for analyzing pickled data. The contents of this module are useful for Python core developers who are working on the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") module; ordinary users of the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") module probably won’t find the [`pickletools`](#module-pickletools "pickletools: Contains extensive comments about the pickle protocols and pickle-machine opcodes, as well as some useful functions.") module relevant.

Command line usage
------------------

New in version 3.2.

When invoked from the command line, `python -m pickletools` will disassemble the contents of one or more pickle files. Note that if you want to see the Python object stored in the pickle rather than the details of pickle format, you may want to use `-m pickle` instead. However, when the pickle file that you want to examine comes from an untrusted source, `-m pickletools` is a safer option because it does not execute pickle bytecode.

For example, with a tuple `(1, 2)` pickled in file `x.pickle`:

```
$ python -m pickle x.pickle
(1, 2)

$ python -m pickletools x.pickle
    0: \x80 PROTO      3
    2: K    BININT1    1
    4: K    BININT1    2
    6: \x86 TUPLE2
    7: q    BINPUT     0
    9: .    STOP
highest protocol among opcodes = 2
```

### Command line options

`-a, --annotate`

Annotate each line with a short opcode description.

`-o, --output=<file>`

Name of a file where the output should be written.

`-l, --indentlevel=<num>`

The number of blanks by which to indent a new MARK level.

`-m, --memo`

When multiple objects are disassembled, preserve memo between disassemblies.

`-p, --preamble=<preamble>`

When more than one pickle file is specified, print the given preamble before each disassembly.

Programmatic Interface
----------------------

`pickletools.dis(pickle, out=None, memo=None, indentlevel=4, annotate=0)`

Outputs a symbolic disassembly of the pickle to the file-like object *out*, defaulting to `sys.stdout`. *pickle* can be a string or a file-like object. *memo* can be a Python dictionary that will be used as the pickle’s memo; it can be used to perform disassemblies across multiple pickles created by the same pickler. Successive levels, indicated by `MARK` opcodes in the stream, are indented by *indentlevel* spaces. If a nonzero value is given to *annotate*, each opcode in the output is annotated with a short description. The value of *annotate* is used as a hint for the column where annotation should start.

New in version 3.2: The *annotate* argument.

`pickletools.genops(pickle)`

Provides an [iterator](../glossary#term-iterator) over all of the opcodes in a pickle, returning a sequence of `(opcode, arg, pos)` triples. *opcode* is an instance of an `OpcodeInfo` class; *arg* is the decoded value, as a Python object, of the opcode’s argument; *pos* is the position at which this opcode is located. *pickle* can be a string or a file-like object.

`pickletools.optimize(picklestring)`

Returns a new equivalent pickle string after eliminating unused `PUT` opcodes.
The optimized pickle is shorter, takes less transmission time, requires less storage space, and unpickles more efficiently. python locale — Internationalization services locale — Internationalization services ====================================== **Source code:** [Lib/locale.py](https://github.com/python/cpython/tree/3.9/Lib/locale.py) The [`locale`](#module-locale "locale: Internationalization services.") module opens access to the POSIX locale database and functionality. The POSIX locale mechanism allows programmers to deal with certain cultural issues in an application, without requiring the programmer to know all the specifics of each country where the software is executed. The [`locale`](#module-locale "locale: Internationalization services.") module is implemented on top of the `_locale` module, which in turn uses an ANSI C locale implementation if available. The [`locale`](#module-locale "locale: Internationalization services.") module defines the following exception and functions: `exception locale.Error` Exception raised when the locale passed to [`setlocale()`](#locale.setlocale "locale.setlocale") is not recognized. `locale.setlocale(category, locale=None)` If *locale* is given and not `None`, [`setlocale()`](#locale.setlocale "locale.setlocale") modifies the locale setting for the *category*. The available categories are listed in the data description below. *locale* may be a string, or an iterable of two strings (language code and encoding). If it’s an iterable, it’s converted to a locale name using the locale aliasing engine. An empty string specifies the user’s default settings. If the modification of the locale fails, the exception [`Error`](#locale.Error "locale.Error") is raised. If successful, the new locale setting is returned. If *locale* is omitted or `None`, the current setting for *category* is returned. [`setlocale()`](#locale.setlocale "locale.setlocale") is not thread-safe on most systems. Applications typically start with a call of ``` import locale locale.setlocale(locale.LC_ALL, '') ``` This sets the locale for all categories to the user’s default setting (typically specified in the `LANG` environment variable). If the locale is not changed thereafter, using multithreading should not cause problems. `locale.localeconv()` Returns the database of the local conventions as a dictionary. This dictionary has the following strings as keys: | Category | Key | Meaning | | --- | --- | --- | | [`LC_NUMERIC`](#locale.LC_NUMERIC "locale.LC_NUMERIC") | `'decimal_point'` | Decimal point character. | | | `'grouping'` | Sequence of numbers specifying which relative positions the `'thousands_sep'` is expected. If the sequence is terminated with [`CHAR_MAX`](#locale.CHAR_MAX "locale.CHAR_MAX"), no further grouping is performed. If the sequence terminates with a `0`, the last group size is repeatedly used. | | | `'thousands_sep'` | Character used between groups. | | [`LC_MONETARY`](#locale.LC_MONETARY "locale.LC_MONETARY") | `'int_curr_symbol'` | International currency symbol. | | | `'currency_symbol'` | Local currency symbol. | | | `'p_cs_precedes/n_cs_precedes'` | Whether the currency symbol precedes the value (for positive resp. negative values). | | | `'p_sep_by_space/n_sep_by_space'` | Whether the currency symbol is separated from the value by a space (for positive resp. negative values). | | | `'mon_decimal_point'` | Decimal point used for monetary values. | | | `'frac_digits'` | Number of fractional digits used in local formatting of monetary values. 
| | | `'int_frac_digits'` | Number of fractional digits used in international formatting of monetary values. | | | `'mon_thousands_sep'` | Group separator used for monetary values. | | | `'mon_grouping'` | Equivalent to `'grouping'`, used for monetary values. | | | `'positive_sign'` | Symbol used to annotate a positive monetary value. | | | `'negative_sign'` | Symbol used to annotate a negative monetary value. | | | `'p_sign_posn/n_sign_posn'` | The position of the sign (for positive resp. negative values), see below. | All numeric values can be set to [`CHAR_MAX`](#locale.CHAR_MAX "locale.CHAR_MAX") to indicate that there is no value specified in this locale. The possible values for `'p_sign_posn'` and `'n_sign_posn'` are given below. | Value | Explanation | | --- | --- | | `0` | Currency and value are surrounded by parentheses. | | `1` | The sign should precede the value and currency symbol. | | `2` | The sign should follow the value and currency symbol. | | `3` | The sign should immediately precede the value. | | `4` | The sign should immediately follow the value. | | `CHAR_MAX` | Nothing is specified in this locale. | The function sets temporarily the `LC_CTYPE` locale to the `LC_NUMERIC` locale or the `LC_MONETARY` locale if locales are different and numeric or monetary strings are non-ASCII. This temporary change affects other threads. Changed in version 3.7: The function now sets temporarily the `LC_CTYPE` locale to the `LC_NUMERIC` locale in some cases. `locale.nl_langinfo(option)` Return some locale-specific information as a string. This function is not available on all systems, and the set of possible options might also vary across platforms. The possible argument values are numbers, for which symbolic constants are available in the locale module. The [`nl_langinfo()`](#locale.nl_langinfo "locale.nl_langinfo") function accepts one of the following keys. Most descriptions are taken from the corresponding description in the GNU C library. `locale.CODESET` Get a string with the name of the character encoding used in the selected locale. `locale.D_T_FMT` Get a string that can be used as a format string for [`time.strftime()`](time#time.strftime "time.strftime") to represent date and time in a locale-specific way. `locale.D_FMT` Get a string that can be used as a format string for [`time.strftime()`](time#time.strftime "time.strftime") to represent a date in a locale-specific way. `locale.T_FMT` Get a string that can be used as a format string for [`time.strftime()`](time#time.strftime "time.strftime") to represent a time in a locale-specific way. `locale.T_FMT_AMPM` Get a format string for [`time.strftime()`](time#time.strftime "time.strftime") to represent time in the am/pm format. `DAY_1 ... DAY_7` Get the name of the n-th day of the week. Note This follows the US convention of `DAY_1` being Sunday, not the international convention (ISO 8601) that Monday is the first day of the week. `ABDAY_1 ... ABDAY_7` Get the abbreviated name of the n-th day of the week. `MON_1 ... MON_12` Get the name of the n-th month. `ABMON_1 ... ABMON_12` Get the abbreviated name of the n-th month. `locale.RADIXCHAR` Get the radix character (decimal dot, decimal comma, etc.). `locale.THOUSEP` Get the separator character for thousands (groups of three digits). `locale.YESEXPR` Get a regular expression that can be used with the regex function to recognize a positive response to a yes/no question. 
Note The expression is in the syntax suitable for the `regex()` function from the C library, which might differ from the syntax used in [`re`](re#module-re "re: Regular expression operations."). `locale.NOEXPR` Get a regular expression that can be used with the regex(3) function to recognize a negative response to a yes/no question. `locale.CRNCYSTR` Get the currency symbol, preceded by “-” if the symbol should appear before the value, “+” if the symbol should appear after the value, or “.” if the symbol should replace the radix character. `locale.ERA` Get a string that represents the era used in the current locale. Most locales do not define this value. An example of a locale which does define this value is the Japanese one. In Japan, the traditional representation of dates includes the name of the era corresponding to the then-emperor’s reign. Normally it should not be necessary to use this value directly. Specifying the `E` modifier in their format strings causes the [`time.strftime()`](time#time.strftime "time.strftime") function to use this information. The format of the returned string is not specified, and therefore you should not assume knowledge of it on different systems. `locale.ERA_D_T_FMT` Get a format string for [`time.strftime()`](time#time.strftime "time.strftime") to represent date and time in a locale-specific era-based way. `locale.ERA_D_FMT` Get a format string for [`time.strftime()`](time#time.strftime "time.strftime") to represent a date in a locale-specific era-based way. `locale.ERA_T_FMT` Get a format string for [`time.strftime()`](time#time.strftime "time.strftime") to represent a time in a locale-specific era-based way. `locale.ALT_DIGITS` Get a representation of up to 100 values used to represent the values 0 to 99. `locale.getdefaultlocale([envvars])` Tries to determine the default locale settings and returns them as a tuple of the form `(language code, encoding)`. According to POSIX, a program which has not called `setlocale(LC_ALL, '')` runs using the portable `'C'` locale. Calling `setlocale(LC_ALL, '')` lets it use the default locale as defined by the `LANG` variable. Since we do not want to interfere with the current locale setting we thus emulate the behavior in the way described above. To maintain compatibility with other platforms, not only the `LANG` variable is tested, but a list of variables given as envvars parameter. The first found to be defined will be used. *envvars* defaults to the search path used in GNU gettext; it must always contain the variable name `'LANG'`. The GNU gettext search path contains `'LC_ALL'`, `'LC_CTYPE'`, `'LANG'` and `'LANGUAGE'`, in that order. Except for the code `'C'`, the language code corresponds to [**RFC 1766**](https://tools.ietf.org/html/rfc1766.html). *language code* and *encoding* may be `None` if their values cannot be determined. `locale.getlocale(category=LC_CTYPE)` Returns the current setting for the given locale category as sequence containing *language code*, *encoding*. *category* may be one of the `LC_*` values except [`LC_ALL`](#locale.LC_ALL "locale.LC_ALL"). It defaults to [`LC_CTYPE`](#locale.LC_CTYPE "locale.LC_CTYPE"). Except for the code `'C'`, the language code corresponds to [**RFC 1766**](https://tools.ietf.org/html/rfc1766.html). *language code* and *encoding* may be `None` if their values cannot be determined. `locale.getpreferredencoding(do_setlocale=True)` Return the encoding used for text data, according to user preferences. 
User preferences are expressed differently on different systems, and might not be available programmatically on some systems, so this function only returns a guess. On some systems, it is necessary to invoke [`setlocale()`](#locale.setlocale "locale.setlocale") to obtain the user preferences, so this function is not thread-safe. If invoking setlocale is not necessary or desired, *do\_setlocale* should be set to `False`. On Android, or when the UTF-8 mode ([`-X`](../using/cmdline#id5) `utf8` option) is enabled, this function always returns `'UTF-8'`; the locale and the *do\_setlocale* argument are ignored. Changed in version 3.7: The function now always returns `UTF-8` on Android or if the UTF-8 mode is enabled.

`locale.normalize(localename)` Returns a normalized locale code for the given locale name. The returned locale code is formatted for use with [`setlocale()`](#locale.setlocale "locale.setlocale"). If normalization fails, the original name is returned unchanged. If the given encoding is not known, the function defaults to the default encoding for the locale code just like [`setlocale()`](#locale.setlocale "locale.setlocale").

`locale.resetlocale(category=LC_ALL)` Sets the locale for *category* to the default setting. The default setting is determined by calling [`getdefaultlocale()`](#locale.getdefaultlocale "locale.getdefaultlocale"). *category* defaults to [`LC_ALL`](#locale.LC_ALL "locale.LC_ALL").

`locale.strcoll(string1, string2)` Compares two strings according to the current [`LC_COLLATE`](#locale.LC_COLLATE "locale.LC_COLLATE") setting. Like any other comparison function, it returns a negative value, a positive value, or `0`, depending on whether *string1* collates before or after *string2* or is equal to it.

`locale.strxfrm(string)` Transforms a string to one that can be used in locale-aware comparisons. For example, `strxfrm(s1) < strxfrm(s2)` is equivalent to `strcoll(s1, s2) < 0`. This function can be used when the same string is compared repeatedly, e.g. when collating a sequence of strings.

`locale.format_string(format, val, grouping=False, monetary=False)` Formats a number *val* according to the current [`LC_NUMERIC`](#locale.LC_NUMERIC "locale.LC_NUMERIC") setting. The format follows the conventions of the `%` operator. For floating point values, the decimal point is modified if appropriate. If *grouping* is true, also takes the grouping into account. If *monetary* is true, the conversion uses monetary thousands separator and grouping strings. Processes formatting specifiers as in `format % val`, but takes the current locale settings into account. Changed in version 3.7: The *monetary* keyword parameter was added.

`locale.format(format, val, grouping=False, monetary=False)` Please note that this function works like [`format_string()`](#locale.format_string "locale.format_string") but will only work for exactly one `%char` specifier. For example, `'%f'` and `'%.0f'` are both valid specifiers, but `'%f KiB'` is not. For whole format strings, use [`format_string()`](#locale.format_string "locale.format_string"). Deprecated since version 3.7: Use [`format_string()`](#locale.format_string "locale.format_string") instead.

`locale.currency(val, symbol=True, grouping=False, international=False)` Formats a number *val* according to the current [`LC_MONETARY`](#locale.LC_MONETARY "locale.LC_MONETARY") settings. The returned string includes the currency symbol if *symbol* is true, which is the default. If *grouping* is true (which is not the default), grouping is done with the value.
If *international* is true (which is not the default), the international currency symbol is used. Note that this function will not work with the ‘C’ locale, so you have to set a locale via [`setlocale()`](#locale.setlocale "locale.setlocale") first. `locale.str(float)` Formats a floating point number using the same format as the built-in function `str(float)`, but takes the decimal point into account. `locale.delocalize(string)` Converts a string into a normalized number string, following the [`LC_NUMERIC`](#locale.LC_NUMERIC "locale.LC_NUMERIC") settings. New in version 3.5. `locale.atof(string, func=float)` Converts a string to a number, following the [`LC_NUMERIC`](#locale.LC_NUMERIC "locale.LC_NUMERIC") settings, by calling *func* on the result of calling [`delocalize()`](#locale.delocalize "locale.delocalize") on *string*. `locale.atoi(string)` Converts a string to an integer, following the [`LC_NUMERIC`](#locale.LC_NUMERIC "locale.LC_NUMERIC") conventions. `locale.LC_CTYPE` Locale category for the character type functions. Depending on the settings of this category, the functions of module [`string`](string#module-string "string: Common string operations.") dealing with case change their behaviour. `locale.LC_COLLATE` Locale category for sorting strings. The functions [`strcoll()`](#locale.strcoll "locale.strcoll") and [`strxfrm()`](#locale.strxfrm "locale.strxfrm") of the [`locale`](#module-locale "locale: Internationalization services.") module are affected. `locale.LC_TIME` Locale category for the formatting of time. The function [`time.strftime()`](time#time.strftime "time.strftime") follows these conventions. `locale.LC_MONETARY` Locale category for formatting of monetary values. The available options are available from the [`localeconv()`](#locale.localeconv "locale.localeconv") function. `locale.LC_MESSAGES` Locale category for message display. Python currently does not support application specific locale-aware messages. Messages displayed by the operating system, like those returned by [`os.strerror()`](os#os.strerror "os.strerror") might be affected by this category. `locale.LC_NUMERIC` Locale category for formatting numbers. The functions [`format()`](#locale.format "locale.format"), [`atoi()`](#locale.atoi "locale.atoi"), [`atof()`](#locale.atof "locale.atof") and [`str()`](#locale.str "locale.str") of the [`locale`](#module-locale "locale: Internationalization services.") module are affected by that category. All other numeric formatting operations are not affected. `locale.LC_ALL` Combination of all locale settings. If this flag is used when the locale is changed, setting the locale for all categories is attempted. If that fails for any category, no category is changed at all. When the locale is retrieved using this flag, a string indicating the setting for all categories is returned. This string can be later used to restore the settings. `locale.CHAR_MAX` This is a symbolic constant used for different values returned by [`localeconv()`](#locale.localeconv "locale.localeconv"). 
Example:

```
>>> import locale
>>> loc = locale.getlocale()  # get current locale
# use German locale; name might vary with platform
>>> locale.setlocale(locale.LC_ALL, 'de_DE')
>>> locale.strcoll('f\xe4n', 'foo')  # compare a string containing an umlaut
>>> locale.setlocale(locale.LC_ALL, '')   # use user's preferred locale
>>> locale.setlocale(locale.LC_ALL, 'C')  # use default (C) locale
>>> locale.setlocale(locale.LC_ALL, loc)  # restore saved locale
```

Background, details, hints, tips and caveats
--------------------------------------------

The C standard defines the locale as a program-wide property that may be relatively expensive to change. On top of that, some implementations are broken in such a way that frequent locale changes may cause core dumps. This makes the locale somewhat painful to use correctly.

Initially, when a program is started, the locale is the `C` locale, no matter what the user's preferred locale is. There is one exception: the [`LC_CTYPE`](#locale.LC_CTYPE "locale.LC_CTYPE") category is changed at startup to set the current locale encoding to the user's preferred locale encoding. The program must explicitly say that it wants the user's preferred locale settings for other categories by calling `setlocale(LC_ALL, '')`.

It is generally a bad idea to call [`setlocale()`](#locale.setlocale "locale.setlocale") in some library routine, since as a side effect it affects the entire program. Saving and restoring it is almost as bad: it is expensive and affects other threads that happen to run before the settings have been restored.

If, when coding a module for general use, you need a locale-independent version of an operation that is affected by the locale (such as certain formats used with [`time.strftime()`](time#time.strftime "time.strftime")), you will have to find a way to do it without using the standard library routine. Even better is convincing yourself that using locale settings is okay. Only as a last resort should you document that your module is not compatible with non-`C` locale settings.

The only way to perform numeric operations according to the locale is to use the special functions defined by this module: [`atof()`](#locale.atof "locale.atof"), [`atoi()`](#locale.atoi "locale.atoi"), [`format()`](#locale.format "locale.format"), [`str()`](#locale.str "locale.str").

There is no way to perform case conversions and character classifications according to the locale. For (Unicode) text strings these are done according to the character value only, while for byte strings, the conversions and classifications are done according to the ASCII value of the byte, and bytes whose high bit is set (i.e., non-ASCII bytes) are never converted or considered part of a character class such as letter or whitespace.

For extension writers and programs that embed Python
----------------------------------------------------

Extension modules should never call [`setlocale()`](#locale.setlocale "locale.setlocale"), except to find out what the current locale is. But since the return value can only be used portably to restore it, that is not very useful (except perhaps to find out whether or not the locale is `C`). When Python code uses the [`locale`](#module-locale "locale: Internationalization services.") module to change the locale, this also affects the embedding application.
If the embedding application doesn’t want this to happen, it should remove the `_locale` extension module (which does all the work) from the table of built-in modules in the `config.c` file, and make sure that the `_locale` module is not accessible as a shared library. Access to message catalogs -------------------------- `locale.gettext(msg)` `locale.dgettext(domain, msg)` `locale.dcgettext(domain, msg, category)` `locale.textdomain(domain)` `locale.bindtextdomain(domain, dir)` The locale module exposes the C library’s gettext interface on systems that provide this interface. It consists of the functions `gettext()`, `dgettext()`, `dcgettext()`, `textdomain()`, `bindtextdomain()`, and `bind_textdomain_codeset()`. These are similar to the same functions in the [`gettext`](gettext#module-gettext "gettext: Multilingual internationalization services.") module, but use the C library’s binary format for message catalogs, and the C library’s search algorithms for locating message catalogs. Python applications should normally find no need to invoke these functions, and should use [`gettext`](gettext#module-gettext "gettext: Multilingual internationalization services.") instead. A known exception to this rule are applications that link with additional C libraries which internally invoke `gettext()` or `dcgettext()`. For these applications, it may be necessary to bind the text domain, so that the libraries can properly locate their message catalogs.
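Putting the formatting functions above together, here is a short, hedged sketch (output depends on which locales the system has installed, and, as noted earlier, `currency()` raises `ValueError` under the plain `C` locale):

```
import locale

# Adopt the user's preferred locale for all categories (see setlocale() above).
locale.setlocale(locale.LC_ALL, '')

# Grouped integer formatting, e.g. '1,234,567' under an en_US locale.
print(locale.format_string('%d', 1234567, grouping=True))

# Monetary formatting according to LC_MONETARY.
print(locale.currency(1234.56, grouping=True))
```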
python http — HTTP modules http — HTTP modules =================== **Source code:** [Lib/http/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/http/__init__.py) [`http`](#module-http "http: HTTP status codes and messages") is a package that collects several modules for working with the HyperText Transfer Protocol: * [`http.client`](http.client#module-http.client "http.client: HTTP and HTTPS protocol client (requires sockets).") is a low-level HTTP protocol client; for high-level URL opening use [`urllib.request`](urllib.request#module-urllib.request "urllib.request: Extensible library for opening URLs.") * [`http.server`](http.server#module-http.server "http.server: HTTP server and request handlers.") contains basic HTTP server classes based on [`socketserver`](socketserver#module-socketserver "socketserver: A framework for network servers.") * [`http.cookies`](http.cookies#module-http.cookies "http.cookies: Support for HTTP state management (cookies).") has utilities for implementing state management with cookies * [`http.cookiejar`](http.cookiejar#module-http.cookiejar "http.cookiejar: Classes for automatic handling of HTTP cookies.") provides persistence of cookies [`http`](#module-http "http: HTTP status codes and messages") is also a module that defines a number of HTTP status codes and associated messages through the [`http.HTTPStatus`](#http.HTTPStatus "http.HTTPStatus") enum: `class http.HTTPStatus` New in version 3.5. A subclass of [`enum.IntEnum`](enum#enum.IntEnum "enum.IntEnum") that defines a set of HTTP status codes, reason phrases and long descriptions written in English. Usage: ``` >>> from http import HTTPStatus >>> HTTPStatus.OK <HTTPStatus.OK: 200> >>> HTTPStatus.OK == 200 True >>> HTTPStatus.OK.value 200 >>> HTTPStatus.OK.phrase 'OK' >>> HTTPStatus.OK.description 'Request fulfilled, document follows' >>> list(HTTPStatus) [<HTTPStatus.CONTINUE: 100>, <HTTPStatus.SWITCHING_PROTOCOLS: 101>, ...] 
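>>> # A hedged aside, not from the original docs: because HTTPStatus is an
>>> # IntEnum, a member can also be looked up from a bare integer code.
>>> HTTPStatus(404)
<HTTPStatus.NOT_FOUND: 404>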
``` HTTP status codes ----------------- Supported, [IANA-registered](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml) status codes available in [`http.HTTPStatus`](#http.HTTPStatus "http.HTTPStatus") are: | Code | Enum Name | Details | | --- | --- | --- | | `100` | `CONTINUE` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.2.1 | | `101` | `SWITCHING_PROTOCOLS` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.2.2 | | `102` | `PROCESSING` | WebDAV [**RFC 2518**](https://tools.ietf.org/html/rfc2518.html), Section 10.1 | | `103` | `EARLY_HINTS` | An HTTP Status Code for Indicating Hints [**RFC 8297**](https://tools.ietf.org/html/rfc8297.html) | | `200` | `OK` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.3.1 | | `201` | `CREATED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.3.2 | | `202` | `ACCEPTED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.3.3 | | `203` | `NON_AUTHORITATIVE_INFORMATION` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.3.4 | | `204` | `NO_CONTENT` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.3.5 | | `205` | `RESET_CONTENT` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.3.6 | | `206` | `PARTIAL_CONTENT` | HTTP/1.1 [**RFC 7233**](https://tools.ietf.org/html/rfc7233.html), Section 4.1 | | `207` | `MULTI_STATUS` | WebDAV [**RFC 4918**](https://tools.ietf.org/html/rfc4918.html), Section 11.1 | | `208` | `ALREADY_REPORTED` | WebDAV Binding Extensions [**RFC 5842**](https://tools.ietf.org/html/rfc5842.html), Section 7.1 (Experimental) | | `226` | `IM_USED` | Delta Encoding in HTTP [**RFC 3229**](https://tools.ietf.org/html/rfc3229.html), Section 10.4.1 | | `300` | `MULTIPLE_CHOICES` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.4.1 | | `301` | `MOVED_PERMANENTLY` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.4.2 | | `302` | `FOUND` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.4.3 | | `303` | `SEE_OTHER` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.4.4 | | `304` | `NOT_MODIFIED` | HTTP/1.1 [**RFC 7232**](https://tools.ietf.org/html/rfc7232.html), Section 4.1 | | `305` | `USE_PROXY` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.4.5 | | `307` | `TEMPORARY_REDIRECT` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.4.7 | | `308` | `PERMANENT_REDIRECT` | Permanent Redirect [**RFC 7238**](https://tools.ietf.org/html/rfc7238.html), Section 3 (Experimental) | | `400` | `BAD_REQUEST` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.1 | | `401` | `UNAUTHORIZED` | HTTP/1.1 Authentication [**RFC 7235**](https://tools.ietf.org/html/rfc7235.html), Section 3.1 | | `402` | `PAYMENT_REQUIRED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.2 | | `403` | `FORBIDDEN` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.3 | | `404` | `NOT_FOUND` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.4 | | `405` | `METHOD_NOT_ALLOWED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.5 | | `406` | `NOT_ACCEPTABLE` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), 
Section 6.5.6 | | `407` | `PROXY_AUTHENTICATION_REQUIRED` | HTTP/1.1 Authentication [**RFC 7235**](https://tools.ietf.org/html/rfc7235.html), Section 3.2 | | `408` | `REQUEST_TIMEOUT` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.7 | | `409` | `CONFLICT` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.8 | | `410` | `GONE` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.9 | | `411` | `LENGTH_REQUIRED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.10 | | `412` | `PRECONDITION_FAILED` | HTTP/1.1 [**RFC 7232**](https://tools.ietf.org/html/rfc7232.html), Section 4.2 | | `413` | `REQUEST_ENTITY_TOO_LARGE` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.11 | | `414` | `REQUEST_URI_TOO_LONG` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.12 | | `415` | `UNSUPPORTED_MEDIA_TYPE` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.13 | | `416` | `REQUESTED_RANGE_NOT_SATISFIABLE` | HTTP/1.1 Range Requests [**RFC 7233**](https://tools.ietf.org/html/rfc7233.html), Section 4.4 | | `417` | `EXPECTATION_FAILED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.14 | | `418` | `IM_A_TEAPOT` | HTCPCP/1.0 [**RFC 2324**](https://tools.ietf.org/html/rfc2324.html), Section 2.3.2 | | `421` | `MISDIRECTED_REQUEST` | HTTP/2 [**RFC 7540**](https://tools.ietf.org/html/rfc7540.html), Section 9.1.2 | | `422` | `UNPROCESSABLE_ENTITY` | WebDAV [**RFC 4918**](https://tools.ietf.org/html/rfc4918.html), Section 11.2 | | `423` | `LOCKED` | WebDAV [**RFC 4918**](https://tools.ietf.org/html/rfc4918.html), Section 11.3 | | `424` | `FAILED_DEPENDENCY` | WebDAV [**RFC 4918**](https://tools.ietf.org/html/rfc4918.html), Section 11.4 | | `425` | `TOO_EARLY` | Using Early Data in HTTP [**RFC 8470**](https://tools.ietf.org/html/rfc8470.html) | | `426` | `UPGRADE_REQUIRED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.5.15 | | `428` | `PRECONDITION_REQUIRED` | Additional HTTP Status Codes [**RFC 6585**](https://tools.ietf.org/html/rfc6585.html) | | `429` | `TOO_MANY_REQUESTS` | Additional HTTP Status Codes [**RFC 6585**](https://tools.ietf.org/html/rfc6585.html) | | `431` | `REQUEST_HEADER_FIELDS_TOO_LARGE` | Additional HTTP Status Codes [**RFC 6585**](https://tools.ietf.org/html/rfc6585.html) | | `451` | `UNAVAILABLE_FOR_LEGAL_REASONS` | An HTTP Status Code to Report Legal Obstacles [**RFC 7725**](https://tools.ietf.org/html/rfc7725.html) | | `500` | `INTERNAL_SERVER_ERROR` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.6.1 | | `501` | `NOT_IMPLEMENTED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.6.2 | | `502` | `BAD_GATEWAY` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.6.3 | | `503` | `SERVICE_UNAVAILABLE` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.6.4 | | `504` | `GATEWAY_TIMEOUT` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.6.5 | | `505` | `HTTP_VERSION_NOT_SUPPORTED` | HTTP/1.1 [**RFC 7231**](https://tools.ietf.org/html/rfc7231.html), Section 6.6.6 | | `506` | `VARIANT_ALSO_NEGOTIATES` | Transparent Content Negotiation in HTTP [**RFC 2295**](https://tools.ietf.org/html/rfc2295.html), Section 8.1 (Experimental) | | `507` | `INSUFFICIENT_STORAGE` | WebDAV [**RFC 
4918**](https://tools.ietf.org/html/rfc4918.html), Section 11.5 | | `508` | `LOOP_DETECTED` | WebDAV Binding Extensions [**RFC 5842**](https://tools.ietf.org/html/rfc5842.html), Section 7.2 (Experimental) | | `510` | `NOT_EXTENDED` | An HTTP Extension Framework [**RFC 2774**](https://tools.ietf.org/html/rfc2774.html), Section 7 (Experimental) | | `511` | `NETWORK_AUTHENTICATION_REQUIRED` | Additional HTTP Status Codes [**RFC 6585**](https://tools.ietf.org/html/rfc6585.html), Section 6 | In order to preserve backwards compatibility, enum values are also present in the [`http.client`](http.client#module-http.client "http.client: HTTP and HTTPS protocol client (requires sockets).") module in the form of constants. The enum name is equal to the constant name (i.e. `http.HTTPStatus.OK` is also available as `http.client.OK`). Changed in version 3.7: Added `421 MISDIRECTED_REQUEST` status code. New in version 3.8: Added `451 UNAVAILABLE_FOR_LEGAL_REASONS` status code. New in version 3.9: Added `103 EARLY_HINTS`, `418 IM_A_TEAPOT` and `425 TOO_EARLY` status codes. python Structured Markup Processing Tools Structured Markup Processing Tools ================================== Python supports a variety of modules to work with various forms of structured data markup. This includes modules to work with the Standard Generalized Markup Language (SGML) and the Hypertext Markup Language (HTML), and several interfaces for working with the Extensible Markup Language (XML). * [`html` — HyperText Markup Language support](html) * [`html.parser` — Simple HTML and XHTML parser](html.parser) + [Example HTML Parser Application](html.parser#example-html-parser-application) + [`HTMLParser` Methods](html.parser#htmlparser-methods) + [Examples](html.parser#examples) * [`html.entities` — Definitions of HTML general entities](html.entities) * [XML Processing Modules](xml) + [XML vulnerabilities](xml#xml-vulnerabilities) + [The `defusedxml` Package](xml#the-defusedxml-package) * [`xml.etree.ElementTree` — The ElementTree XML API](xml.etree.elementtree) + [Tutorial](xml.etree.elementtree#tutorial) - [XML tree and elements](xml.etree.elementtree#xml-tree-and-elements) - [Parsing XML](xml.etree.elementtree#parsing-xml) - [Pull API for non-blocking parsing](xml.etree.elementtree#pull-api-for-non-blocking-parsing) - [Finding interesting elements](xml.etree.elementtree#finding-interesting-elements) - [Modifying an XML File](xml.etree.elementtree#modifying-an-xml-file) - [Building XML documents](xml.etree.elementtree#building-xml-documents) - [Parsing XML with Namespaces](xml.etree.elementtree#parsing-xml-with-namespaces) + [XPath support](xml.etree.elementtree#xpath-support) - [Example](xml.etree.elementtree#example) - [Supported XPath syntax](xml.etree.elementtree#supported-xpath-syntax) + [Reference](xml.etree.elementtree#reference) - [Functions](xml.etree.elementtree#functions) + [XInclude support](xml.etree.elementtree#xinclude-support) - [Example](xml.etree.elementtree#id3) + [Reference](xml.etree.elementtree#id4) - [Functions](xml.etree.elementtree#elementinclude-functions) - [Element Objects](xml.etree.elementtree#element-objects) - [ElementTree Objects](xml.etree.elementtree#elementtree-objects) - [QName Objects](xml.etree.elementtree#qname-objects) - [TreeBuilder Objects](xml.etree.elementtree#treebuilder-objects) - [XMLParser Objects](xml.etree.elementtree#xmlparser-objects) - [XMLPullParser Objects](xml.etree.elementtree#xmlpullparser-objects) - [Exceptions](xml.etree.elementtree#exceptions) * [`xml.dom` — The 
Document Object Model API](xml.dom) + [Module Contents](xml.dom#module-contents) + [Objects in the DOM](xml.dom#objects-in-the-dom) - [DOMImplementation Objects](xml.dom#domimplementation-objects) - [Node Objects](xml.dom#node-objects) - [NodeList Objects](xml.dom#nodelist-objects) - [DocumentType Objects](xml.dom#documenttype-objects) - [Document Objects](xml.dom#document-objects) - [Element Objects](xml.dom#element-objects) - [Attr Objects](xml.dom#attr-objects) - [NamedNodeMap Objects](xml.dom#namednodemap-objects) - [Comment Objects](xml.dom#comment-objects) - [Text and CDATASection Objects](xml.dom#text-and-cdatasection-objects) - [ProcessingInstruction Objects](xml.dom#processinginstruction-objects) - [Exceptions](xml.dom#exceptions) + [Conformance](xml.dom#conformance) - [Type Mapping](xml.dom#type-mapping) - [Accessor Methods](xml.dom#accessor-methods) * [`xml.dom.minidom` — Minimal DOM implementation](xml.dom.minidom) + [DOM Objects](xml.dom.minidom#dom-objects) + [DOM Example](xml.dom.minidom#dom-example) + [minidom and the DOM standard](xml.dom.minidom#minidom-and-the-dom-standard) * [`xml.dom.pulldom` — Support for building partial DOM trees](xml.dom.pulldom) + [DOMEventStream Objects](xml.dom.pulldom#domeventstream-objects) * [`xml.sax` — Support for SAX2 parsers](xml.sax) + [SAXException Objects](xml.sax#saxexception-objects) * [`xml.sax.handler` — Base classes for SAX handlers](xml.sax.handler) + [ContentHandler Objects](xml.sax.handler#contenthandler-objects) + [DTDHandler Objects](xml.sax.handler#dtdhandler-objects) + [EntityResolver Objects](xml.sax.handler#entityresolver-objects) + [ErrorHandler Objects](xml.sax.handler#errorhandler-objects) * [`xml.sax.saxutils` — SAX Utilities](xml.sax.utils) * [`xml.sax.xmlreader` — Interface for XML parsers](xml.sax.reader) + [XMLReader Objects](xml.sax.reader#xmlreader-objects) + [IncrementalParser Objects](xml.sax.reader#incrementalparser-objects) + [Locator Objects](xml.sax.reader#locator-objects) + [InputSource Objects](xml.sax.reader#inputsource-objects) + [The `Attributes` Interface](xml.sax.reader#the-attributes-interface) + [The `AttributesNS` Interface](xml.sax.reader#the-attributesns-interface) * [`xml.parsers.expat` — Fast XML parsing using Expat](pyexpat) + [XMLParser Objects](pyexpat#xmlparser-objects) + [ExpatError Exceptions](pyexpat#expaterror-exceptions) + [Example](pyexpat#example) + [Content Model Descriptions](pyexpat#module-xml.parsers.expat.model) + [Expat error constants](pyexpat#module-xml.parsers.expat.errors) python Platform Support Platform Support ================ The [`asyncio`](asyncio#module-asyncio "asyncio: Asynchronous I/O.") module is designed to be portable, but some platforms have subtle differences and limitations due to the platforms’ underlying architecture and capabilities. All Platforms ------------- * [`loop.add_reader()`](asyncio-eventloop#asyncio.loop.add_reader "asyncio.loop.add_reader") and [`loop.add_writer()`](asyncio-eventloop#asyncio.loop.add_writer "asyncio.loop.add_writer") cannot be used to monitor file I/O. 
Windows
-------

**Source code:** [Lib/asyncio/proactor\_events.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/proactor_events.py), [Lib/asyncio/windows\_events.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/windows_events.py), [Lib/asyncio/windows\_utils.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/windows_utils.py)

Changed in version 3.8: On Windows, [`ProactorEventLoop`](asyncio-eventloop#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") is now the default event loop.

No event loop on Windows supports the following methods:

* [`loop.create_unix_connection()`](asyncio-eventloop#asyncio.loop.create_unix_connection "asyncio.loop.create_unix_connection") and [`loop.create_unix_server()`](asyncio-eventloop#asyncio.loop.create_unix_server "asyncio.loop.create_unix_server") are not supported. The [`socket.AF_UNIX`](socket#socket.AF_UNIX "socket.AF_UNIX") socket family is specific to Unix.
* [`loop.add_signal_handler()`](asyncio-eventloop#asyncio.loop.add_signal_handler "asyncio.loop.add_signal_handler") and [`loop.remove_signal_handler()`](asyncio-eventloop#asyncio.loop.remove_signal_handler "asyncio.loop.remove_signal_handler") are not supported.

[`SelectorEventLoop`](asyncio-eventloop#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") has the following limitations:

* [`SelectSelector`](selectors#selectors.SelectSelector "selectors.SelectSelector") is used to wait on socket events: it supports sockets and is limited to 512 sockets.
* [`loop.add_reader()`](asyncio-eventloop#asyncio.loop.add_reader "asyncio.loop.add_reader") and [`loop.add_writer()`](asyncio-eventloop#asyncio.loop.add_writer "asyncio.loop.add_writer") only accept socket handles (e.g. pipe file descriptors are not supported).
* Pipes are not supported, so the [`loop.connect_read_pipe()`](asyncio-eventloop#asyncio.loop.connect_read_pipe "asyncio.loop.connect_read_pipe") and [`loop.connect_write_pipe()`](asyncio-eventloop#asyncio.loop.connect_write_pipe "asyncio.loop.connect_write_pipe") methods are not implemented.
* [Subprocesses](asyncio-subprocess#asyncio-subprocess) are not supported, i.e. the [`loop.subprocess_exec()`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec") and [`loop.subprocess_shell()`](asyncio-eventloop#asyncio.loop.subprocess_shell "asyncio.loop.subprocess_shell") methods are not implemented.

[`ProactorEventLoop`](asyncio-eventloop#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") has the following limitations:

* The [`loop.add_reader()`](asyncio-eventloop#asyncio.loop.add_reader "asyncio.loop.add_reader") and [`loop.add_writer()`](asyncio-eventloop#asyncio.loop.add_writer "asyncio.loop.add_writer") methods are not supported.

The resolution of the monotonic clock on Windows is usually around 15.6 msec. The best resolution is 0.5 msec. The resolution depends on the hardware (availability of [HPET](https://en.wikipedia.org/wiki/High_Precision_Event_Timer)) and on the Windows configuration.

### Subprocess Support on Windows

On Windows, the default event loop [`ProactorEventLoop`](asyncio-eventloop#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") supports subprocesses, whereas [`SelectorEventLoop`](asyncio-eventloop#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") does not.
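For illustration, a minimal subprocess sketch (an assumption-laden example, not from the reference; the command and its output are placeholders) that runs on the default Windows loop:

```
import asyncio

async def main():
    # ProactorEventLoop (the Windows default since 3.8) supports this;
    # SelectorEventLoop would raise NotImplementedError.
    proc = await asyncio.create_subprocess_shell(
        'echo hello',
        stdout=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    print(out.decode().strip())

asyncio.run(main())
```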
The [`policy.set_child_watcher()`](asyncio-policy#asyncio.AbstractEventLoopPolicy.set_child_watcher "asyncio.AbstractEventLoopPolicy.set_child_watcher") function is also not supported, as [`ProactorEventLoop`](asyncio-eventloop#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") has a different mechanism to watch child processes. macOS ----- Modern macOS versions are fully supported. #### macOS <= 10.8 On macOS 10.6, 10.7 and 10.8, the default event loop uses [`selectors.KqueueSelector`](selectors#selectors.KqueueSelector "selectors.KqueueSelector"), which does not support character devices on these versions. The [`SelectorEventLoop`](asyncio-eventloop#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") can be manually configured to use [`SelectSelector`](selectors#selectors.SelectSelector "selectors.SelectSelector") or [`PollSelector`](selectors#selectors.PollSelector "selectors.PollSelector") to support character devices on these older versions of macOS. Example: ``` import asyncio import selectors selector = selectors.SelectSelector() loop = asyncio.SelectorEventLoop(selector) asyncio.set_event_loop(loop) ```
python syslog — Unix syslog library routines syslog — Unix syslog library routines ===================================== This module provides an interface to the Unix `syslog` library routines. Refer to the Unix manual pages for a detailed description of the `syslog` facility. This module wraps the system `syslog` family of routines. A pure Python library that can speak to a syslog server is available in the [`logging.handlers`](logging.handlers#module-logging.handlers "logging.handlers: Handlers for the logging module.") module as `SysLogHandler`. The module defines the following functions: `syslog.syslog(message)` `syslog.syslog(priority, message)` Send the string *message* to the system logger. A trailing newline is added if necessary. Each message is tagged with a priority composed of a *facility* and a *level*. The optional *priority* argument, which defaults to `LOG_INFO`, determines the message priority. If the facility is not encoded in *priority* using logical-or (`LOG_INFO | LOG_USER`), the value given in the [`openlog()`](#syslog.openlog "syslog.openlog") call is used. If [`openlog()`](#syslog.openlog "syslog.openlog") has not been called prior to the call to [`syslog()`](#module-syslog "syslog: An interface to the Unix syslog library routines. (Unix)"), `openlog()` will be called with no arguments. Raises an [auditing event](sys#auditing) `syslog.syslog` with arguments `priority`, `message`. `syslog.openlog([ident[, logoption[, facility]]])` Logging options of subsequent [`syslog()`](#module-syslog "syslog: An interface to the Unix syslog library routines. (Unix)") calls can be set by calling [`openlog()`](#syslog.openlog "syslog.openlog"). [`syslog()`](#module-syslog "syslog: An interface to the Unix syslog library routines. (Unix)") will call [`openlog()`](#syslog.openlog "syslog.openlog") with no arguments if the log is not currently open. The optional *ident* keyword argument is a string which is prepended to every message, and defaults to `sys.argv[0]` with leading path components stripped. The optional *logoption* keyword argument (default is 0) is a bit field – see below for possible values to combine. The optional *facility* keyword argument (default is `LOG_USER`) sets the default facility for messages which do not have a facility explicitly encoded. Raises an [auditing event](sys#auditing) `syslog.openlog` with arguments `ident`, `logoption`, `facility`. Changed in version 3.2: In previous versions, keyword arguments were not allowed, and *ident* was required. The default for *ident* was dependent on the system libraries, and often was `python` instead of the name of the Python program file. `syslog.closelog()` Reset the syslog module values and call the system library `closelog()`. This causes the module to behave as it does when initially imported. For example, [`openlog()`](#syslog.openlog "syslog.openlog") will be called on the first [`syslog()`](#module-syslog "syslog: An interface to the Unix syslog library routines. (Unix)") call (if [`openlog()`](#syslog.openlog "syslog.openlog") hasn’t already been called), and *ident* and other [`openlog()`](#syslog.openlog "syslog.openlog") parameters are reset to defaults. Raises an [auditing event](sys#auditing) `syslog.closelog` with no arguments. `syslog.setlogmask(maskpri)` Set the priority mask to *maskpri* and return the previous mask value. Calls to [`syslog()`](#module-syslog "syslog: An interface to the Unix syslog library routines. (Unix)") with a priority level not set in *maskpri* are ignored. 
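For instance, a small sketch (an illustration, not from the reference; it uses the `LOG_UPTO()` helper described just below) that silences `LOG_DEBUG` messages:

```
import syslog

# Only priorities up to and including LOG_INFO pass the mask;
# LOG_DEBUG messages are ignored until the mask is changed back.
previous = syslog.setlogmask(syslog.LOG_UPTO(syslog.LOG_INFO))
syslog.syslog(syslog.LOG_DEBUG, 'dropped by the mask')
syslog.syslog(syslog.LOG_INFO, 'still logged')
```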
The default is to log all priorities. The function `LOG_MASK(pri)` calculates the mask for the individual priority *pri*. The function `LOG_UPTO(pri)` calculates the mask for all priorities up to and including *pri*. Raises an [auditing event](sys#auditing) `syslog.setlogmask` with argument `maskpri`.

The module defines the following constants:

Priority levels (high to low): `LOG_EMERG`, `LOG_ALERT`, `LOG_CRIT`, `LOG_ERR`, `LOG_WARNING`, `LOG_NOTICE`, `LOG_INFO`, `LOG_DEBUG`.

Facilities: `LOG_KERN`, `LOG_USER`, `LOG_MAIL`, `LOG_DAEMON`, `LOG_AUTH`, `LOG_LPR`, `LOG_NEWS`, `LOG_UUCP`, `LOG_CRON`, `LOG_SYSLOG`, `LOG_LOCAL0` to `LOG_LOCAL7`, and, if defined in `<syslog.h>`, `LOG_AUTHPRIV`.

Log options: `LOG_PID`, `LOG_CONS`, `LOG_NDELAY`, and, if defined in `<syslog.h>`, `LOG_ODELAY`, `LOG_NOWAIT`, and `LOG_PERROR`.

Examples
--------

### Simple example

A simple set of examples:

```
import syslog

syslog.syslog('Processing started')
if error:
    syslog.syslog(syslog.LOG_ERR, 'Processing failed')
```

An example of setting some log options; these include the process ID in logged messages and write the messages to the destination facility used for mail logging:

```
syslog.openlog(logoption=syslog.LOG_PID, facility=syslog.LOG_MAIL)
syslog.syslog('E-mail processing initiated...')
```

python bisect — Array bisection algorithm

bisect — Array bisection algorithm
==================================

**Source code:** [Lib/bisect.py](https://github.com/python/cpython/tree/3.9/Lib/bisect.py)

This module provides support for maintaining a list in sorted order without having to sort the list after each insertion. For long lists of items with expensive comparison operations, this can be an improvement over the more common approach. The module is called [`bisect`](#module-bisect "bisect: Array bisection algorithms for binary searching.") because it uses a basic bisection algorithm to do its work. The source code may be most useful as a working example of the algorithm (the boundary conditions are already right!).

The following functions are provided:

`bisect.bisect_left(a, x, lo=0, hi=len(a))` Locate the insertion point for *x* in *a* to maintain sorted order. The parameters *lo* and *hi* may be used to specify a subset of the list which should be considered; by default the entire list is used. If *x* is already present in *a*, the insertion point will be before (to the left of) any existing entries. The return value is suitable for use as the first parameter to `list.insert()` assuming that *a* is already sorted. The returned insertion point *i* partitions the array *a* into two halves so that `all(val < x for val in a[lo:i])` for the left side and `all(val >= x for val in a[i:hi])` for the right side.

`bisect.bisect_right(a, x, lo=0, hi=len(a))` `bisect.bisect(a, x, lo=0, hi=len(a))` Similar to [`bisect_left()`](#bisect.bisect_left "bisect.bisect_left"), but returns an insertion point which comes after (to the right of) any existing entries of *x* in *a*. The returned insertion point *i* partitions the array *a* into two halves so that `all(val <= x for val in a[lo:i])` for the left side and `all(val > x for val in a[i:hi])` for the right side.

`bisect.insort_left(a, x, lo=0, hi=len(a))` Insert *x* in *a* in sorted order. This is equivalent to `a.insert(bisect.bisect_left(a, x, lo, hi), x)` assuming that *a* is already sorted. Keep in mind that the O(log n) search is dominated by the slow O(n) insertion step.
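As a quick illustration (not part of the original reference), `insort_left()` keeps a list sorted as new items arrive:

```
>>> from bisect import insort_left
>>> scores = [1, 3, 4, 7]
>>> insort_left(scores, 5)
>>> scores
[1, 3, 4, 5, 7]
```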
`bisect.insort_right(a, x, lo=0, hi=len(a))` `bisect.insort(a, x, lo=0, hi=len(a))` Similar to [`insort_left()`](#bisect.insort_left "bisect.insort_left"), but inserting *x* in *a* after any existing entries of *x*. See also [SortedCollection recipe](https://code.activestate.com/recipes/577197-sortedcollection/) that uses bisect to build a full-featured collection class with straight-forward search methods and support for a key-function. The keys are precomputed to save unnecessary calls to the key function during searches. Searching Sorted Lists ---------------------- The above [`bisect()`](#module-bisect "bisect: Array bisection algorithms for binary searching.") functions are useful for finding insertion points but can be tricky or awkward to use for common searching tasks. The following five functions show how to transform them into the standard lookups for sorted lists: ``` def index(a, x): 'Locate the leftmost value exactly equal to x' i = bisect_left(a, x) if i != len(a) and a[i] == x: return i raise ValueError def find_lt(a, x): 'Find rightmost value less than x' i = bisect_left(a, x) if i: return a[i-1] raise ValueError def find_le(a, x): 'Find rightmost value less than or equal to x' i = bisect_right(a, x) if i: return a[i-1] raise ValueError def find_gt(a, x): 'Find leftmost value greater than x' i = bisect_right(a, x) if i != len(a): return a[i] raise ValueError def find_ge(a, x): 'Find leftmost item greater than or equal to x' i = bisect_left(a, x) if i != len(a): return a[i] raise ValueError ``` Other Examples -------------- The [`bisect()`](#module-bisect "bisect: Array bisection algorithms for binary searching.") function can be useful for numeric table lookups. This example uses [`bisect()`](#module-bisect "bisect: Array bisection algorithms for binary searching.") to look up a letter grade for an exam score (say) based on a set of ordered numeric breakpoints: 90 and up is an ‘A’, 80 to 89 is a ‘B’, and so on: ``` >>> def grade(score, breakpoints=[60, 70, 80, 90], grades='FDCBA'): ... i = bisect(breakpoints, score) ... return grades[i] ... >>> [grade(score) for score in [33, 99, 77, 70, 89, 90, 100]] ['F', 'A', 'C', 'C', 'B', 'A', 'A'] ``` Unlike the [`sorted()`](functions#sorted "sorted") function, it does not make sense for the [`bisect()`](#module-bisect "bisect: Array bisection algorithms for binary searching.") functions to have *key* or *reversed* arguments because that would lead to an inefficient design (successive calls to bisect functions would not “remember” all of the previous key lookups). Instead, it is better to search a list of precomputed keys to find the index of the record in question: ``` >>> data = [('red', 5), ('blue', 1), ('yellow', 8), ('black', 0)] >>> data.sort(key=lambda r: r[1]) >>> keys = [r[1] for r in data] # precomputed list of keys >>> data[bisect_left(keys, 0)] ('black', 0) >>> data[bisect_left(keys, 1)] ('blue', 1) >>> data[bisect_left(keys, 5)] ('red', 5) >>> data[bisect_left(keys, 8)] ('yellow', 8) ``` python os.path — Common pathname manipulations os.path — Common pathname manipulations ======================================= **Source code:** [Lib/posixpath.py](https://github.com/python/cpython/tree/3.9/Lib/posixpath.py) (for POSIX) and [Lib/ntpath.py](https://github.com/python/cpython/tree/3.9/Lib/ntpath.py) (for Windows). This module implements some useful functions on pathnames. 
To read or write files see [`open()`](functions#open "open"), and for accessing the filesystem see the [`os`](os#module-os "os: Miscellaneous operating system interfaces.") module. The path parameters can be passed as strings, or bytes, or any object implementing the [`os.PathLike`](os#os.PathLike "os.PathLike") protocol. Unlike a unix shell, Python does not do any *automatic* path expansions. Functions such as [`expanduser()`](#os.path.expanduser "os.path.expanduser") and [`expandvars()`](#os.path.expandvars "os.path.expandvars") can be invoked explicitly when an application desires shell-like path expansion. (See also the [`glob`](glob#module-glob "glob: Unix shell style pathname pattern expansion.") module.) See also The [`pathlib`](pathlib#module-pathlib "pathlib: Object-oriented filesystem paths") module offers high-level path objects. Note All of these functions accept either only bytes or only string objects as their parameters. The result is an object of the same type, if a path or file name is returned. Note Since different operating systems have different path name conventions, there are several versions of this module in the standard library. The [`os.path`](#module-os.path "os.path: Operations on pathnames.") module is always the path module suitable for the operating system Python is running on, and therefore usable for local paths. However, you can also import and use the individual modules if you want to manipulate a path that is *always* in one of the different formats. They all have the same interface: * `posixpath` for UNIX-style paths * `ntpath` for Windows paths Changed in version 3.8: [`exists()`](#os.path.exists "os.path.exists"), [`lexists()`](#os.path.lexists "os.path.lexists"), [`isdir()`](#os.path.isdir "os.path.isdir"), [`isfile()`](#os.path.isfile "os.path.isfile"), [`islink()`](#os.path.islink "os.path.islink"), and [`ismount()`](#os.path.ismount "os.path.ismount") now return `False` instead of raising an exception for paths that contain characters or bytes unrepresentable at the OS level. `os.path.abspath(path)` Return a normalized absolutized version of the pathname *path*. On most platforms, this is equivalent to calling the function [`normpath()`](#os.path.normpath "os.path.normpath") as follows: `normpath(join(os.getcwd(), path))`. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.basename(path)` Return the base name of pathname *path*. This is the second element of the pair returned by passing *path* to the function [`split()`](#os.path.split "os.path.split"). Note that the result of this function is different from the Unix **basename** program; where **basename** for `'/foo/bar/'` returns `'bar'`, the [`basename()`](#os.path.basename "os.path.basename") function returns an empty string (`''`). Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.commonpath(paths)` Return the longest common sub-path of each pathname in the sequence *paths*. Raise [`ValueError`](exceptions#ValueError "ValueError") if *paths* contain both absolute and relative pathnames, the *paths* are on the different drives or if *paths* is empty. Unlike [`commonprefix()`](#os.path.commonprefix "os.path.commonprefix"), this returns a valid path. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. New in version 3.5. Changed in version 3.6: Accepts a sequence of [path-like objects](../glossary#term-path-like-object). 
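As a doctest-style aside (not from the reference), the trailing-slash behaviour of `basename()` noted above is easy to see; POSIX paths are assumed:

```
>>> import os.path
>>> os.path.basename('/foo/bar/')
''
>>> os.path.basename('/foo/bar')
'bar'
```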
`os.path.commonprefix(list)` Return the longest path prefix (taken character-by-character) that is a prefix of all paths in *list*. If *list* is empty, return the empty string (`''`). Note This function may return invalid paths because it works a character at a time. To obtain a valid path, see [`commonpath()`](#os.path.commonpath "os.path.commonpath"). ``` >>> os.path.commonprefix(['/usr/lib', '/usr/local/lib']) '/usr/l' >>> os.path.commonpath(['/usr/lib', '/usr/local/lib']) '/usr' ``` Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.dirname(path)` Return the directory name of pathname *path*. This is the first element of the pair returned by passing *path* to the function [`split()`](#os.path.split "os.path.split"). Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.exists(path)` Return `True` if *path* refers to an existing path or an open file descriptor. Returns `False` for broken symbolic links. On some platforms, this function may return `False` if permission is not granted to execute [`os.stat()`](os#os.stat "os.stat") on the requested file, even if the *path* physically exists. Changed in version 3.3: *path* can now be an integer: `True` is returned if it is an open file descriptor, `False` otherwise. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.lexists(path)` Return `True` if *path* refers to an existing path. Returns `True` for broken symbolic links. Equivalent to [`exists()`](#os.path.exists "os.path.exists") on platforms lacking [`os.lstat()`](os#os.lstat "os.lstat"). Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.expanduser(path)` On Unix and Windows, return the argument with an initial component of `~` or `~user` replaced by that *user*’s home directory. On Unix, an initial `~` is replaced by the environment variable `HOME` if it is set; otherwise the current user’s home directory is looked up in the password directory through the built-in module [`pwd`](pwd#module-pwd "pwd: The password database (getpwnam() and friends). (Unix)"). An initial `~user` is looked up directly in the password directory. On Windows, `USERPROFILE` will be used if set, otherwise a combination of `HOMEPATH` and `HOMEDRIVE` will be used. An initial `~user` is handled by stripping the last directory component from the created user path derived above. If the expansion fails or if the path does not begin with a tilde, the path is returned unchanged. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.8: No longer uses `HOME` on Windows. `os.path.expandvars(path)` Return the argument with environment variables expanded. Substrings of the form `$name` or `${name}` are replaced by the value of environment variable *name*. Malformed variable names and references to non-existing variables are left unchanged. On Windows, `%name%` expansions are supported in addition to `$name` and `${name}`. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.getatime(path)` Return the time of last access of *path*. The return value is a floating point number giving the number of seconds since the epoch (see the [`time`](time#module-time "time: Time access and conversions.") module). Raise [`OSError`](exceptions#OSError "OSError") if the file does not exist or is inaccessible. 
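A short sketch of the expansion functions above (hedged: results depend on the process environment, so the expected values appear only as comments; `PROJECT` is a hypothetical variable set purely for the demonstration):

```
import os
import os.path

os.environ['PROJECT'] = '/srv/demo'         # hypothetical variable for illustration
print(os.path.expandvars('$PROJECT/data'))  # '/srv/demo/data'
print(os.path.expanduser('~/notes.txt'))    # e.g. '/home/user/notes.txt'
```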
`os.path.getmtime(path)` Return the time of last modification of *path*. The return value is a floating point number giving the number of seconds since the epoch (see the [`time`](time#module-time "time: Time access and conversions.") module). Raise [`OSError`](exceptions#OSError "OSError") if the file does not exist or is inaccessible. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.getctime(path)` Return the system’s ctime which, on some systems (like Unix) is the time of the last metadata change, and, on others (like Windows), is the creation time for *path*. The return value is a number giving the number of seconds since the epoch (see the [`time`](time#module-time "time: Time access and conversions.") module). Raise [`OSError`](exceptions#OSError "OSError") if the file does not exist or is inaccessible. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.getsize(path)` Return the size, in bytes, of *path*. Raise [`OSError`](exceptions#OSError "OSError") if the file does not exist or is inaccessible. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.isabs(path)` Return `True` if *path* is an absolute pathname. On Unix, that means it begins with a slash, on Windows that it begins with a (back)slash after chopping off a potential drive letter. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.isfile(path)` Return `True` if *path* is an [`existing`](#os.path.exists "os.path.exists") regular file. This follows symbolic links, so both [`islink()`](#os.path.islink "os.path.islink") and [`isfile()`](#os.path.isfile "os.path.isfile") can be true for the same path. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.isdir(path)` Return `True` if *path* is an [`existing`](#os.path.exists "os.path.exists") directory. This follows symbolic links, so both [`islink()`](#os.path.islink "os.path.islink") and [`isdir()`](#os.path.isdir "os.path.isdir") can be true for the same path. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.islink(path)` Return `True` if *path* refers to an [`existing`](#os.path.exists "os.path.exists") directory entry that is a symbolic link. Always `False` if symbolic links are not supported by the Python runtime. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.ismount(path)` Return `True` if pathname *path* is a *mount point*: a point in a file system where a different file system has been mounted. On POSIX, the function checks whether *path*’s parent, `*path*/..`, is on a different device than *path*, or whether `*path*/..` and *path* point to the same i-node on the same device — this should detect mount points for all Unix and POSIX variants. It is not able to reliably detect bind mounts on the same filesystem. On Windows, a drive letter root and a share UNC are always mount points, and for any other path `GetVolumePathName` is called to see if it is different from the input path. New in version 3.4: Support for detecting non-root mount points on Windows. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.join(path, *paths)` Join one or more path components intelligently. 
The return value is the concatenation of *path* and any members of *\*paths* with exactly one directory separator following each non-empty part except the last, meaning that the result will only end in a separator if the last part is empty. If a component is an absolute path, all previous components are thrown away and joining continues from the absolute path component. On Windows, the drive letter is not reset when an absolute path component (e.g., `r'\foo'`) is encountered. If a component contains a drive letter, all previous components are thrown away and the drive letter is reset. Note that since there is a current directory for each drive, `os.path.join("c:", "foo")` represents a path relative to the current directory on drive `C:` (`c:foo`), not `c:\foo`. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *path* and *paths*. `os.path.normcase(path)` Normalize the case of a pathname. On Windows, convert all characters in the pathname to lowercase, and also convert forward slashes to backward slashes. On other operating systems, return the path unchanged. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.normpath(path)` Normalize a pathname by collapsing redundant separators and up-level references so that `A//B`, `A/B/`, `A/./B` and `A/foo/../B` all become `A/B`. This string manipulation may change the meaning of a path that contains symbolic links. On Windows, it converts forward slashes to backward slashes. To normalize case, use [`normcase()`](#os.path.normcase "os.path.normcase"). Note On POSIX systems, in accordance with [IEEE Std 1003.1 2013 Edition; 4.13 Pathname Resolution](http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_13), if a pathname begins with exactly two slashes, the first component following the leading characters may be interpreted in an implementation-defined manner, although more than two leading characters shall be treated as a single character. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.realpath(path)` Return the canonical path of the specified filename, eliminating any symbolic links encountered in the path (if they are supported by the operating system). Note When symbolic link cycles occur, the returned path will be one member of the cycle, but no guarantee is made about which member that will be. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.8: Symbolic links and junctions are now resolved on Windows. `os.path.relpath(path, start=os.curdir)` Return a relative filepath to *path* either from the current directory or from an optional *start* directory. This is a path computation: the filesystem is not accessed to confirm the existence or nature of *path* or *start*. On Windows, [`ValueError`](exceptions#ValueError "ValueError") is raised when *path* and *start* are on different drives. *start* defaults to [`os.curdir`](os#os.curdir "os.curdir"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.samefile(path1, path2)` Return `True` if both pathname arguments refer to the same file or directory. This is determined by the device number and i-node number and raises an exception if an [`os.stat()`](os#os.stat "os.stat") call on either pathname fails. 
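For example, a sketch assuming a file `spam.txt` exists in the current directory:

```
>>> import os.path
>>> os.path.samefile('spam.txt', './spam.txt')
True
```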
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.2: Added Windows support. Changed in version 3.4: Windows now uses the same implementation as all other platforms. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.sameopenfile(fp1, fp2)` Return `True` if the file descriptors *fp1* and *fp2* refer to the same file. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.2: Added Windows support. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.samestat(stat1, stat2)` Return `True` if the stat tuples *stat1* and *stat2* refer to the same file. These structures may have been returned by [`os.fstat()`](os#os.fstat "os.fstat"), [`os.lstat()`](os#os.lstat "os.lstat"), or [`os.stat()`](os#os.stat "os.stat"). This function implements the underlying comparison used by [`samefile()`](#os.path.samefile "os.path.samefile") and [`sameopenfile()`](#os.path.sameopenfile "os.path.sameopenfile"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.4: Added Windows support. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.split(path)` Split the pathname *path* into a pair, `(head, tail)` where *tail* is the last pathname component and *head* is everything leading up to that. The *tail* part will never contain a slash; if *path* ends in a slash, *tail* will be empty. If there is no slash in *path*, *head* will be empty. If *path* is empty, both *head* and *tail* are empty. Trailing slashes are stripped from *head* unless it is the root (one or more slashes only). In all cases, `join(head, tail)` returns a path to the same location as *path* (but the strings may differ). Also see the functions [`dirname()`](#os.path.dirname "os.path.dirname") and [`basename()`](#os.path.basename "os.path.basename"). Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.splitdrive(path)` Split the pathname *path* into a pair `(drive, tail)` where *drive* is either a mount point or the empty string. On systems which do not use drive specifications, *drive* will always be the empty string. In all cases, `drive + tail` will be the same as *path*. On Windows, splits a pathname into drive/UNC sharepoint and relative path. If the path contains a drive letter, drive will contain everything up to and including the colon: ``` >>> splitdrive("c:/dir") ("c:", "/dir") ``` If the path contains a UNC path, drive will contain the host name and share, up to but not including the fourth separator: ``` >>> splitdrive("//host/computer/dir") ("//host/computer", "/dir") ``` Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.splitext(path)` Split the pathname *path* into a pair `(root, ext)` such that `root + ext == path`, and the extension, *ext*, is empty or begins with a period and contains at most one period. If the path contains no extension, *ext* will be `''`: ``` >>> splitext('bar') ('bar', '') ``` If the path contains an extension, then *ext* will be set to this extension, including the leading period. 
Note that previous periods will be ignored: ``` >>> splitext('foo.bar.exe') ('foo.bar', '.exe') >>> splitext('/foo/bar.exe') ('/foo/bar', '.exe') ``` Leading periods of the last component of the path are considered to be part of the root: ``` >>> splitext('.cshrc') ('.cshrc', '') >>> splitext('/foo/....jpg') ('/foo/....jpg', '') ``` Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.path.supports_unicode_filenames` `True` if arbitrary Unicode strings can be used as file names (within limitations imposed by the file system).
python symtable — Access to the compiler’s symbol tables symtable — Access to the compiler’s symbol tables ================================================= **Source code:** [Lib/symtable.py](https://github.com/python/cpython/tree/3.9/Lib/symtable.py) Symbol tables are generated by the compiler from AST just before bytecode is generated. The symbol table is responsible for calculating the scope of every identifier in the code. [`symtable`](#module-symtable "symtable: Interface to the compiler's internal symbol tables.") provides an interface to examine these tables. Generating Symbol Tables ------------------------ `symtable.symtable(code, filename, compile_type)` Return the toplevel [`SymbolTable`](#symtable.SymbolTable "symtable.SymbolTable") for the Python source *code*. *filename* is the name of the file containing the code. *compile\_type* is like the *mode* argument to [`compile()`](functions#compile "compile"). Examining Symbol Tables ----------------------- `class symtable.SymbolTable` A namespace table for a block. The constructor is not public. `get_type()` Return the type of the symbol table. Possible values are `'class'`, `'module'`, and `'function'`. `get_id()` Return the table’s identifier. `get_name()` Return the table’s name. This is the name of the class if the table is for a class, the name of the function if the table is for a function, or `'top'` if the table is global ([`get_type()`](#symtable.SymbolTable.get_type "symtable.SymbolTable.get_type") returns `'module'`). `get_lineno()` Return the number of the first line in the block this table represents. `is_optimized()` Return `True` if the locals in this table can be optimized. `is_nested()` Return `True` if the block is a nested class or function. `has_children()` Return `True` if the block has nested namespaces within it. These can be obtained with [`get_children()`](#symtable.SymbolTable.get_children "symtable.SymbolTable.get_children"). `get_identifiers()` Return a list of names of symbols in this table. `lookup(name)` Lookup *name* in the table and return a [`Symbol`](#symtable.Symbol "symtable.Symbol") instance. `get_symbols()` Return a list of [`Symbol`](#symtable.Symbol "symtable.Symbol") instances for names in the table. `get_children()` Return a list of the nested symbol tables. `class symtable.Function` A namespace for a function or method. This class inherits [`SymbolTable`](#symtable.SymbolTable "symtable.SymbolTable"). `get_parameters()` Return a tuple containing names of parameters to this function. `get_locals()` Return a tuple containing names of locals in this function. `get_globals()` Return a tuple containing names of globals in this function. `get_nonlocals()` Return a tuple containing names of nonlocals in this function. `get_frees()` Return a tuple containing names of free variables in this function. `class symtable.Class` A namespace of a class. This class inherits [`SymbolTable`](#symtable.SymbolTable "symtable.SymbolTable"). `get_methods()` Return a tuple containing the names of methods declared in the class. `class symtable.Symbol` An entry in a [`SymbolTable`](#symtable.SymbolTable "symtable.SymbolTable") corresponding to an identifier in the source. The constructor is not public. `get_name()` Return the symbol’s name. `is_referenced()` Return `True` if the symbol is used in its block. `is_imported()` Return `True` if the symbol is created from an import statement. `is_parameter()` Return `True` if the symbol is a parameter. `is_global()` Return `True` if the symbol is global. 
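As a quick sketch of several of these queries together (the source string is made up for illustration):

```
>>> import symtable
>>> table = symtable.symtable("def f(x): return x + y", "<example>", "exec")
>>> f = table.get_children()[0]     # the nested table for function f
>>> f.get_parameters()
('x',)
>>> f.lookup("x").is_parameter()
True
>>> f.lookup("y").is_global()       # y is neither assigned nor a parameter
True
```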
`is_nonlocal()` Return `True` if the symbol is nonlocal. `is_declared_global()` Return `True` if the symbol is declared global with a global statement. `is_local()` Return `True` if the symbol is local to its block. `is_annotated()` Return `True` if the symbol is annotated. New in version 3.6. `is_free()` Return `True` if the symbol is referenced in its block, but not assigned to. `is_assigned()` Return `True` if the symbol is assigned to in its block. `is_namespace()` Return `True` if name binding introduces new namespace. If the name is used as the target of a function or class statement, this will be true. For example: ``` >>> table = symtable.symtable("def some_func(): pass", "string", "exec") >>> table.lookup("some_func").is_namespace() True ``` Note that a single name can be bound to multiple objects. If the result is `True`, the name may also be bound to other objects, like an int or list, that does not introduce a new namespace. `get_namespaces()` Return a list of namespaces bound to this name. `get_namespace()` Return the namespace bound to this name. If more than one namespace is bound, [`ValueError`](exceptions#ValueError "ValueError") is raised. python tkinter.tix — Extension widgets for Tk tkinter.tix — Extension widgets for Tk ====================================== **Source code:** [Lib/tkinter/tix.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/tix.py) Deprecated since version 3.6: This Tk extension is unmaintained and should not be used in new code. Use [`tkinter.ttk`](tkinter.ttk#module-tkinter.ttk "tkinter.ttk: Tk themed widget set") instead. The [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") (Tk Interface Extension) module provides an additional rich set of widgets. Although the standard Tk library has many useful widgets, they are far from complete. The [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") library provides most of the commonly needed widgets that are missing from standard Tk: [`HList`](#tkinter.tix.HList "tkinter.tix.HList"), [`ComboBox`](#tkinter.tix.ComboBox "tkinter.tix.ComboBox"), [`Control`](#tkinter.tix.Control "tkinter.tix.Control") (a.k.a. SpinBox) and an assortment of scrollable widgets. [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") also includes many more widgets that are generally useful in a wide range of applications: [`NoteBook`](#tkinter.tix.NoteBook "tkinter.tix.NoteBook"), [`FileEntry`](#tkinter.tix.FileEntry "tkinter.tix.FileEntry"), [`PanedWindow`](#tkinter.tix.PanedWindow "tkinter.tix.PanedWindow"), etc; there are more than 40 of them. With all these new widgets, you can introduce new interaction techniques into applications, creating more useful and more intuitive user interfaces. You can design your application by choosing the most appropriate widgets to match the special needs of your application and users. See also [Tix Homepage](http://tix.sourceforge.net/) The home page for `Tix`. This includes links to additional documentation and downloads. [Tix Man Pages](http://tix.sourceforge.net/dist/current/man/) On-line version of the man pages and reference material. [Tix Programming Guide](http://tix.sourceforge.net/dist/current/docs/tix-book/tix.book.html) On-line version of the programmer’s reference material. [Tix Development Applications](http://tix.sourceforge.net/Tixapps/src/Tide.html) Tix applications for development of Tix and Tkinter programs. 
Tide applications work under Tk or Tkinter, and include **TixInspect**, an inspector to remotely modify and debug Tix/Tk/Tkinter applications.

Using Tix
---------

`class tkinter.tix.Tk(screenName=None, baseName=None, className='Tix')`
Toplevel widget of Tix which most often represents the main window of an application. It has an associated Tcl interpreter.

Classes in the [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") module subclass the classes in [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces"). The former imports the latter, so to use [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") with Tkinter, all you need to do is import one module. In general, you can just import [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter"), and replace the toplevel call to [`tkinter.Tk`](tkinter#tkinter.Tk "tkinter.Tk") with `tix.Tk`:

```
from tkinter import tix
from tkinter.constants import *

root = tix.Tk()
```

To use [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter"), you must have the Tix widgets installed, usually alongside your installation of the Tk widgets. To test your installation, try the following:

```
from tkinter import tix

root = tix.Tk()
root.tk.eval('package require Tix')
```

Tix Widgets
-----------

[Tix](http://tix.sourceforge.net/dist/current/man/html/TixCmd/TixIntro.htm) introduces over 40 widget classes to the [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") repertoire.

### Basic Widgets

`class tkinter.tix.Balloon`
A [Balloon](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixBalloon.htm) that pops up over a widget to provide help. When the user moves the cursor inside a widget to which a Balloon widget has been bound, a small pop-up window with a descriptive message will be shown on the screen.

`class tkinter.tix.ButtonBox`
The [ButtonBox](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixButtonBox.htm) widget creates a box of buttons, such as is commonly used for `Ok Cancel`.

`class tkinter.tix.ComboBox`
The [ComboBox](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixComboBox.htm) widget is similar to the combo box control in MS Windows. The user can select a choice by either typing in the entry subwidget or selecting from the listbox subwidget.

`class tkinter.tix.Control`
The [Control](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixControl.htm) widget is also known as the `SpinBox` widget. The user can adjust the value by pressing the two arrow buttons or by entering the value directly into the entry. The new value will be checked against the user-defined upper and lower limits.

`class tkinter.tix.LabelEntry`
The [LabelEntry](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixLabelEntry.htm) widget packages an entry widget and a label into one mega widget. It can be used to simplify the creation of “entry-form” type of interface.

`class tkinter.tix.LabelFrame`
The [LabelFrame](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixLabelFrame.htm) widget packages a frame widget and a label into one mega widget. To create widgets inside a LabelFrame widget, one creates the new widgets relative to the `frame` subwidget and manages them inside the `frame` subwidget.
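For instance, a minimal sketch of the pattern just described, creating a plain Tk button inside the `frame` subwidget (the `label` option value and the packing are illustrative):

```
from tkinter import Button
from tkinter import tix

root = tix.Tk()
options = tix.LabelFrame(root, label='Options')
# Children go into the 'frame' subwidget, not the LabelFrame itself.
Button(options.frame, text='OK').pack()
options.pack()
root.mainloop()
```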
`class tkinter.tix.Meter`
The [Meter](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixMeter.htm) widget can be used to show the progress of a background job which may take a long time to execute.

`class tkinter.tix.OptionMenu`
The [OptionMenu](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixOptionMenu.htm) creates a menu button of options.

`class tkinter.tix.PopupMenu`
The [PopupMenu](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixPopupMenu.htm) widget can be used as a replacement for the `tk_popup` command. The advantage of the `Tix` [`PopupMenu`](#tkinter.tix.PopupMenu "tkinter.tix.PopupMenu") widget is that it requires less application code to manipulate.

`class tkinter.tix.Select`
The [Select](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixSelect.htm) widget is a container of button subwidgets. It can be used to provide radio-box or check-box style of selection options for the user.

`class tkinter.tix.StdButtonBox`
The [StdButtonBox](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixStdButtonBox.htm) widget is a group of standard buttons for Motif-like dialog boxes.

### File Selectors

`class tkinter.tix.DirList`
The [DirList](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixDirList.htm) widget displays a list view of a directory, its previous directories and its sub-directories. The user can choose one of the directories displayed in the list or change to another directory.

`class tkinter.tix.DirTree`
The [DirTree](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixDirTree.htm) widget displays a tree view of a directory, its previous directories and its sub-directories. The user can choose one of the directories displayed in the list or change to another directory.

`class tkinter.tix.DirSelectDialog`
The [DirSelectDialog](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixDirSelectDialog.htm) widget presents the directories in the file system in a dialog window. The user can use this dialog window to navigate through the file system to select the desired directory.

`class tkinter.tix.DirSelectBox`
The [`DirSelectBox`](#tkinter.tix.DirSelectBox "tkinter.tix.DirSelectBox") is similar to the standard Motif(TM) directory-selection box. It is generally used for the user to choose a directory. DirSelectBox stores the directories most recently selected into a ComboBox widget so that they can be quickly selected again.

`class tkinter.tix.ExFileSelectBox`
The [ExFileSelectBox](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixExFileSelectBox.htm) widget is usually embedded in a tixExFileSelectDialog widget. It provides a convenient method for the user to select files. The style of the [`ExFileSelectBox`](#tkinter.tix.ExFileSelectBox "tkinter.tix.ExFileSelectBox") widget is very similar to the standard file dialog on MS Windows 3.1.

`class tkinter.tix.FileSelectBox`
The [FileSelectBox](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixFileSelectBox.htm) is similar to the standard Motif(TM) file-selection box. It is generally used for the user to choose a file. FileSelectBox stores the files most recently selected into a [`ComboBox`](#tkinter.tix.ComboBox "tkinter.tix.ComboBox") widget so that they can be quickly selected again.

`class tkinter.tix.FileEntry`
The [FileEntry](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixFileEntry.htm) widget can be used to input a filename. The user can type in the filename manually.
Alternatively, the user can press the button widget that sits next to the entry, which will bring up a file selection dialog. ### Hierarchical ListBox `class tkinter.tix.HList` The [HList](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixHList.htm) widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines according to their places in the hierarchy. `class tkinter.tix.CheckList` The [CheckList](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixCheckList.htm) widget displays a list of items to be selected by the user. CheckList acts similarly to the Tk checkbutton or radiobutton widgets, except it is capable of handling many more items than checkbuttons or radiobuttons. `class tkinter.tix.Tree` The [Tree](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixTree.htm) widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree. ### Tabular ListBox `class tkinter.tix.TList` The [TList](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixTList.htm) widget can be used to display data in a tabular format. The list entries of a [`TList`](#tkinter.tix.TList "tkinter.tix.TList") widget are similar to the entries in the Tk listbox widget. The main differences are (1) the [`TList`](#tkinter.tix.TList "tkinter.tix.TList") widget can display the list entries in a two dimensional format and (2) you can use graphical images as well as multiple colors and fonts for the list entries. ### Manager Widgets `class tkinter.tix.PanedWindow` The [PanedWindow](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixPanedWindow.htm) widget allows the user to interactively manipulate the sizes of several panes. The panes can be arranged either vertically or horizontally. The user changes the sizes of the panes by dragging the resize handle between two panes. `class tkinter.tix.ListNoteBook` The [ListNoteBook](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixListNoteBook.htm) widget is very similar to the `TixNoteBook` widget: it can be used to display many windows in a limited space using a notebook metaphor. The notebook is divided into a stack of pages (windows). At one time only one of these pages can be shown. The user can navigate through these pages by choosing the name of the desired page in the `hlist` subwidget. `class tkinter.tix.NoteBook` The [NoteBook](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixNoteBook.htm) widget can be used to display many windows in a limited space using a notebook metaphor. The notebook is divided into a stack of pages. At one time only one of these pages can be shown. The user can navigate through these pages by choosing the visual “tabs” at the top of the NoteBook widget. ### Image Types The [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") module adds: * [pixmap](http://tix.sourceforge.net/dist/current/man/html/TixCmd/pixmap.htm) capabilities to all [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") and [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") widgets to create color images from XPM files. 
* [Compound](http://tix.sourceforge.net/dist/current/man/html/TixCmd/compound.htm) image types can be used to create images that consist of multiple horizontal lines; each line is composed of a series of items (texts, bitmaps, images or spaces) arranged from left to right. For example, a compound image can be used to display a bitmap and a text string simultaneously in a Tk `Button` widget.

### Miscellaneous Widgets

`class tkinter.tix.InputOnly`
The [InputOnly](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixInputOnly.htm) widgets accept input from the user, which can be done with the `bind` command (Unix only).

### Form Geometry Manager

In addition, [`tkinter.tix`](#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") augments [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") by providing:

`class tkinter.tix.Form`
The [Form](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tixForm.htm) geometry manager based on attachment rules for all Tk widgets.

Tix Commands
------------

`class tkinter.tix.tixCommand`
The [tix commands](http://tix.sourceforge.net/dist/current/man/html/TixCmd/tix.htm) provide access to miscellaneous elements of `Tix`’s internal state and the `Tix` application context. Most of the information manipulated by these methods pertains to the application as a whole, or to a screen or display, rather than to a particular window. To view the current settings, the common usage is:

```
from tkinter import tix

root = tix.Tk()
print(root.tix_configure())
```

`tixCommand.tix_configure(cnf=None, **kw)`
Query or modify the configuration options of the Tix application context. If no option is specified, returns a dictionary of all the available options. If option is specified with no value, then the method returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no option is specified). If one or more option-value pairs are specified, then the method modifies the given option(s) to have the given value(s); in this case the method returns an empty string. Option may be any of the configuration options.

`tixCommand.tix_cget(option)`
Returns the current value of the configuration option given by *option*. Option may be any of the configuration options.

`tixCommand.tix_getbitmap(name)`
Locates a bitmap file of the name `name.xpm` or `name` in one of the bitmap directories (see the [`tix_addbitmapdir()`](#tkinter.tix.tixCommand.tix_addbitmapdir "tkinter.tix.tixCommand.tix_addbitmapdir") method). By using [`tix_getbitmap()`](#tkinter.tix.tixCommand.tix_getbitmap "tkinter.tix.tixCommand.tix_getbitmap"), you can avoid hard coding the pathnames of the bitmap files in your application. When successful, it returns the complete pathname of the bitmap file, prefixed with the character `@`. The returned value can be used to configure the `bitmap` option of the Tk and Tix widgets.

`tixCommand.tix_addbitmapdir(directory)`
Tix maintains a list of directories under which the [`tix_getimage()`](#tkinter.tix.tixCommand.tix_getimage "tkinter.tix.tixCommand.tix_getimage") and [`tix_getbitmap()`](#tkinter.tix.tixCommand.tix_getbitmap "tkinter.tix.tixCommand.tix_getbitmap") methods will search for image files. The standard bitmap directory is `$TIX_LIBRARY/bitmaps`. The [`tix_addbitmapdir()`](#tkinter.tix.tixCommand.tix_addbitmapdir "tkinter.tix.tixCommand.tix_addbitmapdir") method adds *directory* to this list.
By using this method, the image files of an application can also be located using the [`tix_getimage()`](#tkinter.tix.tixCommand.tix_getimage "tkinter.tix.tixCommand.tix_getimage") or [`tix_getbitmap()`](#tkinter.tix.tixCommand.tix_getbitmap "tkinter.tix.tixCommand.tix_getbitmap") method.

`tixCommand.tix_filedialog([dlgclass])`
Returns the file selection dialog that may be shared among different calls from this application. This method will create a file selection dialog widget when it is called the first time. This dialog will be returned by all subsequent calls to [`tix_filedialog()`](#tkinter.tix.tixCommand.tix_filedialog "tkinter.tix.tixCommand.tix_filedialog"). An optional dlgclass parameter can be passed as a string to specify what type of file selection dialog widget is desired. Possible options are `tix`, `FileSelectDialog` or `tixExFileSelectDialog`.

`tixCommand.tix_getimage(name)`
Locates an image file of the name `name.xpm`, `name.xbm` or `name.ppm` in one of the bitmap directories (see the [`tix_addbitmapdir()`](#tkinter.tix.tixCommand.tix_addbitmapdir "tkinter.tix.tixCommand.tix_addbitmapdir") method above). If more than one file with the same name (but different extensions) exists, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using [`tix_getimage()`](#tkinter.tix.tixCommand.tix_getimage "tkinter.tix.tixCommand.tix_getimage"), you can avoid hard coding the pathnames of the image files in your application. When successful, this method returns the name of the newly created image, which can be used to configure the `image` option of the Tk and Tix widgets.

`tixCommand.tix_option_get(name)`
Gets the options maintained by the Tix scheme mechanism.

`tixCommand.tix_resetoptions(newScheme, newFontSet[, newScmPrio])`
Resets the scheme and fontset of the Tix application to *newScheme* and *newFontSet*, respectively. This affects only those widgets created after this call. Therefore, it is best to call the resetoptions method before the creation of any widgets in a Tix application. The optional parameter *newScmPrio* can be given to reset the priority level of the Tk options set by the Tix schemes. Because of the way Tk handles the X option database, after Tix has been imported and initialized, it is not possible to reset the color schemes and font sets using the `tix_config()` method. Instead, the [`tix_resetoptions()`](#tkinter.tix.tixCommand.tix_resetoptions "tkinter.tix.tixCommand.tix_resetoptions") method must be used.
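For example, a minimal sketch of resetting the scheme at start-up; the `'Gray'` scheme and `'TK'` fontset names are illustrative and depend on what your Tix installation provides:

```
from tkinter import tix

root = tix.Tk()
# Must happen before any other widgets are created, since the new
# scheme and fontset only affect widgets created after this call.
root.tix_resetoptions('Gray', 'TK')
```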
python shutil — High-level file operations shutil — High-level file operations =================================== **Source code:** [Lib/shutil.py](https://github.com/python/cpython/tree/3.9/Lib/shutil.py) The [`shutil`](#module-shutil "shutil: High-level file operations, including copying.") module offers a number of high-level operations on files and collections of files. In particular, functions are provided which support file copying and removal. For operations on individual files, see also the [`os`](os#module-os "os: Miscellaneous operating system interfaces.") module. Warning Even the higher-level file copying functions ([`shutil.copy()`](#shutil.copy "shutil.copy"), [`shutil.copy2()`](#shutil.copy2 "shutil.copy2")) cannot copy all file metadata. On POSIX platforms, this means that file owner and group are lost as well as ACLs. On Mac OS, the resource fork and other metadata are not used. This means that resources will be lost and file type and creator codes will not be correct. On Windows, file owners, ACLs and alternate data streams are not copied. Directory and files operations ------------------------------ `shutil.copyfileobj(fsrc, fdst[, length])` Copy the contents of the file-like object *fsrc* to the file-like object *fdst*. The integer *length*, if given, is the buffer size. In particular, a negative *length* value means to copy the data without looping over the source data in chunks; by default the data is read in chunks to avoid uncontrolled memory consumption. Note that if the current file position of the *fsrc* object is not 0, only the contents from the current file position to the end of the file will be copied. `shutil.copyfile(src, dst, *, follow_symlinks=True)` Copy the contents (no metadata) of the file named *src* to a file named *dst* and return *dst* in the most efficient way possible. *src* and *dst* are path-like objects or path names given as strings. *dst* must be the complete target file name; look at [`copy()`](#shutil.copy "shutil.copy") for a copy that accepts a target directory path. If *src* and *dst* specify the same file, [`SameFileError`](#shutil.SameFileError "shutil.SameFileError") is raised. The destination location must be writable; otherwise, an [`OSError`](exceptions#OSError "OSError") exception will be raised. If *dst* already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. If *follow\_symlinks* is false and *src* is a symbolic link, a new symbolic link will be created instead of copying the file *src* points to. Raises an [auditing event](sys#auditing) `shutil.copyfile` with arguments `src`, `dst`. Changed in version 3.3: [`IOError`](exceptions#IOError "IOError") used to be raised instead of [`OSError`](exceptions#OSError "OSError"). Added *follow\_symlinks* argument. Now returns *dst*. Changed in version 3.4: Raise [`SameFileError`](#shutil.SameFileError "shutil.SameFileError") instead of [`Error`](#shutil.Error "shutil.Error"). Since the former is a subclass of the latter, this change is backward compatible. Changed in version 3.8: Platform-specific fast-copy syscalls may be used internally in order to copy the file more efficiently. See [Platform-dependent efficient copy operations](#shutil-platform-dependent-efficient-copy-operations) section. `exception shutil.SameFileError` This exception is raised if source and destination in [`copyfile()`](#shutil.copyfile "shutil.copyfile") are the same file. New in version 3.4. 
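A small sketch of guarding against this exception when copying a single file (the paths are hypothetical, and the `backup/` directory is assumed to exist):

```
import shutil

try:
    # copyfile() wants the complete target file name, not a directory.
    shutil.copyfile('report.txt', 'backup/report.txt')
except shutil.SameFileError:
    print('source and destination are the same file')
```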
`shutil.copymode(src, dst, *, follow_symlinks=True)` Copy the permission bits from *src* to *dst*. The file contents, owner, and group are unaffected. *src* and *dst* are path-like objects or path names given as strings. If *follow\_symlinks* is false, and both *src* and *dst* are symbolic links, [`copymode()`](#shutil.copymode "shutil.copymode") will attempt to modify the mode of *dst* itself (rather than the file it points to). This functionality is not available on every platform; please see [`copystat()`](#shutil.copystat "shutil.copystat") for more information. If [`copymode()`](#shutil.copymode "shutil.copymode") cannot modify symbolic links on the local platform, and it is asked to do so, it will do nothing and return. Raises an [auditing event](sys#auditing) `shutil.copymode` with arguments `src`, `dst`. Changed in version 3.3: Added *follow\_symlinks* argument. `shutil.copystat(src, dst, *, follow_symlinks=True)` Copy the permission bits, last access time, last modification time, and flags from *src* to *dst*. On Linux, [`copystat()`](#shutil.copystat "shutil.copystat") also copies the “extended attributes” where possible. The file contents, owner, and group are unaffected. *src* and *dst* are path-like objects or path names given as strings. If *follow\_symlinks* is false, and *src* and *dst* both refer to symbolic links, [`copystat()`](#shutil.copystat "shutil.copystat") will operate on the symbolic links themselves rather than the files the symbolic links refer to—reading the information from the *src* symbolic link, and writing the information to the *dst* symbolic link. Note Not all platforms provide the ability to examine and modify symbolic links. Python itself can tell you what functionality is locally available. * If `os.chmod in os.supports_follow_symlinks` is `True`, [`copystat()`](#shutil.copystat "shutil.copystat") can modify the permission bits of a symbolic link. * If `os.utime in os.supports_follow_symlinks` is `True`, [`copystat()`](#shutil.copystat "shutil.copystat") can modify the last access and modification times of a symbolic link. * If `os.chflags in os.supports_follow_symlinks` is `True`, [`copystat()`](#shutil.copystat "shutil.copystat") can modify the flags of a symbolic link. (`os.chflags` is not available on all platforms.) On platforms where some or all of this functionality is unavailable, when asked to modify a symbolic link, [`copystat()`](#shutil.copystat "shutil.copystat") will copy everything it can. [`copystat()`](#shutil.copystat "shutil.copystat") never returns failure. Please see [`os.supports_follow_symlinks`](os#os.supports_follow_symlinks "os.supports_follow_symlinks") for more information. Raises an [auditing event](sys#auditing) `shutil.copystat` with arguments `src`, `dst`. Changed in version 3.3: Added *follow\_symlinks* argument and support for Linux extended attributes. `shutil.copy(src, dst, *, follow_symlinks=True)` Copies the file *src* to the file or directory *dst*. *src* and *dst* should be [path-like objects](../glossary#term-path-like-object) or strings. If *dst* specifies a directory, the file will be copied into *dst* using the base filename from *src*. If *dst* specifies a file that already exists, it will be replaced. Returns the path to the newly created file. If *follow\_symlinks* is false, and *src* is a symbolic link, *dst* will be created as a symbolic link. If *follow\_symlinks* is true and *src* is a symbolic link, *dst* will be a copy of the file *src* refers to. 
[`copy()`](#shutil.copy "shutil.copy") copies the file data and the file’s permission mode (see [`os.chmod()`](os#os.chmod "os.chmod")). Other metadata, like the file’s creation and modification times, is not preserved. To preserve all file metadata from the original, use [`copy2()`](#shutil.copy2 "shutil.copy2") instead. Raises an [auditing event](sys#auditing) `shutil.copyfile` with arguments `src`, `dst`. Raises an [auditing event](sys#auditing) `shutil.copymode` with arguments `src`, `dst`. Changed in version 3.3: Added *follow\_symlinks* argument. Now returns path to the newly created file. Changed in version 3.8: Platform-specific fast-copy syscalls may be used internally in order to copy the file more efficiently. See [Platform-dependent efficient copy operations](#shutil-platform-dependent-efficient-copy-operations) section. `shutil.copy2(src, dst, *, follow_symlinks=True)` Identical to [`copy()`](#shutil.copy "shutil.copy") except that [`copy2()`](#shutil.copy2 "shutil.copy2") also attempts to preserve file metadata. When *follow\_symlinks* is false, and *src* is a symbolic link, [`copy2()`](#shutil.copy2 "shutil.copy2") attempts to copy all metadata from the *src* symbolic link to the newly-created *dst* symbolic link. However, this functionality is not available on all platforms. On platforms where some or all of this functionality is unavailable, [`copy2()`](#shutil.copy2 "shutil.copy2") will preserve all the metadata it can; [`copy2()`](#shutil.copy2 "shutil.copy2") never raises an exception because it cannot preserve file metadata. [`copy2()`](#shutil.copy2 "shutil.copy2") uses [`copystat()`](#shutil.copystat "shutil.copystat") to copy the file metadata. Please see [`copystat()`](#shutil.copystat "shutil.copystat") for more information about platform support for modifying symbolic link metadata. Raises an [auditing event](sys#auditing) `shutil.copyfile` with arguments `src`, `dst`. Raises an [auditing event](sys#auditing) `shutil.copystat` with arguments `src`, `dst`. Changed in version 3.3: Added *follow\_symlinks* argument, try to copy extended file system attributes too (currently Linux only). Now returns path to the newly created file. Changed in version 3.8: Platform-specific fast-copy syscalls may be used internally in order to copy the file more efficiently. See [Platform-dependent efficient copy operations](#shutil-platform-dependent-efficient-copy-operations) section. `shutil.ignore_patterns(*patterns)` This factory function creates a function that can be used as a callable for [`copytree()`](#shutil.copytree "shutil.copytree")’s *ignore* argument, ignoring files and directories that match one of the glob-style *patterns* provided. See the example below. `shutil.copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2, ignore_dangling_symlinks=False, dirs_exist_ok=False)` Recursively copy an entire directory tree rooted at *src* to a directory named *dst* and return the destination directory. All intermediate directories needed to contain *dst* will also be created by default. Permissions and times of directories are copied with [`copystat()`](#shutil.copystat "shutil.copystat"), individual files are copied using [`copy2()`](#shutil.copy2 "shutil.copy2"). If *symlinks* is true, symbolic links in the source tree are represented as symbolic links in the new tree and the metadata of the original links will be copied as far as the platform allows; if false or omitted, the contents and metadata of the linked files are copied to the new tree. 
When *symlinks* is false, if the file pointed to by the symlink doesn’t exist, an exception will be added to the list of errors raised in an [`Error`](#shutil.Error "shutil.Error") exception at the end of the copy process. You can set the optional *ignore\_dangling\_symlinks* flag to true if you want to silence this exception. Notice that this option has no effect on platforms that don’t support [`os.symlink()`](os#os.symlink "os.symlink").

If *ignore* is given, it must be a callable that will receive as its arguments the directory being visited by [`copytree()`](#shutil.copytree "shutil.copytree"), and a list of its contents, as returned by [`os.listdir()`](os#os.listdir "os.listdir"). Since [`copytree()`](#shutil.copytree "shutil.copytree") is called recursively, the *ignore* callable will be called once for each directory that is copied. The callable must return a sequence of directory and file names relative to the current directory (i.e. a subset of the items in its second argument); these names will then be ignored in the copy process. [`ignore_patterns()`](#shutil.ignore_patterns "shutil.ignore_patterns") can be used to create such a callable that ignores names based on glob-style patterns. If exception(s) occur, an [`Error`](#shutil.Error "shutil.Error") is raised with a list of reasons.

If *copy\_function* is given, it must be a callable that will be used to copy each file. It will be called with the source path and the destination path as arguments. By default, [`copy2()`](#shutil.copy2 "shutil.copy2") is used, but any function that supports the same signature (like [`copy()`](#shutil.copy "shutil.copy")) can be used.

If *dirs\_exist\_ok* is false (the default) and *dst* already exists, a [`FileExistsError`](exceptions#FileExistsError "FileExistsError") is raised. If *dirs\_exist\_ok* is true, the copying operation will continue if it encounters existing directories, and files within the *dst* tree will be overwritten by corresponding files from the *src* tree.

Raises an [auditing event](sys#auditing) `shutil.copytree` with arguments `src`, `dst`.

Changed in version 3.3: Copy metadata when *symlinks* is false. Now returns *dst*.

Changed in version 3.2: Added the *copy\_function* argument to be able to provide a custom copy function. Added the *ignore\_dangling\_symlinks* argument to silence dangling symlinks errors when *symlinks* is false.

Changed in version 3.8: Platform-specific fast-copy syscalls may be used internally in order to copy the file more efficiently. See [Platform-dependent efficient copy operations](#shutil-platform-dependent-efficient-copy-operations) section.

New in version 3.8: The *dirs\_exist\_ok* parameter.

`shutil.rmtree(path, ignore_errors=False, onerror=None)`
Delete an entire directory tree; *path* must point to a directory (but not a symbolic link to a directory). If *ignore\_errors* is true, errors resulting from failed removals will be ignored; if false or omitted, such errors are handled by calling a handler specified by *onerror* or, if that is omitted, they raise an exception.

Note On platforms that support the necessary fd-based functions a symlink attack resistant version of [`rmtree()`](#shutil.rmtree "shutil.rmtree") is used by default. On other platforms, the [`rmtree()`](#shutil.rmtree "shutil.rmtree") implementation is susceptible to a symlink attack: given proper timing and circumstances, attackers can manipulate symlinks on the filesystem to delete files they wouldn’t be able to access otherwise.
Applications can use the [`rmtree.avoids_symlink_attacks`](#shutil.rmtree.avoids_symlink_attacks "shutil.rmtree.avoids_symlink_attacks") function attribute to determine which case applies. If *onerror* is provided, it must be a callable that accepts three parameters: *function*, *path*, and *excinfo*. The first parameter, *function*, is the function which raised the exception; it depends on the platform and implementation. The second parameter, *path*, will be the path name passed to *function*. The third parameter, *excinfo*, will be the exception information returned by [`sys.exc_info()`](sys#sys.exc_info "sys.exc_info"). Exceptions raised by *onerror* will not be caught. Raises an [auditing event](sys#auditing) `shutil.rmtree` with argument `path`. Changed in version 3.3: Added a symlink attack resistant version that is used automatically if platform supports fd-based functions. Changed in version 3.8: On Windows, will no longer delete the contents of a directory junction before removing the junction. `rmtree.avoids_symlink_attacks` Indicates whether the current platform and implementation provides a symlink attack resistant version of [`rmtree()`](#shutil.rmtree "shutil.rmtree"). Currently this is only true for platforms supporting fd-based directory access functions. New in version 3.3. `shutil.move(src, dst, copy_function=copy2)` Recursively move a file or directory (*src*) to another location (*dst*) and return the destination. If the destination is an existing directory, then *src* is moved inside that directory. If the destination already exists but is not a directory, it may be overwritten depending on [`os.rename()`](os#os.rename "os.rename") semantics. If the destination is on the current filesystem, then [`os.rename()`](os#os.rename "os.rename") is used. Otherwise, *src* is copied to *dst* using *copy\_function* and then removed. In case of symlinks, a new symlink pointing to the target of *src* will be created in or as *dst* and *src* will be removed. If *copy\_function* is given, it must be a callable that takes two arguments *src* and *dst*, and will be used to copy *src* to *dst* if [`os.rename()`](os#os.rename "os.rename") cannot be used. If the source is a directory, [`copytree()`](#shutil.copytree "shutil.copytree") is called, passing it the `copy_function()`. The default *copy\_function* is [`copy2()`](#shutil.copy2 "shutil.copy2"). Using [`copy()`](#shutil.copy "shutil.copy") as the *copy\_function* allows the move to succeed when it is not possible to also copy the metadata, at the expense of not copying any of the metadata. Raises an [auditing event](sys#auditing) `shutil.move` with arguments `src`, `dst`. Changed in version 3.3: Added explicit symlink handling for foreign filesystems, thus adapting it to the behavior of GNU’s **mv**. Now returns *dst*. Changed in version 3.5: Added the *copy\_function* keyword argument. Changed in version 3.8: Platform-specific fast-copy syscalls may be used internally in order to copy the file more efficiently. See [Platform-dependent efficient copy operations](#shutil-platform-dependent-efficient-copy-operations) section. Changed in version 3.9: Accepts a [path-like object](../glossary#term-path-like-object) for both *src* and *dst*. `shutil.disk_usage(path)` Return disk usage statistics about the given path as a [named tuple](../glossary#term-named-tuple) with the attributes *total*, *used* and *free*, which are the amount of total, used and free space, in bytes. *path* may be a file or a directory. New in version 3.3. 
Changed in version 3.8: On Windows, *path* can now be a file or directory.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows.

`shutil.chown(path, user=None, group=None)`
Change owner *user* and/or *group* of the given *path*. *user* can be a system user name or a uid; the same applies to *group*. At least one argument is required. See also [`os.chown()`](os#os.chown "os.chown"), the underlying function. Raises an [auditing event](sys#auditing) `shutil.chown` with arguments `path`, `user`, `group`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3.

`shutil.which(cmd, mode=os.F_OK | os.X_OK, path=None)`
Return the path to an executable which would be run if the given *cmd* was called. If no *cmd* would be called, return `None`. *mode* is a permission mask passed to [`os.access()`](os#os.access "os.access"), by default determining if the file exists and is executable. When no *path* is specified, the contents of [`os.environ`](os#os.environ "os.environ") are used, returning either the “PATH” value or a fallback of [`os.defpath`](os#os.defpath "os.defpath"). On Windows, the current directory is always prepended to the *path* whether or not you use the default or provide your own, which is the behavior the command shell uses when finding executables. Additionally, when finding the *cmd* in the *path*, the `PATHEXT` environment variable is checked. For example, if you call `shutil.which("python")`, [`which()`](#shutil.which "shutil.which") will search `PATHEXT` to know that it should look for `python.exe` within the *path* directories. For example, on Windows:

```
>>> shutil.which("python")
'C:\\Python33\\python.EXE'
```

New in version 3.3. Changed in version 3.8: The [`bytes`](stdtypes#bytes "bytes") type is now accepted. If *cmd* type is [`bytes`](stdtypes#bytes "bytes"), the result type is also [`bytes`](stdtypes#bytes "bytes").

`exception shutil.Error`
This exception collects exceptions that are raised during a multi-file operation. For [`copytree()`](#shutil.copytree "shutil.copytree"), the exception argument is a list of 3-tuples (*srcname*, *dstname*, *exception*).

### Platform-dependent efficient copy operations

Starting from Python 3.8, all functions involving a file copy ([`copyfile()`](#shutil.copyfile "shutil.copyfile"), [`copy()`](#shutil.copy "shutil.copy"), [`copy2()`](#shutil.copy2 "shutil.copy2"), [`copytree()`](#shutil.copytree "shutil.copytree"), and [`move()`](#shutil.move "shutil.move")) may use platform-specific “fast-copy” syscalls in order to copy the file more efficiently (see [bpo-33671](https://bugs.python.org/issue?@action=redirect&bpo=33671)). “fast-copy” means that the copying operation occurs within the kernel, avoiding the use of userspace buffers in Python as in “`outfd.write(infd.read())`”. On macOS [fcopyfile](http://www.manpagez.com/man/3/copyfile/) is used to copy the file content (not metadata). On Linux [`os.sendfile()`](os#os.sendfile "os.sendfile") is used. On Windows [`shutil.copyfile()`](#shutil.copyfile "shutil.copyfile") uses a bigger default buffer size (1 MiB instead of 64 KiB) and a [`memoryview()`](stdtypes#memoryview "memoryview")-based variant of [`shutil.copyfileobj()`](#shutil.copyfileobj "shutil.copyfileobj") is used. If the fast-copy operation fails and no data was written in the destination file, shutil will silently fall back on using the less efficient [`copyfileobj()`](#shutil.copyfileobj "shutil.copyfileobj") function internally.
Changed in version 3.8. ### copytree example An example that uses the [`ignore_patterns()`](#shutil.ignore_patterns "shutil.ignore_patterns") helper: ``` from shutil import copytree, ignore_patterns copytree(source, destination, ignore=ignore_patterns('*.pyc', 'tmp*')) ``` This will copy everything except `.pyc` files and files or directories whose name starts with `tmp`. Another example that uses the *ignore* argument to add a logging call: ``` from shutil import copytree import logging def _logpath(path, names): logging.info('Working in %s', path) return [] # nothing will be ignored copytree(source, destination, ignore=_logpath) ``` ### rmtree example This example shows how to remove a directory tree on Windows where some of the files have their read-only bit set. It uses the onerror callback to clear the readonly bit and reattempt the remove. Any subsequent failure will propagate. ``` import os, stat import shutil def remove_readonly(func, path, _): "Clear the readonly bit and reattempt the removal" os.chmod(path, stat.S_IWRITE) func(path) shutil.rmtree(directory, onerror=remove_readonly) ``` Archiving operations -------------------- New in version 3.2. Changed in version 3.5: Added support for the *xztar* format. High-level utilities to create and read compressed and archived files are also provided. They rely on the [`zipfile`](zipfile#module-zipfile "zipfile: Read and write ZIP-format archive files.") and [`tarfile`](tarfile#module-tarfile "tarfile: Read and write tar-format archive files.") modules. `shutil.make_archive(base_name, format[, root_dir[, base_dir[, verbose[, dry_run[, owner[, group[, logger]]]]]]])` Create an archive file (such as zip or tar) and return its name. *base\_name* is the name of the file to create, including the path, minus any format-specific extension. *format* is the archive format: one of “zip” (if the [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") module is available), “tar”, “gztar” (if the [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") module is available), “bztar” (if the [`bz2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") module is available), or “xztar” (if the [`lzma`](lzma#module-lzma "lzma: A Python wrapper for the liblzma compression library.") module is available). *root\_dir* is a directory that will be the root directory of the archive, all paths in the archive will be relative to it; for example, we typically chdir into *root\_dir* before creating the archive. *base\_dir* is the directory where we start archiving from; i.e. *base\_dir* will be the common prefix of all files and directories in the archive. *base\_dir* must be given relative to *root\_dir*. See [Archiving example with base\_dir](#shutil-archiving-example-with-basedir) for how to use *base\_dir* and *root\_dir* together. *root\_dir* and *base\_dir* both default to the current directory. If *dry\_run* is true, no archive is created, but the operations that would be executed are logged to *logger*. *owner* and *group* are used when creating a tar archive. By default, uses the current owner and group. *logger* must be an object compatible with [**PEP 282**](https://www.python.org/dev/peps/pep-0282), usually an instance of [`logging.Logger`](logging#logging.Logger "logging.Logger"). The *verbose* argument is unused and deprecated. 
Raises an [auditing event](sys#auditing) `shutil.make_archive` with arguments `base_name`, `format`, `root_dir`, `base_dir`.

Note This function is not thread-safe.

Changed in version 3.8: The modern pax (POSIX.1-2001) format is now used instead of the legacy GNU format for archives created with `format="tar"`.

`shutil.get_archive_formats()`

Return a list of supported formats for archiving. Each element of the returned sequence is a tuple `(name, description)`. By default [`shutil`](#module-shutil "shutil: High-level file operations, including copying.") provides these formats:

* *zip*: ZIP file (if the [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") module is available).
* *tar*: Uncompressed tar file. Uses POSIX.1-2001 pax format for new archives.
* *gztar*: gzip’ed tar-file (if the [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") module is available).
* *bztar*: bzip2’ed tar-file (if the [`bz2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") module is available).
* *xztar*: xz’ed tar-file (if the [`lzma`](lzma#module-lzma "lzma: A Python wrapper for the liblzma compression library.") module is available).

You can register new formats or provide your own archiver for any existing formats, by using [`register_archive_format()`](#shutil.register_archive_format "shutil.register_archive_format").

`shutil.register_archive_format(name, function[, extra_args[, description]])`

Register an archiver for the format *name*. *function* is the callable that will be used to create archives. The callable will receive the *base\_name* of the file to create, followed by the *base\_dir* (which defaults to [`os.curdir`](os#os.curdir "os.curdir")) to start archiving from. Further arguments are passed as keyword arguments: *owner*, *group*, *dry\_run* and *logger* (as passed in [`make_archive()`](#shutil.make_archive "shutil.make_archive")). If given, *extra\_args* is a sequence of `(name, value)` pairs that will be used as extra keyword arguments when the archiver callable is used. *description* is used by [`get_archive_formats()`](#shutil.get_archive_formats "shutil.get_archive_formats") which returns the list of archivers. Defaults to an empty string.

`shutil.unregister_archive_format(name)`

Remove the archive format *name* from the list of supported formats.

`shutil.unpack_archive(filename[, extract_dir[, format]])`

Unpack an archive. *filename* is the full path of the archive. *extract\_dir* is the name of the target directory where the archive is unpacked. If not provided, the current working directory is used. *format* is the archive format: one of “zip”, “tar”, “gztar”, “bztar”, or “xztar”, or any other format registered with [`register_unpack_format()`](#shutil.register_unpack_format "shutil.register_unpack_format"). If not provided, [`unpack_archive()`](#shutil.unpack_archive "shutil.unpack_archive") will use the archive file name extension and see if an unpacker was registered for that extension. If none is found, a [`ValueError`](exceptions#ValueError "ValueError") is raised. Raises an [auditing event](sys#auditing) `shutil.unpack_archive` with arguments `filename`, `extract_dir`, `format`.

Warning Never extract archives from untrusted sources without prior inspection. It is possible that files are created outside of the path specified in the *extract\_dir* argument, e.g.
members that have absolute filenames starting with “/” or filenames with two dots “..”.

Changed in version 3.7: Accepts a [path-like object](../glossary#term-path-like-object) for *filename* and *extract\_dir*.

`shutil.register_unpack_format(name, extensions, function[, extra_args[, description]])`

Registers an unpack format. *name* is the name of the format and *extensions* is a list of extensions corresponding to the format, like `.zip` for Zip files. *function* is the callable that will be used to unpack archives. The callable will receive the path of the archive, followed by the directory the archive must be extracted to. When provided, *extra\_args* is a sequence of `(name, value)` tuples that will be passed as keyword arguments to the callable. *description* can be provided to describe the format, and will be returned by the [`get_unpack_formats()`](#shutil.get_unpack_formats "shutil.get_unpack_formats") function.

`shutil.unregister_unpack_format(name)`

Unregister an unpack format. *name* is the name of the format.

`shutil.get_unpack_formats()`

Return a list of all registered formats for unpacking. Each element of the returned sequence is a tuple `(name, extensions, description)`. By default [`shutil`](#module-shutil "shutil: High-level file operations, including copying.") provides these formats:

* *zip*: ZIP file (unpacking compressed files works only if the corresponding module is available).
* *tar*: uncompressed tar file.
* *gztar*: gzip’ed tar-file (if the [`zlib`](zlib#module-zlib "zlib: Low-level interface to compression and decompression routines compatible with gzip.") module is available).
* *bztar*: bzip2’ed tar-file (if the [`bz2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") module is available).
* *xztar*: xz’ed tar-file (if the [`lzma`](lzma#module-lzma "lzma: A Python wrapper for the liblzma compression library.") module is available).

You can register new formats or provide your own unpacker for any existing formats, by using [`register_unpack_format()`](#shutil.register_unpack_format "shutil.register_unpack_format").

### Archiving example

In this example, we create a gzip’ed tar-file archive containing all files found in the `.ssh` directory of the user:

```
>>> from shutil import make_archive
>>> import os
>>> archive_name = os.path.expanduser(os.path.join('~', 'myarchive'))
>>> root_dir = os.path.expanduser(os.path.join('~', '.ssh'))
>>> make_archive(archive_name, 'gztar', root_dir)
'/Users/tarek/myarchive.tar.gz'
```

The resulting archive contains:

```
$ tar -tzvf /Users/tarek/myarchive.tar.gz
drwx------ tarek/staff       0 2010-02-01 16:23:40 ./
-rw-r--r-- tarek/staff     609 2008-06-09 13:26:54 ./authorized_keys
-rwxr-xr-x tarek/staff      65 2008-06-09 13:26:54 ./config
-rwx------ tarek/staff     668 2008-06-09 13:26:54 ./id_dsa
-rwxr-xr-x tarek/staff     609 2008-06-09 13:26:54 ./id_dsa.pub
-rw------- tarek/staff    1675 2008-06-09 13:26:54 ./id_rsa
-rw-r--r-- tarek/staff     397 2008-06-09 13:26:54 ./id_rsa.pub
-rw-r--r-- tarek/staff   37192 2010-02-06 18:23:10 ./known_hosts
```

### Archiving example with *base\_dir*

In this example, similar to the [one above](#shutil-archiving-example), we show how to use [`make_archive()`](#shutil.make_archive "shutil.make_archive"), but this time using *base\_dir*. We now have the following directory structure:

```
$ tree tmp
tmp
└── root
    └── structure
        ├── content
        │   └── please_add.txt
        └── do_not_add.txt
```

In the final archive, `please_add.txt` should be included, but `do_not_add.txt` should not.
Therefore we use the following:

```
>>> from shutil import make_archive
>>> import os
>>> archive_name = os.path.expanduser(os.path.join('~', 'myarchive'))
>>> make_archive(
...     archive_name,
...     'tar',
...     root_dir='tmp/root',
...     base_dir='structure/content',
... )
'/Users/tarek/myarchive.tar'
```

Listing the files in the resulting archive gives us:

```
$ python -m tarfile -l /Users/tarek/myarchive.tar
structure/content/
structure/content/please_add.txt
```

Querying the size of the output terminal
----------------------------------------

`shutil.get_terminal_size(fallback=(columns, lines))`

Get the size of the terminal window. For each of the two dimensions, the corresponding environment variable (`COLUMNS` and `LINES` respectively) is checked. If the variable is defined and the value is a positive integer, it is used. When `COLUMNS` or `LINES` is not defined, which is the common case, the terminal connected to [`sys.__stdout__`](sys#sys.__stdout__ "sys.__stdout__") is queried by invoking [`os.get_terminal_size()`](os#os.get_terminal_size "os.get_terminal_size"). If the terminal size cannot be successfully queried, either because the system doesn’t support querying, or because we are not connected to a terminal, the value given in the *fallback* parameter is used. *fallback* defaults to `(80, 24)`, which is the default size used by many terminal emulators. The value returned is a named tuple of type [`os.terminal_size`](os#os.terminal_size "os.terminal_size"). See also: The Single UNIX Specification, Version 2, [Other Environment Variables](http://pubs.opengroup.org/onlinepubs/7908799/xbd/envvar.html#tag_002_003). New in version 3.3.
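A short usage sketch; the fallback values shown simply repeat the documented default:

```
import shutil

# Query the terminal size, falling back to 80x24 when no terminal is
# attached (e.g. when output is piped to another program).
columns, lines = shutil.get_terminal_size(fallback=(80, 24))
print(f"{columns} columns x {lines} lines")
print("-" * columns)  # a horizontal rule sized to fit the window
```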
python Development Tools Development Tools ================= The modules described in this chapter help you write software. For example, the [`pydoc`](pydoc#module-pydoc "pydoc: Documentation generator and online help system.") module takes a module and generates documentation based on the module’s contents. The [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.") and [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") modules contain frameworks for writing unit tests that automatically exercise code and verify that the expected output is produced. **2to3** can translate Python 2.x source code into valid Python 3.x code. The list of modules described in this chapter is: * [`typing` — Support for type hints](typing) + [Relevant PEPs](typing#relevant-peps) + [Type aliases](typing#type-aliases) + [NewType](typing#newtype) + [Callable](typing#callable) + [Generics](typing#generics) + [User-defined generic types](typing#user-defined-generic-types) + [The `Any` type](typing#the-any-type) + [Nominal vs structural subtyping](typing#nominal-vs-structural-subtyping) + [Module contents](typing#module-contents) - [Special typing primitives](typing#special-typing-primitives) * [Special types](typing#special-types) * [Special forms](typing#special-forms) * [Building generic types](typing#building-generic-types) * [Other special directives](typing#other-special-directives) - [Generic concrete collections](typing#generic-concrete-collections) * [Corresponding to built-in types](typing#corresponding-to-built-in-types) * [Corresponding to types in `collections`](typing#corresponding-to-types-in-collections) * [Other concrete types](typing#other-concrete-types) - [Abstract Base Classes](typing#abstract-base-classes) * [Corresponding to collections in `collections.abc`](typing#corresponding-to-collections-in-collections-abc) * [Corresponding to other types in `collections.abc`](typing#corresponding-to-other-types-in-collections-abc) * [Asynchronous programming](typing#asynchronous-programming) * [Context manager types](typing#context-manager-types) - [Protocols](typing#protocols) - [Functions and decorators](typing#functions-and-decorators) - [Introspection helpers](typing#introspection-helpers) - [Constant](typing#constant) * [`pydoc` — Documentation generator and online help system](pydoc) * [Python Development Mode](devmode) * [Effects of the Python Development Mode](devmode#effects-of-the-python-development-mode) * [ResourceWarning Example](devmode#resourcewarning-example) * [Bad file descriptor error example](devmode#bad-file-descriptor-error-example) * [`doctest` — Test interactive Python examples](doctest) + [Simple Usage: Checking Examples in Docstrings](doctest#simple-usage-checking-examples-in-docstrings) + [Simple Usage: Checking Examples in a Text File](doctest#simple-usage-checking-examples-in-a-text-file) + [How It Works](doctest#how-it-works) - [Which Docstrings Are Examined?](doctest#which-docstrings-are-examined) - [How are Docstring Examples Recognized?](doctest#how-are-docstring-examples-recognized) - [What’s the Execution Context?](doctest#what-s-the-execution-context) - [What About Exceptions?](doctest#what-about-exceptions) - [Option Flags](doctest#option-flags) - [Directives](doctest#directives) - [Warnings](doctest#warnings) + [Basic API](doctest#basic-api) + [Unittest API](doctest#unittest-api) + [Advanced API](doctest#advanced-api) - [DocTest Objects](doctest#doctest-objects) - [Example
Objects](doctest#example-objects) - [DocTestFinder objects](doctest#doctestfinder-objects) - [DocTestParser objects](doctest#doctestparser-objects) - [DocTestRunner objects](doctest#doctestrunner-objects) - [OutputChecker objects](doctest#outputchecker-objects) + [Debugging](doctest#debugging) + [Soapbox](doctest#soapbox) * [`unittest` — Unit testing framework](unittest) + [Basic example](unittest#basic-example) + [Command-Line Interface](unittest#command-line-interface) - [Command-line options](unittest#command-line-options) + [Test Discovery](unittest#test-discovery) + [Organizing test code](unittest#organizing-test-code) + [Re-using old test code](unittest#re-using-old-test-code) + [Skipping tests and expected failures](unittest#skipping-tests-and-expected-failures) + [Distinguishing test iterations using subtests](unittest#distinguishing-test-iterations-using-subtests) + [Classes and functions](unittest#classes-and-functions) - [Test cases](unittest#test-cases) * [Deprecated aliases](unittest#deprecated-aliases) - [Grouping tests](unittest#grouping-tests) - [Loading and running tests](unittest#loading-and-running-tests) * [load\_tests Protocol](unittest#load-tests-protocol) + [Class and Module Fixtures](unittest#class-and-module-fixtures) - [setUpClass and tearDownClass](unittest#setupclass-and-teardownclass) - [setUpModule and tearDownModule](unittest#setupmodule-and-teardownmodule) + [Signal Handling](unittest#signal-handling) * [`unittest.mock` — mock object library](unittest.mock) + [Quick Guide](unittest.mock#quick-guide) + [The Mock Class](unittest.mock#the-mock-class) - [Calling](unittest.mock#calling) - [Deleting Attributes](unittest.mock#deleting-attributes) - [Mock names and the name attribute](unittest.mock#mock-names-and-the-name-attribute) - [Attaching Mocks as Attributes](unittest.mock#attaching-mocks-as-attributes) + [The patchers](unittest.mock#the-patchers) - [patch](unittest.mock#patch) - [patch.object](unittest.mock#patch-object) - [patch.dict](unittest.mock#patch-dict) - [patch.multiple](unittest.mock#patch-multiple) - [patch methods: start and stop](unittest.mock#patch-methods-start-and-stop) - [patch builtins](unittest.mock#patch-builtins) - [TEST\_PREFIX](unittest.mock#test-prefix) - [Nesting Patch Decorators](unittest.mock#nesting-patch-decorators) - [Where to patch](unittest.mock#where-to-patch) - [Patching Descriptors and Proxy Objects](unittest.mock#patching-descriptors-and-proxy-objects) + [MagicMock and magic method support](unittest.mock#magicmock-and-magic-method-support) - [Mocking Magic Methods](unittest.mock#mocking-magic-methods) - [Magic Mock](unittest.mock#magic-mock) + [Helpers](unittest.mock#helpers) - [sentinel](unittest.mock#sentinel) - [DEFAULT](unittest.mock#default) - [call](unittest.mock#call) - [create\_autospec](unittest.mock#create-autospec) - [ANY](unittest.mock#any) - [FILTER\_DIR](unittest.mock#filter-dir) - [mock\_open](unittest.mock#mock-open) - [Autospeccing](unittest.mock#autospeccing) - [Sealing mocks](unittest.mock#sealing-mocks) * [`unittest.mock` — getting started](https://docs.python.org/3.9/library/unittest.mock-examples.html) + [Using Mock](https://docs.python.org/3.9/library/unittest.mock-examples.html#using-mock) - [Mock Patching Methods](https://docs.python.org/3.9/library/unittest.mock-examples.html#mock-patching-methods) - [Mock for Method Calls on an Object](https://docs.python.org/3.9/library/unittest.mock-examples.html#mock-for-method-calls-on-an-object) - [Mocking 
Classes](https://docs.python.org/3.9/library/unittest.mock-examples.html#mocking-classes) - [Naming your mocks](https://docs.python.org/3.9/library/unittest.mock-examples.html#naming-your-mocks) - [Tracking all Calls](https://docs.python.org/3.9/library/unittest.mock-examples.html#tracking-all-calls) - [Setting Return Values and Attributes](https://docs.python.org/3.9/library/unittest.mock-examples.html#setting-return-values-and-attributes) - [Raising exceptions with mocks](https://docs.python.org/3.9/library/unittest.mock-examples.html#raising-exceptions-with-mocks) - [Side effect functions and iterables](https://docs.python.org/3.9/library/unittest.mock-examples.html#side-effect-functions-and-iterables) - [Mocking asynchronous iterators](https://docs.python.org/3.9/library/unittest.mock-examples.html#mocking-asynchronous-iterators) - [Mocking asynchronous context manager](https://docs.python.org/3.9/library/unittest.mock-examples.html#mocking-asynchronous-context-manager) - [Creating a Mock from an Existing Object](https://docs.python.org/3.9/library/unittest.mock-examples.html#creating-a-mock-from-an-existing-object) + [Patch Decorators](https://docs.python.org/3.9/library/unittest.mock-examples.html#patch-decorators) + [Further Examples](https://docs.python.org/3.9/library/unittest.mock-examples.html#further-examples) - [Mocking chained calls](https://docs.python.org/3.9/library/unittest.mock-examples.html#mocking-chained-calls) - [Partial mocking](https://docs.python.org/3.9/library/unittest.mock-examples.html#partial-mocking) - [Mocking a Generator Method](https://docs.python.org/3.9/library/unittest.mock-examples.html#mocking-a-generator-method) - [Applying the same patch to every test method](https://docs.python.org/3.9/library/unittest.mock-examples.html#applying-the-same-patch-to-every-test-method) - [Mocking Unbound Methods](https://docs.python.org/3.9/library/unittest.mock-examples.html#mocking-unbound-methods) - [Checking multiple calls with mock](https://docs.python.org/3.9/library/unittest.mock-examples.html#checking-multiple-calls-with-mock) - [Coping with mutable arguments](https://docs.python.org/3.9/library/unittest.mock-examples.html#coping-with-mutable-arguments) - [Nesting Patches](https://docs.python.org/3.9/library/unittest.mock-examples.html#nesting-patches) - [Mocking a dictionary with MagicMock](https://docs.python.org/3.9/library/unittest.mock-examples.html#mocking-a-dictionary-with-magicmock) - [Mock subclasses and their attributes](https://docs.python.org/3.9/library/unittest.mock-examples.html#mock-subclasses-and-their-attributes) - [Mocking imports with patch.dict](https://docs.python.org/3.9/library/unittest.mock-examples.html#mocking-imports-with-patch-dict) - [Tracking order of calls and less verbose call assertions](https://docs.python.org/3.9/library/unittest.mock-examples.html#tracking-order-of-calls-and-less-verbose-call-assertions) - [More complex argument matching](https://docs.python.org/3.9/library/unittest.mock-examples.html#more-complex-argument-matching) * [2to3 - Automated Python 2 to 3 code translation](https://docs.python.org/3.9/library/2to3.html) + [Using 2to3](https://docs.python.org/3.9/library/2to3.html#using-2to3) + [Fixers](https://docs.python.org/3.9/library/2to3.html#fixers) + [`lib2to3` - 2to3’s library](https://docs.python.org/3.9/library/2to3.html#module-lib2to3) * [`test` — Regression tests package for Python](test) + [Writing Unit Tests for the `test` package](test#writing-unit-tests-for-the-test-package) + [Running tests using 
the command-line interface](test#running-tests-using-the-command-line-interface) * [`test.support` — Utilities for the Python test suite](test#module-test.support) * [`test.support.socket_helper` — Utilities for socket tests](test#module-test.support.socket_helper) * [`test.support.script_helper` — Utilities for the Python execution tests](test#module-test.support.script_helper) * [`test.support.bytecode_helper` — Support tools for testing correct bytecode generation](test#module-test.support.bytecode_helper) python The concurrent package The concurrent package ====================== Currently, there is only one module in this package: * [`concurrent.futures`](concurrent.futures#module-concurrent.futures "concurrent.futures: Execute computations concurrently using threads or processes.") – Launching parallel tasks python gc — Garbage Collector interface gc — Garbage Collector interface ================================ This module provides an interface to the optional garbage collector. It provides the ability to disable the collector, tune the collection frequency, and set debugging options. It also provides access to unreachable objects that the collector found but cannot free. Since the collector supplements the reference counting already used in Python, you can disable the collector if you are sure your program does not create reference cycles. Automatic collection can be disabled by calling `gc.disable()`. To debug a leaking program call `gc.set_debug(gc.DEBUG_LEAK)`. Notice that this includes `gc.DEBUG_SAVEALL`, causing garbage-collected objects to be saved in gc.garbage for inspection. The [`gc`](#module-gc "gc: Interface to the cycle-detecting garbage collector.") module provides the following functions: `gc.enable()` Enable automatic garbage collection. `gc.disable()` Disable automatic garbage collection. `gc.isenabled()` Return `True` if automatic collection is enabled. `gc.collect(generation=2)` With no arguments, run a full collection. The optional argument *generation* may be an integer specifying which generation to collect (from 0 to 2). A [`ValueError`](exceptions#ValueError "ValueError") is raised if the generation number is invalid. The number of unreachable objects found is returned. The free lists maintained for a number of built-in types are cleared whenever a full collection or collection of the highest generation (2) is run. Not all items in some free lists may be freed due to the particular implementation, in particular [`float`](functions#float "float"). `gc.set_debug(flags)` Set the garbage collection debugging flags. Debugging information will be written to `sys.stderr`. See below for a list of debugging flags which can be combined using bit operations to control debugging. `gc.get_debug()` Return the debugging flags currently set. `gc.get_objects(generation=None)` Returns a list of all objects tracked by the collector, excluding the list returned. If *generation* is not None, return only the objects tracked by the collector that are in that generation. Changed in version 3.8: New *generation* parameter. Raises an [auditing event](sys#auditing) `gc.get_objects` with argument `generation`. `gc.get_stats()` Return a list of three per-generation dictionaries containing collection statistics since interpreter start. 
The number of keys may change in the future, but currently each dictionary will contain the following items:

* `collections` is the number of times this generation was collected;
* `collected` is the total number of objects collected inside this generation;
* `uncollectable` is the total number of objects which were found to be uncollectable (and were therefore moved to the [`garbage`](#gc.garbage "gc.garbage") list) inside this generation.

New in version 3.4.

`gc.set_threshold(threshold0[, threshold1[, threshold2]])`

Set the garbage collection thresholds (the collection frequency). Setting *threshold0* to zero disables collection. The GC classifies objects into three generations depending on how many collection sweeps they have survived. New objects are placed in the youngest generation (generation `0`). If an object survives a collection it is moved into the next older generation. Since generation `2` is the oldest generation, objects in that generation remain there after a collection. In order to decide when to run, the collector keeps track of the number of object allocations and deallocations since the last collection. When the number of allocations minus the number of deallocations exceeds *threshold0*, collection starts. Initially only generation `0` is examined. If generation `0` has been examined more than *threshold1* times since generation `1` has been examined, then generation `1` is examined as well. With the third generation, things are a bit more complicated; see [Collecting the oldest generation](https://devguide.python.org/garbage_collector/#collecting-the-oldest-generation) for more information.

`gc.get_count()`

Return the current collection counts as a tuple of `(count0, count1, count2)`.

`gc.get_threshold()`

Return the current collection thresholds as a tuple of `(threshold0, threshold1, threshold2)`.

`gc.get_referrers(*objs)`

Return the list of objects that directly refer to any of objs. This function will only locate those containers which support garbage collection; extension types which do refer to other objects but do not support garbage collection will not be found. Note that objects which have already been dereferenced, but which live in cycles and have not yet been collected by the garbage collector can be listed among the resulting referrers. To get only currently live objects, call [`collect()`](#gc.collect "gc.collect") before calling [`get_referrers()`](#gc.get_referrers "gc.get_referrers").

Warning Care must be taken when using objects returned by [`get_referrers()`](#gc.get_referrers "gc.get_referrers") because some of them could still be under construction and hence in a temporarily invalid state. Avoid using [`get_referrers()`](#gc.get_referrers "gc.get_referrers") for any purpose other than debugging.

Raises an [auditing event](sys#auditing) `gc.get_referrers` with argument `objs`.

`gc.get_referents(*objs)`

Return a list of objects directly referred to by any of the arguments. The referents returned are those objects visited by the arguments’ C-level [`tp_traverse`](../c-api/typeobj#c.PyTypeObject.tp_traverse "PyTypeObject.tp_traverse") methods (if any), and may not be all objects actually directly reachable. [`tp_traverse`](../c-api/typeobj#c.PyTypeObject.tp_traverse "PyTypeObject.tp_traverse") methods are supported only by objects that support garbage collection, and are only required to visit objects that may be involved in a cycle.
So, for example, if an integer is directly reachable from an argument, that integer object may or may not appear in the result list. Raises an [auditing event](sys#auditing) `gc.get_referents` with argument `objs`.

`gc.is_tracked(obj)`

Returns `True` if the object is currently tracked by the garbage collector, `False` otherwise. As a general rule, instances of atomic types aren’t tracked and instances of non-atomic types (containers, user-defined objects…) are. However, some type-specific optimizations can be present in order to suppress the garbage collector footprint of simple instances (e.g. dicts containing only atomic keys and values):

```
>>> gc.is_tracked(0)
False
>>> gc.is_tracked("a")
False
>>> gc.is_tracked([])
True
>>> gc.is_tracked({})
False
>>> gc.is_tracked({"a": 1})
False
>>> gc.is_tracked({"a": []})
True
```

New in version 3.1.

`gc.is_finalized(obj)`

Returns `True` if the given object has been finalized by the garbage collector, `False` otherwise.

```
>>> x = None
>>> class Lazarus:
...     def __del__(self):
...         global x
...         x = self
...
>>> lazarus = Lazarus()
>>> gc.is_finalized(lazarus)
False
>>> del lazarus
>>> gc.is_finalized(x)
True
```

New in version 3.9.

`gc.freeze()`

Freeze all the objects tracked by gc: move them to a permanent generation and ignore them in all future collections. This can be used before a POSIX fork() call to make the gc copy-on-write friendly or to speed up collection. A collection run before a POSIX fork() call may also free pages for future allocation, which can itself trigger copy-on-write, so it is advisable to disable gc in the parent process, freeze before the fork, and re-enable gc in the child process. New in version 3.7.

`gc.unfreeze()`

Unfreeze the objects in the permanent generation and put them back into the oldest generation. New in version 3.7.

`gc.get_freeze_count()`

Return the number of objects in the permanent generation. New in version 3.7.

The following variables are provided for read-only access (you can mutate the values but should not rebind them):

`gc.garbage`

A list of objects which the collector found to be unreachable but could not be freed (uncollectable objects). Starting with Python 3.4, this list should be empty most of the time, except when using instances of C extension types with a non-`NULL` `tp_del` slot. If [`DEBUG_SAVEALL`](#gc.DEBUG_SAVEALL "gc.DEBUG_SAVEALL") is set, then all unreachable objects will be added to this list rather than freed. Changed in version 3.2: If this list is non-empty at [interpreter shutdown](../glossary#term-interpreter-shutdown), a [`ResourceWarning`](exceptions#ResourceWarning "ResourceWarning") is emitted, which is silent by default. If [`DEBUG_UNCOLLECTABLE`](#gc.DEBUG_UNCOLLECTABLE "gc.DEBUG_UNCOLLECTABLE") is set, in addition all uncollectable objects are printed. Changed in version 3.4: Following [**PEP 442**](https://www.python.org/dev/peps/pep-0442), objects with a [`__del__()`](../reference/datamodel#object.__del__ "object.__del__") method don’t end up in [`gc.garbage`](#gc.garbage "gc.garbage") anymore.

`gc.callbacks`

A list of callbacks that will be invoked by the garbage collector before and after collection. The callbacks will be called with two arguments, *phase* and *info*. *phase* can be one of two values: “start”: The garbage collection is about to start. “stop”: The garbage collection has finished. *info* is a dict providing more information for the callback. The following keys are currently defined: “generation”: The oldest generation being collected.
“collected”: When *phase* is “stop”, the number of objects successfully collected. “uncollectable”: When *phase* is “stop”, the number of objects that could not be collected and were put in [`garbage`](#gc.garbage "gc.garbage").

Applications can add their own callbacks to this list. The primary use cases are:

* Gathering statistics about garbage collection, such as how often various generations are collected, and how long the collection takes.
* Allowing applications to identify and clear their own uncollectable types when they appear in [`garbage`](#gc.garbage "gc.garbage").

New in version 3.3.

The following constants are provided for use with [`set_debug()`](#gc.set_debug "gc.set_debug"):

`gc.DEBUG_STATS`

Print statistics during collection. This information can be useful when tuning the collection frequency.

`gc.DEBUG_COLLECTABLE`

Print information on collectable objects found.

`gc.DEBUG_UNCOLLECTABLE`

Print information on uncollectable objects found (objects which are not reachable but cannot be freed by the collector). These objects will be added to the `garbage` list. Changed in version 3.2: Also print the contents of the [`garbage`](#gc.garbage "gc.garbage") list at [interpreter shutdown](../glossary#term-interpreter-shutdown), if it isn’t empty.

`gc.DEBUG_SAVEALL`

When set, all unreachable objects found will be appended to *garbage* rather than being freed. This can be useful for debugging a leaking program.

`gc.DEBUG_LEAK`

The debugging flags necessary for the collector to print information about a leaking program (equal to `DEBUG_COLLECTABLE | DEBUG_UNCOLLECTABLE | DEBUG_SAVEALL`).
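As a minimal sketch of the statistics-gathering use case described above for [`gc.callbacks`](#gc.callbacks "gc.callbacks"); the timing logic and output format are illustrative assumptions, not part of the module:

```
import gc
import time

start_times = {}

def gc_timer(phase, info):
    # phase is "start" when a collection begins and "stop" when it ends;
    # info["generation"] is the oldest generation being collected.
    if phase == "start":
        start_times[info["generation"]] = time.monotonic()
    elif phase == "stop":
        started = start_times.pop(info["generation"], time.monotonic())
        elapsed = time.monotonic() - started
        print(f"gen {info['generation']}: {info['collected']} collected, "
              f"{info['uncollectable']} uncollectable, {elapsed:.6f}s")

gc.callbacks.append(gc_timer)
gc.collect()  # run a full collection; the callback fires at start and stop
gc.callbacks.remove(gc_timer)
```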
python email.header: Internationalized headers email.header: Internationalized headers ======================================= **Source code:** [Lib/email/header.py](https://github.com/python/cpython/tree/3.9/Lib/email/header.py) This module is part of the legacy (`Compat32`) email API. In the current API encoding and decoding of headers is handled transparently by the dictionary-like API of the [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") class. In addition to uses in legacy code, this module can be useful in applications that need to completely control the character sets used when encoding headers. The remaining text in this section is the original documentation of the module. [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html) is the base standard that describes the format of email messages. It derives from the older [**RFC 822**](https://tools.ietf.org/html/rfc822.html) standard which came into widespread use at a time when most email was composed of ASCII characters only. [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html) is a specification written assuming email contains only 7-bit ASCII characters. Of course, as email has been deployed worldwide, it has become internationalized, such that language specific character sets can now be used in email messages. The base standard still requires email messages to be transferred using only 7-bit ASCII characters, so a slew of RFCs have been written describing how to encode email containing non-ASCII characters into [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html)-compliant format. These RFCs include [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html), [**RFC 2046**](https://tools.ietf.org/html/rfc2046.html), [**RFC 2047**](https://tools.ietf.org/html/rfc2047.html), and [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html). The [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package supports these standards in its [`email.header`](#module-email.header "email.header: Representing non-ASCII headers") and [`email.charset`](email.charset#module-email.charset "email.charset: Character Sets") modules. If you want to include non-ASCII characters in your email headers, say in the *Subject* or *To* fields, you should use the [`Header`](#email.header.Header "email.header.Header") class and assign the field in the [`Message`](email.compat32-message#email.message.Message "email.message.Message") object to an instance of [`Header`](#email.header.Header "email.header.Header") instead of using a string for the header value. Import the [`Header`](#email.header.Header "email.header.Header") class from the [`email.header`](#module-email.header "email.header: Representing non-ASCII headers") module. For example: ``` >>> from email.message import Message >>> from email.header import Header >>> msg = Message() >>> h = Header('p\xf6stal', 'iso-8859-1') >>> msg['Subject'] = h >>> msg.as_string() 'Subject: =?iso-8859-1?q?p=F6stal?=\n\n' ``` Notice here how we wanted the *Subject* field to contain a non-ASCII character? We did this by creating a [`Header`](#email.header.Header "email.header.Header") instance and passing in the character set that the byte string was encoded in. When the subsequent [`Message`](email.compat32-message#email.message.Message "email.message.Message") instance was flattened, the *Subject* field was properly [**RFC 2047**](https://tools.ietf.org/html/rfc2047.html) encoded. 
MIME-aware mail readers would show this header using the embedded ISO-8859-1 character. Here is the [`Header`](#email.header.Header "email.header.Header") class description: `class email.header.Header(s=None, charset=None, maxlinelen=None, header_name=None, continuation_ws=' ', errors='strict')` Create a MIME-compliant header that can contain strings in different character sets. Optional *s* is the initial header value. If `None` (the default), the initial header value is not set. You can later append to the header with [`append()`](#email.header.Header.append "email.header.Header.append") method calls. *s* may be an instance of [`bytes`](stdtypes#bytes "bytes") or [`str`](stdtypes#str "str"), but see the [`append()`](#email.header.Header.append "email.header.Header.append") documentation for semantics. Optional *charset* serves two purposes: it has the same meaning as the *charset* argument to the [`append()`](#email.header.Header.append "email.header.Header.append") method. It also sets the default character set for all subsequent [`append()`](#email.header.Header.append "email.header.Header.append") calls that omit the *charset* argument. If *charset* is not provided in the constructor (the default), the `us-ascii` character set is used both as *s*’s initial charset and as the default for subsequent [`append()`](#email.header.Header.append "email.header.Header.append") calls. The maximum line length can be specified explicitly via *maxlinelen*. For splitting the first line to a shorter value (to account for the field header which isn’t included in *s*, e.g. *Subject*) pass in the name of the field in *header\_name*. The default *maxlinelen* is 76, and the default value for *header\_name* is `None`, meaning it is not taken into account for the first line of a long, split header. Optional *continuation\_ws* must be [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html)-compliant folding whitespace, and is usually either a space or a hard tab character. This character will be prepended to continuation lines. *continuation\_ws* defaults to a single space character. Optional *errors* is passed straight through to the [`append()`](#email.header.Header.append "email.header.Header.append") method. `append(s, charset=None, errors='strict')` Append the string *s* to the MIME header. Optional *charset*, if given, should be a [`Charset`](email.charset#email.charset.Charset "email.charset.Charset") instance (see [`email.charset`](email.charset#module-email.charset "email.charset: Character Sets")) or the name of a character set, which will be converted to a [`Charset`](email.charset#email.charset.Charset "email.charset.Charset") instance. A value of `None` (the default) means that the *charset* given in the constructor is used. *s* may be an instance of [`bytes`](stdtypes#bytes "bytes") or [`str`](stdtypes#str "str"). If it is an instance of [`bytes`](stdtypes#bytes "bytes"), then *charset* is the encoding of that byte string, and a [`UnicodeError`](exceptions#UnicodeError "UnicodeError") will be raised if the string cannot be decoded with that character set. If *s* is an instance of [`str`](stdtypes#str "str"), then *charset* is a hint specifying the character set of the characters in the string. In either case, when producing an [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html)-compliant header using [**RFC 2047**](https://tools.ietf.org/html/rfc2047.html) rules, the string will be encoded using the output codec of the charset. 
If the string cannot be encoded using the output codec, a UnicodeError will be raised. Optional *errors* is passed as the errors argument to the decode call if *s* is a byte string. `encode(splitchars=';, \t', maxlinelen=None, linesep='\n')` Encode a message header into an RFC-compliant format, possibly wrapping long lines and encapsulating non-ASCII parts in base64 or quoted-printable encodings. Optional *splitchars* is a string containing characters which should be given extra weight by the splitting algorithm during normal header wrapping. This is in very rough support of [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html)’s ‘higher level syntactic breaks’: split points preceded by a splitchar are preferred during line splitting, with the characters preferred in the order in which they appear in the string. Space and tab may be included in the string to indicate whether preference should be given to one over the other as a split point when other split chars do not appear in the line being split. Splitchars does not affect [**RFC 2047**](https://tools.ietf.org/html/rfc2047.html) encoded lines. *maxlinelen*, if given, overrides the instance’s value for the maximum line length. *linesep* specifies the characters used to separate the lines of the folded header. It defaults to the most useful value for Python application code (`\n`), but `\r\n` can be specified in order to produce headers with RFC-compliant line separators. Changed in version 3.2: Added the *linesep* argument. The [`Header`](#email.header.Header "email.header.Header") class also provides a number of methods to support standard operators and built-in functions. `__str__()` Returns an approximation of the [`Header`](#email.header.Header "email.header.Header") as a string, using an unlimited line length. All pieces are converted to unicode using the specified encoding and joined together appropriately. Any pieces with a charset of `'unknown-8bit'` are decoded as ASCII using the `'replace'` error handler. Changed in version 3.2: Added handling for the `'unknown-8bit'` charset. `__eq__(other)` This method allows you to compare two [`Header`](#email.header.Header "email.header.Header") instances for equality. `__ne__(other)` This method allows you to compare two [`Header`](#email.header.Header "email.header.Header") instances for inequality. The [`email.header`](#module-email.header "email.header: Representing non-ASCII headers") module also provides the following convenient functions. `email.header.decode_header(header)` Decode a message header value without converting the character set. The header value is in *header*. This function returns a list of `(decoded_string, charset)` pairs containing each of the decoded parts of the header. *charset* is `None` for non-encoded parts of the header, otherwise a lower case string containing the name of the character set specified in the encoded string. Here’s an example: ``` >>> from email.header import decode_header >>> decode_header('=?iso-8859-1?q?p=F6stal?=') [(b'p\xf6stal', 'iso-8859-1')] ``` `email.header.make_header(decoded_seq, maxlinelen=None, header_name=None, continuation_ws=' ')` Create a [`Header`](#email.header.Header "email.header.Header") instance from a sequence of pairs as returned by [`decode_header()`](#email.header.decode_header "email.header.decode_header"). 
[`decode_header()`](#email.header.decode_header "email.header.decode_header") takes a header value string and returns a sequence of pairs of the format `(decoded_string, charset)` where *charset* is the name of the character set. This function takes one such sequence of pairs and returns a [`Header`](#email.header.Header "email.header.Header") instance. Optional *maxlinelen*, *header\_name*, and *continuation\_ws* are as in the [`Header`](#email.header.Header "email.header.Header") constructor.

python tempfile — Generate temporary files and directories

tempfile — Generate temporary files and directories
===================================================

**Source code:** [Lib/tempfile.py](https://github.com/python/cpython/tree/3.9/Lib/tempfile.py)

This module creates temporary files and directories. It works on all supported platforms. [`TemporaryFile`](#tempfile.TemporaryFile "tempfile.TemporaryFile"), [`NamedTemporaryFile`](#tempfile.NamedTemporaryFile "tempfile.NamedTemporaryFile"), [`TemporaryDirectory`](#tempfile.TemporaryDirectory "tempfile.TemporaryDirectory"), and [`SpooledTemporaryFile`](#tempfile.SpooledTemporaryFile "tempfile.SpooledTemporaryFile") are high-level interfaces which provide automatic cleanup and can be used as context managers. [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp") and [`mkdtemp()`](#tempfile.mkdtemp "tempfile.mkdtemp") are lower-level functions which require manual cleanup. All the user-callable functions and constructors take additional arguments which allow direct control over the location and name of temporary files and directories. File names used by this module include a string of random characters which allows those files to be securely created in shared temporary directories. To maintain backward compatibility, the argument order is somewhat odd; it is recommended to use keyword arguments for clarity. The module defines the following user-callable items:

`tempfile.TemporaryFile(mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, *, errors=None)`

Return a [file-like object](../glossary#term-file-like-object) that can be used as a temporary storage area. The file is created securely, using the same rules as [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp"). It will be destroyed as soon as it is closed (including an implicit close when the object is garbage collected). Under Unix, the directory entry for the file is either not created at all or is removed immediately after the file is created. Other platforms do not support this; your code should not rely on a temporary file created using this function having or not having a visible name in the file system. The resulting object can be used as a context manager (see [Examples](#tempfile-examples)). On completion of the context or destruction of the file object the temporary file will be removed from the filesystem. The *mode* parameter defaults to `'w+b'` so that the file created can be read and written without being closed. Binary mode is used so that it behaves consistently on all platforms without regard for the data that is stored. *buffering*, *encoding*, *errors* and *newline* are interpreted as for [`open()`](functions#open "open"). The *dir*, *prefix* and *suffix* parameters have the same meaning and defaults as with [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp"). The returned object is a true file object on POSIX platforms. On other platforms, it is a file-like object whose `file` attribute is the underlying true file object.
The [`os.O_TMPFILE`](os#os.O_TMPFILE "os.O_TMPFILE") flag is used if it is available and works (Linux-specific, requires Linux kernel 3.11 or later). On platforms that are neither Posix nor Cygwin, TemporaryFile is an alias for NamedTemporaryFile. Raises an [auditing event](sys#auditing) `tempfile.mkstemp` with argument `fullpath`. Changed in version 3.5: The [`os.O_TMPFILE`](os#os.O_TMPFILE "os.O_TMPFILE") flag is now used if available. Changed in version 3.8: Added *errors* parameter. `tempfile.NamedTemporaryFile(mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, delete=True, *, errors=None)` This function operates exactly as [`TemporaryFile()`](#tempfile.TemporaryFile "tempfile.TemporaryFile") does, except that the file is guaranteed to have a visible name in the file system (on Unix, the directory entry is not unlinked). That name can be retrieved from the `name` attribute of the returned file-like object. Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows). If *delete* is true (the default), the file is deleted as soon as it is closed. The returned object is always a file-like object whose `file` attribute is the underlying true file object. This file-like object can be used in a [`with`](../reference/compound_stmts#with) statement, just like a normal file. Raises an [auditing event](sys#auditing) `tempfile.mkstemp` with argument `fullpath`. Changed in version 3.8: Added *errors* parameter. `tempfile.SpooledTemporaryFile(max_size=0, mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, *, errors=None)` This function operates exactly as [`TemporaryFile()`](#tempfile.TemporaryFile "tempfile.TemporaryFile") does, except that data is spooled in memory until the file size exceeds *max\_size*, or until the file’s `fileno()` method is called, at which point the contents are written to disk and operation proceeds as with [`TemporaryFile()`](#tempfile.TemporaryFile "tempfile.TemporaryFile"). The resulting file has one additional method, `rollover()`, which causes the file to roll over to an on-disk file regardless of its size. The returned object is a file-like object whose `_file` attribute is either an [`io.BytesIO`](io#io.BytesIO "io.BytesIO") or [`io.TextIOWrapper`](io#io.TextIOWrapper "io.TextIOWrapper") object (depending on whether binary or text *mode* was specified) or a true file object, depending on whether `rollover()` has been called. This file-like object can be used in a [`with`](../reference/compound_stmts#with) statement, just like a normal file. Changed in version 3.3: the truncate method now accepts a `size` argument. Changed in version 3.8: Added *errors* parameter. `tempfile.TemporaryDirectory(suffix=None, prefix=None, dir=None)` This function securely creates a temporary directory using the same rules as [`mkdtemp()`](#tempfile.mkdtemp "tempfile.mkdtemp"). The resulting object can be used as a context manager (see [Examples](#tempfile-examples)). On completion of the context or destruction of the temporary directory object the newly created temporary directory and all its contents are removed from the filesystem. The directory name can be retrieved from the `name` attribute of the returned object. 
When the returned object is used as a context manager, the `name` will be assigned to the target of the `as` clause in the [`with`](../reference/compound_stmts#with) statement, if there is one. The directory can be explicitly cleaned up by calling the `cleanup()` method. Raises an [auditing event](sys#auditing) `tempfile.mkdtemp` with argument `fullpath`. New in version 3.2. `tempfile.mkstemp(suffix=None, prefix=None, dir=None, text=False)` Creates a temporary file in the most secure manner possible. There are no race conditions in the file’s creation, assuming that the platform properly implements the [`os.O_EXCL`](os#os.O_EXCL "os.O_EXCL") flag for [`os.open()`](os#os.open "os.open"). The file is readable and writable only by the creating user ID. If the platform uses permission bits to indicate whether a file is executable, the file is executable by no one. The file descriptor is not inherited by child processes. Unlike [`TemporaryFile()`](#tempfile.TemporaryFile "tempfile.TemporaryFile"), the user of [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp") is responsible for deleting the temporary file when done with it. If *suffix* is not `None`, the file name will end with that suffix, otherwise there will be no suffix. [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp") does not put a dot between the file name and the suffix; if you need one, put it at the beginning of *suffix*. If *prefix* is not `None`, the file name will begin with that prefix; otherwise, a default prefix is used. The default is the return value of [`gettempprefix()`](#tempfile.gettempprefix "tempfile.gettempprefix") or [`gettempprefixb()`](#tempfile.gettempprefixb "tempfile.gettempprefixb"), as appropriate. If *dir* is not `None`, the file will be created in that directory; otherwise, a default directory is used. The default directory is chosen from a platform-dependent list, but the user of the application can control the directory location by setting the *TMPDIR*, *TEMP* or *TMP* environment variables. There is thus no guarantee that the generated filename will have any nice properties, such as not requiring quoting when passed to external commands via `os.popen()`. If any of *suffix*, *prefix*, and *dir* are not `None`, they must be the same type. If they are bytes, the returned name will be bytes instead of str. If you want to force a bytes return value with otherwise default behavior, pass `suffix=b''`. If *text* is specified and true, the file is opened in text mode. Otherwise, (the default) the file is opened in binary mode. [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp") returns a tuple containing an OS-level handle to an open file (as would be returned by [`os.open()`](os#os.open "os.open")) and the absolute pathname of that file, in that order. Raises an [auditing event](sys#auditing) `tempfile.mkstemp` with argument `fullpath`. Changed in version 3.5: *suffix*, *prefix*, and *dir* may now be supplied in bytes in order to obtain a bytes return value. Prior to this, only str was allowed. *suffix* and *prefix* now accept and default to `None` to cause an appropriate default value to be used. Changed in version 3.6: The *dir* parameter now accepts a [path-like object](../glossary#term-path-like-object). `tempfile.mkdtemp(suffix=None, prefix=None, dir=None)` Creates a temporary directory in the most secure manner possible. There are no race conditions in the directory’s creation. The directory is readable, writable, and searchable only by the creating user ID. 
The user of [`mkdtemp()`](#tempfile.mkdtemp "tempfile.mkdtemp") is responsible for deleting the temporary directory and its contents when done with it. The *prefix*, *suffix*, and *dir* arguments are the same as for [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp"). [`mkdtemp()`](#tempfile.mkdtemp "tempfile.mkdtemp") returns the absolute pathname of the new directory. Raises an [auditing event](sys#auditing) `tempfile.mkdtemp` with argument `fullpath`. Changed in version 3.5: *suffix*, *prefix*, and *dir* may now be supplied in bytes in order to obtain a bytes return value. Prior to this, only str was allowed. *suffix* and *prefix* now accept and default to `None` to cause an appropriate default value to be used. Changed in version 3.6: The *dir* parameter now accepts a [path-like object](../glossary#term-path-like-object). `tempfile.gettempdir()` Return the name of the directory used for temporary files. This defines the default value for the *dir* argument to all functions in this module. Python searches a standard list of directories to find one which the calling user can create files in. The list is: 1. The directory named by the `TMPDIR` environment variable. 2. The directory named by the `TEMP` environment variable. 3. The directory named by the `TMP` environment variable. 4. A platform-specific location: * On Windows, the directories `C:\TEMP`, `C:\TMP`, `\TEMP`, and `\TMP`, in that order. * On all other platforms, the directories `/tmp`, `/var/tmp`, and `/usr/tmp`, in that order. 5. As a last resort, the current working directory. The result of this search is cached, see the description of [`tempdir`](#tempfile.tempdir "tempfile.tempdir") below. `tempfile.gettempdirb()` Same as [`gettempdir()`](#tempfile.gettempdir "tempfile.gettempdir") but the return value is in bytes. New in version 3.5. `tempfile.gettempprefix()` Return the filename prefix used to create temporary files. This does not contain the directory component. `tempfile.gettempprefixb()` Same as [`gettempprefix()`](#tempfile.gettempprefix "tempfile.gettempprefix") but the return value is in bytes. New in version 3.5. The module uses a global variable to store the name of the directory used for temporary files returned by [`gettempdir()`](#tempfile.gettempdir "tempfile.gettempdir"). It can be set directly to override the selection process, but this is discouraged. All functions in this module take a *dir* argument which can be used to specify the directory and this is the recommended approach. `tempfile.tempdir` When set to a value other than `None`, this variable defines the default value for the *dir* argument to the functions defined in this module. If `tempdir` is `None` (the default) at any call to any of the above functions except [`gettempprefix()`](#tempfile.gettempprefix "tempfile.gettempprefix") it is initialized following the algorithm described in [`gettempdir()`](#tempfile.gettempdir "tempfile.gettempdir"). Examples -------- Here are some examples of typical usage of the [`tempfile`](#module-tempfile "tempfile: Generate temporary files and directories.") module: ``` >>> import tempfile # create a temporary file and write some data to it >>> fp = tempfile.TemporaryFile() >>> fp.write(b'Hello world!') # read data from file >>> fp.seek(0) >>> fp.read() b'Hello world!' # close the file, it will be removed >>> fp.close() # create a temporary file using a context manager >>> with tempfile.TemporaryFile() as fp: ... fp.write(b'Hello world!') ... fp.seek(0) ... fp.read() b'Hello world!' 
>>> # file is now closed and removed # create a temporary directory using the context manager >>> with tempfile.TemporaryDirectory() as tmpdirname: ... print('created temporary directory', tmpdirname) >>> # directory and contents have been removed ``` Deprecated functions and variables ---------------------------------- A historical way to create temporary files was to first generate a file name with the [`mktemp()`](#tempfile.mktemp "tempfile.mktemp") function and then create a file using this name. Unfortunately this is not secure, because a different process may create a file with this name in the time between the call to [`mktemp()`](#tempfile.mktemp "tempfile.mktemp") and the subsequent attempt to create the file by the first process. The solution is to combine the two steps and create the file immediately. This approach is used by [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp") and the other functions described above. `tempfile.mktemp(suffix='', prefix='tmp', dir=None)` Deprecated since version 2.3: Use [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp") instead. Return an absolute pathname of a file that did not exist at the time the call is made. The *prefix*, *suffix*, and *dir* arguments are similar to those of [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp"), except that bytes file names, `suffix=None` and `prefix=None` are not supported. Warning Use of this function may introduce a security hole in your program. By the time you get around to doing anything with the file name it returns, someone else may have beaten you to the punch. [`mktemp()`](#tempfile.mktemp "tempfile.mktemp") usage can be replaced easily with [`NamedTemporaryFile()`](#tempfile.NamedTemporaryFile "tempfile.NamedTemporaryFile"), passing it the `delete=False` parameter: ``` >>> f = NamedTemporaryFile(delete=False) >>> f.name '/tmp/tmptjujjt' >>> f.write(b"Hello World!\n") 13 >>> f.close() >>> os.unlink(f.name) >>> os.path.exists(f.name) False ```
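Because [`mkstemp()`](#tempfile.mkstemp "tempfile.mkstemp") and [`mkdtemp()`](#tempfile.mkdtemp "tempfile.mkdtemp") require manual cleanup, a common pattern is to wrap the returned descriptor in a file object and remove the file explicitly; a minimal sketch, where the suffix and contents are arbitrary placeholders:

```
import os
import tempfile

# mkstemp() returns an OS-level file descriptor and the absolute path.
fd, path = tempfile.mkstemp(suffix=".txt")
try:
    # Wrap the descriptor in a regular file object; closing the
    # file object also closes the underlying descriptor.
    with os.fdopen(fd, "w") as tmp:
        tmp.write("scratch data")
finally:
    os.unlink(path)  # the caller is responsible for deleting the file
```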
python symbol — Constants used with Python parse trees symbol — Constants used with Python parse trees =============================================== **Source code:** [Lib/symbol.py](https://github.com/python/cpython/tree/3.9/Lib/symbol.py) This module provides constants which represent the numeric values of internal nodes of the parse tree. Unlike most Python constants, these use lower-case names. Refer to the file `Grammar/Grammar` in the Python distribution for the definitions of the names in the context of the language grammar. The specific numeric values which the names map to may change between Python versions. Warning The symbol module is deprecated and will be removed in future versions of Python. This module also provides one additional data object: `symbol.sym_name` Dictionary mapping the numeric values of the constants defined in this module back to name strings, allowing more human-readable representation of parse trees to be generated. python Multimedia Services Multimedia Services =================== The modules described in this chapter implement various algorithms or interfaces that are mainly useful for multimedia applications. They are available at the discretion of the installation. Here’s an overview: * [`wave` — Read and write WAV files](wave) + [Wave\_read Objects](wave#wave-read-objects) + [Wave\_write Objects](wave#wave-write-objects) * [`colorsys` — Conversions between color systems](colorsys) python Audit events table Audit events table ================== This table contains all events raised by [`sys.audit()`](sys#sys.audit "sys.audit") or [`PySys_Audit()`](../c-api/sys#c.PySys_Audit "PySys_Audit") calls throughout the CPython runtime and the standard library. These calls were added in 3.8.0 or later (see [**PEP 578**](https://www.python.org/dev/peps/pep-0578)). See [`sys.addaudithook()`](sys#sys.addaudithook "sys.addaudithook") and [`PySys_AddAuditHook()`](../c-api/sys#c.PySys_AddAuditHook "PySys_AddAuditHook") for information on handling these events. **CPython implementation detail:** This table is generated from the CPython documentation, and may not represent events raised by other implementations. See your runtime specific documentation for actual events raised. 
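As a quick illustration of handling these events, here is a minimal sketch of an audit hook that watches a small, arbitrarily chosen subset of the events tabulated below (note that a hook, once added, cannot be removed):

```
import sys

def audit(event, args):
    # Watch a few of the events from the table below.
    if event in {"open", "os.listdir", "tempfile.mkdtemp"}:
        print(f"audit: {event} args={args!r}")

sys.addaudithook(audit)

import os
os.listdir(".")  # raises the "os.listdir" audit event
```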
| Audit event | Arguments | References | | --- | --- | --- | | array.\_\_new\_\_ | `typecode`, `initializer` | [[1]](array#array.array) | | builtins.breakpoint | `breakpointhook` | [[1]](functions#breakpoint) | | builtins.id | `id` | [[1]](functions#id) | | builtins.input | `prompt` | [[1]](functions#input) | | builtins.input/result | `result` | [[1]](functions#input) | | code.\_\_new\_\_ | `code`, `filename`, `name`, `argcount`, `posonlyargcount`, `kwonlyargcount`, `nlocals`, `stacksize`, `flags` | [[1]](types#types.CodeType) | | compile | `source`, `filename` | [[1]](functions#compile) | | cpython.PyInterpreterState\_Clear | | [[1]](../c-api/init#c.PyInterpreterState_Clear) | | cpython.PyInterpreterState\_New | | [[1]](../c-api/init#c.PyInterpreterState_New) | | cpython.\_PySys\_ClearAuditHooks | | [[1]](../c-api/init#c.Py_FinalizeEx) | | cpython.run\_command | `command` | [[1]](../using/cmdline#cmdoption-c) | | cpython.run\_file | `filename` | [[1]](../using/cmdline#audit_event_cpython_run_file_0) | | cpython.run\_interactivehook | `hook` | [[1]](sys#sys.__interactivehook__) | | cpython.run\_module | `module-name` | [[1]](../using/cmdline#cmdoption-m) | | cpython.run\_startup | `filename` | [[1]](../using/cmdline#envvar-PYTHONSTARTUP) | | cpython.run\_stdin | | [[1]](../using/cmdline#audit_event_cpython_run_stdin_0) | | ctypes.addressof | `obj` | [[1]](ctypes#ctypes.addressof) | | ctypes.call\_function | `func_pointer`, `arguments` | [[1]](ctypes#foreign-functions) | | ctypes.cdata | `address` | [[1]](ctypes#ctypes._CData.from_address) | | ctypes.cdata/buffer | `pointer`, `size`, `offset` | [[1]](ctypes#ctypes._CData.from_buffer)[[2]](ctypes#ctypes._CData.from_buffer_copy) | | ctypes.create\_string\_buffer | `init`, `size` | [[1]](ctypes#ctypes.create_string_buffer) | | ctypes.create\_unicode\_buffer | `init`, `size` | [[1]](ctypes#ctypes.create_unicode_buffer) | | ctypes.dlopen | `name` | [[1]](ctypes#ctypes.LibraryLoader) | | ctypes.dlsym | `library`, `name` | [[1]](ctypes#ctypes.LibraryLoader) | | ctypes.dlsym/handle | `handle`, `name` | [[1]](ctypes#ctypes.LibraryLoader) | | ctypes.get\_errno | | [[1]](ctypes#ctypes.get_errno) | | ctypes.get\_last\_error | | [[1]](ctypes#ctypes.get_last_error) | | ctypes.seh\_exception | `code` | [[1]](ctypes#foreign-functions) | | ctypes.set\_errno | `errno` | [[1]](ctypes#ctypes.set_errno) | | ctypes.set\_last\_error | `error` | [[1]](ctypes#ctypes.set_last_error) | | ctypes.string\_at | `address`, `size` | [[1]](ctypes#ctypes.string_at) | | ctypes.wstring\_at | `address`, `size` | [[1]](ctypes#ctypes.wstring_at) | | ensurepip.bootstrap | `root` | [[1]](ensurepip#ensurepip.bootstrap) | | exec | `code_object` | [[1]](functions#eval)[[2]](functions#exec) | | fcntl.fcntl | `fd`, `cmd`, `arg` | [[1]](fcntl#fcntl.fcntl) | | fcntl.flock | `fd`, `operation` | [[1]](fcntl#fcntl.flock) | | fcntl.ioctl | `fd`, `request`, `arg` | [[1]](fcntl#fcntl.ioctl) | | fcntl.lockf | `fd`, `cmd`, `len`, `start`, `whence` | [[1]](fcntl#fcntl.lockf) | | ftplib.connect | `self`, `host`, `port` | [[1]](ftplib#ftplib.FTP.connect) | | ftplib.sendcmd | `self`, `cmd` | [[1]](ftplib#ftplib.FTP.sendcmd)[[2]](ftplib#ftplib.FTP.voidcmd) | | function.\_\_new\_\_ | `code` | [[1]](types#types.FunctionType) | | gc.get\_objects | `generation` | [[1]](gc#gc.get_objects) | | gc.get\_referents | `objs` | [[1]](gc#gc.get_referents) | | gc.get\_referrers | `objs` | [[1]](gc#gc.get_referrers) | | glob.glob | `pathname`, `recursive` | [[1]](glob#glob.glob)[[2]](glob#glob.iglob) | | 
imaplib.open | `self`, `host`, `port` | [[1]](imaplib#imaplib.IMAP4.open) | | imaplib.send | `self`, `data` | [[1]](imaplib#imaplib.IMAP4.send) | | import | `module`, `filename`, `sys.path`, `sys.meta_path`, `sys.path_hooks` | [[1]](../reference/simple_stmts#import) | | marshal.dumps | `value`, `version` | [[1]](marshal#marshal.dump) | | marshal.load | | [[1]](marshal#marshal.load) | | marshal.loads | `bytes` | [[1]](marshal#marshal.load) | | mmap.\_\_new\_\_ | `fileno`, `length`, `access`, `offset` | [[1]](mmap#mmap.mmap) | | msvcrt.get\_osfhandle | `fd` | [[1]](msvcrt#msvcrt.get_osfhandle) | | msvcrt.locking | `fd`, `mode`, `nbytes` | [[1]](msvcrt#msvcrt.locking) | | msvcrt.open\_osfhandle | `handle`, `flags` | [[1]](msvcrt#msvcrt.open_osfhandle) | | nntplib.connect | `self`, `host`, `port` | [[1]](nntplib#nntplib.NNTP)[[2]](nntplib#nntplib.NNTP_SSL) | | nntplib.putline | `self`, `line` | [[1]](nntplib#nntplib.NNTP)[[2]](nntplib#nntplib.NNTP_SSL) | | object.\_\_delattr\_\_ | `obj`, `name` | [[1]](../reference/datamodel#object.__delattr__) | | object.\_\_getattr\_\_ | `obj`, `name` | [[1]](../reference/datamodel#object.__getattribute__) | | object.\_\_setattr\_\_ | `obj`, `name`, `value` | [[1]](../reference/datamodel#object.__setattr__) | | open | `file`, `mode`, `flags` | [[1]](functions#open)[[2]](io#io.open)[[3]](os#os.open) | | os.add\_dll\_directory | `path` | [[1]](os#os.add_dll_directory) | | os.chdir | `path` | [[1]](os#os.chdir)[[2]](os#os.fchdir) | | os.chflags | `path`, `flags` | [[1]](os#os.chflags)[[2]](os#os.lchflags) | | os.chmod | `path`, `mode`, `dir_fd` | [[1]](os#os.chmod)[[2]](os#os.fchmod)[[3]](os#os.lchmod) | | os.chown | `path`, `uid`, `gid`, `dir_fd` | [[1]](os#os.chown)[[2]](os#os.fchown)[[3]](os#os.lchown) | | os.exec | `path`, `args`, `env` | [[1]](os#os.execl) | | os.fork | | [[1]](os#os.fork) | | os.forkpty | | [[1]](os#os.forkpty) | | os.fwalk | `top`, `topdown`, `onerror`, `follow_symlinks`, `dir_fd` | [[1]](os#os.fwalk) | | os.getxattr | `path`, `attribute` | [[1]](os#os.getxattr) | | os.kill | `pid`, `sig` | [[1]](os#os.kill) | | os.killpg | `pgid`, `sig` | [[1]](os#os.killpg) | | os.link | `src`, `dst`, `src_dir_fd`, `dst_dir_fd` | [[1]](os#os.link) | | os.listdir | `path` | [[1]](os#os.listdir) | | os.listxattr | `path` | [[1]](os#os.listxattr) | | os.lockf | `fd`, `cmd`, `len` | [[1]](os#os.lockf) | | os.mkdir | `path`, `mode`, `dir_fd` | [[1]](os#os.makedirs)[[2]](os#os.mkdir) | | os.posix\_spawn | `path`, `argv`, `env` | [[1]](os#os.posix_spawn)[[2]](os#os.posix_spawnp) | | os.putenv | `key`, `value` | [[1]](os#os.putenv) | | os.remove | `path`, `dir_fd` | [[1]](os#os.remove)[[2]](os#os.removedirs)[[3]](os#os.unlink) | | os.removexattr | `path`, `attribute` | [[1]](os#os.removexattr) | | os.rename | `src`, `dst`, `src_dir_fd`, `dst_dir_fd` | [[1]](os#os.rename)[[2]](os#os.renames)[[3]](os#os.replace) | | os.rmdir | `path`, `dir_fd` | [[1]](os#os.rmdir) | | os.scandir | `path` | [[1]](os#os.scandir) | | os.setxattr | `path`, `attribute`, `value`, `flags` | [[1]](os#os.setxattr) | | os.spawn | `mode`, `path`, `args`, `env` | [[1]](os#os.spawnl) | | os.startfile | `path`, `operation` | [[1]](os#os.startfile) | | os.symlink | `src`, `dst`, `dir_fd` | [[1]](os#os.symlink) | | os.system | `command` | [[1]](os#os.system) | | os.truncate | `fd`, `length` | [[1]](os#os.ftruncate)[[2]](os#os.truncate) | | os.unsetenv | `key` | [[1]](os#os.unsetenv) | | os.utime | `path`, `times`, `ns`, `dir_fd` | [[1]](os#os.utime) | | os.walk | `top`, `topdown`, `onerror`, 
`followlinks` | [[1]](os#os.walk) | | pathlib.Path.glob | `self`, `pattern` | [[1]](pathlib#pathlib.Path.glob) | | pathlib.Path.rglob | `self`, `pattern` | [[1]](pathlib#pathlib.Path.rglob) | | pdb.Pdb | | [[1]](pdb#pdb.Pdb) | | pickle.find\_class | `module`, `name` | [[1]](pickle#pickle.Unpickler.find_class) | | poplib.connect | `self`, `host`, `port` | [[1]](poplib#poplib.POP3)[[2]](poplib#poplib.POP3_SSL) | | poplib.putline | `self`, `line` | [[1]](poplib#poplib.POP3)[[2]](poplib#poplib.POP3_SSL) | | pty.spawn | `argv` | [[1]](pty#pty.spawn) | | resource.prlimit | `pid`, `resource`, `limits` | [[1]](resource#resource.prlimit) | | resource.setrlimit | `resource`, `limits` | [[1]](resource#resource.setrlimit) | | setopencodehook | | [[1]](../c-api/file#c.PyFile_SetOpenCodeHook) | | shutil.chown | `path`, `user`, `group` | [[1]](shutil#shutil.chown) | | shutil.copyfile | `src`, `dst` | [[1]](shutil#shutil.copy)[[2]](shutil#shutil.copy2)[[3]](shutil#shutil.copyfile) | | shutil.copymode | `src`, `dst` | [[1]](shutil#shutil.copy)[[2]](shutil#shutil.copymode) | | shutil.copystat | `src`, `dst` | [[1]](shutil#shutil.copy2)[[2]](shutil#shutil.copystat) | | shutil.copytree | `src`, `dst` | [[1]](shutil#shutil.copytree) | | shutil.make\_archive | `base_name`, `format`, `root_dir`, `base_dir` | [[1]](shutil#shutil.make_archive) | | shutil.move | `src`, `dst` | [[1]](shutil#shutil.move) | | shutil.rmtree | `path` | [[1]](shutil#shutil.rmtree) | | shutil.unpack\_archive | `filename`, `extract_dir`, `format` | [[1]](shutil#shutil.unpack_archive) | | signal.pthread\_kill | `thread_id`, `signalnum` | [[1]](signal#signal.pthread_kill) | | smtplib.connect | `self`, `host`, `port` | [[1]](smtplib#smtplib.SMTP.connect) | | smtplib.send | `self`, `data` | [[1]](smtplib#smtplib.SMTP) | | socket.\_\_new\_\_ | `self`, `family`, `type`, `protocol` | [[1]](socket#socket.socket) | | socket.bind | `self`, `address` | [[1]](socket#socket.socket.bind) | | socket.connect | `self`, `address` | [[1]](socket#socket.socket.connect)[[2]](socket#socket.socket.connect_ex) | | socket.getaddrinfo | `host`, `port`, `family`, `type`, `protocol` | [[1]](socket#socket.getaddrinfo) | | socket.gethostbyaddr | `ip_address` | [[1]](socket#socket.gethostbyaddr) | | socket.gethostbyname | `hostname` | [[1]](socket#socket.gethostbyname)[[2]](socket#socket.gethostbyname_ex) | | socket.gethostname | | [[1]](socket#socket.gethostname) | | socket.getnameinfo | `sockaddr` | [[1]](socket#socket.getnameinfo) | | socket.getservbyname | `servicename`, `protocolname` | [[1]](socket#socket.getservbyname) | | socket.getservbyport | `port`, `protocolname` | [[1]](socket#socket.getservbyport) | | socket.sendmsg | `self`, `address` | [[1]](socket#socket.socket.sendmsg) | | socket.sendto | `self`, `address` | [[1]](socket#socket.socket.sendto) | | socket.sethostname | `name` | [[1]](socket#socket.sethostname) | | sqlite3.connect | `database` | [[1]](sqlite3#sqlite3.connect) | | subprocess.Popen | `executable`, `args`, `cwd`, `env` | [[1]](subprocess#subprocess.Popen) | | sys.\_current\_frames | | [[1]](sys#sys._current_frames) | | sys.\_getframe | | [[1]](sys#sys._getframe) | | sys.addaudithook | | [[1]](../c-api/sys#c.PySys_AddAuditHook)[[2]](sys#sys.addaudithook) | | sys.excepthook | `hook`, `type`, `value`, `traceback` | [[1]](sys#sys.excepthook) | | sys.set\_asyncgen\_hooks\_finalizer | | [[1]](sys#sys.set_asyncgen_hooks) | | sys.set\_asyncgen\_hooks\_firstiter | | [[1]](sys#sys.set_asyncgen_hooks) | | sys.setprofile | | [[1]](sys#sys.setprofile) | 
| sys.settrace | | [[1]](sys#sys.settrace) | | sys.unraisablehook | `hook`, `unraisable` | [[1]](sys#sys.unraisablehook) | | syslog.closelog | | [[1]](syslog#syslog.closelog) | | syslog.openlog | `ident`, `logoption`, `facility` | [[1]](syslog#syslog.openlog) | | syslog.setlogmask | `maskpri` | [[1]](syslog#syslog.setlogmask) | | syslog.syslog | `priority`, `message` | [[1]](syslog#syslog.syslog) | | telnetlib.Telnet.open | `self`, `host`, `port` | [[1]](telnetlib#telnetlib.Telnet.open) | | telnetlib.Telnet.write | `self`, `buffer` | [[1]](telnetlib#telnetlib.Telnet.write) | | tempfile.mkdtemp | `fullpath` | [[1]](tempfile#tempfile.TemporaryDirectory)[[2]](tempfile#tempfile.mkdtemp) | | tempfile.mkstemp | `fullpath` | [[1]](tempfile#tempfile.NamedTemporaryFile)[[2]](tempfile#tempfile.TemporaryFile)[[3]](tempfile#tempfile.mkstemp) | | urllib.Request | `fullurl`, `data`, `headers`, `method` | [[1]](urllib.request#urllib.request.urlopen) | | webbrowser.open | `url` | [[1]](webbrowser#webbrowser.open) | | winreg.ConnectRegistry | `computer_name`, `key` | [[1]](winreg#winreg.ConnectRegistry) | | winreg.CreateKey | `key`, `sub_key`, `access` | [[1]](winreg#winreg.CreateKey)[[2]](winreg#winreg.CreateKeyEx) | | winreg.DeleteKey | `key`, `sub_key`, `access` | [[1]](winreg#winreg.DeleteKey)[[2]](winreg#winreg.DeleteKeyEx) | | winreg.DeleteValue | `key`, `value` | [[1]](winreg#winreg.DeleteValue) | | winreg.DisableReflectionKey | `key` | [[1]](winreg#winreg.DisableReflectionKey) | | winreg.EnableReflectionKey | `key` | [[1]](winreg#winreg.EnableReflectionKey) | | winreg.EnumKey | `key`, `index` | [[1]](winreg#winreg.EnumKey) | | winreg.EnumValue | `key`, `index` | [[1]](winreg#winreg.EnumValue) | | winreg.ExpandEnvironmentStrings | `str` | [[1]](winreg#winreg.ExpandEnvironmentStrings) | | winreg.LoadKey | `key`, `sub_key`, `file_name` | [[1]](winreg#winreg.LoadKey) | | winreg.OpenKey | `key`, `sub_key`, `access` | [[1]](winreg#winreg.OpenKey) | | winreg.OpenKey/result | `key` | [[1]](winreg#winreg.CreateKey)[[2]](winreg#winreg.CreateKeyEx)[[3]](winreg#winreg.OpenKey) | | winreg.PyHKEY.Detach | `key` | [[1]](winreg#winreg.PyHKEY.Detach) | | winreg.QueryInfoKey | `key` | [[1]](winreg#winreg.QueryInfoKey) | | winreg.QueryReflectionKey | `key` | [[1]](winreg#winreg.QueryReflectionKey) | | winreg.QueryValue | `key`, `sub_key`, `value_name` | [[1]](winreg#winreg.QueryValue)[[2]](winreg#winreg.QueryValueEx) | | winreg.SaveKey | `key`, `file_name` | [[1]](winreg#winreg.SaveKey) | | winreg.SetValue | `key`, `sub_key`, `type`, `value` | [[1]](winreg#winreg.SetValue)[[2]](winreg#winreg.SetValueEx) | The following events are raised internally and do not correspond to any public API of CPython: | Audit event | Arguments | | --- | --- | | \_winapi.CreateFile | `file_name`, `desired_access`, `share_mode`, `creation_disposition`, `flags_and_attributes` | | \_winapi.CreateJunction | `src_path`, `dst_path` | | \_winapi.CreateNamedPipe | `name`, `open_mode`, `pipe_mode` | | \_winapi.CreatePipe | | | \_winapi.CreateProcess | `application_name`, `command_line`, `current_directory` | | \_winapi.OpenProcess | `process_id`, `desired_access` | | \_winapi.TerminateProcess | `handle`, `exit_code` | | ctypes.PyObj\_FromPtr | `obj` | python curses.ascii — Utilities for ASCII characters curses.ascii — Utilities for ASCII characters ============================================= The [`curses.ascii`](#module-curses.ascii "curses.ascii: Constants and set-membership functions for ASCII characters.") module supplies name constants for 
ASCII characters and functions to test membership in various ASCII character classes. The constants supplied are names for control characters as follows: | Name | Meaning | | --- | --- | | `NUL` | | | `SOH` | Start of heading, console interrupt | | `STX` | Start of text | | `ETX` | End of text | | `EOT` | End of transmission | | `ENQ` | Enquiry, goes with `ACK` flow control | | `ACK` | Acknowledgement | | `BEL` | Bell | | `BS` | Backspace | | `TAB` | Tab | | `HT` | Alias for `TAB`: “Horizontal tab” | | `LF` | Line feed | | `NL` | Alias for `LF`: “New line” | | `VT` | Vertical tab | | `FF` | Form feed | | `CR` | Carriage return | | `SO` | Shift-out, begin alternate character set | | `SI` | Shift-in, resume default character set | | `DLE` | Data-link escape | | `DC1` | XON, for flow control | | `DC2` | Device control 2, block-mode flow control | | `DC3` | XOFF, for flow control | | `DC4` | Device control 4 | | `NAK` | Negative acknowledgement | | `SYN` | Synchronous idle | | `ETB` | End transmission block | | `CAN` | Cancel | | `EM` | End of medium | | `SUB` | Substitute | | `ESC` | Escape | | `FS` | File separator | | `GS` | Group separator | | `RS` | Record separator, block-mode terminator | | `US` | Unit separator | | `SP` | Space | | `DEL` | Delete | Note that many of these have little practical significance in modern usage. The mnemonics derive from teleprinter conventions that predate digital computers. The module supplies the following functions, patterned on those in the standard C library: `curses.ascii.isalnum(c)` Checks for an ASCII alphanumeric character; it is equivalent to `isalpha(c) or isdigit(c)`. `curses.ascii.isalpha(c)` Checks for an ASCII alphabetic character; it is equivalent to `isupper(c) or islower(c)`. `curses.ascii.isascii(c)` Checks for a character value that fits in the 7-bit ASCII set. `curses.ascii.isblank(c)` Checks for an ASCII whitespace character; space or horizontal tab. `curses.ascii.iscntrl(c)` Checks for an ASCII control character (in the range 0x00 to 0x1f or 0x7f). `curses.ascii.isdigit(c)` Checks for an ASCII decimal digit, `'0'` through `'9'`. This is equivalent to `c in string.digits`. `curses.ascii.isgraph(c)` Checks for any printable ASCII character except space. `curses.ascii.islower(c)` Checks for an ASCII lower-case character. `curses.ascii.isprint(c)` Checks for any ASCII printable character including space. `curses.ascii.ispunct(c)` Checks for any printable ASCII character which is not a space or an alphanumeric character. `curses.ascii.isspace(c)` Checks for ASCII white-space characters; space, line feed, carriage return, form feed, horizontal tab, vertical tab. `curses.ascii.isupper(c)` Checks for an ASCII uppercase letter. `curses.ascii.isxdigit(c)` Checks for an ASCII hexadecimal digit. This is equivalent to `c in string.hexdigits`. `curses.ascii.isctrl(c)` Checks for an ASCII control character (ordinal values 0 to 31). `curses.ascii.ismeta(c)` Checks for a non-ASCII character (ordinal values 0x80 and above). These functions accept either integers or single-character strings; when the argument is a string, it is first converted using the built-in function [`ord()`](functions#ord "ord"). Note that all these functions check ordinal bit values derived from the character of the string you pass in; they do not actually know anything about the host machine’s character encoding. The following functions take either a single-character string or an integer byte value; they return a value of the same type.
`curses.ascii.ascii(c)` Return the ASCII value corresponding to the low 7 bits of *c*. `curses.ascii.ctrl(c)` Return the control character corresponding to the given character (the character bit value is bitwise-anded with 0x1f). `curses.ascii.alt(c)` Return the 8-bit character corresponding to the given ASCII character (the character bit value is bitwise-ored with 0x80). The following function takes either a single-character string or integer value; it returns a string. `curses.ascii.unctrl(c)` Return a string representation of the ASCII character *c*. If *c* is printable, this string is the character itself. If the character is a control character (0x00–0x1f) the string consists of a caret (`'^'`) followed by the corresponding uppercase letter. If the character is an ASCII delete (0x7f) the string is `'^?'`. If the character has its meta bit (0x80) set, the meta bit is stripped, the preceding rules applied, and `'!'` prepended to the result. `curses.ascii.controlnames` A 33-element string array that contains the ASCII mnemonics for the thirty-two ASCII control characters from 0 (NUL) to 0x1f (US), in order, plus the mnemonic `SP` for the space character.
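A short sketch of these utilities in action, assuming a platform where the `curses` package is importable (the functions themselves do no terminal handling):

```
>>> from curses import ascii
>>> ascii.isprint('A'), ascii.isctrl('A')
(True, False)
>>> ascii.ctrl('a') == '\x01'  # Ctrl-A: ord('a') bitwise-anded with 0x1f
True
>>> ascii.unctrl('\x01')
'^A'
>>> ascii.controlnames[1]
'SOH'
```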
python bz2 — Support for bzip2 compression bz2 — Support for bzip2 compression =================================== **Source code:** [Lib/bz2.py](https://github.com/python/cpython/tree/3.9/Lib/bz2.py) This module provides a comprehensive interface for compressing and decompressing data using the bzip2 compression algorithm. The [`bz2`](#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") module contains: * The [`open()`](#bz2.open "bz2.open") function and [`BZ2File`](#bz2.BZ2File "bz2.BZ2File") class for reading and writing compressed files. * The [`BZ2Compressor`](#bz2.BZ2Compressor "bz2.BZ2Compressor") and [`BZ2Decompressor`](#bz2.BZ2Decompressor "bz2.BZ2Decompressor") classes for incremental (de)compression. * The [`compress()`](#bz2.compress "bz2.compress") and [`decompress()`](#bz2.decompress "bz2.decompress") functions for one-shot (de)compression. All of the classes in this module may safely be accessed from multiple threads. (De)compression of files ------------------------ `bz2.open(filename, mode='rb', compresslevel=9, encoding=None, errors=None, newline=None)` Open a bzip2-compressed file in binary or text mode, returning a [file object](../glossary#term-file-object). As with the constructor for [`BZ2File`](#bz2.BZ2File "bz2.BZ2File"), the *filename* argument can be an actual filename (a [`str`](stdtypes#str "str") or [`bytes`](stdtypes#bytes "bytes") object), or an existing file object to read from or write to. The *mode* argument can be any of `'r'`, `'rb'`, `'w'`, `'wb'`, `'x'`, `'xb'`, `'a'` or `'ab'` for binary mode, or `'rt'`, `'wt'`, `'xt'`, or `'at'` for text mode. The default is `'rb'`. The *compresslevel* argument is an integer from 1 to 9, as for the [`BZ2File`](#bz2.BZ2File "bz2.BZ2File") constructor. For binary mode, this function is equivalent to the [`BZ2File`](#bz2.BZ2File "bz2.BZ2File") constructor: `BZ2File(filename, mode, compresslevel=compresslevel)`. In this case, the *encoding*, *errors* and *newline* arguments must not be provided. For text mode, a [`BZ2File`](#bz2.BZ2File "bz2.BZ2File") object is created, and wrapped in an [`io.TextIOWrapper`](io#io.TextIOWrapper "io.TextIOWrapper") instance with the specified encoding, error handling behavior, and line ending(s). New in version 3.3. Changed in version 3.4: The `'x'` (exclusive creation) mode was added. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `class bz2.BZ2File(filename, mode='r', *, compresslevel=9)` Open a bzip2-compressed file in binary mode. If *filename* is a [`str`](stdtypes#str "str") or [`bytes`](stdtypes#bytes "bytes") object, open the named file directly. Otherwise, *filename* should be a [file object](../glossary#term-file-object), which will be used to read or write the compressed data. The *mode* argument can be either `'r'` for reading (default), `'w'` for overwriting, `'x'` for exclusive creation, or `'a'` for appending. These can equivalently be given as `'rb'`, `'wb'`, `'xb'` and `'ab'` respectively. If *filename* is a file object (rather than an actual file name), a mode of `'w'` does not truncate the file, and is instead equivalent to `'a'`. If *mode* is `'w'` or `'a'`, *compresslevel* can be an integer between `1` and `9` specifying the level of compression: `1` produces the least compression, and `9` (default) produces the most compression. If *mode* is `'r'`, the input file may be the concatenation of multiple compressed streams. 
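To illustrate the multi-stream case just mentioned, a small sketch (the file name is arbitrary): two independently compressed streams written back to back read as one continuous stream in mode `'r'`:

```
import bz2

# Two complete bzip2 streams, concatenated into a single file.
with open("multi.bz2", "wb") as f:
    f.write(bz2.compress(b"first|"))
    f.write(bz2.compress(b"second"))

# BZ2File reads transparently across the stream boundary.
with bz2.BZ2File("multi.bz2", "r") as f:
    print(f.read())  # b'first|second'
```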
[`BZ2File`](#bz2.BZ2File "bz2.BZ2File") provides all of the members specified by the [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase"), except for `detach()` and `truncate()`. Iteration and the [`with`](../reference/compound_stmts#with) statement are supported. [`BZ2File`](#bz2.BZ2File "bz2.BZ2File") also provides the following method: `peek([n])` Return buffered data without advancing the file position. At least one byte of data will be returned (unless at EOF). The exact number of bytes returned is unspecified. Note While calling [`peek()`](#bz2.BZ2File.peek "bz2.BZ2File.peek") does not change the file position of the [`BZ2File`](#bz2.BZ2File "bz2.BZ2File"), it may change the position of the underlying file object (e.g. if the [`BZ2File`](#bz2.BZ2File "bz2.BZ2File") was constructed by passing a file object for *filename*). New in version 3.3. Changed in version 3.1: Support for the [`with`](../reference/compound_stmts#with) statement was added. Changed in version 3.3: The `fileno()`, `readable()`, `seekable()`, `writable()`, `read1()` and `readinto()` methods were added. Changed in version 3.3: Support was added for *filename* being a [file object](../glossary#term-file-object) instead of an actual filename. Changed in version 3.3: The `'a'` (append) mode was added, along with support for reading multi-stream files. Changed in version 3.4: The `'x'` (exclusive creation) mode was added. Changed in version 3.5: The [`read()`](io#io.BufferedIOBase.read "io.BufferedIOBase.read") method now accepts an argument of `None`. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.9: The *buffering* parameter has been removed. It was ignored and deprecated since Python 3.0. Pass an open file object to control how the file is opened. The *compresslevel* parameter became keyword-only. Incremental (de)compression --------------------------- `class bz2.BZ2Compressor(compresslevel=9)` Create a new compressor object. This object may be used to compress data incrementally. For one-shot compression, use the [`compress()`](#bz2.compress "bz2.compress") function instead. *compresslevel*, if given, must be an integer between `1` and `9`. The default is `9`. `compress(data)` Provide data to the compressor object. Returns a chunk of compressed data if possible, or an empty byte string otherwise. When you have finished providing data to the compressor, call the [`flush()`](#bz2.BZ2Compressor.flush "bz2.BZ2Compressor.flush") method to finish the compression process. `flush()` Finish the compression process. Returns the compressed data left in internal buffers. The compressor object may not be used after this method has been called. `class bz2.BZ2Decompressor` Create a new decompressor object. This object may be used to decompress data incrementally. For one-shot decompression, use the [`decompress()`](#bz2.decompress "bz2.decompress") function instead. Note This class does not transparently handle inputs containing multiple compressed streams, unlike [`decompress()`](#bz2.decompress "bz2.decompress") and [`BZ2File`](#bz2.BZ2File "bz2.BZ2File"). If you need to decompress a multi-stream input with [`BZ2Decompressor`](#bz2.BZ2Decompressor "bz2.BZ2Decompressor"), you must use a new decompressor for each stream. `decompress(data, max_length=-1)` Decompress *data* (a [bytes-like object](../glossary#term-bytes-like-object)), returning uncompressed data as bytes.
Some of *data* may be buffered internally, for use in later calls to [`decompress()`](#bz2.decompress "bz2.decompress"). The returned data should be concatenated with the output of any previous calls to [`decompress()`](#bz2.decompress "bz2.decompress"). If *max\_length* is nonnegative, returns at most *max\_length* bytes of decompressed data. If this limit is reached and further output can be produced, the [`needs_input`](#bz2.BZ2Decompressor.needs_input "bz2.BZ2Decompressor.needs_input") attribute will be set to `False`. In this case, the next call to [`decompress()`](#bz2.BZ2Decompressor.decompress "bz2.BZ2Decompressor.decompress") may provide *data* as `b''` to obtain more of the output. If all of the input data was decompressed and returned (either because this was less than *max\_length* bytes, or because *max\_length* was negative), the [`needs_input`](#bz2.BZ2Decompressor.needs_input "bz2.BZ2Decompressor.needs_input") attribute will be set to `True`. Attempting to decompress data after the end of stream is reached raises an `EOFError`. Any data found after the end of the stream is ignored and saved in the [`unused_data`](#bz2.BZ2Decompressor.unused_data "bz2.BZ2Decompressor.unused_data") attribute. Changed in version 3.5: Added the *max\_length* parameter. `eof` `True` if the end-of-stream marker has been reached. New in version 3.3. `unused_data` Data found after the end of the compressed stream. If this attribute is accessed before the end of the stream has been reached, its value will be `b''`. `needs_input` `False` if the [`decompress()`](#bz2.BZ2Decompressor.decompress "bz2.BZ2Decompressor.decompress") method can provide more decompressed data before requiring new uncompressed input. New in version 3.5. One-shot (de)compression ------------------------ `bz2.compress(data, compresslevel=9)` Compress *data*, a [bytes-like object](../glossary#term-bytes-like-object). *compresslevel*, if given, must be an integer between `1` and `9`. The default is `9`. For incremental compression, use a [`BZ2Compressor`](#bz2.BZ2Compressor "bz2.BZ2Compressor") instead. `bz2.decompress(data)` Decompress *data*, a [bytes-like object](../glossary#term-bytes-like-object). If *data* is the concatenation of multiple compressed streams, decompress all of the streams. For incremental decompression, use a [`BZ2Decompressor`](#bz2.BZ2Decompressor "bz2.BZ2Decompressor") instead. Changed in version 3.3: Support for multi-stream inputs was added. Examples of usage ----------------- Below are some examples of typical usage of the [`bz2`](#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") module. Using [`compress()`](#bz2.compress "bz2.compress") and [`decompress()`](#bz2.decompress "bz2.decompress") to demonstrate round-trip compression: ``` >>> import bz2 >>> data = b"""\ ... Donec rhoncus quis sapien sit amet molestie. Fusce scelerisque vel augue ... nec ullamcorper. Nam rutrum pretium placerat. Aliquam vel tristique lorem, ... sit amet cursus ante. In interdum laoreet mi, sit amet ultrices purus ... pulvinar a. Nam gravida euismod magna, non varius justo tincidunt feugiat. ... Aliquam pharetra lacus non risus vehicula rutrum. Maecenas aliquam leo ... felis. Pellentesque semper nunc sit amet nibh ullamcorper, ac elementum ... dolor luctus. 
Curabitur lacinia mi ornare consectetur vestibulum.""" >>> c = bz2.compress(data) >>> len(data) / len(c) # Data compression ratio 1.513595166163142 >>> d = bz2.decompress(c) >>> data == d # Check equality to original object after round-trip True ``` Using [`BZ2Compressor`](#bz2.BZ2Compressor "bz2.BZ2Compressor") for incremental compression: ``` >>> import bz2 >>> def gen_data(chunks=10, chunksize=1000): ... """Yield incremental blocks of chunksize bytes.""" ... for _ in range(chunks): ... yield b"z" * chunksize ... >>> comp = bz2.BZ2Compressor() >>> out = b"" >>> for chunk in gen_data(): ... # Provide data to the compressor object ... out = out + comp.compress(chunk) ... >>> # Finish the compression process. Call this once you have >>> # finished providing data to the compressor. >>> out = out + comp.flush() ``` The example above uses a very “nonrandom” stream of data (a stream of `b"z"` chunks). Random data tends to compress poorly, while ordered, repetitive data usually yields a high compression ratio. Writing and reading a bzip2-compressed file in binary mode: ``` >>> import bz2 >>> data = b"""\ ... Donec rhoncus quis sapien sit amet molestie. Fusce scelerisque vel augue ... nec ullamcorper. Nam rutrum pretium placerat. Aliquam vel tristique lorem, ... sit amet cursus ante. In interdum laoreet mi, sit amet ultrices purus ... pulvinar a. Nam gravida euismod magna, non varius justo tincidunt feugiat. ... Aliquam pharetra lacus non risus vehicula rutrum. Maecenas aliquam leo ... felis. Pellentesque semper nunc sit amet nibh ullamcorper, ac elementum ... dolor luctus. Curabitur lacinia mi ornare consectetur vestibulum.""" >>> with bz2.open("myfile.bz2", "wb") as f: ... # Write compressed data to file ... unused = f.write(data) >>> with bz2.open("myfile.bz2", "rb") as f: ... # Decompress data from file ... content = f.read() >>> content == data # Check equality to original object after round-trip True ``` python enum — Support for enumerations enum — Support for enumerations =============================== New in version 3.4. **Source code:** [Lib/enum.py](https://github.com/python/cpython/tree/3.9/Lib/enum.py) An enumeration is a set of symbolic names (members) bound to unique, constant values. Within an enumeration, the members can be compared by identity, and the enumeration itself can be iterated over. Note Case of Enum Members Because Enums are used to represent constants we recommend using UPPER\_CASE names for enum members, and will be using that style in our examples. Module Contents --------------- This module defines four enumeration classes that can be used to define unique sets of names and values: [`Enum`](#enum.Enum "enum.Enum"), [`IntEnum`](#enum.IntEnum "enum.IntEnum"), [`Flag`](#enum.Flag "enum.Flag"), and [`IntFlag`](#enum.IntFlag "enum.IntFlag"). It also defines one decorator, [`unique()`](#enum.unique "enum.unique"), and one helper, [`auto`](#enum.auto "enum.auto"). `class enum.Enum` Base class for creating enumerated constants. See section [Functional API](#functional-api) for an alternate construction syntax. `class enum.IntEnum` Base class for creating enumerated constants that are also subclasses of [`int`](functions#int "int"). `class enum.IntFlag` Base class for creating enumerated constants that can be combined using the bitwise operators without losing their [`IntFlag`](#enum.IntFlag "enum.IntFlag") membership. [`IntFlag`](#enum.IntFlag "enum.IntFlag") members are also subclasses of [`int`](functions#int "int").
`class enum.Flag` Base class for creating enumerated constants that can be combined using the bitwise operations without losing their [`Flag`](#enum.Flag "enum.Flag") membership. `enum.unique()` Enum class decorator that ensures only one name is bound to any one value. `class enum.auto` Instances are replaced with an appropriate value for Enum members. By default, the initial value starts at 1. New in version 3.6: `Flag`, `IntFlag`, `auto` Creating an Enum ---------------- Enumerations are created using the [`class`](../reference/compound_stmts#class) syntax, which makes them easy to read and write. An alternative creation method is described in [Functional API](#functional-api). To define an enumeration, subclass [`Enum`](#enum.Enum "enum.Enum") as follows: ``` >>> from enum import Enum >>> class Color(Enum): ... RED = 1 ... GREEN = 2 ... BLUE = 3 ... ``` Note Enum member values Member values can be anything: [`int`](functions#int "int"), [`str`](stdtypes#str "str"), etc.. If the exact value is unimportant you may use [`auto`](#enum.auto "enum.auto") instances and an appropriate value will be chosen for you. Care must be taken if you mix [`auto`](#enum.auto "enum.auto") with other values. Note Nomenclature * The class `Color` is an *enumeration* (or *enum*) * The attributes `Color.RED`, `Color.GREEN`, etc., are *enumeration members* (or *enum members*) and are functionally constants. * The enum members have *names* and *values* (the name of `Color.RED` is `RED`, the value of `Color.BLUE` is `3`, etc.) Note Even though we use the [`class`](../reference/compound_stmts#class) syntax to create Enums, Enums are not normal Python classes. See [How are Enums different?](#how-are-enums-different) for more details. Enumeration members have human readable string representations: ``` >>> print(Color.RED) Color.RED ``` …while their `repr` has more information: ``` >>> print(repr(Color.RED)) <Color.RED: 1> ``` The *type* of an enumeration member is the enumeration it belongs to: ``` >>> type(Color.RED) <enum 'Color'> >>> isinstance(Color.GREEN, Color) True >>> ``` Enum members also have a property that contains just their item name: ``` >>> print(Color.RED.name) RED ``` Enumerations support iteration, in definition order: ``` >>> class Shake(Enum): ... VANILLA = 7 ... CHOCOLATE = 4 ... COOKIES = 9 ... MINT = 3 ... >>> for shake in Shake: ... print(shake) ... Shake.VANILLA Shake.CHOCOLATE Shake.COOKIES Shake.MINT ``` Enumeration members are hashable, so they can be used in dictionaries and sets: ``` >>> apples = {} >>> apples[Color.RED] = 'red delicious' >>> apples[Color.GREEN] = 'granny smith' >>> apples == {Color.RED: 'red delicious', Color.GREEN: 'granny smith'} True ``` Programmatic access to enumeration members and their attributes --------------------------------------------------------------- Sometimes it’s useful to access members in enumerations programmatically (i.e. situations where `Color.RED` won’t do because the exact color is not known at program-writing time). `Enum` allows such access: ``` >>> Color(1) <Color.RED: 1> >>> Color(3) <Color.BLUE: 3> ``` If you want to access enum members by *name*, use item access: ``` >>> Color['RED'] <Color.RED: 1> >>> Color['GREEN'] <Color.GREEN: 2> ``` If you have an enum member and need its `name` or `value`: ``` >>> member = Color.RED >>> member.name 'RED' >>> member.value 1 ``` Duplicating enum members and values ----------------------------------- Having two enum members with the same name is invalid: ``` >>> class Shape(Enum): ... SQUARE = 2 ... 
SQUARE = 3 ... Traceback (most recent call last): ... TypeError: Attempted to reuse key: 'SQUARE' ``` However, two enum members are allowed to have the same value. Given two members A and B with the same value (and A defined first), B is an alias to A. By-value lookup of the value of A and B will return A. By-name lookup of B will also return A: ``` >>> class Shape(Enum): ... SQUARE = 2 ... DIAMOND = 1 ... CIRCLE = 3 ... ALIAS_FOR_SQUARE = 2 ... >>> Shape.SQUARE <Shape.SQUARE: 2> >>> Shape.ALIAS_FOR_SQUARE <Shape.SQUARE: 2> >>> Shape(2) <Shape.SQUARE: 2> ``` Note Attempting to create a member with the same name as an already defined attribute (another member, a method, etc.) or attempting to create an attribute with the same name as a member is not allowed. Ensuring unique enumeration values ---------------------------------- By default, enumerations allow multiple names as aliases for the same value. When this behavior isn’t desired, the following decorator can be used to ensure each value is used only once in the enumeration: `@enum.unique` A [`class`](../reference/compound_stmts#class) decorator specifically for enumerations. It searches an enumeration’s `__members__` gathering any aliases it finds; if any are found [`ValueError`](exceptions#ValueError "ValueError") is raised with the details: ``` >>> from enum import Enum, unique >>> @unique ... class Mistake(Enum): ... ONE = 1 ... TWO = 2 ... THREE = 3 ... FOUR = 3 ... Traceback (most recent call last): ... ValueError: duplicate values found in <enum 'Mistake'>: FOUR -> THREE ``` Using automatic values ---------------------- If the exact value is unimportant you can use [`auto`](#enum.auto "enum.auto"): ``` >>> from enum import Enum, auto >>> class Color(Enum): ... RED = auto() ... BLUE = auto() ... GREEN = auto() ... >>> list(Color) [<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>] ``` The values are chosen by `_generate_next_value_()`, which can be overridden: ``` >>> class AutoName(Enum): ... def _generate_next_value_(name, start, count, last_values): ... return name ... >>> class Ordinal(AutoName): ... NORTH = auto() ... SOUTH = auto() ... EAST = auto() ... WEST = auto() ... >>> list(Ordinal) [<Ordinal.NORTH: 'NORTH'>, <Ordinal.SOUTH: 'SOUTH'>, <Ordinal.EAST: 'EAST'>, <Ordinal.WEST: 'WEST'>] ``` Note The goal of the default `_generate_next_value_()` method is to provide the next [`int`](functions#int "int") in sequence with the last [`int`](functions#int "int") provided, but the way it does this is an implementation detail and may change. Note The `_generate_next_value_()` method must be defined before any members. Iteration --------- Iterating over the members of an enum does not provide the aliases: ``` >>> list(Shape) [<Shape.SQUARE: 2>, <Shape.DIAMOND: 1>, <Shape.CIRCLE: 3>] ``` The special attribute `__members__` is a read-only ordered mapping of names to members. It includes all names defined in the enumeration, including the aliases: ``` >>> for name, member in Shape.__members__.items(): ... name, member ... ('SQUARE', <Shape.SQUARE: 2>) ('DIAMOND', <Shape.DIAMOND: 1>) ('CIRCLE', <Shape.CIRCLE: 3>) ('ALIAS_FOR_SQUARE', <Shape.SQUARE: 2>) ``` The `__members__` attribute can be used for detailed programmatic access to the enumeration members. 
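A complete name-to-value mapping (aliases included) can likewise be built from it; a small sketch reusing the `Shape` enum from above:

```
>>> {name: member.value for name, member in Shape.__members__.items()}
{'SQUARE': 2, 'DIAMOND': 1, 'CIRCLE': 3, 'ALIAS_FOR_SQUARE': 2}
```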
For example, finding all the aliases: ``` >>> [name for name, member in Shape.__members__.items() if member.name != name] ['ALIAS_FOR_SQUARE'] ``` Comparisons ----------- Enumeration members are compared by identity: ``` >>> Color.RED is Color.RED True >>> Color.RED is Color.BLUE False >>> Color.RED is not Color.BLUE True ``` Ordered comparisons between enumeration values are *not* supported. Enum members are not integers (but see [IntEnum](#intenum) below): ``` >>> Color.RED < Color.BLUE Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: '<' not supported between instances of 'Color' and 'Color' ``` Equality comparisons are defined though: ``` >>> Color.BLUE == Color.RED False >>> Color.BLUE != Color.RED True >>> Color.BLUE == Color.BLUE True ``` Comparisons against non-enumeration values will always compare not equal (again, [`IntEnum`](#enum.IntEnum "enum.IntEnum") was explicitly designed to behave differently, see below): ``` >>> Color.BLUE == 2 False ``` Allowed members and attributes of enumerations ---------------------------------------------- The examples above use integers for enumeration values. Using integers is short and handy (and provided by default by the [Functional API](#functional-api)), but not strictly enforced. In the vast majority of use-cases, one doesn’t care what the actual value of an enumeration is. But if the value *is* important, enumerations can have arbitrary values. Enumerations are Python classes, and can have methods and special methods as usual. If we have this enumeration: ``` >>> class Mood(Enum): ... FUNKY = 1 ... HAPPY = 3 ... ... def describe(self): ... # self is the member here ... return self.name, self.value ... ... def __str__(self): ... return 'my custom str! {0}'.format(self.value) ... ... @classmethod ... def favorite_mood(cls): ... # cls here is the enumeration ... return cls.HAPPY ... ``` Then: ``` >>> Mood.favorite_mood() <Mood.HAPPY: 3> >>> Mood.HAPPY.describe() ('HAPPY', 3) >>> str(Mood.FUNKY) 'my custom str! 1' ``` The rules for what is allowed are as follows: names that start and end with a single underscore are reserved by enum and cannot be used; all other attributes defined within an enumeration will become members of this enumeration, with the exception of special methods ([`__str__()`](../reference/datamodel#object.__str__ "object.__str__"), [`__add__()`](../reference/datamodel#object.__add__ "object.__add__"), etc.), descriptors (methods are also descriptors), and variable names listed in `_ignore_`. Note: if your enumeration defines [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") and/or [`__init__()`](../reference/datamodel#object.__init__ "object.__init__") then any value(s) given to the enum member will be passed into those methods. See [Planet](#planet) for an example. Restricted Enum subclassing --------------------------- A new [`Enum`](#enum.Enum "enum.Enum") class must have one base Enum class, up to one concrete data type, and as many [`object`](functions#object "object")-based mixin classes as needed. The order of these base classes is: ``` class EnumName([mix-in, ...,] [data-type,] base-enum): pass ``` Also, subclassing an enumeration is allowed only if the enumeration does not define any members. So this is forbidden: ``` >>> class MoreColor(Color): ... PINK = 17 ... Traceback (most recent call last): ... TypeError: Cannot extend enumerations ``` But this is allowed: ``` >>> class Foo(Enum): ... def some_behavior(self): ... pass ... >>> class Bar(Foo): ... 
HAPPY = 1 ... SAD = 2 ... ``` Allowing subclassing of enums that define members would lead to a violation of some important invariants of types and instances. On the other hand, it makes sense to allow sharing some common behavior between a group of enumerations. (See [OrderedEnum](#orderedenum) for an example.) Pickling -------- Enumerations can be pickled and unpickled: ``` >>> from test.test_enum import Fruit >>> from pickle import dumps, loads >>> Fruit.TOMATO is loads(dumps(Fruit.TOMATO)) True ``` The usual restrictions for pickling apply: picklable enums must be defined in the top level of a module, since unpickling requires them to be importable from that module. Note With pickle protocol version 4 it is possible to easily pickle enums nested in other classes. It is possible to modify how Enum members are pickled/unpickled by defining [`__reduce_ex__()`](pickle#object.__reduce_ex__ "object.__reduce_ex__") in the enumeration class. Functional API -------------- The [`Enum`](#enum.Enum "enum.Enum") class is callable, providing the following functional API: ``` >>> Animal = Enum('Animal', 'ANT BEE CAT DOG') >>> Animal <enum 'Animal'> >>> Animal.ANT <Animal.ANT: 1> >>> Animal.ANT.value 1 >>> list(Animal) [<Animal.ANT: 1>, <Animal.BEE: 2>, <Animal.CAT: 3>, <Animal.DOG: 4>] ``` The semantics of this API resemble [`namedtuple`](collections#collections.namedtuple "collections.namedtuple"). The first argument of the call to [`Enum`](#enum.Enum "enum.Enum") is the name of the enumeration. The second argument is the *source* of enumeration member names. It can be a whitespace-separated string of names, a sequence of names, a sequence of 2-tuples with key/value pairs, or a mapping (e.g. dictionary) of names to values. The last two options enable assigning arbitrary values to enumerations; the others auto-assign increasing integers starting with 1 (use the `start` parameter to specify a different starting value). A new class derived from [`Enum`](#enum.Enum "enum.Enum") is returned. In other words, the above assignment to `Animal` is equivalent to: ``` >>> class Animal(Enum): ... ANT = 1 ... BEE = 2 ... CAT = 3 ... DOG = 4 ... ``` The reason for defaulting to `1` as the starting number and not `0` is that `0` is `False` in a boolean sense, but enum members all evaluate to `True`. Pickling enums created with the functional API can be tricky as frame stack implementation details are used to try and figure out which module the enumeration is being created in (e.g. it will fail if you use a utility function in a separate module, and also may not work on IronPython or Jython). The solution is to specify the module name explicitly as follows: ``` >>> Animal = Enum('Animal', 'ANT BEE CAT DOG', module=__name__) ``` Warning If `module` is not supplied, and Enum cannot determine what it is, the new Enum members will not be unpicklable; to keep errors closer to the source, pickling will be disabled. The new pickle protocol 4 also, in some circumstances, relies on [`__qualname__`](stdtypes#definition.__qualname__ "definition.__qualname__") being set to the location where pickle will be able to find the class. For example, if the class was made available inside class SomeData in the global scope: ``` >>> Animal = Enum('Animal', 'ANT BEE CAT DOG', qualname='SomeData.Animal') ``` The complete signature is: ``` Enum(value='NewEnumName', names=<...>, *, module='...', qualname='...', type=<mixed-in class>, start=1) ``` value What the new Enum class will record as its name. names The Enum members.
This can be a whitespace or comma separated string (values will start at 1 unless otherwise specified): ``` 'RED GREEN BLUE' | 'RED,GREEN,BLUE' | 'RED, GREEN, BLUE' ``` or an iterator of names: ``` ['RED', 'GREEN', 'BLUE'] ``` or an iterator of (name, value) pairs: ``` [('CYAN', 4), ('MAGENTA', 5), ('YELLOW', 6)] ``` or a mapping: ``` {'CHARTREUSE': 7, 'SEA_GREEN': 11, 'ROSEMARY': 42} ``` module name of module where new Enum class can be found. qualname where in module new Enum class can be found. type type to mix in to new Enum class. start number to start counting at if only names are passed in. Changed in version 3.5: The *start* parameter was added. Derived Enumerations -------------------- ### IntEnum The first variation of [`Enum`](#enum.Enum "enum.Enum") that is provided is also a subclass of [`int`](functions#int "int"). Members of an [`IntEnum`](#enum.IntEnum "enum.IntEnum") can be compared to integers; by extension, integer enumerations of different types can also be compared to each other: ``` >>> from enum import IntEnum >>> class Shape(IntEnum): ... CIRCLE = 1 ... SQUARE = 2 ... >>> class Request(IntEnum): ... POST = 1 ... GET = 2 ... >>> Shape == 1 False >>> Shape.CIRCLE == 1 True >>> Shape.CIRCLE == Request.POST True ``` However, they still can’t be compared to standard [`Enum`](#enum.Enum "enum.Enum") enumerations: ``` >>> class Shape(IntEnum): ... CIRCLE = 1 ... SQUARE = 2 ... >>> class Color(Enum): ... RED = 1 ... GREEN = 2 ... >>> Shape.CIRCLE == Color.RED False ``` [`IntEnum`](#enum.IntEnum "enum.IntEnum") values behave like integers in other ways you’d expect: ``` >>> int(Shape.CIRCLE) 1 >>> ['a', 'b', 'c'][Shape.CIRCLE] 'b' >>> [i for i in range(Shape.SQUARE)] [0, 1] ``` ### IntFlag The next variation of [`Enum`](#enum.Enum "enum.Enum") provided, [`IntFlag`](#enum.IntFlag "enum.IntFlag"), is also based on [`int`](functions#int "int"). The difference being [`IntFlag`](#enum.IntFlag "enum.IntFlag") members can be combined using the bitwise operators (&, |, ^, ~) and the result is still an [`IntFlag`](#enum.IntFlag "enum.IntFlag") member. However, as the name implies, [`IntFlag`](#enum.IntFlag "enum.IntFlag") members also subclass [`int`](functions#int "int") and can be used wherever an [`int`](functions#int "int") is used. Any operation on an [`IntFlag`](#enum.IntFlag "enum.IntFlag") member besides the bit-wise operations will lose the [`IntFlag`](#enum.IntFlag "enum.IntFlag") membership. New in version 3.6. Sample [`IntFlag`](#enum.IntFlag "enum.IntFlag") class: ``` >>> from enum import IntFlag >>> class Perm(IntFlag): ... R = 4 ... W = 2 ... X = 1 ... >>> Perm.R | Perm.W <Perm.R|W: 6> >>> Perm.R + Perm.W 6 >>> RW = Perm.R | Perm.W >>> Perm.R in RW True ``` It is also possible to name the combinations: ``` >>> class Perm(IntFlag): ... R = 4 ... W = 2 ... X = 1 ... RWX = 7 >>> Perm.RWX <Perm.RWX: 7> >>> ~Perm.RWX <Perm.-8: -8> ``` Another important difference between [`IntFlag`](#enum.IntFlag "enum.IntFlag") and [`Enum`](#enum.Enum "enum.Enum") is that if no flags are set (the value is 0), its boolean evaluation is [`False`](constants#False "False"): ``` >>> Perm.R & Perm.X <Perm.0: 0> >>> bool(Perm.R & Perm.X) False ``` Because [`IntFlag`](#enum.IntFlag "enum.IntFlag") members are also subclasses of [`int`](functions#int "int") they can be combined with them: ``` >>> Perm.X | 8 <Perm.8|X: 9> ``` ### Flag The last variation is [`Flag`](#enum.Flag "enum.Flag"). 
Like [`IntFlag`](#enum.IntFlag "enum.IntFlag"), [`Flag`](#enum.Flag "enum.Flag") members can be combined using the bitwise operators (&, |, ^, ~). Unlike [`IntFlag`](#enum.IntFlag "enum.IntFlag"), they cannot be combined with, nor compared against, any other [`Flag`](#enum.Flag "enum.Flag") enumeration, nor [`int`](functions#int "int"). While it is possible to specify the values directly it is recommended to use [`auto`](#enum.auto "enum.auto") as the value and let [`Flag`](#enum.Flag "enum.Flag") select an appropriate value. New in version 3.6. Like [`IntFlag`](#enum.IntFlag "enum.IntFlag"), if a combination of [`Flag`](#enum.Flag "enum.Flag") members results in no flags being set, the boolean evaluation is [`False`](constants#False "False"): ``` >>> from enum import Flag, auto >>> class Color(Flag): ... RED = auto() ... BLUE = auto() ... GREEN = auto() ... >>> Color.RED & Color.GREEN <Color.0: 0> >>> bool(Color.RED & Color.GREEN) False ``` Individual flags should have values that are powers of two (1, 2, 4, 8, …), while combinations of flags won’t: ``` >>> class Color(Flag): ... RED = auto() ... BLUE = auto() ... GREEN = auto() ... WHITE = RED | BLUE | GREEN ... >>> Color.WHITE <Color.WHITE: 7> ``` Giving a name to the “no flags set” condition does not change its boolean value: ``` >>> class Color(Flag): ... BLACK = 0 ... RED = auto() ... BLUE = auto() ... GREEN = auto() ... >>> Color.BLACK <Color.BLACK: 0> >>> bool(Color.BLACK) False ``` Note For the majority of new code, [`Enum`](#enum.Enum "enum.Enum") and [`Flag`](#enum.Flag "enum.Flag") are strongly recommended, since [`IntEnum`](#enum.IntEnum "enum.IntEnum") and [`IntFlag`](#enum.IntFlag "enum.IntFlag") break some semantic promises of an enumeration (by being comparable to integers, and thus by transitivity to other unrelated enumerations). [`IntEnum`](#enum.IntEnum "enum.IntEnum") and [`IntFlag`](#enum.IntFlag "enum.IntFlag") should be used only in cases where [`Enum`](#enum.Enum "enum.Enum") and [`Flag`](#enum.Flag "enum.Flag") will not do; for example, when integer constants are replaced with enumerations, or for interoperability with other systems. ### Others While [`IntEnum`](#enum.IntEnum "enum.IntEnum") is part of the [`enum`](#module-enum "enum: Implementation of an enumeration class.") module, it would be very simple to implement independently: ``` class IntEnum(int, Enum): pass ``` This demonstrates how similar derived enumerations can be defined; for example a `StrEnum` that mixes in [`str`](stdtypes#str "str") instead of [`int`](functions#int "int"). Some rules: 1. When subclassing [`Enum`](#enum.Enum "enum.Enum"), mix-in types must appear before [`Enum`](#enum.Enum "enum.Enum") itself in the sequence of bases, as in the [`IntEnum`](#enum.IntEnum "enum.IntEnum") example above. 2. While [`Enum`](#enum.Enum "enum.Enum") can have members of any type, once you mix in an additional type, all the members must have values of that type, e.g. [`int`](functions#int "int") above. This restriction does not apply to mix-ins which only add methods and don’t specify another type. 3. When another data type is mixed in, the `value` attribute is *not the same* as the enum member itself, although it is equivalent and will compare equal. 4. 
%-style formatting: `%s` and `%r` call the [`Enum`](#enum.Enum "enum.Enum") class’s [`__str__()`](../reference/datamodel#object.__str__ "object.__str__") and [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__") respectively; other codes (such as `%i` or `%h` for IntEnum) treat the enum member as its mixed-in type. 5. [Formatted string literals](../reference/lexical_analysis#f-strings), [`str.format()`](stdtypes#str.format "str.format"), and [`format()`](functions#format "format") will use the mixed-in type’s [`__format__()`](../reference/datamodel#object.__format__ "object.__format__") unless [`__str__()`](../reference/datamodel#object.__str__ "object.__str__") or [`__format__()`](../reference/datamodel#object.__format__ "object.__format__") is overridden in the subclass, in which case the overridden methods or [`Enum`](#enum.Enum "enum.Enum") methods will be used. Use the !s and !r format codes to force usage of the [`Enum`](#enum.Enum "enum.Enum") class’s [`__str__()`](../reference/datamodel#object.__str__ "object.__str__") and [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__") methods. When to use \_\_new\_\_() vs. \_\_init\_\_() -------------------------------------------- [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") must be used whenever you want to customize the actual value of the [`Enum`](#enum.Enum "enum.Enum") member. Any other modifications may go in either [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") or [`__init__()`](../reference/datamodel#object.__init__ "object.__init__"), with [`__init__()`](../reference/datamodel#object.__init__ "object.__init__") being preferred. For example, if you want to pass several items to the constructor, but only want one of them to be the value: ``` >>> class Coordinate(bytes, Enum): ... """ ... Coordinate with binary codes that can be indexed by the int code. ... """ ... def __new__(cls, value, label, unit): ... obj = bytes.__new__(cls, [value]) ... obj._value_ = value ... obj.label = label ... obj.unit = unit ... return obj ... PX = (0, 'P.X', 'km') ... PY = (1, 'P.Y', 'km') ... VX = (2, 'V.X', 'km/s') ... VY = (3, 'V.Y', 'km/s') ... >>> print(Coordinate['PY']) Coordinate.PY >>> print(Coordinate(3)) Coordinate.VY ``` Interesting examples -------------------- While [`Enum`](#enum.Enum "enum.Enum"), [`IntEnum`](#enum.IntEnum "enum.IntEnum"), [`IntFlag`](#enum.IntFlag "enum.IntFlag"), and [`Flag`](#enum.Flag "enum.Flag") are expected to cover the majority of use-cases, they cannot cover them all. Here are recipes for some different types of enumerations that can be used directly, or as examples for creating one’s own. ### Omitting values In many use-cases one doesn’t care what the actual value of an enumeration is. There are several ways to define this type of simple enumeration: * use instances of [`auto`](#enum.auto "enum.auto") for the value * use instances of [`object`](functions#object "object") as the value * use a descriptive string as the value * use a tuple as the value and a custom [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") to replace the tuple with an [`int`](functions#int "int") value Using any of these methods signifies to the user that these values are not important, and also enables one to add, remove, or reorder members without having to renumber the remaining members. 
Whichever method you choose, you should provide a [`repr()`](functions#repr "repr") that also hides the (unimportant) value:

```
>>> class NoValue(Enum):
...     def __repr__(self):
...         return '<%s.%s>' % (self.__class__.__name__, self.name)
...
```

#### Using [`auto`](#enum.auto "enum.auto")

Using [`auto`](#enum.auto "enum.auto") would look like:

```
>>> class Color(NoValue):
...     RED = auto()
...     BLUE = auto()
...     GREEN = auto()
...
>>> Color.GREEN
<Color.GREEN>
```

#### Using [`object`](functions#object "object")

Using [`object`](functions#object "object") would look like:

```
>>> class Color(NoValue):
...     RED = object()
...     GREEN = object()
...     BLUE = object()
...
>>> Color.GREEN
<Color.GREEN>
```

#### Using a descriptive string

Using a string as the value would look like:

```
>>> class Color(NoValue):
...     RED = 'stop'
...     GREEN = 'go'
...     BLUE = 'too fast!'
...
>>> Color.GREEN
<Color.GREEN>
>>> Color.GREEN.value
'go'
```

#### Using a custom [`__new__()`](../reference/datamodel#object.__new__ "object.__new__")

Using an auto-numbering [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") would look like:

```
>>> class AutoNumber(NoValue):
...     def __new__(cls):
...         value = len(cls.__members__) + 1
...         obj = object.__new__(cls)
...         obj._value_ = value
...         return obj
...
>>> class Color(AutoNumber):
...     RED = ()
...     GREEN = ()
...     BLUE = ()
...
>>> Color.GREEN
<Color.GREEN>
>>> Color.GREEN.value
2
```

To make a more general-purpose `AutoNumber`, add `*args` to the signature:

```
>>> class AutoNumber(NoValue):
...     def __new__(cls, *args):  # this is the only change from above
...         value = len(cls.__members__) + 1
...         obj = object.__new__(cls)
...         obj._value_ = value
...         return obj
...
```

Then when you inherit from `AutoNumber` you can write your own `__init__` to handle any extra arguments:

```
>>> class Swatch(AutoNumber):
...     def __init__(self, pantone='unknown'):
...         self.pantone = pantone
...     AUBURN = '3497'
...     SEA_GREEN = '1246'
...     BLEACHED_CORAL = ()  # New color, no Pantone code yet!
...
>>> Swatch.SEA_GREEN
<Swatch.SEA_GREEN: 2>
>>> Swatch.SEA_GREEN.pantone
'1246'
>>> Swatch.BLEACHED_CORAL.pantone
'unknown'
```

Note The [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") method, if defined, is used during creation of the Enum members; it is then replaced by Enum’s [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") which is used after class creation for lookup of existing members.

### OrderedEnum

An ordered enumeration that is not based on [`IntEnum`](#enum.IntEnum "enum.IntEnum") and so maintains the normal [`Enum`](#enum.Enum "enum.Enum") invariants (such as not being comparable to other enumerations):

```
>>> class OrderedEnum(Enum):
...     def __ge__(self, other):
...         if self.__class__ is other.__class__:
...             return self.value >= other.value
...         return NotImplemented
...     def __gt__(self, other):
...         if self.__class__ is other.__class__:
...             return self.value > other.value
...         return NotImplemented
...     def __le__(self, other):
...         if self.__class__ is other.__class__:
...             return self.value <= other.value
...         return NotImplemented
...     def __lt__(self, other):
...         if self.__class__ is other.__class__:
...             return self.value < other.value
...         return NotImplemented
...
>>> class Grade(OrderedEnum):
...     A = 5
...     B = 4
...     C = 3
...     D = 2
...     F = 1
...
>>> Grade.C < Grade.A
True
```

### DuplicateFreeEnum

Raises an error if a duplicate member value is found, instead of creating an alias:

```
>>> class DuplicateFreeEnum(Enum):
...     def __init__(self, *args):
...         cls = self.__class__
...         if any(self.value == e.value for e in cls):
...             a = self.name
...             e = cls(self.value).name
...             raise ValueError(
...                 "aliases not allowed in DuplicateFreeEnum: %r --> %r"
...                 % (a, e))
...
>>> class Color(DuplicateFreeEnum):
...     RED = 1
...     GREEN = 2
...     BLUE = 3
...     GRENE = 2
...
Traceback (most recent call last):
...
ValueError: aliases not allowed in DuplicateFreeEnum: 'GRENE' --> 'GREEN'
```

Note This is a useful example for subclassing Enum to add or change other behaviors as well as disallowing aliases. If the only desired change is disallowing aliases, the [`unique()`](#enum.unique "enum.unique") decorator can be used instead.

### Planet

If [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") or [`__init__()`](../reference/datamodel#object.__init__ "object.__init__") is defined the value of the enum member will be passed to those methods:

```
>>> class Planet(Enum):
...     MERCURY = (3.303e+23, 2.4397e6)
...     VENUS = (4.869e+24, 6.0518e6)
...     EARTH = (5.976e+24, 6.37814e6)
...     MARS = (6.421e+23, 3.3972e6)
...     JUPITER = (1.9e+27, 7.1492e7)
...     SATURN = (5.688e+26, 6.0268e7)
...     URANUS = (8.686e+25, 2.5559e7)
...     NEPTUNE = (1.024e+26, 2.4746e7)
...     def __init__(self, mass, radius):
...         self.mass = mass  # in kilograms
...         self.radius = radius  # in meters
...     @property
...     def surface_gravity(self):
...         # universal gravitational constant (m3 kg-1 s-2)
...         G = 6.67300E-11
...         return G * self.mass / (self.radius * self.radius)
...
>>> Planet.EARTH.value
(5.976e+24, 6378140.0)
>>> Planet.EARTH.surface_gravity
9.802652743337129
```

### TimePeriod

An example to show the `_ignore_` attribute in use:

```
>>> from datetime import timedelta
>>> class Period(timedelta, Enum):
...     "different lengths of time"
...     _ignore_ = 'Period i'
...     Period = vars()
...     for i in range(367):
...         Period['day_%d' % i] = i
...
>>> list(Period)[:2]
[<Period.day_0: datetime.timedelta(0)>, <Period.day_1: datetime.timedelta(days=1)>]
>>> list(Period)[-2:]
[<Period.day_365: datetime.timedelta(days=365)>, <Period.day_366: datetime.timedelta(days=366)>]
```

How are Enums different?
------------------------

Enums have a custom metaclass that affects many aspects of both derived Enum classes and their instances (members).

### Enum Classes

The `EnumMeta` metaclass is responsible for providing the [`__contains__()`](../reference/datamodel#object.__contains__ "object.__contains__"), [`__dir__()`](../reference/datamodel#object.__dir__ "object.__dir__"), [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__") and other methods that allow one to do things with an [`Enum`](#enum.Enum "enum.Enum") class that fail on a typical class, such as `list(Color)` or `some_enum_var in Color`. `EnumMeta` is responsible for ensuring that various other methods on the final [`Enum`](#enum.Enum "enum.Enum") class are correct (such as [`__new__()`](../reference/datamodel#object.__new__ "object.__new__"), [`__getnewargs__()`](pickle#object.__getnewargs__ "object.__getnewargs__"), [`__str__()`](../reference/datamodel#object.__str__ "object.__str__") and [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__")).

### Enum Members (aka instances)

The most interesting thing about Enum members is that they are singletons.
`EnumMeta` creates them all while it is creating the [`Enum`](#enum.Enum "enum.Enum") class itself, and then puts a custom [`__new__()`](../reference/datamodel#object.__new__ "object.__new__") in place to ensure that no new ones are ever instantiated by returning only the existing member instances.

### Finer Points

#### Supported `__dunder__` names

`__members__` is a read-only ordered mapping of `member_name`:`member` items. It is only available on the class.

[`__new__()`](../reference/datamodel#object.__new__ "object.__new__"), if specified, must create and return the enum members; it is also a very good idea to set the member’s `_value_` appropriately. Once all the members are created it is no longer used.

#### Supported `_sunder_` names

* `_name_` – name of the member
* `_value_` – value of the member; can be set / modified in `__new__`
* `_missing_` – a lookup function used when a value is not found; may be overridden
* `_ignore_` – a list of names, either as a [`list`](stdtypes#list "list") or a [`str`](stdtypes#str "str"), that will not be transformed into members, and will be removed from the final class
* `_order_` – used in Python 2/3 code to ensure member order is consistent (class attribute, removed during class creation)
* `_generate_next_value_` – used by the [Functional API](#functional-api) and by [`auto`](#enum.auto "enum.auto") to get an appropriate value for an enum member; may be overridden

New in version 3.6: `_missing_`, `_order_`, `_generate_next_value_`

New in version 3.7: `_ignore_`

To help keep Python 2 / Python 3 code in sync an `_order_` attribute can be provided. It will be checked against the actual order of the enumeration and raise an error if the two do not match:

```
>>> class Color(Enum):
...     _order_ = 'RED GREEN BLUE'
...     RED = 1
...     BLUE = 3
...     GREEN = 2
...
Traceback (most recent call last):
...
TypeError: member order does not match _order_
```

Note In Python 2 code the `_order_` attribute is necessary as definition order is lost before it can be recorded.

#### \_Private\_\_names

[Private names](../reference/expressions#private-name-mangling) will be normal attributes in Python 3.11 instead of either an error or a member (depending on whether the name ends with an underscore). Using these names in 3.9 and 3.10 will issue a [`DeprecationWarning`](exceptions#DeprecationWarning "DeprecationWarning").

#### `Enum` member type

[`Enum`](#enum.Enum "enum.Enum") members are instances of their [`Enum`](#enum.Enum "enum.Enum") class, and are normally accessed as `EnumClass.member`. Under certain circumstances they can also be accessed as `EnumClass.member.member`, but you should never do this as that lookup may fail or, worse, return something besides the [`Enum`](#enum.Enum "enum.Enum") member you are looking for (this is another good reason to use all-uppercase names for members):

```
>>> class FieldTypes(Enum):
...     name = 0
...     value = 1
...     size = 2
...
>>> FieldTypes.value.size
<FieldTypes.size: 2>
>>> FieldTypes.size.value
2
```

Note This behavior is deprecated and will be removed in 3.11.

Changed in version 3.5.

#### Boolean value of `Enum` classes and members

[`Enum`](#enum.Enum "enum.Enum") members that are mixed with non-[`Enum`](#enum.Enum "enum.Enum") types (such as [`int`](functions#int "int"), [`str`](stdtypes#str "str"), etc.) are evaluated according to the mixed-in type’s rules; otherwise, all members evaluate as [`True`](constants#True "True").
To make your own Enum’s boolean evaluation depend on the member’s value, add the following to your class:

```
def __bool__(self):
    return bool(self.value)
```

[`Enum`](#enum.Enum "enum.Enum") classes always evaluate as [`True`](constants#True "True").

#### `Enum` classes with methods

If you give your [`Enum`](#enum.Enum "enum.Enum") subclass extra methods, like the [Planet](#planet) class above, those methods will show up in a [`dir()`](functions#dir "dir") of the member, but not of the class:

```
>>> dir(Planet)
['EARTH', 'JUPITER', 'MARS', 'MERCURY', 'NEPTUNE', 'SATURN', 'URANUS', 'VENUS', '__class__', '__doc__', '__members__', '__module__']
>>> dir(Planet.EARTH)
['__class__', '__doc__', '__module__', 'name', 'surface_gravity', 'value']
```

#### Combining members of `Flag`

If a combination of Flag members is not named, the [`repr()`](functions#repr "repr") will include all named flags and all named combinations of flags that are in the value:

```
>>> class Color(Flag):
...     RED = auto()
...     GREEN = auto()
...     BLUE = auto()
...     MAGENTA = RED | BLUE
...     YELLOW = RED | GREEN
...     CYAN = GREEN | BLUE
...
>>> Color(3)  # named combination
<Color.YELLOW: 3>
>>> Color(7)  # not named combination
<Color.CYAN|MAGENTA|BLUE|YELLOW|GREEN|RED: 7>
```

Note In 3.11 unnamed combinations of flags will only produce the canonical flag members (aka single-value flags). So `Color(7)` would produce something like `<Color.BLUE|GREEN|RED: 7>`.
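To round out the combination examples above: whether one flag is present in a combination can be tested with the `in` operator. A quick sketch, reusing the `Color` flags just defined:

```
>>> Color.RED in Color.MAGENTA
True
>>> Color.GREEN in Color.MAGENTA
False
```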
python Internationalization Internationalization ==================== The modules described in this chapter help you write software that is independent of language and locale by providing mechanisms for selecting a language to be used in program messages or by tailoring output to match local conventions. The list of modules described in this chapter is: * [`gettext` — Multilingual internationalization services](gettext) + [GNU **gettext** API](gettext#gnu-gettext-api) + [Class-based API](gettext#class-based-api) - [The `NullTranslations` class](gettext#the-nulltranslations-class) - [The `GNUTranslations` class](gettext#the-gnutranslations-class) - [Solaris message catalog support](gettext#solaris-message-catalog-support) - [The Catalog constructor](gettext#the-catalog-constructor) + [Internationalizing your programs and modules](gettext#internationalizing-your-programs-and-modules) - [Localizing your module](gettext#localizing-your-module) - [Localizing your application](gettext#localizing-your-application) - [Changing languages on the fly](gettext#changing-languages-on-the-fly) - [Deferred translations](gettext#deferred-translations) + [Acknowledgements](gettext#acknowledgements) * [`locale` — Internationalization services](locale) + [Background, details, hints, tips and caveats](locale#background-details-hints-tips-and-caveats) + [For extension writers and programs that embed Python](locale#for-extension-writers-and-programs-that-embed-python) + [Access to message catalogs](locale#access-to-message-catalogs) python concurrent.futures — Launching parallel tasks concurrent.futures — Launching parallel tasks ============================================= New in version 3.2. **Source code:** [Lib/concurrent/futures/thread.py](https://github.com/python/cpython/tree/3.9/Lib/concurrent/futures/thread.py) and [Lib/concurrent/futures/process.py](https://github.com/python/cpython/tree/3.9/Lib/concurrent/futures/process.py) The [`concurrent.futures`](#module-concurrent.futures "concurrent.futures: Execute computations concurrently using threads or processes.") module provides a high-level interface for asynchronously executing callables. The asynchronous execution can be performed with threads, using [`ThreadPoolExecutor`](#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor"), or separate processes, using [`ProcessPoolExecutor`](#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor"). Both implement the same interface, which is defined by the abstract [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") class. Executor Objects ---------------- `class concurrent.futures.Executor` An abstract class that provides methods to execute calls asynchronously. It should not be used directly, but through its concrete subclasses. `submit(fn, /, *args, **kwargs)` Schedules the callable, *fn*, to be executed as `fn(*args, **kwargs)` and returns a [`Future`](#concurrent.futures.Future "concurrent.futures.Future") object representing the execution of the callable. ``` with ThreadPoolExecutor(max_workers=1) as executor: future = executor.submit(pow, 323, 1235) print(future.result()) ``` `map(func, *iterables, timeout=None, chunksize=1)` Similar to [`map(func, *iterables)`](functions#map "map") except: * the *iterables* are collected immediately rather than lazily; * *func* is executed asynchronously and several calls to *func* may be made concurrently. 
The returned iterator raises a [`concurrent.futures.TimeoutError`](#concurrent.futures.TimeoutError "concurrent.futures.TimeoutError") if [`__next__()`](stdtypes#iterator.__next__ "iterator.__next__") is called and the result isn’t available after *timeout* seconds from the original call to [`Executor.map()`](#concurrent.futures.Executor.map "concurrent.futures.Executor.map"). *timeout* can be an int or a float. If *timeout* is not specified or `None`, there is no limit to the wait time.

If a *func* call raises an exception, then that exception will be raised when its value is retrieved from the iterator.

When using [`ProcessPoolExecutor`](#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor"), this method chops *iterables* into a number of chunks which it submits to the pool as separate tasks. The (approximate) size of these chunks can be specified by setting *chunksize* to a positive integer. For very long iterables, using a large value for *chunksize* can significantly improve performance compared to the default size of 1. With [`ThreadPoolExecutor`](#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor"), *chunksize* has no effect.

Changed in version 3.5: Added the *chunksize* argument.

`shutdown(wait=True, *, cancel_futures=False)`

Signal the executor that it should free any resources that it is using when the currently pending futures are done executing. Calls to [`Executor.submit()`](#concurrent.futures.Executor.submit "concurrent.futures.Executor.submit") and [`Executor.map()`](#concurrent.futures.Executor.map "concurrent.futures.Executor.map") made after shutdown will raise [`RuntimeError`](exceptions#RuntimeError "RuntimeError").

If *wait* is `True` then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If *wait* is `False` then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of *wait*, the entire Python program will not exit until all pending futures are done executing.

If *cancel\_futures* is `True`, this method will cancel all pending futures that the executor has not started running. Any futures that are completed or running won’t be cancelled, regardless of the value of *cancel\_futures*.

If both *cancel\_futures* and *wait* are `True`, all futures that the executor has started running will be completed prior to this method returning. The remaining futures are cancelled.

You can avoid having to call this method explicitly if you use the [`with`](../reference/compound_stmts#with) statement, which will shut down the [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") (waiting as if [`Executor.shutdown()`](#concurrent.futures.Executor.shutdown "concurrent.futures.Executor.shutdown") were called with *wait* set to `True`):

```
import shutil
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=4) as e:
    e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
    e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
    e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
    e.submit(shutil.copy, 'src4.txt', 'dest4.txt')
```

Changed in version 3.9: Added *cancel\_futures*.
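Tying together the methods above, a minimal sketch of `Executor.map()`; the argument values are illustrative:

```
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=2) as executor:
    # results are yielded in the order the inputs were given,
    # even though the calls may run concurrently
    for result in executor.map(pow, [2, 3, 4], [5, 5, 5]):
        print(result)  # 32, 243, 1024
```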
ThreadPoolExecutor
------------------

[`ThreadPoolExecutor`](#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor") is an [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") subclass that uses a pool of threads to execute calls asynchronously.

Deadlocks can occur when the callable associated with a [`Future`](#concurrent.futures.Future "concurrent.futures.Future") waits on the results of another [`Future`](#concurrent.futures.Future "concurrent.futures.Future"). For example:

```
import time
from concurrent.futures import ThreadPoolExecutor

def wait_on_b():
    time.sleep(5)
    print(b.result())  # b will never complete because it is waiting on a.
    return 5

def wait_on_a():
    time.sleep(5)
    print(a.result())  # a will never complete because it is waiting on b.
    return 6

executor = ThreadPoolExecutor(max_workers=2)
a = executor.submit(wait_on_b)
b = executor.submit(wait_on_a)
```

And:

```
from concurrent.futures import ThreadPoolExecutor

def wait_on_future():
    f = executor.submit(pow, 5, 2)
    # This will never complete because there is only one worker thread and
    # it is executing this function.
    print(f.result())

executor = ThreadPoolExecutor(max_workers=1)
executor.submit(wait_on_future)
```

`class concurrent.futures.ThreadPoolExecutor(max_workers=None, thread_name_prefix='', initializer=None, initargs=())`

An [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") subclass that uses a pool of at most *max\_workers* threads to execute calls asynchronously.

*initializer* is an optional callable that is called at the start of each worker thread; *initargs* is a tuple of arguments passed to the initializer. Should *initializer* raise an exception, all currently pending jobs will raise a [`BrokenThreadPool`](#concurrent.futures.thread.BrokenThreadPool "concurrent.futures.thread.BrokenThreadPool"), as well as any attempt to submit more jobs to the pool.

Changed in version 3.5: If *max\_workers* is `None` or not given, it will default to the number of processors on the machine, multiplied by `5`, assuming that [`ThreadPoolExecutor`](#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor") is often used to overlap I/O instead of CPU work and the number of workers should be higher than the number of workers for [`ProcessPoolExecutor`](#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor").

New in version 3.6: The *thread\_name\_prefix* argument was added to allow users to control the [`threading.Thread`](threading#threading.Thread "threading.Thread") names for worker threads created by the pool for easier debugging.

Changed in version 3.7: Added the *initializer* and *initargs* arguments.

Changed in version 3.8: Default value of *max\_workers* is changed to `min(32, os.cpu_count() + 4)`. This default value preserves at least 5 workers for I/O bound tasks. It utilizes at most 32 CPU cores for CPU bound tasks which release the GIL. And it avoids using very large resources implicitly on many-core machines. ThreadPoolExecutor now reuses idle worker threads before starting *max\_workers* worker threads too.
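A short sketch of the *thread\_name\_prefix* and *initializer* parameters described above; the `init` helper and the prefix string are illustrative, not part of the API:

```
import threading
from concurrent.futures import ThreadPoolExecutor

def init():
    # runs once at the start of each worker thread
    print('started', threading.current_thread().name)

with ThreadPoolExecutor(max_workers=2, thread_name_prefix='demo',
                        initializer=init) as executor:
    executor.submit(print, 'hello from the pool')
```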
### ThreadPoolExecutor Example ``` import concurrent.futures import urllib.request URLS = ['http://www.foxnews.com/', 'http://www.cnn.com/', 'http://europe.wsj.com/', 'http://www.bbc.co.uk/', 'http://some-made-up-domain.com/'] # Retrieve a single page and report the URL and contents def load_url(url, timeout): with urllib.request.urlopen(url, timeout=timeout) as conn: return conn.read() # We can use a with statement to ensure threads are cleaned up promptly with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: # Start the load operations and mark each future with its URL future_to_url = {executor.submit(load_url, url, 60): url for url in URLS} for future in concurrent.futures.as_completed(future_to_url): url = future_to_url[future] try: data = future.result() except Exception as exc: print('%r generated an exception: %s' % (url, exc)) else: print('%r page is %d bytes' % (url, len(data))) ``` ProcessPoolExecutor ------------------- The [`ProcessPoolExecutor`](#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor") class is an [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") subclass that uses a pool of processes to execute calls asynchronously. [`ProcessPoolExecutor`](#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor") uses the [`multiprocessing`](multiprocessing#module-multiprocessing "multiprocessing: Process-based parallelism.") module, which allows it to side-step the [Global Interpreter Lock](../glossary#term-global-interpreter-lock) but also means that only picklable objects can be executed and returned. The `__main__` module must be importable by worker subprocesses. This means that [`ProcessPoolExecutor`](#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor") will not work in the interactive interpreter. Calling [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") or [`Future`](#concurrent.futures.Future "concurrent.futures.Future") methods from a callable submitted to a [`ProcessPoolExecutor`](#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor") will result in deadlock. `class concurrent.futures.ProcessPoolExecutor(max_workers=None, mp_context=None, initializer=None, initargs=())` An [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") subclass that executes calls asynchronously using a pool of at most *max\_workers* processes. If *max\_workers* is `None` or not given, it will default to the number of processors on the machine. If *max\_workers* is less than or equal to `0`, then a [`ValueError`](exceptions#ValueError "ValueError") will be raised. On Windows, *max\_workers* must be less than or equal to `61`. If it is not then [`ValueError`](exceptions#ValueError "ValueError") will be raised. If *max\_workers* is `None`, then the default chosen will be at most `61`, even if more processors are available. *mp\_context* can be a multiprocessing context or None. It will be used to launch the workers. If *mp\_context* is `None` or not given, the default multiprocessing context is used. *initializer* is an optional callable that is called at the start of each worker process; *initargs* is a tuple of arguments passed to the initializer. Should *initializer* raise an exception, all currently pending jobs will raise a [`BrokenProcessPool`](#concurrent.futures.process.BrokenProcessPool "concurrent.futures.process.BrokenProcessPool"), as well as any attempt to submit more jobs to the pool. 
Changed in version 3.3: When one of the worker processes terminates abruptly, a `BrokenProcessPool` error is now raised. Previously, behaviour was undefined but operations on the executor or its futures would often freeze or deadlock. Changed in version 3.7: The *mp\_context* argument was added to allow users to control the start\_method for worker processes created by the pool. Added the *initializer* and *initargs* arguments. ### ProcessPoolExecutor Example ``` import concurrent.futures import math PRIMES = [ 112272535095293, 112582705942171, 112272535095293, 115280095190773, 115797848077099, 1099726899285419] def is_prime(n): if n < 2: return False if n == 2: return True if n % 2 == 0: return False sqrt_n = int(math.floor(math.sqrt(n))) for i in range(3, sqrt_n + 1, 2): if n % i == 0: return False return True def main(): with concurrent.futures.ProcessPoolExecutor() as executor: for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)): print('%d is prime: %s' % (number, prime)) if __name__ == '__main__': main() ``` Future Objects -------------- The [`Future`](#concurrent.futures.Future "concurrent.futures.Future") class encapsulates the asynchronous execution of a callable. [`Future`](#concurrent.futures.Future "concurrent.futures.Future") instances are created by [`Executor.submit()`](#concurrent.futures.Executor.submit "concurrent.futures.Executor.submit"). `class concurrent.futures.Future` Encapsulates the asynchronous execution of a callable. [`Future`](#concurrent.futures.Future "concurrent.futures.Future") instances are created by [`Executor.submit()`](#concurrent.futures.Executor.submit "concurrent.futures.Executor.submit") and should not be created directly except for testing. `cancel()` Attempt to cancel the call. If the call is currently being executed or finished running and cannot be cancelled then the method will return `False`, otherwise the call will be cancelled and the method will return `True`. `cancelled()` Return `True` if the call was successfully cancelled. `running()` Return `True` if the call is currently being executed and cannot be cancelled. `done()` Return `True` if the call was successfully cancelled or finished running. `result(timeout=None)` Return the value returned by the call. If the call hasn’t yet completed then this method will wait up to *timeout* seconds. If the call hasn’t completed in *timeout* seconds, then a [`concurrent.futures.TimeoutError`](#concurrent.futures.TimeoutError "concurrent.futures.TimeoutError") will be raised. *timeout* can be an int or float. If *timeout* is not specified or `None`, there is no limit to the wait time. If the future is cancelled before completing then [`CancelledError`](#concurrent.futures.CancelledError "concurrent.futures.CancelledError") will be raised. If the call raised an exception, this method will raise the same exception. `exception(timeout=None)` Return the exception raised by the call. If the call hasn’t yet completed then this method will wait up to *timeout* seconds. If the call hasn’t completed in *timeout* seconds, then a [`concurrent.futures.TimeoutError`](#concurrent.futures.TimeoutError "concurrent.futures.TimeoutError") will be raised. *timeout* can be an int or float. If *timeout* is not specified or `None`, there is no limit to the wait time. If the future is cancelled before completing then [`CancelledError`](#concurrent.futures.CancelledError "concurrent.futures.CancelledError") will be raised. If the call completed without raising, `None` is returned. 
`add_done_callback(fn)` Attaches the callable *fn* to the future. *fn* will be called, with the future as its only argument, when the future is cancelled or finishes running. Added callables are called in the order that they were added and are always called in a thread belonging to the process that added them. If the callable raises an [`Exception`](exceptions#Exception "Exception") subclass, it will be logged and ignored. If the callable raises a [`BaseException`](exceptions#BaseException "BaseException") subclass, the behavior is undefined. If the future has already completed or been cancelled, *fn* will be called immediately. The following [`Future`](#concurrent.futures.Future "concurrent.futures.Future") methods are meant for use in unit tests and [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") implementations. `set_running_or_notify_cancel()` This method should only be called by [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") implementations before executing the work associated with the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") and by unit tests. If the method returns `False` then the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") was cancelled, i.e. [`Future.cancel()`](#concurrent.futures.Future.cancel "concurrent.futures.Future.cancel") was called and returned `True`. Any threads waiting on the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") completing (i.e. through [`as_completed()`](#concurrent.futures.as_completed "concurrent.futures.as_completed") or [`wait()`](#concurrent.futures.wait "concurrent.futures.wait")) will be woken up. If the method returns `True` then the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") was not cancelled and has been put in the running state, i.e. calls to [`Future.running()`](#concurrent.futures.Future.running "concurrent.futures.Future.running") will return `True`. This method can only be called once and cannot be called after [`Future.set_result()`](#concurrent.futures.Future.set_result "concurrent.futures.Future.set_result") or [`Future.set_exception()`](#concurrent.futures.Future.set_exception "concurrent.futures.Future.set_exception") have been called. `set_result(result)` Sets the result of the work associated with the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") to *result*. This method should only be used by [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") implementations and unit tests. Changed in version 3.8: This method raises [`concurrent.futures.InvalidStateError`](#concurrent.futures.InvalidStateError "concurrent.futures.InvalidStateError") if the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") is already done. `set_exception(exception)` Sets the result of the work associated with the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") to the [`Exception`](exceptions#Exception "Exception") *exception*. This method should only be used by [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") implementations and unit tests. Changed in version 3.8: This method raises [`concurrent.futures.InvalidStateError`](#concurrent.futures.InvalidStateError "concurrent.futures.InvalidStateError") if the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") is already done. 
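A small sketch of `add_done_callback()` from above; the `report` callback is illustrative:

```
from concurrent.futures import ThreadPoolExecutor

def report(future):
    # invoked with the finished future as its only argument
    print('result:', future.result())

with ThreadPoolExecutor() as executor:
    f = executor.submit(sum, [1, 2, 3])
    f.add_done_callback(report)  # eventually prints "result: 6"
```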
Module Functions ---------------- `concurrent.futures.wait(fs, timeout=None, return_when=ALL_COMPLETED)` Wait for the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") instances (possibly created by different [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") instances) given by *fs* to complete. Duplicate futures given to *fs* are removed and will be returned only once. Returns a named 2-tuple of sets. The first set, named `done`, contains the futures that completed (finished or cancelled futures) before the wait completed. The second set, named `not_done`, contains the futures that did not complete (pending or running futures). *timeout* can be used to control the maximum number of seconds to wait before returning. *timeout* can be an int or float. If *timeout* is not specified or `None`, there is no limit to the wait time. *return\_when* indicates when this function should return. It must be one of the following constants: | Constant | Description | | --- | --- | | `FIRST_COMPLETED` | The function will return when any future finishes or is cancelled. | | `FIRST_EXCEPTION` | The function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to `ALL_COMPLETED`. | | `ALL_COMPLETED` | The function will return when all futures finish or are cancelled. | `concurrent.futures.as_completed(fs, timeout=None)` Returns an iterator over the [`Future`](#concurrent.futures.Future "concurrent.futures.Future") instances (possibly created by different [`Executor`](#concurrent.futures.Executor "concurrent.futures.Executor") instances) given by *fs* that yields futures as they complete (finished or cancelled futures). Any futures given by *fs* that are duplicated will be returned once. Any futures that completed before [`as_completed()`](#concurrent.futures.as_completed "concurrent.futures.as_completed") is called will be yielded first. The returned iterator raises a [`concurrent.futures.TimeoutError`](#concurrent.futures.TimeoutError "concurrent.futures.TimeoutError") if [`__next__()`](stdtypes#iterator.__next__ "iterator.__next__") is called and the result isn’t available after *timeout* seconds from the original call to [`as_completed()`](#concurrent.futures.as_completed "concurrent.futures.as_completed"). *timeout* can be an int or float. If *timeout* is not specified or `None`, there is no limit to the wait time. See also [**PEP 3148**](https://www.python.org/dev/peps/pep-3148) – futures - execute computations asynchronously The proposal which described this feature for inclusion in the Python standard library. Exception classes ----------------- `exception concurrent.futures.CancelledError` Raised when a future is cancelled. `exception concurrent.futures.TimeoutError` Raised when a future operation exceeds the given timeout. `exception concurrent.futures.BrokenExecutor` Derived from [`RuntimeError`](exceptions#RuntimeError "RuntimeError"), this exception class is raised when an executor is broken for some reason, and cannot be used to submit or execute new tasks. New in version 3.7. `exception concurrent.futures.InvalidStateError` Raised when an operation is performed on a future that is not allowed in the current state. New in version 3.8. 
`exception concurrent.futures.thread.BrokenThreadPool` Derived from [`BrokenExecutor`](#concurrent.futures.BrokenExecutor "concurrent.futures.BrokenExecutor"), this exception class is raised when one of the workers of a `ThreadPoolExecutor` has failed initializing. New in version 3.7. `exception concurrent.futures.process.BrokenProcessPool` Derived from [`BrokenExecutor`](#concurrent.futures.BrokenExecutor "concurrent.futures.BrokenExecutor") (formerly [`RuntimeError`](exceptions#RuntimeError "RuntimeError")), this exception class is raised when one of the workers of a `ProcessPoolExecutor` has terminated in a non-clean fashion (for example, if it was killed from the outside). New in version 3.3.
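To round out the module functions described above, a brief sketch of `wait()` with `FIRST_COMPLETED`; the sleep durations are arbitrary:

```
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

with ThreadPoolExecutor(max_workers=2) as executor:
    fast = executor.submit(time.sleep, 0.1)
    slow = executor.submit(time.sleep, 2)
    # returns as soon as the first future finishes
    done, not_done = wait([fast, slow], return_when=FIRST_COMPLETED)
    print(fast in done, slow in not_done)  # True True
```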
python email.message.Message: Representing an email message using the compat32 API

email.message.Message: Representing an email message using the compat32 API
===========================================================================

The [`Message`](#email.message.Message "email.message.Message") class is very similar to the [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") class, without the methods added by that class, and with the default behavior of certain other methods being slightly different. We also document here some methods that, while supported by the [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") class, are not recommended unless you are dealing with legacy code. The philosophy and structure of the two classes are otherwise the same.

This document describes the behavior under the default (for [`Message`](#email.message.Message "email.message.Message")) policy [`Compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32"). If you are going to use another policy, you should be using the [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") class instead.

An email message consists of *headers* and a *payload*. Headers must be [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html) style names and values, where the field name and value are separated by a colon. The colon is not part of either the field name or the field value. The payload may be a simple text message, or a binary object, or a structured sequence of sub-messages each with their own set of headers and their own payload. The latter type of payload is indicated by the message having a MIME type such as *multipart/\** or *message/rfc822*.

The conceptual model provided by a [`Message`](#email.message.Message "email.message.Message") object is that of an ordered dictionary of headers with additional methods for accessing specialized information from the headers, for accessing the payload, for generating a serialized version of the message, and for recursively walking over the object tree. Note that duplicate headers are supported but special methods must be used to access them.

The [`Message`](#email.message.Message "email.message.Message") pseudo-dictionary is indexed by the header names, which must be ASCII values. The values of the dictionary are strings that are supposed to contain only ASCII characters; there is some special handling for non-ASCII input, but it doesn’t always produce the correct results. Headers are stored and returned in case-preserving form, but field names are matched case-insensitively. There may also be a single envelope header, also known as the *Unix-From* header or the `From_` header. The *payload* is either a string or bytes, in the case of simple message objects, or a list of [`Message`](#email.message.Message "email.message.Message") objects, for MIME container documents (e.g. *multipart/\** and *message/rfc822*).

Here are the methods of the [`Message`](#email.message.Message "email.message.Message") class:

`class email.message.Message(policy=compat32)`

If *policy* is specified (it must be an instance of a [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages") class), use the rules it specifies to update and serialize the representation of the message.
If *policy* is not set, use the [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32") policy, which maintains backward compatibility with the Python 3.2 version of the email package. For more information see the [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages") documentation. Changed in version 3.3: The *policy* keyword argument was added. `as_string(unixfrom=False, maxheaderlen=0, policy=None)` Return the entire message flattened as a string. When optional *unixfrom* is true, the envelope header is included in the returned string. *unixfrom* defaults to `False`. For backward compatibility reasons, *maxheaderlen* defaults to `0`, so if you want a different value you must override it explicitly (the value specified for *max\_line\_length* in the policy will be ignored by this method). The *policy* argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified *policy* will be passed to the `Generator`. Flattening the message may trigger changes to the [`Message`](#email.message.Message "email.message.Message") if defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified). Note that this method is provided as a convenience and may not always format the message the way you want. For example, by default it does not do the mangling of lines that begin with `From` that is required by the unix mbox format. For more flexibility, instantiate a [`Generator`](email.generator#email.generator.Generator "email.generator.Generator") instance and use its [`flatten()`](email.generator#email.generator.Generator.flatten "email.generator.Generator.flatten") method directly. For example: ``` from io import StringIO from email.generator import Generator fp = StringIO() g = Generator(fp, mangle_from_=True, maxheaderlen=60) g.flatten(msg) text = fp.getvalue() ``` If the message object contains binary data that is not encoded according to RFC standards, the non-compliant data will be replaced by unicode “unknown character” code points. (See also [`as_bytes()`](#email.message.Message.as_bytes "email.message.Message.as_bytes") and [`BytesGenerator`](email.generator#email.generator.BytesGenerator "email.generator.BytesGenerator").) Changed in version 3.4: the *policy* keyword argument was added. `__str__()` Equivalent to [`as_string()`](#email.message.Message.as_string "email.message.Message.as_string"). Allows `str(msg)` to produce a string containing the formatted message. `as_bytes(unixfrom=False, policy=None)` Return the entire message flattened as a bytes object. When optional *unixfrom* is true, the envelope header is included in the returned string. *unixfrom* defaults to `False`. The *policy* argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified *policy* will be passed to the `BytesGenerator`. Flattening the message may trigger changes to the [`Message`](#email.message.Message "email.message.Message") if defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified). Note that this method is provided as a convenience and may not always format the message the way you want. 
For example, by default it does not do the mangling of lines that begin with `From` that is required by the unix mbox format. For more flexibility, instantiate a [`BytesGenerator`](email.generator#email.generator.BytesGenerator "email.generator.BytesGenerator") instance and use its [`flatten()`](email.generator#email.generator.BytesGenerator.flatten "email.generator.BytesGenerator.flatten") method directly. For example:

```
from io import BytesIO
from email.generator import BytesGenerator
fp = BytesIO()
g = BytesGenerator(fp, mangle_from_=True, maxheaderlen=60)
g.flatten(msg)
text = fp.getvalue()
```

New in version 3.4.

`__bytes__()`

Equivalent to [`as_bytes()`](#email.message.Message.as_bytes "email.message.Message.as_bytes"). Allows `bytes(msg)` to produce a bytes object containing the formatted message.

New in version 3.4.

`is_multipart()`

Return `True` if the message’s payload is a list of sub-[`Message`](#email.message.Message "email.message.Message") objects, otherwise return `False`. When [`is_multipart()`](#email.message.Message.is_multipart "email.message.Message.is_multipart") returns `False`, the payload should be a string object (which might be a CTE encoded binary payload). (Note that [`is_multipart()`](#email.message.Message.is_multipart "email.message.Message.is_multipart") returning `True` does not necessarily mean that “msg.get\_content\_maintype() == ‘multipart’” will return `True`. For example, `is_multipart` will return `True` when the [`Message`](#email.message.Message "email.message.Message") is of type `message/rfc822`.)

`set_unixfrom(unixfrom)`

Set the message’s envelope header to *unixfrom*, which should be a string.

`get_unixfrom()`

Return the message’s envelope header. Defaults to `None` if the envelope header was never set.

`attach(payload)`

Add the given *payload* to the current payload, which must be `None` or a list of [`Message`](#email.message.Message "email.message.Message") objects before the call. After the call, the payload will always be a list of [`Message`](#email.message.Message "email.message.Message") objects. If you want to set the payload to a scalar object (e.g. a string), use [`set_payload()`](#email.message.Message.set_payload "email.message.Message.set_payload") instead.

This is a legacy method. On the `EmailMessage` class its functionality is replaced by [`set_content()`](email.message#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content") and the related `make` and `add` methods.

`get_payload(i=None, decode=False)`

Return the current payload, which will be a list of [`Message`](#email.message.Message "email.message.Message") objects when [`is_multipart()`](#email.message.Message.is_multipart "email.message.Message.is_multipart") is `True`, or a string when [`is_multipart()`](#email.message.Message.is_multipart "email.message.Message.is_multipart") is `False`. If the payload is a list and you mutate the list object, you modify the message’s payload in place.

With optional argument *i*, [`get_payload()`](#email.message.Message.get_payload "email.message.Message.get_payload") will return the *i*-th element of the payload, counting from zero, if [`is_multipart()`](#email.message.Message.is_multipart "email.message.Message.is_multipart") is `True`. An [`IndexError`](exceptions#IndexError "IndexError") will be raised if *i* is less than 0 or greater than or equal to the number of items in the payload. If the payload is a string (i.e. 
[`is_multipart()`](#email.message.Message.is_multipart "email.message.Message.is_multipart") is `False`) and *i* is given, a [`TypeError`](exceptions#TypeError "TypeError") is raised.

Optional *decode* is a flag indicating whether the payload should be decoded or not, according to the *Content-Transfer-Encoding* header. When `True` and the message is not a multipart, the payload will be decoded if this header’s value is `quoted-printable` or `base64`. If some other encoding is used, or the *Content-Transfer-Encoding* header is missing, the payload is returned as-is (undecoded). In all cases the returned value is binary data. If the message is a multipart and the *decode* flag is `True`, then `None` is returned. If the payload is base64 and it was not perfectly formed (missing padding, characters outside the base64 alphabet), then an appropriate defect will be added to the message’s defect property (`InvalidBase64PaddingDefect` or `InvalidBase64CharactersDefect`, respectively).

When *decode* is `False` (the default) the body is returned as a string without decoding the *Content-Transfer-Encoding*. However, for a *Content-Transfer-Encoding* of 8bit, an attempt is made to decode the original bytes using the `charset` specified by the *Content-Type* header, using the `replace` error handler. If no `charset` is specified, or if the `charset` given is not recognized by the email package, the body is decoded using the default ASCII charset.

This is a legacy method. On the `EmailMessage` class its functionality is replaced by [`get_content()`](email.message#email.message.EmailMessage.get_content "email.message.EmailMessage.get_content") and [`iter_parts()`](email.message#email.message.EmailMessage.iter_parts "email.message.EmailMessage.iter_parts").

`set_payload(payload, charset=None)`

Set the entire message object’s payload to *payload*. It is the client’s responsibility to ensure the payload invariants. Optional *charset* sets the message’s default character set; see [`set_charset()`](#email.message.Message.set_charset "email.message.Message.set_charset") for details.

This is a legacy method. On the `EmailMessage` class its functionality is replaced by [`set_content()`](email.message#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content").

`set_charset(charset)`

Set the character set of the payload to *charset*, which can either be a [`Charset`](email.charset#email.charset.Charset "email.charset.Charset") instance (see [`email.charset`](email.charset#module-email.charset "email.charset: Character Sets")), a string naming a character set, or `None`. If it is a string, it will be converted to a [`Charset`](email.charset#email.charset.Charset "email.charset.Charset") instance. If *charset* is `None`, the `charset` parameter will be removed from the *Content-Type* header (the message will not be otherwise modified). Anything else will generate a [`TypeError`](exceptions#TypeError "TypeError").

If there is no existing *MIME-Version* header one will be added. If there is no existing *Content-Type* header, one will be added with a value of *text/plain*. Whether the *Content-Type* header already exists or not, its `charset` parameter will be set to *charset.output\_charset*. If *charset.input\_charset* and *charset.output\_charset* differ, the payload will be re-encoded to the *output\_charset*.
If there is no existing *Content-Transfer-Encoding* header, then the payload will be transfer-encoded, if needed, using the specified [`Charset`](email.charset#email.charset.Charset "email.charset.Charset"), and a header with the appropriate value will be added. If a *Content-Transfer-Encoding* header already exists, the payload is assumed to already be correctly encoded using that *Content-Transfer-Encoding* and is not modified.

This is a legacy method. On the `EmailMessage` class its functionality is replaced by the *charset* parameter of the `email.emailmessage.EmailMessage.set_content()` method.

`get_charset()`

Return the [`Charset`](email.charset#email.charset.Charset "email.charset.Charset") instance associated with the message’s payload.

This is a legacy method. On the `EmailMessage` class it always returns `None`.

The following methods implement a mapping-like interface for accessing the message’s [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html) headers. Note that there are some semantic differences between these methods and a normal mapping (i.e. dictionary) interface. For example, in a dictionary there are no duplicate keys, but here there may be duplicate message headers. Also, in dictionaries there is no guaranteed order to the keys returned by [`keys()`](#email.message.Message.keys "email.message.Message.keys"), but in a [`Message`](#email.message.Message "email.message.Message") object, headers are always returned in the order they appeared in the original message, or were added to the message later. Any header deleted and then re-added is always appended to the end of the header list.

These semantic differences are intentional and are biased toward maximal convenience.

Note that in all cases, any envelope header present in the message is not included in the mapping interface.

In a model generated from bytes, any header values that (in contravention of the RFCs) contain non-ASCII bytes will, when retrieved through this interface, be represented as [`Header`](email.header#email.header.Header "email.header.Header") objects with a charset of `unknown-8bit`.

`__len__()`

Return the total number of headers, including duplicates.

`__contains__(name)`

Return `True` if the message object has a field named *name*. Matching is done case-insensitively and *name* should not include the trailing colon. Used for the `in` operator, e.g.:

```
if 'message-id' in myMessage:
    print('Message-ID:', myMessage['message-id'])
```

`__getitem__(name)`

Return the value of the named header field. *name* should not include the colon field separator. If the header is missing, `None` is returned; a [`KeyError`](exceptions#KeyError "KeyError") is never raised.

Note that if the named field appears more than once in the message’s headers, exactly which of those field values will be returned is undefined. Use the [`get_all()`](#email.message.Message.get_all "email.message.Message.get_all") method to get the values of all the extant named headers.

`__setitem__(name, val)`

Add a header to the message with field name *name* and value *val*. The field is appended to the end of the message’s existing fields.

Note that this does *not* overwrite or delete any existing header with the same name. If you want to ensure that the new header is the only one present in the message with field name *name*, delete the field first, e.g.:

```
del msg['subject']
msg['subject'] = 'Python roolz!'
```

`__delitem__(name)`

Delete all occurrences of the field with name *name* from the message’s headers.
No exception is raised if the named field isn’t present in the headers. `keys()` Return a list of all the message’s header field names. `values()` Return a list of all the message’s field values. `items()` Return a list of 2-tuples containing all the message’s field headers and values. `get(name, failobj=None)` Return the value of the named header field. This is identical to [`__getitem__()`](#email.message.Message.__getitem__ "email.message.Message.__getitem__") except that optional *failobj* is returned if the named header is missing (defaults to `None`). Here are some additional useful methods: `get_all(name, failobj=None)` Return a list of all the values for the field named *name*. If there are no such named headers in the message, *failobj* is returned (defaults to `None`). `add_header(_name, _value, **_params)` Extended header setting. This method is similar to [`__setitem__()`](#email.message.Message.__setitem__ "email.message.Message.__setitem__") except that additional header parameters can be provided as keyword arguments. *\_name* is the header field to add and *\_value* is the *primary* value for the header. For each item in the keyword argument dictionary *\_params*, the key is taken as the parameter name, with underscores converted to dashes (since dashes are illegal in Python identifiers). Normally, the parameter will be added as `key="value"` unless the value is `None`, in which case only the key will be added. If the value contains non-ASCII characters, it can be specified as a three tuple in the format `(CHARSET, LANGUAGE, VALUE)`, where `CHARSET` is a string naming the charset to be used to encode the value, `LANGUAGE` can usually be set to `None` or the empty string (see [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html) for other possibilities), and `VALUE` is the string value containing non-ASCII code points. If a three tuple is not passed and the value contains non-ASCII characters, it is automatically encoded in [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html) format using a `CHARSET` of `utf-8` and a `LANGUAGE` of `None`. Here’s an example: ``` msg.add_header('Content-Disposition', 'attachment', filename='bud.gif') ``` This will add a header that looks like ``` Content-Disposition: attachment; filename="bud.gif" ``` An example with non-ASCII characters: ``` msg.add_header('Content-Disposition', 'attachment', filename=('iso-8859-1', '', 'Fußballer.ppt')) ``` Which produces ``` Content-Disposition: attachment; filename*="iso-8859-1''Fu%DFballer.ppt" ``` `replace_header(_name, _value)` Replace a header. Replace the first header found in the message that matches *\_name*, retaining header order and field name case. If no matching header was found, a [`KeyError`](exceptions#KeyError "KeyError") is raised. `get_content_type()` Return the message’s content type. The returned string is coerced to lower case of the form *maintype/subtype*. If there was no *Content-Type* header in the message the default type as given by [`get_default_type()`](#email.message.Message.get_default_type "email.message.Message.get_default_type") will be returned. Since according to [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html), messages always have a default type, [`get_content_type()`](#email.message.Message.get_content_type "email.message.Message.get_content_type") will always return a value. 
[**RFC 2045**](https://tools.ietf.org/html/rfc2045.html) defines a message’s default type to be *text/plain* unless it appears inside a *multipart/digest* container, in which case it would be *message/rfc822*. If the *Content-Type* header has an invalid type specification, [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html) mandates that the default type be *text/plain*.

`get_content_maintype()`

Return the message’s main content type. This is the *maintype* part of the string returned by [`get_content_type()`](#email.message.Message.get_content_type "email.message.Message.get_content_type").

`get_content_subtype()`

Return the message’s sub-content type. This is the *subtype* part of the string returned by [`get_content_type()`](#email.message.Message.get_content_type "email.message.Message.get_content_type").

`get_default_type()`

Return the default content type. Most messages have a default content type of *text/plain*, except for messages that are subparts of *multipart/digest* containers. Such subparts have a default content type of *message/rfc822*.

`set_default_type(ctype)`

Set the default content type. *ctype* should either be *text/plain* or *message/rfc822*, although this is not enforced. The default content type is not stored in the *Content-Type* header.

`get_params(failobj=None, header='content-type', unquote=True)`

Return the message’s *Content-Type* parameters, as a list. The elements of the returned list are 2-tuples of key/value pairs, as split on the `'='` sign. The left hand side of the `'='` is the key, while the right hand side is the value. If there is no `'='` sign in the parameter the value is the empty string, otherwise the value is as described in [`get_param()`](#email.message.Message.get_param "email.message.Message.get_param") and is unquoted if optional *unquote* is `True` (the default).

Optional *failobj* is the object to return if there is no *Content-Type* header. Optional *header* is the header to search instead of *Content-Type*.

This is a legacy method. On the `EmailMessage` class its functionality is replaced by the *params* property of the individual header objects returned by the header access methods.

`get_param(param, failobj=None, header='content-type', unquote=True)`

Return the value of the *Content-Type* header’s parameter *param* as a string. If the message has no *Content-Type* header or if there is no such parameter, then *failobj* is returned (defaults to `None`).

Optional *header*, if given, specifies the message header to use instead of *Content-Type*.

Parameter keys are always compared case insensitively. The return value can either be a string, or a 3-tuple if the parameter was [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html) encoded. When it’s a 3-tuple, the elements of the value are of the form `(CHARSET, LANGUAGE, VALUE)`. Note that both `CHARSET` and `LANGUAGE` can be `None`, in which case you should consider `VALUE` to be encoded in the `us-ascii` charset. You can usually ignore `LANGUAGE`.

If your application doesn’t care whether the parameter was encoded as in [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html), you can collapse the parameter value by calling [`email.utils.collapse_rfc2231_value()`](email.utils#email.utils.collapse_rfc2231_value "email.utils.collapse_rfc2231_value"), passing in the return value from [`get_param()`](#email.message.Message.get_param "email.message.Message.get_param").
This will return a suitably decoded Unicode string when the value is a tuple, or the original string unquoted if it isn’t. For example:

```
rawparam = msg.get_param('foo')
param = email.utils.collapse_rfc2231_value(rawparam)
```

In any case, the parameter value (either the returned string, or the `VALUE` item in the 3-tuple) is always unquoted, unless *unquote* is set to `False`. This is a legacy method. On the `EmailMessage` class its functionality is replaced by the *params* property of the individual header objects returned by the header access methods. `set_param(param, value, header='Content-Type', requote=True, charset=None, language='', replace=False)` Set a parameter in the *Content-Type* header. If the parameter already exists in the header, its value will be replaced with *value*. If the *Content-Type* header has not yet been defined for this message, it will be set to *text/plain* and the new parameter value will be appended as per [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html). Optional *header* specifies an alternative header to *Content-Type*, and all parameters will be quoted as necessary unless optional *requote* is `False` (the default is `True`). If optional *charset* is specified, the parameter will be encoded according to [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html). Optional *language* specifies the RFC 2231 language, defaulting to the empty string. Both *charset* and *language* should be strings. If *replace* is `False` (the default) the header is moved to the end of the list of headers. If *replace* is `True`, the header will be updated in place. Changed in version 3.4: `replace` keyword was added. `del_param(param, header='content-type', requote=True)` Remove the given parameter completely from the *Content-Type* header. The header will be re-written in place without the parameter or its value. All values will be quoted as necessary unless *requote* is `False` (the default is `True`). Optional *header* specifies an alternative to *Content-Type*. `set_type(type, header='Content-Type', requote=True)` Set the main type and subtype for the *Content-Type* header. *type* must be a string in the form *maintype/subtype*, otherwise a [`ValueError`](exceptions#ValueError "ValueError") is raised. This method replaces the *Content-Type* header, keeping all the parameters in place. If *requote* is `False`, this leaves the existing header’s quoting as is, otherwise the parameters will be quoted (the default). An alternative header can be specified in the *header* argument. When the *Content-Type* header is set, a *MIME-Version* header is also added. This is a legacy method. On the `EmailMessage` class its functionality is replaced by the `make_` and `add_` methods. `get_filename(failobj=None)` Return the value of the `filename` parameter of the *Content-Disposition* header of the message. If the header does not have a `filename` parameter, this method falls back to looking for the `name` parameter on the *Content-Type* header. If neither is found, or the header is missing, then *failobj* is returned. The returned string will always be unquoted as per [`email.utils.unquote()`](email.utils#email.utils.unquote "email.utils.unquote"). `get_boundary(failobj=None)` Return the value of the `boundary` parameter of the *Content-Type* header of the message, or *failobj* if either the header is missing, or has no `boundary` parameter. The returned string will always be unquoted as per [`email.utils.unquote()`](email.utils#email.utils.unquote "email.utils.unquote").
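As a brief, runnable sketch of the accessors above (the header values here are purely illustrative):

```
from email.message import Message

msg = Message()
msg.add_header('Content-Type', 'multipart/mixed', boundary='BOUNDARY')
msg.add_header('Content-Disposition', 'attachment', filename='bud.gif')

print(msg.get_content_type())     # multipart/mixed
print(msg.get_param('boundary'))  # BOUNDARY (unquoted by default)
print(msg.get_boundary())         # BOUNDARY
print(msg.get_filename())         # bud.gif
```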
`set_boundary(boundary)` Set the `boundary` parameter of the *Content-Type* header to *boundary*. [`set_boundary()`](#email.message.Message.set_boundary "email.message.Message.set_boundary") will always quote *boundary* if necessary. A [`HeaderParseError`](email.errors#email.errors.HeaderParseError "email.errors.HeaderParseError") is raised if the message object has no *Content-Type* header. Note that using this method is subtly different than deleting the old *Content-Type* header and adding a new one with the new boundary via [`add_header()`](#email.message.Message.add_header "email.message.Message.add_header"), because [`set_boundary()`](#email.message.Message.set_boundary "email.message.Message.set_boundary") preserves the order of the *Content-Type* header in the list of headers. However, it does *not* preserve any continuation lines which may have been present in the original *Content-Type* header. `get_content_charset(failobj=None)` Return the `charset` parameter of the *Content-Type* header, coerced to lower case. If there is no *Content-Type* header, or if that header has no `charset` parameter, *failobj* is returned. Note that this method differs from [`get_charset()`](#email.message.Message.get_charset "email.message.Message.get_charset") which returns the [`Charset`](email.charset#email.charset.Charset "email.charset.Charset") instance for the default encoding of the message body. `get_charsets(failobj=None)` Return a list containing the character set names in the message. If the message is a *multipart*, then the list will contain one element for each subpart in the payload, otherwise, it will be a list of length 1. Each item in the list will be a string which is the value of the `charset` parameter in the *Content-Type* header for the represented subpart. However, if the subpart has no *Content-Type* header, no `charset` parameter, or is not of the *text* main MIME type, then that item in the returned list will be *failobj*. `get_content_disposition()` Return the lowercased value (without parameters) of the message’s *Content-Disposition* header if it has one, or `None`. The possible values for this method are *inline*, *attachment* or `None` if the message follows [**RFC 2183**](https://tools.ietf.org/html/rfc2183.html). New in version 3.5. `walk()` The [`walk()`](#email.message.Message.walk "email.message.Message.walk") method is an all-purpose generator which can be used to iterate over all the parts and subparts of a message object tree, in depth-first traversal order. You will typically use [`walk()`](#email.message.Message.walk "email.message.Message.walk") as the iterator in a `for` loop; each iteration returns the next subpart. Here’s an example that prints the MIME type of every part of a multipart message structure:

```
>>> for part in msg.walk():
...     print(part.get_content_type())
multipart/report
text/plain
message/delivery-status
text/plain
text/plain
message/rfc822
text/plain
```

`walk` iterates over the subparts of any part where [`is_multipart()`](#email.message.Message.is_multipart "email.message.Message.is_multipart") returns `True`, even though `msg.get_content_maintype() == 'multipart'` may return `False`. We can see this in our example by making use of the `_structure` debug helper function:

```
>>> for part in msg.walk():
...     print(part.get_content_maintype() == 'multipart',
...           part.is_multipart())
True True
False False
False True
False False
False False
False True
False False
>>> _structure(msg)
multipart/report
    text/plain
    message/delivery-status
        text/plain
        text/plain
    message/rfc822
        text/plain
```

Here the `message` parts are not `multiparts`, but they do contain subparts. `is_multipart()` returns `True` and `walk` descends into the subparts. [`Message`](#email.message.Message "email.message.Message") objects can also optionally contain two instance attributes, which can be used when generating the plain text of a MIME message. `preamble` The format of a MIME document allows for some text between the blank line following the headers, and the first multipart boundary string. Normally, this text is never visible in a MIME-aware mail reader because it falls outside the standard MIME armor. However, when viewing the raw text of the message, or when viewing the message in a non-MIME aware reader, this text can become visible. The *preamble* attribute contains this leading extra-armor text for MIME documents. When the [`Parser`](email.parser#email.parser.Parser "email.parser.Parser") discovers some text after the headers but before the first boundary string, it assigns this text to the message’s *preamble* attribute. When the [`Generator`](email.generator#email.generator.Generator "email.generator.Generator") is writing out the plain text representation of a MIME message, and it finds the message has a *preamble* attribute, it will write this text in the area between the headers and the first boundary. See [`email.parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") and [`email.generator`](email.generator#module-email.generator "email.generator: Generate flat text email messages from a message structure.") for details. Note that if the message object has no preamble, the *preamble* attribute will be `None`. `epilogue` The *epilogue* attribute acts the same way as the *preamble* attribute, except that it contains text that appears between the last boundary and the end of the message. You do not need to set the epilogue to the empty string in order for the [`Generator`](email.generator#email.generator.Generator "email.generator.Generator") to print a newline at the end of the file. `defects` The *defects* attribute contains a list of all the problems found when parsing this message. See [`email.errors`](email.errors#module-email.errors "email.errors: The exception classes used by the email package.") for a detailed description of the possible parsing defects.
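A minimal sketch of how *preamble* and *epilogue* appear in generated output (the text values here are illustrative):

```
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg.preamble = 'This is a MIME message; non-MIME readers show this text.'
msg.epilogue = 'Trailing text after the final boundary.'
msg.attach(MIMEText('Hello, world'))

# The preamble prints before the first boundary, the epilogue after the last.
print(msg.as_string())
```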
python tkinter.font — Tkinter font wrapper

tkinter.font — Tkinter font wrapper
===================================

**Source code:** [Lib/tkinter/font.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/font.py)

The [`tkinter.font`](#module-tkinter.font "tkinter.font: Tkinter font-wrapping class (Tk)") module provides the [`Font`](#tkinter.font.Font "tkinter.font.Font") class for creating and using named fonts. The different font weights and slants are: `tkinter.font.NORMAL` `tkinter.font.BOLD` `tkinter.font.ITALIC` `tkinter.font.ROMAN`

`class tkinter.font.Font(root=None, font=None, name=None, exists=False, **options)` The [`Font`](#tkinter.font.Font "tkinter.font.Font") class represents a named font. *Font* instances are given unique names and can be specified by their family, size, and style configuration. Named fonts are Tk’s method of creating and identifying fonts as a single object, rather than specifying a font by its attributes with each occurrence. Arguments:

*font* - font specifier tuple (family, size, options)
*name* - unique font name
*exists* - self points to existing named font if true

Additional keyword options (ignored if *font* is specified):

*family* - font family (e.g. Courier, Times)
*size* - font size in points
*weight* - NORMAL or BOLD
*slant* - ROMAN or ITALIC
*underline* - 0 or 1
*overstrike* - 0 or 1

`actual(option=None, displayof=None)` Return the attributes of the font. `cget(option)` Retrieve an attribute of the font. `config(**options)` Modify attributes of the font. `copy()` Return a new instance of the current font. `measure(text, displayof=None)` Return the amount of space the text would occupy on the specified display when formatted in the current font. If no display is specified then the main application window is assumed. `metrics(*options, **kw)` Return font-specific data. Options include:

*ascent* - distance between baseline and highest point that a character of the font can occupy
*descent* - distance between baseline and lowest point that a character of the font can occupy
*linespace* - minimum vertical separation necessary between any two characters of the font that ensures no vertical overlap between lines
*fixed* - 1 if font is fixed-width else 0

`tkinter.font.families(root=None, displayof=None)` Return the different font families. `tkinter.font.names(root=None)` Return the names of defined fonts. `tkinter.font.nametofont(name)` Return a [`Font`](#tkinter.font.Font "tkinter.font.Font") representation of a tk named font.

python array — Efficient arrays of numeric values

array — Efficient arrays of numeric values
==========================================

This module defines an object type which can compactly represent an array of basic values: characters, integers, floating point numbers. Arrays are sequence types and behave very much like lists, except that the type of objects stored in them is constrained. The type is specified at object creation time by using a *type code*, which is a single character. The following type codes are defined:

| Type code | C Type | Python Type | Minimum size in bytes | Notes |
| --- | --- | --- | --- | --- |
| `'b'` | signed char | int | 1 | |
| `'B'` | unsigned char | int | 1 | |
| `'u'` | wchar\_t | Unicode character | 2 | (1) |
| `'h'` | signed short | int | 2 | |
| `'H'` | unsigned short | int | 2 | |
| `'i'` | signed int | int | 2 | |
| `'I'` | unsigned int | int | 2 | |
| `'l'` | signed long | int | 4 | |
| `'L'` | unsigned long | int | 4 | |
| `'q'` | signed long long | int | 8 | |
| `'Q'` | unsigned long long | int | 8 | |
| `'f'` | float | float | 4 | |
| `'d'` | double | float | 8 | |

Notes:

1. It can be 16 bits or 32 bits depending on the platform. Changed in version 3.9: `array('u')` now uses `wchar_t` as C type instead of deprecated `Py_UNICODE`.
This change doesn’t affect its behavior because `Py_UNICODE` is an alias of `wchar_t` since Python 3.3. Deprecated since version 3.3, will be removed in version 4.0. The actual representation of values is determined by the machine architecture (strictly speaking, by the C implementation). The actual size can be accessed through the `itemsize` attribute. The module defines the following type: `class array.array(typecode[, initializer])` A new array whose items are restricted by *typecode*, and initialized from the optional *initializer* value, which must be a list, a [bytes-like object](../glossary#term-bytes-like-object), or iterable over elements of the appropriate type. If given a list or string, the initializer is passed to the new array’s [`fromlist()`](#array.array.fromlist "array.array.fromlist"), [`frombytes()`](#array.array.frombytes "array.array.frombytes"), or [`fromunicode()`](#array.array.fromunicode "array.array.fromunicode") method (see below) to add initial items to the array. Otherwise, the iterable initializer is passed to the [`extend()`](#array.array.extend "array.array.extend") method. Raises an [auditing event](sys#auditing) `array.__new__` with arguments `typecode`, `initializer`. `array.typecodes` A string with all available type codes. Array objects support the ordinary sequence operations of indexing, slicing, concatenation, and multiplication. When using slice assignment, the assigned value must be an array object with the same type code; in all other cases, [`TypeError`](exceptions#TypeError "TypeError") is raised. Array objects also implement the buffer interface, and may be used wherever [bytes-like objects](../glossary#term-bytes-like-object) are supported. The following data items and methods are also supported: `array.typecode` The typecode character used to create the array. `array.itemsize` The length in bytes of one array item in the internal representation. `array.append(x)` Append a new item with value *x* to the end of the array. `array.buffer_info()` Return a tuple `(address, length)` giving the current memory address and the length in elements of the buffer used to hold array’s contents. The size of the memory buffer in bytes can be computed as `array.buffer_info()[1] * array.itemsize`. This is occasionally useful when working with low-level (and inherently unsafe) I/O interfaces that require memory addresses, such as certain `ioctl()` operations. The returned numbers are valid as long as the array exists and no length-changing operations are applied to it. Note When using array objects from code written in C or C++ (the only way to effectively make use of this information), it makes more sense to use the buffer interface supported by array objects. This method is maintained for backward compatibility and should be avoided in new code. The buffer interface is documented in [Buffer Protocol](../c-api/buffer#bufferobjects). `array.byteswap()` “Byteswap” all items of the array. This is only supported for values which are 1, 2, 4, or 8 bytes in size; for other types of values, [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. It is useful when reading data from a file written on a machine with a different byte order. `array.count(x)` Return the number of occurrences of *x* in the array. `array.extend(iterable)` Append items from *iterable* to the end of the array. If *iterable* is another array, it must have *exactly* the same type code; if not, [`TypeError`](exceptions#TypeError "TypeError") will be raised.
If *iterable* is not an array, it must be iterable and its elements must be the right type to be appended to the array. `array.frombytes(s)` Appends items from the string, interpreting the string as an array of machine values (as if it had been read from a file using the [`fromfile()`](#array.array.fromfile "array.array.fromfile") method). New in version 3.2: `fromstring()` is renamed to [`frombytes()`](#array.array.frombytes "array.array.frombytes") for clarity. `array.fromfile(f, n)` Read *n* items (as machine values) from the [file object](../glossary#term-file-object) *f* and append them to the end of the array. If less than *n* items are available, [`EOFError`](exceptions#EOFError "EOFError") is raised, but the items that were available are still inserted into the array. `array.fromlist(list)` Append items from the list. This is equivalent to `for x in list: a.append(x)` except that if there is a type error, the array is unchanged. `array.fromunicode(s)` Extends this array with data from the given unicode string. The array must be a type `'u'` array; otherwise a [`ValueError`](exceptions#ValueError "ValueError") is raised. Use `array.frombytes(unicodestring.encode(enc))` to append Unicode data to an array of some other type. `array.index(x)` Return the smallest *i* such that *i* is the index of the first occurrence of *x* in the array. `array.insert(i, x)` Insert a new item with value *x* in the array before position *i*. Negative values are treated as being relative to the end of the array. `array.pop([i])` Removes the item with the index *i* from the array and returns it. The optional argument defaults to `-1`, so that by default the last item is removed and returned. `array.remove(x)` Remove the first occurrence of *x* from the array. `array.reverse()` Reverse the order of the items in the array. `array.tobytes()` Convert the array to an array of machine values and return the bytes representation (the same sequence of bytes that would be written to a file by the [`tofile()`](#array.array.tofile "array.array.tofile") method.) New in version 3.2: `tostring()` is renamed to [`tobytes()`](#array.array.tobytes "array.array.tobytes") for clarity. `array.tofile(f)` Write all items (as machine values) to the [file object](../glossary#term-file-object) *f*. `array.tolist()` Convert the array to an ordinary list with the same items. `array.tounicode()` Convert the array to a unicode string. The array must be a type `'u'` array; otherwise a [`ValueError`](exceptions#ValueError "ValueError") is raised. Use `array.tobytes().decode(enc)` to obtain a unicode string from an array of some other type. When an array object is printed or converted to a string, it is represented as `array(typecode, initializer)`. The *initializer* is omitted if the array is empty, otherwise it is a string if the *typecode* is `'u'`, otherwise it is a list of numbers. The string is guaranteed to be able to be converted back to an array with the same type and value using [`eval()`](functions#eval "eval"), so long as the [`array`](#array.array "array.array") class has been imported using `from array import array`. Examples: ``` array('l') array('u', 'hello \u2641') array('l', [1, 2, 3, 4, 5]) array('d', [1.0, 2.0, 3.14]) ``` See also `Module` [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") Packing and unpacking of heterogeneous binary data. `Module` [`xdrlib`](xdrlib#module-xdrlib "xdrlib: Encoders and decoders for the External Data Representation (XDR). 
(deprecated)") Packing and unpacking of External Data Representation (XDR) data as used in some remote procedure call systems. [NumPy](https://numpy.org/) The NumPy package defines another array type.

python sched — Event scheduler

sched — Event scheduler
=======================

**Source code:** [Lib/sched.py](https://github.com/python/cpython/tree/3.9/Lib/sched.py)

The [`sched`](#module-sched "sched: General purpose event scheduler.") module defines a class which implements a general purpose event scheduler: `class sched.scheduler(timefunc=time.monotonic, delayfunc=time.sleep)` The [`scheduler`](#sched.scheduler "sched.scheduler") class defines a generic interface to scheduling events. It needs two functions to actually deal with the “outside world” — *timefunc* should be callable without arguments, and return a number (the “time”, in any units whatsoever). The *delayfunc* function should be callable with one argument, compatible with the output of *timefunc*, and should delay that many time units. *delayfunc* will also be called with the argument `0` after each event is run to allow other threads an opportunity to run in multi-threaded applications. Changed in version 3.3: *timefunc* and *delayfunc* parameters are optional. Changed in version 3.3: [`scheduler`](#sched.scheduler "sched.scheduler") class can be safely used in multi-threaded environments. Example:

```
>>> import sched, time
>>> s = sched.scheduler(time.time, time.sleep)
>>> def print_time(a='default'):
...     print("From print_time", time.time(), a)
...
>>> def print_some_times():
...     print(time.time())
...     s.enter(10, 1, print_time)
...     s.enter(5, 2, print_time, argument=('positional',))
...     s.enter(5, 1, print_time, kwargs={'a': 'keyword'})
...     s.run()
...     print(time.time())
...
>>> print_some_times()
930343690.257
From print_time 930343695.274 positional
From print_time 930343695.275 keyword
From print_time 930343700.273 default
930343700.276
```

Scheduler Objects
-----------------

[`scheduler`](#sched.scheduler "sched.scheduler") instances have the following methods and attributes: `scheduler.enterabs(time, priority, action, argument=(), kwargs={})` Schedule a new event. The *time* argument should be a numeric type compatible with the return value of the *timefunc* function passed to the constructor. Events scheduled for the same *time* will be executed in the order of their *priority*. A lower number represents a higher priority. Executing the event means executing `action(*argument, **kwargs)`. *argument* is a sequence holding the positional arguments for *action*. *kwargs* is a dictionary holding the keyword arguments for *action*. Return value is an event which may be used for later cancellation of the event (see [`cancel()`](#sched.scheduler.cancel "sched.scheduler.cancel")). Changed in version 3.3: *argument* parameter is optional. Changed in version 3.3: *kwargs* parameter was added. `scheduler.enter(delay, priority, action, argument=(), kwargs={})` Schedule an event for *delay* more time units. Other than the relative time, the other arguments, the effect and the return value are the same as those for [`enterabs()`](#sched.scheduler.enterabs "sched.scheduler.enterabs"). Changed in version 3.3: *argument* parameter is optional. Changed in version 3.3: *kwargs* parameter was added. `scheduler.cancel(event)` Remove the event from the queue. If *event* is not an event currently in the queue, this method will raise a [`ValueError`](exceptions#ValueError "ValueError").
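As a brief illustration of `cancel()`, here is a minimal sketch that schedules an event and removes it before it fires (the callback is illustrative):

```
import sched
import time

s = sched.scheduler(time.monotonic, time.sleep)
event = s.enter(5, 1, print, argument=('never printed',))
s.cancel(event)  # remove the event before it fires
s.run()          # returns immediately; the queue is now empty
```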
`scheduler.empty()` Return `True` if the event queue is empty. `scheduler.run(blocking=True)` Run all scheduled events. This method will wait (using the `delayfunc()` function passed to the constructor) for the next event, then execute it and so on until there are no more scheduled events. If *blocking* is false, this method executes the scheduled events due to expire soonest (if any) and then returns the deadline of the next scheduled call in the scheduler (if any). Either *action* or *delayfunc* can raise an exception. In either case, the scheduler will maintain a consistent state and propagate the exception. If an exception is raised by *action*, the event will not be attempted in future calls to [`run()`](#sched.scheduler.run "sched.scheduler.run"). If a sequence of events takes longer to run than the time available before the next event, the scheduler will simply fall behind. No events will be dropped; the calling code is responsible for canceling events which are no longer pertinent. Changed in version 3.3: *blocking* parameter was added. `scheduler.queue` Read-only attribute returning a list of upcoming events in the order they will be run. Each event is shown as a [named tuple](../glossary#term-named-tuple) with the following fields: time, priority, action, argument, kwargs.

python atexit — Exit handlers

atexit — Exit handlers
======================

The [`atexit`](#module-atexit "atexit: Register and execute cleanup functions.") module defines functions to register and unregister cleanup functions. Functions thus registered are automatically executed upon normal interpreter termination. [`atexit`](#module-atexit "atexit: Register and execute cleanup functions.") runs these functions in the *reverse* order in which they were registered; if you register `A`, `B`, and `C`, at interpreter termination time they will be run in the order `C`, `B`, `A`. **Note:** The functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when [`os._exit()`](os#os._exit "os._exit") is called. Changed in version 3.7: When used with C-API subinterpreters, registered functions are local to the interpreter they were registered in. `atexit.register(func, *args, **kwargs)` Register *func* as a function to be executed at termination. Any optional arguments that are to be passed to *func* must be passed as arguments to [`register()`](#atexit.register "atexit.register"). It is possible to register the same function and arguments more than once. At normal program termination (for instance, if [`sys.exit()`](sys#sys.exit "sys.exit") is called or the main module’s execution completes), all functions registered are called in last in, first out order. The assumption is that lower level modules will normally be imported before higher level modules and thus must be cleaned up later. If an exception is raised during execution of the exit handlers, a traceback is printed (unless [`SystemExit`](exceptions#SystemExit "SystemExit") is raised) and the exception information is saved. After all exit handlers have had a chance to run, the last exception to be raised is re-raised. This function returns *func*, which makes it possible to use it as a decorator. `atexit.unregister(func)` Remove *func* from the list of functions to be run at interpreter shutdown. [`unregister()`](#atexit.unregister "atexit.unregister") silently does nothing if *func* was not previously registered.
If *func* has been registered more than once, every occurrence of that function in the [`atexit`](#module-atexit "atexit: Register and execute cleanup functions.") call stack will be removed. Equality comparisons (`==`) are used internally during unregistration, so function references do not need to have matching identities. See also `Module` [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") Useful example of [`atexit`](#module-atexit "atexit: Register and execute cleanup functions.") to read and write [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") history files.

atexit Example
--------------

The following simple example demonstrates how a module can initialize a counter from a file when it is imported and save the counter’s updated value automatically when the program terminates without relying on the application making an explicit call into this module at termination.

```
try:
    with open('counterfile') as infile:
        _count = int(infile.read())
except FileNotFoundError:
    _count = 0

def incrcounter(n):
    global _count
    _count = _count + n

def savecounter():
    with open('counterfile', 'w') as outfile:
        outfile.write('%d' % _count)

import atexit

atexit.register(savecounter)
```

Positional and keyword arguments may also be passed to [`register()`](#atexit.register "atexit.register") to be passed along to the registered function when it is called:

```
def goodbye(name, adjective):
    print('Goodbye %s, it was %s to meet you.' % (name, adjective))

import atexit

atexit.register(goodbye, 'Donny', 'nice')

# or:
atexit.register(goodbye, adjective='nice', name='Donny')
```

Usage as a [decorator](../glossary#term-decorator):

```
import atexit

@atexit.register
def goodbye():
    print('You are now leaving the Python sector.')
```

This only works with functions that can be called without arguments.
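Because unregistration matches by equality, removing a handler is simply a matter of passing an equal reference back in. A minimal sketch (the handler is illustrative):

```
import atexit

def savestate():
    print('saving state')

atexit.register(savestate)
# Later, if the cleanup is no longer wanted:
atexit.unregister(savestate)  # savestate() will not run at exit
```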
python xml.sax.handler — Base classes for SAX handlers

xml.sax.handler — Base classes for SAX handlers
===============================================

**Source code:** [Lib/xml/sax/handler.py](https://github.com/python/cpython/tree/3.9/Lib/xml/sax/handler.py)

The SAX API defines four kinds of handlers: content handlers, DTD handlers, error handlers, and entity resolvers. Applications normally only need to implement those interfaces whose events they are interested in; they can implement the interfaces in a single object or in multiple objects. Handler implementations should inherit from the base classes provided in the module [`xml.sax.handler`](#module-xml.sax.handler "xml.sax.handler: Base classes for SAX event handlers."), so that all methods get default implementations. `class xml.sax.handler.ContentHandler` This is the main callback interface in SAX, and the one most important to applications. The order of events in this interface mirrors the order of the information in the document. `class xml.sax.handler.DTDHandler` Handle DTD events. This interface specifies only those DTD events required for basic parsing (unparsed entities and attributes). `class xml.sax.handler.EntityResolver` Basic interface for resolving entities. If you create an object implementing this interface and register it with your Parser, the parser will call the method in your object to resolve all external entities. `class xml.sax.handler.ErrorHandler` Interface used by the parser to present error and warning messages to the application. The methods of this object control whether errors are immediately converted to exceptions or are handled in some other way. In addition to these classes, [`xml.sax.handler`](#module-xml.sax.handler "xml.sax.handler: Base classes for SAX event handlers.") provides symbolic constants for the feature and property names. `xml.sax.handler.feature_namespaces` `xml.sax.handler.feature_namespace_prefixes` `xml.sax.handler.feature_string_interning` `xml.sax.handler.feature_validation` `xml.sax.handler.feature_external_ges` `xml.sax.handler.feature_external_pes` `xml.sax.handler.all_features` List of all features. `xml.sax.handler.property_lexical_handler` `xml.sax.handler.property_declaration_handler` `xml.sax.handler.property_dom_node` `xml.sax.handler.property_xml_string` `xml.sax.handler.all_properties` List of all known property names.

ContentHandler Objects
----------------------

Users are expected to subclass [`ContentHandler`](#xml.sax.handler.ContentHandler "xml.sax.handler.ContentHandler") to support their application. The following methods are called by the parser on the appropriate events in the input document: `ContentHandler.setDocumentLocator(locator)` Called by the parser to give the application a locator for locating the origin of document events. SAX parsers are strongly encouraged (though not absolutely required) to supply a locator: if they do so, they must supply the locator to the application by invoking this method before invoking any of the other methods in the DocumentHandler interface. The locator allows the application to determine the end position of any document-related event, even if the parser is not reporting an error. Typically, the application will use this information for reporting its own errors (such as character content that does not match an application’s business rules). The information returned by the locator is probably not sufficient for use with a search engine.
Note that the locator will return correct information only during the invocation of the events in this interface. The application should not attempt to use it at any other time. `ContentHandler.startDocument()` Receive notification of the beginning of a document. The SAX parser will invoke this method only once, before any other methods in this interface or in DTDHandler (except for [`setDocumentLocator()`](#xml.sax.handler.ContentHandler.setDocumentLocator "xml.sax.handler.ContentHandler.setDocumentLocator")). `ContentHandler.endDocument()` Receive notification of the end of a document. The SAX parser will invoke this method only once, and it will be the last method invoked during the parse. The parser shall not invoke this method until it has either abandoned parsing (because of an unrecoverable error) or reached the end of input. `ContentHandler.startPrefixMapping(prefix, uri)` Begin the scope of a prefix-URI Namespace mapping. The information from this event is not necessary for normal Namespace processing: the SAX XML reader will automatically replace prefixes for element and attribute names when the `feature_namespaces` feature is enabled (the default). There are cases, however, when applications need to use prefixes in character data or in attribute values, where they cannot safely be expanded automatically; the [`startPrefixMapping()`](#xml.sax.handler.ContentHandler.startPrefixMapping "xml.sax.handler.ContentHandler.startPrefixMapping") and [`endPrefixMapping()`](#xml.sax.handler.ContentHandler.endPrefixMapping "xml.sax.handler.ContentHandler.endPrefixMapping") events supply the information to the application to expand prefixes in those contexts itself, if necessary. Note that [`startPrefixMapping()`](#xml.sax.handler.ContentHandler.startPrefixMapping "xml.sax.handler.ContentHandler.startPrefixMapping") and [`endPrefixMapping()`](#xml.sax.handler.ContentHandler.endPrefixMapping "xml.sax.handler.ContentHandler.endPrefixMapping") events are not guaranteed to be properly nested relative to each other: all [`startPrefixMapping()`](#xml.sax.handler.ContentHandler.startPrefixMapping "xml.sax.handler.ContentHandler.startPrefixMapping") events will occur before the corresponding [`startElement()`](#xml.sax.handler.ContentHandler.startElement "xml.sax.handler.ContentHandler.startElement") event, and all [`endPrefixMapping()`](#xml.sax.handler.ContentHandler.endPrefixMapping "xml.sax.handler.ContentHandler.endPrefixMapping") events will occur after the corresponding [`endElement()`](#xml.sax.handler.ContentHandler.endElement "xml.sax.handler.ContentHandler.endElement") event, but their order is not guaranteed. `ContentHandler.endPrefixMapping(prefix)` End the scope of a prefix-URI mapping. See [`startPrefixMapping()`](#xml.sax.handler.ContentHandler.startPrefixMapping "xml.sax.handler.ContentHandler.startPrefixMapping") for details. This event will always occur after the corresponding [`endElement()`](#xml.sax.handler.ContentHandler.endElement "xml.sax.handler.ContentHandler.endElement") event, but the order of [`endPrefixMapping()`](#xml.sax.handler.ContentHandler.endPrefixMapping "xml.sax.handler.ContentHandler.endPrefixMapping") events is not otherwise guaranteed. `ContentHandler.startElement(name, attrs)` Signals the start of an element in non-namespace mode.
The *name* parameter contains the raw XML 1.0 name of the element type as a string and the *attrs* parameter holds an object of the `Attributes` interface (see [The Attributes Interface](xml.sax.reader#attributes-objects)) containing the attributes of the element. The object passed as *attrs* may be re-used by the parser; holding on to a reference to it is not a reliable way to keep a copy of the attributes. To keep a copy of the attributes, use the [`copy()`](copy#module-copy "copy: Shallow and deep copy operations.") method of the *attrs* object. `ContentHandler.endElement(name)` Signals the end of an element in non-namespace mode. The *name* parameter contains the name of the element type, just as with the [`startElement()`](#xml.sax.handler.ContentHandler.startElement "xml.sax.handler.ContentHandler.startElement") event. `ContentHandler.startElementNS(name, qname, attrs)` Signals the start of an element in namespace mode. The *name* parameter contains the name of the element type as a `(uri, localname)` tuple, the *qname* parameter contains the raw XML 1.0 name used in the source document, and the *attrs* parameter holds an instance of the `AttributesNS` interface (see [The AttributesNS Interface](xml.sax.reader#attributes-ns-objects)) containing the attributes of the element. If no namespace is associated with the element, the *uri* component of *name* will be `None`. The object passed as *attrs* may be re-used by the parser; holding on to a reference to it is not a reliable way to keep a copy of the attributes. To keep a copy of the attributes, use the [`copy()`](copy#module-copy "copy: Shallow and deep copy operations.") method of the *attrs* object. Parsers may set the *qname* parameter to `None`, unless the `feature_namespace_prefixes` feature is activated. `ContentHandler.endElementNS(name, qname)` Signals the end of an element in namespace mode. The *name* parameter contains the name of the element type, just as with the [`startElementNS()`](#xml.sax.handler.ContentHandler.startElementNS "xml.sax.handler.ContentHandler.startElementNS") method, likewise the *qname* parameter. `ContentHandler.characters(content)` Receive notification of character data. The Parser will call this method to report each chunk of character data. SAX parsers may return all contiguous character data in a single chunk, or they may split it into several chunks; however, all of the characters in any single event must come from the same external entity so that the Locator provides useful information. *content* may be a string or bytes instance; the `expat` reader module always produces strings. Note The earlier SAX 1 interface provided by the Python XML Special Interest Group used a more Java-like interface for this method. Since most parsers used from Python did not take advantage of the older interface, the simpler signature was chosen to replace it. To convert old code to the new interface, use *content* instead of slicing content with the old *offset* and *length* parameters. `ContentHandler.ignorableWhitespace(whitespace)` Receive notification of ignorable whitespace in element content. Validating Parsers must use this method to report each chunk of ignorable whitespace (see the W3C XML 1.0 recommendation, section 2.10): non-validating parsers may also use this method if they are capable of parsing and using content models. 
SAX parsers may return all contiguous whitespace in a single chunk, or they may split it into several chunks; however, all of the characters in any single event must come from the same external entity, so that the Locator provides useful information. `ContentHandler.processingInstruction(target, data)` Receive notification of a processing instruction. The Parser will invoke this method once for each processing instruction found: note that processing instructions may occur before or after the main document element. A SAX parser should never report an XML declaration (XML 1.0, section 2.8) or a text declaration (XML 1.0, section 4.3.1) using this method. `ContentHandler.skippedEntity(name)` Receive notification of a skipped entity. The Parser will invoke this method once for each entity skipped. Non-validating processors may skip entities if they have not seen the declarations (because, for example, the entity was declared in an external DTD subset). All processors may skip external entities, depending on the values of the `feature_external_ges` and the `feature_external_pes` properties.

DTDHandler Objects
------------------

[`DTDHandler`](#xml.sax.handler.DTDHandler "xml.sax.handler.DTDHandler") instances provide the following methods: `DTDHandler.notationDecl(name, publicId, systemId)` Handle a notation declaration event. `DTDHandler.unparsedEntityDecl(name, publicId, systemId, ndata)` Handle an unparsed entity declaration event.

EntityResolver Objects
----------------------

`EntityResolver.resolveEntity(publicId, systemId)` Resolve the system identifier of an entity and return either the system identifier to read from as a string, or an InputSource to read from. The default implementation returns *systemId*.

ErrorHandler Objects
--------------------

Objects with this interface are used to receive error and warning information from the [`XMLReader`](xml.sax.reader#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader"). If you create an object that implements this interface and register it with your [`XMLReader`](xml.sax.reader#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader"), the parser will call the methods in your object to report all warnings and errors. There are three levels of errors available: warnings, (possibly) recoverable errors, and unrecoverable errors. All methods take a `SAXParseException` as the only parameter. Errors and warnings may be converted to an exception by raising the passed-in exception object. `ErrorHandler.error(exception)` Called when the parser encounters a recoverable error. If this method does not raise an exception, parsing may continue, but further document information should not be expected by the application. Allowing the parser to continue may allow additional errors to be discovered in the input document. `ErrorHandler.fatalError(exception)` Called when the parser encounters an error it cannot recover from; parsing is expected to terminate when this method returns. `ErrorHandler.warning(exception)` Called when the parser presents minor warning information to the application. Parsing is expected to continue when this method returns, and document information will continue to be passed to the application. Raising an exception in this method will cause parsing to end.
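A minimal sketch of subclassing [`ContentHandler`](#xml.sax.handler.ContentHandler "xml.sax.handler.ContentHandler") to count element names; the inline XML document is illustrative:

```
import xml.sax
from xml.sax.handler import ContentHandler

class ElementCounter(ContentHandler):
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        # Called once per element start tag in document order.
        self.counts[name] = self.counts.get(name, 0) + 1

handler = ElementCounter()
xml.sax.parseString(b'<root><a/><a/><b/></root>', handler)
print(handler.counts)  # {'root': 1, 'a': 2, 'b': 1}
```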
python email.charset: Representing character sets email.charset: Representing character sets ========================================== **Source code:** [Lib/email/charset.py](https://github.com/python/cpython/tree/3.9/Lib/email/charset.py) This module is part of the legacy (`Compat32`) email API. In the new API only the aliases table is used. The remaining text in this section is the original documentation of the module. This module provides a class [`Charset`](#email.charset.Charset "email.charset.Charset") for representing character sets and character set conversions in email messages, as well as a character set registry and several convenience methods for manipulating this registry. Instances of [`Charset`](#email.charset.Charset "email.charset.Charset") are used in several other modules within the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package. Import this class from the [`email.charset`](#module-email.charset "email.charset: Character Sets") module. `class email.charset.Charset(input_charset=DEFAULT_CHARSET)` Map character sets to their email properties. This class provides information about the requirements imposed on email for a specific character set. It also provides convenience routines for converting between character sets, given the availability of the applicable codecs. Given a character set, it will do its best to provide information on how to use that character set in an email message in an RFC-compliant way. Certain character sets must be encoded with quoted-printable or base64 when used in email headers or bodies. Certain character sets must be converted outright, and are not allowed in email. Optional *input\_charset* is as described below; it is always coerced to lower case. After being alias normalized it is also used as a lookup into the registry of character sets to find out the header encoding, body encoding, and output conversion codec to be used for the character set. For example, if *input\_charset* is `iso-8859-1`, then headers and bodies will be encoded using quoted-printable and no output conversion codec is necessary. If *input\_charset* is `euc-jp`, then headers will be encoded with base64, bodies will not be encoded, but output text will be converted from the `euc-jp` character set to the `iso-2022-jp` character set. [`Charset`](#email.charset.Charset "email.charset.Charset") instances have the following data attributes: `input_charset` The initial character set specified. Common aliases are converted to their *official* email names (e.g. `latin_1` is converted to `iso-8859-1`). Defaults to 7-bit `us-ascii`. `header_encoding` If the character set must be encoded before it can be used in an email header, this attribute will be set to `charset.QP` (for quoted-printable), `charset.BASE64` (for base64 encoding), or `charset.SHORTEST` for the shortest of QP or BASE64 encoding. Otherwise, it will be `None`. `body_encoding` Same as *header\_encoding*, but describes the encoding for the mail message’s body, which indeed may be different than the header encoding. `charset.SHORTEST` is not allowed for *body\_encoding*. `output_charset` Some character sets must be converted before they can be used in email headers or bodies. If the *input\_charset* is one of them, this attribute will contain the name of the character set output will be converted to. Otherwise, it will be `None`. `input_codec` The name of the Python codec used to convert the *input\_charset* to Unicode. 
If no conversion codec is necessary, this attribute will be `None`. `output_codec` The name of the Python codec used to convert Unicode to the *output\_charset*. If no conversion codec is necessary, this attribute will have the same value as the *input\_codec*. [`Charset`](#email.charset.Charset "email.charset.Charset") instances also have the following methods: `get_body_encoding()` Return the content transfer encoding used for body encoding. This is either the string `quoted-printable` or `base64` depending on the encoding used, or it is a function, in which case you should call the function with a single argument, the Message object being encoded. The function should then set the *Content-Transfer-Encoding* header itself to whatever is appropriate. Returns the string `quoted-printable` if *body\_encoding* is `QP`, returns the string `base64` if *body\_encoding* is `BASE64`, and returns the string `7bit` otherwise. `get_output_charset()` Return the output character set. This is the *output\_charset* attribute if that is not `None`, otherwise it is *input\_charset*. `header_encode(string)` Header-encode the string *string*. The type of encoding (base64 or quoted-printable) will be based on the *header\_encoding* attribute. `header_encode_lines(string, maxlengths)` Header-encode a *string* by converting it first to bytes. This is similar to [`header_encode()`](#email.charset.Charset.header_encode "email.charset.Charset.header_encode") except that the string is fit into maximum line lengths as given by the argument *maxlengths*, which must be an iterator: each element returned from this iterator will provide the next maximum line length. `body_encode(string)` Body-encode the string *string*. The type of encoding (base64 or quoted-printable) will be based on the *body\_encoding* attribute. The [`Charset`](#email.charset.Charset "email.charset.Charset") class also provides a number of methods to support standard operations and built-in functions. `__str__()` Returns *input\_charset* as a string coerced to lower case. [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__") is an alias for [`__str__()`](#email.charset.Charset.__str__ "email.charset.Charset.__str__"). `__eq__(other)` This method allows you to compare two [`Charset`](#email.charset.Charset "email.charset.Charset") instances for equality. `__ne__(other)` This method allows you to compare two [`Charset`](#email.charset.Charset "email.charset.Charset") instances for inequality. The [`email.charset`](#module-email.charset "email.charset: Character Sets") module also provides the following functions for adding new entries to the global character set, alias, and codec registries: `email.charset.add_charset(charset, header_enc=None, body_enc=None, output_charset=None)` Add character properties to the global registry. *charset* is the input character set, and must be the canonical name of a character set. Optional *header\_enc* and *body\_enc* are either `charset.QP` for quoted-printable, `charset.BASE64` for base64 encoding, `charset.SHORTEST` for the shortest of quoted-printable or base64 encoding, or `None` for no encoding. `SHORTEST` is only valid for *header\_enc*. The default is `None` for no encoding. Optional *output\_charset* is the character set that the output should be in. Conversions will proceed from input charset, to Unicode, to the output charset when the method `Charset.convert()` is called. The default is to output in the same character set as the input.
Both *input\_charset* and *output\_charset* must have Unicode codec entries in the module’s character set-to-codec mapping; use [`add_codec()`](#email.charset.add_codec "email.charset.add_codec") to add codecs the module does not know about. See the [`codecs`](codecs#module-codecs "codecs: Encode and decode data and streams.") module’s documentation for more information. The global character set registry is kept in the module global dictionary `CHARSETS`. `email.charset.add_alias(alias, canonical)` Add a character set alias. *alias* is the alias name, e.g. `latin-1`. *canonical* is the character set’s canonical name, e.g. `iso-8859-1`. The global charset alias registry is kept in the module global dictionary `ALIASES`. `email.charset.add_codec(charset, codecname)` Add a codec that maps characters in the given character set to and from Unicode. *charset* is the canonical name of a character set. *codecname* is the name of a Python codec, as appropriate for the second argument to the [`str`](stdtypes#str "str")’s [`encode()`](stdtypes#str.encode "str.encode") method.
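A minimal sketch of the registry lookups described above; the printed values follow the `iso-8859-1` and `euc-jp` behavior this section documents:

```
from email.charset import BASE64, QP, Charset

cs = Charset('iso-8859-1')
print(cs.input_charset)              # iso-8859-1
print(cs.header_encoding == QP)      # True: quoted-printable headers
print(cs.get_output_charset())       # iso-8859-1 (no output conversion)
print(cs.header_encode('Fußballer')) # an RFC 2047 encoded word

jp = Charset('euc-jp')
print(jp.header_encoding == BASE64)  # True: base64 headers
print(jp.get_output_charset())       # iso-2022-jp (converted on output)
```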
python tracemalloc — Trace memory allocations

tracemalloc — Trace memory allocations
======================================

New in version 3.4.

**Source code:** [Lib/tracemalloc.py](https://github.com/python/cpython/tree/3.9/Lib/tracemalloc.py)

The tracemalloc module is a debug tool to trace memory blocks allocated by Python. It provides the following information:

* Traceback where an object was allocated
* Statistics on allocated memory blocks per filename and per line number: total size, number and average size of allocated memory blocks
* Compute the differences between two snapshots to detect memory leaks

To trace most memory blocks allocated by Python, the module should be started as early as possible by setting the [`PYTHONTRACEMALLOC`](../using/cmdline#envvar-PYTHONTRACEMALLOC) environment variable to `1`, or by using the [`-X`](../using/cmdline#id5) `tracemalloc` command line option. The [`tracemalloc.start()`](#tracemalloc.start "tracemalloc.start") function can be called at runtime to start tracing Python memory allocations. By default, a trace of an allocated memory block only stores the most recent frame (1 frame). To store 25 frames at startup: set the [`PYTHONTRACEMALLOC`](../using/cmdline#envvar-PYTHONTRACEMALLOC) environment variable to `25`, or use the [`-X`](../using/cmdline#id5) `tracemalloc=25` command line option.

Examples
--------

### Display the top 10

Display the 10 files allocating the most memory:

```
import tracemalloc

tracemalloc.start()

# ... run your application ...

snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')

print("[ Top 10 ]")
for stat in top_stats[:10]:
    print(stat)
```

Example of output of the Python test suite:

```
[ Top 10 ]
<frozen importlib._bootstrap>:716: size=4855 KiB, count=39328, average=126 B
<frozen importlib._bootstrap>:284: size=521 KiB, count=3199, average=167 B
/usr/lib/python3.4/collections/__init__.py:368: size=244 KiB, count=2315, average=108 B
/usr/lib/python3.4/unittest/case.py:381: size=185 KiB, count=779, average=243 B
/usr/lib/python3.4/unittest/case.py:402: size=154 KiB, count=378, average=416 B
/usr/lib/python3.4/abc.py:133: size=88.7 KiB, count=347, average=262 B
<frozen importlib._bootstrap>:1446: size=70.4 KiB, count=911, average=79 B
<frozen importlib._bootstrap>:1454: size=52.0 KiB, count=25, average=2131 B
<string>:5: size=49.7 KiB, count=148, average=344 B
/usr/lib/python3.4/sysconfig.py:411: size=48.0 KiB, count=1, average=48.0 KiB
```

We can see that Python loaded `4855 KiB` data (bytecode and constants) from modules and that the [`collections`](collections#module-collections "collections: Container datatypes") module allocated `244 KiB` to build [`namedtuple`](collections#collections.namedtuple "collections.namedtuple") types. See [`Snapshot.statistics()`](#tracemalloc.Snapshot.statistics "tracemalloc.Snapshot.statistics") for more options.

### Compute differences

Take two snapshots and display the differences:

```
import tracemalloc

tracemalloc.start()

# ... start your application ...

snapshot1 = tracemalloc.take_snapshot()

# ... call the function leaking memory ...
snapshot2 = tracemalloc.take_snapshot()

top_stats = snapshot2.compare_to(snapshot1, 'lineno')

print("[ Top 10 differences ]")
for stat in top_stats[:10]:
    print(stat)
```

Example of output before/after running some tests of the Python test suite:

```
[ Top 10 differences ]
<frozen importlib._bootstrap>:716: size=8173 KiB (+4428 KiB), count=71332 (+39369), average=117 B
/usr/lib/python3.4/linecache.py:127: size=940 KiB (+940 KiB), count=8106 (+8106), average=119 B
/usr/lib/python3.4/unittest/case.py:571: size=298 KiB (+298 KiB), count=589 (+589), average=519 B
<frozen importlib._bootstrap>:284: size=1005 KiB (+166 KiB), count=7423 (+1526), average=139 B
/usr/lib/python3.4/mimetypes.py:217: size=112 KiB (+112 KiB), count=1334 (+1334), average=86 B
/usr/lib/python3.4/http/server.py:848: size=96.0 KiB (+96.0 KiB), count=1 (+1), average=96.0 KiB
/usr/lib/python3.4/inspect.py:1465: size=83.5 KiB (+83.5 KiB), count=109 (+109), average=784 B
/usr/lib/python3.4/unittest/mock.py:491: size=77.7 KiB (+77.7 KiB), count=143 (+143), average=557 B
/usr/lib/python3.4/urllib/parse.py:476: size=71.8 KiB (+71.8 KiB), count=969 (+969), average=76 B
/usr/lib/python3.4/contextlib.py:38: size=67.2 KiB (+67.2 KiB), count=126 (+126), average=546 B
```

We can see that Python has loaded `8173 KiB` of module data (bytecode and constants), and that this is `4428 KiB` more than had been loaded before the tests, when the previous snapshot was taken. Similarly, the [`linecache`](linecache#module-linecache "linecache: Provides random access to individual lines from text files.") module has cached `940 KiB` of Python source code to format tracebacks, all of it since the previous snapshot. If the system has little free memory, snapshots can be written on disk using the [`Snapshot.dump()`](#tracemalloc.Snapshot.dump "tracemalloc.Snapshot.dump") method to analyze the snapshot offline. Then use the [`Snapshot.load()`](#tracemalloc.Snapshot.load "tracemalloc.Snapshot.load") method to reload the snapshot.

### Get the traceback of a memory block

Code to display the traceback of the biggest memory block:

```
import tracemalloc

# Store 25 frames
tracemalloc.start(25)

# ... run your application ...
snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('traceback')

# pick the biggest memory block
stat = top_stats[0]
print("%s memory blocks: %.1f KiB" % (stat.count, stat.size / 1024))
for line in stat.traceback.format():
    print(line)
```

Example of output of the Python test suite (traceback limited to 25 frames):

```
903 memory blocks: 870.1 KiB
  File "<frozen importlib._bootstrap>", line 716
  File "<frozen importlib._bootstrap>", line 1036
  File "<frozen importlib._bootstrap>", line 934
  File "<frozen importlib._bootstrap>", line 1068
  File "<frozen importlib._bootstrap>", line 619
  File "<frozen importlib._bootstrap>", line 1581
  File "<frozen importlib._bootstrap>", line 1614
  File "/usr/lib/python3.4/doctest.py", line 101
    import pdb
  File "<frozen importlib._bootstrap>", line 284
  File "<frozen importlib._bootstrap>", line 938
  File "<frozen importlib._bootstrap>", line 1068
  File "<frozen importlib._bootstrap>", line 619
  File "<frozen importlib._bootstrap>", line 1581
  File "<frozen importlib._bootstrap>", line 1614
  File "/usr/lib/python3.4/test/support/__init__.py", line 1728
    import doctest
  File "/usr/lib/python3.4/test/test_pickletools.py", line 21
    support.run_doctest(pickletools)
  File "/usr/lib/python3.4/test/regrtest.py", line 1276
    test_runner()
  File "/usr/lib/python3.4/test/regrtest.py", line 976
    display_failure=not verbose)
  File "/usr/lib/python3.4/test/regrtest.py", line 761
    match_tests=ns.match_tests)
  File "/usr/lib/python3.4/test/regrtest.py", line 1563
    main()
  File "/usr/lib/python3.4/test/__main__.py", line 3
    regrtest.main_in_temp_cwd()
  File "/usr/lib/python3.4/runpy.py", line 73
    exec(code, run_globals)
  File "/usr/lib/python3.4/runpy.py", line 160
    "__main__", fname, loader, pkg_name)
```

We can see that the most memory was allocated in the [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") module to load data (bytecode and constants) from modules: `870.1 KiB`. The traceback is where the [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") loaded data most recently: on the `import pdb` line of the [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.") module. The traceback may change if a new module is loaded.

### Pretty top

Code to display the 10 lines allocating the most memory with a pretty output, ignoring `<frozen importlib._bootstrap>` and `<unknown>` files:

```
import linecache
import os
import tracemalloc

def display_top(snapshot, key_type='lineno', limit=10):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)

    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        print("#%s: %s:%s: %.1f KiB"
              % (index, frame.filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print('    %s' % line)

    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))

tracemalloc.start()

# ... run your application ...
snapshot = tracemalloc.take_snapshot()
display_top(snapshot)
```

Example of output of the Python test suite:

```
Top 10 lines
#1: Lib/base64.py:414: 419.8 KiB
    _b85chars2 = [(a + b) for a in _b85chars for b in _b85chars]
#2: Lib/base64.py:306: 419.8 KiB
    _a85chars2 = [(a + b) for a in _a85chars for b in _a85chars]
#3: collections/__init__.py:368: 293.6 KiB
    exec(class_definition, namespace)
#4: Lib/abc.py:133: 115.2 KiB
    cls = super().__new__(mcls, name, bases, namespace)
#5: unittest/case.py:574: 103.1 KiB
    testMethod()
#6: Lib/linecache.py:127: 95.4 KiB
    lines = fp.readlines()
#7: urllib/parse.py:476: 71.8 KiB
    for a in _hexdig for b in _hexdig}
#8: <string>:5: 62.0 KiB
#9: Lib/_weakrefset.py:37: 60.0 KiB
    self.data = set()
#10: Lib/base64.py:142: 59.8 KiB
    _b32tab2 = [a + b for a in _b32tab for b in _b32tab]
6220 other: 3602.8 KiB
Total allocated size: 5303.1 KiB
```

See [`Snapshot.statistics()`](#tracemalloc.Snapshot.statistics "tracemalloc.Snapshot.statistics") for more options.

### Record the current and peak size of all traced memory blocks

The following code computes two sums like `0 + 1 + 2 + ...` inefficiently, by creating a list of those numbers. This list consumes a lot of memory temporarily. We can use [`get_traced_memory()`](#tracemalloc.get_traced_memory "tracemalloc.get_traced_memory") and [`reset_peak()`](#tracemalloc.reset_peak "tracemalloc.reset_peak") to observe the small memory usage after the sum is computed as well as the peak memory usage during the computations:

```
import tracemalloc

tracemalloc.start()

# Example code: compute a sum with a large temporary list
large_sum = sum(list(range(100000)))

first_size, first_peak = tracemalloc.get_traced_memory()

tracemalloc.reset_peak()

# Example code: compute a sum with a small temporary list
small_sum = sum(list(range(1000)))

second_size, second_peak = tracemalloc.get_traced_memory()

print(f"{first_size=}, {first_peak=}")
print(f"{second_size=}, {second_peak=}")
```

Output:

```
first_size=664, first_peak=3592984
second_size=804, second_peak=29704
```

Using [`reset_peak()`](#tracemalloc.reset_peak "tracemalloc.reset_peak") ensured we could accurately record the peak during the computation of `small_sum`, even though it is much smaller than the overall peak size of memory blocks since the [`start()`](#tracemalloc.start "tracemalloc.start") call. Without the call to [`reset_peak()`](#tracemalloc.reset_peak "tracemalloc.reset_peak"), `second_peak` would still be the peak from the computation of `large_sum` (that is, equal to `first_peak`). In this case, both peaks are much higher than the final memory usage, which suggests we could optimise (by removing the unnecessary call to [`list`](stdtypes#list "list") and writing `sum(range(...))`).

API
---

### Functions

`tracemalloc.clear_traces()`

Clear traces of memory blocks allocated by Python.

See also [`stop()`](#tracemalloc.stop "tracemalloc.stop").

`tracemalloc.get_object_traceback(obj)`

Get the traceback where the Python object *obj* was allocated. Return a [`Traceback`](#tracemalloc.Traceback "tracemalloc.Traceback") instance, or `None` if the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module is not tracing memory allocations or did not trace the allocation of the object.

See also the [`gc.get_referrers()`](gc#gc.get_referrers "gc.get_referrers") and [`sys.getsizeof()`](sys#sys.getsizeof "sys.getsizeof") functions.

`tracemalloc.get_traceback_limit()`

Get the maximum number of frames stored in the traceback of a trace.
The [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module must be tracing memory allocations to get the limit; otherwise, an exception is raised.

The limit is set by the [`start()`](#tracemalloc.start "tracemalloc.start") function.

`tracemalloc.get_traced_memory()`

Get the current size and peak size of memory blocks traced by the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module as a tuple: `(current: int, peak: int)`.

`tracemalloc.reset_peak()`

Set the peak size of memory blocks traced by the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module to the current size.

Do nothing if the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module is not tracing memory allocations.

This function only modifies the recorded peak size, and does not modify or clear any traces, unlike [`clear_traces()`](#tracemalloc.clear_traces "tracemalloc.clear_traces"). Snapshots taken with [`take_snapshot()`](#tracemalloc.take_snapshot "tracemalloc.take_snapshot") before a call to [`reset_peak()`](#tracemalloc.reset_peak "tracemalloc.reset_peak") can be meaningfully compared to snapshots taken after the call.

See also [`get_traced_memory()`](#tracemalloc.get_traced_memory "tracemalloc.get_traced_memory").

New in version 3.9.

`tracemalloc.get_tracemalloc_memory()`

Get the memory usage in bytes of the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module used to store traces of memory blocks. Return an [`int`](functions#int "int").

`tracemalloc.is_tracing()`

`True` if the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module is tracing Python memory allocations, `False` otherwise.

See also the [`start()`](#tracemalloc.start "tracemalloc.start") and [`stop()`](#tracemalloc.stop "tracemalloc.stop") functions.

`tracemalloc.start(nframe: int=1)`

Start tracing Python memory allocations: install hooks on Python memory allocators. Collected tracebacks of traces will be limited to *nframe* frames. By default, a trace of a memory block only stores the most recent frame: the limit is `1`. *nframe* must be greater than or equal to `1`.

You can still read the original number of total frames that composed the traceback by looking at the [`Traceback.total_nframe`](#tracemalloc.Traceback.total_nframe "tracemalloc.Traceback.total_nframe") attribute.

Storing more than `1` frame is only useful to compute statistics grouped by `'traceback'` or to compute cumulative statistics: see the [`Snapshot.compare_to()`](#tracemalloc.Snapshot.compare_to "tracemalloc.Snapshot.compare_to") and [`Snapshot.statistics()`](#tracemalloc.Snapshot.statistics "tracemalloc.Snapshot.statistics") methods.

Storing more frames increases the memory and CPU overhead of the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module. Use the [`get_tracemalloc_memory()`](#tracemalloc.get_tracemalloc_memory "tracemalloc.get_tracemalloc_memory") function to measure how much memory is used by the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module.

The [`PYTHONTRACEMALLOC`](../using/cmdline#envvar-PYTHONTRACEMALLOC) environment variable (`PYTHONTRACEMALLOC=NFRAME`) and the [`-X`](../using/cmdline#id5) `tracemalloc=NFRAME` command line option can be used to start tracing at startup.
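For example, a minimal sketch that raises the limit and reads it back (the value `10` is arbitrary):

```
import tracemalloc

# Equivalent to running with PYTHONTRACEMALLOC=10 or -X tracemalloc=10
tracemalloc.start(10)
print(tracemalloc.get_traceback_limit())  # prints 10
```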
See also the [`stop()`](#tracemalloc.stop "tracemalloc.stop"), [`is_tracing()`](#tracemalloc.is_tracing "tracemalloc.is_tracing") and [`get_traceback_limit()`](#tracemalloc.get_traceback_limit "tracemalloc.get_traceback_limit") functions.

`tracemalloc.stop()`

Stop tracing Python memory allocations: uninstall hooks on Python memory allocators. Also clears all previously collected traces of memory blocks allocated by Python.

Call the [`take_snapshot()`](#tracemalloc.take_snapshot "tracemalloc.take_snapshot") function to take a snapshot of traces before clearing them.

See also the [`start()`](#tracemalloc.start "tracemalloc.start"), [`is_tracing()`](#tracemalloc.is_tracing "tracemalloc.is_tracing") and [`clear_traces()`](#tracemalloc.clear_traces "tracemalloc.clear_traces") functions.

`tracemalloc.take_snapshot()`

Take a snapshot of traces of memory blocks allocated by Python. Return a new [`Snapshot`](#tracemalloc.Snapshot "tracemalloc.Snapshot") instance.

The snapshot does not include memory blocks allocated before the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module started to trace memory allocations.

Tracebacks of traces are limited to [`get_traceback_limit()`](#tracemalloc.get_traceback_limit "tracemalloc.get_traceback_limit") frames. Use the *nframe* parameter of the [`start()`](#tracemalloc.start "tracemalloc.start") function to store more frames.

The [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module must be tracing memory allocations to take a snapshot; see the [`start()`](#tracemalloc.start "tracemalloc.start") function.

See also the [`get_object_traceback()`](#tracemalloc.get_object_traceback "tracemalloc.get_object_traceback") function.

### DomainFilter

`class tracemalloc.DomainFilter(inclusive: bool, domain: int)`

Filter traces of memory blocks by their address space (domain).

New in version 3.6.

`inclusive`

If *inclusive* is `True` (include), match memory blocks allocated in the address space [`domain`](#tracemalloc.DomainFilter.domain "tracemalloc.DomainFilter.domain").

If *inclusive* is `False` (exclude), match memory blocks not allocated in the address space [`domain`](#tracemalloc.DomainFilter.domain "tracemalloc.DomainFilter.domain").

`domain`

Address space of a memory block (`int`). Read-only property.

### Filter

`class tracemalloc.Filter(inclusive: bool, filename_pattern: str, lineno: int=None, all_frames: bool=False, domain: int=None)`

Filter on traces of memory blocks.

See the [`fnmatch.fnmatch()`](fnmatch#fnmatch.fnmatch "fnmatch.fnmatch") function for the syntax of *filename\_pattern*. The `'.pyc'` file extension is replaced with `'.py'`.

Examples:

* `Filter(True, subprocess.__file__)` only includes traces of the [`subprocess`](subprocess#module-subprocess "subprocess: Subprocess management.") module
* `Filter(False, tracemalloc.__file__)` excludes traces of the [`tracemalloc`](#module-tracemalloc "tracemalloc: Trace memory allocations.") module
* `Filter(False, "<unknown>")` excludes empty tracebacks

Changed in version 3.5: The `'.pyo'` file extension is no longer replaced with `'.py'`.

Changed in version 3.6: Added the [`domain`](#tracemalloc.Filter.domain "tracemalloc.Filter.domain") attribute.

`domain`

Address space of a memory block (`int` or `None`).

tracemalloc uses the domain `0` to trace memory allocations made by Python. C extensions can use other domains to trace other resources.
`inclusive`

If *inclusive* is `True` (include), only match memory blocks allocated in a file with a name matching [`filename_pattern`](#tracemalloc.Filter.filename_pattern "tracemalloc.Filter.filename_pattern") at line number [`lineno`](#tracemalloc.Filter.lineno "tracemalloc.Filter.lineno").

If *inclusive* is `False` (exclude), ignore memory blocks allocated in a file with a name matching [`filename_pattern`](#tracemalloc.Filter.filename_pattern "tracemalloc.Filter.filename_pattern") at line number [`lineno`](#tracemalloc.Filter.lineno "tracemalloc.Filter.lineno").

`lineno`

Line number (`int`) of the filter. If *lineno* is `None`, the filter matches any line number.

`filename_pattern`

Filename pattern of the filter (`str`). Read-only property.

`all_frames`

If *all\_frames* is `True`, all frames of the traceback are checked. If *all\_frames* is `False`, only the most recent frame is checked.

This attribute has no effect if the traceback limit is `1`. See the [`get_traceback_limit()`](#tracemalloc.get_traceback_limit "tracemalloc.get_traceback_limit") function and [`Snapshot.traceback_limit`](#tracemalloc.Snapshot.traceback_limit "tracemalloc.Snapshot.traceback_limit") attribute.

### Frame

`class tracemalloc.Frame`

Frame of a traceback.

The [`Traceback`](#tracemalloc.Traceback "tracemalloc.Traceback") class is a sequence of [`Frame`](#tracemalloc.Frame "tracemalloc.Frame") instances.

`filename`

Filename (`str`).

`lineno`

Line number (`int`).

### Snapshot

`class tracemalloc.Snapshot`

Snapshot of traces of memory blocks allocated by Python.

The [`take_snapshot()`](#tracemalloc.take_snapshot "tracemalloc.take_snapshot") function creates a snapshot instance.

`compare_to(old_snapshot: Snapshot, key_type: str, cumulative: bool=False)`

Compute the differences with an old snapshot. Get statistics as a sorted list of [`StatisticDiff`](#tracemalloc.StatisticDiff "tracemalloc.StatisticDiff") instances grouped by *key\_type*.

See the [`Snapshot.statistics()`](#tracemalloc.Snapshot.statistics "tracemalloc.Snapshot.statistics") method for the *key\_type* and *cumulative* parameters.

The result is sorted from the biggest to the smallest by: absolute value of [`StatisticDiff.size_diff`](#tracemalloc.StatisticDiff.size_diff "tracemalloc.StatisticDiff.size_diff"), [`StatisticDiff.size`](#tracemalloc.StatisticDiff.size "tracemalloc.StatisticDiff.size"), absolute value of [`StatisticDiff.count_diff`](#tracemalloc.StatisticDiff.count_diff "tracemalloc.StatisticDiff.count_diff"), [`Statistic.count`](#tracemalloc.Statistic.count "tracemalloc.Statistic.count") and then by [`StatisticDiff.traceback`](#tracemalloc.StatisticDiff.traceback "tracemalloc.StatisticDiff.traceback").

`dump(filename)`

Write the snapshot into a file. Use [`load()`](#tracemalloc.Snapshot.load "tracemalloc.Snapshot.load") to reload the snapshot.

`filter_traces(filters)`

Create a new [`Snapshot`](#tracemalloc.Snapshot "tracemalloc.Snapshot") instance with a filtered [`traces`](#tracemalloc.Snapshot.traces "tracemalloc.Snapshot.traces") sequence; *filters* is a list of [`DomainFilter`](#tracemalloc.DomainFilter "tracemalloc.DomainFilter") and [`Filter`](#tracemalloc.Filter "tracemalloc.Filter") instances. If *filters* is an empty list, return a new [`Snapshot`](#tracemalloc.Snapshot "tracemalloc.Snapshot") instance with a copy of the traces.

All inclusive filters are applied at once; a trace is ignored if no inclusive filter matches it. A trace is ignored if at least one exclusive filter matches it.
Changed in version 3.6: [`DomainFilter`](#tracemalloc.DomainFilter "tracemalloc.DomainFilter") instances are now also accepted in *filters*.

`classmethod load(filename)`

Load a snapshot from a file. See also [`dump()`](#tracemalloc.Snapshot.dump "tracemalloc.Snapshot.dump").

`statistics(key_type: str, cumulative: bool=False)`

Get statistics as a sorted list of [`Statistic`](#tracemalloc.Statistic "tracemalloc.Statistic") instances grouped by *key\_type*:

| key\_type | description |
| --- | --- |
| `'filename'` | filename |
| `'lineno'` | filename and line number |
| `'traceback'` | traceback |

If *cumulative* is `True`, accumulate the size and count of memory blocks of all frames of the traceback of a trace, not only the most recent frame. The cumulative mode can only be used with *key\_type* equal to `'filename'` or `'lineno'`.

The result is sorted from the biggest to the smallest by: [`Statistic.size`](#tracemalloc.Statistic.size "tracemalloc.Statistic.size"), [`Statistic.count`](#tracemalloc.Statistic.count "tracemalloc.Statistic.count") and then by [`Statistic.traceback`](#tracemalloc.Statistic.traceback "tracemalloc.Statistic.traceback").

`traceback_limit`

Maximum number of frames stored in the traceback of [`traces`](#tracemalloc.Snapshot.traces "tracemalloc.Snapshot.traces"): the result of [`get_traceback_limit()`](#tracemalloc.get_traceback_limit "tracemalloc.get_traceback_limit") when the snapshot was taken.

`traces`

Traces of all memory blocks allocated by Python: sequence of [`Trace`](#tracemalloc.Trace "tracemalloc.Trace") instances.

The sequence has an undefined order. Use the [`Snapshot.statistics()`](#tracemalloc.Snapshot.statistics "tracemalloc.Snapshot.statistics") method to get a sorted list of statistics.

### Statistic

`class tracemalloc.Statistic`

Statistic on memory allocations.

[`Snapshot.statistics()`](#tracemalloc.Snapshot.statistics "tracemalloc.Snapshot.statistics") returns a list of [`Statistic`](#tracemalloc.Statistic "tracemalloc.Statistic") instances. See also the [`StatisticDiff`](#tracemalloc.StatisticDiff "tracemalloc.StatisticDiff") class.

`count`

Number of memory blocks (`int`).

`size`

Total size of memory blocks in bytes (`int`).

`traceback`

Traceback where the memory block was allocated, [`Traceback`](#tracemalloc.Traceback "tracemalloc.Traceback") instance.

### StatisticDiff

`class tracemalloc.StatisticDiff`

Statistic difference on memory allocations between an old and a new [`Snapshot`](#tracemalloc.Snapshot "tracemalloc.Snapshot") instance.

[`Snapshot.compare_to()`](#tracemalloc.Snapshot.compare_to "tracemalloc.Snapshot.compare_to") returns a list of [`StatisticDiff`](#tracemalloc.StatisticDiff "tracemalloc.StatisticDiff") instances. See also the [`Statistic`](#tracemalloc.Statistic "tracemalloc.Statistic") class.

`count`

Number of memory blocks in the new snapshot (`int`): `0` if the memory blocks have been released in the new snapshot.

`count_diff`

Difference of number of memory blocks between the old and the new snapshots (`int`): `0` if the memory blocks have been allocated in the new snapshot.

`size`

Total size of memory blocks in bytes in the new snapshot (`int`): `0` if the memory blocks have been released in the new snapshot.

`size_diff`

Difference of total size of memory blocks in bytes between the old and the new snapshots (`int`): `0` if the memory blocks have been allocated in the new snapshot.

`traceback`

Traceback where the memory blocks were allocated, [`Traceback`](#tracemalloc.Traceback "tracemalloc.Traceback") instance.
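For example, a small sketch of reading [`StatisticDiff`](#tracemalloc.StatisticDiff "tracemalloc.StatisticDiff") fields after a comparison (the allocation in the middle is an arbitrary stand-in for real work):

```
import tracemalloc

tracemalloc.start()
snapshot1 = tracemalloc.take_snapshot()

data = [bytearray(100) for _ in range(1000)]  # arbitrary allocations

snapshot2 = tracemalloc.take_snapshot()
for diff in snapshot2.compare_to(snapshot1, 'lineno')[:3]:
    frame = diff.traceback[0]
    print("%s:%s size_diff=%+d B count_diff=%+d"
          % (frame.filename, frame.lineno, diff.size_diff, diff.count_diff))
```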
### Trace

`class tracemalloc.Trace`

Trace of a memory block.

The [`Snapshot.traces`](#tracemalloc.Snapshot.traces "tracemalloc.Snapshot.traces") attribute is a sequence of [`Trace`](#tracemalloc.Trace "tracemalloc.Trace") instances.

Changed in version 3.6: Added the [`domain`](#tracemalloc.Trace.domain "tracemalloc.Trace.domain") attribute.

`domain`

Address space of a memory block (`int`). Read-only property.

tracemalloc uses the domain `0` to trace memory allocations made by Python. C extensions can use other domains to trace other resources.

`size`

Size of the memory block in bytes (`int`).

`traceback`

Traceback where the memory block was allocated, [`Traceback`](#tracemalloc.Traceback "tracemalloc.Traceback") instance.

### Traceback

`class tracemalloc.Traceback`

Sequence of [`Frame`](#tracemalloc.Frame "tracemalloc.Frame") instances sorted from the oldest frame to the most recent frame. A traceback contains at least `1` frame. If the `tracemalloc` module failed to get a frame, the filename `"<unknown>"` at line number `0` is used.

When a snapshot is taken, tracebacks of traces are limited to [`get_traceback_limit()`](#tracemalloc.get_traceback_limit "tracemalloc.get_traceback_limit") frames. See the [`take_snapshot()`](#tracemalloc.take_snapshot "tracemalloc.take_snapshot") function. The original number of frames of the traceback is stored in the [`Traceback.total_nframe`](#tracemalloc.Traceback.total_nframe "tracemalloc.Traceback.total_nframe") attribute. This makes it possible to know whether a traceback has been truncated by the traceback limit.

The [`Trace.traceback`](#tracemalloc.Trace.traceback "tracemalloc.Trace.traceback") attribute is an instance of [`Traceback`](#tracemalloc.Traceback "tracemalloc.Traceback").

Changed in version 3.7: Frames are now sorted from the oldest to the most recent, instead of most recent to oldest.

`total_nframe`

Total number of frames that composed the traceback before truncation. This attribute can be set to `None` if the information is not available.

Changed in version 3.9: The [`Traceback.total_nframe`](#tracemalloc.Traceback.total_nframe "tracemalloc.Traceback.total_nframe") attribute was added.

`format(limit=None, most_recent_first=False)`

Format the traceback as a list of lines. Use the [`linecache`](linecache#module-linecache "linecache: Provides random access to individual lines from text files.") module to retrieve lines from the source code. If *limit* is set, format the *limit* most recent frames if *limit* is positive. Otherwise, format the `abs(limit)` oldest frames. If *most\_recent\_first* is `True`, the order of the formatted frames is reversed, returning the most recent frame first instead of last.

Similar to the [`traceback.format_tb()`](traceback#traceback.format_tb "traceback.format_tb") function, except that [`format()`](#tracemalloc.Traceback.format "tracemalloc.Traceback.format") does not include newlines.

Example:

```
print("Traceback (most recent call first):")
for line in traceback:
    print(line)
```

Output:

```
Traceback (most recent call first):
  File "test.py", line 9
    obj = Object()
  File "test.py", line 12
    tb = tracemalloc.get_object_traceback(f())
```
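For example, a minimal sketch that uses [`Traceback.total_nframe`](#tracemalloc.Traceback.total_nframe "tracemalloc.Traceback.total_nframe") to detect a truncated traceback (the traced list is an arbitrary example):

```
import tracemalloc

tracemalloc.start(5)          # store at most 5 frames per trace
obj = [None] * 1000           # some traced allocation
tb = tracemalloc.get_object_traceback(obj)
if tb is not None and tb.total_nframe is not None and tb.total_nframe > len(tb):
    print("truncated: %s of %s frames stored" % (len(tb), tb.total_nframe))
```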
python Program Frameworks Program Frameworks ================== The modules described in this chapter are frameworks that will largely dictate the structure of your program. Currently the modules described here are all oriented toward writing command-line interfaces. The full list of modules described in this chapter is: * [`turtle` — Turtle graphics](turtle) + [Introduction](turtle#introduction) + [Overview of available Turtle and Screen methods](turtle#overview-of-available-turtle-and-screen-methods) - [Turtle methods](turtle#turtle-methods) - [Methods of TurtleScreen/Screen](turtle#methods-of-turtlescreen-screen) + [Methods of RawTurtle/Turtle and corresponding functions](turtle#methods-of-rawturtle-turtle-and-corresponding-functions) - [Turtle motion](turtle#turtle-motion) - [Tell Turtle’s state](turtle#tell-turtle-s-state) - [Settings for measurement](turtle#settings-for-measurement) - [Pen control](turtle#pen-control) * [Drawing state](turtle#drawing-state) * [Color control](turtle#color-control) * [Filling](turtle#filling) * [More drawing control](turtle#more-drawing-control) - [Turtle state](turtle#turtle-state) * [Visibility](turtle#visibility) * [Appearance](turtle#appearance) - [Using events](turtle#using-events) - [Special Turtle methods](turtle#special-turtle-methods) - [Compound shapes](turtle#compound-shapes) + [Methods of TurtleScreen/Screen and corresponding functions](turtle#methods-of-turtlescreen-screen-and-corresponding-functions) - [Window control](turtle#window-control) - [Animation control](turtle#animation-control) - [Using screen events](turtle#using-screen-events) - [Input methods](turtle#input-methods) - [Settings and special methods](turtle#settings-and-special-methods) - [Methods specific to Screen, not inherited from TurtleScreen](turtle#methods-specific-to-screen-not-inherited-from-turtlescreen) + [Public classes](turtle#public-classes) + [Help and configuration](turtle#help-and-configuration) - [How to use help](turtle#how-to-use-help) - [Translation of docstrings into different languages](turtle#translation-of-docstrings-into-different-languages) - [How to configure Screen and Turtles](turtle#how-to-configure-screen-and-turtles) + [`turtledemo` — Demo scripts](turtle#module-turtledemo) + [Changes since Python 2.6](turtle#changes-since-python-2-6) + [Changes since Python 3.0](turtle#changes-since-python-3-0) * [`cmd` — Support for line-oriented command interpreters](cmd) + [Cmd Objects](cmd#cmd-objects) + [Cmd Example](cmd#cmd-example) * [`shlex` — Simple lexical analysis](shlex) + [shlex Objects](shlex#shlex-objects) + [Parsing Rules](shlex#parsing-rules) + [Improved Compatibility with Shells](shlex#improved-compatibility-with-shells) python urllib.robotparser — Parser for robots.txt urllib.robotparser — Parser for robots.txt ========================================== **Source code:** [Lib/urllib/robotparser.py](https://github.com/python/cpython/tree/3.9/Lib/urllib/robotparser.py) This module provides a single class, [`RobotFileParser`](#urllib.robotparser.RobotFileParser "urllib.robotparser.RobotFileParser"), which answers questions about whether or not a particular user agent can fetch a URL on the Web site that published the `robots.txt` file. For more details on the structure of `robots.txt` files, see <http://www.robotstxt.org/orig.html>. `class urllib.robotparser.RobotFileParser(url='')` This class provides methods to read, parse and answer questions about the `robots.txt` file at *url*. `set_url(url)` Sets the URL referring to a `robots.txt` file. 
`read()` Reads the `robots.txt` URL and feeds it to the parser. `parse(lines)` Parses the lines argument. `can_fetch(useragent, url)` Returns `True` if the *useragent* is allowed to fetch the *url* according to the rules contained in the parsed `robots.txt` file. `mtime()` Returns the time the `robots.txt` file was last fetched. This is useful for long-running web spiders that need to check for new `robots.txt` files periodically. `modified()` Sets the time the `robots.txt` file was last fetched to the current time. `crawl_delay(useragent)` Returns the value of the `Crawl-delay` parameter from `robots.txt` for the *useragent* in question. If there is no such parameter or it doesn’t apply to the *useragent* specified or the `robots.txt` entry for this parameter has invalid syntax, return `None`. New in version 3.6. `request_rate(useragent)` Returns the contents of the `Request-rate` parameter from `robots.txt` as a [named tuple](../glossary#term-named-tuple) `RequestRate(requests, seconds)`. If there is no such parameter or it doesn’t apply to the *useragent* specified or the `robots.txt` entry for this parameter has invalid syntax, return `None`. New in version 3.6. `site_maps()` Returns the contents of the `Sitemap` parameter from `robots.txt` in the form of a [`list()`](stdtypes#list "list"). If there is no such parameter or the `robots.txt` entry for this parameter has invalid syntax, return `None`. New in version 3.8. The following example demonstrates basic use of the [`RobotFileParser`](#urllib.robotparser.RobotFileParser "urllib.robotparser.RobotFileParser") class: ``` >>> import urllib.robotparser >>> rp = urllib.robotparser.RobotFileParser() >>> rp.set_url("http://www.musi-cal.com/robots.txt") >>> rp.read() >>> rrate = rp.request_rate("*") >>> rrate.requests 3 >>> rrate.seconds 20 >>> rp.crawl_delay("*") 6 >>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco") False >>> rp.can_fetch("*", "http://www.musi-cal.com/") True ``` python distutils — Building and installing Python modules distutils — Building and installing Python modules ================================================== The [`distutils`](#module-distutils "distutils: Support for building and installing Python modules into an existing Python installation.") package provides support for building and installing additional modules into a Python installation. The new modules may be either 100%-pure Python, or may be extension modules written in C, or may be collections of Python packages which include modules coded in both Python and C. Most Python users will *not* want to use this module directly, but instead use the cross-version tools maintained by the Python Packaging Authority. 
In particular, [setuptools](https://setuptools.readthedocs.io/en/latest/) is an enhanced alternative to [`distutils`](#module-distutils "distutils: Support for building and installing Python modules into an existing Python installation.") that provides:

* support for declaring project dependencies
* additional mechanisms for configuring which files to include in source releases (including plugins for integration with version control systems)
* the ability to declare project “entry points”, which can be used as the basis for application plugin systems
* the ability to automatically generate Windows command line executables at installation time rather than needing to prebuild them
* consistent behaviour across all supported Python versions

The recommended [pip](https://pip.pypa.io/) installer runs all `setup.py` scripts with `setuptools`, even if the script itself only imports `distutils`. Refer to the [Python Packaging User Guide](https://packaging.python.org) for more information.

For the benefit of packaging tool authors and users seeking a deeper understanding of the details of the current packaging and distribution system, the legacy [`distutils`](#module-distutils "distutils: Support for building and installing Python modules into an existing Python installation.") based user documentation and API reference remain available:

* [Installing Python Modules (Legacy version)](../install/index#install-index)
* [Distributing Python Modules (Legacy version)](../distutils/index#distutils-index)

python textwrap — Text wrapping and filling

textwrap — Text wrapping and filling
====================================

**Source code:** [Lib/textwrap.py](https://github.com/python/cpython/tree/3.9/Lib/textwrap.py)

The [`textwrap`](#module-textwrap "textwrap: Text wrapping and filling") module provides some convenience functions, as well as [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper"), the class that does all the work. If you’re just wrapping or filling one or two text strings, the convenience functions should be good enough; otherwise, you should use an instance of [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") for efficiency.

`textwrap.wrap(text, width=70, *, initial_indent="", subsequent_indent="", expand_tabs=True, replace_whitespace=True, fix_sentence_endings=False, break_long_words=True, drop_whitespace=True, break_on_hyphens=True, tabsize=8, max_lines=None, placeholder=' [...]')`

Wraps the single paragraph in *text* (a string) so every line is at most *width* characters long. Returns a list of output lines, without final newlines.

Optional keyword arguments correspond to the instance attributes of [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper"), documented below.

See the [`TextWrapper.wrap()`](#textwrap.TextWrapper.wrap "textwrap.TextWrapper.wrap") method for additional details on how [`wrap()`](#textwrap.wrap "textwrap.wrap") behaves.

`textwrap.fill(text, width=70, *, initial_indent="", subsequent_indent="", expand_tabs=True, replace_whitespace=True, fix_sentence_endings=False, break_long_words=True, drop_whitespace=True, break_on_hyphens=True, tabsize=8, max_lines=None, placeholder=' [...]')`

Wraps the single paragraph in *text*, and returns a single string containing the wrapped paragraph. [`fill()`](#textwrap.fill "textwrap.fill") is shorthand for

```
"\n".join(wrap(text, ...))
```

In particular, [`fill()`](#textwrap.fill "textwrap.fill") accepts exactly the same keyword arguments as [`wrap()`](#textwrap.wrap "textwrap.wrap").
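For example (output shown for this sample input):

```
>>> import textwrap
>>> text = "The quick brown fox jumped over the lazy dog."
>>> textwrap.wrap(text, width=20)
['The quick brown fox', 'jumped over the lazy', 'dog.']
>>> print(textwrap.fill(text, width=20))
The quick brown fox
jumped over the lazy
dog.
```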
`textwrap.shorten(text, width, *, fix_sentence_endings=False, break_long_words=True, break_on_hyphens=True, placeholder=' [...]')` Collapse and truncate the given *text* to fit in the given *width*. First the whitespace in *text* is collapsed (all whitespace is replaced by single spaces). If the result fits in the *width*, it is returned. Otherwise, enough words are dropped from the end so that the remaining words plus the `placeholder` fit within `width`: ``` >>> textwrap.shorten("Hello world!", width=12) 'Hello world!' >>> textwrap.shorten("Hello world!", width=11) 'Hello [...]' >>> textwrap.shorten("Hello world", width=10, placeholder="...") 'Hello...' ``` Optional keyword arguments correspond to the instance attributes of [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper"), documented below. Note that the whitespace is collapsed before the text is passed to the [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") [`fill()`](#textwrap.fill "textwrap.fill") function, so changing the value of [`tabsize`](#textwrap.TextWrapper.tabsize "textwrap.TextWrapper.tabsize"), [`expand_tabs`](#textwrap.TextWrapper.expand_tabs "textwrap.TextWrapper.expand_tabs"), [`drop_whitespace`](#textwrap.TextWrapper.drop_whitespace "textwrap.TextWrapper.drop_whitespace"), and [`replace_whitespace`](#textwrap.TextWrapper.replace_whitespace "textwrap.TextWrapper.replace_whitespace") will have no effect. New in version 3.4. `textwrap.dedent(text)` Remove any common leading whitespace from every line in *text*. This can be used to make triple-quoted strings line up with the left edge of the display, while still presenting them in the source code in indented form. Note that tabs and spaces are both treated as whitespace, but they are not equal: the lines `"  hello"` and `"\thello"` are considered to have no common leading whitespace. Lines containing only whitespace are ignored in the input and normalized to a single newline character in the output. For example: ``` def test(): # end first line with \ to avoid the empty line! s = '''\ hello world ''' print(repr(s)) # prints ' hello\n world\n ' print(repr(dedent(s))) # prints 'hello\n world\n' ``` `textwrap.indent(text, prefix, predicate=None)` Add *prefix* to the beginning of selected lines in *text*. Lines are separated by calling `text.splitlines(True)`. By default, *prefix* is added to all lines that do not consist solely of whitespace (including any line endings). For example: ``` >>> s = 'hello\n\n \nworld' >>> indent(s, ' ') ' hello\n\n \n world' ``` The optional *predicate* argument can be used to control which lines are indented. For example, it is easy to add *prefix* to even empty and whitespace-only lines: ``` >>> print(indent(s, '+ ', lambda line: True)) + hello + + + world ``` New in version 3.3. [`wrap()`](#textwrap.wrap "textwrap.wrap"), [`fill()`](#textwrap.fill "textwrap.fill") and [`shorten()`](#textwrap.shorten "textwrap.shorten") work by creating a [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") instance and calling a single method on it. That instance is not reused, so for applications that process many text strings using [`wrap()`](#textwrap.wrap "textwrap.wrap") and/or [`fill()`](#textwrap.fill "textwrap.fill"), it may be more efficient to create your own [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") object. 
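A minimal sketch of that reuse pattern (the width and sample paragraphs are arbitrary):

```
import textwrap

# One TextWrapper, configured once, applied to many strings
wrapper = textwrap.TextWrapper(width=30, subsequent_indent="  ")
paragraphs = [
    "First paragraph of sample text that needs wrapping.",
    "Second paragraph, wrapped with the same settings.",
]
for paragraph in paragraphs:
    print(wrapper.fill(paragraph))
```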
Text is preferably wrapped on whitespaces and right after the hyphens in hyphenated words; only then will long words be broken if necessary, unless [`TextWrapper.break_long_words`](#textwrap.TextWrapper.break_long_words "textwrap.TextWrapper.break_long_words") is set to false. `class textwrap.TextWrapper(**kwargs)` The [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") constructor accepts a number of optional keyword arguments. Each keyword argument corresponds to an instance attribute, so for example ``` wrapper = TextWrapper(initial_indent="* ") ``` is the same as ``` wrapper = TextWrapper() wrapper.initial_indent = "* " ``` You can re-use the same [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") object many times, and you can change any of its options through direct assignment to instance attributes between uses. The [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") instance attributes (and keyword arguments to the constructor) are as follows: `width` (default: `70`) The maximum length of wrapped lines. As long as there are no individual words in the input text longer than [`width`](#textwrap.TextWrapper.width "textwrap.TextWrapper.width"), [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") guarantees that no output line will be longer than [`width`](#textwrap.TextWrapper.width "textwrap.TextWrapper.width") characters. `expand_tabs` (default: `True`) If true, then all tab characters in *text* will be expanded to spaces using the `expandtabs()` method of *text*. `tabsize` (default: `8`) If [`expand_tabs`](#textwrap.TextWrapper.expand_tabs "textwrap.TextWrapper.expand_tabs") is true, then all tab characters in *text* will be expanded to zero or more spaces, depending on the current column and the given tab size. New in version 3.3. `replace_whitespace` (default: `True`) If true, after tab expansion but before wrapping, the [`wrap()`](#textwrap.wrap "textwrap.wrap") method will replace each whitespace character with a single space. The whitespace characters replaced are as follows: tab, newline, vertical tab, formfeed, and carriage return (`'\t\n\v\f\r'`). Note If [`expand_tabs`](#textwrap.TextWrapper.expand_tabs "textwrap.TextWrapper.expand_tabs") is false and [`replace_whitespace`](#textwrap.TextWrapper.replace_whitespace "textwrap.TextWrapper.replace_whitespace") is true, each tab character will be replaced by a single space, which is *not* the same as tab expansion. Note If [`replace_whitespace`](#textwrap.TextWrapper.replace_whitespace "textwrap.TextWrapper.replace_whitespace") is false, newlines may appear in the middle of a line and cause strange output. For this reason, text should be split into paragraphs (using [`str.splitlines()`](stdtypes#str.splitlines "str.splitlines") or similar) which are wrapped separately. `drop_whitespace` (default: `True`) If true, whitespace at the beginning and ending of every line (after wrapping but before indenting) is dropped. Whitespace at the beginning of the paragraph, however, is not dropped if non-whitespace follows it. If whitespace being dropped takes up an entire line, the whole line is dropped. `initial_indent` (default: `''`) String that will be prepended to the first line of wrapped output. Counts towards the length of the first line. The empty string is not indented. `subsequent_indent` (default: `''`) String that will be prepended to all lines of wrapped output except the first. Counts towards the length of each line except the first. 
`fix_sentence_endings`

(default: `False`) If true, [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") attempts to detect sentence endings and ensure that sentences are always separated by exactly two spaces. This is generally desired for text in a monospaced font. However, the sentence detection algorithm is imperfect: it assumes that a sentence ending consists of a lowercase letter followed by one of `'.'`, `'!'`, or `'?'`, possibly followed by one of `'"'` or `"'"`, followed by a space. One problem with this algorithm is that it is unable to detect the difference between “Dr.” in

```
[...] Dr. Frankenstein's monster [...]
```

and “Spot.” in

```
[...] See Spot. See Spot run [...]
```

[`fix_sentence_endings`](#textwrap.TextWrapper.fix_sentence_endings "textwrap.TextWrapper.fix_sentence_endings") is false by default.

Since the sentence detection algorithm relies on `string.lowercase` for the definition of “lowercase letter”, and a convention of using two spaces after a period to separate sentences on the same line, it is specific to English-language texts.

`break_long_words`

(default: `True`) If true, then words longer than [`width`](#textwrap.TextWrapper.width "textwrap.TextWrapper.width") will be broken in order to ensure that no lines are longer than [`width`](#textwrap.TextWrapper.width "textwrap.TextWrapper.width"). If it is false, long words will not be broken, and some lines may be longer than [`width`](#textwrap.TextWrapper.width "textwrap.TextWrapper.width"). (Long words will be put on a line by themselves, in order to minimize the amount by which [`width`](#textwrap.TextWrapper.width "textwrap.TextWrapper.width") is exceeded.)

`break_on_hyphens`

(default: `True`) If true, wrapping will occur preferably on whitespaces and right after hyphens in compound words, as it is customary in English. If false, only whitespaces will be considered as potentially good places for line breaks, but you need to set [`break_long_words`](#textwrap.TextWrapper.break_long_words "textwrap.TextWrapper.break_long_words") to false if you want truly unbreakable words. Default behaviour in previous versions was to always allow breaking hyphenated words.

`max_lines`

(default: `None`) If not `None`, then the output will contain at most *max\_lines* lines, with *placeholder* appearing at the end of the output.

New in version 3.4.

`placeholder`

(default: `' [...]'`) String that will appear at the end of the output text if it has been truncated.

New in version 3.4.

[`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") also provides some public methods, analogous to the module-level convenience functions:

`wrap(text)`

Wraps the single paragraph in *text* (a string) so every line is at most [`width`](#textwrap.TextWrapper.width "textwrap.TextWrapper.width") characters long. All wrapping options are taken from instance attributes of the [`TextWrapper`](#textwrap.TextWrapper "textwrap.TextWrapper") instance. Returns a list of output lines, without final newlines. If the wrapped output has no content, the returned list is empty.

`fill(text)`

Wraps the single paragraph in *text*, and returns a single string containing the wrapped paragraph.
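For example, *max\_lines* together with the default *placeholder* truncate the output (shown for this sample input):

```
>>> import textwrap
>>> wrapper = textwrap.TextWrapper(width=20, max_lines=2)
>>> wrapper.wrap("The quick brown fox jumped over the lazy dog.")
['The quick brown fox', 'jumped over [...]']
```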
python stringprep — Internet String Preparation

stringprep — Internet String Preparation
========================================

**Source code:** [Lib/stringprep.py](https://github.com/python/cpython/tree/3.9/Lib/stringprep.py)

When identifying things (such as host names) on the internet, it is often necessary to compare such identifications for “equality”. Exactly how this comparison is executed may depend on the application domain, e.g. whether it should be case-insensitive or not. It may also be necessary to restrict the possible identifications, to allow only identifications consisting of “printable” characters.

[**RFC 3454**](https://tools.ietf.org/html/rfc3454.html) defines a procedure for “preparing” Unicode strings in internet protocols. Before passing strings onto the wire, they are processed with the preparation procedure, after which they have a certain normalized form. The RFC defines a set of tables, which can be combined into profiles. Each profile must define which tables it uses, and what other optional parts of the `stringprep` procedure are part of the profile. One example of a `stringprep` profile is `nameprep`, which is used for internationalized domain names.

The module [`stringprep`](#module-stringprep "stringprep: String preparation, as per RFC 3454") only exposes the tables from [**RFC 3454**](https://tools.ietf.org/html/rfc3454.html). As these tables would be very large to represent as dictionaries or lists, the module uses the Unicode character database internally. The module source code itself was generated using the `mkstringprep.py` utility.

As a result, these tables are exposed as functions, not as data structures. There are two kinds of tables in the RFC: sets and mappings. For a set, [`stringprep`](#module-stringprep "stringprep: String preparation, as per RFC 3454") provides the “characteristic function”, i.e. a function that returns `True` if the parameter is part of the set. For mappings, it provides the mapping function: given the key, it returns the associated value. Below is a list of all functions available in the module.

`stringprep.in_table_a1(code)`

Determine whether *code* is in table A.1 (Unassigned code points in Unicode 3.2).

`stringprep.in_table_b1(code)`

Determine whether *code* is in table B.1 (Commonly mapped to nothing).

`stringprep.map_table_b2(code)`

Return the mapped value for *code* according to table B.2 (Mapping for case-folding used with NFKC).

`stringprep.map_table_b3(code)`

Return the mapped value for *code* according to table B.3 (Mapping for case-folding used with no normalization).

`stringprep.in_table_c11(code)`

Determine whether *code* is in table C.1.1 (ASCII space characters).

`stringprep.in_table_c12(code)`

Determine whether *code* is in table C.1.2 (Non-ASCII space characters).

`stringprep.in_table_c11_c12(code)`

Determine whether *code* is in table C.1 (Space characters, union of C.1.1 and C.1.2).

`stringprep.in_table_c21(code)`

Determine whether *code* is in table C.2.1 (ASCII control characters).

`stringprep.in_table_c22(code)`

Determine whether *code* is in table C.2.2 (Non-ASCII control characters).

`stringprep.in_table_c21_c22(code)`

Determine whether *code* is in table C.2 (Control characters, union of C.2.1 and C.2.2).

`stringprep.in_table_c3(code)`

Determine whether *code* is in table C.3 (Private use).

`stringprep.in_table_c4(code)`

Determine whether *code* is in table C.4 (Non-character code points).

`stringprep.in_table_c5(code)`

Determine whether *code* is in table C.5 (Surrogate codes).
`stringprep.in_table_c6(code)`

Determine whether *code* is in table C.6 (Inappropriate for plain text).

`stringprep.in_table_c7(code)`

Determine whether *code* is in table C.7 (Inappropriate for canonical representation).

`stringprep.in_table_c8(code)`

Determine whether *code* is in table C.8 (Change display properties or are deprecated).

`stringprep.in_table_c9(code)`

Determine whether *code* is in table C.9 (Tagging characters).

`stringprep.in_table_d1(code)`

Determine whether *code* is in table D.1 (Characters with bidirectional property “R” or “AL”).

`stringprep.in_table_d2(code)`

Determine whether *code* is in table D.2 (Characters with bidirectional property “L”).

python codecs — Codec registry and base classes

codecs — Codec registry and base classes
========================================

**Source code:** [Lib/codecs.py](https://github.com/python/cpython/tree/3.9/Lib/codecs.py)

This module defines base classes for standard Python codecs (encoders and decoders) and provides access to the internal Python codec registry, which manages the codec and error handling lookup process. Most standard codecs are [text encodings](../glossary#term-text-encoding), which encode text to bytes (and decode bytes to text), but there are also codecs provided that encode text to text, and bytes to bytes. Custom codecs may encode and decode between arbitrary types, but some module features are restricted to be used specifically with [text encodings](../glossary#term-text-encoding) or with codecs that encode to [`bytes`](stdtypes#bytes "bytes").

The module defines the following functions for encoding and decoding with any codec:

`codecs.encode(obj, encoding='utf-8', errors='strict')`

Encodes *obj* using the codec registered for *encoding*.

*Errors* may be given to set the desired error handling scheme. The default error handler is `'strict'` meaning that encoding errors raise [`ValueError`](exceptions#ValueError "ValueError") (or a more codec specific subclass, such as [`UnicodeEncodeError`](exceptions#UnicodeEncodeError "UnicodeEncodeError")). Refer to [Codec Base Classes](#codec-base-classes) for more information on codec error handling.

`codecs.decode(obj, encoding='utf-8', errors='strict')`

Decodes *obj* using the codec registered for *encoding*.

*Errors* may be given to set the desired error handling scheme. The default error handler is `'strict'` meaning that decoding errors raise [`ValueError`](exceptions#ValueError "ValueError") (or a more codec specific subclass, such as [`UnicodeDecodeError`](exceptions#UnicodeDecodeError "UnicodeDecodeError")). Refer to [Codec Base Classes](#codec-base-classes) for more information on codec error handling.

The full details for each codec can also be looked up directly:

`codecs.lookup(encoding)`

Looks up the codec info in the Python codec registry and returns a [`CodecInfo`](#codecs.CodecInfo "codecs.CodecInfo") object as defined below.

Encodings are first looked up in the registry’s cache. If not found, the list of registered search functions is scanned. If no [`CodecInfo`](#codecs.CodecInfo "codecs.CodecInfo") object is found, a [`LookupError`](exceptions#LookupError "LookupError") is raised. Otherwise, the [`CodecInfo`](#codecs.CodecInfo "codecs.CodecInfo") object is stored in the cache and returned to the caller.

`class codecs.CodecInfo(encode, decode, streamreader=None, streamwriter=None, incrementalencoder=None, incrementaldecoder=None, name=None)`

Codec details when looking up the codec registry.
The constructor arguments are stored in attributes of the same name: `name` The name of the encoding. `encode` `decode` The stateless encoding and decoding functions. These must be functions or methods which have the same interface as the [`encode()`](#codecs.Codec.encode "codecs.Codec.encode") and [`decode()`](#codecs.Codec.decode "codecs.Codec.decode") methods of Codec instances (see [Codec Interface](#codec-objects)). The functions or methods are expected to work in a stateless mode. `incrementalencoder` `incrementaldecoder` Incremental encoder and decoder classes or factory functions. These have to provide the interface defined by the base classes [`IncrementalEncoder`](#codecs.IncrementalEncoder "codecs.IncrementalEncoder") and [`IncrementalDecoder`](#codecs.IncrementalDecoder "codecs.IncrementalDecoder"), respectively. Incremental codecs can maintain state. `streamwriter` `streamreader` Stream writer and reader classes or factory functions. These have to provide the interface defined by the base classes [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") and [`StreamReader`](#codecs.StreamReader "codecs.StreamReader"), respectively. Stream codecs can maintain state. To simplify access to the various codec components, the module provides these additional functions which use [`lookup()`](#codecs.lookup "codecs.lookup") for the codec lookup: `codecs.getencoder(encoding)` Look up the codec for the given encoding and return its encoder function. Raises a [`LookupError`](exceptions#LookupError "LookupError") in case the encoding cannot be found. `codecs.getdecoder(encoding)` Look up the codec for the given encoding and return its decoder function. Raises a [`LookupError`](exceptions#LookupError "LookupError") in case the encoding cannot be found. `codecs.getincrementalencoder(encoding)` Look up the codec for the given encoding and return its incremental encoder class or factory function. Raises a [`LookupError`](exceptions#LookupError "LookupError") in case the encoding cannot be found or the codec doesn’t support an incremental encoder. `codecs.getincrementaldecoder(encoding)` Look up the codec for the given encoding and return its incremental decoder class or factory function. Raises a [`LookupError`](exceptions#LookupError "LookupError") in case the encoding cannot be found or the codec doesn’t support an incremental decoder. `codecs.getreader(encoding)` Look up the codec for the given encoding and return its [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") class or factory function. Raises a [`LookupError`](exceptions#LookupError "LookupError") in case the encoding cannot be found. `codecs.getwriter(encoding)` Look up the codec for the given encoding and return its [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") class or factory function. Raises a [`LookupError`](exceptions#LookupError "LookupError") in case the encoding cannot be found. Custom codecs are made available by registering a suitable codec search function: `codecs.register(search_function)` Register a codec search function. Search functions are expected to take one argument, being the encoding name in all lower case letters with hyphens and spaces converted to underscores, and return a [`CodecInfo`](#codecs.CodecInfo "codecs.CodecInfo") object. In case a search function cannot find a given encoding, it should return `None`. Changed in version 3.9: Hyphens and spaces are converted to underscore. 
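For example, a minimal sketch of a search function; the alias `my_rot13` is made up here, and it simply reuses the [`CodecInfo`](#codecs.CodecInfo "codecs.CodecInfo") of the built-in `rot_13` codec:

```
import codecs

def my_search(encoding):
    # "encoding" arrives lowercased, with hyphens/spaces as underscores
    if encoding == "my_rot13":
        return codecs.lookup("rot_13")  # reuse an existing CodecInfo
    return None  # decline: let other search functions try

codecs.register(my_search)
print(codecs.encode("Hello", "my_rot13"))  # Uryyb
```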
Note Search function registration is not currently reversible, which may cause problems in some cases, such as unit testing or module reloading. While the builtin [`open()`](functions#open "open") and the associated [`io`](io#module-io "io: Core tools for working with streams.") module are the recommended approach for working with encoded text files, this module provides additional utility functions and classes that allow the use of a wider range of codecs when working with binary files: `codecs.open(filename, mode='r', encoding=None, errors='strict', buffering=-1)` Open an encoded file using the given *mode* and return an instance of [`StreamReaderWriter`](#codecs.StreamReaderWriter "codecs.StreamReaderWriter"), providing transparent encoding/decoding. The default file mode is `'r'`, meaning to open the file in read mode. Note Underlying encoded files are always opened in binary mode. No automatic conversion of `'\n'` is done on reading and writing. The *mode* argument may be any binary mode acceptable to the built-in [`open()`](functions#open "open") function; the `'b'` is automatically added. *encoding* specifies the encoding which is to be used for the file. Any encoding that encodes to and decodes from bytes is allowed, and the data types supported by the file methods depend on the codec used. *errors* may be given to define the error handling. It defaults to `'strict'` which causes a [`ValueError`](exceptions#ValueError "ValueError") to be raised in case an encoding error occurs. *buffering* has the same meaning as for the built-in [`open()`](functions#open "open") function. It defaults to -1 which means that the default buffer size will be used. `codecs.EncodedFile(file, data_encoding, file_encoding=None, errors='strict')` Return a [`StreamRecoder`](#codecs.StreamRecoder "codecs.StreamRecoder") instance, a wrapped version of *file* which provides transparent transcoding. The original file is closed when the wrapped version is closed. Data written to the wrapped file is decoded according to the given *data\_encoding* and then written to the original file as bytes using *file\_encoding*. Bytes read from the original file are decoded according to *file\_encoding*, and the result is encoded using *data\_encoding*. If *file\_encoding* is not given, it defaults to *data\_encoding*. *errors* may be given to define the error handling. It defaults to `'strict'`, which causes [`ValueError`](exceptions#ValueError "ValueError") to be raised in case an encoding error occurs. `codecs.iterencode(iterator, encoding, errors='strict', **kwargs)` Uses an incremental encoder to iteratively encode the input provided by *iterator*. This function is a [generator](../glossary#term-generator). The *errors* argument (as well as any other keyword argument) is passed through to the incremental encoder. This function requires that the codec accept text [`str`](stdtypes#str "str") objects to encode. Therefore it does not support bytes-to-bytes encoders such as `base64_codec`. `codecs.iterdecode(iterator, encoding, errors='strict', **kwargs)` Uses an incremental decoder to iteratively decode the input provided by *iterator*. This function is a [generator](../glossary#term-generator). The *errors* argument (as well as any other keyword argument) is passed through to the incremental decoder. This function requires that the codec accept [`bytes`](stdtypes#bytes "bytes") objects to decode. 
Therefore it does not support text-to-text encoders such as `rot_13`, although `rot_13` may be used equivalently with [`iterencode()`](#codecs.iterencode "codecs.iterencode").

The module also provides the following constants which are useful for reading and writing to platform-dependent files:

`codecs.BOM`

`codecs.BOM_BE`

`codecs.BOM_LE`

`codecs.BOM_UTF8`

`codecs.BOM_UTF16`

`codecs.BOM_UTF16_BE`

`codecs.BOM_UTF16_LE`

`codecs.BOM_UTF32`

`codecs.BOM_UTF32_BE`

`codecs.BOM_UTF32_LE`

These constants define various byte sequences, being Unicode byte order marks (BOMs) for several encodings. They are used in UTF-16 and UTF-32 data streams to indicate the byte order used, and in UTF-8 as a Unicode signature. [`BOM_UTF16`](#codecs.BOM_UTF16 "codecs.BOM_UTF16") is either [`BOM_UTF16_BE`](#codecs.BOM_UTF16_BE "codecs.BOM_UTF16_BE") or [`BOM_UTF16_LE`](#codecs.BOM_UTF16_LE "codecs.BOM_UTF16_LE") depending on the platform’s native byte order, [`BOM`](#codecs.BOM "codecs.BOM") is an alias for [`BOM_UTF16`](#codecs.BOM_UTF16 "codecs.BOM_UTF16"), [`BOM_LE`](#codecs.BOM_LE "codecs.BOM_LE") for [`BOM_UTF16_LE`](#codecs.BOM_UTF16_LE "codecs.BOM_UTF16_LE") and [`BOM_BE`](#codecs.BOM_BE "codecs.BOM_BE") for [`BOM_UTF16_BE`](#codecs.BOM_UTF16_BE "codecs.BOM_UTF16_BE"). The others represent the BOM in UTF-8 and UTF-32 encodings.

Codec Base Classes
------------------

The [`codecs`](#module-codecs "codecs: Encode and decode data and streams.") module defines a set of base classes which define the interfaces for working with codec objects, and can also be used as the basis for custom codec implementations.

Each codec has to define four interfaces to make it usable as a codec in Python: stateless encoder, stateless decoder, stream reader and stream writer. The stream reader and writers typically reuse the stateless encoder/decoder to implement the file protocols. Codec authors also need to define how the codec will handle encoding and decoding errors.

### Error Handlers

To simplify and standardize error handling, codecs may implement different error handling schemes by accepting the *errors* string argument:

```
>>> 'German ß, ♬'.encode(encoding='ascii', errors='backslashreplace')
b'German \\xdf, \\u266c'
>>> 'German ß, ♬'.encode(encoding='ascii', errors='xmlcharrefreplace')
b'German &#223;, &#9836;'
```

The following error handlers can be used with all Python [Standard Encodings](#standard-encodings) codecs:

| Value | Meaning |
| --- | --- |
| `'strict'` | Raise [`UnicodeError`](exceptions#UnicodeError "UnicodeError") (or a subclass); this is the default. Implemented in [`strict_errors()`](#codecs.strict_errors "codecs.strict_errors"). |
| `'ignore'` | Ignore the malformed data and continue without further notice. Implemented in [`ignore_errors()`](#codecs.ignore_errors "codecs.ignore_errors"). |
| `'replace'` | Replace with a replacement marker. On encoding, use `?` (ASCII character). On decoding, use `�` (U+FFFD, the official REPLACEMENT CHARACTER). Implemented in [`replace_errors()`](#codecs.replace_errors "codecs.replace_errors"). |
| `'backslashreplace'` | Replace with backslashed escape sequences. On encoding, use hexadecimal form of Unicode code point with formats `\xhh` `\uxxxx` `\Uxxxxxxxx`. On decoding, use hexadecimal form of byte value with format `\xhh`. Implemented in [`backslashreplace_errors()`](#codecs.backslashreplace_errors "codecs.backslashreplace_errors"). |
| `'surrogateescape'` | On decoding, replace each byte with an individual surrogate code ranging from `U+DC80` to `U+DCFF`.
This code will then be turned back into the same byte when the `'surrogateescape'` error handler is used when encoding the data. (See [**PEP 383**](https://www.python.org/dev/peps/pep-0383) for more.) | The following error handlers are only applicable to encoding (within [text encodings](../glossary#term-text-encoding)): | Value | Meaning | | --- | --- | | `'xmlcharrefreplace'` | Replace with XML/HTML numeric character reference, which is a decimal form of Unicode code point with format `&#num;`. Implemented in [`xmlcharrefreplace_errors()`](#codecs.xmlcharrefreplace_errors "codecs.xmlcharrefreplace_errors"). | | `'namereplace'` | Replace with `\N{...}` escape sequences; what appears in the braces is the Name property from the Unicode Character Database. Implemented in [`namereplace_errors()`](#codecs.namereplace_errors "codecs.namereplace_errors"). | In addition, the following error handler is specific to the given codecs: | Value | Codecs | Meaning | | --- | --- | --- | | `'surrogatepass'` | utf-8, utf-16, utf-32, utf-16-be, utf-16-le, utf-32-be, utf-32-le | Allow encoding and decoding surrogate code points (`U+D800`–`U+DFFF`) as normal code points. Otherwise these codecs treat the presence of surrogate code points in [`str`](stdtypes#str "str") as an error. | New in version 3.1: The `'surrogateescape'` and `'surrogatepass'` error handlers. Changed in version 3.4: The `'surrogatepass'` error handler now works with utf-16\* and utf-32\* codecs. New in version 3.5: The `'namereplace'` error handler. Changed in version 3.5: The `'backslashreplace'` error handler now works with decoding and translating. The set of allowed values can be extended by registering a new named error handler: `codecs.register_error(name, error_handler)` Register the error handling function *error\_handler* under the name *name*. The *error\_handler* argument will be called during encoding and decoding in case of an error, when *name* is specified as the errors parameter. For encoding, *error\_handler* will be called with a [`UnicodeEncodeError`](exceptions#UnicodeEncodeError "UnicodeEncodeError") instance, which contains information about the location of the error. The error handler must either raise this or a different exception, or return a tuple with a replacement for the unencodable part of the input and a position where encoding should continue. The replacement may be either [`str`](stdtypes#str "str") or [`bytes`](stdtypes#bytes "bytes"). If the replacement is bytes, the encoder will simply copy them into the output buffer. If the replacement is a string, the encoder will encode the replacement. Encoding continues on the original input at the specified position. Negative position values will be treated as being relative to the end of the input string. If the resulting position is out of bounds, an [`IndexError`](exceptions#IndexError "IndexError") will be raised. Decoding and translating work similarly, except that [`UnicodeDecodeError`](exceptions#UnicodeDecodeError "UnicodeDecodeError") or [`UnicodeTranslateError`](exceptions#UnicodeTranslateError "UnicodeTranslateError") will be passed to the handler, and the replacement from the error handler will be put into the output directly. Previously registered error handlers (including the standard error handlers) can be looked up by name: `codecs.lookup_error(name)` Return the error handler previously registered under the name *name*. Raises a [`LookupError`](exceptions#LookupError "LookupError") in case the handler cannot be found.
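As a minimal sketch of how a custom handler plugs into this machinery (the handler name `hyphenreplace` and the function below are invented for illustration; they are not part of the standard set), an encoding error handler receives the exception, returns a replacement together with the position at which to resume, and becomes usable by name once registered:

```
import codecs

def hyphen_replace(exc):
    # Substitute '-' for each unencodable character and resume
    # encoding immediately after the offending span.
    if isinstance(exc, UnicodeEncodeError):
        return ('-' * (exc.end - exc.start), exc.end)
    raise exc

codecs.register_error('hyphenreplace', hyphen_replace)

print('Straße'.encode('ascii', errors='hyphenreplace'))
# b'Stra-e'
print(codecs.lookup_error('hyphenreplace') is hyphen_replace)
# True
```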
The following standard error handlers are also made available as module level functions: `codecs.strict_errors(exception)` Implements the `'strict'` error handling. Each encoding or decoding error raises a [`UnicodeError`](exceptions#UnicodeError "UnicodeError"). `codecs.ignore_errors(exception)` Implements the `'ignore'` error handling. Malformed data is ignored; encoding or decoding is continued without further notice. `codecs.replace_errors(exception)` Implements the `'replace'` error handling. Substitutes `?` (ASCII character) for encoding errors or `�` (U+FFFD, the official REPLACEMENT CHARACTER) for decoding errors. `codecs.backslashreplace_errors(exception)` Implements the `'backslashreplace'` error handling. Malformed data is replaced by a backslashed escape sequence. On encoding, use the hexadecimal form of Unicode code point with formats `\xhh` `\uxxxx` `\Uxxxxxxxx`. On decoding, use the hexadecimal form of byte value with format `\xhh`. Changed in version 3.5: Works with decoding and translating. `codecs.xmlcharrefreplace_errors(exception)` Implements the `'xmlcharrefreplace'` error handling (for encoding within [text encoding](../glossary#term-text-encoding) only). The unencodable character is replaced by an appropriate XML/HTML numeric character reference, which is a decimal form of Unicode code point with format `&#num;` . `codecs.namereplace_errors(exception)` Implements the `'namereplace'` error handling (for encoding within [text encoding](../glossary#term-text-encoding) only). The unencodable character is replaced by a `\N{...}` escape sequence. The set of characters that appear in the braces is the Name property from Unicode Character Database. For example, the German lowercase letter `'ß'` will be converted to byte sequence `\N{LATIN SMALL LETTER SHARP S}` . New in version 3.5. ### Stateless Encoding and Decoding The base `Codec` class defines these methods which also define the function interfaces of the stateless encoder and decoder: `Codec.encode(input, errors='strict')` Encodes the object *input* and returns a tuple (output object, length consumed). For instance, [text encoding](../glossary#term-text-encoding) converts a string object to a bytes object using a particular character set encoding (e.g., `cp1252` or `iso-8859-1`). The *errors* argument defines the error handling to apply. It defaults to `'strict'` handling. The method may not store state in the `Codec` instance. Use [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") for codecs which have to keep state in order to make encoding efficient. The encoder must be able to handle zero length input and return an empty object of the output object type in this situation. `Codec.decode(input, errors='strict')` Decodes the object *input* and returns a tuple (output object, length consumed). For instance, for a [text encoding](../glossary#term-text-encoding), decoding converts a bytes object encoded using a particular character set encoding to a string object. For text encodings and bytes-to-bytes codecs, *input* must be a bytes object or one which provides the read-only buffer interface – for example, buffer objects and memory mapped files. The *errors* argument defines the error handling to apply. It defaults to `'strict'` handling. The method may not store state in the `Codec` instance. Use [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") for codecs which have to keep state in order to make decoding efficient. 
The decoder must be able to handle zero length input and return an empty object of the output object type in this situation. ### Incremental Encoding and Decoding The [`IncrementalEncoder`](#codecs.IncrementalEncoder "codecs.IncrementalEncoder") and [`IncrementalDecoder`](#codecs.IncrementalDecoder "codecs.IncrementalDecoder") classes provide the basic interface for incremental encoding and decoding. Encoding/decoding the input isn’t done with one call to the stateless encoder/decoder function, but with multiple calls to the [`encode()`](#codecs.IncrementalEncoder.encode "codecs.IncrementalEncoder.encode")/[`decode()`](#codecs.IncrementalDecoder.decode "codecs.IncrementalDecoder.decode") method of the incremental encoder/decoder. The incremental encoder/decoder keeps track of the encoding/decoding process during method calls. The joined output of calls to the [`encode()`](#codecs.IncrementalEncoder.encode "codecs.IncrementalEncoder.encode")/[`decode()`](#codecs.IncrementalDecoder.decode "codecs.IncrementalDecoder.decode") method is the same as if all the single inputs were joined into one, and this input was encoded/decoded with the stateless encoder/decoder. #### IncrementalEncoder Objects The [`IncrementalEncoder`](#codecs.IncrementalEncoder "codecs.IncrementalEncoder") class is used for encoding an input in multiple steps. It defines the following methods which every incremental encoder must define in order to be compatible with the Python codec registry. `class codecs.IncrementalEncoder(errors='strict')` Constructor for an [`IncrementalEncoder`](#codecs.IncrementalEncoder "codecs.IncrementalEncoder") instance. All incremental encoders must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. The [`IncrementalEncoder`](#codecs.IncrementalEncoder "codecs.IncrementalEncoder") may implement different error handling schemes by providing the *errors* keyword argument. See [Error Handlers](#error-handlers) for possible values. The *errors* argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the [`IncrementalEncoder`](#codecs.IncrementalEncoder "codecs.IncrementalEncoder") object. `encode(object, final=False)` Encodes *object* (taking the current state of the encoder into account) and returns the resulting encoded object. If this is the last call to [`encode()`](#codecs.encode "codecs.encode") *final* must be true (the default is false). `reset()` Reset the encoder to the initial state. The output is discarded: call `.encode(object, final=True)`, passing an empty byte or text string if necessary, to reset the encoder and to get the output. `getstate()` Return the current state of the encoder which must be an integer. The implementation should make sure that `0` is the most common state. (States that are more complicated than integers can be converted into an integer by marshaling/pickling the state and encoding the bytes of the resulting string into an integer.) `setstate(state)` Set the state of the encoder to *state*. *state* must be an encoder state returned by [`getstate()`](#codecs.IncrementalEncoder.getstate "codecs.IncrementalEncoder.getstate"). #### IncrementalDecoder Objects The [`IncrementalDecoder`](#codecs.IncrementalDecoder "codecs.IncrementalDecoder") class is used for decoding an input in multiple steps. 
It defines the following methods which every incremental decoder must define in order to be compatible with the Python codec registry. `class codecs.IncrementalDecoder(errors='strict')` Constructor for an [`IncrementalDecoder`](#codecs.IncrementalDecoder "codecs.IncrementalDecoder") instance. All incremental decoders must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. The [`IncrementalDecoder`](#codecs.IncrementalDecoder "codecs.IncrementalDecoder") may implement different error handling schemes by providing the *errors* keyword argument. See [Error Handlers](#error-handlers) for possible values. The *errors* argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the [`IncrementalDecoder`](#codecs.IncrementalDecoder "codecs.IncrementalDecoder") object. `decode(object, final=False)` Decodes *object* (taking the current state of the decoder into account) and returns the resulting decoded object. If this is the last call to [`decode()`](#codecs.decode "codecs.decode") *final* must be true (the default is false). If *final* is true the decoder must decode the input completely and must flush all buffers. If this isn’t possible (e.g. because of incomplete byte sequences at the end of the input) it must initiate error handling just like in the stateless case (which might raise an exception). `reset()` Reset the decoder to the initial state. `getstate()` Return the current state of the decoder. This must be a tuple with two items, the first must be the buffer containing the still undecoded input. The second must be an integer and can be additional state info. (The implementation should make sure that `0` is the most common additional state info.) If this additional state info is `0` it must be possible to set the decoder to the state which has no input buffered and `0` as the additional state info, so that feeding the previously buffered input to the decoder returns it to the previous state without producing any output. (Additional state info that is more complicated than integers can be converted into an integer by marshaling/pickling the info and encoding the bytes of the resulting string into an integer.) `setstate(state)` Set the state of the decoder to *state*. *state* must be a decoder state returned by [`getstate()`](#codecs.IncrementalDecoder.getstate "codecs.IncrementalDecoder.getstate"). ### Stream Encoding and Decoding The [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") and [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") classes provide generic working interfaces which can be used to implement new encoding submodules very easily. See `encodings.utf_8` for an example of how this is done. #### StreamWriter Objects The [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") class is a subclass of `Codec` and defines the following methods which every stream writer must define in order to be compatible with the Python codec registry. `class codecs.StreamWriter(stream, errors='strict')` Constructor for a [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") instance. All stream writers must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. 
The *stream* argument must be a file-like object open for writing text or binary data, as appropriate for the specific codec. The [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") may implement different error handling schemes by providing the *errors* keyword argument. See [Error Handlers](#error-handlers) for the standard error handlers the underlying stream codec may support. The *errors* argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") object. `write(object)` Writes the object’s contents encoded to the stream. `writelines(list)` Writes the concatenated iterable of strings to the stream (possibly by reusing the [`write()`](#codecs.StreamWriter.write "codecs.StreamWriter.write") method). Infinite or very large iterables are not supported. The standard bytes-to-bytes codecs do not support this method. `reset()` Resets the codec buffers used for keeping internal state. Calling this method should ensure that the data on the output is put into a clean state that allows appending of new fresh data without having to rescan the whole stream to recover state. In addition to the above methods, the [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") must also inherit all other methods and attributes from the underlying stream. #### StreamReader Objects The [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") class is a subclass of `Codec` and defines the following methods which every stream reader must define in order to be compatible with the Python codec registry. `class codecs.StreamReader(stream, errors='strict')` Constructor for a [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") instance. All stream readers must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. The *stream* argument must be a file-like object open for reading text or binary data, as appropriate for the specific codec. The [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") may implement different error handling schemes by providing the *errors* keyword argument. See [Error Handlers](#error-handlers) for the standard error handlers the underlying stream codec may support. The *errors* argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") object. The set of allowed values for the *errors* argument can be extended with [`register_error()`](#codecs.register_error "codecs.register_error"). `read(size=-1, chars=-1, firstline=False)` Decodes data from the stream and returns the resulting object. The *chars* argument indicates the number of decoded code points or bytes to return. The [`read()`](#codecs.StreamReader.read "codecs.StreamReader.read") method will never return more data than requested, but it might return less, if there is not enough available. The *size* argument indicates the approximate maximum number of encoded bytes or code points to read for decoding. The decoder can modify this setting as appropriate. The default value -1 indicates to read and decode as much as possible. This parameter is intended to prevent having to decode huge files in one step. 
The *firstline* flag indicates that it would be sufficient to only return the first line, if there are decoding errors on later lines. The method should use a greedy read strategy meaning that it should read as much data as is allowed within the definition of the encoding and the given size, e.g. if optional encoding endings or state markers are available on the stream, these should be read too. `readline(size=None, keepends=True)` Read one line from the input stream and return the decoded data. *size*, if given, is passed as size argument to the stream’s [`read()`](#codecs.StreamReader.read "codecs.StreamReader.read") method. If *keepends* is false line-endings will be stripped from the lines returned. `readlines(sizehint=None, keepends=True)` Read all lines available on the input stream and return them as a list of lines. Line-endings are implemented using the codec’s [`decode()`](#codecs.decode "codecs.decode") method and are included in the list entries if *keepends* is true. *sizehint*, if given, is passed as the *size* argument to the stream’s [`read()`](#codecs.StreamReader.read "codecs.StreamReader.read") method. `reset()` Resets the codec buffers used for keeping internal state. Note that no stream repositioning should take place. This method is primarily intended to be able to recover from decoding errors. In addition to the above methods, the [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") must also inherit all other methods and attributes from the underlying stream. #### StreamReaderWriter Objects The [`StreamReaderWriter`](#codecs.StreamReaderWriter "codecs.StreamReaderWriter") is a convenience class that allows wrapping streams which work in both read and write modes. The design is such that one can use the factory functions returned by the [`lookup()`](#codecs.lookup "codecs.lookup") function to construct the instance. `class codecs.StreamReaderWriter(stream, Reader, Writer, errors='strict')` Creates a [`StreamReaderWriter`](#codecs.StreamReaderWriter "codecs.StreamReaderWriter") instance. *stream* must be a file-like object. *Reader* and *Writer* must be factory functions or classes providing the [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") and [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") interface resp. Error handling is done in the same way as defined for the stream readers and writers. [`StreamReaderWriter`](#codecs.StreamReaderWriter "codecs.StreamReaderWriter") instances define the combined interfaces of [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") and [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") classes. They inherit all other methods and attributes from the underlying stream. #### StreamRecoder Objects The [`StreamRecoder`](#codecs.StreamRecoder "codecs.StreamRecoder") translates data from one encoding to another, which is sometimes useful when dealing with different encoding environments. The design is such that one can use the factory functions returned by the [`lookup()`](#codecs.lookup "codecs.lookup") function to construct the instance. `class codecs.StreamRecoder(stream, encode, decode, Reader, Writer, errors='strict')` Creates a [`StreamRecoder`](#codecs.StreamRecoder "codecs.StreamRecoder") instance which implements a two-way conversion: *encode* and *decode* work on the frontend — the data visible to code calling `read()` and `write()`, while *Reader* and *Writer* work on the backend — the data in *stream*. 
You can use these objects to do transparent transcodings, e.g., from Latin-1 to UTF-8 and back. The *stream* argument must be a file-like object. The *encode* and *decode* arguments must adhere to the `Codec` interface. *Reader* and *Writer* must be factory functions or classes providing objects of the [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") and [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") interface respectively. Error handling is done in the same way as defined for the stream readers and writers. [`StreamRecoder`](#codecs.StreamRecoder "codecs.StreamRecoder") instances define the combined interfaces of [`StreamReader`](#codecs.StreamReader "codecs.StreamReader") and [`StreamWriter`](#codecs.StreamWriter "codecs.StreamWriter") classes. They inherit all other methods and attributes from the underlying stream. Encodings and Unicode --------------------- Strings are stored internally as sequences of code points in range `U+0000`–`U+10FFFF`. (See [**PEP 393**](https://www.python.org/dev/peps/pep-0393) for more details about the implementation.) Once a string object is used outside of CPU and memory, endianness and how these arrays are stored as bytes become an issue. As with other codecs, serialising a string into a sequence of bytes is known as *encoding*, and recreating the string from the sequence of bytes is known as *decoding*. There are a variety of different text serialisation codecs, which are collectively referred to as [text encodings](../glossary#term-text-encoding). The simplest text encoding (called `'latin-1'` or `'iso-8859-1'`) maps the code points 0–255 to the bytes `0x0`–`0xff`, which means that a string object that contains code points above `U+00FF` can’t be encoded with this codec. Doing so will raise a [`UnicodeEncodeError`](exceptions#UnicodeEncodeError "UnicodeEncodeError") that looks like the following (although the details of the error message may differ): `UnicodeEncodeError: 'latin-1' codec can't encode character '\u1234' in position 3: ordinal not in range(256)`. There’s another group of encodings (the so-called charmap encodings) that choose a different subset of all Unicode code points and how these code points are mapped to the bytes `0x0`–`0xff`. To see how this is done, simply open e.g. `encodings/cp1252.py` (which is an encoding that is used primarily on Windows). There’s a string constant with 256 characters that shows you which character is mapped to which byte value. All of these encodings can only encode 256 of the 1114112 code points defined in Unicode. A simple and straightforward way to store each Unicode code point is to store each code point as four consecutive bytes. There are two possibilities: store the bytes in big endian or in little endian order. These two encodings are called `UTF-32-BE` and `UTF-32-LE` respectively. Their disadvantage is that if e.g. you use `UTF-32-BE` on a little endian machine you will always have to swap bytes on encoding and decoding. `UTF-32` avoids this problem: bytes will always be in natural endianness. When these bytes are read by a CPU with a different endianness, however, the bytes have to be swapped. To be able to detect the endianness of a `UTF-16` or `UTF-32` byte sequence, there’s the so-called BOM (“Byte Order Mark”). This is the Unicode character `U+FEFF`. This character can be prepended to every `UTF-16` or `UTF-32` byte sequence. The byte swapped version of this character (`0xFFFE`) is an illegal character that may not appear in a Unicode text.
So when the first character in a `UTF-16` or `UTF-32` byte sequence appears to be `U+FFFE`, the bytes have to be swapped on decoding. Unfortunately the character `U+FEFF` had a second purpose as a `ZERO WIDTH NO-BREAK SPACE`: a character that has no width and doesn’t allow a word to be split. It can e.g. be used to give hints to a ligature algorithm. With Unicode 4.0 using `U+FEFF` as a `ZERO WIDTH NO-BREAK SPACE` has been deprecated (with `U+2060` (`WORD JOINER`) assuming this role). Nevertheless, Unicode software still must be able to handle `U+FEFF` in both roles: as a BOM it’s a device to determine the storage layout of the encoded bytes, and vanishes once the byte sequence has been decoded into a string; as a `ZERO WIDTH NO-BREAK SPACE` it’s a normal character that will be decoded like any other. There’s another encoding that is able to encode the full range of Unicode characters: UTF-8. UTF-8 is an 8-bit encoding, which means there are no issues with byte order in UTF-8. Each byte in a UTF-8 byte sequence consists of two parts: marker bits (the most significant bits) and payload bits. The marker bits are a sequence of zero to four `1` bits followed by a `0` bit. Unicode characters are encoded like this (with x being payload bits, which when concatenated give the Unicode character): | Range | Encoding | | --- | --- | | `U-00000000` … `U-0000007F` | 0xxxxxxx | | `U-00000080` … `U-000007FF` | 110xxxxx 10xxxxxx | | `U-00000800` … `U-0000FFFF` | 1110xxxx 10xxxxxx 10xxxxxx | | `U-00010000` … `U-0010FFFF` | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx | The least significant bit of the Unicode character is the rightmost x bit. As UTF-8 is an 8-bit encoding, no BOM is required and any `U+FEFF` character in the decoded string (even if it’s the first character) is treated as a `ZERO WIDTH NO-BREAK SPACE`. Without external information it’s impossible to reliably determine which encoding was used for encoding a string. Each charmap encoding can decode any random byte sequence. However, that’s not possible with UTF-8, as UTF-8 byte sequences have a structure that doesn’t allow arbitrary byte sequences. To increase the reliability with which a UTF-8 encoding can be detected, Microsoft invented a variant of UTF-8 (that Python calls `"utf-8-sig"`) for its Notepad program: Before any of the Unicode characters is written to the file, a UTF-8 encoded BOM (which looks like this as a byte sequence: `0xef`, `0xbb`, `0xbf`) is written. As it’s rather improbable that any charmap encoded file starts with these byte values (which would e.g. map to `ï»¿` in iso-8859-1), this increases the probability that a `utf-8-sig` encoding can be correctly guessed from the byte sequence. So here the BOM is not used to be able to determine the byte order used for generating the byte sequence, but as a signature that helps in guessing the encoding. On encoding, the utf-8-sig codec will write `0xef`, `0xbb`, `0xbf` as the first three bytes to the file. On decoding, `utf-8-sig` will skip those three bytes if they appear as the first three bytes in the file. In UTF-8, the use of the BOM is discouraged and should generally be avoided.
Notice that spelling alternatives that only differ in case or use a hyphen instead of an underscore are also valid aliases; therefore, e.g. `'utf-8'` is a valid alias for the `'utf_8'` codec. **CPython implementation detail:** Some common encodings can bypass the codecs lookup machinery to improve performance. These optimization opportunities are only recognized by CPython for a limited set of (case insensitive) aliases: utf-8, utf8, latin-1, latin1, iso-8859-1, iso8859-1, mbcs (Windows only), ascii, us-ascii, utf-16, utf16, utf-32, utf32, and the same using underscores instead of dashes. Using alternative aliases for these encodings may result in slower execution. Changed in version 3.6: Optimization opportunity recognized for us-ascii. Many of the character sets support the same languages. They vary in individual characters (e.g. whether the EURO SIGN is supported or not), and in the assignment of characters to code positions. For the European languages in particular, the following variants typically exist: * an ISO 8859 codeset * a Microsoft Windows code page, which is typically derived from an 8859 codeset, but replaces control characters with additional graphic characters * an IBM EBCDIC code page * an IBM PC code page, which is ASCII compatible | Codec | Aliases | Languages | | --- | --- | --- | | ascii | 646, us-ascii | English | | big5 | big5-tw, csbig5 | Traditional Chinese | | big5hkscs | big5-hkscs, hkscs | Traditional Chinese | | cp037 | IBM037, IBM039 | English | | cp273 | 273, IBM273, csIBM273 | German New in version 3.4. | | cp424 | EBCDIC-CP-HE, IBM424 | Hebrew | | cp437 | 437, IBM437 | English | | cp500 | EBCDIC-CP-BE, EBCDIC-CP-CH, IBM500 | Western Europe | | cp720 | | Arabic | | cp737 | | Greek | | cp775 | IBM775 | Baltic languages | | cp850 | 850, IBM850 | Western Europe | | cp852 | 852, IBM852 | Central and Eastern Europe | | cp855 | 855, IBM855 | Bulgarian, Byelorussian, Macedonian, Russian, Serbian | | cp856 | | Hebrew | | cp857 | 857, IBM857 | Turkish | | cp858 | 858, IBM858 | Western Europe | | cp860 | 860, IBM860 | Portuguese | | cp861 | 861, CP-IS, IBM861 | Icelandic | | cp862 | 862, IBM862 | Hebrew | | cp863 | 863, IBM863 | Canadian | | cp864 | IBM864 | Arabic | | cp865 | 865, IBM865 | Danish, Norwegian | | cp866 | 866, IBM866 | Russian | | cp869 | 869, CP-GR, IBM869 | Greek | | cp874 | | Thai | | cp875 | | Greek | | cp932 | 932, ms932, mskanji, ms-kanji | Japanese | | cp949 | 949, ms949, uhc | Korean | | cp950 | 950, ms950 | Traditional Chinese | | cp1006 | | Urdu | | cp1026 | ibm1026 | Turkish | | cp1125 | 1125, ibm1125, cp866u, ruscii | Ukrainian New in version 3.4. 
| | cp1140 | ibm1140 | Western Europe | | cp1250 | windows-1250 | Central and Eastern Europe | | cp1251 | windows-1251 | Bulgarian, Byelorussian, Macedonian, Russian, Serbian | | cp1252 | windows-1252 | Western Europe | | cp1253 | windows-1253 | Greek | | cp1254 | windows-1254 | Turkish | | cp1255 | windows-1255 | Hebrew | | cp1256 | windows-1256 | Arabic | | cp1257 | windows-1257 | Baltic languages | | cp1258 | windows-1258 | Vietnamese | | euc\_jp | eucjp, ujis, u-jis | Japanese | | euc\_jis\_2004 | jisx0213, eucjis2004 | Japanese | | euc\_jisx0213 | eucjisx0213 | Japanese | | euc\_kr | euckr, korean, ksc5601, ks\_c-5601, ks\_c-5601-1987, ksx1001, ks\_x-1001 | Korean | | gb2312 | chinese, csiso58gb231280, euc-cn, euccn, eucgb2312-cn, gb2312-1980, gb2312-80, iso-ir-58 | Simplified Chinese | | gbk | 936, cp936, ms936 | Unified Chinese | | gb18030 | gb18030-2000 | Unified Chinese | | hz | hzgb, hz-gb, hz-gb-2312 | Simplified Chinese | | iso2022\_jp | csiso2022jp, iso2022jp, iso-2022-jp | Japanese | | iso2022\_jp\_1 | iso2022jp-1, iso-2022-jp-1 | Japanese | | iso2022\_jp\_2 | iso2022jp-2, iso-2022-jp-2 | Japanese, Korean, Simplified Chinese, Western Europe, Greek | | iso2022\_jp\_2004 | iso2022jp-2004, iso-2022-jp-2004 | Japanese | | iso2022\_jp\_3 | iso2022jp-3, iso-2022-jp-3 | Japanese | | iso2022\_jp\_ext | iso2022jp-ext, iso-2022-jp-ext | Japanese | | iso2022\_kr | csiso2022kr, iso2022kr, iso-2022-kr | Korean | | latin\_1 | iso-8859-1, iso8859-1, 8859, cp819, latin, latin1, L1 | Western Europe | | iso8859\_2 | iso-8859-2, latin2, L2 | Central and Eastern Europe | | iso8859\_3 | iso-8859-3, latin3, L3 | Esperanto, Maltese | | iso8859\_4 | iso-8859-4, latin4, L4 | Baltic languages | | iso8859\_5 | iso-8859-5, cyrillic | Bulgarian, Byelorussian, Macedonian, Russian, Serbian | | iso8859\_6 | iso-8859-6, arabic | Arabic | | iso8859\_7 | iso-8859-7, greek, greek8 | Greek | | iso8859\_8 | iso-8859-8, hebrew | Hebrew | | iso8859\_9 | iso-8859-9, latin5, L5 | Turkish | | iso8859\_10 | iso-8859-10, latin6, L6 | Nordic languages | | iso8859\_11 | iso-8859-11, thai | Thai languages | | iso8859\_13 | iso-8859-13, latin7, L7 | Baltic languages | | iso8859\_14 | iso-8859-14, latin8, L8 | Celtic languages | | iso8859\_15 | iso-8859-15, latin9, L9 | Western Europe | | iso8859\_16 | iso-8859-16, latin10, L10 | South-Eastern Europe | | johab | cp1361, ms1361 | Korean | | koi8\_r | | Russian | | koi8\_t | | Tajik New in version 3.5. | | koi8\_u | | Ukrainian | | kz1048 | kz\_1048, strk1048\_2002, rk1048 | Kazakh New in version 3.5. 
| | mac\_cyrillic | maccyrillic | Bulgarian, Byelorussian, Macedonian, Russian, Serbian | | mac\_greek | macgreek | Greek | | mac\_iceland | maciceland | Icelandic | | mac\_latin2 | maclatin2, maccentraleurope, mac\_centeuro | Central and Eastern Europe | | mac\_roman | macroman, macintosh | Western Europe | | mac\_turkish | macturkish | Turkish | | ptcp154 | csptcp154, pt154, cp154, cyrillic-asian | Kazakh | | shift\_jis | csshiftjis, shiftjis, sjis, s\_jis | Japanese | | shift\_jis\_2004 | shiftjis2004, sjis\_2004, sjis2004 | Japanese | | shift\_jisx0213 | shiftjisx0213, sjisx0213, s\_jisx0213 | Japanese | | utf\_32 | U32, utf32 | all languages | | utf\_32\_be | UTF-32BE | all languages | | utf\_32\_le | UTF-32LE | all languages | | utf\_16 | U16, utf16 | all languages | | utf\_16\_be | UTF-16BE | all languages | | utf\_16\_le | UTF-16LE | all languages | | utf\_7 | U7, unicode-1-1-utf-7 | all languages | | utf\_8 | U8, UTF, utf8, cp65001 | all languages | | utf\_8\_sig | | all languages | Changed in version 3.4: The utf-16\* and utf-32\* encoders no longer allow surrogate code points (`U+D800`–`U+DFFF`) to be encoded. The utf-32\* decoders no longer decode byte sequences that correspond to surrogate code points. Changed in version 3.8: `cp65001` is now an alias to `utf_8`. Python Specific Encodings ------------------------- A number of predefined codecs are specific to Python, so their codec names have no meaning outside Python. These are listed in the tables below based on the expected input and output types (note that while text encodings are the most common use case for codecs, the underlying codec infrastructure supports arbitrary data transforms rather than just text encodings). For asymmetric codecs, the stated meaning describes the encoding direction. ### Text Encodings The following codecs provide [`str`](stdtypes#str "str") to [`bytes`](stdtypes#bytes "bytes") encoding and [bytes-like object](../glossary#term-bytes-like-object) to [`str`](stdtypes#str "str") decoding, similar to the Unicode text encodings. | Codec | Aliases | Meaning | | --- | --- | --- | | idna | | Implement [**RFC 3490**](https://tools.ietf.org/html/rfc3490.html), see also [`encodings.idna`](#module-encodings.idna "encodings.idna: Internationalized Domain Names implementation"). Only `errors='strict'` is supported. | | mbcs | ansi, dbcs | Windows only: Encode the operand according to the ANSI codepage (CP\_ACP). | | oem | | Windows only: Encode the operand according to the OEM codepage (CP\_OEMCP). New in version 3.6. | | palmos | | Encoding of PalmOS 3.5. | | punycode | | Implement [**RFC 3492**](https://tools.ietf.org/html/rfc3492.html). Stateful codecs are not supported. | | raw\_unicode\_escape | | Latin-1 encoding with `\uXXXX` and `\UXXXXXXXX` for other code points. Existing backslashes are not escaped in any way. It is used in the Python pickle protocol. | | undefined | | Raise an exception for all conversions, even empty strings. The error handler is ignored. | | unicode\_escape | | Encoding suitable as the contents of a Unicode literal in ASCII-encoded Python source code, except that quotes are not escaped. Decode from Latin-1 source code. Beware that Python source code actually uses UTF-8 by default. | Changed in version 3.8: “unicode\_internal” codec is removed. ### Binary Transforms The following codecs provide binary transforms: [bytes-like object](../glossary#term-bytes-like-object) to [`bytes`](stdtypes#bytes "bytes") mappings. 
They are not supported by [`bytes.decode()`](stdtypes#bytes.decode "bytes.decode") (which only produces [`str`](stdtypes#str "str") output). | Codec | Aliases | Meaning | Encoder / decoder | | --- | --- | --- | --- | | base64\_codec [1](#b64) | base64, base\_64 | Convert the operand to multiline MIME base64 (the result always includes a trailing `'\n'`). Changed in version 3.4: accepts any [bytes-like object](../glossary#term-bytes-like-object) as input for encoding and decoding | [`base64.encodebytes()`](base64#base64.encodebytes "base64.encodebytes") / [`base64.decodebytes()`](base64#base64.decodebytes "base64.decodebytes") | | bz2\_codec | bz2 | Compress the operand using bz2. | [`bz2.compress()`](bz2#bz2.compress "bz2.compress") / [`bz2.decompress()`](bz2#bz2.decompress "bz2.decompress") | | hex\_codec | hex | Convert the operand to hexadecimal representation, with two digits per byte. | [`binascii.b2a_hex()`](binascii#binascii.b2a_hex "binascii.b2a_hex") / [`binascii.a2b_hex()`](binascii#binascii.a2b_hex "binascii.a2b_hex") | | quopri\_codec | quopri, quotedprintable, quoted\_printable | Convert the operand to MIME quoted printable. | [`quopri.encode()`](quopri#quopri.encode "quopri.encode") with `quotetabs=True` / [`quopri.decode()`](quopri#quopri.decode "quopri.decode") | | uu\_codec | uu | Convert the operand using uuencode. | [`uu.encode()`](uu#uu.encode "uu.encode") / [`uu.decode()`](uu#uu.decode "uu.decode") | | zlib\_codec | zip, zlib | Compress the operand using gzip. | [`zlib.compress()`](zlib#zlib.compress "zlib.compress") / [`zlib.decompress()`](zlib#zlib.decompress "zlib.decompress") | `1` In addition to [bytes-like objects](../glossary#term-bytes-like-object), `'base64_codec'` also accepts ASCII-only instances of [`str`](stdtypes#str "str") for decoding New in version 3.2: Restoration of the binary transforms. Changed in version 3.4: Restoration of the aliases for the binary transforms. ### Text Transforms The following codec provides a text transform: a [`str`](stdtypes#str "str") to [`str`](stdtypes#str "str") mapping. It is not supported by [`str.encode()`](stdtypes#str.encode "str.encode") (which only produces [`bytes`](stdtypes#bytes "bytes") output). | Codec | Aliases | Meaning | | --- | --- | --- | | rot\_13 | rot13 | Return the Caesar-cypher encryption of the operand. | New in version 3.2: Restoration of the `rot_13` text transform. Changed in version 3.4: Restoration of the `rot13` alias. encodings.idna — Internationalized Domain Names in Applications --------------------------------------------------------------- This module implements [**RFC 3490**](https://tools.ietf.org/html/rfc3490.html) (Internationalized Domain Names in Applications) and [**RFC 3492**](https://tools.ietf.org/html/rfc3492.html) (Nameprep: A Stringprep Profile for Internationalized Domain Names (IDN)). It builds upon the `punycode` encoding and [`stringprep`](stringprep#module-stringprep "stringprep: String preparation, as per RFC 3453"). If you need the IDNA 2008 standard from [**RFC 5891**](https://tools.ietf.org/html/rfc5891.html) and [**RFC 5895**](https://tools.ietf.org/html/rfc5895.html), use the third-party [idna module](https://pypi.org/project/idna/). These RFCs together define a protocol to support non-ASCII characters in domain names. A domain name containing non-ASCII characters (such as `www.Alliancefrançaise.nu`) is converted into an ASCII-compatible encoding (ACE, such as `www.xn--alliancefranaise-npb.nu`). 
The ACE form of the domain name is then used in all places where arbitrary characters are not allowed by the protocol, such as DNS queries, HTTP *Host* fields, and so on. This conversion is carried out in the application and, if possible, invisibly to the user: the application should transparently convert Unicode domain labels to IDNA on the wire, and convert ACE labels back to Unicode before presenting them to the user. Python supports this conversion in several ways: the `idna` codec performs conversion between Unicode and ACE, separating an input string into labels based on the separator characters defined in [**section 3.1 of RFC 3490**](https://tools.ietf.org/html/rfc3490.html#section-3.1) and converting each label to ACE as required, and conversely separating an input byte string into labels based on the `.` separator and converting any ACE labels found into Unicode. Furthermore, the [`socket`](socket#module-socket "socket: Low-level networking interface.") module transparently converts Unicode host names to ACE, so that applications need not be concerned about converting host names themselves when they pass them to the socket module. On top of that, modules that have host names as function parameters, such as [`http.client`](http.client#module-http.client "http.client: HTTP and HTTPS protocol client (requires sockets).") and [`ftplib`](ftplib#module-ftplib "ftplib: FTP protocol client (requires sockets)."), accept Unicode host names ([`http.client`](http.client#module-http.client "http.client: HTTP and HTTPS protocol client (requires sockets).") then also transparently sends an IDNA hostname in the *Host* field if it sends that field at all). When receiving host names from the wire (such as in reverse name lookup), no automatic conversion to Unicode is performed: applications wishing to present such host names to the user should decode them to Unicode. The module [`encodings.idna`](#module-encodings.idna "encodings.idna: Internationalized Domain Names implementation") also implements the nameprep procedure, which performs certain normalizations on host names, to achieve case-insensitivity of international domain names, and to unify similar characters. The nameprep functions can be used directly if desired. `encodings.idna.nameprep(label)` Return the nameprepped version of *label*. The implementation currently assumes query strings, so `AllowUnassigned` is true. `encodings.idna.ToASCII(label)` Convert a label to ASCII, as specified in [**RFC 3490**](https://tools.ietf.org/html/rfc3490.html). `UseSTD3ASCIIRules` is assumed to be false. `encodings.idna.ToUnicode(label)` Convert a label to Unicode, as specified in [**RFC 3490**](https://tools.ietf.org/html/rfc3490.html). encodings.mbcs — Windows ANSI codepage -------------------------------------- This module implements the ANSI codepage (CP\_ACP). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows only. Changed in version 3.3: Support any error handler. Changed in version 3.2: Before 3.2, the *errors* argument was ignored; `'replace'` was always used to encode, and `'ignore'` to decode. encodings.utf\_8\_sig — UTF-8 codec with BOM signature ------------------------------------------------------ This module implements a variant of the UTF-8 codec. On encoding, a UTF-8 encoded BOM will be prepended to the UTF-8 encoded bytes. For the stateful encoder this is only done once (on the first write to the byte stream). On decoding, an optional UTF-8 encoded BOM at the start of the data will be skipped.
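For example, encoding with `utf-8-sig` prepends the BOM, and decoding with it strips the BOM that a plain `utf-8` decode would keep (expected output shown in the comments):

```
data = 'hi'.encode('utf-8-sig')
print(data)                      # b'\xef\xbb\xbfhi' (BOM prepended on encoding)
print(data.decode('utf-8-sig'))  # 'hi' (BOM skipped on decoding)
print(data.decode('utf-8'))      # '\ufeffhi' (plain utf-8 keeps the BOM)
```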
python Queues Queues ====== **Source code:** [Lib/asyncio/queues.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/queues.py) asyncio queues are designed to be similar to classes of the [`queue`](queue#module-queue "queue: A synchronized queue class.") module. Although asyncio queues are not thread-safe, they are designed to be used specifically in async/await code. Note that methods of asyncio queues don’t have a *timeout* parameter; use [`asyncio.wait_for()`](asyncio-task#asyncio.wait_for "asyncio.wait_for") function to do queue operations with a timeout. See also the [Examples](#examples) section below. Queue ----- `class asyncio.Queue(maxsize=0, *, loop=None)` A first in, first out (FIFO) queue. If *maxsize* is less than or equal to zero, the queue size is infinite. If it is an integer greater than `0`, then `await put()` blocks when the queue reaches *maxsize* until an item is removed by [`get()`](#asyncio.Queue.get "asyncio.Queue.get"). Unlike the standard library threading [`queue`](queue#module-queue "queue: A synchronized queue class."), the size of the queue is always known and can be returned by calling the [`qsize()`](#asyncio.Queue.qsize "asyncio.Queue.qsize") method. Deprecated since version 3.8, will be removed in version 3.10: The *loop* parameter. This class is [not thread safe](asyncio-dev#asyncio-multithreading). `maxsize` Number of items allowed in the queue. `empty()` Return `True` if the queue is empty, `False` otherwise. `full()` Return `True` if there are [`maxsize`](#asyncio.Queue.maxsize "asyncio.Queue.maxsize") items in the queue. If the queue was initialized with `maxsize=0` (the default), then [`full()`](#asyncio.Queue.full "asyncio.Queue.full") never returns `True`. `coroutine get()` Remove and return an item from the queue. If queue is empty, wait until an item is available. `get_nowait()` Return an item if one is immediately available, else raise [`QueueEmpty`](#asyncio.QueueEmpty "asyncio.QueueEmpty"). `coroutine join()` Block until all items in the queue have been received and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer coroutine calls [`task_done()`](#asyncio.Queue.task_done "asyncio.Queue.task_done") to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, [`join()`](#asyncio.Queue.join "asyncio.Queue.join") unblocks. `coroutine put(item)` Put an item into the queue. If the queue is full, wait until a free slot is available before adding the item. `put_nowait(item)` Put an item into the queue without blocking. If no free slot is immediately available, raise [`QueueFull`](#asyncio.QueueFull "asyncio.QueueFull"). `qsize()` Return the number of items in the queue. `task_done()` Indicate that a formerly enqueued task is complete. Used by queue consumers. For each [`get()`](#asyncio.Queue.get "asyncio.Queue.get") used to fetch a task, a subsequent call to [`task_done()`](#asyncio.Queue.task_done "asyncio.Queue.task_done") tells the queue that the processing on the task is complete. If a [`join()`](#asyncio.Queue.join "asyncio.Queue.join") is currently blocking, it will resume when all items have been processed (meaning that a [`task_done()`](#asyncio.Queue.task_done "asyncio.Queue.task_done") call was received for every item that had been [`put()`](#asyncio.Queue.put "asyncio.Queue.put") into the queue). 
Raises [`ValueError`](exceptions#ValueError "ValueError") if called more times than there were items placed in the queue. Priority Queue -------------- `class asyncio.PriorityQueue` A variant of [`Queue`](#asyncio.Queue "asyncio.Queue"); retrieves entries in priority order (lowest first). Entries are typically tuples of the form `(priority_number, data)`. LIFO Queue ---------- `class asyncio.LifoQueue` A variant of [`Queue`](#asyncio.Queue "asyncio.Queue") that retrieves most recently added entries first (last in, first out). Exceptions ---------- `exception asyncio.QueueEmpty` This exception is raised when the [`get_nowait()`](#asyncio.Queue.get_nowait "asyncio.Queue.get_nowait") method is called on an empty queue. `exception asyncio.QueueFull` Exception raised when the [`put_nowait()`](#asyncio.Queue.put_nowait "asyncio.Queue.put_nowait") method is called on a queue that has reached its *maxsize*. Examples -------- Queues can be used to distribute workload between several concurrent tasks: ``` import asyncio import random import time async def worker(name, queue): while True: # Get a "work item" out of the queue. sleep_for = await queue.get() # Sleep for the "sleep_for" seconds. await asyncio.sleep(sleep_for) # Notify the queue that the "work item" has been processed. queue.task_done() print(f'{name} has slept for {sleep_for:.2f} seconds') async def main(): # Create a queue that we will use to store our "workload". queue = asyncio.Queue() # Generate random timings and put them into the queue. total_sleep_time = 0 for _ in range(20): sleep_for = random.uniform(0.05, 1.0) total_sleep_time += sleep_for queue.put_nowait(sleep_for) # Create three worker tasks to process the queue concurrently. tasks = [] for i in range(3): task = asyncio.create_task(worker(f'worker-{i}', queue)) tasks.append(task) # Wait until the queue is fully processed. started_at = time.monotonic() await queue.join() total_slept_for = time.monotonic() - started_at # Cancel our worker tasks. for task in tasks: task.cancel() # Wait until all worker tasks are cancelled. await asyncio.gather(*tasks, return_exceptions=True) print('====') print(f'3 workers slept in parallel for {total_slept_for:.2f} seconds') print(f'total expected sleep time: {total_sleep_time:.2f} seconds') asyncio.run(main()) ``` python Data Types Data Types ========== The modules described in this chapter provide a variety of specialized data types such as dates and times, fixed-type arrays, heap queues, double-ended queues, and enumerations. Python also provides some built-in data types, in particular, [`dict`](stdtypes#dict "dict"), [`list`](stdtypes#list "list"), [`set`](stdtypes#set "set") and [`frozenset`](stdtypes#frozenset "frozenset"), and [`tuple`](stdtypes#tuple "tuple"). The [`str`](stdtypes#str "str") class is used to hold Unicode strings, and the [`bytes`](stdtypes#bytes "bytes") and [`bytearray`](stdtypes#bytearray "bytearray") classes are used to hold binary data. 
The following modules are documented in this chapter: * [`datetime` — Basic date and time types](datetime) + [Aware and Naive Objects](datetime#aware-and-naive-objects) + [Constants](datetime#constants) + [Available Types](datetime#available-types) - [Common Properties](datetime#common-properties) - [Determining if an Object is Aware or Naive](datetime#determining-if-an-object-is-aware-or-naive) + [`timedelta` Objects](datetime#timedelta-objects) - [Examples of usage: `timedelta`](datetime#examples-of-usage-timedelta) + [`date` Objects](datetime#date-objects) - [Examples of Usage: `date`](datetime#examples-of-usage-date) + [`datetime` Objects](datetime#datetime-objects) - [Examples of Usage: `datetime`](datetime#examples-of-usage-datetime) + [`time` Objects](datetime#time-objects) - [Examples of Usage: `time`](datetime#examples-of-usage-time) + [`tzinfo` Objects](datetime#tzinfo-objects) + [`timezone` Objects](datetime#timezone-objects) + [`strftime()` and `strptime()` Behavior](datetime#strftime-and-strptime-behavior) - [`strftime()` and `strptime()` Format Codes](datetime#strftime-and-strptime-format-codes) - [Technical Detail](datetime#technical-detail) * [`zoneinfo` — IANA time zone support](zoneinfo) + [Using `ZoneInfo`](zoneinfo#using-zoneinfo) + [Data sources](zoneinfo#data-sources) - [Configuring the data sources](zoneinfo#configuring-the-data-sources) * [Compile-time configuration](zoneinfo#compile-time-configuration) * [Environment configuration](zoneinfo#environment-configuration) * [Runtime configuration](zoneinfo#runtime-configuration) + [The `ZoneInfo` class](zoneinfo#the-zoneinfo-class) - [String representations](zoneinfo#string-representations) - [Pickle serialization](zoneinfo#pickle-serialization) + [Functions](zoneinfo#functions) + [Globals](zoneinfo#globals) + [Exceptions and warnings](zoneinfo#exceptions-and-warnings) * [`calendar` — General calendar-related functions](calendar) * [`collections` — Container datatypes](collections) + [`ChainMap` objects](collections#chainmap-objects) - [`ChainMap` Examples and Recipes](collections#chainmap-examples-and-recipes) + [`Counter` objects](collections#counter-objects) + [`deque` objects](collections#deque-objects) - [`deque` Recipes](collections#deque-recipes) + [`defaultdict` objects](collections#defaultdict-objects) - [`defaultdict` Examples](collections#defaultdict-examples) + [`namedtuple()` Factory Function for Tuples with Named Fields](collections#namedtuple-factory-function-for-tuples-with-named-fields) + [`OrderedDict` objects](collections#ordereddict-objects) - [`OrderedDict` Examples and Recipes](collections#ordereddict-examples-and-recipes) + [`UserDict` objects](collections#userdict-objects) + [`UserList` objects](collections#userlist-objects) + [`UserString` objects](collections#userstring-objects) * [`collections.abc` — Abstract Base Classes for Containers](collections.abc) + [Collections Abstract Base Classes](collections.abc#collections-abstract-base-classes) * [`heapq` — Heap queue algorithm](heapq) + [Basic Examples](heapq#basic-examples) + [Priority Queue Implementation Notes](heapq#priority-queue-implementation-notes) + [Theory](heapq#theory) * [`bisect` — Array bisection algorithm](bisect) + [Searching Sorted Lists](bisect#searching-sorted-lists) + [Other Examples](bisect#other-examples) * [`array` — Efficient arrays of numeric values](array) * [`weakref` — Weak references](weakref) + [Weak Reference Objects](weakref#weak-reference-objects) + [Example](weakref#example) + [Finalizer 
Objects](weakref#finalizer-objects) + [Comparing finalizers with `__del__()` methods](weakref#comparing-finalizers-with-del-methods) * [`types` — Dynamic type creation and names for built-in types](types) + [Dynamic Type Creation](types#dynamic-type-creation) + [Standard Interpreter Types](types#standard-interpreter-types) + [Additional Utility Classes and Functions](types#additional-utility-classes-and-functions) + [Coroutine Utility Functions](types#coroutine-utility-functions) * [`copy` — Shallow and deep copy operations](copy) * [`pprint` — Data pretty printer](pprint) + [PrettyPrinter Objects](pprint#prettyprinter-objects) + [Example](pprint#example) * [`reprlib` — Alternate `repr()` implementation](reprlib) + [Repr Objects](reprlib#repr-objects) + [Subclassing Repr Objects](reprlib#subclassing-repr-objects) * [`enum` — Support for enumerations](enum) + [Module Contents](enum#module-contents) + [Creating an Enum](enum#creating-an-enum) + [Programmatic access to enumeration members and their attributes](enum#programmatic-access-to-enumeration-members-and-their-attributes) + [Duplicating enum members and values](enum#duplicating-enum-members-and-values) + [Ensuring unique enumeration values](enum#ensuring-unique-enumeration-values) + [Using automatic values](enum#using-automatic-values) + [Iteration](enum#iteration) + [Comparisons](enum#comparisons) + [Allowed members and attributes of enumerations](enum#allowed-members-and-attributes-of-enumerations) + [Restricted Enum subclassing](enum#restricted-enum-subclassing) + [Pickling](enum#pickling) + [Functional API](enum#functional-api) + [Derived Enumerations](enum#derived-enumerations) - [IntEnum](enum#intenum) - [IntFlag](enum#intflag) - [Flag](enum#flag) - [Others](enum#others) + [When to use `__new__()` vs. `__init__()`](enum#when-to-use-new-vs-init) + [Interesting examples](enum#interesting-examples) - [Omitting values](enum#omitting-values) * [Using `auto`](enum#using-auto) * [Using `object`](enum#using-object) * [Using a descriptive string](enum#using-a-descriptive-string) * [Using a custom `__new__()`](enum#using-a-custom-new) - [OrderedEnum](enum#orderedenum) - [DuplicateFreeEnum](enum#duplicatefreeenum) - [Planet](enum#planet) - [TimePeriod](enum#timeperiod) + [How are Enums different?](enum#how-are-enums-different) - [Enum Classes](enum#enum-classes) - [Enum Members (aka instances)](enum#enum-members-aka-instances) - [Finer Points](enum#finer-points) * [Supported `__dunder__` names](enum#supported-dunder-names) * [Supported `_sunder_` names](enum#supported-sunder-names) * [\_Private\_\_names](enum#private-names) * [`Enum` member type](enum#enum-member-type) * [Boolean value of `Enum` classes and members](enum#boolean-value-of-enum-classes-and-members) * [`Enum` classes with methods](enum#enum-classes-with-methods) * [Combining members of `Flag`](enum#combining-members-of-flag) * [`graphlib` — Functionality to operate with graph-like structures](graphlib) + [Exceptions](graphlib#exceptions) python ipaddress — IPv4/IPv6 manipulation library ipaddress — IPv4/IPv6 manipulation library ========================================== **Source code:** [Lib/ipaddress.py](https://github.com/python/cpython/tree/3.9/Lib/ipaddress.py) [`ipaddress`](#module-ipaddress "ipaddress: IPv4/IPv6 manipulation library.") provides the capabilities to create, manipulate and operate on IPv4 and IPv6 addresses and networks. 
The functions and classes in this module make it straightforward to handle various tasks related to IP addresses, including checking whether or not two hosts are on the same subnet, iterating over all hosts in a particular subnet, checking whether or not a string represents a valid IP address or network definition, and so on. This is the full module API reference—for an overview and introduction, see [An introduction to the ipaddress module](../howto/ipaddress#ipaddress-howto). New in version 3.3. Convenience factory functions ----------------------------- The [`ipaddress`](#module-ipaddress "ipaddress: IPv4/IPv6 manipulation library.") module provides factory functions to conveniently create IP addresses, networks and interfaces: `ipaddress.ip_address(address)` Return an [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address") or [`IPv6Address`](#ipaddress.IPv6Address "ipaddress.IPv6Address") object depending on the IP address passed as argument. Either IPv4 or IPv6 addresses may be supplied; integers less than `2**32` will be considered to be IPv4 by default. A [`ValueError`](exceptions#ValueError "ValueError") is raised if *address* does not represent a valid IPv4 or IPv6 address. ``` >>> ipaddress.ip_address('192.168.0.1') IPv4Address('192.168.0.1') >>> ipaddress.ip_address('2001:db8::') IPv6Address('2001:db8::') ``` `ipaddress.ip_network(address, strict=True)` Return an [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network") or [`IPv6Network`](#ipaddress.IPv6Network "ipaddress.IPv6Network") object depending on the IP address passed as argument. *address* is a string or integer representing the IP network. Either IPv4 or IPv6 networks may be supplied; integers less than `2**32` will be considered to be IPv4 by default. *strict* is passed to [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network") or [`IPv6Network`](#ipaddress.IPv6Network "ipaddress.IPv6Network") constructor. A [`ValueError`](exceptions#ValueError "ValueError") is raised if *address* does not represent a valid IPv4 or IPv6 address, or if the network has host bits set. ``` >>> ipaddress.ip_network('192.168.0.0/28') IPv4Network('192.168.0.0/28') ``` `ipaddress.ip_interface(address)` Return an [`IPv4Interface`](#ipaddress.IPv4Interface "ipaddress.IPv4Interface") or [`IPv6Interface`](#ipaddress.IPv6Interface "ipaddress.IPv6Interface") object depending on the IP address passed as argument. *address* is a string or integer representing the IP address. Either IPv4 or IPv6 addresses may be supplied; integers less than `2**32` will be considered to be IPv4 by default. A [`ValueError`](exceptions#ValueError "ValueError") is raised if *address* does not represent a valid IPv4 or IPv6 address. One downside of these convenience functions is that the need to handle both IPv4 and IPv6 formats means that error messages provide minimal information on the precise error, as the functions don’t know whether the IPv4 or IPv6 format was intended. More detailed error reporting can be obtained by calling the appropriate version specific class constructors directly. IP Addresses ------------ ### Address objects The [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address") and [`IPv6Address`](#ipaddress.IPv6Address "ipaddress.IPv6Address") objects share a lot of common attributes. Some attributes that are only meaningful for IPv6 addresses are also implemented by [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address") objects, in order to make it easier to write code that handles both IP versions correctly. 
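As a minimal sketch of such version-agnostic code (the `describe` helper is hypothetical, not part of the module), the shared attribute set lets one function report on either address family:

```
>>> import ipaddress
>>> def describe(text):
...     addr = ipaddress.ip_address(text)
...     return addr.version, addr.max_prefixlen, addr.is_private
...
>>> describe('192.168.0.1')
(4, 32, True)
>>> describe('2001:db8::1')
(6, 128, True)
```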
Address objects are [hashable](../glossary#term-hashable), so they can be used as keys in dictionaries. `class ipaddress.IPv4Address(address)` Construct an IPv4 address. An [`AddressValueError`](#ipaddress.AddressValueError "ipaddress.AddressValueError") is raised if *address* is not a valid IPv4 address. The following constitutes a valid IPv4 address: 1. A string in decimal-dot notation, consisting of four decimal integers in the inclusive range 0–255, separated by dots (e.g. `192.168.0.1`). Each integer represents an octet (byte) in the address. Leading zeroes are not tolerated to prevent confusion with octal notation. 2. An integer that fits into 32 bits. 3. An integer packed into a [`bytes`](stdtypes#bytes "bytes") object of length 4 (most significant octet first). ``` >>> ipaddress.IPv4Address('192.168.0.1') IPv4Address('192.168.0.1') >>> ipaddress.IPv4Address(3232235521) IPv4Address('192.168.0.1') >>> ipaddress.IPv4Address(b'\xC0\xA8\x00\x01') IPv4Address('192.168.0.1') ``` Changed in version 3.8: Leading zeros are tolerated, even in ambiguous cases that look like octal notation. Changed in version 3.10: Leading zeros are no longer tolerated and are treated as an error. IPv4 address strings are now parsed as strictly as glibc [`inet_pton()`](socket#socket.inet_pton "socket.inet_pton"). Changed in version 3.9.5: The above change was also included in Python 3.9 starting with version 3.9.5. Changed in version 3.8.12: The above change was also included in Python 3.8 starting with version 3.8.12. `version` The appropriate version number: `4` for IPv4, `6` for IPv6. `max_prefixlen` The total number of bits in the address representation for this version: `32` for IPv4, `128` for IPv6. The prefix defines the number of leading bits in an address that are compared to determine whether or not an address is part of a network. `compressed` `exploded` The string representation in dotted decimal notation. Leading zeroes are never included in the representation. As IPv4 does not define a shorthand notation for addresses with octets set to zero, these two attributes are always the same as `str(addr)` for IPv4 addresses. Exposing these attributes makes it easier to write display code that can handle both IPv4 and IPv6 addresses. `packed` The binary representation of this address - a [`bytes`](stdtypes#bytes "bytes") object of the appropriate length (most significant octet first). This is 4 bytes for IPv4 and 16 bytes for IPv6. `reverse_pointer` The name of the reverse DNS PTR record for the IP address, e.g.: ``` >>> ipaddress.ip_address("127.0.0.1").reverse_pointer '1.0.0.127.in-addr.arpa' >>> ipaddress.ip_address("2001:db8::1").reverse_pointer '1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa' ``` This is the name that could be used for performing a PTR lookup, not the resolved hostname itself. New in version 3.5. `is_multicast` `True` if the address is reserved for multicast use. See [**RFC 3171**](https://tools.ietf.org/html/rfc3171.html) (for IPv4) or [**RFC 2373**](https://tools.ietf.org/html/rfc2373.html) (for IPv6). `is_private` `True` if the address is allocated for private networks. See [iana-ipv4-special-registry](https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml) (for IPv4) or [iana-ipv6-special-registry](https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml) (for IPv6). `is_global` `True` if the address is allocated for public networks. 
See [iana-ipv4-special-registry](https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml) (for IPv4) or [iana-ipv6-special-registry](https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml) (for IPv6). New in version 3.4. `is_unspecified` `True` if the address is unspecified. See [**RFC 5735**](https://tools.ietf.org/html/rfc5735.html) (for IPv4) or [**RFC 2373**](https://tools.ietf.org/html/rfc2373.html) (for IPv6). `is_reserved` `True` if the address is otherwise IETF reserved. `is_loopback` `True` if this is a loopback address. See [**RFC 3330**](https://tools.ietf.org/html/rfc3330.html) (for IPv4) or [**RFC 2373**](https://tools.ietf.org/html/rfc2373.html) (for IPv6). `is_link_local` `True` if the address is reserved for link-local usage. See [**RFC 3927**](https://tools.ietf.org/html/rfc3927.html). `IPv4Address.__format__(fmt)` Returns a string representation of the IP address, controlled by an explicit format string. *fmt* can be one of the following: `'s'`, the default option, equivalent to [`str()`](stdtypes#str "str"), `'b'` for a zero-padded binary string, `'X'` or `'x'` for an uppercase or lowercase hexadecimal representation, or `'n'`, which is equivalent to `'b'` for IPv4 addresses and `'x'` for IPv6. For binary and hexadecimal representations, the form specifier `'#'` and the grouping option `'_'` are available. `__format__` is used by `format`, `str.format` and f-strings. ``` >>> format(ipaddress.IPv4Address('192.168.0.1')) '192.168.0.1' >>> '{:#b}'.format(ipaddress.IPv4Address('192.168.0.1')) '0b11000000101010000000000000000001' >>> f'{ipaddress.IPv6Address("2001:db8::1000"):s}' '2001:db8::1000' >>> format(ipaddress.IPv6Address('2001:db8::1000'), '_X') '2001_0DB8_0000_0000_0000_0000_0000_1000' >>> '{:#_n}'.format(ipaddress.IPv6Address('2001:db8::1000')) '0x2001_0db8_0000_0000_0000_0000_0000_1000' ``` New in version 3.9. `class ipaddress.IPv6Address(address)` Construct an IPv6 address. An [`AddressValueError`](#ipaddress.AddressValueError "ipaddress.AddressValueError") is raised if *address* is not a valid IPv6 address. The following constitutes a valid IPv6 address: 1. A string consisting of eight groups of four hexadecimal digits, each group representing 16 bits. The groups are separated by colons. This describes an *exploded* (longhand) notation. The string can also be *compressed* (shorthand notation) by various means. See [**RFC 4291**](https://tools.ietf.org/html/rfc4291.html) for details. For example, `"0000:0000:0000:0000:0000:0abc:0007:0def"` can be compressed to `"::abc:7:def"`. Optionally, the string may also have a scope zone ID, expressed with a suffix `%scope_id`. If present, the scope ID must be non-empty, and may not contain `%`. See [**RFC 4007**](https://tools.ietf.org/html/rfc4007.html) for details. For example, `fe80::1234%1` might identify address `fe80::1234` on the first link of the node. 2. An integer that fits into 128 bits. 3. An integer packed into a [`bytes`](stdtypes#bytes "bytes") object of length 16, big-endian. ``` >>> ipaddress.IPv6Address('2001:db8::1000') IPv6Address('2001:db8::1000') >>> ipaddress.IPv6Address('ff02::5678%1') IPv6Address('ff02::5678%1') ``` `compressed` The short form of the address representation, with leading zeroes in groups omitted and the longest sequence of groups consisting entirely of zeroes collapsed to a single empty group. This is also the value returned by `str(addr)` for IPv6 addresses. 
`exploded` The long form of the address representation, with all leading zeroes and groups consisting entirely of zeroes included. For the following attributes and methods, see the corresponding documentation of the [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address") class: `packed` `reverse_pointer` `version` `max_prefixlen` `is_multicast` `is_private` `is_global` `is_unspecified` `is_reserved` `is_loopback` `is_link_local` New in version 3.4: is\_global `is_site_local` `True` if the address is reserved for site-local usage. Note that the site-local address space has been deprecated by [**RFC 3879**](https://tools.ietf.org/html/rfc3879.html). Use [`is_private`](#ipaddress.IPv4Address.is_private "ipaddress.IPv4Address.is_private") to test if this address is in the space of unique local addresses as defined by [**RFC 4193**](https://tools.ietf.org/html/rfc4193.html). `ipv4_mapped` For addresses that appear to be IPv4 mapped addresses (starting with `::FFFF/96`), this property will report the embedded IPv4 address. For any other address, this property will be `None`. `scope_id` For scoped addresses as defined by [**RFC 4007**](https://tools.ietf.org/html/rfc4007.html), this property identifies the particular zone of the address’s scope that the address belongs to, as a string. When no scope zone is specified, this property will be `None`. `sixtofour` For addresses that appear to be 6to4 addresses (starting with `2002::/16`) as defined by [**RFC 3056**](https://tools.ietf.org/html/rfc3056.html), this property will report the embedded IPv4 address. For any other address, this property will be `None`. `teredo` For addresses that appear to be Teredo addresses (starting with `2001::/32`) as defined by [**RFC 4380**](https://tools.ietf.org/html/rfc4380.html), this property will report the embedded `(server, client)` IP address pair. For any other address, this property will be `None`. `IPv6Address.__format__(fmt)` Refer to the corresponding method documentation in [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address"). New in version 3.9. ### Conversion to Strings and Integers To interoperate with networking interfaces such as the socket module, addresses must be converted to strings or integers. This is handled using the [`str()`](stdtypes#str "str") and [`int()`](functions#int "int") builtin functions: ``` >>> str(ipaddress.IPv4Address('192.168.0.1')) '192.168.0.1' >>> int(ipaddress.IPv4Address('192.168.0.1')) 3232235521 >>> str(ipaddress.IPv6Address('::1')) '::1' >>> int(ipaddress.IPv6Address('::1')) 1 ``` Note that IPv6 scoped addresses are converted to integers without scope zone ID. ### Operators Address objects support some operators. Unless stated otherwise, operators can only be applied between compatible objects (i.e. IPv4 with IPv4, IPv6 with IPv6). #### Comparison operators Address objects can be compared with the usual set of comparison operators. Same IPv6 addresses with different scope zone IDs are not equal. Some examples: ``` >>> IPv4Address('127.0.0.2') > IPv4Address('127.0.0.1') True >>> IPv4Address('127.0.0.2') == IPv4Address('127.0.0.1') False >>> IPv4Address('127.0.0.2') != IPv4Address('127.0.0.1') True >>> IPv6Address('fe80::1234') == IPv6Address('fe80::1234%1') False >>> IPv6Address('fe80::1234%1') != IPv6Address('fe80::1234%2') True ``` #### Arithmetic operators Integers can be added to or subtracted from address objects. 
Some examples: ``` >>> IPv4Address('127.0.0.2') + 3 IPv4Address('127.0.0.5') >>> IPv4Address('127.0.0.2') - 3 IPv4Address('126.255.255.255') >>> IPv4Address('255.255.255.255') + 1 Traceback (most recent call last): File "<stdin>", line 1, in <module> ipaddress.AddressValueError: 4294967296 (>= 2**32) is not permitted as an IPv4 address ``` IP Network definitions ---------------------- The [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network") and [`IPv6Network`](#ipaddress.IPv6Network "ipaddress.IPv6Network") objects provide a mechanism for defining and inspecting IP network definitions. A network definition consists of a *mask* and a *network address*, and as such defines a range of IP addresses that equal the network address when masked (binary AND) with the mask. For example, a network definition with the mask `255.255.255.0` and the network address `192.168.1.0` consists of IP addresses in the inclusive range `192.168.1.0` to `192.168.1.255`. ### Prefix, net mask and host mask There are several equivalent ways to specify IP network masks. A *prefix* `/<nbits>` is a notation that denotes how many high-order bits are set in the network mask. A *net mask* is an IP address with some number of high-order bits set. Thus the prefix `/24` is equivalent to the net mask `255.255.255.0` in IPv4, or `ffff:ff00::` in IPv6. In addition, a *host mask* is the logical inverse of a *net mask*, and is sometimes used (for example in Cisco access control lists) to denote a network mask. The host mask equivalent to `/24` in IPv4 is `0.0.0.255`. ### Network objects All attributes implemented by address objects are implemented by network objects as well. In addition, network objects implement additional attributes. All of these are common between [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network") and [`IPv6Network`](#ipaddress.IPv6Network "ipaddress.IPv6Network"), so to avoid duplication they are only documented for [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network"). Network objects are [hashable](../glossary#term-hashable), so they can be used as keys in dictionaries. `class ipaddress.IPv4Network(address, strict=True)` Construct an IPv4 network definition. *address* can be one of the following: 1. A string consisting of an IP address and an optional mask, separated by a slash (`/`). The IP address is the network address, and the mask can be either a single number, which means it’s a *prefix*, or a string representation of an IPv4 address. If it’s the latter, the mask is interpreted as a *net mask* if it starts with a non-zero field, or as a *host mask* if it starts with a zero field, with the single exception of an all-zero mask which is treated as a *net mask*. If no mask is provided, it’s considered to be `/32`. For example, the following *address* specifications are equivalent: `192.168.1.0/24`, `192.168.1.0/255.255.255.0` and `192.168.1.0/0.0.0.255`. 2. An integer that fits into 32 bits. This is equivalent to a single-address network, with the network address being *address* and the mask being `/32`. 3. An integer packed into a [`bytes`](stdtypes#bytes "bytes") object of length 4, big-endian. The interpretation is similar to an integer *address*. 4. A two-tuple of an address description and a netmask, where the address description is either a string, a 32-bit integer, a 4-byte packed integer, or an existing IPv4Address object; and the netmask is either an integer representing the prefix length (e.g. `24`) or a string representing the prefix mask (e.g. 
`255.255.255.0`). An [`AddressValueError`](#ipaddress.AddressValueError "ipaddress.AddressValueError") is raised if *address* is not a valid IPv4 address. A [`NetmaskValueError`](#ipaddress.NetmaskValueError "ipaddress.NetmaskValueError") is raised if the mask is not valid for an IPv4 address. If *strict* is `True` and host bits are set in the supplied address, then [`ValueError`](exceptions#ValueError "ValueError") is raised. Otherwise, the host bits are masked out to determine the appropriate network address. Unless stated otherwise, all network methods accepting other network/address objects will raise [`TypeError`](exceptions#TypeError "TypeError") if the argument’s IP version is incompatible with `self`. Changed in version 3.5: Added the two-tuple form for the *address* constructor parameter. `version` `max_prefixlen` Refer to the corresponding attribute documentation in [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address"). `is_multicast` `is_private` `is_unspecified` `is_reserved` `is_loopback` `is_link_local` These attributes are true for the network as a whole if they are true for both the network address and the broadcast address. `network_address` The network address for the network. The network address and the prefix length together uniquely define a network. `broadcast_address` The broadcast address for the network. Packets sent to the broadcast address should be received by every host on the network. `hostmask` The host mask, as an [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address") object. `netmask` The net mask, as an [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address") object. `with_prefixlen` `compressed` `exploded` A string representation of the network, with the mask in prefix notation. `with_prefixlen` and `compressed` are always the same as `str(network)`. `exploded` uses the exploded form of the network address. `with_netmask` A string representation of the network, with the mask in net mask notation. `with_hostmask` A string representation of the network, with the mask in host mask notation. `num_addresses` The total number of addresses in the network. `prefixlen` Length of the network prefix, in bits. `hosts()` Returns an iterator over the usable hosts in the network. The usable hosts are all the IP addresses that belong to the network, except the network address itself and the network broadcast address. For networks with a mask length of 31, the network address and network broadcast address are also included in the result. Networks with a mask of 32 will return a list containing the single host address. ``` >>> list(ip_network('192.0.2.0/29').hosts()) [IPv4Address('192.0.2.1'), IPv4Address('192.0.2.2'), IPv4Address('192.0.2.3'), IPv4Address('192.0.2.4'), IPv4Address('192.0.2.5'), IPv4Address('192.0.2.6')] >>> list(ip_network('192.0.2.0/31').hosts()) [IPv4Address('192.0.2.0'), IPv4Address('192.0.2.1')] >>> list(ip_network('192.0.2.1/32').hosts()) [IPv4Address('192.0.2.1')] ``` `overlaps(other)` `True` if this network is partly or wholly contained in *other* or *other* is wholly contained in this network. `address_exclude(network)` Computes the network definitions resulting from removing the given *network* from this one. Returns an iterator of network objects. Raises [`ValueError`](exceptions#ValueError "ValueError") if *network* is not completely contained in this network. 
``` >>> n1 = ip_network('192.0.2.0/28') >>> n2 = ip_network('192.0.2.1/32') >>> list(n1.address_exclude(n2)) [IPv4Network('192.0.2.8/29'), IPv4Network('192.0.2.4/30'), IPv4Network('192.0.2.2/31'), IPv4Network('192.0.2.0/32')] ``` `subnets(prefixlen_diff=1, new_prefix=None)` The subnets that join to make the current network definition, depending on the argument values. *prefixlen\_diff* is the amount our prefix length should be increased by. *new\_prefix* is the desired new prefix of the subnets; it must be larger than our prefix. One and only one of *prefixlen\_diff* and *new\_prefix* must be set. Returns an iterator of network objects. ``` >>> list(ip_network('192.0.2.0/24').subnets()) [IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/25')] >>> list(ip_network('192.0.2.0/24').subnets(prefixlen_diff=2)) [IPv4Network('192.0.2.0/26'), IPv4Network('192.0.2.64/26'), IPv4Network('192.0.2.128/26'), IPv4Network('192.0.2.192/26')] >>> list(ip_network('192.0.2.0/24').subnets(new_prefix=26)) [IPv4Network('192.0.2.0/26'), IPv4Network('192.0.2.64/26'), IPv4Network('192.0.2.128/26'), IPv4Network('192.0.2.192/26')] >>> list(ip_network('192.0.2.0/24').subnets(new_prefix=23)) Traceback (most recent call last): File "<stdin>", line 1, in <module> raise ValueError('new prefix must be longer') ValueError: new prefix must be longer >>> list(ip_network('192.0.2.0/24').subnets(new_prefix=25)) [IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/25')] ``` `supernet(prefixlen_diff=1, new_prefix=None)` The supernet containing this network definition, depending on the argument values. *prefixlen\_diff* is the amount our prefix length should be decreased by. *new\_prefix* is the desired new prefix of the supernet; it must be smaller than our prefix. One and only one of *prefixlen\_diff* and *new\_prefix* must be set. Returns a single network object. ``` >>> ip_network('192.0.2.0/24').supernet() IPv4Network('192.0.2.0/23') >>> ip_network('192.0.2.0/24').supernet(prefixlen_diff=2) IPv4Network('192.0.0.0/22') >>> ip_network('192.0.2.0/24').supernet(new_prefix=20) IPv4Network('192.0.0.0/20') ``` `subnet_of(other)` Return `True` if this network is a subnet of *other*. ``` >>> a = ip_network('192.168.1.0/24') >>> b = ip_network('192.168.1.128/30') >>> b.subnet_of(a) True ``` New in version 3.7. `supernet_of(other)` Return `True` if this network is a supernet of *other*. ``` >>> a = ip_network('192.168.1.0/24') >>> b = ip_network('192.168.1.128/30') >>> a.supernet_of(b) True ``` New in version 3.7. `compare_networks(other)` Compare this network to *other*. In this comparison only the network addresses are considered; host bits aren’t. Returns either `-1`, `0` or `1`. ``` >>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.2/32')) -1 >>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.0/32')) 1 >>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.1/32')) 0 ``` Deprecated since version 3.7: It uses the same ordering and comparison algorithm as “<”, “==”, and “>” `class ipaddress.IPv6Network(address, strict=True)` Construct an IPv6 network definition. *address* can be one of the following: 1. A string consisting of an IP address and an optional prefix length, separated by a slash (`/`). The IP address is the network address, and the prefix length must be a single number, the *prefix*. If no prefix length is provided, it’s considered to be `/128`. Note that currently expanded netmasks are not supported. 
That means `2001:db00::0/24` is a valid argument while `2001:db00::0/ffff:ff00::` is not. 2. An integer that fits into 128 bits. This is equivalent to a single-address network, with the network address being *address* and the mask being `/128`. 3. An integer packed into a [`bytes`](stdtypes#bytes "bytes") object of length 16, big-endian. The interpretation is similar to an integer *address*. 4. A two-tuple of an address description and a netmask, where the address description is either a string, a 128-bit integer, a 16-byte packed integer, or an existing IPv6Address object; and the netmask is an integer representing the prefix length. An [`AddressValueError`](#ipaddress.AddressValueError "ipaddress.AddressValueError") is raised if *address* is not a valid IPv6 address. A [`NetmaskValueError`](#ipaddress.NetmaskValueError "ipaddress.NetmaskValueError") is raised if the mask is not valid for an IPv6 address. If *strict* is `True` and host bits are set in the supplied address, then [`ValueError`](exceptions#ValueError "ValueError") is raised. Otherwise, the host bits are masked out to determine the appropriate network address. Changed in version 3.5: Added the two-tuple form for the *address* constructor parameter. `version` `max_prefixlen` `is_multicast` `is_private` `is_unspecified` `is_reserved` `is_loopback` `is_link_local` `network_address` `broadcast_address` `hostmask` `netmask` `with_prefixlen` `compressed` `exploded` `with_netmask` `with_hostmask` `num_addresses` `prefixlen` `hosts()` Returns an iterator over the usable hosts in the network. The usable hosts are all the IP addresses that belong to the network, except the Subnet-Router anycast address. For networks with a mask length of 127, the Subnet-Router anycast address is also included in the result. Networks with a mask of 128 will return a list containing the single host address. `overlaps(other)` `address_exclude(network)` `subnets(prefixlen_diff=1, new_prefix=None)` `supernet(prefixlen_diff=1, new_prefix=None)` `subnet_of(other)` `supernet_of(other)` `compare_networks(other)` Refer to the corresponding attribute documentation in [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network"). `is_site_local` This attribute is true for the network as a whole if it is true for both the network address and the broadcast address. ### Operators Network objects support some operators. Unless stated otherwise, operators can only be applied between compatible objects (i.e. IPv4 with IPv4, IPv6 with IPv6). #### Logical operators Network objects can be compared with the usual set of logical operators. Network objects are ordered first by network address, then by net mask. #### Iteration Network objects can be iterated to list all the addresses belonging to the network. For iteration, *all* hosts are returned, including unusable hosts (for usable hosts, use the [`hosts()`](#ipaddress.IPv4Network.hosts "ipaddress.IPv4Network.hosts") method). An example: ``` >>> for addr in IPv4Network('192.0.2.0/28'): ... addr ... IPv4Address('192.0.2.0') IPv4Address('192.0.2.1') IPv4Address('192.0.2.2') IPv4Address('192.0.2.3') IPv4Address('192.0.2.4') IPv4Address('192.0.2.5') IPv4Address('192.0.2.6') IPv4Address('192.0.2.7') IPv4Address('192.0.2.8') IPv4Address('192.0.2.9') IPv4Address('192.0.2.10') IPv4Address('192.0.2.11') IPv4Address('192.0.2.12') IPv4Address('192.0.2.13') IPv4Address('192.0.2.14') IPv4Address('192.0.2.15') ``` #### Networks as containers of addresses Network objects can act as containers of addresses. 
Some examples: ``` >>> IPv4Network('192.0.2.0/28')[0] IPv4Address('192.0.2.0') >>> IPv4Network('192.0.2.0/28')[15] IPv4Address('192.0.2.15') >>> IPv4Address('192.0.2.6') in IPv4Network('192.0.2.0/28') True >>> IPv4Address('192.0.3.6') in IPv4Network('192.0.2.0/28') False ``` Interface objects ----------------- Interface objects are [hashable](../glossary#term-hashable), so they can be used as keys in dictionaries. `class ipaddress.IPv4Interface(address)` Construct an IPv4 interface. The meaning of *address* is as in the constructor of [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network"), except that arbitrary host addresses are always accepted. [`IPv4Interface`](#ipaddress.IPv4Interface "ipaddress.IPv4Interface") is a subclass of [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address"), so it inherits all the attributes from that class. In addition, the following attributes are available: `ip` The address ([`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address")) without network information. ``` >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.ip IPv4Address('192.0.2.5') ``` `network` The network ([`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network")) this interface belongs to. ``` >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.network IPv4Network('192.0.2.0/24') ``` `with_prefixlen` A string representation of the interface with the mask in prefix notation. ``` >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.with_prefixlen '192.0.2.5/24' ``` `with_netmask` A string representation of the interface with the network as a net mask. ``` >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.with_netmask '192.0.2.5/255.255.255.0' ``` `with_hostmask` A string representation of the interface with the network as a host mask. ``` >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.with_hostmask '192.0.2.5/0.0.0.255' ``` `class ipaddress.IPv6Interface(address)` Construct an IPv6 interface. The meaning of *address* is as in the constructor of [`IPv6Network`](#ipaddress.IPv6Network "ipaddress.IPv6Network"), except that arbitrary host addresses are always accepted. [`IPv6Interface`](#ipaddress.IPv6Interface "ipaddress.IPv6Interface") is a subclass of [`IPv6Address`](#ipaddress.IPv6Address "ipaddress.IPv6Address"), so it inherits all the attributes from that class. In addition, the following attributes are available: `ip` `network` `with_prefixlen` `with_netmask` `with_hostmask` Refer to the corresponding attribute documentation in [`IPv4Interface`](#ipaddress.IPv4Interface "ipaddress.IPv4Interface"). ### Operators Interface objects support some operators. Unless stated otherwise, operators can only be applied between compatible objects (i.e. IPv4 with IPv4, IPv6 with IPv6). #### Logical operators Interface objects can be compared with the usual set of logical operators. For equality comparison (`==` and `!=`), both the IP address and network must be the same for the objects to be equal. An interface will not compare equal to any address or network object. For ordering (`<`, `>`, etc) the rules are different. Interface and address objects with the same IP version can be compared, and the address objects will always sort before the interface objects. Two interface objects are first compared by their networks and, if those are the same, then by their IP addresses. 
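A brief sketch of these rules in action (the outputs follow from the equality and ordering behavior described above):

```
>>> IPv4Interface('192.0.2.5/24') == IPv4Address('192.0.2.5')
False
>>> IPv4Address('192.0.2.5') < IPv4Interface('192.0.2.5/24')
True
>>> IPv4Interface('192.0.2.1/24') < IPv4Interface('192.0.2.5/24')
True
```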
Other Module Level Functions ---------------------------- The module also provides the following module level functions: `ipaddress.v4_int_to_packed(address)` Represent an address as 4 packed bytes in network (big-endian) order. *address* is an integer representation of an IPv4 IP address. A [`ValueError`](exceptions#ValueError "ValueError") is raised if the integer is negative or too large to be an IPv4 IP address. ``` >>> ipaddress.ip_address(3221225985) IPv4Address('192.0.2.1') >>> ipaddress.v4_int_to_packed(3221225985) b'\xc0\x00\x02\x01' ``` `ipaddress.v6_int_to_packed(address)` Represent an address as 16 packed bytes in network (big-endian) order. *address* is an integer representation of an IPv6 IP address. A [`ValueError`](exceptions#ValueError "ValueError") is raised if the integer is negative or too large to be an IPv6 IP address. `ipaddress.summarize_address_range(first, last)` Return an iterator of the summarized network range given the first and last IP addresses. *first* is the first [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address") or [`IPv6Address`](#ipaddress.IPv6Address "ipaddress.IPv6Address") in the range and *last* is the last [`IPv4Address`](#ipaddress.IPv4Address "ipaddress.IPv4Address") or [`IPv6Address`](#ipaddress.IPv6Address "ipaddress.IPv6Address") in the range. A [`TypeError`](exceptions#TypeError "TypeError") is raised if *first* or *last* are not IP addresses or are not of the same version. A [`ValueError`](exceptions#ValueError "ValueError") is raised if *last* is not greater than *first* or if *first* address version is not 4 or 6. ``` >>> [ipaddr for ipaddr in ipaddress.summarize_address_range( ... ipaddress.IPv4Address('192.0.2.0'), ... ipaddress.IPv4Address('192.0.2.130'))] [IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/31'), IPv4Network('192.0.2.130/32')] ``` `ipaddress.collapse_addresses(addresses)` Return an iterator of the collapsed [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network") or [`IPv6Network`](#ipaddress.IPv6Network "ipaddress.IPv6Network") objects. *addresses* is an iterator of [`IPv4Network`](#ipaddress.IPv4Network "ipaddress.IPv4Network") or [`IPv6Network`](#ipaddress.IPv6Network "ipaddress.IPv6Network") objects. A [`TypeError`](exceptions#TypeError "TypeError") is raised if *addresses* contains mixed version objects. ``` >>> [ipaddr for ipaddr in ... ipaddress.collapse_addresses([ipaddress.IPv4Network('192.0.2.0/25'), ... ipaddress.IPv4Network('192.0.2.128/25')])] [IPv4Network('192.0.2.0/24')] ``` `ipaddress.get_mixed_type_key(obj)` Return a key suitable for sorting between networks and addresses. Address and Network objects are not sortable by default; they’re fundamentally different, so the expression: ``` IPv4Address('192.0.2.0') <= IPv4Network('192.0.2.0/24') ``` doesn’t make sense. There are times, however, when you may wish to have [`ipaddress`](#module-ipaddress "ipaddress: IPv4/IPv6 manipulation library.") sort these anyway. If you need to do this, you can use this function as the *key* argument to [`sorted()`](functions#sorted "sorted"). *obj* is either a network or address object. Custom Exceptions ----------------- To support more specific error reporting from class constructors, the module defines the following exceptions: `exception ipaddress.AddressValueError(ValueError)` Any value error related to the address. `exception ipaddress.NetmaskValueError(ValueError)` Any value error related to the net mask.
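Because both exceptions are `ValueError` subclasses, callers can distinguish the two failure modes or simply catch `ValueError` to handle both. A minimal sketch:

```
>>> try:
...     ipaddress.IPv4Network('192.0.2.0/33')
... except ipaddress.NetmaskValueError:
...     print('invalid mask')
... except ipaddress.AddressValueError:
...     print('invalid address')
...
invalid mask
```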
python bdb — Debugger framework bdb — Debugger framework ======================== **Source code:** [Lib/bdb.py](https://github.com/python/cpython/tree/3.9/Lib/bdb.py) The [`bdb`](#module-bdb "bdb: Debugger framework.") module handles basic debugger functions, like setting breakpoints or managing execution via the debugger. The following exception is defined: `exception bdb.BdbQuit` Exception raised by the [`Bdb`](#bdb.Bdb "bdb.Bdb") class for quitting the debugger. The [`bdb`](#module-bdb "bdb: Debugger framework.") module also defines two classes: `class bdb.Breakpoint(file, line, temporary=0, cond=None, funcname=None)` This class implements temporary breakpoints, ignore counts, disabling and (re-)enabling, and conditionals. Breakpoints are indexed by number through a list called `bpbynumber` and by `(file, line)` pairs through `bplist`. The former points to a single instance of class [`Breakpoint`](#bdb.Breakpoint "bdb.Breakpoint"). The latter points to a list of such instances since there may be more than one breakpoint per line. When creating a breakpoint, its associated filename should be in canonical form. If a *funcname* is defined, a breakpoint hit will be counted when the first line of that function is executed. A conditional breakpoint always counts a hit. [`Breakpoint`](#bdb.Breakpoint "bdb.Breakpoint") instances have the following methods: `deleteMe()` Delete the breakpoint from the list associated to a file/line. If it is the last breakpoint in that position, it also deletes the entry for the file/line. `enable()` Mark the breakpoint as enabled. `disable()` Mark the breakpoint as disabled. `bpformat()` Return a string with all the information about the breakpoint, nicely formatted: * The breakpoint number. * If it is temporary or not. * Its file and line position. * The condition that causes a break. * If it must be ignored the next N times. * The breakpoint hit count. New in version 3.2. `bpprint(out=None)` Print the output of [`bpformat()`](#bdb.Breakpoint.bpformat "bdb.Breakpoint.bpformat") to the file *out*, or if it is `None`, to standard output. `class bdb.Bdb(skip=None)` The [`Bdb`](#bdb.Bdb "bdb.Bdb") class acts as a generic Python debugger base class. This class takes care of the details of the trace facility; a derived class should implement user interaction. The standard debugger class ([`pdb.Pdb`](pdb#pdb.Pdb "pdb.Pdb")) is an example. The *skip* argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. Whether a frame is considered to originate in a certain module is determined by the `__name__` in the frame globals. New in version 3.1: The *skip* argument. The following methods of [`Bdb`](#bdb.Bdb "bdb.Bdb") normally don’t need to be overridden. `canonic(filename)` Auxiliary method for getting a filename in a canonical form, that is, as a case-normalized (on case-insensitive filesystems) absolute path, stripped of surrounding angle brackets. `reset()` Set the `botframe`, `stopframe`, `returnframe` and `quitting` attributes with values ready to start debugging. `trace_dispatch(frame, event, arg)` This function is installed as the trace function of debugged frames. Its return value is the new trace function (in most cases, that is, itself). The default implementation decides how to dispatch a frame, depending on the type of event (passed as a string) that is about to be executed. 
*event* can be one of the following: * `"line"`: A new line of code is going to be executed. * `"call"`: A function is about to be called, or another code block entered. * `"return"`: A function or other code block is about to return. * `"exception"`: An exception has occurred. * `"c_call"`: A C function is about to be called. * `"c_return"`: A C function has returned. * `"c_exception"`: A C function has raised an exception. For the Python events, specialized functions (see below) are called. For the C events, no action is taken. The *arg* parameter depends on the previous event. See the documentation for [`sys.settrace()`](sys#sys.settrace "sys.settrace") for more information on the trace function. For more information on code and frame objects, refer to [The standard type hierarchy](../reference/datamodel#types). `dispatch_line(frame)` If the debugger should stop on the current line, invoke the [`user_line()`](#bdb.Bdb.user_line "bdb.Bdb.user_line") method (which should be overridden in subclasses). Raise a [`BdbQuit`](#bdb.BdbQuit "bdb.BdbQuit") exception if the `Bdb.quitting` flag is set (which can be set from [`user_line()`](#bdb.Bdb.user_line "bdb.Bdb.user_line")). Return a reference to the [`trace_dispatch()`](#bdb.Bdb.trace_dispatch "bdb.Bdb.trace_dispatch") method for further tracing in that scope. `dispatch_call(frame, arg)` If the debugger should stop on this function call, invoke the [`user_call()`](#bdb.Bdb.user_call "bdb.Bdb.user_call") method (which should be overridden in subclasses). Raise a [`BdbQuit`](#bdb.BdbQuit "bdb.BdbQuit") exception if the `Bdb.quitting` flag is set (which can be set from [`user_call()`](#bdb.Bdb.user_call "bdb.Bdb.user_call")). Return a reference to the [`trace_dispatch()`](#bdb.Bdb.trace_dispatch "bdb.Bdb.trace_dispatch") method for further tracing in that scope. `dispatch_return(frame, arg)` If the debugger should stop on this function return, invoke the [`user_return()`](#bdb.Bdb.user_return "bdb.Bdb.user_return") method (which should be overridden in subclasses). Raise a [`BdbQuit`](#bdb.BdbQuit "bdb.BdbQuit") exception if the `Bdb.quitting` flag is set (which can be set from [`user_return()`](#bdb.Bdb.user_return "bdb.Bdb.user_return")). Return a reference to the [`trace_dispatch()`](#bdb.Bdb.trace_dispatch "bdb.Bdb.trace_dispatch") method for further tracing in that scope. `dispatch_exception(frame, arg)` If the debugger should stop at this exception, invoke the [`user_exception()`](#bdb.Bdb.user_exception "bdb.Bdb.user_exception") method (which should be overridden in subclasses). Raise a [`BdbQuit`](#bdb.BdbQuit "bdb.BdbQuit") exception if the `Bdb.quitting` flag is set (which can be set from [`user_exception()`](#bdb.Bdb.user_exception "bdb.Bdb.user_exception")). Return a reference to the [`trace_dispatch()`](#bdb.Bdb.trace_dispatch "bdb.Bdb.trace_dispatch") method for further tracing in that scope. Normally derived classes don’t override the following methods, but they may if they want to redefine the definition of stopping and breakpoints. `stop_here(frame)` This method checks if the *frame* is somewhere below `botframe` in the call stack. `botframe` is the frame in which debugging started. `break_here(frame)` This method checks if there is a breakpoint in the filename and line belonging to *frame* or, at least, in the current function. If the breakpoint is a temporary one, this method deletes it. `break_anywhere(frame)` This method checks if there is a breakpoint in the filename of the current frame. 
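As a brief sketch of how the dispatch methods and the override hooks documented below fit together (the `LineTracer` class and `demo` function here are hypothetical, not part of the module), a derived class that overrides `user_line()` can report every executed line:

```
import bdb

class LineTracer(bdb.Bdb):
    """Minimal debugger: report each line event and keep single-stepping."""
    def user_line(self, frame):
        # Invoked via dispatch_line() each time the debugger stops on a line.
        print(f"{frame.f_code.co_filename}:{frame.f_lineno}")
        self.set_step()  # stop again at the next line

def demo():
    x = 1
    y = x + 1
    return y

LineTracer().runcall(demo)  # prints the file and line number of each line of demo()
```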
Derived classes should override these methods to gain control over debugger operation. `user_call(frame, argument_list)` This method is called from [`dispatch_call()`](#bdb.Bdb.dispatch_call "bdb.Bdb.dispatch_call") when there is the possibility that a break might be necessary anywhere inside the called function. `user_line(frame)` This method is called from [`dispatch_line()`](#bdb.Bdb.dispatch_line "bdb.Bdb.dispatch_line") when either [`stop_here()`](#bdb.Bdb.stop_here "bdb.Bdb.stop_here") or [`break_here()`](#bdb.Bdb.break_here "bdb.Bdb.break_here") yields `True`. `user_return(frame, return_value)` This method is called from [`dispatch_return()`](#bdb.Bdb.dispatch_return "bdb.Bdb.dispatch_return") when [`stop_here()`](#bdb.Bdb.stop_here "bdb.Bdb.stop_here") yields `True`. `user_exception(frame, exc_info)` This method is called from [`dispatch_exception()`](#bdb.Bdb.dispatch_exception "bdb.Bdb.dispatch_exception") when [`stop_here()`](#bdb.Bdb.stop_here "bdb.Bdb.stop_here") yields `True`. `do_clear(arg)` Handle how a breakpoint must be removed when it is a temporary one. This method must be implemented by derived classes. Derived classes and clients can call the following methods to affect the stepping state. `set_step()` Stop after one line of code. `set_next(frame)` Stop on the next line in or below the given frame. `set_return(frame)` Stop when returning from the given frame. `set_until(frame)` Stop when a line with a line number greater than the current one is reached in the given frame, or when returning from the given frame. `set_trace([frame])` Start debugging from *frame*. If *frame* is not specified, debugging starts from caller’s frame. `set_continue()` Stop only at breakpoints or when finished. If there are no breakpoints, set the system trace function to `None`. `set_quit()` Set the `quitting` attribute to `True`. This raises [`BdbQuit`](#bdb.BdbQuit "bdb.BdbQuit") in the next call to one of the `dispatch_*()` methods. Derived classes and clients can call the following methods to manipulate breakpoints. These methods return a string containing an error message if something went wrong, or `None` if all is well. `set_break(filename, lineno, temporary=0, cond=None, funcname=None)` Set a new breakpoint. If the *lineno* line doesn’t exist for the *filename* passed as argument, return an error message. The *filename* should be in canonical form, as described in the [`canonic()`](#bdb.Bdb.canonic "bdb.Bdb.canonic") method. `clear_break(filename, lineno)` Delete the breakpoints at *lineno* in *filename*. If none were set, an error message is returned. `clear_bpbynumber(arg)` Delete the breakpoint which has the index *arg* in the `Breakpoint.bpbynumber`. If *arg* is not numeric or out of range, return an error message. `clear_all_file_breaks(filename)` Delete all breakpoints in *filename*. If none were set, an error message is returned. `clear_all_breaks()` Delete all existing breakpoints. `get_bpbynumber(arg)` Return a breakpoint specified by the given number. If *arg* is a string, it will be converted to a number. If *arg* is a non-numeric string, or if the given breakpoint never existed or has been deleted, a [`ValueError`](exceptions#ValueError "ValueError") is raised. New in version 3.2. `get_break(filename, lineno)` Check if there is a breakpoint for *lineno* of *filename*. `get_breaks(filename, lineno)` Return all breakpoints for *lineno* in *filename*, or an empty list if none are set. `get_file_breaks(filename)` Return all breakpoints in *filename*, or an empty list if none are set. 
`get_all_breaks()` Return all breakpoints that are set. Derived classes and clients can call the following methods to get a data structure representing a stack trace. `get_stack(f, t)` Get a list of records for a frame and all higher (calling) and lower frames, and the size of the higher part. `format_stack_entry(frame_lineno, lprefix=': ')` Return a string with information about a stack entry, identified by a `(frame, lineno)` tuple: * The canonical form of the filename which contains the frame. * The function name, or `"<lambda>"`. * The input arguments. * The return value. * The line of code (if it exists). The following two methods can be called by clients to use a debugger to debug a [statement](../glossary#term-statement), given as a string. `run(cmd, globals=None, locals=None)` Debug a statement executed via the [`exec()`](functions#exec "exec") function. *globals* defaults to `__main__.__dict__`, *locals* defaults to *globals*. `runeval(expr, globals=None, locals=None)` Debug an expression executed via the [`eval()`](functions#eval "eval") function. *globals* and *locals* have the same meaning as in [`run()`](#bdb.Bdb.run "bdb.Bdb.run"). `runctx(cmd, globals, locals)` For backwards compatibility. Calls the [`run()`](#bdb.Bdb.run "bdb.Bdb.run") method. `runcall(func, /, *args, **kwds)` Debug a single function call, and return its result. Finally, the module defines the following functions: `bdb.checkfuncname(b, frame)` Check whether we should break here, depending on the way the breakpoint *b* was set. If it was set via line number, it checks if `b.line` is the same as the one in the frame also passed as argument. If the breakpoint was set via function name, we have to check we are in the right frame (the right function) and if we are in its first executable line. `bdb.effective(file, line, frame)` Determine if there is an effective (active) breakpoint at this line of code. Return a tuple of the breakpoint and a boolean that indicates if it is ok to delete a temporary breakpoint. Return `(None, None)` if there is no matching breakpoint. `bdb.set_trace()` Start debugging with a [`Bdb`](#bdb.Bdb "bdb.Bdb") instance from caller’s frame. python email.policy: Policy Objects email.policy: Policy Objects ============================ New in version 3.3. **Source code:** [Lib/email/policy.py](https://github.com/python/cpython/tree/3.9/Lib/email/policy.py) The [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package’s prime focus is the handling of email messages as described by the various email and MIME RFCs. However, the general format of email messages (a block of header fields each consisting of a name followed by a colon followed by a value, the whole block followed by a blank line and an arbitrary ‘body’), is a format that has found utility outside of the realm of email. Some of these uses conform fairly closely to the main email RFCs, some do not. Even when working with email, there are times when it is desirable to break strict compliance with the RFCs, such as generating emails that interoperate with email servers that do not themselves follow the standards, or that implement extensions you want to use in ways that violate the standards. Policy objects give the email package the flexibility to handle all these disparate use cases. A [`Policy`](#email.policy.Policy "email.policy.Policy") object encapsulates a set of attributes and methods that control the behavior of various components of the email package during use. 
[`Policy`](#email.policy.Policy "email.policy.Policy") instances can be passed to various classes and methods in the email package to alter the default behavior. The settable values and their defaults are described below. There is a default policy used by all classes in the email package. For all of the [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") classes and the related convenience functions, and for the [`Message`](email.compat32-message#email.message.Message "email.message.Message") class, this is the [`Compat32`](#email.policy.Compat32 "email.policy.Compat32") policy, via its corresponding pre-defined instance [`compat32`](#email.policy.compat32 "email.policy.compat32"). This policy provides for complete backward compatibility (in some cases, including bug compatibility) with the pre-Python3.3 version of the email package. The default value for the *policy* keyword to [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") is the [`EmailPolicy`](#email.policy.EmailPolicy "email.policy.EmailPolicy") policy, via its pre-defined instance [`default`](#email.policy.default "email.policy.default"). When a [`Message`](email.compat32-message#email.message.Message "email.message.Message") or [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") object is created, it acquires a policy. If the message is created by a [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure."), a policy passed to the parser will be the policy used by the message it creates. If the message is created by the program, then the policy can be specified when it is created. When a message is passed to a [`generator`](email.generator#module-email.generator "email.generator: Generate flat text email messages from a message structure."), the generator uses the policy from the message by default, but you can also pass a specific policy to the generator that will override the one stored on the message object. The default value for the *policy* keyword for the [`email.parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") classes and the parser convenience functions **will be changing** in a future version of Python. Therefore you should **always specify explicitly which policy you want to use** when calling any of the classes and functions described in the [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") module. The first part of this documentation covers the features of [`Policy`](#email.policy.Policy "email.policy.Policy"), an [abstract base class](../glossary#term-abstract-base-class) that defines the features that are common to all policy objects, including [`compat32`](#email.policy.compat32 "email.policy.compat32"). This includes certain hook methods that are called internally by the email package, which a custom policy could override to obtain different behavior. The second part describes the concrete classes [`EmailPolicy`](#email.policy.EmailPolicy "email.policy.EmailPolicy") and [`Compat32`](#email.policy.Compat32 "email.policy.Compat32"), which implement the hooks that provide the standard behavior and the backward compatible behavior and features, respectively. 
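As a small sketch of the advice above about always passing an explicit *policy* (the `raw_text` value is just a stand-in message), note that the chosen policy also determines which message class the parser builds:

```
>>> from email import message_from_string, policy
>>> raw_text = 'To: x@example.com\n\nBody\n'
>>> type(message_from_string(raw_text))  # implicit compat32 default
<class 'email.message.Message'>
>>> type(message_from_string(raw_text, policy=policy.default))
<class 'email.message.EmailMessage'>
```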
[`Policy`](#email.policy.Policy "email.policy.Policy") instances are immutable, but they can be cloned, accepting the same keyword arguments as the class constructor and returning a new [`Policy`](#email.policy.Policy "email.policy.Policy") instance that is a copy of the original but with the specified attributes values changed. As an example, the following code could be used to read an email message from a file on disk and pass it to the system `sendmail` program on a Unix system: ``` >>> from email import message_from_binary_file >>> from email.generator import BytesGenerator >>> from email import policy >>> from subprocess import Popen, PIPE >>> with open('mymsg.txt', 'rb') as f: ... msg = message_from_binary_file(f, policy=policy.default) >>> p = Popen(['sendmail', msg['To'].addresses[0]], stdin=PIPE) >>> g = BytesGenerator(p.stdin, policy=msg.policy.clone(linesep='\r\n')) >>> g.flatten(msg) >>> p.stdin.close() >>> rc = p.wait() ``` Here we are telling [`BytesGenerator`](email.generator#email.generator.BytesGenerator "email.generator.BytesGenerator") to use the RFC correct line separator characters when creating the binary string to feed into `sendmail's` `stdin`, where the default policy would use `\n` line separators. Some email package methods accept a *policy* keyword argument, allowing the policy to be overridden for that method. For example, the following code uses the [`as_bytes()`](email.compat32-message#email.message.Message.as_bytes "email.message.Message.as_bytes") method of the *msg* object from the previous example and writes the message to a file using the native line separators for the platform on which it is running: ``` >>> import os >>> with open('converted.txt', 'wb') as f: ... f.write(msg.as_bytes(policy=msg.policy.clone(linesep=os.linesep))) 17 ``` Policy objects can also be combined using the addition operator, producing a policy object whose settings are a combination of the non-default values of the summed objects: ``` >>> compat_SMTP = policy.compat32.clone(linesep='\r\n') >>> compat_strict = policy.compat32.clone(raise_on_defect=True) >>> compat_strict_SMTP = compat_SMTP + compat_strict ``` This operation is not commutative; that is, the order in which the objects are added matters. To illustrate: ``` >>> policy100 = policy.compat32.clone(max_line_length=100) >>> policy80 = policy.compat32.clone(max_line_length=80) >>> apolicy = policy100 + policy80 >>> apolicy.max_line_length 80 >>> apolicy = policy80 + policy100 >>> apolicy.max_line_length 100 ``` `class email.policy.Policy(**kw)` This is the [abstract base class](../glossary#term-abstract-base-class) for all policy classes. It provides default implementations for a couple of trivial methods, as well as the implementation of the immutability property, the [`clone()`](#email.policy.Policy.clone "email.policy.Policy.clone") method, and the constructor semantics. The constructor of a policy class can be passed various keyword arguments. The arguments that may be specified are any non-method properties on this class, plus any additional non-method properties on the concrete class. A value specified in the constructor will override the default value for the corresponding attribute. This class defines the following properties, and thus values for the following may be passed in the constructor of any policy class: `max_line_length` The maximum length of any line in the serialized output, not counting the end of line character(s). Default is 78, per [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html). 
A value of `0` or [`None`](constants#None "None") indicates that no line wrapping should be done at all. `linesep` The string to be used to terminate lines in serialized output. The default is `\n` because that’s the internal end-of-line discipline used by Python, though `\r\n` is required by the RFCs. `cte_type` Controls the type of Content Transfer Encodings that may be or are required to be used. The possible values are:

| Value | Meaning |
| --- | --- |
| `7bit` | all data must be “7 bit clean” (ASCII-only). This means that where necessary data will be encoded using either quoted-printable or base64 encoding. |
| `8bit` | data is not constrained to be 7 bit clean. Data in headers is still required to be ASCII-only and so will be encoded (see [`fold_binary()`](#email.policy.Policy.fold_binary "email.policy.Policy.fold_binary") and [`utf8`](#email.policy.EmailPolicy.utf8 "email.policy.EmailPolicy.utf8") below for exceptions), but body parts may use the `8bit` CTE. |

A `cte_type` value of `8bit` only works with `BytesGenerator`, not `Generator`, because strings cannot contain binary data. If a `Generator` is operating under a policy that specifies `cte_type=8bit`, it will act as if `cte_type` is `7bit`. `raise_on_defect` If [`True`](constants#True "True"), any defects encountered will be raised as errors. If [`False`](constants#False "False") (the default), defects will be passed to the [`register_defect()`](#email.policy.Policy.register_defect "email.policy.Policy.register_defect") method. `mangle_from_` If [`True`](constants#True "True"), lines starting with *“From “* in the body are escaped by putting a `>` in front of them. This parameter is used when the message is being serialized by a generator. Default: [`False`](constants#False "False"). New in version 3.5: The *mangle\_from\_* parameter. `message_factory` A factory function for constructing a new empty message object. Used by the parser when building messages. Defaults to `None`, in which case [`Message`](email.compat32-message#email.message.Message "email.message.Message") is used. New in version 3.6. The following [`Policy`](#email.policy.Policy "email.policy.Policy") method is intended to be called by code using the email library to create policy instances with custom settings: `clone(**kw)` Return a new [`Policy`](#email.policy.Policy "email.policy.Policy") instance whose attributes have the same values as the current instance, except where those attributes are given new values by the keyword arguments. The remaining [`Policy`](#email.policy.Policy "email.policy.Policy") methods are called by the email package code, and are not intended to be called by an application using the email package. A custom policy must implement all of these methods. `handle_defect(obj, defect)` Handle a *defect* found on *obj*. When the email package calls this method, *defect* will always be a subclass of `Defect`. The default implementation checks the [`raise_on_defect`](#email.policy.Policy.raise_on_defect "email.policy.Policy.raise_on_defect") flag. If it is `True`, *defect* is raised as an exception. If it is `False` (the default), *obj* and *defect* are passed to [`register_defect()`](#email.policy.Policy.register_defect "email.policy.Policy.register_defect"). `register_defect(obj, defect)` Register a *defect* on *obj*. In the email package, *defect* will always be a subclass of `Defect`. The default implementation calls the `append` method of the `defects` attribute of *obj*. 
When the email package calls [`handle_defect`](#email.policy.Policy.handle_defect "email.policy.Policy.handle_defect"), *obj* will normally have a `defects` attribute that has an `append` method. Custom object types used with the email package (for example, custom `Message` objects) should also provide such an attribute, otherwise defects in parsed messages will raise unexpected errors.

`header_max_count(name)`

Return the maximum allowed number of headers named *name*.

Called when a header is added to an [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") or [`Message`](email.compat32-message#email.message.Message "email.message.Message") object. If the returned value is not `0` or `None`, and there are already a number of headers with the name *name* greater than or equal to the value returned, a [`ValueError`](exceptions#ValueError "ValueError") is raised.

Because the default behavior of `Message.__setitem__` is to append the value to the list of headers, it is easy to create duplicate headers without realizing it. This method allows certain headers to be limited in the number of instances of that header that may be added to a `Message` programmatically. (The limit is not observed by the parser, which will faithfully produce as many headers as exist in the message being parsed.)

The default implementation returns `None` for all header names.

`header_source_parse(sourcelines)`

The email package calls this method with a list of strings, each string ending with the line separation characters found in the source being parsed. The first line includes the field header name and separator. All whitespace in the source is preserved. The method should return the `(name, value)` tuple that is to be stored in the `Message` to represent the parsed header.

If an implementation wishes to retain compatibility with the existing email package policies, *name* should be the case-preserved name (all characters up to the ‘`:`’ separator), while *value* should be the unfolded value (all line separator characters removed, but whitespace kept intact), stripped of leading whitespace.

*sourcelines* may contain surrogateescaped binary data.

There is no default implementation.

`header_store_parse(name, value)`

The email package calls this method with the name and value provided by the application program when the application program is modifying a `Message` programmatically (as opposed to a `Message` created by a parser). The method should return the `(name, value)` tuple that is to be stored in the `Message` to represent the header.

If an implementation wishes to retain compatibility with the existing email package policies, the *name* and *value* should be strings or string subclasses that do not change the content of the passed in arguments.

There is no default implementation.

`header_fetch_parse(name, value)`

The email package calls this method with the *name* and *value* currently stored in the `Message` when that header is requested by the application program, and whatever the method returns is what is passed back to the application as the value of the header being retrieved. Note that there may be more than one header with the same name stored in the `Message`; the method is passed the specific name and value of the header destined to be returned to the application.

*value* may contain surrogateescaped binary data. There should be no surrogateescaped binary data in the value returned by the method.

There is no default implementation.
`fold(name, value)`

The email package calls this method with the *name* and *value* currently stored in the `Message` for a given header. The method should return a string that represents that header “folded” correctly (according to the policy settings) by composing the *name* with the *value* and inserting [`linesep`](#email.policy.Policy.linesep "email.policy.Policy.linesep") characters at the appropriate places. See [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html) for a discussion of the rules for folding email headers.

*value* may contain surrogateescaped binary data. There should be no surrogateescaped binary data in the string returned by the method.

`fold_binary(name, value)`

The same as [`fold()`](#email.policy.Policy.fold "email.policy.Policy.fold"), except that the returned value should be a bytes object rather than a string.

*value* may contain surrogateescaped binary data. These could be converted back into binary data in the returned bytes object.

`class email.policy.EmailPolicy(**kw)`

This concrete [`Policy`](#email.policy.Policy "email.policy.Policy") provides behavior that is intended to be fully compliant with the current email RFCs. These include (but are not limited to) [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html), [**RFC 2047**](https://tools.ietf.org/html/rfc2047.html), and the current MIME RFCs.

This policy adds new header parsing and folding algorithms. Instead of simple strings, headers are `str` subclasses with attributes that depend on the type of the field. The parsing and folding algorithms fully implement [**RFC 2047**](https://tools.ietf.org/html/rfc2047.html) and [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html).

The default value for the [`message_factory`](#email.policy.Policy.message_factory "email.policy.Policy.message_factory") attribute is [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage").

In addition to the settable attributes listed above that apply to all policies, this policy adds the following additional attributes:

New in version 3.6: [1](#id2)

`utf8`

If `False`, follow [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html), supporting non-ASCII characters in headers by encoding them as “encoded words”. If `True`, follow [**RFC 6532**](https://tools.ietf.org/html/rfc6532.html) and use `utf-8` encoding for headers. Messages formatted in this way may be passed to SMTP servers that support the `SMTPUTF8` extension ([**RFC 6531**](https://tools.ietf.org/html/rfc6531.html)).

`refold_source`

If the value for a header in the `Message` object originated from a [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") (as opposed to being set by a program), this attribute indicates whether or not a generator should refold that value when transforming the message back into serialized form. The possible values are:

| Value | Meaning |
| --- | --- |
| `none` | all source values use original folding |
| `long` | source values that have any line that is longer than `max_line_length` will be refolded |
| `all` | all values are refolded |

The default is `long`.

`header_factory`

A callable that takes two arguments, `name` and `value`, where `name` is a header field name and `value` is an unfolded header field value, and returns a string subclass that represents that header.
A default `header_factory` (see [`headerregistry`](email.headerregistry#module-email.headerregistry "email.headerregistry: Automatic Parsing of headers based on the field name")) is provided that supports custom parsing for the various address and date [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html) header field types, and the major MIME header field types. Support for additional custom parsing will be added in the future.

`content_manager`

An object with at least two methods: `get_content` and `set_content`. When the [`get_content()`](email.message#email.message.EmailMessage.get_content "email.message.EmailMessage.get_content") or [`set_content()`](email.message#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content") method of an [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") object is called, it calls the corresponding method of this object, passing it the message object as its first argument, and any arguments or keywords that were passed to it as additional arguments. By default `content_manager` is set to [`raw_data_manager`](email.contentmanager#email.contentmanager.raw_data_manager "email.contentmanager.raw_data_manager").

New in version 3.4.

The class provides the following concrete implementations of the abstract methods of [`Policy`](#email.policy.Policy "email.policy.Policy"):

`header_max_count(name)`

Returns the value of the [`max_count`](email.headerregistry#email.headerregistry.BaseHeader.max_count "email.headerregistry.BaseHeader.max_count") attribute of the specialized class used to represent the header with the given name.

`header_source_parse(sourcelines)`

The name is parsed as everything up to the ‘`:`’ and returned unmodified. The value is determined by stripping leading whitespace off the remainder of the first line, joining all subsequent lines together, and stripping any trailing carriage return or linefeed characters.

`header_store_parse(name, value)`

The name is returned unchanged. If the input value has a `name` attribute and it matches *name* ignoring case, the value is returned unchanged. Otherwise the *name* and *value* are passed to `header_factory`, and the resulting header object is returned as the value. In this case a `ValueError` is raised if the input value contains CR or LF characters.

`header_fetch_parse(name, value)`

If the value has a `name` attribute, it is returned unmodified. Otherwise the *name*, and the *value* with any CR or LF characters removed, are passed to the `header_factory`, and the resulting header object is returned. Any surrogateescaped bytes get turned into the unicode unknown-character glyph.

`fold(name, value)`

Header folding is controlled by the [`refold_source`](#email.policy.EmailPolicy.refold_source "email.policy.EmailPolicy.refold_source") policy setting. A value is considered to be a ‘source value’ if and only if it does not have a `name` attribute (having a `name` attribute means it is a header object of some sort). If a source value needs to be refolded according to the policy, it is converted into a header object by passing the *name* and the *value* with any CR and LF characters removed to the `header_factory`. Folding of a header object is done by calling its `fold` method with the current policy.

Source values are split into lines using [`splitlines()`](stdtypes#str.splitlines "str.splitlines"). If the value is not to be refolded, the lines are rejoined using the `linesep` from the policy and returned.
The exception is lines containing non-ASCII binary data. In that case the value is refolded regardless of the `refold_source` setting, which causes the binary data to be CTE encoded using the `unknown-8bit` charset.

`fold_binary(name, value)`

The same as [`fold()`](#email.policy.EmailPolicy.fold "email.policy.EmailPolicy.fold") if [`cte_type`](#email.policy.Policy.cte_type "email.policy.Policy.cte_type") is `7bit`, except that the returned value is bytes.

If [`cte_type`](#email.policy.Policy.cte_type "email.policy.Policy.cte_type") is `8bit`, non-ASCII binary data is converted back into bytes. Headers with binary data are not refolded, regardless of the [`refold_source`](#email.policy.EmailPolicy.refold_source "email.policy.EmailPolicy.refold_source") setting, since there is no way to know whether the binary data consists of single byte characters or multibyte characters.

The following instances of [`EmailPolicy`](#email.policy.EmailPolicy "email.policy.EmailPolicy") provide defaults suitable for specific application domains. Note that in the future the behavior of these instances (in particular the `HTTP` instance) may be adjusted to conform even more closely to the RFCs relevant to their domains.

`email.policy.default`

An instance of `EmailPolicy` with all defaults unchanged. This policy uses the standard Python `\n` line endings rather than the RFC-correct `\r\n`.

`email.policy.SMTP`

Suitable for serializing messages in conformance with the email RFCs. Like `default`, but with `linesep` set to `\r\n`, which is RFC compliant.

`email.policy.SMTPUTF8`

The same as `SMTP` except that [`utf8`](#email.policy.EmailPolicy.utf8 "email.policy.EmailPolicy.utf8") is `True`. Useful for serializing messages to a message store without using encoded words in the headers. Should only be used for SMTP transmission if the sender or recipient addresses have non-ASCII characters (the [`smtplib.SMTP.send_message()`](smtplib#smtplib.SMTP.send_message "smtplib.SMTP.send_message") method handles this automatically).

`email.policy.HTTP`

Suitable for serializing headers for use in HTTP traffic. Like `SMTP` except that `max_line_length` is set to `None` (unlimited).

`email.policy.strict`

Convenience instance. The same as `default` except that `raise_on_defect` is set to `True`. This allows any policy to be made strict by writing:

```
somepolicy + policy.strict
```

With all of these [`EmailPolicies`](#email.policy.EmailPolicy "email.policy.EmailPolicy"), the effective API of the email package is changed from the Python 3.2 API in the following ways:

* Setting a header on a [`Message`](email.compat32-message#email.message.Message "email.message.Message") results in that header being parsed and a header object created.
* Fetching a header value from a [`Message`](email.compat32-message#email.message.Message "email.message.Message") results in that header being parsed and a header object created and returned.
* Any header object, or any header that is refolded due to the policy settings, is folded using an algorithm that fully implements the RFC folding algorithms, including knowing where encoded words are required and allowed.

From the application view, this means that any header obtained through the [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage") is a header object with extra attributes, whose string value is the fully decoded unicode value of the header. Likewise, a header may be assigned a new value, or a new header created, using a unicode string, and the policy will take care of converting the unicode string into the correct RFC encoded form.
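As a brief illustration of these API changes (the address used here is purely hypothetical), a sketch using [`EmailMessage`](email.message#email.message.EmailMessage "email.message.EmailMessage"), which is created under `policy.default`:

```
from email.message import EmailMessage

msg = EmailMessage()                         # created under policy.default
msg["To"] = "Ana Pérez <ana@example.com>"    # assigned as a plain unicode string

to = msg["To"]                       # fetched as a parsed header object
print(to.addresses[0].addr_spec)     # ana@example.com
print(to.addresses[0].display_name)  # Ana Pérez

# Serializing folds the header and encodes the non-ASCII display name
# as an RFC 2047 encoded word.
print(msg.as_bytes())
```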
The header objects and their attributes are described in [`headerregistry`](email.headerregistry#module-email.headerregistry "email.headerregistry: Automatic Parsing of headers based on the field name").

`class email.policy.Compat32(**kw)`

This concrete [`Policy`](#email.policy.Policy "email.policy.Policy") is the backward compatibility policy. It replicates the behavior of the email package in Python 3.2. The [`policy`](#module-email.policy "email.policy: Controlling the parsing and generating of messages") module also defines an instance of this class, [`compat32`](#email.policy.compat32 "email.policy.compat32"), that is used as the default policy. Thus the default behavior of the email package is to maintain compatibility with Python 3.2.

The following attributes have values that are different from the [`Policy`](#email.policy.Policy "email.policy.Policy") default:

`mangle_from_`

The default is `True`.

The class provides the following concrete implementations of the abstract methods of [`Policy`](#email.policy.Policy "email.policy.Policy"):

`header_source_parse(sourcelines)`

The name is parsed as everything up to the ‘`:`’ and returned unmodified. The value is determined by stripping leading whitespace off the remainder of the first line, joining all subsequent lines together, and stripping any trailing carriage return or linefeed characters.

`header_store_parse(name, value)`

The name and value are returned unmodified.

`header_fetch_parse(name, value)`

If the value contains binary data, it is converted into a [`Header`](email.header#email.header.Header "email.header.Header") object using the `unknown-8bit` charset. Otherwise it is returned unmodified.

`fold(name, value)`

Headers are folded using the [`Header`](email.header#email.header.Header "email.header.Header") folding algorithm, which preserves existing line breaks in the value, and wraps each resulting line to the `max_line_length`. Non-ASCII binary data is CTE encoded using the `unknown-8bit` charset.

`fold_binary(name, value)`

Headers are folded using the [`Header`](email.header#email.header.Header "email.header.Header") folding algorithm, which preserves existing line breaks in the value, and wraps each resulting line to the `max_line_length`. If `cte_type` is `7bit`, non-ASCII binary data is CTE encoded using the `unknown-8bit` charset. Otherwise the original source header is used, with its existing line breaks and any (RFC invalid) binary data it may contain.

`email.policy.compat32`

An instance of [`Compat32`](#email.policy.Compat32 "email.policy.Compat32"), providing backward compatibility with the behavior of the email package in Python 3.2.

#### Footnotes

`1`

Originally added in 3.3 as a [provisional feature](../glossary#term-provisional-package).
python selectors — High-level I/O multiplexing

selectors — High-level I/O multiplexing
=======================================

New in version 3.4.

**Source code:** [Lib/selectors.py](https://github.com/python/cpython/tree/3.9/Lib/selectors.py)

Introduction
------------

This module allows high-level and efficient I/O multiplexing, built upon the [`select`](select#module-select "select: Wait for I/O completion on multiple streams.") module primitives. Users are encouraged to use this module instead, unless they want precise control over the OS-level primitives used.

It defines a [`BaseSelector`](#selectors.BaseSelector "selectors.BaseSelector") abstract base class, along with several concrete implementations ([`KqueueSelector`](#selectors.KqueueSelector "selectors.KqueueSelector"), [`EpollSelector`](#selectors.EpollSelector "selectors.EpollSelector")…), that can be used to wait for I/O readiness notification on multiple file objects. In the following, “file object” refers to any object with a `fileno()` method, or a raw file descriptor. See [file object](../glossary#term-file-object).

[`DefaultSelector`](#selectors.DefaultSelector "selectors.DefaultSelector") is an alias to the most efficient implementation available on the current platform: this should be the default choice for most users.

Note

The type of file objects supported depends on the platform: on Windows, sockets are supported, but not pipes, whereas on Unix, both are supported (some other types may be supported as well, such as fifos or special file devices).

See also

[`select`](select#module-select "select: Wait for I/O completion on multiple streams.") Low-level I/O multiplexing module.

Classes
-------

Class hierarchy:

```
BaseSelector
+-- SelectSelector
+-- PollSelector
+-- EpollSelector
+-- DevpollSelector
+-- KqueueSelector
```

In the following, *events* is a bitwise mask indicating which I/O events should be waited for on a given file object. It can be a combination of the module’s constants below:

| Constant | Meaning |
| --- | --- |
| `EVENT_READ` | Available for read |
| `EVENT_WRITE` | Available for write |

`class selectors.SelectorKey`

A [`SelectorKey`](#selectors.SelectorKey "selectors.SelectorKey") is a [`namedtuple`](collections#collections.namedtuple "collections.namedtuple") used to associate a file object to its underlying file descriptor, selected event mask and attached data. It is returned by several [`BaseSelector`](#selectors.BaseSelector "selectors.BaseSelector") methods.

`fileobj`

File object registered.

`fd`

Underlying file descriptor.

`events`

Events that must be waited for on this file object.

`data`

Optional opaque data associated with this file object: for example, this could be used to store a per-client session ID.

`class selectors.BaseSelector`

A [`BaseSelector`](#selectors.BaseSelector "selectors.BaseSelector") is used to wait for I/O event readiness on multiple file objects. It supports file stream registration, unregistration, and a method to wait for I/O events on those streams, with an optional timeout. It’s an abstract base class, so it cannot be instantiated. Use [`DefaultSelector`](#selectors.DefaultSelector "selectors.DefaultSelector") instead, or one of [`SelectSelector`](#selectors.SelectSelector "selectors.SelectSelector"), [`KqueueSelector`](#selectors.KqueueSelector "selectors.KqueueSelector") etc. if you want to specifically use an implementation, and your platform supports it.
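Before the fuller echo-server example later in this section, a minimal sketch of the register/select/unregister cycle, using a connected socket pair so it runs without a network peer (all names here are local to the sketch):

```
import selectors
import socket

sel = selectors.DefaultSelector()
rsock, wsock = socket.socketpair()   # rsock becomes readable once wsock writes

sel.register(rsock, selectors.EVENT_READ, data="reader")
wsock.send(b"ping")

for key, events in sel.select(timeout=1.0):
    if events & selectors.EVENT_READ:
        print(key.data, key.fileobj.recv(4))   # reader b'ping'

sel.unregister(rsock)
sel.close()
rsock.close()
wsock.close()
```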
[`BaseSelector`](#selectors.BaseSelector "selectors.BaseSelector") and its concrete implementations support the [context manager](../glossary#term-context-manager) protocol.

`abstractmethod register(fileobj, events, data=None)`

Register a file object for selection, monitoring it for I/O events.

*fileobj* is the file object to monitor. It may either be an integer file descriptor or an object with a `fileno()` method. *events* is a bitwise mask of events to monitor. *data* is an opaque object.

This returns a new [`SelectorKey`](#selectors.SelectorKey "selectors.SelectorKey") instance, or raises a [`ValueError`](exceptions#ValueError "ValueError") in case of invalid event mask or file descriptor, or [`KeyError`](exceptions#KeyError "KeyError") if the file object is already registered.

`abstractmethod unregister(fileobj)`

Unregister a file object from selection, removing it from monitoring. A file object shall be unregistered prior to being closed.

*fileobj* must be a file object previously registered.

This returns the associated [`SelectorKey`](#selectors.SelectorKey "selectors.SelectorKey") instance, or raises a [`KeyError`](exceptions#KeyError "KeyError") if *fileobj* is not registered. It will raise [`ValueError`](exceptions#ValueError "ValueError") if *fileobj* is invalid (e.g. it has no `fileno()` method or its `fileno()` method has an invalid return value).

`modify(fileobj, events, data=None)`

Change a registered file object’s monitored events or attached data.

This is equivalent to `BaseSelector.unregister(fileobj)` followed by `BaseSelector.register(fileobj, events, data)`, except that it can be implemented more efficiently.

This returns a new [`SelectorKey`](#selectors.SelectorKey "selectors.SelectorKey") instance, or raises a [`ValueError`](exceptions#ValueError "ValueError") in case of invalid event mask or file descriptor, or [`KeyError`](exceptions#KeyError "KeyError") if the file object is not registered.

`abstractmethod select(timeout=None)`

Wait until some registered file objects become ready, or the timeout expires.

If `timeout > 0`, this specifies the maximum wait time, in seconds. If `timeout <= 0`, the call won’t block, and will report the currently ready file objects. If *timeout* is `None`, the call will block until a monitored file object becomes ready.

This returns a list of `(key, events)` tuples, one for each ready file object.

*key* is the [`SelectorKey`](#selectors.SelectorKey "selectors.SelectorKey") instance corresponding to a ready file object. *events* is a bitmask of events ready on this file object.

Note

This method can return before any file object becomes ready or the timeout has elapsed if the current process receives a signal: in this case, an empty list will be returned.

Changed in version 3.5: The selector is now retried with a recomputed timeout when interrupted by a signal if the signal handler did not raise an exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale), instead of returning an empty list of events before the timeout.

`close()`

Close the selector.

This must be called to make sure that any underlying resource is freed. The selector shall not be used once it has been closed.

`get_key(fileobj)`

Return the key associated with a registered file object.

This returns the [`SelectorKey`](#selectors.SelectorKey "selectors.SelectorKey") instance associated to this file object, or raises [`KeyError`](exceptions#KeyError "KeyError") if the file object is not registered.
`abstractmethod get_map()`

Return a mapping of file objects to selector keys.

This returns a [`Mapping`](collections.abc#collections.abc.Mapping "collections.abc.Mapping") instance mapping registered file objects to their associated [`SelectorKey`](#selectors.SelectorKey "selectors.SelectorKey") instance.

`class selectors.DefaultSelector`

The default selector class, using the most efficient implementation available on the current platform. This should be the default choice for most users.

`class selectors.SelectSelector`

[`select.select()`](select#select.select "select.select")-based selector.

`class selectors.PollSelector`

[`select.poll()`](select#select.poll "select.poll")-based selector.

`class selectors.EpollSelector`

[`select.epoll()`](select#select.epoll "select.epoll")-based selector.

`fileno()`

This returns the file descriptor used by the underlying [`select.epoll()`](select#select.epoll "select.epoll") object.

`class selectors.DevpollSelector`

[`select.devpoll()`](select#select.devpoll "select.devpoll")-based selector.

`fileno()`

This returns the file descriptor used by the underlying [`select.devpoll()`](select#select.devpoll "select.devpoll") object.

New in version 3.5.

`class selectors.KqueueSelector`

[`select.kqueue()`](select#select.kqueue "select.kqueue")-based selector.

`fileno()`

This returns the file descriptor used by the underlying [`select.kqueue()`](select#select.kqueue "select.kqueue") object.

Examples
--------

Here is a simple echo server implementation:

```
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(sock, mask):
    conn, addr = sock.accept()  # Should be ready
    print('accepted', conn, 'from', addr)
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn, mask):
    data = conn.recv(1000)  # Should be ready
    if data:
        print('echoing', repr(data), 'to', conn)
        conn.send(data)  # Hope it won't block
    else:
        print('closing', conn)
        sel.unregister(conn)
        conn.close()

sock = socket.socket()
sock.bind(('localhost', 1234))
sock.listen(100)
sock.setblocking(False)
sel.register(sock, selectors.EVENT_READ, accept)

while True:
    events = sel.select()
    for key, mask in events:
        callback = key.data
        callback(key.fileobj, mask)
```

python Internet Data Handling

Internet Data Handling
======================

This chapter describes modules which support handling data formats commonly used on the Internet.
* [`email` — An email and MIME handling package](email) + [`email.message`: Representing an email message](email.message) + [`email.parser`: Parsing email messages](email.parser) - [FeedParser API](email.parser#feedparser-api) - [Parser API](email.parser#parser-api) - [Additional notes](email.parser#additional-notes) + [`email.generator`: Generating MIME documents](email.generator) + [`email.policy`: Policy Objects](email.policy) + [`email.errors`: Exception and Defect classes](email.errors) + [`email.headerregistry`: Custom Header Objects](email.headerregistry) + [`email.contentmanager`: Managing MIME Content](email.contentmanager) - [Content Manager Instances](email.contentmanager#content-manager-instances) + [`email`: Examples](email.examples) + [`email.message.Message`: Representing an email message using the `compat32` API](email.compat32-message) + [`email.mime`: Creating email and MIME objects from scratch](email.mime) + [`email.header`: Internationalized headers](email.header) + [`email.charset`: Representing character sets](email.charset) + [`email.encoders`: Encoders](email.encoders) + [`email.utils`: Miscellaneous utilities](email.utils) + [`email.iterators`: Iterators](email.iterators) * [`json` — JSON encoder and decoder](json) + [Basic Usage](json#basic-usage) + [Encoders and Decoders](json#encoders-and-decoders) + [Exceptions](json#exceptions) + [Standard Compliance and Interoperability](json#standard-compliance-and-interoperability) - [Character Encodings](json#character-encodings) - [Infinite and NaN Number Values](json#infinite-and-nan-number-values) - [Repeated Names Within an Object](json#repeated-names-within-an-object) - [Top-level Non-Object, Non-Array Values](json#top-level-non-object-non-array-values) - [Implementation Limitations](json#implementation-limitations) + [Command Line Interface](json#module-json.tool) - [Command line options](json#command-line-options) * [`mailbox` — Manipulate mailboxes in various formats](mailbox) + [`Mailbox` objects](mailbox#mailbox-objects) - [`Maildir`](mailbox#maildir) - [`mbox`](mailbox#mbox) - [`MH`](mailbox#mh) - [`Babyl`](mailbox#babyl) - [`MMDF`](mailbox#mmdf) + [`Message` objects](mailbox#message-objects) - [`MaildirMessage`](mailbox#maildirmessage) - [`mboxMessage`](mailbox#mboxmessage) - [`MHMessage`](mailbox#mhmessage) - [`BabylMessage`](mailbox#babylmessage) - [`MMDFMessage`](mailbox#mmdfmessage) + [Exceptions](mailbox#exceptions) + [Examples](mailbox#examples) * [`mimetypes` — Map filenames to MIME types](mimetypes) + [MimeTypes Objects](mimetypes#mimetypes-objects) * [`base64` — Base16, Base32, Base64, Base85 Data Encodings](base64) * [`binhex` — Encode and decode binhex4 files](binhex) + [Notes](binhex#notes) * [`binascii` — Convert between binary and ASCII](binascii) * [`quopri` — Encode and decode MIME quoted-printable data](quopri) python Debugging and Profiling Debugging and Profiling ======================= These libraries help you with Python development: the debugger enables you to step through code, analyze stack frames and set breakpoints etc., and the profilers run code and give you a detailed breakdown of execution times, allowing you to identify bottlenecks in your programs. Auditing events provide visibility into runtime behaviors that would otherwise require intrusive debugging or patching. 
* [Audit events table](audit_events) * [`bdb` — Debugger framework](bdb) * [`faulthandler` — Dump the Python traceback](faulthandler) + [Dumping the traceback](faulthandler#dumping-the-traceback) + [Fault handler state](faulthandler#fault-handler-state) + [Dumping the tracebacks after a timeout](faulthandler#dumping-the-tracebacks-after-a-timeout) + [Dumping the traceback on a user signal](faulthandler#dumping-the-traceback-on-a-user-signal) + [Issue with file descriptors](faulthandler#issue-with-file-descriptors) + [Example](faulthandler#example) * [`pdb` — The Python Debugger](pdb) + [Debugger Commands](pdb#debugger-commands) * [The Python Profilers](profile) + [Introduction to the profilers](profile#introduction-to-the-profilers) + [Instant User’s Manual](profile#instant-user-s-manual) + [`profile` and `cProfile` Module Reference](profile#module-cProfile) + [The `Stats` Class](profile#the-stats-class) + [What Is Deterministic Profiling?](profile#what-is-deterministic-profiling) + [Limitations](profile#limitations) + [Calibration](profile#calibration) + [Using a custom timer](profile#using-a-custom-timer) * [`timeit` — Measure execution time of small code snippets](timeit) + [Basic Examples](timeit#basic-examples) + [Python Interface](timeit#python-interface) + [Command-Line Interface](timeit#command-line-interface) + [Examples](timeit#examples) * [`trace` — Trace or track Python statement execution](trace) + [Command-Line Usage](trace#command-line-usage) - [Main options](trace#main-options) - [Modifiers](trace#modifiers) - [Filters](trace#filters) + [Programmatic Interface](trace#programmatic-interface) * [`tracemalloc` — Trace memory allocations](tracemalloc) + [Examples](tracemalloc#examples) - [Display the top 10](tracemalloc#display-the-top-10) - [Compute differences](tracemalloc#compute-differences) - [Get the traceback of a memory block](tracemalloc#get-the-traceback-of-a-memory-block) - [Pretty top](tracemalloc#pretty-top) * [Record the current and peak size of all traced memory blocks](tracemalloc#record-the-current-and-peak-size-of-all-traced-memory-blocks) + [API](tracemalloc#api) - [Functions](tracemalloc#functions) - [DomainFilter](tracemalloc#domainfilter) - [Filter](tracemalloc#filter) - [Frame](tracemalloc#frame) - [Snapshot](tracemalloc#snapshot) - [Statistic](tracemalloc#statistic) - [StatisticDiff](tracemalloc#statisticdiff) - [Trace](tracemalloc#trace) - [Traceback](tracemalloc#traceback) python types — Dynamic type creation and names for built-in types types — Dynamic type creation and names for built-in types ========================================================== **Source code:** [Lib/types.py](https://github.com/python/cpython/tree/3.9/Lib/types.py) This module defines utility functions to assist in dynamic creation of new types. It also defines names for some object types that are used by the standard Python interpreter, but not exposed as builtins like [`int`](functions#int "int") or [`str`](stdtypes#str "str") are. Finally, it provides some additional type-related utility classes and functions that are not fundamental enough to be builtins. Dynamic Type Creation --------------------- `types.new_class(name, bases=(), kwds=None, exec_body=None)` Creates a class object dynamically using the appropriate metaclass. The first three arguments are the components that make up a class definition header: the class name, the base classes (in order), the keyword arguments (such as `metaclass`). 
The *exec\_body* argument is a callback that is used to populate the freshly created class namespace. It should accept the class namespace as its sole argument and update the namespace directly with the class contents. If no callback is provided, it has the same effect as passing in `lambda ns: None`. New in version 3.3. `types.prepare_class(name, bases=(), kwds=None)` Calculates the appropriate metaclass and creates the class namespace. The arguments are the components that make up a class definition header: the class name, the base classes (in order) and the keyword arguments (such as `metaclass`). The return value is a 3-tuple: `metaclass, namespace, kwds` *metaclass* is the appropriate metaclass, *namespace* is the prepared class namespace and *kwds* is an updated copy of the passed in *kwds* argument with any `'metaclass'` entry removed. If no *kwds* argument is passed in, this will be an empty dict. New in version 3.3. Changed in version 3.6: The default value for the `namespace` element of the returned tuple has changed. Now an insertion-order-preserving mapping is used when the metaclass does not have a `__prepare__` method. See also [Metaclasses](../reference/datamodel#metaclasses) Full details of the class creation process supported by these functions [**PEP 3115**](https://www.python.org/dev/peps/pep-3115) - Metaclasses in Python 3000 Introduced the `__prepare__` namespace hook `types.resolve_bases(bases)` Resolve MRO entries dynamically as specified by [**PEP 560**](https://www.python.org/dev/peps/pep-0560). This function looks for items in *bases* that are not instances of [`type`](functions#type "type"), and returns a tuple where each such object that has an `__mro_entries__` method is replaced with an unpacked result of calling this method. If a *bases* item is an instance of [`type`](functions#type "type"), or it doesn’t have an `__mro_entries__` method, then it is included in the return tuple unchanged. New in version 3.7. See also [**PEP 560**](https://www.python.org/dev/peps/pep-0560) - Core support for typing module and generic types Standard Interpreter Types -------------------------- This module provides names for many of the types that are required to implement a Python interpreter. It deliberately avoids including some of the types that arise only incidentally during processing such as the `listiterator` type. Typical use of these names is for [`isinstance()`](functions#isinstance "isinstance") or [`issubclass()`](functions#issubclass "issubclass") checks. If you instantiate any of these types, note that signatures may vary between Python versions. Standard names are defined for the following types: `types.FunctionType` `types.LambdaType` The type of user-defined functions and functions created by [`lambda`](../reference/expressions#lambda) expressions. Raises an [auditing event](sys#auditing) `function.__new__` with argument `code`. The audit event only occurs for direct instantiation of function objects, and is not raised for normal compilation. `types.GeneratorType` The type of [generator](../glossary#term-generator)-iterator objects, created by generator functions. `types.CoroutineType` The type of [coroutine](../glossary#term-coroutine) objects, created by [`async def`](../reference/compound_stmts#async-def) functions. New in version 3.5. `types.AsyncGeneratorType` The type of [asynchronous generator](../glossary#term-asynchronous-generator)-iterator objects, created by asynchronous generator functions. New in version 3.6. 
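Since these names exist mainly for [`isinstance()`](functions#isinstance "isinstance") checks, a short sketch covering some of the types above:

```
import types

def gen():
    yield 1

async def agen():
    yield 1

print(isinstance(gen, types.FunctionType))           # True
print(isinstance(gen(), types.GeneratorType))        # True
print(isinstance(agen(), types.AsyncGeneratorType))  # True
```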
`class types.CodeType(**kwargs)`

The type for code objects such as returned by [`compile()`](functions#compile "compile").

Raises an [auditing event](sys#auditing) `code.__new__` with arguments `code`, `filename`, `name`, `argcount`, `posonlyargcount`, `kwonlyargcount`, `nlocals`, `stacksize`, `flags`.

Note that the audited arguments may not match the names or positions required by the initializer. The audit event only occurs for direct instantiation of code objects, and is not raised for normal compilation.

`replace(**kwargs)`

Return a copy of the code object with new values for the specified fields.

New in version 3.8.

`types.CellType`

The type for cell objects: such objects are used as containers for a function’s free variables.

New in version 3.8.

`types.MethodType`

The type of methods of user-defined class instances.

`types.BuiltinFunctionType`

`types.BuiltinMethodType`

The type of built-in functions like [`len()`](functions#len "len") or [`sys.exit()`](sys#sys.exit "sys.exit"), and methods of built-in classes. (Here, the term “built-in” means “written in C”.)

`types.WrapperDescriptorType`

The type of methods of some built-in data types and base classes such as [`object.__init__()`](../reference/datamodel#object.__init__ "object.__init__") or [`object.__lt__()`](../reference/datamodel#object.__lt__ "object.__lt__").

New in version 3.7.

`types.MethodWrapperType`

The type of *bound* methods of some built-in data types and base classes. For example it is the type of `object().__str__`.

New in version 3.7.

`types.MethodDescriptorType`

The type of methods of some built-in data types such as [`str.join()`](stdtypes#str.join "str.join").

New in version 3.7.

`types.ClassMethodDescriptorType`

The type of *unbound* class methods of some built-in data types such as `dict.__dict__['fromkeys']`.

New in version 3.7.

`class types.ModuleType(name, doc=None)`

The type of [modules](../glossary#term-module). The constructor takes the name of the module to be created and optionally its [docstring](../glossary#term-docstring).

Note

Use [`importlib.util.module_from_spec()`](importlib#importlib.util.module_from_spec "importlib.util.module_from_spec") to create a new module if you wish to set the various import-controlled attributes.

`__doc__`

The [docstring](../glossary#term-docstring) of the module. Defaults to `None`.

`__loader__`

The [loader](../glossary#term-loader) which loaded the module. Defaults to `None`. This attribute is to match [`importlib.machinery.ModuleSpec.loader`](importlib#importlib.machinery.ModuleSpec.loader "importlib.machinery.ModuleSpec.loader") as stored in the `__spec__` object.

Note

A future version of Python may stop setting this attribute by default. To guard against this potential change, preferably read from the [`__spec__`](../reference/import#__spec__ "__spec__") attribute instead or use `getattr(module, "__loader__", None)` if you explicitly need to use this attribute.

Changed in version 3.4: Defaults to `None`. Previously the attribute was optional.

`__name__`

The name of the module. Expected to match [`importlib.machinery.ModuleSpec.name`](importlib#importlib.machinery.ModuleSpec.name "importlib.machinery.ModuleSpec.name").

`__package__`

Which [package](../glossary#term-package) a module belongs to. If the module is top-level (i.e. not a part of any specific package) then the attribute should be set to `''`, else it should be set to the name of the package (which can be [`__name__`](../reference/import#__name__ "__name__") if the module is a package itself).
Defaults to `None`. This attribute is to match [`importlib.machinery.ModuleSpec.parent`](importlib#importlib.machinery.ModuleSpec.parent "importlib.machinery.ModuleSpec.parent") as stored in the `__spec__` object.

Note

A future version of Python may stop setting this attribute by default. To guard against this potential change, preferably read from the [`__spec__`](../reference/import#__spec__ "__spec__") attribute instead or use `getattr(module, "__package__", None)` if you explicitly need to use this attribute.

Changed in version 3.4: Defaults to `None`. Previously the attribute was optional.

`__spec__`

A record of the module’s import-system-related state. Expected to be an instance of [`importlib.machinery.ModuleSpec`](importlib#importlib.machinery.ModuleSpec "importlib.machinery.ModuleSpec").

New in version 3.4.

`class types.GenericAlias(t_origin, t_args)`

The type of [parameterized generics](stdtypes#types-genericalias) such as `list[int]`.

`t_origin` should be a non-parameterized generic class, such as `list`, `tuple` or `dict`. `t_args` should be a [`tuple`](stdtypes#tuple "tuple") (possibly of length 1) of types which parameterize `t_origin`:

```
>>> from types import GenericAlias
>>> list[int] == GenericAlias(list, (int,))
True
>>> dict[str, int] == GenericAlias(dict, (str, int))
True
```

New in version 3.9.

Changed in version 3.9.2: This type can now be subclassed.

`class types.TracebackType(tb_next, tb_frame, tb_lasti, tb_lineno)`

The type of traceback objects such as found in `sys.exc_info()[2]`.

See [the language reference](../reference/datamodel#traceback-objects) for details of the available attributes and operations, and guidance on creating tracebacks dynamically.

`types.FrameType`

The type of frame objects such as found in `tb.tb_frame` if `tb` is a traceback object.

See [the language reference](../reference/datamodel#frame-objects) for details of the available attributes and operations.

`types.GetSetDescriptorType`

The type of objects defined in extension modules with `PyGetSetDef`, such as `FrameType.f_locals` or `array.array.typecode`. This type is used as descriptor for object attributes; it has the same purpose as the [`property`](functions#property "property") type, but for classes defined in extension modules.

`types.MemberDescriptorType`

The type of objects defined in extension modules with `PyMemberDef`, such as `datetime.timedelta.days`. This type is used as descriptor for simple C data members which use standard conversion functions; it has the same purpose as the [`property`](functions#property "property") type, but for classes defined in extension modules.

**CPython implementation detail:** In other implementations of Python, this type may be identical to `GetSetDescriptorType`.

`class types.MappingProxyType(mapping)`

Read-only proxy of a mapping. It provides a dynamic view on the mapping’s entries, which means that when the mapping changes, the view reflects these changes.

New in version 3.3.

Changed in version 3.9: Updated to support the new union (`|`) operator from [**PEP 584**](https://www.python.org/dev/peps/pep-0584), which simply delegates to the underlying mapping.

`key in proxy`

Return `True` if the underlying mapping has a key *key*, else `False`.

`proxy[key]`

Return the item of the underlying mapping with key *key*. Raises a [`KeyError`](exceptions#KeyError "KeyError") if *key* is not in the underlying mapping.

`iter(proxy)`

Return an iterator over the keys of the underlying mapping. This is a shortcut for `iter(proxy.keys())`.
`len(proxy)`

Return the number of items in the underlying mapping.

`copy()`

Return a shallow copy of the underlying mapping.

`get(key[, default])`

Return the value for *key* if *key* is in the underlying mapping, else *default*. If *default* is not given, it defaults to `None`, so that this method never raises a [`KeyError`](exceptions#KeyError "KeyError").

`items()`

Return a new view of the underlying mapping’s items (`(key, value)` pairs).

`keys()`

Return a new view of the underlying mapping’s keys.

`values()`

Return a new view of the underlying mapping’s values.

`reversed(proxy)`

Return a reverse iterator over the keys of the underlying mapping.

New in version 3.9.

Additional Utility Classes and Functions
----------------------------------------

`class types.SimpleNamespace`

A simple [`object`](functions#object "object") subclass that provides attribute access to its namespace, as well as a meaningful repr.

Unlike [`object`](functions#object "object"), with `SimpleNamespace` you can add and remove attributes. If a `SimpleNamespace` object is initialized with keyword arguments, those are directly added to the underlying namespace.

The type is roughly equivalent to the following code:

```
class SimpleNamespace:
    def __init__(self, /, **kwargs):
        self.__dict__.update(kwargs)

    def __repr__(self):
        items = (f"{k}={v!r}" for k, v in self.__dict__.items())
        return "{}({})".format(type(self).__name__, ", ".join(items))

    def __eq__(self, other):
        if isinstance(self, SimpleNamespace) and isinstance(other, SimpleNamespace):
            return self.__dict__ == other.__dict__
        return NotImplemented
```

`SimpleNamespace` may be useful as a replacement for `class NS: pass`. However, for a structured record type use [`namedtuple()`](collections#collections.namedtuple "collections.namedtuple") instead.

New in version 3.3.

Changed in version 3.9: Attribute order in the repr changed from alphabetical to insertion (like `dict`).

`types.DynamicClassAttribute(fget=None, fset=None, fdel=None, doc=None)`

Route attribute access on a class to `__getattr__`.

This is a descriptor, used to define attributes that act differently when accessed through an instance and through a class. Instance access remains normal, but access to an attribute through a class will be routed to the class’s `__getattr__` method; this is done by raising AttributeError.

This allows one to have properties active on an instance, and have virtual attributes on the class with the same name (see [`enum.Enum`](enum#enum.Enum "enum.Enum") for an example).

New in version 3.4.

Coroutine Utility Functions
---------------------------

`types.coroutine(gen_func)`

This function transforms a [generator](../glossary#term-generator) function into a [coroutine function](../glossary#term-coroutine-function) which returns a generator-based coroutine. The generator-based coroutine is still a [generator iterator](../glossary#term-generator-iterator), but is also considered to be a [coroutine](../glossary#term-coroutine) object and is [awaitable](../glossary#term-awaitable). However, it may not necessarily implement the [`__await__()`](../reference/datamodel#object.__await__ "object.__await__") method.

If *gen\_func* is a generator function, it will be modified in-place.

If *gen\_func* is not a generator function, it will be wrapped. If it returns an instance of [`collections.abc.Generator`](collections.abc#collections.abc.Generator "collections.abc.Generator"), the instance will be wrapped in an *awaitable* proxy object. All other types of objects will be returned as is.
New in version 3.5.
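A minimal sketch of this transformation (the function names are illustrative); the decorated generator can be awaited from a native coroutine:

```
import asyncio
import types

@types.coroutine
def ticker():
    # A generator-based coroutine: still a generator iterator,
    # but awaitable from native coroutines.
    yield            # yield control once (like asyncio.sleep(0))
    return "tick"

async def main():
    print(await ticker())   # tick

asyncio.run(main())
```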
python codeop — Compile Python code

codeop — Compile Python code
============================

**Source code:** [Lib/codeop.py](https://github.com/python/cpython/tree/3.9/Lib/codeop.py)

The [`codeop`](#module-codeop "codeop: Compile (possibly incomplete) Python code.") module provides utilities upon which the Python read-eval-print loop can be emulated, as is done in the [`code`](code#module-code "code: Facilities to implement read-eval-print loops.") module. As a result, you probably don’t want to use the module directly; if you want to include such a loop in your program you probably want to use the [`code`](code#module-code "code: Facilities to implement read-eval-print loops.") module instead.

There are two parts to this job:

1. Being able to tell if a line of input completes a Python statement: in short, telling whether to print ‘`>>>`’ or ‘`...`’ next.
2. Remembering which future statements the user has entered, so subsequent input can be compiled with these in effect.

The [`codeop`](#module-codeop "codeop: Compile (possibly incomplete) Python code.") module provides a way of doing each of these things, and a way of doing them both.

To do just the former:

`codeop.compile_command(source, filename="<input>", symbol="single")`

Tries to compile *source*, which should be a string of Python code, and returns a code object if *source* is valid Python code. In that case, the filename attribute of the code object will be *filename*, which defaults to `'<input>'`. Returns `None` if *source* is *not* valid Python code, but is a prefix of valid Python code.

If there is a problem with *source*, an exception will be raised. [`SyntaxError`](exceptions#SyntaxError "SyntaxError") is raised if there is invalid Python syntax, and [`OverflowError`](exceptions#OverflowError "OverflowError") or [`ValueError`](exceptions#ValueError "ValueError") if there is an invalid literal.

The *symbol* argument determines whether *source* is compiled as a statement (`'single'`, the default), as a sequence of statements (`'exec'`) or as an [expression](../glossary#term-expression) (`'eval'`). Any other value will cause [`ValueError`](exceptions#ValueError "ValueError") to be raised.

Note

It is possible (but not likely) that the parser stops parsing with a successful outcome before reaching the end of the source; in this case, trailing symbols may be ignored instead of causing an error. For example, a backslash followed by two newlines may be followed by arbitrary garbage. This will be fixed once the API for the parser is better.

`class codeop.Compile`

Instances of this class have [`__call__()`](../reference/datamodel#object.__call__ "object.__call__") methods identical in signature to the built-in function [`compile()`](functions#compile "compile"), but with the difference that if the instance compiles program text containing a [`__future__`](__future__#module-__future__ "__future__: Future statement definitions") statement, the instance ‘remembers’ and compiles all subsequent program texts with the statement in force.

`class codeop.CommandCompiler`

Instances of this class have [`__call__()`](../reference/datamodel#object.__call__ "object.__call__") methods identical in signature to [`compile_command()`](#codeop.compile_command "codeop.compile_command"); the difference is that if the instance compiles program text containing a `__future__` statement, the instance ‘remembers’ and compiles all subsequent program texts with the statement in force.
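A short sketch of how [`compile_command()`](#codeop.compile_command "codeop.compile_command") distinguishes complete from incomplete input (expected output shown in comments):

```
from codeop import compile_command

# An incomplete (but valid so far) statement compiles to None.
print(compile_command("def double(x):"))   # None

# A complete statement compiles to a code object that can be executed.
code = compile_command("answer = 21 * 2")
namespace = {}
exec(code, namespace)
print(namespace["answer"])                 # 42
```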
python Miscellaneous Services

Miscellaneous Services
======================

The modules described in this chapter provide miscellaneous services that are available in all Python versions. Here’s an overview:

* [`formatter` — Generic output formatting](https://docs.python.org/3.9/library/formatter.html)
  + [The Formatter Interface](https://docs.python.org/3.9/library/formatter.html#the-formatter-interface)
  + [Formatter Implementations](https://docs.python.org/3.9/library/formatter.html#formatter-implementations)
  + [The Writer Interface](https://docs.python.org/3.9/library/formatter.html#the-writer-interface)
  + [Writer Implementations](https://docs.python.org/3.9/library/formatter.html#writer-implementations)

python pty — Pseudo-terminal utilities

pty — Pseudo-terminal utilities
===============================

**Source code:** [Lib/pty.py](https://github.com/python/cpython/tree/3.9/Lib/pty.py)

The [`pty`](#module-pty "pty: Pseudo-Terminal Handling for Linux. (Linux)") module defines operations for handling the pseudo-terminal concept: starting another process and being able to write to and read from its controlling terminal programmatically. Because pseudo-terminal handling is highly platform dependent, there is code to do it only for Linux. (The Linux code is supposed to work on other platforms, but hasn’t been tested yet.)

The [`pty`](#module-pty "pty: Pseudo-Terminal Handling for Linux. (Linux)") module defines the following functions:

`pty.fork()`

Fork. Connect the child’s controlling terminal to a pseudo-terminal. Return value is `(pid, fd)`. Note that the child gets *pid* 0, and the *fd* is *invalid*. The parent’s return value is the *pid* of the child, and *fd* is a file descriptor connected to the child’s controlling terminal (and also to the child’s standard input and output).

`pty.openpty()`

Open a new pseudo-terminal pair, using [`os.openpty()`](os#os.openpty "os.openpty") if possible, or emulation code for generic Unix systems. Return a pair of file descriptors `(master, slave)`, for the master and the slave end, respectively.

`pty.spawn(argv[, master_read[, stdin_read]])`

Spawn a process, and connect its controlling terminal with the current process’s standard I/O. This is often used to baffle programs which insist on reading from the controlling terminal. It is expected that the process spawned behind the pty will eventually terminate, and when it does *spawn* will return.

The functions *master\_read* and *stdin\_read* are passed a file descriptor which they should read from, and they should always return a byte string. In order to force spawn to return before the child process exits, an [`OSError`](exceptions#OSError "OSError") should be raised.

The default implementation for both functions will read and return up to 1024 bytes each time the function is called. The *master\_read* callback is passed the pseudoterminal’s master file descriptor to read output from the child process, and *stdin\_read* is passed file descriptor 0, to read from the parent process’s standard input.

Returning an empty byte string from either callback is interpreted as an end-of-file (EOF) condition, and that callback will not be called after that. If *stdin\_read* signals EOF the controlling terminal can no longer communicate with the parent process OR the child process. Unless the child process will quit without any input, *spawn* will then loop forever. If *master\_read* signals EOF the same behavior results (on Linux at least).
If both callbacks signal EOF then *spawn* will probably never return, unless *select* throws an error on your platform when passed three empty lists. This is a bug, documented in [issue 26228](https://bugs.python.org/issue26228).

Return the exit status value from [`os.waitpid()`](os#os.waitpid "os.waitpid") on the child process. `waitstatus_to_exitcode()` can be used to convert the exit status into an exit code.

Raises an [auditing event](sys#auditing) `pty.spawn` with argument `argv`.

Changed in version 3.4: [`spawn()`](#pty.spawn "pty.spawn") now returns the status value from [`os.waitpid()`](os#os.waitpid "os.waitpid") on the child process.

Example
-------

The following program acts like the Unix command *[script(1)](https://manpages.debian.org/script(1))*, using a pseudo-terminal to record all input and output of a terminal session in a “typescript”.

```
import argparse
import os
import pty
import sys
import time

parser = argparse.ArgumentParser()
parser.add_argument('-a', dest='append', action='store_true')
parser.add_argument('-p', dest='use_python', action='store_true')
parser.add_argument('filename', nargs='?', default='typescript')
options = parser.parse_args()

shell = sys.executable if options.use_python else os.environ.get('SHELL', 'sh')
filename = options.filename
mode = 'ab' if options.append else 'wb'

with open(filename, mode) as script:
    def read(fd):
        data = os.read(fd, 1024)
        script.write(data)
        return data

    print('Script started, file is', filename)
    script.write(('Script started on %s\n' % time.asctime()).encode())

    pty.spawn(shell, read)

    script.write(('Script done on %s\n' % time.asctime()).encode())
    print('Script done, file is', filename)
```

python xml.sax.xmlreader — Interface for XML parsers

xml.sax.xmlreader — Interface for XML parsers
=============================================

**Source code:** [Lib/xml/sax/xmlreader.py](https://github.com/python/cpython/tree/3.9/Lib/xml/sax/xmlreader.py)

SAX parsers implement the [`XMLReader`](#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") interface. They are implemented in a Python module, which must provide a function `create_parser()`. This function is invoked by [`xml.sax.make_parser()`](xml.sax#xml.sax.make_parser "xml.sax.make_parser") with no arguments to create a new parser object.

`class xml.sax.xmlreader.XMLReader`

Base class which can be inherited by SAX parsers.

`class xml.sax.xmlreader.IncrementalParser`

In some cases, it is desirable not to parse an input source at once, but to feed chunks of the document as they become available. Note that the reader will normally not read the entire file, but read it in chunks as well; still `parse()` won’t return until the entire document is processed. So these interfaces should be used if the blocking behaviour of `parse()` is not desirable.

When the parser is instantiated, it is ready to begin accepting data from the feed method immediately. After parsing has been finished with a call to close, the reset method must be called to make the parser ready to accept new data, either from feed or using the parse method.

Note that these methods must *not* be called during parsing, that is, after parse has been called and before it returns.

By default, the class also implements the parse method of the XMLReader interface using the feed, close and reset methods of the IncrementalParser interface as a convenience to SAX 2.0 driver writers.

`class xml.sax.xmlreader.Locator`

Interface for associating a SAX event with a document location.
A locator object will return valid results only during calls to DocumentHandler methods; at any other time, the results are unpredictable. If information is not available, methods may return `None`. `class xml.sax.xmlreader.InputSource(system_id=None)` Encapsulation of the information needed by the [`XMLReader`](#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") to read entities. This class may include information about the public identifier, system identifier, byte stream (possibly with character encoding information) and/or the character stream of an entity. Applications will create objects of this class for use in the [`XMLReader.parse()`](#xml.sax.xmlreader.XMLReader.parse "xml.sax.xmlreader.XMLReader.parse") method and for returning from EntityResolver.resolveEntity. An [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource") belongs to the application, the [`XMLReader`](#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") is not allowed to modify [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource") objects passed to it from the application, although it may make copies and modify those. `class xml.sax.xmlreader.AttributesImpl(attrs)` This is an implementation of the `Attributes` interface (see section [The Attributes Interface](#attributes-objects)). This is a dictionary-like object which represents the element attributes in a `startElement()` call. In addition to the most useful dictionary operations, it supports a number of other methods as described by the interface. Objects of this class should be instantiated by readers; *attrs* must be a dictionary-like object containing a mapping from attribute names to attribute values. `class xml.sax.xmlreader.AttributesNSImpl(attrs, qnames)` Namespace-aware variant of [`AttributesImpl`](#xml.sax.xmlreader.AttributesImpl "xml.sax.xmlreader.AttributesImpl"), which will be passed to `startElementNS()`. It is derived from [`AttributesImpl`](#xml.sax.xmlreader.AttributesImpl "xml.sax.xmlreader.AttributesImpl"), but understands attribute names as two-tuples of *namespaceURI* and *localname*. In addition, it provides a number of methods expecting qualified names as they appear in the original document. This class implements the `AttributesNS` interface (see section [The AttributesNS Interface](#attributes-ns-objects)). XMLReader Objects ----------------- The [`XMLReader`](#xml.sax.xmlreader.XMLReader "xml.sax.xmlreader.XMLReader") interface supports the following methods: `XMLReader.parse(source)` Process an input source, producing SAX events. The *source* object can be a system identifier (a string identifying the input source – typically a file name or a URL), a [`pathlib.Path`](pathlib#pathlib.Path "pathlib.Path") or [path-like](../glossary#term-path-like-object) object, or an [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource") object. When [`parse()`](#xml.sax.xmlreader.XMLReader.parse "xml.sax.xmlreader.XMLReader.parse") returns, the input is completely processed, and the parser object can be discarded or reset. Changed in version 3.5: Added support of character streams. Changed in version 3.8: Added support of path-like objects. `XMLReader.getContentHandler()` Return the current [`ContentHandler`](xml.sax.handler#xml.sax.handler.ContentHandler "xml.sax.handler.ContentHandler"). `XMLReader.setContentHandler(handler)` Set the current [`ContentHandler`](xml.sax.handler#xml.sax.handler.ContentHandler "xml.sax.handler.ContentHandler"). 
If no [`ContentHandler`](xml.sax.handler#xml.sax.handler.ContentHandler "xml.sax.handler.ContentHandler") is set, content events will be discarded. `XMLReader.getDTDHandler()` Return the current [`DTDHandler`](xml.sax.handler#xml.sax.handler.DTDHandler "xml.sax.handler.DTDHandler"). `XMLReader.setDTDHandler(handler)` Set the current [`DTDHandler`](xml.sax.handler#xml.sax.handler.DTDHandler "xml.sax.handler.DTDHandler"). If no [`DTDHandler`](xml.sax.handler#xml.sax.handler.DTDHandler "xml.sax.handler.DTDHandler") is set, DTD events will be discarded. `XMLReader.getEntityResolver()` Return the current [`EntityResolver`](xml.sax.handler#xml.sax.handler.EntityResolver "xml.sax.handler.EntityResolver"). `XMLReader.setEntityResolver(handler)` Set the current [`EntityResolver`](xml.sax.handler#xml.sax.handler.EntityResolver "xml.sax.handler.EntityResolver"). If no [`EntityResolver`](xml.sax.handler#xml.sax.handler.EntityResolver "xml.sax.handler.EntityResolver") is set, attempts to resolve an external entity will result in opening the system identifier for the entity, and fail if it is not available. `XMLReader.getErrorHandler()` Return the current [`ErrorHandler`](xml.sax.handler#xml.sax.handler.ErrorHandler "xml.sax.handler.ErrorHandler"). `XMLReader.setErrorHandler(handler)` Set the current error handler. If no [`ErrorHandler`](xml.sax.handler#xml.sax.handler.ErrorHandler "xml.sax.handler.ErrorHandler") is set, errors will be raised as exceptions, and warnings will be printed. `XMLReader.setLocale(locale)` Allow an application to set the locale for errors and warnings. SAX parsers are not required to provide localization for errors and warnings; if they cannot support the requested locale, however, they must raise a SAX exception. Applications may request a locale change in the middle of a parse. `XMLReader.getFeature(featurename)` Return the current setting for feature *featurename*. If the feature is not recognized, `SAXNotRecognizedException` is raised. The well-known featurenames are listed in the module [`xml.sax.handler`](xml.sax.handler#module-xml.sax.handler "xml.sax.handler: Base classes for SAX event handlers."). `XMLReader.setFeature(featurename, value)` Set the *featurename* to *value*. If the feature is not recognized, `SAXNotRecognizedException` is raised. If the feature or its setting is not supported by the parser, *SAXNotSupportedException* is raised. `XMLReader.getProperty(propertyname)` Return the current setting for property *propertyname*. If the property is not recognized, a `SAXNotRecognizedException` is raised. The well-known propertynames are listed in the module [`xml.sax.handler`](xml.sax.handler#module-xml.sax.handler "xml.sax.handler: Base classes for SAX event handlers."). `XMLReader.setProperty(propertyname, value)` Set the *propertyname* to *value*. If the property is not recognized, `SAXNotRecognizedException` is raised. If the property or its setting is not supported by the parser, *SAXNotSupportedException* is raised. IncrementalParser Objects ------------------------- Instances of [`IncrementalParser`](#xml.sax.xmlreader.IncrementalParser "xml.sax.xmlreader.IncrementalParser") offer the following additional methods: `IncrementalParser.feed(data)` Process a chunk of *data*. `IncrementalParser.close()` Assume the end of the document. That will check well-formedness conditions that can be checked only at the end, invoke handlers, and may clean up resources allocated during parsing. 
`IncrementalParser.reset()` This method is called after close has been called to reset the parser so that it is ready to parse new documents. The results of calling parse or feed after close without calling reset are undefined. Locator Objects --------------- Instances of [`Locator`](#xml.sax.xmlreader.Locator "xml.sax.xmlreader.Locator") provide these methods: `Locator.getColumnNumber()` Return the column number where the current event begins. `Locator.getLineNumber()` Return the line number where the current event begins. `Locator.getPublicId()` Return the public identifier for the current event. `Locator.getSystemId()` Return the system identifier for the current event. InputSource Objects ------------------- `InputSource.setPublicId(id)` Sets the public identifier of this [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource"). `InputSource.getPublicId()` Returns the public identifier of this [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource"). `InputSource.setSystemId(id)` Sets the system identifier of this [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource"). `InputSource.getSystemId()` Returns the system identifier of this [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource"). `InputSource.setEncoding(encoding)` Sets the character encoding of this [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource"). The encoding must be a string acceptable for an XML encoding declaration (see section 4.3.3 of the XML recommendation). The encoding attribute of the [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource") is ignored if the [`InputSource`](#xml.sax.xmlreader.InputSource "xml.sax.xmlreader.InputSource") also contains a character stream. `InputSource.getEncoding()` Get the character encoding of this InputSource. `InputSource.setByteStream(bytefile)` Set the byte stream (a [binary file](../glossary#term-binary-file)) for this input source. The SAX parser will ignore this if there is also a character stream specified, but it will use a byte stream in preference to opening a URI connection itself. If the application knows the character encoding of the byte stream, it should set it with the setEncoding method. `InputSource.getByteStream()` Get the byte stream for this input source. The getEncoding method will return the character encoding for this byte stream, or `None` if unknown. `InputSource.setCharacterStream(charfile)` Set the character stream (a [text file](../glossary#term-text-file)) for this input source. If there is a character stream specified, the SAX parser will ignore any byte stream and will not attempt to open a URI connection to the system identifier. `InputSource.getCharacterStream()` Get the character stream for this input source. The `Attributes` Interface -------------------------- `Attributes` objects implement a portion of the [mapping protocol](../glossary#term-mapping), including the methods `copy()`, `get()`, [`__contains__()`](../reference/datamodel#object.__contains__ "object.__contains__"), `items()`, `keys()`, and `values()`. The following methods are also provided: `Attributes.getLength()` Return the number of attributes. `Attributes.getNames()` Return the names of the attributes. `Attributes.getType(name)` Returns the type of the attribute *name*, which is normally `'CDATA'`. `Attributes.getValue(name)` Return the value of attribute *name*. 
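As a quick illustration of the interface above, here is a minimal sketch (not part of the original page; the handler class and XML input are made up for the example) that prints every attribute seen by a `ContentHandler`, mixing the `get*()` methods with the mapping-protocol access just described:

```
import xml.sax

class AttributePrinter(xml.sax.ContentHandler):
    def startElement(self, name, attrs):
        # attrs is an AttributesImpl instance implementing the
        # Attributes interface described above.
        print(name, 'has', attrs.getLength(), 'attribute(s)')
        for attr_name in attrs.getNames():
            print(' ', attr_name, '=', attrs.getValue(attr_name))
        # Mapping-protocol access works as well:
        if 'id' in attrs:
            print('  id is', attrs.get('id'))

xml.sax.parseString(b'<root id="r1"><item kind="demo"/></root>',
                    AttributePrinter())
```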
The `AttributesNS` Interface ---------------------------- This interface is a subtype of the `Attributes` interface (see section [The Attributes Interface](#attributes-objects)). All methods supported by that interface are also available on `AttributesNS` objects. The following methods are also available: `AttributesNS.getValueByQName(name)` Return the value for a qualified name. `AttributesNS.getNameByQName(name)` Return the `(namespace, localname)` pair for a qualified *name*. `AttributesNS.getQNameByName(name)` Return the qualified name for a `(namespace, localname)` pair. `AttributesNS.getQNames()` Return the qualified names of all attributes.
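To tie the interfaces on this page together, the following sketch (the input chunks are hypothetical) drives the expat-based reader returned by `xml.sax.make_parser()` through the feed/close/reset cycle of the `IncrementalParser` interface described above:

```
import xml.sax

parser = xml.sax.make_parser()
parser.setContentHandler(xml.sax.ContentHandler())  # no-op handler, for illustration

# Feed the document in pieces instead of blocking in parse().
for chunk in (b'<doc>', b'<item/>', b'</doc>'):
    parser.feed(chunk)
parser.close()   # end of document: final well-formedness checks run here

parser.reset()   # required before the parser will accept a new document
parser.feed(b'<other/>')
parser.close()
```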
python socket — Low-level networking interface socket — Low-level networking interface ======================================= **Source code:** [Lib/socket.py](https://github.com/python/cpython/tree/3.9/Lib/socket.py) This module provides access to the BSD *socket* interface. It is available on all modern Unix systems, Windows, MacOS, and probably additional platforms. Note Some behavior may be platform dependent, since calls are made to the operating system socket APIs. The Python interface is a straightforward transliteration of the Unix system call and library interface for sockets to Python’s object-oriented style: the `socket()` function returns a *socket object* whose methods implement the various socket system calls. Parameter types are somewhat higher-level than in the C interface: as with `read()` and `write()` operations on Python files, buffer allocation on receive operations is automatic, and buffer length is implicit on send operations. See also `Module` [`socketserver`](socketserver#module-socketserver "socketserver: A framework for network servers.") Classes that simplify writing network servers. `Module` [`ssl`](ssl#module-ssl "ssl: TLS/SSL wrapper for socket objects") A TLS/SSL wrapper for socket objects. Socket families --------------- Depending on the system and the build options, various socket families are supported by this module. The address format required by a particular socket object is automatically selected based on the address family specified when the socket object was created. Socket addresses are represented as follows: * The address of an [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX") socket bound to a file system node is represented as a string, using the file system encoding and the `'surrogateescape'` error handler (see [**PEP 383**](https://www.python.org/dev/peps/pep-0383)). An address in Linux’s abstract namespace is returned as a [bytes-like object](../glossary#term-bytes-like-object) with an initial null byte; note that sockets in this namespace can communicate with normal file system sockets, so programs intended to run on Linux may need to deal with both types of address. A string or bytes-like object can be used for either type of address when passing it as an argument. Changed in version 3.3: Previously, [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX") socket paths were assumed to use UTF-8 encoding. Changed in version 3.5: Writable [bytes-like object](../glossary#term-bytes-like-object) is now accepted. * A pair `(host, port)` is used for the [`AF_INET`](#socket.AF_INET "socket.AF_INET") address family, where *host* is a string representing either a hostname in Internet domain notation like `'daring.cwi.nl'` or an IPv4 address like `'100.50.200.5'`, and *port* is an integer. + For IPv4 addresses, two special forms are accepted instead of a host address: `''` represents `INADDR_ANY`, which is used to bind to all interfaces, and the string `'<broadcast>'` represents `INADDR_BROADCAST`. This behavior is not compatible with IPv6, therefore, you may want to avoid these if you intend to support IPv6 with your Python programs. * For [`AF_INET6`](#socket.AF_INET6 "socket.AF_INET6") address family, a four-tuple `(host, port, flowinfo, scope_id)` is used, where *flowinfo* and *scope\_id* represent the `sin6_flowinfo` and `sin6_scope_id` members in `struct sockaddr_in6` in C. For [`socket`](#module-socket "socket: Low-level networking interface.") module methods, *flowinfo* and *scope\_id* can be omitted just for backward compatibility. 
Note, however, omission of *scope\_id* can cause problems in manipulating scoped IPv6 addresses.

Changed in version 3.7: For multicast addresses (with *scope\_id* meaningful) *address* may not contain `%scope_id` (or `zone id`) part. This information is superfluous and may be safely omitted (recommended).

* `AF_NETLINK` sockets are represented as pairs `(pid, groups)`.
* Linux-only support for TIPC is available using the `AF_TIPC` address family. TIPC is an open, non-IP based networked protocol designed for use in clustered computer environments. Addresses are represented by a tuple, and the fields depend on the address type. The general tuple form is `(addr_type, v1, v2, v3 [, scope])`, where:
  + *addr\_type* is one of `TIPC_ADDR_NAMESEQ`, `TIPC_ADDR_NAME`, or `TIPC_ADDR_ID`.
  + *scope* is one of `TIPC_ZONE_SCOPE`, `TIPC_CLUSTER_SCOPE`, and `TIPC_NODE_SCOPE`.
  + If *addr\_type* is `TIPC_ADDR_NAME`, then *v1* is the server type, *v2* is the port identifier, and *v3* should be 0. If *addr\_type* is `TIPC_ADDR_NAMESEQ`, then *v1* is the server type, *v2* is the lower port number, and *v3* is the upper port number. If *addr\_type* is `TIPC_ADDR_ID`, then *v1* is the node, *v2* is the reference, and *v3* should be set to 0.
* A tuple `(interface, )` is used for the [`AF_CAN`](#socket.AF_CAN "socket.AF_CAN") address family, where *interface* is a string representing a network interface name like `'can0'`. The network interface name `''` can be used to receive packets from all network interfaces of this family.
  + The [`CAN_ISOTP`](#socket.CAN_ISOTP "socket.CAN_ISOTP") protocol requires a tuple `(interface, rx_addr, tx_addr)` where both additional parameters are unsigned long integers that represent a CAN identifier (standard or extended).
  + The [`CAN_J1939`](#socket.CAN_J1939 "socket.CAN_J1939") protocol requires a tuple `(interface, name, pgn, addr)` where the additional parameters are a 64-bit unsigned integer representing the ECU name, a 32-bit unsigned integer representing the Parameter Group Number (PGN), and an 8-bit integer representing the address.
* A string or a tuple `(id, unit)` is used for the `SYSPROTO_CONTROL` protocol of the `PF_SYSTEM` family. The string is the name of a kernel control using a dynamically-assigned ID. The tuple can be used if ID and unit number of the kernel control are known or if a registered ID is used.

New in version 3.3.

* `AF_BLUETOOTH` supports the following protocols and address formats:
  + `BTPROTO_L2CAP` accepts `(bdaddr, psm)` where `bdaddr` is the Bluetooth address as a string and `psm` is an integer.
  + `BTPROTO_RFCOMM` accepts `(bdaddr, channel)` where `bdaddr` is the Bluetooth address as a string and `channel` is an integer.
  + `BTPROTO_HCI` accepts `(device_id,)` where `device_id` is either an integer or a string with the Bluetooth address of the interface. (This depends on your OS; NetBSD and DragonFlyBSD expect a Bluetooth address while everything else expects an integer.)

    Changed in version 3.2: NetBSD and DragonFlyBSD support added.
  + `BTPROTO_SCO` accepts `bdaddr` where `bdaddr` is a [`bytes`](stdtypes#bytes "bytes") object containing the Bluetooth address in a string format. (e.g. `b'12:23:34:45:56:67'`) This protocol is not supported under FreeBSD.
* [`AF_ALG`](#socket.AF_ALG "socket.AF_ALG") is a Linux-only socket based interface to Kernel cryptography. An algorithm socket is configured with a tuple of two to four elements `(type, name [, feat [, mask]])`, where:
  + *type* is the algorithm type as string, e.g. `aead`, `hash`, `skcipher` or `rng`.
  + *name* is the algorithm name and operation mode as string, e.g. `sha256`, `hmac(sha256)`, `cbc(aes)` or `drbg_nopr_ctr_aes256`.
  + *feat* and *mask* are unsigned 32-bit integers.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.38; some algorithm types require more recent Kernels.

New in version 3.6.

* [`AF_VSOCK`](#socket.AF_VSOCK "socket.AF_VSOCK") allows communication between virtual machines and their hosts. The sockets are represented as a `(CID, port)` tuple where the context ID or CID and port are integers.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 4.8, QEMU >= 2.8, ESX >= 4.0, ESX Workstation >= 6.5.

New in version 3.7.

* [`AF_PACKET`](#socket.AF_PACKET "socket.AF_PACKET") is a low-level interface directly to network devices. The packets are represented by the tuple `(ifname, proto[, pkttype[, hatype[, addr]]])` where:
  + *ifname* - String specifying the device name.
  + *proto* - An integer in network byte order specifying the Ethernet protocol number.
  + *pkttype* - Optional integer specifying the packet type:
    - `PACKET_HOST` (the default) - Packet addressed to the local host.
    - `PACKET_BROADCAST` - Physical-layer broadcast packet.
    - `PACKET_MULTICAST` - Packet sent to a physical-layer multicast address.
    - `PACKET_OTHERHOST` - Packet to some other host that has been caught by a device driver in promiscuous mode.
    - `PACKET_OUTGOING` - Packet originating from the local host that is looped back to a packet socket.
  + *hatype* - Optional integer specifying the ARP hardware address type.
  + *addr* - Optional bytes-like object specifying the hardware physical address, whose interpretation depends on the device.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.2.

* [`AF_QIPCRTR`](#socket.AF_QIPCRTR "socket.AF_QIPCRTR") is a Linux-only socket based interface for communicating with services running on co-processors in Qualcomm platforms. The address family is represented as a `(node, port)` tuple where the *node* and *port* are non-negative integers.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 4.7.

New in version 3.8.

* `IPPROTO_UDPLITE` is a variant of UDP which allows you to specify what portion of a packet is covered with the checksum. It adds two socket options that you can change. `self.setsockopt(IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV, length)` will change what portion of outgoing packets are covered by the checksum and `self.setsockopt(IPPROTO_UDPLITE, UDPLITE_RECV_CSCOV, length)` will filter out packets which cover too little of their data. In both cases `length` should be in `range(8, 2**16, 8)`.

Such a socket should be constructed with `socket(AF_INET, SOCK_DGRAM, IPPROTO_UDPLITE)` for IPv4 or `socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDPLITE)` for IPv6.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.6.20, FreeBSD >= 10.1-RELEASE.

New in version 3.9.

If you use a hostname in the *host* portion of an IPv4/v6 socket address, the program may show nondeterministic behavior, as Python uses the first address returned from the DNS resolution. The socket address will be resolved differently into an actual IPv4/v6 address, depending on the results from DNS resolution and/or the host configuration. For deterministic behavior use a numeric address in the *host* portion.

All errors raise exceptions.
The normal exceptions for invalid argument types and out-of-memory conditions can be raised; starting from Python 3.3, errors related to socket or address semantics raise [`OSError`](exceptions#OSError "OSError") or one of its subclasses (they used to raise [`socket.error`](#socket.error "socket.error")). Non-blocking mode is supported through [`setblocking()`](#socket.socket.setblocking "socket.socket.setblocking"). A generalization of this based on timeouts is supported through [`settimeout()`](#socket.socket.settimeout "socket.socket.settimeout"). Module contents --------------- The module [`socket`](#module-socket "socket: Low-level networking interface.") exports the following elements. ### Exceptions `exception socket.error` A deprecated alias of [`OSError`](exceptions#OSError "OSError"). Changed in version 3.3: Following [**PEP 3151**](https://www.python.org/dev/peps/pep-3151), this class was made an alias of [`OSError`](exceptions#OSError "OSError"). `exception socket.herror` A subclass of [`OSError`](exceptions#OSError "OSError"), this exception is raised for address-related errors, i.e. for functions that use *h\_errno* in the POSIX C API, including [`gethostbyname_ex()`](#socket.gethostbyname_ex "socket.gethostbyname_ex") and [`gethostbyaddr()`](#socket.gethostbyaddr "socket.gethostbyaddr"). The accompanying value is a pair `(h_errno, string)` representing an error returned by a library call. *h\_errno* is a numeric value, while *string* represents the description of *h\_errno*, as returned by the `hstrerror()` C function. Changed in version 3.3: This class was made a subclass of [`OSError`](exceptions#OSError "OSError"). `exception socket.gaierror` A subclass of [`OSError`](exceptions#OSError "OSError"), this exception is raised for address-related errors by [`getaddrinfo()`](#socket.getaddrinfo "socket.getaddrinfo") and [`getnameinfo()`](#socket.getnameinfo "socket.getnameinfo"). The accompanying value is a pair `(error, string)` representing an error returned by a library call. *string* represents the description of *error*, as returned by the `gai_strerror()` C function. The numeric *error* value will match one of the `EAI_*` constants defined in this module. Changed in version 3.3: This class was made a subclass of [`OSError`](exceptions#OSError "OSError"). `exception socket.timeout` A subclass of [`OSError`](exceptions#OSError "OSError"), this exception is raised when a timeout occurs on a socket which has had timeouts enabled via a prior call to [`settimeout()`](#socket.socket.settimeout "socket.socket.settimeout") (or implicitly through [`setdefaulttimeout()`](#socket.setdefaulttimeout "socket.setdefaulttimeout")). The accompanying value is a string whose value is currently always “timed out”. Changed in version 3.3: This class was made a subclass of [`OSError`](exceptions#OSError "OSError"). ### Constants The AF\_\* and SOCK\_\* constants are now `AddressFamily` and `SocketKind` [`IntEnum`](enum#enum.IntEnum "enum.IntEnum") collections. New in version 3.4. `socket.AF_UNIX` `socket.AF_INET` `socket.AF_INET6` These constants represent the address (and protocol) families, used for the first argument to `socket()`. If the [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX") constant is not defined then this protocol is unsupported. More constants may be available depending on the system. `socket.SOCK_STREAM` `socket.SOCK_DGRAM` `socket.SOCK_RAW` `socket.SOCK_RDM` `socket.SOCK_SEQPACKET` These constants represent the socket types, used for the second argument to `socket()`. 
More constants may be available depending on the system. (Only [`SOCK_STREAM`](#socket.SOCK_STREAM "socket.SOCK_STREAM") and [`SOCK_DGRAM`](#socket.SOCK_DGRAM "socket.SOCK_DGRAM") appear to be generally useful.)

`socket.SOCK_CLOEXEC` `socket.SOCK_NONBLOCK`

These two constants, if defined, can be combined with the socket types and allow you to set some flags atomically (thus avoiding possible race conditions and the need for separate calls). See also [Secure File Descriptor Handling](http://udrepper.livejournal.com/20407.html) for a more thorough explanation.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.6.27.

New in version 3.2.

`SO_*` `socket.SOMAXCONN` `MSG_*` `SOL_*` `SCM_*` `IPPROTO_*` `IPPORT_*` `INADDR_*` `IP_*` `IPV6_*` `EAI_*` `AI_*` `NI_*` `TCP_*`

Many constants of these forms, documented in the Unix documentation on sockets and/or the IP protocol, are also defined in the socket module. They are generally used in arguments to the `setsockopt()` and `getsockopt()` methods of socket objects. In most cases, only those symbols that are defined in the Unix header files are defined; for a few symbols, default values are provided.

Changed in version 3.6: `SO_DOMAIN`, `SO_PROTOCOL`, `SO_PEERSEC`, `SO_PASSSEC`, `TCP_USER_TIMEOUT`, `TCP_CONGESTION` were added.

Changed in version 3.6.5: On Windows, `TCP_FASTOPEN` and `TCP_KEEPCNT` appear if the run-time Windows version supports them.

Changed in version 3.7: `TCP_NOTSENT_LOWAT` was added. On Windows, `TCP_KEEPIDLE` and `TCP_KEEPINTVL` appear if the run-time Windows version supports them.

`socket.AF_CAN` `socket.PF_CAN` `SOL_CAN_*` `CAN_*`

Many constants of these forms, documented in the Linux documentation, are also defined in the socket module.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.6.25.

New in version 3.3.

`socket.CAN_BCM` `CAN_BCM_*`

CAN\_BCM, in the CAN protocol family, is the broadcast manager (BCM) protocol. Broadcast manager constants, documented in the Linux documentation, are also defined in the socket module.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.6.25.

Note The `CAN_BCM_CAN_FD_FRAME` flag is only available on Linux >= 4.8.

New in version 3.4.

`socket.CAN_RAW_FD_FRAMES`

Enables CAN FD support in a CAN\_RAW socket. This is disabled by default. This allows your application to send both CAN and CAN FD frames; however, you must accept both CAN and CAN FD frames when reading from the socket. This constant is documented in the Linux documentation.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 3.6.

New in version 3.5.

`socket.CAN_RAW_JOIN_FILTERS`

Joins the applied CAN filters such that only CAN frames that match all given CAN filters are passed to user space. This constant is documented in the Linux documentation.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 4.1.

New in version 3.9.

`socket.CAN_ISOTP`

CAN\_ISOTP, in the CAN protocol family, is the ISO-TP (ISO 15765-2) protocol. ISO-TP constants, documented in the Linux documentation, are also defined in the socket module.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.6.25.

New in version 3.7.

`socket.CAN_J1939`

CAN\_J1939, in the CAN protocol family, is the SAE J1939 protocol. J1939 constants, documented in the Linux documentation, are also defined in the socket module.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 5.4.

New in version 3.9.
`socket.AF_PACKET` `socket.PF_PACKET` `PACKET_*` Many constants of these forms, documented in the Linux documentation, are also defined in the socket module. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.2. `socket.AF_RDS` `socket.PF_RDS` `socket.SOL_RDS` `RDS_*` Many constants of these forms, documented in the Linux documentation, are also defined in the socket module. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.6.30. New in version 3.3. `socket.SIO_RCVALL` `socket.SIO_KEEPALIVE_VALS` `socket.SIO_LOOPBACK_FAST_PATH` `RCVALL_*` Constants for Windows’ WSAIoctl(). The constants are used as arguments to the [`ioctl()`](#socket.socket.ioctl "socket.socket.ioctl") method of socket objects. Changed in version 3.6: `SIO_LOOPBACK_FAST_PATH` was added. `TIPC_*` TIPC related constants, matching the ones exported by the C socket API. See the TIPC documentation for more information. `socket.AF_ALG` `socket.SOL_ALG` `ALG_*` Constants for Linux Kernel cryptography. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.6.38. New in version 3.6. `socket.AF_VSOCK` `socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID` `VMADDR*` `SO_VM*` Constants for Linux host/guest communication. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 4.8. New in version 3.7. `socket.AF_LINK` [Availability](https://docs.python.org/3.9/library/intro.html#availability): BSD, macOS. New in version 3.4. `socket.has_ipv6` This constant contains a boolean value which indicates if IPv6 is supported on this platform. `socket.BDADDR_ANY` `socket.BDADDR_LOCAL` These are string constants containing Bluetooth addresses with special meanings. For example, [`BDADDR_ANY`](#socket.BDADDR_ANY "socket.BDADDR_ANY") can be used to indicate any address when specifying the binding socket with `BTPROTO_RFCOMM`. `socket.HCI_FILTER` `socket.HCI_TIME_STAMP` `socket.HCI_DATA_DIR` For use with `BTPROTO_HCI`. [`HCI_FILTER`](#socket.HCI_FILTER "socket.HCI_FILTER") is not available for NetBSD or DragonFlyBSD. [`HCI_TIME_STAMP`](#socket.HCI_TIME_STAMP "socket.HCI_TIME_STAMP") and [`HCI_DATA_DIR`](#socket.HCI_DATA_DIR "socket.HCI_DATA_DIR") are not available for FreeBSD, NetBSD, or DragonFlyBSD. `socket.AF_QIPCRTR` Constant for Qualcomm’s IPC router protocol, used to communicate with service providing remote processors. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 4.7. ### Functions #### Creating sockets The following functions all create [socket objects](#socket-objects). `class socket.socket(family=AF_INET, type=SOCK_STREAM, proto=0, fileno=None)` Create a new socket using the given address family, socket type and protocol number. The address family should be [`AF_INET`](#socket.AF_INET "socket.AF_INET") (the default), [`AF_INET6`](#socket.AF_INET6 "socket.AF_INET6"), [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX"), [`AF_CAN`](#socket.AF_CAN "socket.AF_CAN"), [`AF_PACKET`](#socket.AF_PACKET "socket.AF_PACKET"), or [`AF_RDS`](#socket.AF_RDS "socket.AF_RDS"). The socket type should be [`SOCK_STREAM`](#socket.SOCK_STREAM "socket.SOCK_STREAM") (the default), [`SOCK_DGRAM`](#socket.SOCK_DGRAM "socket.SOCK_DGRAM"), [`SOCK_RAW`](#socket.SOCK_RAW "socket.SOCK_RAW") or perhaps one of the other `SOCK_` constants. 
The protocol number is usually zero and may be omitted, or, in the case where the address family is [`AF_CAN`](#socket.AF_CAN "socket.AF_CAN"), the protocol should be one of `CAN_RAW`, [`CAN_BCM`](#socket.CAN_BCM "socket.CAN_BCM"), [`CAN_ISOTP`](#socket.CAN_ISOTP "socket.CAN_ISOTP") or [`CAN_J1939`](#socket.CAN_J1939 "socket.CAN_J1939").

If *fileno* is specified, the values for *family*, *type*, and *proto* are auto-detected from the specified file descriptor. Auto-detection can be overruled by calling the function with explicit *family*, *type*, or *proto* arguments. This only affects how Python represents e.g. the return value of [`socket.getpeername()`](#socket.socket.getpeername "socket.socket.getpeername") but not the actual OS resource. Unlike [`socket.fromfd()`](#socket.fromfd "socket.fromfd"), *fileno* will return the same socket and not a duplicate. This may help close a detached socket using [`socket.close()`](#socket.close "socket.close").

The newly created socket is [non-inheritable](os#fd-inheritance).

Raises an [auditing event](sys#auditing) `socket.__new__` with arguments `self`, `family`, `type`, `protocol`.

Changed in version 3.3: The AF\_CAN family was added. The AF\_RDS family was added.

Changed in version 3.4: The CAN\_BCM protocol was added.

Changed in version 3.4: The returned socket is now non-inheritable.

Changed in version 3.7: The CAN\_ISOTP protocol was added.

Changed in version 3.7: When [`SOCK_NONBLOCK`](#socket.SOCK_NONBLOCK "socket.SOCK_NONBLOCK") or [`SOCK_CLOEXEC`](#socket.SOCK_CLOEXEC "socket.SOCK_CLOEXEC") bit flags are applied to *type* they are cleared, and [`socket.type`](#socket.socket.type "socket.socket.type") will not reflect them. They are still passed to the underlying system `socket()` call. Therefore,

```
sock = socket.socket(
    socket.AF_INET,
    socket.SOCK_STREAM | socket.SOCK_NONBLOCK)
```

will still create a non-blocking socket on OSes that support `SOCK_NONBLOCK`, but `sock.type` will be set to `socket.SOCK_STREAM`.

Changed in version 3.9: The CAN\_J1939 protocol was added.

`socket.socketpair([family[, type[, proto]]])`

Build a pair of connected socket objects using the given address family, socket type, and protocol number. Address family, socket type, and protocol number are as for the `socket()` function above. The default family is [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX") if defined on the platform; otherwise, the default is [`AF_INET`](#socket.AF_INET "socket.AF_INET").

The newly created sockets are [non-inheritable](os#fd-inheritance).

Changed in version 3.2: The returned socket objects now support the whole socket API, rather than a subset.

Changed in version 3.4: The returned sockets are now non-inheritable.

Changed in version 3.5: Windows support added.

`socket.create_connection(address[, timeout[, source_address]])`

Connect to a TCP service listening on the Internet *address* (a 2-tuple `(host, port)`), and return the socket object. This is a higher-level function than [`socket.connect()`](#socket.socket.connect "socket.socket.connect"): if *host* is a non-numeric hostname, it will try to resolve it for both [`AF_INET`](#socket.AF_INET "socket.AF_INET") and [`AF_INET6`](#socket.AF_INET6 "socket.AF_INET6"), and then try to connect to all possible addresses in turn until a connection succeeds. This makes it easy to write clients that are compatible with both IPv4 and IPv6.

Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect.
If no *timeout* is supplied, the global default timeout setting returned by [`getdefaulttimeout()`](#socket.getdefaulttimeout "socket.getdefaulttimeout") is used.

If supplied, *source\_address* must be a 2-tuple `(host, port)` for the socket to bind to as its source address before connecting. If host or port are ‘’ or 0 respectively the OS default behavior will be used.

Changed in version 3.2: *source\_address* was added.

`socket.create_server(address, *, family=AF_INET, backlog=None, reuse_port=False, dualstack_ipv6=False)`

Convenience function which creates a TCP socket bound to *address* (a 2-tuple `(host, port)`) and returns the socket object.

*family* should be either [`AF_INET`](#socket.AF_INET "socket.AF_INET") or [`AF_INET6`](#socket.AF_INET6 "socket.AF_INET6"). *backlog* is the queue size passed to [`socket.listen()`](#socket.socket.listen "socket.socket.listen"); when `0` a default reasonable value is chosen. *reuse\_port* dictates whether to set the `SO_REUSEPORT` socket option.

If *dualstack\_ipv6* is true and the platform supports it the socket will be able to accept both IPv4 and IPv6 connections, else it will raise [`ValueError`](exceptions#ValueError "ValueError"). Most POSIX platforms and Windows are supposed to support this functionality. When this functionality is enabled the address returned by [`socket.getpeername()`](#socket.socket.getpeername "socket.socket.getpeername") when an IPv4 connection occurs will be an IPv6 address represented as an IPv4-mapped IPv6 address. If *dualstack\_ipv6* is false it will explicitly disable this functionality on platforms that enable it by default (e.g. Linux). This parameter can be used in conjunction with [`has_dualstack_ipv6()`](#socket.has_dualstack_ipv6 "socket.has_dualstack_ipv6"):

```
import socket

addr = ("", 8080)  # all interfaces, port 8080
if socket.has_dualstack_ipv6():
    s = socket.create_server(addr, family=socket.AF_INET6,
                             dualstack_ipv6=True)
else:
    s = socket.create_server(addr)
```

Note On POSIX platforms the `SO_REUSEADDR` socket option is set in order to immediately reuse previous sockets which were bound on the same *address* and remained in TIME\_WAIT state.

New in version 3.8.

`socket.has_dualstack_ipv6()`

Return `True` if the platform supports creating a TCP socket which can handle both IPv4 and IPv6 connections.

New in version 3.8.

`socket.fromfd(fd, family, type, proto=0)`

Duplicate the file descriptor *fd* (an integer as returned by a file object’s `fileno()` method) and build a socket object from the result. Address family, socket type and protocol number are as for the `socket()` function above. The file descriptor should refer to a socket, but this is not checked — subsequent operations on the object may fail if the file descriptor is invalid. This function is rarely needed, but can be used to get or set socket options on a socket passed to a program as standard input or output (such as a server started by the Unix inet daemon). The socket is assumed to be in blocking mode.

The newly created socket is [non-inheritable](os#fd-inheritance).

Changed in version 3.4: The returned socket is now non-inheritable.

`socket.fromshare(data)`

Instantiate a socket from data obtained from the [`socket.share()`](#socket.socket.share "socket.socket.share") method. The socket is assumed to be in blocking mode.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows.

New in version 3.3.

`socket.SocketType`

This is a Python type object that represents the socket object type.
It is the same as `type(socket(...))`.

#### Other functions

The [`socket`](#module-socket "socket: Low-level networking interface.") module also offers various network-related services:

`socket.close(fd)`

Close a socket file descriptor. This is like [`os.close()`](os#os.close "os.close"), but for sockets. On some platforms (most noticeably Windows) [`os.close()`](os#os.close "os.close") does not work for socket file descriptors.

New in version 3.7.

`socket.getaddrinfo(host, port, family=0, type=0, proto=0, flags=0)`

Translate the *host*/*port* argument into a sequence of 5-tuples that contain all the necessary arguments for creating a socket connected to that service. *host* is a domain name, a string representation of an IPv4/v6 address or `None`. *port* is a string service name such as `'http'`, a numeric port number or `None`. By passing `None` as the value of *host* and *port*, you can pass `NULL` to the underlying C API.

The *family*, *type* and *proto* arguments can be optionally specified in order to narrow the list of addresses returned. Passing zero as a value for each of these arguments selects the full range of results. The *flags* argument can be one or several of the `AI_*` constants, and will influence how results are computed and returned. For example, `AI_NUMERICHOST` will disable domain name resolution and will raise an error if *host* is a domain name.

The function returns a list of 5-tuples with the following structure:

`(family, type, proto, canonname, sockaddr)`

In these tuples, *family*, *type*, *proto* are all integers and are meant to be passed to the `socket()` function. *canonname* will be a string representing the canonical name of the *host* if `AI_CANONNAME` is part of the *flags* argument; else *canonname* will be empty. *sockaddr* is a tuple describing a socket address, whose format depends on the returned *family* (a `(address, port)` 2-tuple for [`AF_INET`](#socket.AF_INET "socket.AF_INET"), a `(address, port, flowinfo, scope_id)` 4-tuple for [`AF_INET6`](#socket.AF_INET6 "socket.AF_INET6")), and is meant to be passed to the [`socket.connect()`](#socket.socket.connect "socket.socket.connect") method.

Raises an [auditing event](sys#auditing) `socket.getaddrinfo` with arguments `host`, `port`, `family`, `type`, `protocol`.

The following example fetches address information for a hypothetical TCP connection to `example.org` on port 80 (results may differ on your system if IPv6 isn’t enabled):

```
>>> socket.getaddrinfo("example.org", 80, proto=socket.IPPROTO_TCP)
[(<AddressFamily.AF_INET6: 10>, <SocketType.SOCK_STREAM: 1>, 6, '',
  ('2606:2800:220:1:248:1893:25c8:1946', 80, 0, 0)),
 (<AddressFamily.AF_INET: 2>, <SocketType.SOCK_STREAM: 1>, 6, '',
  ('93.184.216.34', 80))]
```

Changed in version 3.2: parameters can now be passed using keyword arguments.

Changed in version 3.7: for IPv6 multicast addresses, a string representing an address will not contain the `%scope_id` part.

`socket.getfqdn([name])`

Return a fully qualified domain name for *name*. If *name* is omitted or empty, it is interpreted as the local host. To find the fully qualified name, the hostname returned by [`gethostbyaddr()`](#socket.gethostbyaddr "socket.gethostbyaddr") is checked, followed by aliases for the host, if available. The first name which includes a period is selected. In case no fully qualified domain name is available and *name* was provided, it is returned unchanged.
If *name* was empty or equal to `'0.0.0.0'`, the hostname from [`gethostname()`](#socket.gethostname "socket.gethostname") is returned. `socket.gethostbyname(hostname)` Translate a host name to IPv4 address format. The IPv4 address is returned as a string, such as `'100.50.200.5'`. If the host name is an IPv4 address itself it is returned unchanged. See [`gethostbyname_ex()`](#socket.gethostbyname_ex "socket.gethostbyname_ex") for a more complete interface. [`gethostbyname()`](#socket.gethostbyname "socket.gethostbyname") does not support IPv6 name resolution, and [`getaddrinfo()`](#socket.getaddrinfo "socket.getaddrinfo") should be used instead for IPv4/v6 dual stack support. Raises an [auditing event](sys#auditing) `socket.gethostbyname` with argument `hostname`. `socket.gethostbyname_ex(hostname)` Translate a host name to IPv4 address format, extended interface. Return a triple `(hostname, aliaslist, ipaddrlist)` where *hostname* is the host’s primary host name, *aliaslist* is a (possibly empty) list of alternative host names for the same address, and *ipaddrlist* is a list of IPv4 addresses for the same interface on the same host (often but not always a single address). [`gethostbyname_ex()`](#socket.gethostbyname_ex "socket.gethostbyname_ex") does not support IPv6 name resolution, and [`getaddrinfo()`](#socket.getaddrinfo "socket.getaddrinfo") should be used instead for IPv4/v6 dual stack support. Raises an [auditing event](sys#auditing) `socket.gethostbyname` with argument `hostname`. `socket.gethostname()` Return a string containing the hostname of the machine where the Python interpreter is currently executing. Raises an [auditing event](sys#auditing) `socket.gethostname` with no arguments. Note: [`gethostname()`](#socket.gethostname "socket.gethostname") doesn’t always return the fully qualified domain name; use [`getfqdn()`](#socket.getfqdn "socket.getfqdn") for that. `socket.gethostbyaddr(ip_address)` Return a triple `(hostname, aliaslist, ipaddrlist)` where *hostname* is the primary host name responding to the given *ip\_address*, *aliaslist* is a (possibly empty) list of alternative host names for the same address, and *ipaddrlist* is a list of IPv4/v6 addresses for the same interface on the same host (most likely containing only a single address). To find the fully qualified domain name, use the function [`getfqdn()`](#socket.getfqdn "socket.getfqdn"). [`gethostbyaddr()`](#socket.gethostbyaddr "socket.gethostbyaddr") supports both IPv4 and IPv6. Raises an [auditing event](sys#auditing) `socket.gethostbyaddr` with argument `ip_address`. `socket.getnameinfo(sockaddr, flags)` Translate a socket address *sockaddr* into a 2-tuple `(host, port)`. Depending on the settings of *flags*, the result can contain a fully-qualified domain name or numeric address representation in *host*. Similarly, *port* can contain a string port name or a numeric port number. For IPv6 addresses, `%scope_id` is appended to the host part if *sockaddr* contains meaningful *scope\_id*. Usually this happens for multicast addresses. For more information about *flags* you can consult *[getnameinfo(3)](https://manpages.debian.org/getnameinfo(3))*. Raises an [auditing event](sys#auditing) `socket.getnameinfo` with argument `sockaddr`. `socket.getprotobyname(protocolname)` Translate an Internet protocol name (for example, `'icmp'`) to a constant suitable for passing as the (optional) third argument to the `socket()` function. 
This is usually only needed for sockets opened in “raw” mode ([`SOCK_RAW`](#socket.SOCK_RAW "socket.SOCK_RAW")); for the normal socket modes, the correct protocol is chosen automatically if the protocol is omitted or zero. `socket.getservbyname(servicename[, protocolname])` Translate an Internet service name and protocol name to a port number for that service. The optional protocol name, if given, should be `'tcp'` or `'udp'`, otherwise any protocol will match. Raises an [auditing event](sys#auditing) `socket.getservbyname` with arguments `servicename`, `protocolname`. `socket.getservbyport(port[, protocolname])` Translate an Internet port number and protocol name to a service name for that service. The optional protocol name, if given, should be `'tcp'` or `'udp'`, otherwise any protocol will match. Raises an [auditing event](sys#auditing) `socket.getservbyport` with arguments `port`, `protocolname`. `socket.ntohl(x)` Convert 32-bit positive integers from network to host byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 4-byte swap operation. `socket.ntohs(x)` Convert 16-bit positive integers from network to host byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 2-byte swap operation. Deprecated since version 3.7: In case *x* does not fit in 16-bit unsigned integer, but does fit in a positive C int, it is silently truncated to 16-bit unsigned integer. This silent truncation feature is deprecated, and will raise an exception in future versions of Python. `socket.htonl(x)` Convert 32-bit positive integers from host to network byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 4-byte swap operation. `socket.htons(x)` Convert 16-bit positive integers from host to network byte order. On machines where the host byte order is the same as network byte order, this is a no-op; otherwise, it performs a 2-byte swap operation. Deprecated since version 3.7: In case *x* does not fit in 16-bit unsigned integer, but does fit in a positive C int, it is silently truncated to 16-bit unsigned integer. This silent truncation feature is deprecated, and will raise an exception in future versions of Python. `socket.inet_aton(ip_string)` Convert an IPv4 address from dotted-quad string format (for example, ‘123.45.67.89’) to 32-bit packed binary format, as a bytes object four characters in length. This is useful when conversing with a program that uses the standard C library and needs objects of type `struct in_addr`, which is the C type for the 32-bit packed binary this function returns. [`inet_aton()`](#socket.inet_aton "socket.inet_aton") also accepts strings with less than three dots; see the Unix manual page *[inet(3)](https://manpages.debian.org/inet(3))* for details. If the IPv4 address string passed to this function is invalid, [`OSError`](exceptions#OSError "OSError") will be raised. Note that exactly what is valid depends on the underlying C implementation of `inet_aton()`. [`inet_aton()`](#socket.inet_aton "socket.inet_aton") does not support IPv6, and [`inet_pton()`](#socket.inet_pton "socket.inet_pton") should be used instead for IPv4/v6 dual stack support. 
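For example (a small sketch, not from the original page; the printed bytes assume an ASCII-compatible build, which is the norm), packing and unpacking with these functions looks like this; `inet_ntoa()` is described just below:

```
>>> import socket
>>> socket.inet_aton('123.45.67.89')     # 4-byte packed form
b'{-CY'
>>> socket.inet_ntoa(socket.inet_aton('123.45.67.89'))
'123.45.67.89'
>>> socket.inet_pton(socket.AF_INET6, '::1')   # IPv6 needs inet_pton()
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01'
```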
`socket.inet_ntoa(packed_ip)` Convert a 32-bit packed IPv4 address (a [bytes-like object](../glossary#term-bytes-like-object) four bytes in length) to its standard dotted-quad string representation (for example, ‘123.45.67.89’). This is useful when conversing with a program that uses the standard C library and needs objects of type `struct in_addr`, which is the C type for the 32-bit packed binary data this function takes as an argument. If the byte sequence passed to this function is not exactly 4 bytes in length, [`OSError`](exceptions#OSError "OSError") will be raised. [`inet_ntoa()`](#socket.inet_ntoa "socket.inet_ntoa") does not support IPv6, and [`inet_ntop()`](#socket.inet_ntop "socket.inet_ntop") should be used instead for IPv4/v6 dual stack support. Changed in version 3.5: Writable [bytes-like object](../glossary#term-bytes-like-object) is now accepted. `socket.inet_pton(address_family, ip_string)` Convert an IP address from its family-specific string format to a packed, binary format. [`inet_pton()`](#socket.inet_pton "socket.inet_pton") is useful when a library or network protocol calls for an object of type `struct in_addr` (similar to [`inet_aton()`](#socket.inet_aton "socket.inet_aton")) or `struct in6_addr`. Supported values for *address\_family* are currently [`AF_INET`](#socket.AF_INET "socket.AF_INET") and [`AF_INET6`](#socket.AF_INET6 "socket.AF_INET6"). If the IP address string *ip\_string* is invalid, [`OSError`](exceptions#OSError "OSError") will be raised. Note that exactly what is valid depends on both the value of *address\_family* and the underlying implementation of `inet_pton()`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix (maybe not all platforms), Windows. Changed in version 3.4: Windows support added `socket.inet_ntop(address_family, packed_ip)` Convert a packed IP address (a [bytes-like object](../glossary#term-bytes-like-object) of some number of bytes) to its standard, family-specific string representation (for example, `'7.10.0.5'` or `'5aef:2b::8'`). [`inet_ntop()`](#socket.inet_ntop "socket.inet_ntop") is useful when a library or network protocol returns an object of type `struct in_addr` (similar to [`inet_ntoa()`](#socket.inet_ntoa "socket.inet_ntoa")) or `struct in6_addr`. Supported values for *address\_family* are currently [`AF_INET`](#socket.AF_INET "socket.AF_INET") and [`AF_INET6`](#socket.AF_INET6 "socket.AF_INET6"). If the bytes object *packed\_ip* is not the correct length for the specified address family, [`ValueError`](exceptions#ValueError "ValueError") will be raised. [`OSError`](exceptions#OSError "OSError") is raised for errors from the call to [`inet_ntop()`](#socket.inet_ntop "socket.inet_ntop"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix (maybe not all platforms), Windows. Changed in version 3.4: Windows support added Changed in version 3.5: Writable [bytes-like object](../glossary#term-bytes-like-object) is now accepted. `socket.CMSG_LEN(length)` Return the total length, without trailing padding, of an ancillary data item with associated data of the given *length*. 
This value can often be used as the buffer size for [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg") to receive a single item of ancillary data, but [**RFC 3542**](https://tools.ietf.org/html/rfc3542.html) requires portable applications to use [`CMSG_SPACE()`](#socket.CMSG_SPACE "socket.CMSG_SPACE") and thus include space for padding, even when the item will be the last in the buffer. Raises [`OverflowError`](exceptions#OverflowError "OverflowError") if *length* is outside the permissible range of values. [Availability](https://docs.python.org/3.9/library/intro.html#availability): most Unix platforms, possibly others. New in version 3.3. `socket.CMSG_SPACE(length)` Return the buffer size needed for [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg") to receive an ancillary data item with associated data of the given *length*, along with any trailing padding. The buffer space needed to receive multiple items is the sum of the [`CMSG_SPACE()`](#socket.CMSG_SPACE "socket.CMSG_SPACE") values for their associated data lengths. Raises [`OverflowError`](exceptions#OverflowError "OverflowError") if *length* is outside the permissible range of values. Note that some systems might support ancillary data without providing this function. Also note that setting the buffer size using the results of this function may not precisely limit the amount of ancillary data that can be received, since additional data may be able to fit into the padding area. [Availability](https://docs.python.org/3.9/library/intro.html#availability): most Unix platforms, possibly others. New in version 3.3. `socket.getdefaulttimeout()` Return the default timeout in seconds (float) for new socket objects. A value of `None` indicates that new socket objects have no timeout. When the socket module is first imported, the default is `None`. `socket.setdefaulttimeout(timeout)` Set the default timeout in seconds (float) for new socket objects. When the socket module is first imported, the default is `None`. See [`settimeout()`](#socket.socket.settimeout "socket.socket.settimeout") for possible values and their respective meanings. `socket.sethostname(name)` Set the machine’s hostname to *name*. This will raise an [`OSError`](exceptions#OSError "OSError") if you don’t have enough rights. Raises an [auditing event](sys#auditing) `socket.sethostname` with argument `name`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `socket.if_nameindex()` Return a list of network interface information (index int, name string) tuples. [`OSError`](exceptions#OSError "OSError") if the system call fails. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. New in version 3.3. Changed in version 3.8: Windows support was added. Note On Windows network interfaces have different names in different contexts (all names are examples): * UUID: `{FB605B73-AAC2-49A6-9A2F-25416AEA0573}` * name: `ethernet_32770` * friendly name: `vEthernet (nat)` * description: `Hyper-V Virtual Ethernet Adapter` This function returns names of the second form from the list, `ethernet_32770` in this example case. `socket.if_nametoindex(if_name)` Return a network interface index number corresponding to an interface name. [`OSError`](exceptions#OSError "OSError") if no interface with the given name exists. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. New in version 3.3. Changed in version 3.8: Windows support was added. 
See also “Interface name” is a name as documented in [`if_nameindex()`](#socket.if_nameindex "socket.if_nameindex"). `socket.if_indextoname(if_index)` Return a network interface name corresponding to an interface index number. [`OSError`](exceptions#OSError "OSError") if no interface with the given index exists. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. New in version 3.3. Changed in version 3.8: Windows support was added. See also “Interface name” is a name as documented in [`if_nameindex()`](#socket.if_nameindex "socket.if_nameindex"). `socket.send_fds(sock, buffers, fds[, flags[, address]])` Send the list of file descriptors *fds* over an [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX") socket *sock*. The *fds* parameter is a sequence of file descriptors. Consult `sendmsg()` for the documentation of these parameters. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix supporting [`sendmsg()`](#socket.socket.sendmsg "socket.socket.sendmsg") and `SCM_RIGHTS` mechanism. New in version 3.9. `socket.recv_fds(sock, bufsize, maxfds[, flags])` Receive up to *maxfds* file descriptors from an [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX") socket *sock*. Return `(msg, list(fds), flags, addr)`. Consult `recvmsg()` for the documentation of these parameters. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix supporting [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg") and `SCM_RIGHTS` mechanism. New in version 3.9. Note Any truncated integers at the end of the list of file descriptors. Socket Objects -------------- Socket objects have the following methods. Except for [`makefile()`](#socket.socket.makefile "socket.socket.makefile"), these correspond to Unix system calls applicable to sockets. Changed in version 3.2: Support for the [context manager](../glossary#term-context-manager) protocol was added. Exiting the context manager is equivalent to calling [`close()`](#socket.close "socket.close"). `socket.accept()` Accept a connection. The socket must be bound to an address and listening for connections. The return value is a pair `(conn, address)` where *conn* is a *new* socket object usable to send and receive data on the connection, and *address* is the address bound to the socket on the other end of the connection. The newly created socket is [non-inheritable](os#fd-inheritance). Changed in version 3.4: The socket is now non-inheritable. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale). `socket.bind(address)` Bind the socket to *address*. The socket must not already be bound. (The format of *address* depends on the address family — see above.) Raises an [auditing event](sys#auditing) `socket.bind` with arguments `self`, `address`. `socket.close()` Mark the socket closed. The underlying system resource (e.g. a file descriptor) is also closed when all file objects from [`makefile()`](#socket.socket.makefile "socket.socket.makefile") are closed. Once that happens, all future operations on the socket object will fail. The remote end will receive no more data (after queued data is flushed). 
Sockets are automatically closed when they are garbage-collected, but it is recommended to [`close()`](#socket.close "socket.close") them explicitly, or to use a [`with`](../reference/compound_stmts#with) statement around them. Changed in version 3.6: [`OSError`](exceptions#OSError "OSError") is now raised if an error occurs when the underlying `close()` call is made.

Note [`close()`](#socket.close "socket.close") releases the resource associated with a connection but does not necessarily close the connection immediately. If you want to close the connection in a timely fashion, call [`shutdown()`](#socket.socket.shutdown "socket.socket.shutdown") before [`close()`](#socket.close "socket.close").

`socket.connect(address)`

Connect to a remote socket at *address*. (The format of *address* depends on the address family — see above.) If the connection is interrupted by a signal, the method waits until the connection completes, or raises a [`socket.timeout`](#socket.timeout "socket.timeout") on timeout, if the signal handler doesn’t raise an exception and the socket is blocking or has a timeout. For non-blocking sockets, the method raises an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception if the connection is interrupted by a signal (or the exception raised by the signal handler). Raises an [auditing event](sys#auditing) `socket.connect` with arguments `self`, `address`. Changed in version 3.5: The method now waits until the connection completes instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception if the connection is interrupted by a signal, the signal handler doesn’t raise an exception and the socket is blocking or has a timeout (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale).

`socket.connect_ex(address)`

Like `connect(address)`, but return an error indicator instead of raising an exception for errors returned by the C-level `connect()` call (other problems, such as “host not found,” can still raise exceptions). The error indicator is `0` if the operation succeeded, otherwise the value of the `errno` variable. This is useful to support, for example, asynchronous connects. Raises an [auditing event](sys#auditing) `socket.connect` with arguments `self`, `address`.

`socket.detach()`

Put the socket object into closed state without actually closing the underlying file descriptor. The file descriptor is returned, and can be reused for other purposes. New in version 3.2.

`socket.dup()`

Duplicate the socket. The newly created socket is [non-inheritable](os#fd-inheritance). Changed in version 3.4: The socket is now non-inheritable.

`socket.fileno()`

Return the socket’s file descriptor (a small integer), or -1 on failure. This is useful with [`select.select()`](select#select.select "select.select"). Under Windows the small integer returned by this method cannot be used where a file descriptor can be used (such as [`os.fdopen()`](os#os.fdopen "os.fdopen")). Unix does not have this limitation.

`socket.get_inheritable()`

Get the [inheritable flag](os#fd-inheritance) of the socket’s file descriptor or socket’s handle: `True` if the socket can be inherited in child processes, `False` if it cannot. New in version 3.4.

`socket.getpeername()`

Return the remote address to which the socket is connected. This is useful to find out the port number of a remote IPv4/v6 socket, for instance. (The format of the address returned depends on the address family — see above.)
On some systems this function is not supported.

`socket.getsockname()`

Return the socket’s own address. This is useful to find out the port number of an IPv4/v6 socket, for instance. (The format of the address returned depends on the address family — see above.)

`socket.getsockopt(level, optname[, buflen])`

Return the value of the given socket option (see the Unix man page *[getsockopt(2)](https://manpages.debian.org/getsockopt(2))*). The needed symbolic constants (`SO_*` etc.) are defined in this module. If *buflen* is absent, an integer option is assumed and its integer value is returned by the function. If *buflen* is present, it specifies the maximum length of the buffer used to receive the option in, and this buffer is returned as a bytes object. It is up to the caller to decode the contents of the buffer (see the optional built-in module [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") for a way to decode C structures encoded as byte strings).

`socket.getblocking()`

Return `True` if the socket is in blocking mode, `False` if it is in non-blocking mode. This is equivalent to checking `socket.gettimeout() != 0`. New in version 3.7.

`socket.gettimeout()`

Return the timeout in seconds (float) associated with socket operations, or `None` if no timeout is set. This reflects the last call to [`setblocking()`](#socket.socket.setblocking "socket.socket.setblocking") or [`settimeout()`](#socket.socket.settimeout "socket.socket.settimeout").

`socket.ioctl(control, option)`

Platform: Windows. The [`ioctl()`](#socket.socket.ioctl "socket.socket.ioctl") method is a limited interface to the WSAIoctl system interface. Please refer to the [Win32 documentation](https://msdn.microsoft.com/en-us/library/ms741621%28VS.85%29.aspx) for more information. On other platforms, the generic [`fcntl.fcntl()`](fcntl#fcntl.fcntl "fcntl.fcntl") and [`fcntl.ioctl()`](fcntl#fcntl.ioctl "fcntl.ioctl") functions may be used; they accept a socket object as their first argument. Currently only the following control codes are supported: `SIO_RCVALL`, `SIO_KEEPALIVE_VALS`, and `SIO_LOOPBACK_FAST_PATH`. Changed in version 3.6: `SIO_LOOPBACK_FAST_PATH` was added.

`socket.listen([backlog])`

Enable a server to accept connections. If *backlog* is specified, it must be at least 0 (if it is lower, it is set to 0); it specifies the number of unaccepted connections that the system will allow before refusing new connections. If not specified, a default reasonable value is chosen. Changed in version 3.5: The *backlog* parameter is now optional.

`socket.makefile(mode='r', buffering=None, *, encoding=None, errors=None, newline=None)`

Return a [file object](../glossary#term-file-object) associated with the socket. The exact returned type depends on the arguments given to [`makefile()`](#socket.socket.makefile "socket.socket.makefile"). These arguments are interpreted the same way as by the built-in [`open()`](functions#open "open") function, except the only supported *mode* values are `'r'` (default), `'w'` and `'b'`. The socket must be in blocking mode; it can have a timeout, but the file object’s internal buffer may end up in an inconsistent state if a timeout occurs. Closing the file object returned by [`makefile()`](#socket.socket.makefile "socket.socket.makefile") won’t close the original socket unless all other file objects have been closed and [`socket.close()`](#socket.close "socket.close") has been called on the socket object.
Note On Windows, the file-like object created by [`makefile()`](#socket.socket.makefile "socket.socket.makefile") cannot be used where a file object with a file descriptor is expected, such as the stream arguments of [`subprocess.Popen()`](subprocess#subprocess.Popen "subprocess.Popen").

`socket.recv(bufsize[, flags])`

Receive data from the socket. The return value is a bytes object representing the data received. The maximum amount of data to be received at once is specified by *bufsize*. See the Unix manual page *[recv(2)](https://manpages.debian.org/recv(2))* for the meaning of the optional argument *flags*; it defaults to zero.

Note For best match with hardware and network realities, the value of *bufsize* should be a relatively small power of 2, for example, 4096.

Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale).

`socket.recvfrom(bufsize[, flags])`

Receive data from the socket. The return value is a pair `(bytes, address)` where *bytes* is a bytes object representing the data received and *address* is the address of the socket sending the data. See the Unix manual page *[recv(2)](https://manpages.debian.org/recv(2))* for the meaning of the optional argument *flags*; it defaults to zero. (The format of *address* depends on the address family — see above.) Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale). Changed in version 3.7: For multicast IPv6 addresses, the first item of *address* no longer contains the `%scope_id` part. In order to get the full IPv6 address, use [`getnameinfo()`](#socket.getnameinfo "socket.getnameinfo").

`socket.recvmsg(bufsize[, ancbufsize[, flags]])`

Receive normal data (up to *bufsize* bytes) and ancillary data from the socket. The *ancbufsize* argument sets the size in bytes of the internal buffer used to receive the ancillary data; it defaults to 0, meaning that no ancillary data will be received. Appropriate buffer sizes for ancillary data can be calculated using [`CMSG_SPACE()`](#socket.CMSG_SPACE "socket.CMSG_SPACE") or [`CMSG_LEN()`](#socket.CMSG_LEN "socket.CMSG_LEN"), and items which do not fit into the buffer might be truncated or discarded. The *flags* argument defaults to 0 and has the same meaning as for [`recv()`](#socket.socket.recv "socket.socket.recv"). The return value is a 4-tuple: `(data, ancdata, msg_flags, address)`. The *data* item is a [`bytes`](stdtypes#bytes "bytes") object holding the non-ancillary data received. The *ancdata* item is a list of zero or more tuples `(cmsg_level, cmsg_type, cmsg_data)` representing the ancillary data (control messages) received: *cmsg\_level* and *cmsg\_type* are integers specifying the protocol level and protocol-specific type respectively, and *cmsg\_data* is a [`bytes`](stdtypes#bytes "bytes") object holding the associated data. The *msg\_flags* item is the bitwise OR of various flags indicating conditions on the received message; see your system documentation for details.
If the receiving socket is unconnected, *address* is the address of the sending socket, if available; otherwise, its value is unspecified. On some systems, [`sendmsg()`](#socket.socket.sendmsg "socket.socket.sendmsg") and [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg") can be used to pass file descriptors between processes over an [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX") socket. When this facility is used (it is often restricted to [`SOCK_STREAM`](#socket.SOCK_STREAM "socket.SOCK_STREAM") sockets), [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg") will return, in its ancillary data, items of the form `(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)`, where *fds* is a [`bytes`](stdtypes#bytes "bytes") object representing the new file descriptors as a binary array of the native C `int` type. If [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg") raises an exception after the system call returns, it will first attempt to close any file descriptors received via this mechanism. Some systems do not indicate the truncated length of ancillary data items which have been only partially received. If an item appears to extend beyond the end of the buffer, [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg") will issue a [`RuntimeWarning`](exceptions#RuntimeWarning "RuntimeWarning"), and will return the part of it which is inside the buffer provided it has not been truncated before the start of its associated data. On systems which support the `SCM_RIGHTS` mechanism, the following function will receive up to *maxfds* file descriptors, returning the message data and a list containing the descriptors (while ignoring unexpected conditions such as unrelated control messages being received). See also [`sendmsg()`](#socket.socket.sendmsg "socket.socket.sendmsg").

```
import socket, array

def recv_fds(sock, msglen, maxfds):
    fds = array.array("i")   # Array of ints
    msg, ancdata, flags, addr = sock.recvmsg(msglen, socket.CMSG_LEN(maxfds * fds.itemsize))
    for cmsg_level, cmsg_type, cmsg_data in ancdata:
        if cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS:
            # Append data, ignoring any truncated integers at the end.
            fds.frombytes(cmsg_data[:len(cmsg_data) - (len(cmsg_data) % fds.itemsize)])
    return msg, list(fds)
```

[Availability](https://docs.python.org/3.9/library/intro.html#availability): most Unix platforms, possibly others. New in version 3.3. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale).

`socket.recvmsg_into(buffers[, ancbufsize[, flags]])`

Receive normal data and ancillary data from the socket, behaving as [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg") would, but scatter the non-ancillary data into a series of buffers instead of returning a new bytes object. The *buffers* argument must be an iterable of objects that export writable buffers (e.g. [`bytearray`](stdtypes#bytearray "bytearray") objects); these will be filled with successive chunks of the non-ancillary data until it has all been written or there are no more buffers. The operating system may set a limit ([`sysconf()`](os#os.sysconf "os.sysconf") value `SC_IOV_MAX`) on the number of buffers that can be used.
The *ancbufsize* and *flags* arguments have the same meaning as for [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg"). The return value is a 4-tuple: `(nbytes, ancdata, msg_flags, address)`, where *nbytes* is the total number of bytes of non-ancillary data written into the buffers, and *ancdata*, *msg\_flags* and *address* are the same as for [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg"). Example:

```
>>> import socket
>>> s1, s2 = socket.socketpair()
>>> b1 = bytearray(b'----')
>>> b2 = bytearray(b'0123456789')
>>> b3 = bytearray(b'--------------')
>>> s1.send(b'Mary had a little lamb')
22
>>> s2.recvmsg_into([b1, memoryview(b2)[2:9], b3])
(22, [], 0, None)
>>> [b1, b2, b3]
[bytearray(b'Mary'), bytearray(b'01 had a 9'), bytearray(b'little lamb---')]
```

[Availability](https://docs.python.org/3.9/library/intro.html#availability): most Unix platforms, possibly others. New in version 3.3.

`socket.recvfrom_into(buffer[, nbytes[, flags]])`

Receive data from the socket, writing it into *buffer* instead of creating a new bytestring. The return value is a pair `(nbytes, address)` where *nbytes* is the number of bytes received and *address* is the address of the socket sending the data. See the Unix manual page *[recv(2)](https://manpages.debian.org/recv(2))* for the meaning of the optional argument *flags*; it defaults to zero. (The format of *address* depends on the address family — see above.)

`socket.recv_into(buffer[, nbytes[, flags]])`

Receive up to *nbytes* bytes from the socket, storing the data into a buffer rather than creating a new bytestring. If *nbytes* is not specified (or 0), receive up to the size available in the given buffer. Returns the number of bytes received. See the Unix manual page *[recv(2)](https://manpages.debian.org/recv(2))* for the meaning of the optional argument *flags*; it defaults to zero.

`socket.send(bytes[, flags])`

Send data to the socket. The socket must be connected to a remote socket. The optional *flags* argument has the same meaning as for [`recv()`](#socket.socket.recv "socket.socket.recv") above. Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data. For further information on this topic, consult the [Socket Programming HOWTO](../howto/sockets#socket-howto). Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale).

`socket.sendall(bytes[, flags])`

Send data to the socket. The socket must be connected to a remote socket. The optional *flags* argument has the same meaning as for [`recv()`](#socket.socket.recv "socket.socket.recv") above. Unlike [`send()`](#socket.socket.send "socket.socket.send"), this method continues to send data from *bytes* until either all data has been sent or an error occurs. `None` is returned on success. On error, an exception is raised, and there is no way to determine how much data, if any, was successfully sent. Changed in version 3.5: The socket timeout is no longer reset each time data is sent successfully. The socket timeout is now the maximum total duration to send all data.
Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale).

`socket.sendto(bytes, address)`

`socket.sendto(bytes, flags, address)`

Send data to the socket. The socket should not be connected to a remote socket, since the destination socket is specified by *address*. The optional *flags* argument has the same meaning as for [`recv()`](#socket.socket.recv "socket.socket.recv") above. Return the number of bytes sent. (The format of *address* depends on the address family — see above.) Raises an [auditing event](sys#auditing) `socket.sendto` with arguments `self`, `address`. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale).

`socket.sendmsg(buffers[, ancdata[, flags[, address]]])`

Send normal and ancillary data to the socket, gathering the non-ancillary data from a series of buffers and concatenating it into a single message. The *buffers* argument specifies the non-ancillary data as an iterable of [bytes-like objects](../glossary#term-bytes-like-object) (e.g. [`bytes`](stdtypes#bytes "bytes") objects); the operating system may set a limit ([`sysconf()`](os#os.sysconf "os.sysconf") value `SC_IOV_MAX`) on the number of buffers that can be used. The *ancdata* argument specifies the ancillary data (control messages) as an iterable of zero or more tuples `(cmsg_level, cmsg_type, cmsg_data)`, where *cmsg\_level* and *cmsg\_type* are integers specifying the protocol level and protocol-specific type respectively, and *cmsg\_data* is a bytes-like object holding the associated data. Note that some systems (in particular, systems without [`CMSG_SPACE()`](#socket.CMSG_SPACE "socket.CMSG_SPACE")) might support sending only one control message per call. The *flags* argument defaults to 0 and has the same meaning as for [`send()`](#socket.socket.send "socket.socket.send"). If *address* is supplied and not `None`, it sets a destination address for the message. The return value is the number of bytes of non-ancillary data sent. The following function sends the list of file descriptors *fds* over an [`AF_UNIX`](#socket.AF_UNIX "socket.AF_UNIX") socket, on systems which support the `SCM_RIGHTS` mechanism. See also [`recvmsg()`](#socket.socket.recvmsg "socket.socket.recvmsg").

```
import socket, array

def send_fds(sock, msg, fds):
    return sock.sendmsg([msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", fds))])
```

[Availability](https://docs.python.org/3.9/library/intro.html#availability): most Unix platforms, possibly others. Raises an [auditing event](sys#auditing) `socket.sendmsg` with arguments `self`, `address`. New in version 3.3. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale).
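`sendmsg()` can also be used without ancillary data, purely as a gathered (vectored) send. A minimal sketch over a socket pair, assuming a platform that provides `sendmsg()` (most Unix systems):

```
import socket

# A minimal sketch: gather two buffers into a single message; no
# ancillary data and no flags are passed.
s1, s2 = socket.socketpair()
nbytes = s1.sendmsg([b'Hello, ', b'world'])
print(nbytes)         # 12
print(s2.recv(1024))  # b'Hello, world'
s1.close()
s2.close()
```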
`socket.sendmsg_afalg([msg, ]*, op[, iv[, assoclen[, flags]]])`

Specialized version of [`sendmsg()`](#socket.socket.sendmsg "socket.socket.sendmsg") for [`AF_ALG`](#socket.AF_ALG "socket.AF_ALG") sockets. Set mode, IV, AEAD associated data length and flags for the [`AF_ALG`](#socket.AF_ALG "socket.AF_ALG") socket. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux >= 2.6.38. New in version 3.6.

`socket.sendfile(file, offset=0, count=None)`

Send a file until EOF is reached by using high-performance [`os.sendfile`](os#os.sendfile "os.sendfile") and return the total number of bytes which were sent. *file* must be a regular file object opened in binary mode. If [`os.sendfile`](os#os.sendfile "os.sendfile") is not available (e.g. Windows) or *file* is not a regular file, [`send()`](#socket.socket.send "socket.socket.send") will be used instead. *offset* tells from where to start reading the file. If specified, *count* is the total number of bytes to transmit as opposed to sending the file until EOF is reached. The file position is updated on return, and also in case of error, in which case [`file.tell()`](io#io.IOBase.tell "io.IOBase.tell") can be used to figure out the number of bytes which were sent. The socket must be of [`SOCK_STREAM`](#socket.SOCK_STREAM "socket.SOCK_STREAM") type. Non-blocking sockets are not supported. New in version 3.5.

`socket.set_inheritable(inheritable)`

Set the [inheritable flag](os#fd-inheritance) of the socket’s file descriptor or socket’s handle. New in version 3.4.

`socket.setblocking(flag)`

Set blocking or non-blocking mode of the socket: if *flag* is false, the socket is set to non-blocking, else to blocking mode. This method is a shorthand for certain [`settimeout()`](#socket.socket.settimeout "socket.socket.settimeout") calls:

* `sock.setblocking(True)` is equivalent to `sock.settimeout(None)`
* `sock.setblocking(False)` is equivalent to `sock.settimeout(0.0)`

Changed in version 3.7: The method no longer applies [`SOCK_NONBLOCK`](#socket.SOCK_NONBLOCK "socket.SOCK_NONBLOCK") flag on [`socket.type`](#socket.socket.type "socket.socket.type").

`socket.settimeout(value)`

Set a timeout on blocking socket operations. The *value* argument can be a nonnegative floating point number expressing seconds, or `None`. If a non-zero value is given, subsequent socket operations will raise a [`timeout`](#socket.timeout "socket.timeout") exception if the timeout period *value* has elapsed before the operation has completed. If zero is given, the socket is put in non-blocking mode. If `None` is given, the socket is put in blocking mode. For further information, please consult the [notes on socket timeouts](#socket-timeouts). Changed in version 3.7: The method no longer toggles [`SOCK_NONBLOCK`](#socket.SOCK_NONBLOCK "socket.SOCK_NONBLOCK") flag on [`socket.type`](#socket.socket.type "socket.socket.type").

`socket.setsockopt(level, optname, value: int)`

`socket.setsockopt(level, optname, value: buffer)`

`socket.setsockopt(level, optname, None, optlen: int)`

Set the value of the given socket option (see the Unix manual page *[setsockopt(2)](https://manpages.debian.org/setsockopt(2))*). The needed symbolic constants are defined in the [`socket`](#module-socket "socket: Low-level networking interface.") module (`SO_*` etc.). The value can be an integer, `None` or a [bytes-like object](../glossary#term-bytes-like-object) representing a buffer.
In the latter case, it is up to the caller to ensure that the bytestring contains the proper bits (see the optional built-in module [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") for a way to encode C structures as bytestrings). When *value* is set to `None`, the *optlen* argument is required. It is equivalent to calling the `setsockopt()` C function with `optval=NULL` and `optlen=optlen`. Changed in version 3.5: Writable [bytes-like object](../glossary#term-bytes-like-object) is now accepted. Changed in version 3.6: The setsockopt(level, optname, None, optlen: int) form was added.

`socket.shutdown(how)`

Shut down one or both halves of the connection. If *how* is `SHUT_RD`, further receives are disallowed. If *how* is `SHUT_WR`, further sends are disallowed. If *how* is `SHUT_RDWR`, further sends and receives are disallowed.

`socket.share(process_id)`

Duplicate a socket and prepare it for sharing with a target process. The target process must be provided with *process\_id*. The resulting bytes object can then be passed to the target process using some form of interprocess communication and the socket can be recreated there using [`fromshare()`](#socket.fromshare "socket.fromshare"). Once this method has been called, it is safe to close the socket since the operating system has already duplicated it for the target process. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows. New in version 3.3.

Note that there are no methods `read()` or `write()`; use [`recv()`](#socket.socket.recv "socket.socket.recv") and [`send()`](#socket.socket.send "socket.socket.send") without the *flags* argument instead. Socket objects also have these (read-only) attributes that correspond to the values given to the [`socket`](#socket.socket "socket.socket") constructor.

`socket.family`

The socket family.

`socket.type`

The socket type.

`socket.proto`

The socket protocol.

Notes on socket timeouts
------------------------

A socket object can be in one of three modes: blocking, non-blocking, or timeout. Sockets are by default always created in blocking mode, but this can be changed by calling [`setdefaulttimeout()`](#socket.setdefaulttimeout "socket.setdefaulttimeout").

* In *blocking mode*, operations block until complete or the system returns an error (such as connection timed out).
* In *non-blocking mode*, operations fail (with an error that is unfortunately system-dependent) if they cannot be completed immediately: functions from the [`select`](select#module-select "select: Wait for I/O completion on multiple streams.") module can be used to know when and whether a socket is available for reading or writing.
* In *timeout mode*, operations fail if they cannot be completed within the timeout specified for the socket (they raise a [`timeout`](#socket.timeout "socket.timeout") exception) or if the system returns an error.

Note At the operating system level, sockets in *timeout mode* are internally set in non-blocking mode. Also, the blocking and timeout modes are shared between file descriptors and socket objects that refer to the same network endpoint. This implementation detail can have visible consequences if e.g. you decide to use the [`fileno()`](#socket.socket.fileno "socket.socket.fileno") of a socket.
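The three modes map directly onto `settimeout()` values; a minimal sketch:

```
import socket

# A minimal sketch of the three modes and their settimeout() values.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s.settimeout(None)     # blocking mode (the default)
print(s.gettimeout())  # None

s.settimeout(0.0)      # non-blocking mode
print(s.gettimeout())  # 0.0

s.settimeout(5.0)      # timeout mode: blocking operations raise
print(s.gettimeout())  # socket.timeout after 5.0 seconds

s.close()
```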
### Timeouts and the `connect` method

The [`connect()`](#socket.socket.connect "socket.socket.connect") operation is also subject to the timeout setting, and in general it is recommended to call [`settimeout()`](#socket.socket.settimeout "socket.socket.settimeout") before calling [`connect()`](#socket.socket.connect "socket.socket.connect") or pass a timeout parameter to [`create_connection()`](#socket.create_connection "socket.create_connection"). However, the system network stack may also return a connection timeout error of its own regardless of any Python socket timeout setting.

### Timeouts and the `accept` method

If [`getdefaulttimeout()`](#socket.getdefaulttimeout "socket.getdefaulttimeout") is not [`None`](constants#None "None"), sockets returned by the [`accept()`](#socket.socket.accept "socket.socket.accept") method inherit that timeout. Otherwise, the behaviour depends on settings of the listening socket:

* if the listening socket is in *blocking mode* or in *timeout mode*, the socket returned by [`accept()`](#socket.socket.accept "socket.socket.accept") is in *blocking mode*;
* if the listening socket is in *non-blocking mode*, whether the socket returned by [`accept()`](#socket.socket.accept "socket.socket.accept") is in blocking or non-blocking mode is operating system-dependent. If you want to ensure cross-platform behaviour, it is recommended you manually override this setting.

Example
-------

Here are four minimal example programs using the TCP/IP protocol: a server that echoes all data that it receives back (servicing only one client), and a client using it. Note that a server must perform the sequence `socket()`, [`bind()`](#socket.socket.bind "socket.socket.bind"), [`listen()`](#socket.socket.listen "socket.socket.listen"), [`accept()`](#socket.socket.accept "socket.socket.accept") (possibly repeating the [`accept()`](#socket.socket.accept "socket.socket.accept") to service more than one client), while a client only needs the sequence `socket()`, [`connect()`](#socket.socket.connect "socket.socket.connect"). Also note that the server does not [`sendall()`](#socket.socket.sendall "socket.socket.sendall")/[`recv()`](#socket.socket.recv "socket.socket.recv") on the socket it is listening on but on the new socket returned by [`accept()`](#socket.socket.accept "socket.socket.accept"). The first two examples support IPv4 only.

```
# Echo server program
import socket

HOST = ''                 # Symbolic name meaning all available interfaces
PORT = 50007              # Arbitrary non-privileged port
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen(1)
    conn, addr = s.accept()
    with conn:
        print('Connected by', addr)
        while True:
            data = conn.recv(1024)
            if not data: break
            conn.sendall(data)
```

```
# Echo client program
import socket

HOST = 'daring.cwi.nl'    # The remote host
PORT = 50007              # The same port as used by the server
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, PORT))
    s.sendall(b'Hello, world')
    data = s.recv(1024)
print('Received', repr(data))
```

The next two examples are identical to the above two, but support both IPv4 and IPv6. The server side will listen to the first address family available (it should listen to both instead). On most IPv6-ready systems, IPv6 will take precedence and the server may not accept IPv4 traffic. The client side will try to connect to all addresses returned as a result of the name resolution, and send traffic to the first one connected successfully.
```
# Echo server program
import socket
import sys

HOST = None               # Symbolic name meaning all available interfaces
PORT = 50007              # Arbitrary non-privileged port
s = None
for res in socket.getaddrinfo(HOST, PORT, socket.AF_UNSPEC,
                              socket.SOCK_STREAM, 0, socket.AI_PASSIVE):
    af, socktype, proto, canonname, sa = res
    try:
        s = socket.socket(af, socktype, proto)
    except OSError as msg:
        s = None
        continue
    try:
        s.bind(sa)
        s.listen(1)
    except OSError as msg:
        s.close()
        s = None
        continue
    break
if s is None:
    print('could not open socket')
    sys.exit(1)
conn, addr = s.accept()
with conn:
    print('Connected by', addr)
    while True:
        data = conn.recv(1024)
        if not data: break
        conn.send(data)
```

```
# Echo client program
import socket
import sys

HOST = 'daring.cwi.nl'    # The remote host
PORT = 50007              # The same port as used by the server
s = None
for res in socket.getaddrinfo(HOST, PORT, socket.AF_UNSPEC, socket.SOCK_STREAM):
    af, socktype, proto, canonname, sa = res
    try:
        s = socket.socket(af, socktype, proto)
    except OSError as msg:
        s = None
        continue
    try:
        s.connect(sa)
    except OSError as msg:
        s.close()
        s = None
        continue
    break
if s is None:
    print('could not open socket')
    sys.exit(1)
with s:
    s.sendall(b'Hello, world')
    data = s.recv(1024)
print('Received', repr(data))
```

The next example shows how to write a very simple network sniffer with raw sockets on Windows. The example requires administrator privileges to modify the interface:

```
import socket

# the public network interface
HOST = socket.gethostbyname(socket.gethostname())

# create a raw socket and bind it to the public interface
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_IP)
s.bind((HOST, 0))

# Include IP headers
s.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

# receive all packets
s.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)

# receive a packet
print(s.recvfrom(65565))

# disable promiscuous mode
s.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF)
```

The next example shows how to use the socket interface to communicate to a CAN network using the raw socket protocol. To use CAN with the broadcast manager protocol instead, open a socket with:

```
socket.socket(socket.AF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM)
```

After binding (`CAN_RAW`) or connecting ([`CAN_BCM`](#socket.CAN_BCM "socket.CAN_BCM")) the socket, you can use the [`socket.send()`](#socket.socket.send "socket.socket.send") and [`socket.recv()`](#socket.socket.recv "socket.socket.recv") operations (and their counterparts) on the socket object as usual.
This last example might require special privileges:

```
import socket
import struct

# CAN frame packing/unpacking (see 'struct can_frame' in <linux/can.h>)

can_frame_fmt = "=IB3x8s"
can_frame_size = struct.calcsize(can_frame_fmt)

def build_can_frame(can_id, data):
    can_dlc = len(data)
    data = data.ljust(8, b'\x00')
    return struct.pack(can_frame_fmt, can_id, can_dlc, data)

def dissect_can_frame(frame):
    can_id, can_dlc, data = struct.unpack(can_frame_fmt, frame)
    return (can_id, can_dlc, data[:can_dlc])

# create a raw socket and bind it to the 'vcan0' interface
s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
s.bind(('vcan0',))

while True:
    cf, addr = s.recvfrom(can_frame_size)

    print('Received: can_id=%x, can_dlc=%x, data=%s' % dissect_can_frame(cf))

    try:
        s.send(cf)
    except OSError:
        print('Error sending CAN frame')

    try:
        s.send(build_can_frame(0x01, b'\x01\x02\x03'))
    except OSError:
        print('Error sending CAN frame')
```

Running an example several times with too small a delay between executions could lead to this error:

```
OSError: [Errno 98] Address already in use
```

This is because the previous execution has left the socket in a `TIME_WAIT` state, so it can’t be immediately reused. The [`socket`](#module-socket "socket: Low-level networking interface.") module provides a flag to prevent this, `socket.SO_REUSEADDR`:

```
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((HOST, PORT))
```

The `SO_REUSEADDR` flag tells the kernel to reuse a local socket in `TIME_WAIT` state, without waiting for its natural timeout to expire.

See also For an introduction to socket programming (in C), see the following papers:

* *An Introductory 4.3BSD Interprocess Communication Tutorial*, by Stuart Sechrest
* *An Advanced 4.3BSD Interprocess Communication Tutorial*, by Samuel J. Leffler et al,

both in the UNIX Programmer’s Manual, Supplementary Documents 1 (sections PS1:7 and PS1:8). The platform-specific reference material for the various socket-related system calls is also a valuable source of information on the details of socket semantics. For Unix, refer to the manual pages; for Windows, see the WinSock (or Winsock 2) specification. For IPv6-ready APIs, readers may want to refer to [**RFC 3493**](https://tools.ietf.org/html/rfc3493.html) titled Basic Socket Interface Extensions for IPv6.
python pathlib — Object-oriented filesystem paths

pathlib — Object-oriented filesystem paths
==========================================

New in version 3.4. **Source code:** [Lib/pathlib.py](https://github.com/python/cpython/tree/3.9/Lib/pathlib.py)

This module offers classes representing filesystem paths with semantics appropriate for different operating systems. Path classes are divided between [pure paths](#pure-paths), which provide purely computational operations without I/O, and [concrete paths](#concrete-paths), which inherit from pure paths but also provide I/O operations. If you’ve never used this module before or just aren’t sure which class is right for your task, [`Path`](#pathlib.Path "pathlib.Path") is most likely what you need. It instantiates a [concrete path](#concrete-paths) for the platform the code is running on. Pure paths are useful in some special cases; for example:

1. If you want to manipulate Windows paths on a Unix machine (or vice versa). You cannot instantiate a [`WindowsPath`](#pathlib.WindowsPath "pathlib.WindowsPath") when running on Unix, but you can instantiate [`PureWindowsPath`](#pathlib.PureWindowsPath "pathlib.PureWindowsPath").
2. You want to make sure that your code only manipulates paths without actually accessing the OS. In this case, instantiating one of the pure classes may be useful since those simply don’t have any OS-accessing operations.

See also [**PEP 428**](https://www.python.org/dev/peps/pep-0428): The pathlib module – object-oriented filesystem paths.

See also For low-level path manipulation on strings, you can also use the [`os.path`](os.path#module-os.path "os.path: Operations on pathnames.") module.

Basic use
---------

Importing the main class:

```
>>> from pathlib import Path
```

Listing subdirectories:

```
>>> p = Path('.')
>>> [x for x in p.iterdir() if x.is_dir()]
[PosixPath('.hg'), PosixPath('docs'), PosixPath('dist'), PosixPath('__pycache__'), PosixPath('build')]
```

Listing Python source files in this directory tree:

```
>>> list(p.glob('**/*.py'))
[PosixPath('test_pathlib.py'), PosixPath('setup.py'), PosixPath('pathlib.py'), PosixPath('docs/conf.py'), PosixPath('build/lib/pathlib.py')]
```

Navigating inside a directory tree:

```
>>> p = Path('/etc')
>>> q = p / 'init.d' / 'reboot'
>>> q
PosixPath('/etc/init.d/reboot')
>>> q.resolve()
PosixPath('/etc/rc.d/init.d/halt')
```

Querying path properties:

```
>>> q.exists()
True
>>> q.is_dir()
False
```

Opening a file:

```
>>> with q.open() as f: f.readline()
...
'#!/bin/bash\n'
```

Pure paths
----------

Pure path objects provide path-handling operations which don’t actually access a filesystem.
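For instance, Windows path arithmetic works from any platform, since no filesystem access is involved; a short sketch (the path below is made up):

```
>>> from pathlib import PureWindowsPath
>>> p = PureWindowsPath('C:/Users/guest') / 'report.txt'
>>> p
PureWindowsPath('C:/Users/guest/report.txt')
>>> p.suffix
'.txt'
```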
There are three ways to access these classes, which we also call *flavours*:

`class pathlib.PurePath(*pathsegments)`

A generic class that represents the system’s path flavour (instantiating it creates either a [`PurePosixPath`](#pathlib.PurePosixPath "pathlib.PurePosixPath") or a [`PureWindowsPath`](#pathlib.PureWindowsPath "pathlib.PureWindowsPath")):

```
>>> PurePath('setup.py')      # Running on a Unix machine
PurePosixPath('setup.py')
```

Each element of *pathsegments* can be either a string representing a path segment, an object implementing the [`os.PathLike`](os#os.PathLike "os.PathLike") interface which returns a string, or another path object:

```
>>> PurePath('foo', 'some/path', 'bar')
PurePosixPath('foo/some/path/bar')
>>> PurePath(Path('foo'), Path('bar'))
PurePosixPath('foo/bar')
```

When *pathsegments* is empty, the current directory is assumed:

```
>>> PurePath()
PurePosixPath('.')
```

When several absolute paths are given, the last is taken as an anchor (mimicking [`os.path.join()`](os.path#os.path.join "os.path.join")’s behaviour):

```
>>> PurePath('/etc', '/usr', 'lib64')
PurePosixPath('/usr/lib64')
>>> PureWindowsPath('c:/Windows', 'd:bar')
PureWindowsPath('d:bar')
```

However, in a Windows path, changing the local root doesn’t discard the previous drive setting:

```
>>> PureWindowsPath('c:/Windows', '/Program Files')
PureWindowsPath('c:/Program Files')
```

Spurious slashes and single dots are collapsed, but double dots (`'..'`) are not, since this would change the meaning of a path in the face of symbolic links:

```
>>> PurePath('foo//bar')
PurePosixPath('foo/bar')
>>> PurePath('foo/./bar')
PurePosixPath('foo/bar')
>>> PurePath('foo/../bar')
PurePosixPath('foo/../bar')
```

(a naïve approach would make `PurePosixPath('foo/../bar')` equivalent to `PurePosixPath('bar')`, which is wrong if `foo` is a symbolic link to another directory)

Pure path objects implement the [`os.PathLike`](os#os.PathLike "os.PathLike") interface, allowing them to be used anywhere the interface is accepted. Changed in version 3.6: Added support for the [`os.PathLike`](os#os.PathLike "os.PathLike") interface.

`class pathlib.PurePosixPath(*pathsegments)`

A subclass of [`PurePath`](#pathlib.PurePath "pathlib.PurePath"), this path flavour represents non-Windows filesystem paths:

```
>>> PurePosixPath('/etc')
PurePosixPath('/etc')
```

*pathsegments* is specified similarly to [`PurePath`](#pathlib.PurePath "pathlib.PurePath").

`class pathlib.PureWindowsPath(*pathsegments)`

A subclass of [`PurePath`](#pathlib.PurePath "pathlib.PurePath"), this path flavour represents Windows filesystem paths:

```
>>> PureWindowsPath('c:/Program Files/')
PureWindowsPath('c:/Program Files')
```

*pathsegments* is specified similarly to [`PurePath`](#pathlib.PurePath "pathlib.PurePath").

Regardless of the system you’re running on, you can instantiate all of these classes, since they don’t provide any operation that does system calls.

### General properties

Paths are immutable and hashable. Paths of the same flavour are comparable and orderable.
These properties respect the flavour’s case-folding semantics:

```
>>> PurePosixPath('foo') == PurePosixPath('FOO')
False
>>> PureWindowsPath('foo') == PureWindowsPath('FOO')
True
>>> PureWindowsPath('FOO') in { PureWindowsPath('foo') }
True
>>> PureWindowsPath('C:') < PureWindowsPath('d:')
True
```

Paths of a different flavour compare unequal and cannot be ordered:

```
>>> PureWindowsPath('foo') == PurePosixPath('foo')
False
>>> PureWindowsPath('foo') < PurePosixPath('foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'PureWindowsPath' and 'PurePosixPath'
```

### Operators

The slash operator helps create child paths, similarly to [`os.path.join()`](os.path#os.path.join "os.path.join"):

```
>>> p = PurePath('/etc')
>>> p
PurePosixPath('/etc')
>>> p / 'init.d' / 'apache2'
PurePosixPath('/etc/init.d/apache2')
>>> q = PurePath('bin')
>>> '/usr' / q
PurePosixPath('/usr/bin')
```

A path object can be used anywhere an object implementing [`os.PathLike`](os#os.PathLike "os.PathLike") is accepted:

```
>>> import os
>>> p = PurePath('/etc')
>>> os.fspath(p)
'/etc'
```

The string representation of a path is the raw filesystem path itself (in native form, e.g. with backslashes under Windows), which you can pass to any function taking a file path as a string:

```
>>> p = PurePath('/etc')
>>> str(p)
'/etc'
>>> p = PureWindowsPath('c:/Program Files')
>>> str(p)
'c:\\Program Files'
```

Similarly, calling [`bytes`](stdtypes#bytes "bytes") on a path gives the raw filesystem path as a bytes object, as encoded by [`os.fsencode()`](os#os.fsencode "os.fsencode"):

```
>>> bytes(p)
b'/etc'
```

Note Calling [`bytes`](stdtypes#bytes "bytes") is only recommended under Unix. Under Windows, the unicode form is the canonical representation of filesystem paths.
### Accessing individual parts

To access the individual “parts” (components) of a path, use the following property:

`PurePath.parts`

A tuple giving access to the path’s various components:

```
>>> p = PurePath('/usr/bin/python3')
>>> p.parts
('/', 'usr', 'bin', 'python3')
>>> p = PureWindowsPath('c:/Program Files/PSF')
>>> p.parts
('c:\\', 'Program Files', 'PSF')
```

(note how the drive and local root are regrouped in a single part)

### Methods and properties

Pure paths provide the following methods and properties:

`PurePath.drive`

A string representing the drive letter or name, if any:

```
>>> PureWindowsPath('c:/Program Files/').drive
'c:'
>>> PureWindowsPath('/Program Files/').drive
''
>>> PurePosixPath('/etc').drive
''
```

UNC shares are also considered drives:

```
>>> PureWindowsPath('//host/share/foo.txt').drive
'\\\\host\\share'
```

`PurePath.root`

A string representing the (local or global) root, if any:

```
>>> PureWindowsPath('c:/Program Files/').root
'\\'
>>> PureWindowsPath('c:Program Files/').root
''
>>> PurePosixPath('/etc').root
'/'
```

UNC shares always have a root:

```
>>> PureWindowsPath('//host/share').root
'\\'
```

`PurePath.anchor`

The concatenation of the drive and root:

```
>>> PureWindowsPath('c:/Program Files/').anchor
'c:\\'
>>> PureWindowsPath('c:Program Files/').anchor
'c:'
>>> PurePosixPath('/etc').anchor
'/'
>>> PureWindowsPath('//host/share').anchor
'\\\\host\\share\\'
```

`PurePath.parents`

An immutable sequence providing access to the logical ancestors of the path:

```
>>> p = PureWindowsPath('c:/foo/bar/setup.py')
>>> p.parents[0]
PureWindowsPath('c:/foo/bar')
>>> p.parents[1]
PureWindowsPath('c:/foo')
>>> p.parents[2]
PureWindowsPath('c:/')
```

`PurePath.parent`

The logical parent of the path:

```
>>> p = PurePosixPath('/a/b/c/d')
>>> p.parent
PurePosixPath('/a/b/c')
```

You cannot go past an anchor or an empty path:

```
>>> p = PurePosixPath('/')
>>> p.parent
PurePosixPath('/')
>>> p = PurePosixPath('.')
>>> p.parent
PurePosixPath('.')
```

Note This is a purely lexical operation, hence the following behaviour:

```
>>> p = PurePosixPath('foo/..')
>>> p.parent
PurePosixPath('foo')
```

If you want to walk an arbitrary filesystem path upwards, it is recommended to first call [`Path.resolve()`](#pathlib.Path.resolve "pathlib.Path.resolve") so as to resolve symlinks and eliminate `".."` components.
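A sketch of that recommendation (the path here is hypothetical, and the output depends on your filesystem):

```
from pathlib import Path

# Resolve first, so symlinks and ".." components are eliminated
# before walking upwards through the logical ancestors.
p = Path('docs/../docs/index.rst').resolve()
for ancestor in p.parents:
    print(ancestor)
```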
`PurePath.name`

A string representing the final path component, excluding the drive and root, if any:

```
>>> PurePosixPath('my/library/setup.py').name
'setup.py'
```

UNC drive names are not considered:

```
>>> PureWindowsPath('//some/share/setup.py').name
'setup.py'
>>> PureWindowsPath('//some/share').name
''
```

`PurePath.suffix`

The file extension of the final component, if any:

```
>>> PurePosixPath('my/library/setup.py').suffix
'.py'
>>> PurePosixPath('my/library.tar.gz').suffix
'.gz'
>>> PurePosixPath('my/library').suffix
''
```

`PurePath.suffixes`

A list of the path’s file extensions:

```
>>> PurePosixPath('my/library.tar.gar').suffixes
['.tar', '.gar']
>>> PurePosixPath('my/library.tar.gz').suffixes
['.tar', '.gz']
>>> PurePosixPath('my/library').suffixes
[]
```

`PurePath.stem`

The final path component, without its suffix:

```
>>> PurePosixPath('my/library.tar.gz').stem
'library.tar'
>>> PurePosixPath('my/library.tar').stem
'library'
>>> PurePosixPath('my/library').stem
'library'
```

`PurePath.as_posix()`

Return a string representation of the path with forward slashes (`/`):

```
>>> p = PureWindowsPath('c:\\windows')
>>> str(p)
'c:\\windows'
>>> p.as_posix()
'c:/windows'
```

`PurePath.as_uri()`

Represent the path as a `file` URI. [`ValueError`](exceptions#ValueError "ValueError") is raised if the path isn’t absolute.

```
>>> p = PurePosixPath('/etc/passwd')
>>> p.as_uri()
'file:///etc/passwd'
>>> p = PureWindowsPath('c:/Windows')
>>> p.as_uri()
'file:///c:/Windows'
```

`PurePath.is_absolute()`

Return whether the path is absolute or not. A path is considered absolute if it has both a root and (if the flavour allows) a drive:

```
>>> PurePosixPath('/a/b').is_absolute()
True
>>> PurePosixPath('a/b').is_absolute()
False
>>> PureWindowsPath('c:/a/b').is_absolute()
True
>>> PureWindowsPath('/a/b').is_absolute()
False
>>> PureWindowsPath('c:').is_absolute()
False
>>> PureWindowsPath('//some/share').is_absolute()
True
```

`PurePath.is_relative_to(*other)`

Return whether or not this path is relative to the *other* path.

```
>>> p = PurePath('/etc/passwd')
>>> p.is_relative_to('/etc')
True
>>> p.is_relative_to('/usr')
False
```

New in version 3.9.

`PurePath.is_reserved()`

With [`PureWindowsPath`](#pathlib.PureWindowsPath "pathlib.PureWindowsPath"), return `True` if the path is considered reserved under Windows, `False` otherwise. With [`PurePosixPath`](#pathlib.PurePosixPath "pathlib.PurePosixPath"), `False` is always returned.

```
>>> PureWindowsPath('nul').is_reserved()
True
>>> PurePosixPath('nul').is_reserved()
False
```

File system calls on reserved paths can fail mysteriously or have unintended effects.

`PurePath.joinpath(*other)`

Calling this method is equivalent to combining the path with each of the *other* arguments in turn:

```
>>> PurePosixPath('/etc').joinpath('passwd')
PurePosixPath('/etc/passwd')
>>> PurePosixPath('/etc').joinpath(PurePosixPath('passwd'))
PurePosixPath('/etc/passwd')
>>> PurePosixPath('/etc').joinpath('init.d', 'apache2')
PurePosixPath('/etc/init.d/apache2')
>>> PureWindowsPath('c:').joinpath('/Program Files')
PureWindowsPath('c:/Program Files')
```

`PurePath.match(pattern)`

Match this path against the provided glob-style pattern. Return `True` if matching is successful, `False` otherwise.
If *pattern* is relative, the path can be either relative or absolute, and matching is done from the right:

```
>>> PurePath('a/b.py').match('*.py')
True
>>> PurePath('/a/b/c.py').match('b/*.py')
True
>>> PurePath('/a/b/c.py').match('a/*.py')
False
```

If *pattern* is absolute, the path must be absolute, and the whole path must match:

```
>>> PurePath('/a.py').match('/*.py')
True
>>> PurePath('a/b.py').match('/*.py')
False
```

As with other methods, case-sensitivity follows platform defaults:

```
>>> PurePosixPath('b.py').match('*.PY')
False
>>> PureWindowsPath('b.py').match('*.PY')
True
```

`PurePath.relative_to(*other)`

Compute a version of this path relative to the path represented by *other*. If it’s impossible, ValueError is raised:

```
>>> p = PurePosixPath('/etc/passwd')
>>> p.relative_to('/')
PurePosixPath('etc/passwd')
>>> p.relative_to('/etc')
PurePosixPath('passwd')
>>> p.relative_to('/usr')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pathlib.py", line 694, in relative_to
    .format(str(self), str(formatted)))
ValueError: '/etc/passwd' is not in the subpath of '/usr' OR one path is relative and the other absolute.
```

NOTE: This function is part of [`PurePath`](#pathlib.PurePath "pathlib.PurePath") and works with strings. It does not check or access the underlying file structure.

`PurePath.with_name(name)`

Return a new path with the [`name`](#pathlib.PurePath.name "pathlib.PurePath.name") changed. If the original path doesn’t have a name, ValueError is raised:

```
>>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.with_name('setup.py')
PureWindowsPath('c:/Downloads/setup.py')
>>> p = PureWindowsPath('c:/')
>>> p.with_name('setup.py')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/antoine/cpython/default/Lib/pathlib.py", line 751, in with_name
    raise ValueError("%r has an empty name" % (self,))
ValueError: PureWindowsPath('c:/') has an empty name
```

`PurePath.with_stem(stem)`

Return a new path with the [`stem`](#pathlib.PurePath.stem "pathlib.PurePath.stem") changed. If the original path doesn’t have a name, ValueError is raised:

```
>>> p = PureWindowsPath('c:/Downloads/draft.txt')
>>> p.with_stem('final')
PureWindowsPath('c:/Downloads/final.txt')
>>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.with_stem('lib')
PureWindowsPath('c:/Downloads/lib.gz')
>>> p = PureWindowsPath('c:/')
>>> p.with_stem('')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/antoine/cpython/default/Lib/pathlib.py", line 861, in with_stem
    return self.with_name(stem + self.suffix)
  File "/home/antoine/cpython/default/Lib/pathlib.py", line 851, in with_name
    raise ValueError("%r has an empty name" % (self,))
ValueError: PureWindowsPath('c:/') has an empty name
```

New in version 3.9.

`PurePath.with_suffix(suffix)`

Return a new path with the [`suffix`](#pathlib.PurePath.suffix "pathlib.PurePath.suffix") changed. If the original path doesn’t have a suffix, the new *suffix* is appended instead. If the *suffix* is an empty string, the original suffix is removed:

```
>>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.with_suffix('.bz2')
PureWindowsPath('c:/Downloads/pathlib.tar.bz2')
>>> p = PureWindowsPath('README')
>>> p.with_suffix('.txt')
PureWindowsPath('README.txt')
>>> p = PureWindowsPath('README.txt')
>>> p.with_suffix('')
PureWindowsPath('README')
```

Concrete paths
--------------

Concrete paths are subclasses of the pure path classes.
In addition to operations provided by the latter, they also provide methods to do system calls on path objects. There are three ways to instantiate concrete paths:

`class pathlib.Path(*pathsegments)`

A subclass of [`PurePath`](#pathlib.PurePath "pathlib.PurePath"), this class represents concrete paths of the system’s path flavour (instantiating it creates either a [`PosixPath`](#pathlib.PosixPath "pathlib.PosixPath") or a [`WindowsPath`](#pathlib.WindowsPath "pathlib.WindowsPath")):

```
>>> Path('setup.py')
PosixPath('setup.py')
```

*pathsegments* is specified similarly to [`PurePath`](#pathlib.PurePath "pathlib.PurePath").

`class pathlib.PosixPath(*pathsegments)`

A subclass of [`Path`](#pathlib.Path "pathlib.Path") and [`PurePosixPath`](#pathlib.PurePosixPath "pathlib.PurePosixPath"), this class represents concrete non-Windows filesystem paths:

```
>>> PosixPath('/etc')
PosixPath('/etc')
```

*pathsegments* is specified similarly to [`PurePath`](#pathlib.PurePath "pathlib.PurePath").

`class pathlib.WindowsPath(*pathsegments)`

A subclass of [`Path`](#pathlib.Path "pathlib.Path") and [`PureWindowsPath`](#pathlib.PureWindowsPath "pathlib.PureWindowsPath"), this class represents concrete Windows filesystem paths:

```
>>> WindowsPath('c:/Program Files/')
WindowsPath('c:/Program Files')
```

*pathsegments* is specified similarly to [`PurePath`](#pathlib.PurePath "pathlib.PurePath").

You can only instantiate the class flavour that corresponds to your system (allowing system calls on non-compatible path flavours could lead to bugs or failures in your application):

```
>>> import os
>>> os.name
'posix'
>>> Path('setup.py')
PosixPath('setup.py')
>>> PosixPath('setup.py')
PosixPath('setup.py')
>>> WindowsPath('setup.py')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pathlib.py", line 798, in __new__
    % (cls.__name__,))
NotImplementedError: cannot instantiate 'WindowsPath' on your system
```

### Methods

Concrete paths provide the following methods in addition to pure paths methods. Many of these methods can raise an [`OSError`](exceptions#OSError "OSError") if a system call fails (for example because the path doesn’t exist). Changed in version 3.8: [`exists()`](#pathlib.Path.exists "pathlib.Path.exists"), [`is_dir()`](#pathlib.Path.is_dir "pathlib.Path.is_dir"), [`is_file()`](#pathlib.Path.is_file "pathlib.Path.is_file"), [`is_mount()`](#pathlib.Path.is_mount "pathlib.Path.is_mount"), [`is_symlink()`](#pathlib.Path.is_symlink "pathlib.Path.is_symlink"), [`is_block_device()`](#pathlib.Path.is_block_device "pathlib.Path.is_block_device"), [`is_char_device()`](#pathlib.Path.is_char_device "pathlib.Path.is_char_device"), [`is_fifo()`](#pathlib.Path.is_fifo "pathlib.Path.is_fifo"), [`is_socket()`](#pathlib.Path.is_socket "pathlib.Path.is_socket") now return `False` instead of raising an exception for paths that contain characters unrepresentable at the OS level.
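Since any of these methods may raise [`OSError`](exceptions#OSError "OSError"), callers typically guard the calls where failure is expected; a minimal sketch (the path is hypothetical and presumably missing):

```
from pathlib import Path

# Concrete-path methods perform real system calls, so guard the ones
# that may legitimately fail, e.g. stat() on a missing path.
p = Path('/no/such/file')
try:
    print(p.stat().st_size)
except OSError as exc:
    print('stat() failed:', exc)
```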
`classmethod Path.cwd()`

Return a new path object representing the current directory (as returned by [`os.getcwd()`](os#os.getcwd "os.getcwd")):

```
>>> Path.cwd()
PosixPath('/home/antoine/pathlib')
```

`classmethod Path.home()`

Return a new path object representing the user’s home directory (as returned by [`os.path.expanduser()`](os.path#os.path.expanduser "os.path.expanduser") with `~` construct):

```
>>> Path.home()
PosixPath('/home/antoine')
```

Note that unlike [`os.path.expanduser()`](os.path#os.path.expanduser "os.path.expanduser"), on POSIX systems a [`KeyError`](exceptions#KeyError "KeyError") or [`RuntimeError`](exceptions#RuntimeError "RuntimeError") will be raised, and on Windows systems a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") will be raised if the home directory can’t be resolved. New in version 3.5.

`Path.stat()`

Return a [`os.stat_result`](os#os.stat_result "os.stat_result") object containing information about this path, like [`os.stat()`](os#os.stat "os.stat"). The result is looked up at each call to this method.

```
>>> p = Path('setup.py')
>>> p.stat().st_size
956
>>> p.stat().st_mtime
1327883547.852554
```

`Path.chmod(mode)`

Change the file mode and permissions, like [`os.chmod()`](os#os.chmod "os.chmod"):

```
>>> p = Path('setup.py')
>>> p.stat().st_mode
33277
>>> p.chmod(0o444)
>>> p.stat().st_mode
33060
```

`Path.exists()`

Whether the path points to an existing file or directory:

```
>>> Path('.').exists()
True
>>> Path('setup.py').exists()
True
>>> Path('/etc').exists()
True
>>> Path('nonexistentfile').exists()
False
```

Note If the path points to a symlink, [`exists()`](#pathlib.Path.exists "pathlib.Path.exists") returns whether the symlink *points to* an existing file or directory.

`Path.expanduser()`

Return a new path with expanded `~` and `~user` constructs, as returned by [`os.path.expanduser()`](os.path#os.path.expanduser "os.path.expanduser"):

```
>>> p = PosixPath('~/films/Monty Python')
>>> p.expanduser()
PosixPath('/home/eric/films/Monty Python')
```

Note that unlike [`os.path.expanduser()`](os.path#os.path.expanduser "os.path.expanduser"), on POSIX systems a [`KeyError`](exceptions#KeyError "KeyError") or [`RuntimeError`](exceptions#RuntimeError "RuntimeError") will be raised, and on Windows systems a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") will be raised if the home directory can’t be resolved. New in version 3.5.

`Path.glob(pattern)`

Glob the given relative *pattern* in the directory represented by this path, yielding all matching files (of any kind):

```
>>> sorted(Path('.').glob('*.py'))
[PosixPath('pathlib.py'), PosixPath('setup.py'), PosixPath('test_pathlib.py')]
>>> sorted(Path('.').glob('*/*.py'))
[PosixPath('docs/conf.py')]
```

The “`**`” pattern means “this directory and all subdirectories, recursively”. In other words, it enables recursive globbing:

```
>>> sorted(Path('.').glob('**/*.py'))
[PosixPath('build/lib/pathlib.py'), PosixPath('docs/conf.py'), PosixPath('pathlib.py'), PosixPath('setup.py'), PosixPath('test_pathlib.py')]
```

Note Using the “`**`” pattern in large directory trees may consume an inordinate amount of time.

Raises an [auditing event](sys#auditing) `pathlib.Path.glob` with arguments `self`, `pattern`.

`Path.group()`

Return the name of the group owning the file. [`KeyError`](exceptions#KeyError "KeyError") is raised if the file’s gid isn’t found in the system database.
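A short example of querying ownership (output depends on your system and files; the names shown here are made up):

```
>>> p = Path('setup.py')
>>> p.owner()
'antoine'
>>> p.group()
'staff'
```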
`Path.is_dir()` Return `True` if the path points to a directory (or a symbolic link pointing to a directory), `False` if it points to another kind of file. `False` is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated. `Path.is_file()` Return `True` if the path points to a regular file (or a symbolic link pointing to a regular file), `False` if it points to another kind of file. `False` is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated. `Path.is_mount()` Return `True` if the path is a *mount point*: a point in a file system where a different file system has been mounted. On POSIX, the function checks whether *path*’s parent, `path/..`, is on a different device than *path*, or whether `path/..` and *path* point to the same i-node on the same device — this should detect mount points for all Unix and POSIX variants. Not implemented on Windows. New in version 3.7. `Path.is_symlink()` Return `True` if the path points to a symbolic link, `False` otherwise. `False` is also returned if the path doesn’t exist; other errors (such as permission errors) are propagated. `Path.is_socket()` Return `True` if the path points to a Unix socket (or a symbolic link pointing to a Unix socket), `False` if it points to another kind of file. `False` is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated. `Path.is_fifo()` Return `True` if the path points to a FIFO (or a symbolic link pointing to a FIFO), `False` if it points to another kind of file. `False` is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated. `Path.is_block_device()` Return `True` if the path points to a block device (or a symbolic link pointing to a block device), `False` if it points to another kind of file. `False` is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated. `Path.is_char_device()` Return `True` if the path points to a character device (or a symbolic link pointing to a character device), `False` if it points to another kind of file. `False` is also returned if the path doesn’t exist or is a broken symlink; other errors (such as permission errors) are propagated. `Path.iterdir()` When the path points to a directory, yield path objects of the directory contents: ``` >>> p = Path('docs') >>> for child in p.iterdir(): child ... PosixPath('docs/conf.py') PosixPath('docs/_templates') PosixPath('docs/make.bat') PosixPath('docs/index.rst') PosixPath('docs/_build') PosixPath('docs/_static') PosixPath('docs/Makefile') ``` The children are yielded in arbitrary order, and the special entries `'.'` and `'..'` are not included. If a file is removed from or added to the directory after creating the iterator, whether a path object for that file will be included is unspecified. `Path.lchmod(mode)` Like [`Path.chmod()`](#pathlib.Path.chmod "pathlib.Path.chmod") but, if the path points to a symbolic link, the symbolic link’s mode is changed rather than its target’s. `Path.lstat()` Like [`Path.stat()`](#pathlib.Path.stat "pathlib.Path.stat") but, if the path points to a symbolic link, return the symbolic link’s information rather than its target’s. `Path.mkdir(mode=0o777, parents=False, exist_ok=False)` Create a new directory at this given path. 
If *mode* is given, it is combined with the process’ `umask` value to determine the file mode and access flags. If the path already exists, [`FileExistsError`](exceptions#FileExistsError "FileExistsError") is raised. If *parents* is true, any missing parents of this path are created as needed; they are created with the default permissions without taking *mode* into account (mimicking the POSIX `mkdir -p` command). If *parents* is false (the default), a missing parent raises [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError"). If *exist\_ok* is false (the default), [`FileExistsError`](exceptions#FileExistsError "FileExistsError") is raised if the target directory already exists. If *exist\_ok* is true, [`FileExistsError`](exceptions#FileExistsError "FileExistsError") exceptions will be ignored (same behavior as the POSIX `mkdir -p` command), but only if the last path component is not an existing non-directory file. Changed in version 3.5: The *exist\_ok* parameter was added. `Path.open(mode='r', buffering=-1, encoding=None, errors=None, newline=None)` Open the file pointed to by the path, like the built-in [`open()`](functions#open "open") function does: ``` >>> p = Path('setup.py') >>> with p.open() as f: ... f.readline() ... '#!/usr/bin/env python3\n' ``` `Path.owner()` Return the name of the user owning the file. [`KeyError`](exceptions#KeyError "KeyError") is raised if the file’s uid isn’t found in the system database. `Path.read_bytes()` Return the binary contents of the pointed-to file as a bytes object: ``` >>> p = Path('my_binary_file') >>> p.write_bytes(b'Binary file contents') 20 >>> p.read_bytes() b'Binary file contents' ``` New in version 3.5. `Path.read_text(encoding=None, errors=None)` Return the decoded contents of the pointed-to file as a string: ``` >>> p = Path('my_text_file') >>> p.write_text('Text file contents') 18 >>> p.read_text() 'Text file contents' ``` The file is opened and then closed. The optional parameters have the same meaning as in [`open()`](functions#open "open"). New in version 3.5. `Path.readlink()` Return the path to which the symbolic link points (as returned by [`os.readlink()`](os#os.readlink "os.readlink")): ``` >>> p = Path('mylink') >>> p.symlink_to('setup.py') >>> p.readlink() PosixPath('setup.py') ``` New in version 3.9. `Path.rename(target)` Rename this file or directory to the given *target*, and return a new Path instance pointing to *target*. On Unix, if *target* exists and is a file, it will be replaced silently if the user has permission. *target* can be either a string or another path object: ``` >>> p = Path('foo') >>> p.open('w').write('some text') 9 >>> target = Path('bar') >>> p.rename(target) PosixPath('bar') >>> target.open().read() 'some text' ``` The target path may be absolute or relative. Relative paths are interpreted relative to the current working directory, *not* the directory of the Path object. Changed in version 3.8: Added return value; the new Path instance is returned. `Path.replace(target)` Rename this file or directory to the given *target*, and return a new Path instance pointing to *target*. If *target* points to an existing file or empty directory, it will be unconditionally replaced. The target path may be absolute or relative. Relative paths are interpreted relative to the current working directory, *not* the directory of the Path object. Changed in version 3.8: Added return value; the new Path instance is returned. `Path.resolve(strict=False)` Make the path absolute, resolving any symlinks. 
A new path object is returned: ``` >>> p = Path() >>> p PosixPath('.') >>> p.resolve() PosixPath('/home/antoine/pathlib') ``` “`..`” components are also eliminated (this is the only method to do so): ``` >>> p = Path('docs/../setup.py') >>> p.resolve() PosixPath('/home/antoine/pathlib/setup.py') ``` If the path doesn’t exist and *strict* is `True`, [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") is raised. If *strict* is `False`, the path is resolved as far as possible and any remainder is appended without checking whether it exists. If an infinite loop is encountered along the resolution path, [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. New in version 3.6: The *strict* argument (pre-3.6 behavior is strict). `Path.rglob(pattern)` This is like calling [`Path.glob()`](#pathlib.Path.glob "pathlib.Path.glob") with “`**/`” added in front of the given relative *pattern*: ``` >>> sorted(Path().rglob("*.py")) [PosixPath('build/lib/pathlib.py'), PosixPath('docs/conf.py'), PosixPath('pathlib.py'), PosixPath('setup.py'), PosixPath('test_pathlib.py')] ``` Raises an [auditing event](sys#auditing) `pathlib.Path.rglob` with arguments `self`, `pattern`. `Path.rmdir()` Remove this directory. The directory must be empty. `Path.samefile(other_path)` Return whether this path points to the same file as *other\_path*, which can be either a Path object, or a string. The semantics are similar to [`os.path.samefile()`](os.path#os.path.samefile "os.path.samefile") and [`os.path.samestat()`](os.path#os.path.samestat "os.path.samestat"). An [`OSError`](exceptions#OSError "OSError") can be raised if either file cannot be accessed for some reason. ``` >>> p = Path('spam') >>> q = Path('eggs') >>> p.samefile(q) False >>> p.samefile('spam') True ``` New in version 3.5. `Path.symlink_to(target, target_is_directory=False)` Make this path a symbolic link to *target*. Under Windows, *target\_is\_directory* must be true (default `False`) if the link’s target is a directory. Under POSIX, *target\_is\_directory*’s value is ignored. ``` >>> p = Path('mylink') >>> p.symlink_to('setup.py') >>> p.resolve() PosixPath('/home/antoine/pathlib/setup.py') >>> p.stat().st_size 956 >>> p.lstat().st_size 8 ``` Note The order of arguments (link, target) is the reverse of [`os.symlink()`](os#os.symlink "os.symlink")’s. `Path.link_to(target)` Make *target* a hard link to this path. Warning This function does not make this path a hard link to *target*, despite the implication of the function and argument names. The argument order (target, link) is the reverse of [`Path.symlink_to()`](#pathlib.Path.symlink_to "pathlib.Path.symlink_to"), but matches that of [`os.link()`](os#os.link "os.link"). New in version 3.8. `Path.touch(mode=0o666, exist_ok=True)` Create a file at this given path. If *mode* is given, it is combined with the process’ `umask` value to determine the file mode and access flags. If the file already exists, the function succeeds if *exist\_ok* is true (and its modification time is updated to the current time), otherwise [`FileExistsError`](exceptions#FileExistsError "FileExistsError") is raised. `Path.unlink(missing_ok=False)` Remove this file or symbolic link. If the path points to a directory, use [`Path.rmdir()`](#pathlib.Path.rmdir "pathlib.Path.rmdir") instead. If *missing\_ok* is false (the default), [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") is raised if the path does not exist. 
If *missing\_ok* is true, [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") exceptions will be ignored (same behavior as the POSIX `rm -f` command). Changed in version 3.8: The *missing\_ok* parameter was added. `Path.write_bytes(data)` Open the file pointed to in bytes mode, write *data* to it, and close the file: ``` >>> p = Path('my_binary_file') >>> p.write_bytes(b'Binary file contents') 20 >>> p.read_bytes() b'Binary file contents' ``` An existing file of the same name is overwritten. New in version 3.5. `Path.write_text(data, encoding=None, errors=None)` Open the file pointed to in text mode, write *data* to it, and close the file: ``` >>> p = Path('my_text_file') >>> p.write_text('Text file contents') 18 >>> p.read_text() 'Text file contents' ``` An existing file of the same name is overwritten. The optional parameters have the same meaning as in [`open()`](functions#open "open"). New in version 3.5. Correspondence to tools in the os module ---------------------------------------- Below is a table mapping various [`os`](os#module-os "os: Miscellaneous operating system interfaces.") functions to their corresponding [`PurePath`](#pathlib.PurePath "pathlib.PurePath")/[`Path`](#pathlib.Path "pathlib.Path") equivalent. Note Although [`os.path.relpath()`](os.path#os.path.relpath "os.path.relpath") and [`PurePath.relative_to()`](#pathlib.PurePath.relative_to "pathlib.PurePath.relative_to") have some overlapping use-cases, their semantics differ enough to warrant not considering them equivalent. | os and os.path | pathlib | | --- | --- | | [`os.path.abspath()`](os.path#os.path.abspath "os.path.abspath") | [`Path.resolve()`](#pathlib.Path.resolve "pathlib.Path.resolve") | | [`os.chmod()`](os#os.chmod "os.chmod") | [`Path.chmod()`](#pathlib.Path.chmod "pathlib.Path.chmod") | | [`os.mkdir()`](os#os.mkdir "os.mkdir") | [`Path.mkdir()`](#pathlib.Path.mkdir "pathlib.Path.mkdir") | | [`os.makedirs()`](os#os.makedirs "os.makedirs") | [`Path.mkdir()`](#pathlib.Path.mkdir "pathlib.Path.mkdir") | | [`os.rename()`](os#os.rename "os.rename") | [`Path.rename()`](#pathlib.Path.rename "pathlib.Path.rename") | | [`os.replace()`](os#os.replace "os.replace") | [`Path.replace()`](#pathlib.Path.replace "pathlib.Path.replace") | | [`os.rmdir()`](os#os.rmdir "os.rmdir") | [`Path.rmdir()`](#pathlib.Path.rmdir "pathlib.Path.rmdir") | | [`os.remove()`](os#os.remove "os.remove"), [`os.unlink()`](os#os.unlink "os.unlink") | [`Path.unlink()`](#pathlib.Path.unlink "pathlib.Path.unlink") | | [`os.getcwd()`](os#os.getcwd "os.getcwd") | [`Path.cwd()`](#pathlib.Path.cwd "pathlib.Path.cwd") | | [`os.path.exists()`](os.path#os.path.exists "os.path.exists") | [`Path.exists()`](#pathlib.Path.exists "pathlib.Path.exists") | | [`os.path.expanduser()`](os.path#os.path.expanduser "os.path.expanduser") | [`Path.expanduser()`](#pathlib.Path.expanduser "pathlib.Path.expanduser") and [`Path.home()`](#pathlib.Path.home "pathlib.Path.home") | | [`os.listdir()`](os#os.listdir "os.listdir") | [`Path.iterdir()`](#pathlib.Path.iterdir "pathlib.Path.iterdir") | | [`os.path.isdir()`](os.path#os.path.isdir "os.path.isdir") | [`Path.is_dir()`](#pathlib.Path.is_dir "pathlib.Path.is_dir") | | [`os.path.isfile()`](os.path#os.path.isfile "os.path.isfile") | [`Path.is_file()`](#pathlib.Path.is_file "pathlib.Path.is_file") | | [`os.path.islink()`](os.path#os.path.islink "os.path.islink") | [`Path.is_symlink()`](#pathlib.Path.is_symlink "pathlib.Path.is_symlink") | | [`os.link()`](os#os.link "os.link") | 
[`Path.link_to()`](#pathlib.Path.link_to "pathlib.Path.link_to") | | [`os.symlink()`](os#os.symlink "os.symlink") | [`Path.symlink_to()`](#pathlib.Path.symlink_to "pathlib.Path.symlink_to") | | [`os.readlink()`](os#os.readlink "os.readlink") | [`Path.readlink()`](#pathlib.Path.readlink "pathlib.Path.readlink") | | [`os.stat()`](os#os.stat "os.stat") | [`Path.stat()`](#pathlib.Path.stat "pathlib.Path.stat"), [`Path.owner()`](#pathlib.Path.owner "pathlib.Path.owner"), [`Path.group()`](#pathlib.Path.group "pathlib.Path.group") | | [`os.path.isabs()`](os.path#os.path.isabs "os.path.isabs") | [`PurePath.is_absolute()`](#pathlib.PurePath.is_absolute "pathlib.PurePath.is_absolute") | | [`os.path.join()`](os.path#os.path.join "os.path.join") | [`PurePath.joinpath()`](#pathlib.PurePath.joinpath "pathlib.PurePath.joinpath") | | [`os.path.basename()`](os.path#os.path.basename "os.path.basename") | [`PurePath.name`](#pathlib.PurePath.name "pathlib.PurePath.name") | | [`os.path.dirname()`](os.path#os.path.dirname "os.path.dirname") | [`PurePath.parent`](#pathlib.PurePath.parent "pathlib.PurePath.parent") | | [`os.path.samefile()`](os.path#os.path.samefile "os.path.samefile") | [`Path.samefile()`](#pathlib.Path.samefile "pathlib.Path.samefile") | | [`os.path.splitext()`](os.path#os.path.splitext "os.path.splitext") | [`PurePath.suffix`](#pathlib.PurePath.suffix "pathlib.PurePath.suffix") |
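To make the correspondence concrete, here is a hedged before/after sketch of the same task, first with `os`/`os.path` and then with `pathlib` (the directory and suffix are arbitrary examples):

```
import os
import os.path
from pathlib import Path

# os / os.path style: list the .txt files in the current directory
for name in os.listdir('.'):
    full = os.path.join('.', name)
    if os.path.isfile(full) and os.path.splitext(full)[1] == '.txt':
        print(os.path.basename(full))

# pathlib equivalent, using the mappings from the table above
for p in Path('.').iterdir():
    if p.is_file() and p.suffix == '.txt':
        print(p.name)
```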
python hmac — Keyed-Hashing for Message Authentication hmac — Keyed-Hashing for Message Authentication =============================================== **Source code:** [Lib/hmac.py](https://github.com/python/cpython/tree/3.9/Lib/hmac.py) This module implements the HMAC algorithm as described by [**RFC 2104**](https://tools.ietf.org/html/rfc2104.html). `hmac.new(key, msg=None, digestmod='')` Return a new hmac object. *key* is a bytes or bytearray object giving the secret key. If *msg* is present, the method call `update(msg)` is made. *digestmod* is the digest name, digest constructor or module for the HMAC object to use. It may be any name suitable to [`hashlib.new()`](hashlib#hashlib.new "hashlib.new"). Despite its argument position, it is required. Changed in version 3.4: Parameter *key* can be a bytes or bytearray object. Parameter *msg* can be of any type supported by [`hashlib`](hashlib#module-hashlib "hashlib: Secure hash and message digest algorithms."). Parameter *digestmod* can be the name of a hash algorithm. Deprecated since version 3.4, removed in version 3.8: MD5 as implicit default digest for *digestmod* is deprecated. The digestmod parameter is now required. Pass it as a keyword argument to avoid awkwardness when you do not have an initial msg. `hmac.digest(key, msg, digest)` Return the digest of *msg* for the given secret *key* and *digest*. The function is equivalent to `HMAC(key, msg, digest).digest()`, but uses an optimized C or inline implementation, which is faster for messages that fit into memory. The parameters *key*, *msg*, and *digest* have the same meaning as in [`new()`](#hmac.new "hmac.new"). CPython implementation detail: the optimized C implementation is only used when *digest* is a string naming a digest algorithm that is supported by OpenSSL. New in version 3.7. An HMAC object has the following methods: `HMAC.update(msg)` Update the hmac object with *msg*. Repeated calls are equivalent to a single call with the concatenation of all the arguments: `m.update(a); m.update(b)` is equivalent to `m.update(a + b)`. Changed in version 3.4: Parameter *msg* can be of any type supported by [`hashlib`](hashlib#module-hashlib "hashlib: Secure hash and message digest algorithms."). `HMAC.digest()` Return the digest of the bytes passed to the [`update()`](#hmac.HMAC.update "hmac.HMAC.update") method so far. This bytes object will be the same length as the *digest\_size* of the digest given to the constructor. It may contain non-ASCII bytes, including NUL bytes. Warning When comparing the output of [`digest()`](#hmac.digest "hmac.digest") to an externally-supplied digest during a verification routine, it is recommended to use the [`compare_digest()`](#hmac.compare_digest "hmac.compare_digest") function instead of the `==` operator to reduce the vulnerability to timing attacks. `HMAC.hexdigest()` Like [`digest()`](#hmac.digest "hmac.digest") except the digest is returned as a string of twice the length, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments. Warning When comparing the output of [`hexdigest()`](#hmac.HMAC.hexdigest "hmac.HMAC.hexdigest") to an externally-supplied digest during a verification routine, it is recommended to use the [`compare_digest()`](#hmac.compare_digest "hmac.compare_digest") function instead of the `==` operator to reduce the vulnerability to timing attacks. `HMAC.copy()` Return a copy (“clone”) of the hmac object. 
This can be used to efficiently compute the digests of strings that share a common initial substring. A hash object has the following attributes: `HMAC.digest_size` The size of the resulting HMAC digest in bytes. `HMAC.block_size` The internal block size of the hash algorithm in bytes. New in version 3.4. `HMAC.name` The canonical name of this HMAC, always lowercase, e.g. `hmac-md5`. New in version 3.4. Deprecated since version 3.9: The undocumented attributes `HMAC.digest_cons`, `HMAC.inner`, and `HMAC.outer` are internal implementation details and will be removed in Python 3.10. This module also provides the following helper function: `hmac.compare_digest(a, b)` Return `a == b`. This function uses an approach designed to prevent timing analysis by avoiding content-based short circuiting behaviour, making it appropriate for cryptography. *a* and *b* must both be of the same type: either [`str`](stdtypes#str "str") (ASCII only, as e.g. returned by [`HMAC.hexdigest()`](#hmac.HMAC.hexdigest "hmac.HMAC.hexdigest")), or a [bytes-like object](../glossary#term-bytes-like-object). Note If *a* and *b* are of different lengths, or if an error occurs, a timing attack could theoretically reveal information about the types and lengths of *a* and *b*—but not their values. New in version 3.3. Changed in version 3.9: The function uses OpenSSL’s `CRYPTO_memcmp()` internally when available. See also `Module` [`hashlib`](hashlib#module-hashlib "hashlib: Secure hash and message digest algorithms.") The Python module providing secure hash functions. python Developing with asyncio Developing with asyncio ======================= Asynchronous programming is different from classic “sequential” programming. This page lists common mistakes and traps and explains how to avoid them. Debug Mode ---------- By default asyncio runs in production mode. In order to ease the development asyncio has a *debug mode*. There are several ways to enable asyncio debug mode: * Setting the [`PYTHONASYNCIODEBUG`](../using/cmdline#envvar-PYTHONASYNCIODEBUG) environment variable to `1`. * Using the [Python Development Mode](devmode#devmode). * Passing `debug=True` to [`asyncio.run()`](asyncio-task#asyncio.run "asyncio.run"). * Calling [`loop.set_debug()`](asyncio-eventloop#asyncio.loop.set_debug "asyncio.loop.set_debug"). In addition to enabling the debug mode, consider also: * setting the log level of the [asyncio logger](#asyncio-logger) to `logging.DEBUG`, for example the following snippet of code can be run at startup of the application: ``` logging.basicConfig(level=logging.DEBUG) ``` * configuring the [`warnings`](warnings#module-warnings "warnings: Issue warning messages and control their disposition.") module to display [`ResourceWarning`](exceptions#ResourceWarning "ResourceWarning") warnings. One way of doing that is by using the [`-W`](../using/cmdline#cmdoption-w) `default` command line option. When the debug mode is enabled: * asyncio checks for [coroutines that were not awaited](#asyncio-coroutine-not-scheduled) and logs them; this mitigates the “forgotten await” pitfall. * Many non-threadsafe asyncio APIs (such as [`loop.call_soon()`](asyncio-eventloop#asyncio.loop.call_soon "asyncio.loop.call_soon") and [`loop.call_at()`](asyncio-eventloop#asyncio.loop.call_at "asyncio.loop.call_at") methods) raise an exception if they are called from a wrong thread. * The execution time of the I/O selector is logged if it takes too long to perform an I/O operation. * Callbacks taking longer than 100ms are logged. 
The `loop.slow_callback_duration` attribute can be used to set the minimum execution duration in seconds that is considered “slow”. Concurrency and Multithreading ------------------------------ An event loop runs in a thread (typically the main thread) and executes all callbacks and Tasks in its thread. While a Task is running in the event loop, no other Tasks can run in the same thread. When a Task executes an `await` expression, the running Task gets suspended, and the event loop executes the next Task. To schedule a [callback](../glossary#term-callback) from another OS thread, the [`loop.call_soon_threadsafe()`](asyncio-eventloop#asyncio.loop.call_soon_threadsafe "asyncio.loop.call_soon_threadsafe") method should be used. Example: ``` loop.call_soon_threadsafe(callback, *args) ``` Almost all asyncio objects are not thread safe, which is typically not a problem unless there is code that works with them from outside of a Task or a callback. If there’s a need for such code to call a low-level asyncio API, the [`loop.call_soon_threadsafe()`](asyncio-eventloop#asyncio.loop.call_soon_threadsafe "asyncio.loop.call_soon_threadsafe") method should be used, e.g.: ``` loop.call_soon_threadsafe(fut.cancel) ``` To schedule a coroutine object from a different OS thread, the [`run_coroutine_threadsafe()`](asyncio-task#asyncio.run_coroutine_threadsafe "asyncio.run_coroutine_threadsafe") function should be used. It returns a [`concurrent.futures.Future`](concurrent.futures#concurrent.futures.Future "concurrent.futures.Future") to access the result: ``` async def coro_func(): return await asyncio.sleep(1, 42) # Later in another OS thread: future = asyncio.run_coroutine_threadsafe(coro_func(), loop) # Wait for the result: result = future.result() ``` To handle signals and to execute subprocesses, the event loop must be run in the main thread. The [`loop.run_in_executor()`](asyncio-eventloop#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor") method can be used with a [`concurrent.futures.ThreadPoolExecutor`](concurrent.futures#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor") to execute blocking code in a different OS thread without blocking the OS thread that the event loop runs in. There is currently no way to schedule coroutines or callbacks directly from a different process (such as one started with [`multiprocessing`](multiprocessing#module-multiprocessing "multiprocessing: Process-based parallelism.")). The [Event Loop Methods](asyncio-eventloop#asyncio-event-loop) section lists APIs that can read from pipes and watch file descriptors without blocking the event loop. In addition, asyncio’s [Subprocess](asyncio-subprocess#asyncio-subprocess) APIs provide a way to start a process and communicate with it from the event loop. Lastly, the aforementioned [`loop.run_in_executor()`](asyncio-eventloop#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor") method can also be used with a [`concurrent.futures.ProcessPoolExecutor`](concurrent.futures#concurrent.futures.ProcessPoolExecutor "concurrent.futures.ProcessPoolExecutor") to execute code in a different process. Running Blocking Code --------------------- Blocking (CPU-bound) code should not be called directly. For example, if a function performs a CPU-intensive calculation for 1 second, all concurrent asyncio Tasks and IO operations would be delayed by 1 second. An executor can be used to run a task in a different thread or even in a different process to avoid blocking the OS thread with the event loop. 
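For instance, a minimal sketch of offloading a CPU-bound function to the default thread pool via `loop.run_in_executor()` (the `crunch()` function and its workload are made up for illustration):

```
import asyncio

def crunch(n):
    # CPU-bound work; would stall the event loop if called directly in a coroutine
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor; other tasks keep running meanwhile
    result = await loop.run_in_executor(None, crunch, 10_000_000)
    print(result)

asyncio.run(main())
```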
See the [`loop.run_in_executor()`](asyncio-eventloop#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor") method for more details. Logging ------- asyncio uses the [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") module and all logging is performed via the `"asyncio"` logger. The default log level is `logging.INFO`, which can be easily adjusted: ``` logging.getLogger("asyncio").setLevel(logging.WARNING) ``` Detect never-awaited coroutines ------------------------------- When a coroutine function is called, but not awaited (e.g. `coro()` instead of `await coro()`) or the coroutine is not scheduled with [`asyncio.create_task()`](asyncio-task#asyncio.create_task "asyncio.create_task"), asyncio will emit a [`RuntimeWarning`](exceptions#RuntimeWarning "RuntimeWarning"): ``` import asyncio async def test(): print("never scheduled") async def main(): test() asyncio.run(main()) ``` Output: ``` test.py:7: RuntimeWarning: coroutine 'test' was never awaited test() ``` Output in debug mode: ``` test.py:7: RuntimeWarning: coroutine 'test' was never awaited Coroutine created at (most recent call last) File "../t.py", line 9, in <module> asyncio.run(main(), debug=True) < .. > File "../t.py", line 7, in main test() test() ``` The usual fix is to either await the coroutine or call the [`asyncio.create_task()`](asyncio-task#asyncio.create_task "asyncio.create_task") function: ``` async def main(): await test() ``` Detect never-retrieved exceptions --------------------------------- If a [`Future.set_exception()`](asyncio-future#asyncio.Future.set_exception "asyncio.Future.set_exception") is called but the Future object is never awaited on, the exception would never be propagated to the user code. In this case, asyncio would emit a log message when the Future object is garbage collected. Example of an unhandled exception: ``` import asyncio async def bug(): raise Exception("not consumed") async def main(): asyncio.create_task(bug()) asyncio.run(main()) ``` Output: ``` Task exception was never retrieved future: <Task finished coro=<bug() done, defined at test.py:3> exception=Exception('not consumed')> Traceback (most recent call last): File "test.py", line 4, in bug raise Exception("not consumed") Exception: not consumed ``` [Enable the debug mode](#asyncio-debug-mode) to get the traceback where the task was created: ``` asyncio.run(main(), debug=True) ``` Output in debug mode: ``` Task exception was never retrieved future: <Task finished coro=<bug() done, defined at test.py:3> exception=Exception('not consumed') created at asyncio/tasks.py:321> source_traceback: Object created at (most recent call last): File "../t.py", line 9, in <module> asyncio.run(main(), debug=True) < .. > Traceback (most recent call last): File "../t.py", line 4, in bug raise Exception("not consumed") Exception: not consumed ``` python runpy — Locating and executing Python modules runpy — Locating and executing Python modules ============================================= **Source code:** [Lib/runpy.py](https://github.com/python/cpython/tree/3.9/Lib/runpy.py) The [`runpy`](#module-runpy "runpy: Locate and run Python modules without importing them first.") module is used to locate and run Python modules without importing them first. Its main use is to implement the [`-m`](../using/cmdline#cmdoption-m) command line switch that allows scripts to be located using the Python module namespace rather than the filesystem. 
Note that this is *not* a sandbox module - all code is executed in the current process, and any side effects (such as cached imports of other modules) will remain in place after the functions have returned. Furthermore, any functions and classes defined by the executed code are not guaranteed to work correctly after a [`runpy`](#module-runpy "runpy: Locate and run Python modules without importing them first.") function has returned. If that limitation is not acceptable for a given use case, [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") is likely to be a more suitable choice than this module. The [`runpy`](#module-runpy "runpy: Locate and run Python modules without importing them first.") module provides two functions: `runpy.run_module(mod_name, init_globals=None, run_name=None, alter_sys=False)` Execute the code of the specified module and return the resulting module globals dictionary. The module’s code is first located using the standard import mechanism (refer to [**PEP 302**](https://www.python.org/dev/peps/pep-0302) for details) and then executed in a fresh module namespace. The *mod\_name* argument should be an absolute module name. If the module name refers to a package rather than a normal module, then that package is imported and the `__main__` submodule within that package is then executed and the resulting module globals dictionary returned. The optional dictionary argument *init\_globals* may be used to pre-populate the module’s globals dictionary before the code is executed. The supplied dictionary will not be modified. If any of the special global variables below are defined in the supplied dictionary, those definitions are overridden by [`run_module()`](#runpy.run_module "runpy.run_module"). The special global variables `__name__`, `__spec__`, `__file__`, `__cached__`, `__loader__` and `__package__` are set in the globals dictionary before the module code is executed (Note that this is a minimal set of variables - other variables may be set implicitly as an interpreter implementation detail). `__name__` is set to *run\_name* if this optional argument is not [`None`](constants#None "None"), to `mod_name + '.__main__'` if the named module is a package and to the *mod\_name* argument otherwise. `__spec__` will be set appropriately for the *actually* imported module (that is, `__spec__.name` will always be *mod\_name* or `mod_name + '.__main__'`, never *run\_name*). `__file__`, `__cached__`, `__loader__` and `__package__` are [set as normal](../reference/import#import-mod-attrs) based on the module spec. If the argument *alter\_sys* is supplied and evaluates to [`True`](constants#True "True"), then `sys.argv[0]` is updated with the value of `__file__` and `sys.modules[__name__]` is updated with a temporary module object for the module being executed. Both `sys.argv[0]` and `sys.modules[__name__]` are restored to their original values before the function returns. Note that this manipulation of [`sys`](sys#module-sys "sys: Access system-specific parameters and functions.") is not thread-safe. Other threads may see the partially initialised module, as well as the altered list of arguments. It is recommended that the [`sys`](sys#module-sys "sys: Access system-specific parameters and functions.") module be left alone when invoking this function from threaded code. See also The [`-m`](../using/cmdline#cmdoption-m) option offering equivalent functionality from the command line. 
Changed in version 3.1: Added ability to execute packages by looking for a `__main__` submodule. Changed in version 3.2: Added `__cached__` global variable (see [**PEP 3147**](https://www.python.org/dev/peps/pep-3147)). Changed in version 3.4: Updated to take advantage of the module spec feature added by [**PEP 451**](https://www.python.org/dev/peps/pep-0451). This allows `__cached__` to be set correctly for modules run this way, as well as ensuring the real module name is always accessible as `__spec__.name`. `runpy.run_path(path_name, init_globals=None, run_name=None)` Execute the code at the named filesystem location and return the resulting module globals dictionary. As with a script name supplied to the CPython command line, the supplied path may refer to a Python source file, a compiled bytecode file or a valid sys.path entry containing a `__main__` module (e.g. a zipfile containing a top-level `__main__.py` file). For a simple script, the specified code is simply executed in a fresh module namespace. For a valid sys.path entry (typically a zipfile or directory), the entry is first added to the beginning of `sys.path`. The function then looks for and executes a [`__main__`](__main__#module-__main__ "__main__: The environment where the top-level script is run.") module using the updated path. Note that there is no special protection against invoking an existing [`__main__`](__main__#module-__main__ "__main__: The environment where the top-level script is run.") entry located elsewhere on `sys.path` if there is no such module at the specified location. The optional dictionary argument *init\_globals* may be used to pre-populate the module’s globals dictionary before the code is executed. The supplied dictionary will not be modified. If any of the special global variables below are defined in the supplied dictionary, those definitions are overridden by [`run_path()`](#runpy.run_path "runpy.run_path"). The special global variables `__name__`, `__spec__`, `__file__`, `__cached__`, `__loader__` and `__package__` are set in the globals dictionary before the module code is executed (Note that this is a minimal set of variables - other variables may be set implicitly as an interpreter implementation detail). `__name__` is set to *run\_name* if this optional argument is not [`None`](constants#None "None") and to `'<run_path>'` otherwise. If the supplied path directly references a script file (whether as source or as precompiled byte code), then `__file__` will be set to the supplied path, and `__spec__`, `__cached__`, `__loader__` and `__package__` will all be set to [`None`](constants#None "None"). If the supplied path is a reference to a valid sys.path entry, then `__spec__` will be set appropriately for the imported `__main__` module (that is, `__spec__.name` will always be `__main__`). `__file__`, `__cached__`, `__loader__` and `__package__` will be [set as normal](../reference/import#import-mod-attrs) based on the module spec. A number of alterations are also made to the [`sys`](sys#module-sys "sys: Access system-specific parameters and functions.") module. Firstly, `sys.path` may be altered as described above. `sys.argv[0]` is updated with the value of `path_name` and `sys.modules[__name__]` is updated with a temporary module object for the module being executed. All modifications to items in [`sys`](sys#module-sys "sys: Access system-specific parameters and functions.") are reverted before the function returns. 
Note that, unlike [`run_module()`](#runpy.run_module "runpy.run_module"), the alterations made to [`sys`](sys#module-sys "sys: Access system-specific parameters and functions.") are not optional in this function as these adjustments are essential to allowing the execution of sys.path entries. As the thread-safety limitations still apply, use of this function in threaded code should be either serialised with the import lock or delegated to a separate process. See also [Interface options](../using/cmdline#using-on-interface-options) for equivalent functionality on the command line (`python path/to/script`). New in version 3.2. Changed in version 3.4: Updated to take advantage of the module spec feature added by [**PEP 451**](https://www.python.org/dev/peps/pep-0451). This allows `__cached__` to be set correctly in the case where `__main__` is imported from a valid sys.path entry rather than being executed directly. See also [**PEP 338**](https://www.python.org/dev/peps/pep-0338) – Executing modules as scripts PEP written and implemented by Nick Coghlan. [**PEP 366**](https://www.python.org/dev/peps/pep-0366) – Main module explicit relative imports PEP written and implemented by Nick Coghlan. [**PEP 451**](https://www.python.org/dev/peps/pep-0451) – A ModuleSpec Type for the Import System PEP written and implemented by Eric Snow [Command line and environment](../using/cmdline#using-on-general) - CPython command line details The [`importlib.import_module()`](importlib#importlib.import_module "importlib.import_module") function
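As a quick orientation, a hedged sketch of `run_module()`; `json.tool` is used here only as a convenient stdlib example, and because *run_name* is left at its default, the module's `if __name__ == '__main__'` block does not fire:

```
import runpy

# Locate json.tool via the import system and execute its code in a
# fresh namespace; the resulting module globals dictionary is returned.
mod_globals = runpy.run_module('json.tool')
print(sorted(mod_globals)[:5])  # a few of the names the executed code defined

# run_path() works analogously for a filesystem location (placeholder path):
# mod_globals = runpy.run_path('path/to/script.py')
```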
python XML Processing Modules XML Processing Modules ====================== **Source code:** [Lib/xml/](https://github.com/python/cpython/tree/3.9/Lib/xml/) Python’s interfaces for processing XML are grouped in the `xml` package. Warning The XML modules are not secure against erroneous or maliciously constructed data. If you need to parse untrusted or unauthenticated data see the [XML vulnerabilities](#xml-vulnerabilities) and [The defusedxml Package](#defusedxml-package) sections. It is important to note that modules in the [`xml`](#module-xml "xml: Package containing XML processing modules") package require that there be at least one SAX-compliant XML parser available. The Expat parser is included with Python, so the [`xml.parsers.expat`](pyexpat#module-xml.parsers.expat "xml.parsers.expat: An interface to the Expat non-validating XML parser.") module will always be available. The documentation for the [`xml.dom`](xml.dom#module-xml.dom "xml.dom: Document Object Model API for Python.") and [`xml.sax`](xml.sax#module-xml.sax "xml.sax: Package containing SAX2 base classes and convenience functions.") packages are the definition of the Python bindings for the DOM and SAX interfaces. The XML handling submodules are: * [`xml.etree.ElementTree`](xml.etree.elementtree#module-xml.etree.ElementTree "xml.etree.ElementTree: Implementation of the ElementTree API."): the ElementTree API, a simple and lightweight XML processor * [`xml.dom`](xml.dom#module-xml.dom "xml.dom: Document Object Model API for Python."): the DOM API definition * [`xml.dom.minidom`](xml.dom.minidom#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation."): a minimal DOM implementation * [`xml.dom.pulldom`](xml.dom.pulldom#module-xml.dom.pulldom "xml.dom.pulldom: Support for building partial DOM trees from SAX events."): support for building partial DOM trees * [`xml.sax`](xml.sax#module-xml.sax "xml.sax: Package containing SAX2 base classes and convenience functions."): SAX2 base classes and convenience functions * [`xml.parsers.expat`](pyexpat#module-xml.parsers.expat "xml.parsers.expat: An interface to the Expat non-validating XML parser."): the Expat parser binding XML vulnerabilities ------------------- The XML processing modules are not secure against maliciously constructed data. An attacker can abuse XML features to carry out denial of service attacks, access local files, generate network connections to other machines, or circumvent firewalls. The following table gives an overview of the known attacks and whether the various modules are vulnerable to them. | kind | sax | etree | minidom | pulldom | xmlrpc | | --- | --- | --- | --- | --- | --- | | billion laughs | **Vulnerable** (1) | **Vulnerable** (1) | **Vulnerable** (1) | **Vulnerable** (1) | **Vulnerable** (1) | | quadratic blowup | **Vulnerable** (1) | **Vulnerable** (1) | **Vulnerable** (1) | **Vulnerable** (1) | **Vulnerable** (1) | | external entity expansion | Safe (5) | Safe (2) | Safe (3) | Safe (5) | Safe (4) | | [DTD](https://en.wikipedia.org/wiki/Document_type_definition) retrieval | Safe (5) | Safe | Safe | Safe (5) | Safe | | decompression bomb | Safe | Safe | Safe | Safe | **Vulnerable** | 1. Expat 2.4.1 and newer is not vulnerable to the “billion laughs” and “quadratic blowup” vulnerabilities. Items still listed as vulnerable due to potential reliance on system-provided libraries. Check `pyexpat.EXPAT_VERSION`. 2. 
[`xml.etree.ElementTree`](xml.etree.elementtree#module-xml.etree.ElementTree "xml.etree.ElementTree: Implementation of the ElementTree API.") doesn’t expand external entities and raises a `ParserError` when an entity occurs. 3. [`xml.dom.minidom`](xml.dom.minidom#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.") doesn’t expand external entities and simply returns the unexpanded entity verbatim. 4. `xmlrpclib` doesn’t expand external entities and omits them. 5. Since Python 3.7.1, external general entities are no longer processed by default. billion laughs / exponential entity expansion The [Billion Laughs](https://en.wikipedia.org/wiki/Billion_laughs) attack – also known as exponential entity expansion – uses multiple levels of nested entities. Each entity refers to another entity several times, and the final entity definition contains a small string. The exponential expansion results in several gigabytes of text and consumes lots of memory and CPU time. quadratic blowup entity expansion A quadratic blowup attack is similar to a [Billion Laughs](https://en.wikipedia.org/wiki/Billion_laughs) attack; it abuses entity expansion, too. Instead of nested entities it repeats one large entity with a couple of thousand chars over and over again. The attack isn’t as efficient as the exponential case but it avoids triggering parser countermeasures that forbid deeply-nested entities. external entity expansion Entity declarations can contain more than just text for replacement. They can also point to external resources or local files. The XML parser accesses the resource and embeds the content into the XML document. [DTD](https://en.wikipedia.org/wiki/Document_type_definition) retrieval Some XML libraries like Python’s [`xml.dom.pulldom`](xml.dom.pulldom#module-xml.dom.pulldom "xml.dom.pulldom: Support for building partial DOM trees from SAX events.") retrieve document type definitions from remote or local locations. The feature has similar implications as the external entity expansion issue. decompression bomb Decompression bombs (aka [ZIP bomb](https://en.wikipedia.org/wiki/Zip_bomb)) apply to all XML libraries that can parse compressed XML streams such as gzipped HTTP streams or LZMA-compressed files. For an attacker it can reduce the amount of transmitted data by three magnitudes or more. The documentation for [defusedxml](https://pypi.org/project/defusedxml/) on PyPI has further information about all known attack vectors with examples and references. The `defusedxml` Package ------------------------ [defusedxml](https://pypi.org/project/defusedxml/) is a pure Python package with modified subclasses of all stdlib XML parsers that prevent any potentially malicious operation. Use of this package is recommended for any server code that parses untrusted XML data. The package also ships with example exploits and extended documentation on more XML exploits such as XPath injection. python py_compile — Compile Python source files py\_compile — Compile Python source files ========================================= **Source code:** [Lib/py\_compile.py](https://github.com/python/cpython/tree/3.9/Lib/py_compile.py) The [`py_compile`](#module-py_compile "py_compile: Generate byte-code files from Python source files.") module provides a function to generate a byte-code file from a source file, and another function used when the module source file is invoked as a script. 
Though not often needed, this function can be useful when installing modules for shared use, especially if some of the users may not have permission to write the byte-code cache files in the directory containing the source code. `exception py_compile.PyCompileError` Exception raised when an error occurs while attempting to compile the file. `py_compile.compile(file, cfile=None, dfile=None, doraise=False, optimize=-1, invalidation_mode=PycInvalidationMode.TIMESTAMP, quiet=0)` Compile a source file to byte-code and write out the byte-code cache file. The source code is loaded from the file named *file*. The byte-code is written to *cfile*, which defaults to the [**PEP 3147**](https://www.python.org/dev/peps/pep-3147)/[**PEP 488**](https://www.python.org/dev/peps/pep-0488) path, ending in `.pyc`. For example, if *file* is `/foo/bar/baz.py` *cfile* will default to `/foo/bar/__pycache__/baz.cpython-32.pyc` for Python 3.2. If *dfile* is specified, it is used as the name of the source file in error messages instead of *file*. If *doraise* is true, a [`PyCompileError`](#py_compile.PyCompileError "py_compile.PyCompileError") is raised when an error is encountered while compiling *file*. If *doraise* is false (the default), an error string is written to `sys.stderr`, but no exception is raised. This function returns the path to the byte-compiled file, i.e. whatever *cfile* value was used. The *doraise* and *quiet* arguments determine how errors are handled while compiling *file*. If *quiet* is 0 or 1, and *doraise* is false, the default behaviour is enabled: an error string is written to `sys.stderr`, and the function returns `None` instead of a path. If *doraise* is true, a [`PyCompileError`](#py_compile.PyCompileError "py_compile.PyCompileError") is raised instead. However if *quiet* is 2, no message is written, and *doraise* has no effect. If the path that *cfile* becomes (either explicitly specified or computed) is a symlink or non-regular file, [`FileExistsError`](exceptions#FileExistsError "FileExistsError") will be raised. This is to act as a warning that import will turn those paths into regular files if it is allowed to write byte-compiled files to those paths. This is a side-effect of import using file renaming to place the final byte-compiled file into place to prevent concurrent file writing issues. *optimize* controls the optimization level and is passed to the built-in [`compile()`](functions#compile "compile") function. The default of `-1` selects the optimization level of the current interpreter. *invalidation\_mode* should be a member of the [`PycInvalidationMode`](#py_compile.PycInvalidationMode "py_compile.PycInvalidationMode") enum and controls how the generated bytecode cache is invalidated at runtime. The default is [`PycInvalidationMode.CHECKED_HASH`](#py_compile.PycInvalidationMode.CHECKED_HASH "py_compile.PycInvalidationMode.CHECKED_HASH") if the `SOURCE_DATE_EPOCH` environment variable is set, otherwise the default is [`PycInvalidationMode.TIMESTAMP`](#py_compile.PycInvalidationMode.TIMESTAMP "py_compile.PycInvalidationMode.TIMESTAMP"). Changed in version 3.2: Changed default value of *cfile* to be [**PEP 3147**](https://www.python.org/dev/peps/pep-3147)-compliant. Previous default was *file* + `'c'` (`'o'` if optimization was enabled). Also added the *optimize* parameter. Changed in version 3.4: Changed code to use [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") for the byte-code cache file writing. 
This means file creation/writing semantics now match what [`importlib`](importlib#module-importlib "importlib: The implementation of the import machinery.") does, e.g. permissions, write-and-move semantics, etc. Also added the caveat that [`FileExistsError`](exceptions#FileExistsError "FileExistsError") is raised if *cfile* is a symlink or non-regular file. Changed in version 3.7: The *invalidation\_mode* parameter was added as specified in [**PEP 552**](https://www.python.org/dev/peps/pep-0552). If the `SOURCE_DATE_EPOCH` environment variable is set, *invalidation\_mode* will be forced to [`PycInvalidationMode.CHECKED_HASH`](#py_compile.PycInvalidationMode.CHECKED_HASH "py_compile.PycInvalidationMode.CHECKED_HASH"). Changed in version 3.7.2: The `SOURCE_DATE_EPOCH` environment variable no longer overrides the value of the *invalidation\_mode* argument, and determines its default value instead. Changed in version 3.8: The *quiet* parameter was added. `class py_compile.PycInvalidationMode` An enumeration of possible methods the interpreter can use to determine whether a bytecode file is up to date with a source file. The `.pyc` file indicates the desired invalidation mode in its header. See [Cached bytecode invalidation](../reference/import#pyc-invalidation) for more information on how Python invalidates `.pyc` files at runtime. New in version 3.7. `TIMESTAMP` The `.pyc` file includes the timestamp and size of the source file, which Python will compare against the metadata of the source file at runtime to determine if the `.pyc` file needs to be regenerated. `CHECKED_HASH` The `.pyc` file includes a hash of the source file content, which Python will compare against the source at runtime to determine if the `.pyc` file needs to be regenerated. `UNCHECKED_HASH` Like [`CHECKED_HASH`](#py_compile.PycInvalidationMode.CHECKED_HASH "py_compile.PycInvalidationMode.CHECKED_HASH"), the `.pyc` file includes a hash of the source file content. However, Python will at runtime assume the `.pyc` file is up to date and not validate the `.pyc` against the source file at all. This option is useful when the `.pycs` are kept up to date by some system external to Python like a build system. `py_compile.main(args=None)` Compile several source files. The files named in *args* (or on the command line, if *args* is `None`) are compiled and the resulting byte-code is cached in the normal manner. This function does not search a directory structure to locate source files; it only compiles files named explicitly. If `'-'` is the only parameter in args, the list of files is taken from standard input. Changed in version 3.2: Added support for `'-'`. When this module is run as a script, the [`main()`](#py_compile.main "py_compile.main") function is used to compile all the files named on the command line. The exit status is nonzero if one of the files could not be compiled. See also `Module` [`compileall`](compileall#module-compileall "compileall: Tools for byte-compiling all Python source files in a directory tree.") Utilities to compile all Python source files in a directory tree. python Event Loop Event Loop ========== **Source code:** [Lib/asyncio/events.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/events.py), [Lib/asyncio/base\_events.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/base_events.py) #### Preface The event loop is the core of every asyncio application. Event loops run asynchronous tasks and callbacks, perform network IO operations, and run subprocesses. 
Application developers should typically use the high-level asyncio functions, such as [`asyncio.run()`](asyncio-task#asyncio.run "asyncio.run"), and should rarely need to reference the loop object or call its methods. This section is intended mostly for authors of lower-level code, libraries, and frameworks, who need finer control over the event loop behavior. #### Obtaining the Event Loop The following low-level functions can be used to get, set, or create an event loop: `asyncio.get_running_loop()` Return the running event loop in the current OS thread. If there is no running event loop a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. This function can only be called from a coroutine or a callback. New in version 3.7. `asyncio.get_event_loop()` Get the current event loop. If there is no current event loop set in the current OS thread, the OS thread is main, and [`set_event_loop()`](#asyncio.set_event_loop "asyncio.set_event_loop") has not yet been called, asyncio will create a new event loop and set it as the current one. Because this function has rather complex behavior (especially when custom event loop policies are in use), using the [`get_running_loop()`](#asyncio.get_running_loop "asyncio.get_running_loop") function is preferred to [`get_event_loop()`](#asyncio.get_event_loop "asyncio.get_event_loop") in coroutines and callbacks. Consider also using the [`asyncio.run()`](asyncio-task#asyncio.run "asyncio.run") function instead of using lower level functions to manually create and close an event loop. `asyncio.set_event_loop(loop)` Set *loop* as the current event loop for the current OS thread. `asyncio.new_event_loop()` Create and return a new event loop object. Note that the behaviour of [`get_event_loop()`](#asyncio.get_event_loop "asyncio.get_event_loop"), [`set_event_loop()`](#asyncio.set_event_loop "asyncio.set_event_loop"), and [`new_event_loop()`](#asyncio.new_event_loop "asyncio.new_event_loop") functions can be altered by [setting a custom event loop policy](asyncio-policy#asyncio-policies). #### Contents This documentation page contains the following sections: * The [Event Loop Methods](#event-loop-methods) section is the reference documentation of the event loop APIs; * The [Callback Handles](#callback-handles) section documents the [`Handle`](#asyncio.Handle "asyncio.Handle") and [`TimerHandle`](#asyncio.TimerHandle "asyncio.TimerHandle") instances which are returned from scheduling methods such as [`loop.call_soon()`](#asyncio.loop.call_soon "asyncio.loop.call_soon") and [`loop.call_later()`](#asyncio.loop.call_later "asyncio.loop.call_later"); * The [Server Objects](#server-objects) section documents types returned from event loop methods like [`loop.create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server"); * The [Event Loop Implementations](#event-loop-implementations) section documents the [`SelectorEventLoop`](#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") and [`ProactorEventLoop`](#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") classes; * The [Examples](#examples) section showcases how to work with some event loop APIs. 
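A minimal sketch of the preferred pattern from the section above: reach the loop with `get_running_loop()` from inside coroutine code, and let `asyncio.run()` manage the loop's lifecycle:

```
import asyncio

async def main():
    loop = asyncio.get_running_loop()  # safe: a loop is guaranteed to be running here
    print('monotonic loop time:', loop.time())

asyncio.run(main())  # creates, runs, and closes the loop
```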
Event Loop Methods ------------------ Event loops have **low-level** APIs for the following: * [Running and stopping the loop](#running-and-stopping-the-loop) * [Scheduling callbacks](#scheduling-callbacks) * [Scheduling delayed callbacks](#scheduling-delayed-callbacks) * [Creating Futures and Tasks](#creating-futures-and-tasks) * [Opening network connections](#opening-network-connections) * [Creating network servers](#creating-network-servers) * [Transferring files](#transferring-files) * [TLS Upgrade](#tls-upgrade) * [Watching file descriptors](#watching-file-descriptors) * [Working with socket objects directly](#working-with-socket-objects-directly) * [DNS](#dns) * [Working with pipes](#working-with-pipes) * [Unix signals](#unix-signals) * [Executing code in thread or process pools](#executing-code-in-thread-or-process-pools) * [Error Handling API](#error-handling-api) * [Enabling debug mode](#enabling-debug-mode) * [Running Subprocesses](#running-subprocesses) ### Running and stopping the loop `loop.run_until_complete(future)` Run until the *future* (an instance of [`Future`](asyncio-future#asyncio.Future "asyncio.Future")) has completed. If the argument is a [coroutine object](asyncio-task#coroutine) it is implicitly scheduled to run as a [`asyncio.Task`](asyncio-task#asyncio.Task "asyncio.Task"). Return the Future’s result or raise its exception. `loop.run_forever()` Run the event loop until [`stop()`](#asyncio.loop.stop "asyncio.loop.stop") is called. If [`stop()`](#asyncio.loop.stop "asyncio.loop.stop") is called before [`run_forever()`](#asyncio.loop.run_forever "asyncio.loop.run_forever") is called, the loop will poll the I/O selector once with a timeout of zero, run all callbacks scheduled in response to I/O events (and those that were already scheduled), and then exit. If [`stop()`](#asyncio.loop.stop "asyncio.loop.stop") is called while [`run_forever()`](#asyncio.loop.run_forever "asyncio.loop.run_forever") is running, the loop will run the current batch of callbacks and then exit. Note that new callbacks scheduled by callbacks will not run in this case; instead, they will run the next time [`run_forever()`](#asyncio.loop.run_forever "asyncio.loop.run_forever") or [`run_until_complete()`](#asyncio.loop.run_until_complete "asyncio.loop.run_until_complete") is called. `loop.stop()` Stop the event loop. `loop.is_running()` Return `True` if the event loop is currently running. `loop.is_closed()` Return `True` if the event loop was closed. `loop.close()` Close the event loop. The loop must not be running when this function is called. Any pending callbacks will be discarded. This method clears all queues and shuts down the executor, but does not wait for the executor to finish. This method is idempotent and irreversible. No other methods should be called after the event loop is closed. `coroutine loop.shutdown_asyncgens()` Schedule all currently open [asynchronous generator](../glossary#term-asynchronous-generator) objects to close with an [`aclose()`](../reference/expressions#agen.aclose "agen.aclose") call. After calling this method, the event loop will issue a warning if a new asynchronous generator is iterated. This should be used to reliably finalize all scheduled asynchronous generators. Note that there is no need to call this function when [`asyncio.run()`](asyncio-task#asyncio.run "asyncio.run") is used. Example: ``` try: loop.run_forever() finally: loop.run_until_complete(loop.shutdown_asyncgens()) loop.close() ``` New in version 3.6. 
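Putting the methods in this section together, a hedged sketch of a manually managed loop lifecycle (asyncio.run() performs an equivalent sequence for you):

```
import asyncio

async def work():
    await asyncio.sleep(0.1)
    return 'done'

loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(work()))
finally:
    # finalize any open async generators before closing, as recommended above
    loop.run_until_complete(loop.shutdown_asyncgens())
    loop.close()
```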
`coroutine loop.shutdown_default_executor()`

Schedule the closure of the default executor and wait for it to join all of the threads in the `ThreadPoolExecutor`. After calling this method, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") will be raised if [`loop.run_in_executor()`](#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor") is called while using the default executor.

Note that there is no need to call this function when [`asyncio.run()`](asyncio-task#asyncio.run "asyncio.run") is used.

New in version 3.9.

### Scheduling callbacks

`loop.call_soon(callback, *args, context=None)`

Schedule the [callback](../glossary#term-callback) *callback* to be called with *args* arguments at the next iteration of the event loop.

Callbacks are called in the order in which they are registered. Each callback will be called exactly once.

An optional keyword-only *context* argument allows specifying a custom [`contextvars.Context`](contextvars#contextvars.Context "contextvars.Context") for the *callback* to run in. The current context is used when no *context* is provided.

An instance of [`asyncio.Handle`](#asyncio.Handle "asyncio.Handle") is returned, which can be used later to cancel the callback.

This method is not thread-safe.

`loop.call_soon_threadsafe(callback, *args, context=None)`

A thread-safe variant of [`call_soon()`](#asyncio.loop.call_soon "asyncio.loop.call_soon"). Must be used to schedule callbacks *from another thread*.

Raises [`RuntimeError`](exceptions#RuntimeError "RuntimeError") if called on a loop that’s been closed. This can happen on a secondary thread when the main application is shutting down.

See the [concurrency and multithreading](asyncio-dev#asyncio-multithreading) section of the documentation.

Changed in version 3.7: The *context* keyword-only parameter was added. See [**PEP 567**](https://www.python.org/dev/peps/pep-0567) for more details.

Note

Most [`asyncio`](asyncio#module-asyncio "asyncio: Asynchronous I/O.") scheduling functions don’t allow passing keyword arguments. To do that, use [`functools.partial()`](functools#functools.partial "functools.partial"):

```
# will schedule "print('Hello', flush=True)"
loop.call_soon(
    functools.partial(print, "Hello", flush=True))
```

Using partial objects is usually more convenient than using lambdas, as asyncio can render partial objects better in debug and error messages.

### Scheduling delayed callbacks

The event loop provides mechanisms to schedule callback functions to be called at some point in the future. The event loop uses monotonic clocks to track time.

`loop.call_later(delay, callback, *args, context=None)`

Schedule *callback* to be called after the given *delay* number of seconds (can be either an int or a float).

An instance of [`asyncio.TimerHandle`](#asyncio.TimerHandle "asyncio.TimerHandle") is returned which can be used to cancel the callback.

*callback* will be called exactly once. If two callbacks are scheduled for exactly the same time, the order in which they are called is undefined.

The optional positional *args* will be passed to the callback when it is called. If you want the callback to be called with keyword arguments use [`functools.partial()`](functools#functools.partial "functools.partial").

An optional keyword-only *context* argument allows specifying a custom [`contextvars.Context`](contextvars#contextvars.Context "contextvars.Context") for the *callback* to run in. The current context is used when no *context* is provided.
Changed in version 3.7: The *context* keyword-only parameter was added. See [**PEP 567**](https://www.python.org/dev/peps/pep-0567) for more details.

Changed in version 3.8: In Python 3.7 and earlier with the default event loop implementation, the *delay* could not exceed one day. This has been fixed in Python 3.8.

`loop.call_at(when, callback, *args, context=None)`

Schedule *callback* to be called at the given absolute timestamp *when* (an int or a float), using the same time reference as [`loop.time()`](#asyncio.loop.time "asyncio.loop.time").

This method’s behavior is the same as [`call_later()`](#asyncio.loop.call_later "asyncio.loop.call_later").

An instance of [`asyncio.TimerHandle`](#asyncio.TimerHandle "asyncio.TimerHandle") is returned which can be used to cancel the callback.

Changed in version 3.7: The *context* keyword-only parameter was added. See [**PEP 567**](https://www.python.org/dev/peps/pep-0567) for more details.

Changed in version 3.8: In Python 3.7 and earlier with the default event loop implementation, the difference between *when* and the current time could not exceed one day. This has been fixed in Python 3.8.

`loop.time()`

Return the current time, as a [`float`](functions#float "float") value, according to the event loop’s internal monotonic clock.

Note

Changed in version 3.8: In Python 3.7 and earlier timeouts (relative *delay* or absolute *when*) should not exceed one day. This has been fixed in Python 3.8.

See also

The [`asyncio.sleep()`](asyncio-task#asyncio.sleep "asyncio.sleep") function.

### Creating Futures and Tasks

`loop.create_future()`

Create an [`asyncio.Future`](asyncio-future#asyncio.Future "asyncio.Future") object attached to the event loop.

This is the preferred way to create Futures in asyncio. This lets third-party event loops provide alternative implementations of the Future object (with better performance or instrumentation).

New in version 3.5.2.

`loop.create_task(coro, *, name=None)`

Schedule the execution of a [coroutine](asyncio-task#coroutine). Return a [`Task`](asyncio-task#asyncio.Task "asyncio.Task") object.

Third-party event loops can use their own subclass of [`Task`](asyncio-task#asyncio.Task "asyncio.Task") for interoperability. In this case, the result type is a subclass of [`Task`](asyncio-task#asyncio.Task "asyncio.Task").

If the *name* argument is provided and not `None`, it is set as the name of the task using [`Task.set_name()`](asyncio-task#asyncio.Task.set_name "asyncio.Task.set_name").

Changed in version 3.8: Added the `name` parameter.

`loop.set_task_factory(factory)`

Set a task factory that will be used by [`loop.create_task()`](#asyncio.loop.create_task "asyncio.loop.create_task").

If *factory* is `None` the default task factory will be set. Otherwise, *factory* must be a *callable* with the signature matching `(loop, coro)`, where *loop* is a reference to the active event loop, and *coro* is a coroutine object. The callable must return an [`asyncio.Future`](asyncio-future#asyncio.Future "asyncio.Future")-compatible object.

`loop.get_task_factory()`

Return a task factory or `None` if the default one is in use.

### Opening network connections

`coroutine loop.create_connection(protocol_factory, host=None, port=None, *, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None, happy_eyeballs_delay=None, interleave=None)`

Open a streaming transport connection to a given address specified by *host* and *port*.
The socket family can be either [`AF_INET`](socket#socket.AF_INET "socket.AF_INET") or [`AF_INET6`](socket#socket.AF_INET6 "socket.AF_INET6") depending on *host* (or the *family* argument, if provided).

The socket type will be [`SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM").

*protocol\_factory* must be a callable returning an [asyncio protocol](asyncio-protocol#asyncio-protocol) implementation.

This method will try to establish the connection in the background. When successful, it returns a `(transport, protocol)` pair.

The chronological synopsis of the underlying operation is as follows:

1. The connection is established and a [transport](asyncio-protocol#asyncio-transport) is created for it.
2. *protocol\_factory* is called without arguments and is expected to return a [protocol](asyncio-protocol#asyncio-protocol) instance.
3. The protocol instance is coupled with the transport by calling its [`connection_made()`](asyncio-protocol#asyncio.BaseProtocol.connection_made "asyncio.BaseProtocol.connection_made") method.
4. A `(transport, protocol)` tuple is returned on success.

The created transport is an implementation-dependent bidirectional stream.

Other arguments:

* *ssl*: if given and not false, an SSL/TLS transport is created (by default a plain TCP transport is created). If *ssl* is an [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") object, this context is used to create the transport; if *ssl* is [`True`](constants#True "True"), a default context returned from [`ssl.create_default_context()`](ssl#ssl.create_default_context "ssl.create_default_context") is used. See also [SSL/TLS security considerations](ssl#ssl-security)
* *server\_hostname* sets or overrides the hostname that the target server’s certificate will be matched against. Should only be passed if *ssl* is not `None`. By default the value of the *host* argument is used. If *host* is empty, there is no default and you must pass a value for *server\_hostname*. If *server\_hostname* is an empty string, hostname matching is disabled (which is a serious security risk, allowing for potential man-in-the-middle attacks).
* *family*, *proto*, *flags* are the optional address family, protocol and flags to be passed through to getaddrinfo() for *host* resolution. If given, these should all be integers from the corresponding [`socket`](socket#module-socket "socket: Low-level networking interface.") module constants.
* *happy\_eyeballs\_delay*, if given, enables Happy Eyeballs for this connection. It should be a floating-point number representing the amount of time in seconds to wait for a connection attempt to complete, before starting the next attempt in parallel. This is the “Connection Attempt Delay” as defined in [**RFC 8305**](https://tools.ietf.org/html/rfc8305.html). A sensible default value recommended by the RFC is `0.25` (250 milliseconds).
* *interleave* controls address reordering when a host name resolves to multiple IP addresses. If `0` or unspecified, no reordering is done, and addresses are tried in the order returned by [`getaddrinfo()`](#asyncio.loop.getaddrinfo "asyncio.loop.getaddrinfo"). If a positive integer is specified, the addresses are interleaved by address family, and the given integer is interpreted as “First Address Family Count” as defined in [**RFC 8305**](https://tools.ietf.org/html/rfc8305.html). The default is `0` if *happy\_eyeballs\_delay* is not specified, and `1` if it is.
* *sock*, if given, should be an existing, already connected [`socket.socket`](socket#socket.socket "socket.socket") object to be used by the transport. If *sock* is given, none of *host*, *port*, *family*, *proto*, *flags*, *happy\_eyeballs\_delay*, *interleave* and *local\_addr* should be specified.
* *local\_addr*, if given, is a `(local_host, local_port)` tuple used to bind the socket locally. The *local\_host* and *local\_port* are looked up using `getaddrinfo()`, similarly to *host* and *port*.
* *ssl\_handshake\_timeout* is (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection. `60.0` seconds if `None` (default).

New in version 3.8: Added the *happy\_eyeballs\_delay* and *interleave* parameters.

Happy Eyeballs Algorithm: Success with Dual-Stack Hosts. When a server’s IPv4 path and protocol are working, but the server’s IPv6 path and protocol are not working, a dual-stack client application experiences significant connection delay compared to an IPv4-only client. This is undesirable because it causes the dual-stack client to have a worse user experience. This document specifies requirements for algorithms that reduce this user-visible delay and provides an algorithm.

For more information: <https://tools.ietf.org/html/rfc6555>

New in version 3.7: The *ssl\_handshake\_timeout* parameter.

Changed in version 3.6: The socket option `TCP_NODELAY` is set by default for all TCP connections.

Changed in version 3.5: Added support for SSL/TLS in [`ProactorEventLoop`](#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop").

See also

The [`open_connection()`](asyncio-stream#asyncio.open_connection "asyncio.open_connection") function is a high-level alternative API. It returns a pair of ([`StreamReader`](asyncio-stream#asyncio.StreamReader "asyncio.StreamReader"), [`StreamWriter`](asyncio-stream#asyncio.StreamWriter "asyncio.StreamWriter")) that can be used directly in async/await code.

`coroutine loop.create_datagram_endpoint(protocol_factory, local_addr=None, remote_addr=None, *, family=0, proto=0, flags=0, reuse_address=None, reuse_port=None, allow_broadcast=None, sock=None)`

Note

The parameter *reuse\_address* is no longer supported, as using `SO_REUSEADDR` poses a significant security concern for UDP. Explicitly passing `reuse_address=True` will raise an exception.

When multiple processes with differing UIDs assign sockets to an identical UDP socket address with `SO_REUSEADDR`, incoming packets can become randomly distributed among the sockets.

For supported platforms, *reuse\_port* can be used as a replacement for similar functionality. With *reuse\_port*, `SO_REUSEPORT` is used instead, which specifically prevents processes with differing UIDs from assigning sockets to the same socket address.

Create a datagram connection.

The socket family can be either [`AF_INET`](socket#socket.AF_INET "socket.AF_INET"), [`AF_INET6`](socket#socket.AF_INET6 "socket.AF_INET6"), or [`AF_UNIX`](socket#socket.AF_UNIX "socket.AF_UNIX"), depending on *host* (or the *family* argument, if provided).

The socket type will be [`SOCK_DGRAM`](socket#socket.SOCK_DGRAM "socket.SOCK_DGRAM").

*protocol\_factory* must be a callable returning a [protocol](asyncio-protocol#asyncio-protocol) implementation.

A tuple of `(transport, protocol)` is returned on success.

Other arguments:

* *local\_addr*, if given, is a `(local_host, local_port)` tuple used to bind the socket locally.
The *local\_host* and *local\_port* are looked up using [`getaddrinfo()`](#asyncio.loop.getaddrinfo "asyncio.loop.getaddrinfo").

* *remote\_addr*, if given, is a `(remote_host, remote_port)` tuple used to connect the socket to a remote address. The *remote\_host* and *remote\_port* are looked up using [`getaddrinfo()`](#asyncio.loop.getaddrinfo "asyncio.loop.getaddrinfo").
* *family*, *proto*, *flags* are the optional address family, protocol and flags to be passed through to [`getaddrinfo()`](#asyncio.loop.getaddrinfo "asyncio.loop.getaddrinfo") for *host* resolution. If given, these should all be integers from the corresponding [`socket`](socket#module-socket "socket: Low-level networking interface.") module constants.
* *reuse\_port* tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to, so long as they all set this flag when being created. This option is not supported on Windows and some Unixes. If the `SO_REUSEPORT` constant is not defined then this capability is unsupported.
* *allow\_broadcast* tells the kernel to allow this endpoint to send messages to the broadcast address.
* *sock* can optionally be specified in order to use a preexisting, already connected, [`socket.socket`](socket#socket.socket "socket.socket") object with the transport. If specified, *local\_addr* and *remote\_addr* should be omitted (must be [`None`](constants#None "None")).

See [UDP echo client protocol](asyncio-protocol#asyncio-udp-echo-client-protocol) and [UDP echo server protocol](asyncio-protocol#asyncio-udp-echo-server-protocol) examples.

Changed in version 3.4.4: The *family*, *proto*, *flags*, *reuse\_address*, *reuse\_port*, *allow\_broadcast*, and *sock* parameters were added.

Changed in version 3.8.1: The *reuse\_address* parameter is no longer supported due to security concerns.

Changed in version 3.8: Added support for Windows.

`coroutine loop.create_unix_connection(protocol_factory, path=None, *, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None)`

Create a Unix connection.

The socket family will be [`AF_UNIX`](socket#socket.AF_UNIX "socket.AF_UNIX"); socket type will be [`SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM").

A tuple of `(transport, protocol)` is returned on success.

*path* is the name of a Unix domain socket and is required, unless a *sock* parameter is specified. Abstract Unix sockets, [`str`](stdtypes#str "str"), [`bytes`](stdtypes#bytes "bytes"), and [`Path`](pathlib#pathlib.Path "pathlib.Path") paths are supported.

See the documentation of the [`loop.create_connection()`](#asyncio.loop.create_connection "asyncio.loop.create_connection") method for information about arguments to this method.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

New in version 3.7: The *ssl\_handshake\_timeout* parameter.

Changed in version 3.7: The *path* parameter can now be a [path-like object](../glossary#term-path-like-object).

### Creating network servers

`coroutine loop.create_server(protocol_factory, host=None, port=None, *, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, ssl_handshake_timeout=None, start_serving=True)`

Create a TCP server (socket type [`SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM")) listening on *port* of the *host* address.

Returns a [`Server`](#asyncio.Server "asyncio.Server") object.
Arguments:

* *protocol\_factory* must be a callable returning a [protocol](asyncio-protocol#asyncio-protocol) implementation.
* The *host* parameter can be set to several types which determine where the server would be listening:
  + If *host* is a string, the TCP server is bound to a single network interface specified by *host*.
  + If *host* is a sequence of strings, the TCP server is bound to all network interfaces specified by the sequence.
  + If *host* is an empty string or `None`, all interfaces are assumed and a list of multiple sockets will be returned (most likely one for IPv4 and another one for IPv6).
* The *port* parameter can be set to specify which port the server should listen on. If `0` or `None` (the default), a random unused port will be selected (note that if *host* resolves to multiple network interfaces, a different random port will be selected for each interface).
* *family* can be set to either [`socket.AF_INET`](socket#socket.AF_INET "socket.AF_INET") or [`AF_INET6`](socket#socket.AF_INET6 "socket.AF_INET6") to force the socket to use IPv4 or IPv6. If not set, the *family* will be determined from host name (defaults to `AF_UNSPEC`).
* *flags* is a bitmask for [`getaddrinfo()`](#asyncio.loop.getaddrinfo "asyncio.loop.getaddrinfo").
* *sock* can optionally be specified in order to use a preexisting socket object. If specified, *host* and *port* must not be specified.
* *backlog* is the maximum number of queued connections passed to [`listen()`](socket#socket.socket.listen "socket.socket.listen") (defaults to 100).
* *ssl* can be set to an [`SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") instance to enable TLS over the accepted connections.
* *reuse\_address* tells the kernel to reuse a local socket in `TIME_WAIT` state, without waiting for its natural timeout to expire. If not specified will automatically be set to `True` on Unix.
* *reuse\_port* tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to, so long as they all set this flag when being created. This option is not supported on Windows.
* *ssl\_handshake\_timeout* is (for a TLS server) the time in seconds to wait for the TLS handshake to complete before aborting the connection. `60.0` seconds if `None` (default).
* *start\_serving* set to `True` (the default) causes the created server to start accepting connections immediately. When set to `False`, the user should await on [`Server.start_serving()`](#asyncio.Server.start_serving "asyncio.Server.start_serving") or [`Server.serve_forever()`](#asyncio.Server.serve_forever "asyncio.Server.serve_forever") to make the server start accepting connections.

New in version 3.7: Added *ssl\_handshake\_timeout* and *start\_serving* parameters.

Changed in version 3.6: The socket option `TCP_NODELAY` is set by default for all TCP connections.

Changed in version 3.5: Added support for SSL/TLS in [`ProactorEventLoop`](#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop").

Changed in version 3.5.1: The *host* parameter can be a sequence of strings.

See also

The [`start_server()`](asyncio-stream#asyncio.start_server "asyncio.start_server") function is a higher-level alternative API that returns a pair of [`StreamReader`](asyncio-stream#asyncio.StreamReader "asyncio.StreamReader") and [`StreamWriter`](asyncio-stream#asyncio.StreamWriter "asyncio.StreamWriter") that can be used in async/await code.
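As a minimal sketch of calling this method directly (the protocol class, address, and port here are illustrative), an echo server built from a protocol factory:

```
import asyncio

class EchoProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # Echo the received bytes back to the client.
        self.transport.write(data)

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(EchoProtocol, '127.0.0.1', 8888)
    # The server is closed when the async with block is exited.
    async with server:
        await server.serve_forever()

asyncio.run(main())
```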
`coroutine loop.create_unix_server(protocol_factory, path=None, *, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, start_serving=True)`

Similar to [`loop.create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server") but works with the [`AF_UNIX`](socket#socket.AF_UNIX "socket.AF_UNIX") socket family.

*path* is the name of a Unix domain socket, and is required, unless a *sock* argument is provided. Abstract Unix sockets, [`str`](stdtypes#str "str"), [`bytes`](stdtypes#bytes "bytes"), and [`Path`](pathlib#pathlib.Path "pathlib.Path") paths are supported.

See the documentation of the [`loop.create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server") method for information about arguments to this method.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

New in version 3.7: The *ssl\_handshake\_timeout* and *start\_serving* parameters.

Changed in version 3.7: The *path* parameter can now be a [`Path`](pathlib#pathlib.Path "pathlib.Path") object.

`coroutine loop.connect_accepted_socket(protocol_factory, sock, *, ssl=None, ssl_handshake_timeout=None)`

Wrap an already accepted connection into a transport/protocol pair.

This method can be used by servers that accept connections outside of asyncio but that use asyncio to handle them.

Parameters:

* *protocol\_factory* must be a callable returning a [protocol](asyncio-protocol#asyncio-protocol) implementation.
* *sock* is a preexisting socket object returned from [`socket.accept`](socket#socket.socket.accept "socket.socket.accept").
* *ssl* can be set to an [`SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") to enable SSL over the accepted connections.
* *ssl\_handshake\_timeout* is (for an SSL connection) the time in seconds to wait for the SSL handshake to complete before aborting the connection. `60.0` seconds if `None` (default).

Returns a `(transport, protocol)` pair.

New in version 3.7: The *ssl\_handshake\_timeout* parameter.

New in version 3.5.3.

### Transferring files

`coroutine loop.sendfile(transport, file, offset=0, count=None, *, fallback=True)`

Send a *file* over a *transport*. Return the total number of bytes sent.

The method uses high-performance [`os.sendfile()`](os#os.sendfile "os.sendfile") if available.

*file* must be a regular file object opened in binary mode.

*offset* tells from where to start reading the file. If specified, *count* is the total number of bytes to transmit as opposed to sending the file until EOF is reached. File position is always updated, even when this method raises an error, and [`file.tell()`](io#io.IOBase.tell "io.IOBase.tell") can be used to obtain the actual number of bytes sent.

*fallback* set to `True` makes asyncio manually read and send the file when the platform does not support the sendfile system call (e.g. Windows or SSL socket on Unix).

Raise [`SendfileNotAvailableError`](asyncio-exceptions#asyncio.SendfileNotAvailableError "asyncio.SendfileNotAvailableError") if the system does not support the *sendfile* syscall and *fallback* is `False`.

New in version 3.7.

### TLS Upgrade

`coroutine loop.start_tls(transport, protocol, sslcontext, *, server_side=False, server_hostname=None, ssl_handshake_timeout=None)`

Upgrade an existing transport-based connection to TLS.

Return a new transport instance that the *protocol* must start using immediately after the *await*. The *transport* instance passed to the *start\_tls* method should never be used again.
Parameters:

* *transport* and *protocol* instances that methods like [`create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server") and [`create_connection()`](#asyncio.loop.create_connection "asyncio.loop.create_connection") return.
* *sslcontext*: a configured instance of [`SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext").
* *server\_side*: pass `True` when a server-side connection is being upgraded (like the one created by [`create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server")).
* *server\_hostname*: sets or overrides the host name that the target server’s certificate will be matched against.
* *ssl\_handshake\_timeout* is (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection. `60.0` seconds if `None` (default).

New in version 3.7.

### Watching file descriptors

`loop.add_reader(fd, callback, *args)`

Start monitoring the *fd* file descriptor for read availability and invoke *callback* with the specified arguments once *fd* is available for reading.

`loop.remove_reader(fd)`

Stop monitoring the *fd* file descriptor for read availability.

`loop.add_writer(fd, callback, *args)`

Start monitoring the *fd* file descriptor for write availability and invoke *callback* with the specified arguments once *fd* is available for writing.

Use [`functools.partial()`](functools#functools.partial "functools.partial") [to pass keyword arguments](#asyncio-pass-keywords) to *callback*.

`loop.remove_writer(fd)`

Stop monitoring the *fd* file descriptor for write availability.

See also

[Platform Support](asyncio-platforms#asyncio-platform-support) section for some limitations of these methods.

### Working with socket objects directly

In general, protocol implementations that use transport-based APIs such as [`loop.create_connection()`](#asyncio.loop.create_connection "asyncio.loop.create_connection") and [`loop.create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server") are faster than implementations that work with sockets directly. However, there are some use cases when performance is not critical, and working with [`socket`](socket#socket.socket "socket.socket") objects directly is more convenient.

`coroutine loop.sock_recv(sock, nbytes)`

Receive up to *nbytes* from *sock*. Asynchronous version of [`socket.recv()`](socket#socket.socket.recv "socket.socket.recv").

Return the received data as a bytes object.

*sock* must be a non-blocking socket.

Changed in version 3.7: Even though this method was always documented as a coroutine method, releases before Python 3.7 returned a [`Future`](asyncio-future#asyncio.Future "asyncio.Future"). Since Python 3.7 this is an `async def` method.

`coroutine loop.sock_recv_into(sock, buf)`

Receive data from *sock* into the *buf* buffer. Modeled after the blocking [`socket.recv_into()`](socket#socket.socket.recv_into "socket.socket.recv_into") method.

Return the number of bytes written to the buffer.

*sock* must be a non-blocking socket.

New in version 3.7.

`coroutine loop.sock_sendall(sock, data)`

Send *data* to the *sock* socket. Asynchronous version of [`socket.sendall()`](socket#socket.socket.sendall "socket.socket.sendall").

This method continues to send to the socket until either all data in *data* has been sent or an error occurs. `None` is returned on success. On error, an exception is raised. Additionally, there is no way to determine how much data, if any, was successfully processed by the receiving end of the connection.
*sock* must be a non-blocking socket.

Changed in version 3.7: Even though the method was always documented as a coroutine method, before Python 3.7 it returned a [`Future`](asyncio-future#asyncio.Future "asyncio.Future"). Since Python 3.7, this is an `async def` method.

`coroutine loop.sock_connect(sock, address)`

Connect *sock* to a remote socket at *address*.

Asynchronous version of [`socket.connect()`](socket#socket.socket.connect "socket.socket.connect").

*sock* must be a non-blocking socket.

Changed in version 3.5.2: `address` no longer needs to be resolved. `sock_connect` will try to check if the *address* is already resolved by calling [`socket.inet_pton()`](socket#socket.inet_pton "socket.inet_pton"). If not, [`loop.getaddrinfo()`](#asyncio.loop.getaddrinfo "asyncio.loop.getaddrinfo") will be used to resolve the *address*.

See also

[`loop.create_connection()`](#asyncio.loop.create_connection "asyncio.loop.create_connection") and [`asyncio.open_connection()`](asyncio-stream#asyncio.open_connection "asyncio.open_connection").

`coroutine loop.sock_accept(sock)`

Accept a connection. Modeled after the blocking [`socket.accept()`](socket#socket.socket.accept "socket.socket.accept") method.

The socket must be bound to an address and listening for connections. The return value is a pair `(conn, address)` where *conn* is a *new* socket object usable to send and receive data on the connection, and *address* is the address bound to the socket on the other end of the connection.

*sock* must be a non-blocking socket.

Changed in version 3.7: Even though the method was always documented as a coroutine method, before Python 3.7 it returned a [`Future`](asyncio-future#asyncio.Future "asyncio.Future"). Since Python 3.7, this is an `async def` method.

See also

[`loop.create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server") and [`start_server()`](asyncio-stream#asyncio.start_server "asyncio.start_server").

`coroutine loop.sock_sendfile(sock, file, offset=0, count=None, *, fallback=True)`

Send a file using high-performance [`os.sendfile`](os#os.sendfile "os.sendfile") if possible. Return the total number of bytes sent.

Asynchronous version of [`socket.sendfile()`](socket#socket.socket.sendfile "socket.socket.sendfile").

*sock* must be a non-blocking [`socket.SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM") [`socket`](socket#socket.socket "socket.socket").

*file* must be a regular file object open in binary mode.

*offset* tells from where to start reading the file. If specified, *count* is the total number of bytes to transmit as opposed to sending the file until EOF is reached. File position is always updated, even when this method raises an error, and [`file.tell()`](io#io.IOBase.tell "io.IOBase.tell") can be used to obtain the actual number of bytes sent.

*fallback*, when set to `True`, makes asyncio manually read and send the file when the platform does not support the sendfile syscall (e.g. Windows or SSL socket on Unix).

Raise [`SendfileNotAvailableError`](asyncio-exceptions#asyncio.SendfileNotAvailableError "asyncio.SendfileNotAvailableError") if the system does not support the *sendfile* syscall and *fallback* is `False`.

New in version 3.7.

### DNS

`coroutine loop.getaddrinfo(host, port, *, family=0, type=0, proto=0, flags=0)`

Asynchronous version of [`socket.getaddrinfo()`](socket#socket.getaddrinfo "socket.getaddrinfo").
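A minimal sketch of resolving a host name without blocking the event loop (the host and port are illustrative):

```
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()
    # Each entry is a (family, type, proto, canonname, sockaddr)
    # tuple, as with socket.getaddrinfo().
    infos = await loop.getaddrinfo(
        'example.org', 443, type=socket.SOCK_STREAM)
    for family, type_, proto, canonname, sockaddr in infos:
        print(family, sockaddr)

asyncio.run(main())
```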
`coroutine loop.getnameinfo(sockaddr, flags=0)`

Asynchronous version of [`socket.getnameinfo()`](socket#socket.getnameinfo "socket.getnameinfo").

Changed in version 3.7: Both *getaddrinfo* and *getnameinfo* methods were always documented to return a coroutine, but prior to Python 3.7 they were, in fact, returning [`asyncio.Future`](asyncio-future#asyncio.Future "asyncio.Future") objects. Starting with Python 3.7 both methods are coroutines.

### Working with pipes

`coroutine loop.connect_read_pipe(protocol_factory, pipe)`

Register the read end of *pipe* in the event loop.

*protocol\_factory* must be a callable returning an [asyncio protocol](asyncio-protocol#asyncio-protocol) implementation.

*pipe* is a [file-like object](../glossary#term-file-object).

Return a pair `(transport, protocol)`, where *transport* supports the [`ReadTransport`](asyncio-protocol#asyncio.ReadTransport "asyncio.ReadTransport") interface and *protocol* is an object instantiated by the *protocol\_factory*.

With the [`SelectorEventLoop`](#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") event loop, the *pipe* is set to non-blocking mode.

`coroutine loop.connect_write_pipe(protocol_factory, pipe)`

Register the write end of *pipe* in the event loop.

*protocol\_factory* must be a callable returning an [asyncio protocol](asyncio-protocol#asyncio-protocol) implementation.

*pipe* is a [file-like object](../glossary#term-file-object).

Return a pair `(transport, protocol)`, where *transport* supports the [`WriteTransport`](asyncio-protocol#asyncio.WriteTransport "asyncio.WriteTransport") interface and *protocol* is an object instantiated by the *protocol\_factory*.

With the [`SelectorEventLoop`](#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") event loop, the *pipe* is set to non-blocking mode.

Note

[`SelectorEventLoop`](#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") does not support the above methods on Windows. Use [`ProactorEventLoop`](#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") instead for Windows.

See also

The [`loop.subprocess_exec()`](#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec") and [`loop.subprocess_shell()`](#asyncio.loop.subprocess_shell "asyncio.loop.subprocess_shell") methods.

### Unix signals

`loop.add_signal_handler(signum, callback, *args)`

Set *callback* as the handler for the *signum* signal.

The callback will be invoked by *loop*, along with other queued callbacks and runnable coroutines of that event loop. Unlike signal handlers registered using [`signal.signal()`](signal#signal.signal "signal.signal"), a callback registered with this function is allowed to interact with the event loop.

Raise [`ValueError`](exceptions#ValueError "ValueError") if the signal number is invalid or uncatchable. Raise [`RuntimeError`](exceptions#RuntimeError "RuntimeError") if there is a problem setting up the handler.

Use [`functools.partial()`](functools#functools.partial "functools.partial") [to pass keyword arguments](#asyncio-pass-keywords) to *callback*.

Like [`signal.signal()`](signal#signal.signal "signal.signal"), this function must be invoked in the main thread.

`loop.remove_signal_handler(sig)`

Remove the handler for the *sig* signal.

Return `True` if the signal handler was removed, or `False` if no handler was set for the given signal.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

See also

The [`signal`](signal#module-signal "signal: Set handlers for asynchronous events.") module.
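A minimal Unix-only sketch of a graceful shutdown driven by a signal (using an `asyncio.Event` here is one possible design, not the only one):

```
import asyncio
import signal

async def main():
    loop = asyncio.get_running_loop()
    stop = asyncio.Event()

    # Ask the loop to set the event when SIGTERM arrives.
    loop.add_signal_handler(signal.SIGTERM, stop.set)
    try:
        await stop.wait()   # run until the signal fires
    finally:
        loop.remove_signal_handler(signal.SIGTERM)

asyncio.run(main())
```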
### Executing code in thread or process pools

`awaitable loop.run_in_executor(executor, func, *args)`

Arrange for *func* to be called in the specified executor.

The *executor* argument should be a [`concurrent.futures.Executor`](concurrent.futures#concurrent.futures.Executor "concurrent.futures.Executor") instance. The default executor is used if *executor* is `None`.

Example:

```
import asyncio
import concurrent.futures

def blocking_io():
    # File operations (such as logging) can block the
    # event loop: run them in a thread pool.
    with open('/dev/urandom', 'rb') as f:
        return f.read(100)

def cpu_bound():
    # CPU-bound operations will block the event loop:
    # in general it is preferable to run them in a
    # process pool.
    return sum(i * i for i in range(10 ** 7))

async def main():
    loop = asyncio.get_running_loop()

    ## Options:

    # 1. Run in the default loop's executor:
    result = await loop.run_in_executor(
        None, blocking_io)
    print('default thread pool', result)

    # 2. Run in a custom thread pool:
    with concurrent.futures.ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(
            pool, blocking_io)
        print('custom thread pool', result)

    # 3. Run in a custom process pool:
    with concurrent.futures.ProcessPoolExecutor() as pool:
        result = await loop.run_in_executor(
            pool, cpu_bound)
        print('custom process pool', result)

asyncio.run(main())
```

This method returns an [`asyncio.Future`](asyncio-future#asyncio.Future "asyncio.Future") object.

Use [`functools.partial()`](functools#functools.partial "functools.partial") [to pass keyword arguments](#asyncio-pass-keywords) to *func*.

Changed in version 3.5.3: [`loop.run_in_executor()`](#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor") no longer configures the `max_workers` of the thread pool executor it creates, instead leaving it up to the thread pool executor ([`ThreadPoolExecutor`](concurrent.futures#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor")) to set the default.

`loop.set_default_executor(executor)`

Set *executor* as the default executor used by [`run_in_executor()`](#asyncio.loop.run_in_executor "asyncio.loop.run_in_executor"). *executor* should be an instance of [`ThreadPoolExecutor`](concurrent.futures#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor").

Deprecated since version 3.8: Using an executor that is not an instance of [`ThreadPoolExecutor`](concurrent.futures#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor") is deprecated and will trigger an error in Python 3.9.

*executor* must be an instance of [`concurrent.futures.ThreadPoolExecutor`](concurrent.futures#concurrent.futures.ThreadPoolExecutor "concurrent.futures.ThreadPoolExecutor").

### Error Handling API

Allows customizing how exceptions are handled in the event loop.

`loop.set_exception_handler(handler)`

Set *handler* as the new event loop exception handler.

If *handler* is `None`, the default exception handler will be set. Otherwise, *handler* must be a callable with the signature matching `(loop, context)`, where `loop` is a reference to the active event loop, and `context` is a `dict` object containing the details of the exception (see [`call_exception_handler()`](#asyncio.loop.call_exception_handler "asyncio.loop.call_exception_handler") documentation for details about context).

`loop.get_exception_handler()`

Return the current exception handler, or `None` if no custom exception handler was set.

New in version 3.5.2.

`loop.default_exception_handler(context)`

Default exception handler.
This is called when an exception occurs and no exception handler is set. This can be called by a custom exception handler that wants to defer to the default handler behavior.

The *context* parameter has the same meaning as in [`call_exception_handler()`](#asyncio.loop.call_exception_handler "asyncio.loop.call_exception_handler").

`loop.call_exception_handler(context)`

Call the current event loop exception handler.

*context* is a `dict` object containing the following keys (new keys may be introduced in future Python versions):

* ‘message’: Error message;
* ‘exception’ (optional): Exception object;
* ‘future’ (optional): [`asyncio.Future`](asyncio-future#asyncio.Future "asyncio.Future") instance;
* ‘task’ (optional): [`asyncio.Task`](asyncio-task#asyncio.Task "asyncio.Task") instance;
* ‘handle’ (optional): [`asyncio.Handle`](#asyncio.Handle "asyncio.Handle") instance;
* ‘protocol’ (optional): [Protocol](asyncio-protocol#asyncio-protocol) instance;
* ‘transport’ (optional): [Transport](asyncio-protocol#asyncio-transport) instance;
* ‘socket’ (optional): [`socket.socket`](socket#socket.socket "socket.socket") instance;
* ‘asyncgen’ (optional): Asynchronous generator that caused the exception.

Note

This method should not be overridden in subclassed event loops. For custom exception handling, use the [`set_exception_handler()`](#asyncio.loop.set_exception_handler "asyncio.loop.set_exception_handler") method.

### Enabling debug mode

`loop.get_debug()`

Get the debug mode ([`bool`](functions#bool "bool")) of the event loop.

The default value is `True` if the environment variable [`PYTHONASYNCIODEBUG`](../using/cmdline#envvar-PYTHONASYNCIODEBUG) is set to a non-empty string, `False` otherwise.

`loop.set_debug(enabled: bool)`

Set the debug mode of the event loop.

Changed in version 3.7: The new [Python Development Mode](devmode#devmode) can now also be used to enable the debug mode.

See also

The [debug mode of asyncio](asyncio-dev#asyncio-debug-mode).

### Running Subprocesses

Methods described in this subsection are low-level. In regular async/await code consider using the high-level [`asyncio.create_subprocess_shell()`](asyncio-subprocess#asyncio.create_subprocess_shell "asyncio.create_subprocess_shell") and [`asyncio.create_subprocess_exec()`](asyncio-subprocess#asyncio.create_subprocess_exec "asyncio.create_subprocess_exec") convenience functions instead.

Note

On Windows, the default event loop [`ProactorEventLoop`](#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") supports subprocesses, whereas [`SelectorEventLoop`](#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") does not. See [Subprocess Support on Windows](asyncio-platforms#asyncio-windows-subprocess) for details.

`coroutine loop.subprocess_exec(protocol_factory, *args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)`

Create a subprocess from one or more string arguments specified by *args*.

*args* must be a list of strings represented by:

* [`str`](stdtypes#str "str");
* or [`bytes`](stdtypes#bytes "bytes"), encoded to the [filesystem encoding](os#filesystem-encoding).

The first string specifies the program executable, and the remaining strings specify the arguments. Together, string arguments form the `argv` of the program.
This is similar to the standard library [`subprocess.Popen`](subprocess#subprocess.Popen "subprocess.Popen") class called with `shell=False` and the list of strings passed as the first argument; however, where [`Popen`](subprocess#subprocess.Popen "subprocess.Popen") takes a single argument which is a list of strings, *subprocess\_exec* takes multiple string arguments.

The *protocol\_factory* must be a callable returning a subclass of the [`asyncio.SubprocessProtocol`](asyncio-protocol#asyncio.SubprocessProtocol "asyncio.SubprocessProtocol") class.

Other parameters:

* *stdin* can be any of these:
  + a file-like object representing a pipe to be connected to the subprocess’s standard input stream using [`connect_write_pipe()`](#asyncio.loop.connect_write_pipe "asyncio.loop.connect_write_pipe")
  + the [`subprocess.PIPE`](subprocess#subprocess.PIPE "subprocess.PIPE") constant (default) which will create a new pipe and connect it,
  + the value `None` which will make the subprocess inherit the file descriptor from this process
  + the [`subprocess.DEVNULL`](subprocess#subprocess.DEVNULL "subprocess.DEVNULL") constant which indicates that the special [`os.devnull`](os#os.devnull "os.devnull") file will be used
* *stdout* can be any of these:
  + a file-like object representing a pipe to be connected to the subprocess’s standard output stream using [`connect_read_pipe()`](#asyncio.loop.connect_read_pipe "asyncio.loop.connect_read_pipe")
  + the [`subprocess.PIPE`](subprocess#subprocess.PIPE "subprocess.PIPE") constant (default) which will create a new pipe and connect it,
  + the value `None` which will make the subprocess inherit the file descriptor from this process
  + the [`subprocess.DEVNULL`](subprocess#subprocess.DEVNULL "subprocess.DEVNULL") constant which indicates that the special [`os.devnull`](os#os.devnull "os.devnull") file will be used
* *stderr* can be any of these:
  + a file-like object representing a pipe to be connected to the subprocess’s standard error stream using [`connect_read_pipe()`](#asyncio.loop.connect_read_pipe "asyncio.loop.connect_read_pipe")
  + the [`subprocess.PIPE`](subprocess#subprocess.PIPE "subprocess.PIPE") constant (default) which will create a new pipe and connect it,
  + the value `None` which will make the subprocess inherit the file descriptor from this process
  + the [`subprocess.DEVNULL`](subprocess#subprocess.DEVNULL "subprocess.DEVNULL") constant which indicates that the special [`os.devnull`](os#os.devnull "os.devnull") file will be used
  + the [`subprocess.STDOUT`](subprocess#subprocess.STDOUT "subprocess.STDOUT") constant which will connect the standard error stream to the process’s standard output stream
* All other keyword arguments are passed to [`subprocess.Popen`](subprocess#subprocess.Popen "subprocess.Popen") without interpretation, except for *bufsize*, *universal\_newlines*, *shell*, *text*, *encoding* and *errors*, which should not be specified at all.

The `asyncio` subprocess API does not support decoding the streams as text. [`bytes.decode()`](stdtypes#bytes.decode "bytes.decode") can be used to convert the bytes returned from the stream to text.

See the constructor of the [`subprocess.Popen`](subprocess#subprocess.Popen "subprocess.Popen") class for documentation on other arguments.

Returns a pair of `(transport, protocol)`, where *transport* conforms to the [`asyncio.SubprocessTransport`](asyncio-protocol#asyncio.SubprocessTransport "asyncio.SubprocessTransport") base class and *protocol* is an object instantiated by the *protocol\_factory*.
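A minimal sketch of driving this method with a custom protocol (the protocol class name is illustrative): run a child Python process, collect its stdout, and wait for it to exit:

```
import asyncio
import sys

class CollectOutput(asyncio.SubprocessProtocol):
    def __init__(self, exit_future):
        self.exit_future = exit_future
        self.output = bytearray()

    def pipe_data_received(self, fd, data):
        # fd 1 is the child's stdout, fd 2 its stderr.
        self.output.extend(data)

    def process_exited(self):
        self.exit_future.set_result(True)

async def main():
    loop = asyncio.get_running_loop()
    exit_future = loop.create_future()
    transport, protocol = await loop.subprocess_exec(
        lambda: CollectOutput(exit_future),
        sys.executable, '-c', 'print("hello")',
        stdin=None, stderr=None)
    await exit_future
    transport.close()
    print(bytes(protocol.output).decode())

asyncio.run(main())
```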
`coroutine loop.subprocess_shell(protocol_factory, cmd, *, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)`

Create a subprocess from *cmd*, which can be a [`str`](stdtypes#str "str") or a [`bytes`](stdtypes#bytes "bytes") string encoded to the [filesystem encoding](os#filesystem-encoding), using the platform’s “shell” syntax.

This is similar to the standard library [`subprocess.Popen`](subprocess#subprocess.Popen "subprocess.Popen") class called with `shell=True`.

The *protocol\_factory* must be a callable returning a subclass of the [`SubprocessProtocol`](asyncio-protocol#asyncio.SubprocessProtocol "asyncio.SubprocessProtocol") class.

See [`subprocess_exec()`](#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec") for more details about the remaining arguments.

Returns a pair of `(transport, protocol)`, where *transport* conforms to the [`SubprocessTransport`](asyncio-protocol#asyncio.SubprocessTransport "asyncio.SubprocessTransport") base class and *protocol* is an object instantiated by the *protocol\_factory*.

Note

It is the application’s responsibility to ensure that all whitespace and special characters are quoted appropriately to avoid [shell injection](https://en.wikipedia.org/wiki/Shell_injection#Shell_injection) vulnerabilities. The [`shlex.quote()`](shlex#shlex.quote "shlex.quote") function can be used to properly escape whitespace and special characters in strings that are going to be used to construct shell commands.

Callback Handles
----------------

`class asyncio.Handle`

A callback wrapper object returned by [`loop.call_soon()`](#asyncio.loop.call_soon "asyncio.loop.call_soon") and [`loop.call_soon_threadsafe()`](#asyncio.loop.call_soon_threadsafe "asyncio.loop.call_soon_threadsafe").

`cancel()`

Cancel the callback. If the callback has already been cancelled or executed, this method has no effect.

`cancelled()`

Return `True` if the callback was cancelled.

New in version 3.7.

`class asyncio.TimerHandle`

A callback wrapper object returned by [`loop.call_later()`](#asyncio.loop.call_later "asyncio.loop.call_later") and [`loop.call_at()`](#asyncio.loop.call_at "asyncio.loop.call_at").

This class is a subclass of [`Handle`](#asyncio.Handle "asyncio.Handle").

`when()`

Return the scheduled callback time as [`float`](functions#float "float") seconds.

The time is an absolute timestamp, using the same time reference as [`loop.time()`](#asyncio.loop.time "asyncio.loop.time").

New in version 3.7.

Server Objects
--------------

Server objects are created by [`loop.create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server"), [`loop.create_unix_server()`](#asyncio.loop.create_unix_server "asyncio.loop.create_unix_server"), [`start_server()`](asyncio-stream#asyncio.start_server "asyncio.start_server"), and [`start_unix_server()`](asyncio-stream#asyncio.start_unix_server "asyncio.start_unix_server") functions.

Do not instantiate the class directly.

`class asyncio.Server`

*Server* objects are asynchronous context managers. When used in an `async with` statement, it’s guaranteed that the Server object is closed and not accepting new connections when the `async with` statement is completed:

```
srv = await loop.create_server(...)

async with srv:
    # some code

# At this point, srv is closed and no longer accepts new connections.
```

Changed in version 3.7: Server object is an asynchronous context manager since Python 3.7.
`close()`

Stop serving: close listening sockets and set the [`sockets`](#asyncio.Server.sockets "asyncio.Server.sockets") attribute to `None`.

The sockets that represent existing incoming client connections are left open.

The server is closed asynchronously; use the [`wait_closed()`](#asyncio.Server.wait_closed "asyncio.Server.wait_closed") coroutine to wait until the server is closed.

`get_loop()`

Return the event loop associated with the server object.

New in version 3.7.

`coroutine start_serving()`

Start accepting connections.

This method is idempotent, so it can be called when the server is already serving.

The *start\_serving* keyword-only parameter to [`loop.create_server()`](#asyncio.loop.create_server "asyncio.loop.create_server") and [`asyncio.start_server()`](asyncio-stream#asyncio.start_server "asyncio.start_server") allows creating a Server object that is not accepting connections initially. In this case `Server.start_serving()` or [`Server.serve_forever()`](#asyncio.Server.serve_forever "asyncio.Server.serve_forever") can be used to make the Server start accepting connections.

New in version 3.7.

`coroutine serve_forever()`

Start accepting connections until the coroutine is cancelled. Cancellation of the `serve_forever` task causes the server to be closed.

This method can be called if the server is already accepting connections. Only one `serve_forever` task can exist per *Server* object.

Example:

```
async def client_connected(reader, writer):
    # Communicate with the client with
    # reader/writer streams.  For example:
    await reader.readline()

async def main(host, port):
    srv = await asyncio.start_server(
        client_connected, host, port)
    await srv.serve_forever()

asyncio.run(main('127.0.0.1', 0))
```

New in version 3.7.

`is_serving()`

Return `True` if the server is accepting new connections.

New in version 3.7.

`coroutine wait_closed()`

Wait until the [`close()`](#asyncio.Server.close "asyncio.Server.close") method completes.

`sockets`

List of [`socket.socket`](socket#socket.socket "socket.socket") objects the server is listening on.

Changed in version 3.7: Prior to Python 3.7 `Server.sockets` used to return an internal list of server sockets directly. In 3.7 a copy of that list is returned.

Event Loop Implementations
--------------------------

asyncio ships with two different event loop implementations: [`SelectorEventLoop`](#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") and [`ProactorEventLoop`](#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop").

By default asyncio is configured to use [`SelectorEventLoop`](#asyncio.SelectorEventLoop "asyncio.SelectorEventLoop") on Unix and [`ProactorEventLoop`](#asyncio.ProactorEventLoop "asyncio.ProactorEventLoop") on Windows.

`class asyncio.SelectorEventLoop`

An event loop based on the [`selectors`](selectors#module-selectors "selectors: High-level I/O multiplexing.") module.

Uses the most efficient *selector* available for the given platform. It is also possible to manually configure the exact selector implementation to be used:

```
import asyncio
import selectors

selector = selectors.SelectSelector()
loop = asyncio.SelectorEventLoop(selector)
asyncio.set_event_loop(loop)
```

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows.

`class asyncio.ProactorEventLoop`

An event loop for Windows that uses “I/O Completion Ports” (IOCP).

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows.
See also

[MSDN documentation on I/O Completion Ports](https://docs.microsoft.com/en-ca/windows/desktop/FileIO/i-o-completion-ports).

`class asyncio.AbstractEventLoop`

Abstract base class for asyncio-compliant event loops.

The [Event Loop Methods](#asyncio-event-loop) section lists all methods that an alternative implementation of `AbstractEventLoop` should have defined.

Examples
--------

Note that all examples in this section **purposefully** show how to use the low-level event loop APIs, such as [`loop.run_forever()`](#asyncio.loop.run_forever "asyncio.loop.run_forever") and [`loop.call_soon()`](#asyncio.loop.call_soon "asyncio.loop.call_soon"). Modern asyncio applications rarely need to be written this way; consider using the high-level functions like [`asyncio.run()`](asyncio-task#asyncio.run "asyncio.run").

### Hello World with call\_soon()

An example using the [`loop.call_soon()`](#asyncio.loop.call_soon "asyncio.loop.call_soon") method to schedule a callback. The callback displays `"Hello World"` and then stops the event loop:

```
import asyncio

def hello_world(loop):
    """A callback to print 'Hello World' and stop the event loop"""
    print('Hello World')
    loop.stop()

loop = asyncio.get_event_loop()

# Schedule a call to hello_world()
loop.call_soon(hello_world, loop)

# Blocking call interrupted by loop.stop()
try:
    loop.run_forever()
finally:
    loop.close()
```

See also

A similar [Hello World](asyncio-task#coroutine) example created with a coroutine and the [`run()`](asyncio-task#asyncio.run "asyncio.run") function.

### Display the current date with call\_later()

An example of a callback displaying the current date every second. The callback uses the [`loop.call_later()`](#asyncio.loop.call_later "asyncio.loop.call_later") method to reschedule itself every second for five seconds, and then stops the event loop:

```
import asyncio
import datetime

def display_date(end_time, loop):
    print(datetime.datetime.now())
    if (loop.time() + 1.0) < end_time:
        loop.call_later(1, display_date, end_time, loop)
    else:
        loop.stop()

loop = asyncio.get_event_loop()

# Schedule the first call to display_date()
end_time = loop.time() + 5.0
loop.call_soon(display_date, end_time, loop)

# Blocking call interrupted by loop.stop()
try:
    loop.run_forever()
finally:
    loop.close()
```

See also

A similar [current date](asyncio-task#asyncio-example-sleep) example created with a coroutine and the [`run()`](asyncio-task#asyncio.run "asyncio.run") function.

### Watch a file descriptor for read events

Wait until a file descriptor has received some data using the [`loop.add_reader()`](#asyncio.loop.add_reader "asyncio.loop.add_reader") method and then close the event loop:

```
import asyncio
from socket import socketpair

# Create a pair of connected file descriptors
rsock, wsock = socketpair()

loop = asyncio.get_event_loop()

def reader():
    data = rsock.recv(100)
    print("Received:", data.decode())

    # We are done: unregister the file descriptor
    loop.remove_reader(rsock)

    # Stop the event loop
    loop.stop()

# Register the file descriptor for read event
loop.add_reader(rsock, reader)

# Simulate the reception of data from the network
loop.call_soon(wsock.send, 'abc'.encode())

try:
    # Run the event loop
    loop.run_forever()
finally:
    # We are done. Close sockets and the event loop.
    rsock.close()
    wsock.close()
    loop.close()
```

See also

* A similar [example](asyncio-protocol#asyncio-example-create-connection) using transports, protocols, and the [`loop.create_connection()`](#asyncio.loop.create_connection "asyncio.loop.create_connection") method.
* Another similar [example](asyncio-stream#asyncio-example-create-connection-streams) using the high-level [`asyncio.open_connection()`](asyncio-stream#asyncio.open_connection "asyncio.open_connection") function and streams.

### Set signal handlers for SIGINT and SIGTERM

(This `signals` example only works on Unix.)

Register handlers for signals `SIGINT` and `SIGTERM` using the [`loop.add_signal_handler()`](#asyncio.loop.add_signal_handler "asyncio.loop.add_signal_handler") method:

```
import asyncio
import functools
import os
import signal

def ask_exit(signame, loop):
    print("got signal %s: exit" % signame)
    loop.stop()

async def main():
    loop = asyncio.get_running_loop()

    for signame in {'SIGINT', 'SIGTERM'}:
        loop.add_signal_handler(
            getattr(signal, signame),
            functools.partial(ask_exit, signame, loop))

    await asyncio.sleep(3600)

print("Event loop running for 1 hour, press Ctrl+C to interrupt.")
print(f"pid {os.getpid()}: send SIGINT or SIGTERM to exit.")

asyncio.run(main())
```
python code — Interpreter base classes

code — Interpreter base classes
===============================

**Source code:** [Lib/code.py](https://github.com/python/cpython/tree/3.9/Lib/code.py)

The `code` module provides facilities to implement read-eval-print loops in Python. Two classes and convenience functions are included which can be used to build applications that provide an interactive interpreter prompt.

`class code.InteractiveInterpreter(locals=None)`

This class deals with parsing and interpreter state (the user’s namespace); it does not deal with input buffering or prompting or input file naming (the filename is always passed in explicitly). The optional *locals* argument specifies the dictionary in which code will be executed; it defaults to a newly created dictionary with key `'__name__'` set to `'__console__'` and key `'__doc__'` set to `None`.

`class code.InteractiveConsole(locals=None, filename="<console>")`

Closely emulate the behavior of the interactive Python interpreter. This class builds on [`InteractiveInterpreter`](#code.InteractiveInterpreter "code.InteractiveInterpreter") and adds prompting using the familiar `sys.ps1` and `sys.ps2`, and input buffering.

`code.interact(banner=None, readfunc=None, local=None, exitmsg=None)`

Convenience function to run a read-eval-print loop. This creates a new instance of [`InteractiveConsole`](#code.InteractiveConsole "code.InteractiveConsole") and sets *readfunc* to be used as the [`InteractiveConsole.raw_input()`](#code.InteractiveConsole.raw_input "code.InteractiveConsole.raw_input") method, if provided. If *local* is provided, it is passed to the [`InteractiveConsole`](#code.InteractiveConsole "code.InteractiveConsole") constructor for use as the default namespace for the interpreter loop. The [`interact()`](#code.interact "code.interact") method of the instance is then run with *banner* and *exitmsg* passed as the banner and exit message to use, if provided. The console object is discarded after use.

Changed in version 3.6: Added *exitmsg* parameter.

`code.compile_command(source, filename="<input>", symbol="single")`

This function is useful for programs that want to emulate Python’s interpreter main loop (a.k.a. the read-eval-print loop). The tricky part is to determine when the user has entered an incomplete command that can be completed by entering more text (as opposed to a complete command or a syntax error). This function *almost* always makes the same decision as the real interpreter main loop.

*source* is the source string; *filename* is the optional filename from which source was read, defaulting to `'<input>'`; and *symbol* is the optional grammar start symbol, which should be `'single'` (the default), `'eval'` or `'exec'`.

Returns a code object (the same as `compile(source, filename, symbol)`) if the command is complete and valid; `None` if the command is incomplete; raises [`SyntaxError`](exceptions#SyntaxError "SyntaxError") if the command is complete and contains a syntax error, or raises [`OverflowError`](exceptions#OverflowError "OverflowError") or [`ValueError`](exceptions#ValueError "ValueError") if the command contains an invalid literal.

Interactive Interpreter Objects
-------------------------------

`InteractiveInterpreter.runsource(source, filename="<input>", symbol="single")`

Compile and run some source in the interpreter. Arguments are the same as for [`compile_command()`](#code.compile_command "code.compile_command"); the default for *filename* is `'<input>'`, and for *symbol* is `'single'`.
One of several things can happen: * The input is incorrect; [`compile_command()`](#code.compile_command "code.compile_command") raised an exception ([`SyntaxError`](exceptions#SyntaxError "SyntaxError") or [`OverflowError`](exceptions#OverflowError "OverflowError")). A syntax traceback will be printed by calling the [`showsyntaxerror()`](#code.InteractiveInterpreter.showsyntaxerror "code.InteractiveInterpreter.showsyntaxerror") method. [`runsource()`](#code.InteractiveInterpreter.runsource "code.InteractiveInterpreter.runsource") returns `False`. * The input is incomplete, and more input is required; [`compile_command()`](#code.compile_command "code.compile_command") returned `None`. [`runsource()`](#code.InteractiveInterpreter.runsource "code.InteractiveInterpreter.runsource") returns `True`. * The input is complete; [`compile_command()`](#code.compile_command "code.compile_command") returned a code object. The code is executed by calling the [`runcode()`](#code.InteractiveInterpreter.runcode "code.InteractiveInterpreter.runcode") method (which also handles run-time exceptions, except for [`SystemExit`](exceptions#SystemExit "SystemExit")). [`runsource()`](#code.InteractiveInterpreter.runsource "code.InteractiveInterpreter.runsource") returns `False`. The return value can be used to decide whether to use `sys.ps1` or `sys.ps2` to prompt the next line. `InteractiveInterpreter.runcode(code)` Execute a code object. When an exception occurs, [`showtraceback()`](#code.InteractiveInterpreter.showtraceback "code.InteractiveInterpreter.showtraceback") is called to display a traceback. All exceptions are caught except [`SystemExit`](exceptions#SystemExit "SystemExit"), which is allowed to propagate. A note about [`KeyboardInterrupt`](exceptions#KeyboardInterrupt "KeyboardInterrupt"): this exception may occur elsewhere in this code, and may not always be caught. The caller should be prepared to deal with it. `InteractiveInterpreter.showsyntaxerror(filename=None)` Display the syntax error that just occurred. This does not display a stack trace because there isn’t one for syntax errors. If *filename* is given, it is stuffed into the exception instead of the default filename provided by Python’s parser, because it always uses `'<string>'` when reading from a string. The output is written by the [`write()`](#code.InteractiveInterpreter.write "code.InteractiveInterpreter.write") method. `InteractiveInterpreter.showtraceback()` Display the exception that just occurred. We remove the first stack item because it is within the interpreter object implementation. The output is written by the [`write()`](#code.InteractiveInterpreter.write "code.InteractiveInterpreter.write") method. Changed in version 3.5: The full chained traceback is displayed instead of just the primary traceback. `InteractiveInterpreter.write(data)` Write a string to the standard error stream (`sys.stderr`). Derived classes should override this to provide the appropriate output handling as needed. Interactive Console Objects --------------------------- The [`InteractiveConsole`](#code.InteractiveConsole "code.InteractiveConsole") class is a subclass of [`InteractiveInterpreter`](#code.InteractiveInterpreter "code.InteractiveInterpreter"), and so offers all the methods of the interpreter objects as well as the following additions. `InteractiveConsole.interact(banner=None, exitmsg=None)` Closely emulate the interactive Python console. 
The optional *banner* argument specifies the banner to print before the first interaction; by default it prints a banner similar to the one printed by the standard Python interpreter, followed by the class name of the console object in parentheses (so as not to confuse this with the real interpreter – since it’s so close!). The optional *exitmsg* argument specifies an exit message printed when exiting. Pass the empty string to suppress the exit message. If *exitmsg* is not given or `None`, a default message is printed. Changed in version 3.4: To suppress printing any banner, pass an empty string. Changed in version 3.6: Print an exit message when exiting. `InteractiveConsole.push(line)` Push a line of source text to the interpreter. The line should not have a trailing newline; it may have internal newlines. The line is appended to a buffer and the interpreter’s `runsource()` method is called with the concatenated contents of the buffer as source. If this indicates that the command was executed or invalid, the buffer is reset; otherwise, the command is incomplete, and the buffer is left as it was after the line was appended. The return value is `True` if more input is required, `False` if the line was dealt with in some way (this is the same as `runsource()`). `InteractiveConsole.resetbuffer()` Remove any unhandled source text from the input buffer. `InteractiveConsole.raw_input(prompt="")` Write a prompt and read a line. The returned line does not include the trailing newline. When the user enters the EOF key sequence, [`EOFError`](exceptions#EOFError "EOFError") is raised. The base implementation reads from `sys.stdin`; a subclass may replace this with a different implementation. python marshal — Internal Python object serialization marshal — Internal Python object serialization ============================================== This module contains functions that can read and write Python values in a binary format. The format is specific to Python, but independent of machine architecture issues (e.g., you can write a Python value to a file on a PC, transport the file to a Sun, and read it back there). Details of the format are undocumented on purpose; it may change between Python versions (although it rarely does). [1](#id2) This is not a general “persistence” module. For general persistence and transfer of Python objects through RPC calls, see the modules [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") and [`shelve`](shelve#module-shelve "shelve: Python object persistence."). The [`marshal`](#module-marshal "marshal: Convert Python objects to streams of bytes and back (with different constraints).") module exists mainly to support reading and writing the “pseudo-compiled” code for Python modules of `.pyc` files. Therefore, the Python maintainers reserve the right to modify the marshal format in backward incompatible ways should the need arise. If you’re serializing and de-serializing Python objects, use the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") module instead – the performance is comparable, version independence is guaranteed, and pickle supports a substantially wider range of objects than marshal. Warning The [`marshal`](#module-marshal "marshal: Convert Python objects to streams of bytes and back (with different constraints).") module is not intended to be secure against erroneous or maliciously constructed data. Never unmarshal data received from an untrusted or unauthenticated source. 
Not all Python object types are supported; in general, only objects whose value is independent from a particular invocation of Python can be written and read by this module. The following types are supported: booleans, integers, floating point numbers, complex numbers, strings, bytes, bytearrays, tuples, lists, sets, frozensets, dictionaries, and code objects, where it should be understood that tuples, lists, sets, frozensets and dictionaries are only supported as long as the values contained therein are themselves supported. The singletons [`None`](constants#None "None"), [`Ellipsis`](constants#Ellipsis "Ellipsis") and [`StopIteration`](exceptions#StopIteration "StopIteration") can also be marshalled and unmarshalled. For format *version* lower than 3, recursive lists, sets and dictionaries cannot be written (see below). There are functions that read/write files as well as functions operating on bytes-like objects. The module defines these functions: `marshal.dump(value, file[, version])` Write the value on the open file. The value must be a supported type. The file must be a writeable [binary file](../glossary#term-binary-file). If the value has (or contains an object that has) an unsupported type, a [`ValueError`](exceptions#ValueError "ValueError") exception is raised — but garbage data will also be written to the file. The object will not be properly read back by [`load()`](#marshal.load "marshal.load"). The *version* argument indicates the data format that `dump` should use (see below). Raises an [auditing event](sys#auditing) `marshal.dumps` with arguments `value`, `version`. `marshal.load(file)` Read one value from the open file and return it. If no valid value is read (e.g. because the data has a different Python version’s incompatible marshal format), raise [`EOFError`](exceptions#EOFError "EOFError"), [`ValueError`](exceptions#ValueError "ValueError") or [`TypeError`](exceptions#TypeError "TypeError"). The file must be a readable [binary file](../glossary#term-binary-file). Raises an [auditing event](sys#auditing) `marshal.load` with no arguments. Note If an object containing an unsupported type was marshalled with [`dump()`](#marshal.dump "marshal.dump"), [`load()`](#marshal.load "marshal.load") will substitute `None` for the unmarshallable type. Changed in version 3.9.7: This call used to raise a `code.__new__` audit event for each code object. Now it raises a single `marshal.load` event for the entire load operation. `marshal.dumps(value[, version])` Return the bytes object that would be written to a file by `dump(value, file)`. The value must be a supported type. Raise a [`ValueError`](exceptions#ValueError "ValueError") exception if value has (or contains an object that has) an unsupported type. The *version* argument indicates the data format that `dumps` should use (see below). Raises an [auditing event](sys#auditing) `marshal.dumps` with arguments `value`, `version`. `marshal.loads(bytes)` Convert the [bytes-like object](../glossary#term-bytes-like-object) to a value. If no valid value is found, raise [`EOFError`](exceptions#EOFError "EOFError"), [`ValueError`](exceptions#ValueError "ValueError") or [`TypeError`](exceptions#TypeError "TypeError"). Extra bytes in the input are ignored. Raises an [auditing event](sys#auditing) `marshal.loads` with argument `bytes`. Changed in version 3.9.7: This call used to raise a `code.__new__` audit event for each code object. Now it raises a single `marshal.loads` event for the entire load operation. 
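A minimal round-trip through these functions might look like the following sketch (the file name `cache.bin` is purely illustrative):

```
>>> import marshal
>>> data = marshal.dumps({'answer': 42, 'primes': [2, 3, 5]})
>>> marshal.loads(data)
{'answer': 42, 'primes': [2, 3, 5]}
>>> with open('cache.bin', 'wb') as f:
...     marshal.dump([1.5, (True, None)], f)
...
>>> with open('cache.bin', 'rb') as f:
...     marshal.load(f)
...
[1.5, (True, None)]
```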
In addition, the following constants are defined: `marshal.version` Indicates the format that the module uses. Version 0 is the historical format, version 1 shares interned strings and version 2 uses a binary format for floating point numbers. Version 3 adds support for object instancing and recursion. The current version is 4. #### Footnotes `1` The name of this module stems from a bit of terminology used by the designers of Modula-3 (amongst others), who use the term “marshalling” for shipping of data around in a self-contained form. Strictly speaking, “to marshal” means to convert some data from internal to external form (in an RPC buffer for instance) and “unmarshalling” for the reverse process. python multiprocessing.shared_memory — Provides shared memory for direct access across processes multiprocessing.shared\_memory — Provides shared memory for direct access across processes ========================================================================================== **Source code:** [Lib/multiprocessing/shared\_memory.py](https://github.com/python/cpython/tree/3.9/Lib/multiprocessing/shared_memory.py) New in version 3.8. This module provides a class, [`SharedMemory`](#multiprocessing.shared_memory.SharedMemory "multiprocessing.shared_memory.SharedMemory"), for the allocation and management of shared memory to be accessed by one or more processes on a multicore or symmetric multiprocessor (SMP) machine. To assist with the life-cycle management of shared memory especially across distinct processes, a [`BaseManager`](multiprocessing#multiprocessing.managers.BaseManager "multiprocessing.managers.BaseManager") subclass, `SharedMemoryManager`, is also provided in the `multiprocessing.managers` module. In this module, shared memory refers to “System V style” shared memory blocks (though is not necessarily implemented explicitly as such) and does not refer to “distributed shared memory”. This style of shared memory permits distinct processes to potentially read and write to a common (or shared) region of volatile memory. Processes are conventionally limited to only have access to their own process memory space but shared memory permits the sharing of data between processes, avoiding the need to instead send messages between processes containing that data. Sharing data directly via memory can provide significant performance benefits compared to sharing data via disk or socket or other communications requiring the serialization/deserialization and copying of data. `class multiprocessing.shared_memory.SharedMemory(name=None, create=False, size=0)` Creates a new shared memory block or attaches to an existing shared memory block. Each shared memory block is assigned a unique name. In this way, one process can create a shared memory block with a particular name and a different process can attach to that same shared memory block using that same name. As a resource for sharing data across processes, shared memory blocks may outlive the original process that created them. When one process no longer needs access to a shared memory block that might still be needed by other processes, the [`close()`](#multiprocessing.shared_memory.SharedMemory.close "multiprocessing.shared_memory.SharedMemory.close") method should be called. When a shared memory block is no longer needed by any process, the [`unlink()`](#multiprocessing.shared_memory.SharedMemory.unlink "multiprocessing.shared_memory.SharedMemory.unlink") method should be called to ensure proper cleanup. 
*name* is the unique name for the requested shared memory, specified as a string. When creating a new shared memory block, if `None` (the default) is supplied for the name, a novel name will be generated. *create* controls whether a new shared memory block is created (`True`) or an existing shared memory block is attached (`False`). *size* specifies the requested number of bytes when creating a new shared memory block. Because some platforms choose to allocate chunks of memory based upon that platform’s memory page size, the exact size of the shared memory block may be larger than or equal to the size requested. When attaching to an existing shared memory block, the `size` parameter is ignored. `close()` Closes access to the shared memory from this instance. In order to ensure proper cleanup of resources, all instances should call `close()` once the instance is no longer needed. Note that calling `close()` does not cause the shared memory block itself to be destroyed. `unlink()` Requests that the underlying shared memory block be destroyed. In order to ensure proper cleanup of resources, `unlink()` should be called once (and only once) across all processes which have need for the shared memory block. After requesting its destruction, a shared memory block may or may not be immediately destroyed and this behavior may differ across platforms. Attempts to access data inside the shared memory block after `unlink()` has been called may result in memory access errors. Note: the last process relinquishing its hold on a shared memory block may call `unlink()` and [`close()`](#multiprocessing.shared_memory.SharedMemory.close "multiprocessing.shared_memory.SharedMemory.close") in either order. `buf` A memoryview of the contents of the shared memory block. `name` Read-only access to the unique name of the shared memory block. `size` Read-only access to the size in bytes of the shared memory block. 
The following example demonstrates low-level use of [`SharedMemory`](#multiprocessing.shared_memory.SharedMemory "multiprocessing.shared_memory.SharedMemory") instances: ``` >>> from multiprocessing import shared_memory >>> shm_a = shared_memory.SharedMemory(create=True, size=10) >>> type(shm_a.buf) <class 'memoryview'> >>> buffer = shm_a.buf >>> len(buffer) 10 >>> buffer[:4] = bytearray([22, 33, 44, 55]) # Modify multiple at once >>> buffer[4] = 100 # Modify single byte at a time >>> # Attach to an existing shared memory block >>> shm_b = shared_memory.SharedMemory(shm_a.name) >>> import array >>> array.array('b', shm_b.buf[:5]) # Copy the data into a new array.array array('b', [22, 33, 44, 55, 100]) >>> shm_b.buf[:5] = b'howdy' # Modify via shm_b using bytes >>> bytes(shm_a.buf[:5]) # Access via shm_a b'howdy' >>> shm_b.close() # Close each SharedMemory instance >>> shm_a.close() >>> shm_a.unlink() # Call unlink only once to release the shared memory ``` The following example demonstrates a practical use of the [`SharedMemory`](#multiprocessing.shared_memory.SharedMemory "multiprocessing.shared_memory.SharedMemory") class with [NumPy arrays](https://www.numpy.org/), accessing the same `numpy.ndarray` from two distinct Python shells: ``` >>> # In the first Python interactive shell >>> import numpy as np >>> a = np.array([1, 1, 2, 3, 5, 8]) # Start with an existing NumPy array >>> from multiprocessing import shared_memory >>> shm = shared_memory.SharedMemory(create=True, size=a.nbytes) >>> # Now create a NumPy array backed by shared memory >>> b = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf) >>> b[:] = a[:] # Copy the original data into shared memory >>> b array([1, 1, 2, 3, 5, 8]) >>> type(b) <class 'numpy.ndarray'> >>> type(a) <class 'numpy.ndarray'> >>> shm.name # We did not specify a name so one was chosen for us 'psm_21467_46075' >>> # In either the same shell or a new Python shell on the same machine >>> import numpy as np >>> from multiprocessing import shared_memory >>> # Attach to the existing shared memory block >>> existing_shm = shared_memory.SharedMemory(name='psm_21467_46075') >>> # Note that a.shape is (6,) and a.dtype is np.int64 in this example >>> c = np.ndarray((6,), dtype=np.int64, buffer=existing_shm.buf) >>> c array([1, 1, 2, 3, 5, 8]) >>> c[-1] = 888 >>> c array([ 1, 1, 2, 3, 5, 888]) >>> # Back in the first Python interactive shell, b reflects this change >>> b array([ 1, 1, 2, 3, 5, 888]) >>> # Clean up from within the second Python shell >>> del c # Unnecessary; merely emphasizing the array is no longer used >>> existing_shm.close() >>> # Clean up from within the first Python shell >>> del b # Unnecessary; merely emphasizing the array is no longer used >>> shm.close() >>> shm.unlink() # Free and release the shared memory block at the very end ``` `class multiprocessing.managers.SharedMemoryManager([address[, authkey]])` A subclass of [`BaseManager`](multiprocessing#multiprocessing.managers.BaseManager "multiprocessing.managers.BaseManager") which can be used for the management of shared memory blocks across processes. A call to [`start()`](multiprocessing#multiprocessing.managers.BaseManager.start "multiprocessing.managers.BaseManager.start") on a [`SharedMemoryManager`](#multiprocessing.managers.SharedMemoryManager "multiprocessing.managers.SharedMemoryManager") instance causes a new process to be started. This new process’s sole purpose is to manage the life cycle of all shared memory blocks created through it. 
To trigger the release of all shared memory blocks managed by that process, call [`shutdown()`](multiprocessing#multiprocessing.managers.BaseManager.shutdown "multiprocessing.managers.BaseManager.shutdown") on the instance. This triggers a `SharedMemory.unlink()` call on all of the [`SharedMemory`](#multiprocessing.managers.SharedMemoryManager.SharedMemory "multiprocessing.managers.SharedMemoryManager.SharedMemory") objects managed by that process and then stops the process itself. By creating `SharedMemory` instances through a `SharedMemoryManager`, we avoid the need to manually track and trigger the freeing of shared memory resources. This class provides methods for creating and returning [`SharedMemory`](#multiprocessing.managers.SharedMemoryManager.SharedMemory "multiprocessing.managers.SharedMemoryManager.SharedMemory") instances and for creating a list-like object ([`ShareableList`](#multiprocessing.managers.SharedMemoryManager.ShareableList "multiprocessing.managers.SharedMemoryManager.ShareableList")) backed by shared memory. Refer to [`multiprocessing.managers.BaseManager`](multiprocessing#multiprocessing.managers.BaseManager "multiprocessing.managers.BaseManager") for a description of the inherited *address* and *authkey* optional input arguments and how they may be used to connect to an existing `SharedMemoryManager` service from other processes. `SharedMemory(size)` Create and return a new [`SharedMemory`](#multiprocessing.managers.SharedMemoryManager.SharedMemory "multiprocessing.managers.SharedMemoryManager.SharedMemory") object with the specified `size` in bytes. `ShareableList(sequence)` Create and return a new [`ShareableList`](#multiprocessing.managers.SharedMemoryManager.ShareableList "multiprocessing.managers.SharedMemoryManager.ShareableList") object, initialized by the values from the input `sequence`. The following example demonstrates the basic mechanisms of a `SharedMemoryManager`: ``` >>> from multiprocessing.managers import SharedMemoryManager >>> smm = SharedMemoryManager() >>> smm.start() # Start the process that manages the shared memory blocks >>> sl = smm.ShareableList(range(4)) >>> sl ShareableList([0, 1, 2, 3], name='psm_6572_7512') >>> raw_shm = smm.SharedMemory(size=128) >>> another_sl = smm.ShareableList('alpha') >>> another_sl ShareableList(['a', 'l', 'p', 'h', 'a'], name='psm_6572_12221') >>> smm.shutdown() # Calls unlink() on sl, raw_shm, and another_sl ``` The following example depicts a potentially more convenient pattern for using `SharedMemoryManager` objects via the [`with`](../reference/compound_stmts#with) statement to ensure that all shared memory blocks are released after they are no longer needed: ``` >>> with SharedMemoryManager() as smm: ... sl = smm.ShareableList(range(2000)) ... # Divide the work among two processes, storing partial results in sl ... p1 = Process(target=do_work, args=(sl, 0, 1000)) ... p2 = Process(target=do_work, args=(sl, 1000, 2000)) ... p1.start() ... p2.start() # A multiprocessing.Pool might be more efficient ... p1.join() ... p2.join() # Wait for all work to complete in both processes ... total_result = sum(sl) # Consolidate the partial results now in sl ``` When using a `SharedMemoryManager` in a [`with`](../reference/compound_stmts#with) statement, the shared memory blocks created using that manager are all released when the [`with`](../reference/compound_stmts#with) statement’s code block finishes execution. 
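The sketch above leaves `Process` and `do_work` undefined; a self-contained variant might look like this, where the hypothetical `do_work` overwrites each value in its assigned slice with its square:

```
from multiprocessing import Process
from multiprocessing.managers import SharedMemoryManager

def do_work(sl, start, stop):
    # Hypothetical worker: update a slice of the ShareableList in place.
    for i in range(start, stop):
        sl[i] = i * i

if __name__ == '__main__':
    with SharedMemoryManager() as smm:
        sl = smm.ShareableList(range(2000))
        p1 = Process(target=do_work, args=(sl, 0, 1000))
        p2 = Process(target=do_work, args=(sl, 1000, 2000))
        p1.start()
        p2.start()
        p1.join()
        p2.join()
        total_result = sum(sl)  # Sum of the squares of 0..1999
    # All shared memory blocks created via smm are released here.
```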
`class multiprocessing.shared_memory.ShareableList(sequence=None, *, name=None)` Provides a mutable list-like object where all values stored within are stored in a shared memory block. This constrains storable values to only the `int`, `float`, `bool`, `str` (less than 10M bytes each), `bytes` (less than 10M bytes each), and `None` built-in data types. It also notably differs from the built-in `list` type in that these lists can not change their overall length (i.e. no append, insert, etc.) and do not support the dynamic creation of new [`ShareableList`](#multiprocessing.shared_memory.ShareableList "multiprocessing.shared_memory.ShareableList") instances via slicing. *sequence* is used in populating a new `ShareableList` full of values. Set to `None` to instead attach to an already existing `ShareableList` by its unique shared memory name. *name* is the unique name for the requested shared memory, as described in the definition for [`SharedMemory`](#multiprocessing.shared_memory.SharedMemory "multiprocessing.shared_memory.SharedMemory"). When attaching to an existing `ShareableList`, specify its shared memory block’s unique name while leaving `sequence` set to `None`. `count(value)` Returns the number of occurrences of `value`. `index(value)` Returns first index position of `value`. Raises [`ValueError`](exceptions#ValueError "ValueError") if `value` is not present. `format` Read-only attribute containing the [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") packing format used by all currently stored values. `shm` The [`SharedMemory`](#multiprocessing.shared_memory.SharedMemory "multiprocessing.shared_memory.SharedMemory") instance where the values are stored. The following example demonstrates basic use of a [`ShareableList`](#multiprocessing.shared_memory.ShareableList "multiprocessing.shared_memory.ShareableList") instance: ``` >>> from multiprocessing import shared_memory >>> a = shared_memory.ShareableList(['howdy', b'HoWdY', -273.154, 100, None, True, 42]) >>> [ type(entry) for entry in a ] [<class 'str'>, <class 'bytes'>, <class 'float'>, <class 'int'>, <class 'NoneType'>, <class 'bool'>, <class 'int'>] >>> a[2] -273.154 >>> a[2] = -78.5 >>> a[2] -78.5 >>> a[2] = 'dry ice' # Changing data types is supported as well >>> a[2] 'dry ice' >>> a[2] = 'larger than previously allocated storage space' Traceback (most recent call last): ... ValueError: exceeds available storage for existing str >>> a[2] 'dry ice' >>> len(a) 7 >>> a.index(42) 6 >>> a.count(b'howdy') 0 >>> a.count(b'HoWdY') 1 >>> a.shm.close() >>> a.shm.unlink() >>> del a # Use of a ShareableList after call to unlink() is unsupported ``` The following example depicts how one, two, or many processes may access the same [`ShareableList`](#multiprocessing.shared_memory.ShareableList "multiprocessing.shared_memory.ShareableList") by supplying the name of the shared memory block behind it: ``` >>> b = shared_memory.ShareableList(range(5)) # In a first process >>> c = shared_memory.ShareableList(name=b.shm.name) # In a second process >>> c ShareableList([0, 1, 2, 3, 4], name='...') >>> c[-1] = -999 >>> b[-1] -999 >>> b.shm.close() >>> c.shm.close() >>> c.shm.unlink() ```
python email.encoders: Encoders email.encoders: Encoders ======================== **Source code:** [Lib/email/encoders.py](https://github.com/python/cpython/tree/3.9/Lib/email/encoders.py) This module is part of the legacy (`Compat32`) email API. In the new API the functionality is provided by the *cte* parameter of the [`set_content()`](email.message#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content") method. This module is deprecated in Python 3. The functions provided here should not be called explicitly since the [`MIMEText`](email.mime#email.mime.text.MIMEText "email.mime.text.MIMEText") class sets the content type and CTE header using the *\_subtype* and *\_charset* values passed during the instantiation of that class. The remaining text in this section is the original documentation of the module. When creating [`Message`](email.compat32-message#email.message.Message "email.message.Message") objects from scratch, you often need to encode the payloads for transport through compliant mail servers. This is especially true for *image/\** and *text/\** type messages containing binary data. The [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package provides some convenient encoders in its `encoders` module. These encoders are actually used by the [`MIMEAudio`](email.mime#email.mime.audio.MIMEAudio "email.mime.audio.MIMEAudio") and [`MIMEImage`](email.mime#email.mime.image.MIMEImage "email.mime.image.MIMEImage") class constructors to provide default encodings. All encoder functions take exactly one argument, the message object to encode. They usually extract the payload, encode it, and reset the payload to this newly encoded value. They should also set the *Content-Transfer-Encoding* header as appropriate. Note that these functions are not meaningful for a multipart message. They must be applied to individual subparts instead, and will raise a [`TypeError`](exceptions#TypeError "TypeError") if passed a message whose type is multipart. Here are the encoding functions provided: `email.encoders.encode_quopri(msg)` Encodes the payload into quoted-printable form and sets the *Content-Transfer-Encoding* header to `quoted-printable` [1](#id2). This is a good encoding to use when most of your payload is normal printable data, but contains a few unprintable characters. `email.encoders.encode_base64(msg)` Encodes the payload into base64 form and sets the *Content-Transfer-Encoding* header to `base64`. This is a good encoding to use when most of your payload is unprintable data since it is a more compact form than quoted-printable. The drawback of base64 encoding is that it renders the text non-human readable. `email.encoders.encode_7or8bit(msg)` This doesn’t actually modify the message’s payload, but it does set the *Content-Transfer-Encoding* header to either `7bit` or `8bit` as appropriate, based on the payload data. `email.encoders.encode_noop(msg)` This does nothing; it doesn’t even set the *Content-Transfer-Encoding* header. #### Footnotes `1` Note that encoding with [`encode_quopri()`](#email.encoders.encode_quopri "email.encoders.encode_quopri") also encodes all tabs and space characters in the data. 
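As a brief sketch of how these functions are used with the legacy API, the following builds a `MIMEBase` part by hand and encodes it with `encode_base64()` (the payload bytes are arbitrary placeholders):

```
from email import encoders
from email.mime.base import MIMEBase

part = MIMEBase('application', 'octet-stream')
part.set_payload(b'\x00\x01\x02 arbitrary binary payload')
encoders.encode_base64(part)  # re-encodes the payload in place
print(part['Content-Transfer-Encoding'])  # -> base64
```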
python token — Constants used with Python parse trees token — Constants used with Python parse trees ============================================== **Source code:** [Lib/token.py](https://github.com/python/cpython/tree/3.9/Lib/token.py) This module provides constants which represent the numeric values of leaf nodes of the parse tree (terminal tokens). Refer to the file `Grammar/Grammar` in the Python distribution for the definitions of the names in the context of the language grammar. The specific numeric values which the names map to may change between Python versions. The module also provides a mapping from numeric codes to names and some functions. The functions mirror definitions in the Python C header files. `token.tok_name` Dictionary mapping the numeric values of the constants defined in this module back to name strings, allowing more human-readable representation of parse trees to be generated. `token.ISTERMINAL(x)` Return `True` for terminal token values. `token.ISNONTERMINAL(x)` Return `True` for non-terminal token values. `token.ISEOF(x)` Return `True` if *x* is the marker indicating the end of input. The token constants are: `token.ENDMARKER` `token.NAME` `token.NUMBER` `token.STRING` `token.NEWLINE` `token.INDENT` `token.DEDENT` `token.LPAR` Token value for `"("`. `token.RPAR` Token value for `")"`. `token.LSQB` Token value for `"["`. `token.RSQB` Token value for `"]"`. `token.COLON` Token value for `":"`. `token.COMMA` Token value for `","`. `token.SEMI` Token value for `";"`. `token.PLUS` Token value for `"+"`. `token.MINUS` Token value for `"-"`. `token.STAR` Token value for `"*"`. `token.SLASH` Token value for `"/"`. `token.VBAR` Token value for `"|"`. `token.AMPER` Token value for `"&"`. `token.LESS` Token value for `"<"`. `token.GREATER` Token value for `">"`. `token.EQUAL` Token value for `"="`. `token.DOT` Token value for `"."`. `token.PERCENT` Token value for `"%"`. `token.LBRACE` Token value for `"{"`. `token.RBRACE` Token value for `"}"`. `token.EQEQUAL` Token value for `"=="`. `token.NOTEQUAL` Token value for `"!="`. `token.LESSEQUAL` Token value for `"<="`. `token.GREATEREQUAL` Token value for `">="`. `token.TILDE` Token value for `"~"`. `token.CIRCUMFLEX` Token value for `"^"`. `token.LEFTSHIFT` Token value for `"<<"`. `token.RIGHTSHIFT` Token value for `">>"`. `token.DOUBLESTAR` Token value for `"**"`. `token.PLUSEQUAL` Token value for `"+="`. `token.MINEQUAL` Token value for `"-="`. `token.STAREQUAL` Token value for `"*="`. `token.SLASHEQUAL` Token value for `"/="`. `token.PERCENTEQUAL` Token value for `"%="`. `token.AMPEREQUAL` Token value for `"&="`. `token.VBAREQUAL` Token value for `"|="`. `token.CIRCUMFLEXEQUAL` Token value for `"^="`. `token.LEFTSHIFTEQUAL` Token value for `"<<="`. `token.RIGHTSHIFTEQUAL` Token value for `">>="`. `token.DOUBLESTAREQUAL` Token value for `"**="`. `token.DOUBLESLASH` Token value for `"//"`. `token.DOUBLESLASHEQUAL` Token value for `"//="`. `token.AT` Token value for `"@"`. `token.ATEQUAL` Token value for `"@="`. `token.RARROW` Token value for `"->"`. `token.ELLIPSIS` Token value for `"..."`. `token.COLONEQUAL` Token value for `":="`. `token.OP` `token.AWAIT` `token.ASYNC` `token.TYPE_IGNORE` `token.TYPE_COMMENT` `token.ERRORTOKEN` `token.N_TOKENS` `token.NT_OFFSET` The following token type values aren’t used by the C tokenizer but are needed for the [`tokenize`](tokenize#module-tokenize "tokenize: Lexical scanner for Python source code.") module. `token.COMMENT` Token value used to indicate a comment. 
`token.NL` Token value used to indicate a non-terminating newline. The [`NEWLINE`](#token.NEWLINE "token.NEWLINE") token indicates the end of a logical line of Python code; `NL` tokens are generated when a logical line of code is continued over multiple physical lines. `token.ENCODING` Token value that indicates the encoding used to decode the source bytes into text. The first token returned by [`tokenize.tokenize()`](tokenize#tokenize.tokenize "tokenize.tokenize") will always be an `ENCODING` token. `token.TYPE_COMMENT` Token value indicating that a type comment was recognized. Such tokens are only produced when [`ast.parse()`](ast#ast.parse "ast.parse") is invoked with `type_comments=True`. Changed in version 3.5: Added [`AWAIT`](#token.AWAIT "token.AWAIT") and [`ASYNC`](#token.ASYNC "token.ASYNC") tokens. Changed in version 3.7: Added [`COMMENT`](#token.COMMENT "token.COMMENT"), [`NL`](#token.NL "token.NL") and [`ENCODING`](#token.ENCODING "token.ENCODING") tokens. Changed in version 3.7: Removed [`AWAIT`](#token.AWAIT "token.AWAIT") and [`ASYNC`](#token.ASYNC "token.ASYNC") tokens. “async” and “await” are now tokenized as [`NAME`](#token.NAME "token.NAME") tokens. Changed in version 3.8: Added [`TYPE_COMMENT`](#token.TYPE_COMMENT "token.TYPE_COMMENT"), [`TYPE_IGNORE`](#token.TYPE_IGNORE "token.TYPE_IGNORE"), [`COLONEQUAL`](#token.COLONEQUAL "token.COLONEQUAL"). Added [`AWAIT`](#token.AWAIT "token.AWAIT") and [`ASYNC`](#token.ASYNC "token.ASYNC") tokens back (they’re needed to support parsing older Python versions for [`ast.parse()`](ast#ast.parse "ast.parse") with `feature_version` set to 6 or lower). python email.mime: Creating email and MIME objects from scratch email.mime: Creating email and MIME objects from scratch ======================================================== **Source code:** [Lib/email/mime/](https://github.com/python/cpython/tree/3.9/Lib/email/mime/) This module is part of the legacy (`Compat32`) email API. Its functionality is partially replaced by the [`contentmanager`](email.contentmanager#module-email.contentmanager "email.contentmanager: Storing and Retrieving Content from MIME Parts") in the new API, but in certain applications these classes may still be useful, even in non-legacy code. Ordinarily, you get a message object structure by passing a file or some text to a parser, which parses the text and returns the root message object. However you can also build a complete message structure from scratch, or even individual [`Message`](email.compat32-message#email.message.Message "email.message.Message") objects by hand. In fact, you can also take an existing structure and add new [`Message`](email.compat32-message#email.message.Message "email.message.Message") objects, move them around, etc. This makes a very convenient interface for slicing-and-dicing MIME messages. You can create a new object structure by creating [`Message`](email.compat32-message#email.message.Message "email.message.Message") instances, adding attachments and all the appropriate headers manually. For MIME messages though, the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package provides some convenient subclasses to make things easier. Here are the classes: `class email.mime.base.MIMEBase(_maintype, _subtype, *, policy=compat32, **_params)` Module: `email.mime.base` This is the base class for all the MIME-specific subclasses of [`Message`](email.compat32-message#email.message.Message "email.message.Message"). 
Ordinarily you won’t create instances specifically of [`MIMEBase`](#email.mime.base.MIMEBase "email.mime.base.MIMEBase"), although you could. [`MIMEBase`](#email.mime.base.MIMEBase "email.mime.base.MIMEBase") is provided primarily as a convenient base class for more specific MIME-aware subclasses. *\_maintype* is the *Content-Type* major type (e.g. *text* or *image*), and *\_subtype* is the *Content-Type* minor type (e.g. *plain* or *gif*). *\_params* is a parameter key/value dictionary and is passed directly to [`Message.add_header`](email.compat32-message#email.message.Message.add_header "email.message.Message.add_header"). If *policy* is specified (it defaults to the [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32") policy), it will be passed to [`Message`](email.compat32-message#email.message.Message "email.message.Message"). The [`MIMEBase`](#email.mime.base.MIMEBase "email.mime.base.MIMEBase") class always adds a *Content-Type* header (based on *\_maintype*, *\_subtype*, and *\_params*), and a *MIME-Version* header (always set to `1.0`). Changed in version 3.6: Added *policy* keyword-only parameter. `class email.mime.nonmultipart.MIMENonMultipart` Module: `email.mime.nonmultipart` A subclass of [`MIMEBase`](#email.mime.base.MIMEBase "email.mime.base.MIMEBase"), this is an intermediate base class for MIME messages that are not *multipart*. The primary purpose of this class is to prevent the use of the [`attach()`](email.compat32-message#email.message.Message.attach "email.message.Message.attach") method, which only makes sense for *multipart* messages. If [`attach()`](email.compat32-message#email.message.Message.attach "email.message.Message.attach") is called, a [`MultipartConversionError`](email.errors#email.errors.MultipartConversionError "email.errors.MultipartConversionError") exception is raised. `class email.mime.multipart.MIMEMultipart(_subtype='mixed', boundary=None, _subparts=None, *, policy=compat32, **_params)` Module: `email.mime.multipart` A subclass of [`MIMEBase`](#email.mime.base.MIMEBase "email.mime.base.MIMEBase"), this is an intermediate base class for MIME messages that are *multipart*. Optional *\_subtype* defaults to *mixed*, but can be used to specify the subtype of the message. A *Content-Type* header of *multipart/\_subtype* will be added to the message object. A *MIME-Version* header will also be added. Optional *boundary* is the multipart boundary string. When `None` (the default), the boundary is calculated when needed (for example, when the message is serialized). *\_subparts* is a sequence of initial subparts for the payload. It must be possible to convert this sequence to a list. You can always attach new subparts to the message by using the [`Message.attach`](email.compat32-message#email.message.Message.attach "email.message.Message.attach") method. Optional *policy* argument defaults to [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32"). Additional parameters for the *Content-Type* header are taken from the keyword arguments, or passed into the *\_params* argument, which is a keyword dictionary. Changed in version 3.6: Added *policy* keyword-only parameter. 
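A short sketch of `MIMEMultipart` in use (the addresses and subject are placeholders): create a *multipart/mixed* container, add headers, and attach a text part:

```
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()              # Content-Type: multipart/mixed
msg['Subject'] = 'Monthly report'  # placeholder headers
msg['From'] = 'sender@example.com'
msg['To'] = 'recipient@example.com'
msg.attach(MIMEText('Please find the report attached.'))
print(msg.as_string())
```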
`class email.mime.application.MIMEApplication(_data, _subtype='octet-stream', _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)` Module: `email.mime.application` A subclass of [`MIMENonMultipart`](#email.mime.nonmultipart.MIMENonMultipart "email.mime.nonmultipart.MIMENonMultipart"), the [`MIMEApplication`](#email.mime.application.MIMEApplication "email.mime.application.MIMEApplication") class is used to represent MIME message objects of major type *application*. *\_data* is a string containing the raw byte data. Optional *\_subtype* specifies the MIME subtype and defaults to *octet-stream*. Optional *\_encoder* is a callable (i.e. function) which will perform the actual encoding of the data for transport. This callable takes one argument, which is the [`MIMEApplication`](#email.mime.application.MIMEApplication "email.mime.application.MIMEApplication") instance. It should use [`get_payload()`](email.compat32-message#email.message.Message.get_payload "email.message.Message.get_payload") and [`set_payload()`](email.compat32-message#email.message.Message.set_payload "email.message.Message.set_payload") to change the payload to encoded form. It should also add any *Content-Transfer-Encoding* or other headers to the message object as necessary. The default encoding is base64. See the [`email.encoders`](email.encoders#module-email.encoders "email.encoders: Encoders for email message payloads.") module for a list of the built-in encoders. Optional *policy* argument defaults to [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32"). *\_params* are passed straight through to the base class constructor. Changed in version 3.6: Added *policy* keyword-only parameter. `class email.mime.audio.MIMEAudio(_audiodata, _subtype=None, _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)` Module: `email.mime.audio` A subclass of [`MIMENonMultipart`](#email.mime.nonmultipart.MIMENonMultipart "email.mime.nonmultipart.MIMENonMultipart"), the [`MIMEAudio`](#email.mime.audio.MIMEAudio "email.mime.audio.MIMEAudio") class is used to create MIME message objects of major type *audio*. *\_audiodata* is a string containing the raw audio data. If this data can be decoded by the standard Python module [`sndhdr`](sndhdr#module-sndhdr "sndhdr: Determine type of a sound file. (deprecated)"), then the subtype will be automatically included in the *Content-Type* header. Otherwise you can explicitly specify the audio subtype via the *\_subtype* argument. If the minor type could not be guessed and *\_subtype* was not given, then [`TypeError`](exceptions#TypeError "TypeError") is raised. Optional *\_encoder* is a callable (i.e. function) which will perform the actual encoding of the audio data for transport. This callable takes one argument, which is the [`MIMEAudio`](#email.mime.audio.MIMEAudio "email.mime.audio.MIMEAudio") instance. It should use [`get_payload()`](email.compat32-message#email.message.Message.get_payload "email.message.Message.get_payload") and [`set_payload()`](email.compat32-message#email.message.Message.set_payload "email.message.Message.set_payload") to change the payload to encoded form. It should also add any *Content-Transfer-Encoding* or other headers to the message object as necessary. The default encoding is base64. See the [`email.encoders`](email.encoders#module-email.encoders "email.encoders: Encoders for email message payloads.") module for a list of the built-in encoders. 
Optional *policy* argument defaults to [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32"). *\_params* are passed straight through to the base class constructor. Changed in version 3.6: Added *policy* keyword-only parameter. `class email.mime.image.MIMEImage(_imagedata, _subtype=None, _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)` Module: `email.mime.image` A subclass of [`MIMENonMultipart`](#email.mime.nonmultipart.MIMENonMultipart "email.mime.nonmultipart.MIMENonMultipart"), the [`MIMEImage`](#email.mime.image.MIMEImage "email.mime.image.MIMEImage") class is used to create MIME message objects of major type *image*. *\_imagedata* is a string containing the raw image data. If this data can be decoded by the standard Python module [`imghdr`](imghdr#module-imghdr "imghdr: Determine the type of image contained in a file or byte stream. (deprecated)"), then the subtype will be automatically included in the *Content-Type* header. Otherwise you can explicitly specify the image subtype via the *\_subtype* argument. If the minor type could not be guessed and *\_subtype* was not given, then [`TypeError`](exceptions#TypeError "TypeError") is raised. Optional *\_encoder* is a callable (i.e. function) which will perform the actual encoding of the image data for transport. This callable takes one argument, which is the [`MIMEImage`](#email.mime.image.MIMEImage "email.mime.image.MIMEImage") instance. It should use [`get_payload()`](email.compat32-message#email.message.Message.get_payload "email.message.Message.get_payload") and [`set_payload()`](email.compat32-message#email.message.Message.set_payload "email.message.Message.set_payload") to change the payload to encoded form. It should also add any *Content-Transfer-Encoding* or other headers to the message object as necessary. The default encoding is base64. See the [`email.encoders`](email.encoders#module-email.encoders "email.encoders: Encoders for email message payloads.") module for a list of the built-in encoders. Optional *policy* argument defaults to [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32"). *\_params* are passed straight through to the [`MIMEBase`](#email.mime.base.MIMEBase "email.mime.base.MIMEBase") constructor. Changed in version 3.6: Added *policy* keyword-only parameter. `class email.mime.message.MIMEMessage(_msg, _subtype='rfc822', *, policy=compat32)` Module: `email.mime.message` A subclass of [`MIMENonMultipart`](#email.mime.nonmultipart.MIMENonMultipart "email.mime.nonmultipart.MIMENonMultipart"), the [`MIMEMessage`](#email.mime.message.MIMEMessage "email.mime.message.MIMEMessage") class is used to create MIME objects of main type *message*. *\_msg* is used as the payload, and must be an instance of class [`Message`](email.compat32-message#email.message.Message "email.message.Message") (or a subclass thereof), otherwise a [`TypeError`](exceptions#TypeError "TypeError") is raised. Optional *\_subtype* sets the subtype of the message; it defaults to *rfc822*. Optional *policy* argument defaults to [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32"). Changed in version 3.6: Added *policy* keyword-only parameter. 
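As an illustration of the two classes just described, here is a minimal sketch (the image file name is hypothetical):

```
from email.mime.image import MIMEImage
from email.mime.message import MIMEMessage
from email.mime.text import MIMEText

# Hypothetical file; passing _subtype explicitly sidesteps imghdr guessing.
with open('logo.png', 'rb') as f:
    img = MIMEImage(f.read(), _subtype='png')
img.add_header('Content-Disposition', 'attachment', filename='logo.png')

# Wrap an existing Message as a message/rfc822 part.
forwarded = MIMEMessage(MIMEText('body of the original message'))
```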
`class email.mime.text.MIMEText(_text, _subtype='plain', _charset=None, *, policy=compat32)` Module: `email.mime.text` A subclass of [`MIMENonMultipart`](#email.mime.nonmultipart.MIMENonMultipart "email.mime.nonmultipart.MIMENonMultipart"), the [`MIMEText`](#email.mime.text.MIMEText "email.mime.text.MIMEText") class is used to create MIME objects of major type *text*. *\_text* is the string for the payload. *\_subtype* is the minor type and defaults to *plain*. *\_charset* is the character set of the text and is passed as an argument to the [`MIMENonMultipart`](#email.mime.nonmultipart.MIMENonMultipart "email.mime.nonmultipart.MIMENonMultipart") constructor; it defaults to `us-ascii` if the string contains only `ascii` code points, and `utf-8` otherwise. The *\_charset* parameter accepts either a string or a [`Charset`](email.charset#email.charset.Charset "email.charset.Charset") instance. Unless the *\_charset* argument is explicitly set to `None`, the MIMEText object created will have both a *Content-Type* header with a `charset` parameter, and a *Content-Transfer-Encoding* header. This means that a subsequent `set_payload` call will not result in an encoded payload, even if a charset is passed in the `set_payload` command. You can “reset” this behavior by deleting the `Content-Transfer-Encoding` header, after which a `set_payload` call will automatically encode the new payload (and add a new *Content-Transfer-Encoding* header). Optional *policy* argument defaults to [`compat32`](email.policy#email.policy.Compat32 "email.policy.Compat32"). Changed in version 3.5: *\_charset* also accepts [`Charset`](email.charset#email.charset.Charset "email.charset.Charset") instances. Changed in version 3.6: Added *policy* keyword-only parameter.
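A small sketch of the charset selection described above: an ASCII-only string yields *us-ascii*, while non-ASCII text switches the part to *utf-8* and adds a matching *Content-Transfer-Encoding* header:

```
from email.mime.text import MIMEText

ascii_part = MIMEText('plain ascii body')
print(ascii_part['Content-Type'])  # text/plain; charset="us-ascii"

utf8_part = MIMEText('caf\u00e9')  # non-ASCII, so utf-8 is chosen
print(utf8_part['Content-Type'])   # text/plain; charset="utf-8"
print(utf8_part['Content-Transfer-Encoding'])  # base64
```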
python ctypes — A foreign function library for Python ctypes — A foreign function library for Python ============================================== [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python. ctypes tutorial --------------- Note: The code samples in this tutorial use [`doctest`](doctest#module-doctest "doctest: Test pieces of code within docstrings.") to make sure that they actually work. Since some code samples behave differently under Linux, Windows, or macOS, they contain doctest directives in comments. Note: Some code samples reference the ctypes [`c_int`](#ctypes.c_int "ctypes.c_int") type. On platforms where `sizeof(long) == sizeof(int)` it is an alias to [`c_long`](#ctypes.c_long "ctypes.c_long"). So, you should not be confused if [`c_long`](#ctypes.c_long "ctypes.c_long") is printed if you would expect [`c_int`](#ctypes.c_int "ctypes.c_int") — they are actually the same type. ### Loading dynamic link libraries [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") exports the *cdll*, and on Windows *windll* and *oledll* objects, for loading dynamic link libraries. You load libraries by accessing them as attributes of these objects. *cdll* loads libraries which export functions using the standard `cdecl` calling convention, while *windll* libraries call functions using the `stdcall` calling convention. *oledll* also uses the `stdcall` calling convention, and assumes the functions return a Windows `HRESULT` error code. The error code is used to automatically raise an [`OSError`](exceptions#OSError "OSError") exception when the function call fails. Changed in version 3.3: Windows errors used to raise [`WindowsError`](exceptions#WindowsError "WindowsError"), which is now an alias of [`OSError`](exceptions#OSError "OSError"). Here are some examples for Windows. Note that `msvcrt` is the MS standard C library containing most standard C functions, and uses the cdecl calling convention: ``` >>> from ctypes import * >>> print(windll.kernel32) <WinDLL 'kernel32', handle ... at ...> >>> print(cdll.msvcrt) <CDLL 'msvcrt', handle ... at ...> >>> libc = cdll.msvcrt >>> ``` Windows appends the usual `.dll` file suffix automatically. Note Accessing the standard C library through `cdll.msvcrt` will use an outdated version of the library that may be incompatible with the one being used by Python. Where possible, use native Python functionality, or else import and use the `msvcrt` module. On Linux, it is required to specify the filename *including* the extension to load a library, so attribute access can not be used to load libraries. Either the `LoadLibrary()` method of the dll loaders should be used, or you should load the library by creating an instance of CDLL by calling the constructor: ``` >>> cdll.LoadLibrary("libc.so.6") <CDLL 'libc.so.6', handle ... at ...> >>> libc = CDLL("libc.so.6") >>> libc <CDLL 'libc.so.6', handle ... 
at ...> >>> ``` ### Accessing functions from loaded dlls Functions are accessed as attributes of dll objects: ``` >>> from ctypes import * >>> libc.printf <_FuncPtr object at 0x...> >>> print(windll.kernel32.GetModuleHandleA) <_FuncPtr object at 0x...> >>> print(windll.kernel32.MyOwnFunction) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "ctypes.py", line 239, in __getattr__ func = _StdcallFuncPtr(name, self) AttributeError: function 'MyOwnFunction' not found >>> ``` Note that win32 system dlls like `kernel32` and `user32` often export ANSI as well as UNICODE versions of a function. The UNICODE version is exported with a `W` appended to the name, while the ANSI version is exported with an `A` appended to the name. The win32 `GetModuleHandle` function, which returns a *module handle* for a given module name, has the following C prototype, and a macro is used to expose one of them as `GetModuleHandle` depending on whether UNICODE is defined or not: ``` /* ANSI version */ HMODULE GetModuleHandleA(LPCSTR lpModuleName); /* UNICODE version */ HMODULE GetModuleHandleW(LPCWSTR lpModuleName); ``` *windll* does not try to select one of them by magic; you must access the version you need by specifying `GetModuleHandleA` or `GetModuleHandleW` explicitly, and then call it with bytes or string objects respectively. Sometimes, dlls export functions with names which aren’t valid Python identifiers, like `"??2@YAPAXI@Z"`. In this case you have to use [`getattr()`](functions#getattr "getattr") to retrieve the function: ``` >>> getattr(cdll.msvcrt, "??2@YAPAXI@Z") <_FuncPtr object at 0x...> >>> ``` On Windows, some dlls export functions not by name but by ordinal. These functions can be accessed by indexing the dll object with the ordinal number: ``` >>> cdll.kernel32[1] <_FuncPtr object at 0x...> >>> cdll.kernel32[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "ctypes.py", line 310, in __getitem__ func = _StdcallFuncPtr(name, self) AttributeError: function ordinal 0 not found >>> ``` ### Calling functions You can call these functions like any other Python callable. This example uses the `time()` function, which returns system time in seconds since the Unix epoch, and the `GetModuleHandleA()` function, which returns a win32 module handle. This example calls both functions with a `NULL` pointer (`None` should be used as the `NULL` pointer): ``` >>> print(libc.time(None)) 1150640792 >>> print(hex(windll.kernel32.GetModuleHandleA(None))) 0x1d000000 >>> ``` [`ValueError`](exceptions#ValueError "ValueError") is raised when you call an `stdcall` function with the `cdecl` calling convention, or vice versa: ``` >>> cdll.kernel32.GetModuleHandleA(None) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: Procedure probably called with not enough arguments (4 bytes missing) >>> >>> windll.msvcrt.printf(b"spam") Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: Procedure probably called with too many arguments (4 bytes in excess) >>> ``` To find out the correct calling convention you have to look into the C header file or the documentation for the function you want to call. 
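As an aside to the loading examples above: the file name of the standard C library varies by platform, and `ctypes.util.find_library()` can be used to locate it portably. A minimal sketch (the resolved name and the value returned by `time()` will differ on your system):

```
>>> from ctypes import CDLL
>>> from ctypes.util import find_library
>>> find_library("c")               # doctest: +SKIP
'libc.so.6'
>>> libc = CDLL(find_library("c"))  # doctest: +SKIP
>>> libc.time(None)                 # doctest: +SKIP
1150640792
```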
On Windows, [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") uses win32 structured exception handling to prevent crashes from general protection faults when functions are called with invalid argument values: ``` >>> windll.kernel32.GetModuleHandleA(32) Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: exception: access violation reading 0x00000020 >>> ``` There are, however, enough ways to crash Python with [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python."), so you should be careful anyway. The [`faulthandler`](faulthandler#module-faulthandler "faulthandler: Dump the Python traceback.") module can be helpful in debugging crashes (e.g. from segmentation faults produced by erroneous C library calls). `None`, integers, bytes objects and (unicode) strings are the only native Python objects that can directly be used as parameters in these function calls. `None` is passed as a C `NULL` pointer, bytes objects and strings are passed as pointer to the memory block that contains their data (`char *` or `wchar_t *`). Python integers are passed as the platforms default C `int` type, their value is masked to fit into the C type. Before we move on calling functions with other parameter types, we have to learn more about [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") data types. ### Fundamental data types [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") defines a number of primitive C compatible data types: | ctypes type | C type | Python type | | --- | --- | --- | | [`c_bool`](#ctypes.c_bool "ctypes.c_bool") | `_Bool` | bool (1) | | [`c_char`](#ctypes.c_char "ctypes.c_char") | `char` | 1-character bytes object | | [`c_wchar`](#ctypes.c_wchar "ctypes.c_wchar") | `wchar_t` | 1-character string | | [`c_byte`](#ctypes.c_byte "ctypes.c_byte") | `char` | int | | [`c_ubyte`](#ctypes.c_ubyte "ctypes.c_ubyte") | `unsigned char` | int | | [`c_short`](#ctypes.c_short "ctypes.c_short") | `short` | int | | [`c_ushort`](#ctypes.c_ushort "ctypes.c_ushort") | `unsigned short` | int | | [`c_int`](#ctypes.c_int "ctypes.c_int") | `int` | int | | [`c_uint`](#ctypes.c_uint "ctypes.c_uint") | `unsigned int` | int | | [`c_long`](#ctypes.c_long "ctypes.c_long") | `long` | int | | [`c_ulong`](#ctypes.c_ulong "ctypes.c_ulong") | `unsigned long` | int | | [`c_longlong`](#ctypes.c_longlong "ctypes.c_longlong") | `__int64` or `long long` | int | | [`c_ulonglong`](#ctypes.c_ulonglong "ctypes.c_ulonglong") | `unsigned __int64` or `unsigned long long` | int | | [`c_size_t`](#ctypes.c_size_t "ctypes.c_size_t") | `size_t` | int | | [`c_ssize_t`](#ctypes.c_ssize_t "ctypes.c_ssize_t") | `ssize_t` or [`Py_ssize_t`](../c-api/intro#c.Py_ssize_t "Py_ssize_t") | int | | [`c_float`](#ctypes.c_float "ctypes.c_float") | `float` | float | | [`c_double`](#ctypes.c_double "ctypes.c_double") | `double` | float | | [`c_longdouble`](#ctypes.c_longdouble "ctypes.c_longdouble") | `long double` | float | | [`c_char_p`](#ctypes.c_char_p "ctypes.c_char_p") | `char *` (NUL terminated) | bytes object or `None` | | [`c_wchar_p`](#ctypes.c_wchar_p "ctypes.c_wchar_p") | `wchar_t *` (NUL terminated) | string or `None` | | [`c_void_p`](#ctypes.c_void_p "ctypes.c_void_p") | `void *` | int or `None` | 1. The constructor accepts any object with a truth value. 
All these types can be created by calling them with an optional initializer of the correct type and value:

```
>>> c_int()
c_long(0)
>>> c_wchar_p("Hello, World")
c_wchar_p(140018365411392)
>>> c_ushort(-3)
c_ushort(65533)
>>>
```

Since these types are mutable, their value can also be changed afterwards:

```
>>> i = c_int(42)
>>> print(i)
c_long(42)
>>> print(i.value)
42
>>> i.value = -99
>>> print(i.value)
-99
>>>
```

Assigning a new value to instances of the pointer types [`c_char_p`](#ctypes.c_char_p "ctypes.c_char_p"), [`c_wchar_p`](#ctypes.c_wchar_p "ctypes.c_wchar_p"), and [`c_void_p`](#ctypes.c_void_p "ctypes.c_void_p") changes the *memory location* they point to, *not the contents* of the memory block (of course not, because Python bytes objects are immutable):

```
>>> s = "Hello, World"
>>> c_s = c_wchar_p(s)
>>> print(c_s)
c_wchar_p(139966785747344)
>>> print(c_s.value)
Hello, World
>>> c_s.value = "Hi, there"
>>> print(c_s)              # the memory location has changed
c_wchar_p(139966783348904)
>>> print(c_s.value)
Hi, there
>>> print(s)                # first object is unchanged
Hello, World
>>>
```

You should be careful, however, not to pass them to functions expecting pointers to mutable memory. If you need mutable memory blocks, ctypes has a [`create_string_buffer()`](#ctypes.create_string_buffer "ctypes.create_string_buffer") function which creates these in various ways. The current memory block contents can be accessed (or changed) with the `raw` property; if you want to access it as a NUL-terminated string, use the `value` property:

```
>>> from ctypes import *
>>> p = create_string_buffer(3)            # create a 3 byte buffer, initialized to NUL bytes
>>> print(sizeof(p), repr(p.raw))
3 b'\x00\x00\x00'
>>> p = create_string_buffer(b"Hello")     # create a buffer containing a NUL terminated string
>>> print(sizeof(p), repr(p.raw))
6 b'Hello\x00'
>>> print(repr(p.value))
b'Hello'
>>> p = create_string_buffer(b"Hello", 10) # create a 10 byte buffer
>>> print(sizeof(p), repr(p.raw))
10 b'Hello\x00\x00\x00\x00\x00'
>>> p.value = b"Hi"
>>> print(sizeof(p), repr(p.raw))
10 b'Hi\x00lo\x00\x00\x00\x00\x00'
>>>
```

The [`create_string_buffer()`](#ctypes.create_string_buffer "ctypes.create_string_buffer") function replaces the `c_buffer()` function (which is still available as an alias), as well as the `c_string()` function from earlier ctypes releases. To create a mutable memory block containing unicode characters of the C type `wchar_t`, use the [`create_unicode_buffer()`](#ctypes.create_unicode_buffer "ctypes.create_unicode_buffer") function.

### Calling functions, continued

Note that printf prints to the real standard output channel, *not* to [`sys.stdout`](sys#sys.stdout "sys.stdout"), so these examples will only work at the console prompt, not from within *IDLE* or *PythonWin*:

```
>>> printf = libc.printf
>>> printf(b"Hello, %s\n", b"World!")
Hello, World!
14
>>> printf(b"Hello, %S\n", "World!")
Hello, World!
14
>>> printf(b"%d bottles of beer\n", 42)
42 bottles of beer
19
>>> printf(b"%f bottles of beer\n", 42.5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ArgumentError: argument 2: exceptions.TypeError: Don't know how to convert parameter 2
>>>
```

As has been mentioned before, all Python types except integers, strings, and bytes objects have to be wrapped in their corresponding [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") type, so that they can be converted to the required C data type:

```
>>> printf(b"An int %d, a double %f\n", 1234, c_double(3.14))
An int 1234, a double 3.140000
31
>>>
```

### Calling functions with your own custom data types

You can also customize [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") argument conversion to allow instances of your own classes to be used as function arguments. [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") looks for an `_as_parameter_` attribute and uses this as the function argument. Of course, it must be an integer, a string, or a bytes object:

```
>>> class Bottles:
...     def __init__(self, number):
...         self._as_parameter_ = number
...
>>> bottles = Bottles(42)
>>> printf(b"%d bottles of beer\n", bottles)
42 bottles of beer
19
>>>
```

If you don't want to store the instance's data in the `_as_parameter_` instance variable, you could define a [`property`](functions#property "property") which makes the attribute available on request.

### Specifying the required argument types (function prototypes)

It is possible to specify the required argument types of functions exported from DLLs by setting the `argtypes` attribute.

`argtypes` must be a sequence of C data types (the `printf` function is probably not a good example here, because it takes a variable number and different types of parameters depending on the format string; on the other hand, it is quite handy for experimenting with this feature):

```
>>> printf.argtypes = [c_char_p, c_char_p, c_int, c_double]
>>> printf(b"String '%s', Int %d, Double %f\n", b"Hi", 10, 2.2)
String 'Hi', Int 10, Double 2.200000
37
>>>
```

Specifying the argument types protects against incompatible arguments (just as a prototype does for a C function), and tries to convert the arguments to valid types:

```
>>> printf(b"%d %d %d", 1, 2, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ArgumentError: argument 2: exceptions.TypeError: wrong type
>>> printf(b"%s %d %f\n", b"X", 2, 3)
X 2 3.000000
13
>>>
```

If you have defined your own classes which you pass to function calls, you have to implement a `from_param()` class method for them to be able to use them in the `argtypes` sequence. The `from_param()` class method receives the Python object passed to the function call; it should do a typecheck or whatever else is needed to make sure this object is acceptable, and then return the object itself, its `_as_parameter_` attribute, or whatever you want to pass as the C function argument in this case. Again, the result should be an integer, string, bytes, a [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") instance, or an object with an `_as_parameter_` attribute.

### Return types

By default, functions are assumed to return the C `int` type. Other return types can be specified by setting the `restype` attribute of the function object.
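For instance, here is a minimal sketch (assuming a standard C library can be located with `ctypes.util.find_library`) of why `restype` matters for a function returning a `double`:

```
# Minimal sketch -- assumes a standard C library can be found on this platform.
from ctypes import CDLL, c_char_p, c_double
from ctypes.util import find_library

libc = CDLL(find_library("c"))
libc.atof.argtypes = [c_char_p]
libc.atof.restype = c_double   # without this, the result is read as a C int
print(libc.atof(b"3.14"))      # 3.14
```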
Here is a more advanced example; it uses the `strchr` function, which expects a string pointer and a char, and returns a pointer to a string:

```
>>> strchr = libc.strchr
>>> strchr(b"abcdef", ord("d"))
8059983
>>> strchr.restype = c_char_p    # c_char_p is a pointer to a string
>>> strchr(b"abcdef", ord("d"))
b'def'
>>> print(strchr(b"abcdef", ord("x")))
None
>>>
```

If you want to avoid the `ord("x")` calls above, you can set the `argtypes` attribute, and the second argument will be converted from a single character Python bytes object into a C char:

```
>>> strchr.restype = c_char_p
>>> strchr.argtypes = [c_char_p, c_char]
>>> strchr(b"abcdef", b"d")
b'def'
>>> strchr(b"abcdef", b"def")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ArgumentError: argument 2: exceptions.TypeError: one character string expected
>>> print(strchr(b"abcdef", b"x"))
None
>>> strchr(b"abcdef", b"d")
b'def'
>>>
```

You can also use a callable Python object (a function or a class for example) as the `restype` attribute, if the foreign function returns an integer. The callable will be called with the *integer* the C function returns, and the result of this call will be used as the result of your function call. This is useful to check for error return values and automatically raise an exception:

```
>>> GetModuleHandle = windll.kernel32.GetModuleHandleA
>>> def ValidHandle(value):
...     if value == 0:
...         raise WinError()
...     return value
...
>>>
>>> GetModuleHandle.restype = ValidHandle
>>> GetModuleHandle(None)
486539264
>>> GetModuleHandle("something silly")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in ValidHandle
OSError: [Errno 126] The specified module could not be found.
>>>
```

`WinError` is a function which calls the Windows `FormatMessage()` API to get the string representation of an error code, and *returns* an exception. `WinError` takes an optional error code parameter; if none is given, it calls [`GetLastError()`](#ctypes.GetLastError "ctypes.GetLastError") to retrieve it.

Please note that a much more powerful error checking mechanism is available through the `errcheck` attribute; see the reference manual for details.

### Passing pointers (or: passing parameters by reference)

Sometimes a C API function expects a *pointer* to a data type as a parameter, probably to write into the corresponding location, or because the data is too large to be passed by value. This is also known as *passing parameters by reference*.

[`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") exports the [`byref()`](#ctypes.byref "ctypes.byref") function which is used to pass parameters by reference. The same effect can be achieved with the [`pointer()`](#ctypes.pointer "ctypes.pointer") function, although [`pointer()`](#ctypes.pointer "ctypes.pointer") does a lot more work since it constructs a real pointer object, so it is faster to use [`byref()`](#ctypes.byref "ctypes.byref") if you don't need the pointer object in Python itself:

```
>>> i = c_int()
>>> f = c_float()
>>> s = create_string_buffer(b'\000' * 32)
>>> print(i.value, f.value, repr(s.value))
0 0.0 b''
>>> libc.sscanf(b"1 3.14 Hello", b"%d %f %s",
...             byref(i), byref(f), s)
3
>>> print(i.value, f.value, repr(s.value))
1 3.1400001049 b'Hello'
>>>
```

### Structures and unions

Structures and unions must derive from the [`Structure`](#ctypes.Structure "ctypes.Structure") and [`Union`](#ctypes.Union "ctypes.Union") base classes which are defined in the [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") module. Each subclass must define a `_fields_` attribute. `_fields_` must be a list of *2-tuples*, containing a *field name* and a *field type*.

The field type must be a [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") type like [`c_int`](#ctypes.c_int "ctypes.c_int"), or any other derived [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") type: structure, union, array, pointer.

Here is a simple example of a POINT structure, which contains two integers named *x* and *y*, and also shows how to initialize a structure in the constructor:

```
>>> from ctypes import *
>>> class POINT(Structure):
...     _fields_ = [("x", c_int),
...                 ("y", c_int)]
...
>>> point = POINT(10, 20)
>>> print(point.x, point.y)
10 20
>>> point = POINT(y=5)
>>> print(point.x, point.y)
0 5
>>> POINT(1, 2, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: too many initializers
>>>
```

You can, however, build much more complicated structures. A structure can itself contain other structures by using a structure as a field type.

Here is a RECT structure which contains two POINTs named *upperleft* and *lowerright*:

```
>>> class RECT(Structure):
...     _fields_ = [("upperleft", POINT),
...                 ("lowerright", POINT)]
...
>>> rc = RECT(point)
>>> print(rc.upperleft.x, rc.upperleft.y)
0 5
>>> print(rc.lowerright.x, rc.lowerright.y)
0 0
>>>
```

Nested structures can also be initialized in the constructor in several ways:

```
>>> r = RECT(POINT(1, 2), POINT(3, 4))
>>> r = RECT((1, 2), (3, 4))
```

Field [descriptor](../glossary#term-descriptor)s can be retrieved from the *class*; they are handy for debugging because they show each field's type, offset, and size:

```
>>> print(POINT.x)
<Field type=c_long, ofs=0, size=4>
>>> print(POINT.y)
<Field type=c_long, ofs=4, size=4>
>>>
```

Warning

[`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") does not support passing unions or structures with bit-fields to functions by value. While this may work on 32-bit x86, it's not guaranteed by the library to work in the general case. Unions and structures with bit-fields should always be passed to functions by pointer.

### Structure/union alignment and byte order

By default, Structure and Union fields are aligned in the same way the C compiler does it. It is possible to override this behavior by specifying a `_pack_` class attribute in the subclass definition. This must be set to a positive integer and specifies the maximum alignment for the fields. This is what `#pragma pack(n)` also does in MSVC.

[`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") uses the native byte order for Structures and Unions. To build structures with non-native byte order, you can use one of the [`BigEndianStructure`](#ctypes.BigEndianStructure "ctypes.BigEndianStructure"), [`LittleEndianStructure`](#ctypes.LittleEndianStructure "ctypes.LittleEndianStructure"), `BigEndianUnion`, and `LittleEndianUnion` base classes. These classes cannot contain pointer fields.
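Both knobs can be seen in a minimal sketch (the sizes shown assume a typical platform where `c_int` is 4 bytes):

```
# Minimal sketch: sizes assume a platform where c_int is 4 bytes.
import ctypes

class Packed(ctypes.Structure):
    _pack_ = 1                     # no padding between fields, like #pragma pack(1)
    _fields_ = [("tag", ctypes.c_char), ("value", ctypes.c_int)]

class BEPoint(ctypes.BigEndianStructure):
    _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

print(ctypes.sizeof(Packed))   # 5 -- usually 8 without _pack_
print(bytes(BEPoint(1, 2)))    # b'\x00\x00\x00\x01\x00\x00\x00\x02'
```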
### Bit fields in structures and unions

It is possible to create structures and unions containing bit fields. Bit fields are only possible for integer fields; the bit width is specified as the third item in the `_fields_` tuples:

```
>>> class Int(Structure):
...     _fields_ = [("first_16", c_int, 16),
...                 ("second_16", c_int, 16)]
...
>>> print(Int.first_16)
<Field type=c_long, ofs=0:0, bits=16>
>>> print(Int.second_16)
<Field type=c_long, ofs=0:16, bits=16>
>>>
```

### Arrays

Arrays are sequences, containing a fixed number of instances of the same type.

The recommended way to create array types is by multiplying a data type with a positive integer:

```
TenPointsArrayType = POINT * 10
```

Here is an example of a somewhat artificial data type, a structure containing 4 POINTs among other stuff:

```
>>> from ctypes import *
>>> class POINT(Structure):
...     _fields_ = ("x", c_int), ("y", c_int)
...
>>> class MyStruct(Structure):
...     _fields_ = [("a", c_int),
...                 ("b", c_float),
...                 ("point_array", POINT * 4)]
>>>
>>> print(len(MyStruct().point_array))
4
>>>
```

Instances are created in the usual way, by calling the class:

```
arr = TenPointsArrayType()
for pt in arr:
    print(pt.x, pt.y)
```

The above code prints a series of `0 0` lines, because the array contents are initialized to zeros.

Initializers of the correct type can also be specified:

```
>>> from ctypes import *
>>> TenIntegers = c_int * 10
>>> ii = TenIntegers(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
>>> print(ii)
<c_long_Array_10 object at 0x...>
>>> for i in ii: print(i, end=" ")
...
1 2 3 4 5 6 7 8 9 10
>>>
```

### Pointers

Pointer instances are created by calling the [`pointer()`](#ctypes.pointer "ctypes.pointer") function on a [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") type:

```
>>> from ctypes import *
>>> i = c_int(42)
>>> pi = pointer(i)
>>>
```

Pointer instances have a [`contents`](#ctypes._Pointer.contents "ctypes._Pointer.contents") attribute which returns the object to which the pointer points, the `i` object above:

```
>>> pi.contents
c_long(42)
>>>
```

Note that [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") does not have OOR (original object return); it constructs a new, equivalent object each time you retrieve an attribute:

```
>>> pi.contents is i
False
>>> pi.contents is pi.contents
False
>>>
```

Assigning another [`c_int`](#ctypes.c_int "ctypes.c_int") instance to the pointer's contents attribute would cause the pointer to point to the memory location where this is stored:

```
>>> i = c_int(99)
>>> pi.contents = i
>>> pi.contents
c_long(99)
>>>
```

Pointer instances can also be indexed with integers:

```
>>> pi[0]
99
>>>
```

Assigning to an integer index changes the pointed-to value:

```
>>> print(i)
c_long(99)
>>> pi[0] = 22
>>> print(i)
c_long(22)
>>>
```

It is also possible to use indexes different from 0, but you must know what you're doing, just as in C: you can access or change arbitrary memory locations. Generally you only use this feature if you receive a pointer from a C function, and you *know* that the pointer actually points to an array instead of a single item.
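A minimal sketch of that situation, using the `cast()` and `POINTER()` helpers described below to stand in for a pointer handed to us by C code:

```
# Minimal sketch: a plain pointer that we happen to know refers to 3 elements.
from ctypes import POINTER, c_int, cast

values = (c_int * 3)(10, 20, 30)
p = cast(values, POINTER(c_int))   # simulates a pointer received from C
print(p[0], p[1], p[2])            # 10 20 30 -- p[1] and p[2] are only valid
                                   # because the array really has 3 items
```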
Behind the scenes, the [`pointer()`](#ctypes.pointer "ctypes.pointer") function does more than simply create pointer instances; it has to create pointer *types* first. This is done with the [`POINTER()`](#ctypes.POINTER "ctypes.POINTER") function, which accepts any [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") type, and returns a new type:

```
>>> PI = POINTER(c_int)
>>> PI
<class 'ctypes.LP_c_long'>
>>> PI(42)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: expected c_long instead of int
>>> PI(c_int(42))
<ctypes.LP_c_long object at 0x...>
>>>
```

Calling the pointer type without an argument creates a `NULL` pointer. `NULL` pointers have a `False` boolean value:

```
>>> null_ptr = POINTER(c_int)()
>>> print(bool(null_ptr))
False
>>>
```

[`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") checks for `NULL` when dereferencing pointers (but dereferencing invalid non-`NULL` pointers would crash Python):

```
>>> null_ptr[0]
Traceback (most recent call last):
    ....
ValueError: NULL pointer access
>>>
>>> null_ptr[0] = 1234
Traceback (most recent call last):
    ....
ValueError: NULL pointer access
>>>
```

### Type conversions

Usually, ctypes does strict type checking. This means that if you have `POINTER(c_int)` in the `argtypes` list of a function or as the type of a member field in a structure definition, only instances of exactly the same type are accepted. There are some exceptions to this rule, where ctypes accepts other objects. For example, you can pass compatible array instances instead of pointer types. So, for `POINTER(c_int)`, ctypes accepts an array of `c_int`:

```
>>> class Bar(Structure):
...     _fields_ = [("count", c_int), ("values", POINTER(c_int))]
...
>>> bar = Bar()
>>> bar.values = (c_int * 3)(1, 2, 3)
>>> bar.count = 3
>>> for i in range(bar.count):
...     print(bar.values[i])
...
1
2
3
>>>
```

In addition, if a function argument is explicitly declared to be a pointer type (such as `POINTER(c_int)`) in `argtypes`, an object of the pointed type (`c_int` in this case) can be passed to the function. ctypes will apply the required [`byref()`](#ctypes.byref "ctypes.byref") conversion in this case automatically.

To set a POINTER type field to `NULL`, you can assign `None`:

```
>>> bar.values = None
>>>
```

Sometimes you have instances of incompatible types. In C, you can cast one type into another type. [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") provides a [`cast()`](#ctypes.cast "ctypes.cast") function which can be used in the same way. The `Bar` structure defined above accepts `POINTER(c_int)` pointers or [`c_int`](#ctypes.c_int "ctypes.c_int") arrays for its `values` field, but not instances of other types:

```
>>> bar.values = (c_byte * 4)()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: incompatible types, c_byte_Array_4 instance instead of LP_c_long instance
>>>
```

For these cases, the [`cast()`](#ctypes.cast "ctypes.cast") function is handy.

The [`cast()`](#ctypes.cast "ctypes.cast") function can be used to cast a ctypes instance into a pointer to a different ctypes data type. [`cast()`](#ctypes.cast "ctypes.cast") takes two parameters, a ctypes object that is or can be converted to a pointer of some kind, and a ctypes pointer type.
It returns an instance of the second argument, which references the same memory block as the first argument:

```
>>> a = (c_byte * 4)()
>>> cast(a, POINTER(c_int))
<ctypes.LP_c_long object at ...>
>>>
```

So, [`cast()`](#ctypes.cast "ctypes.cast") can be used to assign to the `values` field of the `Bar` structure:

```
>>> bar = Bar()
>>> bar.values = cast((c_byte * 4)(), POINTER(c_int))
>>> print(bar.values[0])
0
>>>
```

### Incomplete Types

*Incomplete Types* are structures, unions or arrays whose members are not yet specified. In C, they are specified by forward declarations, which are defined later:

```
struct cell;  /* forward declaration */

struct cell {
    char *name;
    struct cell *next;
};
```

The straightforward translation into ctypes code would be this, but it does not work:

```
>>> class cell(Structure):
...     _fields_ = [("name", c_char_p),
...                 ("next", POINTER(cell))]
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in cell
NameError: name 'cell' is not defined
>>>
```

because the new `class cell` is not available in the class statement itself. In [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python."), we can define the `cell` class and set the `_fields_` attribute later, after the class statement:

```
>>> from ctypes import *
>>> class cell(Structure):
...     pass
...
>>> cell._fields_ = [("name", c_char_p),
...                  ("next", POINTER(cell))]
>>>
```

Let's try it. We create two instances of `cell`, let them point to each other, and finally follow the pointer chain a few times:

```
>>> c1 = cell()
>>> c1.name = b"foo"
>>> c2 = cell()
>>> c2.name = b"bar"
>>> c1.next = pointer(c2)
>>> c2.next = pointer(c1)
>>> p = c1
>>> for i in range(8):
...     print(p.name, end=" ")
...     p = p.next[0]
...
foo bar foo bar foo bar foo bar
>>>
```

### Callback functions

[`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") allows creating C callable function pointers from Python callables. These are sometimes called *callback functions*.

First, you must create a class for the callback function. The class knows the calling convention, the return type, and the number and types of arguments this function will receive.

The [`CFUNCTYPE()`](#ctypes.CFUNCTYPE "ctypes.CFUNCTYPE") factory function creates types for callback functions using the `cdecl` calling convention. On Windows, the [`WINFUNCTYPE()`](#ctypes.WINFUNCTYPE "ctypes.WINFUNCTYPE") factory function creates types for callback functions using the `stdcall` calling convention.

Both of these factory functions are called with the result type as the first argument, and the callback function's expected argument types as the remaining arguments.

I will present an example here which uses the standard C library's `qsort()` function, which sorts items with the help of a callback function. `qsort()` will be used to sort an array of integers:

```
>>> IntArray5 = c_int * 5
>>> ia = IntArray5(5, 1, 7, 33, 99)
>>> qsort = libc.qsort
>>> qsort.restype = None
>>>
```

`qsort()` must be called with a pointer to the data to sort, the number of items in the data array, the size of one item, and a pointer to the comparison function, the callback. The callback will then be called with two pointers to items, and it must return a negative integer if the first item is smaller than the second, a zero if they are equal, and a positive integer otherwise.

So our callback function receives pointers to integers, and must return an integer.
First we create the type for the callback function:

```
>>> CMPFUNC = CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))
>>>
```

To get started, here is a simple callback that shows the values it gets passed:

```
>>> def py_cmp_func(a, b):
...     print("py_cmp_func", a[0], b[0])
...     return 0
...
>>> cmp_func = CMPFUNC(py_cmp_func)
>>>
```

The result:

```
>>> qsort(ia, len(ia), sizeof(c_int), cmp_func)
py_cmp_func 5 1
py_cmp_func 33 99
py_cmp_func 7 33
py_cmp_func 5 7
py_cmp_func 1 7
>>>
```

Now we can actually compare the two items and return a useful result:

```
>>> def py_cmp_func(a, b):
...     print("py_cmp_func", a[0], b[0])
...     return a[0] - b[0]
...
>>>
>>> qsort(ia, len(ia), sizeof(c_int), CMPFUNC(py_cmp_func))
py_cmp_func 5 1
py_cmp_func 33 99
py_cmp_func 7 33
py_cmp_func 1 7
py_cmp_func 5 7
>>>
```

As we can easily check, our array is sorted now:

```
>>> for i in ia: print(i, end=" ")
...
1 5 7 33 99
>>>
```

The function factories can be used as decorator factories, so we may as well write:

```
>>> @CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))
... def py_cmp_func(a, b):
...     print("py_cmp_func", a[0], b[0])
...     return a[0] - b[0]
...
>>> qsort(ia, len(ia), sizeof(c_int), py_cmp_func)
py_cmp_func 5 1
py_cmp_func 33 99
py_cmp_func 7 33
py_cmp_func 1 7
py_cmp_func 5 7
>>>
```

Note

Make sure you keep references to [`CFUNCTYPE()`](#ctypes.CFUNCTYPE "ctypes.CFUNCTYPE") objects as long as they are used from C code. [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") doesn't, and if you don't, they may be garbage collected, crashing your program when a callback is made.

Also, note that if the callback function is called in a thread created outside of Python's control (e.g. by the foreign code that calls the callback), ctypes creates a new dummy Python thread on every invocation. This behavior is correct for most purposes, but it means that values stored with [`threading.local`](threading#threading.local "threading.local") will *not* survive across different callbacks, even when those calls are made from the same C thread.
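A minimal sketch of the keep-alive rule; the `register_handler` call stands in for a hypothetical library function that stores the callback pointer:

```
# Minimal sketch -- some_library.register_handler is hypothetical.
from ctypes import CFUNCTYPE, c_int

CALLBACK = CFUNCTYPE(None, c_int)

def on_event(code):
    print("event", code)

# Keep the wrapped callback referenced (here at module level) for as long as
# the C code may call it; if it is garbage collected first, the next callback
# invocation will crash the process.
_on_event = CALLBACK(on_event)
# some_library.register_handler(_on_event)
```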
### Accessing values exported from dlls

Some shared libraries not only export functions, they also export variables. An example in the Python library itself is the [`Py_OptimizeFlag`](../c-api/init#c.Py_OptimizeFlag "Py_OptimizeFlag"), an integer set to 0, 1, or 2, depending on the [`-O`](../using/cmdline#cmdoption-o) or [`-OO`](../using/cmdline#cmdoption-oo) flag given on startup.

[`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") can access values like this with the `in_dll()` class method of the type. *pythonapi* is a predefined symbol giving access to the Python C API:

```
>>> opt_flag = c_int.in_dll(pythonapi, "Py_OptimizeFlag")
>>> print(opt_flag)
c_long(0)
>>>
```

If the interpreter had been started with [`-O`](../using/cmdline#cmdoption-o), the sample would have printed `c_long(1)`, or `c_long(2)` if [`-OO`](../using/cmdline#cmdoption-oo) had been specified.

An extended example which also demonstrates the use of pointers accesses the [`PyImport_FrozenModules`](../c-api/import#c.PyImport_FrozenModules "PyImport_FrozenModules") pointer exported by Python.

Quoting the docs for that value:

This pointer is initialized to point to an array of `struct _frozen` records, terminated by one whose members are all `NULL` or zero. When a frozen module is imported, it is searched in this table. Third-party code could play tricks with this to provide a dynamically created collection of frozen modules.

So manipulating this pointer could even prove useful. To restrict the example size, we show only how this table can be read with [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python."):

```
>>> from ctypes import *
>>>
>>> class struct_frozen(Structure):
...     _fields_ = [("name", c_char_p),
...                 ("code", POINTER(c_ubyte)),
...                 ("size", c_int)]
...
>>>
```

We have defined the `struct _frozen` data type, so we can get the pointer to the table:

```
>>> FrozenTable = POINTER(struct_frozen)
>>> table = FrozenTable.in_dll(pythonapi, "PyImport_FrozenModules")
>>>
```

Since `table` is a *pointer* to the array of `struct_frozen` records, we can iterate over it, but we have to make sure that our loop terminates, because pointers have no size. Sooner or later it would probably crash with an access violation or whatever, so it's better to break out of the loop when we hit the `NULL` entry:

```
>>> for item in table:
...     if item.name is None:
...         break
...     print(item.name.decode("ascii"), item.size)
...
_frozen_importlib 31764
_frozen_importlib_external 41499
__hello__ 161
__phello__ -161
__phello__.spam 161
>>>
```

The fact that standard Python has a frozen module and a frozen package (indicated by the negative `size` member) is not well known; it is only used for testing. Try it out with `import __hello__` for example.

### Surprises

There are some edges in [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") where you might expect something other than what actually happens.

Consider the following example:

```
>>> from ctypes import *
>>> class POINT(Structure):
...     _fields_ = ("x", c_int), ("y", c_int)
...
>>> class RECT(Structure):
...     _fields_ = ("a", POINT), ("b", POINT)
...
>>> p1 = POINT(1, 2)
>>> p2 = POINT(3, 4)
>>> rc = RECT(p1, p2)
>>> print(rc.a.x, rc.a.y, rc.b.x, rc.b.y)
1 2 3 4
>>> # now swap the two points
>>> rc.a, rc.b = rc.b, rc.a
>>> print(rc.a.x, rc.a.y, rc.b.x, rc.b.y)
3 4 3 4
>>>
```

Hm. We certainly expected the last statement to print `3 4 1 2`. What happened? Here are the steps of the `rc.a, rc.b = rc.b, rc.a` line above:

```
>>> temp0, temp1 = rc.b, rc.a
>>> rc.a = temp0
>>> rc.b = temp1
>>>
```

Note that `temp0` and `temp1` are objects still using the internal buffer of the `rc` object above. So executing `rc.a = temp0` copies the buffer contents of `temp0` into `rc`'s buffer. This, in turn, changes the contents of `temp1`. So, the last assignment `rc.b = temp1` doesn't have the expected effect.

Keep in mind that retrieving sub-objects from Structures, Unions, and Arrays doesn't *copy* the sub-object; instead it retrieves a wrapper object accessing the root object's underlying buffer.

Another example that may behave differently from what one would expect is this:

```
>>> s = c_char_p()
>>> s.value = b"abc def ghi"
>>> s.value
b'abc def ghi'
>>> s.value is s.value
False
>>>
```

Note

Objects instantiated from [`c_char_p`](#ctypes.c_char_p "ctypes.c_char_p") can only have their value set to bytes or integers.

Why is it printing `False`? ctypes instances are objects containing a memory block plus some [descriptor](../glossary#term-descriptor)s accessing the contents of the memory. Storing a Python object in the memory block does not store the object itself; instead, the *contents* of the object are stored. Accessing the contents again constructs a new Python object each time!
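Returning to the swap example above, a minimal sketch of one way to get the intended `3 4 1 2` result is to copy the field values into fresh `POINT` instances first, so that neither temporary aliases `rc`'s buffer:

```
# Minimal sketch: rebuild the RECT from the Surprises example, then swap via
# independent copies so neither temporary is a live view into rc's buffer.
from ctypes import Structure, c_int

class POINT(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

class RECT(Structure):
    _fields_ = [("a", POINT), ("b", POINT)]

rc = RECT(POINT(1, 2), POINT(3, 4))
tmp_a = POINT(rc.a.x, rc.a.y)   # real copies, not wrappers over rc's buffer
tmp_b = POINT(rc.b.x, rc.b.y)
rc.a, rc.b = tmp_b, tmp_a
print(rc.a.x, rc.a.y, rc.b.x, rc.b.y)  # 3 4 1 2
```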
### Variable-sized data types

[`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") provides some support for variable-sized arrays and structures.

The [`resize()`](#ctypes.resize "ctypes.resize") function can be used to resize the memory buffer of an existing ctypes object. The function takes the object as first argument, and the requested size in bytes as the second argument. The memory block cannot be made smaller than the natural memory block specified by the object's type; a [`ValueError`](exceptions#ValueError "ValueError") is raised if this is tried:

```
>>> short_array = (c_short * 4)()
>>> print(sizeof(short_array))
8
>>> resize(short_array, 4)
Traceback (most recent call last):
    ...
ValueError: minimum size is 8
>>> resize(short_array, 32)
>>> sizeof(short_array)
32
>>> sizeof(type(short_array))
8
>>>
```

This is nice and fine, but how would one access the additional elements contained in this array? Since the type still only knows about 4 elements, we get errors accessing other elements:

```
>>> short_array[:]
[0, 0, 0, 0]
>>> short_array[7]
Traceback (most recent call last):
    ...
IndexError: invalid index
>>>
```

Another way to use variable-sized data types with [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") is to use the dynamic nature of Python, and (re-)define the data type after the required size is already known, on a case-by-case basis.

ctypes reference
----------------

### Finding shared libraries

When programming in a compiled language, shared libraries are accessed when compiling/linking a program, and when the program is run.

The purpose of the `find_library()` function is to locate a library in a way similar to what the compiler or runtime loader does (on platforms with several versions of a shared library the most recent should be loaded), while the ctypes library loaders act as they do when a program is run, and call the runtime loader directly.

The `ctypes.util` module provides a function which can help to determine the library to load.

`ctypes.util.find_library(name)`

Try to find a library and return a pathname. *name* is the library name without any prefix like *lib*, suffix like `.so`, `.dylib` or version number (this is the form used for the posix linker option `-l`). If no library can be found, returns `None`.

The exact functionality is system dependent.

On Linux, `find_library()` tries to run external programs (`/sbin/ldconfig`, `gcc`, `objdump` and `ld`) to find the library file. It returns the filename of the library file.

Changed in version 3.6: On Linux, the value of the environment variable `LD_LIBRARY_PATH` is used when searching for libraries, if a library cannot be found by any other means.

Here are some examples:

```
>>> from ctypes.util import find_library
>>> find_library("m")
'libm.so.6'
>>> find_library("c")
'libc.so.6'
>>> find_library("bz2")
'libbz2.so.1.0'
>>>
```

On macOS, `find_library()` tries several predefined naming schemes and paths to locate the library, and returns a full pathname if successful:

```
>>> from ctypes.util import find_library
>>> find_library("c")
'/usr/lib/libc.dylib'
>>> find_library("m")
'/usr/lib/libm.dylib'
>>> find_library("bz2")
'/usr/lib/libbz2.dylib'
>>> find_library("AGL")
'/System/Library/Frameworks/AGL.framework/AGL'
>>>
```

On Windows, `find_library()` searches along the system search path, and returns the full pathname, but since there is no predefined naming scheme a call like `find_library("c")` will fail and return `None`.
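Putting the lookup and the load together, a minimal sketch (the hardcoded fallback name is an assumption for Linux-like systems):

```
# Minimal sketch: locate the math library portably, with a hardcoded fallback.
from ctypes import CDLL
from ctypes.util import find_library

name = find_library("m")       # e.g. 'libm.so.6' on Linux, a full path on macOS
libm = CDLL(name if name else "libm.so.6")  # fallback name is an assumption
```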
If wrapping a shared library with [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python."), it *may* be better to determine the shared library name at development time, and hardcode that into the wrapper module instead of using `find_library()` to locate the library at runtime.

### Loading shared libraries

There are several ways to load shared libraries into the Python process. One way is to instantiate one of the following classes:

`class ctypes.CDLL(name, mode=DEFAULT_MODE, handle=None, use_errno=False, use_last_error=False, winmode=None)`

Instances of this class represent loaded shared libraries. Functions in these libraries use the standard C calling convention, and are assumed to return `int`.

On Windows creating a [`CDLL`](#ctypes.CDLL "ctypes.CDLL") instance may fail even if the DLL name exists. When a dependent DLL of the loaded DLL is not found, an [`OSError`](exceptions#OSError "OSError") is raised with the message *"[WinError 126] The specified module could not be found".* This error message does not contain the name of the missing DLL, because the Windows API does not return this information, which makes the error hard to diagnose. To resolve the error and determine which DLL is not found, you need to find the list of dependent DLLs and determine which one is not found using Windows debugging and tracing tools.

See also

[Microsoft DUMPBIN tool](https://docs.microsoft.com/cpp/build/reference/dependents) – A tool to find DLL dependents.

`class ctypes.OleDLL(name, mode=DEFAULT_MODE, handle=None, use_errno=False, use_last_error=False, winmode=None)`

Windows only: Instances of this class represent loaded shared libraries; functions in these libraries use the `stdcall` calling convention, and are assumed to return the Windows-specific [`HRESULT`](#ctypes.HRESULT "ctypes.HRESULT") code. [`HRESULT`](#ctypes.HRESULT "ctypes.HRESULT") values contain information specifying whether the function call failed or succeeded, together with an additional error code. If the return value signals a failure, an [`OSError`](exceptions#OSError "OSError") is automatically raised.

Changed in version 3.3: [`WindowsError`](exceptions#WindowsError "WindowsError") used to be raised.

`class ctypes.WinDLL(name, mode=DEFAULT_MODE, handle=None, use_errno=False, use_last_error=False, winmode=None)`

Windows only: Instances of this class represent loaded shared libraries; functions in these libraries use the `stdcall` calling convention, and are assumed to return `int` by default.

The Python [global interpreter lock](../glossary#term-global-interpreter-lock) is released before calling any function exported by these libraries, and reacquired afterwards.

`class ctypes.PyDLL(name, mode=DEFAULT_MODE, handle=None)`

Instances of this class behave like [`CDLL`](#ctypes.CDLL "ctypes.CDLL") instances, except that the Python GIL is *not* released during the function call, and after the function execution the Python error flag is checked. If the error flag is set, a Python exception is raised.

Thus, this is only useful to call Python C API functions directly.

All these classes can be instantiated by calling them with at least one argument, the pathname of the shared library. If you have an existing handle to an already loaded shared library, it can be passed as the `handle` named parameter; otherwise the underlying platform's `dlopen` or `LoadLibrary` function is used to load the library into the process, and to get a handle to it.

The *mode* parameter can be used to specify how the library is loaded.
For details, consult the *[dlopen(3)](https://manpages.debian.org/dlopen(3))* manpage. On Windows, *mode* is ignored. On posix systems, RTLD\_NOW is always added, and is not configurable.

The *use\_errno* parameter, when set to true, enables a ctypes mechanism that allows accessing the system [`errno`](errno#module-errno "errno: Standard errno system symbols.") error number in a safe way. [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") maintains a thread-local copy of the system's [`errno`](errno#module-errno "errno: Standard errno system symbols.") variable; if you call foreign functions created with `use_errno=True`, then the [`errno`](errno#module-errno "errno: Standard errno system symbols.") value before the function call is swapped with the ctypes private copy, and the same happens immediately after the function call.

The function [`ctypes.get_errno()`](#ctypes.get_errno "ctypes.get_errno") returns the value of the ctypes private copy, and the function [`ctypes.set_errno()`](#ctypes.set_errno "ctypes.set_errno") changes the ctypes private copy to a new value and returns the former value.

The *use\_last\_error* parameter, when set to true, enables the same mechanism for the Windows error code which is managed by the [`GetLastError()`](#ctypes.GetLastError "ctypes.GetLastError") and `SetLastError()` Windows API functions; [`ctypes.get_last_error()`](#ctypes.get_last_error "ctypes.get_last_error") and [`ctypes.set_last_error()`](#ctypes.set_last_error "ctypes.set_last_error") are used to request and change the ctypes private copy of the Windows error code.

The *winmode* parameter is used on Windows to specify how the library is loaded (since *mode* is ignored). It takes any value that is valid for the Win32 API `LoadLibraryEx` flags parameter. When omitted, the default is to use the flags that result in the most secure DLL load, avoiding issues such as DLL hijacking. Passing the full path to the DLL is the safest way to ensure the correct library and dependencies are loaded.

Changed in version 3.8: Added *winmode* parameter.

`ctypes.RTLD_GLOBAL`

Flag to use as *mode* parameter. On platforms where this flag is not available, it is defined as the integer zero.

`ctypes.RTLD_LOCAL`

Flag to use as *mode* parameter. On platforms where this is not available, it is the same as *RTLD\_GLOBAL*.

`ctypes.DEFAULT_MODE`

The default mode which is used to load shared libraries. On OSX 10.3, this is *RTLD\_GLOBAL*, otherwise it is the same as *RTLD\_LOCAL*.

Instances of these classes have no public methods. Functions exported by the shared library can be accessed as attributes or by index. Please note that accessing the function through an attribute caches the result and therefore accessing it repeatedly returns the same object each time. On the other hand, accessing it through an index returns a new object each time:

```
>>> from ctypes import CDLL
>>> libc = CDLL("libc.so.6")  # On Linux
>>> libc.time == libc.time
True
>>> libc['time'] == libc['time']
False
```

The following public attributes are available; their names start with an underscore so as not to clash with exported function names:

`PyDLL._handle`

The system handle used to access the library.

`PyDLL._name`

The name of the library passed in the constructor.
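The errno machinery described above can be exercised with a minimal sketch (assuming a Unix-like C library that can be located by name):

```
# Minimal sketch -- assumes a Unix-like libc that find_library can locate.
import ctypes, errno, os
from ctypes.util import find_library

libc = ctypes.CDLL(find_library("c"), use_errno=True)
libc.fopen.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
libc.fopen.restype = ctypes.c_void_p

handle = libc.fopen(b"/no/such/file", b"r")
if not handle:                              # fopen returned NULL
    err = ctypes.get_errno()                # the ctypes-private errno copy
    print(err == errno.ENOENT, os.strerror(err))
```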
Shared libraries can also be loaded by using one of the prefabricated objects, which are instances of the [`LibraryLoader`](#ctypes.LibraryLoader "ctypes.LibraryLoader") class, either by calling the `LoadLibrary()` method, or by retrieving the library as an attribute of the loader instance.

`class ctypes.LibraryLoader(dlltype)`

Class which loads shared libraries. *dlltype* should be one of the [`CDLL`](#ctypes.CDLL "ctypes.CDLL"), [`PyDLL`](#ctypes.PyDLL "ctypes.PyDLL"), [`WinDLL`](#ctypes.WinDLL "ctypes.WinDLL"), or [`OleDLL`](#ctypes.OleDLL "ctypes.OleDLL") types.

[`__getattr__()`](../reference/datamodel#object.__getattr__ "object.__getattr__") has special behavior: It allows loading a shared library by accessing it as an attribute of a library loader instance. The result is cached, so repeated attribute accesses return the same library each time.

`LoadLibrary(name)`

Load a shared library into the process and return it. This method always returns a new instance of the library.

These prefabricated library loaders are available:

`ctypes.cdll`

Creates [`CDLL`](#ctypes.CDLL "ctypes.CDLL") instances.

`ctypes.windll`

Windows only: Creates [`WinDLL`](#ctypes.WinDLL "ctypes.WinDLL") instances.

`ctypes.oledll`

Windows only: Creates [`OleDLL`](#ctypes.OleDLL "ctypes.OleDLL") instances.

`ctypes.pydll`

Creates [`PyDLL`](#ctypes.PyDLL "ctypes.PyDLL") instances.

For accessing the Python C API directly, a ready-to-use Python shared library object is available:

`ctypes.pythonapi`

An instance of [`PyDLL`](#ctypes.PyDLL "ctypes.PyDLL") that exposes Python C API functions as attributes. Note that all these functions are assumed to return C `int`, which is of course not always the case, so you have to assign the correct `restype` attribute to use these functions.

Loading a library through any of these objects raises an [auditing event](sys#auditing) `ctypes.dlopen` with string argument `name`, the name used to load the library.

Accessing a function on a loaded library raises an auditing event `ctypes.dlsym` with arguments `library` (the library object) and `name` (the symbol's name as a string or integer).

In cases when only the library handle is available rather than the object, accessing a function raises an auditing event `ctypes.dlsym/handle` with arguments `handle` (the raw library handle) and `name`.

### Foreign functions

As explained in the previous section, foreign functions can be accessed as attributes of loaded shared libraries. The function objects created in this way by default accept any number of arguments, accept any ctypes data instances as arguments, and return the default result type specified by the library loader. They are instances of a private class:

`class ctypes._FuncPtr`

Base class for C callable foreign functions.

Instances of foreign functions are also C compatible data types; they represent C function pointers.

This behavior can be customized by assigning to special attributes of the foreign function object.

`restype`

Assign a ctypes type to specify the result type of the foreign function. Use `None` for `void`, a function not returning anything.

It is possible to assign a callable Python object that is not a ctypes type; in this case the function is assumed to return a C `int`, and the callable will be called with this integer, allowing further processing or error checking.
Using this is deprecated; for more flexible post-processing or error checking, use a ctypes data type as [`restype`](#ctypes._FuncPtr.restype "ctypes._FuncPtr.restype") and assign a callable to the [`errcheck`](#ctypes._FuncPtr.errcheck "ctypes._FuncPtr.errcheck") attribute.

`argtypes`

Assign a tuple of ctypes types to specify the argument types that the function accepts. Functions using the `stdcall` calling convention can only be called with the same number of arguments as the length of this tuple; functions using the C calling convention accept additional, unspecified arguments as well.

When a foreign function is called, each actual argument is passed to the `from_param()` class method of the items in the [`argtypes`](#ctypes._FuncPtr.argtypes "ctypes._FuncPtr.argtypes") tuple; this method allows adapting the actual argument to an object that the foreign function accepts. For example, a [`c_char_p`](#ctypes.c_char_p "ctypes.c_char_p") item in the [`argtypes`](#ctypes._FuncPtr.argtypes "ctypes._FuncPtr.argtypes") tuple will convert a string passed as argument into a bytes object using ctypes conversion rules.

It is also possible to put items in argtypes which are not ctypes types, but each item must have a `from_param()` method which returns a value usable as argument (integer, string, ctypes instance). This allows defining adapters that can adapt custom objects as function parameters.

`errcheck`

Assign a Python function or another callable to this attribute. The callable will be called with three or more arguments:

`callable(result, func, arguments)`

*result* is what the foreign function returns, as specified by the `restype` attribute.

*func* is the foreign function object itself; this allows reusing the same callable object to check or post-process the results of several functions.

*arguments* is a tuple containing the parameters originally passed to the function call; this allows specializing the behavior on the arguments used.

The object that this function returns will be returned from the foreign function call, but it can also check the result value and raise an exception if the foreign function call failed.

`exception ctypes.ArgumentError`

This exception is raised when a foreign function call cannot convert one of the passed arguments.

On Windows, when a foreign function call raises a system exception (for example, due to an access violation), it will be captured and replaced with a suitable Python exception. Further, an auditing event `ctypes.seh_exception` with argument `code` will be raised, allowing an audit hook to replace the exception with its own.

Some ways to invoke foreign function calls may raise an auditing event `ctypes.call_function` with arguments `function pointer` and `arguments`.
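A minimal sketch of the errcheck protocol, reusing the `strchr` function from the tutorial and assuming a locatable C library:

```
# Minimal sketch -- assumes find_library can locate the C library.  With a
# c_char_p restype, a NULL result from strchr is seen here as None.
import ctypes
from ctypes.util import find_library

libc = ctypes.CDLL(find_library("c"))
libc.strchr.restype = ctypes.c_char_p
libc.strchr.argtypes = [ctypes.c_char_p, ctypes.c_int]

def not_found_check(result, func, arguments):
    if result is None:
        raise ValueError("character not found in %r" % (arguments[0],))
    return result

libc.strchr.errcheck = not_found_check
print(libc.strchr(b"abcdef", ord("d")))  # b'def'
```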
### Function prototypes

Foreign functions can also be created by instantiating function prototypes. Function prototypes are similar to function prototypes in C; they describe a function (return type, argument types, calling convention) without defining an implementation. The factory functions must be called with the desired result type and the argument types of the function, and can be used as decorator factories and, as such, be applied to functions through the `@wrapper` syntax. See [Callback functions](#ctypes-callback-functions) for examples.

`ctypes.CFUNCTYPE(restype, *argtypes, use_errno=False, use_last_error=False)`

The returned function prototype creates functions that use the standard C calling convention. The function will release the GIL during the call. If *use\_errno* is set to true, the ctypes private copy of the system [`errno`](errno#module-errno "errno: Standard errno system symbols.") variable is exchanged with the real [`errno`](errno#module-errno "errno: Standard errno system symbols.") value before and after the call; *use\_last\_error* does the same for the Windows error code.

`ctypes.WINFUNCTYPE(restype, *argtypes, use_errno=False, use_last_error=False)`

Windows only: The returned function prototype creates functions that use the `stdcall` calling convention. The function will release the GIL during the call. *use\_errno* and *use\_last\_error* have the same meaning as above.

`ctypes.PYFUNCTYPE(restype, *argtypes)`

The returned function prototype creates functions that use the Python calling convention. The function will *not* release the GIL during the call.

Function prototypes created by these factory functions can be instantiated in different ways, depending on the type and number of the parameters in the call:

`prototype(address)`

Returns a foreign function at the specified address, which must be an integer.

`prototype(callable)`

Create a C callable function (a callback function) from a Python *callable*.

`prototype(func_spec[, paramflags])`

Returns a foreign function exported by a shared library. *func\_spec* must be a 2-tuple `(name_or_ordinal, library)`. The first item is the name of the exported function as string, or the ordinal of the exported function as small integer. The second item is the shared library instance.

`prototype(vtbl_index, name[, paramflags[, iid]])`

Returns a foreign function that will call a COM method. *vtbl\_index* is the index into the virtual function table, a small non-negative integer. *name* is the name of the COM method. *iid* is an optional pointer to the interface identifier which is used in extended error reporting.

COM methods use a special calling convention: They require a pointer to the COM interface as first argument, in addition to those parameters that are specified in the `argtypes` tuple.

The optional *paramflags* parameter creates foreign function wrappers with much more functionality than the features described above.

*paramflags* must be a tuple of the same length as `argtypes`.

Each item in this tuple contains further information about a parameter; it must be a tuple containing one, two, or three items.

The first item is an integer containing a combination of direction flags for the parameter:

1

Specifies an input parameter to the function.

2

Output parameter. The foreign function fills in a value.

4

Input parameter which defaults to the integer zero.

The optional second item is the parameter name as string. If this is specified, the foreign function can be called with named parameters.

The optional third item is the default value for this parameter.

This example demonstrates how to wrap the Windows `MessageBoxW` function so that it supports default parameters and named arguments.
The C declaration from the Windows header file is this:

```
WINUSERAPI int WINAPI
MessageBoxW(
    HWND hWnd,
    LPCWSTR lpText,
    LPCWSTR lpCaption,
    UINT uType);
```

Here is the wrapping with [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python."):

```
>>> from ctypes import c_int, WINFUNCTYPE, windll
>>> from ctypes.wintypes import HWND, LPCWSTR, UINT
>>> prototype = WINFUNCTYPE(c_int, HWND, LPCWSTR, LPCWSTR, UINT)
>>> paramflags = (1, "hwnd", 0), (1, "text", "Hi"), (1, "caption", "Hello from ctypes"), (1, "flags", 0)
>>> MessageBox = prototype(("MessageBoxW", windll.user32), paramflags)
```

The `MessageBox` foreign function can now be called in these ways:

```
>>> MessageBox()
>>> MessageBox(text="Spam, spam, spam")
>>> MessageBox(flags=2, text="foo bar")
```

A second example demonstrates output parameters. The win32 `GetWindowRect` function retrieves the dimensions of a specified window by copying them into a `RECT` structure that the caller has to supply. Here is the C declaration:

```
WINUSERAPI BOOL WINAPI
GetWindowRect(
    HWND hWnd,
    LPRECT lpRect);
```

Here is the wrapping with [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python."):

```
>>> from ctypes import POINTER, WINFUNCTYPE, windll, WinError
>>> from ctypes.wintypes import BOOL, HWND, RECT
>>> prototype = WINFUNCTYPE(BOOL, HWND, POINTER(RECT))
>>> paramflags = (1, "hwnd"), (2, "lprect")
>>> GetWindowRect = prototype(("GetWindowRect", windll.user32), paramflags)
>>>
```

Functions with output parameters will automatically return the output parameter value if there is a single one, or a tuple containing the output parameter values when there are more than one, so the GetWindowRect function now returns a RECT instance when called.

Output parameters can be combined with the `errcheck` protocol to do further output processing and error checking. The win32 `GetWindowRect` API function returns a `BOOL` to signal success or failure, so this function can do the error checking and raise an exception when the API call fails:

```
>>> def errcheck(result, func, args):
...     if not result:
...         raise WinError()
...     return args
...
>>> GetWindowRect.errcheck = errcheck
>>>
```

If the `errcheck` function returns the argument tuple it receives unchanged, [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") continues the normal processing it does on the output parameters. If you want to return a tuple of window coordinates instead of a `RECT` instance, you can retrieve the fields in the function and return them instead; the normal processing will then no longer take place:

```
>>> def errcheck(result, func, args):
...     if not result:
...         raise WinError()
...     rc = args[1]
...     return rc.left, rc.top, rc.bottom, rc.right
...
>>> GetWindowRect.errcheck = errcheck
>>>
```

### Utility functions

`ctypes.addressof(obj)`

Returns the address of the memory buffer as integer. *obj* must be an instance of a ctypes type.

Raises an [auditing event](sys#auditing) `ctypes.addressof` with argument `obj`.

`ctypes.alignment(obj_or_type)`

Returns the alignment requirements of a ctypes type. *obj\_or\_type* must be a ctypes type or instance.
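A minimal sketch of these two helpers (the printed alignment is typical for common platforms, not guaranteed):

```
# Minimal sketch: addresses and alignments are platform dependent.
import ctypes

x = ctypes.c_double(1.5)
print(hex(ctypes.addressof(x)))           # the buffer's memory address
print(ctypes.alignment(ctypes.c_double))  # typically 8
print(ctypes.alignment(x))                # instances are accepted as well
```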
`ctypes.byref(obj[, offset])`

Returns a light-weight pointer to *obj*, which must be an instance of a ctypes type. *offset* defaults to zero, and must be an integer that will be added to the internal pointer value.

`byref(obj, offset)` corresponds to this C code:

```
(((char *)&obj) + offset)
```

The returned object can only be used as a foreign function call parameter. It behaves similarly to `pointer(obj)`, but the construction is a lot faster.

`ctypes.cast(obj, type)`

This function is similar to the cast operator in C. It returns a new instance of *type* which points to the same memory block as *obj*. *type* must be a pointer type, and *obj* must be an object that can be interpreted as a pointer.

`ctypes.create_string_buffer(init_or_size, size=None)`

This function creates a mutable character buffer. The returned object is a ctypes array of [`c_char`](#ctypes.c_char "ctypes.c_char").

*init\_or\_size* must be an integer which specifies the size of the array, or a bytes object which will be used to initialize the array items.

If a bytes object is specified as first argument, the buffer is made one item larger than its length so that the last element in the array is a NUL termination character. An integer can be passed as the second argument, which allows specifying the size of the array if the length of the bytes should not be used.

Raises an [auditing event](sys#auditing) `ctypes.create_string_buffer` with arguments `init`, `size`.

`ctypes.create_unicode_buffer(init_or_size, size=None)`

This function creates a mutable unicode character buffer. The returned object is a ctypes array of [`c_wchar`](#ctypes.c_wchar "ctypes.c_wchar").

*init\_or\_size* must be an integer which specifies the size of the array, or a string which will be used to initialize the array items.

If a string is specified as first argument, the buffer is made one item larger than the length of the string so that the last element in the array is a NUL termination character. An integer can be passed as the second argument, which allows specifying the size of the array if the length of the string should not be used.

Raises an [auditing event](sys#auditing) `ctypes.create_unicode_buffer` with arguments `init`, `size`.

`ctypes.DllCanUnloadNow()`

Windows only: This function is a hook which allows implementing in-process COM servers with ctypes. It is called from the DllCanUnloadNow function that the `_ctypes` extension dll exports.

`ctypes.DllGetClassObject()`

Windows only: This function is a hook which allows implementing in-process COM servers with ctypes. It is called from the DllGetClassObject function that the `_ctypes` extension dll exports.

`ctypes.util.find_library(name)`

Try to find a library and return a pathname. *name* is the library name without any prefix like `lib`, suffix like `.so`, `.dylib` or version number (this is the form used for the posix linker option `-l`). If no library can be found, returns `None`.

The exact functionality is system dependent.

`ctypes.util.find_msvcrt()`

Windows only: return the filename of the VC runtime library used by Python, and by the extension modules. If the name of the library cannot be determined, `None` is returned.

If you need to free memory that was, for example, allocated by an extension module with a call to `free(void *)`, it is important that you use the `free` function in the same library that allocated the memory.

`ctypes.FormatError([code])`

Windows only: Returns a textual description of the error code *code*. If no error code is specified, the last error code is used by calling the Windows API function `GetLastError`.

`ctypes.GetLastError()`

Windows only: Returns the last error code set by Windows in the calling thread. This function calls the Windows `GetLastError()` function directly; it does not return the ctypes-private copy of the error code.
`ctypes.get_errno()` Returns the current value of the ctypes-private copy of the system [`errno`](errno#module-errno "errno: Standard errno system symbols.") variable in the calling thread. Raises an [auditing event](sys#auditing) `ctypes.get_errno` with no arguments. `ctypes.get_last_error()` Windows only: returns the current value of the ctypes-private copy of the system `LastError` variable in the calling thread. Raises an [auditing event](sys#auditing) `ctypes.get_last_error` with no arguments. `ctypes.memmove(dst, src, count)` Same as the standard C memmove library function: copies *count* bytes from *src* to *dst*. *dst* and *src* must be integers or ctypes instances that can be converted to pointers. `ctypes.memset(dst, c, count)` Same as the standard C memset library function: fills the memory block at address *dst* with *count* bytes of value *c*. *dst* must be an integer specifying an address, or a ctypes instance. `ctypes.POINTER(type)` This factory function creates and returns a new ctypes pointer type. Pointer types are cached and reused internally, so calling this function repeatedly is cheap. *type* must be a ctypes type. `ctypes.pointer(obj)` This function creates a new pointer instance, pointing to *obj*. The returned object is of the type `POINTER(type(obj))`. Note: If you just want to pass a pointer to an object to a foreign function call, you should use `byref(obj)`, which is much faster. `ctypes.resize(obj, size)` This function resizes the internal memory buffer of *obj*, which must be an instance of a ctypes type. It is not possible to make the buffer smaller than the native size of the object's type, as given by `sizeof(type(obj))`, but it is possible to enlarge the buffer. `ctypes.set_errno(value)` Set the current value of the ctypes-private copy of the system [`errno`](errno#module-errno "errno: Standard errno system symbols.") variable in the calling thread to *value* and return the previous value. Raises an [auditing event](sys#auditing) `ctypes.set_errno` with argument `errno`. `ctypes.set_last_error(value)` Windows only: set the current value of the ctypes-private copy of the system `LastError` variable in the calling thread to *value* and return the previous value. Raises an [auditing event](sys#auditing) `ctypes.set_last_error` with argument `error`. `ctypes.sizeof(obj_or_type)` Returns the size in bytes of a ctypes type or instance memory buffer. Does the same as the C `sizeof` operator. `ctypes.string_at(address, size=-1)` This function returns the C string starting at memory address *address* as a bytes object. If *size* is specified, it is used as the size; otherwise the string is assumed to be zero-terminated. Raises an [auditing event](sys#auditing) `ctypes.string_at` with arguments `address`, `size`. `ctypes.WinError(code=None, descr=None)` Windows only: this function is probably the worst-named thing in ctypes. It creates an instance of OSError. If *code* is not specified, `GetLastError` is called to determine the error code. If *descr* is not specified, [`FormatError()`](#ctypes.FormatError "ctypes.FormatError") is called to get a textual description of the error. Changed in version 3.3: An instance of [`WindowsError`](exceptions#WindowsError "WindowsError") used to be created. `ctypes.wstring_at(address, size=-1)` This function returns the wide character string starting at memory address *address* as a string. If *size* is specified, it is used as the number of characters of the string; otherwise the string is assumed to be zero-terminated.
Raises an [auditing event](sys#auditing) `ctypes.wstring_at` with arguments `address`, `size`. ### Data types `class ctypes._CData` This non-public class is the common base class of all ctypes data types. Among other things, all ctypes type instances contain a memory block that holds C-compatible data; the address of the memory block is returned by the [`addressof()`](#ctypes.addressof "ctypes.addressof") helper function. Another instance variable is exposed as [`_objects`](#ctypes._CData._objects "ctypes._CData._objects"); this contains other Python objects that need to be kept alive in case the memory block contains pointers. Common methods of ctypes data types; these are all class methods (to be exact, they are methods of the [metaclass](../glossary#term-metaclass)): `from_buffer(source[, offset])` This method returns a ctypes instance that shares the buffer of the *source* object. The *source* object must support the writeable buffer interface. The optional *offset* parameter specifies an offset into the source buffer in bytes; the default is zero. If the source buffer is not large enough a [`ValueError`](exceptions#ValueError "ValueError") is raised. Raises an [auditing event](sys#auditing) `ctypes.cdata/buffer` with arguments `pointer`, `size`, `offset`. `from_buffer_copy(source[, offset])` This method creates a ctypes instance, copying the buffer from the *source* object buffer which must be readable. The optional *offset* parameter specifies an offset into the source buffer in bytes; the default is zero. If the source buffer is not large enough a [`ValueError`](exceptions#ValueError "ValueError") is raised. Raises an [auditing event](sys#auditing) `ctypes.cdata/buffer` with arguments `pointer`, `size`, `offset`. `from_address(address)` This method returns a ctypes type instance using the memory specified by *address*, which must be an integer. This method, and others that indirectly call this method, raise an [auditing event](sys#auditing) `ctypes.cdata` with argument `address`. `from_param(obj)` This method adapts *obj* to a ctypes type. It is called with the actual object used in a foreign function call when the type is present in the foreign function’s `argtypes` tuple; it must return an object that can be used as a function call parameter. All ctypes data types have a default implementation of this classmethod that normally returns *obj* if that is an instance of the type. Some types accept other objects as well. `in_dll(library, name)` This method returns a ctypes type instance exported by a shared library. *name* is the name of the symbol that exports the data, *library* is the loaded shared library. Common instance variables of ctypes data types: `_b_base_` Sometimes ctypes data instances do not own the memory block they contain; instead they share part of the memory block of a base object. The [`_b_base_`](#ctypes._CData._b_base_ "ctypes._CData._b_base_") read-only member is the root ctypes object that owns the memory block. `_b_needsfree_` This read-only variable is true when the ctypes data instance has allocated the memory block itself, false otherwise. `_objects` This member is either `None` or a dictionary containing Python objects that need to be kept alive so that the memory block contents are kept valid. This object is only exposed for debugging; never modify the contents of this dictionary. ### Fundamental data types `class ctypes._SimpleCData` This non-public class is the base class of all fundamental ctypes data types.
It is mentioned here because it contains the common attributes of the fundamental ctypes data types. [`_SimpleCData`](#ctypes._SimpleCData "ctypes._SimpleCData") is a subclass of [`_CData`](#ctypes._CData "ctypes._CData"), so it inherits its methods and attributes. ctypes data types that are not pointers and do not contain pointers can now be pickled. Instances have a single attribute: `value` This attribute contains the actual value of the instance. For integer and pointer types, it is an integer; for character types, it is a single character bytes object or string; for character pointer types it is a Python bytes object or string. When the `value` attribute is retrieved from a ctypes instance, usually a new object is returned each time. [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") does *not* implement original object return; a new object is always constructed. The same is true for all other ctypes object instances. Fundamental data types, when returned as foreign function call results, or, for example, by retrieving structure field members or array items, are transparently converted to native Python types. In other words, if a foreign function has a `restype` of [`c_char_p`](#ctypes.c_char_p "ctypes.c_char_p"), you will always receive a Python bytes object, *not* a [`c_char_p`](#ctypes.c_char_p "ctypes.c_char_p") instance. Subclasses of fundamental data types do *not* inherit this behavior. So, if a foreign function’s `restype` is a subclass of [`c_void_p`](#ctypes.c_void_p "ctypes.c_void_p"), you will receive an instance of this subclass from the function call. Of course, you can get the value of the pointer by accessing the `value` attribute. These are the fundamental ctypes data types: `class ctypes.c_byte` Represents the C `signed char` datatype, and interprets the value as a small integer. The constructor accepts an optional integer initializer; no overflow checking is done. `class ctypes.c_char` Represents the C `char` datatype, and interprets the value as a single character. The constructor accepts an optional string initializer; the length of the string must be exactly one character. `class ctypes.c_char_p` Represents the C `char *` datatype when it points to a zero-terminated string. For a general character pointer that may also point to binary data, `POINTER(c_char)` must be used. The constructor accepts an integer address, or a bytes object. `class ctypes.c_double` Represents the C `double` datatype. The constructor accepts an optional float initializer. `class ctypes.c_longdouble` Represents the C `long double` datatype. The constructor accepts an optional float initializer. On platforms where `sizeof(long double) == sizeof(double)` it is an alias to [`c_double`](#ctypes.c_double "ctypes.c_double"). `class ctypes.c_float` Represents the C `float` datatype. The constructor accepts an optional float initializer. `class ctypes.c_int` Represents the C `signed int` datatype. The constructor accepts an optional integer initializer; no overflow checking is done. On platforms where `sizeof(int) == sizeof(long)` it is an alias to [`c_long`](#ctypes.c_long "ctypes.c_long"). `class ctypes.c_int8` Represents the C 8-bit `signed int` datatype. Usually an alias for [`c_byte`](#ctypes.c_byte "ctypes.c_byte"). `class ctypes.c_int16` Represents the C 16-bit `signed int` datatype. Usually an alias for [`c_short`](#ctypes.c_short "ctypes.c_short"). `class ctypes.c_int32` Represents the C 32-bit `signed int` datatype.
Usually an alias for [`c_int`](#ctypes.c_int "ctypes.c_int"). `class ctypes.c_int64` Represents the C 64-bit `signed int` datatype. Usually an alias for [`c_longlong`](#ctypes.c_longlong "ctypes.c_longlong"). `class ctypes.c_long` Represents the C `signed long` datatype. The constructor accepts an optional integer initializer; no overflow checking is done. `class ctypes.c_longlong` Represents the C `signed long long` datatype. The constructor accepts an optional integer initializer; no overflow checking is done. `class ctypes.c_short` Represents the C `signed short` datatype. The constructor accepts an optional integer initializer; no overflow checking is done. `class ctypes.c_size_t` Represents the C `size_t` datatype. `class ctypes.c_ssize_t` Represents the C `ssize_t` datatype. New in version 3.2. `class ctypes.c_ubyte` Represents the C `unsigned char` datatype, and interprets the value as a small integer. The constructor accepts an optional integer initializer; no overflow checking is done. `class ctypes.c_uint` Represents the C `unsigned int` datatype. The constructor accepts an optional integer initializer; no overflow checking is done. On platforms where `sizeof(int) == sizeof(long)` it is an alias for [`c_ulong`](#ctypes.c_ulong "ctypes.c_ulong"). `class ctypes.c_uint8` Represents the C 8-bit `unsigned int` datatype. Usually an alias for [`c_ubyte`](#ctypes.c_ubyte "ctypes.c_ubyte"). `class ctypes.c_uint16` Represents the C 16-bit `unsigned int` datatype. Usually an alias for [`c_ushort`](#ctypes.c_ushort "ctypes.c_ushort"). `class ctypes.c_uint32` Represents the C 32-bit `unsigned int` datatype. Usually an alias for [`c_uint`](#ctypes.c_uint "ctypes.c_uint"). `class ctypes.c_uint64` Represents the C 64-bit `unsigned int` datatype. Usually an alias for [`c_ulonglong`](#ctypes.c_ulonglong "ctypes.c_ulonglong"). `class ctypes.c_ulong` Represents the C `unsigned long` datatype. The constructor accepts an optional integer initializer; no overflow checking is done. `class ctypes.c_ulonglong` Represents the C `unsigned long long` datatype. The constructor accepts an optional integer initializer; no overflow checking is done. `class ctypes.c_ushort` Represents the C `unsigned short` datatype. The constructor accepts an optional integer initializer; no overflow checking is done. `class ctypes.c_void_p` Represents the C `void *` type. The value is represented as an integer. The constructor accepts an optional integer initializer. `class ctypes.c_wchar` Represents the C `wchar_t` datatype, and interprets the value as a single character unicode string. The constructor accepts an optional string initializer; the length of the string must be exactly one character. `class ctypes.c_wchar_p` Represents the C `wchar_t *` datatype, which must be a pointer to a zero-terminated wide character string. The constructor accepts an integer address, or a string. `class ctypes.c_bool` Represents the C `bool` datatype (more accurately, `_Bool` from C99). Its value can be `True` or `False`, and the constructor accepts any object that has a truth value. `class ctypes.HRESULT` Windows only: Represents an `HRESULT` value, which contains success or error information for a function or method call. `class ctypes.py_object` Represents the C [`PyObject *`](../c-api/structures#c.PyObject "PyObject") datatype. Calling this without an argument creates a `NULL` [`PyObject *`](../c-api/structures#c.PyObject "PyObject") pointer.
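A short session illustrating the `value` attribute and the transparent conversions described above; the `sizeof` result is platform dependent and the value shown here is only an example:

```
>>> from ctypes import c_int, c_char_p, c_bool, sizeof
>>> i = c_int(42)
>>> i.value
42
>>> i.value = -99                 # no overflow checking is done
>>> s = c_char_p(b"spam")
>>> s.value                       # converted to a native bytes object
b'spam'
>>> c_bool(["non-empty"]).value   # any object with a truth value is accepted
True
>>> sizeof(c_int)                 # platform dependent; 4 on most current systems
4
```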
The `ctypes.wintypes` module provides a number of other Windows-specific data types, for example `HWND`, `WPARAM`, or `DWORD`. Some useful structures like `MSG` or `RECT` are also defined. ### Structured data types `class ctypes.Union(*args, **kw)` Abstract base class for unions in native byte order. `class ctypes.BigEndianStructure(*args, **kw)` Abstract base class for structures in *big endian* byte order. `class ctypes.LittleEndianStructure(*args, **kw)` Abstract base class for structures in *little endian* byte order. Structures with non-native byte order cannot contain pointer type fields, or any other data types containing pointer type fields. `class ctypes.Structure(*args, **kw)` Abstract base class for structures in *native* byte order. Concrete structure and union types must be created by subclassing one of these types, and at least define a [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") class variable. [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") will create [descriptor](../glossary#term-descriptor)s which allow reading and writing the fields by direct attribute accesses. The following class variables are recognized: `_fields_` A sequence defining the structure fields. The items must be 2-tuples or 3-tuples. The first item is the name of the field, the second item specifies the type of the field; it can be any ctypes data type. For integer type fields like [`c_int`](#ctypes.c_int "ctypes.c_int"), a third optional item can be given. It must be a small positive integer defining the bit width of the field. Field names must be unique within one structure or union. This is not checked; when names are repeated, only one of the fields can be accessed. It is possible to define the [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") class variable *after* the class statement that defines the Structure subclass; this allows creating data types that directly or indirectly reference themselves: ``` class List(Structure): pass List._fields_ = [("pnext", POINTER(List)), ... ] ``` The [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") class variable must, however, be defined before the type is first used (an instance is created, [`sizeof()`](#ctypes.sizeof "ctypes.sizeof") is called on it, and so on). Later assignments to the [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") class variable will raise an AttributeError. It is possible to define sub-subclasses of structure types; they inherit the fields of the base class plus the [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") defined in the sub-subclass, if any. `_pack_` An optional small integer that allows overriding the alignment of structure fields in the instance. [`_pack_`](#ctypes.Structure._pack_ "ctypes.Structure._pack_") must already be defined when [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") is assigned, otherwise it will have no effect. `_anonymous_` An optional sequence that lists the names of unnamed (anonymous) fields. [`_anonymous_`](#ctypes.Structure._anonymous_ "ctypes.Structure._anonymous_") must already be defined when [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") is assigned, otherwise it will have no effect. The fields listed in this variable must be structure or union type fields.
[`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") will create descriptors in the structure type that allow accessing the nested fields directly, without the need to create the structure or union field. Here is an example type (Windows): ``` class _U(Union): _fields_ = [("lptdesc", POINTER(TYPEDESC)), ("lpadesc", POINTER(ARRAYDESC)), ("hreftype", HREFTYPE)] class TYPEDESC(Structure): _anonymous_ = ("u",) _fields_ = [("u", _U), ("vt", VARTYPE)] ``` The `TYPEDESC` structure describes a COM data type; the `vt` field specifies which one of the union fields is valid. Since the `u` field is defined as an anonymous field, it is now possible to access the members directly off the TYPEDESC instance. `td.lptdesc` and `td.u.lptdesc` are equivalent, but the former is faster since it does not need to create a temporary union instance: ``` td = TYPEDESC() td.vt = VT_PTR td.lptdesc = POINTER(some_type) td.u.lptdesc = POINTER(some_type) ``` It is possible to define sub-subclasses of structures; they inherit the fields of the base class. If the subclass definition has a separate [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") variable, the fields specified in this are appended to the fields of the base class. Structure and union constructors accept both positional and keyword arguments. Positional arguments are used to initialize member fields in the same order as they appear in [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_"). Keyword arguments in the constructor are interpreted as attribute assignments, so they will initialize [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_") with the same name, or create new attributes for names not present in [`_fields_`](#ctypes.Structure._fields_ "ctypes.Structure._fields_"). ### Arrays and pointers `class ctypes.Array(*args)` Abstract base class for arrays. The recommended way to create concrete array types is by multiplying any [`ctypes`](#module-ctypes "ctypes: A foreign function library for Python.") data type with a non-negative integer. Alternatively, you can subclass this type and define [`_length_`](#ctypes.Array._length_ "ctypes.Array._length_") and [`_type_`](#ctypes.Array._type_ "ctypes.Array._type_") class variables. Array elements can be read and written using standard subscript and slice accesses; for slice reads, the resulting object is *not* itself an [`Array`](#ctypes.Array "ctypes.Array"). `_length_` A positive integer specifying the number of elements in the array. Out-of-range subscripts result in an [`IndexError`](exceptions#IndexError "IndexError"). Will be returned by [`len()`](functions#len "len"). `_type_` Specifies the type of each element in the array. Array subclass constructors accept positional arguments, used to initialize the elements in order. `class ctypes._Pointer` Private, abstract base class for pointers. Concrete pointer types are created by calling [`POINTER()`](#ctypes.POINTER "ctypes.POINTER") with the type that will be pointed to; this is done automatically by [`pointer()`](#ctypes.pointer "ctypes.pointer"). If a pointer points to an array, its elements can be read and written using standard subscript and slice accesses. Pointer objects have no size, so [`len()`](functions#len "len") will raise [`TypeError`](exceptions#TypeError "TypeError"). Negative subscripts will read from the memory *before* the pointer (as in C), and out-of-range subscripts will probably crash with an access violation (if you’re lucky).
`_type_` Specifies the type pointed to. `contents` Returns the object to which the pointer points. Assigning to this attribute changes the pointer to point to the assigned object.
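Tying the pieces above together, here is a small sketch (the `POINT` type is invented for illustration) that defines a structure, creates an array type by multiplication, and dereferences a pointer through `contents` and subscripting:

```
>>> from ctypes import Structure, pointer, c_int
>>> class POINT(Structure):
...     _fields_ = [("x", c_int), ("y", c_int)]
...
>>> pt = POINT(1, 2)        # positional arguments initialize fields in order
>>> pt.x, pt.y
(1, 2)
>>> arr = (POINT * 10)()    # concrete array type created by multiplication
>>> arr[3].y = 42
>>> p = pointer(pt)         # a POINTER(POINT) instance
>>> p.contents.y
2
>>> p[0].x = 7              # p[0] dereferences like *p in C, sharing pt's memory
>>> pt.x
7
```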
python os — Miscellaneous operating system interfaces os — Miscellaneous operating system interfaces ============================================== **Source code:** [Lib/os.py](https://github.com/python/cpython/tree/3.9/Lib/os.py) This module provides a portable way of using operating system dependent functionality. If you just want to read or write a file see [`open()`](functions#open "open"), if you want to manipulate paths, see the [`os.path`](os.path#module-os.path "os.path: Operations on pathnames.") module, and if you want to read all the lines in all the files on the command line see the [`fileinput`](fileinput#module-fileinput "fileinput: Loop over standard input or a list of files.") module. For creating temporary files and directories see the [`tempfile`](tempfile#module-tempfile "tempfile: Generate temporary files and directories.") module, and for high-level file and directory handling see the [`shutil`](shutil#module-shutil "shutil: High-level file operations, including copying.") module. Notes on the availability of these functions: * The design of all built-in operating system dependent modules of Python is such that as long as the same functionality is available, it uses the same interface; for example, the function `os.stat(path)` returns stat information about *path* in the same format (which happens to have originated with the POSIX interface). * Extensions peculiar to a particular operating system are also available through the [`os`](#module-os "os: Miscellaneous operating system interfaces.") module, but using them is of course a threat to portability. * All functions accepting path or file names accept both bytes and string objects, and result in an object of the same type, if a path or file name is returned. * On VxWorks, os.fork, os.execv and os.spawn\*p\* are not supported. Note All functions in this module raise [`OSError`](exceptions#OSError "OSError") (or subclasses thereof) in the case of invalid or inaccessible file names and paths, or other arguments that have the correct type, but are not accepted by the operating system. `exception os.error` An alias for the built-in [`OSError`](exceptions#OSError "OSError") exception. `os.name` The name of the operating system dependent module imported. The following names have currently been registered: `'posix'`, `'nt'`, `'java'`. See also [`sys.platform`](sys#sys.platform "sys.platform") has a finer granularity. [`os.uname()`](#os.uname "os.uname") gives system-dependent version information. The [`platform`](platform#module-platform "platform: Retrieves as much platform identifying data as possible.") module provides detailed checks for the system’s identity. File Names, Command Line Arguments, and Environment Variables ------------------------------------------------------------- In Python, file names, command line arguments, and environment variables are represented using the string type. On some systems, decoding these strings to and from bytes is necessary before passing them to the operating system. Python uses the file system encoding to perform this conversion (see [`sys.getfilesystemencoding()`](sys#sys.getfilesystemencoding "sys.getfilesystemencoding")). Changed in version 3.1: On some systems, conversion using the file system encoding may fail. In this case, Python uses the [surrogateescape encoding error handler](codecs#surrogateescape), which means that undecodable bytes are replaced by a Unicode character U+DCxx on decoding, and these are again translated to the original byte on encoding. 
The file system encoding must guarantee to successfully decode all bytes below 128. If the file system encoding fails to provide this guarantee, API functions may raise UnicodeErrors. Process Parameters ------------------ These functions and data items provide information and operate on the current process and user. `os.ctermid()` Return the filename corresponding to the controlling terminal of the process. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.environ` A [mapping](../glossary#term-mapping) object where keys and values are strings that represent the process environment. For example, `environ['HOME']` is the pathname of your home directory (on some platforms), and is equivalent to `getenv("HOME")` in C. This mapping is captured the first time the [`os`](#module-os "os: Miscellaneous operating system interfaces.") module is imported, typically during Python startup as part of processing `site.py`. Changes to the environment made after this time are not reflected in [`os.environ`](#os.environ "os.environ"), except for changes made by modifying [`os.environ`](#os.environ "os.environ") directly. This mapping may be used to modify the environment as well as query the environment. [`putenv()`](#os.putenv "os.putenv") will be called automatically when the mapping is modified. On Unix, keys and values use [`sys.getfilesystemencoding()`](sys#sys.getfilesystemencoding "sys.getfilesystemencoding") and `'surrogateescape'` error handler. Use [`environb`](#os.environb "os.environb") if you would like to use a different encoding. Note Calling [`putenv()`](#os.putenv "os.putenv") directly does not change [`os.environ`](#os.environ "os.environ"), so it’s better to modify [`os.environ`](#os.environ "os.environ"). Note On some platforms, including FreeBSD and macOS, setting `environ` may cause memory leaks. Refer to the system documentation for `putenv()`. You can delete items in this mapping to unset environment variables. [`unsetenv()`](#os.unsetenv "os.unsetenv") will be called automatically when an item is deleted from [`os.environ`](#os.environ "os.environ"), and when one of the `pop()` or `clear()` methods is called. Changed in version 3.9: Updated to support [**PEP 584**](https://www.python.org/dev/peps/pep-0584)’s merge (`|`) and update (`|=`) operators. `os.environb` Bytes version of [`environ`](#os.environ "os.environ"): a [mapping](../glossary#term-mapping) object where both keys and values are [`bytes`](stdtypes#bytes "bytes") objects representing the process environment. [`environ`](#os.environ "os.environ") and [`environb`](#os.environb "os.environb") are synchronized (modifying [`environb`](#os.environb "os.environb") updates [`environ`](#os.environ "os.environ"), and vice versa). [`environb`](#os.environb "os.environb") is only available if [`supports_bytes_environ`](#os.supports_bytes_environ "os.supports_bytes_environ") is `True`. New in version 3.2. Changed in version 3.9: Updated to support [**PEP 584**](https://www.python.org/dev/peps/pep-0584)’s merge (`|`) and update (`|=`) operators. `os.chdir(path)` `os.fchdir(fd)` `os.getcwd()` These functions are described in [Files and Directories](#os-file-dir). `os.fsencode(filename)` Encode [path-like](../glossary#term-path-like-object) *filename* to the filesystem encoding with `'surrogateescape'` error handler, or `'strict'` on Windows; return [`bytes`](stdtypes#bytes "bytes") unchanged. [`fsdecode()`](#os.fsdecode "os.fsdecode") is the reverse function. New in version 3.2. 
Changed in version 3.6: Support added to accept objects implementing the [`os.PathLike`](#os.PathLike "os.PathLike") interface. `os.fsdecode(filename)` Decode the [path-like](../glossary#term-path-like-object) *filename* from the filesystem encoding with `'surrogateescape'` error handler, or `'strict'` on Windows; return [`str`](stdtypes#str "str") unchanged. [`fsencode()`](#os.fsencode "os.fsencode") is the reverse function. New in version 3.2. Changed in version 3.6: Support added to accept objects implementing the [`os.PathLike`](#os.PathLike "os.PathLike") interface. `os.fspath(path)` Return the file system representation of the path. If [`str`](stdtypes#str "str") or [`bytes`](stdtypes#bytes "bytes") is passed in, it is returned unchanged. Otherwise [`__fspath__()`](#os.PathLike.__fspath__ "os.PathLike.__fspath__") is called and its value is returned as long as it is a [`str`](stdtypes#str "str") or [`bytes`](stdtypes#bytes "bytes") object. In all other cases, [`TypeError`](exceptions#TypeError "TypeError") is raised. New in version 3.6. `class os.PathLike` An [abstract base class](../glossary#term-abstract-base-class) for objects representing a file system path, e.g. [`pathlib.PurePath`](pathlib#pathlib.PurePath "pathlib.PurePath"). New in version 3.6. `abstractmethod __fspath__()` Return the file system path representation of the object. The method should only return a [`str`](stdtypes#str "str") or [`bytes`](stdtypes#bytes "bytes") object, with the preference being for [`str`](stdtypes#str "str"). `os.getenv(key, default=None)` Return the value of the environment variable *key* if it exists, or *default* if it doesn’t. *key*, *default* and the result are str. Note that since [`getenv()`](#os.getenv "os.getenv") uses [`os.environ`](#os.environ "os.environ"), the mapping of [`getenv()`](#os.getenv "os.getenv") is similarly also captured on import, and the function may not reflect future environment changes. On Unix, keys and values are decoded with [`sys.getfilesystemencoding()`](sys#sys.getfilesystemencoding "sys.getfilesystemencoding") and `'surrogateescape'` error handler. Use [`os.getenvb()`](#os.getenvb "os.getenvb") if you would like to use a different encoding. [Availability](https://docs.python.org/3.9/library/intro.html#availability): most flavors of Unix, Windows. `os.getenvb(key, default=None)` Return the value of the environment variable *key* if it exists, or *default* if it doesn’t. *key*, *default* and the result are bytes. Note that since [`getenvb()`](#os.getenvb "os.getenvb") uses [`os.environb`](#os.environb "os.environb"), the mapping of [`getenvb()`](#os.getenvb "os.getenvb") is similarly also captured on import, and the function may not reflect future environment changes. [`getenvb()`](#os.getenvb "os.getenvb") is only available if [`supports_bytes_environ`](#os.supports_bytes_environ "os.supports_bytes_environ") is `True`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): most flavors of Unix. New in version 3.2. `os.get_exec_path(env=None)` Returns the list of directories that will be searched for a named executable, similar to a shell, when launching a process. *env*, when specified, should be an environment variable dictionary to lookup the PATH in. By default, when *env* is `None`, [`environ`](#os.environ "os.environ") is used. New in version 3.2. `os.getegid()` Return the effective group id of the current process. This corresponds to the “set id” bit on the file being executed in the current process. 
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.geteuid()` Return the current process’s effective user id. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.getgid()` Return the real group id of the current process. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.getgrouplist(user, group)` Return list of group ids that *user* belongs to. If *group* is not in the list, it is included; typically, *group* is specified as the group ID field from the password record for *user*, because that group ID will otherwise be potentially omitted. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.getgroups()` Return list of supplemental group ids associated with the current process. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Note On macOS, [`getgroups()`](#os.getgroups "os.getgroups") behavior differs somewhat from other Unix platforms. If the Python interpreter was built with a deployment target of `10.5` or earlier, [`getgroups()`](#os.getgroups "os.getgroups") returns the list of effective group ids associated with the current user process; this list is limited to a system-defined number of entries, typically 16, and may be modified by calls to [`setgroups()`](#os.setgroups "os.setgroups") if suitably privileged. If built with a deployment target greater than `10.5`, [`getgroups()`](#os.getgroups "os.getgroups") returns the current group access list for the user associated with the effective user id of the process; the group access list may change over the lifetime of the process, it is not affected by calls to [`setgroups()`](#os.setgroups "os.setgroups"), and its length is not limited to 16. The deployment target value, `MACOSX_DEPLOYMENT_TARGET`, can be obtained with [`sysconfig.get_config_var()`](sysconfig#sysconfig.get_config_var "sysconfig.get_config_var"). `os.getlogin()` Return the name of the user logged in on the controlling terminal of the process. For most purposes, it is more useful to use [`getpass.getuser()`](getpass#getpass.getuser "getpass.getuser") since the latter checks the environment variables `LOGNAME` or `USERNAME` to find out who the user is, and falls back to `pwd.getpwuid(os.getuid())[0]` to get the login name of the current real user id. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. `os.getpgid(pid)` Return the process group id of the process with process id *pid*. If *pid* is 0, the process group id of the current process is returned. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.getpgrp()` Return the id of the current process group. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.getpid()` Return the current process id. `os.getppid()` Return the parent’s process id. When the parent process has exited, on Unix the id returned is the one of the init process (1), on Windows it is still the same id, which may be already reused by another process. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.2: Added support for Windows. `os.getpriority(which, who)` Get program scheduling priority. 
The value *which* is one of [`PRIO_PROCESS`](#os.PRIO_PROCESS "os.PRIO_PROCESS"), [`PRIO_PGRP`](#os.PRIO_PGRP "os.PRIO_PGRP"), or [`PRIO_USER`](#os.PRIO_USER "os.PRIO_USER"), and *who* is interpreted relative to *which* (a process identifier for [`PRIO_PROCESS`](#os.PRIO_PROCESS "os.PRIO_PROCESS"), process group identifier for [`PRIO_PGRP`](#os.PRIO_PGRP "os.PRIO_PGRP"), and a user ID for [`PRIO_USER`](#os.PRIO_USER "os.PRIO_USER")). A zero value for *who* denotes (respectively) the calling process, the process group of the calling process, or the real user ID of the calling process. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.PRIO_PROCESS` `os.PRIO_PGRP` `os.PRIO_USER` Parameters for the [`getpriority()`](#os.getpriority "os.getpriority") and [`setpriority()`](#os.setpriority "os.setpriority") functions. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.getresuid()` Return a tuple (ruid, euid, suid) denoting the current process’s real, effective, and saved user ids. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.2. `os.getresgid()` Return a tuple (rgid, egid, sgid) denoting the current process’s real, effective, and saved group ids. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.2. `os.getuid()` Return the current process’s real user id. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.initgroups(username, gid)` Call the system initgroups() to initialize the group access list with all of the groups of which the specified username is a member, plus the specified group id. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.2. `os.putenv(key, value)` Set the environment variable named *key* to the string *value*. Such changes to the environment affect subprocesses started with [`os.system()`](#os.system "os.system"), [`popen()`](#os.popen "os.popen") or [`fork()`](#os.fork "os.fork") and [`execv()`](#os.execv "os.execv"). Assignments to items in [`os.environ`](#os.environ "os.environ") are automatically translated into corresponding calls to [`putenv()`](#os.putenv "os.putenv"); however, calls to [`putenv()`](#os.putenv "os.putenv") don’t update [`os.environ`](#os.environ "os.environ"), so it is actually preferable to assign to items of [`os.environ`](#os.environ "os.environ"). This also applies to [`getenv()`](#os.getenv "os.getenv") and [`getenvb()`](#os.getenvb "os.getenvb"), which respectively use [`os.environ`](#os.environ "os.environ") and [`os.environb`](#os.environb "os.environb") in their implementations. Note On some platforms, including FreeBSD and macOS, setting `environ` may cause memory leaks. Refer to the system documentation for `putenv()`. Raises an [auditing event](sys#auditing) `os.putenv` with arguments `key`, `value`. Changed in version 3.9: The function is now always available. `os.setegid(egid)` Set the current process’s effective group id. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.seteuid(euid)` Set the current process’s effective user id. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.setgid(gid)` Set the current process’ group id. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. 
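As a minimal illustration of the recommendation above to modify `os.environ` rather than calling `putenv()` directly (the variable name `MY_FLAG` is invented for the example):

```
>>> import os
>>> os.environ["MY_FLAG"] = "1"    # also calls putenv() behind the scenes
>>> os.getenv("MY_FLAG")
'1'
>>> del os.environ["MY_FLAG"]      # also calls unsetenv()
>>> os.getenv("MY_FLAG", "unset")
'unset'
```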
`os.setgroups(groups)` Set the list of supplemental group ids associated with the current process to *groups*. *groups* must be a sequence, and each element must be an integer identifying a group. This operation is typically available only to the superuser. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Note On macOS, the length of *groups* may not exceed the system-defined maximum number of effective group ids, typically 16. See the documentation for [`getgroups()`](#os.getgroups "os.getgroups") for cases where it may not return the same group list set by calling setgroups(). `os.setpgrp()` Call the system call `setpgrp()` or `setpgrp(0, 0)` depending on which version is implemented (if any). See the Unix manual for the semantics. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.setpgid(pid, pgrp)` Call the system call `setpgid()` to set the process group id of the process with id *pid* to the process group with id *pgrp*. See the Unix manual for the semantics. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.setpriority(which, who, priority)` Set program scheduling priority. The value *which* is one of [`PRIO_PROCESS`](#os.PRIO_PROCESS "os.PRIO_PROCESS"), [`PRIO_PGRP`](#os.PRIO_PGRP "os.PRIO_PGRP"), or [`PRIO_USER`](#os.PRIO_USER "os.PRIO_USER"), and *who* is interpreted relative to *which* (a process identifier for [`PRIO_PROCESS`](#os.PRIO_PROCESS "os.PRIO_PROCESS"), process group identifier for [`PRIO_PGRP`](#os.PRIO_PGRP "os.PRIO_PGRP"), and a user ID for [`PRIO_USER`](#os.PRIO_USER "os.PRIO_USER")). A zero value for *who* denotes (respectively) the calling process, the process group of the calling process, or the real user ID of the calling process. *priority* is a value in the range -20 to 19. The default priority is 0; lower priorities cause more favorable scheduling. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.setregid(rgid, egid)` Set the current process’s real and effective group ids. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.setresgid(rgid, egid, sgid)` Set the current process’s real, effective, and saved group ids. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.2. `os.setresuid(ruid, euid, suid)` Set the current process’s real, effective, and saved user ids. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.2. `os.setreuid(ruid, euid)` Set the current process’s real and effective user ids. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.getsid(pid)` Call the system call `getsid()`. See the Unix manual for the semantics. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.setsid()` Call the system call `setsid()`. See the Unix manual for the semantics. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.setuid(uid)` Set the current process’s user id. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.strerror(code)` Return the error message corresponding to the error code in *code*. On platforms where `strerror()` returns `NULL` when given an unknown error number, [`ValueError`](exceptions#ValueError "ValueError") is raised. `os.supports_bytes_environ` `True` if the native OS type of the environment is bytes (e.g.
`False` on Windows). New in version 3.2. `os.umask(mask)` Set the current numeric umask and return the previous umask. `os.uname()` Returns information identifying the current operating system. The return value is an object with five attributes: * `sysname` - operating system name * `nodename` - name of machine on network (implementation-defined) * `release` - operating system release * `version` - operating system version * `machine` - hardware identifier For backwards compatibility, this object is also iterable, behaving like a five-tuple containing `sysname`, `nodename`, `release`, `version`, and `machine` in that order. Some systems truncate `nodename` to 8 characters or to the leading component; a better way to get the hostname is [`socket.gethostname()`](socket#socket.gethostname "socket.gethostname") or even `socket.gethostbyaddr(socket.gethostname())`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): recent flavors of Unix. Changed in version 3.3: Return type changed from a tuple to a tuple-like object with named attributes. `os.unsetenv(key)` Unset (delete) the environment variable named *key*. Such changes to the environment affect subprocesses started with [`os.system()`](#os.system "os.system"), [`popen()`](#os.popen "os.popen") or [`fork()`](#os.fork "os.fork") and [`execv()`](#os.execv "os.execv"). Deletion of items in [`os.environ`](#os.environ "os.environ") is automatically translated into a corresponding call to [`unsetenv()`](#os.unsetenv "os.unsetenv"); however, calls to [`unsetenv()`](#os.unsetenv "os.unsetenv") don’t update [`os.environ`](#os.environ "os.environ"), so it is actually preferable to delete items of [`os.environ`](#os.environ "os.environ"). Raises an [auditing event](sys#auditing) `os.unsetenv` with argument `key`. Changed in version 3.9: The function is now always available and is also available on Windows. File Object Creation -------------------- These functions create new [file objects](../glossary#term-file-object). (See also [`open()`](#os.open "os.open") for opening file descriptors.) `os.fdopen(fd, *args, **kwargs)` Return an open file object connected to the file descriptor *fd*. This is an alias of the [`open()`](functions#open "open") built-in function and accepts the same arguments. The only difference is that the first argument of [`fdopen()`](#os.fdopen "os.fdopen") must always be an integer. File Descriptor Operations -------------------------- These functions operate on I/O streams referenced using file descriptors. File descriptors are small integers corresponding to a file that has been opened by the current process. For example, standard input is usually file descriptor 0, standard output is 1, and standard error is 2. Further files opened by a process will then be assigned 3, 4, 5, and so forth. The name “file descriptor” is slightly deceptive; on Unix platforms, sockets and pipes are also referenced by file descriptors. The [`fileno()`](io#io.IOBase.fileno "io.IOBase.fileno") method can be used to obtain the file descriptor associated with a [file object](../glossary#term-file-object) when required. Note that using the file descriptor directly will bypass the file object methods, ignoring aspects such as internal buffering of data. `os.close(fd)` Close file descriptor *fd*. Note This function is intended for low-level I/O and must be applied to a file descriptor as returned by [`os.open()`](#os.open "os.open") or [`pipe()`](#os.pipe "os.pipe"). 
To close a “file object” returned by the built-in function [`open()`](functions#open "open") or by [`popen()`](#os.popen "os.popen") or [`fdopen()`](#os.fdopen "os.fdopen"), use its [`close()`](io#io.IOBase.close "io.IOBase.close") method. `os.closerange(fd_low, fd_high)` Close all file descriptors from *fd\_low* (inclusive) to *fd\_high* (exclusive), ignoring errors. Equivalent to (but much faster than): ``` for fd in range(fd_low, fd_high): try: os.close(fd) except OSError: pass ``` `os.copy_file_range(src, dst, count, offset_src=None, offset_dst=None)` Copy *count* bytes from file descriptor *src*, starting from offset *offset\_src*, to file descriptor *dst*, starting from offset *offset\_dst*. If *offset\_src* is None, then *src* is read from the current position; respectively for *offset\_dst*. The files pointed to by *src* and *dst* must reside in the same filesystem, otherwise an [`OSError`](exceptions#OSError "OSError") is raised with [`errno`](exceptions#OSError.errno "OSError.errno") set to [`errno.EXDEV`](errno#errno.EXDEV "errno.EXDEV"). This copy is done without the additional cost of transferring data from the kernel to user space and then back into the kernel. Additionally, some filesystems could implement extra optimizations. The copy is done as if both files are opened as binary. The return value is the number of bytes copied. This could be less than the amount requested. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux kernel >= 4.5 or glibc >= 2.27. New in version 3.8. `os.device_encoding(fd)` Return a string describing the encoding of the device associated with *fd* if it is connected to a terminal; else return [`None`](constants#None "None"). `os.dup(fd)` Return a duplicate of file descriptor *fd*. The new file descriptor is [non-inheritable](#fd-inheritance). On Windows, when duplicating a standard stream (0: stdin, 1: stdout, 2: stderr), the new file descriptor is [inheritable](#fd-inheritance). Changed in version 3.4: The new file descriptor is now non-inheritable. `os.dup2(fd, fd2, inheritable=True)` Duplicate file descriptor *fd* to *fd2*, closing the latter first if necessary. Return *fd2*. The new file descriptor is [inheritable](#fd-inheritance) by default or non-inheritable if *inheritable* is `False`. Changed in version 3.4: Add the optional *inheritable* parameter. Changed in version 3.7: Return *fd2* on success. Previously, `None` was always returned. `os.fchmod(fd, mode)` Change the mode of the file given by *fd* to the numeric *mode*. See the docs for [`chmod()`](#os.chmod "os.chmod") for possible values of *mode*. As of Python 3.3, this is equivalent to `os.chmod(fd, mode)`. Raises an [auditing event](sys#auditing) `os.chmod` with arguments `path`, `mode`, `dir_fd`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.fchown(fd, uid, gid)` Change the owner and group id of the file given by *fd* to the numeric *uid* and *gid*. To leave one of the ids unchanged, set it to -1. See [`chown()`](#os.chown "os.chown"). As of Python 3.3, this is equivalent to `os.chown(fd, uid, gid)`. Raises an [auditing event](sys#auditing) `os.chown` with arguments `path`, `uid`, `gid`, `dir_fd`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.fdatasync(fd)` Force write of file with file descriptor *fd* to disk. Does not force update of metadata. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Note This function is not available on macOS.
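One common use of `dup()` and `dup2()` is temporarily redirecting a standard stream at the file descriptor level. The sketch below (the file name `out.log` is illustrative; `os.open()` and its flags are described later in this section) saves stdout, points descriptor 1 at a file, and then restores it:

```
import os

saved = os.dup(1)   # keep a duplicate of stdout so it can be restored
fd = os.open("out.log", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.dup2(fd, 1)      # descriptor 1 now refers to out.log
os.write(1, b"redirected\n")
os.dup2(saved, 1)   # restore the original stdout
os.close(fd)
os.close(saved)
```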
`os.fpathconf(fd, name)` Return system configuration information relevant to an open file. *name* specifies the configuration value to retrieve; it may be a string which is the name of a defined system value; these names are specified in a number of standards (POSIX.1, Unix 95, Unix 98, and others). Some platforms define additional names as well. The names known to the host operating system are given in the `pathconf_names` dictionary. For configuration variables not included in that mapping, passing an integer for *name* is also accepted. If *name* is a string and is not known, [`ValueError`](exceptions#ValueError "ValueError") is raised. If a specific value for *name* is not supported by the host system, even if it is included in `pathconf_names`, an [`OSError`](exceptions#OSError "OSError") is raised with [`errno.EINVAL`](errno#errno.EINVAL "errno.EINVAL") for the error number. As of Python 3.3, this is equivalent to `os.pathconf(fd, name)`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.fstat(fd)` Get the status of the file descriptor *fd*. Return a [`stat_result`](#os.stat_result "os.stat_result") object. As of Python 3.3, this is equivalent to `os.stat(fd)`. See also The [`stat()`](#os.stat "os.stat") function. `os.fstatvfs(fd)` Return information about the filesystem containing the file associated with file descriptor *fd*, like [`statvfs()`](#os.statvfs "os.statvfs"). As of Python 3.3, this is equivalent to `os.statvfs(fd)`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.fsync(fd)` Force write of file with file descriptor *fd* to disk. On Unix, this calls the native `fsync()` function; on Windows, the MS `_commit()` function. If you’re starting with a buffered Python [file object](../glossary#term-file-object) *f*, first do `f.flush()`, and then do `os.fsync(f.fileno())`, to ensure that all internal buffers associated with *f* are written to disk. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. `os.ftruncate(fd, length)` Truncate the file corresponding to file descriptor *fd*, so that it is at most *length* bytes in size. As of Python 3.3, this is equivalent to `os.truncate(fd, length)`. Raises an [auditing event](sys#auditing) `os.truncate` with arguments `fd`, `length`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.5: Added support for Windows. `os.get_blocking(fd)` Get the blocking mode of the file descriptor: `False` if the [`O_NONBLOCK`](#os.O_NONBLOCK "os.O_NONBLOCK") flag is set, `True` if the flag is cleared. See also [`set_blocking()`](#os.set_blocking "os.set_blocking") and [`socket.socket.setblocking()`](socket#socket.socket.setblocking "socket.socket.setblocking"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.5. `os.isatty(fd)` Return `True` if the file descriptor *fd* is open and connected to a tty(-like) device, else `False`. `os.lockf(fd, cmd, len)` Apply, test or remove a POSIX lock on an open file descriptor. *fd* is an open file descriptor. *cmd* specifies the command to use - one of [`F_LOCK`](#os.F_LOCK "os.F_LOCK"), [`F_TLOCK`](#os.F_TLOCK "os.F_TLOCK"), [`F_ULOCK`](#os.F_ULOCK "os.F_ULOCK") or [`F_TEST`](#os.F_TEST "os.F_TEST"). *len* specifies the section of the file to lock. Raises an [auditing event](sys#auditing) `os.lockf` with arguments `fd`, `cmd`, `len`.
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.F_LOCK` `os.F_TLOCK` `os.F_ULOCK` `os.F_TEST` Flags that specify what action [`lockf()`](#os.lockf "os.lockf") will take. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.lseek(fd, pos, how)` Set the current position of file descriptor *fd* to position *pos*, modified by *how*: [`SEEK_SET`](#os.SEEK_SET "os.SEEK_SET") or `0` to set the position relative to the beginning of the file; [`SEEK_CUR`](#os.SEEK_CUR "os.SEEK_CUR") or `1` to set it relative to the current position; [`SEEK_END`](#os.SEEK_END "os.SEEK_END") or `2` to set it relative to the end of the file. Return the new cursor position in bytes, starting from the beginning. `os.SEEK_SET` `os.SEEK_CUR` `os.SEEK_END` Parameters to the [`lseek()`](#os.lseek "os.lseek") function. Their values are 0, 1, and 2, respectively. New in version 3.3: Some operating systems could support additional values, like `os.SEEK_HOLE` or `os.SEEK_DATA`. `os.open(path, flags, mode=0o777, *, dir_fd=None)` Open the file *path* and set various flags according to *flags* and possibly its mode according to *mode*. When computing *mode*, the current umask value is first masked out. Return the file descriptor for the newly opened file. The new file descriptor is [non-inheritable](#fd-inheritance). For a description of the flag and mode values, see the C run-time documentation; flag constants (like [`O_RDONLY`](#os.O_RDONLY "os.O_RDONLY") and [`O_WRONLY`](#os.O_WRONLY "os.O_WRONLY")) are defined in the [`os`](#module-os "os: Miscellaneous operating system interfaces.") module. In particular, on Windows adding [`O_BINARY`](#os.O_BINARY "os.O_BINARY") is needed to open files in binary mode. This function can support [paths relative to directory descriptors](#dir-fd) with the *dir\_fd* parameter. Raises an [auditing event](sys#auditing) `open` with arguments `path`, `mode`, `flags`. Changed in version 3.4: The new file descriptor is now non-inheritable. Note This function is intended for low-level I/O. For normal usage, use the built-in function [`open()`](functions#open "open"), which returns a [file object](../glossary#term-file-object) with `read()` and `write()` methods (and many more). To wrap a file descriptor in a file object, use [`fdopen()`](#os.fdopen "os.fdopen"). New in version 3.3: The *dir\_fd* argument. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the function now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale). Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). The following constants are options for the *flags* parameter to the [`open()`](#os.open "os.open") function. They can be combined using the bitwise OR operator `|`. Some of them are not available on all platforms. For descriptions of their availability and use, consult the *[open(2)](https://manpages.debian.org/open(2))* manual page on Unix or [the MSDN](https://msdn.microsoft.com/en-us/library/z0kc8e3z.aspx) on Windows. `os.O_RDONLY` `os.O_WRONLY` `os.O_RDWR` `os.O_APPEND` `os.O_CREAT` `os.O_EXCL` `os.O_TRUNC` The above constants are available on Unix and Windows. 
`os.O_DSYNC` `os.O_RSYNC` `os.O_SYNC` `os.O_NDELAY` `os.O_NONBLOCK` `os.O_NOCTTY` `os.O_CLOEXEC` The above constants are only available on Unix. Changed in version 3.3: Add [`O_CLOEXEC`](#os.O_CLOEXEC "os.O_CLOEXEC") constant. `os.O_BINARY` `os.O_NOINHERIT` `os.O_SHORT_LIVED` `os.O_TEMPORARY` `os.O_RANDOM` `os.O_SEQUENTIAL` `os.O_TEXT` The above constants are only available on Windows. `os.O_ASYNC` `os.O_DIRECT` `os.O_DIRECTORY` `os.O_NOFOLLOW` `os.O_NOATIME` `os.O_PATH` `os.O_TMPFILE` `os.O_SHLOCK` `os.O_EXLOCK` The above constants are extensions and not present if they are not defined by the C library. Changed in version 3.4: Add [`O_PATH`](#os.O_PATH "os.O_PATH") on systems that support it. Add [`O_TMPFILE`](#os.O_TMPFILE "os.O_TMPFILE"), only available on Linux Kernel 3.11 or newer. `os.openpty()` Open a new pseudo-terminal pair. Return a pair of file descriptors `(master, slave)` for the pty and the tty, respectively. The new file descriptors are [non-inheritable](#fd-inheritance). For a (slightly) more portable approach, use the [`pty`](pty#module-pty "pty: Pseudo-Terminal Handling for Linux. (Linux)") module. [Availability](https://docs.python.org/3.9/library/intro.html#availability): some flavors of Unix. Changed in version 3.4: The new file descriptors are now non-inheritable. `os.pipe()` Create a pipe. Return a pair of file descriptors `(r, w)` usable for reading and writing, respectively. The new file descriptor is [non-inheritable](#fd-inheritance). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.4: The new file descriptors are now non-inheritable. `os.pipe2(flags)` Create a pipe with *flags* set atomically. *flags* can be constructed by ORing together one or more of these values: [`O_NONBLOCK`](#os.O_NONBLOCK "os.O_NONBLOCK"), [`O_CLOEXEC`](#os.O_CLOEXEC "os.O_CLOEXEC"). Return a pair of file descriptors `(r, w)` usable for reading and writing, respectively. [Availability](https://docs.python.org/3.9/library/intro.html#availability): some flavors of Unix. New in version 3.3. `os.posix_fallocate(fd, offset, len)` Ensures that enough disk space is allocated for the file specified by *fd* starting from *offset* and continuing for *len* bytes. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.posix_fadvise(fd, offset, len, advice)` Announces an intention to access data in a specific pattern thus allowing the kernel to make optimizations. The advice applies to the region of the file specified by *fd* starting at *offset* and continuing for *len* bytes. *advice* is one of [`POSIX_FADV_NORMAL`](#os.POSIX_FADV_NORMAL "os.POSIX_FADV_NORMAL"), [`POSIX_FADV_SEQUENTIAL`](#os.POSIX_FADV_SEQUENTIAL "os.POSIX_FADV_SEQUENTIAL"), [`POSIX_FADV_RANDOM`](#os.POSIX_FADV_RANDOM "os.POSIX_FADV_RANDOM"), [`POSIX_FADV_NOREUSE`](#os.POSIX_FADV_NOREUSE "os.POSIX_FADV_NOREUSE"), [`POSIX_FADV_WILLNEED`](#os.POSIX_FADV_WILLNEED "os.POSIX_FADV_WILLNEED") or [`POSIX_FADV_DONTNEED`](#os.POSIX_FADV_DONTNEED "os.POSIX_FADV_DONTNEED"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.POSIX_FADV_NORMAL` `os.POSIX_FADV_SEQUENTIAL` `os.POSIX_FADV_RANDOM` `os.POSIX_FADV_NOREUSE` `os.POSIX_FADV_WILLNEED` `os.POSIX_FADV_DONTNEED` Flags that can be used in *advice* in [`posix_fadvise()`](#os.posix_fadvise "os.posix_fadvise") that specify the access pattern that is likely to be used. 
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.pread(fd, n, offset)` Read at most *n* bytes from file descriptor *fd* at position *offset*, leaving the file offset unchanged. Return a bytestring containing the bytes read. If the end of the file referred to by *fd* has been reached, an empty bytes object is returned. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.preadv(fd, buffers, offset, flags=0)` Read from a file descriptor *fd* at position *offset* into mutable [bytes-like objects](../glossary#term-bytes-like-object) *buffers*, leaving the file offset unchanged. Transfer data into each buffer until it is full and then move on to the next buffer in the sequence to hold the rest of the data. The flags argument contains a bitwise OR of zero or more of the following flags: * [`RWF_HIPRI`](#os.RWF_HIPRI "os.RWF_HIPRI") * [`RWF_NOWAIT`](#os.RWF_NOWAIT "os.RWF_NOWAIT") Return the total number of bytes actually read, which can be less than the total capacity of all the objects. The operating system may set a limit ([`sysconf()`](#os.sysconf "os.sysconf") value `'SC_IOV_MAX'`) on the number of buffers that can be used. Combine the functionality of [`os.readv()`](#os.readv "os.readv") and [`os.pread()`](#os.pread "os.pread"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.30 and newer, FreeBSD 6.0 and newer, OpenBSD 2.7 and newer, AIX 7.1 and newer. Using flags requires Linux 4.6 or newer. New in version 3.7. `os.RWF_NOWAIT` Do not wait for data which is not immediately available. If this flag is specified, the system call will return instantly if it would have to read data from the backing storage or wait for a lock. If some data was successfully read, it will return the number of bytes read. If no bytes were read, it will return `-1` and set errno to [`errno.EAGAIN`](errno#errno.EAGAIN "errno.EAGAIN"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 4.14 and newer. New in version 3.7. `os.RWF_HIPRI` High priority read/write. Allows block-based filesystems to use polling of the device, which provides lower latency, but may use additional resources. Currently, on Linux, this feature is usable only on a file descriptor opened using the [`O_DIRECT`](#os.O_DIRECT "os.O_DIRECT") flag. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 4.6 and newer. New in version 3.7. `os.pwrite(fd, str, offset)` Write the bytestring in *str* to file descriptor *fd* at position *offset*, leaving the file offset unchanged. Return the number of bytes actually written. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.pwritev(fd, buffers, offset, flags=0)` Write the *buffers* contents to file descriptor *fd* at offset *offset*, leaving the file offset unchanged. *buffers* must be a sequence of [bytes-like objects](../glossary#term-bytes-like-object). Buffers are processed in array order. The entire contents of the first buffer are written before proceeding to the second, and so on. The flags argument contains a bitwise OR of zero or more of the following flags: * [`RWF_DSYNC`](#os.RWF_DSYNC "os.RWF_DSYNC") * [`RWF_SYNC`](#os.RWF_SYNC "os.RWF_SYNC") Return the total number of bytes actually written.
The operating system may set a limit ([`sysconf()`](#os.sysconf "os.sysconf") value `'SC_IOV_MAX'`) on the number of buffers that can be used. Combine the functionality of [`os.writev()`](#os.writev "os.writev") and [`os.pwrite()`](#os.pwrite "os.pwrite"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 2.6.30 and newer, FreeBSD 6.0 and newer, OpenBSD 2.7 and newer, AIX 7.1 and newer. Using flags requires Linux 4.7 or newer. New in version 3.7. `os.RWF_DSYNC` Provide a per-write equivalent of the [`O_DSYNC`](#os.O_DSYNC "os.O_DSYNC") `open(2)` flag. The effect of this flag applies only to the data range written by the system call. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 4.7 and newer. New in version 3.7. `os.RWF_SYNC` Provide a per-write equivalent of the [`O_SYNC`](#os.O_SYNC "os.O_SYNC") `open(2)` flag. The effect of this flag applies only to the data range written by the system call. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 4.7 and newer. New in version 3.7. `os.read(fd, n)` Read at most *n* bytes from file descriptor *fd*. Return a bytestring containing the bytes read. If the end of the file referred to by *fd* has been reached, an empty bytes object is returned. Note This function is intended for low-level I/O and must be applied to a file descriptor as returned by [`os.open()`](#os.open "os.open") or [`pipe()`](#os.pipe "os.pipe"). To read a “file object” returned by the built-in function [`open()`](functions#open "open") or by [`popen()`](#os.popen "os.popen") or [`fdopen()`](#os.fdopen "os.fdopen"), or [`sys.stdin`](sys#sys.stdin "sys.stdin"), use its `read()` or `readline()` methods. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the function now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale). `os.sendfile(out_fd, in_fd, offset, count)` `os.sendfile(out_fd, in_fd, offset, count, headers=(), trailers=(), flags=0)` Copy *count* bytes from file descriptor *in\_fd* to file descriptor *out\_fd* starting at *offset*. Return the number of bytes sent. When EOF is reached return `0`. The first function notation is supported by all platforms that define [`sendfile()`](#os.sendfile "os.sendfile"). On Linux, if *offset* is given as `None`, the bytes are read from the current position of *in\_fd* and the position of *in\_fd* is updated. The second case may be used on macOS and FreeBSD where *headers* and *trailers* are arbitrary sequences of buffers that are written before and after the data from *in\_fd* is written. It returns the same as the first case. On macOS and FreeBSD, a value of `0` for *count* specifies to send until the end of *in\_fd* is reached. All platforms support sockets as the *out\_fd* file descriptor, and some platforms allow other types (e.g. regular file, pipe) as well. Cross-platform applications should not use the *headers*, *trailers* and *flags* arguments. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Note For a higher-level wrapper of [`sendfile()`](#os.sendfile "os.sendfile"), see [`socket.socket.sendfile()`](socket#socket.socket.sendfile "socket.socket.sendfile"). New in version 3.3. Changed in version 3.9: Parameters *out* and *in* were renamed to *out\_fd* and *in\_fd*.
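To make the positioned I/O calls above concrete, here is a minimal sketch (Unix-only, since [`pread()`](#os.pread "os.pread") and [`pwrite()`](#os.pwrite "os.pwrite") are only available on Unix; the temporary file used here is purely illustrative) showing that both calls leave the shared file offset untouched:

```
import os
import tempfile

# Illustrative setup: a scratch file opened for low-level I/O.
fd = os.open(os.path.join(tempfile.mkdtemp(), "demo.bin"),
             os.O_RDWR | os.O_CREAT, 0o600)
try:
    os.write(fd, b"hello world")         # advances the file offset to 11
    os.pwrite(fd, b"HELLO", 0)           # overwrite at offset 0; offset untouched
    print(os.pread(fd, 5, 6))            # b'world', read at offset 6
    print(os.lseek(fd, 0, os.SEEK_CUR))  # 11: still where write() left it
finally:
    os.close(fd)
```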
`os.set_blocking(fd, blocking)` Set the blocking mode of the specified file descriptor. Set the [`O_NONBLOCK`](#os.O_NONBLOCK "os.O_NONBLOCK") flag if blocking is `False`, clear the flag otherwise. See also [`get_blocking()`](#os.get_blocking "os.get_blocking") and [`socket.socket.setblocking()`](socket#socket.socket.setblocking "socket.socket.setblocking"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.5. `os.SF_NODISKIO` `os.SF_MNOWAIT` `os.SF_SYNC` Parameters to the [`sendfile()`](#os.sendfile "os.sendfile") function, if the implementation supports them. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.readv(fd, buffers)` Read from a file descriptor *fd* into a number of mutable [bytes-like objects](../glossary#term-bytes-like-object) *buffers*. Transfer data into each buffer until it is full and then move on to the next buffer in the sequence to hold the rest of the data. Return the total number of bytes actually read, which can be less than the total capacity of all the objects. The operating system may set a limit ([`sysconf()`](#os.sysconf "os.sysconf") value `'SC_IOV_MAX'`) on the number of buffers that can be used. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.tcgetpgrp(fd)` Return the process group associated with the terminal given by *fd* (an open file descriptor as returned by [`os.open()`](#os.open "os.open")). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.tcsetpgrp(fd, pg)` Set the process group associated with the terminal given by *fd* (an open file descriptor as returned by [`os.open()`](#os.open "os.open")) to *pg*. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.ttyname(fd)` Return a string which specifies the terminal device associated with file descriptor *fd*. If *fd* is not associated with a terminal device, an exception is raised. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.write(fd, str)` Write the bytestring in *str* to file descriptor *fd*. Return the number of bytes actually written. Note This function is intended for low-level I/O and must be applied to a file descriptor as returned by [`os.open()`](#os.open "os.open") or [`pipe()`](#os.pipe "os.pipe"). To write a “file object” returned by the built-in function [`open()`](functions#open "open") or by [`popen()`](#os.popen "os.popen") or [`fdopen()`](#os.fdopen "os.fdopen"), or [`sys.stdout`](sys#sys.stdout "sys.stdout") or [`sys.stderr`](sys#sys.stderr "sys.stderr"), use its `write()` method. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the function now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale). `os.writev(fd, buffers)` Write the contents of *buffers* to file descriptor *fd*. *buffers* must be a sequence of [bytes-like objects](../glossary#term-bytes-like-object). Buffers are processed in array order. The entire contents of the first buffer are written before proceeding to the second, and so on. Returns the total number of bytes actually written. The operating system may set a limit ([`sysconf()`](#os.sysconf "os.sysconf") value `'SC_IOV_MAX'`) on the number of buffers that can be used.
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. ### Querying the size of a terminal New in version 3.3. `os.get_terminal_size(fd=STDOUT_FILENO)` Return the size of the terminal window as `(columns, lines)`, a tuple of type [`terminal_size`](#os.terminal_size "os.terminal_size"). The optional argument `fd` (default `STDOUT_FILENO`, or standard output) specifies which file descriptor should be queried. If the file descriptor is not connected to a terminal, an [`OSError`](exceptions#OSError "OSError") is raised. [`shutil.get_terminal_size()`](shutil#shutil.get_terminal_size "shutil.get_terminal_size") is the high-level function which should normally be used; `os.get_terminal_size` is the low-level implementation. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. `class os.terminal_size` A subclass of tuple, holding `(columns, lines)` of the terminal window size. `columns` Width of the terminal window in characters. `lines` Height of the terminal window in characters. ### Inheritance of File Descriptors New in version 3.4. A file descriptor has an “inheritable” flag which indicates if the file descriptor can be inherited by child processes. Since Python 3.4, file descriptors created by Python are non-inheritable by default. On UNIX, non-inheritable file descriptors are closed in child processes at the execution of a new program; other file descriptors are inherited. On Windows, non-inheritable handles and file descriptors are closed in child processes, except for standard streams (file descriptors 0, 1 and 2: stdin, stdout and stderr), which are always inherited. Using [`spawn*`](#os.spawnl "os.spawnl") functions, all inheritable handles and all inheritable file descriptors are inherited. Using the [`subprocess`](subprocess#module-subprocess "subprocess: Subprocess management.") module, all file descriptors except standard streams are closed, and inheritable handles are only inherited if the *close\_fds* parameter is `False`. `os.get_inheritable(fd)` Get the “inheritable” flag of the specified file descriptor (a boolean). `os.set_inheritable(fd, inheritable)` Set the “inheritable” flag of the specified file descriptor. `os.get_handle_inheritable(handle)` Get the “inheritable” flag of the specified handle (a boolean). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows. `os.set_handle_inheritable(handle, inheritable)` Set the “inheritable” flag of the specified handle. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows. Files and Directories --------------------- On some Unix platforms, many of these functions support one or more of these features: * **specifying a file descriptor:** Normally the *path* argument provided to functions in the [`os`](#module-os "os: Miscellaneous operating system interfaces.") module must be a string specifying a file path. However, some functions now alternatively accept an open file descriptor for their *path* argument. The function will then operate on the file referred to by the descriptor. (For POSIX systems, Python will call the variant of the function prefixed with `f` (e.g. call `fchdir` instead of `chdir`).) You can check whether or not *path* can be specified as a file descriptor for a particular function on your platform using [`os.supports_fd`](#os.supports_fd "os.supports_fd").
If this functionality is unavailable, using it will raise a [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError"). If the function also supports *dir\_fd* or *follow\_symlinks* arguments, it’s an error to specify one of those when supplying *path* as a file descriptor. * **paths relative to directory descriptors:** If *dir\_fd* is not `None`, it should be a file descriptor referring to a directory, and the path to operate on should be relative; path will then be relative to that directory. If the path is absolute, *dir\_fd* is ignored. (For POSIX systems, Python will call the variant of the function with an `at` suffix and possibly prefixed with `f` (e.g. call `faccessat` instead of `access`).) You can check whether or not *dir\_fd* is supported for a particular function on your platform using [`os.supports_dir_fd`](#os.supports_dir_fd "os.supports_dir_fd"). If it’s unavailable, using it will raise a [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError"). * **not following symlinks:** If *follow\_symlinks* is `False`, and the last element of the path to operate on is a symbolic link, the function will operate on the symbolic link itself rather than the file pointed to by the link. (For POSIX systems, Python will call the `l...` variant of the function.) You can check whether or not *follow\_symlinks* is supported for a particular function on your platform using [`os.supports_follow_symlinks`](#os.supports_follow_symlinks "os.supports_follow_symlinks"). If it’s unavailable, using it will raise a [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError"). `os.access(path, mode, *, dir_fd=None, effective_ids=False, follow_symlinks=True)` Use the real uid/gid to test for access to *path*. Note that most operations will use the effective uid/gid; therefore, this routine can be used in a suid/sgid environment to test if the invoking user has the specified access to *path*. *mode* should be [`F_OK`](#os.F_OK "os.F_OK") to test the existence of *path*, or it can be the inclusive OR of one or more of [`R_OK`](#os.R_OK "os.R_OK"), [`W_OK`](#os.W_OK "os.W_OK"), and [`X_OK`](#os.X_OK "os.X_OK") to test permissions. Return [`True`](constants#True "True") if access is allowed, [`False`](constants#False "False") if not. See the Unix man page *[access(2)](https://manpages.debian.org/access(2))* for more information. This function can support specifying [paths relative to directory descriptors](#dir-fd) and [not following symlinks](#follow-symlinks). If *effective\_ids* is `True`, [`access()`](#os.access "os.access") will perform its access checks using the effective uid/gid instead of the real uid/gid. *effective\_ids* may not be supported on your platform; you can check whether or not it is available using [`os.supports_effective_ids`](#os.supports_effective_ids "os.supports_effective_ids"). If it is unavailable, using it will raise a [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError"). Note Using [`access()`](#os.access "os.access") to check if a user is authorized to e.g. open a file before actually doing so using [`open()`](functions#open "open") creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. It’s preferable to use [EAFP](../glossary#term-eafp) techniques.
For example: ``` if os.access("myfile", os.R_OK): with open("myfile") as fp: return fp.read() return "some default data" ``` is better written as: ``` try: fp = open("myfile") except PermissionError: return "some default data" else: with fp: return fp.read() ``` Note I/O operations may fail even when [`access()`](#os.access "os.access") indicates that they would succeed, particularly for operations on network filesystems which may have permissions semantics beyond the usual POSIX permission-bit model. Changed in version 3.3: Added the *dir\_fd*, *effective\_ids*, and *follow\_symlinks* parameters. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.F_OK` `os.R_OK` `os.W_OK` `os.X_OK` Values to pass as the *mode* parameter of [`access()`](#os.access "os.access") to test the existence, readability, writability and executability of *path*, respectively. `os.chdir(path)` Change the current working directory to *path*. This function can support [specifying a file descriptor](#path-fd). The descriptor must refer to an opened directory, not an open file. This function can raise [`OSError`](exceptions#OSError "OSError") and subclasses such as [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError"), [`PermissionError`](exceptions#PermissionError "PermissionError"), and [`NotADirectoryError`](exceptions#NotADirectoryError "NotADirectoryError"). Raises an [auditing event](sys#auditing) `os.chdir` with argument `path`. New in version 3.3: Added support for specifying *path* as a file descriptor on some platforms. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.chflags(path, flags, *, follow_symlinks=True)` Set the flags of *path* to the numeric *flags*. *flags* may take a combination (bitwise OR) of the following values (as defined in the [`stat`](stat#module-stat "stat: Utilities for interpreting the results of os.stat(), os.lstat() and os.fstat().") module): * [`stat.UF_NODUMP`](stat#stat.UF_NODUMP "stat.UF_NODUMP") * [`stat.UF_IMMUTABLE`](stat#stat.UF_IMMUTABLE "stat.UF_IMMUTABLE") * [`stat.UF_APPEND`](stat#stat.UF_APPEND "stat.UF_APPEND") * [`stat.UF_OPAQUE`](stat#stat.UF_OPAQUE "stat.UF_OPAQUE") * [`stat.UF_NOUNLINK`](stat#stat.UF_NOUNLINK "stat.UF_NOUNLINK") * [`stat.UF_COMPRESSED`](stat#stat.UF_COMPRESSED "stat.UF_COMPRESSED") * [`stat.UF_HIDDEN`](stat#stat.UF_HIDDEN "stat.UF_HIDDEN") * [`stat.SF_ARCHIVED`](stat#stat.SF_ARCHIVED "stat.SF_ARCHIVED") * [`stat.SF_IMMUTABLE`](stat#stat.SF_IMMUTABLE "stat.SF_IMMUTABLE") * [`stat.SF_APPEND`](stat#stat.SF_APPEND "stat.SF_APPEND") * [`stat.SF_NOUNLINK`](stat#stat.SF_NOUNLINK "stat.SF_NOUNLINK") * [`stat.SF_SNAPSHOT`](stat#stat.SF_SNAPSHOT "stat.SF_SNAPSHOT") This function can support [not following symlinks](#follow-symlinks). Raises an [auditing event](sys#auditing) `os.chflags` with arguments `path`, `flags`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3: The *follow\_symlinks* argument. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.chmod(path, mode, *, dir_fd=None, follow_symlinks=True)` Change the mode of *path* to the numeric *mode*. 
*mode* may take one of the following values (as defined in the [`stat`](stat#module-stat "stat: Utilities for interpreting the results of os.stat(), os.lstat() and os.fstat().") module) or bitwise ORed combinations of them: * [`stat.S_ISUID`](stat#stat.S_ISUID "stat.S_ISUID") * [`stat.S_ISGID`](stat#stat.S_ISGID "stat.S_ISGID") * [`stat.S_ENFMT`](stat#stat.S_ENFMT "stat.S_ENFMT") * [`stat.S_ISVTX`](stat#stat.S_ISVTX "stat.S_ISVTX") * [`stat.S_IREAD`](stat#stat.S_IREAD "stat.S_IREAD") * [`stat.S_IWRITE`](stat#stat.S_IWRITE "stat.S_IWRITE") * [`stat.S_IEXEC`](stat#stat.S_IEXEC "stat.S_IEXEC") * [`stat.S_IRWXU`](stat#stat.S_IRWXU "stat.S_IRWXU") * [`stat.S_IRUSR`](stat#stat.S_IRUSR "stat.S_IRUSR") * [`stat.S_IWUSR`](stat#stat.S_IWUSR "stat.S_IWUSR") * [`stat.S_IXUSR`](stat#stat.S_IXUSR "stat.S_IXUSR") * [`stat.S_IRWXG`](stat#stat.S_IRWXG "stat.S_IRWXG") * [`stat.S_IRGRP`](stat#stat.S_IRGRP "stat.S_IRGRP") * [`stat.S_IWGRP`](stat#stat.S_IWGRP "stat.S_IWGRP") * [`stat.S_IXGRP`](stat#stat.S_IXGRP "stat.S_IXGRP") * [`stat.S_IRWXO`](stat#stat.S_IRWXO "stat.S_IRWXO") * [`stat.S_IROTH`](stat#stat.S_IROTH "stat.S_IROTH") * [`stat.S_IWOTH`](stat#stat.S_IWOTH "stat.S_IWOTH") * [`stat.S_IXOTH`](stat#stat.S_IXOTH "stat.S_IXOTH") This function can support [specifying a file descriptor](#path-fd), [paths relative to directory descriptors](#dir-fd) and [not following symlinks](#follow-symlinks). Note Although Windows supports [`chmod()`](#os.chmod "os.chmod"), you can only set the file’s read-only flag with it (via the `stat.S_IWRITE` and `stat.S_IREAD` constants or a corresponding integer value). All other bits are ignored. Raises an [auditing event](sys#auditing) `os.chmod` with arguments `path`, `mode`, `dir_fd`. New in version 3.3: Added support for specifying *path* as an open file descriptor, and the *dir\_fd* and *follow\_symlinks* arguments. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.chown(path, uid, gid, *, dir_fd=None, follow_symlinks=True)` Change the owner and group id of *path* to the numeric *uid* and *gid*. To leave one of the ids unchanged, set it to -1. This function can support [specifying a file descriptor](#path-fd), [paths relative to directory descriptors](#dir-fd) and [not following symlinks](#follow-symlinks). See [`shutil.chown()`](shutil#shutil.chown "shutil.chown") for a higher-level function that accepts names in addition to numeric ids. Raises an [auditing event](sys#auditing) `os.chown` with arguments `path`, `uid`, `gid`, `dir_fd`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3: Added support for specifying *path* as an open file descriptor, and the *dir\_fd* and *follow\_symlinks* arguments. Changed in version 3.6: Supports a [path-like object](../glossary#term-path-like-object). `os.chroot(path)` Change the root directory of the current process to *path*. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.fchdir(fd)` Change the current working directory to the directory represented by the file descriptor *fd*. The descriptor must refer to an opened directory, not an open file. As of Python 3.3, this is equivalent to `os.chdir(fd)`. Raises an [auditing event](sys#auditing) `os.chdir` with argument `path`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. 
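To tie the mode constants above together, here is a minimal sketch of [`chmod()`](#os.chmod "os.chmod") (the file name is purely illustrative; the snippet creates it so it can run as-is):

```
import os
import stat

# Illustrative file so the snippet is self-contained.
with open("example.txt", "w") as f:
    f.write("demo\n")

# Owner may read and write; group and others may only read.
os.chmod("example.txt",
         stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

# The same permissions expressed as an octal literal:
os.chmod("example.txt", 0o644)
```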
`os.getcwd()` Return a string representing the current working directory. `os.getcwdb()` Return a bytestring representing the current working directory. Changed in version 3.8: The function now uses the UTF-8 encoding on Windows, rather than the ANSI code page: see [**PEP 529**](https://www.python.org/dev/peps/pep-0529) for the rationale. The function is no longer deprecated on Windows. `os.lchflags(path, flags)` Set the flags of *path* to the numeric *flags*, like [`chflags()`](#os.chflags "os.chflags"), but do not follow symbolic links. As of Python 3.3, this is equivalent to `os.chflags(path, flags, follow_symlinks=False)`. Raises an [auditing event](sys#auditing) `os.chflags` with arguments `path`, `flags`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.lchmod(path, mode)` Change the mode of *path* to the numeric *mode*. If path is a symlink, this affects the symlink rather than the target. See the docs for [`chmod()`](#os.chmod "os.chmod") for possible values of *mode*. As of Python 3.3, this is equivalent to `os.chmod(path, mode, follow_symlinks=False)`. Raises an [auditing event](sys#auditing) `os.chmod` with arguments `path`, `mode`, `dir_fd`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.lchown(path, uid, gid)` Change the owner and group id of *path* to the numeric *uid* and *gid*. This function will not follow symbolic links. As of Python 3.3, this is equivalent to `os.chown(path, uid, gid, follow_symlinks=False)`. Raises an [auditing event](sys#auditing) `os.chown` with arguments `path`, `uid`, `gid`, `dir_fd`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.link(src, dst, *, src_dir_fd=None, dst_dir_fd=None, follow_symlinks=True)` Create a hard link pointing to *src* named *dst*. This function can support specifying *src\_dir\_fd* and/or *dst\_dir\_fd* to supply [paths relative to directory descriptors](#dir-fd), and [not following symlinks](#follow-symlinks). Raises an [auditing event](sys#auditing) `os.link` with arguments `src`, `dst`, `src_dir_fd`, `dst_dir_fd`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.2: Added Windows support. New in version 3.3: Added the *src\_dir\_fd*, *dst\_dir\_fd*, and *follow\_symlinks* arguments. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *src* and *dst*. `os.listdir(path='.')` Return a list containing the names of the entries in the directory given by *path*. The list is in arbitrary order, and does not include the special entries `'.'` and `'..'` even if they are present in the directory. If a file is removed from or added to the directory during the call of this function, it is unspecified whether a name for that file will be included. *path* may be a [path-like object](../glossary#term-path-like-object). If *path* is of type `bytes` (directly or indirectly through the [`PathLike`](#os.PathLike "os.PathLike") interface), the filenames returned will also be of type `bytes`; in all other circumstances, they will be of type `str`. This function can also support [specifying a file descriptor](#path-fd); the file descriptor must refer to a directory.
Raises an [auditing event](sys#auditing) `os.listdir` with argument `path`. Note To encode `str` filenames to `bytes`, use [`fsencode()`](#os.fsencode "os.fsencode"). See also The [`scandir()`](#os.scandir "os.scandir") function returns directory entries along with file attribute information, giving better performance for many common use cases. Changed in version 3.2: The *path* parameter became optional. New in version 3.3: Added support for specifying *path* as an open file descriptor. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.lstat(path, *, dir_fd=None)` Perform the equivalent of an `lstat()` system call on the given path. Similar to [`stat()`](#os.stat "os.stat"), but does not follow symbolic links. Return a [`stat_result`](#os.stat_result "os.stat_result") object. On platforms that do not support symbolic links, this is an alias for [`stat()`](#os.stat "os.stat"). As of Python 3.3, this is equivalent to `os.stat(path, dir_fd=dir_fd, follow_symlinks=False)`. This function can also support [paths relative to directory descriptors](#dir-fd). See also The [`stat()`](#os.stat "os.stat") function. Changed in version 3.2: Added support for Windows 6.0 (Vista) symbolic links. Changed in version 3.3: Added the *dir\_fd* parameter. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.8: On Windows, now opens reparse points that represent another path (name surrogates), including symbolic links and directory junctions. Other kinds of reparse points are resolved by the operating system as for [`stat()`](#os.stat "os.stat"). `os.mkdir(path, mode=0o777, *, dir_fd=None)` Create a directory named *path* with numeric mode *mode*. If the directory already exists, [`FileExistsError`](exceptions#FileExistsError "FileExistsError") is raised. If a parent directory in the path does not exist, [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") is raised. On some systems, *mode* is ignored. Where it is used, the current umask value is first masked out. If bits other than the last 9 (i.e. the last 3 digits of the octal representation of the *mode*) are set, their meaning is platform-dependent. On some platforms, they are ignored and you should call [`chmod()`](#os.chmod "os.chmod") explicitly to set them. This function can also support [paths relative to directory descriptors](#dir-fd). It is also possible to create temporary directories; see the [`tempfile`](tempfile#module-tempfile "tempfile: Generate temporary files and directories.") module’s [`tempfile.mkdtemp()`](tempfile#tempfile.mkdtemp "tempfile.mkdtemp") function. Raises an [auditing event](sys#auditing) `os.mkdir` with arguments `path`, `mode`, `dir_fd`. New in version 3.3: The *dir\_fd* argument. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.makedirs(name, mode=0o777, exist_ok=False)` Recursive directory creation function. Like [`mkdir()`](#os.mkdir "os.mkdir"), but makes all intermediate-level directories needed to contain the leaf directory. The *mode* parameter is passed to [`mkdir()`](#os.mkdir "os.mkdir") for creating the leaf directory; see [the mkdir() description](#mkdir-modebits) for how it is interpreted. To set the file permission bits of any newly-created parent directories you can set the umask before invoking [`makedirs()`](#os.makedirs "os.makedirs"). The file permission bits of existing parent directories are not changed. 
If *exist\_ok* is `False` (the default), a [`FileExistsError`](exceptions#FileExistsError "FileExistsError") is raised if the target directory already exists. Note [`makedirs()`](#os.makedirs "os.makedirs") will become confused if the path elements to create include [`pardir`](#os.pardir "os.pardir") (e.g. “..” on UNIX systems). This function handles UNC paths correctly. Raises an [auditing event](sys#auditing) `os.mkdir` with arguments `path`, `mode`, `dir_fd`. New in version 3.2: The *exist\_ok* parameter. Changed in version 3.4.1: Before Python 3.4.1, if *exist\_ok* was `True` and the directory existed, [`makedirs()`](#os.makedirs "os.makedirs") would still raise an error if *mode* did not match the mode of the existing directory. Since this behavior was impossible to implement safely, it was removed in Python 3.4.1. See [bpo-21082](https://bugs.python.org/issue?@action=redirect&bpo=21082). Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.7: The *mode* argument no longer affects the file permission bits of newly-created intermediate-level directories. `os.mkfifo(path, mode=0o666, *, dir_fd=None)` Create a FIFO (a named pipe) named *path* with numeric mode *mode*. The current umask value is first masked out from the mode. This function can also support [paths relative to directory descriptors](#dir-fd). FIFOs are pipes that can be accessed like regular files. FIFOs exist until they are deleted (for example with [`os.unlink()`](#os.unlink "os.unlink")). Generally, FIFOs are used as rendezvous between “client” and “server” type processes: the server opens the FIFO for reading, and the client opens it for writing. Note that [`mkfifo()`](#os.mkfifo "os.mkfifo") doesn’t open the FIFO — it just creates the rendezvous point. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3: The *dir\_fd* argument. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.mknod(path, mode=0o600, device=0, *, dir_fd=None)` Create a filesystem node (file, device special file or named pipe) named *path*. *mode* specifies both the permissions to use and the type of node to be created, being combined (bitwise OR) with one of `stat.S_IFREG`, `stat.S_IFCHR`, `stat.S_IFBLK`, and `stat.S_IFIFO` (those constants are available in [`stat`](stat#module-stat "stat: Utilities for interpreting the results of os.stat(), os.lstat() and os.fstat().")). For `stat.S_IFCHR` and `stat.S_IFBLK`, *device* defines the newly created device special file (probably using [`os.makedev()`](#os.makedev "os.makedev")), otherwise it is ignored. This function can also support [paths relative to directory descriptors](#dir-fd). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3: The *dir\_fd* argument. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.major(device)` Extract the device major number from a raw device number (usually the `st_dev` or `st_rdev` field from `stat`). `os.minor(device)` Extract the device minor number from a raw device number (usually the `st_dev` or `st_rdev` field from `stat`). `os.makedev(major, minor)` Compose a raw device number from the major and minor device numbers. `os.pathconf(path, name)` Return system configuration information relevant to a named file.
*name* specifies the configuration value to retrieve; it may be a string which is the name of a defined system value; these names are specified in a number of standards (POSIX.1, Unix 95, Unix 98, and others). Some platforms define additional names as well. The names known to the host operating system are given in the `pathconf_names` dictionary. For configuration variables not included in that mapping, passing an integer for *name* is also accepted. If *name* is a string and is not known, [`ValueError`](exceptions#ValueError "ValueError") is raised. If a specific value for *name* is not supported by the host system, even if it is included in `pathconf_names`, an [`OSError`](exceptions#OSError "OSError") is raised with [`errno.EINVAL`](errno#errno.EINVAL "errno.EINVAL") for the error number. This function can support [specifying a file descriptor](#path-fd). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.pathconf_names` Dictionary mapping names accepted by [`pathconf()`](#os.pathconf "os.pathconf") and [`fpathconf()`](#os.fpathconf "os.fpathconf") to the integer values defined for those names by the host operating system. This can be used to determine the set of names known to the system. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.readlink(path, *, dir_fd=None)` Return a string representing the path to which the symbolic link points. The result may be either an absolute or relative pathname; if it is relative, it may be converted to an absolute pathname using `os.path.join(os.path.dirname(path), result)`. If the *path* is a string object (directly or indirectly through a [`PathLike`](#os.PathLike "os.PathLike") interface), the result will also be a string object, and the call may raise a UnicodeDecodeError. If the *path* is a bytes object (directly or indirectly), the result will be a bytes object. This function can also support [paths relative to directory descriptors](#dir-fd). When trying to resolve a path that may contain links, use [`realpath()`](os.path#os.path.realpath "os.path.realpath") to properly handle recursion and platform differences. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. Changed in version 3.2: Added support for Windows 6.0 (Vista) symbolic links. New in version 3.3: The *dir\_fd* argument. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) on Unix. Changed in version 3.8: Accepts a [path-like object](../glossary#term-path-like-object) and a bytes object on Windows. Changed in version 3.8: Added support for directory junctions, and changed to return the substitution path (which typically includes `\\?\` prefix) rather than the optional “print name” field that was previously returned. `os.remove(path, *, dir_fd=None)` Remove (delete) the file *path*. If *path* is a directory, an [`IsADirectoryError`](exceptions#IsADirectoryError "IsADirectoryError") is raised. Use [`rmdir()`](#os.rmdir "os.rmdir") to remove directories. If the file does not exist, a [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") is raised. This function can support [paths relative to directory descriptors](#dir-fd).
On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use. This function is semantically identical to [`unlink()`](#os.unlink "os.unlink"). Raises an [auditing event](sys#auditing) `os.remove` with arguments `path`, `dir_fd`. New in version 3.3: The *dir\_fd* argument. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.removedirs(name)` Remove directories recursively. Works like [`rmdir()`](#os.rmdir "os.rmdir") except that, if the leaf directory is successfully removed, [`removedirs()`](#os.removedirs "os.removedirs") tries to successively remove every parent directory mentioned in *name* until an error is raised (which is ignored, because it generally means that a parent directory is not empty). For example, `os.removedirs('foo/bar/baz')` will first remove the directory `'foo/bar/baz'`, and then remove `'foo/bar'` and `'foo'` if they are empty. Raises [`OSError`](exceptions#OSError "OSError") if the leaf directory could not be successfully removed. Raises an [auditing event](sys#auditing) `os.remove` with arguments `path`, `dir_fd`. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.rename(src, dst, *, src_dir_fd=None, dst_dir_fd=None)` Rename the file or directory *src* to *dst*. If *dst* exists, the operation will fail with an [`OSError`](exceptions#OSError "OSError") subclass in a number of cases: On Windows, if *dst* exists a [`FileExistsError`](exceptions#FileExistsError "FileExistsError") is always raised. On Unix, if *src* is a file and *dst* is a directory or vice-versa, an [`IsADirectoryError`](exceptions#IsADirectoryError "IsADirectoryError") or a [`NotADirectoryError`](exceptions#NotADirectoryError "NotADirectoryError") will be raised respectively. If both are directories and *dst* is empty, *dst* will be silently replaced. If *dst* is a non-empty directory, an [`OSError`](exceptions#OSError "OSError") is raised. If both are files, *dst* will be replaced silently if the user has permission. The operation may fail on some Unix flavors if *src* and *dst* are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement). This function can support specifying *src\_dir\_fd* and/or *dst\_dir\_fd* to supply [paths relative to directory descriptors](#dir-fd). If you want cross-platform overwriting of the destination, use [`replace()`](#os.replace "os.replace"). Raises an [auditing event](sys#auditing) `os.rename` with arguments `src`, `dst`, `src_dir_fd`, `dst_dir_fd`. New in version 3.3: The *src\_dir\_fd* and *dst\_dir\_fd* arguments. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *src* and *dst*. `os.renames(old, new)` Recursive directory or file renaming function. Works like [`rename()`](#os.rename "os.rename"), except creation of any intermediate directories needed to make the new pathname good is attempted first. After the rename, directories corresponding to rightmost path segments of the old name will be pruned away using [`removedirs()`](#os.removedirs "os.removedirs"). Note This function can fail, with the new directory structure already created, if you lack permissions needed to remove the leaf directory or file. Raises an [auditing event](sys#auditing) `os.rename` with arguments `src`, `dst`, `src_dir_fd`, `dst_dir_fd`.
Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *old* and *new*. `os.replace(src, dst, *, src_dir_fd=None, dst_dir_fd=None)` Rename the file or directory *src* to *dst*. If *dst* is a non-empty directory, [`OSError`](exceptions#OSError "OSError") will be raised. If *dst* exists and is a file, it will be replaced silently if the user has permission. The operation may fail if *src* and *dst* are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement). This function can support specifying *src\_dir\_fd* and/or *dst\_dir\_fd* to supply [paths relative to directory descriptors](#dir-fd). Raises an [auditing event](sys#auditing) `os.rename` with arguments `src`, `dst`, `src_dir_fd`, `dst_dir_fd`. New in version 3.3. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *src* and *dst*. `os.rmdir(path, *, dir_fd=None)` Remove (delete) the directory *path*. If the directory does not exist or is not empty, a [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") or an [`OSError`](exceptions#OSError "OSError") is raised, respectively. In order to remove whole directory trees, [`shutil.rmtree()`](shutil#shutil.rmtree "shutil.rmtree") can be used. This function can support [paths relative to directory descriptors](#dir-fd). Raises an [auditing event](sys#auditing) `os.rmdir` with arguments `path`, `dir_fd`. New in version 3.3: The *dir\_fd* parameter. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os.scandir(path='.')` Return an iterator of [`os.DirEntry`](#os.DirEntry "os.DirEntry") objects corresponding to the entries in the directory given by *path*. The entries are yielded in arbitrary order, and the special entries `'.'` and `'..'` are not included. If a file is removed from or added to the directory after creating the iterator, it is unspecified whether an entry for that file will be included. Using [`scandir()`](#os.scandir "os.scandir") instead of [`listdir()`](#os.listdir "os.listdir") can significantly increase the performance of code that also needs file type or file attribute information, because [`os.DirEntry`](#os.DirEntry "os.DirEntry") objects expose this information if the operating system provides it when scanning a directory. All [`os.DirEntry`](#os.DirEntry "os.DirEntry") methods may perform a system call, but [`is_dir()`](#os.DirEntry.is_dir "os.DirEntry.is_dir") and [`is_file()`](#os.DirEntry.is_file "os.DirEntry.is_file") usually only require a system call for symbolic links; [`os.DirEntry.stat()`](#os.DirEntry.stat "os.DirEntry.stat") always requires a system call on Unix but only requires one for symbolic links on Windows. *path* may be a [path-like object](../glossary#term-path-like-object). If *path* is of type `bytes` (directly or indirectly through the [`PathLike`](#os.PathLike "os.PathLike") interface), the type of the [`name`](#os.DirEntry.name "os.DirEntry.name") and [`path`](#os.DirEntry.path "os.DirEntry.path") attributes of each [`os.DirEntry`](#os.DirEntry "os.DirEntry") will be `bytes`; in all other circumstances, they will be of type `str`. This function can also support [specifying a file descriptor](#path-fd); the file descriptor must refer to a directory. Raises an [auditing event](sys#auditing) `os.scandir` with argument `path`.
The [`scandir()`](#os.scandir "os.scandir") iterator supports the [context manager](../glossary#term-context-manager) protocol and has the following method: `scandir.close()` Close the iterator and free acquired resources. This is called automatically when the iterator is exhausted or garbage collected, or when an error happens during iterating. However, it is advisable to call it explicitly or use the [`with`](../reference/compound_stmts#with) statement. New in version 3.6. The following example shows a simple use of [`scandir()`](#os.scandir "os.scandir") to display all the files (excluding directories) in the given *path* that don’t start with `'.'`. The `entry.is_file()` call will generally not make an additional system call:

```
with os.scandir(path) as it:
    for entry in it:
        if not entry.name.startswith('.') and entry.is_file():
            print(entry.name)
```

Note On Unix-based systems, [`scandir()`](#os.scandir "os.scandir") uses the system’s [opendir()](http://pubs.opengroup.org/onlinepubs/009695399/functions/opendir.html) and [readdir()](http://pubs.opengroup.org/onlinepubs/009695399/functions/readdir_r.html) functions. On Windows, it uses the Win32 [FindFirstFileW](https://msdn.microsoft.com/en-us/library/windows/desktop/aa364418(v=vs.85).aspx) and [FindNextFileW](https://msdn.microsoft.com/en-us/library/windows/desktop/aa364428(v=vs.85).aspx) functions. New in version 3.5. New in version 3.6: Added support for the [context manager](../glossary#term-context-manager) protocol and the [`close()`](#os.scandir.close "os.scandir.close") method. If a [`scandir()`](#os.scandir "os.scandir") iterator is neither exhausted nor explicitly closed, a [`ResourceWarning`](exceptions#ResourceWarning "ResourceWarning") will be emitted in its destructor. The function accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.7: Added support for [file descriptors](#path-fd) on Unix. `class os.DirEntry` Object yielded by [`scandir()`](#os.scandir "os.scandir") to expose the file path and other file attributes of a directory entry. [`scandir()`](#os.scandir "os.scandir") will provide as much of this information as possible without making additional system calls. When a `stat()` or `lstat()` system call is made, the `os.DirEntry` object will cache the result. `os.DirEntry` instances are not intended to be stored in long-lived data structures; if you know the file metadata has changed or if a long time has elapsed since calling [`scandir()`](#os.scandir "os.scandir"), call `os.stat(entry.path)` to fetch up-to-date information. Because the `os.DirEntry` methods can make operating system calls, they may also raise [`OSError`](exceptions#OSError "OSError"). If you need very fine-grained control over errors, you can catch [`OSError`](exceptions#OSError "OSError") when calling one of the `os.DirEntry` methods and handle as appropriate. To be directly usable as a [path-like object](../glossary#term-path-like-object), `os.DirEntry` implements the [`PathLike`](#os.PathLike "os.PathLike") interface. Attributes and methods on a `os.DirEntry` instance are as follows: `name` The entry’s base filename, relative to the [`scandir()`](#os.scandir "os.scandir") *path* argument. The [`name`](#os.name "os.name") attribute will be `bytes` if the [`scandir()`](#os.scandir "os.scandir") *path* argument is of type `bytes` and `str` otherwise. Use [`fsdecode()`](#os.fsdecode "os.fsdecode") to decode byte filenames.
`path` The entry’s full path name: equivalent to `os.path.join(scandir_path, entry.name)` where *scandir\_path* is the [`scandir()`](#os.scandir "os.scandir") *path* argument. The path is only absolute if the [`scandir()`](#os.scandir "os.scandir") *path* argument was absolute. If the [`scandir()`](#os.scandir "os.scandir") *path* argument was a [file descriptor](#path-fd), the [`path`](os.path#module-os.path "os.path: Operations on pathnames.") attribute is the same as the [`name`](#os.name "os.name") attribute. The [`path`](os.path#module-os.path "os.path: Operations on pathnames.") attribute will be `bytes` if the [`scandir()`](#os.scandir "os.scandir") *path* argument is of type `bytes` and `str` otherwise. Use [`fsdecode()`](#os.fsdecode "os.fsdecode") to decode byte filenames. `inode()` Return the inode number of the entry. The result is cached on the `os.DirEntry` object. Use `os.stat(entry.path, follow_symlinks=False).st_ino` to fetch up-to-date information. On the first, uncached call, a system call is required on Windows but not on Unix. `is_dir(*, follow_symlinks=True)` Return `True` if this entry is a directory or a symbolic link pointing to a directory; return `False` if the entry is or points to any other kind of file, or if it doesn’t exist anymore. If *follow\_symlinks* is `False`, return `True` only if this entry is a directory (without following symlinks); return `False` if the entry is any other kind of file or if it doesn’t exist anymore. The result is cached on the `os.DirEntry` object, with a separate cache for *follow\_symlinks* `True` and `False`. Call [`os.stat()`](#os.stat "os.stat") along with [`stat.S_ISDIR()`](stat#stat.S_ISDIR "stat.S_ISDIR") to fetch up-to-date information. On the first, uncached call, no system call is required in most cases. Specifically, for non-symlinks, neither Windows nor Unix requires a system call, except on certain Unix file systems, such as network file systems, that return `dirent.d_type == DT_UNKNOWN`. If the entry is a symlink, a system call will be required to follow the symlink unless *follow\_symlinks* is `False`. This method can raise [`OSError`](exceptions#OSError "OSError"), such as [`PermissionError`](exceptions#PermissionError "PermissionError"), but [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") is caught and not raised. `is_file(*, follow_symlinks=True)` Return `True` if this entry is a file or a symbolic link pointing to a file; return `False` if the entry is or points to a directory or other non-file entry, or if it doesn’t exist anymore. If *follow\_symlinks* is `False`, return `True` only if this entry is a file (without following symlinks); return `False` if the entry is a directory or other non-file entry, or if it doesn’t exist anymore. The result is cached on the `os.DirEntry` object. Caching, system calls made, and exceptions raised are as per [`is_dir()`](#os.DirEntry.is_dir "os.DirEntry.is_dir"). `is_symlink()` Return `True` if this entry is a symbolic link (even if broken); return `False` if the entry points to a directory or any kind of file, or if it doesn’t exist anymore. The result is cached on the `os.DirEntry` object. Call [`os.path.islink()`](os.path#os.path.islink "os.path.islink") to fetch up-to-date information. On the first, uncached call, no system call is required in most cases. Specifically, neither Windows nor Unix requires a system call, except on certain Unix file systems, such as network file systems, that return `dirent.d_type == DT_UNKNOWN`.
This method can raise [`OSError`](exceptions#OSError "OSError"), such as [`PermissionError`](exceptions#PermissionError "PermissionError"), but [`FileNotFoundError`](exceptions#FileNotFoundError "FileNotFoundError") is caught and not raised. `stat(*, follow_symlinks=True)` Return a [`stat_result`](#os.stat_result "os.stat_result") object for this entry. This method follows symbolic links by default; to stat a symbolic link add the `follow_symlinks=False` argument. On Unix, this method always requires a system call. On Windows, it only requires a system call if *follow\_symlinks* is `True` and the entry is a reparse point (for example, a symbolic link or directory junction). On Windows, the `st_ino`, `st_dev` and `st_nlink` attributes of the [`stat_result`](#os.stat_result "os.stat_result") are always set to zero. Call [`os.stat()`](#os.stat "os.stat") to get these attributes. The result is cached on the `os.DirEntry` object, with a separate cache for *follow\_symlinks* `True` and `False`. Call [`os.stat()`](#os.stat "os.stat") to fetch up-to-date information. Note that there is a nice correspondence between several attributes and methods of `os.DirEntry` and of [`pathlib.Path`](pathlib#pathlib.Path "pathlib.Path"). In particular, the `name` attribute has the same meaning, as do the `is_dir()`, `is_file()`, `is_symlink()` and `stat()` methods. New in version 3.5. Changed in version 3.6: Added support for the [`PathLike`](#os.PathLike "os.PathLike") interface. Added support for [`bytes`](stdtypes#bytes "bytes") paths on Windows. `os.stat(path, *, dir_fd=None, follow_symlinks=True)` Get the status of a file or a file descriptor. Perform the equivalent of a `stat()` system call on the given path. *path* may be specified as either a string or bytes – directly or indirectly through the [`PathLike`](#os.PathLike "os.PathLike") interface – or as an open file descriptor. Return a [`stat_result`](#os.stat_result "os.stat_result") object. This function normally follows symlinks; to stat a symlink add the argument `follow_symlinks=False`, or use [`lstat()`](#os.lstat "os.lstat"). This function can support [specifying a file descriptor](#path-fd) and [not following symlinks](#follow-symlinks). On Windows, passing `follow_symlinks=False` will disable following all name-surrogate reparse points, which includes symlinks and directory junctions. Other types of reparse points that do not resemble links or that the operating system is unable to follow will be opened directly. When following a chain of multiple links, this may result in the original link being returned instead of the non-link that prevented full traversal. To obtain stat results for the final path in this case, use the [`os.path.realpath()`](os.path#os.path.realpath "os.path.realpath") function to resolve the path name as far as possible and call [`lstat()`](#os.lstat "os.lstat") on the result. This does not apply to dangling symlinks or junction points, which will raise the usual exceptions. Example: ``` >>> import os >>> statinfo = os.stat('somefile.txt') >>> statinfo os.stat_result(st_mode=33188, st_ino=7876932, st_dev=234881026, st_nlink=1, st_uid=501, st_gid=501, st_size=264, st_atime=1297230295, st_mtime=1297230027, st_ctime=1297230027) >>> statinfo.st_size 264 ``` See also [`fstat()`](#os.fstat "os.fstat") and [`lstat()`](#os.lstat "os.lstat") functions. New in version 3.3: Added the *dir\_fd* and *follow\_symlinks* arguments, specifying a file descriptor instead of a path. 
Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Changed in version 3.8: On Windows, all reparse points that can be resolved by the operating system are now followed, and passing `follow_symlinks=False` disables following all name surrogate reparse points. If the operating system reaches a reparse point that it is not able to follow, *stat* now returns the information for the original path as if `follow_symlinks=False` had been specified instead of raising an error. `class os.stat_result` Object whose attributes correspond roughly to the members of the `stat` structure. It is used for the result of [`os.stat()`](#os.stat "os.stat"), [`os.fstat()`](#os.fstat "os.fstat") and [`os.lstat()`](#os.lstat "os.lstat"). Attributes: `st_mode` File mode: file type and file mode bits (permissions). `st_ino` Platform dependent, but if non-zero, uniquely identifies the file for a given value of `st_dev`. Typically: * the inode number on Unix, * the [file index](https://msdn.microsoft.com/en-us/library/aa363788) on Windows `st_dev` Identifier of the device on which this file resides. `st_nlink` Number of hard links. `st_uid` User identifier of the file owner. `st_gid` Group identifier of the file owner. `st_size` Size of the file in bytes, if it is a regular file or a symbolic link. The size of a symbolic link is the length of the pathname it contains, without a terminating null byte. Timestamps: `st_atime` Time of most recent access expressed in seconds. `st_mtime` Time of most recent content modification expressed in seconds. `st_ctime` Platform dependent: * the time of most recent metadata change on Unix, * the time of creation on Windows, expressed in seconds. `st_atime_ns` Time of most recent access expressed in nanoseconds as an integer. `st_mtime_ns` Time of most recent content modification expressed in nanoseconds as an integer. `st_ctime_ns` Platform dependent: * the time of most recent metadata change on Unix, * the time of creation on Windows, expressed in nanoseconds as an integer. Note The exact meaning and resolution of the [`st_atime`](#os.stat_result.st_atime "os.stat_result.st_atime"), [`st_mtime`](#os.stat_result.st_mtime "os.stat_result.st_mtime"), and [`st_ctime`](#os.stat_result.st_ctime "os.stat_result.st_ctime") attributes depend on the operating system and the file system. For example, on Windows systems using the FAT or FAT32 file systems, [`st_mtime`](#os.stat_result.st_mtime "os.stat_result.st_mtime") has 2-second resolution, and [`st_atime`](#os.stat_result.st_atime "os.stat_result.st_atime") has only 1-day resolution. See your operating system documentation for details. Similarly, although [`st_atime_ns`](#os.stat_result.st_atime_ns "os.stat_result.st_atime_ns"), [`st_mtime_ns`](#os.stat_result.st_mtime_ns "os.stat_result.st_mtime_ns"), and [`st_ctime_ns`](#os.stat_result.st_ctime_ns "os.stat_result.st_ctime_ns") are always expressed in nanoseconds, many systems do not provide nanosecond precision. On systems that do provide nanosecond precision, the floating-point object used to store [`st_atime`](#os.stat_result.st_atime "os.stat_result.st_atime"), [`st_mtime`](#os.stat_result.st_mtime "os.stat_result.st_mtime"), and [`st_ctime`](#os.stat_result.st_ctime "os.stat_result.st_ctime") cannot preserve all of it, and as such will be slightly inexact. 
If you need the exact timestamps you should always use [`st_atime_ns`](#os.stat_result.st_atime_ns "os.stat_result.st_atime_ns"), [`st_mtime_ns`](#os.stat_result.st_mtime_ns "os.stat_result.st_mtime_ns"), and [`st_ctime_ns`](#os.stat_result.st_ctime_ns "os.stat_result.st_ctime_ns").

On some Unix systems (such as Linux), the following attributes may also be available:

`st_blocks`

Number of 512-byte blocks allocated for file. This may be smaller than [`st_size`](#os.stat_result.st_size "os.stat_result.st_size")/512 when the file has holes.

`st_blksize`

“Preferred” blocksize for efficient file system I/O. Writing to a file in smaller chunks may cause an inefficient read-modify-rewrite.

`st_rdev`

Type of device if an inode device.

`st_flags`

User defined flags for file.

On other Unix systems (such as FreeBSD), the following attributes may be available (but may be only filled out if root tries to use them):

`st_gen`

File generation number.

`st_birthtime`

Time of file creation.

On Solaris and derivatives, the following attributes may also be available:

`st_fstype`

String that uniquely identifies the type of the filesystem that contains the file.

On macOS systems, the following attributes may also be available:

`st_rsize`

Real size of the file.

`st_creator`

Creator of the file.

`st_type`

File type.

On Windows systems, the following attributes are also available:

`st_file_attributes`

Windows file attributes: `dwFileAttributes` member of the `BY_HANDLE_FILE_INFORMATION` structure returned by `GetFileInformationByHandle()`. See the `FILE_ATTRIBUTE_*` constants in the [`stat`](stat#module-stat "stat: Utilities for interpreting the results of os.stat(), os.lstat() and os.fstat().") module.

`st_reparse_tag`

When [`st_file_attributes`](#os.stat_result.st_file_attributes "os.stat_result.st_file_attributes") has the `FILE_ATTRIBUTE_REPARSE_POINT` set, this field contains the tag identifying the type of reparse point. See the `IO_REPARSE_TAG_*` constants in the [`stat`](stat#module-stat "stat: Utilities for interpreting the results of os.stat(), os.lstat() and os.fstat().") module.

The standard module [`stat`](stat#module-stat "stat: Utilities for interpreting the results of os.stat(), os.lstat() and os.fstat().") defines functions and constants that are useful for extracting information from a `stat` structure. (On Windows, some items are filled with dummy values.)

For backward compatibility, a [`stat_result`](#os.stat_result "os.stat_result") instance is also accessible as a tuple of at least 10 integers giving the most important (and portable) members of the `stat` structure, in the order [`st_mode`](#os.stat_result.st_mode "os.stat_result.st_mode"), [`st_ino`](#os.stat_result.st_ino "os.stat_result.st_ino"), [`st_dev`](#os.stat_result.st_dev "os.stat_result.st_dev"), [`st_nlink`](#os.stat_result.st_nlink "os.stat_result.st_nlink"), [`st_uid`](#os.stat_result.st_uid "os.stat_result.st_uid"), [`st_gid`](#os.stat_result.st_gid "os.stat_result.st_gid"), [`st_size`](#os.stat_result.st_size "os.stat_result.st_size"), [`st_atime`](#os.stat_result.st_atime "os.stat_result.st_atime"), [`st_mtime`](#os.stat_result.st_mtime "os.stat_result.st_mtime"), [`st_ctime`](#os.stat_result.st_ctime "os.stat_result.st_ctime"). More items may be added at the end by some implementations. For compatibility with older Python versions, accessing [`stat_result`](#os.stat_result "os.stat_result") as a tuple always returns integers.
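As a quick illustration of that equivalence (a minimal sketch; the file name is a placeholder), the named attributes and the tuple items refer to the same data:

```
>>> import os
>>> st = os.stat('somefile.txt')
>>> st.st_mode == st[0] and st.st_size == st[6]
True
>>> len(tuple(st))  # the ten portable members
10
```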
New in version 3.3: Added the [`st_atime_ns`](#os.stat_result.st_atime_ns "os.stat_result.st_atime_ns"), [`st_mtime_ns`](#os.stat_result.st_mtime_ns "os.stat_result.st_mtime_ns"), and [`st_ctime_ns`](#os.stat_result.st_ctime_ns "os.stat_result.st_ctime_ns") members.

New in version 3.5: Added the [`st_file_attributes`](#os.stat_result.st_file_attributes "os.stat_result.st_file_attributes") member on Windows.

Changed in version 3.5: Windows now returns the file index as [`st_ino`](#os.stat_result.st_ino "os.stat_result.st_ino") when available.

New in version 3.7: Added the [`st_fstype`](#os.stat_result.st_fstype "os.stat_result.st_fstype") member to Solaris/derivatives.

New in version 3.8: Added the [`st_reparse_tag`](#os.stat_result.st_reparse_tag "os.stat_result.st_reparse_tag") member on Windows.

Changed in version 3.8: On Windows, the [`st_mode`](#os.stat_result.st_mode "os.stat_result.st_mode") member now identifies special files as `S_IFCHR`, `S_IFIFO` or `S_IFBLK` as appropriate.

`os.statvfs(path)`

Perform a `statvfs()` system call on the given path. The return value is an object whose attributes describe the filesystem on the given path, and correspond to the members of the `statvfs` structure, namely: `f_bsize`, `f_frsize`, `f_blocks`, `f_bfree`, `f_bavail`, `f_files`, `f_ffree`, `f_favail`, `f_flag`, `f_namemax`, `f_fsid`.

Two module-level constants are defined for the `f_flag` attribute’s bit-flags: if `ST_RDONLY` is set, the filesystem is mounted read-only, and if `ST_NOSUID` is set, the semantics of setuid/setgid bits are disabled or not supported.

Additional module-level constants are defined for GNU/glibc based systems. These are `ST_NODEV` (disallow access to device special files), `ST_NOEXEC` (disallow program execution), `ST_SYNCHRONOUS` (writes are synced at once), `ST_MANDLOCK` (allow mandatory locks on an FS), `ST_WRITE` (write on file/directory/symlink), `ST_APPEND` (append-only file), `ST_IMMUTABLE` (immutable file), `ST_NOATIME` (do not update access times), `ST_NODIRATIME` (do not update directory access times), `ST_RELATIME` (update atime relative to mtime/ctime).

This function can support [specifying a file descriptor](#path-fd).

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

Changed in version 3.2: The `ST_RDONLY` and `ST_NOSUID` constants were added.

New in version 3.3: Added support for specifying *path* as an open file descriptor.

Changed in version 3.4: The `ST_NODEV`, `ST_NOEXEC`, `ST_SYNCHRONOUS`, `ST_MANDLOCK`, `ST_WRITE`, `ST_APPEND`, `ST_IMMUTABLE`, `ST_NOATIME`, `ST_NODIRATIME`, and `ST_RELATIME` constants were added.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object).

New in version 3.7: Added `f_fsid`.

`os.supports_dir_fd`

A [`set`](stdtypes#set "set") object indicating which functions in the [`os`](#module-os "os: Miscellaneous operating system interfaces.") module accept an open file descriptor for their *dir\_fd* parameter. Different platforms provide different features, and the underlying functionality Python uses to implement the *dir\_fd* parameter is not available on all platforms Python supports. For consistency’s sake, functions that may support *dir\_fd* always allow specifying the parameter, but will raise an exception if the functionality is used when it’s not locally available. (Specifying `None` for *dir\_fd* is always supported on all platforms.)
To check whether a particular function accepts an open file descriptor for its *dir\_fd* parameter, use the `in` operator on `supports_dir_fd`. As an example, this expression evaluates to `True` if [`os.stat()`](#os.stat "os.stat") accepts open file descriptors for *dir\_fd* on the local platform:

```
os.stat in os.supports_dir_fd
```

Currently *dir\_fd* parameters only work on Unix platforms; none of them work on Windows.

New in version 3.3.

`os.supports_effective_ids`

A [`set`](stdtypes#set "set") object indicating whether [`os.access()`](#os.access "os.access") permits specifying `True` for its *effective\_ids* parameter on the local platform. (Specifying `False` for *effective\_ids* is always supported on all platforms.)

If the local platform supports it, the collection will contain [`os.access()`](#os.access "os.access"); otherwise it will be empty.

This expression evaluates to `True` if [`os.access()`](#os.access "os.access") supports `effective_ids=True` on the local platform:

```
os.access in os.supports_effective_ids
```

Currently *effective\_ids* is only supported on Unix platforms; it does not work on Windows.

New in version 3.3.

`os.supports_fd`

A [`set`](stdtypes#set "set") object indicating which functions in the [`os`](#module-os "os: Miscellaneous operating system interfaces.") module permit specifying their *path* parameter as an open file descriptor on the local platform. Different platforms provide different features, and the underlying functionality Python uses to accept open file descriptors as *path* arguments is not available on all platforms Python supports.

To determine whether a particular function permits specifying an open file descriptor for its *path* parameter, use the `in` operator on `supports_fd`. As an example, this expression evaluates to `True` if [`os.chdir()`](#os.chdir "os.chdir") accepts open file descriptors for *path* on your local platform:

```
os.chdir in os.supports_fd
```

New in version 3.3.

`os.supports_follow_symlinks`

A [`set`](stdtypes#set "set") object indicating which functions in the [`os`](#module-os "os: Miscellaneous operating system interfaces.") module accept `False` for their *follow\_symlinks* parameter on the local platform. Different platforms provide different features, and the underlying functionality Python uses to implement *follow\_symlinks* is not available on all platforms Python supports. For consistency’s sake, functions that may support *follow\_symlinks* always allow specifying the parameter, but will raise an exception if the functionality is used when it’s not locally available. (Specifying `True` for *follow\_symlinks* is always supported on all platforms.)

To check whether a particular function accepts `False` for its *follow\_symlinks* parameter, use the `in` operator on `supports_follow_symlinks`. As an example, this expression evaluates to `True` if you may specify `follow_symlinks=False` when calling [`os.stat()`](#os.stat "os.stat") on the local platform:

```
os.stat in os.supports_follow_symlinks
```

New in version 3.3.

`os.symlink(src, dst, target_is_directory=False, *, dir_fd=None)`

Create a symbolic link pointing to *src* named *dst*.

On Windows, a symlink represents either a file or a directory, and does not morph to the target dynamically. If the target is present, the type of the symlink will be created to match. Otherwise, the symlink will be created as a directory if *target\_is\_directory* is `True` or a file symlink (the default) otherwise.
On non-Windows platforms, *target\_is\_directory* is ignored.

This function can support [paths relative to directory descriptors](#dir-fd).

Note

On newer versions of Windows 10, unprivileged accounts can create symlinks if Developer Mode is enabled. When Developer Mode is not available/enabled, the *SeCreateSymbolicLinkPrivilege* privilege is required, or the process must be run as an administrator. [`OSError`](exceptions#OSError "OSError") is raised when the function is called by an unprivileged user.

Raises an [auditing event](sys#auditing) `os.symlink` with arguments `src`, `dst`, `dir_fd`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows.

Changed in version 3.2: Added support for Windows 6.0 (Vista) symbolic links.

New in version 3.3: Added the *dir\_fd* argument, and *target\_is\_directory* is now allowed on non-Windows platforms.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *src* and *dst*.

Changed in version 3.8: Added support for unelevated symlinks on Windows with Developer Mode.

`os.sync()`

Force write of everything to disk.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

New in version 3.3.

`os.truncate(path, length)`

Truncate the file corresponding to *path*, so that it is at most *length* bytes in size.

This function can support [specifying a file descriptor](#path-fd).

Raises an [auditing event](sys#auditing) `os.truncate` with arguments `path`, `length`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows.

New in version 3.3.

Changed in version 3.5: Added support for Windows.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object).

`os.unlink(path, *, dir_fd=None)`

Remove (delete) the file *path*. This function is semantically identical to [`remove()`](#os.remove "os.remove"); the `unlink` name is its traditional Unix name. Please see the documentation for [`remove()`](#os.remove "os.remove") for further information.

Raises an [auditing event](sys#auditing) `os.remove` with arguments `path`, `dir_fd`.

New in version 3.3: The *dir\_fd* parameter.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object).

`os.utime(path, times=None, *, [ns, ]dir_fd=None, follow_symlinks=True)`

Set the access and modified times of the file specified by *path*.

[`utime()`](#os.utime "os.utime") takes two optional parameters, *times* and *ns*. These specify the times set on *path* and are used as follows:

* If *ns* is specified, it must be a 2-tuple of the form `(atime_ns, mtime_ns)` where each member is an int expressing nanoseconds.
* If *times* is not `None`, it must be a 2-tuple of the form `(atime, mtime)` where each member is an int or float expressing seconds.
* If *times* is `None` and *ns* is unspecified, this is equivalent to specifying `ns=(atime_ns, mtime_ns)` where both times are the current time.

It is an error to specify tuples for both *times* and *ns*.

Note that the exact times you set here may not be returned by a subsequent [`stat()`](#os.stat "os.stat") call, depending on the resolution with which your operating system records access and modification times; see [`stat()`](#os.stat "os.stat"). The best way to preserve exact times is to use the *st\_atime\_ns* and *st\_mtime\_ns* fields from the [`os.stat()`](#os.stat "os.stat") result object with the *ns* parameter to `utime`.
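For instance, here is a minimal sketch (the file names are placeholders) of copying one file's timestamps onto another without losing sub-second precision:

```
import os

st = os.stat('source.txt')
# Pass the integer nanosecond fields to ns= so no precision is lost
# to floating-point rounding.
os.utime('dest.txt', ns=(st.st_atime_ns, st.st_mtime_ns))
```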
This function can support [specifying a file descriptor](#path-fd), [paths relative to directory descriptors](#dir-fd) and [not following symlinks](#follow-symlinks).

Raises an [auditing event](sys#auditing) `os.utime` with arguments `path`, `times`, `ns`, `dir_fd`.

New in version 3.3: Added support for specifying *path* as an open file descriptor, and the *dir\_fd*, *follow\_symlinks*, and *ns* parameters.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object).

`os.walk(top, topdown=True, onerror=None, followlinks=False)`

Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory *top* (including *top* itself), it yields a 3-tuple `(dirpath, dirnames, filenames)`.

*dirpath* is a string, the path to the directory. *dirnames* is a list of the names of the subdirectories in *dirpath* (excluding `'.'` and `'..'`). *filenames* is a list of the names of the non-directory files in *dirpath*. Note that the names in the lists contain no path components. To get a full path (which begins with *top*) to a file or directory in *dirpath*, do `os.path.join(dirpath, name)`. Whether or not the lists are sorted depends on the file system. If a file is removed from or added to the *dirpath* directory while the lists are being generated, whether a name for that file will be included is unspecified.

If optional argument *topdown* is `True` or not specified, the triple for a directory is generated before the triples for any of its subdirectories (directories are generated top-down). If *topdown* is `False`, the triple for a directory is generated after the triples for all of its subdirectories (directories are generated bottom-up). No matter the value of *topdown*, the list of subdirectories is retrieved before the tuples for the directory and its subdirectories are generated.

When *topdown* is `True`, the caller can modify the *dirnames* list in-place (perhaps using [`del`](../reference/simple_stmts#del) or slice assignment), and [`walk()`](#os.walk "os.walk") will only recurse into the subdirectories whose names remain in *dirnames*; this can be used to prune the search, impose a specific order of visiting, or even to inform [`walk()`](#os.walk "os.walk") about directories the caller creates or renames before it resumes [`walk()`](#os.walk "os.walk") again. Modifying *dirnames* when *topdown* is `False` has no effect on the behavior of the walk, because in bottom-up mode the directories in *dirnames* are generated before *dirpath* itself is generated.

By default, errors from the [`scandir()`](#os.scandir "os.scandir") call are ignored. If optional argument *onerror* is specified, it should be a function; it will be called with one argument, an [`OSError`](exceptions#OSError "OSError") instance. It can report the error to continue with the walk, or raise the exception to abort the walk. Note that the filename is available as the `filename` attribute of the exception object.

By default, [`walk()`](#os.walk "os.walk") will not walk down into symbolic links that resolve to directories. Set *followlinks* to `True` to visit directories pointed to by symlinks, on systems that support them.

Note

Be aware that setting *followlinks* to `True` can lead to infinite recursion if a link points to a parent directory of itself. [`walk()`](#os.walk "os.walk") does not keep track of the directories it visited already.
Note

If you pass a relative pathname, don’t change the current working directory between resumptions of [`walk()`](#os.walk "os.walk"). [`walk()`](#os.walk "os.walk") never changes the current directory, and assumes that its caller doesn’t either.

This example displays the number of bytes taken by non-directory files in each directory under the starting directory, except that it doesn’t look under any CVS subdirectory:

```
import os
from os.path import join, getsize
for root, dirs, files in os.walk('python/Lib/email'):
    print(root, "consumes", end=" ")
    print(sum(getsize(join(root, name)) for name in files), end=" ")
    print("bytes in", len(files), "non-directory files")
    if 'CVS' in dirs:
        dirs.remove('CVS')  # don't visit CVS directories
```

In the next example (simple implementation of [`shutil.rmtree()`](shutil#shutil.rmtree "shutil.rmtree")), walking the tree bottom-up is essential: [`rmdir()`](#os.rmdir "os.rmdir") doesn’t allow deleting a directory before the directory is empty:

```
# Delete everything reachable from the directory named in "top",
# assuming there are no symbolic links.
# CAUTION: This is dangerous! For example, if top == '/', it
# could delete all your disk files.
import os
for root, dirs, files in os.walk(top, topdown=False):
    for name in files:
        os.remove(os.path.join(root, name))
    for name in dirs:
        os.rmdir(os.path.join(root, name))
```

Raises an [auditing event](sys#auditing) `os.walk` with arguments `top`, `topdown`, `onerror`, `followlinks`.

Changed in version 3.5: This function now calls [`os.scandir()`](#os.scandir "os.scandir") instead of [`os.listdir()`](#os.listdir "os.listdir"), making it faster by reducing the number of calls to [`os.stat()`](#os.stat "os.stat").

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object).

`os.fwalk(top='.', topdown=True, onerror=None, *, follow_symlinks=False, dir_fd=None)`

This behaves exactly like [`walk()`](#os.walk "os.walk"), except that it yields a 4-tuple `(dirpath, dirnames, filenames, dirfd)`, and it supports `dir_fd`.

*dirpath*, *dirnames* and *filenames* are identical to [`walk()`](#os.walk "os.walk") output, and *dirfd* is a file descriptor referring to the directory *dirpath*.

This function always supports [paths relative to directory descriptors](#dir-fd) and [not following symlinks](#follow-symlinks). Note however that, unlike other functions, the [`fwalk()`](#os.fwalk "os.fwalk") default value for *follow\_symlinks* is `False`.

Note

Since [`fwalk()`](#os.fwalk "os.fwalk") yields file descriptors, those are only valid until the next iteration step, so you should duplicate them (e.g. with [`dup()`](#os.dup "os.dup")) if you want to keep them longer.

This example displays the number of bytes taken by non-directory files in each directory under the starting directory, except that it doesn’t look under any CVS subdirectory:

```
import os
for root, dirs, files, rootfd in os.fwalk('python/Lib/email'):
    print(root, "consumes", end=" ")
    print(sum([os.stat(name, dir_fd=rootfd).st_size for name in files]),
          end=" ")
    print("bytes in", len(files), "non-directory files")
    if 'CVS' in dirs:
        dirs.remove('CVS')  # don't visit CVS directories
```

In the next example, walking the tree bottom-up is essential: [`rmdir()`](#os.rmdir "os.rmdir") doesn’t allow deleting a directory before the directory is empty:

```
# Delete everything reachable from the directory named in "top",
# assuming there are no symbolic links.
# CAUTION: This is dangerous! For example, if top == '/', it
# could delete all your disk files.
import os
for root, dirs, files, rootfd in os.fwalk(top, topdown=False):
    for name in files:
        os.unlink(name, dir_fd=rootfd)
    for name in dirs:
        os.rmdir(name, dir_fd=rootfd)
```

Raises an [auditing event](sys#auditing) `os.fwalk` with arguments `top`, `topdown`, `onerror`, `follow_symlinks`, `dir_fd`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

New in version 3.3.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object).

Changed in version 3.7: Added support for [`bytes`](stdtypes#bytes "bytes") paths.

`os.memfd_create(name[, flags=os.MFD_CLOEXEC])`

Create an anonymous file and return a file descriptor that refers to it. *flags* must be one of the `os.MFD_*` constants available on the system (or a bitwise ORed combination of them). By default, the new file descriptor is [non-inheritable](#fd-inheritance).

The name supplied in *name* is used as a filename and will be displayed as the target of the corresponding symbolic link in the directory `/proc/self/fd/`. The displayed name is always prefixed with `memfd:` and serves only for debugging purposes. Names do not affect the behavior of the file descriptor, and as such multiple files can have the same name without any side effects.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 3.17 or newer with glibc 2.27 or newer.

New in version 3.8.

`os.MFD_CLOEXEC`
`os.MFD_ALLOW_SEALING`
`os.MFD_HUGETLB`
`os.MFD_HUGE_SHIFT`
`os.MFD_HUGE_MASK`
`os.MFD_HUGE_64KB`
`os.MFD_HUGE_512KB`
`os.MFD_HUGE_1MB`
`os.MFD_HUGE_2MB`
`os.MFD_HUGE_8MB`
`os.MFD_HUGE_16MB`
`os.MFD_HUGE_32MB`
`os.MFD_HUGE_256MB`
`os.MFD_HUGE_512MB`
`os.MFD_HUGE_1GB`
`os.MFD_HUGE_2GB`
`os.MFD_HUGE_16GB`

These flags can be passed to [`memfd_create()`](#os.memfd_create "os.memfd_create").

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 3.17 or newer with glibc 2.27 or newer. The `MFD_HUGE*` flags are only available since Linux 4.14.

New in version 3.8.

### Linux extended attributes

New in version 3.3.

These functions are all available on Linux only.

`os.getxattr(path, attribute, *, follow_symlinks=True)`

Return the value of the extended filesystem attribute *attribute* for *path*. *attribute* can be bytes or str (directly or indirectly through the [`PathLike`](#os.PathLike "os.PathLike") interface). If it is str, it is encoded with the filesystem encoding.

This function can support [specifying a file descriptor](#path-fd) and [not following symlinks](#follow-symlinks).

Raises an [auditing event](sys#auditing) `os.getxattr` with arguments `path`, `attribute`.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *path* and *attribute*.

`os.listxattr(path=None, *, follow_symlinks=True)`

Return a list of the extended filesystem attributes on *path*. The attributes in the list are represented as strings decoded with the filesystem encoding. If *path* is `None`, [`listxattr()`](#os.listxattr "os.listxattr") will examine the current directory.

This function can support [specifying a file descriptor](#path-fd) and [not following symlinks](#follow-symlinks).

Raises an [auditing event](sys#auditing) `os.listxattr` with argument `path`.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object).

`os.removexattr(path, attribute, *, follow_symlinks=True)`

Removes the extended filesystem attribute *attribute* from *path*.
*attribute* should be bytes or str (directly or indirectly through the [`PathLike`](#os.PathLike "os.PathLike") interface). If it is a string, it is encoded with the filesystem encoding.

This function can support [specifying a file descriptor](#path-fd) and [not following symlinks](#follow-symlinks).

Raises an [auditing event](sys#auditing) `os.removexattr` with arguments `path`, `attribute`.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *path* and *attribute*.

`os.setxattr(path, attribute, value, flags=0, *, follow_symlinks=True)`

Set the extended filesystem attribute *attribute* on *path* to *value*. *attribute* must be a bytes or str with no embedded NULs (directly or indirectly through the [`PathLike`](#os.PathLike "os.PathLike") interface). If it is a str, it is encoded with the filesystem encoding. *flags* may be [`XATTR_REPLACE`](#os.XATTR_REPLACE "os.XATTR_REPLACE") or [`XATTR_CREATE`](#os.XATTR_CREATE "os.XATTR_CREATE"). If [`XATTR_REPLACE`](#os.XATTR_REPLACE "os.XATTR_REPLACE") is given and the attribute does not exist, `ENODATA` will be raised. If [`XATTR_CREATE`](#os.XATTR_CREATE "os.XATTR_CREATE") is given and the attribute already exists, the attribute will not be created and `EEXIST` will be raised.

This function can support [specifying a file descriptor](#path-fd) and [not following symlinks](#follow-symlinks).

Note

A bug in Linux kernel versions less than 2.6.39 caused the flags argument to be ignored on some filesystems.

Raises an [auditing event](sys#auditing) `os.setxattr` with arguments `path`, `attribute`, `value`, `flags`.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object) for *path* and *attribute*.

`os.XATTR_SIZE_MAX`

The maximum size the value of an extended attribute can be. Currently, this is 64 KiB on Linux.

`os.XATTR_CREATE`

This is a possible value for the flags argument in [`setxattr()`](#os.setxattr "os.setxattr"). It indicates the operation must create an attribute.

`os.XATTR_REPLACE`

This is a possible value for the flags argument in [`setxattr()`](#os.setxattr "os.setxattr"). It indicates the operation must replace an existing attribute.

Process Management
------------------

These functions may be used to create and manage processes.

The various [`exec*`](#os.execl "os.execl") functions take a list of arguments for the new program loaded into the process. In each case, the first of these arguments is passed to the new program as its own name rather than as an argument a user may have typed on a command line. For the C programmer, this is the `argv[0]` passed to a program’s `main()`. For example, `os.execv('/bin/echo', ['foo', 'bar'])` will only print `bar` on standard output; `foo` will seem to be ignored.

`os.abort()`

Generate a `SIGABRT` signal to the current process. On Unix, the default behavior is to produce a core dump; on Windows, the process immediately returns an exit code of `3`. Be aware that calling this function will not call the Python signal handler registered for `SIGABRT` with [`signal.signal()`](signal#signal.signal "signal.signal").

`os.add_dll_directory(path)`

Add a path to the DLL search path.

This search path is used when resolving dependencies for imported extension modules (the module itself is resolved through [`sys.path`](sys#sys.path "sys.path")), and also by [`ctypes`](ctypes#module-ctypes "ctypes: A foreign function library for Python.").
Remove the directory by calling `close()` on the returned object or using it in a [`with`](../reference/compound_stmts#with) statement.

See the [Microsoft documentation](https://msdn.microsoft.com/44228cf2-6306-466c-8f16-f513cd3ba8b5) for more information about how DLLs are loaded.

Raises an [auditing event](sys#auditing) `os.add_dll_directory` with argument `path`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows.

New in version 3.8: Previous versions of CPython would resolve DLLs using the default behavior for the current process. This led to inconsistencies, such as only sometimes searching `PATH` or the current working directory, and OS functions such as `AddDllDirectory` having no effect. In 3.8, the two primary ways DLLs are loaded now explicitly override the process-wide behavior to ensure consistency. See the [porting notes](https://docs.python.org/3.9/whatsnew/3.8.html#bpo-36085-whatsnew) for information on updating libraries.

`os.execl(path, arg0, arg1, ...)`
`os.execle(path, arg0, arg1, ..., env)`
`os.execlp(file, arg0, arg1, ...)`
`os.execlpe(file, arg0, arg1, ..., env)`
`os.execv(path, args)`
`os.execve(path, args, env)`
`os.execvp(file, args)`
`os.execvpe(file, args, env)`

These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as [`OSError`](exceptions#OSError "OSError") exceptions.

The current process is replaced immediately. Open file objects and descriptors are not flushed, so if there may be data buffered on these open files, you should flush them using `sys.stdout.flush()` or [`os.fsync()`](#os.fsync "os.fsync") before calling an [`exec*`](#os.execl "os.execl") function.

The “l” and “v” variants of the [`exec*`](#os.execl "os.execl") functions differ in how command-line arguments are passed. The “l” variants are perhaps the easiest to work with if the number of parameters is fixed when the code is written; the individual parameters simply become additional parameters to the `execl*()` functions. The “v” variants are good when the number of parameters is variable, with the arguments being passed in a list or tuple as the *args* parameter. In either case, the arguments to the child process should start with the name of the command being run, but this is not enforced.

The variants which include a “p” near the end ([`execlp()`](#os.execlp "os.execlp"), [`execlpe()`](#os.execlpe "os.execlpe"), [`execvp()`](#os.execvp "os.execvp"), and [`execvpe()`](#os.execvpe "os.execvpe")) will use the `PATH` environment variable to locate the program *file*. When the environment is being replaced (using one of the [`exec*e`](#os.execl "os.execl") variants, discussed in the next paragraph), the new environment is used as the source of the `PATH` variable. The other variants, [`execl()`](#os.execl "os.execl"), [`execle()`](#os.execle "os.execle"), [`execv()`](#os.execv "os.execv"), and [`execve()`](#os.execve "os.execve"), will not use the `PATH` variable to locate the executable; *path* must contain an appropriate absolute or relative path.
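To make the difference concrete, the following sketch shows the same program started with the “l” and “v” styles (`/bin/echo` is an illustrative Unix path; each call replaces the current process, so only one can actually run):

```
import os

# "l" variant: arguments are passed as separate parameters.
# os.execl('/bin/echo', 'echo', 'hello')

# "v" variant: arguments are passed as a single list.
os.execv('/bin/echo', ['echo', 'hello'])
```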
For [`execle()`](#os.execle "os.execle"), [`execlpe()`](#os.execlpe "os.execlpe"), [`execve()`](#os.execve "os.execve"), and [`execvpe()`](#os.execvpe "os.execvpe") (note that these all end in “e”), the *env* parameter must be a mapping which is used to define the environment variables for the new process (these are used instead of the current process’ environment); the functions [`execl()`](#os.execl "os.execl"), [`execlp()`](#os.execlp "os.execlp"), [`execv()`](#os.execv "os.execv"), and [`execvp()`](#os.execvp "os.execvp") all cause the new process to inherit the environment of the current process. For [`execve()`](#os.execve "os.execve") on some platforms, *path* may also be specified as an open file descriptor. This functionality may not be supported on your platform; you can check whether or not it is available using [`os.supports_fd`](#os.supports_fd "os.supports_fd"). If it is unavailable, using it will raise a [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError"). Raises an [auditing event](sys#auditing) `os.exec` with arguments `path`, `args`, `env`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. New in version 3.3: Added support for specifying *path* as an open file descriptor for [`execve()`](#os.execve "os.execve"). Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `os._exit(n)` Exit the process with status *n*, without calling cleanup handlers, flushing stdio buffers, etc. Note The standard way to exit is `sys.exit(n)`. [`_exit()`](#os._exit "os._exit") should normally only be used in the child process after a [`fork()`](#os.fork "os.fork"). The following exit codes are defined and can be used with [`_exit()`](#os._exit "os._exit"), although they are not required. These are typically used for system programs written in Python, such as a mail server’s external command delivery program. Note Some of these may not be available on all Unix platforms, since there is some variation. These constants are defined where they are defined by the underlying platform. `os.EX_OK` Exit code that means no error occurred. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.EX_USAGE` Exit code that means the command was used incorrectly, such as when the wrong number of arguments are given. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.EX_DATAERR` Exit code that means the input data was incorrect. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.EX_NOINPUT` Exit code that means an input file did not exist or was not readable. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.EX_NOUSER` Exit code that means a specified user did not exist. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.EX_NOHOST` Exit code that means a specified host did not exist. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.EX_UNAVAILABLE` Exit code that means that a required service is unavailable. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.EX_SOFTWARE` Exit code that means an internal software error was detected. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.EX_OSERR` Exit code that means an operating system error was detected, such as the inability to fork or create a pipe. 
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.EX_OSFILE`

Exit code that means some system file did not exist, could not be opened, or had some other kind of error.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.EX_CANTCREAT`

Exit code that means a user specified output file could not be created.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.EX_IOERR`

Exit code that means that an error occurred while doing I/O on some file.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.EX_TEMPFAIL`

Exit code that means a temporary failure occurred. This indicates something that may not really be an error, such as a network connection that couldn’t be made during a retryable operation.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.EX_PROTOCOL`

Exit code that means that a protocol exchange was illegal, invalid, or not understood.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.EX_NOPERM`

Exit code that means that there were insufficient permissions to perform the operation (but not intended for file system problems).

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.EX_CONFIG`

Exit code that means that some kind of configuration error occurred.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.EX_NOTFOUND`

Exit code that means something like “an entry was not found”.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.fork()`

Fork a child process. Return `0` in the child and the child’s process id in the parent. If an error occurs [`OSError`](exceptions#OSError "OSError") is raised.

Note that some platforms including FreeBSD <= 6.3 and Cygwin have known issues when using `fork()` from a thread.

Raises an [auditing event](sys#auditing) `os.fork` with no arguments.

Changed in version 3.8: Calling `fork()` in a subinterpreter is no longer supported ([`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised).

Warning

See [`ssl`](ssl#module-ssl "ssl: TLS/SSL wrapper for socket objects") for applications that use the SSL module with fork().

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.forkpty()`

Fork a child process, using a new pseudo-terminal as the child’s controlling terminal. Return a pair of `(pid, fd)`, where *pid* is `0` in the child, the new child’s process id in the parent, and *fd* is the file descriptor of the master end of the pseudo-terminal. For a more portable approach, use the [`pty`](pty#module-pty "pty: Pseudo-Terminal Handling for Linux. (Linux)") module. If an error occurs [`OSError`](exceptions#OSError "OSError") is raised.

Raises an [auditing event](sys#auditing) `os.forkpty` with no arguments.

Changed in version 3.8: Calling `forkpty()` in a subinterpreter is no longer supported ([`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised).

[Availability](https://docs.python.org/3.9/library/intro.html#availability): some flavors of Unix.

`os.kill(pid, sig)`

Send signal *sig* to the process *pid*. Constants for the specific signals available on the host platform are defined in the [`signal`](signal#module-signal "signal: Set handlers for asynchronous events.") module.
Windows: The [`signal.CTRL_C_EVENT`](signal#signal.CTRL_C_EVENT "signal.CTRL_C_EVENT") and [`signal.CTRL_BREAK_EVENT`](signal#signal.CTRL_BREAK_EVENT "signal.CTRL_BREAK_EVENT") signals are special signals which can only be sent to console processes which share a common console window, e.g., some subprocesses. Any other value for *sig* will cause the process to be unconditionally killed by the TerminateProcess API, and the exit code will be set to *sig*. The Windows version of [`kill()`](#os.kill "os.kill") additionally takes process handles to be killed.

See also [`signal.pthread_kill()`](signal#signal.pthread_kill "signal.pthread_kill").

Raises an [auditing event](sys#auditing) `os.kill` with arguments `pid`, `sig`.

New in version 3.2: Windows support.

`os.killpg(pgid, sig)`

Send the signal *sig* to the process group *pgid*.

Raises an [auditing event](sys#auditing) `os.killpg` with arguments `pgid`, `sig`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.nice(increment)`

Add *increment* to the process’s “niceness”. Return the new niceness.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.pidfd_open(pid, flags=0)`

Return a file descriptor referring to the process *pid*. This descriptor can be used to perform process management without races and signals. The *flags* argument is provided for future extensions; no flag values are currently defined.

See the *[pidfd\_open(2)](https://manpages.debian.org/pidfd_open(2))* man page for more details.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 5.3+

New in version 3.9.

`os.plock(op)`

Lock program segments into memory. The value of *op* (defined in `<sys/lock.h>`) determines which segments are locked.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.popen(cmd, mode='r', buffering=-1)`

Open a pipe to or from command *cmd*. The return value is an open file object connected to the pipe, which can be read or written depending on whether *mode* is `'r'` (default) or `'w'`. The *buffering* argument has the same meaning as the corresponding argument to the built-in [`open()`](functions#open "open") function. The returned file object reads or writes text strings rather than bytes.

The `close` method returns [`None`](constants#None "None") if the subprocess exited successfully, or the subprocess’s return code if there was an error. On POSIX systems, if the return code is positive it represents the return value of the process left-shifted by one byte. If the return code is negative, the process was terminated by the signal given by the negated value of the return code. (For example, the return value might be `-signal.SIGKILL` if the subprocess was killed.) On Windows systems, the return value contains the signed integer return code from the child process.

On Unix, [`waitstatus_to_exitcode()`](#os.waitstatus_to_exitcode "os.waitstatus_to_exitcode") can be used to convert the `close` method result (exit status) into an exit code if it is not `None`. On Windows, the `close` method result is directly the exit code (or `None`).

This is implemented using [`subprocess.Popen`](subprocess#subprocess.Popen "subprocess.Popen"); see that class’s documentation for more powerful ways to manage and communicate with subprocesses.
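As a brief sketch (the command is illustrative), reading a command’s output and checking the `close` result:

```
import os

pipe = os.popen('echo hello')
output = pipe.read()
status = pipe.close()  # None means the command exited successfully
print(output.strip(), status)  # -> hello None
```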
`os.posix_spawn(path, argv, env, *, file_actions=None, setpgroup=None, resetids=False, setsid=False, setsigmask=(), setsigdef=(), scheduler=None)`

Wraps the `posix_spawn()` C library API for use from Python.

Most users should use [`subprocess.run()`](subprocess#subprocess.run "subprocess.run") instead of [`posix_spawn()`](#os.posix_spawn "os.posix_spawn").

The positional-only arguments *path*, *argv*, and *env* are similar to [`execve()`](#os.execve "os.execve").

The *path* parameter is the path to the executable file. The *path* should contain a directory. Use [`posix_spawnp()`](#os.posix_spawnp "os.posix_spawnp") to pass an executable file without directory.

The *file\_actions* argument may be a sequence of tuples describing actions to take on specific file descriptors in the child process between the C library implementation’s `fork()` and `exec()` steps. The first item in each tuple must be one of the three type indicators listed below describing the remaining tuple elements:

`os.POSIX_SPAWN_OPEN`

(`os.POSIX_SPAWN_OPEN`, *fd*, *path*, *flags*, *mode*)

Performs `os.dup2(os.open(path, flags, mode), fd)`.

`os.POSIX_SPAWN_CLOSE`

(`os.POSIX_SPAWN_CLOSE`, *fd*)

Performs `os.close(fd)`.

`os.POSIX_SPAWN_DUP2`

(`os.POSIX_SPAWN_DUP2`, *fd*, *new\_fd*)

Performs `os.dup2(fd, new_fd)`.

These tuples correspond to the C library `posix_spawn_file_actions_addopen()`, `posix_spawn_file_actions_addclose()`, and `posix_spawn_file_actions_adddup2()` API calls used to prepare for the `posix_spawn()` call itself.

The *setpgroup* argument will set the process group of the child to the value specified. If the value specified is 0, the child’s process group ID will be made the same as its process ID. If the value of *setpgroup* is not set, the child will inherit the parent’s process group ID. This argument corresponds to the C library `POSIX_SPAWN_SETPGROUP` flag.

If the *resetids* argument is `True` it will reset the effective UID and GID of the child to the real UID and GID of the parent process. If the argument is `False`, then the child retains the effective UID and GID of the parent. In either case, if the set-user-ID and set-group-ID permission bits are enabled on the executable file, their effect will override the setting of the effective UID and GID. This argument corresponds to the C library `POSIX_SPAWN_RESETIDS` flag.

If the *setsid* argument is `True`, it will create a new session ID for `posix_spawn`. *setsid* requires the `POSIX_SPAWN_SETSID` or `POSIX_SPAWN_SETSID_NP` flag. Otherwise, [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") is raised.

The *setsigmask* argument will set the signal mask to the signal set specified. If the parameter is not used, then the child inherits the parent’s signal mask. This argument corresponds to the C library `POSIX_SPAWN_SETSIGMASK` flag.

The *setsigdef* argument will reset the disposition of all signals in the set specified. This argument corresponds to the C library `POSIX_SPAWN_SETSIGDEF` flag.

The *scheduler* argument must be a tuple containing the (optional) scheduler policy and an instance of [`sched_param`](#os.sched_param "os.sched_param") with the scheduler parameters. A value of `None` in the place of the scheduler policy indicates that it is not being provided. This argument is a combination of the C library `POSIX_SPAWN_SETSCHEDPARAM` and `POSIX_SPAWN_SETSCHEDULER` flags.

Raises an [auditing event](sys#auditing) `os.posix_spawn` with arguments `path`, `argv`, `env`.

New in version 3.8.
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

`os.posix_spawnp(path, argv, env, *, file_actions=None, setpgroup=None, resetids=False, setsid=False, setsigmask=(), setsigdef=(), scheduler=None)`

Wraps the `posix_spawnp()` C library API for use from Python.

Similar to [`posix_spawn()`](#os.posix_spawn "os.posix_spawn") except that the system searches for the *executable* file in the list of directories specified by the `PATH` environment variable (in the same way as for `execvp(3)`).

Raises an [auditing event](sys#auditing) `os.posix_spawn` with arguments `path`, `argv`, `env`.

New in version 3.8.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): See [`posix_spawn()`](#os.posix_spawn "os.posix_spawn") documentation.

`os.register_at_fork(*, before=None, after_in_parent=None, after_in_child=None)`

Register callables to be executed when a new child process is forked using [`os.fork()`](#os.fork "os.fork") or similar process cloning APIs. The parameters are optional and keyword-only. Each specifies a different call point.

* *before* is a function called before forking a child process.
* *after\_in\_parent* is a function called from the parent process after forking a child process.
* *after\_in\_child* is a function called from the child process.

These calls are only made if control is expected to return to the Python interpreter. A typical [`subprocess`](subprocess#module-subprocess "subprocess: Subprocess management.") launch will not trigger them as the child is not going to re-enter the interpreter.

Functions registered for execution before forking are called in reverse registration order. Functions registered for execution after forking (either in the parent or in the child) are called in registration order.

Note that `fork()` calls made by third-party C code may not call those functions, unless it explicitly calls [`PyOS_BeforeFork()`](../c-api/sys#c.PyOS_BeforeFork "PyOS_BeforeFork"), [`PyOS_AfterFork_Parent()`](../c-api/sys#c.PyOS_AfterFork_Parent "PyOS_AfterFork_Parent") and [`PyOS_AfterFork_Child()`](../c-api/sys#c.PyOS_AfterFork_Child "PyOS_AfterFork_Child").

There is no way to unregister a function.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

New in version 3.7.

`os.spawnl(mode, path, ...)`
`os.spawnle(mode, path, ..., env)`
`os.spawnlp(mode, file, ...)`
`os.spawnlpe(mode, file, ..., env)`
`os.spawnv(mode, path, args)`
`os.spawnve(mode, path, args, env)`
`os.spawnvp(mode, file, args)`
`os.spawnvpe(mode, file, args, env)`

Execute the program *path* in a new process.

(Note that the [`subprocess`](subprocess#module-subprocess "subprocess: Subprocess management.") module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions. Check especially the [Replacing Older Functions with the subprocess Module](subprocess#subprocess-replacements) section.)

If *mode* is [`P_NOWAIT`](#os.P_NOWAIT "os.P_NOWAIT"), this function returns the process id of the new process; if *mode* is [`P_WAIT`](#os.P_WAIT "os.P_WAIT"), returns the process’s exit code if it exits normally, or `-signal`, where *signal* is the signal that killed the process. On Windows, the process id will actually be the process handle, so can be used with the [`waitpid()`](#os.waitpid "os.waitpid") function.

Note

On VxWorks, this function doesn’t return `-signal` when the new process is killed. Instead, it raises an OSError exception.
The “l” and “v” variants of the [`spawn*`](#os.spawnl "os.spawnl") functions differ in how command-line arguments are passed. The “l” variants are perhaps the easiest to work with if the number of parameters is fixed when the code is written; the individual parameters simply become additional parameters to the `spawnl*()` functions. The “v” variants are good when the number of parameters is variable, with the arguments being passed in a list or tuple as the *args* parameter. In either case, the arguments to the child process must start with the name of the command being run.

The variants which include a second “p” near the end ([`spawnlp()`](#os.spawnlp "os.spawnlp"), [`spawnlpe()`](#os.spawnlpe "os.spawnlpe"), [`spawnvp()`](#os.spawnvp "os.spawnvp"), and [`spawnvpe()`](#os.spawnvpe "os.spawnvpe")) will use the `PATH` environment variable to locate the program *file*. When the environment is being replaced (using one of the [`spawn*e`](#os.spawnl "os.spawnl") variants, discussed in the next paragraph), the new environment is used as the source of the `PATH` variable. The other variants, [`spawnl()`](#os.spawnl "os.spawnl"), [`spawnle()`](#os.spawnle "os.spawnle"), [`spawnv()`](#os.spawnv "os.spawnv"), and [`spawnve()`](#os.spawnve "os.spawnve"), will not use the `PATH` variable to locate the executable; *path* must contain an appropriate absolute or relative path.

For [`spawnle()`](#os.spawnle "os.spawnle"), [`spawnlpe()`](#os.spawnlpe "os.spawnlpe"), [`spawnve()`](#os.spawnve "os.spawnve"), and [`spawnvpe()`](#os.spawnvpe "os.spawnvpe") (note that these all end in “e”), the *env* parameter must be a mapping which is used to define the environment variables for the new process (they are used instead of the current process’ environment); the functions [`spawnl()`](#os.spawnl "os.spawnl"), [`spawnlp()`](#os.spawnlp "os.spawnlp"), [`spawnv()`](#os.spawnv "os.spawnv"), and [`spawnvp()`](#os.spawnvp "os.spawnvp") all cause the new process to inherit the environment of the current process. Note that keys and values in the *env* dictionary must be strings; invalid keys or values will cause the function to fail, with a return value of `127`.

As an example, the following calls to [`spawnlp()`](#os.spawnlp "os.spawnlp") and [`spawnvpe()`](#os.spawnvpe "os.spawnvpe") are equivalent:

```
import os
os.spawnlp(os.P_WAIT, 'cp', 'cp', 'index.html', '/dev/null')

L = ['cp', 'index.html', '/dev/null']
os.spawnvpe(os.P_WAIT, 'cp', L, os.environ)
```

Raises an [auditing event](sys#auditing) `os.spawn` with arguments `mode`, `path`, `args`, `env`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows. [`spawnlp()`](#os.spawnlp "os.spawnlp"), [`spawnlpe()`](#os.spawnlpe "os.spawnlpe"), [`spawnvp()`](#os.spawnvp "os.spawnvp") and [`spawnvpe()`](#os.spawnvpe "os.spawnvpe") are not available on Windows. [`spawnle()`](#os.spawnle "os.spawnle") and [`spawnve()`](#os.spawnve "os.spawnve") are not thread-safe on Windows; we advise you to use the [`subprocess`](subprocess#module-subprocess "subprocess: Subprocess management.") module instead.

Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object).

`os.P_NOWAIT`
`os.P_NOWAITO`

Possible values for the *mode* parameter to the [`spawn*`](#os.spawnl "os.spawnl") family of functions. If either of these values is given, the `spawn*()` functions will return as soon as the new process has been created, with the process id as the return value.
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows.

`os.P_WAIT`

Possible value for the *mode* parameter to the [`spawn*`](#os.spawnl "os.spawnl") family of functions. If this is given as *mode*, the `spawn*()` functions will not return until the new process has run to completion, and will return the exit code of the process if the run is successful, or `-signal` if a signal kills the process.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows.

`os.P_DETACH`
`os.P_OVERLAY`

Possible values for the *mode* parameter to the [`spawn*`](#os.spawnl "os.spawnl") family of functions. These are less portable than those listed above. [`P_DETACH`](#os.P_DETACH "os.P_DETACH") is similar to [`P_NOWAIT`](#os.P_NOWAIT "os.P_NOWAIT"), but the new process is detached from the console of the calling process. If [`P_OVERLAY`](#os.P_OVERLAY "os.P_OVERLAY") is used, the current process will be replaced; the [`spawn*`](#os.spawnl "os.spawnl") function will not return.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows.

`os.startfile(path[, operation])`

Start a file with its associated application.

When *operation* is not specified or `'open'`, this acts like double-clicking the file in Windows Explorer, or giving the file name as an argument to the **start** command from the interactive command shell: the file is opened with whatever application (if any) its extension is associated with.

When another *operation* is given, it must be a “command verb” that specifies what should be done with the file. Common verbs documented by Microsoft are `'print'` and `'edit'` (to be used on files) as well as `'explore'` and `'find'` (to be used on directories).

[`startfile()`](#os.startfile "os.startfile") returns as soon as the associated application is launched. There is no option to wait for the application to close, and no way to retrieve the application’s exit status. The *path* parameter is relative to the current directory. If you want to use an absolute path, make sure the first character is not a slash (`'/'`); the underlying Win32 `ShellExecute()` function doesn’t work if it is. Use the [`os.path.normpath()`](os.path#os.path.normpath "os.path.normpath") function to ensure that the path is properly encoded for Win32.

To reduce interpreter startup overhead, the Win32 `ShellExecute()` function is not resolved until this function is first called. If the function cannot be resolved, [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") will be raised.

Raises an [auditing event](sys#auditing) `os.startfile` with arguments `path`, `operation`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows.

`os.system(command)`

Execute the command (a string) in a subshell. This is implemented by calling the Standard C function `system()`, and has the same limitations. Changes to [`sys.stdin`](sys#sys.stdin "sys.stdin"), etc. are not reflected in the environment of the executed command. If *command* generates any output, it will be sent to the interpreter standard output stream. The C standard does not specify the meaning of the return value of the C function, so the return value of the Python function is system-dependent.

On Unix, the return value is the exit status of the process encoded in the format specified for [`wait()`](#os.wait "os.wait").

On Windows, the return value is that returned by the system shell after running *command*.
The shell is given by the Windows environment variable `COMSPEC`: it is usually **cmd.exe**, which returns the exit status of the command run; on systems using a non-native shell, consult your shell documentation.

The [`subprocess`](subprocess#module-subprocess "subprocess: Subprocess management.") module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the [Replacing Older Functions with the subprocess Module](subprocess#subprocess-replacements) section in the [`subprocess`](subprocess#module-subprocess "subprocess: Subprocess management.") documentation for some helpful recipes.

On Unix, [`waitstatus_to_exitcode()`](#os.waitstatus_to_exitcode "os.waitstatus_to_exitcode") can be used to convert the result (exit status) into an exit code. On Windows, the result is directly the exit code.

Raises an [auditing event](sys#auditing) `os.system` with argument `command`.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows.

`os.times()`

Returns the current global process times. The return value is an object with five attributes:

* `user` - user time
* `system` - system time
* `children_user` - user time of all child processes
* `children_system` - system time of all child processes
* `elapsed` - elapsed real time since a fixed point in the past

For backwards compatibility, this object also behaves like a five-tuple containing `user`, `system`, `children_user`, `children_system`, and `elapsed` in that order.

On Unix, see the *[times(2)](https://manpages.debian.org/times(2))* and *[times(3)](https://manpages.debian.org/times(3))* manual pages; on Windows, see [the GetProcessTimes MSDN](https://docs.microsoft.com/windows/win32/api/processthreadsapi/nf-processthreadsapi-getprocesstimes) documentation. On Windows, only `user` and `system` are known; the other attributes are zero.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix, Windows.

Changed in version 3.3: Return type changed from a tuple to a tuple-like object with named attributes.

`os.wait()`

Wait for completion of a child process, and return a tuple containing its pid and exit status indication: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced.

[`waitstatus_to_exitcode()`](#os.waitstatus_to_exitcode "os.waitstatus_to_exitcode") can be used to convert the exit status into an exit code.

[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix.

See also

[`waitpid()`](#os.waitpid "os.waitpid") can be used to wait for the completion of a specific child process and has more options.

`os.waitid(idtype, id, options)`

Wait for the completion of one or more child processes. *idtype* can be [`P_PID`](#os.P_PID "os.P_PID"), [`P_PGID`](#os.P_PGID "os.P_PGID"), [`P_ALL`](#os.P_ALL "os.P_ALL"), or [`P_PIDFD`](#os.P_PIDFD "os.P_PIDFD") on Linux. *id* specifies the pid to wait on. *options* is constructed from the ORing of one or more of [`WEXITED`](#os.WEXITED "os.WEXITED"), [`WSTOPPED`](#os.WSTOPPED "os.WSTOPPED") or [`WCONTINUED`](#os.WCONTINUED "os.WCONTINUED") and additionally may be ORed with [`WNOHANG`](#os.WNOHANG "os.WNOHANG") or [`WNOWAIT`](#os.WNOWAIT "os.WNOWAIT").
The return value is an object representing the data contained in the `siginfo_t` structure, namely: `si_pid`, `si_uid`, `si_signo`, `si_status`, `si_code` or `None` if [`WNOHANG`](#os.WNOHANG "os.WNOHANG") is specified and there are no children in a waitable state. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.P_PID` `os.P_PGID` `os.P_ALL` These are the possible values for *idtype* in [`waitid()`](#os.waitid "os.waitid"). They affect how *id* is interpreted. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.P_PIDFD` This is a Linux-specific *idtype* that indicates that *id* is a file descriptor that refers to a process. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 5.4+ New in version 3.9. `os.WEXITED` `os.WSTOPPED` `os.WNOWAIT` Flags that can be used in *options* in [`waitid()`](#os.waitid "os.waitid") that specify what child signal to wait for. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. `os.CLD_EXITED` `os.CLD_KILLED` `os.CLD_DUMPED` `os.CLD_TRAPPED` `os.CLD_STOPPED` `os.CLD_CONTINUED` These are the possible values for `si_code` in the result returned by [`waitid()`](#os.waitid "os.waitid"). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. New in version 3.3. Changed in version 3.9: Added [`CLD_KILLED`](#os.CLD_KILLED "os.CLD_KILLED") and [`CLD_STOPPED`](#os.CLD_STOPPED "os.CLD_STOPPED") values. `os.waitpid(pid, options)` The details of this function differ on Unix and Windows. On Unix: Wait for completion of a child process given by process id *pid*, and return a tuple containing its process id and exit status indication (encoded as for [`wait()`](#os.wait "os.wait")). The semantics of the call are affected by the value of the integer *options*, which should be `0` for normal operation. If *pid* is greater than `0`, [`waitpid()`](#os.waitpid "os.waitpid") requests status information for that specific process. If *pid* is `0`, the request is for the status of any child in the process group of the current process. If *pid* is `-1`, the request pertains to any child of the current process. If *pid* is less than `-1`, status is requested for any process in the process group `-pid` (the absolute value of *pid*). An [`OSError`](exceptions#OSError "OSError") is raised with the value of errno when the syscall returns -1. On Windows: Wait for completion of a process given by process handle *pid*, and return a tuple containing *pid*, and its exit status shifted left by 8 bits (shifting makes cross-platform use of the function easier). A *pid* less than or equal to `0` has no special meaning on Windows, and raises an exception. The value of integer *options* has no effect. *pid* can refer to any process whose id is known, not necessarily a child process. The [`spawn*`](#os.spawnl "os.spawnl") functions called with [`P_NOWAIT`](#os.P_NOWAIT "os.P_NOWAIT") return suitable process handles. [`waitstatus_to_exitcode()`](#os.waitstatus_to_exitcode "os.waitstatus_to_exitcode") can be used to convert the exit status into an exit code. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the function now retries the system call instead of raising an [`InterruptedError`](exceptions#InterruptedError "InterruptedError") exception (see [**PEP 475**](https://www.python.org/dev/peps/pep-0475) for the rationale). 
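For example, on Unix a parent process can wait for one specific child and decode its status with `waitstatus_to_exitcode()`. This is a minimal sketch; it relies on `fork()` and therefore does not run on Windows:

```
import os

pid = os.fork()
if pid == 0:
    # Child: exit immediately with a nonzero status code.
    os._exit(7)
else:
    # Parent: block until that specific child terminates (options=0).
    child_pid, status = os.waitpid(pid, 0)
    print(os.waitstatus_to_exitcode(status))  # prints 7
```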
`os.wait3(options)` Similar to [`waitpid()`](#os.waitpid "os.waitpid"), except no process id argument is given and a 3-element tuple containing the child’s process id, exit status indication, and resource usage information is returned. Refer to [`resource`](resource#module-resource "resource: An interface to provide resource usage information on the current process. (Unix)").[`getrusage()`](resource#resource.getrusage "resource.getrusage") for details on resource usage information. The option argument is the same as that provided to [`waitpid()`](#os.waitpid "os.waitpid") and [`wait4()`](#os.wait4 "os.wait4"). [`waitstatus_to_exitcode()`](#os.waitstatus_to_exitcode "os.waitstatus_to_exitcode") can be used to convert the exit status into an exit code. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.wait4(pid, options)` Similar to [`waitpid()`](#os.waitpid "os.waitpid"), except a 3-element tuple containing the child’s process id, exit status indication, and resource usage information is returned. Refer to [`resource`](resource#module-resource "resource: An interface to provide resource usage information on the current process. (Unix)").[`getrusage()`](resource#resource.getrusage "resource.getrusage") for details on resource usage information. The arguments to [`wait4()`](#os.wait4 "os.wait4") are the same as those provided to [`waitpid()`](#os.waitpid "os.waitpid"). [`waitstatus_to_exitcode()`](#os.waitstatus_to_exitcode "os.waitstatus_to_exitcode") can be used to convert the exit status into an exit code. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.waitstatus_to_exitcode(status)` Convert a wait status to an exit code. On Unix: * If the process exited normally (if `WIFEXITED(status)` is true), return the process exit status (return `WEXITSTATUS(status)`): result greater than or equal to 0. * If the process was terminated by a signal (if `WIFSIGNALED(status)` is true), return `-signum` where *signum* is the number of the signal that caused the process to terminate (return `-WTERMSIG(status)`): result less than 0. * Otherwise, raise a [`ValueError`](exceptions#ValueError "ValueError"). On Windows, return *status* shifted right by 8 bits. On Unix, if the process is being traced or if [`waitpid()`](#os.waitpid "os.waitpid") was called with [`WUNTRACED`](#os.WUNTRACED "os.WUNTRACED") option, the caller must first check if `WIFSTOPPED(status)` is true. This function must not be called if `WIFSTOPPED(status)` is true. See also [`WIFEXITED()`](#os.WIFEXITED "os.WIFEXITED"), [`WEXITSTATUS()`](#os.WEXITSTATUS "os.WEXITSTATUS"), [`WIFSIGNALED()`](#os.WIFSIGNALED "os.WIFSIGNALED"), [`WTERMSIG()`](#os.WTERMSIG "os.WTERMSIG"), [`WIFSTOPPED()`](#os.WIFSTOPPED "os.WIFSTOPPED"), [`WSTOPSIG()`](#os.WSTOPSIG "os.WSTOPSIG") functions. New in version 3.9. `os.WNOHANG` The option for [`waitpid()`](#os.waitpid "os.waitpid") to return immediately if no child process status is available immediately. The function returns `(0, 0)` in this case. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.WCONTINUED` This option causes child processes to be reported if they have been continued from a job control stop since their status was last reported. [Availability](https://docs.python.org/3.9/library/intro.html#availability): some Unix systems. `os.WUNTRACED` This option causes child processes to be reported if they have been stopped but their current state has not been reported since they were stopped.
[Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. The following functions take a process status code as returned by [`system()`](#os.system "os.system"), [`wait()`](#os.wait "os.wait"), or [`waitpid()`](#os.waitpid "os.waitpid") as a parameter. They may be used to determine the disposition of a process. `os.WCOREDUMP(status)` Return `True` if a core dump was generated for the process, otherwise return `False`. This function should be employed only if [`WIFSIGNALED()`](#os.WIFSIGNALED "os.WIFSIGNALED") is true. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.WIFCONTINUED(status)` Return `True` if a stopped child has been resumed by delivery of [`SIGCONT`](signal#signal.SIGCONT "signal.SIGCONT") (if the process has been continued from a job control stop), otherwise return `False`. See [`WCONTINUED`](#os.WCONTINUED "os.WCONTINUED") option. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.WIFSTOPPED(status)` Return `True` if the process was stopped by delivery of a signal, otherwise return `False`. [`WIFSTOPPED()`](#os.WIFSTOPPED "os.WIFSTOPPED") only returns `True` if the [`waitpid()`](#os.waitpid "os.waitpid") call was done using [`WUNTRACED`](#os.WUNTRACED "os.WUNTRACED") option or when the process is being traced (see *[ptrace(2)](https://manpages.debian.org/ptrace(2))*). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.WIFSIGNALED(status)` Return `True` if the process was terminated by a signal, otherwise return `False`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.WIFEXITED(status)` Return `True` if the process exited normally, that is, by calling `exit()` or `_exit()`, or by returning from `main()`; otherwise return `False`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.WEXITSTATUS(status)` Return the process exit status. This function should be employed only if [`WIFEXITED()`](#os.WIFEXITED "os.WIFEXITED") is true. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.WSTOPSIG(status)` Return the signal which caused the process to stop. This function should be employed only if [`WIFSTOPPED()`](#os.WIFSTOPPED "os.WIFSTOPPED") is true. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.WTERMSIG(status)` Return the number of the signal that caused the process to terminate. This function should be employed only if [`WIFSIGNALED()`](#os.WIFSIGNALED "os.WIFSIGNALED") is true. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. Interface to the scheduler -------------------------- These functions control how a process is allocated CPU time by the operating system. They are only available on some Unix platforms. For more detailed information, consult your Unix manpages. New in version 3.3. The following scheduling policies are exposed if they are supported by the operating system. `os.SCHED_OTHER` The default scheduling policy. `os.SCHED_BATCH` Scheduling policy for CPU-intensive processes that tries to preserve interactivity on the rest of the computer. `os.SCHED_IDLE` Scheduling policy for extremely low priority background tasks. `os.SCHED_SPORADIC` Scheduling policy for sporadic server programs. `os.SCHED_FIFO` A First In First Out scheduling policy. `os.SCHED_RR` A round-robin scheduling policy.
`os.SCHED_RESET_ON_FORK` This flag can be OR’ed with any other scheduling policy. When a process with this flag set forks, its child’s scheduling policy and priority are reset to the default. `class os.sched_param(sched_priority)` This class represents tunable scheduling parameters used in [`sched_setparam()`](#os.sched_setparam "os.sched_setparam"), [`sched_setscheduler()`](#os.sched_setscheduler "os.sched_setscheduler"), and [`sched_getparam()`](#os.sched_getparam "os.sched_getparam"). It is immutable. At the moment, there is only one possible parameter: `sched_priority` The scheduling priority for a scheduling policy. `os.sched_get_priority_min(policy)` Get the minimum priority value for *policy*. *policy* is one of the scheduling policy constants above. `os.sched_get_priority_max(policy)` Get the maximum priority value for *policy*. *policy* is one of the scheduling policy constants above. `os.sched_setscheduler(pid, policy, param)` Set the scheduling policy for the process with PID *pid*. A *pid* of 0 means the calling process. *policy* is one of the scheduling policy constants above. *param* is a [`sched_param`](#os.sched_param "os.sched_param") instance. `os.sched_getscheduler(pid)` Return the scheduling policy for the process with PID *pid*. A *pid* of 0 means the calling process. The result is one of the scheduling policy constants above. `os.sched_setparam(pid, param)` Set the scheduling parameters for the process with PID *pid*. A *pid* of 0 means the calling process. *param* is a [`sched_param`](#os.sched_param "os.sched_param") instance. `os.sched_getparam(pid)` Return the scheduling parameters as a [`sched_param`](#os.sched_param "os.sched_param") instance for the process with PID *pid*. A *pid* of 0 means the calling process. `os.sched_rr_get_interval(pid)` Return the round-robin quantum in seconds for the process with PID *pid*. A *pid* of 0 means the calling process. `os.sched_yield()` Voluntarily relinquish the CPU. `os.sched_setaffinity(pid, mask)` Restrict the process with PID *pid* (or the current process if zero) to a set of CPUs. *mask* is an iterable of integers representing the set of CPUs to which the process should be restricted. `os.sched_getaffinity(pid)` Return the set of CPUs the process with PID *pid* (or the current process if zero) is restricted to. Miscellaneous System Information -------------------------------- `os.confstr(name)` Return string-valued system configuration values. *name* specifies the configuration value to retrieve; it may be a string which is the name of a defined system value; these names are specified in a number of standards (POSIX, Unix 95, Unix 98, and others). Some platforms define additional names as well. The names known to the host operating system are given as the keys of the `confstr_names` dictionary. For configuration variables not included in that mapping, passing an integer for *name* is also accepted. If the configuration value specified by *name* isn’t defined, `None` is returned. If *name* is a string and is not known, [`ValueError`](exceptions#ValueError "ValueError") is raised. If a specific value for *name* is not supported by the host system, even if it is included in `confstr_names`, an [`OSError`](exceptions#OSError "OSError") is raised with [`errno.EINVAL`](errno#errno.EINVAL "errno.EINVAL") for the error number. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. 
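For example, a POSIX-standard name such as `'CS_PATH'` can be queried on Unix. This is a minimal sketch that guards against platforms where the name is not defined, using the `confstr_names` mapping described next:

```
import os

# 'CS_PATH' is specified by POSIX, but check the mapping before querying.
if 'CS_PATH' in os.confstr_names:
    print(os.confstr('CS_PATH'))  # e.g. '/bin:/usr/bin' on many systems
```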
`os.confstr_names` Dictionary mapping names accepted by [`confstr()`](#os.confstr "os.confstr") to the integer values defined for those names by the host operating system. This can be used to determine the set of names known to the system. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.cpu_count()` Return the number of CPUs in the system. Returns `None` if undetermined. This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with `len(os.sched_getaffinity(0))`. New in version 3.4. `os.getloadavg()` Return the number of processes in the system run queue averaged over the last 1, 5, and 15 minutes, or raise [`OSError`](exceptions#OSError "OSError") if the load average was unobtainable. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.sysconf(name)` Return integer-valued system configuration values. If the configuration value specified by *name* isn’t defined, `-1` is returned. The comments regarding the *name* parameter for [`confstr()`](#os.confstr "os.confstr") apply here as well; the dictionary that provides information on the known names is given by `sysconf_names`. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. `os.sysconf_names` Dictionary mapping names accepted by [`sysconf()`](#os.sysconf "os.sysconf") to the integer values defined for those names by the host operating system. This can be used to determine the set of names known to the system. [Availability](https://docs.python.org/3.9/library/intro.html#availability): Unix. The following data values are used to support path manipulation operations. These are defined for all platforms. Higher-level operations on pathnames are defined in the [`os.path`](os.path#module-os.path "os.path: Operations on pathnames.") module. `os.curdir` The constant string used by the operating system to refer to the current directory. This is `'.'` for Windows and POSIX. Also available via [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."). `os.pardir` The constant string used by the operating system to refer to the parent directory. This is `'..'` for Windows and POSIX. Also available via [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."). `os.sep` The character used by the operating system to separate pathname components. This is `'/'` for POSIX and `'\\'` for Windows. Note that knowing this is not sufficient to be able to parse or concatenate pathnames — use [`os.path.split()`](os.path#os.path.split "os.path.split") and [`os.path.join()`](os.path#os.path.join "os.path.join") — but it is occasionally useful. Also available via [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."). `os.altsep` An alternative character used by the operating system to separate pathname components, or `None` if only one separator character exists. This is set to `'/'` on Windows systems where `sep` is a backslash. Also available via [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."). `os.extsep` The character which separates the base filename from the extension; for example, the `'.'` in `os.py`. Also available via [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."). `os.pathsep` The character conventionally used by the operating system to separate search path components (as in `PATH`), such as `':'` for POSIX or `';'` for Windows.
Also available via [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."). `os.defpath` The default search path used by [`exec*p*`](#os.execl "os.execl") and [`spawn*p*`](#os.spawnl "os.spawnl") if the environment doesn’t have a `'PATH'` key. Also available via [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."). `os.linesep` The string used to separate (or, rather, terminate) lines on the current platform. This may be a single character, such as `'\n'` for POSIX, or multiple characters, for example, `'\r\n'` for Windows. Do not use *os.linesep* as a line terminator when writing files opened in text mode (the default); use a single `'\n'` instead, on all platforms. `os.devnull` The file path of the null device. For example: `'/dev/null'` for POSIX, `'nul'` for Windows. Also available via [`os.path`](os.path#module-os.path "os.path: Operations on pathnames."). `os.RTLD_LAZY` `os.RTLD_NOW` `os.RTLD_GLOBAL` `os.RTLD_LOCAL` `os.RTLD_NODELETE` `os.RTLD_NOLOAD` `os.RTLD_DEEPBIND` Flags for use with the [`setdlopenflags()`](sys#sys.setdlopenflags "sys.setdlopenflags") and [`getdlopenflags()`](sys#sys.getdlopenflags "sys.getdlopenflags") functions. See the Unix manual page *[dlopen(3)](https://manpages.debian.org/dlopen(3))* for what the different flags mean. New in version 3.3. Random numbers -------------- `os.getrandom(size, flags=0)` Get up to *size* random bytes. The function can return fewer bytes than requested. These bytes can be used to seed user-space random number generators or for cryptographic purposes. `getrandom()` relies on entropy gathered from device drivers and other sources of environmental noise. Unnecessarily reading large quantities of data will have a negative impact on other users of the `/dev/random` and `/dev/urandom` devices. The flags argument is a bit mask that can contain zero or more of the following values ORed together: [`os.GRND_RANDOM`](#os.GRND_RANDOM "os.GRND_RANDOM") and [`GRND_NONBLOCK`](#os.GRND_NONBLOCK "os.GRND_NONBLOCK"). See also the [Linux getrandom() manual page](http://man7.org/linux/man-pages/man2/getrandom.2.html). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Linux 3.17 and newer. New in version 3.6. `os.urandom(size)` Return a bytestring of *size* random bytes suitable for cryptographic use. This function returns random bytes from an OS-specific randomness source. The returned data should be unpredictable enough for cryptographic applications, though its exact quality depends on the OS implementation. On Linux, if the `getrandom()` syscall is available, it is used in blocking mode: block until the system urandom entropy pool is initialized (128 bits of entropy are collected by the kernel). See [**PEP 524**](https://www.python.org/dev/peps/pep-0524) for the rationale. On Linux, the [`getrandom()`](#os.getrandom "os.getrandom") function can be used to get random bytes in non-blocking mode (using the [`GRND_NONBLOCK`](#os.GRND_NONBLOCK "os.GRND_NONBLOCK") flag) or to poll until the system urandom entropy pool is initialized. On a Unix-like system, random bytes are read from the `/dev/urandom` device. If the `/dev/urandom` device is not available or not readable, the [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError") exception is raised. On Windows, it will use `CryptGenRandom()`. See also The [`secrets`](secrets#module-secrets "secrets: Generate secure random numbers for managing secrets.") module provides higher level functions.
For an easy-to-use interface to the random number generator provided by your platform, please see [`random.SystemRandom`](random#random.SystemRandom "random.SystemRandom"). Changed in version 3.6.0: On Linux, `getrandom()` is now used in blocking mode to increase security. Changed in version 3.5.2: On Linux, if the `getrandom()` syscall blocks (the urandom entropy pool is not initialized yet), fall back on reading `/dev/urandom`. Changed in version 3.5: On Linux 3.17 and newer, the `getrandom()` syscall is now used when available. On OpenBSD 5.6 and newer, the C `getentropy()` function is now used. These functions avoid the usage of an internal file descriptor. `os.GRND_NONBLOCK` By default, when reading from `/dev/random`, [`getrandom()`](#os.getrandom "os.getrandom") blocks if no random bytes are available, and when reading from `/dev/urandom`, it blocks if the entropy pool has not yet been initialized. If the [`GRND_NONBLOCK`](#os.GRND_NONBLOCK "os.GRND_NONBLOCK") flag is set, then [`getrandom()`](#os.getrandom "os.getrandom") does not block in these cases, but instead immediately raises [`BlockingIOError`](exceptions#BlockingIOError "BlockingIOError"). New in version 3.6. `os.GRND_RANDOM` If this bit is set, then random bytes are drawn from the `/dev/random` pool instead of the `/dev/urandom` pool. New in version 3.6.
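For example, drawing a few cryptographically strong bytes and rendering them as text is a one-liner with `os.urandom()` (a minimal sketch; for managing tokens and passwords, prefer the `secrets` module mentioned above):

```
import os

token = os.urandom(16)  # 16 random bytes from the OS randomness source
print(token.hex())      # hexadecimal text form of the bytes
```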
python copy — Shallow and deep copy operations copy — Shallow and deep copy operations ======================================= **Source code:** [Lib/copy.py](https://github.com/python/cpython/tree/3.9/Lib/copy.py) Assignment statements in Python do not copy objects; they create bindings between a target and an object. For collections that are mutable or contain mutable items, a copy is sometimes needed so one can change one copy without changing the other. This module provides generic shallow and deep copy operations (explained below). Interface summary: `copy.copy(x)` Return a shallow copy of *x*. `copy.deepcopy(x[, memo])` Return a deep copy of *x*. `exception copy.Error` Raised for module specific errors. The difference between shallow and deep copying is only relevant for compound objects (objects that contain other objects, like lists or class instances): * A *shallow copy* constructs a new compound object and then (to the extent possible) inserts *references* into it to the objects found in the original. * A *deep copy* constructs a new compound object and then, recursively, inserts *copies* into it of the objects found in the original. Two problems often exist with deep copy operations that don’t exist with shallow copy operations: * Recursive objects (compound objects that, directly or indirectly, contain a reference to themselves) may cause a recursive loop. * Because deep copy copies everything it may copy too much, such as data which is intended to be shared between copies. The [`deepcopy()`](#copy.deepcopy "copy.deepcopy") function avoids these problems by: * keeping a `memo` dictionary of objects already copied during the current copying pass; and * letting user-defined classes override the copying operation or the set of components copied. This module does not copy types like module, method, stack trace, stack frame, file, socket, window, or any similar types. It does “copy” functions and classes (shallowly and deeply), by returning the original object unchanged; this is compatible with the way these are treated by the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") module. Shallow copies of dictionaries can be made using [`dict.copy()`](stdtypes#dict.copy "dict.copy"), and of lists by assigning a slice of the entire list, for example, `copied_list = original_list[:]`. Classes can use the same interfaces to control copying that they use to control pickling. See the description of module [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") for information on these methods. In fact, the [`copy`](#module-copy "copy: Shallow and deep copy operations.") module uses the registered pickle functions from the [`copyreg`](copyreg#module-copyreg "copyreg: Register pickle support functions.") module. In order for a class to define its own copy implementation, it can define special methods `__copy__()` and `__deepcopy__()`. The former is called to implement the shallow copy operation; no additional arguments are passed. The latter is called to implement the deep copy operation; it is passed one argument, the `memo` dictionary. If the `__deepcopy__()` implementation needs to make a deep copy of a component, it should call the [`deepcopy()`](#copy.deepcopy "copy.deepcopy") function with the component as first argument and the memo dictionary as second argument.
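For example, a class can implement both hooks. The sketch below uses a hypothetical `Node` class to show the shallow copy sharing its component list by reference while the deep copy duplicates it and forwards the memo dictionary:

```
import copy

class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children if children is not None else []

    def __copy__(self):
        # Shallow copy: the new Node references the *same* children list.
        return Node(self.value, self.children)

    def __deepcopy__(self, memo):
        # Deep copy: forward the memo so shared or recursive components
        # are copied only once.
        return Node(copy.deepcopy(self.value, memo),
                    copy.deepcopy(self.children, memo))

root = Node('root', [Node('leaf')])
shallow = copy.copy(root)
deep = copy.deepcopy(root)
assert shallow.children is root.children   # references shared
assert deep.children is not root.children  # components duplicated
```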
See also `Module` [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") Discussion of the special methods used to support object state retrieval and restoration. python Cryptographic Services Cryptographic Services ====================== The modules described in this chapter implement various algorithms of a cryptographic nature. They are available at the discretion of the installation. On Unix systems, the [`crypt`](crypt#module-crypt "crypt: The crypt() function used to check Unix passwords. (deprecated) (Unix)") module may also be available. Here’s an overview: * [`hashlib` — Secure hashes and message digests](hashlib) + [Hash algorithms](hashlib#hash-algorithms) + [SHAKE variable length digests](hashlib#shake-variable-length-digests) + [Key derivation](hashlib#key-derivation) + [BLAKE2](hashlib#blake2) - [Creating hash objects](hashlib#creating-hash-objects) - [Constants](hashlib#constants) - [Examples](hashlib#examples) * [Simple hashing](hashlib#simple-hashing) * [Using different digest sizes](hashlib#using-different-digest-sizes) * [Keyed hashing](hashlib#keyed-hashing) * [Randomized hashing](hashlib#randomized-hashing) * [Personalization](hashlib#personalization) * [Tree mode](hashlib#tree-mode) - [Credits](hashlib#credits) * [`hmac` — Keyed-Hashing for Message Authentication](hmac) * [`secrets` — Generate secure random numbers for managing secrets](secrets) + [Random numbers](secrets#random-numbers) + [Generating tokens](secrets#generating-tokens) - [How many bytes should tokens use?](secrets#how-many-bytes-should-tokens-use) + [Other functions](secrets#other-functions) + [Recipes and best practices](secrets#recipes-and-best-practices) python readline — GNU readline interface readline — GNU readline interface ================================= The [`readline`](#module-readline "readline: GNU readline support for Python. (Unix)") module defines a number of functions to facilitate completion and reading/writing of history files from the Python interpreter. This module can be used directly, or via the [`rlcompleter`](rlcompleter#module-rlcompleter "rlcompleter: Python identifier completion, suitable for the GNU readline library.") module, which supports completion of Python identifiers at the interactive prompt. Settings made using this module affect the behaviour of both the interpreter’s interactive prompt and the prompts offered by the built-in [`input()`](functions#input "input") function. Readline keybindings may be configured via an initialization file, typically `.inputrc` in your home directory. See [Readline Init File](https://tiswww.cwru.edu/php/chet/readline/rluserman.html#SEC9) in the GNU Readline manual for information about the format and allowable constructs of that file, and the capabilities of the Readline library in general. Note The underlying Readline library API may be implemented by the `libedit` library instead of GNU readline. On macOS the [`readline`](#module-readline "readline: GNU readline support for Python. (Unix)") module detects which library is being used at run time. The configuration file for `libedit` is different from that of GNU readline. If you programmatically load configuration strings you can check for the text “libedit” in `readline.__doc__` to differentiate between GNU readline and libedit. If you use *editline*/`libedit` readline emulation on macOS, the initialization file located in your home directory is named `.editrc`. 
For example, the following content in `~/.editrc` will turn ON *vi* keybindings and TAB completion: ``` python:bind -v python:bind ^I rl_complete ``` Init file --------- The following functions relate to the init file and user configuration: `readline.parse_and_bind(string)` Execute the init line provided in the *string* argument. This calls `rl_parse_and_bind()` in the underlying library. `readline.read_init_file([filename])` Execute a readline initialization file. The default filename is the last filename used. This calls `rl_read_init_file()` in the underlying library. Line buffer ----------- The following functions operate on the line buffer: `readline.get_line_buffer()` Return the current contents of the line buffer (`rl_line_buffer` in the underlying library). `readline.insert_text(string)` Insert text into the line buffer at the cursor position. This calls `rl_insert_text()` in the underlying library, but ignores the return value. `readline.redisplay()` Change what’s displayed on the screen to reflect the current contents of the line buffer. This calls `rl_redisplay()` in the underlying library. History file ------------ The following functions operate on a history file: `readline.read_history_file([filename])` Load a readline history file, and append it to the history list. The default filename is `~/.history`. This calls `read_history()` in the underlying library. `readline.write_history_file([filename])` Save the history list to a readline history file, overwriting any existing file. The default filename is `~/.history`. This calls `write_history()` in the underlying library. `readline.append_history_file(nelements[, filename])` Append the last *nelements* items of history to a file. The default filename is `~/.history`. The file must already exist. This calls `append_history()` in the underlying library. This function only exists if Python was compiled for a version of the library that supports it. New in version 3.5. `readline.get_history_length()` `readline.set_history_length(length)` Set or return the desired number of lines to save in the history file. The [`write_history_file()`](#readline.write_history_file "readline.write_history_file") function uses this value to truncate the history file, by calling `history_truncate_file()` in the underlying library. Negative values imply unlimited history file size. History list ------------ The following functions operate on a global history list: `readline.clear_history()` Clear the current history. This calls `clear_history()` in the underlying library. The Python function only exists if Python was compiled for a version of the library that supports it. `readline.get_current_history_length()` Return the number of items currently in the history. (This is different from [`get_history_length()`](#readline.get_history_length "readline.get_history_length"), which returns the maximum number of lines that will be written to a history file.) `readline.get_history_item(index)` Return the current contents of history item at *index*. The item index is one-based. This calls `history_get()` in the underlying library. `readline.remove_history_item(pos)` Remove history item specified by its position from the history. The position is zero-based. This calls `remove_history()` in the underlying library. `readline.replace_history_item(pos, line)` Replace history item specified by its position with *line*. The position is zero-based. This calls `replace_history_entry()` in the underlying library. 
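Note that `get_history_item()` uses one-based indexes while `remove_history_item()` and `replace_history_item()` take zero-based positions, which is easy to trip over. A minimal sketch, seeding the history with `add_history()` (described next):

```
import readline

readline.add_history('first')
readline.add_history('second')

print(readline.get_history_item(1))        # 'first' -- one-based index
readline.replace_history_item(0, 'FIRST')  # zero-based position
readline.remove_history_item(1)            # removes 'second'
```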
`readline.add_history(line)` Append *line* to the history buffer, as if it was the last line typed. This calls `add_history()` in the underlying library. `readline.set_auto_history(enabled)` Enable or disable automatic calls to `add_history()` when reading input via readline. The *enabled* argument should be a Boolean value that when true, enables auto history, and that when false, disables auto history. New in version 3.6. **CPython implementation detail:** Auto history is enabled by default, and changes to this do not persist across multiple sessions. Startup hooks ------------- `readline.set_startup_hook([function])` Set or remove the function invoked by the `rl_startup_hook` callback of the underlying library. If *function* is specified, it will be used as the new hook function; if omitted or `None`, any function already installed is removed. The hook is called with no arguments just before readline prints the first prompt. `readline.set_pre_input_hook([function])` Set or remove the function invoked by the `rl_pre_input_hook` callback of the underlying library. If *function* is specified, it will be used as the new hook function; if omitted or `None`, any function already installed is removed. The hook is called with no arguments after the first prompt has been printed and just before readline starts reading input characters. This function only exists if Python was compiled for a version of the library that supports it. Completion ---------- The following functions relate to implementing a custom word completion function. This is typically operated by the Tab key, and can suggest and automatically complete a word being typed. By default, Readline is set up to be used by [`rlcompleter`](rlcompleter#module-rlcompleter "rlcompleter: Python identifier completion, suitable for the GNU readline library.") to complete Python identifiers for the interactive interpreter. If the [`readline`](#module-readline "readline: GNU readline support for Python. (Unix)") module is to be used with a custom completer, a different set of word delimiters should be set. `readline.set_completer([function])` Set or remove the completer function. If *function* is specified, it will be used as the new completer function; if omitted or `None`, any completer function already installed is removed. The completer function is called as `function(text, state)`, for *state* in `0`, `1`, `2`, …, until it returns a non-string value. It should return the next possible completion starting with *text*. The installed completer function is invoked by the *entry\_func* callback passed to `rl_completion_matches()` in the underlying library. The *text* string comes from the first parameter to the `rl_attempted_completion_function` callback of the underlying library. `readline.get_completer()` Get the completer function, or `None` if no completer function has been set. `readline.get_completion_type()` Get the type of completion being attempted. This returns the `rl_completion_type` variable in the underlying library as an integer. `readline.get_begidx()` `readline.get_endidx()` Get the beginning or ending index of the completion scope. These indexes are the *start* and *end* arguments passed to the `rl_attempted_completion_function` callback of the underlying library. `readline.set_completer_delims(string)` `readline.get_completer_delims()` Set or get the word delimiters for completion. These determine the start of the word to be considered for completion (the completion scope). 
These functions access the `rl_completer_word_break_characters` variable in the underlying library. `readline.set_completion_display_matches_hook([function])` Set or remove the completion display function. If *function* is specified, it will be used as the new completion display function; if omitted or `None`, any completion display function already installed is removed. This sets or clears the `rl_completion_display_matches_hook` callback in the underlying library. The completion display function is called as `function(substitution, [matches], longest_match_length)` once each time matches need to be displayed. Example ------- The following example demonstrates how to use the [`readline`](#module-readline "readline: GNU readline support for Python. (Unix)") module’s history reading and writing functions to automatically load and save a history file named `.python_history` from the user’s home directory. The code below would normally be executed automatically during interactive sessions from the user’s [`PYTHONSTARTUP`](../using/cmdline#envvar-PYTHONSTARTUP) file. ``` import atexit import os import readline histfile = os.path.join(os.path.expanduser("~"), ".python_history") try: readline.read_history_file(histfile) # default history len is -1 (infinite), which may grow unruly readline.set_history_length(1000) except FileNotFoundError: pass atexit.register(readline.write_history_file, histfile) ``` This code is actually automatically run when Python is run in [interactive mode](../tutorial/interpreter#tut-interactive) (see [Readline configuration](site#rlcompleter-config)). The following example achieves the same goal but supports concurrent interactive sessions, by only appending the new history. ``` import atexit import os import readline histfile = os.path.join(os.path.expanduser("~"), ".python_history") try: readline.read_history_file(histfile) h_len = readline.get_current_history_length() except FileNotFoundError: open(histfile, 'wb').close() h_len = 0 def save(prev_h_len, histfile): new_h_len = readline.get_current_history_length() readline.set_history_length(1000) readline.append_history_file(new_h_len - prev_h_len, histfile) atexit.register(save, h_len, histfile) ``` The following example extends the [`code.InteractiveConsole`](code#code.InteractiveConsole "code.InteractiveConsole") class to support history save/restore. ``` import atexit import code import os import readline class HistoryConsole(code.InteractiveConsole): def __init__(self, locals=None, filename="<console>", histfile=os.path.expanduser("~/.console-history")): code.InteractiveConsole.__init__(self, locals, filename) self.init_history(histfile) def init_history(self, histfile): readline.parse_and_bind("tab: complete") if hasattr(readline, "read_history_file"): try: readline.read_history_file(histfile) except FileNotFoundError: pass atexit.register(self.save_history, histfile) def save_history(self, histfile): readline.set_history_length(1000) readline.write_history_file(histfile) ``` python Concurrent Execution Concurrent Execution ==================== The modules described in this chapter provide support for concurrent execution of code. The appropriate choice of tool will depend on the task to be executed (CPU bound vs IO bound) and preferred style of development (event driven cooperative multitasking vs preemptive multitasking). 
Here’s an overview: * [`threading` — Thread-based parallelism](threading) + [Thread-Local Data](threading#thread-local-data) + [Thread Objects](threading#thread-objects) + [Lock Objects](threading#lock-objects) + [RLock Objects](threading#rlock-objects) + [Condition Objects](threading#condition-objects) + [Semaphore Objects](threading#semaphore-objects) - [`Semaphore` Example](threading#semaphore-example) + [Event Objects](threading#event-objects) + [Timer Objects](threading#timer-objects) + [Barrier Objects](threading#barrier-objects) + [Using locks, conditions, and semaphores in the `with` statement](threading#using-locks-conditions-and-semaphores-in-the-with-statement) * [`multiprocessing` — Process-based parallelism](multiprocessing) + [Introduction](multiprocessing#introduction) - [The `Process` class](multiprocessing#the-process-class) - [Contexts and start methods](multiprocessing#contexts-and-start-methods) - [Exchanging objects between processes](multiprocessing#exchanging-objects-between-processes) - [Synchronization between processes](multiprocessing#synchronization-between-processes) - [Sharing state between processes](multiprocessing#sharing-state-between-processes) - [Using a pool of workers](multiprocessing#using-a-pool-of-workers) + [Reference](multiprocessing#reference) - [`Process` and exceptions](multiprocessing#process-and-exceptions) - [Pipes and Queues](multiprocessing#pipes-and-queues) - [Miscellaneous](multiprocessing#miscellaneous) - [Connection Objects](multiprocessing#connection-objects) - [Synchronization primitives](multiprocessing#synchronization-primitives) - [Shared `ctypes` Objects](multiprocessing#shared-ctypes-objects) * [The `multiprocessing.sharedctypes` module](multiprocessing#module-multiprocessing.sharedctypes) - [Managers](multiprocessing#managers) * [Customized managers](multiprocessing#customized-managers) * [Using a remote manager](multiprocessing#using-a-remote-manager) - [Proxy Objects](multiprocessing#proxy-objects) * [Cleanup](multiprocessing#cleanup) - [Process Pools](multiprocessing#module-multiprocessing.pool) - [Listeners and Clients](multiprocessing#module-multiprocessing.connection) * [Address Formats](multiprocessing#address-formats) - [Authentication keys](multiprocessing#authentication-keys) - [Logging](multiprocessing#logging) - [The `multiprocessing.dummy` module](multiprocessing#module-multiprocessing.dummy) + [Programming guidelines](multiprocessing#programming-guidelines) - [All start methods](multiprocessing#all-start-methods) - [The spawn and forkserver start methods](multiprocessing#the-spawn-and-forkserver-start-methods) + [Examples](multiprocessing#examples) * [`multiprocessing.shared_memory` — Provides shared memory for direct access across processes](multiprocessing.shared_memory) * [The `concurrent` package](concurrent) * [`concurrent.futures` — Launching parallel tasks](concurrent.futures) + [Executor Objects](concurrent.futures#executor-objects) + [ThreadPoolExecutor](concurrent.futures#threadpoolexecutor) - [ThreadPoolExecutor Example](concurrent.futures#threadpoolexecutor-example) + [ProcessPoolExecutor](concurrent.futures#processpoolexecutor) - [ProcessPoolExecutor Example](concurrent.futures#processpoolexecutor-example) + [Future Objects](concurrent.futures#future-objects) + [Module Functions](concurrent.futures#module-functions) + [Exception classes](concurrent.futures#exception-classes) * [`subprocess` — Subprocess management](subprocess) + [Using the `subprocess` Module](subprocess#using-the-subprocess-module) - 
[Frequently Used Arguments](subprocess#frequently-used-arguments) - [Popen Constructor](subprocess#popen-constructor) - [Exceptions](subprocess#exceptions) + [Security Considerations](subprocess#security-considerations) + [Popen Objects](subprocess#popen-objects) + [Windows Popen Helpers](subprocess#windows-popen-helpers) - [Windows Constants](subprocess#windows-constants) + [Older high-level API](subprocess#older-high-level-api) + [Replacing Older Functions with the `subprocess` Module](subprocess#replacing-older-functions-with-the-subprocess-module) - [Replacing **/bin/sh** shell command substitution](subprocess#replacing-bin-sh-shell-command-substitution) - [Replacing shell pipeline](subprocess#replacing-shell-pipeline) - [Replacing `os.system()`](subprocess#replacing-os-system) - [Replacing the `os.spawn` family](subprocess#replacing-the-os-spawn-family) - [Replacing `os.popen()`, `os.popen2()`, `os.popen3()`](subprocess#replacing-os-popen-os-popen2-os-popen3) - [Replacing functions from the `popen2` module](subprocess#replacing-functions-from-the-popen2-module) + [Legacy Shell Invocation Functions](subprocess#legacy-shell-invocation-functions) + [Notes](subprocess#notes) - [Converting an argument sequence to a string on Windows](subprocess#converting-an-argument-sequence-to-a-string-on-windows) * [`sched` — Event scheduler](sched) + [Scheduler Objects](sched#scheduler-objects) * [`queue` — A synchronized queue class](queue) + [Queue Objects](queue#queue-objects) + [SimpleQueue Objects](queue#simplequeue-objects) * [`contextvars` — Context Variables](contextvars) + [Context Variables](contextvars#context-variables) + [Manual Context Management](contextvars#manual-context-management) + [asyncio support](contextvars#asyncio-support) The following are support modules for some of the above services: * [`_thread` — Low-level threading API](_thread)
python __future__ — Future statement definitions \_\_future\_\_ — Future statement definitions ============================================= **Source code:** [Lib/\_\_future\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/__future__.py) [`__future__`](#module-__future__ "__future__: Future statement definitions") is a real module, and serves three purposes: * To avoid confusing existing tools that analyze import statements and expect to find the modules they’re importing. * To ensure that [future statements](../reference/simple_stmts#future) run under releases prior to 2.1 at least yield runtime exceptions (the import of [`__future__`](#module-__future__ "__future__: Future statement definitions") will fail, because there was no module of that name prior to 2.1). * To document when incompatible changes were introduced, and when they will be — or were — made mandatory. This is a form of executable documentation, and can be inspected programmatically via importing [`__future__`](#module-__future__ "__future__: Future statement definitions") and examining its contents. Each statement in `__future__.py` is of the form: ``` FeatureName = _Feature(OptionalRelease, MandatoryRelease, CompilerFlag) ``` where, normally, *OptionalRelease* is less than *MandatoryRelease*, and both are 5-tuples of the same form as [`sys.version_info`](sys#sys.version_info "sys.version_info"): ``` (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int PY_MINOR_VERSION, # the 1; an int PY_MICRO_VERSION, # the 0; an int PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string PY_RELEASE_SERIAL # the 3; an int ) ``` *OptionalRelease* records the first release in which the feature was accepted. In the case of a *MandatoryRelease* that has not yet occurred, *MandatoryRelease* predicts the release in which the feature will become part of the language. Else *MandatoryRelease* records when the feature became part of the language; in releases at or after that, modules no longer need a future statement to use the feature in question, but may continue to use such imports. *MandatoryRelease* may also be `None`, meaning that a planned feature got dropped. Instances of class `_Feature` have two corresponding methods, `getOptionalRelease()` and `getMandatoryRelease()`. *CompilerFlag* is the (bitfield) flag that should be passed in the fourth argument to the built-in function [`compile()`](functions#compile "compile") to enable the feature in dynamically compiled code. This flag is stored in the `compiler_flag` attribute on `_Feature` instances. No feature description will ever be deleted from [`__future__`](#module-__future__ "__future__: Future statement definitions"). 
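For example, the feature records can be inspected directly, and a feature’s *CompilerFlag* can be passed to `compile()`. This is a minimal sketch; the release tuple shown in the comment is illustrative, and `SomeType` is just an undefined placeholder name (the code is only compiled, not run):

```
import __future__

feature = __future__.annotations
print(feature.getOptionalRelease())   # e.g. (3, 7, 0, 'beta', 1)
print(feature.getMandatoryRelease())  # planned release, or None if dropped

# Enable the feature in dynamically compiled code via its compiler flag.
code = compile('x: SomeType = 1', '<string>', 'exec',
               flags=feature.compiler_flag)
```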
Since its introduction in Python 2.1 the following features have found their way into the language using this mechanism: | feature | optional in | mandatory in | effect | | --- | --- | --- | --- | | nested\_scopes | 2.1.0b1 | 2.2 | [**PEP 227**](https://www.python.org/dev/peps/pep-0227): *Statically Nested Scopes* | | generators | 2.2.0a1 | 2.3 | [**PEP 255**](https://www.python.org/dev/peps/pep-0255): *Simple Generators* | | division | 2.2.0a2 | 3.0 | [**PEP 238**](https://www.python.org/dev/peps/pep-0238): *Changing the Division Operator* | | absolute\_import | 2.5.0a1 | 3.0 | [**PEP 328**](https://www.python.org/dev/peps/pep-0328): *Imports: Multi-Line and Absolute/Relative* | | with\_statement | 2.5.0a1 | 2.6 | [**PEP 343**](https://www.python.org/dev/peps/pep-0343): *The “with” Statement* | | print\_function | 2.6.0a2 | 3.0 | [**PEP 3105**](https://www.python.org/dev/peps/pep-3105): *Make print a function* | | unicode\_literals | 2.6.0a2 | 3.0 | [**PEP 3112**](https://www.python.org/dev/peps/pep-3112): *Bytes literals in Python 3000* | | generator\_stop | 3.5.0b1 | 3.7 | [**PEP 479**](https://www.python.org/dev/peps/pep-0479): *StopIteration handling inside generators* | | annotations | 3.7.0b1 | TBD [1](#id2) | [**PEP 563**](https://www.python.org/dev/peps/pep-0563): *Postponed evaluation of annotations* | `1` `from __future__ import annotations` was previously scheduled to become mandatory in Python 3.10, but the Python Steering Council twice decided to delay the change ([announcement for Python 3.10](https://mail.python.org/archives/list/[email protected]/message/CLVXXPQ2T2LQ5MP2Y53VVQFCXYWQJHKZ/); [announcement for Python 3.11](https://mail.python.org/archives/list/[email protected]/message/VIZEBX5EYMSYIJNDBF6DMUMZOCWHARSO/)). No final decision has been made yet. See also [**PEP 563**](https://www.python.org/dev/peps/pep-0563) and [**PEP 649**](https://www.python.org/dev/peps/pep-0649). See also [Future statements](../reference/simple_stmts#future) How the compiler treats future imports. python collections — Container datatypes collections — Container datatypes ================================= **Source code:** [Lib/collections/\_\_init\_\_.py](https://github.com/python/cpython/tree/3.9/Lib/collections/__init__.py) This module implements specialized container datatypes providing alternatives to Python’s general purpose built-in containers, [`dict`](stdtypes#dict "dict"), [`list`](stdtypes#list "list"), [`set`](stdtypes#set "set"), and [`tuple`](stdtypes#tuple "tuple"). 
| | | | --- | --- | | [`namedtuple()`](#collections.namedtuple "collections.namedtuple") | factory function for creating tuple subclasses with named fields | | [`deque`](#collections.deque "collections.deque") | list-like container with fast appends and pops on either end | | [`ChainMap`](#collections.ChainMap "collections.ChainMap") | dict-like class for creating a single view of multiple mappings | | [`Counter`](#collections.Counter "collections.Counter") | dict subclass for counting hashable objects | | [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") | dict subclass that remembers the order entries were added | | [`defaultdict`](#collections.defaultdict "collections.defaultdict") | dict subclass that calls a factory function to supply missing values | | [`UserDict`](#collections.UserDict "collections.UserDict") | wrapper around dictionary objects for easier dict subclassing | | [`UserList`](#collections.UserList "collections.UserList") | wrapper around list objects for easier list subclassing | | [`UserString`](#collections.UserString "collections.UserString") | wrapper around string objects for easier string subclassing | Deprecated since version 3.3, will be removed in version 3.10: Moved [Collections Abstract Base Classes](collections.abc#collections-abstract-base-classes) to the [`collections.abc`](collections.abc#module-collections.abc "collections.abc: Abstract base classes for containers") module. For backwards compatibility, they continue to be visible in this module through Python 3.9. ChainMap objects ---------------- New in version 3.3. A [`ChainMap`](#collections.ChainMap "collections.ChainMap") class is provided for quickly linking a number of mappings so they can be treated as a single unit. It is often much faster than creating a new dictionary and running multiple [`update()`](stdtypes#dict.update "dict.update") calls. The class can be used to simulate nested scopes and is useful in templating. `class collections.ChainMap(*maps)` A [`ChainMap`](#collections.ChainMap "collections.ChainMap") groups multiple dicts or other mappings together to create a single, updateable view. If no *maps* are specified, a single empty dictionary is provided so that a new chain always has at least one mapping. The underlying mappings are stored in a list. That list is public and can be accessed or updated using the *maps* attribute. There is no other state. Lookups search the underlying mappings successively until a key is found. In contrast, writes, updates, and deletions only operate on the first mapping. A [`ChainMap`](#collections.ChainMap "collections.ChainMap") incorporates the underlying mappings by reference. So, if one of the underlying mappings gets updated, those changes will be reflected in [`ChainMap`](#collections.ChainMap "collections.ChainMap"). All of the usual dictionary methods are supported. In addition, there is a *maps* attribute, a method for creating new subcontexts, and a property for accessing all but the first mapping: `maps` A user updateable list of mappings. The list is ordered from first-searched to last-searched. It is the only stored state and can be modified to change which mappings are searched. The list should always contain at least one mapping. `new_child(m=None)` Returns a new [`ChainMap`](#collections.ChainMap "collections.ChainMap") containing a new map followed by all of the maps in the current instance. 
If `m` is specified, it becomes the new map at the front of the list of mappings; if not specified, an empty dict is used, so that a call to `d.new_child()` is equivalent to: `ChainMap({}, *d.maps)`. This method is used for creating subcontexts that can be updated without altering values in any of the parent mappings. Changed in version 3.4: The optional `m` parameter was added. `parents` Property returning a new [`ChainMap`](#collections.ChainMap "collections.ChainMap") containing all of the maps in the current instance except the first one. This is useful for skipping the first map in the search. Use cases are similar to those for the [`nonlocal`](../reference/simple_stmts#nonlocal) keyword used in [nested scopes](../glossary#term-nested-scope). The use cases also parallel those for the built-in [`super()`](functions#super "super") function. A reference to `d.parents` is equivalent to: `ChainMap(*d.maps[1:])`. Note, the iteration order of a [`ChainMap()`](#collections.ChainMap "collections.ChainMap") is determined by scanning the mappings last to first: ``` >>> baseline = {'music': 'bach', 'art': 'rembrandt'} >>> adjustments = {'art': 'van gogh', 'opera': 'carmen'} >>> list(ChainMap(adjustments, baseline)) ['music', 'art', 'opera'] ``` This gives the same ordering as a series of [`dict.update()`](stdtypes#dict.update "dict.update") calls starting with the last mapping: ``` >>> combined = baseline.copy() >>> combined.update(adjustments) >>> list(combined) ['music', 'art', 'opera'] ``` Changed in version 3.9: Added support for `|` and `|=` operators, specified in [**PEP 584**](https://www.python.org/dev/peps/pep-0584). See also * The [MultiContext class](https://github.com/enthought/codetools/blob/4.0.0/codetools/contexts/multi_context.py) in the Enthought [CodeTools package](https://github.com/enthought/codetools) has options to support writing to any mapping in the chain. * Django’s [Context class](https://github.com/django/django/blob/main/django/template/context.py) for templating is a read-only chain of mappings. It also features pushing and popping of contexts similar to the [`new_child()`](#collections.ChainMap.new_child "collections.ChainMap.new_child") method and the [`parents`](#collections.ChainMap.parents "collections.ChainMap.parents") property. * The [Nested Contexts recipe](https://code.activestate.com/recipes/577434/) has options to control whether writes and other mutations apply only to the first mapping or to any mapping in the chain. * A [greatly simplified read-only version of Chainmap](https://code.activestate.com/recipes/305268/). ### [`ChainMap`](#collections.ChainMap "collections.ChainMap") Examples and Recipes This section shows various approaches to working with chained maps. 
Example of simulating Python’s internal lookup chain: ``` import builtins pylookup = ChainMap(locals(), globals(), vars(builtins)) ``` Example of letting user specified command-line arguments take precedence over environment variables which in turn take precedence over default values: ``` import os, argparse defaults = {'color': 'red', 'user': 'guest'} parser = argparse.ArgumentParser() parser.add_argument('-u', '--user') parser.add_argument('-c', '--color') namespace = parser.parse_args() command_line_args = {k: v for k, v in vars(namespace).items() if v is not None} combined = ChainMap(command_line_args, os.environ, defaults) print(combined['color']) print(combined['user']) ``` Example patterns for using the [`ChainMap`](#collections.ChainMap "collections.ChainMap") class to simulate nested contexts: ``` c = ChainMap() # Create root context d = c.new_child() # Create nested child context e = c.new_child() # Child of c, independent from d e.maps[0] # Current context dictionary -- like Python's locals() e.maps[-1] # Root context -- like Python's globals() e.parents # Enclosing context chain -- like Python's nonlocals d['x'] = 1 # Set value in current context d['x'] # Get first key in the chain of contexts del d['x'] # Delete from current context list(d) # All nested values k in d # Check all nested values len(d) # Number of nested values d.items() # All nested items dict(d) # Flatten into a regular dictionary ``` The [`ChainMap`](#collections.ChainMap "collections.ChainMap") class only makes updates (writes and deletions) to the first mapping in the chain while lookups will search the full chain. However, if deep writes and deletions are desired, it is easy to make a subclass that updates keys found deeper in the chain: ``` class DeepChainMap(ChainMap): 'Variant of ChainMap that allows direct updates to inner scopes' def __setitem__(self, key, value): for mapping in self.maps: if key in mapping: mapping[key] = value return self.maps[0][key] = value def __delitem__(self, key): for mapping in self.maps: if key in mapping: del mapping[key] return raise KeyError(key) >>> d = DeepChainMap({'zebra': 'black'}, {'elephant': 'blue'}, {'lion': 'yellow'}) >>> d['lion'] = 'orange' # update an existing key two levels down >>> d['snake'] = 'red' # new keys get added to the topmost dict >>> del d['elephant'] # remove an existing key one level down >>> d # display result DeepChainMap({'zebra': 'black', 'snake': 'red'}, {}, {'lion': 'orange'}) ``` Counter objects --------------- A counter tool is provided to support convenient and rapid tallies. For example: ``` >>> # Tally occurrences of words in a list >>> cnt = Counter() >>> for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']: ... cnt[word] += 1 >>> cnt Counter({'blue': 3, 'red': 2, 'green': 1}) >>> # Find the ten most common words in Hamlet >>> import re >>> words = re.findall(r'\w+', open('hamlet.txt').read().lower()) >>> Counter(words).most_common(10) [('the', 1143), ('and', 966), ('to', 762), ('of', 669), ('i', 631), ('you', 554), ('a', 546), ('my', 514), ('hamlet', 471), ('in', 451)] ``` `class collections.Counter([iterable-or-mapping])` A [`Counter`](#collections.Counter "collections.Counter") is a [`dict`](stdtypes#dict "dict") subclass for counting hashable objects. It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values. Counts are allowed to be any integer value including zero or negative counts. 
The [`Counter`](#collections.Counter "collections.Counter") class is similar to bags or multisets in other languages. Elements are counted from an *iterable* or initialized from another *mapping* (or counter):

```
>>> c = Counter()                          # a new, empty counter
>>> c = Counter('gallahad')                # a new counter from an iterable
>>> c = Counter({'red': 4, 'blue': 2})     # a new counter from a mapping
>>> c = Counter(cats=4, dogs=8)            # a new counter from keyword args
```

Counter objects have a dictionary interface except that they return a zero count for missing items instead of raising a [`KeyError`](exceptions#KeyError "KeyError"):

```
>>> c = Counter(['eggs', 'ham'])
>>> c['bacon']                             # count of a missing element is zero
0
```

Setting a count to zero does not remove an element from a counter. Use `del` to remove it entirely:

```
>>> c['sausage'] = 0                       # counter entry with a zero count
>>> del c['sausage']                       # del actually removes the entry
```

New in version 3.1.

Changed in version 3.7: As a [`dict`](stdtypes#dict "dict") subclass, [`Counter`](#collections.Counter "collections.Counter") inherited the capability to remember insertion order. Math operations on *Counter* objects also preserve order. Results are ordered according to when an element is first encountered in the left operand and then by the order encountered in the right operand.

Counter objects support additional methods beyond those available for all dictionaries:

`elements()` Return an iterator over elements repeating each as many times as its count. Elements are returned in the order first encountered. If an element’s count is less than one, [`elements()`](#collections.Counter.elements "collections.Counter.elements") will ignore it.

```
>>> c = Counter(a=4, b=2, c=0, d=-2)
>>> sorted(c.elements())
['a', 'a', 'a', 'a', 'b', 'b']
```

`most_common([n])` Return a list of the *n* most common elements and their counts from the most common to the least. If *n* is omitted or `None`, [`most_common()`](#collections.Counter.most_common "collections.Counter.most_common") returns *all* elements in the counter. Elements with equal counts are ordered in the order first encountered:

```
>>> Counter('abracadabra').most_common(3)
[('a', 5), ('b', 2), ('r', 2)]
```

`subtract([iterable-or-mapping])` Elements are subtracted from an *iterable* or from another *mapping* (or counter). Like [`dict.update()`](stdtypes#dict.update "dict.update") but subtracts counts instead of replacing them. Both inputs and outputs may be zero or negative.

```
>>> c = Counter(a=4, b=2, c=0, d=-2)
>>> d = Counter(a=1, b=2, c=3, d=4)
>>> c.subtract(d)
>>> c
Counter({'a': 3, 'b': 0, 'c': -3, 'd': -6})
```

New in version 3.2.

The usual dictionary methods are available for [`Counter`](#collections.Counter "collections.Counter") objects except for two which work differently for counters.

`fromkeys(iterable)` This class method is not implemented for [`Counter`](#collections.Counter "collections.Counter") objects.

`update([iterable-or-mapping])` Elements are counted from an *iterable* or added-in from another *mapping* (or counter). Like [`dict.update()`](stdtypes#dict.update "dict.update") but adds counts instead of replacing them. Also, the *iterable* is expected to be a sequence of elements, not a sequence of `(key, value)` pairs.
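To make that distinction concrete, a short sketch (the words are arbitrary) contrasting the two input styles accepted by `update()`:

```
from collections import Counter

c = Counter('aab')      # Counter({'a': 2, 'b': 1})
c.update('abb')         # an iterable contributes one count per element
print(c)                # Counter({'a': 3, 'b': 3})

c.update({'a': 10})     # a mapping contributes its counts directly
print(c)                # Counter({'a': 13, 'b': 3})
```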
Common patterns for working with [`Counter`](#collections.Counter "collections.Counter") objects: ``` sum(c.values()) # total of all counts c.clear() # reset all counts list(c) # list unique elements set(c) # convert to a set dict(c) # convert to a regular dictionary c.items() # convert to a list of (elem, cnt) pairs Counter(dict(list_of_pairs)) # convert from a list of (elem, cnt) pairs c.most_common()[:-n-1:-1] # n least common elements +c # remove zero and negative counts ``` Several mathematical operations are provided for combining [`Counter`](#collections.Counter "collections.Counter") objects to produce multisets (counters that have counts greater than zero). Addition and subtraction combine counters by adding or subtracting the counts of corresponding elements. Intersection and union return the minimum and maximum of corresponding counts. Each operation can accept inputs with signed counts, but the output will exclude results with counts of zero or less. ``` >>> c = Counter(a=3, b=1) >>> d = Counter(a=1, b=2) >>> c + d # add two counters together: c[x] + d[x] Counter({'a': 4, 'b': 3}) >>> c - d # subtract (keeping only positive counts) Counter({'a': 2}) >>> c & d # intersection: min(c[x], d[x]) Counter({'a': 1, 'b': 1}) >>> c | d # union: max(c[x], d[x]) Counter({'a': 3, 'b': 2}) ``` Unary addition and subtraction are shortcuts for adding an empty counter or subtracting from an empty counter. ``` >>> c = Counter(a=2, b=-4) >>> +c Counter({'a': 2}) >>> -c Counter({'b': 4}) ``` New in version 3.3: Added support for unary plus, unary minus, and in-place multiset operations. Note Counters were primarily designed to work with positive integers to represent running counts; however, care was taken to not unnecessarily preclude use cases needing other types or negative values. To help with those use cases, this section documents the minimum range and type restrictions. * The [`Counter`](#collections.Counter "collections.Counter") class itself is a dictionary subclass with no restrictions on its keys and values. The values are intended to be numbers representing counts, but you *could* store anything in the value field. * The [`most_common()`](#collections.Counter.most_common "collections.Counter.most_common") method requires only that the values be orderable. * For in-place operations such as `c[key] += 1`, the value type need only support addition and subtraction. So fractions, floats, and decimals would work and negative values are supported. The same is also true for [`update()`](#collections.Counter.update "collections.Counter.update") and [`subtract()`](#collections.Counter.subtract "collections.Counter.subtract") which allow negative and zero values for both inputs and outputs. * The multiset methods are designed only for use cases with positive values. The inputs may be negative or zero, but only outputs with positive values are created. There are no type restrictions, but the value type needs to support addition, subtraction, and comparison. * The [`elements()`](#collections.Counter.elements "collections.Counter.elements") method requires integer counts. It ignores zero and negative counts. See also * [Bag class](https://www.gnu.org/software/smalltalk/manual-base/html_node/Bag.html) in Smalltalk. * Wikipedia entry for [Multisets](https://en.wikipedia.org/wiki/Multiset). * [C++ multisets](http://www.java2s.com/Tutorial/Cpp/0380__set-multiset/Catalog0380__set-multiset.htm) tutorial with examples. * For mathematical operations on multisets and their use cases, see *Knuth, Donald. 
The Art of Computer Programming Volume II, Section 4.6.3, Exercise 19*. * To enumerate all distinct multisets of a given size over a given set of elements, see [`itertools.combinations_with_replacement()`](itertools#itertools.combinations_with_replacement "itertools.combinations_with_replacement"): ``` map(Counter, combinations_with_replacement('ABC', 2)) # --> AA AB AC BB BC CC ``` deque objects ------------- `class collections.deque([iterable[, maxlen]])` Returns a new deque object initialized left-to-right (using [`append()`](#collections.deque.append "collections.deque.append")) with data from *iterable*. If *iterable* is not specified, the new deque is empty. Deques are a generalization of stacks and queues (the name is pronounced “deck” and is short for “double-ended queue”). Deques support thread-safe, memory efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction. Though [`list`](stdtypes#list "list") objects support similar operations, they are optimized for fast fixed-length operations and incur O(n) memory movement costs for `pop(0)` and `insert(0, v)` operations which change both the size and position of the underlying data representation. If *maxlen* is not specified or is `None`, deques may grow to an arbitrary length. Otherwise, the deque is bounded to the specified maximum length. Once a bounded length deque is full, when new items are added, a corresponding number of items are discarded from the opposite end. Bounded length deques provide functionality similar to the `tail` filter in Unix. They are also useful for tracking transactions and other pools of data where only the most recent activity is of interest. Deque objects support the following methods: `append(x)` Add *x* to the right side of the deque. `appendleft(x)` Add *x* to the left side of the deque. `clear()` Remove all elements from the deque leaving it with length 0. `copy()` Create a shallow copy of the deque. New in version 3.5. `count(x)` Count the number of deque elements equal to *x*. New in version 3.2. `extend(iterable)` Extend the right side of the deque by appending elements from the iterable argument. `extendleft(iterable)` Extend the left side of the deque by appending elements from *iterable*. Note, the series of left appends results in reversing the order of elements in the iterable argument. `index(x[, start[, stop]])` Return the position of *x* in the deque (at or after index *start* and before index *stop*). Returns the first match or raises [`ValueError`](exceptions#ValueError "ValueError") if not found. New in version 3.5. `insert(i, x)` Insert *x* into the deque at position *i*. If the insertion would cause a bounded deque to grow beyond *maxlen*, an [`IndexError`](exceptions#IndexError "IndexError") is raised. New in version 3.5. `pop()` Remove and return an element from the right side of the deque. If no elements are present, raises an [`IndexError`](exceptions#IndexError "IndexError"). `popleft()` Remove and return an element from the left side of the deque. If no elements are present, raises an [`IndexError`](exceptions#IndexError "IndexError"). `remove(value)` Remove the first occurrence of *value*. If not found, raises a [`ValueError`](exceptions#ValueError "ValueError"). `reverse()` Reverse the elements of the deque in-place and then return `None`. New in version 3.2. `rotate(n=1)` Rotate the deque *n* steps to the right. If *n* is negative, rotate to the left. 
When the deque is not empty, rotating one step to the right is equivalent to `d.appendleft(d.pop())`, and rotating one step to the left is equivalent to `d.append(d.popleft())`. Deque objects also provide one read-only attribute: `maxlen` Maximum size of a deque or `None` if unbounded. New in version 3.1. In addition to the above, deques support iteration, pickling, `len(d)`, `reversed(d)`, `copy.copy(d)`, `copy.deepcopy(d)`, membership testing with the [`in`](../reference/expressions#in) operator, and subscript references such as `d[0]` to access the first element. Indexed access is O(1) at both ends but slows to O(n) in the middle. For fast random access, use lists instead. Starting in version 3.5, deques support `__add__()`, `__mul__()`, and `__imul__()`. Example: ``` >>> from collections import deque >>> d = deque('ghi') # make a new deque with three items >>> for elem in d: # iterate over the deque's elements ... print(elem.upper()) G H I >>> d.append('j') # add a new entry to the right side >>> d.appendleft('f') # add a new entry to the left side >>> d # show the representation of the deque deque(['f', 'g', 'h', 'i', 'j']) >>> d.pop() # return and remove the rightmost item 'j' >>> d.popleft() # return and remove the leftmost item 'f' >>> list(d) # list the contents of the deque ['g', 'h', 'i'] >>> d[0] # peek at leftmost item 'g' >>> d[-1] # peek at rightmost item 'i' >>> list(reversed(d)) # list the contents of a deque in reverse ['i', 'h', 'g'] >>> 'h' in d # search the deque True >>> d.extend('jkl') # add multiple elements at once >>> d deque(['g', 'h', 'i', 'j', 'k', 'l']) >>> d.rotate(1) # right rotation >>> d deque(['l', 'g', 'h', 'i', 'j', 'k']) >>> d.rotate(-1) # left rotation >>> d deque(['g', 'h', 'i', 'j', 'k', 'l']) >>> deque(reversed(d)) # make a new deque in reverse order deque(['l', 'k', 'j', 'i', 'h', 'g']) >>> d.clear() # empty the deque >>> d.pop() # cannot pop from an empty deque Traceback (most recent call last): File "<pyshell#6>", line 1, in -toplevel- d.pop() IndexError: pop from an empty deque >>> d.extendleft('abc') # extendleft() reverses the input order >>> d deque(['c', 'b', 'a']) ``` ### [`deque`](#collections.deque "collections.deque") Recipes This section shows various approaches to working with deques. Bounded length deques provide functionality similar to the `tail` filter in Unix: ``` def tail(filename, n=10): 'Return the last n lines of a file' with open(filename) as f: return deque(f, n) ``` Another approach to using deques is to maintain a sequence of recently added elements by appending to the right and popping to the left: ``` def moving_average(iterable, n=3): # moving_average([40, 30, 50, 46, 39, 44]) --> 40.0 42.0 45.0 43.0 # http://en.wikipedia.org/wiki/Moving_average it = iter(iterable) d = deque(itertools.islice(it, n-1)) d.appendleft(0) s = sum(d) for elem in it: s += elem - d.popleft() d.append(elem) yield s / n ``` A [round-robin scheduler](https://en.wikipedia.org/wiki/Round-robin_scheduling) can be implemented with input iterators stored in a [`deque`](#collections.deque "collections.deque"). Values are yielded from the active iterator in position zero. 
If that iterator is exhausted, it can be removed with [`popleft()`](#collections.deque.popleft "collections.deque.popleft"); otherwise, it can be cycled back to the end with the [`rotate()`](#collections.deque.rotate "collections.deque.rotate") method: ``` def roundrobin(*iterables): "roundrobin('ABC', 'D', 'EF') --> A D E B F C" iterators = deque(map(iter, iterables)) while iterators: try: while True: yield next(iterators[0]) iterators.rotate(-1) except StopIteration: # Remove an exhausted iterator. iterators.popleft() ``` The [`rotate()`](#collections.deque.rotate "collections.deque.rotate") method provides a way to implement [`deque`](#collections.deque "collections.deque") slicing and deletion. For example, a pure Python implementation of `del d[n]` relies on the `rotate()` method to position elements to be popped: ``` def delete_nth(d, n): d.rotate(-n) d.popleft() d.rotate(n) ``` To implement [`deque`](#collections.deque "collections.deque") slicing, use a similar approach applying [`rotate()`](#collections.deque.rotate "collections.deque.rotate") to bring a target element to the left side of the deque. Remove old entries with [`popleft()`](#collections.deque.popleft "collections.deque.popleft"), add new entries with [`extend()`](#collections.deque.extend "collections.deque.extend"), and then reverse the rotation. With minor variations on that approach, it is easy to implement Forth style stack manipulations such as `dup`, `drop`, `swap`, `over`, `pick`, `rot`, and `roll`. defaultdict objects ------------------- `class collections.defaultdict(default_factory=None, /[, ...])` Return a new dictionary-like object. [`defaultdict`](#collections.defaultdict "collections.defaultdict") is a subclass of the built-in [`dict`](stdtypes#dict "dict") class. It overrides one method and adds one writable instance variable. The remaining functionality is the same as for the [`dict`](stdtypes#dict "dict") class and is not documented here. The first argument provides the initial value for the [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory") attribute; it defaults to `None`. All remaining arguments are treated the same as if they were passed to the [`dict`](stdtypes#dict "dict") constructor, including keyword arguments. [`defaultdict`](#collections.defaultdict "collections.defaultdict") objects support the following method in addition to the standard [`dict`](stdtypes#dict "dict") operations: `__missing__(key)` If the [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory") attribute is `None`, this raises a [`KeyError`](exceptions#KeyError "KeyError") exception with the *key* as argument. If [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory") is not `None`, it is called without arguments to provide a default value for the given *key*, this value is inserted in the dictionary for the *key*, and returned. If calling [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory") raises an exception this exception is propagated unchanged. This method is called by the [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__") method of the [`dict`](stdtypes#dict "dict") class when the requested key is not found; whatever it returns or raises is then returned or raised by [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__"). 
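A minimal sketch of that sequence for [`defaultdict`](#collections.defaultdict "collections.defaultdict") (the key name is chosen arbitrarily): the factory is called, its result is stored under the missing key, and the same object is returned:

```
from collections import defaultdict

d = defaultdict(list)   # default_factory is the list constructor
print('k' in d)         # False -- nothing stored yet
value = d['k']          # __getitem__ -> __missing__ -> list() is inserted
print(value)            # []
print('k' in d)         # True -- the lookup itself stored the default
```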
Note that [`__missing__()`](#collections.defaultdict.__missing__ "collections.defaultdict.__missing__") is *not* called for any operations besides [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__"). This means that `get()` will, like normal dictionaries, return `None` as a default rather than using [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory"). [`defaultdict`](#collections.defaultdict "collections.defaultdict") objects support the following instance variable: `default_factory` This attribute is used by the [`__missing__()`](#collections.defaultdict.__missing__ "collections.defaultdict.__missing__") method; it is initialized from the first argument to the constructor, if present, or to `None`, if absent. Changed in version 3.9: Added merge (`|`) and update (`|=`) operators, specified in [**PEP 584**](https://www.python.org/dev/peps/pep-0584). ### [`defaultdict`](#collections.defaultdict "collections.defaultdict") Examples Using [`list`](stdtypes#list "list") as the [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory"), it is easy to group a sequence of key-value pairs into a dictionary of lists: ``` >>> s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)] >>> d = defaultdict(list) >>> for k, v in s: ... d[k].append(v) ... >>> sorted(d.items()) [('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])] ``` When each key is encountered for the first time, it is not already in the mapping; so an entry is automatically created using the [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory") function which returns an empty [`list`](stdtypes#list "list"). The `list.append()` operation then attaches the value to the new list. When keys are encountered again, the look-up proceeds normally (returning the list for that key) and the `list.append()` operation adds another value to the list. This technique is simpler and faster than an equivalent technique using [`dict.setdefault()`](stdtypes#dict.setdefault "dict.setdefault"): ``` >>> d = {} >>> for k, v in s: ... d.setdefault(k, []).append(v) ... >>> sorted(d.items()) [('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])] ``` Setting the [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory") to [`int`](functions#int "int") makes the [`defaultdict`](#collections.defaultdict "collections.defaultdict") useful for counting (like a bag or multiset in other languages): ``` >>> s = 'mississippi' >>> d = defaultdict(int) >>> for k in s: ... d[k] += 1 ... >>> sorted(d.items()) [('i', 4), ('m', 1), ('p', 2), ('s', 4)] ``` When a letter is first encountered, it is missing from the mapping, so the [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory") function calls [`int()`](functions#int "int") to supply a default count of zero. The increment operation then builds up the count for each letter. The function [`int()`](functions#int "int") which always returns zero is just a special case of constant functions. A faster and more flexible way to create constant functions is to use a lambda function which can supply any constant value (not just zero): ``` >>> def constant_factory(value): ... 
return lambda: value >>> d = defaultdict(constant_factory('<missing>')) >>> d.update(name='John', action='ran') >>> '%(name)s %(action)s to %(object)s' % d 'John ran to <missing>' ``` Setting the [`default_factory`](#collections.defaultdict.default_factory "collections.defaultdict.default_factory") to [`set`](stdtypes#set "set") makes the [`defaultdict`](#collections.defaultdict "collections.defaultdict") useful for building a dictionary of sets: ``` >>> s = [('red', 1), ('blue', 2), ('red', 3), ('blue', 4), ('red', 1), ('blue', 4)] >>> d = defaultdict(set) >>> for k, v in s: ... d[k].add(v) ... >>> sorted(d.items()) [('blue', {2, 4}), ('red', {1, 3})] ``` namedtuple() Factory Function for Tuples with Named Fields ---------------------------------------------------------- Named tuples assign meaning to each position in a tuple and allow for more readable, self-documenting code. They can be used wherever regular tuples are used, and they add the ability to access fields by name instead of position index. `collections.namedtuple(typename, field_names, *, rename=False, defaults=None, module=None)` Returns a new tuple subclass named *typename*. The new subclass is used to create tuple-like objects that have fields accessible by attribute lookup as well as being indexable and iterable. Instances of the subclass also have a helpful docstring (with typename and field\_names) and a helpful [`__repr__()`](../reference/datamodel#object.__repr__ "object.__repr__") method which lists the tuple contents in a `name=value` format. The *field\_names* are a sequence of strings such as `['x', 'y']`. Alternatively, *field\_names* can be a single string with each fieldname separated by whitespace and/or commas, for example `'x y'` or `'x, y'`. Any valid Python identifier may be used for a fieldname except for names starting with an underscore. Valid identifiers consist of letters, digits, and underscores but do not start with a digit or underscore and cannot be a [`keyword`](keyword#module-keyword "keyword: Test whether a string is a keyword in Python.") such as *class*, *for*, *return*, *global*, *pass*, or *raise*. If *rename* is true, invalid fieldnames are automatically replaced with positional names. For example, `['abc', 'def', 'ghi', 'abc']` is converted to `['abc', '_1', 'ghi', '_3']`, eliminating the keyword `def` and the duplicate fieldname `abc`. *defaults* can be `None` or an [iterable](../glossary#term-iterable) of default values. Since fields with a default value must come after any fields without a default, the *defaults* are applied to the rightmost parameters. For example, if the fieldnames are `['x', 'y', 'z']` and the defaults are `(1, 2)`, then `x` will be a required argument, `y` will default to `1`, and `z` will default to `2`. If *module* is defined, the `__module__` attribute of the named tuple is set to that value. Named tuple instances do not have per-instance dictionaries, so they are lightweight and require no more memory than regular tuples. To support pickling, the named tuple class should be assigned to a variable that matches *typename*. Changed in version 3.1: Added support for *rename*. Changed in version 3.6: The *verbose* and *rename* parameters became [keyword-only arguments](../glossary#keyword-only-parameter). Changed in version 3.6: Added the *module* parameter. Changed in version 3.7: Removed the *verbose* parameter and the `_source` attribute. Changed in version 3.7: Added the *defaults* parameter and the `_field_defaults` attribute. 
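Since several of these parameters interact, here is a small sketch (the type and field names are hypothetical) combining *rename* and *defaults*:

```
from collections import namedtuple

# 'class' is a keyword and 'x' is duplicated, so rename=True replaces
# both invalid entries with positional names; the single default applies
# to the rightmost field.
Row = namedtuple('Row', ['x', 'class', 'x'], rename=True, defaults=(0,))
print(Row._fields)          # ('x', '_1', '_2')
print(Row._field_defaults)  # {'_2': 0}
print(Row(1, 2))            # Row(x=1, _1=2, _2=0)
```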
```
>>> # Basic example
>>> Point = namedtuple('Point', ['x', 'y'])
>>> p = Point(11, y=22)     # instantiate with positional or keyword arguments
>>> p[0] + p[1]             # indexable like the plain tuple (11, 22)
33
>>> x, y = p                # unpack like a regular tuple
>>> x, y
(11, 22)
>>> p.x + p.y               # fields also accessible by name
33
>>> p                       # readable __repr__ with a name=value style
Point(x=11, y=22)
```

Named tuples are especially useful for assigning field names to result tuples returned by the [`csv`](csv#module-csv "csv: Write and read tabular data to and from delimited files.") or [`sqlite3`](sqlite3#module-sqlite3 "sqlite3: A DB-API 2.0 implementation using SQLite 3.x.") modules:

```
EmployeeRecord = namedtuple('EmployeeRecord', 'name, age, title, department, paygrade')

import csv
for emp in map(EmployeeRecord._make, csv.reader(open("employees.csv", newline=''))):
    print(emp.name, emp.title)

import sqlite3
conn = sqlite3.connect('/companydata')
cursor = conn.cursor()
cursor.execute('SELECT name, age, title, department, paygrade FROM employees')
for emp in map(EmployeeRecord._make, cursor.fetchall()):
    print(emp.name, emp.title)
```

In addition to the methods inherited from tuples, named tuples support three additional methods and two attributes. To prevent conflicts with field names, the method and attribute names start with an underscore.

`classmethod somenamedtuple._make(iterable)` Class method that makes a new instance from an existing sequence or iterable.

```
>>> t = [11, 22]
>>> Point._make(t)
Point(x=11, y=22)
```

`somenamedtuple._asdict()` Return a new [`dict`](stdtypes#dict "dict") which maps field names to their corresponding values:

```
>>> p = Point(x=11, y=22)
>>> p._asdict()
{'x': 11, 'y': 22}
```

Changed in version 3.1: Returns an [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") instead of a regular [`dict`](stdtypes#dict "dict").

Changed in version 3.8: Returns a regular [`dict`](stdtypes#dict "dict") instead of an [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict"). As of Python 3.7, regular dicts are guaranteed to be ordered. If the extra features of [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") are required, the suggested remediation is to cast the result to the desired type: `OrderedDict(nt._asdict())`.

`somenamedtuple._replace(**kwargs)` Return a new instance of the named tuple replacing specified fields with new values:

```
>>> p = Point(x=11, y=22)
>>> p._replace(x=33)
Point(x=33, y=22)

>>> for partnum, record in inventory.items():
...     inventory[partnum] = record._replace(price=newprices[partnum], timestamp=time.time())
```

`somenamedtuple._fields` Tuple of strings listing the field names. Useful for introspection and for creating new named tuple types from existing named tuples.

```
>>> p._fields            # view the field names
('x', 'y')

>>> Color = namedtuple('Color', 'red green blue')
>>> Pixel = namedtuple('Pixel', Point._fields + Color._fields)
>>> Pixel(11, 22, 128, 255, 0)
Pixel(x=11, y=22, red=128, green=255, blue=0)
```

`somenamedtuple._field_defaults` Dictionary mapping field names to default values.
``` >>> Account = namedtuple('Account', ['type', 'balance'], defaults=[0]) >>> Account._field_defaults {'balance': 0} >>> Account('premium') Account(type='premium', balance=0) ``` To retrieve a field whose name is stored in a string, use the [`getattr()`](functions#getattr "getattr") function: ``` >>> getattr(p, 'x') 11 ``` To convert a dictionary to a named tuple, use the double-star-operator (as described in [Unpacking Argument Lists](../tutorial/controlflow#tut-unpacking-arguments)): ``` >>> d = {'x': 11, 'y': 22} >>> Point(**d) Point(x=11, y=22) ``` Since a named tuple is a regular Python class, it is easy to add or change functionality with a subclass. Here is how to add a calculated field and a fixed-width print format: ``` >>> class Point(namedtuple('Point', ['x', 'y'])): ... __slots__ = () ... @property ... def hypot(self): ... return (self.x ** 2 + self.y ** 2) ** 0.5 ... def __str__(self): ... return 'Point: x=%6.3f y=%6.3f hypot=%6.3f' % (self.x, self.y, self.hypot) >>> for p in Point(3, 4), Point(14, 5/7): ... print(p) Point: x= 3.000 y= 4.000 hypot= 5.000 Point: x=14.000 y= 0.714 hypot=14.018 ``` The subclass shown above sets `__slots__` to an empty tuple. This helps keep memory requirements low by preventing the creation of instance dictionaries. Subclassing is not useful for adding new, stored fields. Instead, simply create a new named tuple type from the [`_fields`](#collections.somenamedtuple._fields "collections.somenamedtuple._fields") attribute: ``` >>> Point3D = namedtuple('Point3D', Point._fields + ('z',)) ``` Docstrings can be customized by making direct assignments to the `__doc__` fields: ``` >>> Book = namedtuple('Book', ['id', 'title', 'authors']) >>> Book.__doc__ += ': Hardcover book in active collection' >>> Book.id.__doc__ = '13-digit ISBN' >>> Book.title.__doc__ = 'Title of first printing' >>> Book.authors.__doc__ = 'List of authors sorted by last name' ``` Changed in version 3.5: Property docstrings became writeable. See also * See [`typing.NamedTuple`](typing#typing.NamedTuple "typing.NamedTuple") for a way to add type hints for named tuples. It also provides an elegant notation using the [`class`](../reference/compound_stmts#class) keyword: ``` class Component(NamedTuple): part_number: int weight: float description: Optional[str] = None ``` * See [`types.SimpleNamespace()`](types#types.SimpleNamespace "types.SimpleNamespace") for a mutable namespace based on an underlying dictionary instead of a tuple. * The [`dataclasses`](dataclasses#module-dataclasses "dataclasses: Generate special methods on user-defined classes.") module provides a decorator and functions for automatically adding generated special methods to user-defined classes. OrderedDict objects ------------------- Ordered dictionaries are just like regular dictionaries but have some extra capabilities relating to ordering operations. They have become less important now that the built-in [`dict`](stdtypes#dict "dict") class gained the ability to remember insertion order (this new behavior became guaranteed in Python 3.7). Some differences from [`dict`](stdtypes#dict "dict") still remain: * The regular [`dict`](stdtypes#dict "dict") was designed to be very good at mapping operations. Tracking insertion order was secondary. * The [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") was designed to be good at reordering operations. Space efficiency, iteration speed, and the performance of update operations were secondary. 
* Algorithmically, [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") can handle frequent reordering operations better than [`dict`](stdtypes#dict "dict"). This makes it suitable for tracking recent accesses (for example in an [LRU cache](https://medium.com/@krishankantsinghal/my-first-blog-on-medium-583159139237)). * The equality operation for [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") checks for matching order. * The `popitem()` method of [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") has a different signature. It accepts an optional argument to specify which item is popped. * [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") has a `move_to_end()` method to efficiently reposition an element to an endpoint. * Until Python 3.8, [`dict`](stdtypes#dict "dict") lacked a [`__reversed__()`](../reference/datamodel#object.__reversed__ "object.__reversed__") method. `class collections.OrderedDict([items])` Return an instance of a [`dict`](stdtypes#dict "dict") subclass that has methods specialized for rearranging dictionary order. New in version 3.1. `popitem(last=True)` The [`popitem()`](#collections.OrderedDict.popitem "collections.OrderedDict.popitem") method for ordered dictionaries returns and removes a (key, value) pair. The pairs are returned in LIFO order if *last* is true or FIFO order if false. `move_to_end(key, last=True)` Move an existing *key* to either end of an ordered dictionary. The item is moved to the right end if *last* is true (the default) or to the beginning if *last* is false. Raises [`KeyError`](exceptions#KeyError "KeyError") if the *key* does not exist: ``` >>> d = OrderedDict.fromkeys('abcde') >>> d.move_to_end('b') >>> ''.join(d.keys()) 'acdeb' >>> d.move_to_end('b', last=False) >>> ''.join(d.keys()) 'bacde' ``` New in version 3.2. In addition to the usual mapping methods, ordered dictionaries also support reverse iteration using [`reversed()`](functions#reversed "reversed"). Equality tests between [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") objects are order-sensitive and are implemented as `list(od1.items())==list(od2.items())`. Equality tests between [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") objects and other [`Mapping`](collections.abc#collections.abc.Mapping "collections.abc.Mapping") objects are order-insensitive like regular dictionaries. This allows [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") objects to be substituted anywhere a regular dictionary is used. Changed in version 3.5: The items, keys, and values [views](../glossary#term-dictionary-view) of [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") now support reverse iteration using [`reversed()`](functions#reversed "reversed"). Changed in version 3.6: With the acceptance of [**PEP 468**](https://www.python.org/dev/peps/pep-0468), order is retained for keyword arguments passed to the [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") constructor and its `update()` method. Changed in version 3.9: Added merge (`|`) and update (`|=`) operators, specified in [**PEP 584**](https://www.python.org/dev/peps/pep-0584). ### [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") Examples and Recipes It is straightforward to create an ordered dictionary variant that remembers the order the keys were *last* inserted. 
If a new entry overwrites an existing entry, the original insertion position is changed and moved to the end: ``` class LastUpdatedOrderedDict(OrderedDict): 'Store items in the order the keys were last added' def __setitem__(self, key, value): super().__setitem__(key, value) self.move_to_end(key) ``` An [`OrderedDict`](#collections.OrderedDict "collections.OrderedDict") would also be useful for implementing variants of [`functools.lru_cache()`](functools#functools.lru_cache "functools.lru_cache"): ``` class LRU: def __init__(self, func, maxsize=128): self.func = func self.maxsize = maxsize self.cache = OrderedDict() def __call__(self, *args): if args in self.cache: value = self.cache[args] self.cache.move_to_end(args) return value value = self.func(*args) if len(self.cache) >= self.maxsize: self.cache.popitem(False) self.cache[args] = value return value ``` UserDict objects ---------------- The class, [`UserDict`](#collections.UserDict "collections.UserDict") acts as a wrapper around dictionary objects. The need for this class has been partially supplanted by the ability to subclass directly from [`dict`](stdtypes#dict "dict"); however, this class can be easier to work with because the underlying dictionary is accessible as an attribute. `class collections.UserDict([initialdata])` Class that simulates a dictionary. The instance’s contents are kept in a regular dictionary, which is accessible via the [`data`](#collections.UserDict.data "collections.UserDict.data") attribute of [`UserDict`](#collections.UserDict "collections.UserDict") instances. If *initialdata* is provided, [`data`](#collections.UserDict.data "collections.UserDict.data") is initialized with its contents; note that a reference to *initialdata* will not be kept, allowing it to be used for other purposes. In addition to supporting the methods and operations of mappings, [`UserDict`](#collections.UserDict "collections.UserDict") instances provide the following attribute: `data` A real dictionary used to store the contents of the [`UserDict`](#collections.UserDict "collections.UserDict") class. UserList objects ---------------- This class acts as a wrapper around list objects. It is a useful base class for your own list-like classes which can inherit from them and override existing methods or add new ones. In this way, one can add new behaviors to lists. The need for this class has been partially supplanted by the ability to subclass directly from [`list`](stdtypes#list "list"); however, this class can be easier to work with because the underlying list is accessible as an attribute. `class collections.UserList([list])` Class that simulates a list. The instance’s contents are kept in a regular list, which is accessible via the [`data`](#collections.UserList.data "collections.UserList.data") attribute of [`UserList`](#collections.UserList "collections.UserList") instances. The instance’s contents are initially set to a copy of *list*, defaulting to the empty list `[]`. *list* can be any iterable, for example a real Python list or a [`UserList`](#collections.UserList "collections.UserList") object. In addition to supporting the methods and operations of mutable sequences, [`UserList`](#collections.UserList "collections.UserList") instances provide the following attribute: `data` A real [`list`](stdtypes#list "list") object used to store the contents of the [`UserList`](#collections.UserList "collections.UserList") class. 
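Because the underlying list is exposed, overriding behavior in a subclass is often a one-liner; a brief sketch with a hypothetical subclass:

```
from collections import UserList

class DedupList(UserList):
    'List-like container that silently ignores duplicate appends'
    def append(self, item):
        if item not in self.data:   # the real list is just an attribute
            self.data.append(item)

d = DedupList('abc')
d.append('b')                       # ignored -- already present
d.append('d')
print(d)                            # ['a', 'b', 'c', 'd']
```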
**Subclassing requirements:** Subclasses of [`UserList`](#collections.UserList "collections.UserList") are expected to offer a constructor which can be called with either no arguments or one argument. List operations which return a new sequence attempt to create an instance of the actual implementation class. To do so, it assumes that the constructor can be called with a single parameter, which is a sequence object used as a data source. If a derived class does not wish to comply with this requirement, all of the special methods supported by this class will need to be overridden; please consult the sources for information about the methods which need to be provided in that case. UserString objects ------------------ The class, [`UserString`](#collections.UserString "collections.UserString") acts as a wrapper around string objects. The need for this class has been partially supplanted by the ability to subclass directly from [`str`](stdtypes#str "str"); however, this class can be easier to work with because the underlying string is accessible as an attribute. `class collections.UserString(seq)` Class that simulates a string object. The instance’s content is kept in a regular string object, which is accessible via the [`data`](#collections.UserString.data "collections.UserString.data") attribute of [`UserString`](#collections.UserString "collections.UserString") instances. The instance’s contents are initially set to a copy of *seq*. The *seq* argument can be any object which can be converted into a string using the built-in [`str()`](stdtypes#str "str") function. In addition to supporting the methods and operations of strings, [`UserString`](#collections.UserString "collections.UserString") instances provide the following attribute: `data` A real [`str`](stdtypes#str "str") object used to store the contents of the [`UserString`](#collections.UserString "collections.UserString") class. Changed in version 3.5: New methods `__getnewargs__`, `__rmod__`, `casefold`, `format_map`, `isprintable`, and `maketrans`.
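To round out these wrapper classes, a short sketch of a hypothetical [`UserString`](#collections.UserString "collections.UserString") subclass; because the content lives in the `data` attribute, it can be transformed at construction time while the instance keeps behaving like a string:

```
from collections import UserString

class Masked(UserString):
    'String-like object that hides all but the last four characters'
    def __init__(self, seq):
        s = str(seq)
        super().__init__('*' * max(len(s) - 4, 0) + s[-4:])

m = Masked('4111111111111111')
print(m)          # ************1111
print(len(m))     # 16 -- still behaves like a real string
print(m.data)     # the underlying str object
```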
python doctest — Test interactive Python examples doctest — Test interactive Python examples ========================================== **Source code:** [Lib/doctest.py](https://github.com/python/cpython/tree/3.9/Lib/doctest.py) The [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") module searches for pieces of text that look like interactive Python sessions, and then executes those sessions to verify that they work exactly as shown. There are several common ways to use doctest: * To check that a module’s docstrings are up-to-date by verifying that all interactive examples still work as documented. * To perform regression testing by verifying that interactive examples from a test file or a test object work as expected. * To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of “literate testing” or “executable documentation”. Here’s a complete but small example module: ``` """ This is the "example" module. The example module supplies one function, factorial(). For example, >>> factorial(5) 120 """ def factorial(n): """Return the factorial of n, an exact integer >= 0. >>> [factorial(n) for n in range(6)] [1, 1, 2, 6, 24, 120] >>> factorial(30) 265252859812191058636308480000000 >>> factorial(-1) Traceback (most recent call last): ... ValueError: n must be >= 0 Factorials of floats are OK, but the float must be an exact integer: >>> factorial(30.1) Traceback (most recent call last): ... ValueError: n must be exact integer >>> factorial(30.0) 265252859812191058636308480000000 It must also not be ridiculously large: >>> factorial(1e100) Traceback (most recent call last): ... OverflowError: n too large """ import math if not n >= 0: raise ValueError("n must be >= 0") if math.floor(n) != n: raise ValueError("n must be exact integer") if n+1 == n: # catch a value like 1e300 raise OverflowError("n too large") result = 1 factor = 2 while factor <= n: result *= factor factor += 1 return result if __name__ == "__main__": import doctest doctest.testmod() ``` If you run `example.py` directly from the command line, [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") works its magic: ``` $ python example.py $ ``` There’s no output! That’s normal, and it means all the examples worked. Pass `-v` to the script, and [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") prints a detailed log of what it’s trying, and prints a summary at the end: ``` $ python example.py -v Trying: factorial(5) Expecting: 120 ok Trying: [factorial(n) for n in range(6)] Expecting: [1, 1, 2, 6, 24, 120] ok ``` And so on, eventually ending with: ``` Trying: factorial(1e100) Expecting: Traceback (most recent call last): ... OverflowError: n too large ok 2 items passed all tests: 1 tests in __main__ 8 tests in __main__.factorial 9 tests in 2 items. 9 passed and 0 failed. Test passed. $ ``` That’s all you need to know to start making productive use of [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.")! Jump in. The following sections provide full details. Note that there are many examples of doctests in the standard Python test suite and libraries. Especially useful examples can be found in the standard test file `Lib/test/test_doctest.py`. 
Simple Usage: Checking Examples in Docstrings --------------------------------------------- The simplest way to start using doctest (but not necessarily the way you’ll continue to do it) is to end each module `M` with: ``` if __name__ == "__main__": import doctest doctest.testmod() ``` [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") then examines docstrings in module `M`. Running the module as a script causes the examples in the docstrings to get executed and verified: ``` python M.py ``` This won’t display anything unless an example fails, in which case the failing example(s) and the cause(s) of the failure(s) are printed to stdout, and the final line of output is `***Test Failed*** N failures.`, where *N* is the number of examples that failed. Run it with the `-v` switch instead: ``` python M.py -v ``` and a detailed report of all examples tried is printed to standard output, along with assorted summaries at the end. You can force verbose mode by passing `verbose=True` to [`testmod()`](#doctest.testmod "doctest.testmod"), or prohibit it by passing `verbose=False`. In either of those cases, `sys.argv` is not examined by [`testmod()`](#doctest.testmod "doctest.testmod") (so passing `-v` or not has no effect). There is also a command line shortcut for running [`testmod()`](#doctest.testmod "doctest.testmod"). You can instruct the Python interpreter to run the doctest module directly from the standard library and pass the module name(s) on the command line: ``` python -m doctest -v example.py ``` This will import `example.py` as a standalone module and run [`testmod()`](#doctest.testmod "doctest.testmod") on it. Note that this may not work correctly if the file is part of a package and imports other submodules from that package. For more information on [`testmod()`](#doctest.testmod "doctest.testmod"), see section [Basic API](#doctest-basic-api). Simple Usage: Checking Examples in a Text File ---------------------------------------------- Another simple application of doctest is testing interactive examples in a text file. This can be done with the [`testfile()`](#doctest.testfile "doctest.testfile") function: ``` import doctest doctest.testfile("example.txt") ``` That short script executes and verifies any interactive Python examples contained in the file `example.txt`. The file content is treated as if it were a single giant docstring; the file doesn’t need to contain a Python program! For example, perhaps `example.txt` contains this: ``` The ``example`` module ====================== Using ``factorial`` ------------------- This is an example text file in reStructuredText format. First import ``factorial`` from the ``example`` module: >>> from example import factorial Now use it: >>> factorial(6) 120 ``` Running `doctest.testfile("example.txt")` then finds the error in this documentation: ``` File "./example.txt", line 14, in example.txt Failed example: factorial(6) Expected: 120 Got: 720 ``` As with [`testmod()`](#doctest.testmod "doctest.testmod"), [`testfile()`](#doctest.testfile "doctest.testfile") won’t display anything unless an example fails. If an example does fail, then the failing example(s) and the cause(s) of the failure(s) are printed to stdout, using the same format as [`testmod()`](#doctest.testmod "doctest.testmod"). By default, [`testfile()`](#doctest.testfile "doctest.testfile") looks for files in the calling module’s directory. 
See section [Basic API](#doctest-basic-api) for a description of the optional arguments that can be used to tell it to look for files in other locations. Like [`testmod()`](#doctest.testmod "doctest.testmod"), [`testfile()`](#doctest.testfile "doctest.testfile")’s verbosity can be set with the `-v` command-line switch or with the optional keyword argument *verbose*. There is also a command line shortcut for running [`testfile()`](#doctest.testfile "doctest.testfile"). You can instruct the Python interpreter to run the doctest module directly from the standard library and pass the file name(s) on the command line: ``` python -m doctest -v example.txt ``` Because the file name does not end with `.py`, [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") infers that it must be run with [`testfile()`](#doctest.testfile "doctest.testfile"), not [`testmod()`](#doctest.testmod "doctest.testmod"). For more information on [`testfile()`](#doctest.testfile "doctest.testfile"), see section [Basic API](#doctest-basic-api). How It Works ------------ This section examines in detail how doctest works: which docstrings it looks at, how it finds interactive examples, what execution context it uses, how it handles exceptions, and how option flags can be used to control its behavior. This is the information that you need to know to write doctest examples; for information about actually running doctest on these examples, see the following sections. ### Which Docstrings Are Examined? The module docstring, and all function, class and method docstrings are searched. Objects imported into the module are not searched. In addition, if `M.__test__` exists and “is true”, it must be a dict, and each entry maps a (string) name to a function object, class object, or string. Function and class object docstrings found from `M.__test__` are searched, and strings are treated as if they were docstrings. In output, a key `K` in `M.__test__` appears with name ``` <name of M>.__test__.K ``` Any classes found are recursively searched similarly, to test docstrings in their contained methods and nested classes. **CPython implementation detail:** Prior to version 3.4, extension modules written in C were not fully searched by doctest. ### How are Docstring Examples Recognized? In most cases a copy-and-paste of an interactive console session works fine, but doctest isn’t trying to do an exact emulation of any specific Python shell. ``` >>> # comments are ignored >>> x = 12 >>> x 12 >>> if x == 13: ... print("yes") ... else: ... print("no") ... print("NO") ... print("NO!!!") ... no NO NO!!! >>> ``` Any expected output must immediately follow the final `'>>> '` or `'... '` line containing the code, and the expected output (if any) extends to the next `'>>> '` or all-whitespace line. The fine print: * Expected output cannot contain an all-whitespace line, since such a line is taken to signal the end of expected output. If expected output does contain a blank line, put `<BLANKLINE>` in your doctest example each place a blank line is expected. * All hard tab characters are expanded to spaces, using 8-column tab stops. Tabs in output generated by the tested code are not modified. Because any hard tabs in the sample output *are* expanded, this means that if the code output includes hard tabs, the only way the doctest can pass is if the [`NORMALIZE_WHITESPACE`](#doctest.NORMALIZE_WHITESPACE "doctest.NORMALIZE_WHITESPACE") option or [directive](#doctest-directives) is in effect. 
Alternatively, the test can be rewritten to capture the output and compare it to an expected value as part of the test. This handling of tabs in the source was arrived at through trial and error, and has proven to be the least error prone way of handling them. It is possible to use a different algorithm for handling tabs by writing a custom [`DocTestParser`](#doctest.DocTestParser "doctest.DocTestParser") class. * Output to stdout is captured, but not output to stderr (exception tracebacks are captured via a different means). * If you continue a line via backslashing in an interactive session, or for any other reason use a backslash, you should use a raw docstring, which will preserve your backslashes exactly as you type them: ``` >>> def f(x): ... r'''Backslashes in a raw docstring: m\n''' >>> print(f.__doc__) Backslashes in a raw docstring: m\n ``` Otherwise, the backslash will be interpreted as part of the string. For example, the `\n` above would be interpreted as a newline character. Alternatively, you can double each backslash in the doctest version (and not use a raw string): ``` >>> def f(x): ... '''Backslashes in a raw docstring: m\\n''' >>> print(f.__doc__) Backslashes in a raw docstring: m\n ``` * The starting column doesn’t matter: ``` >>> assert "Easy!" >>> import math >>> math.floor(1.9) 1 ``` and as many leading whitespace characters are stripped from the expected output as appeared in the initial `'>>> '` line that started the example. ### What’s the Execution Context? By default, each time [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") finds a docstring to test, it uses a *shallow copy* of `M`’s globals, so that running tests doesn’t change the module’s real globals, and so that one test in `M` can’t leave behind crumbs that accidentally allow another test to work. This means examples can freely use any names defined at top-level in `M`, and names defined earlier in the docstring being run. Examples cannot see names defined in other docstrings. You can force use of your own dict as the execution context by passing `globs=your_dict` to [`testmod()`](#doctest.testmod "doctest.testmod") or [`testfile()`](#doctest.testfile "doctest.testfile") instead. ### What About Exceptions? No problem, provided that the traceback is the only output produced by the example: just paste in the traceback. [1](#id2) Since tracebacks contain details that are likely to change rapidly (for example, exact file paths and line numbers), this is one case where doctest works hard to be flexible in what it accepts. Simple example: ``` >>> [1, 2, 3].remove(42) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: list.remove(x): x not in list ``` That doctest succeeds if [`ValueError`](exceptions#ValueError "ValueError") is raised, with the `list.remove(x): x not in list` detail as shown. The expected output for an exception must start with a traceback header, which may be either of the following two lines, indented the same as the first line of the example: ``` Traceback (most recent call last): Traceback (innermost last): ``` The traceback header is followed by an optional traceback stack, whose contents are ignored by doctest. The traceback stack is typically omitted, or copied verbatim from an interactive session. The traceback stack is followed by the most interesting part: the line(s) containing the exception type and detail. 
This is usually the last line of a traceback, but can extend across multiple lines if the exception has a multi-line detail: ``` >>> raise ValueError('multi\n line\ndetail') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: multi line detail ``` The last three lines (starting with [`ValueError`](exceptions#ValueError "ValueError")) are compared against the exception’s type and detail, and the rest are ignored. Best practice is to omit the traceback stack, unless it adds significant documentation value to the example. So the last example is probably better as: ``` >>> raise ValueError('multi\n line\ndetail') Traceback (most recent call last): ... ValueError: multi line detail ``` Note that tracebacks are treated very specially. In particular, in the rewritten example, the use of `...` is independent of doctest’s [`ELLIPSIS`](#doctest.ELLIPSIS "doctest.ELLIPSIS") option. The ellipsis in that example could be left out, or could just as well be three (or three hundred) commas or digits, or an indented transcript of a Monty Python skit. Some details you should read once, but won’t need to remember: * Doctest can’t guess whether your expected output came from an exception traceback or from ordinary printing. So, e.g., an example that expects `ValueError: 42 is prime` will pass whether [`ValueError`](exceptions#ValueError "ValueError") is actually raised or if the example merely prints that traceback text. In practice, ordinary output rarely begins with a traceback header line, so this doesn’t create real problems. * Each line of the traceback stack (if present) must be indented further than the first line of the example, *or* start with a non-alphanumeric character. The first line following the traceback header indented the same and starting with an alphanumeric is taken to be the start of the exception detail. Of course this does the right thing for genuine tracebacks. * When the [`IGNORE_EXCEPTION_DETAIL`](#doctest.IGNORE_EXCEPTION_DETAIL "doctest.IGNORE_EXCEPTION_DETAIL") doctest option is specified, everything following the leftmost colon and any module information in the exception name is ignored. * The interactive shell omits the traceback header line for some [`SyntaxError`](exceptions#SyntaxError "SyntaxError")s. But doctest uses the traceback header line to distinguish exceptions from non-exceptions. So in the rare case where you need to test a [`SyntaxError`](exceptions#SyntaxError "SyntaxError") that omits the traceback header, you will need to manually add the traceback header line to your test example. * For some [`SyntaxError`](exceptions#SyntaxError "SyntaxError")s, Python displays the character position of the syntax error, using a `^` marker: ``` >>> 1 1 File "<stdin>", line 1 1 1 ^ SyntaxError: invalid syntax ``` Since the lines showing the position of the error come before the exception type and detail, they are not checked by doctest. For example, the following test would pass, even though it puts the `^` marker in the wrong location: ``` >>> 1 1 File "<stdin>", line 1 1 1 ^ SyntaxError: invalid syntax ``` ### Option Flags A number of option flags control various aspects of doctest’s behavior. Symbolic names for the flags are supplied as module constants, which can be [bitwise ORed](../reference/expressions#bitwise) together and passed to various functions. The names can also be used in [doctest directives](#doctest-directives), and may be passed to the doctest command line interface via the `-o` option. 
New in version 3.4: The `-o` command line option. The first group of options defines test semantics, controlling aspects of how doctest decides whether actual output matches an example’s expected output: `doctest.DONT_ACCEPT_TRUE_FOR_1` By default, if an expected output block contains just `1`, an actual output block containing just `1` or just `True` is considered to be a match, and similarly for `0` versus `False`. When [`DONT_ACCEPT_TRUE_FOR_1`](#doctest.DONT_ACCEPT_TRUE_FOR_1 "doctest.DONT_ACCEPT_TRUE_FOR_1") is specified, neither substitution is allowed. The default behavior caters to the fact that Python changed the return type of many functions from integer to boolean; doctests expecting “little integer” output still work in these cases. This option will probably go away, but not for several years. `doctest.DONT_ACCEPT_BLANKLINE` By default, if an expected output block contains a line containing only the string `<BLANKLINE>`, then that line will match a blank line in the actual output. Because a genuinely blank line delimits the expected output, this is the only way to communicate that a blank line is expected. When [`DONT_ACCEPT_BLANKLINE`](#doctest.DONT_ACCEPT_BLANKLINE "doctest.DONT_ACCEPT_BLANKLINE") is specified, this substitution is not allowed. `doctest.NORMALIZE_WHITESPACE` When specified, all sequences of whitespace (blanks and newlines) are treated as equal. Any sequence of whitespace within the expected output will match any sequence of whitespace within the actual output. By default, whitespace must match exactly. [`NORMALIZE_WHITESPACE`](#doctest.NORMALIZE_WHITESPACE "doctest.NORMALIZE_WHITESPACE") is especially useful when a line of expected output is very long, and you want to wrap it across multiple lines in your source. `doctest.ELLIPSIS` When specified, an ellipsis marker (`...`) in the expected output can match any substring in the actual output. This includes substrings that span line boundaries, and empty substrings, so it’s best to keep usage of this simple. Complicated uses can lead to the same kinds of “oops, it matched too much!” surprises that `.*` is prone to in regular expressions. `doctest.IGNORE_EXCEPTION_DETAIL` When specified, doctests expecting exceptions pass so long as an exception of the expected type is raised, even if the details (message and fully-qualified exception name) don’t match. For example, an example expecting `ValueError: 42` will pass if the actual exception raised is `ValueError: 3*14`, but will fail if, say, a [`TypeError`](exceptions#TypeError "TypeError") is raised instead. It will also ignore any fully-qualified name included before the exception class, which can vary between implementations and versions of Python and the code/libraries in use. Hence, all three of these variations will work with the flag specified:

```
>>> raise Exception('message')
Traceback (most recent call last):
Exception: message

>>> raise Exception('message')
Traceback (most recent call last):
builtins.Exception: message

>>> raise Exception('message')
Traceback (most recent call last):
__main__.Exception: message
```

Note that [`ELLIPSIS`](#doctest.ELLIPSIS "doctest.ELLIPSIS") can also be used to ignore the details of the exception message, but such a test may still fail based on whether the module name is present or matches exactly. Changed in version 3.2: [`IGNORE_EXCEPTION_DETAIL`](#doctest.IGNORE_EXCEPTION_DETAIL "doctest.IGNORE_EXCEPTION_DETAIL") now also ignores any information relating to the module containing the exception under test.
`doctest.SKIP` When specified, do not run the example at all. This can be useful in contexts where doctest examples serve as both documentation and test cases, and an example should be included for documentation purposes, but should not be checked. E.g., the example’s output might be random; or the example might depend on resources which would be unavailable to the test driver. The SKIP flag can also be used for temporarily “commenting out” examples. `doctest.COMPARISON_FLAGS` A bitmask or’ing together all the comparison flags above. The second group of options controls how test failures are reported: `doctest.REPORT_UDIFF` When specified, failures that involve multi-line expected and actual outputs are displayed using a unified diff. `doctest.REPORT_CDIFF` When specified, failures that involve multi-line expected and actual outputs will be displayed using a context diff. `doctest.REPORT_NDIFF` When specified, differences are computed by `difflib.Differ`, using the same algorithm as the popular `ndiff.py` utility. This is the only method that marks differences within lines as well as across lines. For example, if a line of expected output contains digit `1` where actual output contains letter `l`, a line is inserted with a caret marking the mismatching column positions. `doctest.REPORT_ONLY_FIRST_FAILURE` When specified, display the first failing example in each doctest, but suppress output for all remaining examples. This will prevent doctest from reporting correct examples that break because of earlier failures; but it might also hide incorrect examples that fail independently of the first failure. When [`REPORT_ONLY_FIRST_FAILURE`](#doctest.REPORT_ONLY_FIRST_FAILURE "doctest.REPORT_ONLY_FIRST_FAILURE") is specified, the remaining examples are still run, and still count towards the total number of failures reported; only the output is suppressed. `doctest.FAIL_FAST` When specified, exit after the first failing example and don’t attempt to run the remaining examples. Thus, the number of failures reported will be at most 1. This flag may be useful during debugging, since examples after the first failure won’t even produce debugging output. The doctest command line accepts the option `-f` as a shorthand for `-o FAIL_FAST`. New in version 3.4. `doctest.REPORTING_FLAGS` A bitmask or’ing together all the reporting flags above. There is also a way to register new option flag names, though this isn’t useful unless you intend to extend [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") internals via subclassing: `doctest.register_optionflag(name)` Create a new option flag with a given name, and return the new flag’s integer value. [`register_optionflag()`](#doctest.register_optionflag "doctest.register_optionflag") can be used when subclassing [`OutputChecker`](#doctest.OutputChecker "doctest.OutputChecker") or [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") to create new options that are supported by your subclasses. [`register_optionflag()`](#doctest.register_optionflag "doctest.register_optionflag") should always be called using the following idiom: ``` MY_FLAG = register_optionflag('MY_FLAG') ``` ### Directives Doctest directives may be used to modify the [option flags](#doctest-options) for an individual example. 
Doctest directives are special Python comments following an example’s source code:

```
directive             ::= "#" "doctest:" directive_options
directive_options     ::= directive_option ("," directive_option)*
directive_option      ::= on_or_off directive_option_name
on_or_off             ::= "+" | "-"
directive_option_name ::= "DONT_ACCEPT_BLANKLINE" | "NORMALIZE_WHITESPACE" | ...
```

Whitespace is not allowed between the `+` or `-` and the directive option name. The directive option name can be any of the option flag names explained above.

An example’s doctest directives modify doctest’s behavior for that single example. Use `+` to enable the named behavior, or `-` to disable it.

For example, this test passes:

```
>>> print(list(range(20)))  # doctest: +NORMALIZE_WHITESPACE
[0,   1,  2,  3,  4,  5,  6,  7,  8,  9,
10,  11, 12, 13, 14, 15, 16, 17, 18, 19]
```

Without the directive it would fail, both because the actual output doesn’t have two blanks before the single-digit list elements, and because the actual output is on a single line. This test also passes, and also requires a directive to do so:

```
>>> print(list(range(20)))  # doctest: +ELLIPSIS
[0, 1, ..., 18, 19]
```

Multiple directives can be used on a single physical line, separated by commas:

```
>>> print(list(range(20)))  # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
[0,    1, ...,   18,    19]
```

If multiple directive comments are used for a single example, then they are combined:

```
>>> print(list(range(20)))  # doctest: +ELLIPSIS
...                         # doctest: +NORMALIZE_WHITESPACE
[0,    1, ...,   18,    19]
```

As the previous example shows, you can add `...` lines to your example containing only directives. This can be useful when an example is too long for a directive to comfortably fit on the same line:

```
>>> print(list(range(5)) + list(range(10, 20)) + list(range(30, 40)))
... # doctest: +ELLIPSIS
[0, ..., 4, 10, ..., 19, 30, ..., 39]
```

Note that since all options are disabled by default, and directives apply only to the example they appear in, enabling options (via `+` in a directive) is usually the only meaningful choice. However, option flags can also be passed to functions that run doctests, establishing different defaults. In such cases, disabling an option via `-` in a directive can be useful.

### Warnings

[`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") is serious about requiring exact matches in expected output. If even a single character doesn’t match, the test fails. This will probably surprise you a few times, as you learn exactly what Python does and doesn’t guarantee about output. For example, when printing a set, Python doesn’t guarantee that the element is printed in any particular order, so a test like

```
>>> foo()
{"Hermione", "Harry"}
```

is vulnerable! One workaround is to do

```
>>> foo() == {"Hermione", "Harry"}
True
```

instead. Another is to do

```
>>> d = sorted(foo())
>>> d
['Harry', 'Hermione']
```

Note

Before Python 3.6, when printing a dict, Python did not guarantee that the key-value pairs were printed in any particular order.

There are others, but you get the idea.
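The same sorting idea works for dicts on those older Pythons, since sorting the items yields a deterministic transcript. A minimal, self-contained sketch:

```
>>> d = {'Hermione': 2, 'Harry': 1}
>>> sorted(d.items())
[('Harry', 1), ('Hermione', 2)]
```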
Another bad idea is to print things that embed an object address, like

```
>>> id(1.0) # certain to fail some of the time
7948648
>>> class C: pass
>>> C()   # the default repr() for instances embeds an address
<__main__.C instance at 0x00AC18F0>
```

The [`ELLIPSIS`](#doctest.ELLIPSIS "doctest.ELLIPSIS") directive gives a nice approach for the last example:

```
>>> C() # doctest: +ELLIPSIS
<__main__.C instance at 0x...>
```

Floating-point numbers are also subject to small output variations across platforms, because Python defers to the platform C library for float formatting, and C libraries vary widely in quality here.

```
>>> 1./7  # risky
0.14285714285714285
>>> print(1./7) # safer
0.142857142857
>>> print(round(1./7, 6)) # much safer
0.142857
```

Numbers of the form `I/2.**J` are safe across all platforms, and I often contrive doctest examples to produce numbers of that form:

```
>>> 3./4  # utterly safe
0.75
```

Simple fractions are also easier for people to understand, and that makes for better documentation.

Basic API
---------

The functions [`testmod()`](#doctest.testmod "doctest.testmod") and [`testfile()`](#doctest.testfile "doctest.testfile") provide a simple interface to doctest that should be sufficient for most basic uses. For a less formal introduction to these two functions, see sections [Simple Usage: Checking Examples in Docstrings](#doctest-simple-testmod) and [Simple Usage: Checking Examples in a Text File](#doctest-simple-testfile).

`doctest.testfile(filename, module_relative=True, name=None, package=None, globs=None, verbose=None, report=True, optionflags=0, extraglobs=None, raise_on_error=False, parser=DocTestParser(), encoding=None)`

All arguments except *filename* are optional, and should be specified in keyword form.

Test examples in the file named *filename*. Return `(failure_count, test_count)`.

Optional argument *module\_relative* specifies how the filename should be interpreted:

* If *module\_relative* is `True` (the default), then *filename* specifies an OS-independent module-relative path. By default, this path is relative to the calling module’s directory; but if the *package* argument is specified, then it is relative to that package. To ensure OS-independence, *filename* should use `/` characters to separate path segments, and may not be an absolute path (i.e., it may not begin with `/`).
* If *module\_relative* is `False`, then *filename* specifies an OS-specific path. The path may be absolute or relative; relative paths are resolved with respect to the current working directory.

Optional argument *name* gives the name of the test; by default, or if `None`, `os.path.basename(filename)` is used.

Optional argument *package* is a Python package or the name of a Python package whose directory should be used as the base directory for a module-relative filename. If no package is specified, then the calling module’s directory is used as the base directory for module-relative filenames. It is an error to specify *package* if *module\_relative* is `False`.

Optional argument *globs* gives a dict to be used as the globals when executing examples. A new shallow copy of this dict is created for the doctest, so its examples start with a clean slate. By default, or if `None`, a new empty dict is used.

Optional argument *extraglobs* gives a dict merged into the globals used to execute examples. This works like [`dict.update()`](stdtypes#dict.update "dict.update"): if *globs* and *extraglobs* have a common key, the associated value in *extraglobs* appears in the combined dict.
By default, or if `None`, no extra globals are used. This is an advanced feature that allows parameterization of doctests. For example, a doctest can be written for a base class, using a generic name for the class, then reused to test any number of subclasses by passing an *extraglobs* dict mapping the generic name to the subclass to be tested. Optional argument *verbose* prints lots of stuff if true, and prints only failures if false; by default, or if `None`, it’s true if and only if `'-v'` is in `sys.argv`. Optional argument *report* prints a summary at the end when true, else prints nothing at the end. In verbose mode, the summary is detailed, else the summary is very brief (in fact, empty if all tests passed). Optional argument *optionflags* (default value 0) takes the [bitwise OR](../reference/expressions#bitwise) of option flags. See section [Option Flags](#doctest-options). Optional argument *raise\_on\_error* defaults to false. If true, an exception is raised upon the first failure or unexpected exception in an example. This allows failures to be post-mortem debugged. Default behavior is to continue running examples. Optional argument *parser* specifies a [`DocTestParser`](#doctest.DocTestParser "doctest.DocTestParser") (or subclass) that should be used to extract tests from the files. It defaults to a normal parser (i.e., `DocTestParser()`). Optional argument *encoding* specifies an encoding that should be used to convert the file to unicode. `doctest.testmod(m=None, name=None, globs=None, verbose=None, report=True, optionflags=0, extraglobs=None, raise_on_error=False, exclude_empty=False)` All arguments are optional, and all except for *m* should be specified in keyword form. Test examples in docstrings in functions and classes reachable from module *m* (or module [`__main__`](__main__#module-__main__ "__main__: The environment where the top-level script is run.") if *m* is not supplied or is `None`), starting with `m.__doc__`. Also test examples reachable from dict `m.__test__`, if it exists and is not `None`. `m.__test__` maps names (strings) to functions, classes and strings; function and class docstrings are searched for examples; strings are searched directly, as if they were docstrings. Only docstrings attached to objects belonging to module *m* are searched. Return `(failure_count, test_count)`. Optional argument *name* gives the name of the module; by default, or if `None`, `m.__name__` is used. Optional argument *exclude\_empty* defaults to false. If true, objects for which no doctests are found are excluded from consideration. The default is a backward compatibility hack, so that code still using `doctest.master.summarize()` in conjunction with [`testmod()`](#doctest.testmod "doctest.testmod") continues to get output for objects with no tests. The *exclude\_empty* argument to the newer [`DocTestFinder`](#doctest.DocTestFinder "doctest.DocTestFinder") constructor defaults to true. Optional arguments *extraglobs*, *verbose*, *report*, *optionflags*, *raise\_on\_error*, and *globs* are the same as for function [`testfile()`](#doctest.testfile "doctest.testfile") above, except that *globs* defaults to `m.__dict__`. `doctest.run_docstring_examples(f, globs, verbose=False, name="NoName", compileflags=None, optionflags=0)` Test examples associated with object *f*; for example, *f* may be a string, a module, a function, or a class object. A shallow copy of dictionary argument *globs* is used for the execution context. 
Optional argument *name* is used in failure messages, and defaults to `"NoName"`.

If optional argument *verbose* is true, output is generated even if there are no failures. By default, output is generated only in case of an example failure.

Optional argument *compileflags* gives the set of flags that should be used by the Python compiler when running the examples. By default, or if `None`, flags are deduced corresponding to the set of future features found in *globs*.

Optional argument *optionflags* works as for function [`testfile()`](#doctest.testfile "doctest.testfile") above.

Unittest API
------------

As your collection of doctest’ed modules grows, you’ll want a way to run all their doctests systematically. [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") provides two functions that can be used to create [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") test suites from modules and text files containing doctests. To integrate with [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") test discovery, include a `load_tests()` function in your test module:

```
import unittest
import doctest
import my_module_with_doctests

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(my_module_with_doctests))
    return tests
```

There are two main functions for creating [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite") instances from text files and modules with doctests:

`doctest.DocFileSuite(*paths, module_relative=True, package=None, setUp=None, tearDown=None, globs=None, optionflags=0, parser=DocTestParser(), encoding=None)`

Convert doctest tests from one or more text files to a [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite").

The returned [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite") is to be run by the unittest framework and runs the interactive examples in each file. If an example in any file fails, then the synthesized unit test fails, and a `failureException` exception is raised showing the name of the file containing the test and a (sometimes approximate) line number.

Pass one or more paths (as strings) to text files to be examined.

Options may be provided as keyword arguments:

Optional argument *module\_relative* specifies how the filenames in *paths* should be interpreted:

* If *module\_relative* is `True` (the default), then each filename in *paths* specifies an OS-independent module-relative path. By default, this path is relative to the calling module’s directory; but if the *package* argument is specified, then it is relative to that package. To ensure OS-independence, each filename should use `/` characters to separate path segments, and may not be an absolute path (i.e., it may not begin with `/`).
* If *module\_relative* is `False`, then each filename in *paths* specifies an OS-specific path. The path may be absolute or relative; relative paths are resolved with respect to the current working directory.

Optional argument *package* is a Python package or the name of a Python package whose directory should be used as the base directory for module-relative filenames in *paths*. If no package is specified, then the calling module’s directory is used as the base directory for module-relative filenames. It is an error to specify *package* if *module\_relative* is `False`.

Optional argument *setUp* specifies a set-up function for the test suite. This is called before running the tests in each file.
The *setUp* function will be passed a [`DocTest`](#doctest.DocTest "doctest.DocTest") object. The setUp function can access the test globals as the *globs* attribute of the test passed.

Optional argument *tearDown* specifies a tear-down function for the test suite. This is called after running the tests in each file. The *tearDown* function will be passed a [`DocTest`](#doctest.DocTest "doctest.DocTest") object. The tearDown function can access the test globals as the *globs* attribute of the test passed.

Optional argument *globs* is a dictionary containing the initial global variables for the tests. A new copy of this dictionary is created for each test. By default, *globs* is a new empty dictionary.

Optional argument *optionflags* specifies the default doctest options for the tests, created by or-ing together individual option flags. See section [Option Flags](#doctest-options). See function [`set_unittest_reportflags()`](#doctest.set_unittest_reportflags "doctest.set_unittest_reportflags") below for a better way to set reporting options.

Optional argument *parser* specifies a [`DocTestParser`](#doctest.DocTestParser "doctest.DocTestParser") (or subclass) that should be used to extract tests from the files. It defaults to a normal parser (i.e., `DocTestParser()`).

Optional argument *encoding* specifies an encoding that should be used to convert the file to unicode.

The global `__file__` is added to the globals provided to doctests loaded from a text file using [`DocFileSuite()`](#doctest.DocFileSuite "doctest.DocFileSuite").

`doctest.DocTestSuite(module=None, globs=None, extraglobs=None, test_finder=None, setUp=None, tearDown=None, checker=None)`

Convert doctest tests for a module to a [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite").

The returned [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite") is to be run by the unittest framework and runs each doctest in the module. If any of the doctests fail, then the synthesized unit test fails, and a `failureException` exception is raised showing the name of the file containing the test and a (sometimes approximate) line number.

Optional argument *module* provides the module to be tested. It can be a module object or a (possibly dotted) module name. If not specified, the module calling this function is used.

Optional argument *globs* is a dictionary containing the initial global variables for the tests. A new copy of this dictionary is created for each test. By default, *globs* is a new empty dictionary.

Optional argument *extraglobs* specifies an extra set of global variables, which is merged into *globs*. By default, no extra globals are used.

Optional argument *test\_finder* is the [`DocTestFinder`](#doctest.DocTestFinder "doctest.DocTestFinder") object (or a drop-in replacement) that is used to extract doctests from the module.

Optional arguments *setUp*, *tearDown*, and *optionflags* are the same as for function [`DocFileSuite()`](#doctest.DocFileSuite "doctest.DocFileSuite") above.

This function uses the same search technique as [`testmod()`](#doctest.testmod "doctest.testmod").

Changed in version 3.5: [`DocTestSuite()`](#doctest.DocTestSuite "doctest.DocTestSuite") returns an empty [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite") if *module* contains no docstrings instead of raising [`ValueError`](exceptions#ValueError "ValueError").
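Both suite constructors can feed the same `load_tests()` hook. A minimal sketch, assuming a hypothetical module `my_module_with_doctests` and a hypothetical text file `examples.txt` next to the test module:

```
import doctest
import unittest

import my_module_with_doctests  # hypothetical module under test

def load_tests(loader, tests, ignore):
    # Doctests found in the module's docstrings.
    tests.addTests(doctest.DocTestSuite(my_module_with_doctests))
    # Interactive examples kept in a text file; the path is resolved
    # relative to this test module's directory (module_relative default).
    tests.addTests(doctest.DocFileSuite('examples.txt',
                                        optionflags=doctest.ELLIPSIS))
    return tests
```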
Under the covers, [`DocTestSuite()`](#doctest.DocTestSuite "doctest.DocTestSuite") creates a [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite") out of `doctest.DocTestCase` instances, and `DocTestCase` is a subclass of [`unittest.TestCase`](unittest#unittest.TestCase "unittest.TestCase"). `DocTestCase` isn’t documented here (it’s an internal detail), but studying its code can answer questions about the exact details of [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") integration. Similarly, [`DocFileSuite()`](#doctest.DocFileSuite "doctest.DocFileSuite") creates a [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite") out of `doctest.DocFileCase` instances, and `DocFileCase` is a subclass of `DocTestCase`. So both ways of creating a [`unittest.TestSuite`](unittest#unittest.TestSuite "unittest.TestSuite") run instances of `DocTestCase`. This is important for a subtle reason: when you run [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") functions yourself, you can control the [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") options in use directly, by passing option flags to [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") functions. However, if you’re writing a [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") framework, [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") ultimately controls when and how tests get run. The framework author typically wants to control [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") reporting options (perhaps, e.g., specified by command line options), but there’s no way to pass options through [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") to [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") test runners. For this reason, [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") also supports a notion of [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") reporting flags specific to [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") support, via this function: `doctest.set_unittest_reportflags(flags)` Set the [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") reporting flags to use. Argument *flags* takes the [bitwise OR](../reference/expressions#bitwise) of option flags. See section [Option Flags](#doctest-options). Only “reporting flags” can be used. This is a module-global setting, and affects all future doctests run by module [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python."): the `runTest()` method of `DocTestCase` looks at the option flags specified for the test case when the `DocTestCase` instance was constructed. If no reporting flags were specified (which is the typical and expected case), [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.")’s [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") reporting flags are [bitwise ORed](../reference/expressions#bitwise) into the option flags, and the option flags so augmented are passed to the [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") instance created to run the doctest. 
If any reporting flags were specified when the `DocTestCase` instance was constructed, [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.")’s [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") reporting flags are ignored.

The value of the [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") reporting flags in effect before the function was called is returned by the function.

Advanced API
------------

The basic API is a simple wrapper that’s intended to make doctest easy to use. It is fairly flexible, and should meet most users’ needs; however, if you require more fine-grained control over testing, or wish to extend doctest’s capabilities, then you should use the advanced API.

The advanced API revolves around two container classes, which are used to store the interactive examples extracted from doctest cases:

* [`Example`](#doctest.Example "doctest.Example"): A single Python [statement](../glossary#term-statement), paired with its expected output.
* [`DocTest`](#doctest.DocTest "doctest.DocTest"): A collection of [`Example`](#doctest.Example "doctest.Example")s, typically extracted from a single docstring or text file.

Additional processing classes are defined to find, parse, run, and check doctest examples:

* [`DocTestFinder`](#doctest.DocTestFinder "doctest.DocTestFinder"): Finds all docstrings in a given module, and uses a [`DocTestParser`](#doctest.DocTestParser "doctest.DocTestParser") to create a [`DocTest`](#doctest.DocTest "doctest.DocTest") from every docstring that contains interactive examples.
* [`DocTestParser`](#doctest.DocTestParser "doctest.DocTestParser"): Creates a [`DocTest`](#doctest.DocTest "doctest.DocTest") object from a string (such as an object’s docstring).
* [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner"): Executes the examples in a [`DocTest`](#doctest.DocTest "doctest.DocTest"), and uses an [`OutputChecker`](#doctest.OutputChecker "doctest.OutputChecker") to verify their output.
* [`OutputChecker`](#doctest.OutputChecker "doctest.OutputChecker"): Compares the actual output from a doctest example with the expected output, and decides whether they match.

The relationships among these processing classes are summarized in the following diagram:

```
                            list of:
+------+                   +---------+
|module| --DocTestFinder-> | DocTest | --DocTestRunner-> results
+------+    |        ^     +---------+     |       ^    (printed)
            |        |     | Example |     |       |
            v        |     |   ...   |     v       |
           DocTestParser   | Example |   OutputChecker
                           +---------+
```

### DocTest Objects

`class doctest.DocTest(examples, globs, name, filename, lineno, docstring)`

A collection of doctest examples that should be run in a single namespace. The constructor arguments are used to initialize the attributes of the same names.

[`DocTest`](#doctest.DocTest "doctest.DocTest") defines the following attributes. They are initialized by the constructor, and should not be modified directly.

`examples`

A list of [`Example`](#doctest.Example "doctest.Example") objects encoding the individual interactive Python examples that should be run by this test.

`globs`

The namespace (aka globals) that the examples should be run in. This is a dictionary mapping names to values. Any changes to the namespace made by the examples (such as binding new variables) will be reflected in [`globs`](#doctest.DocTest.globs "doctest.DocTest.globs") after the test is run.

`name`

A string name identifying the [`DocTest`](#doctest.DocTest "doctest.DocTest").
Typically, this is the name of the object or file that the test was extracted from. `filename` The name of the file that this [`DocTest`](#doctest.DocTest "doctest.DocTest") was extracted from; or `None` if the filename is unknown, or if the [`DocTest`](#doctest.DocTest "doctest.DocTest") was not extracted from a file. `lineno` The line number within [`filename`](#doctest.DocTest.filename "doctest.DocTest.filename") where this [`DocTest`](#doctest.DocTest "doctest.DocTest") begins, or `None` if the line number is unavailable. This line number is zero-based with respect to the beginning of the file. `docstring` The string that the test was extracted from, or `None` if the string is unavailable, or if the test was not extracted from a string. ### Example Objects `class doctest.Example(source, want, exc_msg=None, lineno=0, indent=0, options=None)` A single interactive example, consisting of a Python statement and its expected output. The constructor arguments are used to initialize the attributes of the same names. [`Example`](#doctest.Example "doctest.Example") defines the following attributes. They are initialized by the constructor, and should not be modified directly. `source` A string containing the example’s source code. This source code consists of a single Python statement, and always ends with a newline; the constructor adds a newline when necessary. `want` The expected output from running the example’s source code (either from stdout, or a traceback in case of exception). [`want`](#doctest.Example.want "doctest.Example.want") ends with a newline unless no output is expected, in which case it’s an empty string. The constructor adds a newline when necessary. `exc_msg` The exception message generated by the example, if the example is expected to generate an exception; or `None` if it is not expected to generate an exception. This exception message is compared against the return value of [`traceback.format_exception_only()`](traceback#traceback.format_exception_only "traceback.format_exception_only"). [`exc_msg`](#doctest.Example.exc_msg "doctest.Example.exc_msg") ends with a newline unless it’s `None`. The constructor adds a newline if needed. `lineno` The line number within the string containing this example where the example begins. This line number is zero-based with respect to the beginning of the containing string. `indent` The example’s indentation in the containing string, i.e., the number of space characters that precede the example’s first prompt. `options` A dictionary mapping from option flags to `True` or `False`, which is used to override default options for this example. Any option flags not contained in this dictionary are left at their default value (as specified by the [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner")’s `optionflags`). By default, no options are set. ### DocTestFinder objects `class doctest.DocTestFinder(verbose=False, parser=DocTestParser(), recurse=True, exclude_empty=True)` A processing class used to extract the [`DocTest`](#doctest.DocTest "doctest.DocTest")s that are relevant to a given object, from its docstring and the docstrings of its contained objects. [`DocTest`](#doctest.DocTest "doctest.DocTest")s can be extracted from modules, classes, functions, methods, staticmethods, classmethods, and properties. The optional argument *verbose* can be used to display the objects searched by the finder. It defaults to `False` (no output). 
The optional argument *parser* specifies the [`DocTestParser`](#doctest.DocTestParser "doctest.DocTestParser") object (or a drop-in replacement) that is used to extract doctests from docstrings.

If the optional argument *recurse* is false, then [`DocTestFinder.find()`](#doctest.DocTestFinder.find "doctest.DocTestFinder.find") will only examine the given object, and not any contained objects.

If the optional argument *exclude\_empty* is false, then [`DocTestFinder.find()`](#doctest.DocTestFinder.find "doctest.DocTestFinder.find") will include tests for objects with empty docstrings.

[`DocTestFinder`](#doctest.DocTestFinder "doctest.DocTestFinder") defines the following method:

`find(obj[, name][, module][, globs][, extraglobs])`

Return a list of the [`DocTest`](#doctest.DocTest "doctest.DocTest")s that are defined by *obj*’s docstring, or by any of its contained objects’ docstrings.

The optional argument *name* specifies the object’s name; this name will be used to construct names for the returned [`DocTest`](#doctest.DocTest "doctest.DocTest")s. If *name* is not specified, then `obj.__name__` is used.

The optional parameter *module* is the module that contains the given object. If the module is not specified or is `None`, then the test finder will attempt to automatically determine the correct module. The object’s module is used:

* As a default namespace, if *globs* is not specified.
* To prevent the DocTestFinder from extracting DocTests from objects that are imported from other modules. (Contained objects with modules other than *module* are ignored.)
* To find the name of the file containing the object.
* To help find the line number of the object within its file.

If *module* is `False`, no attempt to find the module will be made. This is obscure, of use mostly in testing doctest itself: if *module* is `False`, or is `None` but cannot be found automatically, then all objects are considered to belong to the (non-existent) module, so all contained objects will (recursively) be searched for doctests.

The globals for each [`DocTest`](#doctest.DocTest "doctest.DocTest") are formed by combining *globs* and *extraglobs* (bindings in *extraglobs* override bindings in *globs*). A new shallow copy of the globals dictionary is created for each [`DocTest`](#doctest.DocTest "doctest.DocTest"). If *globs* is not specified, then it defaults to the module’s *\_\_dict\_\_*, if specified, or `{}` otherwise. If *extraglobs* is not specified, then it defaults to `{}`.

### DocTestParser objects

`class doctest.DocTestParser`

A processing class used to extract interactive examples from a string, and use them to create a [`DocTest`](#doctest.DocTest "doctest.DocTest") object.

[`DocTestParser`](#doctest.DocTestParser "doctest.DocTestParser") defines the following methods:

`get_doctest(string, globs, name, filename, lineno)`

Extract all doctest examples from the given string, and collect them into a [`DocTest`](#doctest.DocTest "doctest.DocTest") object.

*globs*, *name*, *filename*, and *lineno* are attributes for the new [`DocTest`](#doctest.DocTest "doctest.DocTest") object. See the documentation for [`DocTest`](#doctest.DocTest "doctest.DocTest") for more information.

`get_examples(string, name='<string>')`

Extract all doctest examples from the given string, and return them as a list of [`Example`](#doctest.Example "doctest.Example") objects. Line numbers are 0-based. The optional argument *name* is a name identifying this string, and is only used for error messages.
`parse(string, name='<string>')`

Divide the given string into examples and intervening text, and return them as a list of alternating [`Example`](#doctest.Example "doctest.Example")s and strings. Line numbers for the [`Example`](#doctest.Example "doctest.Example")s are 0-based. The optional argument *name* is a name identifying this string, and is only used for error messages.

### DocTestRunner objects

`class doctest.DocTestRunner(checker=None, verbose=None, optionflags=0)`

A processing class used to execute and verify the interactive examples in a [`DocTest`](#doctest.DocTest "doctest.DocTest").

The comparison between expected outputs and actual outputs is done by an [`OutputChecker`](#doctest.OutputChecker "doctest.OutputChecker"). This comparison may be customized with a number of option flags; see section [Option Flags](#doctest-options) for more information. If the option flags are insufficient, then the comparison may also be customized by passing a subclass of [`OutputChecker`](#doctest.OutputChecker "doctest.OutputChecker") to the constructor.

The test runner’s display output can be controlled in two ways. First, an output function can be passed to [`DocTestRunner.run()`](#doctest.DocTestRunner.run "doctest.DocTestRunner.run"); this function will be called with strings that should be displayed. It defaults to `sys.stdout.write`. If capturing the output is not sufficient, then the display output can be also customized by subclassing DocTestRunner, and overriding the methods [`report_start()`](#doctest.DocTestRunner.report_start "doctest.DocTestRunner.report_start"), [`report_success()`](#doctest.DocTestRunner.report_success "doctest.DocTestRunner.report_success"), [`report_unexpected_exception()`](#doctest.DocTestRunner.report_unexpected_exception "doctest.DocTestRunner.report_unexpected_exception"), and [`report_failure()`](#doctest.DocTestRunner.report_failure "doctest.DocTestRunner.report_failure").

The optional keyword argument *checker* specifies the [`OutputChecker`](#doctest.OutputChecker "doctest.OutputChecker") object (or drop-in replacement) that should be used to compare the expected outputs to the actual outputs of doctest examples.

The optional keyword argument *verbose* controls the [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner")’s verbosity. If *verbose* is `True`, then information is printed about each example, as it is run. If *verbose* is `False`, then only failures are printed. If *verbose* is unspecified, or `None`, then verbose output is used iff the command-line switch `-v` is used.

The optional keyword argument *optionflags* can be used to control how the test runner compares expected output to actual output, and how it displays failures. For more information, see section [Option Flags](#doctest-options).

[`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") defines the following methods:

`report_start(out, test, example)`

Report that the test runner is about to process the given example. This method is provided to allow subclasses of [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") to customize their output; it should not be called directly.

*example* is the example about to be processed. *test* is the test containing *example*. *out* is the output function that was passed to [`DocTestRunner.run()`](#doctest.DocTestRunner.run "doctest.DocTestRunner.run").

`report_success(out, test, example, got)`

Report that the given example ran successfully.
This method is provided to allow subclasses of [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") to customize their output; it should not be called directly.

*example* is the example about to be processed. *got* is the actual output from the example. *test* is the test containing *example*. *out* is the output function that was passed to [`DocTestRunner.run()`](#doctest.DocTestRunner.run "doctest.DocTestRunner.run").

`report_failure(out, test, example, got)`

Report that the given example failed. This method is provided to allow subclasses of [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") to customize their output; it should not be called directly.

*example* is the example about to be processed. *got* is the actual output from the example. *test* is the test containing *example*. *out* is the output function that was passed to [`DocTestRunner.run()`](#doctest.DocTestRunner.run "doctest.DocTestRunner.run").

`report_unexpected_exception(out, test, example, exc_info)`

Report that the given example raised an unexpected exception. This method is provided to allow subclasses of [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") to customize their output; it should not be called directly.

*example* is the example about to be processed. *exc\_info* is a tuple containing information about the unexpected exception (as returned by [`sys.exc_info()`](sys#sys.exc_info "sys.exc_info")). *test* is the test containing *example*. *out* is the output function that was passed to [`DocTestRunner.run()`](#doctest.DocTestRunner.run "doctest.DocTestRunner.run").

`run(test, compileflags=None, out=None, clear_globs=True)`

Run the examples in *test* (a [`DocTest`](#doctest.DocTest "doctest.DocTest") object), and display the results using the writer function *out*.

The examples are run in the namespace `test.globs`. If *clear\_globs* is true (the default), then this namespace will be cleared after the test runs, to help with garbage collection. If you would like to examine the namespace after the test completes, then use *clear\_globs=False*.

*compileflags* gives the set of flags that should be used by the Python compiler when running the examples. If not specified, then it will default to the set of future-import flags that apply to *globs*.

The output of each example is checked using the [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner")’s output checker, and the results are formatted by the `DocTestRunner.report_*()` methods.

`summarize(verbose=None)`

Print a summary of all the test cases that have been run by this DocTestRunner, and return a [named tuple](../glossary#term-named-tuple) `TestResults(failed, attempted)`.

The optional *verbose* argument controls how detailed the summary is. If the verbosity is not specified, then the [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner")’s verbosity is used.

### OutputChecker objects

`class doctest.OutputChecker`

A class used to check whether the actual output from a doctest example matches the expected output. [`OutputChecker`](#doctest.OutputChecker "doctest.OutputChecker") defines two methods: [`check_output()`](#doctest.OutputChecker.check_output "doctest.OutputChecker.check_output"), which compares a given pair of outputs, and returns `True` if they match; and [`output_difference()`](#doctest.OutputChecker.output_difference "doctest.OutputChecker.output_difference"), which returns a string describing the differences between two outputs.
[`OutputChecker`](#doctest.OutputChecker "doctest.OutputChecker") defines the following methods:

`check_output(want, got, optionflags)`

Return `True` iff the actual output from an example (*got*) matches the expected output (*want*). These strings are always considered to match if they are identical; but depending on what option flags the test runner is using, several non-exact match types are also possible. See section [Option Flags](#doctest-options) for more information about option flags.

`output_difference(example, got, optionflags)`

Return a string describing the differences between the expected output for a given example (*example*) and the actual output (*got*). *optionflags* is the set of option flags used to compare *want* and *got*.

Debugging
---------

Doctest provides several mechanisms for debugging doctest examples:

* Several functions convert doctests to executable Python programs, which can be run under the Python debugger, [`pdb`](pdb#module-pdb "pdb: The Python debugger for interactive interpreters.").
* The [`DebugRunner`](#doctest.DebugRunner "doctest.DebugRunner") class is a subclass of [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") that raises an exception for the first failing example, containing information about that example. This information can be used to perform post-mortem debugging on the example.
* The [`unittest`](unittest#module-unittest "unittest: Unit testing framework for Python.") cases generated by [`DocTestSuite()`](#doctest.DocTestSuite "doctest.DocTestSuite") support the [`debug()`](#doctest.debug "doctest.debug") method defined by [`unittest.TestCase`](unittest#unittest.TestCase "unittest.TestCase").
* You can add a call to [`pdb.set_trace()`](pdb#pdb.set_trace "pdb.set_trace") in a doctest example, and you’ll drop into the Python debugger when that line is executed. Then you can inspect current values of variables, and so on. For example, suppose `a.py` contains just this module docstring:

```
"""
>>> def f(x):
...     g(x*2)
>>> def g(x):
...     print(x+3)
...     import pdb; pdb.set_trace()
>>> f(3)
9
"""
```

Then an interactive Python session may look like this:

```
>>> import a, doctest
>>> doctest.testmod(a)
--Return--
> <doctest a[1]>(3)g()->None
-> import pdb; pdb.set_trace()
(Pdb) list
  1     def g(x):
  2         print(x+3)
  3  ->     import pdb; pdb.set_trace()
[EOF]
(Pdb) p x
6
(Pdb) step
--Return--
> <doctest a[0]>(2)f()->None
-> g(x*2)
(Pdb) list
  1     def f(x):
  2  ->     g(x*2)
[EOF]
(Pdb) p x
3
(Pdb) step
--Return--
> <doctest a[2]>(1)?()->None
-> f(3)
(Pdb) cont
(0, 3)
>>>
```

Functions that convert doctests to Python code, and possibly run the synthesized code under the debugger:

`doctest.script_from_examples(s)`

Convert text with examples to a script.

Argument *s* is a string containing doctest examples. The string is converted to a Python script, where doctest examples in *s* are converted to regular code, and everything else is converted to Python comments. The generated script is returned as a string. For example,

```
import doctest
print(doctest.script_from_examples(r"""
    Set x and y to 1 and 2.
    >>> x, y = 1, 2

    Print their sum:
    >>> print(x+y)
    3
"""))
```

displays:

```
# Set x and y to 1 and 2.
x, y = 1, 2
#
# Print their sum:
print(x+y)
# Expected:
## 3
```

This function is used internally by other functions (see below), but can also be useful when you want to transform an interactive Python session into a Python script.

`doctest.testsource(module, name)`

Convert the doctest for an object to a script.
Argument *module* is a module object, or dotted name of a module, containing the object whose doctests are of interest. Argument *name* is the name (within the module) of the object with the doctests of interest.

The result is a string, containing the object’s docstring converted to a Python script, as described for [`script_from_examples()`](#doctest.script_from_examples "doctest.script_from_examples") above. For example, if module `a.py` contains a top-level function `f()`, then

```
import a, doctest
print(doctest.testsource(a, "a.f"))
```

prints a script version of function `f()`’s docstring, with doctests converted to code, and the rest placed in comments.

`doctest.debug(module, name, pm=False)`

Debug the doctests for an object.

The *module* and *name* arguments are the same as for function [`testsource()`](#doctest.testsource "doctest.testsource") above. The synthesized Python script for the named object’s docstring is written to a temporary file, and then that file is run under the control of the Python debugger, [`pdb`](pdb#module-pdb "pdb: The Python debugger for interactive interpreters.").

A shallow copy of `module.__dict__` is used for both local and global execution context.

Optional argument *pm* controls whether post-mortem debugging is used. If *pm* has a true value, the script file is run directly, and the debugger gets involved only if the script terminates via raising an unhandled exception. If it does, then post-mortem debugging is invoked, via [`pdb.post_mortem()`](pdb#pdb.post_mortem "pdb.post_mortem"), passing the traceback object from the unhandled exception. If *pm* is not specified, or is false, the script is run under the debugger from the start, via passing an appropriate [`exec()`](functions#exec "exec") call to [`pdb.run()`](pdb#pdb.run "pdb.run").

`doctest.debug_src(src, pm=False, globs=None)`

Debug the doctests in a string.

This is like function [`debug()`](#doctest.debug "doctest.debug") above, except that a string containing doctest examples is specified directly, via the *src* argument.

Optional argument *pm* has the same meaning as in function [`debug()`](#doctest.debug "doctest.debug") above.

Optional argument *globs* gives a dictionary to use as both local and global execution context. If not specified, or `None`, an empty dictionary is used. If specified, a shallow copy of the dictionary is used.

The [`DebugRunner`](#doctest.DebugRunner "doctest.DebugRunner") class, and the special exceptions it may raise, are of most interest to testing framework authors, and will only be sketched here. See the source code, and especially [`DebugRunner`](#doctest.DebugRunner "doctest.DebugRunner")’s docstring (which is a doctest!) for more details:

`class doctest.DebugRunner(checker=None, verbose=None, optionflags=0)`

A subclass of [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") that raises an exception as soon as a failure is encountered. If an unexpected exception occurs, an [`UnexpectedException`](#doctest.UnexpectedException "doctest.UnexpectedException") exception is raised, containing the test, the example, and the original exception. If the output doesn’t match, then a [`DocTestFailure`](#doctest.DocTestFailure "doctest.DocTestFailure") exception is raised, containing the test, the example, and the actual output.

For information about the constructor parameters and methods, see the documentation for [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") in section [Advanced API](#doctest-advanced-api).
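A minimal sketch of driving [`DebugRunner`](#doctest.DebugRunner "doctest.DebugRunner") by hand; the failing example and the name `demo` are contrived for illustration:

```
import doctest

# A deliberately failing example.
src = """
>>> 2 + 2
5
"""

test = doctest.DocTestParser().get_doctest(src, {}, 'demo', '<demo>', 0)
runner = doctest.DebugRunner(verbose=False)
try:
    runner.run(test)
except doctest.DocTestFailure as failure:
    # The exception carries the failing example and the actual output.
    print(failure.example.source, end='')  # the failing source: 2 + 2
    print(failure.got, end='')             # the actual output: 4
```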
There are two exceptions that may be raised by [`DebugRunner`](#doctest.DebugRunner "doctest.DebugRunner") instances: `exception doctest.DocTestFailure(test, example, got)` An exception raised by [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") to signal that a doctest example’s actual output did not match its expected output. The constructor arguments are used to initialize the attributes of the same names. [`DocTestFailure`](#doctest.DocTestFailure "doctest.DocTestFailure") defines the following attributes: `DocTestFailure.test` The [`DocTest`](#doctest.DocTest "doctest.DocTest") object that was being run when the example failed. `DocTestFailure.example` The [`Example`](#doctest.Example "doctest.Example") that failed. `DocTestFailure.got` The example’s actual output. `exception doctest.UnexpectedException(test, example, exc_info)` An exception raised by [`DocTestRunner`](#doctest.DocTestRunner "doctest.DocTestRunner") to signal that a doctest example raised an unexpected exception. The constructor arguments are used to initialize the attributes of the same names. [`UnexpectedException`](#doctest.UnexpectedException "doctest.UnexpectedException") defines the following attributes: `UnexpectedException.test` The [`DocTest`](#doctest.DocTest "doctest.DocTest") object that was being run when the example failed. `UnexpectedException.example` The [`Example`](#doctest.Example "doctest.Example") that failed. `UnexpectedException.exc_info` A tuple containing information about the unexpected exception, as returned by [`sys.exc_info()`](sys#sys.exc_info "sys.exc_info"). Soapbox ------- As mentioned in the introduction, [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") has grown to have three primary uses: 1. Checking examples in docstrings. 2. Regression testing. 3. Executable documentation / literate testing. These uses have different requirements, and it is important to distinguish them. In particular, filling your docstrings with obscure test cases makes for bad documentation. When writing a docstring, choose docstring examples with care. There’s an art to this that needs to be learned—it may not be natural at first. Examples should add genuine value to the documentation. A good example can often be worth many words. If done with care, the examples will be invaluable for your users, and will pay back the time it takes to collect them many times over as the years go by and things change. I’m still amazed at how often one of my [`doctest`](#module-doctest "doctest: Test pieces of code within docstrings.") examples stops working after a “harmless” change. Doctest also makes an excellent tool for regression testing, especially if you don’t skimp on explanatory text. By interleaving prose and examples, it becomes much easier to keep track of what’s actually being tested, and why. When a test fails, good prose can make it much easier to figure out what the problem is, and how it should be fixed. It’s true that you could write extensive comments in code-based testing, but few programmers do. Many have found that using doctest approaches instead leads to much clearer tests. Perhaps this is simply because doctest makes writing prose a little easier than writing code, while writing comments in code is a little harder. I think it goes deeper than just that: the natural attitude when writing a doctest-based test is that you want to explain the fine points of your software, and illustrate them with examples. 
This in turn naturally leads to test files that start with the simplest features, and logically progress to complications and edge cases. A coherent narrative is the result, instead of a collection of isolated functions that test isolated bits of functionality seemingly at random. It’s a different attitude, and produces different results, blurring the distinction between testing and explaining.

Regression testing is best confined to dedicated objects or files. There are several options for organizing tests:

* Write text files containing test cases as interactive examples, and test the files using [`testfile()`](#doctest.testfile "doctest.testfile") or [`DocFileSuite()`](#doctest.DocFileSuite "doctest.DocFileSuite"). This is recommended, although it is easiest to do for new projects, designed from the start to use doctest.
* Define functions named `_regrtest_topic` that consist of single docstrings, containing test cases for the named topics. These functions can be included in the same file as the module, or separated out into a separate test file.
* Define a `__test__` dictionary mapping from regression test topics to docstrings containing test cases.

When you have placed your tests in a module, the module can itself be the test runner. When a test fails, you can arrange for your test runner to re-run only the failing doctest while you debug the problem. Here is a minimal example of such a test runner:

```
if __name__ == '__main__':
    import sys
    import doctest

    flags = doctest.REPORT_NDIFF|doctest.FAIL_FAST
    if len(sys.argv) > 1:
        name = sys.argv[1]
        if name in globals():
            obj = globals()[name]
        else:
            obj = __test__[name]
        doctest.run_docstring_examples(obj, globals(), name=name,
                                       optionflags=flags)
    else:
        fail, total = doctest.testmod(optionflags=flags)
        print("{} failures out of {} tests".format(fail, total))
```

#### Footnotes

`1`

Examples containing both expected output and an exception are not supported. Trying to guess where one ends and the other begins is too error-prone, and that also makes for a confusing test.
python html.entities — Definitions of HTML general entities

html.entities — Definitions of HTML general entities
====================================================

**Source code:** [Lib/html/entities.py](https://github.com/python/cpython/tree/3.9/Lib/html/entities.py)

This module defines four dictionaries, [`html5`](#html.entities.html5 "html.entities.html5"), [`name2codepoint`](#html.entities.name2codepoint "html.entities.name2codepoint"), [`codepoint2name`](#html.entities.codepoint2name "html.entities.codepoint2name"), and [`entitydefs`](#html.entities.entitydefs "html.entities.entitydefs").

`html.entities.html5`

A dictionary that maps HTML5 named character references [1](#id2) to the equivalent Unicode character(s), e.g. `html5['gt;'] == '>'`. Note that the trailing semicolon is included in the name (e.g. `'gt;'`), however some of the names are accepted by the standard even without the semicolon: in this case the name is present with and without the `';'`. See also [`html.unescape()`](html#html.unescape "html.unescape").

New in version 3.3.

`html.entities.entitydefs`

A dictionary mapping XHTML 1.0 entity definitions to their replacement text in ISO Latin-1.

`html.entities.name2codepoint`

A dictionary that maps HTML entity names to the Unicode code points.

`html.entities.codepoint2name`

A dictionary that maps Unicode code points to HTML entity names.

#### Footnotes

`1`

See <https://html.spec.whatwg.org/multipage/syntax.html#named-character-references>

python fnmatch — Unix filename pattern matching

fnmatch — Unix filename pattern matching
========================================

**Source code:** [Lib/fnmatch.py](https://github.com/python/cpython/tree/3.9/Lib/fnmatch.py)

This module provides support for Unix shell-style wildcards, which are *not* the same as regular expressions (which are documented in the [`re`](re#module-re "re: Regular expression operations.") module). The special characters used in shell-style wildcards are:

| Pattern | Meaning |
| --- | --- |
| `*` | matches everything |
| `?` | matches any single character |
| `[seq]` | matches any character in *seq* |
| `[!seq]` | matches any character not in *seq* |

For a literal match, wrap the meta-characters in brackets. For example, `'[?]'` matches the character `'?'`.

Note that the filename separator (`'/'` on Unix) is *not* special to this module. See module [`glob`](glob#module-glob "glob: Unix shell style pathname pattern expansion.") for pathname expansion ([`glob`](glob#module-glob "glob: Unix shell style pathname pattern expansion.") uses [`filter()`](#fnmatch.filter "fnmatch.filter") to match pathname segments). Similarly, filenames starting with a period are not special for this module, and are matched by the `*` and `?` patterns.

`fnmatch.fnmatch(filename, pattern)`

Test whether the *filename* string matches the *pattern* string, returning [`True`](constants#True "True") or [`False`](constants#False "False"). Both parameters are case-normalized using [`os.path.normcase()`](os.path#os.path.normcase "os.path.normcase"). [`fnmatchcase()`](#fnmatch.fnmatchcase "fnmatch.fnmatchcase") can be used to perform a case-sensitive comparison, regardless of whether that’s standard for the operating system.
This example will print all file names in the current directory with the extension `.txt`:

```
import fnmatch
import os

for file in os.listdir('.'):
    if fnmatch.fnmatch(file, '*.txt'):
        print(file)
```

`fnmatch.fnmatchcase(filename, pattern)`

Test whether *filename* matches *pattern*, returning [`True`](constants#True "True") or [`False`](constants#False "False"); the comparison is case-sensitive and does not apply [`os.path.normcase()`](os.path#os.path.normcase "os.path.normcase").

`fnmatch.filter(names, pattern)`

Construct a list from those elements of the iterable *names* that match *pattern*. It is the same as `[n for n in names if fnmatch(n, pattern)]`, but implemented more efficiently.

`fnmatch.translate(pattern)`

Return the shell-style *pattern* converted to a regular expression for use with [`re.match()`](re#re.match "re.match"). Example:

```
>>> import fnmatch, re
>>>
>>> regex = fnmatch.translate('*.txt')
>>> regex
'(?s:.*\\.txt)\\Z'
>>> reobj = re.compile(regex)
>>> reobj.match('foobar.txt')
<re.Match object; span=(0, 10), match='foobar.txt'>
```

See also

`Module` [`glob`](glob#module-glob "glob: Unix shell style pathname pattern expansion.")
Unix shell-style path expansion.

traceback — Print or retrieve a stack traceback
===============================================

**Source code:** [Lib/traceback.py](https://github.com/python/cpython/tree/3.9/Lib/traceback.py)

This module provides a standard interface to extract, format and print stack traces of Python programs. It exactly mimics the behavior of the Python interpreter when it prints a stack trace. This is useful when you want to print stack traces under program control, such as in a “wrapper” around the interpreter.

The module uses traceback objects — this is the object type that is stored in the [`sys.last_traceback`](sys#sys.last_traceback "sys.last_traceback") variable and returned as the third item from [`sys.exc_info()`](sys#sys.exc_info "sys.exc_info").

The module defines the following functions:

`traceback.print_tb(tb, limit=None, file=None)`

Print up to *limit* stack trace entries from traceback object *tb* (starting from the caller’s frame) if *limit* is positive. Otherwise, print the last `abs(limit)` entries. If *limit* is omitted or `None`, all entries are printed. If *file* is omitted or `None`, the output goes to `sys.stderr`; otherwise it should be an open file or file-like object to receive the output.

Changed in version 3.5: Added negative *limit* support.

`traceback.print_exception(etype, value, tb, limit=None, file=None, chain=True)`

Print exception information and stack trace entries from traceback object *tb* to *file*. This differs from [`print_tb()`](#traceback.print_tb "traceback.print_tb") in the following ways:

* if *tb* is not `None`, it prints a header `Traceback (most recent call last):`
* it prints the exception *etype* and *value* after the stack trace
* if *type(value)* is [`SyntaxError`](exceptions#SyntaxError "SyntaxError") and *value* has the appropriate format, it prints the line where the syntax error occurred with a caret indicating the approximate position of the error.

The optional *limit* argument has the same meaning as for [`print_tb()`](#traceback.print_tb "traceback.print_tb"). If *chain* is true (the default), then chained exceptions (the `__cause__` or `__context__` attributes of the exception) will be printed as well, like the interpreter itself does when printing an unhandled exception.
Changed in version 3.5: The *etype* argument is ignored and inferred from the type of *value*.

`traceback.print_exc(limit=None, file=None, chain=True)`

This is a shorthand for `print_exception(*sys.exc_info(), limit, file, chain)`.

`traceback.print_last(limit=None, file=None, chain=True)`

This is a shorthand for `print_exception(sys.last_type, sys.last_value, sys.last_traceback, limit, file, chain)`. In general it will work only after an exception has reached an interactive prompt (see [`sys.last_type`](sys#sys.last_type "sys.last_type")).

`traceback.print_stack(f=None, limit=None, file=None)`

Print up to *limit* stack trace entries (starting from the invocation point) if *limit* is positive. Otherwise, print the last `abs(limit)` entries. If *limit* is omitted or `None`, all entries are printed. The optional *f* argument can be used to specify an alternate stack frame to start. The optional *file* argument has the same meaning as for [`print_tb()`](#traceback.print_tb "traceback.print_tb").

Changed in version 3.5: Added negative *limit* support.

`traceback.extract_tb(tb, limit=None)`

Return a [`StackSummary`](#traceback.StackSummary "traceback.StackSummary") object representing a list of “pre-processed” stack trace entries extracted from the traceback object *tb*. It is useful for alternate formatting of stack traces. The optional *limit* argument has the same meaning as for [`print_tb()`](#traceback.print_tb "traceback.print_tb"). A “pre-processed” stack trace entry is a [`FrameSummary`](#traceback.FrameSummary "traceback.FrameSummary") object containing attributes `filename`, `lineno`, `name`, and `line` representing the information that is usually printed for a stack trace. The `line` is a string with leading and trailing whitespace stripped; if the source is not available it is `None`.

`traceback.extract_stack(f=None, limit=None)`

Extract the raw traceback from the current stack frame. The return value has the same format as for [`extract_tb()`](#traceback.extract_tb "traceback.extract_tb"). The optional *f* and *limit* arguments have the same meaning as for [`print_stack()`](#traceback.print_stack "traceback.print_stack").

`traceback.format_list(extracted_list)`

Given a list of tuples or [`FrameSummary`](#traceback.FrameSummary "traceback.FrameSummary") objects as returned by [`extract_tb()`](#traceback.extract_tb "traceback.extract_tb") or [`extract_stack()`](#traceback.extract_stack "traceback.extract_stack"), return a list of strings ready for printing. Each string in the resulting list corresponds to the item with the same index in the argument list. Each string ends in a newline; the strings may contain internal newlines as well, for those items whose source text line is not `None`.

`traceback.format_exception_only(etype, value)`

Format the exception part of a traceback. The arguments are the exception type and value such as given by `sys.last_type` and `sys.last_value`. The return value is a list of strings, each ending in a newline. Normally, the list contains a single string; however, for [`SyntaxError`](exceptions#SyntaxError "SyntaxError") exceptions, it contains several lines that (when printed) display detailed information about where the syntax error occurred. The message indicating which exception occurred is always the last string in the list.

`traceback.format_exception(etype, value, tb, limit=None, chain=True)`

Format a stack trace and the exception information.
The arguments have the same meaning as the corresponding arguments to [`print_exception()`](#traceback.print_exception "traceback.print_exception"). The return value is a list of strings, each ending in a newline and some containing internal newlines. When these lines are concatenated and printed, exactly the same text is printed as does [`print_exception()`](#traceback.print_exception "traceback.print_exception"). Changed in version 3.5: The *etype* argument is ignored and inferred from the type of *value*. `traceback.format_exc(limit=None, chain=True)` This is like `print_exc(limit)` but returns a string instead of printing to a file. `traceback.format_tb(tb, limit=None)` A shorthand for `format_list(extract_tb(tb, limit))`. `traceback.format_stack(f=None, limit=None)` A shorthand for `format_list(extract_stack(f, limit))`. `traceback.clear_frames(tb)` Clears the local variables of all the stack frames in a traceback *tb* by calling the `clear()` method of each frame object. New in version 3.4. `traceback.walk_stack(f)` Walk a stack following `f.f_back` from the given frame, yielding the frame and line number for each frame. If *f* is `None`, the current stack is used. This helper is used with [`StackSummary.extract()`](#traceback.StackSummary.extract "traceback.StackSummary.extract"). New in version 3.5. `traceback.walk_tb(tb)` Walk a traceback following `tb_next` yielding the frame and line number for each frame. This helper is used with [`StackSummary.extract()`](#traceback.StackSummary.extract "traceback.StackSummary.extract"). New in version 3.5. The module also defines the following classes: TracebackException Objects -------------------------- New in version 3.5. [`TracebackException`](#traceback.TracebackException "traceback.TracebackException") objects are created from actual exceptions to capture data for later printing in a lightweight fashion. `class traceback.TracebackException(exc_type, exc_value, exc_traceback, *, limit=None, lookup_lines=True, capture_locals=False)` Capture an exception for later rendering. *limit*, *lookup\_lines* and *capture\_locals* are as for the [`StackSummary`](#traceback.StackSummary "traceback.StackSummary") class. Note that when locals are captured, they are also shown in the traceback. `__cause__` A [`TracebackException`](#traceback.TracebackException "traceback.TracebackException") of the original `__cause__`. `__context__` A [`TracebackException`](#traceback.TracebackException "traceback.TracebackException") of the original `__context__`. `__suppress_context__` The `__suppress_context__` value from the original exception. `stack` A [`StackSummary`](#traceback.StackSummary "traceback.StackSummary") representing the traceback. `exc_type` The class of the original traceback. `filename` For syntax errors - the file name where the error occurred. `lineno` For syntax errors - the line number where the error occurred. `text` For syntax errors - the text where the error occurred. `offset` For syntax errors - the offset into the text where the error occurred. `msg` For syntax errors - the compiler error message. `classmethod from_exception(exc, *, limit=None, lookup_lines=True, capture_locals=False)` Capture an exception for later rendering. *limit*, *lookup\_lines* and *capture\_locals* are as for the [`StackSummary`](#traceback.StackSummary "traceback.StackSummary") class. Note that when locals are captured, they are also shown in the traceback. `format(*, chain=True)` Format the exception. 
If *chain* is not `True`, `__cause__` and `__context__` will not be formatted. The return value is a generator of strings, each ending in a newline and some containing internal newlines. [`print_exception()`](#traceback.print_exception "traceback.print_exception") is a wrapper around this method which just prints the lines to a file. The message indicating which exception occurred is always the last string in the output.

`format_exception_only()`

Format the exception part of the traceback. The return value is a generator of strings, each ending in a newline. Normally, the generator emits a single string; however, for [`SyntaxError`](exceptions#SyntaxError "SyntaxError") exceptions, it emits several lines that (when printed) display detailed information about where the syntax error occurred. The message indicating which exception occurred is always the last string in the output.

StackSummary Objects
--------------------

New in version 3.5.

[`StackSummary`](#traceback.StackSummary "traceback.StackSummary") objects represent a call stack ready for formatting.

`class traceback.StackSummary`

`classmethod extract(frame_gen, *, limit=None, lookup_lines=True, capture_locals=False)`

Construct a [`StackSummary`](#traceback.StackSummary "traceback.StackSummary") object from a frame generator (such as is returned by [`walk_stack()`](#traceback.walk_stack "traceback.walk_stack") or [`walk_tb()`](#traceback.walk_tb "traceback.walk_tb")). If *limit* is supplied, only this many frames are taken from *frame\_gen*. If *lookup\_lines* is `False`, the returned [`FrameSummary`](#traceback.FrameSummary "traceback.FrameSummary") objects will not have read their lines in yet, making the cost of creating the [`StackSummary`](#traceback.StackSummary "traceback.StackSummary") cheaper (which may be valuable if it may not actually get formatted). If *capture\_locals* is `True` the local variables in each [`FrameSummary`](#traceback.FrameSummary "traceback.FrameSummary") are captured as object representations.

`classmethod from_list(a_list)`

Construct a [`StackSummary`](#traceback.StackSummary "traceback.StackSummary") object from a supplied list of [`FrameSummary`](#traceback.FrameSummary "traceback.FrameSummary") objects or old-style list of tuples. Each tuple should be a 4-tuple with filename, lineno, name, line as the elements.

`format()`

Returns a list of strings ready for printing. Each string in the resulting list corresponds to a single frame from the stack. Each string ends in a newline; the strings may contain internal newlines as well, for those items with source text lines. For long sequences of the same frame and line, the first few repetitions are shown, followed by a summary line stating the exact number of further repetitions.

Changed in version 3.6: Long sequences of repeated frames are now abbreviated.

FrameSummary Objects
--------------------

New in version 3.5.

[`FrameSummary`](#traceback.FrameSummary "traceback.FrameSummary") objects represent a single frame in a traceback.

`class traceback.FrameSummary(filename, lineno, name, lookup_line=True, locals=None, line=None)`

Represent a single frame in the traceback or stack that is being formatted or printed. It may optionally have a stringified version of the frame’s locals included in it. If *lookup\_line* is `False`, the source code is not looked up until the [`FrameSummary`](#traceback.FrameSummary "traceback.FrameSummary") has the `line` attribute accessed (which also happens when casting it to a tuple).
`line` may be directly provided, and will prevent line lookups happening at all. *locals* is an optional local variable dictionary, and if supplied the variable representations are stored in the summary for later display.

Traceback Examples
------------------

This simple example implements a basic read-eval-print loop, similar to (but less useful than) the standard Python interactive interpreter loop. For a more complete implementation of the interpreter loop, refer to the [`code`](code#module-code "code: Facilities to implement read-eval-print loops.") module.

```
import sys, traceback

def run_user_code(envdir):
    source = input(">>> ")
    try:
        exec(source, envdir)
    except Exception:
        print("Exception in user code:")
        print("-"*60)
        traceback.print_exc(file=sys.stdout)
        print("-"*60)

envdir = {}
while True:
    run_user_code(envdir)
```

The following example demonstrates the different ways to print and format the exception and traceback:

```
import sys, traceback

def lumberjack():
    bright_side_of_death()

def bright_side_of_death():
    return tuple()[0]

try:
    lumberjack()
except IndexError:
    exc_type, exc_value, exc_traceback = sys.exc_info()
    print("*** print_tb:")
    traceback.print_tb(exc_traceback, limit=1, file=sys.stdout)
    print("*** print_exception:")
    # exc_type below is ignored on 3.5 and later
    traceback.print_exception(exc_type, exc_value, exc_traceback,
                              limit=2, file=sys.stdout)
    print("*** print_exc:")
    traceback.print_exc(limit=2, file=sys.stdout)
    print("*** format_exc, first and last line:")
    formatted_lines = traceback.format_exc().splitlines()
    print(formatted_lines[0])
    print(formatted_lines[-1])
    print("*** format_exception:")
    # exc_type below is ignored on 3.5 and later
    print(repr(traceback.format_exception(exc_type, exc_value,
                                          exc_traceback)))
    print("*** extract_tb:")
    print(repr(traceback.extract_tb(exc_traceback)))
    print("*** format_tb:")
    print(repr(traceback.format_tb(exc_traceback)))
    print("*** tb_lineno:", exc_traceback.tb_lineno)
```

The output for the example would look similar to this:

```
*** print_tb:
  File "<doctest...>", line 10, in <module>
    lumberjack()
*** print_exception:
Traceback (most recent call last):
  File "<doctest...>", line 10, in <module>
    lumberjack()
  File "<doctest...>", line 4, in lumberjack
    bright_side_of_death()
IndexError: tuple index out of range
*** print_exc:
Traceback (most recent call last):
  File "<doctest...>", line 10, in <module>
    lumberjack()
  File "<doctest...>", line 4, in lumberjack
    bright_side_of_death()
IndexError: tuple index out of range
*** format_exc, first and last line:
Traceback (most recent call last):
IndexError: tuple index out of range
*** format_exception:
['Traceback (most recent call last):\n', '  File "<doctest...>", line 10, in <module>\n    lumberjack()\n', '  File "<doctest...>", line 4, in lumberjack\n    bright_side_of_death()\n', '  File "<doctest...>", line 7, in bright_side_of_death\n    return tuple()[0]\n', 'IndexError: tuple index out of range\n']
*** extract_tb:
[<FrameSummary file <doctest...>, line 10 in <module>>, <FrameSummary file <doctest...>, line 4 in lumberjack>, <FrameSummary file <doctest...>, line 7 in bright_side_of_death>]
*** format_tb:
['  File "<doctest...>", line 10, in <module>\n    lumberjack()\n', '  File "<doctest...>", line 4, in lumberjack\n    bright_side_of_death()\n', '  File "<doctest...>", line 7, in bright_side_of_death\n    return tuple()[0]\n']
*** tb_lineno: 10
```

The following example shows the different ways to print and format the stack:

```
>>> import traceback
>>> def another_function():
...     lumberstack()
...
>>> def lumberstack():
...     traceback.print_stack()
...     print(repr(traceback.extract_stack()))
...     print(repr(traceback.format_stack()))
...
>>> another_function()
  File "<doctest>", line 10, in <module>
    another_function()
  File "<doctest>", line 3, in another_function
    lumberstack()
  File "<doctest>", line 6, in lumberstack
    traceback.print_stack()
[('<doctest>', 10, '<module>', 'another_function()'), ('<doctest>', 3, 'another_function', 'lumberstack()'), ('<doctest>', 7, 'lumberstack', 'print(repr(traceback.extract_stack()))')]
['  File "<doctest>", line 10, in <module>\n    another_function()\n', '  File "<doctest>", line 3, in another_function\n    lumberstack()\n', '  File "<doctest>", line 8, in lumberstack\n    print(repr(traceback.format_stack()))\n']
```

This last example demonstrates the final few formatting functions:

```
>>> import traceback
>>> traceback.format_list([('spam.py', 3, '<module>', 'spam.eggs()'),
...                        ('eggs.py', 42, 'eggs', 'return "bacon"')])
['  File "spam.py", line 3, in <module>\n    spam.eggs()\n', '  File "eggs.py", line 42, in eggs\n    return "bacon"\n']
>>> an_error = IndexError('tuple index out of range')
>>> traceback.format_exception_only(type(an_error), an_error)
['IndexError: tuple index out of range\n']
```
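The examples above rebuild the report eagerly from `sys.exc_info()`; the [`TracebackException`](#traceback.TracebackException "traceback.TracebackException") API documented earlier can capture the same information for later, lazier rendering. A minimal sketch:

```
import traceback

try:
    tuple()[0]
except IndexError as exc:
    # Snapshot the frames (and, optionally, repr()s of their locals) now,
    # so the exception object itself need not be kept alive.
    te = traceback.TracebackException.from_exception(exc, capture_locals=True)

# format() is a generator of strings; joining them reproduces the
# familiar "Traceback (most recent call last): ..." report.
print("".join(te.format()))
```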
python asyncio — Asynchronous I/O asyncio — Asynchronous I/O ========================== asyncio is a library to write **concurrent** code using the **async/await** syntax. asyncio is used as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web-servers, database connection libraries, distributed task queues, etc. asyncio is often a perfect fit for IO-bound and high-level **structured** network code. asyncio provides a set of **high-level** APIs to: * [run Python coroutines](asyncio-task#coroutine) concurrently and have full control over their execution; * perform [network IO and IPC](asyncio-stream#asyncio-streams); * control [subprocesses](asyncio-subprocess#asyncio-subprocess); * distribute tasks via [queues](asyncio-queue#asyncio-queues); * [synchronize](asyncio-sync#asyncio-sync) concurrent code; Additionally, there are **low-level** APIs for *library and framework developers* to: * create and manage [event loops](asyncio-eventloop#asyncio-event-loop), which provide asynchronous APIs for [`networking`](asyncio-eventloop#asyncio.loop.create_server "asyncio.loop.create_server"), running [`subprocesses`](asyncio-eventloop#asyncio.loop.subprocess_exec "asyncio.loop.subprocess_exec"), handling [`OS signals`](asyncio-eventloop#asyncio.loop.add_signal_handler "asyncio.loop.add_signal_handler"), etc; * implement efficient protocols using [transports](asyncio-protocol#asyncio-transports-protocols); * [bridge](asyncio-future#asyncio-futures) callback-based libraries and code with async/await syntax. #### Reference High-level APIs * [Coroutines and Tasks](asyncio-task) * [Streams](asyncio-stream) * [Synchronization Primitives](asyncio-sync) * [Subprocesses](asyncio-subprocess) * [Queues](asyncio-queue) * [Exceptions](asyncio-exceptions) Low-level APIs * [Event Loop](asyncio-eventloop) * [Futures](asyncio-future) * [Transports and Protocols](asyncio-protocol) * [Policies](asyncio-policy) * [Platform Support](asyncio-platforms) Guides and Tutorials * [High-level API Index](asyncio-api-index) * [Low-level API Index](asyncio-llapi-index) * [Developing with asyncio](asyncio-dev) Note The source code for asyncio can be found in [Lib/asyncio/](https://github.com/python/cpython/tree/3.9/Lib/asyncio/). python dis — Disassembler for Python bytecode dis — Disassembler for Python bytecode ====================================== **Source code:** [Lib/dis.py](https://github.com/python/cpython/tree/3.9/Lib/dis.py) The [`dis`](#module-dis "dis: Disassembler for Python bytecode.") module supports the analysis of CPython [bytecode](../glossary#term-bytecode) by disassembling it. The CPython bytecode which this module takes as an input is defined in the file `Include/opcode.h` and used by the compiler and the interpreter. **CPython implementation detail:** Bytecode is an implementation detail of the CPython interpreter. No guarantees are made that bytecode will not be added, removed, or changed between versions of Python. Use of this module should not be considered to work across Python VMs or Python releases. Changed in version 3.6: Use 2 bytes for each instruction. Previously the number of bytes varied by instruction. Example: Given the function `myfunc()`: ``` def myfunc(alist): return len(alist) ``` the following command can be used to display the disassembly of `myfunc()`: ``` >>> dis.dis(myfunc) 2 0 LOAD_GLOBAL 0 (len) 2 LOAD_FAST 0 (alist) 4 CALL_FUNCTION 1 6 RETURN_VALUE ``` (The “2” is a line number). Bytecode analysis ----------------- New in version 3.4. 
The bytecode analysis API allows pieces of Python code to be wrapped in a [`Bytecode`](#dis.Bytecode "dis.Bytecode") object that provides easy access to details of the compiled code. `class dis.Bytecode(x, *, first_line=None, current_offset=None)` Analyse the bytecode corresponding to a function, generator, asynchronous generator, coroutine, method, string of source code, or a code object (as returned by [`compile()`](functions#compile "compile")). This is a convenience wrapper around many of the functions listed below, most notably [`get_instructions()`](#dis.get_instructions "dis.get_instructions"), as iterating over a [`Bytecode`](#dis.Bytecode "dis.Bytecode") instance yields the bytecode operations as [`Instruction`](#dis.Instruction "dis.Instruction") instances. If *first\_line* is not `None`, it indicates the line number that should be reported for the first source line in the disassembled code. Otherwise, the source line information (if any) is taken directly from the disassembled code object. If *current\_offset* is not `None`, it refers to an instruction offset in the disassembled code. Setting this means [`dis()`](#dis.Bytecode.dis "dis.Bytecode.dis") will display a “current instruction” marker against the specified opcode. `classmethod from_traceback(tb)` Construct a [`Bytecode`](#dis.Bytecode "dis.Bytecode") instance from the given traceback, setting *current\_offset* to the instruction responsible for the exception. `codeobj` The compiled code object. `first_line` The first source line of the code object (if available) `dis()` Return a formatted view of the bytecode operations (the same as printed by [`dis.dis()`](#dis.dis "dis.dis"), but returned as a multi-line string). `info()` Return a formatted multi-line string with detailed information about the code object, like [`code_info()`](#dis.code_info "dis.code_info"). Changed in version 3.7: This can now handle coroutine and asynchronous generator objects. Example: ``` >>> bytecode = dis.Bytecode(myfunc) >>> for instr in bytecode: ... print(instr.opname) ... LOAD_GLOBAL LOAD_FAST CALL_FUNCTION RETURN_VALUE ``` Analysis functions ------------------ The [`dis`](#module-dis "dis: Disassembler for Python bytecode.") module also defines the following analysis functions that convert the input directly to the desired output. They can be useful if only a single operation is being performed, so the intermediate analysis object isn’t useful: `dis.code_info(x)` Return a formatted multi-line string with detailed code object information for the supplied function, generator, asynchronous generator, coroutine, method, source code string or code object. Note that the exact contents of code info strings are highly implementation dependent and they may change arbitrarily across Python VMs or Python releases. New in version 3.2. Changed in version 3.7: This can now handle coroutine and asynchronous generator objects. `dis.show_code(x, *, file=None)` Print detailed code object information for the supplied function, method, source code string or code object to *file* (or `sys.stdout` if *file* is not specified). This is a convenient shorthand for `print(code_info(x), file=file)`, intended for interactive exploration at the interpreter prompt. New in version 3.2. Changed in version 3.4: Added *file* parameter. `dis.dis(x=None, *, file=None, depth=None)` Disassemble the *x* object. 
*x* can denote either a module, a class, a method, a function, a generator, an asynchronous generator, a coroutine, a code object, a string of source code or a byte sequence of raw bytecode. For a module, it disassembles all functions. For a class, it disassembles all methods (including class and static methods). For a code object or sequence of raw bytecode, it prints one line per bytecode instruction. It also recursively disassembles nested code objects (the code of comprehensions, generator expressions and nested functions, and the code used for building nested classes). Strings are first compiled to code objects with the [`compile()`](functions#compile "compile") built-in function before being disassembled. If no object is provided, this function disassembles the last traceback. The disassembly is written as text to the supplied *file* argument if provided and to `sys.stdout` otherwise. The maximal depth of recursion is limited by *depth* unless it is `None`. `depth=0` means no recursion. Changed in version 3.4: Added *file* parameter. Changed in version 3.7: Implemented recursive disassembling and added *depth* parameter. Changed in version 3.7: This can now handle coroutine and asynchronous generator objects. `dis.distb(tb=None, *, file=None)` Disassemble the top-of-stack function of a traceback, using the last traceback if none was passed. The instruction causing the exception is indicated. The disassembly is written as text to the supplied *file* argument if provided and to `sys.stdout` otherwise. Changed in version 3.4: Added *file* parameter. `dis.disassemble(code, lasti=-1, *, file=None)` `dis.disco(code, lasti=-1, *, file=None)` Disassemble a code object, indicating the last instruction if *lasti* was provided. The output is divided in the following columns: 1. the line number, for the first instruction of each line 2. the current instruction, indicated as `-->`, 3. a labelled instruction, indicated with `>>`, 4. the address of the instruction, 5. the operation code name, 6. operation parameters, and 7. interpretation of the parameters in parentheses. The parameter interpretation recognizes local and global variable names, constant values, branch targets, and compare operators. The disassembly is written as text to the supplied *file* argument if provided and to `sys.stdout` otherwise. Changed in version 3.4: Added *file* parameter. `dis.get_instructions(x, *, first_line=None)` Return an iterator over the instructions in the supplied function, method, source code string or code object. The iterator generates a series of [`Instruction`](#dis.Instruction "dis.Instruction") named tuples giving the details of each operation in the supplied code. If *first\_line* is not `None`, it indicates the line number that should be reported for the first source line in the disassembled code. Otherwise, the source line information (if any) is taken directly from the disassembled code object. New in version 3.4. `dis.findlinestarts(code)` This generator function uses the `co_firstlineno` and `co_lnotab` attributes of the code object *code* to find the offsets which are starts of lines in the source code. They are generated as `(offset, lineno)` pairs. See [Objects/lnotab\_notes.txt](https://github.com/python/cpython/tree/3.9/Objects/lnotab_notes.txt) for the `co_lnotab` format and how to decode it. Changed in version 3.6: Line numbers can be decreasing. Before, they were always increasing. 
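For instance, a minimal sketch showing the `(offset, lineno)` pairs this generator yields (the function `f` here is only a stand-in, not part of the `dis` API):

```
import dis

def f(x):
    y = x + 1
    return y

# Each pair is (bytecode offset, source line number) for the first
# instruction of every source line in the compiled code object.
for offset, lineno in dis.findlinestarts(f.__code__):
    print(offset, lineno)
```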
`dis.findlabels(code)` Detect all offsets in the raw compiled bytecode string *code* which are jump targets, and return a list of these offsets. `dis.stack_effect(opcode, oparg=None, *, jump=None)` Compute the stack effect of *opcode* with argument *oparg*. If the code has a jump target and *jump* is `True`, [`stack_effect()`](#dis.stack_effect "dis.stack_effect") will return the stack effect of jumping. If *jump* is `False`, it will return the stack effect of not jumping. And if *jump* is `None` (default), it will return the maximal stack effect of both cases. New in version 3.4. Changed in version 3.8: Added *jump* parameter. Python Bytecode Instructions ---------------------------- The [`get_instructions()`](#dis.get_instructions "dis.get_instructions") function and [`Bytecode`](#dis.Bytecode "dis.Bytecode") class provide details of bytecode instructions as [`Instruction`](#dis.Instruction "dis.Instruction") instances: `class dis.Instruction` Details for a bytecode operation `opcode` numeric code for operation, corresponding to the opcode values listed below and the bytecode values in the [Opcode collections](#opcode-collections). `opname` human readable name for operation `arg` numeric argument to operation (if any), otherwise `None` `argval` resolved arg value (if known), otherwise same as arg `argrepr` human readable description of operation argument `offset` start index of operation within bytecode sequence `starts_line` line started by this opcode (if any), otherwise `None` `is_jump_target` `True` if other code jumps to here, otherwise `False` New in version 3.4. The Python compiler currently generates the following bytecode instructions. **General instructions** `NOP` Do nothing code. Used as a placeholder by the bytecode optimizer. `POP_TOP` Removes the top-of-stack (TOS) item. `ROT_TWO` Swaps the two top-most stack items. `ROT_THREE` Lifts second and third stack item one position up, moves top down to position three. `ROT_FOUR` Lifts second, third and fourth stack items one position up, moves top down to position four. New in version 3.8. `DUP_TOP` Duplicates the reference on top of the stack. New in version 3.2. `DUP_TOP_TWO` Duplicates the two references on top of the stack, leaving them in the same order. New in version 3.2. **Unary operations** Unary operations take the top of the stack, apply the operation, and push the result back on the stack. `UNARY_POSITIVE` Implements `TOS = +TOS`. `UNARY_NEGATIVE` Implements `TOS = -TOS`. `UNARY_NOT` Implements `TOS = not TOS`. `UNARY_INVERT` Implements `TOS = ~TOS`. `GET_ITER` Implements `TOS = iter(TOS)`. `GET_YIELD_FROM_ITER` If `TOS` is a [generator iterator](../glossary#term-generator-iterator) or [coroutine](../glossary#term-coroutine) object it is left as is. Otherwise, implements `TOS = iter(TOS)`. New in version 3.5. **Binary operations** Binary operations remove the top of the stack (TOS) and the second top-most stack item (TOS1) from the stack. They perform the operation, and put the result back on the stack. `BINARY_POWER` Implements `TOS = TOS1 ** TOS`. `BINARY_MULTIPLY` Implements `TOS = TOS1 * TOS`. `BINARY_MATRIX_MULTIPLY` Implements `TOS = TOS1 @ TOS`. New in version 3.5. `BINARY_FLOOR_DIVIDE` Implements `TOS = TOS1 // TOS`. `BINARY_TRUE_DIVIDE` Implements `TOS = TOS1 / TOS`. `BINARY_MODULO` Implements `TOS = TOS1 % TOS`. `BINARY_ADD` Implements `TOS = TOS1 + TOS`. `BINARY_SUBTRACT` Implements `TOS = TOS1 - TOS`. `BINARY_SUBSCR` Implements `TOS = TOS1[TOS]`. `BINARY_LSHIFT` Implements `TOS = TOS1 << TOS`. 
`BINARY_RSHIFT` Implements `TOS = TOS1 >> TOS`. `BINARY_AND` Implements `TOS = TOS1 & TOS`. `BINARY_XOR` Implements `TOS = TOS1 ^ TOS`. `BINARY_OR` Implements `TOS = TOS1 | TOS`. **In-place operations** In-place operations are like binary operations, in that they remove TOS and TOS1, and push the result back on the stack, but the operation is done in-place when TOS1 supports it, and the resulting TOS may be (but does not have to be) the original TOS1. `INPLACE_POWER` Implements in-place `TOS = TOS1 ** TOS`. `INPLACE_MULTIPLY` Implements in-place `TOS = TOS1 * TOS`. `INPLACE_MATRIX_MULTIPLY` Implements in-place `TOS = TOS1 @ TOS`. New in version 3.5. `INPLACE_FLOOR_DIVIDE` Implements in-place `TOS = TOS1 // TOS`. `INPLACE_TRUE_DIVIDE` Implements in-place `TOS = TOS1 / TOS`. `INPLACE_MODULO` Implements in-place `TOS = TOS1 % TOS`. `INPLACE_ADD` Implements in-place `TOS = TOS1 + TOS`. `INPLACE_SUBTRACT` Implements in-place `TOS = TOS1 - TOS`. `INPLACE_LSHIFT` Implements in-place `TOS = TOS1 << TOS`. `INPLACE_RSHIFT` Implements in-place `TOS = TOS1 >> TOS`. `INPLACE_AND` Implements in-place `TOS = TOS1 & TOS`. `INPLACE_XOR` Implements in-place `TOS = TOS1 ^ TOS`. `INPLACE_OR` Implements in-place `TOS = TOS1 | TOS`. `STORE_SUBSCR` Implements `TOS1[TOS] = TOS2`. `DELETE_SUBSCR` Implements `del TOS1[TOS]`. **Coroutine opcodes** `GET_AWAITABLE` Implements `TOS = get_awaitable(TOS)`, where `get_awaitable(o)` returns `o` if `o` is a coroutine object or a generator object with the CO\_ITERABLE\_COROUTINE flag, or resolves `o.__await__`. New in version 3.5. `GET_AITER` Implements `TOS = TOS.__aiter__()`. New in version 3.5. Changed in version 3.7: Returning awaitable objects from `__aiter__` is no longer supported. `GET_ANEXT` Implements `PUSH(get_awaitable(TOS.__anext__()))`. See `GET_AWAITABLE` for details about `get_awaitable` New in version 3.5. `END_ASYNC_FOR` Terminates an [`async for`](../reference/compound_stmts#async-for) loop. Handles an exception raised when awaiting a next item. If TOS is [`StopAsyncIteration`](exceptions#StopAsyncIteration "StopAsyncIteration") pop 7 values from the stack and restore the exception state using the second three of them. Otherwise re-raise the exception using the three values from the stack. An exception handler block is removed from the block stack. New in version 3.8. `BEFORE_ASYNC_WITH` Resolves `__aenter__` and `__aexit__` from the object on top of the stack. Pushes `__aexit__` and result of `__aenter__()` to the stack. New in version 3.5. `SETUP_ASYNC_WITH` Creates a new frame object. New in version 3.5. **Miscellaneous opcodes** `PRINT_EXPR` Implements the expression statement for the interactive mode. TOS is removed from the stack and printed. In non-interactive mode, an expression statement is terminated with [`POP_TOP`](#opcode-POP_TOP). `SET_ADD(i)` Calls `set.add(TOS1[-i], TOS)`. Used to implement set comprehensions. `LIST_APPEND(i)` Calls `list.append(TOS1[-i], TOS)`. Used to implement list comprehensions. `MAP_ADD(i)` Calls `dict.__setitem__(TOS1[-i], TOS1, TOS)`. Used to implement dict comprehensions. New in version 3.1. Changed in version 3.8: Map value is TOS and map key is TOS1. Before, those were reversed. For all of the [`SET_ADD`](#opcode-SET_ADD), [`LIST_APPEND`](#opcode-LIST_APPEND) and [`MAP_ADD`](#opcode-MAP_ADD) instructions, while the added value or key/value pair is popped off, the container object remains on the stack so that it is available for further iterations of the loop. 
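As a sketch of the point above (the helper `squares` is illustrative, not part of the reference), disassembling a function containing a list comprehension shows [`LIST_APPEND`](#opcode-LIST_APPEND) executing once per iteration while the list object stays on the stack:

```
import dis

def squares(n):
    # The comprehension compiles to a nested code object; dis.dis()
    # disassembles it recursively, and its loop body ends in LIST_APPEND.
    return [x * x for x in range(n)]

dis.dis(squares)
```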
`RETURN_VALUE` Returns with TOS to the caller of the function. `YIELD_VALUE` Pops TOS and yields it from a [generator](../glossary#term-generator). `YIELD_FROM` Pops TOS and delegates to it as a subiterator from a [generator](../glossary#term-generator). New in version 3.3. `SETUP_ANNOTATIONS` Checks whether `__annotations__` is defined in `locals()`, if not it is set up to an empty `dict`. This opcode is only emitted if a class or module body contains [variable annotations](../glossary#term-variable-annotation) statically. New in version 3.6. `IMPORT_STAR` Loads all symbols not starting with `'_'` directly from the module TOS to the local namespace. The module is popped after loading all names. This opcode implements `from module import *`. `POP_BLOCK` Removes one block from the block stack. Per frame, there is a stack of blocks, denoting [`try`](../reference/compound_stmts#try) statements, and such. `POP_EXCEPT` Removes one block from the block stack. The popped block must be an exception handler block, as implicitly created when entering an except handler. In addition to popping extraneous values from the frame stack, the last three popped values are used to restore the exception state. `RERAISE` Re-raises the exception currently on top of the stack. New in version 3.9. `WITH_EXCEPT_START` Calls the function in position 7 on the stack with the top three items on the stack as arguments. Used to implement the call `context_manager.__exit__(*exc_info())` when an exception has occurred in a [`with`](../reference/compound_stmts#with) statement. New in version 3.9. `LOAD_ASSERTION_ERROR` Pushes [`AssertionError`](exceptions#AssertionError "AssertionError") onto the stack. Used by the [`assert`](../reference/simple_stmts#assert) statement. New in version 3.9. `LOAD_BUILD_CLASS` Pushes `builtins.__build_class__()` onto the stack. It is later called by [`CALL_FUNCTION`](#opcode-CALL_FUNCTION) to construct a class. `SETUP_WITH(delta)` This opcode performs several operations before a with block starts. First, it loads [`__exit__()`](../reference/datamodel#object.__exit__ "object.__exit__") from the context manager and pushes it onto the stack for later use by [`WITH_EXCEPT_START`](#opcode-WITH_EXCEPT_START). Then, [`__enter__()`](../reference/datamodel#object.__enter__ "object.__enter__") is called, and a finally block pointing to *delta* is pushed. Finally, the result of calling the `__enter__()` method is pushed onto the stack. The next opcode will either ignore it ([`POP_TOP`](#opcode-POP_TOP)), or store it in (a) variable(s) ([`STORE_FAST`](#opcode-STORE_FAST), [`STORE_NAME`](#opcode-STORE_NAME), or [`UNPACK_SEQUENCE`](#opcode-UNPACK_SEQUENCE)). New in version 3.2. All of the following opcodes use their arguments. `STORE_NAME(namei)` Implements `name = TOS`. *namei* is the index of *name* in the attribute `co_names` of the code object. The compiler tries to use [`STORE_FAST`](#opcode-STORE_FAST) or [`STORE_GLOBAL`](#opcode-STORE_GLOBAL) if possible. `DELETE_NAME(namei)` Implements `del name`, where *namei* is the index into `co_names` attribute of the code object. `UNPACK_SEQUENCE(count)` Unpacks TOS into *count* individual values, which are put onto the stack right-to-left. `UNPACK_EX(counts)` Implements assignment with a starred target: Unpacks an iterable in TOS into individual values, where the total number of values can be smaller than the number of items in the iterable: one of the new values will be a list of all leftover items. 
The low byte of *counts* is the number of values before the list value, the high byte of *counts* the number of values after it. The resulting values are put onto the stack right-to-left. `STORE_ATTR(namei)` Implements `TOS.name = TOS1`, where *namei* is the index of name in `co_names`. `DELETE_ATTR(namei)` Implements `del TOS.name`, using *namei* as index into `co_names`. `STORE_GLOBAL(namei)` Works as [`STORE_NAME`](#opcode-STORE_NAME), but stores the name as a global. `DELETE_GLOBAL(namei)` Works as [`DELETE_NAME`](#opcode-DELETE_NAME), but deletes a global name. `LOAD_CONST(consti)` Pushes `co_consts[consti]` onto the stack. `LOAD_NAME(namei)` Pushes the value associated with `co_names[namei]` onto the stack. `BUILD_TUPLE(count)` Creates a tuple consuming *count* items from the stack, and pushes the resulting tuple onto the stack. `BUILD_LIST(count)` Works as [`BUILD_TUPLE`](#opcode-BUILD_TUPLE), but creates a list. `BUILD_SET(count)` Works as [`BUILD_TUPLE`](#opcode-BUILD_TUPLE), but creates a set. `BUILD_MAP(count)` Pushes a new dictionary object onto the stack. Pops `2 * count` items so that the dictionary holds *count* entries: `{..., TOS3: TOS2, TOS1: TOS}`. Changed in version 3.5: The dictionary is created from stack items instead of creating an empty dictionary pre-sized to hold *count* items. `BUILD_CONST_KEY_MAP(count)` The version of [`BUILD_MAP`](#opcode-BUILD_MAP) specialized for constant keys. Pops the top element on the stack which contains a tuple of keys, then starting from `TOS1`, pops *count* values to form values in the built dictionary. New in version 3.6. `BUILD_STRING(count)` Concatenates *count* strings from the stack and pushes the resulting string onto the stack. New in version 3.6. `LIST_TO_TUPLE` Pops a list from the stack and pushes a tuple containing the same values. New in version 3.9. `LIST_EXTEND(i)` Calls `list.extend(TOS1[-i], TOS)`. Used to build lists. New in version 3.9. `SET_UPDATE(i)` Calls `set.update(TOS1[-i], TOS)`. Used to build sets. New in version 3.9. `DICT_UPDATE(i)` Calls `dict.update(TOS1[-i], TOS)`. Used to build dicts. New in version 3.9. `DICT_MERGE` Like [`DICT_UPDATE`](#opcode-DICT_UPDATE) but raises an exception for duplicate keys. New in version 3.9. `LOAD_ATTR(namei)` Replaces TOS with `getattr(TOS, co_names[namei])`. `COMPARE_OP(opname)` Performs a Boolean operation. The operation name can be found in `cmp_op[opname]`. `IS_OP(invert)` Performs `is` comparison, or `is not` if `invert` is 1. New in version 3.9. `CONTAINS_OP(invert)` Performs `in` comparison, or `not in` if `invert` is 1. New in version 3.9. `IMPORT_NAME(namei)` Imports the module `co_names[namei]`. TOS and TOS1 are popped and provide the *fromlist* and *level* arguments of [`__import__()`](functions#__import__ "__import__"). The module object is pushed onto the stack. The current namespace is not affected: for a proper import statement, a subsequent [`STORE_FAST`](#opcode-STORE_FAST) instruction modifies the namespace. `IMPORT_FROM(namei)` Loads the attribute `co_names[namei]` from the module found in TOS. The resulting object is pushed onto the stack, to be subsequently stored by a [`STORE_FAST`](#opcode-STORE_FAST) instruction. `JUMP_FORWARD(delta)` Increments bytecode counter by *delta*. `POP_JUMP_IF_TRUE(target)` If TOS is true, sets the bytecode counter to *target*. TOS is popped. New in version 3.1. `POP_JUMP_IF_FALSE(target)` If TOS is false, sets the bytecode counter to *target*. TOS is popped. New in version 3.1. 
`JUMP_IF_NOT_EXC_MATCH(target)` Tests whether the second value on the stack is an exception matching TOS, and jumps if it is not. Pops two values from the stack. New in version 3.9. `JUMP_IF_TRUE_OR_POP(target)` If TOS is true, sets the bytecode counter to *target* and leaves TOS on the stack. Otherwise (TOS is false), TOS is popped. New in version 3.1. `JUMP_IF_FALSE_OR_POP(target)` If TOS is false, sets the bytecode counter to *target* and leaves TOS on the stack. Otherwise (TOS is true), TOS is popped. New in version 3.1. `JUMP_ABSOLUTE(target)` Set bytecode counter to *target*. `FOR_ITER(delta)` TOS is an [iterator](../glossary#term-iterator). Call its [`__next__()`](stdtypes#iterator.__next__ "iterator.__next__") method. If this yields a new value, push it on the stack (leaving the iterator below it). If the iterator indicates it is exhausted, TOS is popped, and the byte code counter is incremented by *delta*. `LOAD_GLOBAL(namei)` Loads the global named `co_names[namei]` onto the stack. `SETUP_FINALLY(delta)` Pushes a try block from a try-finally or try-except clause onto the block stack. *delta* points to the finally block or the first except block. `LOAD_FAST(var_num)` Pushes a reference to the local `co_varnames[var_num]` onto the stack. `STORE_FAST(var_num)` Stores TOS into the local `co_varnames[var_num]`. `DELETE_FAST(var_num)` Deletes local `co_varnames[var_num]`. `LOAD_CLOSURE(i)` Pushes a reference to the cell contained in slot *i* of the cell and free variable storage. The name of the variable is `co_cellvars[i]` if *i* is less than the length of *co\_cellvars*. Otherwise it is `co_freevars[i - len(co_cellvars)]`. `LOAD_DEREF(i)` Loads the cell contained in slot *i* of the cell and free variable storage. Pushes a reference to the object the cell contains on the stack. `LOAD_CLASSDEREF(i)` Much like [`LOAD_DEREF`](#opcode-LOAD_DEREF) but first checks the locals dictionary before consulting the cell. This is used for loading free variables in class bodies. New in version 3.4. `STORE_DEREF(i)` Stores TOS into the cell contained in slot *i* of the cell and free variable storage. `DELETE_DEREF(i)` Empties the cell contained in slot *i* of the cell and free variable storage. Used by the [`del`](../reference/simple_stmts#del) statement. New in version 3.2. `RAISE_VARARGS(argc)` Raises an exception using one of the 3 forms of the `raise` statement, depending on the value of *argc*: * 0: `raise` (re-raise previous exception) * 1: `raise TOS` (raise exception instance or type at `TOS`) * 2: `raise TOS1 from TOS` (raise exception instance or type at `TOS1` with `__cause__` set to `TOS`) `CALL_FUNCTION(argc)` Calls a callable object with positional arguments. *argc* indicates the number of positional arguments. The top of the stack contains positional arguments, with the right-most argument on top. Below the arguments is a callable object to call. `CALL_FUNCTION` pops all arguments and the callable object off the stack, calls the callable object with those arguments, and pushes the return value returned by the callable object. Changed in version 3.6: This opcode is used only for calls with positional arguments. `CALL_FUNCTION_KW(argc)` Calls a callable object with positional (if any) and keyword arguments. *argc* indicates the total number of positional and keyword arguments. The top element on the stack contains a tuple with the names of the keyword arguments, which must be strings. Below that are the values for the keyword arguments, in the order corresponding to the tuple. 
Below that are positional arguments, with the right-most parameter on top. Below the arguments is a callable object to call. `CALL_FUNCTION_KW` pops all arguments and the callable object off the stack, calls the callable object with those arguments, and pushes the return value returned by the callable object.

Changed in version 3.6: Keyword arguments are packed in a tuple instead of a dictionary, *argc* indicates the total number of arguments.

`CALL_FUNCTION_EX(flags)`

Calls a callable object with a variable set of positional and keyword arguments. If the lowest bit of *flags* is set, the top of the stack contains a mapping object containing additional keyword arguments. Before the callable is called, the mapping object and iterable object are each “unpacked” and their contents passed in as keyword and positional arguments respectively. `CALL_FUNCTION_EX` pops all arguments and the callable object off the stack, calls the callable object with those arguments, and pushes the return value returned by the callable object.

New in version 3.6.

`LOAD_METHOD(namei)`

Loads a method named `co_names[namei]` from the TOS object. TOS is popped. This bytecode distinguishes two cases: if TOS has a method with the correct name, the bytecode pushes the unbound method and TOS. TOS will be used as the first argument (`self`) by [`CALL_METHOD`](#opcode-CALL_METHOD) when calling the unbound method. Otherwise, `NULL` and the object returned by the attribute lookup are pushed.

New in version 3.7.

`CALL_METHOD(argc)`

Calls a method. *argc* is the number of positional arguments. Keyword arguments are not supported. This opcode is designed to be used with [`LOAD_METHOD`](#opcode-LOAD_METHOD). Positional arguments are on top of the stack. Below them, the two items described in [`LOAD_METHOD`](#opcode-LOAD_METHOD) are on the stack (either `self` and an unbound method object or `NULL` and an arbitrary callable). All of them are popped and the return value is pushed.

New in version 3.7.

`MAKE_FUNCTION(flags)`

Pushes a new function object on the stack. From bottom to top, the consumed stack must consist of values if the argument carries a specified flag value

* `0x01` a tuple of default values for positional-only and positional-or-keyword parameters in positional order
* `0x02` a dictionary of keyword-only parameters’ default values
* `0x04` an annotation dictionary
* `0x08` a tuple containing cells for free variables, making a closure
* the code associated with the function (at TOS1)
* the [qualified name](../glossary#term-qualified-name) of the function (at TOS)

`BUILD_SLICE(argc)`

Pushes a slice object on the stack. *argc* must be 2 or 3. If it is 2, `slice(TOS1, TOS)` is pushed; if it is 3, `slice(TOS2, TOS1, TOS)` is pushed. See the [`slice()`](functions#slice "slice") built-in function for more information.

`EXTENDED_ARG(ext)`

Prefixes any opcode which has an argument too big to fit into the default one byte. *ext* holds an additional byte which acts as higher bits in the argument. For each opcode, at most three prefixal `EXTENDED_ARG` are allowed, forming an argument from two-byte to four-byte.

`FORMAT_VALUE(flags)`

Used for implementing formatted literal strings (f-strings). Pops an optional *fmt\_spec* from the stack, then a required *value*. *flags* is interpreted as follows:

* `(flags & 0x03) == 0x00`: *value* is formatted as-is.
* `(flags & 0x03) == 0x01`: call [`str()`](stdtypes#str "str") on *value* before formatting it.
* `(flags & 0x03) == 0x02`: call [`repr()`](functions#repr "repr") on *value* before formatting it.
* `(flags & 0x03) == 0x03`: call [`ascii()`](functions#ascii "ascii") on *value* before formatting it.
* `(flags & 0x04) == 0x04`: pop *fmt\_spec* from the stack and use it, else use an empty *fmt\_spec*.

Formatting is performed using `PyObject_Format()`. The result is pushed on the stack.

New in version 3.6.

`HAVE_ARGUMENT`

This is not really an opcode. It identifies the dividing line between opcodes which don’t use their argument and those that do (`< HAVE_ARGUMENT` and `>= HAVE_ARGUMENT`, respectively).

Changed in version 3.6: Now every instruction has an argument, but opcodes `< HAVE_ARGUMENT` ignore it. Before, only opcodes `>= HAVE_ARGUMENT` had an argument.

Opcode collections
------------------

These collections are provided for automatic introspection of bytecode instructions:

`dis.opname`

Sequence of operation names, indexable using the bytecode.

`dis.opmap`

Dictionary mapping operation names to bytecodes.

`dis.cmp_op`

Sequence of all compare operation names.

`dis.hasconst`

Sequence of bytecodes that access a constant.

`dis.hasfree`

Sequence of bytecodes that access a free variable (note that ‘free’ in this context refers to names in the current scope that are referenced by inner scopes or names in outer scopes that are referenced from this scope. It does *not* include references to global or builtin scopes).

`dis.hasname`

Sequence of bytecodes that access an attribute by name.

`dis.hasjrel`

Sequence of bytecodes that have a relative jump target.

`dis.hasjabs`

Sequence of bytecodes that have an absolute jump target.

`dis.haslocal`

Sequence of bytecodes that access a local variable.

`dis.hascompare`

Sequence of bytecodes of Boolean operations.
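As a brief illustration of how these collections combine with [`get_instructions()`](#dis.get_instructions "dis.get_instructions") (a minimal sketch; the function `add` is just a stand-in):

```
import dis

def add(a, b):
    return a + b

# dis.haslocal lists the opcodes (e.g. LOAD_FAST, STORE_FAST) that
# reference co_varnames, so this reports only local-variable accesses.
for instr in dis.get_instructions(add):
    if instr.opcode in dis.haslocal:
        print(instr.opname, instr.argval)   # LOAD_FAST a / LOAD_FAST b
```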
python Python Language Services Python Language Services ======================== Python provides a number of modules to assist in working with the Python language. These modules support tokenizing, parsing, syntax analysis, bytecode disassembly, and various other facilities. These modules include: * [`parser` — Access Python parse trees](parser) + [Creating ST Objects](parser#creating-st-objects) + [Converting ST Objects](parser#converting-st-objects) + [Queries on ST Objects](parser#queries-on-st-objects) + [Exceptions and Error Handling](parser#exceptions-and-error-handling) + [ST Objects](parser#st-objects) + [Example: Emulation of `compile()`](parser#example-emulation-of-compile) * [`ast` — Abstract Syntax Trees](ast) + [Abstract Grammar](ast#abstract-grammar) + [Node classes](ast#node-classes) - [Literals](ast#literals) - [Variables](ast#variables) - [Expressions](ast#expressions) * [Subscripting](ast#subscripting) * [Comprehensions](ast#comprehensions) - [Statements](ast#statements) * [Imports](ast#imports) - [Control flow](ast#control-flow) - [Function and class definitions](ast#function-and-class-definitions) - [Async and await](ast#async-and-await) + [`ast` Helpers](ast#ast-helpers) + [Compiler Flags](ast#compiler-flags) + [Command-Line Usage](ast#command-line-usage) * [`symtable` — Access to the compiler’s symbol tables](symtable) + [Generating Symbol Tables](symtable#generating-symbol-tables) + [Examining Symbol Tables](symtable#examining-symbol-tables) * [`symbol` — Constants used with Python parse trees](symbol) * [`token` — Constants used with Python parse trees](token) * [`keyword` — Testing for Python keywords](keyword) * [`tokenize` — Tokenizer for Python source](tokenize) + [Tokenizing Input](tokenize#tokenizing-input) + [Command-Line Usage](tokenize#command-line-usage) + [Examples](tokenize#examples) * [`tabnanny` — Detection of ambiguous indentation](tabnanny) * [`pyclbr` — Python module browser support](pyclbr) + [Function Objects](pyclbr#function-objects) + [Class Objects](pyclbr#class-objects) * [`py_compile` — Compile Python source files](py_compile) * [`compileall` — Byte-compile Python libraries](compileall) + [Command-line use](compileall#command-line-use) + [Public functions](compileall#public-functions) * [`dis` — Disassembler for Python bytecode](dis) + [Bytecode analysis](dis#bytecode-analysis) + [Analysis functions](dis#analysis-functions) + [Python Bytecode Instructions](dis#python-bytecode-instructions) + [Opcode collections](dis#opcode-collections) * [`pickletools` — Tools for pickle developers](pickletools) + [Command line usage](pickletools#command-line-usage) - [Command line options](pickletools#command-line-options) + [Programmatic Interface](pickletools#programmatic-interface) python plistlib — Generate and parse Apple .plist files plistlib — Generate and parse Apple .plist files ================================================ **Source code:** [Lib/plistlib.py](https://github.com/python/cpython/tree/3.9/Lib/plistlib.py) This module provides an interface for reading and writing the “property list” files used by Apple, primarily on macOS and iOS. This module supports both binary and XML plist files. The property list (`.plist`) file format is a simple serialization supporting basic object types, like dictionaries, lists, numbers and strings. Usually the top level object is a dictionary. To write out and to parse a plist file, use the [`dump()`](#plistlib.dump "plistlib.dump") and [`load()`](#plistlib.load "plistlib.load") functions. 
To work with plist data in bytes objects, use [`dumps()`](#plistlib.dumps "plistlib.dumps") and [`loads()`](#plistlib.loads "plistlib.loads"). Values can be strings, integers, floats, booleans, tuples, lists, dictionaries (but only with string keys), [`bytes`](stdtypes#bytes "bytes"), [`bytearray`](stdtypes#bytearray "bytearray") or [`datetime.datetime`](datetime#datetime.datetime "datetime.datetime") objects. Changed in version 3.4: New API, old API deprecated. Support for binary format plists added. Changed in version 3.8: Support added for reading and writing [`UID`](#plistlib.UID "plistlib.UID") tokens in binary plists as used by NSKeyedArchiver and NSKeyedUnarchiver. Changed in version 3.9: Old API removed. See also [PList manual page](https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/PropertyLists/) Apple’s documentation of the file format. This module defines the following functions: `plistlib.load(fp, *, fmt=None, dict_type=dict)` Read a plist file. *fp* should be a readable and binary file object. Return the unpacked root object (which usually is a dictionary). The *fmt* is the format of the file and the following values are valid: * [`None`](constants#None "None"): Autodetect the file format * [`FMT_XML`](#plistlib.FMT_XML "plistlib.FMT_XML"): XML file format * [`FMT_BINARY`](#plistlib.FMT_BINARY "plistlib.FMT_BINARY"): Binary plist format The *dict\_type* is the type used for dictionaries that are read from the plist file. XML data for the [`FMT_XML`](#plistlib.FMT_XML "plistlib.FMT_XML") format is parsed using the Expat parser from [`xml.parsers.expat`](pyexpat#module-xml.parsers.expat "xml.parsers.expat: An interface to the Expat non-validating XML parser.") – see its documentation for possible exceptions on ill-formed XML. Unknown elements will simply be ignored by the plist parser. The parser for the binary format raises `InvalidFileException` when the file cannot be parsed. New in version 3.4. `plistlib.loads(data, *, fmt=None, dict_type=dict)` Load a plist from a bytes object. See [`load()`](#plistlib.load "plistlib.load") for an explanation of the keyword arguments. New in version 3.4. `plistlib.dump(value, fp, *, fmt=FMT_XML, sort_keys=True, skipkeys=False)` Write *value* to a plist file. *Fp* should be a writable, binary file object. The *fmt* argument specifies the format of the plist file and can be one of the following values: * [`FMT_XML`](#plistlib.FMT_XML "plistlib.FMT_XML"): XML formatted plist file * [`FMT_BINARY`](#plistlib.FMT_BINARY "plistlib.FMT_BINARY"): Binary formatted plist file When *sort\_keys* is true (the default) the keys for dictionaries will be written to the plist in sorted order, otherwise they will be written in the iteration order of the dictionary. When *skipkeys* is false (the default) the function raises [`TypeError`](exceptions#TypeError "TypeError") when a key of a dictionary is not a string, otherwise such keys are skipped. A [`TypeError`](exceptions#TypeError "TypeError") will be raised if the object is of an unsupported type or a container that contains objects of unsupported types. An [`OverflowError`](exceptions#OverflowError "OverflowError") will be raised for integer values that cannot be represented in (binary) plist files. New in version 3.4. `plistlib.dumps(value, *, fmt=FMT_XML, sort_keys=True, skipkeys=False)` Return *value* as a plist-formatted bytes object. See the documentation for [`dump()`](#plistlib.dump "plistlib.dump") for an explanation of the keyword arguments of this function. 
New in version 3.4.

The following classes are available:

`class plistlib.UID(data)`

Wraps an [`int`](functions#int "int"). This is used when reading or writing NSKeyedArchiver encoded data, which contains UID (see PList manual).

It has one attribute, `data`, which can be used to retrieve the int value of the UID. `data` must be in the range `0 <= data < 2**64`.

New in version 3.8.

The following constants are available:

`plistlib.FMT_XML`

The XML format for plist files.

New in version 3.4.

`plistlib.FMT_BINARY`

The binary format for plist files.

New in version 3.4.

Examples
--------

Generating a plist:

```
import datetime
import time

from plistlib import dump

pl = dict(
    aString = "Doodah",
    aList = ["A", "B", 12, 32.1, [1, 2, 3]],
    aFloat = 0.1,
    anInt = 728,
    aDict = dict(
        anotherString = "<hello & hi there!>",
        aThirdString = "M\xe4ssig, Ma\xdf",
        aTrueValue = True,
        aFalseValue = False,
    ),
    someData = b"<binary gunk>",
    someMoreData = b"<lots of binary gunk>" * 10,
    aDate = datetime.datetime.fromtimestamp(time.mktime(time.gmtime())),
)

# fileName holds the path of the plist file to write
with open(fileName, 'wb') as fp:
    dump(pl, fp)
```

Parsing a plist:

```
from plistlib import load

with open(fileName, 'rb') as fp:
    pl = load(fp)
print(pl["aKey"])
```

python tty — Terminal control functions

tty — Terminal control functions
================================

**Source code:** [Lib/tty.py](https://github.com/python/cpython/tree/3.9/Lib/tty.py)

The [`tty`](#module-tty "tty: Utility functions that perform common terminal control operations. (Unix)") module defines functions for putting the tty into cbreak and raw modes. Because it requires the [`termios`](termios#module-termios "termios: POSIX style tty control. (Unix)") module, it will work only on Unix.

The [`tty`](#module-tty "tty: Utility functions that perform common terminal control operations. (Unix)") module defines the following functions:

`tty.setraw(fd, when=termios.TCSAFLUSH)`

Change the mode of the file descriptor *fd* to raw. If *when* is omitted, it defaults to `termios.TCSAFLUSH`, and is passed to [`termios.tcsetattr()`](termios#termios.tcsetattr "termios.tcsetattr").

`tty.setcbreak(fd, when=termios.TCSAFLUSH)`

Change the mode of file descriptor *fd* to cbreak. If *when* is omitted, it defaults to `termios.TCSAFLUSH`, and is passed to [`termios.tcsetattr()`](termios#termios.tcsetattr "termios.tcsetattr").

See also

`Module` [`termios`](termios#module-termios "termios: POSIX style tty control. (Unix)")

Low-level terminal control interface.

python itertools — Functions creating iterators for efficient looping

itertools — Functions creating iterators for efficient looping
==============================================================

This module implements a number of [iterator](../glossary#term-iterator) building blocks inspired by constructs from APL, Haskell, and SML. Each has been recast in a form suitable for Python.

The module standardizes a core set of fast, memory efficient tools that are useful by themselves or in combination. Together, they form an “iterator algebra” making it possible to construct specialized tools succinctly and efficiently in pure Python.

For instance, SML provides a tabulation tool: `tabulate(f)` which produces a sequence `f(0), f(1), ...`. The same effect can be achieved in Python by combining [`map()`](functions#map "map") and [`count()`](#itertools.count "itertools.count") to form `map(f, count())`.

These tools and their built-in counterparts also work well with the high-speed functions in the [`operator`](operator#module-operator "operator: Functions corresponding to the standard operators.") module.
For example, the multiplication operator can be mapped across two vectors to form an efficient dot-product: `sum(map(operator.mul, vector1, vector2))`. **Infinite iterators:** | Iterator | Arguments | Results | Example | | --- | --- | --- | --- | | [`count()`](#itertools.count "itertools.count") | start, [step] | start, start+step, start+2\*step, … | `count(10) --> 10 11 12 13 14 ...` | | [`cycle()`](#itertools.cycle "itertools.cycle") | p | p0, p1, … plast, p0, p1, … | `cycle('ABCD') --> A B C D A B C D ...` | | [`repeat()`](#itertools.repeat "itertools.repeat") | elem [,n] | elem, elem, elem, … endlessly or up to n times | `repeat(10, 3) --> 10 10 10` | **Iterators terminating on the shortest input sequence:** | Iterator | Arguments | Results | Example | | --- | --- | --- | --- | | [`accumulate()`](#itertools.accumulate "itertools.accumulate") | p [,func] | p0, p0+p1, p0+p1+p2, … | `accumulate([1,2,3,4,5]) --> 1 3 6 10 15` | | [`chain()`](#itertools.chain "itertools.chain") | p, q, … | p0, p1, … plast, q0, q1, … | `chain('ABC', 'DEF') --> A B C D E F` | | [`chain.from_iterable()`](#itertools.chain.from_iterable "itertools.chain.from_iterable") | iterable | p0, p1, … plast, q0, q1, … | `chain.from_iterable(['ABC', 'DEF']) --> A B C D E F` | | [`compress()`](#itertools.compress "itertools.compress") | data, selectors | (d[0] if s[0]), (d[1] if s[1]), … | `compress('ABCDEF', [1,0,1,0,1,1]) --> A C E F` | | [`dropwhile()`](#itertools.dropwhile "itertools.dropwhile") | pred, seq | seq[n], seq[n+1], starting when pred fails | `dropwhile(lambda x: x<5, [1,4,6,4,1]) --> 6 4 1` | | [`filterfalse()`](#itertools.filterfalse "itertools.filterfalse") | pred, seq | elements of seq where pred(elem) is false | `filterfalse(lambda x: x%2, range(10)) --> 0 2 4 6 8` | | [`groupby()`](#itertools.groupby "itertools.groupby") | iterable[, key] | sub-iterators grouped by value of key(v) | | | [`islice()`](#itertools.islice "itertools.islice") | seq, [start,] stop [, step] | elements from seq[start:stop:step] | `islice('ABCDEFG', 2, None) --> C D E F G` | | [`starmap()`](#itertools.starmap "itertools.starmap") | func, seq | func(\*seq[0]), func(\*seq[1]), … | `starmap(pow, [(2,5), (3,2), (10,3)]) --> 32 9 1000` | | [`takewhile()`](#itertools.takewhile "itertools.takewhile") | pred, seq | seq[0], seq[1], until pred fails | `takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4` | | [`tee()`](#itertools.tee "itertools.tee") | it, n | it1, it2, … itn splits one iterator into n | | | [`zip_longest()`](#itertools.zip_longest "itertools.zip_longest") | p, q, … | (p[0], q[0]), (p[1], q[1]), … | `zip_longest('ABCD', 'xy', fillvalue='-') --> Ax By C- D-` | **Combinatoric iterators:** | Iterator | Arguments | Results | | --- | --- | --- | | [`product()`](#itertools.product "itertools.product") | p, q, … [repeat=1] | cartesian product, equivalent to a nested for-loop | | [`permutations()`](#itertools.permutations "itertools.permutations") | p[, r] | r-length tuples, all possible orderings, no repeated elements | | [`combinations()`](#itertools.combinations "itertools.combinations") | p, r | r-length tuples, in sorted order, no repeated elements | | [`combinations_with_replacement()`](#itertools.combinations_with_replacement "itertools.combinations_with_replacement") | p, r | r-length tuples, in sorted order, with repeated elements | | Examples | Results | | --- | --- | | `product('ABCD', repeat=2)` | `AA AB AC AD BA BB BC BD CA CB CC CD DA DB DC DD` | | `permutations('ABCD', 2)` | `AB AC AD BA BC BD CA CB CD DA DB DC` | | 
`combinations('ABCD', 2)` | `AB AC AD BC BD CD` |
| `combinations_with_replacement('ABCD', 2)` | `AA AB AC AD BB BC BD CC CD DD` |

Itertool functions
------------------

The following module functions all construct and return iterators. Some provide streams of infinite length, so they should only be accessed by functions or loops that truncate the stream.

`itertools.accumulate(iterable[, func, *, initial=None])`

Make an iterator that returns accumulated sums, or accumulated results of other binary functions (specified via the optional *func* argument).

If *func* is supplied, it should be a function of two arguments. Elements of the input *iterable* may be any type that can be accepted as arguments to *func*. (For example, with the default operation of addition, elements may be any addable type including [`Decimal`](decimal#decimal.Decimal "decimal.Decimal") or [`Fraction`](fractions#fractions.Fraction "fractions.Fraction").)

Usually, the number of elements output matches the input iterable. However, if the keyword argument *initial* is provided, the accumulation leads off with the *initial* value so that the output has one more element than the input iterable.

Roughly equivalent to:

```
def accumulate(iterable, func=operator.add, *, initial=None):
    'Return running totals'
    # accumulate([1,2,3,4,5]) --> 1 3 6 10 15
    # accumulate([1,2,3,4,5], initial=100) --> 100 101 103 106 110 115
    # accumulate([1,2,3,4,5], operator.mul) --> 1 2 6 24 120
    it = iter(iterable)
    total = initial
    if initial is None:
        try:
            total = next(it)
        except StopIteration:
            return
    yield total
    for element in it:
        total = func(total, element)
        yield total
```

There are a number of uses for the *func* argument. It can be set to [`min()`](functions#min "min") for a running minimum, [`max()`](functions#max "max") for a running maximum, or [`operator.mul()`](operator#operator.mul "operator.mul") for a running product. Amortization tables can be built by accumulating interest and applying payments. First-order [recurrence relations](https://en.wikipedia.org/wiki/Recurrence_relation) can be modeled by supplying the initial value in the iterable and using only the accumulated total in the *func* argument:

```
>>> data = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
>>> list(accumulate(data, operator.mul))     # running product
[3, 12, 72, 144, 144, 1296, 0, 0, 0, 0]
>>> list(accumulate(data, max))              # running maximum
[3, 4, 6, 6, 6, 9, 9, 9, 9, 9]

# Amortize a 5% loan of 1000 with 4 annual payments of 90
>>> cashflows = [1000, -90, -90, -90, -90]
>>> list(accumulate(cashflows, lambda bal, pmt: bal*1.05 + pmt))
[1000, 960.0, 918.0, 873.9000000000001, 827.5950000000001]

# Chaotic recurrence relation https://en.wikipedia.org/wiki/Logistic_map
>>> logistic_map = lambda x, _:  r * x * (1 - x)
>>> r = 3.8
>>> x0 = 0.4
>>> inputs = repeat(x0, 36)     # only the initial value is used
>>> [format(x, '.2f') for x in accumulate(inputs, logistic_map)]
['0.40', '0.91', '0.30', '0.81', '0.60', '0.92', '0.29', '0.79', '0.63',
 '0.88', '0.39', '0.90', '0.33', '0.84', '0.52', '0.95', '0.18', '0.57',
 '0.93', '0.25', '0.71', '0.79', '0.63', '0.88', '0.39', '0.91', '0.32',
 '0.83', '0.54', '0.95', '0.20', '0.60', '0.91', '0.30', '0.80', '0.60']
```

See [`functools.reduce()`](functools#functools.reduce "functools.reduce") for a similar function that returns only the final accumulated value.

New in version 3.2.

Changed in version 3.3: Added the optional *func* parameter.

Changed in version 3.8: Added the optional *initial* parameter.
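To make the contrast with [`functools.reduce()`](functools#functools.reduce "functools.reduce") concrete, here is a small interactive sketch: [`accumulate()`](#itertools.accumulate "itertools.accumulate") yields every running total, while `reduce()` returns only the last one:

```
>>> from functools import reduce
>>> from itertools import accumulate
>>> import operator
>>> list(accumulate([1, 2, 4, 8], operator.add))   # every running total
[1, 3, 7, 15]
>>> reduce(operator.add, [1, 2, 4, 8])             # only the final total
15
```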
`itertools.chain(*iterables)` Make an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted. Used for treating consecutive sequences as a single sequence. Roughly equivalent to: ``` def chain(*iterables): # chain('ABC', 'DEF') --> A B C D E F for it in iterables: for element in it: yield element ``` `classmethod chain.from_iterable(iterable)` Alternate constructor for [`chain()`](#itertools.chain "itertools.chain"). Gets chained inputs from a single iterable argument that is evaluated lazily. Roughly equivalent to: ``` def from_iterable(iterables): # chain.from_iterable(['ABC', 'DEF']) --> A B C D E F for it in iterables: for element in it: yield element ``` `itertools.combinations(iterable, r)` Return *r* length subsequences of elements from the input *iterable*. The combination tuples are emitted in lexicographic ordering according to the order of the input *iterable*. So, if the input *iterable* is sorted, the combination tuples will be produced in sorted order. Elements are treated as unique based on their position, not on their value. So if the input elements are unique, there will be no repeat values in each combination. Roughly equivalent to: ``` def combinations(iterable, r): # combinations('ABCD', 2) --> AB AC AD BC BD CD # combinations(range(4), 3) --> 012 013 023 123 pool = tuple(iterable) n = len(pool) if r > n: return indices = list(range(r)) yield tuple(pool[i] for i in indices) while True: for i in reversed(range(r)): if indices[i] != i + n - r: break else: return indices[i] += 1 for j in range(i+1, r): indices[j] = indices[j-1] + 1 yield tuple(pool[i] for i in indices) ``` The code for [`combinations()`](#itertools.combinations "itertools.combinations") can be also expressed as a subsequence of [`permutations()`](#itertools.permutations "itertools.permutations") after filtering entries where the elements are not in sorted order (according to their position in the input pool): ``` def combinations(iterable, r): pool = tuple(iterable) n = len(pool) for indices in permutations(range(n), r): if sorted(indices) == list(indices): yield tuple(pool[i] for i in indices) ``` The number of items returned is `n! / r! / (n-r)!` when `0 <= r <= n` or zero when `r > n`. `itertools.combinations_with_replacement(iterable, r)` Return *r* length subsequences of elements from the input *iterable* allowing individual elements to be repeated more than once. The combination tuples are emitted in lexicographic ordering according to the order of the input *iterable*. So, if the input *iterable* is sorted, the combination tuples will be produced in sorted order. Elements are treated as unique based on their position, not on their value. So if the input elements are unique, the generated combinations will also be unique. 
Roughly equivalent to: ``` def combinations_with_replacement(iterable, r): # combinations_with_replacement('ABC', 2) --> AA AB AC BB BC CC pool = tuple(iterable) n = len(pool) if not n and r: return indices = [0] * r yield tuple(pool[i] for i in indices) while True: for i in reversed(range(r)): if indices[i] != n - 1: break else: return indices[i:] = [indices[i] + 1] * (r - i) yield tuple(pool[i] for i in indices) ``` The code for [`combinations_with_replacement()`](#itertools.combinations_with_replacement "itertools.combinations_with_replacement") can be also expressed as a subsequence of [`product()`](#itertools.product "itertools.product") after filtering entries where the elements are not in sorted order (according to their position in the input pool): ``` def combinations_with_replacement(iterable, r): pool = tuple(iterable) n = len(pool) for indices in product(range(n), repeat=r): if sorted(indices) == list(indices): yield tuple(pool[i] for i in indices) ``` The number of items returned is `(n+r-1)! / r! / (n-1)!` when `n > 0`. New in version 3.1. `itertools.compress(data, selectors)` Make an iterator that filters elements from *data* returning only those that have a corresponding element in *selectors* that evaluates to `True`. Stops when either the *data* or *selectors* iterables has been exhausted. Roughly equivalent to: ``` def compress(data, selectors): # compress('ABCDEF', [1,0,1,0,1,1]) --> A C E F return (d for d, s in zip(data, selectors) if s) ``` New in version 3.1. `itertools.count(start=0, step=1)` Make an iterator that returns evenly spaced values starting with number *start*. Often used as an argument to [`map()`](functions#map "map") to generate consecutive data points. Also, used with [`zip()`](functions#zip "zip") to add sequence numbers. Roughly equivalent to: ``` def count(start=0, step=1): # count(10) --> 10 11 12 13 14 ... # count(2.5, 0.5) -> 2.5 3.0 3.5 ... n = start while True: yield n n += step ``` When counting with floating point numbers, better accuracy can sometimes be achieved by substituting multiplicative code such as: `(start + step * i for i in count())`. Changed in version 3.1: Added *step* argument and allowed non-integer arguments. `itertools.cycle(iterable)` Make an iterator returning elements from the iterable and saving a copy of each. When the iterable is exhausted, return elements from the saved copy. Repeats indefinitely. Roughly equivalent to: ``` def cycle(iterable): # cycle('ABCD') --> A B C D A B C D A B C D ... saved = [] for element in iterable: yield element saved.append(element) while saved: for element in saved: yield element ``` Note, this member of the toolkit may require significant auxiliary storage (depending on the length of the iterable). `itertools.dropwhile(predicate, iterable)` Make an iterator that drops elements from the iterable as long as the predicate is true; afterwards, returns every element. Note, the iterator does not produce *any* output until the predicate first becomes false, so it may have a lengthy start-up time. Roughly equivalent to: ``` def dropwhile(predicate, iterable): # dropwhile(lambda x: x<5, [1,4,6,4,1]) --> 6 4 1 iterable = iter(iterable) for x in iterable: if not predicate(x): yield x break for x in iterable: yield x ``` `itertools.filterfalse(predicate, iterable)` Make an iterator that filters elements from iterable returning only those for which the predicate is `False`. If *predicate* is `None`, return the items that are false. 
Roughly equivalent to: ``` def filterfalse(predicate, iterable): # filterfalse(lambda x: x%2, range(10)) --> 0 2 4 6 8 if predicate is None: predicate = bool for x in iterable: if not predicate(x): yield x ``` `itertools.groupby(iterable, key=None)` Make an iterator that returns consecutive keys and groups from the *iterable*. The *key* is a function computing a key value for each element. If not specified or is `None`, *key* defaults to an identity function and returns the element unchanged. Generally, the iterable needs to already be sorted on the same key function. The operation of [`groupby()`](#itertools.groupby "itertools.groupby") is similar to the `uniq` filter in Unix. It generates a break or new group every time the value of the key function changes (which is why it is usually necessary to have sorted the data using the same key function). That behavior differs from SQL’s GROUP BY which aggregates common elements regardless of their input order. The returned group is itself an iterator that shares the underlying iterable with [`groupby()`](#itertools.groupby "itertools.groupby"). Because the source is shared, when the [`groupby()`](#itertools.groupby "itertools.groupby") object is advanced, the previous group is no longer visible. So, if that data is needed later, it should be stored as a list: ``` groups = [] uniquekeys = [] data = sorted(data, key=keyfunc) for k, g in groupby(data, keyfunc): groups.append(list(g)) # Store group iterator as a list uniquekeys.append(k) ``` [`groupby()`](#itertools.groupby "itertools.groupby") is roughly equivalent to: ``` class groupby: # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D def __init__(self, iterable, key=None): if key is None: key = lambda x: x self.keyfunc = key self.it = iter(iterable) self.tgtkey = self.currkey = self.currvalue = object() def __iter__(self): return self def __next__(self): self.id = object() while self.currkey == self.tgtkey: self.currvalue = next(self.it) # Exit on StopIteration self.currkey = self.keyfunc(self.currvalue) self.tgtkey = self.currkey return (self.currkey, self._grouper(self.tgtkey, self.id)) def _grouper(self, tgtkey, id): while self.id is id and self.currkey == tgtkey: yield self.currvalue try: self.currvalue = next(self.it) except StopIteration: return self.currkey = self.keyfunc(self.currvalue) ``` `itertools.islice(iterable, stop)` `itertools.islice(iterable, start, stop[, step])` Make an iterator that returns selected elements from the iterable. If *start* is non-zero, then elements from the iterable are skipped until start is reached. Afterward, elements are returned consecutively unless *step* is set higher than one which results in items being skipped. If *stop* is `None`, then iteration continues until the iterator is exhausted, if at all; otherwise, it stops at the specified position. Unlike regular slicing, [`islice()`](#itertools.islice "itertools.islice") does not support negative values for *start*, *stop*, or *step*. Can be used to extract related fields from data where the internal structure has been flattened (for example, a multi-line report may list a name field on every third line). 
Roughly equivalent to: ``` def islice(iterable, *args): # islice('ABCDEFG', 2) --> A B # islice('ABCDEFG', 2, 4) --> C D # islice('ABCDEFG', 2, None) --> C D E F G # islice('ABCDEFG', 0, None, 2) --> A C E G s = slice(*args) start, stop, step = s.start or 0, s.stop or sys.maxsize, s.step or 1 it = iter(range(start, stop, step)) try: nexti = next(it) except StopIteration: # Consume *iterable* up to the *start* position. for i, element in zip(range(start), iterable): pass return try: for i, element in enumerate(iterable): if i == nexti: yield element nexti = next(it) except StopIteration: # Consume to *stop*. for i, element in zip(range(i + 1, stop), iterable): pass ``` If *start* is `None`, then iteration starts at zero. If *step* is `None`, then the step defaults to one. `itertools.permutations(iterable, r=None)` Return successive *r* length permutations of elements in the *iterable*. If *r* is not specified or is `None`, then *r* defaults to the length of the *iterable* and all possible full-length permutations are generated. The permutation tuples are emitted in lexicographic ordering according to the order of the input *iterable*. So, if the input *iterable* is sorted, the combination tuples will be produced in sorted order. Elements are treated as unique based on their position, not on their value. So if the input elements are unique, there will be no repeat values in each permutation. Roughly equivalent to: ``` def permutations(iterable, r=None): # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC # permutations(range(3)) --> 012 021 102 120 201 210 pool = tuple(iterable) n = len(pool) r = n if r is None else r if r > n: return indices = list(range(n)) cycles = list(range(n, n-r, -1)) yield tuple(pool[i] for i in indices[:r]) while n: for i in reversed(range(r)): cycles[i] -= 1 if cycles[i] == 0: indices[i:] = indices[i+1:] + indices[i:i+1] cycles[i] = n - i else: j = cycles[i] indices[i], indices[-j] = indices[-j], indices[i] yield tuple(pool[i] for i in indices[:r]) break else: return ``` The code for [`permutations()`](#itertools.permutations "itertools.permutations") can be also expressed as a subsequence of [`product()`](#itertools.product "itertools.product"), filtered to exclude entries with repeated elements (those from the same position in the input pool): ``` def permutations(iterable, r=None): pool = tuple(iterable) n = len(pool) r = n if r is None else r for indices in product(range(n), repeat=r): if len(set(indices)) == r: yield tuple(pool[i] for i in indices) ``` The number of items returned is `n! / (n-r)!` when `0 <= r <= n` or zero when `r > n`. `itertools.product(*iterables, repeat=1)` Cartesian product of input iterables. Roughly equivalent to nested for-loops in a generator expression. For example, `product(A, B)` returns the same as `((x,y) for x in A for y in B)`. The nested loops cycle like an odometer with the rightmost element advancing on every iteration. This pattern creates a lexicographic ordering so that if the input’s iterables are sorted, the product tuples are emitted in sorted order. To compute the product of an iterable with itself, specify the number of repetitions with the optional *repeat* keyword argument. For example, `product(A, repeat=4)` means the same as `product(A, A, A, A)`. 
This function is roughly equivalent to the following code, except that the actual implementation does not build up intermediate results in memory: ``` def product(*args, repeat=1): # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111 pools = [tuple(pool) for pool in args] * repeat result = [[]] for pool in pools: result = [x+[y] for x in result for y in pool] for prod in result: yield tuple(prod) ``` Before [`product()`](#itertools.product "itertools.product") runs, it completely consumes the input iterables, keeping pools of values in memory to generate the products. Accordingly, it is only useful with finite inputs. `itertools.repeat(object[, times])` Make an iterator that returns *object* over and over again. Runs indefinitely unless the *times* argument is specified. Used as argument to [`map()`](functions#map "map") for invariant parameters to the called function. Also used with [`zip()`](functions#zip "zip") to create an invariant part of a tuple record. Roughly equivalent to: ``` def repeat(object, times=None): # repeat(10, 3) --> 10 10 10 if times is None: while True: yield object else: for i in range(times): yield object ``` A common use for *repeat* is to supply a stream of constant values to *map* or *zip*: ``` >>> list(map(pow, range(10), repeat(2))) [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] ``` `itertools.starmap(function, iterable)` Make an iterator that computes the function using arguments obtained from the iterable. Used instead of [`map()`](functions#map "map") when argument parameters are already grouped in tuples from a single iterable (the data has been “pre-zipped”). The difference between [`map()`](functions#map "map") and [`starmap()`](#itertools.starmap "itertools.starmap") parallels the distinction between `function(a,b)` and `function(*c)`. Roughly equivalent to: ``` def starmap(function, iterable): # starmap(pow, [(2,5), (3,2), (10,3)]) --> 32 9 1000 for args in iterable: yield function(*args) ``` `itertools.takewhile(predicate, iterable)` Make an iterator that returns elements from the iterable as long as the predicate is true. Roughly equivalent to: ``` def takewhile(predicate, iterable): # takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4 for x in iterable: if predicate(x): yield x else: break ``` `itertools.tee(iterable, n=2)` Return *n* independent iterators from a single iterable. The following Python code helps explain what *tee* does (although the actual implementation is more complex and uses only a single underlying FIFO queue). Roughly equivalent to: ``` def tee(iterable, n=2): it = iter(iterable) deques = [collections.deque() for i in range(n)] def gen(mydeque): while True: if not mydeque: # when the local deque is empty try: newval = next(it) # fetch a new value and except StopIteration: return for d in deques: # load it to all the deques d.append(newval) yield mydeque.popleft() return tuple(gen(d) for d in deques) ``` Once [`tee()`](#itertools.tee "itertools.tee") has made a split, the original *iterable* should not be used anywhere else; otherwise, the *iterable* could get advanced without the tee objects being informed. `tee` iterators are not threadsafe. A [`RuntimeError`](exceptions#RuntimeError "RuntimeError") may be raised when using simultaneously iterators returned by the same [`tee()`](#itertools.tee "itertools.tee") call, even if the original *iterable* is threadsafe. This itertool may require significant auxiliary storage (depending on how much temporary data needs to be stored). 
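As a small interactive sketch of this buffering behaviour (the values are illustrative), advancing one of the returned iterators forces values to be buffered for the others:

```
>>> from itertools import tee
>>> a, b = tee([1, 2, 3])
>>> next(a), next(a)   # advancing a buffers the values b has not yet seen
(1, 2)
>>> list(b)            # b still sees the full stream
[1, 2, 3]
```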
In general, if one iterator uses most or all of the data before another iterator starts, it is faster to use [`list()`](stdtypes#list "list") instead of [`tee()`](#itertools.tee "itertools.tee"). `itertools.zip_longest(*iterables, fillvalue=None)` Make an iterator that aggregates elements from each of the iterables. If the iterables are of uneven length, missing values are filled-in with *fillvalue*. Iteration continues until the longest iterable is exhausted. Roughly equivalent to: ``` def zip_longest(*args, fillvalue=None): # zip_longest('ABCD', 'xy', fillvalue='-') --> Ax By C- D- iterators = [iter(it) for it in args] num_active = len(iterators) if not num_active: return while True: values = [] for i, it in enumerate(iterators): try: value = next(it) except StopIteration: num_active -= 1 if not num_active: return iterators[i] = repeat(fillvalue) value = fillvalue values.append(value) yield tuple(values) ``` If one of the iterables is potentially infinite, then the [`zip_longest()`](#itertools.zip_longest "itertools.zip_longest") function should be wrapped with something that limits the number of calls (for example [`islice()`](#itertools.islice "itertools.islice") or [`takewhile()`](#itertools.takewhile "itertools.takewhile")). If not specified, *fillvalue* defaults to `None`. Itertools Recipes ----------------- This section shows recipes for creating an extended toolset using the existing itertools as building blocks. Substantially all of these recipes and many, many others can be installed from the [more-itertools project](https://pypi.org/project/more-itertools/) found on the Python Package Index: ``` pip install more-itertools ``` The extended tools offer the same high performance as the underlying toolset. The superior memory performance is kept by processing elements one at a time rather than bringing the whole iterable into memory all at once. Code volume is kept small by linking the tools together in a functional style which helps eliminate temporary variables. High speed is retained by preferring “vectorized” building blocks over the use of for-loops and [generator](../glossary#term-generator)s which incur interpreter overhead. ``` def take(n, iterable): "Return first n items of the iterable as a list" return list(islice(iterable, n)) def prepend(value, iterator): "Prepend a single value in front of an iterator" # prepend(1, [2, 3, 4]) -> 1 2 3 4 return chain([value], iterator) def tabulate(function, start=0): "Return function(0), function(1), ..." return map(function, count(start)) def tail(n, iterable): "Return an iterator over the last n items" # tail(3, 'ABCDEFG') --> E F G return iter(collections.deque(iterable, maxlen=n)) def consume(iterator, n=None): "Advance the iterator n-steps ahead. If n is None, consume entirely." # Use functions that consume iterators at C speed. if n is None: # feed the entire iterator into a zero-length deque collections.deque(iterator, maxlen=0) else: # advance to the empty slice starting at position n next(islice(iterator, n, n), None) def nth(iterable, n, default=None): "Returns the nth item or a default value" return next(islice(iterable, n, None), default) def all_equal(iterable): "Returns True if all the elements are equal to each other" g = groupby(iterable) return next(g, True) and not next(g, False) def quantify(iterable, pred=bool): "Count how many times the predicate is true" return sum(map(pred, iterable)) def pad_none(iterable): """Returns the sequence elements and then returns None indefinitely. 
    Useful for emulating the behavior of the built-in map() function.
    """
    return chain(iterable, repeat(None))

def ncycles(iterable, n):
    "Returns the sequence elements n times"
    return chain.from_iterable(repeat(tuple(iterable), n))

def dotproduct(vec1, vec2):
    return sum(map(operator.mul, vec1, vec2))

def convolve(signal, kernel):
    # See: https://betterexplained.com/articles/intuitive-convolution/
    # convolve(data, [0.25, 0.25, 0.25, 0.25]) --> Moving average (blur)
    # convolve(data, [1, -1]) --> 1st finite difference (1st derivative)
    # convolve(data, [1, -2, 1]) --> 2nd finite difference (2nd derivative)
    kernel = tuple(kernel)[::-1]
    n = len(kernel)
    window = collections.deque([0], maxlen=n) * n
    for x in chain(signal, repeat(0, n-1)):
        window.append(x)
        yield sum(map(operator.mul, kernel, window))

def flatten(list_of_lists):
    "Flatten one level of nesting"
    return chain.from_iterable(list_of_lists)

def repeatfunc(func, times=None, *args):
    """Repeat calls to func with specified arguments.

    Example:  repeatfunc(random.random)
    """
    if times is None:
        return starmap(func, repeat(args))
    return starmap(func, repeat(args, times))

def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

def roundrobin(*iterables):
    "roundrobin('ABC', 'D', 'EF') --> A D E B F C"
    # Recipe credited to George Sakkis
    num_active = len(iterables)
    nexts = cycle(iter(it).__next__ for it in iterables)
    while num_active:
        try:
            for next in nexts:
                yield next()
        except StopIteration:
            # Remove the iterator we just exhausted from the cycle.
            num_active -= 1
            nexts = cycle(islice(nexts, num_active))

def partition(pred, iterable):
    "Use a predicate to partition entries into false entries and true entries"
    # partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9
    t1, t2 = tee(iterable)
    return filterfalse(pred, t1), filter(pred, t2)

def powerset(iterable):
    "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element

def unique_justseen(iterable, key=None):
    "List unique elements, preserving order. Remember only the element just seen."
    # unique_justseen('AAAABBBCCDAABBB') --> A B C D A B
    # unique_justseen('ABBCcAD', str.lower) --> A B C A D
    return map(next, map(operator.itemgetter(1), groupby(iterable, key)))

def iter_except(func, exception, first=None):
    """ Call a function repeatedly until an exception is raised.

    Converts a call-until-exception interface to an iterator interface.
    Like builtins.iter(func, sentinel) but uses an exception instead
    of a sentinel to end the loop.
Examples: iter_except(functools.partial(heappop, h), IndexError) # priority queue iterator iter_except(d.popitem, KeyError) # non-blocking dict iterator iter_except(d.popleft, IndexError) # non-blocking deque iterator iter_except(q.get_nowait, Queue.Empty) # loop over a producer Queue iter_except(s.pop, KeyError) # non-blocking set iterator """ try: if first is not None: yield first() # For database APIs needing an initial cast to db.first() while True: yield func() except exception: pass def first_true(iterable, default=False, pred=None): """Returns the first true value in the iterable. If no true value is found, returns *default* If *pred* is not None, returns the first item for which pred(item) is true. """ # first_true([a,b,c], x) --> a or b or c or x # first_true([a,b], x, f) --> a if f(a) else b if f(b) else x return next(filter(pred, iterable), default) def random_product(*args, repeat=1): "Random selection from itertools.product(*args, **kwds)" pools = [tuple(pool) for pool in args] * repeat return tuple(map(random.choice, pools)) def random_permutation(iterable, r=None): "Random selection from itertools.permutations(iterable, r)" pool = tuple(iterable) r = len(pool) if r is None else r return tuple(random.sample(pool, r)) def random_combination(iterable, r): "Random selection from itertools.combinations(iterable, r)" pool = tuple(iterable) n = len(pool) indices = sorted(random.sample(range(n), r)) return tuple(pool[i] for i in indices) def random_combination_with_replacement(iterable, r): "Random selection from itertools.combinations_with_replacement(iterable, r)" pool = tuple(iterable) n = len(pool) indices = sorted(random.choices(range(n), k=r)) return tuple(pool[i] for i in indices) def nth_combination(iterable, r, index): "Equivalent to list(combinations(iterable, r))[index]" pool = tuple(iterable) n = len(pool) if r < 0 or r > n: raise ValueError c = 1 k = min(r, n-r) for i in range(1, k+1): c = c * (n - k + i) // i if index < 0: index += c if index < 0 or index >= c: raise IndexError result = [] while r: c, n, r = c*r//n, n-1, r-1 while index >= c: index -= c c, n = c*(n-r)//n, n-1 result.append(pool[-1-n]) return tuple(result) ```
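Assuming the recipe definitions above have been executed, along with the imports they rely on (`collections`, `operator`, `random` and the `itertools` names), a few of them in action:

```
>>> take(3, count(10))
[10, 11, 12]
>>> list(pairwise('ABCD'))
[('A', 'B'), ('B', 'C'), ('C', 'D')]
>>> list(grouper('ABCDEFG', 3, 'x'))
[('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
>>> list(roundrobin('ABC', 'D', 'EF'))
['A', 'D', 'E', 'B', 'F', 'C']
```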
python logging.config — Logging configuration logging.config — Logging configuration ====================================== **Source code:** [Lib/logging/config.py](https://github.com/python/cpython/tree/3.9/Lib/logging/config.py) This section describes the API for configuring the logging module. Configuration functions ----------------------- The following functions configure the logging module. They are located in the [`logging.config`](#module-logging.config "logging.config: Configuration of the logging module.") module. Their use is optional — you can configure the logging module using these functions or by making calls to the main API (defined in [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") itself) and defining handlers which are declared either in [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") or [`logging.handlers`](logging.handlers#module-logging.handlers "logging.handlers: Handlers for the logging module."). `logging.config.dictConfig(config)` Takes the logging configuration from a dictionary. The contents of this dictionary are described in [Configuration dictionary schema](#logging-config-dictschema) below. If an error is encountered during configuration, this function will raise a [`ValueError`](exceptions#ValueError "ValueError"), [`TypeError`](exceptions#TypeError "TypeError"), [`AttributeError`](exceptions#AttributeError "AttributeError") or [`ImportError`](exceptions#ImportError "ImportError") with a suitably descriptive message. The following is a (possibly incomplete) list of conditions which will raise an error: * A `level` which is not a string or which is a string not corresponding to an actual logging level. * A `propagate` value which is not a boolean. * An id which does not have a corresponding destination. * A non-existent handler id found during an incremental call. * An invalid logger name. * Inability to resolve to an internal or external object. Parsing is performed by the `DictConfigurator` class, whose constructor is passed the dictionary used for configuration, and has a `configure()` method. The [`logging.config`](#module-logging.config "logging.config: Configuration of the logging module.") module has a callable attribute `dictConfigClass` which is initially set to `DictConfigurator`. You can replace the value of `dictConfigClass` with a suitable implementation of your own. [`dictConfig()`](#logging.config.dictConfig "logging.config.dictConfig") calls `dictConfigClass` passing the specified dictionary, and then calls the `configure()` method on the returned object to put the configuration into effect: ``` def dictConfig(config): dictConfigClass(config).configure() ``` For example, a subclass of `DictConfigurator` could call `DictConfigurator.__init__()` in its own [`__init__()`](../reference/datamodel#object.__init__ "object.__init__"), then set up custom prefixes which would be usable in the subsequent `configure()` call. `dictConfigClass` would be bound to this new subclass, and then [`dictConfig()`](#logging.config.dictConfig "logging.config.dictConfig") could be called exactly as in the default, uncustomized state. New in version 3.2. `logging.config.fileConfig(fname, defaults=None, disable_existing_loggers=True)` Reads the logging configuration from a [`configparser`](configparser#module-configparser "configparser: Configuration file parser.")-format file. 
The format of the file should be as described in [Configuration file format](#logging-config-fileformat). This function can be called several times from an application, allowing an end user to select from various pre-canned configurations (if the developer provides a mechanism to present the choices and load the chosen configuration).

Parameters

* **fname** – A filename, or a file-like object, or an instance derived from [`RawConfigParser`](configparser#configparser.RawConfigParser "configparser.RawConfigParser"). If a `RawConfigParser`-derived instance is passed, it is used as is. Otherwise, a `ConfigParser` is instantiated, and the configuration read by it from the object passed in `fname`. If that has a `readline()` method, it is assumed to be a file-like object and read using [`read_file()`](configparser#configparser.ConfigParser.read_file "configparser.ConfigParser.read_file"); otherwise, it is assumed to be a filename and passed to [`read()`](configparser#configparser.ConfigParser.read "configparser.ConfigParser.read").
* **defaults** – Defaults to be passed to the ConfigParser can be specified in this argument.
* **disable\_existing\_loggers** – If specified as `False`, loggers which exist when this call is made are left enabled. The default is `True` because this enables old behaviour in a backward-compatible way. This behaviour is to disable any existing non-root loggers unless they or their ancestors are explicitly named in the logging configuration.

Changed in version 3.4: An instance of a subclass of [`RawConfigParser`](configparser#configparser.RawConfigParser "configparser.RawConfigParser") is now accepted as a value for `fname`. This facilitates:

* Use of a configuration file where logging configuration is just part of the overall application configuration.
* Use of a configuration read from a file, and then modified by the using application (e.g. based on command-line parameters or other aspects of the runtime environment) before being passed to `fileConfig`.

`logging.config.listen(port=DEFAULT_LOGGING_CONFIG_PORT, verify=None)`

Starts up a socket server on the specified port, and listens for new configurations. If no port is specified, the module’s default `DEFAULT_LOGGING_CONFIG_PORT` is used. Logging configurations will be sent as a file suitable for processing by [`dictConfig()`](#logging.config.dictConfig "logging.config.dictConfig") or [`fileConfig()`](#logging.config.fileConfig "logging.config.fileConfig"). Returns a [`Thread`](threading#threading.Thread "threading.Thread") instance on which you can call [`start()`](threading#threading.Thread.start "threading.Thread.start") to start the server, and which you can [`join()`](threading#threading.Thread.join "threading.Thread.join") when appropriate. To stop the server, call [`stopListening()`](#logging.config.stopListening "logging.config.stopListening").

The `verify` argument, if specified, should be a callable which should verify whether bytes received across the socket are valid and should be processed. This could be done by encrypting and/or signing what is sent across the socket, such that the `verify` callable can perform signature verification and/or decryption. The `verify` callable is called with a single argument - the bytes received across the socket - and should return the bytes to be processed, or `None` to indicate that the bytes should be discarded. The returned bytes could be the same as the passed in bytes (e.g.
when only verification is done), or they could be completely different (perhaps if decryption were performed). To send a configuration to the socket, read in the configuration file and send it to the socket as a sequence of bytes preceded by a four-byte length string packed in binary using `struct.pack('>L', n)`. Note Because portions of the configuration are passed through [`eval()`](functions#eval "eval"), use of this function may open its users to a security risk. While the function only binds to a socket on `localhost`, and so does not accept connections from remote machines, there are scenarios where untrusted code could be run under the account of the process which calls [`listen()`](#logging.config.listen "logging.config.listen"). Specifically, if the process calling [`listen()`](#logging.config.listen "logging.config.listen") runs on a multi-user machine where users cannot trust each other, then a malicious user could arrange to run essentially arbitrary code in a victim user’s process, simply by connecting to the victim’s [`listen()`](#logging.config.listen "logging.config.listen") socket and sending a configuration which runs whatever code the attacker wants to have executed in the victim’s process. This is especially easy to do if the default port is used, but not hard even if a different port is used. To avoid the risk of this happening, use the `verify` argument to [`listen()`](#logging.config.listen "logging.config.listen") to prevent unrecognised configurations from being applied. Changed in version 3.4: The `verify` argument was added. Note If you want to send configurations to the listener which don’t disable existing loggers, you will need to use a JSON format for the configuration, which will use [`dictConfig()`](#logging.config.dictConfig "logging.config.dictConfig") for configuration. This method allows you to specify `disable_existing_loggers` as `False` in the configuration you send. `logging.config.stopListening()` Stops the listening server which was created with a call to [`listen()`](#logging.config.listen "logging.config.listen"). This is typically called before calling `join()` on the return value from [`listen()`](#logging.config.listen "logging.config.listen"). Security considerations ----------------------- The logging configuration functionality tries to offer convenience, and in part this is done by offering the ability to convert text in configuration files into Python objects used in logging configuration - for example, as described in [User-defined objects](#logging-config-dict-userdef). However, these same mechanisms (importing callables from user-defined modules and calling them with parameters from the configuration) could be used to invoke any code you like, and for this reason you should treat configuration files from untrusted sources with *extreme caution* and satisfy yourself that nothing bad can happen if you load them, before actually loading them. Configuration dictionary schema ------------------------------- Describing a logging configuration requires listing the various objects to create and the connections between them; for example, you may create a handler named ‘console’ and then say that the logger named ‘startup’ will send its messages to the ‘console’ handler. These objects aren’t limited to those provided by the [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") module because you might write your own formatter or handler class. 
The parameters to these classes may also need to include external objects such as `sys.stderr`. The syntax for describing these objects and connections is defined in [Object connections](#logging-config-dict-connections) below.

### Dictionary Schema Details

The dictionary passed to [`dictConfig()`](#logging.config.dictConfig "logging.config.dictConfig") must contain the following keys:

* *version* - to be set to an integer value representing the schema version. The only valid value at present is 1, but having this key allows the schema to evolve while still preserving backwards compatibility.

All other keys are optional, but if present they will be interpreted as described below. In all cases below where a ‘configuring dict’ is mentioned, it will be checked for the special `'()'` key to see if a custom instantiation is required. If so, the mechanism described in [User-defined objects](#logging-config-dict-userdef) below is used to create an instance; otherwise, the context is used to determine what to instantiate.

* *formatters* - the corresponding value will be a dict in which each key is a formatter id and each value is a dict describing how to configure the corresponding [`Formatter`](logging#logging.Formatter "logging.Formatter") instance. The configuring dict is searched for keys `format` and `datefmt` (with defaults of `None`) and these are used to construct a [`Formatter`](logging#logging.Formatter "logging.Formatter") instance.

Changed in version 3.8: a `validate` key (with default of `True`) can be added into the `formatters` section of the configuring dict; this is to validate the format.

* *filters* - the corresponding value will be a dict in which each key is a filter id and each value is a dict describing how to configure the corresponding Filter instance. The configuring dict is searched for the key `name` (defaulting to the empty string) and this is used to construct a [`logging.Filter`](logging#logging.Filter "logging.Filter") instance.
* *handlers* - the corresponding value will be a dict in which each key is a handler id and each value is a dict describing how to configure the corresponding Handler instance. The configuring dict is searched for the following keys:

  + `class` (mandatory). This is the fully qualified name of the handler class.
  + `level` (optional). The level of the handler.
  + `formatter` (optional). The id of the formatter for this handler.
  + `filters` (optional). A list of ids of the filters for this handler.

All *other* keys are passed through as keyword arguments to the handler’s constructor. For example, given the snippet:

```
handlers:
  console:
    class : logging.StreamHandler
    formatter: brief
    level   : INFO
    filters: [allow_foo]
    stream  : ext://sys.stdout
  file:
    class : logging.handlers.RotatingFileHandler
    formatter: precise
    filename: logconfig.log
    maxBytes: 1024
    backupCount: 3
```

the handler with id `console` is instantiated as a [`logging.StreamHandler`](logging.handlers#logging.StreamHandler "logging.StreamHandler"), using `sys.stdout` as the underlying stream. The handler with id `file` is instantiated as a [`logging.handlers.RotatingFileHandler`](logging.handlers#logging.handlers.RotatingFileHandler "logging.handlers.RotatingFileHandler") with the keyword arguments `filename='logconfig.log', maxBytes=1024, backupCount=3`.

* *loggers* - the corresponding value will be a dict in which each key is a logger name and each value is a dict describing how to configure the corresponding Logger instance.
The configuring dict is searched for the following keys:

  + `level` (optional). The level of the logger.
  + `propagate` (optional). The propagation setting of the logger.
  + `filters` (optional). A list of ids of the filters for this logger.
  + `handlers` (optional). A list of ids of the handlers for this logger.

The specified loggers will be configured according to the level, propagation, filters and handlers specified.

* *root* - this will be the configuration for the root logger. Processing of the configuration will be as for any logger, except that the `propagate` setting will not be applicable.
* *incremental* - whether the configuration is to be interpreted as incremental to the existing configuration. This value defaults to `False`, which means that the specified configuration replaces the existing configuration with the same semantics as used by the existing [`fileConfig()`](#logging.config.fileConfig "logging.config.fileConfig") API. If the specified value is `True`, the configuration is processed as described in the section on [Incremental Configuration](#logging-config-dict-incremental).
* *disable\_existing\_loggers* - whether any existing non-root loggers are to be disabled. This setting mirrors the parameter of the same name in [`fileConfig()`](#logging.config.fileConfig "logging.config.fileConfig"). If absent, this parameter defaults to `True`. This value is ignored if *incremental* is `True`.

### Incremental Configuration

It is difficult to provide complete flexibility for incremental configuration. For example, because objects such as filters and formatters are anonymous, once a configuration is set up, it is not possible to refer to such anonymous objects when augmenting a configuration.

Furthermore, there is not a compelling case for arbitrarily altering the object graph of loggers, handlers, filters, formatters at run-time, once a configuration is set up; the verbosity of loggers and handlers can be controlled just by setting levels (and, in the case of loggers, propagation flags). Changing the object graph arbitrarily in a safe way is problematic in a multi-threaded environment; while not impossible, the benefits are not worth the complexity it adds to the implementation.

Thus, when the `incremental` key of a configuration dict is present and is `True`, the system will completely ignore any `formatters` and `filters` entries, and process only the `level` settings in the `handlers` entries, and the `level` and `propagate` settings in the `loggers` and `root` entries.

Using a value in the configuration dict lets configurations be sent over the wire as pickled dicts to a socket listener. Thus, the logging verbosity of a long-running application can be altered over time with no need to stop and restart the application.

### Object connections

The schema describes a set of logging objects - loggers, handlers, formatters, filters - which are connected to each other in an object graph. Thus, the schema needs to represent connections between the objects. For example, say that, once configured, a particular logger has attached to it a particular handler. For the purposes of this discussion, we can say that the logger represents the source, and the handler the destination, of a connection between the two. Of course in the configured objects this is represented by the logger holding a reference to the handler.
In the configuration dict, this is done by giving each destination object an id which identifies it unambiguously, and then using the id in the source object’s configuration to indicate that a connection exists between the source and the destination object with that id. So, for example, consider the following YAML snippet: ``` formatters: brief: # configuration for formatter with id 'brief' goes here precise: # configuration for formatter with id 'precise' goes here handlers: h1: #This is an id # configuration of handler with id 'h1' goes here formatter: brief h2: #This is another id # configuration of handler with id 'h2' goes here formatter: precise loggers: foo.bar.baz: # other configuration for logger 'foo.bar.baz' handlers: [h1, h2] ``` (Note: YAML used here because it’s a little more readable than the equivalent Python source form for the dictionary.) The ids for loggers are the logger names which would be used programmatically to obtain a reference to those loggers, e.g. `foo.bar.baz`. The ids for Formatters and Filters can be any string value (such as `brief`, `precise` above) and they are transient, in that they are only meaningful for processing the configuration dictionary and used to determine connections between objects, and are not persisted anywhere when the configuration call is complete. The above snippet indicates that logger named `foo.bar.baz` should have two handlers attached to it, which are described by the handler ids `h1` and `h2`. The formatter for `h1` is that described by id `brief`, and the formatter for `h2` is that described by id `precise`. ### User-defined objects The schema supports user-defined objects for handlers, filters and formatters. (Loggers do not need to have different types for different instances, so there is no support in this configuration schema for user-defined logger classes.) Objects to be configured are described by dictionaries which detail their configuration. In some places, the logging system will be able to infer from the context how an object is to be instantiated, but when a user-defined object is to be instantiated, the system will not know how to do this. In order to provide complete flexibility for user-defined object instantiation, the user needs to provide a ‘factory’ - a callable which is called with a configuration dictionary and which returns the instantiated object. This is signalled by an absolute import path to the factory being made available under the special key `'()'`. Here’s a concrete example: ``` formatters: brief: format: '%(message)s' default: format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s' datefmt: '%Y-%m-%d %H:%M:%S' custom: (): my.package.customFormatterFactory bar: baz spam: 99.9 answer: 42 ``` The above YAML snippet defines three formatters. The first, with id `brief`, is a standard [`logging.Formatter`](logging#logging.Formatter "logging.Formatter") instance with the specified format string. The second, with id `default`, has a longer format and also defines the time format explicitly, and will result in a [`logging.Formatter`](logging#logging.Formatter "logging.Formatter") initialized with those two format strings. 
Shown in Python source form, the `brief` and `default` formatters have configuration sub-dictionaries:

```
{
  'format' : '%(message)s'
}
```

and:

```
{
  'format' : '%(asctime)s %(levelname)-8s %(name)-15s %(message)s',
  'datefmt' : '%Y-%m-%d %H:%M:%S'
}
```

respectively, and as these dictionaries do not contain the special key `'()'`, the instantiation is inferred from the context: as a result, standard [`logging.Formatter`](logging#logging.Formatter "logging.Formatter") instances are created. The configuration sub-dictionary for the third formatter, with id `custom`, is:

```
{
  '()' : 'my.package.customFormatterFactory',
  'bar' : 'baz',
  'spam' : 99.9,
  'answer' : 42
}
```

and this contains the special key `'()'`, which means that user-defined instantiation is wanted. In this case, the specified factory callable will be used. If it is an actual callable it will be used directly - otherwise, if you specify a string (as in the example) the actual callable will be located using normal import mechanisms. The callable will be called with the **remaining** items in the configuration sub-dictionary as keyword arguments. In the above example, the formatter with id `custom` will be assumed to be returned by the call:

```
my.package.customFormatterFactory(bar='baz', spam=99.9, answer=42)
```

The key `'()'` has been used as the special key because it is not a valid keyword parameter name, and so will not clash with the names of the keyword arguments used in the call. The `'()'` also serves as a mnemonic that the corresponding value is a callable.

### Access to external objects

There are times when a configuration needs to refer to objects external to the configuration, for example `sys.stderr`. If the configuration dict is constructed using Python code, this is straightforward, but a problem arises when the configuration is provided via a text file (e.g. JSON, YAML). In a text file, there is no standard way to distinguish `sys.stderr` from the literal string `'sys.stderr'`. To facilitate this distinction, the configuration system looks for certain special prefixes in string values and treats them specially. For example, if the literal string `'ext://sys.stderr'` is provided as a value in the configuration, then the `ext://` will be stripped off and the remainder of the value processed using normal import mechanisms.

The handling of such prefixes is done in a way analogous to protocol handling: there is a generic mechanism to look for prefixes which match the regular expression `^(?P<prefix>[a-z]+)://(?P<suffix>.*)$` whereby, if the `prefix` is recognised, the `suffix` is processed in a prefix-dependent manner and the result of the processing replaces the string value. If the prefix is not recognised, then the string value will be left as-is.

### Access to internal objects

As well as external objects, there is sometimes also a need to refer to objects in the configuration. This will be done implicitly by the configuration system for things that it knows about. For example, the string value `'DEBUG'` for a `level` in a logger or handler will automatically be converted to the value `logging.DEBUG`, and the `handlers`, `filters` and `formatter` entries will take an object id and resolve to the appropriate destination object.

However, a more generic mechanism is needed for user-defined objects which are not known to the [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") module.
For example, consider [`logging.handlers.MemoryHandler`](logging.handlers#logging.handlers.MemoryHandler "logging.handlers.MemoryHandler"), which takes a `target` argument which is another handler to delegate to. Since the system already knows about this class, then in the configuration, the given `target` just needs to be the object id of the relevant target handler, and the system will resolve to the handler from the id. If, however, a user defines a `my.package.MyHandler` which has an `alternate` handler, the configuration system would not know that the `alternate` referred to a handler. To cater for this, a generic resolution system allows the user to specify: ``` handlers: file: # configuration of file handler goes here custom: (): my.package.MyHandler alternate: cfg://handlers.file ``` The literal string `'cfg://handlers.file'` will be resolved in an analogous way to strings with the `ext://` prefix, but looking in the configuration itself rather than the import namespace. The mechanism allows access by dot or by index, in a similar way to that provided by `str.format`. Thus, given the following snippet: ``` handlers: email: class: logging.handlers.SMTPHandler mailhost: localhost fromaddr: my_app@domain.tld toaddrs: - support_team@domain.tld - dev_team@domain.tld subject: Houston, we have a problem. ``` in the configuration, the string `'cfg://handlers'` would resolve to the dict with key `handlers`, the string `'cfg://handlers.email'` would resolve to the dict with key `email` in the `handlers` dict, and so on. The string `'cfg://handlers.email.toaddrs[1]'` would resolve to `'dev_team@domain.tld'` and the string `'cfg://handlers.email.toaddrs[0]'` would resolve to the value `'support_team@domain.tld'`. The `subject` value could be accessed using either `'cfg://handlers.email.subject'` or, equivalently, `'cfg://handlers.email[subject]'`. The latter form only needs to be used if the key contains spaces or non-alphanumeric characters. If an index value consists only of decimal digits, access will be attempted using the corresponding integer value, falling back to the string value if needed. Given a string `cfg://handlers.myhandler.mykey.123`, this will resolve to `config_dict['handlers']['myhandler']['mykey']['123']`. If the string is specified as `cfg://handlers.myhandler.mykey[123]`, the system will attempt to retrieve the value from `config_dict['handlers']['myhandler']['mykey'][123]`, and fall back to `config_dict['handlers']['myhandler']['mykey']['123']` if that fails. ### Import resolution and custom importers Import resolution, by default, uses the builtin [`__import__()`](functions#__import__ "__import__") function to do its importing. You may want to replace this with your own importing mechanism: if so, you can replace the `importer` attribute of the `DictConfigurator` or its superclass, the `BaseConfigurator` class. However, you need to be careful because of the way functions are accessed from classes via descriptors. If you are using a Python callable to do your imports, and you want to define it at class level rather than instance level, you need to wrap it with [`staticmethod()`](functions#staticmethod "staticmethod"). For example: ``` from importlib import import_module from logging.config import BaseConfigurator BaseConfigurator.importer = staticmethod(import_module) ``` You don’t need to wrap with [`staticmethod()`](functions#staticmethod "staticmethod") if you’re setting the import callable on a configurator *instance*.
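Pulling the pieces of the schema described above together, here is a minimal, self-contained sketch of a working `dictConfig()` call. The concrete format strings, the choice of `logging.StreamHandler`, and the `formatter_factory` function are illustrative, not part of the schema; note that because the `'()'` value may be an actual callable, the factory can live in the same script instead of behind an import path:

```
import logging
import logging.config

def formatter_factory(prefix):
    # Hypothetical user-defined factory: dictConfig() calls it with the
    # remaining keys of its sub-dictionary ('prefix' here) as keyword
    # arguments, and uses whatever it returns as the formatter.
    return logging.Formatter(prefix + ' %(levelname)s %(name)s: %(message)s')

LOGGING = {
    'version': 1,
    'formatters': {
        # No '()' key, so a standard logging.Formatter is inferred.
        'brief': {'format': '%(message)s'},
        # '()' may be an actual callable, so no import path is needed here.
        'custom': {'()': formatter_factory, 'prefix': 'demo'},
    },
    'handlers': {
        'h1': {
            'class': 'logging.StreamHandler',
            'formatter': 'custom',
            'stream': 'ext://sys.stdout',   # external object reference
        },
    },
    'loggers': {
        'foo.bar.baz': {'level': 'DEBUG', 'handlers': ['h1']},
    },
}

logging.config.dictConfig(LOGGING)
logging.getLogger('foo.bar.baz').warning('configured via dictConfig')
```

Running this prints `demo WARNING foo.bar.baz: configured via dictConfig` to standard output.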
Configuration file format ------------------------- The configuration file format understood by [`fileConfig()`](#logging.config.fileConfig "logging.config.fileConfig") is based on [`configparser`](configparser#module-configparser "configparser: Configuration file parser.") functionality. The file must contain sections called `[loggers]`, `[handlers]` and `[formatters]` which identify by name the entities of each type which are defined in the file. For each such entity, there is a separate section which identifies how that entity is configured. Thus, for a logger named `log01` in the `[loggers]` section, the relevant configuration details are held in a section `[logger_log01]`. Similarly, a handler called `hand01` in the `[handlers]` section will have its configuration held in a section called `[handler_hand01]`, while a formatter called `form01` in the `[formatters]` section will have its configuration specified in a section called `[formatter_form01]`. The root logger configuration must be specified in a section called `[logger_root]`. Note The [`fileConfig()`](#logging.config.fileConfig "logging.config.fileConfig") API is older than the [`dictConfig()`](#logging.config.dictConfig "logging.config.dictConfig") API and does not provide functionality to cover certain aspects of logging. For example, you cannot configure [`Filter`](logging#logging.Filter "logging.Filter") objects, which provide for filtering of messages beyond simple integer levels, using [`fileConfig()`](#logging.config.fileConfig "logging.config.fileConfig"). If you need to have instances of [`Filter`](logging#logging.Filter "logging.Filter") in your logging configuration, you will need to use [`dictConfig()`](#logging.config.dictConfig "logging.config.dictConfig"). Note that future enhancements to configuration functionality will be added to [`dictConfig()`](#logging.config.dictConfig "logging.config.dictConfig"), so it’s worth considering transitioning to this newer API when it’s convenient to do so. Examples of these sections in the file are given below; a short usage sketch showing how to load such a file appears at the end of this section. ``` [loggers] keys=root,log02,log03,log04,log05,log06,log07 [handlers] keys=hand01,hand02,hand03,hand04,hand05,hand06,hand07,hand08,hand09 [formatters] keys=form01,form02,form03,form04,form05,form06,form07,form08,form09 ``` The root logger must specify a level and a list of handlers. An example of a root logger section is given below. ``` [logger_root] level=NOTSET handlers=hand01 ``` The `level` entry can be one of `DEBUG, INFO, WARNING, ERROR, CRITICAL` or `NOTSET`. For the root logger only, `NOTSET` means that all messages will be logged. Level values are [`eval()`](functions#eval "eval")uated in the context of the `logging` package’s namespace. The `handlers` entry is a comma-separated list of handler names, which must appear in the `[handlers]` section and have corresponding sections in the configuration file. For loggers other than the root logger, some additional information is required. This is illustrated by the following example. ``` [logger_parser] level=DEBUG handlers=hand01 propagate=1 qualname=compiler.parser ``` The `level` and `handlers` entries are interpreted as for the root logger, except that if a non-root logger’s level is specified as `NOTSET`, the system consults loggers higher up the hierarchy to determine the effective level of the logger.
The `propagate` entry is set to 1 to indicate that messages must propagate to handlers higher up the logger hierarchy from this logger, or 0 to indicate that messages are **not** propagated to handlers up the hierarchy. The `qualname` entry is the hierarchical channel name of the logger, that is to say the name used by the application to get the logger. Sections which specify handler configuration are exemplified by the following. ``` [handler_hand01] class=StreamHandler level=NOTSET formatter=form01 args=(sys.stdout,) ``` The `class` entry indicates the handler’s class (as determined by [`eval()`](functions#eval "eval") in the `logging` package’s namespace). The `level` is interpreted as for loggers, and `NOTSET` is taken to mean ‘log everything’. The `formatter` entry indicates the key name of the formatter for this handler. If blank, a default formatter (`logging._defaultFormatter`) is used. If a name is specified, it must appear in the `[formatters]` section and have a corresponding section in the configuration file. The `args` entry, when [`eval()`](functions#eval "eval")uated in the context of the `logging` package’s namespace, is the list of arguments to the constructor for the handler class. Refer to the constructors for the relevant handlers, or to the examples below, to see how typical entries are constructed. If not provided, it defaults to `()`. The optional `kwargs` entry, when [`eval()`](functions#eval "eval")uated in the context of the `logging` package’s namespace, is the keyword argument dict to the constructor for the handler class. If not provided, it defaults to `{}`. ``` [handler_hand02] class=FileHandler level=DEBUG formatter=form02 args=('python.log', 'w') [handler_hand03] class=handlers.SocketHandler level=INFO formatter=form03 args=('localhost', handlers.DEFAULT_TCP_LOGGING_PORT) [handler_hand04] class=handlers.DatagramHandler level=WARN formatter=form04 args=('localhost', handlers.DEFAULT_UDP_LOGGING_PORT) [handler_hand05] class=handlers.SysLogHandler level=ERROR formatter=form05 args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER) [handler_hand06] class=handlers.NTEventLogHandler level=CRITICAL formatter=form06 args=('Python Application', '', 'Application') [handler_hand07] class=handlers.SMTPHandler level=WARN formatter=form07 args=('localhost', 'from@abc', ['user1@abc', 'user2@xyz'], 'Logger Subject') kwargs={'timeout': 10.0} [handler_hand08] class=handlers.MemoryHandler level=NOTSET formatter=form08 target= args=(10, ERROR) [handler_hand09] class=handlers.HTTPHandler level=NOTSET formatter=form09 args=('localhost:9022', '/log', 'GET') kwargs={'secure': True} ``` Sections which specify formatter configuration are typified by the following. ``` [formatter_form01] format=F1 %(asctime)s %(levelname)s %(message)s datefmt= class=logging.Formatter ``` The `format` entry is the overall format string, and the `datefmt` entry is the `strftime()`-compatible date/time format string. If empty, the package substitutes something which is almost equivalent to specifying the date format string `'%Y-%m-%d %H:%M:%S'`. This format also specifies milliseconds, which are appended to the result of using the above format string, with a comma separator. An example time in this format is `2003-01-23 00:29:50,411`. The `class` entry is optional. It indicates the name of the formatter’s class (as a dotted module and class name). This option is useful for instantiating a [`Formatter`](logging#logging.Formatter "logging.Formatter") subclass.
Subclasses of [`Formatter`](logging#logging.Formatter "logging.Formatter") can present exception tracebacks in an expanded or condensed format. Note Due to the use of [`eval()`](functions#eval "eval") as described above, there are potential security risks which result from using [`listen()`](#logging.config.listen "logging.config.listen") to send and receive configurations via sockets. The risks are limited to where multiple users with no mutual trust run code on the same machine; see the [`listen()`](#logging.config.listen "logging.config.listen") documentation for more information. See also `Module` [`logging`](logging#module-logging "logging: Flexible event logging system for applications.") API reference for the logging module. `Module` [`logging.handlers`](logging.handlers#module-logging.handlers "logging.handlers: Handlers for the logging module.") Useful handlers included with the logging module.
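As promised above, here is a minimal usage sketch for this file format; the file name `logging.conf` and the specific section contents are illustrative only:

```
import logging
import logging.config

CONFIG = """\
[loggers]
keys=root

[handlers]
keys=hand01

[formatters]
keys=form01

[logger_root]
level=NOTSET
handlers=hand01

[handler_hand01]
class=StreamHandler
level=NOTSET
formatter=form01
args=(sys.stdout,)

[formatter_form01]
format=%(asctime)s %(levelname)s %(message)s
"""

# Write the configuration out, then load it with fileConfig().
with open('logging.conf', 'w', encoding='utf-8') as f:
    f.write(CONFIG)

logging.config.fileConfig('logging.conf', disable_existing_loggers=False)
logging.getLogger().info('configured from file')
```

Since the root logger and handler are both at `NOTSET`, the final `info()` call is emitted to standard output.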
math — Mathematical functions ============================= This module provides access to the mathematical functions defined by the C standard. These functions cannot be used with complex numbers; use the functions of the same name from the [`cmath`](cmath#module-cmath "cmath: Mathematical functions for complex numbers.") module if you require support for complex numbers. The distinction between functions which support complex numbers and those which don’t is made since most users do not want to learn quite as much mathematics as required to understand complex numbers. Receiving an exception instead of a complex result allows earlier detection of the unexpected complex number used as a parameter, so that the programmer can determine how and why it was generated in the first place. The following functions are provided by this module. Except when explicitly noted otherwise, all return values are floats. Number-theoretic and representation functions --------------------------------------------- `math.ceil(x)` Return the ceiling of *x*, the smallest integer greater than or equal to *x*. If *x* is not a float, delegates to [`x.__ceil__`](../reference/datamodel#object.__ceil__ "object.__ceil__"), which should return an [`Integral`](numbers#numbers.Integral "numbers.Integral") value. `math.comb(n, k)` Return the number of ways to choose *k* items from *n* items without repetition and without order. Evaluates to `n! / (k! * (n - k)!)` when `k <= n` and evaluates to zero when `k > n`. Also called the binomial coefficient because it is equivalent to the coefficient of the k-th term in the polynomial expansion of the expression `(1 + x) ** n`. Raises [`TypeError`](exceptions#TypeError "TypeError") if either of the arguments are not integers. Raises [`ValueError`](exceptions#ValueError "ValueError") if either of the arguments are negative. New in version 3.8. `math.copysign(x, y)` Return a float with the magnitude (absolute value) of *x* but the sign of *y*. On platforms that support signed zeros, `copysign(1.0, -0.0)` returns *-1.0*. `math.fabs(x)` Return the absolute value of *x*. `math.factorial(x)` Return *x* factorial as an integer. Raises [`ValueError`](exceptions#ValueError "ValueError") if *x* is not integral or is negative. Deprecated since version 3.9: Accepting floats with integral values (like `5.0`) is deprecated. `math.floor(x)` Return the floor of *x*, the largest integer less than or equal to *x*. If *x* is not a float, delegates to [`x.__floor__`](../reference/datamodel#object.__floor__ "object.__floor__"), which should return an [`Integral`](numbers#numbers.Integral "numbers.Integral") value. `math.fmod(x, y)` Return `fmod(x, y)`, as defined by the platform C library. Note that the Python expression `x % y` may not return the same result. The intent of the C standard is that `fmod(x, y)` be exactly (mathematically; to infinite precision) equal to `x - n*y` for some integer *n* such that the result has the same sign as *x* and magnitude less than `abs(y)`. Python’s `x % y` returns a result with the sign of *y* instead, and may not be exactly computable for float arguments. For example, `fmod(-1e-100, 1e100)` is `-1e-100`, but the result of Python’s `-1e-100 % 1e100` is `1e100-1e-100`, which cannot be represented exactly as a float, and rounds to the surprising `1e100`. For this reason, function [`fmod()`](#math.fmod "math.fmod") is generally preferred when working with floats, while Python’s `x % y` is preferred when working with integers.
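For instance, reproducing the example above interactively:

```
>>> import math
>>> math.fmod(-1e-100, 1e100)   # result has the sign of x
-1e-100
>>> -1e-100 % 1e100             # Python's % has the sign of y, and rounds
1e+100
```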
`math.frexp(x)` Return the mantissa and exponent of *x* as the pair `(m, e)`. *m* is a float and *e* is an integer such that `x == m * 2**e` exactly. If *x* is zero, returns `(0.0, 0)`, otherwise `0.5 <= abs(m) < 1`. This is used to “pick apart” the internal representation of a float in a portable way. `math.fsum(iterable)` Return an accurate floating point sum of values in the iterable. Avoids loss of precision by tracking multiple intermediate partial sums: ``` >>> sum([.1, .1, .1, .1, .1, .1, .1, .1, .1, .1]) 0.9999999999999999 >>> fsum([.1, .1, .1, .1, .1, .1, .1, .1, .1, .1]) 1.0 ``` The algorithm’s accuracy depends on IEEE-754 arithmetic guarantees and the typical case where the rounding mode is half-even. On some non-Windows builds, the underlying C library uses extended precision addition and may occasionally double-round an intermediate sum causing it to be off in its least significant bit. For further discussion and two alternative approaches, see the [ASPN cookbook recipes for accurate floating point summation](https://code.activestate.com/recipes/393090/). `math.gcd(*integers)` Return the greatest common divisor of the specified integer arguments. If any of the arguments is nonzero, then the returned value is the largest positive integer that is a divisor of all arguments. If all arguments are zero, then the returned value is `0`. `gcd()` without arguments returns `0`. New in version 3.5. Changed in version 3.9: Added support for an arbitrary number of arguments. Formerly, only two arguments were supported. `math.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)` Return `True` if the values *a* and *b* are close to each other and `False` otherwise. Whether or not two values are considered close is determined according to given absolute and relative tolerances. *rel\_tol* is the relative tolerance – it is the maximum allowed difference between *a* and *b*, relative to the larger absolute value of *a* or *b*. For example, to set a tolerance of 5%, pass `rel_tol=0.05`. The default tolerance is `1e-09`, which assures that the two values are the same within about 9 decimal digits. *rel\_tol* must be greater than zero. *abs\_tol* is the minimum absolute tolerance – useful for comparisons near zero. *abs\_tol* must be at least zero. If no errors occur, the result will be: `abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)`. The IEEE 754 special values of `NaN`, `inf`, and `-inf` will be handled according to IEEE rules. Specifically, `NaN` is not considered close to any other value, including `NaN`. `inf` and `-inf` are only considered close to themselves. New in version 3.5. See also [**PEP 485**](https://www.python.org/dev/peps/pep-0485) – A function for testing approximate equality `math.isfinite(x)` Return `True` if *x* is neither an infinity nor a NaN, and `False` otherwise. (Note that `0.0` *is* considered finite.) New in version 3.2. `math.isinf(x)` Return `True` if *x* is a positive or negative infinity, and `False` otherwise. `math.isnan(x)` Return `True` if *x* is a NaN (not a number), and `False` otherwise. `math.isqrt(n)` Return the integer square root of the nonnegative integer *n*. This is the floor of the exact square root of *n*, or equivalently the greatest integer *a* such that *a*² ≤ *n*. For some applications, it may be more convenient to have the least integer *a* such that *n* ≤ *a*², or in other words the ceiling of the exact square root of *n*. For positive *n*, this can be computed using `a = 1 + isqrt(n - 1)`. New in version 3.8. 
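A quick illustration of the floor and ceiling forms, using the recipe above:

```
>>> from math import isqrt
>>> isqrt(17)          # floor of the exact square root of 17
4
>>> 1 + isqrt(17 - 1)  # ceiling of the exact square root of 17
5
```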
`math.lcm(*integers)` Return the least common multiple of the specified integer arguments. If all arguments are nonzero, then the returned value is the smallest positive integer that is a multiple of all arguments. If any of the arguments is zero, then the returned value is `0`. `lcm()` without arguments returns `1`. New in version 3.9. `math.ldexp(x, i)` Return `x * (2**i)`. This is essentially the inverse of function [`frexp()`](#math.frexp "math.frexp"). `math.modf(x)` Return the fractional and integer parts of *x*. Both results carry the sign of *x* and are floats. `math.nextafter(x, y)` Return the next floating-point value after *x* towards *y*. If *x* is equal to *y*, return *y*. Examples: * `math.nextafter(x, math.inf)` goes up: towards positive infinity. * `math.nextafter(x, -math.inf)` goes down: towards minus infinity. * `math.nextafter(x, 0.0)` goes towards zero. * `math.nextafter(x, math.copysign(math.inf, x))` goes away from zero. See also [`math.ulp()`](#math.ulp "math.ulp"). New in version 3.9. `math.perm(n, k=None)` Return the number of ways to choose *k* items from *n* items without repetition and with order. Evaluates to `n! / (n - k)!` when `k <= n` and evaluates to zero when `k > n`. If *k* is not specified or is None, then *k* defaults to *n* and the function returns `n!`. Raises [`TypeError`](exceptions#TypeError "TypeError") if either of the arguments are not integers. Raises [`ValueError`](exceptions#ValueError "ValueError") if either of the arguments are negative. New in version 3.8. `math.prod(iterable, *, start=1)` Calculate the product of all the elements in the input *iterable*. The default *start* value for the product is `1`. When the iterable is empty, return the start value. This function is intended specifically for use with numeric values and may reject non-numeric types. New in version 3.8. `math.remainder(x, y)` Return the IEEE 754-style remainder of *x* with respect to *y*. For finite *x* and finite nonzero *y*, this is the difference `x - n*y`, where `n` is the closest integer to the exact value of the quotient `x / y`. If `x / y` is exactly halfway between two consecutive integers, the nearest *even* integer is used for `n`. The remainder `r = remainder(x, y)` thus always satisfies `abs(r) <= 0.5 * abs(y)`. Special cases follow IEEE 754: in particular, `remainder(x, math.inf)` is *x* for any finite *x*, and `remainder(x, 0)` and `remainder(math.inf, x)` raise [`ValueError`](exceptions#ValueError "ValueError") for any non-NaN *x*. If the result of the remainder operation is zero, that zero will have the same sign as *x*. On platforms using IEEE 754 binary floating-point, the result of this operation is always exactly representable: no rounding error is introduced. New in version 3.7. `math.trunc(x)` Return *x* with the fractional part removed, leaving the integer part. This rounds toward 0: `trunc()` is equivalent to [`floor()`](#math.floor "math.floor") for positive *x*, and equivalent to [`ceil()`](#math.ceil "math.ceil") for negative *x*. If *x* is not a float, delegates to [`x.__trunc__`](../reference/datamodel#object.__trunc__ "object.__trunc__"), which should return an [`Integral`](numbers#numbers.Integral "numbers.Integral") value. `math.ulp(x)` Return the value of the least significant bit of the float *x*: * If *x* is a NaN (not a number), return *x*. * If *x* is negative, return `ulp(-x)`. * If *x* is a positive infinity, return *x*. 
* If *x* is equal to zero, return the smallest positive *denormalized* representable float (smaller than the minimum positive *normalized* float, [`sys.float_info.min`](sys#sys.float_info "sys.float_info")). * If *x* is equal to the largest positive representable float, return the value of the least significant bit of *x*, such that the first float smaller than *x* is `x - ulp(x)`. * Otherwise (*x* is a positive finite number), return the value of the least significant bit of *x*, such that the first float bigger than *x* is `x + ulp(x)`. ULP stands for “Unit in the Last Place”. See also [`math.nextafter()`](#math.nextafter "math.nextafter") and [`sys.float_info.epsilon`](sys#sys.float_info "sys.float_info"). New in version 3.9. Note that [`frexp()`](#math.frexp "math.frexp") and [`modf()`](#math.modf "math.modf") have a different call/return pattern than their C equivalents: they take a single argument and return a pair of values, rather than returning their second return value through an ‘output parameter’ (there is no such thing in Python). For the [`ceil()`](#math.ceil "math.ceil"), [`floor()`](#math.floor "math.floor"), and [`modf()`](#math.modf "math.modf") functions, note that *all* floating-point numbers of sufficiently large magnitude are exact integers. Python floats typically carry no more than 53 bits of precision (the same as the platform C double type), in which case any float *x* with `abs(x) >= 2**52` necessarily has no fractional bits. Power and logarithmic functions ------------------------------- `math.exp(x)` Return *e* raised to the power *x*, where *e* = 2.718281… is the base of natural logarithms. This is usually more accurate than `math.e ** x` or `pow(math.e, x)`. `math.expm1(x)` Return *e* raised to the power *x*, minus 1. Here *e* is the base of natural logarithms. For small floats *x*, the subtraction in `exp(x) - 1` can result in a [significant loss of precision](https://en.wikipedia.org/wiki/Loss_of_significance); the [`expm1()`](#math.expm1 "math.expm1") function provides a way to compute this quantity to full precision: ``` >>> from math import exp, expm1 >>> exp(1e-5) - 1 # gives result accurate to 11 places 1.0000050000069649e-05 >>> expm1(1e-5) # result accurate to full precision 1.0000050000166668e-05 ``` New in version 3.2. `math.log(x[, base])` With one argument, return the natural logarithm of *x* (to base *e*). With two arguments, return the logarithm of *x* to the given *base*, calculated as `log(x)/log(base)`. `math.log1p(x)` Return the natural logarithm of *1+x* (base *e*). The result is calculated in a way which is accurate for *x* near zero. `math.log2(x)` Return the base-2 logarithm of *x*. This is usually more accurate than `log(x, 2)`. New in version 3.3. See also [`int.bit_length()`](stdtypes#int.bit_length "int.bit_length") returns the number of bits necessary to represent an integer in binary, excluding the sign and leading zeros. `math.log10(x)` Return the base-10 logarithm of *x*. This is usually more accurate than `log(x, 10)`. `math.pow(x, y)` Return `x` raised to the power `y`. Exceptional cases follow Annex ‘F’ of the C99 standard as far as possible. In particular, `pow(1.0, x)` and `pow(x, 0.0)` always return `1.0`, even when `x` is a zero or a NaN. If both `x` and `y` are finite, `x` is negative, and `y` is not an integer then `pow(x, y)` is undefined, and raises [`ValueError`](exceptions#ValueError "ValueError"). 
Unlike the built-in `**` operator, [`math.pow()`](#math.pow "math.pow") converts both its arguments to type [`float`](functions#float "float"). Use `**` or the built-in [`pow()`](functions#pow "pow") function for computing exact integer powers. `math.sqrt(x)` Return the square root of *x*. Trigonometric functions ----------------------- `math.acos(x)` Return the arc cosine of *x*, in radians. The result is between `0` and `pi`. `math.asin(x)` Return the arc sine of *x*, in radians. The result is between `-pi/2` and `pi/2`. `math.atan(x)` Return the arc tangent of *x*, in radians. The result is between `-pi/2` and `pi/2`. `math.atan2(y, x)` Return `atan(y / x)`, in radians. The result is between `-pi` and `pi`. The vector in the plane from the origin to point `(x, y)` makes this angle with the positive X axis. The point of [`atan2()`](#math.atan2 "math.atan2") is that the signs of both inputs are known to it, so it can compute the correct quadrant for the angle. For example, `atan(1)` and `atan2(1, 1)` are both `pi/4`, but `atan2(-1, -1)` is `-3*pi/4`. `math.cos(x)` Return the cosine of *x* radians. `math.dist(p, q)` Return the Euclidean distance between two points *p* and *q*, each given as a sequence (or iterable) of coordinates. The two points must have the same dimension. Roughly equivalent to: ``` sqrt(sum((px - qx) ** 2.0 for px, qx in zip(p, q))) ``` New in version 3.8. `math.hypot(*coordinates)` Return the Euclidean norm, `sqrt(sum(x**2 for x in coordinates))`. This is the length of the vector from the origin to the point given by the coordinates. For a two dimensional point `(x, y)`, this is equivalent to computing the hypotenuse of a right triangle using the Pythagorean theorem, `sqrt(x*x + y*y)`. Changed in version 3.8: Added support for n-dimensional points. Formerly, only the two dimensional case was supported. `math.sin(x)` Return the sine of *x* radians. `math.tan(x)` Return the tangent of *x* radians. Angular conversion ------------------ `math.degrees(x)` Convert angle *x* from radians to degrees. `math.radians(x)` Convert angle *x* from degrees to radians. Hyperbolic functions -------------------- [Hyperbolic functions](https://en.wikipedia.org/wiki/Hyperbolic_function) are analogs of trigonometric functions that are based on hyperbolas instead of circles. `math.acosh(x)` Return the inverse hyperbolic cosine of *x*. `math.asinh(x)` Return the inverse hyperbolic sine of *x*. `math.atanh(x)` Return the inverse hyperbolic tangent of *x*. `math.cosh(x)` Return the hyperbolic cosine of *x*. `math.sinh(x)` Return the hyperbolic sine of *x*. `math.tanh(x)` Return the hyperbolic tangent of *x*. Special functions ----------------- `math.erf(x)` Return the [error function](https://en.wikipedia.org/wiki/Error_function) at *x*. The [`erf()`](#math.erf "math.erf") function can be used to compute traditional statistical functions such as the [cumulative standard normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution_function): ``` def phi(x): 'Cumulative distribution function for the standard normal distribution' return (1.0 + erf(x / sqrt(2.0))) / 2.0 ``` New in version 3.2. `math.erfc(x)` Return the complementary error function at *x*. The [complementary error function](https://en.wikipedia.org/wiki/Error_function) is defined as `1.0 - erf(x)`. It is used for large values of *x* where a subtraction from one would cause a [loss of significance](https://en.wikipedia.org/wiki/Loss_of_significance). New in version 3.2. 
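To illustrate `math.dist()` and `math.hypot()` from above, a quick check with a 3-4-5 right triangle:

```
>>> import math
>>> math.dist((0, 0), (3, 4))
5.0
>>> math.hypot(3, 4)
5.0
```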
`math.gamma(x)` Return the [Gamma function](https://en.wikipedia.org/wiki/Gamma_function) at *x*. New in version 3.2. `math.lgamma(x)` Return the natural logarithm of the absolute value of the Gamma function at *x*. New in version 3.2. Constants --------- `math.pi` The mathematical constant *π* = 3.141592…, to available precision. `math.e` The mathematical constant *e* = 2.718281…, to available precision. `math.tau` The mathematical constant *τ* = 6.283185…, to available precision. Tau is a circle constant equal to 2*π*, the ratio of a circle’s circumference to its radius. To learn more about Tau, check out Vi Hart’s video [Pi is (still) Wrong](https://www.youtube.com/watch?v=jG7vhMMXagQ), and start celebrating [Tau day](https://tauday.com/) by eating twice as much pie! New in version 3.6. `math.inf` A floating-point positive infinity. (For negative infinity, use `-math.inf`.) Equivalent to the output of `float('inf')`. New in version 3.5. `math.nan` A floating-point “not a number” (NaN) value. Equivalent to the output of `float('nan')`. Due to the requirements of the [IEEE-754 standard](https://en.wikipedia.org/wiki/IEEE_754), `math.nan` and `float('nan')` are not considered equal to any other numeric value, including themselves. To check whether a number is a NaN, use the [`isnan()`](#math.isnan "math.isnan") function to test for NaNs instead of `is` or `==`. Example: ``` >>> import math >>> math.nan == math.nan False >>> float('nan') == float('nan') False >>> math.isnan(math.nan) True >>> math.isnan(float('nan')) True ``` New in version 3.5. **CPython implementation detail:** The [`math`](#module-math "math: Mathematical functions (sin() etc.).") module consists mostly of thin wrappers around the platform C math library functions. Behavior in exceptional cases follows Annex F of the C99 standard where appropriate. The current implementation will raise [`ValueError`](exceptions#ValueError "ValueError") for invalid operations like `sqrt(-1.0)` or `log(0.0)` (where C99 Annex F recommends signaling invalid operation or divide-by-zero), and [`OverflowError`](exceptions#OverflowError "OverflowError") for results that overflow (for example, `exp(1000.0)`). A NaN will not be returned from any of the functions above unless one or more of the input arguments was a NaN; in that case, most functions will return a NaN, but (again following C99 Annex F) there are some exceptions to this rule, for example `pow(float('nan'), 0.0)` or `hypot(float('nan'), float('inf'))`. Note that Python makes no effort to distinguish signaling NaNs from quiet NaNs, and behavior for signaling NaNs remains unspecified. Typical behavior is to treat all NaNs as though they were quiet. See also `Module` [`cmath`](cmath#module-cmath "cmath: Mathematical functions for complex numbers.") Complex number versions of many of these functions.
Built-in Types ============== The following sections describe the standard types that are built into the interpreter. The principal built-in types are numerics, sequences, mappings, classes, instances and exceptions. Some collection classes are mutable. The methods that add, subtract, or rearrange their members in place, and don’t return a specific item, never return the collection instance itself but `None`. Some operations are supported by several object types; in particular, practically all objects can be compared for equality, tested for truth value, and converted to a string (with the [`repr()`](functions#repr "repr") function or the slightly different [`str()`](#str "str") function). The latter function is implicitly used when an object is written by the [`print()`](functions#print "print") function. Truth Value Testing ------------------- Any object can be tested for truth value, for use in an [`if`](../reference/compound_stmts#if) or [`while`](../reference/compound_stmts#while) condition or as operand of the Boolean operations below. By default, an object is considered true unless its class defines either a [`__bool__()`](../reference/datamodel#object.__bool__ "object.__bool__") method that returns `False` or a [`__len__()`](../reference/datamodel#object.__len__ "object.__len__") method that returns zero, when called with the object. [1](#id12) Here are most of the built-in objects considered false: * constants defined to be false: `None` and `False`. * zero of any numeric type: `0`, `0.0`, `0j`, `Decimal(0)`, `Fraction(0, 1)` * empty sequences and collections: `''`, `()`, `[]`, `{}`, `set()`, `range(0)` Operations and built-in functions that have a Boolean result always return `0` or `False` for false and `1` or `True` for true, unless otherwise stated. (Important exception: the Boolean operations `or` and `and` always return one of their operands.) Boolean Operations — `and`, `or`, `not` --------------------------------------- These are the Boolean operations, ordered by ascending priority: | Operation | Result | Notes | | --- | --- | --- | | `x or y` | if *x* is false, then *y*, else *x* | (1) | | `x and y` | if *x* is false, then *x*, else *y* | (2) | | `not x` | if *x* is false, then `True`, else `False` | (3) | Notes: 1. This is a short-circuit operator, so it only evaluates the second argument if the first one is false. 2. This is a short-circuit operator, so it only evaluates the second argument if the first one is true. 3. `not` has a lower priority than non-Boolean operators, so `not a == b` is interpreted as `not (a == b)`, and `a == not b` is a syntax error. Comparisons ----------- There are eight comparison operations in Python. They all have the same priority (which is higher than that of the Boolean operations). Comparisons can be chained arbitrarily; for example, `x < y <= z` is equivalent to `x < y and y <= z`, except that *y* is evaluated only once (but in both cases *z* is not evaluated at all when `x < y` is found to be false). This table summarizes the comparison operations: | Operation | Meaning | | --- | --- | | `<` | strictly less than | | `<=` | less than or equal | | `>` | strictly greater than | | `>=` | greater than or equal | | `==` | equal | | `!=` | not equal | | `is` | object identity | | `is not` | negated object identity | Objects of different types, except different numeric types, never compare equal.
The `==` operator is always defined but for some object types (for example, class objects) is equivalent to [`is`](../reference/expressions#is). The `<`, `<=`, `>` and `>=` operators are only defined where they make sense; for example, they raise a [`TypeError`](exceptions#TypeError "TypeError") exception when one of the arguments is a complex number. Non-identical instances of a class normally compare as non-equal unless the class defines the [`__eq__()`](../reference/datamodel#object.__eq__ "object.__eq__") method. Instances of a class cannot be ordered with respect to other instances of the same class, or other types of object, unless the class defines enough of the methods [`__lt__()`](../reference/datamodel#object.__lt__ "object.__lt__"), [`__le__()`](../reference/datamodel#object.__le__ "object.__le__"), [`__gt__()`](../reference/datamodel#object.__gt__ "object.__gt__"), and [`__ge__()`](../reference/datamodel#object.__ge__ "object.__ge__") (in general, [`__lt__()`](../reference/datamodel#object.__lt__ "object.__lt__") and [`__eq__()`](../reference/datamodel#object.__eq__ "object.__eq__") are sufficient, if you want the conventional meanings of the comparison operators). The behavior of the [`is`](../reference/expressions#is) and [`is not`](../reference/expressions#is-not) operators cannot be customized; also they can be applied to any two objects and never raise an exception. Two more operations with the same syntactic priority, [`in`](../reference/expressions#in) and [`not in`](../reference/expressions#not-in), are supported by types that are [iterable](../glossary#term-iterable) or implement the [`__contains__()`](../reference/datamodel#object.__contains__ "object.__contains__") method. Numeric Types — int, float, complex ----------------------------------- There are three distinct numeric types: *integers*, *floating point numbers*, and *complex numbers*. In addition, Booleans are a subtype of integers. Integers have unlimited precision. Floating point numbers are usually implemented using `double` in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in [`sys.float_info`](sys#sys.float_info "sys.float_info"). Complex numbers have a real and imaginary part, which are each a floating point number. To extract these parts from a complex number *z*, use `z.real` and `z.imag`. (The standard library includes the additional numeric types [`fractions.Fraction`](fractions#fractions.Fraction "fractions.Fraction"), for rationals, and [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal"), for floating-point numbers with user-definable precision.) Numbers are created by numeric literals or as the result of built-in functions and operators. Unadorned integer literals (including hex, octal and binary numbers) yield integers. Numeric literals containing a decimal point or an exponent sign yield floating point numbers. Appending `'j'` or `'J'` to a numeric literal yields an imaginary number (a complex number with a zero real part) which you can add to an integer or float to get a complex number with real and imaginary parts. Python fully supports mixed arithmetic: when a binary arithmetic operator has operands of different numeric types, the operand with the “narrower” type is widened to that of the other, where integer is narrower than floating point, which is narrower than complex. 
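A brief illustration of this widening rule (and of Booleans as an integer subtype):

```
>>> 1 + 2.0      # int operand widened to float
3.0
>>> 2.0 + 3j     # float operand widened to complex
(2+3j)
>>> True + 1     # bool is a subtype of int
2
```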
A comparison between numbers of different types behaves as though the exact values of those numbers were being compared. [2](#id13) The constructors [`int()`](functions#int "int"), [`float()`](functions#float "float"), and [`complex()`](functions#complex "complex") can be used to produce numbers of a specific type. All numeric types (except complex) support the following operations (for priorities of the operations, see [Operator precedence](../reference/expressions#operator-summary)): | Operation | Result | Notes | Full documentation | | --- | --- | --- | --- | | `x + y` | sum of *x* and *y* | | | | `x - y` | difference of *x* and *y* | | | | `x * y` | product of *x* and *y* | | | | `x / y` | quotient of *x* and *y* | | | | `x // y` | floored quotient of *x* and *y* | (1) | | | `x % y` | remainder of `x / y` | (2) | | | `-x` | *x* negated | | | | `+x` | *x* unchanged | | | | `abs(x)` | absolute value or magnitude of *x* | | [`abs()`](functions#abs "abs") | | `int(x)` | *x* converted to integer | (3)(6) | [`int()`](functions#int "int") | | `float(x)` | *x* converted to floating point | (4)(6) | [`float()`](functions#float "float") | | `complex(re, im)` | a complex number with real part *re*, imaginary part *im*. *im* defaults to zero. | (6) | [`complex()`](functions#complex "complex") | | `c.conjugate()` | conjugate of the complex number *c* | | | | `divmod(x, y)` | the pair `(x // y, x % y)` | (2) | [`divmod()`](functions#divmod "divmod") | | `pow(x, y)` | *x* to the power *y* | (5) | [`pow()`](functions#pow "pow") | | `x ** y` | *x* to the power *y* | (5) | | Notes: 1. Also referred to as integer division. The resultant value is a whole integer, though the result’s type is not necessarily int. The result is always rounded towards minus infinity: `1//2` is `0`, `(-1)//2` is `-1`, `1//(-2)` is `-1`, and `(-1)//(-2)` is `0`. 2. Not for complex numbers. Instead convert to floats using [`abs()`](functions#abs "abs") if appropriate. 3. Conversion from floating point to integer may round or truncate as in C; see functions [`math.floor()`](math#math.floor "math.floor") and [`math.ceil()`](math#math.ceil "math.ceil") for well-defined conversions. 4. float also accepts the strings “nan” and “inf” with an optional prefix “+” or “-” for Not a Number (NaN) and positive or negative infinity. 5. Python defines `pow(0, 0)` and `0 ** 0` to be `1`, as is common for programming languages. 6. The numeric literals accepted include the digits `0` to `9` or any Unicode equivalent (code points with the `Nd` property). See <https://www.unicode.org/Public/13.0.0/ucd/extracted/DerivedNumericType.txt> for a complete list of code points with the `Nd` property. All [`numbers.Real`](numbers#numbers.Real "numbers.Real") types ([`int`](functions#int "int") and [`float`](functions#float "float")) also include the following operations: | Operation | Result | | --- | --- | | [`math.trunc(x)`](math#math.trunc "math.trunc") | *x* truncated to [`Integral`](numbers#numbers.Integral "numbers.Integral") | | [`round(x[, n])`](functions#round "round") | *x* rounded to *n* digits, rounding half to even. If *n* is omitted, it defaults to 0. 
| | [`math.floor(x)`](math#math.floor "math.floor") | the greatest [`Integral`](numbers#numbers.Integral "numbers.Integral") <= *x* | | [`math.ceil(x)`](math#math.ceil "math.ceil") | the least [`Integral`](numbers#numbers.Integral "numbers.Integral") >= *x* | For additional numeric operations see the [`math`](math#module-math "math: Mathematical functions (sin() etc.).") and [`cmath`](cmath#module-cmath "cmath: Mathematical functions for complex numbers.") modules. ### Bitwise Operations on Integer Types Bitwise operations only make sense for integers. The result of bitwise operations is calculated as though carried out in two’s complement with an infinite number of sign bits. The priorities of the binary bitwise operations are all lower than the numeric operations and higher than the comparisons; the unary operation `~` has the same priority as the other unary numeric operations (`+` and `-`). This table lists the bitwise operations sorted in ascending priority: | Operation | Result | Notes | | --- | --- | --- | | `x | y` | bitwise *or* of *x* and *y* | (4) | | `x ^ y` | bitwise *exclusive or* of *x* and *y* | (4) | | `x & y` | bitwise *and* of *x* and *y* | (4) | | `x << n` | *x* shifted left by *n* bits | (1)(2) | | `x >> n` | *x* shifted right by *n* bits | (1)(3) | | `~x` | the bits of *x* inverted | | Notes: 1. Negative shift counts are illegal and cause a [`ValueError`](exceptions#ValueError "ValueError") to be raised. 2. A left shift by *n* bits is equivalent to multiplication by `pow(2, n)`. 3. A right shift by *n* bits is equivalent to floor division by `pow(2, n)`. 4. Performing these calculations with at least one extra sign extension bit in a finite two’s complement representation (a working bit-width of `1 + max(x.bit_length(), y.bit_length())` or more) is sufficient to get the same result as if there were an infinite number of sign bits. ### Additional Methods on Integer Types The int type implements the [`numbers.Integral`](numbers#numbers.Integral "numbers.Integral") [abstract base class](../glossary#term-abstract-base-class). In addition, it provides a few more methods: `int.bit_length()` Return the number of bits necessary to represent an integer in binary, excluding the sign and leading zeros: ``` >>> n = -37 >>> bin(n) '-0b100101' >>> n.bit_length() 6 ``` More precisely, if `x` is nonzero, then `x.bit_length()` is the unique positive integer `k` such that `2**(k-1) <= abs(x) < 2**k`. Equivalently, when `abs(x)` is small enough to have a correctly rounded logarithm, then `k = 1 + int(log(abs(x), 2))`. If `x` is zero, then `x.bit_length()` returns `0`. Equivalent to: ``` def bit_length(self): s = bin(self) # binary representation: bin(-37) --> '-0b100101' s = s.lstrip('-0b') # remove leading zeros and minus sign return len(s) # len('100101') --> 6 ``` New in version 3.1. `int.to_bytes(length, byteorder, *, signed=False)` Return an array of bytes representing an integer. ``` >>> (1024).to_bytes(2, byteorder='big') b'\x04\x00' >>> (1024).to_bytes(10, byteorder='big') b'\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00' >>> (-1024).to_bytes(10, byteorder='big', signed=True) b'\xff\xff\xff\xff\xff\xff\xff\xff\xfc\x00' >>> x = 1000 >>> x.to_bytes((x.bit_length() + 7) // 8, byteorder='little') b'\xe8\x03' ``` The integer is represented using *length* bytes. An [`OverflowError`](exceptions#OverflowError "OverflowError") is raised if the integer is not representable with the given number of bytes. The *byteorder* argument determines the byte order used to represent the integer. 
If *byteorder* is `"big"`, the most significant byte is at the beginning of the byte array. If *byteorder* is `"little"`, the most significant byte is at the end of the byte array. To request the native byte order of the host system, use [`sys.byteorder`](sys#sys.byteorder "sys.byteorder") as the byte order value. The *signed* argument determines whether two’s complement is used to represent the integer. If *signed* is `False` and a negative integer is given, an [`OverflowError`](exceptions#OverflowError "OverflowError") is raised. The default value for *signed* is `False`. New in version 3.2. `classmethod int.from_bytes(bytes, byteorder, *, signed=False)` Return the integer represented by the given array of bytes. ``` >>> int.from_bytes(b'\x00\x10', byteorder='big') 16 >>> int.from_bytes(b'\x00\x10', byteorder='little') 4096 >>> int.from_bytes(b'\xfc\x00', byteorder='big', signed=True) -1024 >>> int.from_bytes(b'\xfc\x00', byteorder='big', signed=False) 64512 >>> int.from_bytes([255, 0, 0], byteorder='big') 16711680 ``` The argument *bytes* must either be a [bytes-like object](../glossary#term-bytes-like-object) or an iterable producing bytes. The *byteorder* argument determines the byte order used to represent the integer. If *byteorder* is `"big"`, the most significant byte is at the beginning of the byte array. If *byteorder* is `"little"`, the most significant byte is at the end of the byte array. To request the native byte order of the host system, use [`sys.byteorder`](sys#sys.byteorder "sys.byteorder") as the byte order value. The *signed* argument indicates whether two’s complement is used to represent the integer. New in version 3.2. `int.as_integer_ratio()` Return a pair of integers whose ratio is exactly equal to the original integer and with a positive denominator. The integer ratio of integers (whole numbers) is always the integer as the numerator and `1` as the denominator. New in version 3.8. ### Additional Methods on Float The float type implements the [`numbers.Real`](numbers#numbers.Real "numbers.Real") [abstract base class](../glossary#term-abstract-base-class). float also has the following additional methods. `float.as_integer_ratio()` Return a pair of integers whose ratio is exactly equal to the original float and with a positive denominator. Raises [`OverflowError`](exceptions#OverflowError "OverflowError") on infinities and a [`ValueError`](exceptions#ValueError "ValueError") on NaNs. `float.is_integer()` Return `True` if the float instance is finite with integral value, and `False` otherwise: ``` >>> (-2.0).is_integer() True >>> (3.2).is_integer() False ``` Two methods support conversion to and from hexadecimal strings. Since Python’s floats are stored internally as binary numbers, converting a float to or from a *decimal* string usually involves a small rounding error. In contrast, hexadecimal strings allow exact representation and specification of floating-point numbers. This can be useful when debugging, and in numerical work. `float.hex()` Return a representation of a floating-point number as a hexadecimal string. For finite floating-point numbers, this representation will always include a leading `0x` and a trailing `p` and exponent. `classmethod float.fromhex(s)` Class method to return the float represented by a hexadecimal string *s*. The string *s* may have leading and trailing whitespace. Note that [`float.hex()`](#float.hex "float.hex") is an instance method, while [`float.fromhex()`](#float.fromhex "float.fromhex") is a class method. 
A hexadecimal string takes the form: ``` [sign] ['0x'] integer ['.' fraction] ['p' exponent] ``` where the optional `sign` may be either `+` or `-`, `integer` and `fraction` are strings of hexadecimal digits, and `exponent` is a decimal integer with an optional leading sign. Case is not significant, and there must be at least one hexadecimal digit in either the integer or the fraction. This syntax is similar to the syntax specified in section 6.4.4.2 of the C99 standard, and also to the syntax used in Java 1.5 onwards. In particular, the output of [`float.hex()`](#float.hex "float.hex") is usable as a hexadecimal floating-point literal in C or Java code, and hexadecimal strings produced by C’s `%a` format character or Java’s `Double.toHexString` are accepted by [`float.fromhex()`](#float.fromhex "float.fromhex"). Note that the exponent is written in decimal rather than hexadecimal, and that it gives the power of 2 by which to multiply the coefficient. For example, the hexadecimal string `0x3.a7p10` represents the floating-point number `(3 + 10./16 + 7./16**2) * 2.0**10`, or `3740.0`: ``` >>> float.fromhex('0x3.a7p10') 3740.0 ``` Applying the reverse conversion to `3740.0` gives a different hexadecimal string representing the same number: ``` >>> float.hex(3740.0) '0x1.d380000000000p+11' ``` ### Hashing of numeric types For numbers `x` and `y`, possibly of different types, it’s a requirement that `hash(x) == hash(y)` whenever `x == y` (see the [`__hash__()`](../reference/datamodel#object.__hash__ "object.__hash__") method documentation for more details). For ease of implementation and efficiency across a variety of numeric types (including [`int`](functions#int "int"), [`float`](functions#float "float"), [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal") and [`fractions.Fraction`](fractions#fractions.Fraction "fractions.Fraction")) Python’s hash for numeric types is based on a single mathematical function that’s defined for any rational number, and hence applies to all instances of [`int`](functions#int "int") and [`fractions.Fraction`](fractions#fractions.Fraction "fractions.Fraction"), and all finite instances of [`float`](functions#float "float") and [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal"). Essentially, this function is given by reduction modulo `P` for a fixed prime `P`. The value of `P` is made available to Python as the `modulus` attribute of [`sys.hash_info`](sys#sys.hash_info "sys.hash_info"). **CPython implementation detail:** Currently, the prime used is `P = 2**31 - 1` on machines with 32-bit C longs and `P = 2**61 - 1` on machines with 64-bit C longs. Here are the rules in detail: * If `x = m / n` is a nonnegative rational number and `n` is not divisible by `P`, define `hash(x)` as `m * invmod(n, P) % P`, where `invmod(n, P)` gives the inverse of `n` modulo `P`. * If `x = m / n` is a nonnegative rational number and `n` is divisible by `P` (but `m` is not) then `n` has no inverse modulo `P` and the rule above doesn’t apply; in this case define `hash(x)` to be the constant value `sys.hash_info.inf`. * If `x = m / n` is a negative rational number define `hash(x)` as `-hash(-x)`. If the resulting hash is `-1`, replace it with `-2`. * The particular values `sys.hash_info.inf`, `-sys.hash_info.inf` and `sys.hash_info.nan` are used as hash values for positive infinity, negative infinity, or nans (respectively). (All hashable nans have the same hash value.)
* For a [`complex`](functions#complex "complex") number `z`, the hash values of the real and imaginary parts are combined by computing `hash(z.real) + sys.hash_info.imag * hash(z.imag)`, reduced modulo `2**sys.hash_info.width` so that it lies in `range(-2**(sys.hash_info.width - 1), 2**(sys.hash_info.width - 1))`. Again, if the result is `-1`, it’s replaced with `-2`. To clarify the above rules, here’s some example Python code, equivalent to the built-in hash, for computing the hash of a rational number, [`float`](functions#float "float"), or [`complex`](functions#complex "complex"): ``` import sys, math def hash_fraction(m, n): """Compute the hash of a rational number m / n. Assumes m and n are integers, with n positive. Equivalent to hash(fractions.Fraction(m, n)). """ P = sys.hash_info.modulus # Remove common factors of P. (Unnecessary if m and n already coprime.) while m % P == n % P == 0: m, n = m // P, n // P if n % P == 0: hash_value = sys.hash_info.inf else: # Fermat's Little Theorem: pow(n, P-1, P) is 1, so # pow(n, P-2, P) gives the inverse of n modulo P. hash_value = (abs(m) % P) * pow(n, P - 2, P) % P if m < 0: hash_value = -hash_value if hash_value == -1: hash_value = -2 return hash_value def hash_float(x): """Compute the hash of a float x.""" if math.isnan(x): return sys.hash_info.nan elif math.isinf(x): return sys.hash_info.inf if x > 0 else -sys.hash_info.inf else: return hash_fraction(*x.as_integer_ratio()) def hash_complex(z): """Compute the hash of a complex number z.""" hash_value = hash_float(z.real) + sys.hash_info.imag * hash_float(z.imag) # do a signed reduction modulo 2**sys.hash_info.width M = 2**(sys.hash_info.width - 1) hash_value = (hash_value & (M - 1)) - (hash_value & M) if hash_value == -1: hash_value = -2 return hash_value ``` Iterator Types -------------- Python supports a concept of iteration over containers. This is implemented using two distinct methods; these are used to allow user-defined classes to support iteration. Sequences, described below in more detail, always support the iteration methods. One method needs to be defined for container objects to provide iteration support: `container.__iter__()` Return an iterator object. The object is required to support the iterator protocol described below. If a container supports different types of iteration, additional methods can be provided to specifically request iterators for those iteration types. (An example of an object supporting multiple forms of iteration would be a tree structure which supports both breadth-first and depth-first traversal.) This method corresponds to the [`tp_iter`](../c-api/typeobj#c.PyTypeObject.tp_iter "PyTypeObject.tp_iter") slot of the type structure for Python objects in the Python/C API. The iterator objects themselves are required to support the following two methods, which together form the *iterator protocol*: `iterator.__iter__()` Return the iterator object itself. This is required to allow both containers and iterators to be used with the [`for`](../reference/compound_stmts#for) and [`in`](../reference/expressions#in) statements. This method corresponds to the [`tp_iter`](../c-api/typeobj#c.PyTypeObject.tp_iter "PyTypeObject.tp_iter") slot of the type structure for Python objects in the Python/C API. `iterator.__next__()` Return the next item from the container. If there are no further items, raise the [`StopIteration`](exceptions#StopIteration "StopIteration") exception. 
This method corresponds to the [`tp_iternext`](../c-api/typeobj#c.PyTypeObject.tp_iternext "PyTypeObject.tp_iternext") slot of the type structure for Python objects in the Python/C API. Python defines several iterator objects to support iteration over general and specific sequence types, dictionaries, and other more specialized forms. The specific types are not important beyond their implementation of the iterator protocol. Once an iterator’s [`__next__()`](#iterator.__next__ "iterator.__next__") method raises [`StopIteration`](exceptions#StopIteration "StopIteration"), it must continue to do so on subsequent calls. Implementations that do not obey this property are deemed broken. ### Generator Types Python’s [generator](../glossary#term-generator)s provide a convenient way to implement the iterator protocol. If a container object’s [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__") method is implemented as a generator, it will automatically return an iterator object (technically, a generator object) supplying the [`__iter__()`](../reference/datamodel#object.__iter__ "object.__iter__") and [`__next__()`](../reference/expressions#generator.__next__ "generator.__next__") methods. More information about generators can be found in [the documentation for the yield expression](../reference/expressions#yieldexpr). Sequence Types — list, tuple, range ----------------------------------- There are three basic sequence types: lists, tuples, and range objects. Additional sequence types tailored for processing of [binary data](#binaryseq) and [text strings](#textseq) are described in dedicated sections. ### Common Sequence Operations The operations in the following table are supported by most sequence types, both mutable and immutable. The [`collections.abc.Sequence`](collections.abc#collections.abc.Sequence "collections.abc.Sequence") ABC is provided to make it easier to correctly implement these operations on custom sequence types. This table lists the sequence operations sorted in ascending priority. In the table, *s* and *t* are sequences of the same type, *n*, *i*, *j* and *k* are integers and *x* is an arbitrary object that meets any type and value restrictions imposed by *s*. The `in` and `not in` operations have the same priorities as the comparison operations. The `+` (concatenation) and `*` (repetition) operations have the same priority as the corresponding numeric operations. [3](#id14) | Operation | Result | Notes | | --- | --- | --- | | `x in s` | `True` if an item of *s* is equal to *x*, else `False` | (1) | | `x not in s` | `False` if an item of *s* is equal to *x*, else `True` | (1) | | `s + t` | the concatenation of *s* and *t* | (6)(7) | | `s * n` or `n * s` | equivalent to adding *s* to itself *n* times | (2)(7) | | `s[i]` | *i*th item of *s*, origin 0 | (3) | | `s[i:j]` | slice of *s* from *i* to *j* | (3)(4) | | `s[i:j:k]` | slice of *s* from *i* to *j* with step *k* | (3)(5) | | `len(s)` | length of *s* | | | `min(s)` | smallest item of *s* | | | `max(s)` | largest item of *s* | | | `s.index(x[, i[, j]])` | index of the first occurrence of *x* in *s* (at or after index *i* and before index *j*) | (8) | | `s.count(x)` | total number of occurrences of *x* in *s* | | Sequences of the same type also support comparisons. In particular, tuples and lists are compared lexicographically by comparing corresponding elements. This means that to compare equal, every element must compare equal and the two sequences must be of the same type and have the same length. 
(For full details see [Comparisons](../reference/expressions#comparisons) in the language reference.) Notes: 1. While the `in` and `not in` operations are used only for simple containment testing in the general case, some specialised sequences (such as [`str`](#str "str"), [`bytes`](#bytes "bytes") and [`bytearray`](#bytearray "bytearray")) also use them for subsequence testing: ``` >>> "gg" in "eggs" True ``` 2. Values of *n* less than `0` are treated as `0` (which yields an empty sequence of the same type as *s*). Note that items in the sequence *s* are not copied; they are referenced multiple times. This often haunts new Python programmers; consider: ``` >>> lists = [[]] * 3 >>> lists [[], [], []] >>> lists[0].append(3) >>> lists [[3], [3], [3]] ``` What has happened is that `[[]]` is a one-element list containing an empty list, so all three elements of `[[]] * 3` are references to this single empty list. Modifying any of the elements of `lists` modifies this single list. You can create a list of different lists this way: ``` >>> lists = [[] for i in range(3)] >>> lists[0].append(3) >>> lists[1].append(5) >>> lists[2].append(7) >>> lists [[3], [5], [7]] ``` Further explanation is available in the FAQ entry [How do I create a multidimensional list?](../faq/programming#faq-multidimensional-list). 3. If *i* or *j* is negative, the index is relative to the end of sequence *s*: `len(s) + i` or `len(s) + j` is substituted. But note that `-0` is still `0`. 4. The slice of *s* from *i* to *j* is defined as the sequence of items with index *k* such that `i <= k < j`. If *i* or *j* is greater than `len(s)`, use `len(s)`. If *i* is omitted or `None`, use `0`. If *j* is omitted or `None`, use `len(s)`. If *i* is greater than or equal to *j*, the slice is empty. 5. The slice of *s* from *i* to *j* with step *k* is defined as the sequence of items with index `x = i + n*k` such that `0 <= n < (j-i)/k`. In other words, the indices are `i`, `i+k`, `i+2*k`, `i+3*k` and so on, stopping when *j* is reached (but never including *j*). When *k* is positive, *i* and *j* are reduced to `len(s)` if they are greater. When *k* is negative, *i* and *j* are reduced to `len(s) - 1` if they are greater. If *i* or *j* are omitted or `None`, they become “end” values (which end depends on the sign of *k*). Note, *k* cannot be zero. If *k* is `None`, it is treated like `1`. 6. Concatenating immutable sequences always results in a new object. This means that building up a sequence by repeated concatenation will have a quadratic runtime cost in the total sequence length. To get a linear runtime cost, you must switch to one of the alternatives below: * if concatenating [`str`](#str "str") objects, you can build a list and use [`str.join()`](#str.join "str.join") at the end or else write to an [`io.StringIO`](io#io.StringIO "io.StringIO") instance and retrieve its value when complete * if concatenating [`bytes`](#bytes "bytes") objects, you can similarly use [`bytes.join()`](#bytes.join "bytes.join") or [`io.BytesIO`](io#io.BytesIO "io.BytesIO"), or you can do in-place concatenation with a [`bytearray`](#bytearray "bytearray") object. [`bytearray`](#bytearray "bytearray") objects are mutable and have an efficient overallocation mechanism * if concatenating [`tuple`](#tuple "tuple") objects, extend a [`list`](#list "list") instead * for other types, investigate the relevant class documentation 7. 
Some sequence types (such as [`range`](#range "range")) only support item sequences that follow specific patterns, and hence don’t support sequence concatenation or repetition. 8. `index` raises [`ValueError`](exceptions#ValueError "ValueError") when *x* is not found in *s*. Not all implementations support passing the additional arguments *i* and *j*. These arguments allow efficient searching of subsections of the sequence. Passing the extra arguments is roughly equivalent to using `s[i:j].index(x)`, only without copying any data and with the returned index being relative to the start of the sequence rather than the start of the slice. ### Immutable Sequence Types The only operation that immutable sequence types generally implement that is not also implemented by mutable sequence types is support for the [`hash()`](functions#hash "hash") built-in. This support allows immutable sequences, such as [`tuple`](#tuple "tuple") instances, to be used as [`dict`](#dict "dict") keys and stored in [`set`](#set "set") and [`frozenset`](#frozenset "frozenset") instances. Attempting to hash an immutable sequence that contains unhashable values will result in [`TypeError`](exceptions#TypeError "TypeError"). ### Mutable Sequence Types The operations in the following table are defined on mutable sequence types. The [`collections.abc.MutableSequence`](collections.abc#collections.abc.MutableSequence "collections.abc.MutableSequence") ABC is provided to make it easier to correctly implement these operations on custom sequence types. In the table *s* is an instance of a mutable sequence type, *t* is any iterable object and *x* is an arbitrary object that meets any type and value restrictions imposed by *s* (for example, [`bytearray`](#bytearray "bytearray") only accepts integers that meet the value restriction `0 <= x <= 255`). | Operation | Result | Notes | | --- | --- | --- | | `s[i] = x` | item *i* of *s* is replaced by *x* | | | `s[i:j] = t` | slice of *s* from *i* to *j* is replaced by the contents of the iterable *t* | | | `del s[i:j]` | same as `s[i:j] = []` | | | `s[i:j:k] = t` | the elements of `s[i:j:k]` are replaced by those of *t* | (1) | | `del s[i:j:k]` | removes the elements of `s[i:j:k]` from the list | | | `s.append(x)` | appends *x* to the end of the sequence (same as `s[len(s):len(s)] = [x]`) | | | `s.clear()` | removes all items from *s* (same as `del s[:]`) | (5) | | `s.copy()` | creates a shallow copy of *s* (same as `s[:]`) | (5) | | `s.extend(t)` or `s += t` | extends *s* with the contents of *t* (for the most part the same as `s[len(s):len(s)] = t`) | | | `s *= n` | updates *s* with its contents repeated *n* times | (6) | | `s.insert(i, x)` | inserts *x* into *s* at the index given by *i* (same as `s[i:i] = [x]`) | | | `s.pop()` or `s.pop(i)` | retrieves the item at *i* and also removes it from *s* | (2) | | `s.remove(x)` | remove the first item from *s* where `s[i]` is equal to *x* | (3) | | `s.reverse()` | reverses the items of *s* in place | (4) | Notes: 1. *t* must have the same length as the slice it is replacing. 2. The optional argument *i* defaults to `-1`, so that by default the last item is removed and returned. 3. `remove()` raises [`ValueError`](exceptions#ValueError "ValueError") when *x* is not found in *s*. 4. The `reverse()` method modifies the sequence in place for economy of space when reversing a large sequence. To remind users that it operates by side effect, it does not return the reversed sequence. 5. 
`clear()` and `copy()` are included for consistency with the interfaces of mutable containers that don’t support slicing operations (such as [`dict`](#dict "dict") and [`set`](#set "set")). `copy()` is not part of the [`collections.abc.MutableSequence`](collections.abc#collections.abc.MutableSequence "collections.abc.MutableSequence") ABC, but most concrete mutable sequence classes provide it. New in version 3.3: `clear()` and `copy()` methods. 6. The value *n* is an integer, or an object implementing [`__index__()`](../reference/datamodel#object.__index__ "object.__index__"). Zero and negative values of *n* clear the sequence. Items in the sequence are not copied; they are referenced multiple times, as explained for `s * n` under [Common Sequence Operations](#typesseq-common). ### Lists Lists are mutable sequences, typically used to store collections of homogeneous items (where the precise degree of similarity will vary by application). `class list([iterable])` Lists may be constructed in several ways: * Using a pair of square brackets to denote the empty list: `[]` * Using square brackets, separating items with commas: `[a]`, `[a, b, c]` * Using a list comprehension: `[x for x in iterable]` * Using the type constructor: `list()` or `list(iterable)` The constructor builds a list whose items are the same and in the same order as *iterable*’s items. *iterable* may be either a sequence, a container that supports iteration, or an iterator object. If *iterable* is already a list, a copy is made and returned, similar to `iterable[:]`. For example, `list('abc')` returns `['a', 'b', 'c']` and `list( (1, 2, 3) )` returns `[1, 2, 3]`. If no argument is given, the constructor creates a new empty list, `[]`. Many other operations also produce lists, including the [`sorted()`](functions#sorted "sorted") built-in. Lists implement all of the [common](#typesseq-common) and [mutable](#typesseq-mutable) sequence operations. Lists also provide the following additional method: `sort(*, key=None, reverse=False)` This method sorts the list in place, using only `<` comparisons between items. Exceptions are not suppressed - if any comparison operations fail, the entire sort operation will fail (and the list will likely be left in a partially modified state). [`sort()`](#list.sort "list.sort") accepts two arguments that can only be passed by keyword ([keyword-only arguments](../glossary#keyword-only-parameter)): *key* specifies a function of one argument that is used to extract a comparison key from each list element (for example, `key=str.lower`). The key corresponding to each item in the list is calculated once and then used for the entire sorting process. The default value of `None` means that list items are sorted directly without calculating a separate key value. The [`functools.cmp_to_key()`](functools#functools.cmp_to_key "functools.cmp_to_key") utility is available to convert a 2.x style *cmp* function to a *key* function. *reverse* is a boolean value. If set to `True`, then the list elements are sorted as if each comparison were reversed. This method modifies the sequence in place for economy of space when sorting a large sequence. To remind users that it operates by side effect, it does not return the sorted sequence (use [`sorted()`](functions#sorted "sorted") to explicitly request a new sorted list instance). The [`sort()`](#list.sort "list.sort") method is guaranteed to be stable. 
A sort is stable if it guarantees not to change the relative order of elements that compare equal — this is helpful for sorting in multiple passes (for example, sort by department, then by salary grade). For sorting examples and a brief sorting tutorial, see [Sorting HOW TO](../howto/sorting#sortinghowto). **CPython implementation detail:** While a list is being sorted, the effect of attempting to mutate, or even inspect, the list is undefined. The C implementation of Python makes the list appear empty for the duration, and raises [`ValueError`](exceptions#ValueError "ValueError") if it can detect that the list has been mutated during a sort. ### Tuples Tuples are immutable sequences, typically used to store collections of heterogeneous data (such as the 2-tuples produced by the [`enumerate()`](functions#enumerate "enumerate") built-in). Tuples are also used for cases where an immutable sequence of homogeneous data is needed (such as allowing storage in a [`set`](#set "set") or [`dict`](#dict "dict") instance). `class tuple([iterable])` Tuples may be constructed in a number of ways: * Using a pair of parentheses to denote the empty tuple: `()` * Using a trailing comma for a singleton tuple: `a,` or `(a,)` * Separating items with commas: `a, b, c` or `(a, b, c)` * Using the [`tuple()`](#tuple "tuple") built-in: `tuple()` or `tuple(iterable)` The constructor builds a tuple whose items are the same and in the same order as *iterable*’s items. *iterable* may be either a sequence, a container that supports iteration, or an iterator object. If *iterable* is already a tuple, it is returned unchanged. For example, `tuple('abc')` returns `('a', 'b', 'c')` and `tuple( [1, 2, 3] )` returns `(1, 2, 3)`. If no argument is given, the constructor creates a new empty tuple, `()`. Note that it is actually the comma which makes a tuple, not the parentheses. The parentheses are optional, except in the empty tuple case, or when they are needed to avoid syntactic ambiguity. For example, `f(a, b, c)` is a function call with three arguments, while `f((a, b, c))` is a function call with a 3-tuple as the sole argument. Tuples implement all of the [common](#typesseq-common) sequence operations. For heterogeneous collections of data where access by name is clearer than access by index, [`collections.namedtuple()`](collections#collections.namedtuple "collections.namedtuple") may be a more appropriate choice than a simple tuple object. ### Ranges The [`range`](#range "range") type represents an immutable sequence of numbers and is commonly used for looping a specific number of times in [`for`](../reference/compound_stmts#for) loops. `class range(stop)` `class range(start, stop[, step])` The arguments to the range constructor must be integers (either built-in [`int`](functions#int "int") or any object that implements the [`__index__()`](../reference/datamodel#object.__index__ "object.__index__") special method). If the *step* argument is omitted, it defaults to `1`. If the *start* argument is omitted, it defaults to `0`. If *step* is zero, [`ValueError`](exceptions#ValueError "ValueError") is raised. For a positive *step*, the contents of a range `r` are determined by the formula `r[i] = start + step*i` where `i >= 0` and `r[i] < stop`. For a negative *step*, the contents of the range are still determined by the formula `r[i] = start + step*i`, but the constraints are `i >= 0` and `r[i] > stop`. A range object will be empty if `r[0]` does not meet the value constraint. 
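To make the formula concrete, here is a small doctest-style sketch (the specific numbers are arbitrary) that reconstructs a negative-step range by hand:

```
>>> r = range(10, 0, -2)
>>> [10 + (-2)*i for i in range(len(r))]   # r[i] = start + step*i, while r[i] > stop
[10, 8, 6, 4, 2]
>>> list(r)
[10, 8, 6, 4, 2]
>>> list(range(2, 2, -1))   # r[0] would be 2, which fails r[0] > stop, so the range is empty
[]
```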
Ranges do support negative indices, but these are interpreted as indexing from the end of the sequence determined by the positive indices. Ranges containing absolute values larger than [`sys.maxsize`](sys#sys.maxsize "sys.maxsize") are permitted but some features (such as [`len()`](functions#len "len")) may raise [`OverflowError`](exceptions#OverflowError "OverflowError"). Range examples: ``` >>> list(range(10)) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> list(range(1, 11)) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> list(range(0, 30, 5)) [0, 5, 10, 15, 20, 25] >>> list(range(0, 10, 3)) [0, 3, 6, 9] >>> list(range(0, -10, -1)) [0, -1, -2, -3, -4, -5, -6, -7, -8, -9] >>> list(range(0)) [] >>> list(range(1, 0)) [] ``` Ranges implement all of the [common](#typesseq-common) sequence operations except concatenation and repetition (due to the fact that range objects can only represent sequences that follow a strict pattern and repetition and concatenation will usually violate that pattern). `start` The value of the *start* parameter (or `0` if the parameter was not supplied) `stop` The value of the *stop* parameter `step` The value of the *step* parameter (or `1` if the parameter was not supplied) The advantage of the [`range`](#range "range") type over a regular [`list`](#list "list") or [`tuple`](#tuple "tuple") is that a [`range`](#range "range") object will always take the same (small) amount of memory, no matter the size of the range it represents (as it only stores the `start`, `stop` and `step` values, calculating individual items and subranges as needed). Range objects implement the [`collections.abc.Sequence`](collections.abc#collections.abc.Sequence "collections.abc.Sequence") ABC, and provide features such as containment tests, element index lookup, slicing and support for negative indices (see [Sequence Types — list, tuple, range](#typesseq)): ``` >>> r = range(0, 20, 2) >>> r range(0, 20, 2) >>> 11 in r False >>> 10 in r True >>> r.index(10) 5 >>> r[5] 10 >>> r[:5] range(0, 10, 2) >>> r[-1] 18 ``` Testing range objects for equality with `==` and `!=` compares them as sequences. That is, two range objects are considered equal if they represent the same sequence of values. (Note that two range objects that compare equal might have different [`start`](#range.start "range.start"), [`stop`](#range.stop "range.stop") and [`step`](#range.step "range.step") attributes, for example `range(0) == range(2, 1, 3)` or `range(0, 3, 2) == range(0, 4, 2)`.) Changed in version 3.2: Implement the Sequence ABC. Support slicing and negative indices. Test [`int`](functions#int "int") objects for membership in constant time instead of iterating through all items. Changed in version 3.3: Define ‘==’ and ‘!=’ to compare range objects based on the sequence of values they define (instead of comparing based on object identity). New in version 3.3: The [`start`](#range.start "range.start"), [`stop`](#range.stop "range.stop") and [`step`](#range.step "range.step") attributes. See also * The [linspace recipe](http://code.activestate.com/recipes/579000/) shows how to implement a lazy version of range suitable for floating point applications. Text Sequence Type — str ------------------------ Textual data in Python is handled with [`str`](#str "str") objects, or *strings*. Strings are immutable [sequences](#typesseq) of Unicode code points. 
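Immutability means that in-place modification is rejected; for illustration:

```
>>> s = 'Python'
>>> s[0]        # indexing yields a new string of length 1
'P'
>>> s[0] = 'J'  # strings cannot be changed in place
Traceback (most recent call last):
  ...
TypeError: 'str' object does not support item assignment
```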
String literals are written in a variety of ways: * Single quotes: `'allows embedded "double" quotes'` * Double quotes: `"allows embedded 'single' quotes"` * Triple quoted: `'''Three single quotes'''`, `"""Three double quotes"""` Triple quoted strings may span multiple lines - all associated whitespace will be included in the string literal. String literals that are part of a single expression and have only whitespace between them will be implicitly converted to a single string literal. That is, `("spam " "eggs") == "spam eggs"`. See [String and Bytes literals](../reference/lexical_analysis#strings) for more about the various forms of string literal, including supported escape sequences, and the `r` (“raw”) prefix that disables most escape sequence processing. Strings may also be created from other objects using the [`str`](#str "str") constructor. Since there is no separate “character” type, indexing a string produces strings of length 1. That is, for a non-empty string *s*, `s[0] == s[0:1]`. There is also no mutable string type, but [`str.join()`](#str.join "str.join") or [`io.StringIO`](io#io.StringIO "io.StringIO") can be used to efficiently construct strings from multiple fragments. Changed in version 3.3: For backwards compatibility with the Python 2 series, the `u` prefix is once again permitted on string literals. It has no effect on the meaning of string literals and cannot be combined with the `r` prefix. `class str(object='')` `class str(object=b'', encoding='utf-8', errors='strict')` Return a [string](#textseq) version of *object*. If *object* is not provided, returns the empty string. Otherwise, the behavior of `str()` depends on whether *encoding* or *errors* is given, as follows. If neither *encoding* nor *errors* is given, `str(object)` returns [`type(object).__str__(object)`](../reference/datamodel#object.__str__ "object.__str__"), which is the “informal” or nicely printable string representation of *object*. For string objects, this is the string itself. If *object* does not have a [`__str__()`](../reference/datamodel#object.__str__ "object.__str__") method, then [`str()`](#str "str") falls back to returning [`repr(object)`](functions#repr "repr"). If at least one of *encoding* or *errors* is given, *object* should be a [bytes-like object](../glossary#term-bytes-like-object) (e.g. [`bytes`](#bytes "bytes") or [`bytearray`](#bytearray "bytearray")). In this case, if *object* is a [`bytes`](#bytes "bytes") (or [`bytearray`](#bytearray "bytearray")) object, then `str(bytes, encoding, errors)` is equivalent to [`bytes.decode(encoding, errors)`](#bytes.decode "bytes.decode"). Otherwise, the bytes object underlying the buffer object is obtained before calling [`bytes.decode()`](#bytes.decode "bytes.decode"). See [Binary Sequence Types — bytes, bytearray, memoryview](#binaryseq) and [Buffer Protocol](../c-api/buffer#bufferobjects) for information on buffer objects. Passing a [`bytes`](#bytes "bytes") object to [`str()`](#str "str") without the *encoding* or *errors* arguments falls under the first case of returning the informal string representation (see also the [`-b`](../using/cmdline#cmdoption-b) command-line option to Python). For example: ``` >>> str(b'Zoot!') "b'Zoot!'" ``` For more information on the `str` class and its methods, see [Text Sequence Type — str](#textseq) and the [String Methods](#string-methods) section below. 
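As a counterpart to the `str(b'Zoot!')` example above, passing an *encoding* decodes the bytes instead; an illustrative session:

```
>>> str(b'Zoot!', 'utf-8')            # equivalent to b'Zoot!'.decode('utf-8')
'Zoot!'
>>> str(bytearray(b'Zoot!'), 'ascii')
'Zoot!'
```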
To output formatted strings, see the [Formatted string literals](../reference/lexical_analysis#f-strings) and [Format String Syntax](string#formatstrings) sections. In addition, see the [Text Processing Services](text#stringservices) section.

### String Methods

Strings implement all of the [common](#typesseq-common) sequence operations, along with the additional methods described below.

Strings also support two styles of string formatting, one providing a large degree of flexibility and customization (see [`str.format()`](#str.format "str.format"), [Format String Syntax](string#formatstrings) and [Custom String Formatting](string#string-formatting)) and the other based on C `printf` style formatting that handles a narrower range of types and is slightly harder to use correctly, but is often faster for the cases it can handle ([printf-style String Formatting](#old-string-formatting)).

The [Text Processing Services](text#textservices) section of the standard library covers a number of other modules that provide various text related utilities (including regular expression support in the [`re`](re#module-re "re: Regular expression operations.") module).

`str.capitalize()` Return a copy of the string with its first character capitalized and the rest lowercased. Changed in version 3.8: The first character is now put into titlecase rather than uppercase. This means that characters like digraphs will only have their first letter capitalized, instead of the full character.

`str.casefold()` Return a casefolded copy of the string. Casefolded strings may be used for caseless matching. Casefolding is similar to lowercasing but more aggressive because it is intended to remove all case distinctions in a string. For example, the German lowercase letter `'ß'` is equivalent to `"ss"`. Since it is already lowercase, [`lower()`](#str.lower "str.lower") would do nothing to `'ß'`; [`casefold()`](#str.casefold "str.casefold") converts it to `"ss"`. The casefolding algorithm is described in section 3.13 of the Unicode Standard. New in version 3.3.

`str.center(width[, fillchar])` Return the string centered in a string of length *width*. Padding is done using the specified *fillchar* (default is an ASCII space). The original string is returned if *width* is less than or equal to `len(s)`.

`str.count(sub[, start[, end]])` Return the number of non-overlapping occurrences of substring *sub* in the range [*start*, *end*]. Optional arguments *start* and *end* are interpreted as in slice notation.

`str.encode(encoding="utf-8", errors="strict")` Return an encoded version of the string as a bytes object. Default encoding is `'utf-8'`. *errors* may be given to set a different error handling scheme. The default for *errors* is `'strict'`, meaning that encoding errors raise a [`UnicodeError`](exceptions#UnicodeError "UnicodeError"). Other possible values are `'ignore'`, `'replace'`, `'xmlcharrefreplace'`, `'backslashreplace'` and any other name registered via [`codecs.register_error()`](codecs#codecs.register_error "codecs.register_error"); see section [Error Handlers](codecs#error-handlers). For a list of possible encodings, see section [Standard Encodings](codecs#standard-encodings). By default, the *errors* argument is not checked for validity, for performance reasons; it is only used at the first encoding error. Enable the [Python Development Mode](devmode#devmode), or use a debug build to check *errors*. Changed in version 3.1: Support for keyword arguments added.
Changed in version 3.9: The *errors* argument is now checked in development mode and in debug mode.

`str.endswith(suffix[, start[, end]])` Return `True` if the string ends with the specified *suffix*, otherwise return `False`. *suffix* can also be a tuple of suffixes to look for. With optional *start*, test beginning at that position. With optional *end*, stop comparing at that position.

`str.expandtabs(tabsize=8)` Return a copy of the string where all tab characters are replaced by one or more spaces, depending on the current column and the given tab size. Tab positions occur every *tabsize* characters (default is 8, giving tab positions at columns 0, 8, 16 and so on). To expand the string, the current column is set to zero and the string is examined character by character. If the character is a tab (`\t`), one or more space characters are inserted in the result until the current column is equal to the next tab position. (The tab character itself is not copied.) If the character is a newline (`\n`) or return (`\r`), it is copied and the current column is reset to zero. Any other character is copied unchanged and the current column is incremented by one regardless of how the character is represented when printed.

```
>>> '01\t012\t0123\t01234'.expandtabs()
'01      012     0123    01234'
>>> '01\t012\t0123\t01234'.expandtabs(4)
'01  012 0123    01234'
```

`str.find(sub[, start[, end]])` Return the lowest index in the string where substring *sub* is found within the slice `s[start:end]`. Optional arguments *start* and *end* are interpreted as in slice notation. Return `-1` if *sub* is not found. Note The [`find()`](#str.find "str.find") method should be used only if you need to know the position of *sub*. To check if *sub* is a substring or not, use the [`in`](../reference/expressions#in) operator:

```
>>> 'Py' in 'Python'
True
```

`str.format(*args, **kwargs)` Perform a string formatting operation. The string on which this method is called can contain literal text or replacement fields delimited by braces `{}`. Each replacement field contains either the numeric index of a positional argument, or the name of a keyword argument. Returns a copy of the string where each replacement field is replaced with the string value of the corresponding argument.

```
>>> "The sum of 1 + 2 is {0}".format(1+2)
'The sum of 1 + 2 is 3'
```

See [Format String Syntax](string#formatstrings) for a description of the various formatting options that can be specified in format strings. Note When formatting a number ([`int`](functions#int "int"), [`float`](functions#float "float"), [`complex`](functions#complex "complex"), [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal") and subclasses) with the `n` type (ex: `'{:n}'.format(1234)`), the function temporarily sets the `LC_CTYPE` locale to the `LC_NUMERIC` locale to decode `decimal_point` and `thousands_sep` fields of `localeconv()` if they are non-ASCII or longer than 1 byte, and the `LC_NUMERIC` locale is different from the `LC_CTYPE` locale. This temporary change affects other threads. Changed in version 3.7: When formatting a number with the `n` type, the function temporarily sets the `LC_CTYPE` locale to the `LC_NUMERIC` locale in some cases.

`str.format_map(mapping)` Similar to `str.format(**mapping)`, except that `mapping` is used directly and not copied to a [`dict`](#dict "dict"). This is useful if, for example, `mapping` is a dict subclass: ``` >>> class Default(dict): ... def __missing__(self, key): ... return key ...
>>> '{name} was born in {country}'.format_map(Default(name='Guido')) 'Guido was born in country' ``` New in version 3.2. `str.index(sub[, start[, end]])` Like [`find()`](#str.find "str.find"), but raise [`ValueError`](exceptions#ValueError "ValueError") when the substring is not found. `str.isalnum()` Return `True` if all characters in the string are alphanumeric and there is at least one character, `False` otherwise. A character `c` is alphanumeric if one of the following returns `True`: `c.isalpha()`, `c.isdecimal()`, `c.isdigit()`, or `c.isnumeric()`. `str.isalpha()` Return `True` if all characters in the string are alphabetic and there is at least one character, `False` otherwise. Alphabetic characters are those characters defined in the Unicode character database as “Letter”, i.e., those with general category property being one of “Lm”, “Lt”, “Lu”, “Ll”, or “Lo”. Note that this is different from the “Alphabetic” property defined in the Unicode Standard. `str.isascii()` Return `True` if the string is empty or all characters in the string are ASCII, `False` otherwise. ASCII characters have code points in the range U+0000-U+007F. New in version 3.7. `str.isdecimal()` Return `True` if all characters in the string are decimal characters and there is at least one character, `False` otherwise. Decimal characters are those that can be used to form numbers in base 10, e.g. U+0660, ARABIC-INDIC DIGIT ZERO. Formally a decimal character is a character in the Unicode General Category “Nd”. `str.isdigit()` Return `True` if all characters in the string are digits and there is at least one character, `False` otherwise. Digits include decimal characters and digits that need special handling, such as the compatibility superscript digits. This covers digits which cannot be used to form numbers in base 10, like the Kharosthi numbers. Formally, a digit is a character that has the property value Numeric\_Type=Digit or Numeric\_Type=Decimal. `str.isidentifier()` Return `True` if the string is a valid identifier according to the language definition, section [Identifiers and keywords](../reference/lexical_analysis#identifiers). Call [`keyword.iskeyword()`](keyword#keyword.iskeyword "keyword.iskeyword") to test whether string `s` is a reserved identifier, such as [`def`](../reference/compound_stmts#def) and [`class`](../reference/compound_stmts#class). Example: ``` >>> from keyword import iskeyword >>> 'hello'.isidentifier(), iskeyword('hello') (True, False) >>> 'def'.isidentifier(), iskeyword('def') (True, True) ``` `str.islower()` Return `True` if all cased characters [4](#id15) in the string are lowercase and there is at least one cased character, `False` otherwise. `str.isnumeric()` Return `True` if all characters in the string are numeric characters, and there is at least one character, `False` otherwise. Numeric characters include digit characters, and all characters that have the Unicode numeric value property, e.g. U+2155, VULGAR FRACTION ONE FIFTH. Formally, numeric characters are those with the property value Numeric\_Type=Digit, Numeric\_Type=Decimal or Numeric\_Type=Numeric. `str.isprintable()` Return `True` if all characters in the string are printable or the string is empty, `False` otherwise. Nonprintable characters are those characters defined in the Unicode character database as “Other” or “Separator”, excepting the ASCII space (0x20) which is considered printable. 
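For illustration:

```
>>> 'abc def'.isprintable()   # the ASCII space counts as printable
True
>>> 'ab\ncd'.isprintable()    # '\n' falls in the "Other" category
False
>>> ''.isprintable()          # the empty string counts as printable
True
```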
(Note that printable characters in this context are those which should not be escaped when [`repr()`](functions#repr "repr") is invoked on a string. It has no bearing on the handling of strings written to [`sys.stdout`](sys#sys.stdout "sys.stdout") or [`sys.stderr`](sys#sys.stderr "sys.stderr").) `str.isspace()` Return `True` if there are only whitespace characters in the string and there is at least one character, `False` otherwise. A character is *whitespace* if in the Unicode character database (see [`unicodedata`](unicodedata#module-unicodedata "unicodedata: Access the Unicode Database.")), either its general category is `Zs` (“Separator, space”), or its bidirectional class is one of `WS`, `B`, or `S`. `str.istitle()` Return `True` if the string is a titlecased string and there is at least one character, for example uppercase characters may only follow uncased characters and lowercase characters only cased ones. Return `False` otherwise. `str.isupper()` Return `True` if all cased characters [4](#id15) in the string are uppercase and there is at least one cased character, `False` otherwise. ``` >>> 'BANANA'.isupper() True >>> 'banana'.isupper() False >>> 'baNana'.isupper() False >>> ' '.isupper() False ``` `str.join(iterable)` Return a string which is the concatenation of the strings in *iterable*. A [`TypeError`](exceptions#TypeError "TypeError") will be raised if there are any non-string values in *iterable*, including [`bytes`](#bytes "bytes") objects. The separator between elements is the string providing this method. `str.ljust(width[, fillchar])` Return the string left justified in a string of length *width*. Padding is done using the specified *fillchar* (default is an ASCII space). The original string is returned if *width* is less than or equal to `len(s)`. `str.lower()` Return a copy of the string with all the cased characters [4](#id15) converted to lowercase. The lowercasing algorithm used is described in section 3.13 of the Unicode Standard. `str.lstrip([chars])` Return a copy of the string with leading characters removed. The *chars* argument is a string specifying the set of characters to be removed. If omitted or `None`, the *chars* argument defaults to removing whitespace. The *chars* argument is not a prefix; rather, all combinations of its values are stripped: ``` >>> ' spacious '.lstrip() 'spacious ' >>> 'www.example.com'.lstrip('cmowz.') 'example.com' ``` See [`str.removeprefix()`](#str.removeprefix "str.removeprefix") for a method that will remove a single prefix string rather than all of a set of characters. For example: ``` >>> 'Arthur: three!'.lstrip('Arthur: ') 'ee!' >>> 'Arthur: three!'.removeprefix('Arthur: ') 'three!' ``` `static str.maketrans(x[, y[, z]])` This static method returns a translation table usable for [`str.translate()`](#str.translate "str.translate"). If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters (strings of length 1) to Unicode ordinals, strings (of arbitrary lengths) or `None`. Character keys will then be converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to `None` in the result. 
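The three-argument form pairs naturally with [`str.translate()`](#str.translate "str.translate"); a doctest-style illustration (the sample strings are arbitrary):

```
>>> table = str.maketrans('abc', 'xyz', '!?')
>>> 'a cab!?'.translate(table)   # 'a', 'b', 'c' are replaced; '!' and '?' are deleted
'x zxy'
```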
`str.partition(sep)` Split the string at the first occurrence of *sep*, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing the string itself, followed by two empty strings. `str.removeprefix(prefix, /)` If the string starts with the *prefix* string, return `string[len(prefix):]`. Otherwise, return a copy of the original string: ``` >>> 'TestHook'.removeprefix('Test') 'Hook' >>> 'BaseTestCase'.removeprefix('Test') 'BaseTestCase' ``` New in version 3.9. `str.removesuffix(suffix, /)` If the string ends with the *suffix* string and that *suffix* is not empty, return `string[:-len(suffix)]`. Otherwise, return a copy of the original string: ``` >>> 'MiscTests'.removesuffix('Tests') 'Misc' >>> 'TmpDirMixin'.removesuffix('Tests') 'TmpDirMixin' ``` New in version 3.9. `str.replace(old, new[, count])` Return a copy of the string with all occurrences of substring *old* replaced by *new*. If the optional argument *count* is given, only the first *count* occurrences are replaced. `str.rfind(sub[, start[, end]])` Return the highest index in the string where substring *sub* is found, such that *sub* is contained within `s[start:end]`. Optional arguments *start* and *end* are interpreted as in slice notation. Return `-1` on failure. `str.rindex(sub[, start[, end]])` Like [`rfind()`](#str.rfind "str.rfind") but raises [`ValueError`](exceptions#ValueError "ValueError") when the substring *sub* is not found. `str.rjust(width[, fillchar])` Return the string right justified in a string of length *width*. Padding is done using the specified *fillchar* (default is an ASCII space). The original string is returned if *width* is less than or equal to `len(s)`. `str.rpartition(sep)` Split the string at the last occurrence of *sep*, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing two empty strings, followed by the string itself. `str.rsplit(sep=None, maxsplit=-1)` Return a list of the words in the string, using *sep* as the delimiter string. If *maxsplit* is given, at most *maxsplit* splits are done, the *rightmost* ones. If *sep* is not specified or `None`, any whitespace string is a separator. Except for splitting from the right, [`rsplit()`](#str.rsplit "str.rsplit") behaves like [`split()`](#str.split "str.split") which is described in detail below. `str.rstrip([chars])` Return a copy of the string with trailing characters removed. The *chars* argument is a string specifying the set of characters to be removed. If omitted or `None`, the *chars* argument defaults to removing whitespace. The *chars* argument is not a suffix; rather, all combinations of its values are stripped: ``` >>> ' spacious '.rstrip() ' spacious' >>> 'mississippi'.rstrip('ipz') 'mississ' ``` See [`str.removesuffix()`](#str.removesuffix "str.removesuffix") for a method that will remove a single suffix string rather than all of a set of characters. For example: ``` >>> 'Monty Python'.rstrip(' Python') 'M' >>> 'Monty Python'.removesuffix(' Python') 'Monty' ``` `str.split(sep=None, maxsplit=-1)` Return a list of the words in the string, using *sep* as the delimiter string. If *maxsplit* is given, at most *maxsplit* splits are done (thus, the list will have at most `maxsplit+1` elements). 
If *maxsplit* is not specified or `-1`, then there is no limit on the number of splits (all possible splits are made). If *sep* is given, consecutive delimiters are not grouped together and are deemed to delimit empty strings (for example, `'1,,2'.split(',')` returns `['1', '', '2']`). The *sep* argument may consist of multiple characters (for example, `'1<>2<>3'.split('<>')` returns `['1', '2', '3']`). Splitting an empty string with a specified separator returns `['']`. For example: ``` >>> '1,2,3'.split(',') ['1', '2', '3'] >>> '1,2,3'.split(',', maxsplit=1) ['1', '2,3'] >>> '1,2,,3,'.split(',') ['1', '2', '', '3', ''] ``` If *sep* is not specified or is `None`, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace. Consequently, splitting an empty string or a string consisting of just whitespace with a `None` separator returns `[]`. For example: ``` >>> '1 2 3'.split() ['1', '2', '3'] >>> '1 2 3'.split(maxsplit=1) ['1', '2 3'] >>> ' 1 2 3 '.split() ['1', '2', '3'] ``` `str.splitlines(keepends=False)` Return a list of the lines in the string, breaking at line boundaries. Line breaks are not included in the resulting list unless *keepends* is given and true. This method splits on the following line boundaries. In particular, the boundaries are a superset of [universal newlines](../glossary#term-universal-newlines). | Representation | Description | | --- | --- | | `\n` | Line Feed | | `\r` | Carriage Return | | `\r\n` | Carriage Return + Line Feed | | `\v` or `\x0b` | Line Tabulation | | `\f` or `\x0c` | Form Feed | | `\x1c` | File Separator | | `\x1d` | Group Separator | | `\x1e` | Record Separator | | `\x85` | Next Line (C1 Control Code) | | `\u2028` | Line Separator | | `\u2029` | Paragraph Separator | Changed in version 3.2: `\v` and `\f` added to list of line boundaries. For example: ``` >>> 'ab c\n\nde fg\rkl\r\n'.splitlines() ['ab c', '', 'de fg', 'kl'] >>> 'ab c\n\nde fg\rkl\r\n'.splitlines(keepends=True) ['ab c\n', '\n', 'de fg\r', 'kl\r\n'] ``` Unlike [`split()`](#str.split "str.split") when a delimiter string *sep* is given, this method returns an empty list for the empty string, and a terminal line break does not result in an extra line: ``` >>> "".splitlines() [] >>> "One line\n".splitlines() ['One line'] ``` For comparison, `split('\n')` gives: ``` >>> ''.split('\n') [''] >>> 'Two lines\n'.split('\n') ['Two lines', ''] ``` `str.startswith(prefix[, start[, end]])` Return `True` if string starts with the *prefix*, otherwise return `False`. *prefix* can also be a tuple of prefixes to look for. With optional *start*, test string beginning at that position. With optional *end*, stop comparing string at that position. `str.strip([chars])` Return a copy of the string with the leading and trailing characters removed. The *chars* argument is a string specifying the set of characters to be removed. If omitted or `None`, the *chars* argument defaults to removing whitespace. The *chars* argument is not a prefix or suffix; rather, all combinations of its values are stripped: ``` >>> ' spacious '.strip() 'spacious' >>> 'www.example.com'.strip('cmowz.') 'example' ``` The outermost leading and trailing *chars* argument values are stripped from the string. Characters are removed from the leading end until reaching a string character that is not contained in the set of characters in *chars*. 
A similar action takes place on the trailing end. For example: ``` >>> comment_string = '#....... Section 3.2.1 Issue #32 .......' >>> comment_string.strip('.#! ') 'Section 3.2.1 Issue #32' ``` `str.swapcase()` Return a copy of the string with uppercase characters converted to lowercase and vice versa. Note that it is not necessarily true that `s.swapcase().swapcase() == s`. `str.title()` Return a titlecased version of the string where words start with an uppercase character and the remaining characters are lowercase. For example: ``` >>> 'Hello world'.title() 'Hello World' ``` The algorithm uses a simple language-independent definition of a word as groups of consecutive letters. The definition works in many contexts but it means that apostrophes in contractions and possessives form word boundaries, which may not be the desired result: ``` >>> "they're bill's friends from the UK".title() "They'Re Bill'S Friends From The Uk" ``` The [`string.capwords()`](string#string.capwords "string.capwords") function does not have this problem, as it splits words on spaces only. Alternatively, a workaround for apostrophes can be constructed using regular expressions: ``` >>> import re >>> def titlecase(s): ... return re.sub(r"[A-Za-z]+('[A-Za-z]+)?", ... lambda mo: mo.group(0).capitalize(), ... s) ... >>> titlecase("they're bill's friends.") "They're Bill's Friends." ``` `str.translate(table)` Return a copy of the string in which each character has been mapped through the given translation table. The table must be an object that implements indexing via [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__"), typically a [mapping](../glossary#term-mapping) or [sequence](../glossary#term-sequence). When indexed by a Unicode ordinal (an integer), the table object can do any of the following: return a Unicode ordinal or a string, to map the character to one or more other characters; return `None`, to delete the character from the return string; or raise a [`LookupError`](exceptions#LookupError "LookupError") exception, to map the character to itself. You can use [`str.maketrans()`](#str.maketrans "str.maketrans") to create a translation map from character-to-character mappings in different formats. See also the [`codecs`](codecs#module-codecs "codecs: Encode and decode data and streams.") module for a more flexible approach to custom character mappings. `str.upper()` Return a copy of the string with all the cased characters [4](#id15) converted to uppercase. Note that `s.upper().isupper()` might be `False` if `s` contains uncased characters or if the Unicode category of the resulting character(s) is not “Lu” (Letter, uppercase), but e.g. “Lt” (Letter, titlecase). The uppercasing algorithm used is described in section 3.13 of the Unicode Standard. `str.zfill(width)` Return a copy of the string left filled with ASCII `'0'` digits to make a string of length *width*. A leading sign prefix (`'+'`/`'-'`) is handled by inserting the padding *after* the sign character rather than before. The original string is returned if *width* is less than or equal to `len(s)`. For example: ``` >>> "42".zfill(5) '00042' >>> "-42".zfill(5) '-0042' ``` ### `printf`-style String Formatting Note The formatting operations described here exhibit a variety of quirks that lead to a number of common errors (such as failing to display tuples and dictionaries correctly). 
Using the newer [formatted string literals](../reference/lexical_analysis#f-strings), the [`str.format()`](#str.format "str.format") interface, or [template strings](string#template-strings) may help avoid these errors. Each of these alternatives provides their own trade-offs and benefits of simplicity, flexibility, and/or extensibility. String objects have one unique built-in operation: the `%` operator (modulo). This is also known as the string *formatting* or *interpolation* operator. Given `format % values` (where *format* is a string), `%` conversion specifications in *format* are replaced with zero or more elements of *values*. The effect is similar to using the `sprintf()` in the C language. If *format* requires a single argument, *values* may be a single non-tuple object. [5](#id16) Otherwise, *values* must be a tuple with exactly the number of items specified by the format string, or a single mapping object (for example, a dictionary). A conversion specifier contains two or more characters and has the following components, which must occur in this order: 1. The `'%'` character, which marks the start of the specifier. 2. Mapping key (optional), consisting of a parenthesised sequence of characters (for example, `(somename)`). 3. Conversion flags (optional), which affect the result of some conversion types. 4. Minimum field width (optional). If specified as an `'*'` (asterisk), the actual width is read from the next element of the tuple in *values*, and the object to convert comes after the minimum field width and optional precision. 5. Precision (optional), given as a `'.'` (dot) followed by the precision. If specified as `'*'` (an asterisk), the actual precision is read from the next element of the tuple in *values*, and the value to convert comes after the precision. 6. Length modifier (optional). 7. Conversion type. When the right argument is a dictionary (or other mapping type), then the formats in the string *must* include a parenthesised mapping key into that dictionary inserted immediately after the `'%'` character. The mapping key selects the value to be formatted from the mapping. For example: ``` >>> print('%(language)s has %(number)03d quote types.' % ... {'language': "Python", "number": 2}) Python has 002 quote types. ``` In this case no `*` specifiers may occur in a format (since they require a sequential parameter list). The conversion flag characters are: | Flag | Meaning | | --- | --- | | `'#'` | The value conversion will use the “alternate form” (where defined below). | | `'0'` | The conversion will be zero padded for numeric values. | | `'-'` | The converted value is left adjusted (overrides the `'0'` conversion if both are given). | | `' '` | (a space) A blank should be left before a positive number (or empty string) produced by a signed conversion. | | `'+'` | A sign character (`'+'` or `'-'`) will precede the conversion (overrides a “space” flag). | A length modifier (`h`, `l`, or `L`) may be present, but is ignored as it is not necessary for Python – so e.g. `%ld` is identical to `%d`. The conversion types are: | Conversion | Meaning | Notes | | --- | --- | --- | | `'d'` | Signed integer decimal. | | | `'i'` | Signed integer decimal. | | | `'o'` | Signed octal value. | (1) | | `'u'` | Obsolete type – it is identical to `'d'`. | (6) | | `'x'` | Signed hexadecimal (lowercase). | (2) | | `'X'` | Signed hexadecimal (uppercase). | (2) | | `'e'` | Floating point exponential format (lowercase). | (3) | | `'E'` | Floating point exponential format (uppercase). 
| (3) | | `'f'` | Floating point decimal format. | (3) | | `'F'` | Floating point decimal format. | (3) | | `'g'` | Floating point format. Uses lowercase exponential format if exponent is less than -4 or not less than precision, decimal format otherwise. | (4) | | `'G'` | Floating point format. Uses uppercase exponential format if exponent is less than -4 or not less than precision, decimal format otherwise. | (4) | | `'c'` | Single character (accepts integer or single character string). | | | `'r'` | String (converts any Python object using [`repr()`](functions#repr "repr")). | (5) | | `'s'` | String (converts any Python object using [`str()`](#str "str")). | (5) | | `'a'` | String (converts any Python object using [`ascii()`](functions#ascii "ascii")). | (5) | | `'%'` | No argument is converted, results in a `'%'` character in the result. | | Notes: 1. The alternate form causes a leading octal specifier (`'0o'`) to be inserted before the first digit. 2. The alternate form causes a leading `'0x'` or `'0X'` (depending on whether the `'x'` or `'X'` format was used) to be inserted before the first digit. 3. The alternate form causes the result to always contain a decimal point, even if no digits follow it. The precision determines the number of digits after the decimal point and defaults to 6. 4. The alternate form causes the result to always contain a decimal point, and trailing zeroes are not removed as they would otherwise be. The precision determines the number of significant digits before and after the decimal point and defaults to 6. 5. If precision is `N`, the output is truncated to `N` characters. 6. See [**PEP 237**](https://www.python.org/dev/peps/pep-0237). Since Python strings have an explicit length, `%s` conversions do not assume that `'\0'` is the end of the string. Changed in version 3.1: `%f` conversions for numbers whose absolute value is over 1e50 are no longer replaced by `%g` conversions. Binary Sequence Types — bytes, bytearray, memoryview ---------------------------------------------------- The core built-in types for manipulating binary data are [`bytes`](#bytes "bytes") and [`bytearray`](#bytearray "bytearray"). They are supported by [`memoryview`](#memoryview "memoryview") which uses the [buffer protocol](../c-api/buffer#bufferobjects) to access the memory of other binary objects without needing to make a copy. The [`array`](array#module-array "array: Space efficient arrays of uniformly typed numeric values.") module supports efficient storage of basic data types like 32-bit integers and IEEE754 double-precision floating values. ### Bytes Objects Bytes objects are immutable sequences of single bytes. Since many major binary protocols are based on the ASCII text encoding, bytes objects offer several methods that are only valid when working with ASCII compatible data and are closely related to string objects in a variety of other ways. `class bytes([source[, encoding[, errors]]])` Firstly, the syntax for bytes literals is largely the same as that for string literals, except that a `b` prefix is added: * Single quotes: `b'still allows embedded "double" quotes'` * Double quotes: `b"still allows embedded 'single' quotes"` * Triple quoted: `b'''3 single quotes'''`, `b"""3 double quotes"""` Only ASCII characters are permitted in bytes literals (regardless of the declared source code encoding). Any binary values over 127 must be entered into bytes literals using the appropriate escape sequence. 
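For example, the two bytes of the UTF-8 encoding of `'é'` have to be written as escapes (an illustrative session):

```
>>> cafe = b'caf\xc3\xa9'   # writing b'café' directly would be a syntax error
>>> cafe
b'caf\xc3\xa9'
>>> cafe.decode('utf-8')
'café'
```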
As with string literals, bytes literals may also use an `r` prefix to disable processing of escape sequences. See [String and Bytes literals](../reference/lexical_analysis#strings) for more about the various forms of bytes literal, including supported escape sequences.

While bytes literals and representations are based on ASCII text, bytes objects actually behave like immutable sequences of integers, with each value in the sequence restricted such that `0 <= x < 256` (attempts to violate this restriction will trigger [`ValueError`](exceptions#ValueError "ValueError")). This is done deliberately to emphasise that while many binary formats include ASCII based elements and can be usefully manipulated with some text-oriented algorithms, this is not generally the case for arbitrary binary data (blindly applying text processing algorithms to binary data formats that are not ASCII compatible will usually lead to data corruption).

In addition to the literal forms, bytes objects can be created in a number of other ways:

* A zero-filled bytes object of a specified length: `bytes(10)`
* From an iterable of integers: `bytes(range(20))`
* Copying existing binary data via the buffer protocol: `bytes(obj)`

Also see the [bytes](functions#func-bytes) built-in.

Since 2 hexadecimal digits correspond precisely to a single byte, hexadecimal numbers are a commonly used format for describing binary data. Accordingly, the bytes type has an additional class method to read data in that format:

`classmethod fromhex(string)` This [`bytes`](#bytes "bytes") class method returns a bytes object, decoding the given string object. The string must contain two hexadecimal digits per byte, with ASCII whitespace being ignored.

```
>>> bytes.fromhex('2Ef0 F1f2 ')
b'.\xf0\xf1\xf2'
```

Changed in version 3.7: [`bytes.fromhex()`](#bytes.fromhex "bytes.fromhex") now skips all ASCII whitespace in the string, not just spaces.

A reverse conversion function exists to transform a bytes object into its hexadecimal representation.

`hex([sep[, bytes_per_sep]])` Return a string object containing two hexadecimal digits for each byte in the instance.

```
>>> b'\xf0\xf1\xf2'.hex()
'f0f1f2'
```

If you want to make the hex string easier to read, you can specify a single-character separator *sep* parameter to include in the output; by default it is placed between each byte. A second optional *bytes\_per\_sep* parameter controls the spacing. Positive values calculate the separator position from the right, negative values from the left.

```
>>> value = b'\xf0\xf1\xf2'
>>> value.hex('-')
'f0-f1-f2'
>>> value.hex('_', 2)
'f0_f1f2'
>>> b'UUDDLRLRAB'.hex(' ', -4)
'55554444 4c524c52 4142'
```

New in version 3.5. Changed in version 3.8: [`bytes.hex()`](#bytes.hex "bytes.hex") now supports optional *sep* and *bytes\_per\_sep* parameters to insert separators between bytes in the hex output.

Since bytes objects are sequences of integers (akin to a tuple), for a bytes object *b*, `b[0]` will be an integer, while `b[0:1]` will be a bytes object of length 1. (This contrasts with text strings, where both indexing and slicing will produce a string of length 1.)

The representation of bytes objects uses the literal format (`b'...'`) since it is often more useful than e.g. `bytes([46, 46, 46])`. You can always convert a bytes object into a list of integers using `list(b)`.

### Bytearray Objects

[`bytearray`](#bytearray "bytearray") objects are a mutable counterpart to [`bytes`](#bytes "bytes") objects.
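A quick illustration of that mutability (the sample data is arbitrary):

```
>>> data = bytearray(b'spam')
>>> data[0] = ord('S')   # items are integers in range(256)
>>> data
bytearray(b'Spam')
```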
`class bytearray([source[, encoding[, errors]]])` There is no dedicated literal syntax for bytearray objects; instead, they are always created by calling the constructor:

* Creating an empty instance: `bytearray()`
* Creating a zero-filled instance with a given length: `bytearray(10)`
* From an iterable of integers: `bytearray(range(20))`
* Copying existing binary data via the buffer protocol: `bytearray(b'Hi!')`

As bytearray objects are mutable, they support the [mutable](#typesseq-mutable) sequence operations in addition to the common bytes and bytearray operations described in [Bytes and Bytearray Operations](#bytes-methods). Also see the [bytearray](functions#func-bytearray) built-in.

Since 2 hexadecimal digits correspond precisely to a single byte, hexadecimal numbers are a commonly used format for describing binary data. Accordingly, the bytearray type has an additional class method to read data in that format:

`classmethod fromhex(string)` This [`bytearray`](#bytearray "bytearray") class method returns a bytearray object, decoding the given string object. The string must contain two hexadecimal digits per byte, with ASCII whitespace being ignored.

```
>>> bytearray.fromhex('2Ef0 F1f2 ')
bytearray(b'.\xf0\xf1\xf2')
```

Changed in version 3.7: [`bytearray.fromhex()`](#bytearray.fromhex "bytearray.fromhex") now skips all ASCII whitespace in the string, not just spaces.

A reverse conversion function exists to transform a bytearray object into its hexadecimal representation.

`hex([sep[, bytes_per_sep]])` Return a string object containing two hexadecimal digits for each byte in the instance.

```
>>> bytearray(b'\xf0\xf1\xf2').hex()
'f0f1f2'
```

New in version 3.5. Changed in version 3.8: Similar to [`bytes.hex()`](#bytes.hex "bytes.hex"), [`bytearray.hex()`](#bytearray.hex "bytearray.hex") now supports optional *sep* and *bytes\_per\_sep* parameters to insert separators between bytes in the hex output.

Since bytearray objects are sequences of integers (akin to a list), for a bytearray object *b*, `b[0]` will be an integer, while `b[0:1]` will be a bytearray object of length 1. (This contrasts with text strings, where both indexing and slicing will produce a string of length 1.)

The representation of bytearray objects uses the bytes literal format (`bytearray(b'...')`) since it is often more useful than e.g. `bytearray([46, 46, 46])`. You can always convert a bytearray object into a list of integers using `list(b)`.

### Bytes and Bytearray Operations

Both bytes and bytearray objects support the [common](#typesseq-common) sequence operations. They interoperate not just with operands of the same type, but with any [bytes-like object](../glossary#term-bytes-like-object). Due to this flexibility, they can be freely mixed in operations without causing errors. However, the return type of the result may depend on the order of operands.

Note The methods on bytes and bytearray objects don’t accept strings as their arguments, just as the methods on strings don’t accept bytes as their arguments. For example, you have to write:

```
a = "abc"
b = a.replace("a", "f")
```

and:

```
a = b"abc"
b = a.replace(b"a", b"f")
```

Some bytes and bytearray operations assume the use of ASCII compatible binary formats, and hence should be avoided when working with arbitrary binary data. These restrictions are covered below.

Note Using these ASCII based operations to manipulate binary data that is not stored in an ASCII based format may lead to data corruption.
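Returning to the point about operand order above, a short illustrative session:

```
>>> b'ab' + bytearray(b'cd')   # the left operand determines the result type here
b'abcd'
>>> bytearray(b'ab') + b'cd'
bytearray(b'abcd')
```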
The following methods on bytes and bytearray objects can be used with arbitrary binary data. `bytes.count(sub[, start[, end]])` `bytearray.count(sub[, start[, end]])` Return the number of non-overlapping occurrences of subsequence *sub* in the range [*start*, *end*]. Optional arguments *start* and *end* are interpreted as in slice notation. The subsequence to search for may be any [bytes-like object](../glossary#term-bytes-like-object) or an integer in the range 0 to 255. Changed in version 3.3: Also accept an integer in the range 0 to 255 as the subsequence. `bytes.removeprefix(prefix, /)` `bytearray.removeprefix(prefix, /)` If the binary data starts with the *prefix* string, return `bytes[len(prefix):]`. Otherwise, return a copy of the original binary data: ``` >>> b'TestHook'.removeprefix(b'Test') b'Hook' >>> b'BaseTestCase'.removeprefix(b'Test') b'BaseTestCase' ``` The *prefix* may be any [bytes-like object](../glossary#term-bytes-like-object). Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. New in version 3.9. `bytes.removesuffix(suffix, /)` `bytearray.removesuffix(suffix, /)` If the binary data ends with the *suffix* string and that *suffix* is not empty, return `bytes[:-len(suffix)]`. Otherwise, return a copy of the original binary data: ``` >>> b'MiscTests'.removesuffix(b'Tests') b'Misc' >>> b'TmpDirMixin'.removesuffix(b'Tests') b'TmpDirMixin' ``` The *suffix* may be any [bytes-like object](../glossary#term-bytes-like-object). Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. New in version 3.9. `bytes.decode(encoding="utf-8", errors="strict")` `bytearray.decode(encoding="utf-8", errors="strict")` Return a string decoded from the given bytes. Default encoding is `'utf-8'`. *errors* may be given to set a different error handling scheme. The default for *errors* is `'strict'`, meaning that decoding errors raise a [`UnicodeError`](exceptions#UnicodeError "UnicodeError"). Other possible values are `'ignore'`, `'replace'` and any other name registered via [`codecs.register_error()`](codecs#codecs.register_error "codecs.register_error"), see section [Error Handlers](codecs#error-handlers). For a list of possible encodings, see section [Standard Encodings](codecs#standard-encodings). For performance reasons, the *errors* argument is not checked for validity by default; it is only used when the first decoding error occurs. Enable the [Python Development Mode](devmode#devmode), or use a debug build, to have *errors* checked. Note Passing the *encoding* argument to [`str`](#str "str") allows decoding any [bytes-like object](../glossary#term-bytes-like-object) directly, without needing to make a temporary bytes or bytearray object. Changed in version 3.1: Added support for keyword arguments. Changed in version 3.9: The value of *errors* is now checked in development mode and in debug mode. `bytes.endswith(suffix[, start[, end]])` `bytearray.endswith(suffix[, start[, end]])` Return `True` if the binary data ends with the specified *suffix*, otherwise return `False`. *suffix* can also be a tuple of suffixes to look for. With optional *start*, test beginning at that position. With optional *end*, stop comparing at that position. The suffix(es) to search for may be any [bytes-like object](../glossary#term-bytes-like-object).
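For instance, the tuple form of *suffix* makes it easy to test several endings at once (the file name here is purely illustrative):

```
>>> b'archive.tar.gz'.endswith(b'.gz')
True
>>> b'archive.tar.gz'.endswith((b'.zip', b'.gz'))
True
>>> b'archive.tar.gz'.endswith(b'.tar', 0, 11)    # compare only within s[0:11]
True
```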
`bytes.find(sub[, start[, end]])` `bytearray.find(sub[, start[, end]])` Return the lowest index in the data where the subsequence *sub* is found, such that *sub* is contained in the slice `s[start:end]`. Optional arguments *start* and *end* are interpreted as in slice notation. Return `-1` if *sub* is not found. The subsequence to search for may be any [bytes-like object](../glossary#term-bytes-like-object) or an integer in the range 0 to 255. Note The [`find()`](#bytes.find "bytes.find") method should be used only if you need to know the position of *sub*. To check if *sub* is a substring or not, use the [`in`](../reference/expressions#in) operator: ``` >>> b'Py' in b'Python' True ``` Changed in version 3.3: Also accept an integer in the range 0 to 255 as the subsequence. `bytes.index(sub[, start[, end]])` `bytearray.index(sub[, start[, end]])` Like [`find()`](#bytes.find "bytes.find"), but raise [`ValueError`](exceptions#ValueError "ValueError") when the subsequence is not found. The subsequence to search for may be any [bytes-like object](../glossary#term-bytes-like-object) or an integer in the range 0 to 255. Changed in version 3.3: Also accept an integer in the range 0 to 255 as the subsequence. `bytes.join(iterable)` `bytearray.join(iterable)` Return a bytes or bytearray object which is the concatenation of the binary data sequences in *iterable*. A [`TypeError`](exceptions#TypeError "TypeError") will be raised if there are any values in *iterable* that are not [bytes-like objects](../glossary#term-bytes-like-object), including [`str`](#str "str") objects. The separator between elements is the contents of the bytes or bytearray object providing this method. `static bytes.maketrans(from, to)` `static bytearray.maketrans(from, to)` This static method returns a translation table usable for [`bytes.translate()`](#bytes.translate "bytes.translate") that will map each character in *from* into the character at the same position in *to*; *from* and *to* must both be [bytes-like objects](../glossary#term-bytes-like-object) and have the same length. New in version 3.1. `bytes.partition(sep)` `bytearray.partition(sep)` Split the sequence at the first occurrence of *sep*, and return a 3-tuple containing the part before the separator, the separator itself or its bytearray copy, and the part after the separator. If the separator is not found, return a 3-tuple containing a copy of the original sequence, followed by two empty bytes or bytearray objects. The separator to search for may be any [bytes-like object](../glossary#term-bytes-like-object). `bytes.replace(old, new[, count])` `bytearray.replace(old, new[, count])` Return a copy of the sequence with all occurrences of subsequence *old* replaced by *new*. If the optional argument *count* is given, only the first *count* occurrences are replaced. The subsequence to search for and its replacement may be any [bytes-like object](../glossary#term-bytes-like-object). Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.rfind(sub[, start[, end]])` `bytearray.rfind(sub[, start[, end]])` Return the highest index in the sequence where the subsequence *sub* is found, such that *sub* is contained within `s[start:end]`. Optional arguments *start* and *end* are interpreted as in slice notation. Return `-1` on failure. The subsequence to search for may be any [bytes-like object](../glossary#term-bytes-like-object) or an integer in the range 0 to 255. 
Changed in version 3.3: Also accept an integer in the range 0 to 255 as the subsequence. `bytes.rindex(sub[, start[, end]])` `bytearray.rindex(sub[, start[, end]])` Like [`rfind()`](#bytes.rfind "bytes.rfind") but raises [`ValueError`](exceptions#ValueError "ValueError") when the subsequence *sub* is not found. The subsequence to search for may be any [bytes-like object](../glossary#term-bytes-like-object) or an integer in the range 0 to 255. Changed in version 3.3: Also accept an integer in the range 0 to 255 as the subsequence. `bytes.rpartition(sep)` `bytearray.rpartition(sep)` Split the sequence at the last occurrence of *sep*, and return a 3-tuple containing the part before the separator, the separator itself or its bytearray copy, and the part after the separator. If the separator is not found, return a 3-tuple containing two empty bytes or bytearray objects, followed by a copy of the original sequence. The separator to search for may be any [bytes-like object](../glossary#term-bytes-like-object). `bytes.startswith(prefix[, start[, end]])` `bytearray.startswith(prefix[, start[, end]])` Return `True` if the binary data starts with the specified *prefix*, otherwise return `False`. *prefix* can also be a tuple of prefixes to look for. With optional *start*, test beginning at that position. With optional *end*, stop comparing at that position. The prefix(es) to search for may be any [bytes-like object](../glossary#term-bytes-like-object). `bytes.translate(table, /, delete=b'')` `bytearray.translate(table, /, delete=b'')` Return a copy of the bytes or bytearray object where all bytes occurring in the optional argument *delete* are removed, and the remaining bytes have been mapped through the given translation table, which must be a bytes object of length 256. You can use the [`bytes.maketrans()`](#bytes.maketrans "bytes.maketrans") method to create a translation table. Set the *table* argument to `None` for translations that only delete characters: ``` >>> b'read this short text'.translate(None, b'aeiou') b'rd ths shrt txt' ``` Changed in version 3.6: *delete* is now supported as a keyword argument. The following methods on bytes and bytearray objects have default behaviours that assume the use of ASCII compatible binary formats, but can still be used with arbitrary binary data by passing appropriate arguments. Note that all of the bytearray methods in this section do *not* operate in place, and instead produce new objects. `bytes.center(width[, fillbyte])` `bytearray.center(width[, fillbyte])` Return a copy of the object centered in a sequence of length *width*. Padding is done using the specified *fillbyte* (default is an ASCII space). For [`bytes`](#bytes "bytes") objects, the original sequence is returned if *width* is less than or equal to `len(s)`. Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.ljust(width[, fillbyte])` `bytearray.ljust(width[, fillbyte])` Return a copy of the object left justified in a sequence of length *width*. Padding is done using the specified *fillbyte* (default is an ASCII space). For [`bytes`](#bytes "bytes") objects, the original sequence is returned if *width* is less than or equal to `len(s)`. Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.lstrip([chars])` `bytearray.lstrip([chars])` Return a copy of the sequence with specified leading bytes removed. 
The *chars* argument is a binary sequence specifying the set of byte values to be removed - the name refers to the fact this method is usually used with ASCII characters. If omitted or `None`, the *chars* argument defaults to removing ASCII whitespace. The *chars* argument is not a prefix; rather, all combinations of its values are stripped: ``` >>> b' spacious '.lstrip() b'spacious ' >>> b'www.example.com'.lstrip(b'cmowz.') b'example.com' ``` The binary sequence of byte values to remove may be any [bytes-like object](../glossary#term-bytes-like-object). See [`removeprefix()`](#bytes.removeprefix "bytes.removeprefix") for a method that will remove a single prefix string rather than all of a set of characters. For example: ``` >>> b'Arthur: three!'.lstrip(b'Arthur: ') b'ee!' >>> b'Arthur: three!'.removeprefix(b'Arthur: ') b'three!' ``` Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.rjust(width[, fillbyte])` `bytearray.rjust(width[, fillbyte])` Return a copy of the object right justified in a sequence of length *width*. Padding is done using the specified *fillbyte* (default is an ASCII space). For [`bytes`](#bytes "bytes") objects, the original sequence is returned if *width* is less than or equal to `len(s)`. Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.rsplit(sep=None, maxsplit=-1)` `bytearray.rsplit(sep=None, maxsplit=-1)` Split the binary sequence into subsequences of the same type, using *sep* as the delimiter string. If *maxsplit* is given, at most *maxsplit* splits are done, the *rightmost* ones. If *sep* is not specified or `None`, any subsequence consisting solely of ASCII whitespace is a separator. Except for splitting from the right, [`rsplit()`](#bytearray.rsplit "bytearray.rsplit") behaves like [`split()`](#bytearray.split "bytearray.split") which is described in detail below. `bytes.rstrip([chars])` `bytearray.rstrip([chars])` Return a copy of the sequence with specified trailing bytes removed. The *chars* argument is a binary sequence specifying the set of byte values to be removed - the name refers to the fact this method is usually used with ASCII characters. If omitted or `None`, the *chars* argument defaults to removing ASCII whitespace. The *chars* argument is not a suffix; rather, all combinations of its values are stripped: ``` >>> b' spacious '.rstrip() b' spacious' >>> b'mississippi'.rstrip(b'ipz') b'mississ' ``` The binary sequence of byte values to remove may be any [bytes-like object](../glossary#term-bytes-like-object). See [`removesuffix()`](#bytes.removesuffix "bytes.removesuffix") for a method that will remove a single suffix string rather than all of a set of characters. For example: ``` >>> b'Monty Python'.rstrip(b' Python') b'M' >>> b'Monty Python'.removesuffix(b' Python') b'Monty' ``` Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.split(sep=None, maxsplit=-1)` `bytearray.split(sep=None, maxsplit=-1)` Split the binary sequence into subsequences of the same type, using *sep* as the delimiter string. If *maxsplit* is given and non-negative, at most *maxsplit* splits are done (thus, the list will have at most `maxsplit+1` elements). If *maxsplit* is not specified or is `-1`, then there is no limit on the number of splits (all possible splits are made). 
If *sep* is given, consecutive delimiters are not grouped together and are deemed to delimit empty subsequences (for example, `b'1,,2'.split(b',')` returns `[b'1', b'', b'2']`). The *sep* argument may consist of a multibyte sequence (for example, `b'1<>2<>3'.split(b'<>')` returns `[b'1', b'2', b'3']`). Splitting an empty sequence with a specified separator returns `[b'']` or `[bytearray(b'')]` depending on the type of object being split. The *sep* argument may be any [bytes-like object](../glossary#term-bytes-like-object). For example: ``` >>> b'1,2,3'.split(b',') [b'1', b'2', b'3'] >>> b'1,2,3'.split(b',', maxsplit=1) [b'1', b'2,3'] >>> b'1,2,,3,'.split(b',') [b'1', b'2', b'', b'3', b''] ``` If *sep* is not specified or is `None`, a different splitting algorithm is applied: runs of consecutive ASCII whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the sequence has leading or trailing whitespace. Consequently, splitting an empty sequence or a sequence consisting solely of ASCII whitespace without a specified separator returns `[]`. For example: ``` >>> b'1 2 3'.split() [b'1', b'2', b'3'] >>> b'1 2 3'.split(maxsplit=1) [b'1', b'2 3'] >>> b' 1 2 3 '.split() [b'1', b'2', b'3'] ``` `bytes.strip([chars])` `bytearray.strip([chars])` Return a copy of the sequence with specified leading and trailing bytes removed. The *chars* argument is a binary sequence specifying the set of byte values to be removed - the name refers to the fact this method is usually used with ASCII characters. If omitted or `None`, the *chars* argument defaults to removing ASCII whitespace. The *chars* argument is not a prefix or suffix; rather, all combinations of its values are stripped: ``` >>> b' spacious '.strip() b'spacious' >>> b'www.example.com'.strip(b'cmowz.') b'example' ``` The binary sequence of byte values to remove may be any [bytes-like object](../glossary#term-bytes-like-object). Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. The following methods on bytes and bytearray objects assume the use of ASCII compatible binary formats and should not be applied to arbitrary binary data. Note that all of the bytearray methods in this section do *not* operate in place, and instead produce new objects. `bytes.capitalize()` `bytearray.capitalize()` Return a copy of the sequence with each byte interpreted as an ASCII character, and the first byte capitalized and the rest lowercased. Non-ASCII byte values are passed through unchanged. Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.expandtabs(tabsize=8)` `bytearray.expandtabs(tabsize=8)` Return a copy of the sequence where all ASCII tab characters are replaced by one or more ASCII spaces, depending on the current column and the given tab size. Tab positions occur every *tabsize* bytes (default is 8, giving tab positions at columns 0, 8, 16 and so on). To expand the sequence, the current column is set to zero and the sequence is examined byte by byte. If the byte is an ASCII tab character (`b'\t'`), one or more space characters are inserted in the result until the current column is equal to the next tab position. (The tab character itself is not copied.) If the current byte is an ASCII newline (`b'\n'`) or carriage return (`b'\r'`), it is copied and the current column is reset to zero. 
Any other byte value is copied unchanged and the current column is incremented by one regardless of how the byte value is represented when printed: ``` >>> b'01\t012\t0123\t01234'.expandtabs() b'01 012 0123 01234' >>> b'01\t012\t0123\t01234'.expandtabs(4) b'01 012 0123 01234' ``` Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.isalnum()` `bytearray.isalnum()` Return `True` if all bytes in the sequence are alphabetic ASCII characters or ASCII decimal digits and the sequence is not empty, `False` otherwise. Alphabetic ASCII characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'`. ASCII decimal digits are those byte values in the sequence `b'0123456789'`. For example: ``` >>> b'ABCabc1'.isalnum() True >>> b'ABC abc1'.isalnum() False ``` `bytes.isalpha()` `bytearray.isalpha()` Return `True` if all bytes in the sequence are alphabetic ASCII characters and the sequence is not empty, `False` otherwise. Alphabetic ASCII characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'`. For example: ``` >>> b'ABCabc'.isalpha() True >>> b'ABCabc1'.isalpha() False ``` `bytes.isascii()` `bytearray.isascii()` Return `True` if the sequence is empty or all bytes in the sequence are ASCII, `False` otherwise. ASCII bytes are in the range 0-0x7F. New in version 3.7. `bytes.isdigit()` `bytearray.isdigit()` Return `True` if all bytes in the sequence are ASCII decimal digits and the sequence is not empty, `False` otherwise. ASCII decimal digits are those byte values in the sequence `b'0123456789'`. For example: ``` >>> b'1234'.isdigit() True >>> b'1.23'.isdigit() False ``` `bytes.islower()` `bytearray.islower()` Return `True` if there is at least one lowercase ASCII character in the sequence and no uppercase ASCII characters, `False` otherwise. For example: ``` >>> b'hello world'.islower() True >>> b'Hello world'.islower() False ``` Lowercase ASCII characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyz'`. Uppercase ASCII characters are those byte values in the sequence `b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`. `bytes.isspace()` `bytearray.isspace()` Return `True` if all bytes in the sequence are ASCII whitespace and the sequence is not empty, `False` otherwise. ASCII whitespace characters are those byte values in the sequence `b' \t\n\r\x0b\f'` (space, tab, newline, carriage return, vertical tab, form feed). `bytes.istitle()` `bytearray.istitle()` Return `True` if the sequence is ASCII titlecase and the sequence is not empty, `False` otherwise. See [`bytes.title()`](#bytes.title "bytes.title") for more details on the definition of “titlecase”. For example: ``` >>> b'Hello World'.istitle() True >>> b'Hello world'.istitle() False ``` `bytes.isupper()` `bytearray.isupper()` Return `True` if there is at least one uppercase alphabetic ASCII character in the sequence and no lowercase ASCII characters, `False` otherwise. For example: ``` >>> b'HELLO WORLD'.isupper() True >>> b'Hello world'.isupper() False ``` Lowercase ASCII characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyz'`. Uppercase ASCII characters are those byte values in the sequence `b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`. `bytes.lower()` `bytearray.lower()` Return a copy of the sequence with all the uppercase ASCII characters converted to their corresponding lowercase counterpart.
For example: ``` >>> b'Hello World'.lower() b'hello world' ``` Lowercase ASCII characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyz'`. Uppercase ASCII characters are those byte values in the sequence `b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`. Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.splitlines(keepends=False)` `bytearray.splitlines(keepends=False)` Return a list of the lines in the binary sequence, breaking at ASCII line boundaries. This method uses the [universal newlines](../glossary#term-universal-newlines) approach to splitting lines. Line breaks are not included in the resulting list unless *keepends* is given and true. For example: ``` >>> b'ab c\n\nde fg\rkl\r\n'.splitlines() [b'ab c', b'', b'de fg', b'kl'] >>> b'ab c\n\nde fg\rkl\r\n'.splitlines(keepends=True) [b'ab c\n', b'\n', b'de fg\r', b'kl\r\n'] ``` Unlike [`split()`](#bytes.split "bytes.split") when a delimiter string *sep* is given, this method returns an empty list for the empty string, and a terminal line break does not result in an extra line: ``` >>> b"".split(b'\n'), b"Two lines\n".split(b'\n') ([b''], [b'Two lines', b'']) >>> b"".splitlines(), b"One line\n".splitlines() ([], [b'One line']) ``` `bytes.swapcase()` `bytearray.swapcase()` Return a copy of the sequence with all the lowercase ASCII characters converted to their corresponding uppercase counterpart and vice-versa. For example: ``` >>> b'Hello World'.swapcase() b'hELLO wORLD' ``` Lowercase ASCII characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyz'`. Uppercase ASCII characters are those byte values in the sequence `b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`. Unlike [`str.swapcase()`](#str.swapcase "str.swapcase"), it is always the case that `bin.swapcase().swapcase() == bin` for the binary versions. Case conversions are symmetrical in ASCII, even though that is not generally true for arbitrary Unicode code points. Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.title()` `bytearray.title()` Return a titlecased version of the binary sequence where words start with an uppercase ASCII character and the remaining characters are lowercase. Uncased byte values are left unmodified. For example: ``` >>> b'Hello world'.title() b'Hello World' ``` Lowercase ASCII characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyz'`. Uppercase ASCII characters are those byte values in the sequence `b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`. All other byte values are uncased. The algorithm uses a simple language-independent definition of a word as groups of consecutive letters. The definition works in many contexts but it means that apostrophes in contractions and possessives form word boundaries, which may not be the desired result: ``` >>> b"they're bill's friends from the UK".title() b"They'Re Bill'S Friends From The Uk" ``` A workaround for apostrophes can be constructed using regular expressions: ``` >>> import re >>> def titlecase(s): ... return re.sub(rb"[A-Za-z]+('[A-Za-z]+)?", ... lambda mo: mo.group(0)[0:1].upper() + ... mo.group(0)[1:].lower(), ... s) ... >>> titlecase(b"they're bill's friends.") b"They're Bill's Friends." ``` Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. 
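As a concrete illustration of the `swapcase()` round-trip guarantee noted above, compare the binary version with `str.swapcase()`, where `'ß'` uppercases to `'SS'` and the round trip is lost (the sample word is arbitrary):

```
>>> b'Hello World'.swapcase().swapcase() == b'Hello World'
True
>>> 'Straße'.swapcase().swapcase()    # 'ß' -> 'SS' -> 'ss'
'Strasse'
```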
`bytes.upper()` `bytearray.upper()` Return a copy of the sequence with all the lowercase ASCII characters converted to their corresponding uppercase counterpart. For example: ``` >>> b'Hello World'.upper() b'HELLO WORLD' ``` Lowercase ASCII characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyz'`. Uppercase ASCII characters are those byte values in the sequence `b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`. Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. `bytes.zfill(width)` `bytearray.zfill(width)` Return a copy of the sequence left filled with ASCII `b'0'` digits to make a sequence of length *width*. A leading sign prefix (`b'+'`/ `b'-'`) is handled by inserting the padding *after* the sign character rather than before. For [`bytes`](#bytes "bytes") objects, the original sequence is returned if *width* is less than or equal to `len(seq)`. For example: ``` >>> b"42".zfill(5) b'00042' >>> b"-42".zfill(5) b'-0042' ``` Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. ### `printf`-style Bytes Formatting Note The formatting operations described here exhibit a variety of quirks that lead to a number of common errors (such as failing to display tuples and dictionaries correctly). If the value being printed may be a tuple or dictionary, wrap it in a tuple. Bytes objects (`bytes`/`bytearray`) have one unique built-in operation: the `%` operator (modulo). This is also known as the bytes *formatting* or *interpolation* operator. Given `format % values` (where *format* is a bytes object), `%` conversion specifications in *format* are replaced with zero or more elements of *values*. The effect is similar to using `sprintf()` in the C language. If *format* requires a single argument, *values* may be a single non-tuple object. [5](#id16) Otherwise, *values* must be a tuple with exactly the number of items specified by the format bytes object, or a single mapping object (for example, a dictionary). A conversion specifier contains two or more characters and has the following components, which must occur in this order: 1. The `'%'` character, which marks the start of the specifier. 2. Mapping key (optional), consisting of a parenthesised sequence of characters (for example, `(somename)`). 3. Conversion flags (optional), which affect the result of some conversion types. 4. Minimum field width (optional). If specified as `'*'` (an asterisk), the actual width is read from the next element of the tuple in *values*, and the object to convert comes after the minimum field width and optional precision. 5. Precision (optional), given as a `'.'` (dot) followed by the precision. If specified as `'*'` (an asterisk), the actual precision is read from the next element of the tuple in *values*, and the value to convert comes after the precision. 6. Length modifier (optional). 7. Conversion type. When the right argument is a dictionary (or other mapping type), then the formats in the bytes object *must* include a parenthesised mapping key into that dictionary inserted immediately after the `'%'` character. The mapping key selects the value to be formatted from the mapping. For example: ``` >>> print(b'%(language)s has %(number)03d quote types.' % ... {b'language': b"Python", b"number": 2}) b'Python has 002 quote types.' ``` In this case no `*` specifiers may occur in a format (since they require a sequential parameter list).
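With a tuple of values (rather than a mapping), the `'*'` forms described in components 4 and 5 above do work; a minimal sketch:

```
>>> b'%*.*f' % (10, 2, 3.14159)    # width 10 and precision 2 are read from the tuple
b'      3.14'
```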
The conversion flag characters are: | Flag | Meaning | | --- | --- | | `'#'` | The value conversion will use the “alternate form” (where defined below). | | `'0'` | The conversion will be zero padded for numeric values. | | `'-'` | The converted value is left adjusted (overrides the `'0'` conversion if both are given). | | `' '` | (a space) A blank should be left before a positive number (or empty string) produced by a signed conversion. | | `'+'` | A sign character (`'+'` or `'-'`) will precede the conversion (overrides a “space” flag). | A length modifier (`h`, `l`, or `L`) may be present, but is ignored as it is not necessary for Python – so e.g. `%ld` is identical to `%d`. The conversion types are: | Conversion | Meaning | Notes | | --- | --- | --- | | `'d'` | Signed integer decimal. | | | `'i'` | Signed integer decimal. | | | `'o'` | Signed octal value. | (1) | | `'u'` | Obsolete type – it is identical to `'d'`. | (8) | | `'x'` | Signed hexadecimal (lowercase). | (2) | | `'X'` | Signed hexadecimal (uppercase). | (2) | | `'e'` | Floating point exponential format (lowercase). | (3) | | `'E'` | Floating point exponential format (uppercase). | (3) | | `'f'` | Floating point decimal format. | (3) | | `'F'` | Floating point decimal format. | (3) | | `'g'` | Floating point format. Uses lowercase exponential format if exponent is less than -4 or not less than precision, decimal format otherwise. | (4) | | `'G'` | Floating point format. Uses uppercase exponential format if exponent is less than -4 or not less than precision, decimal format otherwise. | (4) | | `'c'` | Single byte (accepts integer or single byte objects). | | | `'b'` | Bytes (any object that follows the [buffer protocol](../c-api/buffer#bufferobjects) or has [`__bytes__()`](../reference/datamodel#object.__bytes__ "object.__bytes__")). | (5) | | `'s'` | `'s'` is an alias for `'b'` and should only be used for Python2/3 code bases. | (6) | | `'a'` | Bytes (converts any Python object using `repr(obj).encode('ascii', 'backslashreplace')`). | (5) | | `'r'` | `'r'` is an alias for `'a'` and should only be used for Python2/3 code bases. | (7) | | `'%'` | No argument is converted, results in a `'%'` character in the result. | | Notes: 1. The alternate form causes a leading octal specifier (`'0o'`) to be inserted before the first digit. 2. The alternate form causes a leading `'0x'` or `'0X'` (depending on whether the `'x'` or `'X'` format was used) to be inserted before the first digit. 3. The alternate form causes the result to always contain a decimal point, even if no digits follow it. The precision determines the number of digits after the decimal point and defaults to 6. 4. The alternate form causes the result to always contain a decimal point, and trailing zeroes are not removed as they would otherwise be. The precision determines the number of significant digits before and after the decimal point and defaults to 6. 5. If precision is `N`, the output is truncated to `N` characters. 6. `b'%s'` is deprecated, but will not be removed during the 3.x series. 7. `b'%r'` is deprecated, but will not be removed during the 3.x series. 8. See [**PEP 237**](https://www.python.org/dev/peps/pep-0237). Note The bytearray version of this method does *not* operate in place - it always produces a new object, even if no changes were made. See also [**PEP 461**](https://www.python.org/dev/peps/pep-0461) - Adding % formatting to bytes and bytearray New in version 3.5. 
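To round out the tables above, a few of the flags and conversion types in action (illustrative values only):

```
>>> b'%#x' % 255       # '#' adds the leading '0x'
b'0xff'
>>> b'%+05d' % 42      # '+' forces a sign, '0' pads with zeros
b'+0042'
>>> b'%c' % 65         # 'c' accepts an integer...
b'A'
>>> b'%c' % b'B'       # ...or a single byte object
b'B'
```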
### Memory Views [`memoryview`](#memoryview "memoryview") objects allow Python code to access the internal data of an object that supports the [buffer protocol](../c-api/buffer#bufferobjects) without copying. `class memoryview(object)` Create a [`memoryview`](#memoryview "memoryview") that references *object*. *object* must support the buffer protocol. Built-in objects that support the buffer protocol include [`bytes`](#bytes "bytes") and [`bytearray`](#bytearray "bytearray"). A [`memoryview`](#memoryview "memoryview") has the notion of an *element*, which is the atomic memory unit handled by the originating *object*. For many simple types such as [`bytes`](#bytes "bytes") and [`bytearray`](#bytearray "bytearray"), an element is a single byte, but other types such as [`array.array`](array#array.array "array.array") may have bigger elements. `len(view)` is equal to the length of [`tolist`](#memoryview.tolist "memoryview.tolist"). If `view.ndim = 0`, the length is 1. If `view.ndim = 1`, the length is equal to the number of elements in the view. For higher dimensions, the length is equal to the length of the nested list representation of the view. The [`itemsize`](#memoryview.itemsize "memoryview.itemsize") attribute will give you the number of bytes in a single element. A [`memoryview`](#memoryview "memoryview") supports slicing and indexing to expose its data. One-dimensional slicing will result in a subview: ``` >>> v = memoryview(b'abcefg') >>> v[1] 98 >>> v[-1] 103 >>> v[1:4] <memory at 0x7f3ddc9f4350> >>> bytes(v[1:4]) b'bce' ``` If [`format`](#memoryview.format "memoryview.format") is one of the native format specifiers from the [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") module, indexing with an integer or a tuple of integers is also supported and returns a single *element* with the correct type. One-dimensional memoryviews can be indexed with an integer or a one-integer tuple. Multi-dimensional memoryviews can be indexed with tuples of exactly *ndim* integers where *ndim* is the number of dimensions. Zero-dimensional memoryviews can be indexed with the empty tuple. Here is an example with a non-byte format: ``` >>> import array >>> a = array.array('l', [-11111111, 22222222, -33333333, 44444444]) >>> m = memoryview(a) >>> m[0] -11111111 >>> m[-1] 44444444 >>> m[::2].tolist() [-11111111, -33333333] ``` If the underlying object is writable, the memoryview supports one-dimensional slice assignment. Resizing is not allowed: ``` >>> data = bytearray(b'abcefg') >>> v = memoryview(data) >>> v.readonly False >>> v[0] = ord(b'z') >>> data bytearray(b'zbcefg') >>> v[1:4] = b'123' >>> data bytearray(b'z123fg') >>> v[2:3] = b'spam' Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: memoryview assignment: lvalue and rvalue have different structures >>> v[2:6] = b'spam' >>> data bytearray(b'z1spam') ``` One-dimensional memoryviews of hashable (read-only) types with formats ‘B’, ‘b’ or ‘c’ are also hashable. The hash is defined as `hash(m) == hash(m.tobytes())`: ``` >>> v = memoryview(b'abcefg') >>> hash(v) == hash(b'abcefg') True >>> hash(v[2:4]) == hash(b'ce') True >>> hash(v[::-2]) == hash(b'abcefg'[::-2]) True ``` Changed in version 3.3: One-dimensional memoryviews can now be sliced. One-dimensional memoryviews with formats ‘B’, ‘b’ or ‘c’ are now hashable. 
Changed in version 3.4: memoryview is now registered automatically with [`collections.abc.Sequence`](collections.abc#collections.abc.Sequence "collections.abc.Sequence"). Changed in version 3.5: memoryviews can now be indexed with a tuple of integers. [`memoryview`](#memoryview "memoryview") has several methods: `__eq__(exporter)` A memoryview and a [**PEP 3118**](https://www.python.org/dev/peps/pep-3118) exporter are equal if their shapes are equivalent and if all corresponding values are equal when the operands’ respective format codes are interpreted using [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") syntax. For the subset of [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") format strings currently supported by [`tolist()`](#memoryview.tolist "memoryview.tolist"), `v` and `w` are equal if `v.tolist() == w.tolist()`: ``` >>> import array >>> a = array.array('I', [1, 2, 3, 4, 5]) >>> b = array.array('d', [1.0, 2.0, 3.0, 4.0, 5.0]) >>> c = array.array('b', [5, 3, 1]) >>> x = memoryview(a) >>> y = memoryview(b) >>> x == a == y == b True >>> x.tolist() == a.tolist() == y.tolist() == b.tolist() True >>> z = y[::-2] >>> z == c True >>> z.tolist() == c.tolist() True ``` If either format string is not supported by the [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") module, then the objects will always compare as unequal (even if the format strings and buffer contents are identical): ``` >>> from ctypes import BigEndianStructure, c_long >>> class BEPoint(BigEndianStructure): ... _fields_ = [("x", c_long), ("y", c_long)] ... >>> point = BEPoint(100, 200) >>> a = memoryview(point) >>> b = memoryview(point) >>> a == point False >>> a == b False ``` Note that, as with floating point numbers, `v is w` does *not* imply `v == w` for memoryview objects. Changed in version 3.3: Previous versions compared the raw memory disregarding the item format and the logical array structure. `tobytes(order=None)` Return the data in the buffer as a bytestring. This is equivalent to calling the [`bytes`](#bytes "bytes") constructor on the memoryview. ``` >>> m = memoryview(b"abc") >>> m.tobytes() b'abc' >>> bytes(m) b'abc' ``` For non-contiguous arrays the result is equal to the flattened list representation with all elements converted to bytes. [`tobytes()`](#memoryview.tobytes "memoryview.tobytes") supports all format strings, including those that are not in [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") module syntax. New in version 3.8: *order* can be {‘C’, ‘F’, ‘A’}. When *order* is ‘C’ or ‘F’, the data of the original array is converted to C or Fortran order. For contiguous views, ‘A’ returns an exact copy of the physical memory. In particular, in-memory Fortran order is preserved. For non-contiguous views, the data is converted to C first. *order=None* is the same as *order=’C’*. `hex([sep[, bytes_per_sep]])` Return a string object containing two hexadecimal digits for each byte in the buffer. ``` >>> m = memoryview(b"abc") >>> m.hex() '616263' ``` New in version 3.5. Changed in version 3.8: Similar to [`bytes.hex()`](#bytes.hex "bytes.hex"), [`memoryview.hex()`](#memoryview.hex "memoryview.hex") now supports optional *sep* and *bytes\_per\_sep* parameters to insert separators between bytes in the hex output. `tolist()` Return the data in the buffer as a list of elements.
``` >>> memoryview(b'abc').tolist() [97, 98, 99] >>> import array >>> a = array.array('d', [1.1, 2.2, 3.3]) >>> m = memoryview(a) >>> m.tolist() [1.1, 2.2, 3.3] ``` Changed in version 3.3: [`tolist()`](#memoryview.tolist "memoryview.tolist") now supports all single character native formats in [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") module syntax as well as multi-dimensional representations. `toreadonly()` Return a readonly version of the memoryview object. The original memoryview object is unchanged. ``` >>> m = memoryview(bytearray(b'abc')) >>> mm = m.toreadonly() >>> mm.tolist() [97, 98, 99] >>> mm[0] = 42 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: cannot modify read-only memory >>> m[0] = 43 >>> mm.tolist() [43, 98, 99] ``` New in version 3.8. `release()` Release the underlying buffer exposed by the memoryview object. Many objects take special actions when a view is held on them (for example, a [`bytearray`](#bytearray "bytearray") would temporarily forbid resizing); therefore, calling release() is handy to remove these restrictions (and free any dangling resources) as soon as possible. After this method has been called, any further operation on the view raises a [`ValueError`](exceptions#ValueError "ValueError") (except [`release()`](#memoryview.release "memoryview.release") itself which can be called multiple times): ``` >>> m = memoryview(b'abc') >>> m.release() >>> m[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operation forbidden on released memoryview object ``` The context management protocol can be used for a similar effect, using the `with` statement: ``` >>> with memoryview(b'abc') as m: ... m[0] ... 97 >>> m[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operation forbidden on released memoryview object ``` New in version 3.2. `cast(format[, shape])` Cast a memoryview to a new format or shape. *shape* defaults to `[byte_length//new_itemsize]`, which means that the result view will be one-dimensional. The return value is a new memoryview, but the buffer itself is not copied. Supported casts are 1D -> C-[contiguous](../glossary#term-contiguous) and C-contiguous -> 1D. The destination format is restricted to a single element native format in [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") syntax. One of the formats must be a byte format (‘B’, ‘b’ or ‘c’). The byte length of the result must be the same as the original length.
Cast 1D/long to 1D/unsigned bytes: ``` >>> import array >>> a = array.array('l', [1,2,3]) >>> x = memoryview(a) >>> x.format 'l' >>> x.itemsize 8 >>> len(x) 3 >>> x.nbytes 24 >>> y = x.cast('B') >>> y.format 'B' >>> y.itemsize 1 >>> len(y) 24 >>> y.nbytes 24 ``` Cast 1D/unsigned bytes to 1D/char: ``` >>> b = bytearray(b'zyz') >>> x = memoryview(b) >>> x[0] = b'a' Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: memoryview: invalid value for format "B" >>> y = x.cast('c') >>> y[0] = b'a' >>> b bytearray(b'ayz') ``` Cast 1D/bytes to 3D/ints to 1D/signed char: ``` >>> import struct >>> buf = struct.pack("i"*12, *list(range(12))) >>> x = memoryview(buf) >>> y = x.cast('i', shape=[2,2,3]) >>> y.tolist() [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]] >>> y.format 'i' >>> y.itemsize 4 >>> len(y) 2 >>> y.nbytes 48 >>> z = y.cast('b') >>> z.format 'b' >>> z.itemsize 1 >>> len(z) 48 >>> z.nbytes 48 ``` Cast 1D/unsigned long to 2D/unsigned long: ``` >>> buf = struct.pack("L"*6, *list(range(6))) >>> x = memoryview(buf) >>> y = x.cast('L', shape=[2,3]) >>> len(y) 2 >>> y.nbytes 48 >>> y.tolist() [[0, 1, 2], [3, 4, 5]] ``` New in version 3.3. Changed in version 3.5: The source format is no longer restricted when casting to a byte view. There are also several readonly attributes available: `obj` The underlying object of the memoryview: ``` >>> b = bytearray(b'xyz') >>> m = memoryview(b) >>> m.obj is b True ``` New in version 3.3. `nbytes` `nbytes == product(shape) * itemsize == len(m.tobytes())`. This is the amount of space in bytes that the array would use in a contiguous representation. It is not necessarily equal to `len(m)`: ``` >>> import array >>> a = array.array('i', [1,2,3,4,5]) >>> m = memoryview(a) >>> len(m) 5 >>> m.nbytes 20 >>> y = m[::2] >>> len(y) 3 >>> y.nbytes 12 >>> len(y.tobytes()) 12 ``` Multi-dimensional arrays: ``` >>> import struct >>> buf = struct.pack("d"*12, *[1.5*x for x in range(12)]) >>> x = memoryview(buf) >>> y = x.cast('d', shape=[3,4]) >>> y.tolist() [[0.0, 1.5, 3.0, 4.5], [6.0, 7.5, 9.0, 10.5], [12.0, 13.5, 15.0, 16.5]] >>> len(y) 3 >>> y.nbytes 96 ``` New in version 3.3. `readonly` A bool indicating whether the memory is read only. `format` A string containing the format (in [`struct`](struct#module-struct "struct: Interpret bytes as packed binary data.") module style) for each element in the view. A memoryview can be created from exporters with arbitrary format strings, but some methods (e.g. [`tolist()`](#memoryview.tolist "memoryview.tolist")) are restricted to native single element formats. Changed in version 3.3: format `'B'` is now handled according to the struct module syntax. This means that `memoryview(b'abc')[0] == b'abc'[0] == 97`. `itemsize` The size in bytes of each element of the memoryview: ``` >>> import array, struct >>> m = memoryview(array.array('H', [32000, 32001, 32002])) >>> m.itemsize 2 >>> m[0] 32000 >>> struct.calcsize('H') == m.itemsize True ``` `ndim` An integer indicating how many dimensions of a multi-dimensional array the memory represents. `shape` A tuple of integers the length of [`ndim`](#memoryview.ndim "memoryview.ndim") giving the shape of the memory as an N-dimensional array. Changed in version 3.3: An empty tuple instead of `None` when ndim = 0. `strides` A tuple of integers the length of [`ndim`](#memoryview.ndim "memoryview.ndim") giving the size in bytes to access each element for each dimension of the array. Changed in version 3.3: An empty tuple instead of `None` when ndim = 0. 
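A small sketch tying `ndim`, `shape`, and `strides` together for a C-contiguous 2x3 view of 4-byte integers (the buffer contents are arbitrary):

```
>>> import struct
>>> buf = struct.pack("i"*6, *range(6))
>>> m = memoryview(buf).cast('i', shape=[2, 3])
>>> m.ndim, m.shape
(2, (2, 3))
>>> m.strides    # 12 bytes to step one row, 4 bytes to step one column
(12, 4)
```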
`suboffsets` Used internally for PIL-style arrays. The value is informational only. `c_contiguous` A bool indicating whether the memory is C-[contiguous](../glossary#term-contiguous). New in version 3.3. `f_contiguous` A bool indicating whether the memory is Fortran [contiguous](../glossary#term-contiguous). New in version 3.3. `contiguous` A bool indicating whether the memory is [contiguous](../glossary#term-contiguous). New in version 3.3. Set Types — set, frozenset -------------------------- A *set* object is an unordered collection of distinct [hashable](../glossary#term-hashable) objects. Common uses include membership testing, removing duplicates from a sequence, and computing mathematical operations such as intersection, union, difference, and symmetric difference. (For other containers see the built-in [`dict`](#dict "dict"), [`list`](#list "list"), and [`tuple`](#tuple "tuple") classes, and the [`collections`](collections#module-collections "collections: Container datatypes") module.) Like other collections, sets support `x in set`, `len(set)`, and `for x in set`. Being an unordered collection, sets do not record element position or order of insertion. Accordingly, sets do not support indexing, slicing, or other sequence-like behavior. There are currently two built-in set types, [`set`](#set "set") and [`frozenset`](#frozenset "frozenset"). The [`set`](#set "set") type is mutable — the contents can be changed using methods like `add()` and `remove()`. Since it is mutable, it has no hash value and cannot be used as either a dictionary key or as an element of another set. The [`frozenset`](#frozenset "frozenset") type is immutable and [hashable](../glossary#term-hashable) — its contents cannot be altered after it is created; it can therefore be used as a dictionary key or as an element of another set. Non-empty sets (not frozensets) can be created by placing a comma-separated list of elements within braces, for example: `{'jack', 'sjoerd'}`, in addition to the [`set`](#set "set") constructor. The constructors for both classes work the same: `class set([iterable])` `class frozenset([iterable])` Return a new set or frozenset object whose elements are taken from *iterable*. The elements of a set must be [hashable](../glossary#term-hashable). To represent sets of sets, the inner sets must be [`frozenset`](#frozenset "frozenset") objects. If *iterable* is not specified, a new empty set is returned. Sets can be created by several means: * Use a comma-separated list of elements within braces: `{'jack', 'sjoerd'}` * Use a set comprehension: `{c for c in 'abracadabra' if c not in 'abc'}` * Use the type constructor: `set()`, `set('foobar')`, `set(['a', 'b', 'foo'])` Instances of [`set`](#set "set") and [`frozenset`](#frozenset "frozenset") provide the following operations: `len(s)` Return the number of elements in set *s* (cardinality of *s*). `x in s` Test *x* for membership in *s*. `x not in s` Test *x* for non-membership in *s*. `isdisjoint(other)` Return `True` if the set has no elements in common with *other*. Sets are disjoint if and only if their intersection is the empty set. `issubset(other)` `set <= other` Test whether every element in the set is in *other*. `set < other` Test whether the set is a proper subset of *other*, that is, `set <= other and set != other`. `issuperset(other)` `set >= other` Test whether every element in *other* is in the set. `set > other` Test whether the set is a proper superset of *other*, that is, `set >= other and set != other`. 
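For example, the subset and superset tests described above in action:

```
>>> {1, 2} <= {1, 2, 3}      # subset
True
>>> {1, 2, 3} < {1, 2, 3}    # a set is not a *proper* subset of itself
False
>>> {1, 2, 3} >= {1, 2}      # superset
True
```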
`union(*others)` `set | other | ...` Return a new set with elements from the set and all others. `intersection(*others)` `set & other & ...` Return a new set with elements common to the set and all others. `difference(*others)` `set - other - ...` Return a new set with elements in the set that are not in the others. `symmetric_difference(other)` `set ^ other` Return a new set with elements in either the set or *other* but not both. `copy()` Return a shallow copy of the set. Note, the non-operator versions of the [`union()`](#frozenset.union "frozenset.union"), [`intersection()`](#frozenset.intersection "frozenset.intersection"), [`difference()`](#frozenset.difference "frozenset.difference"), [`symmetric_difference()`](#frozenset.symmetric_difference "frozenset.symmetric_difference"), [`issubset()`](#frozenset.issubset "frozenset.issubset"), and [`issuperset()`](#frozenset.issuperset "frozenset.issuperset") methods will accept any iterable as an argument. In contrast, their operator-based counterparts require their arguments to be sets. This precludes error-prone constructions like `set('abc') & 'cbs'` in favor of the more readable `set('abc').intersection('cbs')`. Both [`set`](#set "set") and [`frozenset`](#frozenset "frozenset") support set to set comparisons. Two sets are equal if and only if every element of each set is contained in the other (each is a subset of the other). A set is less than another set if and only if the first set is a proper subset of the second set (is a subset, but is not equal). A set is greater than another set if and only if the first set is a proper superset of the second set (is a superset, but is not equal). Instances of [`set`](#set "set") are compared to instances of [`frozenset`](#frozenset "frozenset") based on their members. For example, `set('abc') == frozenset('abc')` returns `True` and so does `set('abc') in set([frozenset('abc')])`. The subset and equality comparisons do not generalize to a total ordering function. For example, any two nonempty disjoint sets are not equal and are not subsets of each other, so *all* of the following return `False`: `a<b`, `a==b`, and `a>b`. Since sets only define partial ordering (subset relationships), the output of the [`list.sort()`](#list.sort "list.sort") method is undefined for lists of sets. Set elements, like dictionary keys, must be [hashable](../glossary#term-hashable). Binary operations that mix [`set`](#set "set") instances with [`frozenset`](#frozenset "frozenset") return the type of the first operand. For example: `frozenset('ab') | set('bc')` returns an instance of [`frozenset`](#frozenset "frozenset"). The following table lists operations available for [`set`](#set "set") that do not apply to immutable instances of [`frozenset`](#frozenset "frozenset"): `update(*others)` `set |= other | ...` Update the set, adding elements from all others. `intersection_update(*others)` `set &= other & ...` Update the set, keeping only elements found in it and all others. `difference_update(*others)` `set -= other | ...` Update the set, removing elements found in others. `symmetric_difference_update(other)` `set ^= other` Update the set, keeping only elements found in either set, but not in both. `add(elem)` Add element *elem* to the set. `remove(elem)` Remove element *elem* from the set. Raises [`KeyError`](exceptions#KeyError "KeyError") if *elem* is not contained in the set. `discard(elem)` Remove element *elem* from the set if it is present. `pop()` Remove and return an arbitrary element from the set.
Raises [`KeyError`](exceptions#KeyError "KeyError") if the set is empty. `clear()` Remove all elements from the set. Note, the non-operator versions of the [`update()`](#frozenset.update "frozenset.update"), [`intersection_update()`](#frozenset.intersection_update "frozenset.intersection_update"), [`difference_update()`](#frozenset.difference_update "frozenset.difference_update"), and [`symmetric_difference_update()`](#frozenset.symmetric_difference_update "frozenset.symmetric_difference_update") methods will accept any iterable as an argument. Note, the *elem* argument to the [`__contains__()`](../reference/datamodel#object.__contains__ "object.__contains__"), [`remove()`](#frozenset.remove "frozenset.remove"), and [`discard()`](#frozenset.discard "frozenset.discard") methods may be a set. To support searching for an equivalent frozenset, a temporary one is created from *elem*. Mapping Types — dict -------------------- A [mapping](../glossary#term-mapping) object maps [hashable](../glossary#term-hashable) values to arbitrary objects. Mappings are mutable objects. There is currently only one standard mapping type, the *dictionary*. (For other containers see the built-in [`list`](#list "list"), [`set`](#set "set"), and [`tuple`](#tuple "tuple") classes, and the [`collections`](collections#module-collections "collections: Container datatypes") module.) A dictionary’s keys are *almost* arbitrary values. Values that are not [hashable](../glossary#term-hashable), that is, values containing lists, dictionaries or other mutable types (that are compared by value rather than by object identity) may not be used as keys. Numeric types used for keys obey the normal rules for numeric comparison: if two numbers compare equal (such as `1` and `1.0`) then they can be used interchangeably to index the same dictionary entry. (Note however, that since computers store floating-point numbers as approximations it is usually unwise to use them as dictionary keys.) `class dict(**kwargs)` `class dict(mapping, **kwargs)` `class dict(iterable, **kwargs)` Return a new dictionary initialized from an optional positional argument and a possibly empty set of keyword arguments. Dictionaries can be created by several means: * Use a comma-separated list of `key: value` pairs within braces: `{'jack': 4098, 'sjoerd': 4127}` or `{4098: 'jack', 4127: 'sjoerd'}` (an empty pair of braces, `{}`, creates an empty dictionary) * Use a dict comprehension: `{x: x ** 2 for x in range(10)}` * Use the type constructor: `dict()`, `dict([('foo', 100), ('bar', 200)])`, `dict(foo=100, bar=200)` If no positional argument is given, an empty dictionary is created. If a positional argument is given and it is a mapping object, a dictionary is created with the same key-value pairs as the mapping object. Otherwise, the positional argument must be an [iterable](../glossary#term-iterable) object. Each item in the iterable must itself be an iterable with exactly two objects. The first object of each item becomes a key in the new dictionary, and the second object the corresponding value. If a key occurs more than once, the last value for that key becomes the corresponding value in the new dictionary. If keyword arguments are given, the keyword arguments and their values are added to the dictionary created from the positional argument. If a key being added is already present, the value from the keyword argument replaces the value from the positional argument.
To illustrate, the following examples all return a dictionary equal to `{"one": 1, "two": 2, "three": 3}`: ``` >>> a = dict(one=1, two=2, three=3) >>> b = {'one': 1, 'two': 2, 'three': 3} >>> c = dict(zip(['one', 'two', 'three'], [1, 2, 3])) >>> d = dict([('two', 2), ('one', 1), ('three', 3)]) >>> e = dict({'three': 3, 'one': 1, 'two': 2}) >>> f = dict({'one': 1, 'three': 3}, two=2) >>> a == b == c == d == e == f True ``` Providing keyword arguments as in the first example only works for keys that are valid Python identifiers. Otherwise, any valid keys can be used. These are the operations that dictionaries support (and therefore, custom mapping types should support too): `list(d)` Return a list of all the keys used in the dictionary *d*. `len(d)` Return the number of items in the dictionary *d*. `d[key]` Return the item of *d* with key *key*. Raises a [`KeyError`](exceptions#KeyError "KeyError") if *key* is not in the map. If a subclass of dict defines a method [`__missing__()`](../reference/datamodel#object.__missing__ "object.__missing__") and *key* is not present, the `d[key]` operation calls that method with the key *key* as argument. The `d[key]` operation then returns or raises whatever is returned or raised by the `__missing__(key)` call. No other operations or methods invoke [`__missing__()`](../reference/datamodel#object.__missing__ "object.__missing__"). If [`__missing__()`](../reference/datamodel#object.__missing__ "object.__missing__") is not defined, [`KeyError`](exceptions#KeyError "KeyError") is raised. [`__missing__()`](../reference/datamodel#object.__missing__ "object.__missing__") must be a method; it cannot be an instance variable: ``` >>> class Counter(dict): ... def __missing__(self, key): ... return 0 >>> c = Counter() >>> c['red'] 0 >>> c['red'] += 1 >>> c['red'] 1 ``` The example above shows part of the implementation of [`collections.Counter`](collections#collections.Counter "collections.Counter"). A different `__missing__` method is used by [`collections.defaultdict`](collections#collections.defaultdict "collections.defaultdict"). `d[key] = value` Set `d[key]` to *value*. `del d[key]` Remove `d[key]` from *d*. Raises a [`KeyError`](exceptions#KeyError "KeyError") if *key* is not in the map. `key in d` Return `True` if *d* has a key *key*, else `False`. `key not in d` Equivalent to `not key in d`. `iter(d)` Return an iterator over the keys of the dictionary. This is a shortcut for `iter(d.keys())`. `clear()` Remove all items from the dictionary. `copy()` Return a shallow copy of the dictionary. `classmethod fromkeys(iterable[, value])` Create a new dictionary with keys from *iterable* and values set to *value*. [`fromkeys()`](#dict.fromkeys "dict.fromkeys") is a class method that returns a new dictionary. *value* defaults to `None`. All of the values refer to just a single instance, so it generally doesn’t make sense for *value* to be a mutable object such as an empty list. To get distinct values, use a [dict comprehension](../reference/expressions#dict) instead. `get(key[, default])` Return the value for *key* if *key* is in the dictionary, else *default*. If *default* is not given, it defaults to `None`, so that this method never raises a [`KeyError`](exceptions#KeyError "KeyError"). `items()` Return a new view of the dictionary’s items (`(key, value)` pairs). See the [documentation of view objects](#dict-views). `keys()` Return a new view of the dictionary’s keys. See the [documentation of view objects](#dict-views). 
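To illustrate the `get()` method described above, including the *default* fallback (the keys and values here are arbitrary):

```
>>> d = {'spam': 2}
>>> d.get('spam')
2
>>> d.get('eggs') is None    # a missing key returns None rather than raising KeyError
True
>>> d.get('eggs', 0)
0
```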
`pop(key[, default])` If *key* is in the dictionary, remove it and return its value, else return *default*. If *default* is not given and *key* is not in the dictionary, a [`KeyError`](exceptions#KeyError "KeyError") is raised. `popitem()` Remove and return a `(key, value)` pair from the dictionary. Pairs are returned in LIFO order. [`popitem()`](#dict.popitem "dict.popitem") is useful to destructively iterate over a dictionary, as often used in set algorithms. If the dictionary is empty, calling [`popitem()`](#dict.popitem "dict.popitem") raises a [`KeyError`](exceptions#KeyError "KeyError"). Changed in version 3.7: LIFO order is now guaranteed. In prior versions, [`popitem()`](#dict.popitem "dict.popitem") would return an arbitrary key/value pair. `reversed(d)` Return a reverse iterator over the keys of the dictionary. This is a shortcut for `reversed(d.keys())`. New in version 3.8. `setdefault(key[, default])` If *key* is in the dictionary, return its value. If not, insert *key* with a value of *default* and return *default*. *default* defaults to `None`. `update([other])` Update the dictionary with the key/value pairs from *other*, overwriting existing keys. Return `None`. [`update()`](#dict.update "dict.update") accepts either another dictionary object or an iterable of key/value pairs (as tuples or other iterables of length two). If keyword arguments are specified, the dictionary is then updated with those key/value pairs: `d.update(red=1, blue=2)`. `values()` Return a new view of the dictionary’s values. See the [documentation of view objects](#dict-views). An equality comparison between one `dict.values()` view and another will always return `False`. This also applies when comparing `dict.values()` to itself: ``` >>> d = {'a': 1} >>> d.values() == d.values() False ``` `d | other` Create a new dictionary with the merged keys and values of *d* and *other*, which must both be dictionaries. The values of *other* take priority when *d* and *other* share keys. New in version 3.9. `d |= other` Update the dictionary *d* with keys and values from *other*, which may be either a [mapping](../glossary#term-mapping) or an [iterable](../glossary#term-iterable) of key/value pairs. The values of *other* take priority when *d* and *other* share keys. New in version 3.9. Dictionaries compare equal if and only if they have the same `(key, value)` pairs (regardless of ordering). Order comparisons (‘<’, ‘<=’, ‘>=’, ‘>’) raise [`TypeError`](exceptions#TypeError "TypeError"). Dictionaries preserve insertion order. Note that updating a key does not affect the order. Keys added after deletion are inserted at the end. ``` >>> d = {"one": 1, "two": 2, "three": 3, "four": 4} >>> d {'one': 1, 'two': 2, 'three': 3, 'four': 4} >>> list(d) ['one', 'two', 'three', 'four'] >>> list(d.values()) [1, 2, 3, 4] >>> d["one"] = 42 >>> d {'one': 42, 'two': 2, 'three': 3, 'four': 4} >>> del d["two"] >>> d["two"] = None >>> d {'one': 42, 'three': 3, 'four': 4, 'two': None} ``` Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6. Dictionaries and dictionary views are reversible. ``` >>> d = {"one": 1, "two": 2, "three": 3, "four": 4} >>> d {'one': 1, 'two': 2, 'three': 3, 'four': 4} >>> list(reversed(d)) ['four', 'three', 'two', 'one'] >>> list(reversed(d.values())) [4, 3, 2, 1] >>> list(reversed(d.items())) [('four', 4), ('three', 3), ('two', 2), ('one', 1)] ``` Changed in version 3.8: Dictionaries are now reversible. 
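A brief sketch of the merge (`|`) and update (`|=`) operators described above (the dictionaries here are arbitrary; for shared keys, the values of the right-hand operand win):

```
>>> d = {'spam': 1, 'eggs': 2}
>>> e = {'eggs': 3, 'cheese': 4}
>>> d | e
{'spam': 1, 'eggs': 3, 'cheese': 4}
>>> e | d
{'eggs': 2, 'cheese': 4, 'spam': 1}
>>> d |= e
>>> d
{'spam': 1, 'eggs': 3, 'cheese': 4}
```

Unlike `d | e`, which creates a new dictionary, `d |= e` updates *d* in place, and its right operand may also be any mapping or iterable of key/value pairs.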
See also [`types.MappingProxyType`](types#types.MappingProxyType "types.MappingProxyType") can be used to create a read-only view of a [`dict`](#dict "dict"). ### Dictionary view objects The objects returned by [`dict.keys()`](#dict.keys "dict.keys"), [`dict.values()`](#dict.values "dict.values") and [`dict.items()`](#dict.items "dict.items") are *view objects*. They provide a dynamic view on the dictionary’s entries, which means that when the dictionary changes, the view reflects these changes. Dictionary views can be iterated over to yield their respective data, and support membership tests: `len(dictview)` Return the number of entries in the dictionary. `iter(dictview)` Return an iterator over the keys, values or items (represented as tuples of `(key, value)`) in the dictionary. Keys and values are iterated over in insertion order. This allows the creation of `(value, key)` pairs using [`zip()`](functions#zip "zip"): `pairs = zip(d.values(), d.keys())`. Another way to create the same list is `pairs = [(v, k) for (k, v) in d.items()]`. Iterating views while adding or deleting entries in the dictionary may raise a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") or fail to iterate over all entries. Changed in version 3.7: Dictionary order is guaranteed to be insertion order. `x in dictview` Return `True` if *x* is in the underlying dictionary’s keys, values or items (in the latter case, *x* should be a `(key, value)` tuple). `reversed(dictview)` Return a reverse iterator over the keys, values or items of the dictionary. The view will be iterated in reverse order of the insertion. Changed in version 3.8: Dictionary views are now reversible. Keys views are set-like since their entries are unique and hashable. If all values are hashable, so that `(key, value)` pairs are unique and hashable, then the items view is also set-like. (Values views are not treated as set-like since the entries are generally not unique.) For set-like views, all of the operations defined for the abstract base class [`collections.abc.Set`](collections.abc#collections.abc.Set "collections.abc.Set") are available (for example, `==`, `<`, or `^`). An example of dictionary view usage: ``` >>> dishes = {'eggs': 2, 'sausage': 1, 'bacon': 1, 'spam': 500} >>> keys = dishes.keys() >>> values = dishes.values() >>> # iteration >>> n = 0 >>> for val in values: ... n += val >>> print(n) 504 >>> # keys and values are iterated over in the same order (insertion order) >>> list(keys) ['eggs', 'sausage', 'bacon', 'spam'] >>> list(values) [2, 1, 1, 500] >>> # view objects are dynamic and reflect dict changes >>> del dishes['eggs'] >>> del dishes['sausage'] >>> list(keys) ['bacon', 'spam'] >>> # set operations >>> keys & {'eggs', 'bacon', 'salad'} {'bacon'} >>> keys ^ {'sausage', 'juice'} {'juice', 'sausage', 'bacon', 'spam'} ``` Context Manager Types --------------------- Python’s [`with`](../reference/compound_stmts#with) statement supports the concept of a runtime context defined by a context manager. This is implemented using a pair of methods that allow user-defined classes to define a runtime context that is entered before the statement body is executed and exited when the statement ends: `contextmanager.__enter__()` Enter the runtime context and return either this object or another object related to the runtime context. The value returned by this method is bound to the identifier in the `as` clause of [`with`](../reference/compound_stmts#with) statements using this context manager. 
An example of a context manager that returns itself is a [file object](../glossary#term-file-object). File objects return themselves from \_\_enter\_\_() to allow [`open()`](functions#open "open") to be used as the context expression in a [`with`](../reference/compound_stmts#with) statement. An example of a context manager that returns a related object is the one returned by [`decimal.localcontext()`](decimal#decimal.localcontext "decimal.localcontext"). These managers set the active decimal context to a copy of the original decimal context and then return the copy. This allows changes to be made to the current decimal context in the body of the [`with`](../reference/compound_stmts#with) statement without affecting code outside the `with` statement. `contextmanager.__exit__(exc_type, exc_val, exc_tb)` Exit the runtime context and return a Boolean flag indicating if any exception that occurred should be suppressed. If an exception occurred while executing the body of the [`with`](../reference/compound_stmts#with) statement, the arguments contain the exception type, value and traceback information. Otherwise, all three arguments are `None`. Returning a true value from this method will cause the [`with`](../reference/compound_stmts#with) statement to suppress the exception and continue execution with the statement immediately following the `with` statement. Otherwise the exception continues propagating after this method has finished executing. Exceptions that occur during execution of this method will replace any exception that occurred in the body of the `with` statement. The exception passed in should never be reraised explicitly - instead, this method should return a false value to indicate that the method completed successfully and does not want to suppress the raised exception. This allows context management code to easily detect whether or not an [`__exit__()`](#contextmanager.__exit__ "contextmanager.__exit__") method has actually failed. Python defines several context managers to support easy thread synchronisation, prompt closure of files or other objects, and simpler manipulation of the active decimal arithmetic context. The specific types are not treated specially beyond their implementation of the context management protocol. See the [`contextlib`](contextlib#module-contextlib "contextlib: Utilities for with-statement contexts.") module for some examples. Python’s [generator](../glossary#term-generator)s and the [`contextlib.contextmanager`](contextlib#contextlib.contextmanager "contextlib.contextmanager") decorator provide a convenient way to implement these protocols. If a generator function is decorated with the [`contextlib.contextmanager`](contextlib#contextlib.contextmanager "contextlib.contextmanager") decorator, it will return a context manager implementing the necessary [`__enter__()`](#contextmanager.__enter__ "contextmanager.__enter__") and [`__exit__()`](#contextmanager.__exit__ "contextmanager.__exit__") methods, rather than the iterator produced by an undecorated generator function. Note that there is no specific slot for any of these methods in the type structure for Python objects in the Python/C API. Extension types wanting to define these methods must provide them as a normal Python accessible method. Compared to the overhead of setting up the runtime context, the overhead of a single class dictionary lookup is negligible. 
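As a minimal sketch of the two approaches described above (the class and function names are illustrative, not part of any library):

```
from contextlib import contextmanager

class Managed:
    """Implements the context management protocol directly."""

    def __enter__(self):
        # The return value is bound by ``with Managed() as m``.
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Return a false value so any exception keeps propagating.
        return False

@contextmanager
def managed():
    # Code before the yield runs on entry; the yielded value is
    # bound by the ``as`` clause; the finally block runs on exit.
    try:
        yield "resource"
    finally:
        pass  # cleanup would go here
```

Both `with Managed() as m: ...` and `with managed() as r: ...` then behave exactly as the protocol above specifies.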
Generic Alias Type ------------------ `GenericAlias` objects are generally created by [subscripting](../reference/expressions#subscriptions) a class. They are most often used with [container classes](../reference/datamodel#sequence-types), such as [`list`](#list "list") or [`dict`](#dict "dict"). For example, `list[int]` is a `GenericAlias` object created by subscripting the `list` class with the argument [`int`](functions#int "int"). `GenericAlias` objects are intended primarily for use with [type annotations](../glossary#term-annotation). Note It is generally only possible to subscript a class if the class implements the special method [`__class_getitem__()`](../reference/datamodel#object.__class_getitem__ "object.__class_getitem__"). A `GenericAlias` object acts as a proxy for a [generic type](../glossary#term-generic-type), implementing *parameterized generics*. For a container class, the argument(s) supplied to a [subscription](../reference/expressions#subscriptions) of the class may indicate the type(s) of the elements an object contains. For example, `set[bytes]` can be used in type annotations to signify a [`set`](#set "set") in which all the elements are of type [`bytes`](#bytes "bytes"). For a class which defines [`__class_getitem__()`](../reference/datamodel#object.__class_getitem__ "object.__class_getitem__") but is not a container, the argument(s) supplied to a subscription of the class will often indicate the return type(s) of one or more methods defined on an object. For example, [`regular expressions`](re#module-re "re: Regular expression operations.") can be used on both the [`str`](#str "str") data type and the [`bytes`](#bytes "bytes") data type: * If `x = re.search('foo', 'foo')`, `x` will be a [re.Match](re#match-objects) object where the return values of `x.group(0)` and `x[0]` will both be of type [`str`](#str "str"). We can represent this kind of object in type annotations with the `GenericAlias` `re.Match[str]`. * If `y = re.search(b'bar', b'bar')`, (note the `b` for [`bytes`](#bytes "bytes")), `y` will also be an instance of `re.Match`, but the return values of `y.group(0)` and `y[0]` will both be of type [`bytes`](#bytes "bytes"). In type annotations, we would represent this variety of [re.Match](re#match-objects) objects with `re.Match[bytes]`. `GenericAlias` objects are instances of the class [`types.GenericAlias`](types#types.GenericAlias "types.GenericAlias"), which can also be used to create `GenericAlias` objects directly. `T[X, Y, ...]` Creates a `GenericAlias` representing a type `T` parameterized by types *X*, *Y*, and more depending on the `T` used. For example, a function expecting a [`list`](#list "list") containing [`float`](functions#float "float") elements: ``` def average(values: list[float]) -> float: return sum(values) / len(values) ``` Another example for [mapping](../glossary#term-mapping) objects, using a [`dict`](#dict "dict"), which is a generic type expecting two type parameters representing the key type and the value type. In this example, the function expects a `dict` with keys of type [`str`](#str "str") and values of type [`int`](functions#int "int"): ``` def send_post_request(url: str, body: dict[str, int]) -> None: ... 
``` The builtin functions [`isinstance()`](functions#isinstance "isinstance") and [`issubclass()`](functions#issubclass "issubclass") do not accept `GenericAlias` types for their second argument: ``` >>> isinstance([1, 2], list[str]) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: isinstance() argument 2 cannot be a parameterized generic ``` The Python runtime does not enforce [type annotations](../glossary#term-annotation). This extends to generic types and their type parameters. When creating a container object from a `GenericAlias`, the elements in the container are not checked against their type. For example, the following code is discouraged, but will run without errors: ``` >>> t = list[str] >>> t([1, 2, 3]) [1, 2, 3] ``` Furthermore, parameterized generics erase type parameters during object creation: ``` >>> t = list[str] >>> type(t) <class 'types.GenericAlias'> >>> l = t() >>> type(l) <class 'list'> ``` Calling [`repr()`](functions#repr "repr") or [`str()`](#str "str") on a generic shows the parameterized type: ``` >>> repr(list[int]) 'list[int]' >>> str(list[int]) 'list[int]' ``` The [`__getitem__()`](../reference/datamodel#object.__getitem__ "object.__getitem__") method of generic containers will raise an exception to disallow mistakes like `dict[str][str]`: ``` >>> dict[str][str] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: There are no type variables left in dict[str] ``` However, such expressions are valid when [type variables](typing#generics) are used. The index must have as many elements as there are type variable items in the `GenericAlias` object’s [`__args__`](#genericalias.__args__ "genericalias.__args__"). ``` >>> from typing import TypeVar >>> Y = TypeVar('Y') >>> dict[str, Y][int] dict[str, int] ``` ### Standard Generic Classes The following standard library classes support parameterized generics. This list is non-exhaustive. 
* [`tuple`](#tuple "tuple") * [`list`](#list "list") * [`dict`](#dict "dict") * [`set`](#set "set") * [`frozenset`](#frozenset "frozenset") * [`type`](functions#type "type") * [`collections.deque`](collections#collections.deque "collections.deque") * [`collections.defaultdict`](collections#collections.defaultdict "collections.defaultdict") * [`collections.OrderedDict`](collections#collections.OrderedDict "collections.OrderedDict") * [`collections.Counter`](collections#collections.Counter "collections.Counter") * [`collections.ChainMap`](collections#collections.ChainMap "collections.ChainMap") * [`collections.abc.Awaitable`](collections.abc#collections.abc.Awaitable "collections.abc.Awaitable") * [`collections.abc.Coroutine`](collections.abc#collections.abc.Coroutine "collections.abc.Coroutine") * [`collections.abc.AsyncIterable`](collections.abc#collections.abc.AsyncIterable "collections.abc.AsyncIterable") * [`collections.abc.AsyncIterator`](collections.abc#collections.abc.AsyncIterator "collections.abc.AsyncIterator") * [`collections.abc.AsyncGenerator`](collections.abc#collections.abc.AsyncGenerator "collections.abc.AsyncGenerator") * [`collections.abc.Iterable`](collections.abc#collections.abc.Iterable "collections.abc.Iterable") * [`collections.abc.Iterator`](collections.abc#collections.abc.Iterator "collections.abc.Iterator") * [`collections.abc.Generator`](collections.abc#collections.abc.Generator "collections.abc.Generator") * [`collections.abc.Reversible`](collections.abc#collections.abc.Reversible "collections.abc.Reversible") * [`collections.abc.Container`](collections.abc#collections.abc.Container "collections.abc.Container") * [`collections.abc.Collection`](collections.abc#collections.abc.Collection "collections.abc.Collection") * [`collections.abc.Callable`](collections.abc#collections.abc.Callable "collections.abc.Callable") * [`collections.abc.Set`](collections.abc#collections.abc.Set "collections.abc.Set") * [`collections.abc.MutableSet`](collections.abc#collections.abc.MutableSet "collections.abc.MutableSet") * [`collections.abc.Mapping`](collections.abc#collections.abc.Mapping "collections.abc.Mapping") * [`collections.abc.MutableMapping`](collections.abc#collections.abc.MutableMapping "collections.abc.MutableMapping") * [`collections.abc.Sequence`](collections.abc#collections.abc.Sequence "collections.abc.Sequence") * [`collections.abc.MutableSequence`](collections.abc#collections.abc.MutableSequence "collections.abc.MutableSequence") * [`collections.abc.ByteString`](collections.abc#collections.abc.ByteString "collections.abc.ByteString") * [`collections.abc.MappingView`](collections.abc#collections.abc.MappingView "collections.abc.MappingView") * [`collections.abc.KeysView`](collections.abc#collections.abc.KeysView "collections.abc.KeysView") * [`collections.abc.ItemsView`](collections.abc#collections.abc.ItemsView "collections.abc.ItemsView") * [`collections.abc.ValuesView`](collections.abc#collections.abc.ValuesView "collections.abc.ValuesView") * [`contextlib.AbstractContextManager`](contextlib#contextlib.AbstractContextManager "contextlib.AbstractContextManager") * [`contextlib.AbstractAsyncContextManager`](contextlib#contextlib.AbstractAsyncContextManager "contextlib.AbstractAsyncContextManager") * [`dataclasses.Field`](dataclasses#dataclasses.Field "dataclasses.Field") * [`functools.cached_property`](functools#functools.cached_property "functools.cached_property") * [`functools.partialmethod`](functools#functools.partialmethod "functools.partialmethod") * 
[`os.PathLike`](os#os.PathLike "os.PathLike") * [`queue.LifoQueue`](queue#queue.LifoQueue "queue.LifoQueue") * [`queue.Queue`](queue#queue.Queue "queue.Queue") * [`queue.PriorityQueue`](queue#queue.PriorityQueue "queue.PriorityQueue") * [`queue.SimpleQueue`](queue#queue.SimpleQueue "queue.SimpleQueue") * [re.Pattern](re#re-objects) * [re.Match](re#match-objects) * [`shelve.BsdDbShelf`](shelve#shelve.BsdDbShelf "shelve.BsdDbShelf") * [`shelve.DbfilenameShelf`](shelve#shelve.DbfilenameShelf "shelve.DbfilenameShelf") * [`shelve.Shelf`](shelve#shelve.Shelf "shelve.Shelf") * [`types.MappingProxyType`](types#types.MappingProxyType "types.MappingProxyType") * [`weakref.WeakKeyDictionary`](weakref#weakref.WeakKeyDictionary "weakref.WeakKeyDictionary") * [`weakref.WeakMethod`](weakref#weakref.WeakMethod "weakref.WeakMethod") * [`weakref.WeakSet`](weakref#weakref.WeakSet "weakref.WeakSet") * [`weakref.WeakValueDictionary`](weakref#weakref.WeakValueDictionary "weakref.WeakValueDictionary") ### Special Attributes of `GenericAlias` objects All parameterized generics implement special read-only attributes. `genericalias.__origin__` This attribute points at the non-parameterized generic class: ``` >>> list[int].__origin__ <class 'list'> ``` `genericalias.__args__` This attribute is a [`tuple`](#tuple "tuple") (possibly of length 1) of generic types passed to the original [`__class_getitem__()`](../reference/datamodel#object.__class_getitem__ "object.__class_getitem__") of the generic class: ``` >>> dict[str, list[int]].__args__ (<class 'str'>, list[int]) ``` `genericalias.__parameters__` This attribute is a lazily computed tuple (possibly empty) of unique type variables found in `__args__`: ``` >>> from typing import TypeVar >>> T = TypeVar('T') >>> list[T].__parameters__ (~T,) ``` See also [**PEP 484**](https://www.python.org/dev/peps/pep-0484) - Type Hints Introducing Python’s framework for type annotations. [**PEP 585**](https://www.python.org/dev/peps/pep-0585) - Type Hinting Generics In Standard Collections Introducing the ability to natively parameterize standard-library classes, provided they implement the special class method [`__class_getitem__()`](../reference/datamodel#object.__class_getitem__ "object.__class_getitem__"). `Generics, user-defined generics and` [`typing.Generic`](typing#typing.Generic "typing.Generic") Documentation on how to implement generic classes that can be parameterized at runtime and understood by static type-checkers. New in version 3.9. Other Built-in Types -------------------- The interpreter supports several other kinds of objects. Most of these support only one or two operations. ### Modules The only special operation on a module is attribute access: `m.name`, where *m* is a module and *name* accesses a name defined in *m*’s symbol table. Module attributes can be assigned to. (Note that the [`import`](../reference/simple_stmts#import) statement is not, strictly speaking, an operation on a module object; `import foo` does not require a module object named *foo* to exist, rather it requires an (external) *definition* for a module named *foo* somewhere.) A special attribute of every module is [`__dict__`](#object.__dict__ "object.__dict__"). This is the dictionary containing the module’s symbol table. 
Modifying this dictionary will actually change the module’s symbol table, but direct assignment to the [`__dict__`](#object.__dict__ "object.__dict__") attribute is not possible (you can write `m.__dict__['a'] = 1`, which defines `m.a` to be `1`, but you can’t write `m.__dict__ = {}`). Modifying [`__dict__`](#object.__dict__ "object.__dict__") directly is not recommended. Modules built into the interpreter are written like this: `<module 'sys' (built-in)>`. If loaded from a file, they are written as `<module 'os' from '/usr/local/lib/pythonX.Y/os.pyc'>`. ### Classes and Class Instances See [Objects, values and types](../reference/datamodel#objects) and [Class definitions](../reference/compound_stmts#class) for these. ### Functions Function objects are created by function definitions. The only operation on a function object is to call it: `func(argument-list)`. There are really two flavors of function objects: built-in functions and user-defined functions. Both support the same operation (to call the function), but the implementation is different, hence the different object types. See [Function definitions](../reference/compound_stmts#function) for more information. ### Methods Methods are functions that are called using the attribute notation. There are two flavors: built-in methods (such as `append()` on lists) and class instance methods. Built-in methods are described with the types that support them. If you access a method (a function defined in a class namespace) through an instance, you get a special object: a *bound method* (also called *instance method*) object. When called, it will add the `self` argument to the argument list. Bound methods have two special read-only attributes: `m.__self__` is the object on which the method operates, and `m.__func__` is the function implementing the method. Calling `m(arg-1, arg-2, ..., arg-n)` is completely equivalent to calling `m.__func__(m.__self__, arg-1, arg-2, ..., arg-n)`. Like function objects, bound method objects support getting arbitrary attributes. However, since method attributes are actually stored on the underlying function object (`meth.__func__`), setting method attributes on bound methods is disallowed. Attempting to set an attribute on a method results in an [`AttributeError`](exceptions#AttributeError "AttributeError") being raised. In order to set a method attribute, you need to explicitly set it on the underlying function object: ``` >>> class C: ... def method(self): ... pass ... >>> c = C() >>> c.method.whoami = 'my name is method' # can't set on the method Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'method' object has no attribute 'whoami' >>> c.method.__func__.whoami = 'my name is method' >>> c.method.whoami 'my name is method' ``` See [The standard type hierarchy](../reference/datamodel#types) for more information. ### Code Objects Code objects are used by the implementation to represent “pseudo-compiled” executable Python code such as a function body. They differ from function objects because they don’t contain a reference to their global execution environment. Code objects are returned by the built-in [`compile()`](functions#compile "compile") function and can be extracted from function objects through their `__code__` attribute. See also the [`code`](code#module-code "code: Facilities to implement read-eval-print loops.") module. Accessing `__code__` raises an [auditing event](sys#auditing) `object.__getattr__` with arguments `obj` and `"__code__"`. 
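A short interactive sketch of the relationship just described (the expression and function here are arbitrary):

```
>>> code = compile('x * 2', '<string>', 'eval')
>>> type(code)
<class 'code'>
>>> def double(x):
...     return x * 2
...
>>> type(double.__code__)
<class 'code'>
```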
A code object can be executed or evaluated by passing it (instead of a source string) to the [`exec()`](functions#exec "exec") or [`eval()`](functions#eval "eval") built-in functions. See [The standard type hierarchy](../reference/datamodel#types) for more information. ### Type Objects Type objects represent the various object types. An object’s type is accessed by the built-in function [`type()`](functions#type "type"). There are no special operations on types. The standard module [`types`](types#module-types "types: Names for built-in types.") defines names for all standard built-in types. Types are written like this: `<class 'int'>`. ### The Null Object This object is returned by functions that don’t explicitly return a value. It supports no special operations. There is exactly one null object, named `None` (a built-in name). `type(None)()` produces the same singleton. It is written as `None`. ### The Ellipsis Object This object is commonly used by slicing (see [Slicings](../reference/expressions#slicings)). It supports no special operations. There is exactly one ellipsis object, named [`Ellipsis`](constants#Ellipsis "Ellipsis") (a built-in name). `type(Ellipsis)()` produces the [`Ellipsis`](constants#Ellipsis "Ellipsis") singleton. It is written as `Ellipsis` or `...`. ### The NotImplemented Object This object is returned from comparisons and binary operations when they are asked to operate on types they don’t support. See [Comparisons](../reference/expressions#comparisons) for more information. There is exactly one `NotImplemented` object. `type(NotImplemented)()` produces the singleton instance. It is written as `NotImplemented`. ### Boolean Values Boolean values are the two constant objects `False` and `True`. They are used to represent truth values (although other values can also be considered false or true). In numeric contexts (for example when used as the argument to an arithmetic operator), they behave like the integers 0 and 1, respectively. The built-in function [`bool()`](functions#bool "bool") can be used to convert any value to a Boolean, if the value can be interpreted as a truth value (see section [Truth Value Testing](#truth) above). They are written as `False` and `True`, respectively. ### Internal Objects See [The standard type hierarchy](../reference/datamodel#types) for this information. It describes stack frame objects, traceback objects, and slice objects. Special Attributes ------------------ The implementation adds a few special read-only attributes to several object types, where they are relevant. Some of these are not reported by the [`dir()`](functions#dir "dir") built-in function. `object.__dict__` A dictionary or other mapping object used to store an object’s (writable) attributes. `instance.__class__` The class to which a class instance belongs. `class.__bases__` The tuple of base classes of a class object. `definition.__name__` The name of the class, function, method, descriptor, or generator instance. `definition.__qualname__` The [qualified name](../glossary#term-qualified-name) of the class, function, method, descriptor, or generator instance. New in version 3.3. `class.__mro__` This attribute is a tuple of classes that are considered when looking for base classes during method resolution. `class.mro()` This method can be overridden by a metaclass to customize the method resolution order for its instances. It is called at class instantiation, and its result is stored in [`__mro__`](#class.__mro__ "class.__mro__"). 
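For example, the resolution order for the built-in `bool` class runs through its base classes in order:

```
>>> bool.__mro__
(<class 'bool'>, <class 'int'>, <class 'object'>)
```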
`class.__subclasses__()` Each class keeps a list of weak references to its immediate subclasses. This method returns a list of all those references still alive. The list is in definition order. Example:

```
>>> int.__subclasses__()
[<class 'bool'>]
```

Integer string conversion length limitation
-------------------------------------------

CPython has a global limit for converting between [`int`](functions#int "int") and [`str`](#str "str") to mitigate denial of service attacks. This limit *only* applies to decimal or other non-power-of-two number bases. Hexadecimal, octal, and binary conversions are unlimited. The limit can be configured. The [`int`](functions#int "int") type in CPython is an arbitrary-length number stored in binary form (commonly known as a “bignum”). There exists no algorithm that can convert a string to a binary integer or a binary integer to a string in linear time, *unless* the base is a power of 2. Even the best known algorithms for base 10 have sub-quadratic complexity. Converting a large value such as `int('1' * 500_000)` can take over a second on a fast CPU. Limiting conversion size offers a practical way to avoid [CVE-2020-10735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10735). The limit is applied to the number of digit characters in the input or output string when a non-linear conversion algorithm would be involved. Underscores and the sign are not counted towards the limit. When an operation would exceed the limit, a [`ValueError`](exceptions#ValueError "ValueError") is raised:

```
>>> import sys
>>> sys.set_int_max_str_digits(4300)  # Illustrative, this is the default.
>>> _ = int('2' * 5432)
Traceback (most recent call last):
...
ValueError: Exceeds the limit (4300) for integer string conversion: value has 5432 digits.
>>> i = int('2' * 4300)
>>> len(str(i))
4300
>>> i_squared = i*i
>>> len(str(i_squared))
Traceback (most recent call last):
...
ValueError: Exceeds the limit (4300) for integer string conversion: value has 8599 digits.
>>> len(hex(i_squared))
7144
>>> assert int(hex(i_squared), base=16) == i*i  # Hexadecimal is unlimited.
```

The default limit is 4300 digits as provided in [`sys.int_info.default_max_str_digits`](sys#sys.int_info "sys.int_info"). The lowest limit that can be configured is 640 digits as provided in [`sys.int_info.str_digits_check_threshold`](sys#sys.int_info "sys.int_info"). Verification:

```
>>> import sys
>>> assert sys.int_info.default_max_str_digits == 4300, sys.int_info
>>> assert sys.int_info.str_digits_check_threshold == 640, sys.int_info
>>> msg = int('578966293710682886880994035146873798396722250538762761564'
...     '9252925514383915483333812743580549779436104706260696366600'
...     '571186405732').to_bytes(53, 'big')
...
```

New in version 3.9.14.

### Affected APIs

The limitation only applies to potentially slow conversions between [`int`](functions#int "int") and [`str`](#str "str") or [`bytes`](#bytes "bytes"): * `int(string)` with default base 10. * `int(string, base)` for all bases that are not a power of 2. * `str(integer)`. * `repr(integer)`. * any other string conversion to base 10, for example `f"{integer}"`, `"{}".format(integer)`, or `b"%d" % integer`. The limitations do not apply to functions with a linear algorithm: * `int(string, base)` with base 2, 4, 8, 16, or 32. * [`int.from_bytes()`](#int.from_bytes "int.from_bytes") and [`int.to_bytes()`](#int.to_bytes "int.to_bytes"). * [`hex()`](functions#hex "hex"), [`oct()`](functions#oct "oct"), [`bin()`](functions#bin "bin").
* [Format Specification Mini-Language](string#formatspec) for hex, octal, and binary numbers. * [`str`](#str "str") to [`float`](functions#float "float"). * [`str`](#str "str") to [`decimal.Decimal`](decimal#decimal.Decimal "decimal.Decimal").

### Configuring the limit

Before Python starts up you can use an environment variable or an interpreter command line flag to configure the limit: * [`PYTHONINTMAXSTRDIGITS`](../using/cmdline#envvar-PYTHONINTMAXSTRDIGITS), e.g. `PYTHONINTMAXSTRDIGITS=640 python3` to set the limit to 640 or `PYTHONINTMAXSTRDIGITS=0 python3` to disable the limitation. * [`-X int_max_str_digits`](../using/cmdline#id5), e.g. `python3 -X int_max_str_digits=640` * `sys.flags.int_max_str_digits` contains the value of [`PYTHONINTMAXSTRDIGITS`](../using/cmdline#envvar-PYTHONINTMAXSTRDIGITS) or [`-X int_max_str_digits`](../using/cmdline#id5). If both the env var and the `-X` option are set, the `-X` option takes precedence. A value of *-1* indicates that both were unset, thus a value of `sys.int_info.default_max_str_digits` was used during initialization. From code, you can inspect the current limit and set a new one using these [`sys`](sys#module-sys "sys: Access system-specific parameters and functions.") APIs: * [`sys.get_int_max_str_digits()`](sys#sys.get_int_max_str_digits "sys.get_int_max_str_digits") and [`sys.set_int_max_str_digits()`](sys#sys.set_int_max_str_digits "sys.set_int_max_str_digits") are a getter and setter for the interpreter-wide limit. Subinterpreters have their own limit. Information about the default and minimum can be found in [`sys.int_info`](sys#sys.int_info "sys.int_info"): * [`sys.int_info.default_max_str_digits`](sys#sys.int_info "sys.int_info") is the compiled-in default limit. * [`sys.int_info.str_digits_check_threshold`](sys#sys.int_info "sys.int_info") is the lowest accepted value for the limit (other than 0, which disables it). New in version 3.9.14. Caution Setting a low limit *can* lead to problems. While rare, code exists that contains decimal integer constants in its source that exceed the minimum threshold. A consequence of setting the limit is that Python source code containing decimal integer literals longer than the limit will encounter an error during parsing, usually at startup time, import time, or even installation time - any time an up-to-date `.pyc` does not already exist for the code. A workaround for source that contains such large constants is to convert them to `0x` hexadecimal form, as it has no limit. Test your application thoroughly if you use a low limit. Ensure your tests run with the limit set early via the environment or flag so that it applies during startup and even during any installation step that may invoke Python to precompile `.py` sources to `.pyc` files.

### Recommended configuration

The default `sys.int_info.default_max_str_digits` is expected to be reasonable for most applications. If your application requires a different limit, set it from your main entry point using version-agnostic code, as these APIs were added in security patch releases in versions before 3.11. Example:

```
>>> import sys
>>> if hasattr(sys, "set_int_max_str_digits"):
...     upper_bound = 68000
...     lower_bound = 4004
...     current_limit = sys.get_int_max_str_digits()
...     if current_limit == 0 or current_limit > upper_bound:
...         sys.set_int_max_str_digits(upper_bound)
...     elif current_limit < lower_bound:
...         sys.set_int_max_str_digits(lower_bound)
```

If you need to disable it entirely, set it to `0`.
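As a quick check of the resulting configuration (a small sketch; the values shown assume a default build where neither `PYTHONINTMAXSTRDIGITS` nor `-X int_max_str_digits` was given):

```
>>> import sys
>>> sys.get_int_max_str_digits()
4300
>>> sys.int_info.default_max_str_digits
4300
>>> sys.int_info.str_digits_check_threshold
640
>>> sys.flags.int_max_str_digits  # -1: neither the env var nor -X was used
-1
```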
#### Footnotes

`1` Additional information on these special methods may be found in the Python Reference Manual ([Basic customization](../reference/datamodel#customization)). `2` As a consequence, the list `[1, 2]` is considered equal to `[1.0, 2.0]`, and similarly for tuples. `3` They must have since the parser can’t tell the type of the operands. `4` Cased characters are those with general category property being one of “Lu” (Letter, uppercase), “Ll” (Letter, lowercase), or “Lt” (Letter, titlecase). `5` To format only a tuple you should therefore provide a singleton tuple whose only element is the tuple to be formatted.
python datetime — Basic date and time types datetime — Basic date and time types ==================================== **Source code:** [Lib/datetime.py](https://github.com/python/cpython/tree/3.9/Lib/datetime.py) The [`datetime`](#module-datetime "datetime: Basic date and time types.") module supplies classes for manipulating dates and times. While date and time arithmetic is supported, the focus of the implementation is on efficient attribute extraction for output formatting and manipulation. See also `Module` [`calendar`](calendar#module-calendar "calendar: Functions for working with calendars, including some emulation of the Unix cal program.") General calendar related functions. `Module` [`time`](time#module-time "time: Time access and conversions.") Time access and conversions. `Module` [`zoneinfo`](zoneinfo#module-zoneinfo "zoneinfo: IANA time zone support") Concrete time zones representing the IANA time zone database. Package [dateutil](https://dateutil.readthedocs.io/en/stable/) Third-party library with expanded time zone and parsing support. Aware and Naive Objects ----------------------- Date and time objects may be categorized as “aware” or “naive” depending on whether or not they include timezone information. With sufficient knowledge of applicable algorithmic and political time adjustments, such as time zone and daylight saving time information, an **aware** object can locate itself relative to other aware objects. An aware object represents a specific moment in time that is not open to interpretation. [1](#id5) A **naive** object does not contain enough information to unambiguously locate itself relative to other date/time objects. Whether a naive object represents Coordinated Universal Time (UTC), local time, or time in some other timezone is purely up to the program, just like it is up to the program whether a particular number represents metres, miles, or mass. Naive objects are easy to understand and to work with, at the cost of ignoring some aspects of reality. For applications requiring aware objects, [`datetime`](#datetime.datetime "datetime.datetime") and [`time`](#datetime.time "datetime.time") objects have an optional time zone information attribute, `tzinfo`, that can be set to an instance of a subclass of the abstract [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") class. These [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") objects capture information about the offset from UTC time, the time zone name, and whether daylight saving time is in effect. Only one concrete [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") class, the [`timezone`](#datetime.timezone "datetime.timezone") class, is supplied by the [`datetime`](#module-datetime "datetime: Basic date and time types.") module. The [`timezone`](#datetime.timezone "datetime.timezone") class can represent simple timezones with fixed offsets from UTC, such as UTC itself or North American EST and EDT timezones. Supporting timezones at deeper levels of detail is up to the application. The rules for time adjustment across the world are more political than rational, change frequently, and there is no standard suitable for every application aside from UTC. Constants --------- The [`datetime`](#module-datetime "datetime: Basic date and time types.") module exports the following constants: `datetime.MINYEAR` The smallest year number allowed in a [`date`](#datetime.date "datetime.date") or [`datetime`](#datetime.datetime "datetime.datetime") object. [`MINYEAR`](#datetime.MINYEAR "datetime.MINYEAR") is `1`. 
`datetime.MAXYEAR` The largest year number allowed in a [`date`](#datetime.date "datetime.date") or [`datetime`](#datetime.datetime "datetime.datetime") object. [`MAXYEAR`](#datetime.MAXYEAR "datetime.MAXYEAR") is `9999`. Available Types --------------- `class datetime.date` An idealized naive date, assuming the current Gregorian calendar always was, and always will be, in effect. Attributes: [`year`](#datetime.date.year "datetime.date.year"), [`month`](#datetime.date.month "datetime.date.month"), and [`day`](#datetime.date.day "datetime.date.day"). `class datetime.time` An idealized time, independent of any particular day, assuming that every day has exactly 24\*60\*60 seconds. (There is no notion of “leap seconds” here.) Attributes: [`hour`](#datetime.time.hour "datetime.time.hour"), [`minute`](#datetime.time.minute "datetime.time.minute"), [`second`](#datetime.time.second "datetime.time.second"), [`microsecond`](#datetime.time.microsecond "datetime.time.microsecond"), and [`tzinfo`](#datetime.time.tzinfo "datetime.time.tzinfo"). `class datetime.datetime` A combination of a date and a time. Attributes: [`year`](#datetime.datetime.year "datetime.datetime.year"), [`month`](#datetime.datetime.month "datetime.datetime.month"), [`day`](#datetime.datetime.day "datetime.datetime.day"), [`hour`](#datetime.datetime.hour "datetime.datetime.hour"), [`minute`](#datetime.datetime.minute "datetime.datetime.minute"), [`second`](#datetime.datetime.second "datetime.datetime.second"), [`microsecond`](#datetime.datetime.microsecond "datetime.datetime.microsecond"), and [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo"). `class datetime.timedelta` A duration expressing the difference between two [`date`](#datetime.date "datetime.date"), [`time`](#datetime.time "datetime.time"), or [`datetime`](#datetime.datetime "datetime.datetime") instances to microsecond resolution. `class datetime.tzinfo` An abstract base class for time zone information objects. These are used by the [`datetime`](#datetime.datetime "datetime.datetime") and [`time`](#datetime.time "datetime.time") classes to provide a customizable notion of time adjustment (for example, to account for time zone and/or daylight saving time). `class datetime.timezone` A class that implements the [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") abstract base class as a fixed offset from the UTC. New in version 3.2. Objects of these types are immutable. Subclass relationships: ``` object timedelta tzinfo timezone time date datetime ``` ### Common Properties The [`date`](#datetime.date "datetime.date"), [`datetime`](#datetime.datetime "datetime.datetime"), [`time`](#datetime.time "datetime.time"), and [`timezone`](#datetime.timezone "datetime.timezone") types share these common features: * Objects of these types are immutable. * Objects of these types are hashable, meaning that they can be used as dictionary keys. * Objects of these types support efficient pickling via the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") module. ### Determining if an Object is Aware or Naive Objects of the [`date`](#datetime.date "datetime.date") type are always naive. An object of type [`time`](#datetime.time "datetime.time") or [`datetime`](#datetime.datetime "datetime.datetime") may be aware or naive. A [`datetime`](#datetime.datetime "datetime.datetime") object *d* is aware if both of the following hold: 1. `d.tzinfo` is not `None` 2. `d.tzinfo.utcoffset(d)` does not return `None` Otherwise, *d* is naive. 
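A minimal sketch applying these two rules, using the [`timezone`](#datetime.timezone "datetime.timezone") class described above:

```
>>> from datetime import datetime, timezone
>>> naive = datetime(2002, 12, 4)
>>> naive.tzinfo is None           # fails rule 1, so naive
True
>>> aware = datetime(2002, 12, 4, tzinfo=timezone.utc)
>>> aware.tzinfo.utcoffset(aware)  # not None, so aware
datetime.timedelta(0)
```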
A [`time`](#datetime.time "datetime.time") object *t* is aware if both of the following hold: 1. `t.tzinfo` is not `None` 2. `t.tzinfo.utcoffset(None)` does not return `None`. Otherwise, *t* is naive. The distinction between aware and naive doesn’t apply to [`timedelta`](#datetime.timedelta "datetime.timedelta") objects. timedelta Objects ----------------- A [`timedelta`](#datetime.timedelta "datetime.timedelta") object represents a duration, the difference between two dates or times. `class datetime.timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0)` All arguments are optional and default to `0`. Arguments may be integers or floats, and may be positive or negative. Only *days*, *seconds* and *microseconds* are stored internally. Arguments are converted to those units: * A millisecond is converted to 1000 microseconds. * A minute is converted to 60 seconds. * An hour is converted to 3600 seconds. * A week is converted to 7 days. and days, seconds and microseconds are then normalized so that the representation is unique, with * `0 <= microseconds < 1000000` * `0 <= seconds < 3600*24` (the number of seconds in one day) * `-999999999 <= days <= 999999999` The following example illustrates how any arguments besides *days*, *seconds* and *microseconds* are “merged” and normalized into those three resulting attributes: ``` >>> from datetime import timedelta >>> delta = timedelta( ... days=50, ... seconds=27, ... microseconds=10, ... milliseconds=29000, ... minutes=5, ... hours=8, ... weeks=2 ... ) >>> # Only days, seconds, and microseconds remain >>> delta datetime.timedelta(days=64, seconds=29156, microseconds=10) ``` If any argument is a float and there are fractional microseconds, the fractional microseconds left over from all arguments are combined and their sum is rounded to the nearest microsecond using round-half-to-even tiebreaker. If no argument is a float, the conversion and normalization processes are exact (no information is lost). If the normalized value of days lies outside the indicated range, [`OverflowError`](exceptions#OverflowError "OverflowError") is raised. Note that normalization of negative values may be surprising at first. For example: ``` >>> from datetime import timedelta >>> d = timedelta(microseconds=-1) >>> (d.days, d.seconds, d.microseconds) (-1, 86399, 999999) ``` Class attributes: `timedelta.min` The most negative [`timedelta`](#datetime.timedelta "datetime.timedelta") object, `timedelta(-999999999)`. `timedelta.max` The most positive [`timedelta`](#datetime.timedelta "datetime.timedelta") object, `timedelta(days=999999999, hours=23, minutes=59, seconds=59, microseconds=999999)`. `timedelta.resolution` The smallest possible difference between non-equal [`timedelta`](#datetime.timedelta "datetime.timedelta") objects, `timedelta(microseconds=1)`. Note that, because of normalization, `timedelta.max` > `-timedelta.min`. `-timedelta.max` is not representable as a [`timedelta`](#datetime.timedelta "datetime.timedelta") object. Instance attributes (read-only): | Attribute | Value | | --- | --- | | `days` | Between -999999999 and 999999999 inclusive | | `seconds` | Between 0 and 86399 inclusive | | `microseconds` | Between 0 and 999999 inclusive | Supported operations: | Operation | Result | | --- | --- | | `t1 = t2 + t3` | Sum of *t2* and *t3*. Afterwards *t1*-*t2* == *t3* and *t1*-*t3* == *t2* are true. (1) | | `t1 = t2 - t3` | Difference of *t2* and *t3*. Afterwards *t1* == *t2* - *t3* and *t2* == *t1* + *t3* are true. 
(1)(6) | | `t1 = t2 * i or t1 = i * t2` | Delta multiplied by an integer. Afterwards *t1* // i == *t2* is true, provided `i != 0`. | | | In general, *t1* \* i == *t1* \* (i-1) + *t1* is true. (1) | | `t1 = t2 * f or t1 = f * t2` | Delta multiplied by a float. The result is rounded to the nearest multiple of timedelta.resolution using round-half-to-even. | | `f = t2 / t3` | Division (3) of overall duration *t2* by interval unit *t3*. Returns a [`float`](functions#float "float") object. | | `t1 = t2 / f or t1 = t2 / i` | Delta divided by a float or an int. The result is rounded to the nearest multiple of timedelta.resolution using round-half-to-even. | | `t1 = t2 // i` or `t1 = t2 // t3` | The floor is computed and the remainder (if any) is thrown away. In the second case, an integer is returned. (3) | | `t1 = t2 % t3` | The remainder is computed as a [`timedelta`](#datetime.timedelta "datetime.timedelta") object. (3) | | `q, r = divmod(t1, t2)` | Computes the quotient and the remainder: `q = t1 // t2` (3) and `r = t1 % t2`. q is an integer and r is a [`timedelta`](#datetime.timedelta "datetime.timedelta") object. | | `+t1` | Returns a [`timedelta`](#datetime.timedelta "datetime.timedelta") object with the same value. (2) | | `-t1` | equivalent to [`timedelta`](#datetime.timedelta "datetime.timedelta")(-*t1.days*, -*t1.seconds*, -*t1.microseconds*), and to *t1*\* -1. (1)(4) | | `abs(t)` | equivalent to +*t* when `t.days >= 0`, and to -*t* when `t.days < 0`. (2) | | `str(t)` | Returns a string in the form `[D day[s], ][H]H:MM:SS[.UUUUUU]`, where D is negative for negative `t`. (5) | | `repr(t)` | Returns a string representation of the [`timedelta`](#datetime.timedelta "datetime.timedelta") object as a constructor call with canonical attribute values. | Notes: 1. This is exact but may overflow. 2. This is exact and cannot overflow. 3. Division by 0 raises [`ZeroDivisionError`](exceptions#ZeroDivisionError "ZeroDivisionError"). 4. -*timedelta.max* is not representable as a [`timedelta`](#datetime.timedelta "datetime.timedelta") object. 5. String representations of [`timedelta`](#datetime.timedelta "datetime.timedelta") objects are normalized similarly to their internal representation. This leads to somewhat unusual results for negative timedeltas. For example: ``` >>> timedelta(hours=-5) datetime.timedelta(days=-1, seconds=68400) >>> print(_) -1 day, 19:00:00 ``` 6. The expression `t2 - t3` will always be equal to the expression `t2 + (-t3)` except when t3 is equal to `timedelta.max`; in that case the former will produce a result while the latter will overflow. In addition to the operations listed above, [`timedelta`](#datetime.timedelta "datetime.timedelta") objects support certain additions and subtractions with [`date`](#datetime.date "datetime.date") and [`datetime`](#datetime.datetime "datetime.datetime") objects (see below). Changed in version 3.2: Floor division and true division of a [`timedelta`](#datetime.timedelta "datetime.timedelta") object by another [`timedelta`](#datetime.timedelta "datetime.timedelta") object are now supported, as are remainder operations and the [`divmod()`](functions#divmod "divmod") function. True division and multiplication of a [`timedelta`](#datetime.timedelta "datetime.timedelta") object by a [`float`](functions#float "float") object are now supported. Comparisons of [`timedelta`](#datetime.timedelta "datetime.timedelta") objects are supported, with some caveats. 
The comparisons `==` or `!=` *always* return a [`bool`](functions#bool "bool"), no matter the type of the compared object: ``` >>> from datetime import timedelta >>> delta1 = timedelta(seconds=57) >>> delta2 = timedelta(hours=25, seconds=2) >>> delta2 != delta1 True >>> delta2 == 5 False ``` For all other comparisons (such as `<` and `>`), when a [`timedelta`](#datetime.timedelta "datetime.timedelta") object is compared to an object of a different type, [`TypeError`](exceptions#TypeError "TypeError") is raised: ``` >>> delta2 > delta1 True >>> delta2 > 5 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: '>' not supported between instances of 'datetime.timedelta' and 'int' ``` In Boolean contexts, a [`timedelta`](#datetime.timedelta "datetime.timedelta") object is considered to be true if and only if it isn’t equal to `timedelta(0)`. Instance methods: `timedelta.total_seconds()` Return the total number of seconds contained in the duration. Equivalent to `td / timedelta(seconds=1)`. For interval units other than seconds, use the division form directly (e.g. `td / timedelta(microseconds=1)`). Note that for very large time intervals (greater than 270 years on most platforms) this method will lose microsecond accuracy. New in version 3.2. ### Examples of usage: [`timedelta`](#datetime.timedelta "datetime.timedelta") An additional example of normalization: ``` >>> # Components of another_year add up to exactly 365 days >>> from datetime import timedelta >>> year = timedelta(days=365) >>> another_year = timedelta(weeks=40, days=84, hours=23, ... minutes=50, seconds=600) >>> year == another_year True >>> year.total_seconds() 31536000.0 ``` Examples of [`timedelta`](#datetime.timedelta "datetime.timedelta") arithmetic: ``` >>> from datetime import timedelta >>> year = timedelta(days=365) >>> ten_years = 10 * year >>> ten_years datetime.timedelta(days=3650) >>> ten_years.days // 365 10 >>> nine_years = ten_years - year >>> nine_years datetime.timedelta(days=3285) >>> three_years = nine_years // 3 >>> three_years, three_years.days // 365 (datetime.timedelta(days=1095), 3) ``` date Objects ------------ A [`date`](#datetime.date "datetime.date") object represents a date (year, month and day) in an idealized calendar, the current Gregorian calendar indefinitely extended in both directions. January 1 of year 1 is called day number 1, January 2 of year 1 is called day number 2, and so on. [2](#id6) `class datetime.date(year, month, day)` All arguments are required. Arguments must be integers, in the following ranges: * `MINYEAR <= year <= MAXYEAR` * `1 <= month <= 12` * `1 <= day <= number of days in the given month and year` If an argument outside those ranges is given, [`ValueError`](exceptions#ValueError "ValueError") is raised. Other constructors, all class methods: `classmethod date.today()` Return the current local date. This is equivalent to `date.fromtimestamp(time.time())`. `classmethod date.fromtimestamp(timestamp)` Return the local date corresponding to the POSIX timestamp, such as is returned by [`time.time()`](time#time.time "time.time"). This may raise [`OverflowError`](exceptions#OverflowError "OverflowError"), if the timestamp is out of the range of values supported by the platform C `localtime()` function, and [`OSError`](exceptions#OSError "OSError") on `localtime()` failure. It’s common for this to be restricted to years from 1970 through 2038. 
Note that on non-POSIX systems that include leap seconds in their notion of a timestamp, leap seconds are ignored by [`fromtimestamp()`](#datetime.date.fromtimestamp "datetime.date.fromtimestamp"). Changed in version 3.3: Raise [`OverflowError`](exceptions#OverflowError "OverflowError") instead of [`ValueError`](exceptions#ValueError "ValueError") if the timestamp is out of the range of values supported by the platform C `localtime()` function. Raise [`OSError`](exceptions#OSError "OSError") instead of [`ValueError`](exceptions#ValueError "ValueError") on `localtime()` failure. `classmethod date.fromordinal(ordinal)` Return the date corresponding to the proleptic Gregorian ordinal, where January 1 of year 1 has ordinal 1. [`ValueError`](exceptions#ValueError "ValueError") is raised unless `1 <= ordinal <= date.max.toordinal()`. For any date *d*, `date.fromordinal(d.toordinal()) == d`. `classmethod date.fromisoformat(date_string)` Return a [`date`](#datetime.date "datetime.date") corresponding to a *date\_string* given in the format `YYYY-MM-DD`: ``` >>> from datetime import date >>> date.fromisoformat('2019-12-04') datetime.date(2019, 12, 4) ``` This is the inverse of [`date.isoformat()`](#datetime.date.isoformat "datetime.date.isoformat"). It only supports the format `YYYY-MM-DD`. New in version 3.7. `classmethod date.fromisocalendar(year, week, day)` Return a [`date`](#datetime.date "datetime.date") corresponding to the ISO calendar date specified by year, week and day. This is the inverse of the function [`date.isocalendar()`](#datetime.date.isocalendar "datetime.date.isocalendar"). New in version 3.8. Class attributes: `date.min` The earliest representable date, `date(MINYEAR, 1, 1)`. `date.max` The latest representable date, `date(MAXYEAR, 12, 31)`. `date.resolution` The smallest possible difference between non-equal date objects, `timedelta(days=1)`. Instance attributes (read-only): `date.year` Between [`MINYEAR`](#datetime.MINYEAR "datetime.MINYEAR") and [`MAXYEAR`](#datetime.MAXYEAR "datetime.MAXYEAR") inclusive. `date.month` Between 1 and 12 inclusive. `date.day` Between 1 and the number of days in the given month of the given year. Supported operations: | Operation | Result | | --- | --- | | `date2 = date1 + timedelta` | *date2* is `timedelta.days` days removed from *date1*. (1) | | `date2 = date1 - timedelta` | Computes *date2* such that `date2 + timedelta == date1`. (2) | | `timedelta = date1 - date2` | (3) | | `date1 < date2` | *date1* is considered less than *date2* when *date1* precedes *date2* in time. (4) | Notes: 1. *date2* is moved forward in time if `timedelta.days > 0`, or backward if `timedelta.days < 0`. Afterward `date2 - date1 == timedelta.days`. `timedelta.seconds` and `timedelta.microseconds` are ignored. [`OverflowError`](exceptions#OverflowError "OverflowError") is raised if `date2.year` would be smaller than [`MINYEAR`](#datetime.MINYEAR "datetime.MINYEAR") or larger than [`MAXYEAR`](#datetime.MAXYEAR "datetime.MAXYEAR"). 2. `timedelta.seconds` and `timedelta.microseconds` are ignored. 3. This is exact, and cannot overflow. timedelta.seconds and timedelta.microseconds are 0, and date2 + timedelta == date1 after. 4. In other words, `date1 < date2` if and only if `date1.toordinal() < date2.toordinal()`. Date comparison raises [`TypeError`](exceptions#TypeError "TypeError") if the other comparand isn’t also a [`date`](#datetime.date "datetime.date") object. However, `NotImplemented` is returned instead if the other comparand has a `timetuple()` attribute. 
This hook gives other kinds of date objects a chance at implementing mixed-type comparison. If not, when a [`date`](#datetime.date "datetime.date") object is compared to an object of a different type, [`TypeError`](exceptions#TypeError "TypeError") is raised unless the comparison is `==` or `!=`. The latter cases return [`False`](constants#False "False") or [`True`](constants#True "True"), respectively. In Boolean contexts, all [`date`](#datetime.date "datetime.date") objects are considered to be true.

Instance methods:

`date.replace(year=self.year, month=self.month, day=self.day)` Return a date with the same value, except for those parameters given new values by whichever keyword arguments are specified. Example:

```
>>> from datetime import date
>>> d = date(2002, 12, 31)
>>> d.replace(day=26)
datetime.date(2002, 12, 26)
```

`date.timetuple()` Return a [`time.struct_time`](time#time.struct_time "time.struct_time") such as returned by [`time.localtime()`](time#time.localtime "time.localtime"). The hours, minutes and seconds are 0, and the DST flag is -1. `d.timetuple()` is equivalent to:

```
time.struct_time((d.year, d.month, d.day, 0, 0, 0, d.weekday(), yday, -1))
```

where `yday = d.toordinal() - date(d.year, 1, 1).toordinal() + 1` is the day number within the current year starting with `1` for January 1st.

`date.toordinal()` Return the proleptic Gregorian ordinal of the date, where January 1 of year 1 has ordinal 1. For any [`date`](#datetime.date "datetime.date") object *d*, `date.fromordinal(d.toordinal()) == d`.

`date.weekday()` Return the day of the week as an integer, where Monday is 0 and Sunday is 6. For example, `date(2002, 12, 4).weekday() == 2`, a Wednesday. See also [`isoweekday()`](#datetime.date.isoweekday "datetime.date.isoweekday").

`date.isoweekday()` Return the day of the week as an integer, where Monday is 1 and Sunday is 7. For example, `date(2002, 12, 4).isoweekday() == 3`, a Wednesday. See also [`weekday()`](#datetime.date.weekday "datetime.date.weekday"), [`isocalendar()`](#datetime.date.isocalendar "datetime.date.isocalendar").

`date.isocalendar()` Return a [named tuple](../glossary#term-named-tuple) object with three components: `year`, `week` and `weekday`. The ISO calendar is a widely used variant of the Gregorian calendar. [3](#id7) The ISO year consists of 52 or 53 full weeks, where a week starts on a Monday and ends on a Sunday. The first week of an ISO year is the first (Gregorian) calendar week of a year containing a Thursday. This is called week number 1, and the ISO year of that Thursday is the same as its Gregorian year. For example, 2004 begins on a Thursday, so the first week of ISO year 2004 begins on Monday, 29 Dec 2003 and ends on Sunday, 4 Jan 2004:

```
>>> from datetime import date
>>> date(2003, 12, 29).isocalendar()
datetime.IsoCalendarDate(year=2004, week=1, weekday=1)
>>> date(2004, 1, 4).isocalendar()
datetime.IsoCalendarDate(year=2004, week=1, weekday=7)
```

Changed in version 3.9: Result changed from a tuple to a [named tuple](../glossary#term-named-tuple).

`date.isoformat()` Return a string representing the date in ISO 8601 format, `YYYY-MM-DD`:

```
>>> from datetime import date
>>> date(2002, 12, 4).isoformat()
'2002-12-04'
```

This is the inverse of [`date.fromisoformat()`](#datetime.date.fromisoformat "datetime.date.fromisoformat").

`date.__str__()` For a date *d*, `str(d)` is equivalent to `d.isoformat()`.
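The conventions above can be checked directly; the following is a minimal sketch (plain script style rather than the interpreter sessions used elsewhere in this section) exercising only methods documented here:

```
from datetime import date

d = date(2002, 12, 4)                        # a Wednesday
assert date.fromordinal(d.toordinal()) == d  # toordinal()/fromordinal() round-trip
assert d.weekday() == 2                      # Monday is 0 for weekday()
assert d.isoweekday() == d.weekday() + 1     # Monday is 1 for isoweekday()
assert str(d) == d.isoformat() == '2002-12-04'
```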
`date.ctime()` Return a string representing the date:

```
>>> from datetime import date
>>> date(2002, 12, 4).ctime()
'Wed Dec  4 00:00:00 2002'
```

`d.ctime()` is equivalent to:

```
time.ctime(time.mktime(d.timetuple()))
```

on platforms where the native C `ctime()` function (which [`time.ctime()`](time#time.ctime "time.ctime") invokes, but which [`date.ctime()`](#datetime.date.ctime "datetime.date.ctime") does not invoke) conforms to the C standard.

`date.strftime(format)` Return a string representing the date, controlled by an explicit format string. Format codes referring to hours, minutes or seconds will see 0 values. For a complete list of formatting directives, see [strftime() and strptime() Behavior](#strftime-strptime-behavior).

`date.__format__(format)` Same as [`date.strftime()`](#datetime.date.strftime "datetime.date.strftime"). This makes it possible to specify a format string for a [`date`](#datetime.date "datetime.date") object in [formatted string literals](../reference/lexical_analysis#f-strings) and when using [`str.format()`](stdtypes#str.format "str.format"). For a complete list of formatting directives, see [strftime() and strptime() Behavior](#strftime-strptime-behavior).

### Examples of Usage: [`date`](#datetime.date "datetime.date")

Example of counting days to an event:

```
>>> import time
>>> from datetime import date
>>> today = date.today()
>>> today
datetime.date(2007, 12, 5)
>>> today == date.fromtimestamp(time.time())
True
>>> my_birthday = date(today.year, 6, 24)
>>> if my_birthday < today:
...     my_birthday = my_birthday.replace(year=today.year + 1)
>>> my_birthday
datetime.date(2008, 6, 24)
>>> time_to_birthday = abs(my_birthday - today)
>>> time_to_birthday.days
202
```

More examples of working with [`date`](#datetime.date "datetime.date"):

```
>>> from datetime import date
>>> d = date.fromordinal(730920)  # 730920th day after 1. 1. 0001
>>> d
datetime.date(2002, 3, 11)

>>> # Methods related to formatting string output
>>> d.isoformat()
'2002-03-11'
>>> d.strftime("%d/%m/%y")
'11/03/02'
>>> d.strftime("%A %d. %B %Y")
'Monday 11. March 2002'
>>> d.ctime()
'Mon Mar 11 00:00:00 2002'
>>> 'The {1} is {0:%d}, the {2} is {0:%B}.'.format(d, "day", "month")
'The day is 11, the month is March.'

>>> # Methods for extracting 'components' under different calendars
>>> t = d.timetuple()
>>> for i in t:
...     print(i)
2002    # year
3       # month
11      # day
0
0
0
0       # weekday (0 = Monday)
70      # 70th day in the year
-1
>>> ic = d.isocalendar()
>>> for i in ic:
...     print(i)
2002    # ISO year
11      # ISO week number
1       # ISO day number ( 1 = Monday )

>>> # A date object is immutable; all operations produce a new object
>>> d.replace(year=2005)
datetime.date(2005, 3, 11)
```

datetime Objects
----------------

A [`datetime`](#datetime.datetime "datetime.datetime") object is a single object containing all the information from a [`date`](#datetime.date "datetime.date") object and a [`time`](#datetime.time "datetime.time") object. Like a [`date`](#datetime.date "datetime.date") object, [`datetime`](#datetime.datetime "datetime.datetime") assumes the current Gregorian calendar extended in both directions; like a [`time`](#datetime.time "datetime.time") object, [`datetime`](#datetime.datetime "datetime.datetime") assumes there are exactly 3600\*24 seconds in every day.

Constructor:

`class datetime.datetime(year, month, day, hour=0, minute=0, second=0, microsecond=0, tzinfo=None, *, fold=0)` The *year*, *month* and *day* arguments are required.
*tzinfo* may be `None`, or an instance of a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass. The remaining arguments must be integers in the following ranges: * `MINYEAR <= year <= MAXYEAR`, * `1 <= month <= 12`, * `1 <= day <= number of days in the given month and year`, * `0 <= hour < 24`, * `0 <= minute < 60`, * `0 <= second < 60`, * `0 <= microsecond < 1000000`, * `fold in [0, 1]`. If an argument outside those ranges is given, [`ValueError`](exceptions#ValueError "ValueError") is raised. New in version 3.6: Added the `fold` argument. Other constructors, all class methods: `classmethod datetime.today()` Return the current local datetime, with [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") `None`. Equivalent to: ``` datetime.fromtimestamp(time.time()) ``` See also [`now()`](#datetime.datetime.now "datetime.datetime.now"), [`fromtimestamp()`](#datetime.datetime.fromtimestamp "datetime.datetime.fromtimestamp"). This method is functionally equivalent to [`now()`](#datetime.datetime.now "datetime.datetime.now"), but without a `tz` parameter. `classmethod datetime.now(tz=None)` Return the current local date and time. If optional argument *tz* is `None` or not specified, this is like [`today()`](#datetime.datetime.today "datetime.datetime.today"), but, if possible, supplies more precision than can be gotten from going through a [`time.time()`](time#time.time "time.time") timestamp (for example, this may be possible on platforms supplying the C `gettimeofday()` function). If *tz* is not `None`, it must be an instance of a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass, and the current date and time are converted to *tz*’s time zone. This function is preferred over [`today()`](#datetime.datetime.today "datetime.datetime.today") and [`utcnow()`](#datetime.datetime.utcnow "datetime.datetime.utcnow"). `classmethod datetime.utcnow()` Return the current UTC date and time, with [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") `None`. This is like [`now()`](#datetime.datetime.now "datetime.datetime.now"), but returns the current UTC date and time, as a naive [`datetime`](#datetime.datetime "datetime.datetime") object. An aware current UTC datetime can be obtained by calling `datetime.now(timezone.utc)`. See also [`now()`](#datetime.datetime.now "datetime.datetime.now"). Warning Because naive `datetime` objects are treated by many `datetime` methods as local times, it is preferred to use aware datetimes to represent times in UTC. As such, the recommended way to create an object representing the current time in UTC is by calling `datetime.now(timezone.utc)`. `classmethod datetime.fromtimestamp(timestamp, tz=None)` Return the local date and time corresponding to the POSIX timestamp, such as is returned by [`time.time()`](time#time.time "time.time"). If optional argument *tz* is `None` or not specified, the timestamp is converted to the platform’s local date and time, and the returned [`datetime`](#datetime.datetime "datetime.datetime") object is naive. If *tz* is not `None`, it must be an instance of a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass, and the timestamp is converted to *tz*’s time zone. 
[`fromtimestamp()`](#datetime.datetime.fromtimestamp "datetime.datetime.fromtimestamp") may raise [`OverflowError`](exceptions#OverflowError "OverflowError"), if the timestamp is out of the range of values supported by the platform C `localtime()` or `gmtime()` functions, and [`OSError`](exceptions#OSError "OSError") on `localtime()` or `gmtime()` failure. It's common for this to be restricted to years from 1970 through 2038. Note that on non-POSIX systems that include leap seconds in their notion of a timestamp, leap seconds are ignored by [`fromtimestamp()`](#datetime.datetime.fromtimestamp "datetime.datetime.fromtimestamp"), and then it's possible to have two timestamps differing by a second that yield identical [`datetime`](#datetime.datetime "datetime.datetime") objects. This method is preferred over [`utcfromtimestamp()`](#datetime.datetime.utcfromtimestamp "datetime.datetime.utcfromtimestamp").

Changed in version 3.3: Raise [`OverflowError`](exceptions#OverflowError "OverflowError") instead of [`ValueError`](exceptions#ValueError "ValueError") if the timestamp is out of the range of values supported by the platform C `localtime()` or `gmtime()` functions. Raise [`OSError`](exceptions#OSError "OSError") instead of [`ValueError`](exceptions#ValueError "ValueError") on `localtime()` or `gmtime()` failure.

Changed in version 3.6: [`fromtimestamp()`](#datetime.datetime.fromtimestamp "datetime.datetime.fromtimestamp") may return instances with [`fold`](#datetime.datetime.fold "datetime.datetime.fold") set to 1.

`classmethod datetime.utcfromtimestamp(timestamp)` Return the UTC [`datetime`](#datetime.datetime "datetime.datetime") corresponding to the POSIX timestamp, with [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") `None`. (The resulting object is naive.) This may raise [`OverflowError`](exceptions#OverflowError "OverflowError"), if the timestamp is out of the range of values supported by the platform C `gmtime()` function, and [`OSError`](exceptions#OSError "OSError") on `gmtime()` failure. It's common for this to be restricted to years from 1970 through 2038. To get an aware [`datetime`](#datetime.datetime "datetime.datetime") object, call [`fromtimestamp()`](#datetime.datetime.fromtimestamp "datetime.datetime.fromtimestamp"):

```
datetime.fromtimestamp(timestamp, timezone.utc)
```

On POSIX-compliant platforms, it is equivalent to the following expression:

```
datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=timestamp)
```

except the latter formula always supports the full years range: between [`MINYEAR`](#datetime.MINYEAR "datetime.MINYEAR") and [`MAXYEAR`](#datetime.MAXYEAR "datetime.MAXYEAR") inclusive.

Warning Because naive `datetime` objects are treated by many `datetime` methods as local times, it is preferred to use aware datetimes to represent times in UTC. As such, the recommended way to create an object representing a specific timestamp in UTC is by calling `datetime.fromtimestamp(timestamp, tz=timezone.utc)`.

Changed in version 3.3: Raise [`OverflowError`](exceptions#OverflowError "OverflowError") instead of [`ValueError`](exceptions#ValueError "ValueError") if the timestamp is out of the range of values supported by the platform C `gmtime()` function. Raise [`OSError`](exceptions#OSError "OSError") instead of [`ValueError`](exceptions#ValueError "ValueError") on `gmtime()` failure.
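The warning above can be made concrete with a short sketch; the timestamp value below is arbitrary and purely illustrative:

```
from datetime import datetime, timezone

ts = 1571595618.0  # an arbitrary POSIX timestamp, chosen only for illustration

naive = datetime.utcfromtimestamp(ts)                # tzinfo is None
aware = datetime.fromtimestamp(ts, tz=timezone.utc)  # recommended: aware UTC

# Both carry the same date and time data; only the aware object records
# that the data is in UTC.
assert naive == aware.replace(tzinfo=None)
```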
`classmethod datetime.fromordinal(ordinal)` Return the [`datetime`](#datetime.datetime "datetime.datetime") corresponding to the proleptic Gregorian ordinal, where January 1 of year 1 has ordinal 1. [`ValueError`](exceptions#ValueError "ValueError") is raised unless `1 <= ordinal <= datetime.max.toordinal()`. The hour, minute, second and microsecond of the result are all 0, and [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") is `None`. `classmethod datetime.combine(date, time, tzinfo=self.tzinfo)` Return a new [`datetime`](#datetime.datetime "datetime.datetime") object whose date components are equal to the given [`date`](#datetime.date "datetime.date") object’s, and whose time components are equal to the given [`time`](#datetime.time "datetime.time") object’s. If the *tzinfo* argument is provided, its value is used to set the [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attribute of the result, otherwise the [`tzinfo`](#datetime.time.tzinfo "datetime.time.tzinfo") attribute of the *time* argument is used. For any [`datetime`](#datetime.datetime "datetime.datetime") object *d*, `d == datetime.combine(d.date(), d.time(), d.tzinfo)`. If date is a [`datetime`](#datetime.datetime "datetime.datetime") object, its time components and [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attributes are ignored. Changed in version 3.6: Added the *tzinfo* argument. `classmethod datetime.fromisoformat(date_string)` Return a [`datetime`](#datetime.datetime "datetime.datetime") corresponding to a *date\_string* in one of the formats emitted by [`date.isoformat()`](#datetime.date.isoformat "datetime.date.isoformat") and [`datetime.isoformat()`](#datetime.datetime.isoformat "datetime.datetime.isoformat"). Specifically, this function supports strings in the format: ``` YYYY-MM-DD[*HH[:MM[:SS[.fff[fff]]]][+HH:MM[:SS[.ffffff]]]] ``` where `*` can match any single character. Caution This does *not* support parsing arbitrary ISO 8601 strings - it is only intended as the inverse operation of [`datetime.isoformat()`](#datetime.datetime.isoformat "datetime.datetime.isoformat"). A more full-featured ISO 8601 parser, `dateutil.parser.isoparse` is available in the third-party package [dateutil](https://dateutil.readthedocs.io/en/stable/parser.html#dateutil.parser.isoparse). Examples: ``` >>> from datetime import datetime >>> datetime.fromisoformat('2011-11-04') datetime.datetime(2011, 11, 4, 0, 0) >>> datetime.fromisoformat('2011-11-04T00:05:23') datetime.datetime(2011, 11, 4, 0, 5, 23) >>> datetime.fromisoformat('2011-11-04 00:05:23.283') datetime.datetime(2011, 11, 4, 0, 5, 23, 283000) >>> datetime.fromisoformat('2011-11-04 00:05:23.283+00:00') datetime.datetime(2011, 11, 4, 0, 5, 23, 283000, tzinfo=datetime.timezone.utc) >>> datetime.fromisoformat('2011-11-04T00:05:23+04:00') datetime.datetime(2011, 11, 4, 0, 5, 23, tzinfo=datetime.timezone(datetime.timedelta(seconds=14400))) ``` New in version 3.7. `classmethod datetime.fromisocalendar(year, week, day)` Return a [`datetime`](#datetime.datetime "datetime.datetime") corresponding to the ISO calendar date specified by year, week and day. The non-date components of the datetime are populated with their normal default values. This is the inverse of the function [`datetime.isocalendar()`](#datetime.datetime.isocalendar "datetime.datetime.isocalendar"). New in version 3.8. 
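A minimal round-trip sketch of the inverse relationship just stated (note that `fromisocalendar()` fills the time components with their defaults, so only the date part survives):

```
from datetime import datetime

dt = datetime(2004, 1, 4, 12, 30)
year, week, day = dt.isocalendar()  # IsoCalendarDate(year=2004, week=1, weekday=7)

# The round trip recovers the date at midnight, not the original time of day.
assert datetime.fromisocalendar(year, week, day) == datetime(2004, 1, 4)
```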
`classmethod datetime.strptime(date_string, format)` Return a [`datetime`](#datetime.datetime "datetime.datetime") corresponding to *date\_string*, parsed according to *format*. This is equivalent to: ``` datetime(*(time.strptime(date_string, format)[0:6])) ``` [`ValueError`](exceptions#ValueError "ValueError") is raised if the date\_string and format can’t be parsed by [`time.strptime()`](time#time.strptime "time.strptime") or if it returns a value which isn’t a time tuple. For a complete list of formatting directives, see [strftime() and strptime() Behavior](#strftime-strptime-behavior). Class attributes: `datetime.min` The earliest representable [`datetime`](#datetime.datetime "datetime.datetime"), `datetime(MINYEAR, 1, 1, tzinfo=None)`. `datetime.max` The latest representable [`datetime`](#datetime.datetime "datetime.datetime"), `datetime(MAXYEAR, 12, 31, 23, 59, 59, 999999, tzinfo=None)`. `datetime.resolution` The smallest possible difference between non-equal [`datetime`](#datetime.datetime "datetime.datetime") objects, `timedelta(microseconds=1)`. Instance attributes (read-only): `datetime.year` Between [`MINYEAR`](#datetime.MINYEAR "datetime.MINYEAR") and [`MAXYEAR`](#datetime.MAXYEAR "datetime.MAXYEAR") inclusive. `datetime.month` Between 1 and 12 inclusive. `datetime.day` Between 1 and the number of days in the given month of the given year. `datetime.hour` In `range(24)`. `datetime.minute` In `range(60)`. `datetime.second` In `range(60)`. `datetime.microsecond` In `range(1000000)`. `datetime.tzinfo` The object passed as the *tzinfo* argument to the [`datetime`](#datetime.datetime "datetime.datetime") constructor, or `None` if none was passed. `datetime.fold` In `[0, 1]`. Used to disambiguate wall times during a repeated interval. (A repeated interval occurs when clocks are rolled back at the end of daylight saving time or when the UTC offset for the current zone is decreased for political reasons.) The value 0 (1) represents the earlier (later) of the two moments with the same wall time representation. New in version 3.6. Supported operations: | Operation | Result | | --- | --- | | `datetime2 = datetime1 + timedelta` | (1) | | `datetime2 = datetime1 - timedelta` | (2) | | `timedelta = datetime1 - datetime2` | (3) | | `datetime1 < datetime2` | Compares [`datetime`](#datetime.datetime "datetime.datetime") to [`datetime`](#datetime.datetime "datetime.datetime"). (4) | 1. datetime2 is a duration of timedelta removed from datetime1, moving forward in time if `timedelta.days` > 0, or backward if `timedelta.days` < 0. The result has the same [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attribute as the input datetime, and datetime2 - datetime1 == timedelta after. [`OverflowError`](exceptions#OverflowError "OverflowError") is raised if datetime2.year would be smaller than [`MINYEAR`](#datetime.MINYEAR "datetime.MINYEAR") or larger than [`MAXYEAR`](#datetime.MAXYEAR "datetime.MAXYEAR"). Note that no time zone adjustments are done even if the input is an aware object. 2. Computes the datetime2 such that datetime2 + timedelta == datetime1. As for addition, the result has the same [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attribute as the input datetime, and no time zone adjustments are done even if the input is aware. 3. Subtraction of a [`datetime`](#datetime.datetime "datetime.datetime") from a [`datetime`](#datetime.datetime "datetime.datetime") is defined only if both operands are naive, or if both are aware. 
If one is aware and the other is naive, [`TypeError`](exceptions#TypeError "TypeError") is raised. If both are naive, or both are aware and have the same [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attribute, the [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attributes are ignored, and the result is a [`timedelta`](#datetime.timedelta "datetime.timedelta") object *t* such that `datetime2 + t == datetime1`. No time zone adjustments are done in this case. If both are aware and have different [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attributes, `a-b` acts as if *a* and *b* were first converted to naive UTC datetimes. The result is `(a.replace(tzinfo=None) - a.utcoffset()) - (b.replace(tzinfo=None) - b.utcoffset())` except that the implementation never overflows.

4. *datetime1* is considered less than *datetime2* when *datetime1* precedes *datetime2* in time. If one comparand is naive and the other is aware, [`TypeError`](exceptions#TypeError "TypeError") is raised if an order comparison is attempted. For equality comparisons, naive instances are never equal to aware instances. If both comparands are aware, and have the same [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attribute, the common [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attribute is ignored and the base datetimes are compared. If both comparands are aware and have different [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attributes, the comparands are first adjusted by subtracting their UTC offsets (obtained from `self.utcoffset()`).

Changed in version 3.3: Equality comparisons between aware and naive [`datetime`](#datetime.datetime "datetime.datetime") instances don't raise [`TypeError`](exceptions#TypeError "TypeError").

Note In order to stop comparison from falling back to the default scheme of comparing object addresses, datetime comparison normally raises [`TypeError`](exceptions#TypeError "TypeError") if the other comparand isn't also a [`datetime`](#datetime.datetime "datetime.datetime") object. However, `NotImplemented` is returned instead if the other comparand has a `timetuple()` attribute. This hook gives other kinds of date objects a chance at implementing mixed-type comparison. If not, when a [`datetime`](#datetime.datetime "datetime.datetime") object is compared to an object of a different type, [`TypeError`](exceptions#TypeError "TypeError") is raised unless the comparison is `==` or `!=`. The latter cases return [`False`](constants#False "False") or [`True`](constants#True "True"), respectively.

Instance methods:

`datetime.date()` Return [`date`](#datetime.date "datetime.date") object with same year, month and day.

`datetime.time()` Return [`time`](#datetime.time "datetime.time") object with same hour, minute, second, microsecond and fold. [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") is `None`. See also method [`timetz()`](#datetime.datetime.timetz "datetime.datetime.timetz"). Changed in version 3.6: The fold value is copied to the returned [`time`](#datetime.time "datetime.time") object.

`datetime.timetz()` Return [`time`](#datetime.time "datetime.time") object with same hour, minute, second, microsecond, fold, and tzinfo attributes. See also method [`time()`](#datetime.datetime.time "datetime.datetime.time"). Changed in version 3.6: The fold value is copied to the returned [`time`](#datetime.time "datetime.time") object.
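A short sketch of the three projection methods just described, using the module's own `timezone.utc` instance:

```
from datetime import datetime, timezone

dt = datetime(2006, 11, 21, 16, 30, tzinfo=timezone.utc)

print(dt.date())    # 2006-11-21
print(dt.time())    # 16:30:00         (tzinfo dropped)
print(dt.timetz())  # 16:30:00+00:00   (tzinfo kept)
```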
`datetime.replace(year=self.year, month=self.month, day=self.day, hour=self.hour, minute=self.minute, second=self.second, microsecond=self.microsecond, tzinfo=self.tzinfo, *, fold=0)` Return a datetime with the same attributes, except for those attributes given new values by whichever keyword arguments are specified. Note that `tzinfo=None` can be specified to create a naive datetime from an aware datetime with no conversion of date and time data. New in version 3.6: Added the `fold` argument. `datetime.astimezone(tz=None)` Return a [`datetime`](#datetime.datetime "datetime.datetime") object with new [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attribute *tz*, adjusting the date and time data so the result is the same UTC time as *self*, but in *tz*’s local time. If provided, *tz* must be an instance of a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass, and its [`utcoffset()`](#datetime.datetime.utcoffset "datetime.datetime.utcoffset") and [`dst()`](#datetime.datetime.dst "datetime.datetime.dst") methods must not return `None`. If *self* is naive, it is presumed to represent time in the system timezone. If called without arguments (or with `tz=None`) the system local timezone is assumed for the target timezone. The `.tzinfo` attribute of the converted datetime instance will be set to an instance of [`timezone`](#datetime.timezone "datetime.timezone") with the zone name and offset obtained from the OS. If `self.tzinfo` is *tz*, `self.astimezone(tz)` is equal to *self*: no adjustment of date or time data is performed. Else the result is local time in the timezone *tz*, representing the same UTC time as *self*: after `astz = dt.astimezone(tz)`, `astz - astz.utcoffset()` will have the same date and time data as `dt - dt.utcoffset()`. If you merely want to attach a time zone object *tz* to a datetime *dt* without adjustment of date and time data, use `dt.replace(tzinfo=tz)`. If you merely want to remove the time zone object from an aware datetime *dt* without conversion of date and time data, use `dt.replace(tzinfo=None)`. Note that the default [`tzinfo.fromutc()`](#datetime.tzinfo.fromutc "datetime.tzinfo.fromutc") method can be overridden in a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass to affect the result returned by [`astimezone()`](#datetime.datetime.astimezone "datetime.datetime.astimezone"). Ignoring error cases, [`astimezone()`](#datetime.datetime.astimezone "datetime.datetime.astimezone") acts like: ``` def astimezone(self, tz): if self.tzinfo is tz: return self # Convert self to UTC, and attach the new time zone object. utc = (self - self.utcoffset()).replace(tzinfo=tz) # Convert from UTC to tz's local time. return tz.fromutc(utc) ``` Changed in version 3.3: *tz* now can be omitted. Changed in version 3.6: The [`astimezone()`](#datetime.datetime.astimezone "datetime.datetime.astimezone") method can now be called on naive instances that are presumed to represent system local time. `datetime.utcoffset()` If [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") is `None`, returns `None`, else returns `self.tzinfo.utcoffset(self)`, and raises an exception if the latter doesn’t return `None` or a [`timedelta`](#datetime.timedelta "datetime.timedelta") object with magnitude less than one day. Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes. 
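To see the `astimezone()` behavior described above in action, here is a minimal sketch; the fixed -05:00 zone is an arbitrary example built with [`timezone`](#datetime.timezone "datetime.timezone"):

```
from datetime import datetime, timedelta, timezone

minus_five = timezone(timedelta(hours=-5))  # an arbitrary fixed-offset zone

dt = datetime(2006, 6, 14, 13, 0, tzinfo=timezone.utc)
local = dt.astimezone(minus_five)           # same instant, new wall-clock data

print(local.isoformat())  # 2006-06-14T08:00:00-05:00
assert local == dt        # aware comparison adjusts for UTC offsets
```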
`datetime.dst()` If [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") is `None`, returns `None`, else returns `self.tzinfo.dst(self)`, and raises an exception if the latter doesn't return `None` or a [`timedelta`](#datetime.timedelta "datetime.timedelta") object with magnitude less than one day. Changed in version 3.7: The DST offset is not restricted to a whole number of minutes.

`datetime.tzname()` If [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") is `None`, returns `None`, else returns `self.tzinfo.tzname(self)`, and raises an exception if the latter doesn't return `None` or a string object.

`datetime.timetuple()` Return a [`time.struct_time`](time#time.struct_time "time.struct_time") such as returned by [`time.localtime()`](time#time.localtime "time.localtime"). `d.timetuple()` is equivalent to:

```
time.struct_time((d.year, d.month, d.day, d.hour, d.minute, d.second, d.weekday(), yday, dst))
```

where `yday = d.toordinal() - date(d.year, 1, 1).toordinal() + 1` is the day number within the current year starting with `1` for January 1st. The `tm_isdst` flag of the result is set according to the [`dst()`](#datetime.datetime.dst "datetime.datetime.dst") method: if [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") is `None` or [`dst()`](#datetime.datetime.dst "datetime.datetime.dst") returns `None`, `tm_isdst` is set to `-1`; else if [`dst()`](#datetime.datetime.dst "datetime.datetime.dst") returns a non-zero value, `tm_isdst` is set to `1`; else `tm_isdst` is set to `0`.

`datetime.utctimetuple()` If [`datetime`](#datetime.datetime "datetime.datetime") instance *d* is naive, this is the same as `d.timetuple()` except that `tm_isdst` is forced to 0 regardless of what `d.dst()` returns. DST is never in effect for a UTC time. If *d* is aware, *d* is normalized to UTC time, by subtracting `d.utcoffset()`, and a [`time.struct_time`](time#time.struct_time "time.struct_time") for the normalized time is returned. `tm_isdst` is forced to 0. Note that an [`OverflowError`](exceptions#OverflowError "OverflowError") may be raised if *d*.year was `MINYEAR` or `MAXYEAR` and UTC adjustment spills over a year boundary.

Warning Because naive `datetime` objects are treated by many `datetime` methods as local times, it is preferred to use aware datetimes to represent times in UTC; as a result, using [`utctimetuple()`](#datetime.datetime.utctimetuple "datetime.datetime.utctimetuple") may give misleading results. If you have a naive `datetime` representing UTC, use `datetime.replace(tzinfo=timezone.utc)` to make it aware, at which point you can use [`datetime.timetuple()`](#datetime.datetime.timetuple "datetime.datetime.timetuple").

`datetime.toordinal()` Return the proleptic Gregorian ordinal of the date. The same as `self.date().toordinal()`.

`datetime.timestamp()` Return POSIX timestamp corresponding to the [`datetime`](#datetime.datetime "datetime.datetime") instance. The return value is a [`float`](functions#float "float") similar to that returned by [`time.time()`](time#time.time "time.time"). Naive [`datetime`](#datetime.datetime "datetime.datetime") instances are assumed to represent local time and this method relies on the platform C `mktime()` function to perform the conversion. Since [`datetime`](#datetime.datetime "datetime.datetime") supports a wider range of values than `mktime()` on many platforms, this method may raise [`OverflowError`](exceptions#OverflowError "OverflowError") for times far in the past or far in the future.
For aware [`datetime`](#datetime.datetime "datetime.datetime") instances, the return value is computed as: ``` (dt - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds() ``` New in version 3.3. Changed in version 3.6: The [`timestamp()`](#datetime.datetime.timestamp "datetime.datetime.timestamp") method uses the [`fold`](#datetime.datetime.fold "datetime.datetime.fold") attribute to disambiguate the times during a repeated interval. Note There is no method to obtain the POSIX timestamp directly from a naive [`datetime`](#datetime.datetime "datetime.datetime") instance representing UTC time. If your application uses this convention and your system timezone is not set to UTC, you can obtain the POSIX timestamp by supplying `tzinfo=timezone.utc`: ``` timestamp = dt.replace(tzinfo=timezone.utc).timestamp() ``` or by calculating the timestamp directly: ``` timestamp = (dt - datetime(1970, 1, 1)) / timedelta(seconds=1) ``` `datetime.weekday()` Return the day of the week as an integer, where Monday is 0 and Sunday is 6. The same as `self.date().weekday()`. See also [`isoweekday()`](#datetime.datetime.isoweekday "datetime.datetime.isoweekday"). `datetime.isoweekday()` Return the day of the week as an integer, where Monday is 1 and Sunday is 7. The same as `self.date().isoweekday()`. See also [`weekday()`](#datetime.datetime.weekday "datetime.datetime.weekday"), [`isocalendar()`](#datetime.datetime.isocalendar "datetime.datetime.isocalendar"). `datetime.isocalendar()` Return a [named tuple](../glossary#term-named-tuple) with three components: `year`, `week` and `weekday`. The same as `self.date().isocalendar()`. `datetime.isoformat(sep='T', timespec='auto')` Return a string representing the date and time in ISO 8601 format: * `YYYY-MM-DDTHH:MM:SS.ffffff`, if [`microsecond`](#datetime.datetime.microsecond "datetime.datetime.microsecond") is not 0 * `YYYY-MM-DDTHH:MM:SS`, if [`microsecond`](#datetime.datetime.microsecond "datetime.datetime.microsecond") is 0 If [`utcoffset()`](#datetime.datetime.utcoffset "datetime.datetime.utcoffset") does not return `None`, a string is appended, giving the UTC offset: * `YYYY-MM-DDTHH:MM:SS.ffffff+HH:MM[:SS[.ffffff]]`, if [`microsecond`](#datetime.datetime.microsecond "datetime.datetime.microsecond") is not 0 * `YYYY-MM-DDTHH:MM:SS+HH:MM[:SS[.ffffff]]`, if [`microsecond`](#datetime.datetime.microsecond "datetime.datetime.microsecond") is 0 Examples: ``` >>> from datetime import datetime, timezone >>> datetime(2019, 5, 18, 15, 17, 8, 132263).isoformat() '2019-05-18T15:17:08.132263' >>> datetime(2019, 5, 18, 15, 17, tzinfo=timezone.utc).isoformat() '2019-05-18T15:17:00+00:00' ``` The optional argument *sep* (default `'T'`) is a one-character separator, placed between the date and time portions of the result. For example: ``` >>> from datetime import tzinfo, timedelta, datetime >>> class TZ(tzinfo): ... """A time zone with an arbitrary, constant -06:39 offset.""" ... def utcoffset(self, dt): ... return timedelta(hours=-6, minutes=-39) ... >>> datetime(2002, 12, 25, tzinfo=TZ()).isoformat(' ') '2002-12-25 00:00:00-06:39' >>> datetime(2009, 11, 27, microsecond=100, tzinfo=TZ()).isoformat() '2009-11-27T00:00:00.000100-06:39' ``` The optional argument *timespec* specifies the number of additional components of the time to include (the default is `'auto'`). It can be one of the following: * `'auto'`: Same as `'seconds'` if [`microsecond`](#datetime.datetime.microsecond "datetime.datetime.microsecond") is 0, same as `'microseconds'` otherwise. 
* `'hours'`: Include the [`hour`](#datetime.datetime.hour "datetime.datetime.hour") in the two-digit `HH` format. * `'minutes'`: Include [`hour`](#datetime.datetime.hour "datetime.datetime.hour") and [`minute`](#datetime.datetime.minute "datetime.datetime.minute") in `HH:MM` format. * `'seconds'`: Include [`hour`](#datetime.datetime.hour "datetime.datetime.hour"), [`minute`](#datetime.datetime.minute "datetime.datetime.minute"), and [`second`](#datetime.datetime.second "datetime.datetime.second") in `HH:MM:SS` format. * `'milliseconds'`: Include full time, but truncate fractional second part to milliseconds. `HH:MM:SS.sss` format. * `'microseconds'`: Include full time in `HH:MM:SS.ffffff` format. Note Excluded time components are truncated, not rounded. [`ValueError`](exceptions#ValueError "ValueError") will be raised on an invalid *timespec* argument: ``` >>> from datetime import datetime >>> datetime.now().isoformat(timespec='minutes') '2002-12-25T00:00' >>> dt = datetime(2015, 1, 1, 12, 30, 59, 0) >>> dt.isoformat(timespec='microseconds') '2015-01-01T12:30:59.000000' ``` New in version 3.6: Added the *timespec* argument. `datetime.__str__()` For a [`datetime`](#datetime.datetime "datetime.datetime") instance *d*, `str(d)` is equivalent to `d.isoformat(' ')`. `datetime.ctime()` Return a string representing the date and time: ``` >>> from datetime import datetime >>> datetime(2002, 12, 4, 20, 30, 40).ctime() 'Wed Dec 4 20:30:40 2002' ``` The output string will *not* include time zone information, regardless of whether the input is aware or naive. `d.ctime()` is equivalent to: ``` time.ctime(time.mktime(d.timetuple())) ``` on platforms where the native C `ctime()` function (which [`time.ctime()`](time#time.ctime "time.ctime") invokes, but which [`datetime.ctime()`](#datetime.datetime.ctime "datetime.datetime.ctime") does not invoke) conforms to the C standard. `datetime.strftime(format)` Return a string representing the date and time, controlled by an explicit format string. For a complete list of formatting directives, see [strftime() and strptime() Behavior](#strftime-strptime-behavior). `datetime.__format__(format)` Same as [`datetime.strftime()`](#datetime.datetime.strftime "datetime.datetime.strftime"). This makes it possible to specify a format string for a [`datetime`](#datetime.datetime "datetime.datetime") object in [formatted string literals](../reference/lexical_analysis#f-strings) and when using [`str.format()`](stdtypes#str.format "str.format"). For a complete list of formatting directives, see [strftime() and strptime() Behavior](#strftime-strptime-behavior). ### Examples of Usage: [`datetime`](#datetime.datetime "datetime.datetime") Examples of working with [`datetime`](#datetime.datetime "datetime.datetime") objects: ``` >>> from datetime import datetime, date, time, timezone >>> # Using datetime.combine() >>> d = date(2005, 7, 14) >>> t = time(12, 30) >>> datetime.combine(d, t) datetime.datetime(2005, 7, 14, 12, 30) >>> # Using datetime.now() >>> datetime.now() datetime.datetime(2007, 12, 6, 16, 29, 43, 79043) # GMT +1 >>> datetime.now(timezone.utc) datetime.datetime(2007, 12, 6, 15, 29, 43, 79060, tzinfo=datetime.timezone.utc) >>> # Using datetime.strptime() >>> dt = datetime.strptime("21/11/06 16:30", "%d/%m/%y %H:%M") >>> dt datetime.datetime(2006, 11, 21, 16, 30) >>> # Using datetime.timetuple() to get tuple of all attributes >>> tt = dt.timetuple() >>> for it in tt: ... print(it) ... 
2006 # year 11 # month 21 # day 16 # hour 30 # minute 0 # second 1 # weekday (0 = Monday) 325 # number of days since 1st January -1 # dst - method tzinfo.dst() returned None >>> # Date in ISO format >>> ic = dt.isocalendar() >>> for it in ic: ... print(it) ... 2006 # ISO year 47 # ISO week 2 # ISO weekday >>> # Formatting a datetime >>> dt.strftime("%A, %d. %B %Y %I:%M%p") 'Tuesday, 21. November 2006 04:30PM' >>> 'The {1} is {0:%d}, the {2} is {0:%B}, the {3} is {0:%I:%M%p}.'.format(dt, "day", "month", "time") 'The day is 21, the month is November, the time is 04:30PM.' ``` The example below defines a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass capturing time zone information for Kabul, Afghanistan, which used +4 UTC until 1945 and then +4:30 UTC thereafter: ``` from datetime import timedelta, datetime, tzinfo, timezone class KabulTz(tzinfo): # Kabul used +4 until 1945, when they moved to +4:30 UTC_MOVE_DATE = datetime(1944, 12, 31, 20, tzinfo=timezone.utc) def utcoffset(self, dt): if dt.year < 1945: return timedelta(hours=4) elif (1945, 1, 1, 0, 0) <= dt.timetuple()[:5] < (1945, 1, 1, 0, 30): # An ambiguous ("imaginary") half-hour range representing # a 'fold' in time due to the shift from +4 to +4:30. # If dt falls in the imaginary range, use fold to decide how # to resolve. See PEP495. return timedelta(hours=4, minutes=(30 if dt.fold else 0)) else: return timedelta(hours=4, minutes=30) def fromutc(self, dt): # Follow same validations as in datetime.tzinfo if not isinstance(dt, datetime): raise TypeError("fromutc() requires a datetime argument") if dt.tzinfo is not self: raise ValueError("dt.tzinfo is not self") # A custom implementation is required for fromutc as # the input to this function is a datetime with utc values # but with a tzinfo set to self. # See datetime.astimezone or fromtimestamp. if dt.replace(tzinfo=timezone.utc) >= self.UTC_MOVE_DATE: return dt + timedelta(hours=4, minutes=30) else: return dt + timedelta(hours=4) def dst(self, dt): # Kabul does not observe daylight saving time. return timedelta(0) def tzname(self, dt): if dt >= self.UTC_MOVE_DATE: return "+04:30" return "+04" ``` Usage of `KabulTz` from above: ``` >>> tz1 = KabulTz() >>> # Datetime before the change >>> dt1 = datetime(1900, 11, 21, 16, 30, tzinfo=tz1) >>> print(dt1.utcoffset()) 4:00:00 >>> # Datetime after the change >>> dt2 = datetime(2006, 6, 14, 13, 0, tzinfo=tz1) >>> print(dt2.utcoffset()) 4:30:00 >>> # Convert datetime to another time zone >>> dt3 = dt2.astimezone(timezone.utc) >>> dt3 datetime.datetime(2006, 6, 14, 8, 30, tzinfo=datetime.timezone.utc) >>> dt2 datetime.datetime(2006, 6, 14, 13, 0, tzinfo=KabulTz()) >>> dt2 == dt3 True ``` time Objects ------------ A [`time`](time#module-time "time: Time access and conversions.") object represents a (local) time of day, independent of any particular day, and subject to adjustment via a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") object. `class datetime.time(hour=0, minute=0, second=0, microsecond=0, tzinfo=None, *, fold=0)` All arguments are optional. *tzinfo* may be `None`, or an instance of a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass. The remaining arguments must be integers in the following ranges: * `0 <= hour < 24`, * `0 <= minute < 60`, * `0 <= second < 60`, * `0 <= microsecond < 1000000`, * `fold in [0, 1]`. If an argument outside those ranges is given, [`ValueError`](exceptions#ValueError "ValueError") is raised. All default to `0` except *tzinfo*, which defaults to [`None`](constants#None "None"). 
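A minimal sketch of constructing [`time`](#datetime.time "datetime.time") objects under the rules just listed, including the [`ValueError`](exceptions#ValueError "ValueError") raised for an out-of-range argument:

```
from datetime import time

t = time(12, 30)       # second and microsecond default to 0, tzinfo to None
print(t)               # 12:30:00

try:
    time(hour=25)      # violates 0 <= hour < 24
except ValueError as exc:
    print(exc)         # hour must be in 0..23
```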
Class attributes: `time.min` The earliest representable [`time`](#datetime.time "datetime.time"), `time(0, 0, 0, 0)`. `time.max` The latest representable [`time`](#datetime.time "datetime.time"), `time(23, 59, 59, 999999)`. `time.resolution` The smallest possible difference between non-equal [`time`](#datetime.time "datetime.time") objects, `timedelta(microseconds=1)`, although note that arithmetic on [`time`](#datetime.time "datetime.time") objects is not supported. Instance attributes (read-only): `time.hour` In `range(24)`. `time.minute` In `range(60)`. `time.second` In `range(60)`. `time.microsecond` In `range(1000000)`. `time.tzinfo` The object passed as the tzinfo argument to the [`time`](#datetime.time "datetime.time") constructor, or `None` if none was passed. `time.fold` In `[0, 1]`. Used to disambiguate wall times during a repeated interval. (A repeated interval occurs when clocks are rolled back at the end of daylight saving time or when the UTC offset for the current zone is decreased for political reasons.) The value 0 (1) represents the earlier (later) of the two moments with the same wall time representation. New in version 3.6. [`time`](#datetime.time "datetime.time") objects support comparison of [`time`](#datetime.time "datetime.time") to [`time`](#datetime.time "datetime.time"), where *a* is considered less than *b* when *a* precedes *b* in time. If one comparand is naive and the other is aware, [`TypeError`](exceptions#TypeError "TypeError") is raised if an order comparison is attempted. For equality comparisons, naive instances are never equal to aware instances. If both comparands are aware, and have the same [`tzinfo`](#datetime.time.tzinfo "datetime.time.tzinfo") attribute, the common [`tzinfo`](#datetime.time.tzinfo "datetime.time.tzinfo") attribute is ignored and the base times are compared. If both comparands are aware and have different [`tzinfo`](#datetime.time.tzinfo "datetime.time.tzinfo") attributes, the comparands are first adjusted by subtracting their UTC offsets (obtained from `self.utcoffset()`). In order to stop mixed-type comparisons from falling back to the default comparison by object address, when a [`time`](#datetime.time "datetime.time") object is compared to an object of a different type, [`TypeError`](exceptions#TypeError "TypeError") is raised unless the comparison is `==` or `!=`. The latter cases return [`False`](constants#False "False") or [`True`](constants#True "True"), respectively. Changed in version 3.3: Equality comparisons between aware and naive [`time`](#datetime.time "datetime.time") instances don’t raise [`TypeError`](exceptions#TypeError "TypeError"). In Boolean contexts, a [`time`](#datetime.time "datetime.time") object is always considered to be true. Changed in version 3.5: Before Python 3.5, a [`time`](#datetime.time "datetime.time") object was considered to be false if it represented midnight in UTC. This behavior was considered obscure and error-prone and has been removed in Python 3.5. See [bpo-13936](https://bugs.python.org/issue?@action=redirect&bpo=13936) for full details. Other constructor: `classmethod time.fromisoformat(time_string)` Return a [`time`](#datetime.time "datetime.time") corresponding to a *time\_string* in one of the formats emitted by [`time.isoformat()`](#datetime.time.isoformat "datetime.time.isoformat"). Specifically, this function supports strings in the format: ``` HH[:MM[:SS[.fff[fff]]]][+HH:MM[:SS[.ffffff]]] ``` Caution This does *not* support parsing arbitrary ISO 8601 strings. 
It is only intended as the inverse operation of [`time.isoformat()`](#datetime.time.isoformat "datetime.time.isoformat"). Examples: ``` >>> from datetime import time >>> time.fromisoformat('04:23:01') datetime.time(4, 23, 1) >>> time.fromisoformat('04:23:01.000384') datetime.time(4, 23, 1, 384) >>> time.fromisoformat('04:23:01+04:00') datetime.time(4, 23, 1, tzinfo=datetime.timezone(datetime.timedelta(seconds=14400))) ``` New in version 3.7. Instance methods: `time.replace(hour=self.hour, minute=self.minute, second=self.second, microsecond=self.microsecond, tzinfo=self.tzinfo, *, fold=0)` Return a [`time`](#datetime.time "datetime.time") with the same value, except for those attributes given new values by whichever keyword arguments are specified. Note that `tzinfo=None` can be specified to create a naive [`time`](#datetime.time "datetime.time") from an aware [`time`](#datetime.time "datetime.time"), without conversion of the time data. New in version 3.6: Added the `fold` argument. `time.isoformat(timespec='auto')` Return a string representing the time in ISO 8601 format, one of: * `HH:MM:SS.ffffff`, if [`microsecond`](#datetime.time.microsecond "datetime.time.microsecond") is not 0 * `HH:MM:SS`, if [`microsecond`](#datetime.time.microsecond "datetime.time.microsecond") is 0 * `HH:MM:SS.ffffff+HH:MM[:SS[.ffffff]]`, if [`utcoffset()`](#datetime.time.utcoffset "datetime.time.utcoffset") does not return `None` * `HH:MM:SS+HH:MM[:SS[.ffffff]]`, if [`microsecond`](#datetime.time.microsecond "datetime.time.microsecond") is 0 and [`utcoffset()`](#datetime.time.utcoffset "datetime.time.utcoffset") does not return `None` The optional argument *timespec* specifies the number of additional components of the time to include (the default is `'auto'`). It can be one of the following: * `'auto'`: Same as `'seconds'` if [`microsecond`](#datetime.time.microsecond "datetime.time.microsecond") is 0, same as `'microseconds'` otherwise. * `'hours'`: Include the [`hour`](#datetime.time.hour "datetime.time.hour") in the two-digit `HH` format. * `'minutes'`: Include [`hour`](#datetime.time.hour "datetime.time.hour") and [`minute`](#datetime.time.minute "datetime.time.minute") in `HH:MM` format. * `'seconds'`: Include [`hour`](#datetime.time.hour "datetime.time.hour"), [`minute`](#datetime.time.minute "datetime.time.minute"), and [`second`](#datetime.time.second "datetime.time.second") in `HH:MM:SS` format. * `'milliseconds'`: Include full time, but truncate fractional second part to milliseconds. `HH:MM:SS.sss` format. * `'microseconds'`: Include full time in `HH:MM:SS.ffffff` format. Note Excluded time components are truncated, not rounded. [`ValueError`](exceptions#ValueError "ValueError") will be raised on an invalid *timespec* argument. Example: ``` >>> from datetime import time >>> time(hour=12, minute=34, second=56, microsecond=123456).isoformat(timespec='minutes') '12:34' >>> dt = time(hour=12, minute=34, second=56, microsecond=0) >>> dt.isoformat(timespec='microseconds') '12:34:56.000000' >>> dt.isoformat(timespec='auto') '12:34:56' ``` New in version 3.6: Added the *timespec* argument. `time.__str__()` For a time *t*, `str(t)` is equivalent to `t.isoformat()`. `time.strftime(format)` Return a string representing the time, controlled by an explicit format string. For a complete list of formatting directives, see [strftime() and strptime() Behavior](#strftime-strptime-behavior). `time.__format__(format)` Same as [`time.strftime()`](#datetime.time.strftime "datetime.time.strftime"). 
This makes it possible to specify a format string for a [`time`](#datetime.time "datetime.time") object in [formatted string literals](../reference/lexical_analysis#f-strings) and when using [`str.format()`](stdtypes#str.format "str.format"). For a complete list of formatting directives, see [strftime() and strptime() Behavior](#strftime-strptime-behavior). `time.utcoffset()` If [`tzinfo`](#datetime.time.tzinfo "datetime.time.tzinfo") is `None`, returns `None`, else returns `self.tzinfo.utcoffset(None)`, and raises an exception if the latter doesn’t return `None` or a [`timedelta`](#datetime.timedelta "datetime.timedelta") object with magnitude less than one day. Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes. `time.dst()` If [`tzinfo`](#datetime.time.tzinfo "datetime.time.tzinfo") is `None`, returns `None`, else returns `self.tzinfo.dst(None)`, and raises an exception if the latter doesn’t return `None`, or a [`timedelta`](#datetime.timedelta "datetime.timedelta") object with magnitude less than one day. Changed in version 3.7: The DST offset is not restricted to a whole number of minutes. `time.tzname()` If [`tzinfo`](#datetime.time.tzinfo "datetime.time.tzinfo") is `None`, returns `None`, else returns `self.tzinfo.tzname(None)`, or raises an exception if the latter doesn’t return `None` or a string object. ### Examples of Usage: [`time`](#datetime.time "datetime.time") Examples of working with a [`time`](#datetime.time "datetime.time") object: ``` >>> from datetime import time, tzinfo, timedelta >>> class TZ1(tzinfo): ... def utcoffset(self, dt): ... return timedelta(hours=1) ... def dst(self, dt): ... return timedelta(0) ... def tzname(self,dt): ... return "+01:00" ... def __repr__(self): ... return f"{self.__class__.__name__}()" ... >>> t = time(12, 10, 30, tzinfo=TZ1()) >>> t datetime.time(12, 10, 30, tzinfo=TZ1()) >>> t.isoformat() '12:10:30+01:00' >>> t.dst() datetime.timedelta(0) >>> t.tzname() '+01:00' >>> t.strftime("%H:%M:%S %Z") '12:10:30 +01:00' >>> 'The {} is {:%H:%M}.'.format("time", t) 'The time is 12:10.' ``` tzinfo Objects -------------- `class datetime.tzinfo` This is an abstract base class, meaning that this class should not be instantiated directly. Define a subclass of [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") to capture information about a particular time zone. An instance of (a concrete subclass of) [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") can be passed to the constructors for [`datetime`](#datetime.datetime "datetime.datetime") and [`time`](#datetime.time "datetime.time") objects. The latter objects view their attributes as being in local time, and the [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") object supports methods revealing offset of local time from UTC, the name of the time zone, and DST offset, all relative to a date or time object passed to them. You need to derive a concrete subclass, and (at least) supply implementations of the standard [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") methods needed by the [`datetime`](#datetime.datetime "datetime.datetime") methods you use. The [`datetime`](#module-datetime "datetime: Basic date and time types.") module provides [`timezone`](#datetime.timezone "datetime.timezone"), a simple concrete subclass of [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") which can represent timezones with fixed offset from UTC such as UTC itself or North American EST and EDT. 
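Before reaching for a custom subclass, note that a fixed-offset zone usually needs no subclassing at all; a minimal sketch using the [`timezone`](#datetime.timezone "datetime.timezone") class mentioned above (the `"EST"` name and -5 hour offset mirror the example zones named there):

```
from datetime import datetime, timedelta, timezone

EST = timezone(timedelta(hours=-5), "EST")  # fixed offset, no DST logic

dt = datetime(2006, 11, 21, 16, 30, tzinfo=EST)
print(dt.isoformat())  # 2006-11-21T16:30:00-05:00
print(dt.tzname())     # EST
```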
Special requirement for pickling: A [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass must have an [`__init__()`](../reference/datamodel#object.__init__ "object.__init__") method that can be called with no arguments, otherwise it can be pickled but possibly not unpickled again. This is a technical requirement that may be relaxed in the future. A concrete subclass of [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") may need to implement the following methods. Exactly which methods are needed depends on the uses made of aware [`datetime`](#module-datetime "datetime: Basic date and time types.") objects. If in doubt, simply implement all of them. `tzinfo.utcoffset(dt)` Return offset of local time from UTC, as a [`timedelta`](#datetime.timedelta "datetime.timedelta") object that is positive east of UTC. If local time is west of UTC, this should be negative. This represents the *total* offset from UTC; for example, if a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") object represents both time zone and DST adjustments, [`utcoffset()`](#datetime.tzinfo.utcoffset "datetime.tzinfo.utcoffset") should return their sum. If the UTC offset isn’t known, return `None`. Else the value returned must be a [`timedelta`](#datetime.timedelta "datetime.timedelta") object strictly between `-timedelta(hours=24)` and `timedelta(hours=24)` (the magnitude of the offset must be less than one day). Most implementations of [`utcoffset()`](#datetime.tzinfo.utcoffset "datetime.tzinfo.utcoffset") will probably look like one of these two: ``` return CONSTANT # fixed-offset class return CONSTANT + self.dst(dt) # daylight-aware class ``` If [`utcoffset()`](#datetime.tzinfo.utcoffset "datetime.tzinfo.utcoffset") does not return `None`, [`dst()`](#datetime.tzinfo.dst "datetime.tzinfo.dst") should not return `None` either. The default implementation of [`utcoffset()`](#datetime.tzinfo.utcoffset "datetime.tzinfo.utcoffset") raises [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError"). Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes. `tzinfo.dst(dt)` Return the daylight saving time (DST) adjustment, as a [`timedelta`](#datetime.timedelta "datetime.timedelta") object or `None` if DST information isn’t known. Return `timedelta(0)` if DST is not in effect. If DST is in effect, return the offset as a [`timedelta`](#datetime.timedelta "datetime.timedelta") object (see [`utcoffset()`](#datetime.tzinfo.utcoffset "datetime.tzinfo.utcoffset") for details). Note that DST offset, if applicable, has already been added to the UTC offset returned by [`utcoffset()`](#datetime.tzinfo.utcoffset "datetime.tzinfo.utcoffset"), so there’s no need to consult [`dst()`](#datetime.tzinfo.dst "datetime.tzinfo.dst") unless you’re interested in obtaining DST info separately. For example, [`datetime.timetuple()`](#datetime.datetime.timetuple "datetime.datetime.timetuple") calls its [`tzinfo`](#datetime.datetime.tzinfo "datetime.datetime.tzinfo") attribute’s [`dst()`](#datetime.tzinfo.dst "datetime.tzinfo.dst") method to determine how the `tm_isdst` flag should be set, and [`tzinfo.fromutc()`](#datetime.tzinfo.fromutc "datetime.tzinfo.fromutc") calls [`dst()`](#datetime.tzinfo.dst "datetime.tzinfo.dst") to account for DST changes when crossing time zones. 
An instance *tz* of a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass that models both standard and daylight times must be consistent in this sense: `tz.utcoffset(dt) - tz.dst(dt)` must return the same result for every [`datetime`](#datetime.datetime "datetime.datetime") *dt* with `dt.tzinfo == tz`. For sane [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclasses, this expression yields the time zone’s “standard offset”, which should not depend on the date or the time, but only on geographic location. The implementation of [`datetime.astimezone()`](#datetime.datetime.astimezone "datetime.datetime.astimezone") relies on this, but cannot detect violations; it’s the programmer’s responsibility to ensure it. If a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass cannot guarantee this, it may be able to override the default implementation of [`tzinfo.fromutc()`](#datetime.tzinfo.fromutc "datetime.tzinfo.fromutc") to work correctly with `astimezone()` regardless.

Most implementations of [`dst()`](#datetime.tzinfo.dst "datetime.tzinfo.dst") will probably look like one of these two:

```
def dst(self, dt):
    # a fixed-offset class: doesn't account for DST
    return timedelta(0)
```

or:

```
def dst(self, dt):
    # Code to set dston and dstoff to the time zone's DST
    # transition times based on the input dt.year, and expressed
    # in standard local time.

    if dston <= dt.replace(tzinfo=None) < dstoff:
        return timedelta(hours=1)
    else:
        return timedelta(0)
```

The default implementation of [`dst()`](#datetime.tzinfo.dst "datetime.tzinfo.dst") raises [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError").

Changed in version 3.7: The DST offset is not restricted to a whole number of minutes.

`tzinfo.tzname(dt)` Return the time zone name corresponding to the [`datetime`](#datetime.datetime "datetime.datetime") object *dt*, as a string. Nothing about string names is defined by the [`datetime`](#module-datetime "datetime: Basic date and time types.") module, and there’s no requirement that it mean anything in particular. For example, “GMT”, “UTC”, “-500”, “-5:00”, “EDT”, “US/Eastern”, “America/New York” are all valid replies. Return `None` if a string name isn’t known. Note that this is a method rather than a fixed string primarily because some [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclasses will wish to return different names depending on the specific value of *dt* passed, especially if the [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") class is accounting for daylight time. The default implementation of [`tzname()`](#datetime.tzinfo.tzname "datetime.tzinfo.tzname") raises [`NotImplementedError`](exceptions#NotImplementedError "NotImplementedError").

These methods are called by a [`datetime`](#datetime.datetime "datetime.datetime") or [`time`](#datetime.time "datetime.time") object, in response to their methods of the same names. A [`datetime`](#datetime.datetime "datetime.datetime") object passes itself as the argument, and a [`time`](#datetime.time "datetime.time") object passes `None` as the argument. A [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass’s methods should therefore be prepared to accept a *dt* argument of `None`, or of class [`datetime`](#datetime.datetime "datetime.datetime"). When `None` is passed, it’s up to the class designer to decide the best response. For example, returning `None` is appropriate if the class wishes to say that time objects don’t participate in the [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") protocols.
It may be more useful for `utcoffset(None)` to return the standard UTC offset, as there is no other convention for discovering the standard offset. When a [`datetime`](#datetime.datetime "datetime.datetime") object is passed in response to a [`datetime`](#datetime.datetime "datetime.datetime") method, `dt.tzinfo` is the same object as *self*. [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") methods can rely on this, unless user code calls [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") methods directly. The intent is that the [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") methods interpret *dt* as being in local time, and not need worry about objects in other timezones. There is one more [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") method that a subclass may wish to override: `tzinfo.fromutc(dt)` This is called from the default [`datetime.astimezone()`](#datetime.datetime.astimezone "datetime.datetime.astimezone") implementation. When called from that, `dt.tzinfo` is *self*, and *dt*’s date and time data are to be viewed as expressing a UTC time. The purpose of [`fromutc()`](#datetime.tzinfo.fromutc "datetime.tzinfo.fromutc") is to adjust the date and time data, returning an equivalent datetime in *self*’s local time. Most [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclasses should be able to inherit the default [`fromutc()`](#datetime.tzinfo.fromutc "datetime.tzinfo.fromutc") implementation without problems. It’s strong enough to handle fixed-offset time zones, and time zones accounting for both standard and daylight time, and the latter even if the DST transition times differ in different years. An example of a time zone the default [`fromutc()`](#datetime.tzinfo.fromutc "datetime.tzinfo.fromutc") implementation may not handle correctly in all cases is one where the standard offset (from UTC) depends on the specific date and time passed, which can happen for political reasons. The default implementations of `astimezone()` and [`fromutc()`](#datetime.tzinfo.fromutc "datetime.tzinfo.fromutc") may not produce the result you want if the result is one of the hours straddling the moment the standard offset changes. Skipping code for error cases, the default [`fromutc()`](#datetime.tzinfo.fromutc "datetime.tzinfo.fromutc") implementation acts like: ``` def fromutc(self, dt): # raise ValueError error if dt.tzinfo is not self dtoff = dt.utcoffset() dtdst = dt.dst() # raise ValueError if dtoff is None or dtdst is None delta = dtoff - dtdst # this is self's standard offset if delta: dt += delta # convert to standard local time dtdst = dt.dst() # raise ValueError if dtdst is None if dtdst: return dt + dtdst else: return dt ``` In the following [`tzinfo_examples.py`](../_downloads/6b45dc135219d1404be49d606589a11d/tzinfo_examples.py) file there are some examples of [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") classes: ``` from datetime import tzinfo, timedelta, datetime ZERO = timedelta(0) HOUR = timedelta(hours=1) SECOND = timedelta(seconds=1) # A class capturing the platform's idea of local time. # (May result in wrong values on historical times in # timezones where UTC offset and/or the DST rules had # changed in the past.) 
import time as _time STDOFFSET = timedelta(seconds = -_time.timezone) if _time.daylight: DSTOFFSET = timedelta(seconds = -_time.altzone) else: DSTOFFSET = STDOFFSET DSTDIFF = DSTOFFSET - STDOFFSET class LocalTimezone(tzinfo): def fromutc(self, dt): assert dt.tzinfo is self stamp = (dt - datetime(1970, 1, 1, tzinfo=self)) // SECOND args = _time.localtime(stamp)[:6] dst_diff = DSTDIFF // SECOND # Detect fold fold = (args == _time.localtime(stamp - dst_diff)) return datetime(*args, microsecond=dt.microsecond, tzinfo=self, fold=fold) def utcoffset(self, dt): if self._isdst(dt): return DSTOFFSET else: return STDOFFSET def dst(self, dt): if self._isdst(dt): return DSTDIFF else: return ZERO def tzname(self, dt): return _time.tzname[self._isdst(dt)] def _isdst(self, dt): tt = (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second, dt.weekday(), 0, 0) stamp = _time.mktime(tt) tt = _time.localtime(stamp) return tt.tm_isdst > 0 Local = LocalTimezone() # A complete implementation of current DST rules for major US time zones. def first_sunday_on_or_after(dt): days_to_go = 6 - dt.weekday() if days_to_go: dt += timedelta(days_to_go) return dt # US DST Rules # # This is a simplified (i.e., wrong for a few cases) set of rules for US # DST start and end times. For a complete and up-to-date set of DST rules # and timezone definitions, visit the Olson Database (or try pytz): # http://www.twinsun.com/tz/tz-link.htm # http://sourceforge.net/projects/pytz/ (might not be up-to-date) # # In the US, since 2007, DST starts at 2am (standard time) on the second # Sunday in March, which is the first Sunday on or after Mar 8. DSTSTART_2007 = datetime(1, 3, 8, 2) # and ends at 2am (DST time) on the first Sunday of Nov. DSTEND_2007 = datetime(1, 11, 1, 2) # From 1987 to 2006, DST used to start at 2am (standard time) on the first # Sunday in April and to end at 2am (DST time) on the last # Sunday of October, which is the first Sunday on or after Oct 25. DSTSTART_1987_2006 = datetime(1, 4, 1, 2) DSTEND_1987_2006 = datetime(1, 10, 25, 2) # From 1967 to 1986, DST used to start at 2am (standard time) on the last # Sunday in April (the one on or after April 24) and to end at 2am (DST time) # on the last Sunday of October, which is the first Sunday # on or after Oct 25. DSTSTART_1967_1986 = datetime(1, 4, 24, 2) DSTEND_1967_1986 = DSTEND_1987_2006 def us_dst_range(year): # Find start and end times for US DST. For years before 1967, return # start = end for no DST. if 2006 < year: dststart, dstend = DSTSTART_2007, DSTEND_2007 elif 1986 < year < 2007: dststart, dstend = DSTSTART_1987_2006, DSTEND_1987_2006 elif 1966 < year < 1987: dststart, dstend = DSTSTART_1967_1986, DSTEND_1967_1986 else: return (datetime(year, 1, 1), ) * 2 start = first_sunday_on_or_after(dststart.replace(year=year)) end = first_sunday_on_or_after(dstend.replace(year=year)) return start, end class USTimeZone(tzinfo): def __init__(self, hours, reprname, stdname, dstname): self.stdoffset = timedelta(hours=hours) self.reprname = reprname self.stdname = stdname self.dstname = dstname def __repr__(self): return self.reprname def tzname(self, dt): if self.dst(dt): return self.dstname else: return self.stdname def utcoffset(self, dt): return self.stdoffset + self.dst(dt) def dst(self, dt): if dt is None or dt.tzinfo is None: # An exception may be sensible here, in one or both cases. # It depends on how you want to treat them. The default # fromutc() implementation (called by the default astimezone() # implementation) passes a datetime with dt.tzinfo is self. 
return ZERO assert dt.tzinfo is self start, end = us_dst_range(dt.year) # Can't compare naive to aware objects, so strip the timezone from # dt first. dt = dt.replace(tzinfo=None) if start + HOUR <= dt < end - HOUR: # DST is in effect. return HOUR if end - HOUR <= dt < end: # Fold (an ambiguous hour): use dt.fold to disambiguate. return ZERO if dt.fold else HOUR if start <= dt < start + HOUR: # Gap (a non-existent hour): reverse the fold rule. return HOUR if dt.fold else ZERO # DST is off. return ZERO def fromutc(self, dt): assert dt.tzinfo is self start, end = us_dst_range(dt.year) start = start.replace(tzinfo=self) end = end.replace(tzinfo=self) std_time = dt + self.stdoffset dst_time = std_time + HOUR if end <= dst_time < end + HOUR: # Repeated hour return std_time.replace(fold=1) if std_time < start or dst_time >= end: # Standard time return std_time if start <= std_time < end - HOUR: # Daylight saving time return dst_time Eastern = USTimeZone(-5, "Eastern", "EST", "EDT") Central = USTimeZone(-6, "Central", "CST", "CDT") Mountain = USTimeZone(-7, "Mountain", "MST", "MDT") Pacific = USTimeZone(-8, "Pacific", "PST", "PDT") ``` Note that there are unavoidable subtleties twice per year in a [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass accounting for both standard and daylight time, at the DST transition points. For concreteness, consider US Eastern (UTC -0500), where EDT begins the minute after 1:59 (EST) on the second Sunday in March, and ends the minute after 1:59 (EDT) on the first Sunday in November: ``` UTC 3:MM 4:MM 5:MM 6:MM 7:MM 8:MM EST 22:MM 23:MM 0:MM 1:MM 2:MM 3:MM EDT 23:MM 0:MM 1:MM 2:MM 3:MM 4:MM start 22:MM 23:MM 0:MM 1:MM 3:MM 4:MM end 23:MM 0:MM 1:MM 1:MM 2:MM 3:MM ``` When DST starts (the “start” line), the local wall clock leaps from 1:59 to 3:00. A wall time of the form 2:MM doesn’t really make sense on that day, so `astimezone(Eastern)` won’t deliver a result with `hour == 2` on the day DST begins. For example, at the Spring forward transition of 2016, we get: ``` >>> from datetime import datetime, timezone >>> from tzinfo_examples import HOUR, Eastern >>> u0 = datetime(2016, 3, 13, 5, tzinfo=timezone.utc) >>> for i in range(4): ... u = u0 + i*HOUR ... t = u.astimezone(Eastern) ... print(u.time(), 'UTC =', t.time(), t.tzname()) ... 05:00:00 UTC = 00:00:00 EST 06:00:00 UTC = 01:00:00 EST 07:00:00 UTC = 03:00:00 EDT 08:00:00 UTC = 04:00:00 EDT ``` When DST ends (the “end” line), there’s a potentially worse problem: there’s an hour that can’t be spelled unambiguously in local wall time: the last hour of daylight time. In Eastern, that’s times of the form 5:MM UTC on the day daylight time ends. The local wall clock leaps from 1:59 (daylight time) back to 1:00 (standard time) again. Local times of the form 1:MM are ambiguous. `astimezone()` mimics the local clock’s behavior by mapping two adjacent UTC hours into the same local hour then. In the Eastern example, UTC times of the form 5:MM and 6:MM both map to 1:MM when converted to Eastern, but earlier times have the [`fold`](#datetime.datetime.fold "datetime.datetime.fold") attribute set to 0 and the later times have it set to 1. For example, at the Fall back transition of 2016, we get: ``` >>> u0 = datetime(2016, 11, 6, 4, tzinfo=timezone.utc) >>> for i in range(4): ... u = u0 + i*HOUR ... t = u.astimezone(Eastern) ... print(u.time(), 'UTC =', t.time(), t.tzname(), t.fold) ... 
04:00:00 UTC = 00:00:00 EDT 0
05:00:00 UTC = 01:00:00 EDT 0
06:00:00 UTC = 01:00:00 EST 1
07:00:00 UTC = 02:00:00 EST 0
```

Note that the [`datetime`](#datetime.datetime "datetime.datetime") instances that differ only by the value of the [`fold`](#datetime.datetime.fold "datetime.datetime.fold") attribute are considered equal in comparisons.

Applications that can’t bear wall-time ambiguities should explicitly check the value of the [`fold`](#datetime.datetime.fold "datetime.datetime.fold") attribute or avoid using hybrid [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclasses; there are no ambiguities when using [`timezone`](#datetime.timezone "datetime.timezone"), or any other fixed-offset [`tzinfo`](#datetime.tzinfo "datetime.tzinfo") subclass (such as a class representing only EST (fixed offset -5 hours), or only EDT (fixed offset -4 hours)).

See also

[`zoneinfo`](zoneinfo#module-zoneinfo "zoneinfo: IANA time zone support") The [`datetime`](#module-datetime "datetime: Basic date and time types.") module has a basic [`timezone`](#datetime.timezone "datetime.timezone") class (for handling arbitrary fixed offsets from UTC) and its [`timezone.utc`](#datetime.timezone.utc "datetime.timezone.utc") attribute (a UTC timezone instance). `zoneinfo` brings the *IANA timezone database* (also known as the Olson database) to Python, and its usage is recommended.

[IANA timezone database](https://www.iana.org/time-zones) The Time Zone Database (often called tz, tzdata or zoneinfo) contains code and data that represent the history of local time for many representative locations around the globe. It is updated periodically to reflect changes made by political bodies to time zone boundaries, UTC offsets, and daylight-saving rules.

timezone Objects
----------------

The [`timezone`](#datetime.timezone "datetime.timezone") class is a subclass of [`tzinfo`](#datetime.tzinfo "datetime.tzinfo"), each instance of which represents a timezone defined by a fixed offset from UTC. Objects of this class cannot be used to represent timezone information in locations where different offsets are used on different days of the year or where historical changes have been made to civil time.

`class datetime.timezone(offset, name=None)`

The *offset* argument must be specified as a [`timedelta`](#datetime.timedelta "datetime.timedelta") object representing the difference between the local time and UTC. It must be strictly between `-timedelta(hours=24)` and `timedelta(hours=24)`, otherwise [`ValueError`](exceptions#ValueError "ValueError") is raised. The *name* argument is optional. If specified, it must be a string that will be used as the value returned by the [`datetime.tzname()`](#datetime.datetime.tzname "datetime.datetime.tzname") method.

New in version 3.2.

Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes.

`timezone.utcoffset(dt)`

Return the fixed value specified when the [`timezone`](#datetime.timezone "datetime.timezone") instance is constructed. The *dt* argument is ignored. The return value is a [`timedelta`](#datetime.timedelta "datetime.timedelta") instance equal to the difference between the local time and UTC.

Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes.

`timezone.tzname(dt)`

Return the fixed value specified when the [`timezone`](#datetime.timezone "datetime.timezone") instance is constructed. If *name* is not provided in the constructor, the name returned by `tzname(dt)` is generated from the value of the `offset` as follows.
If *offset* is `timedelta(0)`, the name is “UTC”, otherwise it is a string in the format `UTC±HH:MM`, where ± is the sign of `offset`, HH and MM are two digits of `offset.hours` and `offset.minutes` respectively.

Changed in version 3.6: Name generated from `offset=timedelta(0)` is now plain `'UTC'`, not `'UTC+00:00'`.

`timezone.dst(dt)`

Always returns `None`.

`timezone.fromutc(dt)`

Return `dt + offset`. The *dt* argument must be an aware [`datetime`](#datetime.datetime "datetime.datetime") instance, with `tzinfo` set to `self`.

Class attributes:

`timezone.utc`

The UTC timezone, `timezone(timedelta(0))`.

`strftime()` and `strptime()` Behavior
---------------------------------------

[`date`](#datetime.date "datetime.date"), [`datetime`](#datetime.datetime "datetime.datetime"), and [`time`](#datetime.time "datetime.time") objects all support a `strftime(format)` method, to create a string representing the time under the control of an explicit format string. Conversely, the [`datetime.strptime()`](#datetime.datetime.strptime "datetime.datetime.strptime") class method creates a [`datetime`](#datetime.datetime "datetime.datetime") object from a string representing a date and time and a corresponding format string.

The table below provides a high-level comparison of `strftime()` versus `strptime()`:

| | `strftime` | `strptime` |
| --- | --- | --- |
| Usage | Convert object to a string according to a given format | Parse a string into a [`datetime`](#datetime.datetime "datetime.datetime") object given a corresponding format |
| Type of method | Instance method | Class method |
| Method of | [`date`](#datetime.date "datetime.date"); [`datetime`](#datetime.datetime "datetime.datetime"); [`time`](#datetime.time "datetime.time") | [`datetime`](#datetime.datetime "datetime.datetime") |
| Signature | `strftime(format)` | `strptime(date_string, format)` |

### `strftime()` and `strptime()` Format Codes

The following is a list of all the format codes that the 1989 C standard requires, and these work on all platforms with a standard C implementation.

| Directive | Meaning | Example | Notes |
| --- | --- | --- | --- |
| `%a` | Weekday as locale’s abbreviated name. | | (1) |
| `%A` | Weekday as locale’s full name. | | (1) |
| `%w` | Weekday as a decimal number, where 0 is Sunday and 6 is Saturday. | 0, 1, …, 6 | |
| `%d` | Day of the month as a zero-padded decimal number. | 01, 02, …, 31 | (9) |
| `%b` | Month as locale’s abbreviated name. | | (1) |
| `%B` | Month as locale’s full name. | | (1) |
| `%m` | Month as a zero-padded decimal number. | 01, 02, …, 12 | (9) |
| `%y` | Year without century as a zero-padded decimal number. | 00, 01, …, 99 | (9) |
| `%Y` | Year with century as a decimal number. | 0001, 0002, …, 2013, 2014, …, 9998, 9999 | (2) |
| `%H` | Hour (24-hour clock) as a zero-padded decimal number. | 00, 01, …, 23 | (9) |
| `%I` | Hour (12-hour clock) as a zero-padded decimal number. | 01, 02, …, 12 | (9) |
| `%p` | Locale’s equivalent of either AM or PM. | | (1), (3) |
| `%M` | Minute as a zero-padded decimal number. | 00, 01, …, 59 | (9) |
| `%S` | Second as a zero-padded decimal number. | 00, 01, …, 59 | (4), (9) |
| `%f` | Microsecond as a decimal number, zero-padded to 6 digits. | 000000, 000001, …, 999999 | (5) |
| `%z` | UTC offset in the form `±HHMM[SS[.ffffff]]` (empty string if the object is naive). | (empty), +0000, -0400, +1030, +063415, -030712.345216 | (6) |
| `%Z` | Time zone name (empty string if the object is naive). | (empty), UTC, GMT | (6) |
| `%j` | Day of the year as a zero-padded decimal number. | 001, 002, …, 366 | (9) |
| `%U` | Week number of the year (Sunday as the first day of the week) as a zero-padded decimal number. All days in a new year preceding the first Sunday are considered to be in week 0. | 00, 01, …, 53 | (7), (9) |
| `%W` | Week number of the year (Monday as the first day of the week) as a zero-padded decimal number. All days in a new year preceding the first Monday are considered to be in week 0. | 00, 01, …, 53 | (7), (9) |
| `%c` | Locale’s appropriate date and time representation. | | (1) |
| `%x` | Locale’s appropriate date representation. | | (1) |
| `%X` | Locale’s appropriate time representation. | | (1) |
| `%%` | A literal `'%'` character. | % | |

Several additional directives not required by the C89 standard are included for convenience. These parameters all correspond to ISO 8601 date values.

| Directive | Meaning | Example | Notes |
| --- | --- | --- | --- |
| `%G` | ISO 8601 year with century representing the year that contains the greater part of the ISO week (`%V`). | 0001, 0002, …, 2013, 2014, …, 9998, 9999 | (8) |
| `%u` | ISO 8601 weekday as a decimal number where 1 is Monday. | 1, 2, …, 7 | |
| `%V` | ISO 8601 week as a decimal number with Monday as the first day of the week. Week 01 is the week containing Jan 4. | 01, 02, …, 53 | (8), (9) |

These may not be available on all platforms when used with the `strftime()` method. The ISO 8601 year and ISO 8601 week directives are not interchangeable with the year and week number directives above. Calling `strptime()` with incomplete or ambiguous ISO 8601 directives will raise a [`ValueError`](exceptions#ValueError "ValueError").

The full set of format codes supported varies across platforms, because Python calls the platform C library’s `strftime()` function, and platform variations are common. To see the full set of format codes supported on your platform, consult the *[strftime(3)](https://manpages.debian.org/strftime(3))* documentation. There are also differences between platforms in handling of unsupported format specifiers.

New in version 3.6: `%G`, `%u` and `%V` were added.

### Technical Detail

Broadly speaking, `d.strftime(fmt)` acts like the [`time`](time#module-time "time: Time access and conversions.") module’s `time.strftime(fmt, d.timetuple())` although not all objects support a `timetuple()` method.

For the [`datetime.strptime()`](#datetime.datetime.strptime "datetime.datetime.strptime") class method, the default value is `1900-01-01T00:00:00.000`: any components not specified in the format string will be pulled from the default value. [4](#id8)

Using `datetime.strptime(date_string, format)` is equivalent to:

```
datetime(*(time.strptime(date_string, format)[0:6]))
```

except when the format includes sub-second components or timezone offset information, which are supported in `datetime.strptime` but are discarded by `time.strptime`.

For [`time`](#datetime.time "datetime.time") objects, the format codes for year, month, and day should not be used, as [`time`](time#module-time "time: Time access and conversions.") objects have no such values. If they’re used anyway, `1900` is substituted for the year, and `1` for the month and day.

For [`date`](#datetime.date "datetime.date") objects, the format codes for hours, minutes, seconds, and microseconds should not be used, as [`date`](#datetime.date "datetime.date") objects have no such values. If they’re used anyway, `0` is substituted for them.
For the same reason, handling of format strings containing Unicode code points that can’t be represented in the charset of the current locale is also platform-dependent. On some platforms such code points are preserved intact in the output, while on others `strftime` may raise [`UnicodeError`](exceptions#UnicodeError "UnicodeError") or return an empty string instead. Notes: 1. Because the format depends on the current locale, care should be taken when making assumptions about the output value. Field orderings will vary (for example, “month/day/year” versus “day/month/year”), and the output may contain Unicode characters encoded using the locale’s default encoding (for example, if the current locale is `ja_JP`, the default encoding could be any one of `eucJP`, `SJIS`, or `utf-8`; use [`locale.getlocale()`](locale#locale.getlocale "locale.getlocale") to determine the current locale’s encoding). 2. The `strptime()` method can parse years in the full [1, 9999] range, but years < 1000 must be zero-filled to 4-digit width. Changed in version 3.2: In previous versions, `strftime()` method was restricted to years >= 1900. Changed in version 3.3: In version 3.2, `strftime()` method was restricted to years >= 1000. 3. When used with the `strptime()` method, the `%p` directive only affects the output hour field if the `%I` directive is used to parse the hour. 4. Unlike the [`time`](time#module-time "time: Time access and conversions.") module, the [`datetime`](#module-datetime "datetime: Basic date and time types.") module does not support leap seconds. 5. When used with the `strptime()` method, the `%f` directive accepts from one to six digits and zero pads on the right. `%f` is an extension to the set of format characters in the C standard (but implemented separately in datetime objects, and therefore always available). 6. For a naive object, the `%z` and `%Z` format codes are replaced by empty strings. For an aware object: `%z` `utcoffset()` is transformed into a string of the form `±HHMM[SS[.ffffff]]`, where `HH` is a 2-digit string giving the number of UTC offset hours, `MM` is a 2-digit string giving the number of UTC offset minutes, `SS` is a 2-digit string giving the number of UTC offset seconds and `ffffff` is a 6-digit string giving the number of UTC offset microseconds. The `ffffff` part is omitted when the offset is a whole number of seconds and both the `ffffff` and the `SS` part is omitted when the offset is a whole number of minutes. For example, if `utcoffset()` returns `timedelta(hours=-3, minutes=-30)`, `%z` is replaced with the string `'-0330'`. Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes. Changed in version 3.7: When the `%z` directive is provided to the `strptime()` method, the UTC offsets can have a colon as a separator between hours, minutes and seconds. For example, `'+01:00:00'` will be parsed as an offset of one hour. In addition, providing `'Z'` is identical to `'+00:00'`. `%Z` In `strftime()`, `%Z` is replaced by an empty string if `tzname()` returns `None`; otherwise `%Z` is replaced by the returned value, which must be a string. `strptime()` only accepts certain values for `%Z`: 1. any value in `time.tzname` for your machine’s locale 2. the hard-coded values `UTC` and `GMT` So someone living in Japan may have `JST`, `UTC`, and `GMT` as valid values, but probably not `EST`. It will raise `ValueError` for invalid values. 
Changed in version 3.2: When the `%z` directive is provided to the `strptime()` method, an aware [`datetime`](#datetime.datetime "datetime.datetime") object will be produced. The `tzinfo` of the result will be set to a [`timezone`](#datetime.timezone "datetime.timezone") instance.

7. When used with the `strptime()` method, `%U` and `%W` are only used in calculations when the day of the week and the calendar year (`%Y`) are specified.
8. Similar to `%U` and `%W`, `%V` is only used in calculations when the day of the week and the ISO year (`%G`) are specified in a `strptime()` format string. Also note that `%G` and `%Y` are not interchangeable.
9. When used with the `strptime()` method, the leading zero is optional for formats `%d`, `%m`, `%H`, `%I`, `%M`, `%S`, `%j`, `%U`, `%W`, and `%V`. Format `%y` does require a leading zero.

#### Footnotes

`1` If, that is, we ignore the effects of Relativity.

`2` This matches the definition of the “proleptic Gregorian” calendar in Dershowitz and Reingold’s book *Calendrical Calculations*, where it’s the base calendar for all computations. See the book for algorithms for converting between proleptic Gregorian ordinals and many other calendar systems.

`3` See R. H. van Gent’s [guide to the mathematics of the ISO 8601 calendar](https://www.staff.science.uu.nl/~gent0113/calendar/isocalendar.htm) for a good explanation.

`4` Passing `datetime.strptime('Feb 29', '%b %d')` will fail since `1900` is not a leap year.
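As a brief illustration of the `%z` behavior in note (6) above, a short sketch (the date string is an arbitrary example):

```
from datetime import datetime

# %z parses a UTC offset and produces an aware datetime whose
# tzinfo is set to a timezone instance.
dt = datetime.strptime('2021-03-14 09:26:53 -0500', '%Y-%m-%d %H:%M:%S %z')
print(dt.tzinfo)                             # UTC-05:00
print(dt.strftime('%Y-%m-%d %H:%M:%S %z'))   # 2021-03-14 09:26:53 -0500
```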
python smtplib — SMTP protocol client smtplib — SMTP protocol client ============================== **Source code:** [Lib/smtplib.py](https://github.com/python/cpython/tree/3.9/Lib/smtplib.py) The [`smtplib`](#module-smtplib "smtplib: SMTP protocol client (requires sockets).") module defines an SMTP client session object that can be used to send mail to any Internet machine with an SMTP or ESMTP listener daemon. For details of SMTP and ESMTP operation, consult [**RFC 821**](https://tools.ietf.org/html/rfc821.html) (Simple Mail Transfer Protocol) and [**RFC 1869**](https://tools.ietf.org/html/rfc1869.html) (SMTP Service Extensions). `class smtplib.SMTP(host='', port=0, local_hostname=None, [timeout, ]source_address=None)` An [`SMTP`](#smtplib.SMTP "smtplib.SMTP") instance encapsulates an SMTP connection. It has methods that support a full repertoire of SMTP and ESMTP operations. If the optional host and port parameters are given, the SMTP [`connect()`](#smtplib.SMTP.connect "smtplib.SMTP.connect") method is called with those parameters during initialization. If specified, *local\_hostname* is used as the FQDN of the local host in the HELO/EHLO command. Otherwise, the local hostname is found using [`socket.getfqdn()`](socket#socket.getfqdn "socket.getfqdn"). If the [`connect()`](#smtplib.SMTP.connect "smtplib.SMTP.connect") call returns anything other than a success code, an [`SMTPConnectError`](#smtplib.SMTPConnectError "smtplib.SMTPConnectError") is raised. The optional *timeout* parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). If the timeout expires, [`socket.timeout`](socket#socket.timeout "socket.timeout") is raised. The optional source\_address parameter allows binding to some specific source address in a machine with multiple network interfaces, and/or to some specific source TCP port. It takes a 2-tuple (host, port), for the socket to bind to as its source address before connecting. If omitted (or if host or port are `''` and/or 0 respectively) the OS default behavior will be used. For normal use, you should only require the initialization/connect, [`sendmail()`](#smtplib.SMTP.sendmail "smtplib.SMTP.sendmail"), and [`SMTP.quit()`](#smtplib.SMTP.quit "smtplib.SMTP.quit") methods. An example is included below. The [`SMTP`](#smtplib.SMTP "smtplib.SMTP") class supports the [`with`](../reference/compound_stmts#with) statement. When used like this, the SMTP `QUIT` command is issued automatically when the `with` statement exits. E.g.: ``` >>> from smtplib import SMTP >>> with SMTP("domain.org") as smtp: ... smtp.noop() ... (250, b'Ok') >>> ``` All commands will raise an [auditing event](sys#auditing) `smtplib.SMTP.send` with arguments `self` and `data`, where `data` is the bytes about to be sent to the remote host. Changed in version 3.3: Support for the [`with`](../reference/compound_stmts#with) statement was added. Changed in version 3.3: source\_address argument was added. New in version 3.5: The SMTPUTF8 extension ([**RFC 6531**](https://tools.ietf.org/html/rfc6531.html)) is now supported. 
Changed in version 3.9: If the *timeout* parameter is set to be zero, it will raise a [`ValueError`](exceptions#ValueError "ValueError") to prevent the creation of a non-blocking socket.

`class smtplib.SMTP_SSL(host='', port=0, local_hostname=None, keyfile=None, certfile=None, [timeout, ]context=None, source_address=None)`

An [`SMTP_SSL`](#smtplib.SMTP_SSL "smtplib.SMTP_SSL") instance behaves exactly the same as instances of [`SMTP`](#smtplib.SMTP "smtplib.SMTP"). [`SMTP_SSL`](#smtplib.SMTP_SSL "smtplib.SMTP_SSL") should be used for situations where SSL is required from the beginning of the connection and using `starttls()` is not appropriate. If *host* is not specified, the local host is used. If *port* is zero, the standard SMTP-over-SSL port (465) is used. The optional arguments *local\_hostname*, *timeout* and *source\_address* have the same meaning as they do in the [`SMTP`](#smtplib.SMTP "smtplib.SMTP") class. *context*, also optional, can contain a [`SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") and allows configuring various aspects of the secure connection. Please read [Security considerations](ssl#ssl-security) for best practices. *keyfile* and *certfile* are a legacy alternative to *context*, and can point to a PEM formatted private key and certificate chain file for the SSL connection.

Changed in version 3.3: *context* was added.

Changed in version 3.3: source\_address argument was added.

Changed in version 3.4: The class now supports hostname check with [`ssl.SSLContext.check_hostname`](ssl#ssl.SSLContext.check_hostname "ssl.SSLContext.check_hostname") and *Server Name Indication* (see [`ssl.HAS_SNI`](ssl#ssl.HAS_SNI "ssl.HAS_SNI")).

Deprecated since version 3.6: *keyfile* and *certfile* are deprecated in favor of *context*. Please use [`ssl.SSLContext.load_cert_chain()`](ssl#ssl.SSLContext.load_cert_chain "ssl.SSLContext.load_cert_chain") instead, or let [`ssl.create_default_context()`](ssl#ssl.create_default_context "ssl.create_default_context") select the system’s trusted CA certificates for you.

Changed in version 3.9: If the *timeout* parameter is set to be zero, it will raise a [`ValueError`](exceptions#ValueError "ValueError") to prevent the creation of a non-blocking socket.

`class smtplib.LMTP(host='', port=LMTP_PORT, local_hostname=None, source_address=None[, timeout])`

The LMTP protocol, which is very similar to ESMTP, is heavily based on the standard SMTP client. It’s common to use Unix sockets for LMTP, so our `connect()` method must support that as well as a regular host:port server. The optional arguments local\_hostname and source\_address have the same meaning as they do in the [`SMTP`](#smtplib.SMTP "smtplib.SMTP") class. To specify a Unix socket, you must use an absolute path for *host*, starting with a ‘/’. Authentication is supported, using the regular SMTP mechanism. When using a Unix socket, LMTP servers generally don’t support or require any authentication, but your mileage might vary.

Changed in version 3.9: The optional *timeout* parameter was added.

A nice selection of exceptions is defined as well:

`exception smtplib.SMTPException`

Subclass of [`OSError`](exceptions#OSError "OSError") that is the base exception class for all the other exceptions provided by this module.

Changed in version 3.4: SMTPException became subclass of [`OSError`](exceptions#OSError "OSError").

`exception smtplib.SMTPServerDisconnected`

This exception is raised when the server unexpectedly disconnects, or when an attempt is made to use the [`SMTP`](#smtplib.SMTP "smtplib.SMTP") instance before connecting it to a server.

`exception smtplib.SMTPResponseException`

Base class for all exceptions that include an SMTP error code. These exceptions are generated in some instances when the SMTP server returns an error code. The error code is stored in the `smtp_code` attribute of the error, and the `smtp_error` attribute is set to the error message.

`exception smtplib.SMTPSenderRefused`

Sender address refused. In addition to the attributes set on all [`SMTPResponseException`](#smtplib.SMTPResponseException "smtplib.SMTPResponseException") exceptions, this sets `sender` to the string that the SMTP server refused.

`exception smtplib.SMTPRecipientsRefused`

All recipient addresses refused. The errors for each recipient are accessible through the attribute `recipients`, which is a dictionary of exactly the same sort as [`SMTP.sendmail()`](#smtplib.SMTP.sendmail "smtplib.SMTP.sendmail") returns.

`exception smtplib.SMTPDataError`

The SMTP server refused to accept the message data.

`exception smtplib.SMTPConnectError`

Error occurred during establishment of a connection with the server.

`exception smtplib.SMTPHeloError`

The server refused our `HELO` message.

`exception smtplib.SMTPNotSupportedError`

The command or option attempted is not supported by the server.

New in version 3.5.

`exception smtplib.SMTPAuthenticationError`

SMTP authentication went wrong. Most probably the server didn’t accept the username/password combination provided.

See also

[**RFC 821**](https://tools.ietf.org/html/rfc821.html) - Simple Mail Transfer Protocol Protocol definition for SMTP. This document covers the model, operating procedure, and protocol details for SMTP.

[**RFC 1869**](https://tools.ietf.org/html/rfc1869.html) - SMTP Service Extensions Definition of the ESMTP extensions for SMTP. This describes a framework for extending SMTP with new commands, supporting dynamic discovery of the commands provided by the server, and defines a few additional commands.

SMTP Objects
------------

An [`SMTP`](#smtplib.SMTP "smtplib.SMTP") instance has the following methods:

`SMTP.set_debuglevel(level)`

Set the debug output level. A value of 1 or `True` for *level* results in debug messages for connection and for all messages sent to and received from the server. A value of 2 for *level* results in these messages being timestamped.

Changed in version 3.5: Added debuglevel 2.

`SMTP.docmd(cmd, args='')`

Send a command *cmd* to the server. The optional argument *args* is simply concatenated to the command, separated by a space. This returns a 2-tuple composed of a numeric response code and the actual response line (multiline responses are joined into one long line.) In normal operation it should not be necessary to call this method explicitly. It is used to implement other methods and may be useful for testing private extensions. If the connection to the server is lost while waiting for the reply, [`SMTPServerDisconnected`](#smtplib.SMTPServerDisconnected "smtplib.SMTPServerDisconnected") will be raised.

`SMTP.connect(host='localhost', port=0)`

Connect to a host on a given port. The defaults are to connect to the local host at the standard SMTP port (25). If the hostname ends with a colon (`':'`) followed by a number, that suffix will be stripped off and the number interpreted as the port number to use. This method is automatically invoked by the constructor if a host is specified during instantiation. Returns a 2-tuple of the response code and message sent by the server in its connection response.

Raises an [auditing event](sys#auditing) `smtplib.connect` with arguments `self`, `host`, `port`.

`SMTP.helo(name='')`

Identify yourself to the SMTP server using `HELO`. The hostname argument defaults to the fully qualified domain name of the local host. The message returned by the server is stored as the `helo_resp` attribute of the object. In normal operation it should not be necessary to call this method explicitly. It will be implicitly called by [`sendmail()`](#smtplib.SMTP.sendmail "smtplib.SMTP.sendmail") when necessary.

`SMTP.ehlo(name='')`

Identify yourself to an ESMTP server using `EHLO`. The hostname argument defaults to the fully qualified domain name of the local host. Examine the response for ESMTP options and store them for use by [`has_extn()`](#smtplib.SMTP.has_extn "smtplib.SMTP.has_extn"). Also sets several informational attributes: the message returned by the server is stored as the `ehlo_resp` attribute, `does_esmtp` is set to true or false depending on whether the server supports ESMTP, and `esmtp_features` will be a dictionary containing the names of the SMTP service extensions this server supports, and their parameters (if any). Unless you wish to use [`has_extn()`](#smtplib.SMTP.has_extn "smtplib.SMTP.has_extn") before sending mail, it should not be necessary to call this method explicitly. It will be implicitly called by [`sendmail()`](#smtplib.SMTP.sendmail "smtplib.SMTP.sendmail") when necessary.

`SMTP.ehlo_or_helo_if_needed()`

This method calls [`ehlo()`](#smtplib.SMTP.ehlo "smtplib.SMTP.ehlo") and/or [`helo()`](#smtplib.SMTP.helo "smtplib.SMTP.helo") if there has been no previous `EHLO` or `HELO` command this session. It tries ESMTP `EHLO` first.

[`SMTPHeloError`](#smtplib.SMTPHeloError "smtplib.SMTPHeloError") The server didn’t reply properly to the `HELO` greeting.

`SMTP.has_extn(name)`

Return [`True`](constants#True "True") if *name* is in the set of SMTP service extensions returned by the server, [`False`](constants#False "False") otherwise. Case is ignored.

`SMTP.verify(address)`

Check the validity of an address on this server using SMTP `VRFY`. Returns a tuple consisting of code 250 and a full [**RFC 822**](https://tools.ietf.org/html/rfc822.html) address (including human name) if the user address is valid. Otherwise returns an SMTP error code of 400 or greater and an error string.

Note

Many sites disable SMTP `VRFY` in order to foil spammers.

`SMTP.login(user, password, *, initial_response_ok=True)`

Log in on an SMTP server that requires authentication. The arguments are the username and the password to authenticate with. If there has been no previous `EHLO` or `HELO` command this session, this method tries ESMTP `EHLO` first. This method will return normally if the authentication was successful, or may raise the following exceptions:

[`SMTPHeloError`](#smtplib.SMTPHeloError "smtplib.SMTPHeloError") The server didn’t reply properly to the `HELO` greeting.

[`SMTPAuthenticationError`](#smtplib.SMTPAuthenticationError "smtplib.SMTPAuthenticationError") The server didn’t accept the username/password combination.
[`SMTPNotSupportedError`](#smtplib.SMTPNotSupportedError "smtplib.SMTPNotSupportedError") The `AUTH` command is not supported by the server. [`SMTPException`](#smtplib.SMTPException "smtplib.SMTPException") No suitable authentication method was found. Each of the authentication methods supported by [`smtplib`](#module-smtplib "smtplib: SMTP protocol client (requires sockets).") are tried in turn if they are advertised as supported by the server. See [`auth()`](#smtplib.SMTP.auth "smtplib.SMTP.auth") for a list of supported authentication methods. *initial\_response\_ok* is passed through to [`auth()`](#smtplib.SMTP.auth "smtplib.SMTP.auth"). Optional keyword argument *initial\_response\_ok* specifies whether, for authentication methods that support it, an “initial response” as specified in [**RFC 4954**](https://tools.ietf.org/html/rfc4954.html) can be sent along with the `AUTH` command, rather than requiring a challenge/response. Changed in version 3.5: [`SMTPNotSupportedError`](#smtplib.SMTPNotSupportedError "smtplib.SMTPNotSupportedError") may be raised, and the *initial\_response\_ok* parameter was added. `SMTP.auth(mechanism, authobject, *, initial_response_ok=True)` Issue an `SMTP` `AUTH` command for the specified authentication *mechanism*, and handle the challenge response via *authobject*. *mechanism* specifies which authentication mechanism is to be used as argument to the `AUTH` command; the valid values are those listed in the `auth` element of `esmtp_features`. *authobject* must be a callable object taking an optional single argument: data = authobject(challenge=None) If optional keyword argument *initial\_response\_ok* is true, `authobject()` will be called first with no argument. It can return the [**RFC 4954**](https://tools.ietf.org/html/rfc4954.html) “initial response” ASCII `str` which will be encoded and sent with the `AUTH` command as below. If the `authobject()` does not support an initial response (e.g. because it requires a challenge), it should return `None` when called with `challenge=None`. If *initial\_response\_ok* is false, then `authobject()` will not be called first with `None`. If the initial response check returns `None`, or if *initial\_response\_ok* is false, `authobject()` will be called to process the server’s challenge response; the *challenge* argument it is passed will be a `bytes`. It should return ASCII `str` *data* that will be base64 encoded and sent to the server. The `SMTP` class provides `authobjects` for the `CRAM-MD5`, `PLAIN`, and `LOGIN` mechanisms; they are named `SMTP.auth_cram_md5`, `SMTP.auth_plain`, and `SMTP.auth_login` respectively. They all require that the `user` and `password` properties of the `SMTP` instance are set to appropriate values. User code does not normally need to call `auth` directly, but can instead call the [`login()`](#smtplib.SMTP.login "smtplib.SMTP.login") method, which will try each of the above mechanisms in turn, in the order listed. `auth` is exposed to facilitate the implementation of authentication methods not (or not yet) supported directly by [`smtplib`](#module-smtplib "smtplib: SMTP protocol client (requires sockets)."). New in version 3.5. `SMTP.starttls(keyfile=None, certfile=None, context=None)` Put the SMTP connection in TLS (Transport Layer Security) mode. All SMTP commands that follow will be encrypted. You should then call [`ehlo()`](#smtplib.SMTP.ehlo "smtplib.SMTP.ehlo") again. 
If *keyfile* and *certfile* are provided, they are used to create an [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext"). The optional *context* parameter is an [`ssl.SSLContext`](ssl#ssl.SSLContext "ssl.SSLContext") object; this is an alternative to using a keyfile and a certfile, and if specified, both *keyfile* and *certfile* should be `None`. If there has been no previous `EHLO` or `HELO` command this session, this method tries ESMTP `EHLO` first.

Deprecated since version 3.6: *keyfile* and *certfile* are deprecated in favor of *context*. Please use [`ssl.SSLContext.load_cert_chain()`](ssl#ssl.SSLContext.load_cert_chain "ssl.SSLContext.load_cert_chain") instead, or let [`ssl.create_default_context()`](ssl#ssl.create_default_context "ssl.create_default_context") select the system’s trusted CA certificates for you.

[`SMTPHeloError`](#smtplib.SMTPHeloError "smtplib.SMTPHeloError") The server didn’t reply properly to the `HELO` greeting.

[`SMTPNotSupportedError`](#smtplib.SMTPNotSupportedError "smtplib.SMTPNotSupportedError") The server does not support the STARTTLS extension.

[`RuntimeError`](exceptions#RuntimeError "RuntimeError") SSL/TLS support is not available to your Python interpreter.

Changed in version 3.3: *context* was added.

Changed in version 3.4: The method now supports hostname check with `SSLContext.check_hostname` and *Server Name Indication* (see [`HAS_SNI`](ssl#ssl.HAS_SNI "ssl.HAS_SNI")).

Changed in version 3.5: The error raised for lack of STARTTLS support is now the [`SMTPNotSupportedError`](#smtplib.SMTPNotSupportedError "smtplib.SMTPNotSupportedError") subclass instead of the base [`SMTPException`](#smtplib.SMTPException "smtplib.SMTPException").

`SMTP.sendmail(from_addr, to_addrs, msg, mail_options=(), rcpt_options=())`

Send mail. The required arguments are an [**RFC 822**](https://tools.ietf.org/html/rfc822.html) from-address string, a list of [**RFC 822**](https://tools.ietf.org/html/rfc822.html) to-address strings (a bare string will be treated as a list with 1 address), and a message string. The caller may pass a list of ESMTP options (such as `8bitmime`) to be used in `MAIL FROM` commands as *mail\_options*. ESMTP options (such as `DSN` commands) that should be used with all `RCPT` commands can be passed as *rcpt\_options*. (If you need to use different ESMTP options for different recipients, you have to use the low-level methods such as `mail()`, `rcpt()` and `data()` to send the message.)

Note

The *from\_addr* and *to\_addrs* parameters are used to construct the message envelope used by the transport agents. `sendmail` does not modify the message headers in any way.

*msg* may be a string containing characters in the ASCII range, or a byte string. A string is encoded to bytes using the ascii codec, and lone `\r` and `\n` characters are converted to `\r\n` characters. A byte string is not modified.

If there has been no previous `EHLO` or `HELO` command this session, this method tries ESMTP `EHLO` first. If the server does ESMTP, message size and each of the specified options will be passed to it (if the option is in the feature set the server advertises). If `EHLO` fails, `HELO` will be tried and ESMTP options suppressed.

This method will return normally if the mail is accepted for at least one recipient. Otherwise it will raise an exception. That is, if this method does not raise an exception, then someone should get your mail. If this method does not raise an exception, it returns a dictionary, with one entry for each recipient that was refused. Each entry contains a tuple of the SMTP error code and the accompanying error message sent by the server.

If `SMTPUTF8` is included in *mail\_options*, and the server supports it, *from\_addr* and *to\_addrs* may contain non-ASCII characters.

This method may raise the following exceptions:

[`SMTPRecipientsRefused`](#smtplib.SMTPRecipientsRefused "smtplib.SMTPRecipientsRefused") All recipients were refused. Nobody got the mail. The `recipients` attribute of the exception object is a dictionary with information about the refused recipients (like the one returned when at least one recipient was accepted).

[`SMTPHeloError`](#smtplib.SMTPHeloError "smtplib.SMTPHeloError") The server didn’t reply properly to the `HELO` greeting.

[`SMTPSenderRefused`](#smtplib.SMTPSenderRefused "smtplib.SMTPSenderRefused") The server didn’t accept the *from\_addr*.

[`SMTPDataError`](#smtplib.SMTPDataError "smtplib.SMTPDataError") The server replied with an unexpected error code (other than a refusal of a recipient).

[`SMTPNotSupportedError`](#smtplib.SMTPNotSupportedError "smtplib.SMTPNotSupportedError") `SMTPUTF8` was given in the *mail\_options* but is not supported by the server.

Unless otherwise noted, the connection will be open even after an exception is raised.

Changed in version 3.2: *msg* may be a byte string.

Changed in version 3.5: `SMTPUTF8` support added, and [`SMTPNotSupportedError`](#smtplib.SMTPNotSupportedError "smtplib.SMTPNotSupportedError") may be raised if `SMTPUTF8` is specified but the server does not support it.

`SMTP.send_message(msg, from_addr=None, to_addrs=None, mail_options=(), rcpt_options=())`

This is a convenience method for calling [`sendmail()`](#smtplib.SMTP.sendmail "smtplib.SMTP.sendmail") with the message represented by an [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message") object. The arguments have the same meaning as for [`sendmail()`](#smtplib.SMTP.sendmail "smtplib.SMTP.sendmail"), except that *msg* is a `Message` object.

If *from\_addr* is `None` or *to\_addrs* is `None`, `send_message` fills those arguments with addresses extracted from the headers of *msg* as specified in [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html): *from\_addr* is set to the *Sender* field if it is present, and otherwise to the *From* field. *to\_addrs* combines the values (if any) of the *To*, *Cc*, and *Bcc* fields from *msg*. If exactly one set of *Resent-\** headers appears in the message, the regular headers are ignored and the *Resent-\** headers are used instead. If the message contains more than one set of *Resent-\** headers, a [`ValueError`](exceptions#ValueError "ValueError") is raised, since there is no way to unambiguously detect the most recent set of *Resent-\** headers.

`send_message` serializes *msg* using [`BytesGenerator`](email.generator#email.generator.BytesGenerator "email.generator.BytesGenerator") with `\r\n` as the *linesep*, and calls [`sendmail()`](#smtplib.SMTP.sendmail "smtplib.SMTP.sendmail") to transmit the resulting message. Regardless of the values of *from\_addr* and *to\_addrs*, `send_message` does not transmit any *Bcc* or *Resent-Bcc* headers that may appear in *msg*. If any of the addresses in *from\_addr* and *to\_addrs* contain non-ASCII characters and the server does not advertise `SMTPUTF8` support, an [`SMTPNotSupportedError`](#smtplib.SMTPNotSupportedError "smtplib.SMTPNotSupportedError") is raised.
Otherwise the `Message` is serialized with a clone of its [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages") with the [`utf8`](email.policy#email.policy.EmailPolicy.utf8 "email.policy.EmailPolicy.utf8") attribute set to `True`, and `SMTPUTF8` and `BODY=8BITMIME` are added to *mail\_options*. New in version 3.2. New in version 3.5: Support for internationalized addresses (`SMTPUTF8`). `SMTP.quit()` Terminate the SMTP session and close the connection. Return the result of the SMTP `QUIT` command. Low-level methods corresponding to the standard SMTP/ESMTP commands `HELP`, `RSET`, `NOOP`, `MAIL`, `RCPT`, and `DATA` are also supported. Normally these do not need to be called directly, so they are not documented here. For details, consult the module code. SMTP Example ------------ This example prompts the user for addresses needed in the message envelope (‘To’ and ‘From’ addresses), and the message to be delivered. Note that the headers to be included with the message must be included in the message as entered; this example doesn’t do any processing of the [**RFC 822**](https://tools.ietf.org/html/rfc822.html) headers. In particular, the ‘To’ and ‘From’ addresses must be included in the message headers explicitly. ``` import smtplib def prompt(prompt): return input(prompt).strip() fromaddr = prompt("From: ") toaddrs = prompt("To: ").split() print("Enter message, end with ^D (Unix) or ^Z (Windows):") # Add the From: and To: headers at the start! msg = ("From: %s\r\nTo: %s\r\n\r\n" % (fromaddr, ", ".join(toaddrs))) while True: try: line = input() except EOFError: break if not line: break msg = msg + line print("Message length is", len(msg)) server = smtplib.SMTP('localhost') server.set_debuglevel(1) server.sendmail(fromaddr, toaddrs, msg) server.quit() ``` Note In general, you will want to use the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package’s features to construct an email message, which you can then send via [`send_message()`](#smtplib.SMTP.send_message "smtplib.SMTP.send_message"); see [email: Examples](email.examples#email-examples).
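For example, a short sketch of that approach (the addresses and the locally running server are assumptions for illustration):

```
import smtplib
from email.message import EmailMessage

# Build the message; send_message() extracts the envelope
# from the From/To headers as described above.
msg = EmailMessage()
msg['Subject'] = 'Test message'
msg['From'] = 'me@example.com'
msg['To'] = 'you@example.com'
msg.set_content('Hello from smtplib and the email package.')

# Assumes an SMTP server listening on localhost; QUIT is issued
# automatically when the with block exits.
with smtplib.SMTP('localhost') as server:
    server.send_message(msg)
```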
python csv — CSV File Reading and Writing csv — CSV File Reading and Writing ================================== **Source code:** [Lib/csv.py](https://github.com/python/cpython/tree/3.9/Lib/csv.py) The so-called CSV (Comma Separated Values) format is the most common import and export format for spreadsheets and databases. CSV format was used for many years prior to attempts to describe the format in a standardized way in [**RFC 4180**](https://tools.ietf.org/html/rfc4180.html). The lack of a well-defined standard means that subtle differences often exist in the data produced and consumed by different applications. These differences can make it annoying to process CSV files from multiple sources. Still, while the delimiters and quoting characters vary, the overall format is similar enough that it is possible to write a single module which can efficiently manipulate such data, hiding the details of reading and writing the data from the programmer. The [`csv`](#module-csv "csv: Write and read tabular data to and from delimited files.") module implements classes to read and write tabular data in CSV format. It allows programmers to say, “write this data in the format preferred by Excel,” or “read data from this file which was generated by Excel,” without knowing the precise details of the CSV format used by Excel. Programmers can also describe the CSV formats understood by other applications or define their own special-purpose CSV formats. The [`csv`](#module-csv "csv: Write and read tabular data to and from delimited files.") module’s [`reader`](#csv.reader "csv.reader") and [`writer`](#csv.writer "csv.writer") objects read and write sequences. Programmers can also read and write data in dictionary form using the [`DictReader`](#csv.DictReader "csv.DictReader") and [`DictWriter`](#csv.DictWriter "csv.DictWriter") classes. See also [**PEP 305**](https://www.python.org/dev/peps/pep-0305) - CSV File API The Python Enhancement Proposal which proposed this addition to Python. Module Contents --------------- The [`csv`](#module-csv "csv: Write and read tabular data to and from delimited files.") module defines the following functions: `csv.reader(csvfile, dialect='excel', **fmtparams)` Return a reader object which will iterate over lines in the given *csvfile*. *csvfile* can be any object which supports the [iterator](../glossary#term-iterator) protocol and returns a string each time its `__next__()` method is called — [file objects](../glossary#term-file-object) and list objects are both suitable. If *csvfile* is a file object, it should be opened with `newline=''`. [1](#id3) An optional *dialect* parameter can be given which is used to define a set of parameters specific to a particular CSV dialect. It may be an instance of a subclass of the [`Dialect`](#csv.Dialect "csv.Dialect") class or one of the strings returned by the [`list_dialects()`](#csv.list_dialects "csv.list_dialects") function. The other optional *fmtparams* keyword arguments can be given to override individual formatting parameters in the current dialect. For full details about the dialect and formatting parameters, see section [Dialects and Formatting Parameters](#csv-fmt-params). Each row read from the csv file is returned as a list of strings. No automatic data type conversion is performed unless the `QUOTE_NONNUMERIC` format option is specified (in which case unquoted fields are transformed into floats). A short usage example: ``` >>> import csv >>> with open('eggs.csv', newline='') as csvfile: ... 
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|') ... for row in spamreader: ... print(', '.join(row)) Spam, Spam, Spam, Spam, Spam, Baked Beans Spam, Lovely Spam, Wonderful Spam ``` `csv.writer(csvfile, dialect='excel', **fmtparams)` Return a writer object responsible for converting the user’s data into delimited strings on the given file-like object. *csvfile* can be any object with a `write()` method. If *csvfile* is a file object, it should be opened with `newline=''` [1](#id3). An optional *dialect* parameter can be given which is used to define a set of parameters specific to a particular CSV dialect. It may be an instance of a subclass of the [`Dialect`](#csv.Dialect "csv.Dialect") class or one of the strings returned by the [`list_dialects()`](#csv.list_dialects "csv.list_dialects") function. The other optional *fmtparams* keyword arguments can be given to override individual formatting parameters in the current dialect. For full details about dialects and formatting parameters, see the [Dialects and Formatting Parameters](#csv-fmt-params) section. To make it as easy as possible to interface with modules which implement the DB API, the value [`None`](constants#None "None") is written as the empty string. While this isn’t a reversible transformation, it makes it easier to dump SQL NULL data values to CSV files without preprocessing the data returned from a `cursor.fetch*` call. All other non-string data are stringified with [`str()`](stdtypes#str "str") before being written. A short usage example: ``` import csv with open('eggs.csv', 'w', newline='') as csvfile: spamwriter = csv.writer(csvfile, delimiter=' ', quotechar='|', quoting=csv.QUOTE_MINIMAL) spamwriter.writerow(['Spam'] * 5 + ['Baked Beans']) spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam']) ``` `csv.register_dialect(name[, dialect[, **fmtparams]])` Associate *dialect* with *name*. *name* must be a string. The dialect can be specified either by passing a sub-class of [`Dialect`](#csv.Dialect "csv.Dialect"), or by *fmtparams* keyword arguments, or both, with keyword arguments overriding parameters of the dialect. For full details about dialects and formatting parameters, see section [Dialects and Formatting Parameters](#csv-fmt-params). `csv.unregister_dialect(name)` Delete the dialect associated with *name* from the dialect registry. An [`Error`](#csv.Error "csv.Error") is raised if *name* is not a registered dialect name. `csv.get_dialect(name)` Return the dialect associated with *name*. An [`Error`](#csv.Error "csv.Error") is raised if *name* is not a registered dialect name. This function returns an immutable [`Dialect`](#csv.Dialect "csv.Dialect"). `csv.list_dialects()` Return the names of all registered dialects. `csv.field_size_limit([new_limit])` Returns the current maximum field size allowed by the parser. If *new\_limit* is given, this becomes the new limit. The [`csv`](#module-csv "csv: Write and read tabular data to and from delimited files.") module defines the following classes: `class csv.DictReader(f, fieldnames=None, restkey=None, restval=None, dialect='excel', *args, **kwds)` Create an object that operates like a regular reader but maps the information in each row to a [`dict`](stdtypes#dict "dict") whose keys are given by the optional *fieldnames* parameter. The *fieldnames* parameter is a [sequence](../glossary#term-sequence). If *fieldnames* is omitted, the values in the first row of file *f* will be used as the fieldnames. 
Regardless of how the fieldnames are determined, the dictionary preserves their original ordering. If a row has more fields than fieldnames, the remaining data is put in a list and stored with the fieldname specified by *restkey* (which defaults to `None`). If a non-blank row has fewer fields than fieldnames, the missing values are filled-in with the value of *restval* (which defaults to `None`). All other optional or keyword arguments are passed to the underlying [`reader`](#csv.reader "csv.reader") instance. Changed in version 3.6: Returned rows are now of type `OrderedDict`. Changed in version 3.8: Returned rows are now of type [`dict`](stdtypes#dict "dict"). A short usage example: ``` >>> import csv >>> with open('names.csv', newline='') as csvfile: ... reader = csv.DictReader(csvfile) ... for row in reader: ... print(row['first_name'], row['last_name']) ... Eric Idle John Cleese >>> print(row) {'first_name': 'John', 'last_name': 'Cleese'} ``` `class csv.DictWriter(f, fieldnames, restval='', extrasaction='raise', dialect='excel', *args, **kwds)` Create an object which operates like a regular writer but maps dictionaries onto output rows. The *fieldnames* parameter is a [`sequence`](collections.abc#module-collections.abc "collections.abc: Abstract base classes for containers") of keys that identify the order in which values in the dictionary passed to the `writerow()` method are written to file *f*. The optional *restval* parameter specifies the value to be written if the dictionary is missing a key in *fieldnames*. If the dictionary passed to the `writerow()` method contains a key not found in *fieldnames*, the optional *extrasaction* parameter indicates what action to take. If it is set to `'raise'`, the default value, a [`ValueError`](exceptions#ValueError "ValueError") is raised. If it is set to `'ignore'`, extra values in the dictionary are ignored. Any other optional or keyword arguments are passed to the underlying [`writer`](#csv.writer "csv.writer") instance. Note that unlike the [`DictReader`](#csv.DictReader "csv.DictReader") class, the *fieldnames* parameter of the [`DictWriter`](#csv.DictWriter "csv.DictWriter") class is not optional. A short usage example: ``` import csv with open('names.csv', 'w', newline='') as csvfile: fieldnames = ['first_name', 'last_name'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() writer.writerow({'first_name': 'Baked', 'last_name': 'Beans'}) writer.writerow({'first_name': 'Lovely', 'last_name': 'Spam'}) writer.writerow({'first_name': 'Wonderful', 'last_name': 'Spam'}) ``` `class csv.Dialect` The [`Dialect`](#csv.Dialect "csv.Dialect") class is a container class whose attributes contain information for how to handle doublequotes, whitespace, delimiters, etc. Due to the lack of a strict CSV specification, different applications produce subtly different CSV data. [`Dialect`](#csv.Dialect "csv.Dialect") instances define how [`reader`](#csv.reader "csv.reader") and [`writer`](#csv.writer "csv.writer") instances behave. 
All available [`Dialect`](#csv.Dialect "csv.Dialect") names are returned by [`list_dialects()`](#csv.list_dialects "csv.list_dialects"), and they can be registered with specific [`reader`](#csv.reader "csv.reader") and [`writer`](#csv.writer "csv.writer") classes through their initializer (`__init__`) functions like this: ``` import csv with open('students.csv', 'w', newline='') as csvfile: writer = csv.writer(csvfile, dialect='unix') ``` `class csv.excel` The [`excel`](#csv.excel "csv.excel") class defines the usual properties of an Excel-generated CSV file. It is registered with the dialect name `'excel'`. `class csv.excel_tab` The [`excel_tab`](#csv.excel_tab "csv.excel_tab") class defines the usual properties of an Excel-generated TAB-delimited file. It is registered with the dialect name `'excel-tab'`. `class csv.unix_dialect` The [`unix_dialect`](#csv.unix_dialect "csv.unix_dialect") class defines the usual properties of a CSV file generated on UNIX systems, i.e. using `'\n'` as line terminator and quoting all fields. It is registered with the dialect name `'unix'`. New in version 3.2. `class csv.Sniffer` The [`Sniffer`](#csv.Sniffer "csv.Sniffer") class is used to deduce the format of a CSV file. The [`Sniffer`](#csv.Sniffer "csv.Sniffer") class provides two methods: `sniff(sample, delimiters=None)` Analyze the given *sample* and return a [`Dialect`](#csv.Dialect "csv.Dialect") subclass reflecting the parameters found. If the optional *delimiters* parameter is given, it is interpreted as a string containing possible valid delimiter characters. `has_header(sample)` Analyze the sample text (presumed to be in CSV format) and return [`True`](constants#True "True") if the first row appears to be a series of column headers. An example for [`Sniffer`](#csv.Sniffer "csv.Sniffer") use: ``` with open('example.csv', newline='') as csvfile: dialect = csv.Sniffer().sniff(csvfile.read(1024)) csvfile.seek(0) reader = csv.reader(csvfile, dialect) # ... process CSV file contents here ... ``` The [`csv`](#module-csv "csv: Write and read tabular data to and from delimited files.") module defines the following constants: `csv.QUOTE_ALL` Instructs [`writer`](#csv.writer "csv.writer") objects to quote all fields. `csv.QUOTE_MINIMAL` Instructs [`writer`](#csv.writer "csv.writer") objects to only quote those fields which contain special characters such as *delimiter*, *quotechar* or any of the characters in *lineterminator*. `csv.QUOTE_NONNUMERIC` Instructs [`writer`](#csv.writer "csv.writer") objects to quote all non-numeric fields. Instructs the reader to convert all non-quoted fields to type *float*. `csv.QUOTE_NONE` Instructs [`writer`](#csv.writer "csv.writer") objects to never quote fields. When the current *delimiter* occurs in output data it is preceded by the current *escapechar* character. If *escapechar* is not set, the writer will raise [`Error`](#csv.Error "csv.Error") if any characters that require escaping are encountered. Instructs [`reader`](#csv.reader "csv.reader") to perform no special processing of quote characters. The [`csv`](#module-csv "csv: Write and read tabular data to and from delimited files.") module defines the following exception: `exception csv.Error` Raised by any of the functions when an error is detected. Dialects and Formatting Parameters ---------------------------------- To make it easier to specify the format of input and output records, specific formatting parameters are grouped together into dialects. 
A dialect is a subclass of the [`Dialect`](#csv.Dialect "csv.Dialect") class having a set of specific methods and a single `validate()` method. When creating [`reader`](#csv.reader "csv.reader") or [`writer`](#csv.writer "csv.writer") objects, the programmer can specify a string or a subclass of the [`Dialect`](#csv.Dialect "csv.Dialect") class as the dialect parameter. In addition to, or instead of, the *dialect* parameter, the programmer can also specify individual formatting parameters, which have the same names as the attributes defined below for the [`Dialect`](#csv.Dialect "csv.Dialect") class. Dialects support the following attributes: `Dialect.delimiter` A one-character string used to separate fields. It defaults to `','`. `Dialect.doublequote` Controls how instances of *quotechar* appearing inside a field should themselves be quoted. When [`True`](constants#True "True"), the character is doubled. When [`False`](constants#False "False"), the *escapechar* is used as a prefix to the *quotechar*. It defaults to [`True`](constants#True "True"). On output, if *doublequote* is [`False`](constants#False "False") and no *escapechar* is set, [`Error`](#csv.Error "csv.Error") is raised if a *quotechar* is found in a field. `Dialect.escapechar` A one-character string used by the writer to escape the *delimiter* if *quoting* is set to [`QUOTE_NONE`](#csv.QUOTE_NONE "csv.QUOTE_NONE") and the *quotechar* if *doublequote* is [`False`](constants#False "False"). On reading, the *escapechar* removes any special meaning from the following character. It defaults to [`None`](constants#None "None"), which disables escaping. `Dialect.lineterminator` The string used to terminate lines produced by the [`writer`](#csv.writer "csv.writer"). It defaults to `'\r\n'`. Note The [`reader`](#csv.reader "csv.reader") is hard-coded to recognise either `'\r'` or `'\n'` as end-of-line, and ignores *lineterminator*. This behavior may change in the future. `Dialect.quotechar` A one-character string used to quote fields containing special characters, such as the *delimiter* or *quotechar*, or which contain new-line characters. It defaults to `'"'`. `Dialect.quoting` Controls when quotes should be generated by the writer and recognised by the reader. It can take on any of the `QUOTE_*` constants (see section [Module Contents](#csv-contents)) and defaults to [`QUOTE_MINIMAL`](#csv.QUOTE_MINIMAL "csv.QUOTE_MINIMAL"). `Dialect.skipinitialspace` When [`True`](constants#True "True"), whitespace immediately following the *delimiter* is ignored. The default is [`False`](constants#False "False"). `Dialect.strict` When `True`, raise exception [`Error`](#csv.Error "csv.Error") on bad CSV input. The default is `False`. Reader Objects -------------- Reader objects ([`DictReader`](#csv.DictReader "csv.DictReader") instances and objects returned by the [`reader()`](#csv.reader "csv.reader") function) have the following public methods: `csvreader.__next__()` Return the next row of the reader’s iterable object as a list (if the object was returned from [`reader()`](#csv.reader "csv.reader")) or a dict (if it is a [`DictReader`](#csv.DictReader "csv.DictReader") instance), parsed according to the current [`Dialect`](#csv.Dialect "csv.Dialect"). Usually you should call this as `next(reader)`. Reader objects have the following public attributes: `csvreader.dialect` A read-only description of the dialect in use by the parser. `csvreader.line_num` The number of lines read from the source iterator. 
This is not the same as the number of records returned, as records can span multiple lines. DictReader objects have the following public attribute: `csvreader.fieldnames` If not passed as a parameter when creating the object, this attribute is initialized upon first access or when the first record is read from the file. Writer Objects -------------- `Writer` objects ([`DictWriter`](#csv.DictWriter "csv.DictWriter") instances and objects returned by the [`writer()`](#csv.writer "csv.writer") function) have the following public methods. A *row* must be an iterable of strings or numbers for `Writer` objects and a dictionary mapping fieldnames to strings or numbers (by passing them through [`str()`](stdtypes#str "str") first) for [`DictWriter`](#csv.DictWriter "csv.DictWriter") objects. Note that complex numbers are written out surrounded by parens. This may cause some problems for other programs which read CSV files (assuming they support complex numbers at all). `csvwriter.writerow(row)` Write the *row* parameter to the writer’s file object, formatted according to the current [`Dialect`](#csv.Dialect "csv.Dialect"). Return the return value of the call to the *write* method of the underlying file object. Changed in version 3.5: Added support of arbitrary iterables. `csvwriter.writerows(rows)` Write all elements in *rows* (an iterable of *row* objects as described above) to the writer’s file object, formatted according to the current dialect. Writer objects have the following public attribute: `csvwriter.dialect` A read-only description of the dialect in use by the writer. DictWriter objects have the following public method: `DictWriter.writeheader()` Write a row with the field names (as specified in the constructor) to the writer’s file object, formatted according to the current dialect. Return the return value of the [`csvwriter.writerow()`](#csv.csvwriter.writerow "csv.csvwriter.writerow") call used internally. New in version 3.2. Changed in version 3.8: [`writeheader()`](#csv.DictWriter.writeheader "csv.DictWriter.writeheader") now also returns the value returned by the [`csvwriter.writerow()`](#csv.csvwriter.writerow "csv.csvwriter.writerow") method it uses internally. Examples -------- The simplest example of reading a CSV file: ``` import csv with open('some.csv', newline='') as f: reader = csv.reader(f) for row in reader: print(row) ``` Reading a file with an alternate format: ``` import csv with open('passwd', newline='') as f: reader = csv.reader(f, delimiter=':', quoting=csv.QUOTE_NONE) for row in reader: print(row) ``` The corresponding simplest possible writing example is: ``` import csv with open('some.csv', 'w', newline='') as f: writer = csv.writer(f) writer.writerows(someiterable) ``` Since [`open()`](functions#open "open") is used to open a CSV file for reading, the file will by default be decoded into unicode using the system default encoding (see [`locale.getpreferredencoding()`](locale#locale.getpreferredencoding "locale.getpreferredencoding")). To decode a file using a different encoding, use the `encoding` argument of open: ``` import csv with open('some.csv', newline='', encoding='utf-8') as f: reader = csv.reader(f) for row in reader: print(row) ``` The same applies to writing in something other than the system default encoding: specify the encoding argument when opening the output file. 
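For example, a minimal sketch of writing UTF-8 regardless of the locale default (the file name and rows here are placeholders, not from the examples above):

```
import csv

# Hypothetical data; any iterable of iterables of strings works.
rows = [['name', 'city'], ['Åsa', 'Göteborg']]

with open('some.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerows(rows)
```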
Registering a new dialect: ``` import csv csv.register_dialect('unixpwd', delimiter=':', quoting=csv.QUOTE_NONE) with open('passwd', newline='') as f: reader = csv.reader(f, 'unixpwd') ``` A slightly more advanced use of the reader — catching and reporting errors: ``` import csv, sys filename = 'some.csv' with open(filename, newline='') as f: reader = csv.reader(f) try: for row in reader: print(row) except csv.Error as e: sys.exit('file {}, line {}: {}'.format(filename, reader.line_num, e)) ``` And while the module doesn’t directly support parsing strings, it can easily be done: ``` import csv for row in csv.reader(['one,two,three']): print(row) ``` #### Footnotes `1(1,2)` If `newline=''` is not specified, newlines embedded inside quoted fields will not be interpreted correctly, and, on platforms that use `\r\n` line endings, an extra `\r` will be added on write. It should always be safe to specify `newline=''`, since the csv module does its own ([universal](../glossary#term-universal-newlines)) newline handling.
python lzma — Compression using the LZMA algorithm lzma — Compression using the LZMA algorithm =========================================== New in version 3.3. **Source code:** [Lib/lzma.py](https://github.com/python/cpython/tree/3.9/Lib/lzma.py) This module provides classes and convenience functions for compressing and decompressing data using the LZMA compression algorithm. Also included is a file interface supporting the `.xz` and legacy `.lzma` file formats used by the **xz** utility, as well as raw compressed streams. The interface provided by this module is very similar to that of the [`bz2`](bz2#module-bz2 "bz2: Interfaces for bzip2 compression and decompression.") module. However, note that [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile") is *not* thread-safe, unlike [`bz2.BZ2File`](bz2#bz2.BZ2File "bz2.BZ2File"), so if you need to use a single [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile") instance from multiple threads, it is necessary to protect it with a lock. `exception lzma.LZMAError` This exception is raised when an error occurs during compression or decompression, or while initializing the compressor/decompressor state. Reading and writing compressed files ------------------------------------ `lzma.open(filename, mode="rb", *, format=None, check=-1, preset=None, filters=None, encoding=None, errors=None, newline=None)` Open an LZMA-compressed file in binary or text mode, returning a [file object](../glossary#term-file-object). The *filename* argument can be either an actual file name (given as a [`str`](stdtypes#str "str"), [`bytes`](stdtypes#bytes "bytes") or [path-like](../glossary#term-path-like-object) object), in which case the named file is opened, or it can be an existing file object to read from or write to. The *mode* argument can be any of `"r"`, `"rb"`, `"w"`, `"wb"`, `"x"`, `"xb"`, `"a"` or `"ab"` for binary mode, or `"rt"`, `"wt"`, `"xt"`, or `"at"` for text mode. The default is `"rb"`. When opening a file for reading, the *format* and *filters* arguments have the same meanings as for [`LZMADecompressor`](#lzma.LZMADecompressor "lzma.LZMADecompressor"). In this case, the *check* and *preset* arguments should not be used. When opening a file for writing, the *format*, *check*, *preset* and *filters* arguments have the same meanings as for [`LZMACompressor`](#lzma.LZMACompressor "lzma.LZMACompressor"). For binary mode, this function is equivalent to the [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile") constructor: `LZMAFile(filename, mode, ...)`. In this case, the *encoding*, *errors* and *newline* arguments must not be provided. For text mode, a [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile") object is created, and wrapped in an [`io.TextIOWrapper`](io#io.TextIOWrapper "io.TextIOWrapper") instance with the specified encoding, error handling behavior, and line ending(s). Changed in version 3.4: Added support for the `"x"`, `"xb"` and `"xt"` modes. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). `class lzma.LZMAFile(filename=None, mode="r", *, format=None, check=-1, preset=None, filters=None)` Open an LZMA-compressed file in binary mode. An [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile") can wrap an already-open [file object](../glossary#term-file-object), or operate directly on a named file. The *filename* argument specifies either the file object to wrap, or the name of the file to open (as a [`str`](stdtypes#str "str"), [`bytes`](stdtypes#bytes "bytes") or [path-like](../glossary#term-path-like-object) object). 
When wrapping an existing file object, the wrapped file will not be closed when the [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile") is closed. The *mode* argument can be either `"r"` for reading (default), `"w"` for overwriting, `"x"` for exclusive creation, or `"a"` for appending. These can equivalently be given as `"rb"`, `"wb"`, `"xb"` and `"ab"` respectively. If *filename* is a file object (rather than an actual file name), a mode of `"w"` does not truncate the file, and is instead equivalent to `"a"`. When opening a file for reading, the input file may be the concatenation of multiple separate compressed streams. These are transparently decoded as a single logical stream. When opening a file for reading, the *format* and *filters* arguments have the same meanings as for [`LZMADecompressor`](#lzma.LZMADecompressor "lzma.LZMADecompressor"). In this case, the *check* and *preset* arguments should not be used. When opening a file for writing, the *format*, *check*, *preset* and *filters* arguments have the same meanings as for [`LZMACompressor`](#lzma.LZMACompressor "lzma.LZMACompressor"). [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile") supports all the members specified by [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase"), except for `detach()` and `truncate()`. Iteration and the [`with`](../reference/compound_stmts#with) statement are supported. The following method is also provided: `peek(size=-1)` Return buffered data without advancing the file position. At least one byte of data will be returned, unless EOF has been reached. The exact number of bytes returned is unspecified (the *size* argument is ignored). Note While calling [`peek()`](#lzma.LZMAFile.peek "lzma.LZMAFile.peek") does not change the file position of the [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile"), it may change the position of the underlying file object (e.g. if the [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile") was constructed by passing a file object for *filename*). Changed in version 3.4: Added support for the `"x"` and `"xb"` modes. Changed in version 3.5: The [`read()`](io#io.BufferedIOBase.read "io.BufferedIOBase.read") method now accepts an argument of `None`. Changed in version 3.6: Accepts a [path-like object](../glossary#term-path-like-object). Compressing and decompressing data in memory -------------------------------------------- `class lzma.LZMACompressor(format=FORMAT_XZ, check=-1, preset=None, filters=None)` Create a compressor object, which can be used to compress data incrementally. For a more convenient way of compressing a single chunk of data, see [`compress()`](#lzma.compress "lzma.compress"). The *format* argument specifies what container format should be used. Possible values are: * `FORMAT_XZ`: The `.xz` container format. This is the default format. * `FORMAT_ALONE`: The legacy `.lzma` container format. This format is more limited than `.xz` – it does not support integrity checks or multiple filters. * `FORMAT_RAW`: A raw data stream, not using any container format. This format specifier does not support integrity checks, and requires that you always specify a custom filter chain (for both compression and decompression). Additionally, data compressed in this manner cannot be decompressed using `FORMAT_AUTO` (see [`LZMADecompressor`](#lzma.LZMADecompressor "lzma.LZMADecompressor")). The *check* argument specifies the type of integrity check to include in the compressed data. This check is used when decompressing, to ensure that the data has not been corrupted. 
Possible values are: * `CHECK_NONE`: No integrity check. This is the default (and the only acceptable value) for `FORMAT_ALONE` and `FORMAT_RAW`. * `CHECK_CRC32`: 32-bit Cyclic Redundancy Check. * `CHECK_CRC64`: 64-bit Cyclic Redundancy Check. This is the default for `FORMAT_XZ`. * `CHECK_SHA256`: 256-bit Secure Hash Algorithm. If the specified check is not supported, an [`LZMAError`](#lzma.LZMAError "lzma.LZMAError") is raised. The compression settings can be specified either as a preset compression level (with the *preset* argument), or in detail as a custom filter chain (with the *filters* argument). The *preset* argument (if provided) should be an integer between `0` and `9` (inclusive), optionally OR-ed with the constant `PRESET_EXTREME`. If neither *preset* nor *filters* are given, the default behavior is to use `PRESET_DEFAULT` (preset level `6`). Higher presets produce smaller output, but make the compression process slower. Note In addition to being more CPU-intensive, compression with higher presets also requires much more memory (and produces output that needs more memory to decompress). With preset `9` for example, the overhead for an [`LZMACompressor`](#lzma.LZMACompressor "lzma.LZMACompressor") object can be as high as 800 MiB. For this reason, it is generally best to stick with the default preset. The *filters* argument (if provided) should be a filter chain specifier. See [Specifying custom filter chains](#filter-chain-specs) for details. `compress(data)` Compress *data* (a [`bytes`](stdtypes#bytes "bytes") object), returning a [`bytes`](stdtypes#bytes "bytes") object containing compressed data for at least part of the input. Some of *data* may be buffered internally, for use in later calls to [`compress()`](#lzma.compress "lzma.compress") and [`flush()`](#lzma.LZMACompressor.flush "lzma.LZMACompressor.flush"). The returned data should be concatenated with the output of any previous calls to [`compress()`](#lzma.compress "lzma.compress"). `flush()` Finish the compression process, returning a [`bytes`](stdtypes#bytes "bytes") object containing any data stored in the compressor’s internal buffers. The compressor cannot be used after this method has been called. `class lzma.LZMADecompressor(format=FORMAT_AUTO, memlimit=None, filters=None)` Create a decompressor object, which can be used to decompress data incrementally. For a more convenient way of decompressing an entire compressed stream at once, see [`decompress()`](#lzma.decompress "lzma.decompress"). The *format* argument specifies the container format that should be used. The default is `FORMAT_AUTO`, which can decompress both `.xz` and `.lzma` files. Other possible values are `FORMAT_XZ`, `FORMAT_ALONE`, and `FORMAT_RAW`. The *memlimit* argument specifies a limit (in bytes) on the amount of memory that the decompressor can use. When this argument is used, decompression will fail with an [`LZMAError`](#lzma.LZMAError "lzma.LZMAError") if it is not possible to decompress the input within the given memory limit. The *filters* argument specifies the filter chain that was used to create the stream being decompressed. This argument is required if *format* is `FORMAT_RAW`, but should not be used for other formats. See [Specifying custom filter chains](#filter-chain-specs) for more information about filter chains. Note This class does not transparently handle inputs containing multiple compressed streams, unlike [`decompress()`](#lzma.decompress "lzma.decompress") and [`LZMAFile`](#lzma.LZMAFile "lzma.LZMAFile"). 
To decompress a multi-stream input with [`LZMADecompressor`](#lzma.LZMADecompressor "lzma.LZMADecompressor"), you must create a new decompressor for each stream. `decompress(data, max_length=-1)` Decompress *data* (a [bytes-like object](../glossary#term-bytes-like-object)), returning uncompressed data as bytes. Some of *data* may be buffered internally, for use in later calls to [`decompress()`](#lzma.decompress "lzma.decompress"). The returned data should be concatenated with the output of any previous calls to [`decompress()`](#lzma.decompress "lzma.decompress"). If *max\_length* is nonnegative, returns at most *max\_length* bytes of decompressed data. If this limit is reached and further output can be produced, the [`needs_input`](#lzma.LZMADecompressor.needs_input "lzma.LZMADecompressor.needs_input") attribute will be set to `False`. In this case, the next call to [`decompress()`](#lzma.LZMADecompressor.decompress "lzma.LZMADecompressor.decompress") may provide *data* as `b''` to obtain more of the output. If all of the input data was decompressed and returned (either because this was less than *max\_length* bytes, or because *max\_length* was negative), the [`needs_input`](#lzma.LZMADecompressor.needs_input "lzma.LZMADecompressor.needs_input") attribute will be set to `True`. Attempting to decompress data after the end of stream is reached raises an `EOFError`. Any data found after the end of the stream is ignored and saved in the [`unused_data`](#lzma.LZMADecompressor.unused_data "lzma.LZMADecompressor.unused_data") attribute. Changed in version 3.5: Added the *max\_length* parameter. `check` The ID of the integrity check used by the input stream. This may be `CHECK_UNKNOWN` until enough of the input has been decoded to determine what integrity check it uses. `eof` `True` if the end-of-stream marker has been reached. `unused_data` Data found after the end of the compressed stream. Before the end of the stream is reached, this will be `b""`. `needs_input` `False` if the [`decompress()`](#lzma.LZMADecompressor.decompress "lzma.LZMADecompressor.decompress") method can provide more decompressed data before requiring new uncompressed input. New in version 3.5. `lzma.compress(data, format=FORMAT_XZ, check=-1, preset=None, filters=None)` Compress *data* (a [`bytes`](stdtypes#bytes "bytes") object), returning the compressed data as a [`bytes`](stdtypes#bytes "bytes") object. See [`LZMACompressor`](#lzma.LZMACompressor "lzma.LZMACompressor") above for a description of the *format*, *check*, *preset* and *filters* arguments. `lzma.decompress(data, format=FORMAT_AUTO, memlimit=None, filters=None)` Decompress *data* (a [`bytes`](stdtypes#bytes "bytes") object), returning the uncompressed data as a [`bytes`](stdtypes#bytes "bytes") object. If *data* is the concatenation of multiple distinct compressed streams, decompress all of these streams, and return the concatenation of the results. See [`LZMADecompressor`](#lzma.LZMADecompressor "lzma.LZMADecompressor") above for a description of the *format*, *memlimit* and *filters* arguments. Miscellaneous ------------- `lzma.is_check_supported(check)` Return `True` if the given integrity check is supported on this system. `CHECK_NONE` and `CHECK_CRC32` are always supported. `CHECK_CRC64` and `CHECK_SHA256` may be unavailable if you are using a version of **liblzma** that was compiled with a limited feature set. 
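For example, one way to pick the strongest check available at runtime (a minimal sketch; the payload is a placeholder):

```
import lzma

# CHECK_CRC32 is always supported; CHECK_SHA256 may have been
# compiled out of liblzma, so probe before using it.
check = (lzma.CHECK_SHA256 if lzma.is_check_supported(lzma.CHECK_SHA256)
         else lzma.CHECK_CRC32)
compressed = lzma.compress(b"payload", check=check)
```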
Specifying custom filter chains ------------------------------- A filter chain specifier is a sequence of dictionaries, where each dictionary contains the ID and options for a single filter. Each dictionary must contain the key `"id"`, and may contain additional keys to specify filter-dependent options. Valid filter IDs are as follows: * Compression filters: + `FILTER_LZMA1` (for use with `FORMAT_ALONE`) + `FILTER_LZMA2` (for use with `FORMAT_XZ` and `FORMAT_RAW`) * Delta filter: + `FILTER_DELTA` * Branch-Call-Jump (BCJ) filters: + `FILTER_X86` + `FILTER_IA64` + `FILTER_ARM` + `FILTER_ARMTHUMB` + `FILTER_POWERPC` + `FILTER_SPARC` A filter chain can consist of up to 4 filters, and cannot be empty. The last filter in the chain must be a compression filter, and any other filters must be delta or BCJ filters. Compression filters support the following options (specified as additional entries in the dictionary representing the filter): * `preset`: A compression preset to use as a source of default values for options that are not specified explicitly. * `dict_size`: Dictionary size in bytes. This should be between 4 KiB and 1.5 GiB (inclusive). * `lc`: Number of literal context bits. * `lp`: Number of literal position bits. The sum `lc + lp` must be at most 4. * `pb`: Number of position bits; must be at most 4. * `mode`: `MODE_FAST` or `MODE_NORMAL`. * `nice_len`: What should be considered a “nice length” for a match. This should be 273 or less. * `mf`: What match finder to use – `MF_HC3`, `MF_HC4`, `MF_BT2`, `MF_BT3`, or `MF_BT4`. * `depth`: Maximum search depth used by match finder. 0 (default) means to select automatically based on other filter options. The delta filter stores the differences between bytes, producing more repetitive input for the compressor in certain circumstances. It supports one option, `dist`. This indicates the distance between bytes to be subtracted. The default is 1, i.e. take the differences between adjacent bytes. The BCJ filters are intended to be applied to machine code. They convert relative branches, calls and jumps in the code to use absolute addressing, with the aim of increasing the redundancy that can be exploited by the compressor. These filters support one option, `start_offset`. This specifies the address that should be mapped to the beginning of the input data. The default is 0. 
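Because a `FORMAT_RAW` stream carries no header, the same chain must be supplied on both the compression and decompression side. A minimal sketch (the data and filter options here are placeholders):

```
import lzma

filters = [
    {"id": lzma.FILTER_DELTA, "dist": 4},    # hypothetical delta distance
    {"id": lzma.FILTER_LZMA2, "preset": 6},  # compression filter comes last
]

raw = lzma.compress(b"some data", format=lzma.FORMAT_RAW, filters=filters)
assert lzma.decompress(raw, format=lzma.FORMAT_RAW,
                       filters=filters) == b"some data"
```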
Examples -------- Reading in a compressed file: ``` import lzma with lzma.open("file.xz") as f: file_content = f.read() ``` Creating a compressed file: ``` import lzma data = b"Insert Data Here" with lzma.open("file.xz", "w") as f: f.write(data) ``` Compressing data in memory: ``` import lzma data_in = b"Insert Data Here" data_out = lzma.compress(data_in) ``` Incremental compression: ``` import lzma lzc = lzma.LZMACompressor() out1 = lzc.compress(b"Some data\n") out2 = lzc.compress(b"Another piece of data\n") out3 = lzc.compress(b"Even more data\n") out4 = lzc.flush() # Concatenate all the partial results: result = b"".join([out1, out2, out3, out4]) ``` Writing compressed data to an already-open file: ``` import lzma with open("file.xz", "wb") as f: f.write(b"This data will not be compressed\n") with lzma.open(f, "w") as lzf: lzf.write(b"This *will* be compressed\n") f.write(b"Not compressed\n") ``` Creating a compressed file using a custom filter chain: ``` import lzma my_filters = [ {"id": lzma.FILTER_DELTA, "dist": 5}, {"id": lzma.FILTER_LZMA2, "preset": 7 | lzma.PRESET_EXTREME}, ] with lzma.open("file.xz", "w", filters=my_filters) as f: f.write(b"blah blah blah") ``` python Futures Futures ======= **Source code:** [Lib/asyncio/futures.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/futures.py), [Lib/asyncio/base\_futures.py](https://github.com/python/cpython/tree/3.9/Lib/asyncio/base_futures.py) *Future* objects are used to bridge **low-level callback-based code** with high-level async/await code. Future Functions ---------------- `asyncio.isfuture(obj)` Return `True` if *obj* is either of: * an instance of [`asyncio.Future`](#asyncio.Future "asyncio.Future"), * an instance of [`asyncio.Task`](asyncio-task#asyncio.Task "asyncio.Task"), * a Future-like object with a `_asyncio_future_blocking` attribute. New in version 3.5. `asyncio.ensure_future(obj, *, loop=None)` Return: * *obj* argument as is, if *obj* is a [`Future`](#asyncio.Future "asyncio.Future"), a [`Task`](asyncio-task#asyncio.Task "asyncio.Task"), or a Future-like object ([`isfuture()`](#asyncio.isfuture "asyncio.isfuture") is used for the test.) * a [`Task`](asyncio-task#asyncio.Task "asyncio.Task") object wrapping *obj*, if *obj* is a coroutine ([`iscoroutine()`](asyncio-task#asyncio.iscoroutine "asyncio.iscoroutine") is used for the test); in this case the coroutine will be scheduled by `ensure_future()`. * a [`Task`](asyncio-task#asyncio.Task "asyncio.Task") object that would await on *obj*, if *obj* is an awaitable ([`inspect.isawaitable()`](inspect#inspect.isawaitable "inspect.isawaitable") is used for the test.) If *obj* is neither of the above a [`TypeError`](exceptions#TypeError "TypeError") is raised. Important See also the [`create_task()`](asyncio-task#asyncio.create_task "asyncio.create_task") function which is the preferred way for creating new Tasks. Save a reference to the result of this function, to avoid a task disappearing mid execution. Changed in version 3.5.1: The function accepts any [awaitable](../glossary#term-awaitable) object. `asyncio.wrap_future(future, *, loop=None)` Wrap a [`concurrent.futures.Future`](concurrent.futures#concurrent.futures.Future "concurrent.futures.Future") object in a [`asyncio.Future`](#asyncio.Future "asyncio.Future") object. Future Object ------------- `class asyncio.Future(*, loop=None)` A Future represents an eventual result of an asynchronous operation. Not thread-safe. Future is an [awaitable](../glossary#term-awaitable) object. 
Coroutines can await on Future objects until they either have a result or an exception set, or until they are cancelled. Typically Futures are used to enable low-level callback-based code (e.g. in protocols implemented using asyncio [transports](asyncio-protocol#asyncio-transports-protocols)) to interoperate with high-level async/await code. The rule of thumb is to never expose Future objects in user-facing APIs, and the recommended way to create a Future object is to call [`loop.create_future()`](asyncio-eventloop#asyncio.loop.create_future "asyncio.loop.create_future"). This way alternative event loop implementations can inject their own optimized implementations of a Future object. Changed in version 3.7: Added support for the [`contextvars`](contextvars#module-contextvars "contextvars: Context Variables") module. `result()` Return the result of the Future. If the Future is *done* and has a result set by the [`set_result()`](#asyncio.Future.set_result "asyncio.Future.set_result") method, the result value is returned. If the Future is *done* and has an exception set by the [`set_exception()`](#asyncio.Future.set_exception "asyncio.Future.set_exception") method, this method raises the exception. If the Future has been *cancelled*, this method raises a [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") exception. If the Future’s result isn’t yet available, this method raises an [`InvalidStateError`](asyncio-exceptions#asyncio.InvalidStateError "asyncio.InvalidStateError") exception. `set_result(result)` Mark the Future as *done* and set its result. Raises an [`InvalidStateError`](asyncio-exceptions#asyncio.InvalidStateError "asyncio.InvalidStateError") if the Future is already *done*. `set_exception(exception)` Mark the Future as *done* and set an exception. Raises an [`InvalidStateError`](asyncio-exceptions#asyncio.InvalidStateError "asyncio.InvalidStateError") if the Future is already *done*. `done()` Return `True` if the Future is *done*. A Future is *done* if it was *cancelled* or if it has a result or an exception set with [`set_result()`](#asyncio.Future.set_result "asyncio.Future.set_result") or [`set_exception()`](#asyncio.Future.set_exception "asyncio.Future.set_exception") calls. `cancelled()` Return `True` if the Future was *cancelled*. The method is usually used to check if a Future is not *cancelled* before setting a result or an exception for it: ``` if not fut.cancelled(): fut.set_result(42) ``` `add_done_callback(callback, *, context=None)` Add a callback to be run when the Future is *done*. The *callback* is called with the Future object as its only argument. If the Future is already *done* when this method is called, the callback is scheduled with [`loop.call_soon()`](asyncio-eventloop#asyncio.loop.call_soon "asyncio.loop.call_soon"). An optional keyword-only *context* argument allows specifying a custom [`contextvars.Context`](contextvars#contextvars.Context "contextvars.Context") for the *callback* to run in. The current context is used when no *context* is provided. [`functools.partial()`](functools#functools.partial "functools.partial") can be used to pass parameters to the callback, e.g.: ``` # Call 'print("Future:", fut)' when "fut" is done. fut.add_done_callback( functools.partial(print, "Future:")) ``` Changed in version 3.7: The *context* keyword-only parameter was added. See [**PEP 567**](https://www.python.org/dev/peps/pep-0567) for more details. 
`remove_done_callback(callback)` Remove *callback* from the callbacks list. Returns the number of callbacks removed, which is typically 1, unless a callback was added more than once. `cancel(msg=None)` Cancel the Future and schedule callbacks. If the Future is already *done* or *cancelled*, return `False`. Otherwise, change the Future’s state to *cancelled*, schedule the callbacks, and return `True`. Changed in version 3.9: Added the `msg` parameter. `exception()` Return the exception that was set on this Future. The exception (or `None` if no exception was set) is returned only if the Future is *done*. If the Future has been *cancelled*, this method raises a [`CancelledError`](asyncio-exceptions#asyncio.CancelledError "asyncio.CancelledError") exception. If the Future isn’t *done* yet, this method raises an [`InvalidStateError`](asyncio-exceptions#asyncio.InvalidStateError "asyncio.InvalidStateError") exception. `get_loop()` Return the event loop the Future object is bound to. New in version 3.7. This example creates a Future object, creates and schedules an asynchronous Task to set result for the Future, and waits until the Future has a result: ``` async def set_after(fut, delay, value): # Sleep for *delay* seconds. await asyncio.sleep(delay) # Set *value* as a result of *fut* Future. fut.set_result(value) async def main(): # Get the current event loop. loop = asyncio.get_running_loop() # Create a new Future object. fut = loop.create_future() # Run "set_after()" coroutine in a parallel Task. # We are using the low-level "loop.create_task()" API here because # we already have a reference to the event loop at hand. # Otherwise we could have just used "asyncio.create_task()". loop.create_task( set_after(fut, 1, '... world')) print('hello ...') # Wait until *fut* has a result (1 second) and print it. print(await fut) asyncio.run(main()) ``` Important The Future object was designed to mimic [`concurrent.futures.Future`](concurrent.futures#concurrent.futures.Future "concurrent.futures.Future"). Key differences include: * unlike asyncio Futures, [`concurrent.futures.Future`](concurrent.futures#concurrent.futures.Future "concurrent.futures.Future") instances cannot be awaited. * [`asyncio.Future.result()`](#asyncio.Future.result "asyncio.Future.result") and [`asyncio.Future.exception()`](#asyncio.Future.exception "asyncio.Future.exception") do not accept the *timeout* argument. * [`asyncio.Future.result()`](#asyncio.Future.result "asyncio.Future.result") and [`asyncio.Future.exception()`](#asyncio.Future.exception "asyncio.Future.exception") raise an [`InvalidStateError`](asyncio-exceptions#asyncio.InvalidStateError "asyncio.InvalidStateError") exception when the Future is not *done*. * Callbacks registered with [`asyncio.Future.add_done_callback()`](#asyncio.Future.add_done_callback "asyncio.Future.add_done_callback") are not called immediately. They are scheduled with [`loop.call_soon()`](asyncio-eventloop#asyncio.loop.call_soon "asyncio.loop.call_soon") instead. * asyncio Future is not compatible with the [`concurrent.futures.wait()`](concurrent.futures#concurrent.futures.wait "concurrent.futures.wait") and [`concurrent.futures.as_completed()`](concurrent.futures#concurrent.futures.as_completed "concurrent.futures.as_completed") functions. * [`asyncio.Future.cancel()`](#asyncio.Future.cancel "asyncio.Future.cancel") accepts an optional `msg` argument, but `concurrent.futures.cancel()` does not.
python socketserver — A framework for network servers socketserver — A framework for network servers ============================================== **Source code:** [Lib/socketserver.py](https://github.com/python/cpython/tree/3.9/Lib/socketserver.py) The [`socketserver`](#module-socketserver "socketserver: A framework for network servers.") module simplifies the task of writing network servers. There are four basic concrete server classes: `class socketserver.TCPServer(server_address, RequestHandlerClass, bind_and_activate=True)` This uses the Internet TCP protocol, which provides for continuous streams of data between the client and server. If *bind\_and\_activate* is true, the constructor automatically attempts to invoke [`server_bind()`](#socketserver.BaseServer.server_bind "socketserver.BaseServer.server_bind") and [`server_activate()`](#socketserver.BaseServer.server_activate "socketserver.BaseServer.server_activate"). The other parameters are passed to the [`BaseServer`](#socketserver.BaseServer "socketserver.BaseServer") base class. `class socketserver.UDPServer(server_address, RequestHandlerClass, bind_and_activate=True)` This uses datagrams, which are discrete packets of information that may arrive out of order or be lost while in transit. The parameters are the same as for [`TCPServer`](#socketserver.TCPServer "socketserver.TCPServer"). `class socketserver.UnixStreamServer(server_address, RequestHandlerClass, bind_and_activate=True)` `class socketserver.UnixDatagramServer(server_address, RequestHandlerClass, bind_and_activate=True)` These more infrequently used classes are similar to the TCP and UDP classes, but use Unix domain sockets; they’re not available on non-Unix platforms. The parameters are the same as for [`TCPServer`](#socketserver.TCPServer "socketserver.TCPServer"). These four classes process requests *synchronously*; each request must be completed before the next request can be started. This isn’t suitable if each request takes a long time to complete, because it requires a lot of computation, or because it returns a lot of data which the client is slow to process. The solution is to create a separate process or thread to handle each request; the [`ForkingMixIn`](#socketserver.ForkingMixIn "socketserver.ForkingMixIn") and [`ThreadingMixIn`](#socketserver.ThreadingMixIn "socketserver.ThreadingMixIn") mix-in classes can be used to support asynchronous behaviour. Creating a server requires several steps. First, you must create a request handler class by subclassing the [`BaseRequestHandler`](#socketserver.BaseRequestHandler "socketserver.BaseRequestHandler") class and overriding its [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method; this method will process incoming requests. Second, you must instantiate one of the server classes, passing it the server’s address and the request handler class. It is recommended to use the server in a [`with`](../reference/compound_stmts#with) statement. Then call the [`handle_request()`](#socketserver.BaseServer.handle_request "socketserver.BaseServer.handle_request") or [`serve_forever()`](#socketserver.BaseServer.serve_forever "socketserver.BaseServer.serve_forever") method of the server object to process one or many requests. Finally, call [`server_close()`](#socketserver.BaseServer.server_close "socketserver.BaseServer.server_close") to close the socket (unless you used a `with` statement). 
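A minimal sketch of these steps, serving a single request (the handler, host and port are placeholders; fuller examples appear in the Examples section below):

```
import socketserver

class OneShotHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # A new instance is created per request; reply and return.
        self.request.sendall(b"goodbye\n")

with socketserver.TCPServer(("localhost", 9999), OneShotHandler) as server:
    server.handle_request()  # process exactly one request, then return
```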
When inheriting from [`ThreadingMixIn`](#socketserver.ThreadingMixIn "socketserver.ThreadingMixIn") for threaded connection behavior, you should explicitly declare how you want your threads to behave on an abrupt shutdown. The [`ThreadingMixIn`](#socketserver.ThreadingMixIn "socketserver.ThreadingMixIn") class defines an attribute *daemon\_threads*, which indicates whether or not the server should wait for thread termination. You should set the flag explicitly if you would like threads to behave autonomously; the default is [`False`](constants#False "False"), meaning that Python will not exit until all threads created by [`ThreadingMixIn`](#socketserver.ThreadingMixIn "socketserver.ThreadingMixIn") have exited. Server classes have the same external methods and attributes, no matter what network protocol they use. Server Creation Notes --------------------- There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: ``` +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ ``` Note that [`UnixDatagramServer`](#socketserver.UnixDatagramServer "socketserver.UnixDatagramServer") derives from [`UDPServer`](#socketserver.UDPServer "socketserver.UDPServer"), not from [`UnixStreamServer`](#socketserver.UnixStreamServer "socketserver.UnixStreamServer") — the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both Unix server classes. `class socketserver.ForkingMixIn` `class socketserver.ThreadingMixIn` Forking and threading versions of each type of server can be created using these mix-in classes. For instance, [`ThreadingUDPServer`](#socketserver.ThreadingUDPServer "socketserver.ThreadingUDPServer") is created as follows: ``` class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass ``` The mix-in class comes first, since it overrides a method defined in [`UDPServer`](#socketserver.UDPServer "socketserver.UDPServer"). Setting the various attributes also changes the behavior of the underlying server mechanism. [`ForkingMixIn`](#socketserver.ForkingMixIn "socketserver.ForkingMixIn") and the Forking classes mentioned below are only available on POSIX platforms that support [`fork()`](os#os.fork "os.fork"). `socketserver.ForkingMixIn.server_close()` waits until all child processes complete, except if the `socketserver.ForkingMixIn.block_on_close` attribute is false. `socketserver.ThreadingMixIn.server_close()` waits until all non-daemon threads complete, except if the `socketserver.ThreadingMixIn.block_on_close` attribute is false. Set `ThreadingMixIn.daemon_threads` to `True` to use daemonic threads and avoid waiting until threads complete. Changed in version 3.7: `socketserver.ForkingMixIn.server_close()` and `socketserver.ThreadingMixIn.server_close()` now wait until all child processes and non-daemonic threads complete. Added a new `socketserver.ForkingMixIn.block_on_close` class attribute to opt in to the pre-3.7 behaviour. `class socketserver.ForkingTCPServer` `class socketserver.ForkingUDPServer` `class socketserver.ThreadingTCPServer` `class socketserver.ThreadingUDPServer` These classes are pre-defined using the mix-in classes. 
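For instance, a hand-rolled variant of `ThreadingTCPServer` that opts into daemonic threads (a sketch; the class and handler names are placeholders):

```
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Echo the request bytes back to the client.
        self.request.sendall(self.request.recv(1024))

class DaemonThreadingTCPServer(socketserver.ThreadingMixIn,
                               socketserver.TCPServer):
    # Do not keep the process alive waiting for in-flight handlers.
    daemon_threads = True
```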
To implement a service, you must derive a class from [`BaseRequestHandler`](#socketserver.BaseRequestHandler "socketserver.BaseRequestHandler") and redefine its [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the handler subclasses [`StreamRequestHandler`](#socketserver.StreamRequestHandler "socketserver.StreamRequestHandler") or [`DatagramRequestHandler`](#socketserver.DatagramRequestHandler "socketserver.DatagramRequestHandler"). Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by different requests, since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child. In this case, you can use a threading server, but you will probably have to use locks to protect the integrity of the shared data. On the other hand, if you are building an HTTP server where all data is stored externally (for instance, in the file system), a synchronous class will essentially render the service “deaf” while one request is being handled – which may be for a very long time if a client is slow to receive all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. This can be implemented by using a synchronous server and doing an explicit fork in the request handler class [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor [`fork()`](os#os.fork "os.fork") (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use [`selectors`](selectors#module-selectors "selectors: High-level I/O multiplexing.") to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). See [`asyncore`](asyncore#module-asyncore "asyncore: A base class for developing asynchronous socket handling services. (deprecated)") for another way to manage this. Server Objects -------------- `class socketserver.BaseServer(server_address, RequestHandlerClass)` This is the superclass of all Server objects in the module. It defines the interface, given below, but does not implement most of the methods, which is done in subclasses. The two parameters are stored in the respective [`server_address`](#socketserver.BaseServer.server_address "socketserver.BaseServer.server_address") and [`RequestHandlerClass`](#socketserver.BaseServer.RequestHandlerClass "socketserver.BaseServer.RequestHandlerClass") attributes. `fileno()` Return an integer file descriptor for the socket on which the server is listening. This function is most commonly passed to [`selectors`](selectors#module-selectors "selectors: High-level I/O multiplexing."), to allow monitoring multiple servers in the same process. 
`handle_request()` Process a single request. This function calls the following methods in order: [`get_request()`](#socketserver.BaseServer.get_request "socketserver.BaseServer.get_request"), [`verify_request()`](#socketserver.BaseServer.verify_request "socketserver.BaseServer.verify_request"), and [`process_request()`](#socketserver.BaseServer.process_request "socketserver.BaseServer.process_request"). If the user-provided [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method of the handler class raises an exception, the server’s [`handle_error()`](#socketserver.BaseServer.handle_error "socketserver.BaseServer.handle_error") method will be called. If no request is received within [`timeout`](#socketserver.BaseServer.timeout "socketserver.BaseServer.timeout") seconds, [`handle_timeout()`](#socketserver.BaseServer.handle_timeout "socketserver.BaseServer.handle_timeout") will be called and [`handle_request()`](#socketserver.BaseServer.handle_request "socketserver.BaseServer.handle_request") will return. `serve_forever(poll_interval=0.5)` Handle requests until an explicit [`shutdown()`](#socketserver.BaseServer.shutdown "socketserver.BaseServer.shutdown") request. Poll for shutdown every *poll\_interval* seconds. Ignores the [`timeout`](#socketserver.BaseServer.timeout "socketserver.BaseServer.timeout") attribute. It also calls [`service_actions()`](#socketserver.BaseServer.service_actions "socketserver.BaseServer.service_actions"), which may be used by a subclass or mixin to provide actions specific to a given service. For example, the [`ForkingMixIn`](#socketserver.ForkingMixIn "socketserver.ForkingMixIn") class uses [`service_actions()`](#socketserver.BaseServer.service_actions "socketserver.BaseServer.service_actions") to clean up zombie child processes. Changed in version 3.3: Added `service_actions` call to the `serve_forever` method. `service_actions()` This is called in the [`serve_forever()`](#socketserver.BaseServer.serve_forever "socketserver.BaseServer.serve_forever") loop. This method can be overridden by subclasses or mixin classes to perform actions specific to a given service, such as cleanup actions. New in version 3.3. `shutdown()` Tell the [`serve_forever()`](#socketserver.BaseServer.serve_forever "socketserver.BaseServer.serve_forever") loop to stop and wait until it does. [`shutdown()`](#socketserver.BaseServer.shutdown "socketserver.BaseServer.shutdown") must be called while [`serve_forever()`](#socketserver.BaseServer.serve_forever "socketserver.BaseServer.serve_forever") is running in a different thread otherwise it will deadlock. `server_close()` Clean up the server. May be overridden. `address_family` The family of protocols to which the server’s socket belongs. Common examples are [`socket.AF_INET`](socket#socket.AF_INET "socket.AF_INET") and [`socket.AF_UNIX`](socket#socket.AF_UNIX "socket.AF_UNIX"). `RequestHandlerClass` The user-provided request handler class; an instance of this class is created for each request. `server_address` The address on which the server is listening. The format of addresses varies depending on the protocol family; see the documentation for the [`socket`](socket#module-socket "socket: Low-level networking interface.") module for details. For Internet protocols, this is a tuple containing a string giving the address, and an integer port number: `('127.0.0.1', 80)`, for example. `socket` The socket object on which the server will listen for incoming requests. 
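Because `shutdown()` deadlocks when called from the thread running `serve_forever()`, a common pattern is to run the loop in a worker thread (a sketch; the handler is a placeholder, and port `0` binds an ephemeral port):

```
import socketserver
import threading

class Handler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(b"hello\n")

with socketserver.TCPServer(("localhost", 0), Handler) as server:
    t = threading.Thread(target=server.serve_forever)
    t.start()
    # ... clients may connect to server.server_address here ...
    server.shutdown()  # called from a different thread, so no deadlock
    t.join()
```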
The server classes support the following class variables: `allow_reuse_address` Whether the server will allow the reuse of an address. This defaults to [`False`](constants#False "False"), and can be set in subclasses to change the policy. `request_queue_size` The size of the request queue. If it takes a long time to process a single request, any requests that arrive while the server is busy are placed into a queue, up to [`request_queue_size`](#socketserver.BaseServer.request_queue_size "socketserver.BaseServer.request_queue_size") requests. Once the queue is full, further requests from clients will get a “Connection denied” error. The default value is usually 5, but this can be overridden by subclasses. `socket_type` The type of socket used by the server; [`socket.SOCK_STREAM`](socket#socket.SOCK_STREAM "socket.SOCK_STREAM") and [`socket.SOCK_DGRAM`](socket#socket.SOCK_DGRAM "socket.SOCK_DGRAM") are two common values. `timeout` Timeout duration, measured in seconds, or [`None`](constants#None "None") if no timeout is desired. If [`handle_request()`](#socketserver.BaseServer.handle_request "socketserver.BaseServer.handle_request") receives no incoming requests within the timeout period, the [`handle_timeout()`](#socketserver.BaseServer.handle_timeout "socketserver.BaseServer.handle_timeout") method is called. There are various server methods that can be overridden by subclasses of base server classes like [`TCPServer`](#socketserver.TCPServer "socketserver.TCPServer"); these methods aren’t useful to external users of the server object. `finish_request(request, client_address)` Actually processes the request by instantiating [`RequestHandlerClass`](#socketserver.BaseServer.RequestHandlerClass "socketserver.BaseServer.RequestHandlerClass") and calling its [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method. `get_request()` Must accept a request from the socket, and return a 2-tuple containing the *new* socket object to be used to communicate with the client, and the client’s address. `handle_error(request, client_address)` This function is called if the [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method of a [`RequestHandlerClass`](#socketserver.BaseServer.RequestHandlerClass "socketserver.BaseServer.RequestHandlerClass") instance raises an exception. The default action is to print the traceback to standard error and continue handling further requests. Changed in version 3.6: Now only called for exceptions derived from the [`Exception`](exceptions#Exception "Exception") class. `handle_timeout()` This function is called when the [`timeout`](#socketserver.BaseServer.timeout "socketserver.BaseServer.timeout") attribute has been set to a value other than [`None`](constants#None "None") and the timeout period has passed with no requests being received. The default action for forking servers is to collect the status of any child processes that have exited, while in threading servers this method does nothing. `process_request(request, client_address)` Calls [`finish_request()`](#socketserver.BaseServer.finish_request "socketserver.BaseServer.finish_request") to create an instance of the [`RequestHandlerClass`](#socketserver.BaseServer.RequestHandlerClass "socketserver.BaseServer.RequestHandlerClass"). 
If desired, this function can create a new process or thread to handle the request; the [`ForkingMixIn`](#socketserver.ForkingMixIn "socketserver.ForkingMixIn") and [`ThreadingMixIn`](#socketserver.ThreadingMixIn "socketserver.ThreadingMixIn") classes do this. `server_activate()` Called by the server’s constructor to activate the server. The default behavior for a TCP server just invokes [`listen()`](socket#socket.socket.listen "socket.socket.listen") on the server’s socket. May be overridden. `server_bind()` Called by the server’s constructor to bind the socket to the desired address. May be overridden. `verify_request(request, client_address)` Must return a Boolean value; if the value is [`True`](constants#True "True"), the request will be processed, and if it’s [`False`](constants#False "False"), the request will be denied. This function can be overridden to implement access controls for a server. The default implementation always returns [`True`](constants#True "True"). Changed in version 3.6: Support for the [context manager](../glossary#term-context-manager) protocol was added. Exiting the context manager is equivalent to calling [`server_close()`](#socketserver.BaseServer.server_close "socketserver.BaseServer.server_close"). Request Handler Objects ----------------------- `class socketserver.BaseRequestHandler` This is the superclass of all request handler objects. It defines the interface, given below. A concrete request handler subclass must define a new [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method, and can override any of the other methods. A new instance of the subclass is created for each request. `setup()` Called before the [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method to perform any initialization actions required. The default implementation does nothing. `handle()` This function must do all the work required to service a request. The default implementation does nothing. Several instance attributes are available to it; the request is available as `self.request`; the client address as `self.client_address`; and the server instance as `self.server`, in case it needs access to per-server information. The type of `self.request` is different for datagram or stream services. For stream services, `self.request` is a socket object; for datagram services, `self.request` is a pair of string and socket. `finish()` Called after the [`handle()`](#socketserver.BaseRequestHandler.handle "socketserver.BaseRequestHandler.handle") method to perform any clean-up actions required. The default implementation does nothing. If [`setup()`](#socketserver.BaseRequestHandler.setup "socketserver.BaseRequestHandler.setup") raises an exception, this function will not be called. `class socketserver.StreamRequestHandler` `class socketserver.DatagramRequestHandler` These [`BaseRequestHandler`](#socketserver.BaseRequestHandler "socketserver.BaseRequestHandler") subclasses override the [`setup()`](#socketserver.BaseRequestHandler.setup "socketserver.BaseRequestHandler.setup") and [`finish()`](#socketserver.BaseRequestHandler.finish "socketserver.BaseRequestHandler.finish") methods, and provide `self.rfile` and `self.wfile` attributes. The `self.rfile` and `self.wfile` attributes can be read or written, respectively, to get the request data or return data to the client. 
The `rfile` attributes of both classes support the [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase") readable interface, and `DatagramRequestHandler.wfile` supports the [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase") writable interface. Changed in version 3.6: `StreamRequestHandler.wfile` also supports the [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase") writable interface.

Examples
--------

### [`socketserver.TCPServer`](#socketserver.TCPServer "socketserver.TCPServer") Example

This is the server side:

```
import socketserver

class MyTCPHandler(socketserver.BaseRequestHandler):
    """
    The request handler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """

    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print("{} wrote:".format(self.client_address[0]))
        print(self.data)
        # just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())

if __name__ == "__main__":
    HOST, PORT = "localhost", 9999

    # Create the server, binding to localhost on port 9999
    with socketserver.TCPServer((HOST, PORT), MyTCPHandler) as server:
        # Activate the server; this will keep running until you
        # interrupt the program with Ctrl-C
        server.serve_forever()
```

An alternative request handler class that makes use of streams (file-like objects that simplify communication by providing the standard file interface):

```
class MyTCPHandler(socketserver.StreamRequestHandler):

    def handle(self):
        # self.rfile is a file-like object created by the handler;
        # we can now use e.g. readline() instead of raw recv() calls
        self.data = self.rfile.readline().strip()
        print("{} wrote:".format(self.client_address[0]))
        print(self.data)
        # Likewise, self.wfile is a file-like object used to write back
        # to the client
        self.wfile.write(self.data.upper())
```

The difference is that the `readline()` call in the second handler will call `recv()` multiple times until it encounters a newline character, while the single `recv()` call in the first handler will just return what has been sent from the client in one `sendall()` call.

This is the client side:

```
import socket
import sys

HOST, PORT = "localhost", 9999
data = " ".join(sys.argv[1:])

# Create a socket (SOCK_STREAM means a TCP socket)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    # Connect to server and send data
    sock.connect((HOST, PORT))
    sock.sendall(bytes(data + "\n", "utf-8"))

    # Receive data from the server and shut down
    received = str(sock.recv(1024), "utf-8")

print("Sent: {}".format(data))
print("Received: {}".format(received))
```

The output of the example should look something like this:

Server:

```
$ python TCPServer.py
127.0.0.1 wrote:
b'hello world with TCP'
127.0.0.1 wrote:
b'python is nice'
```

Client:

```
$ python TCPClient.py hello world with TCP
Sent: hello world with TCP
Received: HELLO WORLD WITH TCP
$ python TCPClient.py python is nice
Sent: python is nice
Received: PYTHON IS NICE
```

### [`socketserver.UDPServer`](#socketserver.UDPServer "socketserver.UDPServer") Example

This is the server side:

```
import socketserver

class MyUDPHandler(socketserver.BaseRequestHandler):
    """
    This class works similar to the TCP handler class, except that
    self.request consists of a pair of data and client socket, and since
    there is no connection the client address must be given explicitly
    when sending data back via sendto().
""" def handle(self): data = self.request[0].strip() socket = self.request[1] print("{} wrote:".format(self.client_address[0])) print(data) socket.sendto(data.upper(), self.client_address) if __name__ == "__main__": HOST, PORT = "localhost", 9999 with socketserver.UDPServer((HOST, PORT), MyUDPHandler) as server: server.serve_forever() ``` This is the client side: ``` import socket import sys HOST, PORT = "localhost", 9999 data = " ".join(sys.argv[1:]) # SOCK_DGRAM is the socket type to use for UDP sockets sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # As you can see, there is no connect() call; UDP has no connections. # Instead, data is directly sent to the recipient via sendto(). sock.sendto(bytes(data + "\n", "utf-8"), (HOST, PORT)) received = str(sock.recv(1024), "utf-8") print("Sent: {}".format(data)) print("Received: {}".format(received)) ``` The output of the example should look exactly like for the TCP server example. ### Asynchronous Mixins To build asynchronous handlers, use the [`ThreadingMixIn`](#socketserver.ThreadingMixIn "socketserver.ThreadingMixIn") and [`ForkingMixIn`](#socketserver.ForkingMixIn "socketserver.ForkingMixIn") classes. An example for the [`ThreadingMixIn`](#socketserver.ThreadingMixIn "socketserver.ThreadingMixIn") class: ``` import socket import threading import socketserver class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler): def handle(self): data = str(self.request.recv(1024), 'ascii') cur_thread = threading.current_thread() response = bytes("{}: {}".format(cur_thread.name, data), 'ascii') self.request.sendall(response) class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer): pass def client(ip, port, message): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: sock.connect((ip, port)) sock.sendall(bytes(message, 'ascii')) response = str(sock.recv(1024), 'ascii') print("Received: {}".format(response)) if __name__ == "__main__": # Port 0 means to select an arbitrary unused port HOST, PORT = "localhost", 0 server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler) with server: ip, port = server.server_address # Start a thread with the server -- that thread will then start one # more thread for each request server_thread = threading.Thread(target=server.serve_forever) # Exit the server thread when the main thread terminates server_thread.daemon = True server_thread.start() print("Server loop running in thread:", server_thread.name) client(ip, port, "Hello World 1") client(ip, port, "Hello World 2") client(ip, port, "Hello World 3") server.shutdown() ``` The output of the example should look something like this: ``` $ python ThreadedTCPServer.py Server loop running in thread: Thread-1 Received: Thread-2: Hello World 1 Received: Thread-3: Hello World 2 Received: Thread-4: Hello World 3 ``` The [`ForkingMixIn`](#socketserver.ForkingMixIn "socketserver.ForkingMixIn") class is used in the same way, except that the server will spawn a new process for each request. Available only on POSIX platforms that support [`fork()`](os#os.fork "os.fork").
python _thread — Low-level threading API \_thread — Low-level threading API
==================================

This module provides low-level primitives for working with multiple threads (also called *light-weight processes* or *tasks*) — multiple threads of control sharing their global data space. For synchronization, simple locks (also called *mutexes* or *binary semaphores*) are provided. The [`threading`](threading#module-threading "threading: Thread-based parallelism.") module provides an easier-to-use and higher-level threading API built on top of this module. Changed in version 3.7: This module used to be optional; it is now always available. This module defines the following constants and functions: `exception _thread.error` Raised on thread-specific errors. Changed in version 3.3: This is now a synonym of the built-in [`RuntimeError`](exceptions#RuntimeError "RuntimeError"). `_thread.LockType` This is the type of lock objects. `_thread.start_new_thread(function, args[, kwargs])` Start a new thread and return its identifier. The thread executes the function *function* with the argument list *args* (which must be a tuple). The optional *kwargs* argument specifies a dictionary of keyword arguments. When the function returns, the thread silently exits. When the function terminates with an unhandled exception, [`sys.unraisablehook()`](sys#sys.unraisablehook "sys.unraisablehook") is called to handle the exception. The *object* attribute of the hook argument is *function*. By default, a stack trace is printed and then the thread exits (but other threads continue to run). When the function raises a [`SystemExit`](exceptions#SystemExit "SystemExit") exception, it is silently ignored. Changed in version 3.8: [`sys.unraisablehook()`](sys#sys.unraisablehook "sys.unraisablehook") is now used to handle unhandled exceptions. `_thread.interrupt_main()` Simulate the effect of a [`signal.SIGINT`](signal#signal.SIGINT "signal.SIGINT") signal arriving in the main thread. A thread can use this function to interrupt the main thread. If [`signal.SIGINT`](signal#signal.SIGINT "signal.SIGINT") isn’t handled by Python (it was set to [`signal.SIG_DFL`](signal#signal.SIG_DFL "signal.SIG_DFL") or [`signal.SIG_IGN`](signal#signal.SIG_IGN "signal.SIG_IGN")), this function does nothing. `_thread.exit()` Raise the [`SystemExit`](exceptions#SystemExit "SystemExit") exception. When not caught, this will cause the thread to exit silently. `_thread.allocate_lock()` Return a new lock object. Methods of locks are described below. The lock is initially unlocked. `_thread.get_ident()` Return the ‘thread identifier’ of the current thread. This is a nonzero integer. Its value has no direct meaning; it is intended as a magic cookie to be used e.g. to index a dictionary of thread-specific data. Thread identifiers may be recycled when a thread exits and another thread is created. `_thread.get_native_id()` Return the native integral Thread ID of the current thread assigned by the kernel. This is a non-negative integer. Its value may be used to uniquely identify this particular thread system-wide (until the thread terminates, after which the value may be recycled by the OS). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows, FreeBSD, Linux, macOS, OpenBSD, NetBSD, AIX. New in version 3.8. `_thread.stack_size([size])` Return the thread stack size used when creating new threads.
The optional *size* argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive integer value of at least 32,768 (32 KiB). If *size* is not specified, 0 is used. If changing the thread stack size is unsupported, a [`RuntimeError`](exceptions#RuntimeError "RuntimeError") is raised. If the specified stack size is invalid, a [`ValueError`](exceptions#ValueError "ValueError") is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size; platform documentation should be referred to for more information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). [Availability](https://docs.python.org/3.9/library/intro.html#availability): Windows, systems with POSIX threads. `_thread.TIMEOUT_MAX` The maximum value allowed for the *timeout* parameter of `Lock.acquire()`. Specifying a timeout greater than this value will raise an [`OverflowError`](exceptions#OverflowError "OverflowError"). New in version 3.2. Lock objects have the following methods: `lock.acquire(waitflag=1, timeout=-1)` Without any optional argument, this method acquires the lock unconditionally, if necessary waiting until it is released by another thread (only one thread at a time can acquire a lock — that’s their reason for existence). If the integer *waitflag* argument is present, the action depends on its value: if it is zero, the lock is only acquired if it can be acquired immediately without waiting, while if it is nonzero, the lock is acquired unconditionally as above. If the floating-point *timeout* argument is present and positive, it specifies the maximum wait time in seconds before returning. A negative *timeout* argument specifies an unbounded wait. You cannot specify a *timeout* if *waitflag* is zero. The return value is `True` if the lock is acquired successfully, `False` if not. Changed in version 3.2: The *timeout* parameter is new. Changed in version 3.2: Lock acquires can now be interrupted by signals on POSIX. `lock.release()` Releases the lock. The lock must have been acquired earlier, but not necessarily by the same thread. `lock.locked()` Return the status of the lock: `True` if it has been acquired by some thread, `False` if not. In addition to these methods, lock objects can also be used via the [`with`](../reference/compound_stmts#with) statement, e.g.:

```
import _thread

a_lock = _thread.allocate_lock()

with a_lock:
    print("a_lock is locked while this executes")
```

**Caveats:**

* Threads interact strangely with interrupts: the [`KeyboardInterrupt`](exceptions#KeyboardInterrupt "KeyboardInterrupt") exception will be received by an arbitrary thread. (When the [`signal`](signal#module-signal "signal: Set handlers for asynchronous events.") module is available, interrupts always go to the main thread.)
* Calling [`sys.exit()`](sys#sys.exit "sys.exit") or raising the [`SystemExit`](exceptions#SystemExit "SystemExit") exception is equivalent to calling [`_thread.exit()`](#_thread.exit "_thread.exit").
* It is not possible to interrupt the `acquire()` method on a lock — the [`KeyboardInterrupt`](exceptions#KeyboardInterrupt "KeyboardInterrupt") exception will happen after the lock has been acquired.
* When the main thread exits, it is system defined whether the other threads survive. On most systems, they are killed without executing [`try`](../reference/compound_stmts#try) … [`finally`](../reference/compound_stmts#finally) clauses or executing object destructors.
* When the main thread exits, it does not do any of its usual cleanup (except that [`try`](../reference/compound_stmts#try) … [`finally`](../reference/compound_stmts#finally) clauses are honored), and the standard I/O files are not flushed.

python mailbox — Manipulate mailboxes in various formats mailbox — Manipulate mailboxes in various formats
=================================================

**Source code:** [Lib/mailbox.py](https://github.com/python/cpython/tree/3.9/Lib/mailbox.py)

This module defines two classes, [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") and [`Message`](#mailbox.Message "mailbox.Message"), for accessing and manipulating on-disk mailboxes and the messages they contain. [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") offers a dictionary-like mapping from keys to messages. [`Message`](#mailbox.Message "mailbox.Message") extends the [`email.message`](email.message#module-email.message "email.message: The base class representing email messages.") module’s [`Message`](email.compat32-message#email.message.Message "email.message.Message") class with format-specific state and behavior. Supported mailbox formats are Maildir, mbox, MH, Babyl, and MMDF. See also `Module` [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") Represent and manipulate messages.

Mailbox objects
---------------

`class mailbox.Mailbox` A mailbox, which may be inspected and modified. The [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") class defines an interface and is not intended to be instantiated. Instead, format-specific subclasses should inherit from [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") and your code should instantiate a particular subclass. The [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") interface is dictionary-like, with small keys corresponding to messages. Keys are issued by the [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance with which they will be used and are only meaningful to that [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance. A key continues to identify a message even if the corresponding message is modified, such as by replacing it with another message. Messages may be added to a [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance using the set-like method [`add()`](#mailbox.Mailbox.add "mailbox.Mailbox.add") and removed using a `del` statement or the set-like methods [`remove()`](#mailbox.Mailbox.remove "mailbox.Mailbox.remove") and [`discard()`](#mailbox.Mailbox.discard "mailbox.Mailbox.discard"). [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") interface semantics differ from dictionary semantics in some noteworthy ways. Each time a message is requested, a new representation (typically a [`Message`](#mailbox.Message "mailbox.Message") instance) is generated based upon the current state of the mailbox. Similarly, when a message is added to a [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance, the provided message representation’s contents are copied.
In neither case is a reference to the message representation kept by the [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance. The default [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") iterator iterates over message representations, not keys as the default dictionary iterator does. Moreover, modification of a mailbox during iteration is safe and well-defined. Messages added to the mailbox after an iterator is created will not be seen by the iterator. Messages removed from the mailbox before the iterator yields them will be silently skipped, though using a key from an iterator may result in a [`KeyError`](exceptions#KeyError "KeyError") exception if the corresponding message is subsequently removed. Warning Be very cautious when modifying mailboxes that might be simultaneously changed by some other process. The safest mailbox format to use for such tasks is Maildir; try to avoid using single-file formats such as mbox for concurrent writing. If you’re modifying a mailbox, you *must* lock it by calling the [`lock()`](#mailbox.Mailbox.lock "mailbox.Mailbox.lock") and [`unlock()`](#mailbox.Mailbox.unlock "mailbox.Mailbox.unlock") methods *before* reading any messages in the file or making any changes by adding or deleting a message. Failing to lock the mailbox runs the risk of losing messages or corrupting the entire mailbox. [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instances have the following methods: `add(message)` Add *message* to the mailbox and return the key that has been assigned to it. Parameter *message* may be a [`Message`](#mailbox.Message "mailbox.Message") instance, an [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message") instance, a string, a byte string, or a file-like object (which should be open in binary mode). If *message* is an instance of the appropriate format-specific [`Message`](#mailbox.Message "mailbox.Message") subclass (e.g., if it’s an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") instance and this is an [`mbox`](#mailbox.mbox "mailbox.mbox") instance), its format-specific information is used. Otherwise, reasonable defaults for format-specific information are used. Changed in version 3.2: Support for binary input was added. `remove(key)` `__delitem__(key)` `discard(key)` Delete the message corresponding to *key* from the mailbox. If no such message exists, a [`KeyError`](exceptions#KeyError "KeyError") exception is raised if the method was called as [`remove()`](#mailbox.Mailbox.remove "mailbox.Mailbox.remove") or [`__delitem__()`](#mailbox.Mailbox.__delitem__ "mailbox.Mailbox.__delitem__") but no exception is raised if the method was called as [`discard()`](#mailbox.Mailbox.discard "mailbox.Mailbox.discard"). The behavior of [`discard()`](#mailbox.Mailbox.discard "mailbox.Mailbox.discard") may be preferred if the underlying mailbox format supports concurrent modification by other processes. `__setitem__(key, message)` Replace the message corresponding to *key* with *message*. Raise a [`KeyError`](exceptions#KeyError "KeyError") exception if no message already corresponds to *key*. As with [`add()`](#mailbox.Mailbox.add "mailbox.Mailbox.add"), parameter *message* may be a [`Message`](#mailbox.Message "mailbox.Message") instance, an [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message") instance, a string, a byte string, or a file-like object (which should be open in binary mode). 
If *message* is an instance of the appropriate format-specific [`Message`](#mailbox.Message "mailbox.Message") subclass (e.g., if it’s an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") instance and this is an [`mbox`](#mailbox.mbox "mailbox.mbox") instance), its format-specific information is used. Otherwise, the format-specific information of the message that currently corresponds to *key* is left unchanged. `iterkeys()` `keys()` Return an iterator over all keys if called as [`iterkeys()`](#mailbox.Mailbox.iterkeys "mailbox.Mailbox.iterkeys") or return a list of keys if called as [`keys()`](#mailbox.Mailbox.keys "mailbox.Mailbox.keys"). `itervalues()` `__iter__()` `values()` Return an iterator over representations of all messages if called as [`itervalues()`](#mailbox.Mailbox.itervalues "mailbox.Mailbox.itervalues") or [`__iter__()`](#mailbox.Mailbox.__iter__ "mailbox.Mailbox.__iter__") or return a list of such representations if called as [`values()`](#mailbox.Mailbox.values "mailbox.Mailbox.values"). The messages are represented as instances of the appropriate format-specific [`Message`](#mailbox.Message "mailbox.Message") subclass unless a custom message factory was specified when the [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance was initialized. Note The behavior of [`__iter__()`](#mailbox.Mailbox.__iter__ "mailbox.Mailbox.__iter__") is unlike that of dictionaries, which iterate over keys. `iteritems()` `items()` Return an iterator over (*key*, *message*) pairs, where *key* is a key and *message* is a message representation, if called as [`iteritems()`](#mailbox.Mailbox.iteritems "mailbox.Mailbox.iteritems") or return a list of such pairs if called as [`items()`](#mailbox.Mailbox.items "mailbox.Mailbox.items"). The messages are represented as instances of the appropriate format-specific [`Message`](#mailbox.Message "mailbox.Message") subclass unless a custom message factory was specified when the [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance was initialized. `get(key, default=None)` `__getitem__(key)` Return a representation of the message corresponding to *key*. If no such message exists, *default* is returned if the method was called as [`get()`](#mailbox.Mailbox.get "mailbox.Mailbox.get") and a [`KeyError`](exceptions#KeyError "KeyError") exception is raised if the method was called as [`__getitem__()`](#mailbox.Mailbox.__getitem__ "mailbox.Mailbox.__getitem__"). The message is represented as an instance of the appropriate format-specific [`Message`](#mailbox.Message "mailbox.Message") subclass unless a custom message factory was specified when the [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance was initialized. `get_message(key)` Return a representation of the message corresponding to *key* as an instance of the appropriate format-specific [`Message`](#mailbox.Message "mailbox.Message") subclass, or raise a [`KeyError`](exceptions#KeyError "KeyError") exception if no such message exists. `get_bytes(key)` Return a byte representation of the message corresponding to *key*, or raise a [`KeyError`](exceptions#KeyError "KeyError") exception if no such message exists. New in version 3.2. `get_string(key)` Return a string representation of the message corresponding to *key*, or raise a [`KeyError`](exceptions#KeyError "KeyError") exception if no such message exists. The message is processed through [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message") to convert it to a 7bit clean representation. 
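A minimal sketch of this dictionary-like access, assuming an existing mbox mailbox at the hypothetical path `example.mbox`:

```
import mailbox

mb = mailbox.mbox('example.mbox')  # hypothetical path
mb.lock()  # always lock before reading or modifying (see the warning above)
try:
    for key, msg in mb.iteritems():
        # msg is a format-specific Message subclass (here, mboxMessage)
        print(key, msg['Subject'])
finally:
    mb.unlock()
    mb.close()
```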
`get_file(key)` Return a file-like representation of the message corresponding to *key*, or raise a [`KeyError`](exceptions#KeyError "KeyError") exception if no such message exists. The file-like object behaves as if open in binary mode. This file should be closed once it is no longer needed. Changed in version 3.2: The file object really is a binary file; previously it was incorrectly returned in text mode. Also, the file-like object now supports the context management protocol: you can use a [`with`](../reference/compound_stmts#with) statement to automatically close it. Note Unlike other representations of messages, file-like representations are not necessarily independent of the [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance that created them or of the underlying mailbox. More specific documentation is provided by each subclass. `__contains__(key)` Return `True` if *key* corresponds to a message, `False` otherwise. `__len__()` Return a count of messages in the mailbox. `clear()` Delete all messages from the mailbox. `pop(key, default=None)` Return a representation of the message corresponding to *key* and delete the message. If no such message exists, return *default*. The message is represented as an instance of the appropriate format-specific [`Message`](#mailbox.Message "mailbox.Message") subclass unless a custom message factory was specified when the [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance was initialized. `popitem()` Return an arbitrary (*key*, *message*) pair, where *key* is a key and *message* is a message representation, and delete the corresponding message. If the mailbox is empty, raise a [`KeyError`](exceptions#KeyError "KeyError") exception. The message is represented as an instance of the appropriate format-specific [`Message`](#mailbox.Message "mailbox.Message") subclass unless a custom message factory was specified when the [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance was initialized. `update(arg)` Parameter *arg* should be a *key*-to-*message* mapping or an iterable of (*key*, *message*) pairs. Updates the mailbox so that, for each given *key* and *message*, the message corresponding to *key* is set to *message* as if by using [`__setitem__()`](#mailbox.Mailbox.__setitem__ "mailbox.Mailbox.__setitem__"). As with [`__setitem__()`](#mailbox.Mailbox.__setitem__ "mailbox.Mailbox.__setitem__"), each *key* must already correspond to a message in the mailbox or else a [`KeyError`](exceptions#KeyError "KeyError") exception will be raised, so in general it is incorrect for *arg* to be a [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance. Note Unlike with dictionaries, keyword arguments are not supported. `flush()` Write any pending changes to the filesystem. For some [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") subclasses, changes are always written immediately and [`flush()`](#mailbox.Mailbox.flush "mailbox.Mailbox.flush") does nothing, but you should still make a habit of calling this method. `lock()` Acquire an exclusive advisory lock on the mailbox so that other processes know not to modify it. An [`ExternalClashError`](#mailbox.ExternalClashError "mailbox.ExternalClashError") is raised if the lock is not available. The particular locking mechanisms used depend upon the mailbox format. You should *always* lock the mailbox before making any modifications to its contents. `unlock()` Release the lock on the mailbox, if any. `close()` Flush the mailbox, unlock it if necessary, and close any open files. 
For some [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") subclasses, this method does nothing.

### [`Maildir`](#mailbox.Maildir "mailbox.Maildir")

`class mailbox.Maildir(dirname, factory=None, create=True)` A subclass of [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") for mailboxes in Maildir format. Parameter *factory* is a callable object that accepts a file-like message representation (which behaves as if opened in binary mode) and returns a custom representation. If *factory* is `None`, [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") is used as the default message representation. If *create* is `True`, the mailbox is created if it does not exist. If *create* is `True` and the *dirname* path exists, it will be treated as an existing maildir without attempting to verify its directory layout. It is for historical reasons that *dirname* is named as such rather than *path*. Maildir is a directory-based mailbox format invented for the qmail mail transfer agent and now widely supported by other programs. Messages in a Maildir mailbox are stored in separate files within a common directory structure. This design allows Maildir mailboxes to be accessed and modified by multiple unrelated programs without data corruption, so file locking is unnecessary. Maildir mailboxes contain three subdirectories, namely: `tmp`, `new`, and `cur`. Messages are created momentarily in the `tmp` subdirectory and then moved to the `new` subdirectory to finalize delivery. A mail user agent may subsequently move the message to the `cur` subdirectory and store information about the state of the message in a special “info” section appended to its file name. Folders of the style introduced by the Courier mail transfer agent are also supported. Any subdirectory of the main mailbox is considered a folder if `'.'` is the first character in its name. Folder names are represented by [`Maildir`](#mailbox.Maildir "mailbox.Maildir") without the leading `'.'`. Each folder is itself a Maildir mailbox but should not contain other folders. Instead, a logical nesting is indicated using `'.'` to delimit levels, e.g., “Archived.2005.07”. Note The Maildir specification requires the use of a colon (`':'`) in certain message file names. However, some operating systems do not permit this character in file names. If you wish to use a Maildir-like format on such an operating system, you should specify another character to use instead. The exclamation point (`'!'`) is a popular choice. For example:

```
import mailbox
mailbox.Maildir.colon = '!'
```

The `colon` attribute may also be set on a per-instance basis. [`Maildir`](#mailbox.Maildir "mailbox.Maildir") instances have all of the methods of [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") in addition to the following: `list_folders()` Return a list of the names of all folders. `get_folder(folder)` Return a [`Maildir`](#mailbox.Maildir "mailbox.Maildir") instance representing the folder whose name is *folder*. A [`NoSuchMailboxError`](#mailbox.NoSuchMailboxError "mailbox.NoSuchMailboxError") exception is raised if the folder does not exist. `add_folder(folder)` Create a folder whose name is *folder* and return a [`Maildir`](#mailbox.Maildir "mailbox.Maildir") instance representing it. `remove_folder(folder)` Delete the folder whose name is *folder*. If the folder contains any messages, a [`NotEmptyError`](#mailbox.NotEmptyError "mailbox.NotEmptyError") exception will be raised and the folder will not be deleted.
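A brief sketch of the folder methods just described, assuming a Maildir mailbox at a hypothetical `~/Maildir` path:

```
import mailbox
import os

md = mailbox.Maildir(os.path.expanduser('~/Maildir'))  # hypothetical path
md.add_folder('Archived.2005.07')            # '.' delimits logical nesting
print(md.list_folders())                     # ['Archived.2005.07']
archive = md.get_folder('Archived.2005.07')  # a Maildir for the folder
md.remove_folder('Archived.2005.07')         # NotEmptyError if it held messages
```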
`clean()` Delete temporary files from the mailbox that have not been accessed in the last 36 hours. The Maildir specification says that mail-reading programs should do this occasionally. Some [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") methods implemented by [`Maildir`](#mailbox.Maildir "mailbox.Maildir") deserve special remarks: `add(message)` `__setitem__(key, message)` `update(arg)` Warning These methods generate unique file names based upon the current process ID. When using multiple threads, undetected name clashes may occur and cause corruption of the mailbox unless threads are coordinated to avoid using these methods to manipulate the same mailbox simultaneously. `flush()` All changes to Maildir mailboxes are immediately applied, so this method does nothing. `lock()` `unlock()` Maildir mailboxes do not support (or require) locking, so these methods do nothing. `close()` [`Maildir`](#mailbox.Maildir "mailbox.Maildir") instances do not keep any open files and the underlying mailboxes do not support locking, so this method does nothing. `get_file(key)` Depending upon the host platform, it may not be possible to modify or remove the underlying message while the returned file remains open. See also [maildir man page from Courier](http://www.courier-mta.org/maildir.html) A specification of the format. Describes a common extension for supporting folders. [Using maildir format](https://cr.yp.to/proto/maildir.html) Notes on Maildir by its inventor. Includes an updated name-creation scheme and details on “info” semantics. ### [`mbox`](#mailbox.mbox "mailbox.mbox") `class mailbox.mbox(path, factory=None, create=True)` A subclass of [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") for mailboxes in mbox format. Parameter *factory* is a callable object that accepts a file-like message representation (which behaves as if opened in binary mode) and returns a custom representation. If *factory* is `None`, [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") is used as the default message representation. If *create* is `True`, the mailbox is created if it does not exist. The mbox format is the classic format for storing mail on Unix systems. All messages in an mbox mailbox are stored in a single file with the beginning of each message indicated by a line whose first five characters are “From “. Several variations of the mbox format exist to address perceived shortcomings in the original. In the interest of compatibility, [`mbox`](#mailbox.mbox "mailbox.mbox") implements the original format, which is sometimes referred to as *mboxo*. This means that the *Content-Length* header, if present, is ignored and that any occurrences of “From ” at the beginning of a line in a message body are transformed to “>From ” when storing the message, although occurrences of “>From ” are not transformed to “From ” when reading the message. Some [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") methods implemented by [`mbox`](#mailbox.mbox "mailbox.mbox") deserve special remarks: `get_file(key)` Using the file after calling `flush()` or `close()` on the [`mbox`](#mailbox.mbox "mailbox.mbox") instance may yield unpredictable results or raise an exception. `lock()` `unlock()` Three locking mechanisms are used—dot locking and, if available, the `flock()` and `lockf()` system calls. See also [mbox man page from tin](http://www.tin.org/bin/man.cgi?section=5&topic=mbox) A specification of the format, with details on locking. 
[Configuring Netscape Mail on Unix: Why The Content-Length Format is Bad](https://www.jwz.org/doc/content-length.html) An argument for using the original mbox format rather than a variation. [“mbox” is a family of several mutually incompatible mailbox formats](https://www.loc.gov/preservation/digital/formats/fdd/fdd000383.shtml) A history of mbox variations.

### [`MH`](#mailbox.MH "mailbox.MH")

`class mailbox.MH(path, factory=None, create=True)` A subclass of [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") for mailboxes in MH format. Parameter *factory* is a callable object that accepts a file-like message representation (which behaves as if opened in binary mode) and returns a custom representation. If *factory* is `None`, [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") is used as the default message representation. If *create* is `True`, the mailbox is created if it does not exist. MH is a directory-based mailbox format invented for the MH Message Handling System, a mail user agent. Each message in an MH mailbox resides in its own file. An MH mailbox may contain other MH mailboxes (called *folders*) in addition to messages. Folders may be nested indefinitely. MH mailboxes also support *sequences*, which are named lists used to logically group messages without moving them to sub-folders. Sequences are defined in a file called `.mh_sequences` in each folder. The [`MH`](#mailbox.MH "mailbox.MH") class manipulates MH mailboxes, but it does not attempt to emulate all of **mh**’s behaviors. In particular, it does not modify and is not affected by the `context` or `.mh_profile` files that are used by **mh** to store its state and configuration. [`MH`](#mailbox.MH "mailbox.MH") instances have all of the methods of [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") in addition to the following: `list_folders()` Return a list of the names of all folders. `get_folder(folder)` Return an [`MH`](#mailbox.MH "mailbox.MH") instance representing the folder whose name is *folder*. A [`NoSuchMailboxError`](#mailbox.NoSuchMailboxError "mailbox.NoSuchMailboxError") exception is raised if the folder does not exist. `add_folder(folder)` Create a folder whose name is *folder* and return an [`MH`](#mailbox.MH "mailbox.MH") instance representing it. `remove_folder(folder)` Delete the folder whose name is *folder*. If the folder contains any messages, a [`NotEmptyError`](#mailbox.NotEmptyError "mailbox.NotEmptyError") exception will be raised and the folder will not be deleted. `get_sequences()` Return a dictionary of sequence names mapped to key lists. If there are no sequences, the empty dictionary is returned. `set_sequences(sequences)` Re-define the sequences that exist in the mailbox based upon *sequences*, a dictionary of names mapped to key lists, as returned by [`get_sequences()`](#mailbox.MH.get_sequences "mailbox.MH.get_sequences"). `pack()` Rename messages in the mailbox as necessary to eliminate gaps in numbering. Entries in the sequences list are updated correspondingly. Note Already-issued keys are invalidated by this operation and should not be subsequently used. Some [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") methods implemented by [`MH`](#mailbox.MH "mailbox.MH") deserve special remarks: `remove(key)` `__delitem__(key)` `discard(key)` These methods immediately delete the message. The MH convention of marking a message for deletion by prepending a comma to its name is not used.
`lock()` `unlock()` Three locking mechanisms are used—dot locking and, if available, the `flock()` and `lockf()` system calls. For MH mailboxes, locking the mailbox means locking the `.mh_sequences` file and, only for the duration of any operations that affect them, locking individual message files. `get_file(key)` Depending upon the host platform, it may not be possible to remove the underlying message while the returned file remains open. `flush()` All changes to MH mailboxes are immediately applied, so this method does nothing. `close()` [`MH`](#mailbox.MH "mailbox.MH") instances do not keep any open files, so this method is equivalent to [`unlock()`](#mailbox.MH.unlock "mailbox.MH.unlock"). See also [nmh - Message Handling System](http://www.nongnu.org/nmh/) Home page of **nmh**, an updated version of the original **mh**. [MH & nmh: Email for Users & Programmers](https://rand-mh.sourceforge.io/book/) A GPL-licensed book on **mh** and **nmh**, with some information on the mailbox format. ### [`Babyl`](#mailbox.Babyl "mailbox.Babyl") `class mailbox.Babyl(path, factory=None, create=True)` A subclass of [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") for mailboxes in Babyl format. Parameter *factory* is a callable object that accepts a file-like message representation (which behaves as if opened in binary mode) and returns a custom representation. If *factory* is `None`, [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") is used as the default message representation. If *create* is `True`, the mailbox is created if it does not exist. Babyl is a single-file mailbox format used by the Rmail mail user agent included with Emacs. The beginning of a message is indicated by a line containing the two characters Control-Underscore (`'\037'`) and Control-L (`'\014'`). The end of a message is indicated by the start of the next message or, in the case of the last message, a line containing a Control-Underscore (`'\037'`) character. Messages in a Babyl mailbox have two sets of headers, original headers and so-called visible headers. Visible headers are typically a subset of the original headers that have been reformatted or abridged to be more attractive. Each message in a Babyl mailbox also has an accompanying list of *labels*, or short strings that record extra information about the message, and a list of all user-defined labels found in the mailbox is kept in the Babyl options section. [`Babyl`](#mailbox.Babyl "mailbox.Babyl") instances have all of the methods of [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") in addition to the following: `get_labels()` Return a list of the names of all user-defined labels used in the mailbox. Note The actual messages are inspected to determine which labels exist in the mailbox rather than consulting the list of labels in the Babyl options section, but the Babyl section is updated whenever the mailbox is modified. Some [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") methods implemented by [`Babyl`](#mailbox.Babyl "mailbox.Babyl") deserve special remarks: `get_file(key)` In Babyl mailboxes, the headers of a message are not stored contiguously with the body of the message. To generate a file-like representation, the headers and body are copied together into an [`io.BytesIO`](io#io.BytesIO "io.BytesIO") instance, which has an API identical to that of a file. As a result, the file-like object is truly independent of the underlying mailbox but does not save memory compared to a string representation. 
`lock()` `unlock()` Three locking mechanisms are used—dot locking and, if available, the `flock()` and `lockf()` system calls. See also [Format of Version 5 Babyl Files](https://quimby.gnus.org/notes/BABYL) A specification of the Babyl format. [Reading Mail with Rmail](https://www.gnu.org/software/emacs/manual/html_node/emacs/Rmail.html) The Rmail manual, with some information on Babyl semantics. ### [`MMDF`](#mailbox.MMDF "mailbox.MMDF") `class mailbox.MMDF(path, factory=None, create=True)` A subclass of [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") for mailboxes in MMDF format. Parameter *factory* is a callable object that accepts a file-like message representation (which behaves as if opened in binary mode) and returns a custom representation. If *factory* is `None`, [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") is used as the default message representation. If *create* is `True`, the mailbox is created if it does not exist. MMDF is a single-file mailbox format invented for the Multichannel Memorandum Distribution Facility, a mail transfer agent. Each message is in the same form as an mbox message but is bracketed before and after by lines containing four Control-A (`'\001'`) characters. As with the mbox format, the beginning of each message is indicated by a line whose first five characters are “From “, but additional occurrences of “From ” are not transformed to “>From ” when storing messages because the extra message separator lines prevent mistaking such occurrences for the starts of subsequent messages. Some [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") methods implemented by [`MMDF`](#mailbox.MMDF "mailbox.MMDF") deserve special remarks: `get_file(key)` Using the file after calling `flush()` or `close()` on the [`MMDF`](#mailbox.MMDF "mailbox.MMDF") instance may yield unpredictable results or raise an exception. `lock()` `unlock()` Three locking mechanisms are used—dot locking and, if available, the `flock()` and `lockf()` system calls. See also [mmdf man page from tin](http://www.tin.org/bin/man.cgi?section=5&topic=mmdf) A specification of MMDF format from the documentation of tin, a newsreader. [MMDF](https://en.wikipedia.org/wiki/MMDF) A Wikipedia article describing the Multichannel Memorandum Distribution Facility. Message objects --------------- `class mailbox.Message(message=None)` A subclass of the [`email.message`](email.message#module-email.message "email.message: The base class representing email messages.") module’s [`Message`](email.compat32-message#email.message.Message "email.message.Message"). Subclasses of [`mailbox.Message`](#mailbox.Message "mailbox.Message") add mailbox-format-specific state and behavior. If *message* is omitted, the new instance is created in a default, empty state. If *message* is an [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message") instance, its contents are copied; furthermore, any format-specific information is converted insofar as possible if *message* is a [`Message`](#mailbox.Message "mailbox.Message") instance. If *message* is a string, a byte string, or a file, it should contain an [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html)-compliant message, which is read and parsed. Files should be open in binary mode, but text mode files are accepted for backward compatibility. 
The format-specific state and behaviors offered by subclasses vary, but in general it is only the properties that are not specific to a particular mailbox that are supported (although presumably the properties are specific to a particular mailbox format). For example, file offsets for single-file mailbox formats and file names for directory-based mailbox formats are not retained, because they are only applicable to the original mailbox. But state such as whether a message has been read by the user or marked as important is retained, because it applies to the message itself. There is no requirement that [`Message`](#mailbox.Message "mailbox.Message") instances be used to represent messages retrieved using [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instances. In some situations, the time and memory required to generate [`Message`](#mailbox.Message "mailbox.Message") representations might not be acceptable. For such situations, [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instances also offer string and file-like representations, and a custom message factory may be specified when a [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") instance is initialized.

### [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage")

`class mailbox.MaildirMessage(message=None)` A message with Maildir-specific behaviors. Parameter *message* has the same meaning as with the [`Message`](#mailbox.Message "mailbox.Message") constructor. Typically, a mail user agent application moves all of the messages in the `new` subdirectory to the `cur` subdirectory after the first time the user opens and closes the mailbox, recording that the messages are old whether or not they’ve actually been read. Each message in `cur` has an “info” section added to its file name to store information about its state. (Some mail readers may also add an “info” section to messages in `new`.) The “info” section may take one of two forms: it may contain “2,” followed by a list of standardized flags (e.g., “2,FR”) or it may contain “1,” followed by so-called experimental information. Standard flags for Maildir messages are as follows:

| Flag | Meaning | Explanation |
| --- | --- | --- |
| D | Draft | Under composition |
| F | Flagged | Marked as important |
| P | Passed | Forwarded, resent, or bounced |
| R | Replied | Replied to |
| S | Seen | Read |
| T | Trashed | Marked for subsequent deletion |

[`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instances offer the following methods: `get_subdir()` Return either “new” (if the message should be stored in the `new` subdirectory) or “cur” (if the message should be stored in the `cur` subdirectory). Note A message is typically moved from `new` to `cur` after its mailbox has been accessed, whether or not the message has been read. A message `msg` has been read if `"S" in msg.get_flags()` is `True`. `set_subdir(subdir)` Set the subdirectory the message should be stored in. Parameter *subdir* must be either “new” or “cur”. `get_flags()` Return a string specifying the flags that are currently set. If the message complies with the standard Maildir format, the result is the concatenation in alphabetical order of zero or one occurrence of each of `'D'`, `'F'`, `'P'`, `'R'`, `'S'`, and `'T'`. The empty string is returned if no flags are set or if “info” contains experimental semantics. `set_flags(flags)` Set the flags specified by *flags* and unset all others. `add_flag(flag)` Set the flag(s) specified by *flag* without changing other flags.
To add more than one flag at a time, *flag* may be a string of more than one character. The current “info” is overwritten whether or not it contains experimental information rather than flags. `remove_flag(flag)` Unset the flag(s) specified by *flag* without changing other flags. To remove more than one flag at a time, *flag* may be a string of more than one character. If “info” contains experimental information rather than flags, the current “info” is not modified. `get_date()` Return the delivery date of the message as a floating-point number representing seconds since the epoch. `set_date(date)` Set the delivery date of the message to *date*, a floating-point number representing seconds since the epoch. `get_info()` Return a string containing the “info” for a message. This is useful for accessing and modifying “info” that is experimental (i.e., not a list of flags). `set_info(info)` Set “info” to *info*, which should be a string. When a [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance is created based upon an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") or [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instance, the *Status* and *X-Status* headers are omitted and the following conversions take place:

| Resulting state | [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") or [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") state |
| --- | --- |
| “cur” subdirectory | O flag |
| F flag | F flag |
| R flag | A flag |
| S flag | R flag |
| T flag | D flag |

When a [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance is created based upon an [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") instance, the following conversions take place:

| Resulting state | [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") state |
| --- | --- |
| “cur” subdirectory | “unseen” sequence |
| “cur” subdirectory and S flag | no “unseen” sequence |
| F flag | “flagged” sequence |
| R flag | “replied” sequence |

When a [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance is created based upon a [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instance, the following conversions take place:

| Resulting state | [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") state |
| --- | --- |
| “cur” subdirectory | “unseen” label |
| “cur” subdirectory and S flag | no “unseen” label |
| P flag | “forwarded” or “resent” label |
| R flag | “answered” label |
| T flag | “deleted” label |

### [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage")

`class mailbox.mboxMessage(message=None)` A message with mbox-specific behaviors. Parameter *message* has the same meaning as with the [`Message`](#mailbox.Message "mailbox.Message") constructor. Messages in an mbox mailbox are stored together in a single file. The sender’s envelope address and the time of delivery are typically stored in a line beginning with “From ” that is used to indicate the start of a message, though there is considerable variation in the exact format of this data among mbox implementations. Flags that indicate the state of the message, such as whether it has been read or marked as important, are typically stored in *Status* and *X-Status* headers.
Conventional flags for mbox messages are as follows:

| Flag | Meaning | Explanation |
| --- | --- | --- |
| R | Read | Read |
| O | Old | Previously detected by MUA |
| D | Deleted | Marked for subsequent deletion |
| F | Flagged | Marked as important |
| A | Answered | Replied to |

The “R” and “O” flags are stored in the *Status* header, and the “D”, “F”, and “A” flags are stored in the *X-Status* header. The flags and headers typically appear in the order mentioned. [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") instances offer the following methods: `get_from()` Return a string representing the “From ” line that marks the start of the message in an mbox mailbox. The leading “From ” and the trailing newline are excluded. `set_from(from_, time_=None)` Set the “From ” line to *from\_*, which should be specified without a leading “From ” or trailing newline. For convenience, *time\_* may be specified and will be formatted appropriately and appended to *from\_*. If *time\_* is specified, it should be a [`time.struct_time`](time#time.struct_time "time.struct_time") instance, a tuple suitable for passing to [`time.strftime()`](time#time.strftime "time.strftime"), or `True` (to use [`time.gmtime()`](time#time.gmtime "time.gmtime")). `get_flags()` Return a string specifying the flags that are currently set. If the message complies with the conventional format, the result is the concatenation in the following order of zero or one occurrence of each of `'R'`, `'O'`, `'D'`, `'F'`, and `'A'`. `set_flags(flags)` Set the flags specified by *flags* and unset all others. Parameter *flags* should be the concatenation in any order of zero or more occurrences of each of `'R'`, `'O'`, `'D'`, `'F'`, and `'A'`. `add_flag(flag)` Set the flag(s) specified by *flag* without changing other flags. To add more than one flag at a time, *flag* may be a string of more than one character. `remove_flag(flag)` Unset the flag(s) specified by *flag* without changing other flags. To remove more than one flag at a time, *flag* may be a string of more than one character.
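A short sketch of the flag methods above (the message is constructed in memory and the address is illustrative):

```
import mailbox

msg = mailbox.mboxMessage()
msg.set_from('spam@example.com', True)  # True appends a time.gmtime() stamp
msg.set_flags('RO')                     # read and old
msg.add_flag('F')                       # also mark as important
msg.remove_flag('O')
print(msg.get_flags())                  # 'RF', in the conventional order
```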
When an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") instance is created based upon a [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance, a “From ” line is generated based upon the [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance’s delivery date, and the following conversions take place:

| Resulting state | [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") state |
| --- | --- |
| R flag | S flag |
| O flag | “cur” subdirectory |
| D flag | T flag |
| F flag | F flag |
| A flag | R flag |

When an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") instance is created based upon an [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") instance, the following conversions take place:

| Resulting state | [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") state |
| --- | --- |
| R flag and O flag | no “unseen” sequence |
| O flag | “unseen” sequence |
| F flag | “flagged” sequence |
| A flag | “replied” sequence |

When an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") instance is created based upon a [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instance, the following conversions take place:

| Resulting state | [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") state |
| --- | --- |
| R flag and O flag | no “unseen” label |
| O flag | “unseen” label |
| D flag | “deleted” label |
| A flag | “answered” label |

When an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") instance is created based upon an [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instance, the “From ” line is copied and all flags directly correspond:

| Resulting state | [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") state |
| --- | --- |
| R flag | R flag |
| O flag | O flag |
| D flag | D flag |
| F flag | F flag |
| A flag | A flag |

### [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage")

`class mailbox.MHMessage(message=None)`

A message with MH-specific behaviors. Parameter *message* has the same meaning as with the [`Message`](#mailbox.Message "mailbox.Message") constructor.

MH messages do not support marks or flags in the traditional sense, but they do support sequences, which are logical groupings of arbitrary messages. Some mail reading programs (although not the standard **mh** and **nmh**) use sequences in much the same way flags are used with other formats, as follows:

| Sequence | Explanation |
| --- | --- |
| unseen | Not read, but previously detected by MUA |
| replied | Replied to |
| flagged | Marked as important |

[`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") instances offer the following methods:

`get_sequences()`

Return a list of the names of sequences that include this message.

`set_sequences(sequences)`

Set the list of sequences that include this message.

`add_sequence(sequence)`

Add *sequence* to the list of sequences that include this message.

`remove_sequence(sequence)`

Remove *sequence* from the list of sequences that include this message.
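Here is a small sketch of the sequence methods above, combined with a conversion of the kind described by the preceding tables; all names are arbitrary:

```
import mailbox

msg = mailbox.MHMessage()
msg.set_sequences(['unseen', 'flagged'])
msg.add_sequence('replied')
msg.remove_sequence('unseen')
print(msg.get_sequences())  # ['flagged', 'replied']

# Constructing an mboxMessage from it applies the MHMessage table above:
# no "unseen" yields R and O, "flagged" becomes F, "replied" becomes A.
converted = mailbox.mboxMessage(msg)
print(converted.get_flags())  # 'ROFA'
```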
When an [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") instance is created based upon a [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance, the following conversions take place:

| Resulting state | [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") state |
| --- | --- |
| “unseen” sequence | no S flag |
| “replied” sequence | R flag |
| “flagged” sequence | F flag |

When an [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") instance is created based upon an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") or [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instance, the *Status* and *X-Status* headers are omitted and the following conversions take place:

| Resulting state | [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") or [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") state |
| --- | --- |
| “unseen” sequence | no R flag |
| “replied” sequence | A flag |
| “flagged” sequence | F flag |

When an [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") instance is created based upon a [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instance, the following conversions take place:

| Resulting state | [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") state |
| --- | --- |
| “unseen” sequence | “unseen” label |
| “replied” sequence | “answered” label |

### [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage")

`class mailbox.BabylMessage(message=None)`

A message with Babyl-specific behaviors. Parameter *message* has the same meaning as with the [`Message`](#mailbox.Message "mailbox.Message") constructor.

Certain message labels, called *attributes*, are defined by convention to have special meanings. The attributes are as follows:

| Label | Explanation |
| --- | --- |
| unseen | Not read, but previously detected by MUA |
| deleted | Marked for subsequent deletion |
| filed | Copied to another file or mailbox |
| answered | Replied to |
| forwarded | Forwarded |
| edited | Modified by the user |
| resent | Resent |

By default, Rmail displays only visible headers. The [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") class, though, uses the original headers because they are more complete. Visible headers may be accessed explicitly if desired.

[`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instances offer the following methods:

`get_labels()`

Return a list of labels on the message.

`set_labels(labels)`

Set the list of labels on the message to *labels*.

`add_label(label)`

Add *label* to the list of labels on the message.

`remove_label(label)`

Remove *label* from the list of labels on the message.

`get_visible()`

Return a [`Message`](#mailbox.Message "mailbox.Message") instance whose headers are the message’s visible headers and whose body is empty.

`set_visible(visible)`

Set the message’s visible headers to be the same as the headers in *visible*. Parameter *visible* should be a [`Message`](#mailbox.Message "mailbox.Message") instance, an [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message") instance, a string, or a file-like object (which should be open in text mode).

`update_visible()`

When a [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instance’s original headers are modified, the visible headers are not automatically modified to correspond.
This method updates the visible headers as follows: each visible header with a corresponding original header is set to the value of the original header, each visible header without a corresponding original header is removed, and any of *Date*, *From*, *Reply-To*, *To*, *CC*, and *Subject* that are present in the original headers but not the visible headers are added to the visible headers.

When a [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instance is created based upon a [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance, the following conversions take place:

| Resulting state | [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") state |
| --- | --- |
| “unseen” label | no S flag |
| “deleted” label | T flag |
| “answered” label | R flag |
| “forwarded” label | P flag |

When a [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instance is created based upon an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") or [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instance, the *Status* and *X-Status* headers are omitted and the following conversions take place:

| Resulting state | [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") or [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") state |
| --- | --- |
| “unseen” label | no R flag |
| “deleted” label | D flag |
| “answered” label | A flag |

When a [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instance is created based upon an [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") instance, the following conversions take place:

| Resulting state | [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") state |
| --- | --- |
| “unseen” label | “unseen” sequence |
| “answered” label | “replied” sequence |

### [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage")

`class mailbox.MMDFMessage(message=None)`

A message with MMDF-specific behaviors. Parameter *message* has the same meaning as with the [`Message`](#mailbox.Message "mailbox.Message") constructor.

As with messages in an mbox mailbox, MMDF messages are stored with the sender’s address and the delivery date in an initial line beginning with “From ”. Likewise, flags that indicate the state of the message are typically stored in *Status* and *X-Status* headers.

Conventional flags for MMDF messages are identical to those of mbox messages and are as follows:

| Flag | Meaning | Explanation |
| --- | --- | --- |
| R | Read | Read |
| O | Old | Previously detected by MUA |
| D | Deleted | Marked for subsequent deletion |
| F | Flagged | Marked as important |
| A | Answered | Replied to |

The “R” and “O” flags are stored in the *Status* header, and the “D”, “F”, and “A” flags are stored in the *X-Status* header. The flags and headers typically appear in the order mentioned.

[`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instances offer the following methods, which are identical to those offered by [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage"):

`get_from()`

Return a string representing the “From ” line that marks the start of the message in an mbox mailbox. The leading “From ” and the trailing newline are excluded.

`set_from(from_, time_=None)`

Set the “From ” line to *from\_*, which should be specified without a leading “From ” or trailing newline. For convenience, *time\_* may be specified and will be formatted appropriately and appended to *from\_*.
If *time\_* is specified, it should be a [`time.struct_time`](time#time.struct_time "time.struct_time") instance, a tuple suitable for passing to [`time.strftime()`](time#time.strftime "time.strftime"), or `True` (to use [`time.gmtime()`](time#time.gmtime "time.gmtime")).

`get_flags()`

Return a string specifying the flags that are currently set. If the message complies with the conventional format, the result is the concatenation in the following order of zero or one occurrence of each of `'R'`, `'O'`, `'D'`, `'F'`, and `'A'`.

`set_flags(flags)`

Set the flags specified by *flags* and unset all others. Parameter *flags* should be the concatenation in any order of zero or more occurrences of each of `'R'`, `'O'`, `'D'`, `'F'`, and `'A'`.

`add_flag(flag)`

Set the flag(s) specified by *flag* without changing other flags. To add more than one flag at a time, *flag* may be a string of more than one character.

`remove_flag(flag)`

Unset the flag(s) specified by *flag* without changing other flags. To remove more than one flag at a time, *flag* may be a string of more than one character.

When an [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instance is created based upon a [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance, a “From ” line is generated based upon the [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") instance’s delivery date, and the following conversions take place:

| Resulting state | [`MaildirMessage`](#mailbox.MaildirMessage "mailbox.MaildirMessage") state |
| --- | --- |
| R flag | S flag |
| O flag | “cur” subdirectory |
| D flag | T flag |
| F flag | F flag |
| A flag | R flag |

When an [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instance is created based upon an [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") instance, the following conversions take place:

| Resulting state | [`MHMessage`](#mailbox.MHMessage "mailbox.MHMessage") state |
| --- | --- |
| R flag and O flag | no “unseen” sequence |
| O flag | “unseen” sequence |
| F flag | “flagged” sequence |
| A flag | “replied” sequence |

When an [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instance is created based upon a [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") instance, the following conversions take place:

| Resulting state | [`BabylMessage`](#mailbox.BabylMessage "mailbox.BabylMessage") state |
| --- | --- |
| R flag and O flag | no “unseen” label |
| O flag | “unseen” label |
| D flag | “deleted” label |
| A flag | “answered” label |

When an [`MMDFMessage`](#mailbox.MMDFMessage "mailbox.MMDFMessage") instance is created based upon an [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") instance, the “From ” line is copied and all flags directly correspond:

| Resulting state | [`mboxMessage`](#mailbox.mboxMessage "mailbox.mboxMessage") state |
| --- | --- |
| R flag | R flag |
| O flag | O flag |
| D flag | D flag |
| F flag | F flag |
| A flag | A flag |

Exceptions
----------

The following exception classes are defined in the [`mailbox`](#module-mailbox "mailbox: Manipulate mailboxes in various formats") module:

`exception mailbox.Error`

The base class for all other module-specific exceptions.

`exception mailbox.NoSuchMailboxError`

Raised when a mailbox is expected but is not found, such as when instantiating a [`Mailbox`](#mailbox.Mailbox "mailbox.Mailbox") subclass with a path that does not exist (and with the *create* parameter set to `False`), or when opening a folder that does not exist.
`exception mailbox.NotEmptyError`

Raised when a mailbox is not empty but is expected to be, such as when deleting a folder that contains messages.

`exception mailbox.ExternalClashError`

Raised when some mailbox-related condition beyond the control of the program causes it to be unable to proceed, such as when failing to acquire a lock that another program already holds, or when a uniquely-generated file name already exists.

`exception mailbox.FormatError`

Raised when the data in a file cannot be parsed, such as when an [`MH`](#mailbox.MH "mailbox.MH") instance attempts to read a corrupted `.mh_sequences` file.

Examples
--------

A simple example of printing the subjects of all messages in a mailbox that seem interesting:

```
import mailbox
for message in mailbox.mbox('~/mbox'):
    subject = message['subject']       # Could possibly be None.
    if subject and 'python' in subject.lower():
        print(subject)
```

To copy all mail from a Babyl mailbox to an MH mailbox, converting all of the format-specific information that can be converted:

```
import mailbox
destination = mailbox.MH('~/Mail')
destination.lock()
for message in mailbox.Babyl('~/RMAIL'):
    destination.add(mailbox.MHMessage(message))
destination.flush()
destination.unlock()
```

This example sorts mail from several mailing lists into different mailboxes, being careful to avoid mail corruption due to concurrent modification by other programs, mail loss due to interruption of the program, or premature termination due to malformed messages in the mailbox:

```
import mailbox
import email.errors

list_names = ('python-list', 'python-dev', 'python-bugs')
boxes = {name: mailbox.mbox('~/email/%s' % name) for name in list_names}
inbox = mailbox.Maildir('~/Maildir', factory=None)

for key in inbox.iterkeys():
    try:
        message = inbox[key]
    except email.errors.MessageParseError:
        continue                # The message is malformed. Just leave it.

    for name in list_names:
        list_id = message['list-id']
        if list_id and name in list_id:
            # Get mailbox to use
            box = boxes[name]

            # Write copy to disk before removing original.
            # If there's a crash, you might duplicate a message, but
            # that's better than losing a message completely.
            box.lock()
            box.add(message)
            box.flush()
            box.unlock()

            # Remove original message
            inbox.lock()
            inbox.discard(key)
            inbox.flush()
            inbox.unlock()
            break               # Found destination, so stop looking.

for box in boxes.values():
    box.close()
```
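As one more illustrative sketch, assuming an MH mailbox at `~/Mail` whose `archive` folder may or may not exist, the exception classes above can be handled like this:

```
import mailbox

inbox = mailbox.MH('~/Mail')
try:
    archive = inbox.get_folder('archive')
except mailbox.NoSuchMailboxError:
    # The folder does not exist yet; create it instead.
    archive = inbox.add_folder('archive')

try:
    inbox.remove_folder('archive')
except mailbox.NotEmptyError:
    print('archive still contains messages')
```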
python http.server — HTTP servers

http.server — HTTP servers
==========================

**Source code:** [Lib/http/server.py](https://github.com/python/cpython/tree/3.9/Lib/http/server.py)

This module defines classes for implementing HTTP servers (Web servers).

Warning

[`http.server`](#module-http.server "http.server: HTTP server and request handlers.") is not recommended for production. It only implements [basic security checks](#http-server-security).

One class, [`HTTPServer`](#http.server.HTTPServer "http.server.HTTPServer"), is a [`socketserver.TCPServer`](socketserver#socketserver.TCPServer "socketserver.TCPServer") subclass. It creates and listens at the HTTP socket, dispatching the requests to a handler. Code to create and run the server looks like this:

```
from http.server import BaseHTTPRequestHandler, HTTPServer

def run(server_class=HTTPServer, handler_class=BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()
```

`class http.server.HTTPServer(server_address, RequestHandlerClass)`

This class builds on the [`TCPServer`](socketserver#socketserver.TCPServer "socketserver.TCPServer") class by storing the server address as instance variables named `server_name` and `server_port`. The server is accessible by the handler, typically through the handler’s `server` instance variable.

`class http.server.ThreadingHTTPServer(server_address, RequestHandlerClass)`

This class is identical to HTTPServer but uses threads to handle requests by using the [`ThreadingMixIn`](socketserver#socketserver.ThreadingMixIn "socketserver.ThreadingMixIn"). This is useful to handle web browsers pre-opening sockets, on which [`HTTPServer`](#http.server.HTTPServer "http.server.HTTPServer") would wait indefinitely.

New in version 3.7.

The [`HTTPServer`](#http.server.HTTPServer "http.server.HTTPServer") and [`ThreadingHTTPServer`](#http.server.ThreadingHTTPServer "http.server.ThreadingHTTPServer") must be given a *RequestHandlerClass* on instantiation, of which this module provides three different variants:

`class http.server.BaseHTTPRequestHandler(request, client_address, server)`

This class is used to handle the HTTP requests that arrive at the server. By itself, it cannot respond to any actual HTTP requests; it must be subclassed to handle each request method (e.g. GET or POST). [`BaseHTTPRequestHandler`](#http.server.BaseHTTPRequestHandler "http.server.BaseHTTPRequestHandler") provides a number of class and instance variables, and methods for use by subclasses.

The handler will parse the request and the headers, then call a method specific to the request type. The method name is constructed from the request. For example, for the request method `SPAM`, the `do_SPAM()` method will be called with no arguments. All of the relevant information is stored in instance variables of the handler. Subclasses should not need to override or extend the [`__init__()`](../reference/datamodel#object.__init__ "object.__init__") method.

[`BaseHTTPRequestHandler`](#http.server.BaseHTTPRequestHandler "http.server.BaseHTTPRequestHandler") has the following instance variables:

`client_address`

Contains a tuple of the form `(host, port)` referring to the client’s address.

`server`

Contains the server instance.

`close_connection`

Boolean that should be set before [`handle_one_request()`](#http.server.BaseHTTPRequestHandler.handle_one_request "http.server.BaseHTTPRequestHandler.handle_one_request") returns, indicating if another request may be expected, or if the connection should be shut down.
`requestline`

Contains the string representation of the HTTP request line. The terminating CRLF is stripped. This attribute should be set by [`handle_one_request()`](#http.server.BaseHTTPRequestHandler.handle_one_request "http.server.BaseHTTPRequestHandler.handle_one_request"). If no valid request line was processed, it should be set to the empty string.

`command`

Contains the command (request type). For example, `'GET'`.

`path`

Contains the request path. If the query component of the URL is present, then `path` includes the query. Using the terminology of [**RFC 3986**](https://tools.ietf.org/html/rfc3986.html), `path` here includes `hier-part` and the `query`.

`request_version`

Contains the version string from the request. For example, `'HTTP/1.0'`.

`headers`

Holds an instance of the class specified by the [`MessageClass`](#http.server.BaseHTTPRequestHandler.MessageClass "http.server.BaseHTTPRequestHandler.MessageClass") class variable. This instance parses and manages the headers in the HTTP request. The [`parse_headers()`](http.client#http.client.parse_headers "http.client.parse_headers") function from [`http.client`](http.client#module-http.client "http.client: HTTP and HTTPS protocol client (requires sockets).") is used to parse the headers and it requires that the HTTP request provide a valid [**RFC 2822**](https://tools.ietf.org/html/rfc2822.html) style header.

`rfile`

An [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase") input stream, ready to read from the start of the optional input data.

`wfile`

Contains the output stream for writing a response back to the client. Proper adherence to the HTTP protocol must be used when writing to this stream in order to achieve successful interoperation with HTTP clients.

Changed in version 3.6: This is an [`io.BufferedIOBase`](io#io.BufferedIOBase "io.BufferedIOBase") stream.

[`BaseHTTPRequestHandler`](#http.server.BaseHTTPRequestHandler "http.server.BaseHTTPRequestHandler") has the following attributes:

`server_version`

Specifies the server software version. You may want to override this. The format is multiple whitespace-separated strings, where each string is of the form name[/version]. For example, `'BaseHTTP/0.2'`.

`sys_version`

Contains the Python system version, in a form usable by the [`version_string`](#http.server.BaseHTTPRequestHandler.version_string "http.server.BaseHTTPRequestHandler.version_string") method and the [`server_version`](#http.server.BaseHTTPRequestHandler.server_version "http.server.BaseHTTPRequestHandler.server_version") class variable. For example, `'Python/1.4'`.

`error_message_format`

Specifies a format string that should be used by the [`send_error()`](#http.server.BaseHTTPRequestHandler.send_error "http.server.BaseHTTPRequestHandler.send_error") method for building an error response to the client. The string is filled by default with variables from [`responses`](#http.server.BaseHTTPRequestHandler.responses "http.server.BaseHTTPRequestHandler.responses") based on the status code that is passed to [`send_error()`](#http.server.BaseHTTPRequestHandler.send_error "http.server.BaseHTTPRequestHandler.send_error").

`error_content_type`

Specifies the Content-Type HTTP header of error responses sent to the client. The default value is `'text/html'`.

`protocol_version`

This specifies the HTTP protocol version used in responses.
If set to `'HTTP/1.1'`, the server will permit HTTP persistent connections; however, your server *must* then include an accurate `Content-Length` header (using [`send_header()`](#http.server.BaseHTTPRequestHandler.send_header "http.server.BaseHTTPRequestHandler.send_header")) in all of its responses to clients. For backwards compatibility, the setting defaults to `'HTTP/1.0'`.

`MessageClass`

Specifies an [`email.message.Message`](email.compat32-message#email.message.Message "email.message.Message")-like class to parse HTTP headers. Typically, this is not overridden, and it defaults to `http.client.HTTPMessage`.

`responses`

This attribute contains a mapping of error code integers to two-element tuples containing a short and long message. For example, `{code: (shortmessage, longmessage)}`. The *shortmessage* is usually used as the *message* key in an error response, and *longmessage* as the *explain* key. It is used by the [`send_response_only()`](#http.server.BaseHTTPRequestHandler.send_response_only "http.server.BaseHTTPRequestHandler.send_response_only") and [`send_error()`](#http.server.BaseHTTPRequestHandler.send_error "http.server.BaseHTTPRequestHandler.send_error") methods.

A [`BaseHTTPRequestHandler`](#http.server.BaseHTTPRequestHandler "http.server.BaseHTTPRequestHandler") instance has the following methods:

`handle()`

Calls [`handle_one_request()`](#http.server.BaseHTTPRequestHandler.handle_one_request "http.server.BaseHTTPRequestHandler.handle_one_request") once (or, if persistent connections are enabled, multiple times) to handle incoming HTTP requests. You should never need to override it; instead, implement appropriate `do_*()` methods.

`handle_one_request()`

This method will parse and dispatch the request to the appropriate `do_*()` method. You should never need to override it.

`handle_expect_100()`

When an HTTP/1.1 compliant server receives an `Expect: 100-continue` request header it responds back with a `100 Continue` followed by `200 OK` headers. This method can be overridden to raise an error if the server does not want the client to continue. For example, the server can choose to send `417 Expectation Failed` as a response header and `return False`.

New in version 3.2.

`send_error(code, message=None, explain=None)`

Sends and logs a complete error reply to the client. The numeric *code* specifies the HTTP error code, with *message* as an optional, short, human readable description of the error. The *explain* argument can be used to provide more detailed information about the error; it will be formatted using the [`error_message_format`](#http.server.BaseHTTPRequestHandler.error_message_format "http.server.BaseHTTPRequestHandler.error_message_format") attribute and emitted, after a complete set of headers, as the response body. The [`responses`](#http.server.BaseHTTPRequestHandler.responses "http.server.BaseHTTPRequestHandler.responses") attribute holds the default values for *message* and *explain* that will be used if no value is provided; for unknown codes the default value for both is the string `???`. The body will be empty if the method is HEAD or the response code is one of the following: `1xx`, `204 No Content`, `205 Reset Content`, `304 Not Modified`.

Changed in version 3.4: The error response includes a Content-Length header. Added the *explain* argument.

`send_response(code, message=None)`

Adds a response header to the headers buffer and logs the accepted request. The HTTP response line is written to the internal buffer, followed by *Server* and *Date* headers.
The values for these two headers are picked up from the [`version_string()`](#http.server.BaseHTTPRequestHandler.version_string "http.server.BaseHTTPRequestHandler.version_string") and [`date_time_string()`](#http.server.BaseHTTPRequestHandler.date_time_string "http.server.BaseHTTPRequestHandler.date_time_string") methods, respectively. If the server does not intend to send any other headers using the [`send_header()`](#http.server.BaseHTTPRequestHandler.send_header "http.server.BaseHTTPRequestHandler.send_header") method, then [`send_response()`](#http.server.BaseHTTPRequestHandler.send_response "http.server.BaseHTTPRequestHandler.send_response") should be followed by an [`end_headers()`](#http.server.BaseHTTPRequestHandler.end_headers "http.server.BaseHTTPRequestHandler.end_headers") call.

Changed in version 3.3: Headers are stored to an internal buffer and [`end_headers()`](#http.server.BaseHTTPRequestHandler.end_headers "http.server.BaseHTTPRequestHandler.end_headers") needs to be called explicitly.

`send_header(keyword, value)`

Adds the HTTP header to an internal buffer which will be written to the output stream when either [`end_headers()`](#http.server.BaseHTTPRequestHandler.end_headers "http.server.BaseHTTPRequestHandler.end_headers") or [`flush_headers()`](#http.server.BaseHTTPRequestHandler.flush_headers "http.server.BaseHTTPRequestHandler.flush_headers") is invoked. *keyword* should specify the header keyword, with *value* specifying its value. Note that, after the send\_header calls are done, [`end_headers()`](#http.server.BaseHTTPRequestHandler.end_headers "http.server.BaseHTTPRequestHandler.end_headers") MUST BE called in order to complete the operation.

Changed in version 3.2: Headers are stored in an internal buffer.

`send_response_only(code, message=None)`

Sends the response header only; used when a `100 Continue` response is sent by the server to the client. The headers are not buffered and are sent directly to the output stream. If *message* is not specified, the HTTP message corresponding to the response *code* is sent.

New in version 3.2.

`end_headers()`

Adds a blank line (indicating the end of the HTTP headers in the response) to the headers buffer and calls [`flush_headers()`](#http.server.BaseHTTPRequestHandler.flush_headers "http.server.BaseHTTPRequestHandler.flush_headers").

Changed in version 3.2: The buffered headers are written to the output stream.

`flush_headers()`

Finally send the headers to the output stream and flush the internal headers buffer.

New in version 3.3.

`log_request(code='-', size='-')`

Logs an accepted (successful) request. *code* should specify the numeric HTTP code associated with the response. If a size of the response is available, then it should be passed as the *size* parameter.

`log_error(...)`

Logs an error when a request cannot be fulfilled. By default, it passes the message to [`log_message()`](#http.server.BaseHTTPRequestHandler.log_message "http.server.BaseHTTPRequestHandler.log_message"), so it takes the same arguments (*format* and additional values).

`log_message(format, ...)`

Logs an arbitrary message to `sys.stderr`. This is typically overridden to create custom error logging mechanisms. The *format* argument is a standard printf-style format string, where the additional arguments to [`log_message()`](#http.server.BaseHTTPRequestHandler.log_message "http.server.BaseHTTPRequestHandler.log_message") are applied as inputs to the formatting. The client IP address and current date and time are prefixed to every message logged.
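Putting the response methods above together, here is a minimal sketch of a handler subclass; the class name, port, and body text are illustrative, not part of the module:

```
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'Hello, world!\n'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; charset=utf-8')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()          # required after the send_header() calls
        self.wfile.write(body)      # write the body to the output stream

if __name__ == '__main__':
    HTTPServer(('', 8000), HelloHandler).serve_forever()
```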
`version_string()`

Returns the server software’s version string. This is a combination of the [`server_version`](#http.server.BaseHTTPRequestHandler.server_version "http.server.BaseHTTPRequestHandler.server_version") and [`sys_version`](#http.server.BaseHTTPRequestHandler.sys_version "http.server.BaseHTTPRequestHandler.sys_version") attributes.

`date_time_string(timestamp=None)`

Returns the date and time given by *timestamp* (which must be `None` or in the format returned by [`time.time()`](time#time.time "time.time")), formatted for a message header. If *timestamp* is omitted, it uses the current date and time. The result looks like `'Sun, 06 Nov 1994 08:49:37 GMT'`.

`log_date_time_string()`

Returns the current date and time, formatted for logging.

`address_string()`

Returns the client address.

Changed in version 3.3: Previously, a name lookup was performed. To avoid name resolution delays, it now always returns the IP address.

`class http.server.SimpleHTTPRequestHandler(request, client_address, server, directory=None)`

This class serves files from the directory *directory* and below, or the current directory if *directory* is not provided, directly mapping the directory structure to HTTP requests.

New in version 3.7: The *directory* parameter.

Changed in version 3.9: The *directory* parameter accepts a [path-like object](../glossary#term-path-like-object).

A lot of the work, such as parsing the request, is done by the base class [`BaseHTTPRequestHandler`](#http.server.BaseHTTPRequestHandler "http.server.BaseHTTPRequestHandler"). This class implements the [`do_GET()`](#http.server.SimpleHTTPRequestHandler.do_GET "http.server.SimpleHTTPRequestHandler.do_GET") and [`do_HEAD()`](#http.server.SimpleHTTPRequestHandler.do_HEAD "http.server.SimpleHTTPRequestHandler.do_HEAD") functions.

The following are defined as class-level attributes of [`SimpleHTTPRequestHandler`](#http.server.SimpleHTTPRequestHandler "http.server.SimpleHTTPRequestHandler"):

`server_version`

This will be `"SimpleHTTP/" + __version__`, where `__version__` is defined at the module level.

`extensions_map`

A dictionary mapping suffixes into MIME types, containing custom overrides for the default system mappings. The mapping is used case-insensitively, and so should contain only lower-cased keys.

Changed in version 3.9: This dictionary is no longer filled with the default system mappings, but only contains overrides.

The [`SimpleHTTPRequestHandler`](#http.server.SimpleHTTPRequestHandler "http.server.SimpleHTTPRequestHandler") class defines the following methods:

`do_HEAD()`

This method serves the `'HEAD'` request type: it sends the headers it would send for the equivalent `GET` request. See the [`do_GET()`](#http.server.SimpleHTTPRequestHandler.do_GET "http.server.SimpleHTTPRequestHandler.do_GET") method for a more complete explanation of the possible headers.

`do_GET()`

The request is mapped to a local file by interpreting the request as a path relative to the current working directory.

If the request was mapped to a directory, the directory is checked for a file named `index.html` or `index.htm` (in that order). If found, the file’s contents are returned; otherwise a directory listing is generated by calling the `list_directory()` method. This method uses [`os.listdir()`](os#os.listdir "os.listdir") to scan the directory, and returns a `404` error response if [`listdir()`](os#os.listdir "os.listdir") fails.

If the request was mapped to a file, it is opened.
Any [`OSError`](exceptions#OSError "OSError") exception in opening the requested file is mapped to a `404`, `'File not found'` error. If there was a `'If-Modified-Since'` header in the request, and the file was not modified after this time, a `304`, `'Not Modified'` response is sent.

Otherwise, the content type is guessed by calling the `guess_type()` method, which in turn uses the *extensions\_map* variable, and the file contents are returned. A `'Content-type:'` header with the guessed content type is output, followed by a `'Content-Length:'` header with the file’s size and a `'Last-Modified:'` header with the file’s modification time. Then follows a blank line signifying the end of the headers, and then the contents of the file are output. If the file’s MIME type starts with `text/` the file is opened in text mode; otherwise binary mode is used.

For example usage, see the implementation of the [`test()`](test#module-test "test: Regression tests package containing the testing suite for Python.") function invocation in the [`http.server`](#module-http.server "http.server: HTTP server and request handlers.") module.

Changed in version 3.7: Support of the `'If-Modified-Since'` header.

The [`SimpleHTTPRequestHandler`](#http.server.SimpleHTTPRequestHandler "http.server.SimpleHTTPRequestHandler") class can be used in the following manner in order to create a very basic webserver serving files relative to the current directory:

```
import http.server
import socketserver

PORT = 8000

Handler = http.server.SimpleHTTPRequestHandler

with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print("serving at port", PORT)
    httpd.serve_forever()
```

[`http.server`](#module-http.server "http.server: HTTP server and request handlers.") can also be invoked directly using the [`-m`](../using/cmdline#cmdoption-m) switch of the interpreter. Similar to the previous example, this serves files relative to the current directory:

```
python -m http.server
```

The server listens to port 8000 by default. The default can be overridden by passing the desired port number as an argument:

```
python -m http.server 9000
```

By default, the server binds itself to all interfaces. The option `-b/--bind` specifies a specific address to which it should bind. Both IPv4 and IPv6 addresses are supported. For example, the following command causes the server to bind to localhost only:

```
python -m http.server --bind 127.0.0.1
```

New in version 3.4: `--bind` argument was introduced.

New in version 3.8: `--bind` argument enhanced to support IPv6.

By default, the server uses the current directory. The option `-d/--directory` specifies a directory from which it should serve the files. For example, the following command uses a specific directory:

```
python -m http.server --directory /tmp/
```

New in version 3.7: `--directory` argument was introduced.

`class http.server.CGIHTTPRequestHandler(request, client_address, server)`

This class is used to serve either files or output of CGI scripts from the current directory and below. Note that mapping HTTP hierarchic structure to local directory structure is exactly as in [`SimpleHTTPRequestHandler`](#http.server.SimpleHTTPRequestHandler "http.server.SimpleHTTPRequestHandler").

Note

CGI scripts run by the [`CGIHTTPRequestHandler`](#http.server.CGIHTTPRequestHandler "http.server.CGIHTTPRequestHandler") class cannot execute redirects (HTTP code 302), because code 200 (script output follows) is sent prior to execution of the CGI script. This pre-empts the status code.
The class will, however, run the CGI script instead of serving it as a file, if it guesses it to be a CGI script. Only directory-based CGI scripts are used — the other common server configuration is to treat special extensions as denoting CGI scripts.

The `do_GET()` and `do_HEAD()` functions are modified to run CGI scripts and serve the output, instead of serving files, if the request leads to somewhere below the `cgi_directories` path.

The [`CGIHTTPRequestHandler`](#http.server.CGIHTTPRequestHandler "http.server.CGIHTTPRequestHandler") defines the following data member:

`cgi_directories`

This defaults to `['/cgi-bin', '/htbin']` and describes directories to treat as containing CGI scripts.

The [`CGIHTTPRequestHandler`](#http.server.CGIHTTPRequestHandler "http.server.CGIHTTPRequestHandler") defines the following method:

`do_POST()`

This method serves the `'POST'` request type, only allowed for CGI scripts. Error 501, “Can only POST to CGI scripts”, is output when trying to POST to a non-CGI URL.

Note that CGI scripts will be run with the UID of user nobody, for security reasons. Problems with the CGI script will be translated to error 403.

[`CGIHTTPRequestHandler`](#http.server.CGIHTTPRequestHandler "http.server.CGIHTTPRequestHandler") can be enabled in the command line by passing the `--cgi` option:

```
python -m http.server --cgi
```

Security Considerations
-----------------------

[`SimpleHTTPRequestHandler`](#http.server.SimpleHTTPRequestHandler "http.server.SimpleHTTPRequestHandler") will follow symbolic links when handling requests, which makes it possible for files outside of the specified directory to be served.
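Given the symbolic-link caveat above, one common precaution is to serve an explicitly chosen directory rather than whatever the process's current working directory happens to be. A minimal sketch, where `/srv/www` and the address are illustrative placeholders:

```
import functools
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# Bind the handler to a fixed directory instead of the current working
# directory (the directory parameter requires Python 3.7+).
handler = functools.partial(SimpleHTTPRequestHandler, directory='/srv/www')

with ThreadingHTTPServer(('127.0.0.1', 8000), handler) as httpd:
    httpd.serve_forever()
```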
python email.message: Representing an email message

email.message: Representing an email message
============================================

**Source code:** [Lib/email/message.py](https://github.com/python/cpython/tree/3.9/Lib/email/message.py)

New in version 3.6.

The central class in the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package is the [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") class, imported from the [`email.message`](#module-email.message "email.message: The base class representing email messages.") module. It is the base class for the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") object model. [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") provides the core functionality for setting and querying header fields, for accessing message bodies, and for creating or modifying structured messages.

An email message consists of *headers* and a *payload* (which is also referred to as the *content*). Headers are [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html) or [**RFC 6532**](https://tools.ietf.org/html/rfc6532.html) style field names and values, where the field name and value are separated by a colon. The colon is not part of either the field name or the field value. The payload may be a simple text message, or a binary object, or a structured sequence of sub-messages each with their own set of headers and their own payload. The latter type of payload is indicated by the message having a MIME type such as *multipart/\** or *message/rfc822*.

The conceptual model provided by an [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") object is that of an ordered dictionary of headers coupled with a *payload* that represents the [**RFC 5322**](https://tools.ietf.org/html/rfc5322.html) body of the message, which might be a list of sub-`EmailMessage` objects. In addition to the normal dictionary methods for accessing the header names and values, there are methods for accessing specialized information from the headers (for example the MIME content type), for operating on the payload, for generating a serialized version of the message, and for recursively walking over the object tree.

The [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") dictionary-like interface is indexed by the header names, which must be ASCII values. The values of the dictionary are strings with some extra methods. Headers are stored and returned in case-preserving form, but field names are matched case-insensitively. Unlike a real dict, there is an ordering to the keys, and there can be duplicate keys. Additional methods are provided for working with headers that have duplicate keys.

The *payload* is either a string or bytes object, in the case of simple message objects, or a list of [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") objects, for MIME container documents such as *multipart/\** and *message/rfc822* message objects.

`class email.message.EmailMessage(policy=default)`

If *policy* is specified use the rules it specifies to update and serialize the representation of the message.
If *policy* is not set, use the [`default`](email.policy#email.policy.default "email.policy.default") policy, which follows the rules of the email RFCs except for line endings (instead of the RFC mandated `\r\n`, it uses the Python standard `\n` line endings). For more information see the [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages") documentation.

`as_string(unixfrom=False, maxheaderlen=None, policy=None)`

Return the entire message flattened as a string. When optional *unixfrom* is true, the envelope header is included in the returned string. *unixfrom* defaults to `False`. For backward compatibility with the base [`Message`](email.compat32-message#email.message.Message "email.message.Message") class *maxheaderlen* is accepted, but defaults to `None`, which means that by default the line length is controlled by the `max_line_length` of the policy. The *policy* argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified *policy* will be passed to the [`Generator`](email.generator#email.generator.Generator "email.generator.Generator").

Flattening the message may trigger changes to the [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") if defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).

Note that this method is provided as a convenience and may not be the most useful way to serialize messages in your application, especially if you are dealing with multiple messages. See [`email.generator.Generator`](email.generator#email.generator.Generator "email.generator.Generator") for a more flexible API for serializing messages. Note also that this method is restricted to producing messages serialized as “7 bit clean” when [`utf8`](email.policy#email.policy.EmailPolicy.utf8 "email.policy.EmailPolicy.utf8") is `False`, which is the default.

Changed in version 3.6: the default behavior when *maxheaderlen* is not specified was changed from defaulting to 0 to defaulting to the value of *max\_line\_length* from the policy.

`__str__()`

Equivalent to `as_string(policy=self.policy.clone(utf8=True))`. Allows `str(msg)` to produce a string containing the serialized message in a readable format.

Changed in version 3.4: the method was changed to use `utf8=True`, thus producing an [**RFC 6531**](https://tools.ietf.org/html/rfc6531.html)-like message representation, instead of being a direct alias for [`as_string()`](#email.message.EmailMessage.as_string "email.message.EmailMessage.as_string").

`as_bytes(unixfrom=False, policy=None)`

Return the entire message flattened as a bytes object. When optional *unixfrom* is true, the envelope header is included in the returned string. *unixfrom* defaults to `False`.

The *policy* argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified *policy* will be passed to the [`BytesGenerator`](email.generator#email.generator.BytesGenerator "email.generator.BytesGenerator").

Flattening the message may trigger changes to the [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") if defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).
Note that this method is provided as a convenience and may not be the most useful way to serialize messages in your application, especially if you are dealing with multiple messages. See [`email.generator.BytesGenerator`](email.generator#email.generator.BytesGenerator "email.generator.BytesGenerator") for a more flexible API for serializing messages.

`__bytes__()`

Equivalent to [`as_bytes()`](#email.message.EmailMessage.as_bytes "email.message.EmailMessage.as_bytes"). Allows `bytes(msg)` to produce a bytes object containing the serialized message.

`is_multipart()`

Return `True` if the message’s payload is a list of sub-[`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") objects, otherwise return `False`. When [`is_multipart()`](#email.message.EmailMessage.is_multipart "email.message.EmailMessage.is_multipart") returns `False`, the payload should be a string object (which might be a CTE encoded binary payload). Note that [`is_multipart()`](#email.message.EmailMessage.is_multipart "email.message.EmailMessage.is_multipart") returning `True` does not necessarily mean that `msg.get_content_maintype() == 'multipart'` will return `True`. For example, `is_multipart` will return `True` when the [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") is of type `message/rfc822`.

`set_unixfrom(unixfrom)`

Set the message’s envelope header to *unixfrom*, which should be a string. (See [`mboxMessage`](mailbox#mailbox.mboxMessage "mailbox.mboxMessage") for a brief description of this header.)

`get_unixfrom()`

Return the message’s envelope header. Defaults to `None` if the envelope header was never set.

The following methods implement the mapping-like interface for accessing the message’s headers. Note that there are some semantic differences between these methods and a normal mapping (i.e. dictionary) interface. For example, in a dictionary there are no duplicate keys, but here there may be duplicate message headers. Also, in dictionaries there is no guaranteed order to the keys returned by [`keys()`](#email.message.EmailMessage.keys "email.message.EmailMessage.keys"), but in an [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") object, headers are always returned in the order they appeared in the original message, or in which they were added to the message later. Any header deleted and then re-added is always appended to the end of the header list.

These semantic differences are intentional and are biased toward convenience in the most common use cases. Note that in all cases, any envelope header present in the message is not included in the mapping interface.

`__len__()`

Return the total number of headers, including duplicates.

`__contains__(name)`

Return `True` if the message object has a field named *name*. Matching is done without regard to case and *name* does not include the trailing colon. Used for the `in` operator. For example:

```
if 'message-id' in myMessage:
    print('Message-ID:', myMessage['message-id'])
```

`__getitem__(name)`

Return the value of the named header field. *name* does not include the colon field separator. If the header is missing, `None` is returned; a [`KeyError`](exceptions#KeyError "KeyError") is never raised. Note that if the named field appears more than once in the message’s headers, exactly which of those field values will be returned is undefined.
Use the [`get_all()`](#email.message.EmailMessage.get_all "email.message.EmailMessage.get_all") method to get the values of all the extant headers named *name*. Using the standard (non-`compat32`) policies, the returned value is an instance of a subclass of [`email.headerregistry.BaseHeader`](email.headerregistry#email.headerregistry.BaseHeader "email.headerregistry.BaseHeader").

`__setitem__(name, val)`

Add a header to the message with field name *name* and value *val*. The field is appended to the end of the message’s existing headers. Note that this does *not* overwrite or delete any existing header with the same name. If you want to ensure that the new header is the only one present in the message with field name *name*, delete the field first, e.g.:

```
del msg['subject']
msg['subject'] = 'Python roolz!'
```

If the `policy` defines certain headers to be unique (as the standard policies do), this method may raise a [`ValueError`](exceptions#ValueError "ValueError") when an attempt is made to assign a value to such a header when one already exists. This behavior is intentional for consistency’s sake, but do not depend on it as we may choose to make such assignments do an automatic deletion of the existing header in the future.

`__delitem__(name)`

Delete all occurrences of the field with name *name* from the message’s headers. No exception is raised if the named field isn’t present in the headers.

`keys()`

Return a list of all the message’s header field names.

`values()`

Return a list of all the message’s field values.

`items()`

Return a list of 2-tuples containing all the message’s field headers and values.

`get(name, failobj=None)`

Return the value of the named header field. This is identical to [`__getitem__()`](#email.message.EmailMessage.__getitem__ "email.message.EmailMessage.__getitem__") except that optional *failobj* is returned if the named header is missing (*failobj* defaults to `None`).

Here are some additional useful header related methods:

`get_all(name, failobj=None)`

Return a list of all the values for the field named *name*. If there are no such named headers in the message, *failobj* is returned (defaults to `None`).

`add_header(_name, _value, **_params)`

Extended header setting. This method is similar to [`__setitem__()`](#email.message.EmailMessage.__setitem__ "email.message.EmailMessage.__setitem__") except that additional header parameters can be provided as keyword arguments. *\_name* is the header field to add and *\_value* is the *primary* value for the header.

For each item in the keyword argument dictionary *\_params*, the key is taken as the parameter name, with underscores converted to dashes (since dashes are illegal in Python identifiers). Normally, the parameter will be added as `key="value"` unless the value is `None`, in which case only the key will be added.

If the value contains non-ASCII characters, the charset and language may be explicitly controlled by specifying the value as a three tuple in the format `(CHARSET, LANGUAGE, VALUE)`, where `CHARSET` is a string naming the charset to be used to encode the value, `LANGUAGE` can usually be set to `None` or the empty string (see [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html) for other possibilities), and `VALUE` is the string value containing non-ASCII code points. If a three tuple is not passed and the value contains non-ASCII characters, it is automatically encoded in [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html) format using a `CHARSET` of `utf-8` and a `LANGUAGE` of `None`.
Here is an example:

```
msg.add_header('Content-Disposition', 'attachment', filename='bud.gif')
```

This will add a header that looks like

```
Content-Disposition: attachment; filename="bud.gif"
```

An example of the extended interface with non-ASCII characters:

```
msg.add_header('Content-Disposition', 'attachment',
               filename=('iso-8859-1', '', 'Fußballer.ppt'))
```

`replace_header(_name, _value)`

Replace a header. Replace the first header found in the message that matches *\_name*, retaining header order and field name case of the original header. If no matching header is found, raise a [`KeyError`](exceptions#KeyError "KeyError").

`get_content_type()`

Return the message’s content type, coerced to lower case of the form *maintype/subtype*. If there is no *Content-Type* header in the message return the value returned by [`get_default_type()`](#email.message.EmailMessage.get_default_type "email.message.EmailMessage.get_default_type"). If the *Content-Type* header is invalid, return `text/plain`.

(According to [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html), messages always have a default type, [`get_content_type()`](#email.message.EmailMessage.get_content_type "email.message.EmailMessage.get_content_type") will always return a value. [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html) defines a message’s default type to be *text/plain* unless it appears inside a *multipart/digest* container, in which case it would be *message/rfc822*. If the *Content-Type* header has an invalid type specification, [**RFC 2045**](https://tools.ietf.org/html/rfc2045.html) mandates that the default type be *text/plain*.)

`get_content_maintype()`

Return the message’s main content type. This is the *maintype* part of the string returned by [`get_content_type()`](#email.message.EmailMessage.get_content_type "email.message.EmailMessage.get_content_type").

`get_content_subtype()`

Return the message’s sub-content type. This is the *subtype* part of the string returned by [`get_content_type()`](#email.message.EmailMessage.get_content_type "email.message.EmailMessage.get_content_type").

`get_default_type()`

Return the default content type. Most messages have a default content type of *text/plain*, except for messages that are subparts of *multipart/digest* containers. Such subparts have a default content type of *message/rfc822*.

`set_default_type(ctype)`

Set the default content type. *ctype* should either be *text/plain* or *message/rfc822*, although this is not enforced. The default content type is not stored in the *Content-Type* header, so it only affects the return value of the `get_content_type` methods when no *Content-Type* header is present in the message.

`set_param(param, value, header='Content-Type', requote=True, charset=None, language='', replace=False)`

Set a parameter in the *Content-Type* header. If the parameter already exists in the header, replace its value with *value*. When *header* is `Content-Type` (the default) and the header does not yet exist in the message, add it, set its value to *text/plain*, and append the new parameter value. Optional *header* specifies an alternative header to *Content-Type*.

If the value contains non-ASCII characters, the charset and language may be explicitly specified using the optional *charset* and *language* parameters. Optional *language* specifies the [**RFC 2231**](https://tools.ietf.org/html/rfc2231.html) language, defaulting to the empty string. Both *charset* and *language* should be strings.
The default is to use the `utf8` *charset* and `None` for the *language*.

If *replace* is `False` (the default) the header is moved to the end of the list of headers. If *replace* is `True`, the header will be updated in place.

Use of the *requote* parameter with [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") objects is deprecated.

Note that existing parameter values of headers may be accessed through the `params` attribute of the header value (for example, `msg['Content-Type'].params['charset']`).

Changed in version 3.4: `replace` keyword was added.

`del_param(param, header='content-type', requote=True)`

Remove the given parameter completely from the *Content-Type* header. The header will be re-written in place without the parameter or its value. Optional *header* specifies an alternative to *Content-Type*.

Use of the *requote* parameter with [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") objects is deprecated.

`get_filename(failobj=None)`

Return the value of the `filename` parameter of the *Content-Disposition* header of the message. If the header does not have a `filename` parameter, this method falls back to looking for the `name` parameter on the *Content-Type* header. If neither is found, or the header is missing, then *failobj* is returned. The returned string will always be unquoted as per [`email.utils.unquote()`](email.utils#email.utils.unquote "email.utils.unquote").

`get_boundary(failobj=None)`

Return the value of the `boundary` parameter of the *Content-Type* header of the message, or *failobj* if either the header is missing, or has no `boundary` parameter. The returned string will always be unquoted as per [`email.utils.unquote()`](email.utils#email.utils.unquote "email.utils.unquote").

`set_boundary(boundary)`

Set the `boundary` parameter of the *Content-Type* header to *boundary*. [`set_boundary()`](#email.message.EmailMessage.set_boundary "email.message.EmailMessage.set_boundary") will always quote *boundary* if necessary. A [`HeaderParseError`](email.errors#email.errors.HeaderParseError "email.errors.HeaderParseError") is raised if the message object has no *Content-Type* header.

Note that using this method is subtly different from deleting the old *Content-Type* header and adding a new one with the new boundary via [`add_header()`](#email.message.EmailMessage.add_header "email.message.EmailMessage.add_header"), because [`set_boundary()`](#email.message.EmailMessage.set_boundary "email.message.EmailMessage.set_boundary") preserves the order of the *Content-Type* header in the list of headers.

`get_content_charset(failobj=None)`

Return the `charset` parameter of the *Content-Type* header, coerced to lower case. If there is no *Content-Type* header, or if that header has no `charset` parameter, *failobj* is returned.

`get_charsets(failobj=None)`

Return a list containing the character set names in the message. If the message is a *multipart*, then the list will contain one element for each subpart in the payload, otherwise, it will be a list of length 1.

Each item in the list will be a string which is the value of the `charset` parameter in the *Content-Type* header for the represented subpart. If the subpart has no *Content-Type* header, no `charset` parameter, or is not of the *text* main MIME type, then that item in the returned list will be *failobj*.

`is_attachment()`

Return `True` if there is a *Content-Disposition* header and its (case insensitive) value is `attachment`, `False` otherwise.
Changed in version 3.4.2: is\_attachment is now a method instead of a property, for consistency with [`is_multipart()`](email.compat32-message#email.message.Message.is_multipart "email.message.Message.is_multipart"). `get_content_disposition()` Return the lowercased value (without parameters) of the message’s *Content-Disposition* header if it has one, or `None`. The possible values for this method are *inline*, *attachment* or `None` if the message follows [**RFC 2183**](https://tools.ietf.org/html/rfc2183.html). New in version 3.5. The following methods relate to interrogating and manipulating the content (payload) of the message. `walk()` The [`walk()`](#email.message.EmailMessage.walk "email.message.EmailMessage.walk") method is an all-purpose generator which can be used to iterate over all the parts and subparts of a message object tree, in depth-first traversal order. You will typically use [`walk()`](#email.message.EmailMessage.walk "email.message.EmailMessage.walk") as the iterator in a `for` loop; each iteration returns the next subpart. Here’s an example that prints the MIME type of every part of a multipart message structure:

```
>>> for part in msg.walk():
...     print(part.get_content_type())
multipart/report
text/plain
message/delivery-status
text/plain
text/plain
message/rfc822
text/plain
```

`walk` iterates over the subparts of any part where [`is_multipart()`](#email.message.EmailMessage.is_multipart "email.message.EmailMessage.is_multipart") returns `True`, even though `msg.get_content_maintype() == 'multipart'` may return `False`. We can see this in our example by making use of the `_structure` debug helper function:

```
>>> from email.iterators import _structure
>>> for part in msg.walk():
...     print(part.get_content_maintype() == 'multipart',
...           part.is_multipart())
True True
False False
False True
False False
False False
False True
False False
>>> _structure(msg)
multipart/report
    text/plain
    message/delivery-status
        text/plain
    text/plain
    message/rfc822
        text/plain
```

Here the `message` parts are not `multiparts`, but they do contain subparts. `is_multipart()` returns `True` and `walk` descends into the subparts. `get_body(preferencelist=('related', 'html', 'plain'))` Return the MIME part that is the best candidate to be the “body” of the message. *preferencelist* must be a sequence of strings from the set `related`, `html`, and `plain`, and indicates the order of preference for the content type of the part returned. Start looking for candidate matches with the object on which the `get_body` method is called. If `related` is not included in *preferencelist*, consider the root part (or subpart of the root part) of any related encountered as a candidate if the (sub-)part matches a preference. When encountering a `multipart/related`, check the `start` parameter and if a part with a matching *Content-ID* is found, consider only it when looking for candidate matches. Otherwise consider only the first (default root) part of the `multipart/related`. If a part has a *Content-Disposition* header, only consider the part a candidate match if the value of the header is `inline`. If none of the candidates matches any of the preferences in *preferencelist*, return `None`. Notes: (1) For most applications the only *preferencelist* combinations that really make sense are `('plain',)`, `('html', 'plain')`, and the default `('related', 'html', 'plain')`.
(2) Because matching starts with the object on which `get_body` is called, calling `get_body` on a `multipart/related` will return the object itself unless *preferencelist* has a non-default value. (3) Messages (or message parts) that do not specify a *Content-Type* or whose *Content-Type* header is invalid will be treated as if they are of type `text/plain`, which may occasionally cause `get_body` to return unexpected results. `iter_attachments()` Return an iterator over all of the immediate sub-parts of the message that are not candidate “body” parts. That is, skip the first occurrence of each of `text/plain`, `text/html`, `multipart/related`, or `multipart/alternative` (unless they are explicitly marked as attachments via *Content-Disposition: attachment*), and return all remaining parts. When applied directly to a `multipart/related`, return an iterator over all the related parts except the root part (i.e., the part pointed to by the `start` parameter, or the first part if there is no `start` parameter or the `start` parameter doesn’t match the *Content-ID* of any of the parts). When applied directly to a `multipart/alternative` or a non-`multipart`, return an empty iterator. `iter_parts()` Return an iterator over all of the immediate sub-parts of the message, which will be empty for a non-`multipart`. (See also [`walk()`](#email.message.EmailMessage.walk "email.message.EmailMessage.walk").) `get_content(*args, content_manager=None, **kw)` Call the [`get_content()`](email.contentmanager#email.contentmanager.ContentManager.get_content "email.contentmanager.ContentManager.get_content") method of the *content\_manager*, passing self as the message object, and passing along any other arguments or keywords as additional arguments. If *content\_manager* is not specified, use the `content_manager` specified by the current [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages"). `set_content(*args, content_manager=None, **kw)` Call the [`set_content()`](email.contentmanager#email.contentmanager.ContentManager.set_content "email.contentmanager.ContentManager.set_content") method of the *content\_manager*, passing self as the message object, and passing along any other arguments or keywords as additional arguments. If *content\_manager* is not specified, use the `content_manager` specified by the current [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages"). `make_related(boundary=None)` Convert a non-`multipart` message into a `multipart/related` message, moving any existing *Content-* headers and payload into a (new) first part of the `multipart`. If *boundary* is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized). `make_alternative(boundary=None)` Convert a non-`multipart` or a `multipart/related` into a `multipart/alternative`, moving any existing *Content-* headers and payload into a (new) first part of the `multipart`. If *boundary* is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized). `make_mixed(boundary=None)` Convert a non-`multipart`, a `multipart/related`, or a `multipart/alternative` into a `multipart/mixed`, moving any existing *Content-* headers and payload into a (new) first part of the `multipart`.
If *boundary* is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized). `add_related(*args, content_manager=None, **kw)` If the message is a `multipart/related`, create a new message object, pass all of the arguments to its [`set_content()`](#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content") method, and [`attach()`](email.compat32-message#email.message.Message.attach "email.message.Message.attach") it to the `multipart`. If the message is a non-`multipart`, call [`make_related()`](#email.message.EmailMessage.make_related "email.message.EmailMessage.make_related") and then proceed as above. If the message is any other type of `multipart`, raise a [`TypeError`](exceptions#TypeError "TypeError"). If *content\_manager* is not specified, use the `content_manager` specified by the current [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages"). If the added part has no *Content-Disposition* header, add one with the value `inline`. `add_alternative(*args, content_manager=None, **kw)` If the message is a `multipart/alternative`, create a new message object, pass all of the arguments to its [`set_content()`](#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content") method, and [`attach()`](email.compat32-message#email.message.Message.attach "email.message.Message.attach") it to the `multipart`. If the message is a non-`multipart` or `multipart/related`, call [`make_alternative()`](#email.message.EmailMessage.make_alternative "email.message.EmailMessage.make_alternative") and then proceed as above. If the message is any other type of `multipart`, raise a [`TypeError`](exceptions#TypeError "TypeError"). If *content\_manager* is not specified, use the `content_manager` specified by the current [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages"). `add_attachment(*args, content_manager=None, **kw)` If the message is a `multipart/mixed`, create a new message object, pass all of the arguments to its [`set_content()`](#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content") method, and [`attach()`](email.compat32-message#email.message.Message.attach "email.message.Message.attach") it to the `multipart`. If the message is a non-`multipart`, `multipart/related`, or `multipart/alternative`, call [`make_mixed()`](#email.message.EmailMessage.make_mixed "email.message.EmailMessage.make_mixed") and then proceed as above. If *content\_manager* is not specified, use the `content_manager` specified by the current [`policy`](email.policy#module-email.policy "email.policy: Controlling the parsing and generating of messages"). If the added part has no *Content-Disposition* header, add one with the value `attachment`. This method can be used both for explicit attachments (*Content-Disposition: attachment*) and `inline` attachments (*Content-Disposition: inline*), by passing appropriate options to the `content_manager`. `clear()` Remove the payload and all of the headers. `clear_content()` Remove the payload and all of the `Content-` headers, leaving all other headers intact and in their original order. 
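To tie these content-manipulation methods together, here is a minimal sketch (illustrative only, not taken from the reference text; the addresses, body text, and attachment bytes are invented) that builds up a typical mixed message:

```
from email.message import EmailMessage

msg = EmailMessage()
msg['Subject'] = 'Monthly report'
msg['From'] = 'sender@example.com'
msg['To'] = 'recipient@example.com'

# set_content() installs the initial text/plain body.
msg.set_content('The report is attached.')

# add_alternative() calls make_alternative() behind the scenes,
# turning the message into a multipart/alternative.
msg.add_alternative('<p>The report is <b>attached</b>.</p>', subtype='html')

# add_attachment() calls make_mixed(), then attaches a part carrying
# a Content-Disposition: attachment header.
msg.add_attachment(b'%PDF-1.4 ...', maintype='application',
                   subtype='pdf', filename='report.pdf')

for part in msg.walk():
    print(part.get_content_type())
# multipart/mixed
# multipart/alternative
# text/plain
# text/html
# application/pdf
```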
[`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage") objects have the following instance attributes: `preamble` The format of a MIME document allows for some text between the blank line following the headers, and the first multipart boundary string. Normally, this text is never visible in a MIME-aware mail reader because it falls outside the standard MIME armor. However, when viewing the raw text of the message, or when viewing the message in a non-MIME aware reader, this text can become visible. The *preamble* attribute contains this leading extra-armor text for MIME documents. When the [`Parser`](email.parser#email.parser.Parser "email.parser.Parser") discovers some text after the headers but before the first boundary string, it assigns this text to the message’s *preamble* attribute. When the [`Generator`](email.generator#email.generator.Generator "email.generator.Generator") is writing out the plain text representation of a MIME message, and it finds the message has a *preamble* attribute, it will write this text in the area between the headers and the first boundary. See [`email.parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") and [`email.generator`](email.generator#module-email.generator "email.generator: Generate flat text email messages from a message structure.") for details. Note that if the message object has no preamble, the *preamble* attribute will be `None`. `epilogue` The *epilogue* attribute acts the same way as the *preamble* attribute, except that it contains text that appears between the last boundary and the end of the message. As with the [`preamble`](#email.message.EmailMessage.preamble "email.message.EmailMessage.preamble"), if there is no epilog text this attribute will be `None`. `defects` The *defects* attribute contains a list of all the problems found when parsing this message. See [`email.errors`](email.errors#module-email.errors "email.errors: The exception classes used by the email package.") for a detailed description of the possible parsing defects. `class email.message.MIMEPart(policy=default)` This class represents a subpart of a MIME message. It is identical to [`EmailMessage`](#email.message.EmailMessage "email.message.EmailMessage"), except that no *MIME-Version* headers are added when [`set_content()`](#email.message.EmailMessage.set_content "email.message.EmailMessage.set_content") is called, since sub-parts do not need their own *MIME-Version* headers. #### Footnotes `1` Originally added in 3.4 as a [provisional module](../glossary#term-provisional-package). Docs for legacy message class moved to [email.message.Message: Representing an email message using the compat32 API](email.compat32-message#compat32-message).
python cmath — Mathematical functions for complex numbers cmath — Mathematical functions for complex numbers ================================================== This module provides access to mathematical functions for complex numbers. The functions in this module accept integers, floating-point numbers or complex numbers as arguments. They will also accept any Python object that has either a [`__complex__()`](../reference/datamodel#object.__complex__ "object.__complex__") or a [`__float__()`](../reference/datamodel#object.__float__ "object.__float__") method: these methods are used to convert the object to a complex or floating-point number, respectively, and the function is then applied to the result of the conversion. Note On platforms with hardware and system-level support for signed zeros, functions involving branch cuts are continuous on *both* sides of the branch cut: the sign of the zero distinguishes one side of the branch cut from the other. On platforms that do not support signed zeros the continuity is as specified below. Conversions to and from polar coordinates ----------------------------------------- A Python complex number `z` is stored internally using *rectangular* or *Cartesian* coordinates. It is completely determined by its *real part* `z.real` and its *imaginary part* `z.imag`. In other words:

```
z == z.real + z.imag*1j
```

*Polar coordinates* give an alternative way to represent a complex number. In polar coordinates, a complex number *z* is defined by the modulus *r* and the phase angle *phi*. The modulus *r* is the distance from *z* to the origin, while the phase *phi* is the counterclockwise angle, measured in radians, from the positive x-axis to the line segment that joins the origin to *z*. The following functions can be used to convert from the native rectangular coordinates to polar coordinates and back. `cmath.phase(x)` Return the phase of *x* (also known as the *argument* of *x*), as a float. `phase(x)` is equivalent to `math.atan2(x.imag, x.real)`. The result lies in the range [-*π*, *π*], and the branch cut for this operation lies along the negative real axis, continuous from above. On systems with support for signed zeros (which includes most systems in current use), this means that the sign of the result is the same as the sign of `x.imag`, even when `x.imag` is zero:

```
>>> phase(complex(-1.0, 0.0))
3.141592653589793
>>> phase(complex(-1.0, -0.0))
-3.141592653589793
```

Note The modulus (absolute value) of a complex number *x* can be computed using the built-in [`abs()`](functions#abs "abs") function. There is no separate [`cmath`](#module-cmath "cmath: Mathematical functions for complex numbers.") module function for this operation. `cmath.polar(x)` Return the representation of *x* in polar coordinates. Returns a pair `(r, phi)` where *r* is the modulus of *x* and *phi* is the phase of *x*. `polar(x)` is equivalent to `(abs(x), phase(x))`. `cmath.rect(r, phi)` Return the complex number *x* with polar coordinates *r* and *phi*. Equivalent to `r * (math.cos(phi) + math.sin(phi)*1j)`. Power and logarithmic functions ------------------------------- `cmath.exp(x)` Return *e* raised to the power *x*, where *e* is the base of natural logarithms. `cmath.log(x[, base])` Returns the logarithm of *x* to the given *base*. If the *base* is not specified, returns the natural logarithm of *x*. There is one branch cut, from 0 along the negative real axis to -∞, continuous from above. `cmath.log10(x)` Return the base-10 logarithm of *x*.
This has the same branch cut as [`log()`](#cmath.log "cmath.log"). `cmath.sqrt(x)` Return the square root of *x*. This has the same branch cut as [`log()`](#cmath.log "cmath.log"). Trigonometric functions ----------------------- `cmath.acos(x)` Return the arc cosine of *x*. There are two branch cuts: One extends right from 1 along the real axis to ∞, continuous from below. The other extends left from -1 along the real axis to -∞, continuous from above. `cmath.asin(x)` Return the arc sine of *x*. This has the same branch cuts as [`acos()`](#cmath.acos "cmath.acos"). `cmath.atan(x)` Return the arc tangent of *x*. There are two branch cuts: One extends from `1j` along the imaginary axis to `∞j`, continuous from the right. The other extends from `-1j` along the imaginary axis to `-∞j`, continuous from the left. `cmath.cos(x)` Return the cosine of *x*. `cmath.sin(x)` Return the sine of *x*. `cmath.tan(x)` Return the tangent of *x*. Hyperbolic functions -------------------- `cmath.acosh(x)` Return the inverse hyperbolic cosine of *x*. There is one branch cut, extending left from 1 along the real axis to -∞, continuous from above. `cmath.asinh(x)` Return the inverse hyperbolic sine of *x*. There are two branch cuts: One extends from `1j` along the imaginary axis to `∞j`, continuous from the right. The other extends from `-1j` along the imaginary axis to `-∞j`, continuous from the left. `cmath.atanh(x)` Return the inverse hyperbolic tangent of *x*. There are two branch cuts: One extends from `1` along the real axis to `∞`, continuous from below. The other extends from `-1` along the real axis to `-∞`, continuous from above. `cmath.cosh(x)` Return the hyperbolic cosine of *x*. `cmath.sinh(x)` Return the hyperbolic sine of *x*. `cmath.tanh(x)` Return the hyperbolic tangent of *x*. Classification functions ------------------------ `cmath.isfinite(x)` Return `True` if both the real and imaginary parts of *x* are finite, and `False` otherwise. New in version 3.2. `cmath.isinf(x)` Return `True` if either the real or the imaginary part of *x* is an infinity, and `False` otherwise. `cmath.isnan(x)` Return `True` if either the real or the imaginary part of *x* is a NaN, and `False` otherwise. `cmath.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)` Return `True` if the values *a* and *b* are close to each other and `False` otherwise. Whether or not two values are considered close is determined according to given absolute and relative tolerances. *rel\_tol* is the relative tolerance – it is the maximum allowed difference between *a* and *b*, relative to the larger absolute value of *a* or *b*. For example, to set a tolerance of 5%, pass `rel_tol=0.05`. The default tolerance is `1e-09`, which assures that the two values are the same within about 9 decimal digits. *rel\_tol* must be greater than zero. *abs\_tol* is the minimum absolute tolerance – useful for comparisons near zero. *abs\_tol* must be at least zero. If no errors occur, the result will be: `abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)`. The IEEE 754 special values of `NaN`, `inf`, and `-inf` will be handled according to IEEE rules. Specifically, `NaN` is not considered close to any other value, including `NaN`. `inf` and `-inf` are only considered close to themselves. New in version 3.5. See also [**PEP 485**](https://www.python.org/dev/peps/pep-0485) – A function for testing approximate equality Constants --------- `cmath.pi` The mathematical constant *π*, as a float. `cmath.e` The mathematical constant *e*, as a float. 
`cmath.tau` The mathematical constant *τ*, as a float. New in version 3.6. `cmath.inf` Floating-point positive infinity. Equivalent to `float('inf')`. New in version 3.6. `cmath.infj` Complex number with zero real part and positive infinity imaginary part. Equivalent to `complex(0.0, float('inf'))`. New in version 3.6. `cmath.nan` A floating-point “not a number” (NaN) value. Equivalent to `float('nan')`. New in version 3.6. `cmath.nanj` Complex number with zero real part and NaN imaginary part. Equivalent to `complex(0.0, float('nan'))`. New in version 3.6. Note that the selection of functions is similar, but not identical, to that in module [`math`](math#module-math "math: Mathematical functions (sin() etc.)."). The reason for having two modules is that some users aren’t interested in complex numbers, and perhaps don’t even know what they are. They would rather have `math.sqrt(-1)` raise an exception than return a complex number. Also note that the functions defined in [`cmath`](#module-cmath "cmath: Mathematical functions for complex numbers.") always return a complex number, even if the answer can be expressed as a real number (in which case the complex number has an imaginary part of zero). A note on branch cuts: They are curves along which the given function fails to be continuous. They are a necessary feature of many complex functions. It is assumed that if you need to compute with complex functions, you will understand about branch cuts. Consult almost any (not too elementary) book on complex variables for enlightenment. For information on the proper choice of branch cuts for numerical purposes, a good reference is the following: See also Kahan, W: Branch cuts for complex elementary functions; or, Much ado about nothing’s sign bit. In Iserles, A., and Powell, M. (eds.), The state of the art in numerical analysis. Clarendon Press (1987) pp. 165–211. python tkinter.ttk — Tk themed widgets tkinter.ttk — Tk themed widgets =============================== **Source code:** [Lib/tkinter/ttk.py](https://github.com/python/cpython/tree/3.9/Lib/tkinter/ttk.py) The [`tkinter.ttk`](#module-tkinter.ttk "tkinter.ttk: Tk themed widget set") module provides access to the Tk themed widget set, introduced in Tk 8.5. If Python has not been compiled against Tk 8.5, this module can still be accessed if *Tile* has been installed. The former method using Tk 8.5 provides additional benefits including anti-aliased font rendering under X11 and window transparency (requiring a composition window manager on X11). The basic idea for [`tkinter.ttk`](#module-tkinter.ttk "tkinter.ttk: Tk themed widget set") is to separate, to the extent possible, the code implementing a widget’s behavior from the code implementing its appearance. See also [Tk Widget Styling Support](https://core.tcl.tk/tips/doc/trunk/tip/48.md) A document introducing theming support for Tk Using Ttk --------- To start using Ttk, import its module:

```
from tkinter import ttk
```

To override the basic Tk widgets, the import should follow the Tk import:

```
from tkinter import *
from tkinter.ttk import *
```

That code causes several [`tkinter.ttk`](#module-tkinter.ttk "tkinter.ttk: Tk themed widget set") widgets (`Button`, `Checkbutton`, `Entry`, `Frame`, `Label`, `LabelFrame`, `Menubutton`, `PanedWindow`, `Radiobutton`, `Scale` and `Scrollbar`) to automatically replace the Tk widgets.
This has the direct benefit of using the new widgets, which give a better look and feel across platforms; however, the replacement widgets are not completely compatible. The main difference is that widget options such as “fg”, “bg” and others related to widget styling are no longer present in Ttk widgets. Instead, use the `ttk.Style` class for improved styling effects. See also [Converting existing applications to use Tile widgets](http://tktable.sourceforge.net/tile/doc/converting.txt) A monograph (using Tcl terminology) about differences typically encountered when moving applications to use the new widgets. Ttk Widgets ----------- Ttk comes with 18 widgets, twelve of which already existed in tkinter: `Button`, `Checkbutton`, `Entry`, `Frame`, `Label`, `LabelFrame`, `Menubutton`, `PanedWindow`, `Radiobutton`, `Scale`, `Scrollbar`, and [`Spinbox`](#tkinter.ttk.Spinbox "tkinter.ttk.Spinbox"). The other six are new: [`Combobox`](#tkinter.ttk.Combobox "tkinter.ttk.Combobox"), [`Notebook`](#tkinter.ttk.Notebook "tkinter.ttk.Notebook"), [`Progressbar`](#tkinter.ttk.Progressbar "tkinter.ttk.Progressbar"), `Separator`, `Sizegrip` and [`Treeview`](#tkinter.ttk.Treeview "tkinter.ttk.Treeview"). All of them are subclasses of [`Widget`](#tkinter.ttk.Widget "tkinter.ttk.Widget"). Using the Ttk widgets gives the application an improved look and feel. As discussed above, there are differences in how the styling is coded. Tk code:

```
l1 = tkinter.Label(text="Test", fg="black", bg="white")
l2 = tkinter.Label(text="Test", fg="black", bg="white")
```

Ttk code:

```
style = ttk.Style()
style.configure("BW.TLabel", foreground="black", background="white")

l1 = ttk.Label(text="Test", style="BW.TLabel")
l2 = ttk.Label(text="Test", style="BW.TLabel")
```

For more information about [TtkStyling](#ttkstyling), see the [`Style`](#tkinter.ttk.Style "tkinter.ttk.Style") class documentation. Widget ------ `ttk.Widget` defines standard options and methods supported by Tk themed widgets and is not supposed to be directly instantiated. ### Standard Options All the `ttk` widgets accept the following options: | Option | Description | | --- | --- | | class | Specifies the window class. The class is used when querying the option database for the window’s other options, to determine the default bindtags for the window, and to select the widget’s default layout and style. This option is read-only, and may only be specified when the window is created. | | cursor | Specifies the mouse cursor to be used for the widget. If set to the empty string (the default), the cursor is inherited from the parent widget. | | takefocus | Determines whether the window accepts the focus during keyboard traversal. 0, 1 or an empty string is returned. If 0 is returned, it means that the window should be skipped entirely during keyboard traversal. If 1, it means that the window should receive the input focus as long as it is viewable. And an empty string means that the traversal scripts make the decision about whether or not to focus on the window. | | style | May be used to specify a custom widget style. | ### Scrollable Widget Options The following options are supported by widgets that are controlled by a scrollbar. | Option | Description | | --- | --- | | xscrollcommand | Used to communicate with horizontal scrollbars. When the view in the widget’s window changes, the widget will generate a Tcl command based on the scrollcommand. Usually this option consists of the method `Scrollbar.set()` of some scrollbar.
This will cause the scrollbar to be updated whenever the view in the window changes. | | yscrollcommand | Used to communicate with vertical scrollbars. For some more information, see above. | ### Label Options The following options are supported by labels, buttons and other button-like widgets. | Option | Description | | --- | --- | | text | Specifies a text string to be displayed inside the widget. | | textvariable | Specifies a name whose value will be used in place of the text option resource. | | underline | If set, specifies the index (0-based) of a character to underline in the text string. The underline character is used for mnemonic activation. | | image | Specifies an image to display. This is a list of 1 or more elements. The first element is the default image name. The rest of the list is a sequence of statespec/value pairs as defined by [`Style.map()`](#tkinter.ttk.Style.map "tkinter.ttk.Style.map"), specifying different images to use when the widget is in a particular state or a combination of states. All images in the list should have the same size. | | compound | Specifies how to display the image relative to the text, in the case both the text and image options are present. Valid values are:* text: display text only * image: display image only * top, bottom, left, right: display image above, below, left of, or right of the text, respectively. * none: the default; display the image if present, otherwise the text. | | width | If greater than zero, specifies how much space, in character widths, to allocate for the text label; if less than zero, specifies a minimum width. If zero or unspecified, the natural width of the text label is used. | ### Compatibility Options | Option | Description | | --- | --- | | state | May be set to “normal” or “disabled” to control the “disabled” state bit. This is a write-only option: setting it changes the widget state, but the [`Widget.state()`](#tkinter.ttk.Widget.state "tkinter.ttk.Widget.state") method does not affect this option. | ### Widget States The widget state is a bitmap of independent state flags. | Flag | Description | | --- | --- | | active | The mouse cursor is over the widget and pressing a mouse button will cause some action to occur | | disabled | Widget is disabled under program control | | focus | Widget has keyboard focus | | pressed | Widget is being pressed | | selected | “On”, “true”, or “current” for things like Checkbuttons and radiobuttons | | background | Windows and Mac have a notion of an “active” or foreground window. The *background* state is set for widgets in a background window, and cleared for those in the foreground window | | readonly | Widget should not allow user modification | | alternate | A widget-specific alternate display format | | invalid | The widget’s value is invalid | A state specification is a sequence of state names, optionally prefixed with an exclamation point indicating that the bit is off. ### ttk.Widget Besides the methods described below, the `ttk.Widget` supports the methods `tkinter.Widget.cget()` and `tkinter.Widget.configure()`. `class tkinter.ttk.Widget` `identify(x, y)` Returns the name of the element at position *x*, *y*, or the empty string if the point does not lie within any element. *x* and *y* are pixel coordinates relative to the widget. `instate(statespec, callback=None, *args, **kw)` Test the widget’s state. If a callback is not specified, returns `True` if the widget state matches *statespec* and `False` otherwise.
If *callback* is specified, then it is called with *args* if the widget state matches *statespec*. `state(statespec=None)` Modify or inquire widget state. If *statespec* is specified, sets the widget state according to it and returns a new *statespec* indicating which flags were changed. If *statespec* is not specified, returns the currently-enabled state flags. *statespec* will usually be a list or a tuple. Combobox -------- The `ttk.Combobox` widget combines a text field with a pop-down list of values. This widget is a subclass of `Entry`. Besides the methods inherited from [`Widget`](#tkinter.ttk.Widget "tkinter.ttk.Widget"): `Widget.cget()`, `Widget.configure()`, [`Widget.identify()`](#tkinter.ttk.Widget.identify "tkinter.ttk.Widget.identify"), [`Widget.instate()`](#tkinter.ttk.Widget.instate "tkinter.ttk.Widget.instate") and [`Widget.state()`](#tkinter.ttk.Widget.state "tkinter.ttk.Widget.state"), and the following inherited from `Entry`: `Entry.bbox()`, `Entry.delete()`, `Entry.icursor()`, `Entry.index()`, `Entry.insert()`, `Entry.selection()`, `Entry.xview()`, it has some other methods, described at `ttk.Combobox`. ### Options This widget accepts the following specific options: | Option | Description | | --- | --- | | exportselection | Boolean value. If set, the widget selection is linked to the Window Manager selection (which can be returned by invoking Misc.selection\_get, for example). | | justify | Specifies how the text is aligned within the widget. One of “left”, “center”, or “right”. | | height | Specifies the height of the pop-down listbox, in rows. | | postcommand | A script (possibly registered with Misc.register) that is called immediately before displaying the values. It may specify which values to display. | | state | One of “normal”, “readonly”, or “disabled”. In the “readonly” state, the value may not be edited directly, and the user can only select a value from the dropdown list. In the “normal” state, the text field is directly editable. In the “disabled” state, no interaction is possible. | | textvariable | Specifies a name whose value is linked to the widget value. Whenever the value associated with that name changes, the widget value is updated, and vice versa. See `tkinter.StringVar`. | | values | Specifies the list of values to display in the drop-down listbox. | | width | Specifies an integer value indicating the desired width of the entry window, in average-size characters of the widget’s font. | ### Virtual events The combobox widget generates a **<<ComboboxSelected>>** virtual event when the user selects an element from the list of values. ### ttk.Combobox `class tkinter.ttk.Combobox` `current(newindex=None)` If *newindex* is specified, sets the combobox value to the element position *newindex*. Otherwise, returns the index of the current value or -1 if the current value is not in the values list. `get()` Returns the current value of the combobox. `set(value)` Sets the value of the combobox to *value*. Spinbox ------- The `ttk.Spinbox` widget is a `ttk.Entry` enhanced with increment and decrement arrows. It can be used for numbers or lists of string values. This widget is a subclass of `Entry`.
Besides the methods inherited from [`Widget`](#tkinter.ttk.Widget "tkinter.ttk.Widget"): `Widget.cget()`, `Widget.configure()`, [`Widget.identify()`](#tkinter.ttk.Widget.identify "tkinter.ttk.Widget.identify"), [`Widget.instate()`](#tkinter.ttk.Widget.instate "tkinter.ttk.Widget.instate") and [`Widget.state()`](#tkinter.ttk.Widget.state "tkinter.ttk.Widget.state"), and the following inherited from `Entry`: `Entry.bbox()`, `Entry.delete()`, `Entry.icursor()`, `Entry.index()`, `Entry.insert()`, `Entry.xview()`, it has some other methods, described at `ttk.Spinbox`. ### Options This widget accepts the following specific options: | Option | Description | | --- | --- | | from | Float value. If set, this is the minimum value to which the decrement button will decrement. Must be spelled as `from_` when used as an argument, since `from` is a Python keyword. | | to | Float value. If set, this is the maximum value to which the increment button will increment. | | increment | Float value. Specifies the amount which the increment/decrement buttons change the value. Defaults to 1.0. | | values | Sequence of string or float values. If specified, the increment/decrement buttons will cycle through the items in this sequence rather than incrementing or decrementing numbers. | | wrap | Boolean value. If `True`, increment and decrement buttons will cycle from the `to` value to the `from` value or the `from` value to the `to` value, respectively. | | format | String value. This specifies the format of numbers set by the increment/decrement buttons. It must be in the form “%W.Pf”, where W is the padded width of the value, P is the precision, and ‘%’ and ‘f’ are literal. | | command | Python callable. Will be called with no arguments whenever either of the increment or decrement buttons are pressed. | ### Virtual events The spinbox widget generates an **<<Increment>>** virtual event when the user presses <Up>, and a **<<Decrement>>** virtual event when the user presses <Down>. ### ttk.Spinbox `class tkinter.ttk.Spinbox` `get()` Returns the current value of the spinbox. `set(value)` Sets the value of the spinbox to *value*. Notebook -------- The Ttk Notebook widget manages a collection of windows and displays a single one at a time. Each child window is associated with a tab, which the user may select to change the currently-displayed window. ### Options This widget accepts the following specific options: | Option | Description | | --- | --- | | height | If present and greater than zero, specifies the desired height of the pane area (not including internal padding or tabs). Otherwise, the maximum height of all panes is used. | | padding | Specifies the amount of extra space to add around the outside of the notebook. The padding is a list of up to four length specifications: left top right bottom. If fewer than four elements are specified, bottom defaults to top, right defaults to left, and top defaults to left. | | width | If present and greater than zero, specifies the desired width of the pane area (not including internal padding). Otherwise, the maximum width of all panes is used. | ### Tab Options There are also specific options for tabs: | Option | Description | | --- | --- | | state | Either “normal”, “disabled” or “hidden”. If “disabled”, then the tab is not selectable. If “hidden”, then the tab is not shown. | | sticky | Specifies how the child window is positioned within the pane area. Value is a string containing zero or more of the characters “n”, “s”, “e” or “w”.
Each letter refers to a side (north, south, east or west) that the child window will stick to, as per the `grid()` geometry manager. | | padding | Specifies the amount of extra space to add between the notebook and this pane. Syntax is the same as for the option padding used by this widget. | | text | Specifies a text to be displayed in the tab. | | image | Specifies an image to display in the tab. See the option image described in [`Widget`](#tkinter.ttk.Widget "tkinter.ttk.Widget"). | | compound | Specifies how to display the image relative to the text, in the case both options text and image are present. See [Label Options](#label-options) for legal values. | | underline | Specifies the index (0-based) of a character to underline in the text string. The underlined character is used for mnemonic activation if [`Notebook.enable_traversal()`](#tkinter.ttk.Notebook.enable_traversal "tkinter.ttk.Notebook.enable_traversal") is called. | ### Tab Identifiers The tab\_id present in several methods of `ttk.Notebook` may take any of the following forms: * An integer between zero and the number of tabs * The name of a child window * A positional specification of the form “@x,y”, which identifies the tab * The literal string “current”, which identifies the currently-selected tab * The literal string “end”, which returns the number of tabs (only valid for [`Notebook.index()`](#tkinter.ttk.Notebook.index "tkinter.ttk.Notebook.index")) ### Virtual Events This widget generates a **<<NotebookTabChanged>>** virtual event after a new tab is selected. ### ttk.Notebook `class tkinter.ttk.Notebook` `add(child, **kw)` Adds a new tab to the notebook. If *child* is currently managed by the notebook but hidden, it is restored to its previous position. See [Tab Options](#tab-options) for the list of available options. `forget(tab_id)` Removes the tab specified by *tab\_id*, unmaps and unmanages the associated window. `hide(tab_id)` Hides the tab specified by *tab\_id*. The tab will not be displayed, but the associated window remains managed by the notebook and its configuration remembered. Hidden tabs may be restored with the [`add()`](#tkinter.ttk.Notebook.add "tkinter.ttk.Notebook.add") command. `identify(x, y)` Returns the name of the tab element at position *x*, *y*, or the empty string if none. `index(tab_id)` Returns the numeric index of the tab specified by *tab\_id*, or the total number of tabs if *tab\_id* is the string “end”. `insert(pos, child, **kw)` Inserts a pane at the specified position. *pos* is either the string “end”, an integer index, or the name of a managed child. If *child* is already managed by the notebook, moves it to the specified position. See [Tab Options](#tab-options) for the list of available options. `select(tab_id=None)` Selects the specified *tab\_id*. The associated child window will be displayed, and the previously-selected window (if different) is unmapped. If *tab\_id* is omitted, returns the widget name of the currently selected pane. `tab(tab_id, option=None, **kw)` Query or modify the options of the specified *tab\_id*. If *kw* is not given, returns a dictionary of the tab option values. If *option* is specified, returns the value of that *option*. Otherwise, sets the options to the corresponding values. `tabs()` Returns a list of windows managed by the notebook. `enable_traversal()` Enable keyboard traversal for a toplevel window containing this notebook.
This will extend the bindings for the toplevel window containing the notebook as follows: * `Control-Tab`: selects the tab following the currently selected one. * `Shift-Control-Tab`: selects the tab preceding the currently selected one. * `Alt-K`: where *K* is the mnemonic (underlined) character of any tab, will select that tab. Multiple notebooks in a single toplevel may be enabled for traversal, including nested notebooks. However, notebook traversal only works properly if all panes have the notebook they are in as master. Progressbar ----------- The `ttk.Progressbar` widget shows the status of a long-running operation. It can operate in two modes: 1) the determinate mode which shows the amount completed relative to the total amount of work to be done and 2) the indeterminate mode which provides an animated display to let the user know that work is progressing. ### Options This widget accepts the following specific options: | Option | Description | | --- | --- | | orient | One of “horizontal” or “vertical”. Specifies the orientation of the progress bar. | | length | Specifies the length of the long axis of the progress bar (width if horizontal, height if vertical). | | mode | One of “determinate” or “indeterminate”. | | maximum | A number specifying the maximum value. Defaults to 100. | | value | The current value of the progress bar. In “determinate” mode, this represents the amount of work completed. In “indeterminate” mode, it is interpreted as modulo *maximum*; that is, the progress bar completes one “cycle” when its value increases by *maximum*. | | variable | A name which is linked to the option value. If specified, the value of the progress bar is automatically set to the value of this name whenever the latter is modified. | | phase | Read-only option. The widget periodically increments the value of this option whenever its value is greater than 0 and, in determinate mode, less than maximum. This option may be used by the current theme to provide additional animation effects. | ### ttk.Progressbar `class tkinter.ttk.Progressbar` `start(interval=None)` Begin autoincrement mode: schedules a recurring timer event that calls [`Progressbar.step()`](#tkinter.ttk.Progressbar.step "tkinter.ttk.Progressbar.step") every *interval* milliseconds. If omitted, *interval* defaults to 50 milliseconds. `step(amount=None)` Increments the progress bar’s value by *amount*. *amount* defaults to 1.0 if omitted. `stop()` Stop autoincrement mode: cancels any recurring timer event initiated by [`Progressbar.start()`](#tkinter.ttk.Progressbar.start "tkinter.ttk.Progressbar.start") for this progress bar. Separator --------- The `ttk.Separator` widget displays a horizontal or vertical separator bar. It has no other methods besides the ones inherited from `ttk.Widget`. ### Options This widget accepts the following specific option: | Option | Description | | --- | --- | | orient | One of “horizontal” or “vertical”. Specifies the orientation of the separator. | Sizegrip -------- The `ttk.Sizegrip` widget (also known as a grow box) allows the user to resize the containing toplevel window by pressing and dragging the grip. This widget has neither specific options nor specific methods, besides the ones inherited from `ttk.Widget`. ### Platform-specific notes * On macOS, toplevel windows automatically include a built-in size grip by default. Adding a `Sizegrip` is harmless, since the built-in grip will just mask the widget. 
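By way of illustration, here is a minimal usage sketch (not part of the reference text): the grip is conventionally placed in the south-east corner of a resizable toplevel with `grid()`:

```
import tkinter
from tkinter import ttk

root = tkinter.Tk()
root.columnconfigure(0, weight=1)
root.rowconfigure(0, weight=1)

# The main content area expands; the grip sits in the bottom-right corner.
ttk.Frame(root, padding=10).grid(row=0, column=0, sticky="nsew")
ttk.Sizegrip(root).grid(row=1, column=0, sticky="se")

root.mainloop()
```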
### Bugs * If the containing toplevel’s position was specified relative to the right or bottom of the screen (e.g. ….), the `Sizegrip` widget will not resize the window. * This widget supports only “southeast” resizing. Treeview -------- The `ttk.Treeview` widget displays a hierarchical collection of items. Each item has a textual label, an optional image, and an optional list of data values. The data values are displayed in successive columns after the tree label. The order in which data values are displayed may be controlled by setting the widget option `displaycolumns`. The tree widget can also display column headings. Columns may be accessed by number or symbolic names listed in the widget option columns. See [Column Identifiers](#column-identifiers). Each item is identified by a unique name. The widget will generate item IDs if they are not supplied by the caller. There is a distinguished root item, named `{}`. The root item itself is not displayed; its children appear at the top level of the hierarchy. Each item also has a list of tags, which can be used to associate event bindings with individual items and control the appearance of the item. The Treeview widget supports horizontal and vertical scrolling, according to the options described in [Scrollable Widget Options](#scrollable-widget-options) and the methods [`Treeview.xview()`](#tkinter.ttk.Treeview.xview "tkinter.ttk.Treeview.xview") and [`Treeview.yview()`](#tkinter.ttk.Treeview.yview "tkinter.ttk.Treeview.yview"). ### Options This widget accepts the following specific options: | Option | Description | | --- | --- | | columns | A list of column identifiers, specifying the number of columns and their names. | | displaycolumns | A list of column identifiers (either symbolic or integer indices) specifying which data columns are displayed and the order in which they appear, or the string “#all”. | | height | Specifies the number of rows which should be visible. Note: the requested width is determined from the sum of the column widths. | | padding | Specifies the internal padding for the widget. The padding is a list of up to four length specifications. | | selectmode | Controls how the built-in class bindings manage the selection. One of “extended”, “browse” or “none”. If set to “extended” (the default), multiple items may be selected. If “browse”, only a single item will be selected at a time. If “none”, the selection will not be changed. Note that the application code and tag bindings can set the selection however they wish, regardless of the value of this option. | | show | A list containing zero or more of the following values, specifying which elements of the tree to display.* tree: display tree labels in column #0. * headings: display the heading row. The default is “tree headings”, i.e., show all elements. **Note**: Column #0 always refers to the tree column, even if show=”tree” is not specified. | ### Item Options The following item options may be specified for items in the insert and item widget commands. | Option | Description | | --- | --- | | text | The textual label to display for the item. | | image | A Tk Image, displayed to the left of the label. | | values | The list of values associated with the item. Each item should have the same number of values as the widget option columns. If there are fewer values than columns, the remaining values are assumed empty. If there are more values than columns, the extra values are ignored. 
| | open | `True`/`False` value indicating whether the item’s children should be displayed or hidden. | | tags | A list of tags associated with this item. | ### Tag Options The following options may be specified on tags: | Option | Description | | --- | --- | | foreground | Specifies the text foreground color. | | background | Specifies the cell or item background color. | | font | Specifies the font to use when drawing text. | | image | Specifies the item image, in case the item’s image option is empty. | ### Column Identifiers Column identifiers take any of the following forms: * A symbolic name from the list of columns option. * An integer n, specifying the nth data column. * A string of the form #n, where n is an integer, specifying the nth display column. Notes: * Item’s option values may be displayed in a different order than the order in which they are stored. * Column #0 always refers to the tree column, even if show=”tree” is not specified. A data column number is an index into an item’s option values list; a display column number is the column number in the tree where the values are displayed. Tree labels are displayed in column #0. If option displaycolumns is not set, then data column n is displayed in column #n+1. Again, **column #0 always refers to the tree column**. ### Virtual Events The Treeview widget generates the following virtual events. | Event | Description | | --- | --- | | <<TreeviewSelect>> | Generated whenever the selection changes. | | <<TreeviewOpen>> | Generated just before setting the focus item to open=True. | | <<TreeviewClose>> | Generated just after setting the focus item to open=False. | The [`Treeview.focus()`](#tkinter.ttk.Treeview.focus "tkinter.ttk.Treeview.focus") and [`Treeview.selection()`](#tkinter.ttk.Treeview.selection "tkinter.ttk.Treeview.selection") methods can be used to determine the affected item or items. ### ttk.Treeview `class tkinter.ttk.Treeview` `bbox(item, column=None)` Returns the bounding box (relative to the treeview widget’s window) of the specified *item* in the form (x, y, width, height). If *column* is specified, returns the bounding box of that cell. If the *item* is not visible (i.e., if it is a descendant of a closed item or is scrolled offscreen), returns an empty string. `get_children(item=None)` Returns the list of children belonging to *item*. If *item* is not specified, returns root children. `set_children(item, *newchildren)` Replaces *item*’s children with *newchildren*. Children present in *item* that are not present in *newchildren* are detached from the tree. No items in *newchildren* may be an ancestor of *item*. Note that not specifying *newchildren* results in detaching *item*’s children. `column(column, option=None, **kw)` Query or modify the options for the specified *column*. If *kw* is not given, returns a dict of the column option values. If *option* is specified then the value for that *option* is returned. Otherwise, sets the options to the corresponding values. The valid options/values are: * id Returns the column name. This is a read-only option. * anchor: One of the standard Tk anchor values. Specifies how the text in this column should be aligned with respect to the cell. * minwidth: width The minimum width of the column in pixels. The treeview widget will not make the column any smaller than specified by this option when the widget is resized or the user drags a column. * `stretch: True/False` Specifies whether the column’s width should be adjusted when the widget is resized.
* width: width The width of the column in pixels. To configure the tree column, call this with column = “#0”. `delete(*items)` Delete all specified *items* and all their descendants. The root item may not be deleted. `detach(*items)` Unlinks all of the specified *items* from the tree. The items and all of their descendants are still present, and may be reinserted at another point in the tree, but will not be displayed. The root item may not be detached. `exists(item)` Returns `True` if the specified *item* is present in the tree. `focus(item=None)` If *item* is specified, sets the focus item to *item*. Otherwise, returns the current focus item, or ‘’ if there is none. `heading(column, option=None, **kw)` Query or modify the heading options for the specified *column*. If *kw* is not given, returns a dict of the heading option values. If *option* is specified then the value for that *option* is returned. Otherwise, sets the options to the corresponding values. The valid options/values are: * text: text The text to display in the column heading. * image: imageName Specifies an image to display to the right of the column heading. * anchor: anchor Specifies how the heading text should be aligned. One of the standard Tk anchor values. * command: callback A callback to be invoked when the heading label is pressed. To configure the tree column heading, call this with column = “#0”. `identify(component, x, y)` Returns a description of the specified *component* under the point given by *x* and *y*, or the empty string if no such *component* is present at that position. `identify_row(y)` Returns the item ID of the item at position *y*. `identify_column(x)` Returns the data column identifier of the cell at position *x*. The tree column has ID #0. `identify_region(x, y)` Returns one of: | region | meaning | | --- | --- | | heading | Tree heading area. | | separator | Space between two column headings. | | tree | The tree area. | | cell | A data cell. | Availability: Tk 8.6. `identify_element(x, y)` Returns the element at position *x*, *y*. Availability: Tk 8.6. `index(item)` Returns the integer index of *item* within its parent’s list of children. `insert(parent, index, iid=None, **kw)` Creates a new item and returns the item identifier of the newly created item. *parent* is the item ID of the parent item, or the empty string to create a new top-level item. *index* is an integer, or the value “end”, specifying where in the list of parent’s children to insert the new item. If *index* is less than or equal to zero, the new node is inserted at the beginning; if *index* is greater than or equal to the current number of children, it is inserted at the end. If *iid* is specified, it is used as the item identifier; *iid* must not already exist in the tree. Otherwise, a new unique identifier is generated. See [Item Options](#item-options) for the list of available options. `item(item, option=None, **kw)` Query or modify the options for the specified *item*. If no options are given, a dict with options/values for the item is returned. If *option* is specified then the value for that option is returned. Otherwise, sets the options to the corresponding values as given by *kw*. `move(item, parent, index)` Moves *item* to position *index* in *parent*’s list of children. It is illegal to move an item under one of its descendants. If *index* is less than or equal to zero, *item* is moved to the beginning; if greater than or equal to the number of children, it is moved to the end.
If *item* was detached it is reattached. `next(item)` Returns the identifier of *item*’s next sibling, or ‘’ if *item* is the last child of its parent. `parent(item)` Returns the ID of the parent of *item*, or ‘’ if *item* is at the top level of the hierarchy. `prev(item)` Returns the identifier of *item*’s previous sibling, or ‘’ if *item* is the first child of its parent. `reattach(item, parent, index)` An alias for [`Treeview.move()`](#tkinter.ttk.Treeview.move "tkinter.ttk.Treeview.move"). `see(item)` Ensure that *item* is visible. Sets all of *item*’s ancestors open option to `True`, and scrolls the widget if necessary so that *item* is within the visible portion of the tree. `selection()` Returns a tuple of selected items. Changed in version 3.8: `selection()` no longer takes arguments. For changing the selection state use the following selection methods. `selection_set(*items)` *items* becomes the new selection. Changed in version 3.6: *items* can be passed as separate arguments, not just as a single tuple. `selection_add(*items)` Add *items* to the selection. Changed in version 3.6: *items* can be passed as separate arguments, not just as a single tuple. `selection_remove(*items)` Remove *items* from the selection. Changed in version 3.6: *items* can be passed as separate arguments, not just as a single tuple. `selection_toggle(*items)` Toggle the selection state of each item in *items*. Changed in version 3.6: *items* can be passed as separate arguments, not just as a single tuple. `set(item, column=None, value=None)` With one argument, returns a dictionary of column/value pairs for the specified *item*. With two arguments, returns the current value of the specified *column*. With three arguments, sets the value of given *column* in given *item* to the specified *value*. `tag_bind(tagname, sequence=None, callback=None)` Bind a callback for the given event *sequence* to the tag *tagname*. When an event is delivered to an item, the callbacks for each of the item’s tags option are called. `tag_configure(tagname, option=None, **kw)` Query or modify the options for the specified *tagname*. If *kw* is not given, returns a dict of the option settings for *tagname*. If *option* is specified, returns the value for that *option* for the specified *tagname*. Otherwise, sets the options to the corresponding values for the given *tagname*. `tag_has(tagname, item=None)` If *item* is specified, returns 1 or 0 depending on whether the specified *item* has the given *tagname*. Otherwise, returns a list of all items that have the specified tag. Availability: Tk 8.6 `xview(*args)` Query or modify horizontal position of the treeview. `yview(*args)` Query or modify vertical position of the treeview. Ttk Styling ----------- Each widget in `ttk` is assigned a style, which specifies the set of elements making up the widget and how they are arranged, along with dynamic and default settings for element options. By default the style name is the same as the widget’s class name, but it may be overridden by the widget’s style option. If you don’t know the class name of a widget, use the method `Misc.winfo_class()` (somewidget.winfo\_class()). See also [Tcl’2004 conference presentation](http://tktable.sourceforge.net/tile/tile-tcl2004.pdf) This document explains how the theme engine works `class tkinter.ttk.Style` This class is used to manipulate the style database. `configure(style, query_opt=None, **kw)` Query or set the default value of the specified option(s) in *style*. 
Each key in *kw* is an option and each value is a string identifying the value for that option. For example, to change every default button to be a flat button with some padding and a different background color: ``` from tkinter import ttk import tkinter root = tkinter.Tk() ttk.Style().configure("TButton", padding=6, relief="flat", background="#ccc") btn = ttk.Button(text="Sample") btn.pack() root.mainloop() ``` `map(style, query_opt=None, **kw)` Query or set dynamic values of the specified option(s) in *style*. Each key in *kw* is an option and each value should be a list or a tuple (usually) containing statespecs grouped in tuples, lists, or some other sequence type. A statespec is a compound of one or more states and then a value. An example may make it more understandable: ``` import tkinter from tkinter import ttk root = tkinter.Tk() style = ttk.Style() style.map("C.TButton", foreground=[('pressed', 'red'), ('active', 'blue')], background=[('pressed', '!disabled', 'black'), ('active', 'white')] ) colored_btn = ttk.Button(text="Test", style="C.TButton") colored_btn.pack() root.mainloop() ``` Note that the order of the (states, value) sequences for an option does matter: if the order were changed to `[('active', 'blue'), ('pressed', 'red')]` in the foreground option, for example, the result would be a blue foreground whenever the widget was in the active or pressed state. `lookup(style, option, state=None, default=None)` Returns the value specified for *option* in *style*. If *state* is specified, it is expected to be a sequence of one or more states. If the *default* argument is set, it is used as a fallback value in case no specification for option is found. To check what font a Button uses by default: ``` from tkinter import ttk print(ttk.Style().lookup("TButton", "font")) ``` `layout(style, layoutspec=None)` Define the widget layout for given *style*. If *layoutspec* is omitted, return the layout specification for given style. *layoutspec*, if specified, is expected to be a list or some other sequence type (excluding strings), where each item should be a tuple and the first item is the layout name and the second item should have the format described in [Layouts](#layouts). To understand the format, see the following example (it is not intended to do anything useful): ``` from tkinter import ttk import tkinter root = tkinter.Tk() style = ttk.Style() style.layout("TMenubutton", [ ("Menubutton.background", None), ("Menubutton.button", {"children": [("Menubutton.focus", {"children": [("Menubutton.padding", {"children": [("Menubutton.label", {"side": "left", "expand": 1})] })] })] }), ]) mbtn = ttk.Menubutton(text='Text') mbtn.pack() root.mainloop() ``` `element_create(elementname, etype, *args, **kw)` Create a new element in the current theme, of the given *etype* which is expected to be either “image”, “from” or “vsapi”. The latter is only available in Tk 8.6a for Windows XP and Vista and is not described here. If “image” is used, *args* should contain the default image name followed by statespec/value pairs (this is the imagespec), and *kw* may have the following options: * border=padding padding is a list of up to four integers, specifying the left, top, right, and bottom borders, respectively. * height=height Specifies a minimum height for the element. If less than zero, the base image’s height is used as a default. * padding=padding Specifies the element’s interior padding. Defaults to border’s value if not specified. * sticky=spec Specifies how the image is placed within the final parcel.
spec contains zero or more characters “n”, “s”, “w”, or “e”. * width=width Specifies a minimum width for the element. If less than zero, the base image’s width is used as a default. If “from” is used as the value of *etype*, [`element_create()`](#tkinter.ttk.Style.element_create "tkinter.ttk.Style.element_create") will clone an existing element. *args* is expected to contain a themename, from which the element will be cloned, and optionally an element to clone from. If this element to clone from is not specified, an empty element will be used. *kw* is discarded. `element_names()` Returns the list of elements defined in the current theme. `element_options(elementname)` Returns the list of *elementname*’s options. `theme_create(themename, parent=None, settings=None)` Create a new theme. It is an error if *themename* already exists. If *parent* is specified, the new theme will inherit styles, elements and layouts from the parent theme. If *settings* are present, they are expected to have the same syntax used for [`theme_settings()`](#tkinter.ttk.Style.theme_settings "tkinter.ttk.Style.theme_settings"). `theme_settings(themename, settings)` Temporarily sets the current theme to *themename*, applies the specified *settings*, and then restores the previous theme. Each key in *settings* is a style and each value may contain the keys ‘configure’, ‘map’, ‘layout’ and ‘element create’ and they are expected to have the same format as specified by the methods [`Style.configure()`](#tkinter.ttk.Style.configure "tkinter.ttk.Style.configure"), [`Style.map()`](#tkinter.ttk.Style.map "tkinter.ttk.Style.map"), [`Style.layout()`](#tkinter.ttk.Style.layout "tkinter.ttk.Style.layout") and [`Style.element_create()`](#tkinter.ttk.Style.element_create "tkinter.ttk.Style.element_create") respectively. As an example, let’s change the Combobox for the default theme a bit: ``` from tkinter import ttk import tkinter root = tkinter.Tk() style = ttk.Style() style.theme_settings("default", { "TCombobox": { "configure": {"padding": 5}, "map": { "background": [("active", "green2"), ("!disabled", "green4")], "fieldbackground": [("!disabled", "green3")], "foreground": [("focus", "OliveDrab1"), ("!disabled", "OliveDrab2")] } } }) combo = ttk.Combobox() combo.pack() root.mainloop() ``` `theme_names()` Returns a list of all known themes. `theme_use(themename=None)` If *themename* is not given, returns the theme in use. Otherwise, sets the current theme to *themename*, refreshes all widgets and emits a <<ThemeChanged>> event. ### Layouts A layout can be just `None`, if it takes no options, or a dict of options specifying how to arrange the element. The layout mechanism uses a simplified version of the pack geometry manager: given an initial cavity, each element is allocated a parcel. Valid options/values are: * side: whichside Specifies which side of the cavity to place the element; one of top, right, bottom or left. If omitted, the element occupies the entire cavity. * sticky: nswe Specifies where the element is placed inside its allocated parcel. * unit: 0 or 1 If set to 1, causes the element and all of its descendants to be treated as a single element for the purposes of [`Widget.identify()`](#tkinter.ttk.Widget.identify "tkinter.ttk.Widget.identify") et al. It’s used for things like scrollbar thumbs with grips. * children: [sublayout… ] Specifies a list of elements to place inside the element. Each element is a tuple (or other sequence type) where the first item is the layout name, and the other is a [Layout](#layouts).
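Putting these pieces together, the following sketch only inspects the style database rather than changing it. It is a minimal illustration; the element name `"Button.label"` is an assumption that holds for the standard themes:

```
from tkinter import ttk
import tkinter

root = tkinter.Tk()
style = ttk.Style()

# The available themes and the one currently active.
print(style.theme_names())
print(style.theme_use())

# How a ttk button is assembled: its layout tree, the options of one
# of its elements, and the default value configured for an option.
print(style.layout("TButton"))
print(style.element_options("Button.label"))
print(style.lookup("TButton", "padding"))
```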
python email: Examples email: Examples =============== Here are a few examples of how to use the [`email`](email#module-email "email: Package supporting the parsing, manipulating, and generating email messages.") package to read, write, and send simple email messages, as well as more complex MIME messages. First, let’s see how to create and send a simple text message (both the text content and the addresses may contain unicode characters): ``` # Import smtplib for the actual sending function import smtplib # Import the email modules we'll need from email.message import EmailMessage # Open the plain text file whose name is in textfile for reading. with open(textfile) as fp: # Create a text/plain message msg = EmailMessage() msg.set_content(fp.read()) # me == the sender's email address # you == the recipient's email address msg['Subject'] = f'The contents of {textfile}' msg['From'] = me msg['To'] = you # Send the message via our own SMTP server. s = smtplib.SMTP('localhost') s.send_message(msg) s.quit() ``` Parsing [**RFC 822**](https://tools.ietf.org/html/rfc822.html) headers can easily be done by using the classes from the [`parser`](email.parser#module-email.parser "email.parser: Parse flat text email messages to produce a message object structure.") module: ``` # Import the email modules we'll need from email.parser import BytesParser, Parser from email.policy import default # If the e-mail headers are in a file, uncomment these two lines: # with open(messagefile, 'rb') as fp: # headers = BytesParser(policy=default).parse(fp) # Or for parsing headers in a string (this is an uncommon operation), use: headers = Parser(policy=default).parsestr( 'From: Foo Bar <user@example.com>\n' 'To: <someone_else@example.com>\n' 'Subject: Test message\n' '\n' 'Body would go here\n') # Now the header items can be accessed as a dictionary: print('To: {}'.format(headers['to'])) print('From: {}'.format(headers['from'])) print('Subject: {}'.format(headers['subject'])) # You can also access the parts of the addresses: print('Recipient username: {}'.format(headers['to'].addresses[0].username)) print('Sender name: {}'.format(headers['from'].addresses[0].display_name)) ``` Here’s an example of how to send a MIME message containing a bunch of family pictures that may be residing in a directory: ``` # Import smtplib for the actual sending function import smtplib # And imghdr to find the types of our images import imghdr # Here are the email package modules we'll need from email.message import EmailMessage # Create the container email message. msg = EmailMessage() msg['Subject'] = 'Our family reunion' # me == the sender's email address # family = the list of all recipients' email addresses msg['From'] = me msg['To'] = ', '.join(family) msg.preamble = 'You will not see this in a MIME-aware mail reader.\n' # Open the files in binary mode. Use imghdr to figure out the # MIME subtype for each specific image. for file in pngfiles: with open(file, 'rb') as fp: img_data = fp.read() msg.add_attachment(img_data, maintype='image', subtype=imghdr.what(None, img_data)) # Send the email via our own SMTP server.
with smtplib.SMTP('localhost') as s: s.send_message(msg) ``` Here’s an example of how to send the entire contents of a directory as an email message: [1](#id3) ``` #!/usr/bin/env python3 """Send the contents of a directory as a MIME message.""" import os import smtplib # For guessing MIME type based on file name extension import mimetypes from argparse import ArgumentParser from email.message import EmailMessage from email.policy import SMTP def main(): parser = ArgumentParser(description="""\ Send the contents of a directory as a MIME message. Unless the -o option is given, the email is sent by forwarding to your local SMTP server, which then does the normal delivery process. Your local machine must be running an SMTP server. """) parser.add_argument('-d', '--directory', help="""Mail the contents of the specified directory, otherwise use the current directory. Only the regular files in the directory are sent, and we don't recurse to subdirectories.""") parser.add_argument('-o', '--output', metavar='FILE', help="""Print the composed message to FILE instead of sending the message to the SMTP server.""") parser.add_argument('-s', '--sender', required=True, help='The value of the From: header (required)') parser.add_argument('-r', '--recipient', required=True, action='append', metavar='RECIPIENT', default=[], dest='recipients', help='A To: header value (at least one required)') args = parser.parse_args() directory = args.directory if not directory: directory = '.' # Create the message msg = EmailMessage() msg['Subject'] = f'Contents of directory {os.path.abspath(directory)}' msg['To'] = ', '.join(args.recipients) msg['From'] = args.sender msg.preamble = 'You will not see this in a MIME-aware mail reader.\n' for filename in os.listdir(directory): path = os.path.join(directory, filename) if not os.path.isfile(path): continue # Guess the content type based on the file's extension. Encoding # will be ignored, although we should check for simple things like # gzip'd or compressed files. ctype, encoding = mimetypes.guess_type(path) if ctype is None or encoding is not None: # No guess could be made, or the file is encoded (compressed), so # use a generic bag-of-bits type. ctype = 'application/octet-stream' maintype, subtype = ctype.split('/', 1) with open(path, 'rb') as fp: msg.add_attachment(fp.read(), maintype=maintype, subtype=subtype, filename=filename) # Now send or store the message if args.output: with open(args.output, 'wb') as fp: fp.write(msg.as_bytes(policy=SMTP)) else: with smtplib.SMTP('localhost') as s: s.send_message(msg) if __name__ == '__main__': main() ``` Here’s an example of how to unpack a MIME message like the one above, into a directory of files: ``` #!/usr/bin/env python3 """Unpack a MIME message into a directory of files.""" import os import email import mimetypes from email.policy import default from argparse import ArgumentParser def main(): parser = ArgumentParser(description="""\ Unpack a MIME message into a directory of files. 
""") parser.add_argument('-d', '--directory', required=True, help="""Unpack the MIME message into the named directory, which will be created if it doesn't already exist.""") parser.add_argument('msgfile') args = parser.parse_args() with open(args.msgfile, 'rb') as fp: msg = email.message_from_binary_file(fp, policy=default) try: os.mkdir(args.directory) except FileExistsError: pass counter = 1 for part in msg.walk(): # multipart/* are just containers if part.get_content_maintype() == 'multipart': continue # Applications should really sanitize the given filename so that an # email message can't be used to overwrite important files filename = part.get_filename() if not filename: ext = mimetypes.guess_extension(part.get_content_type()) if not ext: # Use a generic bag-of-bits extension ext = '.bin' filename = f'part-{counter:03d}{ext}' counter += 1 with open(os.path.join(args.directory, filename), 'wb') as fp: fp.write(part.get_payload(decode=True)) if __name__ == '__main__': main() ``` Here’s an example of how to create an HTML message with an alternative plain text version. To make things a bit more interesting, we include a related image in the html part, and we save a copy of what we are going to send to disk, as well as sending it. ``` #!/usr/bin/env python3 import smtplib from email.message import EmailMessage from email.headerregistry import Address from email.utils import make_msgid # Create the base text message. msg = EmailMessage() msg['Subject'] = "Ayons asperges pour le déjeuner" msg['From'] = Address("Pepé Le Pew", "pepe", "example.com") msg['To'] = (Address("Penelope Pussycat", "penelope", "example.com"), Address("Fabrette Pussycat", "fabrette", "example.com")) msg.set_content("""\ Salut! Cela ressemble à un excellent recipie[1] déjeuner. [1] http://www.yummly.com/recipe/Roasted-Asparagus-Epicurious-203718 --Pepé """) # Add the html version. This converts the message into a multipart/alternative # container, with the original text message as the first part and the new html # message as the second part. asparagus_cid = make_msgid() msg.add_alternative("""\ <html> <head></head> <body> <p>Salut!</p> <p>Cela ressemble à un excellent <a href="http://www.yummly.com/recipe/Roasted-Asparagus-Epicurious-203718"> recipie </a> déjeuner. </p> <img src="cid:{asparagus_cid}" /> </body> </html> """.format(asparagus_cid=asparagus_cid[1:-1]), subtype='html') # note that we needed to peel the <> off the msgid for use in the html. # Now add the related image to the html part. with open("roasted-asparagus.jpg", 'rb') as img: msg.get_payload()[1].add_related(img.read(), 'image', 'jpeg', cid=asparagus_cid) # Make a local copy of what we are going to send. with open('outgoing.msg', 'wb') as f: f.write(bytes(msg)) # Send the message via local SMTP server. with smtplib.SMTP('localhost') as s: s.send_message(msg) ``` If we were sent the message from the last example, here is one way we could process it: ``` import os import sys import tempfile import mimetypes import webbrowser # Import the email modules we'll need from email import policy from email.parser import BytesParser # An imaginary module that would make this work and be safe. from imaginary import magic_html_parser # In a real program you'd get the filename from the arguments. 
with open('outgoing.msg', 'rb') as fp: msg = BytesParser(policy=policy.default).parse(fp) # Now the header items can be accessed as a dictionary, and any non-ASCII will # be converted to unicode: print('To:', msg['to']) print('From:', msg['from']) print('Subject:', msg['subject']) # If we want to print a preview of the message content, we can extract whatever # the least formatted payload is and print the first three lines. Of course, # if the message has no plain text part printing the first three lines of html # is probably useless, but this is just a conceptual example. simplest = msg.get_body(preferencelist=('plain', 'html')) print() print(''.join(simplest.get_content().splitlines(keepends=True)[:3])) ans = input("View full message?") if ans.lower()[0] == 'n': sys.exit() # We can extract the richest alternative in order to display it: richest = msg.get_body() partfiles = {} if richest['content-type'].maintype == 'text': if richest['content-type'].subtype == 'plain': for line in richest.get_content().splitlines(): print(line) sys.exit() elif richest['content-type'].subtype == 'html': body = richest else: print("Don't know how to display {}".format(richest.get_content_type())) sys.exit() elif richest['content-type'].content_type == 'multipart/related': body = richest.get_body(preferencelist=('html',)) for part in richest.iter_attachments(): fn = part.get_filename() if fn: extension = os.path.splitext(part.get_filename())[1] else: extension = mimetypes.guess_extension(part.get_content_type()) with tempfile.NamedTemporaryFile(suffix=extension, delete=False) as f: f.write(part.get_content()) # again strip the <> to go from email form of cid to html form. partfiles[part['content-id'][1:-1]] = f.name else: print("Don't know how to display {}".format(richest.get_content_type())) sys.exit() with tempfile.NamedTemporaryFile(mode='w', delete=False) as f: # The magic_html_parser has to rewrite the href="cid:...." attributes to # point to the filenames in partfiles. It also has to do a safety-sanitize # of the html. It could be written using html.parser. f.write(magic_html_parser(body.get_content(), partfiles)) webbrowser.open(f.name) os.remove(f.name) for fn in partfiles.values(): os.remove(fn) # Of course, there are lots of email messages that could break this simple # minded program, but it will handle the most common ones. ``` Up to the prompt, the output from the above is: ``` To: Penelope Pussycat <penelope@example.com>, Fabrette Pussycat <fabrette@example.com> From: Pepé Le Pew <pepe@example.com> Subject: Ayons asperges pour le déjeuner Salut! Cela ressemble à un excellent recipie[1] déjeuner. ``` #### Footnotes `1` Thanks to Matthew Dixon Cowles for the original inspiration and examples. python zipapp — Manage executable Python zip archives zipapp — Manage executable Python zip archives ============================================== New in version 3.5. **Source code:** [Lib/zipapp.py](https://github.com/python/cpython/tree/3.9/Lib/zipapp.py) This module provides tools to manage the creation of zip files containing Python code, which can be [executed directly by the Python interpreter](../using/cmdline#using-on-interface-options). The module provides both a [Command-Line Interface](#zipapp-command-line-interface) and a [Python API](#zipapp-python-api). Basic Example ------------- The following example shows how the [Command-Line Interface](#zipapp-command-line-interface) can be used to create an executable archive from a directory containing Python code.
When run, the archive will execute the `main` function from the module `myapp` in the archive. ``` $ python -m zipapp myapp -m "myapp:main" $ python myapp.pyz <output from myapp> ``` Command-Line Interface ---------------------- When called as a program from the command line, the following form is used: ``` $ python -m zipapp source [options] ``` If *source* is a directory, this will create an archive from the contents of *source*. If *source* is a file, it should be an archive, and it will be copied to the target archive (or the contents of its shebang line will be displayed if the `--info` option is specified). The following options are understood: `-o <output>, --output=<output>` Write the output to a file named *output*. If this option is not specified, the output filename will be the same as the input *source*, with the extension `.pyz` added. If an explicit filename is given, it is used as is (so a `.pyz` extension should be included if required). An output filename must be specified if the *source* is an archive (and in that case, *output* must not be the same as *source*). `-p <interpreter>, --python=<interpreter>` Add a `#!` line to the archive specifying *interpreter* as the command to run. Also, on POSIX, make the archive executable. The default is to write no `#!` line, and not make the file executable. `-m <mainfn>, --main=<mainfn>` Write a `__main__.py` file to the archive that executes *mainfn*. The *mainfn* argument should have the form “pkg.mod:fn”, where “pkg.mod” is a package/module in the archive, and “fn” is a callable in the given module. The `__main__.py` file will execute that callable. [`--main`](#cmdoption-zipapp-m) cannot be specified when copying an archive. `-c, --compress` Compress files with the deflate method, reducing the size of the output file. By default, files are stored uncompressed in the archive. [`--compress`](#cmdoption-zipapp-c) has no effect when copying an archive. New in version 3.7. `--info` Display the interpreter embedded in the archive, for diagnostic purposes. In this case, any other options are ignored and SOURCE must be an archive, not a directory. `-h, --help` Print a short usage message and exit. Python API ---------- The module defines two convenience functions: `zipapp.create_archive(source, target=None, interpreter=None, main=None, filter=None, compressed=False)` Create an application archive from *source*. The source can be any of the following: * The name of a directory, or a [path-like object](../glossary#term-path-like-object) referring to a directory, in which case a new application archive will be created from the content of that directory. * The name of an existing application archive file, or a [path-like object](../glossary#term-path-like-object) referring to such a file, in which case the file is copied to the target (modifying it to reflect the value given for the *interpreter* argument). The file name should include the `.pyz` extension, if required. * A file object open for reading in bytes mode. The content of the file should be an application archive, and the file object is assumed to be positioned at the start of the archive. The *target* argument determines where the resulting archive will be written: * If it is the name of a file, or a [path-like object](../glossary#term-path-like-object), the archive will be written to that file. * If it is an open file object, the archive will be written to that file object, which must be open for writing in bytes mode.
* If the target is omitted (or `None`), the source must be a directory and the target will be a file with the same name as the source, with a `.pyz` extension added. The *interpreter* argument specifies the name of the Python interpreter with which the archive will be executed. It is written as a “shebang” line at the start of the archive. On POSIX, this will be interpreted by the OS, and on Windows it will be handled by the Python launcher. Omitting the *interpreter* results in no shebang line being written. If an interpreter is specified, and the target is a filename, the executable bit of the target file will be set. The *main* argument specifies the name of a callable which will be used as the main program for the archive. It can only be specified if the source is a directory, and the source does not already contain a `__main__.py` file. The *main* argument should take the form “pkg.module:callable” and the archive will be run by importing “pkg.module” and executing the given callable with no arguments. It is an error to omit *main* if the source is a directory and does not contain a `__main__.py` file, as otherwise the resulting archive would not be executable. The optional *filter* argument specifies a callback function that is passed a Path object representing the path to the file being added (relative to the source directory). It should return `True` if the file is to be added. The optional *compressed* argument determines whether files are compressed. If set to `True`, files in the archive are compressed with the deflate method; otherwise, files are stored uncompressed. This argument has no effect when copying an existing archive. If a file object is specified for *source* or *target*, it is the caller’s responsibility to close it after calling create\_archive. When copying an existing archive, file objects supplied only need `read` and `readline`, or `write` methods. When creating an archive from a directory, if the target is a file object it will be passed to the `zipfile.ZipFile` class, and must supply the methods needed by that class. New in version 3.7: Added the *filter* and *compressed* arguments. `zipapp.get_interpreter(archive)` Return the interpreter specified in the `#!` line at the start of the archive. If there is no `#!` line, return [`None`](constants#None "None"). The *archive* argument can be a filename or a file-like object open for reading in bytes mode. It is assumed to be at the start of the archive. Examples -------- Pack up a directory into an archive, and run it. ``` $ python -m zipapp myapp $ python myapp.pyz <output from myapp> ``` The same can be done using the [`create_archive()`](#zipapp.create_archive "zipapp.create_archive") function: ``` >>> import zipapp >>> zipapp.create_archive('myapp', 'myapp.pyz') ``` To make the application directly executable on POSIX, specify an interpreter to use. ``` $ python -m zipapp myapp -p "/usr/bin/env python" $ ./myapp.pyz <output from myapp> ``` To replace the shebang line on an existing archive, create a modified archive using the [`create_archive()`](#zipapp.create_archive "zipapp.create_archive") function: ``` >>> import zipapp >>> zipapp.create_archive('old_archive.pyz', 'new_archive.pyz', '/usr/bin/python3') ``` To update the file in place, do the replacement in memory using a `BytesIO` object, and then overwrite the source afterwards. Note that there is a risk when overwriting a file in place that an error will result in the loss of the original file. 
This code does not protect against such errors, but production code should do so. Also, this method will only work if the archive fits in memory: ``` >>> import zipapp >>> import io >>> temp = io.BytesIO() >>> zipapp.create_archive('myapp.pyz', temp, '/usr/bin/python2') >>> with open('myapp.pyz', 'wb') as f: ... f.write(temp.getvalue()) ``` Specifying the Interpreter -------------------------- Note that if you specify an interpreter and then distribute your application archive, you need to ensure that the interpreter used is portable. The Python launcher for Windows supports most common forms of POSIX `#!` line, but there are other issues to consider: * If you use “/usr/bin/env python” (or other forms of the “python” command, such as “/usr/bin/python”), you need to consider that your users may have either Python 2 or Python 3 as their default, and write your code to work under both versions. * If you use an explicit version, for example “/usr/bin/env python3”, your application will not work for users who do not have that version. (This may be what you want if you have not made your code Python 2 compatible). * There is no way to say “python X.Y or later”, so be careful of using an exact version like “/usr/bin/env python3.4” as you will need to change your shebang line for users of Python 3.5, for example. Typically, you should use “/usr/bin/env python2” or “/usr/bin/env python3”, depending on whether your code is written for Python 2 or 3. Creating Standalone Applications with zipapp -------------------------------------------- Using the [`zipapp`](#module-zipapp "zipapp: Manage executable Python zip archives") module, it is possible to create self-contained Python programs, which can be distributed to end users who only need to have a suitable version of Python installed on their system. The key to doing this is to bundle all of the application’s dependencies into the archive, along with the application code. The steps to create a standalone archive are as follows: 1. Create your application in a directory as normal, so you have a `myapp` directory containing a `__main__.py` file, and any supporting application code. 2. Install all of your application’s dependencies into the `myapp` directory, using pip: ``` $ python -m pip install -r requirements.txt --target myapp ``` (this assumes you have your project requirements in a `requirements.txt` file - if not, you can just list the dependencies manually on the pip command line). 3. Optionally, delete the `.dist-info` directories created by pip in the `myapp` directory. These hold metadata for pip to manage the packages, and as you won’t be making any further use of pip they aren’t required - although it won’t do any harm if you leave them (the sketch at the end of this section shows how to exclude them at packaging time instead). 4. Package the application using: ``` $ python -m zipapp -p "interpreter" myapp ``` This will produce a standalone executable, which can be run on any machine with the appropriate interpreter available. See [Specifying the Interpreter](#zipapp-specifying-the-interpreter) for details. It can be shipped to users as a single file. On Unix, the `myapp.pyz` file is executable as it stands. You can rename the file to remove the `.pyz` extension if you prefer a “plain” command name. On Windows, the `myapp.pyz[w]` file is executable by virtue of the fact that the Python interpreter registers the `.pyz` and `.pyzw` file extensions when installed.
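The packaging step can also be driven from Python rather than the command line. The following is a minimal sketch of one way to combine steps 3 and 4 above: instead of deleting the `.dist-info` directories, a *filter* callback simply leaves them out of the archive. The `myapp` name and interpreter line are just the values used in this section; adjust them for your project:

```
import zipapp

def skip_dist_info(path):
    # 'path' is a pathlib.Path relative to the source directory.
    # Exclude pip's .dist-info metadata; include everything else.
    return not any(part.endswith('.dist-info') for part in path.parts)

zipapp.create_archive('myapp', 'myapp.pyz',
                      interpreter='/usr/bin/env python3',
                      filter=skip_dist_info)
```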
### Making a Windows executable On Windows, registration of the `.pyz` extension is optional, and furthermore, there are certain places that don’t recognise registered extensions “transparently” (the simplest example is that `subprocess.run(['myapp'])` won’t find your application - you need to explicitly specify the extension). On Windows, therefore, it is often preferable to create an executable from the zipapp. This is relatively easy, although it does require a C compiler. The basic approach relies on the fact that zipfiles can have arbitrary data prepended, and Windows exe files can have arbitrary data appended. So by creating a suitable launcher and tacking the `.pyz` file onto the end of it, you end up with a single-file executable that runs your application. A suitable launcher can be as simple as the following: ``` #define Py_LIMITED_API 1 #include "Python.h" #define WIN32_LEAN_AND_MEAN #include <windows.h> #ifdef WINDOWS int WINAPI wWinMain( HINSTANCE hInstance, /* handle to current instance */ HINSTANCE hPrevInstance, /* handle to previous instance */ LPWSTR lpCmdLine, /* pointer to command line */ int nCmdShow /* show state of window */ ) #else int wmain() #endif { wchar_t **myargv = _alloca((__argc + 1) * sizeof(wchar_t*)); myargv[0] = __wargv[0]; memcpy(myargv + 1, __wargv, __argc * sizeof(wchar_t *)); return Py_Main(__argc+1, myargv); } ``` If you define the `WINDOWS` preprocessor symbol, this will generate a GUI executable, and without it, a console executable. To compile the executable, you can either just use the standard MSVC command line tools, or you can take advantage of the fact that distutils knows how to compile Python source: ``` >>> from distutils.ccompiler import new_compiler >>> import distutils.sysconfig >>> import sys >>> import os >>> from pathlib import Path >>> def compile(src): ... src = Path(src) ... cc = new_compiler() ... exe = src.stem ... cc.add_include_dir(distutils.sysconfig.get_python_inc()) ... cc.add_library_dir(os.path.join(sys.base_exec_prefix, 'libs')) ... # First the CLI executable ... objs = cc.compile([str(src)]) ... cc.link_executable(objs, exe) ... # Now the GUI executable ... cc.define_macro('WINDOWS') ... objs = cc.compile([str(src)]) ... cc.link_executable(objs, exe + 'w') >>> if __name__ == "__main__": ... compile("zastub.c") ``` The resulting launcher uses the “Limited ABI”, so it will run unchanged with any version of Python 3.x. All it needs is for Python (`python3.dll`) to be on the user’s `PATH`. For a fully standalone distribution, you can distribute the launcher with your application appended, bundled with the Python “embedded” distribution. This will run on any PC with the appropriate architecture (32 bit or 64 bit). ### Caveats There are some limitations to the process of bundling your application into a single file. In most, if not all, cases they can be addressed without needing major changes to your application. 1. If your application depends on a package that includes a C extension, that package cannot be run from a zip file (this is an OS limitation, as executable code must be present in the filesystem for the OS loader to load it). In this case, you can exclude that dependency from the zipfile, and either require your users to have it installed, or ship it alongside your zipfile and add code to your `__main__.py` to include the directory containing the unzipped module in `sys.path`.
In this case, you will need to make sure to ship appropriate binaries for your target architecture(s) (and potentially pick the correct version to add to `sys.path` at runtime, based on the user’s machine). 2. If you are shipping a Windows executable as described above, you either need to ensure that your users have `python3.dll` on their PATH (which is not the default behaviour of the installer) or you should bundle your application with the embedded distribution. 3. The suggested launcher above uses the Python embedding API. This means that in your application, `sys.executable` will be your application, and *not* a conventional Python interpreter. Your code and its dependencies need to be prepared for this possibility. For example, if your application uses the [`multiprocessing`](multiprocessing#module-multiprocessing "multiprocessing: Process-based parallelism.") module, it will need to call [`multiprocessing.set_executable()`](multiprocessing#multiprocessing.set_executable "multiprocessing.set_executable") to let the module know where to find the standard Python interpreter. The Python Zip Application Archive Format ----------------------------------------- Python has been able to execute zip files which contain a `__main__.py` file since version 2.6. In order to be executed by Python, an application archive simply has to be a standard zip file containing a `__main__.py` file which will be run as the entry point for the application. As usual for any Python script, the parent of the script (in this case the zip file) will be placed on [`sys.path`](sys#sys.path "sys.path") and thus further modules can be imported from the zip file. The zip file format allows arbitrary data to be prepended to a zip file. The zip application format uses this ability to prepend a standard POSIX “shebang” line to the file (`#!/path/to/interpreter`). Formally, the Python zip application format is therefore: 1. An optional shebang line, containing the characters `b'#!'` followed by an interpreter name, and then a newline (`b'\n'`) character. The interpreter name can be anything acceptable to the OS “shebang” processing, or the Python launcher on Windows. The interpreter should be encoded in UTF-8 on Windows, and in [`sys.getfilesystemencoding()`](sys#sys.getfilesystemencoding "sys.getfilesystemencoding") on POSIX. 2. Standard zipfile data, as generated by the [`zipfile`](zipfile#module-zipfile "zipfile: Read and write ZIP-format archive files.") module. The zipfile content *must* include a file called `__main__.py` (which must be in the “root” of the zipfile - i.e., it cannot be in a subdirectory). The zipfile data can be compressed or uncompressed. If an application archive has a shebang line, it may have the executable bit set on POSIX systems, to allow it to be executed directly. There is no requirement that the tools in this module are used to create application archives - the module is a convenience, but archives in the above format created by any means are acceptable to Python.
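The format described above can be exercised directly with the [`zipfile`](zipfile#module-zipfile "zipfile: Read and write ZIP-format archive files.") module, without using this module's helpers. A minimal sketch, assuming a `hello/__main__.py` file exists (both names are only examples):

```
import zipfile

# An optional shebang line followed by ordinary zipfile data is a
# complete Python zip application.
with open('hello.pyz', 'wb') as f:
    f.write(b'#!/usr/bin/env python3\n')
    with zipfile.ZipFile(f, 'w') as zf:
        zf.write('hello/__main__.py', '__main__.py')
```

Running `python hello.pyz` then executes the archived `__main__.py`, and [`get_interpreter()`](#zipapp.get_interpreter "zipapp.get_interpreter") should report the embedded `/usr/bin/env python3` line.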
python shelve — Python object persistence shelve — Python object persistence ================================== **Source code:** [Lib/shelve.py](https://github.com/python/cpython/tree/3.9/Lib/shelve.py) A “shelf” is a persistent, dictionary-like object. The difference with “dbm” databases is that the values (not the keys!) in a shelf can be essentially arbitrary Python objects — anything that the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") module can handle. This includes most class instances, recursive data types, and objects containing lots of shared sub-objects. The keys are ordinary strings. `shelve.open(filename, flag='c', protocol=None, writeback=False)` Open a persistent dictionary. The filename specified is the base filename for the underlying database. As a side-effect, an extension may be added to the filename and more than one file may be created. By default, the underlying database file is opened for reading and writing. The optional *flag* parameter has the same interpretation as the *flag* parameter of [`dbm.open()`](dbm#dbm.open "dbm.open"). By default, version 3 pickles are used to serialize values. The version of the pickle protocol can be specified with the *protocol* parameter. Because of Python semantics, a shelf cannot know when a mutable persistent-dictionary entry is modified. By default modified objects are written *only* when assigned to the shelf (see [Example](#shelve-example)). If the optional *writeback* parameter is set to `True`, all entries accessed are also cached in memory, and written back on [`sync()`](#shelve.Shelf.sync "shelve.Shelf.sync") and [`close()`](#shelve.Shelf.close "shelve.Shelf.close"); this can make it handier to mutate mutable entries in the persistent dictionary, but, if many entries are accessed, it can consume vast amounts of memory for the cache, and it can make the close operation very slow since all accessed entries are written back (there is no way to determine which accessed entries are mutable, nor which ones were actually mutated). Note Do not rely on the shelf being closed automatically; always call [`close()`](#shelve.Shelf.close "shelve.Shelf.close") explicitly when you don’t need it any more, or use [`shelve.open()`](#shelve.open "shelve.open") as a context manager: ``` with shelve.open('spam') as db: db['eggs'] = 'eggs' ``` Warning Because the [`shelve`](#module-shelve "shelve: Python object persistence.") module is backed by [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back."), it is insecure to load a shelf from an untrusted source. Like with pickle, loading a shelf can execute arbitrary code. Shelf objects support most of the methods and operations supported by dictionaries (except copying, constructors and operators `|` and `|=`). This eases the transition from dictionary based scripts to those requiring persistent storage. Two additional methods are supported: `Shelf.sync()` Write back all entries in the cache if the shelf was opened with *writeback* set to [`True`](constants#True "True"). Also empty the cache and synchronize the persistent dictionary on disk, if feasible. This is called automatically when the shelf is closed with [`close()`](#shelve.Shelf.close "shelve.Shelf.close"). `Shelf.close()` Synchronize and close the persistent *dict* object. Operations on a closed shelf will fail with a [`ValueError`](exceptions#ValueError "ValueError").
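To illustrate the *writeback* behaviour described above, here is a minimal sketch (the `scores` filename is arbitrary):

```
import shelve

with shelve.open('scores', writeback=True) as db:
    db['high'] = [10, 20]
    db['high'].append(30)   # the mutation is seen by the cache...
    db.sync()               # ...and written back here (and again at close)

with shelve.open('scores') as db:
    print(db['high'])       # [10, 20, 30]
```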
See also [Persistent dictionary recipe](https://code.activestate.com/recipes/576642/) with widely supported storage formats and having the speed of native dictionaries. Restrictions ------------ * The choice of which database package will be used (such as [`dbm.ndbm`](dbm#module-dbm.ndbm "dbm.ndbm: The standard \"database\" interface, based on ndbm. (Unix)") or [`dbm.gnu`](dbm#module-dbm.gnu "dbm.gnu: GNU's reinterpretation of dbm. (Unix)")) depends on which interface is available. Therefore it is not safe to open the database directly using [`dbm`](dbm#module-dbm "dbm: Interfaces to various Unix \"database\" formats."). The database is also (unfortunately) subject to the limitations of [`dbm`](dbm#module-dbm "dbm: Interfaces to various Unix \"database\" formats."), if it is used — this means that (the pickled representation of) the objects stored in the database should be fairly small, and in rare cases key collisions may cause the database to refuse updates. * The [`shelve`](#module-shelve "shelve: Python object persistence.") module does not support *concurrent* read/write access to shelved objects. (Multiple simultaneous read accesses are safe.) When a program has a shelf open for writing, no other program should have it open for reading or writing. Unix file locking can be used to solve this, but this differs across Unix versions and requires knowledge about the database implementation used. `class shelve.Shelf(dict, protocol=None, writeback=False, keyencoding='utf-8')` A subclass of [`collections.abc.MutableMapping`](collections.abc#collections.abc.MutableMapping "collections.abc.MutableMapping") which stores pickled values in the *dict* object. By default, version 3 pickles are used to serialize values. The version of the pickle protocol can be specified with the *protocol* parameter. See the [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") documentation for a discussion of the pickle protocols. If the *writeback* parameter is `True`, the object will hold a cache of all entries accessed and write them back to the *dict* at sync and close times. This allows natural operations on mutable entries, but can consume much more memory and make sync and close take a long time. The *keyencoding* parameter is the encoding used to encode keys before they are used with the underlying dict. A [`Shelf`](#shelve.Shelf "shelve.Shelf") object can also be used as a context manager, in which case it will be automatically closed when the [`with`](../reference/compound_stmts#with) block ends. Changed in version 3.2: Added the *keyencoding* parameter; previously, keys were always encoded in UTF-8. Changed in version 3.4: Added context manager support. `class shelve.BsdDbShelf(dict, protocol=None, writeback=False, keyencoding='utf-8')` A subclass of [`Shelf`](#shelve.Shelf "shelve.Shelf") which exposes `first()`, `next()`, `previous()`, `last()` and `set_location()` which are available in the third-party `bsddb` module from [pybsddb](https://www.jcea.es/programacion/pybsddb.htm) but not in other database modules. The *dict* object passed to the constructor must support those methods. This is generally accomplished by calling one of `bsddb.hashopen()`, `bsddb.btopen()` or `bsddb.rnopen()`. The optional *protocol*, *writeback*, and *keyencoding* parameters have the same interpretation as for the [`Shelf`](#shelve.Shelf "shelve.Shelf") class. 
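As a small illustration of the [`Shelf`](#shelve.Shelf "shelve.Shelf") class itself, any sufficiently dict-like object can serve as the backing store. Here, as a sketch only, a plain in-memory dict stands in for a dbm database, which makes the encoded keys and pickled values visible:

```
import shelve

store = {}                 # stands in for a dbm-style database
s = shelve.Shelf(store)
s['answer'] = [1, 2, 3]
print(s['answer'])         # [1, 2, 3]
print(list(store))         # [b'answer'] -- keys encoded, values pickled
s.close()
```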
`class shelve.DbfilenameShelf(filename, flag='c', protocol=None, writeback=False)` A subclass of [`Shelf`](#shelve.Shelf "shelve.Shelf") which accepts a *filename* instead of a dict-like object. The underlying file will be opened using [`dbm.open()`](dbm#dbm.open "dbm.open"). By default, the file will be created and opened for both read and write. The optional *flag* parameter has the same interpretation as for the [`open()`](#shelve.open "shelve.open") function. The optional *protocol* and *writeback* parameters have the same interpretation as for the [`Shelf`](#shelve.Shelf "shelve.Shelf") class. Example ------- To summarize the interface (`key` is a string, `data` is an arbitrary object): ``` import shelve d = shelve.open(filename) # open -- file may get suffix added by low-level # library d[key] = data # store data at key (overwrites old data if # using an existing key) data = d[key] # retrieve a COPY of data at key (raise KeyError # if no such key) del d[key] # delete data stored at key (raises KeyError # if no such key) flag = key in d # true if the key exists klist = list(d.keys()) # a list of all existing keys (slow!) # as d was opened WITHOUT writeback=True, beware: d['xx'] = [0, 1, 2] # this works as expected, but... d['xx'].append(3) # *this doesn't!* -- d['xx'] is STILL [0, 1, 2]! # having opened d without writeback=True, you need to code carefully: temp = d['xx'] # extracts the copy temp.append(5) # mutates the copy d['xx'] = temp # stores the copy right back, to persist it # or, d=shelve.open(filename,writeback=True) would let you just code # d['xx'].append(5) and have it work as expected, BUT it would also # consume more memory and make the d.close() operation slower. d.close() # close it ``` See also `Module` [`dbm`](dbm#module-dbm "dbm: Interfaces to various Unix \"database\" formats.") Generic interface to `dbm`-style databases. `Module` [`pickle`](pickle#module-pickle "pickle: Convert Python objects to streams of bytes and back.") Object serialization used by [`shelve`](#module-shelve "shelve: Python object persistence."). python pdb — The Python Debugger pdb — The Python Debugger ========================= **Source code:** [Lib/pdb.py](https://github.com/python/cpython/tree/3.9/Lib/pdb.py) The module [`pdb`](#module-pdb "pdb: The Python debugger for interactive interpreters.") defines an interactive source code debugger for Python programs. It supports setting (conditional) breakpoints and single stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame. It also supports post-mortem debugging and can be called under program control. The debugger is extensible – it is actually defined as the class [`Pdb`](#pdb.Pdb "pdb.Pdb"). This is currently undocumented but easily understood by reading the source. The extension interface uses the modules [`bdb`](bdb#module-bdb "bdb: Debugger framework.") and [`cmd`](cmd#module-cmd "cmd: Build line-oriented command interpreters."). The debugger’s prompt is `(Pdb)`. Typical usage to run a program under control of the debugger is: ``` >>> import pdb >>> import mymodule >>> pdb.run('mymodule.test()') > <string>(0)?() (Pdb) continue > <string>(1)?() (Pdb) continue NameError: 'spam' > <string>(1)?() (Pdb) ``` Changed in version 3.3: Tab-completion via the [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") module is available for commands and command arguments, e.g. 
the current global and local names are offered as arguments of the `p` command. `pdb.py` can also be invoked as a script to debug other scripts. For example: ``` python3 -m pdb myscript.py ``` When invoked as a script, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program. Automatic restarting preserves pdb’s state (such as breakpoints) and in most cases is more useful than quitting the debugger upon the program’s exit. New in version 3.2: `pdb.py` now accepts a `-c` option that executes commands as if given in a `.pdbrc` file, see [Debugger Commands](#debugger-commands). New in version 3.7: `pdb.py` now accepts a `-m` option that executes modules, similar to the way `python3 -m` does. As with a script, the debugger will pause execution just before the first line of the module. The typical usage to break into the debugger is to insert: ``` import pdb; pdb.set_trace() ``` at the location you want to break into the debugger, and then run the program. You can then step through the code following this statement, and continue running without the debugger using the [`continue`](#pdbcommand-continue) command. New in version 3.7: The built-in [`breakpoint()`](functions#breakpoint "breakpoint"), when called with defaults, can be used instead of `import pdb; pdb.set_trace()`. The typical usage to inspect a crashed program is: ``` >>> import pdb >>> import mymodule >>> mymodule.test() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "./mymodule.py", line 4, in test test2() File "./mymodule.py", line 3, in test2 print(spam) NameError: spam >>> pdb.pm() > ./mymodule.py(3)test2() -> print(spam) (Pdb) ``` The module defines the following functions; each enters the debugger in a slightly different way: `pdb.run(statement, globals=None, locals=None)` Execute the *statement* (given as a string or a code object) under debugger control. The debugger prompt appears before any code is executed; you can set breakpoints and type [`continue`](#pdbcommand-continue), or you can step through the statement using [`step`](#pdbcommand-step) or [`next`](#pdbcommand-next) (all these commands are explained below). The optional *globals* and *locals* arguments specify the environment in which the code is executed; by default the dictionary of the module [`__main__`](__main__#module-__main__ "__main__: The environment where the top-level script is run.") is used. (See the explanation of the built-in [`exec()`](functions#exec "exec") or [`eval()`](functions#eval "eval") functions.) `pdb.runeval(expression, globals=None, locals=None)` Evaluate the *expression* (given as a string or a code object) under debugger control. When [`runeval()`](#pdb.runeval "pdb.runeval") returns, it returns the value of the expression. Otherwise this function is similar to [`run()`](#pdb.run "pdb.run"). `pdb.runcall(function, *args, **kwds)` Call the *function* (a function or method object, not a string) with the given arguments. When [`runcall()`](#pdb.runcall "pdb.runcall") returns, it returns whatever the function call returned. The debugger prompt appears as soon as the function is entered. `pdb.set_trace(*, header=None)` Enter the debugger at the calling stack frame. This is useful to hard-code a breakpoint at a given point in a program, even if the code is not otherwise being debugged (e.g. when an assertion fails).
If given, *header* is printed to the console just before debugging begins. Changed in version 3.7: The keyword-only argument *header*. `pdb.post_mortem(traceback=None)` Enter post-mortem debugging of the given *traceback* object. If no *traceback* is given, it uses the one of the exception that is currently being handled (an exception must be being handled if the default is to be used). `pdb.pm()` Enter post-mortem debugging of the traceback found in [`sys.last_traceback`](sys#sys.last_traceback "sys.last_traceback"). The `run*` functions and [`set_trace()`](#pdb.set_trace "pdb.set_trace") are aliases for instantiating the [`Pdb`](#pdb.Pdb "pdb.Pdb") class and calling the method of the same name. If you want to access further features, you have to do this yourself: `class pdb.Pdb(completekey='tab', stdin=None, stdout=None, skip=None, nosigint=False, readrc=True)` [`Pdb`](#pdb.Pdb "pdb.Pdb") is the debugger class. The *completekey*, *stdin* and *stdout* arguments are passed to the underlying [`cmd.Cmd`](cmd#cmd.Cmd "cmd.Cmd") class; see the description there. The *skip* argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. [1](#id3) By default, Pdb sets a handler for the SIGINT signal (which is sent when the user presses `Ctrl-C` on the console) when you give a `continue` command. This allows you to break into the debugger again by pressing `Ctrl-C`. If you want Pdb not to touch the SIGINT handler, set *nosigint* to true. The *readrc* argument defaults to true and controls whether Pdb will load .pdbrc files from the filesystem. Example call to enable tracing with *skip*: ``` import pdb; pdb.Pdb(skip=['django.*']).set_trace() ``` Raises an [auditing event](sys#auditing) `pdb.Pdb` with no arguments. New in version 3.1: The *skip* argument. New in version 3.2: The *nosigint* argument. Previously, a SIGINT handler was never set by Pdb. Changed in version 3.6: The *readrc* argument. `run(statement, globals=None, locals=None)` `runeval(expression, globals=None, locals=None)` `runcall(function, *args, **kwds)` `set_trace()` See the documentation for the functions explained above. Debugger Commands ----------------- The commands recognized by the debugger are listed below. Most commands can be abbreviated to one or two letters as indicated; e.g. `h(elp)` means that either `h` or `help` can be used to enter the help command (but not `he` or `hel`, nor `H` or `Help` or `HELP`). Arguments to commands must be separated by whitespace (spaces or tabs). Optional arguments are enclosed in square brackets (`[]`) in the command syntax; the square brackets must not be typed. Alternatives in the command syntax are separated by a vertical bar (`|`). Entering a blank line repeats the last command entered. Exception: if the last command was a [`list`](#pdbcommand-list) command, the next 11 lines are listed. Commands that the debugger doesn’t recognize are assumed to be Python statements and are executed in the context of the program being debugged. Python statements can also be prefixed with an exclamation point (`!`). This is a powerful way to inspect the program being debugged; it is even possible to change a variable or call a function. When an exception occurs in such a statement, the exception name is printed but the debugger’s state is not changed. The debugger supports [aliases](#debugger-aliases). 
Aliases can have parameters, which allows a certain level of adaptability to the context under examination. Multiple commands may be entered on a single line, separated by `;;`. (A single `;` is not used as it is the separator for multiple commands in a line that is passed to the Python parser.) No intelligence is applied to separating the commands; the input is split at the first `;;` pair, even if it is in the middle of a quoted string. A workaround for strings with double semicolons is to use implicit string concatenation `';'';'` or `";"";"`. If a file `.pdbrc` exists in the user’s home directory or in the current directory, it is read in and executed as if it had been typed at the debugger prompt. This is particularly useful for aliases. If both files exist, the one in the home directory is read first and aliases defined there can be overridden by the local file. Changed in version 3.2: `.pdbrc` can now contain commands that continue debugging, such as [`continue`](#pdbcommand-continue) or [`next`](#pdbcommand-next). Previously, these commands had no effect. `h(elp) [command]` Without argument, print the list of available commands. With a *command* as argument, print help about that command. `help pdb` displays the full documentation (the docstring of the [`pdb`](#module-pdb "pdb: The Python debugger for interactive interpreters.") module). Since the *command* argument must be an identifier, `help exec` must be entered to get help on the `!` command. `w(here)` Print a stack trace, with the most recent frame at the bottom. An arrow indicates the current frame, which determines the context of most commands. `d(own) [count]` Move the current frame *count* (default one) levels down in the stack trace (to a newer frame). `u(p) [count]` Move the current frame *count* (default one) levels up in the stack trace (to an older frame). `b(reak) [([filename:]lineno | function) [, condition]]` With a *lineno* argument, set a break there in the current file. With a *function* argument, set a break at the first executable statement within that function. The line number may be prefixed with a filename and a colon, to specify a breakpoint in another file (probably one that hasn’t been loaded yet). The file is searched on [`sys.path`](sys#sys.path "sys.path"). Note that each breakpoint is assigned a number to which all the other breakpoint commands refer. If a second argument is present, it is an expression which must evaluate to true before the breakpoint is honored. Without argument, list all breaks, including for each breakpoint, the number of times that breakpoint has been hit, the current ignore count, and the associated condition if any. `tbreak [([filename:]lineno | function) [, condition]]` Temporary breakpoint, which is removed automatically when it is first hit. The arguments are the same as for [`break`](#pdbcommand-break). `cl(ear) [filename:lineno | bpnumber ...]` With a *filename:lineno* argument, clear all the breakpoints at this line. With a space separated list of breakpoint numbers, clear those breakpoints. Without argument, clear all breaks (but first ask confirmation). `disable [bpnumber ...]` Disable the breakpoints given as a space separated list of breakpoint numbers. Disabling a breakpoint means it cannot cause the program to stop execution, but unlike clearing a breakpoint, it remains in the list of breakpoints and can be (re-)enabled. `enable [bpnumber ...]` Enable the breakpoints specified. `ignore bpnumber [count]` Set the ignore count for the given breakpoint number.
If *count* is omitted, the ignore count is set to 0. A breakpoint becomes active when the ignore count is zero. When non-zero, the count is decremented each time the breakpoint is reached, provided the breakpoint is not disabled and any associated condition evaluates to true.

`condition bpnumber [condition]`

Set a new *condition* for the breakpoint, an expression which must evaluate to true before the breakpoint is honored. If *condition* is absent, any existing condition is removed; i.e., the breakpoint is made unconditional.

`commands [bpnumber]`

Specify a list of commands for breakpoint number *bpnumber*. The commands themselves appear on the following lines. Type a line containing just `end` to terminate the commands. An example:

```
(Pdb) commands 1
(com) p some_variable
(com) end
(Pdb)
```

To remove all commands from a breakpoint, type `commands` and follow it immediately with `end`; that is, give no commands.

With no *bpnumber* argument, `commands` refers to the last breakpoint set.

You can use breakpoint commands to start your program up again. Simply use the [`continue`](#pdbcommand-continue) command, or [`step`](#pdbcommand-step), or any other command that resumes execution.

Specifying any command resuming execution (currently [`continue`](#pdbcommand-continue), [`step`](#pdbcommand-step), [`next`](#pdbcommand-next), [`return`](#pdbcommand-return), [`jump`](#pdbcommand-jump), [`quit`](#pdbcommand-quit) and their abbreviations) terminates the command list (as if that command was immediately followed by end). This is because any time you resume execution (even with a simple next or step), you may encounter another breakpoint—which could have its own command list, leading to ambiguities about which list to execute.

If you use the ‘silent’ command in the command list, the usual message about stopping at a breakpoint is not printed. This may be desirable for breakpoints that are to print a specific message and then continue. If none of the other commands print anything, you see no sign that the breakpoint was reached.

`s(tep)`

Execute the current line, stop at the first possible occasion (either in a function that is called or on the next line in the current function).

`n(ext)`

Continue execution until the next line in the current function is reached or it returns. (The difference between [`next`](#pdbcommand-next) and [`step`](#pdbcommand-step) is that [`step`](#pdbcommand-step) stops inside a called function, while [`next`](#pdbcommand-next) executes called functions at (nearly) full speed, only stopping at the next line in the current function.)

`unt(il) [lineno]`

Without argument, continue execution until the line with a number greater than the current one is reached. With a line number, continue execution until a line with a number greater or equal to that is reached. In both cases, also stop when the current frame returns.

Changed in version 3.2: Allow giving an explicit line number.

`r(eturn)`

Continue execution until the current function returns.

`c(ont(inue))`

Continue execution, only stop when a breakpoint is encountered.

`j(ump) lineno`

Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don’t want to run. It should be noted that not all jumps are allowed – for instance it is not possible to jump into the middle of a [`for`](../reference/compound_stmts#for) loop or out of a [`finally`](../reference/compound_stmts#finally) clause.
`l(ist) [first[, last]]`

List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With `.` as argument, list 11 lines around the current line. With one argument, list 11 lines around that line. With two arguments, list the given range; if the second argument is less than the first, it is interpreted as a count.

The current line in the current frame is indicated by `->`. If an exception is being debugged, the line where the exception was originally raised or propagated is indicated by `>>`, if it differs from the current line.

New in version 3.2: The `>>` marker.

`ll | longlist`

List all source code for the current function or frame. Interesting lines are marked as for [`list`](#pdbcommand-list).

New in version 3.2.

`a(rgs)`

Print the argument list of the current function.

`p expression`

Evaluate the *expression* in the current context and print its value.

Note `print()` can also be used, but is not a debugger command — this executes the Python [`print()`](functions#print "print") function.

`pp expression`

Like the [`p`](#pdbcommand-p) command, except the value of the expression is pretty-printed using the [`pprint`](pprint#module-pprint "pprint: Data pretty printer.") module.

`whatis expression`

Print the type of the *expression*.

`source expression`

Try to get source code for the given object and display it.

New in version 3.2.

`display [expression]`

Display the value of the expression if it changed, each time execution stops in the current frame. Without expression, list all display expressions for the current frame.

New in version 3.2.

`undisplay [expression]`

Do not display the expression any more in the current frame. Without expression, clear all display expressions for the current frame.

New in version 3.2.

`interact`

Start an interactive interpreter (using the [`code`](code#module-code "code: Facilities to implement read-eval-print loops.") module) whose global namespace contains all the (global and local) names found in the current scope.

New in version 3.2.

`alias [name [command]]`

Create an alias called *name* that executes *command*. The command must *not* be enclosed in quotes. Replaceable parameters can be indicated by `%1`, `%2`, and so on, while `%*` is replaced by all the parameters. If no command is given, the current alias for *name* is shown. If no arguments are given, all aliases are listed.

Aliases may be nested and can contain anything that can be legally typed at the pdb prompt. Note that internal pdb commands *can* be overridden by aliases. Such a command is then hidden until the alias is removed. Aliasing is recursively applied to the first word of the command line; all other words in the line are left alone.

As an example, here are two useful aliases (especially when placed in the `.pdbrc` file):

```
# Print instance variables (usage "pi classInst")
alias pi for k in %1.__dict__.keys(): print("%1.",k,"=",%1.__dict__[k])

# Print instance variables in self
alias ps pi self
```

`unalias name`

Delete the specified alias.

`! statement`

Execute the (one-line) *statement* in the context of the current stack frame. The exclamation point can be omitted unless the first word of the statement resembles a debugger command.
To set a global variable, you can prefix the assignment command with a [`global`](../reference/simple_stmts#global) statement on the same line, e.g.: ``` (Pdb) global list_options; list_options = ['-l'] (Pdb) ``` `run [args ...]` `restart [args ...]` Restart the debugged Python program. If an argument is supplied, it is split with [`shlex`](shlex#module-shlex "shlex: Simple lexical analysis for Unix shell-like languages.") and the result is used as the new [`sys.argv`](sys#sys.argv "sys.argv"). History, breakpoints, actions and debugger options are preserved. [`restart`](#pdbcommand-restart) is an alias for [`run`](#pdbcommand-run). `q(uit)` Quit from the debugger. The program being executed is aborted. `debug code` Enter a recursive debugger that steps through the code argument (which is an arbitrary expression or statement to be executed in the current environment). `retval` Print the return value for the last return of a function. #### Footnotes `1` Whether a frame is considered to originate in a certain module is determined by the `__name__` in the frame globals.
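As a closing sketch (the failing function here is invented for illustration), the post-mortem entry points described at the top of this module can be driven from an ordinary exception handler:

```
import pdb

def divide(a, b):
    return a / b  # raises ZeroDivisionError when b == 0

try:
    divide(1, 0)
except ZeroDivisionError:
    # With no argument, post_mortem() uses the exception
    # that is currently being handled.
    pdb.post_mortem()
```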
difflib — Helpers for computing deltas
======================================

**Source code:** [Lib/difflib.py](https://github.com/python/cpython/tree/3.9/Lib/difflib.py)

This module provides classes and functions for comparing sequences. It can be used, for example, to compare files, and can produce information about file differences in various formats, including HTML and context and unified diffs. For comparing directories and files, see also the [`filecmp`](filecmp#module-filecmp "filecmp: Compare files efficiently.") module.

`class difflib.SequenceMatcher`

This is a flexible class for comparing pairs of sequences of any type, so long as the sequence elements are [hashable](../glossary#term-hashable). The basic algorithm predates, and is a little fancier than, an algorithm published in the late 1980s by Ratcliff and Obershelp under the hyperbolic name “gestalt pattern matching.” The idea is to find the longest contiguous matching subsequence that contains no “junk” elements; these “junk” elements are ones that are uninteresting in some sense, such as blank lines or whitespace. (Handling junk is an extension to the Ratcliff and Obershelp algorithm.) The same idea is then applied recursively to the pieces of the sequences to the left and to the right of the matching subsequence. This does not yield minimal edit sequences, but does tend to yield matches that “look right” to people.

**Timing:** The basic Ratcliff-Obershelp algorithm is cubic time in the worst case and quadratic time in the expected case. [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher") is quadratic time for the worst case and has expected-case behavior dependent in a complicated way on how many elements the sequences have in common; best case time is linear.

**Automatic junk heuristic:** [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher") supports a heuristic that automatically treats certain sequence items as junk. The heuristic counts how many times each individual item appears in the sequence. If an item’s duplicates (after the first one) account for more than 1% of the sequence and the sequence is at least 200 items long, this item is marked as “popular” and is treated as junk for the purpose of sequence matching. This heuristic can be turned off by setting the `autojunk` argument to `False` when creating the [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher").

New in version 3.2: The *autojunk* parameter.

`class difflib.Differ`

This is a class for comparing sequences of lines of text, and producing human-readable differences or deltas. Differ uses [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher") both to compare sequences of lines, and to compare sequences of characters within similar (near-matching) lines.

Each line of a [`Differ`](#difflib.Differ "difflib.Differ") delta begins with a two-letter code:

| Code | Meaning |
| --- | --- |
| `'- '` | line unique to sequence 1 |
| `'+ '` | line unique to sequence 2 |
| `'  '` | line common to both sequences |
| `'? '` | line not present in either input sequence |

Lines beginning with ‘`?`’ attempt to guide the eye to intraline differences, and were not present in either input sequence. These lines can be confusing if the sequences contain tab characters.
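A minimal sketch of these two-letter codes in action (the input lines are invented; run it to see the exact `'? '` guide lines):

```
import sys
from difflib import Differ

# Inputs must be lists of newline-terminated strings, e.g. from readlines().
result = Differ().compare(['one\n', 'two\n', 'three\n'],
                          ['ore\n', 'two\n', 'emu\n'])
sys.stdout.writelines(result)  # lines prefixed with '- ', '+ ', '  ' or '? '
```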
`class difflib.HtmlDiff`

This class can be used to create an HTML table (or a complete HTML file containing the table) showing a side by side, line by line comparison of text with inter-line and intra-line change highlights. The table can be generated in either full or contextual difference mode.

The constructor for this class is:

`__init__(tabsize=8, wrapcolumn=None, linejunk=None, charjunk=IS_CHARACTER_JUNK)`

Initializes an instance of [`HtmlDiff`](#difflib.HtmlDiff "difflib.HtmlDiff").

*tabsize* is an optional keyword argument to specify tab stop spacing and defaults to `8`.

*wrapcolumn* is an optional keyword argument to specify the column number at which lines are broken and wrapped; it defaults to `None`, meaning lines are not wrapped.

*linejunk* and *charjunk* are optional keyword arguments passed into [`ndiff()`](#difflib.ndiff "difflib.ndiff") (used by [`HtmlDiff`](#difflib.HtmlDiff "difflib.HtmlDiff") to generate the side by side HTML differences). See the [`ndiff()`](#difflib.ndiff "difflib.ndiff") documentation for argument default values and descriptions.

The following methods are public:

`make_file(fromlines, tolines, fromdesc='', todesc='', context=False, numlines=5, *, charset='utf-8')`

Compares *fromlines* and *tolines* (lists of strings) and returns a string which is a complete HTML file containing a table showing line by line differences with inter-line and intra-line changes highlighted.

*fromdesc* and *todesc* are optional keyword arguments to specify from/to file column header strings (both default to an empty string).

*context* and *numlines* are both optional keyword arguments. Set *context* to `True` when contextual differences are to be shown, else the default is `False` to show the full files. *numlines* defaults to `5`. When *context* is `True`, *numlines* controls the number of context lines which surround the difference highlights. When *context* is `False`, *numlines* controls the number of lines which are shown before a difference highlight when using the “next” hyperlinks (setting to zero would cause the “next” hyperlinks to place the next difference highlight at the top of the browser without any leading context).

Note *fromdesc* and *todesc* are interpreted as unescaped HTML and should be properly escaped when receiving input from untrusted sources.

Changed in version 3.5: The *charset* keyword-only argument was added. The default charset of the HTML document changed from `'ISO-8859-1'` to `'utf-8'`.

`make_table(fromlines, tolines, fromdesc='', todesc='', context=False, numlines=5)`

Compares *fromlines* and *tolines* (lists of strings) and returns a string which is a complete HTML table showing line by line differences with inter-line and intra-line changes highlighted.

The arguments for this method are the same as those for the [`make_file()`](#difflib.HtmlDiff.make_file "difflib.HtmlDiff.make_file") method.

`Tools/scripts/diff.py` is a command-line front-end to this class and contains a good example of its use.

`difflib.context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n')`

Compare *a* and *b* (lists of strings); return a delta (a [generator](../glossary#term-generator) generating the delta lines) in context diff format.

Context diffs are a compact way of showing just the lines that have changed plus a few lines of context. The changes are shown in a before/after style. The number of context lines is set by *n*, which defaults to three.

By default, the diff control lines (those with `***` or `---`) are created with a trailing newline.
This is helpful so that inputs created from [`io.IOBase.readlines()`](io#io.IOBase.readlines "io.IOBase.readlines") result in diffs that are suitable for use with [`io.IOBase.writelines()`](io#io.IOBase.writelines "io.IOBase.writelines") since both the inputs and outputs have trailing newlines.

For inputs that do not have trailing newlines, set the *lineterm* argument to `""` so that the output will be uniformly newline free.

The context diff format normally has a header for filenames and modification times. Any or all of these may be specified using strings for *fromfile*, *tofile*, *fromfiledate*, and *tofiledate*. The modification times are normally expressed in the ISO 8601 format. If not specified, the strings default to blanks.

```
>>> s1 = ['bacon\n', 'eggs\n', 'ham\n', 'guido\n']
>>> s2 = ['python\n', 'eggy\n', 'hamster\n', 'guido\n']
>>> sys.stdout.writelines(context_diff(s1, s2, fromfile='before.py', tofile='after.py'))
*** before.py
--- after.py
***************
*** 1,4 ****
! bacon
! eggs
! ham
  guido
--- 1,4 ----
! python
! eggy
! hamster
  guido
```

See [A command-line interface to difflib](#difflib-interface) for a more detailed example.

`difflib.get_close_matches(word, possibilities, n=3, cutoff=0.6)`

Return a list of the best “good enough” matches. *word* is a sequence for which close matches are desired (typically a string), and *possibilities* is a list of sequences against which to match *word* (typically a list of strings).

Optional argument *n* (default `3`) is the maximum number of close matches to return; *n* must be greater than `0`.

Optional argument *cutoff* (default `0.6`) is a float in the range [0, 1]. Possibilities that don’t score at least that similar to *word* are ignored.

The best (no more than *n*) matches among the possibilities are returned in a list, sorted by similarity score, most similar first.

```
>>> get_close_matches('appel', ['ape', 'apple', 'peach', 'puppy'])
['apple', 'ape']
>>> import keyword
>>> get_close_matches('wheel', keyword.kwlist)
['while']
>>> get_close_matches('pineapple', keyword.kwlist)
[]
>>> get_close_matches('accept', keyword.kwlist)
['except']
```

`difflib.ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK)`

Compare *a* and *b* (lists of strings); return a [`Differ`](#difflib.Differ "difflib.Differ")-style delta (a [generator](../glossary#term-generator) generating the delta lines).

Optional keyword parameters *linejunk* and *charjunk* are filtering functions (or `None`):

*linejunk*: A function that accepts a single string argument, and returns true if the string is junk, or false if not. The default is `None`. There is also a module-level function [`IS_LINE_JUNK()`](#difflib.IS_LINE_JUNK "difflib.IS_LINE_JUNK"), which filters out lines without visible characters, except for at most one pound character (`'#'`) – however the underlying [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher") class does a dynamic analysis of which lines are so frequent as to constitute noise, and this usually works better than using this function.

*charjunk*: A function that accepts a character (a string of length 1), and returns true if the character is junk, or false if not. The default is the module-level function [`IS_CHARACTER_JUNK()`](#difflib.IS_CHARACTER_JUNK "difflib.IS_CHARACTER_JUNK"), which filters out whitespace characters (a blank or tab; it’s a bad idea to include newline in this!).

`Tools/scripts/ndiff.py` is a command-line front-end to this function.

```
>>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True),
...              'ore\ntree\nemu\n'.splitlines(keepends=True))
>>> print(''.join(diff), end="")
- one
?  ^
+ ore
?  ^
- two
- three
?  -
+ tree
+ emu
```

`difflib.restore(sequence, which)`

Return one of the two sequences that generated a delta.

Given a *sequence* produced by [`Differ.compare()`](#difflib.Differ.compare "difflib.Differ.compare") or [`ndiff()`](#difflib.ndiff "difflib.ndiff"), extract lines originating from file 1 or 2 (parameter *which*), stripping off line prefixes.

Example:

```
>>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True),
...              'ore\ntree\nemu\n'.splitlines(keepends=True))
>>> diff = list(diff) # materialize the generated delta into a list
>>> print(''.join(restore(diff, 1)), end="")
one
two
three
>>> print(''.join(restore(diff, 2)), end="")
ore
tree
emu
```

`difflib.unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n')`

Compare *a* and *b* (lists of strings); return a delta (a [generator](../glossary#term-generator) generating the delta lines) in unified diff format.

Unified diffs are a compact way of showing just the lines that have changed plus a few lines of context. The changes are shown in an inline style (instead of separate before/after blocks). The number of context lines is set by *n*, which defaults to three.

By default, the diff control lines (those with `---`, `+++`, or `@@`) are created with a trailing newline. This is helpful so that inputs created from [`io.IOBase.readlines()`](io#io.IOBase.readlines "io.IOBase.readlines") result in diffs that are suitable for use with [`io.IOBase.writelines()`](io#io.IOBase.writelines "io.IOBase.writelines") since both the inputs and outputs have trailing newlines.

For inputs that do not have trailing newlines, set the *lineterm* argument to `""` so that the output will be uniformly newline free.

The unified diff format normally has a header for filenames and modification times. Any or all of these may be specified using strings for *fromfile*, *tofile*, *fromfiledate*, and *tofiledate*. The modification times are normally expressed in the ISO 8601 format. If not specified, the strings default to blanks.

```
>>> s1 = ['bacon\n', 'eggs\n', 'ham\n', 'guido\n']
>>> s2 = ['python\n', 'eggy\n', 'hamster\n', 'guido\n']
>>> sys.stdout.writelines(unified_diff(s1, s2, fromfile='before.py', tofile='after.py'))
--- before.py
+++ after.py
@@ -1,4 +1,4 @@
-bacon
-eggs
-ham
+python
+eggy
+hamster
 guido
```

See [A command-line interface to difflib](#difflib-interface) for a more detailed example.

`difflib.diff_bytes(dfunc, a, b, fromfile=b'', tofile=b'', fromfiledate=b'', tofiledate=b'', n=3, lineterm=b'\n')`

Compare *a* and *b* (lists of bytes objects) using *dfunc*; yield a sequence of delta lines (also bytes) in the format returned by *dfunc*. *dfunc* must be a callable, typically either [`unified_diff()`](#difflib.unified_diff "difflib.unified_diff") or [`context_diff()`](#difflib.context_diff "difflib.context_diff").

Allows you to compare data with unknown or inconsistent encoding. All inputs except *n* must be bytes objects, not str. Works by losslessly converting all inputs (except *n*) to str, and calling `dfunc(a, b, fromfile, tofile, fromfiledate, tofiledate, n, lineterm)`. The output of *dfunc* is then converted back to bytes, so the delta lines that you receive have the same unknown/inconsistent encodings as *a* and *b*.

New in version 3.5.

`difflib.IS_LINE_JUNK(line)`

Return `True` for ignorable lines.
The line *line* is ignorable if *line* is blank or contains a single `'#'`, otherwise it is not ignorable. Used as a default for parameter *linejunk* in [`ndiff()`](#difflib.ndiff "difflib.ndiff") in older versions. `difflib.IS_CHARACTER_JUNK(ch)` Return `True` for ignorable characters. The character *ch* is ignorable if *ch* is a space or tab, otherwise it is not ignorable. Used as a default for parameter *charjunk* in [`ndiff()`](#difflib.ndiff "difflib.ndiff"). See also [Pattern Matching: The Gestalt Approach](http://www.drdobbs.com/database/pattern-matching-the-gestalt-approach/184407970) Discussion of a similar algorithm by John W. Ratcliff and D. E. Metzener. This was published in [Dr. Dobb’s Journal](http://www.drdobbs.com/) in July, 1988. SequenceMatcher Objects ----------------------- The [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher") class has this constructor: `class difflib.SequenceMatcher(isjunk=None, a='', b='', autojunk=True)` Optional argument *isjunk* must be `None` (the default) or a one-argument function that takes a sequence element and returns true if and only if the element is “junk” and should be ignored. Passing `None` for *isjunk* is equivalent to passing `lambda x: False`; in other words, no elements are ignored. For example, pass: ``` lambda x: x in " \t" ``` if you’re comparing lines as sequences of characters, and don’t want to synch up on blanks or hard tabs. The optional arguments *a* and *b* are sequences to be compared; both default to empty strings. The elements of both sequences must be [hashable](../glossary#term-hashable). The optional argument *autojunk* can be used to disable the automatic junk heuristic. New in version 3.2: The *autojunk* parameter. SequenceMatcher objects get three data attributes: *bjunk* is the set of elements of *b* for which *isjunk* is `True`; *bpopular* is the set of non-junk elements considered popular by the heuristic (if it is not disabled); *b2j* is a dict mapping the remaining elements of *b* to a list of positions where they occur. All three are reset whenever *b* is reset with [`set_seqs()`](#difflib.SequenceMatcher.set_seqs "difflib.SequenceMatcher.set_seqs") or [`set_seq2()`](#difflib.SequenceMatcher.set_seq2 "difflib.SequenceMatcher.set_seq2"). New in version 3.2: The *bjunk* and *bpopular* attributes. [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher") objects have the following methods: `set_seqs(a, b)` Set the two sequences to be compared. [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher") computes and caches detailed information about the second sequence, so if you want to compare one sequence against many sequences, use [`set_seq2()`](#difflib.SequenceMatcher.set_seq2 "difflib.SequenceMatcher.set_seq2") to set the commonly used sequence once and call [`set_seq1()`](#difflib.SequenceMatcher.set_seq1 "difflib.SequenceMatcher.set_seq1") repeatedly, once for each of the other sequences. `set_seq1(a)` Set the first sequence to be compared. The second sequence to be compared is not changed. `set_seq2(b)` Set the second sequence to be compared. The first sequence to be compared is not changed. `find_longest_match(alo=0, ahi=None, blo=0, bhi=None)` Find longest matching block in `a[alo:ahi]` and `b[blo:bhi]`. 
If *isjunk* was omitted or `None`, [`find_longest_match()`](#difflib.SequenceMatcher.find_longest_match "difflib.SequenceMatcher.find_longest_match") returns `(i, j, k)` such that `a[i:i+k]` is equal to `b[j:j+k]`, where `alo <= i <= i+k <= ahi` and `blo <= j <= j+k <= bhi`. For all `(i', j', k')` meeting those conditions, the additional conditions `k >= k'`, `i <= i'`, and if `i == i'`, `j <= j'` are also met. In other words, of all maximal matching blocks, return one that starts earliest in *a*, and of all those maximal matching blocks that start earliest in *a*, return the one that starts earliest in *b*. ``` >>> s = SequenceMatcher(None, " abcd", "abcd abcd") >>> s.find_longest_match(0, 5, 0, 9) Match(a=0, b=4, size=5) ``` If *isjunk* was provided, first the longest matching block is determined as above, but with the additional restriction that no junk element appears in the block. Then that block is extended as far as possible by matching (only) junk elements on both sides. So the resulting block never matches on junk except as identical junk happens to be adjacent to an interesting match. Here’s the same example as before, but considering blanks to be junk. That prevents `' abcd'` from matching the `' abcd'` at the tail end of the second sequence directly. Instead only the `'abcd'` can match, and matches the leftmost `'abcd'` in the second sequence: ``` >>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd") >>> s.find_longest_match(0, 5, 0, 9) Match(a=1, b=0, size=4) ``` If no blocks match, this returns `(alo, blo, 0)`. This method returns a [named tuple](../glossary#term-named-tuple) `Match(a, b, size)`. Changed in version 3.9: Added default arguments. `get_matching_blocks()` Return list of triples describing non-overlapping matching subsequences. Each triple is of the form `(i, j, n)`, and means that `a[i:i+n] == b[j:j+n]`. The triples are monotonically increasing in *i* and *j*. The last triple is a dummy, and has the value `(len(a), len(b), 0)`. It is the only triple with `n == 0`. If `(i, j, n)` and `(i', j', n')` are adjacent triples in the list, and the second is not the last triple in the list, then `i+n < i'` or `j+n < j'`; in other words, adjacent triples always describe non-adjacent equal blocks. ``` >>> s = SequenceMatcher(None, "abxcd", "abcd") >>> s.get_matching_blocks() [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)] ``` `get_opcodes()` Return list of 5-tuples describing how to turn *a* into *b*. Each tuple is of the form `(tag, i1, i2, j1, j2)`. The first tuple has `i1 == j1 == 0`, and remaining tuples have *i1* equal to the *i2* from the preceding tuple, and, likewise, *j1* equal to the previous *j2*. The *tag* values are strings, with these meanings: | Value | Meaning | | --- | --- | | `'replace'` | `a[i1:i2]` should be replaced by `b[j1:j2]`. | | `'delete'` | `a[i1:i2]` should be deleted. Note that `j1 == j2` in this case. | | `'insert'` | `b[j1:j2]` should be inserted at `a[i1:i1]`. Note that `i1 == i2` in this case. | | `'equal'` | `a[i1:i2] == b[j1:j2]` (the sub-sequences are equal). | For example: ``` >>> a = "qabxcd" >>> b = "abycdf" >>> s = SequenceMatcher(None, a, b) >>> for tag, i1, i2, j1, j2 in s.get_opcodes(): ... print('{:7} a[{}:{}] --> b[{}:{}] {!r:>8} --> {!r}'.format( ... 
tag, i1, i2, j1, j2, a[i1:i2], b[j1:j2])) delete a[0:1] --> b[0:0] 'q' --> '' equal a[1:3] --> b[0:2] 'ab' --> 'ab' replace a[3:4] --> b[2:3] 'x' --> 'y' equal a[4:6] --> b[3:5] 'cd' --> 'cd' insert a[6:6] --> b[5:6] '' --> 'f' ``` `get_grouped_opcodes(n=3)` Return a [generator](../glossary#term-generator) of groups with up to *n* lines of context. Starting with the groups returned by [`get_opcodes()`](#difflib.SequenceMatcher.get_opcodes "difflib.SequenceMatcher.get_opcodes"), this method splits out smaller change clusters and eliminates intervening ranges which have no changes. The groups are returned in the same format as [`get_opcodes()`](#difflib.SequenceMatcher.get_opcodes "difflib.SequenceMatcher.get_opcodes"). `ratio()` Return a measure of the sequences’ similarity as a float in the range [0, 1]. Where T is the total number of elements in both sequences, and M is the number of matches, this is 2.0\*M / T. Note that this is `1.0` if the sequences are identical, and `0.0` if they have nothing in common. This is expensive to compute if [`get_matching_blocks()`](#difflib.SequenceMatcher.get_matching_blocks "difflib.SequenceMatcher.get_matching_blocks") or [`get_opcodes()`](#difflib.SequenceMatcher.get_opcodes "difflib.SequenceMatcher.get_opcodes") hasn’t already been called, in which case you may want to try [`quick_ratio()`](#difflib.SequenceMatcher.quick_ratio "difflib.SequenceMatcher.quick_ratio") or [`real_quick_ratio()`](#difflib.SequenceMatcher.real_quick_ratio "difflib.SequenceMatcher.real_quick_ratio") first to get an upper bound. Note Caution: The result of a [`ratio()`](#difflib.SequenceMatcher.ratio "difflib.SequenceMatcher.ratio") call may depend on the order of the arguments. For instance: ``` >>> SequenceMatcher(None, 'tide', 'diet').ratio() 0.25 >>> SequenceMatcher(None, 'diet', 'tide').ratio() 0.5 ``` `quick_ratio()` Return an upper bound on [`ratio()`](#difflib.SequenceMatcher.ratio "difflib.SequenceMatcher.ratio") relatively quickly. `real_quick_ratio()` Return an upper bound on [`ratio()`](#difflib.SequenceMatcher.ratio "difflib.SequenceMatcher.ratio") very quickly. The three methods that return the ratio of matching to total characters can give different results due to differing levels of approximation, although `quick_ratio()` and `real_quick_ratio()` are always at least as large as `ratio()`: ``` >>> s = SequenceMatcher(None, "abcd", "bcde") >>> s.ratio() 0.75 >>> s.quick_ratio() 0.75 >>> s.real_quick_ratio() 1.0 ``` SequenceMatcher Examples ------------------------ This example compares two strings, considering blanks to be “junk”: ``` >>> s = SequenceMatcher(lambda x: x == " ", ... "private Thread currentThread;", ... "private volatile Thread currentThread;") ``` `ratio()` returns a float in [0, 1], measuring the similarity of the sequences. As a rule of thumb, a `ratio()` value over 0.6 means the sequences are close matches: ``` >>> print(round(s.ratio(), 3)) 0.866 ``` If you’re only interested in where the sequences match, `get_matching_blocks()` is handy: ``` >>> for block in s.get_matching_blocks(): ... print("a[%d] and b[%d] match for %d elements" % block) a[0] and b[0] match for 8 elements a[8] and b[17] match for 21 elements a[29] and b[38] match for 0 elements ``` Note that the last tuple returned by `get_matching_blocks()` is always a dummy, `(len(a), len(b), 0)`, and this is the only case in which the last tuple element (number of elements matched) is `0`. 
If you want to know how to change the first sequence into the second, use `get_opcodes()`: ``` >>> for opcode in s.get_opcodes(): ... print("%6s a[%d:%d] b[%d:%d]" % opcode) equal a[0:8] b[0:8] insert a[8:8] b[8:17] equal a[8:29] b[17:38] ``` See also * The [`get_close_matches()`](#difflib.get_close_matches "difflib.get_close_matches") function in this module which shows how simple code building on [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher") can be used to do useful work. * [Simple version control recipe](https://code.activestate.com/recipes/576729/) for a small application built with [`SequenceMatcher`](#difflib.SequenceMatcher "difflib.SequenceMatcher"). Differ Objects -------------- Note that [`Differ`](#difflib.Differ "difflib.Differ")-generated deltas make no claim to be **minimal** diffs. To the contrary, minimal diffs are often counter-intuitive, because they synch up anywhere possible, sometimes accidental matches 100 pages apart. Restricting synch points to contiguous matches preserves some notion of locality, at the occasional cost of producing a longer diff. The [`Differ`](#difflib.Differ "difflib.Differ") class has this constructor: `class difflib.Differ(linejunk=None, charjunk=None)` Optional keyword parameters *linejunk* and *charjunk* are for filter functions (or `None`): *linejunk*: A function that accepts a single string argument, and returns true if the string is junk. The default is `None`, meaning that no line is considered junk. *charjunk*: A function that accepts a single character argument (a string of length 1), and returns true if the character is junk. The default is `None`, meaning that no character is considered junk. These junk-filtering functions speed up matching to find differences and do not cause any differing lines or characters to be ignored. Read the description of the [`find_longest_match()`](#difflib.SequenceMatcher.find_longest_match "difflib.SequenceMatcher.find_longest_match") method’s *isjunk* parameter for an explanation. [`Differ`](#difflib.Differ "difflib.Differ") objects are used (deltas generated) via a single method: `compare(a, b)` Compare two sequences of lines, and generate the delta (a sequence of lines). Each sequence must contain individual single-line strings ending with newlines. Such sequences can be obtained from the [`readlines()`](io#io.IOBase.readlines "io.IOBase.readlines") method of file-like objects. The delta generated also consists of newline-terminated strings, ready to be printed as-is via the [`writelines()`](io#io.IOBase.writelines "io.IOBase.writelines") method of a file-like object. Differ Example -------------- This example compares two texts. First we set up the texts, sequences of individual single-line strings ending with newlines (such sequences can also be obtained from the `readlines()` method of file-like objects): ``` >>> text1 = ''' 1. Beautiful is better than ugly. ... 2. Explicit is better than implicit. ... 3. Simple is better than complex. ... 4. Complex is better than complicated. ... '''.splitlines(keepends=True) >>> len(text1) 4 >>> text1[0][-1] '\n' >>> text2 = ''' 1. Beautiful is better than ugly. ... 3. Simple is better than complex. ... 4. Complicated is better than complex. ... 5. Flat is better than nested. ... 
'''.splitlines(keepends=True) ``` Next we instantiate a Differ object: ``` >>> d = Differ() ``` Note that when instantiating a [`Differ`](#difflib.Differ "difflib.Differ") object we may pass functions to filter out line and character “junk.” See the [`Differ()`](#difflib.Differ "difflib.Differ") constructor for details. Finally, we compare the two: ``` >>> result = list(d.compare(text1, text2)) ``` `result` is a list of strings, so let’s pretty-print it: ``` >>> from pprint import pprint >>> pprint(result) [' 1. Beautiful is better than ugly.\n', '- 2. Explicit is better than implicit.\n', '- 3. Simple is better than complex.\n', '+ 3. Simple is better than complex.\n', '? ++\n', '- 4. Complex is better than complicated.\n', '? ^ ---- ^\n', '+ 4. Complicated is better than complex.\n', '? ++++ ^ ^\n', '+ 5. Flat is better than nested.\n'] ``` As a single multi-line string it looks like this: ``` >>> import sys >>> sys.stdout.writelines(result) 1. Beautiful is better than ugly. - 2. Explicit is better than implicit. - 3. Simple is better than complex. + 3. Simple is better than complex. ? ++ - 4. Complex is better than complicated. ? ^ ---- ^ + 4. Complicated is better than complex. ? ++++ ^ ^ + 5. Flat is better than nested. ``` A command-line interface to difflib ----------------------------------- This example shows how to use difflib to create a `diff`-like utility. It is also contained in the Python source distribution, as `Tools/scripts/diff.py`. ``` #!/usr/bin/env python3 """ Command line interface to difflib.py providing diffs in four formats: * ndiff: lists every line and highlights interline changes. * context: highlights clusters of changes in a before/after format. * unified: highlights clusters of changes in an inline format. * html: generates side by side comparison with change highlights. """ import sys, os, difflib, argparse from datetime import datetime, timezone def file_mtime(path): t = datetime.fromtimestamp(os.stat(path).st_mtime, timezone.utc) return t.astimezone().isoformat() def main(): parser = argparse.ArgumentParser() parser.add_argument('-c', action='store_true', default=False, help='Produce a context format diff (default)') parser.add_argument('-u', action='store_true', default=False, help='Produce a unified format diff') parser.add_argument('-m', action='store_true', default=False, help='Produce HTML side by side diff ' '(can use -c and -l in conjunction)') parser.add_argument('-n', action='store_true', default=False, help='Produce a ndiff format diff') parser.add_argument('-l', '--lines', type=int, default=3, help='Set number of context lines (default 3)') parser.add_argument('fromfile') parser.add_argument('tofile') options = parser.parse_args() n = options.lines fromfile = options.fromfile tofile = options.tofile fromdate = file_mtime(fromfile) todate = file_mtime(tofile) with open(fromfile) as ff: fromlines = ff.readlines() with open(tofile) as tf: tolines = tf.readlines() if options.u: diff = difflib.unified_diff(fromlines, tolines, fromfile, tofile, fromdate, todate, n=n) elif options.n: diff = difflib.ndiff(fromlines, tolines) elif options.m: diff = difflib.HtmlDiff().make_file(fromlines,tolines,fromfile,tofile,context=options.c,numlines=n) else: diff = difflib.context_diff(fromlines, tolines, fromfile, tofile, fromdate, todate, n=n) sys.stdout.writelines(diff) if __name__ == '__main__': main() ```
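The [`diff_bytes()`](#difflib.diff_bytes "difflib.diff_bytes") function documented above has no example of its own, so here is a minimal hedged sketch (the sample bytes and header names are invented) of driving [`unified_diff()`](#difflib.unified_diff "difflib.unified_diff") over data whose encoding is unknown:

```
import sys
import difflib

old = b'spam\n'.splitlines(keepends=True)        # [b'spam\n']
new = b'spam\neggs\n'.splitlines(keepends=True)  # [b'spam\n', b'eggs\n']

# The header arguments must be bytes too; the delta lines come back as bytes.
for line in difflib.diff_bytes(difflib.unified_diff, old, new,
                               fromfile=b'old.txt', tofile=b'new.txt'):
    sys.stdout.buffer.write(line)
```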
struct — Interpret bytes as packed binary data
==============================================

**Source code:** [Lib/struct.py](https://github.com/python/cpython/tree/3.9/Lib/struct.py)

This module performs conversions between Python values and C structs represented as Python [`bytes`](stdtypes#bytes "bytes") objects. This can be used in handling binary data stored in files or from network connections, among other sources. It uses [Format Strings](#struct-format-strings) as compact descriptions of the layout of the C structs and the intended conversion to/from Python values.

Note By default, the result of packing a given C struct includes pad bytes in order to maintain proper alignment for the C types involved; similarly, alignment is taken into account when unpacking. This behavior is chosen so that the bytes of a packed struct correspond exactly to the layout in memory of the corresponding C struct. To handle platform-independent data formats or omit implicit pad bytes, use `standard` size and alignment instead of `native` size and alignment: see [Byte Order, Size, and Alignment](#struct-alignment) for details.

Several [`struct`](#module-struct "struct: Interpret bytes as packed binary data.") functions (and methods of [`Struct`](#struct.Struct "struct.Struct")) take a *buffer* argument. This refers to objects that implement the [Buffer Protocol](../c-api/buffer#bufferobjects) and provide either a readable or read-writable buffer. The most common types used for that purpose are [`bytes`](stdtypes#bytes "bytes") and [`bytearray`](stdtypes#bytearray "bytearray"), but many other types that can be viewed as an array of bytes implement the buffer protocol, so that they can be read/filled without additional copying from a [`bytes`](stdtypes#bytes "bytes") object.

Functions and Exceptions
------------------------

The module defines the following exception and functions:

`exception struct.error`

Exception raised on various occasions; argument is a string describing what is wrong.

`struct.pack(format, v1, v2, ...)`

Return a bytes object containing the values *v1*, *v2*, … packed according to the format string *format*. The arguments must match the values required by the format exactly.

`struct.pack_into(format, buffer, offset, v1, v2, ...)`

Pack the values *v1*, *v2*, … according to the format string *format* and write the packed bytes into the writable buffer *buffer* starting at position *offset*. Note that *offset* is a required argument.

`struct.unpack(format, buffer)`

Unpack from the buffer *buffer* (presumably packed by `pack(format, ...)`) according to the format string *format*. The result is a tuple even if it contains exactly one item. The buffer’s size in bytes must match the size required by the format, as reflected by [`calcsize()`](#struct.calcsize "struct.calcsize").

`struct.unpack_from(format, /, buffer, offset=0)`

Unpack from *buffer* starting at position *offset*, according to the format string *format*. The result is a tuple even if it contains exactly one item. The buffer’s size in bytes, starting at position *offset*, must be at least the size required by the format, as reflected by [`calcsize()`](#struct.calcsize "struct.calcsize").

`struct.iter_unpack(format, buffer)`

Iteratively unpack from the buffer *buffer* according to the format string *format*. This function returns an iterator which will read equally-sized chunks from the buffer until all its contents have been consumed.
The buffer’s size in bytes must be a multiple of the size required by the format, as reflected by [`calcsize()`](#struct.calcsize "struct.calcsize"). Each iteration yields a tuple as specified by the format string. New in version 3.4. `struct.calcsize(format)` Return the size of the struct (and hence of the bytes object produced by `pack(format, ...)`) corresponding to the format string *format*. Format Strings -------------- Format strings are the mechanism used to specify the expected layout when packing and unpacking data. They are built up from [Format Characters](#format-characters), which specify the type of data being packed/unpacked. In addition, there are special characters for controlling the [Byte Order, Size, and Alignment](#struct-alignment). ### Byte Order, Size, and Alignment By default, C types are represented in the machine’s native format and byte order, and properly aligned by skipping pad bytes if necessary (according to the rules used by the C compiler). Alternatively, the first character of the format string can be used to indicate the byte order, size and alignment of the packed data, according to the following table: | Character | Byte order | Size | Alignment | | --- | --- | --- | --- | | `@` | native | native | native | | `=` | native | standard | none | | `<` | little-endian | standard | none | | `>` | big-endian | standard | none | | `!` | network (= big-endian) | standard | none | If the first character is not one of these, `'@'` is assumed. Native byte order is big-endian or little-endian, depending on the host system. For example, Intel x86 and AMD64 (x86-64) are little-endian; Motorola 68000 and PowerPC G5 are big-endian; ARM and Intel Itanium feature switchable endianness (bi-endian). Use `sys.byteorder` to check the endianness of your system. Native size and alignment are determined using the C compiler’s `sizeof` expression. This is always combined with native byte order. Standard size depends only on the format character; see the table in the [Format Characters](#format-characters) section. Note the difference between `'@'` and `'='`: both use native byte order, but the size and alignment of the latter is standardized. The form `'!'` represents the network byte order which is always big-endian as defined in [IETF RFC 1700](https://tools.ietf.org/html/rfc1700). There is no way to indicate non-native byte order (force byte-swapping); use the appropriate choice of `'<'` or `'>'`. Notes: 1. Padding is only automatically added between successive structure members. No padding is added at the beginning or the end of the encoded struct. 2. No padding is added when using non-native size and alignment, e.g. with ‘<’, ‘>’, ‘=’, and ‘!’. 3. To align the end of a structure to the alignment requirement of a particular type, end the format with the code for that type with a repeat count of zero. See [Examples](#struct-examples). ### Format Characters Format characters have the following meaning; the conversion between C and Python values should be obvious given their types. The ‘Standard size’ column refers to the size of the packed value in bytes when using standard size; that is, when the format string starts with one of `'<'`, `'>'`, `'!'` or `'='`. When using native size, the size of the packed value is platform-dependent. 
| Format | C Type | Python type | Standard size | Notes | | --- | --- | --- | --- | --- | | `x` | pad byte | no value | | | | `c` | `char` | bytes of length 1 | 1 | | | `b` | `signed char` | integer | 1 | (1), (2) | | `B` | `unsigned char` | integer | 1 | (2) | | `?` | `_Bool` | bool | 1 | (1) | | `h` | `short` | integer | 2 | (2) | | `H` | `unsigned short` | integer | 2 | (2) | | `i` | `int` | integer | 4 | (2) | | `I` | `unsigned int` | integer | 4 | (2) | | `l` | `long` | integer | 4 | (2) | | `L` | `unsigned long` | integer | 4 | (2) | | `q` | `long long` | integer | 8 | (2) | | `Q` | `unsigned long long` | integer | 8 | (2) | | `n` | `ssize_t` | integer | | (3) | | `N` | `size_t` | integer | | (3) | | `e` | (6) | float | 2 | (4) | | `f` | `float` | float | 4 | (4) | | `d` | `double` | float | 8 | (4) | | `s` | `char[]` | bytes | | | | `p` | `char[]` | bytes | | | | `P` | `void *` | integer | | (5) | Changed in version 3.3: Added support for the `'n'` and `'N'` formats. Changed in version 3.6: Added support for the `'e'` format. Notes: 1. The `'?'` conversion code corresponds to the `_Bool` type defined by C99. If this type is not available, it is simulated using a `char`. In standard mode, it is always represented by one byte. 2. When attempting to pack a non-integer using any of the integer conversion codes, if the non-integer has a [`__index__()`](../reference/datamodel#object.__index__ "object.__index__") method then that method is called to convert the argument to an integer before packing. Changed in version 3.2: Added use of the [`__index__()`](../reference/datamodel#object.__index__ "object.__index__") method for non-integers. 3. The `'n'` and `'N'` conversion codes are only available for the native size (selected as the default or with the `'@'` byte order character). For the standard size, you can use whichever of the other integer formats fits your application. 4. For the `'f'`, `'d'` and `'e'` conversion codes, the packed representation uses the IEEE 754 binary32, binary64 or binary16 format (for `'f'`, `'d'` or `'e'` respectively), regardless of the floating-point format used by the platform. 5. The `'P'` format character is only available for the native byte ordering (selected as the default or with the `'@'` byte order character). The byte order character `'='` chooses to use little- or big-endian ordering based on the host system. The struct module does not interpret this as native ordering, so the `'P'` format is not available. 6. The IEEE 754 binary16 “half precision” type was introduced in the 2008 revision of the [IEEE 754 standard](https://en.wikipedia.org/wiki/IEEE_floating_point#IEEE_754-2008). It has a sign bit, a 5-bit exponent and 11-bit precision (with 10 bits explicitly stored), and can represent numbers between approximately `6.1e-05` and `6.5e+04` at full precision. This type is not widely supported by C compilers: on a typical machine, an unsigned short can be used for storage, but not for math operations. See the Wikipedia page on the [half-precision floating-point format](https://en.wikipedia.org/wiki/Half-precision_floating-point_format) for more information. A format character may be preceded by an integral repeat count. For example, the format string `'4h'` means exactly the same as `'hhhh'`. Whitespace characters between formats are ignored; a count and its format must not contain whitespace though. 
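A small illustration of the byte-order prefixes and repeat counts described above (the values are chosen arbitrarily):

```
>>> from struct import pack, calcsize
>>> pack('<2h', 1, 2)   # little-endian, standard size: no padding
b'\x01\x00\x02\x00'
>>> pack('>2h', 1, 2)   # big-endian: most significant byte first
b'\x00\x01\x00\x02'
>>> calcsize('!i') == calcsize('>i')   # network order is big-endian
True
```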
For the `'s'` format character, the count is interpreted as the length of the bytes, not a repeat count like for the other format characters; for example, `'10s'` means a single 10-byte string, while `'10c'` means 10 characters. If a count is not given, it defaults to 1. For packing, the string is truncated or padded with null bytes as appropriate to make it fit. For unpacking, the resulting bytes object always has exactly the specified number of bytes. As a special case, `'0s'` means a single, empty string (while `'0c'` means 0 characters). When packing a value `x` using one of the integer formats (`'b'`, `'B'`, `'h'`, `'H'`, `'i'`, `'I'`, `'l'`, `'L'`, `'q'`, `'Q'`), if `x` is outside the valid range for that format then [`struct.error`](#struct.error "struct.error") is raised. Changed in version 3.1: Previously, some of the integer formats wrapped out-of-range values and raised [`DeprecationWarning`](exceptions#DeprecationWarning "DeprecationWarning") instead of [`struct.error`](#struct.error "struct.error"). The `'p'` format character encodes a “Pascal string”, meaning a short variable-length string stored in a *fixed number of bytes*, given by the count. The first byte stored is the length of the string, or 255, whichever is smaller. The bytes of the string follow. If the string passed in to [`pack()`](#struct.pack "struct.pack") is too long (longer than the count minus 1), only the leading `count-1` bytes of the string are stored. If the string is shorter than `count-1`, it is padded with null bytes so that exactly count bytes in all are used. Note that for [`unpack()`](#struct.unpack "struct.unpack"), the `'p'` format character consumes `count` bytes, but that the string returned can never contain more than 255 bytes. For the `'?'` format character, the return value is either [`True`](constants#True "True") or [`False`](constants#False "False"). When packing, the truth value of the argument object is used. Either 0 or 1 in the native or standard bool representation will be packed, and any non-zero value will be `True` when unpacking. ### Examples Note All examples assume a native byte order, size, and alignment with a big-endian machine. A basic example of packing/unpacking three integers: ``` >>> from struct import * >>> pack('hhl', 1, 2, 3) b'\x00\x01\x00\x02\x00\x00\x00\x03' >>> unpack('hhl', b'\x00\x01\x00\x02\x00\x00\x00\x03') (1, 2, 3) >>> calcsize('hhl') 8 ``` Unpacked fields can be named by assigning them to variables or by wrapping the result in a named tuple: ``` >>> record = b'raymond \x32\x12\x08\x01\x08' >>> name, serialnum, school, gradelevel = unpack('<10sHHb', record) >>> from collections import namedtuple >>> Student = namedtuple('Student', 'name serialnum school gradelevel') >>> Student._make(unpack('<10sHHb', record)) Student(name=b'raymond ', serialnum=4658, school=264, gradelevel=8) ``` The ordering of format characters may have an impact on size since the padding needed to satisfy alignment requirements is different: ``` >>> pack('ci', b'*', 0x12131415) b'*\x00\x00\x00\x12\x13\x14\x15' >>> pack('ic', 0x12131415, b'*') b'\x12\x13\x14\x15*' >>> calcsize('ci') 8 >>> calcsize('ic') 5 ``` The following format `'llh0l'` specifies two pad bytes at the end, assuming longs are aligned on 4-byte boundaries: ``` >>> pack('llh0l', 1, 2, 3) b'\x00\x00\x00\x01\x00\x00\x00\x02\x00\x03\x00\x00' ``` This only works when native size and alignment are in effect; standard size and alignment does not enforce any alignment. 
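One more sketch in the same vein: [`pack_into()`](#struct.pack_into "struct.pack_into") and [`unpack_from()`](#struct.unpack_from "struct.unpack_from") operate in place on a writable buffer such as a [`bytearray`](stdtypes#bytearray "bytearray") (the offset here is arbitrary):

```
>>> import struct
>>> buf = bytearray(8)               # pre-allocated writable buffer
>>> struct.pack_into('>HH', buf, 2, 0x1234, 0x5678)
>>> struct.unpack_from('>HH', buf, 2)
(4660, 22136)
```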
See also

`Module` [`array`](array#module-array "array: Space efficient arrays of uniformly typed numeric values.") Packed binary storage of homogeneous data.

`Module` [`xdrlib`](xdrlib#module-xdrlib "xdrlib: Encoders and decoders for the External Data Representation (XDR). (deprecated)") Packing and unpacking of XDR data.

Classes
-------

The [`struct`](#module-struct "struct: Interpret bytes as packed binary data.") module also defines the following type:

`class struct.Struct(format)`

Return a new Struct object which writes and reads binary data according to the format string *format*. Creating a Struct object once and calling its methods is more efficient than calling the [`struct`](#module-struct "struct: Interpret bytes as packed binary data.") functions with the same format since the format string only needs to be compiled once.

Note The compiled versions of the most recent format strings passed to [`Struct`](#struct.Struct "struct.Struct") and the module-level functions are cached, so programs that use only a few format strings needn’t worry about reusing a single [`Struct`](#struct.Struct "struct.Struct") instance.

Compiled Struct objects support the following methods and attributes:

`pack(v1, v2, ...)`

Identical to the [`pack()`](#struct.pack "struct.pack") function, using the compiled format. (`len(result)` will equal [`size`](#struct.Struct.size "struct.Struct.size").)

`pack_into(buffer, offset, v1, v2, ...)`

Identical to the [`pack_into()`](#struct.pack_into "struct.pack_into") function, using the compiled format.

`unpack(buffer)`

Identical to the [`unpack()`](#struct.unpack "struct.unpack") function, using the compiled format. The buffer’s size in bytes must equal [`size`](#struct.Struct.size "struct.Struct.size").

`unpack_from(buffer, offset=0)`

Identical to the [`unpack_from()`](#struct.unpack_from "struct.unpack_from") function, using the compiled format. The buffer’s size in bytes, starting at position *offset*, must be at least [`size`](#struct.Struct.size "struct.Struct.size").

`iter_unpack(buffer)`

Identical to the [`iter_unpack()`](#struct.iter_unpack "struct.iter_unpack") function, using the compiled format. The buffer’s size in bytes must be a multiple of [`size`](#struct.Struct.size "struct.Struct.size").

New in version 3.4.

`format`

The format string used to construct this Struct object.

Changed in version 3.7: The format string type is now [`str`](stdtypes#str "str") instead of [`bytes`](stdtypes#bytes "bytes").

`size`

The calculated size of the struct (and hence of the bytes object produced by the [`pack()`](#struct.pack "struct.pack") method) corresponding to [`format`](#struct.Struct.format "struct.Struct.format").

Graphical User Interfaces with Tk
=================================

Tk/Tcl has long been an integral part of Python. It provides a robust and platform independent windowing toolkit that is available to Python programmers using the [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") package, and its extension, the [`tkinter.tix`](tkinter.tix#module-tkinter.tix "tkinter.tix: Tk Extension Widgets for Tkinter") and the [`tkinter.ttk`](tkinter.ttk#module-tkinter.ttk "tkinter.ttk: Tk themed widget set") modules.

The [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") package is a thin object-oriented layer on top of Tcl/Tk.
To use [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces"), you don’t need to write Tcl code, but you will need to consult the Tk documentation, and occasionally the Tcl documentation. [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") is a set of wrappers that implement the Tk widgets as Python classes. [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces")’s chief virtues are that it is fast, and that it usually comes bundled with Python. Although its standard documentation is weak, good material is available, which includes: references, tutorials, a book and others. [`tkinter`](tkinter#module-tkinter "tkinter: Interface to Tcl/Tk for graphical user interfaces") is also famous for having an outdated look and feel, which has been vastly improved in Tk 8.5. Nevertheless, there are many other GUI libraries that you could be interested in. The Python wiki lists several alternative [GUI frameworks and tools](https://wiki.python.org/moin/GuiProgramming). * [`tkinter` — Python interface to Tcl/Tk](tkinter) + [Tkinter Modules](tkinter#tkinter-modules) + [Tkinter Life Preserver](tkinter#tkinter-life-preserver) - [How To Use This Section](tkinter#how-to-use-this-section) - [A Simple Hello World Program](tkinter#a-simple-hello-world-program) + [A (Very) Quick Look at Tcl/Tk](tkinter#a-very-quick-look-at-tcl-tk) + [Mapping Basic Tk into Tkinter](tkinter#mapping-basic-tk-into-tkinter) + [How Tk and Tkinter are Related](tkinter#how-tk-and-tkinter-are-related) + [Handy Reference](tkinter#handy-reference) - [Setting Options](tkinter#setting-options) - [The Packer](tkinter#the-packer) - [Packer Options](tkinter#packer-options) - [Coupling Widget Variables](tkinter#coupling-widget-variables) - [The Window Manager](tkinter#the-window-manager) - [Tk Option Data Types](tkinter#tk-option-data-types) - [Bindings and Events](tkinter#bindings-and-events) - [The index Parameter](tkinter#the-index-parameter) - [Images](tkinter#images) + [File Handlers](tkinter#file-handlers) * [`tkinter.colorchooser` — Color choosing dialog](tkinter.colorchooser) * [`tkinter.font` — Tkinter font wrapper](tkinter.font) * [Tkinter Dialogs](dialog) + [`tkinter.simpledialog` — Standard Tkinter input dialogs](dialog#module-tkinter.simpledialog) + [`tkinter.filedialog` — File selection dialogs](dialog#module-tkinter.filedialog) - [Native Load/Save Dialogs](dialog#native-load-save-dialogs) + [`tkinter.commondialog` — Dialog window templates](dialog#module-tkinter.commondialog) * [`tkinter.messagebox` — Tkinter message prompts](tkinter.messagebox) * [`tkinter.scrolledtext` — Scrolled Text Widget](tkinter.scrolledtext) * [`tkinter.dnd` — Drag and drop support](tkinter.dnd) * [`tkinter.ttk` — Tk themed widgets](tkinter.ttk) + [Using Ttk](tkinter.ttk#using-ttk) + [Ttk Widgets](tkinter.ttk#ttk-widgets) + [Widget](tkinter.ttk#widget) - [Standard Options](tkinter.ttk#standard-options) - [Scrollable Widget Options](tkinter.ttk#scrollable-widget-options) - [Label Options](tkinter.ttk#label-options) - [Compatibility Options](tkinter.ttk#compatibility-options) - [Widget States](tkinter.ttk#widget-states) - [ttk.Widget](tkinter.ttk#ttk-widget) + [Combobox](tkinter.ttk#combobox) - [Options](tkinter.ttk#options) - [Virtual events](tkinter.ttk#virtual-events) - [ttk.Combobox](tkinter.ttk#ttk-combobox) + [Spinbox](tkinter.ttk#spinbox) - [Options](tkinter.ttk#id1) - [Virtual events](tkinter.ttk#id2) - 
    - [ttk.Spinbox](tkinter.ttk#ttk-spinbox)
  + [Notebook](tkinter.ttk#notebook)
    - [Options](tkinter.ttk#id3)
    - [Tab Options](tkinter.ttk#tab-options)
    - [Tab Identifiers](tkinter.ttk#tab-identifiers)
    - [Virtual Events](tkinter.ttk#id4)
    - [ttk.Notebook](tkinter.ttk#ttk-notebook)
  + [Progressbar](tkinter.ttk#progressbar)
    - [Options](tkinter.ttk#id5)
    - [ttk.Progressbar](tkinter.ttk#ttk-progressbar)
  + [Separator](tkinter.ttk#separator)
    - [Options](tkinter.ttk#id6)
  + [Sizegrip](tkinter.ttk#sizegrip)
    - [Platform-specific notes](tkinter.ttk#platform-specific-notes)
    - [Bugs](tkinter.ttk#bugs)
  + [Treeview](tkinter.ttk#treeview)
    - [Options](tkinter.ttk#id7)
    - [Item Options](tkinter.ttk#item-options)
    - [Tag Options](tkinter.ttk#tag-options)
    - [Column Identifiers](tkinter.ttk#column-identifiers)
    - [Virtual Events](tkinter.ttk#id8)
    - [ttk.Treeview](tkinter.ttk#ttk-treeview)
  + [Ttk Styling](tkinter.ttk#ttk-styling)
    - [Layouts](tkinter.ttk#layouts)
* [`tkinter.tix` — Extension widgets for Tk](tkinter.tix)
  + [Using Tix](tkinter.tix#using-tix)
  + [Tix Widgets](tkinter.tix#tix-widgets)
    - [Basic Widgets](tkinter.tix#basic-widgets)
    - [File Selectors](tkinter.tix#file-selectors)
    - [Hierarchical ListBox](tkinter.tix#hierarchical-listbox)
    - [Tabular ListBox](tkinter.tix#tabular-listbox)
    - [Manager Widgets](tkinter.tix#manager-widgets)
    - [Image Types](tkinter.tix#image-types)
    - [Miscellaneous Widgets](tkinter.tix#miscellaneous-widgets)
    - [Form Geometry Manager](tkinter.tix#form-geometry-manager)
  + [Tix Commands](tkinter.tix#tix-commands)
* [IDLE](idle)
  + [Menus](idle#menus)
    - [File menu (Shell and Editor)](idle#file-menu-shell-and-editor)
    - [Edit menu (Shell and Editor)](idle#edit-menu-shell-and-editor)
    - [Format menu (Editor window only)](idle#format-menu-editor-window-only)
    - [Run menu (Editor window only)](idle#run-menu-editor-window-only)
    - [Shell menu (Shell window only)](idle#shell-menu-shell-window-only)
    - [Debug menu (Shell window only)](idle#debug-menu-shell-window-only)
    - [Options menu (Shell and Editor)](idle#options-menu-shell-and-editor)
    - [Window menu (Shell and Editor)](idle#window-menu-shell-and-editor)
    - [Help menu (Shell and Editor)](idle#help-menu-shell-and-editor)
    - [Context Menus](idle#context-menus)
  + [Editing and navigation](idle#editing-and-navigation)
    - [Editor windows](idle#editor-windows)
    - [Key bindings](idle#key-bindings)
    - [Automatic indentation](idle#automatic-indentation)
    - [Completions](idle#completions)
    - [Calltips](idle#calltips)
    - [Code Context](idle#code-context)
    - [Python Shell window](idle#python-shell-window)
    - [Text colors](idle#text-colors)
  + [Startup and code execution](idle#startup-and-code-execution)
    - [Command line usage](idle#command-line-usage)
    - [Startup failure](idle#startup-failure)
    - [Running user code](idle#running-user-code)
    - [User output in Shell](idle#user-output-in-shell)
    - [Developing tkinter applications](idle#developing-tkinter-applications)
    - [Running without a subprocess](idle#running-without-a-subprocess)
  + [Help and preferences](idle#help-and-preferences)
    - [Help sources](idle#help-sources)
    - [Setting preferences](idle#setting-preferences)
    - [IDLE on macOS](idle#idle-on-macos)
    - [Extensions](idle#extensions)
python keyword — Testing for Python keywords

keyword — Testing for Python keywords
=====================================

**Source code:** [Lib/keyword.py](https://github.com/python/cpython/tree/3.9/Lib/keyword.py)

This module allows a Python program to determine if a string is a [keyword](../reference/lexical_analysis#keywords).

`keyword.iskeyword(s)` Return `True` if *s* is a Python [keyword](../reference/lexical_analysis#keywords).

`keyword.kwlist` Sequence containing all the [keywords](../reference/lexical_analysis#keywords) defined for the interpreter. If any keywords are defined to only be active when particular [`__future__`](__future__#module-__future__ "__future__: Future statement definitions") statements are in effect, these will be included as well.

`keyword.issoftkeyword(s)` Return `True` if *s* is a Python soft [keyword](../reference/lexical_analysis#keywords). New in version 3.9.

`keyword.softkwlist` Sequence containing all the soft [keywords](../reference/lexical_analysis#keywords) defined for the interpreter. If any soft keywords are defined to only be active when particular [`__future__`](__future__#module-__future__ "__future__: Future statement definitions") statements are in effect, these will be included as well. New in version 3.9.

python xml.dom.minidom — Minimal DOM implementation

xml.dom.minidom — Minimal DOM implementation
============================================

**Source code:** [Lib/xml/dom/minidom.py](https://github.com/python/cpython/tree/3.9/Lib/xml/dom/minidom.py)

[`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.") is a minimal implementation of the Document Object Model interface, with an API similar to that in other languages. It is intended to be simpler than the full DOM and also significantly smaller. Users who are not already proficient with the DOM should consider using the [`xml.etree.ElementTree`](xml.etree.elementtree#module-xml.etree.ElementTree "xml.etree.ElementTree: Implementation of the ElementTree API.") module for their XML processing instead.

Warning

The [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.") module is not secure against maliciously constructed data. If you need to parse untrusted or unauthenticated data see [XML vulnerabilities](xml#xml-vulnerabilities).

DOM applications typically start by parsing some XML into a DOM. With [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation."), this is done through the parse functions:

```
from xml.dom.minidom import parse, parseString

dom1 = parse('c:\\temp\\mydata.xml')  # parse an XML file by name

datasource = open('c:\\temp\\mydata.xml')
dom2 = parse(datasource)  # parse an open file

dom3 = parseString('<myxml>Some data<empty/> some more data</myxml>')
```

The [`parse()`](#xml.dom.minidom.parse "xml.dom.minidom.parse") function can take either a filename or an open file object.

`xml.dom.minidom.parse(filename_or_file, parser=None, bufsize=None)` Return a `Document` from the given input. *filename\_or\_file* may be either a file name, or a file-like object. *parser*, if given, must be a SAX2 parser object. This function will change the document handler of the parser and activate namespace support; other parser configuration (like setting an entity resolver) must have been done in advance.
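As a small, hedged sketch of the optional *parser* argument (the file name `mydata.xml` is hypothetical), a SAX2 parser created with `xml.sax.make_parser()` can be handed to `parse()`:

```
import xml.sax
from xml.dom.minidom import parse

sax_parser = xml.sax.make_parser()            # a SAX2 parser object
dom = parse("mydata.xml", parser=sax_parser)  # hypothetical input file
print(dom.documentElement.tagName)
```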
If you have XML in a string, you can use the [`parseString()`](#xml.dom.minidom.parseString "xml.dom.minidom.parseString") function instead:

`xml.dom.minidom.parseString(string, parser=None)` Return a `Document` that represents the *string*. This method creates an [`io.StringIO`](io#io.StringIO "io.StringIO") object for the string and passes that on to [`parse()`](#xml.dom.minidom.parse "xml.dom.minidom.parse").

Both functions return a `Document` object representing the content of the document.

What the [`parse()`](#xml.dom.minidom.parse "xml.dom.minidom.parse") and [`parseString()`](#xml.dom.minidom.parseString "xml.dom.minidom.parseString") functions do is connect an XML parser with a “DOM builder” that can accept parse events from any SAX parser and convert them into a DOM tree. The names of the functions are perhaps misleading, but are easy to grasp when learning the interfaces. The parsing of the document will be completed before these functions return; it’s simply that these functions do not provide a parser implementation themselves.

You can also create a `Document` by calling a method on a “DOM Implementation” object. You can get this object either by calling the `getDOMImplementation()` function in the [`xml.dom`](xml.dom#module-xml.dom "xml.dom: Document Object Model API for Python.") package or the [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.") module. Once you have a `Document`, you can add child nodes to it to populate the DOM:

```
from xml.dom.minidom import getDOMImplementation

impl = getDOMImplementation()

newdoc = impl.createDocument(None, "some_tag", None)
top_element = newdoc.documentElement
text = newdoc.createTextNode('Some textual content.')
top_element.appendChild(text)
```

Once you have a DOM document object, you can access the parts of your XML document through its properties and methods. These properties are defined in the DOM specification. The main property of the document object is the `documentElement` property. It gives you the main element in the XML document: the one that holds all others. Here is an example program:

```
dom3 = parseString("<myxml>Some data</myxml>")
assert dom3.documentElement.tagName == "myxml"
```

When you are finished with a DOM tree, you may optionally call the `unlink()` method to encourage early cleanup of the now-unneeded objects. `unlink()` is an [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.")-specific extension to the DOM API that renders the node and its descendants essentially useless. Otherwise, Python’s garbage collector will eventually take care of the objects in the tree.

See also

[Document Object Model (DOM) Level 1 Specification](https://www.w3.org/TR/REC-DOM-Level-1/) The W3C recommendation for the DOM supported by [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.").

DOM Objects
-----------

The definition of the DOM API for Python is given as part of the [`xml.dom`](xml.dom#module-xml.dom "xml.dom: Document Object Model API for Python.") module documentation. This section lists the differences between the API and [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.").

`Node.unlink()` Break internal references within the DOM so that it will be garbage collected on versions of Python without cyclic GC.
Even when cyclic GC is available, using this can make large amounts of memory available sooner, so calling this on DOM objects as soon as they are no longer needed is good practice. This only needs to be called on the `Document` object, but may be called on child nodes to discard children of that node. You can avoid calling this method explicitly by using the [`with`](../reference/compound_stmts#with) statement. The following code will automatically unlink *dom* when the `with` block is exited:

```
with xml.dom.minidom.parse(datasource) as dom:
    ...  # Work with dom.
```

`Node.writexml(writer, indent="", addindent="", newl="", encoding=None, standalone=None)` Write XML to the writer object. The writer receives texts but not bytes as input; it should have a `write()` method which matches that of the file object interface. The *indent* parameter is the indentation of the current node. The *addindent* parameter is the incremental indentation to use for subnodes of the current one. The *newl* parameter specifies the string to use to terminate newlines. For the `Document` node, an additional keyword argument *encoding* can be used to specify the encoding field of the XML header. Similarly, explicitly stating the *standalone* argument causes the standalone document declarations to be added to the prologue of the XML document. If the value is set to `True`, `standalone="yes"` is added; otherwise it is set to `"no"`. Not stating the argument will omit the declaration from the document. Changed in version 3.8: The [`writexml()`](#xml.dom.minidom.Node.writexml "xml.dom.minidom.Node.writexml") method now preserves the attribute order specified by the user. Changed in version 3.9: The *standalone* parameter was added.

`Node.toxml(encoding=None, standalone=None)` Return a string or byte string containing the XML represented by the DOM node. With an explicit *encoding* [1](#id3) argument, the result is a byte string in the specified encoding. With no *encoding* argument, the result is a Unicode string, and the XML declaration in the resulting string does not specify an encoding. Encoding this string in an encoding other than UTF-8 is likely incorrect, since UTF-8 is the default encoding of XML. The *standalone* argument behaves exactly as in [`writexml()`](#xml.dom.minidom.Node.writexml "xml.dom.minidom.Node.writexml"). Changed in version 3.8: The [`toxml()`](#xml.dom.minidom.Node.toxml "xml.dom.minidom.Node.toxml") method now preserves the attribute order specified by the user. Changed in version 3.9: The *standalone* parameter was added.

`Node.toprettyxml(indent="\t", newl="\n", encoding=None, standalone=None)` Return a pretty-printed version of the document. *indent* specifies the indentation string and defaults to a tabulator; *newl* specifies the string emitted at the end of each line and defaults to `\n`. The *encoding* argument behaves like the corresponding argument of [`toxml()`](#xml.dom.minidom.Node.toxml "xml.dom.minidom.Node.toxml"). The *standalone* argument behaves exactly as in [`writexml()`](#xml.dom.minidom.Node.writexml "xml.dom.minidom.Node.writexml"). Changed in version 3.8: The [`toprettyxml()`](#xml.dom.minidom.Node.toprettyxml "xml.dom.minidom.Node.toprettyxml") method now preserves the attribute order specified by the user. Changed in version 3.9: The *standalone* parameter was added.

DOM Example
-----------

This example program is a fairly realistic example of a simple program. In this particular case, we do not take much advantage of the flexibility of the DOM.
```
import xml.dom.minidom

document = """\
<slideshow>
<title>Demo slideshow</title>
<slide><title>Slide title</title>
<point>This is a demo</point>
<point>Of a program for processing slides</point>
</slide>

<slide><title>Another demo slide</title>
<point>It is important</point>
<point>To have more than</point>
<point>one slide</point>
</slide>
</slideshow>
"""

dom = xml.dom.minidom.parseString(document)

def getText(nodelist):
    rc = []
    for node in nodelist:
        if node.nodeType == node.TEXT_NODE:
            rc.append(node.data)
    return ''.join(rc)

def handleSlideshow(slideshow):
    print("<html>")
    handleSlideshowTitle(slideshow.getElementsByTagName("title")[0])
    slides = slideshow.getElementsByTagName("slide")
    handleToc(slides)
    handleSlides(slides)
    print("</html>")

def handleSlides(slides):
    for slide in slides:
        handleSlide(slide)

def handleSlide(slide):
    handleSlideTitle(slide.getElementsByTagName("title")[0])
    handlePoints(slide.getElementsByTagName("point"))

def handleSlideshowTitle(title):
    print("<title>%s</title>" % getText(title.childNodes))

def handleSlideTitle(title):
    print("<h2>%s</h2>" % getText(title.childNodes))

def handlePoints(points):
    print("<ul>")
    for point in points:
        handlePoint(point)
    print("</ul>")

def handlePoint(point):
    print("<li>%s</li>" % getText(point.childNodes))

def handleToc(slides):
    for slide in slides:
        title = slide.getElementsByTagName("title")[0]
        print("<p>%s</p>" % getText(title.childNodes))

handleSlideshow(dom)
```

minidom and the DOM standard
----------------------------

The [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.") module is essentially a DOM 1.0-compatible DOM with some DOM 2 features (primarily namespace features).

Usage of the DOM interface in Python is straightforward. The following mapping rules apply:

* Interfaces are accessed through instance objects. Applications should not instantiate the classes themselves; they should use the creator functions available on the `Document` object. Derived interfaces support all operations (and attributes) from the base interfaces, plus any new operations.
* Operations are used as methods. Since the DOM uses only [`in`](../reference/expressions#in) parameters, the arguments are passed in normal order (from left to right). There are no optional arguments. `void` operations return `None`.
* IDL attributes map to instance attributes. For compatibility with the OMG IDL language mapping for Python, an attribute `foo` can also be accessed through accessor methods `_get_foo()` and `_set_foo()`. `readonly` attributes must not be changed; this is not enforced at runtime.
* The types `short int`, `unsigned int`, `unsigned long long`, and `boolean` all map to Python integer objects.
* The type `DOMString` maps to Python strings. [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.") supports either bytes or strings, but will normally produce strings. Values of type `DOMString` may also be `None` where allowed to have the IDL `null` value by the DOM specification from the W3C.
* `const` declarations map to variables in their respective scope (e.g. `xml.dom.minidom.Node.PROCESSING_INSTRUCTION_NODE`); they must not be changed.
* `DOMException` is currently not supported in [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.").
  Instead, [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation.") uses standard Python exceptions such as [`TypeError`](exceptions#TypeError "TypeError") and [`AttributeError`](exceptions#AttributeError "AttributeError").
* `NodeList` objects are implemented using Python’s built-in list type. These objects provide the interface defined in the DOM specification, but with earlier versions of Python they do not support the official API. They are, however, much more “Pythonic” than the interface defined in the W3C recommendations.

The following interfaces have no implementation in [`xml.dom.minidom`](#module-xml.dom.minidom "xml.dom.minidom: Minimal Document Object Model (DOM) implementation."):

* `DOMTimeStamp`
* `EntityReference`

Most of these reflect information in the XML document that is not of general utility to most DOM users.

#### Footnotes

`1` The encoding name included in the XML output should conform to the appropriate standards. For example, “UTF-8” is valid, but “UTF8” is not valid in an XML document’s declaration, even though Python accepts it as an encoding name. See <https://www.w3.org/TR/2006/REC-xml11-20060816/#NT-EncodingDecl> and <https://www.iana.org/assignments/character-sets/character-sets.xhtml>.

python site — Site-specific configuration hook

site — Site-specific configuration hook
=======================================

**Source code:** [Lib/site.py](https://github.com/python/cpython/tree/3.9/Lib/site.py)

**This module is automatically imported during initialization.** The automatic import can be suppressed using the interpreter’s [`-S`](../using/cmdline#id3) option.

Importing this module will append site-specific paths to the module search path and add a few builtins, unless [`-S`](../using/cmdline#id3) was used. In that case, this module can be safely imported with no automatic modifications to the module search path or additions to the builtins. To explicitly trigger the usual site-specific additions, call the [`site.main()`](#site.main "site.main") function. Changed in version 3.3: Importing the module used to trigger paths manipulation even when using [`-S`](../using/cmdline#id3).

It starts by constructing up to four directories from a head and a tail part. For the head part, it uses `sys.prefix` and `sys.exec_prefix`; empty heads are skipped. For the tail part, it uses the empty string and then `lib/site-packages` (on Windows) or `lib/python*X.Y*/site-packages` (on Unix and macOS). For each of the distinct head-tail combinations, it sees if it refers to an existing directory, and if so, adds it to `sys.path` and also inspects the newly added path for configuration files. Changed in version 3.5: Support for the “site-python” directory has been removed.

If a file named “pyvenv.cfg” exists one directory above sys.executable, sys.prefix and sys.exec\_prefix are set to that directory and it is also checked for site-packages (sys.base\_prefix and sys.base\_exec\_prefix will always be the “real” prefixes of the Python installation). If “pyvenv.cfg” (a bootstrap configuration file) contains the key “include-system-site-packages” set to anything other than “true” (case-insensitive), the system-level prefixes will not be searched for site-packages; otherwise they will.

A path configuration file is a file whose name has the form `*name*.pth` and exists in one of the four directories mentioned above; its contents are additional items (one per line) to be added to `sys.path`.
Non-existing items are never added to `sys.path`, and no check is made that the item refers to a directory rather than a file. No item is added to `sys.path` more than once. Blank lines and lines beginning with `#` are skipped. Lines starting with `import` (followed by space or tab) are executed.

Note

An executable line in a `.pth` file is run at every Python startup, regardless of whether a particular module is actually going to be used. Its impact should thus be kept to a minimum. The primary intended purpose of executable lines is to make the corresponding module(s) importable (load 3rd-party import hooks, adjust `PATH` etc). Any other initialization is supposed to be done upon a module’s actual import, if and when it happens. Limiting a code chunk to a single line is a deliberate measure to discourage putting anything more complex here.

For example, suppose `sys.prefix` and `sys.exec_prefix` are set to `/usr/local`. The Python X.Y library is then installed in `/usr/local/lib/python*X.Y*`. Suppose this has a subdirectory `/usr/local/lib/python*X.Y*/site-packages` with three subsubdirectories, `foo`, `bar` and `spam`, and two path configuration files, `foo.pth` and `bar.pth`. Assume `foo.pth` contains the following:

```
# foo package configuration
foo
bar
bletch
```

and `bar.pth` contains:

```
# bar package configuration
bar
```

Then the following version-specific directories are added to `sys.path`, in this order:

```
/usr/local/lib/pythonX.Y/site-packages/bar
/usr/local/lib/pythonX.Y/site-packages/foo
```

Note that `bletch` is omitted because it doesn’t exist; the `bar` directory precedes the `foo` directory because `bar.pth` comes alphabetically before `foo.pth`; and `spam` is omitted because it is not mentioned in either path configuration file.

After these path manipulations, an attempt is made to import a module named `sitecustomize`, which can perform arbitrary site-specific customizations. It is typically created by a system administrator in the site-packages directory. If this import fails with an [`ImportError`](exceptions#ImportError "ImportError") or its subclass exception, and the exception’s `name` attribute is equal to `'sitecustomize'`, it is silently ignored. If Python is started without output streams available, as with `pythonw.exe` on Windows (which is used by default to start IDLE), attempted output from `sitecustomize` is ignored. Any other exception causes a silent and perhaps mysterious failure of the process.

After this, an attempt is made to import a module named `usercustomize`, which can perform arbitrary user-specific customizations, if [`ENABLE_USER_SITE`](#site.ENABLE_USER_SITE "site.ENABLE_USER_SITE") is true. This file is intended to be created in the user site-packages directory (see below), which is part of `sys.path` unless disabled by [`-s`](../using/cmdline#cmdoption-s). If this import fails with an [`ImportError`](exceptions#ImportError "ImportError") or its subclass exception, and the exception’s `name` attribute is equal to `'usercustomize'`, it is silently ignored.

Note that for some non-Unix systems, `sys.prefix` and `sys.exec_prefix` are empty, and the path manipulations are skipped; however the import of `sitecustomize` and `usercustomize` is still attempted.

Readline configuration
----------------------

On systems that support [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)"), this module will also import and configure the [`rlcompleter`](rlcompleter#module-rlcompleter "rlcompleter: Python identifier completion, suitable for the GNU readline library.") module, if Python is started in [interactive mode](../tutorial/interpreter#tut-interactive) and without the [`-S`](../using/cmdline#id3) option. The default behavior is to enable tab-completion and to use `~/.python_history` as the history save file. To disable it, delete (or override) the [`sys.__interactivehook__`](sys#sys.__interactivehook__ "sys.__interactivehook__") attribute in your `sitecustomize` or `usercustomize` module or your [`PYTHONSTARTUP`](../using/cmdline#envvar-PYTHONSTARTUP) file. Changed in version 3.4: Activation of rlcompleter and history was made automatic.

Module contents
---------------

`site.PREFIXES` A list of prefixes for site-packages directories.

`site.ENABLE_USER_SITE` Flag showing the status of the user site-packages directory. `True` means that it is enabled and was added to `sys.path`. `False` means that it was disabled by user request (with [`-s`](../using/cmdline#cmdoption-s) or [`PYTHONNOUSERSITE`](../using/cmdline#envvar-PYTHONNOUSERSITE)). `None` means it was disabled for security reasons (mismatch between user or group id and effective id) or by an administrator.

`site.USER_SITE` Path to the user site-packages for the running Python. Can be `None` if [`getusersitepackages()`](#site.getusersitepackages "site.getusersitepackages") hasn’t been called yet. Default value is `~/.local/lib/python*X.Y*/site-packages` for UNIX and non-framework macOS builds, `~/Library/Python/*X.Y*/lib/python/site-packages` for macOS framework builds, and `*%APPDATA%*\Python\Python*XY*\site-packages` on Windows. This directory is a site directory, which means that `.pth` files in it will be processed.

`site.USER_BASE` Path to the base directory for the user site-packages. Can be `None` if [`getuserbase()`](#site.getuserbase "site.getuserbase") hasn’t been called yet. Default value is `~/.local` for UNIX and macOS non-framework builds, `~/Library/Python/*X.Y*` for macOS framework builds, and `*%APPDATA%*\Python` for Windows. This value is used by Distutils to compute the installation directories for scripts, data files, Python modules, etc. for the [user installation scheme](../install/index#inst-alt-install-user). See also [`PYTHONUSERBASE`](../using/cmdline#envvar-PYTHONUSERBASE).

`site.main()` Adds all the standard site-specific directories to the module search path. This function is called automatically when this module is imported, unless the Python interpreter was started with the [`-S`](../using/cmdline#id3) flag. Changed in version 3.3: This function used to be called unconditionally.

`site.addsitedir(sitedir, known_paths=None)` Add a directory to sys.path and process its `.pth` files. Typically used in `sitecustomize` or `usercustomize` (see above).

`site.getsitepackages()` Return a list containing all global site-packages directories. New in version 3.2.

`site.getuserbase()` Return the path of the user base directory, [`USER_BASE`](#site.USER_BASE "site.USER_BASE"). If it is not initialized yet, this function will also set it, respecting [`PYTHONUSERBASE`](../using/cmdline#envvar-PYTHONUSERBASE). New in version 3.2.

`site.getusersitepackages()` Return the path of the user-specific site-packages directory, [`USER_SITE`](#site.USER_SITE "site.USER_SITE"). If it is not initialized yet, this function will also set it, respecting [`USER_BASE`](#site.USER_BASE "site.USER_BASE").
To determine if the user-specific site-packages was added to `sys.path`, [`ENABLE_USER_SITE`](#site.ENABLE_USER_SITE "site.ENABLE_USER_SITE") should be used. New in version 3.2.

Command Line Interface
----------------------

The [`site`](#module-site "site: Module responsible for site-specific configuration.") module also provides a way to get the user directories from the command line:

```
$ python3 -m site --user-site
/home/user/.local/lib/python3.3/site-packages
```

If it is called without arguments, it will print the contents of [`sys.path`](sys#sys.path "sys.path") on the standard output, followed by the value of [`USER_BASE`](#site.USER_BASE "site.USER_BASE") and whether the directory exists, then the same thing for [`USER_SITE`](#site.USER_SITE "site.USER_SITE"), and finally the value of [`ENABLE_USER_SITE`](#site.ENABLE_USER_SITE "site.ENABLE_USER_SITE").

`--user-base` Print the path to the user base directory.

`--user-site` Print the path to the user site-packages directory.

If both options are given, user base and user site will be printed (always in this order), separated by [`os.pathsep`](os#os.pathsep "os.pathsep").

If any option is given, the script will exit with one of these values: `0` if the user site-packages directory is enabled, `1` if it was disabled by the user, `2` if it is disabled for security reasons or by an administrator, and a value greater than 2 if there is an error.

See also

[**PEP 370**](https://www.python.org/dev/peps/pep-0370) – Per user site-packages directory
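The same values are also available programmatically. A minimal sketch using only the module-level functions and constants documented above (the printed paths will differ per machine):

```
import site

# Where per-user packages live on this machine; values vary by
# platform and user, as described for USER_BASE and USER_SITE above.
print(site.getuserbase())          # e.g. /home/user/.local
print(site.getusersitepackages())  # e.g. .../lib/python3.9/site-packages
print(site.ENABLE_USER_SITE)       # True, False, or None
```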
python email.iterators: Iterators

email.iterators: Iterators
==========================

**Source code:** [Lib/email/iterators.py](https://github.com/python/cpython/tree/3.9/Lib/email/iterators.py)

Iterating over a message object tree is fairly easy with the [`Message.walk`](email.compat32-message#email.message.Message.walk "email.message.Message.walk") method. The [`email.iterators`](#module-email.iterators "email.iterators: Iterate over a message object tree.") module provides some useful higher level iterations over message object trees.

`email.iterators.body_line_iterator(msg, decode=False)` This iterates over all the payloads in all the subparts of *msg*, returning the string payloads line-by-line. It skips over all the subpart headers, and it skips over any subpart with a payload that isn’t a Python string. This is somewhat equivalent to reading the flat text representation of the message from a file using [`readline()`](io#io.TextIOBase.readline "io.TextIOBase.readline"), skipping over all the intervening headers. Optional *decode* is passed through to [`Message.get_payload`](email.compat32-message#email.message.Message.get_payload "email.message.Message.get_payload").

`email.iterators.typed_subpart_iterator(msg, maintype='text', subtype=None)` This iterates over all the subparts of *msg*, returning only those subparts that match the MIME type specified by *maintype* and *subtype*. Note that *subtype* is optional; if omitted, then subpart MIME type matching is done only with the main type. *maintype* is optional too; it defaults to *text*. Thus, by default [`typed_subpart_iterator()`](#email.iterators.typed_subpart_iterator "email.iterators.typed_subpart_iterator") returns each subpart that has a MIME type of *text/\**.

The following function has been added as a useful debugging tool. It should *not* be considered part of the supported public interface for the package.

`email.iterators._structure(msg, fp=None, level=0, include_default=False)` Prints an indented representation of the content types of the message object structure. For example:

```
>>> msg = email.message_from_file(somefile)
>>> _structure(msg)
multipart/mixed
    text/plain
    text/plain
    multipart/digest
        message/rfc822
            text/plain
        message/rfc822
            text/plain
        message/rfc822
            text/plain
        message/rfc822
            text/plain
        message/rfc822
            text/plain
    text/plain
```

Optional *fp* is a file-like object to print the output to. It must be suitable for Python’s [`print()`](functions#print "print") function. *level* is used internally. *include\_default*, if true, prints the default type as well.

python cmd — Support for line-oriented command interpreters

cmd — Support for line-oriented command interpreters
====================================================

**Source code:** [Lib/cmd.py](https://github.com/python/cpython/tree/3.9/Lib/cmd.py)

The [`Cmd`](#cmd.Cmd "cmd.Cmd") class provides a simple framework for writing line-oriented command interpreters. These are often useful for test harnesses, administrative tools, and prototypes that will later be wrapped in a more sophisticated interface.

`class cmd.Cmd(completekey='tab', stdin=None, stdout=None)` A [`Cmd`](#cmd.Cmd "cmd.Cmd") instance or subclass instance is a line-oriented interpreter framework. There is no good reason to instantiate [`Cmd`](#cmd.Cmd "cmd.Cmd") itself; rather, it’s useful as a superclass of an interpreter class you define yourself in order to inherit [`Cmd`](#cmd.Cmd "cmd.Cmd")’s methods and encapsulate action methods.
The optional argument *completekey* is the [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") name of a completion key; it defaults to `Tab`. If *completekey* is not [`None`](constants#None "None") and [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") is available, command completion is done automatically.

The optional arguments *stdin* and *stdout* specify the input and output file objects that the Cmd instance or subclass instance will use for input and output. If not specified, they will default to [`sys.stdin`](sys#sys.stdin "sys.stdin") and [`sys.stdout`](sys#sys.stdout "sys.stdout"). If you want a given *stdin* to be used, make sure to set the instance’s [`use_rawinput`](#cmd.Cmd.use_rawinput "cmd.Cmd.use_rawinput") attribute to `False`, otherwise *stdin* will be ignored.

Cmd Objects
-----------

A [`Cmd`](#cmd.Cmd "cmd.Cmd") instance has the following methods:

`Cmd.cmdloop(intro=None)` Repeatedly issue a prompt, accept input, parse an initial prefix off the received input, and dispatch to action methods, passing them the remainder of the line as argument. The optional argument is a banner or intro string to be issued before the first prompt (this overrides the [`intro`](#cmd.Cmd.intro "cmd.Cmd.intro") class attribute).

If the [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)") module is loaded, input will automatically inherit **bash**-like history-list editing (e.g. `Control-P` scrolls back to the last command, `Control-N` forward to the next one, `Control-F` moves the cursor to the right non-destructively, `Control-B` moves the cursor to the left non-destructively, etc.).

An end-of-file on input is passed back as the string `'EOF'`.

An interpreter instance will recognize a command name `foo` if and only if it has a method `do_foo()`. As a special case, a line beginning with the character `'?'` is dispatched to the method `do_help()`. As another special case, a line beginning with the character `'!'` is dispatched to the method `do_shell()` (if such a method is defined).

This method will return when the [`postcmd()`](#cmd.Cmd.postcmd "cmd.Cmd.postcmd") method returns a true value. The *stop* argument to [`postcmd()`](#cmd.Cmd.postcmd "cmd.Cmd.postcmd") is the return value from the command’s corresponding `do_*()` method.

If completion is enabled, completing commands will be done automatically, and completion of command arguments is done by calling `complete_foo()` with arguments *text*, *line*, *begidx*, and *endidx*. *text* is the string prefix we are attempting to match: all returned matches must begin with it. *line* is the current input line with leading whitespace removed, *begidx* and *endidx* are the beginning and ending indexes of the prefix text, which could be used to provide different completion depending upon which position the argument is in.

All subclasses of [`Cmd`](#cmd.Cmd "cmd.Cmd") inherit a predefined `do_help()`. This method, called with an argument `'bar'`, invokes the corresponding method `help_bar()`, and if that is not present, prints the docstring of `do_bar()`, if available. With no argument, `do_help()` lists all available help topics (that is, all commands with corresponding `help_*()` methods or commands that have docstrings), and also lists any undocumented commands.

`Cmd.onecmd(str)` Interpret the argument as though it had been typed in response to the prompt.
This may be overridden, but should not normally need to be; see the [`precmd()`](#cmd.Cmd.precmd "cmd.Cmd.precmd") and [`postcmd()`](#cmd.Cmd.postcmd "cmd.Cmd.postcmd") methods for useful execution hooks. The return value is a flag indicating whether interpretation of commands by the interpreter should stop. If there is a `do_*()` method for the command *str*, the return value of that method is returned, otherwise the return value from the [`default()`](#cmd.Cmd.default "cmd.Cmd.default") method is returned.

`Cmd.emptyline()` Method called when an empty line is entered in response to the prompt. If this method is not overridden, it repeats the last nonempty command entered.

`Cmd.default(line)` Method called on an input line when the command prefix is not recognized. If this method is not overridden, it prints an error message and returns.

`Cmd.completedefault(text, line, begidx, endidx)` Method called to complete an input line when no command-specific `complete_*()` method is available. By default, it returns an empty list.

`Cmd.precmd(line)` Hook method executed just before the command line *line* is interpreted, but after the input prompt is generated and issued. This method is a stub in [`Cmd`](#cmd.Cmd "cmd.Cmd"); it exists to be overridden by subclasses. The return value is used as the command which will be executed by the [`onecmd()`](#cmd.Cmd.onecmd "cmd.Cmd.onecmd") method; the [`precmd()`](#cmd.Cmd.precmd "cmd.Cmd.precmd") implementation may rewrite the command or simply return *line* unchanged.

`Cmd.postcmd(stop, line)` Hook method executed just after a command dispatch is finished. This method is a stub in [`Cmd`](#cmd.Cmd "cmd.Cmd"); it exists to be overridden by subclasses. *line* is the command line which was executed, and *stop* is a flag which indicates whether execution will be terminated after the call to [`postcmd()`](#cmd.Cmd.postcmd "cmd.Cmd.postcmd"); this will be the return value of the [`onecmd()`](#cmd.Cmd.onecmd "cmd.Cmd.onecmd") method. The return value of this method will be used as the new value for the internal flag which corresponds to *stop*; returning false will cause interpretation to continue.

`Cmd.preloop()` Hook method executed once when [`cmdloop()`](#cmd.Cmd.cmdloop "cmd.Cmd.cmdloop") is called. This method is a stub in [`Cmd`](#cmd.Cmd "cmd.Cmd"); it exists to be overridden by subclasses.

`Cmd.postloop()` Hook method executed once when [`cmdloop()`](#cmd.Cmd.cmdloop "cmd.Cmd.cmdloop") is about to return. This method is a stub in [`Cmd`](#cmd.Cmd "cmd.Cmd"); it exists to be overridden by subclasses.

Instances of [`Cmd`](#cmd.Cmd "cmd.Cmd") subclasses have some public instance variables:

`Cmd.prompt` The prompt issued to solicit input.

`Cmd.identchars` The string of characters accepted for the command prefix.

`Cmd.lastcmd` The last nonempty command prefix seen.

`Cmd.cmdqueue` A list of queued input lines. The cmdqueue list is checked in [`cmdloop()`](#cmd.Cmd.cmdloop "cmd.Cmd.cmdloop") when new input is needed; if it is nonempty, its elements will be processed in order, as if entered at the prompt.

`Cmd.intro` A string to issue as an intro or banner. May be overridden by giving the [`cmdloop()`](#cmd.Cmd.cmdloop "cmd.Cmd.cmdloop") method an argument.

`Cmd.doc_header` The header to issue if the help output has a section for documented commands.

`Cmd.misc_header` The header to issue if the help output has a section for miscellaneous help topics (that is, there are `help_*()` methods without corresponding `do_*()` methods).
`Cmd.undoc_header` The header to issue if the help output has a section for undocumented commands (that is, there are `do_*()` methods without corresponding `help_*()` methods).

`Cmd.ruler` The character used to draw separator lines under the help-message headers. If empty, no ruler line is drawn. It defaults to `'='`.

`Cmd.use_rawinput` A flag, defaulting to true. If true, [`cmdloop()`](#cmd.Cmd.cmdloop "cmd.Cmd.cmdloop") uses [`input()`](functions#input "input") to display a prompt and read the next command; if false, `sys.stdout.write()` and `sys.stdin.readline()` are used. (This means that by importing [`readline`](readline#module-readline "readline: GNU readline support for Python. (Unix)"), on systems that support it, the interpreter will automatically support **Emacs**-like line editing and command-history keystrokes.)

Cmd Example
-----------

The [`cmd`](#module-cmd "cmd: Build line-oriented command interpreters.") module is mainly useful for building custom shells that let a user work with a program interactively.

This section presents a simple example of how to build a shell around a few of the commands in the [`turtle`](turtle#module-turtle "turtle: An educational framework for simple graphics applications") module.

Basic turtle commands such as [`forward()`](turtle#turtle.forward "turtle.forward") are added to a [`Cmd`](#cmd.Cmd "cmd.Cmd") subclass with a method named `do_forward()`. The argument is converted to a number and dispatched to the turtle module. The docstring is used in the help utility provided by the shell.

The example also includes a basic record and playback facility implemented with the [`precmd()`](#cmd.Cmd.precmd "cmd.Cmd.precmd") method which is responsible for converting the input to lowercase and writing the commands to a file. The `do_playback()` method reads the file and adds the recorded commands to the `cmdqueue` for immediate playback:

```
import cmd, sys
from turtle import *

class TurtleShell(cmd.Cmd):
    intro = 'Welcome to the turtle shell. Type help or ? to list commands.\n'
    prompt = '(turtle) '
    file = None

    # ----- basic turtle commands -----
    def do_forward(self, arg):
        'Move the turtle forward by the specified distance: FORWARD 10'
        forward(*parse(arg))
    def do_right(self, arg):
        'Turn turtle right by given number of degrees: RIGHT 20'
        right(*parse(arg))
    def do_left(self, arg):
        'Turn turtle left by given number of degrees: LEFT 90'
        left(*parse(arg))
    def do_goto(self, arg):
        'Move turtle to an absolute position with changing orientation. GOTO 100 200'
        goto(*parse(arg))
    def do_home(self, arg):
        'Return turtle to the home position: HOME'
        home()
    def do_circle(self, arg):
        'Draw circle with given radius, and optional extent and steps: CIRCLE 50'
        circle(*parse(arg))
    def do_position(self, arg):
        'Print the current turtle position: POSITION'
        print('Current position is %d %d\n' % position())
    def do_heading(self, arg):
        'Print the current turtle heading in degrees: HEADING'
        print('Current heading is %d\n' % (heading(),))
    def do_color(self, arg):
        'Set the color: COLOR BLUE'
        color(arg.lower())
    def do_undo(self, arg):
        'Undo (repeatedly) the last turtle action(s): UNDO'
    def do_reset(self, arg):
        'Clear the screen and return turtle to center: RESET'
        reset()
    def do_bye(self, arg):
        'Stop recording, close the turtle window, and exit: BYE'
        print('Thank you for using Turtle')
        self.close()
        bye()
        return True

    # ----- record and playback -----
    def do_record(self, arg):
        'Save future commands to filename: RECORD rose.cmd'
        self.file = open(arg, 'w')
    def do_playback(self, arg):
        'Playback commands from a file: PLAYBACK rose.cmd'
        self.close()
        with open(arg) as f:
            self.cmdqueue.extend(f.read().splitlines())
    def precmd(self, line):
        line = line.lower()
        if self.file and 'playback' not in line:
            print(line, file=self.file)
        return line
    def close(self):
        if self.file:
            self.file.close()
            self.file = None

def parse(arg):
    'Convert a series of zero or more numbers to an argument tuple'
    return tuple(map(int, arg.split()))

if __name__ == '__main__':
    TurtleShell().cmdloop()
```

Here is a sample session with the turtle shell showing the help functions, using blank lines to repeat commands, and the simple record and playback facility:

```
Welcome to the turtle shell. Type help or ? to list commands.

(turtle) ?

Documented commands (type help <topic>):
========================================
bye     color    goto     home  playback  record  right
circle  forward  heading  left  position  reset   undo

(turtle) help forward
Move the turtle forward by the specified distance: FORWARD 10
(turtle) record spiral.cmd
(turtle) position
Current position is 0 0

(turtle) heading
Current heading is 0

(turtle) reset
(turtle) circle 20
(turtle) right 30
(turtle) circle 40
(turtle) right 30
(turtle) circle 60
(turtle) right 30
(turtle) circle 80
(turtle) right 30
(turtle) circle 100
(turtle) right 30
(turtle) circle 120
(turtle) right 30
(turtle) circle 120
(turtle) heading
Current heading is 180

(turtle) forward 100
(turtle)
(turtle) right 90
(turtle) forward 100
(turtle)
(turtle) right 90
(turtle) forward 400
(turtle) right 90
(turtle) forward 500
(turtle) right 90
(turtle) forward 400
(turtle) right 90
(turtle) forward 300
(turtle) playback spiral.cmd
Current position is 0 0

Current heading is 0

Current heading is 180

(turtle) bye
Thank you for using Turtle
```

python trace — Trace or track Python statement execution

trace — Trace or track Python statement execution
=================================================

**Source code:** [Lib/trace.py](https://github.com/python/cpython/tree/3.9/Lib/trace.py)

The [`trace`](#module-trace "trace: Trace or track Python statement execution.") module allows you to trace program execution, generate annotated statement coverage listings, print caller/callee relationships and list functions executed during a program run. It can be used in another program or from the command line.

See also

[Coverage.py](https://coverage.readthedocs.io/) A popular third-party coverage tool that provides HTML output along with advanced features such as branch coverage.
Command-Line Usage
------------------

The [`trace`](#module-trace "trace: Trace or track Python statement execution.") module can be invoked from the command line. It can be as simple as

```
python -m trace --count -C . somefile.py ...
```

The above will execute `somefile.py` and generate annotated listings of all Python modules imported during the execution into the current directory.

`--help` Display usage and exit.

`--version` Display the version of the module and exit.

New in version 3.8: Added `--module` option that allows running an executable module.

### Main options

At least one of the following options must be specified when invoking [`trace`](#module-trace "trace: Trace or track Python statement execution."). The [`--listfuncs`](#cmdoption-trace-l) option is mutually exclusive with the [`--trace`](#cmdoption-trace-t) and [`--count`](#cmdoption-trace-c) options. When [`--listfuncs`](#cmdoption-trace-l) is provided, neither [`--count`](#cmdoption-trace-c) nor [`--trace`](#cmdoption-trace-t) are accepted, and vice versa.

`-c, --count` Produce a set of annotated listing files upon program completion that shows how many times each statement was executed. See also [`--coverdir`](#cmdoption-trace-coverdir), [`--file`](#cmdoption-trace-f) and [`--no-report`](#cmdoption-trace-no-report) below.

`-t, --trace` Display lines as they are executed.

`-l, --listfuncs` Display the functions executed by running the program.

`-r, --report` Produce an annotated list from an earlier program run that used the [`--count`](#cmdoption-trace-c) and [`--file`](#cmdoption-trace-f) options. This does not execute any code.

`-T, --trackcalls` Display the calling relationships exposed by running the program.

### Modifiers

`-f, --file=<file>` Name of a file to accumulate counts over several tracing runs. Should be used with the [`--count`](#cmdoption-trace-c) option.

`-C, --coverdir=<dir>` Directory where the report files go. The coverage report for `package.module` is written to file `*dir*/*package*/*module*.cover`.

`-m, --missing` When generating annotated listings, mark lines which were not executed with `>>>>>>`.

`-s, --summary` When using [`--count`](#cmdoption-trace-c) or [`--report`](#cmdoption-trace-r), write a brief summary to stdout for each file processed.

`-R, --no-report` Do not generate annotated listings. This is useful if you intend to make several runs with [`--count`](#cmdoption-trace-c), and then produce a single set of annotated listings at the end.

`-g, --timing` Prefix each line with the time since the program started. Only used while tracing.

### Filters

These options may be repeated multiple times.

`--ignore-module=<mod>` Ignore each of the given module names and its submodules (if it is a package). The argument can be a list of names separated by a comma.

`--ignore-dir=<dir>` Ignore all modules and packages in the named directory and subdirectories. The argument can be a list of directories separated by [`os.pathsep`](os#os.pathsep "os.pathsep").

Programmatic Interface
----------------------

`class trace.Trace(count=1, trace=1, countfuncs=0, countcallers=0, ignoremods=(), ignoredirs=(), infile=None, outfile=None, timing=False)` Create an object to trace execution of a single statement or expression. All parameters are optional. *count* enables counting of line numbers. *trace* enables line execution tracing. *countfuncs* enables listing of the functions called during the run. *countcallers* enables call relationship tracking. *ignoremods* is a list of modules or packages to ignore.
*ignoredirs* is a list of directories whose modules or packages should be ignored. *infile* is the name of the file from which to read stored count information. *outfile* is the name of the file in which to write updated count information. *timing* enables display of a timestamp relative to when tracing was started.

`run(cmd)` Execute the command and gather statistics from the execution with the current tracing parameters. *cmd* must be a string or code object, suitable for passing into [`exec()`](functions#exec "exec").

`runctx(cmd, globals=None, locals=None)` Execute the command and gather statistics from the execution with the current tracing parameters, in the defined global and local environments. If not defined, *globals* and *locals* default to empty dictionaries.

`runfunc(func, /, *args, **kwds)` Call *func* with the given arguments under control of the [`Trace`](#trace.Trace "trace.Trace") object with the current tracing parameters.

`results()` Return a [`CoverageResults`](#trace.CoverageResults "trace.CoverageResults") object that contains the cumulative results of all previous calls to `run`, `runctx` and `runfunc` for the given [`Trace`](#trace.Trace "trace.Trace") instance. Does not reset the accumulated trace results.

`class trace.CoverageResults` A container for coverage results, created by [`Trace.results()`](#trace.Trace.results "trace.Trace.results"). Should not be created directly by the user.

`update(other)` Merge in data from another [`CoverageResults`](#trace.CoverageResults "trace.CoverageResults") object.

`write_results(show_missing=True, summary=False, coverdir=None)` Write coverage results. Set *show\_missing* to show lines that had no hits. Set *summary* to include in the output the coverage summary per module. *coverdir* specifies the directory into which the coverage result files will be output. If `None`, the results for each source file are placed in its directory.

A simple example demonstrating the use of the programmatic interface:

```
import sys
import trace

# create a Trace object, telling it what to ignore, and whether to
# do tracing or line-counting or both.
tracer = trace.Trace(
    ignoredirs=[sys.prefix, sys.exec_prefix],
    trace=0,
    count=1)

# run the new command using the given tracer
tracer.run('main()')

# make a report, placing output in the current directory
r = tracer.results()
r.write_results(show_missing=True, coverdir=".")
```
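As a complementary sketch (not part of the original example), `runfunc()` can trace a single Python call directly, without building a command string; everything here uses only the APIs documented above, and the traced function is an arbitrary illustration:

```
import trace

def fib(n):
    'Naive Fibonacci, just to generate some traced lines.'
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# count line executions, but do not echo each line as it runs
tracer = trace.Trace(count=1, trace=0)
result = tracer.runfunc(fib, 10)  # returns fib(10) == 55

# write per-file annotated counts plus a per-module summary
tracer.results().write_results(summary=True, coverdir=".")
print(result)
```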