numpy.generic.squeeze method generic.squeeze() Scalar method identical to the corresponding array attribute. Please see ndarray.squeeze.
numpy.generic.strides attribute generic.strides Tuple of bytes steps in each dimension.
numpy.generic.T attribute generic.T Scalar attribute identical to the corresponding array attribute. Please see ndarray.T.
get_build_temp_dir()[source] Return a path to a temporary directory where temporary files should be placed.
get_config_cmd()[source] Returns the numpy.distutils config command instance.
get_distribution()[source] Return the distutils distribution object for self.
get_info(*names)[source] Get resources information. Return information (from system_info.get_info) for all of the names in the argument list in a single dictionary.
get_subpackage(subpackage_name, subpackage_path=None, parent_name=None, caller_level=1)[source] Return list of subpackage configurations.

Parameters:
subpackage_name : str or None
    Name of the subpackage to get the configuration for. '*' in subpackage_name is handled as a wildcard.
subpackage_path : str
    If None, then the path is assumed to be the local path plus the subpackage_name. If a setup.py file is not found in subpackage_path, then a default configuration is used.
parent_name : str
    Parent name.
get_version(version_file=None, version_variable=None)[source] Try to get the version string of a package. Return a version string of the current package, or None if the version information could not be detected. Notes: This method scans files named __version__.py, <packagename>_version.py, version.py, and __svn_version__.py for string variables version, __version__, and <packagename>_version, until a version number is found.
have_f77c()[source] Check for availability of a Fortran 77 compiler. Use it inside a source-generating function to ensure that the setup distribution instance has been initialized. Returns True if a Fortran 77 compiler is available (because a simple Fortran 77 code was able to be compiled successfully).
have_f90c()[source] Check for availability of a Fortran 90 compiler. Use it inside a source-generating function to ensure that the setup distribution instance has been initialized. Returns True if a Fortran 90 compiler is available (because a simple Fortran 90 code was able to be compiled successfully).
numpy.i: a SWIG Interface File for NumPy

Introduction

The Simplified Wrapper and Interface Generator (or SWIG) is a powerful tool for generating wrapper code for interfacing to a wide variety of scripting languages. SWIG can parse header files and, using only the code prototypes, create an interface to the target language. But SWIG is not omnipotent. For example, it cannot know from the prototype:

double rms(double* seq, int n);

what exactly seq is. Is it a single value to be altered in-place? Is it an array, and if so, what is its length? Is it input-only? Output-only? Input-output? SWIG cannot determine these details, and does not attempt to do so. If we designed rms, we probably made it a routine that takes an input-only array of length n of double values called seq and returns the root mean square. The default behavior of SWIG, however, will be to create a wrapper function that compiles, but is nearly impossible to use from the scripting language in the way the C routine was intended.

For Python, the preferred way of handling contiguous (or technically, strided) blocks of homogeneous data is with NumPy, which provides full object-oriented access to multidimensional arrays of data. Therefore, the most logical Python interface for the rms function would be (including doc string):

def rms(seq):
    """
    rms: return the root mean square of a sequence
    rms(numpy.ndarray) -> double
    rms(list) -> double
    rms(tuple) -> double
    """

where seq would be a NumPy array of double values, and its length n would be extracted from seq internally before being passed to the C routine. Even better, since NumPy supports construction of arrays from arbitrary Python sequences, seq itself could be a nearly arbitrary sequence (so long as each element can be converted to a double) and the wrapper code would internally convert it to a NumPy array before extracting its data and length. SWIG allows these types of conversions to be defined via a mechanism called typemaps.
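The intended behavior can be sketched as a pure-Python model of the wrapper (a hypothetical stand-in for the generated code, not part of numpy.i itself):

```python
import numpy as np

def rms(seq):
    """rms: return the root mean square of a sequence.

    Model of the desired wrapper: any sequence convertible to an array
    of doubles is accepted, and the length is extracted internally
    rather than passed by the caller.
    """
    arr = np.asarray(seq, dtype=np.double)
    return float(np.sqrt(np.mean(arr * arr)))
```

Called with a list, tuple, or ndarray, this behaves exactly as the docstring above promises; the real wrapper performs the same conversion in C.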
This document provides information on how to use numpy.i, a SWIG interface file that defines a series of typemaps intended to make the type of array-related conversions described above relatively simple to implement. For example, suppose that the rms function prototype defined above was in a header file named rms.h. To obtain the Python interface discussed above, your SWIG interface file would need the following:

%{
#define SWIG_FILE_WITH_INIT
#include "rms.h"
%}

%include "numpy.i"

%init %{
import_array();
%}

%apply (double* IN_ARRAY1, int DIM1) {(double* seq, int n)};
%include "rms.h"

Typemaps are keyed off a list of one or more function arguments, either by type or by type and name. We will refer to such lists as signatures. One of the many typemaps defined by numpy.i is used above and has the signature (double* IN_ARRAY1, int DIM1). The argument names are intended to suggest that the double* argument is an input array of one dimension and that the int represents the size of that dimension. This is precisely the pattern in the rms prototype.

Most likely, no actual prototypes to be wrapped will have the argument names IN_ARRAY1 and DIM1. We use the SWIG %apply directive to apply the typemap for one-dimensional input arrays of type double to the actual prototype used by rms. Using numpy.i effectively, therefore, requires knowing what typemaps are available and what they do.
A SWIG interface file that includes the SWIG directives given above will produce wrapper code that looks something like:

 1 PyObject *_wrap_rms(PyObject *args) {
 2   PyObject *resultobj = 0;
 3   double *arg1 = (double *) 0 ;
 4   int arg2 ;
 5   double result;
 6   PyArrayObject *array1 = NULL ;
 7   int is_new_object1 = 0 ;
 8   PyObject * obj0 = 0 ;
 9
10   if (!PyArg_ParseTuple(args,(char *)"O:rms",&obj0)) SWIG_fail;
11   {
12     array1 = obj_to_array_contiguous_allow_conversion(
13                  obj0, NPY_DOUBLE, &is_new_object1);
14     npy_intp size[1] = {
15       -1
16     };
17     if (!array1 || !require_dimensions(array1, 1) ||
18         !require_size(array1, size, 1)) SWIG_fail;
19     arg1 = (double*) array1->data;
20     arg2 = (int) array1->dimensions[0];
21   }
22   result = (double)rms(arg1,arg2);
23   resultobj = SWIG_From_double((double)(result));
24   {
25     if (is_new_object1 && array1) Py_DECREF(array1);
26   }
27   return resultobj;
28 fail:
29   {
30     if (is_new_object1 && array1) Py_DECREF(array1);
31   }
32   return NULL;
33 }

The typemaps from numpy.i are responsible for the following lines of code: 12-20, 25 and 30. Line 10 parses the input to the rms function. From the format string "O:rms", we can see that the argument list is expected to be a single Python object (specified by the O before the colon) whose pointer is stored in obj0. A number of functions, supplied by numpy.i, are called to make and check the (possible) conversion from a generic Python object to a NumPy array. These functions are explained in the section Helper Functions, but hopefully their names are self-explanatory. At line 12 we use obj0 to construct a NumPy array. At line 17, we check the validity of the result: that it is non-null and that it has a single dimension of arbitrary length. Once these states are verified, we extract the data buffer and length in lines 19 and 20 so that we can call the underlying C function at line 22. Line 25 performs memory management for the case where we have created a new array that is no longer needed.
This code has a significant amount of error handling. Note that SWIG_fail is a macro for goto fail, referring to the label at line 28. If the user provides the wrong number of arguments, this will be caught at line 10. If construction of the NumPy array fails or produces an array with the wrong number of dimensions, these errors are caught at line 17. And finally, if an error is detected, memory is still managed correctly at line 30.

Note that if the C function signature were in a different order:

double rms(int n, double* seq);

then SWIG would not match the typemap signature given above with the argument list for rms. Fortunately, numpy.i has a set of typemaps with the data pointer given last:

%apply (int DIM1, double* IN_ARRAY1) {(int n, double* seq)};

This simply has the effect of switching the definitions of arg1 and arg2 in lines 3 and 4 of the generated code above, and their assignments in lines 19 and 20.

Using numpy.i

The numpy.i file is currently located in the tools/swig sub-directory under the numpy installation directory. Typically, you will want to copy it to the directory where you are developing your wrappers. A simple module that only uses a single SWIG interface file should include the following:

%{
#define SWIG_FILE_WITH_INIT
%}
%include "numpy.i"
%init %{
import_array();
%}

Within a compiled Python module, import_array() should only get called once. This could be in a C/C++ file that you have written and is linked to the module. If this is the case, then none of your interface files should #define SWIG_FILE_WITH_INIT or call import_array(). Or, this initialization call could be in a wrapper file generated by SWIG from an interface file that has the %init block as above. If this is the case, and you have more than one SWIG interface file, then only one interface file should #define SWIG_FILE_WITH_INIT and call import_array().
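Compiling the generated wrapper also requires the NumPy C headers; their location can be queried from NumPy itself, a common pattern in setup scripts (illustrative only; your build file names will differ):

```python
import os
import numpy as np

# Directory containing numpy/arrayobject.h, needed as an include path
# (-I flag) when compiling SWIG-generated wrapper code.
include_dir = np.get_include()

assert os.path.isfile(os.path.join(include_dir, 'numpy', 'arrayobject.h'))
```

In a setuptools-based build, include_dir would typically be passed via the include_dirs argument of Extension.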
Available Typemaps

The typemap directives provided by numpy.i for arrays of different data types, say double and int, and dimensions of different types, say int or long, are identical to one another except for the C and NumPy type specifications. The typemaps are therefore implemented (typically behind the scenes) via a macro:

%numpy_typemaps(DATA_TYPE, DATA_TYPECODE, DIM_TYPE)

that can be invoked for appropriate (DATA_TYPE, DATA_TYPECODE, DIM_TYPE) triplets. For example:

%numpy_typemaps(double, NPY_DOUBLE, int)
%numpy_typemaps(int,    NPY_INT   , int)

The numpy.i interface file uses the %numpy_typemaps macro to implement typemaps for the following C data types and int dimension types:

signed char
unsigned char
short
unsigned short
int
unsigned int
long
unsigned long
long long
unsigned long long
float
double

In the following descriptions, we reference a generic DATA_TYPE, which could be any of the C data types listed above, and DIM_TYPE, which should be one of the many types of integers. The typemap signatures are largely differentiated on the name given to the buffer pointer. Names with FARRAY are for Fortran-ordered arrays, and names with ARRAY are for C-ordered (or 1D) arrays.

Input Arrays

Input arrays are defined as arrays of data that are passed into a routine but are not altered in-place or returned to the user. The Python input array is therefore allowed to be almost any Python sequence (such as a list) that can be converted to the requested type of array.
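That sequence-to-array conversion can be previewed in NumPy itself; the typemaps perform the equivalent conversion through the C API (illustrative only):

```python
import numpy as np

seq = [[1, 2, 3], [4, 5, 6]]            # a nested Python list
arr = np.asarray(seq, dtype=np.double)  # what an IN_ARRAY2 typemap accepts

assert arr.shape == (2, 3)
assert arr.dtype == np.double
```

Any sequence whose elements can be converted to the target type works the same way; an ndarray of the right type passes through without a copy.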
The input array signatures are

1D:
( DATA_TYPE IN_ARRAY1[ANY] )
( DATA_TYPE* IN_ARRAY1, int DIM1 )
( int DIM1, DATA_TYPE* IN_ARRAY1 )

2D:
( DATA_TYPE IN_ARRAY2[ANY][ANY] )
( DATA_TYPE* IN_ARRAY2, int DIM1, int DIM2 )
( int DIM1, int DIM2, DATA_TYPE* IN_ARRAY2 )
( DATA_TYPE* IN_FARRAY2, int DIM1, int DIM2 )
( int DIM1, int DIM2, DATA_TYPE* IN_FARRAY2 )

3D:
( DATA_TYPE IN_ARRAY3[ANY][ANY][ANY] )
( DATA_TYPE* IN_ARRAY3, int DIM1, int DIM2, int DIM3 )
( int DIM1, int DIM2, int DIM3, DATA_TYPE* IN_ARRAY3 )
( DATA_TYPE* IN_FARRAY3, int DIM1, int DIM2, int DIM3 )
( int DIM1, int DIM2, int DIM3, DATA_TYPE* IN_FARRAY3 )

4D:
( DATA_TYPE IN_ARRAY4[ANY][ANY][ANY][ANY] )
( DATA_TYPE* IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4 )
( DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_ARRAY4 )
( DATA_TYPE* IN_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4 )
( DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_FARRAY4 )

The first signature listed, ( DATA_TYPE IN_ARRAY1[ANY] ), is for one-dimensional arrays with hard-coded dimensions. Likewise, ( DATA_TYPE IN_ARRAY2[ANY][ANY] ) is for two-dimensional arrays with hard-coded dimensions, and similarly for three-dimensional.

In-Place Arrays

In-place arrays are defined as arrays that are modified in-place. The input values may or may not be used, but the values at the time the function returns are significant. The provided Python argument must therefore be a NumPy array of the required type.
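From the Python side, an in-place-wrapped routine mutates the caller's array and returns nothing, as this pure-Python model illustrates (a hypothetical stand-in for a C routine wrapped with INPLACE_ARRAY1):

```python
import numpy as np

def scale_inplace(a, factor):
    # Model of an INPLACE_ARRAY1-wrapped routine: the caller's buffer is
    # modified directly; no new array is created or returned.
    a *= factor

v = np.array([1.0, 2.0, 4.0])
scale_inplace(v, 2.0)   # v now holds the scaled values
```

Because no conversion is performed, passing a list here (or to a real in-place wrapper) would not update the caller's data; an actual ndarray of the required type must be supplied.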
The in-place signatures are

1D:
( DATA_TYPE INPLACE_ARRAY1[ANY] )
( DATA_TYPE* INPLACE_ARRAY1, int DIM1 )
( int DIM1, DATA_TYPE* INPLACE_ARRAY1 )

2D:
( DATA_TYPE INPLACE_ARRAY2[ANY][ANY] )
( DATA_TYPE* INPLACE_ARRAY2, int DIM1, int DIM2 )
( int DIM1, int DIM2, DATA_TYPE* INPLACE_ARRAY2 )
( DATA_TYPE* INPLACE_FARRAY2, int DIM1, int DIM2 )
( int DIM1, int DIM2, DATA_TYPE* INPLACE_FARRAY2 )

3D:
( DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY] )
( DATA_TYPE* INPLACE_ARRAY3, int DIM1, int DIM2, int DIM3 )
( int DIM1, int DIM2, int DIM3, DATA_TYPE* INPLACE_ARRAY3 )
( DATA_TYPE* INPLACE_FARRAY3, int DIM1, int DIM2, int DIM3 )
( int DIM1, int DIM2, int DIM3, DATA_TYPE* INPLACE_FARRAY3 )

4D:
( DATA_TYPE INPLACE_ARRAY4[ANY][ANY][ANY][ANY] )
( DATA_TYPE* INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4 )
( DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_ARRAY4 )
( DATA_TYPE* INPLACE_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4 )
( DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_FARRAY4 )

These typemaps now check to make sure that the INPLACE_ARRAY arguments use native byte ordering. If not, an exception is raised.

There is also a "flat" in-place array for situations in which you would like to modify or process each element, regardless of the number of dimensions. One example is a "quantization" function that quantizes each element of an array in-place, be it 1D, 2D or whatever. This form checks for contiguity but allows either C or Fortran ordering.

ND:
( DATA_TYPE* INPLACE_ARRAY_FLAT, DIM_TYPE DIM_FLAT )

Argout Arrays

Argout arrays are arrays that appear in the input arguments in C, but are in fact output arrays. This pattern occurs often when there is more than one output variable and the single return argument is therefore not sufficient. In Python, the conventional way to return multiple arguments is to pack them into a sequence (tuple, list, etc.) and return the sequence.
This is what the argout typemaps do. If a wrapped function that uses these argout typemaps has more than one return argument, they are packed into a tuple or list, depending on the version of Python. The Python user does not pass these arrays in; they simply get returned. For the case where a dimension is specified, the Python user must provide that dimension as an argument.

The argout signatures are

1D:
( DATA_TYPE ARGOUT_ARRAY1[ANY] )
( DATA_TYPE* ARGOUT_ARRAY1, int DIM1 )
( int DIM1, DATA_TYPE* ARGOUT_ARRAY1 )

2D:
( DATA_TYPE ARGOUT_ARRAY2[ANY][ANY] )

3D:
( DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY] )

4D:
( DATA_TYPE ARGOUT_ARRAY4[ANY][ANY][ANY][ANY] )

These are typically used in situations where in C/C++ you would allocate one or more arrays on the heap and call the function to fill in their values. In Python, the arrays are allocated for you and returned as new array objects.

Note that we support DATA_TYPE* argout typemaps in 1D, but not 2D or 3D. This is because of a quirk with the SWIG typemap syntax and cannot be avoided. Note that for these types of 1D typemaps, the Python function will take a single argument representing DIM1.

Argout View Arrays

Argoutview arrays are for when your C code provides you with a view of its internal data and does not require any memory to be allocated by the user. This can be dangerous. There is almost no way to guarantee that the internal data from the C code will remain in existence for the entire lifetime of the NumPy array that encapsulates it. If the user destroys the object that provides the view of the data before destroying the NumPy array, then using that array may result in bad memory references or segmentation faults. Nevertheless, there are situations, working with large data sets, where you simply have no other choice.

The C code to be wrapped for argoutview arrays is characterized by pointers: pointers to the dimensions and double pointers to the data, so that these values can be passed back to the user.
The argoutview typemap signatures are therefore

1D:
( DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1 )
( DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1 )

2D:
( DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2 )
( DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2 )

3D:
( DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3 )
( DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3 )

4D:
( DATA_TYPE** ARGOUTVIEW_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_ARRAY4 )
( DATA_TYPE** ARGOUTVIEW_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_FARRAY4 )

Note that arrays with hard-coded dimensions are not supported. These cannot follow the double pointer signatures of these typemaps.

Memory Managed Argout View Arrays

A recent addition to numpy.i are typemaps that permit argout arrays with views into memory that is managed. See the discussion here.
1D:
( DATA_TYPE** ARGOUTVIEWM_ARRAY1, DIM_TYPE* DIM1 )
( DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEWM_ARRAY1 )

2D:
( DATA_TYPE** ARGOUTVIEWM_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_ARRAY2 )
( DATA_TYPE** ARGOUTVIEWM_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_FARRAY2 )

3D:
( DATA_TYPE** ARGOUTVIEWM_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_ARRAY3 )
( DATA_TYPE** ARGOUTVIEWM_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_FARRAY3 )

4D:
( DATA_TYPE** ARGOUTVIEWM_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_ARRAY4 )
( DATA_TYPE** ARGOUTVIEWM_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4 )
( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_FARRAY4 )

Output Arrays

The numpy.i interface file does not support typemaps for output arrays, for several reasons. First, C/C++ return arguments are limited to a single value. This prevents obtaining dimension information in a general way. Second, arrays with hard-coded lengths are not permitted as return arguments. In other words:

double[3] newVector(double x, double y, double z);

is not legal C/C++ syntax. Therefore, we cannot provide typemaps of the form:

%typemap(out) (TYPE[ANY]);

If you run into a situation where a function or method is returning a pointer to an array, your best bet is to write your own version of the function to be wrapped, either with %extend for the case of class methods or %ignore and %rename for the case of functions.

Other Common Types: bool

Note that the C++ type bool is not included in the list of types in the Available Typemaps section.
NumPy bools are a single byte, while the C++ bool is four bytes (at least on my system). Therefore:

%numpy_typemaps(bool, NPY_BOOL, int)

will result in typemaps that will produce code that references improper data lengths. You can implement the following macro expansion:

%numpy_typemaps(bool, NPY_UINT, int)

to fix the data length problem, and Input Arrays will work fine, but In-Place Arrays might fail type-checking.

Other Common Types: complex

Typemap conversions for complex floating-point types are also not supported automatically. This is because Python and NumPy are written in C, which does not have native complex types. Both Python and NumPy implement their own (essentially equivalent) struct definitions for complex variables:

/* Python */
typedef struct {double real; double imag;} Py_complex;

/* NumPy */
typedef struct {float real, imag;} npy_cfloat;
typedef struct {double real, imag;} npy_cdouble;

We could have implemented:

%numpy_typemaps(Py_complex , NPY_CDOUBLE, int)
%numpy_typemaps(npy_cfloat , NPY_CFLOAT , int)
%numpy_typemaps(npy_cdouble, NPY_CDOUBLE, int)

which would have provided automatic type conversions for arrays of type Py_complex, npy_cfloat and npy_cdouble. However, it seemed unlikely that there would be any independent (non-Python, non-NumPy) application code that people would be using SWIG to generate a Python interface to, that also used these definitions for complex types. More likely, these application codes will define their own complex types, or in the case of C++, use std::complex. Assuming these data structures are compatible with Python and NumPy complex types, %numpy_typemap expansions as above (with the user's complex type substituted for the first argument) should work.

NumPy Array Scalars and SWIG

SWIG has sophisticated type checking for numerical types.
For example, if your C/C++ routine expects an integer as input, the code generated by SWIG will check for both Python integers and Python long integers, and raise an overflow error if the provided Python integer is too big to cast down to a C integer. With the introduction of NumPy scalar arrays into your Python code, you might conceivably extract an integer from a NumPy array and attempt to pass this to a SWIG-wrapped C/C++ function that expects an int, but the SWIG type checking will not recognize the NumPy array scalar as an integer. (Often, this does in fact work; it depends on whether NumPy recognizes the integer type you are using as inheriting from the Python integer type on the platform you are using. Sometimes, this means that code that works on a 32-bit machine will fail on a 64-bit machine.)

If you get a Python error that looks like the following:

TypeError: in method 'MyClass_MyMethod', argument 2 of type 'int'

and the argument you are passing is an integer extracted from a NumPy array, then you have stumbled upon this problem. The solution is to modify the SWIG type conversion system to accept NumPy array scalars in addition to the standard integer types. Fortunately, this capability has been provided for you. Simply copy the file:

pyfragments.swg

to the working build directory for your project, and this problem will be fixed. It is suggested that you do this anyway, as it only increases the capabilities of your Python interface.

Why is There a Second File?

The SWIG type checking and conversion system is a complicated combination of C macros, SWIG macros, SWIG typemaps and SWIG fragments. Fragments are a way to conditionally insert code into your wrapper file if it is needed, and not insert it if not needed. If multiple typemaps require the same fragment, the fragment only gets inserted into your wrapper code once. There is a fragment for converting a Python integer to a C long.
There is a different fragment that converts a Python integer to a C int, which calls the routine defined in the long fragment. We can make the changes we want here by changing the definition for the long fragment. SWIG determines the active definition for a fragment using a "first come, first served" system. That is, we need to define the fragment for long conversions prior to SWIG doing it internally. SWIG allows us to do this by putting our fragment definitions in the file pyfragments.swg. If we were to put the new fragment definitions in numpy.i, they would be ignored.

Helper Functions

The numpy.i file contains several macros and routines that it uses internally to build its typemaps. However, these functions may be useful elsewhere in your interface file. These macros and routines are implemented as fragments, which are described briefly in the previous section. If you try to use one or more of the following macros or functions, but your compiler complains that it does not recognize the symbol, then you need to force these fragments to appear in your code using:

%fragment("NumPy_Fragments");

in your SWIG interface file.

Macros

is_array(a)
Evaluates as true if a is non-NULL and can be cast to a PyArrayObject*.

array_type(a)
Evaluates to the integer data type code of a, assuming a can be cast to a PyArrayObject*.

array_numdims(a)
Evaluates to the integer number of dimensions of a, assuming a can be cast to a PyArrayObject*.

array_dimensions(a)
Evaluates to an array of type npy_intp and length array_numdims(a), giving the lengths of all of the dimensions of a, assuming a can be cast to a PyArrayObject*.

array_size(a,i)
Evaluates to the i-th dimension size of a, assuming a can be cast to a PyArrayObject*.

array_strides(a)
Evaluates to an array of type npy_intp and length array_numdims(a), giving the strides of all of the dimensions of a, assuming a can be cast to a PyArrayObject*.
A stride is the distance in bytes between an element and its immediate neighbor along the same axis.

array_stride(a,i)
Evaluates to the i-th stride of a, assuming a can be cast to a PyArrayObject*.

array_data(a)
Evaluates to a pointer of type void* that points to the data buffer of a, assuming a can be cast to a PyArrayObject*.

array_descr(a)
Returns a borrowed reference to the dtype property (PyArray_Descr*) of a, assuming a can be cast to a PyArrayObject*.

array_flags(a)
Returns an integer representing the flags of a, assuming a can be cast to a PyArrayObject*.

array_enableflags(a,f)
Sets the flag represented by f of a, assuming a can be cast to a PyArrayObject*.

array_is_contiguous(a)
Evaluates as true if a is a contiguous array. Equivalent to (PyArray_ISCONTIGUOUS(a)).

array_is_native(a)
Evaluates as true if the data buffer of a uses native byte order. Equivalent to (PyArray_ISNOTSWAPPED(a)).

array_is_fortran(a)
Evaluates as true if a is FORTRAN ordered.

Routines

pytype_string()
Return type: const char*
Arguments:
PyObject* py_obj, a general Python object.
Return a string describing the type of py_obj.

typecode_string()
Return type: const char*
Arguments:
int typecode, a NumPy integer typecode.
Return a string describing the type corresponding to the NumPy typecode.

type_match()
Return type: int
Arguments:
int actual_type, the NumPy typecode of a NumPy array.
int desired_type, the desired NumPy typecode.
Make sure that actual_type is compatible with desired_type. For example, this allows character and byte types, or int and long types, to match. This is now equivalent to PyArray_EquivTypenums().

obj_to_array_no_conversion()
Return type: PyArrayObject*
Arguments:
PyObject* input, a general Python object.
int typecode, the desired NumPy typecode.
Cast input to a PyArrayObject* if legal, and ensure that it is of type typecode. If input cannot be cast, or the typecode is wrong, set a Python error and return NULL.
obj_to_array_allow_conversion()
Return type: PyArrayObject*
Arguments:
PyObject* input, a general Python object.
int typecode, the desired NumPy typecode of the resulting array.
int* is_new_object, returns a value of 0 if no conversion performed, else 1.
Convert input to a NumPy array with the given typecode. On success, return a valid PyArrayObject* with the correct type. On failure, the Python error string will be set and the routine returns NULL.

make_contiguous()
Return type: PyArrayObject*
Arguments:
PyArrayObject* ary, a NumPy array.
int* is_new_object, returns a value of 0 if no conversion performed, else 1.
int min_dims, minimum allowable dimensions.
int max_dims, maximum allowable dimensions.
Check to see if ary is contiguous. If so, return the input pointer and flag it as not a new object. If it is not contiguous, create a new PyArrayObject* using the original data, flag it as a new object and return the pointer.

make_fortran()
Return type: PyArrayObject*
Arguments:
PyArrayObject* ary, a NumPy array.
int* is_new_object, returns a value of 0 if no conversion performed, else 1.
Check to see if ary is Fortran contiguous. If so, return the input pointer and flag it as not a new object. If it is not Fortran contiguous, create a new PyArrayObject* using the original data, flag it as a new object and return the pointer.

obj_to_array_contiguous_allow_conversion()
Return type: PyArrayObject*
Arguments:
PyObject* input, a general Python object.
int typecode, the desired NumPy typecode of the resulting array.
int* is_new_object, returns a value of 0 if no conversion performed, else 1.
Convert input to a contiguous PyArrayObject* of the specified type. If the input object is not a contiguous PyArrayObject*, a new one will be created and the new object flag will be set.

obj_to_array_fortran_allow_conversion()
Return type: PyArrayObject*
Arguments:
PyObject* input, a general Python object.
int typecode, the desired NumPy typecode of the resulting array.
int* is_new_object, returns a value of 0 if no conversion performed, else 1.
Convert input to a Fortran contiguous PyArrayObject* of the specified type. If the input object is not a Fortran contiguous PyArrayObject*, a new one will be created and the new object flag will be set.

require_contiguous()
Return type: int
Arguments:
PyArrayObject* ary, a NumPy array.
Test whether ary is contiguous. If so, return 1. Otherwise, set a Python error and return 0.

require_native()
Return type: int
Arguments:
PyArrayObject* ary, a NumPy array.
Require that ary is not byte-swapped. If the array is not byte-swapped, return 1. Otherwise, set a Python error and return 0.

require_dimensions()
Return type: int
Arguments:
PyArrayObject* ary, a NumPy array.
int exact_dimensions, the desired number of dimensions.
Require ary to have a specified number of dimensions. If the array has the specified number of dimensions, return 1. Otherwise, set a Python error and return 0.

require_dimensions_n()
Return type: int
Arguments:
PyArrayObject* ary, a NumPy array.
int* exact_dimensions, an array of integers representing acceptable numbers of dimensions.
int n, the length of exact_dimensions.
Require ary to have one of a list of specified numbers of dimensions. If the array has one of the specified numbers of dimensions, return 1. Otherwise, set the Python error string and return 0.

require_size()
Return type: int
Arguments:
PyArrayObject* ary, a NumPy array.
npy_intp* size, an array representing the desired lengths of each dimension.
int n, the length of size.
Require ary to have a specified shape. If the array has the specified shape, return 1. Otherwise, set the Python error string and return 0.

require_fortran()
Return type: int
Arguments:
PyArrayObject* ary, a NumPy array.
Require the given PyArrayObject to be Fortran ordered. If the PyArrayObject is already Fortran ordered, do nothing. Else, set the Fortran ordering flag and recompute the strides.
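Several of these checks have direct Python-level analogues on ndarray, which can help when reasoning about which helper a typemap needs (illustrative only; these are not the C helpers themselves):

```python
import numpy as np

a = np.arange(6, dtype=np.double).reshape(2, 3)

assert a.flags['C_CONTIGUOUS']   # ~ require_contiguous(a) succeeding
assert a.dtype.isnative          # ~ require_native(a) succeeding
assert a.ndim == 2               # ~ require_dimensions(a, 2)
assert a.shape == (2, 3)         # ~ require_size with sizes {2, 3}

s = a[:, ::2]                    # a strided slice of the same data
assert not s.flags['C_CONTIGUOUS']   # make_contiguous would copy this
```

A non-contiguous or byte-swapped array arriving at a wrapper triggers exactly these helper paths: either an error (the require_* routines) or a conversion copy (the make_* and allow_conversion routines).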
Beyond the Provided Typemaps There are many C or C++ array/NumPy array situations not covered by a simple %include "numpy.i" and subsequent %apply directives. A Common Example Consider a reasonable prototype for a dot product function: double dot(int len, double* vec1, double* vec2); The Python interface that we want is: def dot(vec1, vec2): """ dot(PyObject,PyObject) -> double """ The problem here is that there is one dimension argument and two array arguments, and our typemaps are set up for dimensions that apply to a single array (in fact, SWIG does not provide a mechanism for associating len with vec2 that takes two Python input arguments). The recommended solution is the following: %apply (int DIM1, double* IN_ARRAY1) {(int len1, double* vec1), (int len2, double* vec2)} %rename (dot) my_dot; %exception my_dot { $action if (PyErr_Occurred()) SWIG_fail; } %inline %{ double my_dot(int len1, double* vec1, int len2, double* vec2) { if (len1 != len2) { PyErr_Format(PyExc_ValueError, "Arrays of lengths (%d,%d) given", len1, len2); return 0.0; } return dot(len1, vec1, vec2); } %} If the header file that contains the prototype for double dot() also contains other prototypes that you want to wrap, so that you need to %include this header file, then you will also need a %ignore dot; directive, placed after the %rename and before the %include directives. Or, if the function in question is a class method, you will want to use %extend rather than %inline in addition to %ignore. A note on error handling: Note that my_dot returns a double but that it can also raise a Python error. The resulting wrapper function will return a Python float representation of 0.0 when the vector lengths do not match. Since this is not NULL, the Python interpreter will not know to check for an error. For this reason, we add the %exception directive above for my_dot to get the behavior we want (note that $action is a macro that gets expanded to a valid call to my_dot). 
In general, you will probably want to write a SWIG macro to perform this task. Other Situations There are other wrapping situations that you may encounter in which numpy.i may be helpful. In some situations, it is possible that you could use the %numpy_typemaps macro to implement typemaps for your own types. See the Other Common Types: bool or Other Common Types: complex sections for examples. Another situation is if your dimensions are of a type other than int (say long for example): %numpy_typemaps(double, NPY_DOUBLE, long) You can use the code in numpy.i to write your own typemaps. For example, if you had a five-dimensional array as a function argument, you could cut-and-paste the appropriate four-dimensional typemaps into your interface file. The modifications for the fifth dimension would be trivial. Sometimes, the best approach is to use the %extend directive to define new methods for your classes (or overload existing ones) that take a PyObject* (that either is or can be converted to a PyArrayObject*) instead of a pointer to a buffer. In this case, the helper routines in numpy.i can be very useful. Writing typemaps can be a bit nonintuitive. If you have specific questions about writing SWIG typemaps for NumPy, the developers of numpy.i do monitor the Numpy-discussion and Swig-user mail lists. A Final Note When you use the %apply directive, as is usually necessary to use numpy.i, it will remain in effect until you tell SWIG that it shouldn’t be. If the arguments to the functions or methods that you are wrapping have common names, such as length or vector, these typemaps may get applied in situations you do not expect or want. 
Therefore, it is always a good idea to add a %clear directive after you are done with a specific typemap: %apply (double* IN_ARRAY1, int DIM1) {(double* vector, int length)} %include "my_header.h" %clear (double* vector, int length); In general, you should target these typemap signatures specifically where you want them, and then clear them after you are done. Summary Out of the box, numpy.i provides typemaps that support conversion between NumPy arrays and C arrays: That can be one of 12 different scalar types: signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, float and double. That support 74 different argument signatures for each data type, including: One-dimensional, two-dimensional, three-dimensional and four-dimensional arrays. Input-only, in-place, argout, argoutview, and memory managed argoutview behavior. Hard-coded dimensions, data-buffer-then-dimensions specification, and dimensions-then-data-buffer specification. Both C-ordering (“last dimension fastest”) and Fortran-ordering (“first dimension fastest”) support for 2D, 3D and 4D arrays. The numpy.i interface file also provides additional tools for wrapper developers, including: A SWIG macro (%numpy_typemaps) with three arguments for implementing the 74 argument signatures for the user’s choice of (1) C data type, (2) NumPy data type (assuming they match), and (3) dimension type. Fourteen C macros and fifteen C functions that can be used to write specialized typemaps, extensions, or inlined functions that handle cases not covered by the provided typemaps. Note that the macros and functions are coded specifically to work with the NumPy C/API regardless of NumPy version number, both before and after the deprecation of some aspects of the API after version 1.6.
numpy.reference.swig.interface-file
numpy.lib.format.descr_to_dtype lib.format.descr_to_dtype(descr)[source] Returns a dtype based off the given description. This is essentially the reverse of dtype_to_descr(). It will remove any valueless padding fields (such as those that appear in the .descr of simple dtypes like dtype(‘float32’)) and then convert the description to its corresponding dtype. Parameters descrobject The object retrieved by dtype.descr. Can be passed to numpy.dtype() in order to replicate the input dtype. Returns dtypedtype The dtype constructed by the description.
numpy.reference.generated.numpy.lib.format.descr_to_dtype
numpy.lib.format.dtype_to_descr lib.format.dtype_to_descr(dtype)[source] Get a serializable descriptor from the dtype. The .descr attribute of a dtype object cannot be round-tripped through the dtype() constructor. Simple types, like dtype(‘float32’), have a descr which looks like a record array with one field with ‘’ as a name. The dtype() constructor interprets this as a request to give a default name. Instead, we construct descriptor that can be passed to dtype(). Parameters dtypedtype The dtype of the array that will be written to disk. Returns descrobject An object that can be passed to numpy.dtype() in order to replicate the input dtype.
numpy.reference.generated.numpy.lib.format.dtype_to_descr
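As a concrete illustration of the round trip between these two helpers (a minimal sketch, not part of the original reference pages):

```python
import numpy as np
from numpy.lib.format import dtype_to_descr, descr_to_dtype

# Structured dtypes round-trip through the descr/dtype pair.
structured = np.dtype([('x', '<i4'), ('y', '<f8')])
descr = dtype_to_descr(structured)
assert descr_to_dtype(descr) == structured

# For a simple dtype, dtype_to_descr returns the plain type string rather
# than the one-field record produced by .descr, so it is round-trippable.
simple = np.dtype('<f4')
assert dtype_to_descr(simple) == '<f4'
assert descr_to_dtype('<f4') == simple
```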
numpy.lib.format.header_data_from_array_1_0 lib.format.header_data_from_array_1_0(array)[source] Get the dictionary of header metadata from a numpy.ndarray. Parameters arraynumpy.ndarray Returns ddict This has the appropriate entries for writing its string representation to the header of the file.
numpy.reference.generated.numpy.lib.format.header_data_from_array_1_0
numpy.lib.format.magic lib.format.magic(major, minor)[source] Return the magic string for the given file format version. Parameters majorint in [0, 255] minorint in [0, 255] Returns magicstr Raises ValueError if the version cannot be formatted.
numpy.reference.generated.numpy.lib.format.magic
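For illustration (not part of the original page), the magic string for format version 1.0 is the fixed prefix b'\x93NUMPY' followed by the one-byte major and minor version numbers, and read_magic is its inverse:

```python
import io
from numpy.lib.format import magic, read_magic

# The magic string: fixed prefix plus one byte each for major and minor.
m = magic(1, 0)
assert m == b'\x93NUMPY\x01\x00'

# read_magic consumes the prefix from a file-like object and
# returns the (major, minor) version tuple.
assert read_magic(io.BytesIO(m)) == (1, 0)
```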
numpy.lib.format.open_memmap lib.format.open_memmap(filename, mode='r+', dtype=None, shape=None, fortran_order=False, version=None)[source] Open a .npy file as a memory-mapped array. This may be used to read an existing file or create a new one. Parameters filenamestr or path-like The name of the file on disk. This may not be a file-like object. modestr, optional The mode in which to open the file; the default is ‘r+’. In addition to the standard file modes, ‘c’ is also accepted to mean “copy on write.” See memmap for the available mode strings. dtypedata-type, optional The data type of the array if we are creating a new file in “write” mode, if not, dtype is ignored. The default value is None, which results in a data-type of float64. shapetuple of int The shape of the array if we are creating a new file in “write” mode, in which case this parameter is required. Otherwise, this parameter is ignored and is thus optional. fortran_orderbool, optional Whether the array should be Fortran-contiguous (True) or C-contiguous (False, the default) if we are creating a new file in “write” mode. versiontuple of int (major, minor) or None If the mode is a “write” mode, then this is the version of the file format used to create the file. None means use the oldest supported version that is able to store the data. Default: None Returns marraymemmap The memory-mapped array. Raises ValueError If the data or the mode is invalid. OSError If the file is not found or cannot be opened correctly. See also numpy.memmap
numpy.reference.generated.numpy.lib.format.open_memmap
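A minimal usage sketch (illustrative only; the temporary file path here is created just for the example):

```python
import os
import tempfile
import numpy as np
from numpy.lib.format import open_memmap

# Create a new .npy file in "write" mode, fill it through the memmap,
# then reopen it read-only; the data persists on disk in .npy format.
path = os.path.join(tempfile.mkdtemp(), 'data.npy')
mm = open_memmap(path, mode='w+', dtype=np.float64, shape=(3, 4))
mm[:] = np.arange(12).reshape(3, 4)
mm.flush()
del mm  # release the write-mode map

ro = open_memmap(path, mode='r')
assert ro.shape == (3, 4)
assert ro[2, 3] == 11.0
# The result is a regular .npy file, so np.load reads it too.
assert np.load(path)[0, 1] == 1.0
```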
numpy.lib.format.read_array lib.format.read_array(fp, allow_pickle=False, pickle_kwargs=None)[source] Read an array from an NPY file. Parameters fpfile_like object If this is not a real file object, then this may take extra memory and time. allow_picklebool, optional Whether to allow reading pickled data. Default: False Changed in version 1.16.3: Made default False in response to CVE-2019-6446. pickle_kwargsdict Additional keyword arguments to pass to pickle.load. These are only useful when loading object arrays saved on Python 2 when using Python 3. Returns arrayndarray The array from the data on disk. Raises ValueError If the data is invalid, or allow_pickle=False and the file contains an object array.
numpy.reference.generated.numpy.lib.format.read_array
numpy.lib.format.read_array_header_1_0 lib.format.read_array_header_1_0(fp)[source] Read an array header from a filelike object using the 1.0 file format version. This will leave the file object located just after the header. Parameters fpfilelike object A file object or something with a read() method like a file. Returns shapetuple of int The shape of the array. fortran_orderbool Whether the array data is stored in Fortran (column-major) order; if False, the data is stored in C (row-major) order. dtypedtype The dtype of the file’s data. Raises ValueError If the data is invalid.
numpy.reference.generated.numpy.lib.format.read_array_header_1_0
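An illustrative sketch (not part of the original page) of parsing a header written by np.save, which for ordinary arrays with small headers uses format version (1, 0):

```python
import io
import numpy as np
from numpy.lib.format import read_magic, read_array_header_1_0

# Write a small array to an in-memory buffer in .npy format.
buf = io.BytesIO()
np.save(buf, np.zeros((2, 5), dtype='<i4'))
buf.seek(0)

# read_magic consumes the magic string; read_array_header_1_0 then parses
# the header and leaves the buffer positioned at the raw array data.
assert read_magic(buf) == (1, 0)
shape, fortran_order, dtype = read_array_header_1_0(buf)
assert shape == (2, 5)
assert fortran_order is False
assert dtype == np.dtype('<i4')
```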
numpy.lib.format.read_array_header_2_0 lib.format.read_array_header_2_0(fp)[source] Read an array header from a filelike object using the 2.0 file format version. This will leave the file object located just after the header. New in version 1.9.0. Parameters fpfilelike object A file object or something with a read() method like a file. Returns shapetuple of int The shape of the array. fortran_orderbool Whether the array data is stored in Fortran (column-major) order; if False, the data is stored in C (row-major) order. dtypedtype The dtype of the file’s data. Raises ValueError If the data is invalid.
numpy.reference.generated.numpy.lib.format.read_array_header_2_0
numpy.lib.format.read_magic lib.format.read_magic(fp)[source] Read the magic string to get the version of the file format. Parameters fpfilelike object Returns majorint minorint
numpy.reference.generated.numpy.lib.format.read_magic
numpy.lib.format.write_array lib.format.write_array(fp, array, version=None, allow_pickle=True, pickle_kwargs=None)[source] Write an array to an NPY file, including a header. If the array is neither C-contiguous nor Fortran-contiguous AND the file_like object is not a real file object, this function will have to copy data in memory. Parameters fpfile_like object An open, writable file object, or similar object with a .write() method. arrayndarray The array to write to disk. version(int, int) or None, optional The version number of the format. None means use the oldest supported version that is able to store the data. Default: None allow_picklebool, optional Whether to allow writing pickled data. Default: True pickle_kwargsdict, optional Additional keyword arguments to pass to pickle.dump, excluding ‘protocol’. These are only useful when pickling objects in object arrays on Python 3 to Python 2 compatible format. Raises ValueError If the array cannot be persisted. This includes the case of allow_pickle=False and array being an object array. Various other errors If the array contains Python objects as part of its dtype, the process of pickling them may raise various errors if the objects are not picklable.
numpy.reference.generated.numpy.lib.format.write_array
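A round-trip sketch using an in-memory buffer (illustrative, not part of the original page); it also demonstrates the ValueError that read_array raises for pickled data when allow_pickle is False:

```python
import io
import numpy as np
from numpy.lib.format import write_array, read_array

# Round-trip an ordinary array through an in-memory .npy stream.
a = np.arange(6, dtype=np.float32).reshape(2, 3)
buf = io.BytesIO()
write_array(buf, a)
buf.seek(0)
b = read_array(buf)
assert np.array_equal(a, b) and b.dtype == np.float32

# Object arrays require pickling; write_array allows it by default,
# but read_array's default allow_pickle=False refuses to load it.
obj = np.array([{'k': 1}], dtype=object)
buf2 = io.BytesIO()
write_array(buf2, obj)
buf2.seek(0)
try:
    read_array(buf2)
except ValueError:
    pass  # expected: pickled data with allow_pickle=False
else:
    raise AssertionError("expected ValueError for pickled data")
```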
numpy.lib.format.write_array_header_1_0 lib.format.write_array_header_1_0(fp, d)[source] Write the header for an array using the 1.0 format. Parameters fpfilelike object ddict This has the appropriate entries for writing its string representation to the header of the file.
numpy.reference.generated.numpy.lib.format.write_array_header_1_0
numpy.lib.format.write_array_header_2_0 lib.format.write_array_header_2_0(fp, d)[source] Write the header for an array using the 2.0 format. The 2.0 format allows storing very large structured arrays. New in version 1.9.0. Parameters fpfilelike object ddict This has the appropriate entries for writing its string representation to the header of the file.
numpy.reference.generated.numpy.lib.format.write_array_header_2_0
numpy.lib.scimath.arccos lib.scimath.arccos(x)[source] Compute the inverse cosine of x. Return the “principal value” (for a description of this, see numpy.arccos) of the inverse cosine of x. For real x such that abs(x) <= 1, this is a real number in the closed interval \([0, \pi]\). Otherwise, the complex principal value is returned. Parameters xarray_like or scalar The value(s) whose arccos is (are) required. Returns outndarray or scalar The inverse cosine(s) of the x value(s). If x was a scalar, so is out, otherwise an array object is returned. See also numpy.arccos Notes For an arccos() that returns NAN when real x is not in the interval [-1,1], use numpy.arccos. Examples >>> np.set_printoptions(precision=4) >>> np.emath.arccos(1) # a scalar is returned 0.0 >>> np.emath.arccos([1,2]) array([0.-0.j , 0.-1.317j])
numpy.reference.generated.numpy.lib.scimath.arccos
numpy.lib.scimath.arcsin lib.scimath.arcsin(x)[source] Compute the inverse sine of x. Return the “principal value” (for a description of this, see numpy.arcsin) of the inverse sine of x. For real x such that abs(x) <= 1, this is a real number in the closed interval \([-\pi/2, \pi/2]\). Otherwise, the complex principal value is returned. Parameters xarray_like or scalar The value(s) whose arcsin is (are) required. Returns outndarray or scalar The inverse sine(s) of the x value(s). If x was a scalar, so is out, otherwise an array object is returned. See also numpy.arcsin Notes For an arcsin() that returns NAN when real x is not in the interval [-1,1], use numpy.arcsin. Examples >>> np.set_printoptions(precision=4) >>> np.emath.arcsin(0) 0.0 >>> np.emath.arcsin([0,1]) array([0. , 1.5708])
numpy.reference.generated.numpy.lib.scimath.arcsin
numpy.lib.scimath.arctanh lib.scimath.arctanh(x)[source] Compute the inverse hyperbolic tangent of x. Return the “principal value” (for a description of this, see numpy.arctanh) of arctanh(x). For real x such that abs(x) < 1, this is a real number. If abs(x) > 1, or if x is complex, the result is complex. Finally, x = 1 returns inf and x = -1 returns -inf. Parameters xarray_like The value(s) whose arctanh is (are) required. Returns outndarray or scalar The inverse hyperbolic tangent(s) of the x value(s). If x was a scalar so is out, otherwise an array is returned. See also numpy.arctanh Notes For an arctanh() that returns NAN when real x is not in the interval (-1,1), use numpy.arctanh (this latter, however, does return +/-inf for x = +/-1). Examples >>> np.set_printoptions(precision=4) >>> from numpy.testing import suppress_warnings >>> with suppress_warnings() as sup: ... sup.filter(RuntimeWarning) ... np.emath.arctanh(np.eye(2)) array([[inf, 0.], [ 0., inf]]) >>> np.emath.arctanh([1j]) array([0.+0.7854j])
numpy.reference.generated.numpy.lib.scimath.arctanh
numpy.lib.scimath.log lib.scimath.log(x)[source] Compute the natural logarithm of x. Return the “principal value” (for a description of this, see numpy.log) of \(log_e(x)\). For real x > 0, this is a real number (log(0) returns -inf and log(np.inf) returns inf). Otherwise, the complex principal value is returned. Parameters xarray_like The value(s) whose log is (are) required. Returns outndarray or scalar The log of the x value(s). If x was a scalar, so is out, otherwise an array is returned. See also numpy.log Notes For a log() that returns NAN when real x < 0, use numpy.log (note, however, that otherwise numpy.log and this log are identical, i.e., both return -inf for x = 0, inf for x = inf, and, notably, the complex principal value if x.imag != 0). Examples >>> np.emath.log(np.exp(1)) 1.0 Negative arguments are handled “correctly” (recall that exp(log(x)) == x does not hold for real x < 0): >>> np.emath.log(-np.exp(1)) == (1 + np.pi * 1j) True
numpy.reference.generated.numpy.lib.scimath.log
numpy.lib.scimath.log10 lib.scimath.log10(x)[source] Compute the logarithm base 10 of x. Return the “principal value” (for a description of this, see numpy.log10) of \(log_{10}(x)\). For real x > 0, this is a real number (log10(0) returns -inf and log10(np.inf) returns inf). Otherwise, the complex principal value is returned. Parameters xarray_like or scalar The value(s) whose log base 10 is (are) required. Returns outndarray or scalar The log base 10 of the x value(s). If x was a scalar, so is out, otherwise an array object is returned. See also numpy.log10 Notes For a log10() that returns NAN when real x < 0, use numpy.log10 (note, however, that otherwise numpy.log10 and this log10 are identical, i.e., both return -inf for x = 0, inf for x = inf, and, notably, the complex principal value if x.imag != 0). Examples (We set the printing precision so the example can be auto-tested) >>> np.set_printoptions(precision=4) >>> np.emath.log10(10**1) 1.0 >>> np.emath.log10([-10**1, -10**2, 10**2]) array([1.+1.3644j, 2.+1.3644j, 2.+0.j ])
numpy.reference.generated.numpy.lib.scimath.log10
numpy.lib.scimath.log2 lib.scimath.log2(x)[source] Compute the logarithm base 2 of x. Return the “principal value” (for a description of this, see numpy.log2) of \(log_2(x)\). For real x > 0, this is a real number (log2(0) returns -inf and log2(np.inf) returns inf). Otherwise, the complex principal value is returned. Parameters xarray_like The value(s) whose log base 2 is (are) required. Returns outndarray or scalar The log base 2 of the x value(s). If x was a scalar, so is out, otherwise an array is returned. See also numpy.log2 Notes For a log2() that returns NAN when real x < 0, use numpy.log2 (note, however, that otherwise numpy.log2 and this log2 are identical, i.e., both return -inf for x = 0, inf for x = inf, and, notably, the complex principal value if x.imag != 0). Examples We set the printing precision so the example can be auto-tested: >>> np.set_printoptions(precision=4) >>> np.emath.log2(8) 3.0 >>> np.emath.log2([-4, -8, 8]) array([2.+4.5324j, 3.+4.5324j, 3.+0.j ])
numpy.reference.generated.numpy.lib.scimath.log2
numpy.lib.scimath.logn lib.scimath.logn(n, x)[source] Take log base n of x. If x contains negative inputs, the answer is computed and returned in the complex domain. Parameters narray_like The integer base(s) in which the log is taken. xarray_like The value(s) whose log base n is (are) required. Returns outndarray or scalar The log base n of the x value(s). If x was a scalar, so is out, otherwise an array is returned. Examples >>> np.set_printoptions(precision=4) >>> np.emath.logn(2, [4, 8]) array([2., 3.]) >>> np.emath.logn(2, [-4, -8, 8]) array([2.+4.5324j, 3.+4.5324j, 3.+0.j ])
numpy.reference.generated.numpy.lib.scimath.logn
numpy.lib.scimath.power lib.scimath.power(x, p)[source] Return x to the power p, (x**p). If x contains negative values, the output is converted to the complex domain. Parameters xarray_like The input value(s). parray_like of ints The power(s) to which x is raised. If x contains multiple values, p has to either be a scalar, or contain the same number of values as x. In the latter case, the result is x[0]**p[0], x[1]**p[1], .... Returns outndarray or scalar The result of x**p. If x and p are scalars, so is out, otherwise an array is returned. See also numpy.power Examples >>> np.set_printoptions(precision=4) >>> np.emath.power([2, 4], 2) array([ 4, 16]) >>> np.emath.power([2, 4], -2) array([0.25 , 0.0625]) >>> np.emath.power([-2, 4], 2) array([ 4.-0.j, 16.+0.j])
numpy.reference.generated.numpy.lib.scimath.power
numpy.lib.scimath.sqrt lib.scimath.sqrt(x)[source] Compute the square root of x. For negative input elements, a complex value is returned (unlike numpy.sqrt which returns NaN). Parameters xarray_like The input value(s). Returns outndarray or scalar The square root of x. If x was a scalar, so is out, otherwise an array is returned. See also numpy.sqrt Examples For real, non-negative inputs this works just like numpy.sqrt: >>> np.emath.sqrt(1) 1.0 >>> np.emath.sqrt([1, 4]) array([1., 2.]) But it automatically handles negative inputs: >>> np.emath.sqrt(-1) 1j >>> np.emath.sqrt([-1,4]) array([0.+1.j, 2.+0.j])
numpy.reference.generated.numpy.lib.scimath.sqrt
numpy.lib.stride_tricks.as_strided lib.stride_tricks.as_strided(x, shape=None, strides=None, subok=False, writeable=True)[source] Create a view into the array with the given shape and strides. Warning This function has to be used with extreme care, see notes. Parameters xndarray Array from which to create the new view. shapesequence of int, optional The shape of the new array. Defaults to x.shape. stridessequence of int, optional The strides of the new array. Defaults to x.strides. subokbool, optional New in version 1.10. If True, subclasses are preserved. writeablebool, optional New in version 1.12. If set to False, the returned array will always be read-only. Otherwise it will be writable if the original array was. It is advisable to set this to False if possible (see Notes). Returns viewndarray See also broadcast_to broadcast an array to a given shape. reshape reshape an array. lib.stride_tricks.sliding_window_view user-friendly and safe function for the creation of sliding window views. Notes as_strided creates a view into the array given the exact strides and shape. This means it manipulates the internal data structure of ndarray and, if done incorrectly, the array elements can point to invalid memory and can corrupt results or crash your program. It is advisable to always use the original x.strides when calculating new strides to avoid reliance on a contiguous memory layout. Furthermore, arrays created with this function often contain self-overlapping memory, so that two or more elements refer to the same memory location. Vectorized write operations on such arrays will typically be unpredictable. They may even give different results for small, large, or transposed arrays. Since writing to these arrays has to be tested and done with great care, you may want to use writeable=False to avoid accidental write operations. For these reasons it is advisable to avoid as_strided when possible.
numpy.reference.generated.numpy.lib.stride_tricks.as_strided
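Following the note about building new strides from the original x.strides, a sliding-window sketch (illustrative only; sliding_window_view is the safer alternative for this task):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# A sliding window over a 1-D array: each row starts one element later.
# Strides are derived from x.strides, and writeable=False guards against
# accidental writes to the overlapping memory.
x = np.arange(6)
window = 3
n_windows = x.size - window + 1
v = as_strided(x,
               shape=(n_windows, window),
               strides=(x.strides[0], x.strides[0]),
               writeable=False)
assert v.shape == (4, 3)
assert np.array_equal(v[1], [1, 2, 3])
# Rows overlap: the view shares memory with x, so no data is copied.
assert np.shares_memory(v, x)
```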
numpy.lib.stride_tricks.sliding_window_view lib.stride_tricks.sliding_window_view(x, window_shape, axis=None, *, subok=False, writeable=False)[source] Create a sliding window view into the array with the given window shape. Also known as rolling or moving window, the window slides across all dimensions of the array and extracts subsets of the array at all window positions. New in version 1.20.0. Parameters xarray_like Array to create the sliding window view from. window_shapeint or tuple of int Size of window over each axis that takes part in the sliding window. If axis is not present, must have same length as the number of input array dimensions. Single integers i are treated as if they were the tuple (i,). axisint or tuple of int, optional Axis or axes along which the sliding window is applied. By default, the sliding window is applied to all axes and window_shape[i] will refer to axis i of x. If axis is given as a tuple of int, window_shape[i] will refer to the axis axis[i] of x. Single integers i are treated as if they were the tuple (i,). subokbool, optional If True, sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (default). writeablebool, optional When true, allow writing to the returned view. The default is false, as this should be used with caution: the returned view contains the same memory location multiple times, so writing to one location will cause others to change. Returns viewndarray Sliding window view of the array. The sliding window dimensions are inserted at the end, and the original dimensions are trimmed as required by the size of the sliding window. That is, view.shape = x_shape_trimmed + window_shape, where x_shape_trimmed is x.shape with every entry reduced by one less than the corresponding window size. See also lib.stride_tricks.as_strided A lower-level and less safe routine for creating arbitrary views from custom shape and strides. broadcast_to broadcast an array to a given shape. 
Notes For many applications using a sliding window view can be convenient, but potentially very slow. Often specialized solutions exist, for example: scipy.signal.fftconvolve filtering functions in scipy.ndimage moving window functions provided by bottleneck. As a rough estimate, a sliding window approach with an input size of N and a window size of W will scale as O(N*W) where frequently a special algorithm can achieve O(N). That means that the sliding window variant for a window size of 100 can be a 100 times slower than a more specialized version. Nevertheless, for small window sizes, when no custom algorithm exists, or as a prototyping and developing tool, this function can be a good solution. Examples >>> x = np.arange(6) >>> x.shape (6,) >>> v = sliding_window_view(x, 3) >>> v.shape (4, 3) >>> v array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]) This also works in more dimensions, e.g. >>> i, j = np.ogrid[:3, :4] >>> x = 10*i + j >>> x.shape (3, 4) >>> x array([[ 0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23]]) >>> shape = (2,2) >>> v = sliding_window_view(x, shape) >>> v.shape (2, 3, 2, 2) >>> v array([[[[ 0, 1], [10, 11]], [[ 1, 2], [11, 12]], [[ 2, 3], [12, 13]]], [[[10, 11], [20, 21]], [[11, 12], [21, 22]], [[12, 13], [22, 23]]]]) The axis can be specified explicitly: >>> v = sliding_window_view(x, 3, 0) >>> v.shape (1, 4, 3) >>> v array([[[ 0, 10, 20], [ 1, 11, 21], [ 2, 12, 22], [ 3, 13, 23]]]) The same axis can be used several times. 
In that case, every use reduces the corresponding original dimension: >>> v = sliding_window_view(x, (2, 3), (1, 1)) >>> v.shape (3, 1, 2, 3) >>> v array([[[[ 0, 1, 2], [ 1, 2, 3]]], [[[10, 11, 12], [11, 12, 13]]], [[[20, 21, 22], [21, 22, 23]]]]) Combining with stepped slicing (::step), this can be used to take sliding views which skip elements: >>> x = np.arange(7) >>> sliding_window_view(x, 5)[:, ::2] array([[0, 2, 4], [1, 3, 5], [2, 4, 6]]) or views which move by multiple elements >>> x = np.arange(7) >>> sliding_window_view(x, 3)[::2, :] array([[0, 1, 2], [2, 3, 4], [4, 5, 6]]) A common application of sliding_window_view is the calculation of running statistics. The simplest example is the moving average: >>> x = np.arange(6) >>> x.shape (6,) >>> v = sliding_window_view(x, 3) >>> v.shape (4, 3) >>> v array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]) >>> moving_average = v.mean(axis=-1) >>> moving_average array([1., 2., 3., 4.]) Note that a sliding window approach is often not optimal (see Notes).
numpy.reference.generated.numpy.lib.stride_tricks.sliding_window_view
numpy.linalg.cholesky linalg.cholesky(a)[source] Cholesky decomposition. Return the Cholesky decomposition, L * L.H, of the square matrix a, where L is lower-triangular and .H is the conjugate transpose operator (which is the ordinary transpose if a is real-valued). a must be Hermitian (symmetric if real-valued) and positive-definite. No checking is performed to verify whether a is Hermitian or not. In addition, only the lower-triangular and diagonal elements of a are used. Only L is actually returned. Parameters a(…, M, M) array_like Hermitian (symmetric if all elements are real), positive-definite input matrix. Returns L(…, M, M) array_like Lower-triangular Cholesky factor of a. Returns a matrix object if a is a matrix object. Raises LinAlgError If the decomposition fails, for example, if a is not positive-definite. See also scipy.linalg.cholesky Similar function in SciPy. scipy.linalg.cholesky_banded Cholesky decompose a banded Hermitian positive-definite matrix. scipy.linalg.cho_factor Cholesky decomposition of a matrix, to use in scipy.linalg.cho_solve. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. The Cholesky decomposition is often used as a fast way of solving \[A \mathbf{x} = \mathbf{b}\] (when A is both Hermitian/symmetric and positive-definite). First, we solve for \(\mathbf{y}\) in \[L \mathbf{y} = \mathbf{b},\] and then for \(\mathbf{x}\) in \[L.H \mathbf{x} = \mathbf{y}.\] Examples >>> A = np.array([[1,-2j],[2j,5]]) >>> A array([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> L = np.linalg.cholesky(A) >>> L array([[1.+0.j, 0.+0.j], [0.+2.j, 1.+0.j]]) >>> np.dot(L, L.T.conj()) # verify that L * L.H = A array([[1.+0.j, 0.-2.j], [0.+2.j, 5.+0.j]]) >>> A = [[1,-2j],[2j,5]] # what happens if A is only array_like? 
>>> np.linalg.cholesky(A) # an ndarray object is returned array([[1.+0.j, 0.+0.j], [0.+2.j, 1.+0.j]]) >>> # But a matrix object is returned if A is a matrix object >>> np.linalg.cholesky(np.matrix(A)) matrix([[ 1.+0.j, 0.+0.j], [ 0.+2.j, 1.+0.j]])
numpy.reference.generated.numpy.linalg.cholesky
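The two-step solve described in the Notes can be sketched as follows (illustrative; np.linalg.solve is used generically here for brevity, where a dedicated triangular solver such as scipy.linalg.solve_triangular would normally be preferred):

```python
import numpy as np

# Solve A x = b via the Cholesky factor: first L y = b, then L.H x = y.
A = np.array([[4.0, 2.0], [2.0, 3.0]])   # symmetric positive-definite
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T.conj(), A)    # L * L.H reproduces A

y = np.linalg.solve(L, b)                # forward-substitution step
x = np.linalg.solve(L.T.conj(), y)       # back-substitution step
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.solve(A, b))
```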
numpy.linalg.cond linalg.cond(x, p=None)[source] Compute the condition number of a matrix. This function is capable of returning the condition number using one of seven different norms, depending on the value of p (see Parameters below). Parameters x(…, M, N) array_like The matrix whose condition number is sought. p{None, 1, -1, 2, -2, inf, -inf, ‘fro’}, optional Order of the norm used in the condition number computation: p norm for matrices None 2-norm, computed directly using the SVD ‘fro’ Frobenius norm inf max(sum(abs(x), axis=1)) -inf min(sum(abs(x), axis=1)) 1 max(sum(abs(x), axis=0)) -1 min(sum(abs(x), axis=0)) 2 2-norm (largest sing. value) -2 smallest singular value inf means the numpy.inf object, and the Frobenius norm is the root-of-sum-of-squares norm. Returns c{float, inf} The condition number of the matrix. May be infinite. See also numpy.linalg.norm Notes The condition number of x is defined as the norm of x times the norm of the inverse of x [1]; the norm can be the usual L2-norm (root-of-sum-of-squares) or one of a number of other matrix norms. References 1 G. Strang, Linear Algebra and Its Applications, Orlando, FL, Academic Press, Inc., 1980, pg. 285. Examples >>> from numpy import linalg as LA >>> a = np.array([[1, 0, -1], [0, 1, 0], [1, 0, 1]]) >>> a array([[ 1, 0, -1], [ 0, 1, 0], [ 1, 0, 1]]) >>> LA.cond(a) 1.4142135623730951 >>> LA.cond(a, 'fro') 3.1622776601683795 >>> LA.cond(a, np.inf) 2.0 >>> LA.cond(a, -np.inf) 1.0 >>> LA.cond(a, 1) 2.0 >>> LA.cond(a, -1) 1.0 >>> LA.cond(a, 2) 1.4142135623730951 >>> LA.cond(a, -2) 0.70710678118654746 # may vary >>> min(LA.svd(a, compute_uv=False))*min(LA.svd(LA.inv(a), compute_uv=False)) 0.70710678118654746 # may vary
numpy.reference.generated.numpy.linalg.cond
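An illustrative check (not part of the original page) of the norm table: for p=None the result equals the ratio of the extreme singular values, and for p=1 it equals the product of the 1-norms of x and inv(x):

```python
import numpy as np

# Same matrix as in the Examples section above.
a = np.array([[1.0, 0.0, -1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 1.0]])

# p=None: 2-norm condition number = largest / smallest singular value.
s = np.linalg.svd(a, compute_uv=False)
assert np.isclose(np.linalg.cond(a), s.max() / s.min())

# p=1: max column-absolute-sum of a times the same norm of inv(a),
# matching the definition norm(x) * norm(inv(x)).
inv_norm1 = np.abs(np.linalg.inv(a)).sum(axis=0).max()
assert np.isclose(np.linalg.cond(a, 1),
                  np.abs(a).sum(axis=0).max() * inv_norm1)
```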
numpy.linalg.det linalg.det(a)[source] Compute the determinant of an array. Parameters a(…, M, M) array_like Input array to compute determinants for. Returns det(…) array_like Determinant of a. See also slogdet Another way to represent the determinant, more suitable for large matrices where underflow/overflow may occur. scipy.linalg.det Similar function in SciPy. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. The determinant is computed via LU factorization using the LAPACK routine z/dgetrf. Examples The determinant of a 2-D array [[a, b], [c, d]] is ad - bc: >>> a = np.array([[1, 2], [3, 4]]) >>> np.linalg.det(a) -2.0 # may vary Computing determinants for a stack of matrices: >>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ]) >>> a.shape (3, 2, 2) >>> np.linalg.det(a) array([-2., -3., -8.])
numpy.linalg.eig linalg.eig(a)[source] Compute the eigenvalues and right eigenvectors of a square array. Parameters a(…, M, M) array Matrices for which the eigenvalues and right eigenvectors will be computed. Returns w(…, M) array The eigenvalues, each repeated according to its multiplicity. The eigenvalues are not necessarily ordered. The resulting array will be of complex type, unless the imaginary part is zero in which case it will be cast to a real type. When a is real the resulting eigenvalues will be real (0 imaginary part) or occur in conjugate pairs. v(…, M, M) array The normalized (unit “length”) eigenvectors, such that the column v[:,i] is the eigenvector corresponding to the eigenvalue w[i]. Raises LinAlgError If the eigenvalue computation does not converge. See also eigvals eigenvalues of a non-symmetric array. eigh eigenvalues and eigenvectors of a real symmetric or complex Hermitian (conjugate symmetric) array. eigvalsh eigenvalues of a real symmetric or complex Hermitian (conjugate symmetric) array. scipy.linalg.eig Similar function in SciPy that also solves the generalized eigenvalue problem. scipy.linalg.schur Best choice for unitary and other non-Hermitian normal matrices. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. This is implemented using the _geev LAPACK routines which compute the eigenvalues and eigenvectors of general square arrays. The number w is an eigenvalue of a if there exists a vector v such that a @ v = w * v. Thus, the arrays a, w, and v satisfy the equations a @ v[:,i] = w[i] * v[:,i] for \(i \in \{0,...,M-1\}\). The array v of eigenvectors may not be of maximum rank, that is, some of the columns may be linearly dependent, although round-off error may obscure that fact. If the eigenvalues are all different, then theoretically the eigenvectors are linearly independent and a can be diagonalized by a similarity transformation using v, i.e., inv(v) @ a @ v is diagonal. 
For non-Hermitian normal matrices the SciPy function scipy.linalg.schur is preferred because the matrix v is guaranteed to be unitary, which is not the case when using eig. The Schur factorization produces an upper triangular matrix rather than a diagonal matrix, but for normal matrices only the diagonal of the upper triangular matrix is needed, the rest is roundoff error. Finally, it is emphasized that v consists of the right (as in right-hand side) eigenvectors of a. A vector y satisfying y.T @ a = z * y.T for some number z is called a left eigenvector of a, and, in general, the left and right eigenvectors of a matrix are not necessarily the (perhaps conjugate) transposes of each other. References G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, Various pp. Examples >>> from numpy import linalg as LA (Almost) trivial example with real e-values and e-vectors. >>> w, v = LA.eig(np.diag((1, 2, 3))) >>> w; v array([1., 2., 3.]) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) Real matrix possessing complex e-values and e-vectors; note that the e-values are complex conjugates of each other. >>> w, v = LA.eig(np.array([[1, -1], [1, 1]])) >>> w; v array([1.+1.j, 1.-1.j]) array([[0.70710678+0.j , 0.70710678-0.j ], [0. -0.70710678j, 0. +0.70710678j]]) Complex-valued matrix with real e-values (but complex-valued e-vectors); note that a.conj().T == a, i.e., a is Hermitian. >>> a = np.array([[1, 1j], [-1j, 1]]) >>> w, v = LA.eig(a) >>> w; v array([2.+0.j, 0.+0.j]) array([[ 0. +0.70710678j, 0.70710678+0.j ], # may vary [ 0.70710678+0.j , -0. +0.70710678j]]) Be careful about round-off error! >>> a = np.array([[1 + 1e-9, 0], [0, 1 - 1e-9]]) >>> # Theor. e-values are 1 +/- 1e-9 >>> w, v = LA.eig(a) >>> w; v array([1., 1.]) array([[1., 0.], [0., 1.]])
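The diagonalization property described in the Notes can be checked directly; this sketch (assuming distinct eigenvalues, as in the 2-by-2 rotation-like example above) reconstructs a from w and v:

```python
import numpy as np

# Sketch: when the eigenvalues are distinct, a can be diagonalized, i.e.
# a == v @ diag(w) @ inv(v) up to round-off.
a = np.array([[1., -1.], [1., 1.]])  # eigenvalues are 1 +/- 1j
w, v = np.linalg.eig(a)
reconstructed = v @ np.diag(w) @ np.linalg.inv(v)
print(np.allclose(a, reconstructed))
```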
numpy.linalg.eigh linalg.eigh(a, UPLO='L')[source] Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. Returns two objects, a 1-D array containing the eigenvalues of a, and a 2-D square array or matrix (depending on the input type) of the corresponding eigenvectors (in columns). Parameters a(…, M, M) array Hermitian or real symmetric matrices whose eigenvalues and eigenvectors are to be computed. UPLO{‘L’, ‘U’}, optional Specifies whether the calculation is done with the lower triangular part of a (‘L’, default) or the upper triangular part (‘U’). Irrespective of this value only the real parts of the diagonal will be considered in the computation to preserve the notion of a Hermitian matrix. It therefore follows that the imaginary part of the diagonal will always be treated as zero. Returns w(…, M) ndarray The eigenvalues in ascending order, each repeated according to its multiplicity. v{(…, M, M) ndarray, (…, M, M) matrix} The column v[:, i] is the normalized eigenvector corresponding to the eigenvalue w[i]. Will return a matrix object if a is a matrix object. Raises LinAlgError If the eigenvalue computation does not converge. See also eigvalsh eigenvalues of real symmetric or complex Hermitian (conjugate symmetric) arrays. eig eigenvalues and right eigenvectors for non-symmetric arrays. eigvals eigenvalues of non-symmetric arrays. scipy.linalg.eigh Similar function in SciPy (but also solves the generalized eigenvalue problem). Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. The eigenvalues/eigenvectors are computed using LAPACK routines _syevd, _heevd. The eigenvalues of real symmetric or complex Hermitian matrices are always real. [1] The array v of (column) eigenvectors is unitary and a, w, and v satisfy the equations dot(a, v[:, i]) = w[i] * v[:, i]. References 1 G. 
Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 222. Examples >>> from numpy import linalg as LA >>> a = np.array([[1, -2j], [2j, 5]]) >>> a array([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> w, v = LA.eigh(a) >>> w; v array([0.17157288, 5.82842712]) array([[-0.92387953+0.j , -0.38268343+0.j ], # may vary [ 0. +0.38268343j, 0. -0.92387953j]]) >>> np.dot(a, v[:, 0]) - w[0] * v[:, 0] # verify 1st e-val/vec pair array([5.55111512e-17+0.0000000e+00j, 0.00000000e+00+1.2490009e-16j]) >>> np.dot(a, v[:, 1]) - w[1] * v[:, 1] # verify 2nd e-val/vec pair array([0.+0.j, 0.+0.j]) >>> A = np.matrix(a) # what happens if input is a matrix object >>> A matrix([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> w, v = LA.eigh(A) >>> w; v array([0.17157288, 5.82842712]) matrix([[-0.92387953+0.j , -0.38268343+0.j ], # may vary [ 0. +0.38268343j, 0. -0.92387953j]]) >>> # demonstrate the treatment of the imaginary part of the diagonal >>> a = np.array([[5+2j, 9-2j], [0+2j, 2-1j]]) >>> a array([[5.+2.j, 9.-2.j], [0.+2.j, 2.-1.j]]) >>> # with UPLO='L' this is numerically equivalent to using LA.eig() with: >>> b = np.array([[5.+0.j, 0.-2.j], [0.+2.j, 2.-0.j]]) >>> b array([[5.+0.j, 0.-2.j], [0.+2.j, 2.+0.j]]) >>> wa, va = LA.eigh(a) >>> wb, vb = LA.eig(b) >>> wa; wb array([1., 6.]) array([6.+0.j, 1.+0.j]) >>> va; vb array([[-0.4472136 +0.j , -0.89442719+0.j ], # may vary [ 0. +0.89442719j, 0. -0.4472136j ]]) array([[ 0.89442719+0.j , -0. +0.4472136j], [-0. +0.4472136j, 0.89442719+0.j ]])
numpy.linalg.eigvals linalg.eigvals(a)[source] Compute the eigenvalues of a general matrix. Main difference between eigvals and eig: the eigenvectors aren’t returned. Parameters a(…, M, M) array_like A complex- or real-valued matrix whose eigenvalues will be computed. Returns w(…, M,) ndarray The eigenvalues, each repeated according to its multiplicity. They are not necessarily ordered, nor are they necessarily real for real matrices. Raises LinAlgError If the eigenvalue computation does not converge. See also eig eigenvalues and right eigenvectors of general arrays eigvalsh eigenvalues of real symmetric or complex Hermitian (conjugate symmetric) arrays. eigh eigenvalues and eigenvectors of real symmetric or complex Hermitian (conjugate symmetric) arrays. scipy.linalg.eigvals Similar function in SciPy. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. This is implemented using the _geev LAPACK routines which compute the eigenvalues and eigenvectors of general square arrays. Examples Illustration, using the fact that the eigenvalues of a diagonal matrix are its diagonal elements, that multiplying a matrix on the left by an orthogonal matrix, Q, and on the right by Q.T (the transpose of Q), preserves the eigenvalues of the “middle” matrix. In other words, if Q is orthogonal, then Q * A * Q.T has the same eigenvalues as A: >>> from numpy import linalg as LA >>> x = np.random.random() >>> Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]]) >>> LA.norm(Q[0, :]), LA.norm(Q[1, :]), np.dot(Q[0, :],Q[1, :]) (1.0, 1.0, 0.0) Now multiply a diagonal matrix by Q on one side and by Q.T on the other: >>> D = np.diag((-1,1)) >>> LA.eigvals(D) array([-1., 1.]) >>> A = np.dot(Q, D) >>> A = np.dot(A, Q.T) >>> LA.eigvals(A) array([ 1., -1.]) # random
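Since the stated difference from eig is only that the eigenvectors are skipped, the eigenvalues themselves should agree; a minimal sketch confirming that:

```python
import numpy as np

# Sketch: eigvals(a) returns the same spectrum as eig(a)[0],
# just without computing the eigenvectors.
a = np.array([[1., 2.], [3., 4.]])
w_only = np.linalg.eigvals(a)
w_full, _ = np.linalg.eig(a)
print(np.allclose(np.sort(w_only), np.sort(w_full)))
```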
numpy.linalg.eigvalsh linalg.eigvalsh(a, UPLO='L')[source] Compute the eigenvalues of a complex Hermitian or real symmetric matrix. Main difference from eigh: the eigenvectors are not computed. Parameters a(…, M, M) array_like A complex- or real-valued matrix whose eigenvalues are to be computed. UPLO{‘L’, ‘U’}, optional Specifies whether the calculation is done with the lower triangular part of a (‘L’, default) or the upper triangular part (‘U’). Irrespective of this value only the real parts of the diagonal will be considered in the computation to preserve the notion of a Hermitian matrix. It therefore follows that the imaginary part of the diagonal will always be treated as zero. Returns w(…, M,) ndarray The eigenvalues in ascending order, each repeated according to its multiplicity. Raises LinAlgError If the eigenvalue computation does not converge. See also eigh eigenvalues and eigenvectors of real symmetric or complex Hermitian (conjugate symmetric) arrays. eigvals eigenvalues of general real or complex arrays. eig eigenvalues and right eigenvectors of general real or complex arrays. scipy.linalg.eigvalsh Similar function in SciPy. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. The eigenvalues are computed using LAPACK routines _syevd, _heevd. Examples >>> from numpy import linalg as LA >>> a = np.array([[1, -2j], [2j, 5]]) >>> LA.eigvalsh(a) array([ 0.17157288, 5.82842712]) # may vary >>> # demonstrate the treatment of the imaginary part of the diagonal >>> a = np.array([[5+2j, 9-2j], [0+2j, 2-1j]]) >>> a array([[5.+2.j, 9.-2.j], [0.+2.j, 2.-1.j]]) >>> # with UPLO='L' this is numerically equivalent to using LA.eigvals() >>> # with: >>> b = np.array([[5.+0.j, 0.-2.j], [0.+2.j, 2.-0.j]]) >>> b array([[5.+0.j, 0.-2.j], [0.+2.j, 2.+0.j]]) >>> wa = LA.eigvalsh(a) >>> wb = LA.eigvals(b) >>> wa; wb array([1., 6.]) array([6.+0.j, 1.+0.j])
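Analogously, eigvalsh should agree with the eigenvalue part of eigh on a Hermitian (here real symmetric) input; a small sketch:

```python
import numpy as np

# Sketch: eigvalsh(a) matches eigh(a)[0] for a real symmetric matrix,
# returned in ascending order.
a = np.array([[2., 1.], [1., 2.]])  # eigenvalues 1 and 3
w = np.linalg.eigvalsh(a)
w_full, _ = np.linalg.eigh(a)
print(w)
```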
numpy.linalg.inv linalg.inv(a)[source] Compute the (multiplicative) inverse of a matrix. Given a square matrix a, return the matrix ainv satisfying dot(a, ainv) = dot(ainv, a) = eye(a.shape[0]). Parameters a(…, M, M) array_like Matrix to be inverted. Returns ainv(…, M, M) ndarray or matrix (Multiplicative) inverse of the matrix a. Raises LinAlgError If a is not square or inversion fails. See also scipy.linalg.inv Similar function in SciPy. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. Examples >>> from numpy.linalg import inv >>> a = np.array([[1., 2.], [3., 4.]]) >>> ainv = inv(a) >>> np.allclose(np.dot(a, ainv), np.eye(2)) True >>> np.allclose(np.dot(ainv, a), np.eye(2)) True If a is a matrix object, then the return value is a matrix as well: >>> ainv = inv(np.matrix(a)) >>> ainv matrix([[-2. , 1. ], [ 1.5, -0.5]]) Inverses of several matrices can be computed at once: >>> a = np.array([[[1., 2.], [3., 4.]], [[1, 3], [3, 5]]]) >>> inv(a) array([[[-2. , 1. ], [ 1.5 , -0.5 ]], [[-1.25, 0.75], [ 0.75, -0.25]]])
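A common usage note, sketched here rather than stated in the docs above: to solve a linear system a @ x = b it is generally preferable to call np.linalg.solve rather than forming inv(a) explicitly, though both give the same answer on well-conditioned input:

```python
import numpy as np

# Sketch: solving a @ x = b via solve vs. via an explicit inverse.
a = np.array([[3., 1.], [1., 2.]])
b = np.array([9., 8.])
x_inv = np.linalg.inv(a) @ b      # works, but computes the full inverse
x_solve = np.linalg.solve(a, b)   # preferred: one factorization, one solve
print(x_solve)
```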
numpy.linalg.LinAlgError exception linalg.LinAlgError[source] Generic Python-exception-derived object raised by linalg functions. General purpose exception class, derived from Python’s exception.Exception class, programmatically raised in linalg functions when a Linear Algebra-related condition would prevent further correct execution of the function. Parameters None Examples >>> from numpy import linalg as LA >>> LA.inv(np.zeros((2,2))) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "...linalg.py", line 350, in inv return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) File "...linalg.py", line 249, in solve raise LinAlgError('Singular matrix') numpy.linalg.LinAlgError: Singular matrix
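Because LinAlgError derives from Python's Exception, it can be caught like any other exception; one plausible pattern (a sketch, not an official recipe) is falling back to the pseudo-inverse when a matrix turns out to be singular:

```python
import numpy as np

# Sketch: catch LinAlgError and fall back to pinv for singular input.
a = np.zeros((2, 2))
try:
    ainv = np.linalg.inv(a)
except np.linalg.LinAlgError:
    ainv = np.linalg.pinv(a)  # pseudo-inverse of the zero matrix is zero
print(ainv)
```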
numpy.linalg.lstsq linalg.lstsq(a, b, rcond='warn')[source] Return the least-squares solution to a linear matrix equation. Computes the vector x that approximately solves the equation a @ x = b. The equation may be under-, well-, or over-determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation. Else, x minimizes the Euclidean 2-norm \(||b - ax||\). If there are multiple minimizing solutions, the one with the smallest 2-norm \(||x||\) is returned. Parameters a(M, N) array_like “Coefficient” matrix. b{(M,), (M, K)} array_like Ordinate or “dependent variable” values. If b is two-dimensional, the least-squares solution is calculated for each of the K columns of b. rcondfloat, optional Cut-off ratio for small singular values of a. For the purposes of rank determination, singular values are treated as zero if they are smaller than rcond times the largest singular value of a. Changed in version 1.14.0: If not set, a FutureWarning is given. The previous default of -1 will use the machine precision as rcond parameter, the new default will use the machine precision times max(M, N). To silence the warning and use the new default, use rcond=None, to keep using the old behavior, use rcond=-1. Returns x{(N,), (N, K)} ndarray Least-squares solution. If b is two-dimensional, the solutions are in the K columns of x. residuals{(1,), (K,), (0,)} ndarray Sums of squared residuals: Squared Euclidean 2-norm for each column in b - a @ x. If the rank of a is < N or M <= N, this is an empty array. If b is 1-dimensional, this is a (1,) shape array. Otherwise the shape is (K,). rankint Rank of matrix a. s(min(M, N),) ndarray Singular values of a. Raises LinAlgError If computation does not converge. See also scipy.linalg.lstsq Similar function in SciPy. 
Notes If b is a matrix, then all array results are returned as matrices. Examples Fit a line, y = mx + c, through some noisy data-points: >>> x = np.array([0, 1, 2, 3]) >>> y = np.array([-1, 0.2, 0.9, 2.1]) By examining the coefficients, we see that the line should have a gradient of roughly 1 and cut the y-axis at, more or less, -1. We can rewrite the line equation as y = Ap, where A = [[x 1]] and p = [[m], [c]]. Now use lstsq to solve for p: >>> A = np.vstack([x, np.ones(len(x))]).T >>> A array([[ 0., 1.], [ 1., 1.], [ 2., 1.], [ 3., 1.]]) >>> m, c = np.linalg.lstsq(A, y, rcond=None)[0] >>> m, c (1.0, -0.95) # may vary Plot the data along with the fitted line: >>> import matplotlib.pyplot as plt >>> _ = plt.plot(x, y, 'o', label='Original data', markersize=10) >>> _ = plt.plot(x, m*x + c, 'r', label='Fitted line') >>> _ = plt.legend() >>> plt.show()
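The line-fitting example above can be cross-checked against the pseudo-inverse: for a full-rank over-determined system, lstsq's minimum-norm least-squares solution equals pinv(A) @ y. A sketch of that check:

```python
import numpy as np

# Sketch: the lstsq solution to the line fit equals pinv(A) @ y.
x = np.array([0., 1., 2., 3.])
y = np.array([-1., 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T

sol, residuals, rank, s = np.linalg.lstsq(A, y, rcond=None)
sol_pinv = np.linalg.pinv(A) @ y
print(sol)  # roughly (1.0, -0.95)
```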
numpy.linalg.matrix_power linalg.matrix_power(a, n)[source] Raise a square matrix to the (integer) power n. For positive integers n, the power is computed by repeated matrix squarings and matrix multiplications. If n == 0, the identity matrix of the same shape as M is returned. If n < 0, the inverse is computed and then raised to the abs(n). Note Stacks of object matrices are not currently supported. Parameters a(…, M, M) array_like Matrix to be “powered”. nint The exponent can be any integer or long integer, positive, negative, or zero. Returns a**n(…, M, M) ndarray or matrix object The return value is the same shape and type as M; if the exponent is positive or zero then the type of the elements is the same as those of M. If the exponent is negative the elements are floating-point. Raises LinAlgError For matrices that are not square or that (for negative powers) cannot be inverted numerically. Examples >>> from numpy.linalg import matrix_power >>> i = np.array([[0, 1], [-1, 0]]) # matrix equiv. of the imaginary unit >>> matrix_power(i, 3) # should = -i array([[ 0, -1], [ 1, 0]]) >>> matrix_power(i, 0) array([[1, 0], [0, 1]]) >>> matrix_power(i, -3) # should = 1/(-i) = i, but w/ f.p. elements array([[ 0., 1.], [-1., 0.]]) Somewhat more sophisticated example >>> q = np.zeros((4, 4)) >>> q[0:2, 0:2] = -i >>> q[2:4, 2:4] = i >>> q # one of the three quaternion units not equal to 1 array([[ 0., -1., 0., 0.], [ 1., 0., 0., 0.], [ 0., 0., 0., 1.], [ 0., 0., -1., 0.]]) >>> matrix_power(q, 2) # = -np.eye(4) array([[-1., 0., 0., 0.], [ 0., -1., 0., 0.], [ 0., 0., -1., 0.], [ 0., 0., 0., -1.]])
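The semantics described above (repeated multiplication for positive n, inverse first for negative n) can be checked directly; a short sketch:

```python
import numpy as np

# Sketch: matrix_power(a, 3) == a @ a @ a, and a negative exponent is
# the inverse raised to abs(n) (note the float result for n < 0).
a = np.array([[0, 1], [-1, 0]])  # matrix form of the imaginary unit
p3 = np.linalg.matrix_power(a, 3)
manual = a @ a @ a
pm1 = np.linalg.matrix_power(a, -1)  # inverse: floating-point elements
print(p3)
```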
numpy.linalg.matrix_rank linalg.matrix_rank(A, tol=None, hermitian=False)[source] Return matrix rank of array using SVD method. Rank of the array is the number of singular values of the array that are greater than tol. Changed in version 1.14: Can now operate on stacks of matrices Parameters A{(M,), (…, M, N)} array_like Input vector or stack of matrices. tol(…) array_like, float, optional Threshold below which SVD values are considered zero. If tol is None, and S is an array with singular values for M, and eps is the epsilon value for datatype of S, then tol is set to S.max() * max(M, N) * eps. Changed in version 1.14: Broadcasted against the stack of matrices hermitianbool, optional If True, A is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. New in version 1.14. Returns rank(…) array_like Rank of A. Notes The default threshold to detect rank deficiency is a test on the magnitude of the singular values of A. By default, we identify singular values less than S.max() * max(M, N) * eps as indicating rank deficiency (with the symbols defined above). This is the algorithm MATLAB uses [1]. It also appears in Numerical recipes in the discussion of SVD solutions for linear least squares [2]. This default threshold is designed to detect rank deficiency accounting for the numerical errors of the SVD computation. Imagine that there is a column in A that is an exact (in floating point) linear combination of other columns in A. Computing the SVD on A will not produce a singular value exactly equal to 0 in general: any difference of the smallest SVD value from 0 will be caused by numerical imprecision in the calculation of the SVD. Our threshold for small SVD values takes this numerical imprecision into account, and the default threshold will detect such numerical rank deficiency. 
The threshold may declare a matrix A rank deficient even if the linear combination of some columns of A is not exactly equal to another column of A but only numerically very close to another column of A. We chose our default threshold because it is in wide use. Other thresholds are possible. For example, elsewhere in the 2007 edition of Numerical recipes there is an alternative threshold of S.max() * np.finfo(A.dtype).eps / 2. * np.sqrt(m + n + 1.). The authors describe this threshold as being based on “expected roundoff error” (p 71). The thresholds above deal with floating point roundoff error in the calculation of the SVD. However, you may have more information about the sources of error in A that would make you consider other tolerance values to detect effective rank deficiency. The most useful measure of the tolerance depends on the operations you intend to use on your matrix. For example, if your data come from uncertain measurements with uncertainties greater than floating point epsilon, choosing a tolerance near that uncertainty may be preferable. The tolerance may be absolute if the uncertainties are absolute rather than relative. References 1 MATLAB reference documentation, “Rank” https://www.mathworks.com/help/techdoc/ref/rank.html 2 W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, “Numerical Recipes (3rd edition)”, Cambridge University Press, 2007, page 795. Examples >>> from numpy.linalg import matrix_rank >>> matrix_rank(np.eye(4)) # Full rank matrix 4 >>> I=np.eye(4); I[-1,-1] = 0. # rank deficient matrix >>> matrix_rank(I) 3 >>> matrix_rank(np.ones((4,))) # 1 dimension - rank 1 unless all 0 1 >>> matrix_rank(np.zeros((4,))) 0
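The effect of tol discussed above can be demonstrated concretely: a tiny singular value survives the default threshold but is zeroed out by a looser tolerance. A sketch:

```python
import numpy as np

# Sketch: raising tol above a small singular value lowers the reported rank.
nearly_singular = np.eye(4)
nearly_singular[-1, -1] = 1e-10       # smallest singular value is 1e-10

r_default = np.linalg.matrix_rank(nearly_singular)            # default tol ~ 4*eps
r_loose = np.linalg.matrix_rank(nearly_singular, tol=1e-8)    # 1e-10 < tol
print(r_default, r_loose)
```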
numpy.linalg.multi_dot linalg.multi_dot(arrays, *, out=None)[source] Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order. multi_dot chains numpy.dot and uses optimal parenthesization of the matrices [1] [2]. Depending on the shapes of the matrices, this can speed up the multiplication a lot. If the first argument is 1-D it is treated as a row vector. If the last argument is 1-D it is treated as a column vector. The other arguments must be 2-D. Think of multi_dot as: def multi_dot(arrays): return functools.reduce(np.dot, arrays) Parameters arrayssequence of array_like If the first argument is 1-D it is treated as row vector. If the last argument is 1-D it is treated as column vector. The other arguments must be 2-D. outndarray, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for dot(a, b). This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. New in version 1.19.0. Returns outputndarray Returns the dot product of the supplied arrays. See also numpy.dot dot multiplication with two arguments. Notes The cost for a matrix multiplication can be calculated with the following function: def cost(A, B): return A.shape[0] * A.shape[1] * B.shape[1] Assume we have three matrices \(A_{10x100}, B_{100x5}, C_{5x50}\). The costs for the two different parenthesizations are as follows: cost((AB)C) = 10*100*5 + 10*5*50 = 5000 + 2500 = 7500 cost(A(BC)) = 10*100*50 + 100*5*50 = 50000 + 25000 = 75000 References 1 Cormen, “Introduction to Algorithms”, Chapter 15.2, p. 
370-378 2 https://en.wikipedia.org/wiki/Matrix_chain_multiplication Examples multi_dot allows you to write: >>> from numpy.linalg import multi_dot >>> # Prepare some data >>> A = np.random.random((10000, 100)) >>> B = np.random.random((100, 1000)) >>> C = np.random.random((1000, 5)) >>> D = np.random.random((5, 333)) >>> # the actual dot multiplication >>> _ = multi_dot([A, B, C, D]) instead of: >>> _ = np.dot(np.dot(np.dot(A, B), C), D) >>> # or >>> _ = A.dot(B).dot(C).dot(D)
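Since multi_dot only changes the evaluation order, its result must match a plain chained np.dot; a sketch verifying that on shapes similar to the cost example in the Notes:

```python
import numpy as np

# Sketch: multi_dot and chained np.dot agree; only the parenthesization
# (and hence the cost) differs.
rng = np.random.default_rng(0)
A = rng.random((10, 100))
B = rng.random((100, 5))
C = rng.random((5, 50))

via_multi = np.linalg.multi_dot([A, B, C])
via_chain = np.dot(np.dot(A, B), C)
print(np.allclose(via_multi, via_chain))
```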
numpy.linalg.norm linalg.norm(x, ord=None, axis=None, keepdims=False)[source] Matrix or vector norm. This function is able to return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the ord parameter. Parameters xarray_like Input array. If axis is None, x must be 1-D or 2-D, unless ord is None. If both axis and ord are None, the 2-norm of x.ravel will be returned. ord{non-zero int, inf, -inf, ‘fro’, ‘nuc’}, optional Order of the norm (see table under Notes). inf means numpy’s inf object. The default is None. axis{None, int, 2-tuple of ints}, optional. If axis is an integer, it specifies the axis of x along which to compute the vector norms. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed. If axis is None then either a vector norm (when x is 1-D) or a matrix norm (when x is 2-D) is returned. The default is None. New in version 1.8.0. keepdimsbool, optional If this is set to True, the axes which are normed over are left in the result as dimensions with size one. With this option the result will broadcast correctly against the original x. New in version 1.10.0. Returns nfloat or ndarray Norm of the matrix or vector(s). See also scipy.linalg.norm Similar function in SciPy. Notes For values of ord < 1, the result is, strictly speaking, not a mathematical ‘norm’, but it may still be useful for various numerical purposes. The following norms can be calculated: ord norm for matrices norm for vectors None Frobenius norm 2-norm ‘fro’ Frobenius norm – ‘nuc’ nuclear norm – inf max(sum(abs(x), axis=1)) max(abs(x)) -inf min(sum(abs(x), axis=1)) min(abs(x)) 0 – sum(x != 0) 1 max(sum(abs(x), axis=0)) as below -1 min(sum(abs(x), axis=0)) as below 2 2-norm (largest sing. 
value) as below -2 smallest singular value as below other – sum(abs(x)**ord)**(1./ord) The Frobenius norm is given by [1]: \(||A||_F = [\sum_{i,j} abs(a_{i,j})^2]^{1/2}\) The nuclear norm is the sum of the singular values. Both the Frobenius and nuclear norm orders are only defined for matrices and raise a ValueError when x.ndim != 2. References 1 G. H. Golub and C. F. Van Loan, Matrix Computations, Baltimore, MD, Johns Hopkins University Press, 1985, pg. 15 Examples >>> from numpy import linalg as LA >>> a = np.arange(9) - 4 >>> a array([-4, -3, -2, ..., 2, 3, 4]) >>> b = a.reshape((3, 3)) >>> b array([[-4, -3, -2], [-1, 0, 1], [ 2, 3, 4]]) >>> LA.norm(a) 7.745966692414834 >>> LA.norm(b) 7.745966692414834 >>> LA.norm(b, 'fro') 7.745966692414834 >>> LA.norm(a, np.inf) 4.0 >>> LA.norm(b, np.inf) 9.0 >>> LA.norm(a, -np.inf) 0.0 >>> LA.norm(b, -np.inf) 2.0 >>> LA.norm(a, 1) 20.0 >>> LA.norm(b, 1) 7.0 >>> LA.norm(a, -1) -4.6566128774142013e-010 >>> LA.norm(b, -1) 6.0 >>> LA.norm(a, 2) 7.745966692414834 >>> LA.norm(b, 2) 7.3484692283495345 >>> LA.norm(a, -2) 0.0 >>> LA.norm(b, -2) 1.8570331885190563e-016 # may vary >>> LA.norm(a, 3) 5.8480354764257312 # may vary >>> LA.norm(a, -3) 0.0 Using the axis argument to compute vector norms: >>> c = np.array([[ 1, 2, 3], ... [-1, 1, 4]]) >>> LA.norm(c, axis=0) array([ 1.41421356, 2.23606798, 5. ]) >>> LA.norm(c, axis=1) array([ 3.74165739, 4.24264069]) >>> LA.norm(c, ord=1, axis=1) array([ 6., 6.]) Using the axis argument to compute matrix norms: >>> m = np.arange(8).reshape(2,2,2) >>> LA.norm(m, axis=(1,2)) array([ 3.74165739, 11.22497216]) >>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :]) (3.7416573867739413, 11.224972160321824)
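The Frobenius formula quoted in the Notes can be verified by hand; this sketch recomputes the norm of the 3-by-3 example matrix from its definition:

```python
import numpy as np

# Sketch: Frobenius norm == sqrt(sum of squared absolute entries),
# matching the formula ||A||_F in the Notes.
b = (np.arange(9) - 4).reshape(3, 3)
fro = np.linalg.norm(b, 'fro')
manual = np.sqrt((np.abs(b) ** 2).sum())
print(fro)
```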
numpy.linalg.pinv linalg.pinv(a, rcond=1e-15, hermitian=False)[source] Compute the (Moore-Penrose) pseudo-inverse of a matrix. Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all large singular values. Changed in version 1.14: Can now operate on stacks of matrices Parameters a(…, M, N) array_like Matrix or stack of matrices to be pseudo-inverted. rcond(…) array_like of float Cutoff for small singular values. Singular values less than or equal to rcond * largest_singular_value are set to zero. Broadcasts against the stack of matrices. hermitianbool, optional If True, a is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. New in version 1.17.0. Returns B(…, N, M) ndarray The pseudo-inverse of a. If a is a matrix instance, then so is B. Raises LinAlgError If the SVD computation does not converge. See also scipy.linalg.pinv Similar function in SciPy. scipy.linalg.pinv2 Similar function in SciPy (SVD-based). scipy.linalg.pinvh Compute the (Moore-Penrose) pseudo-inverse of a Hermitian matrix. Notes The pseudo-inverse of a matrix A, denoted \(A^+\), is defined as: “the matrix that ‘solves’ [the least-squares problem] \(Ax = b\),” i.e., if \(\bar{x}\) is said solution, then \(A^+\) is that matrix such that \(\bar{x} = A^+b\). It can be shown that if \(Q_1 \Sigma Q_2^T = A\) is the singular value decomposition of A, then \(A^+ = Q_2 \Sigma^+ Q_1^T\), where \(Q_{1,2}\) are orthogonal matrices, \(\Sigma\) is a diagonal matrix consisting of A’s so-called singular values, (followed, typically, by zeros), and then \(\Sigma^+\) is simply the diagonal matrix consisting of the reciprocals of A’s singular values (again, followed by zeros). [1] References 1 G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pp. 139-142. 
Examples The following example checks that a * a+ * a == a and a+ * a * a+ == a+: >>> a = np.random.randn(9, 6) >>> B = np.linalg.pinv(a) >>> np.allclose(a, np.dot(a, np.dot(B, a))) True >>> np.allclose(B, np.dot(B, np.dot(a, B))) True
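One further property worth noting (a sketch, not from the docs above): when a is square and invertible, the pseudo-inverse reduces to the ordinary inverse, since no singular values are cut off.

```python
import numpy as np

# Sketch: pinv coincides with inv on an invertible square matrix.
a = np.array([[1., 2.], [3., 4.]])
p = np.linalg.pinv(a)
print(np.allclose(p, np.linalg.inv(a)))
```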
numpy.linalg.qr linalg.qr(a, mode='reduced')[source] Compute the qr factorization of a matrix. Factor the matrix a as qr, where q is orthonormal and r is upper-triangular. Parameters aarray_like, shape (…, M, N) An array-like object with the dimensionality of at least 2. mode{‘reduced’, ‘complete’, ‘r’, ‘raw’}, optional If K = min(M, N), then ‘reduced’ : returns q, r with dimensions (…, M, K), (…, K, N) (default) ‘complete’ : returns q, r with dimensions (…, M, M), (…, M, N) ‘r’ : returns r only with dimensions (…, K, N) ‘raw’ : returns h, tau with dimensions (…, N, M), (…, K,) The options ‘reduced’, ‘complete’, and ‘raw’ are new in numpy 1.8, see the notes for more information. The default is ‘reduced’, and to maintain backward compatibility with earlier versions of numpy both it and the old default ‘full’ can be omitted. Note that array h returned in ‘raw’ mode is transposed for calling Fortran. The ‘economic’ mode is deprecated. The modes ‘full’ and ‘economic’ may be passed using only the first letter for backwards compatibility, but all others must be spelled out. See the Notes for more explanation. Returns qndarray of float or complex, optional A matrix with orthonormal columns. When mode = ‘complete’ the result is an orthogonal/unitary matrix depending on whether or not a is real/complex. The determinant may be either +/- 1 in that case. In case the number of dimensions in the input array is greater than 2 then a stack of the matrices with above properties is returned. rndarray of float or complex, optional The upper-triangular matrix or a stack of upper-triangular matrices if the number of dimensions in the input array is greater than 2. (h, tau)ndarrays of np.double or np.cdouble, optional The array h contains the Householder reflectors that generate q along with r. The tau array contains scaling factors for the reflectors. In the deprecated ‘economic’ mode only h is returned. Raises LinAlgError If factoring fails. 
See also scipy.linalg.qr Similar function in SciPy. scipy.linalg.rq Compute RQ decomposition of a matrix. Notes This is an interface to the LAPACK routines dgeqrf, zgeqrf, dorgqr, and zungqr. For more information on the qr factorization, see for example: https://en.wikipedia.org/wiki/QR_factorization Subclasses of ndarray are preserved except for the ‘raw’ mode. So if a is of type matrix, all the return values will be matrices too. New ‘reduced’, ‘complete’, and ‘raw’ options for mode were added in NumPy 1.8.0 and the old option ‘full’ was made an alias of ‘reduced’. In addition the options ‘full’ and ‘economic’ were deprecated. Because ‘full’ was the previous default and ‘reduced’ is the new default, backward compatibility can be maintained by letting mode default. The ‘raw’ option was added so that LAPACK routines that can multiply arrays by q using the Householder reflectors can be used. Note that in this case the returned arrays are of type np.double or np.cdouble and the h array is transposed to be FORTRAN compatible. No routines using the ‘raw’ return are currently exposed by numpy, but some are available in lapack_lite and just await the necessary work. Examples >>> a = np.random.randn(9, 6) >>> q, r = np.linalg.qr(a) >>> np.allclose(a, np.dot(q, r)) # a does equal qr True >>> r2 = np.linalg.qr(a, mode='r') >>> np.allclose(r, r2) # mode='r' returns the same r as mode='full' True >>> a = np.random.normal(size=(3, 2, 2)) # Stack of 2 x 2 matrices as input >>> q, r = np.linalg.qr(a) >>> q.shape (3, 2, 2) >>> r.shape (3, 2, 2) >>> np.allclose(a, np.matmul(q, r)) True Example illustrating a common use of qr: solving of least squares problems What are the least-squares-best m and y0 in y = y0 + mx for the following data: {(0,1), (1,0), (1,2), (2,1)}. (Graph the points and you’ll see that it should be y0 = 0, m = 1.) 
The answer is provided by solving the over-determined matrix equation Ax = b, where: A = array([[0, 1], [1, 1], [1, 1], [2, 1]]) x = array([[y0], [m]]) b = array([[1], [0], [2], [1]]) If A = qr such that q is orthonormal (which is always possible via Gram-Schmidt), then x = inv(r) * (q.T) * b. (In numpy practice, however, we simply use lstsq.) >>> A = np.array([[0, 1], [1, 1], [1, 1], [2, 1]]) >>> A array([[0, 1], [1, 1], [1, 1], [2, 1]]) >>> b = np.array([1, 0, 2, 1]) >>> q, r = np.linalg.qr(A) >>> p = np.dot(q.T, b) >>> np.dot(np.linalg.inv(r), p) array([ 1.1e-16, 1.0e+00])
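As a supplementary sketch (added here, not part of the original reference), the ‘complete’ mode can be checked directly: q is square and unitary, and either the full or the reduced factors reconstruct a.

```python
import numpy as np

# Sketch: properties of mode='complete' (q square and unitary).
rng = np.random.default_rng(0)
a = rng.standard_normal((5, 3))

q, r = np.linalg.qr(a, mode='complete')
assert q.shape == (5, 5) and r.shape == (5, 3)

# q is orthogonal: q.T @ q is the identity.
assert np.allclose(q.T @ q, np.eye(5))

# The full product reconstructs a; the trailing rows of r are zero,
# so the reduced factors q[:, :3], r[:3] reconstruct it too.
assert np.allclose(a, q @ r)
assert np.allclose(a, q[:, :3] @ r[:3])
```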
numpy.reference.generated.numpy.linalg.qr
numpy.linalg.slogdet linalg.slogdet(a)[source] Compute the sign and (natural) logarithm of the determinant of an array. If an array has a very small or very large determinant, then a call to det may overflow or underflow. This routine is more robust against such issues, because it computes the logarithm of the determinant rather than the determinant itself. Parameters a(…, M, M) array_like Input array, has to be a square 2-D array. Returns sign(…) array_like A number representing the sign of the determinant. For a real matrix, this is 1, 0, or -1. For a complex matrix, this is a complex number with absolute value 1 (i.e., it is on the unit circle), or else 0. logdet(…) array_like The natural log of the absolute value of the determinant. If the determinant is zero, then sign will be 0 and logdet will be -Inf. In all cases, the determinant is equal to sign * np.exp(logdet). See also det Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. New in version 1.6.0. The determinant is computed via LU factorization using the LAPACK routine z/dgetrf. Examples The determinant of a 2-D array [[a, b], [c, d]] is ad - bc: >>> a = np.array([[1, 2], [3, 4]]) >>> (sign, logdet) = np.linalg.slogdet(a) >>> (sign, logdet) (-1, 0.69314718055994529) # may vary >>> sign * np.exp(logdet) -2.0 Computing log-determinants for a stack of matrices: >>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ]) >>> a.shape (3, 2, 2) >>> sign, logdet = np.linalg.slogdet(a) >>> (sign, logdet) (array([-1., -1., -1.]), array([ 0.69314718, 1.09861229, 2.07944154])) >>> sign * np.exp(logdet) array([-2., -3., -8.]) This routine succeeds where ordinary det does not: >>> np.linalg.det(np.eye(500) * 0.1) 0.0 >>> np.linalg.slogdet(np.eye(500) * 0.1) (1, -1151.2925464970228)
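A supplementary sketch (not in the original reference) of the underflow motivation: the plain determinant underflows to zero in double precision, while slogdet keeps the sign and the exact log-magnitude.

```python
import numpy as np

# Sketch: det underflows to 0.0, slogdet does not.
a = np.eye(200) * 0.01          # true determinant is 0.01**200, ~1e-400

sign, logdet = np.linalg.slogdet(a)

assert np.linalg.det(a) == 0.0                   # underflow in double precision
assert sign == 1.0
assert np.isclose(logdet, 200 * np.log(0.01))    # log-magnitude survives
```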
numpy.reference.generated.numpy.linalg.slogdet
numpy.linalg.solve linalg.solve(a, b)[source] Solve a linear matrix equation, or system of linear scalar equations. Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b. Parameters a(…, M, M) array_like Coefficient matrix. b{(…, M,), (…, M, K)}, array_like Ordinate or “dependent variable” values. Returns x{(…, M,), (…, M, K)} ndarray Solution to the system a x = b. Returned shape is identical to b. Raises LinAlgError If a is singular or not square. See also scipy.linalg.solve Similar function in SciPy. Notes New in version 1.8.0. Broadcasting rules apply, see the numpy.linalg documentation for details. The solutions are computed using LAPACK routine _gesv. a must be square and of full-rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either is not true, use lstsq for the least-squares best “solution” of the system/equation. References 1 G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 22. Examples Solve the system of equations x0 + 2 * x1 = 1 and 3 * x0 + 5 * x1 = 2: >>> a = np.array([[1, 2], [3, 5]]) >>> b = np.array([1, 2]) >>> x = np.linalg.solve(a, b) >>> x array([-1., 1.]) Check that the solution is correct: >>> np.allclose(np.dot(a, x), b) True
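The broadcasting behaviour mentioned in the Notes can be sketched as follows (example added here, not from the original reference): a stack of systems is solved in one call.

```python
import numpy as np

# Sketch: solving a stack of two 2 x 2 systems at once via broadcasting.
a = np.array([[[1., 2.], [3., 5.]],     # the system from the example above
              [[2., 0.], [0., 4.]]])    # a diagonal system
b = np.array([[1., 2.],
              [2., 8.]])                # one right-hand side per matrix

x = np.linalg.solve(a, b)
assert x.shape == (2, 2)

# Each solution satisfies its own system.
for i in range(2):
    assert np.allclose(a[i] @ x[i], b[i])
```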
numpy.reference.generated.numpy.linalg.solve
numpy.linalg.svd linalg.svd(a, full_matrices=True, compute_uv=True, hermitian=False)[source] Singular Value Decomposition. When a is a 2D array, it is factorized as u @ np.diag(s) @ vh = (u * s) @ vh, where u and vh are 2D unitary arrays and s is a 1D array of a’s singular values. When a is higher-dimensional, SVD is applied in stacked mode as explained below. Parameters a(…, M, N) array_like A real or complex array with a.ndim >= 2. full_matricesbool, optional If True (default), u and vh have the shapes (..., M, M) and (..., N, N), respectively. Otherwise, the shapes are (..., M, K) and (..., K, N), respectively, where K = min(M, N). compute_uvbool, optional Whether or not to compute u and vh in addition to s. True by default. hermitianbool, optional If True, a is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. New in version 1.17.0. Returns u{ (…, M, M), (…, M, K) } array Unitary array(s). The first a.ndim - 2 dimensions have the same size as those of the input a. The size of the last two dimensions depends on the value of full_matrices. Only returned when compute_uv is True. s(…, K) array Vector(s) with the singular values, within each vector sorted in descending order. The first a.ndim - 2 dimensions have the same size as those of the input a. vh{ (…, N, N), (…, K, N) } array Unitary array(s). The first a.ndim - 2 dimensions have the same size as those of the input a. The size of the last two dimensions depends on the value of full_matrices. Only returned when compute_uv is True. Raises LinAlgError If SVD computation does not converge. See also scipy.linalg.svd Similar function in SciPy. scipy.linalg.svdvals Compute singular values of a matrix. Notes Changed in version 1.8.0: Broadcasting rules apply, see the numpy.linalg documentation for details. The decomposition is performed using LAPACK routine _gesdd. SVD is usually described for the factorization of a 2D matrix \(A\). 
The higher-dimensional case will be discussed below. In the 2D case, SVD is written as \(A = U S V^H\), where \(A = a\), \(U= u\), \(S= \mathtt{np.diag}(s)\) and \(V^H = vh\). The 1D array s contains the singular values of a and u and vh are unitary. The rows of vh are the eigenvectors of \(A^H A\) and the columns of u are the eigenvectors of \(A A^H\). In both cases the corresponding (possibly non-zero) eigenvalues are given by s**2. If a has more than two dimensions, then broadcasting rules apply, as explained in Linear algebra on several matrices at once. This means that SVD is working in “stacked” mode: it iterates over all indices of the first a.ndim - 2 dimensions and for each combination SVD is applied to the last two indices. The matrix a can be reconstructed from the decomposition with either (u * s[..., None, :]) @ vh or u @ (s[..., None] * vh). (The @ operator can be replaced by the function np.matmul for python versions below 3.5.) If a is a matrix object (as opposed to an ndarray), then so are all the return values. 
Examples >>> a = np.random.randn(9, 6) + 1j*np.random.randn(9, 6) >>> b = np.random.randn(2, 7, 8, 3) + 1j*np.random.randn(2, 7, 8, 3) Reconstruction based on full SVD, 2D case: >>> u, s, vh = np.linalg.svd(a, full_matrices=True) >>> u.shape, s.shape, vh.shape ((9, 9), (6,), (6, 6)) >>> np.allclose(a, np.dot(u[:, :6] * s, vh)) True >>> smat = np.zeros((9, 6), dtype=complex) >>> smat[:6, :6] = np.diag(s) >>> np.allclose(a, np.dot(u, np.dot(smat, vh))) True Reconstruction based on reduced SVD, 2D case: >>> u, s, vh = np.linalg.svd(a, full_matrices=False) >>> u.shape, s.shape, vh.shape ((9, 6), (6,), (6, 6)) >>> np.allclose(a, np.dot(u * s, vh)) True >>> smat = np.diag(s) >>> np.allclose(a, np.dot(u, np.dot(smat, vh))) True Reconstruction based on full SVD, 4D case: >>> u, s, vh = np.linalg.svd(b, full_matrices=True) >>> u.shape, s.shape, vh.shape ((2, 7, 8, 8), (2, 7, 3), (2, 7, 3, 3)) >>> np.allclose(b, np.matmul(u[..., :3] * s[..., None, :], vh)) True >>> np.allclose(b, np.matmul(u[..., :3], s[..., None] * vh)) True Reconstruction based on reduced SVD, 4D case: >>> u, s, vh = np.linalg.svd(b, full_matrices=False) >>> u.shape, s.shape, vh.shape ((2, 7, 8, 3), (2, 7, 3), (2, 7, 3, 3)) >>> np.allclose(b, np.matmul(u * s[..., None, :], vh)) True >>> np.allclose(b, np.matmul(u, s[..., None] * vh)) True
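A common application worth sketching (example added here, not from the original reference): the best rank-k approximation obtained by truncating the SVD, whose spectral-norm error equals the first discarded singular value (Eckart-Young).

```python
import numpy as np

# Sketch: rank-k approximation from a truncated (reduced) SVD.
rng = np.random.default_rng(1)
a = rng.standard_normal((6, 4))

u, s, vh = np.linalg.svd(a, full_matrices=False)
k = 2
a_k = (u[:, :k] * s[:k]) @ vh[:k, :]    # keep the k largest singular values

# Spectral-norm error of the truncation is the (k+1)-th singular value.
err = np.linalg.norm(a - a_k, ord=2)
assert np.isclose(err, s[k])
```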
numpy.reference.generated.numpy.linalg.svd
numpy.linalg.tensorinv linalg.tensorinv(a, ind=2)[source] Compute the ‘inverse’ of an N-dimensional array. The result is an inverse for a relative to the tensordot operation tensordot(a, b, ind), i. e., up to floating-point accuracy, tensordot(tensorinv(a), a, ind) is the “identity” tensor for the tensordot operation. Parameters aarray_like Tensor to ‘invert’. Its shape must be ‘square’, i. e., prod(a.shape[:ind]) == prod(a.shape[ind:]). indint, optional Number of first indices that are involved in the inverse sum. Must be a positive integer, default is 2. Returns bndarray a’s tensordot inverse, shape a.shape[ind:] + a.shape[:ind]. Raises LinAlgError If a is singular or not ‘square’ (in the above sense). See also numpy.tensordot, tensorsolve Examples >>> a = np.eye(4*6) >>> a.shape = (4, 6, 8, 3) >>> ainv = np.linalg.tensorinv(a, ind=2) >>> ainv.shape (8, 3, 4, 6) >>> b = np.random.randn(4, 6) >>> np.allclose(np.tensordot(ainv, b), np.linalg.tensorsolve(a, b)) True >>> a = np.eye(4*6) >>> a.shape = (24, 8, 3) >>> ainv = np.linalg.tensorinv(a, ind=1) >>> ainv.shape (8, 3, 24) >>> b = np.random.randn(24) >>> np.allclose(np.tensordot(ainv, b, 1), np.linalg.tensorsolve(a, b)) True
numpy.reference.generated.numpy.linalg.tensorinv
numpy.linalg.tensorsolve linalg.tensorsolve(a, b, axes=None)[source] Solve the tensor equation a x = b for x. It is assumed that all indices of x are summed over in the product, together with the rightmost indices of a, as is done in, for example, tensordot(a, x, axes=b.ndim). Parameters aarray_like Coefficient tensor, of shape b.shape + Q. Q, a tuple, equals the shape of that sub-tensor of a consisting of the appropriate number of its rightmost indices, and must be such that prod(Q) == prod(b.shape) (in which sense a is said to be ‘square’). barray_like Right-hand tensor, which can be of any shape. axestuple of ints, optional Axes in a to reorder to the right, before inversion. If None (default), no reordering is done. Returns xndarray, shape Q Raises LinAlgError If a is singular or not ‘square’ (in the above sense). See also numpy.tensordot, tensorinv, numpy.einsum Examples >>> a = np.eye(2*3*4) >>> a.shape = (2*3, 4, 2, 3, 4) >>> b = np.random.randn(2*3, 4) >>> x = np.linalg.tensorsolve(a, b) >>> x.shape (2, 3, 4) >>> np.allclose(np.tensordot(a, x, axes=3), b) True
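The ‘square’ condition described above can be made explicit (sketch added here, not from the original reference): tensorsolve is equivalent to flattening a into a square matrix, calling solve, and reshaping the result back.

```python
import numpy as np

# Sketch: tensorsolve vs. an explicit reshape-and-solve.
rng = np.random.default_rng(2)
a = rng.standard_normal((2, 3, 2, 3))   # b.shape + Q with prod(Q) == prod(b.shape)
b = rng.standard_normal((2, 3))

x = np.linalg.tensorsolve(a, b)
assert x.shape == (2, 3)

# Equivalent flattened formulation.
x_flat = np.linalg.solve(a.reshape(6, 6), b.ravel()).reshape(2, 3)
assert np.allclose(x, x_flat)
```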
numpy.reference.generated.numpy.linalg.tensorsolve
numpy.ma.all ma.all(self, axis=None, out=None, keepdims=<no value>) = <numpy.ma.core._frommethod object> Returns True if all elements evaluate to True. The output array is masked where all the values along the given axis are masked: if the output would have been a scalar and all the values are masked, then the output is masked. Refer to numpy.all for full documentation. See also numpy.ndarray.all corresponding function for ndarrays numpy.all equivalent function Examples >>> np.ma.array([1,2,3]).all() True >>> a = np.ma.array([1,2,3], mask=True) >>> (a.all() is np.ma.masked) True
numpy.reference.generated.numpy.ma.all
numpy.ma.allclose ma.allclose(a, b, masked_equal=True, rtol=1e-05, atol=1e-08)[source] Returns True if two arrays are element-wise equal within a tolerance. This function is equivalent to allclose except that masked values are treated as equal (default) or unequal, depending on the masked_equal argument. Parameters a, barray_like Input arrays to compare. masked_equalbool, optional Whether masked values in a and b are considered equal (True) or not (False). They are considered equal by default. rtolfloat, optional Relative tolerance. The relative difference is equal to rtol * b. Default is 1e-5. atolfloat, optional Absolute tolerance. The absolute difference is equal to atol. Default is 1e-8. Returns ybool Returns True if the two arrays are equal within the given tolerance, False otherwise. If either array contains NaN, then False is returned. See also all, any numpy.allclose the non-masked allclose. Notes If the following equation is element-wise True, then allclose returns True: absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`)) Return True if all elements of a and b are equal subject to given tolerances. Examples >>> a = np.ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1]) >>> a masked_array(data=[10000000000.0, 1e-07, --], mask=[False, False, True], fill_value=1e+20) >>> b = np.ma.array([1e10, 1e-8, -42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) False >>> a = np.ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1]) >>> b = np.ma.array([1.00001e10, 1e-9, -42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) True >>> np.ma.allclose(a, b, masked_equal=False) False Masked values are not compared directly. >>> a = np.ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1]) >>> b = np.ma.array([1.00001e10, 1e-9, 42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) True >>> np.ma.allclose(a, b, masked_equal=False) False
numpy.reference.generated.numpy.ma.allclose
numpy.ma.allequal ma.allequal(a, b, fill_value=True)[source] Return True if all entries of a and b are equal, using fill_value as a truth value where either or both are masked. Parameters a, barray_like Input arrays to compare. fill_valuebool, optional Whether masked values in a or b are considered equal (True) or not (False). Returns ybool Returns True if the two arrays are equal within the given tolerance, False otherwise. If either array contains NaN, then False is returned. See also all, any numpy.ma.allclose Examples >>> a = np.ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1]) >>> a masked_array(data=[10000000000.0, 1e-07, --], mask=[False, False, True], fill_value=1e+20) >>> b = np.array([1e10, 1e-7, -42.0]) >>> b array([ 1.00000000e+10, 1.00000000e-07, -4.20000000e+01]) >>> np.ma.allequal(a, b, fill_value=False) False >>> np.ma.allequal(a, b) True
numpy.reference.generated.numpy.ma.allequal
numpy.ma.anom ma.anom(self, axis=None, dtype=None) = <numpy.ma.core._frommethod object> Compute the anomalies (deviations from the arithmetic mean) along the given axis. Returns an array of anomalies, with the same shape as the input and where the arithmetic mean is computed along the given axis. Parameters axisint, optional Axis over which the anomalies are taken. The default is to use the mean of the flattened array as reference. dtypedtype, optional Type to use in computing the variance. For arrays of integer type the default is float32; for arrays of float types it is the same as the array type. See also mean Compute the mean of the array. Examples >>> a = np.ma.array([1,2,3]) >>> a.anom() masked_array(data=[-1., 0., 1.], mask=False, fill_value=1e+20)
numpy.reference.generated.numpy.ma.anom
numpy.ma.anomalies ma.anomalies(self, axis=None, dtype=None) = <numpy.ma.core._frommethod object> Compute the anomalies (deviations from the arithmetic mean) along the given axis. Returns an array of anomalies, with the same shape as the input and where the arithmetic mean is computed along the given axis. Parameters axisint, optional Axis over which the anomalies are taken. The default is to use the mean of the flattened array as reference. dtypedtype, optional Type to use in computing the variance. For arrays of integer type the default is float32; for arrays of float types it is the same as the array type. See also mean Compute the mean of the array. Examples >>> a = np.ma.array([1,2,3]) >>> a.anom() masked_array(data=[-1., 0., 1.], mask=False, fill_value=1e+20)
numpy.reference.generated.numpy.ma.anomalies
numpy.ma.any ma.any(self, axis=None, out=None, keepdims=<no value>) = <numpy.ma.core._frommethod object> Returns True if any of the elements of a evaluate to True. Masked values are considered as False during computation. Refer to numpy.any for full documentation. See also numpy.ndarray.any corresponding function for ndarrays numpy.any equivalent function
numpy.reference.generated.numpy.ma.any
numpy.ma.append ma.append(a, b, axis=None)[source] Append values to the end of an array. New in version 1.9.0. Parameters aarray_like Values are appended to a copy of this array. barray_like These values are appended to a copy of a. It must be of the correct shape (the same shape as a, excluding axis). If axis is not specified, b can be any shape and will be flattened before use. axisint, optional The axis along which b is appended. If axis is not given, both a and b are flattened before use. Returns appendMaskedArray A copy of a with b appended to axis. Note that append does not occur in-place: a new array is allocated and filled. If axis is None, the result is a flattened array. See also numpy.append Equivalent function in the top-level NumPy module. Examples >>> import numpy.ma as ma >>> a = ma.masked_values([1, 2, 3], 2) >>> b = ma.masked_values([[4, 5, 6], [7, 8, 9]], 7) >>> ma.append(a, b) masked_array(data=[1, --, 3, 4, 5, 6, --, 8, 9], mask=[False, True, False, False, False, False, True, False, False], fill_value=999999)
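An added sketch (not from the original reference) of the axis keyword: with an explicit axis the inputs are stacked rather than flattened, and both masks are carried through.

```python
import numpy as np

# Sketch: ma.append with an explicit axis preserves shape and masks.
a = np.ma.array([[1, 2]], mask=[[False, True]])
b = np.ma.array([[3, 4]], mask=[[True, False]])

out = np.ma.append(a, b, axis=0)
assert out.shape == (2, 2)
assert out.mask.tolist() == [[False, True], [True, False]]
assert out.data.tolist() == [[1, 2], [3, 4]]
```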
numpy.reference.generated.numpy.ma.append
numpy.ma.apply_along_axis ma.apply_along_axis(func1d, axis, arr, *args, **kwargs)[source] Apply a function to 1-D slices along the given axis. Execute func1d(a, *args, **kwargs) where func1d operates on 1-D arrays and a is a 1-D slice of arr along axis. This is equivalent to (but faster than) the following use of ndindex and s_, which sets each of ii, jj, and kk to a tuple of indices: Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): f = func1d(arr[ii + s_[:,] + kk]) Nj = f.shape for jj in ndindex(Nj): out[ii + jj + kk] = f[jj] Equivalently, eliminating the inner loop, this can be expressed as: Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): out[ii + s_[...,] + kk] = func1d(arr[ii + s_[:,] + kk]) Parameters func1dfunction (M,) -> (Nj…) This function should accept 1-D arrays. It is applied to 1-D slices of arr along the specified axis. axisinteger Axis along which arr is sliced. arrndarray (Ni…, M, Nk…) Input array. argsany Additional arguments to func1d. kwargsany Additional named arguments to func1d. New in version 1.9.0. Returns outndarray (Ni…, Nj…, Nk…) The output array. The shape of out is identical to the shape of arr, except along the axis dimension. This axis is removed, and replaced with new dimensions equal to the shape of the return value of func1d. So if func1d returns a scalar out will have one fewer dimensions than arr. See also apply_over_axes Apply a function repeatedly over multiple axes. Examples >>> def my_func(a): ... """Average first and last element of a 1-D array""" ... return (a[0] + a[-1]) * 0.5 >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(my_func, 0, b) array([4., 5., 6.]) >>> np.apply_along_axis(my_func, 1, b) array([2., 5., 8.]) For a function that returns a 1D array, the number of dimensions in outarr is the same as arr. 
>>> b = np.array([[8,1,7], [4,3,9], [5,2,6]]) >>> np.apply_along_axis(sorted, 1, b) array([[1, 7, 8], [3, 4, 9], [2, 5, 6]]) For a function that returns a higher dimensional array, those dimensions are inserted in place of the axis dimension. >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(np.diag, -1, b) array([[[1, 0, 0], [0, 2, 0], [0, 0, 3]], [[4, 0, 0], [0, 5, 0], [0, 0, 6]], [[7, 0, 0], [0, 8, 0], [0, 0, 9]]])
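The examples above use plain arrays; a sketch with an actual masked array (added here, not from the original reference) shows that the mask is respected when func1d is mask-aware:

```python
import numpy as np

# Sketch: ma.apply_along_axis passes masked 1-D slices to func1d.
a = np.ma.array([[1., 2., 3.],
                 [4., 5., 6.]],
                mask=[[False, True, False],
                      [False, False, False]])

# ma.sum ignores the masked entry in the first row: 1 + 3 = 4.
row_sums = np.ma.apply_along_axis(np.ma.sum, 1, a)
assert row_sums.tolist() == [4.0, 15.0]
```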
numpy.reference.generated.numpy.ma.apply_along_axis
numpy.ma.apply_over_axes ma.apply_over_axes(func, a, axes)[source] Apply a function repeatedly over multiple axes. func is called as res = func(a, axis), where axis is the first element of axes. The result res of the function call must have either the same dimensions as a or one less dimension. If res has one less dimension than a, a dimension is inserted before axis. The call to func is then repeated for each axis in axes, with res as the first argument. Parameters funcfunction This function must take two arguments, func(a, axis). aarray_like Input array. axesarray_like Axes over which func is applied; the elements must be integers. Returns apply_over_axisndarray The output array. The number of dimensions is the same as a, but the shape can be different. This depends on whether func changes the shape of its output with respect to its input. See also apply_along_axis Apply a function to 1-D slices of an array along the given axis. Examples >>> a = np.ma.arange(24).reshape(2,3,4) >>> a[:,0,1] = np.ma.masked >>> a[:,1,:] = np.ma.masked >>> a masked_array( data=[[[0, --, 2, 3], [--, --, --, --], [8, 9, 10, 11]], [[12, --, 14, 15], [--, --, --, --], [20, 21, 22, 23]]], mask=[[[False, True, False, False], [ True, True, True, True], [False, False, False, False]], [[False, True, False, False], [ True, True, True, True], [False, False, False, False]]], fill_value=999999) >>> np.ma.apply_over_axes(np.ma.sum, a, [0,2]) masked_array( data=[[[46], [--], [124]]], mask=[[[False], [ True], [False]]], fill_value=999999) Tuple axis arguments to ufuncs are equivalent: >>> np.ma.sum(a, axis=(0,2)).reshape((1,-1,1)) masked_array( data=[[[46], [--], [124]]], mask=[[[False], [ True], [False]]], fill_value=999999)
numpy.reference.generated.numpy.ma.apply_over_axes
numpy.ma.arange ma.arange([start, ]stop, [step, ]dtype=None, *, like=None) = <numpy.ma.core._convert2ma object> Return evenly spaced values within a given interval. Values are generated within the half-open interval [start, stop) (in other words, the interval including start but excluding stop). For integer arguments the function is equivalent to the Python built-in range function, but returns an ndarray rather than a list. When using a non-integer step, such as 0.1, it is often better to use numpy.linspace. See the warnings section below for more information. Parameters startinteger or real, optional Start of interval. The interval includes this value. The default start value is 0. stopinteger or real End of interval. The interval does not include this value, except in some cases where step is not an integer and floating point round-off affects the length of out. stepinteger or real, optional Spacing between values. For any output out, this is the distance between two adjacent values, out[i+1] - out[i]. The default step size is 1. If step is specified as a positional argument, start must also be given. dtypedtype The type of the output array. If dtype is not given, infer the data type from the other input arguments. likearray_like Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as like supports the __array_function__ protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns arangeMaskedArray Array of evenly spaced values. For floating point arguments, the length of the result is ceil((stop - start)/step). Because of floating point overflow, this rule may result in the last element of out being greater than stop. Warning The length of the output might not be numerically stable. Another stability issue is due to the internal implementation of numpy.arange. 
The actual step value used to populate the array is dtype(start + step) - dtype(start) and not step. Precision loss can occur here, due to casting or due to using floating points when start is much larger than step. This can lead to unexpected behaviour. For example: >>> np.arange(0, 5, 0.5, dtype=int) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) >>> np.arange(-3, 3, 0.5, dtype=int) array([-3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8]) In such cases, the use of numpy.linspace should be preferred. See also numpy.linspace Evenly spaced numbers with careful handling of endpoints. numpy.ogrid Arrays of evenly spaced numbers in N-dimensions. numpy.mgrid Grid-shaped arrays of evenly spaced numbers in N-dimensions. Examples >>> np.arange(3) array([0, 1, 2]) >>> np.arange(3.0) array([ 0., 1., 2.]) >>> np.arange(3,7) array([3, 4, 5, 6]) >>> np.arange(3,7,2) array([3, 5])
numpy.reference.generated.numpy.ma.arange
numpy.ma.argmax ma.argmax(self, axis=None, fill_value=None, out=None) = <numpy.ma.core._frommethod object> Returns array of indices of the maximum values along the given axis. Masked values are treated as if they had the value fill_value. Parameters axis{None, integer} If None, the index is into the flattened array, otherwise along the specified axis fill_valuescalar or None, optional Value used to fill in the masked values. If None, the output of maximum_fill_value(self._data) is used instead. out{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns index_array{integer_array} Examples >>> a = np.arange(6).reshape(2,3) >>> a.argmax() 5 >>> a.argmax(0) array([1, 1, 1]) >>> a.argmax(1) array([2, 2])
numpy.reference.generated.numpy.ma.argmax
numpy.ma.argmin ma.argmin(self, axis=None, fill_value=None, out=None) = <numpy.ma.core._frommethod object> Return array of indices to the minimum values along the given axis. Parameters axis{None, integer} If None, the index is into the flattened array, otherwise along the specified axis fill_valuescalar or None, optional Value used to fill in the masked values. If None, the output of minimum_fill_value(self._data) is used instead. out{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns ndarray or scalar If multi-dimension input, returns a new ndarray of indices to the minimum values along the given axis. Otherwise, returns a scalar of index to the minimum values along the given axis. Examples >>> x = np.ma.array(np.arange(4), mask=[1,1,0,0]) >>> x.shape = (2,2) >>> x masked_array( data=[[--, --], [2, 3]], mask=[[ True, True], [False, False]], fill_value=999999) >>> x.argmin(axis=0, fill_value=-1) array([0, 0]) >>> x.argmin(axis=0, fill_value=9) array([1, 1])
numpy.reference.generated.numpy.ma.argmin
numpy.ma.argsort ma.argsort(a, axis=<no value>, kind=None, order=None, endwith=True, fill_value=None)[source] Return an ndarray of indices that sort the array along the specified axis. Masked values are filled beforehand to fill_value. Parameters axisint, optional Axis along which to sort. If None, the default, the flattened array is used. Changed in version 1.13.0: Previously, the default was documented to be -1, but that was in error. At some future date, the default will change to -1, as originally intended. Until then, the axis should be given explicitly when arr.ndim > 1, to avoid a FutureWarning. kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional The sorting algorithm used. orderlist, optional When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. Not all fields need be specified. endwith{True, False}, optional Whether missing values (if any) should be treated as the largest values (True) or the smallest values (False). When the array contains unmasked values at the same extremes of the datatype, the ordering of these values and the masked values is undefined. fill_valuescalar or None, optional Value used internally for the masked values. If fill_value is not None, it supersedes endwith. Returns index_arrayndarray, int Array of indices that sort a along the specified axis. In other words, a[index_array] yields a sorted a. See also ma.MaskedArray.sort Describes sorting algorithms used. lexsort Indirect stable sort with multiple keys. numpy.ndarray.sort Inplace sort. Notes See sort for notes on the different sorting algorithms. Examples >>> a = np.ma.array([3,2,1], mask=[False, False, True]) >>> a masked_array(data=[3, 2, --], mask=[False, False, True], fill_value=999999) >>> a.argsort() array([1, 0, 2])
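A short added sketch (not from the original reference) of the endwith flag, which the Parameters section describes but the example does not exercise:

```python
import numpy as np

# Sketch: endwith controls where masked entries sort.
a = np.ma.array([3, 2, 1], mask=[False, True, False])   # values: 3, --, 1

idx_last = np.ma.argsort(a, axis=None)                  # masked treated as largest
idx_first = np.ma.argsort(a, axis=None, endwith=False)  # masked treated as smallest

assert idx_last.tolist() == [2, 0, 1]    # 1, 3, then the masked entry
assert idx_first.tolist() == [1, 2, 0]   # masked entry first
```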
numpy.reference.generated.numpy.ma.argsort
numpy.ma.around ma.around = <numpy.ma.core._MaskedUnaryOperation object> Round an array to the given number of decimals. See also around equivalent function; see for details.
numpy.reference.generated.numpy.ma.around
numpy.ma.array ma.array(data, dtype=None, copy=False, order=None, mask=False, fill_value=None, keep_mask=True, hard_mask=False, shrink=True, subok=True, ndmin=0)[source] An array class with possibly masked values. Masked values of True exclude the corresponding element from any computation. Construction: x = MaskedArray(data, mask=nomask, dtype=None, copy=False, subok=True, ndmin=0, fill_value=None, keep_mask=True, hard_mask=None, shrink=True, order=None) Parameters dataarray_like Input data. masksequence, optional Mask. Must be convertible to an array of booleans with the same shape as data. True indicates masked (i.e. invalid) data. dtypedtype, optional Data type of the output. If dtype is None, the type of the data argument (data.dtype) is used. If dtype is not None and different from data.dtype, a copy is performed. copybool, optional Whether to copy the input data (True), or to use a reference instead. Default is False. subokbool, optional Whether to return a subclass of MaskedArray if possible (True) or a plain MaskedArray. Default is True. ndminint, optional Minimum number of dimensions. Default is 0. fill_valuescalar, optional Value used to fill in the masked values when necessary. If None, a default based on the data-type is used. keep_maskbool, optional Whether to combine mask with the mask of the input data, if any (True), or to use only mask for the output (False). Default is True. hard_maskbool, optional Whether to use a hard mask or not. With a hard mask, masked values cannot be unmasked. Default is False. shrinkbool, optional Whether to force compression of an empty mask. Default is True. order{‘C’, ‘F’, ‘A’}, optional Specify the order of the array. If order is ‘C’, then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). 
If order is ‘A’ (default), then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous), unless a copy is required, in which case it will be C-contiguous. Examples The mask can be initialized with an array of boolean values with the same shape as data. >>> data = np.arange(6).reshape((2, 3)) >>> np.ma.MaskedArray(data, mask=[[False, True, False], ... [False, False, True]]) masked_array( data=[[0, --, 2], [3, 4, --]], mask=[[False, True, False], [False, False, True]], fill_value=999999) Alternatively, the mask can be initialized to homogeneous boolean array with the same shape as data by passing in a scalar boolean value: >>> np.ma.MaskedArray(data, mask=False) masked_array( data=[[0, 1, 2], [3, 4, 5]], mask=[[False, False, False], [False, False, False]], fill_value=999999) >>> np.ma.MaskedArray(data, mask=True) masked_array( data=[[--, --, --], [--, --, --]], mask=[[ True, True, True], [ True, True, True]], fill_value=999999, dtype=int64) Note The recommended practice for initializing mask with a scalar boolean value is to use True/False rather than np.True_/np.False_. The reason is nomask is represented internally as np.False_. >>> np.False_ is np.ma.nomask True
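An added sketch (not from the original reference) of the fill_value parameter: the value stored at construction is what filled() substitutes for masked entries.

```python
import numpy as np

# Sketch: fill_value set at construction is used by filled().
x = np.ma.array([1.0, 2.0, 3.0], mask=[False, True, False],
                fill_value=-999.0)

assert x.fill_value == -999.0
assert x.filled().tolist() == [1.0, -999.0, 3.0]

# A different value can still be given at call time.
assert x.filled(0.0).tolist() == [1.0, 0.0, 3.0]
```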
numpy.reference.generated.numpy.ma.array
numpy.ma.asanyarray ma.asanyarray(a, dtype=None)[source] Convert the input to a masked array, conserving subclasses. If a is a subclass of MaskedArray, its class is conserved. No copy is performed if the input is already an ndarray. Parameters aarray_like Input data, in any form that can be converted to an array. dtypedtype, optional By default, the data-type is inferred from the input data. order{‘C’, ‘F’}, optional Whether to use row-major (‘C’) or column-major (‘FORTRAN’) memory representation. Default is ‘C’. Returns outMaskedArray MaskedArray interpretation of a. See also asarray Similar to asanyarray, but does not conserve subclass. Examples >>> x = np.arange(10.).reshape(2, 5) >>> x array([[0., 1., 2., 3., 4.], [5., 6., 7., 8., 9.]]) >>> np.ma.asanyarray(x) masked_array( data=[[0., 1., 2., 3., 4.], [5., 6., 7., 8., 9.]], mask=False, fill_value=1e+20) >>> type(np.ma.asanyarray(x)) <class 'numpy.ma.core.MaskedArray'>
numpy.reference.generated.numpy.ma.asanyarray
numpy.ma.asarray ma.asarray(a, dtype=None, order=None)[source] Convert the input to a masked array of the given data-type. No copy is performed if the input is already an ndarray. If a is a subclass of MaskedArray, a base class MaskedArray is returned. Parameters aarray_like Input data, in any form that can be converted to a masked array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists, ndarrays and masked arrays. dtypedtype, optional By default, the data-type is inferred from the input data. order{‘C’, ‘F’}, optional Whether to use row-major (‘C’) or column-major (‘FORTRAN’) memory representation. Default is ‘C’. Returns outMaskedArray Masked array interpretation of a. See also asanyarray Similar to asarray, but conserves subclasses. Examples >>> x = np.arange(10.).reshape(2, 5) >>> x array([[0., 1., 2., 3., 4.], [5., 6., 7., 8., 9.]]) >>> np.ma.asarray(x) masked_array( data=[[0., 1., 2., 3., 4.], [5., 6., 7., 8., 9.]], mask=False, fill_value=1e+20) >>> type(np.ma.asarray(x)) <class 'numpy.ma.core.MaskedArray'>
numpy.reference.generated.numpy.ma.asarray
numpy.ma.atleast_1d ma.atleast_1d(*args, **kwargs) = <numpy.ma.extras._fromnxfunction_allargs object> Convert inputs to arrays with at least one dimension. Scalar inputs are converted to 1-dimensional arrays, whilst higher-dimensional inputs are preserved. Parameters arys1, arys2, …array_like One or more input arrays. Returns retndarray An array, or list of arrays, each with a.ndim >= 1. Copies are made only if necessary. See also atleast_2d, atleast_3d Notes The function is applied to both the _data and the _mask, if any. Examples >>> np.atleast_1d(1.0) array([1.]) >>> x = np.arange(9.0).reshape(3,3) >>> np.atleast_1d(x) array([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]) >>> np.atleast_1d(x) is x True >>> np.atleast_1d(1, [3, 4]) [array([1]), array([3, 4])]
numpy.reference.generated.numpy.ma.atleast_1d
numpy.ma.atleast_2d ma.atleast_2d(*args, **kwargs) = <numpy.ma.extras._fromnxfunction_allargs object> View inputs as arrays with at least two dimensions. Parameters arys1, arys2, …array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have two or more dimensions are preserved. Returns res, res2, …ndarray An array, or list of arrays, each with a.ndim >= 2. Copies are avoided where possible, and views with two or more dimensions are returned. See also atleast_1d, atleast_3d Notes The function is applied to both the _data and the _mask, if any. Examples >>> np.atleast_2d(3.0) array([[3.]]) >>> x = np.arange(3.0) >>> np.atleast_2d(x) array([[0., 1., 2.]]) >>> np.atleast_2d(x).base is x True >>> np.atleast_2d(1, [1, 2], [[1, 2]]) [array([[1]]), array([[1, 2]]), array([[1, 2]])]
numpy.reference.generated.numpy.ma.atleast_2d
numpy.ma.atleast_3d ma.atleast_3d(*args, **kwargs) = <numpy.ma.extras._fromnxfunction_allargs object> View inputs as arrays with at least three dimensions. Parameters arys1, arys2, …array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have three or more dimensions are preserved. Returns res1, res2, …ndarray An array, or list of arrays, each with a.ndim >= 3. Copies are avoided where possible, and views with three or more dimensions are returned. For example, a 1-D array of shape (N,) becomes a view of shape (1, N, 1), and a 2-D array of shape (M, N) becomes a view of shape (M, N, 1). See also atleast_1d, atleast_2d Notes The function is applied to both the _data and the _mask, if any. Examples >>> np.atleast_3d(3.0) array([[[3.]]]) >>> x = np.arange(3.0) >>> np.atleast_3d(x).shape (1, 3, 1) >>> x = np.arange(12.0).reshape(4,3) >>> np.atleast_3d(x).shape (4, 3, 1) >>> np.atleast_3d(x).base is x.base # x is a reshape, so not base itself True >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]): ... print(arr, arr.shape) ... [[[1] [2]]] (1, 2, 1) [[[1] [2]]] (1, 2, 1) [[[1 2]]] (1, 1, 2)
numpy.reference.generated.numpy.ma.atleast_3d
numpy.ma.average ma.average(a, axis=None, weights=None, returned=False)[source] Return the weighted average of array over the given axis. Parameters aarray_like Data to be averaged. Masked entries are not taken into account in the computation. axisint, optional Axis along which to average a. If None, averaging is done over the flattened array. weightsarray_like, optional The importance that each element has in the computation of the average. The weights array can either be 1-D (in which case its length must be the size of a along the given axis) or of the same shape as a. If weights=None, then all data in a are assumed to have a weight equal to one. The 1-D calculation is: avg = sum(a * weights) / sum(weights) The only constraint on weights is that sum(weights) must not be 0. returnedbool, optional Flag indicating whether a tuple (result, sum of weights) should be returned as output (True), or just the result (False). Default is False. Returns average, [sum_of_weights](tuple of) scalar or MaskedArray The average along the specified axis. When returned is True, return a tuple with the average as the first element and the sum of the weights as the second element. The return type is np.float64 if a is of integer type and floats smaller than float64, or the input data-type, otherwise. If returned, sum_of_weights is always float64. Examples >>> a = np.ma.array([1., 2., 3., 4.], mask=[False, False, True, True]) >>> np.ma.average(a, weights=[3, 1, 0, 0]) 1.25 >>> x = np.ma.arange(6.).reshape(3, 2) >>> x masked_array( data=[[0., 1.], [2., 3.], [4., 5.]], mask=False, fill_value=1e+20) >>> avg, sumweights = np.ma.average(x, axis=0, weights=[1, 2, 3], ... returned=True) >>> avg masked_array(data=[2.6666666666666665, 3.6666666666666665], mask=[False, False], fill_value=1e+20)
numpy.reference.generated.numpy.ma.average
numpy.ma.choose ma.choose(indices, choices, out=None, mode='raise')[source] Use an index array to construct a new array from a list of choices. Given an array of integers and a list of n choice arrays, this method will create a new array that merges each of the choice arrays. Where a value in index is i, the new array will have the value that choices[i] contains in the same place. Parameters indicesndarray of ints This array must contain integers in [0, n-1], where n is the number of choices. choicessequence of arrays Choice arrays. The index array and all of the choices should be broadcastable to the same shape. outarray, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. mode{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. ‘raise’ : raise an error ‘wrap’ : wrap around ‘clip’ : clip to the range Returns merged_arrayarray See also choose equivalent function Examples >>> choice = np.array([[1,1,1], [2,2,2], [3,3,3]]) >>> a = np.array([2, 1, 0]) >>> np.ma.choose(a, choice) masked_array(data=[3, 2, 1], mask=False, fill_value=999999)
numpy.reference.generated.numpy.ma.choose
numpy.ma.clip ma.clip(*args, **kwargs) = <numpy.ma.core._convert2ma object> Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1. Equivalent to but faster than np.minimum(a_max, np.maximum(a, a_min)). No check is performed to ensure a_min < a_max. Parameters aarray_like Array containing elements to clip. a_min, a_maxarray_like or None Minimum and maximum value. If None, clipping is not performed on the corresponding edge. Only one of a_min and a_max may be None. Both are broadcast against a. outndarray, optional The results will be placed in this array. It may be the input array for in-place clipping. out must be of the right shape to hold the output. Its type is preserved. **kwargs For other keyword-only arguments, see the ufunc docs. New in version 1.17.0. Returns clipped_arrayMaskedArray An array with the elements of a, but where values < a_min are replaced with a_min, and those > a_max with a_max. See also Output type determination Notes When a_min is greater than a_max, clip returns an array in which all values are equal to a_max, as shown in the second example. Examples >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.clip(a, 1, 8) array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) >>> np.clip(a, 8, 1) array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) >>> np.clip(a, 3, 6, out=a) array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.clip(a, [3, 4, 1, 1, 1, 4, 4, 4, 4, 4], 8) array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
numpy.reference.generated.numpy.ma.clip
numpy.ma.clump_masked ma.clump_masked(a)[source] Returns a list of slices corresponding to the masked clumps of a 1-D array. (A “clump” is defined as a contiguous region of the array). Parameters andarray A one-dimensional masked array. Returns sliceslist of slice The list of slices, one for each continuous region of masked elements in a. See also flatnotmasked_edges, flatnotmasked_contiguous, notmasked_edges notmasked_contiguous, clump_unmasked Notes New in version 1.4.0. Examples >>> a = np.ma.masked_array(np.arange(10)) >>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked >>> np.ma.clump_masked(a) [slice(0, 3, None), slice(6, 7, None), slice(8, 10, None)]
numpy.reference.generated.numpy.ma.clump_masked
numpy.ma.clump_unmasked ma.clump_unmasked(a)[source] Return list of slices corresponding to the unmasked clumps of a 1-D array. (A “clump” is defined as a contiguous region of the array). Parameters andarray A one-dimensional masked array. Returns sliceslist of slice The list of slices, one for each continuous region of unmasked elements in a. See also flatnotmasked_edges, flatnotmasked_contiguous, notmasked_edges notmasked_contiguous, clump_masked Notes New in version 1.4.0. Examples >>> a = np.ma.masked_array(np.arange(10)) >>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked >>> np.ma.clump_unmasked(a) [slice(3, 6, None), slice(7, 8, None)]
numpy.reference.generated.numpy.ma.clump_unmasked
numpy.ma.column_stack ma.column_stack(*args, **kwargs) = <numpy.ma.extras._fromnxfunction_seq object> Stack 1-D arrays as columns into a 2-D array. Take a sequence of 1-D arrays and stack them as columns to make a single 2-D array. 2-D arrays are stacked as-is, just like with hstack. 1-D arrays are turned into 2-D columns first. Parameters tupsequence of 1-D or 2-D arrays. Arrays to stack. All of them must have the same first dimension. Returns stacked2-D array The array formed by stacking the given arrays. See also stack, hstack, vstack, concatenate Notes The function is applied to both the _data and the _mask, if any. Examples >>> a = np.array((1,2,3)) >>> b = np.array((2,3,4)) >>> np.column_stack((a,b)) array([[1, 2], [2, 3], [3, 4]])
numpy.reference.generated.numpy.ma.column_stack
numpy.ma.common_fill_value ma.common_fill_value(a, b)[source] Return the common filling value of two masked arrays, if any. If a.fill_value == b.fill_value, return the fill value, otherwise return None. Parameters a, bMaskedArray The masked arrays for which to compare fill values. Returns fill_valuescalar or None The common fill value, or None. Examples >>> x = np.ma.array([0, 1.], fill_value=3) >>> y = np.ma.array([0, 1.], fill_value=3) >>> np.ma.common_fill_value(x, y) 3.0
numpy.reference.generated.numpy.ma.common_fill_value
numpy.ma.compress_cols ma.compress_cols(a)[source] Suppress whole columns of a 2-D array that contain masked values. This is equivalent to np.ma.compress_rowcols(a, 1), see compress_rowcols for details. See also compress_rowcols
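The page above gives no example, so here is a minimal sketch (the array and mask are illustrative, not from the official docs): a single masked element causes its whole column to be dropped.

```python
import numpy as np

# Hypothetical 3x3 array with one masked element at position (0, 0)
x = np.ma.array(np.arange(9).reshape(3, 3),
                mask=[[1, 0, 0], [0, 0, 0], [0, 0, 0]])

# Column 0 contains a masked value, so the entire column is suppressed
print(np.ma.compress_cols(x))
# -> [[1 2]
#     [4 5]
#     [7 8]]
```

Note that the result is a plain ndarray, since no masked values remain after the suppression.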
numpy.reference.generated.numpy.ma.compress_cols
numpy.ma.compress_rowcols ma.compress_rowcols(x, axis=None)[source] Suppress the rows and/or columns of a 2-D array that contain masked values. The suppression behavior is selected with the axis parameter. If axis is None, both rows and columns are suppressed. If axis is 0, only rows are suppressed. If axis is 1 or -1, only columns are suppressed. Parameters xarray_like, MaskedArray The array to operate on. If not a MaskedArray instance (or if no array elements are masked), x is interpreted as a MaskedArray with mask set to nomask. Must be a 2D array. axisint, optional Axis along which to perform the operation. Default is None. Returns compressed_arrayndarray The compressed array. Examples >>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], ... [1, 0, 0], ... [0, 0, 0]]) >>> x masked_array( data=[[--, 1, 2], [--, 4, 5], [6, 7, 8]], mask=[[ True, False, False], [ True, False, False], [False, False, False]], fill_value=999999) >>> np.ma.compress_rowcols(x) array([[7, 8]]) >>> np.ma.compress_rowcols(x, 0) array([[6, 7, 8]]) >>> np.ma.compress_rowcols(x, 1) array([[1, 2], [4, 5], [7, 8]])
numpy.reference.generated.numpy.ma.compress_rowcols
numpy.ma.compress_rows ma.compress_rows(a)[source] Suppress whole rows of a 2-D array that contain masked values. This is equivalent to np.ma.compress_rowcols(a, 0), see compress_rowcols for details. See also compress_rowcols
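As with compress_cols, the page has no example; this sketch (with an illustrative array, not from the official docs) shows that one masked element removes its entire row.

```python
import numpy as np

# Hypothetical 3x3 array with one masked element in row 0
x = np.ma.array(np.arange(9).reshape(3, 3),
                mask=[[1, 0, 0], [0, 0, 0], [0, 0, 0]])

# Row 0 contains a masked value, so the entire row is suppressed
print(np.ma.compress_rows(x))
# -> [[3 4 5]
#     [6 7 8]]
```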
numpy.reference.generated.numpy.ma.compress_rows
numpy.ma.compressed ma.compressed(x)[source] Return all the non-masked data as a 1-D array. This function is equivalent to calling the “compressed” method of a ma.MaskedArray, see ma.MaskedArray.compressed for details. See also ma.MaskedArray.compressed Equivalent method.
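A quick illustrative sketch (the data here is made up, not from the official page): masked entries are dropped and the remaining data is returned as a flat 1-D ndarray.

```python
import numpy as np

# Hypothetical masked array: entries 2 and 4 are masked
x = np.ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 1, 0])

# Only the non-masked data survives, as a plain 1-D ndarray
print(np.ma.compressed(x))
# -> [1 3 5]
```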
numpy.reference.generated.numpy.ma.compressed
numpy.ma.concatenate ma.concatenate(arrays, axis=0)[source] Concatenate a sequence of arrays along the given axis. Parameters arrayssequence of array_like The arrays must have the same shape, except in the dimension corresponding to axis (the first, by default). axisint, optional The axis along which the arrays will be joined. Default is 0. Returns resultMaskedArray The concatenated array with any masked entries preserved. See also numpy.concatenate Equivalent function in the top-level NumPy module. Examples >>> import numpy.ma as ma >>> a = ma.arange(3) >>> a[1] = ma.masked >>> b = ma.arange(2, 5) >>> a masked_array(data=[0, --, 2], mask=[False, True, False], fill_value=999999) >>> b masked_array(data=[2, 3, 4], mask=False, fill_value=999999) >>> ma.concatenate([a, b]) masked_array(data=[0, --, 2, 2, 3, 4], mask=[False, True, False, False, False, False], fill_value=999999)
numpy.reference.generated.numpy.ma.concatenate
numpy.ma.conjugate ma.conjugate(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <numpy.ma.core._MaskedUnaryOperation object> Return the complex conjugate, element-wise. The complex conjugate of a complex number is obtained by changing the sign of its imaginary part. Parameters xarray_like Input value. outndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. wherearray_like, optional This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs For other keyword-only arguments, see the ufunc docs. Returns yndarray The complex conjugate of x, with same dtype as x. This is a scalar if x is a scalar. Notes conj is an alias for conjugate: >>> np.conj is np.conjugate True Examples >>> np.conjugate(1+2j) (1-2j) >>> x = np.eye(2) + 1j * np.eye(2) >>> np.conjugate(x) array([[ 1.-1.j, 0.-0.j], [ 0.-0.j, 1.-1.j]])
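The examples above only use plain arrays. This sketch (with made-up data) shows the masked-array behaviour: the conjugate is taken element-wise and the mask is preserved.

```python
import numpy as np

# Hypothetical complex masked array: the second entry is masked
z = np.ma.array([1 + 2j, 3 - 4j], mask=[False, True])

r = np.ma.conjugate(z)
# The unmasked entry is conjugated; the masked entry stays masked
print(r)
# -> [(1-2j) --]
```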
numpy.reference.generated.numpy.ma.conjugate
numpy.ma.copy ma.copy(self, *args, **params) a.copy(order='C') = <numpy.ma.core._frommethod object> Return a copy of the array. Parameters order{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. (Note that this function and numpy.copy are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also numpy.copy Similar function with different default behavior numpy.copyto Notes This function is the preferred method for creating an array copy. The function numpy.copy is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. Examples >>> x = np.array([[1,2,3],[4,5,6]], order='F') >>> y = x.copy() >>> x.fill(0) >>> x array([[0, 0, 0], [0, 0, 0]]) >>> y array([[1, 2, 3], [4, 5, 6]]) >>> y.flags['C_CONTIGUOUS'] True
numpy.reference.generated.numpy.ma.copy
numpy.ma.corrcoef ma.corrcoef(x, y=None, rowvar=True, bias=<no value>, allow_masked=True, ddof=<no value>)[source] Return Pearson product-moment correlation coefficients. Except for the handling of missing data this function does the same as numpy.corrcoef. For more details and examples, see numpy.corrcoef. Parameters xarray_like A 1-D or 2-D array containing multiple variables and observations. Each row of x represents a variable, and each column a single observation of all those variables. Also see rowvar below. yarray_like, optional An additional set of variables and observations. y has the same shape as x. rowvarbool, optional If rowvar is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations. bias_NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0. allow_maskedbool, optional If True, masked values are propagated pair-wise: if a value is masked in x, the corresponding value is masked in y. If False, raises an exception. Because bias is deprecated, this argument needs to be treated as keyword only to avoid a warning. ddof_NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0. See also numpy.corrcoef Equivalent function in top-level NumPy module. cov Estimate the covariance matrix. Notes This function accepts but discards arguments bias and ddof. This is for backwards compatibility with previous versions of this function. These arguments had no effect on the return values of the function and can be safely ignored in this and previous versions of numpy.
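The page defers examples to numpy.corrcoef; here is a minimal masked sketch (the data is illustrative, not from the official docs) showing the pair-wise mask propagation in action.

```python
import numpy as np

# Two variables with four observations; the last observation is masked
# (the 99. values are placeholders that the mask hides)
x = np.ma.array([[1., 2., 3., 99.],
                 [4., 3., 2., 99.]],
                mask=[[0, 0, 0, 1], [0, 0, 0, 1]])

r = np.ma.corrcoef(x)
# The unmasked observations [1, 2, 3] vs [4, 3, 2] are perfectly
# anti-correlated, so the off-diagonal coefficient is -1
print(float(r[0, 1]))
```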
numpy.reference.generated.numpy.ma.corrcoef
numpy.ma.count ma.count(self, axis=None, keepdims=<no value>) = <numpy.ma.core._frommethod object> Count the non-masked elements of the array along the given axis. Parameters axisNone or int or tuple of ints, optional Axis or axes along which the count is performed. The default, None, performs the count over all the dimensions of the input array. axis may be negative, in which case it counts from the last to the first axis. New in version 1.10.0. If this is a tuple of ints, the count is performed on multiple axes, instead of a single axis or all the axes as before. keepdimsbool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns resultndarray or scalar An array with the same shape as the input array, with the specified axis removed. If the array is a 0-d array, or if axis is None, a scalar is returned. See also ma.count_masked Count masked elements in array or along a given axis. Examples >>> import numpy.ma as ma >>> a = ma.arange(6).reshape((2, 3)) >>> a[1, :] = ma.masked >>> a masked_array( data=[[0, 1, 2], [--, --, --]], mask=[[False, False, False], [ True, True, True]], fill_value=999999) >>> a.count() 3 When the axis keyword is specified an array of appropriate size is returned. >>> a.count(axis=0) array([1, 1, 1]) >>> a.count(axis=1) array([3, 0])
numpy.reference.generated.numpy.ma.count
numpy.ma.count_masked ma.count_masked(arr, axis=None)[source] Count the number of masked elements along the given axis. Parameters arrarray_like An array with (possibly) masked elements. axisint, optional Axis along which to count. If None (default), a flattened version of the array is used. Returns countint, ndarray The total number of masked elements (axis=None) or the number of masked elements along each slice of the given axis. See also MaskedArray.count Count non-masked elements. Examples >>> import numpy.ma as ma >>> a = np.arange(9).reshape((3,3)) >>> a = ma.array(a) >>> a[1, 0] = ma.masked >>> a[1, 2] = ma.masked >>> a[2, 1] = ma.masked >>> a masked_array( data=[[0, 1, 2], [--, 4, --], [6, --, 8]], mask=[[False, False, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> ma.count_masked(a) 3 When the axis keyword is used an array is returned. >>> ma.count_masked(a, axis=0) array([1, 1, 1]) >>> ma.count_masked(a, axis=1) array([0, 2, 1])
numpy.reference.generated.numpy.ma.count_masked
numpy.ma.cov ma.cov(x, y=None, rowvar=True, bias=False, allow_masked=True, ddof=None)[source] Estimate the covariance matrix. Except for the handling of missing data this function does the same as numpy.cov. For more details and examples, see numpy.cov. By default, masked values are recognized as such. If x and y have the same shape, a common mask is allocated: if x[i,j] is masked, then y[i,j] will also be masked. Setting allow_masked to False will raise an exception if values are missing in either of the input arrays. Parameters xarray_like A 1-D or 2-D array containing multiple variables and observations. Each row of x represents a variable, and each column a single observation of all those variables. Also see rowvar below. yarray_like, optional An additional set of variables and observations. y has the same shape as x. rowvarbool, optional If rowvar is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations. biasbool, optional Default normalization (False) is by (N-1), where N is the number of observations given (unbiased estimate). If bias is True, then normalization is by N. This keyword can be overridden by the keyword ddof in numpy versions >= 1.5. allow_maskedbool, optional If True, masked values are propagated pair-wise: if a value is masked in x, the corresponding value is masked in y. If False, raises a ValueError exception when some values are missing. ddof{None, int}, optional If not None normalization is by (N - ddof), where N is the number of observations; this overrides the value implied by bias. The default value is None. New in version 1.5. Raises ValueError Raised if some values are missing and allow_masked is False. See also numpy.cov
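A short sketch of the common-mask behaviour described above (the numbers are illustrative, not from the official page): masking an entry in x also masks the corresponding entry in y before the covariance is computed.

```python
import numpy as np

# x has its last observation masked; y is fully observed
x = np.ma.array([1., 2., 3., 4.], mask=[0, 0, 0, 1])
y = np.ma.array([2., 4., 6., 8.])

c = np.ma.cov(x, y)
# The common mask drops the last pair, leaving [1, 2, 3] vs [2, 4, 6].
# With the default N-1 normalization: var(x)=1, cov(x,y)=2, var(y)=4.
print(c)
```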
numpy.reference.generated.numpy.ma.cov
numpy.ma.cumprod ma.cumprod(self, axis=None, dtype=None, out=None) = <numpy.ma.core._frommethod object> Return the cumulative product of the array elements over the given axis. Masked values are set to 1 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to numpy.cumprod for full documentation. See also numpy.ndarray.cumprod corresponding function for ndarrays numpy.cumprod equivalent function Notes The mask is lost if out is not a valid MaskedArray ! Arithmetic is modular when using integer types, and no error is raised on overflow.
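The page above has no example; this sketch (with made-up data) mirrors the cumsum example below and shows a masked value being treated as 1 in the product while staying masked in the result.

```python
import numpy as np

# Hypothetical array with the value 3 masked out
marr = np.ma.array([1, 2, 3, 4], mask=[0, 0, 1, 0])

r = np.ma.cumprod(marr)
# The masked 3 is set to 1 internally, so the running product is
# [1, 2, 2, 8]; index 2 remains masked in the output
print(r)
# -> [1 2 -- 8]
```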
numpy.reference.generated.numpy.ma.cumprod
numpy.ma.cumsum ma.cumsum(self, axis=None, dtype=None, out=None) = <numpy.ma.core._frommethod object> Return the cumulative sum of the array elements over the given axis. Masked values are set to 0 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to numpy.cumsum for full documentation. See also numpy.ndarray.cumsum corresponding function for ndarrays numpy.cumsum equivalent function Notes The mask is lost if out is not a valid ma.MaskedArray ! Arithmetic is modular when using integer types, and no error is raised on overflow. Examples >>> marr = np.ma.array(np.arange(10), mask=[0,0,0,1,1,1,0,0,0,0]) >>> marr.cumsum() masked_array(data=[0, 1, 3, --, --, --, 9, 16, 24, 33], mask=[False, False, False, True, True, True, False, False, False, False], fill_value=999999)
numpy.reference.generated.numpy.ma.cumsum
numpy.ma.default_fill_value ma.default_fill_value(obj)[source] Return the default fill value for the argument object. The default filling value depends on the datatype of the input array or the type of the input scalar:

datatype   default
bool       True
int        999999
float      1.e20
complex    1.e20+0j
object     ‘?’
string     ‘N/A’

For structured types, a structured scalar is returned, with each field the default fill value for its type. For subarray types, the fill value is an array of the same size containing the default scalar fill value. Parameters objndarray, dtype or scalar The array data-type or scalar for which the default fill value is returned. Returns fill_valuescalar The default fill value. Examples >>> np.ma.default_fill_value(1) 999999 >>> np.ma.default_fill_value(np.array([1.1, 2., np.pi])) 1e+20 >>> np.ma.default_fill_value(np.dtype(complex)) (1e+20+0j)
numpy.reference.generated.numpy.ma.default_fill_value
numpy.ma.diag ma.diag(v, k=0)[source] Extract a diagonal or construct a diagonal array. This function is the equivalent of numpy.diag that takes masked values into account, see numpy.diag for details. See also numpy.diag Equivalent function for ndarrays.
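The page defers examples to numpy.diag; this sketch (with an illustrative array) shows the masked-aware extraction: a masked element on the diagonal stays masked in the result.

```python
import numpy as np

# Hypothetical 3x3 array with the centre element (1, 1) masked
x = np.ma.array(np.arange(9).reshape(3, 3),
                mask=[[0, 0, 0], [0, 1, 0], [0, 0, 0]])

d = np.ma.diag(x)
# The diagonal [0, 4, 8] is extracted with the mask carried along
print(d)
# -> [0 -- 8]
```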
numpy.reference.generated.numpy.ma.diag
numpy.ma.diff ma.diff(*args, **kwargs) = <numpy.ma.core._convert2ma object> Calculate the n-th discrete difference along the given axis. The first difference is given by out[i] = a[i+1] - a[i] along the given axis, higher differences are calculated by using diff recursively. Parameters aarray_like Input array nint, optional The number of times values are differenced. If zero, the input is returned as-is. axisint, optional The axis along which the difference is taken, default is the last axis. prepend, appendarray_like, optional Values to prepend or append to a along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array in along all other axes. Otherwise the dimension and shape must match a except along axis. New in version 1.16.0. Returns diffMaskedArray The n-th differences. The shape of the output is the same as a except along axis where the dimension is smaller by n. The type of the output is the same as the type of the difference between any two elements of a. This is the same as the type of a in most cases. A notable exception is datetime64, which results in a timedelta64 output array. See also gradient, ediff1d, cumsum Notes Type is preserved for boolean arrays, so the result will contain False when consecutive elements are the same and True when they differ. For unsigned integer arrays, the results will also be unsigned. This should not be surprising, as the result is consistent with calculating the difference directly: >>> u8_arr = np.array([1, 0], dtype=np.uint8) >>> np.diff(u8_arr) array([255], dtype=uint8) >>> u8_arr[1,...] - u8_arr[0,...] 
255 If this is not desirable, then the array should be cast to a larger integer type first: >>> i16_arr = u8_arr.astype(np.int16) >>> np.diff(i16_arr) array([-1], dtype=int16) Examples >>> x = np.array([1, 2, 4, 7, 0]) >>> np.diff(x) array([ 1, 2, 3, -7]) >>> np.diff(x, n=2) array([ 1, 1, -10]) >>> x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]]) >>> np.diff(x) array([[2, 3, 4], [5, 1, 2]]) >>> np.diff(x, axis=0) array([[-1, 2, 0, -2]]) >>> x = np.arange('1066-10-13', '1066-10-16', dtype=np.datetime64) >>> np.diff(x) array([1, 1], dtype='timedelta64[D]')
numpy.reference.generated.numpy.ma.diff
numpy.ma.dot ma.dot(a, b, strict=False, out=None)[source] Return the dot product of two arrays. This function is the equivalent of numpy.dot that takes masked values into account. Note that strict and out are in different positions than in the method version. In order to maintain compatibility with the corresponding method, it is recommended that the optional arguments be treated as keyword only. At some point that may be mandatory. Note Works only with 2-D arrays at the moment. Parameters a, bmasked_array_like Input arrays. strictbool, optional Whether masked data are propagated (True) or set to 0 (False) for the computation. Default is False. Propagating the mask means that if a masked value appears in a row or column, the whole row or column is considered masked. outmasked_array, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for dot(a,b). This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. New in version 1.10.2. See also numpy.dot Equivalent function for ndarrays. Examples >>> a = np.ma.array([[1, 2, 3], [4, 5, 6]], mask=[[1, 0, 0], [0, 0, 0]]) >>> b = np.ma.array([[1, 2], [3, 4], [5, 6]], mask=[[1, 0], [0, 0], [0, 0]]) >>> np.ma.dot(a, b) masked_array( data=[[21, 26], [45, 64]], mask=[[False, False], [False, False]], fill_value=999999) >>> np.ma.dot(a, b, strict=True) masked_array( data=[[--, --], [--, 64]], mask=[[ True, True], [ True, False]], fill_value=999999)
numpy.reference.generated.numpy.ma.dot